Tom Faber speaks with director Daito Manabe about the processes behind his music video for Squarepusher's 'Terminal Slam'.
A woman in a tan jacket and a mint green skirt crosses the world’s most famous intersection in Shibuya, downtown Tokyo. She takes a pair of unassuming glasses from her bag and puts them on. The world around her, already a chaos of commuters and neon, is suddenly amplified. Adverts distort and degrade, passersby glitch and dissolve into sludgy camouflage. The world is customised, hacked for her eyes only.
Daito Manabe, director of Squarepusher's Terminal Slam video, doesn’t think this depiction of the future is very far off. Two or three years, tops. He has a good track record predicting these things. The Tokyo-based artist founded the influential creative studio Rhizomatiks in 2006 and has been probing the uncharted territories between art and technology ever since.
He has the training for it. He began programming at around ten years old before getting into DJing as a teenager, so music and mathematics were both early interests. While studying at the Institute of Advanced Media Arts and Sciences in Japan he searched for ways to express himself artistically through programming. One of the first results, Electric Stimulus to Face, was a curious experiment in which he controlled the facial movements of friends using electrostimulation. The clips went viral on YouTube and Manabe’s inbox rapidly filled with offers of work and collaboration.
While the technology he uses has become more sophisticated, Manabe has continued to focus his work on the interface between the human body and machines. He has worked with some of the music world’s keenest futurists including Ryuichi Sakamoto, Björk, Nosaj Thing, FaltyDL and Squarepusher. He has also created campaigns for major brands and designed a live AR performance watched by millions at the closing ceremony of the 2016 Olympics in Rio de Janeiro.
Here he talks about bringing the sonic meltdown of Squarepusher into the visual realm and which future technologies might be closer than we realise.
You’ve said your starting interests are music and mathematics. Do you see many connections between the two?
Since reading the book “Music and Architecture” by composer Iannis Xenakis while studying maths at college, I have been interested in the relationship between music and maths. Mathematics represents a broad world, and music is just one aspect of it. I believe there is something beyond pure soundscape, texture, and twelve-tone tonality.
What first drew you to Squarepusher’s music?
We bonded over basslines. When we talked I realised that we both pay attention to the bass first when listening to music. For me that’s because my father is a professional bassist and so that’s what I always listened out for on TV or at concerts as a kid.
What were your first projects together?
I directed music videos for Squarepusher x Z-Machines, which was a project where a robot band played instruments in a way humans could never do. By the time I got involved, the robot project had already been completed, so I simply programmed the lighting and filmed the robot’s performance. I was deeply impressed by the precision of the robot and the music itself, which really seemed to use the technology to its fullest potential.
My next project with Tom was supporting a band called “Shobaleader One”. The visuals for the show were made by Zak Norman. In Tokyo I supported their live show as a software operator and film director. Since Tom didn’t use quantised rhythms and everything was live, it was very tricky: I had to remember the whole piece of music to hit each cue exactly right. It was difficult but a lot of fun.
Has there been a common theme to your work together?
I cannot be compared to him as he is a great artist who composes, programs, and plays bass by himself. My art also relates to the body and machines so we have that in common, but my projects are small compared to the magnificent world of his music. He is able to switch between human and machine, at times even being controlled by a machine.
How closely did you collaborate back and forth on the idea for the Terminal Slam video?
When Squarepusher was in Tokyo for Warp’s 30th anniversary we had our first brainstorming session at my studio. He was interested in the latest research on deep learning and what it might be able to do in the future, and we were both enthusiastic about the idea of transposing mechanical work into human hands and vice versa. Since then we communicated remotely. He was very committed to the visual part of the project and we exchanged opinions until the last day to make adjustments.
Why did you choose Shibuya as a location?
We had a few ad-saturated cities in mind, but I thought Shibuya was best. I grew up on those streets and have watched them change. When I was young, Shibuya was where you went for street culture, but now there is large-scale development everywhere. High-rise buildings are changing the landscape and tech companies are moving in. There are still nightclubs, live music venues, record shops, cinemas and theatres, so a very strange mixture of people gathers there, from both the underground and the mainstream. The advertisements are similarly varied, with an effect that feels both chaotic and mysterious.
Tell me about some of the tech we’re seeing in the video.
The boxes are visualisations of a machine detecting the positions of cars, humans and ads. Artificial intelligence is used for semantic segmentation (classifying the objects in an image), depth estimation (estimating depth from a 2D image) and object recognition. The video also uses diminished reality, which removes stimuli from the visual world rather than adding them, as with the human figures whose details are erased.
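The erasure step he describes can be sketched in miniature: a detector labels regions of a frame, and anything labelled as an ad is blanked out while people and shop signs are left alone. This is a toy illustration only, not Rhizomatiks' actual pipeline; the frame is a plain 2D grid of pixel values, the detections are hand-written stand-ins for a real model's output, and names like `erase_ads` are hypothetical.

```python
# Minimal "diminished reality" sketch: erase regions a (hypothetical)
# detector has labelled as ads, leaving everything else untouched.

def erase_ads(frame, detections, fill=0):
    """Replace every pixel inside an 'ad' bounding box with `fill`.

    `detections` is a list of (label, (x0, y0, x1, y1)) tuples,
    standing in for the output of a segmentation/recognition model.
    """
    out = [row[:] for row in frame]  # copy so the input frame is untouched
    for label, (x0, y0, x1, y1) in detections:
        if label != "ad":
            continue  # keep people, cars, shop signs, etc.
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = fill
    return out

frame = [[1] * 6 for _ in range(4)]        # a toy 6x4 "image" of pixel value 1
detections = [("ad", (1, 1, 4, 3)),        # an advert region: erased
              ("person", (0, 0, 2, 2))]    # a person: left intact
cleaned = erase_ads(frame, detections)
```

A real system would replace the erased region with plausible background (inpainting) rather than a flat fill, and the boxes in the video come from learned models rather than hand-written tuples, but the division of labour is the same: detection first, then selective removal.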
Do you think something has to have an element of human creation in order to be art? Could an algorithm be truly creative?
In this video, we divided the work between humans and AI: humans do what humans are good at, and so does AI. For example, it may be difficult for AI to judge the difference between an advertisement and a shop sign without prior information, so we do that manually. On the other hand, it’s much faster for AI to detect the silhouette of a person in a crowd. I think collaboration is the most practical way to create art. While the imagery here is AI-generated, the algorithms and data are selected by humans in the first place.
Over the years your work has often incorporated glitch-like elements. What is so interesting to you about the idea of a glitch?
I love that there are so many different kinds of glitch and you never know what’s going to come out. It might create something beautiful beyond our imagination, or it could be boring. It takes time to generate and is unpredictable, kind of like cooking. When I first heard “Terminal Slam” it made me feel nostalgic. I first got to know glitch art as a student, so I wanted to recreate the aesthetic of that time in the video with these old-school glitches.
Do you think we’re far from a pair of glasses like this really existing?
I think that within two years people will walk around the city with these kinds of glasses, deleting or replacing ads. It is still difficult now, but it will become possible when 5G arrives.
Why did you decide to focus on how we might be able to hack advertising in the future? Are you concerned about these possibilities?
We could have just advertised the Squarepusher release normally, filling the city with ads, but it would have felt like falling into the plan of the ad agencies. I’m interested in envisioning a way for current technologies to change these methods. What happens in the video is still impossible to do in real-time with an app, but I think it’ll be possible in two or three years.
What is some new technology which hasn’t yet entered consumer use that could be a game-changer?
Five years ago I would have said real-time semantic segmentation, object recognition, and pose estimation using deep learning, and those are all available now. You used to need a special camera for these things, but now a normal camera can do it. What’s next, I think, is real-time kinetic technology and 5G. We will be able to process video in real time on the server side, meaning we can apply augmented and mixed reality to objects in the far distance, not just close by. I’m sure it will only be a year or two before this is a reality.
Would you generally describe your attitude towards technological development in the future as optimistic or wary?
I am very wary about it. But I won’t turn that wariness into the concept of my work. I demonstrate the targeted technology through actual projects or prototyping. I hope people will come to their own conclusions based on those experiments.
How do you think music videos might change in the future?
Lyric videos, for example, once needed huge amounts of work; now they can be made instantly, and will soon be created by AI. Looking at audio-visual works, these were once made with great effort by hand: Oskar Fischinger’s “Optical Poem” (1938), from the analogue age, or animator John Whitney’s “Catalog” (1961). But today this kind of thing can be made automatically with iTunes Visualizer.
These examples show that once-innovative technologies are democratised until everyone can use them. The truly intriguing ideas for music videos come from the persistence and inspiration of creators, using whatever technology is available.
Are you surprised that you went from tinkering with tech to working with some of the biggest artists and brands around?
I am so lucky that I can keep creating with the same guerrilla attitude as ever and, fortunately, collaborators actually expect me to be experimental. But I sometimes feel I am too conservative when I look at the artists I have collaborated with, such as Björk, Arca, OK Go, Nosaj Thing, and of course Squarepusher. They are super challenging and experimental. I am just grateful that they still accept me as I am.