— Can you please introduce us to your working method? How do you build your songs, and why did the preparation and learning process take you two years?
— I used OpenAI's Jukebox algorithm to improvise over loops in the style of various artists, like Cocteau Twins and The Weeknd, drawn from the roughly 4,000 artists it was trained on. I was going for a nostalgic, holiday theme, so I looked for loops and algorithmic models that supported it. The most striking elements Jukebox produced turned out to be the vocals: they sounded like lost ghosts. I liked their emotional quality even though they were sung by a computer.
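For the technically curious, Jukebox's sampling script has a primed mode that continues from an existing recording, which is roughly what improvising over a loop means in practice. Below is a minimal sketch using the flags documented in the openai/jukebox README; the loop and output names are placeholders, and artist/genre conditioning is configured in the sampler's metadata rather than shown here.

```python
# Sketch: prime Jukebox with a short loop and let it continue the audio.
# Flags follow the openai/jukebox README; file names are placeholders.
import subprocess

subprocess.run([
    "python", "jukebox/sample.py",
    "--model=5b_lyrics",                    # pretrained model to sample from
    "--name=holiday_takes",                 # output directory (placeholder)
    "--levels=3",                           # generate at all three VQ-VAE levels
    "--mode=primed",                        # continue from an audio prompt
    "--audio_file=loop.wav",                # the loop to improvise over (placeholder)
    "--prompt_length_in_seconds=12",        # how much of the loop to condition on
    "--sample_length_in_seconds=20",
    "--total_sample_length_in_seconds=180",
    "--sr=44100",
    "--n_samples=6",                        # render several takes to audition later
    "--hop_fraction=0.5,0.5,0.125",
], check=True)
```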
Jukebox has the capacity to surprise, which is a strange and exciting thing to say about a non-human system. It can also produce a lot of junk, so a big part of the process was listening to all the renders and picking out the best bits. I was also lucky enough to discover the source-separation algorithm Spleeter right before I started this project, which let me open up the mix and focus on the vocals. After I had all the computer-generated parts isolated and repaired, it was like making any other track in Ableton.
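The Spleeter step is nearly a one-liner in its Python API. The sketch below splits a Jukebox render into vocals and accompaniment using the pretrained two-stem model, following the API documented in the deezer/spleeter README; file names here are placeholders, and a five-stem model also exists if drums, bass, and piano need separating too.

```python
# Sketch: isolate the vocals from a Jukebox render with Spleeter.
# API as documented in the deezer/spleeter README; paths are placeholders.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")  # pretrained vocals/accompaniment model
# Writes output/render/vocals.wav and output/render/accompaniment.wav
separator.separate_to_file("render.wav", "output/")
```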
It took two years to figure out a good way to incorporate AI into my workflow. I started using MIDI AI transformations for a Grimes remix about a year ago, and then I tried waveform AI transformations with a lot of different algorithms. After that I spent a few months taking apart and rebuilding Magenta's DDSP algorithm, which helped me understand how neural nets think. I also had to brush up on my linear algebra!
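The interview doesn't name the tool behind those MIDI transformations, but Magenta's MusicVAE is a representative example of the genre, so treat the sketch below as an illustrative assumption rather than a record of the actual remix workflow. It morphs one melody into another by interpolating through the model's latent space; the checkpoint and MIDI paths are placeholders.

```python
# Sketch: a typical "MIDI AI transformation": latent-space interpolation
# between two melodies with Magenta's MusicVAE. Illustrative only; the
# interview doesn't name this tool, and all file paths are placeholders.
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

model = TrainedModel(
    configs.CONFIG_MAP["cat-mel_2bar_big"],      # pretrained 2-bar melody model
    batch_size=4,
    checkpoint_dir_or_path="mel_2bar_big.ckpt",  # placeholder checkpoint path
)

start = note_seq.midi_file_to_note_sequence("melody_a.mid")
end = note_seq.midi_file_to_note_sequence("melody_b.mid")

# Five melodies that morph smoothly from start to end through latent space.
for i, seq in enumerate(model.interpolate(start, end, num_steps=5, length=32)):
    note_seq.note_sequence_to_midi_file(seq, f"morph_{i}.mid")
```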
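As for what taking apart DDSP exposes: at its core is a differentiable harmonic synthesizer whose pitch, loudness, and timbre controls are exactly what the neural net learns to predict. The sketch below drives that synth with hand-made control curves in place of network outputs, following the tensor shapes used in the ddsp tutorial notebooks.

```python
# Sketch: DDSP's harmonic synth driven by hand-made controls instead of a
# neural net. Shapes follow the ddsp tutorial notebooks; in the real model,
# a network predicts these frame-wise curves from audio features.
import numpy as np
import ddsp

sample_rate = 16000
n_frames = 250
n_samples = 64000  # 4 seconds of audio at 16 kHz

f0_hz = 220.0 * np.ones([1, n_frames, 1], np.float32)      # constant pitch (A3)
amps = np.linspace(0.0, -4.0, n_frames, dtype=np.float32)  # slow fade-out
amps = amps[np.newaxis, :, np.newaxis]                     # [batch, frames, 1]
harmonics = np.ones([1, n_frames, 30], np.float32) / 30.0  # flat 30-harmonic timbre

# Named Additive in older ddsp releases; Harmonic in current ones.
synth = ddsp.synths.Harmonic(n_samples=n_samples, sample_rate=sample_rate)
audio = synth(amps, harmonics, f0_hz)  # [1, n_samples] tensor of audio
```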