Cowriting an Album With AI

Why a neural net is like ‘a chain of guitar pedals’

Clive Thompson
OneZero


Robin Sloan and Jesse Solomon Clark of “The Cotton Modules”

These days, artificial intelligence is showing off its creative chops.

GPT-2 and GPT-3 generate text so well that people are using them to author text-adventure games and books of poetry. Visual artists are using image-generation AI to create neural-net paintings. You can create utterly photorealistic pictures of synthetic people that don’t exist.

But in the world of music, things have lagged a bit. Certainly, there are some cool tools out there — like Google’s Magenta music-creation AI, which you can use to autocomplete MIDI melodies or have a piano plink out a tune while you keysmash. The most ambitious is probably OpenAI’s “Jukebox”, which generates entire new songs in the style of well-known musical artists, including lyrics and (crude) singing.
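To give a flavor of what "autocompleting" a melody means, here's a toy sketch in Python. This is not Magenta's actual approach — Magenta uses neural networks — just a deliberately simple first-order Markov chain over MIDI pitch numbers, to illustrate the idea of continuing a melody from what came before:

```python
import random

def train_markov(melody):
    """Build a first-order Markov model: for each pitch,
    record which pitches have followed it in the melody."""
    transitions = {}
    for current, following in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(following)
    return transitions

def autocomplete(melody, transitions, n_notes, seed=0):
    """Extend a melody by sampling each next pitch from the
    transitions observed after the previous pitch."""
    rng = random.Random(seed)
    out = list(melody)
    for _ in range(n_notes):
        choices = transitions.get(out[-1])
        if not choices:
            break  # no observed continuation for this pitch
        out.append(rng.choice(choices))
    return out

# MIDI pitch numbers: 60 = middle C, 62 = D, 64 = E, 65 = F
training_melody = [60, 62, 64, 62, 60, 62, 64, 65]
model = train_markov(training_melody)
continued = autocomplete([60, 62], model, n_notes=4)
```

A real system like Magenta's MelodyRNN also models timing, velocity, and longer-range structure, which is exactly where this toy version falls flat — and part of why music generation is harder than it looks.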

But the truth is, the music tools are less polished and less complete than those for text or image generation. The tunes these tools produce are usually only fragments of songs; with Jukebox, the songs are complete but very lo-fi, and they lack clear verse-chorus structures. With music AI, you can't just set it and forget it, pushing a button and having a finished creation come out.

Why? It’s because music, I suspect, is so deeply multidimensional, more so even than text or images…

