How I Accidentally Created an Infinite Pixel Hellscape

A short apology and a longer explanation

Images courtesy of the author

Look, I may have accidentally summoned some demons from the pixel art underworld. I don’t know what is happening to these little creatures, but it sure doesn’t look good. They appear to be melting and turning into other objects. I’m really quite sorry about all this, but it happened and I’m here to tell you about it.

What is happening?

It all started with the poster hanging above the desk at which I’m writing these lines. It is one in a series by my friends at eBoy, a group of artists whose work you may recognize from a previous collaboration. The poster shows a colorful pixelated isometric view of a busy San Francisco.

You can get the San Francisco poster and more in the eBoy shop

There are other cities, real and fictional, in this Pixorama series too. You will find one for New York, Berlin, Tokyo, and many others. Importantly for this project, the raw images are all available in a big database.

Pixoramas of New York, Berlin, and Tokyo

For this next part, you need to know about generative adversarial networks, or GANs. If the name doesn’t sound familiar, maybe you’ve come across This Person Does Not Exist, a website that shows photos of people who really do not exist at all. Behind the website is a deep neural network that was trained with lots and lots of photos of faces and then asked to make up new ones. GANs have also been used to invent beetles, improve robotic simulation, and much more.

I trained a GAN too, one based on the eBoy database. First, I picked a bunch of images that were large enough and all of a similar style. I then cut them up into many small squares. This way, I could artificially create the tens of thousands of different examples necessary to make the training work.
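The cutting-up step can be sketched in a few lines. This is a minimal illustration, not the actual preprocessing script: the crop size and stride here are made-up values, and using an overlapping stride is one simple way to multiply the number of examples you get from each poster.

```python
import numpy as np

def tile_image(image: np.ndarray, size: int = 256, stride: int = 128) -> list:
    """Cut an H x W x 3 image into square crops of side `size`.

    A stride smaller than `size` makes the crops overlap, which
    multiplies the number of training examples extracted from
    each large source image.
    """
    h, w = image.shape[:2]
    crops = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            crops.append(image[y:y + size, x:x + size])
    return crops

# A stand-in for one large Pixorama image (the real scans are much bigger).
poster = np.zeros((1024, 2048, 3), dtype=np.uint8)
crops = tile_image(poster)
print(len(crops))  # 7 rows x 15 columns = 105 crops from one image
```

Run over a handful of posters, a scheme like this quickly yields the tens of thousands of squares needed for training.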

Examples of the images used for training

In the next step, all of the training examples were fed through the neural network, which over time became better and better at imitating their appearance. I used Nvidia’s StyleGAN2 implementation and ran it on a Google Cloud VM with eight V100 GPUs.

You can find the source code and steps to reproduce the training on GitHub.

Some snapshots of the progression during training

Once the network is trained up, it can be used to generate a virtually infinite number of new images. These have (more or less) the same style as the original ones, but they have never been seen before. By picking a few random vectors from the latent space and interpolating between them, I can create these infinitely looping animations.
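The looping trick can be sketched as follows. This is an illustrative outline, not the Colab code: it builds a closed path of latent vectors using plain linear interpolation (the keyframe count, frame count, and 512-dimensional latent size are assumptions), and each resulting vector would then be fed to the trained generator to render one frame.

```python
import numpy as np

def looping_latents(n_keyframes: int = 4, steps: int = 60,
                    dim: int = 512, seed: int = 0) -> np.ndarray:
    """Build a closed path through the GAN's latent space.

    Picks `n_keyframes` random latent vectors, interpolates `steps`
    frames between each consecutive pair, and wraps back to the first
    keyframe so the resulting animation loops seamlessly.
    """
    rng = np.random.default_rng(seed)
    keys = rng.standard_normal((n_keyframes, dim))
    frames = []
    for i in range(n_keyframes):
        a, b = keys[i], keys[(i + 1) % n_keyframes]  # wrap around at the end
        # endpoint=False avoids duplicating each keyframe at segment joins.
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            frames.append((1 - t) * a + t * b)
    return np.stack(frames)

path = looping_latents()
print(path.shape)  # (240, 512): 4 segments x 60 frames, one latent per frame
# Each row would be passed to the generator to produce one image of the loop.
```

Because the last segment interpolates back toward the first keyframe, frame 239 flows straight into frame 0 and the GIF repeats without a visible seam.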

This goes on forever

It’s remarkable to me how the neural network managed to learn properties like the isometric grid, the one-pixel-wide black outlines (even curved ones), the color scheme, and different kinds of shapes.

You can generate your own unique GIFs or videos using a Colab I made.

When I first played with this concept last summer, I shared a similar looping video on Twitter. It looked much like the others in this story, but something felt off about it. There was this small grayish blob with a black slit floating across the image. You can best see it creeping around the bottom right corner starting about 22 seconds in.

At the time, the animation had been generated with the initial release of StyleGAN, and the authors of the follow-up paper would have you believe that the blob was a “normalization artifact” that they “fixed in StyleGAN2”.

I’m pretty convinced, however, that it was, in fact, the evil eye.

Inevitable technology. Now Robotics & Machine Learning at (Google) X.
