Is AI Art Real Art?
Millions of images generated with code have fueled the engine of a new economy. But modern art has been associated with a lot of shenanigans over the years — is AI art the dawn of a new era, or the art world’s latest prank?
Generative art uses techniques from machine learning and machine vision to generate images, and it has produced a lot of headlines about what is and isn’t real art.
To me, the answer is clear: AI-generated art is art. Of course, it is. It follows a recipe and has inputs and outputs. Developing the recipe is art, and using the recipe to bring something new into the world is also art. They’re both art. Why is this controversial?
I imagine it’s because, when most people think of art, they’re picturing the little reproductions of famous paintings you find on mugs: water lilies, sunflowers, the Mona Lisa. They’re remembering works they saw as schoolchildren, on slideshows and in textbooks. Most people know far less about “modern art,” and the term has become a synonym for abrasive, nonsensical, ugly, or otherwise unlovable.
But you can draw a line from today’s generative art straight back to the very beginning of modern art: Marcel Duchamp. You may know him as “the guy who put a urinal in a gallery and called it art.” That is correct. He forced new ways of looking at the objects that are all around us – shovels, math textbooks, bicycle wheels, chimney ventilators – which he termed “ready-mades.”
In the early part of the twentieth century, many of these prefabricated items were new enough to feel unfamiliar and strange. At the same time, they were quickly becoming so ubiquitous as to be all but invisible.
Duchamp’s ready-made pieces brought these objects into view in a new way, and led audiences to ask new questions: Is this even art? What am I looking at? What did the artist actually do here? What is “a work of art,” anyway? And shouldn’t we maybe restrict the definition a bit?
Artists have been probing these same questions for at least a century now. The consensus seems to be: we should not restrict the definition of art; we should expand it. Art doesn’t even need to have a physical form — it can be completely immaterial. What matters are the decisions the artist makes, how the public interacts with the work, and what it means to the community and culture from which it springs.
In AI-generated art, it might seem like the computer is making all the decisions. It may be that the artist doesn’t understand everything the algorithm is doing. (The developers who coded it probably wouldn’t say they understand everything that happens inside these processes.) But nothing at all will happen without input from someone. That’s where you’ll find the art – when a person decides how to weight their ML models, makes choices about training data, or makes any other decision about how to initiate the process.
Some object that the tools available to make this kind of work seem to “do all the work.” Their output is difficult to predict, and they seem to have minds of their own. Before an artist can make interesting decisions with such a tool, they’d have to spend years studying and gaining proficiency in a whole new technology. Critics, curators, and the art-going public in general would have to develop new ways of interacting with the work, and new criteria to look for in it. A new art form will form new art. When has it ever been any different?