The Games That Machines Play

Why are computer scientists obsessed with building AI that can play games (like chess and Go)?

South Korean professional Go player Lee Se-Dol (R) prepares for his match against Google’s artificial intelligence program, AlphaGo, during the Google DeepMind Challenge Match on March 10, 2016 in Seoul, South Korea. (Photo by Google via Getty Images)

“In life, unlike chess, the game continues after checkmate.” — Isaac Asimov

Historian and technologist David Nye has argued that “the meaning of a tool is inseparable from the stories that surround it.” In the context of artificial intelligence (AI), those stories have been dominated by the games that AI systems play.

It started with the Mechanical Turk, the chess-playing “machine” unveiled in the late 18th century. Although the so-called machine was a hoax, it set a precedent — you could even say, initiated an obsession — for computer scientists for centuries to come. According to Nathan Ensmenger, a computer science professor at Indiana University, many in the computing community believed that once a machine mastered chess — the “intellectual game par excellence,” according to Nobel Laureate Herbert Simon — “one would seem to have penetrated to the core of human intellectual endeavor.”

In the mid-1960s, Soviet mathematician Alexander Kronrod called chess “the drosophila of AI.” By that he meant that the game was to artificial intelligence research what the fruit fly had been to genetics research: a testbed for the field’s biggest ideas, at once accessible enough to experiment on easily and complex enough to learn from. Fruit flies are easy to maintain in a small lab, have a short reproductive cycle of one to two weeks (enabling researchers to study multiple generations in a matter of months), and carry counterparts of more than 60 percent of human disease-causing genes. As David Bilder, former president of the National Drosophila Board of Directors, points out, fruit fly research has led in one way or another to five Nobel Prizes over the past 85 years. Chess, computer scientists believed, could have a similar impact on AI. Ensmenger noted a few years ago that “it is a rare discussion of AI, whether historical, philosophical, or technical, that does not eventually come around to chess-playing computers.”

Nor were computer scientists the only people convinced that chess was AI’s alpha and omega. When IBM’s Deep Blue computer beat world champion Garry Kasparov on May 11, 1997, the media and public responded enthusiastically. The victory seemed to legitimize computing itself, demonstrating that machines could now emulate, and even beat, humans at a task that was mathematically and technically difficult, but that also involved as much art as science. Was Kubrick’s HAL 9000 just around the corner?

As the initial excitement settled down, critics began questioning what this accomplishment actually meant for machine intelligence. John McCarthy, the organizer of the world’s first AI conference at Dartmouth, wrote in a piece published in Science in 1997 that “Computer chess has developed as much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies.”

Others shared that critique. In 1990, in an article titled “Elephants Don’t Play Chess,” MIT professor Rodney Brooks argued that the field’s obsession with games was problematic: it anchored intelligence to systems of symbols rather than to the physical reality that supports and propels human intelligence. “Traditional AI has tried to demonstrate sophisticated reasoning in rather impoverished domains,” Brooks wrote. Programmers, he said, should aim for AI that performs tasks simpler than winning chess tournaments, like understanding language or manipulating objects in the physical world, but that operates “robustly in noisy complex domains” rather than in “the sea of symbols” that games provide. Programmers, however, did not heed his advice. Games conveniently offered a setting in which AI systems could compete against the top-ranked humans, and against each other, making progress easy to quantify.

Jeopardy would be their next touchstone. In 2011, IBM’s Watson, a natural language processing (NLP) and question-answering system built on a supercomputer, set out to beat Ken Jennings and Brad Rutter, the two best players in the history of the hit television game show. Research showed that to surpass human Jeopardy champions, a computer would have to be far more multifaceted than Deep Blue was in 1997. For example, there is no turn-taking in Jeopardy: a player has to decide, very quickly, whether to buzz in, which means estimating how confident it is that its answer will be right. Watson also needed to choose categories and clues, and to develop wagering strategies. It managed all of those tasks. When the game ended, the computer had won $77,147, Jennings $24,000, and Rutter $21,600. Jennings responded to his defeat with good humor. At the bottom of his Final Jeopardy response, he wrote, “I, for one, welcome our new computer overlords.”

In 2014, Google bought the UK start-up DeepMind, a company specializing in AI research and neural networks, and turned its attention to a new game board: Go. Two years later, its AlphaGo program beat Lee Sedol, one of the world’s top Go players, four games to one.

Recently, a research team at CMU built Pluribus, a poker-playing bot that beat top professionals at six-player no-limit Texas hold’em. Unlike chess and Go, where you know the exact positions of your pieces and your opponent’s at any given time (i.e., games of perfect information), poker is a game of imperfect information: your opponent holds hidden cards that influence their future play. An AI that plays poker well is a big step forward because most real-world interactions (consider negotiating with a counterparty) involve imperfect information.
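To make the distinction concrete, here is a minimal sketch (not from the book; the game representation and function names are invented for illustration) of how a player’s view of the state differs between a perfect-information game like chess and an imperfect-information game like poker:

```python
import random

# Toy illustration: in a perfect-information game, a strategy can condition
# on the full game state; in an imperfect-information game, it can only
# condition on the acting player's observation (their "information set").

def chess_view(state, player):
    # Chess: both players observe the entire board.
    return state

def poker_view(state, player):
    # Poker: a player sees the community cards and only their own hole cards.
    # The opponent's hidden cards are excluded, so many distinct true states
    # look identical from this player's point of view.
    return {
        "community": state["community"],
        "my_cards": state["hole_cards"][player],
    }

deck = list(range(52))  # cards as integers 0-51, suits/ranks omitted
random.shuffle(deck)
state = {
    "hole_cards": {"alice": deck[0:2], "bob": deck[2:4]},
    "community": deck[4:9],
}

alice_view = poker_view(state, "alice")
# alice_view carries no information about bob's hole cards, so any strategy
# for alice must work across every hand bob might plausibly hold.
```

This is why poker is harder for AI in kind, not just degree: a chess program can evaluate the one true position in front of it, while a poker program must reason over the whole set of hidden states consistent with what it has observed.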

So what comes next, now that AI can beat humans even at poker? What game would push machines to new levels of human-like intelligence? The history of computing has shown that what we conquer determines where we go next.

I recently spoke with James Barrat, a documentary filmmaker and author of Our Final Invention. At some point in the conversation, the subject of games arose, and I asked him which one he thought computer scientists and their AI systems might tackle now that even Go had been conquered. He sat back, considered it, and finally said something I’ve not been able to forget: “I don’t think there are any games left. The next game is reality.”

This post is based on a chapter from my book A Human’s Guide to Machine Intelligence.

From A Human’s Guide to Machine Intelligence by Kartik Hosanagar, published by Viking, a division of Penguin Random House, LLC. Copyright © 2021 by Kartik Hosanagar.



Kartik Hosanagar

Founder Wharton Prof. Author: A Human’s Guide to Machine Intelligence. Faculty lead Fmr cofounder @Yodle (acq Web).
