How to Give A.I. a Pinch of Consciousness

A.I. researchers are turning to neuroscience to build smarter, more powerful neural networks

Chris Baraniuk
Published in OneZero
6 min read · Sep 11, 2020


Engineered Arts prosthetic expert Mike Humphrey checks on Fred, a recently completed Mesmer robot that was built at the company’s headquarters in Penryn on May 9, 2018, in Cornwall, England. Photo: Matt Cardy/Stringer/Getty Images

In 1998, an engineer in Sony’s computer science lab in Japan filmed a lost-looking robot moving tentatively around an enclosure. The robot was tasked with two objectives: avoid obstacles and find objects in the pen. It could do so because it learned the contours of the enclosure and the locations of the sought-after objects.

But whenever the robot encountered an obstacle it didn’t expect, something interesting happened: Its cognitive processes momentarily became chaotic. The robot was grappling with new, unexpected data that didn’t match its predictions about the enclosure. The researchers who set up the experiment argued that the robot’s “self-consciousness” arose in this moment of incoherence. Rather than carrying on as usual, it had to turn its attention inward, so to speak, and decide how to resolve the conflict.

This idea about self-consciousness — that it asserts itself in specific contexts, such as when we are confronted with information that forces us to reassess our environment and make an executive decision about what to do next — is an old one, dating back to the work of the German philosopher Martin Heidegger in the early 20th century. Now, A.I. researchers influenced by neuroscience are investigating whether neural networks can, and should, achieve the higher levels of cognition that occur in the human brain.

Far from the “stupid” robots of today, which don’t have any real understanding of where they are or what they experience, the hope is that a level of awareness analogous to consciousness in humans could make future A.I.s much more intelligent. They could learn by themselves, for example, how to select and focus on data in order to acquire new skills that they assimilate and go on to perform with ease. But giving machines the power to think like this also brings with it risks — and ethical uncertainties.

“I don’t design consciousness,” says Jun Tani, PhD, co-designer of the 1998 experiment and now a professor in the Cognitive Neurorobotics Research Unit at the Okinawa Institute of Technology. He tells OneZero that to describe what…