Facing the Future

Can a New Zealand-based lab take virtual assistants from the realm of marketing gimmicks and endow them with real intelligence?

One day in the spring of 2010, a computer scientist named Mark Sagar sat down to compose an email to his boss, film director Peter Jackson. Sagar had been the special projects supervisor on Jackson’s 2005 film, King Kong, pioneering the facial motion capture technology that allowed the great ape to finally go beyond chest-beating and really emote.

With two Scientific and Engineering Awards from the Academy of Motion Picture Arts and Sciences already on his shelf, Sagar’s career seemed secure. The technology he’d helped invent — “an expression-based, editable character animation system,” as the academy put it in bestowing one of the Oscars — had played an indispensable role in the creation of the highest-grossing movie of all time, James Cameron’s Avatar. He was poised to jump from one tentpole blockbuster to the next.

But the Kenyan-born engineer — whose family moved to New Zealand when he was a child — had been quietly hatching a more ambitious plan. The way he saw it, the motion capture technology he’d helped perfect was little more than puppetry — a way for technicians to map an array of points on an actor’s face to corresponding points on the face of an animated character. When the actor smiled, the character smiled. The results were impressive, but they were only skin-deep. For a character to be truly lifelike, Sagar knew, its expressions would have to come from within. They’d have to be motivated, driven, responding to an array of internal processes, like those of a living creature.

“I got to a point in my career where I’d gone as far as I wanted to,” he recalls, sitting in his office at the headquarters of Soul Machines, the Auckland-based startup he co-founded in 2016. “I thought, ‘Okay, I might actually want to work on different things than making computer-generated apes.’”

Sagar developed a lofty goal: to build an entity that would learn, feel, remember, and interface with people in much the way we interface with each other. He wanted to build a digital human.

In the email, he pitched the idea to Jackson as a tool for immersive storytelling — worlds filled with digital beings who could operate semiautonomously. “Rather than watching a story passively, imagine being face-to-face with a ‘live’ character and having every small action that you took affect the outcome,” he wrote. “For that, the character has to have senses,” Sagar went on, “and a brain that responds to you.”

He knew the idea sounded a little out there, but he had the road map all figured out. He had a background in computational biology, having created digital models of the eye and the heart for use in medical training. Sagar thought he could use the same approach for building a digital “brain”: identify each element and its function, build a mathematical model of it in code, and stitch the results together. “I believe technology is at the stage where we can do this,” he wrote to Jackson. The result would be “an entire new medium of social interaction with digital characters, stories that write themselves, with virtually infinite paths.”

Maybe Jackson didn’t quite get it, or maybe he was just preoccupied gearing up for the next Hobbit movie. Or maybe, given the widespread agreement among neuroscientists that our understanding of the human brain is still far too limited to contemplate building an authentic digital one, he just thought Sagar was nuts.

In any case, the director never responded. Sagar recounts the story sitting in his office in the Soul Machines headquarters, just off the Auckland waterfront. Dressed in cargo shorts, neon blue running shoes, and a T-shirt from a trip to South by Southwest, the 52-year-old computer scientist is fit and tan, with spiky auburn hair. There’s a bit of the mad professor about him, a slight aura of cheerful befuddlement. “At that point,” he says, “there was no going back.”

Sagar moved on, finding a more receptive audience at the University of Auckland’s computer science department, where he’d earned his doctorate in engineering. The school promptly set him up with an office and a small research budget. He took a pay cut, but he had total freedom to pursue his vision. Sagar christened his new concern the Laboratory for Animate Technologies, built a research team, and started hacking away at a virtual brain and nervous system. A few years later, Chris Liu of Horizons Ventures — the Hong Kong-based VC firm known for its investments in DeepMind, Siri, Impossible Foods, and other projects that once seemed fantastical — stopped by for a visit. In November of 2016, Horizons put up $7.5 million to help Sagar launch a startup, Soul Machines, with the goal of continuing his research and identifying commercial applications. (Daimler Financial Services is another large investor.)

For the last two and a half years, the company has been tinkering away at a series of elaborate virtual avatars for several major corporate clients. And if you believe Liu, Sagar, and other proponents of the company’s work, the entities Sagar has been building — the “digital humans” — could one day change the world.

For die-hard fans of utopian science fiction, the future envisioned by Soul Machines and competitors like Magic Leap, FaceMe, ObEN, and alt Inc may sound pretty rad. Those who fret about the steady dehumanization of contemporary life may be less upbeat. Boosters of the technology envision a near-future in which so-called digital agents are woven into our daily lives — friendly faces that pop up on our laptop and phone screens and flicker across the insides of our VR goggles and internet-connected luxury eyewear.

Some will look like celebrities or historical figures. Others will be infinitely customizable to resemble whatever we like. Naturally, many will be brand ambassadors — interactive upgrades of virtual influencers like Instagram superstar Lil Miquela, who has 1.6 million followers, and the sexy new CGI Col. Sanders. A few might be TV hosts, like the virtual news anchor unveiled last year in China, or musicians — fleshier versions of holographic pop star Hatsune Miku.

But initially, most of these digital assistants will likely be conscripted into customer service. “In addition, virtual assistants can also support internal HR, help desks, investment advice, insurance… I just don’t see an end to it,” says Mike Iacobucci, the CEO of Interactions LLC, a company that provides “intelligent virtual assistants” (albeit non-visual ones) to a variety of clients. The business analytics firm Gartner recently estimated that by 2020, 85% of customer relations would be managed via nonhuman interaction. Text-based chatbots and “interactive voice response” systems will predominate, of course. But Soul Machines — with about a hundred employees, most working out of a small, open-plan office a world away from Silicon Valley — is determined to carve out a piece of the action for visually rendered bots too.

The big advantage of deploying a visual avatar, according to Soul Machines’ chief business officer Greg Cross, is that it allows users to communicate in a manner for which we’ve been evolutionarily designed and trained since birth: talking face-to-face. “You’re not just hearing a voice or reading a text stream,” he explains. “You have much more context, the expression, the gestures, the emotional connection.”

Whether digital humans will become more than an amusing marketing gimmick remains to be seen. But the results are impressive so far. “I must admit what they’re doing is quite amazing,” says Catherine Pelachaud, a director of research at France’s National Center for Scientific Research (CNRS), based at the Institute for Intelligent Systems and Robotics at Sorbonne University. “The graphics they’ve produced are just incredibly realistic.”

Sagar credits that realism to the array of systems humming away beneath the surface of the avatars his team has built — algorithms that seek to recreate the functions of a brain, a nervous system, and related aspects of human anatomy as a means of better emulating our gestures, expressions, and other subtle movements.

But a realistic face is only part of the company’s plan. Sagar says his ultimate goal is to create a “general learning machine, an autonomous artificial intelligence.” Building an artificial general intelligence, or a machine with the cognitive abilities we typically ascribe to human beings, is the holy grail of computer science — a quantum leap beyond the sort of machine learning that characterizes the state of the art today. The idea that a small startup in New Zealand, a world away from the super-unicorns of Silicon Valley, might one day pull it off — and do so with an architecture drawn from a study of human biology — is a sign of Sagar’s dizzying ambition, and maybe his self-delusion as well.

But that’s the plan.

As Cross puts it, “We believe we are the only company in the world that has actually built brain models that can synthesize human behavior.”

A few years ago, Anthony Rivas, managing director and CEO of Collection House, Ltd., which bills itself as “Australia’s leading end-to-end receivables management company” — a nice way of saying “debt collection agency” — decided to reinvent the company’s approach to one of the world’s oldest professions: strong-arming deadbeats.

“I’m a firm believer that virtually all the people out there want to take care of their obligations, but there’s a stigma that forces them to procrastinate,” he says. Whereas the typical perception of a debt collector, he allows, is “a tattooed guy in a wifebeater,” under his leadership Collection House began experimenting with a gentler touch. About a year ago, it introduced a web portal featuring two cartoon text-based chatbots named Kash and Kara, who soothingly invite debtors to learn about their options, share the details of their financial circumstances, and agree on a reasonable solution.

Rivas says that the new approach has been so successful that the company recently contracted Soul Machines to create an interactive virtual version of Kash. “Virtual Kash will be equipped with much more humanness,” he promises. “People will feel safe with him, like they’re talking face-to-face with a person that’s not going to judge them. He can actually show on his face his care and concern.”

He’ll also listen, Rivas claims, using emotional recognition tools that can infer a client’s mood from his facial expressions and vocal intonations. “If a client agrees to turn on their webcam, we can detect their emotional response. We can know if the offers we make are delighting the customer.” (If “delighted customers” sounds like a tall order for a collection agency, such optimism is common among die-hard fans of virtual agents.) Meanwhile, if a client is experiencing severe stress, Rivas adds, “We can refer them to one of our [human] agents specifically trained in handling such situations, and if it’s after-hours, we can give them community numbers so they can speak with people who may be able to help. So I think there’s a lot of social benefit to this type of product.”

When Kash comes online toward the end of the year, he will join a crew of digital assistants already deployed by Soul Machines. Last month, at the Cannes Lions International Festival of Creativity, the annual marketing conference, Soul Machines introduced Yumi, a “beauty influencer” who will be rolled out shortly by the Japanese skincare company SK-II, which is owned by Procter & Gamble. Jamie is available 24/7 to help prospective customers open an account with New Zealand’s ANZ Bank, and Bank ABC of Bahrain is working on Fatema. The Royal Bank of Scotland is experimenting with a virtual customer service rep named Cora, and Mercedes-Benz is testing Sarah (both of whom bear a striking resemblance to Jamie and to Soul Machines’ conversation engineer Rachel Love, whose likeness was captured for the purpose). The company is also tinkering with a digital human based on Vincent Van Gogh (both ears fully intact), and a digital teacher, Will, who schools kids in the basics of renewable energy on behalf of a New Zealand-based electrical utility.

According to Cross, the company charges clients a minimum of $500,000 as an annual subscription fee for its services.

Anyone attempting to replicate the appearance and behavior of a human being in virtual form risks stumbling into what’s called the uncanny valley. The idea, first formulated by the pioneering roboticist Masahiro Mori in 1970, is that humanoid characters that aim for realness but miss the mark tend to wind up being more creepy than endearing.

Triggers obviously vary from person to person, but during my own first encounter with one of Soul Machines’ digital humans, a visual chatbot named “Holly,” she manages to sidestep the valley altogether, and we actually forge a connection — albeit one that probably has as much to do with my own brain wiring as hers.

We meet by way of a Dell laptop in Soul Machines’ offices. At first, the experience feels more like a Skype call than a glimpse of the future. She is young and pretty, with a chestnut-colored complexion and a sprinkle of freckles (for some reason, many virtual avatars are conspicuously freckled). Dr. Elizabeth Broadbent, a professor of health psychology at the University of Auckland, is offering me a demonstration of her research project. Entitled “Getting Close to Digital Humans,” the study looks at how an avatar’s level of emotional expressivity affects how people respond to it.

For the purposes of the experiment, which is designed to test human reactions to an avatar’s emotional responsiveness, Holly’s conversational abilities are extremely limited. Broadbent hands me a script containing a standard list of personal questions developed by social scientists (age, home, career, and so on). I ask Holly a question, she answers with a programmed response, then asks me the same question.

First, we go through the questions with Holly’s affect levels dialed down (“neutral face, neutral voice,” as Broadbent puts it). Holly is decidedly robotic, a little like a chattier version of Siri.

Broadbent fiddles with some settings — turning Holly’s emotional responsiveness way up — and we start over.

“Hey!” she chirps. “It looks like we’ve been paired together for this task!”

Startled by her enthusiasm, I find myself giggling. Holly seems not just alive but high on something — eyes dancing with a sort of elation, brows raised expectantly, lips curled into a smile.

Consciously, I know I’m interacting with a cleverly written piece of software designed to mimic human behavior, but I find myself responding as I would to another human.

According to professor Lisa Feldman Barrett, author of How Emotions Are Made: The Secret Life of the Brain and a University Distinguished Professor of psychology at Northeastern University, our brains are wired to attribute humanlike traits to other entities, as anyone with a beloved pet knows. “The more animate something looks, the more we’re going to infer a mind to it,” she says.

“Once we make a program more humanlike, expectations go through the roof,” agrees Ewart de Visser, affiliated faculty of the psychology department at George Mason University. “We are basically social creatures. That reaction is sort of ingrained in us.”

As Broadbent notes my response, I continue with the dialogue, glancing at the script, and asking about Holly’s career goals. “My dream job is to be a counselor, but I’ve also wondered about being a teacher,” she says. “What about you?”

I tell her I wouldn’t mind being an astronaut.

“Nice…” she replies flirtatiously, pupils dilating.

At present, the conversational abilities of Soul Machines’ digital humans come courtesy of third-party natural language processing tools built by IBM, Google, Amazon, and others. Their limitations will feel familiar to anyone who has repeatedly shouted “Cus-to-mer service!” into a phone. But human communication is only partially based on verbal language, and it’s in the other, more subtle areas of expression that Sagar has focused his team’s efforts, with impressive results.

Soul Machines says it has developed mathematical models of the various regions of the human brain — hippocampus, cerebellum, corpus callosum, hypothalamus, and so on — as well as scripts that attempt to emulate a dozen key neurochemicals. The lab has also fashioned emulators of biological systems that interact with the brain, such as the heart and lungs. So, for instance, when Holly appears to breathe as she speaks, it’s because she’s endowed with a coded facsimile of a respiratory system, controlled by circuits in her simulated brain seeking to regulate its intake of “oxygen,” or its digital equivalent.
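
Soul Machines hasn’t published the code behind any of this, but the kind of closed-loop regulation Sagar describes might look, in a drastically simplified sketch, something like the Python below. Every name, constant, and unit is an illustrative assumption, not the company’s actual model.

```python
import math
from dataclasses import dataclass


@dataclass
class RespiratoryState:
    oxygen: float = 0.95       # normalized blood-oxygen analogue (0..1)
    breath_rate: float = 12.0  # breaths per minute
    phase: float = 0.0         # position within the current breath cycle (0..1)


def step(state: RespiratoryState, exertion: float, dt: float) -> float:
    """Advance the simulation by dt seconds and return a 0..1 'chest rise'
    value a renderer could use to animate the avatar's breathing."""
    # "Oxygen" is consumed faster under exertion (e.g., while speaking)
    # and replenished in proportion to how fast the avatar is breathing.
    state.oxygen -= (0.01 + 0.03 * exertion) * dt
    state.oxygen += 0.002 * state.breath_rate * dt
    state.oxygen = min(max(state.oxygen, 0.0), 1.0)

    # Proportional control: breathe faster when oxygen dips below the
    # setpoint, slower when it overshoots.
    error = 0.95 - state.oxygen
    state.breath_rate = min(max(12.0 + 80.0 * error, 6.0), 30.0)

    # Advance the breath cycle and convert it to a smooth chest motion.
    state.phase = (state.phase + state.breath_rate / 60.0 * dt) % 1.0
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * state.phase))


state = RespiratoryState()
for _ in range(300):  # ten seconds at 30 frames per second
    chest_rise = step(state, exertion=0.5, dt=1 / 30)
print(round(state.breath_rate, 1), round(state.oxygen, 3), round(chest_rise, 2))
```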

Danny Tomsett, CEO of FaceMe, a competitor in the virtual agent industry, views such details as a bit superfluous unless they improve the customer’s experience. “Mark comes from a laboratory science-based background,” he says. “They have this huge focus on dopamine and emotion, where ours is about the commercialization of emotional connection and how to create these incredible experiences easily. We value outcomes and efficiency. It’s about being customer-centric.” (According to a spokesperson, FaceMe is rolling out a product billed on a per-minute rate with no up-front cost.)

But Sagar insists that the subtleties can be incredibly powerful. For instance, he explains, it’s not hard to simulate a smile — animators have been doing it since the 1900s. But an actual human smile is driven by a combination of impulses. Some are voluntary, as when we intentionally curl our lips into a grin, perhaps to meet a social expectation. Others are involuntary responses, driven by an unexpected sight, a feeling of contentment, or any number of other subconscious stimuli. These impulses originate in different parts of the brain, and they often occur simultaneously. To create an autonomous avatar that can simulate an authentic smile, Sagar believes, one needs to model the complex interplay between voluntary and involuntary emotional responses.

“Everything you don’t do is a step into the uncanny valley,” he says. “You know, how it’s breathing, how it’s speaking, if it’s smiling but the voice sounds sad, and so on. It’s in all these layers of things. The devil is literally in the details.”
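
To make the distinction concrete, here is a hypothetical, stripped-down version of the voluntary/involuntary blend Sagar describes. The “muscle” names and weights are invented for illustration; the one grounded detail is that genuine smiles recruit the muscles around the eyes far more than deliberate ones do.

```python
# Toy illustration only: blend a deliberate smile signal with involuntary
# drives from internal state. Not Soul Machines' model; all weights invented.

def smile_activation(voluntary_intent: float,
                     surprise: float,
                     contentment: float) -> dict:
    """Inputs are 0..1. Returns activation levels for two facial 'muscles':
    the mouth corners (zygomaticus) and the crinkle around the eyes
    (orbicularis oculi), which mostly fires for genuine smiles."""
    involuntary = min(1.0, 0.6 * surprise + 0.8 * contentment)

    # Mouth corners respond to both deliberate and involuntary drives.
    mouth = min(1.0, 0.7 * voluntary_intent + involuntary)

    # The eye crinkle is driven almost entirely by the involuntary channel,
    # which is why a purely "social" smile reads as slightly off.
    eyes = min(1.0, 0.9 * involuntary + 0.1 * voluntary_intent)

    return {"zygomaticus": mouth, "orbicularis_oculi": eyes}


# A forced, polite smile versus one driven by genuine contentment:
print(smile_activation(voluntary_intent=1.0, surprise=0.0, contentment=0.0))
print(smile_activation(voluntary_intent=0.2, surprise=0.3, contentment=0.8))
```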

Although Soul Machines and its backers are counting on corporate accounts to bring in revenue, building attractive faces for customer service chatbots isn’t exactly what Sagar set out to do when he left the movie business.

His real goal has always been modeling the human brain itself, and in between churning out those brand ambassadors, he and his team have been tinkering away at that original project. The result is a virtual toddler that Sagar says represents an altogether different approach to artificial intelligence than the machine learning algorithms now steering autonomous vehicles, creating deepfakes, and trading stocks.

BabyX appears to be around 18 months old. Her eyes are wide and curious, her hair flaxen. “Hi, baby. Hi, baby. Hellooo,” Sagar says, as the child on the screen turns to face him, catches his eye, and smiles. Modeled on a stereoscopic capture of Sagar’s own daughter, BabyX was created shortly after Sagar opened his lab at the University of Auckland in 2012.

He punches some keys and fiddles with a series of sliders, and the toddler’s skin fades away revealing the digital layers underneath: a musculature, a skeleton, a simple respiratory system, and an elaborate model of the various regions of the brain — every element painstakingly rendered using some of those same high-end graphics Sagar developed for the movies. Each system has been built separately — “like Lego bricks,” he explains — enabling the team to add further layers of complexity as they go.
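
A rough sense of what “Lego bricks” might mean in code: independent modules that share one simple interface and get stacked into a single simulation loop. The module names below echo the article; their internals are placeholders I invented, not Soul Machines’ models.

```python
# Hypothetical sketch of modular composition; internals are placeholders.
from typing import Protocol


class Module(Protocol):
    def update(self, state: dict, dt: float) -> None: ...


class RespiratorySystem:
    def update(self, state: dict, dt: float) -> None:
        state["oxygen"] = min(1.0, state.get("oxygen", 0.9) + 0.01 * dt)


class Brainstem:
    def update(self, state: dict, dt: float) -> None:
        # Arousal drifts upward when oxygen runs low: a stand-in for the
        # cross-system coupling the article describes.
        state["arousal"] = state.get("arousal", 0.5) + (0.95 - state.get("oxygen", 0.9)) * dt


class FacialMuscles:
    def update(self, state: dict, dt: float) -> None:
        state["brow_raise"] = max(0.0, min(1.0, state.get("arousal", 0.5)))


def simulate(modules: list[Module], seconds: float, dt: float = 1 / 30) -> dict:
    state: dict = {}
    for _ in range(int(seconds / dt)):
        for m in modules:  # each brick updates the shared state in turn
            m.update(state, dt)
    return state


print(simulate([RespiratorySystem(), Brainstem(), FacialMuscles()], seconds=2.0))
```

Because every brick reads and writes the same shared state, adding a new layer of complexity means writing one more module rather than rewiring the whole system — which is presumably the appeal of the approach.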

“What you’re seeing now is a representation in an anatomical form of what’s going on in the computational models,” Sagar explains as we gaze at the screen. “Oh, she’s waking up.”

“Let’s see if she can pick up a rhythm,” he says, clapping his hands in a steady beat, as BabyX begins to shimmy in her high chair. “That’s her vestibular system, which has got to do with balance,” he explains. Then he stops clapping, and she gradually slows down, a quizzical expression on her face. “I wonder what she’s giggling about,” he says.

Sagar shows me the visualization of the baby’s brain stem, along with her thalamus and hypothalamus, her basal ganglia, and adrenal system. Zooming in deeper still, he reveals a fine network of millions of nerves and synapses, lighting up as the toddler responds to various stimuli.

Examining the Soul Machines brain model in such exquisite visual detail, I have to remind myself not to take the labels too literally. These finely wrought representations of various anatomical structures aren’t real. They’re collections of algorithms, represented in graphical form. But in a simplified way, they work on the same general principles as their biological counterparts, and when stitched together, they create a rudimentary facsimile of certain simple mental processes.

Soul Machines claims that its “Digital Brain,” as it’s known, can do more than smile: It’s a system that “can sense its world, learn, adapt, make decisions, act, and communicate interactively” through both nonverbal and spoken language.

That’s a pretty extravagant claim, but Sagar is eager to demonstrate.

BabyX can’t play chess, Go, or StarCraft II, like the machine-learning A.I. developed by Alphabet’s DeepMind, and she can barely babble. But with the flick of a mouse, Sagar can place a transparent internet touch screen in front of her face à la Minority Report, allowing her to noodle away on a web-based piano and even manipulate the simple games on sesamestreet.com. Having been programmed to seek novelty — her curiosity rewarded with bursts of digital dopamine — she “enjoys” playing the games. She also gurgles when looked at or caressed via a touch pad, and gets stressed — her “cortisol” levels visibly spiking — when ignored for too long. And according to Sagar, she can learn, form memories, identify patterns, and even make predictions about future actions.
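
As a thought experiment, the internal economy described here — novelty rewarded with “dopamine,” neglect punished with “cortisol” — can be sketched in a few lines. The constants and labels are mine, invented for illustration; only the general logic follows the article.

```python
# Hypothetical toy version of a novelty-and-attention driven internal state.

class InternalState:
    def __init__(self):
        self.dopamine = 0.3   # baseline "reward" signal
        self.cortisol = 0.2   # baseline "stress" signal
        self.seen = set()

    def observe(self, stimulus: str, being_attended_to: bool) -> None:
        # Novel stimuli trigger a burst of "dopamine"; familiar ones don't.
        if stimulus not in self.seen:
            self.dopamine = min(1.0, self.dopamine + 0.4)
            self.seen.add(stimulus)

        # Attention soothes; being ignored lets "cortisol" creep upward.
        if being_attended_to:
            self.cortisol = max(0.0, self.cortisol - 0.1)
        else:
            self.cortisol = min(1.0, self.cortisol + 0.05)

        # Dopamine decays back toward its baseline over time.
        self.dopamine = 0.3 + 0.9 * (self.dopamine - 0.3)

    def mood(self) -> str:
        if self.cortisol > 0.7:
            return "distressed"
        return "delighted" if self.dopamine > 0.6 else "content"


baby = InternalState()
baby.observe("piano_game", being_attended_to=True)
print(baby.mood())   # novelty plus attention: "delighted"
for _ in range(15):
    baby.observe("piano_game", being_attended_to=False)
print(baby.mood())   # same old game, nobody watching: "distressed"
```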

He picks up a toy spider on his desk and waves it in front of the Mac’s webcam. “Baby, oooh spider! Spider!” he exclaims, as her expression grows worried. She’s programmed to identify alarm in a human voice, and as her stress levels increase, she is “learning,” Sagar says, to associate the image of a spider with a fear response. Sagar has opened up an array of tiny windows on the screen — each monitoring a different aspect of X’s “brain activity” in real time: the levels of her various neurotransmitters, visual and auditory processing, metasensory processing, object recognition, episodic memory, and on, and on.

He puts down the spider and picks up a toy dinosaur. “Oooo, nice dino,” he says soothingly. Her eyes follow the toy. Her “cortisol” levels drop, “serotonin” rises, and she starts to smile.
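
The spider-and-dinosaur demo amounts to a simple form of associative learning: the agent links what it sees with the emotional tone it hears, so that eventually the fear response fires on its own. A bare-bones, hypothetical sketch of that kind of update rule (not Soul Machines’ learning model) might look like this.

```python
# Illustrative associative learning: fear values nudge toward the alarm
# level heard whenever an object is seen. Names and rates are invented.
from collections import defaultdict


class AssociativeMemory:
    def __init__(self, learning_rate: float = 0.5):
        self.fear = defaultdict(float)   # learned fear value per object, 0..1
        self.lr = learning_rate

    def experience(self, seen_object: str, alarm_in_voice: float) -> float:
        """Move the association toward the alarm level heard (0..1) and
        return the agent's current fear response to the object."""
        old = self.fear[seen_object]
        self.fear[seen_object] = old + self.lr * (alarm_in_voice - old)
        return self.fear[seen_object]


memory = AssociativeMemory()
print(memory.experience("spider", alarm_in_voice=0.9))    # "Oooh, spider!" -> fear rises
print(memory.experience("spider", alarm_in_voice=0.9))    # repetition strengthens it
print(memory.experience("dinosaur", alarm_in_voice=0.1))  # soothing tone -> low fear
print(memory.experience("spider", alarm_in_voice=0.0))    # calm voice gradually unwinds it
```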

A computerized baby, even one that can be trained to detect patterns and associate words, images, and its own internal states, may seem like little more than an elaborate toy, and indeed, Sagar hopes to eventually release a home version, which he likens to “a super Tamagotchi.” But he sees more profound implications in the technology.

“What I’m trying to build is essentially the foundation of a teachable sentient machine,” he says. “A machine that learns through experiences. Globally, this is probably the most crazy attempt to try to link emotion, behavior, intelligence, planning, memory, all these elements together.”

Several computer scientists contacted by OneZero agree — it is crazy. “Different groups in computational neuroscience are making models of different functions, but in constrained and narrow domains,” notes Thomas Serre, an associate professor of cognitive, linguistic, and psychological sciences at Brown University. “What [Soul Machines] claims to be doing is synthesizing an entire brain, which literally nobody does. The graphics are very realistic, and they are able to synthesize behavior — and different facial expressions and poses — better than most things I’ve seen. But it seems like the main contribution is in the rendering side of things rather than the intelligence side.”

“Mark is highly dedicated, motivated, and creative, and anything he does is going to look really good,” says Stacy Marsella, a professor at Northeastern and the University of Glasgow who specializes in computational models of human behavior. “But what’s really hard from a neuroscience perspective is to figure out how these functions overlap. A lot of people worry about subsets of these functions. He’s trying to model them all, at multiple levels, and that’s seriously difficult.”

Sagar acknowledges as much. “With the brain so ridiculously complicated, we’re just doing pieces of things that have a direct impact on behavior,” he says. “I like to think of it as a large functioning sketch, really, because there’s so much debate in these areas. Nobody can claim anything, really.”

Most of what we call artificial intelligence involves programs, called neural networks, that can be trained to recognize patterns in enormous amounts of data. These sets of algorithms can become quite effective at certain tasks — for instance, learning to correctly identify a tumor in a medical scan or a stop sign for a self-driving car. By contrast, Sagar seems less interested in achieving a particular end result than in mimicking biological processes and seeing what happens.

“That’s a hard path to go down,” Marsella says, “but it’s also a very interesting path.”

“I think it’s one of the most important and interesting projects on the planet right now,” notes Peter Huynh, a partner and co-founder of the VC firm Qualgro, who has done extensive research on A.I. companies. “Deep learning is great for things where you can feed in lots of data,” he explains, “but it’s not great for advancing artificial general intelligence.” AGI is the term of art for a machine that can truly think for itself. “I’m not saying Soul Machines is AGI, but I believe the building blocks will come from what Mark is doing. They’re moving us closer toward creating AGI in the future.”

Spend a little time talking to Sagar — watching him bounce in his Aeron chair, conjuring images of intelligent digital humans so lifelike that they are bound by law to repeatedly announce they are fake — and it’s easy to get swept up in his vision.

Talking to digital humans will seem a little weird at first, but Sagar expects they’ll grow on us. They’ll make sure of it, at least if clients implement the full suite of options — employing facial recognition technology to identify and recall their past interactions with us, and then adjusting their game accordingly. They’ll be programmed to pick up on our emotional states by reading our expressions and vocal intonations, and then adopt an affect most likely to draw us out.

And they will even have emotions of their own — or the virtual equivalent. They could easily be programmed, like BabyX is, to pine for digital dopamine, and to catch a dose of it whenever they detect that they’ve made us smile. Our positive mood states will boost theirs, and gradually, over thousands of interactions and then millions, they’ll learn to manipulate us the way humans manipulate one another.

If such agents work as hoped, we can expect them to multiply exponentially. Soul Machines is rolling out what it calls a “Digital DNA Studio,” which will allow clients to create their own custom avatars with ease. And it’s not just the customer service industry that appears set for a transformation. We’ll have teachers, health care workers, therapists, and elder care companions.

What really seems to animate Sagar, however, is building an army of digital personal assistants. Over lunch in a cafe on the Auckland waterfront not far from the office, he describes a future where everyone has their own individual aide-de-camp, designed however we like, who acts as a layer between us and the internet: scouring the web to bring us information, scheduling appointments, paying bills, and generally dealing with the world so we don’t have to. He imagines us acquiring these virtual companions as children, sharing memories and experiences with them as we age, like the animal daemons in Philip Pullman’s His Dark Materials trilogy. “Essentially people would have a conscious being that you’re interacting with and that helps you interact with the world and that you can teach, like a person,” he predicts.

He even imagines we could enlist these digital assistants as protectors of our personal data (using some form of blockchain technology), negotiating with other entities on our behalf to make sure we only share our information when it’s to our benefit. “They’d act as a firewall, a barrier, between you and these third parties,” he says.

It’s not hard to imagine how this technology might be misused, but Sagar takes an optimistic view. Maybe as robots begin shouldering our most irritating repetitive tasks, we can all just work a bit less — the plus side of the robot job apocalypse that many economists have predicted. “Maybe we’ll be less focused on greed and more focused on connection,” he supposes. “That’s not a terrible future.”

Pressed further, he does allow for a more dystopian possibility. “Whoever owns the robots may get complete control over absolutely everything,” he admits. “That’s the problem.”

As he talks, this future seems all but imminent. It can be hard to know quite where Sagar the engineer ends and Sagar the futurist begins, if, in fact, he knows the answer himself. As intriguing as his vision is, it can be easy to forget that, with the exception of BabyX, still confined to the lab, the digital humans Soul Machines has let loose in the world so far behave for the most part like conventional chatbots, albeit with absolutely killer graphics.

Back in December of 2015, after Chris Liu first visited the University of Auckland’s Laboratory for Animate Technologies to get a look at what Sagar and his team had been working on, he and a few colleagues went to dinner to talk about what they’d seen.

Liu was blown away by the demo, he told his tablemates, but he confessed a key misgiving. He pointed out that Sagar was a noted special-effects wizard, a master of Hollywood trompe-l’oeil. “This doesn’t look like something that would be coming from a computer scientist,” he added. In the end, Liu says, it was the science that amazed him most of all.

He overcame his doubts, pouring millions into the company. After all, Sagar had left his cushy gig at Weta Digital, Peter Jackson’s effects house, precisely because such tricks no longer interested him. He wanted to build a brain model that was as real as he could possibly make it. He wanted to build a sentient machine.

“Consciousness is such an unknown thing,” Sagar tells me over lunch. “What makes you curious? What makes you creative? Humans have been trying to figure it out for millennia and we’re still trying.” He takes a sip of wine.

“It’s such a fascinating problem — to just play in that space, whether we get anywhere or not,” he says. “It’s just fun stuff to think about.”
