How Elon Musk’s Neuralink Will Read Your Mind

Science has come a long way since I first held a brain in my hand

Photo: Alpgiray Kelem/Getty Images

When you hold a human brain in your hands, it doesn’t feel how you’d expect.

Most people think of something soft and mushy, like a stress ball or a Jell-O mold. But pulled from a jar of formaldehyde, the brain is much denser and less yielding — it’s like holding a wad of suet.

For cognitive science (neuroscience) majors at Johns Hopkins University in the early 2000s, Brain Day was a rite of passage. After spending three years studying the organ in extreme detail — through all-night sessions in the libraries, crushingly difficult exams, and intense lectures — it was our first opportunity to actually pick up a brain and hold it in our hands.

Our professors arrived with jars containing all kinds of brains — young ones, old ones, normal ones, diseased ones. The air had the sickeningly sweet smell of industrial-strength preservatives.

One student fainted. We joked that if he fell and hit his head, he couldn’t have chosen a better place to do it than in the lecture hall of Michael McCloskey, one of the nation’s foremost cognitive neuropsychologists. (He was fine.)

Even after studying the brain for years, though, the organ retained an element of opacity. We could observe how it worked at myriad levels, see how broad patterns of activity in certain regions shaped perception, and even examine its deformities and lesions physically laid out in front of us. But the idea that consciousness and thought could become scientifically tractable in our lifetimes felt alien. Even standing among the world’s leading experts — and holding a brain in our hands — the organ’s inner workings seemed mysterious and unknowable.

With the growth of deep learning, compressive sensing, and neural prosthetics, all this is poised to change. And at the forefront of this change is a mysterious, secretive company run by Tesla billionaire Elon Musk: Neuralink.

Neuralink launched in 2016 and is based in San Francisco. It has raised $158 million to date, including at least $100 million from Musk.

In true startup fashion, Neuralink has been incredibly secretive about its technologies, staff, and achievements. Most of what the tech community knows about it comes from job postings seeking animal research experts, a handful of formal announcements and papers, and a 2019 presentation by Musk.

While Neuralink does not divulge its methods, its goal is crystal clear: The company wants to use robots to embed electrodes in healthy people’s brains so they can merge with artificial intelligence. A.I. is one of Musk’s ongoing anxieties, and Neuralink is a concession to the concept of “if you can’t beat the machines, become one.”

Cyborgs. Brain surgery robots. Mind reading. It all sounds pretty spooky and science fiction–esque. Are Neuralink’s technologies even remotely reasonable?

Surprisingly, yes.

Again, if you’d asked me a decade ago — while I was holding someone’s preserved brain in my gloved hand — I would have given a hedged scientist’s answer like, “Barring a major technological breakthrough, it seems unlikely.”

But that breakthrough has come. And it’s not in the hair-thin wires Neuralink plans to weave into your brain. It’s in the computer those wires connect to.

Neuralink is developing a brain prosthesis. Its goal is to implant thin filaments into the brain, threading them around blood vessels like a sewing machine stitching a shirt. These electrodes would be able to read from (and stimulate) up to 1,000 different locations along their route through the organ.

This already sounds like sci-fi. But clinically useful deep-brain stimulators currently exist. They’re an established last-line treatment for otherwise intractable cases of diseases like Parkinson’s. Most read from (or stimulate) only a few locations in the brain — just two in some cases. And they use much thicker electrodes than Neuralink proposes.

From a technological perspective, Neuralink’s proposed device is less an existential leap and more an impressive (if incremental) improvement on existing technologies.

The company will still need to contend with such things as thin electrodes snapping off (a living brain really is like soft Jell-O, unlike the preserved ones we held on Brain Day) or the formation of scar tissue. But these are the kinds of problems that are tractable with a stable of biomedical engineers and a few hundred million dollars to throw around.

Assuming Neuralink can create its proposed implant — which will still take years — how much does reading from 1,000 electrodes actually buy you? A typical human brain has 86 billion neurons. Isn’t reading from 1,000 discrete locations a drop in the bucket?

Not necessarily.

When I was studying neuroscience, there were two paradigms for reading data from the brain. The first was to look at overall patterns of brain activity, using tools like fMRI, PET, or EEG. These techniques use fancy technology and statistical analyses to get a macro-level view of what the brain is up to. They’re great for things like examining which brain regions are involved in reading, emotion, and movement.

The other paradigm focused on reading at the extreme micro level — from a single ion channel on an individual neuron. Called the patch clamp, the technique won its creators the 1991 Nobel Prize in Physiology or Medicine. Patch clamps allow scientists to learn about specific brain functions in incredible detail, like the response of a certain class of neurons to a single neurotransmitter.

In the middle ground between macro and micro, though, there wasn’t much. The overriding assumption was that scientists could read from individual neurons to study their functions or see patterns of activity in the whole brain at once via fMRI and related tech. But getting detailed, neuron-level data from specific regions in real time was unrealistic. It would require installing an electrode for every single neuron — an impossible achievement.

A major development in computing is turning that assumption on its head. Deep learning is a branch of A.I. that has been quietly remaking fields and creating new business models since at least the early 2010s.

The so-called deep learning revolution is the reason Siri can finally understand you, Google Photos knows when you’ve uploaded a picture of your cat, you can deposit a check from your phone, and the traffic in Vegas is finally manageable. It’s also the tech behind totally new innovations, like self-driving cars.

Deep learning works by loosely mimicking, of all things, the human brain. So-called neural networks, which underlie the technology, use layers of artificial neurons and synapse-like connections in massive computer systems to process information in completely new ways.

Deep learning systems are incredibly good pattern finders. You can present them with data, and if there’s any pattern at all, they’ll sniff it out. You don’t even need to know in advance exactly what you’re looking for. Like a child (or a PhD candidate), a deep learning system will not only find patterns in your data but also actually teach itself how to find those patterns in the first place.
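To make that concrete, here is a minimal sketch in plain Python with NumPy (my own illustration, not anything Neuralink or a research lab has published) of a tiny neural network teaching itself a simple pattern, the XOR rule, from just four examples:

```python
import numpy as np

# A tiny neural network that teaches itself the XOR pattern.
# Purely illustrative: layers of artificial "neurons" connected by
# weighted "synapses" adjust themselves to fit the data.

rng = np.random.default_rng(0)

# Four input patterns and the target output (XOR of the two bits).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 8 hidden neurons -> 1 output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: each layer sums its weighted inputs and "fires."
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    error = output - y
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

# After training, outputs should land near [0, 1, 1, 0]:
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Nobody told the network what XOR is; it was only shown examples and adjusted its own connections until the pattern emerged. Scaled up by many orders of magnitude, that is the same self-teaching loop behind face recognition and speech transcription.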

Some of deep learning’s capabilities seem like magic. These systems can create believable fake faces from scratch, guess a person’s appearance from their voice, colorize black-and-white images, fly a drone, and even safely drive a car through a city.

One of deep learning’s most powerful tricks is using learned patterns to reconstruct a whole system from a tiny sample of data.

A close cousin of this idea is the field of compressive sensing, which relies on the mathematics of sparsity rather than neural networks but makes the same point. In a now-famous Wired article in 2010, scientists showed how compressive sensing could be used to recreate an accurate picture of then-president Barack Obama’s face from only a handful of randomly distributed pixels.

It’s amazing that a computer could perform that kind of reconstruction. But the more amazing thing is that all the information needed to rebuild Obama’s face was there in that tiny set of random pixels in the first place. As a human, you’d never guess that such a minuscule sample could be used to recreate a full image. But modern reconstruction algorithms, from compressive sensing to today’s deep learning systems, can find and leverage such patterns even in incredibly sparse data.
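To give a flavor of how sparse reconstruction works, here is a toy sketch in Python using scikit-learn’s Lasso regression. It is not the code behind the Obama image, just the same underlying trick: recover a mostly-empty 1,000-sample signal from only 120 random measurements by insisting on the sparsest explanation consistent with the data.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Compressive-sensing-style demo: recover a sparse signal from far fewer
# random measurements than the signal has samples. Illustrative only.

rng = np.random.default_rng(1)

n = 1000   # length of the "full" signal
k = 15     # only 15 samples are nonzero (the signal is sparse)
m = 120    # we observe just 120 random linear measurements

# Build a sparse ground-truth signal.
signal = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
signal[support] = rng.normal(scale=2.0, size=k)

# Take m random measurements: y = A @ signal.
A = rng.normal(size=(m, n))
y = A @ signal

# L1-penalized regression (Lasso) hunts for the sparsest signal
# consistent with the measurements -- the core trick of compressive sensing.
model = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
model.fit(A, y)
recovered = model.coef_

# Relative error is typically just a few percent -- the whole signal
# really was hiding in that small random sample.
print("relative error:", np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```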

For companies like Neuralink, these capabilities present a promising new path forward. They hold the tantalizing possibility that to understand the brain, you don’t need to read all its neurons. You just need a big enough sample — perhaps 1,000 points for a targeted brain region — and a deep learning system that can take that small slice and use it to rapidly reconstruct the whole.

Imagine this possible future: Neuralink has perfected the biomedical aspects of its implant. You’ve had one installed, and it’s reading from the neurons of your motor system, which controls movement.

When you think about moving your arm, the implant reads a pattern of neuron activity from across its 1,000 electrodes. Those instantly feed into a deep learning model. Like the computer reconstructing Obama, the system takes those 1,000 readings and extrapolates them out to a detailed plan for how you’d like to move your arm.

Rather than physically moving your arm, though, the computer’s analysis is used to move a robot arm (or computer cursor) in exactly the way you planned in your head.
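Here is a hypothetical sketch of that decoding step in Python. Everything in it is made up for illustration: the “electrode” readings are synthetic, the tuning model is a crude stand-in for motor cortex, and ridge regression stands in for whatever model Neuralink would actually use. The point is only the shape of the pipeline: calibrate a decoder on known intentions, then translate new readings into cursor movement.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical sketch: map 1,000-channel "electrode" readings to an
# intended 2D cursor velocity. All data are synthetic.

rng = np.random.default_rng(2)

n_channels = 1000
n_samples = 5000

# Pretend each channel's firing rate is a noisy, fixed mixture of the
# user's intended (vx, vy) velocity -- a crude stand-in for motor tuning.
true_tuning = rng.normal(size=(2, n_channels))
intended_velocity = rng.normal(size=(n_samples, 2))   # what the user meant to do
firing_rates = intended_velocity @ true_tuning + rng.normal(scale=0.5, size=(n_samples, n_channels))

# Train a decoder on a calibration session where intent is known
# (e.g., "try to follow this moving dot on the screen").
decoder = Ridge(alpha=1.0)
decoder.fit(firing_rates[:4000], intended_velocity[:4000])

# At "run time," new electrode readings are translated into movement.
predicted = decoder.predict(firing_rates[4000:])
print("decoder R^2 on held-out data:",
      round(decoder.score(firing_rates[4000:], intended_velocity[4000:]), 3))

cursor = np.zeros(2)
for v in predicted[:10]:
    cursor += 0.01 * v   # integrate decoded velocity into cursor position
print("cursor position after 10 decoded steps:", cursor)
```

Because the 1,000 channels redundantly encode the same two underlying quantities, even a simple linear decoder recovers the intended motion almost perfectly in this toy setup; real neural data are far messier, which is exactly where deep learning models would earn their keep.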

Now go one step further and imagine the implant is embedded in the brain regions linked to visual memory. As you imagine a physical place you’ve been (or a wholly new one that you’ve dreamed up), the computer records your neural activity and uses it to reconstruct a photorealistic version of what you’ve imagined.

That might take more than 1,000 inputs, but the basic concept is sound. We know there are neural correlates to spatial memory. London cabbies who memorize the city’s complex layout, for example, consistently show measurable growth in the hippocampus, a region central to spatial memory and navigation.

If an implant could record from those regions and reconstruct full scenes using deep learning, it’s entirely conceivable that a computer could read your thoughts and reconstruct a scene that you’re imagining in your mind’s eye.

Such applications are likely a ways off. But for the first time ever, there’s a theoretical and practical pathway for getting there. That’s what makes companies like Neuralink so exciting. By bringing together A.I. and actual human brains at the level of information processing, Neuralink has created a real possibility of linking the two up.

Beyond visual mind reading, one can imagine some incredibly helpful applications for Neuralink’s technologies. Quadriplegics, for example, could use the tech to control wheelchairs or cursors using only their minds. And those suffering from amyotrophic lateral sclerosis (ALS) and other neurodegenerative disorders would have a new way to communicate.

These medical applications could be enough to keep Neuralink funded and afloat while it builds out its full-scale tech. Just as Tesla offers profitable electric cars as a platform to fund (and gather data for) self-driving cars, Neuralink could use implants for the severely disabled as a gateway for the study of broader brain functions.

Once you’ve had a brain implant installed to bypass your quadriplegia, what’s the harm in also using it to test out predicting your movements or reading your visual recollections of scenes? Neuralink could easily apply lessons learned from these early patients to a functional implant for the masses. Of course, there are myriad risks inherent in a mass-market technology that interacts with the human brain. But if these can be addressed, the technology itself is looking increasingly viable.

When I stood around in a stuffy lecture hall in Baltimore on Brain Day, I never imagined that these possibilities could exist in my lifetime. There’s a synergy here that feels fitting, somehow — the artificial brains of deep learning systems may well provide the quantum leap forward that allows us to finally understand our real ones.

Co-Founder & CEO of Gado Images. I write, speak and consult about tech, privacy, AI and photography. tom@gadoimages.com
