Machine Learning Widens the Gap Between Knowledge and Understanding

And gives us the tools for our next evolutionary step


The program “Deep Patient” doesn’t know that being knocked on the head can make us humans dizzy or that diabetics shouldn’t eat 5-pound Toblerone bars in one sitting. It doesn’t even know that the arm bone is connected to the wrist bone. All it knows is what researchers fed it in 2015: the medical records of 700,000 patients as discombobulated data, with no skeleton of understanding to hang it all on.

Yet, after analyzing the relationships among these blind bits, Deep Patient was not only able to predict the likelihood of individual patients developing particular diseases, but was in some instances more accurate than human physicians, including for some diseases that have until now utterly defied predictability.

Deep learning

If you ask your physician why Deep Patient thinks it might be wise for you to start taking statins or undergo preventive surgery, your doctor might not be able to tell you, but not because she’s not sufficiently smart or technical. Deep Patient is a type of artificial intelligence called deep learning (itself a type of machine learning) that finds relationships among pieces of data, knowing nothing about what that data represents.

From this, it assembles a network of information points, each with a weighting that determines how likely it is that the points it’s connected to will “fire,” which in turn affects the points they’re connected to, the way firing a neuron in a brain would. To understand why Deep Patient thinks, say, that there’s a 72 percent chance that a particular patient will develop schizophrenia, a doctor would have to internalize those millions of points and each of their connections and weightings. But there are just too many, and they are in relationships that are too complex. You, as a patient, are, of course, free to reject Deep Patient’s probabilistic conclusions, but you do so at a risk, for the reality is that we use these “black box” diagnostic systems, which cannot explain their predictions, because in some cases they are significantly more accurate than human doctors.
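To make the idea of weighted, firing connections a little more concrete, here is a minimal sketch in Python of a tiny feed-forward network. It is emphatically not Deep Patient’s actual code or architecture; the feature values, weights, and the “risk” output are invented purely for illustration.

```python
import math

def sigmoid(x):
    # Squash a weighted sum into a 0-to-1 "firing" strength.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each point fires according to the weighted sum of the points feeding it.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical, made-up features drawn from a medical record.
record = [0.2, 0.9, 0.4]

# Hand-picked weights: two hidden points, then one output point.
hidden = layer(record,
               weights=[[0.5, -1.2, 0.8], [1.1, 0.3, -0.7]],
               biases=[0.1, -0.2])
risk = layer(hidden, weights=[[1.5, -0.9]], biases=[0.05])[0]

print(f"Predicted risk: {risk:.2f}")
```

A real deep-learning system has millions of such weights, and they are learned from data rather than set by hand, which is exactly why no doctor can walk back through them and narrate the reasoning.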

This is the future, and not just for medicine. Your phone’s navigation system, type-ahead predictions, language translation, music recommendations, and much more already rely on machine learning.

As this form of computation gets more advanced, it can get more mysterious. For example, if you subtract the number of possible chess moves from the number of possible Go moves, the remainder is still many times larger than the number of atoms in the universe. Yet, Google’s A.I.-based AlphaGo program routinely beats the top-ranked human players, even though it knows nothing about Go except what it’s learned from analyzing 60 million moves in 130,000 recorded games. If you examine AlphaGo’s inner states to try to discover why it made any one particular move, you are likely to see nothing but an ineffably complex set of weighted relationships among its data. AlphaGo simply may not be able to tell you in terms a human can understand why it made the moves that it did.
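To give a rough sense of the scale behind that comparison, here is a back-of-the-envelope check using commonly cited estimates rather than figures from the text: chess’s game-tree complexity is often put at around 10^120 (the Shannon number), Go’s at around 10^360, and the atoms in the observable universe at around 10^80.

```python
# Commonly cited estimates, not exact figures.
chess_moves = 10 ** 120   # Shannon-number estimate of chess game-tree complexity
go_moves    = 10 ** 360   # rough estimate of Go game-tree complexity
atoms       = 10 ** 80    # rough estimate of atoms in the observable universe

remainder = go_moves - chess_moves
print(remainder > atoms)       # True
print(remainder // atoms)      # on the order of 10^280 -- "many times larger" indeed
```

Whatever the precise estimates, the point stands: the space of possible Go games is far too vast for exhaustive calculation, by humans or by machines.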

Yet, about an AlphaGo move that left some commentators literally speechless, one Go master, Fan Hui, said, “It’s not a human move. I’ve never seen a human play this move.” Then softly: “So beautiful. Beautiful. Beautiful. Beautiful.”

Deep learning’s algorithms work because they capture, better than any human can, the complexity, fluidity, and even the beauty of a universe in which everything affects everything else, all at once.


Machine learning is just one of many tools and strategies that have been increasingly bringing us face-to-face with the incomprehensible intricacy of our everyday world. But this benefit comes at a price: We need to give up our insistence on always understanding our world and how things happen in it.

Shallow understanding

We humans have long been under the impression that if we can just understand the immutable laws of how things happen, we’ll be able to perfectly predict, plan for, and manage the future. If we know how weather happens, weather reports can tell us whether to take an umbrella to work. If we know what makes people click on one thing and not another in their Facebook feeds, we can design the perfect ad campaign. If we know how epidemics happen, we can prevent them from spreading. We have, therefore, made it our business to know how things happen by discovering the laws and models that govern our world.

Given how imperfect our knowledge has always been, this assumption has rested upon a deeper one. Our unstated contract with the universe has been that if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus at least somewhat pliable to our will.

But now that our new tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it. Our newly capacious machines can get closer to understanding it than we can, yet they, as machines, don’t really understand anything at all.

This, in turn, challenges another assumption we hold one level further down: The universe is knowable to us because we humans (we’ve assumed) are uniquely able to understand how the universe works. At least since the ancient Hebrews, we have thought ourselves to be the creatures uniquely made by God with the capacity to receive His revelation of the truth. Since the ancient Greeks, we’ve defined ourselves as the rational animals who are able to see the logic and order beneath the apparent chaos of the world. Our most basic strategies have relied on this special relationship between us and our world.

Giving up on this traditional self-image of our species is wrenching and painful. Feeling crushed by information overload and nervously awaiting the next disruption of our business, government, or culture are just the localized pains of a deeper malady: the sense — sometimes expressed in uneasy jokes about the rise of robot overlords — that we are not as well adapted to our universe as we’d thought. Our minds cannot analyze or predict events as accurately and quickly as A.I. can. Evolution has given us minds tuned for survival and only incidentally for truth. Our claims about what makes our species special — emotion, intuition, creativity — are beginning to sound overinsistent and a bit desperate.

This literal disillusionment is something for us to embrace — and not only because it’s happening whether we embrace it or not. We are at the beginning of a great leap forward in our powers of understanding and managing the future: Rather than always having to wrestle our world down to a size we can predict, control, and feel comfortable with, we are starting to build strategies that take our world’s complexity into account.

We are taking this leap because it is already enabling us to be more efficient and effective, in touch with more people and ideas, more creative, more joyful. It is already recontextualizing many of our most basic ideas and our most deeply accustomed practices in our business and personal lives. It is reverberating through every reach of our culture.

The signs are all around us, but in many cases they’re hidden in practices and ideas that already seem normal and obvious. For example, before machine learning came to prominence, the internet was already getting us used to these changes.

The A/B mystery

When Barack Obama’s first presidential campaign tried out different versions of a sign-up button on its website, it found that the one labeled “Learn more” drew dramatically more clicks than the same button labeled “Join Us Now” or “Sign Up Now.”

Another test showed that a black-and-white photo of the Obama family unexpectedly generated far more clicks than the color image the site had been using.

Then, when they put the “Learn more” button together with the black-and-white photo, sign-ups increased 40 percent.

Overall, the campaign estimated that almost a third of the 13 million names on its email list, and about $75 million in donations, were due to the improved performance provided by this sort of A/B testing, in which a site tries out variants of an ad or content on unknowing sets of random users, and then uses the results to decide which version the rest of the users will see.

It was even more surprising when the Obama team realized that a video of the candidate whipping up a crowd at a rally generated far fewer clicks than displaying a purely text-based message. What could explain this difference, given their candidate’s talents as an orator? The team did not know. Nor did they need to know. The empirical data told them which content to post on the campaign site, even if it didn’t tell them why. The results: more clicks, more donations, probably more votes.
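For readers who want to see the mechanics, here is a minimal sketch in Python of the A/B-testing pattern described above. The variant names, sign-up rates, and traffic numbers are all invented for illustration; this is not the campaign’s actual system.

```python
import random

# Two hypothetical variants of a landing page.
VARIANTS = {
    "A": {"button": "Sign Up Now", "photo": "color"},
    "B": {"button": "Learn more", "photo": "black-and-white"},
}

results = {name: {"shown": 0, "signups": 0} for name in VARIANTS}

def serve_page() -> str:
    # Each visitor is randomly assigned one variant.
    variant = random.choice(list(VARIANTS))
    results[variant]["shown"] += 1
    return variant

def record_signup(variant: str) -> None:
    results[variant]["signups"] += 1

# Simulate traffic with made-up underlying rates. The test never needs to
# know *why* one variant converts better -- only that it does.
true_rates = {"A": 0.06, "B": 0.085}
for _ in range(10_000):
    v = serve_page()
    if random.random() < true_rates[v]:
        record_signup(v)

for name, r in results.items():
    print(f"Variant {name}: {r['signups']}/{r['shown']} "
          f"= {r['signups'] / r['shown']:.1%}")
```

Whichever variant wins gets shown to everyone else; no hypothesis about the audience’s psychology is ever required.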

A/B testing has become a common practice. What you get on a Google search results page is the product of A/B testing. The layout of movies on Netflix results from A/B testing. Even some headlines used by the New York Times are the result of A/B testing. Between 2014 and 2016, Bing software engineers performed 21,200 A/B tests, a third of which led to changes to the service.

A/B testing works without needing, or generating, a hypothesis about why it works. Why does some ad on Amazon generate more sales if the image of the smiling young woman is on the left instead of the right? We can make up a theory, but we’d still be well advised to A/B test the position of the model in the next ad we create. That a black-and-white photo worked for Obama does not mean that his opponent, John McCain, should have ditched his color photos. That using a blue background instead of a green one worked for Amazon’s pitch for an outdoor grill gives us no reason to think it will work for an indoor grill or for a book of barbecue recipes.


In fact, it’s entirely plausible that the factors affecting people’s preferences are microscopic and fleeting. Maybe men over 50 prefer the ad with the model on the left but only if they are coming from a page that had a funny headline, while women from Detroit prefer the model on the right if the sun just peeked through their windows after two overcast days. Maybe some people prefer the black-and-white photo if they were just watching a high-contrast video and others prefer the color version if the Yankees just lost a game. Maybe some generalizations will emerge. Maybe not. We don’t know. The reasons may be as varied as the world itself.

We’ve been brought up to believe that the truth, and reality, of the world is expressed by a handful of immutable laws. Learn the laws and you can make predictions. Discover new laws and you can predict more things. If someone wants to know how you came up with a prediction, you can trot out the laws and the data you’ve plugged into them. But with A/B testing, we often don’t have a mental framework that explains why one version of an ad works better than another.

Think about throwing a beach ball. You expect the ball to arc while moving in the general direction you threw it in, for our mental model — the set of rules for how we think things interact — takes account of gravity and momentum. If the ball goes in another direction, you don’t throw out the model. Rather you assume you missed some element of the situation; maybe there was a gust of wind, or your hand slipped.

That is precisely what we don’t do for A/B testing. We don’t need to know why a black-and-white photo and a “Learn more” label increased donations to one particular campaign. And if the lessons we learned from a Democrat’s ad turn out not to work for her Republican opposition — and they well may not — that’s okay too, for it’s cheap enough just to run another A/B test.

A/B testing is just one example of a technique that inconspicuously shows us that principles, laws, and generalizations aren’t as important as we thought. Maybe — maybe — principles are what we use when we can’t handle the fine grains of reality.

The efficacy of complexity

We’ve just looked at examples of two computer-based technologies that are quite different: a programming technique (machine learning) and a global place (the internet) where we encounter others and their expressions of meaning and creativity. Of course, these technologies are often enmeshed: machine learning uses the internet to gather information at the scale it needs, and ever more internet-based services both use and feed machine learning.

These two technologies also have at least three things in common that have been teaching us about how the world works: Both are huge. Both are connected. Both are complex.

Their hugeness — their scale — is not of the sort we encounter when we visit the home of the world’s largest ball of twine or imagine all the world’s potatoes in a single pile. The importance of the hugeness of both machine learning and the internet is the level of detail they enable. Rather than having to get rid of detail by generalizing or suppressing “marginal” information and ideas, both of these technologies thrive on details and uniqueness.

The connectedness of both of these technologies means that the bits and pieces contained within them can affect one another without a backwards glance at the barriers that physical distance imposes. This connectedness is essential to both of these technologies: a network that connected one piece to another, one at a time, would be not the internet but the old telephone system. Our new technologies’ connectedness is massive, multi-way, distance-less, and essential.

The scale and connectedness of machine learning and the internet result in their complexity. The connections among the huge number of pieces can sometimes lead to chains of events that end up wildly far from where they started. Tiny differences can cause these systems to take unexpectedly sharp turns.

We don’t use these technologies because they are huge, connected, and complex. We use them because they work. Our success with these technologies — rather than the technologies themselves — is showing us the world as more complex and chaotic than we thought, which, in turn, is encouraging us to explore new approaches and strategies, challenging our assumptions about the nature and importance of understanding and explanations, and ultimately leading us to a new sense of how things happen.

Reprinted by permission of Harvard Business Review Press. Excerpted from EVERYDAY CHAOS: Technology, Complexity, and How We’re Thriving in a New World of Possibility by David Weinberger. Copyright 2019 David Weinberger. All rights reserved.
