‘We’re Drowning in Data’: How to Think for Yourself in the Age of Experts and A.I.
Harvard lecturer and ‘Think for Yourself’ author Vikram Mansharamani explains his strategies for strengthening common sense in a time ruled by big tech
Between 2011 and 2017, at least 259 people died while trying to frame the perfect selfie. They fell off cliffs, down waterfalls, or out of high-rise windows while trying to snap the ideal shot for social media. These tragic deaths come amid rising cases of “death by GPS” and the roughly 1.6 million accidents per year attributed to texting while driving. So, on top of the toll that screens are taking on our attention spans and interpersonal relationships, it’s clear that they’re downright deadly, too.
These are extreme — and tragic — examples of the blind obedience to technology that Vikram Mansharamani laments in his new book, Think for Yourself: Restoring Common Sense in an Age of Experts and Artificial Intelligence. Mansharamani lectures on decision-making at Harvard’s School of Engineering and Applied Sciences, and while he’s not against expertise or technology per se, he argues that “today’s interconnected problems demand integrated thinking … what we need is contextualized expertise that complements depth with breadth.”
His book is an effort to explain how we ended up in a situation where most of us are dependent on technology and experts to navigate our daily lives. Mansharamani says his book can “empower readers with tools and strategies to escape from it.”
OneZero caught up with Mansharamani to discuss how and why we’ve given our minds over to algorithms, how we might reclaim them, and why we should be skeptical of even the most benign uses of A.I., like when it’s used to offer diagnoses in health care.
This interview has been edited and condensed for clarity.
OneZero: You argue that people should break away from blindly following technology. Given how addictive many platforms are — especially social media — how can we expect people to regain focus?
Mansharamani: We’re drowning in data. There’s so much information and there are so many choices — and the result is we need help to filter that information and to optimize our choices, because when we have that many choices, we think there must be a perfect one. We’re always living with this low-grade fever known as FOMO. And there’s this anxiety that there’s a better, more perfect choice out there, and we need to find it. So we turn to technologies to help us filter through the noise, to experts to help us make better choices. And in that process, we’re giving up some control by letting others choose where we focus and manage our information flow by being our filters.
I describe focus as a double-edged sword, where one edge allows you to get deep knowledge by being focused and targeted in where you’re paying attention. But the other side of focus [might be termed] “broadly ignoring.” And those are two sides of the same coin, right? The more you focus, the more you ignore. And so the question is, what are we losing? The selection of what to ignore and what to focus upon is, in fact, a surrender of our autonomy. Increasingly, technology is framing what we see and how we see it.
You write that “in the course of our now habituated blind obedience to the people, tech, and systems … our intellectual self-reliance skills have withered.” But something like, say, a calculator, could help free up space for higher-level thinking. So what level of task should we outsource to technology, and when should we think on our own?
Well, I don’t even think of it that way. What I think about it is, it’s not necessarily the level of tech to outsource, it’s the significance of the decision. And tech is a tool, and like other tools and experts and other inputs that we could use, they all have a role and a time and a place. So I’m going to flip it and say, when the stakes of a decision are really high, we want to make sure that we are thinking for ourselves, actually understanding the assumptions that are going into the expertise that’s being offered, whether it’s from a human or a technology. We want to actually think about it and not blindly defer to it.
I’m not suggesting deference to experts is bad, if you proactively and mindfully do it. In fact, in the introduction to the book, I talk about a Stanford professor and his wife, who was diagnosed with cancer. It’s about knowing when it’s best to get in the backseat or give up the driver’s seat. They said, “We’ve decided emotionally making decisions around a cancer diagnosis are just too overwhelming. We spent a lot of time finding the doctor. We found one we trust. We’re blindly deferring, having mindfully chosen who to blindly defer to.” It’s a mindfulness argument, at some level, and it has to do with the stakes of the decision.
You argue that there are dangers to outsourcing our decision-making process to A.I., citing medical technologies used to diagnose patients as one example. What is the threat here?
Where it’s really useful to use technologies and to check diagnostically is where you can “catch the rabbits before they leave the pen.” But unfortunately, that process of doing more early diagnosis and more technology and more screening and more looking has resulted in more and more finding. And the problem is when we find cancer, we’re having trouble identifying whether it’s the fast-moving kind that could kill you or the slow-moving kind that may never affect you.
As for technology being able to help distinguish between those two types of cancers, I think that would be hugely valuable. My understanding is that we haven’t made great progress yet on that front. So I think technology as a tool, if deployed toward the areas of confusion and questions that we have, could be really valuable. But we can’t lose track of the fact — and this is a really critical point — we can’t lose track of the fact that technologies, algorithms, artificial intelligence are all designed by humans.
Humans set the initial conditions. We may not know where those initial conditions take the technology with machine learning. But we know we taught it how to learn, so to speak, or we set the initial parameters, and we’re finding that in many of those cases, the algorithms are producing biased outcomes.
And those algorithms are being written by experts, as you point out.
What I’m talking about is expertise, and expertise can be embodied in multiple formats. Expertise can be embodied in an individual. Expertise could be embodied in the technology via an algorithm, embodied in a checklist that says, “Hey, we know these are the most important things to do at this time and this way — check, check, check.” And so you can call those protocols or rules generally in the management of bureaucracies that sort of are embedded or extend expertise, if you will.
But broadly speaking, “experts” have taken a hit lately. Many people are hesitant to trust experts or anyone they perceive to be “elite,” which, as we’re seeing with the Covid-19 crisis, can be a serious problem. Why shouldn’t we listen to experts?
It’s a great issue. A lot of people ask me a version of that question, which is that the U.S. commander in chief seems to think for himself. Is that necessarily a good thing?
People ask, “Vikram, are you suggesting we dismiss experts?” And the answer is absolutely not. What I’m suggesting is that unfortunately we humans tend to bounce like a ping pong ball between complete dismissal of experts and blind deference to experts.
But there’s a middle ground. The idea here is that we generate enough insight from experts. We extract the value that experts are able to produce, literally tapping into their expertise, but don’t give up our autonomy. So it’s a slightly more nuanced way to think about it. I don’t want to say, “All experts are bad.” A lot of progress in human society over hundreds of years has been because of specialization and expertise. I’m really flipping it around and saying that how we, as individuals, use experts is a problem. We need to listen to them, but we need to not be blindly deferential to them.
We all instinctively seek out confirmatory evidence when we’re trying to make a tough choice. We have an inclination. We see confirmatory evidence, and we end up down this path where we only find data that endorses our existing view. And so one of the strategies I suggest to help combat some of these biases, for instance, is to employ a devil’s advocate in our process.
So whether it’s a friend, a family member, or, if it’s in a corporate setting, a person on your team, find someone whose job is to actually take the opposite perspective, regardless of the merits. Someone who’s given the time, energy, and resources to produce the contrary case. That’s a healthy way to make sure we find disagreement, and actually seek out and think hard about evidence that runs opposite to what our natural confirmatory bias might lend itself toward.
Let’s seek disagreement rather than allowing our natural instincts to run amok.
You argue that specialization has become popular, but that it’s a problem and that it prevents the kind of systems thinking we actually need right now.
I’m a big fan of being a generalist. I think that connecting dots is as important, if not more important, than generating dots. And that fits into this idea of experts on tap, not on top. If you’re saying someone’s a great expert, let’s take their dot, but we need to paint our mosaic, and so let’s take those tiles, and put them into our picture, for our context. And we have to just remember that experts live in silos, almost definitionally — they’re narrow and deep. That means there’s a large domain outside of their area of expertise that may have an impact on us.
It’s critical that we retain a focus on the context because we’re the ones who know the context. It’s not malicious, it’s not intentional. It is structural. Deep expertise means narrow focus, which implies broad ignoring, which means living in that silo. Where the boundaries of that silo lie matters. So in order to do that, in order to manage experts, we have to see the big picture — zoom out and see where and how all of these dots come together. So it’s really a call for integrated thinking. More systems thinking, where we see the connections across the silos. And some of the value for us as individuals comes, in fact, from crossing or connecting these silos.
Seeking disagreement is essential. If there were easy answers, we wouldn’t be stressing or thinking about these things. It’s when the answers are nonobvious that disagreement can help us navigate through it.