How To Survive Automation

Where’s your place in the future of work?

Sam Brinson
OneZero

Automation anxiety isn’t new.

In response to Luddites breaking the machines they feared would take their jobs, the British parliament passed the ‘Destruction of Stocking Frames, etc. Act’ in 1812, making such destruction punishable by death.

Long before that, Aristotle mused that if all our tools could perform their tasks by themselves, craftsmen would no longer need servants, and masters would no longer need slaves.

But throughout the big transitions in history — nomadic to agricultural lifestyles, farms to factories — we’ve not run out of jobs. Rather, the new and remaining jobs allowed us to produce more, and the economy grew substantially as a result.

There are serious questions about the quality of the new jobs. Moving from the farm to the factory wasn’t an upgrade for a lot of people. But there were jobs. The jobs of the future may also be more or less appealing, but they will still be there, right?

As a new wave of automation anxiety washes over us, should we look to history and learn the lesson that AI will create new jobs and further increase productivity? Or is AI a fundamentally different beast for which the past can’t prepare us?

“Even if the ‘this time is different’ worry was wrong before, it might still be right today. What’s more, even if history were to repeat itself, we should still beware an excessively optimistic interpretation of the past. Yes, people did tend to find new work after being displaced by technology — but the way in which this happened was far from being gentle or benign.”

— Daniel Susskind, A World Without Work

A Little History

In a sense, artificial intelligence has developed in the opposite direction to human intelligence.

For most of us, the things we struggle with are subjects like logic and math, not telling apart dogs and cats. But in the early days of machine intelligence, the difficult tasks fell first.

So-called ‘symbolic AI’ could do algebra and geometry by the 1960s, as well as play chess and checkers. It was doing the hard stuff, and that prompted us to sing its praises and look optimistically toward a future where all our difficult problems would be solved. The hype grew, and the money came rolling in.

But the hype died down with a keen realisation — it couldn’t do the easy problems. The trouble was the things we do without so much as a thought: recognise a face, pick up an egg without breaking it, get out of bed and put on a shirt.

These early versions of AI required us to tell them precisely how to do what we wanted them to do. We supplied the rules, decision trees, and processes by hand for each task they needed to perform. This made them formulaic and repetitive.

Try programming a computer to empty a toy container and stack some blocks. It can’t learn for itself, so you have to tell it how to identify objects that might be different shapes, colours, and at different angles — how to move its arm, how to pick things up, how to place them without knocking the others over. And each of these tasks can be broken down further into sub-tasks.
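To make that concrete, here is a toy sketch of what hand-coded, rule-based control amounts to. The object attributes, thresholds, and function names are invented for illustration; no real robotics system looks quite like this:

```python
# A toy illustration of the symbolic, hand-coded approach: every case the
# machine might meet has to be anticipated and spelled out by a person.
# The attribute names and thresholds below are invented for illustration.

def classify(obj):
    # Hand-written rules cover only the exact cases we thought of.
    if obj["sides"] == 4 and abs(obj["width"] - obj["height"]) < 0.01:
        return "square block"
    if obj["sides"] == 0:
        return "ball"
    return "unknown"  # anything unanticipated falls straight through

def stackable(objects):
    # Another hand-written procedure: keep the square blocks and
    # order them widest-first so the tower doesn't topple.
    blocks = [o for o in objects if classify(o) == "square block"]
    return sorted(blocks, key=lambda o: o["width"], reverse=True)

toys = [
    {"sides": 4, "width": 5.0, "height": 5.0},  # recognised: square block
    {"sides": 0, "width": 3.0, "height": 3.0},  # recognised: ball
    {"sides": 4, "width": 5.0, "height": 4.9},  # slightly warped: "unknown"
]
print([classify(t) for t in toys])  # the warped block defeats the rules
```

Every rule handles only the cases its author foresaw; one slightly warped block defeats the whole scheme.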

It’s like telling someone how to shoot a basketball — instructions are nice, but in the end you need to get a feel for it. Early AI had no feel. It was rigid and inflexible. The initial optimism then became pessimism, the money dried up, and we entered an ‘AI winter.’

The idea is encapsulated in Moravec’s Paradox, whose namesake, Hans Moravec, had this to say in 1988:

“Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

Even while struggling with the ‘easy’ tasks, these early versions of AI still had an impact on the job market. A paper from the National Bureau of Economic Research suggests automation has been the main force behind U.S. income inequality over the last 40 years, as middle-class jobs that relied heavily on repetitive tasks, like phone operators and cashiers, became the domain of computers.

“We document that between 50% and 70% of changes in the US wage structure over the last four decades are accounted for by the relative wage declines of worker groups specialized in routine tasks in industries experiencing rapid automation.”

We lost routine and algorithmic jobs because they were perfect for the symbolic AI we initially built. But those systems couldn’t do much when creativity and flexibility were required; they couldn’t manage uncertainty or break the rules. Jobs requiring dynamic thought, social skills, and complex problem solving remained largely unscathed.

Machines That Learn

Today, when people talk about AI, they’re usually referring to a very different approach, one based on neural networks that take inspiration from the brain. The idea isn’t new: in the late 1950s, psychologist Frank Rosenblatt designed the ‘Perceptron’ on these principles, a machine capable of basic pattern recognition tasks.
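To give a flavour of the principle, here is a toy perceptron learning the logical AND function by nudging its weights whenever it errs (my own minimal sketch, not Rosenblatt's original formulation):

```python
import numpy as np

# A sketch of Rosenblatt-style perceptron learning: nudge the weights
# whenever the model misclassifies an example, until the classes separate.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi  # updates only on mistakes
            b += lr * (target - pred)
    return w, b

# Toy task: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])  # -> [0, 0, 0, 1]
```

Notice the difference from the symbolic sketch earlier: nobody writes the rule for AND; the weights drift toward it from examples alone.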

Progress in the neural net approach hit its stride in 2012, when a deep neural network from Geoffrey Hinton’s lab swept the ImageNet image-recognition competition. (Hinton would later share the Turing Award with fellow deep learning pioneers Yann LeCun and Yoshua Bengio.) Thus began what’s been called the ‘deep learning revolution’.

Since then there have been many remarkable successes that call Moravec’s paradox into question. Perhaps the best modern example comes from DeepMind, a British AI company headed by Demis Hassabis and bought by Google in 2014.

In 2016, DeepMind’s AlphaGo defeated Lee Sedol at Go. It learned by studying past games between experts and extracting the important features that led to success. It was quite the achievement: Go has an absurd number of possible positions, so it wasn’t possible for AlphaGo to brute-force its way through them all. It needed strategies and shortcuts.

Not long after, AlphaGo was trumped by AlphaGo Zero, which didn’t use any past games — it was given only the rules, which it used to play against itself. In a blog post, DeepMind said its power came from the fact that it was “no longer constrained by the limits of human knowledge.”

Then came AlphaZero, which mastered not just Go, but Chess and Shogi too. And finally, we have MuZero, which has mastered Chess, Go, Shogi, and 57 Atari games, all without even being told what the rules were.

That bears repeating: we didn’t tell it what the rules were. We didn’t show it how people play or provide it with any good strategies. We just let it loose in these digital realms, where it learned for itself what does what. And now it kicks our ass, and the asses of its AI predecessors.
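For a flavour of what learning by self-play means, here is a toy agent that discovers a winning strategy for the simple game of Nim purely by playing against itself. This is a crude tabular learner, nothing like DeepMind's actual algorithms; every name and number is illustrative:

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim: take 1-3 sticks from a pile of 15;
# whoever takes the last stick wins. Only the rules are coded here;
# the strategy emerges from the agent playing against itself.

Q = defaultdict(float)       # Q[(sticks_left, move)] -> estimated value
EPS, ALPHA = 0.1, 0.5        # exploration rate, learning rate

def choose(sticks):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPS:                        # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])  # otherwise play greedily

for _ in range(50_000):                              # self-play episodes
    sticks, history = 15, []
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    reward = 1.0             # the player who took the last stick won
    for state_move in reversed(history):
        Q[state_move] += ALPHA * (reward - Q[state_move])
        reward = -reward     # flip perspective for the other player

# The agent typically rediscovers the classic strategy: always leave the
# opponent a multiple of four sticks. From 15, that means taking 3.
print(max((1, 2, 3), key=lambda m: Q[(15, m)]))      # usually prints 3
```

Nobody told it to count in multiples of four; that regularity falls out of thousands of games against itself.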


This machine learning approach to AI is very different from how symbolic AI functions. It’s much less about giving the machine rules to follow, and much more about feeding it data to learn its own lessons from.

Today’s AI is conversing with us, reading what we say and learning from us, identifying objects and driving cars, helping determine who gets a job and who goes to jail, discovering drugs and examining medical images, doing parkour and dancing to the Rolling Stones, folding proteins and controlling plasma inside fusion reactors.

In 2019, Rich Sutton wrote an essay titled ‘The Bitter Lesson,’ and it seems an appropriate successor to Moravec’s paradox. The lesson: general methods that leverage growing computation ultimately beat approaches built on hand-coded human knowledge.

“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. … Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.”

Sutton’s Bitter Lesson piggybacks on Moore’s Law, the observation that the number of transistors we can fit on a chip doubles roughly every two years. If these observations hold even somewhat true over the long term, greater computational capacity will result in more wins for the computers.
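A back-of-the-envelope sketch of what that doubling implies, starting from Intel's 4004 chip of 1971 and assuming a clean doubling every two years (real progress has been lumpier):

```python
# Compound doubling, assuming one doubling every two years.
transistors = 2_300                # Intel 4004, released 1971
for year in range(1973, 2023, 2):  # 25 doublings by the early 2020s
    transistors *= 2
print(f"{transistors:,}")          # ~77 billion, the ballpark of today's largest chips
```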

It’s not only the routine tasks at stake anymore — these systems are displaying forms of creativity and novel problem solving, doing things we don’t even know how to explain. This version of AI is coming for more than the middle class.

Let’s Work Together

In the near term, AI is likely to make the job market more volatile, but not necessarily lower the number of available jobs.

For one thing, jobs are collections of many different tasks. McKinsey suggests that fewer than 5% of jobs can be fully automated, but in over half of them, at least 30% of the tasks can be. While the titles may stay the same, the day-to-day activities may change.

There’s also the potential for AI to create new jobs, with the World Economic Forum predicting 85 million jobs lost by 2025 — but 97 million gained. Those who are displaced, provided they have access to the right education, can pivot into a new profession.

One reason I believe the near future will see different jobs, but not fewer jobs, is that AI is still applied rather narrowly, while people can learn quickly from little experience and generalise across domains. We might have AI tools that do certain tasks and make new things possible, but a human mind is still required to bring the different elements together creatively and flexibly.

Many people are going to benefit from AI tools that help streamline their workflows, give them new abilities, and allow them to focus on the parts of their jobs they actually enjoy. Then again, AI might also take on the tasks that people enjoy. Those who refuse to adopt such tools may find themselves at a disadvantage.

As Tyler Cowen writes:

“If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch. Ever more people are starting to fall on one side of the divide or the other. That’s why average is over.”

Besides adopting the new tools, a dynamic job market makes lifelong learning a necessity. Gone are the days when we could go to Uni for a few years and get a job that lasted until retirement. Things change too quickly, and we have to change with them. We have to stay on our toes and continue broadening our knowledge base.

Depending on our perspective and circumstances, this might appear good or bad. If we get to learn at our own pace, about things that stoke our curiosity, we’re likely to feel that sense of self-improvement and growth. If we feel under pressure to stay ahead, if we’re scared of losing our jobs and not being able to pay rent, then learning will feel like a burden, an added source of stress and anxiety.

“In order to keep up with the world of 2050, you will need not merely to invent new ideas and products but above all to reinvent yourself again and again.”

— Yuval Noah Harari, 21 Lessons for the 21st Century

We need to feel a sense of control over our lives, but surviving the effects of AI and automation may mean not only continuous reinvention, but an acceptance of the unknown — a recognition that our future depends on a technology we can’t completely understand or predict.

The Human Element

Even though past and present approaches to AI have their limitations, it’s difficult to see what abilities can’t eventually be automated. Are the problems insurmountable? Is it impossible to overcome the bias in data? To build AI that can properly explain itself? To build algorithms that can generalise knowledge across domains?

While it remains to be seen, I don’t believe these problems will last. Not only do I think that computers will be capable of whatever the human brain is capable of, but in many respects they will vastly improve on us. All the ways people go from sensations to perceptions, memories, thoughts, and actions — is there anything that can’t be replicated or improved upon?

Consider the limits placed on us — our heads can only get so big, our brains can only consume so much energy, and we can only pay attention to so much at once. We forget most of what we experience.

Computers can be hand-crafted by at least semi-intelligent creatures rather than haphazard evolutionary processes. They can be the size of warehouses and consume as much energy as a city if need be. They can remember anything and everything, and attend to many streams of information at any one time.

While we still need to narrow down the algorithms or design principles to overcome current problems, I see it as only a matter of time. In fact, AI and deep learning pioneer Geoffrey Hinton believes that human minds and current AI aren’t all that different:

“We humans are neural nets. What we can do, machines can do.”

Despite this, even if we can build computers and machines that can do everything we can (and more), there will likely be roles for humans that we keep simply because we prefer the human touch.

While I can’t say that we won’t build them, I’d prefer we avoid machines with emotions similar to our own. We don’t want them to get jealous or angry, to feel pain or to suffer. I’m also not fond of them dumbing themselves down to pass as human. But if they don’t behave and experience the way we do, it will be harder to empathise with them, and that complicates certain social interactions.

I have a feeling we’ll prefer to watch people play sports and perform music, to buy art and read books that reflect genuine human experience. While there will be robots to care for the elderly and teach our kids, and programs that can engage in psychotherapy, that doesn’t mean we’ll all feel comfortable with them or trust that they understand our condition.

Then again, this isn’t to say there won’t be robot sports, or AI art, music, and writing. In many cases there already are, and they have proven popular. Studies of robot carers and algorithmic therapists have also shown promise. They will do these things, and they may do them well. But there will likely be some roles left for people to provide for those who just prefer the human element.

But this doesn’t leave us with all that much. We’ll have artists and athletes and people that care for other people, but the big money is likely to go to productive and intelligent computers, and those that employ them. What’s left for all the people that don’t make up these groups? I dare say, there may be a lot of them.

“Just as the Industrial Revolution created the working class, automation could create a global useless class, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.”

— Yuval Noah Harari

Future Arrangements

If computational capacity and other new developments in AI do eventually surpass human intelligence in enough domains, those who stand to gain the most are the big tech companies — and not just Google, Apple, Amazon, Facebook, and Microsoft, but also Baidu, Tencent, Alibaba, and other international entities.

How they leverage this profound technology is going to be of significance for everyone else.

It’s worth noting that many tech moguls, politicians, and economists have been vocal proponents of a universal basic income (UBI), where everybody would be given enough money to keep themselves afloat. There are different versions, but in essence it’s about redistributing the gains from technology so that it supports people rather than simply displacing them.

For instance, Andrew Yang ran for the U.S. presidency in 2020 on the promise of a “Freedom Dividend”: a payment of $1,000 each month to every American adult, intended to help offset the impacts of automation. There have also been small trials in Finland and in Stockton, California.

Depending on the distribution of wealth and what technologies are made available to people, the result could land anywhere on the spectrum from a dystopian sci-fi vision of slums and super-rich machine owners to an age of equitable abundance where everybody lives free of existential worry. The range of outcomes is staggering, frightening, maybe even exciting.

Here are a few perspectives and predictions:

Tyler Cowen:

“We will move from a society based on the pretense that everyone is given an okay standard of living to a society in which people are expected to fend for themselves much more than they do now. I imagine a world where, say, 10 to 15 percent of the citizenry is extremely wealthy and has fantastically comfortable and stimulating lives, the equivalent of current-day millionaires, albeit with better health care. Much of the rest of the country will have stagnant or maybe even falling wages in dollar terms, but a lot more opportunities for cheap fun and also cheap education. Many of these people will live quite well, and those will be the people who have the discipline to benefit from all the free or near-free services modern technology has made available. Others will fall by the wayside.”

— Tyler Cowen, Average is Over

FALC:

Taking a different perspective, there’s something called Fully Automated Luxury Communism, or FALC. The concept, promoted chiefly by Aaron Bastani, is described by The Atlantic as “a strong brew of technological determinism, sunny utopianism, and souped-up socialism: Let the robots do all the work, and let humans enjoy the fruits of their labor in equal measure.”

In this scenario, we’d all be living like modern-day millionaires, thanks to robots taking care of all the hard work, producing everything we could ever need. Automation would serve human needs rather than profit motives, ushering in a post-work, post-scarcity society. Let’s just hope the machines are OK with that.

Butlerian Jihad:

Another option would be a Butlerian Jihad. Rather than landing anywhere on the spectrum from utopia to dystopia, this option calls for a complete halt to technological development. Like the Luddites, we refuse any further progress and choose to stay where we are.

Frank Herbert develops the idea in Dune, but he was likely inspired by Samuel Butler’s essay ‘Darwin Among the Machines,’ published way back in 1863, which advocated the destruction of every machine of every sort. Herbert softened the stance slightly, writing only: “Thou shalt not make a machine in the likeness of a human mind.”

However, while the risks are high, so is the potential. Could we resign ourselves to current technologies and forget what AI might have achieved? I find it difficult to imagine us stopping, even if many people feel we should. Could we trust that no party is working away at it behind closed doors? What if one country builds a superintelligence while the others are left behind?

The Human Obstacle

So, is automation anxiety a real concern?

Provided we are willing and able to learn and adapt, the near future looks manageable. Much like past transitions, there will be rough patches, but there will be work available.

But further out, when AI starts to pool its abilities and human capacities succumb in greater numbers, it’s very unclear whether qualifications will mean anything, or whether a better education can save us.

If computers do become smarter than us, how could this be like other transitions? The anxiety will be well-founded. The only tasks left will be what we collectively choose to keep for ourselves — things we ban AI from doing or what masses of people refuse to have robots do.

With each day, AI continues to encroach on our physical and cognitive capacities. Where will it end?

I don’t believe the answer to that question will come from looking at technical limits to AI itself. We’ll need to look at people. Are we capable of working together to build a superior intellect that works with us? Works for us? Or are we going to get in our own way, compete when we need to cooperate, be driven by bad incentives, act too hastily on faulty assumptions, and bugger it all up?

“This is the truth about the AI revolution. There is no looming machine takeover, no army of malevolent robots plotting to rise up and enslave us.

It’s just people, deciding what kind of society we want.”

— Kevin Roose, Futureproof

We have to collectively decide what we want people to do, what responsibilities and capacities we should hold onto even when we’re second best, and how to support people. If we don’t make these choices together, we risk having them made for us by tech companies and governments, and the AI systems they control.

One of the biggest concerns about these future all-powerful AI systems is how to ensure our values align. It’s going to be very hard to ensure they align with people when people don’t align with each other.

As Stephen Wolfram notes, perhaps the one thing we can’t automate is the specifying of goals. It will fall to us to define what AI works towards and what problems it solves.

“People talk about the future of intelligent machines and whether they’ll take over and decide what to do for themselves. But the inventing of goals is not something that has a path to automation. Someone or something has to define what a machine’s purpose should be — what it’s trying to execute. … When we consider the future of AI, we need to think about the goals. That’s what humans contribute; that’s what our civilization contributes.”

We need to develop goals that work for all of us, not just a privileged few. If we can, perhaps something closer to the utopian view awaits us.

AI could be the biggest achievement humanity ever produces, provided we’re up to the task. Unlike the haphazard evolutionary process that created us, this new creature can be intelligently designed — but that means we’ll have to behave intelligently too.
