Fix A.I. in the Private Sector by Fixing It in the Military First

Silicon Valley’s ethics via opt-out isn’t going to work anymore


Last week I wrote a piece for the popular military theory blog, War on the Rocks, about what Silicon Valley has learned about the impact of automation on complex systems and how that should shape the military’s goals.

Discussing the military use case for technology with other technologists is often awkward. Many believe that it is possible to imbue new technologies like A.I. with ethical and safety principles purely from the comfortable environments of academic ivory towers or plush FAANG collaborative spaces. One need not go where the technology might have its deadliest and most critical impacts. No need to risk your reputation with the association.

I’ve come to believe that this attitude is naive. Unsafe or unethical technology is a curiosity in academia. It’s bad, but also good because there will be research dollars and accolades in documenting the edges of it. Unsafe or unethical technology is a PR problem for FAANG. It’s bad, but also good because it typically means more profits. By contrast, unsafe or unethical technology in the military is a core mission failure. It means more dead soldiers in the short term and social unrest that creates more dead soldiers in the long term.

When I started working in the federal space someone told me, “No organization on Earth is more incentivized to prevent war than the DoD. We should help them do that.” And in my own experience, I’ve found that to be true. People often confuse the rhetoric of politicians and defense contractors with the attitudes of servicemen and women. There is no organization more anti-war than the DoD because war doesn’t just mean dead bodies to them. It means dead friends, separated families, disrupted — perhaps permanently — lives and futures. The costs of war are not hypothetical in the military, so the aversion to warfare is more intense than anywhere else.

The best place to answer the question of “How do we build A.I. in a way that prevents unethical and unsafe outcomes?” is in the environment that has the most to lose from unethical and unsafe A.I. That will never be Google, Facebook, or Twitter. Recent events have demonstrated this. The best place to work on ethical A.I. and safe software is with the people for whom the stakes are greater than missed revenue targets.

We should help them.

What’s wrong with A.I.

I wanted to write the War on the Rocks piece because I was sick of listening to people talk about “how much money would be saved” or “how much more accurate the system would be” once A.I. replaced humans in the military. Both of these assumptions have a bad track record and little evidence to support them. Automation (for which A.I. is merely a sexy buzzword) does not make systems cheaper, safer, or more accurate. What it does is push the boundaries of what a system can support in terms of response time and complexity.

Those of us who have played nursemaid to a critical computer system know from experience that the benefits of a faster system, or a system that does more by touching more workflows, are often limited. You might end up with fewer mistakes overall, but you will most definitely end up with more destructive, more unpredictable, and less preventable impacts from the mistakes that do happen.

Our first brush with the complications of using machines to eliminate human error came from automating the work that system administrators do setting up, upgrading, and configuring servers. Software engineers built automated systems that could read plain text files with configuration instructions and reconfigure servers as needed on the fly. Doing so greatly decreased the number of errors triggered by mistyped commands or by forgetting to upgrade one or two servers on a list. Companies that implemented this automated approach decreased mistakes and increased efficiency. Eliminating human error also allowed systems to grow, becoming faster and more complex. But this growth came at a price. In automated systems, the errors that do happen can cascade through multiple subsystems almost instantaneously, often taking down broad swaths of seemingly unrelated operations. In November 2020, such a bug disabled a large chunk of the internet for several hours. System failures caused by unexpected behavior from error-reducing technology trigger catastrophic outages a few times a year at the top technology firms in the world.

“Helping Humans and Computers Fight Together: Military Lessons from Civilian AI” — War on the Rocks
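To make the excerpt above concrete, here is a minimal sketch of the kind of configuration-driven automation it describes: a script reads a plain-text file of desired package versions and applies the same change to every server on a list. The file format, the server inventory, and the apply_to_server helper are all hypothetical inventions for illustration, not the interface of any real tool; production systems such as Ansible or Puppet do this with far more safeguards.

```python
# Hypothetical sketch of configuration-driven server automation.
# The config format, server list, and apply_to_server() are illustrative only.

from dataclasses import dataclass


@dataclass
class ConfigLine:
    package: str
    version: str


def parse_config(text: str) -> list[ConfigLine]:
    """Parse lines like 'nginx=1.18.0' from a plain-text config file."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        package, _, version = line.partition("=")
        entries.append(ConfigLine(package.strip(), version.strip()))
    return entries


def apply_to_server(host: str, entries: list[ConfigLine]) -> None:
    """Placeholder: a real tool would connect to the host and install packages."""
    for entry in entries:
        print(f"[{host}] ensure {entry.package} is at version {entry.version}")


if __name__ == "__main__":
    desired_state = "nginx=1.18.0\nopenssl=1.1.1\n"
    servers = ["web-01.example.com", "web-02.example.com"]
    config = parse_config(desired_state)
    # The same change reaches every server identically: fewer typos and
    # forgotten hosts, but a bad config line now propagates everywhere
    # at machine speed instead of failing on one box at a time.
    for host in servers:
        apply_to_server(host, config)
```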

And the military is already starting to notice the fine print hidden in its billion-dollar A.I. initiatives. Last week The Economist published an article outlining the perfect example of why automation does not make things cheaper:

The problem is that the savings tend to be wiped out because the drones rack up so many flying hours. Each of America’s Global Hawks, a surveillance drone that can conduct day-long sorties, flies an average of almost 1,400 hours annually — the equivalent of two months in the air. The U2 spy plane, a cold-war stalwart still in regular use, does less than half of that. During 2016–17, the last period for which complete figures are available, America’s ISR drones flew six times as many hours as every crewed ISR plane combined. Commanders’ “insatiable demand” for eyes in the sky has “prevented overall reductions in personnel and operating costs”, concludes CSIS. — The Economist

What’s remarkable to me about a piece like this is that we already knew that automation does not decrease the number of jobs. It sometimes changes the types of jobs available, which can have devastating economic impacts, but automation does not decrease the number of people you need to employ in the short term.

And while War on the Rocks wanted to frame my article as the military learning lessons from the private sector, the truth is if we software engineers were really all that smart we would have learned these lessons from the organizational scientists of the 1970s and 1980s who first observed them. My favorite thing to cite lately is a paper from 1983 called “The Ironies of Automation.” With a quick find/replace operation you could publish this paper today as a revolutionary, groundbreaking study on Site Reliability Engineering. The CliffsNotes version? Automating a system to remove human error tends to make the system weaker overall.

The private sector is struggling with the challenge of ethical A.I. because the underlying assumptions that determine how this technology is going to solve problems are wrong.

Automation does not reduce the number of people you need to employ to accomplish a goal.

Having more data does not lead to better decisions.

The impact of mistakes is more important than the number of mistakes. Optimizing for fewer mistakes often leads to more devastating failures.

The engineering challenge that is really compelling to me is: What would A.I. look like if it were grounded in assumptions about decision-making and system design that were actually true?

Why the fix will come from the DoD: Purchasing power

You may be thinking: “Okay, sure. But I don’t see why we can’t still fix these problems in the private sector and have that impact trickle down to the military.”

What few people realize about technology is how much of it is directed by the purchasing power of the Department of Defense. “Right,” software engineers say. “I know about ARPANET, but that was ages ago. We have venture capital now, things don’t work that way anymore.”

Except that’s not true. That canister you talk to when you want to turn things on and off in your house? The technology didn’t exist until the DoD invested $150 million to build a “virtual office assistant” for military personnel. Originally called CALO which is Latin for “soldier’s servant” it was later rebranded as Siri and sold to Apple.

The DoD’s Grand Challenges turned Pittsburgh from a steel town to a center for self-driving vehicle development that Google, Uber, and all the major players in this space are leveraging.

Think the advancements in virtual reality we’ve seen lately come from Kickstarter projects? The DoD has been dumping billions of dollars into that area since the mid-2000s.

Not to mention all the major shifts in fundamentals where the DoD put its finger on the scales, from “readable” programming languages to networking protocols. The internet was built on packet-switched networks because the DoD decided that was the better option. TCP/IP beat the technically superior OSI because the DoD was subsidizing its development so heavily that it was effectively free for the private sector to use.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

“OSI: The Internet That Wasn’t” — IEEE Spectrum

What the DoD decides is important is what the private sector ends up building, not the other way around. One of the few exceptions to this rule is Zero Trust… but only barely. I say barely because Google developed the Zero Trust methodology in response to attacks by the militaries and intelligence agencies of foreign governments. So the DoD didn’t kick off the development of Zero Trust, but only because Google spotted the national security issue and invested in it first.

When you have a budget in the billions of dollars, you get to pick the winners and losers. What the DoD decides to buy shifts the whole private sector in that direction. Or rather, the companies that shift in that direction get millions of dollars from the DoD and the ones that don’t gradually go out of business.

The DoD is going to buy A.I. People who care deeply about A.I. being safe and ethical can’t afford to sit this one out.

It can only be done here

I’ve been interested for a while in reimagining software engineering by incorporating safety science into our processes. Only recently have I come to realize that discussions about “ethical” and “safe” technology in academia and the private sector are largely irrelevant. When the DoD decides what “safety-critical” and “ethical A.I.” mean in this space, the weight of its purchasing power will drown out all other voices.

So it’s worth investing the time and energy to help make sure they make the best decisions possible, both because they want to build technology that will lead to less armed conflict and fewer wars, and because whatever becomes the standard at the DoD will become the standard we all have to live with outside of the DoD.

Author of Kill It with Fire: Manage Aging Computer Systems (and Future-Proof Modern Ones)
