Fix A.I. in the Private Sector by Fixing It in the Military First

Silicon Valley’s ethics-via-opt-out isn’t going to work anymore

Marianne Bellotti
Published in OneZero
Mar 29, 2021 · 8 min read


Image created by Freepik

Last week I wrote a piece for the popular military theory blog War on the Rocks about what Silicon Valley has learned about the impact of automation on complex systems, and how those lessons should shape the military’s goals.

Discussing the military use case for technology with other technologists is often awkward. Many believe that it is possible to imbue new technologies like A.I. with ethical and safety principles purely from the comfortable environments of academic ivory towers or plush FAANG collaborative spaces. One need not go where the technology might have its deadliest and most critical impacts. No need to risk your reputation by association.

I’ve come to believe that this attitude is naive. Unsafe or unethical technology is a curiosity in academia. It’s bad, but also good, because there are research dollars and accolades in documenting its edges. Unsafe or unethical technology is a PR problem for FAANG. It’s bad, but also good, because it typically means more profits. By contrast, unsafe or unethical technology in the military is a core mission failure. It means more dead soldiers in the short term and social unrest that creates more dead soldiers in the long term.


Marianne Bellotti

Author of Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones)