There’s No Such Thing As ‘Ethical A.I.’

Technologists believe the ethical challenges of A.I. can be solved with code, but the challenges are far more complex

Tom Chatfield
OneZero
5 min read · Jan 16, 2020

Image: Apisit Sorin / EyeEm / Getty Images

Artificial intelligence should treat all people fairly, empower everyone, perform reliably and safely, be understandable, be secure and respect privacy, and have algorithmic accountability. It should be aligned with existing human values, be explainable, be fair, and respect user data rights. It should be used for socially beneficial purposes, and always remain under meaningful human control. Got that? Good.

These are some of the high-level headings under which Microsoft, IBM, and Google-owned DeepMind, respectively, set out their ethical principles for the development and deployment of A.I. They are also, pretty much by definition, A Good Thing. Anything that insists upon technology’s weighty real-world repercussions, and its creators’ responsibilities towards them, is surely welcome in an age when automated systems are implicated in every facet of human existence.

And yet, when it comes to the ways in which A.I. codes of ethics are discussed, a troubling tendency is at work even as the world wakes up to the field’s significance. This is the belief that A.I. codes are recipes for automating ethics itself; and that once a broad…



Written by Tom Chatfield

Author, tech philosopher. Critical thinking textbooks, tech thrillers, explorations of what it means to use tech well. http://tomchatfield.net