A.I. Ethics Boards Should Be Based on Human Rights
Tech companies should ensure their ethics boards are guided by universal human rights and resist bad faith arguments about diversity and free speech

Co-authored with Brenda K Leong
Who should be on the ethics board of a tech company that’s in the business of artificial intelligence (A.I.)? Given the attention to the devastating failure of Google’s proposed Advanced Technology External Advisory Council (ATEAC) earlier this year, which was announced and then canceled within a week, it’s crucial to get to the bottom of this question. Google, for one, admitted it’s “going back to the drawing board.”
Tech companies are realizing that artificial intelligence changes power dynamics and that, as providers of A.I. and machine learning systems, they should proactively consider the ethical impacts of their inventions. That’s why they’re publishing vision documents like “Principles for A.I.” when they haven’t done anything comparable for previous technologies. (Google never published a “Principles for Web Search.”) But what version of ethics should they choose? Ethical norms, principles, and judgments differ across times, places, and cultures, and may be irreconcilable even within local communities. There’s so much disagreement that red lines can’t easily be drawn even around truly alarming A.I. applications, like lethal autonomous weapons and government scoring systems such as the one China is experimenting with.
Further complications arise because businesses, unlike individuals or governments, are accountable to shareholders. Fulfilling their fiduciary obligations can mean prioritizing growth, emphasizing profit, and working with international clients whose political allegiances vary along the democratic-authoritarian continuum.
This has led, understandably, to skepticism about the sincerity of corporate ethics. Whenever tech companies talk about ethics, critics worry that it’s a strategy for avoiding stronger government regulation and gaining goodwill, consisting of empty slogans followed by minimal legal compliance. Hence, when tech companies establish external A.I. ethics boards, they’ll probably be viewed as self-serving, “ethics washing” facades.