A.I. Ethics Boards Should Be Based on Human Rights

Tech companies should ensure their ethics boards are guided by universal human rights and resist bad-faith arguments about diversity and free speech

Evan Selinger
OneZero


Credit: Westend61/Getty Images

Co-authored with Brenda K. Leong

Who should be on the ethics board of a tech company that’s in the business of artificial intelligence (A.I.)? Given the attention to the devastating failure of Google’s proposed Advanced Technology External Advisory Council (ATEAC) earlier this year, which was announced and then canceled within a week, it’s crucial to get to the bottom of this question. Google, for one, admitted it’s “going back to the drawing board.”

Tech companies are realizing that artificial intelligence changes power dynamics, and that, as providers of A.I. and machine learning systems, they should proactively consider the ethical impacts of their inventions. That’s why they’re publishing vision documents like “Principles for A.I.” when they haven’t done anything comparable for previous technologies. (Google never published a “Principles for Web Search.”) But which version of ethics should they choose? Ethical norms, principles, and judgments differ across time, place, and culture, and may be irreconcilable even within local communities. There’s so much disagreement…
