On March 26, Google announced the formation of an external advisory group to help the company navigate complex questions around the ethical and responsible development of new technologies, including artificial intelligence. By April 4, however, the council had been disbanded, and Google acknowledged that the company was “going back to the drawing board.”
Ironically, Google’s new group of ethics advisors fell apart because of ethical challenges. But beyond underlining just how fragile the current state of technology ethics is, the incident attests to a much larger challenge tech companies are facing: How can a company ensure that the products it develops — especially A.I. — are as good for society as they are for its bottom line?
Google’s advisory council was established to help the company implement its A.I. principles — an “ethical charter to guide the responsible development and use of AI in our research and products.”
Launched last June, the principles articulate ideals and aspirations that few would dispute, including developing socially beneficial technologies, avoiding unfair bias, and ensuring safety. They mirror similar efforts from companies like Microsoft to develop an ethical foundation for A.I. development, and they reflect frameworks such as the Institute of Electrical and Electronics Engineers’ (IEEE) guidelines on ethically aligned design. At a time of legitimate and growing concern over the potentially harmful personal and social impacts of A.I. and other technologies, these principles are laudable.
Ethics are essential for guiding how powerful new technologies are developed and used.
And yet, as Google found out the hard way, framing socially responsible and beneficial development in terms of ethics is far from easy.
Part of the issue Google and other companies face is that while ethics involve enforcing social norms around what is considered right and appropriate versus what is wrong and inappropriate, ethics on their…