From car insurance quotes to which posts you see on social media, our online lives are guided by invisible, inscrutable algorithms. They help private companies and governments make decisions — or automate them altogether — using massive amounts of data. But despite how crucial they are to everyday life, most people don’t understand how algorithms use their data to make decisions, which means serious problems can go undetected. (Take, for example, research last year that showed anti-Black bias in a widely used algorithm that helps hospitals identify patients in need of extra medical care.)
The New Zealand government has a plan to address this problem with what officials are calling the world’s first algorithm charter: a set of rules and principles for government agencies to follow when implementing algorithms, designed to let the public peek under the hood. By leading the way on responsible algorithm oversight, New Zealand hopes to serve as a model for other countries by demonstrating the value of transparency about how algorithms affect daily life.
Agencies that sign the charter make a number of commitments. For instance, they agree to publicly disclose in “plain English” when and how algorithms are used, ensure their algorithms do not perpetuate bias, and allow for a peer review to avoid “unintended consequences.”
The charter also requires that the Te Ao Māori Indigenous perspective be included in both the development and the use of algorithms, and asks agencies to provide a point of contact through which members of the public can inquire about algorithms and challenge any decision an algorithm makes.