From car insurance quotes to which posts you see on social media, our online lives are guided by invisible, inscrutable algorithms. They help private companies and governments make decisions — or automate them altogether — using massive amounts of data. But despite how crucial they are to everyday life, most people don’t understand how algorithms use their data to make decisions, which means serious problems can go undetected. (Take, for example, research last year that showed anti-Black bias in a widely used algorithm that helps hospitals identify patients in need of extra medical care.)
The New Zealand government has a plan to address this problem with what officials are calling the world’s first algorithm charter: a set of rules and principles for government agencies to follow when implementing algorithms that allow people to peek under the hood. By leading the way on responsible algorithm oversight, New Zealand hopes to demonstrate the value of transparency about how algorithms affect daily life and to set a model for other countries.
Agencies that sign the charter make a number of commitments. For instance, they agree to publicly disclose in “plain English” when and how algorithms are used, ensure their algorithms do not perpetuate bias, and allow for a peer review to avoid “unintended consequences.”
The charter also requires that the Te Ao Māori (Māori worldview) perspective be embedded in both the development and the use of algorithms, and asks agencies to provide a point of contact through which members of the public can ask about algorithms and challenge any decision an algorithm makes.
Given that algorithms are used across all facets of government — from calculating unemployment payments to determining how police patrol a neighborhood and whom they profile there — providing insight into how those algorithms truly work will help hold governments accountable for keeping them fair.
The charter has a long list of signatories so far, including the Ministry of Education, the Ministry for the Environment, Statistics New Zealand, the New Zealand Defence Force, and many more. Notably missing from the list are the country’s police force and spy agencies like the Government Communications Security Bureau.
Though these issues can sound technical, algorithms in government can have huge impacts on public life. The New York Times reported in early 2020 that algorithms are used in the United States to “set police patrols, prison sentences and probation rules,” and in the Netherlands, “an algorithm flagged welfare fraud risks.”
There is rarely a way to see what data was used to reach these decisions — whether the algorithm considered gender, zip code, age, or any number of other factors — let alone whether the data used to train the algorithm was fair in the first place. This can lead to “bias by proxy,” where a variable is used to determine an outcome it has no genuine connection to; for example, measuring a teacher’s effectiveness by students’ scores on standardized tests when other systemic factors might be at work.
A ProPublica investigation of an algorithm used to generate risk scores for people arrested by police found that this kind of bias is commonplace. Not only was the formula likely to “falsely flag Black defendants as future criminals,” but the investigation also found that “white defendants were mislabeled as low risk more often than black defendants.”
Biased algorithms are a problem in New Zealand as well. The Guardian reported that one of the charter’s signatories, the country’s Accident Compensation Corporation, “was criticised in 2017 for using algorithms to detect fraud among those on its books.” Similar concerns have been raised in the past about the corrections agency and the immigration authority, both of which have also signed on to the charter.
Requiring algorithms to be documented in plain English could mitigate their impact on the people directly affected by letting them verify whether they were treated fairly: anyone could read how a computer reached a conclusion about them, and an official channel would exist to question a decision that appeared unfair.
Granted, there have been problems with this kind of policy in the past. New York City enacted an “algorithmic accountability” bill in 2018 that was intended to bring transparency to various automated systems used by the city government. Two years later, CityLab reported that bureaucratic roadblocks had prevented even the most basic transparency — a list of the automated systems the city uses — from reaching the task force charged with implementing the policy.
Still, if implemented correctly, New Zealand’s charter could help citizens build better trust in how the government uses their data and guides their lives. A notable example of how a lack of such trust can sink a project is Alphabet’s failure to get its smart-city subsidiary, Sidewalk Labs, off the ground in Toronto.
The project, a public-private partnership with the city of Toronto, struggled to explain its use of the data pivotal to its “smart neighborhood” plans. It fell apart in 2020 as residents questioned whether “Google’s algorithms would take too much control over city planning,” according to the Wall Street Journal. Perhaps if the company had been required to be transparent about how that data would be used, citizens would have felt they could trust Sidewalk Labs, rather than pushing to kill the project.
Agencies that sign up to New Zealand’s Algorithm Charter must use a risk rating to determine how the charter applies: they assess both the likelihood of bias and the severity of its impact, then use that rating to document the risks of an algorithm’s use in daily life. While the charter currently includes no penalties or enforcement mechanisms, a review process after a year will investigate whether signatory agencies are actually applying it.
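The likelihood-times-impact assessment described above is a standard risk-matrix exercise, which can be sketched in a few lines of code. This is a hypothetical illustration only — the scales, weights, and thresholds below are assumptions for the sake of the example, not the charter’s actual methodology:

```python
# Illustrative risk matrix: combine the likelihood of bias with the
# severity of its impact to produce an overall risk rating.
# The scales and thresholds here are invented for illustration.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_rating(likelihood: str, impact: str) -> str:
    """Return a qualitative rating from a likelihood/impact pair."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "high"    # e.g. full charter commitments would apply
    if score >= 3:
        return "medium"
    return "low"

# An algorithm likely to produce bias with severe consequences
# (say, one used in sentencing) rates far higher than a low-stakes one.
print(risk_rating("likely", "severe"))   # high
print(risk_rating("rare", "minor"))      # low
```

The point of such a matrix is proportionality: the higher the rating, the more documentation and oversight an agency would owe the public for that algorithm.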
As a New Zealand citizen, I know I’d value being able to peek behind the scenes and verify that the government is treating me fairly when it uses algorithms, and to see how it leverages the data it has collected about me to reach a conclusion. Before the charter, there was no way to see how a decision about me was made or what data was used; the new rules give me a sense of trust, because the government must show its inner workings.
New Zealand is still figuring out how to implement the new guidance, but in the process it is laying the groundwork for other countries to follow suit. Throwing the curtain open on algorithms in government will reveal inequity, and it will give a voice back to citizens around the world, who are increasingly forced to trust their lives to the decisions of computers they can’t see.