Let’s Not Regulate A.I. Out of Existence
A.I. abuse is real, but so is fear-mongering
Poor Artificial Intelligence, technology’s scapegoat.
These concerns are so concrete that A.I. regulation is now racing far ahead of broader, and possibly more necessary, tech industry regulation (data, competition, content control, and moderation).
Earlier this week, the European Union dropped a 108-page A.I. policy document proposing sweeping regulations that attempt to touch on virtually every aspect of A.I. development. As Ben Muller, senior analyst for A.I. policy at the Centre for Data Innovation, noted in his lengthy Twitter thread, adopting these policies would place an enormous burden on companies (especially small ones) trying to develop new A.I. while complying with a dizzying array of regulations.
While these regulations could take years to become law, tech companies around the world are likely eyeing this policy document with alarm. Like the General Data Protection Regulation (GDPR) before it, this EU-grown regulation will have far-reaching consequences. Companies that develop technology, including A.I., do so with an eye toward the global market, and regulations that start in one place typically migrate. Europe’s strict A.I. rules could change the A.I. landscape in the U.S. as well.
There are reasons to cheer the EU proposal. It means people are thinking about potential A.I. abuses in areas such as law enforcement and gender and race bias.
However, the proposal applies the “high-risk” label generously to most A.I. systems, and it demands a level of perfection that seems almost impossible to achieve:
“High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users.”

Humans still program and train these systems. Even with all the necessary checks and…