Let’s Not Regulate A.I. Out of Existence

A.I. abuse is real, but so is fear-mongering


Poor Artificial Intelligence, technology’s scapegoat.

It’s played the villain in innumerable films and is the boogeyman waiting to pluck your face from a crowd, take your job, put your face on someone else’s body, and carelessly launch a nuclear strike.

These concerns are so concrete that A.I. regulation is now racing far ahead of more general, and possibly more necessary, tech industry regulation (data, competition, content control, and moderation).

Earlier this week, the European Union dropped a 108-page A.I. policy document proposing sweeping regulations that touch on virtually every aspect of A.I. development. As Ben Muller, a senior A.I. policy analyst at the Centre for Data Innovation, noted in a lengthy Twitter thread, adopting these policies would place an enormous burden on companies (especially small ones) trying to develop fresh A.I. while complying with a dizzying array of new regulations.

While these regulations could take years to become law, tech companies around the world are probably looking at this policy document with alarm. Like the General Data Protection Regulation (GDPR) before it, this EU-grown regulation will have far-reaching consequences. Companies that develop technology, including A.I., do so with an eye toward the world. Regulations that start in one place typically migrate. Europe’s strict A.I. rules could change the A.I. landscape in the U.S., as well.

There are reasons to cheer the EU proposal. It means people are thinking about potential A.I. abuses in the areas of law enforcement and gender and race bias.

However, the proposal applies the “high-risk” label generously to most A.I. systems, and it demands a level of perfection that seems almost impossible to achieve.

The proposal states: “High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users.” But humans still program and train these systems. Even with all the necessary checks and balances in place, mistakes will be made.

The proposal leans heavily on the need for documentation and record-keeping that covers abilities, limitations, algorithms, data, training, testing, and validation. My guess is that only the A.I. developers themselves, and not EU officials, will be able to evaluate any of this effectively.

Those officials, though, will have the power to impose fines on “Union institutions, agencies and bodies falling within the scope of this Regulation.”

Much of the proposal’s language follows this pattern, and one has to wonder whether small A.I. developers, upon seeing it for the first time, might simply throw up their hands and walk away rather than deal with onerous regulations and potentially huge fines for making mistakes.

What I think is lost in this discussion is the obvious good A.I. brings to society today and its vast potential in the coming years.

A.I. is being used to analyze vast amounts of space data and is having an enormous impact on health care. A.I.-powered image and scan analysis is, for example, helping doctors identify breast and colon cancer. It’s also showing potential in vaccine creation. I guarantee that A.I. will someday save lives.

It’s this kind of A.I.-driven data analysis that gets shoved aside by news of an A.I. beating a world-champion Go player, or of the world’s best-known entrepreneur raising alarms about a future where “A.I. is vastly smarter than humans.”

That kind of fear-mongering leads consumers, most of whom can’t tell the difference between an A.I. that scans a crowd of 10,000 faces for one suspect and one that invents recipes from pleasing ingredient combinations, to mistrust all A.I. and to demand the kind of stifling regulation the EU has now produced.

Even if you still think the negatives outweigh the benefits, we’ll arguably need better and bigger A.I. to manage and sift through the mountains of data we produce every single day. To deny A.I.’s role in this is like saying we don’t need garbage collection services and that our debris can just pile up on street corners indefinitely.

Want more of me in your life? More of my tech insights and musings to brighten your day? Sign up for my newsletter and I’ll send you a weekly update on the tech (and other stuff) that matters to me (and maybe you, too).

Tech expert, journalist, social media commentator, amateur cartoonist and robotics fan.
