California’s Imperfect but Necessary Attempts to Regulate A.I.
While the state’s new laws on bots and deepfakes have their flaws, they represent a vital first step toward curbing dangerous new technology
Co-authored by Madeline Lamo
There are real risks to regulating artificial intelligence. A.I. is everywhere, making it difficult to identify distinct characteristics that would require different treatment in the eyes of the law. Its outputs often implicate free speech. But one state has nonetheless rushed in. California, through a series of recent legislative efforts, has demonstrated its willingness to take on the challenge of regulating certain specific applications of A.I. These efforts have been perilous and imperfect. But the alternative — forgoing the opportunity to channel a transformative technology of our time — could be worse.
We have written before about California’s recent bot disclosure law. Aimed at preventing the spread of false information online, the 2018 law requires automated political and commercial accounts on social media to clearly disclose that they are bots. The law raises free speech concerns, in part by creating a scaffolding for censorship, and it does not necessarily address the unique harms bots can cause. Because bots largely amplify false information rather than generate new false claims, the harm they cause is really one of scale: many bots retweeting the same topic, for example, can push it into Twitter’s “trending topics.” It is unclear whether requiring individual bot accounts to disclose that they are automated would do anything to remedy this kind of scale-based harm.
The California legislature entered the A.I. fray once again during the 2019 legislative session, this time with a pair of bills aimed at curbing the potential harm caused by “deepfakes.” Deepfakes are A.I.-modified videos, often featuring one person’s likeness digitally superimposed onto the body of another. A high-quality deepfake can convincingly show what appears to be live video footage of something that never happened. The potential for mischief and malicious action is nearly boundless. California’s new laws target two specific categories of deepfakes: political and pornographic.
The political deepfakes law is largely a response to a potential problem. In 2018, a deepfake-style PSA featuring the likeness of former President Barack Obama and the voice of actor-filmmaker Jordan Peele raised awareness of how deepfakes could be used to spread political misinformation by depicting politicians saying or doing things they never said or did. To date, however, no convincing, high-profile political deepfake has caused genuine confusion or disseminated false information. This past May, a doctored video of House Speaker Nancy Pelosi made her appear intoxicated; though it garnered significant online attention, the video was not an A.I.-generated deepfake at all, but simply slowed-down footage.
The political deepfakes law, in other words, is speculative. The pornographic deepfakes law, by contrast, responds to a very real and present problem. A recent study found that 96% of deepfakes are pornographic and that they almost exclusively feature women. Celebrities are frequent subjects, but so too are ordinary women, in a kind of artificially generated revenge porn. Regardless of one’s celebrity status, a pornographic deepfake can cause immense psychological, economic, and social harm, as Danielle Citron and Bobby Chesney have explored.
An initial review of California’s new deepfakes laws suggests the legislature has absorbed some lessons. Like the bot disclosure requirement, the deepfakes laws target specific categories of speech that may, under Supreme Court precedent, be regulated more heavily than speech on other subjects. Notably, the political deepfakes law does not apply to any and all political material. Instead, borrowing heavily from campaign finance law, it creates a cause of action against anyone who disseminates “materially deceptive audio or visual media” of a candidate for public office during the 60 days preceding an election.
Both laws create civil remedies, allowing individuals to pursue claims against deepfake creators who use their likenesses. This means that no government entity is required to police the internet for deepfakes, easing the administrative burden of such laws without sacrificing judicial review. Importantly, the political deepfakes law contains exceptions for the news media, as well as for satire and parody videos.
Further nuance appears in the ways the two deepfakes laws differ from one another. Much like the bot disclosure law, the political deepfakes law contains an exception for videos that bear a clear disclosure stating that they have been manipulated. The pornographic deepfakes law contains no such exemption. This distinction reflects a recognition of the different ways the two categories of videos cause harm. Political deepfakes harm primarily by spreading false information, which a disclaimer can counteract. Pornographic deepfakes, however, cause reputational and emotional harms that no disclaimer can cure.
Importantly, both deepfakes laws contain “sunset provisions,” forcing the legislature to revisit them down the line or let them expire. Civil liberties organizations and other experts nevertheless have concerns, arguing, for example, that the political deepfakes law is ambiguous and could chill protected speech.
It remains to be seen whether the bot disclosure law or the deepfakes laws will accomplish their stated aims. It’s fair, meanwhile, to worry about unintended consequences. But not all surprises are bad, and not all failures are alike. Take, for example, California’s similar intervention in the early days of the commercial web. Fifteen years ago, the legislature required commercial websites to post privacy policies telling consumers what data they collect and what they do with it. Today it is fashionable to deride such notice strategies as ineffective; no one reads privacy policies, which are full of empty promises.
But though privacy policies have proven ineffective at meaningfully informing consumers, they did provide a hook for early Federal Trade Commission enforcement actions over “deceptive” statements about privacy and security. As of this writing, nearly every household-name internet company is under a consent decree with the FTC for privacy or security lapses, and the Commission used these early victories to build up its capacity as a privacy enforcer and to make a powerful case to Congress that further investment is warranted.
California, home to much of the technology that shapes our world today, has begun attempting to regulate A.I. Its first efforts may be imperfect, and no law will be a panacea. But California’s instincts are correct. If, as proponents and detractors alike claim, A.I. is the transformative technology of our time, then one of the things that must transform is our laws and legal institutions. The alternative, an entirely unregulated technological Wild West, is one that fewer and fewer Americans seem to prefer. Fools rush in, but she who hesitates is lost.
Ryan Calo is the Lane Powell and D. Wayne Gittinger Associate Professor at the University of Washington School of Law
Madeline Lamo is the 2019–21 Media Litigation Fellow at the Reporters Committee for Freedom of the Press