Last week a bill I have been working on for the last eight months was introduced in the House of Representatives. The Deepfakes Accountability Act seeks to protect the lives of everyday Americans by giving us control of our online image. But what are deepfakes?
Deepfake video is the latest advancement in visual manipulation technologies, but it is not as new as it seems — computer scientists note that the first mass-market visual modification technology was actually Photoshop, which was originally released in 1990. What makes deepfakes different is that they are produced using open source A.I. software, combining facial mapping and audio augmentation to create an entirely new piece of media. The videos produced in this way show people in places they have never been, saying things they have never said.
The emergence of deepfakes was brought to public attention last year by the Hollywood actress Scarlett Johansson, when an online deepfake appearing to show her performing explicit sexual acts was released. Her team worked tirelessly to refute the video and encouraged her to make a victim impact statement to clear her name, pointing to similar attacks on the actresses Natalie Portman and Emma Thompson. I was happy to see this show of strength, but at the same time I could not help but wonder: What if this happened to a poor Black woman like me?
I provide nonpartisan advice on artificial intelligence on Capitol Hill. My job is to help policymakers ensure that technological progress is in the public interest. If someone were to make deepfake pornographic content of me, it would undermine public trust and derail my career. I do not have the resources to salvage my reputation, to place my attack in a larger social context, or to recoup the lost income. This is why I need the government to regulate the spread of deepfakes, and so do the rest of us.
This call was taken up by Congresswoman Yvette Clarke, vice chair of the House Energy and Commerce Committee and chair of the Tech Accountability Caucus. Clarke's team has been working on this issue for over six months and was inspired to take immediate action last month after Facebook refused to remove a clearly doctored video of House Speaker Nancy Pelosi that made her appear to be intoxicated. I found the video deeply disturbing because alcoholism is widely seen as a medical condition. Had Pelosi actually been seeking treatment for this issue, the public disclosure of her condition could be viewed as what the legal scholar Danielle Citron calls the "unraveling of privacy." Such technology, if unregulated, could usher in a time when the most private parts of our lives could be outed through the release of manipulated online content, or, even worse, invented whole cloth, as was the case with Speaker Pelosi.
I am honored to lead an all-female team of computer scientists, researchers, legal scholars, and disinformation experts who explained the national security threats associated with deepfakes to legislators in Congress. We are public sector technologists in a growing field that seeks to provide a counterweight to the profit-driven tech companies, which are not incentivized to remove lucrative and highly clickable content. There needs to be another set of actors weighing in on these questions who have an established commitment to the public good. This is why we need to put pressure on Congress to act.
In order to understand how deepfake technologies entered our communication systems, we have to look at their history. Safiya Noble, a professor at UCLA and author of the book Algorithms of Oppression, helped us with this endeavor. She has been studying the online treatment of Black women and girls for the last 10 years and has seen what I call the "disinformation migration." This is when technologies that are meant to mislead the general public are first weaponized against women and girls to test their capabilities. Once the technologies have been perfected, they are then used against a larger segment of the population. This theory may explain why deepfakes were first deployed publicly against female celebrities before being used, most recently, against Facebook CEO Mark Zuckerberg. The evidence suggests the use of deepfakes will not stop there.
We are 17 months away from the 2020 presidential election, and the rise of deepfake videos should concern us all. The indictment of 13 members of Russia's Internet Research Agency (IRA) by Special Counsel Robert Mueller for online African-American voter suppression highlights how American anti-Blackness was weaponized against the country as a whole. Mueller's team showed how Russian operatives created 30 pro-Black Facebook and Instagram sites with names like "Blacktivist" and "Woke Blacks," which, thanks to advertising recommendation algorithms, reached 1.2 million people.
During the deepfakes briefing process, the Russia expert Nina Jankowicz debunked the idea that disinformation campaigns are built solely on lies. She explained that Russian disinformation campaigns build trust with their target audiences by introducing verifiable facts into public debate. In the case of the IRA, operatives bought Facebook ads highlighting Hillary Clinton's support of the 1994 crime bill, linking this legislation to the criminalization of the African-American community and reinforcing the argument with clips of her describing young Black boys as "super predators." That much was true, but they also wove in inspirational quotes and funny memes, and provided a space for African-Americans frustrated with their presidential options to vent online. Once messages expressing dissatisfaction with both 2016 candidates began accumulating, the IRA began releasing ads suggesting that members of these pro-Black communities should not vote for Hillary Clinton because she might still hold those views. That part wasn't true, but it fit into the narrative laid out by this otherwise trustworthy community.
Now imagine if nefarious actors could create a deepfake video of a presidential candidate saying the N-word or appearing in blackface. That could unfairly end the candidate's run. That's why we need to stop the spread of deepfake videos before they are used to interfere with the 2020 election.
Repealing Section 230 of the Communications Decency Act of 1996, which absolves platform companies of responsibility for user-generated content, would raise First Amendment questions. So the Deepfakes Accountability Act's legal expert Mary Anne Franks, president of the Cyber Civil Rights Initiative and a professor at the University of Miami School of Law, argued for an amendment to the federal Identity Theft and Assumption Deterrence Act of 1998, which would frame social media users as the consumers of content and therefore invoke consumer protection rights.
This new approach places the distribution of deepfake content alongside the misappropriation of information such as names, addresses, or Social Security numbers, which the act already views as inflicting financial, reputational, and psychological injury. Amending the identity theft law to address deepfakes has the potential to serve as a powerful deterrent against the creation and distribution of these malicious images.
There is bipartisan support for the regulation of deepfakes because Congress sees this as a way to maintain the integrity of our democracy. The role of platform companies is to increase shareholder value, not weigh public concerns. The job of the government is to protect the American people. Please call your congressperson and ask them to support the legislation. Congress has to act.
Mutale Nkonde is an A.I. policy advisor and incoming fellow at the Berkman Klein Center for Internet & Society at Harvard University.