Taylor Swift Act takes aim at harmful AI images

By Molly Miller, Missouri News Network
Posted 2/19/24

JEFFERSON CITY — Pornographic or explicit images and political deepfakes created by artificial intelligence have gotten the attention of legislators in Missouri.

Rep. Adam Schwadron, R-St. Charles, introduced the first piece of legislation on Jan. 30 to address pornographic or explicit images created by AI.

“The Taylor Swift Act” aims to combat the harmful effects of artificially produced or digitally altered explicit images by giving victims legal recourse.

According to the Massachusetts Institute of Technology, deepfakes are images that appear real but are entirely fabricated by technology. The images are so complex and realistic that they could only have been made by AI, not by human hands.

The false images often depict political figures or celebrities in compromising positions. The Department of Homeland Security reported in 2021 that 95% of deepfakes depicted pornographic images of women.

“The worst part is that women are disproportionately impacted by these deepfakes,” said Schwadron. “These fake images can be just as crushing, harmful and destructive as the real thing.”

The bill, HB 2573, would provide legal remedies for victims, chiefly the ability to file civil lawsuits against the creators of the images. It would also give prosecutors explicit power to file criminal charges against perpetrators while limiting liability for sites that host the content.

Minnesota and New York have enacted laws with both civil and criminal protections against deepfake pornography, similar to the proposed Missouri law. Hawaii, Texas, Virginia, Georgia, South Dakota and Florida allow criminal prosecution, while California and Illinois permit civil suits.

Schwadron’s bill has not been referred to a committee as of Monday, but another AI-related bill is expected to be voted on Tuesday by a House committee.

Members of the Special Committee on Innovation and Technology last week heard testimony on HB 2628, sponsored by Rep. Ben Baker, R-Neosho, that would prohibit the use of AI to make content about a political figure or party within 90 days of an election without a disclaimer.

Under Section 230 of the Communications Act of 1934, enacted as part of the Communications Decency Act of 1996, hosts of content posted by third parties receive limited liability protections, according to a Department of Justice review. Section 230 generally shields social media companies from lawsuits over content posted by their users or over a platform’s decision to remove that content.

U.S. Sen. Josh Hawley, R-Missouri, has emerged as an outspoken critic of the Section 230 protections for internet companies. In June 2023, he introduced a bill with Sen. Richard Blumenthal, D-Connecticut, to exclude AI-generated content from liability coverage under Section 230.

Last month, Taylor Swift became the latest victim of AI-produced nude images, which quickly went viral. Graphika, a research firm that studies disinformation, told multiple news outlets that the images first appeared on the internet message board 4chan.

4chan is notorious for controversial content. According to Graphika, the fake images of Swift emerged from an online challenge in which users competed, in a sort of game, to create lewd images of female celebrities and public figures.

Users of the platform X, formerly known as Twitter, viewed the Swift deepfakes 27 million times before the company removed the images. Various news outlets reported that the images remained publicly available on different X accounts for 17 to 19 hours before being taken down.

Jared Schroeder, a First Amendment expert and professor at the University of Missouri, said, “It’s not two groups of people, the victims and the government. It’s three groups: The social media firms have incredible power to limit this content.”

Microsoft CEO Satya Nadella responded to the incident during an exclusive interview with NBC News about the risks of AI technology, saying he believes it is the platforms’ responsibility to design safeguards that limit the negative effects of AI.

In January, the European Union and the United Kingdom’s Competition and Markets Authority both said they would review Microsoft’s relationship with OpenAI, the maker of ChatGPT. Since 2019, Microsoft has reportedly invested approximately $13 billion in OpenAI.