AI-generated images of Taylor Swift that were widely shared on social media in late January likely originated as part of a recurring challenge on one of the internet’s most notorious message boards, according to a new report.
Graphika, a research firm that studies misinformation, spotted the images in a community on 4chan, a message board known for sharing hate speech, conspiracy theories and, increasingly, racist and offensive content generated using AI.
The people on 4chan who created the images of the singer did so in a kind of game, the researchers said — a test to see if they could create lewd (and sometimes violent) images of famous female figures.
The images of Ms. Swift spread to other platforms and were viewed millions of times. Fans rallied to Ms. Swift’s defense, and lawmakers called for stronger protections against nonconsensual AI-generated images.
Graphika found a thread on 4chan that encouraged people to try to bypass the safeguards built into image-generation tools such as OpenAI’s DALL-E, Microsoft Designer and Bing Image Creator. Users were instructed to share “tips and tricks to find new ways to bypass filters” and told: “Good luck, be creative.”
Sharing unsavory content as part of a game allows people to feel connected to a larger community, and they are motivated by the cachet they earn for participating, experts say. Ahead of the 2022 midterm elections, groups on platforms such as Telegram, WhatsApp and Truth Social went hunting for voter fraud, earning points or honorary titles for producing purported evidence of it. (Genuine proof of fraud is exceedingly rare.)
In the 4chan thread that led to the fake images of Ms. Swift, several users praised the results — “beautiful birthplace,” wrote one — and asked others to share the prompt language used to create the images. One user complained that a prompt produced an image of a celebrity in a swimsuit rather than nude.
The rules posted by 4chan that apply site-wide do not specifically prohibit sexually explicit AI-generated images of real adults.
“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” Cristina López G., a senior analyst at Graphika, said in a statement. “It is important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”
Ms. Swift is “far from the only victim,” Ms. López G. said. In the 4chan community that manipulated her likeness, many actresses, singers and politicians appeared even more frequently than Ms. Swift.
OpenAI said in a statement that the explicit images of Ms. Swift were not created using its tools, noting that it filters out the most explicit content when training the DALL-E model. The company also said it uses other safeguards, such as rejecting requests that ask for a public figure by name or seek explicit content.
Microsoft said it was “continuing to investigate these images” and added that it “has strengthened our existing security systems to further prevent our services from being abused to create images like these.” The company prohibits users from using its tools to create adult or intimate content without consent and warns repeat offenders that they may be banned.
Software-generated fake pornography has been a scourge since at least 2017, affecting unwitting celebrities, government figures, Twitch streamers, college students and others. Patchy regulation leaves few victims with legal recourse. Even fewer have a dedicated fan base capable of drowning out the fake images with coordinated “Protect Taylor Swift” posts.
After the fake images of Ms. Swift went viral, Karine Jean-Pierre, the White House press secretary, called the situation “alarming” and said social media companies’ lax enforcement of their own rules disproportionately affected women and girls. She said the Justice Department had recently funded the first national helpline for victims of image-based sexual abuse, which the department described as meeting a “growing need for services” related to the distribution of intimate images without consent. SAG-AFTRA, the union that represents tens of thousands of actors, called the fake images of Ms. Swift and others a “theft of their privacy and their right to autonomy.”
Artificially generated versions of Ms. Swift have also been used to promote scams involving Le Creuset cookware. Artificial intelligence was used to impersonate President Biden’s voice in robocalls discouraging voters from participating in the New Hampshire primary. Technology experts say that as AI tools become more accessible and easier to use, audio spoofs and videos with lifelike avatars could be created in mere minutes.
Researchers said the first sexualized AI images of Ms. Swift in the 4chan thread appeared on Jan. 6, 11 days before they reportedly appeared on Telegram and 12 days before they appeared on X. 404 Media reported on Jan. 25 that the viral images of Ms. Swift had jumped to mainstream social media platforms from 4chan and a Telegram group dedicated to abusive images of women. The Daily Mail, a British tabloid, reported that week that a website known for sharing sexualized images of celebrities had posted the Swift images on Jan. 15.
For several days, X blocked searches for Taylor Swift “with an abundance of caution so we could make sure that we were cleaning up and removing all the images,” said Joe Benarroch, the company’s head of business operations.