AI technology’s recent integration into daily life has brought numerous benefits and had a sizable impact on society, from data analysis to education. However, this relatively new and uncharted territory has also produced less desirable developments that students must navigate.
Recently, AI-generated images of Taylor Swift spread rapidly on social media, nonconsensually depicting the singer-songwriter in fake, sexually explicit content. The original social media post was viewed 47 million times, and the images traveled across various accounts and platforms before being removed.
Swift’s team is reportedly considering legal action, and prominent voices, including the CEO of Microsoft and the White House press secretary, are calling for legislation to prevent images like these from circulating.
As part of a social media-savvy generation, young people may still come across these images even as platforms and AI developers attempt to curb their spread. To protect women and create a safer online environment, students should speak out against, report and refuse to purposefully consume nonconsensual sexual AI imagery.
The AI-generated images of Swift are often called deepfakes, which are videos, photos or audio that seem real but have been manipulated with AI. Technology can replace faces, manipulate facial expressions and synthesize speech, depicting someone appearing to say or do something they never did, according to the United States Government Accountability Office.
It’s easier than ever to create and spread realistic deepfake content due to the recent development of easily accessible AI generators. While this content can sometimes be harmless or entertaining, it is often used for malicious purposes, like influencing politics or producing pornographic material that disproportionately victimizes women.
Ninety-eight percent of deepfake content online is pornographic and nonconsensual, and 99 percent of that content depicts women, according to an August 2023 study on deepfakes by Home Security Heroes, an online security company.
Shael Norris, founding executive director of SafeBAE, a student-led national organization working to prevent sexual violence among teens, believes explicit AI content is a serious form of sexual violence that exists to intimidate and silence women.
“As with any form of abuse, it’s the silencing and shaming victims in any given situation,” Norris said. “I think, when we’re looking at the proliferation of this happening, predominantly against female-identified people, it’s silencing and shaming of women so that they stay in their place and they stay down.”
Real or fake sexually explicit images of someone shared without their consent can lead to social anxiety and isolation, and harm their relationships with partners, friends and family, according to the Boston University School of Public Health.
Although many people are now pushing for stronger regulations on AI technology, there is currently no federal law restricting pornographic AI content in the United States.
AI-generated porn is complex because victims are violated each time an image is posted, shared or viewed. With billions of users across various platforms, images can spread rapidly and be difficult to take down.
Vincent Kamani believes AI needs to be regulated, but that people in general also need to be more conscientious about circulating both real and fake inappropriate imagery.
“I don’t think people feel the need to respect privacy as much as we should, and so I think that’s the political side of it,” said Kamani, a sophomore anthropology major. “Also, I think there’s the social side of it, which is respecting people’s bodily autonomy and privacy.”
Even in the absence of legal restrictions, students should be mindful of the harm AI content causes its subjects. Engaging with this material violates the privacy of nonconsenting parties; students should avoid or report these images instead.
These images most often target celebrities and influencers, but because AI generators are now so easily accessible, the problem is beginning to reach ordinary young people as well.
Legislators and social media platforms have a responsibility to implement stricter content moderation, but students must also advocate against online harassment and nonconsensual pornographic imagery.
Isa Walker, a junior public relations major, hopes that Taylor Swift becoming the latest target of this content will draw more people into the conversation surrounding AI pornography.
“I do feel like it will impact other AI-generated content because I think before everybody was going crazy over Taylor Swift, it seemed like AI was still evolving and still something new,” Walker said. “I just think we’re taking it to a new level, and it’s not going to stop.”
In the digital age, each individual plays a vital role in shaping the online world and the content that receives attention there. Deepfakes have emerged as a serious danger to women, but refusing to engage with or reshare harmful content is a simple step students can take to cultivate a safer online environment and prevent the normalization of nonconsensual content.
Even if legislation lags, an individual’s commitment not to perpetuate sexual AI images can send a much-needed message to those who create and spread this content that it will not be consumed or tolerated.
“At the very least, we need to speak out about how we feel about them,” Norris said. “Don’t resend them, don’t reshare, don’t talk about it and give it any more bandwidth other than telling people what you think, which is, ‘This is disgusting.’”
Ben White contributed reporting.