American singer Taylor Swift has once again fallen victim to AI deepfakes. Recently, a flood of AI-generated explicit and gory fake photos of her circulated across multiple social media platforms, garnering millions of views, causing an uproar online, and drawing the attention of the White House.
Taylor’s Team May Take Legal Action Against Related Websites
On January 26, White House press secretary Karine Jean-Pierre responded to the incident at a press briefing, stating, “We are alarmed by the circulation of these images. While social media companies make their own decisions about content management, we believe they can play an important role in enforcing their own rules to prevent the spread of misinformation and of non-consensual intimate images of real people.” She emphasized in particular that lax enforcement falls hardest on women, who are the primary targets of online harassment.
It is reported that these deepfake photos of Taylor were generated by one or more individuals using AI and uploaded to a shady website filled with explicit images of celebrities. The fakes quickly spread to mainstream social media, sparking strong indignation among Taylor’s tens of millions of fans. Fans rallied around the hashtag “ProtectTaylor,” posting comments such as “This is disgusting, it’s illegal,” “Stop spreading AI-generated images of Swift; she’s a person with feelings, she will be hurt,” and “Respect Taylor Swift, respect all women.”
Some accounts involved in spreading the content have since been banned, and platforms are working to remove the circulating images. Loopholes remain, however: some individuals have set up their own websites or sold the deepfake photos through anonymous groups. In fact, this is not the first time Taylor has been a victim of AI deepfakes. Not long ago, someone used AI to mimic her voice and likeness in promotional material, leading consumers to believe she had endorsed certain products. Reportedly, Taylor’s team is considering legal action over the spread of the false photos, and websites disseminating such content may face legal consequences.
Public Concern Grows Over AI Deepfake Technology, Prompting the Need for Legislation
While fans express deep anger over Taylor’s ordeal, other netizens voice strong anxiety about the development of AI deepfake technology. Comments include: “Taylor is a highly influential celebrity; what happens when ordinary people fall victim to deepfake explicit photos?” “AI is developing too quickly and too terrifyingly. I can’t imagine if it were me; the damage would be irreversible.” “Support AI legislation; the potential for abuse is too great.” “This really does affect our lives; AI-generated images and videos mimic human facial expressions and voices so closely that ordinary people can’t tell them apart.”
Since 2023, several AI deepfake scams have occurred domestically, and their high success rate has made them difficult for people to guard against. There have also been malicious incidents in which AI was used to fabricate rumors and spread harmful information.
Facing the lifelike output of AI deepfake technology, experts, scholars, and relevant businesses at home and abroad have explored various countermeasures. For example, in December of last year, an invisible image watermark called “mist” was open-sourced online. It is designed to resist AI models’ training on and imitation of images, significantly disrupting the models and degrading the images they generate. Some developers also suggest that AI generation platforms and companies build invisible watermarking into their software so that AI-generated content can be quickly identified and traced. Social media platforms, for their part, should strengthen their technical ability to flag suspected AI-generated images, text, and video.
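To make the watermarking idea concrete, here is a minimal toy sketch of an invisible watermark: hiding an ownership tag in the least-significant bits of pixel data, so the image looks unchanged to the eye but carries a recoverable marker. This is an illustration of the general concept only, not the “mist” tool mentioned above (which works by adversarially perturbing images); all function names and details here are assumptions for the example.

```python
# Toy least-significant-bit (LSB) invisible watermark: one message bit
# hidden in the lowest bit of each pixel byte. Illustrative only.

def embed_watermark(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the LSBs of `pixels`, one bit per pixel byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the message")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Read back `length` bytes hidden by embed_watermark."""
    msg = bytearray()
    for i in range(length):
        byte = 0
        for bit_pos in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_pos] & 1)
        msg.append(byte)
    return bytes(msg)

# Example with stand-in 8-bit grayscale pixel data.
image = bytearray(range(256))
tagged = embed_watermark(image, b"owner:TS")
print(extract_watermark(tagged, 8))                     # b'owner:TS'
print(max(abs(a - b) for a, b in zip(image, tagged)))   # 1
```

Because each pixel value changes by at most 1, the watermark is invisible to a viewer; real schemes like the one described above must additionally survive compression and resist deliberate removal, which this toy does not attempt.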
Ultimately, however, these technical measures to prevent and identify AI content are only stopgaps, offering limited deterrence against the widespread use of deepfake technology. One root cause of its proliferation is profitability: those who manufacture and spread deepfake images and videos are driven not merely by curiosity but by the prospect of substantial returns. Even where national laws declare AI deepfakes illegal, the law alone cannot stop bad actors from abusing the technology. Control over AI must move beyond advocacy and appeals; standardized legislation and corresponding punishment systems are urgently needed. AI technology is not an uncontrollable monster, but it does require a sturdy cage to restrain it.