The United States Federal Communications Commission (FCC) is set to vote in the coming weeks on whether to declare the use of AI-generated voices in robocalls illegal.
Experts have noted that voice-cloning apps, especially those that imitate celebrities and whose output is highly misleading and easily spread, should build explicit labeling into the dissemination process so that audiences are reminded the content is synthetic.
On January 21, local time, some voters in New Hampshire reported receiving automated voicemails claiming to be from “President Biden,” advising recipients not to vote in the state’s primary election.
“Artificial intelligence that imitates human voices and produces deceptively realistic speech, images, and video to mislead consumers is sowing chaos. No matter which celebrity or politician you favor, or what your relationship is with the relative who seems to be calling for help, any of us could become a target of these AI-generated fraudulent calls.”
In response to recent AI deepfake incidents in the United States, on February 1, FCC Chair Jessica Rosenworcel proposed classifying AI-generated voice robocalls as illegal.
Earlier incidents include AI-generated calls impersonating U.S. President Biden to voters and the spread of AI-generated “indecent photos” depicting the well-known singer Taylor Swift, both arriving in a U.S. election year. The string of deepfake incidents has heightened public concern about the deceptive potential of artificial intelligence.
Vote on the Proposal in the Coming Weeks
Robocalls, also known as prerecorded calls, use computer-controlled auto-dialers to deliver prerecorded messages, typically for telemarketing or to spread particular information, and are a frequent nuisance for mobile users.
The FCC has stated that the AI-generated voices used in robocalls have improved steadily in recent years, and that these voices “have the potential to create misinformation by imitating the voices of celebrities, political candidates, and close family members.”
FCC Chair Jessica Rosenworcel said her proposed Declaratory Ruling would “declare under existing law that this emerging technology is illegal, providing our partners in Attorneys General Offices nationwide with new tools they can use to combat these scams and protect consumers.” Reports indicate that commissioners will vote on the proposal in the coming weeks.
The proposal comes in the aftermath of New Hampshire residents receiving the phone calls impersonating President Biden: on January 21, voters in the state reported automated voicemails, purportedly from the president, advising them not to vote in the state’s primary election.
White House Press Secretary Karine Jean-Pierre responded on January 22, confirming that the call was fake and that President Biden had not recorded any such message. The New Hampshire Attorney General’s office said it is investigating the “deceptive” messages. Biden’s campaign manager, Julie Chavez Rodriguez, promptly issued a statement saying the campaign was in contact with the New Hampshire Attorney General’s office and was actively discussing the matter internally.
AI has been used to influence U.S. elections before, and not only through voice impersonation. In the lead-up to the 2023 Chicago mayoral election, “Chicago Lakefront News” released a video criticizing Democratic candidate Paul Vallas’s approach to gun violence. Although Vallas’s campaign condemned the video as AI-generated, it had already circulated widely online, and his eventual electoral defeat may have been influenced by the damage the video did to his campaign.
The voice cloning company behind the “fake Biden” phone call has been identified as the startup ElevenLabs.
The company confirmed that the fabricated Biden voice was generated with its tools, and it has since suspended the accounts that used its technology to spread the deepfake messages.
According to ElevenLabs’ official website, the AI research company can generate voices in 29 languages. Its safety policy recommends obtaining permission before cloning someone’s voice, but allows cloning without permission for certain non-commercial purposes, including “political speech that contributes to public debate.” The company also warns against using cloned voices for fraud, discrimination, hate speech, or any illegal online misuse.
A January 26 report from Wired magazine revealed that ElevenLabs raised $80 million in a recent funding round, lifting its valuation above $1.1 billion and making it a bona fide unicorn. Its backers include the venture firm Andreessen Horowitz, former GitHub CEO Nat Friedman, and Mustafa Suleyman, co-founder of the AI lab DeepMind.
Why is it Challenging to Defend Against AI Deepfake Technology?

Deepfake, a portmanteau of “deep learning” and “fake,” refers to the use of deep learning techniques to generate synthetic images, audio, or videos. The public availability of data on public figures provides ample material for AI training, making celebrities frequent victims of AI manipulation.
An anonymous technical expert from the internet security organization “NetKnife” stated that content produced with deepfake technology is visually and aurally convincing, making it difficult to distinguish from authentic material with the naked eye or traditional methods. Attackers can also use technical means to conceal the traces and features of a deepfake, making it harder to detect. Deepfake technology further exploits the abundance of image and video data available online for training deep learning models: the larger the available dataset, the better the model, and the more closely the resulting deepfake resembles a real person.
The “NetKnife” expert added that deepfake detection technology can be more sensitive than human observers, catching flaws in motion, lighting, and resolution and identifying forged features and abnormal patterns at the source, which allows deepfake content to be detected early and stopped from spreading.
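As a purely illustrative sketch of the kind of “abnormal pattern” cue such detectors rely on, the short Python example below flags video frames whose high-frequency spectral energy is a statistical outlier relative to the rest of the clip. The heuristic, function names, and threshold are our own assumptions for demonstration, not any detector described by the expert; real systems use trained models that are far more robust.

```python
# Toy heuristic (illustrative only): frames with unusually high high-frequency
# spectral energy are flagged as possible artifacts of synthesis or upsampling.
import numpy as np

def highfreq_energy(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of a frame's spectral magnitude above `cutoff` cycles/pixel (Nyquist = 0.5)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
    fy = np.fft.fftshift(np.fft.fftfreq(frame.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(frame.shape[1]))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    return spectrum[radius > cutoff].sum() / (spectrum.sum() + 1e-12)

def flag_anomalous_frames(frames, z_thresh: float = 2.5):
    """Return indices of frames whose high-frequency energy is a statistical outlier."""
    scores = np.array([highfreq_energy(f) for f in frames])
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return [i for i, zi in enumerate(z) if abs(zi) > z_thresh]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic grayscale "clip": smooth frames, plus one frame with injected
    # high-frequency noise standing in for resolution/texture artifacts.
    frames = [rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1) for _ in range(20)]
    frames[7] = frames[7] + rng.normal(scale=50.0, size=(64, 64))
    print("Flagged frames:", flag_anomalous_frames(frames))
```

In practice, detectors combine many such signals (motion consistency, lighting, compression traces) and learn them from labeled data rather than relying on a single hand-tuned threshold.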