Biden Calls for a Ban on AI Voice Impersonation



Society may still be wrestling with how to regulate AI, but President Biden is already convinced that the US needs to outlaw the technology’s ability to deepfake people’s voices.

Biden mentioned AI only twice during his hour-long State of the Union address on Thursday, but he singled out AI voice impersonation as a threat that needs to be stopped. “Harness the promise of AI and protect us from peril. Ban AI voice impersonations and more,” he said near the end of his address.

The call comes over a month after a robocall deepfaking Biden’s voice was used to discourage primary voters in New Hampshire. Ironically, the robocall wasn’t the work of scammers or foreign propagandists. Instead, a Democratic political consultant named Steve Kramer admitted that he orchestrated the robocalls to draw attention to the threat of AI voice impersonation. “Maybe I’m a villain today, but I think in the end we get a better country and better democracy because of what I’ve done, deliberately,” Kramer told the Associated Press.

So far, the Biden administration has tried to crack down on AI voice impersonation through the FCC. Last month, the regulator confirmed that AI-generated voice calls violate existing US laws designed to protect consumers from pre-recorded automated calls. But the ruling does nothing to stop bad actors from spreading deepfakes over the internet.

In the meantime, US lawmakers have floated at least two bills to try to stop AI voice impersonation. One of them, the No AI Fraud Act from Representatives María Elvira Salazar (R-FL) and Madeleine Dean (D-PA), would allow people to sue the makers of such deepfakes for damages. However, the digital rights group the Electronic Frontier Foundation criticized the legislation’s wording as too vague and broad, with the potential to ensnare all kinds of digital media that contain someone’s likeness.

“If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation,” the EFF added.

In the Senate, meanwhile, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act would create a “federal civil remedy” for victims of deepfake porn. It covers “digital forgeries that depict the victim in the nude, or engaged in sexually-explicit conduct or sexual scenarios” created via software, machine learning, artificial intelligence, and other computer-generated or technological means.

