TECH NEWS – The U.S. Federal Communications Commission (FCC) believes its new rule will give states fresh tools to combat the perpetrators of harmful robocalls.
AI-generated voices can be used for good or ill, and not everyone supports the technology (Doug Cockle, the voice of Geralt of Rivia, certainly doesn't), but the big question is how to combat harmful use. The FCC is now attempting just that by banning AI-generated voices in robocalls…
This is exactly what happened when the voice of U.S. President Joe Biden was used (without his permission, of course…) to tell Democratic supporters not to vote (this happened in January, and more than 20,000 people received the call!). The FCC said the new rule “expands the legal avenues through which state law enforcement can hold these perpetrators accountable under the law.”
“Bad actors are using AI-generated voices in unwanted robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters. We’re putting the scammers behind these robocalls on notice. State attorneys general now have new tools to crack down on these scams and ensure that the public is protected from fraud and misinformation,” said FCC Chairwoman Jessica Rosenworcel.
Under the Telephone Consumer Protection Act, a perpetrator can be fined $500 to $1,500 per call. That adds up quickly: last August the FCC issued a record $300 million fine over an auto warranty scam involving more than five billion robocalls to over 500 million phone numbers in just three months. That works out to over 640 calls per second!
Given scammers like these, stricter regulation is understandable, and it is genuinely good to see at least an attempt to fight back, even if, in our opinion, the new rule on its own seems like a fairly hopeless measure.