AI-generated deepfakes spark urgent calls for regulation and protection

The urgency over AI-generated deepfakes is intensifying.

This week's victims included actor Tom Hanks.

Hanks posted a warning on social media that a circulating video promoting a dental plan is not him and that his image was used without his permission.

On Friday, Gov. Hochul signed into law a ban on pornographic images generated by AI without the consent of the person depicted.

Whether the target is a private citizen or a public figure, the technology is so advanced that the results are disturbing.

Arizona mother Jennifer Destefano is a deepfake victim who spoke before the Senate Judiciary Committee in June. 

Destefano said she received a phone call from someone she believed was her daughter, saying she had been kidnapped and that her abductors were demanding money.

"It was my daughter's voice, it was her cries, it was her sobs, it was the way she spoke--I will never be able to shake that voice and the desperate cries for help out of my mind."

The same phone scam is targeting parents in New York. Voice-cloning software can be made to sound like any child in distress using just a few clips of the real person's voice.

Americans were swindled out of $8.8 billion in 2022 through these kinds of fraud schemes.

With election season quickly approaching, deepfakes bring a new set of concerns.

Todd Helmus, a behavioral scientist at the RAND Corporation, believes deepfakes will be used against both Democrats and Republicans without proper regulation.

"I don't think the government is prepared for this," Helmus said.

New York Congresswoman Yvette Clarke is proposing a bill that would require campaigns to disclose the use of AI-generated content in their ads.

"Right now what we're seeing on the internet, unfortunately, is the wild west of the internet. There aren't a whole lot of guardrails up for protecting the American people in many respects." Clarke said.

Congresswoman Clarke also wants to require watermarks on any synthetic images.

Google is already working on a system called SynthID, a tool that embeds an invisible digital watermark in computer-generated images.

SynthID can also scan images to check for the watermark. Google says the tool is still in the testing phase.
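Google has not published how SynthID works internally, so the short Python sketch below only illustrates the general idea of an invisible watermark: a hidden pattern is written into pixel values when an image is generated, and a matching detector later checks for it. The simple least-significant-bit trick here is a stand-in for illustration, not SynthID's actual method, and the 8-bit pattern is hypothetical.

    # Toy illustration of invisible watermarking -- NOT Google's SynthID algorithm.
    # It hides a short bit pattern in the least significant bits of pixel values
    # and later checks for that pattern, to show the embed/detect workflow in principle.
    import numpy as np

    WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit mark

    def embed_watermark(image: np.ndarray) -> np.ndarray:
        """Hide the watermark bits in the least significant bits of the first pixels."""
        flat = image.flatten()
        flat[: len(WATERMARK)] = (flat[: len(WATERMARK)] & 0xFE) | WATERMARK
        return flat.reshape(image.shape)

    def detect_watermark(image: np.ndarray) -> bool:
        """Check whether the expected bit pattern is present."""
        flat = image.flatten()
        return bool(np.array_equal(flat[: len(WATERMARK)] & 1, WATERMARK))

    if __name__ == "__main__":
        original = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
        marked = embed_watermark(original)
        print("watermark in marked image:  ", detect_watermark(marked))    # True
        print("watermark in unmarked image:", detect_watermark(original))  # usually False

In a real system like SynthID, the mark is designed to survive resizing, cropping, and compression, which a naive bit-level scheme like this one would not.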

Intel is also working on a different type of technology, FakeCatcher, which the company claims is 86% accurate in detecting fakes by analyzing eye movement and blood flow.

"What makes us humans? The answer is in our blood. When your heart pumps blood our veins are changing color," one expert said.

There is still much research and development needed to combat these fakes, and the government is still trying to find ways to regulate them.
