She first saw the advertisement on Facebook. Then again on TikTok. After seeing what appeared to be Elon Musk offering an investment opportunity over and over, Heidi Swann figured it had to be true.
“He looked exactly like Elon Musk, sounded exactly like Elon Musk, and I thought it was him,” Swann said.
She contacted the company behind the pitch and opened an account with more than $10,000. The 62-year-old healthcare worker believed she was making a smart investment backed by a businessman and investor worth billions of dollars in cryptocurrency.
But Swann would soon learn she had been cheated by a new wave of high-tech thieves who use artificial intelligence to create deepfakes.
Even looking back at the videos now, knowing they were fake, Swann still finds them believable.
“He still looks like Elon Musk,” she said. “He still sounds like Elon Musk.”
Deepfake scams are on the rise in America
As artificial intelligence technology evolves and becomes more accessible, these types of scams are becoming more common.
According to Deloitte, a leading financial research group, AI-generated content contributed to more than $12 billion in fraud losses last year, and those losses could reach $40 billion in the U.S. by 2027.
Both the Federal Trade Commission and the Better Business Bureau have issued warnings that deepfake scams are on the rise.
A study by the AI firm Sensity found that Elon Musk is the celebrity most commonly used in deepfake scams. One likely reason is his wealth and reputation as an entrepreneur. Another is the number of interviews he has given; the more content a person has online, the easier it is to create convincing deepfakes of them.
Anatomy of a deepfake
Christopher Mirdo, a professor at the University of North Texas in Denton, also uses artificial intelligence. But he uses it to make art.
“It's not going to replace the creative arts,” Mirdo said. “It's just going to enhance them and change the way we understand what we can do in the creative field.”
Even though Mirdo sees artificial intelligence as a way to innovate, he also sees its dangers.
Mirdo showed the CBS News Texas I-Team how scammers can take a real video and use AI tools to alter a person's voice and mouth movements, making them appear to say something entirely different.
Advances in technology have made it easier to create deepfake videos. All someone familiar with AI needs is a single still image and a video recording.
To demonstrate this, Mirdo took a video of investigative reporter Brian New and used it to create a deepfake of Elon Musk.
These AI-generated videos are rarely perfect, but they only need to be convincing enough to trick an unwitting victim.
“If you're really trying to deceive people, I think you can do some pretty bad things with it,” Mirdo said.
How can you spot a deepfake?
Some deepfakes are easier to detect than others; there may be signs such as unnatural lip movements or strange body language. But as the technology improves, it will become harder to tell just by looking.
A growing number of websites claim they can detect deepfakes. Using three known deepfake videos and three authentic ones, the CBS News Texas I-Team put five of these websites to an unscientific test: Deepware, Attestiv, DeepFake-O-Meter, Sensity and Deepfake Detector.
Overall, these five online tools correctly identified the tested videos about 75% of the time. The I-Team shared the results with the companies; their responses are below.
Deepware
Deepware, a website that is free to use, initially failed to flag two of the fake videos the I-Team tested. In an email, the company said the clips used were too short and that, for best results, uploaded videos should be between 30 seconds and one minute long. Deepware correctly identified all of the longer videos. According to the company, its detection rate of about 70% is considered good for the industry.
The FAQ section of Deepware's website says: “Deepfakes are not yet a solved problem. Our results indicate the likelihood of a specific video being a deepfake or not.”
Deepfake Detector
Deepfake Detector, a tool that charges $16.80 per month, identified one of the fake videos as containing “97% natural voice.” The company, which specializes in recognizing AI-generated voices, said in an email that factors like background noise or music can affect the results, but that its accuracy rate is about 92%.
In response to a question about guidance for average consumers, the company wrote: “Our tool is designed to be user-friendly. Average consumers can easily upload an audio file to our website or use our browser extension to analyze content directly. The tool will provide an analysis using probabilities to help determine whether a video may contain deepfake elements, making it accessible even to those unfamiliar with AI technology.”
Attestiv
Attestiv flagged two of the authentic videos as “suspicious.” According to company CEO Nicos Vekiarides, false positives can arise from factors such as graphics and editing; both of the authentic videos marked “suspicious” contained graphics and editing. The site offers a free service but also has a paid tier, where consumers can adjust settings and calibration for more in-depth analysis.
While acknowledging that Attestiv is not perfect, Vekiarides said that as deepfakes become harder to detect with the naked eye, these types of websites are needed as part of the solution.
“Our tool can determine whether something is suspicious or not, and then you can verify it with your own eyes and say, ‘I think that looks suspicious,’” Vekiarides said.
DeepFake-O-Meter
DeepFake-O-Meter is another free tool, supported by the University at Buffalo and the National Science Foundation. It identified two of the genuine videos as having a high probability of being AI-generated.
In an email, the creator of the open platform said that one limitation of deepfake detection models is that video compression can cause audio-video sync issues and inconsistent mouth movements.
In response to a question about how everyday users can use the tool, the platform emailed: “Currently, the main result shown to users is the probability value of the sample being AI-generated, as reported by various detection models. This can be used as a reference when multiple models agree on the same answer with confidence (for example, more than 80% for AI-generated or less than 20% for real videos). We are currently working on a more understandable way to present the results, as well as developing new models that can output comprehensive detection results.”
Sensity
Sensity's deepfake detector correctly identified all six clips, showing a heatmap indicating where AI manipulation was most likely.
The company offers a free trial period for its service and told the I-Team that although it is currently tailored to private and public organizations, its future goal is to make the technology accessible to everyone.