"The threat of AI-generated deepfakes is growing, but the capabilities to detect them are still not good enough"
"The threat of AI-generated deepfakes is growing, but the capabilities to detect them are still not good enough"
Michael Matias, Co-founder and CEO of Clarity, explained at Calcalist's AI conference that organizations will need to learn how to defend against deepfake attacks, which can lead to fraud.
"The threat of AI-generated deepfakes is growing, but the capabilities to detect these outputs are still not good enough," said Michael Matias, Co-founder and CEO of Clarity, at Calcalist's AI conference held in Tel Aviv.
Matias began his presentation by showing two video clips, one real and one a deepfake, and asked the audience to identify which was which. The results were unimpressive. Nor, according to him, can algorithms be relied on to detect deepfake videos. "The world is starting to develop solutions for deepfake detection through algorithmic means," he said.
"We ran the most advanced tools on the two videos, and they assumed both were real. In reality, only one of them was. This should concern us for several reasons. First, because the algorithms aren't good enough to detect them. And second, because people don't know when something is fake, and even worse, they may believe that something real is fake."
Matias explained that "deepfakes are a booming industry. They can be used in medicine, education, and entertainment. At the same time, academia is pushing research forward, with massive support for these studies, so it’s not surprising that Europol predicts that by 2026, about 90% of online content will be AI-generated. This includes videos on YouTube and social media, but also live video calls on platforms like Zoom, where we may interact with AI characters."
He emphasized how the development of deepfakes will impact the cyber world and even shared an example of a fraud attempt: "Organizations will need to learn how to defend against deepfake attacks, which can lead to breaches of internal networks or other types of fraud. Earlier this year, a company lost $25 million after an employee joined a Zoom call with senior executives, all of whom were deepfakes. After the call, the employee transferred the requested funds to the attacker's account."
Matias concluded by stating that much work remains on deepfake detection. "To create solutions, we need very high detection capabilities. But it's not enough to do this retroactively, because by then the attack has often already occurred and the damage is done. We need a firewall that can identify in real-time that one of the participants in a call isn’t real.
"Alongside this, there is the issue of AI-based disinformation. On October 7, we worked with the Ministry of Defense on all the incoming media to determine what was real and what wasn’t. There was a wave of misinformation in the first few days, and we conducted focused work to verify the terrible videos that provided information on what was happening."