
Deepfakes are becoming part of the misinformation culture, and their rise is causing ripples across industries as malicious use for fraud and other cyberattacks increases. Case studies of deepfake-enabled attacks keep accumulating: in one recent deepfake video call scam, UK engineering firm Arup was tricked into sending $25 million to scammers.

A recent study from TitanHQ and Osterman Research has captured this security zeitgeist, finding that 11% of organizations surveyed experienced a deepfake-enabled attack in the last year. That percentage may seem low compared to other techniques, like conventional phishing, but this is only the beginning: predictions suggest around 8 million deepfakes will be shared by the end of 2025, and many will be used for nefarious purposes.

What is a Deepfake Cyberattack?

The word “deepfake” originated on Reddit in 2017, when a user going by the username “deepfakes” created a subreddit community to share deepfake pornography of celebrities created using open-source face-swapping technology. Since then, deepfakes have continued to be used in various malicious guises.

Deepfake content manipulates a piece of legitimate media, like a video, changing it to suit the editor's needs. The technology underlying a deepfake is a type of AI known as deep learning, a form of machine learning that uses a neural network. The algorithms that create the deepfakes are sophisticated enough to create highly realistic fake media. It is this believability that underpins the dangerous aspect of deepfake-enabled cyberattacks. Deepfakes manipulate people into performing an act that benefits the cybercriminal. Deepfakes may be the ultimate device in social engineering-enabled cyberattacks.
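As a rough illustration of the deep learning behind face-swap deepfakes, the classic approach trains a single shared encoder on faces of two people and a separate decoder per identity; a "swap" is produced by encoding one person's face and reconstructing it with the other person's decoder. The sketch below is a simplified assumption using PyTorch, not the code of any specific deepfake tool:

```python
# Minimal sketch (assuming PyTorch) of the shared-encoder / per-identity-decoder
# autoencoder design commonly associated with face-swap deepfakes.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # shared latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()   # would be trained on faces of person A
decoder_b = Decoder()   # would be trained on faces of person B

# "Swap": encode a frame of person A, reconstruct it with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)       # placeholder 64x64 RGB frame
fake_frame = decoder_b(encoder(frame_of_a))
print(fake_frame.shape)                     # torch.Size([1, 3, 64, 64])
```

Trained on enough footage, this kind of architecture produces frames realistic enough to fool a casual viewer, which is exactly the believability that attackers exploit.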

Examples of the Threat of AI and Deepfake Cyberattacks

Deepfake cyberattacks use sophisticated technology, but the tactics come down to the same thing: manipulating human behavior. Attackers are increasingly mixing and matching AI-generated content from Large Language Models with deepfakes. The following examples show how deepfakes are being used in fraud and phishing:

Deepfake Face Swapping

Deepfake algorithms are so sophisticated that they can swap faces in real time. A recent attack used this technique to target a business person from northern China. The scammers used face-swapping technology within a video conference to trick the businessman into thinking he was talking to a trusted friend who needed the money for a bidding process. The scam cost the man 4.3 million yuan ($622,000). A 2024 iProov report found a 704% increase in "Face Swap" deepfake attacks.

Deepfake + AI Agents = More Sophisticated and Dangerous Cyberattacks

Warnings about hyper-personalized phishing scams generated by AI bots are on the rise. Reports have identified fake, personalized emails created and designed using intelligence gathered by Generative AI. AI agents, like OpenAI's Operator, are now so sophisticated that they can interact with web pages, write code, and provide an ideal framework for an attacker to create, automate, and execute cyberattacks. Add to this capability the believability of deepfakes, and you have a perfect storm that will result in a security landscape populated by complex and sophisticated threats.

Microsoft Copilot Spoof Attack

Microsoft Copilot is an AI-enabled digital assistant. Researchers have identified phishing email campaigns targeting customers using Microsoft Copilot. The phishing emails are presented as Copilot communications and encourage users to click a link. This phishing could be tied to a deepfake video conference like the one resulting in $25 million losses for Arup.

Looking at the trends in deepfake cyberthreats, businesses and MSPs should expect deepfakes to infiltrate phishing and social engineering attacks as we move deeper into the AI era. Predictions suggest that attackers will focus on the personal lives of corporate executives. Researchers highlight that deepfake videos or audio will be designed to elicit an emotional response, typical of a social engineering attack, creating a knee-jerk reaction to send money or share confidential information.

Deepfake Detection Challenges

Identifying deepfakes can be a challenge. Detection methods generally rely on machine learning to spot anomalies in fake video or audio. However, detection rates are still far from ideal; Thomson Reuters found that 95% of synthetic identities presented during KYC checks are not detected. One method used to improve the detection of deepfakes is a facial biometric liveness test. Liveness tests continue to improve but are not 100% effective.
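To make the machine learning approach concrete, a detector typically samples frames from a video and scores each one for signs of manipulation, then aggregates the scores. The sketch below assumes OpenCV for frame extraction; the `score_frame` function is a hypothetical placeholder for the trained model a real product would use:

```python
# Minimal sketch of frame-level deepfake screening (assumes OpenCV).
# score_frame is a hypothetical stand-in: real detectors run ML models trained
# on visual artifacts such as blending boundaries, lighting inconsistencies,
# and unnatural blinking.
import cv2

def score_frame(frame) -> float:
    """Hypothetical placeholder: probability that the frame is synthetic."""
    return 0.0  # a real system would run a trained classifier here

def screen_video(path: str, threshold: float = 0.5, sample_every: int = 10) -> bool:
    """Return True if sampled frames look synthetic on average."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:   # sample frames to keep scoring cheap
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return bool(scores) and sum(scores) / len(scores) > threshold
```

Even with a strong model plugged in, this style of detection lags behind generation quality, which is why layered controls such as liveness tests are added on top.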

How do Deepfakes Affect Identity Verification for Email?

Deepfakes can be viewed as a form of identity theft. Deepfake videos, photos, and audio are used to impersonate an individual, allowing the attacker to perform tasks that require identity verification. If a hacker gains unauthorized access to an email account and holds deepfaked identity documents, such as a passport or driver's license, they have all they need to open online bank accounts or create a citizen identity.

Biometric vendor iProov has found a massive increase in deepfakes deployed in digitally injected attacks. These attacks make deepfake cyberattacks scalable: in a digital injection attack, the deepfake is fed directly into the data stream or bypasses the device camera altogether, circumventing the onboarding and authentication measures that protect sensitive accounts such as email.

Tools and Strategies to Help Spot Deepfakes

Businesses and MSPs should expect deepfakes to become even more realistic, making these threats harder to detect. The researchers at Osterman, however, recommend that organizations use a mix of Human Risk Management (HRM) strategies and email security solutions that include defensive AI capabilities. By deploying this mix of human-centered security strategies and AI-powered email security, an organization can set up smart layers of protection against sophisticated AI-assisted cyberattacks:

Human Risk Management (HRM)

Osterman places "Human Risk Management" strategies at the center of controlling AI-assisted cyberattacks, including deepfakes. HRM is a data-driven approach to risk mitigation based on behavior-driven security awareness training and phishing simulations, as sketched below. Osterman's research identifies AI-assisted security awareness training and simulations for employees as the strategy expected to have the highest impact.
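The "data-driven" part of HRM usually means turning simulation outcomes into per-user risk scores so training can be targeted where it matters. The sketch below is an illustrative assumption, not Osterman's or any vendor's scoring model; the event names and weights are hypothetical:

```python
# Minimal sketch of scoring users from phishing-simulation outcomes so that
# awareness training can be targeted. Events and weights are illustrative.
from collections import defaultdict

# (user, event) pairs recorded from simulated phishing campaigns.
simulation_events = [
    ("alice", "reported"),
    ("bob", "clicked"),
    ("bob", "entered_credentials"),
    ("carol", "ignored"),
]

EVENT_WEIGHTS = {"reported": -2, "ignored": 0, "clicked": 3, "entered_credentials": 5}

def risk_scores(events):
    scores = defaultdict(int)
    for user, event in events:
        scores[user] += EVENT_WEIGHTS.get(event, 0)
    return dict(scores)

print(risk_scores(simulation_events))  # e.g. {'alice': -2, 'bob': 8, 'carol': 0}
```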

Defensive AI-Enabled Email Security

AI-enabled phishing attacks work alongside deepfakes to manipulate people's emotions and trust. Osterman research shows that email security tools are essential to augment HRM strategies. AI-enabled email security solutions can identify emerging threats and help stop Generative AI-augmented deepfake attacks. Technologies like PhishShield use advanced AI technologies, such as Natural Language Processing (NLP) and machine learning, to spot AI-assisted cyberattacks.
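As a toy illustration of the general NLP-plus-machine-learning approach (not PhishShield's implementation), a classifier can be trained on email text so that phishing-style language stands out from routine business mail. The sketch below assumes scikit-learn and uses a deliberately tiny, made-up sample:

```python
# Toy illustration of NLP + ML phishing detection (assumes scikit-learn).
# Not a production model: real systems train on large labelled corpora and
# combine text signals with sender, header, and behavioral features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled sample: 1 = phishing-style, 0 = routine business mail.
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is overdue, wire payment immediately to the new account",
    "Attached are the meeting notes from Tuesday",
    "Reminder: the quarterly report is due next Friday",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please confirm your password to avoid account closure"]
print(model.predict_proba(suspect)[0][1])  # probability the message is phishing-style
```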

Using smarter human-centered security layers offers an effective way to handle deepfake threats. Companies and MSPs can use HRM and AI-powered email security to prevent increasingly sophisticated AI-assisted cyberattacks such as Business Email Compromise.

Ready to strengthen your defenses with a smarter approach to email security? Get in touch to learn how combining human-centered strategies with AI-powered email protection can help your organization stay ahead of sophisticated cyber threats. Get a free demo and see it in action.

Talk to our Team today
