New research from iProov reveals the latest attack trends on biometric verification systems and how they can be mitigated 

Key Findings:

  • Digital injection attacks occurred five times more frequently than persistent presentation attacks (e.g., showing a mask to a camera) in 2022.
  • Attackers are spoofing metadata and compromising even once-trusted device data, with a 149% increase in attacks targeting mobile platforms in H2 compared with H1.

Deepfakes are now a common tool in cybersecurity attacks, with a new iteration of these attacks – novel face swaps – emerging for the first time last year.

iProov, the world leader in face biometric verification and authentication technology, has today released its first-of-its-kind Biometric Threat Intelligence 2023 Report, sharing its analysis of the attack trends facing biometric systems.

Digital identities are rapidly becoming more widely used as organizations’ and governments’ digital transformation projects mature and users demand remote access to everything from opening a bank account to applying for government services. To support this transformation, many organizations have adopted biometric face verification, which is widely recognized as the most user-friendly, secure, and inclusive authentication solution.

Yet, as biometric face verification gains traction and becomes more widely adopted, threat actors are targeting these systems with increasingly sophisticated online attacks. To achieve both user-friendliness and security, organizations need to evaluate their biometric solutions for resilience against these complex attacks.

Digital Injection Attacks are Rampant – and Evolving 

Digital injection attacks – where a malicious actor bypasses the camera feed and injects synthetic imagery or recorded video directly into a verification system – occurred five times more frequently than persistent presentation attacks (i.e., showing a photo or mask to the camera) on the web in 2022. This is due both to the ease with which they can be automated and to the rise in access to malware tools. More than three quarters of the malware available on the dark web sells for under $10 USD, and with the rise of malware-as-a-service and plug-and-play kits, only 2-3% of threat actors today are advanced coders.
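Because injected frames never pass through a physical camera, one mitigation direction is to treat frame provenance as an explicit signal rather than assuming it. The sketch below is a hypothetical illustration of that idea, not a description of iProov's technology: the attestation scheme, field names, and shared key are all assumptions made for the example.

```python
# Minimal sketch (hypothetical, not iProov's implementation): treating frame
# provenance as one triage signal against digital injection attacks.
# All field names (e.g. "capture_attestation") are illustrative assumptions.
import hashlib
import hmac

SHARED_CAPTURE_KEY = b"example-only-key"  # hypothetical key provisioned to a trusted capture SDK


def attestation_is_valid(frame_bytes: bytes, claimed_tag: str) -> bool:
    """Check a (hypothetical) HMAC tag that a trusted capture component
    would attach to frames it actually read from the camera."""
    expected = hmac.new(SHARED_CAPTURE_KEY, frame_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claimed_tag)


def triage_frame(frame_bytes: bytes, metadata: dict) -> str:
    """Return a coarse risk label for one uploaded frame."""
    tag = metadata.get("capture_attestation")
    if not tag or not attestation_is_valid(frame_bytes, tag):
        # Frames with missing or forged provenance are consistent with a
        # digital injection (virtual camera, replayed video, emulator feed).
        return "suspect-injection"
    return "proceed-to-liveness-checks"


if __name__ == "__main__":
    frame = b"\x00" * 64  # stand-in for encoded image bytes
    good_tag = hmac.new(SHARED_CAPTURE_KEY, frame, hashlib.sha256).hexdigest()
    print(triage_frame(frame, {"capture_attestation": good_tag}))  # proceed-to-liveness-checks
    print(triage_frame(frame, {}))                                 # suspect-injection
```

A shared-key scheme like this can itself be extracted and replayed from a compromised client, which is exactly the kind of once-trusted device signal the report cautions against relying on in isolation; it is shown here only to make the provenance gap concrete.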

Mobile platforms were also identified as increasingly vulnerable, with attacks now using emulators – software that mimics the behavior of mobile devices. The report warns organizations against relying on device data for security, noting a massive 149% increase in attacks targeting mobile platforms in the second half of the year compared with the first.
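To make the warning concrete, the sketch below shows the kind of naive, self-reported device check that an emulator defeats: every field it inspects is supplied by the client, so a well-configured emulator simply reports plausible values. The field names and marker strings are illustrative assumptions, not a real detection rule.

```python
# Naive device-data check of the kind the report warns against (illustrative only).
# All inputs are self-reported by the client, so an emulator can spoof every field.
SUSPICIOUS_MODEL_MARKERS = {"emulator", "generic_x86", "sdk_gphone"}  # hypothetical markers


def naive_device_check(device_data: dict) -> bool:
    """Return True if the self-reported data *looks* like a real handset."""
    model = device_data.get("model", "").lower()
    if any(marker in model for marker in SUSPICIOUS_MODEL_MARKERS):
        return False  # only catches emulators that don't bother to hide
    if not device_data.get("sensors"):
        return False  # real phones report motion sensors; emulators can fake this too
    return True


if __name__ == "__main__":
    # A careless emulator setup is caught...
    print(naive_device_check({"model": "generic_x86_arm", "sensors": []}))             # False
    # ...but spoofed metadata from the same emulator passes the exact same check.
    print(naive_device_check({"model": "Pixel 7", "sensors": ["accelerometer"]}))      # True
```

The point is not that such checks are useless, but that attackers routinely spoof exactly these values, so device data alone cannot carry the security decision.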

“Our analysis shows that the online threat landscape is always rapidly evolving,” said Andrew Newell, Chief Scientific Officer at iProov. “The 149% increase in attacks using emulators posing as mobile devices is a good example of how attack vectors arrive and scale very quickly. We have seen a rapid proliferation of low-cost, easy-to-use tools that has allowed threat actors to launch advanced, scalable attacks with limited technical skill.”

The Deepfake Threat is a Reality – and Novel Face Swap Attacks Emerge

Attacks using deepfake technology became far more common last year. The technology is hotly debated and increasingly mainstream, with bans on its non-consensual use forming an important part of the draft UK Online Safety Bill. Today, it is commonly used by cyberattackers to create 3D videos that trick systems into believing the real consumer is trying to authenticate.

2022 also saw the first use of a new type of synthetic digital attack – novel face swaps – which combine existing video or live streams and superimpose another identity over the original feed in real time. This complex attack type first appeared in H1 2022, and its use soared through the rest of the year, growing 295% from H1 to H2. These attacks are incredibly challenging for both active and passive verification systems to detect.


“In 2020, we warned of the emerging threat of deepfakes being digitally injected into camera feeds to impersonate an individual’s biometric verification process,” said Andrew Bud, founder and CEO of iProov. “This report proves that deepfake attacks are now a reality. Even with advanced machine-learning computer vision, systems are struggling to keep up in detecting and triaging these evolving attacks. Any organization that isn’t protecting its system against these threats needs to do so urgently, especially in high-risk identity verification scenarios.”

No One is Safe: Attacks are Happening En Masse, Indiscriminately 

Attacks against motion-based verification systems – which rely on active motions such as smiling, nodding, and blinking – were launched en masse all over the world, occurring three times a week last year and sending bursts of 100 to 200 verification attempts at a time in an effort to overwhelm platforms. These attacks targeted different systems at once and were indiscriminate of industry or geography, suggesting no organization is safe.
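Bursts of this size are a volumetric signal that can be surfaced with routine server-side monitoring, independent of the biometric checks themselves. The sketch below is an illustrative sliding-window counter under assumed parameters (a 60-second window and a 100-attempt threshold); it is not a description of how the iSOC detects these campaigns.

```python
# Illustrative burst detector for verification traffic (assumed parameters, not iSOC tooling).
from collections import deque


class BurstDetector:
    """Flags when too many verification attempts land inside a sliding time window."""

    def __init__(self, window_seconds: float = 60.0, threshold: int = 100):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self._timestamps = deque()

    def record_attempt(self, ts: float) -> bool:
        """Record one attempt at time `ts` (seconds); return True if a burst is in progress."""
        self._timestamps.append(ts)
        # Evict attempts that have fallen outside the window.
        while self._timestamps and ts - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        return len(self._timestamps) >= self.threshold


if __name__ == "__main__":
    detector = BurstDetector()
    # Simulate a burst of 150 attempts arriving half a second apart.
    for i in range(150):
        if detector.record_attempt(i * 0.5):
            print(f"burst flagged at attempt {i + 1}")  # triggers once 100 attempts fit in 60s
            break
```

A check like this does not judge whether any individual attempt is genuine; it only gives operators an early signal that a platform is being flooded.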

The iProov Biometric Threat Intelligence 2023 Report is informed by data from the iProov Security Operations Center (iSOC) and expert analysis.