February 27, 2026
Most security systems are built on a dangerous assumption: one check is enough to keep fraudsters out. It isn’t. And the numbers make that clear immediately.
Credentials were the first to fall. Microsoft Entra data shows that password-based attacks now account for over 99% of the 600 million identity attacks Microsoft observes daily. The SMS one-time passcode on your phone isn’t holding up either; SIM swap attacks are rising by more than 1,000% year-on-year in the UK alone.
So organisations turned to facial biometrics, and attackers followed, because high-assurance systems – those protecting critical processes, financial accounts, and sensitive data – attract the most determined adversaries. In 2024 alone, iProov observed:
- Native virtual camera attacks surged 2,665%
- Face swap deepfake attacks tripled
- Injection attacks rose 783% (and a further 740% across 2025)
These aren’t just bigger numbers. They represent a fundamental shift: attackers are now specifically targeting the biometric layer, bypassing defences that organisations thought they could “set and forget”. The problem isn’t biometric technology – it’s the assumption that any single check, however sophisticated, is sufficient on its own.
When we talk about layered security here, we’re not talking about traditional multi-factor authentication – using a password alongside an authenticator app, for example (though that is highly recommended and increasingly mandated in many countries and industries). We’re talking about the defence-in-depth strategy within each factor itself – how layering multiple security controls within biometric verification ensures that if one check is fooled, others catch what it missed.
Online identity security is an arms race. It’s a continuous evolution, not an equation that can be solved and shelved. This is exactly why the architecture of your defence matters as much as the strength of any individual component. Let’s explore why multi-layered identity protection is essential, and how it keeps you ahead of bad actors.
The Problem with Single-Point Security
The parallel with passwords is not accidental. Passwords failed not because the concept was wrong, but because attackers became sophisticated enough to defeat single-factor credential checks at scale, using methods such as brute-force attacks and credential stuffing. The same dynamic now plays out against low-assurance biometric solutions, as increasingly complex attacks are democratised, packaged, and sold by fraudsters.
Not all biometric liveness checks are equal. Some providers rely on a single-frame imagery check: does this face look real? The problem is that modern face swap tools, virtual cameras, and deepfake software are specifically engineered to pass that kind of check. iProov has identified over 115,000 possible attack combinations across the tools we actively track. No single check can cover that attack surface.
Single-frame checks capture a snapshot, not genuine presence. They can’t prove someone is really there, only that the image looks real – and many deepfakes are specifically designed to look real.
And relying on humans as the fallback is no answer either. Only 0.1% of people can reliably spot synthetic media. When 99.9% of people can’t tell real from fake, manual review is a vulnerability rather than a safety net.
You might reasonably ask: if single-point checks are the problem, doesn’t that apply to any biometric vendor – including iProov? It would, if iProov were a single-point check. It isn’t. The distinction is architectural, and that’s what we’ll explain next.
How Biometric Multi-Layered Security Actually Works
The answer isn’t simply adding more layers without intelligent correlation, as this just creates noise and friction. The goal is optimal integration, where each layer contributes distinct signals that, when analysed together, create a complete picture that no single layer could see alone.
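To make the idea of correlated layers concrete, here is a minimal sketch in Python. The signal names, scores, and thresholds are all hypothetical illustrations, not iProov’s actual model – the point is only that no single signal decides the outcome on its own.

```python
from dataclasses import dataclass

# Hypothetical layer outputs -- illustrative only, not a real vendor API.
@dataclass
class LayerSignals:
    liveness_score: float   # imagery layer: 0.0 (fake) .. 1.0 (genuine)
    device_trusted: bool    # metadata layer: device integrity check passed
    network_anomaly: bool   # metadata layer: VPN / unusual origin detected

def correlate(signals: LayerSignals) -> str:
    """Combine independent layer signals into one decision.

    Each layer contributes a distinct piece of evidence; individually
    weak signals can add up to a rejection, and only a fully clean
    picture yields an immediate accept.
    """
    risk = 0.0
    if signals.liveness_score < 0.8:
        risk += 0.5
    if not signals.device_trusted:
        risk += 0.3
    if signals.network_anomaly:
        risk += 0.2
    if risk >= 0.5:
        return "reject"
    return "accept" if risk == 0.0 else "review"
```

For example, a borderline liveness score plus a network anomaly – neither conclusive alone – together cross the rejection threshold, while a single anomaly on an otherwise clean attempt is routed to review rather than hard-blocked.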
Imagery alone also leaves the digital environment entirely unchecked. An attacker can use an emulator or inject a pre-recorded deepfake directly into the video stream – bypassing the camera entirely – a technique that imagery-only checks have no visibility into. Without metadata correlation (device integrity signals, emulator detection, environment verification), a liveness check is operating blind to the whole attack surface.
See how iProov’s multi-layered approach detects and blocks modern attacks in practice:
1: The Imagery Layer: Advanced Liveness Detection
The first layer proves a real person is physically present, not a photo, video, deepfake, or mask. This goes far beyond checking if a face looks convincing.
Dynamic Liveness uses Flashmark™ technology – your screen illuminates with randomised colours while the system analyses how light reflects off a genuine face across multiple frames. This creates a unique, real-time challenge-response that’s virtually impossible to fake even with sophisticated deepfakes, because it proves the person is present right now, not a replay attack.
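The challenge-response principle behind this can be sketched generically. The code below is not iProov’s Flashmark algorithm – a real system analyses how each colour reflects off a three-dimensional face across frames – but it shows why an unpredictable, per-session challenge defeats replayed media: a recording made earlier cannot contain a sequence chosen after it was recorded.

```python
import hmac
import secrets

# Illustrative challenge-response sketch; colour palette is an assumption.
COLOURS = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def issue_challenge(length: int = 4) -> list[str]:
    """Server picks an unpredictable colour sequence for this session."""
    return [secrets.choice(COLOURS) for _ in range(length)]

def verify_response(challenge: list[str], observed: list[str]) -> bool:
    """Check the observed sequence against the issued challenge.

    Comparison is constant-time to avoid leaking information; a real
    liveness system verifies the physical reflections, not just the
    sequence itself.
    """
    return hmac.compare_digest(",".join(challenge), ",".join(observed))
```

Because the challenge is generated server-side with a cryptographic RNG at verification time, even a perfect deepfake prepared in advance has no way to anticipate it.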
2: The Metadata Layer: Digital Forensics
While liveness detection checks the person, the metadata layer checks the digital environment. This includes:
- Detecting compromised devices (jailbroken or rooted)
- Spotting emulators, often used in scaled fraud operations
- Identifying VPNs or anonymisers masking a device’s true origin
- Cross-checking technical signals against the claimed device
Consider the last point: an attacker presents what appears to be an iPhone, but the image dimensions in the data stream don’t match any resolution an iPhone camera has ever produced. That signal alone doesn’t prove fraud. Combined with a VPN, an unusual IP origin, and liveness data that passed a little too cleanly, it raises a serious red flag.
It’s the digital equivalent of a passport that looks real but carries the wrong country’s hologram. Modern fraudsters won’t always be caught by any single signal: GPS checks, IP analysis, or liveness detection alone each have blind spots. The key is ensuring each layer contributes unique intelligence that strengthens the overall decision.
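The device cross-check described above can be sketched as a simple lookup. The resolution table here is a small, made-up illustration, not an exhaustive camera database, and the function name is hypothetical – the point is that a mismatch is one weak signal to feed into the overall decision, not a verdict by itself.

```python
# Hypothetical cross-check of a claimed device against stream metadata.
# Resolutions are illustrative examples only, not a real device database.
KNOWN_CAPTURE_RESOLUTIONS: dict[str, set[tuple[int, int]]] = {
    "iPhone": {(1920, 1080), (1280, 720), (3840, 2160)},
}

def flag_device_mismatch(claimed_device: str,
                         frame_size: tuple[int, int]) -> bool:
    """Return True when the stream's dimensions match nothing the
    claimed device's camera is known to produce.

    A True result is a single suspicious signal, to be correlated
    with network and liveness evidence rather than acted on alone.
    """
    known = KNOWN_CAPTURE_RESOLUTIONS.get(claimed_device)
    if known is None:
        return False  # unknown device: defer to the other layers
    return frame_size not in known
```

An injected stream claiming to be an iPhone but emitting, say, 1234×777 frames would trip this check, while an unrecognised device model is simply left to the other layers to judge.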
3: The Continuous Monitoring Advantage
Because this is an arms race, deployment is not the end of the story. Attack tools that were elite last year are now sold as “Crime-as-a-Service” packages to anyone with a payment card.
iProov’s Security Operations Center (iSOC) continuously analyses real-world attack data, updating detection methods and algorithms as the threat landscape evolves. This isn’t a system you set and forget, but an active, adaptive defence. A static system, however well-designed at launch, is a fixed target.
iSOC and active threat monitoring are a large part of why iProov’s detection keeps pace with attack evolution rather than chasing it. The threat changes; so does the response. This proactive security posture is the only one that makes sense in an ever-changing threat landscape, and when no single check can ever be the last word.
Closing Thoughts: The Reality Check
Today, AI-powered fraud is point-and-click easy. Deepfake kits are sold like streaming subscriptions. Almost no one can tell real from fake.
Single-point checks – particularly less-accredited liveness solutions – are obsolete against this threat.
As Gartner articulates in its report The Impact of AI and Deepfakes on Identity Verification:
“Product leaders in the identity verification space are being driven to adopt a more holistic approach that incorporates a multilayered defense strategy to defend against deepfakes.”
Multi-layered approaches that combine advanced liveness detection with metadata analysis and active threat monitoring aren’t just better. They’re necessary. Constant monitoring and active threat management aren’t optional extras; they’re the foundation.
The question isn’t whether multi-layered security is worth the investment. It’s whether you can afford to operate without it.
Interested in learning more about iProov’s layered biometric solutions?
- The first and only vendor to achieve an Ingenium Level 4 evaluation for injection attack detection – a standard that exceeds CEN TS 18099’s highest level (CEN High) in both scope and rigour. In an independent 40-day test, no injection attack pathway could be established: synthetic faces and deepfake videos had nowhere to go. Legitimate user rejection rate: just 1.3%, well below the 15% threshold required by the standard.
- Recently, the first and only vendor to meet the biometric verification requirements included in the new NIST Special Publication 800-63-4 Digital Identity Guidelines
- Book your consultative demo today.
