August 2, 2023
In 2022, iProov witnessed a sharp rise in a novel kind of face swap. This type is considerably more advanced than traditional face swaps: the output is three-dimensional and more resistant to established detection techniques. Our Threat Intelligence Report found that the frequency of this novel threat grew by 295% from H1 to H2 2022.
Bad actors are using sophisticated generative AI technology to create and launch attacks that attempt to exploit organizations' security systems and defraud individuals. Accordingly, we believe that awareness and understanding of deepfake technologies must be expanded and discussed more widely to counter these efforts. Without insight into the threat landscape, organizations will struggle to employ verification technology that can defend against these attacks.
This article explains what face swaps are, why they’re so uniquely dangerous, and discusses solutions to the growing threat.
Face swaps are a type of synthetic imagery created from two inputs: they take an existing video or live stream and superimpose another identity over the original feed in real time. The end result is a fake 3D video output blended from more than one face.
To summarize, a face swap boils down to a three-step process: analyzing the original video or live feed, mapping a second identity onto it in real time, and outputting the merged synthetic video.
A face matcher without adequate defenses in place would identify the output as the genuine individual.
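As a rough illustration, the three-step process described above can be sketched as a simple frame-processing loop. This is hypothetical pseudocode-style Python, not any real face-swap tool: `extract_landmarks`, `warp_identity`, and `blend` are trivial stubs standing in for the model-specific work (landmark detection, warping, and blending) that an actual swapper would perform.

```python
# Hypothetical sketch of a real-time face-swap pipeline (illustration only).
# The three helpers are trivial stubs so the control flow can run end to end;
# a real tool would use learned models at each step.

def extract_landmarks(frame):
    # Step 1: analyze the original feed, locating facial geometry (stubbed).
    return {"frame": frame, "landmarks": "face-geometry"}

def warp_identity(target_identity, landmarks):
    # Step 2: warp the second identity's face to match the source pose (stubbed).
    return f"{target_identity}-aligned-to-{landmarks['frame']}"

def blend(frame, warped_face):
    # Step 3: superimpose the warped face over the original frame (stubbed).
    return f"swapped({frame}, {warped_face})"

def face_swap_stream(source_frames, target_identity):
    """Run the three-step swap over each frame of a live feed."""
    for frame in source_frames:
        landmarks = extract_landmarks(frame)
        warped = warp_identity(target_identity, landmarks)
        yield blend(frame, warped)

# Because the swap runs per frame, the fraudster's live movements drive
# the synthetic output in real time.
out = list(face_swap_stream(["frame0", "frame1"], "identity_B"))
```

The per-frame loop is what makes the threat real-time: each incoming frame of the attacker's own feed is re-rendered with the victim's face before being passed on.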
A face swap attack refers specifically to using the above synthetic imagery alongside chosen deployment methodology (such as man-in-the-middle or camera bypass) to launch a targeted attack on a system or organization.
Criminals can use face swaps to commit crimes such as new account fraud, account takeover fraud, or synthetic identity fraud. You can imagine how effective a face swap could be during an online identity verification process, as a fraudster can control the actions of the outputted face at will. Face swaps are uniquely dangerous because they can be used in real time.
Picture this: a fraudster needs to pass a video call verification check. A traditional pre-recorded or 2D deepfake would be useless here, because it couldn't be used to answer questions in real time. With a live face swap, however, a criminal could complete the video call verification by blending their own movements (and even speech) with imagery of the genuine person they're impersonating, ultimately creating a synthetic output that fools the verification process.
Let’s consider a few additional issues associated with novel face swap attacks:
Face swap attacks are delivered via digital injection in an attempt to spoof a biometric authentication system.
A digital injection attack is one in which imagery is injected into an application or a network server connection, bypassing the camera sensor entirely.
A recorded video held up to a camera is known as a presentation attack, and we would not classify it as a face swap. A face swap uses one of any number of applications to apply a false digital representation of a person's face (in whole or in part) and overlay it on the actor's own. This is done by digital injection.
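The distinction between the two delivery paths can be made concrete with a toy model. This is an illustrative sketch only; the class names and the `captured_by_sensor` flag are hypothetical, and real injection detection is far more involved than checking a single field, not least because attackers can spoof provenance metadata.

```python
# Toy model of the two attack delivery paths (illustration only).
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: str
    captured_by_sensor: bool  # hypothetical provenance flag for this sketch

def camera_capture(scene):
    # Presentation attack: whatever is held up to the lens (a photo, a
    # replayed video) still passes through the physical camera sensor.
    return Frame(pixels=scene, captured_by_sensor=True)

def inject(payload):
    # Digital injection: synthetic imagery enters downstream of the sensor,
    # e.g. via a virtual camera driver or a tampered connection.
    return Frame(pixels=payload, captured_by_sensor=False)

def naive_sensor_check(frame):
    # A naive defense that only trusts sensor provenance. In practice this
    # flag can be forged, which is why dedicated injection detection and
    # ongoing vendor monitoring are needed.
    if not frame.captured_by_sensor:
        return "reject"
    return "continue to liveness checks"

presented = naive_sensor_check(camera_capture("printed photo of victim"))
injected = naive_sensor_check(inject("real-time face swap stream"))
```

The point of the sketch is structural: a presentation attack reaches the system through the sensor, so it can be caught by Presentation Attack Detection, while an injected face swap never touches the sensor at all and must be detected by other means.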
Digital injection attacks are the most dangerous deployment method because they are highly scalable and replicable. While Presentation Attack Detection is tested through programs such as NIST FRVT and iBeta, no equivalent accreditation exists for the detection of digital injection attacks, so organizations are advised to do their own research into how vendors mitigate this growing attack methodology and keep users safe on an ongoing basis.
As more and more activities move online and digital transformation and digital identity projects mature, the need for strong user verification and authentication is only set to grow in importance.
The truth is that traditional verification methods have failed to keep users safe online. You cannot trust data alone as confirmation of who someone is. Passwords can be cracked, stolen, lost, or shared. OTPs can be intercepted. Video call verification can be spoofed and relies on manual judgment, which can no longer reliably distinguish between genuine and synthetic imagery.
So, biometric face verification has emerged as the only method of verifying user identity online that is both secure and convenient. The caveat is that not all biometric face verification solutions are created equal.
Biometric solutions are differentiated by their ability to establish liveness and provide an inclusive user experience (with regard to age, gender, ethnicity, cognitive ability, and so on). For more information on liveness and the different biometric face verification technologies on the market, alongside their key differentiators, read our Demystifying Biometric Face Verification ebook here.
As we've highlighted in this article, face verification faces serious and growing security threats (as does any identity assurance technology). When choosing a biometric solution, you must understand these challenges in order to employ the appropriate verification technology.
Let's consider a few key factors when choosing a biometric vendor that defends against face swaps:
The technology to create deepfakes is continually getting better, cheaper, and more readily available. As the threat grows and more people become aware of the dangers, deepfake protection will only become more crucial.
Organizations need to evaluate their verification solutions for resilience against complex attacks such as face swaps. For more information about face swaps and the evolving threat landscape, read our latest report, the "iProov Biometric Threat Intelligence Report 2023". Inside, we illuminate the key attack patterns witnessed throughout 2022. The first of its kind, it highlights previously unknown in-production patterns of biometric attacks, helping organizations make informed decisions on which technology, and what level of security, to deploy. Read the full report here.