A deepfake video of former FTX CEO Sam Bankman-Fried (SBF) has been circulating on Twitter. Fraudsters looking to steal funds from users of the collapsed crypto exchange lured viewers to a website where they could supposedly be compensated for their losses by sending in crypto tokens and receiving double in return.

The fraudsters took old interview footage of Bankman-Fried and used a voice emulator to complete his likeness. This is not the first time a deepfake has been used to scam those in the crypto industry. In July 2022, a sophisticated scam using deepfake technology managed to drain liquidity from the Brazilian crypto exchange BlueBenx by impersonating the COO of Binance.

The recent high-profile SBF deepfake is just the tip of the iceberg. Criminals now have access to the technology and means to create incredibly realistic and convincing deepfakes, and they’re using them to launch large-scale attacks against organizations and their users worldwide.

This article will:

  • Look at how criminals are using deepfakes to attack organizations
  • Examine whether humans can successfully detect deepfakes
  • Recommend steps organizations can take to defend against the growing deepfake threat

How Are Deepfakes Being Used To Attack Organizations?

Video Conferencing

The global pandemic accelerated the transition from in-person to remote activities. Thanks to this, the video conferencing market has boomed and is expected to continue growing. Now that many organizations are communicating with colleagues, users, and job candidates remotely, criminals are using deepfakes to exploit this channel.

They’re doing this in several ways. For one, deepfakes are being used to enhance traditional Business Email Compromise (BEC), also known as CEO fraud. BEC is a highly targeted social engineering attack in which criminals impersonate an organization’s leader to convince staff to execute actions, such as making payments, rerouting payroll, or divulging sensitive information. By mimicking the faces and voices of individuals during video calls, deepfakes can make BEC scams far more convincing.

That’s not all. In 2022, the FBI warned that deepfakes are also being used for fraudulent job applications for remote tech roles. Read more about how deepfakes are used in remote working scams here.

Face Verification

Biometric face verification enables users to verify their identity and gain access to an online service by scanning a government-issued ID and their face. They can then use their face every time they wish to authenticate and return to the service.
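
To make this flow concrete, the sketch below shows the basic enroll-then-authenticate pattern in Python. It is a simplified illustration under stated assumptions, not any vendor’s implementation: `embed` stands in for whatever face-embedding model a system uses, and the 0.6 similarity threshold is invented for the example (real deployments tune thresholds empirically and layer on liveness checks, discussed later in this article).

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative only; production systems tune this empirically

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def onboard(id_photo_face, selfie, embed):
    """Enrollment: confirm the selfie matches the face on the government-issued ID,
    then keep the selfie embedding as the user's stored template."""
    id_emb, selfie_emb = embed(id_photo_face), embed(selfie)
    if cosine_similarity(id_emb, selfie_emb) < MATCH_THRESHOLD:
        return None  # selfie does not match the ID document; reject onboarding
    return selfie_emb

def authenticate(fresh_selfie, stored_template, embed) -> bool:
    """Every later login: compare a fresh face capture against the enrolled template."""
    return cosine_similarity(embed(fresh_selfie), stored_template) >= MATCH_THRESHOLD
```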

Automated face verification is a highly secure and usable means of identity verification for onboarding. Other remote methods, such as staff-to-user video calls, require costly resources and risk human error. Likewise, as an authentication method, face verification gives organizations the opportunity to go passwordless and resolves the security and usability issues that come with OTP authentication.

However, as the use of face verification has increased, bad actors have conceived new ways to circumvent these systems to gain unauthorized access to online services. One of these methods is the creation and use of deepfakes. Next, we will explore the ways in which criminals try to achieve this.

How Are Criminals Using Deepfakes To Exploit Face Verification?

Presentation Attacks

A presentation attack is the act of holding an artifact up to the user-facing camera to impersonate a legitimate user and spoof the face verification sequence. These artifacts can take the form of static images, videos (e.g., replays of previous authentication attempts), and high-quality masks. A deepfake video played on a device and held in front of the camera is another example of a presentation attack.

Presented deepfakes can be realistic and convincing. A non-reflective, high-resolution screen renders imagery so crisply that individual pixels are invisible to the naked eye at normal viewing distance. To defend against presentation attacks, including presented deepfakes, biometric face verification systems must incorporate liveness detection, which we will explore later.

Digital Injection Attacks

Digital injection enables criminals to feed deepfakes, of either synthetic or genuine individuals, directly into the data stream or the authentication process.

Digital injection attacks are the most dangerous form of threat because they are more difficult to detect than presentation attacks and can be replicated quickly. They carry none of the clues that physical artifacts leave when presented to a camera, making sophisticated attacks challenging for systems to detect and nearly impossible for humans.

These attacks are also far more scalable. Creating a deepfake and presenting it to a camera can be effective, but it is limited in scope: a criminal can only mount one such attack at a time.

Digital injection attacks, on the other hand, can be run from an attacker’s computer. Or they can be done using a thousand cloned devices in a data center operated by a criminal network.

Can Humans Be Trusted To Spot Deepfakes?

The SBF deepfake was mocked for its poor quality, and some Twitter users clearly spotted that it wasn’t a real video. Be that as it may, research has shown that humans are largely inept at spotting deepfakes, especially well-made ones.

In a study conducted by the IDIAP Research Institute, participants were shown progressively more convincing deepfakes interspersed with real videos and asked, ‘Is the face of the person in the video real or fake?’ Only 24% of participants successfully detected a ‘well-made’ deepfake.

Despite research showing the opposite, humans are unjustifiably confident in their ability to successfully detect deepfakes. In a recent survey conducted by iProov, 57% of consumers were confident that they could tell the difference between a real video and synthetic imagery.

This confidence is misplaced: the human eye is easily spoofed, and humans’ inability to tell a real person from a deepfake poses a serious problem for organizations that conduct identity verification via video conferencing. Without specialized software, such organizations have little assurance that the users they grant access to an online service are real people and not deepfakes.

How Can Organizations Defend Against the Deepfake Threat?

Liveness Detection

Liveness detection is incorporated into face verification and authentication systems to determine whether the individual asserting their identity is a real, live person and not a presented artifact.

There are a number of ways that a face verification system can achieve this. One is to ask the user to perform actions, such as reading a sequence of characters aloud, blinking, or moving their head (a minimal sketch of one such challenge follows). Yet deepfakes can be coded to perform these actions just as well, and active challenges also raise tricky questions around accessibility and inclusivity.
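
As an illustration of such a challenge, here is a minimal Python sketch of blink detection using the widely used eye-aspect-ratio (EAR) heuristic. This is a simplified example rather than any production check: the thresholds and frame counts are invented assumptions, and the six eye landmarks would come from a face-landmark model such as dlib or MediaPipe.

```python
import numpy as np

EAR_THRESHOLD = 0.2    # illustrative; below this the eye is treated as closed
MIN_CLOSED_FRAMES = 2  # debounce so single-frame noise isn't counted as a blink

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks ordered p1..p6 around one eye,
    as produced by common face-landmark models."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_detected(ear_per_frame: list[float]) -> bool:
    """Pass the active-liveness challenge only if the eye closes and reopens."""
    closed = 0
    for ear in ear_per_frame:
        if ear < EAR_THRESHOLD:
            closed += 1
        elif closed >= MIN_CLOSED_FRAMES:
            return True  # eye was closed for several frames, then reopened: a blink
        else:
            closed = 0
    return False
```

As noted above, a scripted deepfake can blink on cue, so a passed blink challenge is, on its own, weak evidence of liveness.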

Another approach is to detect liveness passively, i.e., without instructing the user to perform actions, instead using clues in the imagery itself to distinguish between real and fake. This way, the technology does the work and the experience stays intuitive for the user.

Liveness detection technology can therefore detect a deepfake when it is used in a presentation attack. But as mentioned previously, criminals now have the capability to inject deepfakes directly into the data stream, bypassing the camera altogether.

One-Time Biometrics

For high-risk use cases, such as opening a new account or transferring a large sum of money, most liveness detection technology does not provide a high enough level of assurance. Deepfakes can emulate a person verifying themselves in ways some liveness technology cannot spot. Advanced methods are needed to secure against advanced threat types.

One-time biometrics, which assure both that a user is a real, live person and that they are verifying in real time, are essential to an organization’s defense strategy against deepfakes.

A one-time biometric is an authentication that takes place in real time to assure that a user is ‘live’ and genuinely present. It is never repeated in a user’s lifetime and is valid only for a limited duration; it cannot be reused or recreated and is worthless if stolen.

One way to achieve this with a standard device is to use the screen to project controlled illumination onto the user’s face to create a one-time biometric. Once used, it can’t be replayed by a person attempting to use a previous authentication to spoof the system.

Another advantage is that if it’s stolen, it’s worthless because it’s one-time and obsolete as soon as it’s used.
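
To illustrate the one-time principle, here is a deliberately simplified Python sketch; it is not iProov’s actual protocol, and the colour palette, sequence length, and 30-second lifetime are all invented for the example. The properties that matter are that each challenge is random, short-lived, and consumed on first use.

```python
import secrets
import time

CHALLENGE_TTL_SECONDS = 30  # illustrative lifetime for a single challenge
COLORS = ["red", "green", "blue", "yellow"]

# Maps a one-time token to (illumination sequence, expiry timestamp).
issued_challenges: dict[str, tuple[list[str], float]] = {}

def issue_challenge() -> tuple[str, list[str]]:
    """Server side: generate a single-use illumination sequence for the screen to flash."""
    token = secrets.token_urlsafe(16)
    sequence = [secrets.choice(COLORS) for _ in range(8)]
    issued_challenges[token] = (sequence, time.time() + CHALLENGE_TTL_SECONDS)
    return token, sequence

def verify_response(token: str, observed_sequence: list[str], face_matches: bool) -> bool:
    """Accept only if the face matches AND the reflected sequence matches a live,
    unexpired, never-before-used challenge. The challenge is consumed either way,
    so a replayed or stolen response is worthless."""
    challenge = issued_challenges.pop(token, None)  # single use: removed immediately
    if challenge is None:
        return False  # unknown or already-used token
    sequence, expires_at = challenge
    if time.time() > expires_at:
        return False  # challenge expired
    return face_matches and observed_sequence == sequence
```

Because the token is deleted the moment it is checked and the illumination sequence is freshly randomized per attempt, a replayed recording or a pre-rendered deepfake cannot produce the correct response to a new challenge.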

Request a demo here to find out how iProov uses liveness detection and one-time biometrics to assure that a user is the right person, a real person, and genuinely present at the time of authentication.
