August 26, 2022
You’ve probably seen a deepfake video – even if you didn’t realize it. Computer-generated Tom Cruises have been popping up all over the web during the past few years. Mark Zuckerberg is another common target, with videos circulating of him saying things he didn’t actually say. Then there was Channel 4’s infamous deepfake of the Queen, delivering an alternative Christmas message in the UK.
Deepfakes aren’t a new problem, but the tools needed to create them are becoming more readily available and more advanced. Ultimately, deepfakes are dangerous because they make it difficult for us to trust what we see and hear online. The potential for misuse, and the threat it poses to consumers, governments, and enterprises, cannot be overstated.
Despite the growing threat of deepfakes to society, many people still don’t know what a deepfake is. To better understand the deepfake landscape, iProov surveyed 16,000 people across eight countries in 2022 (the U.S., Canada, Mexico, Germany, Italy, Spain, the UK and Australia), asking them a number of questions about deepfakes.
In this article we’ll share our new data, compare it to our results from 3 years prior, and discuss solutions to the growing threat.
We asked: “Do you know what a deepfake video is?”
Summary:
The percentage of people who know what a deepfake is has more than doubled since our last survey – in 2019, only 13% said they knew what a deepfake was, compared with 29% in 2022. On the one hand, it’s positive that awareness of the deepfake threat is growing. On the other, it’s concerning that just 29% of people are aware of deepfakes in 2022. Deepfakes carry significant potential for misuse and fraud, and people who don’t know what they are will be far less prepared to recognize when they are being spoofed.
What are deepfakes? Deepfakes are videos or images created using AI-powered deep learning software that show people saying and doing things they never said or did. Deepfakes are increasingly being used to commit cybercrime – for financial gain, social disruption, voting fraud, or other nefarious purposes. By impersonating someone else, a fraudster can use a deepfake to access services under a stolen identity, or to reach services they could never access under their own. Deepfakes can be used in synthetic identity fraud, new account fraud, account takeover fraud, and more. They can be created through face swaps, re-enactments, or imagery synthesized by Generative Adversarial Networks (GANs), and can be deployed in a number of threat types, such as presentation attacks or digital injection attacks.
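Since GANs underlie much of today’s synthetic imagery, a deliberately tiny sketch of the idea may help. The PyTorch snippet below shows the adversarial training loop at the heart of the technique: a generator learns to produce fakes while a discriminator learns to reject them. The network sizes, learning rates, and random stand-in data are assumptions for illustration only; real deepfake pipelines are vastly larger and more specialized.

```python
# Toy GAN sketch (PyTorch) -- illustrative only, not a deepfake pipeline.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # assumed toy dimensions

# Generator: maps random noise to a fake "image" vector.
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())

# Discriminator: scores how "real" an image vector looks (1 = real).
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, IMG_DIM) * 2 - 1   # stand-in batch of real images in [-1, 1]
noise = torch.randn(32, LATENT_DIM)

# Discriminator step: learn to tell real samples from generated ones.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: learn to fool the discriminator.
fake = G(noise)
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In practice, such generators are trained on large face datasets and combined with face-swapping or re-enactment pipelines; the adversarial loop above is only the core idea.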
Ultimately, awareness of deepfakes and an understanding of the solutions available must be expanded and discussed more widely.
We asked: “Do you think you would be able to tell the difference between a real video and a deepfake?”
Summary:
57% of global respondents believe they could tell the difference between a real video and a deepfake, up from 37% in 2019. This is concerning, because the truth is that sophisticated deepfakes can be indistinguishable to the human eye. Detecting a deepfake requires deep learning and computer vision technologies that analyze certain properties, such as how light reflects on real skin versus on replayed imagery or synthetic skin.
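To give a flavor of what machine analysis looks like, here is a minimal, hypothetical sketch of a per-frame real-versus-spoof classifier. The architecture, input size, and random stand-in data are assumptions for demonstration only, and bear no resemblance to any production anti-spoofing model.

```python
# Illustrative real-vs-spoof frame classifier sketch (PyTorch).
import torch
import torch.nn as nn

class SpoofClassifier(nn.Module):
    """Tiny CNN that scores a face crop as genuine (1) or spoofed (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1),                                # global pool
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = SpoofClassifier()
frames = torch.rand(8, 3, 64, 64)   # stand-in batch of face crops
scores = model(frames)              # per-frame "genuine" probability
print(scores.squeeze(1))
```

A real anti-spoofing model would be trained on large labeled datasets of genuine and attack footage, and would pick up far subtler cues (skin reflectance, moiré patterns, compression artifacts) than this toy network could learn.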
The real problem right now is high-end deepfakes, such as the infamous Tom Cruise videos, which require sophisticated tools, knowledge, and time to create.
If we are over-confident in our ability to spot deepfakes, then we are more at risk of being fooled by one.
We asked respondents: “Which of the following worries you most about how deepfakes could be used against you? Please select all that apply.”
Summary:
People have wide-ranging fears surrounding deepfakes. The most common fears center on theft and mistrust. And these fears are not misplaced: deepfakes have been used in real life to push political disinformation, harass activists, scam a CEO out of $243,000, and create fake accounts on social media to defraud genuine users.
As consumers, we continue to do more and more activities digitally, which means we need to be able to confirm our identity online. Yet the technology to create deepfakes is continually getting better, cheaper, and more readily available. That’s why deepfake protection will become more and more crucial as the deepfake threat grows, and more people become aware of the dangers.
We then asked respondents: “Which of the following statements do you agree with most about deepfakes? Please select all that apply.”
Summary:
Overall, this is largely similar to the data we collected in 2019. In 2019, 58% agreed that deepfakes were a growing concern – exactly the same figure as in 2022. What this shows is that consumers are rightly worried about the erosion of trust online. This is the difficult problem that iProov strives to solve: our patented biometric authentication technology can assure the genuine presence of a real, verifiable individual, confirming that they are who they claim to be and that they are not a deepfake or other presentation or digital injection attack.
Think of the thing that you are least likely to ever say or do. Now imagine your friends, family, or employer being shown a convincing video of you saying or doing it. It is easy to see the potential for malicious misuse. Of course, not all deepfakes are malicious or dangerous. Many have been used for social sharing and entertainment. But they have also been employed in hoaxes, revenge porn, and increasingly, fraud and impersonation.
Recorded Future reported that criminals are willing to pay around $16,000 for the creation of a high-end deepfake. Such deepfakes can then facilitate advanced social engineering attacks for a significant profit. The problem will only worsen as deepfake capabilities become more accessible.
We asked survey respondents: “Would you be more likely to use an online service that had measures in place to prevent deepfakes being used?”
Summary:
People value deepfake protection slightly more in 2022 than they did back in 2019: 80% of global respondents in 2022 stated that they would be more likely to use an online service with measures in place to prevent deepfakes being used, up from 75% in 2019.
What this demonstrates is that consumers want reassurance that they are being protected against deepfake attacks. By implementing iProov’s Genuine Presence Assurance® technology, governments and enterprises can deliver online verification and authentication that protects against synthetic media such as deepfakes.
Biometric authentication is used to prove that a person is who they say they are in an online interaction – such as signing into a bank account or enrolling in a new online service, like a government scheme.
Cybercriminals are savvy, and they try an ever-increasing number of methods to circumvent biometric authentication security. They might hold photos or pre-recorded videos up to a device’s camera in a presentation attack, or digitally inject synthetic imagery directly into the data stream.
Researchers expect criminals to increase their use of deepfakes in the coming years, according to Europol – which makes it vital that we understand the deepfake threat and prepare ourselves.
That’s why liveness detection is crucial. Essentially, liveness detection ensures that an online user is a real person. It uses various technologies to differentiate between genuine humans and spoof artifacts. Without liveness detection, a criminal could successfully spoof a system with presented artifacts such as photographs, videos, or masks.
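To make the idea concrete, below is a minimal conceptual sketch of a passive liveness decision that pools per-frame scores from an anti-spoofing model. The scoring function, mean-pooling, and threshold are illustrative assumptions, not any vendor’s actual decision logic.

```python
# Hedged sketch of a passive liveness decision over a short video clip.
import torch

def liveness_decision(score_frames, clip, threshold=0.7):
    """clip: (num_frames, 3, H, W) tensor; score_frames returns (num_frames,)
    probabilities that each frame shows a genuine, live person."""
    with torch.no_grad():
        scores = score_frames(clip)      # per-frame "genuine" probability
    return scores.mean().item() >= threshold

# Stand-in scorer: in practice this would be a trained anti-spoofing CNN,
# like the SpoofClassifier sketched earlier in this article.
dummy_scorer = lambda clip: torch.rand(clip.shape[0])

clip = torch.rand(16, 3, 64, 64)         # stand-in 16-frame video clip
print(liveness_decision(dummy_scorer, clip))
```

Production systems are far more sophisticated, but the basic shape – score the frames, pool the scores, apply a decision threshold – is the same.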
Not all liveness detection is created equal, however. Most liveness detection technology can detect a presentation attack: the use of physical artifacts, such as masks, or recorded sessions played back to the device’s camera in an attempt to spoof the system. This category also covers a deepfake video held up in front of the camera.
However, most liveness providers cannot detect a digital injection attack, which bypasses the device camera (mobile or desktop) to inject synthetic imagery directly into the data stream. Only iProov’s Genuine Presence Assurance includes liveness detection technology that delivers the highest level of assurance: GPA can detect both presented deepfakes and deepfakes used in digital injection attacks.
iProov’s patented Flashmark™ technology uses controlled illumination to create a one-time biometric that cannot be recreated or reused, providing stronger anti-spoofing protection across a range of attacks. That covers not just standard presentation attacks but also highly scalable injection attacks using deepfakes and sophisticated replays, delivering an industry-leading level of assurance that the person is real and is authenticating right now.
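As a purely conceptual illustration of the general challenge-response idea – emphatically not iProov’s proprietary Flashmark implementation, which is not public – the toy sketch below issues a one-time random illumination challenge and accepts a session only if the measured response correlates with it:

```python
# Conceptual challenge-response liveness sketch (NumPy). All functions,
# thresholds, and the reflection model are invented for illustration.
import numpy as np

rng = np.random.default_rng()

def make_challenge(n_flashes=8):
    """One-time random colour sequence shown on the user's screen."""
    return rng.random((n_flashes, 3))           # RGB values in [0, 1]

def reflected_response(challenge, is_live, noise=0.05):
    """Simulated colour measured on the face for each flash. A live face
    reflects the challenge; a pre-recorded replay cannot anticipate it."""
    if is_live:
        return challenge + rng.normal(0, noise, challenge.shape)
    return rng.random(challenge.shape)           # uncorrelated with challenge

def verify(challenge, response, min_corr=0.8):
    """Accept only if the reflected colours track the random challenge."""
    corr = np.corrcoef(challenge.ravel(), response.ravel())[0, 1]
    return corr >= min_corr

challenge = make_challenge()
print("live:  ", verify(challenge, reflected_response(challenge, True)))
print("replay:", verify(challenge, reflected_response(challenge, False)))
```

The value of a one-time challenge is that a pre-recorded or synthesized response cannot anticipate it, which is what makes scalable replay and injection attacks so much harder.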
You can read more about Genuine Presence Assurance here and the innovative Flashmark technology powering it here.
Find out how iProov protects against deepfakes – book your iProov demo or contact us.