October 1, 2019
A new nationwide study from iProov has revealed a stark lack of awareness and education around deepfake technology amongst the UK public, with almost three-quarters (72%) saying they have never even heard of a deepfake video.
The research, conducted by a market leader in biometric facial authentication, polled 2,000 respondents across the UK to reveal their attitudes towards, and understanding of, deepfake technology. The results highlight a need for awareness and education on the impact of deepfakes which, if not addressed, will have huge implications for personal and professional security.
Commenting on the findings, Andrew Bud, Founder & CEO, iProov, said: “Awareness is the first defence against any cyber-security threat, as we’ve already seen with attacks like phishing and ransomware. Deepfakes, however, represent a whole new kind of danger to businesses and individuals.
“Technology also has a big role to play in combating the threat, yet if the vast majority of people in the UK have such little awareness of deepfakes right now, they simply cannot begin to prepare themselves as they need to.”
The underlying societal threat
Until recently, deepfakes were a nascent concept. Today, however, the technology behind them threatens to undermine trust in moving images and is becoming increasingly accessible – whether through the creation of fake news, the spoofing of identity checks required to log into a bank account, or even revenge pornography. Yet the research has revealed members of the public to be largely unaware of these threats.
Interestingly, once those surveyed were provided with a definition of a deepfake video, they began to recognise the technology’s mounting threat. In fact, just under two-thirds (65%) of people said that their newfound knowledge of the existence of deepfakes undermined their trust in the internet.
Notably, consumers went on to cite identity theft as the biggest concern (42%) for how they believed deepfake technology could be misused. Almost three-quarters (72%) of respondents also said they would be far more likely to use an online service with preventative measures in place to mitigate the use of deepfakes.
Despite the security concerns raised around identity theft, more than half of all respondents (55%) surprisingly named social networks as the party most responsible for dealing with AI-generated synthetic videos.
Bud added: “Taking the fight to this new wave of fraud means that security measures in this new post-truth era simply have to be as creative, sophisticated and fast-moving as the fraudsters.
“Whilst adoption of biometric technology to crack down on deepfakes is growing amongst financial institutions, governments and large-scale enterprises, the challenge ahead lies in the effective detection of genuine human presence – a challenge that should not be underestimated.”