A new nationwide study from iProov has revealed a stark lack of awareness and education around deepfake technology amongst the UK public, with almost three-quarters (72%) saying they have never even heard of a deepfake video.
- Almost three-quarters (72%) of Brits have never heard of a deepfake video
- Even when given a full definition, almost one-third (28%) believe deepfake videos to be harmless
- Over two-thirds (70%) confessed they would not be able to tell the difference between a deepfake and a real video
- More than half (55%) of Brits believe social networks are responsible for combating deepfakes
We are pleased to make our HTML5 V2 Beta client publicly available on GitHub.
Many of our partners and customers have been building web journeys for desktop, tablet and mobile. These sit alongside native applications as a major channel for user interactions.
Written by Jim Brenner, Research Scientist
As deepfakes continue to alarm journalists, politicians and pretty much anyone who dares to participate in 2019’s digital-media-ruled society, many are looking beyond the J-Law/Steve Buscemi mashups and Trump/Dr. Evil sketches. Fears of a dystopian future in which we have completely lost trust in our own senses are becoming increasingly real. Researchers, governments and tech companies are now wrestling with the threat posed by this AI-generated imagery, investigating how we might protect ourselves from future attacks of visual misinformation.
So far, identifying deepfakes has relied on finding visual cues in synthetic imagery that don’t match the real thing. For example, a recent study extracted distinctive facial expressions from footage of world leaders to distinguish real imagery from fake. However, such approaches are something of a cat-and-mouse struggle: when a new way to identify deepfakes is developed, deepfakers simply train models to defeat it. In fact, as I was drafting this post, a new expression style-transfer technique was published that may make deepfakes much harder to detect in this way.
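As a toy illustration of this cue-based approach (not iProov's method), one well-known early cue was blink rate: some deepfake generators were trained on photos of open eyes and produced faces that rarely blinked. The sketch below assumes a hypothetical per-frame eye-aspect-ratio (EAR) signal has already been extracted by a facial-landmark detector, and flags clips whose blink rate is implausibly low:

```python
def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks in a per-frame eye-aspect-ratio (EAR) signal.
    A blink is one contiguous run of frames with EAR below the
    closed-eye threshold."""
    blinks, in_blink = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not in_blink:
            blinks += 1
            in_blink = True
        elif ear >= closed_thresh:
            in_blink = False
    return blinks


def looks_synthetic(ear_series, fps=30, min_blinks_per_min=6):
    """Heuristic check: people typically blink 15-20 times per minute,
    while early deepfakes often blinked far less. Returns True when the
    clip's blink rate falls below the suspicion threshold."""
    minutes = len(ear_series) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_min
```

This is exactly the kind of cue a deepfaker can train away once it is published, which is the cat-and-mouse problem described above.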
Gabriel Turner, Product Manager
Presentation Attack Detection (PAD) is an increasingly hot topic within the biometrics industry. While PAD is certainly a step in the right direction, cyber-thieves are still diligently exploiting security gaps in identity proofing and strong customer authentication, and focusing on presentation attacks alone fails to address other forms of identity spoofing.
This article will illustrate why PAD alone does not guarantee biometric security: CSOs and compliance leads must also consider Replay Attack Detection (RAD).
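To make the replay problem concrete, here is a minimal sketch of a challenge-response flow (illustrative only, not iProov's protocol; the `Verifier` class, shared key and HMAC binding are all assumptions for the example). The server issues a one-time nonce that the client must bind into its biometric response; a response captured and replayed later fails, because its nonce has already been consumed:

```python
import hashlib
import hmac
import secrets


class Verifier:
    """Toy server: issues one-time challenges and rejects replays."""

    def __init__(self, shared_key: bytes):
        self.key = shared_key
        self.outstanding = set()  # nonces issued but not yet used

    def issue_challenge(self) -> str:
        nonce = secrets.token_hex(16)
        self.outstanding.add(nonce)
        return nonce

    def verify(self, nonce: str, capture: bytes, tag: str) -> bool:
        # A replayed response carries a nonce that was already consumed
        # (or never issued), so it is rejected before any biometric check.
        if nonce not in self.outstanding:
            return False
        self.outstanding.discard(nonce)  # strict one-time use
        expected = hmac.new(self.key, nonce.encode() + capture,
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)


def client_respond(shared_key: bytes, nonce: str, capture: bytes) -> str:
    """Toy client: binds the fresh nonce into the captured biometric data."""
    return hmac.new(shared_key, nonce.encode() + capture,
                    hashlib.sha256).hexdigest()
```

The point of the sketch is the freshness binding: PAD asks "is this a real face?", while RAD-style defences ask "is this capture fresh, or a recording of an earlier genuine one?"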
Written by Tom Whitney, Head of Solutions Consultancy
In a world where most transactions are digital, remote and involve fewer and fewer personal interactions, proving who you are matters more than ever.
We use digital accounts to access everything from groceries to border control, from phone apps to pension statements. At the same time, organisations such as retailers, banks and government departments try to learn as much about you as possible to provide the most personalised experience, whether for commercial loyalty-card data capture, citizen security or Know Your Customer regulatory obligations.