Face biometric technology is commonly grouped under the catchall phrase of ‘Face Recognition.’ However, there is a critical distinction between ‘Identification’ and ‘Authentication.’
Facial Identification technology aims to increase human efficiency, and is often utilised in surveillance settings to help match a face against a database or watchlist of individuals. This is known as a one-to-many search. This use of Face Recognition for surveillance has sparked major privacy and human rights debates, owing to a lack of legal regulation and grey areas around consent. Facial Identification is often undertaken without meaningful consent and provides no direct benefit to the individual; instead, it aims to promote a safer society overall.
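The distinction can be made concrete with a toy sketch. This is an illustrative example only, not iProov's implementation: it assumes faces have already been converted to embedding vectors by some hypothetical model, and compares them with cosine similarity. Identification searches one probe against many enrolled templates; authentication checks the probe against a single claimed identity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.7):
    """One-to-many search: compare a probe embedding against every enrolled
    identity and return the best match above the threshold, or None."""
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

def authenticate(probe, template, threshold=0.7):
    """One-to-one check: does the probe match the single claimed identity?"""
    return cosine_similarity(probe, template) >= threshold
```

In practice the threshold trades false accepts against false rejects, and the one-to-many case is the harder one: the more identities in the gallery, the more chances for a chance high-scoring match.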
A new nationwide study from iProov has revealed a stark lack of awareness and education around deepfake technology among the UK public, with almost three-quarters (72%) saying they have never even heard of a deepfake video.
- Almost three-quarters (72%) of Brits have never heard of a deepfake video
- Even when given a full definition, almost one-third (28%) believe deepfake videos to be harmless
- Over two-thirds (70%) confessed they would not be able to tell the difference between a deepfake and a real video
- More than half (55%) of Brits believe social networks are responsible for combatting deepfakes
We are pleased to make our HTML5 V2 Beta client publicly available on GitHub.
Many of our partners and customers have been building web journeys for desktop, tablet and mobile. These sit alongside native applications as a major channel for user interactions.
Written by Jim Brenner, Research Scientist
As deepfakes continue to alarm journalists, politicians and pretty much anyone who dares participate in 2019’s digital-media-ruled society, many are looking beyond the J-Law/Steve Buscemi mashups and Trump/Dr. Evil sketches. Fears of a dystopian future in which we have completely lost trust in our physical senses are becoming increasingly real. Researchers, governments and tech companies are now wrestling with the very real threat posed by this AI-generated imagery, investigating how we might protect ourselves from future attacks of visual misinformation.
So far, identifying deepfakes has relied on finding visual cues in synthetic imagery that don’t match the real thing. For example, a recent study extracted distinctive facial expressions from world leaders to separate genuine imagery from fakes. However, such approaches are something of a cat-and-mouse struggle: when a new way to identify deepfakes is developed, deepfakers simply train models to defeat it. In fact, as I was drafting this post, a new expression style-transfer technique was published which will potentially make deepfakes much harder to detect in this manner.
Gabriel Turner, Product Manager
Presentation Attack Detection, or "PAD," is an increasingly hot topic within the biometrics industry. While this attention is definitely a step in the right direction, cyber-thieves are still diligently exploiting security gaps in identity proofing and strong customer authentication. An exclusive focus on presentation attacks fails to address vulnerabilities to other forms of identity spoofing.
This article will illustrate how PAD alone does not guarantee biometric security. CSOs and compliance leads must also consider Replay Attack Detection, or "RAD."
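The core idea behind replay defence can be sketched with a simple challenge-response protocol. This is a hypothetical illustration, not iProov's method: the server issues a fresh one-time nonce for each capture session, the submission is cryptographically bound to that nonce, and the nonce is consumed on first use, so a recorded submission replayed later is rejected. For simplicity the sketch keeps client and server in one process sharing a key; a real deployment would keep the key server-side and bind the nonce into the capture itself.

```python
import hmac
import hashlib
import os
import time

SERVER_KEY = os.urandom(32)   # verification key (server-side in reality)
ISSUED = {}                   # nonce -> issue timestamp
NONCE_TTL = 30                # seconds a challenge stays valid

def issue_challenge():
    """Server issues a fresh one-time nonce for this capture session."""
    nonce = os.urandom(16).hex()
    ISSUED[nonce] = time.time()
    return nonce

def sign_capture(nonce, capture_bytes):
    """Bind the biometric capture to the challenge it was made for."""
    return hmac.new(SERVER_KEY, nonce.encode() + capture_bytes,
                    hashlib.sha256).hexdigest()

def verify(nonce, capture_bytes, tag):
    """Server checks freshness, one-time use, and binding.
    A replayed recording fails: its nonce is already spent or expired."""
    issued_at = ISSUED.pop(nonce, None)   # consume the nonce on first use
    if issued_at is None or time.time() - issued_at > NONCE_TTL:
        return False
    expected = hmac.new(SERVER_KEY, nonce.encode() + capture_bytes,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The key property is that a valid submission verifies once and only once; capturing the traffic and resubmitting it later yields a spent nonce and a rejection, which is precisely the gap PAD alone does not close.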