The Replay Challenge


Written by Andrew Bud, CEO & Founder

When we founded iProov in 2011, it seemed obvious to us that “replay attacks” would be amongst the most dangerous threats to face verification. These occur when an app, device, communications link or data store is compromised and video imagery of a victim is stolen; the stolen imagery is subsequently used to impersonate that victim. Right from the start, we designed our system to be strongly resilient to this hazard. Only now, however, is the market beginning to understand the danger of replay attacks.

So, what is a replay attack and how can it be resisted?
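
In essence, a replay defence must guarantee that imagery captured for one session can never satisfy another. Below is a minimal, hypothetical sketch of one generic approach, a server-issued one-time challenge cryptographically bound to the capture. It is illustrative only and says nothing about iProov's actual mechanism; the function names, the 30-second TTL and the HMAC binding are all assumptions.

```python
import os
import time
import hmac
import hashlib

# Hypothetical sketch of a generic challenge-response defence against
# replay: the server issues a one-time nonce that must be bound to the
# face capture, so a recording stolen from an earlier session cannot
# satisfy a new authentication attempt.

CHALLENGE_TTL_SECONDS = 30          # challenges expire quickly
_active_challenges = {}             # nonce -> (user_id, issued_at)

def issue_challenge(user_id: str) -> bytes:
    """Create a fresh, single-use nonce for one authentication attempt."""
    nonce = os.urandom(16)
    _active_challenges[nonce] = (user_id, time.time())
    return nonce

def verify_response(user_id: str, nonce: bytes, capture: bytes,
                    response_tag: bytes, key: bytes) -> bool:
    """Accept a capture only if it is bound to a live, unexpired nonce."""
    entry = _active_challenges.pop(nonce, None)   # single use: always consume
    if entry is None:
        return False                              # unknown or already-used nonce
    owner, issued_at = entry
    if owner != user_id or time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False                              # wrong user or expired
    # The client proves the capture was made for THIS challenge, e.g. by
    # MACing capture || nonce with a shared key (illustrative binding only).
    expected = hmac.new(key, capture + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response_tag)

# Example round trip: the second attempt fails because the nonce is spent.
key = os.urandom(32)
nonce = issue_challenge("alice")
capture = b"<face video frames>"
tag = hmac.new(key, capture + nonce, hashlib.sha256).digest()
print(verify_response("alice", nonce, capture, tag, key))   # True
print(verify_response("alice", nonce, capture, tag, key))   # False: replayed
```

Because each nonce is consumed on first use and expires within seconds, imagery stolen from an earlier session carries a dead challenge and is rejected.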

Continue reading


Face biometric technology is commonly grouped under the catch-all phrase of ‘Face Recognition’. However, there is a critical distinction between ‘Identification’ and ‘Authentication’.

Facial Identification technology aims to increase human efficiency and is often deployed in surveillance settings to help match a face against a database or a watchlist of individuals. This is known as a one-to-many search. The use of Face Recognition for surveillance has sparked major privacy and human rights debates, owing to a lack of legal regulation and grey areas around consent. Facial Identification is often undertaken without meaningful consent and provides no direct benefit to the individual; instead, it aims to promote a safer society overall.
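
The distinction is easy to see in code. The toy sketch below contrasts a one-to-many search with a one-to-one match using cosine similarity over face embeddings; the random embeddings, the 0.8 threshold and the function names are all invented for illustration.

```python
import numpy as np

# Toy illustration of the 1:N vs 1:1 distinction. The embeddings are
# random stand-ins; real systems use learned face embeddings and
# carefully calibrated thresholds (0.8 here is purely illustrative).

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery):
    """Facial Identification: one-to-many search over everyone enrolled."""
    best_id, best_score = None, -1.0
    for person_id, template in gallery.items():
        score = cosine(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score > 0.8 else None

def authenticate(probe, claimed_template):
    """Facial Authentication: one-to-one match against a claimed identity."""
    return cosine(probe, claimed_template) > 0.8

# Example: enrol two people, then run both operations on a probe image.
rng = np.random.default_rng(42)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + rng.normal(scale=0.1, size=128)  # noisy re-capture

print(identify(probe, gallery))               # 1:N -> "alice" (or None)
print(authenticate(probe, gallery["alice"]))  # 1:1 -> True
```

Identification asks “who is this, out of everyone enrolled?”; Authentication asks “is this really the person they claim to be?”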

Continue reading


A brand-new nationwide study from iProov has revealed a striking lack of awareness and education around deepfake technology amongst the UK public, with almost three-quarters (72%) saying they have never even heard of a deepfake video.

Key findings: 

  • Almost three-quarters (72%) of Brits have never heard of a deepfake video 
  • Even when given a full definition, more than a quarter (28%) believe deepfake videos to be harmless
  • Over two-thirds (70%) confessed they would not be able to tell the difference between a deepfake and a real video
  • More than half (55%) of Brits believe social networks are responsible for combating deepfakes

Continue reading

HTML5 v2 Beta


We are pleased to make our HTML5 v2 Beta client publicly available on GitHub.

Many of our partners and customers have been building web journeys for desktop, tablet and mobile. These sit alongside native applications as a major channel for user interactions.

Continue reading


Written by Jim Brenner, Research Scientist 

As deepfakes continue to alarm journalists, politicians and pretty much anyone who dares participate in 2019’s digital-media-ruled society, many are looking beyond the J-Law/Steve Buscemi mashups and Trump/Dr. Evil sketches. Fears of a dystopian future in which we have completely lost trust in our own senses are becoming increasingly real. Researchers, governments and tech companies are now wrestling with the very real threat posed by this AI-generated imagery, investigating how we might protect ourselves from future attacks of visual misinformation.

So far, identifying deepfakes has relied upon finding visual cues in synthetic imagery that don’t match the real thing. For example, a recent study extracted distinctive facial expressions from world leaders to distinguish real imagery from fake. However, such approaches are something of a cat-and-mouse struggle: when a new way to identify deepfakes is developed, deepfakers simply train models to defeat it. In fact, as I was drafting this post, a new technique for expression style transfer was published which will potentially make it much harder to detect deepfakes in this manner.
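
To make the cue-based approach concrete, here is a hedged sketch of the kind of behavioural model the world-leaders study describes: learn a specific person’s characteristic facial behaviour from genuine footage only, then flag clips whose behaviour falls outside it. The per-clip action-unit features are assumed (random stand-ins here so the snippet runs end to end), and the one-class SVM parameters are illustrative choices of mine, not the study’s.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Sketch of a person-specific forensic detector: model one individual's
# characteristic facial behaviour from genuine footage, then flag clips
# whose behaviour falls outside that model. Real features would be
# per-clip facial action-unit statistics from a face-analysis toolkit;
# random vectors stand in for them here so the pipeline runs.

rng = np.random.default_rng(0)
real_clips = rng.normal(0.0, 1.0, size=(200, 20))   # stand-in AU features
suspect_clip = rng.normal(3.0, 1.0, size=(1, 20))   # behaviourally "off"

# Train only on genuine footage: we model what the real person looks
# like, not what any particular deepfake technique looks like.
detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(real_clips)

print(detector.predict(suspect_clip))   # -1 => outlier, i.e. possibly fake
```

Training only on genuine footage is what makes the approach person-specific, and also what invites the cat-and-mouse response: a generator tuned to mimic the target’s mannerisms slips back inside the model.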

Continue reading

iProov Limited

WeWork
10 York Road
London
SE1 7ND
United Kingdom

Contact Us

Tel: +44 20 7993 2379
