The Deepfake Age: Who gatekeeps digital trust?


Written by Jim Brenner, Research Scientist 

As deepfakes continue to alarm journalists, politicians and pretty much anyone who dares participate in 2019’s digital-media-ruled society, many are looking beyond the J-Law/Steve Buscemi mashups and Trump/Dr. Evil sketches. Fears of a dystopian future in which we have completely lost trust in our own senses are becoming increasingly real. Researchers, governments and tech companies are now wrestling with the very real threat posed by this AI-generated imagery, investigating how we might protect ourselves from future attacks of visual misinformation.

So far, identifying deepfakes has relied on finding visual cues in synthetic imagery that don’t match the real deal. For example, a recent study extracted distinctive facial expressions from world leaders to distinguish real footage from fake. However, such approaches amount to a cat-and-mouse game: when a new way to identify deepfakes is developed, deepfakers simply train models to defeat it. In fact, as I was drafting this post, a new expression style-transfer technique was published that will potentially make deepfakes much harder to detect in this manner.
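
To give a rough sense of what such cue-based detection looks like in practice, here is a minimal, hypothetical sketch that fits a simple real-vs-fake classifier on per-video expression statistics. This is not the pipeline from the study mentioned above: the feature extraction step (for example, facial action-unit intensities from a landmark tracker) is assumed to happen elsewhere, and the function names are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_expression_detector(real_features: np.ndarray,
                              fake_features: np.ndarray) -> LogisticRegression:
    """Fit a simple real-vs-fake classifier on per-video expression statistics.

    Each row is one video's feature vector, e.g. average facial action-unit
    intensities produced by a separate (assumed) feature extractor.
    """
    X = np.vstack([real_features, fake_features])
    y = np.concatenate([np.zeros(len(real_features)),   # 0 = genuine
                        np.ones(len(fake_features))])   # 1 = synthetic
    return LogisticRegression(max_iter=1000).fit(X, y)

def score_video(model: LogisticRegression, features: np.ndarray) -> float:
    """Return the probability that a video's expression profile is synthetic."""
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

The weakness of this whole family of detectors is visible in the code itself: it only learns the cues present in today’s fakes, so a generator trained against the same cues slips straight past it.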

The general discourse instead seems to have shifted towards establishing trusted sources that we can rely on to provide truthful information. This trend perhaps comes as no surprise to anyone who has followed the constant evolution of forgery over the past couple of centuries.

Traditional media corporations and newspapers are easy to single out as candidates for these “gatekeepers of truth,” at least in the context of political misinformation, which is generally viewed as one of the most daunting and immediate threats posed by deepfake technology. However, one of the less frequently discussed (but potentially far more dangerous) threats is to digital authentication - and the potential arbiters of truth here are less obvious.

Any form of remote biometric authentication system (not just facial) is potentially vulnerable to deepfake attacks. After all, if I can realistically transfer someone else’s face onto mine, it isn’t too much of a leap to do the same for their hands or their eyes or indeed other more subtle biometric cues. Given the right data and a sensible approach, these techniques are easy to transfer. As the trajectory of authentication heads inevitably towards digital and autonomous systems, this is a reality we must face.

Whilst purely visual classification of deepfakes is unlikely to remain viable for much longer, by controlling the whole capture process we can develop more sophisticated methods of detection. iProov’s patented Flashmark technology is a direct solution to this issue: by illuminating the user’s face with a unique, randomly generated sequence of colours, we can verify not just that the person behind the camera is the correct person, but also that they are genuinely present.
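
To make the idea concrete, here is a minimal, hypothetical sketch of a colour-sequence challenge-response check. It is not iProov’s actual Flashmark implementation; the function names, colour palette and matching threshold are illustrative assumptions. The key property is that the challenge is generated after the session starts, so a pre-recorded or synthesised video cannot anticipate it.

import secrets

# Illustrative palette of screen colours used to illuminate the user's face.
COLOURS = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def generate_challenge(length: int = 8) -> list:
    """Server side: pick a one-time random sequence of screen colours."""
    return [secrets.choice(COLOURS) for _ in range(length)]

def verify_response(challenge: list, reflected: list, min_match: float = 0.9) -> bool:
    """Compare the colour reflected from the user's face in each captured frame
    (estimated by image analysis, not shown here) against the challenge.
    A deepfake or replayed video cannot reproduce light it never received."""
    if len(reflected) != len(challenge):
        return False
    matches = sum(c == r for c, r in zip(challenge, reflected))
    return matches / len(challenge) >= min_match

Because the challenge is random and single-use, intercepting one session gives an attacker nothing they can replay in the next.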

By being involved in the capture process in this way, iProov can act as a trusted source of identity, much as a media company can act as a trusted source of news. So whether it’s a deepfake or another form of identity spoofing, iProov ensures a fraudulent identity can’t be passed off as legitimate.
