Generative Adversarial Network (GAN)

A Generative Adversarial Network (GAN) is a type of generative AI model architecture consisting of two neural networks – a generator and a discriminator – trained in opposition to each other. 

The generator creates content that mimics the examples in the training data. Meanwhile, the discriminator evaluates the generator's output by estimating the probability that a given sample came from the real dataset rather than from the generator. This back-and-forth is known as adversarial training.

In simpler terms, it’s two neural networks competing: one creates fake data while the other tries to spot the fakes. The result is highly convincing synthetic content.

Both models improve together until the discriminator can no longer reliably tell whether a sample came from the generator or from the training data. The method is effective because the generator's output is constantly tested against a model designed specifically to catch it, so every improvement in the discriminator forces a corresponding improvement in the generator. This adversarial training process continues until the generator produces outputs plausible enough to fool the discriminator. GANs are commonly used for image synthesis tasks such as creating deepfakes, generating photorealistic human faces, and enhancing photographic imagery.
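The adversarial loop described above can be summarised in a few lines of code. The sketch below is a minimal illustration in PyTorch, assuming a toy two-dimensional data distribution and small fully connected networks; the layer sizes, learning rates, batch size, and training length are illustrative assumptions rather than settings from any real system.

```python
# Minimal sketch of a GAN training loop in PyTorch.
# The toy "real data" distribution, network sizes, and hyperparameters below
# are illustrative assumptions, not values from any production system.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # assumed toy dimensions

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))

# Discriminator: outputs the probability that a sample came from the real dataset.
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # Real samples drawn from an assumed toy distribution (a shifted Gaussian).
    real = torch.randn(128, data_dim) + 3.0
    noise = torch.randn(128, latent_dim)
    fake = G(noise)

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

Note how the two objectives pull in opposite directions: the discriminator is rewarded for labelling generated samples as fake, while the generator is rewarded for making the discriminator label them as real, which is the competition that drives both models to improve.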

Threat actors leverage GANs for malicious purposes, for example to create synthetic imagery that makes synthetic identities more plausible. A GAN can generate the face of a non-existent person, which is then paired with forged ID documents to circumvent facial biometric checks during onboarding and throughout the user lifecycle.

To detect media created by generative adversarial networks, iProov uses patented passive challenge-response biometric technology, combined with deep learning and computer vision, to analyze properties that generative AI-created media cannot recreate, because there is no real person on the other side of the camera. This is why incorporating a real-time biometric check into liveness technology is critical for organizations that need to distinguish synthetic media from genuine people.

To learn more about how fraudsters are harnessing generative AI such as GANs to undermine identity verification and bolster synthetic identity fraud, read our report “Stolen to Synthetic” here.
