Across the globe, governments, policymakers, and regulators are turning their attention to the subject of generative artificial intelligence (AI). This technology greatly reduces the technological barrier to creating highly realistic fake images, videos, and audio.
As generative AI develops, it will become increasingly difficult to distinguish between real and manipulated content – leading to misinformation, propaganda, and deception on a grand scale. This threatens to erode trust and blur the lines between reality and fiction, ultimately undermining the functioning of an effective civil society.
In this article we discuss the topic of generative AI, alongside the tools, methods, and technologies available to combat the growing threat.
Why Does Generative AI Pose a Growing Threat?
AI is not intrinsically harmful – there are many beneficial and positive applications. However, a number of factors are leading to increased concern over its vast capabilities for nefarious purposes:
- The volume of AI technology-driven cybersecurity attacks is increasing at an alarming rate: For example, iProov revealed that face swap attacks are increasing in frequency: our Biometric Threat Intelligence Report highlights that face swaps were up 295% from H1 to H2 2022.
- AI-driven cybercrime has become far more accessible and scalable: Low-skilled criminals are gaining access to the resources necessary to launch sophisticated attacks at low cost, or even for free, online. This makes it easier for bad actors to gain footholds in organizations’ information and cybersecurity systems. Similarly, the accessibility of this technology can make it easier for fraudsters to convince individuals they’re interacting with an authentic person, further enabling cybercrime and fraud.
- The sophistication of generative AI is developing quickly, and people can no longer spot deepfakes: Synthetic media has reached the stage where it is impossible to distinguish between what’s fake and what’s real with the human eye – so manual inspection is not a viable solution. Similarly, approaches that place the burden of responsibility on people to spot synthetic media will only have a limited impact.
- Growth of Crime-as-a-Service: Generative AI is empowering bad actors, and the availability of online tools is accelerating the evolution of the threat landscape. This is enabling criminals to launch advanced attacks faster and at a larger scale. If attacks succeed, they rapidly escalate in volume and frequency as they are shared amongst established crime-as-a-service networks or on the dark web, amplifying the risk of serious damage.
- Public perception and disinformation: Generative AI not only threatens individual organizations or governments, but also the information ecosystem and economy itself. This is already having real-world effects: for example, recently an AI-generated image of an explosion at the Pentagon caused a brief dip in the stock market.
The impacts of generative AI on society are wide-ranging: public sector benefit fraud, new account fraud, synthetic identity fraud, voting fraud, disinformation, deceptive social media bots, robocalls, and more.
The dangers of combining different types of AI – such as imagery, voice, and text – pose a synergistic threat of rapidly evolving technologies that can be used in conjunction for massive impact. You can learn about the critical and urgent risks of voice spoofs in particular here.
The Worrisome Trend of Generative AI Has Caught the Attention of Governments and Policymakers Worldwide
Ultimately, there is now an arms race between the destructive uses of generative AI and the tools that we have to defend against them. The potential outcome is an identity crisis in the digital age, where the public cannot trust the media or public officials.
iProov Response to Safeguarding Against Generative AI Threats
It will become increasingly difficult for governments, businesses, journalists, and the general public alike to combat or spot fraud, disinformation, and cybercrime. This is particularly pertinent as we approach many major political elections across the globe.
So, what can governments and policymakers actually do? What regulations and technologies are available? In summary:
- Facial biometric identity verification, with liveness and active threat monitoring, is critical: In an age where traditional technologies and the human eye can no longer verify the genuineness of a person remotely, biometric identity verification is needed to assure that a person is the true owner of their identity. The most effective solutions combine mission-critical liveness technology with a one-time biometric and active threat management, confirming both that the image presented is of a real person and that the person is actually present at the moment the information is captured.
- Remote onboarding and authentication should be further secured: Biometric face verification has emerged as the most secure and convenient way to verify identity online, as traditional methods such as passwords and one-time codes have failed. However, when confronting generative AI, it is critical that liveness capabilities be thoroughly evaluated. Good is not good enough; security must be paramount for organizations.
- Regulation has to set guardrails for the moral majority: It’s critical to establish legal frameworks to address the ethical and social implications of generative AI and synthetic media. This enables law enforcement to target bad actors with clear directives, while providing commercial drivers to address platform incentives. iProov has been submitting its PCAST response on this topic to various governments and policymakers across the globe.
Governmental and Policymaker Approach to AI: A Summary
Ultimately, generative AI poses significant risk to society. Clear policy and regulation is needed urgently in order to set the guardrails, align the moral majority, and guide organizations’ response.
Biometric and identity verification technology offers a critical lifeline for governments and other organizations that need to assure that people are who they say they are in the age of synthetic and falsified imagery and information.
But not just any verification technology will do. As generative AI advances, defenses will need to keep pace: mission-critical solutions that continuously evolve are essential.
iProov is soon set to release its “Identity Crisis in the Digital Age: Using Science-Based Biometrics to Combat Generative AI” report. Inside, we examine the criminal side of generative AI, discuss the “trust deficit” that threatens to undermine nations’ abilities to reach sustainable development goals, and advise on the technology and processes required to combat this threat. Keep an eye out on our socials for its imminent release.
Alternatively, book a demo of iProov’s solution here today.