December 18, 2023
This year, the volume of AI-generated identity attacks has escalated to the point where it has become a concern for consumers, organizations, and even world leaders. Biometrics stands as a transformative force, reshaping how individuals interact with digital systems and how organizations safeguard sensitive information. As 2024 edges ever closer, the digital identity landscape is poised for significant advances, with innovations set to redefine verification, elevate security standards, and enhance user experiences.
Join us as we delve into the imminent future of biometrics, where the fusion of science and security promises a paradigm shift in how we authenticate, identify, and safeguard information in the digital realm.
1. Biometrics will become the cornerstone of security infrastructure in the US financial services market
Over the past year, many financial services organizations have expanded remote digital access to meet user demand. However, this has widened the digital attack surface and created opportunities for fraudsters. The US financial services sector has been slower to adopt digital identity technologies than some other regions, which could be attributed to the challenges it faces around regulating interoperability and data exchange. Yet, with synthetic identity fraud expected to generate at least $23 billion in losses by 2030, pressure is mounting from all angles. Consumers expect to open accounts and access services remotely with speed and ease, while fraudsters exploit online channels to undermine security and siphon money. All the while, there is the serious threat of Know Your Customer (KYC) and Anti-Money Laundering (AML) non-compliance, which carries penalties including huge fines and potentially even criminal proceedings. Further, there is an increased risk of sanctions evasion and the financing of state adversaries. In response, many financial institutions are taking action: replacing cumbersome onboarding processes and outdated authentication methods, such as passwords and passcodes, with advanced technologies that can remotely onboard new customers and authenticate existing online banking users.
One of the front-runners is facial biometric verification technology, which delivers unmatched convenience and accessibility for customers while presenting formidable obstacles to adversaries. More financial institutions will recognize how biometric verification can balance security with customer experience, and they will make the switch.
2. There will be a rapid increase in the number of developing countries building digital identity programs based on decentralized identity
An estimated 850 million people worldwide lack a legal form of identification. Without identity, people struggle to open a bank account, gain employment, and access healthcare, which leaves them financially excluded. Digital identity programs improve access to digital services and opportunities: they enable people to assert their identity, access online platforms, and participate in digital government initiatives. Supported by investment from World Bank funds, such programs can help less advanced economies prevent identity theft and fraud, and give citizens an alternative way to prove their identities and access essential services such as benefits, healthcare, and education. Built on decentralized identity, these programs will enable users to digitally store and exchange identity documents, such as a driver’s license, and credentials, such as diplomas, and to authenticate without a central authority. Decentralized identity puts users in control by allowing them to manage their identity in a distributed manner. These programs will offer the convenience end-users now demand and open essential pathways for previously disadvantaged or marginalized individuals to access financial and welfare services.
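To make the decentralized model concrete, here is a minimal sketch in Python of how a credential might be issued and verified without any central authority, using Ed25519 signatures from the `cryptography` package. The credential structure, field names, and the `did:example` identifier are illustrative assumptions for this sketch, not any specific standard.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical issuer (e.g. a university issuing a diploma credential).
# In a real decentralized identity system, the issuer's public key would be
# resolved from a decentralized identifier (DID), not passed around directly.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# An illustrative credential; the fields are made up for this sketch.
credential = {
    "subject": "did:example:holder-123",
    "type": "Diploma",
    "claims": {"degree": "BSc Computer Science", "year": 2023},
}

# The issuer signs the canonical JSON encoding of the credential.
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Later, any verifier can check the credential against the issuer's public
# key alone -- no call to a central registry is required.
try:
    issuer_public_key.verify(signature, payload)
    print("credential is authentic")
except InvalidSignature:
    print("credential has been tampered with")
```

The key property this illustrates is that verification requires only the issuer's public key and the signed credential itself, which is what lets the holder store and present credentials without a central authority mediating every check.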
3. Remote video calls to verify identity will be banned
Video call verification involves a one-to-one video call between the user and a trained operator. The user is asked to hold up an identity document, and the operator matches it against their face. However, video call verification has been shown to provide little assurance that the end-user is a ‘live’ person rather than generative AI-produced synthetic imagery convincingly superimposed onto the threat actor’s face.
For example, in 2022, researchers at the Chaos Computer Club managed to circumvent video call verification technology using generative AI and a forged ID. The case demonstrated how this technology, and the human operators it relies upon, are highly susceptible to synthetic imagery attacks. The German Federal Office for Information Security has since warned against video call verification because of its vulnerability to these attacks.
If digital identity programs cannot defend against the threat of deepfakes at onboarding and authentication, they will be exploited for criminal purposes such as payment fraud, money laundering, and terrorist financing. As such, we’ll see moves by financial services regulators to ban video call verification methods, with a directive to choose more reliable approaches: hybrids that combine automated AI matching and liveness detection with human supervision of the machine learning process.
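As a rough illustration of what such a hybrid might look like, the sketch below routes a verification attempt through automated face matching and liveness scoring, and escalates borderline results to a human reviewer. The scores, thresholds, and function names are all hypothetical assumptions, not any vendor's actual pipeline.

```python
from dataclasses import dataclass

# Illustrative placeholder thresholds, not vendor recommendations.
MATCH_THRESHOLD = 0.90
LIVENESS_THRESHOLD = 0.95

@dataclass
class VerificationResult:
    decision: str          # "accept", "reject", or "human_review"
    match_score: float     # similarity between selfie and ID document photo
    liveness_score: float  # confidence the subject is a live person

def verify(match_score: float, liveness_score: float) -> VerificationResult:
    """Hybrid decision flow: automated checks first, humans supervise edge cases."""
    if liveness_score < LIVENESS_THRESHOLD:
        # Possible deepfake or replayed imagery: never auto-accept.
        return VerificationResult("reject", match_score, liveness_score)
    if match_score >= MATCH_THRESHOLD:
        return VerificationResult("accept", match_score, liveness_score)
    # Borderline matches go to a trained operator, whose decisions can also
    # be fed back to supervise and retrain the matching model.
    return VerificationResult("human_review", match_score, liveness_score)

print(verify(match_score=0.97, liveness_score=0.99).decision)  # accept
print(verify(match_score=0.80, liveness_score=0.99).decision)  # human_review
print(verify(match_score=0.97, liveness_score=0.40).decision)  # reject
```

The design point, compared with a pure video call, is that the liveness check gates everything: a human reviewer only ever sees attempts that have already passed automated anti-spoofing, rather than judging synthetic imagery by eye.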
4. Organizations will introduce mutual authentication between employees for high-risk communications and the remote onboarding of new employees
As organizations increasingly rely on digital channels for confidential communication, robust cybersecurity measures are paramount to mitigate risk. Introducing mutual authentication for high-risk communication and transactions is a crucial security measure that adds an extra layer of protection against unauthorized access and potential threats. In addition, in certain industries, regulatory compliance mandates the implementation of robust security measures; mutual authentication helps organizations meet these requirements by demonstrating a commitment to securing communication channels and protecting sensitive information.
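For illustration, one simple form of mutual authentication is a two-way challenge-response over a shared secret, in which each party proves knowledge of the secret without ever transmitting it. The minimal Python sketch below uses HMAC-SHA256; the names and flow are assumptions for this sketch, and in practice the secret provisioning, channel encryption, and replay protection would come from an established protocol such as mutual TLS.

```python
import hashlib
import hmac
import secrets

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the shared secret for a given challenge."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# Both employees' devices hold the same secret, provisioned out of band.
shared_secret = secrets.token_bytes(32)

# Alice challenges Bob...
alice_challenge = secrets.token_bytes(16)
bob_response = respond(shared_secret, alice_challenge)
bob_ok = hmac.compare_digest(bob_response, respond(shared_secret, alice_challenge))

# ...and Bob challenges Alice, so each side authenticates the other.
bob_challenge = secrets.token_bytes(16)
alice_response = respond(shared_secret, bob_challenge)
alice_ok = hmac.compare_digest(alice_response, respond(shared_secret, bob_challenge))

print("mutually authenticated:", bob_ok and alice_ok)
```

The point of the two-way exchange is that neither party has to trust the other's claimed identity: a fraudster who can imitate a voice or face on a call still cannot answer a fresh cryptographic challenge.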
5. Corporate data breaches will triple due to successful AI-generated attacks
For some years now, organizations and individuals have relied on spotting phishing emails through spelling mistakes and grammatical errors. That time has passed. ChatGPT now produces output of such quality that threat actors can use it to generate convincing phishing communications with no suspicious clues. Consequently, 2024 will witness an acute increase in both the quality and volume of AI-generated phishing attacks. Security awareness training will become a redundant tool, and organizations will be forced to seek alternative, more reliable methods of authenticating both internal and external users of their platforms.
6. Crime-as-a-Service speech and video synthesis kits will break the sub-$100 barrier
Crime-as-a-Service and the availability of online tools accelerate the evolution of the threat landscape, enabling criminals to launch advanced attacks faster and at a larger scale. If attacks succeed, they rapidly escalate in volume and frequency, amplifying the risk of serious damage.
Bad actors are using sophisticated generative AI technology to create and launch attacks that exploit organizations’ security systems and defraud individuals. iProov has witnessed growing indications of low-skilled criminals gaining the ability to create and launch advanced synthetic imagery attacks. In our most recent biometric threat intelligence report, we saw the emergence and rapid growth of novel video face swaps: a form of synthetic imagery in which the threat actor morphs more than one face to create a new, fake 3D video output.
The cost of the resources needed to launch attacks is falling, and we fully expect Crime-as-a-Service kits to drop below $100.
7. The potential of AI to spread political disinformation will see authenticated authorship of images and written content become a legal requirement
As we move towards elections, a plethora of AI-generated deepfake videos will be used to persuade voters. To counteract this, technology companies will move broadly to give people ways to verify the authenticity of the images they upload: for example, solutions that enable creators to watermark images and cryptographically sign them when they are created or modified.
2024 is likely to see many attempts in this area because the use of deepfakes is so widespread. Any content that uses images will be expected to offer some way of assuring their genuineness, and failure to do so will see those images devalued.
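A simple way to picture such authenticated authorship: the creator signs a hash of the image bytes at creation time, and any later modification invalidates the signature. The Python sketch below is a minimal illustration under that assumption; the creator key and placeholder image data are hypothetical, and it is not tied to any specific provenance standard.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical creator key; in practice this would chain to a verifiable identity.
creator_key = Ed25519PrivateKey.generate()
creator_public_key = creator_key.public_key()

image_bytes = b"...raw image data..."  # placeholder for real file contents

# Sign the image digest at creation time.
digest = hashlib.sha256(image_bytes).digest()
signature = creator_key.sign(digest)

def is_authentic(data: bytes) -> bool:
    """Return True only if the image is byte-for-byte what the creator signed."""
    try:
        creator_public_key.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes))                 # True: untouched original
print(is_authentic(image_bytes + b"tampered"))   # False: any edit breaks the signature
```

Because the signature covers the exact bytes of the image, even a one-pixel deepfake edit produces a different digest and fails verification, which is what would let unsigned or broken-signature images be devalued.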
8. An AI-generated Zoom call will lead to the first billion-dollar CEO fraud
CEO fraud targets at least 400 companies per day and poses a significant threat to organizations worldwide. In this type of crime, attackers pose as senior company executives and attempt to deceive employees into transferring funds, disclosing sensitive information, or initiating other fraudulent activities. These schemes often involve sophisticated social engineering, making them challenging to detect. Fraudsters are now widely using generative AI tools to create deepfakes that imitate a person, deploying convincing AI-generated audio and imagery across platforms including Zoom, Microsoft Teams, and Slack. Without sophisticated monitoring and detection tools, this type of synthetic imagery is almost impossible to detect. As such, we fully expect to see an AI-generated Zoom call lead to the first billion-dollar CEO fraud in 2024.