8 July 2022
In June 2022, the FBI issued a public service announcement alerting organizations to a new cybersecurity risk: deepfake employees. According to the alert, there has been a rise in cybercriminals using “deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions.”
Deepfakes are videos or images created using AI-powered software to show people saying and doing things that they didn’t say or do. In this case, attackers were using stolen PII to assume real identities and create deepfakes with the aim of securing jobs.
The FBI reported that the attackers used deepfakes to apply for roles in “information technology and computer programming, database, and software-related job functions.”
Cybercriminals are increasingly using deepfakes to commit crimes online. For example, deepfakes can be used to give credibility to synthetic identities, where the criminal creates fake people who don’t exist. Deepfakes can also be used for account takeover, where a deepfake of a real person is used to access their accounts.
In both cases, the aim is usually financial gain – open a credit card and max it out, or steal money from somebody’s account.
So why use a deepfake to acquire a role in tech? It’s not just for a monthly paycheck. The real threat here is that by securing a tech role within the company, the attacker then has access to “customer PII, financial data, corporate IT databases and/or proprietary information”, according to the FBI.
The attackers can then use this stolen data to hold the company to ransom or carry out further attacks.
The FBI became aware of this fraudulent activity after receiving a number of complaints through its Internet Crime Complaint Center (IC3) from organizations carrying out the interviews. Complainants reported detecting spoofing and deepfake videos when speaking with candidates. They noticed anomalies during the calls, such as mouth and lip movements not aligning with the words being spoken on screen.
These deepfakes seem pretty easy to spot, then. But let’s not forget that these were the deepfake attempts that failed. What’s not clear from the announcement is how many succeeded.
The global pandemic didn’t give birth to remote working, but it certainly made it more common. As social distancing measures were put in place, workers around the world swapped the office for the home.
Although social distancing rules are now a thing of the past in many countries, remote working isn’t.
Remote working comes with real benefits. For employers, it widens the talent pool. Barriers such as geographical distance are no longer a problem. Hiring managers can recruit the best talent wherever in the world they may be.
What’s more, employees want to work remotely. According to a survey by the Pew Research Center, 61% of US workers stated they were working remotely out of choice.
But with the opportunities come threats. The very nature of remote working draws a distance between the employer and the employee. They may never meet in person. It becomes very difficult to verify that someone is who they say they are when they’re applying for a role.
You would think that speaking to somebody over a video call would solve this problem. Surely, as humans, we’re skilled enough to tell the difference between a real human being and a digitally rendered imitation of one?
But this is not the case. The rise of increasingly convincing deepfakes has undermined our ability to successfully distinguish a real-life person from a deepfake. Therefore, relying on our abilities – or the abilities of our colleagues – to make this distinction creates a risk. This also calls into question the efficacy of any kind of video interview as a security measure. The human eye can be spoofed. Can we be entirely sure that the person we’re talking to is even real?
Deepfakes are also becoming easier to produce. With a simple plug-in, attackers can create what’s called a ‘real-time deepfake’: synthetic imagery superimposed onto live video as it is captured. This video can then be streamed into video conferencing calls and other communication channels.
The quick, easy and affordable nature of deepfakes and real-time deepfakes means that they provide a scalable method of committing fraudulent activity. As we continue to work and hire people remotely, cybercriminals will develop ever more convincing deepfakes to gain access to employment positions.
Employers across every industry should be concerned about this growing threat. But there is a solution.
Facial biometric technology can be used to verify that a person is who they claim to be when they carry out an activity online, such as opening a bank account or applying for a job.
In this case, an applicant could verify themselves when they submit their application. They could scan a photo ID document, such as a driver’s license, passport, or national ID card, and then scan their physical face to prove that they are who they say they are.
However, matching a face to a document alone does not prove that the applicant is real. A face cannot be stolen, which makes biometrics very secure, but a face can be copied with a photograph or a deepfake.
If you want to safeguard against deepfakes, you need to verify not only that a person is the right person, but also that they are a real person, and that they are authenticating right now.
Liveness is a form of facial verification that can detect whether the face being presented to the camera is a real, live human being. It also detects whether it is the correct person.
Liveness detection, therefore, can spot if somebody is presenting a picture, recording or mask of the victim to the camera. It can also identify a deepfake should it be presented to the camera.
What liveness alone cannot do, however, is protect against digitally injected, scalable attacks, which bypass the device sensors completely.
Genuine Presence Assurance (GPA) is the only way to check that a remote individual, who is asserting their identity, is the right person, a real person and that they are authenticating right now.
GPA has the ability to identify whether an individual asserting their identity is a real human being. But it’s also built to detect digitally injected attacks – those that often use deepfakes to bypass the device sensors and spoof the system.
Whereas liveness can protect against known threats – mostly presentation threats (physical and digital artifacts shown to a screen) – GPA delivers defenses against known, new and evolving synthetic digital injection attacks.
It does this with iProov’s Flashmark™ technology, which illuminates the remote user’s face with a unique, randomized sequence of colors. This mitigates the risk of replays or synthetic manipulation, preventing spoofing.
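The core idea behind this kind of defense is a challenge-response protocol: the verifier issues a one-time, unpredictable sequence, and a replayed or pre-fabricated video cannot contain the correct response because the sequence did not exist when it was made. The sketch below is a deliberately simplified illustration of that principle, not iProov’s actual Flashmark implementation – the color list, function names and exact-match check are all illustrative assumptions.

```python
import secrets

# Illustrative challenge-response liveness sketch (NOT the real Flashmark
# algorithm). The server issues a one-time random color sequence; the
# client's screen flashes those colors while the face is recorded; the
# server then checks that the reflections recovered from the video match
# the challenge it issued. A replayed or injected video cannot know the
# colors in advance, so its reflections will not match.

COLORS = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def issue_challenge(length=4):
    """Server side: generate a one-time, unpredictable color sequence."""
    return [secrets.choice(COLORS) for _ in range(length)]

def verify_response(challenge, observed_reflections):
    """Server side: the reflections recovered from the video must match
    the issued challenge, in order. A replayed recording would carry the
    colors of a *previous* session's challenge and fail this check."""
    return observed_reflections == challenge

challenge = issue_challenge()
# A live, genuine client reflects the colors that were just issued...
assert verify_response(challenge, list(challenge))
# ...while a replayed recording reflects some stale sequence.
stale = ["red", "red", "red", "red"]
if stale != challenge:
    assert not verify_response(challenge, stale)
```

In practice the verification step is far harder than an exact match – it means recovering subtle color reflections from skin under varying lighting – but the security property is the same: the randomness makes each session’s correct response unknowable in advance.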
As deepfakes become more sophisticated, so too should the biometric security that is used to combat them.
At iProov, we use machine learning technology, people, and processes to detect and block cyber attacks, including deepfakes. In doing this, we are constantly learning from those attacks. This helps prevent fraud, theft, money laundering, and other serious online crime today and tomorrow.
If you’d like to learn more about how iProov can help you to protect against deepfake crime using facial biometric technology, download our full Work From Clone report today.