Gartner Urges CISOs to Secure Identity Verification Beyond Face Biometrics
Mokshita P.
10X Technology

Gartner warns that by 2026, AI-generated deepfakes targeting face biometrics will lead enterprises to question the reliability of identity verification solutions, emphasising the need for advanced countermeasures.

Gartner, Inc. projects that by 2026, a surge in attacks utilising AI-generated deepfakes against face biometrics will cause a substantial loss of trust in identity verification and authentication solutions. Approximately 30 per cent of enterprises are expected to abandon these solutions as a standalone method of verifying identity.

Gartner's Vice President Analyst, Akif Khan, emphasised the evolving landscape of AI, stating, “In the past decade, several inflection points in the fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient.”

The current identity verification and authentication processes using face biometrics heavily depend on presentation attack detection to assess the user's liveness. However, Khan pointed out that existing standards and testing processes do not adequately cover digital injection attacks facilitated by AI-generated deepfakes.

Gartner's research highlighted that while presentation attacks remain the most common vector, injection attacks surged by 200 per cent in 2023. To counteract these threats, the firm recommends combining presentation attack detection (PAD), injection attack detection (IAD) and image inspection.
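In practice, layering the three detection categories means a capture passes only if every check passes. The following is a minimal, hypothetical sketch of that layered gate; all function names, fields and thresholds are invented placeholders, not a real vendor API or Gartner guidance.

```python
# Hypothetical sketch: layering the three detection categories named in
# the report (PAD, IAD, image inspection). Field names and thresholds
# are illustrative assumptions only.

def passes_presentation_attack_detection(capture):
    # e.g. liveness cues such as blink or depth signals
    return capture.get("liveness_score", 0.0) >= 0.9

def passes_injection_attack_detection(capture):
    # e.g. integrity checks on the camera feed / capture channel
    return capture.get("feed_integrity_ok", False)

def passes_image_inspection(capture):
    # e.g. forensic analysis for deepfake artefacts in the image
    return capture.get("deepfake_score", 1.0) <= 0.1

def verify_face_capture(capture):
    """Accept the capture only if every layered check passes."""
    checks = (
        passes_presentation_attack_detection,
        passes_injection_attack_detection,
        passes_image_inspection,
    )
    return all(check(capture) for check in checks)
```

The design point is defence in depth: a digitally injected deepfake might defeat liveness checks alone, so failing any one layer rejects the capture.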

To mitigate the risks posed by AI-generated deepfakes, chief information security officers (CISOs) and risk management leaders are urged to collaborate with vendors demonstrating capabilities beyond current standards. Organisations are advised to define a minimum baseline of controls by working with vendors that actively invest in countering deepfake-based threats using IAD coupled with image inspection.

Once a baseline is established, CISOs and risk management leaders are encouraged to incorporate additional risk and recognition signals, such as device identification and behavioural analytics. These measures aim to enhance the detection capabilities of identity verification processes and strengthen defences against potential deepfake attacks.
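One common way to fold such signals into a decision is a weighted risk score that triggers step-up verification above a threshold. The sketch below is a hypothetical illustration under assumed signal names and weights; it is not a prescription from the report.

```python
# Hypothetical sketch: combining additional risk signals (device
# identification, behavioural analytics) with the biometric result.
# Signal names, weights and the threshold are illustrative assumptions.

RISK_WEIGHTS = {
    "unrecognised_device": 0.4,     # device identification signal
    "anomalous_behaviour": 0.4,     # behavioural analytics signal
    "biometric_check_failed": 0.6,  # outcome of the face-biometric check
}

def risk_score(signals):
    """Sum the weights of every signal that fired, capped at 1.0."""
    score = sum(RISK_WEIGHTS[name] for name, fired in signals.items() if fired)
    return min(score, 1.0)

def decide(signals, threshold=0.5):
    """Step up to additional verification once the score crosses the threshold."""
    return "step_up_verification" if risk_score(signals) >= threshold else "allow"
```

For example, an unrecognised device combined with anomalous behaviour would exceed the threshold even when the biometric check itself passed, prompting an extra verification step rather than an outright denial.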

In conclusion, security and risk management leaders overseeing identity and access management are advised to adopt technologies that can authenticate genuine human presence and implement additional measures to thwart account takeovers, as the threat landscape continues to evolve with the rise of AI-driven deepfake attacks.