Page 312 - Cyber Defense eMagazine September 2025

Public Trust Is Eroding

            For enterprise security leaders, it's worth noting just how deeply deepfakes have eroded the public's
            trust. In a recent global study, 69% of consumers said AI-powered fraud now poses a greater threat to
            their personal security than traditional forms of identity theft. As deepfakes and synthetic media grow
            more convincing, the line between real and fake continues to blur, and with it, confidence in online
            authenticity.

            When asked who should be responsible for stopping these threats, 43% of consumers pointed to Big
            Tech, while just 18% believed the responsibility lies with themselves. Users increasingly expect
            enterprises to keep their digital identities safe while they engage with their platforms. As manipulated
            content becomes harder to detect, service providers must adapt their identity intelligence systems to
            meet the growing needs of their customers.

            So, what exactly are today’s most pressing identity fraud threats, and how can AI help us detect and stop
            them? Let’s break down three of the most concerning threats on the horizon, and the AI-powered identity
            intelligence strategies that can stop them in their tracks.




            1. Injected Selfies & Deepfakes: Breaking Biometrics

            Facial recognition is the foundation of biometric authentication. But what happens when the face isn't
            real?

            Today, cybercriminals are increasingly turning to camera injection attacks, in which digitally inserted
            images or video bypass the physical camera feed. These attacks, sometimes carried out through virtual
            cameras, let fraudsters simulate a live video stream using pre-rendered or AI-generated faces.
            Camera injection attacks are the delivery mechanism behind most deepfake-based fraud, but they're also
            being used as standalone attack vectors.
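            One weak but cheap defensive signal against virtual-camera injection is checking the reported camera
            device label against known virtual-camera software. The sketch below illustrates the idea; the marker
            list is illustrative, not exhaustive, and the function name is our own, not a standard API:

```python
# Product names of common virtual-camera tools (illustrative, not exhaustive).
VIRTUAL_CAMERA_MARKERS = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "xsplit vcam",
    "droidcam",
)

def is_suspect_device(label: str) -> bool:
    """Flag camera device labels that match known virtual-camera software.

    Trivially evaded by renaming the device, so treat this as one weak
    signal to combine with liveness detection, never as a sole gate.
    """
    normalized = label.strip().lower()
    return any(marker in normalized for marker in VIRTUAL_CAMERA_MARKERS)
```

            Because the label is self-reported by the client, a check like this only raises friction for unsophisticated
            attackers; the stronger defenses come from analyzing the video content itself, as discussed below.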

            Deepfake fraud, along with wreaking havoc on social media platforms and tarnishing public figures'
            reputations, is now reaching executives in the boardroom. Hackers are leveraging advanced deepfake
            tools to hijack high-level executives' identities and defraud the enterprise. All it takes is a few images, a
            few seconds of recorded voice, and an unsuspecting employee scammed into performing costly actions
            on the fraudster's behalf.

            One answer to these attacks is liveness detection: AI-based algorithms that assess the authenticity of
            the person behind the camera in real time. Advanced systems can detect micro-movements, light
            reflections, and other physiological cues that are hard to fake. Multimodal liveness checks, which
            combine visual, auditory, and motion-based signals, are quickly becoming the gold standard.
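            To make the micro-movement idea concrete, here is a minimal sketch of a motion-based liveness signal:
            a replayed still image injected as a virtual camera feed shows near-zero frame-to-frame change, while a
            live subject exhibits natural micro-movements. The thresholds are illustrative placeholders, not
            production values, and real systems combine many such signals:

```python
import numpy as np

def motion_score(frames):
    """Mean absolute frame-to-frame pixel difference across a clip.

    A pre-rendered still injected as a camera feed yields ~0 motion;
    a live subject shows blinks, head sway, and lighting shifts.
    """
    diffs = [np.abs(a.astype(float) - b.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

def looks_live(frames, min_motion=0.5, max_motion=40.0):
    """Flag feeds that are suspiciously static (replayed stills) or
    implausibly jumpy (spliced/looped clips). Thresholds here are
    illustrative only."""
    score = motion_score(frames)
    return min_motion < score < max_motion
```

            This single check is easy to defeat with a looped video, which is why production systems layer in
            challenge-response prompts and auditory and 3D-depth signals rather than relying on any one cue.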











            Copyright © 2025, Cyber Defense Magazine. All rights reserved worldwide.