
The Technology Behind Deepfakes

            Deepfakes are primarily created using machine learning algorithms, especially GANs. A GAN consists of
            two neural networks, a generator and a discriminator, trained against each other: the generator produces
            synthetic media, while the discriminator judges whether each sample is real or fake. The discriminator's
            feedback drives iterative improvements in the generator's realism, and over many training rounds a GAN
            can produce highly convincing images, videos, and voices.
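
            To make the generator/discriminator interplay concrete, the sketch below shows a minimal GAN training
            loop in Python with PyTorch on toy two-dimensional data. The network sizes, learning rates, and the
            real_batch helper are illustrative assumptions rather than the architecture of any actual deepfake tool;
            real face and voice synthesis relies on far larger models and datasets, but the adversarial feedback loop
            is the same.

# Minimal GAN sketch (illustrative only) using PyTorch on 1-D toy data.
# All names and hyperparameters are assumptions for demonstration purposes.
import torch
import torch.nn as nn

LATENT_DIM = 16   # size of the random noise vector fed to the generator
DATA_DIM = 2      # dimensionality of the "real" samples (toy data)

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, DATA_DIM),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(n=64):
    # Stand-in for real media: points drawn from a fixed Gaussian.
    return torch.randn(n, DATA_DIM) * 0.5 + 2.0

for step in range(1000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator output "real" (1)
    # for its synthetic samples. Its loss falls as the fakes become convincing.
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

            Each round of this competition nudges the generator toward output the discriminator can no longer
            reliably reject, which is exactly the dynamic that makes mature deepfake systems so difficult to detect.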

            The accessibility of deepfake creation tools has expanded dramatically. Open-source libraries and
            commercial platforms now allow individuals with modest technical skills to produce deepfakes, and
            convincing video or audio impersonation can be achieved with relatively small datasets, sometimes only
            a few minutes of recorded material from the target. The rise of deepfake-as-a-service platforms means
            attackers can outsource the creation of synthetic content entirely, lowering the barrier to entry for
            cybercriminals and accelerating the proliferation of malicious synthetic media.



            Real-world Threats and High-profile Incidents

            Deepfakes have already played a role in several documented cyberattacks. In 2019, attackers used
            deepfake audio to impersonate a CEO’s voice and trick a subordinate into transferring €220,000 to a
            fraudulent account. Similar incidents have targeted companies, government agencies, and individuals,
            with attackers using synthetic voices or faces to bypass security controls or manipulate victims.

            Cybercriminals are increasingly leveraging deepfakes for spear-phishing campaigns and business email
            compromise (BEC) scams. Adversaries can use AI-generated audio or video to impersonate executives
            or trusted officials, lending authenticity to fraudulent requests. Political deepfakes have been deployed to
            spread misinformation or undermine public trust during elections, fueling disinformation campaigns that
            can influence opinions and destabilize societies.

            Moreover, deepfakes are used for extortion, harassment, and blackmail. Malicious parties can fabricate
            compromising images or videos, threatening personal reputations and causing emotional and
            psychological harm. As the technology advances, it becomes more challenging for victims, and even
            experts, to distinguish real from fake, escalating the overall risk landscape.



            Impact on Digital Trust and Society

            At an individual level, deepfakes pose severe risks to privacy, reputation, and financial security. Synthetic
            media can be used for identity theft, impersonation, and personal attacks that inflict lasting damage on
            victims’ lives. For organizations, deepfakes open the door to social engineering attacks, corporate
            espionage, fraud, and significant brand damage. A convincing deepfake can deceive employees, investors,
            or customers, resulting in both direct financial loss and long-term erosion of trust.

            Democracies face even broader implications. Deepfakes can spread false information on social media,
            fueling disinformation campaigns that erode trust in public institutions and the media. Fabricated videos





