Page 91 - Cyber Defense eMagazine RSAC Special Edition 2025

Understanding the Vulnerabilities of Generative AI in Cybersecurity

AI serves as a double-edged sword for midsized organizations, empowering them to innovate while simultaneously exposing them to new vulnerabilities. Key risks arising from generative AI include:

   1.  Enhanced Cyberattacks: Cybercriminals are leveraging AI to execute attacks that are both more effective and harder to detect. AI algorithms streamline the gathering of open-source intelligence, enabling attackers to craft highly personalized phishing attempts that mimic legitimate business communications.
   2.  Internal Data Risks: As organizations incorporate AI tools into their workflows, the handling of sensitive information becomes a critical concern. Using open-source AI applications can expose confidential data if not carefully managed.
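The "careful management" in the second point can begin with something as simple as scrubbing sensitive patterns from text before it leaves the organization for an external AI service. The sketch below is a minimal, illustrative example; the regex patterns, placeholder scheme, and `redact` function are assumptions for demonstration, not a complete data-loss-prevention solution:

```python
import re

# Illustrative PII patterns; a real deployment would use a vetted
# DLP tool or a much broader, tested pattern set.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(redact(prompt))
# → Contact Jane at [REDACTED-EMAIL] or [REDACTED-PHONE].
```

A step like this can sit in front of any outbound AI integration, so confidential identifiers never reach a third-party model in the first place.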



            The Impact of Smarter Cyberattacks

AI increases the effectiveness of cyberattacks by facilitating sophisticated phishing techniques that produce messages indistinguishable from authentic communications. The rise of business email compromise (BEC) scams exemplifies this threat, where attackers, utilizing generative AI capabilities, can create convincing correspondence that evades detection.

Additionally, emerging technologies like deepfake voice and video complicate the verification process, rendering traditional methods, such as phone calls, less reliable and creating new opportunities for deception.




Assessing AI Risks: A Framework for Organizations

To address the risks presented by AI adoption, organizations must implement a structured approach to risk management. The World Economic Forum's recent report outlines essential questions for business leaders to consider:

               •  Is there a clear understanding of risk tolerance among stakeholders?
               •  How are the risks of AI weighed against potential rewards?
               •  Are effective governance policies established for the deployment of AI projects?
               •  Are organizations aware of their unique vulnerabilities associated with AI technologies?

At Pondurance, we encourage adherence to established standards such as the NIST Cybersecurity Framework (CSF) 2.0 and NIST SP 800-53. These frameworks provide essential governance structures for AI security, ensuring comprehensive protection measures.











