
in terms of reputational, operational, and financial damage. The damage inflicted by such a breach would not stop at the company's boundaries, but would create a ripple effect across the AI ecosystem as organizations that had relied on the model(s) would need to go immediately into damage-control mode. Abruptly ceasing to use the model(s) would affect applications that depend on them, and security teams would have to investigate, reassess, and possibly recreate or replace elements of the organizational security infrastructure. Explaining their accountability to their own shareholders and customers would be a painful exercise for executives, and would come with its own set of consequences.

•  An enterprise embracing GenAI is going to have a permissioning breach due to multiple models at play and a lack of access controls. As a company layers in external base models, such as ChatGPT, as well as models embedded in SaaS applications and retrieval-augmented generation (RAG) models, the organizational attack surface expands, the security team's ability to know what's going on (observability) decreases, and the intense, perhaps even giddy, focus on increased productivity overshadows security concerns. Until, that is, a disgruntled project manager is given access to the new proprietary accounting model that the payroll manager with a similar name requested. Depending on the level of disgruntlement and the personality involved, company payroll information could be shared in the next sotto voce rant at the coffee machine, in an ill-considered all-hands email, or as breaking news on a business news website. Or nothing will be shared and no one will notice the error until the payroll manager makes a second request for access. Whatever the channel or audience, or lack thereof, the company has experienced a serious breach of private, confidential, and highly personal data, and must address it rapidly and thoroughly. The AI security team's days or weeks will be spent reviewing and likely overhauling the organization's AI security infrastructure, at the very least, and the term "trust layer" will become a feature of their vocabulary. A minimal sketch of the kind of access check involved follows this item.
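
As a minimal illustration of the kind of "trust layer" control that could prevent the scenario above, the Python sketch below gates access to an internal model by unique employee ID and role rather than by display name, and logs every decision for later review. All identifiers, role names, and the policy table here are hypothetical, not drawn from any particular product.

# Hypothetical sketch: gate access to an internal GenAI model by unique
# employee ID and asserted role, never by display name, so a "similar name"
# request cannot be approved by mistake. All identifiers are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    employee_id: str   # unique, immutable identifier (not a display name)
    role: str          # role asserted by the identity provider
    model_name: str    # internal model being requested

# Roles permitted to query each internal model (illustrative policy table).
MODEL_POLICY = {
    "payroll-accounting-model": {"payroll_manager", "finance_admin"},
}

def grant_access(request: AccessRequest, audit_log: list) -> bool:
    """Return True only if the requester's role is allowed for the model.

    Every decision is appended to an audit log so a later review can
    reconstruct who was granted access, to what, and why.
    """
    allowed_roles = MODEL_POLICY.get(request.model_name, set())
    decision = request.role in allowed_roles
    audit_log.append((request.employee_id, request.model_name, request.role, decision))
    return decision

# Example: the project manager's request is denied; the payroll manager's is granted.
log: list = []
print(grant_access(AccessRequest("E-1041", "project_manager", "payroll-accounting-model"), log))  # False
print(grant_access(AccessRequest("E-2077", "payroll_manager", "payroll-accounting-model"), log))  # True

The point is not this particular mechanism but the design choice it embodies: entitlements tied to unique identities and recorded in an audit trail, so a look-alike name cannot quietly inherit payroll access and an erroneous grant is visible before the second request arrives.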

•  Data science will become increasingly democratized thanks to foundation models (LLMs). The speed and power of LLMs to analyze and extract important insights from huge amounts of data, to simplify complex, time-consuming processes, and to develop scenarios and predict future trends have already begun to bring big-data analytics into the workflow of teams and departments in all business functions. That will continue to scale up dramatically. Across an organization, teams will increasingly be able to rapidly generate data streams tailored to their specific needs, which will streamline productivity and expand the institutional knowledge base. Humans will not be out of the loop, however, as I do not foresee models' propensity to make stuff up being resolved any time soon, although fine-tuning is showing some benefits in that area.


•  Increasingly, new and novel cyberattacks created by offensive fine-tuned LLMs like WormGPT and FraudGPT will occur. The ability to fine-tune specialized models quickly and with relative ease has been a boon to developers, including the criminal variety. Just as models can be trained on a specific collection of financial data, for instance, models can also be trained on a corpus of malware-focused data and be built with no guardrails, ethical boundaries, or limitations on criminal activity or intent. As natural language processing (NLP) models, these tools function as ChatGPT's evil cousins, possessing the same capabilities for generating malicious



