
What are the unique challenges organizations face when implementing AI?

Organizations, especially those in education, healthcare and finance, face a fundamental security paradox: they need to be open enough to foster innovation while simultaneously protecting highly sensitive data, including records protected by FERPA, HIPAA and other regulations.

When organizations implement AI solutions, they're essentially creating new data pathways that weren't accounted for in traditional security models. GCE encountered this firsthand when implementing their AI chatbot platform for university students and faculty. Their security team recognized early on that users might inadvertently share personally identifiable information or sensitive health data that could potentially be stored in backend systems or used to train future models.

The challenge becomes even more complex because these institutions often operate with limited security resources but face sophisticated threats. They need security solutions that can scale across thousands of users without hindering the user experience or technological advancement.

What security risks should CISOs in education and other sensitive industries be most concerned about when deploying AI?

The primary concern is what I call "unintentional data leakage" – when users share sensitive information with an AI system without realizing the downstream implications. Unlike traditional applications where data pathways are well-defined, AI systems can be black boxes that ingest, process and potentially store information in ways that aren't immediately transparent.

There's also the risk of prompt injection attacks, where malicious actors might try to trick AI systems into revealing information they shouldn't have access to. And we can't forget about the possibility of AI systems generating harmful or misleading content if not properly secured.

Organizations need to think about securing both the input to these AI systems – what users are sharing – and the output – what the AI is generating in response.
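
As a minimal sketch of what guarding both directions can look like, the example below screens a prompt before it leaves the organization's boundary and screens the model's reply on the way back. The regex patterns and the call_model placeholder are illustrative assumptions, not a reference to any specific vendor's API, and a production system would use far more robust detection.

```python
import re

# Illustrative-only patterns; real deployments would rely on stronger
# detection (named-entity recognition, validation, context rules).
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern before it crosses the boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()} REDACTED>", text)
    return text

def guarded_completion(prompt: str, call_model) -> str:
    """Screen both directions of an AI interaction.

    call_model stands in for whatever client actually sends the prompt
    to the external AI model; it is a placeholder, not a specific API.
    """
    safe_prompt = redact(prompt)        # input side: what users are sharing
    response = call_model(safe_prompt)  # external model never sees the raw PII
    return redact(response)             # output side: what the AI generates
```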



How can institutions balance security with the need for technology adoption and accessibility?

A reflexive security response might be to lock everything down, but that approach simply doesn't work. If security becomes too restrictive, users will find workarounds – they'll use unsanctioned consumer AI tools instead, creating an even bigger shadow IT problem.

The key is implementing what I call "frictionless security" – protection mechanisms that work in the background without users even knowing they're there. Think of it as an invisible safety net that catches sensitive data before it reaches external AI models but doesn't impede legitimate uses.
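
One way to picture "frictionless" in code is a thin wrapper around whatever client the application already uses, so nothing changes for the user or the developer. The sketch below is hypothetical: the inner client, its complete() method, and the redact_fn hook (for example, the redact() helper sketched above) are assumptions, not any product's actual interface.

```python
class GuardedClient:
    """Drop-in stand-in for an existing AI client.

    Callers keep invoking complete() exactly as before; redaction and
    audit logging happen invisibly in between. Hypothetical sketch only.
    """

    def __init__(self, inner_client, redact_fn, audit_log):
        self.inner = inner_client    # the real client that talks to the model
        self.redact = redact_fn      # e.g. a redaction helper or service call
        self.audit = audit_log       # append-only list or logger

    def complete(self, prompt: str) -> str:
        cleaned = self.redact(prompt)
        if cleaned != prompt:
            # Record that something was caught, without storing the raw data.
            self.audit.append({"event": "redaction_applied"})
        return self.inner.complete(cleaned)
```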

API-driven security approaches work particularly well here. Rather than restricting AI adoption, organizations can implement security solutions that automatically redact sensitive information from user prompts and uploaded files in real time before they reach external AI models. As Mike Manrod, CISO at GCE, explains: "What Pangea has allowed us to do is to intercept and apply logic to those prompts. If the





