
prompt violates a data rule for something we don't want going to an API like OpenAI or Anthropic, we
            apply redaction at machine speed, without any user impact or user experience change."
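
            As a rough sketch of how that kind of inline redaction can sit in front of an external model call
            (the rules and function names below are illustrative, not the actual product implementation):

            import re

            # Illustrative data rules (pattern -> placeholder); a real deployment would use a
            # much richer detection engine, but the flow is the same.
            DATA_RULES = [
                (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
                (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[REDACTED-EMAIL]"),
            ]

            def redact(prompt: str) -> str:
                # Applied inline, before the prompt ever leaves the network.
                for pattern, placeholder in DATA_RULES:
                    prompt = pattern.sub(placeholder, prompt)
                return prompt

            def call_external_model(prompt: str) -> str:
                safe_prompt = redact(prompt)   # machine-speed, invisible to the user
                # ... forward safe_prompt to the external API (OpenAI, Anthropic, etc.) ...
                return safe_prompt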

            It's also crucial to adopt a risk-based approach. Not all AI interactions carry the same level of risk. A
            student asking about cafeteria hours doesn't need the same security controls as a faculty member
            uploading research data.
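
            A minimal sketch of how risk-tiered controls might be expressed, with illustrative categories and
            settings rather than any specific product's schema:

            # Hypothetical risk tiers; the categories and controls here are examples only.
            RISK_TIERS = {
                "general_inquiry": {"redact": False, "log": True},   # e.g., cafeteria hours
                "document_upload": {"redact": True, "log": True, "sanitize_files": True},
                "research_data":   {"redact": True, "log": True, "sanitize_files": True, "require_review": True},
            }

            def controls_for(interaction_type: str) -> dict:
                # Unclassified interactions fall back to the strictest tier.
                return RISK_TIERS.get(interaction_type, RISK_TIERS["research_data"])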



            What technical capabilities should security teams look for when securing AI implementations?

            Security teams need several key capabilities when implementing AI security guardrails.

            Real-time content redaction is foundational – the technology must identify and remove sensitive
            information from prompts before they reach external AI models. An effective solution should recognize
            data types such as student IDs, health information, financial details and other PII without introducing
            noticeable delays.
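
            As a simplified illustration of multi-type recognition, the sketch below tags which categories of
            sensitive data appear in a prompt and returns a redacted copy; the patterns are placeholders, while
            real detectors combine pattern matching, checksums and ML-based entity recognition:

            import re

            # Simplified example patterns per data category.
            PII_PATTERNS = {
                "student_id":  re.compile(r"\bSTU\d{7}\b"),
                "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
                "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
            }

            def detect_and_redact(text: str):
                found = []
                for label, pattern in PII_PATTERNS.items():
                    if pattern.search(text):
                        found.append(label)
                        text = pattern.sub(f"[{label.upper()}]", text)
                return text, found   # redacted text plus the categories that were caught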

            Further, file sanitization capabilities become essential as more AI systems accept document uploads.
            The security layer needs to scan and clean documents of sensitive metadata and content before
            processing, preventing data leakage through attached files. GCE built their security approach around this
            principle, seeking guardrails that could handle both conversational text and document uploads to their AI
            platform.
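
            As one hedged example, assuming a PDF upload and the open-source pypdf library (not necessarily
            what GCE's platform uses), a sanitization step might rebuild the document without its original
            metadata; embedded XMP metadata, comments and hidden content would need additional handling:

            from pypdf import PdfReader, PdfWriter   # assumes the pypdf package is installed

            def strip_pdf_metadata(src_path: str, dst_path: str) -> None:
                # Copy page content into a fresh PDF, leaving the original document-info
                # dictionary behind, so the upload carries no stray author or title fields.
                reader = PdfReader(src_path)
                writer = PdfWriter()
                for page in reader.pages:
                    writer.add_page(page)
                with open(dst_path, "wb") as out:
                    writer.write(out)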

            Customizable policy controls allow security teams to define what constitutes sensitive information in their
            specific context. This is crucial because organizations often have unique identifiers and data structures
            that generic solutions might miss.
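
            For instance, a policy layer might let a team register its own identifier formats; the structure
            below is a hypothetical illustration, not a vendor schema:

            # Hypothetical custom policy: organization-specific data types a generic tool would miss.
            CUSTOM_POLICY = {
                "org_student_id": {
                    "pattern": r"\bORG-\d{8}\b",     # an institution-specific ID format
                    "action": "redact",
                },
                "internal_project_code": {
                    "pattern": r"\bPRJ-[A-Z]{3}-\d{4}\b",
                    "action": "block",               # refuse to forward the prompt at all
                },
            }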


            Performance cannot be compromised – any security layer should add minimal latency to maintain a
            seamless user experience. When security introduces noticeable delays, user adoption suffers and
            users start looking for workarounds. For GCE, this performance requirement was essential: they
            needed to deploy quickly without introducing latency that could affect student and faculty experiences.

            Comprehensive logging and visibility round out these requirements, providing the audit trails needed for
            compliance. Security teams need clear insights into what types of sensitive information are being caught
            and redacted, without necessarily exposing the sensitive data itself.
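
            A small sketch of what such a privacy-preserving audit record could look like, with illustrative
            field names; only the categories that were caught are recorded, never the underlying values:

            import datetime
            import json
            import logging

            audit_log = logging.getLogger("ai_guardrail.audit")

            def log_redaction(user_role: str, categories: list[str]) -> None:
                # Record what kinds of data were redacted, not the data itself, so the
                # audit trail supports compliance without creating a new exposure.
                audit_log.info(json.dumps({
                    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                    "user_role": user_role,              # e.g. "student", "faculty"
                    "redacted_categories": categories,   # e.g. ["student_id", "health"]
                }))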



            What are the basics of a secure AI framework?

            I recommend a four-phase approach:

            Assessment: Begin by cataloging what AI systems are already in use, including shadow IT. Understand
            what data flows through these systems and where the sensitive information resides.

            Policy Development: Create clear policies about what types of information should never be shared with
            AI systems and what safeguards need to be in place. Get buy-in from academic, administrative and
            executive stakeholders.




