As organizations race to implement AI solutions, security leaders face the challenge of enabling progress while protecting sensitive data. Grand Canyon Education (GCE), which serves 22 university partners, recently confronted this exact dilemma when deploying an AI chatbot platform for thousands of students and staff.
We spoke with Sourabh Satish, a cybersecurity veteran with over 25 years of experience and more than 260 issued patents, about the lessons learned from this implementation and how similar approaches can benefit organizations across sectors. Drawing from GCE’s experience and his extensive background in security architecture, Sourabh shares a framework for securing AI without stifling growth.
What are the unique challenges organizations face when implementing AI?
Organizations, especially those in education, healthcare and finance face a fundamental security paradox. They need to be open enough to foster innovation while simultaneously protecting highly sensitive data, including records protected by FERPA, HIPAA and others.
When organizations implement AI solutions, they’re essentially creating new data pathways that weren’t accounted for in traditional security models. GCE encountered this firsthand when implementing their AI chatbot platform for university students and faculty. Their security team recognized early on that users might inadvertently share personally identifiable information or sensitive health data that could potentially be stored in backend systems or used to train future models.
The challenge becomes even more complex because these institutions often operate with limited security resources but face sophisticated threats. They need security solutions that can scale across thousands of users without hindering the user experience or technological advancement.
What security risks should CISOs in education and other sensitive industries be most concerned about when deploying AI?
The primary concern is what I call “unintentional data leakage” – when users share sensitive information with an AI system without realizing the downstream implications. Unlike traditional applications where data pathways are well-defined, AI systems can be black boxes that ingest, process and potentially store information in ways that aren’t immediately transparent.
There’s also the risk of prompt injection attacks, where malicious actors might try to trick AI systems into revealing information they shouldn’t have access to. And we can’t forget about the possibility of AI systems generating harmful or misleading content if not properly secured.
Organizations need to think about securing both the input to these AI systems – what users are sharing – and the output – what the AI is generating in response.
How can institutions balance security with the need for technology adoption and accessibility?
A reflexive security response might be to lock everything down, but that approach simply doesn’t work. If security becomes too restrictive, users will find workarounds – they’ll use unsanctioned consumer AI tools instead, creating an even bigger shadow IT problem.
The key is implementing what I call “frictionless security” – protection mechanisms that work in the background without users even knowing they’re there. Think of it as an invisible safety net that catches sensitive data before it reaches external AI models but doesn’t impede legitimate uses.
API-driven security approaches work particularly well here. Rather than restricting AI adoption, organizations can implement security solutions that automatically redact sensitive information from user prompts and uploaded files in real-time before they reach external AI models. As Mike Manrod, CISO at GCE explains: “What Pangea has allowed us to do is to intercept and apply logic to those prompts. If the prompt violates a data rule for something we don’t want going to an API like OpenAI or Anthropic, we apply redaction at machine speed, without any user impact or user experience change.”
It’s also crucial to adopt a risk-based approach. Not all AI interactions carry the same level of risk. A student asking about cafeteria hours doesn’t need the same security controls as a faculty member uploading research data.
What technical capabilities should security teams look for when securing AI implementations?
Security teams need several key capabilities when implementing AI security guardrails.
Content redaction in real-time is foundational – technology must identify and remove sensitive information from prompts before they reach external AI models. An effective solution should recognize various data types like student IDs, health information, financial details and other PII without introducing noticeable delays.
Further, file sanitization capabilities become essential as more AI systems accept document uploads. The security layer needs to scan and clean documents of sensitive metadata and content before processing, preventing data leakage through attached files. GCE built their security approach around this principle, seeking guardrails that could handle both conversational text and document uploads to their AI platform.
Customizable policy controls allow security teams to define what constitutes sensitive information in their specific context. This is crucial because organizations often have unique identifiers and data structures that generic solutions might miss.
Performance cannot be compromised – any security layer should add minimal latency to maintain a seamless user experience. When security introduces noticeable delays, user adoption inevitably suffers and creates incentives for workarounds. For GCE, this performance requirement was essential as they needed to deploy quickly without introducing latency that could affect student and faculty experiences.
Comprehensive logging and visibility round out these requirements, providing the audit trails needed for compliance. Security teams need clear insights into what types of sensitive information are being caught and redacted, without necessarily exposing the sensitive data itself.
What are the basics of a secure AI framework?
I recommend a four-phase approach:
Assessment: Begin by cataloging what AI systems are already in use, including shadow IT. Understand what data flows through these systems and where the sensitive information resides.
Policy Development: Create clear policies about what types of information should never be shared with AI systems and what safeguards need to be in place. Get buy-in from academic, administrative and executive stakeholders.
Technical Implementation: Deploy appropriate security controls based on risk. This might include API-based redaction services, authentication mechanisms and monitoring tools. The GCE security team initially considered building their own redaction solution but quickly realized this would significantly delay their AI initiative. Instead, they identified an API-based solution that could be implemented quickly while meeting their security requirements.
Education and Awareness: Perhaps most importantly, educate users about AI security. Help them understand what information is appropriate to share and how to use AI systems safely.
Start with high-risk use cases – perhaps those handling sensitive records or research data – and expand from there. Remember that this is an iterative process; as AI capabilities evolve, so too must security measures. The most successful implementations come from close collaboration between security teams, developers and stakeholders who can work together to achieve the desired outcomes.
Looking ahead, what emerging AI security challenges should organizations prepare for?
We’re entering a fascinating period where AI is becoming increasingly embedded in nearly every process. Several challenges are emerging:
First, the line between generative AI and traditional applications is blurring. AI features are being integrated into everything from information systems to learning management platforms. This means security can’t be bolted on as an afterthought – it needs to be woven into the fabric of these applications.
Second, multimodal AI that processes images, audio and video creates new security dimensions. Organizations will need to think about how to protect sensitive information in these formats as well.
Third, the regulatory landscape around AI is rapidly evolving. Organizations should prepare for new compliance requirements specifically addressing AI usage and data protection.
Finally, there’s the challenge of AI alignment – ensuring that AI systems act in accordance with organizational values and objectives. This goes beyond traditional security but is equally important for maintaining trust.
GCE’s implementation shows how API-driven, SaaS microservice architecture provides the flexibility to adapt security policies over time as AI platforms evolve. Their team appreciated that with this approach, making redaction policy changes doesn’t require developer sprint cycles – security teams can update policies directly through a management console, significantly reducing the time needed to implement security changes.
The organizations that will thrive in this new landscape are those that view AI security not as a barrier to advancement but as an enabler – a foundation that allows them to deploy AI confidently and responsibly.
About the Author
Sourabh Satish is CTO and Co-Founder of Pangea and a serial entrepreneur with a 25+ year track record of designing and building ground-breaking security products and technologies, with more than 260 issued patents and additional patents pending. Sourabh most recently founded and served as CTO of Phantom Cyber, which was acquired by Splunk in 2018 and he previously served as a Distinguished Engineer at Symantec.
Read the full case study with Grand Canyon Education here: https://pangea.cloud/case-studies/grand-canyon-education/
Sourabh can be reached on LinkedIn and at https://pangea.cloud/