Page 275 - Cyber Defense eMagazine September 2025
P. 275

sacrificing  performance  –  a  balance  I’ve  helped  numerous  financial  institutions  achieve.  This  article
            examines the Defense in Depth approach to securing AI applications, covering data protection, model
            security, and enterprise integration.

            Modern AI workloads face critical security risks including:

               •  Prompt injection attacks – a threat I’ve seen increasingly targeted at chatbots
               •  Model output manipulation leading to hallucinations challenging for critical applications
               •  Training data poisoning attempts
               •  Model theft through extraction attacks that is a growing concern for proprietary trading models

            Drawing  from  my  recent  experience  heling  a  G-SIB  secure  their  generative  AI  infrastructure,
            organizations follow NISTs Defense in Depth (DiD) methodology to protect assets across all layers- from
            infrastructure to model endpoints. What’s particularly effective about this multi-layered strategy is how it
            combines robust security controls with native safeguards embedded within AWS Generative AI services.

            From my work I’ve learned that organizations can confidently advance their AI initiatives while upholding
            stringent standards for data protection, compliance, and operational security by following a methodical
            approach tailored to their risk profile.



            AWS Defense in Depth Strategy for Security

            While  Defense  in  Depth  (DiD)  isn’t  a  new  concept,  its  application  to  AI  workloads  requires  specific
            considerations. Let me share a real example: When implementing Bedrock for a major investment bank,
            we  discovered  that  traditional  DiD  approaches  needed  significant  adaptation  to  handle  the  unique
            challenges of foundation models.


            Defense in Depth (DiD) employs multiple layers of security controls to protect data, applications, and
            infrastructure.  What  makes  this  especially  crucial  for  AI  workloads  is  the  dynamic  nature  of  model
            interactions.  For  instance,  one  of  my  clients  discovered  that  traditional  data  loss  prevention  (DLP)
            solutions were insufficient for catching sensitive data in model outputs - we had to implement additional
            semantic analysis layers.

            Each security layer provides distinct but complementary controls. Drawing from my experience here’s a
            breakdown of how these layers work together in practice:


             Security Layer             Primary Function                 Key AWS Security Services

             Data Protection            Protect data at rest and in transit  AWS KMS

             (Core Layer)                                                AWS CloudHSM


                                                                         Amazon Macie
                                                                         AWS Secrets Manager






            Cyber Defense eMagazine – September 2025 Edition                                                                                                                                                                                                          275
            Copyright © 2025, Cyber Defense Magazine. All rights reserved worldwide.
   270   271   272   273   274   275   276   277   278   279   280