Page 86 - Cyber Defense eMagazine September 2025
P. 86

Applying Agentic AI in Security

            Penetration Testing

            Agentic AI can significantly enhance penetration testing (pentesting) by automating the initial stages of
            vulnerability discovery. AI agents can rapidly scan attack surfaces, and identify common security flaws.
            This enables human pentesters to focus on more sophisticated attack vectors, leading to deeper and
            more comprehensive security assessments.

            Threat Detection and Response

            Security Operations Center (SOC) teams are often overwhelmed with alerts, many of which turn out to
            be false positives. Agentic AI can autonomously investigate these alerts, distinguishing genuine threats
            from  benign  anomalies.  By  handling  routine  investigations,  agentic  AI  allows  human  analysts  to
            concentrate on high-priority incidents.

            Additionally, AI agents can be scaled instantly, effectively doubling a security team’s capacity without
            hiring additional staff. This automation enhances efficiency while reducing operational costs.

            Vulnerability Management

            Prioritizing  vulnerabilities  has  long  been  a  challenge  for  security  teams,  requiring  extensive  context
            gathering and analysis. Agentic AI can automate this process by reviewing historical vulnerability data,
            assessing  potential  risks,  and  recommending  remediation  strategies.  This  streamlines  the  workflow,
            ensuring that security teams focus on the most critical threats.



            Balancing the Risks and Rewards of Agentic AI

            While agentic AI offers significant advantages, it also presents new risks. Organizations must implement
            safeguards to prevent unintended consequences and security vulnerabilities.

            Guardrails and Human Oversight

            Excessive autonomy can be problematic. AI agents must be designed with oversight mechanisms to
            prevent misjudgments or unintended actions. Human-in-the-loop workflows ensure that critical decisions
            receive human review before execution.

            A  recent  legal  ruling  involving  Air  Canada’s  AI  chatbot  highlighted  the  importance  of  accountability.
            Companies cannot dismiss AI-generated errors as mere software glitches. Instead, they must assume
            responsibility for their AI systems and establish clear governance policies.

            Mitigating Prompt Injection Attacks

            Agentic AI’s ability to gather and process vast amounts of data also introduces security risks. Malicious
            actors  can  manipulate  AI  agents  through  prompt  injection  attacks,  steering  them  toward  unintended
            actions. This threat underscores the need for robust security protocols and continuous monitoring.






            Cyber Defense eMagazine – September 2025 Edition                                                                                                                                                                                                          86
            Copyright © 2025, Cyber Defense Magazine. All rights reserved worldwide.
   81   82   83   84   85   86   87   88   89   90   91