Page 102 - Cyber Defense eMagazine RSAC Special Edition 2025

Understanding AI’s Strengths and Weaknesses in Cybersecurity

In general, AI’s strengths lie in processing vast data volumes, pattern recognition, and automation. For cybersecurity teams, this might translate to machine learning models that can detect anomalies and patterns that humans might miss, AI-driven automation that reduces workloads for analysts, and more accurate, faster threat prioritization through AI-driven classification and scoring of threat indicators.
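To make the anomaly-detection idea concrete, here is a minimal, purely illustrative sketch (the data and threshold are assumed, not from any particular product): it flags observations that fall far outside the historical mean, much the way a model might flag an unusual spike in login volume.

```python
import statistics

def find_anomalies(history, threshold=2.0):
    # Flag any observation more than `threshold` sample standard
    # deviations from the mean of the series.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in history if abs(x - mean) > threshold * stdev]

# Daily login counts with one obvious spike (assumed sample data).
logins = [100, 103, 98, 101, 97, 102, 99, 100, 500, 101]
anomalies = find_anomalies(logins)  # the 500-login day stands out
```

Real deployments use far richer models, but the principle is the same: the system surfaces statistical outliers quickly, and an analyst decides what they mean.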

On the other hand, anyone who has watched an AI model hallucinate knows that AI is not infallible. It can lack the context needed to handle nuanced threats, and it can miss novel threats because it was trained on historical data. AI models and the data behind them can also drift over time or be compromised by adversaries.

In short, AI is not a “silver bullet” that cybersecurity teams can simply set and forget. Without regular human intervention and oversight, AI is likely to misclassify threats and generally prove ineffective in the cybersecurity space.

            Instead, cybersecurity teams should think of AI as a collaborator and find the best ways to use human
            expertise, intuition, and creativity to compensate for AI’s weaknesses while leveraging its strengths.



            Perfecting the Human-AI Collaboration Equation

Teams need to carefully rethink their existing work to identify the best way to integrate AI and make the most of it. For example, at ThreatConnect, we design AI solutions and tools that integrate seamlessly into teams’ existing threat intel lifecycles. That way, teams aren’t left to reinvent processes and procedures every time AI systems evolve; instead, AI enhances existing, proven workflows in new ways.


When integrating AI at your own company, it can be helpful to picture AI as the world’s fastest intern: incredibly helpful, but in need of regular supervision and training.

For example, here are a few best practices to foster greater AI-human collaboration in your cybersecurity operations:


   •  Establish human feedback loops: AI models should regularly incorporate analyst input to
       improve over time.
   •  Practice continuous monitoring: AI insights require regular validation to maintain accuracy.
   •  Deploy rigorous testing: AI-driven threat intelligence must be vetted to avoid blind spots.
   •  Commit to frequent model updates: Environments, inputs, and expectations are always
       changing. Update models often so AI adapts to evolving threats.
   •  Don’t forget end-user input: Sometimes people use tools differently than intended. Listen to
       end users to shape AI to meet real-world needs.
   •  Build AI talent and expertise: As AI proliferates, security teams must understand how AI
       systems work and where they pose risks.
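The first practice, a human feedback loop, can be sketched in a few lines. The example below is a deliberately simple toy (the `ThreatScorer` class, tags, and weights are all hypothetical, not any vendor’s API): a naive scorer rates indicators by tag weights, and analyst verdicts nudge those weights so the model improves over time.

```python
class ThreatScorer:
    def __init__(self):
        # Starting weights for illustrative threat tags (assumed values).
        self.weights = {"phishing": 0.6, "malware": 0.8, "benign": -0.5}

    def score(self, indicator_tags):
        # Sum the weights of known tags; unknown tags contribute 0.
        return sum(self.weights.get(tag, 0.0) for tag in indicator_tags)

    def record_feedback(self, indicator_tags, analyst_is_threat, lr=0.1):
        # Move each tag's weight toward the analyst's verdict
        # (+1.0 for a confirmed threat, -1.0 for benign).
        target = 1.0 if analyst_is_threat else -1.0
        for tag in indicator_tags:
            current = self.weights.get(tag, 0.0)
            self.weights[tag] = current + lr * (target - current)

scorer = ThreatScorer()
before = scorer.score(["phishing"])
# Analysts repeatedly confirm "phishing"-tagged indicators as threats,
# so the tag's weight, and future scores, rise.
for _ in range(5):
    scorer.record_feedback(["phishing"], analyst_is_threat=True)
after = scorer.score(["phishing"])
```

Production systems replace the weight update with proper model retraining, but the loop is the same: analyst judgment flows back into the model instead of being discarded.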










