AI and the Automation Dilemma

Organizations’ critical need to take an integrated approach to their AI deployments

With the ever-present hype and buzz around artificial intelligence (AI), it has become genuinely difficult to gauge just how far AI has advanced in the last few years, and to what extent it is actually deployed in the products we use day to day. At one end of the continuum are companies that previously had nothing to do with AI, now claiming that their products are powered by AI or Generative AI (GenAI). At the other end are technologists and economists who continually emphasize how consequential and revolutionary AI could be for society, referring to AI as the new electricity, the fourth industrial revolution, and so on.

The truth, rather unexcitingly, lies somewhere between vaporware and the new human epoch. AI capabilities have indeed been progressing at breakneck pace, advancing more quickly than many of us in the field would have predicted more than a few years ago. GenAI models are now able to coherently answer questions across a wide variety of topics; they can write, code, generate images and music, and much more. Nevertheless, these models are just fallible enough that they cannot be trusted to autonomously carry out end-to-end operations in mission-critical domains like cybersecurity. However, that does not mean that they can’t be hugely valuable in mission-critical workflows or trusted in production. Indeed, with thoughtful implementation, GenAI can be deployed to good effect.

Many factors shape how AI can be deployed into production applications in a safe and robust manner; here, we'll dive into two broad principles that can guide the process.

First is the idea of Defensive UX (user experience). Roughly speaking, defensive UX is a set of design principles that aim to anticipate the types of errors that could arise when using AI in a product or workflow, and to mitigate them by ensuring that those errors either cannot occur, or are non-catastrophic if they do. For example, if you know that your Large Language Model (LLM) occasionally hallucinates details when summarizing logs, a defensive design might automatically surface links to source data alongside every response, or constrain the output space through structured query generation rather than freeform language completion.
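To make this concrete, here is a minimal sketch of a defensive wrapper around a log-summarization call. The helper names (`call_llm`, `summarize_logs_defensively`) and the JSON schema are illustrative assumptions, not any particular product's API: the point is that the model's output is constrained to a structured format, validated, and forced to cite the source lines it drew from, so a malformed or untraceable answer is dropped rather than shown to a user.

```python
import json

# Hypothetical defensive wrapper: instead of accepting freeform text from the
# model, require a structured summary that cites the log lines it used.
# `call_llm` stands in for whatever model API is actually in use.

ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def summarize_logs_defensively(call_llm, log_lines):
    """Ask the model for a JSON summary; reject anything that cannot be
    traced back to the source logs."""
    prompt = (
        "Summarize these logs as JSON with keys 'summary', 'severity', and "
        "'evidence' (a list of 0-based indices into the input lines):\n"
        + "\n".join(f"{i}: {line}" for i, line in enumerate(log_lines))
    )
    raw = call_llm(prompt)
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output is dropped, never shown to the user

    # Constrain the output space: unknown severities are rejected outright.
    if result.get("severity") not in ALLOWED_SEVERITIES:
        return None

    # Surface source data alongside the response so hallucinated details can
    # be checked: every cited index must point at a real input log line.
    indices = result.get("evidence", [])
    if not all(isinstance(i, int) and 0 <= i < len(log_lines) for i in indices):
        return None
    result["sources"] = [log_lines[i] for i in indices]
    return result
```

The key design choice is that validation failures degrade safely (the caller gets `None` and can fall back to showing raw logs) instead of passing an unverifiable summary downstream.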

Another high-level design principle, and one that is increasingly popular because of how successful it has been, is the notion of Agentic AI. Although it is not fundamentally different from using an LLM on its own, organizing how the LLM interacts with a user or interface by way of agents increases the correctness, verifiability, and robustness of LLM solutions. Agents pair a base LLM with the ability to use tools, access data, maintain a memory, and step through complex state diagrams of actions, all while adapting based on intermediate outcomes and user feedback. Rather than asking a model to output a perfect answer in one go, an agent treats a task as a series of steps — planning, retrieving, verifying, and ultimately resolving — not unlike how a skilled analyst or operator might proceed. This shift toward agent-based architectures is particularly important in security and automation, where precision is non-negotiable, and the cost of error can be high. Agents make it possible to integrate LLMs into environments that require not just linguistic fluency, but rigorous adherence to policy, traceability of decisions, and integration across disparate data systems.
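The plan-act-verify loop described above can be sketched in a few lines. This is a deliberately minimal illustration, not any specific agent framework: `plan`, the tool registry, and the action format are all assumed names, and real agentic systems add tool schemas, richer memory, and policy checks around each step.

```python
# Minimal sketch of an agentic loop: rather than asking the model for a
# perfect one-shot answer, step through plan -> act -> verify, adapting to
# intermediate outcomes. `llm.plan`, `tools`, and the action dict shape are
# illustrative assumptions.

def run_agent(llm, tools, task, max_steps=5):
    """Drive a task as a bounded series of tool-using steps."""
    history = []  # simple memory of intermediate actions and outcomes
    for _ in range(max_steps):
        # 1. Plan: ask the model what to do next, given the task and history.
        action = llm.plan(task, history)
        if action["tool"] == "finish":
            return action["answer"]
        # 2. Retrieve/act: call the named tool with the model's arguments.
        observation = tools[action["tool"]](**action["args"])
        # 3. Record the outcome so the next planning step can adapt to it.
        history.append({"action": action, "observation": observation})
    return None  # bounded steps: give up rather than loop forever
```

Note the `max_steps` bound and the explicit history: both make the agent's behavior traceable and auditable, which matters in security settings where every decision may need to be reconstructed later.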

This is to say nothing of the remaining challenges in deploying agentic AI systems in production. Organizations must think not just in terms of how their AI is functioning, but also about AI and data governance: how feedback is collected, how performance is audited, and how workflows are adapted over time. This is where engineering culture meets organizational maturity: deployments succeed when they are treated not as one-off model integrations, but as ongoing product programs that blend UX, data, and security directly alongside AI.
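As a small illustration of what collecting feedback and auditing performance can mean in practice, here is a hedged sketch of an audit record for each model interaction. The record fields and the `"accepted"`/`"corrected"` feedback labels are assumptions for illustration; a real deployment would write such records to durable, access-controlled storage and track far richer metrics.

```python
import datetime

# Hypothetical audit trail for an AI workflow: every model interaction and
# any user feedback on it is recorded, so that performance can be reviewed
# and workflows adapted over time.

def make_audit_record(model_version, prompt, response, user_feedback=None):
    """One auditable entry per model interaction."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced this output
        "prompt": prompt,
        "response": response,
        "user_feedback": user_feedback,   # e.g. "accepted" or "corrected"
    }

def acceptance_rate(records):
    """Fraction of explicitly rated responses that users accepted."""
    rated = [r for r in records if r["user_feedback"] is not None]
    if not rated:
        return 0.0
    return sum(r["user_feedback"] == "accepted" for r in rated) / len(rated)
```

Tracking even a simple acceptance rate per model version gives an organization a concrete signal for when a workflow needs to be adapted, rather than relying on anecdote.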

Although AI is not on its own a silver bullet, it is nevertheless increasingly important for organizations to make effective use of AI to increase efficiency and productivity for themselves and their customers. As experimentation and adoption of AI progresses across a variety of sectors, we are likely to see that the organizations that succeed are the ones that have taken a more holistic and integrative approach to AI.

About the Author

Sohrob Kazerounian is a Distinguished AI Researcher at Vectra AI, where he develops and applies novel machine learning architectures in the domain of cybersecurity. After realizing that his goal of becoming a skilled hacker was not meant to be, he focused his studies on Artificial Intelligence, with a particular interest in neural networks. After receiving his Ph.D. in Cognitive and Neural Systems at Boston University, he held a postdoctoral fellowship at the Swiss AI Lab (IDSIA), working on Deep Learning, Recurrent Neural Networks, and Reinforcement Learning. Sohrob can be reached on LinkedIn and at our company website www.vectra.ai.
