Maximizing the Impact of AI/ML Threat Detection Tools

By Cary Wright, VP Product Management, Endace

Companies are increasingly looking to Artificial Intelligence (AI) and Machine Learning (ML) threat detection tools to improve the security posture of the enterprise. AI/ML shows great promise to detect previously invisible advanced persistent threats, insider activity, and new and emerging threats. These tools promise to detect threats that traditional network security tools miss by taking a radically different approach to analyzing network activity. This alternate method of uncovering security threats may prove to be a pivotal technology in the struggle to protect critical infrastructure, important assets, and sensitive data.

However, AI/ML threat detection tools require a significant investment in time and resources to deploy correctly in each environment. It’s not enough to just drop them into an infrastructure: we need to make sure that we can trust the technology, and that it’s attuned to our environment. It’s imperative to deploy them in a way that increases efficiency, provides greater clarity into lurking threats, and reduces the number of alerts we investigate every day, rather than simply adding noise and workload to already overstretched security teams.

The promise of AI and ML

Adopting AI/ML threat detection tools is not about replacing existing security tools. Rather, it’s about supplementing them with AI/ML to deliver additional capability and benefits.

AI/ML threat detection tools have the potential to significantly improve detection by identifying threats that other tools can’t, especially at the earliest stages of the attack lifecycle. They can potentially detect emerging unknown threats such as zero-day vulnerabilities – for which there are no existing ‘signatures’ – or threats that signature-based tools struggle to detect, such as fileless malware. They can also associate related events to identify coordinated attack activity that might indicate a high-priority threat, while reducing the amount of “noise” caused by many individual event alerts. Their ability to detect abnormal behavior also enables them to spot potentially malicious “insider threats” other tools may miss.
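The event-association idea above can be illustrated with a deliberately simple sketch: grouping individual alerts for the same host into candidate incidents when they arrive close together in time. This is a toy stand-in for the correlation that AI/ML tools perform, not any particular product’s algorithm; the `host`/`ts` field names are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_alerts(alerts, window_s=300):
    """Group alerts by host into candidate incidents. An alert joins the
    host's current incident if it arrives within window_s seconds of the
    previous alert for that host; otherwise it starts a new incident."""
    by_host = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        incidents = by_host[a["host"]]
        if incidents and (a["ts"] - incidents[-1][-1]["ts"]).total_seconds() <= window_s:
            incidents[-1].append(a)   # part of the same burst of activity
        else:
            incidents.append([a])     # new candidate incident for this host
    return by_host
```

Analysts then triage a handful of incidents instead of a stream of individual alerts – the noise-reduction benefit described above, in miniature.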

The other potential benefit that AI/ML tools offer is to improve productivity by automating elements of the threat remediation process – particularly for commonly occurring threats – and thereby free up time for analysts to focus on the high-priority and more advanced threats.

Ultimately, AI/ML tools have the potential to automate many of the manual activities involved in SecOps, such as isolating suspected compromised hosts from the network and blocking access to the network from potentially compromised devices or users. The challenge, however, is that in order for security teams to hand over responsibility for these sorts of activities to an AI/ML tool, they need to be able to trust that tool to make the right decisions and to know how it arrived at those decisions. Otherwise, the danger is that legitimate activity may be blocked, disrupting business, or malicious activity may be mistakenly classified as benign with no alert generated, leaving the organization exposed.

Hurdles AI/ML detection tools must overcome

Despite their potential, AI/ML detection tools are not a panacea. They can miss threats and they can flag activities as malicious when they’re not. In order for these tools to be trusted, the alerts they raise and the decisions they make need to be continually validated for accuracy.

If we train our AI and machine learning algorithms the wrong way, they risk creating even more noise by alerting on things that are not real threats (“false positives”), wasting analysts’ time chasing phantoms. Alternatively, they may miss threats entirely that should have been alerted on (“false negatives”).

How do we guard against the pitfalls? 

In the case of false positives, we need to validate that a detected event is not malicious and train the tool to ignore these situations in the future – while at the same time ensuring this doesn’t cause the tool not to alert on similar issues that are in fact malicious. The key to doing this effectively is having access to evidence that enables accurate and timely investigation of detected threats. Recorded packet history is an indispensable resource in this process – allowing analysts to determine precisely what happened and to accurately validate whether an identified threat is real or not.
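The packet-history lookup described above can be sketched minimally: given an alert’s timestamp and the two endpoints involved, pull the recorded packets around that moment so an analyst can validate the detection. In a real deployment this query would go to a packet-capture appliance’s API; the in-memory `PacketRecord` index and its fields here are purely illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PacketRecord:
    """Hypothetical minimal packet-index record (headers only)."""
    ts: datetime
    src_ip: str
    dst_ip: str
    dst_port: int
    payload_len: int

def packets_for_alert(records, alert_ts, src_ip, dst_ip, window_s=60):
    """Return recorded packets between the alert's two endpoints
    (either direction) within +/- window_s seconds of the alert."""
    lo = alert_ts - timedelta(seconds=window_s)
    hi = alert_ts + timedelta(seconds=window_s)
    return [r for r in records
            if lo <= r.ts <= hi
            and {r.src_ip, r.dst_ip} == {src_ip, dst_ip}]
```

With the matching packets in hand, the analyst can confirm whether the flagged conversation was genuinely malicious before teaching the tool to suppress – or keep – similar alerts.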

Dealing with false negatives is more difficult. How can we determine whether a threat was missed that should have been detected? There are two main approaches to this. The first is to implement regular, proactive threat hunting to identify whether there are real threats that your detection tools – including AI/ML tools – are not detecting. Ultimately, threat hunting is a good habit to get into anyway, and if something is found that your AI/ML tool missed the first time around, it provides an opportunity to train it to correctly identify similar threats the next time.

The second approach is using simulation testing – of which one example is penetration testing. By creating simulated threats, companies can clearly see if their AI/ML threat detection tools are identifying them correctly or not. If they’re not, it’s once again an opportunity to train the tool to identify similar activity as a threat in the future.
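Scoring a simulation exercise of this kind reduces to set arithmetic: compare the identifiers of the threats you injected with the identifiers the tool alerted on. The sketch below assumes each simulated threat and each alert carries a comparable identifier, which is an assumption about the exercise design rather than any tool’s output format.

```python
def score_detections(simulated_threats, raised_alerts):
    """Compare injected (simulated) threat identifiers against the
    identifiers the detection tool alerted on, and report the gaps."""
    simulated = set(simulated_threats)
    alerted = set(raised_alerts)
    detected = simulated & alerted
    missed = simulated - alerted      # false negatives: retrain on these
    spurious = alerted - simulated    # false positives: validate and tune
    recall = len(detected) / len(simulated) if simulated else 1.0
    return {"detected": detected, "missed": missed,
            "spurious": spurious, "recall": recall}
```

The `missed` set is exactly the training opportunity the text describes: simulated threats the AI/ML tool failed to flag, which should be fed back into its tuning.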

What’s the architecture for a successful deployment? 

For security teams, the ideal is to have a “single-pane-of-glass” that collects and collates threat telemetry from all of the different sources and provides a single view of threat activity across the threat lifecycle. Typically, organizations elect to implement a SIEM tool or a data lake that provides data mining and search capabilities. SOAR tools are also rapidly gaining in popularity as a way to help organizations collect evidence of threat activity and orchestrate their response to it.

The key capability that organizations need is being able to quickly reconstruct events from the collected and collated telemetry to understand what happened, how it happened, and what the impact of that event is. With a centralized view of activity, analysts can ask and answer questions quickly to understand whether there has been lateral movement from an initial compromise, whether data has been exfiltrated or not, etc.

Having access to full packet capture data is an indispensable resource in enabling this capability. With access to the actual packets, including payload, analysts can see what activity took place on the network and reconstruct events precisely – right down to seeing what data may have been exfiltrated and how an attacker is moving around the network to expand their foothold.

Full packet capture data is also an incredibly powerful resource for proactive threat hunting. Packet data-driven threat hunting and simulation exercises are a great way for teams to determine the effectiveness of their detection tools – including AI/ML tools – and to understand why those tools miss events they ought to detect, or why they incorrectly flag non-malicious activity as malicious.

Packet capture data is invaluable as an evidence source because it is complete and reliable. Where a skilled attacker will often delete or modify logs to hide their activity, it’s very difficult for them to manipulate packet data captured off the network – particularly when in most cases they are not even aware that it’s being captured and don’t have access to it. This makes packet data a trusted source of “truth” about what’s really happening on the network.

The right deployment, the right outcome

AI/ML detection tools have a lot of promise. However, there are pitfalls if the right architecture and capabilities are not in place. In order for AI/ML threat detection tools to deliver on their promise to reliably detect and remediate threats, companies must be able to trust them to make the right decisions, not to miss things, and to act accurately. To achieve this level of trust, we must be able to always verify and validate decisions made by AI/ML tools. To do this, companies need to ensure they have the right data.

Packets are an indispensable resource for validating AI decisions. But in order for packet data to be useful, it needs to be complete and accurate, with no blind spots, and provide as much lookback history as possible. It also needs to be easily accessible and provide fast search and data mining.

Considering how packet data can be incorporated into workflows is also important. When packet data can be integrated into security tools – enabling analysts to pivot from a specific alert or event to the related packets quickly and easily – the time it takes to investigate and resolve issues can be drastically reduced, dramatically increasing analyst productivity.

The right architectural approach is not one where AI/ML threat detection tools replace existing tools like IDS, firewalls, and endpoint protection tools: it’s one where they supplement them.

Alerts from your monitoring tools and other relevant evidence sources, such as network flow data and log files, should all feed into a SIEM or data lake and into SOAR tools, so that analysts can operate from a “single-pane-of-glass” rather than having to bounce from tool to tool. Packet evidence should be easily accessible from the SIEM or data lake, from SOAR tools, and from security tools such as IDS/IPS and firewalls, both to provide quick access for analysts and to enable packet data to be used by automated processes such as SOAR playbooks.
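The alert-to-packet pivot this architecture enables can be sketched as building a packet-store query directly from a SIEM alert, padding the alert window on both sides so the surrounding context is captured. The base URL and query-parameter names below are illustrative assumptions, not a specific vendor’s API.

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

def packet_pivot_url(alert, base_url="https://packet-store.example/search",
                     pad_s=120):
    """Build a search URL that pivots from a SIEM alert (with hypothetical
    ts/src_ip/dst_ip fields) to the related recorded packets."""
    start = alert["ts"] - timedelta(seconds=pad_s)
    end = alert["ts"] + timedelta(seconds=pad_s)
    params = {
        "start": start.isoformat(),   # pad before the alert...
        "end": end.isoformat(),       # ...and after it, for context
        "ip": alert["src_ip"],
        "related_ip": alert["dst_ip"],
    }
    return f"{base_url}?{urlencode(params)}"
```

Embedding a link like this in each alert – or calling the equivalent query from a SOAR playbook – is what turns “pivot to packets” from a manual hunt into a one-click (or zero-click) step.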

With this architecture in place, companies can realize the promise of AI/ML technology to identify and remediate previously unknown threats at the earliest possible stage, with less noise and greater efficiency. Providing teams with integrated access to full network packet data is critical to ensuring the accuracy and efficiency of AI/ML security tools, and to ensuring those tools are properly tuned to the environment in which they are deployed.

About the Author

Since 2017, Cary Wright has been Vice President of Product Management at Endace. With more than 25 years’ experience in the telecommunications and networking industries at companies like Ixia and Agilent Technologies, he has been pivotal in creating market-defining products. Cary has an innate understanding of customers’ needs and is instrumental in continuing the evolution of network recording and playback solutions and driving the growth of the Endace Fusion Partner

July 24, 2021
