Anomaly Detection Is the Next Cybersecurity Paradigm


It’s time to move beyond static lists of things forbidden and things allowed.

By Aron Hsiao, Director of Marketing and Insights, Plurilock

Static lists have long been at the heart of cybersecurity.

Virtually every cybersecurity practice today depends on lists of some kind. In network security, lists of addresses, ports, peers, and keys. In malware and environment security, lists of suspicious code and process “signatures.” In access management and authentication, lists of user credentials.

It’s rapidly becoming clear that these lists are no longer adequate. Their management, maintenance, and distribution consume countless billions of dollars, yet cybersecurity is as far from a solved problem as it has ever been. Both breach rates and breach concerns among regulators and the public continue to grow.

Why Cybersecurity Is Still Hard

At the end of the day, the problem is that these lists all fall short in the same way. We think of them as lists of exclusions and protections, but each such list is also secretly a direct avenue for attack, precisely through what it allows—or at least doesn’t forbid.

  • A list of valid credentials is also by nature a list of methods to compromise protected data, accounts, and privileges.
  • A configured firewall is also by nature a set of ports, addresses, and subnetworks that will remain vulnerable.
  • A set of malware signatures is also by nature a description of the patterns that malware can avoid exhibiting in order to escape detection.
  • A PKI is inherently a set of doors that can always be opened with the right data—no matter how narrow or obscure we try to ensure that these doors remain.
  • And so on.

For years, security professionals have bemoaned “security through obscurity” even as so much of cybersecurity is fundamentally still about obscurity—ensuring that these lists remain either obscure or difficult to understand or decode. Ultimately, it’s all security through obscurity. Once these things are no longer obscure, the doors are open.

If the last several decades have taught us anything, it’s that malicious actors are remarkably adept at getting hold of or exploiting these lists—and they pursue this strategy precisely because these lists are, unavoidably, avenues for attack.

No matter how sure we’ve been of each new (and often newly complex) protection method, each has always become, in the end, the latest door through which malicious actors enter.

New Authentication Practices: Behavioral Biometrics

Governments and security-critical organizations, faced over the last decade with millions or billions of new users, growing cloud profiles, and ballooning data and systems footprints—not to mention expanding attack and risk surfaces—have increasingly looked for new approaches.

In user authentication and PAM circles, behavioral-biometric authentication methods are now the leading solution to this problem. While usernames, passwords, tokens, fingerprints, and mobile SIMs are all attack vectors that bad actors can use to impersonate real users and gain illicit access, behavioral-biometric systems are fundamentally different.

In behavioral-biometric systems, which are driven by machine learning and observation over time, there is no particular credential that can be stolen and reused in order to gain entry. There are also no biographical or other credentials used or kept on file to act as objects of theft in order to access still other systems.

Instead, behavioral-biometric technologies recognize people based on tiny, machine-observable patterns in input or sensor data that they generate as they go about their business. In other words, on behavioral-biometric systems, users must be “recognizable” in wholly organic, multifaceted, and embodied ways—ways that are difficult if not impossible to simulate. Authentication happens inadvertently, as users simply act like—and are—themselves.
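To make the idea concrete, here is a minimal, illustrative sketch of timing-based behavioral recognition—not Plurilock’s actual technology. It enrolls a user’s typing rhythm as a per-feature statistical baseline, then scores a new session by its average deviation from that baseline; a genuine user scores low, an impostor high. All feature values here are invented for illustration.

```python
import statistics

def build_profile(samples):
    """Learn a per-feature baseline (mean, stdev) from enrollment samples.

    `samples` is a list of feature vectors -- e.g. inter-key flight times
    in milliseconds captured while the user types normally.
    """
    columns = list(zip(*samples))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def anomaly_score(profile, vector):
    """Mean absolute z-score of a new vector against the baseline profile."""
    return sum(abs(x - mu) / sigma
               for x, (mu, sigma) in zip(vector, profile)) / len(vector)

# Enrollment: hypothetical flight times (ms) between key pairs for one user.
enrollment = [
    [110, 95, 130, 105],
    [115, 90, 125, 100],
    [108, 98, 128, 110],
    [112, 92, 132, 102],
]
profile = build_profile(enrollment)

genuine = [111, 94, 129, 104]    # typing rhythm close to the baseline
impostor = [60, 180, 70, 200]    # a very different rhythm

print(anomaly_score(profile, genuine) < anomaly_score(profile, impostor))  # True
```

Note that nothing reusable is stored: the profile describes a rhythm, not a secret, so there is no credential to steal and replay.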

Generalizing Behavioral Biometrics to Anomaly Detection

At Plurilock we’ve long considered behavioral biometrics to be our core competency, yet recently we’ve been increasingly engaged in research and development on machine-to-machine security models for the Internet of Things and in new ways to detect and stop malware.

It’s rapidly becoming clear that all of these are cases in which stronger, more efficient, and more cost-effective security can be achieved using a group of very similar anomaly detection technologies.

The claim that “identity is the new perimeter” has been making the rounds over the last year or two, and we don’t disagree with it for human users. But this claim is actually a specialized instance of a more general claim that will shape cybersecurity in the decades to come. After all, identity is exactly the problem—and more and more, anomaly detection methods are the best way to establish it. So it’s not identity that is the new perimeter—it’s the anomaly.

Securing User Accounts, Things, and Environments

But how does anomaly detection address the other problems I just mentioned?

Recall that behavioral biometrics enables us to recognize real users. It does this not with lists of static facts like credentials or fingerprints—that are in fact themselves vulnerabilities—but through the ability to recognize, without biographical data or physical markers, whether someone is “being themselves” or not. It’s fundamentally about detecting user anomalies.

Because users are human beings, we’ve long called this a biometric technology. But the same approach—using machine learning for anomaly detection—is now proving effective in other areas of cybersecurity as well. In our era of highly complex devices, machines increasingly resemble individuals, each with its own timings, characteristics, and tendencies. This is especially true as machine learning and automation—and the unique ways in which these affect memory, process, and latency characteristics—take hold across more and more devices.

In the realm of malware, too, the Spy vs. Spy game of signature library updates versus new threat “strains” in the wild will soon be supplanted by anomaly detection through machine learning. Computing environments, process tables, and schedules are now deep and nuanced enough to offer—once again—rich signal environments that enable the recognition of both normal and anomalous states. The result is software security without signature scanning.
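A simplified sketch of what signature-free detection might look like: instead of matching code fragments against a signature library, learn an envelope of normal behavior from hypothetical per-process features (here, syscall rate, file writes per second, and outbound connections per second) and flag anything that leaves it. The features and numbers are illustrative assumptions, not any product’s method.

```python
# Hypothetical per-process behavior samples gathered during normal operation:
# [syscalls/sec, files written/sec, outbound connections/sec]
baseline = [
    [120, 2, 1],
    [135, 3, 0],
    [110, 1, 1],
    [128, 2, 2],
]

def learn_envelope(samples, slack=0.5):
    """Learn an allowed [lo, hi] range per feature, widened by `slack`."""
    envelope = []
    for column in zip(*samples):
        lo, hi = min(column), max(column)
        pad = (hi - lo) * slack or 1.0  # never allow a zero-width range
        envelope.append((lo - pad, hi + pad))
    return envelope

def is_anomalous(envelope, vector):
    """A process is anomalous if any feature leaves its learned envelope."""
    return any(not (lo <= x <= hi)
               for x, (lo, hi) in zip(vector, envelope))

env = learn_envelope(baseline)
print(is_anomalous(env, [125, 2, 1]))    # False: looks like normal behavior
print(is_anomalous(env, [400, 50, 30]))  # True: a ransomware-like burst
```

The point of the sketch is that no signature of the “burst” was ever distributed or updated; the detection falls out of knowing what normal looks like.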

Rather than relying on static policies—which credentials grant access, which don’t, which MAC addresses and keys are in, which are out, which code fragments are allowed, and which aren’t—it’s time for the cybersecurity industry to begin to think in terms of recognition and anomaly detection, just as behavioral-biometric solutions now do with human users.

Making the Transition

The shift away from list-based and credential-based forms of cybersecurity isn’t one that can or will happen overnight, but it urgently needs to happen nonetheless—and it will happen, simply because the traditional paradigm can’t be sustained much longer. It’s just too expensive, complex, and ineffective at this point.

The old, static methods for securing data, accounts, and cyber-systems haven’t kept pace with the threat landscape—and the gap is widening quickly. For corporate officers and security professionals tasked with protecting users, systems, and data, it’s time to reorient thinking toward anomaly detection technologies as tomorrow’s keys to cybersecurity.

It’s time to stop thinking about how to keep our many lists obscure—and to start considering technologies that make list-based cybersecurity (and its vulnerabilities) obsolete.

About the Author

Aron Hsiao is the Director of Marketing and Insights at Plurilock Security Solutions, Inc. One of a number of PhDs on Plurilock’s senior team, Aron has a research background in the analysis of human-computer interaction systems. He previously worked at big data startup Terapeak, at e-commerce giant eBay, Inc., and as an instructor at NYU, CUNY, and The New School for Social Research. In addition to his academic work and his work at Plurilock, Aron is the author of a number of books on Linux, cybersecurity, and open source technologies.

Aron can be reached online at [email protected].

