By John Linford, Forum Director, The Open Group Security Forum, and Open Trusted Technology Forum
Even in an industry known for rapid shifts and changing trends, the growth of Zero Trust in the cybersecurity zeitgeist over the last few years has been remarkable. Until recently it was still just a germinating concept; now it is on the roadmap of almost every security team in the world.
That is not an exaggeration. The fourth annual State of Zero Trust report from Okta, published last summer, found that 97% of survey respondents had “a defined Zero Trust initiative in place or planned to have one within the next few months”. That is a rise from just 16% in the first edition, and it is hard to think of another growth story quite like that.
So, should we just declare victory for Zero Trust now, safe in the knowledge that from here on out we are in the implementation phase, and that everyone will soon be benefitting from having replaced their old network edge model with continuous, granular authentication?
That might, unfortunately, be premature. One nice thing about Zero Trust is that, compared to some technological innovations, its fundamental principle is simple to describe. The previous paragraph gave one way of phrasing it; we could equally frame it as a shift to data- and asset-centric security rather than network-centric approaches, or as providing access only when required rather than denying access only when necessary.
These overlapping definitions – and there are others in circulation – are fertile ground for hype. That means that, while 97% of Okta’s respondents might have the term ‘Zero Trust’ in their plans and projects, there are real questions to ask about what a true Zero Trust Architecture (ZTA) should or will ultimately look like.
Although simply stated in theory, Zero Trust can be challenging to implement in practice. For businesses and users alike, security practices can be deeply embedded and difficult to change. Organizations have broad suites of security tools and processes which may need to be reworked, and, just as importantly, users are highly familiar with traditional processes and will need determined effort and communication to adjust to this new culture.
The effort involved is worth it, though, because the changing nature of the cyber threat is quickly outpacing the traditional security perimeter model's ability to combat it. Malicious actors are becoming ever more skilled at moving laterally to points of value within networks once the perimeter is breached, and there is only so much that security teams can do to mitigate that damage.
At the same time, the shape of that traditional perimeter is becoming still harder to define. Changing working patterns mean that there is growing pressure to enable personal devices to access internal networks, while digitalized relationships between businesses and clients increasingly require bridges between what would once have been strictly distinct systems.
All of that means that the boundaries between insiders and outsiders are blurring – to the extent that thinking in terms of a boundary at all is becoming significantly less useful or realistic. Instead, we need an effective ZTA in which the user, asset, or data is the perimeter.
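To make the idea concrete, the shift from a network perimeter to per-request evaluation can be sketched as a default-deny policy check. This is an illustration only – the attribute names and policy rules below are hypothetical and do not come from any standard:

```python
from dataclasses import dataclass

# In a Zero Trust model, every request is evaluated on its own merits --
# identity, device posture, and the sensitivity of the asset -- rather
# than being trusted because it originates "inside" the network.
@dataclass
class AccessRequest:
    user_authenticated: bool
    device_compliant: bool
    resource_sensitivity: str  # e.g. "low", "medium", "high" (hypothetical tiers)
    mfa_verified: bool

def decide(request: AccessRequest) -> bool:
    """Default-deny policy decision (illustrative sketch, not a real product API)."""
    # No implicit trust: fail closed unless every required signal is present.
    if not (request.user_authenticated and request.device_compliant):
        return False
    # More sensitive assets demand stronger, continuous verification.
    if request.resource_sensitivity == "high" and not request.mfa_verified:
        return False
    return True

# Each request is checked independently; there is no network-location shortcut.
print(decide(AccessRequest(True, True, "high", False)))  # → False (denied)
print(decide(AccessRequest(True, True, "high", True)))   # → True (granted)
```

A real ZTA would draw these signals from identity providers, device-management telemetry, and policy engines, but the essential pattern – evaluate every request, deny by default – is the same.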
We need a shared understanding of what truly is (and, just as importantly, what is not) Zero Trust. Any organization pursuing Zero Trust should start from a position of relying on robust, open, tested, vendor-neutral definitions of the methodology in order to assure that the systems they roll out really will meet the demands of future security threats.
There are widely used definitions available, including NIST® SP 800-207, and The Open Group has published a clear, readable guide entitled the Zero Trust Commandments, which outlines what is non-negotiable for a successful Zero Trust strategy. We are also in the process of developing our own standard ZTA framework, which will help to create a shared understanding among businesses, vendors, government, and academia about how the different elements of a ZTA should interact in order to deliver effective security.
The fact that there has been a rapid, large-scale move towards Zero Trust is, to be clear, a highly encouraging development: the costs of digital security breaches reliably increase year-on-year, and the need for a new response is clear. To get ZTA right, we first need to properly define it.
About the Author
John Linford is the Forum Director of The Open Group Security Forum and Open Trusted Technology Forum. As staff at The Open Group, John supports the leaders and participants of the Open Trusted Technology Forum in utilizing the resources of The Open Group to facilitate collaboration and follow The Open Group Standards process to publish their deliverables. Prior to joining The Open Group in June 2019, John worked as a Lecturer for San Jose State University, teaching courses in Economics.
John is Open FAIR™ certified and was the lead author of the Open FAIR Risk Analysis Process Guide (G180), which offers best practices for performing an Open FAIR risk analysis with an intent to help risk analysts understand how to apply the Open FAIR risk analysis methodology.