Behavioral Design for Cybersecurity
By Alex Blau, Vice President, ideas42
In this ever-maturing digital era, using technology to solve our everyday problems feels like an obvious thing to do. Yet, in the realm of cybersecurity, a domain in which some of the best and brightest are applying their minds to build more adept and complex high-tech safeguards, we still find limitations in what our silicon chips and machine learning algorithms are capable of. Despite our obvious predilection towards innovation, one lament I continue to hear from experts is that we’re not doing enough to deal with cybersecurity’s weakest link, the human in the middle.
In 2014, IBM wrote in their Cyber Security Intelligence Index report that, “over 95 percent of all [security] incidents investigated recognize ‘human error’ as a contributing factor.” Yet, to stem the growing cost of cybercrime, estimated to exceed $2 trillion globally by 2019, we continue to rely on technological solutions. Governments and private firms are rushing to invest billions of dollars in the next wave of hardware and software systems to protect institutions, organizations, and citizens from the mounting global cyber threat. But while building a better firewall or smarter threat-detection software is a necessary defensive tactic, it will almost certainly not be sufficient to secure our digital borders.
When we focus our attention on the power of technology, we may forget to consider how the behaviors of people can create the most persistent threats. Simple things like clicking on a bad link, opening the wrong email attachment, or inserting an insecure USB drive can be devastating to network security. And yes, while some technology investments have sought to prevent these sorts of problems from occurring in the first place, applying innovations like AI to detect a phishing email or identify malware in an attachment only goes so far—how many times has your spam filter missed a phishing email?
In practice, human beings must fill the gap between the limited capabilities of a given technology and a system’s actual security needs, which ultimately relies on something humans are imperfect at using: their judgment.
Thankfully, there are entire fields of research that examine when human judgment fails and can be used to help the designers of security systems predict when a user will act in less than safe ways. By applying insights from behavioral economics and psychology, we can begin to understand why users might fall for phishing attacks, click through browser warnings, or do any number of unsafe things with their computers.
Take updating, for instance. Security professionals frequently stress that applying security updates in a timely manner is one of the most important security measures any user can take. Many operating systems even prompt the user to install updates as soon as they are ready. Yet, despite the ease and importance of updating, many users procrastinate on this critical step. Why? Part of the problem is that update prompts often come at the wrong time–when the user is preoccupied with something else–and provide the user an easy out in the form of various “remind me later” options. Because of this small design detail, users are much more likely to defer the update, no matter how important it is–how many times have you clicked “remind me tomorrow” before finally clicking “update now”?
By understanding the factors that influence people’s decisions and actions, we can begin to identify solutions. For instance, instead of offering to push the reminder off to some unspecified later date, it might be prudent to ask the user to commit to a specific time at which the update will take place. Committing to a concrete time short-circuits the endless stream of procrastination decisions by getting the user to think deliberately about when a better time to update would be.
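As a rough sketch of what that commitment device might look like in practice, the hypothetical prompt below offers a few concrete time slots instead of an open-ended “remind me later” (the slot labels, function names, and scheduling logic are illustrative assumptions, not drawn from any real updater):

from datetime import datetime, timedelta

def prompt_for_update_time(now: datetime) -> datetime:
    """Offer concrete time slots rather than an indefinite deferral."""
    slots = {
        "1": ("Tonight at 10 PM", now.replace(hour=22, minute=0, second=0, microsecond=0)),
        "2": ("Tomorrow at 9 AM", (now + timedelta(days=1)).replace(hour=9, minute=0, second=0, microsecond=0)),
        "3": ("Install now", now),
    }
    for key, (label, _) in slots.items():
        print(f"{key}. {label}")
    choice = input("When should the update run? ").strip()
    # Default to installing now if the user enters something unexpected.
    return slots.get(choice, slots["3"])[1]

if __name__ == "__main__":
    scheduled = prompt_for_update_time(datetime.now())
    print(f"Update scheduled for {scheduled:%Y-%m-%d %H:%M}.")

The design choice here is the behavioral point: every option the user can pick is a specific, scheduled commitment, so there is no path that simply postpones the decision indefinitely.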
But, human error in cybersecurity is not limited to the decisions and actions of end-users. Computer engineers may build errors into code that compromise the security of their software, IT administrators may not set up security systems properly, and C-level executives may not make the right kinds of investment decisions in their organization’s cyberinfrastructure. However, by being able to recognize behavioral factors in cybersecurity, and identify their root causes, there is a rich vein of opportunity for making the system as a whole more robust.
Over the past year, ideas42, my behavioral science research and design firm, has been looking into what behavioral challenges exist in cybersecurity, and how we can use insights from the behavioral sciences to solve them. Through this effort, we wrote a cyber-novella, Deep Thought: A Cybersecurity Story, which tells the tale of how an extensive hack can be carried out by simply exploiting what we already know about the predictable errors in human behavior.
If you’re one of the people who have been seeking technological solutions to what you see as purely technological problems, then I recommend that you take a look at what we’ve learned and see if there might be other ways of approaching the security challenges you’re trying to solve. And if you’re one of those people who recognize that the person sitting between the chair and the computer is perhaps one of the greatest threats to your cybersecurity system, then you’re already on the right track.
About the Author
Alex Blau is a Vice President at ideas42. He has extensive experience applying insights from behavioral science to solve design and decision-making challenges in a broad array of domains. His current foci at ideas42 are in the areas of cybersecurity, financial inclusion, public safety, and A/B testing. Alex Blau can be reached via email at ablau@ideas42.org, on Twitter at @unbofu, and through the company website, http://www.ideas42.org.