Who watches the watchers?
By Dr. Richard Ford, Chief Scientist, Forcepoint
Over the last few years, I think every CISO has become painfully aware of the so-called “insider threat” – that is, the risks posed by a malicious employee who intends to harm the company in some way. While we don’t like to think about our co-workers like that, there have been so many high-profile cases where an employee has decided to take an action that causes harm that this threat to the business cannot be ignored.
Operationally, there are important differences between an insider and an outsider. The traditional “moats and walls” security stance isn’t effective against those who are already inside, and while concepts such as Zero Trust (there’s a lot of material here, if you want to read more: https://go.forrester.com/blogs/category/zero-trust/) embrace segmentation and separation of duties, technologically we continue to move toward a more nuanced world. Thus, our goal is to use different forms of monitoring and analytics to detect – and, ideally, predict – various forms of employee misbehavior. These techniques are especially valuable when dealing with safety-critical systems or high-value data, but are applicable everywhere: most large companies need to make a data-driven decision around how rigorously they protect themselves from attacks that originate from within.
While these approaches can be very effective, they are less useful when attempting to identify users who have a legitimate requirement to use and modify the data they intend to steal. Analytically, it is easy to detect when a Project Manager suddenly shows intense interest in collecting source code; it is far harder to detect when a developer, who works with source code every day, is actually abusing that access. Typically, the data user or producer has by far the most extensive access and rights – and legitimately uses that data in her day job frequently. Thus, CISOs are left scratching their heads, trying to figure out how to protect their information from those with legitimate access.
To put this in perspective, let’s take a quick look at a real-world case. According to an affidavit filed in the US District Court for the Northern District of New York, these things happen and can be quite unpleasant. From Case 1:18-MJ-434-CFH (see: http://online.wsj.com/public/resources/documents/GE_Niskayuna_Complaint.pdf), we read about a Principal Engineer working for GE; as this case is pending, we’ll just randomly call them “Dr. Ford”. Dr. Ford is alleged to have collected MATLAB files and encrypted them… and then used vim to append this data to an image file. This image was then sent to an email address external to the company.
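To see why this technique is so effective at evading a cursory inspection, here is a minimal sketch of the general idea – appending an encrypted payload after an image’s end-of-image marker, so the file still opens normally in a viewer. The filenames, the toy XOR “cipher,” and the payload below are illustrative stand-ins, not details from the affidavit.

```python
# Sketch: hiding an encrypted payload after a JPEG's end marker.
# Most image viewers stop rendering at the EOI marker, so trailing
# bytes go unnoticed unless someone inspects the file byte-by-byte.

JPEG_SOI = b"\xff\xd8\xff\xe0"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"          # JPEG end-of-image marker

def make_dummy_jpeg() -> bytes:
    """A minimal byte string shaped like a JPEG, for demonstration only."""
    return JPEG_SOI + b"\x00" * 32 + JPEG_EOI

def xor_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    """Toy cipher standing in for whatever real encryption was used."""
    return bytes(b ^ key for b in data)

def smuggle(image: bytes, secret: bytes) -> bytes:
    """Append the encrypted secret after the image's end marker."""
    return image + xor_encrypt(secret)

carrier = smuggle(make_dummy_jpeg(), b"proprietary design files")

# The carrier still begins with a valid JPEG header and contains the
# end marker, so a casual check sees an ordinary image...
assert carrier.startswith(JPEG_SOI)
assert JPEG_EOI in carrier

# ...but everything after the end marker is the hidden payload.
payload = carrier[carrier.index(JPEG_EOI) + len(JPEG_EOI):]
assert xor_encrypt(payload) == b"proprietary design files"
```

The takeaway for defenders: the resulting file is a perfectly valid image, which is exactly why event-level inspection of the file itself reveals so little.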
When seen as a sequence of actions, this story is pretty easy to understand. However, when we view it at the individual event level, it is much harder to spot. A cursory examination of the image file, for example, would not have shown much amiss. Encrypting files isn’t particularly unusual, and if the files were ones that Dr. Ford worked with most days, it’s hard to detect that their access is anomalous… because it isn’t! Only when viewed end-to-end does the full picture emerge.
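The gap between event-level and sequence-level visibility can be made concrete with a toy example. The events and counts below are invented for illustration – they are not real telemetry or a description of any particular product – but they show why per-event anomaly scoring stays quiet while the ordered chain stands out.

```python
# Toy illustration: each step in an exfiltration chain is common on
# its own, so scoring events in isolation raises no alarm; only the
# ordered sequence, tied to one user, is rare.

from collections import Counter

# Hypothetical daily event counts across an engineering organization.
event_counts = Counter({
    "open_source_file": 5000,     # developers do this constantly
    "encrypt_file": 400,          # routine for protected IP
    "edit_image": 600,            # docs, slides, diagrams
    "send_external_email": 3000,  # vendors, customers, partners
})
total = sum(event_counts.values())

# The suspicious ordered chain, observed exactly once for one user.
chain = ("open_source_file", "encrypt_file", "edit_image",
         "send_external_email")
chain_count = 1

# Event-level view: every individual step looks ordinary.
per_event_freq = {e: event_counts[e] / total for e in chain}
assert all(freq > 0.04 for freq in per_event_freq.values())

# Sequence-level view: the end-to-end chain is vanishingly rare.
chain_freq = chain_count / total
assert chain_freq < 0.001
```

Real analytics platforms are, of course, far more sophisticated – weighing ordering, timing, peer-group baselines, and user context – but the underlying principle is the same: the signal lives in the sequence, not in any single event.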
About the Author
Dr. Richard Ford is the Chief Scientist of Forcepoint. Ford has over 25 years’ experience in computer security, having worked with both offensive and defensive technology solutions. During his career, Ford has held positions with Virus Bulletin, IBM Research, Command Software Systems, and NTT Verio. In addition to work in the private sector, he has also worked in academia, having held an endowed chair in Computer Security and served as Head of the Computer Sciences and Cybersecurity Department at the Florida Institute of Technology. Under his leadership, the University was designated a National Center of Academic Excellence in Cybersecurity Research by the DHS and NSA. He has published numerous papers and holds several patents in the security area. Ford holds a Bachelor’s, Master’s, and D.Phil. in Physics from the University of Oxford. In addition to his work, he is an accomplished jazz flutist and instrument-rated pilot.