Uncovering hidden cybersecurity risks

By Adam Nichols, Principal of Software Security at GRIMM

The technology we use and depend upon has critical vulnerabilities in its software and firmware, lurking just beneath the surface of the code. Yet our process for dealing with them has not changed. Hardly a week goes by without news of a serious vulnerability, followed by a scramble to determine whether we are affected.

This scramble, this mad dash, is a process we have all become accustomed to. A series of anxiety-inducing questions starts to hum around our organizations: “Are we using the affected software? Are our configurations vulnerable? Has an attack been detected? Should we apply a patch as soon as possible to prevent exploitation, even at the risk of the side effects of an untested patch?”

In the rush to protect one’s organization from potential threats, the lifespan of the vulnerabilities in question is often overlooked. If the vulnerabilities were found in recently added features, they may not affect you at all if you are running stable or long-term-support software. On the other hand, if the vulnerabilities have been around for a long time, it raises the question of how they went unnoticed for so long. More importantly, how can we find these things earlier, before there is a crisis?

Case studies

Recently, the GRIMM independent security research team began compiling examples to demonstrate that an underlying risk is being accepted, perhaps unknowingly, by a large number of organizations. The two examples that can be discussed publicly[1] are a Local Privilege Escalation (LPE) vulnerability in the Linux kernel and a Remote Code Execution (RCE) vulnerability in an enterprise time synchronization product called Domain Time II. Both show that vulnerabilities can sit in widely used products, undetected, for well over a decade.

The bugs that were exploited to construct the Linux LPE were originally introduced in 2006. The exploit allowed an unprivileged user to gain root access, and it affected several Linux distributions in their default configurations. The Domain Time II vulnerability allowed a network attacker to hijack the update process and trick the person applying the update into installing malware. The underlying vulnerability was present at least as far back as 2007. Although the name might not be familiar, the software is used in many critical sectors, such as aerospace, defense, government, banking and securities, manufacturing, and energy.
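
The Domain Time II case illustrates a recurring pattern: an update channel that an attacker on the network can tamper with. As a generic, hedged sketch of the defensive idea (this is not the vendor’s fix, and the file name and digest below are placeholders), an updater can refuse to install anything whose hash does not match a value published over a separate, authenticated channel:

```python
#!/usr/bin/env python3
"""Minimal sketch: refuse to install a downloaded update unless its
SHA-256 digest matches a value obtained out-of-band from a trusted
channel. The file name and digest are placeholders, not from the
article; production updaters should verify signed update metadata
(for example, via a framework like TUF) rather than one pinned hash."""
import hashlib

# Placeholder: replace with the digest the vendor publishes out-of-band.
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large updates need not fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(path: str) -> bool:
    """True only when the download matches the pinned digest."""
    return sha256_of(path) == EXPECTED_SHA256

if __name__ == "__main__":
    path = "update-package.bin"  # hypothetical downloaded update file
    if verify_update(path):
        print("digest matches; safe to hand off to the installer")
    else:
        print("digest mismatch: possible hijacked download, aborting")
```

The point of the sketch is simply that integrity must be anchored in something the network attacker cannot modify; any update channel without such an anchor is open to exactly this kind of hijack.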

How do you uncover and/or mitigate these risks before they become an emergency?

Strategies for addressing this risk

There are a number of different ways organizations can attempt to address the risk of unknown vulnerabilities, each with its own strong points and limitations.  It takes a combination of them for optimal coverage. Typical threat intelligence only informs you of attacks after they happen. This information may be helpful, but it will not allow you to truly get ahead of the problem.

Maintaining an inventory of your environment is part of the solution, but without a software bill of materials (SBOM), there’s a risk that things will be missed. For example, GitLab uses the nginx web server, so if someone only sees GitLab on the asset list, they may not realize that they are also affected by vulnerabilities in nginx.
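
To make that concrete, here is a minimal sketch of searching a software bill of materials for nested components. It assumes a hypothetical sbom.json in CycloneDX JSON format; the file name and the nginx query are illustrative, not from the article:

```python
#!/usr/bin/env python3
"""Minimal sketch: search a CycloneDX SBOM (JSON) for a component by
name, so nested dependencies like a bundled web server show up even
when the asset list only names the top-level product."""
import json
import sys

def find_component(sbom_path: str, name: str):
    """Yield (name, version) for every component whose name matches."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    # CycloneDX JSON lists all constituent parts under "components".
    for component in sbom.get("components", []):
        if name.lower() in component.get("name", "").lower():
            yield component.get("name"), component.get("version", "unknown")

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "nginx"
    for comp_name, version in find_component("sbom.json", target):
        print(f"{comp_name} {version}")
```

With SBOMs collected for each product in the inventory, answering “are we running the affected component anywhere?” becomes a query instead of a scramble.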

To control costs, traditional penetration tests are either scoped to be a mile wide and an inch deep, or they’re very deep, but limited to one particular system.  These engagements are valuable, but it’s not feasible to have in-depth penetration tests on every single product that an organization uses.

Having your own dedicated team of security researchers can address the shortcomings of the approaches above. An internal team will have a holistic view of your security posture, including the context of what matters most to your organization, along with the freedom to dig in and go where the research takes them.

Rise of the Information Assurance team

Information assurance teams focus on the products that your organization depends on. They can work with your incident response team to see the trends specific to your industry and your organization. Senior software security researchers provide the intuition needed to know where vulnerabilities are likely to be present, so that effort is focused on the components most likely to harbor the biggest hidden risk. The team should also include at least one person with threat modeling experience, who can quickly determine which components pose the biggest risk to your institution.

Having a diverse skill set is critical to the success of information assurance teams. Their mission should be to uncover and mitigate these hidden risks. They need the freedom to operate in a way that makes the best use of their time. This likely includes integrating them with the procurement process, so they can weigh in on new acquisitions before additional risk is introduced. It also means relying on their expert judgement to decide which systems to look at, in what order to investigate them, and when it is time to stop digging into one piece of a system and move on to the next.

If you are thinking that this sounds a lot like Google’s Project Zero, you’re right. The difference is that the Project Zero team is focused on products that are important to Google, which likely only partially overlap with the things that are important to you. Having a team that works for you solves this problem.

A team like this takes time to build, and it is expensive, which means it’s not an option for everyone.  If it’s not an option in your organization, you must ask yourself how you’re going to solve the problem.  Some options would be to:

  • outsource this work to an external team
  • depend on isolation, exploit mitigations, and network defenses (see the sketch below)
  • leverage cyber insurance
  • simply accept the risk

It’s important to always acknowledge the last option: accepting risk.  Risk acceptance is always an option, and it’s important that it be a choice, not something that was arrived at due to inaction.
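
The second option in the list above can at least be audited. As a hedged illustration, the following sketch uses the pyelftools library to triage whether a Linux binary was built with common exploit mitigations; it checks only three properties and is a starting point, not a substitute for a dedicated tool such as checksec:

```python
#!/usr/bin/env python3
"""Minimal sketch: report common exploit mitigations compiled into an
ELF binary, using pyelftools (pip install pyelftools). Checks three
properties only: NX stack, PIE, and RELRO."""
import sys

from elftools.elf.constants import P_FLAGS
from elftools.elf.elffile import ELFFile

def mitigations(path: str) -> dict:
    with open(path, "rb") as f:
        elf = ELFFile(f)
        segments = list(elf.iter_segments())
        gnu_stack = [s for s in segments if s["p_type"] == "PT_GNU_STACK"]
        return {
            # PIE binaries are ET_DYN; fixed-address binaries are ET_EXEC.
            "PIE": elf.header["e_type"] == "ET_DYN",
            # NX: a GNU_STACK segment exists and is not marked executable.
            "NX": bool(gnu_stack)
            and not (gnu_stack[0]["p_flags"] & P_FLAGS.PF_X),
            # RELRO: a GNU_RELRO segment makes (part of) the GOT read-only.
            "RELRO": any(s["p_type"] == "PT_GNU_RELRO" for s in segments),
        }

if __name__ == "__main__":
    for binary in sys.argv[1:]:
        print(binary, mitigations(binary))
```

Run over the binaries on an asset list (for example, python3 check_mitigations.py /usr/sbin/*), this gives a quick view of where mitigation coverage is thin and where a hidden vulnerability would be easiest to exploit.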

Even if you have an information assurance team at your disposal, it does not mean that other efforts are no longer necessary. External security research partnerships, audits, penetration testing, network isolation, sandboxing, and exploit mitigations are all still valuable tools to have in your toolbelt. Understanding the shortcomings of each tool is the key to validating that they are layered in a way that keeps your organization protected.

About the Author

Adam Nichols is the Principal of Software Security at GRIMM. He oversees all of the application security assessments and threat modeling engagements, and is involved with most of the projects involving controls testing. It’s not uncommon for him to be involved at the technical level, helping with exploit development and ensuring that all the critical code is covered.

Adam also oversees the Private Vulnerability Disclosure (PVD) program, which finds 0-days and provides subscribers with an early warning so they can roll out mitigations before the information becomes public.

Adam can be reached online at [email protected] or via the company website at http://www.grimm-co.com.

[1] More examples are currently under embargo while we complete the coordinated disclosure process.  These will be made public in due time.
