Application Isolation and Control – A Modern Defense for New Threats

February 20, 2019

By Fraser Kyne, EMEA CTO, Bromium

The detection-based method of preventing malware is fundamentally flawed, yet it remains the de facto standard in cybersecurity. Day after day, organizations scramble to protect against a growing number of threats, yet it takes only one piece of malware slipping past detection to wreak havoc on IT systems.

Ironically, this was predicted by Alan Turing more than 80 years ago. In proving the halting problem undecidable, he showed that no general algorithm can determine, for every possible program and input, whether that program will finish running or continue forever.
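Turing's diagonalization argument can be sketched in a few lines of illustrative Python (a toy demonstration, not the formal proof): given any claimed halting predictor, we can build a program that asks the predictor about itself and then does the opposite, so the prediction is always wrong. The bounded loop below stands in for "runs forever" so the demo itself terminates.

```python
def make_contrarian(predictor):
    """Build a program that defies any claimed halting predictor.

    `predictor(prog)` is any function returning True if it believes
    `prog` will halt. The contrarian consults the prediction about
    itself, then does the opposite of whatever was predicted.
    """
    def contrarian(max_steps=1000):
        if predictor(contrarian):
            # Predictor says "halts" -> loop (bounded so the demo ends)
            steps = 0
            while True:
                steps += 1
                if steps >= max_steps:
                    return "still running"  # stands in for "never halts"
        else:
            # Predictor says "runs forever" -> halt immediately
            return "halted"
    return contrarian

# Whatever verdict a predictor gives, the actual behavior contradicts it.
for verdict in (True, False):
    prog = make_contrarian(lambda p, v=verdict: v)
    actually_halted = (prog() == "halted")
    print(f"predicted halts={verdict}, actually halted={actually_halted}")
```

No matter how the predictor is implemented, the contrarian program falsifies it, which is exactly why a perfect general-purpose detector cannot exist.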

The same logic applies to malware detection. No algorithm can be relied on to correctly identify every threat that comes knocking: the volume of threats is large and varied, and previously unseen threats emerge every day.

A detection-based approach is akin to casting a net: either the net is so large that it tangles itself, or it isn't cast wide enough and invariably lets some things slip through. IT teams try to solve this problem by adding more layers to their detection solutions, but that just casts more nets plagued by the same problems.

Detection-based solutions can over-complicate security landscapes

Hackers are resourceful, utilizing new tactics – such as polymorphic malware and zero-day exploits – to bypass detection-based software and break into critical IT systems. For example, in the Locky ransomware campaign, hackers customized the malware to execute after the fake document was closed, making it much harder to spot and bypassing the majority of detection-based AV solutions.
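To make the evasion concrete, here is a deliberately simplified sketch (a toy, not a real AV engine) of why signature matching fails against polymorphic code: a scanner that looks for known byte patterns catches the original sample but misses a trivially re-encoded variant whose behavior is unchanged.

```python
# Toy signature scanner: matches known byte patterns in a sample.
# The signature and samples below are made up for illustration.
KNOWN_SIGNATURES = [b"EVIL_PAYLOAD_V1"]

def signature_scan(sample: bytes) -> bool:
    """Return True if any known signature appears in the sample."""
    return any(sig in sample for sig in KNOWN_SIGNATURES)

original = b"...header...EVIL_PAYLOAD_V1...body..."
# A "polymorphic" variant: same logic, re-encoded (here, XOR-obfuscated),
# the way real malware repacks itself to change its byte pattern.
mutated = bytes(b ^ 0x2A for b in original)

print(signature_scan(original))  # True  - the known sample is caught
print(signature_scan(mutated))   # False - the rewritten variant is missed
```

Real polymorphic malware carries a small decoder stub rather than a bare XOR, but the principle is the same: change the bytes, keep the behavior, and the signature no longer matches.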

Instead of focusing on detection, organizations that are serious about security are starting to rely on segmentation. By segmenting networks and applications, businesses are seeing that they can prevent malware from causing harm and keep data and networks safe.

Segmentation offers businesses protection, but it relies on PCs or applications having access only to limited areas of the network. Early iterations saw little uptake because adding new PCs to a segmented network was expensive and time-consuming to deploy.

Segmenting intellectual property and sensitive data could also still leave users at risk if the applications used to access that data are not themselves isolated. Without a solution to these problems, network segmentation has largely failed to get off the ground, and detection has persisted as the leading cybersecurity approach.

By focusing on isolation, security is simplified and end users are protected

Everybody wants to be able to use technology to do more with less. In this instance, that means deploying more effective and reliable cybersecurity solutions. Detection, by contrast, involves the complex process of “preventing, detecting, and responding”, where multiple layers of security are deployed to identify malware before it hits. Yet these layers simply aren’t sufficient to protect against the volume and sophistication of the ransomware and targeted phishing attacks that are prevalent today. As you might expect, all that layering also creates tremendous expense.

While there are a few choices available that provide isolation, solutions that do it through virtualization come closest to bullet-proof. No one can promise 100% protection, but virtualization that starts on the chip, stops Meltdown, dramatically limits Spectre, and works online or offline can protect what attackers target most: endpoints.

Real solutions with a virtual defense

Isolation through virtualization works by allowing applications to open and carry out each task in its own self-contained virtual environment. Every tab opened in a browser, every Office or PDF document attached to an email, and every untrusted executable that runs is placed in an entirely isolated virtual environment enforced by the hardware itself. The result is that any threat triggered inside this environment has no access to anywhere else on the system and can be removed by simply destroying the virtual environment.
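The open-in-a-throwaway-environment, destroy-on-exit pattern can be sketched in a few lines of Python. This is a deliberately simplified stand-in for hardware-enforced micro-VMs: a real isolation layer confines CPU, memory, and I/O, not just the filesystem, and the function and filenames below are invented for the demo.

```python
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

def run_isolated(untrusted_code: str) -> None:
    """Run an untrusted task in a throwaway workspace, then destroy it.

    Toy illustration of disposable isolation: the task gets a fresh
    directory as its working area, and everything it writes there
    disappears when the workspace is torn down.
    """
    workdir = Path(tempfile.mkdtemp(prefix="disposable-"))
    try:
        # The task runs in its own process, rooted in the throwaway dir
        subprocess.run([sys.executable, "-c", untrusted_code],
                       cwd=workdir, timeout=10)
    finally:
        # "Destroying the virtual environment": the workspace and any
        # files the task dropped inside it are gone.
        shutil.rmtree(workdir, ignore_errors=True)
    assert not workdir.exists()

# Example: the "malware" drops a file; the file dies with the sandbox.
run_isolated("open('dropped_by_malware.txt', 'w').write('pwned')")
```

In a micro-VM product, the same lifecycle happens beneath the OS with hardware virtualization, so even kernel-level compromise inside the environment is discarded along with it.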

This gives users the freedom to download files and open documents safely, knowing they are no longer the last line of defense – the ability to click with confidence. End users can even let the malware run: it does no damage, and it gives IT teams detailed threat analysis. Users can get back to work; recruiters and HR teams can open emailed CVs, marketers can carry out research even if they click on a phishing link, and R&D teams can share downloaded resources without the fear of being stung by malicious files or links.

For organizations taking this new approach, there is far less to worry about. Virtualization-based security is being adopted by the giants: HP and Microsoft now use it to protect users. This is just the tip of the iceberg and marks the beginning of a virtualization revolution in security, where users no longer fear opening links and attachments, and organizations can let their teams focus on innovation without worrying about making a security mistake.

About the Author

Fraser Kyne is the EMEA CTO at Bromium, a role that has encompassed a wide range of both engineering and customer-facing activity. Prior to joining Bromium, Fraser was a Technical Specialist and Business Development Manager at Citrix Systems. He has been a speaker at various industry events on topics such as virtualization, security, desktop transformation, and cloud computing.
