
Do Highly Intelligent Language Models Pose a Cyber Threat?

By Brett Raybould, EMEA Solutions Architect, Menlo Security

It is estimated that as many as 100 million users have engaged with the AI chatbot ChatGPT since it was released at the end of November 2022. Developed by OpenAI, ChatGPT has surprised many with its ability to complete a range of tasks in a highly sophisticated and human-like way.

Since then, others have jumped on the AI bandwagon. Microsoft has already announced a new version of its Bing search engine that taps into a version of OpenAI’s cutting-edge large language model customised specifically for search, drawing on key learnings and advancements from ChatGPT. Google has also announced Bard, an experimental conversational AI service positioned as a direct rival.

The headlines are hard to ignore, but not all the feedback on these highly advanced large language models has been positive. For many, there is the worry that ChatGPT can be used as easily for negative purposes as it can for positive ones.

A professor at Wharton, the University of Pennsylvania’s business school, ran ChatGPT through an MBA exam. It didn’t just pass the test but scored an impressive B/B-. With ChatGPT able to write an advanced essay on a subject within just a few seconds, the platform could easily be used by students to cheat. Perhaps this is a sign of how different the post-AI world will be.

Cybersecurity presents another growing concern. BlackBerry reports that 71% of IT professionals believe foreign states are likely already using ChatGPT for malicious purposes, such as nation-state attacks. CyberArk has also published findings about how it was able to create polymorphic malware using ChatGPT, while Check Point Research suggests that cybercriminals have been using it to create spear-phishing emails and to share information-stealing malware code on criminal forums.

We decided to ask ChatGPT whether it could be misused by malicious actors to develop Highly Evasive Adaptive Threat (HEAT) attacks, a class of attacks capable of bypassing many traditional corporate security solutions such as firewalls, secure web gateways and phishing detection tools.

The initial response was: “Is it possible for ChatGPT or any other language model to be misused to generate malware? If a malicious actor has access to the model and trains it on malware samples, could it generate malicious code or be used to create phishing campaigns or other malicious activities? In this sense, ChatGPT-generated malware could be considered a HEAT threat, as AI-generated malware can be sophisticated, adaptive and difficult to detect.”

So, the initial answer was yes, it can be used to generate HEAT attacks. However, in the same response the chatbot clarified that the creation of malware using AI language models is not an inherent capability of the models themselves, but rather the result of malicious use.

We then asked similar questions, and it concluded that ChatGPT does not pose a HEAT threat. One reply said: “While it is possible for a threat actor to misuse ChatGPT for malicious purposes, such as generating misleading or false information, it is not capable of generating malware on its own.”

Can ChatGPT democratise cybercrime?

OpenAI has been keen to reiterate that ChatGPT is not always correct, and therefore its responses should not be relied upon as fact. However, both the bot’s initial response and the studies conducted by BlackBerry, CyberArk and Check Point Research suggest there may be cause for concern.

What we do know is that ChatGPT is built around machine learning, so the more inputs it receives, the more sophisticated and accurate it will become.

The concern is that ChatGPT could be used to democratise cybercrime. By this, we mean that would-be threat actors with little to no technical skill could learn to write credible social engineering or phishing emails, or even code evasive malware, using ChatGPT or similar tools.

We have already seen the catastrophic damage that democratised cybercrime can cause. Indeed, a major reason the volume of ransomware attacks has increased so dramatically in recent years is a booming ransomware-as-a-service industry.

Therefore, given the potential cyber threats these bots bring, it is more important than ever that organisations properly protect themselves against HEAT attacks. We recommend that organisations incorporate isolation technology into their security strategies as an effective way of combating highly evasive threats – even potential ones created by AI.

This ensures that all active content is executed in an isolated, cloud-based browser, rather than on users’ devices. In doing so, it blocks malicious payloads from reaching their target endpoints, preventing attacks from happening in the first instance.
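To make the isolation pattern concrete, below is a minimal, hypothetical sketch in Python that uses the open-source Playwright library as a stand-in for a cloud-hosted browser. It illustrates the general concept only, not Menlo Security’s implementation: the untrusted page and all of its active content execute inside a disposable remote browser, and only an inert rendering is returned to the endpoint.

    # Minimal sketch of the remote browser isolation pattern (illustrative only,
    # not any vendor's actual product). Assumes the open-source Playwright
    # library: pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    def render_isolated(url: str) -> bytes:
        """Execute a page in a disposable sandboxed browser and return
        only inert pixels, so no active content reaches the endpoint."""
        with sync_playwright() as p:
            browser = p.chromium.launch()                 # disposable cloud-side browser
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")      # scripts run here, not on the device
            pixels = page.screenshot(full_page=True)      # harmless PNG bytes
            browser.close()                               # sandbox torn down after the session
            return pixels

    # The user's device receives only the PNG; any malicious payload executed,
    # and was contained, inside the remote browser.
    png = render_isolated("https://example.com")

In a real deployment the rendering is streamed interactively rather than captured as a single screenshot, but the security property is the same: active content never executes on the endpoint.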

Highly evasive adaptive threats are already on the rise. Given the potential for threat actors to take advantage of the capabilities of language models in coordinating attacks, companies will need to enhance their security strategies as a priority.

About the Author

Brett is passionate about security and providing solutions to organisations looking to protect their most critical assets. Having worked for over 15 years for various tier 1 vendors specialising in the detection of inbound threats across web and email, as well as data loss prevention, Brett joined Menlo Security in 2016 and discovered how isolation provides a new approach to solving the problems that detection-based systems continue to struggle with.

June 26, 2023
