Building AI on a Foundation of Open Source Requires a Fundamentally New Approach to Application Security

By Nadav Czerninski, Co-founder and CEO, Oligo Security

AI has sprung from the pages of science fiction into our daily lives.

The AI revolution is now accelerating, enabled by open-source software (OSS) models. These models are complex packages of openly available code and pre-trained components built specifically for AI development, allowing organizations to deploy AI efficiently and at scale.

While most organizations ensure that any given line of standard open-source code is checked for vulnerabilities, the larger open-source models they deploy often escape the same scrutiny.

A series of recently discovered vulnerabilities highlights how supply chain attacks can be executed through malicious OSS models. This discovery raises concerns regarding the fragility of open-source models and the security of AI systems overall, emphasizing the critical need for stringent OSS security measures amid AI’s rapidly increasing popularity.

AI Models and the OSS They Rely On

OSS models are the bedrock of the AI revolution. Why? Most organizations today don’t build their own AI models from scratch – they increasingly rely on open-source models as foundational components in their AI initiatives. This approach expedites development, allowing rapid deployment and customization to suit specific needs.

The tradeoff for this convenience and efficiency, however, is security.

OSS packages are widely used – and attackers know it. They also know that organizations struggle to scrutinize potential vulnerabilities in code written by outside developers. The result: OSS models often introduce vulnerabilities that malicious actors can easily exploit to compromise sensitive data, decision-making processes, and overall system integrity.

AI research and solutions are still in the “move fast and break things” phase, so it’s no surprise that security protocols for OSS models are still in their infancy. This is why cyber incidents like the recent Hugging Face vulnerability – in which more than 1,500 API tokens integrated with large language models (LLMs), used by over 700 companies including Google and Microsoft, were compromised – are increasing.
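
Catching incidents like this early often comes down to basic hygiene: making sure API tokens never land in source code or configuration files in the first place. The snippet below is a minimal sketch of such a check, assuming tokens follow the common "hf_" prefix convention; the file types scanned and the pattern itself are illustrative assumptions, not an official specification.

```python
import re
from pathlib import Path

# Assumption for illustration: Hugging Face user access tokens commonly begin with "hf_".
TOKEN_PATTERN = re.compile(r"hf_[A-Za-z0-9]{20,}")

# File types worth scanning; purely a demonstration choice.
SCANNED_SUFFIXES = {".py", ".ipynb", ".json", ".yaml", ".yml", ".env", ".txt"}

def scan_for_exposed_tokens(root: str) -> list[tuple[str, str]]:
    """Walk a source tree and report strings that look like hard-coded API tokens."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in SCANNED_SUFFIXES:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in TOKEN_PATTERN.finditer(text):
            # Redact most of the token so the report itself does not leak it.
            findings.append((str(path), match.group()[:10] + "..."))
    return findings

if __name__ == "__main__":
    for file_path, token_prefix in scan_for_exposed_tokens("."):
        print(f"Possible exposed token in {file_path}: {token_prefix}")
```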

Ramifications of Not Securing OSS Models

Hacking into AI infrastructure gives bad actors the keys to the kingdom. Attackers can view and steal sensitive data, compromising user privacy on an unprecedented scale. They can also access, pilfer, and even delete AI models, compromising an organization’s intellectual property. Perhaps most alarmingly, they can alter the results produced by an AI model.

Imagine a scenario where an individual with a persistent headache asks an AI chatbot or LLM for basic headache tips. Instead of suggesting rest or Advil, the corrupted AI advises the person to combine two over-the-counter medications in a way that, unbeknownst to the headache sufferer, can result in toxic or fatal side effects.

Alternatively, consider an autonomous car that relies on AI for its navigation. If its AI system has been tampered with, it might misinterpret a red traffic light as green, triggering a collision. Similarly, if an auto manufacturing facility’s AI quality control system is tampered with, it might falsely validate bad welds, risking driver safety and recalls.

Even in seemingly benign situations, compromised AI can be dangerous. Imagine that a user prompts an AI assistant to suggest travel tips for Italy. A compromised LLM could embed a malicious URL within its response, directing the user to a phishing site laden with malware – all disguised as a helpful travel resource. While the would-be traveler might never realize they’ve triggered an exploit, the attacker can now use that foothold to infect more machines on the network.

Proactive OSS Model Security

As the risks of compromised OSS models grow, organizations must adopt a proactive stance towards fortifying their OSS model security. This calls for a multi-faceted approach that goes beyond mere reactive measures, which only come into play in the wake of a security breach.

Continuous monitoring and real-time threat detection mechanisms are key. Organizations should seek out advanced monitoring tools capable of identifying anomalies, unusual behaviors, or potential threats to open-source models in real time. AI-driven systems – fighting fire with fire – can be most effective in such cases.
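
As a rough illustration of what real-time anomaly detection can look like at its simplest, the sketch below flags metric values (for example, requests per minute to a model-serving endpoint) that deviate sharply from a rolling baseline. The class name, window size, and threshold are hypothetical choices for demonstration; production monitoring would draw on far richer signals such as process behavior, library loads, and network activity.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyMonitor:
    """Toy real-time monitor: flags values that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling window of recent observations
        self.threshold = threshold           # standard deviations that count as anomalous

    def observe(self, value: float) -> bool:
        """Record a new observation and return True if it looks anomalous."""
        is_anomaly = False
        if len(self.history) >= 10:  # need a minimal baseline before judging
            baseline, spread = mean(self.history), stdev(self.history)
            if spread > 0 and abs(value - baseline) / spread > self.threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

# Example: feed per-minute request counts for a model-serving endpoint.
monitor = AnomalyMonitor()
for count in [100, 98, 103, 101, 97, 99, 102, 100, 98, 101, 950]:
    if monitor.observe(count):
        print(f"Unusual traffic level detected: {count} requests/minute")
```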

Additionally, organizations should prioritize robust authentication protocols, encryption methods, and access controls to fortify the integrity of their AI infrastructure. Regular security audits, vulnerability assessments, and code reviews specifically tailored to open-source models will help identify and address potential weaknesses before they are exploited.
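
One concrete, low-cost control in that spirit is verifying the integrity of every open-source model artifact before it is loaded. The sketch below assumes the publisher provides a known-good SHA-256 digest; the file name and digest shown are placeholders.

```python
import hashlib
from pathlib import Path

def verify_model_checksum(model_path: str, expected_sha256: str) -> bool:
    """Compare a downloaded model artifact against a known-good SHA-256 digest."""
    digest = hashlib.sha256()
    with Path(model_path).open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):  # stream in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Hypothetical usage: the file name and digest below are placeholders, not real values.
if not verify_model_checksum("model.safetensors", "<publisher-provided sha256 digest>"):
    raise RuntimeError("Model artifact failed integrity check; refusing to load it.")
```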

Finally, fostering a culture of organization-wide security awareness and proactive response within teams ensures that swift actions can be taken to mitigate emerging risks.

By integrating proactive security solutions that prevent, detect, and respond to threats in real time, organizations can enhance the cyber-resilience of their OSS model infrastructure and ensure that their data – and customers – stay protected from the dark side of the AI revolution.

About the Author

Nadav Czerninski is the CEO and Co-founder of Oligo Security. With an extensive background as a senior officer in IDF Cyber and Intelligence units, Nadav has propelled Oligo to the forefront of runtime application security.

Nadav can be reached online at https://www.linkedin.com/in/nadav-czerninski/ and at our company website https://www.oligo.security/
