There are multiple initiatives in the USA and the European Union to regulate the use of open-source AI, ranging from ethics to data safety. However, very little attention is paid to the core: the open-source software component itself.
How does it interact with proprietary software when embedded as an open-source library?
How are the open-source AI libraries built? Can we trust the training data, or should we double-check the sources? Where is the open-source AI software hosted? Has it been published through a reputable package repository, or is it sourced from platforms with a questionable security track record?
Defense Tech Perspective
For defense tech developers, these questions are mission-critical. In today’s environment of persistent cyber threats and compromised digital infrastructure, engineers aren’t just building software; they are building real weapons to be used on a real battlefield. The Preliminary Assessment from the Center for Strategic & International Studies (CSIS) shows that 96% of U.S. civil and military codebases are built using open-source software. The picture is no different for general product development across the IT industry worldwide.
3 Starting Points for Open-Source AI in Defense Tech:
- Open-source AI is still open-source software
That means it is open in the same way open-source software is in general. Open foundation models provide public access to their architecture, allowing individuals and businesses to review, modify, and use them according to their licensing terms.
This openness fosters a community that meticulously examines model weights, training data, and inference code, which simplifies maintenance and significantly lowers costs for businesses. Open foundation models can also be customized and incorporated into proprietary solutions.
- Security & Legal Frameworks for open-source AI are still emerging
While security and legal approaches for traditional open-source software are well-established, the frameworks for open-source AI are only beginning to take shape.
And this is not about AI regulations. It stems from securing the core – the source code and its components – and the ways they can be interfered with and compromised when embedded into proprietary products. The main question every engineer who works with open foundation models should ask is, “What is inside? Is it secure?”
- Defense Tech is built using open-source software and is now beginning to integrate open-source AI
With the rising popularity of open-source AI, defense tech stands at the intersection of cutting-edge innovation, cybersecurity, and heavily regulated government procurement. While it is crucial for all components to be transparent and open to state bodies as the end users, that openness must be balanced with uncompromising safety standards. This raises critical security considerations, because safety approaches for open-source AI are more complicated to develop.
Open-Source Safety Initiatives: Applying Best Practices to Open-Source AI
Over the last few years (2023-2024), U.S. information and security agencies made multiple attempts to gather feedback from IT industry players and the open-source community on suggested open-source safety measures.
Key initiatives include the Open Source Software Security Roadmap (September 2023) from the Cybersecurity and Infrastructure Security Agency (CISA) and the Request for Comment from the National Telecommunications and Information Administration (NTIA) (October 30, 2023).
Both efforts focus on identifying and mitigating security risks in open-source software, helping government agencies distinguish between safe and potentially malicious components.
How defense tech developers can apply these findings to open-source AI:
- Developers should check an open-source AI component’s Software Bill of Materials (SBOM) before using the component (see the SBOM inspection sketch after this list);
- Developers should integrate tools that generate an SBOM during the build process, as these tools have deeper access to detailed and accurate data than tools that analyze the finished artifact;
- Developers should trace and verify the provenance of an open-source AI component’s dependencies. Package repositories like npm and GitHub (for npm-based projects) offer dedicated tooling for this purpose (see the provenance sketch below).
- Developers should verify a package repository’s safety level – at minimum, whether it requires multi-factor authentication (MFA) and allows security researchers to report vulnerabilities, the key criteria for Level 1 security maturity. The Principles for Package Repository Security should serve as the guiding framework when integrating open-source AI components into proprietary defense tech products.
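To make the SBOM recommendations concrete, here is a minimal sketch of a build-time check. It assumes the build already exports a CycloneDX-format SBOM to sbom.json; the file name and the two specific checks (missing package URL, missing license) are illustrative choices, not a mandated standard:

```python
import json
import sys

def audit_sbom(path: str) -> int:
    """Flag SBOM components that lack license or package-URL metadata."""
    with open(path, encoding="utf-8") as f:
        bom = json.load(f)

    findings = 0
    # CycloneDX JSON lists each dependency under the top-level "components" array.
    for comp in bom.get("components", []):
        name = f"{comp.get('name', '?')}@{comp.get('version', '?')}"
        if not comp.get("purl"):
            print(f"[WARN] {name}: no package URL (purl) – provenance unclear")
            findings += 1
        if not comp.get("licenses"):
            print(f"[WARN] {name}: no license declared – legal review needed")
            findings += 1
    return findings

if __name__ == "__main__":
    # Usage: python audit_sbom.py sbom.json
    sys.exit(1 if audit_sbom(sys.argv[1]) else 0)
```

Generators such as Syft or the CycloneDX tooling can emit this file during the build itself, which is exactly the build-time generation recommended above.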
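For dependency provenance in npm-based projects, the npm CLI itself offers `npm audit signatures`, which verifies registry signatures and, for packages published with provenance, their attestations. A minimal Python wrapper for a CI gate might look like this (assuming a recent npm and an installed node_modules tree; the function name and exit behavior are our illustrative choices):

```python
import subprocess

def verify_npm_provenance(project_dir: str) -> bool:
    """Gate a CI step on `npm audit signatures`, which checks registry
    signatures and provenance attestations for installed dependencies."""
    result = subprocess.run(
        ["npm", "audit", "signatures"],
        cwd=project_dir,
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    # npm exits non-zero if any signature or attestation fails to verify.
    return result.returncode == 0

if __name__ == "__main__":
    if not verify_npm_provenance("."):
        raise SystemExit("Dependency provenance check failed – do not ship.")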
A Change of Course with the Change of U.S. Administration?
It may seem that the new U.S. Administration is shifting from a cautious approach to open-source AI toward developing unlimited AI capabilities. A closer look, however, suggests otherwise: policy continuity has largely held. The Trump Administration is doing the same as its predecessors in the White House – it has initiated a new Request for Information (RFI) to shape the U.S. AI Action Plan, encouraging the industry to provide input on AI policy ideas. Those submissions are what we can learn from.
One of the most noteworthy industry responses to the RFI comes from OpenAI.
In its submission, OpenAI highlights the growing security risks posed by the rise of AI from non-democratic states, risks compounded by recent attempts from EU regulators to limit the scale of AI model development. These approaches influence U.S. AI policy and, according to OpenAI’s statement, hinder innovation.
One of the key takeaways defense tech developers can draw from this response is to verify the origin of the open-source software and the open foundation models they use. Some may derive, directly or indirectly, from Tier III countries (such as the non-democratic PRC) and introduce elevated cybersecurity risks and national security concerns, particularly in defense tech applications.
This awareness is critical because government agencies are the largest customers in the defense tech field, and they cannot acquire compromised software components.
Another recommendation is to prioritize the implementation of cybersecurity, model-weight security, and personnel security controls, which are likely to become the focus of coordinated global standards under emerging U.S. AI policy directions. A minimal weight-integrity sketch follows below.
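One baseline control for model-weight security is integrity pinning: record a cryptographic digest of a vetted weights file and refuse to load anything that does not match. The sketch below is illustrative; the file path and the pinned digest in the usage comment are hypothetical placeholders:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash the file in chunks so multi-gigabyte weight files
    never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_weights_checked(path: str, pinned_sha256: str) -> bytes:
    """Refuse to load weights whose digest differs from the vetted one."""
    actual = sha256_of(path)
    if actual != pinned_sha256:
        raise RuntimeError(
            f"Refusing to load {path}: digest {actual} "
            f"does not match the pinned value"
        )
    with open(path, "rb") as f:
        return f.read()

# Hypothetical usage: the digest would be recorded when the model was vetted.
# weights = load_weights_checked("models/foundation.bin", "e3b0c44298fc1c14...")
```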
Conclusions
Open-source software often serves as the foundation for proprietary applications, and the defense tech industry is no exception. With the rising popularity of open-source AI, defense tech engineers are now shaping international security with cutting-edge tools that remain relatively underexplored. This introduces unique challenges and risks that demand careful attention.
Key policymakers – U.S. agencies such as the Cybersecurity and Infrastructure Security Agency (CISA), the National Telecommunications and Information Administration (NTIA), and the Office of Science and Technology Policy (OSTP) – are constantly gathering industry feedback for insights on how to shape emerging open-source and open-source AI policies.
This feedback is a valuable educational resource for defense tech developers and should be prioritized by businesses during the development cycle. Integrating build tools that generate a Software Bill of Materials and tracing the provenance of open-source AI dependencies are crucial first steps toward transparency and cyber safety in defense tech applications.
About the Author
Yuliia Verhun is a technology & business lawyer from the IT industry. For over 10 years, Yuliia has been helping international startups with corporate structuring, operations, board governance, intellectual property & data protection in the EU, USA, and Middle East. As General Counsel, Yuliia led investment rounds for tech startups in the UAE and prepared Unicheck – an EdTech SaaS platform serving over a million end users – for large-scale public procurements in the U.S. and, later, for a high-value M&A. These experiences reinforced her belief that transparent and secure software architecture is a strategic asset, consistently scrutinized during due diligence and procurement processes on the international stage. Yuliia is actively engaged in research at the intersection of open-source AI and cybersecurity, with a particular focus on applications in defense technology.
Yuliia Verhun can be reached online at [email protected], https://www.linkedin.com/in/yuliia-verhun-general-counsel/ and at her company website: https://generalcounsel.verhun.com/.