Key initiatives include the Open Source Software Security Roadmap (September 2023) from the Cybersecurity and Infrastructure Security Agency (CISA) and the National Telecommunications and Information Administration's (NTIA) Request for Comment (October 30, 2023).
Both efforts focus on identifying and mitigating security risks in open-source software, helping
government agencies distinguish between safe and potentially malicious components.
How defense tech developers can apply these findings to open-source AI:
• Developers should check an open-source AI component's Software Bill of Materials (SBOM) before adopting the component (a minimal screening sketch follows this list);
• Developers should integrate tools that generate SBOMs during the build process, since build-time tooling has deeper access to detailed and accurate data than post-hoc analysis of the finished artifact;
• Developers should trace and verify the provenance of the open-source AI component's dependencies. Package repositories such as npm and GitHub (for npm-based projects) offer dedicated tooling for this purpose;
• Developers should verify the package repositories' security maturity: at a minimum, whether they require multi-factor authentication (MFA) and allow security researchers to report vulnerabilities, the key criteria for Level 1 security maturity. The Principles for Package Repository Security should serve as the guiding framework when integrating open-source AI components into proprietary defense tech products.
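To make the first and third recommendations concrete, below is a minimal Python sketch that screens a CycloneDX-format SBOM for components missing the provenance signals discussed above: a pinned version, an integrity hash, supplier metadata, and download locations limited to vetted repositories. The file name, field selection, and list of vetted repositories are illustrative assumptions, not requirements drawn from the CISA or NTIA documents.

```python
"""
Minimal sketch: screen a CycloneDX-format SBOM (JSON) for components that
lack basic provenance signals. The file name, the fields checked, and the
vetted-repository list are assumptions made for illustration.
"""
import json
from urllib.parse import urlparse

# Assumption: repositories your team has already vetted.
TRUSTED_REPOSITORIES = {"registry.npmjs.org", "pypi.org", "github.com"}


def screen_sbom(path: str) -> list[dict]:
    """Return components missing a pinned version, integrity hash, or supplier."""
    with open(path, encoding="utf-8") as fh:
        sbom = json.load(fh)

    findings = []
    for component in sbom.get("components", []):
        issues = []
        if not component.get("version"):
            issues.append("no pinned version")
        if not component.get("hashes"):
            issues.append("no integrity hash")
        if not component.get("supplier"):
            issues.append("no supplier/provenance metadata")

        # Flag download locations that point outside the vetted repositories.
        for ref in component.get("externalReferences", []):
            host = urlparse(ref.get("url", "")).netloc
            if host and host not in TRUSTED_REPOSITORIES:
                issues.append(f"external reference to unvetted host: {host}")

        if issues:
            findings.append({"component": component.get("name", "<unnamed>"),
                             "issues": issues})
    return findings


if __name__ == "__main__":
    # Hypothetical SBOM file produced by a build-time generator.
    for finding in screen_sbom("model-sbom.cdx.json"):
        print(finding["component"], "->", "; ".join(finding["issues"]))
```

The sketch is deliberately conservative: anything it cannot positively verify is surfaced for manual review rather than silently accepted.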
A Change of Course with the Change of U.S. Administration?
It may seem that the new U.S. Administration is shifting from a cautious open-source AI approach toward developing unrestricted AI capabilities. A closer look, however, suggests otherwise: policy continuity has largely been preserved. The Trump Administration is doing the same as its predecessors in the White House: it has initiated a new Request for Information (RFI) to shape the U.S. AI Action Plan and is encouraging industry to submit input on AI policy ideas, and those submissions are where defense tech developers can learn the most.
One of the most noteworthy industry responses to the RFI comes from OpenAI.
In its submission, OpenAI highlights the growing security risks posed by the rise of AI developed in non-democratic states, risks it sees as compounded by recent attempts from EU regulators to limit the scale of AI model development. According to OpenAI's statement, these approaches influence U.S. AI policy and hinder innovation.
One of the key takeaways defense tech developers can draw from this response is to verify the origin of the open-source software and open foundation models they use. Some may derive, even indirectly, from Tier III countries (such as the non-democratic PRC) and introduce elevated cybersecurity risks and national security concerns, particularly in defense tech applications.
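As a rough illustration of that takeaway, the following sketch gates third-party AI components (open-source packages and open foundation models) on a declared, verified origin before they enter a build. The manifest path, its field names, and the review labels are assumptions made for this example, not part of any cited policy.

```python
"""
Hedged sketch: flag third-party AI components (open-source packages and open
foundation models) whose declared origin has not been verified before they
enter a defense tech build. The manifest format, its field names, and the
review labels are assumptions for illustration only.
"""
import json

# Assumption: your team labels each component's origin after manual review.
UNVERIFIED_LABELS = {"unknown", "unverified", "pending-review"}


def flag_unverified_origins(manifest_path: str) -> list[str]:
    """Return names of components whose origin still requires manual review."""
    with open(manifest_path, encoding="utf-8") as fh:
        # Expected shape (assumed): [{"name": "...", "type": "model" | "package",
        #                             "origin": "verified:<org>" | "unknown"}, ...]
        components = json.load(fh)

    flagged = []
    for comp in components:
        origin = str(comp.get("origin", "unknown")).lower()
        if origin in UNVERIFIED_LABELS:
            flagged.append(comp.get("name", "<unnamed>"))
    return flagged


if __name__ == "__main__":
    # Hypothetical internal manifest maintained alongside the build.
    for name in flag_unverified_origins("third_party_ai_components.json"):
        print(f"Provenance review required before integration: {name}")
```

The value lies less in the code than in the gate it represents: no open-source AI component or open foundation model reaches a proprietary defense tech product until its lineage has been traced and signed off.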