Many government agencies operate under restrictions that limit their use of cloud technology for software development. Because most cutting-edge AI solutions are cloud-based, these restrictions also constrain agencies' ability to realize AI's transformative potential. The risks of external data processing and limited control over AI environments demand a more secure approach.
Simply avoiding AI is not an option. Agencies need to integrate AI into software development to enable efficient software modernization. But how can they take advantage of powerful AI tools to enhance productivity, strengthen security, and drive innovation, without exposing themselves to the risks entailed by cloud-based AI solutions?
Self-hosted AI models offer a strategic solution. By deploying and managing large language models (LLMs) and other advanced AI capabilities within their own secure infrastructure, whether on-premises data centers or private cloud environments, agencies gain the control needed to leverage AI while maintaining strict compliance standards and advancing mission-critical applications.
Key Benefits of a Self-Hosted AI Strategy
I’ve spent many years working with federal agency tech leaders, so I know that a statement like “let’s just host it ourselves” might raise some eyebrows. It’s not always straightforward, especially with a technology as new as AI. But there are signs that federal agencies and defense organizations are ready for a different way.
For example, the Pentagon is actively working on a “fast pass” approach to securing software components, aiming to onboard approved software more quickly by using existing standards such as Software Bill of Materials (SBOM), the NIST Secure Software Development Framework (SSDF), and other common attestation methods and risk assessments.
Meanwhile, the House Oversight and Government Reform Committee has been exploring ways to use IT modernization to make the government more efficient. And there’s a broad groundswell of interest in finding ways to leverage AI in government.
To name just a few more examples from the U.S. military:
- The Defense Information Systems Agency is working on a new data strategy that integrates data, analytics, and AI into all aspects of defense operations via a secure, self-hosted platform.
- The Army is building CamoGPT, a self-hosted AI tool to assist with predictive maintenance, analysis of adversaries' communications, logistics optimization, and analysis of proposed courses of action.
- The Air Force has launched NIPRGPT, a self-hosted generative AI assistant, and the Air Force Research Lab is developing the Air and Space Force Cognitive Engine, a flexible, open-source platform for operationalizing AI across the Air Force.
There are several clear benefits to government organizations hosting LLMs within their own secure infrastructure:
- Data Sovereignty: When handling sensitive national security information, the risks associated with external data processing and limited control over AI environments demand a more secure approach—one that keeps critical data within protected boundaries. Self-hosted environments keep that data inside infrastructure the agency itself controls.
- Compliance Alignment: Federal agencies operate under complex regulatory frameworks, including the Federal Risk and Authorization Management Program (FedRAMP), the International Traffic in Arms Regulations (ITAR), the Federal Information Security Modernization Act (FISMA), and agency-specific mandates. Self-hosted environments provide the granular control needed to implement the specific security controls, audit trails, and governance frameworks that meet these strict requirements.
- Enhanced Security Posture: Self-hosted models significantly reduce potential attack vectors by removing dependencies on external APIs and third-party infrastructure. Agencies maintain complete control over access management, network segmentation, and vulnerability patching within their AI systems.
- Mission-Specific Customization: Unlike pre-configured cloud solutions, agencies can select supported AI models and tune them with specialized datasets to align with their unique use cases and environments. This enables more effective, purpose-built AI solutions that directly support mission objectives—whether enhancing intelligence analysis, optimizing resources, or strengthening cybersecurity. This customization extends to integration with legacy systems, a common challenge in the public sector.
- Predictable Resource Management: While initial setup requires investment in infrastructure and expertise, self-hosted AI models can provide more predictable long-term cost structures than variable subscription-based cloud models. This approach offers greater flexibility for large-scale deployments and leverages existing infrastructure and personnel. Additionally, self-hosted AI can provide a secure environment for modernizing legacy systems while keeping sensitive code under direct oversight.
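In practice, "keeping critical data within protected boundaries" often means standing up an inference server inside the agency's own network and pointing applications at it rather than at a public API. The sketch below is a minimal, hypothetical Python example: the endpoint, model name, and OpenAI-compatible request shape are assumptions about a typical self-hosted deployment (open-source servers such as vLLM and Ollama expose similar interfaces), not a description of any specific agency system.

```python
import json
import urllib.request

# Hypothetical endpoint for a self-hosted, OpenAI-compatible inference
# server running entirely inside the agency's own network boundary.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "local-llm") -> dict:
    """Build a chat-completion payload for the local server.

    The model name is a placeholder; in a real deployment it would be
    whatever model the agency has approved and loaded on its own hardware.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def query_local_model(prompt: str) -> str:
    """POST the prompt to the self-hosted server and return its reply.

    Because LOCAL_ENDPOINT resolves inside the agency network, the
    prompt and response never transit a third-party service.
    """
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Swapping a public AI API for a pattern like this is largely a configuration change for application developers, which is part of why the self-hosted approach can coexist with existing development workflows.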
Fostering Innovation Within a Trusted Framework
Deploying AI in a secure, self-hosted environment doesn’t restrict innovation—it nurtures it within a foundation of trust and control. Agencies can use open-source AI advances while maintaining security, compliance, and performance standards. This flexibility empowers government developers and data scientists to build next-generation critical applications with security and compliance as foundational principles rather than afterthoughts.
It’s clear from the examples I cited above that the U.S. government, and the Department of Defense in particular, is serious about embracing the potential of AI for making its work more effective, efficient, and innovative. This movement is already well underway.
For federal agencies, integrating self-hosted AI models into software development workflows is essential for navigating the intricate web of security regulations while fostering innovation. Self-hosting enables AI to reach its full potential throughout the software development lifecycle. That, in turn, enhances operational effectiveness, fortifies security, and accelerates the creation of more intelligent applications to safeguard national interests in an increasingly complex digital environment.
A secure, technologically advanced future for the federal government depends on its ability to innovate with AI while upholding strict regulations and maintaining complete control over sensitive data. Self-hosted AI models are the way to do just that.
About the Author
Joel Krooswyk is the Federal CTO at GitLab and has been actively involved in GitLab’s growth since 2017. His 25 years of leadership experience span not only the U.S. public sector but also small, mid-market, and enterprise businesses globally. Joel combines deep government policy expertise with broad experience in technology, software development, AI, and cybersecurity, and is frequently called upon by industry and agencies alike for policy commentary and response.
Follow Joel Krooswyk on LinkedIn https://www.linkedin.com/in/joelrkrooswyk/ and learn more about GitLab at https://about.gitlab.com/solutions/public-sector/.