Navigating Secure Adoption of AI Across Government and Connected Infrastructure

By Gaurav (G.P.) Pal, Founder and CEO, stackArmor

This year, artificial intelligence has exploded in popularity across all sectors and industries, including the federal government. AI is being used by agencies of all sizes to streamline tedious tasks and processes and create more efficient workflows. This emerging technology has vast potential to help agencies better serve their missions and the public.

Amid all the hype, OMB recently released a draft memo detailing 10 requirements for implementing artificial intelligence across the federal government. The memo arrived shortly before the Biden Administration previewed its expected federal executive order on AI. Among the requirements OMB listed is a recommendation that agencies convene an AI governance board to provide much-needed oversight and guidance around this emerging technology.

As the government continues to experiment with AI implementation and waits for official standards, leaders need to ensure the technology is governed and used safely in ways that enhance existing practices across public sector systems.

The Role of Partnership and Collaboration

It can be overwhelming for agencies trying to get their arms around AI governance. This technology is rapidly evolving and is being implemented faster than the government has been able to push out legislation to set up guardrails. Like the cloud adoption process across government a few years back, AI adoption is more than just an IT transformation. Leaders have a lot of questions and are trying to grasp the scope of how it can be used and what the potential drawbacks are.

Agencies in the early phases of AI adoption should consult cross-sector experts for counsel on where to begin and what the pitfalls might be. In practice, that means bringing together policy, acquisition, oversight, and workforce stakeholders, mission program owners, and industry leaders to get the full picture and make the process a little less daunting.

Partnering with industry organizations that have established centers of excellence or advisory committees gives leaders access to specialized advice and assistance on their AI journeys. A targeted focus and clear cross-agency communication standards are essential to bring federal agencies together and ensure the safe adoption of AI.

The government leaders driving AI adoption are senior, experienced executives. They are looking for an actionable framework they can implement quickly. Federal CIOs and CISOs are eagerly awaiting a solution that allows them to responsibly deploy AI workloads in support of their agencies' missions.

There are some helpful resources developed by industry experts and academics seeking to lead the way in AI security, such as:

  • The NIST AI Risk Management Framework, voluntary guidance for organizations looking to incorporate trustworthiness into the design, development, use, and evaluation of AI products.
  • The MITRE ATLAS project, which catalogs adversary tactics and techniques against AI/ML systems and helps develop security practices for AI/ML and related open-source projects, giving defenders a clearer picture of the threat landscape.
  • OWASP’s published top 10 security considerations for large language models (LLMs) and generative AI use cases, as well as an AI security and privacy guide.
  • IEEE's ambassador program for AI systems, along with its work on standards and tools for protecting against AI bias.

Additionally, while governing bodies like Congress, the White House, and OMB, as well as agencies like DHS and NIST, work on developing standards, safe and secure AI adoption can be accelerated by extending existing governance models such as FISMA and FedRAMP ATOs (Authorities to Operate) with AI-specific control overlays; a simple sketch of this idea follows below. Agencies can also look to industry and academic resources for guidance to ensure AI is implemented safely, with security top of mind.
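To make the overlay idea concrete, here is a minimal, hypothetical Python sketch of layering AI-specific controls on top of an existing authorization baseline. The control IDs, descriptions, and AI-specific additions are illustrative assumptions only; real overlays would typically be authored in NIST's OSCAL format or agency tooling rather than code like this.

# Minimal, hypothetical sketch of an "AI overlay" on an existing control baseline.
# Control IDs and AI-specific additions are illustrative only.

# Existing FedRAMP-style baseline: a mapping of control ID -> description.
baseline = {
    "RA-5": "Vulnerability monitoring and scanning",
    "SI-4": "System monitoring",
    "CA-6": "Authorization to operate (ATO)",
}

# AI-specific overlay: supplemental controls layered on top of the baseline,
# loosely inspired by NIST AI RMF functions (Govern, Map, Measure, Manage).
ai_overlay = {
    "AI-GOV-1": "Establish an AI governance board and review process",      # hypothetical ID
    "AI-MAP-1": "Inventory AI models, training data, and intended uses",    # hypothetical ID
    "AI-MSR-1": "Test models for bias, drift, and adversarial robustness",  # hypothetical ID
}

def apply_overlay(base: dict, overlay: dict) -> dict:
    """Return a tailored baseline: the original controls plus the AI overlay."""
    tailored = dict(base)
    tailored.update(overlay)
    return tailored

if __name__ == "__main__":
    for control_id, description in sorted(apply_overlay(baseline, ai_overlay).items()):
        print(f"{control_id}: {description}")

The point is simply that an overlay adds and tailors controls on top of an existing authorization model rather than rebuilding it, which is why extending FISMA and FedRAMP is faster than standing up a new compliance regime for AI.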

Considerations for Upcoming AI Legislation and Government Guidance

The upcoming OMB guidance and federal AI executive order will be laser-focused on enabling the safe and secure adoption of AI. The guidance should direct strong safety and security requirements while avoiding burdensome and duplicative compliance obligations. Expanding successful cybersecurity programs such as FedRAMP to cover AI systems should be considered so that safe and secure AI and cloud technologies can be rapidly adopted by agencies.

Mandatory security and safety requirements, backed by strong regulatory oversight, should be the top priority for legislators and government leaders implementing AI across agencies and connected infrastructure.

About the Author

Gaurav Pal (G.P.) is a Senior Technology Executive with over 20 years of information systems modernization and implementation experience. He is the CEO and co-founder of stackArmor, a security focused cloud solutions firm. G.P. also contributed to Federal cloud initiatives including U.S. Treasury's Public Cloud Webhosting Solutions, Department of the Interior Foundation Cloud Hosting Services, and Recovery.gov 2.0. G.P. can be reached online at https://stackarmor.com/contact-us/ and at our company website https://stackarmor.com/

January 29, 2024
