Page 283 - Cyber Defense eMagazine September 2025
These tools are being connected and integrated into organizations’ environments and bring with them
novel risks to Non-Human Identity (NHI) security, access to data, and resource entitlements. Against this
backdrop, security professionals must move beyond asking "should I allow this?" and start asking "how
do we secure this?"
Security Considerations as AI Becomes Increasingly Pervasive in the Workplace
Leveraging GenAI solutions requires connecting them to your data. Whether it is to analyze trends in
usage data, or automate and streamline your user experience, connectivity is key. And the ways to
connect models to your environment are growing as well. For example, you can now connect an AI agent
to a plethora of services your organization already uses. However, these agents don’t just analyze data;
they can take active actions and execute automations across systems. This opens up powerful new
possibilities, but also significantly increases the potential blast radius of a bad actor or a misconfiguration.
This connectivity is facilitated by creating new identities or leveraging existing ones. In both cases, there
is a risk of over-provisioning permissions or assigning roles that are poorly suited for the task, especially
if those identities aren't properly monitored or secured.
Some of the most common risks include:
- Over-permissioned access granted to AI agents that may not need broad or persistent privileges
- Unmonitored identities, often created through OAuth flows or personal accounts, that operate
without central visibility
- Shadow integrations introduced by individual teams or business units without security oversight
- Poor revocation processes for short-lived or experimental tools that retain access longer than
intended
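One way to surface the risks above in practice is to audit non-human identities against a least-privilege baseline. Below is a minimal sketch, assuming you can export an inventory of grants from your identity provider; the `Grant` structure, the scope names, and the 30-day staleness threshold are illustrative assumptions, not any specific vendor's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative baseline: scopes we consider too broad for an AI agent.
BROAD_SCOPES = {"admin", "*", "repo:write", "drive.full"}
STALE_AFTER = timedelta(days=30)

@dataclass
class Grant:
    """One OAuth grant held by a non-human identity (hypothetical shape)."""
    identity: str
    scopes: set[str]
    last_used: datetime
    approved_by_security: bool = False

def audit(grants: list[Grant], now: datetime) -> dict[str, list[str]]:
    """Flag over-permissioned, stale, and shadow (unapproved) grants."""
    findings: dict[str, list[str]] = {
        "over_permissioned": [], "stale": [], "shadow": [],
    }
    for g in grants:
        if g.scopes & BROAD_SCOPES:          # broad or persistent privileges
            findings["over_permissioned"].append(g.identity)
        if now - g.last_used > STALE_AFTER:  # likely an abandoned experiment
            findings["stale"].append(g.identity)
        if not g.approved_by_security:       # shadow integration
            findings["shadow"].append(g.identity)
    return findings
```

A report like this won't replace proper IGA tooling, but even a periodic script makes unmonitored identities and forgotten experimental tools visible enough to revoke.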
This problem is further compounded by how these integrations are introduced. Developers, eager to
explore and implement the latest technologies, may hastily onboard AI tools without fully considering the
scope of access being granted. In other cases, non-developers might connect AI services using their own
credentials, creating unmanaged backdoors into sensitive environments.
As AI becomes more embedded in everyday workflows, security professionals must ensure that the
convenience of integration does not come at the cost of visibility and control.
Treat AI Agents as High-Risk Actors
AI agents are being granted access to internal systems, from CRMs and cloud infrastructure to financial
tools and code repositories, and these workloads often operate with elevated privileges, yet there's
minimal monitoring of what they're doing with that access. And unlike traditional analytics tools, AI agents
can take action: opening tickets, modifying configurations, updating records, or triggering workflows. This
level of autonomy makes them powerful but also risky.
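Treating agents as high-risk actors can mean routing every agent-initiated action through a default-deny policy gate that logs the attempt before executing it. A minimal sketch follows; the action names and the policy table are illustrative assumptions, not a standard API:

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

# Illustrative policy: which actions an agent may take autonomously.
POLICY = {
    "open_ticket": "allow",
    "update_record": "allow",
    "modify_config": "require_approval",
    "trigger_deploy": "deny",
}

def gate(action: str, execute: Callable[[], str]) -> str:
    """Log every attempt, then allow, queue for approval, or block it."""
    decision = POLICY.get(action, "deny")  # unknown actions are denied
    log.info("agent action=%s decision=%s", action, decision)
    if decision == "allow":
        return execute()
    if decision == "require_approval":
        return f"{action}: queued for human approval"
    return f"{action}: blocked by policy"
```

The design choice that matters here is default-deny: an action absent from the policy is blocked rather than silently permitted, which keeps the blast radius bounded as teams wire new agents into new systems.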
Copyright © 2025, Cyber Defense Magazine. All rights reserved worldwide.