As I walked across the wide expanse of the vendor hall at the annual Black Hat USA event in early August, I searched and searched for a booth that didn’t reference artificial intelligence.
I couldn’t find one.
That worries me. As cybersecurity professionals, we know that no system — not a single one — is infallibly secure. Yet, as one company after another rushes to incorporate AI-powered tools into cybersecurity products, we’re encouraged to believe that AI will solve all of the security problems that confound us.
The truth is AI also opens a whole new world of potential cybersecurity issues, particularly for industries such as financial services. However, the human factor will remain the most important element of any effective cybersecurity effort, no matter how much AI power is deployed in the battle.
The first challenge arrives
For many organizations, the first great challenge comes with the rapid adoption of Copilot, the Microsoft generative AI chatbot rolled out just over 18 months ago.
As Copilot is integrated into Microsoft 365 applications, it speeds the creation of documents and presentations, captures action items from Teams meetings, summarizes email discussions, and provides insights on spreadsheet data.
Only 12 months after Copilot was launched, Microsoft revealed that the platform had been adopted by 50,000 organizations — including more than half of the Fortune 500 — and counted about 1.3 million paid users. Larger organizations are taking the further step of building custom Copilot applications, but the pace of adoption demonstrates that small and medium-sized enterprises are embracing the AI tool as well.
It’s evident, too, that the application is being heavily used. When Copilot was launched, Microsoft cited internal research finding that 70 percent of people would like to delegate as much as possible to AI to lessen their workload. There’s no way of knowing exactly how much work is being shifted to Copilot and other AI tools, but the adoption figures suggest that work is being completed via AI at an astounding rate.
Copilot and the deep pool of data
Here’s the problem: the foundation of Microsoft 365 applications is data, and Copilot taps into that data to create content. But sensitive data — personally identifiable information about customers, for example — must remain secure. As Copilot pulls in data, it risks revealing information that should stay protected.
Copilot can also create plenty of sensitive data of its own, drawing from the deep well available to users of Microsoft 365. Those newly created documents don’t always carry the same security tags as their source files.
A marketing team, for instance, might use Copilot to help analyze recent customer survey data. Some of the customer comments in the survey files might be confidential. The analysis developed with Copilot might not carry that confidential marking; if it’s then uploaded to a company server with wide access, sensitive customer data spills out.
It’s fair to assume any new file — whether it’s created with the help of AI or not — is going to end up somewhere it doesn’t belong.
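To make that risk concrete, here is a minimal sketch of the kind of check an audit tool could run: flag any derived document whose classification is weaker than that of its most sensitive source. The classification scale, function names, and source-label manifest are illustrative assumptions, not part of Copilot or any Microsoft API.

```python
# Minimal sketch: flag derived files classified below their most sensitive
# source. The ordering and the label manifest are illustrative assumptions,
# not part of any Microsoft API.

# Ordered from least to most sensitive.
LEVELS = ["public", "internal", "confidential", "restricted"]

def rank(label: str) -> int:
    """Map a classification label to its position in the ordering."""
    return LEVELS.index(label.lower())

def weaker_than_sources(derived_label: str, source_labels: list[str]) -> bool:
    """True if the derived file is labeled below its most sensitive source."""
    return rank(derived_label) < max(rank(s) for s in source_labels)

# Example: a Copilot-drafted summary labeled "internal" that drew on a
# "confidential" survey file should be flagged for review.
if weaker_than_sources("internal", ["public", "confidential"]):
    print("Review needed: derived document under-classified.")
```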
A survey this spring by Concentric AI highlighted the risks. It looked at more than 550 million data records and found that the average organization has more than 800,000 files at risk due to oversharing. According to the study, 16 percent of an organization’s critical data is overshared, 17 percent of at-risk files are overshared with third parties and 90 percent of documents considered “business critical” are being shared outside of the executive suite.
The dark cloud: intruders
As troublesome as oversharing may be, far more serious issues will arise as intruders learn how to exploit Copilot to gain access to data and systems.
Already, cybersecurity professionals at Black Hat demonstrated numerous ways that bad actors inside an organization, as well as outsiders, could use Copilot to access internal company information, manipulate it, or steal it.
Essentially, Copilot — or any similar AI-powered assistant — is just another user on the network. Its access privileges can be exploited just like those of any other user.
Microsoft’s security focus
Microsoft, as expected, paid close attention to security in the development of this powerful AI tool. Copilot adheres to security and compliance standards, relies on encrypted data transfer, uses Microsoft’s Entra ID to authenticate access, and doesn’t allow third-party sharing by default.
This is all good. But it’s critically important to note that Copilot relies on existing permissions and policies. In their rush to meet the demands of workers and efficiency-minded executives for fast deployment of Copilot, cybersecurity professionals may not look as closely as they should at the permissions already in place in their organization’s systems. Permissions should be audited at least annually, but that’s a heavy lift for small and midsized organizations where employees are likely already wearing multiple hats.
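Even a rough script can surface the worst oversharing before an AI assistant inherits it. The sketch below is a hypothetical Python example that walks a mounted file share and flags world-readable files; an actual Microsoft 365 audit would use the admin center or Graph tooling, but the hunting logic is the same.

```python
# Minimal sketch: walk a local or mounted file share and flag files that
# any user on the system can read. A real Microsoft 365 audit would rely
# on the admin center or Graph tooling; this only illustrates the idea of
# hunting for overshared data before an AI assistant inherits access to it.
import os
import stat

def find_world_readable(root: str) -> list[str]:
    """Return paths under root whose 'other' read permission bit is set."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip files we can't inspect
            if mode & stat.S_IROTH:  # readable by everyone
                flagged.append(path)
    return flagged

if __name__ == "__main__":
    for path in find_world_readable("/srv/shared"):  # path is illustrative
        print("Overshared:", path)
```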
And the security that Microsoft has built into Copilot crumbles when users store sensitive information in far less secure locations, notably their personal OneDrive accounts, or fail to classify data appropriately under company policies.
AI security tools and more
Copilot, of course, is only the most visible of the AI-powered tools whose development and deployment will challenge the cybersecurity profession.
It’s not surprising, then, that technology entrepreneurs have identified AI-focused cybersecurity as a growth market. The long rows of vendors at Black Hat, each with its own take on improving cybersecurity in an AI environment, speak volumes about the direction the profession is taking.
Many of the cybersecurity products pouring into the market address real needs, and many provide clever, efficient solutions — often relying on AI-powered tools themselves to battle threats in the AI environment.
But AI itself won’t provide all the solutions for AI security. Cybersecurity professionals who limit themselves to deploying technological tools will be crushingly disappointed unless they simultaneously build a strong cybersecurity culture, one that instills the importance and best practices of cybersecurity across the organization.
Practical steps for staff
First, and most importantly, every person in the organization needs to understand that cybersecurity is an organization-wide responsibility. It is not the job of the cybersecurity team alone; good security depends on each individual. This message needs to be part of Day One onboarding, and it needs to be driven deep into the organization’s culture.
Second, that personal responsibility is particularly important when users rely on Copilot or other AI-powered assistants. Good cybersecurity trainers will help users understand how AI tools can tap deep into the organization’s files, and they will emphasize again and again the danger that arises when files incorporated into AI-assisted documents don’t carry the same security tags as they did in their original home.
Third, cybersecurity teams must ensure they put as many guardrails in place as possible when deploying Copilot or enabling users with add-on tools like Copilot Studio. At a minimum, those responsible for cybersecurity (whether in-house or outsourced) should be familiar with the types of exploits demonstrated at Black Hat and other security conferences, and enable as much protection as possible when rolling out these tools.
Fourth, cybersecurity leaders must remember that not everyone in the organization understands the meaning of security tags. Staff members almost certainly will understand that Social Security numbers must be kept secure. However, they may not exercise the same caution with the terms of a vendor contract or results of a customer survey. From the executive suite to the desk where the interns sit, everyone needs to be on the same page.
Fifth, if everyone is to be on the same page, cybersecurity leaders need to ensure that common nomenclature is used across the organization. Is “secure” the same as “confidential”? Remove any doubt with standard terminology.
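One low-tech way to remove that doubt is a mapping from the terms people actually use to a single canonical vocabulary, applied wherever labels are entered. The sketch below is hypothetical; the synonyms and canonical terms are invented for illustration, and each organization would define its own.

```python
# Minimal sketch: normalize the labels people actually type to one
# canonical vocabulary. The synonym list is invented for illustration;
# each organization would define its own.
CANONICAL = {
    "secure": "confidential",
    "sensitive": "confidential",
    "private": "confidential",
    "internal only": "internal",
    "company use": "internal",
    "open": "public",
}

def normalize_label(raw: str) -> str:
    """Map a free-form label to the organization's standard term."""
    key = raw.strip().lower()
    return CANONICAL.get(key, key)  # unknown labels pass through for review

assert normalize_label("Secure") == "confidential"
assert normalize_label("Internal Only") == "internal"
```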
Practical steps for cybersecurity leaders
While security is an organization-wide responsibility, good cybersecurity executives create an AI environment that allows the organization to thrive.
First, they refuse to let themselves be rushed into poorly vetted decisions about AI tools. To be sure, competitive pressures and the rapid adoption of AI tools by organizations large and small require timely decision-making. But cybersecurity executives need to understand the risks that any AI tool brings — and they all carry some risk. This understanding needs to be shared across the C-suite.
Before AI tools such as Copilot are deployed, organizations should carefully review their data classification and access policies, especially those governing sensitive data. The standard will always be “need to know” first, followed by “least privilege.” These standards erode over time, however, as more and more people argue that they need to know. Adoption of AI tools should be accompanied by a review and reset of permissions, along the lines sketched below.
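What might that reset look like in practice? One sketch: compare each user’s actual access against a least-privilege baseline for their role and flag the excess for review. The roles, users, and resource names below are invented for illustration.

```python
# Minimal sketch: flag access that exceeds a role's least-privilege
# baseline so it can be reviewed before an AI assistant inherits it.
# Roles, users, and resources here are invented for illustration.
ROLE_BASELINE = {
    "marketing": {"campaign-drive", "survey-results"},
    "finance": {"ledger", "vendor-contracts"},
}

USER_ACCESS = {
    "alice": ("marketing", {"campaign-drive", "survey-results", "ledger"}),
    "bob": ("finance", {"ledger"}),
}

def excess_access(user: str) -> set[str]:
    """Return the resources a user can reach beyond their role baseline."""
    role, actual = USER_ACCESS[user]
    return actual - ROLE_BASELINE[role]

for user in USER_ACCESS:
    extra = excess_access(user)
    if extra:
        print(f"Review {user}: access beyond role baseline -> {sorted(extra)}")
```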
Finally, cybersecurity leaders need to battle against a philosophy of “set it and forget it” when they deploy AI tools. Training programs must be ongoing. Access-control policies must be reviewed and revised regularly, especially in organizations that are growing rapidly or expanding their use of AI tools.
These are big jobs requiring heavy lifting by cybersecurity professionals. They’ll get some help from the many products that are arriving in the market. But success ultimately will depend on the ability of cybersecurity professionals to motivate, train, and support the people who are the cornerstones of any successful security effort.
About the Author
Michael Cocanower is Founder and Chief Executive Officer of AdviserCyber, a Phoenix-based cybersecurity consultancy serving Registered Investment Advisers (RIAs). A graduate of Arizona State University with degrees in finance and computer science, he has worked in the IT sector for more than 25 years. Michael, a recognized author and subject matter expert, has earned certifications as both an Investment Adviser Certified Compliance Professional® and as a Certified Ethical Hacker. He is frequently quoted in leading international publications and served for many years on the United States Board of Directors of the International Association of Microsoft Certified Partners and on that organization’s International Board. He also served on the Microsoft Infrastructure Partner Advisory Council.