Sound cybersecurity principles for responsible generative AI innovation
By Lisa O’Connor, Managing Director—Accenture Security, Cybersecurity R&D, Accenture Labs
As companies turn to generative AI to transform business operations, traditional notions of trust are being reshaped. How will the enterprise navigate a journey toward Responsible AI—designing, developing and deploying AI in ways that engender confidence and trust?
AI brings unprecedented opportunities to businesses, but also incredible responsibility. Its direct impact on people’s lives has raised considerable questions around AI ethics, data governance, trust and legality. In fact, Accenture’s 2022 Tech Vision research found that only 35% of global consumers trust how AI is being implemented by organizations. And 77% think organizations must be held accountable for their misuse of AI.
That skepticism in understandable. Humans will have a more difficult time determining if sources of information are reliable. There are risks that large language models will be used to manipulate data in ways that will make us question the veracity of all sorts of information.
Today, threat actors are using Gen AI to write more and more effective phishing campaigns. They are getting better at leveraging collected profiles and the troves of information we share on-line on social sites to craft precision spearphishing. Attacks methods against AI and Generative AI are all over the dark web. These methods take advantage of how these models work. For example “prompt injection” attacks can cause the large language model (LLM) to unknowingly execute the malicious user’s instructions, leading to the potential exploit of backend systems or the presentation of false information to the unknowing user. Vulnerabilities and the proliferation of the knowledge on how to use them means that well-meaning initiatives without the right security may put relationships at risk and proprietary data exposed. The cybersecurity playbook must keep pace with these new realities.
But using gen AI in the organization also put trust at risk. The following are steps leading enterprises are taking to build trust that generative AI is being used responsibly:
- Build consensus about risk appetite—Accenture’s recent “State of Cybersecurity Resilience 2023” research found that 65% of so-called “cyber transformers” apply three leading practices to excel at risk management. By contrast, just 11% of the rest adopt a “best-in-class” approach. The leaders apply a cyber risk-based framework that is completely integrated into their enterprise risk management program. Their operations and executive leadership consistently agree on the priority of assets and which operations to protect. And they consider cybersecurity risk to a great extent when evaluating overall enterprise risk.
- Ditch the jargon—Provide non-technical explanations. Business leaders and the board need non-technical explanation and a common understanding to agree on governance guardrails and appreciate the risks of having actual business data compromised. Stories and what-if scenarios can help users gain a gut-level appreciation about the risks of undermining trust. Users need to appreciate that once corporate data is out in the public environment, it is not coming back.
- Promote early engagement—Pilots can shape opportunities and value propositions and should be used to provide critical feedback that should be shared with technology providers and standards organizations. Inside the organization, this critical feedback can be used to develop standardized business-ready applications and an enhanced understanding of necessary controls.
- Empower cross-organization governance and development—Avoid siloed development efforts. Legal, Risk, IT, Information Security, Marketing and HR should all be engaged in charting the gen AI journey. One enterprise we know has an “Executive Information Management Committee; another a “funnel group” for bi-weekly evaluation of use cases.
- Offer a safe “sandbox”—The rest of the business is keen to work with gen AI tools. We hear from CISOs that unsupervised “shadow” efforts are underway throughout many enterprises. To get ahead of the risks of rogue efforts, establish an environment for users to test the appropriate uses and limitations of various models, and of the data that trained the model. For example, a CISO we know is encouraging people to experiment safely by using ChatGPT to plan their next holiday.
- Identify the prerequisites for sustainable generative AI success—A strategic discussion with business leaders is required to ensure that the generative AI journey actually leads to business value. There needs to be agreement on governance and a focus on investments that create sustainable value. Priorities need to be right-sized so transformed processes are affordable. Short-term successes should not come at the cost of overlooking responsible AI principles.
- Establish a sustainable AI architecture—Get the organization ready to use large language models cost-effectively. Adopt a foundation model—pure play, open source, or cloud provider—that is fit for purpose. Determine the right approach for providing access to gen AI models. Dedicated infrastructure offers better cost predictability but adds complexity compared to managed cloud services. A modern data foundation is required to create measurable business value from proprietary data in your model. A well planned and executed security strategy can mitigate the risk of compromise that comes with generative AI products.
- Develop use cases that reinforce trust—Demonstrate to the CEO, the board and other leaders what is possible with generative AI. Also, highlight the privacy and intellectual property risks and propose criteria for evaluating the value of use cases that will inevitably be brought forward by other areas of the business. In time, generative AI could support enterprise governance and information security, protecting against fraud, improving regulatory compliance, and proactively identifying risk by drawing cross-domain connections and inferences both within and outside the organization.
- Know data sources and data lineage—Monitor network traffic and shadow models to prevent data from leaving the enterprise. Ensure that detection response is up to date. Database records, system files, configurations, user files, applications, and customer data may all be at risk of leakage in a public large language model environment. Not understanding or curating the training data set can lead to inaccuracies, misinformation, discrimination, bias, harm, lack of fairness or adversarial actions like data poisoning.
While the full impact of generative AI is evolving, one fact is clear: trust cannot be compromised. Establishing and maintaining trust means the organization must ensure the security, privacy, and the safety of individuals, communities, businesses, data and services. If stakeholders trust how businesses are using generative AI, the business strategies predicated on that technology can thrive.
About the Author
Lisa O’Connor is Accenture’s Managing Director, Global Security Research and Development, a visionary leader who understands both the opportunities and risks of emerging technologies to the business. Her role is to enable the security and resilience of the future enterprise through a program of applied research, co-innovating with the Global 2000, governments, academia and start-ups. Her applied research programs in Washington, DC and Herzliya, Israel include Generative AI, Quantum Security, Metaverse Security, Trustworthy AI, Intelligent Secure Data Mesh, Ontological Mesh and Cyber Digital Twins.
She has over 35 years of information security experience, with over 16 years in financial services, including serving as CISO at Fannie Mae. Her nine-year tenure at the National Security Agency included special assignments to the White House Communications Agency and to the Surveys and Investigations Staff of the House Appropriations Committee.
Lisa holds a Masters of Engineering Administration from George Washington University and a Bachelor of Science of Electrical Engineering from Lehigh University. She is an avid rock climber.
Lisa can be reached online on LinkedIn at https://www.linkedin.com/in/lisaoconnor/ and at Accenture’s company website https://www.accenture.com/it-it/about/leadership/lisa-oconnor