Why Continuous Training Must Come Before The AI-driven SDLC

By Mike Burch, Director of Application Security, Security Journey

Despite the hype, generative AI is unlikely to transform the world. But there are sectors where it could significantly disrupt the status quo. One of these is software development. Yet the time savings and productivity benefits of using tools like ChatGPT start to erode if what they’re producing is full of bugs. At best, such inadvertent errors will require extra time for developer teams to fix. At worst, they might creep into production.

If organizations want to take advantage of AI to optimize the software development lifecycle (SDLC), they must first give their teams suitable training to manage the risk of something going wrong.

One step forward, two steps back

For every step forward we take with AI, the bad guys hit back. While generative AI and large language models (LLMs) could be a productivity boon for stretched developer teams, the technology has also been seized on by those with nefarious intent. Tools like FraudGPT and WormGPT are already circulating on the cybercrime underground, offering to lower the barrier to entry for budding hackers. Their developers claim these and other tools can help write malware, create hacking tools, find vulnerabilities and craft grammatically convincing phishing emails.

But developers could also be handing threat actors an open goal, while undermining the reputation and bottom line of their employer, if they place too much faith in LLMs and lack the skills to check their output. Research backs this up. One study from Stanford University claims that participants who had access to an AI assistant were “more likely to introduce security vulnerabilities for the majority of programming tasks, yet also more likely to rate their insecure answers as secure compared to those in our control group.”

A separate University of Quebec study is similarly unequivocal, reporting that “ChatGPT frequently produces insecure code.” Only five of the 21 cases the researchers investigated produced secure code on the first attempt. Even when the AI was explicitly asked to correct the code, it did so in only seven cases. These aren’t good odds for any developer team.

There are also concerns that AI could create new risks such as “hallucination squatting.” This was recently reported when researchers asked ChatGPT to find an existing open-source library and the tool came back with three that didn’t exist. Hallucinations of this sort are common with current generative AI models. However, the researchers posited that if a hacker did the same probing, they could register a real open-source project under one of the hallucinated names, directing unwitting users to malicious code.
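As a purely illustrative sketch of one mitigation (the helper name, the age threshold and the reliance on PyPI’s public JSON metadata endpoint are assumptions, not something taken from the research above), a team could sanity-check an AI-suggested Python dependency before installing it, rejecting names that don’t exist or were only registered recently:

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone


def looks_established(package_name: str, min_age_days: int = 365) -> bool:
    """Heuristic check on an AI-suggested dependency.

    Returns True only if the package exists on PyPI and its earliest
    release is older than min_age_days. This is not proof the package
    is trustworthy, but a missing or brand-new package is a red flag
    worth investigating before running pip install.
    """
    url = f"https://pypi.org/pypi/{package_name}/json"  # PyPI's public metadata endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            data = json.load(response)
    except urllib.error.HTTPError:
        return False  # The package does not exist on PyPI at all.

    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not upload_times:
        return False  # A registered name with no releases is suspicious.

    age = datetime.now(timezone.utc) - min(upload_times)
    return age.days >= min_age_days
```

Even then, an established name is no substitute for reviewing what the dependency actually does before it goes anywhere near production.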

Start with your people

Part of the reason these researchers got buggy output back is that the input was poor. In other words, the code and data used to train the models were of poor quality in the first place. That’s because LLMs don’t generate genuinely new content; they deliver a kind of contextually appropriate mashup of the material they were trained on. It’s proof, if any were needed, that many developers produce vulnerable code.

Better training is required so that teams relying on generative AI are more capable of spotting these kinds of mistakes. Done well, it would also arm them with the knowledge needed to use AI models more effectively. As the researchers at Stanford explained: “we found that participants who invested more in the creation of their queries to the AI assistant, such as providing helper functions or adjusting the parameters, were more likely to eventually provide secure solutions.”

So, what should effective secure coding training programs look like? They need to be continuous, to keep security front of mind and to ensure developers stay equipped as the threat and technology landscapes evolve. Training should reach everyone who has a role to play in the SDLC, including QA, UX and project management teams as well as developers, but it should also be tailored to each of these groups according to the specific challenges they face. And it should focus on rewarding excellence, so that security champions emerge who can organically influence others.

Handle with care

One of the most effective ways to use AI to produce secure results is to minimize the scope of the task you give it. Just as a function becomes bloated and difficult to understand or manage when a developer packs too many responsibilities into it, an AI prompt that asks for too much produces code that is hard to evaluate. When we ask AI to help us write code, it should be for very small tasks that are easy to understand and quick to evaluate for security. Rather than asking it to add authentication to our project, we should ask it to show us a function that validates a user based on the credentials provided, then adapt that function to our project. Someday AI might be able to write code for us, but today it works much better as a reference to help us when we are stuck than as a tool that can produce secure code on its own.
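As a rough sketch of what that narrowly scoped request might yield (the function name, the PBKDF2 parameters and the assumption that the caller looks up the stored hash and salt are illustrative, not prescriptive), the kind of answer worth adapting looks something like this:

```python
import hashlib
import hmac


def validate_user(password: str, stored_hash: bytes, salt: bytes) -> bool:
    """Check the supplied password against a stored PBKDF2 hash.

    The caller is expected to look up stored_hash and salt for the
    user from its own user store; that lookup is out of scope here.
    """
    # Derive a key from the supplied password with PBKDF2-HMAC-SHA256.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    # Constant-time comparison to avoid leaking timing information.
    return hmac.compare_digest(candidate, stored_hash)
```

Because the function does one small thing, a reviewer can check it for security in minutes before wiring it into the rest of the project.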

Above all, organizations should treat AI with care. Yes, it can be a useful resource, but only if treated as the fallible coding partner it often is. For that reason, it should only be used in line with the corporate risk appetite and security policy. Faster coding isn’t better if it comes with bugs. We need to train our people before we let the machines loose.

About the Author

Mike Burch, Director of Application Security, Security Journey. Michael is a former Army Green Beret turned application security engineer. Currently, he serves as the senior enlisted Cyber Network Defender for the North Carolina National Guard. In his civilian career, he is the Director of Application Security and content team lead for Security Journey, a SaaS-based application security training platform. He leverages his security knowledge and experience as a developer to educate and challenge other developers to be a part of the security team. http://www.securityjourney.com/

January 2, 2024
