AI code security


Two recent studies examined the security of code written with AI assistants. The first was a Stanford University study, “Do Users Write More Insecure Code with AI Assistants?”, and the second was a Wuhan University study, “Exploring Security Weaknesses of Copilot Generated Code in GitHub.”

The Stanford study found the following:
  • Participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant.
  • Participants with access to an AI assistant were also more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code.
  • Participants who invested more in creating their queries for the AI assistant, such as providing helper functions or adjusting the parameters, were more likely to eventually offer secure solutions.
The Wuhan study found that almost 30% of Copilot-generated code snippets have security weaknesses.

Focusing specifically on Python, 91 of 277 snippets (33%) contained security weaknesses, and those 91 snippets held 277 separate instances of weakness, roughly three per insecure snippet on average.
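The arithmetic behind these figures is easy to check (the numbers come from the post, not recomputed from the paper's dataset):

```python
# Sanity-check the Wuhan study's Python figures as reported above.
insecure_snippets = 91     # Python snippets with at least one weakness
total_snippets = 277       # Python snippets analyzed
weakness_instances = 277   # weakness instances found across the 91 snippets

insecure_rate = insecure_snippets / total_snippets                # ~0.33
weaknesses_per_insecure = weakness_instances / insecure_snippets  # ~3.0

print(f"{insecure_rate:.1%} of snippets insecure, "
      f"{weaknesses_per_insecure:.1f} weaknesses per insecure snippet")
```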

In other words, the insecure code was VERY insecure. 
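Neither paper's snippets are reproduced here, but a hypothetical example of the kind of flaw such studies count is SQL built by string concatenation, a classic assistant-suggested pattern, versus a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure pattern: concatenation lets the quote in user_input
# rewrite the query itself (SQL injection).
insecure = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer pattern: a placeholder treats user_input as data, not SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(insecure)  # [('admin',)] -- the injection matched every row
print(safe)      # [] -- no user is literally named "alice' OR '1'='1"
```

The fix costs nothing, which is part of the studies' point: the flaws are well understood, yet assistants still reproduce the insecure form.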

