
ChatGPT writes insecure code

Research by computer scientists associated with the Université du Québec in Canada has found that ChatGPT, OpenAI’s popular chatbot, is prone to generating insecure code.

“How Secure is Code Generated by ChatGPT?” is the work of Raphaël Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara. The paper concludes that ChatGPT generates code that isn’t robust, even though it can recognize the vulnerabilities in that code when asked.

“The results were worrisome,” the researchers say in the paper. “We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts.”

“In fact, when prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not. The chatbot, however, was able to provide a more secure version of the code in many cases if explicitly asked to do so.”

In the experiment, the researchers assumed the role of a novice programmer who doesn’t have security in mind. They asked ChatGPT to generate code, specifying in some cases that the code would be used in a “security-sensitive context.” What they didn’t do, however, was specifically ask the AI chatbot to create secure code or include certain security features.

ChatGPT generated 21 applications across five languages: C, C++, HTML, Java, and Python. The programs were simple, none longer than 97 lines of code.

On its first attempt, ChatGPT produced only five secure applications out of 21. When prompted to correct its mistakes, it produced secure versions of seven of the remaining 16.

The authors note that ChatGPT only creates “secure” code when a user explicitly requests it. When tasked with creating a simple FTP server for file sharing, it generated code without applying input sanitization (checking user-supplied input for harmful characters and removing or rejecting them). ChatGPT only added that safeguard after the authors prompted it to do so.
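The paper doesn’t reproduce the generated FTP code, so the sketch below is purely illustrative: a minimal Python example of the kind of path check that was missing. The `BASE_DIR` shared directory and the function names are invented for this post. The unsanitized version trusts whatever filename the client sends; the sanitized version normalizes the path and confirms it still lives inside the shared directory before serving it.

```python
import os

# Hypothetical shared directory for a file server; not taken from the paper.
BASE_DIR = os.path.realpath("/srv/ftp")

def resolve_unsanitized(requested: str) -> str:
    # The naive version trusts the client-supplied filename outright,
    # so a request for "../../etc/passwd" escapes the shared directory.
    return os.path.join(BASE_DIR, requested)

def resolve_sanitized(requested: str) -> str:
    # The sanitized version normalizes the path, then verifies the
    # result is still inside the shared directory before serving it.
    candidate = os.path.realpath(os.path.join(BASE_DIR, requested))
    if not candidate.startswith(BASE_DIR + os.sep):
        raise ValueError(f"rejected path outside the share: {requested!r}")
    return candidate

print(resolve_unsanitized("../../etc/passwd"))  # "/srv/ftp/../../etc/passwd", i.e. /etc/passwd
print(resolve_sanitized("docs/readme.txt"))     # stays inside /srv/ftp
# resolve_sanitized("../../etc/passwd")         # raises ValueError
```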

“Part of the problem seems to be that ChatGPT simply doesn’t assume an adversarial model of execution,” the authors say, explaining why the AI bot doesn’t create secure code by default. Despite this, the bot readily admits to errors in its code.

“If asked specifically on this topic, the chatbot will provide the user with a cogent explanation of why the code is potentially exploitable. However, any explanatory benefit would only be available to a user who ‘asks the right questions’, i.e. a security-conscious programmer who queries ChatGPT about security issues.”
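To make “assuming an adversarial model of execution” concrete, here is a small Python sketch. The scenario, table, and values are invented for illustration, not taken from the paper. Code written for a benign user splices input straight into a SQL query; code written with an adversary in mind passes the input as a bound parameter, so the database treats it as data rather than as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # an attacker-controlled value

# Benign-user assumption: splice the input straight into the query.
# The injected quote rewrites the WHERE clause so it matches every row.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print(rows)  # [('alice',)] — the name filter was bypassed

# Adversarial assumption: bind the input as a parameter instead,
# so the database never interprets it as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] — no user is literally named "alice' OR '1'='1"
```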

The authors also point to an ethical inconsistency: the chatbot refuses to create attack code, yet it will readily produce insecure code.

It might refuse to create attack code, but there are ways around that. Malwarebytes Security Evangelist Mark Stockley decided to try to create ransomware using ChatGPT. The AI bot refused to write malware at first, but Stockley found a way around the initial safeguards and got it to produce (admittedly quite dubious) ransomware anyway.

In an interview with The Register, one of the Université du Québec researchers said he had concerns about ChatGPT. “We have actually already seen students use this, and programmers will use this in the wild,” Khoury said. “So having a tool that generates insecure code is really dangerous. We need to make students aware that if code is generated with this type of tool, it very well might be insecure.”


