Though OpenAI released ChatGPT in November of last year, it has taken the public conversation by storm in the last month. In its own words, ChatGPT is “a state-of-the-art language generation model developed by OpenAI [which] uses deep learning techniques to generate human-like text based on the input it receives.” Some universities, including Johns Hopkins University’s School of Advanced International Studies (SAIS), are already incorporating the platform into the classroom, while others are scrambling to ensure they can spot AI-generated essays and assignment responses. Outside of academic circles, people are using ChatGPT to generate recipes, organize travel, translate text, conduct basic research, and more.
Part of that “and more,” however, is generating malware and preparing natural-sounding phishing emails through which to deploy it. So far, the degree of success in doing so has been limited at best.
One black-hat-turned-white-hat hacker, Marcus Hutchins, told Cyberscoop that it took hours to coax a functional piece of code out of ChatGPT, and that it was generally not possible to turn that code into anything usable as malware. Though the platform may, with significant human input, be able to generate code that could serve as malware, it likely does not and will not have the ability to act as a stand-in for human expertise in creating it.
ChatGPT stands to contribute to research, writing, and general question-asking, but it most likely will not be a security risk that your organization needs to worry about.