This is the second iteration of a Consortium Networks assessment on the risks posed by ChatGPT. Our first assessment, which can be found here, discusses the platform’s ability to write malicious code; that assessment still stands.
ChatGPT and generative AI have become an inescapable part of the cybersecurity conversation since January, dominating everything from individual conversations to the massive RSA conference in late April. Fears abound about how generative AI could revolutionize cybercrime and radically change the cybersecurity landscape.
Some fears are well-founded, particularly those that surfaced after a Samsung employee accidentally leaked source code to ChatGPT. But this is not a new problem; source code was leaked by accident long before ChatGPT came onto the scene. The greater issue is one that has been around far longer and is far less flashy than emerging technology: strong, comprehensive, and understandable cybersecurity policies.
Entering sensitive information into ChatGPT allows the AI to learn from it. As of now, however, ChatGPT does not update itself in real time, so one user’s inputs will not show up in another’s outputs. That is likely to change in future AI models, and it would behoove organizations to get ahead of the issue by putting policies in place now and building a culture of security.
In addition to a comprehensive acceptable use policy, a strong, properly configured data loss prevention (DLP) solution would mitigate this risk. DLP solutions can be configured to inspect the content of conversations between employees and ChatGPT to ensure sensitive information is not being transmitted. By automatically monitoring these conversations, a company can ensure that ChatGPT is being used appropriately and safely.
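To make the idea concrete, here is a minimal sketch of the kind of content inspection a DLP solution performs before a prompt leaves the organization. The pattern names and regular expressions are illustrative assumptions, not any vendor's actual rule set; real DLP products ship far more sophisticated detectors tuned to each organization's data.

```python
import re

# Illustrative detection patterns only (hypothetical examples, not a
# production rule set): a US Social Security number, an AWS access key
# ID, and a PEM private-key header.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_prompt(prompt)
```

A gateway configured this way would pass `allow_prompt("Summarize this meeting agenda")` through unchanged while blocking a prompt containing something like an SSN, logging the matched pattern name for the security team to review.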
Many companies, and even some countries, are banning ChatGPT outright until they can determine exactly what risks generative AI poses. Doing so means missing out on the benefits AI can offer a business by making employees more efficient and productive. Bans keep organizations from becoming early adopters who bring new technology in and improve their company with it, rather than sitting on the sidelines out of fear. Risks must be understood and accounted for in decision-making, but the internal risks associated with ChatGPT can be mitigated. Companies that refuse to get on board with AI, a technology that will only become more widespread and more necessary for business operations, are going to be left behind.