ChatGPT, an advanced chatbot developed by OpenAI, has attracted enormous interest since its debut in November 2022. Widely regarded as the most advanced chatbot in the world, it can perform a remarkable range of tasks in a sophisticated, human-like manner, from answering questions and writing emails to generating code, and its natural language processing capabilities have generated a significant amount of buzz. According to OpenAI CEO Sam Altman, ChatGPT passed a million users within just five days of its launch; it now has over 100 million users. Tesla and SpaceX chief Elon Musk put it bluntly:

“ChatGPT is scary good. We are not far from dangerously strong AI.”


Despite its impressive capabilities, not all feedback on ChatGPT has been glowing; many have given it a lukewarm reception. Critics worry that it could undermine education: a chatbot that can produce an entire essay in seconds makes it far easier for students to cheat. The more immediate concern, however, is whether advanced AI models like ChatGPT pose a cyber threat. According to research by BlackBerry, the Canadian software and cybersecurity company, the chatbot may already have been used in nation-state cyber-attacks: in a survey of 1,500 IT professionals, 71% believed foreign states are likely using the technology for malicious purposes against other nations. Worse still, the information security firm CyberArk reported that it was able to create polymorphic malware using ChatGPT. Check Point Research, meanwhile, found cybercriminals already using the platform for malicious ends, from crafting convincing spear-phishing emails to sharing information-stealing malware code, developed with the chatbot, on criminal forums.


ChatGPT was asked whether it could potentially be exploited by threat actors seeking to create highly evasive adaptive threat (HEAT) attacks. These types of attacks represent a novel and swiftly expanding category of attack techniques that can bypass numerous layers of security software, including firewalls, secure web gateways, and phishing detection tools. This is how the AI responded:

“It is possible for ChatGPT or any other language model to be misused to generate malware. If a malicious actor has access to the model and trains it on malware samples, it could generate malicious code or be used to create phishing campaigns or other nefarious activities. In this sense, ChatGPT generated malware could be considered a HEAT threat, as AI-generated malware can be sophisticated, adaptive and difficult to detect.”

Two other responses went as follows:

“ChatGPT itself is not capable of generating malware. It is simply a language model developed by OpenAI that generates text based on the input it receives. Misusing ChatGPT, such as using it to generate phishing scams or spreading false information, can be unethical, but it wouldn’t result in the creation of malware.”


“While it is possible for a threat actor to misuse ChatGPT for malicious purposes, such as generating misleading or false information, it is not capable of generating malware on its own.”

The long and the short of it is that the chatbot doesn’t seem to think it poses a HEAT threat. However, it didn’t discount being an accomplice to phishing scams.


ChatGPT explicitly acknowledges that its responses may not always be accurate and that it should not be relied upon as a source of advice, given that it can only respond based on the information it has been trained on. As for the cyber threat ChatGPT might pose, the concern stems from its reliance on machine learning: the more data such models are trained on, the more capable their outputs become.


This is the most concerning aspect of the platform. In fact, research has shown that some of those who have already used it for malicious purposes had limited-to-zero cybersecurity skills, raising the alarm that ChatGPT could democratize the creation of evasive malware and of more sophisticated social engineering and phishing emails. The impact of democratized cybercrime has already been seen in the rise of ransomware-as-a-service. It is therefore critical for organizations to take steps to protect themselves from this possibility. ChatGPT has even recommended the following precautions:

“To minimize the risk of ChatGPT-generated malware, it is essential to follow sound security practices, such as access controls, monitoring, and secure data storage.”


One recommendation is that SMEs incorporate isolation technology into their cybersecurity strategies. Isolation ensures that all active content is executed in a cloud-based, isolated browser rather than on the user’s device. It shifts the focus away from post-compromise detection and towards prevention, working to ensure that malicious payloads never reach the targeted endpoint. With HEAT attacks becoming increasingly prevalent, and with further abuse of intelligent systems such as ChatGPT likely, organizations must update their defenses to guard against sophisticated, modern attack methods.
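To make the idea concrete, here is a deliberately toy sketch of the "active content never reaches the endpoint" principle. Real isolation platforms execute the entire page in a remote browser and stream only safe rendering output to the user; the hypothetical `sanitize_page` helper below merely strips script-like elements and event-handler attributes from HTML server-side, which illustrates the prevention-over-detection mindset but is nowhere near a production technique.

```python
from html.parser import HTMLParser

# Tags treated as "active content" in this simplified model.
ACTIVE_TAGS = {"script", "iframe", "object", "embed"}

class ActiveContentStripper(HTMLParser):
    """Rebuilds HTML while dropping script-like tags and on* event handlers."""

    def __init__(self):
        super().__init__()
        self.out = []
        self._skip_depth = 0  # > 0 while inside an active tag

    def handle_starttag(self, tag, attrs):
        if tag in ACTIVE_TAGS:
            self._skip_depth += 1
            return
        if self._skip_depth:
            return
        # Drop inline event handlers (onclick, onload, ...) and javascript: URLs.
        safe = [
            (name, value) for name, value in attrs
            if not name.startswith("on")
            and not (value or "").lstrip().lower().startswith("javascript:")
        ]
        attr_text = "".join(f' {name}="{value}"' for name, value in safe)
        self.out.append(f"<{tag}{attr_text}>")

    def handle_endtag(self, tag):
        if tag in ACTIVE_TAGS:
            if self._skip_depth:
                self._skip_depth -= 1
            return
        if not self._skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip_depth:
            self.out.append(data)

def sanitize_page(html: str) -> str:
    """Return the page with active content removed before it reaches the endpoint."""
    stripper = ActiveContentStripper()
    stripper.feed(html)
    return "".join(stripper.out)

page = '<p onclick="steal()">Hello</p><script>steal()</script>'
print(sanitize_page(page))  # prints: <p>Hello</p>
```

The design point is that the filtering happens away from the user's device, so even a payload that would evade signature-based detection simply never executes on the endpoint.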


If you need help implementing isolation technology in your cybersecurity strategy, look no further. I have 20+ years of experience in professional business IT support, specializing in cybersecurity and risk mitigation for SMEs. So please don’t wait until it’s too late. Contact me today and let’s get your systems and data preventively isolated.
