Winter last year got off to a very bright start indeed. On 30 November, OpenAI, the AI research company co-founded by Elon Musk, launched ChatGPT. The chatbot, powered by GPT-3.5, gained 1,000,000 users within just five days. This versatile generative text tool can produce a wide variety of outputs, including emails, essays, code and, provided the user knows how to frame the request, even phishing emails. To put this in perspective, compare how long it took other platforms to reach the 1,000,000-user mark.

  • Twitter – 2 years
  • Facebook – 10 months
  • Dropbox – 7 months
  • Spotify – 5 months
  • Instagram – 6 weeks
  • Pokémon Go – 10 hours

Even so, it is remarkable that ChatGPT achieved this without any established brand recognition. Today, the chatbot has over 100 million users.


At present, there are several compelling reasons to feel uneasy about ChatGPT. This innovative chatbot can produce essays of superior quality to those of the typical high school or college student, write and debug code, and create artwork or e-books for online sale. Moreover, ChatGPT can explain complex topics like quantum physics to young children, generate poetry, design personalised meal plans, and even score impressively on standardised tests. With such a vast range of functions, the potential for ChatGPT to significantly impact the world of business and beyond is immense.


Proof already exists that nefarious individuals, including those without any programming experience, are leveraging ChatGPT to craft harmful software. Sergey Shykevich, threat intelligence group manager at Israel’s Check Point Software Technologies, said:

“It allows people with zero coding and development knowledge to be a developer.”

Shykevich has also been monitoring the chatter on the dark web. As early as 5 December 2022, posts detailing how to use ChatGPT for programming began surfacing on a Russian tech blog. Two days later, 2Chan, a Russian forum akin to 4Chan, the anonymous English-language imageboard website, hosted discussions on how to circumvent OpenAI’s geo-blocking strategies.


Amidst the considerable buzz surrounding ChatGPT and its capabilities, enterprise security professionals are particularly apprehensive about how this technology and its competitors can empower hackers to significantly enhance both the quality and quantity of their code and written content. While ChatGPT’s coding proficiency is also deeply concerning, focusing solely on its text generation capabilities reveals its remarkable potential for threat actors. This is a worrisome prospect for security professionals who must grapple with the growing challenges posed by increasingly sophisticated cyber threats and cyberattacks.


At present, ChatGPT has reached a remarkable level of sophistication, capable of generating emails that are virtually indistinguishable from those authored by humans, in any conceivable writing style. Its extensive text generation abilities extend to crafting content for various platforms, including social media posts, YouTube video scripts, website copy, press releases, reviews, and more, rendering it an indispensable tool for attackers seeking to create false online identities or manipulate genuine ones.


In the realm of phishing, bad actors can leverage ChatGPT and similar technologies to create highly convincing individualised emails. As open-source variants of these tools become more widely available, more skilled attackers with access to compromised email accounts can train their AIs on a company’s stolen communication archives. Utilising automation and scripting, they can generate an infinite number of highly personalised communications at scale, while allowing the AI to learn in real-time which approaches yield the best results. This presents a significant challenge for organisations attempting to safeguard their digital assets from sophisticated cyberattacks.


Directly soliciting ChatGPT for ideas on crafting a phishing email will prompt a warning message indicating that the requested topic is inappropriate or unethical. However, if a bad actor were to ask for suggestions related to marketing or to compose an email that invites individuals to review a new human resources webpage or solicit feedback on a document ahead of a meeting, ChatGPT would be more than happy to help.


Most phishing campaigns are launched from Eastern Europe, including Russia. Even though the chatbot is banned there, it is accessible in the country via proxies and VPN services. Usually, those responsible for phishing campaigns would employ English students from nearby universities to draft their phishing emails, which could slow down the workflow and incur additional expenses. With ChatGPT, many of these issues are mitigated. Shykevich explains:

“The most worrying thing is the fast adoption of ChatGPT from Eastern Europe. Their English level is not very high. Now they can use ChatGPT. This will make it much easier for hackers.”

According to experts, the phishing emails generated by ChatGPT surpass the quality of the majority of emails currently being created by hackers. As a result, we can anticipate a significant increase in phishing emails that lack the obvious grammar and punctuation errors that were previously indicative of fraudulent messages. Shykevich elaborates:

“Attackers will also be able to use it for business email compromise (BEC) or for hijacking ongoing conversations. Just give it an input of current emails and ask it for what the next email should be. Either this has already happened and we just don’t see it, or it will come shortly.”


From a human standpoint, what this really means is that we need to drastically change our expectations of what people can do. We cannot rely on humans to figure out whether something is real or not. Especially with advanced AI in the mix, people will often be tricked into thinking that content is genuine when it isn’t. Technology is improving every day, and humans are never going to outpace it; there will be no version 2.0 of humans. Chester Wisniewski, a principal research scientist at Sophos, says we need to find a way to identify whether content is AI-generated. He says:

“That’s where it’s going to be interesting. There are quite a few experiments out there being done by all different groups. The most interesting one I’ve seen — there’s a research group out there called Hugging Face. Hugging Face has built a model that reliably detects text generated by ChatGPT.”
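The Hugging Face detector Wisniewski mentions is a trained classifier, and its internals are not described here. As a purely illustrative sketch of one signal such detectors can exploit, machine-generated text often has more uniform sentence lengths than human prose, a property sometimes called “burstiness”. The function and sample strings below are toy assumptions, not the Hugging Face model:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.

    Human writing tends to mix short and long sentences (high score);
    machine-generated text is often more uniform (low score). This is
    a crude heuristic for illustration only, not a trained detector.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Hypothetical samples: varied (human-like) vs. uniform (machine-like).
varied = ("No. It failed. We spent three weeks rewriting the entire "
          "ingestion pipeline from scratch. Why?")
uniform = ("The system works well. The team ships code fast. "
           "The users like the product. The plan stays on track.")
```

A real detector would combine many such signals inside a trained model; a single statistic like this is far too weak to use on its own, which is precisely why dedicated models such as the one Wisniewski describes are being built.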


But we need to remember that most of what ChatGPT generates is harmless and benign. The good will always outweigh the malicious content. Andy Patel, a researcher at WithSecure, who recently released a research report about hackers and GPT-3, an earlier version of ChatGPT, says:

“So, we can’t deduce that something is malicious just because it’s written by an AI. It can be part of the heuristic, but not the entire determination. At the end of the day, it’s not going to matter to us if something was written by an AI or not. We still need to understand it for what it is, not for what wrote it.”


Given Patel’s words, anti-phishing training should be about more than just looking for badly written emails—or, in the age of AI, emails that look too perfect to be written by humans. Professionals suggest that businesses enhance their anti-phishing training and reinforce their technical security precautions to prepare for AI-generated emails. These measures may include:

  • Isolating Word documents and other attachments in sandboxes to prevent them from infiltrating corporate networks.
  • Inspecting web traffic using a secure web gateway to safeguard both on-premises and remote users.
  • Deploying secure email gateways.
  • Checking URLs for malicious content or typosquatting.
  • Implementing email security protocols like DMARC, DKIM, and SPF, which help prevent domain spoofing and content tampering.
  • Providing a simple method for reporting suspicious emails.

Aamir Lakhani, a cybersecurity researcher and practitioner at Fortinet’s FortiGuard Labs, adds these closing words:

“A layered security approach is still the best, not just to protect against phishing, but other AI-driven threats. We foresee the weaponization of AI persisting long beyond this year.”


I’ve been in the business of professional IT support for more than 20 years, supporting SMEs in London. With cybersecurity and risk mitigation in my blood and my bones, I know that protecting your IT network and precious data is paramount. If you have any concerns about AI-generated phishing campaigns, lay them to rest. Contact me today and together let’s make sure that ChatGPT and other AI are only used for the greater good.
