THE FAKE AI

DALL-E GOES PHISHING

Besides helping students cheat on their coursework, AI-powered tools like ChatGPT may also be helping cybercriminals scam you out of your personal information and hard-earned cash. According to cybersecurity researchers from SafeGuard Cyber, a LinkedIn-based phishing campaign aimed at stealing personal details used the AI art-generation platform DALL-E. Developed by OpenAI, the American start-up also responsible for creating ChatGPT, DALL-E is an AI model that creates images from text-based prompts.

DECEITFUL BLACK PAPER

SafeGuard researchers found a deceitful whitepaper that targeted sales executives seeking to improve lead conversion rates. The ‘Sales Intelligence’ advertisement reportedly featured a distinctive colour pattern in the lower-right corner of the graphic, similar to the watermark typically visible on images created by DALL-E. Users who opened the whitepaper, which the researchers believe was generated using AI, were prompted to click a call-to-action button. Doing so automatically filled a form with the user’s personal LinkedIn profile information.
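To make the watermark cue concrete, here is a minimal sketch of how one might programmatically screen an image for a DALL-E-style colour strip in its lower-right corner. This is not SafeGuard’s method; the five RGB palette values, the strip dimensions, the tolerance, and the filename are all illustrative assumptions, and a real check would need the genuine palette.

# watermark_check.py - illustrative sketch only, not SafeGuard's method.
# Samples the bottom-right corner of an image and compares it against an
# ASSUMED five-colour palette resembling the DALL-E signature strip.
from PIL import Image

# Approximate reference colours (RGB) -- assumed values, not the real palette.
PALETTE = [
    (255, 255, 102),  # yellow
    (102, 255, 255),  # turquoise
    (102, 255, 102),  # green
    (255, 102, 102),  # red
    (102, 102, 255),  # blue
]

def looks_like_watermark(path, strip_w=80, strip_h=16, tolerance=60):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    # Crop the assumed watermark region in the lower-right corner.
    strip = img.crop((w - strip_w, h - strip_h, w, h))
    cell_w = strip_w // len(PALETTE)
    for i, expected in enumerate(PALETTE):
        cell = strip.crop((i * cell_w, 0, (i + 1) * cell_w, strip_h))
        pixels = list(cell.getdata())
        # Average colour of this cell, channel by channel.
        avg = tuple(sum(c) / len(pixels) for c in zip(*pixels))
        if any(abs(a - e) > tolerance for a, e in zip(avg, expected)):
            return False
    return True

if __name__ == "__main__":
    print(looks_like_watermark("suspicious_ad.png"))  # hypothetical file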

THE VOICE OF LINKEDIN

The so-called ‘Sales Intelligence’ account promoting the ad was essentially a blank page containing only a link to a jewellery store in Arizona. SafeGuard believes that the account was created solely to collect users’ information for dubious and potentially malicious purposes. While not as damaging as compromised bank details, even basic information such as a user’s name, email, LinkedIn profile, and date of birth can serve as a starting point for more serious cybercrimes like identity theft. Although LinkedIn has not confirmed it, the researchers suspect that these scams are AI-based; it is hard to see how the DALL-E watermark could have appeared otherwise. In response to the incident, a LinkedIn spokesperson made this statement:

“We continually evolve our teams and technology to stay ahead of new forms of scams so that we always help keep our members’ experiences safe. We also encourage our members to report suspicious activity or accounts so that we can take action to remove and restrict them, as we did in this case.”

THE GIFT OF LANGUAGE

Dr Ilia Kolochenko, an adjunct professor of cybersecurity at Capitol Technology University, believes that scams of this kind will likely escalate rapidly over the next decade due to the rise of AI. Specifically, ChatGPT-style tools could make it much easier for hackers from non-English-speaking countries to create convincing and coherent scam messages; Kolochenko called such tools a “gift” for this demographic. Many of the scam emails we are familiar with are poorly written, so AI may provide significant leverage for bad actors attempting to produce official-sounding documents. The professor explained:

“We have a lot of young, talented cyber criminals who simply don’t have great English skills.”

According to Kolochenko, this is not the first time we have seen this type of behaviour. He claims to have observed other instances of phishing emails that appear to have been generated using AI, as well as cybercriminals employing AI chatbots to communicate with their targets. The professor explains that these cybercriminals frequently pose as a large tech company’s tech support desk to demand payments.

DIFFICULT TO DETECT

James Bores, a cybersecurity consultant with over two decades of experience and founder of Bores Consultancy in the UK, believes it is challenging for consumers to detect fraudulent AI-generated content. Consequently, he holds a pessimistic view on how consumers can protect themselves from such content. Bores explained:

“In terms of what people can do, the depressing answer is not much. There are tools to try and detect AI-generated content, but they are unreliable. Some of them work, and some of them don’t, and these will only become more unreliable as the technology gets better.”

DON’T BELIEVE IT

Kolochenko recommends being sceptical of written communications with flawless grammar and spelling, as people commonly make typographical errors, write in lowercase, or use casual language in their emails. He explained that if someone receives a message that appears too articulate to have come from their usually brief and difficult-to-understand colleague, they should question its authenticity. Kolochenko also suggests that consumers watch for contextual cues, such as receiving an email at 9 am UK time from a colleague supposedly based in the States, where it would still be the middle of the night.
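The timezone cue is easy to automate. Below is a toy sanity check along the lines Kolochenko describes: flag a message whose send time falls outside the sender’s plausible working hours. The 8 am to 7 pm window, the timezone names, and the example timestamp are assumptions for illustration.

# Flag emails sent outside the sender's assumed working hours.
from datetime import datetime
from zoneinfo import ZoneInfo

def outside_working_hours(sent: datetime, sender_tz: str,
                          start_hour: int = 8, end_hour: int = 19) -> bool:
    # Convert the timestamp into the sender's local time and test the window.
    local = sent.astimezone(ZoneInfo(sender_tz))
    return not (start_hour <= local.hour < end_hour)

# 9 am UK time in mid-February is 4 am for a colleague in New York --
# a message arriving then is worth a second look.
sent = datetime(2023, 2, 15, 9, 0, tzinfo=ZoneInfo("Europe/London"))
print(outside_working_hours(sent, "America/New_York"))  # True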

IS IT REAL?

To determine whether a video is an AI-generated deepfake, Bores recommends examining the shadows or observing any unusual blinking patterns in the subject. For written content, he advises paying attention to peculiar phrasing, though he acknowledges that unusual phrasing may simply be down to someone having an off day. Ultimately, there is no definitive way to determine whether something is real or fake. When evaluating the authenticity of a picture that supposedly features a real person, Bores recommends checking whether the eyes are symmetrical: most individuals have some degree of asymmetry in their eyes, so a perfectly symmetrical pair can be a giveaway.
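For readers curious what such a check could look like in practice, here is a deliberately crude sketch of the eye-symmetry heuristic using OpenCV’s stock Haar cascades. This is not Bores’s tooling, and real deepfake detection is far more involved; the detector parameters, the image filename and the idea of comparing detection-box sizes are all assumptions made for illustration.

# Crude eye-symmetry heuristic using OpenCV's bundled Haar cascade.
import cv2

def eye_symmetry_score(path):
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # couldn't find two eyes; heuristic doesn't apply
    # Take the two largest detections and compare size and vertical position.
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(
        eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    size_diff = abs(w1 * h1 - w2 * h2) / max(w1 * h1, w2 * h2)
    height_diff = abs(y1 - y2) / max(h1, h2)
    return size_diff, height_diff

# Near-zero differences would be unusual for a real photo; unnaturally
# perfect symmetry can hint at a generated face.
print(eye_symmetry_score("profile_photo.jpg"))  # hypothetical file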

BACK-TO-BASICS

Bores also says that the best way to avoid falling victim to AI fraud is to adopt a back-to-basics approach and be cautious of anything that appears too good to be true. Since it is difficult to determine what content is generated by AI, he recommends relying on basic instincts and scepticism to protect oneself from potential scams. The consultant, who has an MSc in Cybersecurity from Northumbria University, said:

“If it’s asking for anything out of character, or anything new, that should be a warning sign immediately. To be honest, if it’s asking for anything at all.”

Bores spelt out that if you feel suspicious about a communication from a company, you should try to contact them through a recognised route and ask for a reference number. If they cannot provide one, do not pursue the matter any further.

WHEN WILL AI BE REGULATED?

Big-tech players such as LinkedIn and Twitter have the ability to regulate AI-generated content, but Bores doubts they will do much without being compelled to by the powers that be. Kolochenko believes that, given the significant investment companies such as Microsoft have made in AI technology, they should eventually be able to develop reliable methods to detect AI-generated content. He further suggests that a high-profile fraud involving AI-generated material, such as a deepfake speech by a politician going viral on platforms like LinkedIn, may force these tech giants into decisive action. Kolochenko predicts that big social platforms such as Facebook, LinkedIn and Twitter will soon prohibit users from posting AI-generated content without a disclaimer as part of their terms and conditions. The question remains: how will they enforce such a ban?

TAKING ON THE DEEPFAKES

I’ve been a purveyor of professional IT services for more than 20 years and specialise in cybersecurity and risk mitigation. If you’re worried about deepfakes and phishing scams from AI, fear not! Together, we will ensure that your data is always safe and secure and your online identity is protected. Get in touch with me today and let’s get the most out of chatbot AI – not the worst.
