Just as students and workers have harnessed generative artificial intelligence (AI) tools for their studies and work, so, too, have crooks.

Underground forums are selling modified versions of ChatGPT that circumvent safety filters to generate scam content, the Cyber Security Agency of Singapore (CSA) said in its Singapore Cyber Landscape report for 2023, published on July 30.

FraudGPT and WormGPT, two modified versions of OpenAI’s chatbot, have reportedly been sold to more than 3,000 customers globally since July 2023, raising fears that generative AI could herald a wave of cyber attacks, scams and falsehoods.

FraudGPT is marketed on the Dark Web as a tool for learning how to hack and for writing malware and malicious code. WormGPT was developed to circumvent ChatGPT’s guard rails, such as its prohibitions against generating phishing e-mails or writing malware code.

Roughly 13 per cent of the phishing scams analysed by CSA in 2023 showed signs that they were likely created with AI.

Since the launch of OpenAI’s ChatGPT in late 2022, cyber-security firms have reported a growing trend of hackers using the AI tool to gather information about software in order to find exploitable vulnerabilities in companies’ systems.

Microsoft, for example, disclosed that bad actors had used AI to study technical protocols for military-related equipment such as radars and satellites, illustrating how AI can be used in reconnaissance before an attack is staged.

CSA wrote: “Threat actors can use AI to scrape social media profiles and public websites for personally identifiable information, thereby increasing the speed and scale of highly personalised social engineering attacks.”

Cracked chatbots continue to emerge despite efforts to clamp down on them. The Telegram channel promoting WormGPT was shut down, only for other similar tools to appear elsewhere.

AI-powered password-cracking tools such as PassGAN, which can be deployed at scale, can also crack more than half of commonly used passwords in under a minute.

Another way the technology is deployed maliciously is to generate deepfake images that can bypass biometric authentication. For example, to beat facial recognition used as a security feature, fraudsters turn to face-swopping apps.

CSA said identity verification firms have reported exponential increases in deepfake fraud attempts in 2023.

There is a growing underground market of criminal developers peddling impersonation services, said CSA. These services employ deepfakes, fake social media accounts and AI-generated spam content that can bypass the anti-phishing controls of popular e-mail services, and can be used to run scam campaigns.

Deepfake scams have come a long way since one of the earliest incidents surfaced in 2019, when The Wall Street Journal reported that the chief executive of a Britain-based energy firm was duped into transferring US$243,000 (S$326,500) to scammers who mimicked the voice of the boss of the firm’s parent company over the phone.

Deepfakes have only grown more convincing since then, said CSA. It cited a case in 2024 in which an employee of a multinational firm was tricked into sending more than US$25 million to fraudsters after attending a videoconference in which the other participants were real-time, AI-generated deepfakes.

For now, conventional cyber-hygiene measures remain largely relevant in mitigating AI-enabled threats, said CSA, which urged users to set strong passwords, enable multi-factor authentication and regularly update their software to patch vulnerabilities.

They should also learn to spot deepfake scams:

  • Assess the source, context and aim of a message.
  • Analyse the audio-visual elements of the content for fuzzy or inconsistent images, which are telltale signs of AI-generated footage.
  • Authenticate content using tech tools, if available. These tools can detect the origins of content or analyse pixels for inconsistencies; a simple sketch of one such pixel check follows this list.
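
By way of illustration, one basic pixel-level check is error-level analysis: re-saving a JPEG and measuring how much each region changes, since spliced or regenerated regions often compress differently from the rest of the image. The short Python sketch below shows the idea only; the filenames are placeholders, and real deepfake-detection tools are far more sophisticated.

```python
# Illustrative error-level analysis (ELA): re-save a JPEG at a known quality
# and measure how much the pixels change. Regions that were spliced in or
# regenerated often stand out with unusually high error levels.
# This is a rough heuristic, not a dedicated deepfake detector.
# "photo.jpg" and "resaved.jpg" are placeholder filenames.
from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")
original.save("resaved.jpg", quality=90)            # recompress at a fixed quality
resaved = Image.open("resaved.jpg").convert("RGB")

diff = ImageChops.difference(original, resaved)     # per-pixel error levels
extrema = diff.getextrema()                         # (min, max) per colour channel
max_error = max(channel_max for _, channel_max in extrema)
print("Maximum error level:", max_error)            # unusually high values warrant a closer look
```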

An AI arms race

Despite concerns about AI, the very same technology is being used by the cyber-security sector to combat scams.

“Through machine learning and algorithms, AI can be trained to detect deepfakes, phishing e-mails and suspicious activities,” said CSA.
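As a simple illustration of the kind of training CSA describes (a minimal sketch, not any agency’s or vendor’s actual system), a text classifier can be fitted on labelled examples of phishing and legitimate e-mails. The handful of messages below are made-up placeholders; a real detector would be trained on a large corpus with far richer features.

```python
# Minimal sketch of training a phishing e-mail detector.
# The e-mails and labels are invented placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password here immediately",
    "Meeting moved to 3pm, agenda attached",
    "You have won a prize, click this link to claim your reward now",
    "Please review the quarterly report before Friday",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns each e-mail into word-frequency features;
# logistic regression learns which words tend to signal phishing.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Urgent: confirm your password to avoid suspension"]))
```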

Cyber-security firm Ensign InfoSecurity has developed an AI system that analyses internet traffic for signs of a malicious attack.

Ensign is also exploring methods to detect deepfakes in real time, such as analysing each frame for signs of pixel manipulation, amid increasing worries about AI-generated misinformation impacting elections globally in 2024.

Algorithms can be trained to spot unnatural facial movements, lighting discrepancies and irregularities in eye reflections, which are all telltale signs of deepfakes, said CSA.
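
As a toy illustration of frame-level checks (a crude stand-in for the trained detectors CSA and Ensign describe, with clip.mp4 as a placeholder filename), the sketch below flags frames whose sharpness or brightness jumps abruptly from one frame to the next, a rough proxy for the lighting and motion discrepancies mentioned above.

```python
# Toy frame-consistency check: flag frames whose sharpness or brightness
# deviates sharply from their neighbours. This is a crude heuristic,
# not a production deepfake detector. "clip.mp4" is a placeholder.
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")
sharpness, brightness = [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness.append(cv2.Laplacian(gray, cv2.CV_64F).var())  # edge detail per frame
    brightness.append(gray.mean())                           # overall lighting per frame
cap.release()

if len(sharpness) < 2:
    raise SystemExit("Video too short to analyse")

for name, series in (("sharpness", np.array(sharpness)), ("brightness", np.array(brightness))):
    jumps = np.abs(np.diff(series))
    threshold = jumps.mean() + 3 * jumps.std()               # flag unusually large frame-to-frame jumps
    flagged = np.where(jumps > threshold)[0] + 1
    print(f"Frames with abrupt {name} changes:", flagged.tolist())
```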

But developers face challenges in using AI for cyber security, such as false positives, false negatives, and an arms race against cyber criminals that may be unsustainable in the long run for many organisations.

CSA said: “As law enforcement (agencies train) their AI systems, cyber criminals can also actively develop methods to evade and fool AI detection systems. This can result in a fast-paced, resource-intensive arms race in which AI systems constantly adapt.”

Asia News Network (ANN)/The Straits Times