Cybercriminals are constantly devising new scam techniques, from identity theft to sophisticated malware attacks. The emergence of generative Artificial Intelligence (AI) tools has introduced a new layer of complexity to the cybersecurity landscape, marked by the rise of dark AI.
Ensuring online security has become more critical than ever. A particularly alarming development in AI is the creation of “dark LLMs” (large language models). These unregulated versions of standard AI systems like ChatGPT have been repurposed for illicit activities, functioning without ethical constraints and with disturbing accuracy and efficiency.
Cybercriminals now utilise dark LLMs to automate and enhance phishing schemes, develop sophisticated malware, and produce scam content. This is accomplished through LLM “jailbreaking”, where prompts are used to bypass the model’s internal safeguards and filters. For instance, FraudGPT is capable of writing malicious code, designing phishing websites, and generating undetectable malware. It provides tools for executing various cybercrimes — from credit card fraud to digital impersonation.
Another dark LLM, WormGPT, generates convincing phishing emails that can deceive even the most vigilant users. Based on the GPT-J model, WormGPT is also employed to create malware and conduct business email compromise attacks, which involve targeting specific organisations with tailored phishing attempts.
Abhishek Singh, co-founder and CEO at SecureDApp, a blockchain start-up, expressed deep concern about the emergence of such tools. He said: “Malicious AI models like FraudGPT and WormGPT are a game-changer in the world of online threats. These sophisticated models can generate incredibly convincing content, making it difficult for even the most cautious users to detect fraud.”
He added: “FraudGPT is like a master con artist, crafting fake emails, reviews, and messages that can trick people into revealing sensitive info or sending money to scammers. And WormGPT? It’s like a digital virus, spreading itself across the internet, adapting to evade detection, and even modifying its own code to stay one step ahead of security measures. As someone who has dedicated their career to making the digital world safer, I believe we need to take these threats seriously. We must develop innovative solutions, educate users, and stay vigilant to combat these malicious AI models. The future of our digital landscape depends on it.”
Meanwhile, Amit Relan, co-founder and CEO of mFilterIt, highlighted the escalating threat, saying: “From identity theft to sophisticated malware attacks, cybercriminals keep coming up with new scam methods. Generative AI tools have now added a new layer of complexity to the cyber security landscape through the rise of dark AI. Staying on top of your online security is more important than ever. Dark LLMs, the uncensored versions of everyday AI systems such as ChatGPT, are deployed to automate and enhance phishing campaigns, create sophisticated malware, and generate scam content using prompts to bypass built-in safeguards and filters.”
Relan emphasised the need for proactive measures and awareness. He said: “The government is working to spread awareness and implement proactive measures to handle such threats. As per the RBI’s annual report, the amount involved in fraud cases surged to Rs 1,457 crore in FY24 from Rs 227 crore the previous year. The number of online frauds in the country surged 334 per cent year-on-year (YoY) to 29,082 in the financial year 2023-24 (FY24).”
He believes that a mix of human awareness and proactive monitoring with AI-based threat detection tools can help respond to such threats more effectively. According to Relan: “Patch up the vulnerabilities that cybercriminals try to exploit before it’s too late. Awareness to identify the clues in phishing messages, such as poor grammar, generic greetings, suspicious email addresses, overly urgent requests, or suspicious links, is crucial. Banking regulators are proactively working towards combating sophisticated threats such as malicious LLMs. Other large consumer brands also need to buckle up.”
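The phishing clues Relan lists — generic greetings, urgent language, suspicious sender addresses, and suspicious links — can be sketched as a simple rule-based checker. The keyword lists, patterns, and function name below are illustrative assumptions for demonstration, not a production detection tool:

```python
import re

# Illustrative red-flag keywords; real filters use far larger, maintained lists.
URGENT_PHRASES = ("urgent", "act now", "immediately", "account suspended", "verify now")
GENERIC_GREETINGS = ("dear customer", "dear user", "dear account holder")

def phishing_clues(sender: str, body: str) -> list[str]:
    """Return a list of simple red flags found in an email."""
    clues = []
    text = body.lower()
    # Generic greeting instead of the recipient's name
    if any(g in text[:60] for g in GENERIC_GREETINGS):
        clues.append("generic greeting")
    # Overly urgent language pressuring the reader to act
    if any(p in text for p in URGENT_PHRASES):
        clues.append("urgent request")
    # Sender domain that merely resembles a known brand (lookalike heuristic)
    domain = sender.rsplit("@", 1)[-1].lower()
    if re.search(r"(secure|support|verify)[-.]", domain) or domain.count("-") >= 2:
        clues.append("suspicious sender domain")
    # Links pointing at a raw IP address instead of a named host
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        clues.append("suspicious link (raw IP address)")
    return clues

email_body = "Dear Customer, your account suspended. Verify now at http://192.0.2.7/login"
print(phishing_clues("alerts@secure-bank-login.example", email_body))
# → ['generic greeting', 'urgent request', 'suspicious sender domain',
#    'suspicious link (raw IP address)']
```

Each heuristic here maps to one clue from the quote; in practice, such rules complement rather than replace the AI-based threat-detection tools Relan describes.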