Dark AI Threat Looms Over APAC as Cybercriminals Harness Generative Models for Stealth Attacks

Cybersecurity experts are warning Asia-Pacific organisations to brace for a new wave of sophisticated and covert cyberattacks fuelled by “Dark AI” — malicious applications of artificial intelligence that can bypass traditional defences and accelerate criminal operations.

Rise of the Cybercriminal’s AI Arsenal

Kaspersky says attackers are increasingly deploying non-restricted large language models (LLMs) outside safety and compliance frameworks to conduct unethical or illegal activities. Known as Dark AI, these systems can be programmed to deceive, manipulate, and launch cyberattacks without oversight.

“Since ChatGPT gained global popularity in 2023, we have observed several useful adoptions of AI… In the same breath, bad actors are using it to enhance their attacking capabilities. We are entering an era in cybersecurity and in our society where AI is the shield and Dark AI is the sword,” said Sergey Lozhkin, Head of Global Research & Analysis Team (GReAT) for META and APAC at Kaspersky.

Black Hat GPTs and Nation-State Exploitation

One of the most common Dark AI threats comes from “Black Hat GPTs” — models designed or modified for malicious purposes such as creating phishing campaigns, generating malware code, producing deepfakes, and aiding red team penetration tests.

Examples include WormGPT, DarkBard, FraudGPT, and Xanthorox, which Kaspersky says are either private or semi-private models built to support cybercrime, fraud, and malicious automation.

Lozhkin also warned of an emerging trend of nation-state actors incorporating LLMs into espionage campaigns. OpenAI recently disclosed that it had disrupted more than 20 covert influence and cyber operations attempting to misuse its tools, including creating fake personas, engaging in real-time interactions with targets, and generating multilingual disinformation content.

Staying Ahead of Dark AI

“AI doesn’t inherently know right from wrong… As dark AI tools become more accessible and capable, it’s crucial for organisations and individuals in Asia Pacific to strengthen cybersecurity hygiene, invest in threat detection powered by AI itself, and stay educated on how these technologies can be exploited,” Lozhkin said.

Kaspersky recommends organisations:

  • Deploy next-generation security solutions like Kaspersky Next to detect AI-powered threats.
  • Use real-time threat intelligence to monitor AI-driven exploits.
  • Enforce strict access controls and provide employee training to mitigate risks from “shadow AI” and data leakage.
  • Establish a Security Operations Centre (SOC) for real-time monitoring and rapid response.
