Asia Pacific is emerging as the world’s fastest-moving region for artificial intelligence adoption, with cybersecurity firm Kaspersky warning that the same momentum is accelerating the scale and sophistication of cyber threats.
According to Kaspersky, 78 per cent of professionals in Asia Pacific use AI at least weekly, compared with a global average of 72 per cent. The firm said the region’s high connectivity, widespread device usage and younger, tech-savvy populations are driving a “bottom-up” adoption pattern that often precedes formal enterprise rollouts.
This dynamic is turning Asia Pacific into what Kaspersky describes as a global proving ground for AI-driven business transformation — and an early testing environment for AI-enabled cybercrime.
AI adoption drives new cyber risks
Kaspersky said the rapid spread of large language models (LLMs) and generative AI is reshaping the cybersecurity landscape heading into 2026, affecting both individual users and organisations.
One key concern is the mainstreaming of deepfakes. Synthetic images, videos and voice content are becoming more common in scams and social engineering attacks, prompting companies to incorporate deepfake awareness into staff training and internal security policies.
While visual deepfakes are already highly realistic, Kaspersky noted that audio quality is improving rapidly. At the same time, user-friendly content generation tools are lowering the barrier to entry, allowing non-experts to create convincing fake content with minimal effort.
The firm also highlighted growing challenges in identifying AI-generated material. Current labelling mechanisms are inconsistent and easy to bypass, particularly with open-weight and open-source models. This is likely to drive further technical and regulatory efforts to distinguish synthetic from authentic content.
Open models narrow the gap with closed systems
Kaspersky said open-weight AI models are increasingly matching closed systems in cybersecurity-related tasks, while circulating with fewer safeguards.
This convergence is blurring the line between legitimate and malicious use. AI-generated scam emails, phishing pages and fake brand identities are becoming harder to distinguish from genuine communications, especially as companies themselves adopt synthetic content in marketing and advertising.
“Distinguishing real from fake will become more challenging, not just for users but also for automated detection systems,” the company said.
AI transforms both attack and defence
Threat actors are already using AI across multiple stages of the cyber kill chain, from writing malicious code and automating infrastructure to probing for vulnerabilities and deploying attacks. Kaspersky expects this trend to intensify, with attackers also attempting to conceal AI involvement to evade detection.
At the same time, AI is becoming more embedded in security operations. Agent-based systems are expected to continuously scan networks, identify weaknesses and provide contextual analysis, reducing manual workloads for security operations centre (SOC) teams.
“AI is reshaping cybersecurity from both sides,” said Vladislav Tushkanov, Research Development Group Manager at Kaspersky. “Attackers are using it to automate attacks and create convincing fake content, while defenders use it to detect threats and make faster, smarter decisions.”
Adrian Hia, Managing Director for Asia Pacific at Kaspersky, said the region’s speed of AI adoption presents both opportunity and risk.
“Asia Pacific is setting the global pace for AI adoption, with consumers and enterprises advancing faster than any other region,” he said. “This momentum is creating tremendous opportunity, but also redefining how cyber threats emerge and scale.”