Can we protect ourselves against AI-driven cyber attacks?

Many tech trend predictions indicate an increase in cyber attacks in 2024. One emerging trend that appears to be gaining traction is the misuse of artificial intelligence.

Recently, there has been a rise in attacks in which Singapore's Prime Minister Lee Hsien Loong and Deputy Prime Minister Lawrence Wong have had their voices and images replicated by AI and misused in scams. The Prime Minister’s wife, Ho Ching, has also been targeted, with a video of her published on YouTube as an ad promoting a fake investment opportunity.

This misuse of digital identity is likely just the beginning of a potential widespread proliferation in the future, which raises the question – what can be done?

For more on this, we speak with Johan Fantenberg, Principal Solutions Architect APJ, Ping Identity.

What are some AI attacks that you have observed?

The ongoing flight to digital means that a growing number of business transactions will take place online. This makes the ability to quickly and accurately authenticate digital identities essential for success, especially for an economy such as Singapore’s, which has made significant progress on its digitalisation goals.

We have observed cybercriminals increasingly leveraging AI to circumvent advanced identity controls, making it far easier to impersonate others. One of the most alarming AI-powered threats to identity security targets voice verification, commonly used by call centers, especially those in the banking industry, as a method of authentication.

Advances in generative AI enable cybercriminals to create a synthetic copy of a person’s voice in minutes from just a high-quality recording, perhaps taken from a spam phone call. While many businesses are building biometric safeguards to help mitigate this risk, it remains challenging for voice verification systems to accurately distinguish between real and synthesized voices.

Cybercriminals are also utilizing AI to intensify the frequency and sophistication of phishing attacks. AI can craft more convincing phishing emails using personal information collected from sources such as social media, and these messages are less likely to contain the obvious errors that make most phishing emails so easy to spot.

What do these cybercriminals want?

While there can be several motivations for cybercriminals, the majority are typically looking for opportunities to monetize their attacks. Digital identity data is a cybercriminal’s favorite target for a simple reason: user credentials allow them to breach corporate networks that hold a wealth of high-value data, which can be held for ransom or sold to other threat actors.

Seeking a big payoff, many are highly motivated to use new technologies and techniques to increase their success rates. The use of AI technologies empowers these cybercriminals to leverage data from prior data breaches and publicly available information, combining them to commit more effective identity fraud. This could involve cybercriminals working to gain unauthorized access to personal banking accounts, or even into corporate networks whereupon larger attacks such as ransomware may be launched.

Illustrating the risks, one of the top scams in Singapore involves impersonating a victim’s friend to fraudulently request loans or to direct the victim to click on malware-laden links. With AI, cybercriminals can develop more convincing personas, making their scams more believable to potential victims.

What are the common signs or indicators that someone may be targeted by an AI-driven cyber attack?

While any person or organization can realistically fall victim to AI-driven cyber attacks, cybercriminals are more likely to target those in industries such as banking or healthcare. Data from these industries often comprises highly personalized information that can be used to create entire human personas, facilitating identity theft that can lead to larger payoffs.

In 2024, we anticipate AI-based fraud to accelerate exponentially, causing users to increasingly question the validity and integrity of multimedia like video, images, and audio files. However, the silver lining is that this is likely to lead to more innovative technology and practices in content verification and validation. With cybercriminals developing increasingly sophisticated attack methods, security teams will need to innovate to keep pace.

We will likely see more companies expanding partnerships this year to deliver best-in-class technology for evolving challenges. Ecosystem collaboration, especially between the public and private sectors, is critical to maintaining up-to-date awareness of an evolving threat landscape and to building effective defenses against new types of attacks and exploits. It will be key to continuously informing and educating users about risks, as well as supporting ongoing efforts at the organizational level to increase security and privacy while offering great, efficient user experiences.

How can we distinguish between what is real and what is not?

Image generated by Adobe Firefly

Rapid advances in generative AI unfortunately mean that deepfake technology capable of impersonating a person’s likeness in both audio and video will only become more authentic, and therefore harder for the average consumer to detect. This is evident from recent highly convincing deepfake videos featuring Prime Minister Lee Hsien Loong and Deputy Prime Minister Lawrence Wong promoting cryptocurrency investment scams.

This is particularly risky for digitally vulnerable individuals, especially the elderly, who may not have as much experience in identifying or dealing with sophisticated scams. According to data from the Singapore Police Force, seniors aged 60 and above are more commonly targeted in phishing and social media impersonation scams. The use of deepfake technology can make these scams even more convincing.

To stay protected from these sophisticated scams, it is necessary to lower our default level of trust. Both individuals and security decision-makers should verify the legitimacy of any request by contacting the organization or person that supposedly initiated it through a verified channel. Additionally, it is crucial to report any deepfake-related scams or cyberattacks to the authorities, in order to raise awareness and educate the public.

What can public figures and organizations, especially those in the banking and financial industry, do to protect their identity and their audience/customers?

With AI threats set to become increasingly common in today’s business landscape, we see identity taking center stage for many organizations in 2024. This is especially so with our digital-first economy creating near-endless attack surfaces for fraudsters to target. As such, the importance of secure identity solutions simply cannot be overstated.

Identity and access management (IAM) systems can enhance an organization’s ability to detect and prevent identity fraud by layering multiple risk-reducing checks into the user authentication process. IAM systems can detect suspicious activity by analyzing signals such as IP address, geolocation, and past user behavior for anomalies, and demand further proof of identity through methods such as Multi-factor Authentication (MFA).
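The signal-weighing described above can be sketched as a simple scoring function. This is an illustrative toy, not any vendor’s actual risk engine; the signal names, weights, and thresholds are all assumptions:

```python
# Hypothetical risk-based authentication sketch: weigh login signals and
# step up to MFA (or deny) when combined risk crosses a threshold.
# Signal names, weights, and thresholds are illustrative only.

def score_login(signal):
    """Return a risk score in [0, 1] from simple anomaly signals."""
    score = 0.0
    if signal["ip_on_denylist"]:
        score += 0.5
    if signal["geo_distance_km"] > 1000:   # far from the user's usual locations
        score += 0.3
    if signal["new_device"]:
        score += 0.2
    if signal["login_hour_unusual"]:       # outside the user's typical hours
        score += 0.1
    return min(score, 1.0)

def decide(signal, mfa_threshold=0.3, block_threshold=0.8):
    """Map a risk score to an authentication decision."""
    risk = score_login(signal)
    if risk >= block_threshold:
        return "deny"
    if risk >= mfa_threshold:
        return "step_up_mfa"   # e.g. push notification or one-time code
    return "allow"
```

A familiar device logging in from a familiar location would be allowed straight through, while a new device at an unusual distance would be asked for a second factor.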

The use of MFA moves the identity verification process from basic authentication to include an out-of-channel factor. For example, after entering their credentials, a user may also be asked to accept a push notification from an authenticator app sent to a personal device, adding a second layer of verification.
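A push notification is one out-of-channel factor; another widely deployed second factor is the time-based one-time password (TOTP, standardized in RFC 6238), which authenticator apps generate from a shared secret and the current time. A minimal sketch using only Python’s standard library, for illustration rather than production use:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret, submitted, at=None, window=1):
    """Accept codes from adjacent 30-second steps to tolerate clock drift."""
    now = int(at if at is not None else time.time())
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone is not enough to log in.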

Passwords are typically the weakest link in any organization’s security setup. Implementing passwordless authentication can further reduce risk by replacing credentials that can be phished or reused with a cryptographic function. With these different risk-reducing factors available, identity orchestration can build a workflow that integrates risk signals and tailors the user experience around the goal of reducing identity fraud.

It is encouraging to see major technology platforms providing support for strong passwordless authentication methods. FIDO2/WebAuthn is a great step towards reducing the risks associated with traditional credentials, including greatly reducing the risk of man-in-the-middle attacks. Deploying passwordless or IAM capabilities in combination with real-time AI/ML-based risk-signal analysis can help security teams better understand user behavior, and better detect, respond to, and potentially neutralize threats.
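One reason FIDO2/WebAuthn resists man-in-the-middle and phishing attacks is that the authenticator’s signature covers the origin the browser reports, so a response minted on a look-alike domain fails verification at the real site. The sketch below illustrates only that origin-binding idea; it uses an HMAC as a stand-in for the public-key signature a real authenticator would produce, and all names are hypothetical:

```python
import hashlib
import hmac
import json
import os

# Toy illustration of WebAuthn-style origin binding (NOT a real implementation):
# the signature covers both the server's random challenge and the origin the
# browser reports, so a response captured on a phishing domain will not verify.

def client_sign(key, challenge, origin):
    """Sign the challenge together with the origin the browser observed."""
    payload = json.dumps({"challenge": challenge.hex(), "origin": origin},
                         sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def server_verify(key, challenge, expected_origin, reported_origin, signature):
    """Reject responses whose signed origin is not the server's own origin."""
    if reported_origin != expected_origin:
        return False
    expected = client_sign(key, challenge, reported_origin)
    return hmac.compare_digest(expected, signature)

key = os.urandom(32)        # stand-in for the credential's key material
challenge = os.urandom(16)  # fresh random challenge per login attempt
sig = client_sign(key, challenge, "https://bank.example")
```

A signature produced while the browser was on `https://bank-example.evil` is bound to that origin, so the real server rejects it even though the attacker relayed a valid challenge.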

Key takeaways!

Use IAM systems with Multi-Factor Authentication (MFA).

Consider implementing passwordless authentication to enhance security.

Educate users about cyber threats and best practices.

Foster collaboration between sectors for shared insights and defense.

Employ real-time AI/ML analysis for threat detection.

Author

  • Hello! I’m Mark, the founder of techcoffeehouse.com. I love a good plate of Chicken Rice. So, if you have a story as good as the dish, HMU!

