- Trend Micro urges industry-led regulation and innovative defense strategies
- Specialized cloud-based ML models may be singled out for data poisoning attacks
In its 2024 predictions, cybersecurity company Trend Micro warns of the transformative role of generative AI (GenAI) in the cyber threat landscape, and of a coming tsunami of sophisticated social engineering tactics and identity theft powered by GenAI tools.
Despite malicious large language model (LLM) WormGPT’s shutdown in August 2023, Trend Micro expects more of its spawn to populate the dark web. In the interim, threat actors will also find other ways to use AI for cybercrime. While legislation to regulate the use of generative AI is yet to be passed, it is paramount that defenders implement zero-trust policies and establish a vigilant mindset for their respective enterprises to avoid falling prey to AI-powered scams.
Eric Skinner, VP of market strategy at Trend Micro, said, “Advanced large language models (LLMs), proficient in any language, pose a significant threat as they eliminate the traditional indicators of phishing such as odd formatting or grammatical errors, making them exceedingly difficult to detect. Businesses must transition beyond conventional phishing training and prioritize the adoption of modern security controls. These advanced defenses not only exceed human capabilities in detection but also ensure resilience against these tactics.”
The widespread availability and improved quality of GenAI, coupled with the use of Generative Adversarial Networks (GANs), are expected to disrupt the phishing market in 2024. This transformation will enable cost-effective creation of hyper-realistic audio and video content—driving a new wave of business email compromise (BEC), virtual kidnapping, and other scams, Trend Micro predicts.
Given the potentially lucrative gains on offer, threat actors will be incentivized to develop nefarious GenAI tools for these campaigns, or to use legitimate ones with stolen credentials and VPNs to hide their identities.
(BEC cost victims over US$2.7 billion (RM12.70 billion) in 2022, according to the FBI Internet Crime Complaint Center.)
AI models themselves may also come under attack in 2024. While GenAI and LLM datasets are difficult for threat actors to influence, specialized cloud-based machine learning models are a far more attractive target. The more focused datasets they are trained on will be singled out for data poisoning attacks with various outcomes in mind—from exfiltrating sensitive data to disrupting fraud filters and even connected vehicles. Such attacks already cost less than US$100 (RM470) to carry out.
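The mechanics of data poisoning can be shown with a deliberately tiny sketch: a toy nearest-centroid "fraud filter" trained on synthetic data, where an attacker flips the labels on the fraudulent samples. Everything here is hypothetical and hugely simplified; real attacks target far larger cloud-hosted models, but the principle of corrupting a focused training set is the same.

```python
# Toy illustration of training-data poisoning via label flipping.
# The dataset, classifier, and poisoning step are all hypothetical
# simplifications -- not any vendor's real model or data.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """Nearest-centroid 'model': one centroid per class label."""
    by_label = {}
    for features, label in data:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the input."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Clean training set: "ok" transactions cluster low, "fraud" cluster high.
clean = [([1.0, 1.2], "ok"), ([0.9, 1.0], "ok"), ([1.1, 0.8], "ok"),
         ([9.0, 9.5], "fraud"), ([8.8, 9.1], "fraud"), ([9.2, 8.9], "fraud")]

# Poisoned copy: the attacker relabels the fraud samples as "ok",
# so the model never learns what fraud looks like.
poisoned = [(f, "ok" if lbl == "fraud" else lbl) for f, lbl in clean]

clean_model = train(clean)
poisoned_model = train(poisoned)

probe = [9.0, 9.0]  # a clearly fraudulent transaction
print(predict(clean_model, probe))     # prints "fraud"
print(predict(poisoned_model, probe))  # prints "ok" -- the filter is blinded
```

The attack needs no access to the model itself, only to the training data, which is why narrowly scoped, cheaply sourced datasets make such an attractive target.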
These trends may, in turn, lead to increased regulatory scrutiny and a push from the cybersecurity sector to take matters into its own hands.
“In the coming year, the cyber industry will begin to outpace the government when it comes to developing cybersecurity-specific AI policy or regulations,” said Greg Young, VP of cybersecurity at Trend Micro. “The industry is moving quickly to self-regulate on an opt-in basis.”
Elsewhere, Trend Micro’s 2024 predictions report highlighted:
A surge in cloud-native worm attacks, targeting vulnerabilities and misconfigurations and using a high degree of automation to impact multiple containers, accounts and services with minimal effort.
As cloud adoption becomes more critical to business transformation today, enterprises need to look beyond their routine malware and vulnerability scans. In 2024, cloud environments will be a playground for tailor-made worms crafted to exploit cloud technologies, with misconfigurations serving as an easy entry point for attackers.
With just a single successful exploit, particularly through misconfigured APIs in the likes of Kubernetes (where 60% of surveyed Kubernetes clusters experienced attacks from malware campaigns), Docker, and Weave Scope, attacks with worming capabilities can set off rapid propagation in cloud environments. In short, these worm attacks turn interconnectivity, the very benefit for which the cloud was made, against cloud environments.
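The kind of hygiene that blunts such attacks starts with systematically auditing configurations before an attacker finds the gap. The sketch below scans a list of hypothetical cluster configurations for risky settings; the field names are invented for the example and do not correspond to any vendor's real schema.

```python
# Hypothetical sketch: flag risky settings in cluster/API configurations
# before a worm-style attack can exploit them. Field names are
# illustrative, not a real Kubernetes or Docker schema.

RISK_RULES = [
    ("anonymous_auth", True,  "API accepts unauthenticated requests"),
    ("public_endpoint", True, "control-plane endpoint exposed to the internet"),
    ("rbac_enabled",   False, "no role-based access control"),
]

def audit(config):
    """Return human-readable findings for one service config dict."""
    return [msg for key, bad_value, msg in RISK_RULES
            if config.get(key) == bad_value]

clusters = [
    {"name": "prod-k8s", "anonymous_auth": False,
     "public_endpoint": False, "rbac_enabled": True},
    {"name": "dev-k8s", "anonymous_auth": True,
     "public_endpoint": True, "rbac_enabled": False},
]

for c in clusters:
    findings = audit(c)
    print(f"{c['name']}: {'OK' if not findings else '; '.join(findings)}")
```

A rule list like this is trivially extensible, which matters because worms automate the hunt for exactly these low-effort entry points across many accounts and services at once.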
Trend Micro predicts the value of the cloud API market will reach US$9 billion by 2031.
[US$1 = RM4.70]
Against a reality in which US$60 is the minimum outlay malicious actors need to poison datasets, cloud security will be crucial for organizations to close security gaps in their cloud environments, given how vulnerable cloud-native applications are to automated attacks. Proactive measures, including robust defense mechanisms and thorough security audits, are essential to mitigate these risks.
Data poisoning will make machine-learning (ML) models an exciting and expansive attack surface for threat actors to explore, as these promise high rewards at very little risk. A compromised ML model can open the floodgates to confidential data being extracted, malicious instructions being written, and biased content being served, any of which could lead to user dissatisfaction or potential legal repercussions.
Validating and authenticating training datasets will become increasingly imperative, especially while ML remains an expensive integration for many businesses. Enterprises that move their algorithms off-premises to lower costs will also be more vulnerable, since they rely on data sourced from third-party data lakes and federated learning systems. This makes them entirely dependent on datasets stored in cloud storage services guarded by systems outside their own.
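One low-cost form of dataset validation is a digest manifest: record a cryptographic hash of each approved training file, then verify the hashes before every training run so that silent tampering in a third-party store is caught. A minimal sketch, with illustrative file names and contents:

```python
# Minimal sketch of dataset integrity checking with SHA-256 digests.
# File names and contents are invented for the example.
import hashlib

def digest(data: bytes) -> str:
    """Hex SHA-256 digest of a blob of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(dataset: dict) -> dict:
    """dataset maps file name -> raw bytes; returns name -> digest."""
    return {name: digest(blob) for name, blob in dataset.items()}

def verify(dataset: dict, manifest: dict) -> list:
    """Return the names of files whose contents no longer match."""
    return [name for name, blob in dataset.items()
            if manifest.get(name) != digest(blob)]

# Snapshot the approved dataset at review time.
approved = {"transactions.csv": b"id,amount\n1,10\n2,12\n",
            "labels.csv": b"id,label\n1,ok\n2,ok\n"}
manifest = build_manifest(approved)

# Later, a third-party data lake silently serves a modified file.
tampered = dict(approved)
tampered["labels.csv"] = b"id,label\n1,ok\n2,fraud\n"

print(verify(approved, manifest))  # []
print(verify(tampered, manifest))  # ['labels.csv']
```

In practice the manifest itself must live somewhere the data provider cannot touch (for example, signed and stored with the training pipeline), otherwise an attacker who can alter the data can alter the digests too.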
More supply chain attacks will target not only upstream open-source software components but also identity management tools, such as telco SIMs, that are crucial for fleet and inventory systems. These attacks will solidify the need for enterprises to implement application security tools to gain visibility over their continuous integration and continuous delivery (CI/CD) systems.
Cybercriminals can take advantage of providers with weak defenses to gain access to widely used software and find their way into supply chain vendors. Ultimately, however, they will wreak the most havoc for end users. In 2024, vendors need to anticipate that ambitious threat actors will strike at the source — the very code on which IT infrastructures are built — with attacks that will persistently focus on third-party components like libraries, pipelines, and containers.
Attacks on private blockchains will increase as more enterprises turn to them to lower costs. Since private blockchains generally face fewer stress tests and lack the resilience of their battle-hardened public counterparts, which fend off constant attacks, cybercriminals will likely gun for administrative rights to the former to modify, override, or erase entries and then demand a ransom. Alternatively, they could try to encrypt the entire blockchain if it’s possible to seize control of enough nodes.
The increased criminal attention on Web3 technologies will also lay the groundwork in 2024 for the first criminal groups that run entirely on decentralized autonomous organizations (DAOs) and are governed by self-executing smart contracts hosted on blockchain networks. Indeed, a preamble to these threat groups has already been observed in actors who weaponize smart contracts to add layers of complexity to cryptocurrency-related crimes against decentralized finance platforms.