Unit 42 research shows malicious LLMs are fueling AI-based cyberattacks, putting Indonesia at greater risk
Jakarta, Thekabarnews.com—The rapid growth of artificial intelligence (AI) is amplifying its dual-use risk. Many businesses and public agencies in Indonesia rely on large language models (LLMs) such as ChatGPT and Google Gemini.
However, cybercriminals are increasingly using this same technology to automate phishing, online fraud, and large-scale malware attacks.
Indonesia's heavy reliance on instant messaging apps, e-commerce platforms, and digital public services makes it particularly vulnerable to AI-driven cyberattacks. That dependence gives criminals a broad surface for more sophisticated attacks.
Previous research has documented malware distribution and phishing campaigns run through bogus ChatGPT apps. Meanwhile, the national Cyber Security Incident Response Team (CSIRT) has observed signs of AI-agent-style threats capable of stealing personal information and financial credentials.
Dark LLMs are sold openly on Telegram and the dark web
These findings are in line with a Palo Alto Networks Unit 42 study, The Dual-Use Dilemma of AI: Malicious LLMs, which shows that “dark LLMs” such as WormGPT, FraudGPT, and KawaiiGPT are becoming more common.
According to Unit 42, threat actors build these AI models without any safety guardrails and sell them openly on Telegram channels and dark web forums.
Their ready availability makes it far easier for attackers to penetrate systems, allowing them to strike faster and in more places.
Unit 42 identified three main ways malicious LLMs could reshape cybercrime in Indonesia:
• Highly convincing phishing attacks.
Advanced language generation lets attackers craft phishing messages and business email compromise (BEC) scams that convincingly impersonate corporate executives, banks, or government agencies.
• Commoditization of cybercrime.
Malicious LLMs can produce malware, phishing kits, and data-stealing programs on demand, work that previously required substantial technical expertise.
• A lower barrier to entry.
With technical hurdles removed, low-skilled criminals can now run digital fraud and extortion schemes quickly and repeatedly, turning cybercrime into a cheap, high-volume business.
Unit 42 noted that WormGPT can already generate scam content in Indonesian that sounds natural and fits its context, making attacks harder to detect.
Policy response and a prevention-first strategy
Unit 42 reported that cybercriminals are increasingly adopting powerful AI, accelerating illicit activity across digital environments. In response, authorities are being urged to establish rules and standards that curb the proliferation of malicious AI models.
Experts point to regular security assessments and adherence to best practices as crucial measures for mitigating the risk of AI-driven cyberattacks.
Unit 42 argued that the biggest challenge for Indonesia as it drafts its national AI roadmap is not restricting AI tools, but building resilience against large-scale, fast-moving AI-based attacks.
Experts consider a prevention-first approach, one that embeds safe AI practices into governance and defense strategies, essential for protecting companies, consumers, and critical services from the growing digital threats posed by emerging technology.
Collaboration between government and the private sector is essential
Unit 42 also stressed how important it is for the government, regulators, and the private sector to work closely together.
In Indonesia’s fast-changing digital ecosystem, balancing technological innovation with cybersecurity protection will be critical. Policymakers can strike that balance by embedding safe AI practices into national AI governance frameworks.