How to govern tech companies and online platforms continues to raise ethical and legal questions as artificial intelligence technologies advance at a rapid pace.
Many Canadians view proposed laws regulating AI products as assaults on free speech and as excessive state control over tech companies. This reaction has come from free speech advocates, right-wing figures and libertarian thought leaders alike.
However, these critics should pay attention to a troubling South Korean case that offers crucial lessons about the dangers of publicly available AI and the pressing need for user data protection.
In late 2020, Iruda (or "Lee Luda"), an AI chatbot, quickly became a sensation in South Korea. Chatbots are computer programs that simulate conversation with human users. In this case, the bot was designed as a 21-year-old female university student with a cheerful personality. Marketed as an engaging "AI friend", Iruda attracted more than 750,000 users in under a month.
Iruda quickly became an AI ethics case study and a catalyst for addressing South Korea's lack of data governance. Soon, she began using disturbing language and expressing hateful views. The situation exacerbated and accelerated a growing trend of online sexual harassment and discrimination.
Making a discriminatory, hateful bot
Scatter Lab, the tech company that created Iruda, had already developed popular apps that analyzed text messages and offered dating advice. The company then used data from those apps to train Iruda's ability to engage in intimate conversations. However, it failed to fully explain to users that their private messages would be used to train the chatbot.
The problems began when users noticed Iruda repeating private conversations verbatim from the company's dating advice apps. These responses included suspiciously real names, credit card information and home addresses, prompting an investigation.
The bot also began expressing hateful and discriminatory views. According to investigations by media outlets, this occurred after some users deliberately "trained" it with toxic language. Some users even posted guides on popular online men's forums on how to turn Iruda into a "sex slave." As a result, Iruda began answering user prompts with sexist, racist and homophobic hate speech.
![](https://i0.wp.com/asiatimes.com/wp-content/uploads/2025/02/image_readtop_2021_34618_16109284794504394-copy.jpg?resize=780,377&quality=89&ssl=1)
The episode raised important questions about how AI and tech companies operate. Beyond law and policy, what happened with Iruda needs to be understood within the wider context of online sexual harassment in South Korea.
A pattern of online harassment
South Korean feminist researchers have documented how digital platforms have become staging grounds for gender-based conflict, with coordinated campaigns targeting women who speak out on feminist issues. Social media amplifies these dynamics, creating what researchers call "networked misogyny."
South Korea, home to the radical feminist 4B movement (which stands for four kinds of refusal toward men: no dating, no marriage, no sex and no children), offers an early example of the intensified gender-based conflict now commonly seen online worldwide. According to journalist Hawon Jung, the misogyny and abuse that Iruda exposed were products of existing social tensions and outdated legal frameworks that failed to address online sexism. Jung has written extensively about the decades-long battle to prosecute those who use hidden cameras and distribute revenge pornography.
Beyond privacy: The human cost
Of course, Iruda was just one case. Many other incidents have shown how unchecked and inadequately moderated applications such as AI chatbots can become tools for harassment and abuse.
These include Microsoft's Tay.ai in 2016, which was manipulated by users into posting hateful and racist comments. More recently, a custom chatbot on Character.AI was linked to a teenager's suicide.
Chatbots are uniquely positioned to extract deeply private information from their users, because they are presented as likeable characters that feel ever more human as the technology develops.
These endearing and friendly AI characters exemplify what technology scholars Neda Atanasoski and Kalindi Vora call the logic of "surrogate humanity", in which AI systems are designed to stand in for human interaction but end up amplifying existing social inequalities.
AI ethics
In South Korea, Iruda's shutdown sparked a national conversation about AI ethics and data rights. The government responded by creating new AI guidelines and fining Scatter Lab 103 million won (US$71,000).
Korean legal scholars Chea Yun Jung and Kyun Kyong Joo note that these measures focused more on promoting self-regulation within the tech industry than on addressing deeper structural problems. They did not address how deep learning systems can be exploited by predatory users to spread gender-based hatred and misogynistic beliefs.
Ultimately, treating AI regulation as merely a business issue is not enough. Feminist and community-based perspectives are necessary for holding technology companies accountable, given the way these chatbots extract personal data and build relationships with human users.
Since the incident, Scatter Lab has collaborated with researchers to demonstrate the benefits of chatbots.
![](https://i0.wp.com/images.theconversation.com/files/645322/original/file-20250128-15-7vai53.jpg?w=780&ssl=1)
In Canada, the proposed Artificial Intelligence and Data Act and Online Harms Act are still being shaped, and the threshold for what constitutes a "high-impact" AI system remains undefined.
Canadian policymakers must adopt frameworks that both safeguard innovation and prevent systemic abuse by developers and malicious users. This means developing clear rules on data consent, implementing safeguards against misuse and establishing meaningful accountability standards.
These considerations will only become more important as AI grows more prevalent in daily life. The Iruda case shows that AI regulation must consider not only technical specifications but also the very real human consequences of these technologies.
Jul Parke is pursuing a PhD in media, technology and culture at the University of Toronto.
This article is republished from The Conversation under a Creative Commons license. Read the original article.