AI fueling a deepfake porn crisis in South Korea – Asia Times

Deepfake porn, a damaging AI byproduct that has been used to target anyone from Taylor Swift to American schoolgirls, is a difficult topic to discuss.

However, a recent report from startup Security Heroes revealed that 53% of the deepfake porn videos it analyzed from various sources featured South Korean singers and actresses, suggesting this group is disproportionately targeted.

So what is behind South Korea's deepfake problem? And what can be done about it?

Deepfakes are digitally altered voice, video or photo files that convincingly show someone saying or doing something they never did. Deepfakes are becoming increasingly common among South Korean teenagers, to the point that some even treat them as a prank. And they don't just target celebrities.

On Telegram, group chats have been created for the specific purpose of engaging in image-based sexual abuse of women, including middle-school and high-school students, teachers and family members. Women who have their photos on social media platforms such as Facebook, Instagram and KakaoTalk are also frequently targeted.

The perpetrators use AI tools to create the fake images, which are then sold or indiscriminately disseminated, along with victims' social media accounts, phone numbers and KakaoTalk usernames. One Telegram group attracted some 220,000 members, according to a Guardian report.

Lack of awareness

Despite the serious harm inflicted on victims of gender-based violence in South Korea, the problem still receives too little attention.

South Korea has experienced rapid technological advancement in recent years. It has among the highest rates of internet connectivity in the world and is frequently cited as having the highest smartphone ownership. Some jobs, including those in restaurants, manufacturing and public transportation, are quickly being replaced by robots and AI.

However, as Human Rights Watch points out, the nation's efforts to achieve gender equality and other human rights standards have not kept pace with its technological development. Research has also demonstrated that technological advancement can actually make gender-based violence more prevalent.

Digital sex crimes against children and adolescents in South Korea have been a serious problem since the "Nth Room" case in particular. In that case, around 260,000 people participated in sharing exploitative and coercive intimate content, which involved hundreds of young victims, many of whom were minors.

The case sparked widespread outcry and calls for stronger protections. It also led to more stringent provisions in the 2020 Act on Special Cases Concerning the Punishment of Sexual Crimes.

However, according to the Supreme Prosecutors' Office, only 28% of the 17,495 digital sex crime suspects in 2021 were indicted, highlighting the ongoing difficulty of addressing digital sex crimes.

In 2020, the Ministry of Justice's Digital Sexual Crimes Task Force proposed about 60 legal provisions, which have still not been adopted. The task force was disbanded soon after Yoon Suk Yeol's government took office in 2022.

During the 2022 presidential election, Yoon said there is no structural gender discrimination in South Korea and pledged to dismantle the Ministry of Gender Equality and Family, the main department responsible for preventing gender-based violence. The ministry has been without a minister since February of this year.

Could technology also be the answer?

However, South Korea also provides evidence that AI is not always harmful. A digital sexual violence support center run by the Seoul Metropolitan Government has created a tool that continuously tracks, archives and deletes deepfake images and videos.

The 2024 UN Public Administration Prize-winning technology has reduced the time it takes to find deepfakes from an average of two days to three minutes. But while such efforts can help minimize further damage from deepfakes, they are unlikely to be a comprehensive solution, as the effects on victims can be long-lasting.

For meaningful change to occur, the state needs to hold service providers, such as social media platforms and messaging apps, accountable for ensuring user safety.

On August 30, the South Korean government announced plans to push for legislation criminalizing the purchase, possession and viewing of deepfakes in the country.

Until deepfakes in South Korea are recognized as a harmful form of gender-based violence, policies and enforcement may continue to fall short. A multifaceted approach will be needed to address the deepfake problem, including stronger laws, reform and education.

In addition to raising awareness of gender-based violence, South Korean authorities should focus on supporting victims as well as developing proactive policies and educational programs to stop the violence in its tracks.

Sungshin (Luna) Bae is a PhD candidate at Monash University and a special public officer for gender equality at the Supreme Prosecutors' Office in South Korea.

This article was republished from The Conversation under a Creative Commons license. Read the original article.