Indonesia at forefront of Asia’s AI hopes and fears

The recent Global Public Opinion on Artificial Intelligence (GPO-AI) survey revealed that 66% of Indonesians are concerned about the misuse of artificial intelligence (AI), compared to a global average of 49%.

Indonesia has a lively political scene, a vibrant tech startup ecosystem, and heavy social media use, all of which present risks when it comes to AI.

But Indonesia has the regulatory tools to mitigate those risks while seizing the opportunities, if policymakers, industry, and civil society work together effectively to address the public’s concerns.

Notably, policymakers have shown a willingness to address AI. Last year, the Ministry of Communications and Information (Kominfo) published Circular No. 9 of 2023 on the ethical use of AI, and further policy may be on the way following the launch of an AI readiness assessment with UNESCO. This year, Indonesia joined other ASEAN member states in endorsing the ASEAN Guide on AI Governance.

Instead of waiting until a stand-alone AI regulation is debated, passed, and resourced, Indonesia should use all the regulatory tools at its disposal to address AI right away, and strengthen those tools in the process.

Most recently, Indonesia passed the Personal Data Protection Law (PDP Law) in 2022. Although the law and its institutions are still new, identifying global best practices and putting them into practice early offers an opportunity to strengthen them.

This will be no simple task, as the basic machinery of the law itself still needs to be implemented and outstanding issues addressed, as some academics have warned.

For instance, countries with robust privacy laws were among the first to address generative AI businesses. Italy’s data protection authority placed a temporary ban on OpenAI’s popular ChatGPT in 2023, citing a lack of age verification procedures and insufficient detail about the processing of personal data used to train the AI.

Across Asia, too, countries with established data protection rules are addressing AI. New Zealand, Australia, Singapore and South Korea have dealt with issues as varied as automated decision-making, biometric identification, and guidance for companies using generative AI on mitigating privacy risks.

Indonesia would do well to ensure its newly developed privacy regime is integrated and harmonized both locally and globally, and that it addresses AI issues. Indonesia’s intellectual property (IP) laws also offer an opportunity to address the public’s concerns about the misuse of AI.

Indonesia’s growing number of AI businesses could be given guidance on the potential for copyright infringement when using copyrighted data to train AI. Authorities can also clarify whether and to what extent generative AI output can be protected by copyright, guiding creatives in their use of generative AI tools.

Strong anti-piracy and anti-counterfeiting rules can also help address the public’s concerns about AI. Taking down potentially copyright-infringing AI works or addressing the AI-fueled online promotion of counterfeit goods will help increase public confidence in AI.

One area of IP law that should be addressed in Indonesia, and indeed worldwide, is the right of publicity, or personality rights – which protect a person’s name, image and voice from commercial misuse. Recent news highlights the urgency, such as when OpenAI used a voice that resembled Scarlett Johansson’s in a recent version of ChatGPT.

Deepfakes are particularly troubling, touching on the trust and safety issues that underlie many rule-of-law and democracy concerns. In elections earlier this year, Indonesia was at the forefront of the use of generative AI. Some examples were benign, such as a deepfake of a dancing politician, but others were questionable, such as deepfakes resurrecting deceased politicians.

Indonesia’s experience should be studied, and recommendations and best practices widely shared, not just for the country’s own sake but for that of other democracies as well.

Around the world, rules are being proposed to shield democracies from deepfakes. For example, India’s Election Commission recently circulated guidelines directing political parties to adhere to existing rules and not create or spread harmful deepfakes. In the US, legislators have proposed a bill that would require clear disclosure of deepfake video or audio in political advertising.

Beyond election integrity, regulators addressing online safety can look to best practices globally. The Online Safety Regulators Network, a group of eight independent regulators from around the world, recently published guides on addressing human rights issues in online safety and on creating regulatory coherence for deepfakes in particular.

Tapping into these networks will be crucial for Indonesia’s civil society engaged in digital rights issues as well as for government policymakers. &nbsp,

Indonesians are concerned about the misuse of AI, but they are also among the most optimistic about it globally. Stanford’s AI Index Report 2024 reveals that 78% of Indonesians believe AI services and tools have more benefits than risks – the highest share of the 31 countries surveyed.

Indonesia can draw on its cautious optimism and experience, and make use of the laws already in place, to establish a vibrant and ethical AI ecosystem that can serve as a model for other countries.

Seth Hays is an attorney and managing director of APAC GATES, a Taipei-based rights consultancy. He also leads the non-profit Digital Governance Asia Center of Excellence, which is dedicated to sharing policy best practices across Asia.