The United Nations Secretary-General’s High-level Advisory Body on Artificial Intelligence (AI) has released its final report on governing AI for humanity.
The document lays out a strategy for maximising the potential of AI while addressing its dangers. It also urges all governments and stakeholders to work together in governing AI to foster its development and to protect human rights.
The document appears to be a positive step forward for AI, promoting both ongoing development and the mitigation of potential harms. Nevertheless, the finer details of the report highlight a number of issues.
Reminiscent of the IPCC
The UN advisory body on AI was first established on October 26, 2023. Its goal is to make recommendations for the international governance of AI.
It says this approach is needed to ensure the benefits of AI, such as opening new areas of scientific inquiry, are equitably distributed, while the risks of this technology, such as mass surveillance and the spread of misinformation, are mitigated.
The advisory body is made up of 39 members from a range of regions and professional sectors. Among them are industry representatives from Microsoft, Mozilla, Sony, Collinear AI and OpenAI.
The body is similar to the UN’s Intergovernmental Panel on Climate Change (IPCC), which aims to provide important input into international climate change negotiations.
The inclusion of prominent industry figures on the AI advisory body is a significant difference from the IPCC. This may have benefits, such as a more informed understanding of AI systems. But it may also have drawbacks, such as viewpoints skewed in favour of commercial interests.
The recent release of the final report on governing AI for humanity offers valuable insight into what we can likely expect from this body.
What’s in the report?
The final report on governing AI for humanity follows an interim report released in December 2023. It makes seven recommendations for addressing gaps in current AI governance arrangements.
These include the establishment of a global AI data framework, the creation of an independent international scientific panel on AI, and the development of an AI standards exchange. The report concludes with a call to action for all governments and relevant stakeholders to collectively govern AI.
What is troubling are the dismissive and at times contradictory statements made throughout the document. For instance, the report makes sound recommendations for governance to address AI’s impact on concentrated power and wealth, and the political and economic ramifications that follow.
But it also claims that:
No one now has the knowledge to entirely control AI’s outputs or its evolution.
In some respects, this statement is not technically accurate. There are “black box” systems, in which the inputs and outputs are known but the computation that connects them is not. Beyond these, however, AI technologies are generally well understood.
AI encompasses a spectrum of capabilities. This spectrum includes deep learning systems such as facial recognition, as well as generative AI systems such as ChatGPT. It is inaccurate to suggest all of these technologies share the same level of impenetrable complexity.
The inclusion of this claim raises doubts about the benefits of having industry representatives on the advisory body, who should be providing a more informed understanding of AI technologies.
Another problem this statement raises is the notion of AI evolving of its own accord. What has been notable about the rise of AI in recent years are the accompanying narratives that falsely portray AI systems as autonomous.
This false narrative creates a convenient scapegoat for industry, shifting perceptions of responsibility and liability away from those who design and develop these systems.
Despite the subtle undertones of helplessness in the face of AI systems and the inconsistent claims made throughout, the report does in some ways move the conversation forward positively.
A small step forward
Despite contradictory claims made throughout the report that suggest otherwise, the document and its call to action are a positive step forward.
The use of the term “hallucinations” is a notable example of these contradictions.
The term gained popularity when OpenAI chief executive Sam Altman used it to reframe nonsensical outputs as part of the “magic” of AI.
“Hallucinations” is not a technically accepted term; it is a creative marketing ploy. It is not productive to push for the governance of AI while also endorsing a term that implies the technology cannot be governed.
What the document lacks is consistency in how AI is viewed and understood.
It also lacks application specificity, a common stumbling block for many AI initiatives. A holistic approach to AI governance will only succeed if it can account for the specificities of each application and domain.
The document represents a positive step forward. But it will need refinement and modification if it is to foster development while mitigating the many harms of AI.
Zena Assaad is a Senior Lecturer, School of Engineering, Australian National University
This article is republished from The Conversation under a Creative Commons license. Read the original article.