Philosophy is crucial in the age of AI – Asia Times

Advances in science and engineering have always both impressed and frightened us. No doubt they will continue to do so.

OpenAI recently announced that it anticipates “superintelligence” – AI surpassing human abilities – this decade. It is accordingly assembling a new team and devoting 20% of its computing resources to ensuring that the behavior of such AI systems will be aligned with human values.

It seems they don’t want rogue artificial superintelligences waging war on humanity, as in James Cameron’s 1984 science fiction thriller, The Terminator (ominously, Arnold Schwarzenegger’s Terminator is sent back in time from 2029). And they are recruiting top machine-learning researchers and engineers to help them tackle the problem.

But might philosophers have something to contribute? What can be expected of this age-old discipline in the new, technologically advanced era now emerging?

To start, it is worth pointing out that philosophy has been a key component of AI’s development since its inception. One of the first AI success stories was a 1956 computer program, dubbed the Logic Theorist, created by Allen Newell and Herbert Simon.

Its job was to prove theorems using propositions from Principia Mathematica, the 1910 work by the philosophers Alfred North Whitehead and Bertrand Russell, whose goal was to place all of mathematics on a single logical foundation.

Indeed, AI’s early focus on logic owed a great deal to the foundational debates pursued by mathematicians and philosophers.

One significant step was the development of modern logic in the late 19th century by the German philosopher Gottlob Frege. Frege introduced into logic the use of quantifiable variables, rather than just names of objects such as people.

His approach made it possible to say not only, for example, “Joe Biden is president” but also to systematically express such general thoughts as “there exists an X such that X is president”, where “there exists” is a quantifier and “X” is a variable.
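In today’s standard first-order notation (a modern rendering, not Frege’s own two-dimensional script), the two sentences above can be written as:

```latex
% "Joe Biden is president": a predicate applied to a named individual
\mathrm{President}(\mathit{biden})

% "There exists an X such that X is president": the same predicate,
% with the named individual replaced by a variable bound by a quantifier
\exists x \; \mathrm{President}(x)
```

It is this move – letting a bound variable stand where a name would go – that lets logic express fully general claims rather than claims about particular individuals.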

Other significant figures of the 1930s were the Polish logician Alfred Tarski, with his proof of the “indefinability of truth”, and the Austrian-born logician Kurt Gödel, whose theorems of completeness and incompleteness concern the limits of what can be proven.

The former showed that “truth” in any standard formal system cannot be defined within that system itself, so that arithmetical truth, for example, cannot be defined within the system of arithmetic.

Lastly, Alan Turing’s abstract conception of a computer, dating from 1936, greatly influenced early AI.

It might be said, however, that even if such good old-fashioned symbolic AI was indebted to high-level philosophy and logic, the “second-wave” AI, based on deep learning, derives more from the concrete engineering feats associated with processing vast quantities of data.

Still, philosophy has played a role here too. Take large language models, such as the one that powers ChatGPT, which produces conversational text. They are enormous models, with billions or even trillions of parameters, trained on vast datasets (typically comprising much of the internet).

At bottom, what they do is track – and exploit – statistical patterns of language use. Something like this idea was articulated by the Austrian philosopher Ludwig Wittgenstein in the middle of the 20th century: “the meaning of a word”, he said, “is its use in the language”.
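As a toy illustration of what “tracking statistical patterns of language use” means at the smallest possible scale – this is emphatically not how an LLM is built, which involves neural networks with billions of parameters – consider simply counting which word most often follows which in a corpus:

```python
from collections import Counter, defaultdict

# A tiny "corpus": Wittgenstein's own dictum.
corpus = "the meaning of a word is its use in the language".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("of"))   # the word most often seen after "of"
```

Scaled up by many orders of magnitude, and with neural networks in place of raw counts, predicting the next word from statistics of prior usage is the core task on which large language models are trained.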

But contemporary philosophy, and not just its history, is relevant to AI and its development. Could an LLM truly understand the language it processes? Might it achieve consciousness? These are deeply philosophical questions.

Science has so far failed to fully explain how consciousness arises from the cells of the human brain. Some philosophers even believe that this is such a “hard problem” that it is beyond the scope of science, and may require a helping hand from philosophy.

In a similar vein, we can ask whether an image-generating AI could be truly creative. Margaret Boden, the British cognitive scientist and philosopher of AI, argues that while AI will be able to produce new ideas, it will struggle to evaluate them as creative people do.

She also anticipates that only a hybrid (neural-symbolic) architecture, which incorporates both deep learning from data and logical techniques, will produce artificial general intelligence.

Human values

When we asked ChatGPT about the role of philosophy in the era of AI, it suggested that philosophy “helps ensure that the development and use of AI are aligned with human values.”

In this spirit, perhaps we may be permitted to suggest that AI alignment is not just a technical issue for engineers or tech companies, but also a social one. That will require input from philosophers, but also from social scientists, lawyers, policymakers, citizen users and others.

Apple Park, the corporate headquarters of Apple Inc in Silicon Valley. Some philosophers are critical of the tech sector. Photo: iwonderTV / Shutterstock via The Conversation

Indeed, many people are concerned about the growing power and influence of tech companies, and its impact on democracy. Some argue that we need a whole new way of thinking about AI, one that takes into account the underlying systems supporting the sector.

For instance, the British barrister and author Jamie Susskind has argued that a “digital republic” is necessary, one that ultimately rejects the very political and economic system that has given tech companies such a powerful influence.

Finally, let us briefly ask how AI will affect philosophy. Formal logic in philosophy actually dates to Aristotle’s work in antiquity. In the 17th century, the German philosopher Gottfried Leibniz suggested that we might one day have a “calculus ratiocinator”, a calculating machine that would help us derive answers to philosophical and scientific questions in a quasi-oracular fashion.

Perhaps we are now beginning to realize that vision, with some authors advocating a “computational philosophy” that literally encodes assumptions and derives consequences from them. This ultimately allows factual and/or value-oriented assessments of the outcomes.
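A minimal sketch of what “encoding assumptions and deriving consequences” can mean in practice – the facts and rules below are invented for illustration, not drawn from any particular project – is forward chaining over a small rule base:

```python
# Each rule pairs a set of premise facts with a conclusion fact.
# Hypothetical example: the classic syllogism about Socrates.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def derive(facts, rules):
    """Forward-chain: repeatedly apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(derive(facts, rules)))
```

Once assumptions are made explicit in this machine-readable way, their consequences can be computed exhaustively and then assessed, which is the workflow computational philosophy envisages.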

The PolyGraphs project, for instance, simulates the effects of information sharing on social media. This can then be used to computationally address questions about how we ought to form our opinions.

Certainly, progress in AI has given philosophers plenty to think about; it may even have begun to provide some answers.

Brian Ball is an associate professor of philosophy at Northeastern University London, and Anthony Grayling is a professor of philosophy at Northeastern University London.

The Conversation has republished this article under a Creative Commons license. Read the original article.