A Nobel nod to AI godfathers who made machines learn

You have a lot of scientists, mathematicians and engineers to thank if your jaw dropped as you watched the latest AI-generated videos, your bank balance was saved from criminals by a fraud detection system, or your day was made a little easier because you were able to dictate a text message on the go.

But two names, Princeton University scientist John Hopfield and University of Toronto computer scientist Geoffrey Hinton, stand out for their foundational contributions to the deep learning technology that makes those experiences possible.

The two researchers received the Nobel Prize in Physics on October 8, 2024, for their groundbreaking work in the field of artificial neural networks. Although artificial neural networks are modeled on biological neural networks, both researchers’ work drew on statistical physics, hence the prize in physics.

The Nobel Committee announces the 2024 Nobel Prize in Physics. Photo: Atila Altuntas / Anadolu via Getty Images via The Conversation

How a neuron computes

Artificial neural networks have their origins in the study of biological neurons in living brains. In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts proposed a simple model of how a neuron works. In the McCulloch-Pitts model, a neuron is connected to its neighboring neurons and can receive signals from them. The neuron can then combine those signals to send signals to other neurons.

But there is a twist: The neuron can weigh signals coming from different neighbors differently. Imagine that you are trying to decide whether to buy a new smartphone. You talk to your friends and ask them for their recommendations.

A simple strategy is to collect all of your friends’ recommendations and go with whatever the majority says. For example, you ask three friends, Alice, Bob and Charlie, and they say yay, yay and nay, respectively. Since you have two yays and one nay, you decide to buy the phone.

However, you may trust some friends more because they have deep knowledge of technical gadgets, so you might decide to give their recommendations more weight. For example, if Charlie is especially knowledgeable, you might count his nay three times. Now the nays outnumber the yays, and you decide not to buy the phone.

If you’re unlucky enough to have a friend whose judgment on technical gadgets you completely distrust, you might even assign them a negative weight. Then their yay counts as a nay, and their nay counts as a yay.

Once you’ve made your own decision about whether the new phone is a good choice, other friends can ask you for your recommendation. Similarly, in both artificial and biological neural networks, neurons can aggregate signals from their neighbors and send a signal to other neurons.
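
To make the analogy concrete, here is what such a weighted-vote neuron looks like in a few lines of Python. The advice values, weights and threshold are invented purely for illustration:

    def neuron(signals, weights, threshold=0.0):
        """Fire (return 1) if the weighted sum of incoming signals clears the threshold."""
        total = sum(s * w for s, w in zip(signals, weights))
        return 1 if total >= threshold else 0

    # Alice and Bob say yay (+1); expert Charlie says nay (-1) and is counted three times.
    advice = [1, 1, -1]
    weights = [1.0, 1.0, 3.0]
    print(neuron(advice, weights))  # prints 0: Charlie's weighted nay outvotes the two yays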

This capability leads to a crucial distinction: Is there a cycle in the network? For example, if I ask Alice, Bob and Charlie today, and tomorrow Alice asks me for my recommendation, then there is a cycle: from Alice to me, and from me back to Alice.

In recurrent neural networks, neurons communicate back and forth rather than in only one direction. Image: Zawersh/Wikimedia, CC BY-SA

If the connections between neurons don’t form a cycle, then computer scientists call the network a feedforward neural network. The neurons in a feedforward network can be arranged in layers.

The first layer consists of the inputs. Each layer sends its signals to the next layer, and so on. The final layer represents the outputs of the network.

However, if there is a cycle in the network, computer scientists call it a recurrent neural network, and the arrangements of neurons can be more complicated than in feedforward neural networks.
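
Here is a rough Python sketch of the feedforward arrangement, with weights chosen at random purely for illustration. Signals flow from the input layer through a hidden layer to the output and never travel backward:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, W):
        return np.tanh(W @ x)  # each neuron weighs its inputs and fires

    x = np.array([0.5, -1.0, 2.0])  # first layer: the inputs
    W1 = rng.normal(size=(4, 3))    # weights from the input layer to a hidden layer
    W2 = rng.normal(size=(1, 4))    # weights from the hidden layer to the output layer

    hidden = layer(x, W1)           # signals move forward, layer by layer
    output = layer(hidden, W2)      # final layer: the network's output
    print(output)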

Hopfield network

Biology served as the initial source of artificial neural networks ‘ inspiration, but soon other fields began to influence their development. These included logic, mathematics and physics.

The physicist John Hopfield studied a particular type of recurrent neural network, now called the Hopfield network. In particular, he studied its dynamics: What happens to the state of the network over time?

Such dynamics are also important when information spreads through social networks. Everyone is familiar with memes going viral and echo chambers forming in online social networks. These are all collective phenomena that ultimately arise from simple information exchanges between the people in the network.

Hopfield was a pioneer in using models from physics, especially those developed to study magnetism, to understand the dynamics of recurrent neural networks. He also showed that their dynamics can give such neural networks a form of memory.
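
Here is a minimal sketch of that memory function, assuming the standard textbook formulation of a Hopfield network, with Hebbian weights and threshold updates, rather than anything specific to Hopfield’s original papers. Starting from a corrupted pattern, the network’s dynamics pull the state back to the stored original:

    import numpy as np

    # Stored patterns are vectors of +1/-1 values.
    patterns = np.array([
        [ 1, -1,  1, -1,  1, -1],
        [ 1,  1,  1, -1, -1, -1],
    ])

    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)    # Hebbian rule: strengthen links between co-active neurons
    np.fill_diagonal(W, 0)     # no neuron connects to itself

    def recall(state, steps=10):
        """Repeatedly update neurons until the state settles into a stored pattern."""
        state = state.copy()
        for _ in range(steps):
            for i in range(n):  # update one neuron at a time
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    noisy = np.array([1, -1, -1, -1, 1, -1])  # first pattern with one entry flipped
    print(recall(noisy))                      # recovers [ 1 -1  1 -1  1 -1]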

Boltzmann machines and backpropagation

During the 1980s, Geoffrey Hinton, computational neuroscientist Terrence Sejnowski and others extended Hopfield’s ideas to create a new class of models called Boltzmann machines, named for the 19th-century physicist Ludwig Boltzmann.

As the name suggests, the design of these models is rooted in the statistical physics pioneered by Boltzmann. Unlike Hopfield networks, which could store patterns and correct errors in patterns, much as a spellchecker does, Boltzmann machines could generate new patterns, thereby planting the seeds of the modern generative AI revolution.
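
As a highly simplified illustration of the generative idea, the sketch below draws samples from a tiny restricted Boltzmann machine, a later, streamlined variant of the model. The weights are random rather than trained, so the point is only the mechanism: bouncing stochastic signals back and forth between two groups of neurons produces new binary patterns:

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.5, size=(6, 3))  # visible-to-hidden weights, random for illustration

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    v = rng.integers(0, 2, size=6)  # start from a random binary pattern
    for _ in range(50):             # Gibbs sampling: alternate between the two layers
        h = (rng.random(3) < sigmoid(W.T @ v)).astype(int)  # sample hidden units
        v = (rng.random(6) < sigmoid(W @ h)).astype(int)    # sample a new visible pattern
    print(v)                        # a freshly generated pattern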

Hinton was also part of another breakthrough that happened in the 1980s: backpropagation. If you want artificial neural networks to perform interesting tasks, you have to somehow choose the right weights for the connections between artificial neurons.

Backpropagation is a key algorithm that makes it possible to select those weights based on the network’s performance on a training dataset. However, training artificial neural networks with many layers remained challenging.
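
Backpropagation itself works on multilayer networks, but its core idea, repeatedly nudging the weights downhill to reduce the error on the training data, can be sketched with a single linear neuron. All numbers here are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))       # training inputs
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w                      # training targets

    w = np.zeros(3)                     # start with arbitrary weights
    for _ in range(200):
        err = X @ w - y                 # how wrong the neuron is on the training data
        grad = X.T @ err / len(y)       # gradient of the mean squared error
        w -= 0.1 * grad                 # nudge the weights to reduce the error
    print(w)                            # close to true_w after training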

In the 2000s, Hinton and his coworkers figured out how to use Boltzmann machines to train multilayer networks by first pretraining the network layer by layer and then using another fine-tuning algorithm on top of the pretrained network to further adjust the weights. Multilayer networks were rechristened deep networks, and the deep learning revolution had begun.
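
Schematically, that recipe looks something like the sketch below. The pretraining step here is a crude, one-step contrastive-divergence-style update standing in for the real Boltzmann machine training Hinton and his coworkers used, and the final fine-tuning pass is omitted:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    def pretrain_layer(data, n_hidden, steps=100, lr=0.05):
        """Learn one layer's weights so its hidden units model the layer's input."""
        W = rng.normal(scale=0.1, size=(data.shape[1], n_hidden))
        for _ in range(steps):
            h = sigmoid(data @ W)       # upward pass
            recon = sigmoid(h @ W.T)    # downward reconstruction
            h2 = sigmoid(recon @ W)
            W += lr * (data.T @ h - recon.T @ h2) / len(data)
        return W

    X = rng.integers(0, 2, size=(200, 8)).astype(float)

    # Greedy layer-by-layer pretraining: each layer learns from the previous one's output.
    weights, inp = [], X
    for n_hidden in (6, 4):
        W = pretrain_layer(inp, n_hidden)
        weights.append(W)
        inp = sigmoid(inp @ W)

    # The pretrained weights would then be fine-tuned on a labeled task (omitted here).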

AI pays it back to physics

This Nobel Prize in physics shows how ideas from physics contributed to the rise of deep learning.

Now deep learning has begun to repay its debt to physics by enabling fast and accurate simulations of everything from molecules and materials to the Earth’s climate.

By awarding the Nobel Prize in physics to Hopfield and Hinton, the prize committee has signaled its belief in humanity’s ability to use these discoveries to advance human development and build a sustainable world.

Ambuj Tewari is professor of statistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.