2024 Nobel physics prize goes to AI researchers?

U.S. scientist John Hopfield and British-Canadian Geoffrey Hinton won the 2024 Nobel Prize in Physics on Tuesday for discoveries and inventions in machine learning that paved the way for the artificial intelligence boom.

Heralded for its revolutionary potential in areas ranging from cutting-edge scientific discovery to more efficient admin, the emerging technology on which the duo worked has also raised fears humankind may soon be outsmarted and outcompeted by its own creation.

Am I missing the physics angle here? I am not saying these two are not worthy of recognition but it certainly does not seem like physics research to me.

Seems like computer science. Is there some sense here that I am not seeing?

The official press release gives more detail on what they consider the physics angle:

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin
[…]
Geoffrey Hinton […]used tools from statistical physics, the science of systems built from many similar components.

Beat me to it.
This isn’t the first Nobel Prize to go to an AI researcher. Herbert Simon of CMU got one, but in economics, not for his early AI work.

With the advent of Quantum Computers, I don’t think you can separate the two anymore.

ISTM if they invented a quantum computer then there is a physics angle. But using a quantum computer is now physics?

Interesting choice, which I admit had me scratching my head at first too. But from what I have read, the AI work of Hopfield and Hinton does draw on physics, especially statistical mechanics, in a pretty deep way. I asked ChatGPT about the links, and these are the first two parts of the answer, relevant to Hopfield’s and Hinton’s work:

1. Statistical Mechanics and Neural Networks

A particularly important connection between neural networks and theoretical physics comes from statistical mechanics, the branch of physics that deals with large systems of interacting particles. In neural networks, the neurons and their connections (synapses) can be thought of as analogous to particles in a physical system. The goal is to understand the collective behavior of these elements.

The most prominent example of this link is the Hopfield network (1982), a type of recurrent neural network. John Hopfield introduced a model where the activity of neurons (1s and -1s) evolves over time based on an energy function. This energy function is akin to the Hamiltonian used in physics to describe the energy state of a system of particles.

  • In a Hopfield network, the system seeks to minimize this energy function, converging to stable states that represent “memories” stored in the network.
  • This is mathematically similar to models in spin glasses from statistical mechanics, where spins (analogous to neurons) interact in a disordered system. Understanding how the system reaches equilibrium states and local minima was a shared problem between physics and neural networks.
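To make the analogy concrete, here is a minimal Hopfield-network sketch in plain Python (an illustration of the idea, not Hopfield’s original formulation or code — the pattern, network size, and sweep count are made up for the example). Weights come from the Hebbian outer-product rule, the energy function mirrors a spin-glass Hamiltonian, and asynchronous updates drive the state downhill in energy until it settles into a stored “memory”:

```python
# Minimal Hopfield-network sketch (illustrative; sizes and pattern invented).
# States are lists of +1/-1 "pixels"; asynchronous updates never raise the energy.

def train(patterns):
    """Hebbian outer-product rule: W[i][j] = (1/P) * sum_p p_i * p_j, no self-connections."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def energy(W, s):
    """E = -1/2 * sum_ij W_ij s_i s_j -- the spin-glass-style Hamiltonian."""
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(len(s)) for j in range(len(s)))

def recall(W, s, sweeps=5):
    """Asynchronous updates, one neuron at a time; each flip lowers (or keeps) E."""
    s = list(s)
    for _ in range(sweeps):
        for i in range(len(s)):
            field = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if field >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]   # the stored "memory"
W = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]                      # corrupt one "pixel"
restored = recall(W, noisy)
print(restored == pattern)                # → True
```

Flipping one pixel raises the energy; the update rule then walks the state back down into the energy minimum where the stored pattern sits, which is exactly the “converging to stable states” described above.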

This cross-pollination was furthered by the work of theoretical physicists like Haim Sompolinsky and David Amit, who applied tools from spin-glass theory to study neural network dynamics and memory capacity.

2. Boltzmann Machines and Thermodynamics

Another direct connection is in Boltzmann machines, which were introduced by Geoffrey Hinton and Terry Sejnowski in the mid-1980s. Boltzmann machines are probabilistic models of neural networks where neurons stochastically flip their states based on an energy function. The energy minimization principle governing Boltzmann machines is closely related to the laws of thermodynamics, particularly the Boltzmann distribution from statistical mechanics.

  • The training process in Boltzmann machines involves computing probabilities of different network states and updating the system so that it “learns” patterns in data. This is analogous to how physical systems tend to evolve toward states of lower free energy.
  • Annealing is a concept used in both physics and machine learning. Simulated annealing is a technique borrowed from metallurgy, where a system is allowed to “cool” slowly to find a global minimum in the energy landscape. In neural networks, this corresponds to finding optimal weights or configurations.
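The “cooling” idea in the second bullet can be shown with a toy simulated-annealing loop (an illustrative sketch, not Hinton’s actual training procedure; the 1-D energy landscape, temperatures, and schedule are all invented for the example). Uphill moves are accepted with the Boltzmann probability exp(−ΔE/T), and lowering T gradually lets the walker escape a shallow local minimum that would trap plain greedy descent:

```python
import math
import random

def energy(x):
    # Invented landscape: shallow local minimum near x = -1,
    # deeper global minimum near x = 2.
    return (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x

random.seed(42)
x = -1.0               # start inside the shallow local well
best = x               # lowest-energy state seen so far
T = 5.0                # initial "temperature"
for _ in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)
    dE = energy(candidate) - energy(x)
    # Downhill moves are always accepted; uphill moves sometimes, while T is high.
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = candidate
        if energy(x) < energy(best):
            best = x
    T *= 0.999         # slow "cooling" schedule

print(f"best x = {best:.2f}")   # should land in the deeper well near x = 2
```

At high T the walker crosses the barrier between the wells freely; as T shrinks, the exp(−ΔE/T) factor suppresses uphill moves and the state freezes into the deeper minimum, just as a slowly cooled metal settles into a low-energy crystal.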

Incidentally Hopfield, who I admit I had not heard of, has had one of the most remarkable interdisciplinary careers in science, with appointments in Physics, Chemistry, and Biology at elite universities. He apparently did important early work in quantum mechanics on something called the Hopfield dielectric. He then transitioned to molecular biology, where he did important work on how genetic processes like DNA replication achieve such high levels of accuracy. He has won prestigious physics prizes like the Dirac Medal, so it’s not a huge surprise that he has won a Physics Nobel.

Very meta to ask an AI about a Nobel Prize in Physics awarded to, essentially, AI researchers.

Yeah, though actually I just use ChatGPT all the time whenever I come across something interesting. It’s an incredible tool.

Incidentally Hinton said that when he first received the Nobel prize call he was at first skeptical but was reassured on hearing a bunch of Swedish voices. It occurred to me that the technology he helped create has made it pretty easy for anyone to spoof any kind of voice they want.

Given that its results can’t be trusted to be accurate, I recommend avoiding it for reliable results.

The AI Mafia strikes again! Half the Chemistry prize goes to Demis Hassabis and John Jumper for developing an AI model for predicting protein structure.

Here is how it will go:
2028: For the first time an AI, rather than an AI researcher, is awarded the Nobel
2033: AIs have won all the Nobel Prizes
2037: No human jury can keep up with all the research and the Nobel jury now consists of AIs
2043: The AI jury has given a few Nobels to humans to protect our feelings but it’s getting ridiculous and from now on every prize goes to an AI

That would be like someone winning the James Watt International Medal for mechanical engineering just because they used a mechanical pencil to write an economics paper.

Indeed.

But setting aside the issues of reliability and utility, I think this award is less about the suitability of generative AI as a candidate for the Nobel Prize in Physics than about the lack of fundamental advances in the foundations of physics, and even the halting progress in condensed matter physics, superconductivity, controlled nuclear fusion, et cetera. Despite all of the hype from the pop-sci press, there has been little fundamental innovation in our basic understanding of the physical nature of the universe, and in the last quarter century the Nobel Prize has been awarded either for work done decades prior, for confirmations of widely accepted phenomena such as supermassive black holes in active galactic nuclei, for broad “contributions” to a general field like cosmology or superfluidity, or for technological applications such as integrated circuits and laser physics. With a couple of debatable exceptions, the last really fundamental work in physics recognized by the Nobel committee was the 1999 prize to Gerard ‘t Hooft and Martinus Veltman for their work on the quantum structure of electroweak interactions. (One could make the argument for the 2004 prize to Gross, Politzer, and Wilczek, and perhaps the 2012 award to Haroche and Wineland, but the others are mostly awards for technological applications of already-known physics, or awards to a small number of contributors for much broader work in a field.)

We may be at a point where the synthesis and comprehension of physical theory and scientific observation are approaching a capability limit of human cognition, inspiration, and collaboration, and where some kind of more advanced generative “AI tools” will be necessary for real advances in physics (and systems biology and neuroscience, climate and weather projection, complexity theory, et cetera); indeed this is already occurring in tentative ways in many fields. LLMs and the neural network approach to machine cognition are still quite nascent, and currently a computationally brute-force approach to problem-solving, but they may very well be the first steps toward radical advances in foundational physics and in our ability to interpret and manipulate the natural world. Whether we will actually control such innovations is another question, but the award to Hinton and Hopfield is not without merit and foresight.

Stranger

Well, it’s a practical application of the laws of physics. I consider it a branch of physics, like astrophysics, biophysics, etc.

Incidentally, Hassabis was the lead AI designer of the famous god game Black & White and the designer of the political simulation game Republic: The Revolution. I actually remember discussing the latter on message boards more than 20 years back. It generated a lot of excitement, though the actual game was generally considered a letdown.

I wonder if Hassabis is the first significant game developer to win a Nobel Prize. It’s rather remarkable that he spent his early career designing games, got a PhD in cognitive neuroscience, and ended up with a Nobel Prize in Chemistry.

It seemed like half the content at the latest ACS national meeting was AI-related. It mostly looked like useless crap in the same vein as what I’ve heard my entire career.