
February 7, 2023

The Thermodynamic Limits on Intelligence: Q&A with David Wolpert
Calculating the energy needed to acquire and compute information could help explain the (in)efficiency of human brains and guide the search for extra-terrestrial intelligence.
by Miriam Frankel
FQXi Awardees: David Wolpert
March 25, 2022

Searching For Signs of Intelligence
Credit: sdecoret, Shutterstock
The question of just what exactly distinguishes living matter from dead lumps of atoms, such as rock, is one of the greatest mysteries of physics. Even our best theories can’t really describe life, agency, consciousness or intelligence. But, with the help of an FQXi grant of over $118,000, David Wolpert, a physicist at the Santa Fe Institute, New Mexico, is trying to crack the puzzle with information theory and statistical physics. He wants to understand what constitutes and limits intelligence—defined as the ability to acquire information from surroundings and use it for computation in order to stay alive. The findings may one day help explain why human brains aren’t more efficient—and how to best search for intelligent life in the universe.

You propose that intelligence is intimately tied to information gathering. As such, it makes sense that information theory, first developed by Claude Shannon in the 1940s, is a good approach to understanding it. But you are also combining this analysis with statistical physics, an approach normally used by physicists working on thermodynamics—the science of heat and energy transfer—and trying to consider the properties of large groups of atoms. What’s the benefit of considering intelligence in terms of statistical physics or thermodynamics? And what challenges come with applying it to living systems?

When it comes to life, it’s very natural to use statistical physics because you’re trying to generalize from many different biochemical systems—all of which are far from thermodynamic equilibrium, and rely on thermodynamic phenomena to stay that way; these are what are called ’non-equilibrium systems.’ Think for example of a cup of hot tea cooling down to the same temperature as its surroundings—a state of equilibrium—after which nothing changes. But if you look at everything around us—all the interesting systems—they’re not at equilibrium.
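The tea example follows Newton's law of cooling: the temperature difference to the surroundings decays exponentially until the system sits at equilibrium. A minimal sketch (the initial temperature, room temperature and rate constant are made-up illustrative values):

```python
import math

def tea_temperature(t, T_env=20.0, T0=90.0, k=0.05):
    """Temperature of the tea after t minutes, by Newton's law of cooling:
    T(t) = T_env + (T0 - T_env) * exp(-k * t)."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

# The tea relaxes toward room temperature and then stops changing:
print(round(tea_temperature(0), 1))    # 90.0 (fresh cup)
print(round(tea_temperature(60), 1))   # 23.5 (nearly there)
print(round(tea_temperature(600), 1))  # 20.0 (equilibrium: no further change)
```

Once the difference to the environment is gone, nothing more happens, which is exactly the sense in which an equilibrium system is uninteresting.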

Do you mean, for example, biological processes such as those responsible for pumping certain ions into our cells and thereby creating a higher concentration of such ions inside them compared to outside—a state out of equilibrium?

Yes, or, as another example, the biological process of a journalist interviewing a nerd over Zoom. None of these processes are in equilibrium. In fact, arguably, equilibrium means death. Certainly, equilibrium means no intelligence.

Fortunately though, there was actually a bit of a revolution in statistical physics, starting in the 21st century, with the birth of the field of non-equilibrium statistical physics, or "stochastic thermodynamics" as it is often called. This new field was not just complicated math built on the previous math. It was more like "wait, let’s look at this thing a little differently"—and then when you’ve done that, you’ve knocked open the piñata, and all this formalism falls out. It has basically started to explode. And it provides us with a way of doing statistical physics for evolving systems that are arbitrarily far away from thermal equilibrium.

So how can this help us understand intelligence?

When you’re getting "semantic information"—information which carries meaning for a given system—from the environment, you are outside of thermal equilibrium. If we were to stop that information from coming in, if we were to intervene in the dynamic process coupling the system to its environment, that system would relax to thermal equilibrium (Kolchinsky, A. & Wolpert, D. H. Interface Focus 8, 20180041 (2018)). Imagine you are a cell and you can go right or left—and there’s more food to your right and less food to your left. You probe in both directions to see where the amount of food increases. If you don’t do this, you’ll run out of food and die.
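The cell's food-probing strategy can be sketched as a toy gradient-climber. This is an illustrative sketch only, not the model from the cited paper; the function names and the simple left/right probing rule are assumptions made for the example:

```python
import random

def sense_and_move(position, food, n_steps=200, step=1, rng=random.Random(0)):
    """Toy gradient-climber: at each step, probe the food level one step to
    the left and one step to the right, then move toward the richer side
    (ties broken at random). `food` maps a position to a food concentration."""
    for _ in range(n_steps):
        left, right = food(position - step), food(position + step)
        if left == right:
            position += rng.choice([-step, step])
        else:
            position += step if right > left else -step
    return position

# A food gradient that peaks at x = 50: the probing cell climbs toward it.
peak = 50
food = lambda x: -abs(x - peak)
print(sense_and_move(0, food))  # ends at or near 50
```

The agent only survives (stays near the food peak) because it keeps acquiring information about its surroundings; cut off the probing and it wanders at random, the behavioral analogue of relaxing to equilibrium.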

You are trying to work out how much energy it takes to maintain intelligence in this way under various limiting constraints. Why?

Whenever you do a very simple computational operation, such as the very simple operations underlying the functioning of neurons in the brain, there’s a minimal thermodynamic cost, that is, a minimal amount of energy required. We know the minimum energetic cost of simply erasing one bit—0 or 1—of information is Boltzmann’s constant (which is tiny, on the order of 10^-23 joules per kelvin) multiplied by the temperature and the natural logarithm of 2 (roughly 0.69). This is teeny. Even if you scale it up to the human brain or to living systems, it’s insignificant. So why is it that with systems such as the human brain, where there are massive evolutionary incentives to reduce the cost of intelligence, the best we can do is get it down to the brain consuming 20% of all your calories? Or as another example, why is it that the digital computers we use, such as your phone and your Mac, use so much energy despite all these brilliant computer engineers trying to use as little energy as possible? Why is it that cells require so much energy to do the very, very simple kinds of intelligence that they do? It must be because you can’t actually get anywhere close to the tiny theoretical limit. There must be other constraints on how the system can behave that keep it from being able to get anywhere close to the theoretical minimum energetic cost.
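That back-of-the-envelope comparison can be checked directly. A minimal sketch of the Landauer bound, assuming a body temperature of 310 K and a ballpark brain power draw of 20 watts (both round figures chosen for illustration):

```python
import math

k_B = 1.380649e-23      # Boltzmann's constant, in joules per kelvin
T_body = 310.0          # approximate human body temperature, in kelvin
brain_power = 20.0      # rough brain power draw in watts (~20% of a resting body)

# Landauer's bound: minimum energy to erase one bit at temperature T.
landauer_per_bit = k_B * T_body * math.log(2)

# Bit erasures per second the brain could afford if it ran at the limit:
bits_at_limit = brain_power / landauer_per_bit

print(f"{landauer_per_bit:.2e} J per bit")            # ~2.97e-21 J
print(f"{bits_at_limit:.2e} bit erasures per second") # ~6.7e21 per second
```

Roughly 10^21 bit erasures per second for 20 watts: real neurons fall short of this by many orders of magnitude, which is exactly the gap between the theoretical limit and the constrained hardware that the interview is pointing at.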

What might those constraints be?

Your brain is constrained: whatever intelligent actions you take have to be built out of neurons, with all their noisy behavior. That’s a massive constraint. And digital engineers making your phone and your Mac are under the constraint that they have to use what’s called CMOS (complementary metal oxide semiconductor) technology. Those constraints must be what’s causing the energetic needs of intelligent systems to be so large. We know what these constraints are in many real systems. But for much less sophisticated systems, where we’re dealing with much simpler sets of more fundamental constraints, this relationship between energy, constraints and intelligence is still very, very complicated.

Can this help us better understand human intelligence?

We might gain much greater insights into living organisms here on Earth, as well as into the origins of life. If we can understand the relationship between the constraints, the energetics and intelligence, we might be able to understand questions such as "Why the hell can’t we have brains that are much less costly?" Evolution has great incentives to keep us as dumb as possible. So to understand the evolutionary history of humans—why, in our case, evolution kept going despite the costs—we need to learn about the thermodynamic characteristics, the energetic constraints and the energetic requirements of intelligent systems.

Could this help build better intelligent systems, such as AI?

Hopefully, it could one day help us build more energetically efficient ones.

What other insights might this line of research lead to?

If we could understand this, and the general physics of it, that would give us a way to be able to understand, for example, what we’re looking at when we send probes to the Jovian atmosphere. If there are living intelligent beings on Jupiter, they’re not going to be like us, made out of organic chemical molecules. They’ll be something else—something very strange. If there are creatures living in the photosphere of the sun—building huge civilizations there—they would not be like us. However, if we can find systems whose physics is the same as the physics that we identified for intelligent systems, then we’ll be in a better place to discover them.
