
CATEGORY: Blog
TOPIC: A mathematical philosophy - a digital view

Blogger Joselle Kehoe wrote on Jan. 13, 2015 @ 20:34 GMT
I’ve become fascinated with Gregory Chaitin’s exploration of randomness in computing and his impulse to bring these observations to bear on physical, mathematical, and biological theories. His work inevitably addresses epistemological questions – what it means to know, to comprehend – and leads him to move (as he says in a recent paper) in the direction of “a mathematical approach to philosophical questions.” I do not have much expertise in computing (but do not assume the same about my readers) and so I won't try to clarify the formal content of his papers. However, the path Chaitin follows is from Leibniz to Hilbert to Gödel and Turing. With his development of algorithmic information theory, he has studied the expression of information in a program, and formalized an expression of randomness.

The paper to which I referred above, Conceptual Complexity and Algorithmic Information, is from this past June. As is often the case, Chaitin begins with Leibniz:

"In our modern reading of Leibniz, Sections V and VI both assert that the essence of explanation is compression. An explanation has to be much simpler, more compact, than what it explains."

The idea of ‘compression’ has been used to talk about how the brain interprets what one might call repeated sensory information, like the visual attributes of faces. Language itself has been described as cognitive compression. Chaitin reminds us of the Middle Ages’ search for a perfect language that would give us a way to analyze the components of truth, and he suggests that Hilbert’s program was a later version of that dream. And while Hilbert’s program to find a complete formal system for all of mathematics failed, Turing had an idea that has provided a different grasp of the problem. For Turing,

"there are universal languages for formalizing all possible mathematical algorithms, and algorithmic information theory tells us which are the most concise, the most expressive such languages."

Compression is happening in the search for ‘the most concise.’ Chaitin then defines conceptual complexity, which is at the center of his argument. The conceptual complexity of an object X is

"...the size in bits of the most compact program for calculating X, presupposing that we have picked as our complexity standard a particular fixed, maximally compact, concise universal programming language U. This is technically known as the algorithmic information content of the object X denoted H(X)…In medieval terms, H(X) is the minimum number of yes/no decisions that God would have to make to create X."
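The quantity H(X) is uncomputable in general: no algorithm can return the shortest program for an arbitrary X. But any lossless compressor yields an upper bound on it, which is how the idea is usually put to work in practice. Here is a minimal sketch in Python, using zlib as a stand-in compressor; the function name and the test strings are illustrative, not from Chaitin's paper:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Upper bound, in bits, on the algorithmic information content H(X).

    The true H(X) -- the length of the shortest program printing X -- is
    uncomputable, but the output size of any lossless compressor bounds
    it from above (up to the compressor's fixed overhead)."""
    return 8 * len(zlib.compress(data, level=9))

# A highly regular string compresses far below its raw 80,000 bits...
regular = b"ab" * 5000
# ...while (pseudo)random bytes barely compress at all.
random_ish = os.urandom(10000)

print(complexity_upper_bound(regular))     # small: the pattern is easy to describe
print(complexity_upper_bound(random_ish))  # near the raw size: no shorter description found
```

The gap between the two outputs is the practical face of Chaitin's point: the regular string has a short explanation, the random one effectively has none.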

He employs this idea, this “new intellectual toolkit,” in a brief discussion of mathematics, physics, and evolution, modeling evolution with algorithmic mutations. He also suggests an application of one of the features of algorithmic information theory, to Giulio Tononi’s integrated information theory of consciousness. As I see it, a mathematical way of thinking brings algorithmic information theory to life, which then appears to hold the keys to a clearer view of physical, biological and digital processes.

In his discussion of consciousness Chaitin suggests an important idea – that thought reaches down to molecular activity.

"If the brain worked only at the neuronal level, for example by storing one bit per neuron, it would have roughly the capacity of a pen drive, far too low to account for human intelligence. But at the RNA/DNA molecular biology level, the total information capacity is quite immense.

In the life of a research mathematician it is frequently the case that one works fruitlessly on a problem for hours then wakes up the next morning with many new ideas. The intuitive mind has much, much greater information processing capacity than the rational mind. Indeed, it seems capable of exponential search.

We can connect the two levels postulated here by having a unique molecular “name” correspond to each neuron, for example to the proverbial “grandmother cell.” In other words, we postulate that the unconscious “mirrors” the associations represented in the connections between neurons. Connections at the upper conscious level correspond at the lower unconscious level to enzymes that transform the molecular name of one neuron into the molecular name of another. In this way, a chemical soup can perform massive parallel searches through chains of associations, something that cannot be done at the conscious level.

When enough of the chemical name for a particular neuron forms and accumulates in the unconscious, that neuron is stimulated and fires, bringing the idea into the conscious mind.

And long-chain molecules can represent memories or sequences of words or ideas, i.e., thoughts."
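Chaitin's chemical-soup picture can be caricatured in a few lines of code: treat each "enzyme" as a rule rewriting one neuron's molecular name into an associated one, and the unconscious search as a walk over chains of those rules. All names and rules below are invented for illustration; this is a toy model of the sketch, not anything from the paper:

```python
from collections import deque

# Toy "enzyme" table (entirely invented): each rule rewrites one neuron's
# molecular name into the name of an associated neuron.
ENZYMES = {
    "grandmother": ["family", "cookies"],
    "family": ["home"],
    "cookies": ["kitchen"],
    "kitchen": ["home"],
}

def chain_of_associations(start, target):
    """Breadth-first search for a chain of associations from start to target.

    Returns the chain as a list of names, or None if no chain exists. A found
    chain plays the role of the target neuron's name accumulating until that
    neuron fires and the idea enters consciousness."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == target:
            return path
        for nxt in ENZYMES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(chain_of_associations("grandmother", "home"))
# -> ['grandmother', 'family', 'home']
```

The breadth-first frontier is the (very weak) serial analogue of the massive parallelism Chaitin attributes to the chemical soup, which would explore all chains at once rather than one queue entry at a time.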

This possibility is suggested in the light of a digital view of things. The paper concludes in this way:

"We now have a new fundamental substance, information, that comes together with a digital world-view.

And – most ontological of all – perhaps with the aid of these concepts we can begin again to view the world as consisting of both mind and matter. The notion of mind that perhaps begins to emerge from these musings is mathematically quantified, which is why we declared at the start that this essay pretends to take additional steps in the direction of a mathematical form of philosophy.

The eventual goal is a more precise, quantitative analysis of the concept of “mind.” Can one measure the power of a mind like one measures the power of a computer?"

Quantification as a goal can be misunderstood. To many it signifies a deterministic, controllable world. Chaitin’s idea of quantification is motivated by the exact opposite. His systems are necessarily open-ended and creative. Quantification is more the evidence of comprehension.

There is one more thing in this paper that I enjoyed reading. It comes up when he introduces the brain to his discussion of complexity. I’ll just reproduce it here without comment.

"Later in this essay, we shall attempt to analyze human intelligence and the brain. That’s also connected with complexity, because the human brain is the most complicated thing there is in biology. Indeed, our brain is presumably the goal of biological evolution, at least for those who believe that evolution has a goal. Not according to Darwin! For others, however, evolution is matter’s way of creating mind." (emphasis added)

You can view a video of Tononi talking about his information theory of consciousness, from last year's FQXi conference, here:

(first appeared on

Edwin Eugene Klingman wrote on Jan. 14, 2015 @ 01:34 GMT
This comment addresses only the FQXi write-up above, not all the work referred to by it.

In particular, the "essence of explanation is compression. An explanation has to be much simpler, more compact, than what it explains." My current essay The Nature of Bell's Hidden Constraints depicts this generic process in figure 1 on page 1, diagramming the transformation(s) from unlimited measurement data to feature vector, and deriving dynamics from feature vectors.

Next, Chaitin is reported to suggest "an important idea" - that thought reaches down to molecular activity. My first essay Fundamental Physics of Consciousness treated consciousness as a field, thereby encompassing all of the neuronal material. This too is more dynamic than Chaitin's idea.

Finally, 'the paper concludes' "we now have a new fundamental substance, information, that comes together with the digital world view." My essay Gravity and the Nature of Information examines information and concludes there is no such "substance". The field referred to above is substantial enough, with self interactive dynamic properties. And my essay The Analog-In, Digital-Out Universe traced the link from analog to digital view. And the existence of the consciousness field also belies the concept that "evolution is matter's way of "creating" mind" (although obviously it increases the effective density and complexity of the local consciousness field, anchoring it in a self-aware, mobile organism with volition.)

In short, it's good to see increasing attention being paid to consciousness, even if it is (in my opinion) in a mistaken direction.

Edwin Eugene Klingman


Robert H McEachern wrote on Jan. 15, 2015 @ 04:09 GMT
"If the brain worked only at the neuronal level, for example by storing one bit per neuron..."

No one who knows anything about neurons assumes that they can store only a single bit. They assume neurons can store several orders of magnitude more information than that: approximately 10 bits for each of 1000 synapses, at a minimum.
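For scale, taking the figures above (10 bits per synapse, 1000 synapses per neuron) together with the commonly cited estimate of roughly 8.6 × 10^10 neurons in a human brain (an assumption added here, not a number from the post), the two storage pictures differ by about four orders of magnitude:

```python
# Back-of-envelope comparison of the two storage assumptions.
# The neuron count (~8.6e10) is a commonly cited estimate and is an
# assumption added here, not a figure from the post.
NEURONS = 8.6e10
BITS_PER_SYNAPSE = 10
SYNAPSES_PER_NEURON = 1000

one_bit_total = NEURONS / 8 / 1e9  # gigabytes at 1 bit per neuron
synaptic_total = NEURONS * SYNAPSES_PER_NEURON * BITS_PER_SYNAPSE / 8 / 1e9

print(f"1 bit per neuron:       {one_bit_total:,.1f} GB")   # pen-drive scale
print(f"10 bits x 1k synapses:  {synaptic_total:,.0f} GB")
```

The first number is indeed "pen drive" territory, which is what makes the one-bit-per-neuron premise in the quoted passage such a weak starting point.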

"Can one measure the power of a mind like one measures the power of a computer?" Yes. The estimate I published, over twenty years ago, in "Human and Machine Intelligence: An Evolutionary View", was 100,000,000 Gigaflops.

Rob McEachern

