If you are aware of an interesting new academic paper (that has been published in a peer-reviewed journal or has appeared on the arXiv), a conference talk (at an official professional scientific meeting), an external blog post (by a professional scientist) or a news item (in the mainstream news media), which you think might make an interesting topic for an FQXi blog post, then please contact us at forums@fqxi.org with a link to the original source and a sentence about why you think that the work is worthy of discussion. Please note that we receive many such suggestions and while we endeavour to respond to them, we may not be able to reply to all suggestions.

Please also note that we do not accept unsolicited posts and we cannot review, or open new threads for, unsolicited articles or papers. Requests to review or post such materials will not be answered. If you have your own novel physics theory or model, which you would like to post for further discussion among then FQXi community, then please add them directly to the "Alternative Models of Reality" thread, or to the "Alternative Models of Cosmology" thread. Thank you.

RECENT POSTS IN THIS TOPIC

**Moshe**: *on* 6/20/07 at 22:24pm UTC, wrote Thanks Anthony.

**Anthony Aguirre**: *on* 6/19/07 at 15:49pm UTC, wrote Moshe: I'm glad you brought up the planet example, which I think is...

**Count Iblis**: *on* 6/2/07 at 16:30pm UTC, wrote Actually, what I was suggesting is just the approach by Hartle and...

**Moshe**: *on* 6/1/07 at 16:19pm UTC, wrote I am wondering if there is an example where typicality can be used as a...

**Anthony Aguirre**: *on* 5/31/07 at 19:31pm UTC, wrote Count Iblis: I'm not sure what you mean. Sure, we could give these...

**Count Iblis**: *on* 5/31/07 at 17:34pm UTC, wrote About the attached text file: "Worries about 'Top down' or 'Full...

**paul valletta**: *on* 5/31/07 at 0:50am UTC, wrote Interestingly, I have been reading the Hartle-Srednicki paper for a number...

**Anthony Aguirre**: *on* 5/30/07 at 23:07pm UTC, wrote I've just finished reading an interesting paper by Hartle and Srednicki...




I've just finished reading an interesting paper by Hartle and Srednicki critiquing the assumption that 'we are typical', used in various cosmological model-testing arguments.

Here is the basic issue. There is an open problem in cosmology as to how to test a theory that entails a 'multiverse', which is to say an ensemble of regions, each member of which appears as a 'universe' to its...


attachments: td_fnac_worries.txt, simp_topdown.jpg

this post has been edited by the forum administrator


Interestingly, I have been reading the Hartle-Srednicki paper for a number of weeks; there is some logical method in it that I have not quite grasped. As I am a big fan of Hartle, I keep it handy.

Bayesian reasoning is something I have only recently become familiar with, but, for example, I was always taught that the Sieve of Eratosthenes:

http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes

did not reveal any prime-number pattern, so I used a bit of logic to "move the goalposts". Forgive the basic condition of this link, as it is something I want to eventually tidy up:

http://homepage.ntlworld.com/paul.valletta/PRIME%20GRIDS.htm

But the basic picture is that, given certain facts, one can bend the rules?

The fact is that if one alters the "fixed" Eratosthenes sieve (from fixed columns and rows) to one that is dynamic, then the random prime numbers contained within a fixed grid, as per Eratosthenes, fall into sections that reveal a fractal/prime-property coincidence?
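For reference, the standard "fixed" sieve referred to above can be sketched in a few lines of Python; this shows only the classic algorithm, not the dynamic-grid variant described on the linked page:

```python
def sieve(n):
    """Classic Sieve of Eratosthenes: return all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross off every multiple of p, starting at p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```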

Model predictions have many variables; string theory, for instance, has a lot of problems because the Universe has not yet divulged its hidden dimensions. This, I believe, is because the Universe is still evolving, and dimensionally it happens to be in a 3+1 phase at this moment in time.

The Universe of the future will definitely have more dimensions than it has currently; thus string theory is a "not yet correct" theory! But I do not believe there will be any evidence for extra dimensions other than the maths. The maths are certainly correct; they just have to be time-stamped into the evolving Universe model.

Even in the far-off future, Einstein's Theory of Relativity will be correct; it will just have to be classified as a dimensional-phase-dependent theory, relevant to a specific time slot within the Universe.


About the attached text file: "Worries about 'Top down' or 'Full non-indexical conditioning'"

Why not take into account prior probabilities of theories A and B? Then everything becomes well defined...

Of course, the scientists in that example may not know what values to assign to the a priori probabilities, but they must have some rough idea. How else did they come up with theories A and B in the first place if they didn't think they had some reasonable chance of being correct?

About the 500 kg ball of gas in a box:

"This seems rather displeasing, as the data D arises very naturally in A, but in an exceedingly strange way in theory B"

Some time ago I was thinking about this problem in terms of algorithmic complexity and artificial intelligence, which allows one to formalize the notion of "exceedingly strange". Suppose we simulate both models in a computer (assumed to be powerful enough to simulate every relevant detail). We want to talk to Anthony who, we know, "lives" in both these worlds.

We can try to locate Anthony by using a search algorithm. Compared to the "gas in a box" universe, the search algorithm for the big bang universe isn't all that complicated. Also the run time to locate Anthony is much less. And after locating him, it's much easier to talk to him.

In fact, if you analyze a conversation between us and the Anthony in the gas-in-the-box universe, you see that you must constantly search for Anthony using the search algorithm, because he disintegrates all the time. The search algorithm must be capable of locating Anthony at each time, over and over again, i.e. it must be able to predict the state Anthony will be in given his state at a slightly earlier time.

So the search algorithm does the bulk of the computation that generates Anthony's consciousness. Can we then really say that Anthony exists at all in that universe? It's a bit like how Strong AI proponents refute Searle's Chinese Room Argument...


Count Iblis:

I'm not sure what you mean. Sure, we could give these theories different prior probabilities, but on what basis? That is, I think the relevant question is how we are supposed to use the data to distinguish theories when it is not clear a priori which is correct (i.e. with fairly similar prior probabilities). I think perhaps I am missing what you are getting at.

Your second point is interesting. Indeed, part of what is so troubling about the 'gasball' theory is that along with producing the proper Anthony, you produce every other possible macrostate that you can compose (via coarse-graining) out of your ergodically sampled microstates. So all of Anthony's particular qualities (for better or worse) are given no more 'credit' than any random blob of gas -- in fact Anthony is stupendously more rare. Your take, using computability, is an interesting perspective on this -- I wonder if some sort of 'search-algorithm measure' could make sense?


I am wondering if there is an example where typicality can be used as a quantitative tool, in a context better explored than multiverse scenarios. For example, for observables that are clearly anthropically determined, such as the distance of the Earth to the Sun, is there a way to estimate that number based on an appropriately chosen measure? We probably know more about planetary systems than about branches of the wave function or pocket universes.


Actually, what I was suggesting is just the approach by Hartle and Srednicki you wrote about. If there is no prior preference for theory A or B, then you can just take the prior probabilities equal to each other. These will then be updated using Bayes' theorem. Any change in the updated probabilities is then due to the data, so there is nothing ambiguous/strange about that...
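A minimal sketch of this updating step in Python; the likelihood values P(D|A) and P(D|B) below are made-up illustrative numbers, not anything computed from the actual theories:

```python
def bayes_update(priors, likelihoods):
    """Update prior probabilities of competing theories given the
    likelihood each theory assigns to the observed data D."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # P(D), the normalizing evidence
    return [j / total for j in joint]

# Equal priors for theories A and B, as suggested above;
# hypothetical likelihoods P(D|A) = 0.9, P(D|B) = 0.1.
posterior = bayes_update([0.5, 0.5], [0.9, 0.1])
print(posterior)  # posterior ≈ [0.9, 0.1]: any shift is due entirely to the data
```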

About the Boltzmann brain, perhaps one can also use the algorithmic complexity of the unitary transformation you need to generate the observer from the initial state. The probability that Anthony can be found in some state |psi> is the sum over k of |<Anthony_k|psi>|^2, where the |Anthony_k> form a complete set of states containing Anthony. The state vector of the universe is obtained from some initial state: |psi> = U(t)|psi(0)>.

Now, we already assume that universes with simple laws of physics are more likely than universes specified by complex laws. E.g., who believes that the laws of physics will turn out to be specified by trillions of arbitrary parameters? :) So we already have a notion of a complexity measure for the unitary transformation U(t) = Exp[-i H t], i.e. we don't think that a very complicated H is very likely.

So perhaps we need to multiply:

sum over k of |<Anthony_k|psi>|^2

by a probability which depends on the complexity of U(t). Here we insert the value of t in U and then look at the complexity of that transformation. The larger we make t, the more bytes you need to specify U(t). So the Boltzmann-brain contributions we get by integrating over t to infinity get suppressed.
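Putting the pieces of this proposal together in one formula (my reconstruction of notation that appears to have lost its bra-kets in transcription; the penalty function w and the description-length measure K are assumptions, not specified in the original):

```latex
P(\text{Anthony at time } t) \;\propto\;
  w\!\big(K(U(t))\big)\,
  \sum_k \big| \langle \mathrm{Anthony}_k \,|\, U(t) \,|\, \psi(0) \rangle \big|^2,
\qquad U(t) = e^{-iHt},
```

where $w$ decreases as the number of bytes $K(U(t))$ needed to specify $U(t)$ grows, so that late-time (Boltzmann-brain) contributions are suppressed when integrating over $t$.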


Moshe:

I'm glad you brought up the planet example, which I think is illustrative in several ways:

(1) Indeed, I don't think you'll find many people who will argue that 'anthropic' selection effects are unimportant in answering (say) "why is the earth-sun distance 1.5e11m?" I think this should give pause to those who say inclusion of such selection effects is "not science." Alternatively, we can define a set of questions (such as that one) to which we simply cannot give scientific answers. I prefer leaving these within the purview of science.

(2) BUT contemplating performing such a calculation (easy compared to the much harder task of applying this reasoning in cosmology) does cause one to despair a bit. An analogy I contemplate sometimes is the program of testing the big-bang cosmology with just a pair of binoculars. In principle, sufficiently clever cosmological (and planet/star-formation and exobiology, etc.) theorists could run the calculation through, generating the probability distributions for galaxies, stars, planets, planets 1.5e11 m from their stars, planets with "life", etc. But would we succeed? Would we really be able to pin down even one cosmological parameter this way? And would we ever have come up with dark matter or dark energy?

But I think what you were asking is a question of methodology, and the relevant analog might be convincingly and satisfactorily explaining (since it is already measured) the earth-sun distance given the standard cosmological model. What would this mean? Well, suppose I had a planet-formation theory that entailed that the incredibly vast majority of earth-mass planets form in rich galaxy clusters, because the only way (in my theory) planet formation works is for planets to condense out of 10^8 K gas of > 0.1 solar metallicity and then be captured by stars. Since we have very little observational evidence regarding the distribution of earth-mass planets, this might be hard to rule out on that count. But I might then be surprised that *we* are not in a rich cluster. This surprise could lead me to (a) accept that we are very weird, or (b) theorize that there is some additional selection effect that forces the probability distribution over to spiral galaxies in small groups, or (c) figure that my theory is wrong.

My reading of Hartle and Srednicki, BTW, would be that the 'planet condensation' theory is just as good as a more conventional planet-formation theory, insofar as both give at least one instance of an earthlike-planet in a large spiral galaxy.

