If you are aware of an interesting new academic paper (published in a peer-reviewed journal or posted on the arXiv), a conference talk (at an official professional scientific meeting), an external blog post (by a professional scientist) or a news item (in the mainstream news media) that you think might make an interesting topic for an FQXi blog post, please contact us at forums@fqxi.org with a link to the original source and a sentence about why you think the work is worthy of discussion. Please note that we receive many such suggestions and, while we endeavour to respond to them, we may not be able to reply to all of them.

Please also note that we do not accept unsolicited posts, and we cannot review, or open new threads for, unsolicited articles or papers. Requests to review or post such materials will not be answered. If you have your own novel physics theory or model that you would like to post for further discussion among the FQXi community, please add it directly to the "Alternative Models of Reality" thread or the "Alternative Models of Cosmology" thread. Thank you.

Contests Home

Current Essay Contest

*Contest Partners: The Peter and Patricia Gruber Foundation and Scientific American*

Previous Contests

**Wandering Towards a Goal**

How can mindless mathematical laws give rise to aims and intention?

*December 2, 2016 to March 3, 2017*

Contest Partner: The Peter and Patricia Gruber Fund.

read/discuss • winners

**Trick or Truth: The Mysterious Connection Between Physics and Mathematics**

*Contest Partners: Nanotronics Imaging, The Peter and Patricia Gruber Foundation, and The John Templeton Foundation*

Media Partner: Scientific American

read/discuss • winners

**How Should Humanity Steer the Future?**

*January 9, 2014 - August 31, 2014*

*Contest Partners: Jaan Tallinn, The Peter and Patricia Gruber Foundation, The John Templeton Foundation, and Scientific American*

read/discuss • winners

**It From Bit or Bit From It**

*March 25 - June 28, 2013*

*Contest Partners: The Gruber Foundation, J. Templeton Foundation, and Scientific American*

read/discuss • winners

**Questioning the Foundations**

Which of Our Basic Physical Assumptions Are Wrong?

*May 24 - August 31, 2012*

*Contest Partners: The Peter and Patricia Gruber Foundation, SubMeta, and Scientific American*

read/discuss • winners

**Is Reality Digital or Analog?**

*November 2010 - February 2011*

*Contest Partners: The Peter and Patricia Gruber Foundation and Scientific American*

read/discuss • winners

**What's Ultimately Possible in Physics?**

*May - October 2009*

*Contest Partners: Astrid and Bruce McWilliams*

read/discuss • winners

**The Nature of Time**

*August - December 2008*

read/discuss • winners

Forum Home

Introduction

Terms of Use

RSS feed | RSS help

*Posts by the author are highlighted in orange; posts by FQXi Members are highlighted in blue.*

RECENT FORUM POSTS

**John Merryman**: "The problem is that we do experience reality as those discrete flashes of..."
*in* The Quantum...

**Thomas Ray**: "(reposted in correct thread) Lorraine, Nah. That's nothing like my view...."
*in* 2015 in Review: New...

**Lorraine Ford**: "Clearly “law-of-nature” relationships and associated numbers represent..."
*in* Physics of the Observer -...

**Lee Bloomquist**: "Information Channel. An example from Jon Barwise. At the workshop..."
*in* Physics of the Observer -...

**Lee Bloomquist**: "Please clarify. I just tried to put a simple model of an observer in the..."
*in* Alternative Models of...

**Lee Bloomquist**: "Footnote...for the above post, the one with the equation existence =..."
*in* Alternative Models of...

**Thomas Ray**: "In fact, symmetry is the most pervasive physical principle that exists. ..."
*in* “Spookiness”...

**Thomas Ray**: "It's easy to get wound around the axle with black hole thermodynamics,..."
*in* “Spookiness”...

RECENT ARTICLES

*click titles to read articles*

**Why Time Might Not Be an Illusion**

Einstein’s relativity pushes physicists towards a picture of the universe as a block, in which the past, present, and future all exist on the same footing; but maybe that shift in thinking has gone too far.

**The Complexity Conundrum**

Resolving the black hole firewall paradox—by calculating what a real astronaut would compute at the black hole's edge.

**Quantum Dream Time**

Defining a ‘quantum clock’ and a 'quantum ruler' could help those attempting to unify physics—and solve the mystery of vanishing time.

**Our Place in the Multiverse**

Calculating the odds that intelligent observers arise in parallel universes—and working out what they might see.

**Sounding the Drums to Listen for Gravity’s Effect on Quantum Phenomena**

A bench-top experiment could test the notion that gravity breaks delicate quantum superpositions.

FQXi FORUM

March 19, 2018

CATEGORY: Is Reality Digital or Analog? Essay Contest (2010-2011)

TOPIC: The World is Either Algorithmic or Mostly Random by Hector Zenil

I will propose the notion that the universe is digital, not as a claim about what the universe is made of but rather about the way it unfolds. Central to the argument will be the concepts of symmetry breaking and algorithmic probability, which will be used as tools to compare the way patterns are distributed in our world to the way patterns are distributed in a simulated digital one. These concepts will provide a framework for a discussion of the informational nature of reality. I will argue that if the universe were analog, then the world would likely be random, making it largely incomprehensible. The digital model has, however, an inherent beauty in its imposition of an upper limit and in the convergence of computational power to a maximal level of sophistication. Even if deterministic and digital, the world need not be trivial or predictable; rather, it is built up from operations that are very simple at the lowest scale but that look complex, and even random, at a higher scale, though only in appearance.
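The abstract's idea of comparing how patterns are distributed can be illustrated, very roughly, with lossless compression as a crude stand-in for algorithmic complexity: a string produced by a simple rule compresses well, while a random string does not. This is only a toy sketch (zlib gives a loose upper bound on Kolmogorov complexity, and it is not the essay's actual methodology), but it makes the distinction concrete:

```python
import random
import zlib

def compressed_ratio(s: bytes) -> float:
    """Compressed size over original size: a crude upper bound
    on algorithmic (Kolmogorov) complexity per byte."""
    return len(zlib.compress(s, 9)) / len(s)

# A string produced by a very simple rule (an "algorithmic" pattern)...
structured = ("01" * 5000).encode()

# ...versus a string of independent random bytes.
random.seed(0)
rnd = bytes(random.getrandbits(8) for _ in range(10000))

# The rule-generated string compresses to a tiny fraction of its size,
# while the random string is essentially incompressible.
assert compressed_ratio(structured) < 0.02
assert compressed_ratio(rnd) > 0.9
```

The gap between the two ratios is the (very informal) sense in which an algorithmically generated world is "comprehensible" while a random one is not.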

Hector Zenil (BSc. Math, UNAM, 2005; MPhil. Logic, Paris 1 Sorbonne, 2006; PhD. Computer Science, Lille 1, 2011) has held visiting positions at the Massachusetts Institute of Technology and Carnegie Mellon University. He is a senior research associate at Wolfram Research, member of the Turing Centenary Advisory Committee, founding honorary associate of the Algorithmic Social Science Research Unit of the University of Trento and editor of Randomness Through Computation (published by World Scientific). His main research interests lie at the intersection of several disciplines in connection or application to the concept of randomness and algorithmic complexity motivated by foundational questions.

Dear Hector,

Thank you for your very interesting article. It does seem the rules in nature will turn out to be simple as opposed to needing very complicated constructs. I regularly see this in engineering where difficult problems taking much analysis boil down to only a few lines of code at their cores. It is interesting that you also regularly see this in your work. Thank you for your fine essay.

Kind Regards,

Russell

report post as inappropriate

NP-completeness and the NP Hardness Assumption suggest that Reality isn't algorithmizable. Where, for example, is the algorithm for protein folding?

Arguably the emergence of complexity from Omega or quantum randomness can never be described for essentially the same reason: it's not a compressible process.

Or am I full of it? Missing the point?

report post as inappropriate

Dear nikman,

Thanks for your message. In my definition of an algorithmic world, problems do not need to belong to any particular computational complexity class. My use of "algorithmic" is independent of, and compatible with, the theory of computational complexity.

As you know, the framework for investigating problems in computational complexity theory is based on the concept of the universal Turing machine. Problems, even NP-complete ones, are studied as being carried out by a digital computer (the Turing machine), so even NP-complete problems are algorithmic under my worldview.

The fact that some problems may take a long time to solve as a function of the size of the input doesn't mean, in my definition, that they are no longer algorithmic. Problems don't need to belong to any time complexity class to be algorithmic. On the contrary, if a problem can be described in algorithmic terms, then it is algorithmic, so NP-complete problems can coexist with my algorithmic universe.

On the other hand, we shouldn't forget that particular instances of an NP problem may often be easy to solve in polynomial time by a deterministic Turing machine, and that it is unknown whether there are faster algorithms that solve NP-complete problems for all instances.

I think that not knowing the algorithm for protein folding is different from claiming (and I'm not sure you did) that protein folding cannot be carried out by a (deterministic) Turing machine, whether in polynomial time or not, something we don't yet know.

Sincerely.
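The distinction drawn above, that NP-complete problems are slow but still perfectly algorithmic, can be made concrete with a brute-force decider for Subset Sum, a standard NP-complete problem. The sketch below (an illustration, not anything from the essay) is a well-defined algorithm even though it takes exponential time in the worst case:

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force decider for Subset Sum (NP-complete).
    Tries all 2^n subsets -- exponential time, yet fully algorithmic:
    a deterministic Turing machine can carry out exactly this procedure."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return True
    return False

assert subset_sum([3, 34, 4, 12, 5, 2], 9)       # 4 + 5 = 9
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)  # no subset sums to 30
```

Unless P = NP, no known algorithm avoids this kind of blow-up on all instances, but that only limits how fast the problem can be decided, not whether it is algorithmic.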

Dear Dr. Hector Zenil,

I am just now downloading your essay. I will read it to the best of my ability. I will be looking for whether or not your view of the universe is one of an unfolding 'program' in somewhat of a computer sense. Your statement:

"On the contrary, if a problem can be described in algorithmic terms, then it is algorithmic, ..."

Seems to me to suggest that you believe that the universe can be properly described and defined by the means of establishing 'steps'. I will be looking for your support for that view. If I am wrongly anticipating your position, then, I will learn this by reading your essay. Thank you for participating.

James Putnam

report post as inappropriate

Dear Dr. Hector Zenil,

I find this essay to require a large challenge on many points. I will start slowly and see if there is any interest. From page one:

"Whether the universe began its existence as a single point, or whether its inception was a state of complete randomness, one can think of either the point or the state of randomness as quintessential states of perfect symmetry. Either no part had more or less information because there were no parts or all parts carried no information, like white noise on the screen of an untuned TV. In such a state one would be unable to send a signal, simply because it would be destroyed immediately. But thermal equilibrium in an expanding space was unstable, so asymmetries started to arise and some regions now appeared cooler than others. The universe quickly expanded and began to produce the first structures."

This reads like the Book of Genesis. Without intelligence behind it, there is a lot of explaining to do. First question: Symmetry breaking of less or no information leads to increased information?

Moving to the end:

"An analog world means that one can divide space and/or time into an infinite number of pieces, and that matter and everything else may be capable of following any of these infinitely many paths and convoluted trajectories. ..."

What is the empirical evidence to support the idea that space and/or time can be divided into pieces? I will leave it at two questions for now. Later I will ask about bits, strings of bits, information and meaning.

James

report post as inappropriate

Thanks for that. And yes, I'm suggesting what you suggest I'm suggesting. I know that Turing himself first posited the possibility of non-recursive adjuncts ("oracles") to computation, but in retrospect he may not have realized the immense other world that might lead to.

Time will tell.

report post as inappropriate

Hector,

To make near-poetry of computer science, and have it be factually accurate and scientifically well grounded besides, is a tour de force. I look forward to seeing this piece published in a prestigious venue, as by any objective standard I know, it deserves to be.

It's so gratifying to see information theory getting the strong treatment in this contest, that I hoped it...

report post as inappropriate

T H Ray,

Thanks for your kind words. I'm glad you found the essay interesting, and glad to be the one standing in favor of information theory to support the digital view of the world in this exciting contest.

Sincerely.

I hope you've made plans to be in Boston for ICCS this summer. If my own plans go as expected I'd love to share some conversation over a cold Sam Adams.

Tom

report post as inappropriate

Hi Hector, I was very impressed with your essay. Very easy to read yet dealing with complex study matter. I have a question which relates to you talking about DNA incidentally:

Q: Why can't an Archimedes screw be used as a particle/wave model of gravity? Why is no-one experimenting with this simple idea of a screw being the analogy needed to visualise a force-inducing particle of attraction? If it then travelled around a wraparound universe it would emerge on the other side as force of repulsion i.e. dark energy. I don't understand why no-one has grasped this simple idea yet.

Many thanks.

report post as inappropriate

Thank you very much Alan.

Concerning your question, I don't know why nobody has used the idea of an Archimedes screw. It sounds to me that something similar has been explored in the form of some topological spaces that behave as you describe, although I'm not sure whether they have been connected to the dark energy phenomenon.

Best regards.

Thanks Hector! You're the third person to appreciate the connection. If Newton had hit on this idea we would never have had Einstein's spacetime continuum, imo. It leads on to an idea explaining the 100,000-year ice age problems encountered with Milankovitch cycles. Never mind...

Alan

report post as inappropriate

Hi Hector Zenil,

Congratulations, this helps one to better understand computing and its randomness.

I wonder what the basis of these systems and languages is? The simulations can be optimized!

Good luck.

Best Regards

Steve

report post as inappropriate

Thanks Steve. I think this is the first time I have read a post from you with nothing about the Spheres model =)

Thanks again.

:) Peter says I am a sphericentrist, probably a problem of vanity due to my young age (35) :)

Regards

report post as inappropriate

Hello,

Very interesting essay, and at the same time hard to understand for someone without formal exposure to complexity theory. I still don't see two things: why an analog universe can't have an algorithmic representation, which is obviously what relativity has offered with very high accuracy, and how one can decide from all this the fundamental question, namely whether or not there is a smallest interval of space(time). Another question: do you take for granted that algorithmic representations supervene on laws of nature?

Regards.

report post as inappropriate

Hello Efthimios,

Good question. What I claim, and the reason I believe my model is stronger than directly claiming that the world is digital, is that in my view one doesn't have to presume discreteness as a basic assumption about the world. One starts by asking what the universe looks like in terms of the distribution of patterns in the world. Then one can conclude either that the world is...

Dear Hector,

Thank you for the detailed response. I now understand better your work (I hope), which I think is very interesting and original.

However, relativity theory tells us that the world is analog and fully deterministic. You say: " I will argue that if the universe were analog, then the world would likely be random, making it largely incomprehensible."

The above statement is contrary to the best scientific theory we have available, which is based on continuity of spacetime and is fully deterministic at the same time, contrary to your claim. In addition, this type of analog model of a universe is comprehensible and falsifiable by experimentation, but hasn't been falsified to this date.

Regardless of analog computational machines, if the universe is analog, it is the best analog computer; we should not need to find another one.

I would like to know more about how your quoted statement above reconciles with relativity theory.

Thanks and regards.

report post as inappropriate

Dear Efthimios,

Yes, classical and relativistic mechanics are both deterministic, and that's compatible with my algorithmic worldview. On the other hand, certain phenomena can be modeled by assuming that matter and space exist as a continuum, meaning that matter is continuously distributed over an entire region of space. By definition, a continuum is a body that can be continually sub-divided into infinitesimal elements. However, matter is composed of molecules and atoms, separated by empty space. If a model like general relativity is believed to describe the world at all scales, then one would also need to think of matter as a continuum, something not compatible with my view, but also not compatible with another large, and equally important, field of modern physics: quantum mechanics (the view that there are elementary particles and that they constitute all matter).

Modeling an object or a phenomenon as something doesn't mean it is that something. Even if models are highly accurate on length scales greater than that of atomic distances, they do not necessarily describe the universe at all scales or under all circumstances, which should remind us that models are not always full descriptions of reality, so we should not take them to be at the most basic level of physical explanation.

You make a great point that is fully compatible with my worldview: if the world is analog, then we would need to live in the best possible analog world. That is what I argue: the chances of finding patterns and structures in an analog world would be very low unless, as you suggest, one assumes that our world is the best possible among all possible worlds. Under the digital view, however, patterns and structures are basically an unavoidable consequence, so there is no need for such a strong assumption.

Sincerely.
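The claim that structure is an unavoidable consequence of sampling simple programs, rather than a lucky accident, can be illustrated with a toy experiment: run all 256 elementary cellular automaton rules (a rule space of the kind Wolfram's searches explore) and tally their center-column outputs. This is an illustration of the algorithmic-probability-style skew discussed above, not Zenil's actual methodology; the rule set and output encoding are choices made here for the sketch:

```python
from collections import Counter

def eca_center_column(rule: int, steps: int = 8, width: int = 33) -> str:
    """Center column of elementary CA `rule`, started from a single
    black cell, as a string of '0'/'1' characters."""
    row = [0] * width
    row[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(str(row[width // 2]))
        # Standard Wolfram encoding: rule bit index = 4*left + 2*center + right.
        row = [(rule >> (4 * row[(i - 1) % width]
                         + 2 * row[i]
                         + row[(i + 1) % width])) & 1
               for i in range(width)]
    return "".join(col)

# One 8-bit output string per "simple program" (rule).
freq = Counter(eca_center_column(r) for r in range(256))

# The resulting distribution is far from uniform: simple strings such as
# '10000000' (the cell dies) occur for many different rules, while most
# of the possible 8-bit strings are never produced at all.
print(freq.most_common(5))
print(len(freq), "distinct outputs out of 256 runs")
```

Sampling outputs by running programs concentrates probability on patterned strings, which is the informal picture behind "patterns are an unavoidable consequence" in the digital view.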

Héctor:

Hello from another math student from ciencias (unam). Much older than you anyway. 45 now.

I'm in the contest also.

I read your essay and I liked it a lot, because I am into computational complexity also.

I have been far from academia for years, except for my participation in this and last year's contest on FQXi.

I would like to know whether you know of any computational complexity research groups in Mexico.

I really find your essay quite good; let's wait and see how the voting goes.

Please read my essay and comment.

report post as inappropriate

Hello Juan Enrique,

Nice to meet you. I know of the Centro de Ciencias de la Complejidad (C3) at UNAM, with which I am also associated. Sure, I will read your essay with interest.

Thanks for your support. Regards.

Einstein's dice obey these classical rules: 1 ODD + 1 EVEN = 2 ODD.

And 2 ODD + 2 EVEN = 4 EVEN.

QM is determined by Einstein's dice, and you can have a model of the universe where everything is determined, at least in the computer world...

This is not OUR UNIVERSE; this is a universe where everything is binary, either zero or one.

report post as inappropriate

A determined universe where everything is binary.

Steve

attachments: Einsteins_Loaded_Dice.htm, 1_Einsteins_Loaded_Dice.htm

EINSTEIN'S DICE FOR A QM UNIVERSE THAT IS DETERMINED BY BINARY

attachments: 2_Einsteins_Loaded_Dice.htm

Dear Dr. Zenil,

I have just read your paper, and thought you may like to know that my essay agrees with your assertion that ‘operations that at the lowest scale are very simple’. My paper deals with physics in which I derive ‘the Light’ and ‘Equivalence Identity’.

This raises the question of whether Wolfram’s systematic computer search for simple rules with complicated consequences could ever 'accidentally discover’ the two foundations revealed in my paper.

In case you haven't already, you may like to read the following article by Chaitin:

http://arxiv.org/PS_cache/math/pdf/0210/0210035v1.pdf

All the best,

Robert

Dear Robert,

Yes, I knew about Chaitin's paper; you do well to bring it up in this discussion, especially as it is connected to the content of my essay.

Wolfram has recently written about his quest to find the universe's rule, which he also thinks should be simple. Here is the link: http://blog.wolfram.com/2007/09/11/my-hobby-hunting-for-our-universe/
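A minimal sketch (mine, not Wolfram's code) of an elementary cellular automaton shows why such a hunt is conceivable at all: Rule 30 is specified by a single byte, yet its evolution from one black cell looks complex and random.

```python
def eca_step(cells, rule):
    """Advance an elementary cellular automaton one step (Wolfram numbering).

    Each cell's next state is the bit of `rule` indexed by its 3-cell
    neighborhood (left, self, right) read as a binary number.
    """
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 30: an 8-bit rule whose evolution from a single black cell
# looks complex and random despite the trivial description.
row = [0] * 31
row[15] = 1
for _ in range(8):
    print(''.join('#' if c else '.' for c in row))
    row = eca_step(row, 30)
```

The point is only illustrative: the whole rule is the number 30, yet no obvious pattern survives more than a few rows.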

Sincerely.

Dear Dr. Zenil,

Chaitin's paper is also connected to my essay, viz. Section III, "What do Working Scientists Think about Simplicity and Complexity?"

Cheers,

Robert

Robert,

There is broad agreement that algorithmic (program-size, aka Kolmogorov) complexity is the framework in which to talk about simplicity vs. complexity in science. It is based, as you may know, on the concept of the shortest possible description of an object.

The idea is that if the shortest program producing, say, a string on a universal Turing machine is about as long as the string itself, the string is said to be complex or random, while if the program is considerably shorter than the string, the string is said to be simple. In other words, if a string is compressible it is simple, and if it is not, it is random.

Finer measures have been proposed based on this same concept of algorithmic complexity, such as Bennett's logical depth. According to this measure, the complexity of an object is given by the decompression time of the near-shortest programs producing it. It has the particularity of distinguishing structured (organized) complexity from both triviality and randomness, as opposed to the random complexity captured by the original algorithmic sense.

These measures are, unfortunately, still largely underused, often overlooked or even misunderstood. I am quite surprised, for example, that only a handful of participants in this contest have even mentioned them when addressing the contest question, perhaps because they are relatively new theories. I'm glad to be a participant defending his view with these state-of-the-art tools.

The main problem is that these measures are not computable: there is no algorithm that, given a string, returns either complexity value (because of the halting problem explained in my essay). There are, however, attempts to build tools based on these concepts, and this has been part of my own research program. If you are interested you can have a look at my recent list of papers on the arXiv: http://arxiv.org/find/all/1/all:+zenil/0/1/0/all/0/1
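The compressibility criterion above can be made hands-on with an ordinary lossless compressor, whose output length gives a computable (if crude) upper bound on algorithmic complexity. A minimal sketch of my own, not a tool from the essay:

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    """Length of the zlib-compressed data: a computable upper-bound
    proxy for the (uncomputable) algorithmic complexity of s."""
    return len(zlib.compress(s, 9))

regular = b"01" * 1000                                      # highly patterned, 2000 bytes
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(2000))   # random-looking, 2000 bytes

# The patterned string compresses to a tiny fraction of its length (simple);
# the random-looking one barely compresses at all (complex/random).
print(compressed_size(regular), compressed_size(noisy))
```

Real compressors only ever give upper bounds, so they can certify simplicity but never prove randomness, which is exactly the uncomputability discussed above.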

Sincerely.

Hector,

Thanks for the interesting essay.

Your example of the 158 characters of C that compress the first 2400 digits of pi seems to overstate the actual degree of algorithmic compression. The 158 characters of C do not produce the 2400 digits alone unless also combined with a C compiler which also has considerable information content. In other words, throwing the dice in the air would need to produce not only the C program itself, but also the compiler to properly interpret and execute the program. Correct?

Regards,

Tom

Dear Thomas,

That's a very good point. However, I don't overlook the fact that one has to add the size of the C compiler to the size of the program. When one compares computer programs, one has to do it on the basis of a common language. If the common language is C, as it is in this case, one can ignore the size of the compiler because it is the same for every program. In other words, because the additive constant is common to all programs, one can ignore it.

The invariance theorem shows that it is not very important whether you add the compiler length, or which computer language you use, because between any two computer languages L and L' there exists a constant c_{L,L'}, depending only on the languages and not on the string, such that for all binary strings s:

|K_L(s) - K_L'(s)| < c_{L,L'}

Think of this as saying that there is always a translator of fixed length (a compiler between the two computer languages) which lets one talk about program lengths without worrying too much about additive constants, and without any loss of generality.

Good question. Thanks.

Hector,

Thank you for the helpful clarification. It makes sense that one can ignore the compiler when comparing program lengths.

Regards,

Tom

I should also add that a way to avoid large constants and concerns about shallow comparisons is to stay close to the 'machine language'. Remember that the algorithmic complexity of a string is defined as the length in bits of the shortest program that produces the string.

One can often write subroutines to shortcut a computation. In Mathematica, for example, you can get any number of digits of Pi simply by executing N[Pi, n], with n the number of desired digits. Note, however, that the C program calculating the first 2400 digits of Pi does not use any special C function, only basic arithmetical operations. In any case, the main argument holds: it is much easier to arrive at Pi by throwing bits into the air and interpreting them as instructions of a computer language (whatever the language, or if you prefer, the rules) than by throwing the digits themselves into the air, because programs for Pi will always be short relative to its expansion.
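To make the contrast concrete, here is a short self-contained program that produces as many digits of Pi as requested. This is a sketch of mine using Gibbons' unbounded spigot algorithm, standing in for the C program discussed above:

```python
def pi_digits(n):
    """First n decimal digits of pi via Gibbons' unbounded spigot algorithm."""
    digits = []
    # State of a linear fractional transformation that narrows in on pi.
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # The next digit is now certain: emit it and rescale.
            digits.append(m)
            q, r, t, k, m, x = (10 * q, 10 * (r - m * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * m, x)
        else:
            # Consume one more term of the series to tighten the bounds.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(''.join(map(str, pi_digits(20))))
```

The program stays a few hundred characters no matter how many digits are requested, which is exactly why Pi is algorithmically simple despite its random-looking expansion.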

Hi dear Hector, you say: "In any case, the main argument holds: it is much easier to arrive at Pi by throwing bits into the air and interpreting them as instructions of a computer language (whatever the language, or if you prefer, the rules) than by throwing the digits themselves into the air, because programs for Pi will always be short relative to its expansion."

Could you expand on that? It seems relevant...

Regards

Steve

Because for a real understanding of the theory of numbers, the reals, continuity and discreteness... there must be a difference between physicality and its rational distribution, the infinity behind our walls. And the additions and infinities invented by humans, due to certain additions or multiplications. Now if we take this language, mathematics, as computing, don't forget it is a human invention in which we create codes of continuities, where sometimes the categories permit synchronization and sortings. Of course it's a beautiful machine and its language is logic, but for simulations the laws can be changed, and thus the conclusions lose their real universal sense. That is essential if we really want to interpret our objective physical reality. I can, for example, simulate the mass of stars and planets, or black holes; that doesn't mean it's real. There we return to a very beautiful work of Eckard about the rational axiomatization of the reals with a real unity, 1, and real domains. All is there in fact; 0, the negatives and infinity aren't really real in their pure number and its distribution, which is spherical.

The language is the same because maths are maths, but the reals are better than the imaginaries... The strings are a beautiful tool for the 2D picture; I prefer a spherical membrane, oscillating, and we can thus simulate the mass also; it's more logical. The program just needs a little improvement, inserting the number of the ultimate entanglement of spheres and their volumes from the main central sphere. In any case, the wave-particle duality can be harmonized due to the proportions with mass. The strings were an idea; that idea can be universalized simply in the spherical logic in 3D.

Dear Hector, could you explain the algorithms to me? If I know the principle, I can invent several models of sortings and synchronization. Could you explain to me the basis of computing? Its language is logic and mathematics, but what is a sorting algorithm, for example; do you insert the volumes??? Or a series? In fact, how do I get the pictures here at home on my PC, for example?

Steve

The rational continuity seems lost in pure confusion, simply because the reals are correlated with the greatest rationality. The categorification of sortings in computing seems to be the cause, probably due to the adapted algorithms. That is probably why we get some bizarre simulations. In logic a real Turing machine seems rational; it simply can't be an irrational road.

Dear Hector, you say: "By definition, a continuum is a body that can be continually sub-divided into infinitesimal elements." I am not sure about that, really; it implies some confusion about the real meaning of the infinities. And the finite numbers! Defining mass, for example: what is this fractalization? I think there is a little problem. A continuum is more than that; the time operator seems confounded with the fractal of a body which is, in logic, finite in its pure Newtonian fractalization. I can understand the difference with 2D computing and the waves correlated with the fractal of this digit, a kind of unity that permits the 2D forms, OK, but the strings aren't fundamental for our universe: a 3D sphere and the pi-calculus, improved with a better fractal for the digit of this 3D sphere and its spherical membrane forming all systems... if the rotations are proportional to mass, and if the fractal is finite and precise in its decreasing volumes: a 3D spherical holographic computer. A program of convergences will be easy, and afterwards we can simulate correctly, in my humble opinion. I am persuaded that Lawrence can build that 3D holographic computer with its Turing universality. The work of Pierce seems relevant for the real axiomatization, the Caratheodory method also, and the real proportionalities. The convergences from this 2D towards the universal 3D seem easy. M-theory is simply too weak. 3D SPHERES AND SPHERIZATION, DEAR ALL.

Regards

Steve

Your paper is interesting and presents some things to think about. David Tong here comes to an opposite conclusion. My sense is that continuous and discrete aspects of reality are complements. In my paper http://fqxi.org/community/forum/topic/810 I work aspects of the algebraic structure for quantum bits with black holes and AdS spacetimes.

The universe as a set of digital processors has some compelling features. As I see it, these are structures associated with qubits on horizons or AdS boundaries. The exterior world has equivalent quantum information content, but it is the holographic projection from the boundary or horizon. To compare to DNA, it is analogous to the map which takes a single strand and parses it into complex folded polypeptides. We may then say this permits "errors", or mutations, or, in physics, broken symmetries.

Of course, from an algorithmic perspective we have the halting problem. The universe as a grand computer or quantum computer executes various algorithms, which are quantum bit processors for interacting fields. All of these need to be computable, and to have a finite data stack for a standard scattering experiment. So there must be some sort of selection process, a sort of quantum Darwinism, which selects for qubit processors that are computable. The Chaitin halting probability may then be some estimated value which serves as a screening process. Maybe if an algorithm is nonhalting and requires an unbounded amount of energy, it is renormalized out, or absorbed into a cutoff.

Cheers LC

Dr. Crowell,

The universe cannot just 'be':

"The universe as a grand computer or quantum computer executes various algorithms, which are quantum bit processors for interacting fields. All of these need to be computable, and have a finite data stack for a standard scattering experiment. So there must be some sort of selection process, a sort of quantum Darwinism, ..."

It needs a power supply and it needs to be programmed.

James

Lawrence: I have difficulty seeing how the world could be digital and analog at the same time, but it might be.

James: When one performs a computation, say on a desktop computer, it is with a purpose in mind, for example, to print a document or to play a game. When you consider that the computer started computing from 'random' data by picking programs at 'random', one gets the feeling that if the universe is a computer, it was not necessarily programmed.

As you point out, if the universe is computing something, a legitimate question is who set the computer running and what the universe is computing. While the answer to the first question is beyond my scope, the second may be as simple as believing that the universe is just computing itself, and sometimes we make it compute for ourselves (computers, after all, are part of the universe, and when we compute with them we ask the universe to compute something for us).

On the other hand, if someone or something ran the universe's code, the algorithmic view tells us this happened, if at all, only at the very beginning, because the structure one finds today all over the universe is neither the result of chance nor the result of design, but can be explained by computation without having to assume a purpose, or someone intervening at every step to get to where it is today. This worldview claims that the universe's present state is the result of computation, in which the theory of algorithmic probability explains and predicts the distribution of random-looking and organized structures.

Hector (and James),

going back to the remark by James -- if the universe is a computer (or, better, a computation), it needs a power supply, and needs to be programmed -- I agree with the reply that there is no need to program it for a purpose, and no need to inject information during the computation. A lot of interesting things emerge in computations that are not the result of purposeful design, and that are 'closed', that is, not interacting with the outside, as many experiments have shown.

But we are left with the question of the 'power supply'. As a supporter of the digital/computational universe conjecture, I like to assume that everything must emerge from the universal computation (i.e., from spacetime): particles, matter, energy, up to life, and whatever else is going to emerge next. But don't we need some sort of energy to keep the computation running, step by step? How do we avoid the circularity of energy requiring energy to exist?

Perhaps a possible answer would be: we don't need energy to run the Computation because there is no actual, physical, Digital Computer that runs it, in the same way as we do not require power for an Analog Computer to run, say, the Navier-Stokes or Einstein equations, under an analog-based understanding of the universe.

An alternative answer, along the lines of Tegmark's Mathematical Universe Hypothesis, would be that the Computation does not unfold step by step: it is already all there, time being a sort of illusion (I wonder whether the fact that time and energy are conjugate variables plays a role here).

In any case, if we insist that the computational steps 'really happen', and that they require some non-null effort, hopefully not from metaphysical entities like angels (after all, angels don't sweat), it would be wise to keep it to the bare minimum. In this respect, a prefix-free universal Turing machine (as suggested by Hector), or a Turmite, or a network mobile automaton (as discussed in my contribution), all based on the operation of a simple, localized control head, are preferable to a cellular automaton, with its global operation mode. (By the way, to my knowledge, the first to push for the localized control-head idea in a physical context was S. Wolfram.)

Hector, James, what do you think?

quote:

These concepts will provide a framework for a discussion of the informational nature of reality. I will argue that if the universe were analog, then the world would likely be random, making it largely incomprehensible. The digital model has, however, an inherent beauty in its imposition of an upper limit and in the convergence in computational power to a maximal level of sophistication. Even if deterministic, that it is digital doesn't mean that the world is trivial or predictable, but rather that it is built up from operations that at the lowest scale are very simple but that at a higher scale look complex and even random, though only in appearance.

end of quote

Equating analog with randomness and incomprehensibility sidesteps the question of where the information from prior universes comes from. If one has a million prior universes in some sense contributing to a present universe, the analog nature of reality would merely be a statement of chaotic mixing leading to a new reformulation of the universe.

Nowhere would that imply randomness once the PRESENT universe is set up; i.e., that reformulation could be digital in its expression, with an analog mixing of prior-universe information added in as the information base for emergent gravity.


Hi Andrew,

Interesting remarks. I knew people might read me as opposing digital to random, or algorithmic to analog. However, the path I take, as established by the title, is to oppose algorithmic to random. You are right that analog need not mean a completely random world, just as I claim a digital world is neither trivial nor necessarily predictable (I can unpack this upon request; an extended version of this essay is on its way and will be made available on the arXiv).

The question, it seems to me, is whether one can associate randomness with the concept of analog. While the connection is not trivial, and is certainly not the definition of analog, the analog has often been associated with lower or higher degrees of uncertainty, either in the form of true indeterminism or in the form of fundamental impediments to infinitely precise measurement, both of which I would link to properties of both the analog and the random. For example, in dynamical systems, chaotic randomness is usually characterized by the divergence, over time, of trajectories starting from arbitrarily close initial configurations. And if the world is analog, one is also fundamentally unable to take measurements and always get the same value, as if some randomness were involved (at the level of the measurement uncertainty one could even use measurements as a kind of pseudo-random number generator).
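The divergence of nearby trajectories can be sketched in a few lines; the logistic map at r = 4 is my own illustrative choice of a chaotic system, not something drawn from the essay:

```python
# Two logistic-map trajectories, x -> 4x(1-x), started a tiny distance
# apart: the 1e-9 gap is amplified step by step until the two
# trajectories are effectively uncorrelated, even though the map is
# fully deterministic.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-9, 60)
gaps = [abs(x - y) for x, y in zip(a, b)]
# Early on the gap stays tiny; after roughly 30 iterations the initial
# error, roughly doubling each step, reaches order one.
print(gaps[5], max(gaps[30:]))
```

This is the sense in which finite measurement precision makes an analog, chaotic system look random: any uncertainty below the measurement resolution eventually dominates the observed value.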

Thanks for your comments, I will think further about them.


Dear Hector:

I liked your essay.

Regarding the last line of your essay, "Our reasoning and empiri-

cal findings suggest that the information in the world is the result of processes resembling computer programs rather than of dynamics characteristic of a more random, or analog, world."

I wonder if "... processes resembling computer programs..." will be eventually be found to be, "energies or intelligences that we currently are unable to understand, but indirect evidence points in that direction?"

Good luck!

joseph markell


Thanks Joseph,

Actually what you mention is close, I think, to Wolfram's concept of intelligence. In Wolfram's view, intelligence is a matter of identification rather than of sophistication. This is because of his Principle of Computational Equivalence (PCE), which says that most non-trivial computations turn out to be of equivalent sophistication. That may explain why we are unable to master, for example, the task of weather forecasting: in the end the weather is as sophisticated as we are (our minds), and we have no way to shortcut the computation of the weather (despite using supercomputers, we forecast at most a couple of days ahead, and are often still wrong about the next day). So in Wolfram's view the weather is an intelligent system with which we cannot interact because it is an intelligence of a different type.

PCE is interesting for artificial intelligence because it means we are surrounded by intelligence, yet we try so hard to create 'intelligent' systems when in fact what we are trying to create is intelligence that we recognize as such, i.e. intelligence of our own type. The interesting conclusion is that one does not really need to try that hard to design an intelligent system: one can just go and use one of the many around in the computational universe, and then perhaps make it behave as one wants (if what one wants is for it to behave like a human being), for which one would only need the system to somehow interact with us (and therefore have sensorial experience of the same type).

Thanks for your comment.


Dear Hector:

Thanks for your nice and detailed response. I can meld your ideas with the "portal" in my own essay.

Good luck!

Joseph Markell


I don't want to stray too far off-topic. However, I was reminded by the talk of weather and intelligence, of a few points about the integrity of science that I made on a blog back in 2008 in defense of my friend and collaborator Pat Frank in connection with the global warming debate. The fireworks begin at post number 27.

Tom


Hector,

I agree that an analog (undiscretized) world would be "largely incomprehensible" - it would be seen as a supersymmetric void with an unbroken symmetry (essentially "the nothing").

But we evidently have the obvious cosmos (the organized existence) and the less obvious chaos (the unorganized existence). This is the all-encompassing differentiation of reality. There is the differentiated, discretized corporeality and there is the undifferentiated, undiscretized void. There is the one and there is the zero.

Like the many, I agree with "the notion that the universe is digital" in "the way it unfolds" - because that is the idea of the cosmic or ordered existence. But your sidestep of the foundational question regarding "what the universe is made of" dampens your essay.

I think the big question regarding the existence includes both the 'what is discretized' and the 'how that is discretized'.

As for information, it is obvious that people forget what they forget - so, perhaps the all-encompassing existence also forgets in the super-symmetric entropic voidness...

Hector, perhaps you can also read and rate my essay. It would be interesting to find us together in the essay finals.

Rafael


Rafael,

Thanks for your comments. You are right that I didn't jump to making claims about what the universe may truly be made of, or whether it may turn out to be digital or not. I prefer to let readers draw that conclusion themselves, by noting that an algorithmic world would need to assume nothing more than digital computation. Yet my view is compatible with an analog algorithmic world. I explain, however, why I think that may not be the case: such a world would look more like the uncomputable (truly random in the strictest mathematical sense) digits of a Chaitin Omega number than the more ordered digits of the mathematical constant Pi, which are random-looking but deterministic and full of algorithmic structure, as I think may be the case for the real world.
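As a small aside, the idea of detecting structure behind apparent disorder can be sketched with an ordinary lossless compressor (zlib is my own illustrative choice here; note that such statistical compressors capture only statistical regularity, so they would also fail on the digits of Pi, whose structure is algorithmic rather than statistical):

```python
import random
import zlib

# A trivially structured sequence versus a pseudo-random one of the
# same length: the compressor finds the repetition in the first but
# can do essentially nothing with the second.
structured = b"01" * 1000                      # obvious repetition
random.seed(0)                                 # reproducible pseudo-random bytes
noise = bytes(random.randrange(256) for _ in range(2000))

len_structured = len(zlib.compress(structured))
len_noise = len(zlib.compress(noise))
print(len_structured, len_noise)  # the structured string compresses far better
```

That limitation is precisely why measures based on algorithmic probability, rather than compression alone, are needed to separate Pi-like sequences from Omega-like ones.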

I don't think most people hold that the universe is digital in the way it unfolds; I think this is a rather distinctive view of mine, or if not a novel view then a novel statistical treatment, based on an empirical distribution and the concept of algorithmic probability, undertaken in my research project. Yet I don't conclude from there that the world is digital, even if I (as do the results) suggest that there is no need to think it is analog.

Sincerely.


Hector, I added a remark here, but a bit up, in the Crowell-Putnam posts of Feb. 22. In the absence of an automatic notification service, I just wanted to let you know. Tommaso


Hi Tommaso,

I had already found your other message. Thanks.

Yes, it is a pity there is no automatic notification service. However, in the left column there is a useful 'most recent first' sorting checkbox that I recently found.

I will answer your other message. By the way, I also wrote you something a week or so ago in your own discussion section, concerning the question of taking data and code as different kinds of entities, which I argue shouldn't be the case after Turing's work.

Cheers.


Dear Hector Zenil,

You say above that "certain phenomena can be modeled assuming that matter and space exist as a continuum, meaning that matter is continuously distributed over an entire region of space. [but] matter is composed of molecules and atoms, separated by empty space."

Then you say to Lawrence: "I have difficulties seeing how the world could be digital and analog at the...


Dear Hector

I congratulate you on very good essay, though disagree with many of your fundamental tenets.

From your cv I quite understand your view that "Physical laws, like computer programs, make things happen." I object on the basis that such 'entities' are not physically causal and can only propagate the non-causal. They can do much of value: describe, predict, cause switches to throw and much more, but I feel it endangers our understanding of causality to claim too much.

Surely "Structure from randomness BY iterated computation" is a bounded view. Is it not enough to claim just to 'explain' some structure from randomness, aided by computation?

On causality I have proposed that the apparent lack of quantum causality is only due to our lack of understanding. As Einstein said, we don't yet understand a thousandth of one percent... Is it not arrogant to assume we know and understand, and that we can judge it on that basis?

I feel you have the cart before the horse in saying information 'makes' a cup a cup and a human a human. I'm even tempted to suggest you may have spent too much time playing at computers in your youth! Yet I do understand what I hope you mean, a very 'catholic' translation of 'makes'. As a supporter of reality (and of Edwin's view above) I'd strongly wish to preserve the real meaning of 'makes'.

And I do see your "matter and space exist as a continuum" differently to Edwin and applaud the view I see. I have derived, and find it empirically consistent, that in particle terms the discrete condenses from the (sub-'matter') continuum to implement 'change'. Indeed this goes far enough to resolve both SR and GR with QM, consistent with Edwin's and other good essays here. I'd be very pleased if you'd give your views on my essay, but warn you may be shocked by its naive reality/locality empiricism, perhaps in another universe to yours. (Though the discrete field model involved logically derives recycled sequential, not parallel, universes. Yet would you believe my own search beyond maths turned out to be on the same grounds Alice in Wonderland was written!?)

Through all this I am pleased to agree as an essay it deserves a front runner status, and enjoyed an entirely different viewpoint on nature to my own.

Best wishes

Peter Jackson


Dear Peter,

Thank you for sharing and for your encouragement.

The claim that only information makes a cup a cup rather than a human being rests on the fact that both human beings and cups are made of exactly the same elementary particles, and it is nothing but the way they are arranged that makes one or the other. But let me know how that could be wrong from a purely materialist point of view.

Sincerely.


Hector nice essay.

From an artistic perspective, I view the behavior of the universe as a reproductive system. Interestingly enough, whether you view a reproductive system as a set of random events or as a predictable (algorithmic) set of events, the resulting distribution is pretty much the same in either case.

That is the link to the Fibonacci series I allude to in my essay. Check it out if you get a chance at http://fqxi.org/community/forum/topic/893.

Good Luck

Pete


Thanks Pete. Very nice and interesting essay; I like your artistic point of view.

I discuss a bit the relationship of an object vs. its pictorial representation in this paper (joint work with Delahaye and Gaucherel), which you may find interesting as related to the concept of physical complexity:

Image Characterization and Classification by Physical Complexity, available online at http://arxiv.org/pdf/1006.0051

Best regards.


If sorting the essays by community rating is any indication: You are winning


Dear dr ZENIL,

Your essay is very understandable and readable, but I don't see the universe as a computer. I quite understand that humans created this machine, and for that they had to design a way in which this machine really could make computations; these computations resulted in images and sounds that our senses of sight and hearing could interpret as a virtual reality. The point is still the same, it is "to be or not to be": the virtual universe we are creating out of the zeros and the ones is just ONE possibility that we have as mankind to CREATE. As I put it in my essay, when we are in a position to create a consciousness that exists in this virtual reality, we move one step further in the understanding of our own consciousness; this new consciousness, however, has all the restrictions of the DIGITAL essence, it is a second-hand reality from our point of view.

Analogue may also mean ONE, the continuity of the whole, not made of two units. When science is able to construct a quantum "computer" there will be an infinity of superpositions to choose from, and all the answers to all the possible questions will in fact be "present" even when it is not connected to the electricity; we can see that as a ONE, and to achieve it we have to bring the choices back to one...

kind regards

Wilhelmus de Wilde


Good afternoon (or whatever time you may read this post), Hector Zenil

I did not at all want to offend you in my post; on the contrary, you gave me a lot of reasons to continue the search for understanding our universe, and you explained very clearly how the "technical" side of our community is searching for more knowledge.

I would like to add to my post that in my opinion there is a difference between Intelligence and Consciousness. With our total intelligence we can construct the LHC, but it is our consciousness that always asks WHY, like a child that won't stop asking WHY; the HOW is the intelligence and the WHY our consciousness. The intelligence can be constructed by our (Turing) machines, but the consciousness we have until now been unable to reproduce, so it is perhaps not a digital "substance", not reproducible in the digital way (?), like a piece of art: you can copy it, but the copy will never be the original.

The way we experience "reality" is different for everyone, but seems to be analogue for a majority (ana = from, logos = reason), so the way our reason interprets it becomes reality; but this also means that there are multiple interpretations, of which the digital interpretation is only one.

Perhaps my interpretation is not that of a pure scientist like yourself, but I think this is the richness of the rainbow of thoughts so beautifully expressed in this contest.

I wish you a lot of luck (digital ?) in the contest

and

best regards

Wilhelmus de Wilde


Hello, Hector

Thanks for a well-written essay, which I read with keen interest. I like your (French!) style of clear analysis and expression. The strategy of comparing the evolution of patterns in nature and in computer programs seems very promising. I do have some comments about certain passages, and would like to know your response to them.

“Lossless compressibility” [p9] may...


Hello Dan,

Thanks for your kind comments. Here are my answers to your interesting questions:

You say:

"“Lossless compressibility” [p9] may apply perfectly to a pattern that is already defined (like files in your computer), but only imperfectly to natural patterns (data from observations). Your result that “most empirical data carries an algorithmic signal” seems simply...


Thank you so much, Hector, for your thoughtful and patient replies. I haven’t been able to access McAllister’s 2003 article, but I did read the Twardy et al. reply, which I think fairly refutes McAllister’s claims when interpreted in narrow terms. However, I did read a more recent piece by McAllister (2009), “What do patterns in empirical data tell us about the structure of the world”,...


Our reasoning and empirical findings suggest that the information in the world is the result of processes resembling computer programs rather than of dynamics characteristic of a more random, or analog, world.

Hector,

This is a well-supported perspective, ably argued.

My prejudice is that the above only proves humankind's approach to understanding reality but my argument tends to lack your many details.

Jim Hoover


Hi Hector

You said; "The claim that only information makes a cup a cup rather than a human being is because both human beings and cups are made exactly of the same elementary particles and it is nothing but the way they are arranged that make one or the other. But let me know how that could be wrong from a purely materialist point of view."

I think the word 'makes' is the key, as it implies causality. I must entirely agree that 'information' may be a good word to describe the difference, but the whole gist of my own thesis here is that, while we can 'describe' something from any viewpoint, a description is once removed from the reality of the thing, as so well described by Georgina in the most foundational terms. Correcting only this seems to bring Occam's razor into action.

Linguistic semantics apart, information is the difference in superposed wave patterns, causality is the interaction of it, and it is only allowed by quantization. Therefore if either waves or particles were removed we'd be in the proverbial!

Do have a look at this easy-read paper, with photographic evidence, if you're interested in the entertaining logical extension: http://vixra.org/abs/1102.0016

Best wishes

Peter

report post as inappropriate


Dear Dr. Hector Zenil,

Since it is a ''leading essay'', I must check it for consistency; I hope to find novel ideas in physics and clear proofs about the nature of the Universe. The essay seems to contain two separate stories, about the origin of the universe and about the algorithmic nature of the world. Since this is a contest about the nature of reality, let's begin with the proofs about the...

view entire post


report post as inappropriate

Constantin,

Thanks for your comments. I find it difficult to address your arguments against my essay one by one, because I think there is a misreading on your part at several levels, but I will do my best to address some of the most fundamental.

I can say, as I said before, that nothing in my essay pretends to be a mathematical (or even physical) proof; it is statistical evidence in...

view entire post


Dear Dr. Hector Zenil,

Professional scientists often send fantastic essays to our contest, because this contest encourages people to share with others to trigger interesting discussions. We do not write holy papers, so essays need revision. Thus, even if your essay has errors, it is not a catastrophe.

1) You write: ''a particle may have these properties only when...

view entire post


report post as inappropriate

Constantin,

The universe is capable of digital computation because there are digital computers in it (you are typing on one). I don't see how that could be wrong. The question is therefore whether the universe _only_ computes at the digital level.

On the other hand, respected quantum scientists think that an algorithmic world is possible and compatible with quantum mechanics, e.g. Seth Lloyd. But as you say, if you deny mainstream science it will be difficult to argue against any of your arguments. You make interesting points but they could be read more easily if they weren't so categorical.

I don't see anywhere in my essay what you say I said on the compressibility of data as a direct proof of the discreteness of the world. What I wrote is "One may wonder whether the lossless compressibility of data is in any sense an indication of the discreteness of the world." then I make my case that it may be an indication, not a proof.

Concerning your neutrino argument: we don't yet know exactly what particles, if any, may be responsible for what we identify as gravitation. So when you say that a neutrino is not interacting with anything else, and take that as proof, against my claim (which was not offered as a proof), that particles may not be able to carry any information, the argument is not that convincing.

The math on which my arguments are based won't be wrong in 10 or 20 years; what might be wrong is the connection I make between the math, its consequences, and the real world, but that is what the contest is about. Still, I offer what I think is evidence in favor of the algorithmic nature of the world.

Thanks.


Dear Hector,

I've made some progress with my novel idea of a helical screw in empty space as a model for the graviton. I've posted in another two leading essays so I'll copy and paste it here.

On day-by-day thinking about the novel idea of a mechanical Archimedes screw in empty space representing the force of gravity by gravitons, I have deduced an explanation for the galaxy rotation curve anomaly.

The helical screw model gives matter a new fundamental shape and dynamics which the standard model lacks imo. This non-spherical emission of gravitons is in stark contrast to the Newtonian/Einsteinian acceptance that "all things exert a gravitational field equally in all directions". This asymmetry of the gravitational field allows the stars to experience a greater pull towards the galactic plane, due to their rotation giving more order to the inner fluid matter of the stellar core. The structure of both the emitter and the absorber of the gravity particles is important. It also has implications for hidden matter at the centre of galaxies.

I've given the idea some more thought and come to the conclusion that the stars furthest from the galactic centre must have a more 'bipolar nature' than the matter of the stars of the inner halo, presumably. This is the reason they have wandered towards the galactic plane whilst the halo stars have not. The outer stars' configuration means they experience a greater interaction with the flux pattern of the graviton field. Are the stars of the outer arms simply spinning faster? We are on the outer edge of a spiral arm, and so this would fit with the hypothesis. Our sun could have a spin which is higher than that of the average halo star. This relationship between spin and distance from the galactic centre is a fundamental feature which ties in with the suggested mechanism of their creation.

All that is needed is an additional factor of stellar spin speed, as well as a star's mass and distance from the galactic centre. The relationship should then give calculated values which match those observed.

Best wishes,

Alan Lowey

report post as inappropriate


Dear Zenil,

I repeat my post from Mr. Shing's blog, but add that my theory is 100% information based, random (the main point) and algorithmic, since I implement it using a computer program.

I was so happy to read your essay since it is very much related to my own theory:

http://www.qsa.netne.net

qsa

I think all the ideas of John Benavides, Tommaso Bolognesi, D'Ariano, Zenil and a few others are very much related. My website has not been updated, but here is the abstract of my upcoming paper.

In this letter I derive the laws of nature from the hypothesis that "Nature is made out of mathematics, literally". I present a method to design a universe using simple rules which turns out to have properties similar to our reality. Particles are modeled as ends of lines, one end confined to a small region and the other going all over the universe. The Coulomb force (when lines cross) and gravity (when lines meet) appear naturally, and they are two aspects of one process involving the interaction of these lines; then, by calculating the expectation values for positions, I am able to calculate what appears to be the fine-structure constant. Gravity also appears, with surprising results: it becomes repulsive when the distance is very great or very small. At this time I have done only a full 1D simulation with interaction, and 2D, 3D and indeed nD without interaction. I am working on 2D interaction now, and it is already showing very surprising results. I can see a hint of the strong and the electroweak forces. Time and space could be looked upon as derived quantities. I show that not only is nature discrete but so is mathematics, since dx can only approach zero but never is zero. In my model the ultimate irony is that our reality came about because there is only one way to design a dynamic universe, and that only one allowed our existence. I guess you could say fortunately or unfortunately depending on how one's .

report post as inappropriate


A listing of the program in C++ for EM and gravity:

// g.cpp : Defines the entry point for the console application.
// (The header names between angle brackets were stripped by the forum's
// formatting; the includes below are a plausible reconstruction.)
#include "stdafx.h"
#include <iostream>  // std::cout
#include <fstream>   // std::ofstream, std::ios
#include <cstdlib>   // rand, srand
#include <ctime>     // time
#include <cmath>

using namespace std;

// Global arrays
double S[951000];
double Po[951000];
double Lo[951000];
double Sy[951000];
double Poy[951000];
double Loy[951000];
double ex[500];
double ex1[500];
double fr[500];

int main() {
    srand(time(0));
    double i=0;
    double g=0;
    double frf;
    double dist;
    long l;
    long d1;
    long st1;
    long d0;
    long st0;
    double f;
    double f1;
    double edx;
    double edx1;
    long m;
    long p;
    long li;
    long p1;
    long li1;
    double en;
    double alpha = 0.0;
    double a1=0.0;
    double a2=0.0;
    double a3=0.0;
    double avg=0;
    double cn =0;
    // double enf;
    double intr;

    l = 7000;  // Universe size
    d1 = 200;  // Particle 1 size
    d0 = d1;   // Particle 2 size
    double km = 20;        // Setting the interval
    double kj = 20000000;  // # of random throws
    intr = ((l)/((km*2.5)));
    double d0div = d0 ;
    cout // (the rest of the listing was cut off in the original post)

report post as inappropriate


Dear Dr. Hector Zenil,

As community score leader please read my essay

http://www.fqxi.org/community/forum/topic/946

report post as inappropriate


Dear Hector Zenil

Zeilinger also has similar ideas about (objective) randomness. (And Neil Bates in this contest.) Maybe it would be useful if you compared them with yours. Otherwise, it is a clearly written essay.

Your essay is so good, in my view, that I used it twice as a reference.

http://vixra.org/pdf/1103.0025v1.pdf

I was too late for this contest, so I am sending the link here.

Regards

report post as inappropriate


Dear Janko,

Interesting, thanks for your comments and for citing my essay. I shall read your paper in further detail.

As for Zeilinger, my position is similar to the opinions expressed in response to Zeilinger's in 'The Message of the Quantum?' by Daumer et al. (available online: http://www.maphy.uni-tuebingen.de/members/rotu/papers/zei.pdf). Zeilinger claims that quantum randomness is intrinsically indeterministic and that experiments violating Bell's inequality imply that some properties do not exist until measured. These claims are, however, based on a particular (yet mainstream) interpretation of quantum mechanics, from which he jumps to conclusions relying on various no-go or no-hidden-variables theorems (due to people such as von Neumann, Bell, Kochen and Specker), which are supposed to show that quantum randomness is truly indeterministic.

And although I share with Daumer et al. the belief that Wheeler did not shed much light on the issue with his rather obscure treatment of information as related to, or as more fundamental than, physics, I do not share Daumer et al.'s claims about what they think is wrong with the informational worldview. As they say, Wheeler's remarkable suggestion was that physics is only about information, or that the physical world itself is information. I rather think that the next level of unification (after the unification of other previously unrelated concepts in science, such as electricity and magnetism, light and electromagnetism, and energy and mass, to mention a few) is between information and physics (and ultimately, as a consequence, computation), as has already started to be the case (e.g. between statistical mechanics and information theory).

No interpretation of quantum mechanics rules out the possibility of deterministic randomness, even at the quantum level. Some colleagues, however, have interesting results establishing that hidden-variables theories may require many more resources in memory to keep up with known quantum phenomena. In other words, hidden-variables theories are more expensive to assume, and the memory needed to simulate what happens in the quantum world grows about as badly as it could for certain deterministic machines. But still, that does not rule out other possibilities, not even the hidden-variables theories, even if they are not efficient in traditional terms.


Dear Hector

Here is also my attempt to explain quantum randomness.

http://www.fqxi.org/community/forum/topic/571

(Contest one year ago)

I think that we need to explain all of physics, including quantum gravity and consciousness. Quantum mechanics is not yet a complete theory.

We do not need hidden variables as additional parameters, but the connections between known physical parameters should be clear, and they are not yet.

So, I believe in quantum consciousness, and my model for it is simple: additional very small elementary particles.

Regards

p.s. I also wrote one non-speculative article:

http://vixra.org/abs/1012.0006

It is the basis for my above-mentioned article:

http://vixra.org/pdf/1103.0025v1.pdf

I hope to find someone to be endorser in arXiv.

report post as inappropriate


Hector,

I read about you and your theory here: ''nature is seen as processing information computing the laws of physics and everything we see around us, including all sorts of complex things like life. In this view, the universe would be computing itself and our computers would therefore be doing nothing but reprogramming a part of the universe to make it compute what we want to compute''.

To prove the ''algorithmic nature of the world'' you must first explain quantum mechanics and all forces, including gravity, by your algorithms and computation. I don't see any algorithms today for quantum mechanics and gravity in your papers; therefore this theory is a fantastic DREAM only. You'll never explain Heisenberg uncertainty by algorithms and computation, because you must know the complete quantum information, position-momentum, BEFORE events occur to process the motion of a particle; this is forbidden by Quantum Mechanics and Black Hole physics. Since your Universe is algorithmic, you need a gigantic God-like computation able to run programs/algorithms for every particle and body.

Nature is really simple, but your theory insists on making it complicated: you need algorithms and computation for every particle. Where is this gigantic computer, outside of the Universe? This theory denies Free Will: since the world is ruled by algorithms and computation, all our future was programmed before we were born. In this contest we are looking for theories able to SIMPLIFY Nature, not to complicate it. It is one of the most fantastic theories, contradicting quantum mechanics, conservation laws and black-hole theories; it is surprising that people support such fantasy. Without algorithms and computation nature is very simple. It is a crime against Science to support false theories.

Constantin

report post as inappropriate


Constantin,

You have come back to the very first arguments you presented before. I'm now convinced that the discussion will be fruitless if you persist in claiming to hold all the true answers to what are commonly considered open questions in science.

It seems you keep misreading the essay at several levels. Just to stress again the main hypothesis of my worldview: I'm using what is called Levin's semi-measure, a tool that has also been called the universal distribution (see the Kirchherr and Vitanyi paper online: http://homepages.cwi.nl/~paulv/papers/mathint97.ps), because it was proven (by Levin himself) that it dominates any other semi-computable measure. This tool also captures and formalizes Occam's razor, which, as you may know, is ill suited to complicating things because, by definition, it favors simplicity. My worldview is the simplest possible among the algorithmic explanations of the world. What I do is calculate an experimental approximation of Levin's distribution and compare the result to processes in the real world, and then I discuss the similarities and discrepancies. I don't need to explain what happens inside black holes, because not even current mainstream physics does, and it is beyond my current scope of research.

As for the quantum phenomena, the issues with black holes, the teleportation that you think is vital for humanity, and other claims of the same sort, I invite you to re-read our previous messages.

Thanks.

You came back to the very first arguments you presented before. I'm now convinced that the discussion will be fruitless if you persevere to claim to hold all true answers, which are commonly considered open questions in science.

It seems you keep on misreading the essay at several levels. Just to stress again the main hypothesis of my worldview, I'm using what is called Levin's semi-mesure, this is a tool that has been called also the universal distribution (see Kirchherr and Vitanyi paper online: http://homepages.cwi.nl/~paulv/papers/mathint97.ps) because it was proven (by Levin himself) that it dominates any other semi-computable measure. This tool also captures and formalizes Occam's razor, which as you may know is ill suited to complicate things because, by definition, it favors simplicity. My worldview is the simplest possible among the algorithmic explanations of the world. What I do is to calculate an experimental approximation of Levin's distribution and compare the result to the processes in the real-world, then I discuss the similarities and discrepancies. I don't need to explain what happens inside of black holes because not even current mainstream physics does, and it is beyond my current scope of research.

As for the quantum phenomena, the issues with black holes, the teleportation that you think is vital for humanity, and other claims of the same sort, I invite you to re-read our previous messages.

Thanks.

Hector,

My theory can explain at least teleportation, but your theory can explain NOTHING in physics. Your theory is only a mathematical construct, and I have already written in this contest that all mathematical proofs in physics papers must be held in DOUBT. Mathematics is often used as a shield to hide false theories. First of all, to create a real Physical theory you must include quantum phenomena, black holes and teleportation, not just mathematical models.

Your answer above is an attempt to suppress questions with a stream of senseless information. I read about your group and your paper ''On the algorithmic nature of the world''; it is a theory about the Computational nature of the Universe. Your friends, like Janko Kokosar, try to support you in order to create the illusion that it is a very SCIENTIFIC paper.

Dear readers, it is a false theory, forbidden by Quantum Mechanics and Black Hole theory. It is a crime to vote for false theories. We need powerful Science and Technology to survive.

Constantin

Dear Constantin,

I do share your concern that the ''algorithmic nature of the world'' has not been demonstrated convincingly so far. That is why I invite you to check out my website, where I derive the laws of QM, QFT and QG from just such an algorithm using a very simple program. The secret was in the postulate; everything else, including the algorithmic approach, just followed naturally. The website has not been updated, but I will send you the details if you are interested.

http://www.qsa.netne.net

Dear Hector Zenil,

To me your essay is written to be a bit too easily understandable, but it is not yet convincing. Perhaps I am expecting too much from experts in computer science and probability like you. So far I do not see any chance of your rather speculative approach becoming foundational. It reminds me of "in the beginning was the word": Big Bang = white noise, and then symmetry breaking made it flesh. What about other colors of noise, e.g. brown noise?

Why and how did symmetry breaking start just with hydrogen atoms?

In the 2nd contest I made a comment, which went unanswered, on the essay by Stephen Wolfram:

I argued that while digital computers are superior to analog computers, the latter are closer to reality than differential equations. I meant that they are bound to real time and in particular to its direction. You did not refer to this matter, and I can guess why: your procedures of computing also tend to be natural in that they perform a series of forward steps, even in for ... do loops, never backward in time. You presumably overlooked this when you equated the time-symmetric laws of nature with computer programs.

Regards,

Eckard

Eckard,

I never wrote 'Big Bang = white noise', not only because I think it is an oversimplification of something that deserves further discussion, but also because it is not my belief. White noise is usually identified with indeterministic or 'true' (in some intuitive sense) randomness, yet I think all randomness is just complicated patterns resulting from the application of algorithmic rules (even though I am fully aware this view contradicts the Copenhagen interpretation of quantum mechanics, while being compatible with other interpretations).
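
The point that apparent randomness can be the output of a simple deterministic rule is easy to illustrate with a standard textbook example (not taken from the essay): rule 30, a one-line cellular-automaton update whose center column looks statistically random.

```python
# A fixed deterministic rule producing random-looking output: the rule 30
# cellular automaton (new cell = left XOR (center OR right)), evolved from
# a single 1-cell. Its center column is a classic pseudorandom bit source.
def rule30_center_bits(n_bits):
    """Return the first n_bits of rule 30's center column, using a row wide
    enough that boundary effects never reach the center."""
    width = 2 * n_bits + 3
    center = width // 2
    cells = [0] * width
    cells[center] = 1
    bits = []
    for _ in range(n_bits):
        bits.append(cells[center])
        nxt = [0] * width
        for i in range(1, width - 1):
            nxt[i] = cells[i - 1] ^ (cells[i] | cells[i + 1])
        cells = nxt
    return bits

bits = rule30_center_bits(1024)
ones = sum(bits)  # roughly balanced, like a fair coin, despite full determinism
```

Running the sketch twice gives identical bits, so the 'randomness' here is entirely a complicated pattern generated by a rule, in exactly the sense described above.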

Now, the question of other colors of noise is very interesting. Many noise colors, such as pink noise (aka 1/ƒ noise), follow a power-law frequency distribution, as you may know, in quite an organized fashion. Power-law distributions are often an indication that the source is not random in nature; distributions associated with random processes are typically uniform or Gaussian. While theories of pink noise (and other colors) are still a matter of current research, its typical power-law shape is compatible with the empirical algorithmic distributions we generated from algorithmic sources (and with the theoretical power-law universal distribution). As you can read in my essay, our distributions from running computer programs generate about the same kind of randomness, in about the same frequency. I think this kind of noise may be explained as the tail of the algorithmic probability distribution: the part that looks most random to us, but that actually follows the most organized top, which we identify as the structured part corresponding to structured signals.
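
The power-law shape mentioned here can be checked numerically with a generic signal-processing sketch (illustrative only, not the algorithmic-distribution experiment itself): shape white noise so its power falls off as 1/f, then fit the log-log slope of the resulting spectrum.

```python
# Pink (1/f) noise by spectral shaping, contrasted with white noise's flat
# spectrum: a power-law spectrum shows up as a straight line of slope ~ -1
# in log-log coordinates, while white noise fits a slope near 0.
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16

def pink_noise(n):
    """Shape white Gaussian noise so power falls off as 1/f (pink noise)."""
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]          # avoid dividing by zero at the DC bin
    spectrum /= np.sqrt(freqs)   # power ~ 1/f  <=>  amplitude ~ 1/sqrt(f)
    return np.fft.irfft(spectrum, n)

def spectral_slope(x):
    """Slope of log-power vs log-frequency; ~0 for white noise, ~-1 for pink."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    mask = freqs > 0
    slope, _intercept = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
    return slope

pink_slope = spectral_slope(pink_noise(n))            # close to -1
white_slope = spectral_slope(rng.standard_normal(n))  # close to 0
```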

As you may know, pink noise is present throughout data series; it has a tendency to occur in natural physical systems, from almost all electronic devices to the electromagnetic radiation of astronomical bodies. In biological systems it is also present in some statistics of DNA sequences, a source that we also analyzed with respect to its frequency distribution of patterns (tuples of different sizes), finding some correlation with our algorithmic distributions (you can also read about some of the processes acting on DNA, algorithmic in nature, which are likely responsible for at least some of the shape of the overall distribution in DNA sequences).

Thanks.

Dear Hector

You have written a very interesting essay. In my essay I propose an idea of how we can understand emergence in computation, which can be used to understand how a classical reality emerges from a quantum base. In my approach we can introduce the computational information perspective that you are proposing, and at the same time retain all the classical formalism. I would like to hear your opinions about it.

Regards,

J. Benavides.

I'd like to summarize my view in a few paragraphs, if that is possible from an already synthesized essay:

My view aims to provide a purely informational explanation of the organized structures we find all over the world, from the formation of galaxies to the appearance of an organized phenomenon such as life, despite the second law of thermodynamics, with its principle of increasing entropy, predicting rather the contrary. This apparent contradiction is usually explained by pointing to systems that manage to decrease entropy locally while increasing the entropy of their surroundings.

Two tools from the theory of algorithmic information are relevant to explaining this presence of organized structures in the world, without necessarily violating thermodynamic principles but actually providing a reasonable explanation of this phenomenon (i.e. the entropy derivation vs. the presence of organized structures), with the only assumption being that what happens in the universe is the result of the application of rules (rules that, layer after layer, may look very complicated but are simple in their origin, because they are of the kind that can be carried out by digital computers). Both the theory of algorithmic probability, with its concept of Levin's universal distribution, and Bennett's logical depth then provide an explanation of the organized universe in which we seem to live, with time in the universe seen as computational time.

As I argue, the view that the world is algorithmic in the terms described above is supported by at least two indicators. One is the compressibility of data in our world, as demonstrated by the success of data compression (in both digital and analog repositories). Not only data, but the physical laws governing our reality have turned out to be compressible, by the models and formulae that scientists use to shortcut physical phenomena and make predictions about the world. The second indicator supporting the algorithmic view is the distribution of patterns in our world when compared (as we did) to the distribution of patterns produced by purely algorithmic worlds (using abstract machines).
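
The first indicator, compressibility, can be demonstrated directly: compressed size under a general-purpose compressor such as zlib is a standard computable upper bound on Kolmogorov complexity, so structured data should compress far better than algorithmically random data. The sample inputs below are invented for illustration.

```python
# Compressed size as a (computable, upper-bound) stand-in for Kolmogorov
# complexity: structured data compresses well, random data does not.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more structure."""
    return len(zlib.compress(data, 9)) / len(data)

repetitive = b"01" * 5000                                  # a trivial rule
counting = " ".join(str(i) for i in range(2000)).encode()  # structured, less trivial
random_bytes = os.urandom(10_000)                          # incompressible (w.h.p.)

r_rep = compression_ratio(repetitive)
r_cnt = compression_ratio(counting)
r_rnd = compression_ratio(random_bytes)
# Expected ordering: r_rep < r_cnt < r_rnd, with r_rnd near (or above) 1.
```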

We do not necessarily jump to the conclusion that the world is digital, but we do claim that the kind of rules producing these kinds of distributions can be carried out by digital computation. Therefore, according to this hypothesis, there is no need to assume an analog universe when it comes to explaining the organization in the world, since assuming an analog universe would be regarded, under this view, as an unnecessary complication of the theory. Yet this algorithmic view doesn't rule out the possibility of an analog algorithmic world.

Dear Hector

What is your opinion about this site?

http://www.idsia.ch/~juergen/computeruniverse.html

Thank you in advance,

Yuri

Dear Sir,

It is fashionable among scientists to express their views incomprehensibly to retain their importance. However, since one of the criteria for this competition is “Accessible to a diverse, well-educated but non-specialist audience”, we would like you to kindly clarify what is meant by: “start from nothing: the state of the universe with all its matter and energy squeezed into...

Dear Hector,

Congratulations on your dedication to the competition and your much deserved top-ten placing. I have a question that has been bugging me, which, by the way, I've also posed to all the top front runners:

Q: Coulomb's Law of electrostatics was modelled by Maxwell by mechanical means after his mathematical deductions, as an added verification (thanks for that bit of info, Edwin), which I highly admire. To me, this gives his equation some substance. I have a problem with the laws of gravity, though, especially the mathematical representation that "every object attracts every other object equally in all directions." The 'fabric' of spacetime model of gravity doesn't lend itself to explaining the law of electrostatics. Coulomb's law denotes two types of matter, one 'charged' positive and the opposite type 'charged' negative. An Archimedes screw model for the graviton can explain -both- the gravity law and the electrostatic law, whilst the 'fabric' of spacetime can't. Doesn't this by definition make the helical screw model better than anything else that has been suggested for the mechanism of the gravity force? Otherwise the unification of all the forces is an impossibility, imo. Do you have an opinion on my analysis at all?

Best wishes,

Alan

Dear Sir,

You have raised a very important question. We have discussed it below the essay of Mr. Ian Durham. Here we reproduce it for you.

The latest finding of LHC is that the Universe was created from such a super-fluid and not gases. The confined field also interacts with the Universal field due to difference in density. This in turn modifies the nature of interactions at...

Dear Sir,

We would like to further clarify as follows:

According to our theory, gravity is a composite of seven forces that are generated based on their charge. Thus, they are related to charge interactions. But we do not accept Coulomb's law. We have a different theory for it, which we derive from fundamental principles. In Coulomb's law, F = k Q1 Q2 / d^2. In a charge-neutral object, either Q1 or Q2 will be zero, reducing the whole equation to zero. This implies that no interaction is possible between a charged object and a charge-neutral object. But this is contrary to experience. Hence the format of Coulomb's law is wrong.
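
For reference, the standard form of Coulomb's law being disputed here is evaluated as below; note that it applies to idealized point charges, and the observed attraction of neutral but polarizable bodies to charged ones is conventionally explained by induced charges, which lie outside the point-charge formula. The values used are illustrative.

```python
# Coulomb's law for idealized point charges: F = k * q1 * q2 / d**2.
K = 8.9875517923e9  # Coulomb constant, in N*m^2/C^2

def coulomb_force(q1: float, q2: float, d: float) -> float:
    """Force in newtons between point charges q1, q2 (in coulombs)
    separated by distance d (in meters); sign follows the sign of q1*q2."""
    return K * q1 * q2 / d ** 2

f_attract = coulomb_force(1e-6, -1e-6, 0.01)  # two 1 uC charges, 1 cm apart
f_neutral = coulomb_force(0.0, 1e-6, 0.01)    # zero for a zero point charge
```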

As we have repeatedly described, atoms can be stable only when they are slightly negatively charged, which makes the force directed towards the nucleus dominate the opposite force, though this is not apparent from outside. Hence we do not experience it. We have theoretically derived the values of the electric charge of protons, neutrons and electrons as +10/11, -1/11 and -1. The negative sign indicates that the net force is directed towards the nucleus. Charge interaction takes place when a particle tries to attain equilibrium by coupling with another particle having a similar charge. That the proton has a +10/11 charge means it is deficient by a -1/11 charge. The general principle is that same charge attracts. Thus, it interacts with the negative charge of electrons. The resultant hydrogen atom has a net charge of -1/11 and is thus highly reactive. This -1/11 charge interacts with that of the neutron to form stable particles. These interactions can be of four types.

Positive + positive = explosive. By this we mean the fusion reaction that leads to the unleashing of huge amounts of energy. Its opposite is also true in the case of fission, but since it is a reduction, there is less energy release.

Positive + negative (total interaction) = internally creative (increased atomic number). This means that if one proton and one electron are added to the atom, the atomic number goes up.

Positive + negative (partial interaction) = externally creative (becomes an ion). This means that if one proton or one electron is added to the atom, the atom becomes ionic.

Negative + negative = no reaction. What this actually means is that although there will be no reaction between the two negatively charged particles, they will appear to repel each other, since their nature is confinement. Just as two pots that confine water cannot occupy the same place, if one is placed near the other with some areas overlapping, the two repel each other. This is shown in the “Wheeler’s Aharonov–Bohm experiment”.

Regards,

basudeba

Dear Hector

Thanks for a lucid and very interesting paper. I would be most interested in how you would apply your expertise and approach (information theory and programming) to my earlier 2005 Beautiful Universe theory on which my present fqxi paper is based. The following is my reaction to some of your well-considered statements and ideas.

Your symmetry-breaking homochirality finds a very precise physical explanation in my theory: it is the rotation in one direction of the fundamental building blocks, or nodes, of a universal lattice carrying only one type of information, namely angular momentum in units of h, with the axis of rotation at a given spherical angle in a micro Bloch sphere.

"If the world were digital at the lowest scale one would end up seeing nothing but strings of bits." Not necessarily in a lattice the bits would be structured in crystal-like arrangement (either itself creating 3D space, or embedded in 3 hidden space dimensions) not one-D strings.

"What surprises us about the quantum world is precisely its lack of the causality that we see everywhere else and are so used to. But it is the interaction and its causal history that carries all the memory of the system" In my theory randomness is an artifact of the orderly spread of momentum through the lattice by a process resembling diffusion.

"if space is informational at its deepest level, if information is even more fundamental than the matter of which it is made and the physical laws governing that matter, then the question of whether these effects violate physical laws may be irrelevant. Producing random bits in a deterministic universe, where all events are the cause of other events, would actually be very expensive" In my theory "the medium is the message" the bits making up everything interact causally and locally - information is most efficiently transmitted in the form of angular momentum in units of h from node to node. This is impossible to understand or accept using present-day notions of physics, that is why I proposed specific steps how to reverse-engineer GR, space-time and quantum probability into a simpler more fundamental theory.

"Unveiling the machinery" Take that, Feynman! He famously avoided searching for a machinery that creates quantum phenomena.

Your research of Ref. 14 sounds very interesting. Is there an online version?

Best wishes from Vladimir

Dear Vladimir,

I will get back to you later. I couldn't wait, however, to let you know that the choice of the subtitle 'Unveiling the machinery' was inspired precisely by a Feynman quotation from one of his Messenger Lectures at Cornell:

"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time ... So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequerboard with all its apparent complexities."

Richard Feynman, 1964.

Concerning Ref. 14: yes, there is an online version available on the arXiv:

On the Algorithmic Nature of the World, by Hector Zenil and Jean-Paul Delahaye

http://arxiv.org/abs/0906.3554

Thanks for your comments; I'll have a look at what you tell me.

I'm also happy to say that an extended version of the essay is coming soon, with many more details. I will upload it to the arXiv and announce the URL here.

Sincerely.

Dear Hector

Sorry for the delay in responding - I just saw this. The FQXi website badly needs an author-tracking function that lets you see a list of all the threads to which you have contributed, with new responses listed chronologically.

Thanks for the Feynman quote - it shows that his physical intuition ran deeper than his practical mathematical ingenuity: he devised the many-paths method to calculate quantum outcomes, but was not satisfied with it, and hoped for a simpler reality. In a universal lattice of nodes such as the one I proposed, any local change in energy or node orientation immediately creates a Machian domino effect that spreads throughout the universe from node to node. This model - if successful - would explain why Feynman's many-paths hypothesis 'works', and would also expose the simple machinery (node-to-node induction) behind it.
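The "domino effect" described above can be sketched in a few lines (my own illustration, not the actual model): a disturbance at one node reaches its immediate neighbours each step, so the set of affected nodes grows outward deterministically until it covers the whole lattice. The ring size and step counts below are arbitrary choices for the sketch.

```python
def propagate(flipped, steps, size):
    """Return the set of nodes reached after `steps` of neighbour-to-neighbour
    induction on a ring of `size` nodes, starting from the nodes in `flipped`."""
    reached = set(flipped)
    for _ in range(steps):
        frontier = set()
        for i in reached:
            # Each affected node passes the disturbance to both neighbours.
            frontier.update({(i - 1) % size, (i + 1) % size})
        reached |= frontier
    return reached

# A single changed node at position 0 on a 101-node ring:
after10 = propagate({0}, 10, 101)
# After 10 steps the disturbance has reached 21 nodes (10 on each side);
# after 50 steps it has reached every node of the ring.
```

This only illustrates the locality and finite speed of the spread; it says nothing about what the disturbance does at each node.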

I found your essay "Algorithmic Nature..." rather abstract and too technical for my level of understanding - I would appreciate it if you could summarize it in a simple paragraph using everyday words. Thanks. I look forward to the newer version of your paper.

This seems very interesting to me:

http://www.ma.hw.ac.uk/~oliver/Nature_article.pdf

Yuri

Dear Hector,

I would like to introduce myself in quantum terminology and share the truth that I have experienced with you. Who am I?

I superpositioned myself to be me, to disentangle reality from virtuality and reveal the absolute truth.

Love,

Sridattadev.
