CATEGORY:
Is Reality Digital or Analog? Essay Contest (2010-2011)
TOPIC:
The World is Either Algorithmic or Mostly Random by Hector Zenil
Author Hector Zenil wrote on Feb. 8, 2011 @ 13:01 GMT
Essay Abstract: I will propose the notion that the universe is digital, not as a claim about what the universe is made of but rather about the way it unfolds. Central to the argument will be the concepts of symmetry breaking and algorithmic probability, which will be used as tools to compare the way patterns are distributed in our world to the way patterns are distributed in a simulated digital one. These concepts will provide a framework for a discussion of the informational nature of reality. I will argue that if the universe were analog, then the world would likely be random, making it largely incomprehensible. The digital model has, however, an inherent beauty in its imposition of an upper limit and in the convergence of computational power to a maximal level of sophistication. That the world is digital, even deterministic, doesn't mean that it is trivial or predictable, but rather that it is built up from operations that are very simple at the lowest scale but look complex, and even random, at higher scales, though only in appearance.
Author Bio: Hector Zenil (BSc Math, UNAM, 2005; MPhil Logic, Paris 1 Sorbonne, 2006; PhD Computer Science, Lille 1, 2011) has held visiting positions at the Massachusetts Institute of Technology and Carnegie Mellon University. He is a senior research associate at Wolfram Research, a member of the Turing Centenary Advisory Committee, a founding honorary associate of the Algorithmic Social Science Research Unit of the University of Trento, and an editor of Randomness Through Computation (published by World Scientific). His main research interests lie at the intersection of several disciplines, in connection with or application to the concepts of randomness and algorithmic complexity, motivated by foundational questions.
Download Essay PDF File
Russell Jurgensen wrote on Feb. 8, 2011 @ 19:01 GMT
Dear Hector,
Thank you for your very interesting article. It does seem the rules in nature will turn out to be simple, as opposed to needing very complicated constructs. I regularly see this in engineering, where difficult problems requiring much analysis boil down to only a few lines of code at their core. It is interesting that you also regularly see this in your work. Thank you for your fine essay.
Kind Regards,
Russell
Author Hector Zenil replied on Feb. 8, 2011 @ 21:03 GMT
Dear Russell,
Thank you very much for your message.
nikman wrote on Feb. 8, 2011 @ 19:46 GMT
NP-completeness and the NP Hardness Assumption suggest that Reality isn't algorithmizable. Where, for example, is the algorithm for protein folding?
Arguably the emergence of complexity from Omega or quantum randomness can never be described for essentially the same reason: it's not a compressible process.
Or am I full of it? Missing the point?
Author Hector Zenil replied on Feb. 8, 2011 @ 21:45 GMT
Dear nikman,
Thanks for your message. In my definition of an algorithmic world, problems do not need to belong to any particular computational complexity class. My use of 'algorithmic' is independent of, and compatible with, the theory of computational complexity.
As you know, the framework for investigating problems in computational complexity theory is based on the concept of the universal Turing machine. Problems, even NP-complete ones, are studied as being carried out by a digital computer (the Turing machine), so even NP-complete problems are algorithmic under my worldview.
The fact that some problems may take a long time to solve as a function of the size of the input doesn't mean, in my definition, that they are no longer algorithmic. Problems don't need to belong to any particular time complexity class to be algorithmic. On the contrary, if a problem can be described in algorithmic terms, then it is algorithmic, so NP-complete problems can coexist with my algorithmic universe.
On the other hand, we shouldn't forget that particular instances of an NP problem may be easy to solve in polynomial time by a deterministic Turing machine, and that it is unknown whether there are faster algorithms that solve NP-complete problems for all instances of the problem.
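To illustrate the point about easy instances, here is a minimal Python sketch (the instance and helper function are invented for illustration, not taken from the essay). SUBSET-SUM is NP-complete in general, yet a brute-force search dispatches a small instance immediately:

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force SUBSET-SUM: return a subset of nums summing to
    target, or None. Worst case is exponential in len(nums), yet
    small or lucky instances are solved almost instantly."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

# A small instance is answered at once, even though no general
# polynomial-time algorithm for the problem is known.
print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # -> (4, 5)
```

That the search happens to terminate quickly here says nothing about the problem's worst-case class, which is exactly the distinction drawn above: being hard to solve in general does not make a problem non-algorithmic.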
I think that not knowing what the algorithm for protein folding is differs from claiming (and I'm not sure you did) that protein folding cannot be carried out by a (deterministic) Turing machine, whether in polynomial time or not, something we don't yet know.
Sincerely.
James Putnam replied on Feb. 11, 2011 @ 21:05 GMT
Dear Dr. Hector Zenil,
I am just now downloading your essay. I will read it to the best of my ability. I will be looking for whether or not your view of the universe is one of an unfolding 'program' in somewhat of a computer sense. Your statement:
"On the contrary, if a problem can be described in algorithmic terms, then it is algorithmic, ..."
seems to me to suggest that you believe the universe can be properly described and defined by means of establishing 'steps'. I will be looking for your support for that view. If I am wrongly anticipating your position, then I will learn this by reading your essay. Thank you for participating.
James Putnam
James Putnam replied on Feb. 20, 2011 @ 21:52 GMT
Dear Dr. Hector Zenil,
I find that this essay invites challenge on many points. I will start slowly and see if there is any interest. From page one:
"Whether the universe began its existence as a single point, or whether its inception was a state of complete randomness, one can think of either the point or the state of randomness as quintessential states of perfect symmetry. Either no part had more or less information because there were no parts or all parts carried no information, like white noise on the screen of an untuned TV. In such a state one would be unable to send a signal, simply because it would be destroyed immediately. But thermal equilibrium in an expanding space was unstable, so asymmetries started to arise and some regions now appeared cooler than others. The universe quickly expanded and began to produce the first structures."
This reads like the Book of Genesis. Without intelligence behind it, there is a lot of explaining to do. First question: does symmetry breaking of less or no information lead to increased information?
Moving to the end:
"An analog world means that one can divide space and/or time into an infinite number of pieces, and that matter and everything else may be capable of following any of these infinitely many paths and convoluted trajectories. ..."
What is the empirical evidence to support the idea that space and/or time can be divided into pieces? I will leave it at two questions for now. Later I will ask about bits, strings of bits, information, and meaning.
James
James Putnam replied on Feb. 21, 2011 @ 00:30 GMT
I will add that no one gets anything out of nothing, not even physicists. Anything that is possible, whether in computers or in the Universe, is possible because it was provided for right from the beginning. Computer programs can never give us anything more than what the programmer allowed for.
James
James Putnam replied on Feb. 21, 2011 @ 00:35 GMT
My point is that you will not be permitted to assume anything for free. Some theoretical physicists pretend that there is a chance for something from nothing; but you must show origins. How and where do you interject meaning into your program for the universe?
James
Author Hector Zenil replied on Feb. 21, 2011 @ 01:15 GMT
James,
You may be reading too many essays at the same time, or not reading carefully. You ask, "What is the empirical evidence to support the idea that space and/or time can be divided into pieces?" There is no such evidence; it is just the common definition of analog. But in any case, I'm not supporting the analog worldview, but the digital one.
Also notice that the choice of subtitles does not replace the content of the sections. What you may take as an ideological genesis is only an account of the best theory we have today to explain the origin of the universe, that is, the Big Bang theory, which I use merely as a means to explain how an informational view may account for how we came from a state where nothing existed (the singularity), or where everything was in a state of perfect chaos (the first seconds of the universe), to the state in which today we find highly structured things (including human beings).
My main contribution, I think, is that I'm precisely trying to avoid assuming anything, other than perhaps a few points on which most scientists would agree, such as the treatment of complexity, randomness, and structure, and the basic theory of information and computation.
Thanks for your comments.
Author Hector Zenil replied on Feb. 21, 2011 @ 01:26 GMT
James,
I may not share the conclusions of your worldview or your new physics theory, but I see you advocate that 'all theory should be reducible to empirical properties', which I think is consonant with my worldview: if we want to say what the universe looks like, we should find scientific evidence in favor of or against it. In my case, I'm looking at the distribution of patterns in empirical data in the real world and comparing them to the distribution of patterns that a purely algorithmic (intrinsically digital) theory would predict. Then I draw some conclusions concerning how things seem to unfold in the universe versus how they should unfold if it were digital, and discuss the results.
Sincerely.
James Putnam replied on Feb. 21, 2011 @ 01:50 GMT
Dear Dr. Hector Zenil,
"You may be reading too many essays at the same time, or not reading carefully. You ask, "What is the empirical evidence to support the idea that space and/or time can be divided into pieces?" There is no such evidence; it is just the common definition of analog. But in any case, I'm not supporting the analog worldview, but the digital one."
You may be correct about too many essays. However, I argue for the analog view. In that view, I do not see any division into parts, other than for the purposes of computer-type calculations. I know there is no such evidence. I do not know that this is the common definition of analog. For me, analog means natural continuity.
"In my case, I'm looking at the distribution of patterns in empirical data in the real world and comparing them to the distribution of patterns that a purely algorithmic (intrinsically digital) theory would predict. Then I draw some conclusions concerning how things seem to unfold in the universe versus how it should unfold if it were digital, and discuss the results."
This I understood. What I did not understand was how this academic exercise relates to the real world. The fact that the real world allows for the design and building of computers does not, in my opinion, demonstrate that the reverse can be true. The universe is not a digital computer. The reason I know this is that the universe does not rely upon code. It relies upon directness, without substitutes. Substitution is our approximation of reality.
James
James Putnam replied on Feb. 21, 2011 @ 04:12 GMT
Leaving aside whatever my theoretical point of view is: Is your position that "Symmetry breaking of less or no information leads to increased information?"
James
Author Hector Zenil replied on Feb. 21, 2011 @ 18:19 GMT
Dear James,
What would be the metric here? I think it may be a tricky question. Take Bennett's logical depth, and the obvious answer is that the complexity of the world (which you may want to identify with 'amount of information') obviously increases; it couldn't be otherwise. But it all depends on the definition of information. If you identify information with matter/energy, my answer may or may not contradict thermodynamics (but note that the current theory of the universe also contradicts thermodynamics at the point where the physical laws we know no longer apply). If it is Shannon information, it may be misleading, because Shannon's entropy inherits the caveats of probability (namely, that one cannot talk about the information content of individual objects, nor about the meaning associated with information), but it may leave the laws of thermodynamics intact. In algorithmic information theory, by contrast, one can define individual information content and characterize lack of meaning as random or trivial (i.e., as carrying very little or no information). In this case, I think information has evolved from an early state (when the universe was so dense that everything looked random) into the more organized forms that we see today (e.g., it is almost certain that there was no life in the early universe, because it seems life requires a long computing period to emerge). So my first answer would be that the process of symmetry breaking actually creates, but also destroys, information.
Concerning the typical definition of analog, I find your view interesting, and I agree with you that there are certainly different ways to conceive of an analog world. In general relativity (GR), for example, it is matter that has to cover a continuum (otherwise GR seems to collapse into classical mechanics), and so, even if one may not be able to divide matter, the exercise is to think that relativity theory somehow implies that matter is infinitely divisible (a way of saying that it covers a continuum). But if that weren't really the case (that one can think of matter as infinitely divisible), I wonder whether this matter wouldn't actually be better described as discrete. If, however, what you conceive as the continuum is an abstract conception of space as an indivisible entity, that may be another legitimate definition of an analog world.
And that is one of my points: one cannot even agree on what an analog world may be, while the digital view is basically crystal clear (in a digital world, computational power is also well defined). You are right in saying that proving that something resembles something else doesn't rule out other possibilities. Unfortunately (for supporters of the analog worldview, I think), we have been incredibly successful at modeling the world with digital approaches, while we don't seem to be sure how to really tackle the question, or even how to compare our world to whatever an analog world may mean. My argument is: why would someone believe that something is what it doesn't look to be, rather than what it looks to be? And when I say 'look' I mean something more than just the semantics of the word, because I try to quantify scientifically how much the world looks like an algorithmic (digital) one; the methodology is described in the essay, with references to some of my papers.
Thanks.
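The contrast drawn above, between Shannon entropy (which cannot tell a periodic string from a random one of the same symbol frequencies) and algorithmic information (which can), is easy to demonstrate. The following Python sketch is an editorial illustration, not part of the essay's methodology; it uses zlib compression as a crude, standard stand-in for the uncomputable Kolmogorov complexity:

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy_bits(s):
    """Shannon entropy (bits per symbol) of the character frequencies of s."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_size(s):
    """Length of zlib-compressed s: a rough upper bound on algorithmic
    (Kolmogorov) complexity, which is uncomputable exactly."""
    return len(zlib.compress(s.encode(), 9))

random.seed(0)
trivial = "0" * 4096                                        # no information
periodic = "01" * 2048                                      # structured, low complexity
noise = "".join(random.choice("01") for _ in range(4096))   # hard to compress

for name, s in [("trivial", trivial), ("periodic", periodic), ("noise", noise)]:
    print(name, round(shannon_entropy_bits(s), 3), compressed_size(s))
```

The periodic and random strings have the same entropy (one bit per symbol), yet the periodic string compresses to a tiny fraction of the random one's size, which is precisely the distinction between "trivial", "structured", and "random" invoked in the post.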
T H Ray replied on Feb. 22, 2011 @ 17:39 GMT
Nikman,
I just want to add something to Hector's point: just because we don't have an algorithm for protein folding doesn't mean that such an algorithm is impossible.
My personal opinion is that for all structures whose final state is certain (whether a folded protein or, e.g., a completed jigsaw puzzle), a polynomial-time -- and even strongly polynomial-time -- solution is either a function of algorithmic complexity or the result of a random process, exactly as Hector has it. It seems unlikely that the protein folding process is random, for the reason that a wrongly folded protein causes disease. Given the robustness of nature as a whole, a random folding to the left instead of the right should not make a difference to the system. However, since the system _is_ sensitively dependent on the correct configuration, one reasons that the subsystem in turn depends on information feedback from the system. Complex systems research is a very exciting and relatively new discipline; I for one am hopeful that Hector's research gets the professional attention it deserves.
Tom
James Putnam replied on Feb. 22, 2011 @ 18:24 GMT
Hi Tom,
I am preparing a response to Hector's very helpful reply. I will respond to your remark:
"My personal opinion is that for all structures whose final state is certain (whether a folded protein or e.g. a completed jigsaw puzzle) a polynomial time -- and even strongly polynomial time -- solution is either a function of algorithmic complexity or the result of a random process, exactly as Hector has it."
We have opposite views. What I need to know from you is how you explain anything random being a process? What is it that you think random means? Better yet, please let me know how random even has meaning? What is random to you?
James
T H Ray replied on Feb. 22, 2011 @ 19:23 GMT
James,
I don't try to explain it, because I don't have to. There is simply no way _in principle_ that one can distinguish process from reality. We know that a process isn't random when it is algorithmically compressible. Everything else is pseudo-random (even "random" number generators). If the universe is its own algorithm (and it may be), that would be the meaning of "randomness": an algorithmically incompressible process. Because we know that some processes are algorithmic, however, we cannot say that the world is random -- we can only say, as Hector has brilliantly titled his essay, that _either_ the world is algorithmic or it is mostly random.
In any case, science is a rationalist enterprise that is indifferent to personal beliefs. One doesn't assign meaning; meaning is determined in results interpreted by theory.
Tom
James Putnam replied on Feb. 22, 2011 @ 19:37 GMT
Tom,
You can knock off that 'personal beliefs' criticism so long as you hold onto personal beliefs.
"I don't try to explain it, because I don't have to. There is simply no way _in principle_ that one can distinguish process from reality. We know that a process isn't random when it is algorithmically compressible. Everything else is pseudo-random (even "random" number generators). If the universe is its own algorithm (and it may be), that would be the meaning of "randomness": an algorithmically incompressible process. Because we know that some processes are algorithmic, however, we cannot say that the world is random -- we can only say, as Hector has brilliantly titled his essay, that _either_ the world is algorithmic or it is mostly random."
You either explain it or you do not have it. Yes, process is reality. We know that a process isn't random when we observe meaningful effects. If any process were random, we wouldn't even recognize the problem. The reason is that randomness can only mean meaninglessness. We can say that the world is definitely not random in any way, because it continues to make sense. It is orderly. Orderliness results from meaningful control. There is no other kind of order, except in the purely imaginative theories of ideologues. I will respond to Hector separately.
James
T H Ray replied on Feb. 22, 2011 @ 20:12 GMT
James,
YOUR reality is orderly because you believe it to be. Nature is indifferent to your beliefs -- there's a bigger world that doesn't fit into your personal reality. Not because I believe it. But because it's demonstrably so.
Tom
James Putnam replied on Feb. 22, 2011 @ 20:25 GMT
tom,
"YOUR reality is orderly because you believe it to be. Nature is indifferent to your beliefs -- there's a bigger world that doesn't fit into your personal reality. Not because I believe it. But because it's demonstrably so."
Then explain how that is demonstrated; but, please do not include your personal beliefs. By the way, it is obvious that my reality is orderly because you and I are debating about it.
James
T H Ray replied on Feb. 22, 2011 @ 21:47 GMT
James,
The "debate" is all in your mind. If you actually read Hector's essay, it should be clear to you how order emerges from random events.
Tom
James Putnam replied on Feb. 22, 2011 @ 21:57 GMT
Tom,
No! Either he or you must explain "how order emerges from random events" without pretending that saying so makes it so. I asked you what you understand by the meaning of random. If random has a meaning, then it is: no meaning!
James
T H Ray replied on Feb. 22, 2011 @ 23:04 GMT
Again, James, it is you who assigns meaning. Not science.
Tom
James Putnam replied on Feb. 22, 2011 @ 23:14 GMT
Tom,
No! All effects include meaning. Your point is that I don't understand meaning; I understand that meaning has nothing to do with meaninglessness, in other words, with mechanical ideology. Please disprove me by explaining where meaning comes from. I assume that you are not going to teach me that meaning comes from meaninglessness?
James
James Putnam replied on Feb. 22, 2011 @ 23:18 GMT
To Anyone Else,
If you are looking in on this conversation, please understand that I do not get put off by graffiti. Either give scientific explanations or leave it alone.
James
T H Ray replied on Feb. 22, 2011 @ 23:27 GMT
It's clear that when you assign whatever meaning to phenomena that you please, no one can teach you anything.
James Putnam replied on Feb. 22, 2011 @ 23:28 GMT
Tom or anyone else,
Please give the starting point for existence. Is that starting point mechanical, meaning totally dumb, or what?
James
James Putnam replied on Feb. 22, 2011 @ 23:36 GMT
Tom,
"It's clear that when you assign whatever meaning to phenomena that you please, no one can teach you anything."
No!!! You entered the arena by your choice; now give the empirical evidence for meaning arising from lack of meaning. If there was no lack of meaning, then explain that.
James
James Putnam replied on Feb. 22, 2011 @ 23:59 GMT
Perhaps there is a misunderstanding. Let us find out. For me, analog means continuity with no exception, whereas digital means separation among parts. I see no parts and no separation except in degree of effects and in our ability to measure effects. All empirical evidence consists of measurements. Those measurements are always imperfect measurements of changes of velocity of objects. We either go beyond that point of understanding or we accept dumbness as the guiding principle of the operation of the universe. If dumbness rules, then we will not know it, because dumbness never even could have gotten started being undumb.
James
T H Ray replied on Feb. 23, 2011 @ 00:08 GMT
By your standard of meaning, if I say that the sun rising and setting means that Apollo is driving his fiery chariot across the sky, it has as much validity as one who says that it means the Earth is following a geodesic caused by the warping of spacetime due to the sun's much greater mass.
Tom
James Putnam replied on Feb. 23, 2011 @ 00:28 GMT
Hi Tom,
You are not making scientific points. What do you have to offer to this thread?
James
James Putnam replied on Feb. 23, 2011 @ 00:38 GMT
Anyone,
Yes, anyone is invited to explain how complexity evolved from simplicity, or far worse, from lack of meaning, without just restating what you have just read: 'complexity evolved from simplicity'. Why is there this obvious major problem with theoretical physics, that it includes dumbness evolving into smartness?
James
Author Hector Zenil replied on Feb. 23, 2011 @ 02:11 GMT
Hi James and Tom,
May I propose that we try to define meaning? I think the general sentiment is that meaning is given by the observer, as Tom suggests; if one really wants to define meaning in more objective terms, one might want to do so in terms of the capacity to carry information.
A single bit does not carry any information, so it is meaningless. It is also intrinsically meaningless because there is no context (one cannot interpret a single bit if it is not preceded or followed by anything else). A string of n identical bits (all 1s or all 0s) may also be regarded as meaningless (unless one is forced to make an arbitrary external interpretation), because even if it carries a 'message', it cannot be rich in information (neither by the standards of algorithmic complexity nor by those of Shannon's entropy). At the other extreme, a random string may or may not be taken as meaningful, depending on the measure. What one can certainly say is that something lying in between should represent what we may want 'meaning' to mean, with the two extremes being 'no information' (trivial) and 'complete nonsense' (random).
Plain algorithmic complexity places randomness at the level of highest complexity, but Bennett's logical depth (also based on algorithmic complexity) is able to distinguish something that looks organized from something that looks random or trivial, by introducing time (a parameter that seems unavoidable in reality, and therefore the reason to associate this measure with 'physical complexity'). I think logical depth is the measure to take into account when mapping complexity or meaning onto reality. Unfortunately I couldn't get there in my essay, but I would have enjoyed talking about it. If I had, we could have discussed meaning within a better framework.
If you see how I map information to meaning, you may see how I can explain that meaning can come from meaninglessness in a rather objective and, I hope, reasonable way. I think Levin's universal distribution, together with Bennett's concept of logical depth, provides an appropriate informational framework to discuss this, and it would certainly have completed my worldview; I think it explains how simplicity and randomness can lead to organization and structure or, if you prefer, how lack of meaning can lead to meaning, under my treatment of meaning.
Feel free to focus on attacking my position. Best.
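The flavor of Levin's universal distribution invoked above, that under a fixed machine, random programs overwhelmingly produce simple outputs, can be conveyed with a toy model. The following Python sketch is an editorial illustration: the four-instruction machine is invented for the example and is not a universal machine, let alone the one used in the essay, but the tally it produces shows short, structured strings dominating the output frequencies:

```python
import random
from collections import Counter

random.seed(1)

def run_toy_program(bits, max_out=8):
    """Interpret a bit string as a program for a toy machine,
    two bits per instruction:
      00 -> append '0'      01 -> append '1'
      10 -> double the output (self-copy)
      11 -> halt
    A stand-in, for illustration only, for a universal prefix machine."""
    out = ""
    for i in range(0, len(bits) - 1, 2):
        op = bits[i:i + 2]
        if op == "00":
            out += "0"
        elif op == "01":
            out += "1"
        elif op == "10":
            out += out
        else:
            break
        if len(out) > max_out:
            return None  # treat as non-halting within our bounds
    return out

# Sample many random 12-bit programs and tally their outputs: simple
# strings dominate, echoing the shape of Levin's universal distribution.
tally = Counter()
for _ in range(100_000):
    prog = "".join(random.choice("01") for _ in range(12))
    out = run_toy_program(prog)
    if out:
        tally[out] += 1

for s, c in tally.most_common(5):
    print(s, c)
```

The single-symbol strings "0" and "1" head the ranking, with longer outputs increasingly rare, which is the qualitative behavior the universal distribution predicts: outputs of short programs are exponentially more probable than outputs requiring long programs.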
James Putnam replied on Feb. 23, 2011 @ 02:18 GMT
Dr. Hector Zenil,
I prefer learning:
"...but I think it explains how simplicity and randomness can lead to organization and structure or, if you prefer, how lack of meaning can lead to meaning in the understanding of my treatment of meaning."
how any part of this quote can be supported without resort to preconditions. The problem with preconditions is that they had to be given by a preconditioner.
James
Author Hector Zenil replied on Feb. 23, 2011 @ 02:49 GMT
James,
There are definitely assumptions. My worldview is by no means a pure logical deduction from nothing, nor can I think of any result with no preconditions at all (think of math: one starts from a set of axioms, sometimes not even easy to take for granted). If the question is what my main assumptions are, my main assumption is that the world follows an algorithmic distribution. And even if there is ideology in this position, what I'm trying to do is precisely to detach myself from a purely ideological position and tackle the question with the best tools available from information theory, to test this digital hypothesis and make my point.
As you may know from my essay, what I do is compare empirical datasets to data produced by digital (algorithmic by definition) processes, in terms of their frequency distributions of patterns. This allows me to see whether patterns are distributed alike (or not) between the real and the simulated worlds.
The conclusion is that there is some resemblance, and that the differences can also be explained in informational terms. Then I argue that, because we have no idea how to compare datasets to an analog distribution (mostly because we cannot even agree on what analog means), and because the world definitely seems far from random, a strong possible explanation is that, if the world is algorithmic, it is likely the result of processes similar to those matching the empirical data.
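The comparison of pattern frequency distributions described here can be sketched in miniature. The following Python example is an editorial illustration under substituted ingredients: it uses an elementary cellular automaton (rule 110) as the "algorithmic source" and coin flips as the "random source", rather than the Turing-machine enumerations and empirical datasets of the essay, and compares the rank-ordered frequencies of length-4 bit patterns in each:

```python
import random
from collections import Counter

def eca_step(row, rule):
    """One step of an elementary cellular automaton (Wolfram numbering),
    with periodic boundary conditions."""
    n = len(row)
    return tuple(
        (rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
        for i in range(n)
    )

def kmer_ranks(bits, k=4):
    """Rank-ordered frequencies of length-k patterns in a bit sequence."""
    counts = Counter(tuple(bits[i:i + k]) for i in range(len(bits) - k + 1))
    return [c for _, c in counts.most_common()]

# Algorithmic source: 200 rows of rule 110 from a single seed cell, flattened.
row = tuple([0] * 40 + [1] + [0] * 40)
algo_bits = []
for _ in range(200):
    row = eca_step(row, 110)
    algo_bits.extend(row)

# Random source of the same length.
random.seed(0)
rand_bits = [random.randint(0, 1) for _ in range(len(algo_bits))]

print("algorithmic source:", kmer_ranks(algo_bits)[:5])
print("random source:     ", kmer_ranks(rand_bits)[:5])
```

The algorithmic source yields a strongly skewed rank distribution (a few patterns dominate), while the random source's distribution is nearly flat, which is the kind of signature one would look for when asking whether empirical data resembles the output of algorithmic processes.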
My explanation does not rule out the possibility of an algorithmic analog world (no essay here, I think, rules out every other possibility, nor is categorical about its claims; anyone who is categorical shouldn't be taken very seriously), but I argue that an algorithmic continuous world actually forces you to assume more preconditions than the digital one.
As you may agree, however, assuming too much is undesirable, at least from the point of view of science and of how science has worked for us in understanding our world. My purpose is precisely to avoid starting from too many (or too strong) preconditions, and Levin's universal distribution is, just as the uniform distribution is for a random world, the distribution that makes no assumptions when nothing else is known, other than that processes are carried out by an algorithm rather than being the result of a truly random interaction.
Thanks.
T H Ray replied on Feb. 23, 2011 @ 11:36 GMT
James,
As I have pointed out to you on numerous occasions, you have a peculiar and nonstandard idea of what constitutes "science." Such a priori assumptions as "smartness does not come from dumbness" are hardly scientific judgments. Not only is the claim untrue, as complex systems science richly demonstrates, it is simply an assumption that a force or forces external to the universe control and direct events internal to the self-interacting system that constitutes the universe in which we live. That violates several physical principles, most notably conservation of energy/information; but more significantly, it allows you to assign whatever meaning you wish to "smartness" and "dumbness," so that whatever you propose will never be falsifiable science.
If you want to talk science to scientists, you may not necessarily have to embrace rationalism; you do, however, have to accept that science is a rationalist enterprise.
Tom
T H Ray replied on Feb. 23, 2011 @ 12:04 GMT
Hector,
My definition of meaning in terms of algorithmic complexity and information theory is exactly the same as yours. I've written quite a bit on it. Further, though, because meaning is independent of language, process is not differentiable from reality. In other words, reality is observer-dependent only to the limit of language (the algorithm, in computational terms).
Which makes the point that locality -- an analog, classically real experience -- does not obviate the digital reality of nonlocal influences. This is entirely consistent with Einstein's characterization of the "physically real," i.e., "... independent in its physical properties, having a physical effect but not itself influenced by physical conditions."
The only analogy I can come up with at the moment is that of mapping one point to every point of an independent set of points. We can do it, only if there is sufficient distance between the point and the other set. This independence guarantees nonlocal mapping without obviating local continuity.
Tom
Steve Dufourny replied on Feb. 23, 2011 @ 12:53 GMT
A little beer from Belgium.....and hop you are friends.hihii
Spherically yours,
ps Hector,still one lol.
Author Hector Zenil replied on Feb. 23, 2011 @ 23:30 GMT
Dear Thomas,
Interesting. I'm reading your essay. Could you also point me to any other work of yours where I can read about your conception of meaning, as you highlight it? Do you have a webpage, by the way?
Thanks and best.
T H Ray replied on Feb. 24, 2011 @ 02:23 GMT
If you're interested enough, I'll email you a copy of my Popper Centennial paper. The abstract is online, but it wasn't picked up for the proceedings.
Best,
Tom
T H Ray replied on Feb. 24, 2011 @ 12:37 GMT
Correcting the link (I should know better than to try to do this from the bar):
Ray papers: The ICCS 2006 paper deals with the technical significance of the independence of language and meaning.
Thanks for reading my essay.
All best,
Tom
Author Hector Zenil replied on Feb. 24, 2011 @ 21:27 GMT
James Putnam replied on Feb. 24, 2011 @ 23:00 GMT
Dr. Hector Zenil,
Your messages have been very informative. I appreciate the time you have taken to explain your views. The use of bits to represent information was a puzzle to me. I understand how computers function. My point has to do with the practice of referring to code as information. While code is certainly information, I get the sense that the word information has been redefined to possibly represent meaning, yet I don't view code as having meaning. It points us to meaning after it has been organized into a given set of signs. The code is a sign. So, the information that a code communicates, from my perspective, is meaningless unless meaning has been assigned prior to the use of the code. Therefore, code is information without meaning. It is a collection of signs that point us to meaning. The meaning exists elsewhere. We know where that elsewhere is, so when we see the signs we look up the meaning elsewhere. I don't think I agree that an analog world requires more preconditions. However, you are the expert and I don't want to monopolize your time. It is something I would have to think more about. So, I will ask one simple question: Is Morse code digital?
James
Author Hector Zenil replied on Feb. 25, 2011 @ 01:52 GMT
Dear James,
Interesting question. Concerning whether data is different from code, perhaps it would help to bring up Alan Turing's seminal contribution of universality, which unifies data and programs. While one can think of a machine's input as data and the machine itself as a program, each as a separate entity, Turing proved, as you may know, that there is a general class of machines of the same type (defined in the same terms) capable of accepting a description of any other machine and simulating its evolution for any input, hence taking program+data as data.
This is, for example, why one can investigate the 'computational universe' (perform an exhaustive search) either by following an enumeration of Turing machines, or by using a single (universal) Turing machine running an enumeration of programs as data inputs; both approaches are equivalent. So, in my opinion, there is no essential difference between data and code.
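The program/data unification described above can be hinted at with a toy sketch (my own illustration, not Turing's construction; the `universal` evaluator and the example descriptions are hypothetical names, and `eval` is used purely for this toy):

```python
# A toy sketch of the program/data unification: a single evaluator
# treats a program's description as ordinary input data, interprets
# it, and runs it on further data.

def universal(description, data):
    # 'description' is plain data (a string) until we interpret it
    program = eval(description)  # turn data into a runnable program
    return program(data)

# The same evaluator simulates any described machine on any input:
print(universal("lambda x: 2 * x", 21))       # 42
print(universal("lambda s: s[::-1]", "abc"))  # cba
```

The point of the sketch is that nothing distinguishes the description string from any other piece of input until an interpreting machine acts on it, which is the sense in which program+data becomes data.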
I will think about your Morse code question. Thanks.
nikman wrote on Feb. 9, 2011 @ 03:58 GMT
Thanks for that. And yes, I'm suggesting what you suggest I'm suggesting. I know that Turing himself first posited the possibility of non-recursive adjuncts ("oracles") to computation, but in retrospect he may not have realized the immense other world it might lead to.
Time will tell.
T H Ray wrote on Feb. 9, 2011 @ 14:04 GMT
Hector,
To make near-poetry of computer science, and have it be factually accurate and scientifically well grounded besides, is a tour de force. I look forward to seeing this piece published in a prestigious venue, as by any objective standard I know, it deserves to be.
It's so gratifying to see information theory getting the strong treatment in this contest, that I hoped it would.
All best,
Tom
Author Hector Zenil replied on Feb. 9, 2011 @ 16:26 GMT
T H Ray,
Thanks for your kind words. I'm glad you found the essay interesting, and glad to be the one standing in favor of information theory to support the digital view of the world in this exciting contest.
Sincerely.
T H Ray replied on Feb. 9, 2011 @ 17:06 GMT
I hope you've made plans to be in Boston for ICCS this summer. If my own plans go as expected I'd love to share some conversation over a cold Sam Adams.
Tom
Author Hector Zenil replied on Feb. 9, 2011 @ 21:21 GMT
Many thanks Tom,
I will likely be in Boston during the Summer (not sure if I will attend ICCS this time though). Drop me a line if your plans go as you hope. My email is on the first page of the essay.
Best.
Alan Lowey wrote on Feb. 11, 2011 @ 09:45 GMT
Hi Hector, I was very impressed with your essay. Very easy to read, yet dealing with complex subject matter. I have a question which relates, incidentally, to your discussion of DNA:
Q: Why can't an Archimedes screw be used as a particle/wave model of gravity? Why is no-one experimenting with this simple idea of a screw being the analogy needed to visualise a force-inducing particle of attraction? If it then travelled around a wraparound universe it would emerge on the other side as force of repulsion i.e. dark energy. I don't understand why no-one has grasped this simple idea yet.
Many thanks.
Author Hector Zenil replied on Feb. 12, 2011 @ 00:38 GMT
Thank you very much Alan.
Concerning your question, I don't know why nobody has used the idea of an Archimedes screw. It sounds to me that something similar has been explored in the form of some topological spaces that behave as you describe, although I'm not sure whether they have been connected to the dark energy phenomenon.
Best regards.
Alan Lowey replied on Feb. 12, 2011 @ 10:49 GMT
Thanks Hector! You're the third person to appreciate the connection. If Newton had hit on this idea we would never have had Einstein's spacetime continuum, imo. It leads on to the idea explaining the 100,000-year ice age problems encountered with Milankovitch cycles. Never mind...
Alan
Steve Dufourny wrote on Feb. 11, 2011 @ 14:37 GMT
Hi Hector Zenil,
Congratulations; it helps one better understand computing and its randomness.
I ask myself what the basis of these systems and languages is. The simulations can be optimized!
Good luck.
Best Regards
Steve
Author Hector Zenil replied on Feb. 12, 2011 @ 00:47 GMT
Thanks Steve. I think it's the first time I've read a post from you with nothing about the Spheres model =)
Thanks again.
Steve Dufourny replied on Feb. 12, 2011 @ 09:40 GMT
:) Peter says I am a sphericentrist, probably a problem of vanity due to my young age (35) :)
Regards
Author Hector Zenil replied on Feb. 12, 2011 @ 13:12 GMT
I'm a bit younger than you Steve.
Steve Dufourny replied on Feb. 15, 2011 @ 10:00 GMT
:) it's cool.
Spherically yours(you see, still a word with sphere lol)
Steve Spherini replied on Feb. 19, 2011 @ 17:24 GMT
the word of the day Rocksphere.lol
One per day hihi.
Steve Dufourny replied on Feb. 22, 2011 @ 19:21 GMT
Spherical universal biological Turing machine: the word of the day.
Author Hector Zenil replied on Feb. 23, 2011 @ 23:31 GMT
You definitely have the Belgian humour, Steve. Did you watch Rien à Déclarer? You would have fit well in the movie.
Best.
Steve Dufourny replied on Feb. 25, 2011 @ 10:07 GMT
Hi Hector, hihihi, yes indeed I like to laugh... I cried too much in my young life, so it's better to smile at life. Hihihi, we Belgians are surprising, but we have a good heart, which is the most important thing; we are simply brave, as Caesar said in the past. Hihi, a little publicity for this beautiful small surrealist country.
As for the film, no, but some friends have seen it; they found it a very funny film. I am going to see it soon, I think; I love films. In fact, here in Wallonia, we prefer laughing about our politicians rather than anything else. But at the moment they must form the government, because it has become so ironic without a government for 260 days... soon we shall be like Iran: the first country without a government. The problem is that some people want to separate the country, and the others won't change or improve their political systems. Thus they stay where they are. So you can imagine the entrepreneurial spirit and the creation of jobs: a catastrophe for Wallonia. The young have become very irritated and angry. They see their parents, for example, without jobs and without hope. Everything is very expensive and the majority here are very limited in money. The cost of living is too high and the salaries are very weak. It's really a problem of politics. We aren't numerous in Belgium: 11 million, of whom 4 million are Walloons.
Regards
Steve
Efthimios Harokopos wrote on Feb. 11, 2011 @ 15:16 GMT
Hello,
Very interesting essay, and at the same time hard to understand for someone without formal exposure to complexity theory. I still don't see two things: why an analog universe can't have an algorithmic representation, which is obviously what relativity has offered with very high accuracy, and how one can decide the fundamental question from all this, namely whether or not there is a smallest interval of space(time). Another question: do you take for granted that algorithmic representations supervene on laws of nature?
Regards.
Author Hector Zenil replied on Feb. 12, 2011 @ 01:59 GMT
Hello Efthimios,
Good question. What I claim, and the reason I believe my model is stronger than directly claiming that the world is digital, is that in my view one doesn't have to presume discreteness as a basic assumption about the world. One starts by asking what the universe looks like in terms of the distribution of patterns in the world. Then one can conclude either that the world is digital because it looks like it (or does not), or that it is algorithmic (in the digital sense) even if it is not digital, in which case I argue we have no reason to think it is not digital. We provide some evidence in favor of the resemblance between empirical and digital datasets, and means to continue the investigation (an investigation that has already yielded some applications, by the way, such as the calculation of the complexity of short strings, where compression algorithms tend to fail).
You may also mean that the world could be algorithmic in a different sense, in an analog fashion (in the sense of being carried out by an analog computer), and still remain algorithmic. It could be, but so far we have had a hard time trying to define analog computation, at least in feasible terms, and our best model for understanding the world has turned out to be digital (Turing) computation. This is why I focus on discussing the way the world seems to unfold and whether it may do so in one way or another. Our claims are supported by statistical results (though statistics are not proofs), and we found that patterns in the real world and in the digital worlds we simulated seem to be distributed alike.
Among the things I argue in my essay is that the world would have a greater chance of looking random (or more random, if you prefer) if it were analog. If one throws digits into the air in an analog (infinitely divisible) world, and if this hypothetical world allows 'true' (indeterministic) randomness, unlike a digital one, then one would expect every digit to be like a digit of a Chaitin Omega (see the definition in the Appendix of the essay), a number that is random by definition under our standard model of computation. You can perform the same thought experiment with the real number line and see that the probability of picking a random number among all numbers in a finite interval is 1, that is, complete certainty that you will pick a random number. Yet we don't experience that in our everyday life, but quite the opposite. Chaitin has proven that one cannot calculate most digits of an Omega number (for some Omega numbers, not even a single digit), so in a world where random numbers persist, things might just have a greater chance of looking like Chaitin Omegas. The fact that we can do science in this world seems to be an indication that this is not the case.
You are right; it is very interesting how physical models based on mathematical theories assume continuous variables, yet when one solves the equations of, say, general relativity, to take the example you mention, the model becomes algorithmic in the strict digital sense, either through the mechanistic way in which equations are solved by hand, or literally when they are solved by a digital computer. The algorithmic view might turn out to serve as a shortcut to understanding the digital nature of the world without having to assume it at the outset.
Sincerely.
Efthimios Harokopos replied on Feb. 12, 2011 @ 09:22 GMT
Dear Hector,
Thank you for the detailed response. I now understand better your work (I hope), which I think is very interesting and original.
However, relativity theory tells us that the world is analog and fully deterministic. You say: "I will argue that if the universe were analog, then the world would likely be random, making it largely incomprehensible."
The above statement is contrary to the best scientific theory we have available, which is based on continuity of spacetime and is at the same time fully deterministic, contrary to your claim. In addition, this type of analog model of a universe is comprehensible and falsifiable by experimentation, but hasn't been falsified to this date.
Regardless of analog computational machines, if the universe is analog, it is the best analog computer; we should not need to find another one.
I would like to know more about how your quoted statement above is reconciled with relativity theory.
Thanks and regards.
Author Hector Zenil replied on Feb. 12, 2011 @ 13:35 GMT
Dear Efthimios,
Yes, classical and relativistic mechanics are both deterministic, and that's compatible with my algorithmic worldview. On the other hand, certain phenomena can be modeled by assuming that matter and space exist as a continuum, meaning that matter is continuously distributed over an entire region of space. By definition, a continuum is a body that can be continually subdivided into infinitesimal elements. However, matter is composed of molecules and atoms separated by empty space. If a model like general relativity is believed to describe the world at all scales, then one would also need to think of matter as a continuum, something not compatible with my view, but also not compatible with another large, and equally important, field of modern physics: quantum mechanics (the view that there are elementary particles and that they constitute all matter).
Modeling an object or a phenomenon as something doesn't mean it is that something. Even if models are highly accurate on length scales greater than atomic distances, they do not necessarily describe the universe at all scales or under all circumstances, which should remind us that models are not always full descriptions of reality, so we should not take them to be at the most basic level of physical explanation.
You make a great point that is fully compatible with my worldview: if the world is analog, then we would need to live in the best possible analog world. That is what I argue: the chances of finding patterns and structures in an analog world would be very low unless, as you suggest, one assumes that our world is the best possible among all possible worlds. Under the digital view, however, patterns and structures are basically an unavoidable consequence, so there is no need for such a strong assumption.
Sincerely.
Juan Enrique Ramos Beraud wrote on Feb. 16, 2011 @ 02:48 GMT
Héctor:
Hello from another math student from Ciencias (UNAM), though much older than you: 45 now.
I'm in the contest also.
I read your essay and I liked it a lot, because I am also into computational complexity.
I have been away from academia for years, except for my participation in this and last year's contests on FQXi.
I would like to know whether you know of any computational complexity research groups in Mexico.
I really find your essay quite good; let's wait and see how the voting goes.
Please read my essay and comment.
Author Hector Zenil wrote on Feb. 16, 2011 @ 03:11 GMT
Hello Juan Enrique,
Nice to meet you. I know of the Centro de Ciencias de la Complejidad (C3) at UNAM, with which I'm also associated. Sure, I will read your essay with interest.
Thanks for your support. Best regards.
JOE BLOGS wrote on Feb. 17, 2011 @ 08:02 GMT
Einstein's dice obey these classical rules: 1 ODD + 1 EVEN = 2 ODD,
and 2 ODD + 2 EVEN = 4 EVEN.
QM is determined by Einstein's dice, and you can have a model of the universe where everything is determined, at least in the computer world......
This is not OUR UNIVERSE; this is a universe where everything is binary, either zero or one.
JOE BLOGS wrote on Feb. 17, 2011 @ 08:05 GMT
EINSTEIN'S DICE FOR A QM UNIVERSE THAT IS DETERMINED BY BINARY
attachments:
2_Einsteins_Loaded_Dice.htm
Author Hector Zenil wrote on Feb. 17, 2011 @ 17:08 GMT
Interesting Joe. I should have a closer look at it. Regards.
Robert Spoljaric wrote on Feb. 18, 2011 @ 08:51 GMT
Dear Dr. Zenil,
I have just read your paper, and thought you might like to know that my essay agrees with your assertion that the universe is built from 'operations that at the lowest scale are very simple'. My paper deals with physics in which I derive 'the Light' and the 'Equivalence Identity'.
This raises the question of whether Wolfram's systematic computer search for simple rules with complicated consequences could ever 'accidentally discover' the two foundations revealed in my paper.
In case you haven't already, you may like to read the following article by Chaitin:
http://arxiv.org/PS_cache/math/pdf/0210/0210035v1.pdf
All the best,
Robert
Author Hector Zenil replied on Feb. 18, 2011 @ 20:45 GMT
Dear Robert,
Yes, I knew about Chaitin's paper; you do well to bring it up in this discussion, especially as it is connected to the content of my essay.
Wolfram has recently written on his quest to find the universe's rule, which he also thinks should be simple. Here is the link: http://blog.wolfram.com/2007/09/11/my-hobby-hunting-for-our-universe/
Sincerely.
Robert Spoljaric replied on Feb. 18, 2011 @ 21:24 GMT
Dear Dr. Zenil,
Chaitin's paper is also connected to my essay, viz. Section III, "What do Working Scientists Think about Simplicity and Complexity?"
Cheers,
Robert
Author Hector Zenil replied on Feb. 20, 2011 @ 16:57 GMT
Robert,
There is common agreement that algorithmic (program-size, aka Kolmogorov) complexity is the framework for talking about simplicity vs. complexity in science. It is based, as you may know, on the concept of the shortest possible description of an object.
The idea is that if the shortest program producing, for example, a string when run on a universal Turing machine is about the length of the string itself, then the string is said to be complex or random, while if the program is considerably shorter than the original string, the string is said to be simple. In other words, if a string is compressible then it is simple, and if it is not then it is random.
Other, finer measures have been proposed based on this same concept of algorithmic complexity, such as Bennett's logical depth. According to this complexity measure, the complexity of an object is given by the decompression time of the near-shortest programs producing it. This measure has the particularity of distinguishing structure (organized complexity) from both the simple and the random, as opposed to random complexity in the original algorithmic sense.
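The compressibility criterion above can be illustrated with ordinary lossless compression (a hedged sketch of my own: zlib's compressed length is only a computable upper bound standing in for the uncomputable measure, and `approx_complexity` is a hypothetical helper name, not anything from the essay):

```python
# Lossless compression as a crude, computable stand-in for algorithmic
# complexity: regular strings compress well, typical random strings do
# not, and decompression always recovers the original (the compressed
# form acts as a "program" that reproduces the object).
import random
import zlib

def approx_complexity(s):
    # length in bytes of a compressed description of s (an upper bound)
    return len(zlib.compress(s, 9))

simple = b"01" * 500  # 1000 highly regular bytes
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # 1000 typical random bytes

print(approx_complexity(simple))  # far below 1000: the string is simple
print(approx_complexity(noisy))   # near (or above) 1000: incompressible
print(zlib.decompress(zlib.compress(simple)) == simple)
```

The same caveat from the thread applies to this toy: a real compressor cannot certify randomness, it can only exhibit simplicity when it finds it.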
These measures are, unfortunately, still largely underused, and sometimes greatly overlooked or even misunderstood. I am quite surprised, for example, that only a handful of participants in this contest have even mentioned them in addressing the contest question, perhaps because they are relatively new theories. I'm glad to be a participant defending his view with these state-of-the-art tools.
The main problem is that these measures are not computable, meaning that there is no algorithm that, given a string, returns either complexity value (because of the halting problem explained in my essay). There are, however, attempts to build tools based on these concepts, and this has been part of my own research program. If you are interested you can have a look at my recent list of papers on arXiv: http://arxiv.org/find/all/1/all:+zenil/0/1/0/all/0/1
Sincerely.
Author Hector Zenil replied on Feb. 20, 2011 @ 22:45 GMT
I should also add that one way to avoid large constants and concerns about shallow comparisons is to stay close to the 'machine language'. Remember that the algorithmic complexity of a string is defined as the length in bits of the shortest program that produces the string.
One can often write subroutines to shortcut a computation. In Mathematica, for example, you can get any number of digits of Pi by simply executing N[Pi, n], with n the number of desired digits. Note, however, that the C program calculating the first 2400 digits of Pi does not use any particular function of C, only basic arithmetical operations. In any case, the main argument holds: it is simpler to produce Pi by throwing bits that one interprets as instructions in a computer language, regardless of the language (or, if you prefer, the rules), than to generate any number of digits of Pi by throwing the digits themselves into the air. This is because programs for Pi will always be short relative to its expansion.
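The same point can be sketched in Python rather than C (a hedged illustration of my own choosing, not the essay's C program): a program of a few lines emits as many digits of Pi as requested, so the program stays short relative to the expansion it produces. This uses Gibbons' unbounded spigot algorithm with only integer arithmetic.

```python
# Gibbons' unbounded spigot algorithm: a short, fixed-size program
# that produces arbitrarily many digits of Pi using only integer
# arithmetic (no math library, mirroring the "basic operations only"
# spirit of the C example in the thread).

def pi_digits(n):
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # the next digit is certain: emit it and rescale
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # not enough precision yet: consume one more series term
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The program's length is fixed while its output grows without bound, which is exactly why the algorithmic complexity of Pi's expansion stays small.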
Robert Spoljaric replied on Feb. 21, 2011 @ 01:27 GMT
Hello Dr. Zenil,
Thanks for the information.
Chaitin seeks to use AIT to establish irreducible mathematical truths. In my essay I establish irreducible physical truths ('the Light' and 'Equivalence Identity') using Physics. You may care to read my essay when you get a chance.
All the best,
Robert
Author Hector Zenil replied on Feb. 21, 2011 @ 17:46 GMT
Sure Robert. Thanks again.
Thomas J. McFarlane wrote on Feb. 20, 2011 @ 07:05 GMT
Hector,
Thanks for the interesting essay.
Your example of the 158 characters of C that compress the first 2400 digits of pi seems to overstate the actual degree of algorithmic compression. The 158 characters of C do not produce the 2400 digits alone unless also combined with a C compiler which also has considerable information content. In other words, throwing the dice in the air would need to produce not only the C program itself, but also the compiler to properly interpret and execute the program. Correct?
Regards,
Tom
Author Hector Zenil replied on Feb. 20, 2011 @ 16:35 GMT
Dear Thomas,
That's a very good point. However, I don't overlook the fact that one has to add the size of the C compiler to the size of the program. When one compares computer programs, one has to do it on the basis of a common language. If the common language is C, as it is in this case, one can ignore the size of the compiler because it is the same for every program. In other words, because the additive constant is common to all programs, one can ignore it.
The invariance theorem shows that it is not very important whether you add the compiler length, or which computer language is used, because between any two computer languages L and L' there exists a constant c, depending only on the languages and not on the string, such that for all binary strings s:
| K_L(s) - K_L'(s) | < c_L,L'
Think of this as saying that there is always a translator of fixed length (a compiler between computer languages) which lets one talk about program lengths without worrying too much about additive constants, and without any loss of generality.
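The fixed-length-translator idea can be made concrete with a toy (an assumption-laden sketch, not a proof: the mini-language with an `R` repeat instruction, the `INTERPRETER` string, and `python_program_for` are all my own hypothetical constructions): any program in a tiny language L' becomes a Python (L) program by prefixing one fixed interpreter, whose length is the constant c that does not depend on the string produced.

```python
# Toy invariance-theorem illustration: a fixed interpreter (written
# once, constant length) turns every L' program into an L program,
# so K_L(s) <= K_L'(s) + len(interpreter) for every string s.

INTERPRETER = (
    "def run(p):\n"
    "    out = ''\n"
    "    i = 0\n"
    "    while i < len(p):\n"
    "        if p[i] == 'R':\n"
    "            out += p[i + 1] * 5  # 'R' repeats the next char 5 times\n"
    "            i += 2\n"
    "        else:\n"
    "            out += p[i]\n"
    "            i += 1\n"
    "    return out\n"
)

def python_program_for(lprime_program):
    # L program = fixed translator + L' program (constant overhead)
    return INTERPRETER + f"result = run({lprime_program!r})\n"

src = python_program_for("RaRb")  # L' program producing 'aaaaabbbbb'
env = {}
exec(src, env)
print(env["result"])
```

Whatever string the L' program produces, the Python version only ever costs the interpreter's fixed length extra, which is the additive constant the invariance theorem lets us ignore.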
Good question. Thanks.
Thomas J. McFarlane replied on Feb. 25, 2011 @ 04:33 GMT
Hector,
Thank you for the helpful clarification. It makes sense that one can ignore the compiler when comparing program lengths.
Regards,
Tom
Steve Dufourny replied on Mar. 2, 2011 @ 16:43 GMT
Hi dear Hector you say "In any case, the main argument holds, that Pi is simpler to calculate by throwing bits that one interpret as instructions of a computer language, disregarding the language (or if you prefer rules), but it is much harder if you want to generate any number of digits of Pi by throwing the digits themselves into the air. This is because programs of Pi will be always short in relation to its expansion"
Could you develop that? It's relevant...
Regards
Steve
Steve Dufourny replied on Mar. 2, 2011 @ 17:02 GMT
Because for a real understanding of number theory, the reals, continuity and discreteness... there must be a difference between physicality and its rational distribution, the infinity behind our walls, and the infinities invented by humans through additions or multiplications. Now if we take this language, mathematics, as computing does, don't forget it's a human invention in which we create codes of continuity, where sometimes the categories permit synchronization and sorting. Of course it's a beautiful machine and its language is logical, but in simulations the laws can be changed, and thus the conclusions lose their real universal sense. That is essential when we really want to interpret our objective physical reality. I can, for example, simulate the mass of stars and planets, or black holes; that doesn't mean it's real. Here we return to a very beautiful work of Eckard about the rational axiomatization of our reals with a real unity, 1, and real domains. All is there, in fact; 0, the negatives and infinity aren't really real in their pure number and its spherical distribution.
The language is the same, because maths are maths, but the reals are better than the imaginaries... The strings are a beautiful tool for the 2D picture; I prefer a spherical membrane, oscillating, and we can thus simulate the mass also; it's more logical. The program just needs a little improvement, inserting the number of the ultimate entanglement of spheres and their volumes from the main central sphere. In any case, the wave-particle duality can be harmonized thanks to the proportions with mass. The strings were an idea; this idea can be universalized simply in the spherical logic, in 3D.
Dear Hector, could you explain the algorithms to me? If I know the principle, I can invent several models of sorting and synchronization. Could you explain to me the basis of computing? This language, in fact, is logic and mathematics, but what is a sorting algorithm, for example: what do you insert, the volumes? Or a series? In fact, how do I get the pictures here at home on my PC, for example?
Steve
Steve Dufourny replied on Mar. 5, 2011 @ 13:39 GMT
Rational continuity seems lost in pure confusion. Why? Just because the reals are correlated with the biggest rationality. The categorification of sortings in computing seems to be the cause, probably due to the adapted algorithms. That's probably why we have some bizarre simulations. In logic, a real Turing machine seems rational; it simply can't be an irrational road.
Dear Hector, you say: "By definition, a continuum is a body that can be continually sub-divided into infinitesimal elements." I am not sure about that, really; it implies some confusion about the real meaning of the infinities, and the finite numbers (defining mass, for example). Thus, what is this fractalization? I think there is a little problem. A continuum is more than that; the time operator seems confounded with the fractal of a body, which in logic is finite in its pure Newtonian fractalization. I can understand the difference with 2D computing and the waves correlated with the fractal of this digit, a kind of unity... that permits the 2D forms, OK, but strings aren't fundamental for our universe: a 3D sphere and the pi-calculus, improved with a better fractal for the digit of this 3D sphere and its spherical membrane forming all systems... if the rotations are proportional to mass, and if the fractal is finite and precise in its decreasing of volumes... a 3D spherical holographic computer... a program of convergences will be easy, and then we can simulate correctly, in my humble opinion. I am persuaded that Lawrence can make that 3D holographic computer with its Turing universality... the work of Peirce seems relevant about the real axiomatization, the Caratheodory method also, and the real proportionalities... the convergences from this 2D towards the universal 3D seem easy... the M-theory is simply too weak... 3D SPHERES AND SPHERIZATION, DEAR ALL.
Regards
Steve
Lawrence B Crowell wrote on Feb. 20, 2011 @ 23:13 GMT
Your paper is interesting and presents some things to think about. David Tong here comes to an opposite conclusion. My sense is that continuous and discrete aspects of reality are complements. In my paper http://fqxi.org/community/forum/topic/810 I work aspects of the algebraic structure for quantum bits with black holes and AdS spacetimes.
The universe as a set of digital processors has some compelling features to it. As I see these are structures associated with qubits on horizons or AdS boundaries. The exterior world has equivalent quantum information content, but it is the holographic projection from the boundary or horizon. To compare to DNA it is analogous to the map which takes a single strand and parses that into complex folded polypeptides. We may then say this permits “errors,” or mutations, or in physics broken symmetries.
Of course from an algorithmic perspective we have the halting problem. The universe as a grand computer or quantum computer executes various algorithms, which are quantum bit processors for interacting fields. All of these need to be computable, and have a finite data stack for a standard scattering experiment. So there must be some sort of selection process, a sort of quantum Darwinism, which selects for qubit processors that are computable. The Chaitin halting probability may then be some estimated value which serves as a screening process. Maybe if the algorithm is nonhalting and requires an unbounded amount of energy it is renormalized out, or absorbed into a cutoff.
Cheers LC
James Putnam replied on Feb. 22, 2011 @ 21:06 GMT
Dr. Crowell,
The universe cannot just 'be':
"The universe as a grand computer or quantum computer executes various algorithms, which are quantum bit processors for interacting fields. All of these need to be computable, and have a finite data stack for a standard scattering experiment. So there must be some sort of selection process, a sort of quantum Darwinism, ..."
It needs a power supply and it needs to be programmed.
James
Author Hector Zenil replied on Feb. 25, 2011 @ 02:32 GMT
Lawrence: I have difficulties seeing how the world could be digital and analog at the same time, but it might be.
James: When one performs a computation, say on a desktop computer, it is with a purpose in mind, for example, to print a document or to play a game. But when you consider that the computer could have started computing from 'random' data by picking programs at 'random', one gets the feeling that if the universe is a computer it did not really need to be programmed.
As you point out, if the universe is computing something, a legitimate question is who set the computer running and what the universe is computing. While the answer to the first question is beyond my scope, the second may be as simple as believing that the universe is just computing itself, and sometimes we make it compute for ourselves (computers at the end are part of the universe, and when we compute with them we ask the universe to compute something for us).
On the other hand, if someone or something did run the universe's code, the algorithmic view tells us that this intervention, if it was needed at all, was needed only at the very beginning, because the structure one finds today all over the universe is neither the result of chance nor the result of design, but can be explained by computation without having to posit a purpose, or someone intervening at every step to get to where it is today. This worldview claims that the present state of the universe is the result of a computation, in which the theory of algorithmic probability explains and predicts the distribution of random-looking and organized structures.
Member Tommaso Bolognesi replied on Feb. 25, 2011 @ 10:32 GMT
Hector (and James),
going back to the remark by James -- if the universe is a computer (or, better, a computation), it needs a power supply, and needs to be programmed -- I agree with the reply that there is no need to program it for a purpose, and no need to inject information during the computation. A lot of interesting things emerge in computations that are not the result of a purposeful design, and are 'closed', that is, not interacting with the outside, as many experiments have shown.
But we are left with the question of the 'power supply'. As a supporter of the digital/computational universe conjecture, I like to assume that everything must emerge from the universal computation (i.e., from spacetime): particles, matter, energy, up to life, and whatever else is going to emerge next. But don't we need some sort of energy to keep the computation running, step by step? How do we avoid the circularity of energy requiring energy to exist?
Perhaps a possible answer would be: we don't need energy to run the Computation because there is no actual, physical, Digital Computer that runs it, in the same way as we do not require power for an Analog Computer to run, say, the Navier-Stokes or Einstein equations, under an analog-based understanding of the universe.
An alternative answer, along the lines of Tegmark's Mathematical Universe Hypothesis, would be that the Computation does not unfold step by step: it is already all there, time being a sort of illusion (I wonder whether the fact that time and energy are conjugate variables plays a role here).
In any case, if we insist that the computational steps 'really happen', and that they require some non-null effort, hopefully not from metaphysical entities like angels (after all, angels don't sweat), it would be wise to keep it to the bare minimum. In this respect, a prefix-free universal Turing machine (as suggested by Hector), or a Turmite, or a network mobile automaton (as discussed in my contribution), all based on the operation of a simple, localized control head, are preferable to a cellular automaton, with its global operation mode. (By the way, to my knowledge, the first to push for the localized control head idea in a physical context was S. Wolfram.)
Hector, James, what do you think?
T H Ray replied on Feb. 25, 2011 @ 11:49 GMT
The power supply of the universe is already given by E = mc^2. Matter, as observation so far informs us, makes up only a tiny portion of what we see but accounts for all of what we measure. Interaction among mass points is the engine of change in particle states. Programming? There is no physical principle that prevents the universe from programming itself, with itself. Nor is there a physical principle that prevents the universe from being its own algorithm and thus entirely random.
Tom
Author Hector Zenil replied on Feb. 25, 2011 @ 11:55 GMT
Tommaso,
I think I would need time to think more about it. But indeed, the question of the power supply is a very interesting one. My feeling is that one only needs a first (strong enough) push (e.g. the Big Bang), then symmetry breaking does the rest, and eventually the initial power fades perhaps delimiting a maximal complexity (i.e. distribution of patterns under my algorithmic view).
For example, take a gas in a room. Thermodynamical stability is not the state in which every particle remains still, at the same distance from every other, because a motionless state is an unstable configuration. In our reality, it requires more energy to keep everything still than to just let things collide. I have to concede that an analog world might at first seem to explain this better: if the underlying space were something like a grid, I could not immediately see why particles would not simply remain still, each in its own discrete cell, whereas in an analog world one can more easily see why the least imperfection would break everything apart. But a biased initial condition suffices to avoid falling back on an analog view, because it is only the first symmetry breaking that needs explaining. And in the end, even if an analog world seems more naturally to explain the instability of particles perfectly uniformly distributed over the room, I don't see why one would not equally expect perfect equilibrium in such an analog world, just as in the digital one, turning the unstable motionless configuration into a stable one.
Of course one can ask about the cause of the first symmetry breaking, just as one can ask about the cause of the Big Bang and the causes of its causes. I think in the end we will have to give up on the first cause, either because there is a first uncaused cause or because we cannot go indefinitely backwards in time, always looking for the cause of the cause. My speculative position in this regard is that there is something because it is not the case that nothing is simpler than something; I think they are equally likely. But if you start with something and run an algorithmic process on it, you end up pretty much with a universe looking like the one in which we live today. The first part of this argument is mere speculation, but the second part, I argue, is not.
Thanks.
Author Hector Zenil replied on Feb. 25, 2011 @ 12:02 GMT
I think Tom's remark is fair. The question of the power supply can be reformulated, thanks to Einstein, as the question of the matter supply, that is, why there is something rather than nothing, which is the question at which I arrived in my previous post in this thread. As Tom suggests, I think the universe feeds itself with power. The question of the origin of the power supply may therefore be tantamount to asking for the cause of the Big Bang, and perhaps the best answer today is the one cosmologists provide: we don't (yet) know. The algorithmic view answers, however, much of what happens from that point on, or at least the question of why there is structure in our universe rather than its having remained in the original state.
James Putnam replied on Feb. 26, 2011 @ 02:07 GMT
Tommaso: "...going back to the remark by James -- if the universe is a computer (or, better, a computation), it needs a power supply, and needs to be programmed -- I agree with the reply that there is no need to program it for a purpose, and no need to inject information during the computation. A lot of interesting things emerge in computations that are not the result of a purposeful design, and are 'closed', that is, not interacting with the outside, as many experiments have shown."
There is always a purpose. That purpose may not be a specific end result. However, the result was a part of a purpose. In other words, everything that can possibly occur as a result of the properties of the universe was already potentially possible. Any equations that correctly model the universe already contain all possible outcomes. Nothing is added after the beginning. The origin of the universe had to have all possible outcomes included in its properties at the time of its origin. No later miracles can be permitted. The concept of unpredictability cannot stand in for sneaking in later miracles. So far as I know there is no true unpredictability and no true randomness. Even those computations that involve results for which we cannot make specific individual predictions are always enveloped in some form of control. The end result is that they perform a useful purpose to which we can assign meaning.
Tom: "The power supply of the universe is already given by E = mc^2. Matter, as observation so far informs us, makes up only a tiny portion of what we see but accounts for all of what we measure. Interaction among mass points is the engine of change in particle states. Programming? There is no physical principle that prevents the universe from programming itself, with itself. Nor is there a physical principle that prevents the universe from being its own algorithm and thus entirely random."
This is an argument in favor of something coming from nothing because nothing is really something. When I speak of nothing I do not mean something. By nothing I mean the completely inexplicable beginning of something.
Hector: "...the second may be as simple as believing that the universe is just computing itself, and sometimes we make it compute for ourselves (computers at the end are part of the universe, and when we compute with them we ask the universe to compute something for us)."
Yes the universe computes itself. I see no reason to look for strings being pulled to manipulate it. There is no need for intervention. The beginning set all in motion. Any programming for any effect that has occurred or can occur was already in existence after the origin of the universe. It would be very interesting to know where that original programming came from; however, we don't have that information by scientific means. I would not refer to it as programming. I see it as the consequence of natural properties fully controlled and operating in a cooperative, meaningful manner from which it cannot deviate. Nothing gets added after the origin. Everything gets explained by connecting it back to the origin.
Hector: "On the other hand, if someone or something did run the universe's code, the algorithmic view tells us that this intervention, if it was needed at all, was needed only at the very beginning, because the structure one finds today all over the universe is neither the result of chance nor the result of design, but can be explained by computation without having to posit a purpose, or someone intervening at every step to get to where it is today. This worldview claims that the present state of the universe is the result of a computation, in which the theory of algorithmic probability explains and predicts the distribution of random-looking and organized structures."
Well, I won't try to argue the details of algorithmic probability. However, if it in any way implies that some detail or meaning that is part of the universe was added on after the origin of the universe, then I see this as possibly being one of those theoretical fogs that get introduced in order to suggest that the answer lies just beyond our reach in some odd complexity which we can't quite explain.
James
Author Hector Zenil replied on Feb. 26, 2011 @ 04:12 GMT
Hi James,
Then I think we agree. Algorithmic probability is not magic, although it has been called a 'miraculous distribution' before, by Kirchherr, Li and Vitányi:
The Miraculous Universal Distribution
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.2873&rep=rep1&type=pdf
And my own papers (jointly with Jean-Paul Delahaye) on the same subject, related to this essay:
Numerical Evaluation of Algorithmic Complexity for Short Strings: A Glance into the Innermost Structure of Randomness (http://arxiv.org/pdf/1101.4795)
and
On the Algorithmic Nature of the World (http://arxiv.org/pdf/0906.3554)
Levin's distribution establishes that ‘simple’ hypotheses – the ones you favor under Occam’s razor – are precisely those with small complexity. My work shows that Levin's distribution is not only a prior: it behaves as one would expect when one actually runs the experiments, and, even more surprisingly, it seems to fit (with weak to strong correlation significance) empirical datasets from the real world.
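The flavor of these experiments can be conveyed with a toy model. The sketch below is my own construction for this thread, not the (n,2) Turing-machine enumeration used in the papers above: it runs every 8-bit 'program' on a tiny made-up machine and tallies output frequencies, and, as algorithmic probability predicts, simple outputs dominate the distribution.

```python
from collections import Counter
from itertools import product

def run(program):
    """Toy machine: read the program two bits at a time.
    00 -> append '0'    01 -> append '1'
    10 -> delete last   11 -> double the output (capped)
    Purely illustrative; it stands in for a universal machine."""
    out = ""
    for i in range(0, len(program), 2):
        op = program[i:i + 2]
        if op == "00":
            out += "0"
        elif op == "01":
            out += "1"
        elif op == "10":
            out = out[:-1]
        else:                       # "11"
            out = (out + out)[:16]  # cap length to keep runs finite
    return out

# Run all 2^8 programs and count how often each output appears.
freq = Counter(run("".join(bits)) for bits in product("01", repeat=8))

# Short, regular outputs occur far more often than complex ones,
# mirroring the skew that Levin's distribution predicts.
for output, count in freq.most_common(5):
    print(repr(output), count)
```

The same procedure scaled up to real Turing machines is, roughly, what produces the empirical distributions compared against real-world datasets.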
Best
James Putnam replied on Feb. 26, 2011 @ 14:39 GMT
Hector,
Thank you for those references. I need to be knowledgeable about this subject.
James
Author Hector Zenil replied on Feb. 26, 2011 @ 19:24 GMT
Andrew Beckwith wrote on Feb. 24, 2011 @ 21:48 GMT
quote:
These concepts will provide a framework for a discussion
of the informational nature of reality. I will argue that if the universe were
analog, then the world would likely be random, making it largely incomprehensible.
The digital model has, however, an inherent beauty in its
imposition of an upper limit and in the convergence in computational
power to a maximal level of sophistication. Even if deterministic, that it
is digital doesn't mean that the world is trivial or predictable, but rather
that it is built up from operations that at the lowest scale are very simple
but that at a higher scale look complex and even random, though only in
appearance.
end of quote
Analog meaning random, and incomprehensibility, sidesteps the question of where the information from prior universes comes from. If one has a million prior universes in some sense contributing to a present universe, the analog nature of reality would merely be a statement of chaotic mixing leading to a new reformulation of the universe.
Nowhere would that imply randomness once the PRESENT universe is set up. I.e. that reformulation could be digital in its expression, with an analog mixing of prior-universe information added in as the information base for emergent gravity.
Author Hector Zenil replied on Feb. 25, 2011 @ 02:12 GMT
Hi Andrew,
Interesting remarks. I knew people might read me as opposing digital to random, or algorithmic to analog. However, the path I take is, as established in the title, to oppose algorithmic to random. You are right: analog need not mean a completely random world, just as I claim a digital world is neither trivial nor necessarily predictable (I can unpack this upon request; an extended version of this essay is on its way, to be made available on arXiv.org).
The question seems to me to be whether one can associate randomness with the concept of analog. While the connection is not trivial, just as the definition of analog is not, I think it has often been the case that analog is associated with lesser or greater degrees of uncertainty, either in the form of true indeterminism or in the form of fundamental impediments to infinitely precise measurement, both of which I would link to properties of both the analog and the random. For example, in dynamical systems, chaotic randomness is usually defined by the way trajectories starting from very close initial configurations diverge over time and space. And if the world is analog one is also fundamentally unable to take measurements and always get the same value, as if some randomness were involved (at the level of the measurement uncertainty one could even use measurements as a kind of pseudo-random number generator).
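This divergence is easy to exhibit numerically. A minimal sketch, using the logistic map at r = 4 as a standard stand-in for a chaotic system (the choice of map, the starting point 0.2 and the 1e-10 offset are illustrative assumptions, not anything from the essay):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard chaotic system."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two initial conditions 1e-10 apart
gap = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gap.append(abs(x - y))

# The separation grows roughly exponentially until it saturates at the
# size of the attractor, so a tiny initial uncertainty, below any
# realistic measurement precision, becomes a macroscopic difference.
print(max(gap))
```

This is the sense in which finite-precision measurement of an analog chaotic system looks like a source of randomness.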
Thanks for your comments, I will think further about them.
Joseph Markell wrote on Feb. 25, 2011 @ 04:03 GMT
Dear Hector:
I liked your essay.
Regarding the last line of your essay, "Our reasoning and empirical findings suggest that the information in the world is the result of processes resembling computer programs rather than of dynamics characteristic of a more random, or analog, world."
I wonder if "... processes resembling computer programs..." will eventually be found to be "energies or intelligences that we currently are unable to understand, but indirect evidence points in that direction?"
Good luck!
joseph markell
Author Hector Zenil replied on Feb. 25, 2011 @ 12:19 GMT
Thanks Joseph,
Actually what you mention is close, I think, to Wolfram's concept of intelligence. In Wolfram's view, intelligence is a matter of identification rather than of sophistication. This is because of his Principle of Computational Equivalence (PCE). His PCE says that most non-trivial computations turn out to be of equivalent sophistication. That may explain why we are unable to, for example, master the task of weather forecasting: in the end the weather is as sophisticated as we are (our minds), and we have no way to shortcut the computation of the weather (despite using supercomputers, we forecast at most a couple of days ahead, and are often still wrong about the next day). So in Wolfram's view the weather is an intelligent system with which we cannot interact because we are an intelligence of a different type.
PCE is interesting for artificial intelligence, because it means we are surrounded by intelligence, yet we are trying so hard to create 'intelligent' systems when in fact what we are trying to create is intelligence that we recognize as such, i.e. intelligence of our own type. The interesting conclusion is that one does not really need to try so hard to design an intelligent system; one can just go and use one of the many around in the computational universe, and then perhaps make it behave as one wants (if what one wants is to have it behave like a human being), for which one would only need to have the system somehow interact with us (and therefore have sensorial experience of the same type).
Thanks for your comment.
Joseph Markell replied on Feb. 25, 2011 @ 13:35 GMT
Dear Hector:
Thanks for your nice and detailed response. I can meld your ideas with the "portal" in my own essay.
Good luck!
Joseph Markell
T H Ray replied on Feb. 25, 2011 @ 13:39 GMT
I don't want to stray too far off-topic. However, I was reminded by the talk of weather and intelligence, of a few points about the integrity of science that
I made on a blog back in 2008 in defense of my friend and collaborator Pat Frank in connection with the global warming debate. The fireworks begin at post number 27.
Tom
Steev Dufourny replied on Mar. 10, 2011 @ 15:29 GMT
Hi all, interesting dear thinkers, interesting.Here is my humble point of vue about our realism and its deterministic continuity where the reals and our laws are respected.
Equivalencies will always be limited to what is the definition of a human creation. Intelligence is the pure result of a biological evolution. The mass, volume, its quantum numbers are accurate and specific,finite and precise. The mass increases....entropy and its cooling,you shall understand the mass and the light, purelly the samee. How would you reach this number of biological particles arranged since the beginning of physicality. The convergences are inrestings algorithmic applications. But We must under no circumstances wait autonomy of these supposed future intelligent machines. The only intelligence is by the hand and brain that is a result from a biological evolution and quantification of its mass. It is a tool of evolution, as is mathematics simply.In fact these computers aren't autonom , they need humans .....If you take the minearals for example......it's totally different in their pure numbers than biological entanglement of particles, quantics and evolved by fusion mass light in time space evolution in 3dimensions.The waves are relevant and are a tool also....Don't forget if already the intelligence has been created after so many years, you shall understand that it's difficult to create an intelligent biological creation, and thus you shall understand that the time is important...thus 2 main problems dear computers men , 2 impossibilities you can't create a biological intelligence in a so short moment, compared with the universal scale....and second the mineralSi or others are different than biological systems....thus humans, maths, computings, waves......TOOLS. Intelligence also thus let's be serious a little please hihihi
Best Regards
Steve
re castel wrote on Feb. 25, 2011 @ 10:28 GMT
Hector,
I agree that an analog (undiscretized) world would be "largely incomprehensible" - it would be seen as a super symmetric void with an unbroken symmetry (essentially "the nothing").
But we evidently have the obvious cosmos (the organized existence) and the less obvious chaos (the unorganized existence). This is the all-encompassing differentiation of reality. There is the differentiated, discretized corporeality and there is the undifferentiated, undiscretized void. There is the one and there is the zero.
Like the many, I agree with "the notion that the universe is digital" in "the way it unfolds" - because that is the idea of the cosmic or ordered existence. But your sidestep of the foundational question regarding "what the universe is made of" dampens your essay.
I think the big question regarding the existence includes both the 'what is discretized' and the 'how that is discretized'.
As for information, it is obvious that people forget what they forget - so, perhaps the all-encompassing existence also forgets in the super-symmetric entropic voidness...
Hector, perhaps you can also read and rate my essay. It would be interesting to find us together in the essay finals.
Rafael
Author Hector Zenil replied on Mar. 2, 2011 @ 05:08 GMT
Rafael,
Thanks for your comments. You are right that I didn't jump to making claims about what the universe may truly be made of, or about whether it may turn out to be digital or not. I prefer to let readers draw that conclusion themselves, by noting that an algorithmic world would not really need to assume anything beyond digital computation. Yet my view is compatible with an analog algorithmic world. I explain, however, why I think that may not be the case: such a world would look more like the uncomputable (truly random, in the strictest mathematical sense) digits of a Chaitin Omega number than like the more ordered digits of the mathematical constant Pi, random-looking but deterministic and full of algorithmic structure, as I think may be the case for the real world.
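The Pi half of this contrast can be made concrete. Machin's formula, pi/4 = 4 arctan(1/5) - arctan(1/239), yields as many of Pi's random-looking digits as one wants from a few lines of integer arithmetic, which is exactly what is impossible for the digits of a Chaitin Omega number. A minimal sketch (the guard-digit count is an implementation choice, not anything from the essay):

```python
def arctan_inv(x, unity):
    """arctan(1/x) scaled by `unity`, via the Taylor series,
    using only integer arithmetic."""
    total, term, n, sign = 0, unity // x, 1, 1
    while term != 0:
        total += sign * (term // n)
        term //= x * x
        n += 2
        sign = -sign
    return total

def pi_digits(n):
    """First n decimal digits of Pi via Machin's formula."""
    unity = 10 ** (n + 10)            # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))
    return str(pi)[:n]

print(pi_digits(25))   # deterministic, yet the digit stream looks random
```

A short program outputs the digits of Pi; no program at all can output the digits of Omega, which is the distinction the essay draws on.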
I don't think most people hold that the universe is digital in the way it unfolds; I think this is a rather different view of mine, or, if not a novel view, a novel statistical treatment based on an empirical distribution and the concept of algorithmic probability undertaken in my research project. Yet I don't conclude from there that the world is digital, even if I (as do the results) suggest that there is no need to think it is analog.
Sincerely.
Member Tommaso Bolognesi wrote on Feb. 25, 2011 @ 10:45 GMT
Hector, I added a remark here, but a bit up, in the Crowell-Putnam posts of Feb. 22. In the absence of an automatic notification service, I just wanted to let you know. Tommaso
Author Hector Zenil replied on Feb. 25, 2011 @ 11:28 GMT
Hi Tommaso,
I had already found your other message. Thanks.
Yes, it is a pity there is no automatic notification service. However, on the left column there is a useful 'most recent first' sorting checkbox that I recently found.
I will answer your other message. Btw, I also wrote you something a week or so ago in your own discussion section, concerning the question of taking data and code to be different kinds of entities, which I argue shouldn't be the case after Turing's work.
Cheers.
Edwin Eugene Klingman wrote on Feb. 26, 2011 @ 01:19 GMT
Dear Hector Zenil,
You say above that "certain phenomena can be modeled assuming that matter and space exist as a continuum, meaning that matter is continuously distributed over an entire region of space. [but] matter is composed of molecules and atoms, separated by empty space."
Then you say to Lawrence: "I have difficulties seeing how the world could be digital and analog at the same time, but it might be."
I argue in my essay that the primordial gravity field is distributed over all space. But the continuous field is not initially composed of molecules and atoms. Maxwell taught that fields have energy, and Einstein that energy has mass. Because gravity interacts with mass, it interacts with itself, in Yang-Mills fashion, producing fundamental particles that lead to molecules and atoms.
These particles are discrete and stable enough to last forever [protons] in a 'low temp' environment, while the 'hi temp' of colliders restores the local mass to the 'field state' as in the 'perfect fluid' seen at RHIC and LHC when heavy ions are collided. Upon re-cooling the field again 'condenses' to stable particles, although not necessarily the same mix as pre-collision.
This is one way that the world could be analog and digital at the same time, and my essay describes experiments to prove this.
You also respond to James Putnam by saying that it: "may be as simple as to believe that the universe is just computing itself", then say, "if someone or something ran the universe code,...".
My essay assumes that only one thing exists [the primordial field] and so evolution of the universe must proceed by self-interaction, which reasonably leads to our current reality. But a continuous field, interacting with itself, is essentially an analog computer.
If the field itself is a 'real' analog computer, neither 'program code' nor 'digital computer' concepts are required. David Tong states that "no one knows how to formulate a discrete version of the laws of physics," and also that "no one knows how to write down a discrete version of the Standard Model" and so we cannot simulate the known laws of physics on a computer. And as I noted in Brian Whitworth's 'VR' essay, if the "computer" is analog, there need be no "program code" since analog computers may simply be designed via the connections. In that sense analog models are compatible with Tom Ray's remark that E=mc^2 'powers' this universe.
Tommaso Bolognesi seems to agree when he states that "Perhaps ... there is no actual, physical, Digital Computer that runs it, in the same way as we do not require power for an Analog Computer to run, say, the Navier-Stokes or Einstein equations, under an analog-based understanding of the universe."
The field equations are analog and the field itself is the actual physical 'analog computer' that 'executes' the 'code' for our universe. I can see how those concerned with simulating reality on a digital computer might be concerned with 'algorithmic processes', but I don't believe that the idea of replacing a real analog computer [field] capable of explaining today's reality with an imagined 'digital computer' that exists in some other dimension, if not some other world, is a step forward.
Nevertheless, you have written a very interesting essay.
Edwin Eugene Klingman
Author Hector Zenil replied on Mar. 2, 2011 @ 04:57 GMT
Dear Edwin,
Interesting comments. I agree with you on some points, I hope to find time soon to elaborate further and give you a more proper answer.
Thanks for your kind words.
Sincerely.
Peter Jackson wrote on Feb. 26, 2011 @ 23:12 GMT
Dear Hector
I congratulate you on a very good essay, though I disagree with many of your fundamental tenets.
From your CV I quite understand your view that "Physical laws, like computer programs, make things happen." I object on the basis that such 'entities' are not physically causal and can only propagate the non-causal. They can do much of value: describe, predict, cause switches to throw and much more, but I feel it endangers our understanding of causality to claim too much.
Surely "Structure from randomness BY iterated computation" is a bounded view. Is it not enough to claim to just 'explain' some structure from randomness, aided by computation?
On causality, I have proposed that the apparent lack of quantum causality is only due to our lack of understanding. As Einstein said, we don't yet understand a thousandth of one percent. Is it not arrogant to assume we know and understand, and that we can judge it on that basis?
I feel you have the cart before the horse in saying information 'makes' a cup a cup and a human a human. I'm even tempted to suggest you may have spent too much time playing at computers in your youth! Yet I do understand what I hope you mean, a very 'catholic' translation of 'makes'. As a supporter of reality (and of Edwin's view above) I'd strongly wish to preserve the real meaning of 'makes'.
And I do see your "matter and space exist as a continuum" differently to Edwin, and applaud the view I see. I have derived, and find it empirically consistent, that in particle terms the discrete condenses from the (sub-'matter') continuum to implement 'change'. Indeed this goes far enough to resolve both SR and GR with QM, consistent with Edwin's and other good essays here. I'd be very pleased if you'd give your views on my essay, but warn you may be shocked by its naive reality/locality empiricism, perhaps in another universe to yours. (Though the discrete field model involved logically derives recycled sequential, not parallel, universes. Yet would you believe my own search beyond maths turned out to be on the same grounds Alice in Wonderland was written!?)
Through all this I am pleased to agree that as an essay it deserves front-runner status, and I enjoyed an entirely different viewpoint on nature to my own.
Best wishes
Peter Jackson
Author Hector Zenil replied on Mar. 2, 2011 @ 04:54 GMT
Dear Peter,
Thank you for sharing and for your encouragement.
The claim that it is only information that makes a cup a cup rather than a human being rests on the fact that both human beings and cups are made of exactly the same elementary particles, and it is nothing but the way they are arranged that makes one or the other. But let me know how that could be wrong from a purely materialist point of view.
Sincerely.
Peter David Mastro wrote on Feb. 27, 2011 @ 16:01 GMT
Hector nice essay.
From an artistic perspective, I view the behavior of the universe as a reproductive system. Interestingly enough, whether you view a reproductive system as a set of random events or as a predictable (algorithmic) set of events, the resulting distribution is pretty much the same in either case.
That is the link to the Fibonacci series I allude to in my essay. Check it out if you get a chance at http://fqxi.org/community/forum/topic/893.
Good Luck
Pete
Author Hector Zenil replied on Mar. 2, 2011 @ 04:50 GMT
Thanks Pete. Very nice and interesting essay; I like your artistic point of view.
I discuss the relationship between an object and its pictorial representation in this paper (joint work with Delahaye and Gaucherel), which you may find interesting as related to the concept of physical complexity:
Image Characterization and Classification by Physical Complexity, available online at http://arxiv.org/pdf/1006.0051
Best regards.
Juan Enrique Ramos Beraud wrote on Feb. 27, 2011 @ 16:08 GMT
If sorting the essays by community rating is any indication: you are winning.
Wilhelmus de Wilde de Wilde wrote on Mar. 1, 2011 @ 17:03 GMT
Dear dr ZENIL,
Your essay is very understandable and readable, but I don't see the universe as a computer. I quite understand that humans created this machine, and for that they had to design a way in which this machine really could make computations; these computations resulted in images and sounds that our senses of sight and hearing could interpret as a virtual reality. The point is still the same, it is "to be or not to be": the virtual universe we are creating out of the zeros and the ones is just ONE possibility that we have as mankind to CREATE. As I put it in my essay, when we are in a position to create a consciousness that exists in this virtual reality, we move one step further in the understanding of our own consciousness. This new consciousness, however, has all the restrictions of the DIGITAL essence; it is a second-hand reality from our point of view.
Analogue may also mean ONE, the continuity of the whole, not made of two units. When science is able to construct a quantum "computer", there is an infinity of superpositions to choose from, and the answers to all the possible questions are in fact "present" even when it is not connected to the electricity; we can see that as a ONE. To achieve that, we have to bring choices back to one...
kind regards
Wilhelmus de Wilde
Wilhelmus de Wilde de Wilde replied on Mar. 2, 2011 @ 15:17 GMT
Good afternoon (or the time you may read this post) Hector Zenil
I did not at all want to offend you in my post; on the contrary, you gave me a lot of reasons to continue the search for understanding our universe, and you explained very clearly how the "technical" side of our community is searching for more knowledge.
I would like to add to my post that in my opinion there is a difference between Intelligence and Consciousness. With our total intelligence we can construct the LHC, but it is our consciousness that always asks WHY, like a child that won't stop asking WHY; the HOW is the intelligence and the WHY our consciousness. The intelligence can be constructed by our (Turing) machines, but consciousness we could not reproduce until now, so it is perhaps not a digital "substance", and so not reproducible in the digital way (?) — like a piece of art: you can copy it, but the copy will never be the original.
The way we experience "reality" is different for everyone, but seems to be analogue for a majority (ana = from, logos = reason), so the way our reason interprets it becomes reality. But this also means that there are multiple interpretations, of which the digital interpretation is only one.
Perhaps my interpretation is not that of a pure scientist like yourself, but I think that this is the richness of the rainbow of thoughts so beautifully expressed in this contest.
I wish you a lot of luck (digital ?) in the contest
and
best regards
Wilhelmus de Wilde
Author Hector Zenil replied on Mar. 2, 2011 @ 17:25 GMT
Dear Wilhelmus de Wilde,
You didn't offend me at all. I appreciate your comments.
Best.
Dan J. Bruiger wrote on Mar. 1, 2011 @ 18:44 GMT
Hello, Hector
Thanks for a well-written essay, which I read with keen interest. I like your (French!) style of clear analysis and expression. The strategy of comparing the evolution of patterns in nature and in computer programs seems very promising. I do have some comments about certain passages, and would like to know your response to them.
“Lossless compressibility” [p9] may apply perfectly to a pattern that is already defined (like files in your computer), but only imperfectly to natural patterns (data from observations). Your result that “most empirical data carries an algorithmic signal” seems simply to restate the fact that there are evident patterns in the world. I cannot take this to mean, however, that the world in itself is pure ordered pattern, fully accountable in a set of algorithms. That seems too great a leap. Am I missing something?
[p5] You say that “Producing random bits in a deterministic universe…would actually be very expensive…” But isn’t this an argument AGAINST determinism? Perhaps nature cannot be forced into a mold that defines it either as deterministic or as non-deterministic. This could be the case if ‘determinism’ is actually a purely logico-mathematical concept and not ascertainable as an ontologically physical reality. The meaning of ‘determinism’ may be nothing other than logical implication (computability). We are free to project this upon nature, but isn’t it really our own invention? Similarly, we are free to imagine the universe as driven by simple algorithms (after all this works to some extent!). The really interesting thing, to me, is our apparent human need to know what the universe is in itself, in ultimate terms, apart from simply understanding what knowledge is (or can be) for us. Perhaps it is human thought that is driven by simple algorithms!
[p4] From an engineering point of view, “what makes a cup a cup” is information; but from a physics point of view, what makes a cup a cup is structure. They may seem to coincide in the cup, which is an artifact, more than in the case of the human body, which is not. Information is effectively a set of instructions to the engineer to build the cup (a program). No engineer, however, knows how to build a human body or any natural thing. (The cup too—as a physical thing rather than a conceptual thing—is made of natural materials.) Such knowledge presupposes a blueprint from which to construct the natural entity. But a natural thing does not come with a blueprint, and anything looking like its blueprint is actually a product of an analysis that can never be assumed complete or exhaustive. The natural thing is found, the simulation (and the information behind it) is made. I suspect this applies to atoms as well as cups.
[p5] You say “if information is even more fundamental than the matter of which it is made and the physical laws governing that matter, then the question of whether these effects violate physical laws may be irrelevant.” That is a big ‘if’! The very question to be decided! It would undeniably be convenient if natural reality were fundamentally “informational”, but that does not make it so. In the medieval world, violation of physical law was similarly irrelevant, because an omnipotent God was ultimately the cause of everything, including natural law and miracles!
[p8 re: “DNA construction] I think it is unfair to dismiss the interaction with environment as not a “true random function operating on the DNA”. What is a ‘true random function’? Even by a mathematical definition, one cannot prove randomness. How then to apply this to the real world? Perhaps we can say no more than that DNA construction, like the evolution of the universe as a whole, involves an interplay of predictable and unpredictable factors.
Thanks again and best wishes,
Dan
Author Hector Zenil replied on Mar. 2, 2011 @ 04:42 GMT
Hello Dan,
Thanks for your kind comments. Here are my answers to your interesting questions:
You say:
"“Lossless compressibility” [p9] may apply perfectly to a pattern that is already defined (like files in your computer), but only imperfectly to natural patterns (data from observations). Your result that “most empirical data carries an algorithmic signal” seems simply to restate the fact that there are evident patterns in the world. I cannot take this to mean, however, that the world in itself is pure ordered pattern, fully accountable in a set of algorithms. That seems too great a leap. Am I missing something?"
You are not missing anything; you make a fair point. That, as you say, may be the case. I find it interesting, however, that this 'algorithmic signal' seems to be present everywhere and that it can actually be quantified. There is a researcher who takes your point to the extreme, claiming that empirical data sets are algorithmically random; you may be interested (although I cannot agree with his conclusions):
James McAllister (2003). "Algorithmic randomness in empirical data", Studies in History and Philosophy of Science Part A 34(3):633-646.
See also a strong reply:
Charles Twardy, Steve Gardner & David Dowe (2005). "Empirical Data Sets Are Algorithmically Compressible: Reply to McAllister", Studies in History and Philosophy of Science Part A 36(2):391-402.
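[Editor's aside: the quantification mentioned above can be illustrated very roughly with off-the-shelf lossless compression: an algorithmically generated sequence compresses far below its raw size, while output from a good entropy source does not. This is only a crude stand-in for the algorithmic-complexity measures used in the actual research; the choice of `zlib` and the sample sizes are illustrative assumptions, not Zenil's method.]

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size.

    A ratio well below 1 means the compressor found structure
    (a crude, computable stand-in for low algorithmic complexity);
    a ratio near or above 1 means no structure was detected.
    """
    return len(zlib.compress(data, 9)) / len(data)

# An algorithmically generated sequence: a tiny program produces it,
# so a lossless compressor should shrink it dramatically.
structured = bytes(i % 16 for i in range(4096))

# Data drawn from the OS entropy source: no short description is
# expected to exist, so it should be essentially incompressible.
random_like = os.urandom(4096)

print("structured :", compression_ratio(structured))   # far below 1
print("random-like:", compression_ratio(random_like))  # near (or above) 1
```

The gap between the two ratios is the kind of "algorithmic signal" being discussed: structured data admits a much shorter lossless description, random-looking data does not.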
You say:
"[p5] You say that “Producing random bits in a deterministic universe…would actually be very expensive…” But isn’t this an argument AGAINST determinism? Perhaps nature cannot be forced into a mold that defines it either as deterministic or as non-deterministic. This could be the case if ‘determinism’ is actually a purely logico-mathematical concept and not ascertainable as an ontologically physical reality. The meaning of ‘determinism’ may be nothing other than logical implication (computability)."
Yes, if a deterministic universe were able to produce true random bits, that would be an argument against determinism — if I were ready to accept that this actually happens (as quantum mechanics may imply). If I consider this possibility, it is only to explain that classical mechanics implies a deterministic universe, yet quantum mechanics is supposed to be a source of free randomness, which is one of the deepest contradictions between these two mainstream theories. I only point out that if the universe is deterministic, as one may believe (as I do), then there is this fundamental incompatibility.
You say:
"We are free to project this upon nature, but isn’t it really our own invention? Similarly, we are free to imagine the universe as driven by simple algorithms (after all this works to some extent!). The really interesting thing, to me, is our apparent human need to know what the universe is in itself, in ultimate terms, apart from simply understanding what knowledge is (or can be) for us. Perhaps it is human thought that is driven by simple algorithms!"
Yes, it might be a projection, or even a mirage. But when a mirage works well (i.e. seems to explain and predict something) we usually call it a scientific model. I think it is fair to believe that the world is based on simple rules if it turns out, as seems to be the case, that it is comprehensible to a large extent with simple models of the world (including current scientific theories governed by simple formulae). If these simple rules turn out to produce the complexity we see around us, I think one can safely assume that they may be responsible for the organized complexity in the world. It could, of course, be the case that nature is fooling us, making us believe that the rules are simple when they are actually very complicated, only looking simple at the surface.
"[p4] From an engineering point of view, “what makes a cup a cup” is information; but from a physics point of view, what makes a cup a cup is structure. They may seem to coincide in the cup, which is an artifact, more than in the case of the human body, which is not. Information is effectively a set of instructions to the engineer to build the cup (a program). No engineer, however, knows how to build a human body or any natural thing. (The cup too—as a physical thing rather than a conceptual thing—is made of natural materials.) Such knowledge presupposes a blueprint from which to construct the natural entity. But a natural thing does not come with a blueprint, and anything looking like its blueprint is actually a product of an analysis that can never be assumed complete or exhaustive. The natural thing is found, the simulation (and the information behind it) is made. I suspect this applies to atoms as well as cups."
But a blueprint is a description which tells someone (if not you, then nature) how to build something. The claim that it is only information that makes a cup a cup rather than a human being rests on the fact that both human beings and cups are made of exactly the same elementary particles, and it is nothing but the way they are arranged that makes one or the other. But let me know how that could be wrong from a purely materialist point of view.
You say:
"[p5] You say “if information is even more fundamental than the matter of which it is made and the physical laws governing that matter, then the question of whether these effects violate physical laws may be irrelevant.” That is a big ‘if’!"
Of course that is a big 'if'; I would only dare to say so in a foundational-question essay. Notice, however, that several authors think of information as more fundamental than physics itself. And it is a common practice in science to keep finding more fundamental structures on which previous ones rest. I don't have much trouble seeing information as more fundamental than matter, but of course some may disagree (as some do, e.g. David Deutsch).
"The very question to be decided! It would undeniably be convenient if natural reality were fundamentally “informational”, but that does not make it so. In the medieval world, violation of physical law was similarly irrelevant, because an omnipotent God was ultimately the cause of everything, including natural law and miracles!"
Right, but I don't think we are going backwards; the replacement of explanations that were once laws is a common practice, if not the goal, of science, and the replacement seems to have a direction, in the form of models that explain more and more phenomena and have greater predictive power. Of course, saying that the world is something won't make it that something, but when you find that it smells, looks and behaves like that something, one can be persuaded that it is that something. In this case, it seems clear that information is, in the worst scenario, at least as fundamental (even if as a worldview and not necessarily an ontological truth) as other variables in the physical world (matter, energy). Whether this is the case or not, we will see — or we may never know. Notice, however, that many physicists have jumped to similar conclusions, even if they are treated differently, by giving the concept of symmetry (which you may see as information, or as an abstract mathematical object) a foundational role, even for predicting the existence of new particles, which so far has been quite successful.
"[p8 re: “DNA construction] I think it is unfair to dismiss the interaction with environment as not a “true random function operating on the DNA”."
It is generally agreed that the macro world is fully deterministic and does not produce indeterministic randomness, so according to current physics mutation can only be a truly random process if it is based on quantum mechanics.
"What is a ‘true random function’? Even by a mathematical definition, one cannot prove randomness."
When I write 'true random function' I mean a function capable of producing truly independent random bits, just as predicted by the mainstream interpretation of QM.
Thanks Dan, very interesting questions.
- Hector
Dan J. Bruiger wrote on Mar. 2, 2011 @ 19:42 GMT
Thank you so much, Hector, for your thoughtful and patient replies. I haven't been able to access McAllister's 2003 article, but I did read the Twardy et al. reply, which I think fairly refutes McAllister's claims when interpreted in narrow terms. However, I did read a more recent piece by McAllister (2009), "What do patterns in empirical data tell us about the structure of the world", from which I can see that he hasn't given up! His two main points there, liberally interpreted, seem to be (1) 'noise' is relative and may be mined for further pattern (signal), and (2) there is a sense in which 'pattern' is in the eye of the beholder. I would agree fully with (1), while acknowledging the usefulness of provisionally disregarding noise in pattern extraction. While I think he may go too far in his case for (2), there is something in the spirit of it that would certainly be useful should we ever have to confront alien scientists! In any case, it seems a wise proviso for human researchers to bear in mind.
In your reply you say:
“I only point out that if the universe is deterministic as one may believe (as I do), then there is this fundamental incompatibility [between randomness and determinism].”
My point is that the universe can only be deterministic if it happens to coincide perfectly with some formalism, for only such deductive systems are truly deterministic (i.e. the only meaning that can actually be assigned to causality is logical implication within some deductive system). In other words (to put it somewhat outlandishly), the universe can only be deterministic if it is not natural but artificial; conversely, if it is natural, it cannot be fully and finally mapped in any formalism. On the other hand, we are not in a position to say that it is fundamentally indeterministic either, since (mathematical) randomness cannot be proven. This is why I wonder at the basic human impulse to assert a "truth of the matter" one way or the other, since it seems hopeless to establish that. I hope I am not trying your patience too much, but I would very much value your feedback on these ideas.
You reply also:
“But a blueprint is a description which tells someone (if not you then nature) how to build something. The claim that only information makes a cup a cup rather than a human being is because both human beings and cups are made exactly of the same elementary particles and it is nothing but the way they are arranged that make one or the other. But let me know how that could be wrong from a purely materialist point of view.”
While perhaps useful, I think it is a mistake to project human communication models upon nature. We should not assume that nature engages in some form of information processing or computation, along the lines utilized by human beings. We cannot assume that we possess the (complete) information involved in the structure of a natural thing, which is not made by us but found in an incompletely known state. In the sense hinted at by McAllister, the information is made by us, and we can never be sure how exhaustively (or correctly) it describes the real thing. We only know for certain the blueprints we literally make, not the blueprints we impute to nature.
Thanks again for the clarity of your thinking and your willingness to respond.
Dan
James Lee Hoover wrote on Mar. 6, 2011 @ 18:17 GMT
"Our reasoning and empirical findings suggest that the information in the world is the result of processes resembling computer programs rather than of dynamics characteristic of a more random, or analog, world."
Hector,
This is a well-supported perspective, ably argued.
My prejudice is that the above only proves humankind's approach to understanding reality, but my argument tends to lack your many details.
Jim Hoover
Peter Jackson wrote on Mar. 8, 2011 @ 16:18 GMT
Hi Hector
You said; "The claim that only information makes a cup a cup rather than a human being is because both human beings and cups are made exactly of the same elementary particles and it is nothing but the way they are arranged that make one or the other. But let me know how that could be wrong from a purely materialist point of view."
I think the word 'makes' is the key, as it implies causality. I must entirely agree that 'information' may be a good word to describe the difference, but the whole gamut of my own thesis here is that, while we can 'describe' something from any viewpoint, a description is once removed from the reality of the thing, as so well described by Georgina in the most foundational terms. Correcting only this seems to bring Occam's razor into action.
Linguistic semantics apart, information is the difference in superposed wave patterns; causality is the interaction of it, and is only allowed by quantization. Therefore if either waves or particles were removed we'd be in the proverbial!
Do have a look at this easy read paper, with photographic evidence, if you're interested in the entertaining logical extension; http://vixra.org/abs/1102.0016
Best wishes
Peter
Constantin Leshan wrote on Mar. 10, 2011 @ 11:29 GMT
Dear Dr. Hector Zenil,
Since it is a ''leading essay'', I must check it for consistency; I hope to find novel ideas in physics and clear proofs about the nature of the Universe. The essay seems to contain two separate stories, about the origin of the universe and about the algorithmic nature of the world. Since it is a contest about the nature of reality, let's begin with the proofs about the nature of the world: ''One may wonder whether the lossless compressibility of data is in any sense an indication of the discreteness of the world''.
It is a completely senseless ''proof''. Now we can close all physics laboratories, because we can find all fundamental information about the Universe with the help of programmers and computer specialists. Programmers can tell us whether gravity is analog or digital after a careful analysis of the compressibility of mp3 files. Dear Hector, please tell us more about discrete spacetime (spatial atoms) or a fundamental length scale by analyzing the compressibility of data. His further reasoning is also senseless: ''An analog world means that one can divide space and/or time into an infinite number of pieces, and that matter and everything else may be capable of following any of these infinitely many paths and convoluted trajectories''. You cannot divide space and/or time into an infinite number of pieces because it is forbidden by the Heisenberg uncertainty principle; if you try to penetrate into a very small region of space, you need more and more energy. Therefore there is no sense in speaking about the lossless compressibility of data, because we can find the same answer by analyzing the Heisenberg uncertainty. Moreover, I can say that since computer programs are digital, that is an indication that the world is digital. This statement is absolutely equivalent to Hector's statement about the compressibility of data; therefore we both ''deserve'' the same prize.
Let's analyze the rest of Hector's essay: everything out of nothing. The main problem in the cosmological theories which claim that the Universe started from nothing is to explain how matter can appear from nothing. I don't see any solution to this problem in your essay. Why does Hector use the title ''Everything out of nothing'' if he is not able to explain this problem? He writes: ''the universe began its existence as a single point''; ''When the universe had cooled to the point where the simplest atoms could form'' — these are the old statements from the Big Bang theory. Thus, the first part of the essay does not have any novel ideas; it is a story about a Big-Bang-like theory.
Let's analyze his statements about the algorithmic nature of the world. Dear Hector, please show us an algorithm for the free motion of a particle and Heisenberg uncertainty. Since you state that the universe is capable of performing digital computation, please show us how this imaginary computer can process the motion of a particle and Heisenberg uncertainty. To process the motion of a particle, this computation must know the complete information about position and momentum before events occur. In this case you should accept that your theory contradicts quantum mechanics.
I can show you a place where the digital computation theory is wrong: 1) At the center of a black hole lies a gravitational singularity, a region where the spacetime curvature becomes infinite. Thus, at the center of a black hole digital computation is not possible, because the spacetime curvature becomes infinite. You see, there are places and phenomena which exist without need of digital computation. Since I found at least one place where digital computation cannot exist, it is a proof that this theory is wrong.
Another flaw in Hector's essay: ''But at the lowest level, the most elementary particles, just like single bits, carry no information when they are not interacting with other particles''. It is an erroneous statement; even when a particle is just born, it carries information about its kind, mass, charges and so on.
Conclusions: The main conclusion of the essay about the nature of reality is absolutely senseless and unconvincing; I found neither proofs about the nature of the Universe nor novel ideas in this essay. The statement about computational nature of the Universe is very doubtful and contradicts quantum mechanics.
It is a crime against humanity and science to support false theories. There are advanced theories supported by nobody, because all the money is absorbed by false theories. The human race will NOT survive the next thousand years without teleportation and true Science.
Sincerely,
Constantin
Author Hector Zenil replied on Mar. 10, 2011 @ 13:15 GMT
Constantin,
Thanks for your comments. I find it difficult to address your arguments against my essay one by one, because I think there has been some misreading on your part at several levels, but I will do my best to address the most fundamental ones.
I can say, as I said before, that nothing in my essay pretends to be a mathematical (or even physical) proof; it is statistical evidence in favor of a personal worldview (and original research, as acknowledged by my peers upon publication of my work in books and journals). I'd also like to say, again, that I'm not even jumping to the conclusion that the world is digital, but rather that it is algorithmic; the reader can then jump to the digital hypothesis using Occam's razor, if they wish.
I think I also clarified that my definition of the continuum is limited, and I can acknowledge that you make a fair point; it is as limited as the space available to discuss it in this contest. I could unpack more about the continuous case, but my main concern, and what I stand by, is that there is not even a general agreement on what continuity might be, while for the digital case the consensus is almost unanimous, both intuitively and, to some extent, formally.
Concerning whether an elementary particle carries information: you say it carries its mass, size and so on. I do not agree. A particle may have these properties only when interpreted from outside: a particle only has mass when measured relative to other matter, it has a location only when an external framework is fixed, and it has a size only when compared to other things. The particle by itself, from my point of view, does not store any of these parameters in itself, and it is only when interacting with others that this information becomes possible, chaining itself into other processes (this makes more sense under my algorithmic view, because it is processes that give the world its algorithmic character, in my view).
I see you are re-using your argument against Tommaso Bolognesi's essay regarding your claim that "At the center of a black hole lies a gravitational singularity, a region where the spacetime curvature becomes infinite. Thus, at the center of a black hole a digital computation is not possible because spacetime curvature becomes infinite." From it I gather that it is you who claims to hold a proof that the world is analog, or at least that it is not discrete. Unfortunately, I don't think the claim will convince most researchers, including me, simply because not knowing what happens inside a black hole doesn't rule out anything, but especially because physics as we know it is also inapplicable inside a black hole, and scientists do not throw their theories away. But again, I'm not standing as strongly in favor of the digital hypothesis as I am for the algorithmic case.
Finally, most if not all of my essay is based on hard science, particularly the mathematical theory of information and computation, except for my particular worldview, which this contest encourages people to share with others to trigger interesting discussions.
I find it difficult to argue against claims such as: "There are advanced theories supported by nobody because all money is absorbed by false theories [link inserted to your own work]. The human race will NOT survive the next thousand years without teleportation and true Science."
I will let others judge by themselves, but it is always a risk to tag science as 'true' science, specially when arguing against someone else theories in favor of yours.
Thanks for sharing.
Constantin Leshan replied on Mar. 10, 2011 @ 19:56 GMT
Dear Dr. Hector Zenil,
Professional scientists often send fantastic essays to our contest precisely because this contest encourages people to share ideas with others to trigger interesting discussions. We do not write holy papers, so they need revision. Thus, even if your essay has errors, it is not a catastrophe.
1) You write: ''a particle may have these properties only when interpreted from outside, a particle only has mass when measured related to other matter''. a) There are huge, cold clouds of gas and dust in our own galaxy. These particles are very cold and practically do not interact with one another. Meanwhile these clouds have gravity that influences the motion of planets, stars and even galaxies. This is proof that the particles have mass even if they do not interact with each other or with measuring devices. Thus, your statement is wrong.
b) There are huge numbers of neutrinos, which are able to pass through ordinary matter almost unaffected. Neutrinos practically do not interact with matter, but their gravitation influences the motion of planets, stars and even galaxies; it means they carry information about their mass even when they don't interact with matter. c) If you mean gravitational interaction, then your statement is also wrong: particles always interact gravitationally. Thus, since particles carry information (about mass) without interacting, your statement is erroneous: ''But at the lowest level, the most elementary particles, just like single bits (the Shannon Entropy of a single bit is 0), carry no information when they are not interacting with other particles''.
2) ''the universe is capable of performing digital computation''. This statement is also wrong; it contradicts quantum mechanics and Black Hole physics. I found the same error in T. Bolognesi's essay, and you can see my arguments on his page.
3) ''the lossless compressibility of data is in any sense an indication of the discreteness of the world''. This statement is completely senseless, as I showed in my post above. (Programmers may tell us about the nature of the Universe by analyzing the compressibility of mp3 files.)
''Finally, most if not all of my essay is based in hard science'' - it proves nothing; the SM is a mathematical model only that can compute but explain nothing. Please try to explain inertia, mass and the curvature of spacetime with the help of your ''hard science''. Your ''hard science'' may fall within the next 10-20 years. Pay attention to how many doubtful essays you see in our contest; the situation is the same everywhere in physics. About 70 percent of theoretical papers in physics are wrong. An example is your theory based on ''hard science'', which contradicts quantum mechanics and Black Hole physics. If your theory is true, then quantum mechanics and Black Hole theories are wrong, and vice versa. Astronomers have found evidence for Black Holes, but I don't see observable evidence for the algorithmic Universe, so we conclude that your theory is false rather than the first two theories.
Sincerely,
Constantin
Author Hector Zenil replied on Mar. 10, 2011 @ 21:11 GMT
Constantin,
The universe is capable of digital computation because there are digital computers in it (you are typing on one). I don't see how that could be wrong. The question is therefore whether the universe _only_ computes at the digital level.
On the other hand, respected quantum scientists, e.g. Seth Lloyd, think that an algorithmic world is possible and compatible with quantum mechanics. But as you say, if you deny mainstream science, it will be difficult to argue against any of your arguments. You make interesting points, but they could be read more easily if they weren't so categorical.
I don't see anywhere in my essay where I claim, as you say I do, that the compressibility of data is a direct proof of the discreteness of the world. What I wrote is "One may wonder whether the lossless compressibility of data is in any sense an indication of the discreteness of the world." I then make my case that it may be an indication, not a proof.
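As a toy illustration of why compressibility is informative at all (a sketch of mine, not one of the experiments in the essay): structured data shrinks dramatically under a lossless compressor, while algorithmically random data does not.

```python
import os
import zlib

# A highly regular 10,000-byte string vs. 10,000 random bytes.
structured = b"01" * 5000
random_like = os.urandom(10000)

# Lossless compressed size (zlib at maximum effort) as a rough,
# computable stand-in for descriptive (Kolmogorov) complexity.
c_struct = len(zlib.compress(structured, 9))
c_random = len(zlib.compress(random_like, 9))

print(c_struct, c_random)  # the regular string compresses to a few
                           # dozen bytes; the random one barely shrinks
```

Of course, compressibility under one particular compressor is only an upper bound on algorithmic complexity, which is precisely the sense in which it can be an indication but never a proof.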
Concerning your neutrino argument: we don't yet know exactly what particles, if any, may be responsible for what we identify as gravitation. So when you say that a neutrino is not interacting with anything else and take that as a proof against my claim (which is not presented as a proof) that particles may not be able to carry any information, the argument is not that convincing.
The math on which my arguments are based won't be wrong in 10 or 20 years. What might be wrong is the connection I make between the math, its consequences, and the real world, but that is what this contest is about; I offer what I think is evidence in favor of the algorithmic nature of the world.
Thanks.
Constantin Zaharia Leshan replied on Mar. 13, 2011 @ 16:35 GMT
Hector,
1) You write: ''a particle may have these properties only when interpreted from outside, a particle only has mass when measured related to other matter'', and ''But at the lowest level, the most elementary particles, just like single bits (the Shannon Entropy of a single bit is 0), carry no information when they are not interacting with other particles''.
Imagine a cloud of particles that do not interact and therefore, according to your essay, do not have mass and do not carry information. Then you detect these particles, and consequently all of them suddenly get mass and begin to create gravity. This contradicts quantum mechanics, measurement theory and the energy conservation laws. The process of measurement cannot create energy/matter. Also, many particles fly through the Universe without interacting. If these particles have no mass without interaction, then the mass of the universe must be very small; besides, the mass of the universe would fluctuate depending on the events of measurement/interaction. Thus, this statement is wrong and contradicts quantum mechanics and the energy conservation laws.
Sincerely,
Constantin
Author Hector Zenil replied on Mar. 13, 2011 @ 17:12 GMT
Constantin,
Neither I nor most physicists believe in action at a distance in physics, perhaps because our minds are wired to believe in a causal world (which is most of my claim: that we live in an algorithmic, rule-based world). Action at a distance was a common mistake some centuries ago, when people thought, for example, that explanations of electromagnetic phenomena implied action at a distance among objects, meaning that nothing happened in between but that things somehow exchanged information 'magically'. What we have witnessed with the help of science, however, is that everything seems connected to everything else in one way or another, and scientific research has unveiled most of these connections in the form of unifications, as I evoked before (e.g. light and electricity, or the movement of the planets and falling objects on Earth).
You continue assuming things that are not acknowledged by everyone (nor by most thinkers), so I cannot really imagine a cloud of particles that does not interact with anything else. Yet I never said that things don't have mass if they don't interact with anything else, but rather that mass as a magnitude (information) only makes sense when there is interaction with something else. This seems to make perfect sense even in the equations (a body's mass determines the degree to which it generates or is affected by the gravitational field of another body).
For example, the current quark mass is a logical derivation from the mathematical formalism of quantum field theory, not of descriptive origin but the result of an external calculation. In the end, particles are regarded as excited states of a field, so the interpretation of particle mass is only a partial description of the model itself. I am not, evidently, an expert in quantum mechanics, and my informational interpretation of quantum phenomena is only a side interpretation of my algorithmic view, based on nothing but the application of information theory to what I think is the equivalent of single bits in physics: elementary particles, if one wants to map them in a one-to-one relationship. This view also has the advantage of unifying one with the other instead of declaring one more fundamental than the other, a question that has triggered much of the debate on whether matter/energy or information is more fundamental.
When you say "The process of measurement cannot create energy/matter", I may agree, although, unlike you, I would need to think further about it. I think measurements generate or unveil hidden information; in what exact way they do, I do not yet know. I think claiming that 'a lot of particles fly in the Universe without interacting' is quite a bold statement. Modern physics bets on the contrary, that particles rest on something else. For example, string theory seems to suggest that particles 'touch' each other even if they seem not to, by way of having more dimensions than they appear to have, and somehow interacting in those higher dimensions. The main assumption is that everything interacts with something else, and what I'm saying seems perfectly compatible with this: if you isolate the smallest particle, then it will not carry any information, perhaps not even about itself, and therefore measurements lead to what we think are random values or spooky behaviors at that level of reality.
Thanks.
Alan Lowey wrote on Mar. 10, 2011 @ 12:29 GMT
Dear Hector,
I've made some progress with my novel idea of a helical screw in empty space as a model for the graviton. I've posted this in two other leading essays, so I'll copy and paste it here.
On day-by-day thinking about the novel idea of a mechanical Archimedes screw in empty space representing the force of gravity by gravitons, I have deduced an explanation for the galaxy rotation curve anomaly.
The helical screw model gives matter a new fundamental shape and dynamics which the standard model lacks, imo. This non-spherical emission of gravitons is in stark contrast to the Newtonian/Einsteinian acceptance that "all things exert a gravitational field equally in all directions". This asymmetry of the gravitational field allows the stars to experience a greater pull towards the galactic plane, due to their rotation giving more order to the inner fluid matter of the stellar core. Both the structure of the emitter and that of the absorber of the gravity particles are important. It also has implications for hidden matter at the centre of galaxies.
I've given the idea some more thought and come to the conclusion that the stars furthest from the galactic centre must have a more 'bipolar nature' than the matter of stars of the inner halo, presumably. This is the reason they have wandered towards the galactic plane whilst the halo stars have not. The outer stars' configuration means they experience a greater interaction with the flux pattern of the graviton field. Are the stars of the outer arms simply spinning faster? We are on the outer edge of a spiral arm, so this would fit with the hypothesis. Our sun could have a spin that is higher than that of the average halo star. This relationship between spin and distance from the galactic centre is a fundamental feature which ties in with the suggested mechanism of their creation.
All that is needed is an additional factor of stellar spin speed, as well as its mass and distance from the galactic centre. The relationship should then give calculated values which match those observed.
Best wishes,
Alan Lowey
QSA wrote on Mar. 10, 2011 @ 19:33 GMT
Dear Zenil,
I repeat my post from Mr. Shing's blog, but add that my theory is 100% information-based, random (the main point) and algorithmic, since I implement it using a computer program.
I was so happy to read your essay since it is very much related to my own theory:
http://www.qsa.netne.net
I think all the ideas of John Benavides, Tommaso Bolognesi, D'Ariano, Zenil and a few others are very much related. My website has not been updated, but here is the abstract of my upcoming paper.
In this letter I derive the laws of nature from the hypothesis that "Nature is made out of mathematics, literally". I present a method to design a universe using simple rules, which turns out to have properties similar to our reality. Particles are modeled as ends of lines: one end is confined to a small region and the other extends all over the universe. The Coulomb force (when lines cross) and gravity (when lines meet) appear naturally; they are two aspects of one process involving the interaction of these lines, followed by calculating the expectation values for positions. I am able to calculate what appears to be the fine-structure constant. Gravity also appears with surprising results: it shows that gravity becomes repulsive when distance is very great or very small. At this time I have only done a 1D full simulation with interaction, and 2D, 3D and indeed nD without interaction. I am working on 2D interaction now and am already seeing very surprising results. I can see a hint of the strong and the electroweak force. Time and space could be looked upon as derived quantities. I show that not only nature is discrete but also mathematics, since dx can only approach zero but never is zero. In my model the ultimate irony is that our reality came about because there is only one way to design a dynamic universe, and that only one allowed our existence. I guess you could say fortunately or unfortunately depending on how one's .
report post as inappropriate
QSA wrote on Mar. 10, 2011 @ 19:55 GMT
A listing of the program in C++ for EM and gravity:
// g.cpp : Defines the entry point for the console application.
// (The forum stripped the angle-bracket header names from the original
// post; the includes below are a plausible reconstruction.)
#include "stdafx.h"
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cmath>
#include <fstream>
//using std::cout;
//using std::ios;
//using std::ofstream;
using namespace std;
// Global arrays
double S[951000];
double Po[951000];
double Lo[951000];
double Sy[951000];
double Poy[951000];
double Loy[951000];
double ex[500];
double ex1[500];
double fr[500];
int main() {
srand(time(0));
double i = 0;
double g = 0;
double frf;
double dist;
long l;
long d1;
long st1;
long d0;
long st0;
double f;
double f1;
double edx;
double edx1;
long m;
long p;
long li;
long p1;
long li1;
double en;
double alpha = 0.0;
double a1 = 0.0;
double a2 = 0.0;
double a3 = 0.0;
double avg = 0;
double cn = 0;
// double enf;
double intr;
l = 7000; // Universe size
d1 = 200; // Particle 1 size
d0 = d1; // Particle 2 size
double km = 20; // Setting the interval
double kj = 20000000; // # of random throws
intr = ((l) / ((km * 2.5)));
double d0div = d0;
cout // ... (the listing is cut off here in the original post)
Author Yuri Danoyan+ wrote on Mar. 10, 2011 @ 20:17 GMT
Dear Dr. Hector Zenil,
As community score leader please read my essay
http://www.fqxi.org/community/forum/topic/946
Janko Kokosar wrote on Mar. 11, 2011 @ 21:00 GMT
Dear Hector Zenil
Zeilinger also has similar ideas about (objective) randomness. (And Neil Bates in this contest.) Maybe it would be useful if you compared them with yours. Otherwise, it is a clearly written essay.
Your essay is so good, in my view, that I used it twice as a reference.
http://vixra.org/pdf/1103.0025v1.pdf
I was too late for this contest, so I am sending the link here.
Regards
Author Hector Zenil wrote on Mar. 13, 2011 @ 03:18 GMT
Dear Janko,
Interesting, thanks for your comments and for citing my essay. I shall read your paper in further detail.
As for Zeilinger, my position is similar to the opinions expressed in response to Zeilinger's in 'The Message of the Quantum?' by Daumer et al. (available online: http://www.maphy.uni-tuebingen.de/members/rotu/papers/zei.pdf). Zeilinger claims that quantum randomness is intrinsically indeterministic and that experiments violating Bell's inequality imply that some properties do not exist until measured. These claims are, however, based on a particular (yet mainstream) interpretation of quantum mechanics, from which he jumps to conclusions relying on various no-go or no-hidden-variables theorems (of people such as von Neumann, Bell, Kochen and Specker) which are supposed to show that quantum randomness is truly indeterministic.
And although I share with Daumer et al. the belief that Wheeler did not shed much light on the issue with his rather obscure treatment of information as related to, or as more fundamental than, physics, I do not share Daumer et al.'s claims about what they think is wrong with the informational worldview. As they say, Wheeler's remarkable suggestion was that physics is only about information, or that the physical world itself is information. I rather think that the next level of unification (after the unification of other previously unrelated concepts in science, such as electricity and magnetism, light and electromagnetism, and energy and mass, to mention a few) is between information and physics (and ultimately, as a consequence, computation), as has already started to be the case (e.g. the connection between statistical mechanics and information theory).
No interpretation of quantum mechanics rules out the possibility of deterministic randomness, even at the quantum level. Some colleagues, however, have interesting results establishing that hidden variables theories may require many more resources in memory to keep up with known quantum phenomena. In other words, hidden variables theories are more expensive to assume, and the memory needed to simulate what happens in the quantum world grows as badly as it could for certain deterministic machines. But still, that does not rule out other possibilities, not even the hidden variables theories, even if they are not efficient in traditional terms.
Janko Kokosar wrote on Mar. 13, 2011 @ 08:58 GMT
Dear Hector
Here is also my attempt to explain quantum randomness.
http://www.fqxi.org/community/forum/topic/571
(Contest one year ago)
I think that we need to explain all of physics, including quantum gravity and consciousness. Quantum mechanics is not yet a complete theory.
We do not need hidden variables as additional parameters, but the connections among known physical parameters should be clear, and they are not yet.
So, I believe in quantum consciousness, and my model for it is simple: additional very small elementary particles.
Regards
p.s. I also wrote one non-speculative article:
http://vixra.org/abs/1012.0006
It is a base for my above mentioned article:
http://vixra.org/pdf/1103.0025v1.pdf
I hope to find someone to be endorser in arXiv.
Constantin Leshan wrote on Mar. 13, 2011 @ 18:32 GMT
Hector,
I read about you and your theory here: ''nature is seen as processing information, computing the laws of physics and everything we see around us, including all sorts of complex things like life. In this view, the universe would be computing itself and our computers would therefore be doing nothing but reprogramming a part of the universe to make it compute what we want to compute''.
To prove the ''algorithmic nature of the world'' you must first explain quantum mechanics and all forces, including gravity, by your algorithms and computation. I don't see today any algorithms for quantum mechanics and gravity in your papers; therefore this theory is a fantastic DREAM only. You'll never explain the Heisenberg uncertainty by algorithms and computation, because you would have to know the complete quantum information, position and momentum, BEFORE events occur in order to process the motion of a particle; such a theory is forbidden by Quantum Mechanics and Black Hole physics. Since your Universe is algorithmic, you need a gigantic God-like computer able to run programs/algorithms for every particle and body.
Nature is really simple, but your theory insists on making it complicated: you need algorithms and computation for every particle. Where is this gigantic computer, outside of the Universe? This theory denies Free Will: since the world is ruled by algorithms and computation, all our future was programmed before we were born. In this contest we are looking for theories able to SIMPLIFY Nature, not to complicate it. It is one of the most fantastic theories, contradicting quantum mechanics, conservation laws and Black Hole theories; it is surprising that people support such fantasy. Without algorithms and computation nature is very simple. It is a crime against Science to support false theories.
Constantin
Author Hector Zenil replied on Mar. 13, 2011 @ 19:06 GMT
Constantin,
You came back to the very first arguments you presented before. I'm now convinced that the discussion will be fruitless if you persist in claiming to hold all the true answers to what are commonly considered open questions in science.
It seems you keep misreading the essay at several levels. Just to stress again the main hypothesis of my worldview: I'm using what is called Levin's semi-measure, a tool that has also been called the universal distribution (see the Kirchherr and Vitanyi paper online: http://homepages.cwi.nl/~paulv/papers/mathint97.ps) because it was proven (by Levin himself) that it dominates any other semi-computable measure. This tool also captures and formalizes Occam's razor, which, as you may know, is ill suited to complicating things because, by definition, it favors simplicity. My worldview is the simplest possible among the algorithmic explanations of the world. What I do is calculate an experimental approximation of Levin's distribution and compare the result to processes in the real world, then discuss the similarities and discrepancies. I don't need to explain what happens inside black holes because not even current mainstream physics does, and it is beyond my current scope of research.
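To make the procedure concrete, here is a minimal toy sketch (my own reconstruction for illustration, not the actual code behind the essay's experiments; I use the 256 elementary cellular automata as a convenient stand-in for a family of simple programs): run every program, tally how often each output pattern appears, and observe a highly skewed frequency distribution, as algorithmic probability predicts.

```python
from collections import Counter

def eca_rows(rule, width=31, steps=15):
    # Evolve one elementary cellular automaton (cyclic boundary)
    # from a single 1 in the middle cell.
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        prev = rows[-1]
        rows.append([(rule >> ((prev[(i - 1) % width] << 2)
                               | (prev[i] << 1)
                               | prev[(i + 1) % width])) & 1
                     for i in range(width)])
    return rows

# Tally every 8-bit tuple produced by all 256 rules.
counts = Counter()
for rule in range(256):
    for r in eca_rows(rule):
        bits = ''.join(map(str, r))
        for i in range(len(bits) - 7):
            counts[bits[i:i + 8]] += 1

ranked = counts.most_common()
print(ranked[0], ranked[-1])  # uniform tuples dominate by orders of magnitude
```

A uniform random source would give all 256 tuples roughly equal counts; here, simple tuples such as all-zeros vastly dominate, and it is this kind of skewed empirical distribution that gets compared to the distribution of patterns in real-world data.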
As for the quantum phenomena, issues with black holes, the teleportation that you think is vital for humanity, and other claims of the same sort I invite you to re-read our previous messages.
Thanks.
Constantin Leshan replied on Mar. 13, 2011 @ 20:53 GMT
Hector,
My theory can explain at least teleportation, but your theory can explain NOTHING in physics. Your theory is a mathematical construct only, and I wrote in this contest already that all mathematical proofs in physical papers must be held in DOUBT. Usually mathematics is used as a shield to hide false theories. First of all, to create a real physical theory you must address quantum phenomena, black holes and teleportation, not just mathematical models.
Your above answer is an attempt to suppress questions with a stream of senseless information. I read about your group and your paper ''On the algorithmic nature of the world''; it is a theory about the computational nature of the Universe. Your friends like Janko Kokosar try to support you in order to create the illusion that it is a very SCIENTIFIC paper.
Dear readers, it is a false theory forbidden by Quantum Mechanics and Black Hole theory. It is a crime to vote for false theories. We need powerful Science and Technology to survive.
Constantin
QSA replied on Mar. 14, 2011 @ 00:43 GMT
Dear Constantin,
I do share your concern that the ''algorithmic nature of the world'' has not been demonstrated convincingly so far. That is why I invite you to check out my website, where I derive the laws of QM, QFT and QG from just such an algorithm using a very simple program. The secret was in the postulate; everything else, including the algorithmic aspect, just followed naturally. The website has not been updated, but I will send you the details if you are interested.
http://www.qsa.netne.net
Constantin Leshan replied on Mar. 14, 2011 @ 06:44 GMT
QSA
Your theory is a good approach for computer games but not for reality: ''Gravity also appears with surprising results, it shows that gravity becomes repulsive when distance is great or when distance is very small'', ''I have been toying with the idea (existence is nothing but mathematics) in my mind for years''.
Constantin
QSA replied on Mar. 14, 2011 @ 17:02 GMT
Dear Constantin
Please specify how/why you arrived at your conclusion that my theory is good for "games", since it is easy to see how I reproduce the results of QM in a natural way. I could make a categorical (even more accurate) statement about your theory being no more than one more "mechanical" theory of the kind attempted by maybe thousands of people over the past 100 years. But I won't, because the purpose of fqxi is to explore all avenues. I myself have a million questions about my own theory which you could have raised, but making categorical statements without a hint of evidence is poor science, and a crime against one's own attempt to analyse and understand.
Constantin Leshan replied on Mar. 14, 2011 @ 17:47 GMT
Dear QSA,
Why did I arrive at the conclusion that your theory is good for games? You write: ''Gravity also appears with surprising results, it shows that gravity becomes repulsive when distance is great or when distance is very small''. Gravity cannot have such properties. Besides, in order to create a computer model of gravitation you must first know the nature of gravitation and inertia. Do you know the nature of gravitation, inertia and spacetime? How can you model the curvature of spacetime if you do not know the nature of spacetime? Besides, even if programs simulating gravity may exist, it does not mean that Nature uses your software. Can you show me the computer that processes gravity and the motion of your body? In other words, it is a FANTASY only.
Besides, how can you model the Heisenberg uncertainty if you do not know the complete quantum information about position and momentum? You cannot know this information by definition, because it is forbidden by Quantum Mechanics; therefore such a computer theory is good for games only.
Constantin
QSA replied on Mar. 14, 2011 @ 22:27 GMT
Dear Constantin,
Thank you for your reply, much appreciated. As for gravity, it is known that GR cannot predict short-distance (high-energy) behavior, and that is all the fuss about QG; also, for large distances, galaxy rotations and the expansion of the universe are not predicted by GR, only by tweaking with the CC, which is done in a very unnatural way. Moreover, gravity has been reinterpreted in other ways, like the entropy picture, so spacetime curvature is not the only way; however, I do agree that these pictures must be made to match.
But there is a problem in the mid range in my theory, where it does predict attraction but the potential is quadratic; I have to sort this out.
Otherwise QM and QFT (as related to the 1/r law) match perfectly, which confirms these theories from a different perspective, and whatever applies to them applies to my theory.
As for Reality's computer, here lies the beauty of my theory. Once you assume as a postulate that numbers and some relationship among them are the only things that are real, then that leads to the design of the algorithm as shown, which produces the above results. That is how science works: if your assumptions lead to good results, then that means they make good sense and are valid.
It is needless to say that my theory and the rest here are one-man shows seeking alternative explanations; you can call them toy models, since they are obviously not fully developed, but NEVER a game.
Eckard Blumschein wrote on Mar. 13, 2011 @ 22:59 GMT
Dear Hector Zenil,
To me your essay is written in an easily understandable way but is not yet convincing. Perhaps I am expecting too much from experts in computer science and probability like you. So far I do not see any chance for your rather speculative approach to become foundational. It reminds me of "in the beginning was the word": Big Bang = white noise, and then symmetry breaking made it flesh. What about other colors of noise, e.g. brown noise?
Why and how did symmetry breaking start just with hydrogen atoms?
In the 2nd contest I made an unreplied comment on the essay by Stephen Wolfram:
I argued that while digital computing is superior to analog computing, analog computers are closer to reality than differential equations. I meant that they are bound to real time and in particular to its direction. You did not refer to this matter, and I guess why: your procedures of computing also tend to be natural in that they perform a series of forward steps, even in for ... do loops, never backward in time. You presumably overlooked this when you equated the time-symmetric laws of nature with computer programs.
Regards,
Eckard
Author Hector Zenil replied on Mar. 16, 2011 @ 14:14 GMT
Eckard,
I never wrote 'Big Bang = white noise', not only because I think it is an oversimplification of something that deserves further discussion, but also because it is not my belief. White noise is usually identified with indeterministic or 'true' (in some intuitive sense) randomness, yet I think all randomness is just complicated patterns resulting from the application of algorithmic rules (and even though I am fully aware that this view contradicts the Copenhagen interpretation of quantum mechanics, it is compatible with other interpretations).
Now, the question of other colors of noise is very interesting. Many noise colors, such as pink noise (aka 1/ƒ noise), follow a power-law frequency distribution, as you may know, in quite an organized fashion. Power-law distributions are often an indication that the source is not random in nature; distributions associated with random processes are typically uniform or Gaussian. While theories of pink noise (and other colors) are still a matter of current research, its typical power-law shape is compatible with the empirical algorithmic distribution we generated from algorithmic sources (and compatible with the theoretical power-law universal distribution). As you can read in my essay, our distributions from running computer programs generate about the same kinds of randomness in about the same frequencies. I think this kind of noise may be explained as the tail of the algorithmic probability distribution: the part that looks most random to us but actually follows the most organized top, which we identify as the structured signals.
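As an aside, the spectral difference between white and 1/f noise is easy to check numerically. The following is a rough sketch of mine (not from the essay; the Voss source-summing trick is one standard way to approximate pink noise, and the band edges are my own arbitrary choices): compare the DFT power of each signal in a low-frequency band against a high-frequency band.

```python
import cmath
import random

random.seed(1)
N = 1024

def voss_pink(n, sources=8):
    # Voss trick: sum several random sources, source j being redrawn
    # only every 2**j steps; the sum approximates 1/f (pink) noise.
    vals = [random.uniform(-1, 1) for _ in range(sources)]
    out = []
    for t in range(n):
        for j in range(sources):
            if t % (1 << j) == 0:
                vals[j] = random.uniform(-1, 1)
        out.append(sum(vals))
    return out

def band_power(x, freqs):
    # Mean squared DFT magnitude over the given frequency bins.
    n = len(x)
    total = 0.0
    for f in freqs:
        X = sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
        total += abs(X) ** 2
    return total / len(freqs)

low, high = range(8, 16), range(200, 216)
runs = 10
pink_ratio = white_ratio = 0.0
for _ in range(runs):
    p = voss_pink(N)
    w = [random.uniform(-1, 1) for _ in range(N)]
    pink_ratio += band_power(p, low) / band_power(p, high)
    white_ratio += band_power(w, low) / band_power(w, high)
pink_ratio /= runs
white_ratio /= runs
print(pink_ratio, white_ratio)  # pink piles power at low frequencies
```

White noise spreads its power evenly across frequencies (ratio near 1), while the 1/f source concentrates it at low frequencies: the organized, non-uniform signature discussed above.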
As you may know, pink noise is present all over in data series; it has a tendency to occur in natural physical systems, from almost all electronic devices to the electromagnetic radiation of astronomical bodies. In biological systems it is also present in some statistics of DNA sequences, a source that we also analyzed with respect to its frequency distribution of patterns (tuples of different sizes), finding some correlation with our algorithmic distributions (one can also regard some of the processes acting on DNA as algorithmic in nature, and as likely responsible for at least some of the shape of the overall distribution in DNA sequences).
Thanks.
John Benavides wrote on Mar. 14, 2011 @ 08:35 GMT
Dear Hector
You have written a very interesting essay. In my essay I propose an idea of how we can understand emergence in computation, which can be used to understand how a classical reality emerges from a quantum base. In my approach we can introduce the computational-information perspective that you are proposing, and at the same time keep all the classical formalism. I would like to hear your opinions about it.
Regards,
J. Benavides.
Author Hector Zenil replied on Mar. 15, 2011 @ 17:56 GMT
Dear John,
Thanks. I will read your essay with care; the abstract looks sound and interesting.
Thanks.
Author Hector Zenil wrote on Mar. 15, 2011 @ 19:27 GMT
I'd like to summarize my view in a few paragraphs, if that is possible for an already condensed essay:
My view aims to provide a purely informational explanation of the organized structures we find all around us in the world, from the formation of galaxies to the appearance of an organized phenomenon such as life. This is despite the second law of thermodynamics, whose principle of increasing entropy predicts rather the contrary, a contradiction usually explained away by arguments about open systems regarded as exceptions that manage to locally decrease entropy while increasing the entropy of their surroundings.
Two tools from the theory of algorithmic information are relevant to explaining this presence of organized structures in the world without necessarily violating thermodynamic principles, actually providing a reasonable account of this tension (i.e. the entropy derivation vs. the presence of organized structures), with the single assumption that what happens in the universe is the result of the application of rules (rules that, layer after layer, may look very complicated but are simple in origin, because they are of the kind that can be carried out by digital computers). Both algorithmic probability, with Levin's concept of a universal distribution, and Bennett's logical depth then provide an explanation of the organized universe in which we seem to live, as a product of time in the universe seen as computational time.
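For readers unfamiliar with these two tools, the first can be stated compactly (this is the standard textbook definition, not a formula taken from the essay): Levin's universal distribution gives the probability that a universal prefix machine U, fed uniformly random bits, outputs the string s, and the coding theorem ties it to the Kolmogorov complexity K(s):

```latex
m(s) \;=\; \sum_{p \,:\, U(p) = s} 2^{-|p|}, \qquad -\log_2 m(s) \;=\; K(s) + O(1)
```

So the most structured (lowest-complexity) strings are exponentially the most probable outputs of random programs, which is the sense in which computation favors organization over noise.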
As I argue, the view that the world is algorithmic in the terms described above is supported by at least two indicators. One is the compressibility of data in our world, as proven by the success of compressing data (whether in digital or analog repositories). Not only data: the physical laws governing our reality have themselves turned out to be compressible into models and formulae that scientists use as shortcuts for physical phenomena to make predictions about the world. The second indicator supporting the algorithmic view is the distribution of patterns in our world when compared (as we did) to the distribution of patterns produced by purely algorithmic worlds (using abstract machines).
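The first indicator, compressibility, can be illustrated with a crude experiment (a sketch only; a lossless compressor gives an upper bound on algorithmic complexity, not its true value): the output of a short rule compresses to a small fraction of its size, while data from a pseudo-random source barely compresses at all.

```python
import random
import zlib

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 9)) / len(data)

# Structured: the output of a short rule (a repeating pattern).
structured = b"01" * 50_000

# (Pseudo-)random: bytes from a seeded PRNG, a stand-in for incompressible data.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(100_000))

print(f"structured: {ratio(structured):.3f}")  # compresses to a tiny fraction
print(f"random:     {ratio(noisy):.3f}")       # stays close to 1.0
```

The same comparison applied to empirical data sets is what the success of practical compression attests to on a large scale.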
We do not necessarily jump to the conclusion that the world is digital, but we do claim that the kinds of rules producing these kinds of distributions can be carried out by digital computation. Therefore, according to this hypothesis, there is no need to assume an analog universe when it comes to explaining the organization of the world, since assuming an analog universe would be, under this view, an unnecessary complication of the theory. Yet this algorithmic view doesn't rule out the possibility of an analog algorithmic world.
Author Yuri Danoyan+ replied on Apr. 22, 2011 @ 22:12 GMT
Dear Hector
What is your opinion about this site?
http://www.idsia.ch/~juergen/computeruniverse.html
Thank you in advance
Yuri
basudeba wrote on Mar. 16, 2011 @ 06:36 GMT
Dear Sir,
It is fashionable among scientists to express their views incomprehensibly to retain their importance. However, since one of the criteria for this competition is “Accessible to a diverse, well-educated but non-specialist audience”, we would like you to kindly clarify what is meant by: “start from nothing: the state of the universe with all its matter and energy squeezed into an infinitely small point of no length, no width, and infinite density called a singularity; or else a fraction later, out of a state of complete disorder such that once particles formed they couldn’t do anything except collide with each other in a completely disordered way.” If there was nothing, then what exploded into what? If disorder was created, was there order before that? Who or what or which mechanism brought order to the disorderly state? If there is order now and there was order before the disorder, why can’t it be known and described? All systems tend to move towards thermal equilibrium. Then how does the Universe behave differently during those initial phases, which are continuing till now? How did the expanding universe produce structures? Structures are produced by the opposite mechanism, consolidation. Cooling assumes the pre-existence of cooler regions, by expanding into which the exploded system gets cooled. This means it is the external system, and not the big bang, that is responsible for structure formation. Then where did this external system come from, since you say there was nothing?
There are a large number of different approaches or formulations of the foundations of Quantum Mechanics. There is Heisenberg’s Matrix Formulation, Schrödinger’s Wave-function Formulation, Feynman’s Path Integral Formulation, the Second Quantization Formulation, Wigner’s Phase Space Formulation, the Density Matrix Formulation, Schwinger’s Variational Formulation, the de Broglie-Bohm Pilot Wave Formulation, the Hamilton-Jacobi Formulation, etc. There are several Quantum Mechanical pictures based on the placement of time-dependence: the Schrödinger Picture (time-dependent wave-functions), the Heisenberg Picture (time-dependent operators) and the Interaction Picture (time-dependence split). The different approaches are, in fact, modifications of the theory. Each one introduces some prominent new theoretical aspect with new equations, which needs to be interpreted or explained. Thus, there are many different interpretations of Quantum Mechanics, which are very difficult to characterize. Prominent among them are: the Realistic Interpretation (the wave-function describes reality), the Positivistic Interpretation (the wave-function contains only information about reality), and the famous Copenhagen Interpretation, which is the orthodox one. Then there is Bohm’s Causal Interpretation, Everett’s Many Worlds Interpretation, Mermin’s Ithaca Interpretation, etc. With so many contradictory views, quantum physics is not a coherent theory, but is truly weird.
String theory, which was developed with a view to harmonize General Relativity with Quantum theory, is said to be a high order theory where other models, such as super-gravity and quantum gravity appear as approximations. Unlike super-gravity, string theory is said to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant from it should, at least in principle, be possible. On the other hand, the number of vacuum states associated with it seems to be quite large, and none of these features three large spatial dimensions, broken super-symmetry, and a small cosmological constant. The features of string theory which are at least potentially testable - such as the existence of super-symmetry and cosmic strings - are not specific to string theory. In addition, the features that are specific to string theory - the existence of strings - either do not lead to precise predictions or lead to predictions that are impossible to test with current levels of technology.
There are many unexplained questions relating to the strings. For example, given the measurement problem of quantum mechanics, what happens when a string is measured? Does the uncertainty principle apply to the whole string? Or does it apply only to some section of the string being measured? Does string theory modify the uncertainty principle? If we measure its position, do we get only the average position of the string? If the position of a string is measured with arbitrarily high accuracy, what happens to the momentum of the string? Does the momentum become undefined, as opposed to simply unknown? What about the location of an end-point? If the measurement returns an end-point, then which end-point? Does the measurement return the position of some point along the string? (The string is said to be a two-dimensional object extended in space. Hence its position cannot be described by a finite set of numbers and thus cannot be described by a finite set of measurements.) How do Bell’s inequalities apply to string theory? We must get answers to these questions before we probe further and spend (waste!) more money on such research. These questions should not be swept under the carpet as inconvenient, or on the grounds that some day we will find the answers. That someday has been a very long period indeed!
It is high time to discard the “mainstream physics” by applying “Occam’s razor” and rewrite physics from scratch based on the presently available data. No amount of patch work will do. We have developed a theory based on fundamental principles that can satisfactorily answer all the questions posed here. We will publish it soon.
Regards,
basudeba.
Alan Lowey wrote on Mar. 18, 2011 @ 14:48 GMT
Dear Hector,
Congratulations on your dedication to the competition and your much deserved top ten placing. I have a bugging question for you, which I've also posed to all the top front runners btw:
Q: Coulomb's Law of electrostatics was modelled by Maxwell by mechanical means after his mathematical deductions, as an added verification (thanks for that bit of info, Edwin), which I highly admire. To me, this gives his equation some substance. I have a problem with the laws of gravity though, especially the mathematical representation that "every object attracts every other object equally in all directions." The 'fabric' of spacetime model of gravity doesn't lend itself to explaining the law of electrostatics. Coulomb's law denotes two types of matter, one 'charged' positive and the opposite type 'charged' negative. An Archimedes screw model for the graviton can explain -both- the gravity law and the electrostatic law, whilst the 'fabric' of spacetime can't. Doesn't this by definition make the helical screw model better than anything else that has been suggested for the mechanism of the gravity force? Otherwise the unification of all the forces is an impossibility imo. Do you have an opinion on my analysis at all?
Best wishes,
Alan
basudeba replied on Mar. 19, 2011 @ 23:36 GMT
Dear Sir,
You have raised a very important question. We have discussed it below the essay of Mr. Ian Durham. Here we reproduce it for you.
The latest finding of LHC is that the Universe was created from such a super-fluid and not gases. The confined field also interacts with the Universal field due to difference in density. This in turn modifies the nature of interactions at different points in the medium (Universal field).
A force can act only between two particles, as only a particle can influence the field, which in turn can be experienced by another particle. If the external force of the field is more than the confining force of the two particles, then the two particles break up and join to form a new particle. We call this “sambhuti”. In the opposite case, the two particles experience the force without being internally affected. The force acts between the centers of mass, treating each as a point particle. We call it “bibhuti”. This second category of relationship, which we call “udyaama”, is known as gravity. Since it stabilizes the two bodies at the maximum permissible distance between them depending upon their respective masses, we call it “urugaaya pratisthaa”. For reasons to be discussed separately, this is possible only if gravity is treated as a composite force.
The first category of forces, which are interactions between two bodies, acts differently based on proximity-proximity, proximity-distance, distance – proximity and distance – distance variables. We call these relationships “antaryaama”, “vahiryaama”, “upayaama” and “yaatayaama” respectively. This interaction affects the field also inducing various local disturbances. These disturbances are known as “nitya gati”, “yagnya gati”, “samprasaada gati” and “saamparaaya gati” respectively. Any particle entering the field at those points feels these disturbances, which are known as the strong nuclear interaction, weak nuclear interaction, electromagnetic interaction and radioactive disintegration respectively. Thus, you can see that gravity belongs to a completely different group of forces and cannot be integrated with other fundamental forces of Nature in the normal process. Yet, it has a different function by which other forces can be derived from it. We will discuss that separately.
According to our theory, gravity is a composite force of seven forces that are generated based on their charge. Thus, they are related to charge interactions. But we do not accept Coulomb's law. We have a different theory for it. We derive it from fundamental principles. We will discuss it separately.
According to our theory, all particles are locally confined fields. This confinement takes a three fold structure for the particle - center of mass or nucleus, extra-nuclear field and the confining orbitals. If we take into account the external field with which the particle interacts, it becomes a four-fold (3+1) structure. The particle interacts with the field in two ways. If the internal energy distribution cancels each other with a little inward pull, then it behaves as a stable particle. Thus, all particles can be described as composites that exhibit two types of charges: that which pushes out from a central point and is described as positive charge and that which confines it and is described as negative charge. Where both are balanced, it is neutral charge.
We have derived theoretically that the charge of the proton in electron units is not +1, but +10/11. Similarly, the charge of the neutron is not 0, but -1/11. This makes the atom slightly negatively charged. This excess negative charge is not experienced outside as it is directed towards the nucleus. It is not detected during measurements due to the nature of the calibration of the measuring instrument. But it is released during fusion and fission.
The confinement described above takes place where the external field dominates to confine the particle. Here the particle becomes negatively charged. In the opposite case, the particle becomes positively charged. The particles are classified as positively charged or negatively charged according to whether the external field dominates over confinement or the confined force dominates over the local field. Since equilibrium is inherent in Nature, in either case, the particles search for their complements to become full. The less negative part of the proton (since it is +10/11, it has -1/11 negative charge) seeks to couple with the electron to become -1/11. This makes hydrogen atom highly reactionary.
The combined charge of proton and electron (-1/11) seeks the neutron since it has an equal charge. Thus, the opposites do not attract and same charge does not repel. It is not the opposite either. The charge interaction can be of four types:
positive + positive = explosive. That is seen in fusion reaction.
Positive + negative (total interaction) = internally creative (increased atomic number)
Positive + negative (partial interaction) = externally creative (becomes an ion and interacts with other particles.)
Negative + negative = no reaction. They co-exist.
For further clarification, kindly write to: mbasudeba@gmail.com.
Regards,
basudeba.
basudeba replied on Mar. 23, 2011 @ 00:47 GMT
Dear Sir,
We would like to further clarify as follows:
According to our theory, gravity is a composite force of seven forces that are generated based on their charge. Thus, they are related to charge interactions. But we do not accept Coulomb's law. We have a different theory for it. We derive it from fundamental principles. In Coulomb’s law, F = k Q1 x Q2 /d^2. In a charge neutral object, either Q1 or Q2 will be zero reducing the whole equation to zero. This implies that no interaction is possible between a charged object and a charge neutral object. But this is contrary to experience. Hence the format of Coulomb’s law is wrong.
As we have repeatedly described, the atoms can be stable only when they are slightly negatively charged which makes the force directed towards the nucleus dominate the opposite force, but is not apparent from outside. Hence we do not experience it. We have theoretically derived the value of the electric charge of protons, neutrons and electrons as +10/11, -1/11 and -1. The negative sign indicates that the net force is directed towards the nucleus. Charge interaction takes place when a particle tries to attain equilibrium by coupling with another particle having similar charge. The proton has +10/11 charge means it is deficient in -1/11 charge. The general principle is same charge attracts. Thus, it interacts with the negative charge of electrons. The resultant hydrogen atom has a net charge of -1/11. Thus, it is highly reactionary. This -1/11 charge interacts with that of the neutron to form stable particles. These interactions can be of four types.
Positive + positive = explosive. By this, what we mean is the fusion reaction that leads to the unleashing of huge amounts of energy. Its opposite is also true in the case of fission, but since it is a reduction, there is less energy release.
Positive + negative (total interaction) = internally creative (increased atomic number.) This means that if one proton and one electron is added to the atom, the atomic number goes up.
Positive + negative (partial interaction) = externally creative (becomes an ion.) This means that if one proton or one electron is added to the atom, the atom becomes ionic.
Negative + negative = no reaction. What it actually means is that, though there will be no reaction between the two negatively charged particles, they will appear to repel each other, as their nature is confinement. Just as two pots that confine water cannot occupy the same place, if one is placed near the other with some areas overlapping, both repel each other. This is shown in the “Wheeler’s Aharonov–Bohm experiment”.
Regards,
basudeba
Vladimir F. Tamari wrote on Mar. 25, 2011 @ 04:19 GMT
Dear Hector
Thanks for a lucid and very interesting paper. I would be most interested in how you would apply your expertise and approach (information theory and programming) to my earlier 2005
Beautiful Universe theory on which my present fqxi paper is based. The following is my reaction to some of your well-considered statements and ideas.
Your symmetry-breaking homochirality finds a very precise physical explanation in my theory - it is the rotation in one direction of the fundamental building blocks or nodes of a universal lattice that has only one type of information: angular momentum in units of h with the axis of rotation at a given spherical angle in a micro Bloch sphere.
"If the world were digital at the lowest scale one would end up seeing nothing but strings of bits." Not necessarily: in a lattice the bits would be structured in a crystal-like arrangement (either itself creating 3D space, or embedded in 3 hidden space dimensions), not one-D strings.
"What surprises us about the quantum world is precisely its lack of the causality that we see everywhere else and are so used to. But it is the interaction and its causal history that carries all the memory of the system" In my theory randomness is an artifact of the orderly spread of momentum through the lattice by a process resembling diffusion.
"if space is informational at its deepest level, if information is even more fundamental than the matter of which it is made and the physical laws governing that matter, then the question of whether these effects violate physical laws may be irrelevant. Producing random bits in a deterministic universe, where all events are the cause of other events, would actually be very expensive" In my theory "the medium is the message": the bits making up everything interact causally and locally; information is most efficiently transmitted in the form of angular momentum in units of h from node to node. This is impossible to understand or accept using present-day notions of physics, which is why I proposed specific steps for how to reverse-engineer GR, space-time and quantum probability into a simpler, more fundamental theory.
"Unveiling the machinery" Take that, Feynman! He famously avoided searching for a machinery that creates quantum phenomena.
Your research of Ref. 14 sounds very interesting. Is there an online version?
Best wishes from Vladimir
Author Hector Zenil replied on Mar. 25, 2011 @ 21:32 GMT
Dear Vladimir,
I will get back to you later. I couldn't wait, however, to let you know that the choice of subtitle 'Unveiling the machinery' was inspired precisely by a Feynman quotation from one of his Messenger Lectures at Cornell:
"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time ... So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequerboard with all its apparent complexities."
Richard Feynman, 1964.
Concerning Ref. 14, yes, there is an online version available on arXiv:
On the Algorithmic Nature of the World by Hector Zenil, Jean-Paul Delahaye
http://arxiv.org/abs/0906.3554
Thanks for your comments, I'll have a look at what you tell me.
I'm also happy to report that an extended version of the essay is coming soon, with many more details. I will upload it to arXiv and announce the URL here.
Sincerely.
Vladimir F. Tamari wrote on Apr. 11, 2011 @ 15:15 GMT
Dear Hector
Sorry for the delay in responding - I just saw this. The FQXI website badly needs an author tracking function whereby you can see a list of all the threads to which you have contributed, with new responses listed chronologically.
Thanks for the Feynman quote - it shows that his physical intuition ran deeper than his practical mathematical ingenuity: he devised the many-paths method to calculate quantum outcomes, but was not satisfied with it, and hoped for a simpler reality. In the sort of universal lattice of nodes such as the one I proposed, any local change in energy or node orientation immediately creates a Machian domino effect that spreads throughout the universe from node to node. This model - if successful - would explain why Feynman's many-paths hypothesis 'works', and also explain the simple machinery (node-to-node induction) behind it.
I found your essay "Algorithmic Nature..." rather abstract and too technical for my level of understanding - I would appreciate it if you could summarize it in a simple paragraph using everyday words - thanks. I look forward to the newer version of your paper.
Author Yuri Danoyan+ wrote on Apr. 24, 2011 @ 19:35 GMT
This seems very interesting to me:
http://www.ma.hw.ac.uk/~oliver/Nature_article.pdf
Yuri
Sridattadev wrote on Aug. 2, 2011 @ 13:39 GMT
Dear Hector,
I would like to introduce myself in quantum terminology and share the truth that I have experienced with you.
who am I? I superpositioned myself to be me, to disentangle reality from virtuality and reveal the
absolute truth.
Love,
Sridattadev.