If you are aware of an interesting new academic paper (published in a peer-reviewed journal or posted on the arXiv), a conference talk (at an official professional scientific meeting), an external blog post (by a professional scientist) or a news item (in the mainstream news media) that you think might make an interesting topic for an FQXi blog post, please contact us at forums@fqxi.org with a link to the original source and a sentence about why you think the work is worthy of discussion. Please note that we receive many such suggestions and, while we endeavour to respond to them, we may not be able to reply to all of them.

Please also note that we do not accept unsolicited posts and we cannot review, or open new threads for, unsolicited articles or papers. Requests to review or post such materials will not be answered. If you have your own novel physics theory or model that you would like to post for further discussion among the FQXi community, please add it directly to the "Alternative Models of Reality" thread, or to the "Alternative Models of Cosmology" thread. Thank you.


Contests Home

Current Essay Contest

*Contest Partners: The Peter and Patricia Gruber Foundation and Scientific American*

Previous Contests

**Wandering Towards a Goal**

How can mindless mathematical laws give rise to aims and intention?

*December 2, 2016 to March 3, 2017*

Contest Partner: The Peter and Patricia Gruber Fund.

read/discuss • winners

**Trick or Truth: The Mysterious Connection Between Physics and Mathematics**

*Contest Partners: Nanotronics Imaging, The Peter and Patricia Gruber Foundation, and The John Templeton Foundation*

Media Partner: Scientific American

read/discuss • winners

**How Should Humanity Steer the Future?**

*January 9, 2014 - August 31, 2014*

*Contest Partners: Jaan Tallinn, The Peter and Patricia Gruber Foundation, The John Templeton Foundation, and Scientific American*

read/discuss • winners

**It From Bit or Bit From It**

*March 25 - June 28, 2013*

*Contest Partners: The Gruber Foundation, J. Templeton Foundation, and Scientific American*

read/discuss • winners

**Questioning the Foundations**

Which of Our Basic Physical Assumptions Are Wrong?

*May 24 - August 31, 2012*

*Contest Partners: The Peter and Patricia Gruber Foundation, SubMeta, and Scientific American*

read/discuss • winners

**Is Reality Digital or Analog?**

*November 2010 - February 2011*

*Contest Partners: The Peter and Patricia Gruber Foundation and Scientific American*

read/discuss • winners

**What's Ultimately Possible in Physics?**

*May - October 2009*

*Contest Partners: Astrid and Bruce McWilliams*

read/discuss • winners

**The Nature of Time**

*August - December 2008*

read/discuss • winners


Forum Home

Introduction

Terms of Use

RSS feed | RSS help


*Posts by the author are highlighted in orange; posts by FQXi Members are highlighted in blue.*



RECENT FORUM POSTS

**Thomas Ray**: "(reposted in correct thread) Lorraine, Nah. That's nothing like my view...."
*in* 2015 in Review: New...

**Lorraine Ford**: "Clearly “law-of-nature” relationships and associated numbers represent..."
*in* Physics of the Observer -...

**Lee Bloomquist**: "Information Channel. An example from Jon Barwise. At the workshop..."
*in* Physics of the Observer -...

**Lee Bloomquist**: "Please clarify. I just tried to put a simple model of an observer in the..."
*in* Alternative Models of...

**Lee Bloomquist**: "Footnote...for the above post, the one with the equation existence =..."
*in* Alternative Models of...

**Thomas Ray**: "In fact, symmetry is the most pervasive physical principle that exists. ..."
*in* “Spookiness”...

**Thomas Ray**: "It's easy to get wound around the axle with black hole thermodynamics,..."
*in* “Spookiness”...

**Joe Fisher**: "It seems to have escaped Wolpert’s somewhat limited attention that no two..."
*in* Inferring the Limits on...

RECENT ARTICLES


**The Complexity Conundrum**

Resolving the black hole firewall paradox—by calculating what a real astronaut would compute at the black hole's edge.

**Quantum Dream Time**

Defining a ‘quantum clock’ and a 'quantum ruler' could help those attempting to unify physics—and solve the mystery of vanishing time.

**Our Place in the Multiverse**

Calculating the odds that intelligent observers arise in parallel universes—and working out what they might see.

**Sounding the Drums to Listen for Gravity’s Effect on Quantum Phenomena**

A bench-top experiment could test the notion that gravity breaks delicate quantum superpositions.

**Watching the Observers**

Accounting for quantum fuzziness could help us measure space and time—and the cosmos—more accurately.


FQXi FORUM

February 23, 2018

CATEGORY:
Is Reality Digital or Analog? Essay Contest (2010-2011)

TOPIC: Reality Is Ultimately Digital, and Its Program Is Still Undebugged by Tommaso Bolognesi

Reality is ultimately digital, and all the complexity we observe in the physical universe, from subatomic particles to the biosphere, is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale. Emergence in computation is an immensely creative force whose relevance for theoretical physics is still largely underestimated. However, if the universe is to be scientifically comprehensible at all, as a famous Einsteinian dictum suggests, we must additionally postulate that this computation sits at the bottom of a multi-level hierarchy of emergent phenomena satisfying appropriate requirements. In particular, we expect 'interesting things' to emerge at all levels, including the lowest ones. The digital/computational universe hypothesis gives us a great opportunity to achieve a concise, background-independent theory, if the 'background' -- a lively spacetime substratum -- is equated with a finite causal set.
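The abstract's central object, the causal set, can be made concrete with a small simulation. The sketch below is not Bolognesi's own growth model (his is algorithmic rather than stochastic); it implements transitive percolation, a standard toy dynamics from the causal-set literature, purely to illustrate what 'growing' a discrete spacetime substratum looks like. The function name and parameters are illustrative.

```python
import random

def grow_causal_set(n, p, seed=0):
    """Grow an n-element causal set by transitive percolation:
    each newly born element is linked to every earlier element
    independently with probability p, and the resulting order is
    closed under transitivity (causality is transitive)."""
    rng = random.Random(seed)
    below = [set() for _ in range(n)]  # below[i]: all causal ancestors of i
    for i in range(n):
        for j in range(i):
            if rng.random() < p:
                below[i].add(j)
                below[i] |= below[j]  # inherit j's ancestors
    return below

causet = grow_causal_set(50, 0.1)
# Chains (totally ordered subsets) play the role of timelike paths;
# antichains (mutually unrelated elements) play the role of space.
```

Bolognesi's point, pursued in the essay, is that deterministic, algorithmic growth rules can produce far richer emergent structure in such causets than this stochastic toy does.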

Tommaso Bolognesi (Laurea in Physics, Univ. of Pavia, 1976; M.Sc. in Computer Science, Univ. of Illinois at U-C, 1982) is a senior researcher at ISTI, the Institute for Information Science and Technologies of the Italian National Research Council at Pisa. His research areas have included stochastic processes in computer music composition (1977-1982), models of concurrency, process algebra and formal methods for software development (1982-2005), and emergence in computational big-bangs (since 2005). He has published several papers in all three areas in international scientific journals.

Dear Tommaso,

I am glad that you take part in this contest and my first impression is that your essay is very interesting. I wish you success.

All the best, Felix.


Thank you Felix for your welcome message. I am really curious about the actual interest that the causal-set based, digital/computational approach that I have described might attract in this context. I expected a few more contributions along those lines, but so far I have not seen any, and I wonder whether I should be happy or worried about it.

Dear Tommaso,

our essays were published almost simultaneously, and a fun thing is that I intended to name mine "GR 2.0 - Debugging a singularity". A remnant of that version is an endnote in which I compare the information loss in a black hole with memory leaks. I'll come back after I read your essay.

Best wishes,

Cristi


Dear Cristinel,

so you have removed the 'debugging' concept from your title. At first sight I liked the title 'GR 2.0 - Debugging a singularity' for its doubly software-oriented flavor; but after (quickly) reading your paper I think your final choice was more appropriate, since your approach does not seem to relate at all to the 'digital/computational' finite universe conjecture. You are definitely on the 'analog' side!


Dear Tommaso,

I enjoyed reading your essay, which is well written and reveals a deep understanding. Discrete approaches like the one you explore can add much to our understanding of reality. I personally believe there may be more in causal sets than just the conformal structure, and I strongly encourage their study. And trying to obtain the laws we consider fundamental as emergent phenomena of simpler laws is what science is about.

Am I on the digital or analog side? It's complicated; I just added something about this here and here.

Best wishes,

Cristi Stoica


yes of course, and a micro BH also, because the singularity says that.

Me also, I am a musician and poet, and my father was a bus driver and now he is dead. And after that, at the age of 20, I was in a coma. No, but frankly, hihihihi, several papers and this and that ... a big joke, yes, and big publicity.

hop, under review. Cristi, hihihi, you see I play everywhere like a child now. I love this platform and the vanities of scientists.

Computer vs. rationality of our universe. Big Bangs with an S? No, but frankly, you simulate what, a universe or your universe? A big joke, all that. It's just computing, not physics. On that, goodbye.

Don't be offended, I am just a little crazy. Hop, I am going to take my meds. Until soon.

Best

Steve


Dear Tommaso

I have read your interesting essay, on which I would like to make a comment. In your essay you say the following:

Furthermore, sometimes we identify new, unifying laws that allow us to jump one level down: laws that appeared as primitive (e.g. Newton's law of gravitation) are shown to derive from deeper laws (e.g. General Relativity).

I wish this were...


Hi Israel. Thanks for the comments.

Suppose one 'borrows' some constants (for example, Planck h, or universal gravitation G) from existing theories, and uses them in a new theory such that:

(1) the predictions of the 'old' theories are confirmed by the new theory, yielding even better agreement with experimental results, and

(2) more phenomena, falling even outside the scope of application of the 'old' theories, can be explained and predicted with high accuracy by the new one.

What's wrong with that? The idea is that the new theory 'absorbs' the old theories as special cases -- of more limited applicability and lower accuracy. I do not see the inheritance of physical constants from theory to theory as a problem, but as a nice feature of scientific progress.

But perhaps you are addressing the problem of whether a theory is autonomously capable of justifying/determining the value of its constants?

I am indeed fascinated by this problem, although it is a bit outside the scope of this contest. In my opinion, the most ambitious form of ToE (if it exists) should be able to do without any physical constant: all of them should be derivable -- should emerge from the rules of the game. It is nice to think that those values did not have to be chosen and fine-tuned by Someone before switching on the Universe... And I believe that theories fundamentally based on a discrete substratum, on computation, and on emergence -- the type I discuss in my essay -- have a much higher chance of achieving this goal, almost by definition.


Dear Tommaso

Thank you for your reply. I have read my own post and it seems that there are some sentences missing in the argument about special relativity. I am rewriting it so you understand better what I mean.

The value of the speed of light in vacuum was conventionally defined by the Bureau International des Poids et Mesures (BIPM) as V_{r} = 299 792 458 m/s. But this value...


Dear Tommaso

When I see the post preview, everything is OK, but when I submit the post several sentences do not appear. I am attaching the pdf so you can read it completely. You can find these arguments on page 15. I apologize for the inconvenience.

attachments: 2010IPerez_1012.2423v1_PhysicsViewUniverse.pdf


Hello dear Tommaso Bolognesi,

A very beautiful essay, full of rationality. Congratulations.

Here is my humble point of view, in bad English, sorry; I write literally and too quickly, a bad habit.

We see the encodings in the pure finite series... these codes compute our reality. It's a little as if our particles, entangled spheres for me, knew what they must become, in fact. In a...


In fact, like many, you confound the computing a little with the reality, but it's fine. Very good knowledge of mathematical computing. We thank you for that. Indeed, computing is not always a familiar matter for all. After all, it's an application of physics.

An ultimate mathematical theory of physics, you say... I say: the physics before, the maths after. The algorithms invented by humans...

view entire post


and Solomonoff will say... waww, AIXI is possible... but a string is divisible and a sphere no. hihihi, I love this platform.

Of course, a string in computing is different. But, but, confusions, hihihi.

Now I insist: for a correct universal Turing machine, the real fractal of the main central sphere with its pure number is essential... if not, it's a wind.

Second, never a machine...

view entire post


I too found the essay fascinating. It's great to see some truly foundational takes on reality in this essay contest, and your perspective is most enlightening.


Dear Tommaso,

Very interesting essay.

You mention in Section 3 first sentence: "No experimental evidence is available today for validating the digital/computational universe conjecture." Let me point you and your readers to one of my papers entitled "On the Algorithmic Nature of the World" (http://arxiv.org/abs/0906.3554) where we compare the kind of distributions one finds in...

view entire post


Hi Hector,

great to see you are here too. I hope indeed that you will bring more water (or, rather, bits) to the mill of the algorithmic universe conjecture! And thanks for the comments.

If I understand correctly, your work provides some estimate of how likely it is that the world we experience (through the statistical analysis of real data sets) be the output of some computation....

view entire post


Hi Tommaso,

In my research the nature of randomness is secondary, so at the lowest level there might be (or not) 'true' randomness, and it would be pretty much irrelevant (from the algorithmic perspective). A consequence of assuming my algorithmic hypothesis is, however, that randomness is the result of deterministic processes and is therefore deterministic itself (which I think is compatible with your model). If randomness looks so, it is only in appearance. What I further say is that if randomness ever had any place in the world, it may no longer do. Whether you start a computer with a set of random programs from randomness or from emptiness, there is no difference in the long term. By contrast, if the universe somehow 'injects' randomness at some scale influencing the world (and our physical reality), empirical datasets should diverge from the algorithmic distribution, which is something we have been measuring (to compare the two, one also has to build the algorithmic distribution, hence to simulate a purely algorithmic world).

In my algorithmic world randomness is, as you say, also the fabric of information in the way of symmetry breaking. You can either start from nothing or true randomness but you will end up with an organized structured world with a very specific distribution (Levin's universal distribution). What I do is to measure how far or close data in the real world is to this purely algorithmic distribution.

Sincerely.


Hi Hector,

of course I also sympathize with the idea that no pure randomness is continuously injected into physical reality, and that everything that appears random is still the result of a deterministic process.

You write that by the algorithmic universe approach one ends up with 'an organized, structured world with a very specific distribution (Levin's universal distribution)'.

Levin's distribution m(x) provides the a-priori probability of binary string x, and is obtained by summing 2^{-|p|} over all programs p, of any length, that trigger a terminating computation on a Prefix Universal Turing Machine which outputs x. Thus, the sum of m(x) over all x is the sum of 2^{-|p|} over all programs that trigger a terminating computation on a Prefix Universal Turing Machine (outputting ANY x), and this is Chaitin's Omega! Nice! I imagine you knew already, but I didn't!

So, are you saying that you have been able to measure the extent to which distributions of data sets (binary strings) from our real world vs. from an artificial, algorithmic world approximate the m(x) distribution (which, I read, is 'lower semi-computable', that is, knowable only approximately)? This sounds very challenging. But I am curious about the type of artificial universe that you have experimented with, and the type of data that you analyzed in it.

For example, if I gave you a huge causal set, intended as an instance of discrete spacetime, where would I look for a data set to be tested against Levin's distribution?

By the way, do these distributions refer to an internal or external view of the universe (Tegmark's frog vs. bird view)? The problem being that in the real universe we collect data as frogs, while in a simulated universe it is much easier to act as birds.

A final question for you. By introducing the a-priori probability of string x one shifts the focus from the space S of strings to which x belongs, to the space P of programs that can compute x. But then, why not assume that the elements of space P -- strings themselves -- also enjoy an a-priori probability? (This is not reflected in the definition of m(x).) How, or why, do we avoid an infinite regression?
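For readers who want to see what an 'algorithmic distribution' looks like in miniature, here is a hedged sketch in the spirit of the methodology discussed in this exchange: run every machine in some small, enumerable class and tally the outputs. My choice of the 256 elementary cellular automata as the machine class is purely illustrative; the result is a finite, machine-dependent analogue of Levin's m(x), not the universal distribution itself.

```python
from collections import Counter

def eca_step(cells, rule):
    """One step of an elementary cellular automaton (Wolfram rule 0-255)
    on a ring of binary cells."""
    n = len(cells)
    return tuple(
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def output_distribution(width=9, steps=9):
    """Run all 256 rules from a single live cell and tally the final
    rows: a toy, machine-dependent analogue of algorithmic probability."""
    counts = Counter()
    for rule in range(256):
        cells = tuple(int(i == width // 2) for i in range(width))
        for _ in range(steps):
            cells = eca_step(cells, rule)
        counts[cells] += 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

dist = output_distribution()
# Simple outputs (e.g. the all-zero row, produced by every rule that
# kills the lone live cell) absorb mass from many rules, while most
# of the 2^9 possible rows never occur at all.
```

Comparing such a machine-generated distribution with frequency distributions extracted from empirical datasets is, roughly, the kind of test described above.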


Dear Tommaso,

This is a very good essay. I also recommend that the informed reader move on to the bibliography.

I do have a question/suggestion: besides the whole universe, there are other, smaller, man-made universes where this type of computational approach could explain something, like the emergence of patterns of use of space in a city. I am no architect, but a mathematician. Recently I became aware of a host of research (in architecture) concerning SPACE. Here are some relevant references:

I first learned about the work of Christopher Alexander from this secret life of space link which I am sure you will enjoy.

Then I learned from Bill Hillier ("Space is the machine") about the existence of "axial maps" (Turner A, Hillier B & Penn A (2005), An algorithmic definition of the axial map, Environment & Planning B 32-3, 425-444), which still escape a rigorous mathematical definition but seem to be highly significant for understanding emergent social behaviour (see Space Syntax).

So I wonder if such a computational approach could be of any help in such a more concrete but mathematically elusive subject.

Best,

Marius


Dear Marius,

thank you for the pointer to the 'secret life of space' by blogger Leithaus.

Having been involved in process algebra (even older than the Pi calculus) for quite some time, I cannot but agree that one of the attractive features of those formalisms is their peculiar way of handling 'structure' and 'behaviour' simultaneously. But I also fully share the concern expressed in that blog about the usefulness of modeling the geometry of spacetime in the Pi calculus:

"...will it be of any use to encode these notions in the model, or will it just be another formal representation -- potentially with more baggage to push around?"

Who knows! But the idea that formal analogies between Pi calculus specifications of some spatial geometry, on one hand, and of biological processes, on the other, might suggest that 'space itself is alive' does not sound convincing to me, to say the least (although we all know that space is indeed alive!...). One reason is that two specifications with very different structure (syntax) may well share the same semantics/behavior, indicating that the formal structure of a specification is not so important.

One should rather concentrate on the semantics of the specification; and the semantics can be given in several ways, including by a mapping from syntax to ... causal sets -- the structure that I discuss in my essay. It would be interesting to see whether relatively simple process algebraic specifications could yield causal sets exhibiting the variety of emergent properties that I observe in causets grown by other models of computation.


..."One reason is that two specifications with very different structure (syntax) may well share the same semantics/behavior, indicating that the formal structure of a specification is not so important."

Right. But this, I think, is already taken care of by Leithaus (Greg Meredith), with Snyder, in this paper: Knots as processes: a new kind of invariant.

Which, to my understanding, seems somehow related to this paper by Louis Kauffman, who was among those who started topological quantum computing (along with Freedman, Kitaev, Larsen), which is just a form of computation with braids.


Hi Tommaso,

1. Thanks for your clear presentation on the groundwork needed to find emergence from simple computations.

2. How did the "Experiments with emergence in computational systems modeling spacetime and nature" event (ISTI-CNR, Pisa, Italy, July 10-11, 2009) turn out? Were there any dramatic experiments demonstrated, in your opinion?

3. I like your point that: "At all levels, including the lowest ones, something 'interesting' must happen. Objects, localized structures distinguishable from a background, waves, the mix of order and disorder, are examples of 'interesting' things."

4. I believe that one of the most interesting events in physics is progression of particle masses from Buckyballs to Fleas (a Planck Mass). We go from indistinguishable objects to identifiable objects and from objects that can exhibit interference to those that have none. This seems to me an area that may be appropriate for a computational approach.

Thanks again,

Don Limuti


Hi Don,

1) Thank you.

2) You refer to the JOUAL 2009 conference. Four of the six regular papers presented at the event are now published in Complex Systems. I could add that one of those authors - Alex Lamb - has just submitted an interesting essay to this contest, which matches very well, again, the JOUAL focus! Go take a look.

3) That's what I call the 'Teilhard conjecture' (after T. de Chardin). We are still very far from its experimental validation, but if it turned out to be false, all the experiments I am running on computational, causet-based big bangs would be, mostly, wasted CPU time...

4) I fully agree.


Dear Tommaso,

I think your essay is very interesting.

I was wondering if you could clarify something: You say, "There exists a tiniest scale at which the fabric of spacetime appears as a pomegranate...made of indivisible atoms, or seeds." How do you reconcile this with the fact that photons of any wavelength travel at the same rate through space? Would photons of a smaller wavelength be more impeded by the atoms of space? (Forgive me if you have already addressed this.)

Also, I feel I should clarify my response to your question about my essay. I do believe that discrete space and time are the ultimate bottom layer of nature, but when I say 'nature,' I'm thinking of the universe as a working system, not its basic, fundamental components. Only through discrete space and time do you get matter, force, (relative) energy, and all the workings of the universe. However, on a fundamental level, I believe space at least is a continuous object. Its discreteness would come from particular one-, two- or three-dimensional regions becoming more 'timelike' in their nature, though they would continue to be space...in my opinion.

All the best with the contest,

Lamont


Hi Williams,

you ask whether photons of a smaller wavelength would be more impeded by the atoms of space.

I wish I were already there!

The general question behind this, I guess, would be: what are, really, particles in a causal set, intended as a discrete model of spacetime?

A general answer would be: since the only tool we have for building the universe is a discrete spacetime -- a directed graph made of nodes (events without attributes) and causal relations among them -- a particle is a trajectory, a worldline, a periodic pattern made of events.

But talking about the SPEED of a particle in a causal set is already quite difficult, since we need a reference frame, and that's not easy to define either, since we cannot enjoy the advantages of a continuous setting, such as Minkowski spacetime. One way to proceed would be to identify the origin of a reference frame with a particle, as defined above, so that you end up with a system of particles that observe each other and detect their mutual speeds... But I have not yet investigated this line of thought.

Defining what a photon is in a causal set is indeed particularly challenging, for the following reason.

While the definitions of time-like and space-like sets of events are immediately available, via the notions of chain and anti-chain for partial orders, the definition of light-like path is problematic, since you do not have a continuous background where you can take limits.

The difficulty is as follows. Consider the set of nodes forming the photon's trajectory.

If these are in time-like relation, we have something for which Lorentz distance progresses (Lorentz distance between two events in a causal set is the length of the longest directed path between them - this works fine, as the people in the Causal Set Programme know well), but then we would have a progression of the photon proper time, which contradicts its living on a null spacetime cone.

If the points are in a space-like relation, no information can be carried by the photon, violating the idea that this particle is the fastest causality carrier.
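The longest-path notion used above is easy to play with in code. Here is a minimal sketch (the toy causal set, and the function name `lorentz_distance`, are my own illustration, not taken from the essay) computing the Lorentz distance between two events of a small causet as the length of the longest directed path between them:

```python
# A toy causal set: events are integers, and 'relations' maps each event to
# the events it directly precedes (a small directed acyclic graph).
relations = {
    0: [1, 2],
    1: [3],
    2: [3],
    3: [4],
    4: [],
}

def lorentz_distance(causet, a, b):
    """Length (in links) of the longest directed path from a to b,
    or None if b is not in the causal future of a."""
    best = None

    def walk(node, depth):
        nonlocal best
        if node == b:
            if best is None or depth > best:
                best = depth
            return
        for succ in causet[node]:
            walk(succ, depth + 1)

    walk(a, 0)
    return best

print(lorentz_distance(relations, 0, 4))  # longest chain 0-1-3-4 has 3 links
```

On realistic causet sizes one would use dynamic programming over a topological order rather than this brute-force recursion, but the idea is the same: distance grows only along directed (time-like) paths.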

The attitude one assumes with the type of research I have described is to set up computational experiments and, basically, see what happens, without preconceived expectations. I find this approach justified in light of the highly creative power exhibited by emergence in simple models of computation. Of course the idea is then to establish relations between what emerges and familiar physical phenomena, as I suggested, for example, with the entanglement-like effect in Figure 3 of my essay.

At the moment, photons and null cones appear to be still at large, in my causets.


Dear Tommaso Bolognesi,

"There exists a tiniest scale at which the fabric of spacetime appears as a pomegranate (Figure 1), made of indivisible atoms, or seeds. This view is reflected in models such as Penrose's spin networks and foams, and is adopted in theories such as Loop Quantum Gravity [14] and in the so called Causal Set Programme [6, 13]...."

Thank you for pointing out that this is a view.

"At that level, a universal computation keeps running. We do not know yet the program code, but, in accordance with a fundamental principle of minimality ('Occam razor'), we like to believe that it is small, at least initially."

And it grows by what means?

"Perhaps it is a self-modifying program: code and manipulated data might coincide. ..."

Self? In other words by mechanical magic?

"...This does not mean that we have to postulate the existence of a divine digital Computer that sits in some outer space and executes that code, for the same reason that, under a continuous mathematics viewpoint, we do not need a transcendental analog Computer that runs the Einstein field equations for animating spacetime. Computations may exist even without computers (and, incidentally, the concept of computation is much older than computer technology)."

Of course computations may exist without computers. Your remark about "outer space" could be applied equally well to extra dimensions and other universes.

I have printed off your essay and am going to read it; but I must admit that your beginning appears to be ideological rather than scientific. If I am incorrect, I will learn that by reading your essay and will return to apologize.

James


Dear James Putnam,

your post triggers a number of reactions, probably (and unfortunately) more than I can put in a post here.

The first general point I need to clarify is that, in an attempt to be concise in writing the essay, I adopted a style, especially in the opening that you quote, which may indeed sound more 'assertive' than that of a standard scientific paper. On the other...


Dear James,

you quote my essay:

"Perhaps it is a self-modifying program: code and manipulated data might coincide. ..."

and ask:

Self? In other words by mechanical magic?

The funny thing is that I had removed this line from the quasi-final version of the essay, because I did not have enough space for expanding on it. But then I put it back, being ready to accept questions or criticism. Thank you for giving me an opportunity to explain.

The idea of a self-modifying program is, again, an attempt to satisfy the requirement of minimality. While I support the computational universe idea, what I find a bit annoying is the separation between (i) a (fixed) program P and (ii) the data D that it manipulates. Under this view, D represents our Reality, while P would be the rule that governs it, without enjoying itself the status of a Real entity. They are two things, and one of them is even 'unreal'. Two is bigger than one. If the program operates on itself, we would have only one thing: P = D = Reality. That would be more elegant. I believe that the Mathematical Universe idea by Max Tegmark also achieves this unity: there's only one thing, namely a mathematical structure.

By the way, the concept of a self-modifying program is quite familiar in computer science, e.g. in logic programming (in the Prolog language etc.). Furthermore, self-reference is a recurrent concept when dealing with formal systems (Goedel's theorem), computation (universal Turing machines), not to mention consciousness. I would not be surprised at all if it played a crucial role in an ultimate, computational theory of physics.
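As a purely illustrative toy (my own construction, not one of the experiments mentioned in the essay), the P = D idea can be sketched in a few lines: the 'program' is a list of rules, and each step applies the current rule to the rule list itself, so the thing being rewritten and the thing doing the rewriting coincide:

```python
# Toy sketch of P = D: the state IS the rule list, and each rule, when
# executed, rewrites that same list. The two ops here are invented for
# illustration only.

def step(rules, i):
    """Apply rule i to the rule list; return the new list and next index."""
    op, arg = rules[i % len(rules)]
    rules = list(rules)
    if op == "dup":      # duplicate the rule at position arg
        rules.append(rules[arg % len(rules)])
    elif op == "flip":   # swap two rules, altering future behaviour
        a = arg % len(rules)
        rules[0], rules[a] = rules[a], rules[0]
    return rules, i + 1

rules = [("dup", 0), ("flip", 1)]
i = 0
for _ in range(4):
    rules, i = step(rules, i)
print(len(rules))  # the two 'dup' steps grew the list from 2 to 4 rules
```

Nothing physical is claimed here; it only makes concrete the point that "code" and "manipulated data" need not be distinct objects.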

If you claim that there is something magic in a program that modifies itself (= Reality), then I'd expect you to claim the same for a program that modifies 'external' data (= Reality). In my opinion, there is no more magic in a program that runs our Reality, than in a set of differential equations that does essentially the same thing.

Cheers. Tommaso

PS

So far I've done only a few experiments on self-modifying Turing machines, without exciting results. In all the experiments mentioned in the essay, data and program are separated, and the latter is fixed.


Dear Tommaso,

Perhaps it would help to bring up the contribution of Alan Turing and his concept of universality, which unifies both data and programs. While one can think of a machine input as data, and a machine as a program, each as separate entities, Turing proved that there is a general class of machines of the same type (defined in the same terms) capable of accepting descriptions of any other machine and simulating their evolution for any input, hence taking program+data as data, and unifying both.

This is why one can investigate the 'computational universe' today either by following an enumeration of Turing machines, or by using one (universal Turing) machine running an enumeration of programs as data inputs. Both approaches are exactly the same thanks to Turing's universality.
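Turing's unification is easy to demonstrate concretely. The sketch below (an illustrative toy of my own, not from either essay) is one fixed routine that accepts a transition table, i.e. a machine description, as ordinary data and simulates it on an input tape; the example machine, a unary incrementer, is likewise hypothetical:

```python
def run(table, tape, state="A", steps=100):
    """Simulate a one-tape Turing machine whose description is plain data:
    a dict (state, symbol) -> (write, move, next_state); state 'H' halts."""
    cells = dict(enumerate(tape))  # sparse tape; '_' is the blank symbol
    pos = 0
    for _ in range(steps):
        if state == "H":
            break
        sym = cells.get(pos, "_")
        write, move, state = table[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine handed to run() as data: scan right over 1s, append a 1, halt.
incrementer = {
    ("A", "1"): ("1", "R", "A"),
    ("A", "_"): ("1", "R", "H"),
}
print(run(incrementer, "111"))  # -> 1111
```

The point is that `run` never changes: swapping in a different table gives a different machine, so "program" is just another input.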

Best.

- Hector Zenil


Dear Tommaso Bolognesi,

I have printed your response. I am impressed. Your response is not in agreement with me; but that is a minor point. Your response was directed at my questions and even referred to my own essay. I appreciate your time and effort in putting your response together. I will follow the leads you referenced. I will respond when I put something together worth your time.

James


Tommaso,

Thanks for a fascinating and extremely well constructed essay. Since Wolfram is scheduled to speak at ICCS in Boston this summer, I think it might be interesting to see how your multi-level hierarchy compares to Bar-Yam's multiscale variety -- hierarchies of emergence vs. lateral distribution of information.

Interesting conceptual equation, "spacetime geometry = order + number." Suppose one were to make another equation: "order = organization + feedback." Then one would get -- substituting terms in my equation for yours -- the theme of my ICCS 2006 paper ("self-organization in real and complex analysis") that begs self-organization of the field of complex numbers, z, in the closed algebra of C.

One more comment (though I could go on; your paper is rich in quotable points), concerning global and local (4.1) time-dependent relations among point particles. Research in communication network dynamics (e.g., Braha--Bar-Yam 2006, Complexity vol 12) shows often radical shifts in hub to node connectivity on short time intervals while time in the aggregate shows that the system changes very little. Taking point particles as network nodes, perhaps something the same or similar is happening.

Good luck in the contest. (I also have an entry.) I expect that you will rank deservedly high.

All best,

Tom


Dear Tom,

thanks for the positive comments. Following your links I reached Robert Laughlin's 2005 book 'A Different Universe: Reinventing Physics from the Bottom Down', in which emergence is given an important role in theoretical physics. Good to hear; another book on the pile!

In the equation 'spacetime geometry = order plus number', introduced by people in the Causal Set programme, 'number' simply refers to counting the number of events in a region of the causal set, which is then equated to the volume of that region. And 'order' is the partial order among events. You mention self-organization in the context of the field of complex numbers, and this does not seem much related to 'number' in the above sense (if this is what you meant to suggest). But of course I am curious about everything that has to do with self-organization.
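The 'number' half of that equation is literally a count, which a few lines of code can make concrete. In this sketch (a toy diamond-shaped causet and function names of my own, for illustration only), the volume assigned to the causal interval between two events is just the number of events lying between them in the partial order:

```python
def causal_future(causet, a):
    """All events reachable from a along directed causal links."""
    seen, stack = set(), [a]
    while stack:
        x = stack.pop()
        for y in causet[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def interval_volume(causet, a, b):
    """Count of events strictly between a and b: the 'number' that plays
    the role of spacetime volume for the interval."""
    future_a = causal_future(causet, a)
    past_b = {x for x in causet if b in causal_future(causet, x)}
    return len(future_a & past_b)

# A diamond: 0 precedes 1 and 2, both of which precede 3.
diamond = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(interval_volume(diamond, 0, 3))  # events 1 and 2 lie in between -> 2
```

(Conventions differ on whether the endpoints are included in the count; the toy above excludes them.)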

Usually a self-organizing system is conceived as a multitude of simple active entities. Does this happen in your ICCS 2006 paper?

One peculiarity of the 'ant-based' (or Turing-machine-like) approach discussed in my essay is that you actually have only ONE active entity -- the 'ant' -- and expect everything else to emerge, including the multitude of interacting particles or entities that one normally places at the bottom of the hierarchy of emergence.

Ciao. Tommaso


Ciao Tommaso,

Yes, I do mean to suggest that the non-ordered set, z (the universal set of complex numbers) is organized to allow -- not a partial order of events -- but a well-ordered sequence in the specified domain of topology and scale, with analytic continuation over n-dimension manifolds. It is nontrivial that this is accomplished without appeal to Zorn's lemma (axiom of choice). And time is given a specifically physical definition. I followed up at ICCS 2007 with a nonmathematical paper ("Time, change and self organization") that incorporated and expanded on some of these results.

You pick up right away the difference between the hierarchical distribution of information and multiscale variety. I am thinking that your "multitude of entities" may be dual to the "ant" analogy, because with activities occurring at varying rates at different scales, new hierarchies may form and feed back into the system dynamics.

You know, Boston is very beautiful in the summer. :-)

All best,

Tom

Dear Tommaso,

Thank you for your essay. You write a lot about emergence and computation, chaos, self-organization and automata: all elements I have touched on in my essay, because they are closely connected to the evolution of the spacetime concept.

E.g. you write: “Computations may exist even without computers”. It seems to have something in common with Computational LQG by Paola Zizzi. In my essay I have even quoted Paola.

My own view is that the universe is a dissipative coupled system that exhibits self-organized criticality. Such criticality is a property of complex systems in which small events may trigger larger ones. This is a kind of chaos where the general behavior of the system can be modeled on one scale while smaller- and larger-scale behaviors remain unpredictable. A simple example of this phenomenon is a pile of sand.
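
The sand pile is easy to play with in code. Here is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile (my own toy illustration, with arbitrary grid size and drop count): grains are dropped at random; any cell holding 4 or more grains topples, giving one grain to each neighbour; small avalanches are frequent while large ones are rare.

```python
import random

def drop(grid, n, x, y):
    """Drop one grain at (x, y) and topple until stable.
    Returns the avalanche size (number of topplings)."""
    grid[y][x] += 1
    size = 0
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if grid[cy][cx] < 4:
            continue
        grid[cy][cx] -= 4              # topple: shed four grains
        size += 1
        if grid[cy][cx] >= 4:          # still unstable: topple again later
            stack.append((cx, cy))
        for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
            if 0 <= nx < n and 0 <= ny < n:  # grains at the edge fall off
                grid[ny][nx] += 1
                if grid[ny][nx] >= 4:
                    stack.append((nx, ny))
    return size

random.seed(0)
n = 20
grid = [[0] * n for _ in range(n)]
sizes = [drop(grid, n, random.randrange(n), random.randrange(n))
         for _ in range(20000)]
print(max(sizes))   # a few large avalanches among many tiny ones
```

After the transient, the pile organizes itself into the critical state with no tuning: the heavy-tailed avalanche statistics are exactly the signature of self-organized criticality described above.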

While QM and GR are computable and deterministic, the universe's evolution (a naturally evolving self-organized critical system) is non-computable and non-deterministic. This does not mean that computability and determinism are related: Roger Penrose argues that computability and determinism are different things.

Let me try to summarize: the actual universe is computable at the Lyapunov time, so it is digital, but its evolution is non-computable, so it remains at the same time analog (the Lyapunov time is the length of time it takes for a dynamical system to become chaotic).
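
For concreteness, the Lyapunov time is easy to estimate numerically. A sketch (my own illustration, using the logistic map at r = 4 as a stand-in chaotic system): the Lyapunov exponent is the orbit average of log|f'(x)|, and the Lyapunov time is its reciprocal.

```python
import math

# Logistic map f(x) = r*x*(1-x) at r = 4, the fully chaotic regime;
# the exact Lyapunov exponent there is known to be ln 2.
r = 4.0
x = 0.3
steps = 100000
total = 0.0
for _ in range(steps):
    total += math.log(abs(r * (1 - 2 * x)))  # log |f'(x)| along the orbit
    x = r * x * (1 - x)
lam = total / steps          # Lyapunov exponent, nats per iteration
lyapunov_time = 1.0 / lam    # iterations for a small error to grow by a factor e
print(lam, lyapunov_time)    # lam should come out close to ln 2 ~ 0.693
```

Within the Lyapunov time the trajectory is effectively predictable from the initial data; beyond it, prediction requires exponentially finer knowledge of the initial condition, which is the divide the summary above appeals to.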

Your work seems to be an attempt to develop a computable model of the universe at the Lyapunov time. Good luck!

Jacek

Hi again Tommaso,

Just to let you know I dropped you a comment on Feb. 18, 2011 @ 21:07 GMT concerning the data vs. code question, just in case you hadn't seen it.

Best.

Dear Tommaso,

Welcome to the essay contest. This essay contradicts quantum mechanics: how does your digital/computational universe conjecture manage the Heisenberg uncertainty? For this purpose your digital computer must know definite, absolute information about the position and momentum of every particle. Moreover, this ''computer'' must know all quantum information with absolute...

The previous post is mine, by Constantin Leshan. The login did not hold.

Sincerely,

Constantin Leshan

Dear Constantin,

would it be wise to say that Quantum Field Theory is wrong because it does not predict the existence of unicellular organisms?

This is not meant to be provocative, but only to express what I believe is the 'delicate' status of any conjectured theory of everything (QFT not even pretending to be one). Any such conjecture should maximize the number of explained...

SECOND PART of my answer.

(The ultimate bottom)

You find a contradiction between placing indivisible atoms of spacetime at the bottom of reality, and the need for a digital computer that runs the evolution of this collection of atoms. You seem annoyed by the fact that such a digital computer would represent a 'deeper background structure' beneath the level of these indivisible...

Hi Tommaso

My rating is done and you got a good grade. A very well written essay. I agree with the essence of it, as I hope you can verify in my essay, even though my writing style is quite different.

Now having said that, I would say:

I agree that our universe is made from some simple basic cellular automata and that most things are emergent phenomena.

I don't agree with identifying those automata with space-time and seeing particles and everything else emerge from there.

My position is quite the opposite. I identify the basic automata with particles and see space and time as derived from their interaction. Unfortunately I haven't done concrete definitions and experimentation with my approach.

I feel my approach may have the problem of requiring more complex automata, but it might be easier to codify relativity there.

Could you comment?

Regards

Juan Enrique Ramos Beraud

Hi Juan Enrique,

Tommaso Bolognesi's essay contradicts quantum mechanics: how does the digital/computational universe conjecture manage the motion of a particle and the Heisenberg uncertainty? For this purpose this digital computer must know definite, absolute information about the position and momentum of every particle. Moreover, this ''computer'' must know all quantum information with...

Constantin (and Juan Enrique), I have given a first answer to your objections up in the blog thread where you raised them first. The rest of my replies will hopefully come tomorrow; look for them by scrolling up to that same place. Thanks. Tommaso.

Dear Tommaso,

My remark concerns your as well as other purely 'computational approaches' to physics.

What troubles me about them is that they fail to address the nature of physical reality: as I discussed elsewhere, theory of computability came out of logic and has never been concerned with this question.

At the same time, physics is the central natural science, and if it does not address the above question, as it has tried to do so far, we are left with no science addressing it.

Dear Tommaso:

I think my previous question - or the answer to it - got lost among the answers from and for Constantin.

I think your essay is great, and your proposal of identifying space-time "atoms" with a cellular automaton, yet to be debugged and understood, is plausible. From experiments - or simulations - like the ones presented in "A New Kind of Science" and in your essay, we see "particles" and all sorts of things emerge. We could even find quantum mechanics emerging from there.

As you say, the automata are not yet "seen" and not debugged.

I think - as I propose in my essay - it might also be plausible to search for the basic automata in the particles or sub-particles instead of in space-time. I identify the basic automata with particles and see space and time as derived from the interaction. Unfortunately I haven't done concrete definitions and experimentation.

I feel my approach may have the problem of having more complex automata but might be easier to codify relativity and quantum mechanics in there.

Now, again, could you comment on the different approaches? I do believe in an algorithmic universe, as many others - like Hector Zenil - do.

Regards

Dear Juan Enrique Ramos Beraud,

You support Tommaso's essay because you have the same kind of essay, ''the universe is a computer'', with similar statements and errors. If you want, I can review your essay and show you a lot of flaws in it.

Sincerely,

Constantin

Constantin:

If you find my essay wrong, its own thread is the place to comment on it, and yes, I would love the criticism.

Tommaso:

I would like some comments anyway.

Yours.

Juan Enrique Ramos Beraud

Hi Juan Enrique,

first let me clarify once more that the computational model I regard as most promising for deriving causal sets (that is, instances of spacetime) is NOT cellular automata, but network mobile automata, a sort of Turing machine acting on graphs by applying graph rewrite rules. In the causal sets derived from the computations of this model, MANY 'particles' may emerge as a result of the operation of ONE single, state-less control head, as it happens with Turmites. With cellular automata you may also obtain many particles, but you have to assume the synchronous action of MANY cells (many active elements). One active element is cheaper than many.
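
To give a flavour of the single-head idea in code (an invented toy dynamics, NOT the actual rewrite rules of my model): a state-less head walks a small ring graph; every visit is an event, causally preceded by the head's previous event and by the last event at the visited node, so a causal set grows out of the activity of ONE control head.

```python
def run(n_nodes, steps):
    """Toy single-head automaton: returns a causal set as a list where
    parents[e] lists the events that directly precede event e."""
    # A ring graph: each node is linked to its two neighbours.
    ring = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
    last_at_node = {}          # most recent event id at each node
    parents = []               # the growing causal set
    head, prev_event = 0, None
    for e in range(steps):
        preds = []
        if prev_event is not None:
            preds.append(prev_event)           # the head's own history
        if head in last_at_node and last_at_node[head] != prev_event:
            preds.append(last_at_node[head])   # the node's local history
        parents.append(preds)
        last_at_node[head] = e
        prev_event = e
        head = ring[head][e % 2]   # deterministic zig-zag walk on the graph
    return parents

causet = run(5, 12)
print(causet)
```

Even in this caricature, the partial order is generated entirely by one sequential head; the 'many entities' one might read off the causal set (chains of events localized at individual nodes) are bookkeeping shadows of that single activity.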

It seems that the ingredients you require for cooking your universe are MANY particles, modeled as some sort of relatively complex automata. It would be interesting to run some simulation of your system, to check what might possibly emerge. With luck, this might give some anticipation of what could happen when starting from more minimalistic assumptions.

But, in order to interact, your automata probably need a background in which to move. That's additional work to be carried out, and another element that adds 'weight' to the model...

Cheers

Tommaso

Dear Tommaso,

Your answers are unconvincing and wrong. Moreover, I suspect that you are trying to suppress my questions with a stream of senseless information. It is impossible to find the answers to these questions because this theory is fundamentally wrong.

Let us begin again with quantum mechanics. Your essay states that all the complexity we observe, from subatomic particles to the biosphere, is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale. How can this digital computation explain the simple motion of a free particle, or the Heisenberg uncertainty? For this purpose the digital computation must know absolute position-momentum information about every particle before events occur, which is forbidden by quantum mechanics.

And I don't see any answer to this problem; your words about quantum effects, Stephen Wolfram and Renate Loll explain nothing; it is a stream of senseless information. You cannot explain it, by definition; it is a fundamental flaw in this theory.

The next flaw concerns black holes: I found at least one place where digital computation cannot exist, which is a proof that this theory is wrong.

And your answer is senseless: ''I completely agree that your PC (or even my Mac!) would start having computing problems a while after crossing a black hole horizon. But, again, we are not talking about hardware, we are abstractly talking about computation''. Do you think your spatial atoms will be able to process information inside a black hole, in the singularity? Inside a black hole the exchange of information is not possible, therefore digital computation cannot work. Since I found at least one phenomenon/place that exists without need of digital computation, this is a proof that the theory is wrong.

Another argument is also not valid: ''By the way, a very good 1999 paper by Margenstern and Morita proves that, in the context of cellular automata, spatial (negative) curvature offers indeed a great computational advantage over flat space''. They refer to ordinary curvature, not infinite curvature, a singularity.

Thus, this theory is fundamentally wrong.

Regards,

Constantin

Constantin,

your last post is basically a cut and paste of your original message: I could probably cut and paste my original answer again here (but I won't). Apparently, none of my arguments has succeeded in convincing you that there are many good reasons for investigating the computational universe conjecture (not 'theory'), in spite of the many problems that are still open. Never mind. I still see the glass half-full, while you see it half-empty...

Tommaso

PS - When you cross a black-hole horizon, nothing special happens to you; hitting the singularity at its center is another story. But in a discrete model of spacetime there is no room for infinities, and we talk, for example, of huge, but still finite curvature. A computation may well produce (or take place on) a graph with huge curvature!

Yes, my last post is basically a cut and paste of my original message, because I have not received any rational answer. Your ''answer'' explains nothing; it is a stream of senseless information. I'm afraid it is impossible to find an answer because there is a fundamental flaw in this theory.

''I could probably cut and paste my original answer again here.'' It makes no sense to cut and paste it, since it is senseless information. Your ''original answer'' cannot address my questions and is therefore senseless.

''But in a discrete model of spacetime there is no room for infinities, and we talk, for example, of huge, but still finite curvature''.

Please read Wikipedia on the black hole: ''At the center of a black hole lies a gravitational singularity, a region where the spacetime curvature becomes infinite.'' Inside a black hole the exchange of information is not possible, consequently no computation is possible. Since I found at least one phenomenon that exists without need of the computation conjecture, it is a proof that this theory/essay is wrong.

Also, this theory is forbidden by quantum mechanics and the Heisenberg uncertainty. The computational conjecture is not able to explain the motion of a simple particle or the Heisenberg uncertainty. To process the motion of a particle, your computational conjecture must know the complete information about position and momentum before events occur. I have also found other errors in this theory.

''there are many good reasons for investigating the computational universe conjecture''

We need true, powerful science; if we support erroneous theories, our civilization may die. There are revolutionary theories supported by nobody because all the money is absorbed by false theories. It is a crime against humanity and science to support false theories.

Constantin

Dear Tommaso

After recovering from my cataract operations I re-read your essay, and enjoyed the lovely photo of the pomegranate and the beautiful causal set plots. I also read the paper by Reid that you referred to. I still do not understand several aspects of causal sets. With my limited understanding of the technicalities involved, I will try to express my reaction to the concept as applied to physics: you started by discussing automata, and I could follow the logic of causality between nodes following a simple algorithm, as in Wolfram's NKS. I have no access to the printed references of your other papers and could not understand the Turmite simulations.

Generally I think complications occur when applying the automata concept to physics. GR and Quantum mechanics are accepted as they are now formulated, and the simplicity of a node structure has to be abandoned to accommodate their physically unrealistic and mutually incongruous methodologies. The resulting causal sets are bloated beyond necessity. In my earlier 2005 Beautiful Universe theory on which my present fqxi paper is based, both GR and QM have to be reverse-engineered and some complications discarded before my simple dielectric node-interactions are applied. Hope this makes some sort of sense!

Best wishes from Vladimir

"At that level, a universal computation keeps running. We do not know yet the program code, but, in accordance with a fundamental principle of minimality ('Occam razor'), we like to believe that it is small, at least initially."

Tommaso,

Do you think we will ever know the "program code"? You provide a fetching argument, but I tend to believe that reality is unknowable, though my argument isn't as definitive as yours.

Jim Hoover

Hi James,

I am certainly optimistic about the possibility for science to understand more and more about nature, but I can imagine at least one way in which this process will never come to a conclusion. The upper end of the hierarchy of natural emergence is a moving target that science cannot anticipate, but only monitor. I believe that science will never be able to predict the major evolutionary steps in the history of the universe, or the next layer of emergence (a simple retrospective example of such a step would be the appearance of life as we know it today). The reason is that simulating this evolution would take at least as much time as nature takes to unfold it for real. There is no computational shortcut. In this respect, Wolfram had the right intuition with his concept of 'computational irreducibility'.

Nevertheless, I expect a number of nice advances to happen as we try to figure out the 'program code' of nature. To me, one of the most desirable achievements is as follows. We should be able to find a simple program in which the localized entities that emerge are not only capable of Turing-universal interactions (this has been done), but also manifest some ability to modify their own behavior, to compete, and to evolve, giving rise to a sort of Darwinian ecology. I am fully convinced that the mechanisms of natural selection and evolution should play a role also at the level of physics, not only of biology. Perhaps a first indication of this trend would be the emergence of a population of entities that act as sequential (in the sense of stateful), as opposed to combinatorial (stateless), devices. Note that this whole system should be fully supported by the operation of ONE control head only. And we should not explicitly program the system to behave like that -- it should all emerge for free. This is what I believe is possible, and has NOT been done yet!

As I suggest at the beginning of my essay, it would also be great if the rule of operation of this little automaton were not fixed a priori, but evolved itself...

Dear Tommaso

I read your essay with great interest. I think the possibility of modelling reality on a digital basis is a very interesting point, but the potential does not reside in the discreteness but in the emergence. I tried to explain this in my essay from a different perspective that reveals the importance, or true meaning, of the digital approach; I would like to hear your opinions about it.

Regards,

J. Benavides

report post as inappropriate

Hello,

I liked this essay. I think it is more appropriate for this contest, although I think it considers a very restrictive view of the problem. Unlike the other three essays of high popularity, which I basically believe should not have been accepted at all, this essay offers a novel perspective, although a too "ontological" one.

report post as inappropriate

Peter, I think your comment is mean and uninformed. You should be reminded to be constructive. Remember that the essays are being rated by the community too, so if you think they don't even deserve to be accepted you are also disqualifying the rest of the participants.

report post as inappropriate

Thank you Peter.

I guess that any unifying theory of everything, and, in particular, one based on emergence, will be somehow 'restrictive' by definition: it will consist of a simple, completely abstract-looking, computational rule, and all the rest should emerge from there. Proving that the right physics eventually emerges will require a lot of additional brain and computer work, but that would not be, strictly speaking, part of the fundamental theory. Anyway, I suspect this is not what you meant by 'restrictive' -- or was it?

As for the 'ontological' flavor, in fact I am doing a lot of concrete things in my daily research on this topic, as reported in some of the references. Mainly, I am designing and implementing algorithms, turning computations into causal sets (spacetime candidates), monitoring their behaviors by devising appropriate complexity indicators, and so on. Several essays in this contest seem to follow much more philosophical paths.

Dear Tommaso,

Your essay offers an impressive look at the idea of emergence from CA and the connection with quantum gravity. Fascinating and well-organized!

Best wishes,

Paul

report post as inappropriate

Thanks a lot for considering my essay well organized. At this point, however, I have clearly identified a presentation bug: I should NOT have included Figure 2 ('Emergent structures in Wolfram's elementary cellular automaton 110'), since it apparently leads readers to erroneously believe that my work focuses on CAs. Most of my experimental results (as discussed in some of the references) refer to automata with a single control head operating on a graph; these are much more similar to Turing machines than to CAs, and do not require global synchronization.

OK, perhaps I should not have said 'CAs' but rather 'automata' or 'digital computation'. All in the spirit of Turing and von Neumann, at any rate!

Best wishes,

Paul

report post as inappropriate

Paul,

sure you are right, that's the general spirit of the approach, which is in itself already quite controversial.

But let me take this opportunity to point out that, within this general approach, several models are available that, in my opinion, should not be considered equivalent, and not only because of the mentioned issue of global synchronization. I am referring to what Ed Fredkin calls 'the tyranny of computational universality': many models of computation are universal, that is, they can simulate a universal Turing machine and perform any algorithm. So, why bother choosing one in particular for the foundations of physics?

Well, when model B tries to simulate a computation of model A by using its own mode of operation, it usually needs to perform additional 'spurious' steps, which have to be filtered away in order to retain just the original steps (not to mention the fact that the original input has to be coded before being fed into the simulator). Fredkin suggests that there should be a one-to-one correspondence between the 'states and function' of the model and those observed in the physical universe: so the choice of a specific (universal) model is indeed relevant, because we would select one, or the one, whose features have a clear physical counterpart, and vice versa.

In my work I have adopted this nice, economical idea of Fredkin's, specializing it to a one-to-one correspondence between the events of physical spacetime and those of a causal set derived from a formal computation. And, again, the choice among different causal sets from different models is far from irrelevant: for example, some causal sets end up being totally ordered, or admit nodes with unbounded degree, while others don't. These properties clearly have an impact on the emergent physics.
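
The two properties mentioned (total order and node degree) are mechanically checkable once a finite causal set is written down as precedence relations. A toy sketch, with two small hypothetical causal sets given as parent lists (not taken from any specific model in the references):

```python
# Two structural properties of a finite causal set that can differ
# between models of computation: total order, and maximum node degree.

def is_totally_ordered(parents):
    """True iff every pair of events is causally related (a chain)."""
    # ancestors of each event, by iterated transitive closure
    anc = {e: set() for e in parents}
    changed = True
    while changed:
        changed = False
        for e, ps in parents.items():
            new = set(ps)
            for p in ps:
                new |= anc[p]
            if new != anc[e]:
                anc[e] = new
                changed = True
    events = list(parents)
    return all(b in anc[a] or a in anc[b]
               for i, a in enumerate(events) for b in events[i + 1:])

def max_degree(parents):
    """Largest number of direct causal links (in + out) at any event."""
    deg = {e: len(ps) for e, ps in parents.items()}
    for ps in parents.values():
        for p in ps:
            deg[p] += 1
    return max(deg.values())

chain = {0: [], 1: [0], 2: [1]}             # sequential computation
fork = {0: [], 1: [0], 2: [0], 3: [1, 2]}   # concurrent branches
print(is_totally_ordered(chain), max_degree(chain))  # True 2
print(is_totally_ordered(fork), max_degree(fork))    # False 2
```

A totally ordered causal set is what a strictly sequential computation produces; concurrency immediately breaks the total order, which is one way the choice of model leaves a structural fingerprint.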

Regards

Tommaso

Dear Tommasso

You do not pay attention to my essay

http://www.fqxi.org/community/forum/topic/946

report post as inappropriate

Tommasso

Very nicely written and argued essay, though I note no falsifiability. There are however possible distant parallels with my own, adding one more dynamic dimension, although I believe I show quite conclusively and falsifiably that Lorentz cannot and will not emerge from your pomegranate. I believe this also becomes intuitive. I do however see your essay as more 'on subject' than some.

I know we are close, and wish to say I think our concepts are closer than initially appeared to be the case. I go, and cannot go, beyond the logic and conception; that is my domain, but I hope you also see that both aspects are important. To progress, the disparate parts of mankind need to work in complementarity, not just in competition.

I have just added a logical explanation in my string of where it appears our predecessors got lost over 100 years ago. It is important this is studied and analysed and I hope you will offer comment. If correct it would be nice if we could see it in use before 2020! I have offered other thought experiments in my and other strings, and could do more. If you have a test for the model do ask.

Well written, and very best of luck

Peter

report post as inappropriate

Tommaso

A very interesting essay; definitely one of the more worthy approaches amongst the submissions I have had the time to look through so far.

It has also been very interesting to see in recent times (e.g. from your paper, the discussion here and other related research) that concrete examples of work concerned with such discrete-worldview topics are now being pursued!

I have spent many years pondering such topics, but have not yet properly delved into fully quantitative modelling; so I will have to look into some of the work you mention or reference here.

From your paper (and this discussion thread) I get the impression we probably have fairly similar views concerning some of the likely underpinnings of macroscopic reality:

e.g. the existence of some level of discrete substrate underlying space-time; several levels of emergent order between that substrate’s operation (whatever form it is eventually found to have!) and the world we and our instruments actually observe; and some general organising principle possibly accounting for the generation of those differing levels of order/complexity (perhaps an evolutionary/Darwinian-selection type process of some sort??).

I only heard about the contest recently, so was regrettably unable to enter an essay of my own, but if I had been able to enter, my contribution probably would have touched on some of the overall topics your essay addresses.

So, good luck with the contest.

Regards,

David C.

report post as inappropriate

Hi Tommaso,

I like your essay, well written and illustrated, and very convincing while still open to other points of view. My own work is in the way of debugging it. But I prefer to use the word hacking. I present a trivalent graph model where GR emerges through the tetrad gravitational field as in LQG, and where the SM emerges from internal structure as E8 roots from a double D4 lattice. Surprisingly, the emergent structure in my figure 10a (a 48-valent supernode as a trivalent graph) is very similar to the graph in figure 23 of your paper in Complex Systems, which I am referencing, but I hadn't noticed this before. So your c(4, {16, 4}, 199) trinet may represent a fundamental cell of the universe before crystallization and polarization. After this phase transition to a hyperdiamond universe, spacetime would be born, instead of needing a big bang (just a big freeze). But this universe, if perfectly crystallographic, would be perfectly void. And some minimal imperfections, yes, bugs, are nothing else than all the matter, energy and forces we have in our universe. I prefer not to remove all the bugs...

Best regards,

Ray

report post as inappropriate

Hi Ray,

I've checked the similarity between our two figures. Honestly I am not too surprised by this, since we are essentially talking about a binary tree, possibly with linked leaves. That graph popped up many times in my experiments, and it is far too regular to be of any interest.

I agree with you that 'bugs' make the universe interesting, and, in spite of my essay title, I can guarantee that, if we ever find that code, I will not be the one who starts debugging it :-)

Tommaso

Dear Sir,

You begin with a postulate “There exists a tiniest scale at which the fabric of space-time appears as a pomegranate, made of indivisible atoms, or seeds. This view is reflected in models such as Penrose's spin networks and foams, and is adopted in theories such as Loop Quantum Gravity and in the so called Causal Set Program”. In physical terms what does this statement mean?...

view entire post

report post as inappropriate

Sir,

In the above thread, we had spoken of gravity as a different type of force from other fundamental forces of Nature. Here is a brief discussion on that.

Before we discuss whether the force we were referring to was gravity, we would like to discuss something about force itself. A force is experienced only in a field (we call it rayi). Thus, it is a conjugate of the field. If...

view entire post

report post as inappropriate

Dear Sir,

you have several comments on my work, and several pieces of information on your own approach. In particular you write:

'Your entire essay exhibits your beliefs and suppositions that are far from scientific descriptions. This is one of the root causes of the malaise that is endemic in scientific circles. Thus, theoretical physics is stagnating for near about a century while experimental physics is achieving marvelous results'.

I claim that the approach described in my essay has a rather strong experimental component, but, so far, the type of phenomena that emerge from the investigated models of computation (and associated causal sets) do not yet lend themselves to detailed *quantitative* comparisons with their potential 'real' counterparts, also due to the fact that these experiments probe layers of reality that sit far below what standard experimental physics can directly observe. But the *qualitative* things that can happen in the discussed simulations are already quite remarkable and promising. For example, I show that algorithmic causal sets can account for the emergence of particle-like structures, whose existence may depend on whether one takes a local or a global view of the causal set. And I have shown that these causal sets may organize themselves so that partly separated causal regions start to appear. These and other circumstances (e.g., the bizarre behaviour shown in Figure 3) strongly suggest -- at least to me! -- that it would be unwise to dismiss the digital/computational universe conjecture without having explored its potential VERY extensively.

Dear Sir,

Your reply shows the inadequacy of the present experimental approach. But it did not negate our contentions.

Will you please clearly say what we have written is wrong? If so, where it is wrong, what is wrong and how it is wrong.

We had shown specific examples where your views differ from ours. Both of us cannot be correct. Hence kindly prove us wrong to save your view.

Regards,

basudeba.

report post as inappropriate

Dear Tommasso

A causal set of space-time is the best idea yet, but a better idea is a causal set of particles, where particles exist in a background-independent setting. The fundamental entities that make up the particles will then generate all known properties like mass, charge, spin, the electromagnetic force and gravity. Space and time can be looked upon as derived quantities. All in one shot. This is my theory, "Quantum Statistical Automata"; I derive it from one simple idea, a postulate: "reality is nothing but math, literally".

That led me to design the universe from scratch. How can you design a dynamic universe? A one-axis design makes all that clear. The simplest possibility is to throw random-length lines on the big universe line. These lines (fundamentally, each is a number) are interpreted as energy (1/L), and they interact, upon subtracting these energies, so that when they cross you get EM, and when they meet you get gravity. This will give rise to a universe like ours.

Try any other scheme using any other fundamental entity you like, and you will either arrive at an equivalent result or at nonsensical ones (typically highly contrived, complex entities). So our reality had no choice, just like you if you try to design one USING A FUNDAMENTAL ENTITY.

qsa

report post as inappropriate

Dear Tommaso,

Congratulations on your dedication to the competition and your much deserved top ten placing. I have a bugging question for you, which I've also posed to all the top front runners btw:

Q: Coulomb's Law of electrostatics was modelled by Maxwell by mechanical means after his mathematical deductions, as an added verification (thanks for that bit of info, Edwin), which I highly admire. To me, this gives his equation some substance. I have a problem with the laws of gravity though, especially the mathematical representation that "every object attracts every other object equally in all directions." The 'fabric' of spacetime model of gravity doesn't lend itself to explaining the law of electrostatics. Coulomb's law denotes two types of matter, one 'charged' positive and the opposite type 'charged' negative. An Archimedes screw model for the graviton can explain -both- the gravity law and the electrostatic law, whilst the 'fabric' of spacetime can't. Doesn't this by definition make the helical screw model better than anything else that has been suggested for the mechanism of the gravity force?? Otherwise the unification of all the forces is an impossibility imo. Do you have an opinion on my analysis at all?

Best wishes,

Alan

report post as inappropriate

Dear Sir,

You have raised a very important question. We have discussed it below the essay of Mr. Ian Durham. Here we reproduce it for you.

The latest finding of LHC is that the Universe was created from such a super-fluid and not gases. The confined field also interacts with the Universal field due to difference in density. This in turn modifies the nature of interactions at...

view entire post

report post as inappropriate

Dear Alan,

you write:

'An Archimedes screw model for the graviton can explain -both- the gravity law and the electrostatic law, whilst the 'fabric' of spacetime can't.'

I am afraid I do not see how the screw model can explain inverse square laws.

In a discrete, causal set approach one can count on notions of curvature, so we have at least one basic ingredient for explaining gravity.

As for the other forces, as far as I can honestly tell, everything still needs to be done. So far I have had only one idea for trying to discriminate between gravitational and non-gravitational fields: in the context of causal sets from models of computation based on GRAPH REWRITING, one can think of two different types of emergent, moving localized structures, depending on whether or not one keeps track of the identity of the simplices in the graph being rewritten. In the first case, one has actual pieces of the graph that move around (think of polygons in a dynamic Voronoi diagram, moving like water molecules); in the second case, one has just patterns that move around (say, a peculiar combination of pentagonal and hexagonal faces, in the case of a planar graph), like the gliders in a 2D cellular automaton, which use many different cells of the substratum for implementing their flight. Of course, these two types can happily coexist!
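
The second type of structure (a pattern that borrows the substratum's cells rather than carrying them along) is exactly what a Game-of-Life glider does; a minimal sketch (grid coordinates and step count are arbitrary illustrative choices):

```python
# A glider in Conway's Game of Life: the cells of the grid never move,
# yet the live pattern translates diagonally -- an example of a
# travelling pattern, as opposed to a travelling piece of substratum.
from collections import Counter

def life_step(live):
    """One Life update on an unbounded grid; live is a set of (x, y)."""
    # count live neighbours of every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on 3 neighbours; survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):  # a glider repeats its shape every 4 steps...
    state = life_step(state)
# ...shifted by (1, 1): same pattern, different cells of the substratum
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The first type of structure would instead be tracked by the identity of the graph elements themselves, which a CA grid, with its fixed cells, cannot express.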


Dear Sir,

We had given a different theory for charge interactions by showing that Coulomb's law is wrong. We repeat it again.

According to our theory, gravity is a composite force of seven forces that are generated based on their charge. Thus, they are related to charge interactions. But we do not accept Coulomb's law. We have a different theory for it, which we derive from fundamental principles. In Coulomb’s law, F = k Q1 Q2 / d^2. In a charge-neutral object, either Q1 or Q2 will be zero, reducing the whole equation to zero. This implies that no interaction is possible between a charged object and a charge-neutral object. But this is contrary to experience. Hence the format of Coulomb’s law is wrong.
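For reference, the standard point-charge form of Coulomb's law under discussion can be evaluated directly (a plain numerical sketch; the charge and distance values are arbitrary illustrations):

```python
# Standard Coulomb's law for point charges: F = k * q1 * q2 / d**2.
K = 8.9875e9  # Coulomb constant, N·m²/C²

def coulomb_force(q1, q2, d):
    """Magnitude of the electrostatic force between two point charges
    q1, q2 (in coulombs) separated by distance d (in metres)."""
    return K * q1 * q2 / d**2

print(coulomb_force(1e-6, 1e-6, 0.1))  # two 1 µC charges, 10 cm apart: ≈ 0.9 N
print(coulomb_force(1e-6, 0.0, 0.1))   # zero when one point charge is neutral
```

With either charge set to zero the point-charge formula indeed returns zero force, which is the behaviour the post objects to.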

As we have repeatedly described, atoms can be stable only when they are slightly negatively charged, which makes the force directed towards the nucleus dominate the opposite force, though this is not apparent from outside. Hence we do not experience it. We have theoretically derived the values of the electric charge of protons, neutrons and electrons as +10/11, -1/11 and -1. The negative sign indicates that the net force is directed towards the nucleus. Charge interaction takes place when a particle tries to attain equilibrium by coupling with another particle having a similar charge. That the proton has a +10/11 charge means it is deficient in -1/11 charge. The general principle is that same charges attract. Thus, it interacts with the negative charge of electrons. The resultant hydrogen atom has a net charge of -1/11. Thus, it is highly reactive. This -1/11 charge interacts with that of the neutron to form stable particles. These interactions can be of four types.

Positive + positive = explosive. By this, what we mean is the fusion reaction that leads to the release of huge amounts of energy. Its opposite is also true in the case of fission, but since it is a reduction, there is less energy release.

Positive + negative (total interaction) = internally creative (increased atomic number.) This means that if one proton and one electron are added to the atom, the atomic number goes up.

Positive + negative (partial interaction) = externally creative (becomes an ion.) This means that if one proton or one electron is added to the atom, the atom becomes ionic.

Negative + negative = no reaction. What it actually means is that though there will be no reaction between the two negatively charged particles, they will appear to repel each other, as their nature is confinement. Just as two pots that confine water cannot occupy the same place, if one is placed near the other with some areas overlapping, the two repel each other. This is shown in the “Wheeler’s Aharonov–Bohm experiment”.

We had also commented on many other aspects of your essay.

Can we expect a clarification on the points raised by us?

Regards,

basudeba.


Dear Tommaso Bolognesi,

Having looked into http://pirsa.org/05090001, I am sure your essay is a much more proficient presentation of something I am still not yet a fan of. Do you know the nice book by LaMettrie, "The Man - a Machine", written in French at the time of Laplace? Today he would perhaps write something like "The Man - a Computer Program" or "Spacetime - a Lattice Computing Itself".

I understand that the PI rather than you is responsible for such progress. Should we merely adapt our wording accordingly and speak, instead of the twin, grandfather, Ehrenfest, barn, Andromeda, etc. paradoxes, of twin, grandfather, etc. bugs? No. There are certainly new bugs to be found and removed, e.g.:

- an Adam and Eve bug: For reasons of genetic repair, a causal set must not begin with just one male and one female primordial individual if it aims to mimic or even be reality instead of the Bible.

- Steven Dufourny's bug: There must not be a starting point in space and time at all. While this seems to preclude the application of all so far imaginable methods of programming, this does not matter. With a friendly grin we may declare the programs not yet debugged.

- a missing clock bug: If the program outputs the whole spacetime, including past and future, then it needs an additional clock to mark out the border between past and future here and now.

- a smallest step-width or smallest CA bug: Division by (almost) zero is to be excluded as usual.

Presumably there are many more bugs. Bugs tend to be unseen. Never lose hope.

Regards,

Eckard


Dear Eckard,

I expected your post to terminate with a satanic 'hihihihi' grin. Should I take the fact that it doesn't as an indication that you expect a serious answer from me? I am not sure. In any case, since you wrote that you are not YET a fan of causal sets, I feel encouraged to comment at least on one point.

I am aware of the idea, discussed for example by M. Tegmark, that the program might output the whole spacetime (past and future, so to speak) in one shot, thus raising a question whether we need a clock that points to, and marks the progress of the present inside the whole structure.

My view is different; the universal computation is really unfolding step by step, in the sense that the future is not available until it is actually computed, and this is because, following Wolfram, the computation is irreducible: no shortcut is possible. The only fundamental 'clock', in this picture, is the one that counts the steps at the front of the computation as they happen -- in my favourite setting, the steps of the ant walking on a graph.

Regards

Tommaso


Dear Tommaso,

While I consider the truth not negotiable, I appreciate the realism of you, Stephen Wolfram, and other followers of Dedekind. Please read my essay carefully so as to see that, to me, the present moment here and now is the only fixed point in the whole entity that we call the universe. I do not deny the possibility of pre-calculating partial pictures of the future, with the caveat that we are unable to include all possible influences.

Those who model the world with finite elements, CAs and the like usually do not need extended numbers, in particular imaginary ones, for that. This is one more view we have in common.

As I pointed out in my third essay, analog models were forced to be even closer to reality because they were bound to integrations. I am right now dealing with premetric electrodynamics. While it is appealing to me insofar as it begins with directed quantities and its metric enters as late as possible, I doubt that differential forms are close enough to reality.

As for preferences for a discrete or a continuous world, I see several rather superficial reasons for both, and also a lot of possible mistakes. While to me the difference between analog and merely continuous is more important, I do not yet see discontinuous analog computing. If the world is digital, then analog models should also be digital. Shouldn't they?

Regards,

Eckard


Hi Eckard,

is it too much to ask you to briefly summarize what you mean by 'the difference between analog and merely continuous', without having me try to spot it somewhere in your essay? I personally consider 'analog' and 'continuous' the same thing, at least w.r.t. the theme of this FQXi contest. I am happy with the clear distinction between the fundamental concepts of continuous and discrete, and regard with some suspicion any theory that claims to be fundamental while providing, at the same time, some hybrid mix of these two ingredients. Thanks.

Tommaso


Dear Tommaso,

If something is an analogue of something else, the two things are similar to each other. Each analog computer used the analogy between a real assembly of lumped electric components and a real object with similar properties in order to model that object.

Usually, the modeled object was considered to behave continuously. Therefore analog is often equated with continuous.

While I share your suspicion concerning some mixes of continuous and discrete (e.g. Donatello Dolce's sweet donation of a cyclic space-time, and in particular, in mathematics, Cantor's paradise), it does happen that a measured spectrum, for example, has both continuous and discrete components at the same time.

Hopefully you will not take it too much amiss that I do not focus on the question of discrete vs. continuous. Old engineers like me tend to have no problem switching easily back and forth between these two rather equivalent worlds.

I am deliberately focusing on analog (in the sense of performed with means that are as real as the object itself) vs. digital (in the sense of mathematical, which gets rid of this usually unwelcome immediate link to reality).

The question "analog vs. digital" is of course prone to being (mis?)read as continuous vs. discrete. Perhaps "analog vs. digital" was chosen because digital signal processing undoubtedly proved superior, so as to support those who feel entitled to fight for CAs and the like. Being perhaps the only lonely one who emphasizes that analogy to reality implies realism, I am perhaps also the only one who arrived at hurtful and rather unbelievable conclusions, which were punished mainly in my public ratings. I hope for rehabilitation by more prudent readers.

We may also switch from reality to its model and back. However, reality has some peculiarities: any measured quantity belongs to positive values of elapsed time, and primary measured quantities do not contain imaginary values. In other words, in principle reality could always be expressed within R+.

Analog models are bound to reality. Hence they automatically obey these restrictions, no matter whether they work continuously or discretely.

Regards,

Eckard


Science is not only objective (which, by the way, you are far from, with all kinds of personal remarks about people all the time); it is also a human enterprise where people have to be open and listen to each other. You must first go read and learn before writing gibberish in this contest.


'All kinds of personal remarks about people all the time'?

Where would these be? Are you referring to this thread, to my answers, and to my contribution?

If this was really your intention, I suggest you be more polite and constructive here, if you are sincerely interested in interacting (which I doubt).


Peter,

Would you please clarify that you are not identical with Peter Jackson, reveal to whom you are addressing your utterance, state what you consider gibberish, and contribute a factual argument.

Eckard


Hi Tommaso,

This was, to me, a very readable essay (I am not a professional).

I agree with your principle of minimality (Occam), and as I write in my essay, the lowest level where we can make measurements is the Planck scale; the fundamental laws of physics are inaccessible there. Is your universal computation running there? (And of course, who or what is causing it to run?)

You mention: "Emergence occurs whenever complex patterns arise from a multiplicity of interactions". When we jump down to the lowest level (in my opinion the Planck level), it is our own consciousness that reaches out to the level after that to form these patterns into the causal, deterministic universe that we live in. Is our consciousness (creative substratum) the computational force that organises the "bits"?

When you say that the total order of "computation" does not represent physical time, it means that these processes take place in, for me, the "fifth" dimension, because if "it" does not take causal time it does not exist. In fact, you say there that every "computation" is already done, just like a quantum computer with more than 10 qubits. Is this right?

You say: "a spacetime in which all events are pairwise causally related may support some notion of TIME, but certainly not of SPACE". But in my opinion causal time is stuck to space in our 4D Universe. Or did you mean here that the uncertainty principle is valid: particle or wave, it does not take a time lapse for the wave to be observed as a particle, and this phenomenon is not space-related?

Congratulations with your position in the contest.

Wilhelmus


Sorry Tommaso, I made an error in the linking, and now the whole post connects you with my essay... easy, isn't it?

Wilhelmus


New Measurement of the Earth’s Absolute Velocity with the Help of the “Coupled Shutters” Experiment

http://www.ptep-online.com/index_files/2007/PP-08-05.PDF


Dear Tommaso,

Congratulations for your well-deserved win.

Best wishes from Vladimir


Dear Tommaso,

Congratulations for being selected as a prize winner by the judges.

I don't think I have properly read your essay. (Seems you didn't need my vote anyway.) There were just too many. I will have to put that right, now that the judges have chosen your essay as a prize winner. You must have done a good job. So well done!

Regards Georgina.


Dear Tommaso,

I would like to introduce myself in quantum terminology and share with you the truth that I have experienced. Who am I?

I superpositioned myself to be me, to disentangle reality from virtuality and reveal the absolute truth.

Love,

Sridattadev.

