I think that these are all great questions, but I do not believe that any of them can be fully addressed within the confines of physicalism. This is especially true of question three, which would require an explanation of how our seemingly multi-determined present can arise from the indeterminism of the quantum world.
While consideration of alternate ontologies does not seem to be a priority (or maybe even an acceptable topic for discussion), I believe that it is required if many of the fundamental questions are to be fully addressed.
The principle of sufficient reason should guide this activity. If the vast majority of people believe that they have free will, then there needs to be an explanation of how free will works, or an explanation of why so many people hold the belief if it is in fact false.
It should be recognized that the existence of quantum gravity depends on the assumption of a monistic ontology: if the general theory of relativity and quantum mechanics can be attributed to two different ontological realms, then the failure to find a fully satisfactory theory of quantum gravity may be due to the possibility that quantum gravity does not exist.
Robert H McEachern replied on Dec. 2, 2018 @ 17:04 GMT
Stefan:
"But this would mean that „free will“ is just an illusion – compared to the true circumstances that govern our behaviour. Our behaviour would be determined by the initial conditions since the advent of time."
Yes and No. It all depends upon what is meant by "determine". Obviously everything that I do is determined by my actions - by definition. And my actions are, of course, determined by my history and memories - by definition.
But that is an uninteresting definition of "determine". The question raised by Laplace and other philosophers and physicists, is "Can something else other than me, determine what I will do, before I do it?" Which, in turn, only is interesting, if they can determine what I will do, before they see me actually do it.
Thus, my point is, if no one can "determine" what IS happening until AFTER it has happened, then there is no reason to be impressed by their trivial determinations. The only interesting, non-trivial, definition of "determinism" is that something else (a cosmos that does not yet include my existence), can determine (make a correct prediction of my future actions) before my history and memory (the foundation of "me" and my "will") ever even exists.
Think of a game like chess. The existence of fully deterministic rules for the game does not enable any known entity to predict the outcome of every game. The question is, is such an entity even a logical possibility? The answer is no, if the initial conditions have a higher information content than can be stored in any subset of the cosmos (the supposedly "all-knowing" entity). Nevertheless, the game can be played, and someone will win, because the cosmos, as a whole, is sufficient to enable the game to exist, but is nevertheless NOT sufficient to determine what will happen, in any other manner than by just letting it happen. After the fact, it is always possible to determine who won. The point is, "So what?" No one ever seriously doubted that that is possible.
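A toy sketch of the counting behind this claim (the bit sizes, function names, and the "law of nature" below are invented purely for illustration, not taken from the post): a predictor whose memory holds only k bits can distinguish at most 2^k initial conditions, so if the system has more possible initial conditions than that, two of them must look identical to the predictor while evolving differently, and at least one prediction fails.

    from itertools import product

    K_PREDICTOR_BITS = 3   # the predictor's memory: 3 bits -> at most 8 internal states
    N_SYSTEM_BITS = 4      # the system's initial condition: 4 bits -> 16 possibilities

    def predictor_state(initial_condition):
        # Any fixed encoding into k bits will do; here we simply truncate
        # (a lossy compression of the initial condition).
        return initial_condition[:K_PREDICTOR_BITS]

    def outcome(initial_condition):
        # A fully deterministic "law of nature": the outcome is the last bit.
        return initial_condition[-1]

    seen = {}
    for ic in product((0, 1), repeat=N_SYSTEM_BITS):
        s = predictor_state(ic)
        if s in seen and outcome(seen[s]) != outcome(ic):
            print(seen[s], "and", ic, "look identical to the predictor,",
                  "yet evolve differently - one prediction must be wrong")
            break
        seen[s] = ic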
Rob McEachern
Georgina Woodward replied on Dec. 2, 2018 @ 18:37 GMT
"Can something else other than me, determine what I will do, before I do it?" Which, in turn, only is interesting, if they can determine what I will do, before they see me actually do it." Robert
Derren Brown's Advertising Agency Task shows how priming of the mind influences its choices, so they can be predicted largely, but not always exactly.
Robert H McEachern replied on Dec. 2, 2018 @ 19:21 GMT
Georgina,
"so they can be predicted largely, but not always exactly"
"Intuitively obvious to the casual observer." was an expression that was popular when I was in college, for such a statement.
Determinism is only interesting, if EVERYTHING can be EXACTLY predicted, at least in principle. That is the only case that Laplace was concerned with, because that is the only case that conflicts with the existence of free-will.
Rob McEachern
Georgina Woodward replied on Dec. 2, 2018 @ 23:49 GMT
I think what I wrote is relevant to the impression people have of their own free will: that they make decisions based on the facts consciously known to them, whereas it is the subconscious that has the majority of the facts and isn't letting on that it is controlling the outcomes not the conscious will.
Stefan Weckbach replied on Dec. 3, 2018 @ 04:39 GMT
Georgina:
Agreed, but sometimes it comes down to making unconscious things conscious (by knowledge, for example). Judges can eat something (chocolate) or drink something (Coke instead of water) that raises their blood sugar level, so that they do not sentence in an inappropriate manner before lunch.
Moreover, the subconscious is important, no doubt about that. But what is unconscious can be made conscious - by the free will to explore it. I think this would give a higher degree of personal freedom from deterministic influences, since after having made some things conscious, one can decide whether or not one wants to react in the same way or in another.
Robert:
“The only interesting, non-trivial, definition of "determinism" is that something else (a cosmos that does not yet include my existence), can determine (make a correct prediction of my future actions) before my history and memory (the foundation of "me" and my "will") ever even exists.”
Yes, and that’s the only case I refer to. I thought it would be clear by mentioning the multiverse (and its “wave function”).
“Think of a game like chess. The existence of fully deterministic rules for the game does not enable any known entity to predict the outcome of every game.”
It’s not only the rules themselves. Chess does not play itself, it needs conscious agents. These agents have some ideas and simulations in mind, partly triggered by the last move of the opponent, but surely also triggered by – I don’t really know – emotional, intuitive reasonings. Despite the fact that the origins of these reasonings may not be fully known to the player, nonetheless the player can decide to follow those reasonings or follow some other reasonings. Therefore it is surely not possible to forecast with 100% certainty who will definitely win.
“After the fact, it is always possible to determine who won”
Sure, because the rules are crystal clear. The question is whether the loser moved the wrong pieces at the wrong time in the wrong direction.
“Can something else other than me, determine what I will do, before I do it?”
Here the question arises for me how to properly define “me”. Is it just the physical body and its biology or is it more?
“Obviously everything that I do is determined by my actions”
I would not agree, since this is a somewhat circular statement. I would rather state that everything that I do is *caused* by my actions. Nonetheless “action” and the word “do” are different words for the same thing, so it is no wonder that they have a strong “causal” link.
Eckard Blumschein replied on Dec. 3, 2018 @ 06:18 GMT
I largely agree with Rob McEachern. I just dislike using the mathematical and creationist model of a basic initial condition which reminds me of a similar naive and unrealistic model: Adam and Eve.
Robert H McEachern replied on Dec. 3, 2018 @ 14:17 GMT
Stefan,
"I thought it would be clear by mentioning the multiverse (and its “wave function”)" There is no evidence that such things even exist, except as naively supposed causes (akin to fairies and ghosts and souls), for effects that actually have been observed.
"Chess does not play itself, it needs conscious agents." Unconscious computers play chess far better than any humans ever have. They first beat the world champion over twenty years ago.
"Is it just the physical body and its biology or is it more?" "More", is an unnecessary hypothesis; It explains nothing that needs to be explained.
"this is a somewhat circular statement... “action” and the word “do” are different words for the same thing, so it is no wonder that they have a strong “causal” link." Exactly my point.
Eckard,
Keep in mind that mathematically, "initial condition" merely refers to the conditions existing at the start of some computation. It does not refer to anything like the biblical "in the beginning". As applied to the question of free-will, using the initial conditions existing a few hours before my birth, to perfectly predict my entire life's course, is every bit as sufficient a demonstration of my lack of free-will, as using the initial conditions billions of years earlier.
Georgina,
"it is the subconscious that has the majority of the facts and isn't letting on that it is controlling the outcomes not the conscious will." I agree that the subconscious is usually running the show. However, I would argue that it does "let on". The problem is, most people pay no attention to it. But it is there to see, for anyone that bothers to look.
Rob McEachern
Stefan Weckbach replied on Dec. 3, 2018 @ 16:45 GMT
"There is no evidence that such things even exist..."
Exactly my point, as far as the "wave function" is concerned.
""Is it just the physical body and its biology or is it more?" "More", is an unnecessary hypothesis; It explains nothing that needs to be explained."
I cannot agree. Many people try to explain the emergence of consciousness from dead matter. If you can explain this, so that everybody understands how matter can become conscious about other matter - wonderful, please tell the world the details and the list of all necessary ingredients (and the detailed architecture with the reasons why it has to be as it is).
"Unconscious computers play chess far better than any humans ever have. They first beat the world champion over twenty years ago."
Yes, since this is a highly deterministic and combinatorial game. But does this prove anything other than that computers are far better and faster than human beings at exploring the space of possibilities and their consequences? I never saw such a chess computer output "I have no more pleasure in playing chess and will quit it now", but I saw many people doing this. By the way, there has to be someone who has programmed these computers. What was this programming procedure - was it a deterministic process of nature?
Since you are of the opinion that there is nothing to explain concerning consciousness, I ask you to tell me why these chess computers have no consciousness (and do not claim that they have - until you have a watertight proof of that claim!).
Robert H McEachern replied on Dec. 3, 2018 @ 17:22 GMT
Stefan,
"If you can explain this, so that everybody understands how matter can become conscious about other matter - wonderful, please tell the world the details..."
Read my old book. It is happening pretty much just as I predicted it would happen, when I wrote the book over 25 years ago.
"I never saw such a chess computer output "i have no more pleasure in playing chess and will quit it now" A chess computer is neither required nor desired. Just buy an Apple smart phone, and talk with Siri.
"By the way, there has to be someone who has programmed these computers." Not any more. They have been able to reprogram themselves, for some time now. They are not yet world-champions at it, but that will happen in the not too distant future.
"i ask you to tell me why these chess computers have no consciousness" Because as I explained in my book, it will require the equivalent of about 100,000,000 gigaflops of processing power, in a machine that is cheap enough for anyone to buy - about the cost of a new small car. That price-performance mark is still some years away.
Rob McEachern
Stefan Weckbach replied on Dec. 4, 2018 @ 04:27 GMT
Robert,
what principle(s) leads to the phenomenon of consciousness? Can you sketch them without me having to read the book?
Your book takes an evolutionary view of consciousness. What does the theory of evolution say about the usefulness of consciousness? Why did it evolve?
Greets
Stefan
Robert H McEachern replied on Dec. 4, 2018 @ 16:03 GMT
Stefan,
"what principle(s) leads to the phenomenon of consciousness?" Sensory signal processing, on a massive scale, in order to establish how the "self" should behave towards future sensory experiences. This should be fairly obvious, if you reflect upon how humans usually test a person in a coma (not conscious); they test to see if they respond to sensory experiences. Failure to respond to...
view entire post
Stefan,
"what principle(s) leads to the phenomenon of consciousness?" Sensory signal processing, on a massive scale, in order to establish how the "self" should behave towards future sensory experiences. This should be fairly obvious, if you reflect upon how humans usually test a person in a coma (not conscious); they test to see if they respond to sensory experiences. Failure to respond to such a test may not rule-out the existence of consciousness, but the mere fact that this is the first test employed, is indicative of the fact that people intuitively understand that there is an intimate connection between sensory responses and consciousness.
"What does the theory of evolution say about the usefulness of consciousness? Why did it evolve?" It evolved as a side-effect, that serves no great purpose. Once sensory signal processing power evolved enough to make sense (enhance survivability) of the external world, it was also used to monitor/control the body's internal world.
The importance of consciousness is over-rated. The only reason it seems so important to you, is because it is you - the only thing that you are actually aware of, which is, in fact only a tiny fraction of what is actually going on within your own body. Consciousness is like the CEO of a corporation - useful to have around, but the corporation will survive, and sometimes even thrive, without one - even though the CEO often believes otherwise, because CEO's tend to be more concerned about their own existence, rather than the corporation as a whole. That, by the way, is also why people are so enamored by the possibility of an afterlife - the CEO gets to move on to a bigger and better job, even after his old corporation is ruined.
Evolution does not produce entities with a purpose (teleology). It merely produces entities that are able to continue to exist, within their environment. Consciousness does not usually interfere with continued existence, so it continues to survive - but when it does interfere, it may be appropriate to consider the individual for a Darwin Award.
Rob McEachern
Eckard Blumschein replied on Dec. 4, 2018 @ 17:30 GMT
Robert,
You are not the only one who uses the Laplacean term "initial condition" without specifying what moment of beginning (t=0) it refers to. The unilateral Laplace transformation doesn't include time before the beginning; possibly (in addition to the initial values at t=0) it needs all (!) derivatives of these quantities at t=0.
I simply prefer replacing the "initial values" with "influences" from before a chosen moment, because there is perhaps no reasonable choice of a beginning for perhaps endless processes of evolution.
Of course, there is no reason to derive any beginning from the Bible. On the contrary, I merely brought up the genetically disproven Adam and Eve story in order to reveal typical mis-generalizations. The same goes for Noah's Ark.
As for "free will", I understand strict determinism, up to fatalism, as a likewise inappropriate human attempt to generalize observations. My "free will" certainly exists, and it arose from a murky plurality of influences ranging from clearly causal to random.
Incidentally, I read your comments on consciousness as valuable steps toward an admittedly coarse mathematical model of a doctor's decision on consciousness. However, consciousness is also often used in the mystical sense of the human soul.
Stefan Weckbach replied on Dec. 5, 2018 @ 06:06 GMT
Robert,
you wrote
„There is a simple reason, for why free-will exists, even when the laws of physics are entirely deterministic: In order to "determine" the future state of the cosmos, it is necessary to know not just the laws of nature, but all the initial conditions. But if no subset of the cosmos has sufficient information storage capacity to store all those conditions, then no subset can "determine" (AKA predict) the future of the whole. And no subset can have sufficient storage capacity, if the initial conditions are truly random, thus requiring an infinite storage capacity, for even a tiny, finite cosmos.”
Let’s take this at face value: all things happen entirely deterministically. But then – in your own words from above –
Darwin's theory of Evolution *is an unnecessary hypothesis; It explains nothing that needs to be explained*. Why? Because in the case of an entirely deterministic system, your statement
“Sensory signal processing, on a massive scale, in order to establish how the "self" should behave towards future sensory experiences. This should be fairly obvious…“
is simply false (and far from obvious). In an entirely deterministic system no consciousness whatsoever can alter the course of any event. With that, Darwin's theory must be labeled an unnecessary hypothesis that explains nothing that needs to be explained. In this scenario the only thing that perhaps has to be explained could be why some initial conditions were as they were.
Nonetheless I think your example with the CEO has some truth in it.
I would beg you to think about all this twice, since you use two mutually exclusive and contradictory principles to *explain* all there is: a strictly deterministic system and Darwin's theory of Evolution.
“It was also used to monitor/control the body's internal world”. Again: in a strictly deterministic system there is no “use” and no “control” – all things happen inevitably due to some initial conditions. In a strictly deterministic world, even suicide happens inevitably.
By the way, this reminds me of Boltzmann, who presumably took his own findings too seriously and felt forced to commit suicide (among some prominent others like Turing, Gödel, Ehrenfest).
So, at least one of your beliefs - in a strictly deterministic world or in Darwinian Evolution - must be just wishful thinking; and your link to this Darwin Award does no justice to men like Boltzmann, Turing, Gödel and Ehrenfest, especially if one believes in strict determinism.
Robert H McEachern replied on Dec. 5, 2018 @ 14:38 GMT
Stefan,
"In an entirely deterministic system no consciousness whatsoever can alter the course of any event."
Of course it can: by simply being THE "necessary" final, deterministic step, leading up to the event. The whole rationale of Laplacian determinism is to EXCLUDE that possibility, by proclaiming that the "end" can be determined - even when THE supposed necessary thing (consciousness) could not possibly be present, because it has yet to be born.
Deductive logic is based upon a chain of reasoning. Hence, it is only as strong as THE weakest link. Laplacian determinism, in effect, claims that if there is a chain, suspended between two walls, if THE last link in that chain (my existence) is removed, then the now unattached chain, will still miraculously hover above the floor, just as it would have hovered, when THE last link (my consciousness) was in place. But it will not; my existence is "Necessary" to ALL the events occurring in my life - conscious or otherwise.
"I think, therefore I am." But where there is no "I am", there can be no "I think" either. Laplace threw the baby, out with the bathwater. So Laplace was correct, when he said that when I am not there, my consciousness cannot possibly be there either. But his argument, literally says nothing about what can happen when I AM there; it explicitly excludes that case from the argument - I was not yet born, so I AM not there.
Unless one HAS (past tense) perfectly predicted everything that will ever occur, then determinism has not BEEN proven. Simply assuming that it can be predicted, proves nothing. And presenting a logical argument, that claims to have proven it, but is ultimately based upon a false premise (which is what Laplace did) also proves nothing.
Here is what Laplace actually said:
"An intelligence which in a single instant could know all the forces which animate the natural world, and the respective situations of all the beings that made it up, could, provided it was vast enough to make an analysis of all the data so supplied, be able to produce a single formula which specified all the movements in the universe from those of the largest bodies in the universe to those of the lightest atom."
My point is, that to "know" "the respective situations of all the beings" and being "vast enough" is a logical impossibility, if "the respective situations" are random and thus require not merely a "vast", but an infinite machine, to predict behaviors within a finite universe. Infinite symbolic storage machines do not exist within finite universes.
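One way to make that counting explicit (a sketch only, in standard notation rather than anything from the post): a generic, incompressible coordinate has the endless binary expansion

    x \;=\; \sum_{k=1}^{\infty} b_k \, 2^{-k}, \qquad b_k \in \{0, 1\},

so recording it exactly requires all of its digits, while a memory of N bits can distinguish at most 2^N values and must therefore mis-place some admissible x by at least 2^{-(N+1)}.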
Rob McEachern
Stefan Weckbach replied on Dec. 5, 2018 @ 17:15 GMT
Robert,
“My point is, that to "know" "the respective situations of all the beings" and being "vast enough" is a logical impossibility, if "the respective situations" are random and thus require not merely a "vast", but an infinite machine, to predict behaviors within a finite universe. Infinite symbolic storage machines do not exist within finite universes.”
I think so too.
But this does not logically rule out the possibility of a strictly deterministic world. The world could nonetheless be strictly deterministic and at the same time offer no chance to prove it (due to practical or in-principle reasons). That’s not my cup of tea, but many people believe in such a deterministic universe (multiverse).
Therefore I agree with
“Unless one HAS (past tense) perfectly predicted everything that will ever occur, then determinism has not BEEN proven. Simply assuming that it can be predicted, proves nothing. And presenting a logical argument, that claims to have proven it, but is ultimately based upon a false premise (which is what Laplace did) also proves nothing.”
But again: a logical argument that claims to have proven a strictly deterministic universe by some false premise does not rule out that unprovable things can really exist!
You wrote
“Of course it can: by simply being THE "necessary" final, deterministic step, leading up to the event. The whole rationale of Laplacian determinism is to EXCLUDE that possibility, by proclaiming that the "end" can be determined - even when THE supposed necessary thing (consciousness) could not possibly be present, because it has yet to be born. Deductive logic is based upon a chain of reasoning. Hence, it is only as strong as THE weakest link. Laplacian determinism, in effect, claims that if there is a chain, suspended between two walls, if THE last link in that chain (my existence) is removed, then the now unattached chain, will still miraculously hover above the floor, just as it would have hovered, when THE last link (my consciousness) was in place.”
If your concept of the emergence and functionality of consciousness is based solely on the hitherto known laws of physics in a strictly deterministic fashion, I cannot see why “the last link (my consciousness)” should be more important than, say, the middle link. I think with the lines you wrote above, you forget what you wrote to me some posts earlier, namely
“The importance of consciousness is over-rated. The only reason it seems so important to you, is because it is you - the only thing that you are actually aware of”.
If this statement is true, then there is no “last link”; the deterministic chain in nature surely does not stop when it arrives at an "agent" (you), merely because it’s “you” that triggers another event. And this event will again trigger another event… and so on. Nature in this case does not discriminate between an event triggered by a conscious agent and an event triggered by a non-conscious thing. But again: this is only true if we handle the universe and the phenomenon of consciousness on the same level, namely as some strictly deterministic mechanism.
My comment at the beginning of our conversation was to merely point out that in a strictly deterministic universe the notion of “free will” is simply a misnomer (together with “Darwinian Evolution”).
Robert H McEachern replied on Dec. 5, 2018 @ 19:11 GMT
Stefan,
"I cannot see why “the last link (my consciousness)” should be more important then, say, the middle link." You are not alone in that deficiency. That was exactly my point, when I have previously discussed "the Physics Community’s profound misunderstanding of exactly what a single, classical “bit” is, in the context of Shannon’s Information Theory"
Shannon's "information" is about perfect, absolutely perfect, reconstruction; if you get a single bit wrong anywhere at any time, then you FAIL. There is no such thing as "graceful degradation". You either have perfect reconstruction, or catastrophic failure.
Consequently, if you look at the long history of "yes/no" decisions (did "this" happen or did "that" happen) in a predicted sequence of events, it makes no difference WHATSOEVER how many trillion, trillion, trillion predictions you got correct in the past. As soon as you EVER fail to predict a single bit value ANYWHERE at ANYTIME, then you have totally failed altogether. Because that single-bit error IS the failure of determinism. This has nothing to do with free-will. It is the very definition of determinism.
Consider a universe with just two entities; Laplace's "vast intelligence" (a trillion, trillion galaxies of matter and energy organized into a single, vast computer) and a single dust speck. Could such a vast intelligence use the correct, deterministic laws of nature, to EXACTLY predict the motion of the dust speck, if it does not know where the dust speck is? No. And can this vast intelligence EVER know where this dust speck is, if it has consumed every last bit of its "information storage capacity", in an effort to describe the locations of its own constituents? No. Under such circumstances, it cannot even predict what a single dust-speck will do, much less what a human being will do. That is what determinists have utterly failed to comprehend. But that is what "random" initial conditions mean, in information theory - a "one-time pad" - meaning there is no possibility of representing an information sequence with ANYTHING that is less complex than the sequence itself. Hence, the "vast intelligence" can only represent itself, and can never know anything about the dust speck, without having to over-write (and thus lose) some absolutely critical information about itself, and thereby disabling it from EXACTLY being able to account for its own behavioral impact upon the dust speck.
The point is, there is no such thing as a "most" or "least" significant bit of "information" - they are all equally significant, because the ENTIRE sequence is being treated as a single, alphabetic symbol. Consequently, a single bit-error, anywhere in the sequence, may result in the "decoding" of a different alphabetic symbol, triggering an entirely different, subsequent behavior; an inability to predict anything, even in a universe with entirely deterministic laws, and even when just one bit of information, one single bit, is missing from the "vast intelligence". Unlike physics, such behaviors are what Information Theory is all about.
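A minimal sketch of that "whole sequence as one symbol" picture (the behavior list, the use of a hash as the table, and the sequence itself are all invented for illustration): the full bit-sequence is treated as a single index, and the index-to-behavior map is made deliberately ill-conditioned, so flipping any one bit typically selects a completely unrelated entry.

    import hashlib

    BEHAVIORS = ["fight", "flee", "freeze", "feed", "ignore", "investigate"]

    def decode_behavior(bit_sequence):
        # The whole sequence is ONE index; hashing it makes the mapping from
        # index to behavior deliberately ill-conditioned (no "smoothness").
        digest = hashlib.sha256(bytes(bit_sequence)).digest()
        return BEHAVIORS[digest[0] % len(BEHAVIORS)]

    original = [0, 1, 1, 0, 1, 0, 0, 1] * 4        # a 32-bit "observation"
    corrupted = list(original)
    corrupted[17] ^= 1                             # a single bit error, anywhere

    print("original decodes to :", decode_behavior(original))
    print("one bit flipped     :", decode_behavior(corrupted))   # usually an unrelated behavior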
A subset of a one-time-pad cannot be used to correctly predict every value in the entire one-time pad. There will always remain something that is unpredictable, even if there are fully deterministic laws. This is related to Gödel's incompleteness theorem.
Rob McEachern
Stefan Weckbach replied on Dec. 6, 2018 @ 05:51 GMT
Robert,
thanks for your extensive reply. A pleasure to read your point of view on these topics!
„As soon as you EVER fail to predict a single bit value ANYWHERE at ANYTIME, then you have totally failed altogether. Because that single-bit error IS the failure of determinism. This has nothing to do with free-will. It is the very definition of determinism.”
Ah, now I come closer to your definition of determinism. For me, a strict determinism is something nature does or doesn’t (I don’t know for sure), but it is independent of whether or not a machine or a human consciousness should be able (in practice or in-principle) to predict every step in the future of a strictly deterministic system (given the correct initial conditions are “known”).
I understand that there can be no exact representation of the information content of the whole universe in just a part of the universe.
“There will always remain something that is unpredictable, even if there are fully deterministic laws”.
Yes, we know this from chaos theory.
“There is a simple reason, for why free-will exists, even when the laws of physics are entirely deterministic”
This line of reasoning is not yet traceable for me.
Robert, you wrote
“The question raised by Laplace and other philosophers and physicists, is "Can something else other than me, determine what I will do, before I do it?" Which, in turn, only is interesting, if they can determine what I will do, before they see me actually do it.”
Why is it only interesting if they can predict my doing? Surely, this would be a kind of proof for the believers in strict determinism. But we already deduced that such a proof isn’t feasible. So we have to examine the claims of strict determinism indirectly. It’s – at least for me – nonetheless interesting whether or not the claims of strict determinism make overall sense.
As I pointed out in the posts above and elsewhere, if strict determinism is true, then not only Darwin's theory of Evolution and Natural Selection is absurd, but moreover the whole universe including all human and non-human actions. Because some mysterious initial conditions were such that all human and non-human actions make sense (at least in most cases) and are fine-tuned towards each other (and *mimic* Darwinian Evolution). So, logically, these assumed initial conditions would follow some logical imperative. Even if our universe is part of a vast multiverse with most of its universes not able to produce such a coherent one as ours, the fact remains that only in universes that follow logical rules can some parts of them ponder mysterious initial conditions. Well, it is clear that in a multiverse these mysterious initial conditions are a tiny subset of all the others in the multiverse and therefore not mysterious at all (from a statistical point of view). But the mystery remains when asking why only logically consistent worlds can produce “true” information about the world instead of falseness (nonsense).
Surely, one can answer that it’s the math that forces this to be the case. If maths is indeed the most “fundamental” fundamental of existence, then, as I noted at the beginning of this thread, one has to redefine maths, since classical maths has no power to produce dynamical physical systems (like a universe, or something out of nothing) out of itself (otherwise we should have seen this happening from time to time in our classrooms or elsewhere).
Interestingly, despite the widespread assumption that maths is an infinite landscape of precise interdependencies, it also seems to have its limits. From a logical point of view, it is hard to imagine that some parts of maths are able to encode the whole landscape of it (including the encodings of those parts…). But if maths is somehow infinitely infinite, why not…? Furthermore, it is hard to explain why the property of consistency is so superior to the property of inconsistency. If it were the other way around, would there be any reliable knowledge of maths about itself (I presuppose here that humans “are” a conscious mathematical pattern)?
If there is really that limit between consistency and inconsistency, where does this limit come from?
Robert H McEachern replied on Dec. 6, 2018 @ 17:31 GMT
Stefan,
“Ah, now I come closer to your definition of determinism. For me, a strict determinism … is independent of whether or not a machine or a human consciousness should be able (in practice or in-principle) to predict every step”
Ah, but it is not independent. That is the problem. Spooky Action at a Distance - NECESSITATES the acquisition of - Spooky Information from a Distance.
“Why is it only interesting if they can predict my doing?”
When Newton introduced his Universal Law of Gravity, enabling the prediction of planetary motions (Laplace’s Vast Intelligence), people asked “Isaac, tell me again, how EXACTLY does this work? How do each of the planets come to acquire the INFORMATION that is NECESSARY to DETERMINE how they should behave, right at this very moment? How do they KNOW where all the other planets are, right now?”
Newton famously replied “Hypotheses non fingo.” - “I feign no hypotheses” - or, more bluntly, “I don’t have a clue.”
Field theories, like General Relativity, are supposed to have resolved this problem, by saying that the planet/star does not need to know the positions and masses of other objects, but only needs to interact with the local field. But that does not solve the problem at all in the quantum case. It merely disguises it in a different form; how in the world is some tiny particle supposed to be capable of ever measuring/detecting a tiny field - and THEREBY INSTANTLY ACQUIRING THE VAST QUANTITIES OF NECESSARY INFORMATION? It requires a huge apparatus, and a long series of observations, to perform the recent detection of gravitational waves. How is a tiny, single particle supposed to do it at all, much less instantly?
A simple answer to this dilemma is that tiny particles cannot detect any field (and thus exhibit no interaction), except at certain, discrete points, where the detection conditions happen to be "just right". This is why the world is quantized. It has nothing to do with the quantization of space or time or matter. It has to do with the fact that the detection of information, is inherently a discrete interaction process; a single bit of information, a yes or no answer to the question, "Was something that can be interacted with, just encountered?" Whenever the answer is no, there can be no possible interaction. Think of neutrinos passing through the entire earth, with no interaction at all - they just never happened to encounter any "just right" conditions, rather like the binding of a drug-molecule with a very specific receptor, but not interacting with anything else.
It is easy to show that the Heisenberg Uncertainty Principle is EXACTLY equivalent to the statement that all measurements (and thus all interactions) must contain >= one bit of information. Hence, if a tiny particle cannot extract even one bit of information, from a tiny field, then it cannot interact with that field - at all. That is what quantum theory is ultimately all about…
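A hedged sketch of how such an equivalence is usually argued (the exact-equivalence claim above is Rob's; the steps below merely combine the standard Fourier bandwidth limit, the Planck relation, and the Shannon-Hartley capacity):

    \sigma_t \, \sigma_f \;\ge\; \frac{1}{4\pi} \ \text{(Fourier/Gabor limit)}, \qquad
    E = h f \;\Rightarrow\; \sigma_t \, \sigma_E \;\ge\; \frac{h}{4\pi} \;=\; \frac{\hbar}{2},

    n \;=\; T B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits} \;\ge\; 1
    \quad\Longrightarrow\quad T B \;\ge\; \frac{1}{\log_2\!\left(1 + S/N\right)},

so demanding that a measurement of duration T and bandwidth B recover at least one bit imposes the same kind of lower bound on the time-bandwidth product as the uncertainty relation does.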
Every tiny particle, has the EXACT same problem as Laplace’s “vast intelligence”, but it has vastly less information storage capacity, so the only way it can obtain ANY information NECESSARY to DETERMINE what it is supposed to do at each and every moment, is to collide with whatever is “out there” and attempt to acquire that one and only bit of information, that it is EVER possible to acquire (at the Heisenberg limit) - the answer to the yes/no question “Did I just collide with something?” If it did, then it behaves as if it did. If it did not, then it behaves as if it did not. End of story. That is all there is to reality. It is not about math, it is about information acquisition.
Think of two submarines, designed to be hard to detect, attempting to detect each other. If they succeed, they sound general quarters “All Ahead Full! Dive! Dive Dive!” How did they DETERMINE that that was how they should behave, AT THAT VERY MOMENT, from the acquisition of the single bit of information, the mere detection of the other sub’s existence? They did not. They merely executed a prearranged, standard behavior. Or not - if they failed to detect anything. So all of a sudden, the formerly undetectable “virtual subs” become detectable, by a distant observer, as the result of their change in behavior. And if they return, to “Running Silent, Running Deep”, the distant observer is left to wonder if they were ever, really there at all - hence the sobriquet “virtual”.
It all starts to make perfect sense, when you realize that EVERYTHING, from the most elementary particle, to the free-will of a conscious being, is all the consequence of properties of sequences of single-bits of information. The only difference is the length of the sequence that they are dealing with. Elementary particles deal with sequences of the minimum possible length = 1. Conscious entities deal with much longer sequences. Emergent behaviors are caused by the emergence of entities that can deal with longer sequences of information - by teaming up and combining to form internal structures capable of storing longer sequences of behavioral responses to longer sequences of just-acquired information (observations).
The most elementary particle that is even conceivable, is the one that has the smallest conceivable (most elementary) amount of information - one bit. If there is not at least one bit, then there is nothing to be observed, by a conscious entity, or acted upon, by an elementary one.
It is not "Turtles all the way down." It is information all the way down. But "all the way down" is not an infinite regress. There is an abrupt end to reductionism - at precisely one bit of information.
Rob McEachern
Stefan Weckbach replied on Dec. 6, 2018 @ 20:25 GMT
Robert,
what you describe has some characteristics of what we call a "virtual reality" in some computer.
You wrote
“Ah, but it is not independent. That is the problem. Spooky Action at a Distance - NECESSITATES the acquisition of - Spooky Information from a Distance.”
and
“Every tiny particle, has the EXACT same problem as Laplace’s “vast intelligence”, but it has vastly less information storage capacity, so the only way it can obtain ANY information NECESSARY to DETERMINE what it is supposed to do at each and every moment, is to collide with whatever is “out there” and attempt to acquire that one and only bit of information, that it is EVER possible to acquire (at the Heisenberg limit) - the answer to the yes/no question “Did I just collide with something?” If it did, then it behaves as if it did. If it did not, then it behaves as if it did not. End of story. That is all there is to reality. It is not about math, it is about information acquisition.”
In a virtual reality in a computer, spooky action at a distance isn’t spooky, since what the computer outputs is an illusion (misinterpretation) of causal links. A cat on my display that eats a mouse doesn’t have that mouse in its stomach after having “eaten” it, nor has the “cat” killed the “mouse”. In other words: cat and mouse are not located somewhere in the display or even in the computer’s memory.
That’s all fine, and I too am a friend of the information-theoretic bit-interpretation of QM. But there are open questions for me: how does it come about that we do *not* confuse a certain bit-sequence that defines a mouse with a mouse as we know (see) it? Same problem with Tegmark’s mathematical universe: how does it come about that we do not confuse the mathematical structure of a chair with a chair – albeit the chair and “its” mathematical structure have been defined as (or indeed are?) one and the same (no difference at all, end of story)?
Next question: are conscious observers generated by the dynamics within the computer, that is, by the virtual reality itself (this would then mean that an animated mouse in some future desktop computer could indeed also have consciousness)? Who or what defined that bit-sequence xyz has to be experienced as the color “red”? And last, but not least, if it is true that
“There is an abrupt end to reductionism - at precisely one bit of information.”
maybe these questions have no real answers?
My answer: the virtual reality idea (intuition?) is compelling, but it is not the end of the story. There has to be something outside it. To see this more clearly, remember again your 1 bit of information. If information is not just an absurdity whose origins have no roots in something that (in your own words) “makes perfect sense”, then this one bit of information (that it makes perfect sense) necessitates (signals!!!) that this virtual reality has its roots (and fruits?) beyond itself.
Robert H McEachern replied on Dec. 6, 2018 @ 21:05 GMT
Stefan,
"what you describe has some characteristics of what we call a „virtual reality“ in some computer." But with one huge difference. Virtual machines cannot deal with truly random, real numbers, for all the reasons noted above; such random numbers are not compressible and so cannot be fit in the memory of the physical machine, underlying the virtual reality. So virtual realities use compressible, pseudo-random numbers instead.
So, in order to determine if we live in a "real" world, or a "simulated" one, we need to know the answer to the question, are the "initial conditions" of our world truly-random or merely pseudo-random?
"how does it come about that we do *not* confuse a certain bit-sequences that defines a mouse with a mouse as we know (see) it?" By looking at all the non-information bits accompanying the mouse, but that have be "compressed" out of the information sequence being used to "represent" the mouse.
To put it simply: An information-sequence is a single number, a unique "index" number into a "look-up table", whose contents define the behavior to be performed when the index number is observed. There need not be any relationship at all between two nearly identical index numbers and their associated behaviors - the system may be "maximally ill-conditioned", the exact opposite of the "smooth functions" so familiar to physicists.
"Who or what defined that bit-sequence xyz has to be experienced as the color “red”?"
So what does evolution load into those look-up tables? Whatever behaviors enable the tables to survive. There is no other reason than this, for why you stop at red, or drive on one side of the road in the United States, but the other side in the United Kingdom; you will not survive for long, if you do otherwise.
So why is there something rather than nothing? Because detecting nothing is a "no op" - the look-up table behavior is "act like you just detected nothing". "Nothing" is the undetectable condition. That is why we cannot detect it.
Rob McEachern
Stefan Weckbach replied on Dec. 7, 2018 @ 06:15 GMT
Robert,
thanks again for your extended reply and the will to explain your lines of reasoning. Although we differ in some conclusions, I enjoy what you have figured out.
You wrote
“So why is there something rather than nothing? Because detecting nothing is a "no op" - the look-up table behavior is "act like you just detected nothing". "Nothing" is the undetectable condition. That is why we cannot detect it.”
It is clear that “nothing” cannot be detected, because there would be nothing there to be detected and no one (no thing) there to detect something. But this does not give an answer to the more fundamental question (as I wrote it down in my last essay)
Why is there something rather than nothing at all?
Instead of absolutely nothing, there exist look-up tables and pseudo-random or even truly random numbers. In a virtual reality with pseudo-random numbers, all events are determined.
In a scenario where truly random numbers can be generated, the situation is much more subtle. It would be comparable (coarse grained) with a deterministic “wave function” that immediately after a measurement has been performed has to be updated to continue with the correct *new* initial conditions.
I cannot see how this could enable something that we traditionally call “free will”. So here are my questions: are all possible *truly* random numbers that could exist, listed in a look-up table to determine what to do when such a number emerges? Why is it important to emphasize that those numbers cannot be algorithmically compressed? Is it because all these numbers *must necessarily* (for reasons you should tell me then) be so huge (in length)?
Next question: Let’s take your view on Evolution as granted for the sake of the following lines of reasoning (don’t misunderstand me here, I am not a fundamentalist creationist who categorically denies Evolution, far from it; I only call many things into question [as science should regularly do]). Now someone commits suicide by jumping in front of a moving car and dies. The look-up tables will not survive this. Now the question: If there was an ill-definition (compared to the evolutionary imperative to survive) in the look-up table, the person came to death deterministically from the point at which the error in the table was established. If there was no failure in the look-up table, what “number-sequence” caused the suicide, and why did this number-sequence exist at all?
Robert H McEachern replied on Dec. 7, 2018 @ 15:49 GMT
Stefan,
"But this does not give an answer to the more fundamental question..." As I have pointed-out elsewhere, it is impossible in principle, for deductive logic to answer "fundamental" questions about the physical world (as opposed to virtual worlds). Deductive logic can only prove that a conclusion follows from a premise. It cannot prove a premise; if it could, then it would not be a premise in the first place. It would be another theorem, based on a premise...
As Bacon noted 400 years ago, a premise about the physical world, can only be justified via inductive logic, applied to observations. But such logic can only demonstrate that a premise is probable. It cannot demonstrate that it is certain. Godel's incompleteness theorem is appearing again.
"In a scenario where truly random numbers can be generated, the situation is much more subtle." There is no such situation. Truly random numbers cannot be "generated" by a finite process. They can only be "found". In other words, they must pre-exist the entity attempting to "generate" them; either they exist in the our world, or they do not. If they do not, then our world is a virtual world.
"Why is it important to emphasize that those numbers cannot be algorithmically compressed?" Because that is what creates the dilemma I described earlier; Laplace's vast, virtual intelligence, cannot perfectly describe both itself and a dust speck. If it could perform a lossless compression of either its description of itself, or its description of the dust speck's position, then it might be able to fit ALL the Laplacian required information, within its information storage capacity, thereby enabling Laplace's perfect, error-free predictions. No compression, no (perfect) prediction.
"(compared to the evolutionary imperative to survive)" There is no imperative. There is merely a tendency. The world does not care if we, or anything else survives or not. It simply has not prevented survival of some observable things.
By the way, these types of issues are related to the Anthropic Principle, which, correctly understood, states that "It is absolutely inevitable that every observer must have a past history, that was conducive to its own existence, consequently, all claims that "fine-tuning" is required to explain an observer's existence, must ultimately have been based upon a false premise."
"If there was no failure in the look-up table" There will always be failures. (1) A failure to correctly decode the index number from the observations (like trying to read bad handwriting). (2) Failure to have exactly the right response in the table-element associated with a correct index. (3) And most importantly of all, the world may have changed, rendering obsolete, most of the hard-learned behaviors, stored within the table. All the things the dinosaurs learned and stored in their tables, enabling them to survive for millions of years, were suddenly, in a matter of days, rendered obsolete and useless, when an asteroid hit the earth. They had no table-entry for that, precisely because they had never observed and remembered and survived such an occurrence previously.
If someone jumps in front of a running car, their table will not survive, as you have noted. So when the surviving tables are copied into a new machine, they will not have that defective table-element. But copying errors will always occur in the real world. They may be very rare, but they will occur. And so a new defect in the table occurs, followed by new suicides eliminating the new defect... an endless process, with no "plan" for survival, just a process that fails to prevent some things surviving preferentially over others.
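A toy version of that copy-with-rare-errors picture (the stimuli, behaviors, survival rule, and rates are all invented; it is only meant to show differential persistence without any plan or goal):

    import random

    rng = random.Random(0)
    STIMULI = ["predator", "food", "cliff_edge"]
    BEHAVIORS = ["approach", "avoid", "ignore"]

    def survives(table):
        # The environment has no goal; it merely removes tables whose entries
        # for dangerous stimuli happen to be fatal.
        return table["predator"] == "avoid" and table["cliff_edge"] != "approach"

    def copy_with_errors(table, error_rate=0.05):
        child = dict(table)
        for stimulus in child:
            if rng.random() < error_rate:          # a rare copying error
                child[stimulus] = rng.choice(BEHAVIORS)
        return child

    population = [{s: rng.choice(BEHAVIORS) for s in STIMULI} for _ in range(50)]
    for generation in range(30):
        survivors = [t for t in population if survives(t)] or population
        population = [copy_with_errors(rng.choice(survivors)) for _ in range(50)]

    share = sum(survives(t) for t in population) / len(population)
    print("share of surviving-type tables after 30 generations:", round(share, 2))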
Rob McEachern
Stefan Weckbach replied on Dec. 7, 2018 @ 18:53 GMT
Robert,
I agree with many of the things you wrote. But I still cannot see how a truly random number should somehow be an argument for what we call free will.
I understand that the incompressibility of such random numbers excludes a correct prediction of a certain action. But what is the difference between your approach to justify the existence of free will and the widespread “approach” that the randomness of QM could somehow leave some room for free will?
How can an agent in a deterministic world make use of some truly random numbers? Is the decision (the behaviour) to pick up some “found” number triggered by determinism or by another truly random number? Once picked up, according to your definition, such a number should deliver some information about how to behave – namely how to behave independently of what would be the next deterministic step in the chain of deterministic behaviour. To choose this next behaviour freely, and not only randomly, there should also be a look-up table for those truly random numbers, where the subject can look up what behaviour the number means. This is necessary to decide for or against this behaviour. Otherwise we do not arrive at free will but at randomness.
“By the way, these types of issues are related to the Anthropic Principle, which, correctly understood, states that "It is absolutely inevitable that every observer must have a past history, that was conducive to its own existence, consequently, all claims that "fine-tuning" is required to explain an observer's existence, must ultimately have been based upon a false premise."
Yes, and that “false” premise would be that the world could have some slightly other fundamental constants (or no constants at all, means “nothing”) instead of those constants our world has. But as you have already outlined, one cannot prove a premise (and therefore not disprove it, at least not in the cases we are interested in). So, you handle your example concerning the Antrophic Principle as if your premise must be true, namely that there is no deeper reason (fine-tuning) for life in the universe. With this, the opposite premise, namely that there could be a truly non-random reason (fine-tuning) for life in the universe is therefore necessarily declared false. But you can’t declare such a premise false, because you cannot disprove it - you can’t either prove it! - neither with inductive nor with deductive logic. Both possibilities are therefore logically valid.
"It is absolutely inevitable that every observer must have a past history, that was conducive to it's own existence”
Yes, if my mother and my father died before I was born, this would in no way be conductive to my existence. But this is trivially true (addendum: trivially true only unless there exists a deeper reason for why my mother and my father did not die before I was born, namely a reason that does not exclusively rely only on pure chance, randomness, evolution and bit-sequence-failures).
Robert H McEachern replied on Dec. 7, 2018 @ 22:14 GMT
Stefan,
"But what is the difference between your approach to justify the existence of free will..." The difference is that I am not even trying to justify free will. I am DE-justifying determinism. In other words, since determinism is based on a false premise (that Laplace's vast intelligence is always possible, at least in principle), then there is no reason to continue to suppose that one's "self-evident" perception of free will, is probably just an illusion, as the believers in determinism would have us believe.
"How can in a deterministic world an agent make use of some truly random numbers?" By finding and compressing it, with a loss. In other words, truncate the infinite real-world number that specifies the position of the dust speck, thereby enabling it to fit into Laplace's vast, virtual machine, that only APPROXIMATES the behavior of the real world it seeks to represent.
The ultimate point is this: the dust speck has to exist, in order for its behaviors to exist. It, itself (and not just its past history) plays a NECESSARY role in determining what it, itself is doing, right now. Self-determinism IS a form of determinism; the very form that Laplace tried, but ultimately failed, to exclude.
"So, you handle your example concerning the Antrophic Principle as if your premise must be true, namely that there is no deeper reason (fine-tuning) for life in the universe..."
"Yes, if my mother and my father died before I was born, this would in no way be conductive to my existence. But this is trivially true..."
As you have observed, the premise IS "trivially true." The Anthropic principle does not object to the claim that there may be some "deeper reason" for the world being as it is. Rather, it only objects to the additional claim, of fine tuning; namely, that the "deeper reason" is such that, it would almost never result in an observer coming into existence. Such a claim is rather like your father trying to claim that you probably don't exist, because he might not have existed. If it is highly unlikely that the world would produce a human, then why are there humans in the world, arguing that they themselves probably should not exist? The fine-tuning aspect of the argument is self-contradictory. We do not live in a world that is not conducive to our existence. The fact that many such "other" worlds are conceivable, does not alter the fact that the world we are attempting to understand, is not one of them.
Rob McEachern
Stefan Weckbach replied on Dec. 8, 2018 @ 06:07 GMT
Robert,
now I see more clearly.
"The difference is that I am not even trying to justify free will."
The misunderstanding came from your opening post
"There is a simple reason, for why free-will exists, even when the laws of physics are entirely deterministic"
If I got it right, what you termed "free-will" earlier is identical with what you call "self-determinism".
I think Laplace meant a Demon, an intelligence not causally connected to the physical world. That would be the god's-eye view (bird's-eye view) of the universe. You rightly stated that such a Demon-view is an impossibility for physical agents.
Self-determinism sounds like there could be some freedom for the self in that process, as to whether it determines itself toward this or that or some other action (or thought, or conclusion). Moreover, it sounds like this agent would be able to truncate some infinite real-world number at exactly the place this agent "wants" it to be truncated.
Have you any theory about how a conscious agent (not a dust speck) decides which of many mutually exclusive future paths (actions, thoughts, etc.) it should determine itself toward?
Concerning the Anthropic Principle (there are more versions of it), it all hinges on whether or not one believes one knows the correct probabilities (and therefore possibilities) for what and why the universe is like it is. I do not claim to know them, but thinking about them and evaluating the probable consequences is nonetheless interesting to me.
Nonetheless I read your presentation at http://vixra.org/pdf/1707.0162v1.pdf and it's very interesting!
Robert H McEachern replied on Dec. 8, 2018 @ 16:54 GMT
Stefan,
"If i got it right, what you termed "free-will" earlier is identical with what you call "self-determinism"." Not quite. Free-will is only a subset of the set of all the things that might be self-determined. Laplace's argument attempts to eliminate the possibility of that subset, by eliminating the possibility of the entire set; the "self" is not present, before it was created/born. Thus, it applies equally to dust specks as well as humans, which is precisely why it is such a persuasive argument. And it would be a valid argument, if its premise was unconditionally valid. But it is not.
"I think Laplace ment a Demon, an intelligence not causally connected to the physical world." I think that is only a subset of the set of all the things that Laplace meant. Recall that Laplace is famously supposed to have said that "God is an unnecessary hypothesis". The same applies to any Demon.
"Have you any theory about how a conscious agent (not a dust speck) decides for which of many mutually exclusive future paths (actions, thoughts etc.) it should determine itself?" That was the topic of my book. Sensory signal processing constructs the "index" at each, present moment, that is being used to immediately access the look-up table behavior that "determines" how to behave/respond appropriately to the present index, while also, slowly, over-time, updating the table, both in terms of the length of the table (number of different indices that have an established response) and the contents of each table entry (the appropriateness of each response).
The massive amount of signal processing power, required to do such a thing (a more sophisticated version of the "deep learning" techniques currently being developed), is the reason for the 100,000,000 gigaflops, that I mentioned earlier.
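As a purely illustrative aside (not code from the book; the index rule, the action set and the update rule are all invented for the example), the look-up-table picture just described can be sketched in a few lines of Python: sensory input is reduced to an index, the index selects a stored response, and the stored response is slowly revised when it keeps failing.

    # Toy look-up-table agent: observation -> index -> stored response, with slow
    # updating of the table. Illustrative only; every detail here is an assumption.
    import random

    class TableAgent:
        def __init__(self):
            self.table = {}                          # index -> [response, fitness estimate]

        def index_of(self, observation):
            # Crude stand-in for sensory signal processing: quantize the input so
            # that similar observations map to the same index.
            return tuple(round(x, 1) for x in observation)

        def act(self, observation):
            idx = self.index_of(observation)
            if idx not in self.table:                # no established response yet
                self.table[idx] = [random.choice(["approach", "flee", "ignore"]), 0.0]
            return idx, self.table[idx][0]

        def update(self, idx, reward):
            # Slowly re-estimate how appropriate the stored response is, and
            # replace it if it keeps failing.
            entry = self.table[idx]
            entry[1] = 0.9 * entry[1] + 0.1 * reward
            if entry[1] < -0.5:
                entry[0] = random.choice(["approach", "flee", "ignore"])
                entry[1] = 0.0

    agent = TableAgent()
    for _ in range(1000):
        obs = (random.random(), random.random())
        idx, response = agent.act(obs)
        reward = 1.0 if response == "ignore" else -1.0   # an arbitrary, fixed "world"
        agent.update(idx, reward)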
This is somewhat like an actual procedure for constructing/evolving a sophisticated version of Searle's Chinese Room. One notable difference is that Searle's presence, within the room, is not required in order for the room to function.
As noted in my book, my book was written as a rebuttal to Roger Penrose's 1989 book "The Emperor's New Mind". Not only did I think that Penrose's claim that "quantum weirdness" underlies human consciousness was ridiculous, I also thought that his (the standard) take on "quantum weirdness" in quantum interpretations (like Bell's theorem) was ridiculous. The physicists do not even understand the physics, much less consciousness. So the final chapter in the book explained my take, on the EPR paradox and Bell's theorem. That was the origin of my much later FQXI essays and the vixra papers that I have posted on-line.
“Concerning the Anthropic Principle (there are more versions of it), it all hinges on whether or not one believes one knows the correct...” That is why I originally stated that "these types of issues are related to the Anthropic Principle, which, correctly understood..." Most people do not seem to be aware of the fact that the whole point of the argument is a "reduction to absurdity" of any fine-tuning premise: someone arguing that "My premise, combined with flawless deductive logic, leads me to conclude that I, very probably, cannot exist."
In regards to the presentation on vixra, it was assembled prior to a year-long, reading and discussion course (using the Great Books Foundation's "What's the Matter? - Readings in Physics", together with my own thoughts about these issues) that I was leading at a local, community college, last year. As the course progressed, I assembled a much longer, more detailed version of the presentation, but it is much too large to fit into the 10 mega-byte vixra limit, so I have never updated the limited, online version.
Rob McEachern
Stefan Weckbach replied on Dec. 9, 2018 @ 05:39 GMT
Robert,
thanks for the reply.
I think that until now we have, in a way, been running an inverse Chinese Room experiment, as far as the definition of your term “Free-will” and my understanding of it are concerned.
I am in the position of discussing issues like free will with you, but do not really know what *you* mean by it and how you define your term “Free-will”.
So I ask you explicitly what this term “Free-will” should mean when you use it here.
By the way, I consider it great, and well worth the time and effort, that after retirement you not only led a discussion course but also assembled a presentation for it. I think those efforts – although I do not agree with everything you state or conclude here and there – are worthwhile in that they each offer the interested public some meaningful information about aspects of the issues we discuss, so that some day the whole puzzle (picture) may be seen in a more complete manner.
Robert H McEachern replied on Dec. 9, 2018 @ 17:43 GMT
Stefan,
Here is what I mean by the two terms self-determination and free-will.
First consider a hypothetical universe, that is the only thing that exists. Whatever happens in that universe, happens because the universe it-"self" exists. Thus, it is "self-determined". This is "trivially true", because, by the initial postulate, there is nothing "else" that could be causing/determining what is happening.
Next, consider subdividing that universe into two parts. Is one part dependent, in any way whatsoever, on the other? If not, then the one part (or perhaps both) that is not dependent on the other, is also self-determined.
Next, consider the case in which neither of the above two parts are ENTIRELY self-determined; what happens to one part, is at least partially due to (influenced by) the other. This is the case noted above, with the two parts consisting of Laplace's vast intelligence and a dust-speck; if a long-range force exists, like gravity, then one part influences what happens to the other, so neither part is ENTIRELY self-determined, even though the universe as a whole (both parts together) is self-determined.
Next, consider if ANY action within either of those two, not-entirely-self-determined parts, could ever be dependent on only one part it-"self". It would have to be an action that can, somehow, negate any long-range force, that would otherwise causally-link the two parts. That may not be possible with a physical action. But it is certainly possible with a symbolic action, because symbols are not subject to physical forces. For example, if there is a tiny "self"-driving car, on the dust-speck, and the tiny car is "programmed" to stop at red-colored lights, this is a "symbolic" action, that utterly ignores (is ENTIRELY uninfluenced by) the existence of any small (not powerful enough to disrupt the symbolic action), long-range force or “physical” influence, due to the distant "vast intelligence". Hence, while “physical” actions occurring within the dust-speck can never be ENTIRELY independent of the vast intelligence, symbolic actions “may” be; symbolic actions “may” be self-determined.
If the above mentioned “programming” (which is itself an example of a symbolic action) ever becomes independent of the “other” part, then the dust-speck/car not only “may” be, but actually “is” self-determined; it is not only behaving symbolically, but is “determining” what constitutes a symbol in the first place, and how to behave towards each symbol, ENTIRELY independent of the “other” part.
Such a dust-speck/car is “free” from the “other” part (the vast intelligence); it CAN perform at least some actions (symbolic ones) that have become ENTIRELY independent of the “other”. But does it have “will”?
If some of the symbols it employs to make some decisions, come to symbolize its own internal states (symbolize it-“self”), rather than symbolizing any external states (non-self), then that is what I mean by “will”; the symbols driving the behaviors, refer to nothing at all in the physical world, but only to the symbolic realm within the thing it-“self”. That is what “will” is. It has a “mind of its own” - though not necessarily a conscious-mind. Thus, when confronted with an option, such as either turning left, or turning right, the “decision variable” being used to “determine” which way to turn, may have a value that has become “disconnected” from the physical realm. This is free-will; decisions are being made via decision-variables that symbolically represent nothing at all, either in or about, the physical world, other than the “self”. And the “other” part, Laplace’s vast intelligence, cannot even DETERMINE that such symbols even exist, much less influence their detailed behaviors.
In a nut-shell, physical laws do not dictate ALL aspects of non-physical (symbolic) behaviors. They can dictate some, such as establishing the maximum number of symbols (information storage capacity) that could possibly exist in a finite universe. But the laws do not dictate what those symbols MEAN, and thus how entities behave/act towards them. The laws DETERMINE what cannot be done, but not what WILL be done. It is the initial conditions that determine what will be done.
Symbols, unlike physical laws, may involve intentions; the laws do not “intend” anything to happen, they merely enable things to happen. But symbols may be “intended” to symbolize an associated behavior. So what happens when a single “bit-error” EVER occurs in such a symbolic system? Something unintended may happen. Thus, a system that evolved symbolic processing that “seems” to merely represent the physical world (processing sensory data, thereby enabling continued survival), may suddenly start to behave AS IF it has an entirely different intention - like representing a non-physical, internal “virtual” world. And developing an intention, disconnected from the physical world, is the beginning of something new in the world - something that merely “seemed” to have an intention, may become something that actually does have an (unintended) intention - it actively alters (whether intended or not) its own behaviors, based on its own, internal virtual world, rather than the external physical world.
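The earlier red-light example can be made concrete with a minimal sketch (an illustrative construction under invented numbers, not anything taken from the discussion itself): the rule acts on the classified symbol, not on the raw physical value, so any perturbation too small to push the input across the symbol boundary leaves the behavior exactly unchanged.

    # Illustrative only: a "stop at red lights" rule that consumes a SYMBOL.
    # Small physical perturbations that do not cross the symbol boundary
    # cannot change the decision.
    def classify(wavelength_nm):
        # Invented symbol boundary: call anything above 600 nm "red".
        return "red" if wavelength_nm > 600.0 else "green"

    def decide(wavelength_nm):
        return "stop" if classify(wavelength_nm) == "red" else "go"

    base = 650.0                                   # a red light
    for perturbation in (0.0, 1e-9, 0.3, -5.0):    # tiny "long-range" influences
        print(perturbation, decide(base + perturbation))   # "stop" every time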
Rob McEachern
Stefan Weckbach replied on Dec. 9, 2018 @ 20:03 GMT
Robert,
O.k., thanks, now I got it. Thanks for taking the time to write it down in such an easy-to-understand, step-by-step description.
I certainly have to think about the possible implications for all of this.
Isn’t it funny and remarkable at the same time that symbols have such a causal power? At first sight, no one (at least not me) would expect that in the first place. But one way or the other, it’s true, since it’s true that we are here.
Although one can ponder what “programming” means in the absence of any conscious programmer, it nonetheless seems true that a running program need not be conscious to output a result that enables some internal system to survive.
One can now ask whether or not that system has to be conscious at all. Surely the reading of some DNA does not necessitate consciousness, any more than the reading of a computer memory and the subsequent actions do. The latter was facilitated by conscious beings, the former not.
Your example (argument) with the bit-error is interesting. It is very suggestive when one first reads it (at least for me), since it contrasts a definition for a non-conscious symbolic system with the definition for another symbolic system whose bit-errors may force that system to evolve some awareness about the sudden difference (and in the longer run react to it).
Or is it just *me*, the already conscious reader, who is forced to realize this difference between the two systems and now concludes that the symbolic system with the bit-errors should have the “same” (at least rudimentary) ability… anthropocentrically speaking? Nonetheless, I think it’s worth pondering these questions further and not throwing them away a priori. You give me valuable things to ponder.
I agree with your point of view on initial conditions and physical laws. I only suspect that your thought experiment with the hypothetical universe may be too simplified, since there is the possibility that such a universe (our universe?) is not infinite in time, but itself evolved out of something other. Surely this “something other” (or another “something other”) has to be eternal at the end of the day (unless one argues that everything can come from absolutely nothing, or that everything is infinite in time), and it could also be termed a “hypothetical universe”. But there is a subtle difference, since this “something other” may have other rules (and another fundamental ontology, compared to that of matter) than the ones we human beings are used to (in our universe).
I think one should not totally exclude the possibility that such a “something other” has an ontology that is much more of the nature of symbols and information than that of “matter”. In such a world, there would be no need for any energy-or-matter exchange to change the state of something. The only things that would be exchanged would be abstract things (like information, symbols), but in the case that there are also conscious observers and actors there, there would probably also be an exchange of emotions. This scenario would be a kind of “mathematical universe” with the important difference that one has to replace the word “mathematical” by the word “symbolical” and add some observers to this universe. Since every consciousness has some personal symbols that are not exactly generalizable to other conscious agents, this “symbolical universe” would have its focus on subjectivity rather than on mathematical “objectivity”.
Well, I will leave my own speculations and ideas at that. Again thank you very much for presenting your really interesting case to me, and if you like, we can communicate further here and / or at the next essay contest (if I have something to say there at all). If you have any further annotations, I would be happy to reply.
Robert H McEachern replied on Dec. 10, 2018 @ 02:28 GMT
Stefan,
"Isn’t it funny and remarkable at the same time that symbols have such a causal power? At first sight, no one (at least not me) would expect that in the first place."
Recall that earlier, I said "It is not "Turtles all the way down." It is information all the way down." I meant that it is (two-state) symbols all the way down. A bit of information does not represent the numerical values 1 or 0. It represents a symbol, from a two-letter alphabet.
"I think one should not totally exclude the possibility that such a “something other” has an ontology that is much more of the nature of symbols and information than that of “matter”... This scenario would be a kind of “mathematical universe” with the important difference that one has to replace the word “mathematical” by the word “symbolical” "
We are that something other. Consider:
The fact that the Shannon capacity for a single bit is equivalent to the limiting case of the Heisenberg uncertainty principle implies that the most "elementary" particle is behaving like a (two-state) symbol. It is behaving symbolically, rather than physically. That is why quantum behavior seems so weird.
You are not the only one that did not expect that. Remarkable indeed.
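For reference, and as a hedged reading of the comparison being drawn, the two textbook expressions involved are the Shannon-Hartley capacity and the time-frequency (Gabor/Heisenberg) uncertainty limit; the identification of the single-bit case with the uncertainty limit is McEachern's own argument, made in his papers, and is not re-derived here.

    C = B \log_2\!\left(1 + \frac{S}{N}\right) \qquad \text{(Shannon-Hartley capacity, bits per second)}

    \Delta t \,\Delta f \;\ge\; \frac{1}{4\pi} \qquad \text{(time-frequency uncertainty limit)}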
Rob McEachern
Stefan Weckbach replied on Dec. 10, 2018 @ 07:23 GMT
Robert,
„We are that something other.”
Well, from my point of view, yes and no.
When I referred to “something other”, I referred to the universe’s history. “We” clearly are not the reason for such a history or for the universe being as it is.
If the universe is constituted by “bit”-symbols, then that’s simply the way it is. This view of reality is (at least for me) compelling not only with regard to DNA and evolution, but also on logical grounds. Hence, it’s not entirely anthropocentric, since nature seems to act according to logical principles (as far as we can know).
“It is behaving symbolically, rather than physically. That is why quantum behavior seems so weird”.
Well, for most people, I think, the weirdness of QM is entanglement. Your paper at http://vixra.org/pdf/1609.0129v1.pdf gives an interesting account of how things may be connected in the quantum world, and it’s well worth contemplating.
If I understood it correctly, there is not only a signal, but also noise. Surely “signal” and “noise” are relative terms. You write in the paper above:
“In any real experiment, the detectors will not be able to detect the existence of every particle or coin-image. Hence, the Detection-Efficiency will be less than 100%. In the simulation, it is possible to detect every particle; that is what is depicted in Figures 1 and 12. However, when only a single bit of information can ever be extracted from a received particle/image, the detection of the particle/image existence cannot be separated from the detection of its polarity; they are one and the same thing. At polarity detection angles of 90 and 270 degrees, the correlation is zero. That means there is no “Signal” to detect. There is only “Noise”.”
Have you checked the respective data of real experiments, namely that at angles around 90 and 270 degrees, the detection efficiency is the worst?
If your theory is true, then a) an elementary particle subject to such experiments has an irregular surface. The average surface of all those particles is such that, together with the crucial detection inefficiencies at angles 90 and 270, they produce the known QM correlation curves. b) If your theory is true, then such an elementary particle carries much more than one bit of information, since it has an irregular surface. c) If your theory is true, then obviously it is not only “information all the way down”, but also “statistics and hence, mathematics all the way down”.
c) then brings me back to my term “something other”. Taking your theory for granted, at the end of all the way down there should be mathematics and symbols. Nothing to object to, so far. But I ponder about the reality of your explanation for intrinsic noise. If elementary particles (at least those with which we perform the respective experiments) have irregular surfaces, these particles obviously have more than one bit of information (information that is unrecoverable in principle). So one can treat such a particle as a substitute for what we call “randomness” – since it is in principle irrelevant how much information such a particle has and how long the bit-sequence is that describes a certain tiny, tiny irregularity at a certain location of its surface. The only thing that has to be demanded is that all these particles must statistically be such that the known correlation curve is recovered (albeit sometimes, or always, only coarse-grained and not as smooth as a computer plot).
This is all very nice and somewhat compelling, but I have to annotate something. By using real, physical particles with real irregular surfaces, one generates a distinction between an abstract symbol and a physical thing. Surely one can handle the surface noise together with the detection inefficiency around 90 and 270 as comparable to a bad handwriting and therefore reduce it to a bad written symbol.
But the fact remains that this bad handwriting is intrinsically worst at these angles. One can now answer “well, that’s simply how things are”. But as I annotated above, for all your explanations to work, it is necessary that there is some mathematics behind and around it. So there has to be a mathematical reason for that bad handwriting at these angles.
Let’s now suppose we found the mathematical reason for that bad handwriting. It would be a mathematical truth, encoded by some idealized, not-badly-handwritten mathematical symbols somewhere in a platonic realm. Now my “undecidable” but nonetheless interesting (as far as I see it) question: how can perfect handwriting (platonic realm) turn into bad handwriting (physical realm)?
Robert H McEachern replied on Dec. 10, 2018 @ 15:09 GMT
Stefan,
"Surely “signal” and “noise” are relative terms." Yes and no. It is just symbol and non-symbol.
"But I ponder about the reality of your explanation for intrinsic noise." Think of it as nothing more than the inability of a maker/manufacturer, to make/produce absolutely "identical" items. Do you really suppose "mother nature" can produce a perfect anything - perfection means an infinite amount of information, is thereby being encoded into the entity. The implication would be that a single, perfect, elementary particle encodes/stores more information than all the computers in the world. And why would THAT be ELEMENTARY? Surely, an elementary particle would encode the minimum, not the maximum amount of information - that is the property that makes it elementary in the first place. But what makes a set of particles, behave as if they are identical? An observer/detector that only treats things as BEING identical, if they BEHAVE towards the detector in an identical manner. In other words, they may not BE identical, but they nevertheless respond (to the detector) AS IF they are, because the detector is “picking and choosing” what to detect. That is what symbols are all about - their defining characteristic.
"Surely one can handle the surface noise together with the detection inefficiency around 90 and 270 as comparable to a bad handwriting and therefore reduce it to a bad written symbol." That IS what is being done. Sometimes the symbol is so badly written, that the reader fails to even recover the correct NUMBER of symbols in the message. By the way, the detection process in my paper, is really nothing more than the World-War II era process for detecting a RADAR pulse - the pulse is the symbol that one is attempting to recover.
I suggest reading the comments I made to David Byrden.
“If your theory is true, then such an elementary particle carries much more than one bit of information” No. Because it only IS information (in Shannon’s theory), if it can be perfectly recovered, from each and every misshapen symbol - because that is what symbol recovery IS. That is the entire point of the final paragraphs in my paper. The coins only exhibit 1-bit, that APPEARS identical, within EVERY coin. That is what causes an observer to declare that the coins are identical, in some sense, in the first place; every misshapen “a” IS identical to a misshapen “A”, even though they obviously appear different; they are nevertheless identical to the first letter in the VALID alphabet - the only alphabet that the detector knows how to identify.
“So there has to be a mathematical reason for that bad handwriting at these angles.” The reason is simply that the rotated symbol does not match the non-rotated one. An “A”, rotated 90 degrees, looks SO different from a non-rotated “A”, that the detector, in effect, declares that it is not just another misshapen symbol, it is an invalid symbol (pure noise). So it totally ignores it.
“how can perfect handwriting (platonic realm) turn into bad handwriting (physical realm)? “ How can it not? How could an ancient, Roman, silver-smith, ever produce perfectly-identical, silver coins, for the Emperor? How can the Emperor’s calligrapher ever produce absolutely identical copies of the Emperor’s decrees? Only the symbols encoded within the coins and decrees and particles are identical - precisely because they are being TREATED identically, in spite of their obvious differences.
"Have you checked the respective data of real experiments, namely that at angles around 90 and 270 degrees, the detection efficiency is the worst?" No. I don’t have access to such data.
Rob McEachern
Stefan Weckbach replied on Dec. 10, 2018 @ 17:09 GMT
Robert,
thanks again for your answers.
I will read your comments you made to David Byrden.
“How can it not? How could an ancient, Roman, silver-smith, ever produce perfectly-identical, silver coins, for the Emperor? How can the Emperor’s calligrapher ever produce absolutely identical copies of the Emperor’s decrees? Only the symbols encoded within the coins and decrees and particles are identical - precisely because they are being TREATED identically, in spite of their obvious differences.”
I think these analogies are misleading when it comes to questions about a platonic realm. The defining feature of a platonic realm is not that all symbols look perfectly the same, but the idea behind the relationships between such symbols. The idea behind it is the perfect certainty of relationships that can have a consistent and unchangeable truth. So we can say with confidence that one can neglect the question of what the symbols in that platonic realm really “look like” (if they look like “something” at all). The interesting thing is how the symbols relate to each other and what this could say about ultimate reality.
So when you answer my question with “How can it not?”, there must necessarily be some kind of powerful idea within the platonic realm (or beyond it) to enable (and facilitate) a dynamical world where silver-smiths and calligraphers change things (or at least where things necessarily change without the presence of intelligent, conscious agents). The very term “ex-change”, for the exchange of some information, indicates this.
In other words: setting aside humanly facilitated symbols, the natural symbols (elementary particles and their properties and so on) must have come to their property of being symbols via a mathematical definition. But a mathematical definition is itself just a sequence of symbols. How can a sequence of symbols generate an entire universe, and why should it?
What should one conclude from this? Should one conclude that the emergence of a dynamical universe is somehow a mathematical imperative (meaning mathematically necessary)? Or should one conclude that the emergence of a dynamical world is merely possible, but not fundamentally necessary, and has nothing to do with mathematics?
Note that these considerations have no direct impact on what your theory says. But I nonetheless mention them here because earlier you stated “End of story. That is all there is to reality. It is not about math, it is about information acquisition.”.
That it’s all about information acquisition may be correct, but this raises some questions about nature. One such question for me is what maths is. How has its inner kernel defined what a symbol is and what a non-symbol should be?
If you answer my comment
"Surely “signal” and “noise” are relative terms."
with “Yes and no”, this only seems to be true for a conscious observer’s self-fabricated symbols, but not for those of the mathematical *and* physical landscape (if there is anything like a mathematical landscape at all).
In summary:
1) Nature works with symbols
2) Maths works with symbols
3) Something in nature has to determine what counts as a symbol for matter in the physical world (something has to determine its meaning to other matter)
4) Something within the mathematical landscape has to determine what counts as a valid, consistent mathematical relationship and what not.
5) Either nature and maths are one and the same, or they are two distinct systems (matter and abstract ideas).
6) If they are two distinct systems, how can one reconcile a world of matter that has not yet facilitated some consciousness (the universe before evolution took place somewhere) with the world of abstract ideas (which is traditionally the domain of consciousness)?
7) Without a clear and unchangeable distinction, made by nature and maths, concerning what counts as a symbol (and its meaning) and what counts as a non-symbol, everything is possible. But as I outlined in my latest essay, if everything is possible, it would also be possible that we live in a world that just popped up out of literally nothing. This would then really be the end of the scientific story (because some or all parts of our universe could easily transform into nothing again during the next blink of an eye).
Why does math TREAT the following equation as nonsense
1 electron plus 0 electrons = 10.5 electrons
and the following equation as valid:
1 symbol plus 0 symbols = 1 symbol
Robert H McEachern replied on Dec. 10, 2018 @ 17:49 GMT
Stefan,
Being is in the doing. If something never interacts with anything else, then for all intents and purposes it does not exist, as far as physics is concerned. The properties of the interactions define the things that BE.
"the natural symbols (elementary particles and their properties and so on) must have come to their property of being symbols via a mathematical definition." No, they become symbols, by simply behaving like symbols, while interacting with each other. In my paper, I used the term
Matched Filter. In signal processing, this is a process that uses one symbol, as the detector for an identical (matched) copy of itself. The paper thus provides an abstract model of how "identical" particles interact - as identical symbols, and not just how a human-built observer might mathematically construct a "detector" to detect them.
"The interesting thing is how the symbols relate to each other and what this could say about ultimate reality." Exactly. They behave like matched-filters. Not as measuring devices, as implicitly assumed by physicists.
"One such question for me is what maths is" Math is a symbolic language encoding a tiny amount of information. Consequently, it is perfectly suited to perfectly describing that tiny subset of physical phenomenon, that happen to encode a tiny amount of information. In other words, the subject matter of physics.
"Something in nature has to determine what counts as a symbol for matter" That something is having two particles interact as a matched filter - optimally suited for detecting each others existence.
"Why does math TREAT the following equation as nonsense..." Because symbolic processing DEMANDS that the processes exploit a priori information about what to look for (signal) and what to ignore (noise). Where does such information reside within a pair of particles? In their property of forming a matched filter.
Rob McEachern
Stefan Weckbach replied on Dec. 10, 2018 @ 18:26 GMT
Robert,
thanks for your immediate reply. It sounds really interesting. So thank you for sharing, and for organizing for me, what you have found out.
“Where does such information reside within a pair of particles? In their property of forming a matched filter.”
Could the fact that only particles that have been generated by SPDC, or by some former interactions, form such a matched filter be a piece of generalizable information about nature?
I think what you wrote would mean that not only SPDC creates a matched filter, but some particle interactions do as well. The latter would then allow one to conclude that interactions of identical particle types do in fact newly generate both particles (since such interactions are considered able to facilitate entanglement phenomena). Does this make sense from your perspective?
Robert H McEachern replied on Dec. 10, 2018 @ 20:08 GMT
Stefan,
I assume by SPDC you mean Spontaneous parametric down-conversion, for generating entangled pairs. I would put it this way: SPDC creates pairs of particles that together form a single, oppositely and redundantly encoded bit. Each member of each pair is subsequently subjected to a detection process, that will form a match-filter, at the one-and-only polarization angle that actually "matches" that of the particle to be detected. At other angles, it is only a partially matched-filter, with the match degrading to zero, when the particle and detector are orthogonal (at 90 degrees to each other).
By the way, communications engineers have used, what could be called "entangled signalling", for a very long time - so long, that many of the techniques have been obsolete for decades. Binary Frequency Shift Keying (FSK) is an example, in which a frequency shifted tone, can be thought of as a pair of entangled On-Off-Keyed signals (OOK): When the tone appears at one frequency (signaling a "1" bit value), it simultaneously disappears at the other frequency (signaling a "-1" bit value). It thereby can be thought of as "twin" redundant copies (entangled) of the same bit-value (with one copy negatively encoded relative to the other). Correctly processing both (with matched filters) effectively improves the signal-to-noise ratio, thereby improving the ability to correctly "call" the bit, in the presence of noise.
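A minimal numerical sketch of that binary-FSK picture (an illustrative toy with invented frequencies and noise level, not taken from any cited source): one bit is encoded as energy appearing at one tone and simultaneously absent at the other, and both redundant copies are recovered by matched filters, i.e. by correlating the received waveform against each candidate tone before calling the bit.

    # Toy binary-FSK demodulation with two matched filters. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    fs, T = 8000.0, 0.01                            # sample rate (Hz), symbol duration (s)
    t = np.arange(0.0, T, 1.0 / fs)
    f0, f1 = 1000.0, 2000.0                         # the two tone frequencies (Hz)

    def modulate(bit):
        # Energy appears at f1 for a "1" and at f0 for a "0".
        return np.cos(2 * np.pi * (f1 if bit else f0) * t)

    def demodulate(x):
        # Matched filters: correlate against each candidate tone; combining the
        # "present at one tone / absent at the other" copies is what improves
        # the effective signal-to-noise ratio.
        s0 = abs(np.dot(x, np.cos(2 * np.pi * f0 * t)))
        s1 = abs(np.dot(x, np.cos(2 * np.pi * f1 * t)))
        return 1 if s1 > s0 else 0

    bits = rng.integers(0, 2, 20)
    received = [modulate(b) + rng.standard_normal(t.size) for b in bits]
    decoded = np.array([demodulate(x) for x in received])
    print("bit errors:", int(np.sum(decoded != bits)))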
An entangled pair forms a redundant encoding of the same bit. That is what entanglement IS. Hence, it makes no sense at all, to attempt to measure each member of an entangled pair, in a different manner (Mis-matched-filtering). But that is exactly what is deliberately being done in every Bell test. That is the problem. That is what creates the peculiar correlations - mis-matched-filtered detection, rather than matched-filtered detection, of the paired, redundant values.
Rob McEachern
Stefan Weckbach replied on Dec. 10, 2018 @ 20:59 GMT
Robert,
„I assume by SPDC you mean Spontaneous parametric down-conversion”
Yes.
“Each member of each pair is subsequently subjected to a detection process, that will form a match-filter, at the one-and-only polarization angle that actually "matches" that of the particle to be detected. At other angles, it is only a partially matched-filter, with the match degrading to zero, when the particle and detector are orthogonal (at 90 degrees to each other).”
This would mean that at zero, no particles can be detected. Because you established the identity between polarization and detection. But at zero, particles can indeed be detected (with random polarizations).
“Binary Frequency Shift Keying (FSK) is an example, in which a frequency shifted tone, can be thought of as a pair of entangled On-Off-Keyed signals (OOK): When the tone appears at one frequency (signaling a "1" bit value), it simultaneously disappears at the other frequency (signaling a "-1" bit value). It thereby can be thought of as "twin" redundant copies (entangled) of the same bit-value (with one copy negatively encoded relative to the other). Correctly processing both (with matched filters) effectively improves the signal-to-noise ratio, thereby improving the ability to correctly "call" the bit, in the presence of noise.”
Isn’t this just a matter of a Fourier Transformation – and therefore not necessarily saying something about quantum mechanical entanglement?
“An entangled pair forms a redundant encoding of the same bit. That is what entanglement IS.“
This would mean that “entanglement” only occurs in a system with just two elementary particles that can each deliver only one bit of definite information. But there are multi-particle systems that can have delicate entanglement. From your point of view this is explainable by what you have stated so far about your theory. The only missing ingredient of your theory is an exactly defined physical mechanism for the case of particles that have formerly interacted and are now entangled / disentangled.
Robert H McEachern replied on Dec. 10, 2018 @ 21:37 GMT
Stefan,
"This would mean that at zero, no particles can be detected." No. Because of the noise. Think of the coin as being 50% polarized and 50% unpolarized. The unpolarized component (the noise) can often be detected at every angle.
"Isn’t this just a matter of a Fourier Transformation." No. Fourier analysis is entirely inappropriate to information-recovery, in most cases. Modulation analysis is required - something few physicists are familiar with. It is an entirely different type (non-superposition) of mathematical model, for a continuous function.
"This would mean that “entanglement” does only occur in a system with only two elementary particles that each can only deliver one bit of definite information." No. It means the entanglement used in standard Bell tests is of a single-bit. But it is conceptually trivial to entangle multiple bits, by simply constructing each multi-bit entity, from multiple single-bits, arranged in the same (entangled) pattern.
"The only missing ingredient of your theory is an exactly defined physical mechanism for the case of particles that have formerly interacted and are now entangled / disentangled." That is trivial - just separate a pair after they have been matched. In the noise-free case think of two, polarized coins, with their semi-circle polarization components vertically offset, like two 3D puzzle pieces. When matched, they snap together perfectly. But not when they are rotated so as to be unmatched.
Rob McEachern
Stefan Weckbach replied on Dec. 10, 2018 @ 23:25 GMT
Robert,
„Think of the coin as being 50% polarized and 50% unpolarized. The unpolarized component (the noise) can often be detected at every angle.”
What does “polarization” mean in the context of a single elementary particle with an irregular surface, subject to Bell tests?
Robert H McEachern replied on Dec. 11, 2018 @ 21:52 GMT
Stefan,
It means there is a polarized signal underlying the noise and blurring (band-limiting), as described on pages 2-3 of my paper. This underlying polarization can be reliably detected via matched-filtering.
In the case of two entangled particles, the underlying polarization of the two particles is identical, except for a change of sign: (polarization of coin#1) = -(polarization of coin#2). However, the noise on the two coins is not identical. As noted on page 4, "Identical Particles", if the noise is also made to be identical, so that (coin#1) = -(coin#2), then the quantum correlation curve will disappear and will be replaced by the classical correlation curve.
Recall that the whole point of the EPR thought experiment, is to get around the fact that the "original state" of an elementary particle cannot be measured twice, if the first measurement alters the state, before it can be remeasured. The way around this problem, was to create two supposedly identical (entangled) particles (identical except for a change in sign), and perform one measurement on each particle. What my demonstration shows, is that the particles cannot actually be identical, in the sense of every pixel in an image of one particle, will have the exact same value as the corresponding pixel in an image of the second particle; such "identical" particles fail to produce the quantum correlation curve; they are "too identical". Only particles in which only the underlying polarization is identical, will reproduce the quantum correlation curve, and even they will do so, only if the additive noise and band-limiting (blurring) is such, that it corresponds to their being only a single-bit-of-information (as that term is defined by Shannon's Capacity theorem), recoverable from a measurement of a coin.
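A rough sketch of the kind of "noisy coin" numerical experiment being described (a simplified reconstruction with invented parameters; McEachern's paper specifies the actual procedure, noise level and thresholds): each coin carries a two-valued polarization pattern plus independent noise, each detector matched-filters against an ideal template at its own angle, weak responses are discarded as non-detections, and the product of the called bits is averaged versus the relative detector angle. This toy version is not tuned to reproduce any particular correlation curve.

    # Simplified "noisy polarized coin" sketch; parameters are invented and the
    # output is not claimed to match the quantum correlation curve.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pix = 256
    phi = np.linspace(0.0, 2 * np.pi, n_pix, endpoint=False)

    def pattern(theta):
        # Ideal polarization: +1 on one half of the coin, -1 on the other half.
        return np.where(np.cos(phi - theta) >= 0, 1.0, -1.0)

    def make_pair(noise=1.5):
        theta = rng.uniform(0, 2 * np.pi)
        a = pattern(theta) + noise * rng.standard_normal(n_pix)
        b = -pattern(theta) + noise * rng.standard_normal(n_pix)   # anti-correlated twin
        return a, b

    def detect(coin, angle, threshold=0.1):
        # Matched filter against a template at the detector's own angle;
        # responses below threshold count as "no detection".
        score = np.dot(coin, pattern(angle)) / n_pix
        if abs(score) < threshold:
            return 0
        return 1 if score > 0 else -1

    for delta in np.deg2rad([0.0, 22.5, 45.0, 67.5, 90.0]):
        products = []
        for _ in range(2000):
            a, b = make_pair()
            ra, rb = detect(a, 0.0), detect(b, delta)
            if ra != 0 and rb != 0:                  # keep only coincident detections
                products.append(ra * rb)
        print(round(float(np.rad2deg(delta)), 1), round(float(np.mean(products)), 3))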
Also recall that the original EPR paper was not concerned with discontinuous variables, like spin, that only take on one of two values (up or down). It dealt with the continuous variables appearing in the Heisenberg Uncertainty Principle, like position and momentum. Bell's theorem has nothing to say about that case. But the fact that Shannon's Capacity expression for a single bit, is exactly equal to the Heisenberg uncertainty principle, explains why even the paired continuous variables, like position and momentum cannot be simultaneously measured. In other words, the "single bit of information" explains the inability to make two uncorrelated measurements, in both the continuous and the discontinuous variables cases.
Rob McEachern
Stefan Weckbach replied on Dec. 12, 2018 @ 06:47 GMT
Robert,
thanks again for your reply.
I understand the information-theoretic aspects of your theory and the functioning of a matched-filter. And I certainly do honor your insight that Heisenberg’s uncertainty principle is equivalent to Shannon’s Capacity expression.
Nonetheless there are some things I want to annotate. Firstly, since you state that the original EPR paper only dealt with continuous variables, one could assume that it has less to say about entanglement of electrons and spin.
For the case of continuous variables and the property of polarization, your theory says something. Now, is this sufficient to disprove quantum mechanical entanglement? I am suspicious about that on the basis of the following lines of reasoning:
1) For the polarization case you examined in your paper, you must assume several things to be true: an irregular surface of photons. Statistically, the irregular surface of the photons must be such that, together with the property of polarization, an ensemble of photons reliably reproduces the famous Bell curve.
2) I found no physical element of reality identified in your papers for your assumption that polarization must be physically understood such that one half of a sphere has polarization x and the other half has the opposite polarization. What physical element of reality does the term “polarization” refer to? Some dipole property (electric, magnetic) or something else?
3) An irregular surface of some sphere suggests that such a sphere could be thought of as some multipartite entity. In the case of photons, one can imagine (abstractly) such an irregular surface as a kind of landscape of little mountains with their respective valleys. What are these foldings made of? If these foldings exist in real physical 3D space, they should consist of “something” – right?
4) For the double-slit case the properties of a photon must be such that together with the slit architecture, they mimic a wave-phenomenon. The photons as “radio-frequency carriers” (as you termed them earlier) must be such that, even in the case that they are observed after the slits, they produce no interference pattern. Additionally, in the case that a photon passes the slit that is *not* observed, such photons should also not contribute to any interference pattern.
My conclusion so far is that you give no detailed physical explanation of the details involved in the mechanics outlined in 1) - 4), only some coarse-grained hints in terms of information-theoretic considerations, but no precise physical mechanics for all this to happen. Either no such precise mechanics exists (because we may live in a virtual reality), or such a precise mechanics does exist and we can continue to speak meaningfully about particles, properties of particles, interactions, and causes and effects.
My conclusion so far is also that what you outlined in your theory for the case of polarized photons is therefore *itself a matched-filter* – suited to the case of Bell tests with polarized photons. This would mean that your theory was invented to suit the case; it has not “discovered” something independent of human imagination.
A matched-filter matches something by neglecting some details (otherwise it wouldn’t be a filter). My impression is that your theory does “filter out” (neglect) the physical details needed to convince (me). Remember that in the double-slit case, the photons have to “behave” such that almost all of them do not land on certain areas. The natural question therefore is what *physically* determines such a single photon to land where it landed. There has to be a physical reason for that, as long as we speak about particles and some really existing slit architecture. Moreover, there also has to be a physical reason for the photons to land at the well-known areas when at least one of the slits is observed.
To justify why photon x landed at point y, you would have to explain in physical terms why it could not have been any other place than point y – for that photon. And you would have to explain the same thing for all the other photons that pass through the double-slit experiment.
It is trivially true that one can model the dynamics of some Bell-type polarization experiment (with photons) in a computer such that the experimental results (and the Bell curve) are reproduced. Surely one can also do this for the EPR-B experiment. Again, the question is what should necessarily count as the real elements of reality, the real physically detailed mechanisms behind the results of such experiments that have been conducted in reality.
Since it is trivially true – by programming already done – that one can match the results of real physical experiments by modelling them in a computer, these programs are all designed to be matched-filters: they filter out the question about the real physical causes and effects in the physical world that execute the real physical phenomena. You may say that these models *don’t* filter out these questions but *answer* them (at least where *your* theory is concerned :-). But I do not yet see answers (in your theory) to the questions posed above about the physical details – and the reason why photon x lands at a certain position y rather than one of the many other possible ones. If there are no “other possible ones” and photon x *necessarily* had to land at position y, please clarify why this should be so.
Robert H McEachern replied on Dec. 12, 2018 @ 15:58 GMT
Stefan,
“one could assume that it has less to say about entanglement of electrons and spin.” David Bohm adapted the EPR argument to the case of spin, in his 1951 book “Quantum Theory”, precisely to get around the issue of not being able to ever perform a testable experiment with continuous variables. Bell’s theorem is based on Bohm’s adaptation.
“is this sufficient to disprove quantum mechanical entanglement?” Bell’s theorem is not a statement about quantum mechanics. It is a statement about classical mechanics; that physicists need not bother to ever look for a classical (hidden variable) system that can reproduce the observed correlations, because such classical systems, can be entirely ruled-out theoretically, via Bell’s theorem. This theorem is the ONLY thing that gives ANY significance to the concept of “entanglement”. If the theorem is not valid, then the fact that some systems are entangled, has no more deep significance than the fact that some skies are blue.
“I found no physical element of reality identified in your papers” The paper specified a classical-physical element, but not a quantum-physical element, because the latter is not relevant to the issue of the validity of Bell’s claim, that CLASSICAL systems CANNOT behave the way quantum systems do.
“For the double-slit case the properties of a photon must be such that together with the slit architecture, they mimick a wave-phenomenon.” David Bohm described how that works, in detail, in his 1951 book, in the chapter dealing with Scattering Theory. The only thing he could not explain is why the scattering is discontinuous rather than continuous. And that is due to its being mediated by a detection process exquisitely sensitive to the recovery of a single bit of information, indicative of the detection itself. This is what “detection” IS.
“it has not “discovered” something independent of human imagination.” Of course not. That is the defining difference between the “map” and the “territory”: No map (description of reality) can ever be independent of the humans that created the map. Our maps are the only things we can ever actually know, with certainty.
“The natural question therefore is what *physically* determines such a single photon to land where it landed.” Read Bohm’s book. The Fourier transform of the slit’s geometry (which is entirely independent of the properties of any of the things passing through the slits) contains large “moguls” in the EM field within the slits that deflect particles, just as moguls on a ski slope deflect skiers. The moguls are caused by the band-limiting of the Fourier transform (Gibbs phenomenon) describing the slit geometry.
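To illustrate the kind of band-limiting being referred to, here is a rough numerical sketch (Python, with purely illustrative numbers of my own choosing, not taken from Bohm’s book): truncating the Fourier description of an ideal slit produces overshoot and ripples (the Gibbs phenomenon) near the slit edges, which play the role of the “moguls” in this argument.

import numpy as np

# Illustrative sketch only: band-limiting the Fourier description of an ideal
# slit produces Gibbs ripples ("moguls") near the slit edges.
x = np.linspace(-4.0, 4.0, 4001)                # transverse position, arbitrary units
slit = np.where(np.abs(x) < 1.0, 1.0, 0.0)      # ideal open slit of half-width 1

spectrum = np.fft.fft(slit)
freqs = np.fft.fftfreq(len(x), d=x[1] - x[0])
spectrum[np.abs(freqs) > 2.0] = 0.0             # keep only low spatial frequencies (assumed cutoff)
reconstruction = np.real(np.fft.ifft(spectrum))

# Instead of being flat inside the slit, the band-limited reconstruction
# overshoots and oscillates near the edges - the Gibbs phenomenon.
print("maximum overshoot:", reconstruction.max() - 1.0)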
“they filter out the question about the real physical causes and effects in the physical world that execute the real physical phenomena” No. They are intended to describe only the CLASSICAL causes and effects, that produce the observed behavior - the very ones that Bell claimed cannot possibly exist. The quantum causes, as interesting as they may be, are entirely irrelevant to the conclusion that Bell derived. Experiments demonstrate that quantum systems ARE sufficient to produce the observed correlations. Bell's theorem claims to have proven that classical systems cannot possibly be sufficient to produce the same correlations. That claim has been falsified.
Rob McEachern
Stefan Weckbach replied on Dec. 12, 2018 @ 19:05 GMT
Robert,
yes, Bell’s theorem is based on Bohm’s adaptation.
But this is not what’s at stake.
“Bell's theorem claims to have proven that classical systems cannot possibly be sufficient to produce the same correlations. That claim has been falsified.“
That’s also not at stake, because – as I wrote in my last post – the fact that you can produce the same correlations by introducing several assumptions does not automatically falsify quantum mechanical entanglement. But you are right: the claim that classical systems cannot *possibly* be sufficient to produce the same correlations may indeed be falsified. I think, though, that this hinges on what one considers *possible* in the microworld.
Photons with irregular surfaces may well be possible, as may all the other assumptions that Bohm introduced (pilot wave, potential, etc.).
What is at stake is that, due to the mathematical formalizability of the measurement correlations (results), one is always able to construct a model that produces the same correlations. Given Moore’s theorem (see my latest essay) and some experience with such different models, it is hard to see why any one of these models should explain the real circumstances (of the experiments we are talking about) better and more realistically than any other of them.
As I recall, Bohm’s pilot wave theory is considered to be highly non-local, so it should not fall within the class of models that aim to explain the experiments in locally realistic, classical terms.
““it has not “discovered” something independent of human imagination.” Of course not. That is the defining difference between the “map” and the “territory”: No map (description of reality) can ever be independent of the humans that created the map.”
It should have been clear that with “independent of human imagination” I did not mean that “maps” floated around before the first conscious observer appeared in the universe. I simply meant that if your theory is true, then nature behaves as your theory says. And if your theory is true, then nature behaves that way independently of whether or not you or anyone else has built such a theory.
““they filter out the question about the real physical causes and effects in the physical world that execute the real physical phenomena” No. They are intended to describe only the CLASSICAL causes and effects, that produce the observed behavior - the very ones that Bell claimed cannot possibly exist.”
I never stated in my last post that I wish to read from you that your theory offers some non-classical causes. But as far as I can see, the classical causes that I wish to read about are printed in the book by Bohm that you mentioned. Hence two important questions (despite the fact that Bohm's theory is considered to be non-local...):
Is your theory identical with what Bohm’s book says?
Is your theory local or non-local?
“Our maps are the only things we can ever actually know, with certainty.”
I agree, and that is really a problem when one is discussing issues like free will or entanglement in the quantum world. We do not know the answers to these questions with certainty; we do not even know objective “probabilities” for the answers being this or that. We only construct subjective probabilities on the basis of some a priori “knowledge” (= premises about which details of an issue are thought to exist or not to exist) and afterwards carry them with us as “objective” probabilities (“objective” = independent of our maps).
Robert H McEachern replied on Dec. 12, 2018 @ 22:01 GMT
Stefan,
"As I recall, Bohm’s pilot wave theory is considered to be highly non-local, so it should not be within the class of those models that intend to explain the experiments locally-realistic and in classical terms." Bohm's 1951 book was written before he developed his pilot wave theory. His scattering theory has nothing whatsoever to do with waves of any type, pilot, local, non-local or otherwise - it is an account of how purely local, classical, particle scattering can result in what appears to be "interference".
"Is your theory identical with what Bohm’s book says?" Except for the fact that Bohm offers no explanation for why interactions between particles and scattering-potentials, are discrete, rather than continuous.
"Is your theory local or non-local?" Local, just like Bohm's classical, scattering theory.
"this fact does not automatically falsify quantum mechanical entanglement" Nothing ever will. It is a real phenomenon, being attributed to an absurd cause. I am merely attempting to point-out that Entanglement is a trivial phenomenon, with a common-sense cause: nothing more than producing entities that have a KNOWN a priori relationship to each other, and then subsequently misinterpreting that a priori known relationship, as though it was entirely unknown and just "revealed" via some measurement that triggered some spooky action at a distance. It is only slightly more subtle, than knowing Alice's ball must be white, if Bob's ball is black. The only difference is the balls or coins now have two-colors (are polarized and consequently in a "superposition") plus noise and band-limiting such that any measurement can reliably reveal only a single bit of information.
Rob McEachern
Georgina Woodward replied on Dec. 12, 2018 @ 23:36 GMT
"The only difference is the balls or coins now have two-colors (are polarized and consequently in a "superposition") plus noise and band-limiting such that any measurement can reliably reveal only a single bit of information." Robert
I think 'spin' is different from permanent states such as colourings of an object. This is what makes it different from the colouring of socks: 'spin' is the result of a behavioural characteristic that can change when compelled to by the environment to which the object is exposed. Either it already has compatible behaviour and there is no change, or it changes to become compatible. If the same test orientations are applied to each member of the pair, the correlation (opposite-ness) of the pair remains.
Georgina Woodward replied on Dec. 13, 2018 @ 01:01 GMT
It is more precise to say: it either already has a compatible alignment of behaviour and there is no change, or it changes to become a compatible alignment. The behaviour is not really changing, but how it interacts with the applied field can and will change, if need be.
If the spin-ups are collected and retested with the same orientation of the field, they will be spin up again, making it seem a fixed characteristic of the particles; but not necessarily so, as I have explained. A new orientation of the field, necessitating change, gives a 50% up and 50% down result. It looks random, but it is not, since the opposite-spin correlation for the same field orientation remains.
Stefan Weckbach replied on Dec. 13, 2018 @ 06:24 GMT
Robert,
What is spin?
How does your explanation of polarization explain the differences between linear and circular polarization?
Robert H McEachern replied on Dec. 13, 2018 @ 16:01 GMT
Georgina,
"I think'spin' is different from permanent states such as colourings of an object." Imagine a circle, divided into two semi-circles, one colored red and the other colored blue. Now imagine the dividing line between those colors, rotated to a 41 degree angle, relative to your imagined point of view. Is red ENTIRELY "up" or "down"? If the only permitted answer is either "yes" or "no", which is it? That is what is different about such a "colouring". When you are forced to "call it" one or the other, you are being forced to call-it something that it is not. That is the problem. It is always a "superposition" of the two colors, regardless of what you or anyone else have decided to call-it. There is no mystery why someone else, viewing it from a different angle, might sometimes "call-it" the same color that you do, and sometimes call it a different color. And there is also no mystery about why the statistical correlations between these different "calls" are what they are.
Stefan,
Spin is the label attached to an observable phenomenon that is analogous to classical angular momentum, but that only yields one of two values (a single bit of information) when a physicist attempts to determine its value.
"How does your explanation of polarization explain the differences between linear and circular polarization?" The two-semi-circle coloring of linearly polarized coins is not the only possible way to subdivide and color the surface of a coin (AKA encode information into the color distribution on the surface).
Rob McEachern
Stefan Weckbach replied on Dec. 13, 2018 @ 19:23 GMT
Robert,
again, thanks for the answer.
Surely you know that photons can have an orbital angular momentum (OAM). Zeilinger et al. have performed some interesting experiments entangling one photon’s OAM with the partner photon’s polarization.
See the paper at https://www.pnas.org/content/pnas/113/48/13642.full.pdf
The resulting detection patterns, at least for OAM with quantum numbers up to 10 can be found in Fig. 6 here
https://robertfickler.files.wordpress.com/2018/03/1-popscience-mit-lichtschrauben-ans-quantenlimit.pdf
How does your theory explain these results?
Please do not misunderstand me: I think your theory may explain these results in a locally realistic way (as far as we can have any certainty about such issues). But independently of that, I simply want to figure out what’s the difference between a photon and an electromagnetic field (since “field” and “photon” seem to be two distinct entities) – or at least what your theory says about this question.
Robert H McEachern replied on Dec. 13, 2018 @ 22:33 GMT
Stefan,
"How does your theory explain these results?" In principle, anything can be entangled with anything. Entanglement just means that there is an a priori known relation between the two entities, such that if you know the correct information decoding procedure, you can deduce the state of one entity, from the detected state of the other.
"what’s the difference between a photon and an electromagnetic field (since “field” and “photon” seem to be two distinct entities)" What is the difference between a wave "field" on the surface of a lake, and a water molecule? Classical fields are just observable interactions of large numbers of undetected particles. There is no reason to suppose quantum fields are any different in this regard. The real question is why are quantum phenomenon quantized, rather than seeming to be continuous as classical phenomenon are. And the answer is, when the information content of an interaction, has been reduced all the way down to the least possible number of bits, there is no longer any logical possibility, for it to be anything other than quantized; interacting particles can either detect each other's existence (and thus interact), or not.
Here is a quote from Bohm's chapter (21.22) on scattering: “If there are enough successive deflections, the scattering process will begin to seem continuous, and it will approach a classical behavior. Thus, we see in another way why a strong force tends to produce a classical behavior; also we see how the apparently continuous classical deflection arises, despite the indivisible nature of the elementary processes of deflection.”
Rob McEachern
Stefan Weckbach replied on Dec. 14, 2018 @ 02:26 GMT
Robert,
thanks again sincerely for your answers.
I am now trying to see the bigger picture. As you wrote, the large moguls in the EM field within the slits are caused by the Fourier transform of the slit’s geometry.
So I guess that, due to an EM field within the slits, there should be some (many?) photons present in the slits when the probing photon is sent towards them. The deflection of the probing photon then comes about by interaction with other photons. The EM-field photons are arranged spatially – or better said, Fourier-transformed – such that these interactions lead to certain points on the screen.
Now I ask myself: if the analogy with the “wave” field on the surface of a lake is valid, what about thermal photons in the room where a double-slit experiment is performed? There should be many of them in the region before and after the double-slit aperture. How do they alter the path of the probing photon? I think their distribution should be such that, on average, an approximately straight beam line results – with some exceptions that cause the probing photon to miss the slit position altogether.
“In principle, anything can be entangled with anything. Entanglement just means that there is an a priori known relation between the two entities, such that if you know the correct information decoding procedure, you can deduce the state of one entity, from the detected state of the other.”
I think I am finally forced to accept this. Together with what I once discussed with Peter Jackson, a new picture of entanglement could arise out of this.
What is your opinion of Peter Jackson’s model of spin-entangled particle twins? Some things about it are unclear to me, and I have problems with Peter’s “step-by-step” explanations, which are very difficult to understand and to translate into statements about the precise physical state of affairs; otherwise I would ask him personally. But I also want to know whether or not your theory differs from what Peter Jackson has modeled.
Another question: given all that I now believe I have understood about your scheme of locally realistic “entanglement” (as used in the respective experiments), there are some necessities for it to work:
1) The source produces twin particles with the same spin directions, but (following Peter Jackson’s account), because a certain hemispherical distribution of “spin” is the same for both twin particles and because of its delicate spatial orientation (away from the source), the effect of antiparallel correlations is realized.
2) The source produces twin particles with opposite directions of OAM.
3) The source produces twin particles that both have irregular surfaces at certain degrees on the “sphere”, to account for the randomness (noise) at certain relative angles that could be measured (90° for the EPR-B experiment).
4) We cannot know in advance what spin orientation the source will generate for a certain twin-pair (+ / - or vice versa).
5) Points 4) and 1) together are the real reasons for the suggested spooky action at a distance, since the lack of information about which value of the property (spin up or down) is attributed to each of the photons goes “all the way down”, until the experimental run is completed.
6) The “freedom-of-choice” loophole contains the assumption that hidden variables do not depend on the measurement settings. Without believing that assumption to be true, one cannot consistently average over all twin pairs when establishing a Bell-type inequality and/or when taking systematic and random errors into account to derive an error-corrected result.
Many questions, I know, but necessary to better understand your theory.
Robert H McEachern replied on Dec. 14, 2018 @ 15:11 GMT
Stefan,
Photons are not the only particles that interact electromagnetically. So do electrons. And the slit structure is packed with electrons that exhibit induced fields (perhaps produced by "virtual" photons, understood to mean real photons that exhibited no interaction previously, but are being "induced" to interact at the present moment), whenever any other particle that interacts via EM fields approaches the slit structure. The Fourier transform of the slit structure specifies the positions of all those electrons, not individually, but in an average sort of way - like specifying their density. I am of the opinion that it is the bandwidth of this "inducing" phenomenon that limits the bandwidth of the "effective" transform of the slit geometry. The moguls exist because the induced field cannot be formed instantaneously.
I am not familiar enough with Jackson's model to really comment on it, but what you have outlined makes sense - but does not go far enough.
The original reason EPR proposed an entanglement thought experiment was to counter Heisenberg's claim that the reason one cannot simultaneously measure both variables in the uncertainty relation is that one measurement perturbs the entity being measured, such that a second measurement will not accurately reflect the original, unperturbed state. EPR stated that this cannot be the cause, because it should then be possible to create two identical (entangled) particles and measure one variable from each. But quantum theory suggested that would not work either, so there must be some other cause for the inability to measure both variables.
So, returning to why Jackson "does not go far enough": a two-colored, polarized object, like either a coin or a sphere, is not sufficient to produce the observed quantum correlation - noise and band-limiting are also necessary. I have previously described those as being "intrinsic to the particles" themselves. But as in the case described above for the slits, the band-limiting at least is probably a property of the interaction (detection) process itself, rather than of the interacting entities per se; real interactions do not occur instantaneously.
Rob McEachern
Stefan Weckbach replied on Dec. 14, 2018 @ 18:53 GMT
Robert,
again thanks for your answers.
What you wrote sounds plausible and I will try to obtain Bohm’s book.
Your note that “real interactions do not occur instantaneously” is interesting. It sounds natural and is in line with common-sense experience. But I would nonetheless like to doubt it for a moment.
If I transfer my experience of Newtonian mechanics into my picture of the behaviour of the quantum domain, then it is natural to assume that interactions, and especially the mobilisation of some energy and its transfer does need a certain amount of time.
In the worst case, namely that there are interactions all the way down with no end beyond QM, surely no interaction that we see in the microworld could be possible at all, because it would require an infinite amount of time and energy. So this scenario cannot be the case.
On the other hand, interactions need forces, and forces between certain fundamental particles like protons and neutrons are in turn supposed to need exchange particles (gluons). The question arises whether or not the interaction with those gluons again requires a force (and therefore some classical time). If yes, one could go on and ask whether or not these interactions with gluons in turn need some other exchange particles to be possible.
It could well be that at a certain level of depth (concerning particles), the terms “force” and “time” no longer make sense, because both are no longer there. One way or the other – be it with a finite number of levels of matter resolution or with an infinite number of such levels – there have to be levels where no “forces” and no “time” in the classical sense are present.
Robert H McEachern replied on Dec. 14, 2018 @ 20:41 GMT
Stefan,
"especially the mobilisation of some energy and its transfer does need a certain amount of time." This is what Shannon's Capacity Theorem is all about. Think about it. The theorem specifies the relationship between perfectly recoverable information content (number of bits) and three parameters; energy (in the form of the signal-to-noise ratio) time-duration, and the latter's Fourier transform pair - bandwidth.
In other words, the transfer of information (like the detection of something's existence) requires the receipt of a certain amount of energy, within a certain amount of time, and with a certain "response time" (bandwidth) of the system.
The derivatives in differential equations have infinite bandwidths. As such, they can only represent an idealized model of a real process. Communications engineers have learned to deal with real bandwidth limitations, in ways unimagined by physicists. That is what Shannon's Information theory is all about, and that is what is missing in quantum physics.
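As a back-of-the-envelope illustration of that relationship (all numbers below are assumptions chosen only for the example): Shannon's capacity C = B*log2(1 + S/N) caps the number of perfectly recoverable bits in an observation time T at T*C, so energy (S/N), time and bandwidth jointly set the limit.

import math

bandwidth = 1.0e3   # B: response bandwidth in Hz (assumed)
snr = 1.0           # S/N: signal power over noise power (assumed)
duration = 0.01     # T: observation time in seconds (assumed)

capacity = bandwidth * math.log2(1.0 + snr)   # bits per second
print(duration * capacity)                    # -> 10.0 perfectly recoverable bits, at most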
Rob McEachern
Stefan Weckbach replied on Dec. 15, 2018 @ 07:41 GMT
Robert,
thanks again for your reply.
“The derivatives in differential equations have infinite bandwidths. As such, they can only represent an idealized model of a real process.”
I agree.
“That is what Shannon's Information theory is all about, and that is what is missing in quantum physics.”
This may be the case, and I would definitely not exclude it.
But, on the other hand, the picture of a photon as a particle seems to miss something too, since it is very hard to explain Mach-Zehnder single-photon interference with the classical picture of a particle (photon) and scattering. In fact, I have never come across an explanation of such experiments in terms of particles and scattering, and therefore your take of looking and interpreting them would be interesting to me.
In case you are not familiar with Mach-Zehnder single-photon interference, here are the needed resources:
Paper: http://www.liceolocarno.ch/Liceo_di_Locarno/Internetutti/ferrari/PDF/vari/ValerioAJP.pdf
Video: https://www.youtube.com/watch?v=dhMrrmlTXl4
Robert H McEachern replied on Dec. 16, 2018 @ 15:08 GMT
Stefan,
"your take of looking and interpreting them would be interesting to me." UMOP and reversing cause-and-effect. That is my interpretation of the analysis (and misinterpretation) of the thought experiment described in the .pdf file that you linked to.
UMOP is an acronym for Unintentional Modulation On Pulse. Many years ago, people analyzing RADAR pulses began to wonder if it would be possible to distinguish between the similar-appearing pulses produced by different RADAR emitters. For example, are pulses produced by different serial-numbered transmitters of the same model type distinguishable from one another? The transmitters were not INTENDED to produce distinguishable pulses, but there might, nevertheless, be some unintentional modulation on each pulse that is distinguishably different and deterministically repeatable from one serial number to the next. It turned out that that is often the case.
UMOP is what is being detected by the "matched filters" in my paper reproducing quantum correlations. Even though each coin (pulse) has the same signal (intentional modulation) and noise (unintentional modulation) in a statistical sense, the noise (unintentional modulation) is nevertheless recognizably different from one coin to the next, enabling a deterministic detection response. This is what my comments concerning "identical particles" are about.
The paper that you linked-to states that: "transmission and reflection are random (to the extent that this means something, one can assume that everything is always determined by parameters we cannot control. However, this deterministic option will have no influence on what follows)."
That is a false assumption. The fact that a transmitter does not control its unintended modulations does not imply that such modulations do not exist; they can be detected and employed to "influence" the behavior of a receiver/observer. Devices like beam-splitters are not RANDOMLY transmitting or reflecting pulses/photons. Rather, they are deterministically responding to the UMOP.
In the last section of the paper, the authors discuss the "coherence length of the laser" and "the heart of quantum interference: indistinguishability". This is where they reverse cause and effect: the pulses/photons are not distinguishable by virtue of having taken different paths; rather, they were routed onto different paths by virtue of being distinguishable via the UMOP. That is what the coherence length is all about - how long you have to wait until the UMOP on one pulse becomes distinguishable from that on another and thereby enables the router (beam-splitter) to direct them onto different paths.
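A toy numerical sketch of that idea (my own illustrative parameters, not taken from the linked paper): give two pulses the same intended modulation but tiny, opposite, unintended frequency offsets, and they are effectively indistinguishable over a short observation time yet clearly distinguishable over a long enough one - a stand-in for the coherence length.

import numpy as np

t = np.linspace(0.0, 1.0, 10000)
pulse_a = np.cos(2 * np.pi * (100.0 + 0.3) * t)   # intended 100 Hz carrier, +0.3 Hz UMOP
pulse_b = np.cos(2 * np.pi * (100.0 - 0.3) * t)   # intended 100 Hz carrier, -0.3 Hz UMOP

def similarity(x, y, t_max):
    """Normalized, matched-filter-style correlation over a limited observation time."""
    m = t <= t_max
    return np.dot(x[m], y[m]) / np.sqrt(np.dot(x[m], x[m]) * np.dot(y[m], y[m]))

print(similarity(pulse_a, pulse_b, 0.05))   # ~0.99: effectively indistinguishable
print(similarity(pulse_a, pulse_b, 1.00))   # much smaller: the UMOP has become visible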
Rob McEachern
Stefan Weckbach replied on Dec. 16, 2018 @ 20:24 GMT
Robert,
It is clear that one can take noise (“unintentional modulation”) for a signal and a signal (“intentional modulation”) for noise. For some observers this is relative, depending on their a priori knowledge. The question is whether or not this is also the case for the inanimate physical world.
If one adopts your definition of noise as an intrinsic property of the surface of each coin – namely, as you wrote in your paper (http://vixra.org/pdf/1609.0129v1.pdf, p. 3), as an irregular surface – then it can hardly change if the experimental setup changes.
So, if you look at figure 3, all particles arrive at detector 2 (D2). This means that detector 1 (D1) does not register any photons – or, if any at all, very, very few in comparison to detector 2. This further means that the second beam-splitter does not reflect photons from the lower path and does not transmit photons from the upper path.
If you look at figure 4, where the path length of the upper path has been made a little bit longer, the situation changes: now detector 1 can (depending on the additional length of the upper path) detect, say, 30% of all photons that went through the experimental setup. Detector 2 then detects approximately 70% of them.
You can’t explain these different results of figure 3 and figure 4 by some intrinsic noise of a photon, since the only thing that has changed is the path length. Intrinsic noise stays the same, on average, for all the photons that will go through the experimental setup. These photons are not produced by the source with some a priori knowledge of whether they will be used in an experiment like figure 3 or figure 4. Moreover, beam-splitter number 2 also stays the same in both experimental setups, for the very same reasons.
So, if beam-splitter number 2 has some hidden mechanism to produce a deterministic reflection/transmission response, this would necessitate that at least the “noise” of the photons that go through the upper path is changed (due to the path-lengthening) such that the response of detector 1 is what the already-performed experiments say.
Surely one can nonetheless just claim this. But not only does it contradict what you wrote in the paper I cited above; it moreover demands all kinds (and a huge amount) of hidden information on top of just one bit per photon, because now this photon is no longer an entity with just one bit of information that can be transmitted to the exchange particles it may meet in the future, but a real 3D object with an irregular surface.
And now we have the problem that those continuities that we wanted to explain away by declaring the world to be fully digitized return. How fine would the resolution have to be, and how well-ordered the irregular surface, to allow an exchange particle to read out how to react properly to the presence of that photon? The answer is: as fine as the resolution of the complementary matched filter (the interacting particle).
But since the surfaces of those complementary matched filters (be they in a beam-splitter or in a polarization filter) likewise do not change with a change in the experimental setup, we arrive at a contradiction: what in figure 3 should deterministically read out the proper information on how to react to the detection of a photon – namely some properties of beam-splitters, detectors, polarization filters, etc. – must now change according to your model.
Since the average surface of all the photons that go through the experiment cannot change (the source cannot “know” that the experimental setup has changed from figure 3 to figure 4) AND the lower path is also unchanged, the conclusion should be that if one lengthens the upper path, THEN beam-splitter number 2 must change its behaviour – with respect to the detection rate at detector 1. This sounds like nonsense, and I think it is nonsense.
So let’s try it the other way round: detectors, beam-splitters and polarization filters do not change their default states, but the irregular surfaces change when channeled through some delay line. This means that the photons that are reflected by beam-splitter number 1 do not change their surfaces; only those photons that are transmitted by beam-splitter number 1 do. And because we cannot know how many photons from the upper and lower paths contribute to, say, the 30% detection rate at detector 1, this sounds plausible.
But if the same kind of photons (maybe with different irregular, but statistically fairly sampled, surfaces) are produced to form an entangled photon pair, and one of those photons is channeled through the same delay line as in figure 4, it should also change its surface and therefore react differently at some polarization filter – and therefore change the experimental results (the Bell curve)!
This is a testable scenario, and it has been shown to be wrong: the Bell curve does not change!!! Sorry for that, but in summary I must say that the picture you deliver to explain the relevant experiments is unfortunately inconsistent with what we know experimentally. I am left with my impression that all these explanations for individual quantum experiments are merely “matched filters” themselves. But they do not take something away as normal filters do; they add something hidden according to a change in the experimental setup. Although adding something is not a bad thing per se, when compared with other quantum experiments it simply becomes inconsistent.
Robert H McEachern replied on Dec. 17, 2018 @ 15:44 GMT
Stefan,
"it can hardly change if the experimental setup changes" It is not the input that is changing, in a statistical sense. It is the "response" of a highly-non-linear experimental system that is changing (AKA a detector); it is not responding "randomly" to "identical particles" as is being assumed. It is responding deterministically to small UMOP, that may vary from particle-to-particle, that has been assumed to not even exist.
"Intrinsic noise stays the same, on average, for all the photons that will go through the experimental setup." Obviously. IT DOES NOT MATTER. Each individual photon is not EXACTLY IDENTICAL to the others. A UMOP detector is SENSITIVE TO THE INDIVIDUAL DIFFERENCES not the average.
"this would necessitate that at least the “noise” of the photons that go through the upper path is changed" The UMOP may change with every photon. That is the point. They are not individually identical. They are only "Statistically" identical. And the small difference between successive individuals may "drift" with time - that is what the coherence length is all about.
UMOP, riding on top of a large signal, does not behave (within a detector) like either an independent, fixed, hidden variable, or like hidden, random noise. It systematically biases the detector outputs, thereby changing the detection statistics. So if you analyze the detection statistics ASSUMING that you are looking at EITHER a fixed, hidden variable or just random noise, then you are guaranteed to be mystified by the results - your fundamental premise about the nature of the input, and about how a non-linear detector responds to it, is false.
Rob McEachern
Stefan Weckbach replied on Dec. 17, 2018 @ 18:46 GMT
Robert,
„Obviously. IT DOES NOT MATTER. Each individual photon is not EXACTLY IDENTICAL to the others. A UMOP detector is SENSITIVE TO THE INDIVIDUAL DIFFERENCES not the average.”
This means that a photon carries more than one bit of physical information. Information in the sense physicalism is concerned with is what makes a physical difference. Even a beam-splitter (or a detector), faced with a photon that comes from the upper path and is transmitted by beam-splitter number 2 because this beam-splitter cannot read the “very bad handwriting” (the “noise”), must surely first look at this handwriting in order to decide whether to act as if no photon were there, or to reflect the photon towards detector 2. So, in any case, bad handwriting that is to be ignored is also information. It’s a signal. And something that is read out from the photon (a “signal”) to cause its reflection is also information. Let’s not argue about what “read out” means ontologically – whether it is an analytical process of abstract data processing or just a physical interaction in the sense of a “non-matching”, with the consequence that the particle “could not be detected”.
But even if we suppose that things work like that, with photons carrying such an amount of information, the only logical cause of the different results we obtain for figure 4 (in comparison to figure 3) is the presence of the delay line that was added to the setup depicted in figure 3.
So the delay line must change some properties of the photons (compared with the experiment depicted in figure 3); the mere presence of that delay line cannot change the reaction patterns of the splitters, detectors, etc.!! Otherwise your theory would be non-local and could not be considered locally realistic. If anything, the change in the photons that pass through the delay line should cause the different reaction patterns of the subsequent splitters, detectors, etc.
So what does the delay line add – or subtract – physically, according to your theory, to enable the observed outcomes, which differ from those of the experiment depicted in figure 3?
Robert H McEachern replied on Dec. 17, 2018 @ 20:38 GMT
Stefan,
"This means that a photon carries more than one bit of physical information." No! This is what I mean by the "profound misunderstanding of exactly what a single, classical “bit” is, in the context of Shannon’s Information Theory"
In order to even be "information" in the first place, the recovery of a bit's value MUST be perfectly repeatable; EVERY copy of the same message, coin, photon, particle, etc. MUST yield the same value when measured in the same way. A "signal" has this property. "Noise" does not. But what property does "Signal+Noise" have?
"Information in the sense physicalism is concerned with is what makes a physical difference." Assuming that to be true, IS the PROBLEM. That is not what information is in Shannon's theory. Shannon's Capacity applies to his definition of information, not the physicist's profoundly wrong conception. IT DOES NOT MATTER if it makes a physical difference to a device TOO STUPID to ignore the difference. In order to BE "information", the receiver must be able to distinguish, a priori, between physical differences that are likely to be repeatable, from one message copy to the next (think of entangled copies), and ones that are not. Hence, once a receiver detects a "physical difference", it MUST subsequently make a DECISION as to whether or not the observed difference is likely to be repeatable; if the difference is determined to be repeatable, then the receiver behaves one way (accepts the difference as likely to be valid information), but if the difference is deemed to be unrepeatable, then it is totally ignored as invalid. This is what the "thresholding" in the Bell simulation is all about; above the threshold, is deemed to be a valid measurement of a repeatable difference, below threshold is deemed to be an unrepeatable measurement - that consequently fails to flip the "a valid detection has occurred" bit, IN SPITE OF there being a obvious "physical difference".
"So what does the delay line add – or subtract – physically, according to your theory.." In the case of a phase detector, it may enable a very slowly spinning/rotating/drifting phase, to drift enough to no longer "match" the very specific phase the detector is looking for, relative to an undelayed signal. Such things happen all the time in modern communications signals, in which a tiny frequency mistuning, results in a very slow phase-drift, within a receiver that is exquisitely sensitive to just such a drift.
Rob McEachern
Stefan Weckbach replied on Dec. 18, 2018 @ 00:47 GMT
Robert,
you wrote
“This is what the "thresholding" in the Bell simulation is all about; above the threshold, is deemed to be a valid measurement of a repeatable difference, below threshold is deemed to be an unrepeatable measurement”
This may be the case and is interesting in its own right, but it is irrelevant here. What is relevant here is whether or not a delay line alters the Bell curve in two-particle entanglement experiments when all other error-correction methods, statistical methods and thresholds are the same as those applied in the absence of a delay line. The delay line can be located before the respective photons impinge on the polarization filter, or after it; this too makes no difference in the results.
Note also that in two-particle entanglement experiments, as well as in the Mach-Zehnder single-photon experiments (figures 3 and 4), the exact time of emission of such photons does not play a role. Note further that one can conduct a Mach-Zehnder single-photon interferometer experiment (figures 3 and 4) with photons that come from pairs of entangled photons. Again, there is no difference in the results when one uses only one photon per generated twin pair.
“So if you analyze the detection statistics ASSUMING that you are looking at EITHER a fixed, hidden variable or just random noise, then you are guaranteed to be mystified by the results”
Let’s make a side-step: noise on top of a signal makes it difficult for a tuned detector to read the signal out properly. I understood this. Detectors must be tuned to a certain signal that one wants to detect. If noise comes into play, the readout suffers from disturbances to the “proper” functioning of the tuned detector. So far I follow. But there must be a deeper reason why “It systematically biases the detector outputs, thereby changing the detection statistics”, because these readout errors, together with the signals, in all cases mimic some wave behaviour. So, from a global point of view, the phenomenon of noise has some global structure in it, and when viewed globally (by averaging), it “interferes” with another structure, namely the signals. The result is a kind of “moiré pattern” – that is, a wave phenomenon (in layman’s terms).
What does this tell us about “randomness”? To paraphrase you: even if detectors (and beam-splitters, filters, etc.) were indeed partially “responding "randomly" to "identical particles" as is being assumed”, this randomness would surely have a systematical character, since together with some signals it systematically mimics wave phenomena. This is all highly interesting and well worth thinking about, but I think your explanation of noise as an irregular surface of a sphere, together with some tunable orientation of that sphere, does not explain why the delay line does not alter the Bell curve. One can perform the entanglement experiment with the delay line, as well as the Mach-Zehnder single-photon experiment with the same length of delay line, with the same wavelength of the photons and with the same tuning of the same detectors.
Robert H McEachern replied on Dec. 18, 2018 @ 18:01 GMT
Stefan,
"But there must be a deeper reason..." There is. That is the entire point. The detector (or interacting particle) KNOWS how to identify (respond to) measurable "physical differences" that must subsequently be totally ignored (never responded to - at all), if something is to ever behave as if absolutely "identical" particles/symbols, can ever be interacted with or "detected".
"this randomness would surely have a systematical character" Indeed it does. That behavior is precisely this:
(1) when the S/N is high (above threshold), behave EXACTLY as if the S/N is infinite.
(2) when the S/N is low (below threshold), behave EXACTLY as if the S/N is 0.
(3) when the S/N is in between (near threshold), "call it" either (1) or (2), depending on whether it is above or below the threshold, and behave accordingly, even though it is inevitable that many of these "calls" and their associated behaviors will be entirely "wrong", from the perspective of anyone imagining that they are observing a linear, measurement-based response.
It is this sort of behavior that transforms the "classical correlation" curve into the "quantum correlation" curve in Bell tests. This is what I mean by "symbolic behavior all the way down"; there is nothing "between" the two letters of a two-letter alphabet, so there can be no behavior "between" the behaviors associated with two such letters. This is what "quantization" is - nothing exists "between" the only things that are being observed/detected, not even in principle. Assuming, as every physicist has, that something "ought" to ALWAYS exist "between" is the problem. "Measurements" behave as if "between" exists, but "Symbols" do not. Assuming, incorrectly, that "elementary" particles are responding to "Measurements", rather than responding to "Symbols", is the ultimate source of all the total confusion associated with "interpreting" quantum theory. Symbols behave "weirdly" compared to Measurements.
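A rough toy sketch of rules (1)-(3) (illustrative Python with an assumed noise level and threshold; it is not the actual simulation code): anti-correlated two-colored "coins" plus noise, a matched-filter readout, and a threshold that turns every above-threshold response into a full +/-1 call and discards everything else.

import numpy as np

rng = np.random.default_rng(1)
N = 256
PHI = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def one_bit_call(coin, angle, threshold):
    """Matched-filter response: below threshold -> no detection (0), else a full +/-1 call."""
    response = np.dot(coin, np.sign(np.cos(PHI - angle))) / N
    if abs(response) < threshold:
        return 0
    return 1 if response > 0.0 else -1

def correlation(angle_deg, trials=5000, noise=1.0, threshold=0.3):
    """Average product of the calls over coincident detections only."""
    b_angle = np.deg2rad(angle_deg)
    total, coincidences = 0, 0
    for _ in range(trials):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        face = np.sign(np.cos(PHI - theta))              # shared two-colored pattern
        a = one_bit_call(face + rng.normal(0.0, noise, N), 0.0, threshold)
        b = one_bit_call(-face + rng.normal(0.0, noise, N), b_angle, threshold)
        if a != 0 and b != 0:
            total += a * b
            coincidences += 1
    return total / coincidences

for deg in (0, 30, 60, 90):
    print(deg, round(correlation(deg), 3))

How closely the resulting curve approaches the quantum one depends on the assumed noise and threshold; the sketch only shows the mechanism by which forced calls and discarded below-threshold events reshape the correlation statistics.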
Rob McEachern
Stefan Weckbach replied on Dec. 18, 2018 @ 19:53 GMT
Robert,
yes, of course, infinitely many „betweens“ all the way down make no sense – except perhaps for some Cantor dust.
Otherwise we would have the problem of needing infinite resolution to ever see a physical process taking place at all.
“The detector (or interacting particle) KNOWS how to identify (respond to) measurable "physical differences" that must subsequently be totally ignored (never responded to - at all), if something is to ever behave as if absolutely "identical" particles/symbols, can ever be interacted with or "detected".”
This would mean that a certain particle has some knowledge, some information, like a list. On the list are all the “particles/symbols” that have interacted with this certain particle in the past. When an interacting particle arrives and is on the list, there is no interaction.
So far, all this sounds as if we would live in a virtual reality, a discrete environment where no physical forces act and no energy-exchange needs to happen.
How does such a particle “interaction” take place? If a particle physically meets another particle, then there should be some forces to which both particles are exposed. Classically speaking, the manner in which these two particles collide dictates their subsequent behaviour (until the next interaction). For a certain particle to decide whether or not to interact with another particle, this certain particle must first come into contact with this interaction partner to “see” whether to respond to it or to “never respond to it – at all”. But again, for such a digitized decision process in a physical world (in contrast to a virtual world, namely a computer scenario), it needs a physical contact between these two particles.
This “readout” is the crucial point. It does not work the same way as a human being reading a badly handwritten letter; otherwise this particle would be a pattern-recognition automaton, namely a complex macroscopic thing. No, for this to work there must be a fixed threshold for this certain particle to collect all the detailed details (noise etc.) for its decision. And since there aren't infinitely many detailed details – otherwise the particle to be analyzed would be an entity with infinitely many “betweens” that had to be analyzed – the analyzing process must also be digitized.
The question now is twofold.
Firstly, in a strict physical sense, “analyzed” means physical contact with the particle that should be analyzed. There should be some forces involved.
Secondly, in the case that these two particles do indeed interact with each other (and do not ignore each other), there should also be some forces involved that dictate how both particles behave after the interaction (until the next interaction takes place).
Another question would be whether or not your explanation scheme really implies cases where a certain particle decides to ignore another particle, even though these two particles were in contact in order to “analyze” each other. This poses serious questions about the nature of forces in the physical world and how they are delivered in detail. Surely we have the phenomena of, for example, tunnelling, etc. But your explanation scheme is not concerned with forces, but with pattern recognition and the subsequent 1-bit decisions to react to them.
If forces are built up from many, many such 1-bit decisions and only appear constant in our macroscopic realm, then “forces” obey no classical physical necessities, except on average; at their core they are a purely information-theoretic concept. This would mean that we live in a kind of virtual reality that mimics the presence of forces that are supposed to have their origins in some rock-solid physical laws. Since these laws can be described algorithmically (mathematically), they seem to be laws. And they are, but the other way round: physical laws are only mathematically describable because the fundamental reality behind them is an information-processing reality.
And if the latter is true, then the “problem” of “entanglement” vanishes, since obviously (hopefully!) the software of this virtual reality is such that it renders all the “wrong” decisions (the wrong calls you spoke of) that particles may make in such a way that a human observer can always think his reality is constituted exclusively of wave phenomena.
Further: if there are “wrong” decisions, there has to be a default of “right” decisions for all possible “interactions”. If the software of this virtual reality is identical with the hardware, the distinction between “wrong” and “right” probably came into existence without a deeper reason, since with the word “software” we usually assume some architect of that software (at the very least, this software could in theory be other than it is, namely defining “wrong” and “right” differently from the assumption you make).
Let’s assume there is such software (identical to the hardware or not). This software may be highly complex, but nonetheless, on a logical level, it would carry out only deterministic processes (independently of whether or not human observers can prove that it is indeed entirely deterministic). It is therefore not appropriate to speak of a particle as something that “decides” something. Everything that happens would happen deterministically, even the “wrong” decisions. Therefore the differentiation between “wrong” and “right” decisions is misleading.
The information processes behind some noise may be highly nonlinear and complex, but they would nonetheless be deterministic. Which brings me back to my question in the previous post, namely how you explain that the delay line in figure 4 makes a difference, while, when it is used in two-particle entanglement (applied to only one of the particles), it makes no difference in the overall statistics. We may indeed live in a highly nonlinear environment, but there is no non-linearity all the way down, since the microscopic world we are debating is such that it enables very stable conditions, even for information exchanges like ours.
Robert H McEachern replied on Dec. 18, 2018 @ 22:25 GMT
Stefan,
"This would mean that a certain particle has some knowledge, some information" Not quite. Information (index-number into a table) is what the particle needs to extract from an interaction, precisely because it does not "have" it and could never predict it. In Shannon's theory, information is precisely that which cannot be predicted - which is why it must be obtained from the external environment - from an acquired "message". Knowledge, on the other hand, (the contents of the look-up table, dictating how to respond to an index-number) is indeed part of what might be called the particle's intrinsic "behavioral repertoire". But it would be a mistake to think that these behaviors have been "acquired" or learned, over the course of time. Rather, they merely constitute the ONLY possible behaviors, that result in the production of an interaction "signal" that could ever be externally observed. In a sort of evolutionary sense, those are the only interaction/behaviors, that "survive" an attempt to detect them, by an external witness. There may be all sorts of dark, "virtual", matter, that never interact with anything, and thus remain totally undetectable, until the moment that they finally encounter something, that presents an "index number" that triggers a detectable response. These are the only interaction behaviors that we ever see. Many other interactions could occur - but never result in a behavior that produces an external observable.
"energy-exchange needs to happen" Exactly. It is the only thing that ever DOES happen. There is no phase. There is no physical wavefunction with a phase. Nature, unlike man-kind, does not appear to be able to either detect or process "phase". But many energy-detection processes perfectly "mimic" phase-detection processes; or more precisely, they perfectly mimic the time-derivative of phase; what communication engineers call "instantaneous frequency", to distinguish it from the "frequency" associated with Fourier transform superpositions. Being unaware of the existence of such processes, has confused physicists for generations, such as being puzzled over how the perception of visual-color and audio-pitch is accomplished, via a detector that obviously lacks (by several orders of magnitude) the Fourier "resolution" that would be required to account for the perceptions. It is the principle of an FM receiver, rather than a Fourier spectrum analyzer. One of the very first such receivers, dating back to the 1930s, involved an energy-detector that can mimic an "instantaneous frequency" detector.
The Born Rule works, precisely because the math describing a wave-function happens to correspond EXACTLY to the math describing an array of energy-detectors (a filter-bank). When you perform an experiment, in which particles of equal-energy (quanta) are sent into the filter-bank, the ratio of (detected energy)/(energy per quantum) yields the number of particles detected within each channel of the filter-bank. In other words, the wavefunction, coupled with the Born Rule, coupled with equi-quanta inputs, is exactly equivalent to a histogram. That is why it works. It has nothing to do with phase. Taking the sum of the squares of the real and imaginary parts, corresponds to computing a "power spectrum" - that completely eliminates all phase information. Phase is a superfluous, intermediary, computational variable, that ultimately corresponds to nothing in the physical realm, and contributes nothing to either the "probability" estimate, or to the causative, phase-mimicking energy-detection processes that are actually occurring, but being misinterpreted as "phase" phenomena.
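As a rough illustration of this filter-bank reading of the Born Rule, here is a minimal sketch (my own construction; the channel count, number of quanta and random seed are arbitrary): the per-channel probabilities come from |psi|^2 alone, equal-energy quanta are distributed accordingly, and the phases attached to the channels never enter the detected-energy histogram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: the Born Rule read as a filter-bank histogram.
n_channels, n_quanta, e_quantum = 8, 100_000, 1.0

amps = rng.normal(size=n_channels) + 1j * rng.normal(size=n_channels)
psi = amps / np.linalg.norm(amps)          # normalized "wavefunction"
prob = np.abs(psi) ** 2                    # power per channel; phases drop out here

# Send equal-energy quanta into the filter bank; each lands in one channel.
hits = rng.choice(n_channels, size=n_quanta, p=prob)
counts = np.bincount(hits, minlength=n_channels)

detected_energy = counts * e_quantum
print(detected_energy / e_quantum)         # (detected energy)/(energy per quantum) = the histogram
print(np.round(prob * n_quanta))           # Born-rule prediction; channel phases never entered
```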
"this sounds as if we would live in a virtual reality" We do. But it is one individually-constructed, entirely by each and every one of us, and existing entirely within our own mind. We have constructed an internal "information derived" virtual representation, of the external physical world. We (our conscious mind) never even encounters the real, external reality. We only encounter our own, personally-constructed, representation of that reality. And we routinely mistake the latter, for the former, no matter how many times the philosophers have told us to beware. Neither physicists, nor the common-man, listen to philosophers.
"How does such a particle “interaction” take place?" Always via energy detection, in quantized "symbols". That is why we only ever observe "quantum jumps" in energy levels - corresponding to the "letters of the information alphabet". Think of cursive handwriting; the fact that the observable, physical measurement is continuous, is irrelevant to the fact that it nonetheless "encodes" discrete, discontinuous symbols. The "discreteness" is entirely a property of the interaction itself, not the inter-actors. The actors act as if the interaction itself NEEDS to be treated discretely.
"it needs a physical contact between these two particles." Or with a normally, unobservable, chain of interacting particles, constituting a "field" in which everything is embedded; some of which might just become temporarily observable, if endowed with enough energy (the only thing driving any detection) within giant, particle accelerator experiments.
"for this to work there must be a fixed threshold for this certain particle to collect all the detailed details (noise etc.) for its decision." That is exactly the nature of the thresholding that reproduced the quantum correlation statistics; no complex error-detection and correction, just a crude energy-level detector. The energy at the output of the matched filter, is either above the threshold, or not - a single bit-flip. And if you modify a few lines of code in the simulation, you can easily observe what happens with you replace that crude energy-detector, with one that compares the "a priori known" state with "the estimated state" and excludes the "badly called coins" rather than the below-threshold coins. The quantum correlations disappear and the classical correlation reappears. The "bad calls" generated by the crude energy-detector, are entirely responsible for the change in correlation statistics.
"This poses serious questions about the nature of forces in the physical world and how they are delivered in detail." Indeed it does. And it appears that physicists have come-up with an entirely mistaken conception of how it all works. It appears to work by a "symbolic exchange of energy", rather than by a "physical exchange of energy". We never actually observe energy per se, we merely deduce its existence from observations of something else, just as the existence of a symbol is deduced from observations of something else. They are both merely our fabrications (virtual constructs), that we have found useful for describing whatever we have deemed to be of importance, about the actual observations.
"This would mean that we would live in a kind of virtual reality that mimicks the presence of forces that should have their origins in some solid-rock physical laws." Exactly. And we have a name for where we "live" and it is not "Earth", it is "mind". The mind may very well reside upon a physical Earth, rather than in a vat, but we can never discover the difference by examining the only perceptions that we will ever encounter - those created within our own mind, and seemingly being produced via the sensory-signal processing/information-extraction occurring within our physical brain. Our brain is constructing our mind, and our mind is left to wonder how in the world did THAT happen?
"such that a human observer always can think his reality is constituted exclusively by wave-phenomena." Or angles and demons, or anything else. Because "his reality" has become, whatever behaviors he has loaded into his own, personal, look-up tables. So if you have created tables, that engender behaving as if monsters exist under the bed, and are what is causing the floor-boards to squeak at night... and everyone else has done the same... well, that has become, that is, your reality.
"the decision of “wrong” and “right” probably came into existence without a deeper reason" Exactly. Regardless of whether you apply that statement to physics, morality, or anything else. "Right" is exactly equal to whatever has been loaded into your "right" table. And whenever everyone has been equipped with the same table, regardless of how that came to be, then no one will ever behave, as if you are "wrong". And whatever has "come to be" has done so, because, if it would have done otherwise, it would probably have failed to survive and thus would not exist as a present "observable". The Anthropic Principle; the only things that can ever be experienced, are those which have survived, at least until the arrival of the entity doing the experiencing.
"it enables very stable conditions, even for information exchanges like ours" Ah, but the mere existence of enabling conditions is merely a necessary condition, but not a sufficient one. As Bacon pointed-out 400 years ago, 2400 years ago, all the conditions that were necessary, existed on Earth, for the Romans to have discovered everything that we have now discovered, but 2000 years earlier than we did. But it was not sufficient - because, according to Bacon, Plato and Aristotle induced all future generations to abandon the effort, until folks like himself and Galileo arrived on the scene and commenced what Bacon called a "Great Instauration" - a restoration, following a decay.
Rob McEachern
Stefan Weckbach replied on Dec. 19, 2018 @ 08:07 GMT
Robert,
you wrote
„But it would be a mistake to think that these behaviors have been "acquired" or learned, over the course of time. Rather, they merely constitute the ONLY possible behaviors, that result in the production of an interaction "signal" that could ever be externally observed. In a sort of evolutionary sense, those are the only interaction/behaviors, that "survive" an attempt to detect them, by an external witness.”
Taking this for granted as well as
“There may be all sorts of dark, "virtual", matter, that never interact with anything, and thus remain totally undetectable, until the moment that they finally encounter something, that presents an "index number" that triggers a detectable response. These are the only interaction behaviors that we ever see. Many other interactions could occur - but never result in a behavior that produces an external observable.”
and combining it with
“"energy-exchange needs to happen" Exactly. It is the only thing that ever DOES happen.”
leads to the conclusion that if “dark matter” does indeed exist, these “many other interactions” are NOT based on energy-exchange, since you defined energy-exchange as the only signal.
One could now say that these “many other interactions” are surrounded by such an amount of noise that this leads to the non-detection property of that “dark matter”. But nonetheless this “dark matter” would interact with other dark matter of its kind. If true, then either it does so without energy-exchange – or with the latter, but with “the contents of the look-up table, dictating how to respond to an index-number” that in most cases do not occur in the look-up tables of our ordinary matter.
This means that what for our ordinary matter is a “signal” is “noise” to the dark matter. If you define “virtual dark matter” as what scientists call “virtual particles”, meaning a “random” activity of the quantum vacuum, then I am forced to conclude that what is considered random about this quantum activity is coherently structured in some way, since it must enable stable interactions due to some global interaction rules.
A “virtual” reality as I defined it is not quite what you have in mind. Yours is the “ordinary” epistemological and psychological subjective awareness of an observer, together with the many filter mechanisms that the theory of evolution and the anthropic principle provide. It is certainly true that each of us lives in his own “virtual” reality to some extent. And it is certainly also true that “the thing as it is” (Kant!) cannot be reached by any one of us.
But by virtual reality I meant a reality that is perceived by its observers only *as if* time and space, causes and effects are fundamental. Think of a cat, animated by a computer program; the cat pushes a ball. There is no energy-exchange going on with that pushing. The ball rolls to the left back corner, but there is no space as we define it in that animation. We can even slow down that animation at some point. The slowing down does not alter the final result.
In that example, the phase correlations stay the same when one slows down the animation. So, when you say “There is no phase. There is no physical wavefunction with a phase.”, the phase correlations must reside elsewhere, and that brings us – amongst other subtleties – to the question of what time is. These phase correlations cannot reside in only one entity, since "phase" is relative. But on the other hand there must be a global time frame for some correlations to be able to happen at all.
Remember this: humankind has figured out a very successful operating scheme (the formalism of quantum mechanics) for predicting many physical behaviours. This behaviour necessitates many time-dependent correlations. For the case that your theory should be true, humankind would have filtered the formalism of quantum mechanics out of some (physical) behaviour that has no phase relations. Surely this can’t be the case.
There is a phase, albeit there may be no physical “wave-function”. Otherwise no filter-banks would exist.
“That is why it works, It has nothing to do with phase”
The phase may not be in the individual equi-quanta, but it obviously is hidden within the average behaviour of many, many particles. Otherwise quantum mechanics would not work. What you call “phase” is therefore the amount and the qualitative aspects of the correlations that enable the emergence of a stable world – and the emergence of the formalism of quantum mechanics. The latter is guaranteed to work by stable (predictable) correlations. The reason is that there are no successive “betweens” in nature at a fundamental level, meaning no infinity of physical properties. With a very limited arsenal of physical properties, you automatically get stable global correlations. That’s what enables filter mechanisms and evolutionary mechanisms to work in the first place.
But, for all this to work it needs a global time reference, that is also stable. In this sense, “time” must also be equi-quantized. Every time you state that a certain energy-exchange needs an amount of time and cannot happen instantaneously, you implicitly refer to that equi-quantized time. Time cannot be divided infinitely – in the same manner that logically there cannot be infinitely many “betweens”.
This brings me back to the spinning spheres in Peter Jackson’s framework. The assumed orientation-shift of such spheres also cannot happen continuously. It must also be quantized in some manner. Although the delay line I spoke of may not shift the phase of such a sphere, this delay line must surely advise the entity going through it to behave as if there is such a phase shift. This advice cannot happen at the last beam-splitter, when one follows a locally-realistic explanation.
One now can model such an advice as a 1-bit energy (“information”) transfer to the entity that is going through that delay line. Were the delay line lambda/2 longer (or shorter), there would have to be a different advice. We can shorten or lengthen this delay line – roughly speaking – arbitrarily. In each case, the delay line must transmit (or offer) the proper symbols for the particle to react to. If time is quantized, the particle may “count” all quantized steps from the time it enters the delay line until it leaves it and therefore can deduce how to behave afterwards. But this explanation wouldn’t then be due to some energy-exchange between the delay-line and the particle. If there is some advice from the delay line to the particle, the symbolic information for that advice must reside somewhere within the delay line. The question is where this information should be located within the delay line.
Robert H McEachern replied on Dec. 19, 2018 @ 17:22 GMT
Stefan,
"leads to the conclusion that if “dark matter” does indeed exist, these “many other interactions” are NOT based on energy-exchange, since you defined energy-exchange as the only signal." Such a conclusion is invalid. All kinds of energy exchanges occur onboard a submarine. But a distant observer cannot detect any of them. The issue is, does a potential emitter, actually...
view entire post
Stefan,
"leads to the conclusion that if “dark matter” does indeed exist, these “many other interactions” are NOT based on energy-exchange, since you defined energy-exchange as the only signal." Such a conclusion is invalid. All kinds of energy exchanges occur onboard a submarine. But a distant observer cannot detect any of them. The issue is, does a potential emitter, actually produce a detectable emission. It is not possible to observe anything other an emission. You cannot observe the entity you call your mother. You can only observe things like the solar emissions produced by the sun, that have been scattered off your mother's body and subsequently detected by your visual system. Or, if your mother says hello, your auditory system might detect the sound-wave emissions she produced. Your brain may then construct a "virtual" reality that you perceive as your mother. But in no case, have you ever actually "observed" your mother per se. Energy exchanges that fail to produce detectable emissions, can never be detectable. Two people, off in the distance, might be whispering to each other (exchanging emissions) but if I cannot possibly detect the sound, then they have not exchanged anything with me.
"If you define “virtual dark matter” as what scientists call “virtual particles”" I do not. Virtual particles are real particles that have merely remained undetectable by some distant observer. They are like ants crawling beneath the leaves on the ground, that I cannot detect, and thus do not interact with at all, but they can easily detect each other and thus are constantly interacting among themselves. But occasionally, for one reason or another, I do become aware of their existence, thus changing their status from "virtual" to "real", within my internal, mental-model of the external world. But it is only the "state" of my (virtual) model of ants, that has changed. The (physical) ants themselves are just doing what they have always done; I, not them, was the "virtual" entity,from their perspective, prior to our encounter, assuming they were able to detect my presence.
"A “virtual” reality as I defined it is not quite what you have in mind." I agree. Mine actually does exist within my mind - that is what a mind is. An external, virtual reality is merely a hypothetical entity.
"Think of a cat, animated by a computer program, the cat pushes a ball. There is no energy-exchange going on with that pushing." Of course there is. The computer program could never be executed in the first place, without energy being exchanged. The point I have been trying to make, is that when you mistake the source of the energy for the dummy, rather than the ventriloquist, you will utterly confuse yourself, in regards to any cause-and-effect relationship. This is exactly what has happened in quantum physics. In the case of the double slits, it is the properties of the slits themselves that are responsible for the "interference", not some property of the entities passing through the slits, as has been assumed.
"Surely this can’t be the case." Surely it is the case. This is inevitably the case, when the most fundamental "self-evidently true" premise, from which ALL subsequent conclusions have been derived, turns out to be false. This has happened several times previously, in the multi-century history of physics. This is exactly what many of the founding fathers of quantum theory were concerned about. It is what all the Schroedinger's cat and EPR paradoxes etc. are about: something EXCEEDINGLY fundamental must be wrong, in our understanding of what is causing the observed effects.
"Otherwise no filter-banks would exist." The filter bank only needs to accumulate energy, such as by counting quanta as they arrive. Which is the only thing that is ever actually observed in the so-called interference experiments.
"The phase may not be in the individual equi-quanta, but it obviously is hidden within the average behaviour of many, many particles." And which can have absolutely no causative effect upon any PREVIOUS, individual, detection event.
"But, for all this to work it needs a global time reference, that is also stable." Exactly. And since no such thing exists, none of the observations attributed to it can be caused by it. They are caused by something else entirely.
"you implicitely refer to that equi-quantized time." No. You are again reversing cause and effect. The passage of time can only be observed, as the passing of events.
"Every time you state that a certain energy-exchange needs an amount of time and cannot happen instantaneously, you implicitly refer to that equi-quantized time." No. Look at the expression for Shannon's Capacity. A single-bit of information can be encoded many different ways, with various combinations of time-duration, bandwidth and S/N. There is no unique time-interval the must be associated with information recovery.
"If time is quantized" It is not. Nothing is quantized, except for the detection/counting of energy-quanta.
Rob McEachern
Stefan Weckbach replied on Dec. 20, 2018 @ 06:49 GMT
Robert,
thanks again for your extensive reply.
Taking everything you wrote so far for granted, I have no clue yet what the presence of the delay line in figure 4 contributes to the different results in comparison to those of figure 3.
I also have no clue why this delay line does not alter the experimental results (in terms of your locally-realistic theory, and compared to the violation of the Bell inequality in EPR experiments) when inserted into one path of the EPR experiment.
It would be helpful if you could explain these different experimental results in terms of your theory.
Robert H McEachern replied on Dec. 20, 2018 @ 17:55 GMT
Stefan,
The key is in the author's statement (III.A) that "The length of the delay line must be compared to the coherence length of the laser."
The coherence length is absolutely irrelevant to any detector that is only sensitive to amplitude. So the fact that the authors have bothered to make that statement, implies that the detectors are not sensitive to just amplitude variations in the laser pulses; the detectors are also sensitive to phase, regardless of whether or not the authors intended that to be the case.
So, imagine a delay line in the form of a physical tunnel, through which a very slowly rotating "polarized coin" is passing. The difference between the phase rotation angle of the coin, when it finally exits the tunnel, compared to the angle when it first entered the tunnel, is directly proportional to the tunnel length. So, if the coin is detected via a matched filter, then this difference in phase angle, not only can, but will, trigger an entirely different response, as compared to an undelayed coin, if the change in angle causes the matched-filter output to transition from being above-threshold to being below-threshold, or vice-versa.
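A minimal numerical sketch of that picture (all numbers invented for illustration): the extra rotation picked up in the tunnel grows with its length, and at some length the matched-filter output falls below the threshold and the decision flips.

```python
import numpy as np

# Toy sketch of the "rotating coin in a tunnel" picture; the rotation rate,
# threshold and entry angle are assumptions made up for this illustration.
rotation_rate = 0.01          # radians of extra rotation per unit tunnel length
threshold = 0.5               # detection threshold on the matched-filter output
entry_angle = 0.9             # coin angle relative to the matched filter at entry

for tunnel_length in (0.0, 20.0, 60.0, 120.0):
    exit_angle = entry_angle + rotation_rate * tunnel_length   # proportional to length
    mf_output = np.cos(exit_angle)                             # matched-filter response
    detected = mf_output > threshold                           # above/below threshold flips the call
    print(f"length={tunnel_length:6.1f}  output={mf_output:+.3f}  detected={detected}")
```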
The "coherence length" is the very definition of a "delay large enough to be detectable". All real oscillators, like a laser, exhibit slowly drifting instantaneous-frequencies (time-derivatives of instantaneous-phase). The rate of drift is characterized as the "coherence length". Such "minimum detectable drifts" are what the Shannon Capacity and the Heisenberg Uncertainty Principle are ultimately all about; they define the drift-size necessary to distinguish one "symbol" from the next-most-similar symbol in the alphabet. When the drift becomes large enough, a decision-error occurs (the wrong symbol is selected), and as previously described, such decision errors are what cause the observed changes in the detection statistics and/or correlations.
I should point out that these "drift" issues are not just my idle speculations. They have played a critical role in developing modern communication systems capable of operating at anything near the Shannon Capacity. Because NO internal clock in a receiver can EVER be accurate enough to recover information from such an input signal, for a time-period much longer than the "coherence length" of the receiver's clock relative to the emitter's. The "timing recovery" within such a system must be derived from the emission itself - derived from the non-stationary events observed within the emission itself. There is an entire art and science to doing this; to paraphrase Shakespeare (Hamlet), "There are more things in heaven and earth, dear physicist, than are dreamt of in your philosophy."
This is how real physical processes can deal with your previously observed problem "But, for all this to work it needs a global time reference, that is also stable." Time need not be stable at all, if time itself, is effectively being derived from the very events being observed, in REAL time. That is what REAL time is.
And this is where a "single bit of information" comes into play. Above, I just described a "synchronous" message, in which the timing between symbols is important. But as the number of symbols in a message is reduced, when it is finally reduced from two, to just one, the need to maintain synchronization between symbols suddenly vanishes. The system is now totally asynchronice - time has disappeared altogether from the equation. The Born rule eliminates any internal phase within a message and the single-bit means external timing (between symbols) is also irrelevant to any detection. Only a detection event itself remains (energy detection); either something is detected, or nothing is - a single bit-flip. But as soon as two such entities combine, a new type of behavior suddenly becomes possible; like either an internal orbital period (phase information), or the time-duration for two, stuck-together balls, to pass by and thus be sequentially detected. That is how time and phase enter into existence, from what was previously a state with only the "energy" associated with single-bits.
Rob McEachern
Stefan Weckbach replied on Dec. 20, 2018 @ 20:48 GMT
Robert,
so, if you speak of “phase” and “drifting”, do I have to assume that down-converted twin particles in EPR experiments have no such drift when passing through a delay line?
Robert H McEachern replied on Dec. 21, 2018 @ 15:26 GMT
Stefan,
Regardless of delay-length, any drift will be identical, as long as the delay on each path is identical.
The question is: Do the delays differ on the two paths, attempting to measure each member of the entangled pair? And the answer is, the detectors in Bell-test experiments, are "coincidence" detectors; meaning any RELATIVE delay whatsoever (as compared to the coherence length), will totally disable the entire experiment - nothing will ever be detected, other than random noise.
Actually, I should be more precise: Bell tests are performed by "after the fact" analysis; after all the detections have been made, ALL the detections made by Bob, are completely rejected, UNLESS there was a time-coincident detection by ALICE. Particles that experienced different delays along the two paths are rejected from the experiment.
But again, it depends on the coherence length. What is the coherence length of the source? As long as the coherence length is long enough for any drift to be undetectable, no relative delay will matter.
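A toy sketch of that "after the fact" coincidence selection (my own simplification: each pair's detections are compared directly, and the jitter, window and delay values are invented): a relative delay much larger than the coincidence window leaves essentially nothing in the data set.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sketch of coincidence post-selection; all numbers are assumptions.
n = 5_000
emission_times = np.sort(rng.uniform(0.0, 1.0, n))   # pair emission times over 1 s
jitter = 0.5e-9                                       # detector timing jitter (s)
window = 2e-9                                         # coincidence window (s)

def coincidences(relative_delay):
    # Compare each pair's two detection times; keep only those within the window.
    alice = emission_times + rng.normal(scale=jitter, size=n)
    bob = emission_times + relative_delay + rng.normal(scale=jitter, size=n)
    return int(np.sum(np.abs(bob - alice) < window))

print("no relative delay:     ", coincidences(0.0))
print("delay >> window (1 us):", coincidences(1e-6))
```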
Rob McEachern
Stefan Weckbach replied on Dec. 21, 2018 @ 16:21 GMT
Robert,
again thanks for the answer.
“Regardless of delay-length, any drift will be identical, as long as the delay on each path is identical.”
This sounds reasonable. If I understood this right, using a delay line for each of the two particles will result in such a drift. The size of each phase drift on each side depends on the length of each delay line on each side. If both delay lines are of the same length, then there is no relative delay and both particles undergo the same amount of drift; hence the known end-results of such an EPR experiment are obtained, when filtering all the time-correlated twins out of the data and plotting them. Is this what you have in mind? Surely the two particles have not conspired to have no drift at all; rather, the reason for the observation of the well-known bell-curve-like shape is that the drift on each side has the same magnitude?
Does it necessarily need detectors that are sensitive to amplitude as well as to phase to obtain the well-known results (bell curve)? If not, how would two such amplitude-and-phase detectors alter the well-known results?
Robert H McEachern replied on Dec. 21, 2018 @ 17:10 GMT
Stefan,
"Is this what you have in mind?" Yes. But it is important to remember that the drift is caused by the source, not the delay lines. As a more extreme example, imagine that the frequency drift of a laser is so great, that over a period of one hour, the color slowly shifts from red-to-yellow-to-green. As long as you only compare pulses separated in time by a few seconds or minutes, the color shift may be undetectable. But when you start comparing pulses taken an hour apart, the difference becomes obvious. The effect is caused by the Unintentional Frequency Modulation On Pulse (UFMOP).
In the presence of such UFMOP, an experiment conducted over a time period much shorter than one hour, will exhibit a spread in the measured color (or phase in the case of a more slowly drifting source), that together with noise, will result in the histogram of the measured color or phase being something other than a discrete line - like a Gaussian distribution.
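A minimal sketch of the UFMOP effect (all numbers invented): a slow random-walk drift in the source leaves nearby pulses looking identical, while the histogram over the whole record spreads into a broad distribution instead of a discrete line.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sketch: the source frequency drifts as a slow random walk,
# on top of a little measurement noise per pulse.
n_pulses = 10_000
drift = np.cumsum(rng.normal(scale=1e-4, size=n_pulses))        # slow drift of the source
measured = 1.0 + drift + rng.normal(scale=2e-3, size=n_pulses)  # per-pulse measurement noise

# Pulses a few steps apart look identical; over the whole record the measured
# values spread out into a broad histogram rather than a single discrete line.
print("spread over 100 nearby pulses:", measured[:100].std())
print("spread over the whole record: ", measured.std())
```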
"how would such two amplitude&phase detectors alter the well-known results?" Remember, the "single bit" in Shannon's theory, is at the very limit of detectability. It can only be reliably detected, by a detector that is optimally designed and "tuned" to detecting however the bit happens to be encoded within the signal. Any other type of detector is likely to either not detect the bit at all (probability of detection < 1), or mistake environmental noise for an actual bit (probability of a false alarm > 0). Either error will alter the detection statistics and thus the observed correlations between two detectors.
It is precisely the use of non-optimized detectors (randomized phase-angle detectors) that causes the change in the above two detection probabilities, and thus the "weird" correlations between the detectors, even when there is no drift whatsoever, in the Bell Tests.
Rob McEachern
Stefan Weckbach replied on Dec. 22, 2018 @ 00:02 GMT
Robert,
thanks again for the reply.
I have not yet grasped what you define as a photon (or “coin”) and its intrinsic properties in your theory.
1) Is a photon a lump of energy that can spread in space or is it a solid particle?
2) What general class of properties does this photon have when not measured?
3) Does such a photon take one or both paths in the experiments depicted by figures 3 and 4?
Robert H McEachern replied on Dec. 22, 2018 @ 15:32 GMT
Stefan,
In the one-dimensional case, imagine a waveform that consists of exactly three half-cycles of a sine-wave, centered at zero time. Now imagine another, consisting of three half-cycles of a cosine-wave, centered at zero time. The two waveforms are localized, and look different. And they have different matched filters; the matched filter is just a duplicate copy of each waveform itself. If you apply the matched filter, optimized to detect one of those waveforms, to the other waveform, you will get a different response as compared to when it is applied to the one it is the duplicate of.
Now add noise and band-limiting (via a Window Function) to those two waveforms, to reduce the Shannon Capacity of each waveform, all the way down to a single-bit-of-information.
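Here is a minimal sketch of those two waveforms and their matched filters (my own construction of "three half-cycles, centered at zero time", with an arbitrary window and noise level): the filter matched to the transmitted waveform responds much more strongly than the mismatched one, even when heavy noise and windowing push the waveform toward the single-bit limit.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sketch: two localized waveforms and their matched filters.
n = 301
t = np.linspace(-1.5 * np.pi, 1.5 * np.pi, n)   # three half-cycles of a unit-frequency wave
sine_wave = np.sin(t)
cosine_wave = np.cos(t)
window = np.hanning(n)                           # band-limiting via a window function

def matched_filter_output(received, template):
    # The matched filter is just a duplicate copy of the waveform itself;
    # here evaluated as the correlation at zero lag.
    return np.dot(received, template)

noisy = window * sine_wave + 1.0 * rng.normal(size=n)   # heavy noise, toward one bit
print("sine template:  ", matched_filter_output(noisy, window * sine_wave))
print("cosine template:", matched_filter_output(noisy, window * cosine_wave))
```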
I visualize photons as just such waveforms, like tiny fish, swimming in a sea of environmental noise (still in one dimension). In a coherent emission, the phases of the photons are nearly identical, and the fish swim in a synchronized "school". In an incoherent emission, they do not. Under such circumstances, it is JUST BARELY possible, for ANYTHING to even detect the existence of one of these fish.
So, is such a construct a wave, or a particle, or a wave-particle duality? There is no possible way to tell! The information content of the entire beast is so incredibly low (one bit), that only one, single yes/no question about it, can ever be answered from ANY set of measurements of the thing; was its existence just detected?
After that single bit of information has been utterly consumed in answering the existence-detected question, there are no more available bits to INDEPENDENTLY answer any other question, such as "Did it pass through door number 1?" An observer can always generate as many answers to any number of additional questions, as he or she likes. But the point is, only the first answer has any significant probability of being correct. That is the defining property of such a beast. That is what Shannon means by a single-bit.
Now repeat all of the above, but this time, design the fish in three dimensions, but still only encoding a single bit of information, dispersed across all three dimensions. Now imagine that the ocean of noise itself, might consist of nothing but such fish - exceedingly difficult to detect individually, but easily observed en masse. Some of the "schools" of these fish even appear to resemble a single, much larger fish. We see the forest, but not the trees.
Rob McEachern
Stefan Weckbach replied on Dec. 22, 2018 @ 18:23 GMT
Robert,
thanks again for the reply.
You wrote
“In the one-dimensional case, imagine a waveform that consists of exactly three half-cycles of a sine-wave, centered at zero time. Now imagine another, consisting of three half-cycles of a cosine-wave, centered at zero time. The two waveforms are localized, and look different.“
1) I imagined these two waveforms (scribbled them). Do these two waveforms constitute what you call a “particle”?
2) If the answer to 1) is yes, then these two waveforms have an internal phase-difference of pi/2 – right?
3) If the answer to 1) is yes, then each of the two waveforms stands for some property of the particle. What are each of these properties?
You wrote
“And they have different matched filters; the matched filter is just a duplicate copy of each waveform itself. If you apply the matched filter, optimized to detect one of those waveforms, to the other waveform, you will get a different response as compared to when it is applied to the one it is the duplicate of.”
What do you mean in this case by matched filter? Albeit it is a duplicate copy of each waveform itself, where is it located? Are these copies somehow located in the “particle”?
4) if the answer to 2) is yes, is this phase-difference the one you spoke of when you wrote
“So, imagine a delay line in the form of a physical tunnel, through which a very slowly rotating "polarized coin" is passing. The difference between the phase rotation angle of the coin, when it finally exits the tunnel, compared to the angle when it first entered the tunnel, is directly proportional to the tunnel length.”
I guess what you mean here is that the phase difference defined under 2) stays the same for the particle as it passes the tunnel – is this correct? If right, it would follow that the original orientation in space of such a photon is slightly changed by passing the delay line, but the phase difference of pi/2 for the two waveforms you mentioned is conserved, meaning it does not change? Is this correct?
Stefan Weckbach replied on Dec. 24, 2018 @ 00:44 GMT
Robert,
just a few further annotations to think about.
A locally-realistic theory should not only aim to replace non-locality by locality, but should also be realistic.
Realistic means that independent of whether or not it is possible for us to deduce a more detailed description of what we call a “photon” (a particle in general), we have to necessarily assume that particles of the same class must have some fixed properties. Otherwise the ocean of nothing but fish (as you put it) can never resemble a single much larger fish without non-local correlations.
It is not enough to say that somehow all these fish, albeit having no fixed properties at all, can resemble something stable. They couldn't even reproduce all the repeatable statistics of the experiments we discussed.
For a locally-realistic theory, one has to define these fixed properties and then test them by experiment. In the case of the two-photon entanglement that we discussed, you gave at least one global fixed property of both photons: they should somehow be symmetric. But your explanation of why a relative delay does not matter due to some coherence length is false (“As long as the coherence length is long enough for any drift to be undetectable, no relative delay will matter.”), since one can fix the number of photons for each run (two photons, down-converted) and also have a fixed relationship of phase.
Nonetheless, with a delay line in one path one can prove whether or not the bell-curve statistics change; one does this by after-the-fact analysis, subtracting the additional time of the photons that went through the delay line and comparing this time with the time the other photon needed to be detected. Albeit the time of emission may be uncertain, both photons will be emitted at the same time due to energy and momentum conservation. If there is some noise or other disturbance in the detector(s) that prevents emitted photon-pairs from being detected during the respective time-windows, these disturbances should be causally independent for each detector (and therefore should be called random), and all Bell tests should not deliver the repeatable and well-known bell-curve. Since these detectors are surely tuned to detect the signals one wants to detect, systematic errors can be excluded. The same is true for the experiment depicted in figure 4, since these experiments haven’t been done with detectors that are sensitive to phase. If one assumes the contrary, this would be like a kind of conspiracy theory (reintroducing what one wanted to cast out in the first place, namely unexplainable randomness), since it is highly unlikely that the unwitting use of phase-sensitive detectors (each properly tuned to its respective experimental setup!) should be a systematic error in all the hitherto made Mach-Zehnder experiments.
Your ansatz of explaining quantum mechanical behaviour with information-theoretic terms is surely not false per se, but I think it is simply not enough for a locally-realistic theory. It may be enough for the argument that we indeed live in a kind of virtual reality where our common notions of locally-realistic properties and fixed cause-and-reaction patterns turn out to be merely systematically false attributions.
Anyway, I wish you happy Christmas holidays!
Robert H McEachern replied on Dec. 24, 2018 @ 17:37 GMT
Stefan,
"It is not enough to say that somehow all these fish, albeit having no fixed properties at all, can resemble something stable." They have the only possible, fundamental property. They have existence. Nothing else is as fundamental, since without existence, there can be no other properties. Simply continuing to exist IS the only fundamental form of stability.
"I think it is simply not enough for a locally-realistic theory." I agree. The problem is, such a theory is not even a logical possibility - no "physical" theory is. The best one can ever hope for, is to mathematically model an input signal (an emission rather than an emitter), and how such a signal responds to attempts to recover information from it. If the responses always agree with experiments conducted in the "real" world, then our existing scientific method has achieved all that it ever can; the nature of the "emitters", the physical entities themselves, giving rise to the only signals we can actually detect, will remain forever unknown.
Happy holidays!
Rob McEachern