the EPR paradox and Bell's Inequality Theorem.
Due to attachment size restrictions, image quality is not great.
Rob McEachern
Steve Agnew replied on Aug. 31, 2016 @ 13:21 GMT
I like it, but just like the complexity of Bell's logic, quantum uncertainty is at the basis of all reality. In your program, you incorporate uncertainty as noise, but with a known seed, the computer algorithm for noise is deterministic. Classically, noise is just due to the complexity of chaos, and so noise is deterministic.
However, if your algorithm used quantum uncertainty instead of chaos to generate your noise, you would get the same answer. But with quantum uncertainty, you would need to accept a future that is not completely knowable. Thus, while it is true that highly chaotic classical systems can mimic many of the results of quantum logic, the basic uncertainty of quantum logic is just an immutable fact of how the universe works. The fact that the universe is also quite chaotic just layers complexity over reality... just like Bell's theorem layers complexity over the simple reality of quantum uncertainty.
The past is different from the future since the universe changes in time. The change of the universe represents the arrow of time, and always points to the future, like the dipole direction of the CMB creation.
Robert H McEachern replied on Aug. 31, 2016 @ 17:55 GMT
Tom,
If you uncomment that line, you will get the classical result: the noise is then identical, except for a sign, even for an entangled pair, so even the bit errors (bad polarity decisions) are perfectly correlated.
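A minimal toy sketch of that distinction (not the actual coin/matched-filter script; the MATLAB pair count and noise level below are assumed for illustration): at equal detector angles, giving both detectors literally the same noise sample makes their bit errors identical, so the pair stays perfectly anti-correlated even when both decisions are wrong, whereas independent noise samples weaken the measured anti-correlation.
% Toy illustration only - not the actual script.
N = 100000; sigma = 1.0;                 % assumed pair count and noise level
lambda = 2*pi*rand(1,N);                 % shared hidden orientation of each pair
s = cos(0 - lambda);                     % both detectors set to angle 0
n1 = randn(1,N); n2 = randn(1,N);
A        =  sign(s + sigma*n1);
B_shared = -sign(s + sigma*n1);          % identical noise, opposite sign
B_indep  = -sign(s + sigma*n2);          % independent noise
[mean(A.*B_shared), mean(A.*B_indep)]    % exactly -1 versus something noticeably weaker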
Rob McEachern
Robert H McEachern replied on Aug. 31, 2016 @ 18:08 GMT
Steve,
There is no difference between quantum and classical uncertainty. If you study the derivation of Bell's Theorem, you will find a well-known but false assumption at its heart. The assumption is that two independent measurements are possible, just as long as the first does not disturb the second. That is the entire rationale for the experimental design. But no one would ever bother even to try to perform an experiment requiring two such measurements, if they realized that the proposed experiment is one in which two independent measurements are impossible. The uncertainty principle amounts to the statement that it IS IN FACT IMPOSSIBLE. So why are they even bothering to attempt an impossible experiment? Because they do not understand the meaning of the uncertainty principle, or of a single, classical bit of information.
Rob McEachern
Thomas Howard Ray replied on Sep. 1, 2016 @ 13:08 GMT
Yes, Rob, that is what I mean about the function returning to zero. There is zero difference between quantum and classical domains.
My friend John R Cox understands: “...multiple spin components, cannot be independent – because they have been correlated...” Correct! That’s concisely the loophole I tried to worry out of the Delft experimental protocols. Give Rob my regards. jrc
Good work!
Akinbo Ojo replied on Sep. 1, 2016 @ 15:08 GMT
Rob,
The grammar here seems strange or illogical to me:
"The assumption is that two independent measurements are possible, just as long as the first does not disturb the second".
Two dependent measurements disturb each other.
From this it follows that
two independent measurements do not disturb each other.
It therefore appears unnecessary to add that, if the first does not disturb the second, independent measurements are possible, unless the stage is being set for some strange physics.
Robert H McEachern replied on Sep. 1, 2016 @ 15:18 GMT
Tom,
But consider the reason why this happens. It does not happen because no bit errors are being made (as in figure 1); it happens because both detectors now ALWAYS make IDENTICAL errors. In other words, in the quantum interpretation, even when the actual detected polarity is not even a possible state of the noise-free wave-function (the detection was a total error), the bogus detections must still be perfectly correlated, and have nothing to do with the actual, noise-free wave-function, in order to explain the classical result. Now that really would be spooky.
This raises the question, "What makes identical particles behave as if they are identical?" They cannot have identical noise (actually be identical), because that will fail to produce the observed quantum correlations. Thus, they are identical if and only if their recoverable information content is identical. If the intrinsic noise is "too identical", then they cannot behave like identical, quantum particles; instead, they will behave like identical, classical particles. Particles with more than one identical bit of information are, consequently, too identical to ever behave as identical, quantum particles.
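For readers without the program, here is what the two target curves look like in a stripped-down, hypothetical MATLAB sketch (again, not the actual coin model): a noise-free hidden-angle pair gives the classical saw-tooth correlation, whereas the quantum prediction is -cos of the angle difference; the noisy, thresholded coin model described in the paper is what is reported to close the gap between them.
% Toy illustration only - noise-free hidden-angle pairs, assumed pair count.
N = 100000;
lambda = 2*pi*rand(1,N);                 % shared hidden orientation of each pair
dang = linspace(0, pi, 37);              % detector angle differences
E = zeros(size(dang));
for k = 1:numel(dang)
    A =  sign(cos(0 - lambda));          % detector 1 fixed at angle 0
    B = -sign(cos(dang(k) - lambda));    % detector 2 measures the anti-correlated partner
    E(k) = mean(A.*B);
end
plot(dang, E, dang, -cos(dang));         % classical saw-tooth versus the quantum cosine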
Rob McEachern
Thomas Howard Ray replied on Sep. 1, 2016 @ 15:31 GMT
Rob,
I hope readers will forgive me for re-posting my attachment in this thread (and I hope you do the same with your paper); this will make referencing easier.
We are so much in accord. I want to bring attention to my excerpt from Aharonov-Elitzur-Cohen -- the eta term (p. 4) goes to zero and forces orientation.
By the uncertainty principle, though, there is no zero rest state, and one random path is compelled to be taken (the classical bit).
So why even bring up entanglement?
attachments:
1_Suppose_one_had_visited_a_restaurant_years_ago.pdf
Thomas Howard Ray replied on Sep. 1, 2016 @ 15:38 GMT
Rob,
Our posts crossed.
Your last reminds me of Lev Goldfarb's ETS formalism.
I would be delighted if he would engage here.
Robert H McEachern replied on Sep. 1, 2016 @ 15:50 GMT
Akinbo,
No strange physics is required.
Two conditions must be true in order to make independent measurements:
1) The object being measured must be capable of yielding two independent measurements.
2) The first measurement must not cause the second to somehow become dependent upon the first.
All Bell-type experiments seek to ensure that (2) is true, but they have merely assumed that (1) ought ALWAYS to be true in the classical realm. It is not. You cannot measure a second dimension of a one-dimensional object, like a line, even though the object is classical.
Objects that intrinsically possess only a single bit of information ALWAYS falsify the first condition. That is what ALL Bell-type experiments have ignored, to their eventual dismay.
But true, single-bit entities are about as common in the classical realm as the nuclei of transuranic elements. That is why their behavior is so unfamiliar. They do not occur naturally, so they have to be constructed before they can be observed. That is why I constructed some - so that their behavior can be observed to be identical to that observed in the Bell-type experiments.
Rob McEachern
Robert H McEachern replied on Sep. 1, 2016 @ 19:28 GMT
Tom,
I have only skimmed the Goldfarb paper, but I would have to say that an important (maybe the only important) connection between the continuous and the discrete is already known. It is Shannon's Information Theory (not Algorithmic Information Theory). The whole point of the former is to elucidate under exactly what circumstances a copy of a continuous function can be generated from a set of discrete values, that is absolutely indistinguishable from the original. Note that this does NOT mean that it is identical. It means that whatever difference exists between the copy and the original is indistinguishable, in the sense that feeding either one into a detector that knows, a priori, the correct "decoding" procedure will result in the same, identical output bit-stream obtained by feeding the other into the same detector; and that attempting to "encode" any more bits into the bit-stream than the limit imposed by Shannon's Capacity will result in catastrophic failure: instead of being identical, the resulting bit-streams will be totally uncorrelated (50% probability of randomized bit-errors).
This is also related to the original concept of the Fourier series (not the Fourier Transform), in which discrete frequency components are used to perfectly (in a least-squared-error sense) approximate a continuous function. This is the property, along with linearity, that enabled Fourier methodology to exploit the concept of a superposition to solve partial differential equations in mathematical physics. Hence, it is no surprise that continuous Fourier Transforms and superpositions have come to play such a dominant role in QM descriptions of discrete entities; that is why they were invented: to link the continuous to the discrete, and to superimpose the solutions to discrete input components (sinusoids) to obtain solutions to more general problems, with non-sinusoidal inputs.
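For reference, the "indistinguishable copy from discrete values" statement is the standard sampling theorem: a function band-limited to $B$ Hz is reconstructed exactly from samples taken every $1/2B$ seconds,
$$f(t) \;=\; \sum_{n=-\infty}^{\infty} f\!\left(\frac{n}{2B}\right)\,\mathrm{sinc}\!\left(2Bt - n\right), \qquad \mathrm{sinc}(x) \equiv \frac{\sin(\pi x)}{\pi x},$$
so a signal of duration $T$ is pinned down by roughly $2BT$ samples, and the noise then limits how many significant bits each of those samples can contribute.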
Rob McEachern
Eckard Blumschein replied on Sep. 2, 2016 @ 04:45 GMT
Rob,
You mentioned "a set of discrete values, that is absolutely indistinguishable from the original". As Galileo Galilei already concluded, the relations equal to, smaller than, and larger than are invalid for infinite quantities. That is my opinion, and it seems to be your point too. However, mandatory mathematics has been based on the idea that every number must exist like a pebble and must be distinguishable from any other one. Zeno's Parmenidean nonsense seems to be alive even in accepted theories and interpretations in physics.
Shouldn't we together elaborate consequences?
++++
Anonymous replied on Sep. 2, 2016 @ 11:00 GMT
Thanks Rob for the clarifications about Bell's theorem.
I don't intend to distract from the main conversation, but just as you point out, the wrong interpretations in Bell-type experiments stem ab initio from not paying attention to the details of the assumptions made.
In your post, you also made an assumption that I cannot overlook, by saying:
"You cannot measure a second dimension of a one-dimensional object, like a line,..." Says who?
I can measure the width of the line on my computer screen and that on the paper on my desk. I can also measure the thickness of the paper on which the line is drawn. Indeed, if the thickness of the paper reduces to zero, the line ceases to exist.
I therefore put it to you that in the real world, a line has length, width and thickness, and it is not a one-dimensional object. To say it is one-dimensional appears to be a wrong assumption which, if incorporated in your arguments, will give misinterpretations similar to those in Bell-type experiments.
Regards,
Akinbo
Robert H McEachern replied on Sep. 2, 2016 @ 11:53 GMT
Akinbo,
The things you are discussing are approximations to lines, not lines themselves. The issues concerning quantum correlations seek to discover properties of the things themselves, not any approximation of the thing.
So, does the uncertainty principle describe a property of the thing itself, or does it describe only an observer's inability to form a better approximation of the thing? Is there even a difference between the two? If there is a difference, can it be minimized? To zero? It is these types of questions that are responsible for the decades-long interest in the EPR paradox and Bell's theorem.
Rob McEachern
Steve Agnew replied on Sep. 2, 2016 @ 12:02 GMT
So I do not understand what you mean by classical. Classical causal logic means that all action is determinate, with knowable causes. Quantum logic agrees that most actions are from knowable causes, but also that some quantum action is not determinate; ergo, the uncertainty principle.
You wrote (Aug. 31, 2016 @ 18:08 GMT): "There is no difference between quantum and classical uncertainty. ... Because they do not understand the meaning of the uncertainty principle, or of a single, classical bit of information." Your statements are a confusion of classical and quantum logic, but you are correct about Bell's theorem being flawed. Any measurement of a photon that does not include the source as well as the observer ignores the quantum phase entanglement between the source and observer. This entanglement has no classical analog and is just how the universe really works.
The chaos of classical uncertainty can mimic the effects of quantum uncertainty, as you have well demonstrated. However, classical noise comes from the incoherence of chaos and does not have the phase entanglement between source and observer that quantum noise does.
Is your future determinate or uncertain?
Robert H McEachern replied on Sep. 2, 2016 @ 12:25 GMT
Eckard,
If you are familiar with my 2012 FQXi essay, then you know that I am not fond of equating math and physics. The consequence of attempting to do that has been a proliferation of strange, speculative "interpretations" about the unknown causes underlying the observed effects. Math is highly useful for describing those effects, but it has little ability to elucidate the causes of those effects, because there need not be a unique, one-to-one relationship between the terms in a mathematical equation describing observations and the things being observed. When there is, fine. But when there is not...
Rob McEachern
Robert H McEachern replied on Sep. 2, 2016 @ 12:41 GMT
Steve,
My future would not be determined, even if ALL laws of physics were determinate. Laplace's absolute determinism is based upon a flaw in logic, similar to that in Bell's Theorem: a flaw originating in a lack of understanding about the nature of information. I have discussed this on other FQXi web pages, in regard to Determinism versus Free Will. It is not possible to determine (predict) the future, when the required information cannot be known, not even in principle, before the event being determined actually occurs. It might seem surprising that such cases can exist, but not only can they exist, they do exist.
The uncertainty principle does not describe a property of objects. It describes a property of descriptions of observations of objects.
Rob McEachern
Steve Agnew replied on Sep. 3, 2016 @ 02:34 GMT
So the future is fundamentally uncertain, but not for object properties... just for descriptions of observations of objects... But as long as the information is fundamentally unknowable, you are in quantum heaven and not classical hell.
But sometimes people cherry-pick a quantum property here and then contrast it with a classical property there, like Maudlin does. So with a beamsplitter, there are two possible futures for a particle: path A and path B. Once an observer measures the particle on path A, quantum logic still insists that the particle's exact path is fundamentally unknowable, as a superposition of A and B. Classical logic says otherwise.
What say ye?
Robert H McEachern replied on Sep. 3, 2016 @ 03:16 GMT
The issue of determinism is not about whether or not information is knowable. It is about the exact point in time at which it becomes knowable. You can always know the outcome of an event after you observe it happen. The question is: is it possible to know it before it happens? In some cases it is, and in other cases it is not.
Superpositions are a mathematical phenomenon, not a physical one. Bear in mind that quantum theory merely describes how to compute the probabilities of observing various behaviors. It does not say anything about why those behaviors occur. It is only the unsubstantiated interpretations of the theory that make claims about the underlying causes. The theory is accurate. The interpretations are not. "Quantum Correlations" exist. But not for the reasons stated in any of the prevailing interpretations.
Rob McEachern
Eckard Blumschein replied on Sep. 3, 2016 @ 04:37 GMT
Rob,
Isn't information in terms of bits discrete mathematics? If so, you are perhaps in the quantum heaven, as Steve A called it.
In my early essays I showed MATLAB plots to demonstrate that spectrograms based on cosine transformation avoid the non-causality that is notorious with Fourier transformation. Why did and do experts not trust me? They were and are not ready even to question some related mathematical tenets, up to spacetime and SR. Be sure, I share your distrust in putatively rigorous mathematics, and I support your reasoning.
My singularity functions, restricted to IR+, are not in contradiction with my opinion that there are no singularities within a continuum.
++++
Robert H McEachern replied on Sep. 3, 2016 @ 10:42 GMT
Eckard,
Information in terms of bits is just counting. Counting is indeed discrete.
As I have repeated many times, math and physics are not the same. Math can be used to describe physical observations. Continuous math can be used to describe discrete observations, and discrete math can be used to describe continuous observations. It is up to the user to demonstrate that all such descriptions are accurate. But the user should never assume (as they all too often do) that there is some necessary form of one-to-one correspondence between anything other than the final math result's numerical value and the final value produced by a physical observation. In other words, the "effect" described by the math agrees with the observed "effect". But if the user assumes that the "cause" of the effect can be deduced by studying the form of the mathematical equation (it has a wave-function in it!), then the user is headed for trouble. Math has nothing to do with causality, because descriptions that are mathematically identical need not be physically identical.
What so many people fail to comprehend about Information Theory, is that information is not about anything - it has no meaning - it is merely a description of an effect. Meaning is concerned with the causes of effects. By eliminating all meaning, information is able to describe all effects, regardless of the cause of the effect. That is what makes modern communications, based upon Information Theory, so useful; the systems used to transfer your data, can represent it with discrete bits of information, without having to possess the slightest knowledge of what it means to either the sender or the receiver - it works for everything.
Rob McEachern
Robert H McEachern replied on Sep. 10, 2016 @ 18:12 GMT
A slightly revised version of the above paper, with much better quality images, is now available on vixra.org, under quantum physics.
Rob McEachern
Colin Walker replied on Sep. 12, 2016 @ 02:53 GMT
Hi Rob,
A classical system exhibiting quantum correlations? I was intrigued enough to translate your program into C, and was able to reproduce your results. I was thinking of finding the noise and cutoff levels for a least squares fit to the cosine numerically, but each run of 500,000 coins takes 28 hours on my less-than-super computer, so that is ruled out. Kind of curious to know how that time for a run compares to your experience.
You have presented a truly innovative approach that is indisputably classical. It seems to me the word breakthrough is appropriate. There is clearly much to contemplate about bandwidth, noise and a single bit of information.
Colin
Robert H McEachern replied on Sep. 12, 2016 @ 19:32 GMT
Colin,
I run the MATLAB script on an Apple iMac Retina (3.2 GHz Intel Core i5). Each run takes about 10 minutes. MATLAB's processing kernels are pretty highly optimized for this kind of number-crunching. As I mentioned above, you can purchase a "home" license for $150 (I also have the signal processing package, for an additional $50, but I don't think my script actually requires it).
Rob McEachern
Colin Walker replied on Sep. 12, 2016 @ 19:33 GMT
I just noticed you had previously mentioned a time of 10 minutes per computer run. Might be time to step up my computer system.
Colin
Eckard Blumschein replied on Sep. 18, 2016 @ 04:45 GMT
Rob,
On Sep. 1 you wrote: "This is also related to the original concept of the Fourier series (not the Fourier Transform), in which discrete frequency components are used to perfectly (in a least-squared-error sense) approximate a continuous function." Could you please explain why not FT?
++++
Robert H McEachern replied on Sep. 18, 2016 @ 13:57 GMT
Eckard,
Good question. The issue is: How do the continuous functions that appear in all the partial differential equations in mathematical physics relate to discrete measurements? This is the issue of the uncertainty principle: variables like position and momentum are continuous functions within the equations, but their measurements yield discrete values. Why are the discrete measurements connected, as they seem to be, by the uncertainty principle?
Continuous Fourier Transforms (FT) never deal with the discrete measurements, so they have NOTHING to say about this issue. Similarly, at the opposite end of this spectrum, Algorithmic Information Theory (AIT) only deals with the compression of discrete data, so it too has NOTHING to say about this issue. Shannon's Information Theory is precisely devoted to this issue - the connection between continuous functions and their discrete measurements. How do limitations, such as finite duration, finite bandwidth and finite signal-to-noise ratio, used to characterize the continuous functions, impact one's ability to make discrete measurements of those functions? That is the question Shannon answered.
The original EPR paradox had nothing to do with the special case of spins and polarizations; it was concerned with the uncertainty principle (UP) in general. But 60 years ago, David Bohm could not figure out any way to test the EPR paradox using the continuous functions appearing in the UP, so he introduced the idea of testing spin, which does not appear to be a continuous function.
Consequently, by focusing on the FT, AIT, and spin-related experiments, physicists have ended up ignoring the continuous-function/discrete-measurement problem for almost a century. Thus, they have never developed any intuitive understanding of the connection between discrete measurements of supposedly independent variables, or of the resulting "weird" correlations.
They have never done the simple Gedankenexperiment, and asked themselves, "If an entity only contained a single bit of information within its continuous function description, what types of behaviors would one observe, if one attempted to make multiple, discrete measurements of such an entity?" Think about it. Have you ever heard about any experiments that exhibit such behaviors? Measurements that only take on one of two values, that convey no additional information, and that exhibit strange, unexpected correlations, when one assumes that multiple, independent measurements OUGHT to be possible?
Rob McEachern
Eckard Blumschein replied on Sep. 20, 2016 @ 09:28 GMT
Rob,
Why, and in what sense, "the original concept of the Fourier series"?
++++
Steve Dufourny replied on Sep. 20, 2016 @ 10:21 GMT
Hello, in all cases the harmonic oscillations are deterministic at all scales, in the Bohmian interpretation or the Copenhagen interpretation. The hidden variables are proportionally deterministic also. Fourier series are just a wonderful tool in harmonic analysis.
(--)(--)(--)(--):)
Steve Dufourny replied on Sep. 20, 2016 @ 10:33 GMT
A continuous function is just a tool, also correlated with geometry. Now the real relevance is to consider the 3D and spherical volumes and the correct deterministic convergences. The continuity on R3 and S3 becomes relevant with the correct series of volumes, respecting the finite series of these said volumes. Even gravitation can be formalised with respect for Newtonian mechanics and the motions of spheres and the correlated oscillations. The Hamiltonian and the Lagrangian are always good tools. The geometrical algebras also must converge. I work on this with the spherical geometrical algebras, but it is not easy, I admit.
Steve Dufourny replied on Sep. 20, 2016 @ 10:53 GMT
Generally, after all, the aim is to harmonise the good convergences, in harmonising also the commutativity, the associativity and the continuity. The proportions appear in logic. This universe is precise and rational in its mechanics, universal between matter and energy, after all, at all 3D scales. This complexity is deterministic for all variables.
Robert H McEachern replied on Sep. 20, 2016 @ 13:29 GMT
Eckard,
Fourier Transforms relate continuous functions in one domain (such as time) to continuous functions in another domain (such as frequency).
Discrete Fourier Transforms relate discrete functions in one domain to discrete functions in another domain.
But Fourier Series relate continuous functions in one domain to discrete functions in another (such as discrete frequency harmonics).
Hence, the latter is the only one that ever even addresses the issue of attempting to relate continuous functions to discrete measurements of those functions (the discrete amplitudes of the discrete frequency components). It is the only one that addresses the question: In what sense does a set of discrete measurements correspond to (approximate) a continuous function? This is what "information" is all about.
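In symbols (standard textbook definitions, added here only for reference): the Fourier Transform maps a continuous function to a continuous function,
$$F(\nu) \;=\; \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \nu t}\, dt,$$
whereas the Fourier series of a function on a finite interval $[0,T]$ maps it to a discrete set of coefficients and back,
$$c_n \;=\; \frac{1}{T}\int_{0}^{T} f(t)\, e^{-2\pi i n t/T}\, dt, \qquad f(t) \;=\; \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i n t/T}.$$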
Rob McEachern
Steve Dufourny replied on Sep. 20, 2016 @ 14:11 GMT
Tell us more about what an information is. How do you quantify it? What are the methods of measurement, please? Discrete or not, all is proportional....
Steve Dufourny replied on Sep. 20, 2016 @ 14:32 GMT
The hidden variables and spooky action at a distance are under our standard model. If now the gravitation is analysed, indeed there are several relevant roads to analyse, but there also the proportionalities are respected. The analyses of signals with the Fourier tools are always rational and need a point of departure and a point of arrival. In the numerical analyses with binary codes, it seems the same logic. And since gravitation is another force, and is not an electromagnetic wave... it implies that discrete analyses are actually always under our standard model and deterministic about the values or calculations of signals, simply. Regards
Thomas Howard Ray replied on Sep. 21, 2016 @ 15:24 GMT
"In what sense, does a set of the discrete measurements, correspond to (approximate) a continuous function? This is what 'information' is all about."
Perfect, Rob. Until information is defined and bounded by a continuous function, it is disconnected or multiply connected, and not information as we use the term.
Extending your assessment: What are the boundary conditions?
Eckard Blumschein replied on Sep. 22, 2016 @ 14:35 GMT
Rob,
Doesn't a continuous function of time relate to a discrete function of frequency, and vice versa? IIRC, time and frequency are so-called canonically related to each other, as are some other physical quantities too. A flaw that I would like to attribute to the use of FT is also some unwarranted generalization: future data cannot be measured in reality, only within a model. Cosine transformation, no matter whether continuous or discrete, is more appropriate and not subject to the arbitrary choice of a phase reference, because there is only one natural point t=0 in reality, the actual now. Admittedly, this view is quite different from the usual time scale.
++++
Robert H McEachern replied on Sep. 22, 2016 @ 17:54 GMT
Tom,
The initial conditions and boundary conditions are the source of almost all the information content of any observable behavior. The "particles and waves" are merely the carriers of information that has been modulated onto them by the initial and boundary conditions, much like a radio-frequency carrier. For example, consider the double-slit experiment. No interference pattern is created if the detection screen is placed immediately behind the slits. The pattern only appears if the screen is far behind the slits, in the Fraunhofer diffraction region. It is well known that, in that case, the pattern is determined by the Fourier Transform of the slit geometry. In other words, all the information content of the pattern comes from the slit geometry. It has almost nothing to do with particles, waves, or even physics. It is a pure mathematical description of the geometry that has been modulated onto the particles and/or waves carrying the information to the detection screen. Consider the following:
A flat plate of glass is placed between a laser light source and a detection screen. A spot of light appears on the screen, caused by the light shining through the glass onto the screen. Now "smoke" some of the glass, reducing its transparency, so that only two slits remain clear. A so-called interference pattern now appears. But what causes this change of pattern to appear? It cannot be the particles or waves striking the glass, since they have not changed. It cannot be the path through the glass slits, since that has not changed either. It cannot be a change in the distance, and thus phase, along the propagation paths behind the slits either, since they too have not changed. The changed pattern is entirely due to the changed amplitude modulation of the carriers by the slit geometry; many of the particles and/or waves have been absorbed or deflected. This is what causes the change in the pattern. To see this even more starkly, consider "smoking" the glass at the slits, so that instead of an abrupt change from transparent to opaque, a Gaussian intensity distribution is used to gradually reduce the transmission through the slits, from center to edge. The interference pattern will now disappear, and be replaced by a classical pattern of two Gaussian humps, not because any path or phase has changed, but simply because the amplitude modulation (absorption and slight deflection of photons) has changed.
This spatial amplitude modulation is the same regardless of whether the particles pass through the slits one at a time, or as an army marching in lock-step like a wave. It is the same regardless of whether you view the situation as classical or quantum.
The information content of the pattern is not a property of the particles or waves; it is a property of the slits. Incorrectly attributing it to the particles and waves is the source of all the "weirdness". It is as though a fork has been shot through an apple pie, and strikes the wall behind the pie, where it is examined by an observer, who then exclaims, "Goodness gracious! How strange! How profoundly weird! The fork has taken on the characteristics of an apple pie! It tastes and smells just like an apple pie! Some strange, non-local physics must have spookily transported the apple pie's information content regarding taste and smell onto the fork, at faster-than-light speed!"
The particle/fork is merely the carrier of the information, not the source of it.
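The standard Fraunhofer result invoked above (far-field pattern = squared magnitude of the Fourier transform of the aperture) is easy to check numerically; here is a hedged MATLAB sketch with assumed slit dimensions, not taken from any of the posted programs:
% Illustrative only - assumed geometry; far-field intensity ~ |FT(aperture)|^2.
x = linspace(-1e-3, 1e-3, 8192);                    % aperture coordinate in metres
w = 50e-6; d = 250e-6;                              % assumed slit width and separation
aperture = double(abs(x - d/2) < w/2 | abs(x + d/2) < w/2);
farfield = fftshift(fft(aperture));                 % far-field amplitude ~ FT of the slit geometry
plot(abs(farfield).^2);                             % cosine fringes under a sinc-squared envelope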
Rob McEachern
Robert H McEachern replied on Sep. 22, 2016 @ 18:05 GMT
Eckard,
Cosine transforms may be more appropriate than Fourier Transforms, but non-transform-based filter banks are even more appropriate, since Mother Nature does not seem to have ever discovered the use of any orthogonal transformations. Hence, while transforms may be useful as computational models, they are poor physical models, since they do not appear to correspond at all to the way Mother Nature has actually implemented physical reality.
Rob McEachern
Steve Dufourny replied on Sep. 22, 2016 @ 18:55 GMT
If I can, Mr McEachern, I read your developments and how you explain the waves and particles and these informations. The computing and simulations are tools where you insert parameters with your MATLAB. Even with modulations, the informations are waves. You speak about Mother Nature; I am a nursery man, and I know Mother Nature, and I class all, even the maths. The informations are waves. These waves can be classed also. I don't really grasp how you interpret an information. Explain please. What is an information? How is it quantified? What is its nature? What is its origin? If we take the father of the theory of information, Shannon, we have a source, codes, a receptor. All this is quantified, and all are waves. Why then is an information not a wave in this case? Please develop or explain. A signal is a wave even if it is modulated. The bit is the unit of minimal information, so it is quantified like a wave, because it is just the yes or no and the 1 and 0, open, closed. It is the binary encodings, if I am right. So in all the cases, an information is always correlated with waves or particles, even for a thought in our brain. The AI so is simply a deterministic system where consciousness is not possible. Can an AI have a thought of love? It is not possible, because we do not check the gravitational stable codes, fortunately. All this to say that even the simulations must be deterministic. All the systems of binary encodings are under our special relativity and electromagnetism. We code the informations with electricity or hv, I suppose, or magnetic polarisation, but it is always under our special relativity, considering the waves and particles and the duality. Could you explain to me please, what is, physically speaking, an information? Do you class them? Binary? A human thought, the gravitation, the photons, or do you consider that informations are another thing?
Steve Dufourny replied on Sep. 22, 2016 @ 19:10 GMT
The AI is correlated with the Shannon entropy and the increase in informations. The sortings seem a main parameter, like the finite groups. It is possible with rational encodings, in utilising the turn-off/turn-on of bits for the codes. The number of informations, that said, must be important. If all is coded and encoded in an automaton, then at the points of necessary entropy the rational behaviour appears. It is intriguing if we check one day the gravitation and code this for AI.
Robert H McEachern replied on Sep. 22, 2016 @ 23:13 GMT
Steve,
As you may already know, it takes two points to determine a straight line. That is, it takes two samples. But how many significant bits must each of those samples have, so that, for a finite-length observation of a noisy line, you will be able to accurately determine the line? In other words, how many bits of information (significant bits per sample times the number of samples) are required, such that you can reconstruct the line as well as the noise will ever allow? That is how information is quantified: how many bits of data are required to reconstruct a continuous, noisy function.
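A hedged MATLAB sketch of that point (the line, noise level and sample count below are assumed, not taken from the actual coin program): once each sample is quantized more finely than the noise, extra bits per sample no longer improve the reconstruction of the underlying line.
% Illustrative only - assumed slope, intercept, noise level and sample count.
t = linspace(0, 1, 50);
y = 2*t + 1 + 0.1*randn(size(t));                   % noisy observations of the line 2t+1
err = zeros(1, 8);
for bits = 1:8
    q = round(y * 2^bits) / 2^bits;                 % keep 'bits' fractional bits per sample
    p = polyfit(t, q, 1);                           % least-squares line fit
    err(bits) = norm(polyval(p, t) - (2*t + 1));    % error relative to the true line
end
err                                                 % stops improving once the quantization step << 0.1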
Rob McEachern
Steve Dufourny replied on Sep. 23, 2016 @ 06:52 GMT
Thank you, I see better the meaning of bits and noise. And I understand your works better also. Best Regards
Eckard Blumschein replied on Sep. 24, 2016 @ 02:34 GMT
Rob,
Let me remind you of some often overlooked facts:
The sampling theorem refers to band-limited signals.
All measured functions of time or of frequency have corresponding limitations.
MATLAB nicely demonstrates that the more restricted the timespan of an analyzed function is, the wider the corresponding spectral "lines" are.
Accordingly, any description in terms of continuity is fundamentally different from a description in terms of bits.
I agree that physiology must not be based on the mandatory mathematical theory of signal processing. Nature cannot even "know" what reference point the latter is doomed to arbitrarily refer to when using the ordinary time scale, instead of the scale of elapsed time, which relates to the real now.
Your demo is nice. Hopefully it will be noticed by the narrow-minded.
++++
Robert H McEachern replied on Sep. 24, 2016 @ 11:16 GMT
Eckard,
Shannon's Capacity is dependent on signal Duration, Bandwidth and Signal-to-Noise Ratio. Hence, it includes the effect of band-limited signals on the number of bits of information that can be recovered from a continuous signal. Shannon's bandwidth limitation is derived from the limit you have noted on the sampling theorem.
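Stated explicitly (the standard Shannon-Hartley form, added here only for reference):
$$C \;=\; B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second},$$
so a signal of duration $T$, bandwidth $B$ and signal-to-noise ratio $S/N$ can yield at most on the order of $BT\log_2(1+S/N)$ recoverable bits.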
Rob McEachern
Eckard Blumschein replied on Sep. 28, 2016 @ 03:03 GMT
Rob,
To me, Shannon is more than a river and a location near Limerick. See my 2013 essay "Shannon's View on Wheeler's Credo". However, I don't yet fully agree with your reasoning. I will read your ideas again. Could you please guide me to literature that confirms your ideas?
Doesn't FT also connect continuous functions within one domain that is real in the mathematical sense and can directly describe reality, with discrete functions within a fictitious complex domain, and vice versa?
Coincidence detection definitely plays an important role in the physiology of the senses. However, there is much undisputed evidence, e.g. for tonotopy, too.
Maybe your "identical pairs" of noise have a background in fundamental mathematics?
++++
Colin Walker replied on Sep. 28, 2016 @ 04:17 GMT
Hi Rob,
I changed my C program's convolution function to use FFTs, resulting in a 20x speed-up. I downloaded a trial version of MATLAB and found that my program takes about twice as long (76 min) as your MATLAB program (36 min) on an AMD A6-6400K 3.9 GHz dual-core CPU, which has only one floating point unit.
There is a way to search for noise and threshold levels that give the best fit (numerically) to the cosine from one run of the program. The way I did it involves getting the program to write a file consisting of the reduced data for each coin. The data are the angles Dang1 and Dang2, and separate polarity correlations for the coin and the unscaled noise. E.g., instead of Corr1, calculate Corr1c for the coin, and Corr1n for the noise. Since Corr1c and Corr2c depend only on the angle, they can be computed outside the coin loop.
A second program then reads the data for each coin (Dang1, Dang2, Corr1c, Corr1n, Corr2c, Corr2n) and reconstructs the original polarity correlation, e.g. Corr1 = Corr1c + NoiseAmp * Corr1n. The squared error of the cosine fit can then be calculated with relatively little computational effort for many values of NoiseAmp and Threshold, and the minimum selected. I get values for NoiseAmp and Threshold somewhat larger than used by your program, but there is a range of values which give a good fit.
Changing the bandwidth of the smoothing function has little effect. I changed the expression (line 69) that determines the width of the Gaussian from "RR=0.25*(FR)^2" to "RR=0.0025*(FR)^2" to make a much narrower pulse with a higher bandwidth. The squared error of the cosine fit is minimum at NoiseAmp = 5.473, Threshold = 13.2, producing a detection rate of 68.9%. The noise level is not much different. The only thing that changed significantly is the threshold. The detection rate is nearly unchanged leading me to suspect that it is perhaps as fundamental as signal-to-noise ratio.
Your coin model reminds me of Caroline Thompson's "The Chaotic Ball: An Intuitive Analogy for EPR Experiments" in the way it implies missing measurements. Your model has the great advantages of being quantitative as well as simpler, being based on a circle not a sphere.
I can't find the original citation for this, but the following is attributed to her: "A frequent objection is that local realism cannot match quantum theory when it comes to accurate quantitative predictions. True, it cannot easily match exactly the quantum-mechanical coincidence formulae (the ball model, illustrating principles only, does not even attempt to), but what is required is surely a match with experimental results, not with the quantum theory predictions." I think your model represents a step beyond illustrating principles, but her statement is still relevant.
Colin