Watching the Watchmen: Demystifying the Frauchiger-Renner Experiment — musings from Lídia del Rio and more at the 6th FQXi Meeting
Blogger George Musser wrote on Dec. 24, 2019 @ 19:04 GMT
Credit: Lídia del Rio
Even by their usual excitable standards, the physicists and philosophers who study the foundations of quantum mechanics have been abuzz about a thought experiment first proposed in 2016 by Daniela Frauchiger and Renato Renner at ETH Zurich, and later published in Nature Communications (Frauchiger, D., Renner, R. Quantum theory cannot consistently describe the use of itself. Nat Commun 9, 3711 (2018)). A blog post about it by Scott Aaronson of the University of Texas drew nearly 300 comments, and sparks flew at the two most recent FQXi meetings. So it was high time for me to buckle down and make sense of the experiment.
Free Podcast
An amped-up version of the Schrödinger Cat paradox spells trouble for all quantum interpretations, according to its architect Renato Renner. He tells Zeeya and Brendan how the controversial thought experiment works, and why he thinks it is bad news for fans of Many Worlds and quantum parallel universes, QBism, collapse models and (less so) Bohmian interpretations of quantum mechanics. But not everyone agrees.
You can listen to a detailed rundown of the thought experiment for beginners, in which Renner talks through each step, on the podcast. He also describes the controversy his paper caused, and how fans of various interpretations of quantum mechanics—including Many Worlds, QBism, Bohmian mechanics and collapse models—argue that the paradox actually supports their preferred model, while ruling out its rivals. But Renner, as you'll hear, disagrees, explaining that in his opinion, no current interpretation can provide a satisfactory way out of the paradox.
I've come up with my own way of describing it—vetted by Renner—and put it into the form of a quantum circuit that I've run on IBM's cloud quantum computer. Renner says it is the first experimental implementation of his experiment. (A closely related experiment proposed by Časlav Brukner of the Institute for Quantum Optics and Quantum Information in Vienna has already been performed (Science Advances 20 Sep 2019: Vol. 5, no. 9, eaaw9832).) The interpretive dispute will no doubt rumble on. But what makes quantum physics fun is the journey, not the destination.
The experiment engineers a contradiction between third- and first-person views: the objective perspective that physics traditionally provides and the experience of an embedded observer. "In physics we try to build a theory of the world as seen from the outside, as God would see it," Renner's colleague Lídia del Rio said at this year's FQXi meeting in Tuscany. "But of course, to do this, we have only, as a basis, our own observations. We are always talking about the point of view of some observers, and the best we can do is talk to each other, compare observations, and try to build a consistent picture." In the Frauchiger-Renner experiment, observers find themselves weirdly unable to do this. "The agents will make some inferences about each other's results, which in the end will be contradictory," del Rio said.
These days, especially, it seems naïve to expect that we could reach consensus through dialogue. But et tu, physics?
Becoming One With Nature
As usually presented, the experiment involves a convoluted series of measurements and logical deductions. But stripped to its essence, all you are doing is measuring a pair of entangled particles in two different ways. Normally, the first measurement of a particle would disturb it, spoiling the second. But Frauchiger and Renner propose a trick to measure and remeasure the particle in its pristine state: combine a direct and an indirect measurement. One observer measures the particle, and another measures the first observer. The first measurement transfers the state of the particle to the combined system of particle and observer, making it available for a second look. Frauchiger and Renner argue that, in specific cases, the indirect measurement is just as good as a direct one.
So, this experiment has the feature that observers are themselves observed. In most presentations of the experiment, the observers are human beings, but they could be just particles. All they have to do is make a prediction on the basis of quantum theory, and, for Frauchiger and Renner's scenario, that is a simple logical operation. Swapping particles for people makes the whole business of observing-the-observer seem less mysterious and implausible. That said, it also lessens the philosophical puzzle, because only if the observers are people can they be said to have a first-person viewpoint.
This procedure requires four observers in all, two for each of the entangled particles. Let's call those making the direct measurement the "friends" and those making the indirect measurement the "Wigners," in homage to the physicist Eugene Wigner, who was one of the first to note that observing the observer is a useful test case for interpretations of quantum theory. If the particles are photons, the observers measure their polarization using a special light filter. The friends orient their filters horizontally, and the particle either passes through (0) or reflects off (1). The Wigners orient theirs diagonally, and again the particle either passes through (+) or reflects off (–). So, that's four results to compare:
1. What a friend saw for the first particle vs. what a friend saw for the second
2. What a Wigner saw for the first particle vs. what a friend saw for the second
3. What a friend saw for the first particle vs. what a Wigner saw for the second
4. What a Wigner saw for the first particle vs. what a Wigner saw for the second
The team creates and measures multiple pairs of particles to see the statistical trends. The particles are entangled in a way devised by Lucien Hardy of the Perimeter Institute in 1993. This state can be written in four equivalent ways corresponding to the above cross-comparisons:
1. |00> + |01> + |10>
2. |+0> + |+1> + |–1>
3. |0+> + |1+> + |1–>
4. |++> + |+–> + |–+> – |– –>
To write these is just an exercise in geometry, using the fact that diagonal is part horizontal and part vertical. I am neglecting the exact probabilities for these sundry outcomes; Hardy considered a range of values. What's important is that, in the first three formulas, only three of the four possible outcomes arise, whereas in the fourth all can occur. Hardy showed that such a pattern is hard to explain and seems to require some spooky coordination among the particles.
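The four rewritings are easy to check numerically. Here is a minimal sketch, assuming for simplicity the equal-weight version of the Hardy state (|00> + |01> + |10> with equal amplitudes; as noted above, Hardy considered a range of values). It changes basis with a Hadamard rotation and counts which joint outcomes have zero amplitude:

```python
import numpy as np

# Unnormalized Hardy state in the 0/1 basis: |00> + |01> + |10>,
# written as a 4-vector with outcome ordering 00, 01, 10, 11.
psi = np.array([1.0, 1.0, 1.0, 0.0])
psi /= np.linalg.norm(psi)

# Basis change from 0/1 to +/- for one qubit (a Hadamard rotation):
# |+> = (|0> + |1>)/sqrt(2), |-> = (|0> - |1>)/sqrt(2).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
I = np.eye(2)

# The same state written in the four basis combinations above.
amps = {
    "0/1, 0/1": psi,                          # formulation 1
    "+/-, 0/1": np.kron(H, I) @ psi,          # formulation 2
    "0/1, +/-": np.kron(I, H) @ psi,          # formulation 3
    "+/-, +/-": np.kron(H, H) @ psi,          # formulation 4
}

# In the first three formulations exactly one joint outcome is missing
# (zero amplitude); in the fourth, all four outcomes can occur.
missing = {k: int(np.sum(np.isclose(v, 0.0))) for k, v in amps.items()}
```

Running this confirms the pattern: one missing outcome in each of the first three formulations, none in the fourth, with the minus sign appearing on the – – amplitude.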
Frauchiger and Renner have a different aim. They don't seek to explain how the particles could exhibit this pattern, only what happens if they do. Because the first three formulas contain a restricted set of outcomes, a friend can sometimes be certain what a Wigner will see, and vice versa. Based on that, we can draw some conclusions for what they will see and surmise.
When the first friend measures 0, she can conclude the second Wigner will measure + (per #3).
When the second friend measures 1, she knows the first friend must have measured 0 (per #1) and concluded that the second Wigner will measure +. The second friend adopts this prediction as her own, on the assumption that if you know that someone knows something, you know that thing, too—a principle that philosophers call "closure."
When the first Wigner measures –, he knows the second friend must have measured 1 (per #2). He now adopts the friend's prediction.
But sometimes when the first Wigner measures –, the second Wigner will measure –, too (per #4). That violates the prediction. Paradox!
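The chain of inferences can be put in numbers. This sketch (again assuming the equal-weight Hardy state, an illustrative choice rather than the only one) verifies both the first friend's certainty and the frequency of the paradoxical outcome:

```python
import numpy as np

# Equal-weight Hardy state |00> + |01> + |10>, normalized; ordering 00, 01, 10, 11.
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3.0)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # 0/1 -> +/- rotation
I = np.eye(2)

# Formulation #3: particle 1 in the 0/1 basis, particle 2 in +/-.
# Amplitude ordering: 0+, 0-, 1+, 1-.
amps3 = np.kron(I, H) @ psi
p_0_minus = abs(amps3[1])**2                      # P(friend 1 sees 0 AND W2 sees -)
p_first_0 = abs(amps3[0])**2 + abs(amps3[1])**2   # P(friend 1 sees 0)
p_plus_given_0 = 1.0 - p_0_minus / p_first_0      # certainty of the prediction

# Formulation #4: both particles in +/-. Ordering: ++, +-, -+, --.
amps4 = np.kron(H, H) @ psi
p_paradox = abs(amps4[3])**2                      # both Wigners see -
```

With these weights, `p_plus_given_0` comes out exactly 1 (the friend's prediction is firm), while `p_paradox` comes out 1/12: the prediction is nevertheless violated in one run in twelve.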
Winding Back the Clock
When critics such as Aaronson say Frauchiger and Renner got it wrong, they are not disputing that the experiment gives the results it does. It's the interpretation that riles them.
Many have latched onto the strange feature that the observers are themselves observed. Observation is not a passive operation, but a thoroughgoing alteration. In the course of doing their indirect measurement, the Wigners undo the friends' direct measurement and wipe their memory. The friends see something, then un-see it. To them, it is as though nothing has happened; when the experiment wraps up and everyone else goes out for after-work drinks, the friends are still sitting there asking, "When will the experiment start?" In some descriptions of the experiment, it's even worse: they enter a Schrödinger-cat-like state of complete ambiguity. This makes The Matrix or brain-in-vat scenarios look tame by comparison. It's one thing to imagine that our world is a virtual projection, another that someone could reach directly into our brains and decide what we think.
At the meeting in Tuscany, Aaronson and Raphael Bousso at U.C. Berkeley argued that if you can't trust in your own integrity as a reasoning agent, you shouldn't be surprised to encounter contradictions such as the one in Frauchiger-Renner. By screwing with the friends' temporal continuity, the experiment smashes the chain of logical statements. If someone has made an observation and then un-made it, you can't base any conclusions on that observation.
Renner and del Rio reply that the experiment is staged to avoid this problem. The friends do get wiped, but by that point, they have no further role to play in the experiment. Whatever they saw and concluded has already been incorporated into the analysis, and nobody refers to it again. Now, you might wonder, if their memory is wiped, then how can any record of their observation endure? This is the most critical part of the experiment. Most of the time, it is true that no record endures. But when the conditions I laid out above are satisfied—namely, when observers are able to make definitive predictions for one another—information lives on. That happens for one in six trials (given the specific Hardy state used by Frauchiger and Renner), and a contradiction arises in half those cases. Thus the experiment walks a line: in undoing an observation, it sometimes preserves a trace of it.
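The one-in-six and one-in-two bookkeeping can be checked the same way. A sketch, under the same equal-weight Hardy state assumption as above:

```python
import numpy as np

# Equal-weight Hardy state |00> + |01> + |10>, normalized; ordering 00, 01, 10, 11.
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3.0)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
I = np.eye(2)

# Formulation #2: particle 1 in +/-, particle 2 in 0/1. Ordering: +0, +1, -0, -1.
amps2 = np.kron(H, I) @ psi
p_w1_minus = abs(amps2[2])**2 + abs(amps2[3])**2   # W1 sees -, i.e. makes a firm prediction

# Formulation #4: both in +/-. Ordering: ++, +-, -+, --.
amps4 = np.kron(H, H) @ psi
p_paradox = abs(amps4[3])**2                       # overall rate of contradiction

# Conditional on a firm prediction, how often is it contradicted?
p_contra_given_firm = p_paradox / p_w1_minus
```

Here `p_w1_minus` is 1/6 (the trials in which information lives on) and `p_contra_given_firm` is 1/2 (the fraction of those trials in which a contradiction arises), matching the figures above.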
Making the Circuit
This can be illustrated by a quantum circuit—that is, an algorithm that can be implemented on a quantum computer. If you're new to quantum circuits, this section will probably make zero sense. The main takeaway is that the circuit shows how the observers don't need to be humans. Also, the circuit lays bare the sequence of events and the conditions under which information can endure, allaying some of the skeptics' misgivings.
I've implemented the circuit using Quirk, an online quantum simulator created by Craig Gidney in Google's quantum-computing group. (Gidney has his own circuit version of the Frauchiger-Renner experiment.) You can run and modify the circuit for yourself.
I'm attaching a PDF version of this circuit to this post. If you scroll down to the bottom of the post, you can click "Quirk_circuit.pdf" to open a larger version, so you can more easily see the details.
Here, the observers are abbreviated F1 and F2 (the friends) and W1 and W2 (the Wigners). The first two wires (horizontal lines) are qubits representing the entangled particles. The next two are flags indicating whether a given observer is able to make a firm prediction. The following two are the actual predictions. Although we have four observers, we need only two sets of wires, since we track only two observers at a time. The bottom wire is a workspace where observers compare their results and confirm they are entitled to make the inferences they do—an aspect of the Frauchiger-Renner experiment that tends to get overlooked.
Quirk has a nice set of probes—colored green or cyan—that show the qubit values and their correlations at any stage nondestructively. The two boxes with four little yellow circles are custom operations to create or manipulate the Hardy state. The rest of the symbols are standard quantum circuit symbols.
The steps in the procedure are:
1. Hardy state preparation
2. F1 measures particle 1. If 0, F1 is able to make a firm prediction—namely, that W2 will measure + (0). Otherwise F1 assigns equal probabilities to + (0) and – (1). This gives formulation #3 of the Hardy state.
3. F2 measures particle 2. If 1, F2 is able to make a firm prediction—namely, that F1 measured 0. Otherwise F2 assigns equal probabilities to 0 and 1. This gives formulation #1 of the Hardy state.
4. If F2 does make a firm prediction for F1, she can further conclude that F1 has made a firm prediction for W2—namely, that W2 will measure + (0)—and hence can provisionally adopt that prediction as her own. Because of the sign conventions adopted in this circuit, F2’s prediction for F1 (in the 0/1 basis) is automatically a prediction for W2 (in the +/– basis).
5. F1 and F2 confer and check for two errors: whether, when F2 is able to make a firm prediction for F1 and W2, F1 either (i) could not make a firm prediction for W2, or (ii) made a different prediction. This tests the assumption of transitivity of knowledge. Note that F2 can adopt a prediction only if it is firm; if she tried to adopt probabilistic predictions, the next step would fail. A firm prediction can be made without using two-qubit gates. This selective reasoning is the main asymmetry in the experiment.
6. F1’s role is over, so her measurement can be undone, clearing the way for W1 to make his. F1’s prediction qubits can be put to other uses.
7. W1 measures particle 1 in the +/– basis. If – (1), W1 is able to make a firm prediction—namely, that F2 measured 1. Otherwise W1 assigns weighted probabilities to 0 and 1. This gives formulation #2 of the Hardy state.
8. If W1 does make a firm prediction for F2, he can further conclude that F2 has made a firm prediction for F1 and thus for W2, and hence can provisionally adopt that prediction as his own. Because of the sign conventions in this circuit, we have to invert W1’s prediction for F2 in order to interpret it as a prediction for W2.
9. W1 and F2 confer and check whether, when W1 is able to make a firm prediction for F2 and W2, F2 either (i) could not make a firm prediction for W2, or (ii) made a different prediction. As in step #5, W1 can adopt only a firm prediction or else the next step would fail.
10. F2’s role is over, so her measurement can be undone, clearing the way for W2 to make his.
11. W2 measures particle 2 in the +/– basis and obtains statistics for formulation #4 of the Hardy state, including the – – (11) outcome one run in 12.
12. W1 and W2 confer and check whether W1 erred, i.e. whether he predicted + (0) with certainty yet W2 obtained – (1). And he did err for one run in 12, half his predictions.
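Even without access to Quirk or IBM's hardware, the final-step statistic can be reproduced with a quick Monte Carlo draw from the formulation-#4 probabilities. A sketch, again assuming the equal-weight Hardy state, whose 9:1:1:1 outcome weights follow from the amplitudes (3, 1, 1, –1)/(2√3):

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed so the run is reproducible

# Outcome probabilities for ++, +-, -+, -- in the equal-weight Hardy state.
probs = np.array([9.0, 1.0, 1.0, 1.0]) / 12.0

# Sample many runs; outcome index 3 is the paradoxical -- result, in which
# W1's chain of firm predictions said W2 must see + (0).
n_runs = 100_000
outcomes = rng.choice(4, size=n_runs, p=probs)
error_rate = float(np.mean(outcomes == 3))   # should hover near 1/12 ~ 0.083
```

This is only sampling from the predicted distribution, not a gate-level simulation like Quirk's, but it mirrors the error statistics the circuit produces in step 12.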
Just for fun, and just because I could, I ran this circuit on the IBM Q Experience online quantum computer. IBM's interface is sleek and easy to use, but I had to strip down the circuit to accommodate the hardware's limitations, not all of which are documented. I'm grateful to Paul Nation at IBM's Quantum Computing group for his help. The output is now a single error bit signaling a paradox: the second Wigner observed – (1) even though the other observers had predicted + (0).
First I ran the circuit on IBM's own simulator and got such an error in 86 of 1024 trials, closely matching the theoretical prediction of one in 12. Then I ran it on an actual quantum computer located in Ourense, Spain. It was just as easy as running the simulator and took less than a minute. I got 461 errors in 1024 trials. This higher value suggests that the device is rather noisy. The other processors that IBM makes available gave similar results. I also checked some of the intermediate values and, not surprisingly, the early steps roughly match theory, while later ones deviate significantly.
To Each His Own
So, if the strange wiping of memory doesn't account for the paradox, what does? It comes down to the unpalatable choice between quantum physics and the objectivity of knowledge.
Quantum physics says the second friend does not—and could not—measure the second Wigner directly. If you can't measure something even in principle, most physicists would question whether that thing exists, in which case there isn't any such thing as a direct comparison of these two observers, and the observers commit a fallacy by pooling their knowledge to draw the comparison nonetheless. Yet pooling knowledge is what scientists do. They couldn't function otherwise.
If you accept quantum mechanics as hard fact, but then weaken the category of "hard fact," haven't you swallowed your own tail? Physicists created the theory and proved it experimentally by stringing together inferences. Every measurement they make is indirect—a long chain of "if this, then that" stretching from the state of a particle to a signal a human can perceive. Those who would give up objectivity to save quantum mechanics may lose both. (That said, maybe quantum mechanics has a theory of knowledge tucked inside it in the form of quantum Darwinism, see "The Evolution of Reality.")
Carlo Rovelli and others have argued for years that quantum mechanics is perspectival: there is no third-person view at all. They still accept some kind of postulate of consistency: observers' viewpoints may differ, but must mesh whenever they come into contact, so that no out-and-out contradiction arises. Yet they remove the most natural explanation for that consistency: a world independent of us. Frauchiger and Renner's experiment might nudge more people to adopt a perspectival view, but heightens the puzzle of how we ever come to any agreement.
Of course it's not. The observer isn't doing anything. It is the particle that strikes the observer; only then does the observer respond. The observer is a second party, not the first.
I see that too, that's really funny! Ho...ho...ho!
We discover by classical means that things get so small that we can't directly measure them, or sort them out classically to do so anyway. And if we accept the discrete interpretation of the photo-electric effect rather than Compton, and Planck's *pre-loaded* hypothesis, then there is no 'early warning' field effect to alert the designated observer that the particle is coming. Transfer of any direction of momentum, or any bit of information in the catalogue of Quantum Spin Characteristics is by touch. And that is a parabolic function. While at the same time in Classicism, the Lorentz Transform (a hyperbolic function) treats a particle the same way; the contraction of length is only along one direction and it collapses to infinity! Either way, we are dictated by observation, measurement and theory that the Universe is one of Neutral Centrality. A particle only knows its own center. So the observer is an arbitrary starting point, but by designation is the observational rest reference and only the first party by virtue of that arbitrary designation. While in the physical action it is actually the second party and the offending particle is the first party. And neither party can see the third party unless the first party runs into it subsequent to the initial observational incident, or the observer is propelled by the initial collision to run into a third party that would have to be one different than what the third party could bump into. Unless, and only if the first party encounters a higher momentum that bounces it back to hit the observer on its now observed trajectory. If not it would have to hit yet at least one more party for any party to touch the original second party observer!
So the question becomes, if transfer is from the greater value property in action to the lesser valued property at a ballistic rate, how can we determine after the fact of the second collision with or by the original observer, whether any collision subsequent to the first is by any of the already involved particles or with other particles unobserved prior to or following any singular collision event?
Odd, isn't it? The general consensus accepts Neutral Centrality, and works well with it. And there is general agreement that, Yes, there exists no universal physical reference to establish an absolute scale for measurement. For all we can tell our Universe exists on a plane that is itself the shelf in a school girl's locker, and has only existed since she printed it out in her class on multi-dimensional programming just before lunch period. And yet turn right around and say, "What what? No! A second is a second! What do you mean, 'how slow is that?' ". How long is a Meter if we have no scale of length of span other than somewhere between nil and light velocity? That's the long and the short of it, I think we need to rethink and reach consensus on what physical properties constitute matter and why. Well wishes with the lengthening days into the coming year, jrc
Your deep question, "How Matter Knows Matter" is provocative and likely one reason why this topic will struggle to gain acceptance. A century of conventional bias is being challenged on and in its own terms.
I dug out an essay from the FQXI 2012 archives that may be of interest to you, along with its full page of footnote citations which include selected works by contributors to the physics that spawned the current QM preferred interpretations.
This FQXI Topic is engaging enough that I've violated my privacy protocols and reinstated an account! It also dovetails nicely with the Essay Contest this year.
The advanced level of Method on which the arguments of Frauchiger & Renner are predicated, should not discourage interest and there are less technical explanations available. One that I've seen is in terms more familiar in Quantumagazine (note: only one 'm') with this link:
The discussions online open up a wide vista of revisitation on the whole Quantum Mechanical paradigm, and we must not "throw the babe out with the bathwater". But questioning our way of question was the origin of The Age of Reason, and is the hallmark of human sentience.
I do hope capable and knowledgeable practitioners in QM will soon avail themselves of the growing literature, and lead the discussion here on the FQXi Forum. Happy New Year to All, jrc
The article is written by my friend and colleague Anil Ananthaswamy and I enjoyed it, as I do all his writing. That said, I think the presentation is confusing in certain ways and that my reformulation, though it may seem more complicated at first, is actually more straightforward and clears up some misconceptions.
I browsed Mateus Araujo's blog 'More Quantum' and his arguments were in the actual maths, all beyond my understanding, what do you think? Overall it appears that critiques of F-R methods do follow a rigor of form, but to the uninitiated that also bolsters F-R contention that form following form is not a disproof of form follows function. Are you satisfied that QM is complete? best-jrc
Blogger George Musser replied on Dec. 30, 2019 @ 21:33 GMT
In his second post on the topic (http://mateusaraujo.info/2018/10/24/the-flaw-in-frauchiger-and-renners-argument/), Araújo suggests a flaw in the mathematics - which is, to my knowledge, the only time anyone has suggested there is such a flaw, as opposed to a disagreement over interpretation. He qualifies this, a bit, in an update. I'm probably missing something, but I still think we have a conflict between quantum mechanics and the objectivity of knowledge.
Whether quantum mechanics is complete depends on what you mean by "complete". There is no indication that it is an approximation to a deeper theory. But all interpretations implicitly assume it is not complete in the sense that it requires auxiliary assumptions, such as a collapse postulate or, in an Everettian interpretation, a probability measure or definition of "world".
thanks for a qualified review of Araujo's math, that helps me get some idea of how the community is receiving Frauchiger and Renner. I couldn't catch more than some phrases of his in the FQXi podcast which was like trying to hear what someone says on a call-in program who hasn't muted their TV. Maybe Zeeya could get Perimeter to spring for a good bypass filter?
Personally from my take on the Planck quanta burst hypothesis, I would qualify any collapse hypothesis to a limit of its original formulation. That being essentially to where and when an electron might be found within the spherical bound of an atom. And as I understand its utility, a second measurement conducted immediately after the first will return the same value, which further bolsters the time dependent burst emission of a stream of 'quanta' (discrete or continuous), and the wave equation's relation to the outside world would be the spatial direction of the emitted EMR. Whether the time parameter of the two (mathematical) measurements would correlate with the duration required for the frequency Quantum observed, to project, is beyond my level of play, but it may suggest more than one electron in an excited state or the lag time of an absorption:emission event.
Thanks again for going out on the open forum limb, and you do have the pedagogical prerogative. jrc
you state, "I still think we have a conflict between quantum mechanics and the objectivity of knowledge."
That actually says a lot, and it can be expanded to the conflict between QM and Classicism without harm to either. Have a Happy New Year and beware of 'Amateur Night'. jrc
Georgina Woodward replied on Dec. 31, 2019 @ 23:21 GMT
Knowledge is not always objective, even when macroscopic observations are performed. Consider 4 observers, 2 pairs opposite each other with a box between them, so that each pair can see 3 sides. The pairs have two different viewpoints constructed from mutually exclusive information from the electromagnetic radiation input received at their location. Call the viewpoints Va and Vb. The two observers sharing (both generating) Va will agree, what they see is objective. Two observers sharing (both generating) Vb will agree, what they see is objective. But if pairs of observers are made from observers with opposite views they will not agree and cannot say what they see is objective by corroboration of the other observer. How the knowledge is obtained matters.
Georgina Woodward replied on Jan. 1, 2020 @ 00:35 GMT
I forgot to mention; the box should be differently coloured, patterned and have different text on its sides, like any ordinary commercial packaging. Alternatively it could be a large die. In this way the received data from each face will be different. If Va shows faces ABC, Vb shows DEF. Generated from mutually exclusive data sets.
Interestingly, the state of the box unobserved is not Va or Vb, nor Va and Vb, because no viewpoint has been established. It is not in a condition of both observation outcomes but has co-state potential. It has latent potential, meaning it has the potential to be observed in different ways by an observer or observers, each producing their own observation products.
and a good illustration of what George was saying. It also has a similarity in form to the observer scheme proposed by the Frauchiger-Renner experiment, though I hasten to add that yours appears more classical. Keep in mind that QM doesn't formally pretend realism, only a result of mathematical formulation that approximates a (often subjective) realistic expectation. But you are right, and your set-up provides multiple variables on six fronts.
If you have digested George's brief synopsis, and done a little digging, you'll recognize the signature of QM methodology of seeking a solution of, rather than to or for, an inequality. After all, what percentage is there in equality? Happy :-)newYear jrc
Georgina Woodward replied on Jan. 1, 2020 @ 09:23 GMT
Thank you John, happy new year. George mentioned a conflict between QM and the objectivity of knowledge. I'm attempting to show that the idea that knowledge from observation or measurement must be objective is a fallacy. As well as showing that the idea that a superposition of outcome states represents the condition of a beable observable, prior to a measurement outcome, is not valid. As an outcome requires imposition of the observation method or viewpoint, which establishes the 'looked at this way' necessary for that outcome. It may look to you like a classical argument. However the background universe in which the scenarios should be imagined is not the space-time continuum. So no outcomes preexist waiting to be encountered. What the observers are seeing is not the external present slice of space-time but the space-time the observer generates from EMR input.
let me chew on that a bit. I'm off soon for a New Year fest now that the drunks are in the ditches. One quick point in the meantime; *objective* knowledge is born of consensus and Ch.1, "Introduction to Theory of Knowledge", Philosophy 101, the consensus accepts that Axiomatically we agree that some subjectively constructed assumption is taken as self-evident. We have to start somewhere. All branches of Philosophy and Science must contend with arguments of subjectivity v. objectivity, it's endless. But there is also "objectivity of knowledge" which goes to what it seeks. I think George's main concern is that QM's objective is largely to support an ad hoc methodology that evolves from the rough measurement in Chemistry when Physics was the poor cousin, that the Proton was an equal multiple of the electron mass, measurement space was 3D+t, geometry was Trig, and Derivatives was Top of the Form in math; and the atom was a child's bad drawing of additive whole numbers.
Roast Pork and Sauerkraut, Mashed Potatoes! jrc
PS: that said, QM methods while ad hoc, also supports a consistent system of orthogonal mathematical proof of a deductive argument. ie. the first and second lists from Lucien Hardy's schemata of entangled particle relationships which is employed by Frauchiger & Renner's Wigner type observers, presented by George Musser in his synopsis on this blog.
Take a book with clearly legible large print on the cover, let's say National Geographic's "Field Guide to the Birds of Eastern North America", and hold it upside down in front of a mirror. You can read it right to left, and with the ease that comes from familiarity with the written language. Why? Because it goes to developing conventions of measurement, a definition of terms, an entirely subjective interpretation of observation, but not without the objective of making measurement practical. For what you are perceiving from the reflection ("generated EM input" if you wish) is the negative polar vector of the normally oriented perception. And normal is the general consensus of a vertical line intersecting the center of gravity of your reference frame, i.e. the normal line. Normally what you see reading is the normal polar vector. Hold the book right-side up in front of the mirror and what you generate is the normal negative vector; the top of the book is still in the normal orientation. So which direction does the Earth rotate? It depends on which hemisphere you are in. General consensus accepts that north is "up", rotation is clockwise, left towards right (even though if you look at the rotation from a position above the North Pole it would be CCW), and the sign of rotation is (+). In geometry, if you make an orthogonal with your right hand, thumb vertical, first finger straight out and second finger cocked straight leftward, and trace a spiral left towards right from knuckle(s) toward tips, that is right-hand torque, (+), CW. And of course if you observe the whole Earth on the polar vector from the south, the rotation is CW, etc. Those are subjective reasons, but valid by consensus to make observation objective.
You can contrive a scenario that dispenses with conventions and construct an inductive argument of subjective reason, but that does not invalidate a deductive argument of objective reason, nor validate itself on its own terms. Nor can it be used thereafter as a deductive argument. So to state "the idea that knowledge from observation or measurement must be objective is a fallacy" is an amphiboly, a fallacy itself. The ambiguity lies in the use of the word 'must'. Knowledge from observation or measurement is objective by conventional definition. It would be a valid statement to say it "need not be objective" if it is a subjective derivation that is agreed by general consensus.
The lay-out of your observer experiment is entirely objective; how you then use it is subjective. It has elements similar in appearance to the F-R layout, but it is not at all the same. Your observers in each pair not only can see each other, they can reach consensus, though neither pair can see the other. The box is not an entanglement state; it is, by your scheme, uniquely asymmetric. It's just that only three adjacent planes can be observed by one observation pair, and none of those can be among the three adjacent planes observed by the other pair. (NOTE!!! That does produce a blind spot.) So your box can be a candidate for quantum superposition but not an entangled pair.
Sorry for the length, but I want to cover all bases and refresh what you should know from your comments on the study of topology. So given all that, the element of your scenario that has a similarity to one in Frauchiger & Renner is The Blind Spot. But due to the Wigner-type observers being non-commutative, the blind spot in Hardy's first three ranks of the fourth column would not be a construct of observer pairs similar to yours. A Wigner observer can observe the observer in "the lab", but not vice versa. But both can deduce primitive conclusions from the set of qualifications of entanglement relations.
The three missing blind spots in list #2 make crossing outer products impossible for conjugating a global orthogonal proof of non-contradictory sets in all groups of the selected quantum interpretations under analysis. So Frauchiger and Renner have posed a comparative analysis that contends that at least one element in each interpretation would conflict with at least one other interpretation. Interpretation isn't the issue; it's consistency and non-contradictory definition of terms.
That's the level of rigor necessary to ante up in the Physics Executive Game. Sucks, dontit? :-) jrc
Georgina Woodward replied on Jan. 2, 2020 @ 04:39 GMT
Hi John, thanks for your thoughts. I was not attempting to construct an analogy of the experiment set out in the blog. I was addressing two points: objectivity of observation (by corroboration, not convention) and the validity of the superposition-of-states idea (at any scale). In relation to: "I still think we have a conflict between quantum mechanics and the objectivity of knowledge." - George Musser. Also relevant: "Carlo Rovelli and others have argued for years that quantum mechanics is perspectival: there is no third-person view at all. They still accept some kind of postulate of consistency: observers' viewpoints may differ, but must mesh whenever they come into contact, so that no out-and-out contradiction arises." - George Musser.
If we are talking of a model globe, it knows nothing of convention. It can be turned in any direction, and the rotation direction about its axis depends on its relation to an observer of it - "perspectival", Carlo might say.
Re. meshing of observer viewpoints: view VA excludes view VB. When they meet, excluding what we know from experience of boxes, the observers of different views of the box cannot corroborate each other's findings. Convention can tell us that boxes have fronts, backs, sides, tops and bottoms, and which observer is seeing the front. VA is a partial view, VB is a partial view. They are contradictory but not paradoxical. Each view is consistent with a three-sided object, not a six-sided one.
Re. superposition of states: I was only saying that the outcomes require the observer's viewpoint or method (think of the different methods of coin-toss calling) to be applied. Remember the box beable object is not VA or VB, and not VA and VB. The state/s do not exist of themselves. There has to be potential for becoming as a result of the measurement or observation. And variables that can become are not of the fixed kind required to comply with Bell's inequalities.
Georgina Woodward replied on Jan. 2, 2020 @ 10:23 GMT
." Knowledge from observation or measurement is objective by conventional definition."John. What about relativity? Isn't the measurement found dependent on how the measurement is performed :stationary vs moving observer relative to observed etc. And from that realization that it is the observer who is generating the seen present from its own uniquely received inputs.
I think 'obliged to be considered as' is what I mean by 'must'.
Robert H McEachern replied on Jan. 2, 2020 @ 14:46 GMT
Georgina wrote:
"I forgot to mention; the box should be differently coloured, patterned and have different text on its sides"
But it is critically important to realize that having different text on different sides is not the same as having independent text on different sides.
Observing a box with six independent texts on each of its six sides, is entirely different (enables a different amount of information to be extracted) than observing a box where each of the three pairs of opposite sides always exhibit a "standardized pair" of texts (like "heads" and "tails").
This is absolutely fundamental, to understanding what the Heisenberg Uncertainty Principle is really all about. This "standardized pairing" is what "entanglement" is - a redundant (non-independent) encoding of information - making it possible to reliably predict what the other side will be, without actually having to observe it. This is what makes all repeatable (hence predictable) behaviors possible - without which, there would be no such thing, as physics.
So think about what happens, when one pair of sides on a box exhibits a "standardized pairing", but the others do not - you get "weird" (unexpected) correlation statistics, like those observed in Bell tests.
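Rob's distinction between *different* and *independent* texts can be sketched numerically. The toy below is my own construction (the side labels and texts are invented for illustration), not his coin figure: one pair of opposite sides carries a standardized heads/tails pairing, another carries independently chosen texts, and only the former is predictable from a single observation.

```python
import random

def make_box():
    """One 'box' with two pairs of opposite sides shown. Pair 0 carries a
    'standardized pair' of texts (heads/tails, a redundant encoding);
    pair 1 carries two independently chosen texts."""
    h = random.choice(["heads", "tails"])
    return {
        "side0a": h,
        "side0b": "tails" if h == "heads" else "heads",  # fixed by side0a
        "side1a": random.choice(["A", "B"]),
        "side1b": random.choice(["A", "B"]),             # independent of side1a
    }

random.seed(1)
trials = 10_000
boxes = [make_box() for _ in range(trials)]

# Standardized pair: seeing one side predicts the hidden opposite side exactly.
hits_std = sum(("tails" if b["side0a"] == "heads" else "heads") == b["side0b"]
               for b in boxes)
# Independent pair: guessing the hidden side is no better than chance.
hits_ind = sum(b["side1a"] == b["side1b"] for b in boxes)
print(hits_std / trials)  # 1.0
print(hits_ind / trials)  # about 0.5
```

The redundant (non-independent) encoding is what makes the hidden side reliably predictable without observing it, which is the sense of "entanglement" Rob describes above.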
Rob McEachern
Robert H McEachern replied on Jan. 2, 2020 @ 15:07 GMT
Georgina wrote:
"What about relativity? Isn't the measurement found dependent on how the measurement is performed"
Yes, but "measurement" is not the same as "information recovery". All "information" recovery is predicated on having the a priori knowledge necessary to correctly decode (interpret) the raw measurements. In the case of relativity, raw measurements made in different frames of reference should never be directly compared; they need to be "decoded" via the Lorentz transformation prior to any attempt at comparison. The situation is ultimately no different from two observers using two different standards of measurement (like meters versus feet), who then become mystified when their measured values for the same measured object do not agree; all raw measurements need to be correctly transformed/decoded prior to any comparison.
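A minimal numeric sketch of the decoding point, using textbook length contraction; the rod, its proper length, and the relative speed below are my own illustrative assumptions, not anything from the thread:

```python
import math

def gamma(v, c=1.0):
    """Lorentz factor for relative speed v (units where c = 1)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A rod of proper length 2.0 is at rest in frame S.
proper_length = 2.0
v = 0.6  # speed of frame S' relative to S

# Raw measurements disagree: S' sees the rod length-contracted.
measured_in_S = proper_length
measured_in_Sprime = proper_length / gamma(v)   # 2.0 / 1.25 = 1.6

# Decoding: multiply the moving-frame measurement by gamma
# to recover the proper length before comparing.
decoded = measured_in_Sprime * gamma(v)
print(measured_in_Sprime)  # 1.6
print(decoded)             # 2.0
```

The raw values (2.0 vs 1.6) disagree, but once the moving-frame measurement is "decoded" with the Lorentz factor the two observers agree, just as a feet-to-meters conversion would reconcile two otherwise mystifying rulers.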
I put about two hours boiling things down and condensing to transliterate the Topic into a response to your inductive (objectivity of observation by corroboration not convention) logic. My brother uses corroboration as a proof, too, but not about SR. For him its about Jesus.
Objectivity is deductive logic; proofs are harder to come by. Conventions are necessary as practical measures for qualification and communication. I have no problem with questioning SR, I do it all the time. It lacks a couple of degrees of freedom to be realistic, and the paradoxes display that. If you have read Eddington, he briefly mentions that Minkowski recognized that we have no physical reference by which to determine whether the scale of a span of time is the same as the scale for a span of distance, so he just decided that we might as well treat them as if they were the same. Eddington didn't take that any further, or point out that Block Time, as a product of Minkowski's 4D spacetime, suffers from the same scale-independent condition as Newtonian absolute time. He just puts the sliding 2D scale of SR, which mathematically justifies Maxwell's result that light velocity must be taken as an absolute, not relative, velocity value (or such things as chemistry wouldn't work unless everything in the universe were moving at an identical uniform speed in relation to everything else), into a measurement space of non-justified scale. i.e. Scale Independent, not Scale Neutral; you see the distinction of terms(?). Scale Neutrality refers the scale of time to that of space and vice versa; Scale Independence refers either scale only unto itself and leaves discretion to the viewer (it should have a PG rating). Hence Sci-Fi's cast of Dr. Whos. SR is realistic in application, but only in either direction of one dimension.
Inductive logic is fine and good for conjecture; we wouldn't make much progress without conjuring an idea into being a real possibility. But conjecture is not testable. At some point, to become possible, that idea has to be formalized as a testable hypothesis with a concise and demonstrable geometric math, and an ontology that relates physical properties to that measurable timespace with as few degrees of freedom as necessary to realistically do so. What can I say but good luck. jrc
I have long appreciated your facility for explaining the functional reason for the transform in SR. The textbook and popular-science presentations do not do justice to its heritage in philosophy and mathematics: what makes it a watershed is that it was (at last) a formal solution to a measurement problem that had vexed mathematicians for a century since the discovery of electromotive induction, and perplexed philosophers since the Copernican model blew 'UP' into outer space and left mankind wondering what size anything actually is.
Adding to the fixation of those neophytes intent on saving science from itself: when it comes to theory of large numbers, you have to get into the upper 90th percentile range of the Gamma Function to begin to get a numerical result for the transform formula in computation on a scientific calculator, without a plug-in application like MATLAB and the knowledge and proficiency to use it. Conventional terminology of definition of terms, laboriously achieved in general consensus, has been blogged to oblivion in a cacophony of nerd words in Geekspeak. Chapters are written that can be summed from two terms: Neutral Centrality and Scale Neutrality. They deserve formalization as a conjugate pair. best jrc
Try looking at it afresh. Put aside all the accumulated boxes and look at the essential problem.
GR works well enough that, yes, time goes slower in higher gravitational fields. But a beam of light still goes light speed. (?!?) Time doesn't stop at light velocity in that higher gravity on Earth, so what's missing in the scheme? Try this, and don't worry about getting all Gamma: what goes to infinity at the extremities of the bounded equality is not time or mass, but the DIFFERENCE between velocity and either rest or light velocity. James Putnam arrives at it differently than I did, but that's independent duplication of results, so it's in the public domain. As a covariant rather than invariant function, to apply just to one mass at rest going to an energy state at light velocity, try to see if your thinking could be expressed as: C times the square root of [1-(v^2/c^2)]. James calls it "A New Gamma"; I just call it useful. It would mean that light velocity is the limit of acceleration because that is as fast as time can go without dragging everything in the gravitational field it's going through along with it. Time waits for no particle, it just feels like it has. Break time for this Bonzo :-) jrc
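For concreteness, here is the arithmetic of the quoted expression next to the conventional Lorentz factor. This is only a numerical illustration of the formula as written in the post; the sample speeds are my own choices, and nothing here endorses the "New Gamma" reading of it:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def standard_gamma(v):
    """Conventional Lorentz factor 1/sqrt(1 - v^2/c^2); diverges as v -> c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def quoted_expression(v):
    """C * sqrt(1 - v^2/c^2), the covariant form quoted in the post;
    it runs from C at rest down to 0 at v = C."""
    return C * math.sqrt(1.0 - (v / C) ** 2)

# Tabulate both over a few fractions of light speed.
for frac in (0.0, 0.5, 0.9, 0.99):
    v = frac * C
    print(f"v = {frac:4.2f} c   gamma = {standard_gamma(v):8.3f}   "
          f"C*sqrt(1-v^2/c^2) = {quoted_expression(v):.3e}")
```

Where the conventional factor blows up approaching c, the quoted expression instead falls smoothly from C to zero, which is the behaviour jrc is pointing at.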
Georgina Woodward replied on Jan. 2, 2020 @ 22:56 GMT
John, Robert, thank you for your time replying. I'd rather get my thoughts tidied up before submitting an essay. John, I don't know what you are commiserating for. The speed of light being unchanged is a result of the alteration of the metric in relativity. I'm not making any claim that objective-by-corroboration is superior to objective-by-convention. Objective by corroboration is just a way of reality checking: Did you see that? - Yes I did - So we agree it is an objective reality. Robert, you point out that different observer views should not be directly compared. Yes, because they aren't objective (meaning free from the personal bias due to perspective). I think the word 'objective' is problematic as it has a number of different meanings.
Georgina Woodward replied on Jan. 3, 2020 @ 00:21 GMT
", try to see if your thinking could be expressed as: C times the square root of [1-(v^2/c^2)]." John. Firstly the concept of (foundational ) time is competently different. That is time existing independent of observation is uni-temporal and the entire configuration of all that is in existence. What is seen , the content of an observer's reference frame is not a present slice of observation independent space-time continuum. It is observation product generated by the observer, from emr received from the local environment. So the space-time for which a transform between views is applicable is product of input processing by the observer. Like the spinning observer, its seen product is affected by how the emr is received. Processing is far slower than the speed of light. The effect of such info. overload on the generated products is open to experiment.
That would mean it would be theoretically possible to attribute an absolute velocity to any observable object. Letting light velocity float to do that means falsifying Maxwell and replacing the whole book on electromagnetism. jrc
Georgina Woodward replied on Jan. 3, 2020 @ 03:49 GMT
Why do you say that, John? Material objects are not seen themselves, but observer-generated semblances, pertaining to the particular emr received and processed, are. Relativity applies to those seen products. Because of the way in which the visual systems of organisms and devices such as cameras function, those observers generate products. They are space-time images, not the external space-time in which the material observers exist. As each different configuration of all that exists is a different time, it makes no sense to talk of the speed of time in relation to that, as change of configuration is not a singular distance over clock time. Change of observed present is to do with processing of input, and also does not have a singular distance over clock time.
Georgina Woodward replied on Jan. 4, 2020 @ 10:16 GMT
John, "The close examinations of scientific practice that philosophers of science have undertaken in the past fifty years have shown, however, that several conceptions of the ideal of objectivity are either questionable or unattainable. The prospects for a science providing a non-perspectival "view from nowhere" or for proceeding in a way uninformed by human goals and values are fairly slim, for example." - "Scientific Objectivity", The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), Edward N. Zalta (ed.). 'Non-perspectival' is the expression I need. The image realities generated by the observers of the box are not non-perspectival. 'Objective' has too many different meanings and connotations. I think 'non-perspectival' is the word George needs when he mentions giving up objectivity. The measurements are perspectival and can't just be added together, as if that doesn't matter.
Removing oneself from the equation (so to speak) only works to a point. Then we are faced with making another assumption. The "view from nowhere" phrase does have the human touch, and it has long seemed to me that no matter what I try to figure out (physics is a get-away), an objective view is temporarily satisfying, but very quickly I again wonder if I understand anything. Perhaps deduction is limited to a range of applicability. It's how we try to qualify subjective ideas, but it doesn't cover everything.
The numerous assumptions and skips in the explanatory models that evolved with discoveries of apparently material sub-atomic particles, and the contradiction between the intensity of luminosity in a spherical wave and the photo-electric effect, which is the origin of the supposition of superposition, all combine in a highly subjective state of quantum conjecture. I don't handle it well at all. jrc
Georgina Woodward replied on Jan. 5, 2020 @ 04:33 GMT
Object reality: beable existence, complete, non-perspectival.
E.g. box: complete element of object reality, non-perspectival, material beable, existence.
View VA of box: observation product, image reality, partial, fixed, perspectival.
VAx2 is a corroborated perspectival view. Both obtained from emr emitted from the same part of the object's surface. More than subjective, due to corroboration, but not fully objective, as they pertain to a particular perspective.
VB: observation product, image reality, characterization list the same as VA: partial, fixed, perspectival.
VBx2: corroborated perspectival, partial views. Both obtained from emr emitted from the same part of the object's surface.
VA cf. VB: mutually exclusive views, uncorroborated; a different part of the object's surface is the source of the emr going to the different observers seeing VA and VB.
The box object source has 6 sides, 8 corners and 12 edges; each can be rotated from 0 to 360 degrees and viewed (product generated) from very near to far away.
The box object is not any view of it, nor a combination of all views. It is non-perspectival and may have an inside as well as a surface.
Georgina Woodward replied on Jan. 6, 2020 @ 00:23 GMT
Even if not filled, the box has inside surfaces, corners and edges. The outer sides, edges and corners can be tilted in various ways relative to an observer of the box. The box of course is macroscopic. The box has numerous co-state potentials prior to a choice of how it will be observed, pertaining to every possible view of it. These are reduced when the method of observation is decided, such as just one face out of 6 potential face states.
Quantum measurables measured by a Stern-Gerlach apparatus differ from the macroscopic example in that the different orientations of measurement are uncorrelated. That would be like taking three measurements of the box, on different sides: side one looking for black or white text, side two looking for card colour or printed colour background, and side three glossy or matt finish. Any one of those measurement outcomes cannot predict what will be discovered on the other sides.
Yet it is more than being uncorrelated; the quantum measurements are able to alter the condition of the measured, so that the outcome comes into being upon measurement and is not just revealed upon measurement. Re Stern-Gerlach: a measured state is retained if retested with the same apparatus orientation, but the outcome is random (as if never tested before) if a different orientation is used between retestings.
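The Stern-Gerlach retest behaviour described above is standard textbook quantum mechanics, and a two-level simulation using the Born rule (probability cos²(δ/2) of the '+' outcome for spin-1/2) reproduces it. The simulation below is my own sketch, with invented angles and trial counts:

```python
import math, random

def measure(state_angle, detector_angle):
    """Spin-1/2 Stern-Gerlach measurement. Born rule: the probability of the
    '+' outcome is cos^2(delta/2), where delta is the angle between the
    prepared state axis and the detector axis. The state collapses onto the
    detector axis ('+') or its opposite ('-')."""
    delta = detector_angle - state_angle
    if random.random() < math.cos(delta / 2.0) ** 2:
        return +1, detector_angle
    return -1, detector_angle + math.pi

random.seed(0)
z, x = 0.0, math.pi / 2
repeats_same, repeats_after_x = [], []
for _ in range(2000):
    out1, state = measure(z, z)        # prepare along z, measure along z
    out2, state = measure(state, z)    # retest, same orientation
    repeats_same.append(out1 == out2)
    out3, state = measure(state, x)    # intervening orthogonal (x) measurement
    out4, state = measure(state, z)    # retest along z
    repeats_after_x.append(out2 == out4)

print(sum(repeats_same) / 2000)      # 1.0: same-orientation retest always repeats
print(sum(repeats_after_x) / 2000)   # near 0.5: the x measurement randomized z
```

A retest at the same orientation repeats the outcome every time, while inserting an orthogonal measurement leaves the subsequent z outcome as random as a fresh preparation: the measurement has altered the condition of the measured, as Georgina says.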
Should the results of a test of entangled partners carried out at different angles "mesh" together? No, because although the second particle's test can show what would have been obtained for particle one if that test had been carried out first, it wasn't. So if particle one is tested again with the same orientation used for particle two, the outcome will be random, not certain, as the condition of the particle was altered by exposure to the first test conditions.
Same with polarizers; they alter the beable input, they do not just measure it. This means outcome stats from different polarizer orientations, or before-and-after polarization stats, should not be added or considered interchangeable. This is similar to Robert's point about comparing apples and oranges, only I'm saying (I think) that the difference comes about in the measurement process.
I'm not taking issue with your soliloquy on object and image, which can be analogous with the distinction between detection and observation. Nor the alteration by interaction observed both in QM and Classical regimes.
However, Bell-Aspect experiments differ fundamentally from Stern-Gerlach, which are only uncorrelated in that electrons and neutral atoms display non-polar charge fields and only have an induced magnetic moment, curve while they are within the directed homogeneous field domain of the apparatus, and only exit on a linear trajectory. Bell-Aspect devices function in polar discrimination, and whether polarizing filters are chemically compounded with a square lattice or are finely incised plate-grating types, both behave with antenna-like scattering characteristics.
I think that where things get confused many times is that where the electron exhibits no polarity unless induced, the neutral plane of electro-magnetic polarity exhibits ambiguous (entangled) polarity. i.e. Two identical bar magnets at right angles on a plane will behave as if North exists from the plane of the South end, past the North end, and vice versa. So an orthogonal observation anywhere along the length is dependent on the detection of polarity that only happens at the terminal planes. The detection element (filter) puts any observation on the neutral plane. So measurement is limited to only plotting the incidence of scattering. Back to apples and oranges. :-) At least there are enough Quants bored with the institutionalized archaic ad hoc criteria that F&R are getting some public play. jrc
Neutral plane is common usage in generator/motor design, but it is good for a physical representation of the abstract 'orthogonal measure' here.
I thought that a fair assessment, and everyone's allowed qualification of their own position. This Topic has been more "topical" than usual, in that comments do have pertinence, if not directly to the F&R premise, then to the presentation of it. In particular George's own comment on Dec. 30 that he "still think(s) we have a conflict between Quantum Mechanics and the objectivity of knowledge". And it's clear from the several respondents that there is serious question whether the mathematical artifice of non-locality, which the QM arguments of entanglement present, is existential or artifactual.
This has been informative, and I've gained some understanding of your own paradigm and lexicon addressing the intricacies of distinguishing between perception, observation and detection. And I keep having to correct myself on specifics that have eluded my first and second... readings, but which are more easily apprehended in the limiting context of a physical Bell-Aspect experiment and any possible observation of it. 'Oops. I guess the Wigner twins can't see their Friends. Okay. Now what?' But that Wigner/Friend scenario does lend itself to your schemata. There does seem to be a profound disconnect in the Frauchiger-Renner model that puts both pairs of observers in that "view from nowhere", reliant on the rules and procedures of QM to deduce probabilities of observables by proofs of axiomatic non-violation. I think that is the intent of the exercise, but also what George was referring to. I still can't follow the whole (expletive deleted) thing, though. :-) jrc
this should have been on that other thread in discourses with RM
Spoken like a true engineer. And on engineering principles I would agree, but theory is different things to an engineer, an experimentalist, and a theoretician. Like Eddington's soliloquy prefacing his tome on relativity.
But THE Quantum is dependent, theoretically, on the still-to-be-rationalized Planck constant being a physically indivisible, absolute minimum quantity; which I successfully continuously partitioned in an EM model, parameterizing and rationalizing through a permutation of Coulomb's Law into an exponential distribution theorem which, when applied to known and accepted empirical values, matched exactly the range of the observed EM spectrum, through the mass-accumulative range of subatomic and elemental isotopes, to a terminal limit of 263.11 amu. And that is right in the middle of the QM-predicted 'allowance' for an 'island of stability', which, when it was subsequently (three decades later) artificially produced in accelerators, was found to be so unstable that the IUPAC originally suggested a three-letter ID; and the rationale predicts what to expect if that mass accumulation limit is exceeded. Chernobyl. So, NO. I already got information you don't have. The "Quantum" is a convenient empirical measurement value, but it is entirely contextual. Just not in the public domain. I can tell you that for free. :-) jrc
Balls! I scrolled past the thread end again.
If you could please clarify something: rhetorically, "a measurement" is made, but it often seems that rather than a physical observation being spoken of, it is a qualified probabilistic calculation of what is physically unobservable. The end result may be a macro-world agreement with prediction, but the discussion does get confused when a prediction of what a quantum state might be is treated as a measurement. So, for the relevance of the comparison: how is that second measurement in an orthogonal basis actually accomplished? (A lot of us didn't grow up flipping bottlecaps and don't have that comfort zone with statistical probabilities.) So why does the second measurement result being completely uninformative change anything about the 10% error Robert identifies?
Forget Bell for a moment and we still have the physical puzzlement of an Aspect experiment, the 'trick bulb and sunglasses' thing. thanks, jrc
p.s. yes, my idiot box partitioned that link, thanx
Yes, thank you, I'll try to compensate for my level of ignorance.
At the quantum level, where we can't directly measure any thing or event, what is 'taking' a measurement? i.e. in the example of the second measurement on an orthogonal basis, "who" is in a position to do so? It can't be the experimenter other than through some agency of operation. And mathematics is the study of axiomatic operations.
My difficulty comes from imagining applying that second measurement to "a (electron) particle in a box": if my second measurement, orthogonal to the dimensions of a side of that box, results in a completely uninformative condition of that box side, then the electron could be found in the adjacent box. That may well enough account for the behavior of an electron in the substrate layer at the junction of a Zener diode, but it does nothing to explain it. Yet "electron tunneling" has become conventionally accepted as a physical definition rather than a mathematical operation. And... onward I go through the fog! :-) jrc
Robert H McEachern replied on Dec. 30, 2019 @ 15:51 GMT
George Musser wrote:
"The relevant comparison is that a second measurement, made in an orthogonal basis to the first, is usually completely uninformative."
The point that you are missing is that it has been demonstrated that the underlying reason that is true is that any measurement orthogonal to the only actual signal is only measuring noise, with a 50% probability of producing an incorrect estimate of the one and only bit of information that is actually present; that bit can only be correctly measured when the axis of the detector happens to be perfectly aligned with the (unknown) axis of the entity to be measured.
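Rob's 50% figure can be reproduced with a simple noise model. The model below (a unit "signal" vector plus Gaussian noise, detected by the sign of a projection onto the detector axis) is my own assumption for illustration, not the model from his linked figure:

```python
import math, random

def detect(signal_angle, detector_angle, noise_sigma=1.0):
    """One noisy 'coin': a unit signal vector along signal_angle plus
    2-D Gaussian noise. The detector projects the received vector onto
    detector_angle and reports the sign of the projection."""
    x = math.cos(signal_angle) + random.gauss(0.0, noise_sigma)
    y = math.sin(signal_angle) + random.gauss(0.0, noise_sigma)
    projection = x * math.cos(detector_angle) + y * math.sin(detector_angle)
    return 1 if projection >= 0 else -1

random.seed(42)
trials = 20_000
# Detector aligned with the signal axis vs. orthogonal to it.
aligned = sum(detect(0.0, 0.0) == 1 for _ in range(trials)) / trials
orthogonal = sum(detect(0.0, math.pi / 2) == 1 for _ in range(trials)) / trials
print(aligned)     # well above 0.5: the projection carries the signal
print(orthogonal)  # about 0.5: an orthogonal detector measures only noise
```

Aligned detection recovers the bit well above chance, while the orthogonal detector projects out the signal entirely, so its output is pure noise, a 50/50 guess at the one bit present.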
To understand what is actually happening, contemplate the figure found here, depicting a cryptographic one-time-pad, implemented with polarized coins.
Rob McEachern
John R. Cox replied on Dec. 30, 2019 @ 16:29 GMT
Hi, Robert,
Excellent point, and good information. Thanks. I've bookmarked your pdf for later reading, but wanted to chime in now to assure George that my intent is not to simply heckle. I just look at things from an experimentalist view and have a bit of trouble with the layout of the experiment. And given that F-R have something to antagonize everyone in the QM community, I fear that the arguments might founder on claims of contradiction in that quasi-experimental lab set-up.
I'm beginning to get a handle on the F-R deductive logic, and am open to it being capable of demonstrating inherent contradiction in the current conventions of QM methodology. If so, that would suggest the origin is in the contradictory interpretations of the experiments that form the foundation of quantum theory. We see by physical response to an octave of the EM spectrum, but we can't see anything of that spectrum; we can only deduce from the observed behavior of an observing system and infer onto the source.
Take the literature on Aspect-type experiments. Nowhere have I seen it even mentioned that a quantum-size photon would have to have a physical diameter equal to, or less than, the wavelength of the given frequency on which the quantum depends, for the photon(s) to exhibit that frequency. If each photon were quantum size and exhibited the frequency, then the photo-electric equation would be violated: an obvious experimental contradiction. And if the photon can exhibit a frequency at that diameter, which in the visible range is about that of an atom, then NO Aspect alignment of the comparative polarizing filters could be fine-tuned enough to ensure that an entangled pair would reach the refraction plane of both at precisely the same time, or that the pair was truly the same pairing as prepared in the singlet state.
"The Quantum" is convenient for purposes of hypothesizing on atomic structure, which was Bohr's interest, not the electromagnetic spectrum itself. But it is a mathematical abstraction, not a physical observation. The physical observation supports Planck's 'pre-loaded hypothesis' of a burst of discrete quanta that accumulate into the photo-electric ejection of an electron from the target material. And if each successive photon in any burst were a reversal of polarity (up, down, up, down...), then the same probabilities attributed to superposition would still be observed. Put two such 'wave-trains' of even particulate photons in parallel, skew the phase + or - 90 degrees, and you get the same observational spread of probable correlations as "entangled" quantum-size single particles. But all the evolving particle zoo in the Standard Model comes from The Great Dane's sacred cow. Walk that, and you'd better carry a hefty baggy.
So it isn't surprising that Frauchiger and Renner are rattling more than a few cages. Best, jrc
Robert H McEachern replied on Dec. 30, 2019 @ 20:26 GMT
JRC:
"I'm beginning to get a handle on the F-R deductive logic..."
The problem, as always, throughout history, is not in the deductive logic. It is in the premises, that form the foundations of every possible deduction. When you derive valid deductions, from false premises, the deductions need not correspond to "reality" at all. This has always been the problem with truly fundamental physics.
Ever since the time of the ancient Greeks, people have simply assumed, that elementary particles must be idealistically identical. But there has never been any reason whatsoever, either experimental or theoretical, to believe that is true. Most of quantum theory, and Bell's theorem in particular, are founded upon that assumption being absolutely true. But as demonstrated in my paper, it only takes a few dozen lines-of-code, in a high-level, signal processing language like Matlab, to prove that some non-identical particles (those manifesting only a single bit of information) behave exactly as the particles observed in Bell tests.
In other words, entanglement involves nothing more than mistaking the behavior of mundane, fraternal-twin particles, for the behavior of some mysterious, non-existent, but very weird, idealistically-identical-twin particles. That is all there is to the matter - a huge mistake in a fundamental premise that, combined with a few others, has resulted in an entire century of utter confusion.
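McEachern's actual Matlab code is in his paper and is not reproduced in this thread, but the general "fraternal twin" idea can be sketched in a few lines of Python. Everything below (the noise model, the threshold detector, the parameter values) is an illustrative assumption, not his actual scheme: each pair shares one hidden, anti-aligned orientation, each detector sees it through its own independent noise, and orthogonal detectors end up measuring only noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
noise = 1.0  # noise amplitude (an assumed value, chosen so each "coin" carries roughly one bit)

# Each "fraternal twin" pair shares one hidden orientation; the partners are
# anti-aligned, but each is observed through independent detector noise.
theta = rng.uniform(0, 2 * np.pi, n)

def detect(axis, orientation):
    """Threshold detector: report +/-1 from a noisy projection onto the detector axis."""
    signal = np.cos(orientation - axis)
    return np.sign(signal + noise * rng.standard_normal(len(orientation)))

corr = {}
for delta in (0.0, np.pi / 4, np.pi / 2):
    a = detect(0.0, theta)            # station A measures at angle 0
    b = detect(delta, theta + np.pi)  # station B measures the anti-aligned twin at offset delta
    corr[delta] = float(np.mean(a * b))
    print(f"detector offset {delta:5.3f} rad: correlation {corr[delta]:+.3f}")
```

Aligned detectors show a strong anti-correlation, while orthogonal detectors (the "edge-on" case) show essentially none. The quantitative shape of the curve depends entirely on the assumed noise model, which is exactly where McEachern's paper and the standard Bell-test literature part ways.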
I am in total agreement with that. And not only need the particles not be identical twins; in Planck's burst of quanta, they can be out of sync, as long as the detection is within the time frame of the length of the burst sending signals in divergent directions. The reversal of polarity would be observed by both detection stations, but the bit of information carried by each that would be of value would be the direction of the vector of angular momentum imparted by the emitter, and that vector direction in either burst would be the reverse of the other. So despite the distance of separation, that imprinted vector would be received and recorded by both stations. There doesn't have to be a continuous connection through the emitter. Nothing spooky about that either.
It's "News" time in my zone; I'll read your paper this evening, thanks. jrc
I've had a little time to think about your brief on the EPR and the linked paper therein explaining quantum correlations. I must admit I'm not at all familiar with the technicalities you present, but I could follow the argument. I'll refresh and study it some more, but even so can say that it does go a long way to describe results one might expect from the "Bell-Aspect" experiments, where polarizing filters on either side of a (hi-tech) light source, oriented at 90* to each other, will allow a pulse to register at one outboard photo-multiplier but not the other, in a seemingly random fashion. The Bell Inequality curve is independently a 'bell curve', and quantum streams of discrete quanta out of phase by one(+) wavelength at the refraction planes could be expected to build a bell-shaped response curve, given that noise in a photon stream would be 'muffled' by the refraction of the polarizers, and the threshold of triggering the photo-multipliers would compound with refraction anomalies to make 'fails' and 'false' responses. Thanks much. Night all. jrc
The 2nd list is the OUTCOMES of the formulations of Hardy, and each group in each of the numbered sets is a comparison of what a Friend and associated Wigner could find, given the first observation in a pair of observations.
Not many takers on your topic. Too bad; the premise behind the F&R experiments goes to the heart of all the extraneous comments. Is our capacity to theorize accurately enough on indirect measurement testable, or a meaningless tautology? Welcome in a New Year. This would have been Isaac Asimov's 100th birthday; I'm happy for that. jrc
Georgina Woodward replied on Jan. 3, 2020 @ 04:12 GMT
I did not really understand the experiment. The orthogonal measurement should, as is usual, be uninformative. I take that to mean uncorrelated. What does it mean for a Wigner to take a friend's measurement as his own? Is it then not orthogonal, and not really a Wigner measurement but Wigner by proxy? So why are the indirect Wigner measurements of observers one and two chosen to be orthogonal?
Now those are good questions, Georgi. Maybe George will check in and be able to explain how the actual experiment works. Isn't conjugation a proof of orthogonality, to test whether the coordinates of a set of points have been correctly calculated? Wouldn't that set be a vector of a chosen spin characteristic like angular momentum? So why is an orthogonal measurement meaningless?
Robert H McEachern replied on Jan. 3, 2020 @ 12:03 GMT
Did you notice that my reply to George, explaining the real significance of those "uninformative" orthogonal measurements, has been removed? Apparently it remains too disturbing for many people to contemplate once they understand it, since it implies that fifty years of their work on Bell-type tests will be reduced to a complete waste of effort, as the result of being founded upon a false premise (that entangled particles are in fact perfectly identical).
I've just scrolled several times through the posts on this topic and can find no post of yours to George (or anyone) that mentions orthogonal measurement. And I never got an answer to how, or by whom, an orthogonal measurement could be made. What the blazes is being observed to measure in the first place? And if it's going to be "uninformative", why conduct an ambiguous measurement? If the observers are particles themselves (OP), and the entangled particles being observed are photons, then the photons would have to decohere on contact with (decohered?) particles of a polarizing filter, and the only way the OP could observe that would be by a photon shed by the filter's particle (?), which would use up the lion's share of the energy of the originally entangled particle, and any information would be about the filter. In the quantum scheme, how would a photon physically be polarized going through the filter without decohering? Does the OP reside in an atom downstream of the filter? So what's so orthogonal about that, unless what's called "a measurement" is a mathematical proof routine that should cancel out, and which is only about computation, not observation?
Quite frankly, there never seems to be much tutorial content in any article or blog on any topic. It all tends to be more of a sales pitch: cut it loose on the blogosphere and cite the meta-data from the chat room to drum up sponsorship and tax-deductible contributions. Pardon my cynicism. What IS an orthogonal measurement? I know what it is if I were laying out a construction work site from the survey hubs. Jack, the State'll never pass that! That corner's gotta go down another foot and would **** with the steel plan! That shalerock's gotta come out. I know what %#X!! time it is! What's quant? jrc
Robert H McEachern replied on Jan. 3, 2020 @ 16:08 GMT
JRC,
"I've just scrolled several times through posts on this topic and can find no post of yours to George that mentions orthogonal measurement."
Exactly my point; my post, to which you responded "excellent point, and good information..." (Dec. 30, 2019 @ 16:29 GMT) was removed, shortly after you responded to it.
"What the blazes is being observed to measure in the first place."
Garbage.
As in garbage in, garbage out. That is the point. The entire EPR paradox is based on the assumption that if you attempt to measure a property of one particle in an entangled pair, and then attempt to measure the exact same property of the other particle in the pair, then, since the other particle in the pair has been assumed to be absolutely, perfectly identical to the first, one expects that the two measurements ought to always be exactly the same. But they are not.
So to repeat your question, what the blazes is being observed?
(1) a spooky action at a distance, in which the first measurement mysteriously alters the second.
or
(2) maybe, just maybe, the two particles merely are not as perfectly identical as has been assumed, in which case, there is no reason whatsoever, to have ever assumed that the two measurements ought to agree, in the first place.
In like manner, one can analyze what ought to happen when two different (such as orthogonal) measurements are made, on identical particles. This is what Bell tests are all about.
And all your expectations about what ought to happen will be completely shattered if you ignorantly ended up performing the tests on particle pairs that turn out to be not perfectly identical; like the difference between fraternal twins and identical twins.
As my paper demonstrates, it only takes a few dozen lines of code, to prove that (2) reproduces the observed "quantum correlations", in spite of Bell's theorem (based on the identical particle assumption) claiming to have proven that (1) is the only possible explanation.
Garbage in, garbage out. So much for fifty years of work and millions of pages of peer-reviewed absurdities, about all the spooky behaviors of non-existent, perfectly identical particles.
Add to it the QM dictum that the particle(s) are not only identical polar opposites, but that each particle is a perfectly whole single entity. It wouldn't change results beyond LV separation if you just recorded a reversed-polarity combo in a mixed-quanta, Quantum-valued 'particle' and threw out any combos you don't like. It just wouldn't be spooky enough to shill an audience.
Even so, when Quants talk about making an orthogonal measurement, does that mean the viewing aspect along an x, y, or z axis, or observing the vector of something like magnetic moment from any remote position? Or is it cross products zeroing out to prove the vector point coordinates were correctly calculated and the vector space is still 90* orthogonal? Hard to get an answer. jrc
Robert H McEachern replied on Jan. 3, 2020 @ 19:41 GMT
JRC,
In the context of these bell-type tests, the significance of two orthogonal measurements, is the assumption (which is not always valid!) that the results ought to be independent of each other. This is related to my Jan. 2, 2020 @ 14:46 GMT reply to Georgina, above, regarding the sides of a cube, exhibiting different texts, versus independent texts. "What the blazes type of thing is being observed?" It makes a huge difference in the observed correlations between measurements, that ought to be expected.
Think of a set of cubes in which one pair of opposing sides on each cube exhibit a "Head" opposite a "Tail", but the other, orthogonal pairs of sides are not paired in that fashion, but instead exhibit random pairing (sometimes both heads, or both tails, or one head plus one tail). Every time you happen to make a measurement of the paired side, you will always observe that a measurement performed on one "entangled" pair of cubes will always be exactly the opposite of the measurement on the other member of the entangled pair. But if you then perform the same set of measurements, on one of the orthogonal sides, that are not always paired heads/tails, then the previously observed perfect correlation mysteriously vanishes - there seems to be no correlation at all, just random results! But it is only mysterious if you expected each of the three pairs of sides on the cubes, to exhibit the same head/tails pairing, just because one of them did.
Now, instead of a cube, think of a coin. First it is viewed face-on. Then it is viewed orthogonally - edge-on. Do you expect to see the same perfect correlations between two opposing face-on measurements (one looking face-on at "heads" and the other face-on at "tails") as would be obtained by looking at two opposing edge-on views, neither of which can clearly see either a "heads" or a "tails" and so just reports random guesses? When you expected to obtain the same clearly correlated measurements from the orthogonal (edge-on) case as was obtained face-on, but then actually obtain "uninformative" random results, what caused the mysterious change? The answer is that you ignorantly assumed that "what the blazes type of thing is being observed" must have an "orthogonal" state, when in fact it has none at all - it only has a barely perceptible "edge", rather than the expected "heads" or "tails".
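The cube description above is easy to check numerically. A minimal sketch (the head/tail encoding simply follows the text; everything else is an illustrative construction):

```python
import random

random.seed(1)

def make_cube_pair():
    """One 'entangled' pair of cubes: the paired axis shows a Head opposite
    a Tail; the two orthogonal axes are filled in at random on each cube."""
    h = random.choice("HT")
    opposite = {"H": "T", "T": "H"}[h]
    cube_a = [h, random.choice("HT"), random.choice("HT")]
    cube_b = [opposite, random.choice("HT"), random.choice("HT")]
    return cube_a, cube_b

n = 100_000
rates = {}
for axis, name in [(0, "paired axis"), (1, "orthogonal axis")]:
    agree = sum(a[axis] == b[axis] for a, b in (make_cube_pair() for _ in range(n)))
    rates[name] = agree / n
    print(f"{name}: agreement rate {rates[name]:.3f}")
```

The paired axis never agrees (perfect anti-correlation), while the orthogonal axis agrees about half the time: the "mysterious" loss of correlation is built into what is being measured, not into any action at a distance.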
Blogger George Musser replied on Jan. 3, 2020 @ 23:02 GMT
Rob, can you walk us through your argument that the Bell experiments make an assumption of identity and that, if this is dropped, the correlations become explicable? Also, I'm not clear on how this bears on the Frauchiger-Renner setup. Looking forward to your elaboration.
While Rob tailors a brief: in the section of the procedures in #2, if F1 makes a firm prediction on particle 1, F1 can conclude what W2 has measured. ...now, I'm assuming W2 is observing the evil twin of particle 1, not a second observation of P1. Correct? And if F1 measures particle 1 as 0, is that an observation of particle 1 being in an UP orientation, meaning that W2's observation of particle 2 would also be UP with a RH (+) angular momentum (CW torque)? If not, what is the + about? The 2nd list is a 4X4 matrix form, but without a Hardy glossary the vertical bar symbol and the > make it a puzzle to figure out what the 0's and 1's, and the +'s and -'s, are about. Is it that F1 can only firmly predict that it has no reference with which to compare an observation (hence 0 probability), and so W2 has to conclude that as well (so + that)? It's a gedanken, but it is distracting to wonder how a quantum-variety photon could be observed by a particle without obliterating itself, raising the energy level and skewing the orientation of the observing particle. Oh, and is the third party Us, who are supposed to be versed in QM methods, observing both F's and W's? I'm with Georgina, I don't understand the experiment but can see why it is being posited. jrc
On the crib sheet, 4 observers making observations of 4 particles have outcomes of usually only 3 observations, with no distinction as to which Wigner the lone diagonal observations, mostly +, are attributable.
Robert H McEachern replied on Jan. 4, 2020 @ 14:39 GMT
George,
Part I:
It all goes back to the question: Why does the Heisenberg Uncertainty Principle (HUP) exist, and what does it really mean?
As I am sure you are aware, in 1935, the EPR paradox was devised to demonstrate that the HUP cannot exist for the reasons proposed by Heisenberg himself. Heisenberg was then of the opinion, that the fundamental reason two variables, associated with a single particle, such as the particle's momentum and position, cannot be accurately measured, was due to the fact that the first measurement, of one of the variables, inevitably disturbs the particle enough to prevent an accurate measurement of the second variable's original state.
So the point of the EPR setup, was to prevent this disturbance, by measuring an "entangled" two-particle system, rather than the original single-particle system. In the entangled system, it is premised, that the two particles have perfectly identical values (neglecting any trivial, change of sign), for the two variables to be measured. This setup was designed to enable one variable of one particle, in the entangled pair, to be measured, without disturbing the measurement of the second variable, on the other member of the entangled pair. Since the two particles are assumed to be perfectly identical, the measurement of the second particle ought to yield the exact, same value as the same measurement performed on the first particle.
Thus, this set-up "side-steps" the issue of disturbance and ought to enable the determination of both variable values, associated with a single particle, that the HUP claims should never, under any circumstances, be determinable. So what is wrong with this picture?
Well, this set-up leaves open the rarely discussed "loophole" of just how "identical" the two members of the entangled pair actually have to be. The original thought experiment avoids this issue entirely, since it is just an idealized thought experiment that can simply assume perfectly identical particles. But in any real experiment, such as a Bell test, this "loophole" rears its ugly face, and it turns out to be far uglier (and much more interesting and much more fundamental) than anyone ever imagined.
What does it even mean, exactly, for particles to be identical? Can they be identical without being measurably identical?
If F1 is certain of a pass, then W2 would pass. Otherwise (>), if W2 might be a pass, then (|) F1 might be blocked while W2 might pass. Or (>), if W2 might be a pass, then (|) F1 might be blocked and W2 might also be blocked. Otherwise (>), there are no other defined combination probabilities.
Blogger George Musser wrote on Jan. 4, 2020 @ 17:27 GMT
There are a few issues here.
First, EPR had several goals in their 1935 paper, and E's were not the same as P's and R's. As Einstein made clear in subsequent correspondence, he had no interest in trying to take down the Uncertainty Principle; his concern was to explain the remote correlations.
Second, I don't see that EPR makes an assumption of identity (unless this is implicit in the elements-of-reality assumption). Rather, there is an assumption that the system has a global wavefunction that implies correlations.
Third, we do have a notion of particle identity that is measurable: particle statistics. That is not relevant here, though.
Robert H McEachern replied on Jan. 4, 2020 @ 18:26 GMT
First, you are correct; "taking down" the HUP was not an issue. But taking down Heisenberg's explanation for why the HUP even exists, was an issue. Those are two very different things.
Second, you are again correct; EPR did not. But Bohm and Bell did, in their EPR-B revision of the experiment. That is the problem. If the entangled pair consists of an apple and an orange, and instead of measuring their extrinsic properties (like position and momentum, in the original EPR), you instead decide (as Bohm and Bell subsequently did) to substitute a measurement of an intrinsic property (like skin-texture or polarization), then there is going to be a problem, when you have assumed that the measurement of the skin-texture of the orange can be substituted for a measurement of the skin-texture of the apple, in the same manner in which measurements of the positions and momentums can be substituted.
Third. It is directly relevant: objects can be statistically identical (as in exhibiting the same mean and standard deviation) without being exactly the same (identical). Since you have raised this issue, you might wish to reflect upon my comment regarding the significance of this. Modern Code-Division-Multiple-Access communications systems are founded upon exploiting this very distinction: codes that are statistically identical will always behave in fundamentally different ways, when received by an entity that knows a priori how to exploit this distinction.
George, the a priori assumption (that a total system wave function exists which collapses when one "entangled" particle is sensed, producing a statistical outcome that then, and only then, immediately impacts the statistically sensed outcome of the now-distant pair partner) has seemed to me to make the argument that Bell-type experiments have no possible classical explanation a bit circular. Was not this "spooky action at a distance" at the heart of the EPR issues they brought up?
Bell's work was a statistical mathematics exercise; it did not address experimental measurements. What if a conserved characteristic was 0 before pair production and became +1 in one pair member and -1 in the other at the time of production, summing to zero as conservation would require? What if the Bell cosine effect is a manifestation of the metrology of the sensing, statistically recovered over a large number of pairs sensed, in much the same way as Rob's CDMA example statistically recovers the desired communications? E.g. the cosine transfer function of a polarizing filter, the coherence being the conservation of the sensed effect.
I have yet to see a reasonable explanation for why this can be ruled out. Do you have one?
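One way to make that "what if" concrete is a Malus-law local model: each pair carries a polarization angle fixed at creation, the partner at 90 degrees, and each filter passes or blocks with probability cos^2 of the offset. The sketch below is such a toy (a construction for illustration, not a claim about any actual Bell test); it yields a cosine-shaped correlation, but only half the depth of the quantum prediction, which is exactly the kind of gap the Bell literature argues over.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
theta = rng.uniform(0, np.pi, n)  # polarization angle fixed at pair creation

def malus_pass(angle, filter_angle):
    """+1 if the photon passes the polarizer, -1 if blocked,
    with Malus-law pass probability cos^2(angle - filter_angle)."""
    p = np.cos(angle - filter_angle) ** 2
    return np.where(rng.random(len(angle)) < p, 1, -1)

corr = {}
for deg in (0, 22.5, 45, 90):
    a = malus_pass(theta, 0.0)                              # station A filter at 0 degrees
    b = malus_pass(theta + np.pi / 2, np.radians(deg))      # partner polarized at 90 degrees
    corr[deg] = float(np.mean(a * b))
    print(f"filter offset {deg:5.1f} deg: correlation {corr[deg]:+.3f}")
```

The resulting correlation follows -cos(2*offset)/2: the right cosine shape, but half the amplitude of the quantum -cos(2*offset), which is why this particular classical model is usually said to be ruled out by the data.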
I think that it all goes into the bag of the originating Photo-Electric Effect formulation being just that... an effect of incident light not a first order observation of it. The closest thing we have experimentally of actual detection of the physical existence of EMR is the Transition Zone at a macro-scopic antenna. All else is inference on the source. The discovery of spectral lines was also an observation of an effect, not a first order detection.
So EPR is an argument of interpretations by default. I wouldn't rule yours out. jrc
P.S. 1/5/20: The transition zone, between the Near Field and Far Field, gets complicated fast. But it should be required reading for any discussion of Maxwell, as an experimental proof that the 90* phase difference and 'c'-proportional difference in orthogonal intensity of the Magnetic and Electric fields of a point (rest) charge progress to orthogonal, in-phase, equal strength at light velocity for 'a photon' (which is why an antenna doesn't vaporize and fry everything around it). And in application to the Quantum Mechanical measurement system, it is the only direct evidence of physical emission. In the early days of radio telegraphy, contemporary with the Bohr Atom, it was unknown, but it is now compelling evidence that the Quantum jump is time dependent.
"it did not address experimental measurements." and "What if the Bell -cosine effect is a manifestation of the metrology of the sensing..."
Exactly. Detections can be statistically characterized by two parameters:
(1) the probability of detecting the event, when the event happens
(2) the probability of falsely identifying an event as happening, when it never happened.
Quantum theory only computes (1), and utterly neglects (2). But there are some classical "events" for which (2) is always non-zero, and it is trivial to demonstrate that Bell tests happen to create just such "events", with a high probability. See the final two paragraphs in my May 11, 2019 reply to John Fraser, for further details.
In other words, quantum theory is exactly like the statistical description of a drug test, that never even considers, either the possibility or the consequences of "false positives", which commonly and inevitably occur in some tests - like Bell tests.
It has yet to dawn on the physics world, that quantum theory is only describing the detection statistics of the "drug test" itself ("1", above), and not the structure or even the behavior of the "drugs". Mistakenly assuming that the theory is actually, directly describing the "drugs" themselves, is why the theory seems so weird. The theory only, indirectly, characterizes the "drugs" themselves, within the specification of the "drug tests" designed to detect them. So, badly-designed "drug tests" (that produce many false-positives AKA bit-errors) have resulted in the belief in weirdly-behaving "drugs"; like drugs, that when taken by one person, cause mysterious side-effects to occur in that person's distant, identical twin.
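The two detection parameters above can be illustrated without any quantum machinery. In this toy sketch (a construction for illustration only), two stations read out perfectly anti-correlated bits, but each readout flips with probability p_err, the analogue of the "false positive"; the observed correlation shrinks from -1 to -(1 - 2*p_err)^2, so a test characterized only by its hit rate misjudges the source.

```python
import random

random.seed(2)

def noisy_readout(true_bit, p_err):
    """Report the true bit, but with probability p_err report the opposite
    (a bit-error: the analogue of a false positive in the drug-test analogy)."""
    return -true_bit if random.random() < p_err else true_bit

n = 200_000
corr = {}
for p_err in (0.0, 0.1, 0.25):
    total = 0
    for _ in range(n):
        s = random.choice((-1, 1))    # shared source bit
        a = noisy_readout(s, p_err)   # station A
        b = noisy_readout(-s, p_err)  # station B holds the anti-correlated twin
        total += a * b
    corr[p_err] = total / n
    print(f"bit-error rate {p_err:.2f}: observed correlation {corr[p_err]:+.3f}")
```

With a 10% bit-error rate the observed correlation is already about -0.64 rather than -1; attributing that gap to the particles instead of to the detections is the mistake McEachern is alleging.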
"It has yet to dawn on" Robert H McEachern that binary digits [1] are symbolic representations of information created by human beings. Binary digits don't actually exist except as concepts in people's minds, concepts that can be instantiated by using materials with appropriate properties.
1. “The Heisenberg Uncertainty Principle … correspond[s] to the shortest message of all, namely a message of exactly one bit” (Robert H McEachern replied on Dec. 2, 2019 @ 15:46 GMT, https://fqxi.org/community/forum/topic/3351).
Robert H McEachern replied on Jan. 7, 2020 @ 17:44 GMT
Take a coin with glue on one side, but not the other. Particles (such as dust specks) behave differently (stick or not stick) in response to encountering those two different sides; it has nothing to do with "people's minds".
It is not just "concepts that can be instantiated by using materials with appropriate properties"; binary behaviors of inanimate objects can also be "instantiated by using materials with appropriate properties"
Your coin example is not a good example. Dust sticking to glued coins is higher-level information that only exists from the point of view of human consciousness; this information is the result of higher-level analysis, in the human brain, of masses of lower-level information. “Dust sticking to glued coins” is not information that just abstractly exists; the universe is not in the business of collecting “dust sticking to glued coins” information and other assorted nonsense. There is no such thing as abstractly existing information.
I repeat: 1) materials with appropriate properties (i.e. “behaviours”) are utilised by human beings to instantiate the human-created binary digit concept; and 2) human beings (and some other living things) send messages, but you are wrong to imply that the micro-world is sending messages – there are no brains down there in the micro-world to encode and decode messages.
Robert H McEachern replied on Jan. 8, 2020 @ 02:31 GMT
The sun is sending out information carrying signals (AKA messages). Those signals convey information about the chemical elements and reactions occurring within the sun. The sun started doing this long before there was any life on Earth to decode those messages. No brains are required to either encode or decode such information.
You appear to be confusing an intent to convey information, with the actual act of conveying information. But information may be conveyed (encoded) and extracted (decoded) without there ever being any intent to do so.
Information is analogous to the iron in iron ore. It exists, regardless of whether or not anyone ever mines the ore, extracts the iron, or processes the extracted iron to produce steel knives or engine blocks.
Your 1) redefinition of the meaning of words and your 2) blurring of distinctions in the meaning of words is tantamount to lying and deception. Most of all, you have lied to and deceived yourself, and ended up with a nonsensical view and nonsensical conclusions about the world.
Encoded does not mean conveyed; and decoded does not mean extracted. These are strange and peculiar meanings that you have fabricated.
Signals and messages are always the result of human intentions, or the intentions of other living things. Signals and messages are always coded. You have attempted to blur distinctions in the meaning of the words “signals” and “messages”.
It is not possible to have a discussion with a person who deliberately redefines the meaning of words and deliberately blurs the meanings of words.
Re the webpage link in “you might wish to reflect upon my comment regarding the significance of this” [1]:
As I have tried to explain to you before, Shannon’s “Information” Theory is in fact Shannon’s “Symbolic Representations” Theory.
Sorry, but your mixed-up ideas about information, messages [2], binary digits and codes inevitably lead you to mixed-up conclusions about the world. There are no messages being relayed in the micro-world.
I.e. you, like a lot of other people, don’t understand the difference between 1) information and 2) the symbolic representations of information that are created by human beings in order to communicate (e.g.) their ideas.
………………………
1. Robert H McEachern replied on Jan. 4, 2020 @ 18:26 GMT, referring to http://dailynous.com/2019/03/21/philosophers-physics-experiment-suggests-theres-no-thing-objective-reality/#comment-179178
2. “The Heisenberg Uncertainty Principle … correspond[s] to the shortest message of all, namely a message of exactly one bit” (Robert H McEachern replied on Dec. 2, 2019 @ 15:46 GMT, https://fqxi.org/community/forum/topic/3351 )
"There are no messages being relayed in the micro-world."
That's what Robert McEachern keeps drumming. He says that technological 'sign'als are being relayed, and that metaphors of human communication messaging are technologically encoded in signal generation and transmission. And that information encoded as messages is comprised of multiple bits, and the criterion for encoding that constitutes a single bit is that it must be distinct enough in technological parameters that it can be completely and faithfully reconstructed at reception, by the same technology as was used in transmission. And that the micro-realm can be approximated with the criteria of technological parameters.
Contrast that with Lorraine Ford's insistence on a new-age nomenclature conveying the idea that free choice and societal responsibility are a higher-order construct, built up from the particle level acting against the constraints of the force effects associated with particles in the conventionally accepted operational definitions, which are approximated in the formalization of physical laws.
Distinctly different paradigms, each complex in structure and metaphor, but not at all mutually exclusive.
I've known a few hunters, all humanely conscientious and no wounders, and I've known a number of others. There is no better eating than to sit at the table of a hunter. Fresh, immediately dressed, free of the chemicals and contagion of sheltered and fed stock. Well prepared and cooked, it's of Edwardian baronial-estate healthy quality. Takes a lot of knowledge, respect, patience and work. Animals leave sign. Oak trees of common species cycle through abundances of seeding; typically individual specimens will produce a super-abundance of acorns about every 4 to, more usually, 6 years. jrc
I come from a hunting & fishing family. It was great camping out in the bush and by rivers, far away from civilisation. But I only liked target shooting, not shooting rabbits, even though they are a pest species.
Georgina Woodward replied on Jan. 7, 2020 @ 00:14 GMT
"Carlo Rovelli and others have argued for years that quantum mechanics is perspectival: there is no third-person view at all. They still accept some kind of postulate of consistency: observers' viewpoints may differ, but must mesh whenever they come into contact, so that no out-and-out contradiction arises. Yet they remove the most natural explanation for that consistency: a world independent of us." George Musser. I think it worthwhile to question: what is the world independent of us? I used to think the sum of all possible views of it would suffice, but it doesn't, because it still relies upon the imposition of subjective viewpoints. Better is a completely non-perspectival condition. The state of a measurable is always tied to how it is measured or viewed, i.e. seen this way, or if this is done. No single perspective -> no single state.
Georgina Woodward replied on Jan. 7, 2020 @ 00:38 GMT
Consider a pair of particles formed together so that when a first measurement on one is carried out, the outcome of the same test on the partner can be known with certainty, rather than by random chance. That is not the same as instant communication between particles, as the second is still without applied perspective until the method is carried out on it. Between knowing of a certain outcome for a certain orientation of measurement and doing a test, the observer could instead choose a different test, which is uncorrelated. If the orientation used on the first particle is then used on the second, there is a random chance of outcome. The certainty has been lost. So observables can change. That's like the man with glasses removing them, when looked at to see if he has a coat or not, and putting them back or not via coin toss, as the observer checks to see if the man has glasses.
Georgina Woodward replied on Jan. 7, 2020 @ 01:21 GMT
Bell's inequalities apply to variables that do not change: such as men who either do or do not have glasses on, men who do or don't have a coat on, and men who have hats or do not. It is argued that violation of Bell's inequalities requires giving up locality and allowing faster-than-light communication. However, more 'down to Earth' is the distinct possibility that Bell's inequalities were not applicable to begin with (as experiment can also suggest), because of observables (beables) that are changed by the act of measurement, affecting subsequent measurement. I don't fully understand the experiment in the blog. However, the implication is that it is not behaving as QM expects, requiring further questioning and altering of the model of observer-independent reality to make it fit the results. Or the QM formalism is not a viable description of what is happening.
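[Editor's note: Georgina's point about fixed variables can be made concrete with the counting form of Bell's inequality (the Wigner/d'Espagnat version): for any population whose members carry fixed yes/no attributes, N(A, not B) + N(B, not C) >= N(A, not C) holds automatically, by simple set logic. A minimal Python sketch using her glasses/coat/hat men — the attribute names and the 50/50 probabilities are illustrative assumptions, not from Bell's paper:]

```python
import random

def count(pop, pred):
    """Count members of the population satisfying a predicate."""
    return sum(1 for m in pop if pred(m))

# Each "man" carries three FIXED yes/no attributes, assigned once and
# never changed by any "measurement" -- the key assumption of the inequality.
random.seed(0)
population = [{'glasses': random.random() < 0.5,
               'coat':    random.random() < 0.5,
               'hat':     random.random() < 0.5} for _ in range(10_000)]

# Bell's inequality in counting form: N(A, not B) + N(B, not C) >= N(A, not C).
# It holds because every (glasses, no hat) man is either (glasses, no coat)
# or (coat, no hat) -- there is no third option for a fixed attribute.
lhs = (count(population, lambda m: m['glasses'] and not m['coat'])
       + count(population, lambda m: m['coat'] and not m['hat']))
rhs = count(population, lambda m: m['glasses'] and not m['hat'])
assert lhs >= rhs  # holds for ANY assignment of fixed attributes
print(lhs, rhs)
```

[If an attribute can change between measurements — Georgina's man swapping his glasses by coin toss — the fixed-attribute premise fails, and the counting argument no longer applies.]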
If I'm looking at an Aspect apparatus, sighting along the plane of a polarizer, I can only assume that no matter what the shape of a photon is, that it has been previously aligned with a consistent polar vector. But I can not assume what that vector is or what shape the photon is. I'm sighting along the neutral plane of the object reality of the only true detection element, so that is an orthogonal, uninformative, observation not measurement on my part (an image reality), not the detector's. How the photon reacts to the orientation of electrostatic fields in the detector is an unobservable object reality (beable) to me but if a sensor element, like a scintillation plate, is in the resulting trajectory it will display the image reality of that generalized local effect of the detector. But only on an approximate vector, as I can not be certain where on the face of what to my view (from anywhere) is a neutral plane, and the absolute vector of the photon from the source is also only a near approximation. And what is missing in the scenario that imagines a communication existing between that photon and one inversely aligned projected toward an opposite, differently aligned detector, is simply the old adage that you can't put a round peg through a square hole.
The only real value I have ever thought that the catalogue of Bell-Aspect experiments can provide would be clues to what the volumetric shape of a wavelength of light would look like at different frequencies, and what vectors of electro-magnetic induction within that solitonic waveform would concentrate as negative acceleration by the slope of the curve of the shape, to transform from angular momentum of electrical potential to F=(e/c^2)c/sec^-1, associated with the particulate characteristic of light.
You and Robert look pretty close to the same page to me, and at least reading the same book. I'm still browsing old classical novelties, I've never had a gambling problem. :-) jrc
I just got a flash of the edge of your coin, spinning on the concavity of an Euler's Disk! Bravo! OH! the frequency! the frequency!
When the coin faces my right (Tails = left), the edge facing me (Spin UP) I assign a + sign, and the hidden edge I assign -. At 1pi, I see the - edge but can't see the Face looking left. But at 2pi I again see the + edge. And as the coin's extended duration of sustained rotation on, and by, the concavity keeps the coin in a practical locality for strobe-synchronized observation, and as the rotation rate drops and the wobble increases the angle off vertical, I can view that angle corresponding to wavelength! So at 1/2pi CW I get Face (height of profile) amplitude; at 1pi: - (momentum vector) theta; at 1 1/2pi: Tails amplitude; at 2pi: + theta, for one soliton. And I take the next 2pi rotation as the same spin DOWN (the negative polar vector), for a 4pi rotation in two solitons that induces the recursive sinusoidal wave response conventionally interpreted as a single wavelength of a transverse wave cross-section. (And 2pi rationalizes HUP.)
So your coin DOES have a hidden variable: the backside of the edge!
Terrific! thanks jrc
(okay, end edits - jrc)
Georgina Woodward replied on Jan. 8, 2020 @ 05:09 GMT
Hi John, you wrote "You and Robert look pretty close to the same page to me, and at least reading the same book". Robert and I are approaching the puzzle from opposite but not contradictory directions. Robert is concerned with the analysis of outcomes, addressing the claim that no hidden variable model could fit the results of quantum experiments. He has even said in one post "forget about the physics". However, even if there is a hidden variable model fitting the data, as Robert has found, it does not mean that that is what is happening in the actual physics experiments. It shows the fallibility of the analysis. I am trying to address the foundational physics behind the results: what is the condition of the beable observables, what is going on when a measurement orientation is selected and when a measurement is carried out, what stays the same or changes.
Robert H McEachern replied on Jan. 8, 2020 @ 15:40 GMT
Georgina,
I have already identified the "foundational physics behind the results" and "what stays the same or changes".
What stays the same, is one single-bit-of-information = THE quantum. Everything else changes, from one supposedly "identical particle" to the next (they are not as identical, as has been assumed). That is what it means for there to only be one single-bit-of-information. That is why the Heisenberg Uncertainty Principle exists. It has nothing to do with one measurement perturbing another, or any hidden variable. The point is, it is a huge mistake to ever even try to make a second measurement in the first place! Precisely because there is nothing else (no independent variable) remaining to be reproducibly-measured, after the first measurement. It is like trying to measure the value of the fourth component of a three-dimensional vector. A fourth component does not exist! So it can never be measured! When you attempt the additional measurement, you will always end-up measuring something, but it will never be what you expected to measure, because what you expected to measure, simply does not exist.
This issue is not just the "fallibility of the analysis". The issue is, there is an extremely-fundamental misconception about the nature of reality, caused by an extremely-fundamental misconception about the nature of information itself, that underlies all of physics.
As a direct result of the above, physics cannot possibly be describing either the "things being observed" or even the "behavior of the things being observed". It can only describe the behavior of detectors, that have (hopefully) been designed to correctly detect the very existence of the things being observed, at one particular time, and one particular place. Physics only describes the behavior of the "drug test", including all its imperfections, not the behavior of the "drug" itself. Nothing else is even a logical possibility, when the "drug" that one is attempting to detect, manifests only one single-bit-of-information; a single, yes/no response, to the detector's fundamental question - "Was the thing I was designed to detect, just detected, right here and right now?"
Isn't THE Quantum contextual, and thus subject to analysis? A green quantum may liberate an electron at nominal velocity from one elemental atom, but a red quantum may not. But if a red quantum may liberate an electron at nominal velocity from a different elemental atom, a green quantum will liberate an electron from that same atom at a higher velocity than the one the red quantum liberated. So YES, the criterion for one bit is: "Was the thing I was designed to detect, just detected, right here and now?" But in the fundamental analysis of what is physics, the question is not about THAT bit. It's about whether the interpretation is correct; of the photoelectric equation as: a quantum being one perfectly uniform particle (of any variable empirical value), for the convenience of explaining intensity as: for every quantum there is (perfectly) one liberated electron. Old Mad Max Planck would (and did) disagree. jrc
Robert H McEachern replied on Jan. 8, 2020 @ 17:28 GMT
"Isn't THE Quantum contextual, and thus subject to analysis?"
No. Precisely because there is nothing (no additional information) to analyze.
"Its about whether the interpretation is correct"
The point is, when there is only one bit of information present, then there is only one interpretation possible: if your detector just indicated that a detection occurred, then it just detected whatever the detector was capable of detecting, whether you intended that to happen or not. It says nothing at all about how well the detector fulfills your expectations about only detecting the things/conditions you hoped it would and never be fooled by any other things.
The ancient Greek philosophers debated whether or not it is ever possible to find something, when you do not know exactly what you are looking for. Few people realize that Shannon definitely answered that question. There are things that can never be reliably found, unless you know exactly what to look for, and exactly how to look for it (detect it). He called such things "information".
So if you do not know, a priori, exactly how to detect something, without making any detection errors, then you are just out-of-luck. That is why the Heisenberg Uncertainty Principle exists; there is nothing more to ever be "measured" or "analyzed", when there really is only one bit of information being manifested within the thing being detected.
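[Editor's note: Rob's Shannon point — that reliable detection requires knowing exactly what to look for, and exactly how to look for it — is what signal processing formalizes as matched filtering. A minimal sketch, entirely my own illustration (the sinusoidal waveform, noise level and function names are assumptions): a detector correlating noisy received data against the correct template finds the signal, while the same detector with a mismatched template finds almost nothing.]

```python
import math
import random

random.seed(1)
N = 256  # samples per observation

def template(phase):
    """A unit-energy sinusoidal template; 'phase' is what the detector assumes."""
    t = [math.sin(2 * math.pi * 8 * n / N + phase) for n in range(N)]
    norm = math.sqrt(sum(x * x for x in t))
    return [x / norm for x in t]

signal = template(0.0)  # what is actually "transmitted"

def detect(rx, tmpl):
    """Matched-filter statistic: correlate received data with a template."""
    return sum(r * t for r, t in zip(rx, tmpl))

def mean_stat(phase, trials=500):
    """Average detection statistic over many noisy receptions."""
    total = 0.0
    for _ in range(trials):
        rx = [s + random.gauss(0, 0.5) for s in signal]  # signal buried in noise
        total += detect(rx, template(phase))
    return total / trials

right = mean_stat(0.0)           # detector knows exactly what to look for: ~1
wrong = mean_stat(math.pi / 2)   # detector assumes an orthogonal shape: ~0
print(round(right, 3), round(wrong, 3))
```

[The design point: the "knowledge" lives in the template the detector embodies, exactly as Rob says of physical detectors — change the template and the same data yields a different (or no) detection.]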
I agree with Mad Max. The Quantum is contextual, empirical measurement notwithstanding. Choosing what you want to detect, and recognizing what you didn't, is the art of good ol' benchtop, Rube Goldberg trial and error. Volta was doing a chemical experiment and didn't intend or expect to detect an electrical current. Now we have Electrical Engineering and Shannon's Capacity Theorem. :-)
p.s. Until Volta realized that a SLOW discharge was occurring in what we now call a conductive wire, which he used to suspend a stack of (now) conductors separated with (now) dielectric sheets in a solution, seeking to precipitate a sought-after compound, electricity was thought to be static, generally came in Leyden Jars, and was only recognized as a FAST spark discharge. The information didn't exist to be put to the Shannon Rule for after-the-fact measurement qualification. Now it does exist, after the fact of someone (Volta, 1807) recognizing (electrolysis) what no one could have imagined. That, in essence, is what Lorraine is getting at. jrc
Robert H McEachern replied on Jan. 8, 2020 @ 21:33 GMT
You fail to understand the fundamental, insidious nature of reality:
"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."
Once you introduce enough "context", to enable extracting more than a single bit of information from it, then you have merely succeeded in leaving the quantum realm entirely and reentering the classical realm. The total lack of "context" is precisely what THE quantum is. By attempting to gain more information, in order to study the most peculiar nature of this situation, you inevitably, totally destroy the very situation (lack of context), that you were attempting to study; that is what "decoherence" is - The ultimate catch-22.
Robert H McEachern replied on Jan. 9, 2020 @ 14:29 GMT
That is indeed "just it."
Something versus nothing. The smallest possible bit of new knowledge, newly gleaned from the void. An interaction that tells you nothing more, than that there is "not nothing", at one particular place and one particular time. It does not even tell you what that "not nothing" is; the detector already had to embody that knowledge, in order to detect whatever it detected. That is what the quantum condition is. As soon as you glean anything else from the void, then you have left the quantum realm and entered the familiar classical world of more information, more context and "higher representations".
Of course you can always add more context - gleaning more information from the void. But that just means that you are no longer dealing with THE quantum condition - you are right back where you started, in the classical world you so know and love.
Ad Hoc creation of anything beyond that single "something versus nothing" detection, is exactly what the classical realm IS. That creation is what emergence is all about. The classical world emerged from the quantum world, by processes that accumulate more information, than the single bit of information (the something versus nothing indicator), not the other way around.
You can literally see that happen, right before your very eyes, by running the code I provided in my paper: as soon as you reduce the noise enough to enable the reliable extraction of more than just one bit of information, the "quantum correlations" literally vanish and the familiar classical correlations emerge from the chaos (AKA noise). Try it. Then cogitate about what you have just witnessed, until you realize that "that's just it."
Rob McEachern
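[Editor's note: Rob's actual code is in his paper and is not reproduced here. As a purely illustrative stand-in — my own construction, not his model — here is the generic local-hidden-variable sketch that his classical endpoint resembles: two detectors share a hidden orientation theta, and each outputs the sign of its projection. Without noise this yields the linear ("saw-tooth") classical correlation E = 1 - 2*delta/pi, not the quantum cos(delta); Rob's claim is that only in the high-noise, one-bit regime, with details (such as discarding ambiguous detections) not captured in this minimal sketch, does the quantum-like cosine appear.]

```python
import math
import random

random.seed(2)

def correlation(delta, noise=0.0, trials=20_000):
    """Local hidden-variable toy model: both detectors see the same hidden
    orientation theta; each outputs +/-1 from the sign of its projection."""
    total = 0
    for _ in range(trials):
        theta = random.uniform(0, 2 * math.pi)
        a = 1 if math.cos(theta) + random.gauss(0, noise) > 0 else -1
        b = 1 if math.cos(theta - delta) + random.gauss(0, noise) > 0 else -1
        total += a * b
    return total / trials

# Noise-free, the model reproduces the classical linear correlation
# E = 1 - 2*delta/pi (compare with the quantum cos(delta)):
for deg in (0, 45, 90):
    d = math.radians(deg)
    print(deg, round(correlation(d), 2), round(1 - 2 * d / math.pi, 2))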
Engineers and experimenters will only ever agree on one thing: engineering is necessary for practical experiment. So while we might metaphysically disagree on whether what is available on the Shannon Channel is all there is, THE quantum as a bounded variable has been technologically fruitful. That does not mean that a boson produced in high-energy accelerators, which doesn't last long enough to be identified by anything other than scheduled disintegration products predicated on quantum unitary detection (which filters out, renormalizes, less-than-whole numbers), is more than what realistically might momentarily morph in the high-intensity fields of a nuclear cross-sectional region. It's calibration of measure. And it certainly doesn't mean a field-theoretical convergence of continuous functions 'pulls anything from the void'. The field is a continuum, not little tiny spheres flitting about making space and time out of nothing.
There actually is an underlying theory to Quantum Mechanics, and it's essentially the same one as Henri Poincare's clearly stated "Factory Stamp" of procedural convenience: 'Everybody, do it this way and we'll get a lot of results of the same sort.' At least in times past the luminaries of QM had the humility to admit that they couldn't say if they were even half wrong. jrc
There has never been any physical experiment conducted which has succeeded in producing the projection of a single discrete particle. Double slit: many electrons, and even greater numbers of photons, most hitting the plate. EPR? Delft? Fourier. jrc
As a matter of political science, the only thing that matters in this debate is who gets a functioning Quantum Key Distribution system up and running. Currently, from what is publicly acknowledged and alleged, the ChiCom (meaning communist cum dot com) has achieved synchronized orbital LOS stability to a functional degree, but can only operate in the night-time shadow, due to increased atmospheric interference in daylight. In 2012, the U.S. through DARPA initiated a seed program to promote private-sector R&D, but nobody's talking it up in the blogosphere of investment capital. The RNC and DNC don't even mention it, though it's perhaps the hottest topic available to focus attention on education policy. jrc
Among the standard measurement criteria of the symmetric spin coordinate system, devised for the practical purpose of reducing complexity down to a manageable host of parameters for statistical analysis by Quantum Mechanics, is (also) the symmetry of axial rotation around the poles of the precession of the orbital magnetic moment. So perhaps due to this, the conventional assumption of entanglement is that Particle B is a direct inverse of Particle A. And that makes for a peculiar distribution of correlation of results, as a consequence of the interaction between a photon and the alignment and degree of separation of electrostatic charge in the polarizers (none I've read of set at the Brewster angle).
(oops, I submitted instead of previewed) but I'll go on;
suppose that what really constitutes entanglement is B being the negative polar vector of A.
I grew up working in a print shop, my sophomore science project was photomechanical reproduction. I worked in my early teens with negative polar vectorization as a daily practical application. Try thinking of entanglement as the INK. not the platen or the paper. We'll do letterpress for reduction to first principles. You "chase" (back and forth) some raised handset type into the center of the form and "Coin" it in place by spreading a pinch block instead of wedging in a coin, roll some ink onto it and lay a piece of paper on top, then turn the crank on the drum of the proof press geared to roll the chassis of the impression roller across the formed type, or as I worked with often; transit the press bed under the roller. Pull the paper off the sticky type and "voila!" you got a readable For Sale sign. Do that a lot of times with raised type and photo negatives and grasping the function of a negative polar vector becomes second nature.
Think of entanglement like that, like ink, and look at what it does in relation of A & B to the electrostatic orientation of the polarizer elements. Don't just "work and turn"; "work, turn & tumble". You are printing a book, not a scroll. Both sides of the page are printed, right side up. This is not an ink jet, it's a proof press. Let's say you are printing a big 0 on both sides of the page. You put a block letter in the form and print one side, then turn the paper over; but you have also rotated the form in the press, so you tumble the paper that direction too. That's how you get the bell-shaped curve, if you open the book to see two pages with a big 0, and maybe the next page actually had a big 1 printed on the back (hidden) side. Be the ink. And let it B the negative polar vector of A, opposite sides of the page, not the paper. It doesn't change the price of ink. Kool! ain't it? jrc
post script: try this
write your normal polar vectors on a piece of paper
turn it and hold it in front of a mirror; the terms are right side up but each point backwards, and the algorithm reads right to left
tumble the paper upside down: the terms are inverted but each term points the right direction (so do the operators) and the algorithm reads left to right. jrc
Entanglement would have to be the negative polar vector. If it were the conventional assumption of inversion, entangled electrons would be impossible! You would physically encode electron-positron annihilation.
Georgina Woodward wrote on Jan. 10, 2020 @ 22:30 GMT
Applying the alternative explanatory framework to the experiment: the beable particles that will be used are elements of Object reality, non-perspectival, complete. Pairs are produced with a correlation that will result in opposite outcomes for the same orientation of measurement. The Friends agree on the measurement to be made and choose the orientation of measurement. Now the measurable is being considered (context) and orientation (perspective). Whereas the beable is the potential source of all views of it, the measurable is a restricted subset. For the selected orientation there are two potential outcomes. The test is conducted and the result obtained. I think 0 or 1 means no detection or detection. By some means, most probably a sound or light signal, the Friends are made aware of the outcome; they receive the signal and generate an Image reality from it, by which they perceive the outcome.

The Wigners are said to make an orthogonal indirect measurement. I don't think that is so, at least for the experiment described in the blog. The Friends will look at the apparatus in a particular way, which provides the input to their sensory systems allowing them to see it. The Wigners can orient themselves orthogonal to the Friends. They will therefore have a different view of the experiment. Friends and Wigners produce independent Image realities. However, the detection of the particle is the beable particle interacting with the beable apparatus, an Object reality. That Object reality is the source for both the Wigners' and the Friends' Image realities (observation products). So the idea that a second indirect measurement of the particle can be made is not so. An observer receives input from Object reality and generates observation products from it. This makes detection (the measurement outcome in this experiment) categorically different from observation. No erasure of memories required.

Re the mentioned version of the experiment where the observers are particles: particles are not observers; they do not have a perspective (a 'seen this way' view) of other things.
Georgina Woodward replied on Jan. 10, 2020 @ 23:38 GMT
The event of a beable particle interacting with the beable measurement apparatus is unitary. The outcomes were not actualized prior to production of the singular result. The alternative has ceased to have potential to become, as the other condition is actualized. (No need for Many Worlds.) Awareness of the outcome is via observation product (Image reality) production. Different observers with different viewpoints generate their own products from input from the same Object reality event (fits with relativity). Orthogonal observer viewpoints are categorically different from orthogonal measurement. The measurement part of the experiment is allowing the particle interaction with apparatus, imposing a particular context (just dealing with this aspect of being) and perspective (seen in this way), leading to a detection. Different from observation (see above).
Frauchiger and Renner are correct, and so was Wigner, intuitively. There is something missing in the spin coordinate system representation of Maxwell's theory of electromagnetism. It's right there in front of everybody. But it makes superposition hollow, and if you look at the Born Rule from the perspective of Faraday's right-hand rule: axis A is +1, B is -1, and C is either the square or square root of +1 or -1... so you wind up by elimination with C being assigned the imaginary unit, i. Chirality requires the left-hand set, so the inverses of A & B are reverse sign, but reversing the sign of i is dependent on multiplication by one or the other sign of 1. So the inverse of the i axis is a pseudovector. That way there is still a possible 90° rotation. But the pseudovector is synonymous (correlates 1:1) with superposition, so the Born choice function goes to many possible worlds instead of one result for this world. And no quasi-classical qualification can be consistent with either other QM interpretation, because the spin coordinate system representation of Maxwell's theory is missing something Maxwell didn't miss. :-) The 90°, time-dependent, continuous orthogonal rotation of phase between the perpendicular planes of direction of magnetic and electrical field force. Coherence in spin is existential at light velocity; decoherence is existent at rest. QM likes to normalize everything to an instantaneous measure, and omits Maxwell's continuous extrapolation of equalizing phase difference and c-proportional difference of field intensity. E = mc^2 jrc
You have to refresh on Faraday and Maxwell to get the orthogonal progression of phase correct. Two symmetric sinusoids are perpendicular to each other, but at rest, or at the slower-than-a-stone rotation speed of macro alternators, those sinusoids are positioned so that the tail end of one falls on the midpoint of the other along the common line of intersecting planes. The electrical sinusoid lags the magnetic by half (-1/2pi). AND, though symmetric in shape and size, the electric field potential is a c proportion greater in intensity than the magnetic. As rotation speed increases, the phase difference reduces as the electric sinusoid creeps upward towards in-phase with the magnetic (and the intensity difference diminishes). Maxwell could only extrapolate that from Faraday's meticulous recording of thousands of electro-motive experiments, to be commensurate with Hertz' radio wave experiments. That orthogonal creep is NOT evidenced in the spin 1/2 pi rotations at right angles. It's orthogonal to Maxwell as creep in the direction of motion (parallel with the polar planes - see: Kerr Effect / solitonic waves / polarizing elements). And the ~c proportion at non-relativistic speeds induces a response in fields in proximity, so at mechanical high speeds, excitation of atomic electrostatic fields produces an immense profusion of their own electromagnetic emissions. The cavity magnetron invented early in WWII used that, to provide a microwave source for early onboard radar for aircraft. But mechanical speed is very limited and way down in the flats of the Lorentz Gamma. On the Gamma curve you have to be at half the speed of light for mass to be about 14% energy, and at about 86% light speed to be half mass and half energy, so in that non-Maxwellian hollow of spin superposition is probably where they will find all that missing 'dark' matter and energy.
Your experiment is an entirely different experiment, not an explanation of F&R. Sorry. The only thing the Wigner twins and Friends need to initiate the experiment is to see someone on the corner flipping a coin at regular intervals, before the Friends each enter their respective labs down adjacent sides of the block, and the Wigners each wait across the streets at coffee shops. With that in mind, check out the synopsis in "Nature". best jrc
Georgina Woodward replied on Jan. 11, 2020 @ 09:41 GMT
I'll think about it John. What do you mean by the superposition being hollow? Geometry of possibility? John, you wrote "Frauchiger and Renner are correct, so was Wigner intuitively." I have not said they are wrong according to their premises and theory. Nor have I set out to explain what they have done. I have used the experiment as a testing ground for the RICP explanatory framework. It is not QM and not classical physics. I'm taking a different approach to the theoretical/metaphysical background in which the experiment happens, partitioning Object (independent) reality and Image (observer-generated) reality. It shows that the explanatory framework can be applied to Relativity issues and to QM experiments.
I'll give it another read and try not to conflate it with F&R. Superposition I see as hollow because typically QM doesn't think of coherence as existing at light velocity. But it would have to be. Same way with phase creep to equilibrium in classical theory. Warning: I take liberty with both, as quants will jump up and shout. And a correction: Hertz extrapolated Maxwell 2 years later. But if the intensity difference had not equalized in an electromagnetic wave, a c-greater electric intensity at light velocity would induce an 'ultraviolet catastrophe' in any material response to the photoelectric effect. Even at mechanical speeds, induction produces near c-proportional higher frequencies of EM. The Object Reality detects the physical wave, the Image Reality is the observed effect, and our physical laws are all based on observed operations. The result of application of those laws gives us a mathematical statement of the effects, not the physical nature. Galileo's ramp, Newton and GR all describe the effect of a gravitational field; they don't definitively describe that field as physically existent energy. You can see why I don't trot this out often. jrc
I've gone over your scenario several times and (quite apart from F&R) see how you are structuring an argument to epistemologically distinguish observation from physical detection. It gets a little wordy still, but that's language arts, eh? It's not an easy subject, especially with what we have to work with. We don't have even a conventional classical, ontological, quantifiable definition of an existential particle, let alone a similar model of a 'photon' that we can hold in our hand and turn about and track its fields with a backpacker's compass. And the quantum beable is even less demonstrable; it's a zero-point location in a vector field (as predicted by the Schrodinger Equation).
The best we can do, I think, is to try to agree on what we mean by word definitions and deduce where different perspectives of observations yield consistent agreements on 'what' has been detected. In the meantime, your meanings have developed more concise general form, and are getting easier to follow. Good luck, jrc
Georgina Woodward replied on Jan. 12, 2020 @ 23:15 GMT
By beable I mean an element of Object reality, something existing independent of observation; that is, without context and perspective. Unlike a measurable, which is an attribute that can be measured. Restricting what is being considered is context. How it is to be measured, restricting the possible outcomes, is perspective. The beable is the source for all observations of it. When measurement context and perspective are chosen, both potential outcomes are possible because the method that causes one to be selected has not yet been applied. That not-yet-applied method-perspective leaves the contexted measurable as source of both possible outcomes (Cf: superposition). An outcome made known by visual or auditory means is not a beable but an element of Image reality, an observation product; a switch. (Cf: decoherence)
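The standard quantum-mechanical counterpart of "context restricting the possible outcomes while both remain possible" is the Born rule: the chosen measurement basis fixes which outcomes are on offer, and the state assigns each a probability until the measurement is actually applied. A minimal numerical sketch (the state, the angle, and the two bases are arbitrary illustrative choices, not anything from F&R's protocol):

```python
import numpy as np

# An arbitrary single-qubit state, standing in for "the measurable" here.
theta = np.pi / 6
psi = np.array([np.cos(theta), np.sin(theta)])

# Choosing a measurement basis is the "context": it fixes the possible
# outcomes. Until the measurement is applied, both remain possible,
# with Born-rule weights |<b|psi>|^2.
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

for name, basis in [("z", z_basis), ("x", x_basis)]:
    probs = [round(abs(b @ psi) ** 2, 3) for b in basis]
    print(name, probs)
# prints: z [0.75, 0.25]  then  x [0.933, 0.067]
```

Note that the same state yields different outcome weights under different contexts; neither pair of probabilities is a property of the state alone.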
Georgina Woodward replied on Jan. 13, 2020 @ 01:19 GMT
Schrodinger's cat is mentioned in the podcast. This is a different kind of scenario from those in which there is no singular outcome state because of a lack of method-perspective being applied. I.e. 'seen like this' the outcome is that. But without the 'seen like this' there is no singular outcome, that. Un-decayed and decayed atom, intact poison flask and shards, alive and dead cat are pairs of states of being that cannot temporally co-exist in a uni-temporal universe. They are sequential states belonging to different configurations of the entirety existing. The supposed superposition of states (in the experiment) is not state latency with co-state potentials or merged state potentials. Instead they are quasi-superpositions (not an object reality) due to lack of knowledge of the condition of the entities prior to an Image reality being formed.
Georgina Woodward wrote on Jan. 11, 2020 @ 20:50 GMT
Hi George, I don't understand how in practice W1 makes an orthogonal measurement of particle 1. I thought from the outset of the blog the idea is that W1 can make an indirect observation of F1 performing the experiment, so not disturbing the particle again. I thought the whole encryption business re. QM was the idea that when one observer 'looks' at an entangled particle, the supposed superposition of states and entanglement ceases. That's not going to work if the first observer's memory and disturbance of the particle can be erased, allowing a second first measurement. I don't understand why the W1 measurement is 3rd person if there is interaction with the particle itself by W1's experiment. That's another 2nd person activity. If the particle is the 1st 'person' perspective, F1 is the 2nd person perspective.
Georgina Woodward replied on Jan. 12, 2020 @ 00:28 GMT
I mean by, if the particle is first 'person' perspective, just that the interaction with the apparatus leading to an outcome happens to it. I don't mean it has sensory perception or opinion. The 2nd person perspective is that of the person conducting the experiment and forming an awareness of the outcome. As I see it, a 3rd person perspective is that of a spectator watching the experiment being performed but not interacting with the apparatus or other person. If doing his/her/its own measurement, that is another 2nd person perspective. And there can't be two first measurements. Things change upon the first measurement outcome, whether described as loss of coherence, wave function collapse, or another way. Correlations that would have been are lost upon second measurement. (Must be sequential, not measured in both orientations simultaneously. Is that not so?)
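The point that a second, differently oriented measurement destroys what the first one established is standard textbook quantum mechanics, and it can be checked with a small simulation. A minimal sketch (the basis vectors, seed, and trial count are illustrative choices, not anything from F&R's protocol): prepare spin-up along z, measure z, then x, then z again; the repeated z outcome is now 50/50 rather than certain.

```python
import numpy as np

rng = np.random.default_rng(0)

# z-basis and x-basis states for a single qubit.
z0 = np.array([1.0, 0.0])
z1 = np.array([0.0, 1.0])
xp = np.array([1.0, 1.0]) / np.sqrt(2)
xm = np.array([1.0, -1.0]) / np.sqrt(2)

def measure(state, basis):
    """Born-rule measurement: returns (outcome index, post-measurement state)."""
    probs = np.array([abs(b @ state) ** 2 for b in basis])
    k = rng.choice(len(basis), p=probs / probs.sum())
    return k, basis[k]

trials = 10000
redo_agrees = 0
for _ in range(trials):
    state = z0                           # prepared spin-up along z
    _, state = measure(state, [z0, z1])  # first z measurement: outcome certain
    _, state = measure(state, [xp, xm])  # intervening x measurement disturbs it
    k, _ = measure(state, [z0, z1])      # repeat z: outcome now random
    redo_agrees += (k == 0)

print(redo_agrees / trials)  # typically close to 0.5 (sampling noise), not 1.0
```

Without the intervening x measurement the repeated z outcome would agree 100% of the time; the sequential, non-commuting measurement is exactly what erases that correlation.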
Georgina Woodward replied on Jan. 12, 2020 @ 23:26 GMT
Hi George, I have listened to the podcast. Now it is clear to me that the Wigners are not making their own independent measurements but relying on what they are sent.
Now I don't understand why the two labs, with independent random number generators, can be considered entangled merely because they can share information.
Oops, Georgi, that's the disconnect of the 3rd person. There is no sharing of information between any observers during the experiment. In the panel discussion, 'classical record' was among the things at issue. F&R equip their individual observers, then pair them, only with the classical record of the axiomatic rules of operations of probabilities (the math) in QM, and send the 1st persons into isolated labs while the 3rd persons deduce from axioms what the probable outcomes might be.

The classical record includes the catalogue of results from Harvey's distillation (I'm an old guy, I really prefer the Harvey that was paired with Elwood P. Dowd). That classical record is shared on the NW corner of 'F' St. and 'R' Av. as an extra on the SE corner flips a coin each time the light changes. That timing signal becomes part of the classical record before the Wigner Twins and their Friends make their separate ways to begin the Gedanken. The flip of the coin, not its outcome in relation to the direction of light change, is the only observable the 4 observers actually obtain.

I know, I know, we are missing some crucial information ourselves. It's in the Harvey protocols, and we as onlookers (not observers) must accept that the classical record is sufficient. When a 'measurement is made' it's in the classical record, and as the timing intervals progress the separate observers simply deduce from knowledge without any actual observation or measurement taking place. Nothing disturbs the Beables by thinking of the classical record.
(edit) Yes, it's a head game.
Georgina Woodward replied on Jan. 13, 2020 @ 10:53 GMT
Thanks for taking the time John. By 'relying on what they are sent' I mean what I think you are referring to as the 'classical record'.
Right now I'm thinking you can't make a silk purse out of a sow's ear. It starts out with quantum correlations and uses quantum maths and quantum explanations, and ends up with something unexpected. I'm not sure if it is a true paradox, an impossibility (most likely indicating something wrong with the theory) or just what you get if you follow the procedure. 'Ask a silly question, get a silly answer' springs to mind. Though it seems that it doesn't matter how silly it gets, it's still taken seriously.
By and large I agree. Near the end of the discussion panel, Aaronson summed it up as: what is proven is that we can prove QM is a theory. The 'why' that it works is something that goes to vast numbers of events in the simplest macroscopic thing. A mature maple tree in my neighborhood might produce 30,000 leaves in a season, yet there is still a small area way up in the wind where a few scattered, dead leaves cling to their twigs in January. What are the odds of that, and what odds of each of any of those particular leaves remaining, even though there are clear causal factors why leaves fall? (It's counter-intuitive but the maths are beautiful, it's said) jrc
We shouldn't conflate contradiction with inconsistency. QM has a dynamic track record not only of prediction in application to specific tasks, but also in discovery. It's worth noting that where we have seen discovery in QM it has been by theoretical regimes which are quasi-Relativistic, i.e. inverse square law subject to Lorentz Invariance.
The question posed by Frauchiger and Renner does contend inconsistency. But all the underpinnings of QM parameters are classical laws of observed operations. And Classical Realism is riddled with inconsistencies, assumptions and gaps of causal ontology. The ad hoc notion of superposition draws immediately on the contrary classicism of luminosity decaying over distance in a spherical wave, while observation of the photoelectric effect constrains emission of EMR to an LOS trajectory. Ergo: a quantum might be envisioned as decohering from an arc section of the spherical wave at any observational location along that trajectory.
QM can argue that it is a complete theory only in the same sense that SR can be said to be mathematically complete. Which is not to say that either is physically complete. Perhaps if the principal interpretations of QM were consistent, it would cease to be a dynamic methodology and retrograde into the same stasis as 19th century Newtonian physics, in which all to be discovered had been, and only specific applications needed to be accounted for. jrc