If you are aware of an interesting new academic paper (that has been published in a peer-reviewed journal or has appeared on the arXiv), a conference talk (at an official professional scientific meeting), an external blog post (by a professional scientist) or a news item (in the mainstream news media), which you think might make an interesting topic for an FQXi blog post, then please contact us at forums@fqxi.org with a link to the original source and a sentence about why you think that the work is worthy of discussion. Please note that we receive many such suggestions and while we endeavour to respond to them, we may not be able to reply to all suggestions.

Please also note that we do not accept unsolicited posts and we cannot review, or open new threads for, unsolicited articles or papers. Requests to review or post such materials will not be answered. If you have your own novel physics theory or model that you would like to post for further discussion among the FQXi community, then please add it directly to the "Alternative Models of Reality" thread, or to the "Alternative Models of Cosmology" thread. Thank you.

Contests Home

Previous Contests

**Trick or Truth: the Mysterious Connection Between Physics and Mathematics**

*Contest Partners: Nanotronics Imaging, The Peter and Patricia Gruber Foundation, and The John Templeton Foundation*

Media Partner: Scientific American

read/discuss • winners

**How Should Humanity Steer the Future?**

*January 9, 2014 - August 31, 2014*

*Contest Partners: Jaan Tallinn, The Peter and Patricia Gruber Foundation, The John Templeton Foundation, and Scientific American*

read/discuss • winners

**It From Bit or Bit From It**

*March 25 - June 28, 2013*

*Contest Partners: The Gruber Foundation, J. Templeton Foundation, and Scientific American*

read/discuss • winners

**Questioning the Foundations**

Which of Our Basic Physical Assumptions Are Wrong?

*May 24 - August 31, 2012*

*Contest Partners: The Peter and Patricia Gruber Foundation, SubMeta, and Scientific American*

read/discuss • winners

**Is Reality Digital or Analog?**

*November 2010 - February 2011*

*Contest Partners: The Peter and Patricia Gruber Foundation and Scientific American*

read/discuss • winners

**What's Ultimately Possible in Physics?**

*May - October 2009*

*Contest Partners: Astrid and Bruce McWilliams*

read/discuss • winners

**The Nature of Time**

*August - December 2008*

read/discuss • winners

Forum Home

Introduction

Terms of Use

RSS feed | RSS help

*Posts by the author are highlighted in orange; posts by FQXi Members are highlighted in blue.*

RECENT POSTS IN THIS TOPIC

**Carl Brannen**: *on* 2/2/09 at 10:41am UTC, wrote "So there are connections with some established physics here." The things...

**Lawrence B. Crowell**: *on* 1/26/09 at 21:47pm UTC, wrote To connect with density matrices, the Leech lattice is in Jacobi functions...

**Carl Brannen**: *on* 1/24/09 at 8:21am UTC, wrote I had to read your most recent post several times over several days before...

**Lawrence B. Crowell**: *on* 1/20/09 at 2:29am UTC, wrote I was probably not clear in what I wrote. Quantum tomography is a way of...

**Carl Brannen**: *on* 1/17/09 at 23:57pm UTC, wrote Ooops it ate the rest of the post. So you should be able to prove that xx...

**Carl Brannen**: *on* 1/17/09 at 23:54pm UTC, wrote Tomography means measuring something by taking slices out of it, more or...

**Lawrence B. Crowell**: *on* 1/15/09 at 14:36pm UTC, wrote It looks like a long paper. I noticed you talk about quantum tomography,...

**Carl Brannen**: *on* 1/14/09 at 6:34am UTC, wrote Lawrence, I've got a new paper out. Well, I'm basically asking friends to...

RECENT FORUM POSTS

**Thomas Ray**: "Amen to that."
*in* Science Funding in an...

**John Cox**: "Science funding needs to start at the preschool level. That was probably..."
*in* Science Funding in an...

**jim hughes**: "Georgina, I'll have to think about how to best answer that question. But ..."
*in* Defining Existence

**Georgina Woodward**: "Hi Jim, 'Where are the atoms?' was posed as a question for you to..."
*in* Defining Existence

**dieu le**: "Einstein’s theory of Gravity has no place for Gravitational Waves One..."
*in* Alternative Models of...

**Pentcho Valev**: "Entropy Is Not a State Function (Thermodynamics Is Not Even Wrong) ..."
*in* Dirty Secrets of...Life:...

RECENT ARTICLES

*click titles to read articles*

**Untangling Quantum Causation**

Figuring out if A causes B should help to write the rulebook for quantum physics.

**In Search of a Quantum Spacetime**

Finding the universe's wavefunction could be the key to understanding the emergence of reality.

**Collapsing Physics: Q&A with Catalina Oana Curceanu**

Tests of a rival to quantum theory, taking place in the belly of the Gran Sasso d'Italia mountain, could reveal how the fuzzy subatomic realm of possibilities comes into sharp macroscopic focus.

**Dropping Schrödinger's Cat Into a Black Hole**

Combining gravity with the process that transforms the fuzzy uncertainty of the quantum realm into the definite classical world we see around us could lead to a theory of quantum gravity.

**Does Quantum Weirdness Arise When Parallel Classical Worlds Repel?**

Quantum mechanics could derive from subtle interactions among unseen neighboring universes

FQXi FORUM

September 27, 2016

CATEGORY:
The Nature of Time Essay Contest
TOPIC: Density Operators and Time by Carl A Brannen [refresh]

Our understanding of time from physics comes through the combination of quantum mechanics and relativity. In quantum mechanics, measurements are represented by operators. The state of a system is usually represented by a wave function, which is operated on by the operators. This view of time is compatible with relativity in that each event is assigned a unique time coordinate; the wave function changes with time. The only difficulty is the measurement or collapse process; this process must act outside of time as, in the language of special relativity, it modifies our representation of a single event, for example a particle experiment, converting our representation from a wave to a particle. The density matrix and density operator formulation of quantum mechanics is an alternative formulation that is compatible with all the old results of wave functions. It has certain advantages over the usual formulation, and it gives a different view of time, one that suggests that our usual understanding of time in physics is oversimplified. We show that the density formalism suggests an additional parameter in quantum states giving the time of the observer. And we show that the non-Hermitian extension of density matrices gives quantum states which include an arrow of time.

Carl Brannen works on elementary particle theory using Clifford algebra and density matrix theory.

The author seems to misunderstand the basic formalism of Quantum Mechanics (QM). The density matrix description of QM is entirely equivalent to the wave function description, as can easily be seen by the definition of the density matrix. Thus using the density matrix description doesn't resolve the measurement problem in QM. What the density matrix description provides is a slightly clearer and neater presentation of the measurement process.

And the author is incorrect in stating that the measurement process in QM "must act outside of time". QM describes completely the evolution of a quantum state in time, both in between measurements, and during measurements. e.g. in non-relativistic quantum mechanics (NRQM), the density matrix of a system undergoes unitary evolution in between measurements (say from time t1 to t2):

rho(t2) = U(t2,t1) rho(t1) U*(t2,t1)

where U(t2,t1) is the time-evolution operator for the wave function from time t1 to t2.

During measurement (at time t2, say), the density matrix undergoes a non-unitary transformation R(t2):

rho(t2,after measurement) = R(t2) rho(t2) R*(t2)

The difficulty of the QM measurement problem is not the description of measurement in time (which QM is perfectly capable of doing), but the seeming incompatibility between the unitary evolution of an undisturbed QM system in time and the non-unitary instantaneous "collapse" of the system during measurement.
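
The two update rules can be made concrete in a few lines of numpy. This is only an illustrative sketch: the single-qubit state, the rotation U, and the projector P are arbitrary choices of mine, and the measurement update is written in its normalized (Lüders) form rho -> P rho P / tr(P rho) rather than the bare R rho R* above.

```python
import numpy as np

# Single-qubit density matrix for the pure state |+> = (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Unitary evolution between t1 and t2: rho -> U rho U*
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rho_t2 = U @ rho @ U.conj().T
# Unitary evolution preserves purity: tr(rho^2) stays 1
print(np.trace(rho_t2 @ rho_t2).real)

# Projective measurement of |0><0| at t2: a non-unitary, normalized update
P = np.array([[1.0, 0.0], [0.0, 0.0]])
p0 = np.trace(P @ rho_t2).real      # Born-rule probability of outcome 0
rho_after = P @ rho_t2 @ P / p0     # post-measurement state, trace 1
print(p0)
```

The point of the sketch is only to show the asymmetry Ming describes: the first update is invertible (apply U†), while the second throws information away and cannot be undone.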

There have been many attempts to resolve the measurement problem in QM, most of which try to explain the non-unitary collapse of a system as the effect of the environment on the system's unitary evolution, i.e. the decoherence approach. This approach has a fundamental flaw: however close one may get to something that looks like non-unitary collapse using only unitary evolution, in the end any combination of unitary evolutions can only give rise to unitary evolution. So the best the decoherence approach can do is give us a FAPP (for all practical purposes) pseudo-explanation of the collapse of the wave function, never a true explanation that goes the last step...

The discussion so far has been limited to NRQM. Needless to say, even more problems arise in dealing with the measurement problem in relativistic QM (see e.g. papers by Yakir Aharonov and David Albert.)

Ming writes: "The density matrix description of QM is entirely equivalent to the wave function description, as can easily be seen by the definition of the density matrix."

This is true if the density matrix is defined this way, but the paper shows a generalization of (pure) density matrices, which are Hermitian, to non-Hermitian states. These do not correspond to any wave function. In addition, even if they were equivalent, wave functions and density matrices treat time differently, which is the point of the discussion.

Ming's example of a system that evolves according to a unitary law between t1 and t2, and then undergoes a non-unitary evolution at time t2, is not an uncommon way of describing collapse, but it is not entirely satisfactory in that it claims that sometimes evolution is unitary and sometimes it is not.
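
The first point is easy to check numerically: any density matrix built from a wave function, rho = |psi)(psi|, is automatically Hermitian and idempotent, so a non-Hermitian state cannot come from any wave function. A minimal sketch (the particular psi and matrix M are my own arbitrary examples):

```python
import numpy as np

# Any wave function psi yields a Hermitian, idempotent (pure) density matrix.
psi = np.array([1.0, 1j]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(np.allclose(rho, rho.conj().T))   # Hermitian by construction
print(np.allclose(rho @ rho, rho))      # idempotent: a pure state

# A generic non-Hermitian matrix with trace 1 fails the Hermiticity test,
# so it cannot equal |psi><psi| for any wave function psi.
M = np.array([[0.5, 0.3], [0.1, 0.5]])
print(np.allclose(M, M.conj().T))
```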

Regarding relativistic quantum mechanics and density matrices, the reader may find "On the Role of Density Matrices in Bohmian Mechanics" by Detlef Duerr, Sheldon Goldstein, Roderich Tumulka, and Nino Zanghi (quant-ph/0311127; Foundations of Physics 35(3): 449-467, 2005) appropriate because it has a section on second quantization.

See the literature for the large number of recent papers on relativistic Bohmian mechanics from the wave function approach and on density matrices in Bohmian mechanics.

Hello Carl,

I enjoyed your paper and especially your conclusion, "However, in this model observers do not interact per se, and consequently we may as well make the simplifying assumption that all observers have the same T. Then T becomes an attribute of the universe as a whole, and an explanation for that persistent human insistence

on free will and the uniqueness of the...

Thanks for reading my essay, Elliot.

There are some other people doing stuff in "Euclidean Relativity" (ER) that more or less fits in well with this essay and possibly your own more so. I'll write a comment over on your topic forum when I get some time and internet availability simultaneously.

I don't have a lot of hope for ER because prejudice against it is deeply ingrained. In fact, I don't have much hope for these essays on time. I just typed one up because I thought I shouldn't keep my opinions to myself forever on the subject. Instead, to change the foundations of physics I think we need to redo stuff that gives more concrete results.

Hopefully, Louise Riofrio will write an essay regarding her use of the equation R = c t, where t is the age of the universe and corresponds to the T used in the above essay.

Thanks Carl,

Actually my essay/Moving Dimensions Theory runs with Minkowski's/Einstein's relativity. I have looked into Euclidean Relativity only briefly.

Yes--we need concrete results, but after thirty years of String Theory and LQG, and probably fifty more, we should brave logic, reason, and *physics* with a faith in an apprehendable physical reality that is both simple and...

The author has submitted an article somewhat related to this, "Density Matrices and the Weak Quantum Numbers", to Foundations of Physics. The paper gives a derivation of the weak hypercharge and weak isospin quantum numbers of the left and right handed elementary fermions from a few simple assumptions about their density matrix representations. A copy of the submitted paper is attached.

attachments: WeakQNs.pdf

In fact both quantum mechanics and Einstein's relativity are built from the tools of the empiricist method (the new algebraic geometry born in seventeenth-century Europe).

One can even say that quantum mechanics precedes Einstein's ideology, although Einstein seems to speak about high spheres and high speeds, and Planck about small particles.

Einstein's theory is just a reflex, as quantum mechanics is.

The 'collapse process', as you say, is still within Descartes' algebraic geometry (and string theory too). This collapse has exactly the same cause as the Higgs boson paralogism and as the dualism of the particle split in two.

F. Le Rouge,

I agree with you completely on classifying QM and relativity, both GR and SR, as due to empiricism. String theory gets away from it, but since string theory is completely compatible with both relativity and QM, it cannot avoid the rot built into its foundations.

Mathematically, the essence of the problem is that both QM and relativity depend on symmetries to describe the world. The symmetries are obtained from experimental observations. Thus the empiricism. Neither theory is an attempt at explaining the world, they're both very elaborate curve fitting procedures, not theories in the Descartes sense.

The reason I wrote the paper linked above, "WeakQNs" was to give an alternative explanation to the weak quantum numbers of the elementary fermions. Rather than just saying "this is what experimenters tell us", I feel that theorists should look for an underlying explanation.

For me, symmetry is an attribute that allows one to solve a differential equation. The differential equation is the fundamental object, not the symmetries it possesses. To guess the nature of the universe we must guess the differential equations, not their symmetries.

That symmetries have done us well so far is not proof that the universe is constructed of symmetries "all the way down". To me, it's just proof that differential equations have symmetries, a fact that any mathematician knows without the need for experimental evidence.

Newton's gravitation is a great example of this. The differential equation is very simple, F = ma = GmM/r^2 (uh, if I remember correctly). The symmetries are somewhat more complicated; i.e. the conserved quantities observed by Kepler. This is the natural order of things: the fundamental object is simple, its symmetries are more complicated. With modern physics, the increasing complexity of the observed symmetries is a sign that the next big steps will be in the other direction, in guessing the fundamental differential equations.
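
The Kepler example can be made concrete with a few lines of numerical integration: start from the simple force law, and the conserved quantity (here angular momentum, tied to rotational symmetry) falls out as an invariant of the computed motion. A rough sketch in arbitrary units with G = M = m = 1 (my choices, purely for illustration):

```python
import numpy as np

# Newtonian two-body problem in the plane, G = M = m = 1 (illustrative units).
def accel(r):
    return -r / np.linalg.norm(r) ** 3   # a = -GM r / |r|^3

def ang_mom(r, v):
    return r[0] * v[1] - r[1] * v[0]     # L = x v_y - y v_x

r = np.array([1.0, 0.0])                 # initial position
v = np.array([0.0, 1.2])                 # initial velocity
dt = 1e-3
L0 = ang_mom(r, v)

# Leapfrog (velocity Verlet) integration over a few orbits
a = accel(r)
for _ in range(20000):
    v_half = v + 0.5 * dt * a
    r = r + dt * v_half
    a = accel(r)
    v = v_half + 0.5 * dt * a

# The rotational symmetry of the force law shows up as conservation of L.
print(abs(ang_mom(r, v) - L0))
```

The integrator never "knows" about Kepler's laws; the conservation emerges from the differential equation, which is the point of the paragraph above.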

Carl Brannen,

You quoted a paper by Duerr, Goldstein, et al.

If I recall correctly, they wrote "The Emperor's New Swindle" and faced distrust from those who consider entanglement proved. Doesn't this issue have serious practical consequences?

As far as I know, quantum computing still does not work as promised, and there are reasons for me to question an application of single-electron counting published in PRL in 1997.

Eckard Blumschein

My post was transmitted in a garbled form, so I am trying again

Evidently this does not like caret signs! So bra-kets I replace with ( and )

Do you have any ideas about the scale of the time T in the rho(x,t,x',t',T)? Is it that T must be much less than 1/freq, for freq pertaining to the system?

I have been working on a system of noncommutative geometry which includes associators. For an associator A and g, g' in the quantum groups G and G', an associator acts on these so that A: G -> G', or Ag = g', so that g^{-1}Ag = g^{-1}g'. This can define exterior products as well, so that the right hand side is a density matrix rho_{gg'} = |g)(g'|, written symbolically here. If we coarse grain over the associator, the value of an observable O is then

(O) = tr(O rho_{gg'}).

For O = unit this gives tr(rho_{gg'}) = e^{dE/kT}, for dE the energy functional (error) induced by tracing over the associators. Clearly then dE = kT ln(tr(rho_{gg'})) = dS, which is the entropy or information loss due to the coarse graining.

This leads to the Bogoliubov algebra for quantum fields in curved spacetime, which is due to the non-Euclidean nature of time. Associative quantum mechanics is non-unitary, but if it is due to an error correction code then qubits are preserved, at least on a fine-grained scale. Then the appearance of entropy (information loss or "burial") and its identification with time is a large-scale emergence.

I have a piece here #371 on an aspect of this physics with AdS spacetimes and the scaling of quantum fields.

Cheers,

Lawrence B. Crowell

Lawrence,

As it turns out, I'm not a fan of using symmetry groups to define the foundations of particle physics. I think it's been rather well picked over and is mathematically naive; yes differential equations have symmetries but symmetries are never foundational in mathematics. In my view, the situation we've ended up in is from people making a series of lucky guesses about symmetries but to get further, we have to make lucky guesses about differential equations (that have the observed symmetries).

Adding T changes the geometry of spacetime, and geometries imply symmetries. When you add an extra time dimension T, you end up having to modify Dirac's gamma matrices. That is, there are normally four gamma matrices because there are 3+1 spacetime dimensions. When you go to 3+1+1, you naturally end up with 5 gamma matrices. String theorists do similar things.

This all suggests that we should look at the density matrix states in the Clifford algebra C(4,1). This is done in a paper I wrote a few years ago called "The Geometry of Fermions" (googling that title will turn up a copy). The theme of the paper is "count the hidden dimensions using Clifford algebra and density matrices". The density matrix states correspond to the primitive idempotents of the Clifford algebra.

That paper was written from a slightly different point of view (the extra time dimension has to do with proper time, more or less), but as far as counting dimensions it works out the same. Your paper uses imaginary time; what I'm doing in that paper is related. It's also related to the "Euclidean relativity" work by various people. There are complications associated with the choice of which parameters you assume contribute to the geometry but it's too complicated to discuss here.

A short description of the particle content of the density matrices of a Clifford algebra is that you get N particles for a theory that needs NxN complex matrices in its representation. N is 2 to some integer power k. The N particles have properties that can be described as +-1 for k different parameters. For the Dirac gamma matrices k=2 and the two properties can be chosen to be particle / antiparticle, and spin up / spin down. This is just Clifford algebra and density matrices, no need to assume any symmetries or anything else; it all follows directly from density matrices and geometry.
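
The counting in the Dirac case (N = 4 = 2^2, properties ±1 for k = 2 commuting operators) can be checked directly. The sketch below uses the standard Dirac basis, which is my choice and not necessarily the conventions of the papers mentioned: the four projectors built from the signs of gamma^0 and Sigma_z are primitive idempotents (P^2 = P, trace 1) and resolve the identity.

```python
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])

# Dirac-basis gamma^0 and the spin operator Sigma_z (both diagonal here)
gamma0  = np.kron(sz, I2)     # diag(1, 1, -1, -1)
Sigma_z = np.kron(I2, sz)     # diag(1, -1, 1, -1)

I4 = np.eye(4)
projectors = []
for e in (+1, -1):            # particle / antiparticle label
    for s in (+1, -1):        # spin up / spin down label
        P = (I4 + e * gamma0) / 2 @ (I4 + s * Sigma_z) / 2
        projectors.append(P)

# Each P is a primitive idempotent (P^2 = P, trace 1); the four resolve unity.
for P in projectors:
    print(np.allclose(P @ P, P), np.isclose(np.trace(P), 1.0))
print(np.allclose(sum(projectors), I4))
```

This is pure linear algebra: the 2^k counting comes from picking a sign for each of the k commuting diagonal operators, which is the dimension counting described above.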

All this implies a preon structure for the elementary particles which is further explored in various other papers I've done. Most recently, Marni Sheppeard has been helping me with the CKM and MNS matrices.

After this contest ends (probably with my essay winning nothing), I'll type the essay into a more complete submission to Foundations of Physics maybe.

As it turns out, I'm not a fan of using symmetry groups to define the foundations of particle physics. I think it's been rather well picked over and is mathematically naive; yes differential equations have symmetries but symmetries are never foundational in mathematics. In my view, the situation we've ended up in is from people making a series of lucky guesses about symmetries but to get further, we have to make lucky guesses about differential equations (that have the observed symmetries).

Adding T changes the geometry of spacetime and geometries imply symmetries. When you add an extra time dimension T, you end up having to modify Dirac's gamma matrices. That is, there are normally four gamma matrices because there are 3+1 space time dimensions. When you go to 3+1+1, you naturally end up with 5 gamma matrices. String theorists do similar things.

This all suggests that we should look at the density matrix states in the Clifford algebra C(4,1). This is done in a paper I wrote a few years ago called "The Geometry of Fermions" (a title you can google to find a copy). The theme of the paper is "count the hidden dimensions using Clifford algebra and density matrices". The density matrix states correspond to the primitive idempotents of the Clifford algebra.

That paper was written from a slightly different point of view (the extra time dimension has to do with proper time, more or less), but as far as counting dimensions it works out the same. Your paper uses imaginary time; what I'm doing in that paper is related. It's also related to the "Euclidean relativity" work by various people. There are complications associated with the choice of which parameters you assume contribute to the geometry but it's too complicated to discuss here.

A short description of the particle content of the density matrices of a Clifford algebra is that you get N particles for a theory that needs N×N complex matrices in its representation. N is 2 to some integer power k. The N particles have properties that can be described as ±1 for k different parameters. For the Dirac gamma matrices k=2 and the two properties can be chosen to be particle/antiparticle, and spin up/spin down. This is just Clifford algebra and density matrices, no need to assume any symmetries or anything else; it all follows directly from density matrices and geometry.
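A quick sketch of the N = 2^k counting for the Dirac case (k = 2), using the fact that gamma^0 and the spin operator i gamma^1 gamma^2 are commuting sign operators, diagonal in the Dirac representation (the physical labels in the comments follow the post above):

```python
import numpy as np

# In the Dirac representation, gamma^0 and i*gamma^1*gamma^2 are
# simultaneously diagonal with eigenvalues +-1:
Q1 = np.diag([1., 1., -1., -1.])   # particle / antiparticle sign
Q2 = np.diag([1., -1., 1., -1.])   # spin up / spin down

# N = 4 = 2^k with k = 2: one primitive idempotent (pure density matrix)
# for each choice of the two signs (a, b)
idems = [(np.eye(4) + a * Q1) / 2 @ (np.eye(4) + b * Q2) / 2
         for a in (1, -1) for b in (1, -1)]
for p in idems:
    assert np.allclose(p @ p, p)          # idempotent
    assert np.isclose(np.trace(p), 1.0)   # primitive: rank one
assert np.allclose(sum(idems), np.eye(4)) # the N particles resolve the identity
```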

All this implies a preon structure for the elementary particles which is further explored in various other papers I've done. Most recently, Marni Sheppeard has been helping me with the CKM and MNS matrices.

After this contest ends (probably with my essay winning nothing), I'll expand the essay into a more complete submission, perhaps to Foundations of Physics.

Hello Carl,

The 120-cell defines the icosians, which are the D_8, which with the 128 half-spinor part gives the Cl(16), which embeds E_8. The icosians are a system of quaternions, gamma-matrix-valued elements. So in part I would agree with you. This defines a [4,2,2] error correction code, which is an elementary (if you call E_8 elementary) quantum error correction code. The physical point is that qubits which are "processed" by an instanton of the gravity field, such as a black hole, pass through the information channel completely preserved, but encrypted in a form which is difficult to decipher.
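For concreteness, assuming the [4,2,2] code mentioned here is the standard [[4,2,2]] quantum error-detecting code with stabilizers XXXX and ZZZZ (my reading, not spelled out in the post), a minimal numpy check that its code space holds two encoded qubits is:

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

# Stabilizers of the [[4,2,2]] quantum error-detecting code
S1 = kron([X, X, X, X])
S2 = kron([Z, Z, Z, Z])
assert np.allclose(S1 @ S2, S2 @ S1)        # the stabilizers commute

# Projector onto the code space (simultaneous +1 eigenspace)
P = (np.eye(16) + S1) / 2 @ (np.eye(16) + S2) / 2
assert np.allclose(P @ P, P)                # it is a projector
assert np.isclose(np.trace(P).real, 4.0)    # 4-dim code space: 2 encoded qubits
```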

At this stage I would agree that working out explicit irreps of groups and the like is probably secondary. What I am more interested in is a "proof of existence," or maybe more like a demonstration of applicability. What particular irrep the E_8 takes is at this time secondary, where of course this gets into the Leech lattice, and irreps are practically impossible to find.

To be honest I think that to find appropriate irreps of these large groups with huge numbers of irreps, such as the E_8 with its 60,779,787, some sort of quantum computation might be required. The quantum computer would then find the minimal energy or configuration for a wave function set over all possible irreps. I will need some time to think about this, but some quantum computation over the root space of Kazhdan-Lusztig-Vogan polynomials, thought of here as eigen-numbers, might result in a hierarchy of irreps, where some extremization principle might give the irrep appropriate for physics.

The quantum groups I outline are defined on quaternions (noncommutative quaternions) and the overlap is for different quaternions under different representations or bases. So this connects in some way with your ideas about using Clifford algebra.

I suppose I will also rewrite my paper and try to publish it. I will pretty clearly not win. I entered late anyway: I thought the deadline was Jan 1, and then was informed around Thanksgiving that it was Dec 1. So I wrote it up in about three days.

Cheers,

Lawrence B. Crowell


Lawrence, interesting about the error-correcting codes. My cohort, Marni Sheppeard, talks about this a lot. It comes up in the context of "mutually unbiased bases" (MUBs), which is how she got a postdoc at Oxford's quantum information group that starts in January; that is, she's the only person on the planet who knows both MUBs and category theory. Her interest in MUBs comes from my application of them to the Koide mass formulas. Eventually I'll get around to publishing this, but I'm hoping to get more things complete first.

Right now I'm messing around with Gullstrand-Painleve coordinates and writing the gravitational force of a black hole as a series in powers of the radial distance. (GP coordinates do this exactly with a finite number of terms and so are kind of interesting.)

On the subject of E8, it arises naturally from density matrices in a way that can be described in a few paragraphs.

Density matrices are operators and so they can act on states. When you make a bound state by combining a number of particles that are individually represented by density matrices, you are defining a mapping on the general set of density matrices M. That is, "M" means all possible density matrices of all possible symmetries.

There is a peculiarly natural way to describe a bound state built from density matrices, and that is to assemble them into a matrix form. In doing this, you have to make a minor generalization of density matrices to non-Hermitian density matrices. These sorts of density matrices represent states where the outgoing and incoming states differ. They can be represented by products of the usual Hermitian density matrices.

Anyway, E8 is the only group whose algebra has symmetry given by the group. That is, E8's algebra has symmetry E8. Now when you consider a matrix of density matrices, what you are describing can also be considered as a symmetry operation on the density matrices themselves.

This is better described in a series of posts on the subject that can be located by googling for

density+matrix+E8+bound+state

and clicking around on references. It's about a half dozen posts in total.

The reason it takes so many posts is that it requires rethinking the concept of "quantum state" quite a bit. But the mathematics is very simple and straightforward. And to help comprehension, I put in a lot of examples. The thing I don't like about it is that the E8 symmetry comes from the assumption that the density matrix represents the bound state exactly. Of course this is not at all exact, and one gets a broken E8 instead. It seems to me that it's better to try to understand the exact bound state instead of a bad approximation.


Sorry it took some time to get back. I looked at some of the references on density matrices. The identification with projectors or idempotents is interesting. Yet I am not sure how they can be identified with primitive idempotents.

I am not sure how this connects with associators and MUBs. Yet I think that systems of quaternions in an octonionic system define groups g, and that g^{-1}Ag = g^{-1}g' for A an associator map between these groups. For exterior products of elements (states) g in a quantum group, this defines density matrices, which are defined across commensurate quantum groups for associators. I looked at MUBs about a year ago, so I can't say with any certainty how this connects with MUBs. It does have connections with Hadamard matrices, however, which as I recall are utilized in MUBs.
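The Hadamard connection can be made explicit for a single qubit: the three mutually unbiased bases are the eigenbases of sigma_z, sigma_x and sigma_y, and the sigma_x basis is exactly the normalized 2×2 Hadamard matrix. A quick check (my addition) that every cross-basis overlap satisfies |<a|b>|^2 = 1/2:

```python
import numpy as np

# Three mutually unbiased bases for a qubit: the eigenbases of sigma_z,
# sigma_x and sigma_y. The sigma_x basis is the normalized 2x2 Hadamard matrix.
Bz = np.eye(2, dtype=complex)
Bx = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard
By = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)

bases = [Bz, Bx, By]
for i in range(3):
    for j in range(i + 1, 3):
        overlaps = np.abs(bases[i].conj().T @ bases[j]) ** 2
        assert np.allclose(overlaps, 0.5)   # unbiased: |<a|b>|^2 = 1/d = 1/2
```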

There is another element to what I am thinking, which is projective varieties and Goppa codes. These are codes on elliptic curves and varieties. Since they work for projective varieties, these seem to have connections with null congruences (light cones, Robinson congruences, horizons, etc.). What I have been attempting so far with little success is to find connections between Goppa codes and some elliptic curve conditions with norms of cyclotomic rings of quaternions.

I think there is some sort of connection between Golay codes, which really work on Euclidean lattices, and Goppa codes which have properties similar to Lorentzian systems with Zariski point-set topological moduli.

Cheers,

Lawrence B. Crowell


The notion of primitive idempotents isn't that common in the literature. You have to be a bit of a connoisseur to run into it. I'm sure there are references in the literature, as it's kind of obvious if you spend enough time playing with the things. My website, www.densitymatrix.com, has a link to Frank Porter's (Caltech) class notes on quantum mechanics. If you click on that link, you will see...


I will comment more later on the MUB issue. What you indicate is a standard aspect of associator rules for basis elements. For e_i, e_j and e_k, associator tables usually have e_i(e_je_k) = -(e_ie_j)e_k, and the sum of these defines some other element. The quantum group G has elements g = exp(ia*e) (a is small), and the associator A = exp(ia*e') is such that g^{-1}Ag is an associated product. The Baker-Campbell-Hausdorff expansion gives commutators of the elements plus associators. The associators are between lattices which tessellate the manifold, where the lattice is E_8 and the group is E_8. The miracle of E_8 is that the symmetry of the root space is that of the group.

The physical idea is that this is a nonunitary transformation of quantum elements which still preserves quantum bits. A density matrix of such states, when coarse grained, gives the thermal distributions seen in Hawking radiation.

I will get back to MUBs later. It has been a while since I have looked at that topic.

Cheers,

Lawrence B. Crowell


Lawrence,

I've got a new paper out. Well, I'm basically asking friends to review it before I submit it to Phys Math Central. It's about the masses of hadron excitations. Kind of like Regge trajectories, but about radial excitations instead of angular momentum. Anyway, it's also about MUBs and has an introduction to them in the background section. Right now, it's here:

http://www.brannenworks.com/koidehadrons.pdf

After this I'm writing a joint paper with Marni Sheppeard that uses the same methods to do the quark and lepton mixing angles.


It looks like a long paper. I noticed you talk about quantum tomography, which is something I think is involved with characterizing quantum states in black holes. Anyway, it will take some time to digest this.

L. C.


Tomography means measuring something by taking slices out of it, more or less. In this case, quantum tomography means determining a quantum wave function by taking measurements of it.

Here's the calculations for quantum tomography. (Uh, I don't have a reference for this, but I think it's obvious enough that I'm not likely to have made too many errors typing this in on the fly.)

For the case of spin-1/2, one obtains a bunch of particles all in the same identical (but unknown) state. One uses 1/3 of them to measure the spin in the x direction. Another 1/3 is used to measure spin in the y direction. And the last 1/3 are used to measure spin in the z direction.

With all three spin measurements, you get a probability between 0 and 1 for spin in the + direction (say +x). Each number is only approximate, but you can get as accurate as you wish by sampling enough particles.

So you've got p_x, p_y, and p_z as your three probabilities. Convert these to three numbers x, y, and z by


x = 2p_x - 1,

y = 2p_y - 1,

z = 2p_z - 1.

By the laws of quantum mechanics, you should be able to prove that xx + yy + zz is less than or equal to 1.


Ooops it ate the rest of the post.

So you should be able to prove that

xx + yy + zz is less than or equal to 1.

If it were equal to 1, then you have a pure density matrix state and the vector is on the Bloch sphere. Either way, your estimate for the density matrix of the quantum state is:

(1 + x sigma_x + y sigma_y + z sigma_z)/2

where sigma_n are the Pauli spin matrices.

So quantum tomography is the process of figuring out a wave function from measurements. A complete set of mutually unbiased bases defines a measurement system that optimizes the process of quantum tomography.
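The whole procedure above can be sketched in a few lines of numpy, using exact expectation values in place of the large-sample limit (the function name is mine):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def tomography(rho):
    """Reconstruct a qubit density matrix from the three spin probabilities."""
    # probability of spin + along each axis: p = (1 + <sigma>)/2
    p = [(1 + np.trace(rho @ s).real) / 2 for s in (sx, sy, sz)]
    x, y, z = (2 * pk - 1 for pk in p)          # p -> Bloch vector, x = 2p - 1
    assert x * x + y * y + z * z <= 1 + 1e-12   # inside or on the Bloch sphere
    return (I2 + x * sx + y * sy + z * sz) / 2

# example: spin-up along x, a pure state (Bloch vector on the sphere)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
assert np.allclose(tomography(rho), rho)
```

With sampled rather than exact probabilities, the reconstruction is approximate and improves with the number of particles measured, as described in the post.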


I was probably not clear in what I wrote. Quantum tomography is a way of entangling various spin states with a spin and then using ancillary measurements to estimate the state of the one spin. My statement was meant to indicate that I think this could be used to estimate the internal state amplitude of a quantum black hole. As Bekenstein found, a black hole is a one-dimensional channel, and a state could in principle be teleported through a black hole.

Lawrence B. Crowell


I had to read your most recent post several times over several days before it made sense to me. I looked it up on arXiv, i.e. gr-qc/0603046, and they do use the method.

By the way, getting back to the E8 feature, you wrote "The miracle of E_8 is that the symmetry of the root space is that of the group." I like the way wikipedia puts it: It's unique among simple groups in that its nontrivial representation of smallest dimension is the adjoint representation acting on the Lie algebra of E8.

This gets back to how E8 can arise naturally at low temperatures from bound states of less complicated things.


To connect with density matrices, the Leech lattice is in Jacobi functions ~ Θ^3(E_8), and so there are three E_8's there. The Leech lattice decomposes to S^3×SL(2,7), or a Fano plane at each point of a 3-sphere. So this appears to define a Bloch sphere type of construction. The projective Fano plane defines a set of three E_8s on a three-ball (an 8^3). The breaking of this system then freezes one of these E_8's into its lattice of roots, and the other two persist as E_8×E_8. The lattice of roots then defines a tessellation of AdS. So there are connections with some established physics here.

This might be a different direction from what you might be thinking, for the Leech lattice has 196,560 elements and things appear to be vastly complicated. The automorphisms over these of course lead to the Fischer-Griess (monster) group. I might imagine things could go into those domains as well. Yet I think that to make physics work there must be a master quantum error correction code, such as the [8,4,4] Hamming code for E_8, or the Steiner system for the Mathieu group.
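The [8,4,4] code mentioned here is the extended Hamming code, which lifts to the E_8 root lattice via Construction A. A short check of its parameters (the generator matrix below is one standard choice):

```python
import numpy as np
from itertools import product

# Generator matrix of the extended Hamming code [8,4,4] (one standard choice)
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])

# Enumerate all 2^4 codewords and their Hamming weights
codewords = [np.mod(np.array(m) @ G, 2) for m in product([0, 1], repeat=4)]
weights = sorted(int(w.sum()) for w in codewords)
assert len(codewords) == 16                    # 2^4 messages
assert min(w for w in weights if w > 0) == 4   # minimum distance 4
```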

We might in a coarse-grained sense say that the E_8's emerge from a sort of chaos or "simplicity," for obviously we are not going to cast about finding irreps for the Leech lattice, or find all of them. So these massive groups exist as a sort of ensemble space for various low-energy (e.g. < 10^2 E_{Planck}) configurations. And of course I don't propose getting into monster group considerations in any considerable way.

Tony Smith seems to want to push things that far, in fact with systems of monster groups or moonshines and so forth. I have a hard time making sense of some of what he writes, it seems at times almost autistic in a strange way.

My paper for this essay contest, which garnered a vote or two from FQXi and not many public votes, is part of the physical arguments I am laying down for this.

Lawrence B. Crowell


"So there are connections with some established physics here."

The things you are talking about here, Fano planes and Jacobi functions, are things that some of my correspondents also talk about. In particular, Michael Rios see math-ph/0503015. Marni Sheppeard in her blog:

http://kea-monad.blogspot.com/2007/08/m-theory-lesson-80.html

talks about Fano planes and references a blog post of mine which link is out of date. To see what she is talking about, with respect to Fano planes, see the diagrams at this post:

http://carlbrannen.wordpress.com/2007/10/04/fict

Uh, in the comments, "Kea" is Marni Sheppeard, and "Kneemo" is Michael Rios. The drawings show how to calculate topological phase for the Pauli MUBs. I don't know if they have much to do with Fano planes but the higher math types seemed interested.

By the way, regarding your essay, an interesting paper has come by on arXiv that sort of agrees with both of our papers. See:

http://arxiv.org/abs/0901.4917

and other papers by Walter Smilga.

The paper gives a derivation of a rather accurate formula for the fine structure constant originally found by Wyler. It's based on the assumption that the correct symmetry group is SO(3,2) rather than the Poincaré group.
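For reference, Wyler's expression as it is commonly quoted is alpha = (9/8π^4)(π^5/(2^4·5!))^{1/4}; a one-liner checking just the arithmetic (not the derivation):

```python
import math

# Wyler's formula as commonly quoted (arithmetic check only):
#   alpha = (9 / (8 pi^4)) * (pi^5 / (2^4 * 5!))^(1/4)
alpha = (9 / (8 * math.pi ** 4)) * (math.pi ** 5 / (16 * math.factorial(5))) ** 0.25
print(1 / alpha)   # ~ 137.036, close to the measured 137.035999...
```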

This fits in with what I'm doing because I've got two copies of the time coordinate, that is, the usual time and the absolute age of the universe. So it's quite natural that one would need a larger symmetry group for this. My original work on the fermions was based on extending the Dirac algebra by adding one hidden dimension to it, which amounts to the same thing (since the Dirac algebra is complex, the sign of the additional dimension doesn't affect the algebra any).

And it fits in with what you're doing because this symmetry is related to AdS in some manner. I'm not a gravity guy and can't explain this further.

