CATEGORY:
The Nature of Time Essay Contest (2008)
TOPIC:
From time to timescape -- Einstein's unfinished revolution by David L. Wiltshire
David L. Wiltshire wrote on Nov. 14, 2008 @ 15:29 GMT
Essay Abstract
I argue that Einstein overlooked an important aspect of the relativity of time in never quite realizing his quest to embody Mach's principle in his theory of gravity. As a step towards that goal, I broaden the Strong Equivalence Principle to a new principle of physics, the Cosmological Equivalence Principle, to account for the role of the evolving average regional density of the universe in the synchronisation of clocks and the relative calibration of inertial frames. In a universe dominated by voids of the size observed in large-scale structure surveys, the density contrasts of expanding regions are strong enough that a relative deceleration of the background between voids and the environment of galaxies, typically of order 10^{-10} m/s^2, must be accounted for. As a result one finds a universe whose present age varies by billions of years according to the position of the observer: a timescape. This model universe is observationally viable: it passes three critical independent tests, and makes additional predictions. Dark energy is revealed as a mis-identification of gravitational energy gradients and the resulting variance in clock rates. Understanding the biggest mystery in cosmology therefore involves a paradigm shift, but in an unexpected direction: the conceptual understanding of time and energy in Einstein's own theory is incomplete.
Author Bio
David Wiltshire did undergraduate studies in his native New Zealand, followed by a PhD in the Relativity and Gravitation Group at the University of Cambridge, UK, in the mid 1980s. After a variety of research and teaching positions in Italy, the UK, and Australia he returned to NZ in 2001, where he is now Senior Lecturer at the University of Canterbury, Christchurch. He is known for his work in higher-dimensional gravity, brane worlds, black holes and quantum cosmology. His recent research has turned to the problem of dark energy, the averaging of the inhomogeneous universe and foundational implications for cosmology.
Download Essay PDF File
John Merryman wrote on Nov. 15, 2008 @ 00:37 GMT
Professor Wiltshire,
Your observation about the different clock rates obtained in gravitational voids is extremely interesting. Given your insight into the current cosmological model, may I ask some questions about issues which make me skeptical of the basic model:
According to theory and observations by COBE and WMAP, the expansion of space and gravitational contraction are roughly equal, resulting in large scale flat space. If this is so, then it would seem the overall expansion is negated by gravity, so that while the measure of space is effectively expanding between gravitational structure, it is also collapsing into these gravity wells at an equal rate. How is it then that the overall universe could be expanding?
It just seems more logical to me that there is some sort of process, where these two effects are opposite sides of the same cycle, which would explain why they are equal. Gravity does cause particulate mass to collapse in on itself, into ever greater density, but either through chemical reactions or pressure, this mass ignites and radiates back out across a broad spectrum of energies. Is it possible that the observed redshift and the expansion of measured space is a consequence of this expanding energy, just as gravitational collapse is a property of mass?
Specifically Einstein realized the presence of mass caused space to collapse over time, so for the Cosmological Constant to balance this effect, it would be an expansion of space, logically where it is not being contracted by gravity. As you point out, it is time which is faster in these voids. Could this affect the propagation rate of light across intergalactic space?
If this is so, the more space that light crosses, the more the effect would be compounded, creating the impression that distant sources are receding at increased rates, but still having the base rate of expansion attributed to dark matter and not one slowing at the geometric rate assumed by Big Bang Theory.
At some point light sources would seem to recede at the speed of light and this would create a horizon line for visible light and the sources thereof, but not black body radiation.
I could further speculate, but this is your forum, so I'll leave it to you as to whether it is worth your effort to continue along this train of thought.
T H Ray wrote on Nov. 16, 2008 @ 13:14 GMT
A wonderfully lucid analysis of the relation between general relativity and Mach's Principle, with a clear and natural path to link a quantum universe with quantum gravity.
The paper really does follow the best tradition of Einstein in taking, metaphorically speaking, a God's-eye view of the universe ("...average cosmic rest frame...") which holds promise for a mathematically complete physical theory. I like the treatment of geometrical congruences in context of scale. Also, among the several insights I find particularly appealing: "We can always find regional frames in which the average volume-expanding motion with deceleration is such that we cannot tell whether particles subject to such motion are at rest in an expanding space, or moving in a static space." (I characterize this same phenomenon in my own work as "volume preserving is energy conserving.")
Thank you, David Wiltshire, for a great read and stimulating ideas!
Tom
Dr. E (The Real McCoy) wrote on Nov. 16, 2008 @ 18:00 GMT
Thanks for the wonderful paper, David!
You write, "In 1905 Einstein completely changed our understanding of the nature of time. Rather than being an absolute standard independent of the physical objects in the universe, time became an intrinsic property of the clocks carried by the objects themselves. In comparing two clocks, time could stretch and bend depending on the relative speeds of particles over their histories."
It is important to note that in his 1912 paper, Einstein never stated that time is the fourth dimension, but rather he and Minkowski wrote x4=ict, implying dx4/dt=ic: the fourth dimension is expanding relative to the three spatial dimensions, distributing locality and fathering time--the central postulate of Moving Dimensions Theory. Godel et al. had problems with the block universe and frozen time General Relativity implied, so thank goodness this block universe does not exist--thank goodness that time is not frozen and we have free will! MDT has unfrozen time and progress in theoretical physics, liberating us from the antitheory regimes! Thank goodness change--which marks all realms of physics and measurement (as there is no physics without measurement)--has finally been woven into the fundamental fabric of spacetime with MDT's simple postulate and equation: dx4/dt=ic. For without change, how would we be able to change the antitheory establishment, which is enforcing the block universe alongside the anthropic principle to justify its supposed inevitable permanence? Unfortunately, all this is done at the expense of physics and physicists.
You write, "The quasilocality of gravitational energy and momentum is very different to a nonlocality of interactions in flat spacetime which some physicists occasionally postulate and which is anathema to many, myself included."
Nonlocality is a fact of quantum mechanics. Perhaps I am misreading this. Do you not agree with the nonlocality observed in quantum mechanics, from the foundational double-slit experiments to Aspect et al.'s experiments supporting quantum entanglement and action-at-a-distance?
Moving Dimensions Theory also accounts for nonlocality with a *physical* model--the fourth dimension is expanding relative to the three spatial dimensions, distributing locality. Hence two ageless photons, which shared a common point of origin, can yet be entangled, as they inhabit the same locality in the fourth expanding dimension, no matter how far apart they travel. More on this can be found in my paper "Time as an Emergent Phenomenon: Traveling Back to the Heroic Age of Physics by Elliot McGucken": http://fqxi.org/community/forum/topic/238
And too, MDT's fundamental universal invariant--from which relativity, entropy, time, and quantum mechanics all arise--the fourth dimension expanding relative to the three spatial dimensions at the rate of c (dx4/dt=ic)--naturally accounts for the gravitational slowing of light and time, as well as the gravitational redshift, as shown in the attached figures. It also explains why there is no need to quantize gravity, as while the fourth dimension expands as a probabilistic wavefront with a wavelength given by the Planck length, space is continuous. Perhaps this novel, more realistic view of time in General Relativity, which also accounts for time and all its arrows, entropy, and quantum mechanics' nonlocality and entanglement, while providing a deeper universal invariant from which all of relativity can be derived, can help you further your work!
Well, I hope MDT might aid you in your work, as the goal of physics is to unify diverse *physical* phenomena with simple, beautiful, common principles reflecting the deeper truth of our physical reality. And this is best accomplished with simple *physical* postulates representing *physical* realities that can be summed up with mathematical equations: dx4/dt=ic.
Best & thanks for the paper,
Dr. E (The Real McCoy)
attachments:
2_MOVING_DIMENSIONS_THEORY_EXAMINES_THE_GRAVITATIONAL_REDSHIFT_SLOWING_OF_CLOCKS.pdf
David L Wiltshire wrote on Nov. 21, 2008 @ 03:58 GMT
Dear Readers
As I may not have time to answer all of the questions of the sort posted here, may I suggest that you also first check out my web pages
http://www2.phys.canterbury.ac.nz/~dlw24/universe/
There is an FAQ, and links to some popular articles on my work, including a New Scientist feature article from March 2008 (which is now open access), and a 37 minute podcast of an interview on Radio NZ. However, the FAQ does not yet address anything relating directly to the Cosmological Equivalence Principle discussed in the essay, and I probably won't have time to update it for a while.
I will respond to some of the questions/statements of John Merryman and "E Real McCoy"...
-------------------------------------------------------
John Merryman:
>According to theory and observations by COBE and WMAP, the expansion of
>space and gravitational contraction are roughly equal, resulting in large
>scale flat space... How is it then that the overall universe could be
>expanding?
The statement "the expansion of space and gravitational contraction are roughly equal" is incorrect, even in the standard Friedmann-Lemaitre cosmology. So there is no problem that overall the universe is expanding.
The observation you are in fact referring to is the angular scale of the Doppler peaks in the CMB anisotropy spectrum. What is actually measured is a blackbody spectrum with tiny fluctuations in its mean temperature. A cosmological model is needed to interpret the statistical spectrum of angular correlations in these fluctuations, as they were laid down at the time of last scattering when the universe was a few hundred thousand years old. The CMB photons have propagated to us over the largest distances possible in the intervening 14-15 billion years (by our clocks). Thus any inferences made from the CMB are always limited by the assumptions of a cosmological model for the expansion history of the universe between last scattering and now.
The "theory" which you are citing is in fact the standard cosmology based on assuming homogeneous isotropic expansion and no structure in the universe. If you assume that the spatial curvature of the universe is the same everywhere, that the universe contains no structure and evolves as a smooth featureless fluid by a Friedmann-Lemaitre solution, then it turns out that the angular scale of the sound horizon, which determines the overall angular size of the Doppler peaks, coincides with the expectation based on a spatially flat universe.
I claim that the 80-90 year old standard cosmology presents an oversimplified view of the universe that ignores the actual observations of voids and inhomogeneities. Such observations realistically suggest that spatial curvature is not the same everywhere. If the spatial curvature is not the same everywhere then you have to redo the calculations to analyse the CMB. For this a specific model universe is required. Constructing such a model is nontrivial and very difficult to study in general, which is why most people persist with the standard Friedmann-Lemaitre model. I did the relevant calculations recalibrating the angular scale of the sound horizon for the new model universe I have developed in New J. Phys. 9 (2007) 377. Furthermore, in a paper with my student Ben Leith and former student Cindy Ng, which appeared in Astrophys. J. 672 (2008) L91, we found that the parameter values which best fit the Riess gold supernovae data also pass the angular scale of the sound horizon test. This is one of the three independent tests I mention in my essay.
>As you point out, it is time which is faster in these voids. Could this
>affect the propagation rate of light across intergalactic space?
One quibble: time does not mean anything independently of specifying an observer, as you can get very different results from those of any given observer by making a local boost at any point. So I would not say "time is faster in voids". That's very Newtonian wording. As Einstein pointed out in 1905, time is always a property of a clock. In the present case it is the time *measured by the clocks of observers who see an isotropic CMB radiation* that is faster in a void.
And yes of course this does affect the interpretation of the propagation of light, and gives important differences relative to the expectations of the Friedmann-Lemaitre models. There are many different predictions, which are outlined in my papers. The next paper - cited in the essay as ref [16] "in preparation" - will be out before the end of the year, and looks at several distinguishing predictions.
>If this is so, the more space that light crosses, the more the effect
>would be compounded, creating the impression that distant sources are
>receding at increased rates, but still having the base rate of expansion
>attributed to dark matter and not one slowing at the geometric rate
>assumed by Big Bang Theory.
There are a number of things with your wording here that are problematic. Firstly, the "Big Bang" is simply the idea that the universe has expanded over a finite time from a state in which it was much smaller, hotter and denser in the past. So my model, just like the standard model, is a "big bang" model, as opposed to universes which have no beginning in time such as the "steady state" and Hoyle-Narlikar models. Furthermore, interpreting very distant redshifts in terms of "recession velocities" can lead to all sorts of interpretational difficulties. Really, a velocity is only something you measure locally at a point. You can interpret any derivative of a distance by time as a velocity, but in general relativity such quantities do not necessarily have to be bounded by the speed of light because they are only formal definitions, not local operational measurements. A locally measured velocity - something whizzing past you - by contrast is always bounded by the speed of light. In cosmology, even with distant events you can formally interpret things in terms of recession velocities if you use the connection of general relativity to parallel transport a local 4-velocity at some distant event to your event along the lightcone. (For a recent non-technical account see Bunn and Hogg arXiv:0808.1081.) These sorts of interpretation issues are no different in my model than in the standard Friedmann-Lemaitre models. What is different is that the expansion history is different, and the interpretation of cosmic acceleration is purely apparent. Furthermore, below the scale of statistical homogeneity there will be an observable variance in the apparent Hubble flow that correlates to observed structures in a particular fashion. This is perhaps the most interesting and definitive prediction of the new cosmology, as it has no counterpart in the Friedmann-Lemaitre models. (See http://arxiv.org/pdf/0712.3984 for the least technical overview of my work.)
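The distinction drawn here between a formal recession velocity and a locally measured one can be made concrete with a toy Hubble-law calculation. This is my own illustrative sketch, not taken from the post; the function name and the value of H0 are assumptions:

```python
# Toy sketch: the "recession velocity" v = H0 * D is a formal coordinate
# rate, not a locally measured speed, so nothing forbids it exceeding c
# at sufficiently large distances.  H0 here is illustrative only.
C_KM_S = 2.998e5          # speed of light in km/s
H0 = 70.0                 # illustrative Hubble constant, km/s/Mpc

def formal_recession_velocity(distance_mpc):
    """Hubble-law velocity: a formal definition, not a local measurement."""
    return H0 * distance_mpc

# Distance at which the formal velocity equals c (the Hubble radius).
hubble_radius_mpc = C_KM_S / H0

for d in (1000.0, hubble_radius_mpc, 8000.0):
    v = formal_recession_velocity(d)
    print(f"D = {d:7.0f} Mpc  ->  v = {v:8.0f} km/s  ({v / C_KM_S:.2f} c)")
```

Beyond roughly 4300 Mpc (with this H0) the formal velocity exceeds c, which is harmless precisely because no local observer ever measures it as a speed.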
-------------------------------------------------------
E. Real McCoy:
>Nonlocality is a fact of quantum mechanics. Perhaps I am misreading this.
Indeed you are misreading it because I said "nonlocality of interactions" not "nonlocality of quantum states". These are very different things. One part of physics deals with spacetime structure and the interactions that live in the structure: gravity, electromagnetism, weak and strong forces. The propagation of these forces is always subject to the limits of causality imposed by the spacetime structure. Quantum mechanics addresses something else - the fact that there is a fundamental indeterminism in correlations of particular measured properties of matter fields, beginning most importantly with position and momentum (in a given direction). Quantum mechanics will certainly limit our ability to completely determine future spacetime structure from a given matter distribution, but it does not change the rules of causality. It does not give "action at a distance" in the way that Newtonian gravity involves action at a distance.
It is certainly possible to have quantum states which are entangled with each other over spacelike distances, in Einstein-Podolsky-Rosen type experiments such as the ones of Aspect that you refer to. I know that there is a lot of confusion around this, some of it promulgated by Einstein who spoke about "spooky action at a distance" in reference to EPR experiments. Quite apart from the fact that Einstein was wrong in his predictions about the outcome of EPR experiments, I would say that he was wrong in referring to EPR type entanglements as "action at a distance". The fact is that you cannot transmit a message by making EPR type measurements, so you do not violate causality. People who do foundations of quantum mechanics type experiments can do "quantum teleportation" of quantum states by these means. However, such teleportation cannot be used to transmit a message, and so is not teleportation of the sort you see on Star Trek. By making a measurement on one part of an entangled state, you can know some correlated property of the other part of that state at a spacelike separation. E.g., by learning some property of a particle at your location you determine some related property of a distant particle that was entangled with it. This may seem "spooky" as you do have a freedom to determine which property you measure and as classical physicists we are used to thinking of such properties "existing" independently of the observer, when quantum mechanics says that is not so. But if you can get over your classical hangups then what is "spooky" is not so frightening. And whether it is "spooky" or not it is not "action" in the sense of transmission of a message. It is just an inevitable consequence of indeterminism of the sort supplied by our rules of quantum mechanics.
I should mention that the Cosmological Equivalence Principle approaches general relativity from the philosophical viewpoint that laws of physics only have meaning relative to an observer. I view Einstein's equations for the universe as evolution equations which only make sense from the point of view of an observer, and so are limited by the finite size of the past light cone at any event (the "particle horizon" technically speaking). This has important consequences on account of cosmic variance in density fluctuations and their subsequent evolution, as I elaborate in section 9 of my paper "Cosmic clocks, cosmic variance and cosmic averages" [ http://stacks.iop.org/1367-2630/9/377 ]. Furthermore, since a regional "cosmological inertial frame" is one which is conformally flat - i.e., with zero Weyl curvature - and since any "nonlocal" curvature has to be part of the Weyl tensor, the CEP also includes the idea that the only "non-local curvature" we are allowed physically is that which has arisen as a result of local processes within the past light cone at any event - through gravitational collapse and production of gravitational waves etc. In the limit of the earliest times that means we are not allowed Weyl curvature for initial conditions of the universe, giving a fundamental link to Penrose's Weyl curvature hypothesis. I discuss this in detail in a section of the recent paper in which I introduced the CEP: Phys. Rev. D 78 (2008) 084032 [ http://arxiv.org/pdf/0809.1183 ]. Penrose formulated the Weyl curvature hypothesis motivated by understanding the arrow of time and gravitational entropy. These are further aspects of the nature of time which the proposed framework may shed light on; but this has still to be developed.
John Merryman wrote on Nov. 22, 2008 @ 00:08 GMT
Professor Wiltshire,
Thank you for taking the time to reply and I'm having to think through much of what you have said. I would like to put one more question forward that I haven't found an answer to which isn't too confusing for me to understand, but you might be able to clarify for me;
When originally proposed, the expanding universe was assumed to be an increasing volume of space. This posed a number of problems: one, that its homogeneity was hard to explain, and also that all galaxies outside the local cluster were observed to be redshifted directly away from our position, as if we were the center of the universe. To resolve these, it was proposed that the very "fabric" of space expanded, first in the inflation stage and then more slowly.
The problem I have with this is that a standard speed of light seems to be still assumed for the post-inflation expansion, such that if two points are x lightyears apart and the universe were to double in size, then they would roughly be 2x lightyears apart. It seems to me this is an increasing amount of stable space, not expanding space, since if it is space which is being stretched, not just added to, then these two points would always be x lightyears apart, because the measure would stretch with the measured. But if this is so, then the whole idea falls apart because, all things being relative, how can we say it is expanded?
It just seems there are three different concepts of space: that which is inflated and carries light along with it; that which is measured by lightspeed; and that which expands according to redshift. It just doesn't pass Occam's razor for me.
I realize there doesn't appear any other reason for redshift than a form of recessional velocity, but if it was in fact some form of optical effect, it would make for a much less complicated cosmology.
David Wiltshire wrote on Nov. 27, 2008 @ 03:58 GMT
Dear John,
Your question confuses me as I cannot quite make out what your
conceptual grasp of the notion of expanding space is. The volume of
space between clusters of galaxies on large scales increases with time. That
is what we mean by space expanding. The problems you say this introduces
- of homogeneity and isotropic expansion - are not problems at all.
The standard model has an exactly homogeneous isotropic expansion as
described by an FLRW metric. My proposal alters that by the observation
that homogeneity is only true statistically on scales of at least order
100/h Mpc, and we have to deal with large variances in geometry and
perceived expansion rates below that scale. Statistically on large
scales there is still homogeneous isotropic expansion.
When you say homogeneity and isotropy are problems, and then mention
inflation, I think you are getting confused with the horizon and flatness
problems, which are quite different issues and have nothing to do with
the overall conceptual notion of expanding space. The horizon and flatness
problems arise because in a standard FLRW model with only matter and
radiation as sources, the past history of the universe before the time
of last-scattering is such that points on our CMB sky which are further apart
than about one degree cannot have been in causal contact before the epoch
of last scattering, once you calculate the volume of their past light cones
at that time. However, the evidence of the CMB
radiation is that the universe was at thermal equilibrium with a near
perfect blackbody spectrum at that time, an impossibility if regions more
than one degree apart had not been able to talk to each other. The
inflationary universe scenario has been invented to solve conundrums such as
the horizon problem I have just described. It has nothing to do with the
notion of expansion of space per se; it just means the relative expansion
rate would have been many orders of magnitude faster very early on, changing
the shape of the past light cones in a dramatic fashion that solves the
horizon problem etc.
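The horizon problem just described can be illustrated with a back-of-envelope estimate. In a purely matter-dominated FLRW model the common prefactor 2c/(H0 sqrt(Om)) cancels between the comoving particle horizon at last scattering and the comoving distance to last scattering, leaving an angle of roughly a degree or two. This is my own order-of-magnitude sketch, not a calculation from the post:

```python
import math

# Back-of-envelope horizon-problem estimate (matter-dominated FLRW).
# Comoving particle horizon at last scattering ~ (2c/H0/sqrt(Om)) / sqrt(1+z);
# comoving distance to last scattering ~ (2c/H0/sqrt(Om)) * (1 - 1/sqrt(1+z)).
# The prefactor cancels in the ratio, so only z_ls matters here.
z_ls = 1100.0   # approximate redshift of last scattering (illustrative)

horizon_frac = 1.0 / math.sqrt(1.0 + z_ls)
theta_rad = horizon_frac / (1.0 - horizon_frac)
theta_deg = math.degrees(theta_rad)
print(f"causal patch at last scattering subtends ~{theta_deg:.1f} degrees")
```

The result, close to two degrees, matches the statement above that patches of the CMB sky separated by more than about a degree could not have been in causal contact without inflation.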
The rate of expansion of the universe has nothing
to do with the speed of light per se. The speed of light in vacuum enters
relativity as a fixed universal constant, and the whole of spacetime
structure in Einstein's theory depends on the speed of light being
constant. That means we can always choose a local inertial frame at
a point, which is a Minkowski space. The constancy of the speed of
light is built into the physical structure of this Minkowski space. General
relativity is a larger theory in which spacetime overall can bend
and warp, so that the relative calibration of clocks and rods in
widely separated local Minkowski frames can differ markedly, in a manner
that depends on solutions of Einstein's equations. The Cosmological
Equivalence Principle I have introduced is a means of clarifying
the problem of the relative calibration of clocks of widely separated
frames on cosmological scales in the absence of
exact symmetries of the background - a problem which otherwise has no general
solution in general relativity. I believe I do this in a manner which
naturally incorporates an aspect of Mach's principle, consistent
with the rest of the conceptual foundations of general relativity.
It is dangerous to take the "rubber sheet" analogy too far. Empty
space does not have a fabric, and light can never be brought to rest
in any frame - so you should never talk about space "carrying light
along with it". The fact is that in general relativity the laws of
geometry are not Euclidean but pseudo-Riemannian, and the spatial
relationship of objects are dynamical so the curved geometry changes
over time. That's all there is to the "rubber sheet" or the "fabric"
of space. It is just an analogy for us to be able to conceptualise
curved geometries. But the curvature of the geometry is determined
by matter - matter tells the geometry how to curve and the curved
geometry tells matter how to move - so ultimately it is matter telling
matter how to move via the rules of Einstein's equations, and light moving
in this background is part of the matter in the game; light being
matter which can never be brought to rest relative to other matter.
If you just think of expanding space as being that the distances and
volume of space between distant galaxies is getting larger, then you
will never go wrong. I think your confusions may arise from taking the rubber sheet too
literally. Whatever started the matter rushing apart in a particular
manner at the earliest fractions of a second of existence is something
beyond the laws of general relativity, and part of physics still to be
determined, maybe in quantum gravity. Inflationary models have such properties but depend highly on even earlier initial conditions within their parameter space; and there are hundreds of models of inflation. I regard them as phenomenological descriptions which fit the observations, but do not deserve to be called a fundamental theory until some genuine theoretical insight that has yet to be made picks out some scenario in a compelling fashion, and rules others out. Anyway, given initial conditions at the time of last scattering or earlier in the radiation-dominated era,
once the laws of physics as we understand them held sway ordinary matter
will decelerate the expansion rate in a manner consistent with Einstein's
equations.
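The advice above - think of expanding space simply as the distances between distant galaxies getting larger - can be sketched with the standard textbook proper-distance relation d(t) = a(t) x, where the comoving separation x is fixed for all time. This is a generic illustration, not Wiltshire's timescape model; the function name and all numerical values are made up:

```python
# Generic textbook sketch: "expanding space" means the proper distance
# a(t) * x between comoving galaxies grows while the comoving
# separation x stays fixed.  All values here are purely illustrative.
def proper_distance(scale_factor, comoving_sep):
    """Proper distance d = a * x between two comoving observers."""
    return scale_factor * comoving_sep

x = 100.0   # fixed comoving separation, arbitrary units
for a in (0.5, 1.0, 2.0):   # the universe doubling in linear size twice
    print(f"a = {a:3.1f}: proper distance = {proper_distance(a, x):6.1f}")
```

When the scale factor doubles, the proper distance doubles: there is no stretching "fabric" in the picture, only a time-dependent geometry relating fixed comoving labels.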
Finally when you say "there doesn't appear any other reason for redshift
than a form of recessional velocity", that is not correct. Redshift just
arises in comparing the relative frequency of a photon at the emitter
and at an observer's position, and these can change in general relativity
due to a relative calibration of the clocks of those two frames in many
ways. Here are three ways: (i) cosmological redshift - the overall volume
of space has increased; (ii) peculiar velocity redshift - there is
a local boost of the emitter and/or observer frame relative to the overall
cosmological expansion; (iii) gravitational redshift - the relative
distribution of matter between emitter and observer and the resulting gravitational
"potentials" induces other relative changes of frequency over and above the
first two effects. Now it is true that general relativity is a theory in
which we cannot *locally* distinguish these different types of redshifts,
but that is quite different than your statement.
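The three redshift mechanisms listed above correspond to standard textbook formulae, which can be sketched as follows. This is an illustrative aside of my own; the function names and numerical inputs are assumptions, not taken from the post:

```python
import math

# Sketch of the three redshift mechanisms, using standard textbook
# formulae.  Inputs are illustrative only.
def cosmological_redshift(a_obs, a_emit):
    """1 + z = a(t_obs) / a(t_emit): ratio of scale factors."""
    return a_obs / a_emit - 1.0

def doppler_redshift(beta):
    """Special-relativistic radial Doppler shift for beta = v/c."""
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

def gravitational_redshift(rs_over_r):
    """Schwarzschild redshift for a static emitter; rs = 2GM/c^2."""
    return 1.0 / math.sqrt(1.0 - rs_over_r) - 1.0

print(f"cosmological (a doubled):  z = {cosmological_redshift(2.0, 1.0):.3f}")
print(f"peculiar velocity (0.1c):  z = {doppler_redshift(0.1):.3f}")
print(f"gravitational (r = 10 rs): z = {gravitational_redshift(0.1):.3f}")
```

As the post stresses, a local measurement of z alone cannot tell these three apart; the distinction lives in the global model used to interpret the observation.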
The Cosmological Equivalence Principle extends the range of these equivalent
circumstances. In particular, even though it is an old textbook statement
that the redshift in a FLRW universe is *locally* equivalent to a peculiar
velocity in Minkowski space, the CEP states that even though the universe
is inhomogeneous - so the FLRW models are not globally valid - nonetheless
regional frames can always be found for arbitrarily long periods during
which the average decelerating volume expansion is conformally equivalent
to a Minkowski frame, and therefore with volume expansion which is
indistinguishable from the equivalent motion of a congruence of particles
in a static Minkowski space by the standard textbook argument. Furthermore,
I establish a new type of gravitational time dilation and gravitational
redshift. We are used to thinking about static gravitational potentials
and the equivalence of a static observer in such a potential (e.g. someone
at the Earth's surface) with an observer firing rockets. I introduce a
different notion of gravitational time dilation due to cumulative variations
of dynamically varying "potentials". This is to be thought of as relative
deceleration of the expanding average background due to gravity being
equivalent to a regional homogeneous/isotropic symmetry-preserving
deceleration of a tethered lattice of observers in Minkowski space.
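To put a rough number on the quoted 10^{-10} m/s^2: the back-of-envelope Python sketch below is my own, special-relativistic and constant-acceleration, so only an order-of-magnitude caricature of the full averaged calculation. It shows that such a relative deceleration, sustained over the age of the universe, builds up a percent-level difference in instantaneous clock rates between void and wall frames, which is the sense in which observer-dependent ages can come to differ substantially when the effect accumulates.

```python
import math

C = 2.998e8      # speed of light, m/s
ACCEL = 1e-10    # relative deceleration quoted above, m/s^2
AGE = 4.3e17     # rough age of the universe, s (~13.7 Gyr)

# Crude estimate: let the relative velocity between void and wall frames
# grow linearly at ACCEL for the whole age of the universe.
v = ACCEL * AGE                        # ~4e7 m/s
beta = v / C                           # ~0.14
gamma = 1.0 / math.sqrt(1.0 - beta**2)
rate_diff = 1.0 - 1.0 / gamma          # fractional clock-rate difference now

print(f"v/c ~ {beta:.2f}, present clock-rate difference ~ {100*rate_diff:.1f}%")
```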
John Merryman wrote on Dec. 2, 2008 @ 01:00 GMT
David,
Sorry to confuse the issue. I do have a reasonably approximate understanding that space is not a fabric. My observations had to do with how it is described by the singularity/inflation/expanding-universe model. Without further adding to the confusion, I'm of the purely speculative opinion that redshift caused by some as yet unexplained optical effect - possibly a consequence of the expansion of radiant energy, or vacuum fluctuation - balanced by the optical effect of space collapsing under gravity, resulting in a convective cycle of expanding continuity and discrete collapse, would be far less complex than the Big Bang model. Obviously it's not based on a bottom-up construction of known quantities, but a top-down application of a whole pattern. Rather than space expanding from a singularity, it's an attempt to construct a cosmological model from infinite space expanding due to energetic instability and collapsing into vortices of accreted stability. The connection with time is that the expanding entangled energy moves into the future, as the collapsing mass is order falling away into the past. This last part may make more sense in the context of my own entry:
http://fqxi.org/data/essay-contest-files/Merryman_Explaining_Time.pdf
Tevian Dray wrote on Dec. 7, 2008 @ 22:46 GMT
Fascinating! I've always thought that "dark energy" would turn out to be an artifact of a not-yet-fully understood theory, and I never did like the cosmological constant. Thanks for providing a possible alternative. Mind you, I'm also partial to toy models as idealizations; if your ideas pan out, my toys will be forced to become more complicated...
Lawrence B. Crowell wrote on Dec. 17, 2008 @ 01:21 GMT
This is well written. I will say that I applaud the statement:
... the conundrum of dark energy does not involve a fluid in the vacuum of space ...
I have long been disturbed by the identification of the cosmological constant L = 8 pi (rho - 3p), which is then a Ricci scalar term giving R_{ab} = (1/2 R + L) g_{ab}. Such spacetimes are source-free, but in this case a source is proposed: vacuum energy and pressure with equation of state w = -1.
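As a side note on why w = -1 acts like a cosmological constant: in FLRW models a fluid with equation of state p = w rho dilutes as rho ∝ a^{-3(1+w)}, so w = -1 is the one case whose density does not dilute at all. A quick check (my own illustration, not from the thread):

```python
def density_scaling(a, w, rho0=1.0):
    """FLRW dilution law for a fluid with p = w * rho:
    rho(a) = rho0 * a**(-3 * (1 + w))."""
    return rho0 * a ** (-3.0 * (1.0 + w))

for w, label in [(0.0, "dust"), (1.0 / 3.0, "radiation"), (-1.0, "w = -1")]:
    print(label, [density_scaling(a, w) for a in (1.0, 2.0, 4.0)])
# dust falls as a^-3, radiation as a^-4, while w = -1 stays constant
```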
Your statement about conservation laws also is pretty close to how I think. Cosmological spacetimes are Petrov-Pirani type O solutions which have no global Killing vector fields. As such there is no K_t*E = constant. The inability to define a global isometry means that a conservation law is not applicable for the entire spacetime. I consider this to be the big elephant in the room of physics and cosmology. I think few people seem able to wrap their minds around the prospect that, ahem, energy conservation may simply not apply in cosmology. This is tied to your statement on page 3:
... Since the universe is expanding, however, no time symmetry exists absolutely.
Later you write:
A universe as inhomogeneous as the one we observe cannot be adequately described by a single global frame.
I have been working on a general approach to gravity or quantum gravity which employs lattice tessellations and quantum error correction codes. This leads to a noncommutative geometry, described by quantum groups, and where there are systems of quantum groups linked by associators. Nonassociators act on elements of a quantum group G and map it to G', A: G ---> G', so that g^{-1}Ag = g^{-1}g, which is nonunitary. However, this does preserve quantum bits, and if one "coarse grains" over associators it leads to thermal states such as Hawking radiation. This is commensurate with your statement that there does not exist a single global frame to the universe. Any such frame, with an underlying quantum group of noncommutative elements, is local (quasilocal?) and linked to other regions with a different underlying noncommutative structure, a different quantum group, which classically corresponds to a different frame.
I wrote #370 on one aspect of this tessellation approach.
Thanks for an informative essay - and, I think, one of the more illuminating here.
Cheers,
Lawrence B. Crowell
Cristi Stoica wrote on Dec. 20, 2008 @ 19:13 GMT
Dear Professor Wiltshire,
I like the ideas you presented in your essay. I always thought that the anomalies that seem to require dark energy can be explained by a proper account of General Relativity, instead of the quasi-Newtonian approach combined with additional unobserved energy. Your solution, the CEP, is a good candidate for such an explanation, and seems to provide nice arguments in its support.
Best wishes,
Cristi Stoica
Flowing with a Frozen River
Robert Sadykov wrote on Dec. 23, 2008 @ 13:13 GMT
Dear Prof David L. Wiltshire,
One variant of a solution to the problem of gravitational energy is presented in my most modest essay
The Theory of Time, Space and Gravitation.
Yours faithfully
Robert Sadykov
Dimi Chakalov wrote on Dec. 23, 2008 @ 19:32 GMT
David:
You wrote at George Ellis' thread (Dec. 23, 2008 @ 10:22 GMT):
"Dimi - should you read my work and have any further questions - then since George has closed his discussion, I guess you should continue over at my not-so-active thread."
Thanks a lot for your suggestion. I downloaded your essay and tried to read it, but was struck by a very unclear -- to me -- introduction, and couldn't proceed.
I am asking you to help me understand the following.
In your essay, you wrote: "A simple way to understand this (quasilocal quantities - D.C.) is to recall that in
the absence of gravity energy, momentum and angular momentum of objects obey conservation laws. A conservation law simply means that some quantity is not changing with time."
Let's find out what kind of 'time' is involved in GR. George Ellis did not answer any of my arguments posted at
his thread. Hope you can do better.
Please correct me if I'm wrong: The time read by a wristwatch is assumed to be a linear variable, and it is this linear variable that enters the conservation laws in the absence of gravity (Minkowski spacetime).
Q1: What -- if any -- should be the change or alteration to this linear variable, as introduced by quasi-local variables?
Further, you wrote: "General relativity is entirely local in the sense of propagation of the gravitational interaction, which is causal."
Q2: What -- if any -- should be the change or alteration to the propagation of the gravitational interaction, as introduced by quasi-local variables?
For if you mix apples (local theories) with oranges (quasi-local variables in these same theories), the confusion may be enormous, which is perhaps the reason why I couldn't finish reading your essay. Hope you can help.
My tentative answers to the questions posed above were provided in a link to my web site, in my first posting to George Ellis from
Dec. 2, 2008 @ 07:02 GMT. Regrettably, your mentor neither replied to my critical comments on his proposal, nor said anything on mine.
Dimi
David Wiltshire wrote on Dec. 24, 2008 @ 10:51 GMT
Dimi
>Please correct me if I'm wrong: The time read by a wristwatch is assumed to be a linear variable, and it is this linear variable that enters the conservation laws in the absence of gravity (Minkowski spacetime).
Individual variables themselves are neither linear nor nonlinear. So time is not "linear" (apart from being a parameter on the real line which is not what I mean here). Linearity is a property of combinations of variables in equations. In Minkowski space it is the Lorentz transformations which relate inertial frames that are linear. Conservation laws are described by divergence-free currents, which via Gauss's law give conserved charges when doing the relevant integrals on hypersurfaces. The time-direction is hypersurface orthogonal in formulating such conservation laws.
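A toy discrete analogue of that statement (my own illustration, not part of the original post): in flat space a divergence-free current conserves its Gauss-law charge exactly. The finite-volume update below moves density around a periodic 1-D grid, and because every flux that leaves one cell enters its neighbour, the total charge cannot change.

```python
# 1-D advection on a periodic grid with a conservative (upwind) update.
# rho plays the role of the charge density, flux the role of the current.

N = 50
rho = [1.0 if 20 <= i < 30 else 0.0 for i in range(N)]   # initial blob
v, cfl = 1.0, 0.4                                        # speed, dt/dx ratio

def step(rho):
    flux = [v * r for r in rho]
    # new density = old minus net outflow; flux[i-1] wraps around (periodic)
    return [rho[i] - cfl * (flux[i] - flux[i - 1]) for i in range(len(rho))]

total0 = sum(rho)
for _ in range(200):
    rho = step(rho)
print(total0, sum(rho))   # equal up to float rounding
```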
BTW, in relativity one also has to be careful always to distinguish between arbitrary coordinate variables, and proper lengths and proper times, which are invariants. Your watch measures your proper time. If you are talking about your proper time, say so; "variable" is too vague, as it can also refer to non-measurable things.
>Q1: What -- if any -- should be the change or alteration to this linear variable, as introduced by quasi-local variables?
Since any "quasilocal" quantity is an integrated regional thing, not a local quantity like a proper time measured by a clock, in any formulation one will never replace any proper time by a quasilocal variable. It is gravitational energy that is quasilocal not time; proper time is a locally measured quantity on the worldline of a particle, gravitational energy is not. Gravitational energy comes into the relative calibration of clocks at widely separated events.
Why is energy conservation difficult in GR? Well, in the absence of exact symmetries one cannot carry out the same procedure of a Gauss-law style integration to extract a conserved 4-momentum and conserved covariant angular momentum from the divergence-free energy-momentum tensor as you can in Minkowski space. In general relativity one can in general define conservation laws for completely antisymmetric tensor densities. However, one cannot do this for the rank 2 symmetric energy-momentum tensor. If you try to integrate it in the manner of Gauss's law, there is an extra bit involving the connection which is in a sense "the work done by gravity". In the presence of exact symmetries of the background spacetime, described by a Killing vector, it is possible to contract the Killing vector with the energy-momentum tensor to get rid of the extra bit and obtain conservation laws. Just symmetries of the background are required, and to get a conserved energy you need a timelike symmetry of the background. What I am talking about here is the "energy of the spacetime", but the same is true for geodesic motion; if you have a timelike Killing vector you can contract it with a particle 4-velocity to get a conserved energy of the particle in motion. Without a timelike Killing vector you cannot do that.
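That Killing-vector statement is easy to check numerically. The sketch below is my own code (geometric units G = c = 1, radial motion only): it integrates a timelike geodesic of the Schwarzschild metric and verifies that E = (1 - 2M/r) dt/dtau, the contraction of the timelike Killing vector with the 4-velocity, stays constant along the worldline.

```python
import math

M = 1.0  # black hole mass, geometric units G = c = 1

def deriv(y):
    """Radial Schwarzschild geodesic equations: y = (t, r, ut, ur),
    with ut = dt/dtau and ur = dr/dtau."""
    t, r, ut, ur = y
    f = 1.0 - 2.0 * M / r
    return (ut,
            ur,
            -2.0 * (M / (r * r * f)) * ut * ur,             # from Gamma^t_{tr}
            -(M * f / (r * r)) * ut * ut
                + (M / (r * r * f)) * ur * ur)              # Gamma^r_{tt}, Gamma^r_{rr}

def rk4_step(y, h):
    k1 = deriv(y)
    k2 = deriv(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
    k3 = deriv(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
    k4 = deriv(tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + (h / 6.0) * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

r0 = 10.0 * M
f0 = 1.0 - 2.0 * M / r0
y = (0.0, r0, 1.0 / math.sqrt(f0), 0.0)    # released from rest, u.u = -1

energies = []
for _ in range(15000):                      # integrate to tau = 15
    energies.append((1.0 - 2.0 * M / y[1]) * y[2])   # E = f * dt/dtau
    y = rk4_step(y, 0.001)

print(energies[0], energies[-1])  # both ~0.8944 = sqrt(1 - 2M/r0)
```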
Timelike symmetries describe bound solutions extremely well, but since the universe is expanding no absolute time symmetry exists, and so any time symmetry is approximate. The timelike Killing vector is normalised to unity at "spatial infinity", defining the equivalent of the zero of the Newtonian gravitational potential; in the actual universe this non-existent spatial infinity has to be replaced by something like "finite infinity", as first discussed by George Ellis in 1984.
Since spacetime is intrinsically dynamical in GR, the "work done by gravity" enters into energy-momentum conservation in an inextricable way in general, when there are no time symmetries.
>Q2: What -- if any -- should be the change or alteration to the propagation of the gravitational interaction, as introduced by quasi-local variables?
Nothing changes, because the idea of "introducing quasilocal variables" is not something that is being done on top of general relativity. We are simply discussing known properties of Einstein's theory. It is a property of his theory that propagation of the gravitational interaction is causal. It is also a property of his theory that "the work done by gravity" cannot be separated out in general, making energy conservation an intrinsically different problem from the same problem in flat spacetime. This is a consequence of the equivalence principle; you can always get rid of gravity near a point, and gravity is a property of spacetime structure. Gravitational energy involves the calibration of clocks at different points (via the connection), and can only be regionally defined; so it is at best quasi-local.
Dimi Chakalov wrote on Dec. 24, 2008 @ 15:16 GMT
David:
Thank you for your professional reply. I believe we have at least one thing in common: we both want to develop and modify George Ellis' notion of 'finite infinity', but from entirely different perspectives (I will be happy to explain mine, if you're interested).
You wrote above: "Your watch measures your proper time. If you are talking about your proper time, say it."
Yes, I am talking about the proper time in STR, as read by my wristwatch. Glad you agreed that it is a local quantity.
But in GR we have a formidable conundrum: the metric has a "double role" (Laszlo Szabados, private communication), namely, it is a field variable and defines the geometry *at the same time.*
It seems to me -- please correct me if I'm wrong -- that the metric in GR is treated as a field which not only affects, but also -- at the same time -- is affected by the other fields.
If you agree, would you please elaborate on the dynamics of GR, as encoded in the phrase "at the same time"?
In STR, the proper time read by my wristwatch is a local quantity, so it seems impossible to borrow this kind of time for the dynamics of GR. The latter does include the extra "work done by gravity" (which is absent in STR).
As you put it, energy conservation is "an intrinsically different problem from the same problem in flat spacetime."
What kind of "time" might be implied in GR, if every instant from it (a "point" in Euclidean 1-D space) is a nexus of an *already* completed -- at this same instant -- negotiation between the two sides of Einstein field equation?
I'll come back to you, after Christmas, about your efforts to tweak George Ellis' finite infinity (FI), as presented in New Journal of Physics 9 (2007) 377, and will ask you to test your vision of FI by recasting the positive mass theorems. If you succeed in replacing the conformal infinity with FI, please try to eliminate the geodesic incompleteness and the Cauchy problems for Einstein field equations, as the ultimate 'test of the pudding' for your vision of FI and the dynamics of GR.
Wishing you a nice white Christmas,
Dimi
Dimi Chakalov wrote on Dec. 25, 2008 @ 03:41 GMT
Addendum
David wrote (Dec. 24, 2008 @ 10:51 GMT):
"Since any "quasilocal" quantity is an integrated regional thing, not a local quantity like a proper time measured by a clock, in any formulation one will never replace any proper time by a quasilocal variable. It is gravitational energy that is quasilocal not time; proper time is a locally measured quantity on the worldline of a particle, gravitational energy is not."
I am indeed trying to suggest that the proper time, as measured by a clock, can be replaced by a new quasi-local variable: please see my postings from Dec. 4, 2008 @ 01:30 GMT and Dec. 10, 2008 @ 14:31 GMT at
Dean Rickles' thread.
The aim is to bridge GR and QM with a new form of retarded causality, and to open a "window" in GR for the energy density of the so-called
empty space. My opinion on GR matches that of Einstein: "... not anything more than a theory of the gravitational field, which was somewhat artificially isolated from a total field of as yet unknown structure."
David Wiltshire wrote on Dec. 25, 2008 @ 07:52 GMT
Dimi,
Regarding
>the metric has a "double role" (Laszlo Szabados, private communication), namely, it is a field variable and defines the geometry *at the same time.*
Yes, I agree the geometry both affects and is affected by the other fields. The problems you are alluding to are of course all part-and-parcel of the problem of coarse-graining and averaging in describing the evolution of the universe. In his 1984 paper George Ellis described this passively as a "fitting problem": how do we reconstruct the past geometry of the universe from information on our past light cone, using averages on different scales? I think of it as an active problem: how does the universe construct itself from averages of what has happened? I think George's evolving block universe ideas are important; quantum mechanics does introduce a fundamental indeterminacy until things have happened. Thus the future geometry is not absolutely determined. But whereas it might be impossible to predict the precise number of black holes in our galaxy, say, just from stochastic rather than quantum fluctuations, once we average on larger and larger scales the universe can only evolve so far away from its state of almost homogeneity at last scattering, given the limits of causal evolution. The question of coarse-graining and averaging in GR to extract something like an average spatial hypersurface is therefore a big unsolved problem of classical GR. Gravitational entropy - and gravitational energy, since there ought to be a first law - are both part of the same puzzle as far as I'm concerned.
Now you say
> I am indeed trying to suggest that the proper time, as measured by a clock, can be replaced by a new quasi-local variable
Having read your comments on Dean Rickles' thread I can see intuitively where you are coming from. Let me make the following suggestion: suppose I was to say that you are not trying to replace local proper time as measured by a clock by a quasi-local variable, but the "time evolution parameter" in something like the ADM formalism by a quasi-local variable. Would you buy that? Because that is exactly what I believe we are trying to do in the coarse-graining/averaging/fitting/evolution problem. What is needed is a clarity of concepts. In GR the proper time on a particle clock is a measurable quantity *at a point on a timelike worldline* which is related to some particular observer; it is an invariant of the metric given a solution of the geodesic equation for free fall, or of a non-geodesic equation if there is some additional forcing term. However, you can have infinitely many different worldlines passing through the neighbourhood of any given spatial point, all with different 4-velocities. So there is no unique proper time at a point. The geometry in some spatial region is an average over all particles and motions via an appropriate average of the Einstein field equations. Thus within an averaging scheme you can describe some coarse-grained cell by some average time parameter. This then is "quasi-local" as it relates to the whole averaging region. It will not exactly coincide with the proper time of every observer within the coarse-grained cell, because these will differ. The average time parameter may happen to be a local parameter for some particular observers, but not all, and the averaging scheme should operationally incorporate an understanding of both the average and the variance of relevant observables.
This is precisely what I am trying to do in the cosmological context. I am using the averaging scheme of Thomas Buchert, but operationally interpreted in a manner quite different to the way in which he conceived it. I am using Buchert's scheme simply because it is developed in a way which makes the most direct contact with the FLRW models, which are remarkably successful. However, I do not think that as it is currently formulated it is ultimately the best scheme for the problem, and I would advocate trying to generalise the "Mach 1 gauge" of Bicak, Katz and Lynden-Bell beyond the perturbative framework. In fact, I think that any averaging scheme that begins with a decomposition into spatial hypersurfaces is part of the problem operationally. I would not attempt anything like a positive mass theorem based on finite infinity, until I had a better grasp of these sorts of issues. My attempt at defining finite infinity is just work-in-progress, and in fact it is not yet well-defined and has many weaknesses. However, one can only make progress by making concrete physical models. Concepts, words and philosophy alone are not going to solve things.
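To make the Buchert-averaging idea concrete, here is a toy two-region ("void" plus "wall") illustration in Python. It is my own schematic with invented numbers, not the calibrated model of the essay; the variance-type backreaction term is the standard two-region form 6 f(1-f)(H_void - H_wall)^2 for a disjoint union of two homogeneous regions.

```python
def average_hubble(f_void, H_void, H_wall):
    """Volume-weighted average expansion rate of a void+wall union."""
    return f_void * H_void + (1.0 - f_void) * H_wall

def backreaction(f_void, H_void, H_wall):
    """Kinematical backreaction of two disjoint homogeneous regions:
    nonzero whenever their expansion rates differ (a variance term)."""
    return 6.0 * f_void * (1.0 - f_void) * (H_void - H_wall) ** 2

# Invented illustrative numbers, in km/s/Mpc:
H = average_hubble(0.4, 70.0, 50.0)
Q = backreaction(0.4, 70.0, 50.0)
print(H, Q)   # the average matches neither region, and Q > 0
```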
If your motivation for this question is to understand vacuum energy, (judging from links you give) then that is the angle that I came at this from originally, because I think it is likely that there is no vacuum energy. At least from the cosmological model I have proposed one can get a fit to observations, and predict observational differences from the Lambda CDM model that can be tested in future. My essay (and the longer version in Physical Review D) argue that this has its origin in a refined understanding of the equivalence principle in the coarse-graining/fitting/averaging/evolution problem. Ultimately, if this is correct, I think it will be important for quantum cosmology and quantum gravity, and the full "cosmological constant problem". However, rather than try to tackle everything at once, I think one first has to refine the mathematical tools with concrete problems and predictions, while striving for conceptual clarity.
BTW; as I have recently returned home to the southern hemisphere, my location makes a white Christmas extremely improbable, but it wasn't particularly warm either. I wish you a pleasant holiday.
David
Dimi Chakalov wrote on Dec. 25, 2008 @ 11:29 GMT
David:
Thank you for your time and efforts.
Since you agree that the geometry both affects and is affected by the other fields, please notice that I am trying to suggest, with the so-called Buridan donkey paradox, two kinds of time: "global" time for the negotiations of all particles, and "local" time for the end-result of this negotiation. Then the proper time on a particle clock, as a measurable quantity *at a point on a timelike worldline*, is being created (i) dynamically, and (ii) relationally (Machian-type relational ontology), and corresponds to its "local" time. The "global" time is something that belongs to 'the whole universe en bloc'. To define the latter, I am trying to modify George Ellis' Finite Infinity with some well-known ideas from Aristotle.
All this comes from the solution of the measurement problem, which was suggested at the link from my first posting to George Ellis' thread (Dec. 2, 2008 @ 07:02 GMT).
In my opinion, ADM hypothesis is seriously flawed (I've elaborated extensively on my web site, with many references).
I don't like *any* coarse-graining whatsoever, since I can't see how one could approach the Hilbert space problem in quantum gravity. Will be happy to elaborate, by quoting from Claus Kiefer's research and of course Karel Kuchar's articles.
You wrote: "I would not attempt anything like a positive mass theorem based on finite infinity, until I had a better grasp of these sorts of issues."
To me the main puzzle is that we see only one "charge", called 'positive mass'. The positive mass theorems need a precise cut-off at spatial infinity, which is the crux of my efforts to modify FI with some help from Aristotle (cf. my two postings mentioned on Dec. 25, 2008 @ 03:41 GMT).
You say: "I think it is likely that there is no vacuum energy." I suggest the answer YAIN (both yes and no, in German). It's a whole new ball game, as I tried to explain
here.
Sorry about my stupid remark about "white Christmas".
Best regards,
Dimi
Lawrence B. Crowell wrote on Dec. 25, 2008 @ 14:14 GMT
The ADM approach to general relativity is a calculation of the Gauss second fundamental form for spatial surfaces. It is what Wheeler called "geometrodynamics," though it does not tell us how coordinate time is a prescription for describing the evolution of a spatial surface. All it gives us are the constraints NH = 0, N^iH_i = 0, and the identification of "time" with the diffeomorphisms between surfaces is unclear.
When it comes to coarse graining, this appears to be implicitly what we do with metric back-reactions in Hawking radiation. We so far do not have a complete description of how the black hole quantum mechanically responds to the emission of this radiation. This is similar to the description of how a detector responds to the measurement of a quantum.
Happy Holidays, Sol Invictus, Happy Hanukkah or ... ,
Lawrence B. Crowell
Dimi Chakalov wrote on Dec. 25, 2008 @ 18:29 GMT
I agree with the first paragraph from Larry's comment above (Dec. 25, 2008 @ 14:14 GMT). In addition to Hawking's statement that "the split into three spatial dimensions and one time dimension seems to be contrary to the whole spirit of relativity", there is a very interesting, in my opinion, paper by Kiriushcheva and Kuzmin, arXiv:0809.0097v1 [gr-qc], pp. 7-9, which brings specific arguments against such "slicing" of spacetime.
My personal (and certainly biased, if not wrong) attitude toward 'spacetime' is that it is *one* object which might be "disentangled" into '3-D space and its time' for illustrative purposes only, while its genuine dynamics -- if any -- is not traceable to anything in this *one* object: we have only constraints, and also the dubious "freedom" to choose the lapse and the shift by hand, since the latter are gauge functions (M. Alcubierre, gr-qc/0412019v1, Sec. 5).
In a way, ADM hypothesis is like showing the moving parts of a piano, but we can't "see" the player. But this is as it should be, since if we were able to "see" the player with the present-day GR (say, the source of the so-called dynamic dark energy), the latter must be some bona fide 'observable in GR', and we would be able to trace back
The Beginning or the Aristotelian Unmoved Mover, whichever comes first :-)
Dimi
Lawrence B. Crowell wrote on Dec. 26, 2008 @ 14:01 GMT
The ADM approach to relativity in one sense does "spoil" the original perspective of relativity, where spacetime exists as a whole. In both ADM and more of a block universe perspective the notion of time is strange. In either case general relativity is not really a dynamical theory, for coordinate time is an element of the field. This element can in either case be imposed freely by the analyst. In the case of standard GR this is fixed by the coordinate condition the analyst chooses, and in ADM it is given by the freedom to choose the lapse and shift functions. These are different ways a gauge-like condition can be imposed on a problem.
The ADM approach is applicable for numerical problems, which is done through either Regge calculus or grid adaptive algorithms. It also has found use in quantum gravity calculations, usually in a Euclidean form, because the Wheeler-DeWitt (WDW) equation and the path integral formulation that results works within the techniques established in quantum field theory.
It is a standard matter that a Hamiltonian is established on spatial surfaces with some fixed "time arrow," and where equal-time commutators are established. This is of course why the ADM and WDW approaches have become popular. There is of course the additional issue that gauge-like connections exist in a non-Hausdorff moduli space, which means they do not satisfy Cauchy-type convergence conditions. As a result many analysts Euclideanize these problems. The intention is to examine QFT amplitudes with elliptic conditions (Atiyah-Singer indices etc.) which are presumed to capture the physics in the Lorentzian configuration. Of course the topological issues of moduli are ultimately being swept under the rug. Outside of this being some sort of instanton for cosmological tunnelling states, I find this to be a far bigger adulteration than a "space plus time" ADM approach to GR.
I could go on at considerable length here, where underneath this, some correspondence between Euclidean and Lorentzian configurations, involves a Bogoliubov map between inequivalent unitary quantum groups.
There are two notions of time at work here. General relativity only defines a physical time according to the invariant interval or proper time of a particle. Coordinate time as an element of spacetime is a gauge dependent (a gauge theory for an external symmetry) quantity, which ultimately has nothing to do with any evolution. Hence the nature of block time. Yet to do QFT, we establish a Hamiltonian (an internal generator of time translations) which is attached to spatial surfaces with some time direction.
The dichotomy between these concepts of time probably lies at the heart of the obstructions we face with quantum gravity and cosmology. It is worth focusing on this; after all, Einstein said of his annus mirabilis and his publication of relativity that he solved the problem by focusing on the nature of time.
Lawrence B. Crowell
Dimi Chakalov wrote on Dec. 26, 2008 @ 17:15 GMT
In connection with the last paragraph from Larry's posting (Dec. 26, 2008 @ 14:01 GMT), it seems to me that the problem of time in canonical quantum gravity should be solved along with the Hilbert space problem en bloc, since the latter is 'the test of the pudding' for the former. More in my latest posting at
Claus Kiefer's thread from Dec. 26, 2008 @ 17:01 GMT.
David: Please excuse my violent curiosity. If you prefer, I will quit.
Best - Dimi
Lawrence B. Crowell wrote on Dec. 26, 2008 @ 19:12 GMT
Before we know the Hilbert space it might be best if we have some idea of the contact structure of relativity. The action defines a one-form dS = p dq - H dt. The two-form d^2 S = 0 = dp ∧ dq - dH ∧ dt tells us that ∂H/∂p = dq/dt and ∂H/∂q = -dp/dt, and from the contact form dS we get the tangent bundle = ker(dS), i.e. the equations of motion, a la the Frobenius theorem. With ADM relativity we of course have a similar structure, but the contact manifold is not defined properly with the lapse and shift functions. We do not have a well defined notion of how x = NH + N^iH_i is a well defined one-form which defines a two-form dx, with the restriction of this two-form to a hyperplane defined by x ∧ dx ≠ 0. Hence in mini-superspace the meaning of a contact structure, or energy surface, of 5 = 3*2 - 1 dimensions is not apparent.
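A small numerical aside on the contact-form statement above (my own sketch, not part of the original post): for the harmonic oscillator H = (p^2 + q^2)/2, the flow picked out by ker(dS) is just Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q, and integrating that flow keeps the trajectory on the energy surface H = E.

```python
def flow(q, p, h, n):
    """Integrate dq/dt = p, dp/dt = -q (Hamilton's equations for
    H = (p^2 + q^2)/2) with the explicit midpoint method."""
    for _ in range(n):
        qm = q + 0.5 * h * p
        pm = p - 0.5 * h * q
        q, p = q + h * pm, p - h * qm
    return q, p

q0, p0 = 1.0, 0.0
H0 = 0.5 * (q0 * q0 + p0 * p0)
q1, p1 = flow(q0, p0, 1e-3, 10_000)          # evolve to t = 10
print(H0, 0.5 * (q1 * q1 + p1 * p1))         # energy drift is tiny (~1e-9)
```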
For quantum gravity the trivial O(1) "point" line bundle is replaced with a U(1) bundle by extending the symplectic structure to a Kahler one. The Hilbert space constructed by polarizations of the bundle is not apparent, mainly because the classical energy H(p,q) = E for the expected value of the quantum ⟨H⟩ is absent.
There is one hint we might exploit. Fermions obey Y^2 = 0, which extends to supersymmetry (if one wants to consider that) with Q^2 = 0. This is the topology d^2 = 0 "boundary of a boundary = 0". This means that Y = ker(Q)/im(Q), for Q a boundary-like operator in BRST quantization. Hence the field Y is not QX. Thus for spinorial gravity the Frobenius theorem might not apply directly because the tangent bundle must be replaced by a closed form that is not exact.
Lawrence B. Crowell
Dimi Chakalov wrote on Dec. 27, 2008 @ 03:32 GMT
Larry:
You wrote
(Dec. 16, 2008 @ 16:36 GMT):
"The issue of time is a bit slippery. I am not out to deny the existence of time, but it is something which appears to be geometrical and as such "relational." It relates kinematic entities to dynamical ones. As I see it the important question is not whether time exists, but as a relational quantity "what does it tell us?""
I believe the so-called Buridan donkey paradox mentioned above (Dec. 25, 2008 @ 11:29 GMT and Dec. 25, 2008 @ 03:41 GMT) offers a tentative answer to your very important question: time as a geometrical entity "tells us" that the world is fundamentally relational (relational ontology), in line with the Bootstrap Principle of Geoffrey Chew (Science 161 (1968) 762).
And here at David Wiltshire's thread, you wrote (Dec. 26, 2008 @ 14:01 GMT): "There are two notions of time at work here. General relativity only defines a physical time according to the invariant interval or proper time of a particle. Coordinate time as an element of spacetime is a gauge dependent (a gauge theory for an external symmetry) quantity, which ultimately has nothing to do with any evolution. Hence the nature of block time."
It seems to me that "block time" and the "block universe" (BU) are artifacts of the current, incomplete GR. I've been trying to suggest, in my two postings mentioned above, the notion of 'quasi-local time' with two components, "global" and "local". The latter corresponds to 'physical time in GR', each event of which is *already negotiated* in the "global" component of time. Just try to think of this 'already negotiated' as the "duration" of the flight of a photon, from its emission to its absorption: it is zero. It's like clapping your hands, by which you produce one event of joint emission/absorption.
Hence the "dark gaps" of negotiation in the "global" component of time are completely and totally extinguished in the 'physical time in GR' (the "local" component of time), rendering the latter a *perfect continuum* that is being created dynamically and relationally. Hence we may have the "quantization" of spacetime installed from the outset.
I regret that I learned about this FQXi Contest too late, on December 2nd, and so haven't submitted an essay here. I can only hope that my ideas might be of some interest to David and to you.
Dimi
David Wiltshire wrote on Dec. 27, 2008 @ 08:35 GMT
Dimi and Lawrence,
I don't quite have the time to respond to every point you have raised. I shall restrict my reply to those that I think are most pertinent to the subject of my essay.
Lawrence: you have made a number of insightful comments, also at George Ellis' thread, and I concur with your statement that "the dichotomy between these concepts of time probably lie at the heart of the obstructions we face with quantum gravity and cosmology". Here you are referring to the proper time of a particle on one hand, an invariant, and a Hamiltonian on spatial surfaces with an orthogonal time parameter, which is gauge dependent.
Let me bring back the equivalence principle, the focus of my essay. The strong equivalence principle tells us that we can always find a local inertial frame, and really we only know how to do quantum mechanics and quantum field theory in such Minkowski frames. The question is: how do we proceed when we include gravity and go beyond local inertial frames? Although I am not directly addressing quantum gravity, I have worked on quantum cosmology in the past, and it is my view that the canonical Wheeler-DeWitt quantization, or any similar quantization based on something like the ADM formalism, is inadequate. The problem, as you have recognized, is that this formalism treats spacetime at the classical level timelessly, as a 4-dimensional construct. There is no real evolution built in, and therefore any 3+1 split, as a pretext to quantization, is gauge-dependent. My view, which is maybe similar to George Ellis' evolving block universe, is that we have to formulate the classical cosmological problem already as an evolution problem before we quantize. Since the universe had a beginning, the domain of the causal past is limited at any event; and the geometry near any event can only depend on things in its past light cone. Viewed this way, it is imperative that we treat cosmological GR as an evolution problem. I am not trying to understand quantization yet, but to understand this other question: how does the universe choose something akin to an average spatial hypersurface as it evolves? My answer is to go back to first principles and extend the equivalence principle.
Dimi: I have looked at your website, but in general find it too incoherent to follow. I feel you have some good physical insights, and you certainly ask some probing questions, but you use your own personal idioms and mix so many different problems simultaneously in an intuitive way that it is often very hard to see precisely what you are aiming at. Writing a focused paper on just one topic to clarify what is meant by some of these personal idioms, with a degree of rigor that would make it acceptable for publication in a journal, would greatly help your cause. When you talk about your notion of 'quasi-local time' with two components, "global" and "local", etc., intuitively I think this is actually very much the sort of thing I am doing in my approach. I do not use the words "quasilocal time" because I think it is best to reserve "time" to describe the local proper time that we usually refer to in GR. Something cannot be both local and quasilocal. However, I do have a "volume-average time" which is different to the local proper time of observers at finite infinity. You talk about "negotiation" in the global component of time without defining what "negotiation" is. I would suggest that averaging and coarse-graining, which are words that you do not like, are the same thing as you are referring to with your terminology "negotiation". It's just that people who have thought about the problem in other ways come with their own terminology.
How we do averaging is an unsolved problem in GR, and I think George Ellis gave a pretty good statement of the problem to you in his reply of December 21. My essay describes a proposal that we interpolate between the local and "global" frames by extending the equivalence principle to a cosmological equivalence principle. The thing is that the strong equivalence principle already tells us how the average frame is "negotiated" on very small scales. In the standard cosmology we assume an answer to the averaging problem by simply demanding that there is a single global average FLRW geometry for the whole universe, and that matter averages to homogeneous isotropic pressureless dust for all times. That is at odds with observed inhomogeneity below scales of 100/h Mpc. Rather than demanding a single global geometry, I propose that we can always choose regional cosmological frames with the spatial symmetries of Minkowski space, but not the time symmetries - since we are talking about the dynamical regime. Rather there is a regional conformal timelike Killing vector. When George Ellis mentioned the conformal timelike Killing vector on his thread, he was probably referring to the standard FLRW cosmology. In some general cosmology, like various Bianchi models, there is no such timelike conformal Killing vector. I propose (for a variety of reasons that I discuss in the essay and the recent paper in Physical Review D) that even though the universe is inhomogeneous, the manner of the "negotiation" between local and global in the fitting/averaging/coarse-graining problem is that we can always choose average regional frames which are Minkowski up to a timelike conformal scaling; i.e., with spatial symmetries of Minkowski and an additional timelike conformal Killing vector. This is less strong than what we do in the standard cosmology, which assumes this as a global frame. 
My reasons for doing this - which also lead to a testable model - are stated in the essay, so I will not repeat it here, unless there is some point that is not clear.
On an unrelated issue. Why is there only positive mass? Weak equivalence principle: if negative mass were to have a meaning it ought to violate the principle of uniqueness of free fall. This is an intuitive answer, not a theorem. However, I'm sure the positive mass theorems could be rederived with a suitable definition of finite infinity. It would require a very tight definition of finite infinity first, however.
Lawrence B. Crowell wrote on Dec. 27, 2008 @ 14:05 GMT
Dave,
Curiously what you say at the end connects things in an interesting way, but before then ... .
The WDW equation as a constraint equation might be a condition on the target map between a D3-brane and spacetime. I am invoking some of the ideas of Steinhardt about cosmologies associated with D3-branes which interact by their mutual connection with type II strings. Your idea of a cosmological conformal principle I think has connections with conformal structure on strings & branes. This is one thing I find interesting about this.
I advise people to read Hestenes' paper. It is fairly simple in its maths and I think makes some valuable points. The zitterbewegung, which is related to what Penrose calls the "zig-zag" in his "Road to Reality" with respect to the 2-component Weyl spinor equations, is a motion of a massless fermion in a trap which confers a mass on it. The force which traps the particle has a gauge-like structure to it. The reason I bring this up is that I have thought that QCD and conformal gravity are copies of a similar structure. The couplings involved in the two theories might have some Olive-Montonen pq = hbar duality to them. So conformal gravity SU(2,2), which contains the dS and AdS spacetimes, is dual to an extended QCD ~ SU(4) that breaks down as SU(3)xU(1). So a fermion is trapped in a bubble in much the same way a particle (or black hole) is confined in the hyperbolic AdS. For the electron this confinement is given by the SU(2) structure of the spinor equations, while for quarks there is the extended SU(3) gauge confinement.
To really discuss this requires use of extended Clifford algebras, but I will defer that until later. I will say that the 120-cell of icosian quaternions I work with in my paper works in this direction.
This then segues into the issue of negative energy. If we consider the grand master Dirac, he illustrated how fermionic states of negative energy are completely occupied. Zap a filled negative state with energy and you pull out a particle with negative quantum numbers but positive mass. This is the anti-electron and other anti-fermions. In a spinorial context gravity is, I think, similar. Negative energy states simply don't manifest themselves because they may be occupied in the same way the Dirac sea is filled. This is related in some ways to the Boulware vacuum and energy states near horizons. Curious solutions to the Einstein field equations, such as wormholes, warp drives and Krasnikov tubes, might simply be completely occupied, just as negative energy, positively charged electron states define a "sea."
I am somewhat conservative and doubt that things such as time travel are really possible. So I think that nature in her wisdom has the quantum states corresponding to these solutions filled up, which prevents them from becoming real.
Lawrence B. Crowell
Dimi Chakalov wrote on Dec. 27, 2008 @ 14:56 GMT
David:
Thank you for your precise and thoughtful reply from Dec. 27, 2008 @ 08:35 GMT. In the last paragraph, regarding positive mass theorems, you wrote: "It would require a very tight definition of finite infinity first, however."
You hit the nail on the head. If we employ the Aristotelian First Cause and Unmoved Mover, we may have a precise "boundary" in the so-called "global" component of time, while in the "local" time this same "boundary" would look like an ever-sliding horizon extendable to infinity. The underlying motivation here is that we should first sort out the ambiguities in our notion of '3-D space', and then approach the nature of time pertaining to this 3-D space.
You said: "Something cannot be both local and quasilocal." I believe it depends on how you understand Quantum Theory (please check out my essay on QM). Which brings me to your comment that I talk about "negotiation" in the global component of time, without defining what "negotiation" is. EPR correlations are just one example of "negotiation", but the really difficult task, to me at least, is to *derive* the Equivalence Principle -- the focus of your essay -- from some broader perspective based on Machian-type relational ontology (cf. the Buridan donkey paradox). At the end of the day, we should be able to understand the origin of the positive mass, and the mechanism by which inertial reaction forces are being generated "instantaneously" (in the "global" component of time, perhaps).
As of today, the Equivalence Principle gives us the dubious "freedom" to eliminate the energy-components of the gravitational field *at a point* (Hermann Weyl, Space-Time-Matter, Dover Publications, New York, 1951, 1922, p. 270). I cannot accept this, and neither did Einstein (quote from Dec. 25, 2008 @ 03:41 GMT above).
You are right that I should produce a "focused paper on just one topic". I will do that by the end of 2009, and will comment on your Essay extensively.
Thank you, once more, for inviting me to your thread.
Dimi
Lawrence B. Crowell wrote on Dec. 28, 2008 @ 15:28 GMT
Dimi: The quasilocality referenced with mass-energy, or nonlocality of energy, is due to the fact that
p^a = e_b T^{ab}
is a frame-dependent quantity. P^a = (E, p) defines an invariant interval (mc^2)^2 = E^2 - (pc)^2, but the specific components E and p are, just as with t and x, coordinates that are not of primary physical importance. Further, the above definition of the momentum-energy component p^a will in a Stokes' law calculation give a de_b = w^c_b e_c (w^c_b a connection term), which can be removed by a coordinate condition. It is in this sense that energy is not localizable.
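The frame dependence of E and p versus the invariance of the mass can be illustrated numerically (units c = 1; a minimal sketch of my own, using a 1+1-dimensional boost):

```python
# Sketch (units c = 1) of Lawrence's point above: under a Lorentz boost the
# components E and p of the four-momentum change, but the invariant
# m^2 = E^2 - p^2 does not -- energy is a frame-dependent quantity.
import math

def boost(E, p, v):
    """Boost a (1+1)-dimensional four-momentum (E, p) by velocity v."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (E - v * p), g * (p - v * E)

m = 1.0                      # rest mass
E0, p0 = m, 0.0              # particle at rest
E1, p1 = boost(E0, p0, 0.6)  # viewed from a frame moving at v = 0.6

print(E1, p1)                # 1.25 -0.75: the components changed...
assert abs((E1**2 - p1**2) - m**2) < 1e-12   # ...but the invariant did not
```

The nonlocalizability of gravitational energy is of course a deeper statement than this special-relativistic one, but the mechanism is the same: components tied to a frame carry no invariant meaning by themselves.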
Nonlocality of quantum states means entanglements can exist between states across any distance in either space or time, recall the Wheeler Delayed Choice Experiment. Entanglements do not correlate with causality conditions according to the spacetime variables we use to represent quantum wave functions.
I will confess that I think these two are related in some ways that we don't understand. However, at this time they are distinct concepts. It is worth noting that a quantum spin system and the structure of parallel transport of vectors in GR share a Galois structure GF(4), and are algebraically equivalent. As I say, if you want to understand physics, geometrical structures are best replaced with algebraic ones according to some functor.
Lawrence B. Crowell
Anonymous wrote on Dec. 28, 2008 @ 20:08 GMT
Larry: Thank you for your efforts. If you wish to comment on my efforts to *think* of gravitational energy as being both localizable and non-localizable (cf. my postings above), please do it at your thread and I'll jump there with utmost pleasure. I believe all this pertains to the nature of time in GR, since nobody has managed to separate time from energy. Surely in textbook GR there is no such animal as the one I propose, perhaps because I address this puzzle in GR after proposing a solution to the measurement problem in QM.
Dimi
David Wiltshire wrote on Dec. 29, 2008 @ 02:00 GMT
Dimi: I do not believe that the equivalence principle can be derived. Rather it is a principle that limits the possible class of physical theories, just as the principle of relativity limits the possible kinematic relationship of particles near a point. Physics, I believe, proceeds from physical principles that limit the uncountably infinite number of mathematical structures one could imagine to describe the physical world. The strong equivalence principle already tells you how the classical spacetime is negotiated in a very small region near a point; because particles are in motion, and they continue to move in a manner consistent with special relativity including any non-gravitational forces in a very small spacetime region. This is a spacetime region in which no one spacelike 3-surface is singled out as special. So there is no Buridan donkey paradox near a point because there is no special space direction and the local inertial frame (LIF) is regionally "negotiated" from the prior motions of particles in both space and time.
The difficult part of course is patching together these LIFs in general relativity. If the local geometry is dominated by some mass concentration possessing symmetries, then we assume that we can approximate the geometry by an exact solution such as the Schwarzschild or Kerr geometry with the relevant symmetries. Furthermore, this assumption enables us to do calculations and make predictions of the motion of particles and light which agree extremely well with what is observed. The real problem is when we now attempt to patch these almost isolated geometries together in the absence of exact symmetries.
That is the fitting problem that George Ellis spelled out. We do not use the Friedmann equation to solve for the Earth's motion about the sun, or the sun's motion around the galaxy. Yet we naively assume that we can write down solutions of the Friedmann equation and apply the invariant time and distance definitions of this geometry as if it were the local geometry, when it is not. Since homogeneity is only reached by averaging on scales of at least of order 100/h Mpc, this step is not justified by any principle of general relativity. Since the deductions of cosmic acceleration and vacuum energy are based on this unprincipled assumption, also at odds with the evidence for inhomogeneity from our telescopes, it is prudent to ask whether one can realistically account for the observations in another way. What I have shown in my work is that, to the level of the observational tests I have considered thus far, one can. Furthermore, conceptually it means thinking about quasilocal gravitational energy.
The cosmological equivalence principle is precisely a step towards a "Machian relational ontology". As far as I am concerned neither space nor time has an existence separate from the matter fields that exist within it. A lot of essays in this competition are hung up on the question of the "reality" of the spacetime continuum. Well, what do we mean by "real"? If one supposed that a spacetime could exist independently of material fields, then as far as I am concerned no such entity exists (which to me also makes it perfectly sensible that there is no vacuum energy). However, I do not consider myself an "illusionist" in George Ellis' terminology, because the time on my clock is real, just as the length of a ruler is real, or the energy of a photon I measure after transmission across cosmological scales is real. That is the only reality; the rest is a mathematical relational structure.
As a purely mathematical theory of differential geometry, general relativity is, I believe, too general. All those crazy solutions with closed timelike curves, wormholes, and the like are, I believe, ruled out by physical principles, and the key ones are the class of principles we call the equivalence principle, as they deal with the concept of inertia. The strong equivalence principle already severely restricts our choice of metric connection, placing restrictions on the way you might try to introduce torsion as a physical variable. I am proposing the cosmological equivalence principle as a means of further restricting our choice of background universe - in a way which would make all those solutions with closed timelike curves, or indeed anisotropic Bianchi models, physically redundant.
At the basis of this is a further clarification of the notion of inertia - the centrepiece of the Machian ontology. How is the average relational background "negotiated" as an average of all fields and motions? The semi-tethered lattice I have introduced is a Minkowski space analogue of a collective regional deceleration, with conversion of energy from kinetic to other forms, while no net force is felt by any particle in the lattice, justifying the statement that for this regional decelerating frame we have a sense of "inertia". Real energy is extracted from this process, just as energy is extracted from gravitational collapse; yet the sense of inertia relates to geodesic motion that maintains a collective average regional homogeneity. This is what makes it quasilocal. My cosmological equivalence principle is to demand that the "negotiation process" is such that in the fitting problem we can always choose average regional frames with such a quasilocal notion of inertia. At such a level we cannot distinguish the regional deceleration of matter due to its average density from an equivalent semi-tethered lattice deceleration process in a Minkowski space in which an equivalent amount of kinetic energy of expansion is converted to other forms. (That's the equivalent of the work done by gravity.) This is the essence of the regional timelike conformal Killing symmetry. Thus I claim that over the scales of regional inhomogeneity we find such frames in the universe, which have decelerated by different amounts, and their local clocks will differ cumulatively.
In saying that something cannot be both quasilocal and local, I am simply stating that you have to be absolutely clear in your concepts and definitions to build a predictive physical model, and if you cannot build a physical model (through such lack of precision) then you are wasting time. "Quasilocal" means something more than local, otherwise we would simply use the word "local". In my case it is a property of regional collective motions, as outlined above. The time measured on a clock will still always be a local time. Different regions will have different average local proper times on their clocks. So there is a sense in which the measurement of an average time parameter is "quasilocal", as in being region dependent. But I would suggest that one does not learn that much from a slogan such as "time is both local and quasi-local", which just hides the degree of precision needed for clarification. For a physical model one needs a degree of precision in stating just by what amount some average time parameter will differ from one region to another, and for observers defined in what manner. How do we calibrate the clocks in the regional "negotiation"? In my case it is observers who see an isotropic CMB in regions of varying spatial curvature due to strong inhomogeneities, consistent with the observed structure of voids and walls below the scale of homogeneity in the observed universe. Furthermore, I quantify the cumulative difference in clock rates of relevant canonical observers.
Personally, I think my cosmological equivalence principle is just a first step, and there is a lot further to go. Think of it this way: individual stars are treated as vacuum solutions and have ADM energies. Yet from all that vacuum and ADM energy we assume that the collection is described by a dust fluid with a non-zero Ricci tensor, even though the original solutions had zero Ricci curvature (and only Weyl curvature). So, how do we mathematically convert a collection of ADM energies to dust with Ricci curvature? Ricci curvature is so very nebulous a concept in the presence of the equivalence principle. Some of my colleagues, such as Thomas Buchert and Mauro Carfora, think that this averaging away of Ricci curvature is related mathematically to Ricci flow, as a sort of renormalization process. That may be the case, but I think we are just scratching the surface, and have to ask penetrating physical questions, rather than just playing with mathematical formalism. If we construct such flows mathematically, then there has to be a physical relation to the relative calibration of local clocks and rods in doing the renormalization. I do not pretend to yet have all the answers; what I have tried to do in my essay is to approach the foundational physical questions, to the extent that one can start to build quantitative cosmological models.
Of course the question of initial conditions is vitally important on cosmological scales. Dimi, when you talk about "the Aristotelian First Cause and Unmover" then no doubt you are talking conceptually in such terms, though to me such phrases do not mean anything until you can write down a physical model which somehow quantitatively matches reality. As someone who has worked in quantum gravity, I think that often we try to do too much at once, while overlooking the fact that some things we think we understand are not really that well understood. The idea that we have to sort out the mystery of some physical fluid in the vacuum of space is grossly premature, when our entire deduction of the existence of dark energy relies on ignoring a "too hard" fitting problem and pretending that our universe has exact symmetries which differ from what we observe. I stopped working in quantum gravity and cosmology because I thought I was wasting my time until these other fundamental issues, such as the fitting problem, are sorted out. Let us first try to sort that out, and maybe it will give us fresh ideas about the other problems, even if the fitting problem is only a classical problem.
Lawrence: at this point I would say that any connection between my "cosmological conformal principle" and conformal structures of strings and branes is not at all obvious. I am only talking about a timelike conformal symmetry for establishing average regional cosmological frames in which it is useful to phrase a quasilocal gravitational energy concept. I am not talking about complete conformal structures per se, but only a timelike one for average regional frames in cosmology. Now, it is true that the unboundedness of the Euclidean action in the naive approach to 4-d quantum gravity can be spelt out mathematically in terms of conformal equivalence classes (I seem to recall, though I cannot remember the paper - circa 1979). So maybe there is some relation, who knows. But while strings and branes make for interesting models (which I have worked on too), they are not established observationally as having anything to do with the real world. So I am only worrying about classical GR and observational cosmology at present. The essay of David Hestenes - which involves a real world experiment beyond standard particle phenomenology - is indeed very interesting, and I thank you for pointing it out.
Lawrence B. Crowell wrote on Dec. 29, 2008 @ 14:32 GMT
The connection with D3-branes is something which came to mind. To be honest I have largely been skeptical of some of these ideas about oscillating cosmologies due to brane-brane bound states tied by strings. You come a bit closer to this idea with the mention of Ricci flow and renormalization. The Hamilton-Perelman theory of Ricci flow centers around conformal theory, which in a string-brane setting might be induced by a target map from the brane-string sector.
As a further comment, regarding your discussion of the nebulous nature of Ricci curvature, the EP and energy: this is one reason I suspect there are deep problems with most models which take the cosmological constant to be due to a vacuum energy source.
Of course at this point these are just thoughts which I have been kicking around and nothing serious at this time.
Lawrence B. Crowell
Dimi Chakalov wrote on Dec. 29, 2008 @ 22:56 GMT
David wrote (Dec. 29, 2008 @ 02:00 GMT):
"Of course the question of initial conditions is vitally important on cosmological scales. Dimi, when you talk about "the Aristotelian First Cause and Unmover" then no doubt you are talking conceptually in such terms, though to me me such phrases do not mean anything until you can write down a physical model which somehow quantitatively matches reality."
The challenge I face with the Aristotelian First Cause and Unmoved Mover is first and foremost mathematical: it is not clear to me what particular blueprint from these notions should be sought in quantum gravity, yet I think it should be presented with pure math only, or else the First Cause and Unmoved Mover will be *physically* reachable.
It will be very difficult to provide compelling evidence that the whole physical world may be grounded on some Aristotelian "cutoff" that is nothing but 'pure math'. Not to mention the UNspeakable 'cat per se' (cf. my essay on QM mentioned above), which is also unclear in mathematical terms. But if some day I make progress, I will get in touch with you.
Best - Dimi
Lawrence B. Crowell wrote on Dec. 30, 2008 @ 01:45 GMT
At the risk of making a monkey out of myself, some thoughts have come to mind with respect to this matter. Consider the conformal map g_{ab} --> Q^2 g_{ab} for the diagonal flat spacetime. Then for Q = du/dt we get the synchronous-time metric
ds^2 = -dt^2 + Q^2(dx^2 + dy^2 + dz^2)
Now set Q = e^{2B} for B = B(r). The Ricci curvatures are
R_{ii} = -B_{ii} + 2(B_i)^2
where i = x, y, z. This leads to the heat equation &B/&t = nabla^2 B (& = partial), and we get the Ricci flow equation for nabla a gauged operator. So with the tethered grid for expansions in different regions, this sort of Ricci flow would suggest that the "equilibrium" condition obtains via the renormalization process.
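As a purely illustrative numerical sketch (my own flat-space toy, not actual Ricci flow): an explicit finite-difference evolution of dB/dt = nabla^2 B on a 1-D periodic grid shows inhomogeneities relaxing toward the uniform "equilibrium" state, which is the qualitative behaviour the renormalization/Ricci-flow picture appeals to.

```python
# Toy 1-D heat flow dB/dt = nabla^2 B (flat space, illustrative only):
# the initial contrast decays toward the uniform state, a crude analogue
# of the smoothing/"equilibrium" behaviour of Ricci flow discussed above.
import math

N = 64                 # grid points, periodic boundary
dx = 1.0
dt = 0.2               # stable: dt <= 0.5 * dx^2 for the explicit scheme
steps = 20000

B = [math.sin(2 * math.pi * k / N) for k in range(N)]   # initial contrast
mean = sum(B) / N

for _ in range(steps):
    lap = [(B[(k - 1) % N] - 2 * B[k] + B[(k + 1) % N]) / dx**2
           for k in range(N)]
    B = [B[k] + dt * lap[k] for k in range(N)]

contrast = max(abs(b - mean) for b in B)
print(f"residual contrast after {steps} steps: {contrast:.2e}")
assert contrast < 1e-6   # essentially fully smoothed
```

Genuine Ricci flow is nonlinear in the metric, so this linear toy only captures the relaxation-to-equilibrium intuition, not the full Hamilton-Perelman structure.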
The de Sitter spacetime is a case where the density of matter and energy is zero. The current state of the universe is one where the density is small, but not zero. With some calculation it is possible to estimate that the de Sitter horizon should be at 89.98 BLY while the current cosmic horizon is 46 BLY. This reflects the deviation from equilibrium, which the Ricci flow equations indicate the universe will eventually reach. At that point the cosmological horizon will evolve to a final state. Potentially beyond that stage the horizon will quantum decay, but that is not a classical domain. So these regions where the tethered net expands in different manners then interact or negotiate (mesh etc.) in a way which obeys a Ricci-flow type of renormalization.
This does I think have connections to strings. A standard string action int d^2s sqrt{-q} q^{ij} g_{ab} nabla_i X^a nabla_j X^b, for i, j string indices and a, b spacetime indices, will reproduce Ricci-flow-like physics for conformal transformations on the string. Now if that string is not embedded in spacetime, but is attached to two D3-branes, the Chan-Paton factor at the end of the string determines a field phi. For the cosmological constant L ~ H^2W, the Hubble parameter will then be a function H^2 ~ phi', for ' = time derivative. This then connects the time derivative of the metric in the Ricci flow equation with the value of the endpoint (the field phi) of the string connecting a D3-brane. There is then some form of a target map between the D3-brane and the induced spacetime.
Lawrence B. Crowell
David Wiltshire wrote on Dec. 30, 2008 @ 08:57 GMT
Lawrence,
There may be just a few too many connections in your suggestions to be completely plausible (as direct connections rather than analogies), but I will think seriously about any quantitative suggestions relating to Ricci flow. So thank you for your thoughts on this. Mauro Carfora has already thought quite a bit about the Ricci flow perspective in inhomogeneous cosmology - indeed, he was thinking in such terms before Perelman's proof of the Poincare conjecture.
I guess by the "cosmic horizon" you mean what is usually called the "particle horizon"? And by "de Sitter horizon" maybe a "cosmological event horizon" (of which the one in pure de Sitter space - i.e., vacuum energy, no matter - is an example). In my proposed cosmology there is no actual cosmological event horizon, since the universe is in actual fact decelerating, rather than accelerating. We simply misinterpret luminosity distances and the like through a naive assumption that our locally measured spatial curvature is the same globally, and that the local clocks of isotropic observers everywhere are the same as ours on relevant surfaces of average homogeneity. Once one does a relative recalibration of coarsely-grained average frames relative to smaller regional "cosmological inertial frames", a relative deceleration of regional backgrounds below the scale of statistical homogeneity, at a rate typically of about 10^{-10} m/s^2, accumulates to large differences in the relative calibration of clocks, which we interpret as cosmic acceleration. But there is no acceleration really, and so no de-Sitter-like horizon.
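A rough order-of-magnitude check (my own back-of-envelope numbers, not David's calibrated model) shows why a relative deceleration of order 10^{-10} m/s^2 matters cumulatively over the age of the universe:

```python
# Back-of-envelope estimate: a relative deceleration ~1e-10 m/s^2 sustained
# over ~the age of the universe builds up a relativistically significant
# velocity difference between regional frames, hence a non-negligible
# cumulative difference in clock rates (illustrative numbers only).
import math

c = 2.998e8            # m/s
a = 1.0e-10            # m/s^2, typical relative deceleration quoted above
t = 4.3e17             # s, roughly the age of the universe (~13.7 Gyr)

dv = a * t             # naively accumulated velocity difference
beta = dv / c
gamma = 1.0 / math.sqrt(1.0 - beta**2)

print(beta, gamma)     # beta ~ 0.14, gamma ~ 1.01: a percent-level effect
assert 0.1 < beta < 0.2
assert gamma > 1.005
```

A percent-level clock-rate difference, integrated over billions of years, is exactly the scale of the "timescape" variance in the present age of the universe that the essay describes; the actual model of course computes this from the void/wall density contrasts rather than a constant deceleration.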
Best wishes,
David
Lawrence B. Crowell wrote on Dec. 30, 2008 @ 14:40 GMT
I agree that the connections with string/brane concepts are not well founded at this time. I just ponder whether Ricci flow is a way to describe the evolution of the cosmology to a final state as a de Sitter cosmology; and then, since the central element is conformal transformations of spacetimes, whether this might have connections to strings and branes. Again, this is not something serious at this time, but more in the way of questions and thoughts.
If the universe is FRW then there can be no cosmological horizon, in particular if the universe is indeed decelerating. Of course it seems to me a matter of formalism (e.g. a sign on the acceleration) to extend your work with inhomogeneous regions and tethered lattices to an accelerated case. In that case a Ricci flow associated with these regions would then approach the equilibrium condition, which would be the pure de Sitter cosmology.
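As a small check on the "de Sitter as equilibrium" picture, using standard Ricci-flow facts and setting aside Lorentzian-signature subtleties (an editorial addition, not from the post): an Einstein metric, of which de Sitter space is an example, evolves only by an overall rescaling under the unnormalized flow,

```latex
\[
R_{ij} = k\,g_{ij} \quad\Longrightarrow\quad
g_{ij}(t) = (1 - 2kt)\,g_{ij}(0)
\]
solves $\partial_t g_{ij} = -2R_{ij}$, since the Ricci tensor is invariant
under constant rescalings of the metric.
```

An Einstein metric is therefore a genuine fixed point only of the volume-normalized flow, which is the sense in which a de Sitter-like geometry could serve as the equilibrium state of such a flow.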
Whether the universe is accelerating or not is of course not completely settled, which is often the case in science for some time after a discovery. There are questions about whether "standard candles" such as SNe Ia are as well calibrated as we think; these occur with diminishing frequency the closer in one looks. As for the spatial curvature of space, this metrology is determined in part by Einstein lenses. Of course this is an active area of research.
We may get better data with the James Webb Space Telescope. Fortunately the Hubble appears to have been given a life extension in the meantime, so these matters will be resolved with more data. Fortunately Obama appears pretty science-friendly, a refreshing change from the goofball who has run the US for the last 8 years, so we may get further cosmological data through the next decade.
Cheers, L. C.
Dimi Chakalov wrote on Dec. 31, 2008 @ 14:58 GMT
David:
I left three comments at
George Ellis' thread on Dec. 31, 2008 @ 14:32 GMT. Please notice Comment #1, regarding the missing definition of the non-tensorial gravitational energy in a "fraction DT of time", as George put it.
I would appreciate your professional feedback.
Happy New Year.
Dimi