CATEGORY:
What's Ultimately Possible in Physics? Essay Contest (2009)
TOPIC:
What Is Ultimately Possible in Physics? by Stephen Wolfram
Author Stephen Wolfram wrote on Oct. 9, 2009 @ 16:33 GMT
Essay Abstract: This essay uses insights from studying the computational universe to explore questions about possibility and impossibility in the physical universe and in physical theories. It explores the ultimate limits of technology and of human experience, and their relation to the features and consequences of ultimate theories of physics.
Author Bio: Stephen Wolfram is CEO of Wolfram Research, creator of Mathematica and Wolfram|Alpha, and author of A New Kind of Science. Long ago he was officially a physicist, receiving his PhD from Caltech in 1979 (at age 20). His early papers on particle physics and cosmology continue to fare well. Every few years he makes an effort to continue his approach to finding the fundamental theory of physics; his effort-before-last is in Chapter 9 of A New Kind of Science. Today he noticed the title of this essay competition, and this evening decided to have some fun writing the essay here.
Download Essay PDF File
Author Stephen Wolfram wrote on Oct. 9, 2009 @ 20:38 GMT
The official essay entry has three pages omitted for length reasons. The full essay is attached here.
attachments:
What_Is_Ultimately_Possible_in_Physics_Full.pdf
FQXi Administrator Brendan Foster wrote on Oct. 9, 2009 @ 20:46 GMT
Now before anyone shouts, I want to confirm that "today" in the abstract refers to a day last week, not literally today. The author submitted his essay in a timely manner before the contest deadline.
E. Canessa wrote on Oct. 9, 2009 @ 22:25 GMT
Dear Prof Wolfram, you wrote:
"we can imagine transferring our experience to some simulated universe, and in a sense existing purely within it"
- isn't the "Second Life" game doing something like that already?
Thanks,
(when you find time a vote is appreciated)
Jonathan J. Dickau wrote on Oct. 10, 2009 @ 02:49 GMT
Greetings Stephen,
I enjoyed reading your essay. After resolving to read only the 'official' version, I did end up downloading the expanded paper and skimming through the missing pieces, because I wanted to see what you had to say.
I agree with many of the points you make. In my contest essay, I echo your sentiment that many physicists tend to forget that an equation is just a model, and that its predictive capability is largely a reflection of how closely it models what is real. Too many confuse equations with reality. But if a computational process gives rise to what is real, one would expect the order inherent in Mathematics to both emerge from and shape that process.
I wonder, however, if some of the current perceived limits to knowledge arising from limits to computation are due to our failure to recognize or incorporate the hierarchical nature that arises in any process of abstraction. If we could somehow encode the hierarchality of the symbologies involved into a generalized heuristic computational algorithm, this might allow some of the limits to what is knowable to disappear.
For example, perhaps Gödel simply started in the wrong place, with Arithmetic and Number Theory. I am a constructivist, and I believe that advances in various branches of Math rest on certain fundamentals which must be constructed out of first principles. Had Gödel posited that the rudiments of Geometry were necessary to Topology, which brings us topological distinctions or boundaries, and that this was necessary to Set Theory, which is part of the picture needed for Number Theory to be formulated, a very different picture might emerge.
Now, I'm not saying I think Gödel is necessarily wrong, but the decidability gets more complicated when procedural hierarchality is figured in.
I am glad I had some exposure to your work, prior to reading this essay, as it is a merry romp through most of the key concepts you introduce in NKS. But I am happy to see you have an essay entered in the contest, as I'll have an opportunity to dig into what you've written and pose some additional meaty questions. I've long been a fan of the Computational Universe hypothesis, as it links up with my work in Cosmology with the Mandelbrot Set. I like the extensions of Wheeler's concept "It from Qubit" of Zizzi and Deutsch (plus Lloyd and Ng). And I coined a phrase imitating Descartes "It computes therefore it is!" which sums up that view nicely.
All the Best,
Jonathan J. Dickau
Jeffrey Nicholls wrote on Oct. 10, 2009 @ 06:43 GMT
Hi Stephen,
You write 'Is there a direct correspondence of mathematical impossibility with physical impossibility? The answer is that it depends on what physics is made of. If we can successfully reduce all physics to mathematics, then mathematical impossibility in a sense becomes physical impossibility.'
While the universe is dynamic, mathematics, at least on paper, is formal and static, so one suspects that physics cannot be completely reduced to mathematics. On the other hand, the only things we can say about physics are those which are invariant with respect to time and so can be written down in static formal form. This is the beauty of differential equations, which capture a dynamic process in a static string of symbols. So perhaps we may think about the relationship between physics and mathematics in terms of fixed point theorems. Since the universal dynamics maps the universe onto itself (there being by definition nowhere to go outside), we can expect to find fixed points which can be satisfactorily encapsulated in the physical literature and remain true for at least long enough to get published.
On this picture, we may think of the dynamics within which we find the fixed points as guaranteeing the mutual consistency of the fixed points. . . .
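The fixed-point picture above has a simple computational analogue: iterating a contraction mapping converges to a point that the map leaves unchanged. A toy sketch of this (my own illustration, not a model of universal dynamics):

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate f from x0 until successive values agree to within tol.
    For a contraction mapping this converges to the x* with f(x*) == x*
    (Banach fixed-point theorem)."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

# cos is a contraction near its fixed point (the so-called Dottie number)
x_star = fixed_point(math.cos, 1.0)
print(x_star)  # ~0.739085; cos(x_star) == x_star
```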
Best regards,
Jeffrey Nicholls
Stefan Weckbach wrote on Oct. 10, 2009 @ 08:38 GMT
Dear Stephen Wolfram,
I am happy to read your essay here and to learn more about your general thoughts about physical possibilities/impossibilities. Your paper is well written and has a nice rhythm in exposing your lines of reasoning.
My personal view of the computational paradigm is that it will change at some time in the future - maybe not so far away - to a mind/spiritual paradigm. That sounds weird at first glance, but I think the computational paradigm is still too deterministic and "bottom up" to grasp reality in its fundamental structures. It's a little bit like the clockwork paradigm invented by Newton.
What could it mean to assume a mind/spiritual paradigm? Firstly, it would surely mean that human values come into play (you labeled this in your essay with "purposes"). Secondly, it would surely mean that there must be a connection between determinism and holism. There are some profound human experiences that suggest the possibility that our spacetime-reality is in some sense a filtered subset of another realm, where hypercomputation is indeed possible. Those human experiences are usually called "near-death experiences". About 40 years ago, the mainstream opinion on this issue was that those stories are purely abnormal hallucinations of the experiencers. But today it is widely accepted that these phenomena are at least real in the sense that the experiencer must have experienced them at a time when his/her brain/body wasn't active at all. Some of the perceptions made in those states of consciousness could be verified and are scientifically relevant. These experiences touch on issues like timelessness, seeing into the future, diving into one's own past, having a multimodal view of the surrounding environment, and perfect insight and knowledge into ultimate reality (and last but not least "love"), and so on.
The problem of the limits of computability is linked with complexity. That is the stage where, in my opinion, "top-down" causes come into play and new phenomena "emerge" at the very top of this irreducible complexity. My assumption is that this is only possible because "emergence" can only occur with a certain top-down dynamics as a natural feature of ultimate reality. This means at the same time that the very notion of cause-and-effect isn't universally valid *without* purpose and subjective values. The subjective "spiritual" world isn't built up of only one subject's imagination, but through the power of a multitude of subjects' values and imaginations, all in agreement with each other and fixed on the same purpose. That leads to the emergence of physical laws and the borders of irreducibility.
I am strongly convinced that this "panpsychism", "super-natural" view will sooner or later develop out of the computational paradigm, because the very fact that the search for ultimate reality is built into ultimate reality via humans/consciousness is a hint in this direction and at the same time maybe its deeper purpose. See therefore my own essay here in the contest for a possible mechanism of ultimate reality to achieve all this.
Best,
Stefan Weckbach
Owen Cunningham wrote on Oct. 10, 2009 @ 15:48 GMT
Hi Dr. Wolfram,
I hate to be one of those guys who says "Nice paper, now please read mine," but in this case, since I titled my paper in homage to your book "A New Kind of Science," I couldn't resist piling on with exactly such a request.
Your paper seems to be a great distillation of many of the themes emerging from different contestants' submissions. Plus, it has the benefit of being written conversationally and in a natural, engaging tone. Bravo.
The very earliest spark of my paper was, right after I finished "A New Kind of Science," I read Seth Lloyd's book "Programming the Universe," and thought to myself, "This is the first time I've purchased a book with the word 'programming' in the title that hasn't contained a single line of source code." To co-opt the terminology of pure mathematics, the digital physics community seems to have contented itself with producing existence proofs, but not constructions. That community seems to agree that "Yup, the entire universe could indeed be software," but nobody seems to have taken the next logical step, to say "OK, what might that software look like? How might its source code be constructed?" My paper offers up a starting point for exploring such possible constructions. That starting point is, as is yours, fundamentally graph-theoretic in structure. More specifically, it is a fractal that operates within graph-theoretic space.
Traditional fractals like the Mandelbrot set, Menger sponge (indeed anything listed at http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension) consume a subset of traditional n-dimensional space; part of the reason a Sierpinski triangle's Hausdorff dimension is less than 2 is that a complete representation of it can fit inside a 2-dimensional plane. The same reasoning ensures that the Menger sponge's Hausdorff dimension is less than 3. No fractal at the Wikipedia page has a Hausdorff dimension greater than 3.
I wonder if part of the reason this is the case is that traditional fractals, because they work by "claiming" points within a larger predefined space, can only _consume_ space. The fractal I propose in my paper (the "Object" class) simultaneously consumes _and_ _generates_ space (by "claiming" points/nodes and then also having a mechanism for creating new points/nodes). This means, if we were to find a suitable generalization of Hausdorff dimension that can take graph-theoretic fractals into account, then the "Object" class's Hausdorff dimension could be something greater than 3. It could even be, for instance, pi.
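For the strictly self-similar fractals mentioned above, the Hausdorff dimension coincides with the similarity dimension log(N)/log(s), which takes one line to compute (a sketch; the function name is mine):

```python
import math

def similarity_dimension(copies: int, scale: int) -> float:
    """Similarity dimension log(N)/log(s) of a strictly self-similar
    fractal built from `copies` pieces, each scaled down by `scale`."""
    return math.log(copies) / math.log(scale)

# Sierpinski triangle: 3 copies at half scale -> dimension < 2
print(similarity_dimension(3, 2))   # ~1.585
# Menger sponge: 20 copies at one-third scale -> dimension < 3
print(similarity_dimension(20, 3))  # ~2.727
```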
Anyhoo, the other day I was lamenting to Ray Munroe Jr that "I wish someone who had as much history as I do with computation, and also as much history as you do with physics, would read the paper and comment on it." Seems to me you're just the right man for the job -- or, at worst, overqualified.
Thanks,
Owen Cunningham
Jonathan J. Dickau wrote on Oct. 10, 2009 @ 15:57 GMT
Hello again,
I like some of Stefan W's comments a lot, and would like to point out that they relate to my statement about the need for hierarchality in an observational procedure. One needs to rise to a higher level of abstraction, sometimes, in order to see the answer to a problem, or see that what is being viewed is part of a larger whole. When we look at illustrations in a Math text, we are viewing the idealized figures from a point off the page. Only then can we see the 'true' nature of a circle.
I tend to believe that we see both bottom-up and top-down procedures at work in nature, arising from the very fact that the levels of abstraction required to create or observe anything do have a natural hierarchy. And this is easily linked up with the mind/spirit paradigm. But as Stephen has pointed out, a lot of this sort of behavioral complexity can arise from very simple computational systems. So we are left to wonder if perhaps the universality of their emergence is the result of natural order inherent in Math. I tend to believe that what's out there, in the land of mathematical abstractions, has an influence on what happens here and that this reflects the very mind/spiritual element of which Stefan speaks.
Regards,
Jonathan
James Putnam wrote on Oct. 10, 2009 @ 17:35 GMT
Dear Dr. Stephen Wolfram,
"No doubt, though, we will one day master the construction of atomic-scale replicas from pure information. But more significant, perhaps our very human existence will increasingly become purely informational--at which point the notion of transportation changes, so that just transporting information can potentially entirely achieve our human purposes. ...
...But consider a time in the future when the essence of the human condition has been transferred to purely informational form."
Is there more you can say within this forum to clarify what you mean by becoming purely informational?
James
George Schoenfelder wrote on Oct. 10, 2009 @ 18:16 GMT
Dear Dr. Wolfram,
I much enjoyed your essay and your books, in particular modeling using Turing state machines.
In my FQXi essay I suggest a computer model of the universe with atomic systems being bimodal Turing machines which alternate between modes. One mode conducts classical computation and the other mode quantum computation as a network. Have you, or to your knowledge anyone else, thought along those lines?
Sincerely,
George Schoenfelder
Ray Munroe wrote on Oct. 10, 2009 @ 19:49 GMT
Dear Dr. Wolfram,
I just finished reading the long version of your paper - It looks like the correct length! It is odd that so many authors want you to read their papers, and yet your community score is relatively low.
It sounds like you are applying the Principle of Computational Equivalence to solve unknown problems (such as TOE) computationally. There are other papers here that attempt similar tasks (Abhijnan Rej and Owen Cunningham), but it is obvious that more simplifications or approximations need to be applied.
I think that my Geometrical Approach Towards A TOE might be the type of idea that could simplify these computations. Any feedback would be appreciated!
Sincerely,
Ray B Munroe - Author of "A Geometrical Approach Towards A TOE"
Ray Munroe wrote on Oct. 10, 2009 @ 19:52 GMT
p.s. - I love the Wolfram Research site. I use it and Wikipedia quite often. In fact, my essay references your research site.
Owen Cunningham wrote on Oct. 10, 2009 @ 23:22 GMT
To Uncle Al: Are we to infer from your comment here that you view the computational approach to physics as fundamentally at odds with the experimental ethos that has driven science forward? If so, that is unfortunate, because introducing an experimental angle to the existing, and very young, subspecies of physics known as "digital physics," which has heretofore been a purely theoretical genre, is precisely what my paper is attempting to do. In the digital physics world, the best way to conduct experiments is to write some code, run it, and see whether its behavior at all reflects that of the universe.
To George Schoenfelder: Your paper sounds extremely interesting. I'm going to read it in detail and post a comment on its thread at some point in the next few days.
Terry Padden wrote on Oct. 11, 2009 @ 04:47 GMT
Stephen
A very thought provoking essay with a welcome conclusion. Some comments:
1. I think you place too much reliance on Gödel for the incompleteness of mathematics. I refer to him slightly in my essay but I still doubt his theorem. It only applies to structures with countable (= separable & Hausdorff, etc.) sets of axioms and theorems. It is therefore applicable to digital computers / Turing machines - but not analogue computers. My essay focuses on the incompleteness of mathematics for pragmatic scientific reasons. One aspect of that incompleteness is that the sciences of Computation & Information have not yet been coherently incorporated into mainstream mathematics. There is still scientific apartheid. Some measures via model theory and category theory are being taken to address this but results so far are ineffective.
2. You write "But traditional mathematical models of physics tend to have parameters that are specified in terms of real numbers." This is wrong. Input from physics can only be Rational numbers. The relevance of Real numbers is that by assuming their existence they enable continuity of representation of the physical variables between measured Rational values. This requirement is why the Topology of Open Spaces is fundamental to PHYSICAL measurement. (see page 9 of "An Introduction to Geometrical Physics" Aldrovandi & Pereira). Note that Turing Machines rely on this assumption of topological continuity between each step.
3. There is a hidden assumption that Logic is complete. I doubt it.
4. You attribute the ultimate limit on physics to Teleology. Purpose implies causality. It is interesting to note that conventional physics does incorporate causality - but its reductionist structure, explaining everything in terms of elementary particles, is a process that eliminates causality at the elementary level. Reverse the process direction and at the top level we get Purpose as cause. Your conclusion is predicated on emergence, and purpose on consciousness. These are two of the things my essay requires for the ultimate completion of mathematical science.
Anonymous wrote on Oct. 12, 2009 @ 18:35 GMT
Stephen,
The idea of computational equivalence is related to something which occurred to me some years ago. As I pour cream into my coffee, the flow of the cream becomes complex, with tentacles and filigree of growing complexity. This is a common observation, of course, that we don't often think of as having some complexity to it in the sense of a Connection Machine. However, that flow of cream is effectively computing something, say the hydrodynamic evolution according to the Navier-Stokes equation. The foldings of tendrils of cream often have a measure of recurrence, and these are at least comparable to the iterative computation of "something." We just do not ordinarily couple a cup of coffee to output devices to register its output in alpha-numeric form. This also resonates with the growing field of virtual reality computation. The Pixar movies, and those made by similar computer animation groups, use physics and algorithms to simulate reality. So approximating what we observe in the ordinary world ends up requiring a large amount of computational complexity. These can involve how a sheet or item of clothing will fold as it is dropped on a surface, or the intertwining of leaves in a pile that has been brought together by wind. The latter has always intrigued me by how it exhibits a noticeable pattern that differs from a pile raked up by a person.
I would take some departure from the last paragraph on, about how this diverges from physics. The computations are ultimately the evolution of physical states. The universe may well be fundamentally a quantum computer, or a quantum gravitational computer. The occurrence of a macroscopic world may be a signature of axiomatic incompleteness. The underlying algorithm, say a quantum error correction code --- maybe even deeper, the monster (Fischer-Griess) group --- is subject to decoherence or some chaotic process which results in a set of all possible "computations," or physical algorithms. So the universe (or multi-verse) might contain all possible Turing machines or algorithmic processors, where most have an undecidable halting status, or a Chaitin halting probability. Then the universe as a Universal Turing Machine is not able to compute all its outcomes in a bottom-up sense. Yet through this the computational states are ultimately the physical states of atoms, particles, quanta or a tiny volume of fluid in a flow.
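The "simple rules computing something complex" theme described above is usually illustrated with an elementary cellular automaton; a minimal sketch (Rule 30, the standard example from A New Kind of Science, not a model of the coffee cup):

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton on a ring.
    Each cell's new value is the rule's bit indexed by the 3-bit
    neighborhood (left, center, right)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single black cell evolves into an intricate, seemingly random triangle.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```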
On balance this is a well written and interesting essay.
Cheers LC
Narendra Nath wrote on Oct. 13, 2009 @ 07:09 GMT
Dear Stephen,
I read your essay and it appears you are living in the computational world. Some others live in the mathematical world, and some like me are experimentalists and empirical people who prefer to live in their own world of making. But the maker of the universe lies beyond us, and still has generated us all individually in this universe, which got created far earlier than when humans first appeared on the scene. Yes, it is a challenge. But when we look at whatever we have been able to decipher about the universe's evolution, the entire Physics got born with the primordial matter (we don't know yet what it was). This in turn created the visible and dark matter world, along with dark energy that is said to be repelling the minor component called the visible world. The visible component is baryonic in nature while dark matter is non-baryonic. They do not have means to interact with each other, except that the non-baryonic one repels the baryonic component gravitationally. Within the visible world gravity is an attractive force field.
From this description it becomes clear that nature governs Physics and not vice versa. Humans have a mind of their own. It is known to be a complex, wandering entity. It is what we use to study nature. Nature is simple, but our means are complex, and that is why we have made the universe a complex subject. As one works towards one theory in Physics that may explain all processes, we go towards simplicity of approach. But our mind comes in the way to complicate the matter. Thus we need to quieten the mind and discipline it further. That is, we need improvement in ourselves before we can improve our computational, physical and mathematical tools. The experiments are governed by technology and so will follow the physical machineries that we develop. Thus we are in a vicious circle of possibilities that accompany impossibilities. Enjoy the game but keep your mind wide open, calm it down and discipline it to see the simplicity of nature, rather than complicate its simplicity.
amrit wrote on Oct. 15, 2009 @ 14:38 GMT
Dear Stephen
We search the universe in two ways: with the mind and with the consciousness. When we integrate both approaches we have optimal results.
I publish about the subject on
http://vixra.org/pdf/0910.0018v1.pdf
yours amrit
Ben Baten wrote on Oct. 15, 2009 @ 19:48 GMT
Dear Stephen-
Thank you for your interesting essay, although I have a different opinion about the suitability of a computational approach to fundamental physics. Basically, I have similar concerns with many other posted essays, which have a computational, philosophical or formal character. This leads to an unbounded search for, and speculative discussions about, a unified theory of physics.
In your essay, the suggested 'discrete machine' or computational approach relies on formal axiomatic theory. This is, in my opinion, unlikely to provide an accurate representation of nature. In principle, an infinite number of machines can be imagined. Without a guiding principle to reduce this number, this leads to an unbounded search problem. In addition, certain machines may produce output that deceptively resembles observed natural behavior, although the output may never be a good representation of nature when observing it at a small enough scale. Within the computational approach, it also remains to be explained how machines are created.
From within a 'discrete machine world' you will be able neither to comprehend nor to completely accurately represent the possibilities of the 'analog' world, which nature appears to be. Theorems applicable to the world of discrete machines are in general not generalizable to the analog world. In addition, time is implicitly present in the operation of a machine, which makes it impossible to explain physical time from within a machine.
From my perspective, it is far more efficient to start a search for a unified theory directly with already observed 'known to be true' physical properties. This dramatically limits the search effort. These physical properties must be represented in terms of a minimal coherent formalism. From this theory it should be demonstrated that all observed quantum and relativistic effects appear and correspond to observed behavior within currently measured accuracy. This is essentially what Quantum Field Mechanics (QFM) attempts to do (see my essay). In the 'analog world' of QFM, interaction gives rise to unceasing pulsating phenomena. These quantum beat processes exhibit wave-corpuscular behavior (as Louis de Broglie suggested) and can be interpreted as massive particles. The mentioned pulsation has a finite duration (10^(-20) sec for electrons) and short spatial jumps (with a size equal to the Compton wavelength). This results in dynamically emerging discrete space and time, although the two attracting fundamental fields from which this behavior arises are 'analog and continuous'. In this analog world, dynamically emerging behavior has no relation to discrete machines and does not require multiple universes.
I have provided an -overview- of this theory and some of its extensions in my essay. The references provide the in depth rationalization. A slide deck on
my website provides an alternative description.
Thanks again for your essay.
Regards,
Ben Baten
Eckard Blumschein wrote on Oct. 16, 2009 @ 14:38 GMT
Dear Stephen Wolfram,
While analog computers cannot compare with digital ones with respect to reproducibility and performance, they were in an important sense closer to reality. For more details you might have a look at the attached files and 527.
Kind regards,
Eckard
attachments:
1_Ritz09.pdf,
1_M283.html
J.C.N. Smith wrote on Oct. 20, 2009 @ 00:23 GMT
Mr. Wolfram,
Thank you for an interesting essay. You wrote, "So what about time travel? There are also immediate definitional issues here. For at least if the universe has a definite history--with a single thread of time--the effect of any time travel into the past must just be reflected in the whole actual history that the universe exhibits."
For some thoughts on why time travel is not possible, please see my FQXi essay 'On the Impossibility of Time Travel,' which may be found here.
Cheers
James Putnam wrote on Oct. 24, 2009 @ 21:27 GMT
Ok, two weeks is long enough for me to take a hint. Good luck in the contest.
James
NN wrote on Oct. 31, 2009 @ 12:00 GMT
Sorry about several typographic mistakes in my comment of Oct. 14 that awaits reactions/responses!
Steve Dufourny wrote on Nov. 9, 2009 @ 11:57 GMT
Hello dear Mr Wolfram,
Nice to know you.
I didn't know your work. I saw in this thread a post by Ray Munroe about Wolfram Research. Very very very interesting, this architecture.
Like a taxonomy of everything. I congratulate you for this platform. Very relevant in its classification, and thus our utilisation of fundamentals.
The future of our technology is fascinating.
Best Regards
Steve
Ernie Bohm wrote on Nov. 23, 2009 @ 16:20 GMT
Dear Dr. Wolfram,
As you probably know the establishment was sneering at some of your work as being numerology. Clearly the establishment or at least part of it did not know what they were talking about. Those who have traditionally worked starting from the Lagrangian could not possibly understand the approach of people like Penrose or your good self. Those poor fellows who spent the better part of their life trying to learn quantum field theory and all the no-go theorems attached to it, are again and again horrified when after a long or short time it is discovered that the no-go theorems do not apply. A very common trait is to confuse number theory with numerology. In this connection I would like to recall the furore which met Mohamed El Naschie’s proposal that the background radiation is connected to a Menger sponge geometry of spacetime. Fifteen years after his proposal the loop quantum mechanics community is suddenly discovering that their theory can be embedded in a Menger sponge. If it were not so sad, it would be really funny. Isn’t that exactly what Mohamed El Naschie proposed and was laughed out of court for by the same community? Will we all ever learn to respect everybody’s point of view? Will we all ever learn from our past mistakes of underestimating people just because they are different? I for one find your last book a different kind of science, not only different but magnificent.
Ernie
Eckard Blumschein wrote on Nov. 23, 2009 @ 22:43 GMT
Dear Ernie Bohm,
I just learned from you that Cantor dust was suggested to solve the problems of physics. I looked into
M.S. El Naschie, "A review of E infinity theory and the mass spectrum of high energy particle physics," Chaos, Solitons and Fractals 19 (2004) 209–236.
Meanwhile the year 2010 is approaching, and I expect the LHC not to find compelling evidence for SUSY - if my humble suspicions are correct.
Do not be misled by my hesitation to accept Cantor's naive set theory, as long as it admittedly lacks a single tenable basis and there is no application for aleph_2.
I am pointing to a mistake that has nothing to do with set theory.
Given that Mohamed El Naschie is correct, will the Higgs boson be found?
Regards,
Eckard
Ernie wrote on Nov. 24, 2009 @ 22:28 GMT
Dear Dr. Eckard,
I am a theoretical physicist, not a mathematician. Nonetheless I learned from Bruno Augenstein that set theory can be applied to quarks. Augenstein published many papers in Chaos, Solitons & Fractals; he unfortunately disappeared from the scientific arena, probably due to old age. He used to work with Murray Gell-Mann and he had a passion for set theory. I expect that the Higgs will be there whether or not Mohamed El Naschie is correct. The transfinite set theory of El Naschie tries to set more stringent limits on the number of Higgs bosons, but it is by no means the only theory which predicts the Higgs. There are a few papers in the 2009 issue of Chaos, Solitons & Fractals, some by El Naschie, giving an indication that at a minimum one particle should be found. This particle is most probably the Higgs. Similarly inclined non-mainstream people like Garrett Lisi have made very similar predictions to Mohamed El Naschie. The establishment treated Lisi far worse than they did El Naschie. At least El Naschie's financial security and livelihood were never threatened. There are two versions of the E8 exceptional Lie Group: the Lisi version and the transfinite version proposed by El Naschie. The difference between the two is fundamental, but from a practical viewpoint the difference is almost irrelevant. To answer your question in a straightforward way after all that: the answer is yes, I expect one Higgs boson will be found. Mohamed El Naschie predicts that the mass will be about 169 GeV. I am sure about the particle but I am not sure about the mass.
Regards,
Ernie
Eckard Blumschein wrote on Nov. 25, 2009 @ 22:13 GMT
Dear Ernie Bohm,
Why should we expect the world to be digital if not even the ratio between the diameter and circumference of a circle is rational? Stephen Wolfram claims that the quadrature of the circle has been solved; he means that there is no practical need for more accuracy than the available approximations provide. However, I cannot confirm that pi can ever be calculated once and for all.
As an engineer I learned to measure and draw the far field of a charge, thought to extend endlessly into space. This does not fit into the drawer of any set theory.
Nonetheless I will look for Bruno Augenstein and Garrett Lisi. Thank you for the hints.
Regards,
Eckard
Eckard Blumschein wrote on Nov. 26, 2009 @ 15:39 GMT
Bruno Augenstein died in 2005. I guess his antimatter rocket will not work.
Eckard
Fred wrote on Jan. 9, 2010 @ 07:28 GMT
El Naschie and Huan are frauds. You can consult "Integrity Under Attack: The State of Scholarly Publishing" by Douglas N. Arnold:
http://www.ima.umn.edu/~arnold/siam-columns/integrity-under-attack.pdf
Rumors of editor and journal misconduct have dominated the highly publicized case of the applied math journal Chaos, Solitons and Fractals (CSF), published by Elsevier. As reported in a 2008 article in Nature,[2] “Five of the 36 papers in the December issue of Chaos, Solitons and Fractals alone were written by its editor-in-chief, Mohamed El Naschie. And the year to date has seen nearly 60 papers written by him appear in the journal.” In fact, of the 400 papers by El Naschie indexed in Web of Science, 307 were published in CSF while he was editor-in-chief. This extremely high rate of self-publication by the editor-in-chief led to charges that normal standards of peer review were not upheld at CSF; it has also had a large effect on the journal’s impact factor. (Thomson Reuters calculates the impact factor of a journal in a given year as C/A, where A is the number of articles published in the journal in the preceding two years, and C is the number of citations to those articles from articles indexed in the Thomson Reuters database and published in the given year.) El Naschie’s papers in CSF make 4992 citations, about 2000 of which are to papers published in CSF, largely his own. In 2007, of the 65 journals in the Thomson Reuters category “Mathematics, Interdisciplinary Applications,” CSF was ranked number 2.
Another journal whose high impact factor raises eyebrows is the International Journal of Nonlinear Science and Numerical Simulation (IJNSNS), founded in 2000 and published by Freund Publishing House. For the past three years, IJNSNS has had the highest impact factor in the category “Mathematics, Applied.” There are a variety of connections between IJNSNS and CSF. For example, Ji-Huan He, the founder and editor-in-chief of IJNSNS, is an editor of CSF, and El Naschie is one of the two co-editors of CSF; both publish copiously, not only in their own journals but also in each other’s, and they cite each other frequently.
Let me describe another element that contributes to IJNSNS’s high impact factor. The Institute of Physics (IOP) publishes Journal of Physics: Conference Series (JPCS). Conference organizers pay to have proceedings of their conferences published in JPCS, and, in the words of IOP, “JPCS asks Conference Organisers to handle the peer review of all papers.” Neither the brochure nor the website for JPCS lists an editorial board, nor does either describe any process for judging the quality of the conferences. Nonetheless, Thomson Reuters counts citations from JPCS in calculating impact factors. One of the 49 volumes of JPCS in 2008 was the proceedings of a conference organized by IJNSNS editor-in-chief He at his home campus, Shanghai Donghua University. This one volume contained 221 papers, with 366 references to papers in IJNSNS and 353 references to He. To give you an idea of the effect of this, had IJNSNS not received a single citation in 2008 beyond the ones in this conference proceedings, it would still have been assigned a larger impact factor than any SIAM journal except for SIAM Review.
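[Editor's aside] The two-year impact factor defined parenthetically above (IF = C/A) is simple enough to sketch in a few lines of Python. This is a hedged illustration of the arithmetic only, with invented numbers; the function and data are mine, not Thomson Reuters' actual pipeline:

```python
def impact_factor(year, articles_by_year, citations):
    """Two-year impact factor: citations made in `year` to articles the
    journal published in the two preceding years, divided by the number
    of articles it published in those two years."""
    window = (year - 2, year - 1)
    a = sum(articles_by_year.get(y, 0) for y in window)
    c = sum(1 for citing_year, cited_year in citations
            if citing_year == year and cited_year in window)
    return c / a if a else 0.0

# Invented numbers, purely to illustrate the calculation:
articles = {2005: 300, 2006: 320}                      # articles published per year
cites = [(2007, 2005)] * 2000 + [(2007, 2006)] * 1500  # (citing year, cited year)
print(round(impact_factor(2007, articles, cites), 3))  # 3500 / 620 -> 5.645
```

The arithmetic makes the mechanism described above plain: self-citations inflate C without changing A, so a journal that cites itself heavily raises its own impact factor.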
Fred wrote on Jan. 9, 2010 @ 07:37 GMT
You can check for yourself a typical paper by Huan using E-infinity theory:
Hierarchy of wool fibers and its interpretation using E-infinity theory
Chaos, Solitons & Fractals 41 (2009), 1839–1841
Ji-Huan He, Zhong-Fu Ren, Jie Fan, Lan Xu
Abstract
Why do wool fibers show excellent advantages in warmth-retaining and many other practical properties? The paper concludes that their hierarchical structure is the key. Using E-infinity theory, its Hausdorff dimension is estimated to be about 4.2325, very close to El Naschie’s E-infinity dimension, 4.2360, revealing an optimal structure for wool fibers.
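[Editor's aside] A purely arithmetic note, with no endorsement of the physics implied: the "E-infinity dimension" quoted here is the cube of the golden ratio, phi^3 = 4.2360679... (the abstract truncates it to 4.2360), which by the identity phi^2 = phi + 1 also equals 4 + 1/phi^3. A short Python check:

```python
import math

phi = (1 + math.sqrt(5)) / 2       # golden ratio, ~1.6180339887
print(round(phi ** 3, 4))          # 4.2361, the "E-infinity dimension"
print(round(4 + 1 / phi ** 3, 4))  # 4.2361, the same number: phi^3 = 4 + phi^-3
```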
The same article again, with minor modifications:
Hierarchy of Wool Fibers and Fractal Dimensions
International Journal of Nonlinear Sciences and Numerical Simulation, 9(3), 293-296, 2008
http://works.bepress.com/cgi/viewcontent.cgi?article=1037&context=ji_huan_he
Abstract
Wool fiber shows excellent advantages in warmth-retaining and many other practical properties possibly due to its hierarchical structure. Its fractal dimension of wool fiber is calculated which is very close to the Golden Mean, 1.618. The present study might provide a new interpretation for the reason why wool fiber has so many excellent properties.
You can notice the conflict between the two abstracts: in the first, wool fiber has dimension 4.2325 (which is greater than the dimension of the embedding space), and in the second it is 1.618. I hope El Naschie can explain these remarkable results.
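[Editor's aside] For readers wondering what a fractal (box-counting) dimension actually measures, here is a minimal self-contained sketch, unrelated to either paper: it estimates the dimension of the middle-thirds Cantor set, whose exact value is log 2 / log 3 ≈ 0.631. The construction depth and box scales are illustrative choices of mine:

```python
import math

def cantor_midpoints(depth):
    # Midpoints of the 2**depth intervals at the depth-th step of the
    # middle-thirds Cantor construction (midpoints avoid box-edge effects).
    pts, width = [0.0], 1.0
    for _ in range(depth):
        width /= 3.0
        pts = pts + [p + 2 * width for p in pts]
    return [p + width / 2 for p in pts]

def box_dimension(points, scales):
    # Count occupied boxes N(eps) at each scale eps; the dimension is the
    # least-squares slope of log N(eps) against log(1/eps).
    xs = [math.log(1.0 / eps) for eps in scales]
    ys = [math.log(len({math.floor(p / eps) for p in points})) for eps in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

dim = box_dimension(cantor_midpoints(10), [3.0 ** -k for k in range(2, 9)])
print(round(dim, 4))  # 0.6309, i.e. log 2 / log 3
```

One relevant general fact: the box-counting dimension of any subset of n-dimensional space is at most n, which is exactly why a dimension of 4.2325 for a physical fiber raises the objection above.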
Fred wrote on Jan. 9, 2010 @ 07:42 GMT
I hope to get answers from the great supporters of E-infinity theory (Ernie Bohm, Eckard, ...) -- all in one (all in El Naschie).
Eckard Blumschein wrote on Jan. 9, 2010 @ 12:59 GMT
Fred, I agree that it is perhaps not sound if "Conference organizers pay to have proceedings of their conferences published in ..."
However, why do you consider me a supporter of E-infinity theory?
Eckard
Steve Dufourny wrote on Jan. 11, 2010 @ 12:26 GMT
Dear Eckard ,
You see clearly; the more I read your words, the more pragmatism I see. Rationality thus seems, fortunately, the best road for the fundamental sciences.
A model or a theory always takes its meaning in pure physicality, with its specific numbers and constant limits. The physical referential seems to be the main source of the confusion. The topology is specific inside a closed system, as are the motion and the evolutionary point of view. If mathematical extrapolations use a false referential, or a topology without real physicality, then we can understand the confusion about some models.
A model must moreover consider all centers of interest and be in correlation with the fundamental equations.
The sciences need limits and fundamentals. There the mathematics takes on its full sense, but only if the objective referential is correlated.
Regards
Steve
STEVE JEFFREY wrote on Jan. 16, 2010 @ 08:16 GMT
What about the program supermaxine? Supermaxine runs on any computer and renders that computer a superthinker.
You can add random physics equations in a spreadsheet: 1 ODD + 1 EVEN = 2 ODD, and 2 ODD + 2 EVEN = 4 EVEN.
And you can express the two answers for everything, Yin-Yang, as one answer: 1/3 CUCUMBER + 1/3 GREEN APPLE + 1/3 GREEN BANANA = 1 GREEN SALAD SANDWICH.
So you can go on adding the answers to 2 + 2 = 4 in thirds until you reduce millions of equations to one equation for everything.
The program supermaxine can print out for all of time, for hundreds of years, coming up with a new E=mc^2-type breakthrough every day.
The equations can be divided by four to see if they balance to one. If they don't balance, that doesn't mean they are useless, just not conventional math...
Just set up supermaxine, Wolfram, and import equations at random from Wikipedia using MathType 6, and print in your office reams of paper with single answers for everything, reduced from the thousands of equations you begin with, until you end up with one equation for everything.
Steve
Hoffman wrote on Feb. 23, 2010 @ 18:23 GMT
It is interesting that everyone is now talking about the golden mean and the golden mean E8. Mohamed El Naschie was the first to intimate the relationship between E8 and the golden mean, and he subsequently introduced a fuzzy E8 based again on the golden mean. His work applies not only to the Ising model, which was tested experimentally at the Helmholtz Center in Berlin, but to everything else as well. You could say he found a general theory based on the golden mean for high energy physics; some call it quantum golden field theory. You can read much more about it in detail in a very accessible short review, "The theory of Cantorian spacetime and high energy particle physics (an informal review)", published in Chaos, Solitons & Fractals, 41, 2009, p. 2635-2646.
Steve Dufourny replied on Mar. 8, 2010 @ 09:58 GMT
Hi Hoffman,
The golden mean is just an irrational number which, like many constants and irrationals around us, helps to build things.
E8 is false and a big joke in the scientific community.
The reality is a physicality, a pure physicality.
Regards
Steve
Anonymous wrote on Mar. 6, 2010 @ 14:34 GMT
from http://martialculture.com/blog/2010/01/the-golden-ratio-and-quantum-mechanics/
El Naschie is a real spark in written human history; he is startling. All his predictions based on E-infinity theory are well verified. Among many, and just to name a few:
1- The well experimentally verified results about wool fiber, pioneered by Huan, who showed the Hausdorff dimension of wool fiber to be about 4.2325, very close to El Naschie's E-infinity dimension, 4.2360. According to Huan this reveals an optimal structure for wool fibers. This is an easily proved fact, and it doesn't need high energy.
Hierarchy of wool fibers and its interpretation using E-infinity theory
Chaos, Solitons & Fractals, Volume 41, Issue 4, 30 August 2009, Pages 1839-1841
Ji-Huan He, Zhong-Fu Ren, Jie Fan, Lan Xu
2- A remarkable achievement of El Naschie is his unique, extraordinary talent for revealing a deep connection between the double-slit experiment and particle physics. That is really a breakthrough in the field, the like of which has never been achieved.
The two-slit experiment as the foundation of E-infinity of high energy physics
Chaos, Solitons & Fractals, Volume 25, Issue 3, August 2005, Pages 509-514
M.S. El Naschie
3- El Naschie is gifted at doing simple calculations and getting non-perturbative results. While ordinary people get results by using a supercomputer for a year, El Naschie gets the same results straightforwardly by counting on his fingers, without using a computer at all. This is due to his GOLDEN FINGERS.
On quarks confinement and asymptotic freedom
Chaos, Solitons & Fractals, Volume 37, Issue 5, September 2008, Pages 1289-1291
M.S. El Naschie
Quarks confinement
Chaos, Solitons & Fractals, Volume 37, Issue 1, July 2008, Pages 6-8
M.S. El Naschie
4- With a simple knotted rope El Naschie could derive the spectrum of possible elementary particles, and really this is the discovery of the century. Anyone can just bring a knotted rope and easily test El Naschie's conjecture.
Fuzzy multi-instanton knots in the fabric of space–time and Dirac’s vacuum fluctuation
Chaos, Solitons & Fractals, Volume 38, Issue 5, December 2008, Pages 1260-1268
El Naschie may be the greatest thinker in the history of mankind, and his theory is the most important discovery since the invention of the wheel. El Naschie may be the most remarkable event since the cosmic big bang. His theory can describe everything after the big bang, and I'm sure El Naschie will extend his theory to accommodate what came before the big bang. Please don't wonder; it is an E-infinity theory that can deal with such a long history of time.
Steve Dufourny replied on Mar. 10, 2010 @ 11:48 GMT
Hi,
The problem with E-infinity is the uniqueness, and the application of infinity to this uniqueness, which has a finite series.
I respect the thinker indeed, and his creativity, but I don't agree with several of his conclusions about the Universe.
The fractal is finite and correlated with the sphere in its primordial divisibility, giving the uniqueness of all things.
God does not play dice in an ocean of universes.
Regards
Steve
danny burton wrote on Jun. 16, 2010 @ 17:48 GMT
What, exactly, do we mean by 'number'?
The human term 'number' and the concepts of a counting system are descriptions of difference between topologically whole areas. 'Two fish' describes two discrete entities within a set 'fish'. What we call number theory is the detailed analysis of how areas of difference within topologically whole entities organise efficiently within that entity.
The differences described, however, are not the result of human numbering; human numbering is a classification of already existing areas of difference within a given set. A number of fish existed, in an awful lot of discretely different ways, before the human number system. If we insist that the different areas only existed as areas of discrete difference after they were perceived to, we are what is commonly termed 'creationist'.
It is accepted that the universe (by definition) is a topologically whole entity. Physics is the analysis of the areas of discrete difference, and how they interact, combine and divide within the topologically whole universe. In physics these areas of difference, and the way they 'organise', are treated as the results of naturally-occurring phenomena. Physics has always used mathematical tools to analyse these 'physical' areas of difference, and many words have been written about the miraculous coincidence that the language of mathematics is so well suited to such analyses.
But instead of numbers being miraculously suited to describing the universe, what WE call number is how the universe 'describes' its differences.
The relationship between 'the naturally-occurring areas of discrete difference in the topologically whole universe, and their behaviours' and 'the human numbering system, number theory and mathematics' is the equivalent of the relationship between 'the naturally-occurring force between masses' and what we call 'the theory of gravity':
relationship N->n
equivalent to
relationship G->g
where the capital letter represents a natural phenomenon and the lower-case represents the human analysis of the natural phenomenon.
The implications are that the naturally-occurring processes that we call 'number theory' will result in the naturally-occurring processes that we call 'quantum mechanics', and further in all other naturally-occurring processes that we eventually call 'physics'.
If the universe IS a topologically whole entity, and everything within that universe is composed of various fractions of the whole, then inflation is in fact division and subdivision. The expansion is in the 'numbers', i.e. the discretely different areas within the whole.
It is not a set of sets, which is then a set of sets of sets... the set of sets is absolute by definition, and any introduction of further sets merely shows subdivision of the original.
[inserted note for Prof Schiller, with added lolz --> the term 'discrete difference' is used to indicate that although there may well be a continuum of difference, it's only when such differences are discrete that they interact as differences. I love my analogies, so think of a magnet. There is a continuum between N and S (the physical object is a whole unit), and the differences in polarity gradually converge to the grey areas where we can't tell if it's more N than S or more S than N... but when the interactions of each pole are examined, we see they act in discretely different directions. The continuum isn't discretely different, so it isn't analysable through number. As soon as we're analysing using number, we're separating it into discretely different interactions. A curve on a graph is a continuum, but as soon as you wish to examine the value of a point on that line, you are separating it discretely from the continuum of line before and after. /note for Prof Schiller]
It is eminently testable, as it predicts that 'number theory' and 'quantum mechanics' will become increasingly converged (OK, all areas of physics... but I say quantum mechanics because it's at the narrow end of the decreasing complexity).
The prediction is: more and more 'coincidences' such as the Riemann zeta function will be 'discovered' at the LHC and other high-energy early-universe particle experiments. (In fact, anywhere naturally-occurring topological wholes are being subdivided over time, mathematical analysis should show evidence of 'strange' similarities between fields, whether in physics, biology or any other area.)
still with me?
:P
Anonymous wrote on Jul. 21, 2010 @ 07:07 GMT
Two very informative papers explaining in layman's terms E-infinity theory and the role played by the golden ratio in wool fiber and high energy physics are the following:
Hierarchy of wool fibers and its interpretation using E-infinity theory
Chaos, Solitons & Fractals 41 (2009), 1839–1841
Ji-Huan He, Zhong-Fu Ren, Jie Fan, Lan Xu
Again the same article, with minor modifications:
Hierarchy of Wool Fibers and Fractal Dimensions
International Journal of Nonlinear Sciences and Numerical Simulation, 9(3), 293-296, 2008
http://works.bepress.com/cgi/viewcontent.cgi?article=1037&context=ji_huan_he
The two articles are authored by a great official organ of the E-infinity group, Prof. Ji-Huan He.
It is a true pleasure reading these articles, which put everything into perspective. In fact, the golden ratio was recently discovered to be enjoyed by sheep through their wool fiber, and this is deeply rooted in the basis of conformal quantum field theory. This is all apart from its fundamental connection to the polylogarithm. This is a beautiful theory with applications in quantum sheep's wool, wooltex and related subjects. These recent discoveries should silence the critics of E-infinity theory, provided this criticism is truly scientific and not just politically motivated to disturb the sheep market and wooltex stores.
For the seriously motivated readers, these are the abstracts of the two wonderful papers.
First abstract:
Why do wool fibers show excellent advantages in warmth-retaining and many other practical properties? The paper concludes that their hierarchical structure is the key. Using E-infinity theory, its Hausdorff dimension is estimated to be about 4.2325, very close to El Naschie’s E-infinity dimension, 4.2360, revealing an optimal structure for wool fibers.
Second abstract
Wool fiber shows excellent advantages in warmth-retaining and many other practical properties possibly due to its hierarchical structure. Its fractal dimension of wool fiber is calculated which is very close to the Golden Mean, 1.618. The present study might provide a new interpretation for the reason why wool fiber has so many excellent properties.
Some suspicious readers would ask why we have two different Hausdorff dimensions for the same wool fiber.
- The answer is simple: the two papers measured the Hausdorff dimension in two different frames. As anybody knows, the dimension depends on the reference frame. This reveals that even dimension is a relative concept.
Josef Tsau wrote on Dec. 31, 2010 @ 17:22 GMT
This post offers a comprehensive interpretation of most universal phenomena and the universe by discovering their physical origins. It proves that Galilean physics is the physics of everything, etc.
The Expanded Galilean Physics - Universal Phenomena and Cosmology
A. Universal Phenomena
The discovery of the Galilean-physics-based interpretation of the universal phenomena is based on the logical scientific lead that for universal phenomena such as the charge energy of the proton and electron, and gravity, to continue to exist, they need a constant energy supply, and the stars should be the huge energy source needed to supply it. To continuously transport the energy of stars to all matters, the nuclear reactions in stars should continue to produce energetic tiny particles, undetectably small, to reach, to interact with, and to lose energy to all matters to produce universal phenomena. There is already strong scientific evidence that stars constantly produce neutrinos, which carry no charge and are so small that they have the unique ability to penetrate all matters to reach all of their subatomic particles: protons and electrons. Therefore, neutrinos should be the tiny particles interacting effectively with all matters to produce all universal phenomena. However, scientists do not think so. They believe that both electrons and protons are point-size particles, and that atoms and molecules are therefore essentially empty space, so that neutrinos hardly ever collide or interact with them. Another reason scientists believe that neutrinos are very inactive is that they believe that interacting with all matters to produce universal phenomena such as gravity would produce such a large amount of heat that it would evaporate all matters in seconds. These misconceptions, together with other wrong knowledge taught by the mainstream of thought (its mathematical theories), such as the belief that neutrinos have no mass, have effectively stopped scientists from understanding the universal phenomena.
Well established findings show that all matters are made of, in addition to protons and electrons, neutrinos, since they are produced in nuclear reactions from the matters made of protons and electrons. Unlike neutrinos both proton and electron are the fundamental charge particles and undoubtedly their charge is some kind of energy constantly producing forces. To constantly carry charge they should have constant energy supply from their environment. Otherwise, they should be absolutely uncharged particles like neutrinos. Therefore, without charge energy both proton and electron should be absolutely uncharged particles likely having mass density comparable to neutrinos, which based on our knowledge of atomic nuclei is amazingly dense, but should be much bigger than neutrinos. In space, the collision interactions between the energetic neutrinos coming from stars and the large uncharged particles should result in transferring energy from neutrinos to them. At equilibrium, the large uncharged particles should be surrounded by an amazingly dense atmosphere of neutrinos having much lower (kinetic) energy than the atmospheric neutrinos of the universe. If the large particles have irregular shape, at equilibrium they should be fast rotating and oscillating, thus, transferring their rotating-energy to their atmospheric neutrinos, which are therefore spinning and likely oscillating too. Both a proton and an electron therefore should be made of an absolutely uncharged particle having an amazingly dense atmosphere of spinning neutrinos constantly gaining the kinetic energies from incoming neutrinos of the universe to maintain their structures and charge energy. Both protons and electrons have their own gravitational force, which is the atmospheric pressure of neutrinos of the universe since neutrinos cannot penetrate through the large particle at their center. Besides, having amazingly dense atmosphere of neutrinos, they should also have repulsive force acting on one another. 
Both a proton and an electron therefore have an amazingly dense atmosphere of neutrinos of their own, gravitational force, and mutual repulsive force to express their charge energy not simply having single + or – force as generally believed.
The above logical understanding of both the structure and the charge energy of the fundamental charge particles (proton and electron), based on the knowledge of Galilean physics, has led to the understanding of all universal phenomena and cosmology! Only the correct physics can have such easy-to-understand logic, overall coherency and consistency, and the support of all valid R&D findings!
Having two opposite forces instead of a single force to express their charge energy explains why protons and electrons form atoms and molecules. An electron approaches a proton due to that the gravitational force between them is greater than the repulsive force between them. However, their repulsive force increases upon their approaching and when the two forces are balanced, they form atoms and molecules. Free and partially free electrons and protons extend their neutrino atmospheres to produce the so-called “electric and magnetic force fields”, whose forces only act on charged particles. Knowing both electron and the proton having amazingly dense atmospheres of neutrinos also leads to the understanding why the neutrinos of the universe can interact effectively with all matters having atomic and molecular structures to produce many universal phenomena! It further reveals the scientific nature of the collision interactions between neutrinos and all matters: they are mainly the collisions among neutrinos hardly producing any atomic and molecular motions or heat. In these collision interactions neutrinos of the universe constantly lose (kinetic) energy to protons and electrons mainly to produce and to maintain their charge energy, to form and to maintain atomic and molecular structures not to produce the huge amount of heat expected by physicists! However, these collision interactions do cause protons and electrons to oscillate and these oscillation motions should constantly produce some heat. Therefore, there is a fundamental heat of matters produced by neutrinos-matters collision interactions, which keeps the centers of planets hot and produces their volcanic activities.
The logically arrived structure of both electron and proton further leads to the fundamental understanding of the energies released and absorbed from the collision interactions among all matters including physical, chemical and nuclear reactions! These collision interactions mainly involve losing or gaining neutrinos from the neutrino atmospheres of orbital electrons resulting in emitting or absorbing energy-producing neutrinos. Both the neutrino content and the weight of electrons therefore vary depending on the environment they are in. For example, free electrons contain the most neutrinos and are the heaviest. Electrons inside the nucleus of the heaviest atom contain the least amount of neutrinos, thus, are the lightest. The small amount of mass lost in a nuclear reaction therefore is explainable by the loss of neutrinos from orbital and/or nuclear electrons and, may be, a very small portion, from protons too, thus, avoiding the postulations that mass and energy are mutual convertible and the presence of pure energy, which is not even scientifically definable. Like both physical and chemical reactions, nuclear reactions now also obey mass-balance law, when the amount of neutrinos lost or gained in the reactions is accounted for.
The presence of gravity proves that there is a dense energetic atmosphere of undetectable neutrinos that interact effectively with all matters to produce it, which is the differential force between the pushing forces of the neutrinos going into and coming out of the matters. The presence of a dense atmosphere of undetectable neutrinos also explains the existence of protons, electrons, atoms, molecules, electric, and magnetic force fields.
The logically arrived structure of both electron and proton further leads to the understanding of light! Findings show that orbital electrons of atoms and molecules jump into lower-energy-level orbital emitting light. Since they can only emit neutrinos, lights should be their emitted neutrinos. This is explainable since getting closer to the protons in an atomic nucleus, an orbital electron need to lose some atmospheric neutrinos to reduce the repulsive force between them. Also, that the neutrinos emitted from orbital electrons are detectable as lights of various energy levels are explainable due to that the neutrinos of the atmosphere of an electron have various levels of spin energy, the closer to the absolutely uncharged particle at the center, the higher their spin energy. Lights therefore are neutrinos having spin energy. If neutrinos also have irregular shape, the light or the spinning neutrinos traveling in space should oscillate showing transverse wave-like properties, which have already been found. Also, balanced by two forces orbital electrons should circulate their atomic nuclei in a transverse wave-like motion. When lights bounce off orbital electrons upon going through a narrow slit, for example, they should produce wave-like scattering photographic patterns, which have also been found. However, contrary to general belief, lights are spinning neutrino particles hardly collide interacting with one another in space not the mass-free waves interfering with one another, although they show some wave-like properties when they collide interacting with matters. Findings also show that lights collide interacting with matters to lose all or part of their (spin and oscillation) energies turning into undetectable neutrinos, e.g., by passing through a black surface.
Traveling in space lights gradually lose their energy producing the well-known phenomenon of universal redshift of star lights. Hubble Law shows that the magnitude of the redshift of a star light is proportional to its distance from Earth. According to big bang theory the redshift of star light is relative-motion caused Doppler Effect of light and has concluded from Hubble’s findings that the universe is uniformly expanding. However, scientists have also discovered gravitational redshift of light, which cannot be explained by relative–motion Doppler Effect. In comparison, the latter is a much stronger effect, since it takes many light years to develop a detectable redshift of star light. To interpret both redshift effects of light consistently and coherently, I have proposed a mini-ether theory assuming that besides neutrinos the nuclear reactions in stars also produce mini-ether having particle size much smaller than that of neutrinos. Like neutrinos and may be more so, the concentrations of the mini-ether inside and around planets and stars are much higher than in space, thus, causing much stronger redshift effect than in space. This coherent interpretation of both redshift effects of light may be more reasonable than the conclusion of the big bang theory based on the effect of the universal redshift of light alone that the universe is uniformly expanding started from a big bang of a “singularity”, which the nature cannot logically produce.
B. Cosmology
The new cosmology of the expanded Galilean physics continues to recognize absolute time and space. It defines our universe as an energy-consuming open system, which should have limited size and duration inside the domain of nature. To form the universe, the domain of nature must have a way to produce and store the universe's energy-producing material, and a natural mechanism to start consuming it, i.e., to start a universe. The space surrounding our universe should still have a dilute atmosphere of neutrinos, and since the temperature in intergalactic space is already very low, about 2.7 K (on the assumption that the microwave background comes from the intergalactic space surrounding our galaxy), the temperature outside the universe should drop to 0 K and below, where, according to scientific findings, atoms and molecules cannot survive. This means the material nature made to release energy inside a universe cannot be hydrogen and helium. Based on our knowledge of nuclear reactions, the energy-producing material of the universe should be made from protons, electrons, and neutrinos. Findings also suggest that at temperatures very close to 0 K, atoms and molecules collapse into a very dense material resembling atomic nuclei but containing roughly equal numbers of protons and electrons. The author has named this material the "dense-matter object".
I postulate that when stars and planets die at sub-0 K temperatures, they collapse to form dense-matter objects, which are stable only at 0 K and below, in the domain of nature outside the universe. They continue to aggregate with one another and to collect protons, electrons, and neutrinos, growing in size; some of them can therefore reach amazingly large sizes. A galaxy is likely formed from a violent collision of two giant dense-matter objects, and a universe can form from many such collisions. A violent collision should break the two dense-matter objects into billions of pieces and produce a thick cloud of dense-matter dust, neutrinos, protons, and electrons, along with a huge amount of energy. Under bombardment by the high-energy particles, radioactivity and nuclear reactions should occur on the surfaces of the dense-matter pieces and dust. The dense-matter dust would be quickly consumed, producing hydrogen and helium clouds in the cosmos, and each piece of dense-matter object would become a baby star. The simultaneous production of billions of baby stars marks the birth of a galaxy. Inside a galaxy, dense-matter objects are unstable: radioactive and undergoing nuclear reactions on their surfaces, they constantly shoot out energetic particles such as neutrinos, hydrogen, helium, and ions, pushing things away; they are therefore "antigravity matter". The existence of antigravity matter in this universe is still unknown to all. All matter made of atoms and molecules is the known gravity matter, since it constantly draws energy from the atmospheric neutrinos produced by the antigravity matter in the universe to produce gravity.
Regular stars – A baby star, or dense-matter object, is very small, dense, and essentially invisible. Besides neutrinos, it constantly shoots out protons, electrons, hydrogen, helium, and ions to a great distance, where they gradually accumulate to form an outer layer. A regular star like our Sun is therefore a dense-matter object surrounded by a large, hollow, ball-shaped outer layer, which gradually gets denser, thicker, and thus shinier. It is still unknown, however, that regular stars are hollow and have a dense-matter object, i.e., antigravity matter, at their centers. As the outer layer gets thicker and hotter with age, its inner surface reaches the critical temperature for the nuclear reactions that produce elements heavier than helium. With further aging, the inner surface of the outer layer becomes harder and harder for the gases and particles continuously produced by the central dense-matter object to penetrate, and as a result the inner gas pressure continues to build up. Mature regular stars like our Sun therefore have to release their inner gas pressure in periodic bursts, and eventually the outer layer of a regular star explodes, ending its life; findings have already confirmed this conclusion.
Planetary motions – The dense-matter objects inside many regular stars, including our Sun, rotate rapidly owing to both their irregular shape and the nuclear reactions constantly occurring on their surfaces. This rotational energy makes the outer layers of these stars rotate. More importantly, it also enables them to shoot out energetic neutrinos that sustain the constant orbital and rotational motions of their planets and moons in a specific direction, as in the solar system.
Dense stars – Baby stars are the densest stars, but they turned into regular stars long ago; regular stars are artificially big and have an artificially low mass density. Findings show that when large regular stars explode, they effectively blow away their outer layers and turn into the so-called neutron stars, the densest stars found. Neutron stars should be the dense-matter objects exposed by the explosion. Rather than being dead, they are baby stars reborn from the explosion; given time, they will become regular stars again. Many of them are also found to rotate rapidly. The explosion of a medium-sized regular star like our Sun blows away a large portion of its outer layer, leaving a white dwarf star. Being small and having an outer layer that hydrogen and helium find hard to penetrate, white dwarfs should violently explode again, as findings have also proven.
Black holes – Very large dense-matter objects are expected, particularly in and around the centers of all galaxies. They should be the "black holes" that have been found. Having very strong antigravity, they destroy and disperse the light passing through and around them, and they are therefore black-hole-like. They throw the hydrogen and helium they constantly produce so far away that these gases keep dispersing into the cosmos instead of forming a ball-shaped outer layer, which would otherwise turn them into gigantic regular stars. Some fast-rotating large dense-matter objects (black holes) throw their gases very far away, forming large bright rings that rotate, like the outer layer of our Sun, because of the fast rotation of the large dense-matter object at their centers.
C. Physical Origins, Not Mathematical Origins
A century of scientific revolution led by Einstein has produced an era of (pseudo-)physics that uses mathematical origins, against the fundamental teaching of physical origins by Galilean physics, to interpret universal phenomena and cosmology. The revolution has failed scientifically, both because it has never disproved Galilean physics and because it is inconsistent with other fields of natural science. The discovery of the expanded Galilean physics, which uses physical origins to interpret universal phenomena and cosmology, has conclusively disproved all theories teaching mathematical origins, and the following scientific debates can now be concluded. Some major conceptual changes from mathematical origins to physical origins are given below.
1. Light and gravity – Contrary to general belief, light is neither an (electromagnetic) wave nor a stream of quantized photons. It consists of neutrino particles carrying spin energy, which hardly ever collide or interact with one another in space. The null result of the Michelson-Morley experiment is therefore expected, but it does not prove that light has an absolute speed in space. Gravity is the wind force of neutrinos, not curved spacetime.
2. Mass and moving speed – Today's physicists firmly believe that mass increases with moving speed, citing the findings of high-energy particle accelerators as "proof". However, since a charged particle such as an electron has an atmosphere of neutrinos, it is expected to lose some of those neutrinos as it moves. The higher the speed, the more neutrinos it loses and the weaker its charge becomes. Charged particles therefore lose, not gain, mass with increasing speed; magnetic-field measurements merely show an apparent increase in mass because of the reduced charge of the moving particles.
3. CMB – A reasonable interpretation of the CMB, the cosmic microwave background, is that it is emitted by the matter inside the intergalactic space surrounding our galaxy, while the motion of our galaxy as a whole through that space produces the CMB's anisotropy. With no stars in intergalactic space, the temperature there is very low (about 2.7 K) and uniform. If the CMB were the residual radiation predicted by the big bang theory, it should not be anisotropic.
4. The Einstein equation E = mc^2 – It is wrong to assume that energy (E) and mass (m) are mutually convertible. Pure energy is not even scientifically definable and therefore does not exist. The equation is correct in showing that the energy produced by a reaction is proportional to the mass (of neutrinos) lost in the reaction, but c^2 is simply a proportionality constant having nothing to do with the speed of light.
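Whatever interpretation one attaches to the constant c^2, the proportionality E = mc^2 itself is plain arithmetic. A minimal sketch of the standard mass-defect calculation, using the conventional numerical value of the constant:

```python
# Energy released per unit of mass lost in a reaction, via the
# proportionality E = m * c^2. Whether c^2 is "just a proportionality
# constant" (as the text argues) or the squared speed of light does
# not change the arithmetic below.
c = 2.99792458e8   # conventional value of the constant, m/s

def energy_from_mass_loss(delta_m_kg):
    """Energy in joules corresponding to a mass loss in kilograms."""
    return delta_m_kg * c ** 2

# One gram of mass lost in a reaction corresponds to roughly 9e13 J:
e = energy_from_mass_loss(1e-3)   # ~8.99e13 J
```

The proportionality is linear, so doubling the mass lost doubles the energy released, regardless of how the constant is interpreted.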
5. Matter and energy – Since all energy is produced by matter, pure energy does not exist and matter cannot be converted into pure energy.
6. The universe – The Galilean-physics-based cosmology is built on the following logical reasoning, on the physical-origins interpretation of universal phenomena, and on findings. To make common sense, the universe should be an energy-consuming open system. Like a fire, it should have limited size and duration. The domain of nature, which may have unlimited size and duration, should have a natural way to make and store the (nuclear) energy-producing material and a way to start the universe.
There is no valid scientific proof that time is not absolute, and according to Galilean physics the universe is a common-sense three-dimensional one, made of both antigravity and gravity matter. The antigravity matter, which is made of densely packed protons, electrons, and neutrinos, is still unknown to the scientific community. Inside the universe it is radioactive, constantly producing nuclear energy and emitting energetic particles to produce antigravity; it is located at the centers of regular stars and dwarf stars, and is also known as neutron stars and black holes.
7. Particle physics – Galilean physics teaches that the universe is made of only three stable fundamental particles: neutrinos, the uncharged proton, and the uncharged electron. It further postulates the existence of mini-ether as a fourth fundamental particle to explain the redshift of light. All of them have mass. I suggest that the positron is an electron deficient in neutrino content. At present, Galilean physics has no explanation for the hundreds of very unstable elementary particles found; they are likely related to the break-up of protons and electrons.
Since both protons and electrons are unbreakable, there can be no quark particles. As expected, Galilean particle physics is not compatible with the standard model of modern physics. Since the charge properties are unique to the proton and the electron, electric- and magnetic-field methods may be able to detect other charged elementary particles but cannot quantify their charge and mass, which may lead to serious errors in the standard model.
By Josef Tsau, Ph.D., jtsau@comcast.net , September 2010