CATEGORY:
Undecidability, Uncomputability, and Unpredictability Essay Contest (2019-2020)
TOPIC:
Unverifiability, Unexplainability & Unpredictability by Roman V Yampolskiy
Author Roman V Yampolskiy wrote on Jan. 28, 2020 @ 16:47 GMT
Essay Abstract: Optimistic plans of mathematicians to automatically uncover all truths have been thwarted by Gödel’s Incompleteness and Turing’s Undecidability, among many other impossibility results. In this essay we describe a more general limitation on mathematical proofs, Unverifiability, along with Unpredictability and Unexplainability of powerful knowledge discovery agents. We conclude with an analysis of the limits to what we can prove, predict or understand in physics and science in general, as well as in the safety of artificial intelligence in particular.
Author Bio: Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Science and Engineering. He is the founding and current director of the Cyber Security Lab and an author of many books, including Artificial Superintelligence: a Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as: Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 of Online College Professor of the Year, and Outstanding Early Career in Education award. Dr. Yampolskiy’s main areas of interest are Artificial Intelligence and Cybersecurity.
Download Essay PDF File
Georgina Woodward wrote on Jan. 28, 2020 @ 21:51 GMT
I found your essay very clearly written, interesting, educational and topical. The idea of equating a mathematics verifier with a physics observer is an interesting one. However, I don't fully agree. Physics observers have a partial viewpoint that relates to a particular context and perspective of observation or measurement, so the conclusion is not impartial. Observers with similar viewpoints can corroborate each other's views. However, those with different perspectives need not corroborate each other. You make the point that the verifiers themselves may be in error [and for that reason may disagree]. They do not set out to take a partial viewpoint. Verifiers' conclusions differing between verifiers comes down to what the verifier does, not to ambiguity in the mathematics, which is not usual. This shows your essay is thought provoking : ) All the best.
Author Roman V Yampolskiy replied on Jan. 29, 2020 @ 15:47 GMT
Dear Georgina,
Thank you for your kind words and helpful feedback! In a way, different verifiers also come from different contexts, as a particular one is unlikely to contain all of known mathematics; each will have some subset, perhaps not fully integrated with all other domains of mathematical knowledge. So their conclusions may also not be impartial, but biased by their background knowledge. Again, thank you!
Best,
Roman V. Yampolskiy
Jochen Szangolies wrote on Jan. 29, 2020 @ 17:21 GMT
Dear Roman,
your essay contains much food for thought, and I'll have to take some time digesting its contents. You very deftly eliminate a naive view of mathematical proof, according to which once something's proven, we know it to be true, and that's that.
I wonder about how to extend this framework. Is it, for instance, possible to construct a complexity theory for proofs? Say, proofs which take some minimal number of steps that scale with the---however defined---complexity of the theorem to be proven?
Could one create a kind of 'verifiability logic', analogous to provability logic? Can the intuitive regress argument you give be made rigorous in terms of, say, a diagonalization argument? That is, a way to construct a proof even a hypothetically ideal verifier could not verify?
And what about scenarios where you convince another party that you possess a certain kind of knowledge item---like, say, a proof---without revealing that knowledge? Something like a zero-knowledge proof? I would guess that at least certain constructive proofs are amenable to such an 'indirect' verification---formulate some computational task such that it can only be solved if one can construct some mathematical object. Then again, I suppose the putative prover could always get a lucky guess...
Author Roman V Yampolskiy replied on Jan. 29, 2020 @ 18:40 GMT
Dear Jochen,
Thank you for reading my work and taking the time to comment. You are suggesting some interesting directions for future work, some of which have been attempted and others are still waiting to be tried. I do try to address such alternatives in the appendix, and will try to do more for your ideas in my future work. Thank you!
Best,
Roman Yampolskiy
Jochen Szangolies replied on Feb. 2, 2020 @ 15:58 GMT
Dear Roman,
thank you for your answer. I think it's an interesting thread to consider whether, merely to certify a proof, one always needs to be capable of checking whether it is correct. Certain proofs, if you possess them, may enable you to perform certain tasks---hence, your ability to perform these tasks will certify your having that proof, up to any given standard of certainty (strictly smaller than absolute certainty, of course). This sort of thing seems closely related, to me, to the problem of certifying whether one party has a certain capacity (say, access to a universal quantum computer) without the other party necessarily having that capacity (a quantum computer to check with).
Therefore, it doesn't seem quite right to me that each verifier necessarily needs capacities equal to or exceeding those of the system it verifies; indeed, there may be ways for you to convince me you've proven something without me having any hope of ever checking the proof, which would indicate that a proof-checker is not the only possible kind of verifier imaginable.
Author Roman V Yampolskiy replied on Feb. 2, 2020 @ 16:32 GMT
I think that makes sense. Something definitely to consider as I continue work in this domain. Thank you!
Joseph Maria Hoebe wrote on Jan. 30, 2020 @ 14:54 GMT
You write at the end of your nice exposé: "our results are very timely and should allow for a better understanding of limits to verifiability and resources required to get desired levels of confidence in the results."
Please, allow me to comment:
The development of Being goes from total unformedness to total formedness.
Observing observes and forms "That which is" as a world. As long as an observer is not equal to the whole, his observing is limited to his personal possibilities, just as the unformedness of Being does not allow the totality of Being to be totally formed; thus there is always room for more formedness of Being. This will have the effect that knowing can always only be knowing up to and including that specific state of being formed.
Conversely, therefore, that which is formed is sufficient for knowing the next step, but certainly not for the step after that. In Category Theory this means that if A goes to B and B goes to C, then A can go to C; but that is only true if B indeed goes to C. Hence AC has the notation AC after BC.
The conclusion is that we always know exactly enough for now, because now is as it is. That means "we" is the decisive factor and that is the scientific community as a whole as verifier (for that moment).
Bests,
Jos
Author Roman V Yampolskiy replied on Jan. 30, 2020 @ 15:09 GMT
Dear Jos,
Thanks for taking the time to read my work. We, as the scientific community, are an ultimate verifier of truth.
Best,
Roman Yampolskiy
Dizhechko Boris Semyonovich wrote on Jan. 31, 2020 @ 16:11 GMT
Dear Roman V Yampolskiy, after reading your essay, I realized that I should ask you to verify the new Cartesian generalization of modern physics, which is based on the identity of physical space and Descartes’s matter. According to this identity, it is common for physical space to move relative to itself, since it is matter. Arguing in this way, I showed that the probability density of states in an atom depends on Lorentz contractions: of length, time, mass, etc. I invite you to discuss my essay, in which I show the successes of the neo-Cartesian generalization of modern physics: “The transformation of uncertainty into certainty. The relationship of the Lorentz factor with the probability density of states. And more from a new Cartesian generalization of modern physics” by Dizhechko Boris Semyonovich.
Author Roman V Yampolskiy replied on Jan. 31, 2020 @ 16:17 GMT
Dear Boris,
Thank you for the invitation, I will take a look.
Best,
Roman Yampolskiy
Moritz Stautner wrote on Feb. 12, 2020 @ 16:07 GMT
Hi Roman,
I like your overview of the reductionist scientific paradigm in its many empirical, theoretical and practical implications very much. But I personally can't find any deep new conclusions. Correct me if I'm wrong, but couldn't you reduce all scientific concepts to an abstract (Turing-related...) 'reduction', and that's really it?
Greetings
Morris
Author Roman V Yampolskiy replied on Feb. 12, 2020 @ 16:14 GMT
Morris,
Thank you for liking my work.
If you are looking for new conclusions, I would suggest concentrating on the subsections on AI.
Best,
Roman
Luca Valeri wrote on Feb. 13, 2020 @ 11:00 GMT
Hi Roman,
sorry for this long reply. But I was really intrigued by your essay, which inspired me to a lot of thoughts.
In physics I used to be a bit of a Platonist, interested only in theoretical physics, which reflects the true forms, and not much interested in experimental physics, which is concerned only with the shadows of the ideal forms. In your essay you made the theory of verification - which I compare with experimental physics - really interesting! Your essay is nice to read and interesting from beginning to end.
A third of all mathematical publications contain errors? That is worse than medical publications!
In physics, realism states that the truth of ontological propositions should be independent of the means of their verification (measurements). This of course is questioned by quantum mechanics, and the search for a type of realistic interpretation of physics which does not rely on the human observer is still going on. In mathematics, even more than in physics, we tend to have a realistic attitude. Do you defend in your essay a non-realist position, where the truth of a mathematical proposition depends on the verifier's ability to verify it?
I am not up to date in AI research and I found your exposition very interesting. Were you able to give to 'comprehensibility' a precise mathematical meaning?
When I was reading your essay my son asked me what I was reading. I said it is about whether a computer program could check if other programs, and itself, are working correctly. He instantly asked: why don't you write a second program? The two programs could check each other. So if there were another barber ... In your essay you say that there are minds that can never be understood by a human mind, as they have a greater number of states. But could two minds that have the same number of states comprehend each other?
In the context of physics I always wondered whether a measurement instrument must have greater complexity than the object that is measured. For instance, for measuring a square one needs at least a field over 4 points in order to be able to distinguish whether the square has been rotated. This too would lead to an infinite regress.
In your physics section you seem to imply that the probabilistic nature of mathematical verification implies the probabilistic nature of mathematics, and hence the probabilistic nature of physics (=QM) in the MU. Is that so?
I always wondered if the fact that a system cannot completely know itself, and that an external measurer is needed to completely specify the system, could be the cause of the probabilistic nature of QM. If an object has n degrees of freedom, its measurer must have at least n degrees of freedom; let’s say it has m. So the total system must have n*m degrees of freedom, which is greater than m. Hence there are undetermined elements within the system. Hence probability.
Well, in relativistic classical physics only relative distances and velocities are measurable within the system, while the absolute values are measurable only from outside - relative to the location of that system. There is also an infinite regress here, but I think this is completely consistent and classical, and no 'problem' arises with that kind of infinite regress.
Last but not least I want to advertise my essay, which I still need to write and which will have a title like: "Changing axioms - the unpredictable evolution of physical laws". There I propose that the definability of the basic concepts (quantities) that make up a physical theory (laws) depends on the possibility of separating the system from its environment. For instance, relative distances are only defined as long as the system is separable (translation invariant) from its environment. In my opinion this not only solves the objectivity problem raised by Wigner's-friend type experiments, since it protects the system from outside interventions, with its symmetry and unitarity as conditions for having well-defined concepts within the system. But the conditioning of the definability of concepts by the environment (which is contingent) means that the basic concepts that make up the theory may change with a changing environment, and so does the theory itself. I think that also gives an interesting view on AI, which is able to adapt itself and change its program according to the environment.
Best regards
Luca
Author Roman V Yampolskiy replied on Feb. 13, 2020 @ 16:24 GMT
Hey Luca,
Thank you for your detailed comment.
“I am not up to date in AI research and I found your exposition very interesting. Were you able to give to 'comprehensibility' a precise mathematical meaning?”
Please see: https://arxiv.org/abs/1907.03869
“But could two minds that have the same amount of states comprehend each other?”
Please see: https://arxiv.org/abs/0708.1362
“In your physics section you seem to imply that the probabilistic nature of mathematical verification implies the probabilistic nature of mathematics and hence the probabilistic nature of physics (=QM) in the MU. Is that so?”
Yes, that is one of my ideas.
Best,
Roman
Member Noson S. Yanofsky wrote on Mar. 8, 2020 @ 17:01 GMT
Dear Roman,
Thank you for disentangling all these different limitations of human knowledge.
Science made great strides by formulating the intuitive notion of a computation with a Turing machine. With this formulation, we were able to conquer the notion of undecidability. It would be nice to formulate the many intuitive concepts you bring to light.
You have a line: "Essentially the intelligence based complexity of an algorithms related to the minimum intelligence level required to design an algorithm or to understand it." I suspect that the vast majority of programs available on the market are not intelligible to any single person. I once heard that Microsoft Windows was made of 40,000,000 lines of code. Surely that is beyond the ability of one person. Perhaps a more general definition is needed. Do two designers have a higher IQ than one?
Thank you for a very interesting article.
All the best,
Noson
Author Roman V Yampolskiy replied on Mar. 9, 2020 @ 00:07 GMT
Dear Noson,
Thank you for your comment. You are asking great questions about how IQ of multiple agents can be combined. This week, a book chapter of mine (chapter 1: TOWARDS THE MATHEMATICS OF INTELLIGENCE) on this topic is out in the book: https://vernonpress.com/book/935 I think it answers some of your questions, but still leaves much room for future work.
Best,
Roman
Flavio Del Santo wrote on Mar. 11, 2020 @ 16:36 GMT
Thank you for this well-written and stimulating essay.
Let me add something to your sentence: “The Born rule [76], a fundamental component of the Copenhagen interpretation, provides a link between mathematics and experimental observations.” I would like to point out to you that the interpretation known as QBism, whose authors explicitly consider it a refinement of Copenhagen, takes the Born rule as an element of reality. In fact, it is the only “element of reality”, while the rest is all subjective.
I invite you to have a look at my essay regarding the role of “elements of reality” that we grant to mathematical entities like numbers, and what the consequences are for the natural sciences.
Very high rate from me, and good luck with the contest!
Flavio
Author Roman V Yampolskiy replied on Mar. 12, 2020 @ 19:17 GMT
Dear Flavio,
Thank you for your kind words and useful pointers. I look forward to reading suggested materials.
Best,
Dr. Yampolskiy
Vladimir Rogozhin wrote on Apr. 15, 2020 @ 16:58 GMT
Dear Roman,
Your extremely important essay makes it possible to conclude that the century-old problem of the “foundations of mathematics” (justification, substantiation), which Morris Kline beautifully presented in “Mathematics: The Loss of Certainty,” remains philosophical and mathematical problem No. 1 for cognition as a whole. Uncertainty in the foundations of knowledge, the "language of Nature", ultimately gives rise to undecidability, uncomputability, unpredictability ... unverifiability, unexplainability ... unrepresentation.
The problem of the "foundations of mathematics" is an ontological problem. Therefore, I call it the problem of the ontological basis of mathematics (knowledge). The unsolved problem of the essential “foundations of mathematics”, in turn, gives rise to problems in Computational Mathematics, which A.S. Narin'yani described well in the article "Mathematics XXI - a radical paradigm shift. Model, not Algorithm". Narin'yani notes:
“The current state of affairs in Computational Mathematics can be evaluated by contrasting two alternative points of view. One takes it, as it were, for granted: Computational Mathematics is a successful, rapidly developing field, extremely demanded by practice and basically meeting its needs. The second is far from being so optimistic: Computational Mathematics is in a deepening crisis, becoming more and more inadequate in the context of growing practical demands. At present, Computational Mathematics has no conceptual ideas for breaking this deadlock.” Do you agree with this conclusion?
In an interview with the mathematician and mathematical physicist Ludwig Faddeev, "The equation of the evil spirit", it is written:
"Faddeev is convinced that just as physics solved all the theoretical problems of chemistry, thereby “closing” chemistry, so mathematics will create a “unified theory of everything” and “close” physics." How can mathematics “close” physics if the problem of the “foundations of mathematics” (the ontological basification of mathematics) is not solved? ... In your opinion, why is the age-old problem of the justification (basification) of mathematics “swept under the rug”, primarily by mathematicians themselves?
With kind regards, Vladimir
Author Roman V Yampolskiy replied on Apr. 15, 2020 @ 19:05 GMT
Dear Vladimir,
Thank you for your kind words and for sharing some interesting references. I will be sure to read them. As to your last question, recent work by Wolfram may be an interesting direction to follow in that regard: https://www.wolframphysics.org/
Best,
Roman
Vladimir Rogozhin replied on Apr. 15, 2020 @ 20:05 GMT
Thank you very much, Roman, for your quick reply and link. I'm starting to read with interest.
Best,
Vladimir
Michael muteru wrote on Apr. 28, 2020 @ 21:40 GMT
Hi Roman, I appreciate your comprehension of the human observer. Do human selection effects filter into the eventual outcome of an experiment, or vice versa? Please read/rate my take in my essay: https://fqxi.org/community/forum/topic/3525. I would greatly love to hear from you on this topic. Thanks, and all the best to you.
Kwame A Bennett wrote on May. 1, 2020 @ 20:23 GMT
Dear Roman,
Excellent essay! Please take a look at the long-form version of my essay; you will find the sections where I compare biological complexity to computers very interesting.
Please take a look at my essay, A grand Introduction to Darwinian mechanic:
https://fqxi.org/community/forum/topic/3549
Pavel Vadimovich Poluian wrote on May. 16, 2020 @ 12:55 GMT
We carefully read and discussed everything. There is something to think about, and the scientific perspective is visible. Your ideas are very close to ours! One of us works in the department of philosophy, the other in the department of computer science, so your essay was interesting to both of us. We really liked the use of the principle of "regress to infinity" for an observer in physics and for the checking of proofs in mathematics. This comparison is very heuristic. We liked the fact that you do not arrive at agnosticism; we believe in the possibilities of reason. But we think your approach is ahead of its time: science does not yet even recognize the objectivity of information. Therefore, ideas of this type are perceived as metaphors.
Now we are implementing a startup project to develop a fundamental ontology for integrating various ontologies of subject areas. We are creating a digital platform for this integration. Perhaps we can even establish mutually beneficial cooperation with you.
We hope you find our essay interesting.
Truly yours,
Pavel Poluian and Dmitry Lichargin,
Siberian Federal University.