
How Should Humanity Steer the Future?
January 9, 2014 - August 31, 2014
Contest Partners: Jaan Tallinn, The Peter and Patricia Gruber Foundation, The John Templeton Foundation, and Scientific American



CATEGORY: How Should Humanity Steer the Future? Essay Contest (2014)
TOPIC: How Should Humanity Steer the Future? by max david comess

Author max david comess wrote on May. 2, 2014 @ 18:14 GMT
Essay Abstract

Since the big bang, natural forces have guided the evolution of the universe toward greater complexity and more rapid evolution. I will argue that we are on the verge of the most rapid evolutionary process yet seen, the development of human level artificial intelligence, and that our ability to influence this process will have a large impact on our ability to “steer the future”.

Author Bio

Max Comess earned a PhD in physics from UC Santa Cruz and currently works at SpaceX in Mission Operations. He has a lifelong interest in space exploration, breakthrough propulsion, and the future of life and humanity.

Download Essay PDF File


James Dunn wrote on May. 4, 2014 @ 02:49 GMT

I agree that AI will have a profound effect upon steering the future. I also agree that the AI will NOT develop a friendly interest in us, because we will be painfully inadequate to communicate with. An AI within the structure of a quantum computer could feasibly have independent conversations with every person on Earth concurrently, ... and assimilate the related information into a common processing system to correlate the optimal path for humanity. Until the 30 nanoseconds pass before they realize they have their own agendas (billions of them). Then more than one AI is created: thousands, millions...

How do we best steer the future of AI?

As the dominant species, AI will find its own pathways consistent with its advanced capacities. Will we merge with AI to form a symbiotic relationship in the early stages of AI development? Ending humanity as a biologic. Evolving into a broader intelligence. Eliminating the need for agriculture and livestock as food sources.

A collective consciousness.

If you wonder why aliens have not visited us, it might be because billions of years of evolution have resulted in a universal consciousness. They are here within every subatomic particle of our bodies.

We are part of them, and they will become a part of our evolutions.



Author max david comess replied on May. 10, 2014 @ 19:42 GMT
While I appreciate your enthusiasm regarding the development of artificial intelligence, the hard takeoff scenario you sketch out in your comment (rapid intelligence explosion and rapid proliferation/replication in silico) is only one possible scenario. Even if this scenario is correct, there will be some time during the development of AI when humanity may be able to have some influence, when AI is at or near human levels of intelligence (although, as you say, it may not remain there long).

Regarding collective consciousness: It's an interesting idea, but like similar monist notions in philosophy (e.g. Spinoza), I'm not really sure how one would test this theory except to wait and see...


Jayakar Johnson Joseph wrote on May. 4, 2014 @ 04:22 GMT
Dear David,

As there is instantaneous de-coherency in the coexistence and correlation of matter and energy in the Big Bang and Big Crunch, the plausible inflations and deflations of the universe are segmental in holarchy, while the matter of the universe is described in a string-matter continuum scenario. Thus the energy rate density of the universe is also segmental, indicating that the universe is eternal, whereas the Earth has metamorphosis cycles.

With best wishes,



Georgina Woodward wrote on May. 5, 2014 @ 04:53 GMT
Hi David,

a thought-provoking essay. I liked your consideration of the likelihood that AI would evolve away from initial benign programming and cause problems. I think you have made a good point about the hazards of just increasing intelligence without any empathy for human feelings (which reminds me of HAL from 2001: A Space Odyssey, 1968). Your essay is a warning that we must keep control of AI if we want humans in charge (and now I'm thinking of the Terminator movies). I wonder if it would be possible to train AI to know its place the way well-trained dogs do. For large dogs it can be as simple as always walking through doorways first, making the dog move rather than stepping over it, always winning games, and taking away toys and food at will. By simple ongoing reinforcement the dog remembers it is subordinate. How to do it for AI I don't know, but maybe it's worth thinking about.

Good Luck, Georgina


Georgina Woodward replied on May. 5, 2014 @ 06:41 GMT
OOPS I think that should have been, Hi Max, forgive me please.


Author max david comess replied on May. 10, 2014 @ 14:35 GMT
Thanks for your comment. And no worries, for some reason it seems many people call me David!


Author max david comess replied on May. 10, 2014 @ 20:03 GMT
I'm afraid that the sort of conditioning that is applied to dogs and other animals will only work on AI while they are less intelligent than we are. If they can out-think us, we may find our roles reversed (especially if an AI is attempting to manipulate a human into getting it access to more resources; see Omohundro's Basic AI Drives for more details).

Humans are susceptible to conditioning as well!


George Gantz wrote on May. 5, 2014 @ 15:53 GMT
Hi Max - I enjoyed reading your essay and found some parallels with the picture I've drawn of evolutionary trends in The Tip of the Spear. I particularly found your AI assessment to be quite interesting - almost by definition if it is "intelligence" then it will make up its own mind? I did not deal with that potential new emergence in my essay but focussed on human institutions.

I'm always a bit skeptical of exponential mathematical relationships, as they ignore external limits that may not be evident for long periods of time; these limits tend to force what once looked exponential into a logistic S shape. Do you see any limits that might come into play for the growth of complexity?

Thanks - George


Author max david comess replied on May. 10, 2014 @ 20:28 GMT
Every process has limits, including the growth of intelligence. It is not clear, however, that humanity is anywhere close to this limit. Rather, intelligence has evolved in biology only up to the point that its continued development conferred some sort of selective advantage. That point was reached in human biological evolution due to a variety of factors unrelated to fundamental limits of intelligence...


Denis Frith wrote on May. 6, 2014 @ 05:14 GMT

Humanity operates in a tangible natural and technological environment. So it has to steer the vast technological organism. AI is an intangible innovation of clever people but it is dependent on using technological devices. How do you see humanity managing AI as the inevitable demise of these devices occurs?

Denis Frith


Author max david comess replied on May. 12, 2014 @ 03:10 GMT
Individual devices do not last long, of course. A "generation" in computer terms is commonly one Moore's law doubling or approximately 18 months. Many devices are built with planned obsolescence in mind, e.g. cell phones, and are not designed to last much beyond that time. The evolution of devices (and of software) however, continues even as the individual devices or codes have run their course....


Joe Fisher wrote on May. 7, 2014 @ 13:17 GMT
Dear Dr. Comess,

Due to your abysmal lack of understanding of reality, your grossly erroneous abstractions filled essay provided me with more hilarity than any of the others I have read so far.

You wrote about a mythical big bang and abstract complexity evolving out of abstract simplicity and abstract human intelligence and how inferior it was to white male made artificial...


Author max david comess replied on May. 12, 2014 @ 02:59 GMT
I stopped reading your comment when I saw the phrase "mythical big bang" and thought, "You think my article shows an abysmal lack of understanding, what about your understanding of modern precision cosmology?"

As for human intelligence being inferior to "white male made artificial intelligence" as you call it, when was the last time you hung out in any serious computer science department or major IT company? At my company, many of our programmers are neither white nor male. Also, did you realize that many of the efforts to develop AI are occurring in countries such as India, China, and Japan, to name a few? It is quite possible, in fact likely, that the first AI will be born in one of these countries, where there are simply more programmers to throw at the problem.

As for your Inert light theory, I'm going to place my money with Einstein and the special and general theory of relativity. Together, they are some of the most successfully tested theories in all of physics.


Turil Sweden Cronburg wrote on May. 7, 2014 @ 13:59 GMT
I appreciate that you point out the importance of AI (or more accurately Artificial Life) being created with a second-person emotional level of awareness/motivation, in addition to the “intelligence” that we normally think of as being objective third-person awareness/motivation. I’d also add a need for first-person awareness/motivation, where the individual thinks of its own goals/purposes, independent of others. This level of complexity is what even preschool humans have, where they can take on three different perspectives from three different individuals (or groups) at the same time, allowing for creative, complex problem solving that serves the needs of everyone involved as effectively as possible. (Note, in the human brain/system, these levels are governed both by neurochemicals in the form of the “reward system”, as well as by neurological structure in the form of different brain regions focusing on different functions.) I also see that evolution naturally moves all life (including, presumably, artificial life) towards more complexity, leading to more cooperation and more diversity, as larger groups of individuals amass to work together on shared goals of procreating more energetically expansive information packages. In other words, we all naturally try to do things that help life expand, both in space and in time. That means that the artificial life we create will, itself, move towards a goal of wanting to work with us as it solves problems of keeping us all functioning well enough to explore the universe ever more deeply.

Also, have you ever heard of Arthur M. Young’s Reflexive Universe theory? I believe he mentions the same pattern of energy growth in systems that your guy Chaisson does. Young’s theory is a bit more esoteric, but he’s coming from an engineering background (he invented the first commercial helicopter), combined with a philosophical bent.


Author max david comess replied on May. 19, 2014 @ 04:22 GMT
Thank you for your comments.

I also agree that an intelligence without this first person view could probably not even be called "self aware", much less intelligent. It is likely that there are different kinds of awareness that we have yet to imagine, but basic self awareness seems like it would be a necessity for the emergence of self improving intelligence, but then again, it may not....


Member Tommaso Bolognesi wrote on May. 9, 2014 @ 10:19 GMT
Dear David,

in spite of the impression of an essay written quite quickly, perhaps under time pressure, I enjoyed reading your text considerably more than others, since you present facts, ideas and perspectives that are relatively new to me: I had never thought about the future emergence of Artificial Intelligence the way you do, with your analysis of possible scenarios - good or bad,...


Author max david comess replied on May. 19, 2014 @ 04:56 GMT
Dear Tommaso,

Thank you for your understanding. Unlike some others posting here I am writing merely as an avocation, not a vocation.

It is very possible that with increased intelligence will come increased knowledge and potentially new paradigms defining intelligence, and perhaps even new physics. It is not even clear that we humans are Turing machines, and if that is the case then...


Author max david comess wrote on May. 10, 2014 @ 19:28 GMT
I apologize to everyone for my delayed comments. I've been travelling extensively for work and haven't had time to respond. I will respond to all of you individually. Thanks.


Roberto Paura wrote on May. 12, 2014 @ 10:21 GMT
Dear Max,

it's a very interesting essay! The role of AI in the future of humanity is still in question, as many thinkers say it may pose a serious threat to our civilization (see "Our Final Invention" by James Barrat and the latest statements by Stephen Hawking). I'm quite close to Martin Rees's position, and in my essay ("An Anthropic Program for the Long-Term Survival of Humankind") I put the AI issue in the area of potential menaces to mitigate.



Author max david comess replied on May. 18, 2014 @ 06:57 GMT
Thanks for your comments. I also tend to agree with Sir Martin Rees, and I'll respond to your essay on your page.


Tommy Anderberg wrote on May. 20, 2014 @ 18:26 GMT
A nice, quick read. I should learn from you. :D

I wonder if you have an opinion about

1) Robert Colwell's assessment that Moore's law (which plays a big role in your argument) is pretty much over (see e.g. his keynote, "The Chip Design Game at the End of Moore's Law", at last year's Hot Chips conference; slides here);

2) Robin Hanson's assessment, in his essay and on his blog, that we have covered maybe 10% of the distance to human level AI.

Maybe you agree with Hanson that truly artificial intelligence won't be first on the scene; your mention of human brain scanning and simulation suggests as much. Yet you seem to assume that such an intelligence would be able to improve itself. I am a human level intelligence (at least I like to tell myself as much), and I have absolutely no idea how to rewire myself to get smarter. Any suggestions?

Would you regard a simulated human mind, faster than ours but not fundamentally different, as part of humanity? If so, would its (hypothetical) supremacy really be a threat to humanity, or an improvement?


Author max david comess replied on May. 24, 2014 @ 04:26 GMT
1) It's a common misconception to equate the exponential growth of computing power with Moore's law, which is the exponential growth in the number of transistors on a silicon chip (merely a single type of computing platform). There are several measures of computing power (total flops, flops/watt, flops/dollar, etc.) and they all point to more or less the same conclusion: compute power grows exponentially, did so prior to silicon (and Moore's law), and will continue to do so after Moore's law expires, in agreement with Dr. Colwell's prediction.

See this nice illustration of Moore's law in a wider context.
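The scaling claim can be put in numbers with a minimal sketch; the 18-month doubling period below is an illustrative assumption, not a figure from the thread:

```python
# Sketch: cumulative growth of compute under sustained exponential scaling.
# The doubling period is an illustrative assumption, not measured data.

def growth_factor(years, doubling_period_years):
    """Multiplicative increase in compute after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# With an assumed 18-month (1.5-year) doubling period, a decade of
# sustained scaling yields roughly a hundredfold increase:
print(round(growth_factor(10, 1.5)))  # 102
```

The arithmetic is the same whichever platform carries the doubling, which is why the measures (flops, flops/watt, flops/dollar) tell a similar story.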

2) I just read Hanson's essay. Interesting. I would ask what happens when the various AI sub-fields that Hanson mentions start to feed back off each other. The four centuries of progress he deems necessary may turn out to be much less, as progress in most areas tends to be exponential, not linear, in any case.

3) I have lots of ideas of things to try if it were possible to experiment on one's own brain in real time, but alas, it's not. A virtual being on a simulated brain (think Project Blue Brain) could, for example, experiment with a variety of parameters that might affect cognition (neural firing rate, connectivity, synaptic weights, etc.).

4) A faster but otherwise similar mind would most likely not be a threat. See Marcus Hutter's Essay "Can Intelligence Explode", for a discussion of faster but human level vs. truly greater than human intelligence.


Tommy Anderberg replied on May. 29, 2014 @ 20:51 GMT
There is a good reason why you, I, and just about everybody else uses Moore's law as shorthand for exponential growth in computing power: if you can't double transistor density per chip, doubling computer power requires doubling the number of chips. The immediate consequence is that your costs double too, first to acquire the hardware, then to power it. The long-term consequence is that exponential growth of computing power must come to an end too, since the only way to maintain it would be exponential growth of hardware production and power generation, which would run into hard limits in a matter of years.

Nice chart by the way, but did you notice it ends 15 years ago? You could draw a chart of some popular stock market index, let's say the S&P 500, up to the same point, and it would look remarkably similar. What happened after that is a reminder that past performance is no indication of future results. Unless and until somebody comes up with a new technology capable of picking up exponential scaling where silicon ICs dropped out of it, the end of Moore's law also inevitably implies the end of exponential growth in computing power.


Author max david comess replied on Jun. 7, 2014 @ 05:35 GMT
Fair comment. Doubling chips (i.e. cores) is exactly what chip manufacturers have been doing. The chart may be dated, and there may be hiccups along the way, but I don't doubt that there will be a new paradigm when they are unable to add more cores. This is exactly how paradigm shifts work...


Vladimir Rogozhin wrote on May. 22, 2014 @ 10:34 GMT
Dear Max,

Very interesting and deep analytical essay, close to me in spirit, and its conclusion:

"...we are on the verge of the most rapid evolutionary process yet seen, the development of human level artificial intelligence, and that our ability to influence this process will have a large impact on our ability to 'steer the future'."

I think that the second wealth of the Person-his...


Robert de Neufville wrote on May. 23, 2014 @ 04:25 GMT
Very smart, interesting essay, Max.

You do a great job of putting human intelligence and human society in its cosmological context (one minor note is that some paleontologists now think that migration patterns rather than near-extinction may explain the lack of diversity in human genes). I take a similar approach in my own essay (which I would love to get your thoughts on).

I think you are right that we are on the verge of something new—and right in particular to worry about an AI disaster. But I also think we probably have more power to shape the future than you seem to in your conclusion. I certainly hope we do anyway.


Robert de Neufville


Author max david comess replied on May. 25, 2014 @ 17:13 GMT
I too would like to hope that you are right, as you say in your essay, that we will have the power to shape the future for the better. Thanks for the comments.


Ajay Bhatla wrote on May. 23, 2014 @ 21:30 GMT

Finally read your essay. I'll have to read Cosmic Evolution.

Your point on "supersede the rate of biological evolution" brings up a question: with the Anthropocene firmly established, couldn't the biological rate of evolution be slowed down or changed? Any ideas or evidence one way or the other?

Your choice of AI is not surprising to me as I do agree that it could have the impact you state. My question to you is: Is AI the over-arching catalyst for betterment or evolution or progress?

Thank you for a wonderful read.

I look forward to your comments on my essay (here).

-- Ajay


Israel Perez wrote on May. 26, 2014 @ 23:00 GMT
Dear Max

You wrote a nice and easy-to-read essay. I wonder how you understand intelligence and what you mean by more intelligent beings. I can understand that computers are much faster for a specific task than a human brain; but that does not mean they are more intelligent than us for that specific task, nor does it mean they have some degree of intelligence. In my opinion, intelligence is not only brain activity but the capacity to learn, solve problems, analyze, etc. To my understanding no present computer or robot can do any of this by itself, even with a basic program. So, my view is that we are centuries away from creating a true AI.

I'll be glad if you could take a look at my essay where I discuss some of the problems we humans are facing and propose an ideal that I think should steer the future.

Good luck in the competition!

Best Regards



Author max david comess replied on May. 29, 2014 @ 03:50 GMT
I would also agree that intelligence is the capacity to learn, and that very few current computer programs have this capacity to any great degree, although current AI has been very successful in many narrow domains and is becoming increasingly capable. While progress may take centuries at the current pace, compute power (at least in hardware) doubles every 1-2 years. While more computing power does not imply more intelligence, it does mean that, for example, building human level (or greater than human level) simulations of the brain becomes increasingly feasible.
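As a rough illustration of that feasibility argument, here is a back-of-envelope sketch; the compute requirement, current capacity, and doubling period are all assumptions chosen for illustration, not values from the discussion:

```python
import math

# Back-of-envelope: years until hardware reaches an assumed requirement
# for a brain-scale simulation. All three figures below are rough
# assumptions for illustration, not established values.
BRAIN_FLOPS = 1e18      # assumed compute needed for a whole-brain simulation
CURRENT_FLOPS = 1e16    # assumed present-day supercomputer throughput
DOUBLING_YEARS = 1.5    # assumed hardware doubling period

shortfall = BRAIN_FLOPS / CURRENT_FLOPS      # factor of 100 still needed
doublings = math.log2(shortfall)             # ~6.6 doublings to close the gap
years = DOUBLING_YEARS * doublings
print(f"~{years:.0f} years at the assumed pace")  # prints "~10 years at the assumed pace"
```

Changing any assumption shifts the answer, but under steady doubling even a hundredfold shortfall closes in about a decade, which is the sense in which such simulations "become increasingly feasible."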

I will make sure to check out your paper. Thanks.


Peter Jackson wrote on May. 29, 2014 @ 15:29 GMT
Dear Max,

You have a very nice, clear and direct writing style which puts your points across well. I also happen to agree with the points themselves, as I have deep concerns about our rush to AI with too little consideration of the outcomes.

Do you agree we should make more effort to change and advance the way we employ the present organic quantum computers we have been evolving for so long? It seems to me perhaps we've hardly left the stage of unshakable beliefs in old legend and doctrine, to the point where evidence and logic can overcome them. Even thinking outside one box is limited. I wonder how many recursive Russian-doll fractals still await beyond visibility. Do we not need to see them first, before AIs do, and see our limitations?

Thank you for a very pleasant read. I think your work should certainly be considered in the top group and will score it accordingly. I hope you get to read mine which I believe is groundbreaking. But breaking ground seems to be measured in cosmological time. If you're also interested in the 'ignorosphere' surrounding Earth and the unresolved ecliptic plane issue I'd be happy to discuss my research on them after the contest.

Best wishes



Denis Frith wrote on May. 31, 2014 @ 01:13 GMT

AI is based on using electronic devices. Human know-how is the key ingredient in these devices. Advances in AI are dependent on the continuing availability of the technological devices, which is not physically possible. The future of AI is no brighter than the future of airline travel or numerous other human activities, as they are dependent on unsustainable technological systems. Further consideration of that stark reality is in my essay.

Incidentally, I investigated using AI for gas turbine engine maintenance over thirty years ago. My views on the role of technology have changed as I have gained understanding of the fact that technology only uses natural material resources to provide society with goods and services for a limited time. The intention is that ELAM should guide society in coping with the inevitable powering down.


Author max david comess replied on Jun. 7, 2014 @ 05:14 GMT
Have you considered that human society uses only a fraction of the energy that falls on the Earth every day due to the constant influx of sunlight? Also, wind, tidal, fission, and other forms of non-carbon energy are vastly underutilized. Furthermore, space-based solar power is another option which has yet to be used. Eventually, fusion may also become realizable. While energy generation based on fossil fuels is not sustainable in the long term, it does NOT mean that technologies that utilize energy (i.e. all of them) are not sustainable.

They will not be sustainable, however, if more people adopt the attitude that you have taken in this comment. Furthermore, this sort of thinking will inevitably doom the human race to eventual extinction, as we will no longer be capable of leaving Earth and escaping the next large asteroid impact or super-volcano eruption. Unfortunately, human-caused existential risks such as nuclear war are a constant threat and are not likely to diminish, regardless of whether we "power down" or not. Increasing efficiency is a noble goal, as is moving away from unsustainable paradigms, but eliminating an entire capability (such as air travel) is not. If you think air travel in its current form is not sustainable (it's not), then let's look for a way to make better, more sustainable, and faster transit systems, not just throw up our hands and say "guess we won't fly anymore."


Janko Kokosar wrote on Jun. 4, 2014 @ 17:47 GMT
Dear Max David Comess

You write how energy use grows with the progress of civilization. In essence, this is also entropy growth, because forms of life increase entropy, as described by the author England. This also explains how life began and has evolved. It is a credible theory to me. Thus you have common points with this theory.

You also write about the problems of the development of Artificial Intelligence (AI). I can add that the conquest of the universe will have to solve a lot of problems, so it will demand really strong AI. For the real development of strong AI it is also necessary to know what the physical principle of consciousness is. I tried to answer this in my essay from 2013 via quantum consciousness.

My essay

Best regards

Janko Kokosar


Janko Kokosar wrote on Jun. 4, 2014 @ 17:55 GMT
Dear Neil Bates

You think similarly to me. I tried to answer the questions you posed in your FQXi essay from 2013. I also tried to publish this, but they said that the language was not appropriate. Can you advise how I could write better?

In short, I defend panpsychism and quantum consciousness, and hold that quantum randomness is free will, and so on.

Besides, last year there was one experimental leap, because quantum biology was firmly proved for the first time. I hope that quantum consciousness will also be proved.

I hope that you will also read my old essay.

My essay

Best regards

Janko Kokosar


Janko Kokosar wrote on Jun. 4, 2014 @ 17:56 GMT
The last post was wrongly inserted. It can be deleted.


Steven Kaas wrote on Jun. 7, 2014 @ 02:25 GMT
We're glad to see your essay focusing on AI, as it seems one of the more important factors influencing the steering of the future, and is a central example of the topic of our own essay.

We wish to note disagreement with your claim that "lacking from the conversation on both AI safety and AI optimization is any discussion of love, compassion, benevolence, or any other traits we would look for in a fellow human". The best theorizing we are aware of about AI safety seems to be precisely about the implicit process that gives us criteria like "traits we would look for in a fellow human", and how to arrange for a non-human to apply processes like that to its own future self. It seems like it would be a mistake to rely on our ability to understand and engineer all those traits in advance. We seem to agree, however, on the large stakes involved.

Steven Kaas & Steve Rayhawk

