CATEGORY: How Should Humanity Steer the Future? Essay Contest (2014)
TOPIC: Crucial Phenomena by Daniel Dewey

Author Daniel Dewey wrote on Apr. 23, 2014 @ 13:01 GMT
Essay Abstract

I make the case that, as a public good, societies and their governments should support and invest in scientific research on crucial phenomena: empirical features of the world that figure strongly in how humanity's choices influence the size of its future. In particular, I give reasons for thinking that (1) humanity's vulnerability or robustness to accidents arising from biological engineering, and (2) the future rates of improvement of artificial intelligence and its susceptibility to misuse, are phenomena that call strongly for our systematic attention.

Author Bio

Daniel Dewey is a Research Fellow at the Oxford Martin Programme on the Impacts of Future Technology and the Future of Humanity Institute. His research centres on high-impact, understudied features of the long-term future of artificial intelligence. Topics of particular interest include intelligence explosion, machine superintelligence, and AI ethics. Daniel was previously a software engineer at Google, Intel Labs Pittsburgh, and Carnegie Mellon University.

Download Essay PDF File




Member Rick Searle wrote on Apr. 29, 2014 @ 02:22 GMT
Nice essay Daniel.

I am currently working on an anthology on Machine Ethics for IGI press due out next year. Would you have any interest in submitting a chapter?

Best of luck in the contest,

Rick Searle


Author Daniel Dewey replied on Apr. 29, 2014 @ 14:35 GMT
Hi Rick. Glad you liked the essay, thanks!

I might be interested in submitting to a machine ethics anthology, yes. Could you email me with details? You can find my address at my homepage.




Joe Fisher wrote on Apr. 29, 2014 @ 17:05 GMT
Dear Mr. Dewey,

I regret that I could not understand most of your essay due possibly to the fact that I have a poor grasp of abstractions. On the one hand, sober scientists have assured me that it has taken thousands of years for the human brain to evolve. You now claim to be able to build an artificial brain that is far superior to any human brain. The only thing I notice about any artificial brain is that it never seems to be attached to a black body, or to ever communicate in Spanish. Billions of dollars have been spent by the predominantly white male government of the United States so that a few predominantly white males can explore space, listen to messages from outer-space, or build brand new never been used artificial brains. Nine billion dollars have been cut from the food stamp program that aids the poor.


Author Daniel Dewey replied on Apr. 30, 2014 @ 15:58 GMT
Dear Mr. Fisher,

Thanks for reading anyway; I'll have to see what I can do to make my writing maximally accessible.

My essay focuses on reasons for thinking that the size of humanity's future is a very important consideration, but that doesn't mean that I oppose efforts to address present-day wrongs like poverty and animal suffering. I think we should address both.




Georgina Woodward wrote on May. 11, 2014 @ 04:59 GMT
An interesting classification of types of risk. You have chosen two interesting technologies to pick out in particular.

I think it would have been good if you had talked about actual accidents that have occurred with GM crops. There is a lot of information available on the web.

Contamination of food with crops engineered to produce medicines, contamination of organic crops with GM genes, replacement of traditional local crops by licensed, vulnerable, terminator-gene-carrying monocultures; the latter associated with farmer suicides in India, where failure of such crops leaves the farmer with nothing, not even seed or money to buy seed for the following year. There is the risk of transfer of terminator genes to other plants. There is the problem of greater usage of pesticides, particularly glyphosate, which has been linked to liver damage and other health problems in some research. Also the risk to other animals such as aquatic organisms. See freecymru.org, "US Regulation of GM crops: USDA slammed in Congress hearing".

The risks are certainly not only from novel pathogens, although the escape of such organisms from secure facilities is shocking. That was the cause of the foot and mouth outbreak in the UK in 2007 (see "Foot and mouth outbreak caused by petty government dispute over leaky drain").

I think that would have made the threat seem far more real and urgent. I like that you have compared the two kinds of risk and identified AI as less urgent but still with very high potential risks. I don't yet know of problems caused by AI, but having to navigate an automated telephone system can be frustrating. It cam only get worse if the AI is able to argue as well: )


Georgina Woodward replied on May. 11, 2014 @ 07:15 GMT
Hi Daniel,

I'm sorry I accidentally submitted my post before I was ready. Then I lost my internet connection.

Firstly I meant to say Hi Daniel.

"The UK foot and mouth outbreak 2007 was from a "secure" laboratory. The FMD virus strain by which the outbreak was caused was found to be most similar to strains used in international diagnostic laboratories and in vaccine production",..." this strain is a 01 BFS67 - like virus, isolated in the 1967 Foot and Mouth Disease outbreak in Great Britain".From European community, food, animal diseases control

I think this evidence, regarding GM crops and the escape of organisms from secure facilities, indicates that the dangers come not from a lack of research but from inaction in the face of known risks.

I meant to say It can only get worse if the AI is able to argue as well: )

Good luck. Georgina


Author Daniel Dewey replied on May. 11, 2014 @ 17:40 GMT
Hi Georgina,

Thanks for pointing out the GM crops case, and for giving those examples; it's certainly an area I'm interested in, and building more detailed cases with more examples makes sense.

Best,

Daniel




Ajay Bhatla wrote on May. 13, 2014 @ 05:11 GMT
Daniel,

I didn't realize till I read your essay that the comment you left on my essay on Biology and AI was the focus of your essay.

I agree with you that "one of the most important tasks facing us today is the scientific investigation of certain Crucial Phenomena." You didn't, however, say why biological stability and AI are crucial, but just selected them to exemplify how they are crucial. Did I miss something? For me, these are just two out of a long laundry list.

I wish you had elaborated on the game between Humanity and Nature a lot more!

Also, you talk about funding support and "have the particular skills and resources" as the only important issues; does researcher interest matter, or are funding, resources, and skills the only criteria you deem important for scientific research?

- Ajay


Author Daniel Dewey replied on May. 13, 2014 @ 11:32 GMT
Hi Ajay,

Yes, it seems that our essay topics are relevant to one another!

I did, on pages 8 and 9, explain why I think AI and biological instability are crucial--- they both have the potential to render us extinct in the relatively near term. I expect that there are other crucial phenomena, but I don't have a "long laundry list"--- do you have any examples ready to hand?

I'm glad you liked the "game between Humanity and Nature", maybe I'll expand on that in a later piece of writing :)

While researcher interest is clearly important, I do think that a certain amount of societal resources should be devoted to phenomena that are crucial for the long-term future of humanity *whether or not* anyone finds them particularly interesting. This isn't to say that everyone has to work on these things, just that it would be good if enough people work on them for us to reap the considerable benefits. That said, I think that skills and resources are the relevant bottlenecks--- I do regularly encounter many people who are interested in, e.g., biological instability or AI safety, and who would pursue them if only they had the skills and resources!

Thanks for your thoughtful comment,

Best,

Daniel




James Lee Hoover wrote on May. 15, 2014 @ 18:05 GMT
Daniel,

Quite important ideas are simply proposed. Many of us say the same, but not as purposefully and emphatically. I like "aiming for a large future" and pursuing "crucial phenomena." Your anecdote at the beginning represents choices we make or don't make during our careers, settling for mundane but insignificant studies rather than pushing for "crucial phenomena."

My essay is similar in my prospect of looking beyond the mundane and within the microcosm of the universe, our brain.

Good job,

Jim


Author Daniel Dewey replied on May. 15, 2014 @ 18:07 GMT
Thanks Jim! I'll go check out your essay.

Best,

Daniel




Robert de Neufville wrote on May. 16, 2014 @ 03:34 GMT
Excellent essay, Daniel. In my opinion, one of the best. You frame the issues extremely well. I'm almost in complete agreement with you, although I would add that we need to improve not just our knowledge, but also what we collectively do with our knowledge; that we need better institutions as well as better science.

Thanks for your comment on my essay, by the way. I responded on my own page, but the short version is that I think I did make a mistake. If I could go back I would change or get rid of that sentence. I really appreciate your pointing it out to me.

Good luck in the contest—your essay deserves to do well!

Best,

Robert


Author Daniel Dewey replied on May. 20, 2014 @ 12:27 GMT
Thanks, Robert! Best of luck to you as well, I enjoyed your essay.

I agree that we should improve our institutions, especially in linking knowledge to action. I think some folks around FHI are interested in institution design; I should ask them what they've been thinking about.

Best,

Daniel




Jens C. Niemeyer wrote on May. 16, 2014 @ 14:28 GMT
Daniel,

Great job, I think your article perfectly matches the theme of this competition! Although science (like other cultural achievements) mostly progresses through small, incremental steps, it is crucial to keep an eye on the long-term consequences and risks. You very clearly identify two potentially critical fields of research.

Would you agree that the notion of biological instability must be extended to encompass our biological ecosystem? One can argue that major disruption of our natural environment caused by artificial agents would be just as harmful as one that only affects humans themselves.

Good luck!

Jens


Author Daniel Dewey replied on May. 20, 2014 @ 12:32 GMT
Thanks, Jens! I'm glad you liked it.

I would definitely agree that stability of the ecosystem as a whole in the face of biological engineering / synthetic biology should be included in the "biological instability" category; it might turn out to be the case that evolutionarily difficult steps are easy for biological engineers, and that the biosphere won't have the appropriate defense mechanisms, or that the equilibrium that's eventually reached will be unsuitable to human life.

Good luck to you as well!

Cheers,

Daniel




Peter Jackson wrote on May. 16, 2014 @ 16:18 GMT
Dan,

Nice essay identifying the importance of scientific advancement and the prioritisation of attention and funding. I consider it very well written, argued and organised, and it should be better placed. I also agree with your two identified areas, but believe there's also a strong case for targeting a great leap in understanding of nature by unification of classical and quantum physics and demystification of QM. You may feel that's already being done, but my essay shows current views have the opposite effect, keeping us in a deepening 'rut'. I show that QM can be classically derived, comprehensibly.

My previous successful essays showed how the same mechanism allows SR to converge. But of course nobody is looking, and journals won't risk suggesting such advancement! I agree 'big' and don't think any other single success could give such broad advancement. I subtly suggest that we need to improve our way of thinking to enable the right focus. I look forward to any views on mine.

Very well done for yours, and best of luck in the results.

Best wishes

Peter


Author Daniel Dewey replied on May. 20, 2014 @ 12:56 GMT
Thanks, Peter!

Maybe you can help me understand--- why would a leap forward in quantum mechanics make a significant difference to humanity's future? Is it more, less, or equally important relative to other major scientific questions, in your opinion?

Best,

Daniel



Peter Jackson replied on May. 20, 2014 @ 14:39 GMT
Daniel,

It's not 'just QM' at all, it's about unification of all of physics. A current main barrier is QM, but the same interaction mechanism also allows SR to converge. (scattering is at c in the electron rest frame). ALL main physics questions are then answered at once. Most eminent physicists seem to agree unification is the key. Do you not? Perhaps you'd need to study my last 3 essays to understand the full picture (all top 7 community scorers). I've also answered in detail in my blog including in the post below.

Thanks. I'm a practical guy and recognise that all significant advancement is led by advances in science and technology. See my post to John above. I consider most essays here either state the obvious, give some ideal, or discuss a specialisation. Few actually point and steer a negotiable path with real chance of big progress. The 'quantum leap' I cite does. I'm disappointed that didn't come across to all, but then current confusion means all have different views of the problem.

Have you noticed the propensity for unintended and even 'reverse' outcomes? That's because people take the obvious view and don't think through cause and effect. As an 'enabler' that's my job. I see most wandering around lost with no tangible way of making real progress or understanding of where to start. Clearly no one thing can improve our understanding better and more widely than unification of physics. I thought your essay showed you understood the importance of identifying and focussing on the right things. Was I wrong?

Best wishes

Peter


Author Daniel Dewey replied on May. 26, 2014 @ 14:07 GMT
I've replied to your later post, just to keep things tidy.

Best,

Daniel




Edwin Eugene Klingman wrote on May. 18, 2014 @ 22:56 GMT
Dear Daniel Dewey,

I very much like your idea of "crucial phenomena". In particular you choose bio-engineering hazards and misuse of AI. I agree with the first, and, while I do not foresee AI becoming conscious, I see its use by Google or by the NSA as potentially destroying privacy, a very negative outcome. And I can envision other serious misuse that does not require super intelligence.

I would suggest that another crucial phenomenon is the growth of government based on AI and communication techniques. It may be a less forgiving disaster than some physical disasters. I analyze this problem in my essay, which I hope you will read and comment on.

I enjoyed your Hamming anecdotes. I talked with him a few times in the 80s and found him full of interesting opinions.

Best regards,

Edwin Eugene Klingman


Author Daniel Dewey replied on May. 23, 2014 @ 14:07 GMT
Thanks for writing, Eugene! I've checked out your essay and commented there.

I do think that societal phenomena could be crucial. It will be a challenge, though, to create a predictive theory reliable enough to make good predictions about future governments (or at least it seems so to me). I'd love to see more people taking up this challenge.

Thanks for your comments,

Daniel



Edwin Eugene Klingman replied on May. 28, 2014 @ 21:52 GMT
Dear Daniel,

Thanks for reading and for your questions on my thread.

You note that humans have free will and can pursue common goals without economic incentives. That is surely true, and is a counterargument against a too narrow interpretation of my approach.

I suggest in the essay that there is still "motion" in the case of equality, but the movement resembles "diffusion" more than directed activity. I do think that this aspect of reality (the existence of gradients) intrudes even into human affairs. Very little seems to get accomplished without resources being applied, despite that we can, many of us, agree to pursue a common goal.

I do hope to continue work on the idea. The Science magazine I received in today's mail has a front cover dedicated to "the Science of Inequality". The special section is quite lengthy and I haven't read it yet, but it seems to indicate that these ideas are worth developing.

Thanks again for your response, and congratulations on your current very high ranking.

Best regards,

Edwin Eugene Klingman


Author Daniel Dewey replied on May. 29, 2014 @ 13:17 GMT
Hi Edwin,

First, my apologies for mixing up your first and second names!

Second, thanks for your response. I hope your continuing work goes well; if physical laws were found to be very predictive of societies in certain circumstances, that would be very useful.

Best,

Daniel




Aaron M. Feeney wrote on May. 25, 2014 @ 02:58 GMT
Hi Daniel,

Thanks for reading my paper and commenting on my page a while ago. I found your article to be exceptional, and I'm glad so many others here agree. Let's all think and work hard so our descendants will have a nice Large future!

I am about to rate your essay and I will rate it highly. All the best!

Warmly,

Aaron


Author Daniel Dewey replied on May. 26, 2014 @ 13:11 GMT
Hi Aaron,

I was glad to! Thanks for taking a look at mine as well. It's been enjoyable getting people's reactions and different viewpoints.

My best,

Daniel




Don Limuti wrote on May. 25, 2014 @ 06:36 GMT
Hi Daniel,

Important essay on steering away from danger. I liked Georgina's input on genetically modified plants and their danger.

Kurzweil seems to see AI technology as approaching a singularity. I have my doubts. Groups of humans using AI as "augmentation" still beat any computer without human augmentation at chess. But who knows what the future brings.

Nice work,

Don Limuti


Author Daniel Dewey replied on May. 26, 2014 @ 13:13 GMT
Hi Don,

Thanks for your comments. I'm glad you raise the chess example--- I think it will be important to see whether human-computer teams continue to dominate in chess, whether they are equally dominant in other tasks, and whether a theoretical basis can be found for explaining that success. Thanks for mentioning it.

I'll take a look at your essay!

Best,

Daniel




Anonymous wrote on May. 25, 2014 @ 07:16 GMT
Daniel,

Clearly your proposition is correct, but without identifying the key areas which will enable consequential advancements across the board I confess I struggle to see its uniqueness or value in giving a direction to steer. Even identifying our actual critical failures, wrong directions or the dangers facing us would be a step in that direction.

I think you're correct in that there are always fundamental advancements which would save vast resources compared with less widely effective ones, but the skill is in identifying them. For instance, a few posts above you effectively query the importance of the unification of the classical and quantum descriptions and understandings of the universe: bringing together the 'two great pillars' of physics that remain entirely incompatible due to our ignorance.

I see you haven't answered the question asked there. Yet it seems clear that closing this massive and fundamental divide, described as the holy grail of physics, would have the widest of effects; you seem to see it as equal to all other areas, surely contradicting your approach?

My own subject, eugenics, is slightly different in that it can represent as much of a danger as advancement if not reined in, yet with all such areas a fundamentally better understanding of how nature works would help avoid the most serious mistakes. Another fundamental is the way we employ our brains, badly needing far better teaching methods as eugenics can't help.

I'm really asking if the value is not in identifying the area where the greatest fundamental 'leaps' are possible. There does seem to be a lot of 'stating the obvious' in the essays without fulfilling the practical specifics of the scoring criteria. Do you not think your view falls into that category? I needed the commitment of a short list of suggested focuses at the end.

But good writing, organisation and presentation of course.

Judith


Author Daniel Dewey replied on May. 26, 2014 @ 13:34 GMT
Hi Judith,

It's clear that you found something unsatisfactory about the essay, but I am having trouble understanding exactly what it is. I'd appreciate your help in figuring it out.

I have tried to do three things with my essay. In order from most abstract to most concrete: first, I point out that Bostrom and Beckstead's views imply that we should steer the future primarily by trying to achieve Large and avoid Small futures; second, that this is a reason that societies and their governments should support and invest in scientific research on crucial phenomena; and third, that extinction risks from biological engineering and AI are concrete crucial phenomena that ought to be invested in. This does seem to me to "identify key areas" and "give a direction to steer". What element seemed missing, to you? If you wanted proposed solutions, I'm afraid I don't have good ones; it seems to me that we know little enough about the problems that more study is needed before solutions can be found. I have more ideas about what topics can be studied in AI risk here, in case you're interested.

You say "I needed the commitment of a short list of suggested focuses at the end". I had intended the bio and AI risks to be that short list of suggested focuses. I can't really see the list getting shorter; did you want my recommendations to be more specific?

Thanks for pointing me back at Peter's question; I see he's posted again, and I'll be going back and trying to explain myself more clearly to him; hopefully you'll find my reply to him useful.

Thanks for commenting, and I'll go check your essay out; your topic sounds very interesting!

Best,

Daniel



Judy Nabb replied on May. 29, 2014 @ 08:08 GMT
Daniel,

I suppose I find your approach rather two dimensional, like a slice through a pyramid. Yes, you've picked out the odd current 'hot topic' but seemingly as much from familiarity as from any fundamental analysis of consequential effects on other areas.

I see subjects as all connected but entirely 'layered' in a hierarchy. At the head of the pyramid are the fundamentals which inform everything, so they should have far higher priority. In the middle layers the subjects are largely insulated from each other. We use disconnected science - as a few authors here also point out - so there's too little cross-pollination.

I'd have preferred to see you identify a methodology for assessment of where the most valuable long term returns apply. As Peter says, these are not always immediately apparent. Peter correctly identifies the peak of the pyramid, connecting to everything but you seem to treat the whole structure as 'flat' and cellular. Surely that's no improvement on what we do now.

This is all in a way connected to my proposals that we need greatly improved thinking methods, going to a deeper level in assessing consequences. I feel we have great unrealised potential in our own brains, and focussing too much on AI is likely to distract and may even be dangerous.

I've tracked you down from the anonymous 'Daniel' post on my blog. Thanks for your comment but such research is presently impractical due to paucity of required data.

Judy


Author Daniel Dewey replied on May. 29, 2014 @ 13:19 GMT
Hi Judy,

Thanks for your response; I think I understand your feedback better now.

I'm glad you figured out which Daniel the comment was from. I must not have been logged in!

Best,

Daniel




Anonymous wrote on May. 25, 2014 @ 12:23 GMT
It is probably the case that what is called a crucial phenomenon is determined to be so by politics. The orientation of scientific and technological progress has a fair amount to do with policy.

I make an assessment of these ideas about hyper-advanced intelligent life and the Kardashev scale. I think it is unlikely that any IGUS (information gathering and utilizing system) can achieve these levels. For this reason I think our universe is a natural system and not something generated as a “matrix” on an enormous computer system.

http://www.fqxi.org/community/forum/topic/2010

Cheers LC


Author Daniel Dewey replied on May. 26, 2014 @ 13:35 GMT
Thanks for the comment, Lawrence! I'll check out your essay.

Best,

Daniel




Thomas Howard Ray wrote on May. 25, 2014 @ 12:59 GMT
A thoughtful and nicely-written essay, Daniel. In contrast to those who think you didn't do enough to identify crucial phenomena, the message I take away is the importance of theoretical guidance as primary to crucial choices. In the immortal words of Yogi Berra, "If you don't know where you're going, you might end up somewhere else."

To break the grip of pragmatic and whimsical politics over scientific policy, my own preferred framework is a robust and redundant communications and supply network of laterally-linked resources.

High marks from me, and all best to you in the competition.

Tom


Author Daniel Dewey replied on May. 26, 2014 @ 13:38 GMT
Hi Tom,

Thanks! I agree, working with folks like Bostrom and Beckstead has given me a healthy respect for using theory to guide us towards really high-stakes issues.

Thanks for the link to your essay; I'm quite interested in science policy and governance. I look forward to reading it.

Best of luck,

Daniel




Anonymous wrote on May. 26, 2014 @ 07:27 GMT
Dear Daniel,

Your Hamming window seems to have not much overlap with what I selected as my aim: peace. While I appreciate your courage in dealing with future AI and its consequences, I am an old engineer who hesitates to measure the result of discoveries, inventions, and other contributions to progress in terms of "the size of humanity's future".

I agree with you that Peter Jackson's claims are perhaps far-fetched, if they are correct at all.

I am trying to understand your separation between crucial (= important natural if I understood you correctly) phenomena and artificial i.e. man-made facts (e.g. birth control). Does this separation matter much?

You are also using terms like "biological instability" or "robustness" in a possibly mistakable manner. My command of English is shaky. For that reason, I would like you to explain to me how you meant "humility" in your last sentence:

"“So much the worse for our collective humility” seems, to me, the only acceptable response." Did you quote something in the first part of this sentence? You gave no reference. What you quoted from Hamming is easily understandable to me.

Best,

Eckard


Author Daniel Dewey replied on May. 26, 2014 @ 13:50 GMT
Hi Eckard,

Hamming window, nice :)

As to your questions: by crucial phenomenon, I mean an empirical regularity or relationship that holds between sets of real-world conditions and that is especially important in determining how our choices affect the size of humanity's long-term future. Crucial phenomena could be properties of natural systems, like cells or black holes, or they could be properties of man-made systems, like the LHC or computers with particular programs on them. Does that help?

I do think that biological "instability" or "robustness" might not be the ideal phrases, and I'll be on the lookout for better ones.

The closing quote was questionable to a friend of mine who proofread the essay, so you're not alone there :) "Humility" means "having a modest or low view of one's own importance", thinking that we can't do much of significance. The quote isn't from anything; I put it in quotation marks to figuratively indicate that when we encounter the conflict between our modesty and our duty to humanity's future, humanity ought to "respond" by denying its humility and embracing its duty. Thanks for the feedback, that'll need some work for a future draft.

I look forward to reading your essay!

Best,

Daniel



Anonymous replied on May. 27, 2014 @ 16:06 GMT
Hi Daniel,

Thank you for the explanations. Peter Jackson has perhaps anything but a modest view of his own importance. I don't deny it; I have to humbly admit that I am not in a position to understand and embrace what he claims.

You wrote: "phenomena could be properties of natural systems". A phenomenon is something that is observed to happen or exist. The properties of a substance or an object are the ways in which it behaves in particular conditions.

Still trying to understand what you meant by "the size of humanity's future", I think you meant the desired property of the future being a bright, i.e. great, one. I know that only in German does great mean nearly the same as big. A big woman is a fat one.

I hope my current essay does not contain too many such embarrassing mistakes. Please don't hesitate to ask me if something seems strange.

Your topic of susceptibility to misuse is the same one that motivated Alfred Nobel.

Best,

Eckard


Eckard Blumschein replied on May. 28, 2014 @ 18:42 GMT
Daniel,

You mistook my essay, perhaps without even reading it carefully. Since you seem to speak for a "Future of Humanity Institute" in Oxford, and since at least the wording of your essay did not meet my quality standards while I consider Oxford's colleges still renowned, I tried to learn a bit about Beckstead and Bostrom, who seem to be rather young fellows, and I searched for the term "large future", which is still strange to me, with the result that Yahoo only returned links to "Big future", with one exception: "Large Future - Image Results", a glittering perspective that makes it understandable to me why you mistook my essay.

As an engineer, I see large and big as reasonable only in connection with something that a size refers to. Worse, I see the future as something to which one cannot even ascribe a size. When I was forced to rate your essay, this logical flaw caused me to rate it a one, although your command of English is definitely better than mine. Maybe I mistook you. Please correct me if you can.

Eckard



Anonymous wrote on May. 26, 2014 @ 12:40 GMT
Daniel,

I appreciate that you didn't call my hypothesis "far fetched" as Eckard suggests, but asked: "why would such a leap forward in quantum mechanics make a significant difference to humanity's future? Is it more, less, or equally important relative to other major scientific questions, in your opinion?"

I answered, noting that the unification of classical (relativity) and quantum...


Author Daniel Dewey replied on May. 26, 2014 @ 14:06 GMT
Hi Peter,

If I understand right, you're saying that your proposed unification of quantum mechanics and relativity will also advance understanding in ecology? That's pretty unintuitive to me. Would you like to explain more?

In response to your previous post: "Most eminent physicists seem to agree unification is the key. Do you not?" I assume that they think it's key to the mission of physics, that is, to a mathematical understanding of the fundamental laws that govern the universe. I was just asking whether you had a concrete idea of how that affects humanity's future, and how you'd rank it against other kinds of science we could do if our priority was to steer humanity's future. For example, given the choice between accelerating progress in theoretical physics and accelerating progress in epidemiology, I would choose epidemiology, on the grounds that pandemics are becoming an increasingly large risk, whereas theoretical physics seems to have little to no urgency. Given the choice, how would you prioritize theoretical physics like the kind you propose relative to the other investments available?

Side note: I would never advocate cutting off one field of inquiry entirely in favour of another (except in the most dire of emergencies), so I hope I'm not coming across as disliking physics. I love physics, and it's a very deep, beautiful, and significant field; however, that doesn't mean that I think it's particularly relevant to how humanity should steer the future.

"I thought your essay showed you understood the importance of identifying and focussing on the right things. Was I wrong?" Well, I hope not, but I have been known to make mistakes! ;)

Best,

Daniel



Anonymous replied on May. 28, 2014 @ 16:36 GMT
Daniel,

Tricky without listing them, but I'll start. I'm pointing out that intuition is commonly wrong, as initially assumed 'effects' are invariably not the actual effects. We fail to 'think through' consequences carefully enough. So a few at random from the interminable list, all interrelated and all influencing others;

1. Logic. Famously all logical systems are 'ultimately beset...



Laurence Hitterdale wrote on May. 27, 2014 @ 02:12 GMT
Hi Daniel,

Thank you for your comments on my essay. I appreciate also this opportunity to read and think about what you have to say. I think you have succeeded in identifying crucial phenomena, and your approach seems sensible and insightful. I also like the fact that you connect your proposals to significant recent work. Where my outlook might differ from yours is that I would judge the next few decades to be a time of serious existential risk (in Bostrom’s sense). It might be hard enough just to avoid the dangers, so maybe we can’t be guided by much more than Bostrom’s maxipok. Aim for Large might be too ambitious for the rest of this century. In other words, if disaster is avoided, then there will be time to work on maximizing the probability of a large future. At present, though, steering past the dangers will take the resources available. However that may be, your long-range vision can motivate people to face the tasks immediately before us.

I have looked at your Web site. I intend to keep in touch with your future research and writing.

Laurence Hitterdale


Author Daniel Dewey replied on May. 29, 2014 @ 13:22 GMT
Hi Laurence,

Thanks for reading, and your comments! I agree that existential risk should be a top priority. I'm honestly not sure how existentially risky the next few decades are relative to later times this century or next, but I'd welcome more information about those facts.

Best,

Daniel




Member Tommaso Bolognesi wrote on May. 27, 2014 @ 08:30 GMT
Dear Daniel,

I read your essay with interest, and I agree with the essence of your proposal. In particular, I fully subscribe to the idea that producing and disseminating technical, publicly understandable knowledge of critical phenomena is . . . critical.

I only have a minor remark about your style of presentation.

In Section 2 the exposition is kept to a high level...


Author Daniel Dewey replied on May. 29, 2014 @ 14:37 GMT
Hi Tommaso,

Thanks for your feedback. It does seem that many people would have been helped by more concrete examples, whether in crucial phenomena, in ideas like breadth and "size" of the future, or in assertions like the one about cosmology.

I'm glad you liked the ending :)

Best of luck to you as well!

Thanks,

Daniel




Jonathan J. Dickau wrote on May. 29, 2014 @ 17:18 GMT
Hello Daniel,

I enjoyed your essay, and I agree with its central thesis to the point of thinking it is essential that we do deal with the existential risks that face humanity, but some of your intermediate points fall apart for me. Premise 2 on page 2 is almost too easy to disprove or discredit, and appears to be of no value, while abandoning that premise reveals a host of phenomena to be...


Author Daniel Dewey replied on May. 29, 2014 @ 17:47 GMT
Hi Jonathan,

Thanks! I'm glad you enjoyed it.

Re: your first point: I think I can clear this up. As your example points out, the extrinsic or instrumental value of things is very time sensitive; this is quite right. What I meant was that *intrinsic* value is time-insensitive. For example, if you think that suffering is of intrinsic disvalue, then it doesn't make much sense to think that that intrinsic value is more or less depending on what day, year, or millennium that suffering takes place in. That's all I meant to say by premise 2.

I'm glad we're in agreement about existential risk from AI (though I don't think "self-awareness" is relevant; it seems to me that un-"self-aware" AI could probably have all of the effects I'm worried about).

I'll have to go take a look at your essay to learn more about the issue you point out! Unfortunately, I can't promise I'll get to it before the end of the month.

Best of luck,

Daniel



Jonathan J. Dickau replied on May. 29, 2014 @ 17:56 GMT
Thanks Daniel,

I especially resonate with one statement in your essay "given the knowledge of how Nature sets its phenomena, Humanity could act to maximize the value of their play." Since my essay is focused on the value of play as a learning tool, I find that idea especially appealing.

Regardless of how soon you get to my essay, I think you will find it of value to your efforts, and I hope to stay in contact to discuss the issues you raise, even after the contest has concluded.

All the Best,

Jonathan



Ray Luechtefeld wrote on May. 29, 2014 @ 18:13 GMT
Hi Daniel,

Thanks for the really interesting essay. I agree that the two phenomena you identified are crucial, and propose a third. It is research on processes and systems that lead humans to interact "productively" (free from bias and destructive conflict while sharing information freely and making effective decisions).

Some support for this suggestion is provided in my essay on computationally intelligent personal dialogic agents. I've developed a prototype of such a system as part of a US National Science Foundation CAREER award.

I'd appreciate a rating on my essay, if you can do that, since I am a bit short on ratings. Also, I'm interested in collaborators in furthering the development of the dialogic system, if you know of anyone that might be interested. Have them contact me at my gmail address, my username is my first name, then a period, then my last name.

Thanks,

Ray Luechtefeld, PhD



Michael Allan wrote on May. 31, 2014 @ 10:46 GMT
Hello Daniel, May I post a short, but sincere critique of your essay? I'd ask you to return the favour. Here's my policy on that. - Mike



Anonymous wrote on Jun. 6, 2014 @ 09:03 GMT
Hello Daniel

I am impressed by your analysis. I am impressed by much of the work of the Future of Humanity Institute. However, I wonder if your suggestions will work, and I wonder if they really lower risk. Your examples, biological engineering and AI, are good examples to illustrate my concerns too. 1) It seems difficult to stop either biological engineering or AI research. Note that the biological containment labs whose escapes you cite as being "shockingly common" were the result of one of science's few attempts to restrain dangerous experiments. 2) I agree that biological engineering and AI present existential problems, but they also might solve others. If Willard Wells and Martin Rees are right about our prospects, we may have to take some risks to lower the background level of risk. As an example, your colleague Stuart Armstrong warns persuasively about AI risk in his booklet "Smarter than Us." However, AI is a critical part of his proposal to settle the universe, a proposal that, if workable, would give us a broad and safe future. [Stuart Armstrong & Anders Sandberg, "Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox," Acta Astronautica, Aug-Sept 2013.] AI seems to be a component of many projects that would reduce risk.

Of course, really good science and good use of that science would take care of these concerns. Perhaps I am too cynical about scientists.

My solution is an attempt to crowdsource work in the area of what I call management of positive and negative singularities. I wonder if it will really work either.



Anonymous wrote on Jun. 6, 2014 @ 09:40 GMT
The split of phenomena into limiting and transformative is interesting. (Instead of "transformative phenomenon", which suggests, to the uninitiated reader, "phenomenon that rearranges some existing part of the world into a better part organized along different principles", we'd suggest a term like "mediating phenomenon" or "controlling phenomenon", which suggests "phenomenon that, if it exists,...


Steven Kaas replied on Jun. 6, 2014 @ 12:03 GMT
Sorry, I had meant to post that from my account, but apparently it logged me out.


Author Daniel Dewey replied on Jun. 6, 2014 @ 15:19 GMT
Steven & Steve,

Hey! I didn't realize you were in the contest; there are so many essays that I missed yours. Thanks for commenting!

I agree with your note that "transformative" is confusing, but I'm not sure what would best replace it--- I'd like to represent the possibility of huge flips and swings in the way actions are mapped to values. I'll have to think about that. "Mediating" might be best.

Thanks also for the link to value of information, that makes sense.

Your essay sounds quite interesting; I'll give it a read and go comment over there.

Best,

Daniel



