Conjuring a Neutron Star from a Nanowire
Using tiny mechanical devices to create accelerations equivalent to 100 million times the Earth’s gravitational field—mimicking the arena of quantum gravity in the lab.
Rest in peace, astronaut Edgar Mitchell, the sixth man to walk on the Moon and the last surviving member of the Apollo 14 crew. He was oddly credulous for a man of science, confusing the public with his proclamations that alien visitors interceded in the arms race, that the Roswell incident was not a lot of tin foil and sticks but a downed saucer, and the like, on which I commented in a previous blog post.
But we remember him this weekend for the uncommon bravery involved in his work as an aviator, test pilot, and NASA astronaut. Without sound data and dedicated skepticism (not cynicism), we are likely to believe any old thing. But without a passion for trying on new and even radical ideas, what we know will surely be chained by expectation.
Flaws notwithstanding, here was an explorer's mind.
On Dec. 8, 2015, two different groups (sharing an author) posted papers to the arXiv announcing the possible detection of planet-sized objects in the far outer solar system (Vlemmings et al., arXiv:1512.02650v2 and Liseau et al., arXiv:1512.02652v2). There was a brief flutter on Twitter and in the media, which shortly died down. As far as I am aware, no large-scale effort has begun to confirm or refute these potential detections, and both papers have since been withdrawn pending further data.
Six weeks later, on January 20, a paper appeared in The Astronomical Journal adducing strong circumstantial evidence, based on the orbits of solar system objects, for a large 9th planet in the outer solar system (K. Batygin and M. E. Brown, The Astronomical Journal, Volume 151, Number 2). The media attention was staggering, and the paper has been downloaded 243,547 times as of this writing. There are almost certainly numerous intense efforts underway to try to detect the object.
While it may be surprising to see much more attention (and resources) directed toward circumstantial evidence for a 9th planet than to direct potential observation of one, this is the sort of decision with which researchers — and research funders, and journalists — are confronted all the time.
These decisions are, in essence, predictions about how things are going to unfold; this has gotten me interested in how to better solicit and aggregate expert predictions in science and technology, and helped motivate a new project I and several other physicists have been developing, called Metaculus.
To be more specific, there is an important class of decisions that can be posed in the form of "what is the expected return on my investment of time/effort/attention/funding in X?" For some science-based examples:
— "What is my expected return in using my time on telescope X to search for the planet suggested by this data?" Here the potential "return" is fame and satisfaction at discovering a planet.
— "What is my expected return in skimming/reading/studying this new paper?" Here the return might be insight gained, entry into a promising new research direction, etc.
— "What is the expected return in funding this research grant?" Here, the return could be papers published, talks given, meetings run, or more abstractly intellectual impact on a field or set of questions.
— "What is the expected return on building this instrument?" The impact here would be scientific discovery, possibly measured by papers, citations, etc.
A central idea in these questions is that of expected return. Most simply, this could be the likelihood of success times the return if successful. Or, if there are multiple possible outcomes, it could be the sum/integral of the probability of each outcome times that outcome's impact.
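The calculation above can be sketched in a few lines of Python. The probabilities and impact figures below are purely hypothetical, chosen only to show the arithmetic:

```python
def expected_return(outcomes):
    """Expected return over a set of possible outcomes.

    outcomes: list of (probability, impact) pairs, where the
    probabilities of all outcomes sum to at most 1.
    """
    return sum(p * impact for p, impact in outcomes)

# A made-up project: 5% chance of a major result (impact 100),
# 25% chance of a modest result (impact 10), 70% chance of nothing.
project = [(0.05, 100.0), (0.25, 10.0), (0.70, 0.0)]
print(expected_return(project))  # 0.05*100 + 0.25*10 = 7.5
```

For a continuum of outcomes the sum becomes an integral over the probability density of impact, but the idea is identical.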
The idea of high expected return (per dollar) is part of FQXi's core philosophy (and grantmaking criteria). To make a financial analogy, government funding agencies tend to purchase the equivalent of a diverse-but-safe portfolio of bonds and index funds: decent returns, fairly safe. These agencies tend not to fund the science equivalents of startup companies — projects where the chance of major success is fairly low, but the impact if successful is very high. We believe that in the scientific world, as in the corporate one, both types of investment are very important, and one role of FQXi is trying to fill in this end of the research funding portfolio.
Evaluating the "probability of success" is, though, rather difficult. It's often not hard to assess which of two projects is more likely to be successful. For example, I would say the Wendelstein 7-X fusion experiment and subsequent efforts are more likely to lead to useful energy generation than Brillouin Energy's LENR experiments. But how much more likely? Ten times? A thousand? A million? The 7-X’s funding is probably about 1000 times higher, so which experiment has the higher per-dollar expected return on investment depends on this likelihood ratio! Or what about tabletop quantum gravity experiments versus a bigger version of the "holometer"?
The idea of Metaculus is to generate quantitative and well-calibrated predictions of success probabilities, by soliciting and aggregating expert opinion, and by (in the process) helping people improve their skills at quantifying and predicting impact. Metaculus poses a series of questions, for example "Has a new boson been discovered at the LHC?", with relatively precise criteria for resolving the question after a specific time. Users are invited to predict likelihoods (1-99%) for these questions, and later awarded points for accuracy in their predictions. Studies show that by carefully combining the predictions of many users, better precision and calibration can be achieved.
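Metaculus's actual scoring and aggregation rules aren't spelled out here, but the flavor can be illustrated with two standard ingredients: a proper scoring rule (the Brier score, which rewards accurate probabilities) and a simple robust way of pooling many forecasts (the median). Real platforms use more elaborate weighted and recalibrated combinations.

```python
import statistics

def brier_score(p, outcome):
    """Quadratic (Brier) score: lower is better.

    p is the predicted probability; outcome is 1 if the event
    happened and 0 if it did not.
    """
    return (p - outcome) ** 2

# Made-up forecasts from five users on a yes/no question.
forecasts = [0.60, 0.70, 0.65, 0.90, 0.55]
pooled = statistics.median(forecasts)

outcome = 1  # suppose the question resolved "yes"
print(pooled, brier_score(pooled, outcome))
```

A proper scoring rule like this one has the nice property that a forecaster's expected score is optimized by reporting their honest probability, which is what makes point rewards compatible with eliciting calibrated predictions.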
My experience so far suggests to me that there are several ways a prediction platform like this, when applied to scientific research, can be complementary to traditional peer review. The effort of creating precise criteria for "success", and of trying to assign numbers to success likelihood, has a quite different feel than just reading to understand whether a paper or proposal is intellectually sound or correct. It also makes me realize that in all of the peer review and assessment that I have done, I've never been asked (or asked someone) to supply a number like "what is the probability that X will be the result of funding/publishing Y?" Since that's a significant part of what peer review is, isn't that a bit odd?
Perhaps there is an opportunity for real improvement here. A recent study made the case that prediction 'markets' are quite effective — and more effective than surveys even of experts — in forecasting whether given research (in this case in psychology) would be successfully reproduced (PNAS, Vol 112, no. 50).
I'm very interested in everyone's ideas for how something like Metaculus could be used in trying to make the biggest impact we can out of the limited resource society throws in the direction of us scientists — please comment!
If you're driving, you're having a subjective experience of colors, sounds and vibrations. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car, or is it a zombie in the sense of having behaviour without experience? This question of why and when matter is conscious is the essence of what philosopher David Chalmers has termed "the hard problem" of consciousness, and it's important not only in philosophy. For example, if you're an emergency room doctor, how can you determine whether an unresponsive patient is conscious or not in the sense of having a subjective experience? Patients with locked-in syndrome have functioning minds without being able to move or communicate. And what about a future robot intelligent enough to converse like a human?
A traditional answer to this problem is dualism — that living entities differ from inanimate ones because they contain some non-physical element such as an "anima" or "soul". Support for dualism among scientists has gradually dwindled. To understand why, consider that your body is made of about 10^29 quarks and electrons, which as far as we can tell move according to simple physical laws. Imagine a future technology able to track all your particles: if they were found to obey the laws of physics exactly, then your purported soul is having no effect on your particles, so your conscious mind and its ability to control your movements would have nothing to do with a soul. If your particles were instead found not to obey the known laws of physics because they were being pushed around by your soul, then we could treat the soul as just another physical entity able to exert forces on particles, and study what physical laws it obeys.
Let us therefore explore the other option, known as physicalism: that consciousness is a process that can occur in certain physical systems. Instead of starting with the hard problem, we can then start with the hard fact that some quark blobs are conscious and others aren't, which leads to the fascinating question of what makes the difference. I've long contended that consciousness is the way information feels when being processed in certain complex ways, but what types of information processing qualify? Specifically, what mathematical equation must an information-processing system satisfy to be conscious? Answering this question might give future ER physicians a consciousness detector, and would let future programmers control whether they build consciousness into their artificial intelligence systems.
Neuroscientist Giulio Tononi has proposed just such an equation, which forms the core of his Integrated Information Theory of consciousness (IIT). It says that information being processed is conscious if a mathematical quantity called "Phi" is sufficiently large. Phi quantifies integration: the extent to which information is interconnected into a unified whole rather than split into disconnected parts. The theory has generated interest from the neuroscience community, but also controversy, including a recent critique from FQXi member Scott Aaronson.
I want to see the question of whether IIT is correct or not resolved by experimental tests. Unfortunately, Tononi's proposed measure of integration is too slow to compute in practice from state-of-the-art patient data, requiring longer than the age of our universe, let alone the lifetime of the patient. I've therefore worked hard over the last year in search of a faster way to compute integration, and I'm happy to report that I've found one; in fact, several. In a paper I just posted, I explored and classified existing and novel integration measures by various desirable properties, and found that although there at first seem to be a few hundred options, there are in fact only a handful of attractive ones (arXiv:1601.02626). I was happy to discover that there's an approximation based on graph theory that dramatically speeds up the exact formulas, so that they can be applied to real-world data from laboratory experiments without posing unreasonable computational demands. This improves the prospects of making fascinating questions and theories about consciousness experimentally testable.
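To see why exact computation is so hopeless, note that integration measures in the Phi family typically minimize some quantity over all ways of splitting the system into two parts, and the number of bipartitions grows exponentially with system size. The sketch below is not Tononi's actual formula; it just counts that search space:

```python
def num_bipartitions(n):
    """Number of ways to split n elements into two non-empty parts.

    Each element independently goes to one side or the other (2**n
    assignments); halve for the two sides being interchangeable and
    drop the split that leaves one side empty: 2**(n-1) - 1.
    """
    return 2 ** (n - 1) - 1

for n in (10, 20, 300):
    print(n, num_bipartitions(n))
# Already for 300 elements the search space exceeds 10^89 bipartitions,
# which is why brute-force evaluation on brain data is out of reach and
# why faster approximations matter.
```

Any practical measure therefore has to either restrict the partitions considered or exploit structure (for example, graph structure) to avoid the exhaustive search.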
We’re taking our annual look back at the physics highlights of the past 12 months — as chosen by FQXi member Ian Durham, a quantum physicist at Saint Anselm College in New Hampshire. Ian will be counting down his top 5 picks in a special podcast series.
Our review of the year in physics, with quantum physicist Ian Durham, begins.
Updated on 29 December 2015 to say: Part 2 has now been added, revealing pick 4 and 3.
Updated on 30 December 2015 to say that I've tweaked the second podcast since first posting it yesterday. We had a slight mistake in the first version. I'll avoid telling you what it was, because if I did, it would spoil what's on the list. It's corrected now, though.
It’s hard to say what’s the most exciting element of this new paper on parallel universes, the inflationary multiverse, and black holes, by Tufts cosmologist (and FQXi member) Alex Vilenkin and colleagues. Is it the idea that black holes hide baby universes inside them — inflating their own spacetimes — connected to our universe by wormholes? Could it be that, according to the authors, astronomers may soon be able to find evidence to confirm this crazy notion? Perhaps it’s the fact that this paper could be presenting the first way to find definitive evidence that an inflationary multiverse of parallel worlds exists. Oh yes, and the authors also say that such black holes could have seeded supermassive black holes — the origin of which remains a mystery — *and*, in some of the scenarios they’ve looked at, they could comprise dark matter, the invisible stuff that makes up most of the matter in the universe.
Phew! No wonder the paper by Vilenkin, along with Jaume Garriga at the University of Barcelona and Jun Zhang, also at Tufts, is almost 50 pages long! ("Black Holes and the Multiverse," arXiv:1512.01819v2.)
Let’s take this piece by piece. Vilenkin sent me the paper, which he has just posted to the physics preprint server, arXiv, because, for him, what’s exciting is that it provides a "new way to test multiverse models observationally." Their analysis is based on inflation theory — the idea that our universe underwent a phase of rapid expansion, or inflation, in its early history. This is now a pretty mainstream notion, which serves to solve a number of mysteries about the state of our universe today. It has also had good observational backing since various satellites have now measured the slight temperature differences in the afterglow of the big bang — the cosmic microwave background radiation — and found patterns that match those predicted by inflationary models. (There are still alternative proposals out there to explain these features, however. See Sophie Hebden's "Faster than Light" for an example.)
Slightly more controversial is the idea that inflation forces us to accept that we live in a multiverse of neighbouring universes with potentially very different physical parameters than our cosmos. This stems from the realisation, by Vilenkin and others, that inflation is unlikely to have been a one-off event. Just as the patch of space that we now call home once inflated to create an entire cosmos for us to wonder at, other neighbouring patches are probably inflating all around us, creating parallel bubble universes nearby.
The multiverse idea has been criticised because it’s tough to test. Almost by definition, parallel bubbles are spacetimes that are divorced from ours, and so we can't interact with them directly. That hasn't stopped cosmologists like Vilenkin, and our own Anthony Aguirre, from coming up with inventive ways we might be able to detect them. For instance, two neighbouring bubbles might collide and leave a scar on our universe, which we could pick out of the cosmic microwave background data. (See "When Worlds Collide" by Kate Becker.)
In their new paper, Garriga, Vilenkin, and Zhang have investigated another possible consequence of inflationary cosmology — providing a new mechanism for the formation of black holes in our universe. We often talk about stellar mass black holes that were formed from the collapse of stars. There are also supermassive black holes that can be found at the centre of galaxies, which can have masses up to a billion times that of the Sun. Astrophysicists aren’t quite sure how those latter behemoths are formed.
According to Garriga, Vilenkin and Zhang, black holes could also have been formed by little bubbles of vacuum in our early universe. These would have expanded during our universe's inflationary phase (as the cosmos they were embedded in was also growing around them). When inflation ended in our cosmos, these bubbles would — depending on their mass — have either collapsed down to a singularity (an infinitely dense point that we think lies at the core of a black hole) — or, if they were heavier than some critical mass, the bubble interior would continue to inflate into an entirely new baby universe. This universe would look to us, from the outside, like a black hole, and would be connected to our universe by a wormhole. (See the image, taken from the paper, at the top of this post.)
The team has also examined another mechanism in which black holes are formed inside spherical "domain walls" that are thought to be created during inflation. A domain wall is like a fracture or defect in space, created as the universe cools. You can think of it like a defect created in a cube of ice, where the crystal structure in the solid has misaligned as the water froze.
The paper takes a detailed look at some of the possible properties of such black holes formed by these novel processes, including the masses they might have, and the sort of observable signs they might give out that astronomers could pick up. They caution that they would need to carry out comprehensive computer simulations to work out all possible signatures and the possible effects of, for instance, energy being siphoned off from our universe through the wormhole. But a preliminary analysis suggests that these novel black holes could provide noticeable signatures, in the form of gamma rays given out by the black holes, or distortions induced on the cosmic microwave background spectrum created by radiation that was emitted as gas accreted onto large black holes in the early universe.
By looking at observational evidence that is already out there, the team can rule out inflationary black holes with certain parameters, but others are still allowed. Those that remain viable could have seeded today's supermassive black holes, the team says. And for certain model parameters they have investigated, the number and mass of black holes they expect to see suggests that these black holes could make up the missing dark matter in the universe.
The authors also calculated that the baby universes could have very different physical parameters from one another. Thus the network of baby universes within black holes, linked by wormholes, would create an inflationary multiverse.
"We note that the mass distributions of black holes resulting from domain walls and from vacuum bubbles are expected to be different and can in principle be distinguished observationally," the teams writes in their paper. "If a black hole population produced by vacuum bubbles or domain walls is discovered, it could be regarded as evidence for the existence of a multiverse."
It's worth noting here that this isn't the first time that physicists have suggested that black holes lead to parallel universes. For example, FQXi members Lee Smolin and Jorge Pullin have independently had similar ideas in the past. On the podcast, on the June 2013 edition, you can hear Pullin talking about how loop quantum gravity predicts that black holes are tunnels to parallel worlds. (Smolin is also on that edition, talking about his book.) But this is the first analysis carried out using inflationary theory.
You can also read about Nobel Laureate Frank Wilczek’s ideas for detecting quantum parallel worlds by looking for energy leaking between worlds. Plus, you can listen to Howard Wiseman on the podcast talking about tests for interacting parallel worlds.
New Podcast: Shifty Neutrinos Win Big, a Cosmic... By ZEEYA MERALI
Congratulations to the 1300-strong group of physicists who won the Breakthrough Prize in physics on Sunday, for the discovery of neutrino oscillations—confirming that neutrinos can switch identities and have mass. This is the same...
“Spookiness” Confirmed by the First... By ZEEYA MERALI
Spookiness, it seems, is here to stay. Quantum theory has been put to its most stringent “loophole free” test yet, and it has come out victorious, ruling out more common sense views of reality (well, mostly). Many thanks to Matt Leifer...
Jacob Bekenstein (1947-2015) By ZEEYA MERALI
In remembrance of Jacob Bekenstein, a guest post by his friend and colleague Eduardo Guendelman, Physics Department, Ben Gurion University, Beer Sheva, Israel. It is with great sorrow that we report on the passing of Professor Jacob D....
The Physics of What Happens Grantees By BRENDAN FOSTER
This past winter, FQXi announced its fifth Large Grant program, on the topic of The Physics of What Happens – a call for proposals for research and outreach projects on "Events". I am happy to announce that from an initial group of almost 250...
Action and Excitement and Science! - Podcast... By BRENDAN FOSTER
In our new special edition of the FQXi podcast, we ask, what is the best way to interest and excite the public about physics, especially foundational physics? Do we just stick to the facts, or do we need slogans, explosions, and, ahem,...