If you are aware of an interesting new academic paper (that has been published in a peer-reviewed journal or has appeared on the arXiv), a conference talk (at an official professional scientific meeting), an external blog post (by a professional scientist) or a news item (in the mainstream news media), which you think might make an interesting topic for an FQXi blog post, then please contact us at forums@fqxi.org with a link to the original source and a sentence about why you think that the work is worthy of discussion. Please note that we receive many such suggestions and while we endeavour to respond to them, we may not be able to reply to all suggestions.

Please also note that we do not accept unsolicited posts and we cannot review, or open new threads for, unsolicited articles or papers. Requests to review or post such materials will not be answered. If you have your own novel physics theory or model, which you would like to post for further discussion among the FQXi community, then please add it directly to the "Alternative Models of Reality" thread, or to the "Alternative Models of Cosmology" thread. Thank you.





FQXi BLOGS
November 18, 2019

New Blog Entries

Will A.I. Take Over Physicists' Jobs? More on Max Tegmark at the 6th FQXi Meeting
By GEORGE MUSSER • Oct. 17, 2019 @ 17:28 GMT

Max Tegmark
Imagine you could feed the data of the world into a computer and have it extract the laws of physics for you. At the recent Foundational Questions Institute meeting, in Tuscany, FQXi director Max Tegmark described two machine systems he and his grad students have built to do exactly that. One recovers algebraic formulas drawn from textbook physics problems; the other reconstructs the unknown forces buffeting particles. He plans to turn his systems loose on data that have eluded human understanding, trawling for new laws of nature like a drug company screening thousands of compounds for new drugs. “It would be cool if we could one day discover unknown formulas,” Tegmark told me during a coffee break.

“One day” may already be here. Three theorists recently used a neural network to discover a relation between topological properties of knots, with possible applications to quantum field theory and string theory (V. Jejjala, A. Kar & O. Parrikar, arXiv:1902.05547 (2019)). Machine learning has analyzed particle collider data, quantum many-body wavefunctions, and much else besides. At the FQXi meeting, Andrew Briggs, a quantum physicist at Oxford, presented an A.I. lab assistant that decides how best to measure quantum effects (D. T. Lennon et al., arXiv:1810.10042 (2018)). The benefits are two-way: not only can A.I. crack physics problems, physics ideas are making neural networks more transparent in their workings.

Still, as impressive as these machines are, when you get into the details, you realize they aren’t going to take over anytime soon. At the risk of stroking physicists’ egos, physics is hard—fundamentally hard—and it flummoxes machines, too. Even something as simple as a pendulum or the moon’s orbit is a lesson in humility. Physics takes a lot of lateral thinking, and that makes it creative, messy, and human. For now, of the jobs least likely to be automated, physics ranks up there with podiatry. (Check the numbers for yourself, at the Will Robots Take My Job? site.)

Survival of the Fittest

Fitting an algebraic formula to data is known as symbolic regression. It’s like the better-known technique of linear regression, but instead of computing just the coefficients in a formula—the slope and intercept of a line—symbolic regression gives you the formula itself. The trouble is that there are infinitely many possible formulas, data are noisy, and any attempt to extract general rules from data faces the philosophical problem of induction: whatever formula you settle on may not hold more broadly.
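To make the idea concrete, here is a minimal sketch of symbolic regression as search: enumerate formulas from a tiny hand-picked palette and keep the one with the smallest fit error. The palette and data are invented for illustration; real systems search an enormously larger space of expressions.

```python
import math

def candidates():
    # (name, function) pairs drawn from a tiny, hand-picked palette
    yield "x", lambda x: x
    yield "x^2", lambda x: x * x
    yield "sin(x)", math.sin
    yield "2*x + 1", lambda x: 2 * x + 1

def fit_error(f, data):
    # sum of squared residuals of formula f over the data
    return sum((f(x) - y) ** 2 for x, y in data)

def symbolic_regress(data):
    # return the palette entry with the smallest fit error
    return min(candidates(), key=lambda c: fit_error(c[1], data))

# data secretly generated by y = x^2
data = [(x / 10, (x / 10) ** 2) for x in range(-20, 21)]
best_name, best_f = symbolic_regress(data)
```

Even this toy runs into the induction problem the paragraph above describes: a palette entry that happens to fit these data need not hold outside the sampled range.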

Searching a big and amorphous space of possibilities is just what evolution does. Organisms can assume an infinity of possible forms, only some of which will thrive in an environment. Evolution finds them by letting a thousand flowers bloom and 999 of them wither. Inspired by nature, computer scientists developed the first automated symbolic regression systems in the 1980s. The computer treats algebraic expressions as if they were DNA. Seeded with a random population of expressions, none of which is especially good at reproducing the data, it merges, mutates, and culls them to refine its guesses.
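A toy version of that evolutionary loop: seed a random population, rank it by fitness, cull the lower half, and refill by mutating survivors. Real genetic symbolic regression mutates and recombines whole expression trees; to keep the sketch short, this one evolves only the two constants of a fixed linear form, and the "hidden law" y = 3x + 7 is invented for illustration.

```python
import random

random.seed(0)  # make the run repeatable

DATA = [(x, 3 * x + 7) for x in range(-5, 6)]  # hidden law: y = 3x + 7

def fitness(ind):
    a, b = ind
    return -sum((a * x + b - y) ** 2 for x, y in DATA)  # higher is better

def mutate(ind):
    a, b = ind
    return (a + random.gauss(0, 0.3), b + random.gauss(0, 0.3))

def evolve(generations=500, pop_size=30):
    pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)  # rank by fitness
        survivors = pop[: pop_size // 2]     # cull the lower half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best_a, best_b = evolve()
```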

As three pioneers of the field, John Koza, Martin Keane, and Matthew Streeter, wrote in Scientific American in 2003, evolutionary computation comes up with solutions as inventive as any human’s, or more so. Genetic-based symbolic regression has fit formulas to data in fluid dynamics, structural engineering, and finance. A decade ago, Josh Bongard, Hod Lipson, and Michael Schmidt developed a widely used package, Eureqa. They used to make it available for free, but now charge for it—as well they might, considering how popular it is at oil companies and hedge funds. Fortunately, you can still do a 30-day trial. It’s fun to watch algebraic expressions spawn and radiate in a mathematical Cambrian explosion.

But the algorithm still requires additional principles to narrow the search. You don’t want it to come up with just any formula; you want a concise one. Physics, almost by definition, seeks simplicity within complexity; its goal is to say the most with the least. So the algorithm judges candidate formulas by both exactness and compactness. Eureqa occasionally replaces complicated algebraic terms with a constant value. It also looks for symmetries—whether adding or multiplying by a constant leaves the answer unchanged. That is trickier, because the symmetry transformation produces a value that might not be present in the data set. To make an educated guess at hypothetical values, the software fits a polynomial to the data, in effect performing a virtual experiment.
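One way to encode that preference for concise formulas, sketched below, is to score each candidate by its fit error plus a penalty per node of the expression tree, so a compact formula can beat a sprawling one that fits only slightly better. The penalty weight here is an arbitrary choice of ours, not a value taken from Eureqa.

```python
def score(error, size, lam=0.1):
    # fit error plus a per-node complexity penalty (lam is arbitrary)
    return error + lam * size

# two hypothetical candidates for the same data set:
sprawling = score(error=0.01, size=25)  # fits slightly better, 25 nodes
concise = score(error=0.05, size=3)     # slightly worse fit, 3 nodes
```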

Feynman in a Box

Tegmark and his MIT graduate student Silviu-Marian Udrescu take a different approach they call “A.I. Feynman” (arXiv:1905.11481 (2019)). Instead of juggling multiple possibilities and gradually refining them, their system follows a step-by-step procedure toward a single solution. If the genetic algorithm is like a community of scientists, each putting forward a particular solution and battling it out in the marketplace of ideas, A.I. Feynman is like an individual human methodically cranking through the problem.

It works by gradually eliminating independent variables from the problem.  “It uses a series of physics ideas… to iteratively transform this hard problem into one or more simpler problems with fewer variables, until it can just crush the whole thing,” Tegmark told the FQXi meeting. It starts by looking for dimensionless combinations of variables, a technique particularly beloved of fluid dynamicists. It tries obvious answers such as simple polynomials and trigonometric functions, so the algorithm has an element of trial and error, like a human. Then it looks for symmetries, using a mini neural network instead of a polynomial fit. Tegmark said: “We train a neural network first to be able to approximate pretty accurately the function.… That gives you the great advantage that now you can generate more data than you were given. You can actually start making little experiments.” The system tries holding one variable constant, then another, to see whether they can be separated.
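The separability probe at the end can be sketched directly. If f(x, y) = g(x) + h(y), then the difference f(x, y1) - f(x, y2) is constant in x; checking that constancy at two fixed values of y is a crude version of the "little experiments" described above. A.I. Feynman runs such tests against a trained neural-network surrogate; here we query the true function for illustration.

```python
def additively_separable(f, xs, y1, y2, tol=1e-9):
    # f(x, y) = g(x) + h(y) implies f(x, y1) - f(x, y2) is constant in x
    diffs = [f(x, y1) - f(x, y2) for x in xs]
    return max(diffs) - min(diffs) < tol

xs = [0.1 * i for i in range(1, 20)]

sep = additively_separable(lambda x, y: x ** 2 + 3 * y, xs, 1.0, 2.0)
mixed = additively_separable(lambda x, y: x * y + y ** 2, xs, 1.0, 2.0)
```

When the test succeeds, the problem splits into two smaller regressions, one per variable, which is exactly the kind of reduction the procedure iterates.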

Credit: Max Tegmark
Udrescu and Tegmark tested their system on 100 formulas from the Feynman Lectures. For each, they generated 100,000 data points and specified the physical units of the variables. The system recovered all 100 formulas, whereas Eureqa got only 71. They also tried 20 bonus problems drawn from textbooks that strike fear into the hearts of physics students, such as Goldstein’s on classical mechanics or Jackson’s on electromagnetism. The system got 18; the competition, three.

To be fair, Eureqa is not the only genetic symbolic-regression system out there, and Udrescu and Tegmark did not evaluate them all. Comparing machine systems is notoriously fraught. All require a good deal of preparation and interpretation on your part. You have to specify the palette of functions that the system will mix and match—polynomials, sines, exponentials, and so on—as well as parameters governing the search strategy. When I gave Eureqa a parabola with a touch of noise, it offered x² only as one entry in a list of possible answers, leaving the final choice to the user. (I wasn’t able to test A.I. Feynman because Udrescu and Tegmark haven’t released their code yet.) This human element needs to be considered when evaluating systems. A tool is only as good as its wielder.

Sorry to report, but symbolic regression is of no use to students doing homework. It does induction: start from data, and infer a formula. Physics problem sets are exercises in deduction: start from a general law of physics and derive a formula for some specified conditions. (Maybe more homework problems should be induction—that might be one use for the software.) As contrived as homework can be, it captures something of how physics typically works. Theorists come up with some physical picture, derive some equations, and see whether they fit the data. Even a wrong picture will do—indeed, one might argue that genuine novelty can arise only through an error. Kepler did not produce his namesake laws purely by crunching raw astronomical data; he relied on physical intuitions, such as occult ideas about magnetism. Once physicists have a law, they can fill in a new picture.

Or at least that is how physics has been done traditionally. Does it seem so human only because that is all it could be, when humans do it?

Making a Difference Equation

A formula describes data, but what if you want to explain data? If you give symbolic regression the position of a well-hit baseball at different moments in time, it will (if you get it to work) tell you the ball follows a parabola. To get at the underlying laws of motion and gravity takes more.

Since the ’80s, physicists and machine-learning researchers have developed numerous techniques to model motion, as long as it is basically Newtonian, depending only on the objects’ positions and velocities and on the forces they exert on one another. If the objects are buffeted by random noise, the machine does its best to ignore it. Its output is typically a difference equation, which gives an object’s position at one time step in terms of its positions at earlier time steps. This equation treats an object’s path as a series of jumps, but you can infer the continuous trajectory that connects them, thereby translating the difference equation into a differential equation, the form in which the laws of physics are commonly expressed.
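As a concrete illustration of the difference-equation picture (our example, not drawn from any of the systems discussed): for a ball under constant gravity, the discrete analogue of x″ = −g determines each position from the two before it, and stepping that rule reproduces the continuous parabola at the sample times.

```python
G, DT = 9.8, 0.01  # gravitational acceleration, time step

def step(x_prev, x_curr):
    # discrete analogue of x'' = -g: the second difference equals -g*dt^2
    return 2 * x_curr - x_prev - G * DT * DT

def trajectory(x0, v0, n):
    # seed the recurrence with the first two exact positions, then iterate
    xs = [x0, x0 + v0 * DT - 0.5 * G * DT * DT]
    for _ in range(n - 2):
        xs.append(step(xs[-2], xs[-1]))
    return xs

xs = trajectory(x0=0.0, v0=5.0, n=101)                # 1 second of flight
exact = 5.0 * (100 * DT) - 0.5 * G * (100 * DT) ** 2  # closed-form height at t = 1
```

Because the second difference of a parabola is exactly −g·dt², the jump rule and the continuous law agree at every sample, which is what makes the translation between the two well defined.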

Eureqa attacks the problem using genetic methods and can even tell an experimentalist what data would help it to decide among models. It seeds its search not with random guesses but with solutions to easier problems, so that it builds on previously acquired knowledge. That speeds up the search by a factor of five.

Other systems avail themselves of newer innovations in machine learning. Steven Brunton, Nathan Kutz, Joshua Proctor, and Samuel Rudy of the University of Washington rely on a principle of sparsity: that the resulting equations contain only a few of the many conceivable algebraic terms. That unlocks all sorts of powerful mathematical techniques, and the team has recovered equations not only of Newtonian mechanics but also of diffusion and fluid dynamics. FQXi'ers Lídia del Rio and Renato Renner, along with Raban Iten, Tony Metger, and Henrik Wilming at ETH Zurich, feed their data into a neural network in which they have deliberately engineered a bottleneck, forcing it to create a parsimonious representation (arXiv:1807.10300 (2018)).
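A dependency-free sketch of sparsity-promoting regression in the spirit of Brunton and colleagues' approach (sequential thresholded least squares): fit derivative data against a library of candidate terms, zero out small coefficients, and refit on the survivors. The hand-rolled linear algebra and the toy law dx/dt = −2x are ours, not the authors' code.

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (fine for tiny systems)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lstsq(rows, b):
    # ordinary least squares via the normal equations (X^T X) c = X^T b
    n = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Xtb = [sum(r[i] * y for r, y in zip(rows, b)) for i in range(n)]
    return solve(XtX, Xtb)

def sparse_fit(rows, b, threshold=0.1, sweeps=5):
    # sequential thresholded least squares: fit, drop small terms, refit
    n = len(rows[0])
    active = list(range(n))
    coef = [0.0] * n
    for _ in range(sweeps):
        sub = [[r[i] for i in active] for r in rows]
        fitted = lstsq(sub, b)
        coef = [0.0] * n
        for i, c in zip(active, fitted):
            coef[i] = c
        active = [i for i in active if abs(coef[i]) > threshold]
    return coef

# data generated by the hidden law dx/dt = -2x, with library [1, x, x^2]
xs = [0.05 * i for i in range(1, 41)]
rows = [[1.0, x, x * x] for x in xs]
dxdt = [-2.0 * x for x in xs]
coef = sparse_fit(rows, dxdt)
```

The thresholding step is where the sparsity principle bites: the constant and quadratic terms are pruned, and the refit recovers the single surviving coefficient.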

Pinball Wizard

Tegmark and his MIT grad student Tailin Wu hew closely to the methods of a paper-and-pencil theorist (Phys. Rev. E 100, 033311 (2019)). Like earlier researchers, they assume the equations should be simple, which, for them, means scrutinizing the numerical coefficients and exponents. If they can replace a real number by an integer or rational number without unduly degrading the model fit, they do. Tegmark told the FQXi meeting, “If you see that the network says, ‘Oh, we should have 1.99999,’ obviously it’s trying to tell you that it’s 2.” In less-obvious situations, they choose whatever rational number minimizes the total number of bits needed to specify the numerator, the denominator, and the error that the substitution produces.
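The coefficient-snapping step can be sketched as follows. The bit-accounting below is our guess at the flavor of the procedure, not Tegmark and Wu's exact formula: try rational approximations of a fitted constant and pick the one minimizing bits for the numerator and denominator plus the bits needed to encode the residual error down to a precision floor.

```python
from fractions import Fraction
from math import ceil, log2

def bits_for_int(n):
    # bits to write down an integer (at least one)
    return max(1, abs(n).bit_length())

def snap(value, max_den=1000, eps=1e-12):
    # pick the rational approximation minimizing total description bits:
    # numerator + denominator + bits to encode the residual to precision eps
    best, best_cost = None, float("inf")
    for den in range(1, max_den + 1):
        frac = Fraction(value).limit_denominator(den)
        err = abs(float(frac) - value)
        err_bits = ceil(log2(err / eps)) if err > eps else 0
        cost = (bits_for_int(frac.numerator)
                + bits_for_int(frac.denominator) + err_bits)
        if cost < best_cost:
            best, best_cost = frac, cost
    return best

snapped = snap(1.99999)  # "obviously it's trying to tell you that it's 2"
```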



Tegmark and Wu’s main innovation is a strategy of divide-and-conquer. Physicists may dream of a theory of everything, but in practice they have a theory of this and a theory of that. They don’t try to take in everything at once; they ignore friction or air resistance to determine the underlying law, then study those complications separately. “Instead of looking for a single neural network or theory that predicts everything, we ask, Can we come up with a lot of different theories that can specialize in different aspects of the world?” Tegmark said.

Credit: Max Tegmark
Accordingly, their system consists of several neural networks, each covering some range of input variables. A master network decides which applies where. Tegmark and Wu train all these networks together. Each specialist network fits data in its own domain, and the master network shifts the domain boundaries to minimize the overall error. If the error remains stubbornly high, the system splits a domain in two. Further tweaking ensures the models dovetail at their boundaries. Tegmark and Wu do not entirely give up on a theory of everything. Their system compares the models it finds to see whether they are instances of the same model—for instance, a gravitational force law differing only in the strength of gravity.
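A stripped-down version of that divide-and-conquer strategy: fit a simple specialist on each side of a candidate boundary and let a "master" step pick the boundary that minimizes the specialists' total error. The real A.I. Physicist trains neural-network specialists jointly; constant-fit specialists and a two-regime toy world keep this sketch minimal.

```python
def fit_constant(ys):
    # best constant model for a segment, with its squared error
    mean = sum(ys) / len(ys)
    return mean, sum((y - mean) ** 2 for y in ys)

def split_fit(xs, ys):
    # master step: choose the boundary minimizing the specialists' total error
    best_boundary, best_err = None, float("inf")
    for k in range(1, len(xs)):
        _, e_left = fit_constant(ys[:k])
        _, e_right = fit_constant(ys[k:])
        if e_left + e_right < best_err:
            best_boundary, best_err = xs[k], e_left + e_right
    return best_boundary, best_err

# a toy world with two regimes: y = 1 for x < 0, y = 5 for x >= 0
xs = [x / 10 for x in range((-10), 10)]
ys = [1.0 if x < 0 else 5.0 for x in xs]
boundary, err = split_fit(xs, ys)
```

If the best achievable error stayed stubbornly high, the same logic would recurse, splitting a domain in two, which is the behavior described above.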

Tegmark tested the system on what looked like a pinball ricocheting around an invisible pinball machine, bouncing off bumpers and deflecting around magnets. The machine had to guess the dynamics purely from the ball’s path. You can see this demonstrated in Tegmark’s talk, about four minutes into the YouTube video above. Tegmark and Wu tried out 40 of these mystery worlds and compared their system to a “baseline” neural network that tried to fit the whole venue with a single complicated model. For 36 worlds, the A.I. physicist did much better—its error was a billionth as large.

Think Different

All these algorithms are modeled on human techniques and suppositions, but is that what we really need? Some researchers have argued that the biggest problems in science, such as unification of physics and the nature of consciousness, thwart us because our style of reasoning is mismatched to them. For those problems, we want a machine whose style is orthogonal to ours.

A computer that works like us, only faster, will help at the margins, but seems unlikely to achieve any real breakthrough. For one thing, we may well have mined out the simple formulas by now. Undiscovered patterns in the world might not be encapsulated so neatly. For another, extracting equations from data is a hard problem. Indeed, it is NP-hard: as far as anyone knows, the runtime scales up exponentially with problem size. (Headline: “It’s official: Physics is hard.”) A computer has to make simplifications and approximations no less than we do. If it inherits ours, it will get stuck just where we do.

But if it can make different simplifications and approximations, it can burrow into reaches of theory space that are closed off to us. Machine-learning researchers have achieved some of their greatest successes by minimizing prior assumptions—by letting the machine discover the structure of the world on its own. In so doing, it comes up with solutions that no human would, and that seem downright baffling. Conversely, it might stumble on problems we find easy. As Barbara Tversky’s First Law of Cognition goes, there are no benefits without costs.

What goes on inside neural networks can seldom be written as a simple set of rules. Tegmark introduced his systems as an antidote to this inscrutability, but his methods presuppose that an elementary expression underlies the data, such as Newton’s laws. That won’t help you classify dog breeds or recognize faces, which defy simple description. On these tasks, the inscrutability of neural networks is a feature, not a bug. They are powerful precisely because they develop a distributed rather than a compact representation. And that is what we may need on some problems in science. Perhaps the machines will help the most when they are their most inscrutable.

Credit: Bart Selman
At the previous FQXi meeting in Banff, back in 2016, Bart Selman gave an example of how machines can grasp concepts we can’t. He had worked on computer proofs of the Erdős discrepancy conjecture in mathematics. In 2014 a machine filled in an essential step with a proof of 10 billion steps. Its chain of logic was too long for any human to follow, and that’s the point. The computer has its ways, and we have ours. To those who think the machine did not achieve any genuine understanding—that it was merely brute-forcing the problem—Selman pointed out that a brute-force search would have required 10^349 steps. Although Terence Tao soon scored one for humanity with a pithier argument, he cited the computer proof as guidance. If this is any precedent, the hardest problems will take humans and machines side by side.


Till Next Time
By BRENDAN FOSTER • Oct. 4, 2019 @ 19:42 GMT

And now a quick pause in conference coverage - for a fond farewell. I am sad to say, after almost 10 years in the role, I have retired as FQXi’s Science Programs Consultant.

I had such fun in this multipurpose position, helping coordinate everything from grant reviews to grant writing, conferences to contests, websites and databases, and of course, the fabulous FQXi podcast.

Now, I’ll be focussing my work time on writing — mainly science journalism and possibly some fiction on the side. Around town, you can also find me in the middle of music projects and watching over the health of 20-year-old Puffy the Cat.

Now that I’m on the outside, you’ll know I mean it when I say how much of a positive impact I believe FQXi has had on fundamental physics research. Ten years ago, as I finished my degree, very few positions and next-to-no funding existed in any sort of non-string quantum gravity, quantum foundations, philosophical physics, or really anything that sounded too deep or too grand. As a student, to mention something like Bell’s Theorem, Many Worlds, or black hole thermodynamics as something you wanted to learn about would get you laughed at or more likely ignored by faculty.

I was fortunate to have a series of mentors and unofficial advisors who were committed to these kinds of topics and, while realistic about the prospects, felt willing to encourage younger researchers. I finished my degree debt-free doing exactly what I wanted to, but I saw friends divert into other fields, or take on second jobs to support their work with their chosen advisor (common, I know, for folks in the humanities, but unheard of in the sciences).

All that is why I was excited to discover FQXi and then get the chance to work with them, to support this kind of research and the people who love to do it. I saw the impact as we funded worthy projects that had no other prospects, and helped raise the profile of questions that other funders might have just ignored.

Nowadays, other organizations see the importance of foundational research. FQXi and its sponsors are no longer the sole funders in this direction. It is now common for physics departments to have groups in foundational physics, foundations of quantum mechanics, foundations of everything! We of course must credit the researchers who persisted and kept these topics alive. I am just happy to have had the chance to help out.

Thank you to all of you who have been a part of my FQXi experience the past decade. I hope you all continue to visit the site, apply for the grants, enter the contests. Special thanks of course to my colleagues Zeeya, Kavita, Anthony and Max — I wish you and FQXi much success in every branch of the wavefunction.


More on agency from the 6th FQXi International Conference
By IAN DURHAM • Sep. 26, 2019 @ 19:00 GMT

There has been quite a bit of discussion surrounding my recent blog post about my talk on free will at the 6th FQXi conference this past July. In my work I am merely attempting to mathematically model the behavior that we most often associate with free will and agency. But what is agency? Is there room for free will within physics?

As Carlo Rovelli noted in his talk, there is a tension between decision and action within physics. It doesn’t help that agency, which involves both decisions and actions, is treated differently by different physicists. Rovelli believes that an agent should be describable within any adequate physical theory, and that no new physics is needed. Clearly he is not a dualist. Yet he also says that we should define agency as whatever happens in the cases in which someone, i.e. the agent, makes a decision.

But what is an agent? According to Susanne Still, agents are observers that act on their environment: they sense, process information, and act. To Still, the description of decision making as an optimization process, or any such utilitarian approach, is fraught with problems. What we really want is a theory in which behaviors emerge from first principles. These principles should, in turn, reflect physical reality. In other words, physics limits what can actually happen, i.e. there are physical limits to what agents can do (environmental forcing, context, etc.), and thus a complete theory of agency needs to take these limits into account. But agency also involves intention or purpose, which raises the question, asked by Larissa Albantakis: how can we distinguish autonomous actions from mere reflexes?
Larissa Albantakis
This is something I considered in my talk on free will. At some point our choices are made so quickly that we don’t even think about them and are therefore merely reflexive.

Albantakis’ answer to this question is to define an autonomous agent as an open system that is stable and that has self-defined and self-maintained borders. Such a system also has the capacity to perform actions that are at least partially caused from within, i.e. states internal to the system can produce causal change. This, of course, raises the question: how do we identify these self-defined borders? This is done by tracing back through the causal chain and looking at the evolutionary environments associated with each step in the chain. In other words, it involves finding the actual causes of actions. In doing so it is possible to compare levels of consciousness to levels of intelligence (slides and video of Albantakis’ talk will appear here). In looking at the representative plot, what is most interesting is what is not on the main sequence, such as AI (more intelligent, less conscious) and complex microorganisms (less intelligent, more conscious).
Consciousness versus intelligence from Larissa Albantakis' talk




At any rate, I find it interesting that there is a convergence of ideas happening here towards the language of statistical mechanics and thermodynamics. Rovelli suggests that agency is related to entropy growth, Still argues that thermodynamics places physical constraints on agency, Albantakis defines autonomous agents in terms of open systems, Karl Friston spoke of Markov blankets, and I develop a measure of free will (i.e. agency) in terms of statistical distributions.
Karl Friston


In a certain sense, this is perhaps not surprising given the close relationship between statistical mechanics, thermodynamics, and information processing. But is this convergence more than merely one of language? Can agency, intention, and purpose be adequately described in terms of statistics and information? Various speakers at the conference had widely diverging opinions on this. Alyssa Ney, for instance, generally defended physicalism which puts physics in a privileged place amongst the sciences.
Alyssa Ney
This view appears, at least on the surface, to be heavily reductionist. This can be contrasted with George Ellis’ view, which suggests that we require a new language, since the language of physicalism seems to be mostly reductionist and the world simply can’t be fully described in reductionist terms. During one of the panel sessions, Paavo Pylkkänen flatly claimed that physics cannot adequately describe the mind, at least not without new physics. Like Ellis (and unlike Ney), Pylkkänen does not believe that there necessarily is a fundamental level, let alone that physics represents it. By this argument, no field by itself can fully represent or capture the mind (see the debate over this here). It seems clear that this debate is likely to endure without some convincing empirical evidence that favors one view over another.


The Demon in the Machine — Paul Davies at the 6th FQXi Meeting
By ZEEYA MERALI • Sep. 20, 2019 @ 18:42 GMT

Paul Davies
Earlier this month, astronomers announced the discovery of water in the atmosphere of a potentially habitable planet, some 111 light years or 650 million million miles from Earth. The planet, called K2-18b, is reported to be a plausible candidate for hosting alien life.

What will those searching for signs of life be looking for? The plan is usually to watch for gases in the atmosphere of planets and moons that could only have been produced by living organisms, although in this case, because K2-18b is so far away, it will take the next generation of space telescopes to pick out such evidence. That covers life that we are familiar with, but what if these distant worlds harbour ‘life, but not as we know it’? What will scientists look for then?

As Paul Davies, a physicist and FQXi member at Arizona State University noted at FQXi’s 6th International Meeting in Tuscany, in July, astrobiologists don’t have a “life-meter” that can detect life in any form it may take because scientists don’t yet have a clear definition of what constitutes life in the first place.

In his talk, which you can now watch on FQXi’s YouTube channel, Davies describes his quest for a definition of life in terms of information. Embryo development marks a “meticulous choreography of organised information, all the right bits end up in the right place at the right time,” says Davies. “A wonderful example of the power of information to sculpt physical forms, living forms.”

In particular, Davies is searching for a boundary that an entity crosses in its ability to process information — a “demonic cut” — enabling it to manipulate and exploit information in a controlled way. Does this ability mark the transition from being a non-living to a living thing? (The term ‘demonic’ here is a reference to Scottish physicist James Clerk Maxwell’s hypothetical demon, which can seemingly violate the laws of thermodynamics to produce useful work based on its knowledge about a system.)

An intriguing question that came up a few times at the meeting is whether big questions about the origin of life, consciousness, intelligence and agency, can be explained by known physics. A few weeks ago, I posted an edition of the podcast featuring Carlo Rovelli’s work to understand decision-making by better investigating aspects of psychology, physics, cosmology, biology and information theory. Rovelli acknowledges there are many open questions, but he believes they can eventually be answered with today’s science.

Davies, however, takes the opposite view. He’s not calling for a supernatural explanation for these features, but in recent years, as he describes in the video, he has started to think that we need a new kind of physics to get to the bottom of these deep issues about our origins — a new kind of physical law. With FQXi member Sara Walker, he is investigating so-called “state-dependent laws of information.” You can think of these shifting laws like the rules of chess changing mid-game depending on the configuration of the chess pieces at different points.



So, do you agree with Davies that these questions will need new physics? Or, like Rovelli, do you think that we simply need to better understand the science we already know? Or would you say that, perhaps, these puzzles lie beyond the scope of science?


Schrödinger’s Zombie: Adam Brown at the 6th FQXi Meeting
By GEORGE MUSSER • Sep. 8, 2019 @ 19:30 GMT

Adam Brown
Forget the cat: what if you put a computer into the Schrödinger thought experiment? You could make the computer both run and not run, at once, and that’s just a warm-up. You could, in fact, make it not run and nonetheless extract the answer to a computation. The computer will be sitting there waiting for someone to press “Run,” yet will have produced a result. It sounds impossible by definition, but that’s quantum physics for you. This idea of counterfactual computation is not just a thought experiment; there are computers in the physics labs of the world that have done this.

At the recently concluded FQXi meeting in Tuscany, Adam Brown of Stanford University grabbed hold of counterfactual computation and ran with it. What if the computer is set up to perform a brain simulation? You could ascertain what that brain would be thinking even if it is not, in fact, thinking. Whether a simulated brain is conscious is a contentious question, but suppose it is. Then you could create a mind that acts in the world, yet lacks first-person experience—a philosophical zombie. What is more, you can decide the circumstances under which the mind will be conscious or not; it might revel in happy sensations, but have no experience of sad ones. Brown’s talk put a new spin on old problems in the philosophy of mind and personal identity.

Free Podcast

Quantum Mind Reading. Could we create a quantum experiment to predict what a person will do, without having to simulate their consciousness? Physicist Adam Brown argues a classic quantum "bomb tester" proposed in the 1990s could be modified to do just that.

LISTEN:

Go to full podcast

Of all the many wonderful talks I’ve heard at these meetings over the years, Brown’s stands out as one of the most quintessentially FQXi: uninhibited and unpigeonholeable, less about the conclusion than about the steps leading up to it. “I am not really a quantum information theorist, and I’m really not a philosopher,” he admitted at the outset. “But what I am is amongst friends, so I hope you will take this in that spirit.”

Speaking of being amongst friends, let me put out a general-purpose disclaimer of my own. In this post, I will streamline the experimental descriptions, departing slightly from the authors’ original presentations, while staying true to the physics. Also, I will steer clear of the interpretation of quantum mechanics and focus on what we directly observe in these experiments. We will have plenty of interpreting to do as it is.

Please Do Interfere

Counterfactual computing is nothing so straightforward as predicting what the computer’s output will be. It’s not like saying that you know the computer would beep if you pressed a key, but you don’t press that key. In fact, the machine’s output could be unpredictable and, under ordinary circumstances, the only way to know what it will do is to run it. Yet quantum physics can obtain a prediction even so. It works because, in the quantum realm, things that can happen, but don’t, can leave their mark on what does.

The principle of doing something without doing it goes back to the earliest days of quantum theory. Physicists came up with ways to measure a particle without interacting with it (the Einstein-Podolsky-Rosen thought experiment), to affect a charged particle without exerting any force on it (the Aharonov-Bohm effect), and to collapse a wavefunction without measuring it (the Renninger negative-result experiment). Tellingly, these authors were critics of the Copenhagen interpretation of quantum mechanics. They were after features of the theory that challenged the conventional wisdom.

The specific idea of counterfactual computation grew out of a proposal in 1991 by Avshalom Elitzur and Lev Vaidman, both then at Tel Aviv University. They suggested creating a superposition of two possibilities, then un-creating it, which should restore the initial state—unless something happened in the interim to either of the superposed possibilities. Even if something could have happened, but didn’t, it will prevent you from reconstituting the original.

Their proposed experiment uses a quantum interferometer. You fire particles at a beamsplitter, which directs them randomly one way or the other. For photons, a half-silvered mirror does the trick; half reflect off and half pass through. Or you can glue two triangular prisms together to form a little cube; half the photons will leave through one face and half through another. The two paths diverge and meet again at a second beamsplitter. That beamsplitter, likewise, directs the particles one way or another, where detectors await them.

Credit: Adam Brown


Even before you introduce a counterfactual, this setup violates classical intuitions. If beamsplitters were simply gates that shunted particles either left or right, half would land in one detector, half in the other. What we observe, though, is that all land in one detector. That means the first beamsplitter must be causing each particle to go both left and right—a superposition of the two paths. When the two paths meet at the second beamsplitter, they interfere with each other, which closes off one of the directions the particle might have taken. So, the second beamsplitter reverses the operation of the first: if the first splits one particle in two, the second combines two into one.

The fun begins when you stick an obstacle into one of the two paths. Then the two paths no longer meet at the second beamsplitter. No interference occurs. Now the first beamsplitter does act like a simple gate, sending particles just one way or the other at random. Half the time, neither detector clicks—the particle must have gone down the blocked path. The other half, one of the two detectors clicks, at random—the particle has evidently reached the second beamsplitter and been steered to one of the detectors.
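These detector statistics are easy to check numerically. The sketch below is a toy model, not an account of any lab setup: it uses one common convention for a lossless 50/50 beamsplitter (a real, Hadamard-like matrix; other phase conventions give the same probabilities) and models the obstacle as simply erasing the amplitude on the blocked path.

```python
import numpy as np

# One common convention for a lossless 50/50 beamsplitter: a
# Hadamard-like unitary acting on the two path amplitudes.
BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)

def run(blocked: bool):
    """Return (p_detector0, p_detector1, p_absorbed) for one photon."""
    psi = BS @ np.array([1.0, 0.0])   # first beamsplitter: superpose the paths
    if blocked:
        psi[1] = 0.0                  # obstacle absorbs the path-1 amplitude
    psi = BS @ psi                    # second beamsplitter: recombine/interfere
    p0, p1 = np.abs(psi) ** 2
    return p0, p1, 1.0 - p0 - p1

print(np.round(run(blocked=False), 3))  # interference: every photon hits detector 0
print(np.round(run(blocked=True), 3))   # gate-like: 1/4, 1/4, and 1/2 absorbed
```

With no obstacle, the second beamsplitter undoes the first and all the probability lands in one detector; with the obstacle, half the photons are absorbed and the rest split evenly between the two detectors, just as described above.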

Credit: Adam Brown


The upshot is that, if the quiet detector starts to click, you know that someone has inserted an obstacle into the system. The converse is not true: if the normally active detector goes off, there might or might not be a blockage. This system thus acts as an obstacle-detector. What is counterfactual about it is that the particle was supposed to be blocked, but there it is, exiting the apparatus. The only effect of the obstacle has been to alter its exit point. Evidently the particle must not, in fact, have hit the obstacle, but nonetheless felt its presence.

As with much else in quantum physics, the core weirdness is the uneasy mix of particle and wave behavior. Without the blockage, the photon acts like a wave. It divides at the first beamsplitter and reunites at the second, like an ocean wave parting around an island. With the blockage, it’s like a particle, delivering all its energy in one lump. The counterfactual detection of the obstacle leverages this duality. Elitzur and Vaidman described this as an instance of quantum nonlocality: the presence of an obstacle affects the output not by any mechanistic process that you can trace out step by step, but by the sensitivity of the output to the system in its entirety.

Bomb Squad

Elitzur and Vaidman put the blowing into mind-blowing by imagining a hair-trigger bomb as the obstacle. If so much as a single particle falls on this bomb, it detonates. If someone opposing your tenure case plants such a bomb in your apparatus, you will hear one of two things when you perform the experiment: a loud bang, or a gentle click from the normally inactive detector. If the latter, the instrument will have detected the bomb without interacting with it—an otherwise impossible feat.

Credit: Adam Brown


Clever though this bomb-sniffer may be, please don’t trust your life to it. Because the beamsplitters divide their beams evenly, the system will set off the bomb half the time; the other half, it will reveal the bomb with only 50-percent likelihood. In all, it has only a one-in-three chance of giving you advance warning. You can do better by modifying the beamsplitters to send fewer particles down the pathway that might contain a bomb, which reduces the probability of an explosion. The tradeoff is that it also reduces the probability of a safe detection. Favoring one path over the other weakens the interference effect at the second beamsplitter, and some particles will leak through to the normally inactive detector. You will get false positives, suggesting the presence of a bomb where there is none. Put simply, if you avoid an explosion by not bothering to look, you are none the wiser about whether a bomb is there.
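The one-in-three figure follows from a short calculation, assuming ideal 50/50 beamsplitters and that an ambiguous run is simply repeated with a fresh photon:

```python
# Per-run outcomes of the Elitzur-Vaidman tester with a live bomb and
# ideal 50/50 beamsplitters: half the photons take the bomb path (bang);
# the rest reach a detector at random.
p_explode   = 0.5
p_detect    = 0.25   # normally quiet detector clicks: bomb revealed safely
p_ambiguous = 0.25   # normally active detector clicks: you learn nothing

# An ambiguous run can be repeated. Summing the geometric series over
# retries gives the overall chance of a safe detection.
p_safe = p_detect / (1 - p_ambiguous)
print(p_safe)  # 0.3333...: a one-in-three chance of advance warning
```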

But Paul Kwiat of the University of Illinois at Urbana-Champaign and his colleagues came up with a way to evade this tradeoff. It makes use of the so-called quantum Zeno effect, which, in this case, simply means running the check multiple times. Each time, the system adjusts the probability of sending a particle down the potentially bomb-ridden path. On its first attempt, it tries a very low probability. If a bomb lurks in the system, it will probably not detonate, but its presence nonetheless prevents interference from occurring at the second beamsplitter. If the system is bomb-free, interference does occur, but only weakly.

To create a cleaner distinction, the particles loop back for another pass through the apparatus. Based on the results of the first round, the system adjusts the probability of sending particles down the potentially bomb-ridden path. If it sensed even the whiff of interference, suggesting the path is clear, it nudges the probability up; otherwise, it keeps the probability low. If a bomb is present, it is no likelier to detonate than before, but if it isn’t, the interference is somewhat stronger. Crucially, the system can make these adjustments using passive optical elements, without detecting the photons, which would spoil the interference rather than amplify it.

After enough rounds, the system has shifted from sending few particles down the potentially bomb-ridden path to sending all of them, having ascertained that there is no risk of explosion. At this point, the bomb and bombless states are completely distinct, avoiding false positives. Only then do you steer the particles into the detectors to see which of those states the system is in.
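A toy model shows how the Zeno trick sidesteps the tradeoff. The sketch below is an idealization, not Kwiat's actual optical arrangement: each pass is modeled as a small rotation of the photon's state toward the bomb path, and a live bomb as a projective measurement of that path.

```python
import numpy as np

def zeno_bomb_test(n_passes: int, bomb: bool):
    """Idealized Zeno-style tester.

    Each pass rotates the photon state by pi/(2*n_passes) toward the
    bomb path. A live bomb measures that path each pass, projecting the
    surviving state back onto the safe path. Returns the probability of
    no explosion and the final two-path state.
    """
    theta = np.pi / (2 * n_passes)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    psi = np.array([1.0, 0.0])             # photon starts on the safe path
    p_survive = 1.0
    for _ in range(n_passes):
        psi = R @ psi
        if bomb:
            p_survive *= 1 - psi[1] ** 2   # chance the bomb did NOT absorb it
            psi = np.array([1.0, 0.0])     # projection back onto the safe path
    return p_survive, psi

for n in (10, 100, 1000):
    p, _ = zeno_bomb_test(n, bomb=True)
    print(n, round(p, 4))   # survival probability climbs toward 1
```

With a bomb present, the survival probability is cos^(2N)(π/2N), which tends to 1 as the number of passes N grows; with no bomb, the rotations accumulate and the photon ends up entirely on the other path, so the two cases become completely distinct, exactly as the text describes.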

An entertaining noir film about the bomb tester, created by Dag Kaszlikowski of the Centre for Quantum Technologies and set on the mean streets of Singapore, won the 2014 FQXi video contest.



Kwiat and his colleagues wrote up a Scientific American article in 1996 that clarifies the procedure.

Seeing in the Dark

Once you realize that you can interact without interacting, all manner of possibilities open up. Kwiat and his colleagues used the scheme to take microscope images of hairs, wires, and fibers without shining light on them (Phys. Rev. A 58, 605-608, arXiv:quant-ph/9803060 (1998)). Vaidman suggested that biologists could take x-ray images of cells without causing radiation damage (arXiv:quant-ph/9610033 (1996)). Jian-Wei Pan of the University of Science and Technology of China and colleagues transmitted an image using hardly any photons. Roger Penrose of Oxford, in Shadows of the Mind, puckishly suggested that Orthodox Jews could use the system on the Sabbath to turn on a light without touching its switch.

In one especially interesting variant, Lucien Hardy of the Perimeter Institute showed how the bomb itself could be placed into superposition (Phys. Rev. Lett. 68, 2981 (1992)). He added a second interferometer that contained an antiparticle. It overlapped the first interferometer, so that the antiparticle could meet the particle and annihilate it in a minor explosion. Hardy originally imagined using electrons and positrons, but experimental demonstrations have used photons, which wipe each other out not by particle-antiparticle annihilation, but by mutual negation in a beamsplitter (Phys. Rev. Lett. 95, 030401 (2005)) or nonlinear optical crystal (Phys. Rev. Lett. 102, 020404 (2009)). On occasion, the normally inactive detectors of both interferometers click, indicating that an annihilation took place—but if it did, how did particles manage to reach the detectors? The scenario came to be known as “Hardy’s paradox,” although Hardy did not use the word “paradox” himself. A paradox arises only if you suppose the particles must always have well-defined positions.

Counterfactual Computing

In the most astounding proposal of all, Richard Jozsa of Cambridge suggested in 1998 that you could swap the bomb for a computer (arXiv:quant-ph/9805086 (1998)). The particle is its on/off switch. Just as you can detect a bomb without interacting with it, you can obtain the output of the computer without running it.

Jozsa imagined a rudimentary computer that outputs a single bit. A value of 0 is like the absence of a bomb; an output of 1 is like the presence. If the normally inactive detector clicks, the output must be 1—and the computer has yielded a result even though it wasn’t run. You can confirm that by looking at the machine. It will still be in its initial state, never having performed the computation. Yet its mere presence skewed the interferometer.

Like the bomb tester, this system often produces ambiguous results. You can do a bit better by augmenting it with the Zeno procedure. You set the probability of running the computer low at first and, as long as its output remains 0, gradually ratchet up that probability. By the end, an output of 1 is unambiguously a counterfactual output. Even that isn’t perfect, though. If the result of the computation is 0, the computer will run. There’s no way to perform a computation that is counterfactual no matter what the output is. In a follow-up paper, Jozsa and Graeme Mitchison of the Laboratory of Molecular Biology in Cambridge showed that this tradeoff is unavoidable. A counterfactual computer can never deliver complete certainty about both output values because its very functioning depends on uncertainty: that it might or might not run.

Kwiat and his team built such a machine in 2006 (Nature 439, 949–952 (2006)). Their computer returned two bits, indicating which of a set of four items met some criterion, using a famous algorithm developed by Lov Grover. The researchers placed it into one path of an interferometer. Running this computer is like exploding the bomb: it absorbs the particle, neither detector clicks, and you have to read the output from the machine itself.

The system produced one of three outcomes. Half the time, neither of the interferometer detectors clicked, meaning the computer ran. The counterfactual aspect failed in these cases, but at least the team had a very nice implementation of Grover’s algorithm (the best to date). A quarter of the time, the normally active detector clicked, which told them nothing: the computer may or may not have run. The remaining quarter, the normally inactive detector clicked, indicating that the computer did not run, but nonetheless gave a partial output. The researchers proposed stringing together a series of such interferometers to extract the rest of the output.

So what’s it good for? Jozsa suggested that counterfactual computation was yet another odd effect that quantum computers might someday harness, and Kwiat and his colleagues suggested that it might be adapted to reduce the errors that quantum computers are prone to. These uses are far off, at best. Brown, in his talk at the FQXi meeting, offered a more immediate application: for philosophy.

A New Breed of Zombie

Brown connected counterfactual computation to philosophical puzzles having to do with the mind. “There is, in fact, a reason you should care about whether the computation happened or not, and that’s if what the computation is doing is simulating your thought process,” he said.

Credit: Adam Brown
Suppose you program a computer to simulate a conscious mind. By putting this computer into an interferometer, you can predict what the mind will do without running the simulation. “Using counterfactual cognition, you can simulate what somebody’s going to do—you can predict what they’re going to do—without simulating them,” Brown said.

“Predict” is perhaps too weak a word. A simulated mind is a mind in its own right. You are not just predicting what the mind will do, but letting it do it. If you ask it to add 2 and 2, it will respond as readily as the original. Many philosophers and neuroscientists think a simulated mind is as conscious as the original—that is the premise of many a Black Mirror episode.

From the outside, the counterfactual mind seems identical to the original or simulated mind. Its output is the same. From the inside, though, the difference is profound. The counterfactual mind doesn’t have an inside. It is a philosophical zombie. In the taxonomy of zombies, it is even weirder than other breeds, because not only is it not conscious, it doesn’t even exist. It remains a potentiality inside the computer, awaiting an “on” signal that never came.

I don’t think this principle is limited to computer simulations. Why not insert an actual living brain into the interferometer and use the particle to control its state of consciousness? That would bypass the controversial question of whether a simulation has the same experience as the original. If the brain’s nerve connections to the body are preserved, it might move an arm without going through conscious deliberation. The brain will sometimes be fully present and sometimes a zombie.

You do run into the tradeoff that Mitchison and Jozsa talked about: not all results can be obtained counterfactually, so the mind will sometimes run and sometimes not. But Brown—in what was the most remarkable part of an already remarkable talk—made a virtue of this defect. Suppose you are simulating a mind that is making some big life decision. Such decisions are hard; with all the variables involved, you can never be sure which choice will make you happy or sad. But you can arrange the counterfactual procedure to execute only the happy outcomes and leave the sad ones unimplemented. Thus you could guarantee that any minds you conjure up will be happy. Indeed, you could apply that insight to an entire virtual universe, so that only universes that maximize the happiness of their occupants (or some other desirable outcome) were brought into existence.

Brown speculated that such a scenario bears on the problem of evil in theology. Even an omniscient creator faces a problem of prediction. If it wants to create a universe where good outweighs evil, it must, in effect, run a simulation first. But such a simulation is a universe in its own right. It seems the creator cannot avoid creating creatures that suffer. But counterfactual creation allows God to create a universe where good is guaranteed to outweigh evil.

These teleological outcomes can occur even at a more humble level. The conditions a physicist imposes at the end of an experiment can determine what happened during the experiment or whether the experiment is even performed. You can ensure that outcomes you don’t like never came to pass. “If you weren’t going to get the answer you didn’t like, then, in a wavefunction-weighted Born sense, you never instantiated that possibility to begin with,” Brown said. That might have practical significance. When evaluating the efficiency of a quantum algorithm, researchers typically count how many times the computer performs certain operations, and Brown suggested they shouldn’t count operations that never occur.

If nothing else, I think Brown’s talk proves that physicists still have much thinking to do about quantum counterfactuality. If the potential and the actual blur together, perhaps you should conclude not that you haven’t run the computer, but that, by prepping the computer, you have run it—that there is no difference between running and not running, and computation is the structure of the machine, not in its dynamics.

One thing is clear: there is nothing counterfactual about how amazing quantum physics can be.
173 comments | view comments

Recent Blog Entries

Bonus Koan: A Simulacrum of Revenge
By ANTHONY AGUIRRE
This is a Koan written after the publication of Cosmological Koans, addressing and concerning the hypothesis that simulations of minds have the same moral value as the original physical and biological minds. It can be enjoyed whether or not you’ve...
September 7th, 2019 | 4 comments | view blog entry & comments

Bonus Koan: Distant Causes
By ANTHONY AGUIRRE
Another Koan from the cutting-room floor, this one discusses causality and Mach's principle.

Next up will be a brand-new Koan!
August 17th, 2019 | 21 comments | view blog entry & comments

Building an AI physicist: Max Tegmark at the 6th...
By ZEEYA MERALI
[picture]Ask not what AI can do for you – ask what you can do for AI. That was the challenge that Max Tegmark (cosmologist at MIT and one of FQXi’s scientific directors) laid down to his fellow physicists at the recent FQXi meeting in Tuscany....
August 16th, 2019 | 15 comments | view blog entry & comments

Downward causation: George Ellis at the 6th FQXi...
By IAN DURHAM
For many years now FQXi member George Ellis has been patiently trying to sell me on the idea of downward causation. While I have never actively argued against this idea, I have come out strongly in defense of reductionism which is generally...
August 15th, 2019 | 14 comments | view blog entry & comments

Designing the Mind: Susan Schneider at the 6th...
By ZEEYA MERALI
[picture]How far would you go to enhance your mind? How far is too far?

Last month, Elon Musk's Neuralink start-up introduced the idea of an implantable chip that you stick in your brain, through an invasive surgical procedure...
August 15th, 2019 | 8 comments | view blog entry & comments

The Physics of Decision-Making: Carlo Rovelli at...
By ZEEYA MERALI
[picture]You chose to click on this post.

But why? And does the fact that the universe started in a low entropy state play a role in providing the answer?

Elsewhere on the blog, Ian Durham has been writing about his own model of free...
August 14th, 2019 | 9 comments | view blog entry & comments

Measuring Free Will: Ian Durham at the 6th FQXi...
By IAN DURHAM
It feels a bit odd blogging about myself, but here goes...[picture]

For most of the history of modern science the debate over free will has been largely left to the realm of philosophy. Indeed, the debate is as old as philosophy itself. But,...
August 14th, 2019 | 341 comments | view blog entry & comments

Bonus Koan: A Lake of Many Reflections
By ANTHONY AGUIRRE
In the editing process of Cosmological Koans, a number of Koans — even pretty much complete ones — ended up on the cutting-room floor. This is one, which addressed/describes the "Cosmological Interpretation" of quantum mechanics, that I thought...
August 12th, 2019 | 58 comments | view blog entry & comments

The Future of Computation: Fred Adams at the 6th...
By ZEEYA MERALI
[picture]What are the physical limits constraining the exponential growth of computation? And how might we overcome them?

In a talk that captures the spirit of the Foundational Questions Institute beautifully, astrophysicist Fred Adams began...
August 1st, 2019 | -1 comments | view blog entry & comments

Memory, Causality and Cats: Sean Carroll at the...
By ZEEYA MERALI
[picture]FQXi’s 6th International meeting is now over — and we have plenty of brilliant talks and panel discussions from the conference to now share with you.

The first session was on causality, and Caltech cosmologist Sean Carroll opened...
July 30th, 2019 | 36 comments | view blog entry & comments
