Outside the Box

March 5, 2021
by Colin Stuart
A proposed quantum set-up that could predict your game-playing strategy resurrects Newcomb’s classic quiz show paradox.
FQXi Awardees: Patrick Hayden
Imagine that you’ve reached the final of a TV game show. The host presents you with two boxes: A and B. Box A contains $1,000, but you don’t know what is in Box B: it could be empty, or it could contain $1 million. You can take both boxes, or Box B alone. But here’s the catch: a supercomputer with a history of near-perfect accuracy has already predicted what you are going to do. If it predicted that you would take both boxes, it left Box B empty; if it predicted that you would take only Box B, it put the $1 million inside. What choice would you make?

That question divides people—whether members of the public or mathematicians and scientists—because there are sound logical reasons for both choices. It forms the basis of American physicist William Newcomb’s classic paradox, laid out and analysed in depth by American philosopher Robert Nozick in 1969. In recent years, mathematicians, computer scientists and physicists thought they had found a definitive resolution with the best box-picking strategy. But with the aid of an FQXi grant of over $80,000, Stanford University’s Patrick Hayden and his PhD student Noah Shutty have come to a startlingly different conclusion that reinstates the paradox. Their analysis hinges on a peculiar quantum thought experiment, in which a computer can predict a person’s choices, and raises questions about the ethics of simulating consciousness.

Newcomb’s paradox has intrigued people for decades. It was first presented to the public in Martin Gardner’s popular math column in Scientific American, in 1973. In 2016, The Guardian newspaper surveyed over 30,000 readers and found that 53.5 per cent of respondents plumped for Box B alone. The rest picked both boxes. Some argue that you should go for Box B because the supercomputer has likely already predicted you’d do that, so there’s a high chance of becoming a millionaire. Others note that the contents of Box B are already fixed by the time you choose, so whatever is inside it, taking both boxes adds Box A’s $1,000 to your winnings. That split is why it is a paradox: both strategies make sense, yet they are seemingly at odds with one another.
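The tension between the two camps can be made concrete with a little arithmetic. As a sketch (the specific accuracy figures are illustrative, not from the article), suppose the predictor is correct with probability p, regardless of which strategy you adopt:

```python
# Illustrative expected winnings in Newcomb's game, assuming the
# predictor is right with probability p (p is a hypothetical parameter).

def expected_one_box(p):
    # The predictor foresaw one-boxing with probability p, so Box B
    # holds $1,000,000 that often; otherwise you walk away with nothing.
    return p * 1_000_000

def expected_two_box(p):
    # You always keep Box A's $1,000; Box B pays out only in the
    # (1 - p) cases where the predictor wrongly expected one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(p, expected_one_box(p), expected_two_box(p))
```

For any accuracy above about 50 per cent, one-boxing wins on expected value, which is the first camp's argument. Yet for any *fixed* contents of Box B, two-boxing always pays exactly $1,000 more, which is the second camp's. Both calculations are correct; they simply disagree about whether your choice can be treated as independent of the prediction.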

Paradox Lost

So where do quantum physics and simulating consciousness fit in? The answer is mind-blowing. To accurately predict what you will do, this imaginary supercomputer would have to create a complex simulation of your choices. "It’s a widely held belief that a good enough simulation of a brain produces a conscious experience indistinguishable from your own," says Hayden. "When you’re faced with the decision, there’s no way for you to know whether you’re really the one making the choice or whether you’re the simulation and your conscious experience is being generated by the computer." Given that the contents of Box B depend on the simulation, you should always assume that you are the simulation and choose Box B. At least, that was the resolution proposed independently by a number of researchers, including computer scientists Radford Neal of the University of Toronto (arXiv:math/0608592 (2006)), Scott Aaronson of the University of Texas at Austin (arXiv:1306.0159 (2013)), and Oxford University’s David Deutsch (Deutsch’s short comment).

Hayden’s project throws a potential spanner in the works, however, suggesting that choosing Box B is not necessarily the best option after all. Hayden and colleagues have been investigating a branch of quantum physics called counterfactual quantum computation—and, in particular, the work of their Stanford colleague and FQXi member Adam Brown. Counterfactual quantum computation is a method of inferring the result of a computation without actually performing that computation. This sounds truly bizarre, but it is, in turn, based on another thought experiment known as the Elitzur–Vaidman bomb-tester, which says that you can check whether a bomb works without ever detonating it. There is a detailed description of how this quantum set-up works in the blog post "Schrödinger’s Zombie," and Brown gave a talk about the hypothetical bomb-tester, counterfactual quantum computation and simulating consciousness at FQXi’s 6th International Meeting in Tuscany, Italy, last year.
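The bomb-tester's logic can be checked with a few lines of linear algebra. The following is a minimal sketch (the set-up is the standard Mach–Zehnder interferometer version of the thought experiment, not code from Hayden's project): a photon is split across two paths, and a live bomb in one path acts as a which-path measurement that breaks the interference.

```python
import numpy as np

# 50/50 beamsplitter acting on the two-path photon state.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0])          # photon enters in port 0

# No bomb: the two paths recombine coherently at the second
# beamsplitter, and the "dark" detector on port 0 never clicks.
out = BS @ BS @ photon
p_dark_no_bomb = abs(out[0]) ** 2

# Live bomb in path 1: it measures which path the photon took.
mid = BS @ photon
p_explode = abs(mid[1]) ** 2       # photon hit the bomb: probability 1/2
survived = np.array([mid[0], 0])   # otherwise, state collapses onto path 0
out_bomb = BS @ survived
p_dark_bomb = abs(out_bomb[0]) ** 2  # dark detector clicks: probability 1/4

print(p_dark_no_bomb, p_explode, p_dark_bomb)  # approx. 0.0, 0.5, 0.25
```

The key is that last number: a quarter of the time the dark detector fires, an outcome that is impossible without a working bomb in the path. You have then learned the bomb is live even though no photon ever touched it, which is the "counterfactual" inference that Hayden's analysis generalizes to whole computations.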

Paradox Regained

The important point about counterfactual quantum computation, for Hayden’s purposes, is that when it is applied to Newcomb’s paradox, it suggests that the supercomputer could learn the result of the simulation without ever bringing a copy of your consciousness into existence. That snuffs out the proposed resolution, which says it is always better to assume you are the simulation and choose Box B. So the paradox is reinstated.

There’s no way for you to know whether you’re really the one making the choice or whether you’re the simulation.
- Patrick Hayden
That’s all fascinating stuff, but few of us will ever actually play this game and choose between the boxes. Still, Hayden explains that the research could one day have implications for the way we plan our lives. "If you wanted to be certain that you were going to lead a satisfying life, you can’t yet know in advance how your choices are going to play out," says Hayden. "At the end of your life you may be pleased or disappointed with the outcome."

We already employ simulations to help analyse the outcome of economic policy decisions on society. It is not impossible to imagine that one day people could simulate their choices to such a degree of accuracy that they could forecast the consequences of their actions. Yet the process of creating the simulation may very well call into being a simulated, conscious copy of yourself that experiences any negative outcomes anyway. Hayden’s work with counterfactual quantum computation is investigating whether it is possible to suppress those negative consequences by arriving at the outcome of the simulation without creating the simulation itself.