

If you are aware of an interesting new academic paper (published in a peer-reviewed journal or posted on the arXiv), a conference talk (at an official professional scientific meeting), an external blog post (by a professional scientist) or a news item (in the mainstream news media) that you think might make an interesting topic for an FQXi blog post, please contact us at forums@fqxi.org with a link to the original source and a sentence about why you think the work is worthy of discussion. Please note that we receive many such suggestions; while we endeavour to respond to them, we may not be able to reply to all of them.

Please also note that we do not accept unsolicited posts and we cannot review, or open new threads for, unsolicited articles or papers. Requests to review or post such materials will not be answered. If you have your own novel physics theory or model that you would like to post for further discussion among the FQXi community, please add it directly to the "Alternative Models of Reality" thread, or to the "Alternative Models of Cosmology" thread. Thank you.

You may also view a list of all blog entries.


RECENT ARTICLES

First Things First: The Physics of Causality
Why do we remember the past and not the future? Untangling the connections between cause and effect, choice, and entropy.

Can Time Be Saved From Physics?
Philosophers, physicists and neuroscientists discuss how our sense of time’s flow might arise through our interactions with external stimuli—despite suggestions from Einstein's relativity that our perception of the passage of time is an illusion.

Thermo-Demonics
A devilish new framework of thermodynamics that focuses on how we observe information could help illuminate our understanding of probability and rewrite quantum theory.

Gravity's Residue
An unusual approach to unifying the laws of physics could solve Hawking's black-hole information paradox—and its predicted gravitational "memory effect" could be picked up by LIGO.

Could Mind Forge the Universe?
Objective reality, and the laws of physics themselves, emerge from our observations, according to a new framework that turns what we think of as fundamental on its head.


FQXi BLOGS
December 6, 2019

New Blog Entries

Intelligence in the Physical World Grantees
By DAVID SLOAN • Dec. 5, 2019 @ 18:57 GMT

Fetzer Franklin Fund has partnered with FQXi to stimulate research on the role of intelligence in the physical world. Research will be funded across a host of institutions around the globe, spanning the fields of physics, chemistry, biology, philosophy, neuroscience and information theory. At FQXi we try to promote rigorous research into new and underdeveloped areas that have the potential to fundamentally change the way in which we understand the world. Following on from our previous program on “Agency in the Physical World,” the idea of intelligence, and how it relates to the world as described by physics, is an area we believe is ripe for investigation. In this round, through a competitive peer-review process, we have selected nine proposals to receive large grants, totaling just under $1.5 million.

The exciting topics that will be investigated include:

1.) How does an embryonic brain develop to produce complex behavior in organisms without being taught through examples?

2.) How do intelligent agents efficiently execute tasks in a quantum world?

3.) What can an agent with access only to probabilities conclude?

4.) Can agents with conflicting models of reality have equally useful information?

5.) Are physical laws the constructs of intelligent agents, and how do they encode information about the world?

6.) Does the universe “compute”? How is this rendered in biological systems?

7.) Do more complex systems process information at a higher level? Can this be used to define a collective notion of intelligence?

8.) Is intelligence limited by physical constraints such as the finiteness of resources available in the physical world and the laws of thermodynamics?

9.) What level of intelligence is needed to make predictions? How much memory is needed for this?

We would like to congratulate all our successful applicants and thank everyone who applied. We are particularly grateful to those who provided extended applications in the second round, and recognize the level of effort this entails. We’re looking forward to seeing the outcomes!
1 comment | view comments


What Will Quantum Computers Be Good For? — panel discussion from the 6th FQXi Meeting
By ZEEYA MERALI • Nov. 21, 2019 @ 18:08 GMT

Credit: Erik Lucero
Over the past couple of months there’s been renewed interest, and quite some intrigue, surrounding quantum computing. As you'll know from the special edition of the podcast with quantum physicist and FQXi blogger Ian Durham, posted in October, there was a news leak in September suggesting that a team at Google had achieved ‘quantum supremacy’ for the first time. This is the milestone at which a quantum computer performs a specific task that lies beyond the practical reach of a classical computer. At the time we posted the podcast, the rumour was that Google’s quantum processor, Sycamore, had solved a random number generation problem in just 200 seconds. The claim was that the world’s best classical computer would need 10,000 years to perform the same task. Since then, the team has officially published their results in Nature.

Free Podcast

Quantum Supremacy Milestone? Rumours abound that Google's quantum processor Sycamore has performed a task that would flummox the best classical computer — a first in quantum computing. Physicist Ian Durham assesses the claims, gives us a quantum computing primer, and discusses concerns about the term 'quantum supremacy'.


The plot thickened in October, however, when IBM hit back with a blog post in which some of their researchers argued that the result was perhaps not quite as supreme as Google claimed, saying:

"Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.

"Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met."

I'm sure Ian and I will be discussing where things stand in this debate during our end-of-year run-down on the podcast in a few weeks. But regardless of the status of this particular result, it's certainly worth talking more about the practical future of quantum computers. The random number task performed by Sycamore, which Ian chats about on the podcast, isn't a hugely useful one. The point of the test was simply to show that a quantum computer can do something that a classical computer cannot. But what do scientists hope quantum computers will eventually be good for? That was the subject of a panel discussion at FQXi's 6th international meeting in Tuscany, featuring quantum physicists Scott Aaronson, of the University of Texas at Austin, Mile Gu, of Nanyang Technological University, Michele Reilly, of Turing Inc, and Seth Lloyd, of MIT, in a session moderated by Catalina Curceanu, of INFN, Italy.

You can watch the full panel discussion now. Aaronson listed the most famous applications: simulating chemistry and physics (with applications in material science), breaking cryptography, speeding up database searches, enhancing machine learning, and using quantum computers to prove that random bits are really random. Gu looked further to the future, pondering whether quantum computers might help solve the quantum measurement problem. Reilly noted that however powerful quantum computers may or may not become, it is worth remembering that every quantum computer needs (costly) classical peripherals.



Lloyd, meanwhile, talked about what's already being done, and gamely sang an ode to quantum computers inspired by Gilbert and Sullivan. Here are the lyrics for your amusement:

Qubit Willow

In a superconducting circuit a little qubit
sang Entangled, entangled, unentangled.
And I said to it `Qubit, oh why do you sit
singing Entangled, entangled, unentangled?
Is it just decoherence, qubit,' I cried,
`or a nasty quasi-particle in your little inside?'
With a shake of its poor little head it replied
Entangled, entangled, unentangled.

Its flux fluctuated as it sat on that chip,
oh Entangled, entangled, unentangled.
Its Josephson junctions were having a pip,
entangled, entangled, unentangled.
It sighed and it sobbed and a quantum jump it made
as it lost all the phase of its de Broglie wave,
and a spin echo arose from the suicide's grave:
Entangled, unentangled, entangled.

Now I feel just as sure as I'm sure that my name
isn't Entangled, entangled, unentangled,
that it was not spontaneous collapse of the wave function that made it exclaim
Entangled, entangled, unentangled.
If my neurons interact with the universe I
shall decohere as it did and you will know why,
but I probably shall not exclaim as my decoherence dies,
Entangled, entangled, unentangled.

Seth Lloyd, FQXi, Barga, Italy, July 2019
46 comments | view comments


Will A.I. Take Over Physicists' Jobs? More on Max Tegmark at the 6th FQXi Meeting
By GEORGE MUSSER • Oct. 17, 2019 @ 17:28 GMT

Max Tegmark
Imagine you could feed the data of the world into a computer and have it extract the laws of physics for you. At the recent Foundational Questions Institute meeting, in Tuscany, FQXi director Max Tegmark described two machine systems he and his grad students have built to do exactly that. One recovers algebraic formulas drawn from textbook physics problems; the other reconstructs the unknown forces buffeting particles. He plans to turn his systems loose on data that have eluded human understanding, trawling for new laws of nature like a drug company screening thousands of compounds for new drugs. “It would be cool if we could one day discover unknown formulas,” Tegmark told me during a coffee break.

“One day” may already be here. Three theorists recently used a neural network to discover a relation between topological properties of knots, with possible applications to quantum field theory and string theory (V. Jejjala, A. Kar & O. Parrikar, arXiv:1902.05547 (2019)). Machine learning has analyzed particle collider data, quantum many-body wavefunctions, and much besides. At the FQXi meeting, Andrew Briggs, a quantum physicist at Oxford, presented an A.I. lab assistant that decides how best to measure quantum effects (D. T. Lennon et al., arXiv:1810.10042 (2018)). The benefits are two-way: not only can A.I. crack physics problems, but physics ideas are also making neural networks more transparent in their workings.

Still, as impressive as these machines are, when you get into the details, you realize they aren’t going to take over anytime soon. At the risk of stroking physicists’ egos, physics is hard—fundamentally hard—and it flummoxes machines, too. Even something as simple as a pendulum or the moon’s orbit is a lesson in humility. Physics takes a lot of lateral thinking, and that makes it creative, messy, and human. For now, of the jobs least likely to be automated, physics ranks up there with podiatry. (Check the numbers for yourself, at the Will Robots Take My Job? site.)

Survival of the Fittest

Fitting an algebraic formula to data is known as symbolic regression. It’s like the better-known technique of linear regression, but instead of computing just the coefficients in a formula—the slope and intercept of a line—symbolic regression gives you the formula itself. The trouble is that there are infinitely many possible formulas, data are noisy, and any attempt to extract general rules from data faces the philosophical problem of induction: whatever formula you settle on may not hold more broadly.

Searching a big and amorphous space of possibilities is just what evolution does. Organisms can assume an infinity of possible forms, only some of which will thrive in an environment. Evolution finds them by letting a thousand flowers bloom and 999 of them wither. Inspired by nature, computer scientists developed the first automated symbolic regression systems in the 1980s. The computer treats algebraic expressions as if they were DNA. Seeded with a random population of expressions, none of which is especially good at reproducing the data, it merges, mutates, and culls them to refine its guesses.
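
To make that loop concrete, here is a toy genetic symbolic regression in Python. It is my own sketch, not Eureqa's internals: the building blocks, mutation rule, and population sizes are illustrative assumptions, and I've left out crossover (the "merging" step) to keep it short.

```python
import math
import random

# Building blocks the search can combine: the variable, constants, and operations.
UNARY = [math.sin, math.cos]
BINARY = [lambda a, b: a + b, lambda a, b: a * b]

def random_expr(depth=0):
    """Grow a random expression tree: a leaf (variable or constant) or an operation."""
    if depth > 2 or random.random() < 0.3:
        return random.choice(['x', round(random.uniform(-2, 2), 2)])
    if random.random() < 0.4:
        return (random.choice(UNARY), random_expr(depth + 1))
    return (random.choice(BINARY), random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if not isinstance(expr, tuple):
        return expr                      # a numeric constant
    args = [evaluate(sub, x) for sub in expr[1:]]
    return expr[0](*args)

def fitness(expr, data):
    """Mean squared error against the data; lower is better."""
    try:
        return sum((evaluate(expr, x) - y) ** 2 for x, y in data) / len(data)
    except (OverflowError, ValueError):
        return float('inf')

def mutate(expr):
    """Occasionally replace a subtree with a freshly grown one."""
    if random.random() < 0.3 or not isinstance(expr, tuple):
        return random_expr()
    i = random.randrange(1, len(expr))
    return expr[:i] + (mutate(expr[i]),) + expr[i + 1:]

# Seed a random population, then repeatedly cull the worst and mutate the best:
# a thousand flowers bloom, and most of them wither.
data = [(i / 10, 2 * (i / 10) + math.sin(i / 10)) for i in range(50)]
population = [random_expr() for _ in range(200)]
for generation in range(100):
    population.sort(key=lambda e: fitness(e, data))
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]
```

After a hundred generations the front-runner is usually recognizable; the cull-and-mutate loop is the evolutionary search in miniature.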

As three pioneers of the field, John Koza, Martin Keane, and Matthew Streeter, wrote in Scientific American in 2003, evolutionary computation comes up with solutions as inventive as any human’s, or more so. Genetic-based symbolic regression has fit formulas to data in fluid dynamics, structural engineering, and finance. A decade ago, Josh Bongard, Hod Lipson, and Michael Schmidt developed a widely used package, Eureqa. They used to make it available for free, but now charge for it—as well they might, considering how popular it is at oil companies and hedge funds. Fortunately, you can still do a 30-day trial. It’s fun to watch algebraic expressions spawn and radiate in a mathematical Cambrian explosion.

But the algorithm still requires additional principles to narrow the search. You don’t want it to come up with just any formula; you want a concise one. Physics, almost by definition, seeks simplicity within complexity; its goal is to say the most with the least. So the algorithm judges candidate formulas by both exactness and compactness. Eureqa occasionally replaces complicated algebraic terms with a constant value. It also looks for symmetries—whether adding or multiplying by a constant leaves the answer unchanged. That is trickier, because the symmetry transformation produces a value that might not be present in the data set. To make an educated guess at hypothetical values, the software fits a polynomial to the data, in effect performing a virtual experiment.
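
In sketch form, those two ingredients look something like this: an error-plus-size score, and a polynomial surrogate for running the "virtual experiment". The expression trees are the same representation as in the sketch above; the size penalty, polynomial degree, and tolerance are my illustrative choices, not Eureqa's.

```python
import numpy as np

def complexity(expr):
    """Crude formula size: count the nodes in the expression tree."""
    return 1 if not isinstance(expr, tuple) else 1 + sum(complexity(e) for e in expr[1:])

def score(mse, expr, alpha=0.01):
    """Judge a candidate by exactness and compactness together;
    alpha sets the (assumed) price of each extra term."""
    return mse + alpha * complexity(expr)

def fit_poly_surrogate(x1, x2, y, degree=3):
    """Least-squares 2-D polynomial stand-in for the unknown function,
    so we can query input combinations that are absent from the data."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x1**i * x2**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda u, v: sum(c * u**i * v**j for c, (i, j) in zip(coeffs, terms))

def translation_symmetric(x1, x2, y, shift=0.5, tol=1e-3):
    """Virtual experiment: does adding the same constant to both inputs leave
    the output unchanged? If so, the formula depends only on x1 - x2."""
    f = fit_poly_surrogate(x1, x2, y)
    return np.max(np.abs(f(x1 + shift, x2 + shift) - f(x1, x2))) < tol
```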

Feynman in a Box

Tegmark and his MIT graduate student Silviu-Marian Udrescu take a different approach they call “A.I. Feynman” (arXiv:1905.11481 (2019)). Instead of juggling multiple possibilities and gradually refining them, their system follows a step-by-step procedure toward a single solution. If the genetic algorithm is like a community of scientists, each putting forward a particular solution and battling it out in the marketplace of ideas, A.I. Feynman is like an individual human methodically cranking through the problem.

It works by gradually eliminating independent variables from the problem.  “It uses a series of physics ideas… to iteratively transform this hard problem into one or more simpler problems with fewer variables, until it can just crush the whole thing,” Tegmark told the FQXi meeting. It starts by looking for dimensionless combinations of variables, a technique particularly beloved of fluid dynamicists. It tries obvious answers such as simple polynomials and trigonometric functions, so the algorithm has an element of trial and error, like a human. Then it looks for symmetries, using a mini neural network instead of a polynomial fit. Tegmark said: “We train a neural network first to be able to approximate pretty accurately the function.… That gives you the great advantage that now you can generate more data than you were given. You can actually start making little experiments.” The system tries holding one variable constant, then another, to see whether they can be separated.
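
The separability test can be sketched in a few lines. A.I. Feynman probes a trained neural-network surrogate; in the toy below any sufficiently accurate stand-in f works, and the probe points and tolerance are my assumptions.

```python
import numpy as np

def additively_separable(f, x1, x2, tol=1e-3):
    """Does f(x1, x2) split as g(x1) + h(x2)? If so, the 'exchange' identity
    f(a, b) + f(a2, b2) == f(a, b2) + f(a2, b) must hold at any probe points.
    Because f is a fitted surrogate, we can query points never seen in the data."""
    a, a2 = np.min(x1), np.max(x1)
    b, b2 = np.min(x2), np.max(x2)
    gap = f(a, b) + f(a2, b2) - f(a, b2) - f(a2, b)
    return abs(gap) < tol

# If the test passes, the hard two-variable search splits into two easy
# one-variable searches: fit g to the slice f(x1, b) and h to the slice
# f(a, x2), then fix up the constant offset between them.
```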

Credit: Max Tegmark
Udrescu and Tegmark tested their system on 100 formulas from the Feynman Lectures. For each, they generated 100,000 data points and specified the physical units of the variables. The system recovered all 100 formulas, whereas Eureqa got only 71. They also tried 20 bonus problems drawn from textbooks that strike fear into the hearts of physics students, such as Goldstein’s on classical mechanics or Jackson's on electromagnetism. The system got 18; the competition, three.

To be fair, Eureqa is not the only genetic symbolic-regression system out there, and Udrescu and Tegmark did not evaluate them all. Comparing machine systems is notoriously fraught. All require a good deal of preparation and interpretation on your part. You have to specify the palette of functions that the system will mix and match—polynomials, sines, exponentials, and so on—as well as parameters governing the search strategy. When I gave Eureqa a parabola with a touch of noise, it offered x² only as one entry in a list of possible answers, leaving the final choice to the user. (I wasn’t able to test A.I. Feynman because Udrescu and Tegmark haven’t released their code yet.) This human element needs to be considered when evaluating systems. A tool is only so good as its wielder.

Sorry to report, but symbolic regression is of no use to students doing homework. It does induction: start from data, and infer a formula. Physics problem sets are exercises in deduction: start from a general law of physics and derive a formula for some specified conditions. (Maybe more homework problems should be induction—that might be one use for the software.) As contrived as homework can be, it captures something of how physics typically works. Theorists come up with some physical picture, derive some equations, and see whether they fit the data. Even a wrong picture will do—indeed, one might argue that genuine novelty can arise only through an error. Kepler did not produce his namesake laws purely by crunching raw astronomical data; he relied on physical intuitions, such as occult ideas about magnetism. Once physicists have a law, they can fill in a new picture.

Or at least that is how physics has been done traditionally. Does it seem so human only because that is all it could be, when humans do it?

Making a Difference Equation

A formula describes data, but what if you want to explain data? If you give symbolic regression the position of a well-hit baseball at different moments in time, it will (if you get it to work) tell you the ball follows a parabola. To get at the underlying laws of motion and gravity takes more.

Since the ’80s, physicists and machine-learning researchers have developed numerous techniques to model motion, as long as it is basically Newtonian, depending only on the objects’ positions and velocities and on the forces they exert on one another. If the objects are buffeted by random noise, the machine does its best to ignore that. Its output is typically a difference equation, which gives an object’s position at one time step in terms of its positions at earlier time steps. This equation treats an object’s path as a series of jumps, but you can infer the continuous trajectory that connects them, thereby translating the difference equation into a differential equation, the form in which the laws of physics are commonly expressed.
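
A dropped ball makes a minimal worked example of that translation from difference to differential equation (the synthetic data, step size, and linear ansatz below are my own illustrative choices):

```python
import numpy as np

# Positions of a dropped object sampled every dt seconds (synthetic, noise-free).
dt = 0.1
t = np.arange(0.0, 2.0, dt)
x = 10.0 - 0.5 * 9.8 * t**2

# Learn a linear difference equation: x[n+1] = a*x[n] + b*x[n-1] + c.
A = np.column_stack([x[1:-1], x[:-2], np.ones(len(x) - 2)])
(a, b, c), *_ = np.linalg.lstsq(A, x[2:], rcond=None)
# Recovers a ~ 2, b ~ -1, c ~ -9.8*dt**2: the jump rule hiding x'' = -g.

# Translating back to a differential equation: the second difference over dt**2
# estimates the acceleration that the continuous law must supply.
accel = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt**2   # ~ -9.8 everywhere
```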

Eureqa attacks the problem using genetic methods and can even tell an experimentalist what data would help it to decide among models. It seeds its search not with random guesses but with solutions to easier problems, so that it builds on previously acquired knowledge. That speeds up the search by a factor of five.

Other systems avail themselves of newer innovations in machine learning. Steven Brunton, Nathan Kutz, Joshua Proctor, and Samuel Rudy of the University of Washington rely on a principle of sparsity: that the resulting equations contain only a few of the many conceivable algebraic terms. That unlocks all sorts of powerful mathematical techniques, and the team has recovered equations not only of Newtonian mechanics but also of diffusion and fluid dynamics. FQXi'ers Lídia del Rio and Renato Renner, along with Raban Iten, Tony Metger, and Henrik Wilming at ETH Zurich, feed their data into a neural network in which they have deliberately engineered a bottleneck, forcing it to create a parsimonious representation (arXiv:1807.10300 (2018)).
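
The sparsity idea can be stripped down to a few lines of sequentially thresholded least squares, roughly in the spirit of the Washington group's method (the library, threshold, and toy system below are my assumptions, not their code):

```python
import numpy as np

def sparse_fit(library, dxdt, threshold=0.05, iterations=10):
    """Fit dx/dt as a combination of library terms, then repeatedly zero out
    small coefficients and refit on the survivors: sparsity does the model selection."""
    xi, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
    for _ in range(iterations):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small], *_ = np.linalg.lstsq(library[:, ~small], dxdt, rcond=None)
    return xi

# Example: recover dx/dt = -2x + 0.5x^3 from noisy measurements.
x = np.linspace(-2, 2, 200)
dxdt = -2 * x + 0.5 * x**3 + 0.01 * np.random.randn(200)
library = np.column_stack([np.ones_like(x), x, x**2, x**3])
print(sparse_fit(library, dxdt))   # ~ [0, -2, 0, 0.5]
```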

Pinball Wizard

Tegmark and his MIT grad student Tailin Wu hew closely to the methods of a paper-and-pencil theorist (Phys. Rev. E 100, 033311 (2019)). Like earlier researchers, they assume the equations should be simple, which, for them, means scrutinizing the numerical coefficients and exponents. If they can replace a real number by an integer or rational number without unduly degrading the model fit, they do. Tegmark told the FQXi meeting, “If you see that the network says, ‘Oh, we should have 1.99999,’ obviously it’s trying to tell you that it’s 2.” In less-obvious situations, they choose whatever rational number minimizes the total number of bits needed to specify the numerator, the denominator, and the error that the substitution produces.
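
That snapping step can be played as a minimum-description-length contest: each candidate rational is charged for its numerator, for its denominator, and for the error it introduces, and the cheapest encoding wins. The bit-pricing and budget below are my illustrative stand-ins for Tegmark and Wu's actual criterion.

```python
from fractions import Fraction
from math import ceil, log2

def bits(n):
    """Bits to write down a signed integer."""
    return 1 + ceil(log2(abs(n) + 1))

def snap(value, precision=1e-6, max_den=1000, float_budget=16):
    """Replace a learned real number with the rational minimizing total cost:
    numerator bits + denominator bits + bits to encode the leftover error.
    If no rational beats the budget for just storing the float, keep the float."""
    best, best_cost = value, float_budget
    for den in range(1, max_den + 1):
        frac = Fraction(round(value * den), den)
        err = abs(value - float(frac))
        cost = bits(frac.numerator) + bits(frac.denominator)
        if err > precision:
            cost += log2(err / precision)   # bits to record the residual
        if cost < best_cost:
            best, best_cost = float(frac), cost
    return best

print(snap(1.99999))     # -> 2.0 ("obviously it's trying to tell you that it's 2")
print(snap(0.333333))    # -> 1/3, as a float
print(snap(0.7182818))   # keeps the raw float: no rational is cheap enough
```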



Tegmark and Wu’s main innovation is a strategy of divide-and-conquer. Physicists may dream of a theory of everything, but in practice they have a theory of this and a theory of that. They don’t try to take in everything at once; they ignore friction or air resistance to determine the underlying law, then study those complications separately. “Instead of looking for a single neural network or theory that predicts everything, we ask, Can we come up with a lot of different theories that can specialize in different aspects of the world?” Tegmark said.

Credit: Max Tegmark
Accordingly, their system consists of several neural networks, each covering some range of input variables. A master network decides which applies where. Tegmark and Wu train all these networks together. Each specialist network fits data in its own domain, and the master network shifts the domain boundaries to minimize the overall error. If the error remains stubbornly high, the system splits a domain in two. Further tweaking ensures the models dovetail at their boundaries. Tegmark and Wu do not entirely give up on a theory of everything. Their system compares the models it finds to see whether they are instances of the same model—for instance, a gravitational force law differing only in the strength of gravity.
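
Stripped of its neural networks, the architecture is a divide-and-conquer loop: specialists fit their own domains, and a master rule reassigns each data point to whichever specialist predicts it best. The hard-gated toy below is my own schematic, not Tegmark and Wu's code (they train everything jointly and smooth the domain boundaries).

```python
import numpy as np

class PolyModel:
    """Stand-in specialist: a low-degree polynomial (theirs are neural networks)."""
    def __init__(self, degree):
        self.degree = degree
    def fit(self, x, y):
        self.coeffs = np.polyfit(x, y, self.degree)
    def predict(self, x):
        return np.polyval(self.coeffs, x)

def fit_specialists(x, y, specialists, rounds=8):
    """Alternate: train each specialist on its current domain, then move each
    data point to whichever specialist predicts it best (the master's job)."""
    assign = np.arange(len(x)) % len(specialists)   # arbitrary starting split
    for _ in range(rounds):
        for k, model in enumerate(specialists):
            if (assign == k).sum() > model.degree:  # need enough points to fit
                model.fit(x[assign == k], y[assign == k])
        errors = np.stack([(m.predict(x) - y) ** 2 for m in specialists])
        assign = np.argmin(errors, axis=0)
    return assign

# A world with two regimes: linear on the left, parabolic on the right.
x = np.linspace(0.0, 10.0, 400)
y = np.where(x < 5.0, 2.0 * x, 50.0 - 3.0 * (x - 5.0) ** 2)
domains = fit_specialists(x, y, [PolyModel(1), PolyModel(2)])
```

Splitting a stubborn domain in two, and checking whether two specialists are secretly the same law with different constants, bolt on naturally from here.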

Tegmark tested the system on what looked like a pinball ricocheting around an invisible pinball machine, bouncing off bumpers and deflecting around magnets. The machine had to guess the dynamics purely from the ball’s path. You can see this demonstrated in Tegmark’s talk, about 4 mins into the YouTube video above. Tegmark and Wu tried out 40 of these mystery worlds and compared their system to a “baseline” neural network that tried to fit the whole venue with a single complicated model. For 36 worlds, the A.I. physicist did much better—its error was a billionth as large.

Think Different

All these algorithms are modeled on human techniques and suppositions, but is that what we really need? Some researchers have argued that the biggest problems in science, such as unification of physics and the nature of consciousness, thwart us because our style of reasoning is mismatched to them. For those problems, we want a machine whose style is orthogonal to ours.

A computer that works like us, only faster, will help at the margins, but seems unlikely to achieve any real breakthrough. For one thing, we may well have mined out the simple formulas by now. Undiscovered patterns in the world might not be encapsulated so neatly. For another, extracting equations from data is a hard problem. Indeed, it is NP-hard: the runtime of the best known algorithms scales up exponentially with problem size. (Headline: “It’s official: Physics is hard.”) A computer has to make simplifications and approximations no less than we do. If it inherits ours, it will get stuck just where we do.

But if it can make different simplifications and approximations, it can burrow into reaches of theory space that are closed off to us. Machine-learning researchers have achieved some of their greatest successes by minimizing prior assumptions—by letting the machine discover the structure of the world on its own. In so doing, it comes up with solutions that no human would, and that seem downright baffling. Conversely, it might stumble on problems we find easy. As Barbara Tversky’s First Law of Cognition goes, there are no benefits without costs.

What goes on inside neural networks can seldom be written as a simple set of rules. Tegmark introduced his systems as an antidote to this inscrutability, but his methods presuppose that an elementary expression underlies the data, such as Newton’s laws. That won’t help you classify dog breeds or recognize faces, which defy simple description. On these tasks, the inscrutability of neural networks is a feature, not a bug. They are powerful precisely because they develop a distributed rather than a compact representation. And that is what we may need on some problems in science. Perhaps the machines will help the most when they are their most inscrutable.

Credit: Bart Selman
At the previous FQXi meeting in Banff, back in 2016, Bart Selman gave an example of how machines can grasp concepts we can’t. He had worked on computer proofs of the Erdős discrepancy conjecture in mathematics. In 2014 a machine filled in an essential step with a proof of 10 billion steps. Its chain of logic was too long for any human to follow, and that’s the point. The computer has its ways, and we have ours. To those who think the machine did not achieve any genuine understanding—that it was merely brute-forcing the problem—Selman pointed out that a brute-force search would have required 10³⁴⁹ steps. Although Terence Tao soon scored one for humanity with a pithier argument, he cited the computer proof as guidance. If this is any precedent, the hardest problems will take humans and machines side by side.
16 comments | view comments


Till Next Time
By BRENDAN FOSTER • Oct. 4, 2019 @ 19:42 GMT

And now a quick pause in conference coverage - for a fond farewell. I am sad to say, after almost 10 years in the role, I have retired as FQXi’s Science Programs Consultant.

I had such fun in this multipurpose position, helping coordinate everything from grant reviews to grant writing, conferences to contests, websites and databases, and of course, the fabulous FQXi podcast.

Now, I’ll be focussing my work time on writing — mainly science journalism and possibly some fiction on the side. Around town, you can also find me in the middle of music projects and watching over the health of 20-year-old Puffy the Cat.

Now that I’m on the outside, you’ll know I mean it when I say how much of a positive impact I believe FQXi has had on fundamental physics research. Ten years ago, as I finished my degree, very few positions and next-to-no funding existed in any sort of non-string quantum gravity, quantum foundations, philosophical physics, or really anything that sounded too deep or too grand. As a student, mentioning Bell’s Theorem, Many Worlds, or black hole thermodynamics as something you wanted to learn about would get you laughed at, or more likely ignored, by faculty.

I was fortunate to have a series of mentors and unofficial advisors who were committed to these kinds of topics and, while realistic about the prospects, felt willing to encourage younger researchers. I finished my degree debt-free doing exactly what I wanted to, but I saw friends divert into other fields, or take on second jobs to support their work with their chosen advisor (common, I know, for folks in the humanities, but unheard of in the sciences).

All that is why I was excited to discover FQXi and then get the chance to work with them, to support this kind of research and the people who love to do it. I saw the impact as we funded worthy projects that had no other prospects, and helped raise the profile of questions that other funders might have just ignored.

Nowadays, other organizations see the importance of foundational research. FQXi and its sponsors are no longer the sole funders in this direction. It is now common for physics departments to have groups in foundational physics, foundations of quantum mechanics, foundations of everything! We of course must credit the researchers who persisted and kept these topics alive. I am just happy to have had the chance to help out.

Thank you to all of you who have been a part of my FQXi experience the past decade. I hope you all continue to visit the site, apply for the grants, enter the contests. Special thanks of course to my colleagues Zeeya, Kavita, Anthony and Max — I wish you and FQXi much success in every branch of the wavefunction.
25 comments | view comments


More on agency from the 6th FQXi International Conference
By IAN DURHAM • Sep. 26, 2019 @ 19:00 GMT

There has been quite a bit of discussion surrounding my recent blog post about my talk on free will at the 6th FQXi conference this past July. In my work I am merely attempting to mathematically model the behavior that we most often associate with free will and agency. But what is agency? Is there room for free will within physics?

As Carlo Rovelli noted in his talk, there is a tension between decision and action within physics. It doesn’t help that agency, which involves both decisions and actions, is treated differently by different physicists. Rovelli believes that an agent should be describable within any sufficiently descriptive theory, and that no new physics is needed. Clearly he is not a dualist. Yet he also says that we should define agency as whatever happens in the cases in which someone, i.e. the agent, makes a decision.

But what is an agent? According to Susanne Still, agents are observers that act on their environment: they sense, process information, and act. To Still, the description of decision making as an optimization process, or any such utilitarian approach, is fraught with problems. What we really want is a theory in which behaviors emerge from first principles. These principles should, in turn, reflect physical reality. In other words, physics limits what can actually happen, i.e. there are physical limits to what agents can do (environmental forcing, context, etc.), and thus a complete theory of agency needs to take these limits into account. But agency also involves intention or purpose, which raises the question, asked by Larissa Albantakis: how can we distinguish autonomous actions from mere reflexes?
Larissa Albantakis
This is something I considered in my talk on free will. At some point our choices are made so quickly that we don’t even think about them; they are therefore merely reflexive.

Albantakis’ answer to this question is to define an autonomous agent as an open system that is stable and that has self-defined and self-maintained borders. Such a system also has the capacity to perform actions that are at least partially caused from within, i.e. states internal to the system can produce causal change. This, of course, raises the question: how do we identify these self-defined borders? This is done by tracing back through the causal chain and looking at the evolutionary environments associated with each step in the chain. In other words, it involves finding the actual causes of actions. In doing so it is possible to compare levels of consciousness to levels of intelligence (slides and video of Albantakis’ talk will appear here). In looking at the representative plot, what is most interesting is what lies off the main sequence, such as AI (more intelligent, less conscious) and complex microorganisms (less intelligent, more conscious).
Consciousness versus intelligence from Larissa Albantakis' talk




At any rate, I find it interesting that there is a convergence of ideas happening here towards the language of statistical mechanics and thermodynamics. Rovelli suggests that agency is related to entropy growth, Still argues that thermodynamics places physical constraints on agency, Albantakis defines autonomous agents in terms of open systems, Karl Friston spoke of Markov blankets, and I develop a measure of free will (i.e. agency) in terms of statistical distributions.
Karl Friston


In a certain sense, this is perhaps not surprising given the close relationship between statistical mechanics, thermodynamics, and information processing. But is this convergence more than merely one of language? Can agency, intention, and purpose be adequately described in terms of statistics and information? Various speakers at the conference had widely diverging opinions on this. Alyssa Ney, for instance, generally defended physicalism, which puts physics in a privileged place amongst the sciences.
Alyssa Ney
This view appears, at least on the surface, to be heavily reductionist. This can be contrasted with George Ellis’ view, which suggests that we require a new language, since the language of physicalism seems to be mostly reductionist and the world simply can’t be fully described in reductionist terms. During one of the panel sessions, Paavo Pylkkänen flatly claimed that physics cannot adequately describe the mind, at least not without new physics. Like Ellis (and unlike Ney), Pylkkänen does not believe that there necessarily is a fundamental level, let alone that physics represents it. By this argument, no field by itself can fully represent or capture the mind (see the debate over this here). It seems clear that this debate is likely to endure without convincing empirical evidence that favors one view over another.
41 comments | view comments

Recent Blog Entries

The Demon in the Machine — Paul Davies at the...
By ZEEYA MERALI
[picture]Earlier this month, astronomers announced the discovery of water in the atmosphere of a potentially habitable planet, some 111 light years or 650 million million miles from Earth. The planet, called K2-18b, is reported to be a plausible...
September 20th, 2019 | 39 comments | view blog entry & comments

Schrödinger’s Zombie: Adam Brown at the 6th...
By GEORGE MUSSER
[picture]Forget the cat: what if you put a computer into the Schrödinger thought experiment? You could make the computer both run and not run, at once, and that’s just a warm-up. You could, in fact, make it not run and nonetheless extract...
September 8th, 2019 | 193 comments | view blog entry & comments

Bonus Koan: A Simulacrum of Revenge
By ANTHONY AGUIRRE
This is a Koan written after the publication of Cosmological Koans, addressing and concerning the hypothesis that simulations of minds have the same moral value as the original physical and biological minds. It can be enjoyed whether or not you’ve...
September 7th, 2019 | 4 comments | view blog entry & comments

Bonus Koan: Distant Causes
By ANTHONY AGUIRRE
Another Koan from the cutting-room floor, this one discusses causality and Mach's principle.

Next up will be a brand-new Koan!
August 17th, 2019 | 21 comments | view blog entry & comments

Building an AI physicist: Max Tegmark at the 6th...
By ZEEYA MERALI
[picture]Ask not what AI can do for you – ask what you can do for AI. That was the challenge that Max Tegmark (cosmologist at MIT and one of FQXi’s scientific directors) laid down to his fellow physicists at the recent FQXi meeting in Tuscany....
August 16th, 2019 | 15 comments | view blog entry & comments

Downward causation: George Ellis at the 6th FQXi...
By IAN DURHAM
For many years now FQXi member George Ellis has been patiently trying to sell me on the idea of downward causation. While I have never actively argued against this idea, I have come out strongly in defense of reductionism which is generally...
August 15th, 2019 | 14 comments | view blog entry & comments

Designing the Mind: Susan Schneider at the 6th...
By ZEEYA MERALI
[picture]How far would you go to enhance your mind? How far is too far?

Last month, Elon Musk's Neuralink start-up introduced the idea of an implantable chip that you stick in your brain, through an invasive surgical procedure...
August 15th, 2019 | 8 comments | view blog entry & comments

The Physics of Decision-Making: Carlo Rovelli at...
By ZEEYA MERALI
[picture]You chose to click on this post.

But why? And does the fact that the universe started in a low entropy state play a role in providing the answer?

Elsewhere on the blog, Ian Durham has been writing about his own model of free...
August 14th, 2019 | 9 comments | view blog entry & comments

Measuring Free Will: Ian Durham at the 6th FQXi...
By IAN DURHAM
It feels a bit odd blogging about myself, but here goes...[picture]

For most of the history of modern science the debate over free will has been largely left to the realm of philosophy. Indeed, the debate is as old as philosophy itself. But,...
August 14th, 2019 | 341 comments | view blog entry & comments

Bonus Koan: A Lake of Many Reflections
By ANTHONY AGUIRRE
In the editing process of Cosmological Koans, a number of Koans — even pretty much complete ones — ended up on the cutting-room floor. This is one, which addressed/describes the "Cosmological Interpretation" of quantum mechanics, that I thought...
August 12th, 2019 | 58 comments | view blog entry & comments
