FQXI ARTICLE
October 1, 2022

Mind and Machine: What Does It Mean to Be Sentient?
Using neural networks to test definitions of 'autonomy.'
by Kate Becker
FQXi Awardees: Larissa Albantakis
March 23, 2022


Credit: Nejron, Shutterstock
More than two thousand years ago, the ancient Greeks told the story of Talos, an enormous bronze robot charged with defending Crete. Three times a day, every day, Talos trooped around the island’s perimeter, keeping lookout for enemy ships and heaving rocks at any that dared approach. Talos was a machine, but a machine with a difference: Talos was alive.

Pinocchio, Pygmalion, the golems of folklore—they all speak to our preoccupation with how the inanimate becomes animate. Whether brought to life by the magic of a fairy’s wand, a secret word, or ichor, the blood of the gods, they straddle the line between living and nonliving, mind and machine.

And they have stayed locked in the pages of storybooks—until now. Computers are the new embodiment of these living machines, and the more powerful they become, the more human they seem. Artificial intelligence can best us at tasks once assumed to require uniquely human intellect. AI computers can defeat humans at Go and chess. They can name everyone in our family photos (even the third cousins twice removed) and translate from Yiddish to Norwegian. With each new flicker of creativity and intuition, they are emboldening researchers and philosophers to ask exactly where the line between human and machine really sits.

"I’ve always been interested in the question of what makes us sentient beings separate from our environment," says Larissa Albantakis, a computational neuroscientist at the Wisconsin Institute for Sleep and Consciousness at the University of Wisconsin-Madison. But the question gets snarled up from the get-go. What do we mean by "sentient," or "conscious," anyway? Albantakis is hoping to untangle the mystery by pulling at the thread of autonomy.

"What makes us sentient beings separate from our environment?"
- Larissa Albantakis
Merriam-Webster defines autonomous as "existing or acting separately from other things or people." Sounds simple enough. But the definition starts to break down even for some living things. "Take a slime mold," says Albantakis, referring to an organism that can live freely as a single cell but can also swarm with others and fuse into one coordinated mass. "It’s not clear if the whole is one thing, or if the individual cells within it are actually individual entities." Is the slime mold autonomous? And if we struggle to answer that question for a living thing, what happens when we ask it of something that isn’t even alive? "These are questions that we don’t really have a way to answer yet, and not even a good sense of which qualities are relevant for answering this question," she adds.

"Autonomy is like life: We know it when we see it, but defining it is difficult," says Daniel Polani, an artificial intelligence expert at the University of Hertfordshire, UK. "If we have an ant colony, is the individual ant an autonomous system? Is the colony an autonomous system? If two colonies merge, is the whole system autonomous? Suddenly, autonomy is not a well-defined notion anymore."

Researchers working in information theory, causal theory, and dynamical systems have all come forward with definitions that reflect the attitudes and ways of thinking of their chosen fields. Yet there is still no accepted consensus on how to define or quantify autonomy. To edge closer to one, Albantakis is placing the definitions in head-to-head competition. Which will deliver the most coherent measure of autonomy?

The measure of a machine

In the popular imagination, the gold standard of human-like computer intelligence is the Turing test. The concept is simple: If a computer can fool a human conversation partner into believing it is a person, then it passes the test. How the computer accomplishes that task, whether it attaches meaning to the words passing in and out of its processors, is beyond the scope of the test. The Turing test is exclusively about behavior that you can observe on the outside.


Larissa Albantakis, University of Wisconsin-Madison
That is consistent with the "functionalist" perspective that prevails in contemporary neuroscience, says Albantakis. But, she argues, autonomy is something that happens on the inside. After all, the exact same behavior can be performed autonomously or reflexively. A can-can dancer and a patient tapped with a reflex hammer both kick, but only one of them is doing so autonomously. To figure out whether an act is truly autonomous, then, you must ask not just what the action is, but how it is happening.

One reason today’s AI systems can rival—and sometimes best—human intelligence is that their "thinking" happens over networks of connected nodes that are roughly analogous to the neurons inside a living brain. Over time, as the AI learns, the connections between nodes adapt to become weaker or stronger, and the system gets better at its job, whether that’s telling pictures of cats from pictures of dogs, playing Space Invaders, reading lips, or any number of tasks at which neural networks excel.
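The core idea of learning by strengthening and weakening connections can be sketched in a few lines. The following is a minimal, hypothetical illustration (a single artificial neuron trained with a perceptron-style update rule on a toy task), not the architecture of any AI system mentioned in the article:

```python
# Illustrative sketch: a single artificial "neuron" whose connection
# weights strengthen or weaken as it learns. The update rule is the
# classic perceptron rule; the toy task (learning logical AND) is
# purely hypothetical.

def train_neuron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights, adapted during learning
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # strengthen or weaken each connection in proportion to its input
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy task: learn logical AND from four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Note that only the strengths of fixed connections change here; the wiring diagram itself never does, which is exactly the limitation the next paragraph describes.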

But human brains can do something these neural networks typically can’t: create connections where there were none before. Think of a neural network as a city’s traffic grid: Most artificial systems are limited to adding and subtracting lanes. Living brains, on the other hand, come equipped with the machinery to build entirely new streets, tunnels, and bridges that bring far-flung neurons into close communication, or create new loops.

Maze Runner

To probe the differences between various measures of autonomy, Albantakis wanted to apply them to an AI that, like a living brain, has the capacity to evolve new connections. She found the ideal test subjects in artificial organisms called "animats." Albantakis’ animats come from a particular "species" called Markov Brains, developed by computational biologist Chris Adami of Michigan State University, in East Lansing. They are made up of tiny neural networks consisting of just a few neurons, plus motors and sensors that enable them to move in and sense the environment. (Albantakis’ animats are computer simulations, but a handy engineer could build real ones without much trouble.)
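The sensor-brain-motor loop of an animat can be sketched schematically. The toy below is hypothetical, not Chris Adami's actual Markov Brain implementation: a hand-wired rule table stands in for the evolved neural network, and the sensor and motor names are invented for illustration:

```python
# Hypothetical toy animat: two wall sensors, a tiny "brain" (here a
# rule table standing in for an evolved neural network), and a motor
# output. Illustrative only; not the actual Markov Brain code.

class Animat:
    def __init__(self, brain):
        self.brain = brain  # maps sensor readings to motor commands

    def step(self, left_wall, right_wall):
        # sense the environment, then act on it
        return self.brain[(left_wall, right_wall)]

# One hand-wired brain: steer away from whichever side senses a wall.
brain = {(0, 0): "forward",
         (1, 0): "turn_right",
         (0, 1): "turn_left",
         (1, 1): "reverse"}
animat = Animat(brain)
```

In the real experiments, of course, this mapping is not hand-wired: it is encoded in a genome and shaped by selection over many generations.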

Albantakis set her animats loose in a series of simulated mazes, giving them a variety of visual signals to cue them to turn right or left. Over many generations of simulated trials, many animats evolved to solve the maze perfectly.

On the outside, these high-achieving animats were indistinguishable. They all displayed the same behavior: perfect maze-solving. But Albantakis discovered that they were actually very different "under the hood." In some, each neuron handed a signal off to the next, like "flips along the processing line." This is what computer scientists call "feed forward" architecture. Other animats had evolved to include feedback connections, with signals looping through neurons and back again. Computer scientists call these "recurrent" networks.
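The architectural difference can be made concrete with two toy units. These are hypothetical sketches, not the animats' actual evolved networks: the feedforward unit's output depends only on its current input, while the recurrent unit feeds its own output back into the next step:

```python
# Illustrative contrast between feedforward and recurrent wiring.
# Hypothetical toy units, invented for this example.

def feedforward_unit(x):
    # signal passes straight through: output depends only on the
    # current input (here, a simple inverter)
    return 1 - x

class RecurrentUnit:
    def __init__(self):
        self.state = 0  # internal state, fed back in at every step

    def step(self, x):
        # output depends on the current input AND the unit's own history
        out = x ^ self.state
        self.state = out  # feedback loop: output becomes part of the next input
        return out
```

Feeding the same input twice always gives the same output from the feedforward unit, but can give different outputs from the recurrent one, because its internal loop carries a memory of the past.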

If feedforward systems resemble the nervous system impulses that trigger a knee-jerk reflex, recurrent systems are more like the brainwork that goes into a can-can dancer’s kick: listening to the music, recalling the steps she’s learned, attending to exactly how her calf extends from her ruffled petticoats. Only the recurrent systems should truly "count" as autonomous, argues Albantakis.

But only some measures of autonomy were able to capture the difference between feedforward and recurrent architectures. "Some measures ultimately capture features of the environment rather than of the agent itself," says Albantakis. "We want to base our measure of autonomy on a causal description of its underlying mechanisms rather than observed correlations."

Crossing Boundaries

Insights from these artificial organisms could one day illuminate what happened deep in biological history, when inanimate molecules first combined to become primordial living things.

Biologists don’t typically examine questions like autonomy. "Work on concepts such as autonomy has been largely dismissed by mainstream biologists as ’untestable speculation about old fashioned ideas that do not really qualify as science,’" says Keith Farnsworth, a theoretical biologist at Queen’s University Belfast, UK. But Albantakis’ "genuinely quantitative effort" and her willingness to cross traditional boundaries between disciplines may change that, Farnsworth argues.

"If we have some abstract notion of autonomy, we can go back to the origins of life and living systems and we can see whether interacting proteins form a rudimentary autonomous system," says Albantakis. "More generally, if we have a working measure of autonomy, we can then apply it to all sorts of systems, biological or not—individuals or groups of individuals, interacting proteins, neurons, or brain regions—and quantify the degree to which they are independent from the environment."

Much of this work is still speculative, but it’s just the kind of cross-disciplinary thinking that excites Polani. "The interesting stuff happens at the boundary," he says—a thought that rings just as true at borderlands between academic disciplines as it does at the unmapped frontier of autonomy, consciousness, and life.
