Zenith Grant Awardee
Larissa Albantakis
University of Wisconsin, Madison
Project Title
Distinguishing intrinsic intelligence from automatic behavior
Project Summary
Agents are open systems that interact dynamically and informationally with their environment. In biological, evolved systems, more intelligent behavior is typically associated with greater autonomy from the environment. Simple systems are thought to act in an automated, reflexive manner, while intelligent organisms perform complex tasks in an autonomous, context-dependent way and rely increasingly on internal states, such as memory or learned, adjustable preferences. These preconceived biological notions are being challenged by recent advances in artificial intelligence (AI). Artificial neural networks (ANNs) already achieve superhuman performance in tasks that supposedly require intelligence, creativity, and intuition. Yet the classical feedforward architecture of standard ANNs suggests that they are just "machines running through the motions" that do not fit the notion of autonomous systems as unified wholes distinct from their environment. Here, we will train artificial agents controlled by feedforward ANNs across various task domains and compare them to evolved "Markov Brains", which resemble biological neural networks. To address the question of how these agents do what they do, we will compare the structural, informational, dynamical, and causal properties of the evolved/trained networks. In this way, we aim to elucidate what distinguishes automatic from autonomous actions in mechanistic terms.
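As a minimal illustration of the distinction between reactive and state-dependent control, the Python sketch below contrasts a purely feedforward controller, whose action is fixed by the current sensory input, with a controller whose action also depends on an internal state carried across time steps. The sizes, weights, and toy input stream are assumptions made for this example only; they are not the project's actual agents or tasks.

```python
# Illustrative sketch only: reactive (feedforward) control vs. control
# with internal state. All sizes, weights, and the random toy input
# stream are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(seed=0)

def feedforward_action(w, sensors):
    """A reactive policy: the action depends only on the current input."""
    return int(w @ sensors > 0)

def recurrent_step(w_in, w_rec, sensors, hidden):
    """A stateful policy: the action also depends on a hidden state
    carried across time steps (a simple form of memory)."""
    hidden = np.tanh(w_in @ sensors + w_rec @ hidden)
    return int(hidden.sum() > 0), hidden

sensor_stream = rng.integers(0, 2, size=(5, 3))   # 5 time steps, 3 binary sensors
w_ff = rng.normal(size=3)
w_in, w_rec = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
hidden = np.zeros(4)

for sensors in sensor_stream:
    a_ff = feedforward_action(w_ff, sensors)
    a_rec, hidden = recurrent_step(w_in, w_rec, sensors, hidden)
    # The reactive agent must repeat its action whenever an input repeats;
    # the stateful agent need not, because its hidden state differs.
    print(sensors, a_ff, a_rec)
```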
Technical Abstract
What distinguishes automatic from autonomous actions in mechanistic terms? Here we will address this question using artificial agents equipped with different types of neural architectures: classical feedforward ANNs; "Markov Brains", which resemble biological neural circuits; and recurrent ANNs, a hybrid of the first two. We will train/evolve these systems across three task domains: active perceptual categorization, spatial navigation, and multi-agent interaction. In all cases, our goal is to obtain high task performance with a minimal number of hidden units. The resulting networks will be analyzed using a range of state-of-the-art measures that have been proposed to quantify intelligence and autonomy. These include structural, information-theoretic, and dynamical measures, as well as an analysis of actual causation ("what caused what") (Albantakis et al., 2019), which identifies the causes of an agent's actions. Our goal is to distinguish the quantities that characterize task demands and input-output behavior from those that capture intrinsic differences between the various substrates despite their functional equivalence. This may provide more stringent requirements for autonomous behavior and the means to measure it. In this way, we hope to focus current debates on intelligent behavior and its relation to, or dissociation from, intrinsic intelligence, autonomy, and consciousness.
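As a rough illustration of the kinds of quantities involved, the sketch below computes two simple ones in plain NumPy: a structural check for recurrent wiring and a mutual-information proxy for memory. Both rest on strong simplifying assumptions (binary states, a known connectivity matrix, recorded time series) and stand in for, rather than reproduce, the project's measures; the actual-causation analysis of Albantakis et al. (2019) in particular is considerably more involved.

```python
# Illustrative sketch only: a structural and an informational quantity of
# the general kind compared in the project, under simplifying assumptions.
import numpy as np
from itertools import product

def has_recurrence(adjacency):
    """Structural measure: does the wiring contain a directed cycle?
    A feedforward ANN has none; a recurrent network or Markov Brain may."""
    A = (np.asarray(adjacency) != 0).astype(int)
    reach = A.copy()
    for _ in range(A.shape[0]):
        reach = reach | ((reach @ A) > 0).astype(int)
    return bool(np.diag(reach).any())

def mutual_information_bits(x, y):
    """Informational measure: I(X;Y) in bits from paired discrete samples,
    used here as a crude proxy for memory, e.g. how much the hidden state
    at time t still tells us about the sensory input at time t-1."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv, yv in product(np.unique(x), np.unique(y)):
        p_xy = np.mean((x == xv) & (y == yv))
        p_x, p_y = np.mean(x == xv), np.mean(y == yv)
        if p_xy > 0:
            mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Toy usage with made-up data: a 3-unit circuit whose third unit feeds
# back to the first, plus a short recorded hidden-state / input history.
adjacency = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [1, 0, 0]])
print(has_recurrence(adjacency))                      # True: contains a loop
hidden_t  = [0, 1, 1, 0, 1, 0, 1, 1]
input_tm1 = [0, 1, 1, 0, 1, 0, 0, 1]
print(mutual_information_bits(hidden_t, input_tm1))   # > 0: state retains input
```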