CATEGORY:
Wandering Towards a Goal Essay Contest (2016-2017)
TOPIC:
A Tale of Two Animats: What does it take to have goals? by Larissa Albantakis
Author Larissa Albantakis wrote on Mar. 7, 2017 @ 16:35 GMT
Essay Abstract: What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms (“animats”) controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains ‘process’ information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does.
Author Bio: Larissa Albantakis is an Assistant Scientist at the Wisconsin Institute for Sleep and Consciousness, at the University of Wisconsin—Madison. She obtained her Diploma in physics from Ludwig-Maximilians University in Munich in 2007, and her PhD in Computational Neuroscience from Universitat Pompeu Fabra in Barcelona in 2011. She has been at the University of Wisconsin since 2012, working together with Giulio Tononi on Integrated Information Theory, and has recently been awarded a ‘Power of Information’ Independent Research Fellowship by the Templeton World Charity Foundation.
Rene Ahn wrote on Mar. 8, 2017 @ 01:50 GMT
Hi Larissa,
Nice example, with a fun discussion. I am not (yet?) a "Tononi believer" myself, but it does get more convincing perhaps where you explain that more complicated environments give rise to more "integrated" architectures (when adding more types of blocks etc.).
If there is indeed such a trend (likely) then I wonder whether you investigated a possible connection here with information compression or even Kolgomorov complexity?
Kind Regards
Rene Ahn (2855)
Author Larissa Albantakis replied on Mar. 8, 2017 @ 05:21 GMT
Dear Rene,
Thank you for your comment and for pointing to compression / Kolmogorov complexity. On a practical level there is indeed a connection. In fact we use compression as a proxy for integrated information $\Phi$ in real neural recordings (see Casali AG, Gosseries O, et al. (2013) A theoretically based index of consciousness independent of sensory processing and behavior. Sci Transl Med 5:198ra105). The idea is that a perturbation will have a complex (incompressible) response in a highly differentiated and integrated system, but only a local or homogeneous (highly compressible) response in a modular, disconnected, or homogeneous system.
We also found a correlation between compression measures and $\Phi$ in a study on elementary cellular automata (Albantakis & Tononi, 2015).
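To make the compression idea concrete, here is a minimal sketch (for illustration only, not the published PCI pipeline): it binarizes a channels-by-time perturbation response and uses the zlib-compressed length as a crude stand-in for the Lempel-Ziv complexity used in Casali et al.; the threshold and the toy signals are arbitrary assumptions.

```python
import zlib
import numpy as np

def compression_complexity(response, threshold=None):
    """Crude complexity proxy: length of the zlib-compressed, binarized
    channels-by-time response matrix (a stand-in for the Lempel-Ziv
    complexity used in perturbational measures such as the PCI)."""
    if threshold is None:
        threshold = np.median(np.abs(response))
    binary = (np.abs(response) > threshold).astype(np.uint8)
    return len(zlib.compress(np.packbits(binary.flatten()).tobytes(), 9))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

# Homogeneous response (e.g., a disconnected or stereotyped system):
# every channel shows the same slow wave, so the pattern compresses well.
homogeneous = np.tile(np.sin(2 * np.pi * 3 * t), (16, 1))

# Differentiated response: every channel responds differently.
differentiated = rng.standard_normal((16, 200))

print(compression_complexity(homogeneous))     # small compressed size
print(compression_complexity(differentiated))  # much larger compressed size
```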
With respect to the theoretical issues discussed here, intrinsic information and meaning, what is important is characterizing the entire cause-effect structure of the system rather than just its $\Phi$ value (which is just a number). As I argue in the essay, intrinsic information must be physical, and the actual mechanisms of the system matter. By contrast, algorithmic information is, by definition, a measure of extrinsic information: it explicitly disregards the actual mechanisms of the system (neural network) and seeks the shortest program with the same output. For intrinsic information and intrinsic meaning, the implementation matters. To recap the essay, the proposal is that meaning is not in what the system is doing, but in what it is, and algorithmic information only captures the "doing".
I'm looking forward to reading your interesting essay more thoroughly soon.
Best regards,
Larissa
Shaikh Raisuddin replied on Mar. 8, 2017 @ 07:30 GMT
Larissa
Thanks for your reply.
To move from high potentiality to low potentiality is the inborn nature of matter, and is the inborn goal of matter.
The goal in question is to differentiate between internal potentiality and external potentiality and to steer motion.
Rene Ahn wrote on Mar. 8, 2017 @ 01:53 GMT
Oops, I mean of course Kolmogorov.
Shaikh Raisuddin wrote on Mar. 8, 2017 @ 05:20 GMT
Larissa Albantakis,
Good essay!
The questions that remain are: 1) how does the system acquire stability with togetherness? 2) what internal state creates a goal? 3) what is the internal disciplining principle? and 4) how does the system replicate?
Author Larissa Albantakis replied on Mar. 8, 2017 @ 05:52 GMT
Dear Shaikh,
Thank you! And indeed, those are very important questions. As admitted in the essay, there is still a long way to go before we understand what kind of cause-effect structure would correspond to goals. As part of the integrated information research project, before we get to goals, we are currently exploring what kind of cause-effect structure would be required to have intrinsic information about spatial relations.
With respect to 1), applying the IIT framework, we can assess whether a system is a stable integrated complex across its dynamics (and did so recently for the fission yeast cell cycle network, to appear soon, see ref 18 in the essay). In this way we can also gain insights about which mechanisms contribute to the stability, as opposed to the function of the system.
About 3), the animat experiments show that integrated structures have an advantage in complex environments even if the selection is purely based on fitness. As outlined in the essay, the main reasons are that integrated systems are more economical and more flexible (for more details see the refs given in the essay).
Finally, with respect to 4), in the artificial evolution scenario described, the animats are simply copied into the next generation with a fitness-dependent probability. In general, however, the notion of intrinsic information outlined here applies to artificial systems just as much as to biological systems. Accordingly, being a self-replicator is not a necessary requirement for having goals. But of course it is crucial for the question of how those systems developed in nature in the first place.
Best regards,
Larissa
Lorraine Ford wrote on Mar. 9, 2017 @ 00:08 GMT
Dear Larissa,
Why is the “fitness” 47% at the start, when there are no connections between elements, sensors and motors? Surely the fitness should be 0 if the Figure 1 model has no connections i.e. if there is no ability to catch food or avoid danger?
If the animats weren’t already fully fit enough to survive in the environment, then how did they survive to generation 2, let alone survive to generation 60,000?
Author Larissa Albantakis replied on Mar. 9, 2017 @ 00:33 GMT
Dear Lorraine,
Thanks for your thorough reading. The initial 47% is a technical issue. If the animat is just sitting still (which it is without connections), it gets hit by ("catches") some blocks correctly and correctly avoids some others. 0% fitness would correspond to doing the task exactly wrong, i.e. catching all the large blocks and avoiding all the small blocks. One could rescale fitness to 0 for no connections, with negative values for animats that do worse than doing nothing at all. That wouldn't affect any of the results.
As for your second question, after each generation the animats are selected by the algorithm probabilistically dependent on their fitness. If they all do terribly, then each of them has the same probability of 'reproducing' into the next generation.
The population size is kept fixed at 100 animats. So it can be the case that some animats are copied several times, while others are not copied at all.
The genomes of the animats in the new population are then mutated with low probability, and some of the mutated animat offspring may now have a first connection that allows them to have a little bit higher fitness in generation 1 (or whenever such a mutation first happens).
These slightly fitter animats then have a higher probability of 'reproducing' into the next generation and so on. The way to see this is that it's not the animat itself that is put into the next generation, but its mutated offspring, which can be fitter than its parent.
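For concreteness, here is a minimal sketch of this kind of fitness-proportional selection with mutation. It is not the actual animat code: `evaluate_fitness` is a toy placeholder, and all parameter values are purely illustrative.

```python
import random

POP_SIZE = 100          # the population size is kept fixed
GENOME_LENGTH = 1000    # illustrative; real animat genomes differ
MUTATION_RATE = 0.005   # per-site mutation probability (illustrative)

def evaluate_fitness(genome):
    """Toy placeholder: in the real simulations this would build a Markov
    Brain from the genome and score it on the block catch/avoid task."""
    return 0.47 + 0.53 * sum(genome) / len(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in genome]

population = [[0] * GENOME_LENGTH for _ in range(POP_SIZE)]

for generation in range(100):
    fitness = [evaluate_fitness(g) for g in population]
    # Fitness-proportional selection: fitter animats are more likely to be
    # copied into the next generation (possibly several times), while others
    # may not be copied at all. If all fitness values are equal, every animat
    # has the same chance of 'reproducing'.
    parents = random.choices(population, weights=fitness, k=POP_SIZE)
    # It is the mutated offspring, not the parent itself, that enters the
    # next generation, and it may be fitter than its parent.
    population = [mutate(parent) for parent in parents]
```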
I hope this made sense! Let me know if you still have questions.
Best,
Larissa
Alan M. Kadin wrote on Mar. 14, 2017 @ 02:40 GMT
Dear Dr. Albantakis,
I read your essay with great interest. Your studies of even very small model neural networks show clearly that they evolve adaptive behavior which mimics that in biological organisms.
I also address the issue of adaptation in my own essay,
“No Ghost in the Machine”. I argue that recognition of self, other agents, and a causal narrative are built into specific evolved brain structures, based on neural networks, which create a sense of consciousness as part of a dynamic model of the environment. The reason that this is such a difficult problem is that we are being misled by the subjective perceptions of our own minds.
Also, I noticed that you work at an Institute for Sleep and Consciousness. In my essay, I cited the work of Prof. Allan Hobson at Harvard, who emphasizes the importance of the dream state as an alternative conscious state that can provide essential insights. Do you have any thoughts about this?
Alan Kadin
Author Larissa Albantakis replied on Mar. 15, 2017 @ 04:18 GMT
Dear Dr. Kadin,
Thank you for your interest! Indeed, sleep is a very interesting state for consciousness research as it is possible to compare conscious vs. unconscious levels in the same state using so-called non-response paradigms. Taking consciousness as phenomenology, dreaming clearly counts as being conscious. I also happened to notice that the Scientific American article about sleep you cited in your essay in fact describes research performed at the Wisconsin Center for Sleep and Consciousness (please see our website http://centerforsleepandconsciousness.med.wisc.edu/index.html for more interesting experimental work being done in this field).
It was a pleasure reading through your essay, and I hope you found the notion of causal control/autonomy advocated in my essay of interest. While the dynamical system as a whole (including the agent) may be dynamically determined, from the intrinsic perspective of the agent itself in its current state within that environment, there are causal constraints on its mechanisms from within the agent and from the environment. In this way, systems with the right kind of recurrent connections can causally constrain themselves above the background of influences from the environment.
The animats are relevant to ideas and theories about "dynamic models of the environment" precisely because they provide an excellent model system to test the proposed ideas. What kind of mechanistic structure would be necessary to have any kind of "model of the environment"? Do the simple animats have it, do only some of them, or none at all? And if not, then why not? What is missing?
Best regards,
Larissa
Member Simon DeDeo wrote on Mar. 15, 2017 @ 14:26 GMT
Dear Larissa,
It was fun to catch up on your animats work. You make an unusual move here—at least from the point of view of many biologists, who follow Dan Dennett and like to reveal goal-directed behavior to be nothing but selection. We take the "intentional stance" because it's so useful as a prediction tool.
By contrast, you want to locate goals through the causal powers that a system's internal representations possess. A lot of the essays this year have invoked information processing as a source of something meaningful. Yet it's never been entirely clear to me how we can really distinguish dynamics from computation (I try a different tack in my essay, talking about memory vs. memorylessness, but this only works as a negative case while you have an explicitly positive criterion).
A while ago at SFI I remember a debate about whether the gas in the seminar room was performing a computation or not. Many of the computer scientists said "sure, why not." But nobody really felt satisfied by it. Computer scientists are great at recognizing what's a paper in computer science, but are not so great at telling us how to spot a computation in the wild.
You've just jumped in and said, hey, there are certain causal features we expect to see in a system that's actually thinking. And then (if I understand correctly) you've attacked the "meaning from selection" story by showing that your animats might appear to have goals, but under this stricter notion, some actually don't.
Your essay makes me want to suggest an experiment: what happens when animats interact? A concern with the setup as it stands is that if Phi is going up as the environment becomes more interesting, it could just be that complexity is leaking in from the environment—the system is mirroring interesting things that happen outside. But if you give animats a very simple game-theoretic problem, and they evolve towards high-Phi systems regardless, that would be a lovely ex nihilo demonstration. Famously, Prisoner's Dilemma leads to all sorts of complexity, while being (at least on the game-specification side) a zero-memory, one-bit process. What would happen? It would be fun to correlate properties of the payoff matrix with Phi.
Yours,
Simon
Author Larissa Albantakis replied on Mar. 16, 2017 @ 05:04 GMT
Dear Simon,
Good to hear from you. Your comment made my day, as you indeed captured the essence of my essay. The animats are such a great model system as they force one to consider the implementation of suggested potential solutions to intrinsic meaning, based on "information processing", "models about the environment", etc. Most of the time these ideas are presented abstractly, sound really great, and resonate with many people, but on closer consideration fail to pass the implementation test.
With respect to the question of dynamics vs. computation, and whether the gas in the seminar room performs a computation, David Chalmers addressed a similar point here: Chalmers, D.J. (1996). Does a rock implement every finite-state automaton? Synthese 108, 309–333. It's about mapping any kind of computation onto a system that can assume various states. I think the conclusion is that in order to say that two systems perform the same computation, it is not sufficient for them to have a dynamical sequence of states that can be mapped onto each other. Instead, there has to be a mapping of all possible state transitions, which basically means the same causal structure, i.e. a mapping of the causal implementation.
Along these lines, computation, in my view, requires knowing all counterfactuals. I.e. to know that an AND gate is an AND gate and performs the AND computation, it is not sufficient to know that it transitions from 11 -> 1. One needs to know all possible input states (all possible counterfactuals) and the resulting output state.
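A toy sketch of that point: a single observed transition is compatible with several different gates, and only the full set of counterfactual input-output pairs (the truth table) identifies the computation. The particular set of candidate gates below is, of course, just an arbitrary example.

```python
from itertools import product

GATES = {
    "AND":    lambda a, b: a & b,
    "OR":     lambda a, b: a | b,
    "XOR":    lambda a, b: a ^ b,
    "COPY-A": lambda a, b: a,
}

def consistent_gates(observations):
    """Gates compatible with a set of observed (input, output) pairs."""
    return [name for name, gate in GATES.items()
            if all(gate(*inputs) == output for inputs, output in observations)]

# Observing only the single transition 11 -> 1 does not identify the gate:
print(consistent_gates([((1, 1), 1)]))           # ['AND', 'OR', 'COPY-A']

# Knowing all counterfactual inputs (the full truth table) does:
and_truth_table = [((a, b), a & b) for a, b in product([0, 1], repeat=2)]
print(consistent_gates(and_truth_table))         # ['AND']
```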
With respect to game theory, I know that Chris Adami and Arend Hintze have successfully applied the animats to games such as the prisoner's dilemma, but we haven't measured their integrated information in such environments yet. Memory does play a crucial role for evolving integration: games that can be solved merely by "reflexes" based on current sensory inputs will produce mostly feedforward systems. Evaluating the animats on multiple-game versions with different pay-off matrices should indeed be interesting. Thank you for bringing that up! Relatedly, we are currently evaluating "social" animats that can sense other agents, and have so far mostly replicated the past results.
Best regards,
Larissa
James Arnold wrote on Mar. 16, 2017 @ 00:24 GMT
Hello Larissa
Your project sounds fascinating, and must have been enjoyable.
As you know, a crucial element in the experiment is the designer's goal. Without the designer there is no seeking, and no experiment.
I'm not suggesting a religious significance to seeking, or intention, but rather, that there seems to be a presumption that seeking and avoiding, however rudimentary, can develop in a truly deterministic system. Goal-seeking behavior may seem unproblematic in a deterministic world just because it has emerged in ours, but try an experiment of any complexity without programming an appearance of goal-seeking and watch how many generations it takes for it to emerge on its own(!)
You write of "goal-directed behavior" that "by the principle of sufficient reason, something must cause this behavior." You might be interested in my essay about spontaneity being more fundamental than causation, that it may be causally influenced, but essentially free of causation.
Anonymous replied on Mar. 17, 2017 @ 05:03 GMT
Dear James,
Thank you for your comment and taking the time to read my essay! Indeed, in these artificial evolution experiments, some kind of selection bias has to be assumed that leads to certain systems being preferred over others. In the absence of biased selection, causal structure may emerge, but will not be stable for more than a couple of generations.
I read your essay about spontaneity with much interest. A possible connection could be that in the described causal analysis we treat any element within the system that is not being constrained as maximum entropy, and the cause-effect power of a mechanism is also evaluated against maximum entropy. Certainly, though, my analysis starts by assuming physical elements with at least two states that can causally constrain each other, and leaves room for more fundamental concepts.
The point I want to make with the essay is actually quite similar to Searle's Chinese Room argument, but aims at least at a partial solution. The two animats perform the same task, but in the feedforward case there is no system that could possibly have any understanding of the environment (or anything else), as there is no system from the intrinsic perspective in the first place. This animat would correspond to the lookup tables. The other animat does have a small but nevertheless integrated core that constrains itself and thus at least forms a minimal system that exists from the intrinsic perspective above a background of influences from the environment.
Best regards,
Larissa
Author Larissa Albantakis replied on Mar. 17, 2017 @ 05:05 GMT
Sorry, somehow I wasn't logged in.
Larissa
Satyavarapu Naga Parameswara Gupta wrote on Mar. 16, 2017 @ 09:38 GMT
Dear Larissa Albantakis,
Nice essay on animats,
Your ideas and thinking are excellent for eg…
By examining the informational and causal properties of artificial organisms (“animats”) controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning.
Some of the animats even lack the conditions to be separate causal entities from their environment. Yet, observing their behavior affects our intrinsic mechanisms. For this reason, describing certain types of directed behaviors as goals, in the extrinsic sense, is most likely useful to us from an evolutionary perspective.
A Good idea, I fully agree with you…………
………………… At this point I want to ask you to please have a look at my essay, where ……………reproduction of Galaxies in the Universe is described. Dynamic Universe Model is another mathematical model for the Universe. Its mathematics show that the movement of masses will have a purpose or goal; different Galaxies will be born and die (quench) etc… just have a look at my essay… “Distances, Locations, Ages and Reproduction of Galaxies in our Dynamic Universe”, where UGF (Universal Gravitational force), acting on each and every mass, will create a direction and purpose of movement…..
I think intention is inherited from the Universe itself by all biological systems. For your information, Dynamic Universe Model is totally based on experimental results. Here in Dynamic Universe Model, Space is Space and time is time, at the cosmology level or at any level. In classical general relativity, space and time are convertible into each other.
Many papers and books on Dynamic Universe Model were published by the author on unsolved problems of present day Physics, for example ‘Absolute Rest frame of reference is not necessary’ (1994) , ‘Multiple bending of light ray can create many images for one Galaxy: in our dynamic universe’, About “SITA” simulations, ‘Missing mass in Galaxy is NOT required’, “New mathematics tensors without Differential and Integral equations”, “Information, Reality and Relics of Cosmic Microwave Background”, “Dynamic Universe Model explains the Discrepancies of Very-Long-Baseline Interferometry Observations.”, in 2015 ‘Explaining Formation of Astronomical Jets Using Dynamic Universe Model, ‘Explaining Pioneer anomaly’, ‘Explaining Near luminal velocities in Astronomical jets’, ‘Observation of super luminal neutrinos’, ‘Process of quenching in Galaxies due to formation of hole at the center of Galaxy, as its central densemass dries up’, “Dynamic Universe Model Predicts the Trajectory of New Horizons Satellite Going to Pluto” etc., are some more papers from the Dynamic Universe model. Four Books also were published. Book1 shows Dynamic Universe Model is singularity free and body to collision free, Book 2, and Book 3 are explanation of equations of Dynamic Universe model. Book 4 deals about prediction and finding of Blue shifted Galaxies in the universe.
With axioms like… No Isotropy; No Homogeneity; No Space-time continuum; Non-uniform density of matter(Universe is lumpy); No singularities; No collisions between bodies; No Blackholes; No warm holes; No Bigbang; No repulsion between distant Galaxies; Non-empty Universe; No imaginary or negative time axis; No imaginary X, Y, Z axes; No differential and Integral Equations mathematically; No General Relativity and Model does not reduce to General Relativity on any condition; No Creation of matter like Bigbang or steady-state models; No many mini Bigbangs; No Missing Mass; No Dark matter; No Dark energy; No Bigbang generated CMB detected; No Multi-verses etc.
Many predictions of Dynamic Universe Model came true, like Blue shifted Galaxies and no dark matter. Dynamic Universe Model gave many results otherwise difficult to explain
Have a look at my essay on Dynamic Universe Model and its blog also where all my books and papers are available for free downloading…
http://vaksdynamicuniversemodel.blogspot.in/
Best wishes to your essay.
For your blessings please…………….
=snp. gupta
Tommaso Bolognesi wrote on Mar. 17, 2017 @ 14:47 GMT
Dear Larissa,
nice and dense essay! One of the aspects that intrigued me most and that, I believe, adds much originality to your work, is the attempt to tackle goal-oriented behaviour under the perspective of the ‘intrinsic’ features of the agent - beyond what appears to the external observer. However, I’m still trying to understand clearly the sense in which the use of internal cause-effect information, based on conditional state distributions and the IIT tools, should yield a ‘more internalised’ notion of goal-oriented behaviour for an open subsystem than, say, the plain detection of a local entropy decrease. In which sense is the former more internal? Does it refer to an issue of internal interconnection architecture, high Phi values, and ultimately growing consciousness?
One of the most attractive (at least to me) hard questions related to the 2017 Essay Contest is the difference between re-acting and acting: when and how does the ability to act spontaneously, as opposed to reacting (to, say, the arrival of pieces of different sizes) arise in artificial or natural systems? As far as I have seen, none of the essays has tackled this issue directly. What (new?) information-theoretic 'trick' is required for obtaining an animat that starts doing something autonomously and
for no reason, i.e., not as a reaction to some external stimulus? In your opinion, is it conceivable to characterize (and synthesize) this skill just in the framework of IIT [… yielding an animat that stops catching pieces and says “Larissa, give me a break!” :-] ?
Another small question: in the simulation of [8] it seems that fitness increases visibly, while Phi doesn’t. In general, shouldn’t one expect them to roughly grow together?
Thank you!
Tommaso
http://fqxi.org/community/forum/topic/2824
Author Larissa Albantakis replied on Mar. 17, 2017 @ 15:45 GMT
Dear Tommaso,
Thank you very much for your comment and insightful questions. By contrast to something like measures of local entropy decreases, the IIT formalism does not just yield a quantity (integrated information) but also a characterization of the system, its cause-effect structure, which is the set of all system mechanisms that constrain the past and future states of the system itself. The mechanisms specify everything within the system that makes a difference to the system itself. In this way I don't just find out whether there is an intrinsic system in the first place, but also get a picture of its capacity for 'understanding', of what matters to the system and what cannot possibly matter to the system because it doesn't have the right mechanism to pick it up. I hope this helped. Characterizing more precisely how intrinsic meaning could arise from the cause-effect structure is work in progress in our lab.
I completely agree on your point regarding 'acting' vs. 'reacting'. In fact, this is basically the topic of my fellowship project for the next 3 years. Our goal is to quantify when and how strongly an action was caused by the system as opposed to the environment. Autonomous action here means that the system's action is not entirely driven by its current sensory inputs from the environment. Making a choice based on memory, however, would count as autonomous. If you look at the website (ref [8]) under the 'task simulation' tab and set it to trial 55, for example, you can see that the animat already shows a little bit of autonomous behavior in that sense. It first follows the block, then goes in the other direction, then follows again. This means that its behavior didn't just depend on the sensory inputs, but is context-dependent on its own internal state. This is a little different than your definition of autonomy ('doing something for no reason'). That could be achieved with just a little noise inside the system.
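As a purely observational (and much weaker) stand-in for the causal analysis we are after, one can already check whether the same sensor state ever leads to different motor outputs; if it does, the behavior cannot be a pure reflex and must also depend on internal state. A minimal sketch with made-up toy data:

```python
from collections import defaultdict

def is_pure_reflex(trials):
    """trials: (sensor_state, motor_output) pairs observed over time.
    Returns True if the motor output is a fixed function of the current
    sensor state alone. False means the same sensor input produced
    different outputs in different contexts, so internal state (memory)
    must have contributed to the behavior."""
    outputs_seen = defaultdict(set)
    for sensors, motors in trials:
        outputs_seen[sensors].add(motors)
    return all(len(outputs) == 1 for outputs in outputs_seen.values())

# Toy data: sensor states and motor outputs as bit tuples.
reflex_agent = [((0, 1), (1, 0)), ((0, 1), (1, 0)), ((1, 0), (0, 1))]
memory_agent = [((0, 1), (1, 0)), ((0, 1), (0, 1)), ((1, 0), (0, 1))]

print(is_pure_reflex(reflex_agent))   # True:  output fully determined by input
print(is_pure_reflex(memory_agent))   # False: same input, different actions
```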
As for your last question: the issue with a trial-by-trial correlation of Phi and fitness is that an animat can always have more Phi than is necessary, as there is no real cost to being more integrated than needed, the way the simulation is set up. Moreover, fitness can also increase due to e.g. a connection from a sensor to a motor (a reflex), which would not increase the integration. In practice, for complex tasks, there should be a lower limit on the amount of integration required for a given task, given constraints on the number of elements, connections, and time available to perform the computations, as integrated systems are more economical than feedforward systems.
Best regards,
Larissa
Tommaso Bolognesi replied on Mar. 20, 2017 @ 16:20 GMT
Thank you.
Making a choice based on internal memory, as opposed to being triggered by external events, is certainly a step towards autonomy, but again you need some internal trigger that induces you to look up that good or bad experience in your memory, compare with the current situation, and decide how to (re)act. You mention that 'doing something for no reason' - perhaps the perfect form of agency - could be achieved with just a little noise inside the system. I also thought about this. You mention it cursorily, but I wonder whether this couldn't in fact be the key to implement agency. Quantum fluctuations have already been envisaged (e.g. by Lloyd) as the random generators at the basis of the computational universe edifice: maybe they play a role also in triggering reactions that appear otherwise as self-triggered, spontaneous actions.
Best regards
Tommaso
Author Larissa Albantakis replied on Mar. 20, 2017 @ 16:49 GMT
Dear Tommaso,
Noise could play an important role for innovation, exploration, and creativity. Yet, if you take autonomy to be causal power of the system itself, noise would not count since it doesn't actually come from within the system but literally out of nowhere. The causal power of the system itself would go down with noise, just as it would decrease through external inputs that drive the system. But I think the divide is just that we have two different views on autonomy (paralleled by the different possible views on free will). One emphasizes the 'free' part: 'being able to act otherwise', making choices without reason. The other emphasizes the 'will' part: 'being determined by oneself as opposed to outside stimuli'. A willed decision would be one that strongly depends on you, your memories, and internal structure, and your best friend can easily predict your choice. This latter sense of autonomy is possible in a deterministic world.
Best regards,
Larissa
Peter Martin Punin wrote on Mar. 17, 2017 @ 18:09 GMT
Dear Larissa,
I carefully read your essay. Your approach and mine are radically different, but this precisely could be a sufficient reason to have a good discussion.
Your essay has a great merit. You honestly describe the constraints a given system has to master so that we can ascribe goals to the system in question: “A system can only ‘process’ information to the extent that it has mechanisms to do so.” And “The cause-effect structure of a system in a state specifies the information intrinsic to the system, as opposed to correlations between internal and external variables. If the goals that we ascribe to a system are indeed meaningful from the intrinsic perspective of the system, they must be intrinsic information, contained in the system’s cause-effect structure (if there is no mechanism for it, it does not matter to the system).” Finally, “Yet, the system itself does not ‘have’ this intrinsic information. Just by ‘processing’ information, a system cannot evaluate its own constraints. This is simply because a system cannot, at the same time, have information about itself in its current state and also other possible states.”
Shortly speaking, for the concept “goal” related to any system to have a meaning, the system in question must be equipped with a lot of natural or artificial devices, where the set of the latter is supposed to be configured in an exactly determined way.
In suggesting that the foregoing is easy to say, but much less easy to realize, and even to model, you are absolutely right.
Well, but do you not agree that the problem is much more fundamental?
To specify the information intrinsic to any system, the required internal causal structure of this system must be “able to specify” information, and this “ability” presupposes other “abilities” like the “ability” to recognize information before and after being specified. So, the more fundamental question is: where do these “abilities” come from?
Yes, “by ‘processing’ information, a system cannot evaluate its own constraints”, but the very fact of evoking systems “‘processing’ information” already implies the presence of “information processors” within these systems, and once again, we have to ask the more fundamental question: where do these “information processors” come from?
And so on.
These “more fundamental” questions, which until further notice have no answers, converge on the problem of generalized irreversibility. In a classical manner going back to Clausius, generalized irreversibility can be formulated as follows: for any system S apparently violating irreversibility, there is a “wider” system S' “comprising” S, so that at the level of S', irreversibility is reestablished. In the classical formulation, notions like “wider systems” or “systems 'comprising' other systems” are rather vague, and so not really appropriate for taking into account the intrinsic information or integrated information you are focusing on.
Now, in order to touch the essential without overly formal developments, let us consider the good old Maxwell's Demon operating on its Boltzmannian gas. In your eyes, Maxwell's Demon perhaps belongs to ancient history, whereas most authors, for diverging motivations going from Landauer's “principle” to whatever it may be, believe that the Demon is not able to accomplish its mission. But on the other hand, the Demon represents an intuitive means to grasp the more fundamental problem behind all the superstructure problems concerning integrated information. So let us proceed as if Maxwell's Demon could do its job.
Operating in the well-known way, the Demon pushes the gas back to its ordered initial state. Under single-step selection conditions, the improbability of the transition would be tremendously high. Considered alone, the gas expresses a genuine irreversibility violation. In fact, the gas is not alone, because of the Demon's presence. Here the “wider system” reestablishing irreversibility is to be interpreted as a system with integrated information, and so all the questions arising with regard to information integration arise again. It is easy to “imagine” – like Maxwell – the existence of the Demon. By contrast, it would be hard – infinitely hard – to equip a mesoscopic, perhaps I should say microscopic, device so that it is able to detect instantaneously the motion state – velocity, acceleration, direction – of each molecule present in the neighborhood of the gate, knowing that in the sole neighborhood of the gate you find an unimaginable number of molecules. Further, the microscopic Demon has to be able to take the good decision instantaneously. And then, the Demon must be conditioned to be a serious, conscientious scientist, respecting meticulously the experimental protocol, and not a trouble-maker misusing its quasi-infinite intelligence to make bad jokes or something else, and this point presupposes moral qualities. And so on. Yet, the foregoing is not just an easy caricature. A task like re-ordering a disordered gas – a simple task in comparison with other tasks implying aims and/or intentions – needs information integration we cannot master, neither technologically nor intellectually. I think you agree.
But now we arrive at the essential: beyond integration problems, there remains the more fundamental problem of generalized irreversibility. Even if the Demon, against Landauer, Szilard, Brillouin, Costa de Beauregard …, actually managed to “generate work by entropy reduction”, generalized irreversibility would not be violated: the transition of the gas from maximal disorder to initial order under single-step selection conditions is tremendously improbable, yes, but the “emergence” of the Demon under the same single-step selection conditions is infinitely more improbable. So, as within any case of generalized irreversibility, the apparent irreversibility violation by the gas is “paid for” by a correspondingly higher improbability at the level of the “wider” system consisting of the gas and the Demon.
As long as the devices required by information integration are given, information integration is hard to formalize, hard to realize, but at least we can conceive it to some extent.
By contrast, in a context like evolution where the devices required by information integration are not given, we have to ask where they come from, and at this level of analysis we merely are lost.
So, in my own paper Daring Group-theoretic Foundations of Biological Evolution despite Group Theory I try to tackle the problem at source, at the fundamental level concerning irreversibility.
Just because of the differences between your really interesting paper and mine, a discussion about both papers would be a pleasure for me.
All the best; good luck
Peter
Author Larissa Albantakis replied on Mar. 26, 2017 @ 17:23 GMT
Dear Peter,
Thank you very much for your insightful comment. I have now had the time to read your essay too and liked it a lot. I completely agree that there is a fundamental problem of how selection can arise in the first place; I hope I made this clear at the very beginning of my essay. In my work, I program selection into the world. What I want to demonstrate is that even if there is a clear-cut selection algorithm for a specific task, this doesn't necessarily lead to fit agents that have intrinsic goals. As you rightly point out, it is a big question where such selection mechanisms arise from in nature.
Best regards,
Larissa
Don Limuti wrote on Mar. 18, 2017 @ 04:56 GMT
Hi Larissa,
I was pleasantly surprised reading your essay. It reminded me of "Vehicles" by Valentino Braitenberg, only with the vehicles replaced by animats, which are much more interesting goal-directed creatures.
Many other scientists would be very tempted to say this completes the essay question by saying that the MUH (Mathematical Universe Hypothesis) is true. And I was completely surprised by: "While we cannot infer agency from observing apparent goal-directed behavior, by the principle of sufficient reason, something must cause this behavior (if we see an antelope running away, maybe there is a lion). On a grander scale, descriptions in terms of goals and intentions can hint at hidden gradients and selection processes in nature, and inspire new physical models."
I believe your agenda is something like: let us pursue this concept of agency and see where it takes us. This is the essence of science.
Thanks for your excellent essay,
Don Limuti
Question: Is there a way to "play" with your animats online?
Author Larissa Albantakis replied on Mar. 19, 2017 @ 16:42 GMT
Dear Don,
Thank you for your nice comment. The artificial evolution of the animats takes quite a bit of computational power, so there is no easy way yet to play around with them. However, there is a little video of one evolution and the behavior of one animat on http://integratedinformationtheory.org/animats.html
There is, however, an online interface to calculate the integrated information of little systems of logic gates: http://integratedinformationtheory.org/calculate.html
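If you prefer a script to the web interface, the calculator is based on the PyPhi Python package. A small sketch, assuming PyPhi's documented Network / Subsystem / compute.phi interface and its default little-endian state-by-node TPM convention (please check the PyPhi documentation for the exact conventions); the three gates below are just an arbitrary example system:

```python
import numpy as np
import pyphi

# A toy 3-node system of logic gates: A = OR(B, C), B = AND(A, C), C = XOR(A, B).
def update(state):
    a, b, c = state
    return (b | c, a & c, a ^ b)

n = 3
# State-by-node TPM. PyPhi's default convention indexes rows in little-endian
# order, i.e. node A corresponds to the least significant bit of the row index.
tpm = np.zeros((2 ** n, n))
for row in range(2 ** n):
    past_state = tuple((row >> i) & 1 for i in range(n))
    tpm[row] = update(past_state)

network = pyphi.Network(tpm)
state = (1, 0, 0)                     # current state of (A, B, C)
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
print(pyphi.compute.phi(subsystem))   # integrated information of the whole system
```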
Best regards,
Larissa
Member Ian Durham wrote on Mar. 20, 2017 @ 01:56 GMT
Hi Larissa,
I wrote you a longer e-mail that I just sent, but in general I found your essay well-written and extremely stimulating. I’m still not entirely convinced that you’ve answered your own question concerning whether or not systems can have “goals.” You suggest that perfect fitness is a goal, but to me, a goal is an internal thing whereas it would seem to me that perfect fitness is largely a response to external stimuli (and by external, I include things like viruses and illness since I’m thinking of goals as related to consciousness here). But maybe I'm wrong. Who knows. Nice essay, though.
Ian
Author Larissa Albantakis replied on Mar. 20, 2017 @ 18:06 GMT
Hi Ian,
Thanks for your comment. I'll be answering your email shortly. For the discussion here, I agree with you that having goals is necessarily intrinsic. That's why I put 'goal' in quotes any time that I referred to it as 'apparently having goals, as ascribed to the agent by some outside observer'. The essay tries to make the point that neither of the animats actually has the goal of perfect fitness intrinsically, although an outside observer would be tempted to describe their behavior as 'having the goal to catch and avoid blocks'.
I then give a necessary condition for having any kind of intrinsic information: being an integrated system that is to some extent causally autonomous from the environment. I moreover claim that the only way to find intrinsic goals is to look at the agents' intrinsic cause-effect structure, and that correlations with the environment won't get us there. What kind of cause-effect structure would correspond to intrinsically having a goal I cannot answer (yet). But there is hope that it is possible, since we know that humans have goals intrinsically.
Best,
Larissa
Stefan Keppeler wrote on Mar. 20, 2017 @ 17:31 GMT
Dear Larissa,
this is a nice summary of some of your own and related work. Now I want to learn more about integrated information theory. Thank you!
After reading many essays here I start seeing crosslinks everywhere...
When you wrote "Think of a Markov Brain as a finite cellular automaton with inputs and outputs. No mysteries." it immediately reminded me of Joe Brisendine's description of bacterial chemotaxis.
And later, when you wrote "one might ask whether, where, and how much information about the environment is represented in the animat’s Markov Brain" I had to think of Sofia Magnúsdóttir's essay who qualitatively analyzes the role of models which an agent must have about its environment.
I'd love to replace (in my essay) my clumsy conditions of being "sufficiently rigid and sufficiently flexible" by something less vague; maybe concepts from integrated information theory could help.
Cheers, Stefan
Vladimir F. Tamari wrote on Mar. 21, 2017 @ 08:54 GMT
Dear Larissa,
I read your essay with interest but found the technical descriptions of the animats beyond my comprehension, although I am very interested in Cellular Automata (CA), which seem to resemble Markov Brains. Anyway, you have certainly attempted a serious answer to the essay question.
My Beautiful Universe Model is a type of CA.
I was interested that you were a sleep researcher - I have recently been interested in how the brain generates and perceives dreams, and noted some interesting observations experienced on the threshold of waking up when I saw ephemeral geometrical patterns superposed on faint patterns in the environment. As if the brain was projecting templates to fit to the unknown visual input.
Another more severe experience along these lines was the 'closed eye' hallucinations I experienced due to surgical anesthesia, which I documented here. The anaesthesia seems to have suspended the neural mechanisms that separate dreams from perceived reality, and I could see both alternately while the experience lasted.
I wish you the best in your researches. It is probably beyond your interest, but do have a look at
my fqxi essay.
Cheers
Vladimir
Author Larissa Albantakis replied on Mar. 24, 2017 @ 03:39 GMT
Dear Vladimir,
Thank you for your comment and for taking the time to read my essay. Indeed, Markov Brains are very related to cellular automata; the only differences are that each element can have a different update function and that the Markov Brain has inputs from and outputs to an environment (but it could also be seen as a section of a cellular automaton within a larger system).
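To make that concrete, here is a minimal toy sketch of a Markov Brain update; the wiring and gate functions are invented purely for illustration (real Markov Brains encode deterministic or probabilistic logic tables in a genome). Each non-sensor element has its own update rule over its inputs, sensors are clamped by the environment, and motors are read out as the output.

```python
# Toy Markov Brain: 2 sensors (S0, S1), 2 hidden elements (H0, H1), 2 motors
# (M0, M1). Unlike a uniform-rule cellular automaton, every non-sensor element
# has its own update function over its own set of inputs.

UPDATE_RULES = {
    # element: (inputs, deterministic update function) -- invented wiring
    "H0": (("S0", "H1"), lambda s0, h1: s0 | h1),
    "H1": (("S1", "H0"), lambda s1, h0: 1 - (s1 & h0)),   # NAND
    "M0": (("H0",),      lambda h0: h0),
    "M1": (("H0", "H1"), lambda h0, h1: h0 ^ h1),
}

def step(state, sensor_input):
    """Synchronous update: sensors are clamped by the environment, every other
    element is computed from the previous state via its own logic function."""
    new_state = {"S0": sensor_input[0], "S1": sensor_input[1]}
    for element, (inputs, rule) in UPDATE_RULES.items():
        new_state[element] = rule(*(state[name] for name in inputs))
    return new_state

state = {name: 0 for name in ("S0", "S1", "H0", "H1", "M0", "M1")}
for sensors in [(1, 0), (0, 1), (0, 0)]:
    state = step(state, sensors)
    print(sensors, "->", (state["M0"], state["M1"]))       # motor outputs
```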
I am very sympathetic to the idea that the universe is in some ways a giant CA. Partly because it would make the connection between my own work and fundamental physics very straightforward, and partly because of the underlying simplicity and beauty.
I am not really a sleep researcher myself. Yet, dreams are an important part of consciousness research. You might find the following work by my colleagues of interest: http://biorxiv.org/content/early/2014/12/30/012443.short
It shows that the responses to seeing a face while dreaming for example are very similar to those of actually seeing a face while awake. Being awake can in this view be seen as a "dream guided by reality". At least some hallucinations then are a mixture of the two states.
All the best,
Larissa
Vladimir F. Tamari replied on Mar. 30, 2017 @ 08:03 GMT
Thank you Larissa for your response and references. It is amazing how much information brain imaging has provided, and yes dreams and reality are inextricably linked by the neural mechanisms that perceive them- the details of how that actually works out is of interest. In the half-awake perceptions I have mentioned and with eyes wide open and the mind alert, I can actually see ephemeral geometrical shapes that the mind seems to throw at , say, a patch of light in the ceiling, as if it is trying to identify or classify it in some way.
I suspect that in normal vision incoming signals are constantly being studied in the same way as perception takes its course. This could be a whole field of experimental study, using dark-adapted subjects shown very faint images and seeing if such visual inputs (or outputs?) are seen. Have you come across anything like this elsewhere?
Best wishes
Vladimr
Gary D. Simpson wrote on Mar. 23, 2017 @ 02:42 GMT
Larissa,
We are Borg. Species a1-c3, you will be assimilated. We are Borg. Resistance is futile:-)
Many thanks for an essay that was both enjoyable and enlightening. I wonder if the animats figure out that they are size 2?
Are there any simulations where the animats of size 1 and size 3 also evolve using similar rules? BTW, what would an animat of size 1 eat? Are there any simulations where the animats can cooperate to attack larger animats? Maybe I run from a lion but me and my buddies will attack a lion if we've got some weapons ..... and have been drinking some courage:-)
You clearly present the meaning of useful information and the difference between information and being ... that is a key concept that many of the essays do not present.
Best Regards and Good Luck,
Gary Simpson
Author Larissa Albantakis replied on Mar. 24, 2017 @ 01:34 GMT
Hi Gary,
Thank you for your time and the fun comment.
We are looking at social tasks where more than one animat interacts in the same environment. There are interesting distinctions that need to be explored further. Something like swarming behavior may require very little integration, as it can be implemented by very simple rules that only depend on the current sensory input. Real interaction, by contrast, increases context dependency and thus on average leads to higher integration. All work in progress.
Best regards,
Larissa
Jochen Szangolies wrote on Mar. 23, 2017 @ 09:30 GMT
Dear Larissa,
thanks for a genuinely insightful essay. At several points, I was afraid you'd fall for the same gambit that's all too often pulled in this sort of discussion---namely, substituting meaning that an external observer sees in an agent's behaviour for meaning available to the agent itself. At each such juncture, you deftly avoided this trap, pointing out why such a strategy just won't do. This alone would have made the essay a very worthwhile contribution---it's all too often that, even in scholarly discussion on this issue, people seem insufficiently aware of this fallacy, and (often inadvertently) try to sell mere correlation---say, the covariance of some internal state with an external variable---as being sufficient for representation.
But you go even further, giving an argument why the presence of
integrated information signals the (causal) unity of a given assemblage. Now, it's not quite clear to me why, exactly, such causal unity ought to bestow meaning
available to the agent. I agree with your stipulation that intrinsic meaning can't arise from knowing: that simply leads to vicious regress (the homunculus fallacy).
Take the above example of correlated internal states and external variables: in order to represent an external variable by means of an internal state, their covariance must, in some sense, be known---in the same way that (my favorite example) one lamp lit at the tower of the Old North Church means 'the British will attack by land' only if whoever sees this lamp also knows that 'one if by land, two if by sea'. Without this knowledge, the mere correlation between the number of lamps and the attack strategy of the British forces does not suffice to decipher the meaning of there being one lamp. But such knowledge itself presupposes meaning, and representation; hence, any analysis of representation in such terms is inevitably circular.
But it's not completely clear to me, from your essay, how 'being' solves this problem. I do agree that, if it does, IIT seems an interesting tool to delineate boundaries of causally (mostly) autonomous systems, which then may underlie meaningful representations. I can also see how IIT helps 'bind' individual elements together---on most accounts, it's mysterious how the distinct 'parts' of experience unify into a coherent whole; to take James' example, how from ten people thinking of one word of a sentence each an awareness of the whole sentence arises. But that doesn't really help getting at those individually meaningful units to be bound together, at least, not that I can see...
Anyway, even though I don't quite understand, on your account, how they work, I think that the sort of feedback structures you identify as being possible bearers of meaning are exactly the right kinds of thing. (By the way, a question, if I may: does a high phi generally indicate some kind of feedback, or are there purely feedforward structures achieving high scores?)
The reason I think so is that, coming from a quite different approach, I've homed in on a special kind of feedback structure that I think serves at least as a toy model of how to achieve meaning available to the agent myself (albeit perhaps an unnecessarily baroque one): that of a von Neumann replicator. Such structures are bipartite, consisting of a 'tape' containing the blueprint of the whole assembly, and an active part capable of interpreting and copying the tape, thus making them a simple model of self-reproduction (whose greatest advantage is its open-ended evolvability). In such a structure, the tape influences the active part, which in turn influences the tape---a change in the active part yields a change in the tape, through differences introduced in the copying operation, while the changed tape itself leads to the construction of a changed active part. Thus, the two elements influence another in a formally similar way to the two nodes of your agents' Markov Brains.
What may be interesting is that I arrive at this structure from an entirely different starting point---namely, trying to exorcize the homunculus mentioned above by creating symbols whose meaning does not depend on external knowledge, but which are instead meaningful, in some sense, to themselves.
But that's enough advertisement for my essay; I didn't actually want to get into that so much, but as I said, I think that there may be some common ground both of our approaches point towards. Hence, thanks again for a very thought-provoking essay that, I hope, will go far in this contest!
Cheers,
Jochen
Author Larissa Albantakis replied on Mar. 26, 2017 @ 23:35 GMT
Dear Jochen,
Thank you for reading and for the nice comment. I have finally had the time to look at your essay, and indeed I think we very much start from the same premise that meaning must be intrinsic. First, to your question: feedforward structures have no integrated information (by definition), because there are always elements that lack causes or effects within the system, no matter how the system boundaries are drawn.
I think the role that the replicators take in your essay is taken by a mechanism's cause-effect repertoire in IIT. By being a set of elements in a state, these elements constrain the past and the future of the system, because they exclude all states that are not compatible with their own current state. The cause-effect repertoire is an intrinsic property of the mechanism within the system; it's what it is. However, by itself, a mechanism and its cause-effect repertoire do not mean anything yet. It is the entire structure of all mechanisms as a whole that results in intrinsic meaning. For example, if there is a mechanism that correlates with 'apples' in the environment, by itself it cannot mean apples. This is because the meaning of apples requires a meaning of 'fruit', 'not vegetable', 'food', 'colors', etc. Importantly, also things that are currently absent in the environment contribute to the meaning of the stuff that is present. The entire cause-effect structure is what the system 'is' for itself.
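To illustrate just the 'constraining the past' part in a drastically simplified way (ignoring the purviews, partitions, and distance measures of the full IIT calculus), here is a toy sketch:

```python
import itertools

def cause_repertoire(update_fn, n_inputs, current_state):
    """Drastically simplified 'cause repertoire': start from a maximum-entropy
    (uniform) distribution over past input states and keep only those states
    that are compatible with the mechanism's current state. (The full IIT 3.0
    calculus additionally involves purviews, partitions, and a distance
    measure; this only illustrates 'constraining the past'.)"""
    past_states = list(itertools.product((0, 1), repeat=n_inputs))
    weights = [1.0 if update_fn(*past) == current_state else 0.0
               for past in past_states]
    total = sum(weights)
    return {past: w / total for past, w in zip(past_states, weights)}

AND = lambda a, b: a & b
# An AND gate currently ON constrains its two inputs maximally...
print(cause_repertoire(AND, 2, 1))  # all probability on (1, 1)
# ...while an AND gate currently OFF constrains them only weakly.
print(cause_repertoire(AND, 2, 0))  # uniform over (0,0), (0,1), (1,0)
```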
What is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation.
Best regards,
Larissa
Jochen Szangolies replied on Mar. 27, 2017 @ 08:40 GMT
Dear Larissa,
thanks for your answer! I will definitely keep an eye on the further development of IIT. Is there some convenient review material on its latest version to get me up to speed?
Your mention of 'absences' as causally relevant evokes the ideas of Terrence Deacon, I wonder if you're familiar with them? He paints an interesting picture on the emergence of goal-directedness ('teleodynamics', as he calls it) from underlying thermodynamic and self-organizing ('morphodynamic') processes via constraints---thus, for instance, self-organization may constrain the thermodynamic tendency towards local entropy maximization, leading instead to stable structures. These constraints are then analyzed in terms of absences.
Cheers,
Jochen
Robert Groess wrote on Mar. 28, 2017 @ 20:22 GMT
Dear Larissa Albantakis,
Thank you for your wonderfully readable and equally rigorous essay on the Tale of Two Animats. The depth of your analysis on artificial "intelligence" is impressive and I also appreciate the point you make regarding, "physics, as a set of mathematical laws governing dynamical evolution, does not distinguish between an agent and its environment." I have not seen that particular perspective before. Thank you for the fresh insight and entertaining read and I have also in the meantime rated your essay.
Regards,
Robert
James Lee Hoover wrote on Mar. 28, 2017 @ 21:24 GMT
Larissa,
A clever presentation, with perhaps a human-paralleled animat development. Do we assume the same primitive neurological processes in the animats' beginnings as in humans 1.5 million years ago (use of fire; how does reproduction fit in)?
In my essay, I say this about AI systems: "artificially intelligent systems humans construct must perceive and respond to the world around them to be truly intelligent, but are only goal-oriented based on programmed goals patterned on human value systems." Not being involved in evolutionary neuroscience, I doubt the truly causally autonomous capabilities of animats, but perhaps that will come in the future. I know we should never judge future events based on current technology and understanding -- a type 0 civilization that we are.
Your causal analysis and metaphorical venture in AI evolution are forward thinking and impressive.
I too try to apply metaphor -- the amniotic fluid of the universe to human birth and dynamics: I speculate about discovering dark matter in a dynamic galactic network of complex actions and interactions of normal matter with the various forces -- gravitational, EM, weak and strong -- interacting with orbits around the SMBH. I propose that researchers wiggle free of labs, lab assumptions, and static models.
Hope you get a chance to comment on mine.
Jim Hoover
Peter Jackson wrote on Mar. 30, 2017 @ 10:35 GMT
Larissa,
Interesting experiment, findings and analysis, well presented. More a review than an essay perhaps but I do value real science over just opinion. The findings also agree with my own analysis so my cognitive architecture is bound to marry with it!
You covered a lot but your plain English style made it easy to follow. Bonus points for both!
My own essay agrees on many points;
"...one of two (or several) possible states, and which state it is in must matter to other mechanisms: the state must be “a difference that makes a difference”", and the 'feedback' mechanism from results of 'running scenarios' drawn from input and memory (imagination).
You also identify that "what is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation." Do you think that combining the feedback loops with the hierarchically 'layered' architecture of propositional dynamic logic (PDL) might well serve this purpose? A higher-level decision then served by cascades of lower-level decisions?
Is the conclusion "...a system cannot, at the same time, have information about itself in its current state and also other possible states" your own analysis or adopted theory? Might 'self-awareness' not be the recognition of 'possible states', and even of the current state (e.g. "OK, I'm in a bad mood/hurry/overexcited etc., sorry")? Or do you refer just to the causal mechanisms?
You seemed to shy away from the role of maths, which I think was sensible. Let me know if I'm wrong in inferring that maths has the role of a useful abstracted 'tool' rather than any causal foundation. I also certainly agree with your (or the?) conclusions, and thank you for the valuable input into the topic and for a pleasant and interesting read.
I hope you'll review and comment on mine. Don't be put off by the word 'quantum' in the title and last sections as many are. Your brain seems to work well in 'analytical mode' so you should follow the classical causal mechanism just fine. (Do also see the video/s of the 3D dynamics if you've time - links above).
Well done, thanks, and best of luck in the contest.
Peter
Author Larissa Albantakis replied on Apr. 1, 2017 @ 18:52 GMT
Dear Peter,
Thank you for your time and the nice comment. A hierarchically layered architecture is certainly the way to go for increasingly invariant concepts built on more specific lower-level features. For example, a grid of interconnected elements may be sufficient to intrinsically create a meaning of locations, but invariant concepts like a bar or a pattern will definitely require a hierarchy of levels.
As for the statement about a system having information about itself in its current state: this is simple logic and has certainly been voiced before, I think with respect to Turing machines and cellular automata; Seth Lloyd also mentioned a similar idea, but in terms of prediction (that a system can never predict its entire next state). Note that I meant the entire current state, not just part of it. Yes, the system can of course have memory. But it is important to realize that any memory the system has must be physically instantiated in its current physical state. So all there is at any given moment is the current state of the system, and any measure that compares multiple such states is necessarily not intrinsic.
Best regards,
Larissa
Peter Jackson replied on Apr. 4, 2017 @ 11:39 GMT
Larissa,
Yes, I understand. Well explained, thanks.
I see your score has now slipped down again! Too many 'trolls' applying 1's (mine's had 11, but I refuse to respond). Normally scoring gets crazy in the last few hours!
I hope you get to read, score and comment on mine (not long now!) I think you could bring a good perspective to the hypotheses which I think are complementary to your analysis.
Very Best wishes
Peter
Dizhechko Boris Semyonovich wrote on Apr. 2, 2017 @ 05:56 GMT
Dear Larissa
I appreciate your essay; you put a lot of effort into writing it. If you believed in Descartes' principle of the identity of space and matter, your essay would be even better. It is not geometric space that is movable, but physical space; these are different concepts.
I inform all participants that I use an online translator; therefore, my essay is written badly. I participate in the contest to familiarize English-speaking scientists with New Cartesian Physic, the basis of which is the principle of the identity of space and matter. By combining space and matter into a single essence, New Cartesian Physic is able to integrate modern physics into a single theory. Let FQXi be the starting point of this association.
Don't let the New Cartesian Physic disappear! I do not ask for myself, but for Descartes.
New Cartesian Physic has great potential for understanding the world. To show this potential, I took a risk in this essay: "The way of the materialist explanation of the paranormal and the supernatural" is the name of my essay.
Visit my essay and you will find something in it about New Cartesian Physic. After you post in my topic, I shall do the same in yours.
Sincerely,
Dizhechko Boris
Torsten Asselmeyer-Maluga wrote on Apr. 5, 2017 @ 18:25 GMT
Dear Larissa,
very interesting essay. I wrote my PhD thesis about physical models of evolution including the evolution of networks. Evolution is goal-oriented. Here, there are two processes, mutation and selection. Mutation produces new information (=species) and selection is a global interaction among the species giving a goal to the process. In a more refined model of Co-evolution, the selection itself is formed by the interaction between the species, so again you will get a direction or goal. So, I think from this point of view, your model perfectly fits.
Maybe I have one question: you are an expert in networks, and I wrote about the brain network and its dynamics (using methods from math and physics). Could you please have a look at my essay?
Thanks in advance and good luck in the contest (I gave you the highest rating)
All the best
Torsten
Dizhechko Boris Semyonovich wrote on Apr. 7, 2017 @ 04:51 GMT
Dear Sirs!
The physics of Descartes, which existed prior to the physics of Newton, has returned as the New Cartesian Physic and promises to be a theory of everything. To tell you this good news I use «spam».
New Cartesian Physic is based on the identity of space and matter. It showed that the formula of mass-energy equivalence follows from the pressure of the Universe; the flow of force on the corpuscle is equal to the product of Planck's constant and the speed of light.
New Cartesian Physic has great potential for understanding the world. To show it, I ventured to give "materialistic explanations of the paranormal and supernatural"; that is the title of my essay.
Visit my essay and you will find the New Cartesian Physic there. Make a short entry, "I believe that space is matter", and I will answer you in return. You can give me a 1.
Sincerely,
Dizhechko Boris
Claudio Baldi Borsello wrote on Apr. 7, 2017 @ 19:04 GMT
Dear Larissa,
I've read your essay with amused interest. It's a fine way to convey valuable concepts.
I also love computer simulations of automata as a way to understand complexity, which in fact can result from a few very simple rules enacted by a multiplicity of individuals.
If you have time to have a look at my paper, you might find it interesting.
Best regards,
Claudio
Lorraine Ford wrote on Apr. 8, 2017 @ 02:36 GMT
Larissa,
You attempt to model the generation of improved fitness. The overall animat model system is given a highest-level ruling algorithm and given the equivalent of initial values. Each animat model has controlling “Markov brain” logic gate algorithms, and probably another higher-level algorithm controlling the “Markov brain” algorithm.
But it is an invalid assumption to consider that algorithms must already exist in primitive living things, so your model cannot be considered a model of actual reality.
It is unfortunate that you conclude so much from so little evidence.
Lorraine
Don Limuti wrote on Jul. 6, 2017 @ 09:16 GMT
Larissa,
Congratulations on your win! FQXi.org got it right.
Don Limuti
Author Larissa Albantakis replied on Jul. 19, 2017 @ 21:28 GMT
Dear Don,
Thank you very much! It means a lot to me.
Best,
Larissa
Rajiv K Singh wrote on Jul. 12, 2017 @ 06:27 GMT
Dear Larissa,
First, let me congratulate you on winning the essay contest. Unfortunately, I could not get to see your essay prior to the result. Even belatedly, let me try to understand the idea of the essay. Your statements are referred to by the '>' symbol, and my comments by '=>'.
> this essay discusses necessary requirements for intrinsic information, autonomy, and meaning.
=> While a specific requirement has been discussed, we are not given a picture of how information becomes abstract, or of what exactly the meaning is. The essay largely dealt with the probability aspect of information, not the actual semantics of information.
> only the integrated one forms a causally autonomous entity above a background of external influences.
=> In fact, if natural causation remains entirely deterministic, all outcomes are pre-set; autonomy is not really there, only pre-determined looping of signals and states occurs. It seems that indeterminism, even if limited, is a necessary requirement.
> Any form of representation is ultimately a correlation measure between external and internal states.
=> While it is entirely agreeable that 'representation is ultimately a correlation', it is not a correlation between external and internal states alone. In fact, this is hardly the case, as representation is a correlation with emerged semantics, which includes state descriptions (a projection of reality) as well as all possible abstractions of semantics that we are capable of.
> The state must be “a difference that makes a difference”.
=> I am not sure, but is it said in the sense that observed states are relative measures that make a difference?
> A mechanism M has inputs that can influence it and outputs that are influenced by it. By being in a particular state m, M constrains the possible past states of its inputs.
=> Since M is a mechanism, not a physical entity, m is a particular instance of the mechanism, which should not be treated as a state of matter. In several places it is used in a manner that carries this dual meaning.
> We can quantify the cause and effect information of a mechanism M in its current state $m_t$ within system Z as the difference D between the constrained and unconstrained probability distributions over Z’s past and future states.
=> Z is a system, I suppose a Markov Brain, which may have many physical elements. So, what is Z's state? Is it a particular instance of specific states of its elements? And how do we understand the state $m_t$, since it is a mechanism -- a logical or relational component of the connectivity? May I suppose $m_t$ is a specific set of probabilistic or deterministic Logic Gates? If so, then $p(Z_{t-1} \mid m_t)$ would be the probability of finding the elements of Z in a particular specification of states at time t-1, given a specific configuration of Logic Gates (LG), and $p(Z_{t-1})$ would be the LG-independent probability of the same set of states. Furthermore, as per the referred statement, the difference between the two probabilities constitutes the cause and effect information. That is, one is not talking about the meaning (semantics) of the information, but only about the probability of its occurrence (causal connection).
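In the notation of the quoted sentence, and leaving D as whatever distance measure the essay adopts (its specific form is not restated in this thread), this would read $D\big(p(Z_{t-1} \mid m_t),\, p(Z_{t-1})\big)$ on the cause side and $D\big(p(Z_{t+1} \mid m_t),\, p(Z_{t+1})\big)$ on the effect side: how much the current mechanism state sharpens the distributions over the system's past and future states relative to leaving them unconstrained.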
> All causally relevant information within a system Z is contained in the system’s cause-effect structure.
=> Indeed, but in this essay, it is only the information about the physical states. If you think of intention, you know, intention is not just a state of matter. In any case, you have also concluded, "What kind of cause-effect structure is required to experience goals, and which environmental conditions could favor its evolution, remains to be determined."
> If the goals that we ascribe to a system are indeed meaningful from the intrinsic perspective of the system, they must be intrinsic information, contained in the system’s cause-effect structure.
=> In my view, the assertion is accurate. In this essay, though, 'intrinsic perspective' is ascribable only to the 'intrinsic correlation' with the states of matter, not to the semantics (meaning) of the 'intention of goals'. It is pertinent to note that all physical entities, as they evolve, follow the path of 'least action' (usual units: energy x time). So, if we ascribe the 'goal' of performing least action, all mechanical entities naturally do that.
> This is simply because a system cannot, at the same time, have information about itself in its current state and also other possible states.
=> Information is what a state correlates with, emerging from observed relative state descriptions of interacting entities as per natural causation. Therefore, a state of an entity is not correlating information in itself. Furthermore, there is no unique definition of a state; a state is what is observed by another interacting entity (an observer). Therefore, a system can never be said to have information about its current state, since the state is not defined unless observed. In my reckoning too, it naturally requires a recurrent system to know what state it was in a while ago, but the new state is yet another state. For example, since temporal relation is one central element of state description, the current state is always a new state. By the way, the resultant state is also a consequence of the relation among the observed states of the context, which is what sets the basis for the emergence of abstract semantics. Scientists usually talk about the emergence, but never lay down the formal mechanism of abstraction.
> Any memory the system has about its past states has to be physically instantiated in its current cause-effect structure.
=> I suppose, this is what won you the day. But it need not be re-instantiated as long as the present state is causally dependent on the past state configuration and relation.
> The cause-effect structure of a causally autonomous entity describes what it means to be that entity from its own intrinsic perspective.
=> There is a big leap here. It is unclear 'what it means to be that entity'. Though it may be so, why and how it is so is not worked out in this essay. Considering that all an element correlates with is the states of other connected elements, the constrained element may have state information about other elements. Furthermore, while the process of unification is outlined by stating that each element affects every other in turn, how the semantic unification takes place is left out.
> In this view, intrinsic meaning might be created by the specific way in which the mechanisms of an integrated entity constrain its own past and future states.
=> The expression of mere possibility, "intrinsic meaning might be created by the specific way", is right, since the actual emergence of such a meaning is not detailed in the essay.
This is an essay which, I presumed, I could read without external help. The publications cited in this essay are not given as evidence or as suggestive further links, but as background. I could not follow the methods of operation of the Markov Brain, since the rules of evolution, the formation of connectivity, and the parameter values are not defined here. For example, why would they develop connections at all, unless such a rule is specifically coded? This turned out to be the hardest read for me, and I am still not fully confident; I had to learn the rules of the methods from Wiki and the cited texts. Similarly, I could not get any idea of how you calculated the value of R to be 0.6 at some point, even though I understood the idea of Eqn. 1.
Hearty congratulations again on winning the contest -- a remarkable feat indeed!
Rajiv
Author Larissa Albantakis replied on Jul. 19, 2017 @ 21:34 GMT
Dear Rajiv,
Thank you very much for your thorough reading of my essay!
Let me try to address some of the issues you raised. Many have to do with (1) the difference between the ‘intrinsic information’ that I outlined, which is a measure of causal constraint, and the classic notion of information: Shannon information, (2) a distinction between causal constraint and predictability, and (3) taking the intrinsic perspective of the system itself vs. the typical extrinsic perspective of an observer of the system (as is the common perspective in physics).
Causal autonomy and (in)determinism:
Of course, in a deterministic system, given the full state of the system at some point in time, the system’s future evolution is entirely predictable. What happens happens. However, performing this prediction takes a “god’s” perspective of the entire system.
The notion of causal autonomy defined in my essay applies from the intrinsic perspective of the system, which may be an open subsystem S of a larger system Z. What is measured are the mechanistic constraints of S on the direct past and future states of the subsystem S itself, using a counterfactual notion of causation. Roughly, this notion of causation is about what, within the entire state of the system, actually constrains what (not all mechanisms constrain each other all the time). So locally, right now, constraints on the system can come from within the system S and/or from outside the system. If all parts of the system S constrain each other mechanistically, above a background of constraints from outside of the system, S is causally autonomous to some degree (which can be measured).
Indeterminism within the system will actually only result in less intrinsic constraints.
Representation:
I think we can connect here in the sense that what I am arguing in my essay is that any emerged semantics have to come from the intrinsic cause-effect structure of the system itself. The crucial point is that, whatever intrinsic meaning there is does not mean what it means because of a correlation with the outside world. There may be (and should be) a correlation between the world and the intrinsic cause-effect structure of the system, however the intrinsic semantics (what it means for the system itself) must arise purely from the intrinsic structure, and cannot arise because of the correlation. Only external observers can make use of a correlation between something external and something internal to the system, not the system itself.
Mechanism and system states:
A mechanism, as defined here, is specifically a set of elements within a physical system (with cause-effect power). A mechanism must have at least two states. So M is a physical entity (it can be observed and manipulated) and m is its current state. A neuron or a logic gate, for example, could be a mechanism.
Z is a system, let's say 4 binary logic gates {ABCD}, which at any moment is in a state, for example ABCD = 1011 as its current state.
Intrinsic information and semantics:
You wrote:
“$p(Z_{t-1} \mid m_t)$ would be the probability of finding elements of Z in a particular specification of states at time t-1 given a specific configuration of Logic Gates (LG), and $p(Z_{t-1})$ would be the LG-independent probability of the same set of states. Furthermore, as per the referred statement, the difference between the two probabilities constitutes the cause and effect information.”
This is correct. Let's say $m_t$ is an AND gate 'A' in state ON with 2 inputs ('B' and 'C'). $p(BC_{t-1} \mid A_t = 1)$ is 1 for state BC = 11 and 0 for all other states (BC = 00, 01, 10). Now the crucial point is that this is what it means to be an AND gate in state '1'. So the shape of the distribution is the semantics of the mechanism in its state. If the AND gate is OFF ('0') it will specify a different distribution over BC. That means that the cause-effect structure of the system with the AND gate ON will be different from the cause-effect structure with the AND gate OFF. Of course this is a very impoverished notion of semantics and it remains to be shown that in a sufficiently complex system, 'interesting' semantics can be constructed from compositions of these probability distributions (cause-effect repertoires). What I'm arguing is that the composition of all cause-effect repertoires is the only kind of information that is available to the system itself, so if there is intrinsic meaning, it must come from this intrinsic information (the system's cause-effect structure). It can't come from anywhere else (like a correlation with the external world). Certainly, though, my essay does not give a satisfying answer as to where and how exactly the semantics can be found in the cause-effect structure.
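If it helps to see this spelled out, here is a toy sketch in Python of just this AND-gate example. It is not the full IIT calculus; it simply assumes a uniform (maximum-entropy) prior over the inputs B and C as the unconstrained distribution and inverts the gate's mechanism:

import itertools

def and_gate(b, c):
    return int(b and c)

def cause_repertoire(a_t):
    """p(BC_{t-1} | A_t = a_t): past input states compatible with the current output."""
    past_states = list(itertools.product((0, 1), repeat=2))  # 00, 01, 10, 11
    weights = [1.0 if and_gate(b, c) == a_t else 0.0 for b, c in past_states]
    total = sum(weights)
    return {bc: w / total for bc, w in zip(past_states, weights)}

print(cause_repertoire(1))  # all probability on BC = (1, 1)
print(cause_repertoire(0))  # 1/3 each on (0, 0), (0, 1), (1, 0); zero on (1, 1)

The two printed distributions differ, which is exactly the point: the AND gate ON and the AND gate OFF specify different cause repertoires, and that difference (not any correlation with the outside world) is what the mechanism contributes to the system's intrinsic cause-effect structure.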
Intrinsic goals:
You wrote:
“It is pertinent to note that all physical entities, as they evolve, follow the path of 'least action' (usual units: energy x time). So, if we ascribe the 'goal' of performing least action, all mechanical entities naturally do that.”
Precisely. Something like “goal of least action” would not be an intrinsic goal. It’s like reaching perfect fitness in the task. The animat may do that without it possibly being a meaningful concept for the animat itself. Both the feedforward and the integrated animat evolved to perfect fitness. Yet, the feedforward one cannot even be seen as an intrinsic agent, a causally autonomous entity, in the first place.
“knowing” vs. “being”
You wrote:
“… Therefore, a state of an entity is not a correlating information in itself. …”
This is crucial and is what I intended to express in the last paragraph of section III. A system, from the intrinsic perspective, does not “have” information about itself; it does not “know” about itself. Instead, it specifies information by being what it is, at the current moment: the system (with all its mechanisms) in a specific state. And this is all there is.
From the essay: “The cause-effect structure of a causally autonomous entity describes what it means to be that entity from its own intrinsic perspective.”
You wrote:
=> There is a big leap here. It is unclear 'what it means to be that entity'. Though it may be so, why and how it is so is not worked out in this essay. Considering that all an element correlates with is the states of other connected elements, the constrained element may have state information about other elements. Furthermore, while the process of unification is outlined by stating that each element affects every other in turn, how the semantic unification takes place is left out.
I completely agree. My argument is merely that a) something like intrinsic meaning obviously exists for us humans, b) the only way it could possibly arise (in a non-dualist framework) is from the system’s cause-effect structure. And I hope that I have given convincing arguments that the only intrinsic information there is, is the system’s cause-effect structure, while something like correlations with the world can only be meaningful for an external observer.
References:
The word limit unfortunately didn't allow for more detail about the evolutionary algorithm. However, in some sense this is irrelevant for the rest of the argument. That the animats in Fig. 2 are two solutions that actually evolved via selection and adaptation merely makes it more convincing as a model of natural systems with seeming agency. While I'm very happy about your interest in animat evolution, the references to the actual scientific papers should rather be seen as proof that I didn't just make things up; they are not crucial for the argument made. The same goes for the R = 0.6 value. The only relevant point is that the R that was measured is too low to possibly allow for an intrinsic representation of all task-relevant features (less than 1 bit), even in animats that perform the task well.
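(Eq. 1 itself is not reproduced in this thread, so purely as an illustration of what a sub-1-bit correlation measure between environment and internal states looks like, here is a generic mutual-information sketch in Python with made-up counts; it is not the R used in the paper.)

import numpy as np

def mutual_information_bits(joint_counts):
    """I(X;Y) in bits from a joint count table over (environmental feature, internal state)."""
    p = np.asarray(joint_counts, dtype=float)
    p /= p.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Hypothetical joint counts: rows = block size (small/large),
# columns = state of one hidden element (off/on), tallied over many time steps.
counts = [[40, 10],
          [15, 35]]
print(round(mutual_information_bits(counts), 2), "bits")  # ~0.19 bits with these made-up counts

Even a fairly reliable-looking contingency table like this one carries well under 1 bit, in line with the point above that an R below 1 bit is too little to represent all task-relevant features.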
Thank you again for your insightful comments!
Best,
Larissa
Eckard Blumschein wrote on Aug. 6, 2017 @ 04:38 GMT
Dear Larissa Albantakis,
You wrote: "I hope that I have given convincing arguments that the only intrinsic information there is, is the system’s cause-effect structure, while something like correlations with the world can only be meaningful for an external observer."
I am ashamed for having overlooked your essay so far due to its cryptic title.
I still see you as somewhat pushing at open doors, at least from my perspective. Admittedly, I share in part the opinion of Ritz (though of course not his preference for emission theory) in his dispute that ended in the famous agreement to disagree.
Since you are with Templeton World Charity Foundation, I don't expect you to accept in public my criticism of unreasonable humanity as endangering mankind.
Eckard Blumschein