CATEGORY:
Wandering Towards a Goal Essay Contest (2016-2017)
TOPIC:
Von Neumann Minds: A Toy Model of Meaning in a Natural World by Jochen Szangolies
Author Jochen Szangolies wrote on Jan. 27, 2017 @ 17:03 GMT
Essay Abstract: The question of meaning, or intentionality, is plagued by the homunculus fallacy: postulating an 'internal observer' appraising mental representations leads to an infinite regress of such observers. We exhibit the structure behind this problem, and propose a way to break it down, by drawing on work due to von Neumann. This allows us to eliminate the dichotomy between a representation and its user, dissolving the infinite regress. We briefly comment on how the resulting model handles other problems for a naturalistic account of meaning, such as the problem of error and the frame problem.
Author Bio: Jochen Szangolies studied physics in Siegen and Düsseldorf, recently completing and defending his PhD thesis. He has worked on the phenomena of quantum contextuality, the detection of quantum correlations, and their application in quantum information tasks.
Lee Bloomquist wrote on Jan. 27, 2017 @ 23:07 GMT
Jochen Szangolies writes to us:
"The source of the homunculus fallacy is glossing over whom a given symbol is supposed to have meaning to: we imagine that the internal picture is simply intrinsically meaningful, but fail to account for how this might come to be—and simply repeating this ‘inner picture’-account leads to an infinite regress of internal observers."
In this way we are warned of circularity and circular arguments.
But Lawrence Moss and the late Jon Barwise some years ago wrote:
"In certain circles, it has been thought that there is a conflict between circular phenomena, on the one hand, and mathematical rigor, on the other. This belief rests on two assumptions. One is that anything mathematically rigorous must be reducible to set theory. The other assumption is that the only coherent conception of set precludes circularity. As a result of these two assumptions, it is not uncommon to hear circular analyses of philosophical, linguistic, or computational phenomena attacked on the grounds that they conflict with one of the basic axioms of mathematics. But both assumptions are mistaken and the attack is groundless." (
Vicious Circles: On the Mathematics of Non-Wellfounded Phenomena. Center for the Study of Language and Information, Stanford California. CSLI Lecture Notes Number 60.)
For example, consider "self = (self)." It's the language of "hypersets," which is the subject of the above lecture notes.
However, instead of a simple object-- like a ball suspended in space-- think of an algorithm running on a computer.
To adopt a convention, say that when the algorithm runs, it exists. And when it does not run, it does not exist-- only the static code, or the formal specification for a computer program, then exists.
Now say that the algorithm "self = (self)" calls another algorithm when it is running, which runs and then returns execution to "self = (self)". In some sense that I won't try to make formal here, other algorithms like this are "part" of the algorithm "self = (self)". Now say that the last line of code which the computer runs in this algorithm is to call the algorithm itself. Hence "self = (self)", and "self" must be a "part" of itself.
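For illustration, here is a minimal sketch in Python of such an algorithm whose last action is to call itself. The names and the depth cut-off are purely illustrative assumptions, added only so the sketch terminates:

```python
# Sketch of an algorithm whose final step is to call itself, so that a run of
# the algorithm contains a run of itself as a "part". The depth cut-off is
# only there so the sketch halts; it is not part of the idea.

def other_algorithm():
    pass  # stands in for any sub-routine that is a "part" of self

def self_process(depth=0):
    other_algorithm()            # call another algorithm, which returns here
    if depth > 3:                # artificial cut-off
        return
    self_process(depth + 1)      # last line: the algorithm calls itself

self_process()
```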
Ideas like this led to formalizing the idea of a "non-wellfounded set." And then with non-wellfounded sets worked out, set theory could be successfully applied to the study of algorithms.
In engineering terms, the above "call to self" is a "feedback loop." For example, place a karaoke microphone next to the speaker. You get a squeal. It's a "runaway feedback loop."
Runaway feedback is usually a bad thing, but every self-driving car needs a well designed "feedback loop" in order to exist at all as a self-driving car. Likewise spacecraft reach Mars by means of well-designed feedback loops.
It therefore seems to me like the idea of "homunculus" implicitly models the homunculus as an object, like the ball suspended in space.
But if the "homunculus" is NOT an object, but a process, then we can use non-wellfounded sets or hypersets to model this process in a rigorous way.
Which by the way leads to a testable hypothesis: "The Dream Child Hypothesis." The question becomes this: On which feedback loop does the existence of "self" depend? (Implying that "self = (self)" needs a feedback loop in order to exist.)
1. Is the feedback loop for "self = (self)" the enteroceptor feedback loop, on which the heartbeat depends? Or,
2. Is the feedback loop for "self = (self)" the proprioceptor feedback loop, on which breathing depends?
After performing an experiment to test this hypothesis (on lab animals, using neuro-imaging) we would then be in a position to ask about processes like "self = (thinking, self)."
...the transformation to meaning in her life which Helen Keller so famously described.
Author Jochen Szangolies replied on Jan. 29, 2017 @ 10:56 GMT
Dear Lee Bloomquist,
thank you for the interesting response! I'll have to take a look at both the book you cite (hopefully the university library has a copy), and your essay before I can comment more in depth, but I think we have broadly similar concerns---a circularity need not automatically be vicious. In some sense, my whole essay is an attempt at removing the vicious circularity in the idea that 'a mind is a pattern perceived by a mind', which seems to bear some relation to your 'self=(self)' (I interpret this as a kind of set notation?).
The homunculus regress is vicious, because it needs to be completed before, so to speak, the first element of the hierarchy is done---i.e. before a given representation has meaning to the lowest-order homunculus, all the representations on the higher levels must have meaning.
In contrast, acoustic feedback, or a control loop, isn't vicious---in a feedback loop, we have an acoustic signal being generated, which is then recorded by a microphone, amplified, re-emitted, re-recorded, and so on. This may be undesirable, but there's nothing fundamentally problematic about it. It would be different if, in order to emit sound on the first 'level', the whole infinite tower of recording-amplifying-emitting had to be traversed: in this case, the production of a sound is simply logically impossible, and likewise the production of meaning in a homuncular setup.
The same goes for an algorithm that calls itself before producing a certain output: no matter how long the algorithm is run, the output is never produced.
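To make the contrast concrete, a minimal sketch (details invented for illustration, not taken from the essay): the feedback loop produces output on every pass, while the regress must bottom out before any output appears.

```python
# Benign feedback loop: an output exists at every pass before being fed back.
def feedback_loop(signal, gain=1.1, steps=5):
    for _ in range(steps):
        print("emit:", signal)    # sound is produced at this level
        signal = gain * signal    # re-recorded, amplified, re-emitted

# Vicious regress: the self-call must complete before the first output,
# so nothing is ever emitted (Python eventually raises RecursionError).
def vicious(signal):
    vicious(signal)               # recurse first...
    print("emit:", signal)        # ...never reached

feedback_loop(1.0)
# vicious(1.0)  # would never print anything
```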
Anyway, I'm going to go have a look at your essay!
Lee Bloomquist replied on Jan. 29, 2017 @ 23:07 GMT
-- "'self=(self)' (I interpret this as a kind of set notation?)"
Yes! It's the language of "non-wellfounded sets" where the set need not be "founded" in different objects.
-- "The homunculus regress is vicious, because it needs to be completed before, so to speak, the first element of the hierarchy is done---i.e. before a given representation has meaning to the lowest-order homunculus..."
In "self = (self)" there is no hierarchy of order between "selves." There is only one "self": "self = (self)."But I do think that hierarchy is relevant. In "
The Knowledge Level Hypothesis," there is a hierarchy of analyses-- One could analyze the system in terms of the wave functions of the electrons in circuit components; or in terms of the voltage and amp levels at circuit components; or in terms of microcode in the processor; or in terms of the assembly language; or in terms of the higher level language used (e.g. C++, Pharoh); or in terms of the formal specification of the algorithms involved; or finally, in terms of the "knowledge level" where there are knowledge, *goals,* and actions. The knowledge level hypothesis says there is no higher level than this useful for analysis.
-- "...an algorithm that calls itself before producing a certain output: no matter how long the algorithm is run, the output is never produced."
As I understand it, that's the classic "halting problem." Pragmatically, in a real-world computer the called routine would never return to the memory address of execution. But I want to mean something different: "self = (self)" will terminate when all its possibilities are zeroed. But during its lifetime, its possibilities are not all zeroed!
Author Jochen Szangolies replied on Feb. 1, 2017 @ 12:35 GMT
Note that the hierarchy of homunculi is something very different from the hierarchy of knowledge you propose. In the latter, you can, in a sense, go as far as you like---each new level being a sort of 'coarse-graining' of the level below; in many cases, in fact, the hierarchy will terminate, because eventually, there's nothing left to coarse-grain.
The hierarchy of homunculi, however, is necessarily infinite. Picture a person looking at a screen. We'd say that they understand what's happening on the screen, that they have knowledge about it, and so on. For instance, the screen might show an apple; the person will identify the apple, and thus, recognize that picture as a picture of an apple. In this way, the picture comes to represent an apple.
But if we attempt to cash out all representation in this way, we become trapped in the infinite regress: if, in order to recognize the picture as being of an apple, the person possesses some internal mental representation of it---an inner picture of that picture of an apple---they need likewise an inner observer recognizing that second-order picture as such, in order to make it into a representation of the picture seen by the person.
But this never bottoms out: we're left with ever higher-level homunculi, and, like the way the picture of an apple can only be recognized as a picture of an apple if the first-level homunculus recognizes the internal representation as representing a picture of an apple, the interpretation of the representation at the nth level depends on the interpretation of the representation at the (n+1)st; thus, we have to climb the whole infinite hierarchy in order to generate the recognition of the picture of an apple as being a picture of an apple on the lowest rung. Since we generally consider such infinite tasks impossible, it follows that representation---intention, meaning, having aims and goals---cannot be explained by such a homuncular theory.
Now, not every theory of intentionality need be homuncular. Yours may not be, for instance. I try to circumvent the homunculus problem by the von Neumann construction, which allows me to construct a sort of internal representation that is itself its own user---that 'looks at itself', recognizing itself as representing something. But very many theories, in practice, are---as a guideline, whenever you read sentences such as 'x represents y', 'x means z', and so on, you should ask: to whom? And if there is an entity, implicitly or explicitly, that needs to be introduced in order to use a given representation as representing something, then you can discard the theory: it contains an (often unacknowledged) homunculus.
Regarding the halting problem, the example I gave is actually one in which it is solvable (while in general, it's not): the algorithm that calls itself will never terminate. But you are right to link infinite regress and self-referential problems, such as the halting problem: when I draw a map of an island, and the island includes the map (say, it's installed at some specific point), then the map must refer to itself; and if it's infinitely detailed, then the map must contain a copy of the map must contain a copy of the map... And so on.
Lee Bloomquist replied on Feb. 2, 2017 @ 05:59 GMT
Jochen,
Applying the mathematical idea of "fixed point" to the process of traveling towards a goal works not only for the map of a shopping mall, but also for the classic, well known story of a journey told many centuries ago by Parmenides (friend of Zeno, who devised his "paradoxes" as an attempt to help Parmenides). The goal in the story is to enter a hidden world. Applying the mathematical idea of "fixed point" to the map of a mall and also to this well known story demonstrates a way for goals to enter otherwise "mindless" mathematics. I posted it in the essay titled "Theoretical proof..."
Best Regards!
Don Limuti replied on Mar. 11, 2017 @ 06:29 GMT
Jochen and Lee,
Most enlightening thread. Particularly the concept of a fixed point (you are here). The concept of a GPS for completing a goal is also valid.
The mathematics is valid but behind this there needs to be a concept (notion) of self with a desire. Hmmm?
Don Limuti
Jack Hamilton James wrote on Jan. 29, 2017 @ 10:36 GMT
Great essay - I enjoyed it very much, thank you. A couple of questions.
1. Is this analogy/explanation a modern (physical/informational/mathematical) form of idealism?
2. If it's not, how does the self-reproduction arise initially from non-life?
Author Jochen Szangolies replied on Jan. 29, 2017 @ 11:05 GMT
Dear Jack,
thanks for your kind words! I'm not quite sure I understand your questions correctly, though. I don't intend to put forward a modern form of idealism in the traditional sense---i.e. that everything is ultimately mental at the bottom. In some sense, I suppose one could argue that in my model, ideas are shaped only by certain mental background conditions, and hence, properly speaking, only refer to those---but I still intend for these background conditions (providing the fitness landscape) to be essentially provided by the outside, physical, world. You could think of a robot, having a cellular automaton for a brain, in which ideas 'evolve' according to the conditions created by the impact of the outside world.
Regarding your second question, are you asking about how self-reproduction arose in biological systems, or how it got started within the minds of biological creatures? If the former, I'm hardly an expert---but the basic idea is that there exist certain autocatalytic reactions, which then, over time, grow more efficient at creating copies of certain molecules. I think something like that may also have occurred in the brain: organisms prosper if they can adapt to a wide variety of circumstances, and as I outlined in my essay, the evolution of stable structures mirroring the outside world within the brain may be a general-purpose way of coping with near-limitless variation in the outside world.
Thus, creatures with a simple sort of self-replicating mechanism in the brain did better than creatures without, and this simple mechanism then got gradually refined via natural (and perhaps, also cultural) selection.
Did that address your questions at all?
Stefan Weckbach wrote on Jan. 29, 2017 @ 12:28 GMT
Dear Jochen Szangolies,
your essay is interesting and thought-provoking, at least for me. You give an attempt to model meaning in terms of algorithmic encodings. Your attempt is based on the assumption that brains are cellular automata, exhibiting patterns that can be encoded by the cellular automaton itself. You define CA patterns to be mental representations, thereby exorcising the homunculus problem. These patterns use themselves as symbols. The reason for you to introduce CA patterns as capable of being 'mentally' accessible is, as I understood it, that those patterns are algorithmically compressible. As you wrote, a mental inner world is populated with a large amount of combined facts which all have their own specific meaning (coffee in the mug), so the CA must be able to produce new symbols every time a new combinatorial fact is imposed on it via the environment.

Although I do not doubt that arbitrary symbols can be encoded by arbitrary symbols, I did not grasp how this could lead to the emergence of mental representations. Taking your attempt seriously, your attempt came about by the same process your attempt is describing. This may be consistent for a fully-fledged conscious being, but I think this is not the whole story, because to give such an attempt, you had to carefully manipulate many concepts ('symbols') already established in your mental inner world. Although I assume that your attempt at a certain level of the brain does indeed meet reality by constituting some parts of a stable cognition, I cannot see how mere data processing can ever produce the slightest piece of a mental inner world. Data processing surely can shape an already existent inner world.

You seem to take it as guaranteed that the brain can be sufficiently defined as a neural network and/or as a CA, being also capable of *producing* the needed mentality in the first place in order for data processing to be able to shape this mental inner world. Until now, I doubt that a simulation of such a network on a computer results in a conscious entity. I would be more convinced if some projects modelling the brain as a neural network / CA indicated strong evidence that neural networks / CAs give rise to mental inner worlds. But this does not prevent your attempt from exploring the consequences of such a positive result. In this sense, your essay was an interesting piece to read. But until now I don't think that data processing is the key to explaining consciousness. It is nonetheless surely important for generating and compressing meaningful symbols within a consciousness: shortcuts to reduce complexity and to organize an already existing inner mental world.
Author Jochen Szangolies replied on Jan. 30, 2017 @ 09:03 GMT
Dear Stefan Weckbach,
thank you for reading my essay, and especially for your comments. I think one thing I must clarify right off the bat: I don't claim for my von Neumann minds to be a model of full-fledged consciousness, by which I mean especially phenomenal consciousness---the 'what-it's-likeness' of being in a conscious state.
But I think this problem can be usefully separated from the problem of intentionality---that is, from the question of how mental states come to be about things external to them. So, while I am mute on the issue of consciousness, per se, I try and at least outline a possible solution to the question of how mental representations can in fact come to represent something.
To this end, I draw an analogy with von Neumann replicators in a CA-environment: they contain information by simply being shaped, evolutionarily, by that environment; they can access their own information, and generate appropriate behaviour. In this sense, they're like a picture that can look at itself, thus getting rid of the homunculus.
So the way a mental representation arises is roughly this: an organism with a CA-brain encounters a certain environment, and receives certain stimuli; these stimuli set up certain conditions within the CA-brain; the CA-brain contains some population of von Neumann replicators, and, via a selection process, this population will eventually come to be shaped by, or adapted to, those CA-conditions---and with them, the environment that set them up (if only indirectly---but then, all access to the world is ultimately indirect).
In this way, replicators surviving the selection process contain information about the environment. Moreover, this information is accessible to themselves, for e.g. building a copy. But equally well, the information may be used to guide behaviour. Thus, the dominant replicator (whatever, exactly, that may mean) comes to be in a position to guide behaviour (it gets put into the driving seat), and then steers the organism in accord with the information it retrieves about itself, and by extension, the environment.
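A toy sketch of this selection story, in Python (not the essay's actual CA construction; the bit-string 'replicators', the fitness rule, and all parameters are invented for illustration):

```python
import random

# Toy selection sketch: the "environmental conditions" are a target bit pattern
# set up by a stimulus, the replicators are bit strings, and fitness is
# similarity to those conditions. After selection, the dominant replicator
# carries information about the stimulus that shaped it.

def evolve(conditions, pop_size=50, generations=100, mutation=0.02):
    n = len(conditions)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(r):
        return sum(a == b for a, b in zip(r, conditions))

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                        # selection
        children = [[bit ^ (random.random() < mutation) for bit in parent]
                    for parent in survivors]                   # copying with error
        pop = survivors + children
    return max(pop, key=fitness)

stimulus_conditions = [1, 0, 1, 1, 0, 0, 1, 0]                 # set up by the senses
print(evolve(stimulus_conditions))                             # ~ mirrors the conditions
```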
None of this, as I said, entails that there's anything it is like to be such a CA-brained organism. I think the problem of phenomenology, the hard problem of consciousness, lies elsewhere, and will probably require entirely new tools for its solution. In fact, I share your suspicion that data processing does not suffice to create any kind of inner world---but note that my approach shouldn't be construed as 'only' data processing: while one can use cellular automata in this way, a cellular automaton pattern is just a concrete physical entity, like a stone or a telephone; and it's really in this sense that I conceive of them, as entities in their own right, rather than just data structures. But again, I see no reason to believe that even this ought to suffice for phenomenal experience.
Lawrence B. Crowell wrote on Jan. 29, 2017 @ 22:41 GMT
Your model has a definite selection mechanism to it. The more precise emulation of the exterior world by the tripartite system is a sort of self-correcting system. Is this similar in ways to Dennett's heterophenomenology idea, where there might be several competing systems, with the one that has the best outcome winning out? Further, could this be derived within something like maximum entropy?
LC
Author Jochen Szangolies replied on Jan. 30, 2017 @ 09:15 GMT
Dear Lawrence,
thank you for commenting. I'm not sure the selection mechanism you outline really works---I see a danger of hidden circularity: how do I select for a 'better match' to the environment, without already having a representation of the environment in hand? Whatever tells me that a given replicator matches the exterior well already contains the information that I want to evolve within the replicator, so I could just use that instead. Or am I misunderstanding you?
Regarding Dennett, yes, there is some similarity to his multiple drafts: as different replicators come to dominance, different 'versions' of experience---or at least, of beliefs about the world---arise in the agent, as in the example where an agent mistakes a jacket in a dark room for a stranger. There, the difference between both is clear, but there might also be cases where the earlier set of beliefs is erased, such that there is no introspective evidence of having believed otherwise, but where we can extract it by behavioural experiments---much as in Dennett's model.
Your suggestion towards a maximum entropy principle is interesting. Indeed, in some sense, we should be able to arrive at the 'most sensible' set of beliefs of an agent about the world in terms of maximizing the entropy---in a sense, we should find the set of beliefs with maximum entropy regarding the constraints set up by the environment. I wonder if this is possible with a sort of genetic/evolutionary approach?
Steve Dufourny replied on Jan. 31, 2017 @ 20:47 GMT
Hi to both of you,
Lawrence, what is this maximum entropy? A maximum Shannon entropy? Because if it is the maximum thermodynamical entropy or the maximum gravitational entropy, it is different. Could you tell me more please?
Steve Dufourny replied on Jan. 31, 2017 @ 21:04 GMT
So a maximum entropy, in the theory of information, is when we have all probabilities without constraints for the message, the signals. But I don't see how this concept could be derived? For what aims? Could you explain it to me please?
Author Jochen Szangolies replied on Feb. 1, 2017 @ 12:08 GMT
Well, the basic idea of maximum entropy methods is that you should always choose the probability distribution with the maximal amount of entropy (information-theoretic Shannon entropy) that is compatible with your data. In this way, you guarantee that you haven't made any unwarranted assumptions about anything not covered by your data (this is very informal, but I hope the point will get across).
So in a sense, it just says that in an environment about which you have incomplete information, the optimal (in some Bayesian sense) strategy is to assume the maximum uncertainty compatible with the data you already have.
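For concreteness, the standard textbook form of the principle (a general statement, not anything specific to this thread):

```latex
% Choose the distribution p that maximizes Shannon entropy subject to the data,
% expressed as expectation constraints:
\max_{p}\; H(p) = -\sum_i p_i \log p_i
\quad \text{s.t.} \quad \sum_i p_i = 1, \qquad \sum_i p_i f_k(i) = F_k \quad (k = 1, \dots, m).
% The maximizer has exponential (Gibbs) form, with Lagrange multipliers lambda_k
% fixed by the constraints:
p_i = \frac{1}{Z} \exp\!\Big(-\sum_k \lambda_k f_k(i)\Big).
```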
Steve Dufourny replied on Feb. 2, 2017 @ 09:15 GMT
Thanks for these explanations. It is a lot of computing and simulations. It is a beautiful work about information. Now of course the vectors and scalars with geometrical algebras like Hopf, Clifford or Lie. But if I may, it is always how we utilise these vectors, operators, tensors, domains, finite groups... it is always how we utilise the tool. I play guitar and piano: if you do not put in the harmonical parameters, you can never have harmonical music. The tools are one thing, the domains and laws another. The strategy in von Neumann's theory of games always tends towards the points of equilibrium, like the dissuasion of arms and weapons due to the forces and energy reached; that implies the dissuasion. It is my strategy, the quiet harmonical entropical road. Well, I will come back and ask you some details about your method. We are going to create an AI :) in utilising the good arithmetic series. The probabilities and the distribution must always be rational after all. Best and until soon.
Harry Hamlin Ricker III wrote on Jan. 31, 2017 @ 14:35 GMT
Hi, I was unable to understand how this essay related to the essay topic. I don't think it does.
Author Jochen Szangolies replied on Jan. 31, 2017 @ 15:06 GMT
Dear Harry,
thank you for your comment. The topic of this essay contest is 'Wandering Towards a Goal – How can mindless mathematical laws give rise to aims and intentions?'.
To me, the key words here are goal, aims, and intentions: in order to have any of these, agents need the capacity for intentionality---that is, they need to be capable of having internal mental states directed at, or about, things (or events, or occurrences) in the world. To have the goal of climbing Mount Everest, say, you need to be able to have thoughts about Mount Everest; to intend an action, you need to be able to plan that action, for which you again need to be able to think about it, and so on.
Consequently, it is this aboutness---the aforementioned intentionality---that is the prerequisite to all goal-directed behaviour; my model then proposes a way of how such intentionality might arise in a natural world. Agents within which something equivalent to this model is implemented are able to represent the outside world (or parts thereof) to themselves, and consequently, to formulate goals, have aims, and take intentional action. Thus, the model is a proposal for how goals, aims, and intentions may come about in a natural world governed by 'mindless mathematical laws'.
Does this help answer your concern?
Lee Bloomquist replied on Jan. 31, 2017 @ 23:30 GMT
Yes, and "Wandering towards a goal" in the context of mathematical physics suggests to me the "
fixed point problem."
Say that in a shopping mall your goal is the sporting goods store. So you stand in front of the map of the mall.
What enables you to plan a path towards your goal is that there is, on the map in front of you, the point "You are here." Which is where you are actually standing in the mall.
Without this "fixed point" you would be faced with a random walk towards the goal (If, like me most times, you are unwilling to ask strangers).
The fixed point-- "You are here"-- enables you more efficiently and effectively to move towards your goal.
So to me, the key in an effective use of a map for moving towards a goal is FIRST to know where you are. (First understand "self.")
After "self" is identified both in the real world and on the map, then a goal can be located on the map and a route planned towards that goal in the real world.
But before that-- you have to know where the "self" is, and where it is imaged on the map.
There may be a potentially useful mnemonic for this: When going towards a goal, first know what the self is so you can locate it in both places-- in other words, "Know thyself."
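As an aside, there is a standard way to make the "You are here" point mathematically precise, under the assumption that the map is a true-to-scale copy displayed somewhere inside the mall:

```latex
% A map drawn at scale c < 1 and installed inside the region it depicts defines
% a contraction f of that region into itself:
d\big(f(x), f(y)\big) \le c\, d(x, y), \qquad 0 \le c < 1.
% By the Banach fixed point theorem, there is exactly one point x* with
f(x^{*}) = x^{*},
% i.e. a unique spot coinciding with its own image on the map: "You are here."
```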
Author Jochen Szangolies replied on Feb. 1, 2017 @ 12:17 GMT
Joe Fisher, thanks for your comment. I will have a look at your essay.
Lee Bloomquist, it's interesting that you mention fixed points---in some abstract sense, the self-reproduction I study is a fixed point of the construction relation: the output is the same as the input.
In fact, you can more stringently formulate von Neumann's construction as a fixed point theorem, as shown by Noson Yanofsky in his eminently readable paper "A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points". It elaborates on, and provides an accessible introduction to, Lawvere's classic "Diagonal arguments and Cartesian Closed Categories", showing how to bring Gödel's theorems, the unsolvability of the halting problem, the uncountability of the real numbers, and von Neumann's construction under the same scheme.
Steve Dufourny wrote on Jan. 31, 2017 @ 20:33 GMT
Hello Mr Szangolies,
Thanks for sharing your work. Congratulations also. It is an interesting approach, considering von Neumann's work on structure. It is always a question of hard drive and memory and input and output, with of course an arithmetic method of translation, logic, and a checking unit, also logical in its generality. But adding to this a unity of codes considering the mind and intentions seems really difficult, considering the main gravitational codes, different from photons and binary information. That is why an AI is possible with the structure of von Neumann, but not a free mind like us humans, because I believe that gravitation and souls are linked. We cannot approach the main singularities, personal. Like all singularities in fact.
Best Regards
Steve Dufourny replied on Jan. 31, 2017 @ 20:45 GMT
It is true. How can we define what a meaning is, how to quantify the importance of a meaning for the synchronizations and sortings of codes and information? Nature seems to utilise spherical volumes and rotations. Lawrence is right in saying that selections with environments are important. How to rank with a universal logic, in fact.
Steve Dufourny replied on Jan. 31, 2017 @ 20:59 GMT
If we consider that information and the Shannon entropy can reach an infinity, it is more logical than a maximum. The potential is simply infinite, like for the electromagnetic and gravitational information when we superimpose or add these informations. A machine imitating the universe could be relevant for the sortings and synchronizations of codes. The evolutive point of view is always relevant.
Erik P Hoel wrote on Feb. 1, 2017 @ 19:28 GMT
Dear Jochen - thank you so much for the essay. It's cogent and well-put together, and it's definitely hitting upon an interesting line of thought and thus is very stimulating.
I think actually what you said above to one of the other comments is the best paraphrase of your position:
"an organism with a CA-brain encounters a certain environment, and receives certain stimuli; these stimuli set up certain conditions within the CA-brain; the CA-brain contains some population of von Neumann replicators, and, via a selection process, this population will eventually come to be shaped by, or adapted to, those CA-conditions---and with them, the environment that set them up."
As you point out, this is pretty reminiscent of neural darwinism. I honestly think you (and Edelman) are correct: fundamentally, this is how brains are set up, much like the immune system. However, I don't think it by itself solves the symbol grounding problem that you're concerned with (particularly as it applies to intentions), as this approach runs into several problems.
The first is that I don't think this truly solves the problem of error. You say that errors in representation are just when the internal replicators are "not the most well-adapted to the actual environmental conditions, becoming eventually replaced by one fitting them better."
But what if they are never replaced by anything better? There doesn't seem to be any relation that you've described that actually fixes the representation. Rather it allows for good-enough fits, or only approximates the representation sometimes. For instance, in the dark room example of confusing a shirt for a person, one might merely peek into the room and never return, never updating the replicator.
The second problem is that the internal replicators might be selected by multiple different things in the world, leading to the same structure, or merely two correlated but different things. Which of these does the internal replicator represent? I think on further amendments the account will break down into a form of utilitarianism, which essentially holds there are no true representations, but merely those that are useful or not. That doesn't solve the problem of intentionality, although it is a very elegant (or at least, highly interesting) view on how brains work.
And this is without even bringing up the nuclear option for the symbol grounding problem: Putnam's twin earth.
Author Jochen Szangolies replied on Feb. 3, 2017 @ 08:51 GMT
Dear Erik,
thanks for your comments! I'm glad you see some value in my thoughts. I should, perhaps, note that to me, my model is more like a 'proof of concept' than a serious proposal for how intentionality works in actual biological organisms (to point out the obvious difference, none of us has a CA in their brain).
So the bottom line is that if the model works as advertised, what it does is to show that there is logical space between eliminativism and mysticism when it comes to intentionality---i.e. neither does the acceptance of a world that's comprised 'only' of the physical force us to deny the reality of intentionality, nor does the acceptance of that reality force us to believe that there is some mysterious form of original intentionality that we just have to accept as a brute fact about the world. There's a possibility here of both having your cake and eating it (if, again, things work as I want them to).
Regarding the issues you raise, I think the most important thing to realize is that ultimately, reference in my model isn't grounded in the outside world, but rather, in the 'environmental conditions' set up in the cellular automaton via the environment's influence, mediated by the senses. So in some sense, we don't really have access to the outside world---but then again, we knew that already: all it takes to fool us is to insert the right electrochemical signals into our sensory channels, whether that's done by an evil demon or a mad scientist having your brain on their desk in a jar.
So, for the problem of error, this means that, in the case you're describing, we simply don't notice the error---so the replicator wasn't perfectly adapted to the CA environment, and was never replaced; then, things are just going to proceed as if that replicator was a faithful representation of the CA environment. I might, for instance, run out of my room upon seeing the 'stranger', straight to the police, and report a break-in. I'm wrong about that, of course, there never was a stranger in my room, or a break-in---but this seems to be a common enough sort of occurrence.
Similarly, you are right that there isn't necessarily a one-to-one correspondence between objects in the world and replicators. But then, what of it? It just means that we'll have the same beliefs, and show the same behaviors, in the presence of either---that is, we just can't distinguish between them.
I don't think this necessarily reduces the approach to a pragmatist one---in the end, all we ever are aware of, or have beliefs about, are really classes of things, and not individual things themselves. For instance, the chair in my office certainly contains today a couple of different atoms than it did yesterday; yet, I consider it to be the same chair, and my beliefs and other propositional attitudes toward it aren't influenced by this difference. Some differences just don't make a difference to us.
This then also suggests a reply to the Twin Earth case: on my account, 'water' doesn't refer to either H2O or XYZ; it refers to some set of CA-conditions set up by being in the presence of some sufficiently similar liquids. My meanings are all in the head.
This also accounts for the possibility of a divergence in meaning, once additional facts come to light: suppose Earth science (and with it, Twin Earth science) becomes sufficiently advanced to tell the difference between H2O and XYZ. Then, the inhabitants of Earth could redefine water as 'that liquid whose chemical composition is H2O', while Twin Earthlings could say instead that water is 'that liquid whose chemical composition is XYZ'. This difference will be reflected in a difference between the CA-conditions set up in an observer of water and their twin: the knowledge of water's chemical composition allows different replicators to prosper.
Furthermore, an inhabitant of Earth transported to Twin Earth will mistake XYZ for H2O; but then, upon further analysis---i.e. looking really hard, just as it might take looking harder to distinguish between a stranger and a jacket in a dark room---will learn of the difference. In the future, he then simply won't know whether he's presented with a glass of H2O or a glass of XYZ without doing the analysis---but that's not any more problematic than not knowing whether something is water or vodka without having had a taste.
Erik P Hoel replied on Feb. 6, 2017 @ 16:20 GMT
Dear Jochen - thanks so much for your detailed response!
I agree with most everything you say, I just disagree that this solves the actual issue you bring up of intentional reference.
The initial problem you set up is this one: "the notion of reference: when we interpret the word ‘apple’ to refer to an apple, a reasonable suggestion seems to be that the word causes an appropriate mental representation to be called up—that is, a certain kind of mental symbol that refers to said apple."
Then, after giving your account, you admit that there are still things like reference-error, twin-earth problems, etc., and answer those things by saying "My meanings are all in the head." In analytic philosophy this is called a "narrow content" view of representation. But once one takes a narrow content view, why specify this tripartite structure and use the analogy of the CA-replicators?
For instance, one could give a more general answer that takes the same form as your proposal, and just say that through development, learning, and evolution, our internal brain structure correlates to the outside world. But when pressed about errors in reference, twin earth, etc, the more general proposal just says "well, sure, all that's true. But the meanings are in the head anyways!"
In other words, if you admit that meanings are all in the head anyway, can have errors, and don't have a fixed content in terms of referencing the outside world, I'm not sure what further work needs to be done in terms of the analogy to von Neumann machines. The traditional problem that narrow-content views run into is that of underdetermination -> there are many possible interpretations of some brain states (or CA-states) in terms of what they're representing, and I'm not sure how the analogy gets you out of that.
Btw I know I sound critical here - but it's only because it's so advanced as an essay that we can even have this discussion.
EPH
Author Jochen Szangolies replied on Feb. 8, 2017 @ 13:33 GMT
Dear Erik,
please, don't apologize for being critical---if the idea is to have any chance at all, it must be able to withstand some pressure. So every issue you point out is helping me, and I'm grateful for having the opportunity of discussing my idea.
Regarding the problem you see, do you think that narrow content is just in principle not capable of solving the problem of reference, or do you think that if one believes in narrow content at all, then there's really no additional problem left to solve---i.e. that one then is committed to a kind of eliminativist stance?
To me, the problem to solve is how any sort of mental state refers at all---i.e. how it comes to have any kind of content. For instance, I disagree with the idea that a thermostat refers, simply by virtue of being in a particular state, to the temperature (or to its being 'too low' or 'too high'). There's no salient difference between that thermostat and any other system with two possible states---it will always depend on the environment which state the system is in, and thus, the representational content could at most be 'I am in the state I am when the environment dictates that I evolve into this state'---which is basically empty, and doesn't really say more than that the system is in one of two possible states.
Reference, semantic content, etc., needs more than that. For instance, consider the example of one versus two lights being lit in the steeple of the Old North Church. If that's all you see, it has no representational content; but if you further know that 'one if by land, two if by sea', then one lantern being lit has content, and it represents the English attacking by land.
The trouble is, of course, that the intentionality thus bestowed to the lantern is derived from that of you, who knows that 'one if by land, two if by sea'. Since you already possess intentionality, trying to explain the intentionality of your own mental states---i.e. how they come to refer to anything---in the same terms as we have just explained the intentional, referential nature of the one lantern burning in the Old North Church will run into the homunculus problem.
This is independent of whether mental content is narrow or wide---the important thing is that it represents something; whether that something is, say, the apple out there in the world, or the conditions caused within the brain by the presence of that apple, or even the phenomenal experience of that apple is immaterial.
And it's there that my model comes in (if things work as I think they do): by collapsing the symbol and the entity using it as a symbol---the representation and the homunculus using it; you and the lantern at the Old North Church---into a single entity, I think it's possible to get rid of the regress. So, a von Neumann replicator evolved in conditions caused by the presence of an apple uses itself as a symbol for these conditions, reads the information it has stored and causes the body to perform actions appropriate to the presence of said apple. One might call this an 'autotelic symbol', because it derives its referent not from something external to it, but rather, from its own form (and because I just learned the word 'autotelic').
Erik P Hoel replied on Feb. 17, 2017 @ 16:35 GMT
Thanks for the clarifications Jochen. It's clearer to me what your account is addressing now.
I think maybe the best way to phrase it is there's two separate problems: the homunculus problem and the problem of reference. The problem of error that you talk about in the paper is a subproblem of the problem of reference. I don't think your account actually addresses it (beyond you advocating for a narrow-content view). However, I do see how you're trying to address the homunculus problem of mental content in an interesting way. It might be clearer to separate those out in the future, so that way you can drill down on this notion of "autotelic symbols."
Thanks for the interesting read!
Erik
Author Jochen Szangolies replied on Feb. 21, 2017 @ 09:18 GMT
Hmm, I don't really think these two problems can be usefully separated. Rather, the homunculus problem is a problem that arises in trying to solve the problem of reference---namely, trying to solve it by means of an internal representation immediately implies the question of who uses that representation as a representation.
Consequently, such a naive representational account doesn't work; but if the homunculus problem didn't arise, then the account could do its job, and solve the problem of reference. Likewise, if the homunculus regress could actually be completed---i.e. if we could traverse the entire infinite tower of homunculi---the account would work, giving an answer to how reference works.
But we typically don't believe such 'supertasks' can be performed; and that's where my construction comes in, replacing the homunculus with my self-reading symbols. If they now do the same work, which I argue they do, then this solves the problem of reference just as well as traversing an infinite tower of homunculi would have.
Erik P Hoel replied on Feb. 23, 2017 @ 16:21 GMT
As I said, you're right that they have a relation, but they can also be separated. Accounting for errors in reference is different from the homunculus problem. In fact, I'm not even sure the homunculus argument needs to be framed in terms of reference - although, as you point out, it can be.
All the best,
Erik
Satyavarapu Naga Parameswara Gupta wrote on Feb. 6, 2017 @ 13:50 GMT
Good essay sir…
I have a small further doubt, hope you will analyze it for me. Now taking your apple example….
One apple will not be the same as another apple. Each single apple will be different. Each apple will present a different picture from a different direction.
- How will this homunculus algorithm recognize that it is an apple?
- How will it recognize that it is a different apple?
- How will it recognize that it is a different view?
Author Jochen Szangolies replied on Feb. 8, 2017 @ 13:47 GMT
Thank you for the compliment! I suppose you're essentially asking about how there come to be different categories of objects that can be represented. I.e., what makes a different apple an instance of the category 'apple'? What makes a peach not an instance of the same category?
In a sense, this harks back to the problem of universals, with all the attendant baggage that would take too long to even review, much less address, here.
But I think that, from a more modern perspective, one can draw an interesting analogy to a hash function. A hash function, used, e.g., in cryptography, is a function that maps many different inputs to the same output, thus 'grouping' inputs into distinguishable sets.
Thus, we get a partitioning of a certain domain into different classes---like, e.g., the domain 'fruit' is partitioned into 'apples', 'peaches', and so on. So, one possible response here would be that two different replicators represent different instances of the same sort of object if they are mapped to the same hash code. This doesn't have to be explicit; for instance, when the replicator guides behavior, it might be that only certain of its properties are relevant for a given action---this ensures that the reaction to 'apple A' will be the same as to 'apple B', but different from 'peach X'.
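A minimal sketch of the hash analogy in Python (all features, values and thresholds are made up for illustration): a deliberately coarse 'hash' keeps only action-relevant properties, so distinct apples land in the same bucket while a peach lands in another.

```python
# Sketch of the hash-function analogy: a coarse "hash" that keeps only the
# features relevant to action and discards the rest, so different apples fall
# into one class and a peach into another. Features and thresholds are made up.

def category_hash(features):
    return (features["roundness"] > 0.8, features["fuzzy_skin"], features["color_band"])

apple_a = {"roundness": 0.92, "fuzzy_skin": False, "color_band": "red-green"}
apple_b = {"roundness": 0.88, "fuzzy_skin": False, "color_band": "red-green"}
peach_x = {"roundness": 0.90, "fuzzy_skin": True,  "color_band": "orange"}

print(category_hash(apple_a) == category_hash(apple_b))  # True: same class
print(category_hash(apple_a) == category_hash(peach_x))  # False: different class
```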
Alternatively, one can think about this more loosely in terms of Wittgensteinian 'family resemblances': if there is a resemblance between objects, there will be a resemblance in the replicators, and consequently, a resemblance in actions taken upon encountering these objects (such as saying, 'that's an apple').
However, I think that this is an issue whose detailed treatment will have to wait until the model is more fully developed, and one can start applying it to real-world situations.
Satyavarapu Naga Parameswara Gupta replied on Feb. 21, 2017 @ 12:23 GMT
Dear Jochen Szangolies
Nice reply and analysis… have a look at my essay also please….
Best wishes for your essay
=snp.gupta
Satyavarapu Naga Parameswara Gupta replied on Mar. 18, 2017 @ 10:30 GMT
Hi JS,
I want to ask you to please have a look at my essay, where ……………reproduction of Galaxies in the Universe is described. Dynamic Universe Model is another mathematical model for the Universe. Its mathematics show that the movement of masses will have a purpose or goal; different Galaxies will be born and die (quench), etc. Just have a look at the essay “Distances, Locations, Ages and Reproduction of Galaxies in our Dynamic Universe”, where UGF (Universal Gravitational Force), acting on each and every mass, will create a direction and purpose of movement.
I think intention is inherited from the Universe itself by all biological systems.
For your information Dynamic Universe model is totally based on experimental results. Here in Dynamic Universe Model Space is Space and time is time in cosmology level or in any level. In the classical general relativity, space and time are convertible in to each other.
Many papers and books on Dynamic Universe Model were published by the author on unsolved problems of present day Physics, for example ‘Absolute Rest frame of reference is not necessary’ (1994) , ‘Multiple bending of light ray can create many images for one Galaxy: in our dynamic universe’, About “SITA” simulations, ‘Missing mass in Galaxy is NOT required’, “New mathematics tensors without Differential and Integral equations”, “Information, Reality and Relics of Cosmic Microwave Background”, “Dynamic Universe Model explains the Discrepancies of Very-Long-Baseline Interferometry Observations.”, in 2015 ‘Explaining Formation of Astronomical Jets Using Dynamic Universe Model, ‘Explaining Pioneer anomaly’, ‘Explaining Near luminal velocities in Astronomical jets’, ‘Observation of super luminal neutrinos’, ‘Process of quenching in Galaxies due to formation of hole at the center of Galaxy, as its central densemass dries up’, “Dynamic Universe Model Predicts the Trajectory of New Horizons Satellite Going to Pluto” etc., are some more papers from the Dynamic Universe model. Four Books also were published. Book1 shows Dynamic Universe Model is singularity free and body to collision free, Book 2, and Book 3 are explanation of equations of Dynamic Universe model. Book 4 deals about prediction and finding of Blue shifted Galaxies in the universe.
With axioms like… No Isotropy; No Homogeneity; No Space-time continuum; Non-uniform density of matter(Universe is lumpy); No singularities; No collisions between bodies; No Blackholes; No warm holes; No Bigbang; No repulsion between distant Galaxies; Non-empty Universe; No imaginary or negative time axis; No imaginary X, Y, Z axes; No differential and Integral Equations mathematically; No General Relativity and Model does not reduce to General Relativity on any condition; No Creation of matter like Bigbang or steady-state models; No many mini Bigbangs; No Missing Mass; No Dark matter; No Dark energy; No Bigbang generated CMB detected; No Multi-verses etc.
Many predictions of Dynamic Universe Model came true, like Blue shifted Galaxies and no dark matter. Dynamic Universe Model gave many results otherwise difficult to explain
Have a look at my essay on Dynamic Universe Model and its blog also where all my books and papers are available for free downloading…
http://vaksdynamicuniversemodel.blogspot.in/
Best wishes to your essay.
For your blessings please…………….
=snp. gupta
Lawrence B. Crowell wrote on Feb. 8, 2017 @ 11:37 GMT
I found your small paragraph at the top of page 5:
The key to shake the agent’s mind free from empty, self-referential navel-gazing is the design’s evolvability. Assume that the agent is subject to certain environmental stimuli. These will have some influence upon its CA brain: they could, for instance, set up a certain pattern of excitations. As a result, the evolution of patterns within the CA brain will be influenced by these changes, which are, in turn, due to the environmental stimuli.
as interesting. This is similar to what I argue with the necessity of the open world. The open world, or the environmental stimuli that is not predictable in a closed world, is what cuts off the self-referential endless looping. I discuss this in my essay at the end with respect to MH spacetimes and black holes. I don't necessarily think black holes are conscious entities, but that they have an open quantum structure means they are not complete self-referential quantum bit systems. In my essay I also invoke the MERA logic which has a cellular automata nature.
The randomizing influence of the environment is crucial I think to prevent the duplicator from the universal Turing machine problem. The duplicator duplicates the object and blueprint, but in doing so duplicates the blue print encoding a copy of itself, which leads to this infinite regress. This is why there is no UTM; there is the need for the UTM to emulate itself emulating all TMs and itself emulating TMs which then ... . It leads to an uncountable infinite number of needed copies that runs into Cantor's diagonalization problem.
Great essay! LC
Author Jochen Szangolies replied on Feb. 9, 2017 @ 09:05 GMT
Thanks for your comment, and the compliment! I've already had a preliminary look at your essay, but I'll hold off on commenting until I've had time to digest it somewhat (there's quite a lot there to be digested).
I'm thus not quite sure we mean the same thing by an 'open world'. It's true that I use the evolvability of my replicators in order to cope with the limitless possibilities that an agent in the world is presented with---that's why something like an expert system simply won't do: it's essentially a long, nested chain of 'if... then... else' conditionals, which the real world will always exhaust after some time (and given the limitations of feasibility of such constructions, usually after a rather short time).
It may be that something like this intrinsically non-delimitable nature is what you have in mind with the concept of openness, which you then more concretely paint as the existence of long-range entanglement between arbitrary partitions of a system, defining a topological order. But I'll have to have another look at your essay.
Lawrence B. Crowell replied on Feb. 11, 2017 @ 01:10 GMT
This definition of open world is with respect to entanglement swapping in the framework of ER = EPR. With cosmology there is no global means to define a time direction. A time direction is really a local structure, as there does not exist a timelike Killing vector. Energy is the quantity in a Noether framework that is conserved by time translation symmetry. So if you have cosmologies with entangled states across ER black-hole bridges (non-traversable wormholes), the only means by which one can define an open world is entanglement exchange. For instance, the right timelike patch in a Penrose diagram may share EPR pairs with the left patch. In general this can be with many patches, or the so-called multiverse. There can then be a sort of swapping of entanglement.
I then use this to discuss the MH spacetimes and the prospect that this sets up the universe to permit open systems capable of intelligent choices. Your paper takes off from there to construct a possible way this can happen.
Cheers LC
Declan Andrew Traill wrote on Feb. 9, 2017 @ 06:36 GMT
I found the essay a bit hard to read and a bit waffly.
It seems to me you are over complicating the problem.
We can easily see how a robot can build another copy of itself and then install the software that it is itself using into the new robot - job done! No problems with infinite regress, etc.
In nature, creatures that are unable to reproduce will die out, so given enough time, and different types of creatures being formed due to essentially random changes, those that have formed the ability to copy themselves will continue to exist - those that don't, won't.
Declan T
Author Jochen Szangolies replied on Feb. 9, 2017 @ 09:16 GMT
Thank you for your comment. I'm sorry to hear you found my essay hard to read; I tried to be as clear as I could. One must, however, be careful in treating this subject: it is easy to follow an intuition and be led down a blind alley. Hence, I simultaneously tried to be as scrupulous in my formulations as possible---perhaps excessively so.
Take, for instance, your example of the self-reproducing robot: at first sight, it seems to be a nice, and simple, solution to the problem. Likewise, a machine that just scans itself, and then produces a copy, seems perfectly adequate.
But both actually don't solve the problem, as can be seen with a little more thought. For the self-scanning machine, this is described in my essay; for your robot, the key question is about how it copies its own software. The first thing is that the robot itself is controlled by that software; hence, all its actions are actions guided by the software. So, too, is the copying: consequently, the software must actually copy itself into the newly created robot body.
But this is of course just the problem of reproduction again: how does the software copy itself? So all your robot achieves is to reduce the problem of robot-reproduction to software-reproduction. Consequently, it's an example of just the kind of circularity my essay tries to break up.
So I don't think I'm overcomplicating the problem; it's just not that easy a problem (although as von Neumann has shown us, it is also readily solvable, provided one is a little careful).
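(A minimal sketch of the two-role trick at work, offered only as an analogy and not as the essay's actual construction: a tiny Python "quine" reproduces itself without ever scanning its own running code. It carries a passive description, which it uses once as data to be copied and once as the instructions it describes; ignoring the comment lines, the three code lines print an exact copy of themselves.)

```python
# Illustrative analogy only: a quine in the spirit of von Neumann's replicator.
# The string `blueprint` (a name chosen here for illustration) is the passive
# description. The first print plays the copier, emitting the description as
# quoted data; the second print stands in for the constructor, emitting what
# the description describes. Run on their own, the three lines below output
# themselves verbatim.
blueprint = 'print("blueprint = " + repr(blueprint))\nprint(blueprint)'
print("blueprint = " + repr(blueprint))
print(blueprint)
```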
Edwin Eugene Klingman wrote on Feb. 11, 2017 @ 01:26 GMT
Hi Jochen,
You began by observing that "a stone rolls downhill because of the force of gravity, not because it wants to reach the bottom." In fact, life is almost defined by its ability to work its will against gravity. One might ask how this happens.
But your paper on the homunculus fallacy is excellent. The main problem of representations 'using' themselves [thus somehow invoking 'intentionality'] is two-fold. First, there is usually an infinite regress hiding somewhere, and second, as you note in your essay, in the absence of one replicator, "it is not clear how the dominant replicator is selected in order to guide behavior." This is clearly a major problem.
Along the way quite strong assumptions are introduced: "Suppose the system is simply capable of scanning itself, producing a description that then enables it to construct an exact copy." [Maybe, for strings of bits, but how does one scan one's 3D self?] Svozil addresses this. Even so, past the DNA level, it's difficult to envision "mapping all possible responses of an automaton to binary strings...".
Then one assumes producing "images of the world that are capable of looking at themselves – representations that are their own users." You "create mental representations (CA patterns) that are their own homunculi, using themselves as symbols." This strikes me as easier said than done!
I love automata. My PhD dissertation, The Automatic Theory of Physics, dealt with how a robot could derive a theory of physics [see my Endnotes], but, significantly, the goal was supplied from outside, leaving only the problem of recognizing patterns and organizing Hilbert-like feature-vectors. I made no attempt to have the robot formulate the dominant goal on its own.
You then ask that we "imagine the symbol to be grabbing for the apple." Even though you presume "employing a replicating structure that interprets itself as something different from itself" [??], I have trouble imagining the symbol doing so. You've lost me. This is how you achieve "the symbol becomes itself a kind of homunculus."
The core of the problem, as I see it, is the concept of "the internal observer, the homunculus." In other words, an internal system must both model itself and understand itself. Your treatment of this problem is masterful.
May I suggest a different approach? In my essay I note that there are experiential grounds for speculating that there is a universal consciousness field, a physically real field, that interacts with matter. This can be developed in detail [I have done so], but for purposes of discussion, why don't you willingly suspend your disbelief and ask how this solves your problem.
It allows a homunculus to model or "represent" itself (as pattern recognizers and neural nets can do) while not demanding that the device understand itself, or even be aware of itself. All infinite regress problems disappear, as does the need to explain how consciousness 'emerges' from the thing itself.
I hope you will read my essay and comment in this light.
Thanks for an enjoyable, creative, well thought out essay.
Best regards,
Edwin Eugene Klingman
Author Jochen Szangolies replied on Feb. 11, 2017 @ 10:19 GMT
Hi Edwin,
thank you for your kind words, and for giving my essay a thorough reading! I'll have to have a look at yours, so that I can comment on some of the issues you raise.
Regarding the selection problem, I think this is something my model can only hope to address in some further developed stage. Right now, my main concern is to show, in a kind of 'proof-of-principle'-way, that a pattern, or a state of mind, having meaning to itself isn't in conflict with a natural, physical world governed by 'mindless mathematical laws', as the contest heading stipulates (although I myself tend to think of laws rather as descriptions than as active governing agencies).
Furthermore, the 'self-scanning' system is introduced as an example of what will not work: I (following Svozil) demonstrate that this assumption leads to absurdity. So, your intuition is right: there is no system (well, no 'sufficiently complex' system) that could simply scan itself in order to produce its own description. It would've made my life a whole lot easier if there were!
Rather, the impossibility of this particular solution is what forces me to introduce the von Neumann structure of a system with a clearly delineated syntactic and semantic aspect---copying and interpreting its own coded description. So there's a system that simply has its own description available to itself; and if this description is now shaped, as I propose, by an evolutionary process whose fitness function depends on the 'outside world', then this description likewise contains information about the outside world.
Consequently, we have a symbol that has access to information that it itself represents, and that information is about the outside world (by mediation of sensory data setting up certain conditions within the internal CA-universe). In this sense, it is a representation that is its own user.
Now, intentionality is contagious: your own purposeful behavior translates into purposeful behavior of, say, the car you drive. The car makes a left turn because you want to take a left turn. In the same way, if a replicator becomes dominant, it gets to control an organism's behavior---where I fully acknowledge that how it comes to be dominant, and how exactly this behavior-controlling works, don't as yet have satisfying answers in my model.
But suppose this works (and I don't believe that there are any problems other than technical ones in realizing this). Then we have a symbol that, to itself, contains information, and that uses that information in order to guide movement---say, grabbing for an apple. That is, the goal-directedness of the action is due to the information the evolutionary process has imbued the replicator with---because it has a certain form, so to speak, it produces a certain action.
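(To make the step "the fitness function depends on the outside world" a bit more tangible, here is a deliberately crude sketch. The bit-string descriptions, the `ENVIRONMENT` target and the matching-based fitness are my own stand-ins, not the CA construction of the essay; the only point illustrated is that, after imperfect copying plus environment-driven selection, the surviving description ends up carrying information about the environment.)

```python
import random

# Crude toy: replicators are bit-string "descriptions", copied imperfectly;
# the "environment" is a target bit pattern, and fitness is simply how well a
# description matches it. After selection, the dominant description mirrors
# the environment, i.e. carries information about it.
random.seed(1)

ENVIRONMENT = [1, 0, 1, 1, 0, 0, 1, 0]  # stands in for conditions set up by sensory input

def copy_with_mutation(description, rate=0.05):
    """Imperfect copying: each bit flips with probability `rate`."""
    return [b ^ (random.random() < rate) for b in description]

def fitness(description):
    """Environment-dependent fitness: agreement with the external conditions."""
    return sum(d == e for d, e in zip(description, ENVIRONMENT))

population = [[random.randint(0, 1) for _ in ENVIRONMENT] for _ in range(20)]

for generation in range(50):
    offspring = [copy_with_mutation(p) for p in population for _ in range(2)]
    # Selection: keep the descriptions that best match the environment.
    population = sorted(offspring, key=fitness, reverse=True)[:20]

dominant = max(population, key=fitness)
print("environment:        ", ENVIRONMENT)
print("dominant description:", dominant)  # typically close or equal to ENVIRONMENT
```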
Does this help?
I'm going to have a look at your essay (but it might take me some time to comment).
Cheers,
Jochen
Member Rodolfo Gambini wrote on Mar. 1, 2017 @ 13:43 GMT
The essay is well written and calls attention to von Neumann's self-replication construction that could have a relevant role in some forms of intentional behavior.
Author Jochen Szangolies replied on Mar. 2, 2017 @ 08:59 GMT
Thank you for your kind comment!
Peter Jackson wrote on Mar. 4, 2017 @ 13:42 GMT
Jochen.
I must say I consider your essay one of the best here. I didn't find it difficult to read, and it was spot on topic, with some important points. The homunculus fallacy and regression are too little considered in this contest.
I agree, and also discuss the 'three-partite' relationship area, but suggest it seems to leave out the key element: whoever it was who turned a blank sheet of paper into a blueprint, and how. Perhaps you 'roll that in' to the drawing, but I think other important points emerge. Perhaps discuss when you've read mine?
I also agree with your points on mutation, but ask: how? Again, I identify a mechanism in my essay which has the advantage of a classical analogue of QM's predictions to shed light on the smallest-scale mechanisms.
Very nicely written. I don't understand why your score is so low; perhaps it's been trolled with 1's like mine? (Three 1's without comment early on!)
I look forward to discussing further.
I certainly think yours should be a finalist and my score should help.
Peter
Author Jochen Szangolies replied on Mar. 14, 2017 @ 12:04 GMT
Dear Peter,
thanks very much for your kind words! (Sorry, by the way, for being so late in replying---I was on holiday the past week...)
I think you correctly identify one of the main points where my proposal still needs work: as it stands, it's indeed not clear how, exactly, the selection process is implemented in the brain (if indeed it is). Mutation as such isn't that difficult: we merely need to stipulate that copying isn't perfect, which seems only realistic. But what decides which version is more fit with respect to the conditions the environment (ultimately) sets up?
I'll certainly have a look at your essay; maybe you can help me out there!
Regarding the score---yes, I've noticed a few unfortunate one-point votes without comment. It's a bit of a shame that people feel the need to resort to such practices, but with the voting system as it is, there's probably not a lot to be done right now.
Cheers,
Jochen
Conrad Dale Johnson wrote on Mar. 12, 2017 @ 16:48 GMT
Jochen –
Thank you for working through an interesting problem in a very clear and thoughtful way. The argument is coherent and well-structured from beginning to end, despite its complexities.
Since I take quite a different approach in
my essay on the emergence of meaning, I’m afraid my comments here may not be very helpful in clarifying your theme – I’ve tried to make up for that by giving your essay the high rating it deserves.
You understand meaning in terms of reference or representation, which is well-accepted – mainly because it has a kind of clarity that’s otherwise hard to achieve. But of course there are many other ways for things to be meaningful – to “make a difference that makes a difference,” in Bateson’s phrase – without representing other things. You’re right that to understand reference we need to include an “agent” as well as a sign and its interpretation… and the rest of your argument follows convincingly, on this basis. More generally, though, what makes things meaningful is the context of possibilities in which they may have some effect, that changes what can happen in other contexts. Such contexts are always complex, hard to represent symbolically. But I’ve tried to show they can be understood in terms of the functionality of three distinct kinds of recursive systems.
Your argument about replicators makes a great deal of sense in a computational context. But the original replicators on Earth apparently faced a very different kind of challenge – they could by no means take for granted the existence of well-defined structures more complex than small organic molecules, and there were no blueprints or constructors available. So I suspect there may be basic limitations to computational models of biological systems, including the brain, where information-processing has to operate through interactions that are largely random, at the molecular level. Even in physics, I argue that the mathematical patterning serves a more basic function – that of selecting meaningful, i.e. measurable information out of a background of random events.
Nonetheless, I find your point very interesting that computational self-replication is only possible through a two-stage process. As you know, von Neumann was also instrumental in developing the two-stage representation of quantum dynamics, which plays a role in my essay. I wonder if there’s any connection between these aspects of his work?
Thanks again for your excellent contribution.
Conrad
Author Jochen Szangolies replied on Apr. 1, 2017 @ 12:11 GMT
Dear Conrad,
I don't know how I missed your reply earlier---sorry for that. And thank you for your kind words!
I agree that representationalism isn't necessarily the only way to get meaning out of some system; one could, for instance, also think in terms of subsymbolic approaches. Representationalism's main virtue, to me, is that if it works, it's completely clear how---by simply having some vehicle standing in place of some object or state of affairs. But of course, this direct route is blocked by the homunculus; hence, my attempt to patch things up. If that turns out not to work, it might be necessary to abandon representationalism altogether, and move on to something else; but since, to me, this seems to entail a certain loss of intuitiveness and clarity, I'm going to keep on digging on this ground until I'm absolutely certain I'll never strike gold.
I'll certainly have a look at your essay; maybe I'll find something interesting to say about it.
However, a point of clarification: I don't understand my model as being mainly computational; in fact, I'm skeptical of computational models. I know that CAs are usually thought of as computational systems, but that just means that they are systems that can be used to compute, not that they are intrinsically computational. To me, what's more important is the pattern, which is a physically real thing (an analogy to the pattern of neuron firings in a brain), and its properties. The meaning I see is the semantic information the pattern contains about itself, and about the environmental conditions. But that's not a point I wanted to put too much emphasis on in the present essay.
Anyway, thanks again for your comment!
Cheers,
Jochen
Author Jochen Szangolies replied on Apr. 1, 2017 @ 12:34 GMT
Ah, forgot to comment regarding the two-tiered dynamics of quantum mechanics. At present, I'm not sure if one can really formalize a parallel, but just on the level of analogy and metaphor, these things may not be too far away from one another---there have often been attempts to link quantum mechanics and self-reference (one prominent exponent of this view being John Wheeler), and of course, the bipartite structure of von Neumann's replicators is exactly due to the problems of self-reference (which makes a self-scanning mechanism impossible). So well, maybe?
Dizhechko Boris Semyonovich wrote on Mar. 12, 2017 @ 21:24 GMT
Dear Jochen Szangolies!
I invite you to familiarize yourself with New Cartesian Physic.
I appreciate your essay. You spent a lot of effort to write it.
If you believed in the principle of identity of space and matter of Descartes, then your essay would be even better.
I wish to see your criticism of New Cartesian Physic, of which I call myself the founder.
The concept of moving space-matter helped me:
- to turn Heisenberg's uncertainty principle into the principle of definiteness of points of space-matter;
- to discover the law of the constancy of the flow of forces through a closed surface, which is the sphere of space-matter;
- to discover the law of universal attraction of Lorentz;
- to give the formula for the pressure of the Universe;
- to give a definition of gravitational mass as the flow vector of the centrifugal acceleration across the surface of the corpuscles, etc.
New Cartesian Physic has great potential in understanding the world. To show this potential, in my essay I gave the way of the materialist explanation of the paranormal and the supernatural. Visit my essay and you will find something in it about New Cartesian Physic. Note my statement that our brain creates an image of the outside world not inside, but in external space. I hope you rate my essay as highly as I rate yours. I am awaiting your post.
Sincerely,
Dizhechko Boris
Author Jochen Szangolies replied on Apr. 1, 2017 @ 12:14 GMT
Dear Dizhechko Boris,
sorry for not replying earlier. Thank you for appreciating my essay; I will have a look at yours---however, I must confess I am somewhat skeptical that new physics is needed in order to make sense of intentional, goal-directed behavior. But I will try to form an unbiased opinion of your work.
Cheers,
Jochen
Dizhechko Boris Semyonovich replied on Apr. 5, 2017 @ 16:49 GMT
Dear Jochen Szangolies
Physics of Descartes existed before Newtonian physics. It is known that, through the efforts of Voltaire, Newtonian physics moved to Europe and became dominant up until Einstein put it in doubt; but he did this not by returning to the physics of Descartes, but by relativism, i.e. by complicating it.
I believe that by updating the physics of Descartes we can achieve a greater understanding of the world than previous theories did, as it provides a more intuitive mapping. New Cartesian Physic, as the concept of moving space-matter, does not remake modern physics, but summarizes it on the basis of the identity of space and matter.
I appreciate your essay and wish you success in the contest.
Sincerely,
Dizhechko Boris
Cristinel Stoica wrote on Mar. 29, 2017 @ 18:19 GMT
Dear Jochen,
I enjoyed reading your essay. The problem of intentionality is indeed plagued by the homunculus fallacy as you described. I liked how you refer to Svozil's theorem and use von Neumann's constructors and replicators to propose a solution. Also the parallel with the immune system. And that you state clearly what open problems you see that need to be solved. Very good work!
Best regards,
Cristi Stoica
The Tablet of the Metalaw
Author Jochen Szangolies replied on Apr. 1, 2017 @ 12:19 GMT
Dear Cristi,
thanks for the kind words! Yes, I think that even if there's some germ of truth in my model, it'll be a long way yet before it'll be clear whether it actually solves the problems it sets out to solve. I think things are looking somewhat hopeful at the current stage, and the main virtue is that it provides a relatively concrete, well-defined model to play with; so I think there's a justifiable hope that even if things ultimately don't work out, we'll get some useful pointers regarding what not to do.
Cheers,
Jochen
Miles Mutka wrote on Mar. 30, 2017 @ 18:07 GMT
Hi Jochen,
Nice and clearly written essay. I was vaguely aware of the self-replicating machines of von Neumann, but I did not know that they were formalized using cellular automata.
I like that your use of CA is very engineering/evolution oriented, rather than getting mired in the details of logic calculus or computability like so many others.
Still, a lot of details are missing, like whether there are any natural boundaries of such self-replicating patterns, or indeed what features are necessary for a pattern to count as a "CA brain". Also, if mutation is involved, how much can the pattern change and remain the "same" brain?
All the best, Miles Mutka
Author Jochen Szangolies replied on Apr. 1, 2017 @ 12:24 GMT
Dear Miles,
I'm glad you found something of value in my essay! You're right, I think of this model as a kind of 'hands-on' test bed for my ideas; and as you point out, there's still lots to tinker with.
Regarding the question of identity, I'm afraid I don't have an answer. In a sense, it's analogous to the question of when a speciation event occurs---when was the first little bundle of feathers clawing its way out of an egg no longer a dinosaur, but a bird?
I'm not sure the question is very meaningful, at least in that case: the boundary between 'bird' and 'dinosaur' is ultimately as arbitrary, and as man-made, as the boundaries between nations on a map. But is there more meaning to the question in the case of brains/minds? Lots of people, from Hume to the Buddha, didn't think so. I, for myself, am just going to continue tinkering with my model for the moment.
Cheers,
Jochen
Robert Groess wrote on Apr. 1, 2017 @ 04:51 GMT
Dear Jochen Szangolies,
Thank you very much for your eminently readable and excellent summary on Von Neumann's cellular automata and the various implications of his work, forming much of the groundwork that is today considered to be "artificial intelligence". I wanted to let you know I particularly enjoyed the scope of your essay along with the appropriate rigorous grounding, and have rated it in the meantime too.
Regards,
Robert
Author Jochen Szangolies replied on Apr. 1, 2017 @ 12:27 GMT
Dear Robert Groess,
thank you for the kind words! Yes, it's certainly a testimony to the genius of von Neumann that his work continues to influence and direct modern ideas---he ought to be rated much higher on the list of all-time greatest minds than he usually is.
I'll try and take a look at your essay, too!
Cheers,
Jochen
Donald G Palmer wrote on Apr. 2, 2017 @ 22:58 GMT
Jochen
An absorbingly written essay with a number of interesting automata discussions. I am not sure you explicitly define what you mean by 'intention' - or how your definition fits what humans are capable of vs. what is possible in an automata model.
I found the essay does need some additional context to understand, as you presume some knowledge of other items you reference. So I did review some of your references for my response.
There are three items I wish to discuss:
The first is the concept of requiring a sequentially ordered list of actions - which I believe you refer to as Richard's paradox. This is essentially Georg Cantor's proof of the uncountability of the Real numbers (using decimal expansions). The fact that something cannot be sequentially ordered does not mean it is not ordered - since Real numbers are ordered, yet cannot be placed in a 1-1 relationship with integers. It does mean that there are limitations to sequential automata and sequential instructions. This does not preclude parallel instructions, which I do not believe you address. This might be a worthy direction to pursue.
The second is the concept of replication needing to be exact. I do not see any evidence in nature that replication need be exact - and thus require the infinite regress you present. There are numerous examples of non-exact reproduction (like maybe all living reproduction), where some aspects are generated from a static (or passive) state. If much of our knowledge and learning starts from a (near) blank slate, then there is no requirement for exact reproduction. In fact this may be an evolutionary negative that has been 'sifted out' in the early stages of life (why make the same exact being with the same mistakes in the next generation - at least make different mistakes).
Finally, my reading of how sensory perception works indicates that it is an active process, whereby sense perceptions are constructed against expected concepts; it is not like a projection on a screen or light filtering into a room. We are active in our construction of what we perceive - not passively receiving images or sensory inputs. I think this changes the participatory actions of the 'agent' you discuss.
An interesting essay, overall.
Thank you,
Don
Author Jochen Szangolies replied on Apr. 4, 2017 @ 13:47 GMT
Dear Donald,
thank you for taking the time of reading and commenting on my essay. Regarding intentionality, I agree that the concept is treated somewhat vaguely in much philosophical literature, but I'd say my level of rigor is par for the course, at least---compare my definition: "Mental content exhibits the curious property of intentionality—of being directed at, concerned with, or simply about entities external to itself.", and that of the Stanford Encyclopedia of Philosophy: "Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs." Both essentially follow Brentano, who introduced the concept into modern discourse (saving it from the old scholastics).
Again, I can understand---and to some degree, sympathize with---wanting a more thorough definition, but sometimes one also risks getting embroiled in petty turf wars when trying to clarify every last definitional issue ('rigour mortis'). So rather than spending most of the allocated room on such definitions, I chose to introduce my model instead, hoping that this would help clarify lingering issues---apologies if it didn't.
Regarding Richard's paradox, the ordering itself isn't really so important; one merely needs an unambiguous way of referring to certain elements (either of English phrases corresponding to natural numbers, or of behaviors of a given automaton). Any association between these elements and natural numbers will do fine, since then, you can refer to the nth element, which picks out a unique one; then, you can use the diagonalization trick by creating a new element that wasn't part of the original association.
But since that was claimed to be complete (a list of all the English sentences describing real numbers/a theory of all behaviors of the automaton), we arrive at a contradiction.
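(The diagonalization trick itself is easy to make concrete; the following little sketch is the standard textbook construction rather than anything specific to either essay. Given any purported enumeration of infinite 0/1 sequences, flipping the n-th digit of the n-th sequence yields a sequence that cannot appear anywhere in the list.)

```python
# Standard diagonalization, for illustration only: sequences are represented
# as functions from position n to {0, 1}; `enumeration` is a stand-in for any
# claimed complete listing of such sequences.

def enumeration(k):
    """The k-th sequence of some purported enumeration. Purely for illustration,
    sequence k is here taken to be the binary expansion of k."""
    def sequence(n):
        return (k >> n) & 1
    return sequence

def diagonal(enum):
    """The anti-diagonal sequence: flip the n-th digit of the n-th sequence."""
    def sequence(n):
        return 1 - enum(n)(n)
    return sequence

d = diagonal(enumeration)
# For every k, d differs from the k-th listed sequence at position k,
# so d cannot be anywhere in the enumeration:
print(all(d(k) != enumeration(k)(k) for k in range(1000)))  # True
```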
Regarding self-replication, you raise a good point---natural replication is indeed never exact. This doesn't necessarily address the infinite regress, though: if the parent needs to have access to a plan of the child in order to construct the next generation, we still get a regress, even if all of the plans are allowed to differ. Furthermore, when replication is inexact, we start getting into issues of vagueness: when is a construct a 'child'? How similar do parent and child-generation have to be in order to constitute an example of self-replication? If a stone, rolling down a hill, breaks off another, is that an example of self-replication?
Lastly, you're dead-on regarding perception: it's indeed a far more active process than my caricature gives it credit for. But whether the outside world is just faithfully projected onto an internal screen, or whether a sort of virtual internal reality, perhaps only loosely 'inspired by actual events', is created, doesn't make a difference for the conceptual point: both implicitly presume some central meaner (as Dennett calls the homunculus) using the internal representation as pertaining to the outside world. And with that, we're already off the rails as far as a theory of representation goes.
Again, thanks for your thoughtful comments!
Cheers,
Jochen
Stefan Keppeler wrote on Apr. 3, 2017 @ 22:03 GMT
Dear Jochen,
I like your essay. Offering a solution to the homunculus problem you focus on a different aspect than most other essays which argue for a naturalist explanation of intention. I examine the compatibility of goal-oriented macroscopic behavior and 'goal-free' microscopic laws, which you may also find useful.
Cheers, Stefan
Author Jochen Szangolies replied on Apr. 4, 2017 @ 13:49 GMT
Dear Stefan,
thanks for your comment. I have to say that I feel somewhat narrow in focus in this competition---most people seem to propose entire cosmologies, while I just play around with cellular automata!
I'm glad, though, that some people seem to find some value in my ideas nevertheless. I'll have a look at your essay!
Cheers,
Jochen
Torsten Asselmeyer-Maluga wrote on Apr. 6, 2017 @ 19:24 GMT
Dear Jochen,
very interesting essay. I rated it with the highest number.
Your goal-oriented dynamics (replication) reminds me of evolution. I wrote my PhD thesis on physical models of evolution, including the evolution of networks. Evolution is goal-oriented. Here, there are two processes, mutation and selection. Mutation produces new information (= species), and selection is a global interaction among the species, giving a goal to the process. In a more refined model of co-evolution, the selection itself is formed by the interaction between the species, so again you will get a direction or goal.
I know it is a little bit too late (maybe), but I want to recommend my essay.
All the best and good luck for the contest
Torsten
Author Jochen Szangolies replied on Apr. 7, 2017 @ 07:25 GMT
Dear Torsten,
thanks for your comment. Glad you found something to like about my work!
Regarding evolution, I'm unsure if I would really say that mutation adds information---in a sense, mutation merely creates an ensemble of possible signals; selection then chooses among these. The ensemble of messages doesn't really carry information, but choosing one of the options then at the very least carries information about the entity making the choice---in my case, the environment, as mediated by the cellular automaton. But that's maybe something for another day to ponder.
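(A back-of-the-envelope way to put that last point, with made-up numbers purely for illustration: an ensemble of N variants thrown up by mutation does not by itself single anything out, but the environment's 'choice' of one out of N a priori equally likely variants resolves log2(N) bits, and those bits are information about the chooser.)

```python
import math

# Illustrative arithmetic only: the self-information of selecting one of N
# equally likely variants is log2(N) bits. N = 1024 is an arbitrary example.
N = 1024                      # variants produced by (hypothetical) mutation
bits_resolved = math.log2(N)  # uncertainty removed by the environment's 'choice'
print(f"selection among {N} equally likely variants fixes {bits_resolved:.0f} bits")
```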
I had actually already read your essay, and found it very intriguing; although I apparently didn't add a comment (I find it hard to keep track of conversation threads in this forum). Thanks for your good wishes, and right back to you!
Cheers,
Jochen
Member Marc Séguin wrote on Apr. 6, 2017 @ 23:16 GMT
Dear Jochen,
In your reply to Stefan Keppeler above, you noted that your paper has a relatively narrow focus, compared to those (like mine!) that "propose entire cosmologies". But this is not necessarily a bad thing: your paper is well written, rigorously argued, interesting and perfectly relevant to this year's essay topic: why ask for more?
I already knew about many aspects of von Neumann's work, in particular about von Neumann replicators, but I had never studied the details of his approach. Your essay presents it very clearly and builds on it in an interesting way. Congratulations, and good luck in the contest!
Marc
Author Jochen Szangolies replied on Apr. 7, 2017 @ 07:33 GMT
Dear Marc,
thanks for your kind words! Although I have to say, I'm still a little humbled by the breadth and depth of ideas and concepts presented in this contest. I mean, of course I have my own ideas about what the world, deep down, is like, but I'm not sure I'll ever consider them well-developed enough to risk airing them in such a forum---so all the more props to those who do!
Von Neumann truly was a thinker of rare accomplishment; I'm happy enough if I can help popularizing some of his ideas.
Thanks for the well wishes!
Cheers,
Jochen
Rick Searle wrote on Apr. 8, 2017 @ 23:20 GMT
Hi Jochen,
What a brilliant use of Von Neumann replicators! I can't say I am qualified to judge whether your theory is workable, but feel you're certainly on to something.
All the best,
Rick Searle