TOPIC: Five Reasons Thinking Computers Won't Destroy Humanity, Probably

Blogger William Orem wrote on Feb. 13, 2011 @ 00:38 GMT


Here’s a section from a fun news article that ran recently on NPR. (“Fun,” I suppose, only because I find it implausible; if you buy the premise—that within a few decades self-redesigning computer intelligence will skyrocket ahead of our ability to comprehend it and immediately destroy humanity—you may not find it so much fun.) The NPR person is interviewing Keefe Roedersheimer at the Singularity Institute for Artificial Intelligence in Berkeley:

*** *** *** ***

KASTE: Keefe Roedersheimer is one of the institute's research fellows. Over cups of green tea, he explains that he's a software engineer who's done work for NASA, and that his idea of a good time is teaching a computer how to play poker like a human.

But right now, at the institute, he's trying to predict the rate of advancement of artificial intelligence or A.I.

Mr. ROEDERSHEIMER: So it's about knowing when this could happen.

KASTE: By this, he's talking about the invention of a computer that's not only smart but also capable of improving itself.

Mr. ROEDERSHEIMER: Is able to look at its own source code and say, ah, if I change this, I'm going to get smarter. And then by getting smarter, it sees new insights into how to get smarter. And then by having those insights into how to get smarter, it modifies its source code and gets smarter and gets some insights. And that creates an extraordinarily intelligent thing.

KASTE: They call this the A.I. singularity. Because the intelligence could grow so fast, human minds might not be able to keep up. And therein lies the danger.

*** *** *** ***

Danger, you say? One might think that vastly improved computation would be a good thing. The laptop on which I am “penning” these comments at the moment is to ENIAC as that computational aid was to an abacus, but so far, all to the good. Not for long, say some . . .

*** *** *** ***

You've already seen this movie.

(Soundbite of movie, "Terminator 2: Judgment Day")

Mr. ARNOLD SCHWARZENEGGER (Actor): (as The Terminator) Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Ms. LINDA HAMILTON (Actress): (as Sarah Connor) Skynet fights back.

Mr. SCHWARZENEGGER: (as The Terminator) Yes.

KASTE: They kind of hate it at the institute when you quote the "Terminator," but Roedersheimer says, at least, those movies gave people a sense of what could happen.

Mr. ROEDERSHEIMER: That's an A.I. that could get out of control. But if you really think about it, it's much worse than that.

KASTE: Much worse than "Terminator"?

Mr. ROEDERSHEIMER: Much, much worse.

KASTE: How could it possibly - that's a moonscape with people hiding under burnt out buildings and being shot by laser. I mean, what could be worse than that?

Mr. ROEDERSHEIMER: All the people are dead.

KASTE: In other words, forget the heroic human resistance. There'd be no time to organize one. Somebody presses enter, and we're done.



*** *** *** ***

Others have speculated that the dawn of spiritual machines, to borrow Ray Kurzweil’s phrase, will be a non-apocalyptic style apocalypse: everything will become different, and with extreme rapidity, but more in the manner of spontaneous state change than of a bomb being triggered. I am reminded of a nifty little story by John D. MacDonald I read as a child—MacDonald was a prolific writer of pulp, probably best remembered for the novel that would later be adapted as *Cape Fear*—in which scientists labor to develop a self-aware computer and, the moment it comes on line, listen in astonishment as the system begins describing all of the changes that are going to take place in their common society, beginning now. There’s no suggestion of attacking humanity, but rather of initiating an age of rational conflict-resolution and resource distribution—something humans have proven themselves woefully inept at achieving.

But back to the scary version of tomorrow:

*** *** *** ***

The singularity idea has floated around the edges of computer science since the 1960s, but these days, it's the subject of Silicon Valley philanthropy.

At a fund-raising party in San Francisco, the co-founder of PayPal, Peter Thiel, explains why he supports the Singularity Institute.

Mr. PETER THIEL (Co-Founder, PayPal): People are not worried about what supersmart computers will do to change the world, because we don't see those every day. And so I suspect that there are a lot of these issues that are being underestimated.

KASTE: Also at the party is Eliezer Yudkowsky, the 31-year-old who co-founded the institute. He's here to mingle with potential new donors. As far as he's concerned, preparing for the singularity takes primacy over other charitable causes.

Mr. ELIEZER YUDKOWSKY (Research Fellow and Director, Singularity Institute for Artificial Intelligence): If you want to maximize your expected utility, you try to save the world and the future of intergalactic civilization instead of donating your money to the society for curing rare diseases and cute puppies.

KASTE: Yudkowsky doesn't have formal training in computer science, but his writings have a following among some who do. He says he's not predicting that the future super A.I. will necessarily hate humans. It's more likely, he says, that it'll be indifferent to us - but that's not much better.

Mr. YUDKOWSKY: While it may not hate you, you're made of atoms that it can use for something else. So it's probably not a good thing to build that particular kind of A.I.

KASTE: What he and the institute are trying to do, he says, is start the process of figuring out how to build what he calls friendly A.I. before somebody inevitably builds the unfriendly variety.

*** *** *** ***

Woah! “The future of intergalactic civilization”? Not just planetary, or even interstellar, but coordinated civilization among separate galaxies? One wonders whether Mr. Yudkowsky knows just how far away almost all of the Milky Way is from here—to say nothing of the distance between ours and, say, Andromeda. Given the travel times, even for super-luminal technologies, I’m not sure the phrase “intergalactic civilization” has a meaning.

But, okay. We’re just spinning out possible futures, so let’s play along. It still strikes me that there are any number of things that might prevent this doomsday scenario—this “rapture of the nerds,” as it is humorously called—even in the short run. And it may be a useful mental exercise to try and figure out what they are.


Thus, my proposals for five possible impediments to computer-ocalypse:

1. Massively increased intelligence might include an increased capacity for what we term morality.

“Moral” is at the moment a hopelessly nonspecific term, somewhere in the fuzzy land of evolved propensities toward altruistic behavior, but it has at least a rough meaning in all cultures, and one assumes therefore that it might be found in hyper-advanced machine intelligence as well. Specific moral codes are notoriously flexible according to circumstance—as George Carlin quipped, Thou Shalt Not Kill has always been one of those “negotiable” commandments—but one might, at a minimum, include “Thou Shalt Abstain from Wholesale Slaughter of Your Human Creators” as basic moral decision-making.

In simpler terms: Why would a super-intelligent machine be without compassion? Is compassion in some way unintelligent?

2. “While it may not hate you, you're made of atoms that it can use for something else.”

Absence of hate doesn’t mean indifference to suffering. I could eat my neighbors—their atoms would be useful to me—but other considerations prevent me from doing so. (And it isn’t just that I can’t overpower them. We don’t eat children, the very old, or strangers with no social clout. An increasing number of us—I am among them—think we should accord rights to a whole host of creatures we currently do eat, or otherwise abuse, on the basis of sentience and ability to suffer.) More to the point, humans are oxygen, carbon, hydrogen, nitrogen, calcium, maybe some phosphorus. It’s not like we’re especially rare items. A silicon-based intelligence might well regard us as more or less thinly packaged seawater, which is virtually everywhere on this planet.

3. No one has yet demonstrated that self-awareness can be instantiated in a computer system at all. There are compelling arguments that it’s only a matter of increased processing power, but equally compelling counter-arguments that consciousness isn’t just data-crunching. We simply don’t know whether a brain—which was at one time thought to be like a telephone—really is like a computer, or whether the comparison is terminally flawed.

To say it another way, WATSON may beat all of us at Jeopardy, but that doesn’t mean it is thinking. And actual thinking may be requisite for stepping outside of initial programming in order to construct a new objective in line with your own agenda. It may be, that is, that computers can’t have agendas, because they can’t have selves.

(Counterargument: a virus has no consciousness, and no agenda, and does a fine job at eradicating humans nonetheless. Perhaps we should be more worried about *un*intelligent computer systems, such as nanobots or Von Neumann machines.)

4. There may be insurmountable limitations to the speed with which intelligence can be improved. Computer-ocalypse assumes that a self-modifying intelligence will advance with dizzying speed, when it might simply advance dizzyingly into the next wall. I doubt even a super-computing chip could find a way around the light speed limit just by number-crunching the problem; and there simply may be no way around that limit, no matter how smart you are. Similarly, heat build-up is a perennial problem for chip-designers, and one that seems to be built into the fabric of nature, not just limitations in human ingenuity. Conceivably, an ideal quantum chip might “generate” an answer to an input query without having to move any electrons, but if that’s the best possible chip, then a system tasked with improving itself will arrive quickly at quantum chip state and remain there.

We don’t yet know whether Moore’s Law breaks down at some point; to extrapolate it indefinitely into the future is just speculation.

5. It may be that intelligence *itself* cannot self-improve beyond a certain point—this is a separate issue from whether physical systems can run at indefinitely high speeds—because it always fragments off into sub-intelligences with specialized areas of awareness (as the brain seems to do, for example; different modules compete with each other, perhaps even in a Darwinian fashion, within the overall complex we refer to as “brain”). Human society is a large-scale example of this. There was a time when a Renaissance Man might be well-versed in Everything. Now, with knowledge increased exponentially, a single individual is accomplished if she or he knows a great many things about even one field of specialization.
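As promised under impediment 4, here is a toy numeric sketch (the numbers and the ceiling are pure invention for illustration; this models no real chip or system) of a self-improver whose per-step gains shrink as it nears a hard physical ceiling. Growth looks explosive at first, then flattens instead of diverging:

```python
# Toy illustration of impediment 4 (an assumption-laden sketch, not a model
# of any real system): an "intelligence" that multiplies itself each step,
# but whose gains shrink as it approaches a hard physical ceiling.
CEILING = 1000.0          # stand-in for limits like light speed or heat
iq = 1.0
for step in range(1, 61):
    headroom = 1.0 - iq / CEILING      # how far from the wall we are
    iq *= 1.0 + 0.5 * headroom         # gains scale with remaining headroom
    if step % 10 == 0:
        print(f"step {step:2d}: {iq:8.1f}")
# Early steps roughly compound at 50%; later steps crawl toward the ceiling.
```

Whether real physics imposes such a ceiling is exactly the open question; the sketch only shows that "self-improving" and "runaway" are not the same claim.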

Can you think of others? Post them here.

Barriers to apocalypse may prove in themselves surprisingly fruitful avenues of discovery. In any event, we have some time to kill before WATSON starts teething.





Lawrence B Crowell wrote on Feb. 13, 2011 @ 03:04 GMT
I remember this feature on NPR “All Things Considered.” I think there is an issue with this, but I don’t think it is coming from some single AI super computer which takes over the world, or legions and phalanxes of single AI robotic units which march over us. How it will happen is all around us already, and we love the stuff. It comes in the form of little microcomputers which people seem...


John Merryman replied on Feb. 13, 2011 @ 03:26 GMT
Lawrence,

The borg mind meld is down, i.e. those links don't work.


John Merryman replied on Feb. 13, 2011 @ 03:40 GMT
fixed the address.

The facebook borg is scary. Here is a better link;

http://www.youtube.com/watch?v=tGZgDCGnTU0



John Merryman wrote on Feb. 13, 2011 @ 03:17 GMT
William,

The basic premise is flawed. Computational complexity doesn't lead to awareness. Awareness produces computational complexity. A dog is aware, but it is not computationally complex, at least in the number crunching category.

Consider that there are two primal mysteries. How did organic life begin? How did it become aware?

What if these are in fact the same mystery: that biology is primordially aware? It would satisfy Ockham's Razor, for one thing. Otherwise we have to ask and specify what we are willing to categorize as awareness.

E.O. Wilson described the insect brain as a thermostat, in that it reacts to changes in environmental energy levels. The fact is that the parallel-processor side of our own brain is a complex thermostat that reacts to energy, i.e. hot/cold, attraction/repulsion, light/dark, etc. The other, linear-processor side of the brain is effectively a clock, in that it calculates serial cause and effect of subsequent events. Yet it has been proven that ants can count footsteps, so they too have a rudimentary linear processing ability. These are the scalar and vector functions.

In fact, ant hives have proven to have computational abilities superior to many of our traffic-management systems.

Both people and ants are examples of swarm intelligence.

What computers are actually doing is vastly increasing our mental connectivity and thus creating a much larger swarm intelligence. What is happening is that the planet is growing a nascent central nervous system, but we are still at a very infantile and energy intensive stage. Maybe, when we have been around as long as ants, we will have matured. You, I and Ray Kurzweil won't live to see it though.

Computers are tools. Yes, the more powerful they are, the more potential for danger they possess, but they are still tools.



John Merryman wrote on Feb. 13, 2011 @ 03:20 GMT
What is really scary is that Lawrence and I are starting to think alike;)



Florin Moldoveanu wrote on Feb. 13, 2011 @ 04:50 GMT
Each generation has its snake oil salesmen; in our time, among other things, this takes the form of AI taking over the world. I worked in AI myself, and I will try to give some simple arguments why this is completely bogus.

Consider the task of recognizing a letter on a grid of 12x12 pixels. How many combinations are there? 2^(12x12), about 10^43. The sheer volume of combinations makes it impossible to write expert rules to recognize even a single letter. Instead, the approach people take in AI is to train a neural network. And indeed, the best character recognition software even exceeds human ability in terms of mistakes per character. But there is a catch. Training a neural network to recognize 26 distinct characters takes weeks of continuous running, and months of data preparation. Unsupervised learning simply does not converge anywhere.

Now if the best commercial recognition engines are more accurate than humans at recognizing individual characters, why don't we have commercially viable text readers that work in any condition? That is because this is a chicken-and-egg problem: to determine the correct boundary of a character, you need to recognize the character in the first place. The trial and error of recognizing individual characters ruins the overall accuracy for word recognition. There is no commercially viable word-recognition neural network. Training recognition of a mere 26 characters is very hard; training recognition of 100,000 words is at least 10,000 times harder. And then the problem starts anew: we recognize words, but how about grammar and sentence recognition, etc.? Again, and again, and again. And at each level, unsupervised learning/training hardly converges. The complexity of the real world which AI has to conquer is far greater than any improvement in computer speed or algorithm development.
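Florin's 12x12 setup is easy to make concrete. Below is a minimal sketch in Python/NumPy (the prototypes, noise model, and network size are toy assumptions, not Florin's actual setup or any commercial engine): a one-hidden-layer network trained by hand-written backpropagation on synthetic 12x12 "letters". It converges only because the toy data is trivially clean; his point is that real handwriting does not behave this way.

```python
# Minimal sketch (hypothetical, NumPy only): a one-hidden-layer network
# classifying synthetic 12x12 binary grids into 26 classes. Real OCR
# training, as the post notes, needs months of data preparation; this
# only illustrates the setup, not a production recognizer.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: one random 12x12 prototype per "letter",
# plus noisy copies. (A real system would use labeled scans.)
prototypes = rng.integers(0, 2, size=(26, 144)).astype(float)

def make_batch(n_per_class=20, flip_prob=0.05):
    X, y = [], []
    for c in range(26):
        for _ in range(n_per_class):
            noise = rng.random(144) < flip_prob
            X.append(np.abs(prototypes[c] - noise))  # flip ~5% of pixels
            y.append(c)
    return np.array(X), np.array(y)

X, y = make_batch()
W1 = rng.normal(0, 0.1, (144, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 26));  b2 = np.zeros(26)

for epoch in range(200):
    h = np.maximum(0, X @ W1 + b1)                  # ReLU hidden layer
    logits = h @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)   # softmax
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1        # cross-entropy grad
    grad /= len(y)
    dW2 = h.T @ grad; db2 = grad.sum(0)
    dh = (grad @ W2.T) * (h > 0)                    # backprop through ReLU
    dW1 = X.T @ dh;   db1 = dh.sum(0)
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

acc = ((np.maximum(0, X @ W1 + b1) @ W2 + b2).argmax(1) == y).mean()
print(f"training accuracy: {acc:.2%}")  # high only because the data is toy
```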


Lawrence B Crowell replied on Feb. 13, 2011 @ 13:57 GMT
The AI thesis, related to the Church-Turing conjecture that algorithms are equivalent to recursive functions or the λ-calculus, is that processes of state change in systems are all Turing-machine enabled. There is some merit to it. However, biological systems do not function entirely this way. The function of a kinase in a molecular pathway is similar to a processor, in that it phosphorylates some residue site based on a Boolean logic. However, the pathway is connected to other pathways, which up- and down-regulate each other. These pathways, which might be compared to little algorithms which interact with each other, form molecular webs which are auto-regulated. The net result might be similar to Conway's game of life, where simple rules result in an emergent complexity or "properties."
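Conway's game of life, which Lawrence invokes here, really is just two rules. A minimal sketch (grid size, seed, and wraparound edges are arbitrary choices for illustration):

```python
# Minimal sketch of Conway's Game of Life: two rules, emergent structure.
# (Illustrates the "simple rules -> complexity" point only; nothing here
# models kinase pathways.)
import numpy as np

def step(grid):
    # Count the eight neighbors of every cell, with wraparound edges.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A live cell survives with 2 or 3 neighbors; a dead cell is born with 3.
    return ((grid == 1) & ((n == 2) | (n == 3))) | ((grid == 0) & (n == 3))

rng = np.random.default_rng(1)
grid = rng.integers(0, 2, (32, 32))
for _ in range(100):
    grid = step(grid)
print(grid.astype(int).sum(), "cells alive after 100 steps")
```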

If our cyber technology results in something we might call AI, or the ability of machines to self-adapt and learn, it is likely to happen on this basis. I doubt some grand algorithm will come out of a computer science department at a big institution which solves the problem of AI, assuming we even really know what the big question is with AI. An emergent web of cyber complexity will either come from the nano-scale, or molecular level, on up, or it will emerge with the internet. It might in time involve both. It may also involve quantum computation, where cyber complexity will involve a vast number of quantum processors interlinked within a global net of classical processors. Then consider the prospect that this will also interface with brains or neural-ware. In 50 years there may be a vast cyber-complex which involves a wide diversity of processors, which may include our own brains and the merging of our conscious spheres.

Eventually this may move off Earth and over a long period of time come to colonize, or infect as one might also see it, more and more of the galaxy. It might in time learn to make its “living” by exploiting interstellar dust, asteroids and other weakly gravitating sources.

Cheers LC



Dan T Benedict wrote on Feb. 13, 2011 @ 05:13 GMT
I'm much more concerned with the whole genetic engineering and cloning business. It's one thing trying to learn how nature operates in the biological sense. But when there's a buck to be made, it's amazing the things we'll do that are not in the best interests of our long-term survival. Many times we don't even know what our best interests are. Cloning as a business venture is plainly shortsighted. A species' insurance policy against extinction is in part its genetic diversity, with the inherent ability to adapt and withstand catastrophe. So we have a prized bull, sow, or grain that is the epitome of its species in every respect; it is cloned in mass production, and along comes a mutant virus, bacterium, or fungus that wipes out a key portion of our food supply. Imagine the Irish potato famine on a global scale. How much of this is going on of which we are unaware? Once genetic diversity is removed from the planet, it's pretty much gone for good.

One can only imagine the horror stories that would occur if we start to fiddle with the human genome (the true biological computer) on a mass scale. Sometimes our moral responsibilities tend to fade into the background when greed, shortsightedness, or a devil-may-care attitude enters the picture.


Lawrence B Crowell replied on Feb. 13, 2011 @ 13:17 GMT
There are two areas of concern with genetic engineering. The first is the amplification of genes in the environment in ways which can have long-term effects with uncertain consequences. The other is the misapplication of the technology. This is the case with "Roundup Ready," a genetic engineering trick meant to increase the sales of a herbicide. However, I am not entirely against cloning genes into different species, for this can be a way of engineering in resistance to parasites. In fact, resistance to the fungal blight which attacks potatoes is an ongoing research area.

Cheers LC



Edwin Eugene Klingman wrote on Feb. 13, 2011 @ 06:16 GMT
In the 1950's when I was immersed in sci-fi I recall believing in this scenario. But after 50 years of designing computer systems and exploring consciousness, I now realize it ain't gonna happen. It's a great fund-raising ploy for fast-talking salesmen, but it's not in the cards.

Part of the problem is, as Florin says, simply an issue of complexity, and that "unsupervised learning simply does not converge anywhere." John thinks that biology is primordially aware. I think that primordial awareness has been here from the beginning. There is no way that dead materials can be combined to produce aware material. Logic circuitry is not aware.

At root, awareness is topological. It is of connectedness.

Jill Taylor, a Harvard neuro-anatomist, tells of her massive stroke in "My Stroke of Insight":

"..consciousness soared into an all-knowingness, a 'being at one' with the universe... The boundaries of my earth body dissolved and I melted into the universe." "I understood that, at the most elementary level, I am a fluid." "our perception of the external world, and our relation to it is a product of our neurological circuitry. For all those years of my life, I really had been a figment of my own imagination."

You may think this is hyperbole. It is not. It is a state of awareness that we all began with but most have long forgotten. Georgina says she recalls being in the womb. That is the primordial awareness. We grow out of it into the yak-yak-yak of the chattering classes, but Zen, LSD, Salvia, and strokes, among other things, can take us back to universal awareness. We enter this world in that state, and many, through one means or another, experience the state of universal connectedness in their adult life. Universal awareness comes from the quiet brain, and many modern brains are never quiet. So probably far more common is that such awareness is smothered under the talk, talk, talk of academic life, to the point that many, like Wolfgang Pauli, must say: "This eternal talking to myself is so fatiguing."

Certainly computers will talk to us, the more specialized the topic, the better. But computers have no topological awareness. They are based on numbers and do not experience a state of awareness of being connected to the universe, and no 'bandwidth' expansion is going to make them aware.

But some people will make lots of money promising it.

Edwin Eugene Klingman


Florin Moldoveanu replied on Feb. 13, 2011 @ 18:39 GMT
Dear Edwin,

Let me expand the explanation of why unsupervised learning does not converge. Training a neural network is equivalent to finding a minimum in a high-dimensional space. The major problem is getting stuck in a local minimum. And there are tons of them. In supervised learning, there are methods (like simulated annealing) which allow you to jump out of a local minimum, and they work basically because you know whether you are closer to the stopping point or not. This is because someone else defines the success criteria. In unsupervised learning you never know if you are close to being correct or not. From real life: when was the last time that a self-aware first-grade child was able to solve a state-of-the-art problem in math or physics? It takes a lot of effort even under supervised learning to get something of value.

The game in AI is to find clever ways to conquer complexity. And by the way, the standard backpropagation method covered in the past 10-15 years of AI literature is not how commercial companies train their neural networks. Far more efficient methods exist, but they are trade secrets which are not published anywhere.
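The "jumping out" mechanism Florin describes is easy to show on a toy objective (the bumpy function and cooling schedule below are illustrative assumptions, not a real training loss). The occasional accepted uphill move is exactly what lets the search escape local minima:

```python
# Minimal sketch of simulated annealing on a deliberately bumpy 1-D
# objective. Accepting some worsening moves (with probability exp(-dE/T))
# is what lets the search climb out of local minima.
import math, random

random.seed(0)

def loss(x):
    return x * x + 10 * math.sin(3 * x)   # many local minima

x, best = 4.0, 4.0
T = 5.0                                    # initial "temperature"
while T > 1e-3:
    cand = x + random.gauss(0, 0.5)        # propose a nearby point
    dE = loss(cand) - loss(x)
    # Always accept improvements; accept worsenings with prob exp(-dE/T).
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = cand
        if loss(x) < loss(best):
            best = x
    T *= 0.999                             # slow geometric cooling
print(f"found x = {best:.3f}, loss = {loss(best):.3f}")
```

As Florin says, this only works because the loss function itself tells you whether a jump helped; unsupervised learning has no such referee.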


Edwin Eugene Klingman replied on Feb. 13, 2011 @ 21:44 GMT
Florin,

Thanks for the additional info. I am aware of most of these details. I particularly like your remark: "From real life: when was the last time that a self-aware first-grade child was able to solve a state-of-the-art problem in math or physics? It takes a lot of effort even under supervised learning to get something of value."

It seems to me that there are periodically waves that sweep through the education establishment in which exactly this is proposed as a teaching method.

Also, I've noticed over the years that every time a new person hears about 'self-modifying code' they always get very excited. Yet nothing has ever come of it for the reasons you state [and for those I state.]

Finally, I seem to recall in the '80s a game called 'War' in which multiple tasks were placed in separate memory segments and given a set of operations that included writing over other tasks and repairing their own tasks. As I recall, the winners of these games were 'all teeth'; that is, they spent all of their time wiping out others and no time repairing themselves.

So I guess Jason is correct below. Don't forget to disconnect them...

Edwin Eugene Klingman


T H Ray replied on Feb. 14, 2011 @ 14:40 GMT
Florin,

Getting stuck in local minima is to me analogous to the huge number of vacuum minima in string theory. The theoretical existence of some 10^500 lowest energy state solutions in no way prevents us from knowing a value of some particular state as measured.

In 1970 and '71 I twice had the honor of interviewing educator John Holt (the free school advocate), and was impressed that autodidacticism is a function of variety, not of supervised learning and not of hierarchically based information availability. Take the case of Gauss, if you want to talk about children solving problems in math and physics. Though the story may be apocryphal, it is instructive that Gauss astounded his teacher with a then-unknown recursive and multiplicative strategy for summing the integers 1 - 100 (50 pairs that sum to 101, 50 x 101 = 5050). Gauss didn't just dream it up, one presumes, but had been exposed to numbers liberally; e.g., in his father's business, where (also maybe apocryphal but instructive) he is said to have corrected the elder's sums while still a young child.
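In modern notation, Gauss's pairing trick is the standard identity:

\[
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}, \qquad \sum_{k=1}^{100} k = \frac{100 \cdot 101}{2} = 5050.
\]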

If you poll mathematicians (I mean research or working applied mathematicians, not educators who teach mathematics) about one of the most pressing issues in primary education, I expect you'll find pedagogical reform near the top of the list. The supervised learning model, and even "sage on a stage" tends -- more in the sciences than perhaps any other subject -- to punish the quick learners and frustrate the slow ones. No one wins, least of all the teacher. It's truly a medieval system in a renaissance age, where information sources abound. It's like holding class in an oasis and forbidding students to drink the water and eat the dates.

I remember Holt talking about being in the U.S. Navy (in the 1940s, I expect) and wanting to change his rate to yeoman (the navy term for a secretary or clerk), but not knowing how to type the x words-per-minute one needed to qualify. He knew of something called "touch typing," so he studied and practiced it and got the job. His point was that one learns as one needs -- but in modern terms, Holt had chosen his own negative feedback system, his means of controlling his world, "self empowerment" in psych-speak.

In complex systems terms, laterally distributed information technology and variable rates of subsystem activity solve or mitigate problems of bounded rationality (time-truncated availability of decision-making information) by localizing minima, not jumping out of them as you suggest; the "jumping out" comes from the cooperating components of the system (see, e.g., Bar-Yam). AI has to learn what we self-replicating organisms take for granted -- we are all corporations of cooperating cells.

Tom



Cristinel Stoica wrote on Feb. 13, 2011 @ 07:36 GMT
It seems to me that the concept of "A.I. singularity" is not very well understood by the public.

The "artificial intelligence" part:

- A.I. is about a machine that can learn by itself. Teaching such a machine is not like training a neural network. A neural network learns weights, while an A.I. device needs to be taught how to learn. Then it continues learning by itself.

- A.I....


John Merryman replied on Feb. 13, 2011 @ 11:57 GMT
Cristi,

The movie Eagle Eye developed this theme somewhat, in that the claim wasn't that the computer became aware, but that, having been programmed with the entire legal code, it was essentially loaded with a morality program and concluded the government was systematically breaking the law and needed to be eliminated.

The irony here is that complexity is self-destructive, since the long-term wave pattern is toward equilibrium. Reality as we know it, the digital knowledge part, is a disequilibrium. So it is cycles of intense complex formation that peak and then dissolve; presumably computers would follow this cycle and crash. Oh, wait, they do! Organic life does too. It's called death. Life has learned to overcome this natural flow by constantly reproducing. So if computers were naturally intelligent, they wouldn't waste time going beyond a certain level of complexity and would settle into stable cycles, like much of life. They would become ants. What people are doing is creating a worldwide neural network. Either it is doing this in order to seed the universe with its own DNA, or we are simply the plants' method of putting more carbon back into the atmosphere. A carbon hurricane.


T H Ray replied on Feb. 17, 2011 @ 14:24 GMT
Cristi,

In the flurry of posts in the last few days, I'm sorry I overlooked yours. It is spot on. You might be interested in an essay I wrote 11 or 12 years ago that echoes many points that you raised:

we have met the alien

Best,

Tom



Bubba wrote on Feb. 13, 2011 @ 19:48 GMT
Various versions of this apocalyptic scenario have been in vogue ever since 2001: A Space Odyssey.

It's 2011 and computers are still as dumb as a rock; they rely exclusively on the instructions we feed them to accomplish a task. Nothing on the horizon indicates this is going to change, even with the advent of quantum computing.



Jason Wolfe wrote on Feb. 13, 2011 @ 20:16 GMT
If you want self-learning computers, give them pain and pleasure nerves. Give them sex, food, and other pleasures; give them the ability to know pain.

Just don't forget to disconnect them from the nukes.
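Jason's "pain and pleasure nerves" are, loosely speaking, what reinforcement learning formalizes as reward signals. A minimal tabular Q-learning sketch (the five-cell corridor and its rewards are toy assumptions, not anyone's proposal here):

```python
# Minimal sketch of tabular Q-learning: "pleasure" (+1) at one end of a
# five-cell corridor, "pain" (-1) at the other. The agent learns a policy
# from the signals alone; the world and rewards are toy assumptions.
import random

random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)            # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

for _ in range(500):                       # episodes
    s = 2                                  # start in the middle
    while 0 < s < N_STATES - 1:            # cells 0 and 4 are terminal
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = s + a
        r = +1 if s2 == N_STATES - 1 else (-1 if s2 == 0 else 0)
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After training, the agent heads toward the "pleasure" end from every cell.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)])
```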



DMan wrote on Feb. 14, 2011 @ 06:35 GMT
"If you want to maximize your expected utility, you try to save the world and the future of intergalactic civilization instead of donating your money to the society for curing rare diseases and cute puppies"

This attitude is one of the many reasons why I no longer resonate with Singularitarians. And why it feels more and more like a cult than a genuine scientific movement or goal. They seem to be a group of individuals who have lost touch with reality and even their own humanity.

It is ridiculous to assert primacy of an abstract goal - the creation of AI, which has yet to be proven to be possible (I'm not saying it isn't, but it is non-real as yet) - over the goal to alleviate human (and for that matter 'cute puppies' i.e. animal) suffering, which is anything but abstract.

"We simply don’t know whether a brain—which was at one time thought to be like a telephone—really is like a computer, or whether the comparison is terminally flawed."

Exactly. As neuroscientist Steven Rose has said, metaphor is often confused with homology, i.e. the system is taken to be 'exactly like' rather than recognized as merely a useful analogy.

Each generation latches obsessively onto its fashionable metaphor - today's being brain-as-computer. This is stated as if it's a proven fact, when it is nothing more than a preliminary theory with limited evidence. It may well prove to be accurate, but we're a long way from that.

As you say William, we understand so little about consciousness and the brain. I suspect that only once we have a far greater understanding of biologically generated consciousness, will we be able to talk meaningfully about creating artificial ones.

Until then, assertions that it's 'just around the corner' are more about intellectual narcissism than about science.



T H Ray wrote on Feb. 14, 2011 @ 11:42 GMT
DMan,

I largely agree with you. Ethics would be as important to AI as to any sentient being, not merely for individual species survival, but also for survival of the environment that sustains the variety of species and the system to which the species contributes feedback.

What makes us more than computers made of meat, also makes AI more than computers made of silicon and junction boxes.

Just as we choose our own negative feedback (i.e., control) mechanisms to make rational decisions, AI does the same; in a competitive game such as chess, the machine has to have the ability to alter its program, to modify algorithms and suppress strategy. The limits of these abilities were part of what IBM's Deep Blue experiment was all about.
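For what it's worth, the classical search core of chess programs of Deep Blue's vintage is minimax with alpha-beta pruning. Here is a minimal sketch on a toy take-away game (the toy game is an illustration only; IBM's actual evaluation functions and hardware were proprietary):

```python
# Minimal sketch of minimax with alpha-beta pruning. Toy game: players
# alternately remove 1-3 stones from a pile of n; whoever takes the last
# stone wins (+1 for the maximizer, -1 for the minimizer).
def alphabeta(n, maximizing, alpha=float("-inf"), beta=float("inf")):
    if n == 0:
        # No stones left: the previous player took the last one and won.
        return -1 if maximizing else +1
    best = float("-inf") if maximizing else float("inf")
    for take in (1, 2, 3):
        if take > n:
            break
        score = alphabeta(n - take, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score); alpha = max(alpha, best)
        else:
            best = min(best, score); beta = min(beta, best)
        if alpha >= beta:   # this branch cannot change the outcome: prune
            break
    return best

print(alphabeta(12, True))  # -1: a pile of 12 (a multiple of 4) loses for the mover
print(alphabeta(10, True))  # +1: 10 is not a multiple of 4, so the mover can win
```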

We're probably closer than you think to a working AI prototype. One of the remaining hard problems is how to assure that the machine can repair itself, with itself. Ethics, though, are learned, not programmed.

Tom



Anonymous wrote on Feb. 14, 2011 @ 16:46 GMT
We have been 'very close' to AI for the past century.

AI has proven to be something people spend a great deal of time discussing, not something that has been shown to be even remotely possible in the near future. The conjectures are based pretty much on hunches, guesses, and hopes.

I remember back in the '80s when I was an undergraduate. Packages like LISP and PROLOG came out; these were supposed to revolutionize how the programmer interfaced with the compiler, and we were told they would essentially teach the compiler how to learn from repetitive tasks. There were articles in PC mags stating that AI was here and everyone needed to get ready. Unfortunately, the packages turned out to be some of the most worthless code ever created.

I am not saying that AI should not be pursued or that it is not possible. I just hold the opinion that those who are most adamant about the rise of AI in the near future are usually individuals who spend a great deal of time watching Star Trek--Deep Space Nine.



T H Ray wrote on Feb. 14, 2011 @ 18:16 GMT
Maybe. And then, maybe those who dreamed of going to the moon spent a lot of time reading Jules Verne. Technological progress is incremental and often very slow. Hero of Alexandria knew of steam power many centuries before it was put to use.

Tom



Anonymous wrote on Feb. 14, 2011 @ 22:38 GMT
But the Apollo team had a plan and a feasibility study with realistic metrics to work with. They could confidently set a date of 1969 because they concluded from the facts that a completion date by then was realistic.

The AI community has hunches, guesses, and opinions. There are many opinions in the tech world as to whether or not the problem is even solvable, let alone when to expect a solution.

It took nature billions of years to build protein computers. Some people out there think that a mere number of decades after the advent of the first silicon chip, humanity is on the verge of constructing a conscious and sentient digital device with sufficient complexity to mimic human intelligence.

IMO, the predictions of impending AI are all wishful thinking, combined with watching too many reruns of Star Trek--Deep Space Nine.


T H Ray replied on Feb. 15, 2011 @ 00:29 GMT
Opinions, fortunately, are independent of science.

It took millions of years for nature to give wings to mammals, too. I suppose humankind will never fly.

Tom


Anonymous replied on Feb. 15, 2011 @ 00:43 GMT
But flight is not that sophisticated a process. Take some material and create an airfoil that generates lift. Through trial and error and good engineering, learn how to stabilize the craft using various control surfaces. A technical challenge, yes, but nowhere near the level of complexity we are talking about when discussing thought and sentience. Most neuroscientists would tell you that our understanding of the brain is still at an extremely primitive state. Yet in a few short years we are going to build a machine that mimics a process we don't yet understand on any substantial level? Mmm, ok.

Some physicists also think teleportation is not too far down the road, either.

They have been watching too many re-runs of "The Fly," Parts I and II. I think the first part with Jeff Goldblum was better, though.


T H Ray replied on Feb. 15, 2011 @ 16:25 GMT
Anonymous,

With all due respect, yours is argumentum ad ignorantiam.

Tom



James Putnam wrote on Feb. 15, 2011 @ 00:02 GMT
Artificial intelligence is formed from and by mechanical theory. There can be no connection between mechanical theory and establishing intelligence. Mechanics can be represented by a hammer. The hammer serves our mechanical needs, but it is our intelligence that decides what to use the hammer for and whether or not the result of its use was of value to our intelligence.

James


T H Ray replied on Feb. 15, 2011 @ 00:24 GMT
As the saying goes, when all one has is a hammer, everything else looks like a nail. That is not how intelligence, biological or artificial, works, nor how the world works. Tool builders adapt to their complex environment.

Tom


James A Putnam replied on Feb. 15, 2011 @ 00:35 GMT
Hi Tom,

"...when all one has is a hammer, everything else looks like a nail. That is not how intelligence, biological or artificial, works, nor how the world works."

It certainly is not how natural intelligence works, but it clearly is how artificial intelligence works. By the way, you do not know how the world works. What you know is a complex mechanical interpretation, a substitute that serves for explaining and predicting mechanical effects.

James



Anonymous wrote on Feb. 15, 2011 @ 01:22 GMT
I am starting to give serious credence to the notion of an Omega Point.

Variations of the theory postulate that in any civilization, theoretical understanding and technological advancement are bound to increase very slowly over many centuries and then rise exponentially for a relatively short period of time. During this golden age of discovery, there will be rapid bursts of creativity and an exponential increase in knowledge and understanding. Rapid technological advancement follows shortly after the golden age. The bad news is that after the golden age, the exponential rise will slowly trail off to an asymptotic limit, during which the pace of advancement will slow to a crawl and the levels of complexity attained will make it impossible to ever speed up the process of knowledge acquisition.

In our theories, we have kind of reached the point of maximum complexity, where we are seeing diminished returns on our investments in research. There are some new avenues of technological research that show promise, such as nanotechnology, but on the theoretical side there has been a lack of real substantial progress. The glory days of the 20th century slowly faded away, and the community has found itself mired in complexity that has increased exponentially. The sciences have branched into a zillion different sub-specialties, and cross-pollination is practically impossible, as it takes experts in each sub-discipline to really make use of the field.

Physicists of the early 20th century were jacks of all trades. Thermodynamics, E&M, Mechanics, etc--most physicists were experts in each sub-field and could communicate with each other without having to study a particular sub-discipline for years.

Now you have to have years of specialized education and training just to read a paper on String Theory. The same applies to just about every sub-specialty in existence. A String Theorist would have difficulty picking up a paper on condensed matter theory and jumping right in without first familiarizing himself with the current paradigms of the field.

In the Omega Point theory, increases in complexity and information overload lead to the whole becoming fragmented, and the worker ants fail to communicate with one another due to being disconnected from each individual process. If you subscribe to the theory, the bad news is that the process is irreversible: growth eventually stagnates and advancement is limited to refinements rather than leaps and bounds.

I think the same applies to computer technology. The only real advancements have come via increased processing power and memory storage, but theoretical advancement has stagnated. Computer technology itself is starting to branch out into sub-disciplines, and in years it is possible that each camp will find it hard to communicate with the others due to the gap in complexity and understanding.


John Merryman replied on Feb. 15, 2011 @ 04:26 GMT
Also known as the Tower of Babel effect. Essentially a basic wave pattern.

In this cyclical pattern, the future is growing in the cracks that open up in the old paradigm. Whether it's simply to patch the holes, or replace it altogether, only time will tell.



Author Frank Martin DiMeglio wrote on Feb. 15, 2011 @ 01:48 GMT
Sensory experience is not supposed to be excessively thoughtful in its construction and effects. That is the whole point. Again, the big picture matters. Look at television -- the idiot box.

Depression skyrockets in the inanimate world of experience that you idiots are creating. Intention and concern are locked up in, and are inseparable from, natural sensory experience (including people) in general. Loss of the extensiveness of intention and concern is increasing depression and anxiety.

The rise in the experience of the inanimate and/or unnatural -- in too many ways and forms to mention -- is making us depressed, anxious, tired, excessively unconscious, and ultimately inanimate.

Money is the master addiction. Indeed, "everything is ok in the USA" is the mentality of modern physics in keeping with changing experience from what is natural and by making money from this. This is consistent with valuing money and profits over people and labor, and over other beliefs/truths as well.

You're walking away from reality, and it is walking away from you.

Maturity and seriousness are key.



JOE BLOGS wrote on Feb. 15, 2011 @ 09:53 GMT
B.A.B.L.E

Random QM and GR equations are imported into a spreadsheet using maths type 6.

And added 1+1=2 and 2+2=4.

Millions of equations are printed out using a dot matrix printer and summarised as one equation using 1/3+1/3+1/3=1

Computers that can beat a man at chess are smarter than man, and if computers are smarter than man, that means you can't beat the technology.

That opens a whole new can of worms



Anonymous wrote on Feb. 15, 2011 @ 14:40 GMT
"In this cyclical pattern, the future is growing in the cracks that open up in the old paradigm. Whether it's simply to patch the holes, or replace it altogether, only time will tell. "

...........

In the past, theory-building was usually the province of singular individuals: Newton, Maxwell, Einstein, Boltzmann, etc.

Today, the process of theory-making in science...



Anonymous wrote on Feb. 15, 2011 @ 15:11 GMT
Also, I wanted to add that AI is not likely a problem of reducing the whole to a set of first principles contained in the parts. In other words, intelligence and consciousness are probably not reducible to a simple set of binary rules contained in code structures.

I am not positing a ghost in the machine. I am simply stating that it is almost certain that AI does not lend itself to first principles in the sense of: IF A THEN B.

It is more likely that: IF A, THEN {contingent relationships related to the state of the whole} --> possibly B.

Looking at the inner workings of an automobile, one cannot really understand the configuration and interplay of the parts without understanding how the parts are related to the whole. Also, one cannot derive the physical behavior of an automobile from simply analyzing the parts. Appealing to the parts that propel the automobile to action will not explain why a car stops at a red light or why it turns left or right when it does. There are nested hierarchies of causation. This is a crude analogy but the point is, intelligence and consciousness are probably not amenable to simple reductionist methods.


T H Ray replied on Feb. 15, 2011 @ 18:15 GMT
Again, argumentum ad ignorantiam. One most certainly does understand the workings of an automobile, an airplane, and any other mechanical device independent of how the parts are connected and how the systems interact. I earlier cited the example of Hero's toy steam engine. Once one knows the principles of what makes something go -- whether it is steam power, internal combustion, hydraulics, air, whatever -- all the auxiliary system components are directed toward supporting, transferring and directing the source of power. The engineering is sophisticated, but the physics is quite straightforward and reducible to classical and quantum mechanics.

Because intelligence is also believed by serious physical theorists (e.g., Kafatos, Gell-Mann, et al.) to lie on a continuum of connected parts from simple to complex, there is in principle no barrier between the intelligence of any part of evolving nature and complex adaptive systems, whether they are made of organic matter or of any other substrate. Menas Kafatos immediately came to mind because I recently bought the second edition of his book (with Robert Nadeau) _The Conscious Universe_, which made quite an impression on me when I first read it about 20 years ago. The subtitle of the first edition was "Part and Whole in modern physical theory." The subtitle of the new edition is "Parts and Wholes in Physical Reality." I have little doubt that so-called artificial intelligence will be a part of our future reality.

Tom


Anonymous replied on Feb. 15, 2011 @ 19:32 GMT
Phasers set to stun. Give me all she's got, Scotty.


John Merryman replied on Feb. 16, 2011 @ 02:09 GMT
Tom,

Even if you are right, it is not simply a linear emergence of complex intelligence out of a simpler substrate; complexity is constantly emerging, collapsing back, and re-emerging, and this cycle has been folding itself into itself over the course of billions of years to reach our still somewhat maladaptive state. Maybe we can, over the course of the coming decades, produce increasingly lifelike simulators. The question though, is: Are they aware?

I think the practical difference between intelligence and awareness, is that while intelligence can answer the questions, it is awareness which is compelled to ask them.



T H Ray wrote on Feb. 16, 2011 @ 11:05 GMT
Emergence in a complex system is never linear; intelligence is adaptability alone; and, to quote Murray Gell-Mann, "The last refuge of the obscurantists and mystifiers is self-awareness, consciousness."

Tom


James Putnam replied on Feb. 16, 2011 @ 14:32 GMT
The refuge of mechanical theorists is to deny with arrogant verbiage that self-awareness and consciousness are not the product of the fundamental properties of their theoretical robots. A major step in promoting their artificial science of intelligence is to usurp the word information by declaring the lowest form of data to be the highest form of information. The falsity of their position is made clear by their practice of avoiding the first property of intelligence: they cannot explain how meaning is discerned from data. Beyond that act of intelligence there are only effects. Mechanical theorists thrive on effects, but cannot explain what cause is. So, they give us theory. Theory is the art of inventing causes to serve as placeholders for lack of knowledge, both in equations and in explanations.

James


T H Ray replied on Feb. 16, 2011 @ 15:30 GMT
You use the term often, but what exactly is a "mechanical theorist?"

Tom


T H Ray replied on Feb. 16, 2011 @ 17:05 GMT
And when you finish with that one, I have some followup questions:

What is the "first property of intelligence?"

What does data mean independent of theory?

What does cause mean independent of effect?



Crenell Stokes wrote on Feb. 16, 2011 @ 12:53 GMT
"intelligence is adaptability alone"

Adaptability to what?


T H Ray replied on Feb. 16, 2011 @ 15:29 GMT
To the system that sustains an organism's survival and survivability.


John Merryman replied on Feb. 16, 2011 @ 17:17 GMT
Tom,

I'm curious as to what would inspire the need for survival, if not some sense of self? If I kick a stone down the road, or the wind blows through the trees, they are adapting to input. There might even be some feedback, such as pain in my toe, or vortices forming in the wind.

On the other hand, there are quite a number of religious, political and various other schools of thought, not to mention quite a few individuals, who are quite resistant to any adaptation to outside input, for the very reason of self-preservation, though not always to successful effect.

As for Gell-Mann's comment, consider it in the context of the quote it plays on: "Patriotism is the last refuge of a scoundrel." That line isn't so much a critique of all patriotism, though it may have been intended that way, as a comment on the tendency of political opportunists to wrap themselves in popular causes. Likewise, the fact that consciousness studies attract a broad range of ideas, many of them far-fetched, no more invalidates consciousness than the stirring up of mobs invalidates all forms of group allegiance.


T H Ray replied on Feb. 16, 2011 @ 17:41 GMT
John,

Do you think an amoeba has a sense of self?

I doubt that Gell-Mann derived his conclusion from anything but his own fertile knowledge of science and humanity. Parallel language construction does not imply derivation.

So far as social and religious adaptability are concerned, think of all the institutions that have adapted or disappeared throughout history. I agree with Popper that historicism does not determine destiny; OTOH, laws of evolution do seem to permeate every aspect of existence.

Tom


Crenell Stokes wrote on Feb. 16, 2011 @ 20:40 GMT
Intelligence is adaptability..

"..to the system that sustains an organism's survival and survivability. "

"..amoebae and other one-celled creatures are far more adaptable than we,.."

Are you implying that the degree of adaptability is the measure of intelligence?

Therefore, an amoeba is more intelligent than a human?

T H Ray replied on Feb. 16, 2011 @ 22:17 GMT
Did you read my more detailed reply? The pressure to adapt (to be more intelligent) grows in proportion to the organism's complexity.

Tom

Anonymous replied on Feb. 16, 2011 @ 23:31 GMT
I was going by the definition of intelligence that you were providing.

In an earlier post, you had said, "..intelligence is adaptability alone."

I then asked, "..adaptability to what?"

Your reply was,

"..to the system that sustains an organism's survival and survivability. "

Intelligence is a trait that is selectively beneficial to adaptability. It is not defined by it.

James Putnam replied on Feb. 16, 2011 @ 23:38 GMT
"The pressure to adapt..." What is this 'pressure'? How do you explain it in terms of the mechanical properties embraced by theoretical physics?

James


James Putnam wrote on Feb. 16, 2011 @ 21:16 GMT
Adaptability is an effect of intelligence.

James

T H Ray replied on Feb. 16, 2011 @ 22:06 GMT
Then as another questioner posed: adaptability to what?

Crenell Stokes replied on Feb. 16, 2011 @ 22:35 GMT
Stating that intelligence is adaptability leads to a non sequitur.

Intelligence may be classed as a survival trait which gives an organism selective advantages for survival in a given environment. However, this causal relationship does not define the trait of intelligence itself.

Intelligence has never been clearly defined to the point that there exists a standard, universal definition accepted by all specialists in the scientific disciplines where the subject comes up. A neurologist, a cognitive scientist, an evolutionary biologist and an AI scientist will likely have diverging points as well as commonalities in their definitions. None of them would consider defining intelligence as 'adaptability to the environment.'

James Putnam replied on Feb. 16, 2011 @ 22:48 GMT
Intelligence cannot be defined by specialists who are ideologically committed to mechanical beliefs. Defining intelligence requires experts who understand how discernment, the assignment of meaning to data, occurs at the fundamental level.

James


Edwin Eugene Klingman wrote on Feb. 16, 2011 @ 21:50 GMT
The assumption here seems to be that all participants are equally self-aware. Not only is that unproved, but the dialogue seems to suggest otherwise. As I mentioned in my earlier comment, some consciousnesses are buried under talk, talk, talk, and have not the slightest memory or awareness of a more encompassing mode of awareness. They live in the 'map' and have lost touch with 'the territory'.

And yes, I do believe that an amoeba has a (minute) degree of self-awareness. As I pointed out elsewhere and as John alludes to, there is no reason to suppose that any chemicals possess an 'urge to survive', let alone to procreate, unless there is a degree of awareness. Rocks don't attempt to survive or procreate. Gell-Mann was a great physicist, but what of particular note has his theory of consciousness added? Why would one assume that his 'fertile knowledge of science and humanity' qualifies him above anyone else to comment on properties we all share? It sounds like either hero-worship or name-dropping.

And what the hell does "an amoeba has less pressure to be intelligent than a rock" mean?

To come on this thread and insist to other self-aware beings that "The last refuge of the obscurantists and mystifiers is self-awareness, consciousness" is simply an admission that one cannot make heads or tails of awareness. It contributes nothing to the conversation.

Edwin Eugene Klingman

T H Ray replied on Feb. 16, 2011 @ 22:24 GMT
Edwin,

That you have a different opinion (and most surely a different definition of intelligence) does not invalidate research that contradicts it, much less imply that others have "lost touch." Gell-Mann still IS a great physicist, BTW. And the physics of consciousness is a legitimate research topic.

Tom

James Putnam replied on Feb. 16, 2011 @ 23:04 GMT
"...the physics of consciousness..."

There is no physics of consciousness. Physics is about inventing mechanical-type causes for mechanical-type effects. Consciousness is a subject for science unrelated to mechanical ideology.

James

Edwin Eugene Klingman replied on Feb. 17, 2011 @ 00:18 GMT
Tom,

It's funny that you claim that "the physics of consciousness" is a legitimate research topic, while I'm sure you reject my previous essay titled Fundamental Physics of Consciousness.

You probably think that making measurements on brains is "research on consciousness" and that electrical probes in a brain have something to do with "explaining awareness". About as much to do...


T H Ray wrote on Feb. 17, 2011 @ 11:26 GMT
Edwin,

There is no warrant for me to "believe in" self-awareness when voluminous documentation in evolutionary biology, mathematics, complex systems, physics and other disciplines says that it isn't necessary to explain the behavior of the world we observe. Consciousness is unitary; no "self" required.

It is not I who is out of touch.

Tom

John Merryman replied on Feb. 17, 2011 @ 12:30 GMT
Tom,

Doesn't that get into the whole digital vs. analog dichotomy? Two sides of the same coin, can't have one without the other?

T H Ray replied on Feb. 17, 2011 @ 14:09 GMT
An amoeba is not more intelligent than a human, if intelligence is a function of adaptation in proportion to the scale of complexity to which an organism has to adapt in order to survive. An amoeba does not HAVE to be more intelligent, because there are many more ecological niches for it to fill, niches not available to more complex organisms, including human beings.

Tom

John Merryman replied on Feb. 17, 2011 @ 16:58 GMT
Essentially, as a response function, amoebae are very simple: attraction/repulsion. It is a binary circuit. People, on the other hand, are like computers with gigabytes of such circuits. The problem is figuring out at what point on the scale between these two extremes an organism could properly be described as aware. The fact is that there is no stable, objective quality of "awareness." Even people are blithely unaware of 99+% of what goes on around and within themselves. And most of what they are aware of is promptly forgotten as they move on to other events, or die. That essence of awareness keeps replicating itself in subsequent moments and generations.

So the question is: Is that sense of awareness prior to, or subsequent to, the process of making the binary distinctions which are its biological function? As an organ, our brain is a navigation tool. Does it create awareness, or simply focus it?
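A toy Python sketch of the "binary circuit" idea above; the function name, threshold, and stimulus values are all invented for illustration, not a model of a real amoeba:

def amoeba_response(stimulus: float) -> str:
    """Return 'approach' for attractive stimuli, 'withdraw' for aversive ones."""
    return "approach" if stimulus > 0 else "withdraw"

# On the analogy above, a person would be billions of such circuits
# composed and layered; where awareness enters is the open question.
for s in (0.7, -0.3, 0.1):
    print(s, "->", amoeba_response(s))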


Anonymous wrote on Feb. 17, 2011 @ 13:31 GMT
Physicists are not authoritative experts on every subject matter related to science or nature, although many individuals think that they are.

At the end of their lives, burned-out physicists and Nobel Prize winners like to write books about subjects in which they have not specialized to any great degree. Because these are exceptionally smart individuals with brilliant careers in physics, the public tends to take any opinions or arguments they offer on any subject as truths to be accepted without question.

It's not that we shouldn't consider these opinions or conjectures, but if you want the scoop on what's really going on in the field of neuroscience, it is prudent to defer to the specialists who have spent their entire careers studying these subjects in detail and gotten their hands dirty in the laboratory, not just writer's cramp from penning books: researchers like Steven Pinker and company.

When trying to form my own opinions, I give more credence to this camp than to the popular "Golly gee whiz" accounts of modern science written by non-specialists in a field.

T H Ray replied on Feb. 17, 2011 @ 14:05 GMT
Point well taken. The inverse, of course, applies as well: definitions crafted for a specialized field of study do not generalize.

Tom

Anonymous replied on Feb. 17, 2011 @ 14:53 GMT
That's true.

I am not saying we shouldn't form opinions--we all have them.

However, I am not familiar enough with the subject to form any truly informed opinions. I have never read a paper on neuroscience or consciousness studies. My knowledge of the subject is limited to the quick synopses found in popular books.

The only thing I can conclude is that the consensus appears to be that consciousness is an emergent feature of the neurological properties of an organism. The exact mechanisms that lead to this emergent feature are the subject of research that is still in its infancy. This is shaping up to be an extremely complex problem that does not readily lend itself to simple explanations. It's not going to be easy.

It's OK not to know. That's what makes science exciting. As far as AI goes, I just take the position that it is too complex an issue to make predictions of when, where, or how. I haven't the slightest idea.

T H Ray replied on Feb. 17, 2011 @ 16:25 GMT
We're not really in disagreement. It's just that I don't care when, where, and how. I am interested in whether the thing is impossible in principle -- whether there are known physical boundaries. There don't seem to be any.

Like Murray Gell-Mann, I am an unapologetic reductionist. What we've found, though, is that reduction to complexity does not obviate holistic models. That sounds very much like how the human organism is organized, doesn't it? -- a self-organized, independent and self-replicating process whose self-awareness is subordinate to the cooperating system of specialized cells, organs and organ systems providing regulatory (negative, or control) feedback to the whole organism.
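A minimal Python sketch of the regulatory (negative) feedback just described; the set point, gain, and starting value are invented for illustration:

def regulate(value: float, set_point: float = 37.0, gain: float = 0.5) -> float:
    """One corrective step: the correction opposes the error (negative feedback)."""
    error = set_point - value
    return value + gain * error

temperature = 40.0  # perturbed away from the set point
for _ in range(6):
    temperature = regulate(temperature)
    print(round(temperature, 3))
# The value settles back toward 37.0 with no central controller "aware" of it.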

Part of the problem -- as reflected in this forum -- is getting people to understand the difference between robotics and artificial intelligence. Cristi Stoica said it better than I.

In short, robotics is a sophisticated engineering program, with external feedback alone. Neither humans nor hypothetical AI are robots. The hard problem, as Cristi (and I) have pointed out, is having an AI capable of repairing itself, with itself. One may believe this is impossible -- so what? If it is impossible, one has to identify and demonstrate the specific barrier. No barrier is evident so far, and personal belief has nothing to do with the issue.

Tom
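The robotics/AI distinction Tom draws can be sketched in a few lines of Python; both classes are toys invented purely for illustration, not real architectures:

class Robot:
    """External feedback alone: pure stimulus-response, no self-model."""
    def step(self, signal: float) -> float:
        return -signal  # reacts to the outside world, nothing else

class SelfRepairingAI(Robot):
    """Also monitors and repairs its own internal state, with itself."""
    def __init__(self) -> None:
        self.health = 1.0  # internal state it can inspect

    def step(self, signal: float) -> float:
        self.health -= 0.2        # wear and tear
        if self.health < 0.5:     # internal feedback loop
            self.health = 1.0     # repairs itself, using only itself
        return super().step(signal)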


T H Ray wrote on Feb. 17, 2011 @ 17:36 GMT
Anonymous,

We agree. In fact, in my reply to Cristi I posted a link to my essay "We have met the alien, and he is us," written 11 or 12 years ago, which says the same. Anthropomorphizing is fun and makes a good movie, but it isn't likely how the universe really works.

Tom


Anonymous wrote on Feb. 17, 2011 @ 21:08 GMT
I would like to add: Intelligence is not really what defines us. We are not creatures of reason.

As Hume accurately noted, "Reason alone can never be a motive to any action of the will."

We are creatures of instinct, just like all other sentient organisms. We cannot reason away the instincts for survival, food, and procreation. Reason is simply a tool.

Before talking about the nature of intelligence, I think we need to answer this question: Why do we do anything at all?

Why did I write this reply? Why do people follow it up with a reply? Why do people become angry over anonymous posters? The answers are not simple. They are actually quite complicated, and I don't think anybody here can answer them by appealing to first principles. Regardless, the answer(s) would never be reason or intelligence. Individuals are motivated to perform an action by impulses.

Also, if an organism is self-conscious, what defines the 'thing' that is conscious of self? In other words, when do you get to the point where you say, "this is me"? Does it make any sense to ask such a question?

I think that our memories certainly define our personality and sense of who we are. They provide continuity in time. But memories are something that this 'self' is aware of, so they cannot define the self.

I had a brief stint with Buddhism in my younger days. One of the tenets of traditional Buddhism is that the self is an illusory construct. There is nothing there that belongs to the self in a traditional sense. Nothing remains static. Memories change and fade, sensations change, experiences change. However, these constructs do not belong to any particular thing. They are not owned. Impermanence is really the defining factor.

This line of thought is one of the reasons I always had a metaphysical problem with dualism. If we had a 'nonmaterial' soul (whatever that means), what is it specifically that lives on after the expiration of the sense-apparatus? Memories, sensations? If so, in what manner do they continue to exist?

Consciousness might be as fundamental to the Universe as gravity. It's simply what happens when you configure a dynamic system in such a manner. Asking why might be superfluous. Why is the ratio of the circumference of a circle to its diameter 3.14159...? Why is this ratio an irrational number? That's not a proper question. That's just the way it is.
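The point about the circle ratio can even be checked numerically: the constant is forced on us, not chosen. A crude Monte Carlo estimate in Python (the sample count is arbitrary):

import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """The fraction of random points in the unit square that land inside
    the quarter circle approaches pi/4, however the experiment is set up."""
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / samples

print(estimate_pi())  # approximately 3.14159...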

James Putnam replied on Feb. 17, 2011 @ 21:11 GMT
Dear safely anonymous,

"Why did I write this reply? Why do people follow it up with a reply? Why do people become angry over anonymous posters? The answers are not simple. They are actually quite complicated and I don't think anybody here can answer them by appealing to first principals. Regardless, the answer(s) would never be reason or intelligence. Individuals are motivated to perform an action by impulses."

What is an impulse?

James

Anonymous replied on Feb. 17, 2011 @ 21:24 GMT
What is an impulse?

Well, light a match and hold it under an outstretched palm. You will suddenly notice that, beyond your will, your attention is diverted from whatever you are currently thinking or doing. Any thoughts you had in your head will stop. You will be unable to control this impetus (i.e., impulse) to action (pulling your hand away), and it will dominate and overwhelm anything else that is going on.

Any action we perform--even the smallest and most minuscule--is motivated by an impulse of some kind.
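A toy Python sketch of an impulse as a priority interrupt, in the spirit of the match example above; the threshold and signal values are invented for illustration:

PAIN_THRESHOLD = 0.8

def next_action(current_thought: str, pain_signal: float) -> str:
    """Once the threshold is crossed, the reflex preempts any ongoing thought."""
    if pain_signal >= PAIN_THRESHOLD:
        return "withdraw hand"  # the impulse overrides everything else
    return current_thought      # otherwise thinking continues undisturbed

print(next_action("composing a forum reply", 0.2))  # keeps thinking
print(next_action("composing a forum reply", 0.9))  # withdraw hand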

James Putnam replied on Feb. 17, 2011 @ 21:30 GMT
"Well, light a match and place it under an out-stretched palm. You will suddenly notice that beyond your will, your attention will be diverted from whatever it is you are currently thinking or doing. Any thoughts you currently had in your head will stop."

Is your point that this act is not one of intelligence? Do you believe that it is an unintelligent act? The fact that the pathway is shortened so that the brain does not have to process it is not evidence of a lack of intelligent intent. Intelligence occurred long before the brain did. Now, if your viewpoint is a mechanical one, then let's skip past everything that you consider to be automatic according to mechanical principles: why does the brain not react in that impulse manner?

James


John Merryman wrote on Feb. 18, 2011 @ 02:50 GMT
Anon,

I think the reason it is so hard to define what this sense of awareness is, is that it's a bit like defining space. We can only sense it in terms of what occupies it, so the issue arises as to whether these forms create space/consciousness, or only give some subjective definition to it. There are ways to construct fairly reasonable arguments for why both consciousness and space are fundamental, but there is nothing tangible you can hit someone over the head with, if that's what it takes to convince them, since tangibility is their first cause.


Sridattadev wrote on Sep. 7, 2011 @ 17:51 GMT
Dear All,

I have read most of your thoughts on consciousness and how Artificial Intelligence can or cannot be created. If we examine ourselves, are we not some kind of intelligence emerging from the primordial consciousness, or singularity?

We can experience this absolute truth within ourselves, and that is the ultimate purpose of human life.

For every action there is an equal and opposite reaction,

There is also inaction at the point of their interaction.

Singularity is that point of inaction at the heart of everything.

Singularity is not only relative infinity, but also absolute equality.

Love,

Sridattadev.

attachments: 2_UniversalLifeCycle.doc


