Susan Schneider
How far would you go to enhance your mind? How far is too far?
Last month, Elon Musk's Neuralink start-up introduced the idea of an implantable chip, inserted into your brain through an invasive surgical procedure (drilling a hole in your head), to “merge biological intelligence with machine intelligence.” These chips would allow data to be transmitted wirelessly from your brain to digital devices.
One of Musk’s justifications for this is that such chips will one day enable humans to keep pace as artificial intelligence (AI) starts to overtake us. “Even in a benign AI scenario, we will be left behind,” Musk warned. “But with a brain-machine interface, we can actually go along for the ride.”
Meanwhile, DARPA is investing in brain-machine interface technologies to help people suffering from post-traumatic stress.
“The next big wave of AI may very well be inside the head,” said philosopher and cognitive scientist Susan Schneider, of the AI, Mind and Society (AIMS) group at the University of Connecticut, speaking at the 6th FQXi Meeting in Tuscany in July.
The transhumanist vision espoused by the likes of Ray Kurzweil and Nick Bostrom says that one day we will be able to "upgrade" ourselves with cognitive and physical enhancements, including increased longevity, until we become post-human: superior to unenhanced humans in every capacity.
It’s a vision that sounds at once fantastical, enticing — and also deeply alarming. (To me, at least.)
Would you implant a brain chip?
Assuming (for the sake of argument) that there are no physical risks, and the procedures all work perfectly, what are the philosophical concerns regarding such tampering? Schneider points to a pretty major one: you could end up inadvertently killing your true “self” — not because of a surgical mistake or because of the chip malfunctioning, but because the procedure worked perfectly and you successfully replace so much of your original self that you cannot continue to identify yourself as “you”. Instead of enhancing yourself, you’ll be enhancing a new being in your place. How will you know if you are at risk of accidental suicide-of-the-self in the name of improvement? Where do you draw the line?
I’ve posted the full audio version of Schneider’s talk on the podcast. In it, you’ll hear her run through different conceptions of the self and what it means to be “you” that have been discussed by philosophers and theologians for centuries, and are now being debated in transhumanist circles.
If you identify yourself with your brain, it is easy to see that if you keep chopping chunks out of it, you will very quickly destroy yourself.
Transhumanists tend to adopt a more subtle “psychological continuity” view, however, which says that you are your memories and your psychological configuration — your “pattern.” Listen to Schneider take down this argument, as she warns that even in this picture you will likely destroy yourself by breaking the pattern. The problem of identity actually sharpens here: in some scenarios you could imagine uploading your pattern and then duplicating it, but in that case, which duplicate is “you”?
Religions and ancient philosophies offer alternative views, though it’s not really clear what all this means for them either. Your essence might be contained in a "soul" — but what happens to your soul during upgrading and uploading? Or, taking a more Buddhist line, there is no self; in that scenario perhaps there is less cause for concern.
During the talk, Schneider asked how many people in the audience (of about 100) would be open to implanting a chip that could, for instance, make them better at calculations or improve their musical abilities. And how many would go for the whole mind-merge? I reveal the (rough) answers to that poll in the podcast. But I’d be interested to know how you would answer before listening to Schneider’s talk — and whether hearing what she has to say then changes your mind.
Free Podcast
Designing the Mind. Cognitive scientist Susan Schneider talks transhumanism and asks: at what point does human enhancement go too far? Are we in danger of accidentally destroying ourselves through technological augmentation and creating new beings in our stead? From the 6th FQXi Meeting in Tuscany.

Cybermen terrified me as a kid.
FQXi Administrator Zeeya Merali replied on Aug. 16, 2019 @ 17:36 GMT
It's funny, they never bothered me. Daleks did, though... :)
"If you identify yourself with your brain"
No one does. We identify ourselves as all those body parts (not just the brain) that we seem to be able to actively control.
Rob McEachern
People like Susan Schneider and Max Tegmark are barking up the wrong tree.
You need algorithms to represent the decisions/minds/behaviours of living things: living things might be said to make algorithmic decisions.
But computers/robots/“AIs” don’t make algorithmic decisions. They merely have structures which implement pre-decided ways of handling incoming data [1]: all necessary decisions have already been made, or agreed to, by human beings via the computer program.
Physicists and philosophers have not yet grasped that computers/robots/“AIs” are never going to become conscious. Living things process “living information”, but computers/robots/“AIs” are just structures set up to process symbolic representations of information.
1. Which is always organised so that it symbolically represents numbers associated with variables.
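A minimal sketch of this point in Python (a hypothetical example; the function name and thresholds are invented for illustration): every "decision" a program appears to make is really a rule that a human fixed in advance.

# Minimal sketch: the program never decides anything on its own.
# Each branch below is a pre-decided way of handling incoming data,
# chosen by whoever wrote the program.
def handle_reading(temperature_celsius):
    """Map an incoming number to an action, using rules the programmer chose."""
    if temperature_celsius > 30.0:  # threshold pre-decided by a human
        return "turn on fan"
    if temperature_celsius < 10.0:  # so was this one
        return "turn on heater"
    return "do nothing"

print(handle_reading(35.2))  # prints "turn on fan"; the outcome was implicit in the rules

Whatever data arrives at run time, the mapping from inputs to outputs was settled when the program was written.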
A good question indeed: what is a soul?
Personally, I believe that the body-mind-soul problem can be solved with this. Imagine that we die only electromagnetically, and so our soul continues on its road. Imagine that at our death we are instantly at this central cosmological sphere and, as everything turns, are resynchronised into a small baby on another planet, with a brain corresponding roughly to our stage of consciousness.
Okay! Lorraine,
Finally there is an FQXi topic that could be a good fit for what you always want to talk about. You make numerous very bold assertions. Here might be a place where you can present your idea of how things work. Ignore what you don't agree with: it never helps anyone explain their own idea by objecting to some other ideology; that's about the other idea. Give a simple enough example that it can be explained on its own merits, without requiring extensive background research of the reader.
To write an algorithm would be to select a set of symbols to represent some aspect of a common occurrence. What sort of algorithm do you have in mind? How does it describe the simple case? How does it work and what are the rules of its operation? What happens to it?
Give it a shot. :-) jrc