
Ray Kurzweil's "How To Create a Mind"

I'm a bit of a fan of Ray Kurzweil, and had been planning to get his new book How to Create a Mind. I've since discovered a full synopsis on a website, which saves me some time and money. (For non-fiction authors the existence of such a site should be a worry - but that's another story.)

Kurzweil is that interesting type of thinker who drives scientific materialism to seemingly absurd limits. His idea is that, having completely understood how humans are put together, we can remake ourselves and overcome the imperfections that biological evolution saddled us with.

Kurzweil's transhumanist thinking sounds kooky, even to many of his peers. But he's a credible figure, having made major contributions to artificial intelligence, notably in the field of voice recognition. I think Kurzweil's vision of the technological singularity, that machines will outstrip human intelligence by mid-century, is rather cool. I completely disbelieve it, for what I consider sound empirical as well as practical reasons, so I'm not threatened by it, nor do I feel the need to heckle (he gets plenty of flak as it is from other AI thinkers like Daniel Dennett). But I love it that science can let the imagination rip - it would be dull if we weren't allowed to dream.

I was curious to know, though: how does one create a mind? For Kurzweil, it's a matter of understanding how it works. He follows the well-trodden computational route, where lots of lower-level processes based on responses to the environment combine to produce higher-level abstract thinking. He identifies the basic process as pattern recognition and thinks the underlying architecture is relatively simple, the kind of thing that could be readily replicated by a sufficiently powerful computer - by 2029, at the present exponential rate of progress.
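To give a flavour of the sort of architecture he has in mind - and this is purely my own toy sketch in Python, not Kurzweil's actual model - the core idea can be caricatured as recognisers stacked on recognisers, each firing when enough of its inputs fire:

```python
# Toy sketch of hierarchical pattern recognition (illustrative only):
# low-level units match raw tokens, higher-level units match
# combinations of lower-level units, with a threshold for redundancy.

class PatternRecogniser:
    def __init__(self, name, children, threshold=1.0):
        self.name = name            # label for the pattern this unit detects
        self.children = children    # raw tokens (str) or other recognisers
        self.threshold = threshold  # fraction of children that must fire

    def fires(self, tokens):
        hits = sum(
            child.fires(tokens) if isinstance(child, PatternRecogniser)
            else child in tokens
            for child in self.children
        )
        return hits / len(self.children) >= self.threshold

# Low level: letters; higher level: a word that tolerates a missing letter.
letters = [PatternRecogniser(c.upper(), [c]) for c in "aple"]
apple = PatternRecogniser("apple", letters, threshold=0.75)

print(apple.fires(list("ale")))   # True  - fires despite the missing 'p'
print(apple.fires(list("xyz")))   # False
```

Kurzweil's claim is roughly that stacking enough such units, with the connections and thresholds learned rather than hand-wired, gets you all the way up to abstract thought.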

He concedes that a supercomputer like IBM's Watson, currently the most advanced of its kind, could not yet pass the Turing Test, fooling a human interlocutor into thinking it is human. But that's only because it was not designed to engage in conversation but to succeed at a specific task: winning at Jeopardy, just as its predecessor Deep Blue was built to win at chess. The most advanced AI machines are already using these same principles and processes. Indeed, AI was using them even before it was discovered that the human neocortex is doing the same thing, and now the traffic has been reversed, with neuroscience feeding its discoveries back to AI.

Where reasoning is concerned, Kurzweil sees quality as an outcome of quantity. Nature endowed us with a mere 300 million pattern processors, but once we start making synthetic brains why not give them a billion, or even a trillion? This, he claims, will not only increase the kind of intelligence we already see in humans, but also generate higher orders of abstract thought and complexity. The synthetic brains could have an in-built critical thinking module that stops them holding a bunch of inconsistent ideas, as humans do. While humans are limited by evolution in terms of what we can achieve, these super intelligent, super rational machines could pursue goals like curing disease and alleviating poverty with a realistic prospect of success.

Meanwhile humans can beef up their own brainpower by adding new modules as brain implants. Lest we worry this would change our identity, Kurzweil thinks that identity is an effect of our entire system, and would not be compromised by changing individual parts.

The next step in the journey will be to spread this new intelligence throughout the universe.

If we can transcend the speed of light - admittedly a big if - for example, by using wormholes through space (which are consistent with our current understanding of physics), it could be achieved within a few centuries. Otherwise, it will take much longer. In either scenario, waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its non-biological form, is our destiny.

OK, then!

The book has stirred the pot, and it's interesting to see what sceptical perspectives have emerged. One is to point out just how incredibly distant we really are from the goal that Kurzweil thinks is just around the corner. Take the humble roundworm, C. elegans. Where the human brain has 100 billion neurons connected by 100 trillion synapses, the worm has only 302 neurons connected by roughly 5,000-7,000 synapses, and a lot is known about it. Scientists can tie its reflexes and behaviours to individual neural pathways or brain circuits. But . . .

Can we use this to model or predict the actions of the worm? No. We're not even close. In fact, it takes a computer with a billion transistors to make a weak, incorrect guess at what a worm with 302 brain cells will do.

If we can't simulate 302 neurons and 5,000 synapses, how can we hope to conquer 100,000,000,000 and 100,000,000,000,000? Let's not even get started on the 100,000,000,000,000,000 electrical signals per second that form the traffic on that neural road network.
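To put a number on the gap the sceptics are pointing at, here is a back-of-the-envelope calculation in Python, using the round figures quoted above (and the upper end of the worm's synapse count):

```python
# Rough scale comparison using the round figures quoted above.
worm_neurons,  worm_synapses  = 302, 7_000
human_neurons, human_synapses = 100e9, 100e12

print(f"neuron ratio:  {human_neurons / worm_neurons:,.0f}")   # ~331,125,828
print(f"synapse ratio: {human_synapses / worm_synapses:,.0f}") # ~14,285,714,286
```

In other words, even if the worm were fully cracked tomorrow, the human brain would remain some ten billion times bigger in synaptic terms - before we even reach the signalling traffic.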

Then again, are mental phenomena really the outcome of mere pattern recognition processes? Colin McGinn, a long-time critic of mainstream thinking about mind, points out that this is in fact a very partial view of what the brain does. There's no perceptual recognition going on at all in thinking about an absent object - 'for instance in thinking about London when I am in Miami, or in dreaming, or in remembering that I have to feed the cat'. Kurzweil talks as if everything in the mind involved perception. But there are also other mental phenomena, such as

emotion, imagination, reasoning, willing, intending, calculating, silently talking to oneself, feeling pain and pleasure, itches, and moods - the full panoply of the mind. In what useful sense do all these count as "pattern recognition"? Certainly they are nothing like the perceptual cases on which Kurzweil focuses. He makes no attempt to explain how these very various mental phenomena fit his supposedly general theory of mind - and they clearly do not. So he has not shown us how to "create a mind," or come anywhere near to doing so.

McGinn also makes the point - widely noted by other critics of dominant trends in neuroscience such as Raymond Tallis - that these sorts of accounts are saturated in anthropomorphic language. Kurzweil's pattern-recognisers 'receive and send messages', they 'manipulate information'. Listening to this, it's easy to forget these are lumps of tissue: in fact they have no awareness of doing any such thing. A retort is that such representational talk is merely intended to be metaphorical. But that's not altogether true. Its effect is to create in the reader's mind a sense of a coherent process, a masking pseudo-explanation that leaves the fundamental mystery entirely untouched.

Kurzweil does include a chapter on such key matters as free will and identity, which he concedes his model can't really explain. Like many people he asserts that they somehow 'emerge' from lower level activity, just as conscious awareness does. This is problematic enough, but there's also an absence - as far as I can tell, having only read a synopsis - of any serious consideration of emotional or moral intelligence. It's not widely observed that the cleverest people are also the most emotionally mature - that is to say the wisest, the best at cooperating with groups, and most fully committed to finding the most widely acceptable outcome to any given social or political problem.

In principle I suppose one could program the machines with some utilitarian-type algorithm. However to design such a thing, humans would first have to agree on an ideal, which would be like competing groups getting together to write a national constitution - always an extremely fraught process. Kurzweil seems to think such matters can safely be left to the machines themselves, since they will be vastly more intelligent than us. This implies that there exists out there some single outcome that is so obviously rational they can all agree on it. But that is emphatically not the human experience, and what happens if the machines too start to compete with each other? This is the stuff of Hollywood dystopian nightmares, a world given over to indestructible beings with unimaginable strength and inhuman cunning.

One might say it's pure fantasy, not least because no machine could ever pass the Turing Test, as long as that test includes a demonstration of ESP - as Turing himself intended. It is of course argued that ESP doesn't exist and that Turing naively let himself be seduced into accepting spurious claims. Or that even if ESP does exist, it plays such a marginal role that by meeting all the other requirements a computer could achieve human status. But if we take ESP to be a property of minds, then clearly minds are something more than the effect of brain processes.

It follows, therefore, that if some Frankenstein character were to try to use How to Create a Mind as a manual, he wouldn't get very far. Without understanding the true basis of mind we will only ever be able to mimic it. But that's not necessarily the end of the story.

Science can't create minds from dead matter, but perhaps it can create the conditions for minds spontaneously to come into being. This would mean taking seriously the idea of panpsychism, that consciousness is inherent in matter, and will emerge in systems of sufficient complexity. In this way, something of Kurzweil's vision could come about by quite different means. Or imagine that these thinking robots he anticipates, although not properly human, are sufficiently sophisticated that they can be possessed by the dead as a means to revisit the world in mechanical bodies.

That's why I like this transhumanism stuff. We don't have to take it literally, but it can take us to some interesting places.

Comments


"This is the stuff of Hollywood dystopian nightmares, a world given over to indestructible beings with unimaginable strength and inhuman cunning."

Oh I dunno. Might be a positive move towards a more compassionate world. ;)

More seriously, though, what about injecting the machines with a good measure of emotional intelligence? After all, that's the vital component missing in many of the most intelligent minds.

Ray Kurzweil is one of the few hard-core materialists that I really like. As far as I know, he has never lambasted anybody's spirituality, he doesn't want to die, and he actually makes a certain amount of sense.
Of course, the same things can be said about Dr. Who.

The human imagination can take on a list of facts and conditions, and create an impressive picture of the future. History shows us that if the prognosticator is serious, s/he will end up being right to a certain extent, but wrong for the most part. Unanticipated discoveries, limitations and plot twists usually end up changing the outcome.

So, who knows? Here's my prediction - Kurzweil's materialism will fall by the wayside in the near future, and he will discover his never-ending life.

Then he can cut down on his 150 pills (a day!).
http://tinyurl.com/cz247cx

From the summary:
The technologies mentioned here might not only be used to create separate AI machines, but also, eventually, to enhance our own brains. This can be done noninvasively, by creating an AI cloud accessible to all (loc. 1663, 1754-60), or more directly by way of designing and installing cortical implants straight into our brains (loc. 3557). These implants could not only be used as add-ons, but to replace existing structures, in order to beef-up and improve their functioning (loc. 3519). Ultimately, every piece of our biological brain could be replaced with new and improved computer parts (loc. 3556).

This process would inherently involve scanning the microstructure of the brain so as to duplicate it with computer processing modules using the proper advanced technology. The result would be an artificial human brain. The structure and software of this system could be represented by a vast data file, and with it this system could be duplicated. It could be used to replace the original "person" in case he/it was destroyed. In fact any number of duplicate "clones" could be manufactured, each with the same consciousness as the original, or with different or added memories, degree of intelligence and personality characteristics. And the data file could be mechanized entirely in software in a higher level processor, so there would be no physical body at all. The implications of this train of thought quickly become absurd and beyond mindboggling. Pure imagination, science fiction.

The hugely arguable assumptions behind Kurzweil's thinking are that (1) the brain is purely a kind of computer, and (2) consciousness is essentially a kind of computation. In the case of manufacturing a computer duplicate of a person, unfortunately there would be no continuity of consciousness between the original live human being and the electronic version. The machine incorporating human neural software would be an independent thinking entity, totally separate from the original human, who if he somehow survived the process would be sadly disappointed to still be mortal, be subject to illness and death, etc. He might wonder why go to all the trouble since he personally doesn't benefit, or have his length and quality of life extended. Of course it is more likely his brain would be obliterated by the scanning process. It would be a noble, altruistic self-sacrifice so that after death an immortal copy could go on without such limitations. This doesn't sound very desirable to me.

It never fails to surprise me that the AI folks get so caught up in their computational wonderland that they completely forget stuff like creativity, intuition and inspiration. Those are all extremely important parts of human thinking even for dullards.

@Craig: That's left-brain dominance for you!

nbtruthman, should materialism be the correct characterisation of reality, then there is no persisting self. Hence a physical replica is just as much a continuation of consciousness of the original person as the consciousness in the original body.

I very briefly discuss this here in the context of a teleporter where the original body isn't destroyed.

http://ianwardell.blogspot.co.uk/search?updated-min=2012-01-01T00:00:00Z&updated-max=2013-01-01T00:00:00Z&max-results=2

I hate to be the token sceptic, but I don't think Ray Kurzweil is a real person. He's a character from a lesser known J.G. Ballard novel.

Ian Wardell - ...should materialism be the correct characterisation of reality, then there is no persisting self.

Certainly

Hence a physical replica is just as much a continuation of consciousness of the original person as the consciousness in the original body.

I guess it depends on the point of view. Using the teleportation thought experiment and supposing mind-body materialism is true, if the process damages or destroys the original body, the original "self" then experiences being damaged (or ceases to exist), and has good reason to fear and if possible avoid undergoing the process regardless of the duplicate not experiencing any discontinuity. As far as the original person is concerned, a physical and mental replica would very much not be "as much a continuation of consciousness of the original person as the consciousness in the original body".

You say "certainly", but then disagree with me! OK, well I explain it in my blog and I haven't got anything to add to that apart from the fact you're begging the question by presupposing the notion that we are substantial selves.

A materialist is being inconsistent if he thinks that teleporting will mean the end of him.

In the destructive teleportation thought experiment the original person is physically destroyed in order to reconstruct his replica at a remote location. Under materialism the original mind/personality and its stream of consciousness is destroyed, ceases to exist. Let's say the scanning process takes a few agonising seconds to deconstruct the original person. The original person's subjective stream of consciousness will be of agony for a few seconds followed by unconsciousness and then total cessation. A terrible total death.

Any disagreement here?

While at the same time a new duplicate mind/personality with all its memories is reconstructed at the receiver. The duplicate has the complete illusion of there being no discontinuity in its consciousness, even though it is a just-reconstructed being.

Under interactive dualism in the same destructive teleportation thought experiment, the original person's subjective stream of consciousness would be of a transition to afterlife existence, whereas the "duplicate" would presumably be a soulless clone with no mind/personality.

You state in your blog entry on this thought experiment: "Under naturalism there is no distinction between numerical and qualitative identity. At that instant when the replica is created the replica necessarily must be you if it is physically identical. To deny this is to affirm that what 'you' are is something over and above the totality of your physicality."

I think that the thought experiment as stated above demonstrates that there is something wrong with this philosophical formulation of naturalism. And Piccinini's statement - "Regardless of how many replicas are made and whether making replicas requires the destruction of your current body, your replica is not you" - is valid.

Seems to me that when the earliest life arose from matter, it would be using the "ultimate" properties of matter and energy which are really based on quantum vacuum properties, strong and electroweak forces and gravity, possibly multiple dimensions, string theory ideas, etc... And of course these are only models of what must be termed "ultimate reality". In all life, including me, all this is really going on. Something really more complex than we will ever know.

But any computer hardware and its software is finite, materially and program-wise, and though constructed of the stuff in the paragraph above, is blind to its subtleties. It's a closed system.
So any attempt to brute-force equate one with the other by actual construction will fail due to this problem of principle. Am I missing something here?

Topologically speaking "life" (or anything composed of the "base stuff") is not homeomorphic to an attempted "life simulation" or "base stuff" simulation.

See here: http://en.wikipedia.org/wiki/Homeomorphism
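For reference, the definition behind that link amounts to:

$$f : X \to Y \ \text{is a homeomorphism} \iff f \ \text{is a continuous bijection and } f^{-1} \ \text{is continuous,}$$

i.e. the two spaces can be deformed into one another without tearing or gluing - the claim above being that no such correspondence holds between the "base stuff" and any finite simulation of it.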

Does this imply that if an afterlife is proven then animism or some variant is true? :-)

animism - "natural physical entities -including animals, plants, and often even inanimate objects or phenomena - possess a spiritual essence" from Wiki

@ nbtruthman

I don't think you've grasped Ian's argument. If your subjective reality "dies" and is "reborn" moment to moment (which is essentially what he's saying is true under materialism), then it doesn't matter if a "you" a trillion iterations from this moment will experience the pain of death, because the current "you" has already perished. The teleported self would bear the same relation to your current self as that dying self housed in the same body.

Yes I confirm that Michael appears to understand what I'm saying :-)

Ian


In the thought experiment the original person is dying in pain due to effects of the scanning process, while the reconstruction is perfect, not dying and not in pain. This shows that the original is a unique individual consciousness, a unique self, a unique "I" tied to and a function of the original brain and body (assuming materialism). The duplicate once it is created is a separate consciousness, a separate self, separate "I", even though it is mostly physically identical to the original.

It is true that "In every way this newly created person will feel herself as being simply a continuation of the original and that she has merely instantaneously transported from one place to another." (from the blog entry). But at that moment before her death, the original person would realize that "she", this unique self with the inner experience of "I", will not meaningfully survive the process. Therefore, in this sense the replica is not the same person as the original.

From the blog entry: "If the original body is killed at the precise moment of replication then, from the perspective of the person being teleported, she will seem to “jump” to the remote destination." Absolutely not, if the person is defined as the unique "I" associated with and a function of the original person's brain. The perspective described is that of the replica.

@nbtruthman

Read Ian's blog post slowly and sleep on it. You're still not grasping the core concept.

I'm not trying to be disparaging. He did a good job explaining it in his post but the concept is counter-intuitive and it's easy to think you've understood it when you haven't.

The technocrats too easily don't realize the importance of altruism, intuition, and emotional intelligence...

No one can argue that Ray Kurzweil is a genius in the technical world; however, he is most ignorant in the field of what the human mind can do as with God's mind. One has to believe in that being we call our Father in Heaven to understand and believe the following. God has all knowledge. There is "nothing" he does not know. He knows all things past, present and future. His head or brain is about the same size as our brain. His Son, Jesus Christ, of whom we have a written account, also knows what His Father knows. The holy scriptures also testify that all those who qualify to go and live in His presence will also become like Him and Jesus Christ in the resurrection, which is very near. Either of them could have beaten "Big Blue" in a chess match. Nothing will ever be invented that can equal the human mind at its Celestial capacity.


