This entry is really my overlarge comment on this post here, which was inspired by my post here, and it is more a constellation of questions than of answers.
I agree that people like Kurzweil seem to be missing something when they characterize human intelligence as a conglomeration of brute computational powers. Still, at the risk of sounding more like a behaviourist than a functionalist, I find myself wondering about some questions.
How do we know that another human is conscious and possesses real understanding? We seem to take it for granted. I wonder whether some people I have met would pass a Turing test. Well, one way to gauge that would be to ask them. So if some future controversy arose over whether a particular robot really possessed intelligence, we could ask it directly. This doesn't really help us, because it could be programmed to answer in ways that are functionally indistinguishable from actually possessing intelligence and consciousness.
In much the same way, a psychopath is very good at mimicking moral and ethical behaviour even if he can't begin to understand its meaning. Something eventually gives away the psychopath (like killing and eating the brain of a hitchhiker). But if nothing gives away the robot's virtual consciousness as being different from ours, why should we begrudge its claim? So another question arises: if they can fool everyone, can they fool themselves? Like Deckard in Blade Runner, who doesn't know that he is a replicant.
In Searle's Chinese Room thought experiment, a comprehensive collection of tables and maps describing the relationships of symbols to one another is available to Searle. With these he can give an interlocutor the impression that he actually understands Chinese when he really does not. These tables and maps are of course external and peripheral, but what if Searle accessed them internally, in some way that let him use them without knowing on the surface how he was using them? I think someone like Kurzweil could imagine downloadable tables and maps that could be internalized so that the speaker could speak Chinese without actually understanding how they are able to speak it. Which leads to another question: does my friend Mr. Fung know "how" he speaks Chinese?
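To make the rulebook concrete, here is a minimal sketch of what the room reduces to computationally: a lookup from input symbol strings to output symbol strings, with no representation of meaning anywhere. The phrases and rules are my own invention for illustration; nothing here comes from Searle himself.

```python
# A toy "Chinese Room": the operator mechanically follows a rulebook that
# maps incoming symbol strings to outgoing symbol strings. Meaning is
# represented nowhere; only the shapes of the symbols matter.
# (Phrases and rules invented purely for illustration.)

RULEBOOK = {
    "你好吗": "我很好，谢谢",       # "How are you?" -> "I'm fine, thanks"
    "你会说中文吗": "会，一点点",   # "Do you speak Chinese?" -> "Yes, a little"
}

def room_reply(symbols: str) -> str:
    """Return whatever the rulebook dictates; understanding never enters into it."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(room_reply("你好吗"))  # fluent-looking output, zero comprehension
```

From the interlocutor's side the replies look competent, which is exactly the worry: competence of output tells us nothing about understanding inside.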
Deep Blue doesn't really understand chess, and software doesn't know from pizza, because they lack something the German philosopher Heidegger called "care". A human being's comportment toward the world, and its intelligence, is informed by this "care" or "concern" (Sorge in German). In order for a robot to achieve human-like understanding, it seems it needs to have something at stake. We could supply this only by ensuring that it has a vulnerable body, mortality, a social history, encumbrances, risks and so on, along with desires and fears in the service of self-transparent as well as deeply hidden motivations. But by giving it all of those features (some would say weaknesses), what you are really designing is a human being made with human parts and human histories, and that's not really a robot. Perhaps developments in biotechnological computing will create biological robots whose human-like understanding cannot be denied on the grounds that they lack care, because they will possess it too.
While I hold onto the notion that human intelligence is something special that cannot be recreated simply with processing power, I wonder whether that will become moot when we begin to add cyborg peripherals that become deeply internalized, and when the human body can be manipulated and altered through genetic nanotechnology to the point of busting open the categories, so that the limits and differentiations between robot and human become blurred. We may then have many more categories of intelligence to contend with.
1 comment:
gosh. i guess i should take a class with dreyfus next, so i can grok heidegger.
it's an interesting question, does your friend know how he speaks chinese. certainly it's the case that proficiency in our native language isn't something we remember ever consciously trying to acquire. in some ways, native speakers are the worst people to talk to about the structural details of their language. i have tried asking native chinese speakers "but what does 'de' MEAN? why do you have to put it there?" usually they say "you just do. because that's what sounds right." what do we make of this? that we are just dumbly following hardwired programs every time we open our mouths? but how can that be, when language is infinitely varied, infinitely creative?
it is the case that the forms of language are signifiers, and that they map to signifieds. and those signifieds aren't just isolated pictures, as if the word "tree" just mapped onto a canonical image of an oak tree or some such. think of a tree for a moment. now think about what you imagined...was it growing in the ground? in a forest, in a field, in a city? was it tall, lots of branches? good for climbing? did it offer shade, or fruit?
meaning is not so simple, and it seems to be the case that our cognitive representations of things in the world are frame-based. that is, they are organized into domains of related experiential knowledge. that is, you can't talk about "tuesday" without an understanding of monday, wednesday, a week... which in turn is fully understood as a figure against the ground of a month, a year, the gregorian calendar, etc. etc. and we cannot conceptualize any of this without relying on our metaphors for time, how we think and talk about its flowing past us or our moving through it. if you start to pull on a thread, the whole sweater just comes apart.
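just to make the frame idea concrete, here's a toy sketch (the names and links are invented by me, not drawn from any real lexicon) of how one concept only means anything against the background of the next:

```python
# toy frame-based lexicon: "tuesday" evokes the week, which evokes the month,
# the calendar, and finally a metaphor for time. structure invented for
# illustration only.

FRAMES = {
    "tuesday": {"is_a": "weekday", "evokes": "week"},
    "week": {"members": ["monday", "tuesday", "wednesday", "thursday",
                         "friday", "saturday", "sunday"],
             "evokes": "month"},
    "month": {"evokes": "gregorian_calendar"},
    "gregorian_calendar": {"evokes": "time_as_motion_metaphor"},
    "time_as_motion_metaphor": {"evokes": None},
}

def background(concept):
    """follow the chain of evoked frames -- pulling the thread of the sweater."""
    chain = []
    frame = FRAMES.get(concept, {}).get("evokes")
    while frame:
        chain.append(frame)
        frame = FRAMES.get(frame, {}).get("evokes")
    return chain

print(background("tuesday"))
# ['week', 'month', 'gregorian_calendar', 'time_as_motion_metaphor']
```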
in the face of all this, i guess it shouldn't really be so surprising that computers don't understand eating pizza. the better question is: how could it be possible that we *do*?