This entry began as an overlong comment on this post, which was itself inspired by my post here, and it is really a constellation of more questions than answers.
I agree that people like Kurzweil seem to be missing something when they characterize human intelligence as a conglomeration of brute computational power. Still, at the risk of sounding more like a behaviourist than a functionalist, I find myself wondering about some questions.
How do we know that another human is conscious and possesses real understanding? We do seem to take it for granted. I wonder if some people I have met would pass a Turing test. Well, one way to gauge that would be to ask them. So if some future controversy arose over whether a particular robot really possessed intelligence, we could ask it directly. This doesn't really help us, because it could be programmed to answer in ways that make it appear, functionally, to possess intelligence and consciousness.
In much the same way, a psychopath is very good at mimicking moral and ethical behaviour even if he can't begin to understand its meaning. But something eventually gives away the psychopath (like killing and eating the brain of a hitchhiker). If nothing ever gives away the robot's virtual consciousness as different from ours, though, why should we begrudge it its claim? So another question arises: if they can fool everyone, can they fool themselves? Like Deckard in Blade Runner, who doesn't know that he is a replicant.
In Searle's Chinese Room thought experiment, a comprehensive collection of tables and maps describing the relationships of symbols to one another is available to Searle. With these he can give an interlocutor the impression that he actually understands Chinese when he really does not. These tables and maps are of course external and peripheral, but what if Searle accessed them internally in some way, so that he came to use them without knowing, on the surface, how he was using them? I think someone like Kurzweil could imagine some form of downloadable tables and maps that could be internalized, so that the speaker could speak Chinese without actually understanding how he is able to speak Chinese. Which leads to another question: does my friend Mr. Fung know "how" he speaks Chinese?
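The tables-and-maps mechanism in the thought experiment can be made vivid with a minimal sketch, in Python, of a toy "room" whose rulebook is nothing but a lookup table from input symbols to canned replies. Everything here is hypothetical illustration, not a real system: the entries and the fallback reply are invented, and a real rulebook would be vastly larger. The point is that the code manipulates symbols it does not understand in any way.

```python
# A toy Chinese Room: the "rulebook" is just a lookup table pairing
# input symbol strings with canned reply strings. Entries are
# hypothetical examples, not a real conversational system.
RULEBOOK = {
    "你好": "你好！",            # the operator of the room need not know
    "你会说中文吗": "会一点。",   # what any of these symbols mean
}

def room_reply(symbols: str) -> str:
    """Return the rulebook's reply for the input symbols,
    or a stock 'please repeat that' if no rule matches."""
    return RULEBOOK.get(symbols, "请再说一遍。")

print(room_reply("你好"))  # emits the matching symbols without understanding them
```

However fluent the output, the function instantiates only syntax: swapping one table for another changes the "language" it appears to speak without changing anything about what it understands, which is Searle's point.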
Deep Blue doesn't really understand chess, and software doesn't know from pizza, because they lack something the German philosopher Heidegger called "care". The human being's comportment toward the world, and its intelligence, is informed by this "care" or "concern" ("Sorge" in German). To allow a robot to achieve human-like understanding, it seems it must have something at stake. We could supply this only by ensuring that it has a vulnerable body, mortality, a social history, encumbrances, risks, and so on, along with desires and fears in the service of self-transparent as well as deeply hidden motivations. But by giving it all of those features (some would say weaknesses), what you are really designing is a human being, made with human parts and human histories, and that's not really a robot. Perhaps developments in biotechnological computing will create biological robots that cannot be denied human-like understanding on the grounds that they lack care, because they will possess it too.
While I hold onto the notion that human intelligence is something special that cannot be recreated with processing power alone, I wonder if the question will become moot. Once we begin to add cyborg peripherals that become deeply internalized, and once the human body can be manipulated and altered through genetic nanotechnology, the categories may burst open and the limits and differentiations between robot and human blur. We may then have far more numerous categories of intelligence to contend with.