This is a short essay I wrote for the Intro to Philosophy course that concluded last week. The one I post on the course site needs to be a bit shorter, so I’ll have to edit some stuff out. This isn’t really as tight as I’d like, either, but I think an extended version of the idea here might get a bit boring to read.
**************
Despite being thoroughly refuted, John Searle’s Chinese Room thought experiment still attracts attention (it was included in this course). Perhaps this is because the Chinese Room provides a useful backdrop for counterarguments that are particularly revealing about the relationship between body and mind. My own analysis of the thought experiment would be categorized as a “systems reply”, with a focal point on the separation provided by the walls of the room.
There’s an uneven comparison in how Searle presents the Chinese Room thought experiment: he compares human to human. That is, he compares humans (in general) not to any idea of artificial intelligence or machine, but to the human in his experiment. With the presence of the room, Searle creates a barrier to the flow of information between the human on the inside and the one on the outside. Yet when he considers the human mind in relation to this thought experiment, he draws the line of separation at the skin of the human inside the room, not at the room itself. His claim that “I would not be able to understand the conversation” is analogous to saying that a microchip, or a mouse, or some other component of a machine does not have the capabilities that the machine has as a whole.
As much as humans rely on their limbs, senses and interaction with the environment for presence of mind, so does the human in the experiment rely on the walls of the room. What would our concept of intelligence be if we disregarded the barrier of separation that distinguishes humans, in the same way that Searle does away with the walls? How does “ability” exist in this way?
Imagine a brain in a vat inside the room, and suppose this brain can actually understand Chinese, unlike the monolingual poor soul in Searle’s example. Someone outside the room slides in a piece of paper with a Chinese character on it, and (even if it could perceive the symbol) the brain goes through all of the mental states of replying, yet nothing happens. There is no action, no evidence of mind. Humans include their inputs and outputs, as do machines, which displays the importance of where the distinction of separation is drawn.
Searle’s anthropocentric bias is evident, which makes Putnam’s functionalist argument relevant. The Chinese Room is a human-centered test, illustrating the limits of the communication technology (here, written language). Ask a question in a different way, to Putnam’s octopus or to a computer, and you can get a mindful response. If I want to find out how an octopus feels about sandpaper, I rub the paper on its skin and watch the octopus’s reaction. It may be difficult to interpret that reaction meaningfully, but translating between distinctions of networks is never exact. Presence of mind isn’t limited to language use, or even to interaction in Searle’s example – it’s about contained knowing. The octopus certainly contains its own reaction to sandpaper.
Thus, if we think of a similar situation with a machine or a computer, a non-living object, the ways of interacting might be more limited, but the responses will be more exact. If I want to know which program is best to install on my computer, I use the most relevant method (i.e., clicking in the right places) to get the response that will best inform me.
The distinction of separation implies an extended account of multiple realisability and mind. Let’s take the brain-in-a-vat example and reduce the distinction even further. Why not say that the area of the brain that processes language has a mind, in this case? What about only the neurons and synapses? The distinction needs to be considered with more thought to the action, or else the Chinese Room is pointless, simply expressing that only humans can have human minds.
An expanded version of multiple realisability is also valid. A community itself can have knowledge that an individual contained within that community does not. Networks of people can have space travel capability, or can be literate, for example; yet a single person need not be able to command a rocket, nor be able to read. The claims at one distinction are independent of claims made on a larger or smaller scale. The point of distinction is relevant when considering how a mind experiences knowing.
This is especially relevant in today’s world. Gone is the strict dominance of the Vygotskian idea of internalization of knowledge (though it certainly holds up as one way to consider knowing). Modern communication technology has pried open the depths of distributed knowledge and mindfulness that lie in communities, objects and networks of all types. Such external-to-the-human knowledge impacts our world far more often, and in far more complex ways, than it could within the previous limits of the slow, asynchronous, one-way interaction of the plain old paper-printed book.
The course notes bring up the idea that internal structure is a useful determining factor for deciding mindfulness – but how well do we know the internal structure of anything, let alone humans? Go deep enough into the physical structure of anything and particle physics is still unsolved. As well, at barely more than a century old, psychology’s major insight thus far is the realization that an iceberg-sized (minus the tip) unknown we call the subconscious controls most of our actions. This is a digression, however, as consciousness, the relationship with the self, is another story. When it comes to mere mind, our human perspective is not the only one that exists – human mind and mind are not interchangeable.