The extended mind and why it matters for cognitive science research [research review]
Most answers to “what is the mind” involve the brain and body in some way. Some people assert an identity relation (“the mind is the brain”), while others draw a kind of causal or functional connection (“the mind is what the brain does”). Some might even expand the definition of “mind” beyond the brain to include our body: perhaps we “think” with our hands, our legs, our torso.
Intuitively, however, it feels much stranger to extend the “mind” past both our brains and our bodies. Yet this is exactly the claim of extended cognition.
One of the most famous cases for extended cognition comes from Clark & Chalmers (1998). Consider the cognitive problem of playing Tetris: to solve the problem, players must identify whether an incoming shape fits into any of the “sockets” at the bottom of the screen. Now consider a few ways a player might solve this problem:
1) A player could mentally rotate incoming blocks, simulating what each block would look like in different configurations.
2) A player could press a button to actually rotate incoming blocks on the screen, presumably getting a higher-fidelity picture of their different configurations (and perhaps gaining a speed advantage).
3) Sometime in the cyberpunk future, a player is given a neural implant that can perform the mental rotation just as quickly as the button-press in (2).
Clark & Chalmers (1998) then ask: how much cognition is present in each case, and where is it located?
Our initial intuition is that (1) and (3) are pretty analogous: in both cases, the rotation process happens “inside” the brain––with or without the help of an implanted device. But if we accept that rotation performed by a neural implant counts as cognition, what differentiates (1) and (3) from (2)? Besides the skin/skull boundary, the computation seems roughly the same: an individual executes a rotation process with the goal of better identifying where an incoming Tetris block will best fit.
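To make that parity concrete, here is a minimal sketch (my own toy construction in Python, not anything from Clark & Chalmers): the same rotate-and-check-fit computation runs in every case; all that differs is whether the rotation step is performed as the agent’s “mental” simulation or delegated to the game via a button press.

```python
# Toy sketch (my own construction, not from Clark & Chalmers 1998): the same
# rotate-and-fit computation, whether the rotation happens "in the head" or on screen.

def rotate(shape):
    """Rotate a shape 90 degrees clockwise; a shape is a list of (row, col) cells."""
    max_row = max(r for r, _ in shape)
    return [(c, max_row - r) for r, c in shape]

def normalize(cells):
    """Translate cells so the bounding box starts at (0, 0), ignoring board position."""
    min_r = min(r for r, _ in cells)
    min_c = min(c for _, c in cells)
    return {(r - min_r, c - min_c) for r, c in cells}

def fits(shape, socket):
    """Check whether the shape, up to translation, exactly fills the socket."""
    return normalize(shape) == normalize(socket)

def rotations_needed(shape, socket, rotator):
    """Count how many applications of `rotator` make the shape fill the socket."""
    for n in range(4):
        if fits(shape, socket):
            return n
        shape = rotator(shape)
    return None

# Case (1): "mental" rotation, where the agent simulates the transformation itself.
mental_rotation = rotate

# Case (2): button press, where the game performs the very same transformation on screen.
def button_press(shape):
    return rotate(shape)  # identical computation, located outside the skull

piece  = [(0, 0), (0, 1), (0, 2), (1, 2)]   # a J-shaped tetromino
socket = [(5, 6), (6, 6), (7, 5), (7, 6)]   # a hypothetical gap at the bottom of the board

print(rotations_needed(piece, socket, mental_rotation))  # 1
print(rotations_needed(piece, socket, button_press))     # 1; same answer, same computation
```

Whether `rotator` points at the agent’s own simulation or at the game’s button handler, the problem-solving loop is the same; the only difference is where that one step happens.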
Clark & Chalmers (1998) suggest that there’s no principled reason to insist that (1) and (3) are cognition, and (2) is not. Cases involving an external tool should still count as cognition––just a form of cognition that extends beyond the skin/skull boundary. In these cases:
the human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right. All the components in the system play an active causal role, and they jointly govern behaviour in the same sort of way that cognition usually does. If we remove the external component the system’s behavioural competence will drop, just as it would if we removed part of its brain. Our thesis is that this sort of coupled process counts equally well as a cognitive process, whether or not it is wholly in the head (pg. 9).
Relaxing the boundary
Once we relax the skin/skull boundary, we find that there are all sorts of examples where our cognitive capacities seem to be augmented by the use of an external tool.
For example, if someone with memory loss were to rely entirely on their computer to record everything that happened each day, and consulted that computer each time they made a decision, should that computer be considered part of their mind? Or perhaps more precisely: should the person and the computer together be considered a kind of coupled, cognitive system?
This relaxing of boundaries makes for some compelling, exciting examples. Yet it also leads to what some critics call “cognitive bloat”. Most things are causally connected to most other things in some way. And we use plenty of tools––and coordinate our actions with plenty of other people––as we go about our lives. Are all of these tools part of our mind? Are those people also part of our mind? Where does one draw the line?
One answer is simply to throw up your hands and accept everything as part of the mind. I think this is actually a reasonable response, but an important part of it has to be emphasizing the notion of coupled cognitive systems. It’s misleading, or at least imprecise, to say that my computer is “part of my mind”. Rather, there exists a coupled, cognitive system, parameterized by a particular time and place, and that system includes both my brain/body and the computer. And we might even be more precise: that cognitive system need not include every part of my computer, or even every part of my brain or body. If I’m writing an essay, then my legs are less a part of that cognitive system than my fingers and my brain. And when I walk away from the computer, that coupled cognitive system disappears, at least for a time.
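To make the “parameterized by a particular time and place” idea a bit more concrete, here is a toy sketch (entirely my own illustration, not a formalism from the extended-mind literature) that treats a coupled system as a time-bounded collection of components, rather than as a property of a single organism:

```python
# Toy illustration (my own sketch): a coupled cognitive system as a time-bounded
# collection of components, some inside the skull and some outside it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoupledSystem:
    task: str
    components: frozenset        # e.g. brain regions, body parts, external tools
    start: float                 # when the coupling begins (hours on an arbitrary clock)
    end: Optional[float] = None  # None while the coupling is still ongoing

    def active_at(self, t: float) -> bool:
        """Is this coupled system in existence at time t?"""
        return self.start <= t and (self.end is None or t <= self.end)

essay_writing = CoupledSystem(
    task="writing an essay",
    components=frozenset({"prefrontal cortex", "visual cortex", "fingers", "laptop"}),
    start=9.0,    # sit down at the computer at 9:00
    end=11.5,     # walk away at 11:30; the coupled system dissolves, at least for a time
)

print(essay_writing.active_at(10.0))   # True: the system exists while I'm at the computer
print(essay_writing.active_at(14.0))   # False: no such coupled system at 2pm
```

The point of the sketch is only that the unit of analysis is the task-specific, time-bounded system, not the organism; my legs, and most of my laptop’s hardware, simply aren’t components of this particular one.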
This view of minds is a little hard to grok, perhaps because it’s very hard to dissociate the notion of mind from the notion of consciousness, and we tend to associate consciousness with discrete, individuated organisms (i.e., “me” vs. “you”). But this view doesn’t entail that a group of people interacting as a team produces a kind of emergent consciousness (though this is a reasonable position that one might hold); rather, the claim is merely that there’s some set of interacting processes that are usefully described as “cognitive”.
Why should it matter?
To some, this may feel like a notational difference. Why does it matter if we refer to the interaction between my brain/body and my computer as a coupled, cognitive system?
I think it’s actually quite important from a theoretical perspective. One of the main goals of Cognitive Science is answering the question: how do humans (and other intelligent organisms) solve problems in the world? If you start out with the presumption that cognition is: a) individualized; and b) “happens” in the brain, then that’s where you’re going to look. And of course, I think the brain is very important for human cognition! But looking only at the brain might lead you to miss certain interesting aspects of how humans and other organisms solve certain problems. We structure our environment and social lives in all sorts of useful ways so our brains don’t have to do everything on their own.
Of course, one might object that we still don’t have to refer to these other processes as coupled, cognitive systems. Presumably human brains still do a number of things, even if their situational context is also doing some of that “cognitive labor”. So, one might argue, perhaps Cognitive Science should focus on human brains and exactly what computations they might be performing (assuming one thinks the brain performs computations); the other things are important too, but they’re not truly “cognitive”.
If this is one’s view, then I do think it amounts to a notational difference. And I think it’s appropriate for many people to study the specific role of brains and specific brain regions in solving particular problems. But even if that’s the goal, accomplishing it requires a clear understanding of which functions are performed by the brain and which are offloaded onto the environment or other tools. Otherwise, we risk looking for the wrong things in the brain. And this takes us right back to the notion of a coupled, cognitive system. We don’t have to call it that, but we still have to consider the role of context and factors beyond the skin/skull boundary in solving certain problems, even if only for the purpose of excluding those computations from our investigation.