Language is a neuroenhancement [research review]

In Cognitive Science and Psycholinguistics, research on language tends to emphasize its role as a communicative medium. This emphasis lends itself to questions about how language is produced and understood for the purposes of communication: how does a speaker wishing to convey an idea formulate a linguistic expression from that idea, and how does a comprehender decipher the original idea from the linguistic expression they observe? In turn, there are a variety of theories about how all this happens––ranging from “amodal” theories that emphasize the role of disembodied, symbolic concepts, to more “embodied” theories that emphasize the role of sensorimotor experience in linguistic knowledge (Bergen, 2012; Barsalou, 1999; Gallese & Lakoff, 2005).

But as Dove (2020) notes, this traditional emphasis sidelines another, equally interesting question: what is the role of language in thought?

This question becomes especially interesting if we adopt an embodied perspective. Language is, after all, a form of embodied experience. For example, to produce language, we have to move our tongues and lips, or our hands and fingers. And so, if we believe––as proponents of embodied cognition do––that our conceptual representations make use of sensorimotor experience, then why shouldn’t that include the sensorimotor experiences associated with language in particular? What does our embodied experience of producing and comprehending language do for us, cognitively speaking?

Proponents of extended cognition sometimes go even further. According to this view, the mind does not merely reside in the brain, or even the body––“cognition” is a complex phenomenon emerging from brains, bodies, and the environments in which they’re situated (Clark & Chalmers, 1998). Our cognitive capacities are scaffolded and even augmented by external tools. And of course, language is a kind of tool. So this raises the question: how might acquiring a language––any language––shape our capacity for thought (1)?

Thinking with words

To acquire language, we have to become proficient in all sorts of things. We have to produce and recognize the sounds of a language, master its morphological structure, and even learn the art of having a conversation.

Dove (2020) suggests a few ways in which this acquired competence acts as a neuroenhancement. Together, these mechanisms fall under what he calls the LENS theory: Language as an Embodied Neuroenhancement and Scaffold Theory.

Words as glue

One of the most basic functions of words is to refer.

Nouns, for example, typically refer to some sort of entity (2), often by means of carving out a category (e.g., “dog”). Dove (2020) suggests that the mere act of assigning labels to referents changes our conception of them, and enhances our ability to construct novel thoughts with and about them.

To illustrate this, first imagine the set of sensorimotor experiences one might associate with dogs. Presumably most dogs share a set of common features. If you’ve ever pet a dog, you know the experience of running your hand through its fur; if you’ve encountered a dog barking, you likely have some kind of stored auditory memory of that sound. And of course, if you’ve seen a dog, you have a rough visual schematic of what dogs look like––they’re furry, they have four legs, they wag their tails.

Even without the word “dog”, we might be able to connect these disparate sensory associations into a relatively unified, multimodal representation. But Dove (2020) argues that having the label binds these associations more effectively. Words are sticky, like glue––and now, each time you encounter the word “dog”, you’re accessing a set of different sensorimotor experiences that are bundled together by all sharing the same label.
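
To make the “glue” metaphor a bit more concrete, here’s a toy sketch of my own (not anything from Dove’s paper; the “dog” features below are invented) in which the label acts as a shared key that binds otherwise separate sensorimotor traces into a single retrievable bundle:

```python
# Toy sketch (my own, not Dove's): a label acts as a shared key that binds
# otherwise separate sensorimotor traces into one retrievable bundle.
from collections import defaultdict

# Invented "experiences": each is a modality plus a crude feature vector.
experiences = [
    ("visual",   [0.9, 0.1, 0.3]),   # furry, four-legged, tail-wagging
    ("auditory", [0.2, 0.8, 0.0]),   # barking
    ("tactile",  [0.7, 0.0, 0.5]),   # soft fur under the hand
]

# Without a label, the traces sit around as an unindexed pile.
unlabeled_memory = list(experiences)

# With a label, every trace is stored under the same key, so encountering
# the word "dog" retrieves the whole multimodal bundle at once.
labeled_memory = defaultdict(list)
for modality, features in experiences:
    labeled_memory["dog"].append((modality, features))

print(labeled_memory["dog"])  # one word, three modalities, one bundle
```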

Dove isn’t the only person to argue this. Elsewhere, Pulvermüller (2013) has argued that linguistic labels serve to stabilize and organize our conceptual representations, providing structure in an otherwise structure-less landscape of sensory experience.

Moreover, there’s evidence to support this claim, much of which Dove reviews. For me, one of the main takeaways comes from a study by Lupyan et al. (2007):

Labeled categories are easier to learn than unlabeled categories even when the relevant experience is held constant and the labels are redundant (Dove, 2020, p. 296).

Language as a new source of information

Words don’t (usually) just appear in isolation. Typically, they appear with other words. And importantly, there are regularities in which words co-occur with which other words: presumably, you’re much more likely to observe the word “scalpel” near the word “surgeon” than near the word “teacher” (3).

Dove––and others before him (e.g., Firth, 1957; Harris, 1954; Sahlgren, 2008; Lupyan & Winter, 2018)––suggests that these statistical regularities are actually a rich source of information: both about the meaning of words themselves (“you shall know a word by the company it keeps”), and about the world. In other words, language is a kind of rich knowledge base, implicitly and explicitly encoding many real-world relationships in the statistical relationships between words.
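
To make “statistical regularities” concrete, here’s a minimal sketch of the distributional idea (my own illustration, with an invented four-sentence corpus; it’s not from Dove or the papers cited above): represent each word by the counts of the words it co-occurs with, then compare words by the overlap of those profiles.

```python
# Toy distributional-semantics sketch (my own illustration): a word's
# "meaning" is approximated by the company it keeps, and words that keep
# similar company end up with similar count vectors.
from collections import Counter
from math import sqrt

sentences = [
    "the surgeon lifted the scalpel",
    "the surgeon sterilized the scalpel",
    "the teacher lifted the chalk",
    "the teacher praised the student",
]
STOPWORDS = {"the"}  # drop function words so content-word company stands out

def context_vector(target):
    """Count the content words that appear in the same sentence as `target`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w != target and w not in STOPWORDS)
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

scalpel, chalk, student = (context_vector(w) for w in ("scalpel", "chalk", "student"))

print(scalpel["surgeon"], scalpel["teacher"])  # 2 0 -- "scalpel" keeps company with "surgeon"
print(cosine(scalpel, chalk))    # > 0 -- both occur with "lifted", so their profiles overlap
print(cosine(scalpel, student))  # 0.0 -- no shared company in this tiny corpus
```

Even in this tiny corpus, “scalpel” keeps company with “surgeon” rather than “teacher”, and “scalpel” and “chalk” end up with overlapping profiles because they appear in similar frames.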

He also suggests that this statistical information might be particularly useful for our understanding of abstract concepts. One proposal (Andrews et al., 2009) is that our conceptual representations are subserved by a kind of hybrid system: concrete concepts (e.g., “dog”, “bed”, etc.) rely more on non-linguistic, sensorimotor associations, while abstract concepts draw more on linguistic associations––both in terms of linguistic sensorimotor associations (e.g., movements of the tongue or mouth), and in terms of the statistical regularities among word co-occurrences.
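
One crude way to picture such a hybrid system is sketched below. To be clear, this is my own toy illustration with made-up numbers, not the actual Andrews et al. model (theirs is a probabilistic model that integrates experiential and distributional data): each word gets both a sensorimotor vector and a linguistic co-occurrence vector, and a concreteness weight determines how much each contributes.

```python
# Crude sketch of a "hybrid" lexical representation (my own toy numbers,
# not Andrews et al.'s actual model): blend a sensorimotor vector with a
# linguistic co-occurrence vector, weighted by how concrete the word is.

# Invented feature vectors and concreteness ratings (0 = abstract, 1 = concrete).
sensorimotor = {"dog": [0.9, 0.8, 0.7], "justice": [0.1, 0.0, 0.1]}
linguistic   = {"dog": [0.4, 0.2, 0.6], "justice": [0.7, 0.9, 0.5]}
concreteness = {"dog": 0.9, "justice": 0.2}

def hybrid(word):
    """Concrete words lean on sensorimotor experience; abstract words lean
    on the statistics of how the word is used alongside other words."""
    w = concreteness[word]
    return [round(w * s + (1 - w) * l, 2)
            for s, l in zip(sensorimotor[word], linguistic[word])]

for word in ("dog", "justice"):
    print(word, hybrid(word))
```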

Language, then, offers up all kinds of novel associations:

  • Between words and their real-world referents.
  • Between grounded concepts.
  • Between words and other words.

Dove (2020) argues that this allows us to make novel connections that would otherwise be impossible in the absence of language.

A symbolic medium

Finally, language is a symbolic system (i.e., words are symbols for their meanings), and one that allows those symbols to be recombined (i.e., words can be arranged into different sequences to produce novel meanings).

A key component of the combinatorial property of linguistic systems is their arbitrariness: word forms are (for the most part) arbitrarily related to the meanings they convey. This decoupling of symbols from their meanings allows us to reproduce similar thoughts in different situations.
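
Here’s a small sketch of my own illustrating that point: a handful of arbitrary forms, plus a rule for recombining them, yields far more distinct expressions than there are forms, and relabeling the forms themselves leaves that capacity untouched.

```python
# Toy sketch (my own) of arbitrariness plus recombination: a handful of
# symbols yields many distinct, novel expressions, and swapping out the
# forms themselves changes nothing about that combinatorial capacity.
from itertools import product

subjects = ["the dog", "the surgeon", "the teacher"]
verbs    = ["chased", "praised", "imagined"]
objects  = ["the ball", "the student", "the idea"]

sentences = [" ".join(parts) for parts in product(subjects, verbs, objects)]
print(len(sentences))                    # 27 expressions from 9 building blocks
print(sentences[0], "|", sentences[-1])  # includes combinations never observed before

# Arbitrariness: relabel every form with a meaningless token; the number of
# expressible combinations is unchanged, because only the system matters.
relabeled = {form: f"w{i}" for i, form in enumerate(subjects + verbs + objects)}
recoded = {" ".join(relabeled[p] for p in parts)
           for parts in product(subjects, verbs, objects)}
print(len(recoded))                      # still 27
```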

Dove (2020) suggests that the acquisition of this arbitrary symbol system “unlocks” new affordances. Now, he argues, we gain more fluency in the art of symbol manipulation, which we can in turn extend to different, non-linguistic domains. In this sense, then, acquiring language isn’t just useful for the direct affordances of language: it brings with it a suite of other, correlated abilities––hence the “neuroenhancement”.

Taking Stock

I think what Dove (2020) does best is recast the relationship between language and thought. By introducing LENS theory, Dove gives us a theoretical tool to help us develop testable hypotheses and ask interesting research questions.

One of the consequences of adopting this framework is that it allows us to fold in other research areas that typically don’t fall under the rubric of embodied cognition. Dove’s example of this is the relationship between language and Theory of Mind. There’s a rich body of literature investigating how language acquisition is intertwined with the ability to reason about the mental states of others. And interestingly, this literature seems to be broadly consistent with the predictions of LENS theory. For example, there’s some evidence that the acquisition of “mental state verbs” (e.g., “think”, “believe”, etc.) coincides with Theory of Mind ability (de Villiers & de Villiers, 2014). Now, this relationship may not be causal––but if it is, it suggests that learning labels for these mental state concepts helps scaffold the use and deployment of those concepts.

This framework might also be of use in other, more distantly related domains. In research on Artificial Intelligence, for example, there’s intense interest in the question of whether and when “Artificial General Intelligence” might be developed, i.e., systems that approximate human-level intelligence on a number of different tasks. Setting aside some of the thornier ethical and philosophical issues with this enterprise (4), it might be fruitful to consider how LENS theory could help inform debates about what it takes to reach human-level intelligence. If we take LENS theory seriously, then language could be an important component of any intelligent system we want to build. So rather than merely asking “how can we build systems that understand language?”, we should also ask “what other cognitive capacities would such a system acquire?”

In general, many of my research-related thoughts recently have been circling around this question of how much information about the world can be learned through exposure to linguistic input alone. LENS theory is useful because it gives me a label, along with a set of conceptual associations, for thinking about how to formulate that question. In this way, I think LENS theory demonstrates its own utility.

References

Andrews, M., Vigliocco, G., & Vinson, D. (2009). Integrating experiential and distributional data to learn semantic representations. Psychological Review, 116(3), 463.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577-660.

Bergen, B. K. (2012). Louder than words: The new science of how the mind makes meaning. Basic Books.

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.

Clark, A. (2003). Natural-born cyborgs. New York, NY: Oxford University Press.

de Villiers, J. G., & de Villiers, P. A. (2014). The role of language in theory of mind development. Topics in Language Disorders, 34(4), 313-328.

Dove, G. (2020). More than a scaffold: Language is a neuroenhancement. Cognitive Neuropsychology, 37(5-6), 288-311.

Firth, J. R. (1957). Papers in linguistics 1934–1951. London: Oxford University Press.

Gallese, V., & Lakoff, G. (2005). The brain’s concepts: The role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3-4), 455-479.

Harris, Z. S. (1954). Distributional structure. Word, 10(2-3), 146-162.

Lupyan, G., Rakison, D. H., & McClelland, J. L. (2007). Language is not just for talking: Redundant labels facilitate learning of novel categories. Psychological Science, 18(12), 1077-1083. doi:10.1111/j.1467-9280.2007.02028.x

Lupyan, G., & Winter, B. (2018). Language is more abstract than you think, or, why aren’t languages more iconic?. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1752), 20170137.

Pulvermüller, F. (2013). How neurons make meaning: Brain mechanisms for embodied and abstract-symbolic semantics. Trends in Cognitive Sciences, 17(9), 458-470. doi:10.1016/j.tics.2013.06.004

Sahlgren, M. (2008). The distributional hypothesis. Italian Journal of Linguistics, 20(1), 33-53.

Footnotes

  1. Note that I’m going to set aside, for now, the question of linguistic relativity, also known as the Sapir-Whorf Hypothesis––i.e., the extent to which our cognition is shaped by a particular language. Linguistic relativity is clearly related to Dove’s (2020) point, and in fact follows logically from the premise that having a language shapes thought, but it’s also a more specific question. Here, the emphasis is on how acquiring any language system can change our cognitive capacities. 

  2. Abstract concepts, of course, don’t really have a real-world referent––or at the very least, it’s much more difficult to pin their real-world referents down. But more on that later. 

  3. Though I’m sure the determined reader could imagine a number of scenarios in which “scalpel” is more likely to co-occur with “teacher”. If there’s anything I’ve learned from studying language, it’s that people can construe almost anything to make sense. 

  4. Ethical issues might include: is human-level AI sentient? If so, should it be given rights? Other questions might include: is the notion of dividing human cognition into a relatively modular “task space” somehow fundamentally flawed? 

Written on October 13, 2021