Books I've read recently

This post will be different from most of my other posts here.

I’ve recently been trying to read more “classics”–where a “classic” is loosely defined as a piece of writing that’s frequently referenced in a given discipline or domain and appears to have influenced that domain (or even other spheres of activity). Unfortunately, I also frequently forget much of what a book is about soon after I’ve read it, even books that I took notes on while reading. This is truer, I think, of nonfiction than fiction.

To try to address this problem of forgetting, I thought a potentially interesting way to spend my Sunday afternoon would be to write brief summaries of several of these books. A summary is not really a review, and it’s certainly not an in-depth analysis of a book. Thus, these summaries may not ultimately resolve the forgetting issue; on the other hand, I’m hoping that simply reviewing my notes to compile the summaries will help consolidate and organize my memories of each book.

The Odyssey (translated by Emily Wilson)

Somewhat shamefully, I’d never in fact read the entire Odyssey before. I read portions of it in high school, and of course I was familiar with many of the stories and characters it contains, but I’d never read it start to finish.

Recently, I listened to an interview with Madeline Miller on Ezra Klein’s podcast; Miller wrote The Song of Achilles and Circe, two beautiful retellings of classic myths, both of which focus on characters who got short shrift in the versions most of us know (Patroclus and Circe, respectively). In the interview, Miller mentioned that she loved Emily Wilson’s translation of The Odyssey. Wilson is the first woman to publish an English translation of the poem.

One thing I was struck by was how well the writing “flowed”; I was expecting a bit of a slog, but it went quite quickly. There were many beautiful and moving passages, and I can genuinely say I felt captivated throughout.

Another thing that surprised me was simply how little of the text is devoted to what I always assumed were the central moments of Odysseus’s journey (i.e., battling the Cyclops, meeting Circe, the men opening the bag of wind from Aeolus, and so on). In fact, these moments are all recounted by Odysseus to the Phaeacians. This comes after Odysseus has settled himself in their halls and is experiencing intense emotion as a bard tells the story of the Trojan War (spurred on to do so, I should say, by Odysseus himself); it also occurs after Odysseus impresses the Phaeacians with his discus-throwing abilities (a scene I found very comical). The recounting is not without details, but just in terms of the number of pages it takes up, it almost feels “minor” with respect to the overarching plot. Immediately after, of course, Odysseus returns to Ithaca, whereupon he plots (and enacts) vengeance against the “suitors” occupying his house.

It’s also interesting to interpret this recounting in light of Odysseus’s propensity towards cunning and deceit more generally–something that’s emphasized throughout the poem. Unlike the rest of the epic, this section is told from Odysseus’s perspective, not from the perspective of an omniscient narrator. How are we, the hearers/readers, meant to think about the veracity of these accounts? Should we take them at face value, or could this be yet another case of Odysseus’s manipulation–this time cleverly embedded within the text itself? As far as I can tell, there’s no direct evidence for this interpretation within the text, and I imagine most Classics scholars would (justifiably) roll their eyes at my speculation. But it’s interesting nonetheless (to me).

Seeing Like a State (James Scott)

The full title is: Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed.

This one did take me a while to get through, but it was worth it. It’s one of those books where the summary seems quite straightforward and intuitive, but where really understanding it requires reading all the details. Here’s the short summary from Wikipedia, which I think is pretty good:

The book catalogues schemes which states impose upon populaces that are convenient for the state since they make societies “legible”, but are not necessarily good for the people. For example, census data, standardized weights and measures, and uniform languages make it easier to tax and control the population.

This distills the essence of the book pretty well, but I’d expand on this a bit. What stuck out to me about Seeing Like a State was the idea that large, powerful institutions (or maybe just people in general?) have a tendency to believe the abstractions they build of the world, society, etc. (the “maps” they create of the “territory”) are the correct ones; when combined with the power to restructure or transform the world in a way consistent with those abstractions, this can lead to many unintended consequences, including great harms.

Scott presents a number of examples in his critique of “Authoritarian High Modernism”, including: scientific forestry in the late 18th century; planned cities like Brasilia; land reform and forced villagization in the Soviet Union and Tanzania, respectively; and modern agricultural science. In each case, there is a set of planners who fail to appreciate the complexity of the real world and assume their simplifications (i.e., their abstractions) are not only sufficient but correct. I found the examples from forestry and agriculture most interesting, perhaps because ecosystems are so obviously complex (and yet scientists, economists, etc. in industrialized societies never tire of assuming our understanding of these systems is complete and can be modularized in a manner akin to that of the “factory floor”).

For example, late 18th century Germany faced a growing shortage of wood (due in part to the degradation of the forests), which presented obvious problems for the economy. To try to simplify the problem, mathematicians and scientists began measuring the yield of various kinds of trees to understand which trees would result in the most timber. Based on these measurements, they developed an abstraction: a “standardized tree” (a Normalbaum), which could be used to generate predictions about the overall yield of a forest. It was discovered that the Norway spruce was one of the “best” trees in terms of timber yield, so of course, researchers planted rows and rows of these spruces–resulting in a more legible, gridlike array of trees utterly unlike the apparent chaos of real forests. The first generation of trees was quite successful, raising respect for scientific forestry; but by the second generation, there was a 20-30% decrease in production and many of the trees died. At its core, the problem appeared to be monocropping: clearing the “underbrush” that typically lay on the forest floor led to thinner and less nutritious soils (i.e., less “soil capital”), and the lack of genetic diversity (all trees were the same species and about the same age) left the forest vulnerable to being felled by storms, killed by species-specific pests, and so on. Ironically, the scientists eventually tried to remedy the problem by reintroducing some of the complexity they’d previously abstracted away: wooden boxes were placed to attract birds; spiders and ant colonies were reinstated. Scott’s takeaway is that this illustrates:

“[the] dangers of dismembering an exceptionally complex and poorly understood set of relations and processes in order to isolate a single element of instrumental value” (21)

In other words, scientists reduced the complexity of the forest to a single product (wood) and optimized for this product alone. The other examples in the book highlight a similar form of arrogance, and many also demonstrate its opposite: the local knowledge a community constructs over generations of living in, adapting to, and gaining hands-on experience with a particular environment.

I think the book is both humbling and instructive, and should probably be read by anyone hoping to make a career involving “planning”–economists, politicians, public policy analysts, and so on. The lesson is not that science is doomed to fail and shouldn’t be used to guide planning–Scott is quite clear about this in the final chapter. Rather, he encourages humility in the plans we do make and in the generalizations we draw. He contrasts the scientific quest for “universal truths” with “local knowledge”, which is inherently contextualized and comes from generations of hands-on experience. Scott is simply arguing that scientists (and others prone to universalizing abstractions) should not outright dismiss the importance and insights of local knowledge, as they (we) have so often in the past.

In fact, scientists could (and should, in my view) attempt to learn from local knowledge. Often, Scott argues, scientists and planners observe a discrepancy between their idealized model and real-world practice and attribute this difference to ignorance or inefficiency on the part of real-world practitioners (or simply to undesirable “chaos” in the real-world system). Instead, they should try to understand this system from the perspective of those who have lived in it; if a particular adaptation seems strange or unexpected–but also seems to work–why does it work? In my view, the contribution of science here could be to elucidate mechanism; but as always, we must be wary of overgeneralization.

The Modularity of Mind (Jerry Fodor)

The Modularity of Mind is one of the most famous and influential books in Cognitive Science. It helped launch (and clarify the goals of) a research program that attempts to understand the mind by beginning with the assumption that certain mental processes (e.g., aspects of perception, language, etc.) can be conceived in terms of “modules”, which are informationally encapsulated from other processes—i.e., information from one process does not “leak” into another process.

This book generated a very lively debate in the years that followed; the question of whether particular cognitive processes are relatively more modular or whether they recruit shared cognitive resources represents one of the major theoretical divides in the field. It’s probably still one of the field’s major “big questions”, and interestingly, I feel that people’s views on this debate tend to be correlated (though of course not perfectly) with their views on other topics, such as the extent to which the language faculty is innate.

I’m not going to rehash the entire argument of the book here, but I will say I’m very glad I read it. Full disclosure: my educational background has put me more in the camp of “non-modularists” and I tend to take a more “distributed”, domain-general view of cognitive faculties/processes. But as with The Odyssey, I always felt some amount of shame at having not read Fodor previously; there’s a lot of value in reading the original source material for an argument. And this book, along with a fantastic keynote address at the 2019 Cognitive Science Conference by Nancy Kanwisher, has made me considerably more sympathetic towards a research program that assumes some degree of what we might call “modularity”.

What I found most intriguing about Fodor’s argument—and what often gets elided from contemporary debates or portrayals of his view—is the idea that as researchers, we had better hope parts of the mind are modular if we want to understand them. That is, if we aim to deliver a computational, input/output account of mental processes, a thorough and accurate account would greatly benefit from (or perhaps even require) those processes being relatively modular. Scientific investigation typically proceeds by isolating variables; for example, an experiment tries to strip away the complexity of the real world and study a phenomenon in some “decontextualized” setting such that the only variables that change are the ones manipulated by the experimenter (or are modeled as random variation, as in the case of individual variability across subjects). Surprisingly (for me), Fodor argues that many (most?) interesting mental processes are perhaps not capable of being isolated in this way; these fall under the domain of the central processor, where any given process (e.g., inference) can be influenced in highly contextual, unpredictable ways by other states of the processor. This makes such mental processes ill-suited to scientific inquiry. Modular processes, on the other hand, are prime candidates for investigation. By Fodor’s characterization, modules perform a dedicated function, operate quickly, efficiently, and shallowly, and are informationally encapsulated. This means that in principle, a modular process could be isolated and studied in the lab. From an instrumentalist point of view, then, it’s useful to assume modularity of certain mental processes, i.e., to proceed as if a given cognitive operation is modular. Otherwise we’ll be lost in the all-encompassing morass of “context”.

There’s an interesting parallel here between Fodor’s account and the more distributed view of cognition. Fodor’s argument is not—as I’d embarrassingly assumed—that all mental processes are modular. Far from it. Rather, many are “messy” and distributed, as I think many non-modularists would agree. I imagine there are at least two key distinctions. One is that non-modularists may extend the long arm of context into even those processes which Fodor assumes to be relatively modular. The second—which perhaps is a consequence of the first—is that non-modularists don’t advocate only for studying processes that appear to be modular (presumably because they believe none to be perfectly modular).

What this implies, however, is that even an advocate of a more “distributed” view of cognition—i.e., the view that a given process (e.g., “low-level” visual perception) can be influenced by another process or faculty (e.g., language)—would presumably concede that certain processes draw more on particular cognitive resources or sources of information than others. That is, if we’re foolish enough to believe we can actually study and characterize something, we presumably think that thing is more closely related to some things than to others; if it’s equally correlated with everything, there’s no hope of isolating the relevant variables. Some might argue that everything can indeed influence everything else; this is of course true in principle. But it also makes building any sort of model intractable; one needs a way of determining which variables are more or less important to include in a model, otherwise the model simply becomes a recapitulation of the real world.

This leaves us in a strange position, one that echoes the challenges enumerated in Seeing Like a State. A scientific model is necessarily a simplification of the real-world phenomenon; generalization involves abstraction, and abstraction, as Borges noted, is a “forgetting” of the details of the world. So which details do we leave out? Which abstractions are permissible? I don’t think there is a “correct” answer to these questions, or at least not a single answer; the details that are abstracted away when characterizing one phenomenon may need to be preserved when characterizing another.

When it comes to the mind: to study any particular cognitive process (e.g., “pragmatic inference”) is to assume it’s modular to the extent that it can be studied in the first place. That doesn’t mean it’s perfectly modular (whatever that might mean), only that it’s not perfectly distributed in the sense of being equally related to every other process one can imagine. In my view, this assumption need not be ontological–we don’t have access to the true nature of reality. Rather, it’s an instrumentalist assumption, which may be more or less useful depending on the phenomenon.

(As a side note, Fodor’s writing is also a pleasure to read. Reading the book, you have the feeling of an argument being constructed almost entirely from first principles. I recommend it to anyone who wants to understand the motivation for adopting a modularist view of certain mental processes.)

Written on August 23, 2020