Some books I read in 2021
Last year I wrote up mini-reviews of some books I read in 2020. I found (and continue to find) this post, and others like it, helpful primarily for the purpose of not forgetting books that felt particularly impactful when I read them. Sometimes there’s a real pleasure in revisiting old thoughts and memories, and this kind of post serves as a useful cue to those memories. (There’s probably something circular about writing a post to help remember the books that I find most memorable, or perhaps that’s the point?)
The other purpose of a post like this is to inspire other people to read these books. Personally, I find that recommendations of any sort––for books, movies, songs, etc.––are much more meaningful when they come with an explanation for that recommendation. That’s one of the things I find most lacking in many recommendation engines: “you may also like X” doesn’t tell me why I may like X. So for each of these books, I’ll try to emphasize what I found worth remembering.
The Alignment Problem, by Brian Christian. This book helped convince me that research on long-term AI alignment is both important and under-resourced. I was already familiar with the many current examples of unethical uses of AI, e.g., as outlined in books like Weapons of Math Destruction or Algorithms of Oppression (both of which I recommend). And I was also loosely familiar with a contingent of people, many of whom I associate with the Rationalist movement, who were concerned about long-term, existential threats from AI. But these long-term concerns always felt strangely under-specified and overly reliant on analogies, e.g., a paperclip maximizer that decides to exterminate all humans. I understood the underlying principles (I think), but I just didn’t see a realistic scenario in which AI truly led to existential catastrophe––whereas it was much easier for me to see the existential risks posed by human-created pathogens or a world-ending comet. But as usual, it’s rarely a good idea to dismiss intellectual ideas out of hand, especially when they’re espoused by a community that one largely agrees with on other points. This book pushed me in the direction of taking long-term AI alignment more seriously. It’s a broad survey both of current ethical issues in AI (including some of what O’Neil discusses in Weapons of Math Destruction) and of ongoing theoretical problems (like “black box” models) that pose serious practical issues for AI deployment. It’s also just a really useful primer on major trends in AI, such as inverse reinforcement learning and deep neural networks. Together with blog posts like this one, I’m considerably more convinced that work on AI safety and alignment is a valuable research program.
Under a White Sky, by Elizabeth Kolbert. Human civilization has wreaked havoc on the planet: there’s climate change, obviously, but there’s also biodiversity loss, massive land erosion, and more. And of course all these things are related. Kolbert describes a number of ways in which human existence has fundamentally altered the world in which we live. And unfortunately, addressing these challenges likely requires new technological solutions and interventions in the environment. These solutions are bound to be controversial: there’s the idea to spray sulfur aerosols in the stratosphere to reduce the effects of warming; there’s the idea of genetically engineering invasive cane toads to stop their spread; and then there’s the idea of sucking carbon dioxide from the air and injecting it deep into stone under the ground. At first glance, many of these technological solutions feel like they’re missing the point: if human intervention is what caused so many problems, why should we think further interventions will be any different? And the thing is that this question is absolutely valid––we can’t know that these solutions won’t introduce other unintended problems. In fact, it seems likely that they will. And that’s what makes this reality so deeply unsatisfying: the view of nature “untouched” by human intervention simply isn’t tenable anymore. And crucially, inaction is a form of action––simply refraining from taking novel actions won’t remedy the problems created by our previous actions. We’ve got to weigh the various possible paths ahead of us and figure out the least bad solution to the problems we’re facing; and in many cases, technological wizardry is likely our best hope.
Golden Gates, by Conor Dougherty. California has a massive housing shortage. (Side note: that Wikipedia page on the housing shortage is also quite informative.) The book is an in-depth look at the history and causes of the housing shortage in the California Bay Area, covering topics like single-family zoning, neighborhood vetocracies, and more. It focuses on the rise of the California YIMBYs, a political movement explicitly created to push back against NIMBYism and support housing development. I didn’t know much about the history of housing regulations or zoning practices, so this book was a great introduction. I’m now convinced that housing is one of the most pressing problems facing California (and the USA more generally); I’ve also been convinced of the basic tenets of the YIMBY movement (e.g., upzoning, removing parking minimums, etc.). For a great, nuanced look at housing (and related) issues in the Bay Area, I recommend Darrell Owens’s Substack. This article by Jerusalem Demsas is also really informative about how a variety of regulatory burdens have slowed the construction of new housing units and greatly contributed to our diminished housing stock.
The Hole, by Hiroko Oyamada. This book was both excellent and deeply unsettling. The main character, Asa, moves out to the countryside when her husband gets a new job there. Apart from his family––which lives next door––Asa is quite isolated, and ends up getting rather bored. She wanders around the surrounding area, interacting with various people; at one point, she comes across a strange creature and even falls into a hole that seems especially designed for her. But the plot is considerably less important than the feeling that the story evokes. If I had to draw a comparison, I’d say it’s like a cross between David Lynch and Franz Kafka: surreal, dreamlike, and definitely unsettling. Specific images from the book still linger in my memory. (I also enjoyed her other book, The Factory, though admittedly not quite as much as The Hole.)
Frankenstein, by Mary Shelley. This book obviously doesn’t need much in the way of promotion, and to be honest, I feel a little self-conscious even writing about it, given how much has already been written (and adapted to the screen, etc.). But I’d never read Frankenstein, despite having absorbed the basic premise through a kind of cultural osmosis. I think what I found most tragic about the book was the enormous sense of squandered potential. Frankenstein’s creation comes into the world hopeful, eager to learn; and even after being spurned multiple times, he reaches out to certain humans in search of sympathy and kindness; but at each point he is met with fear and revulsion (except, notably, from an older blind man). And I understood, too, Frankenstein’s own reluctance to give in to the creature’s demand for a “mate”; the creature had already killed his little brother, and an innocent woman had been executed for the crime––who’s to say the two creatures combined wouldn’t terrorize even more people? Although I, the reader, felt relatively certain that the creature would be true to his word, I of course had no way of knowing. All we can ever do is rely on the good faith of others, and often that trust is hard to find. The story is shot through with this kind of failure to trust, and the failures that follow from it, and that’s what made it such a heavy (but ultimately very beautiful) read.