

A pandemic isn’t the only peril for which we’re unprepared

The Precipice: Existential Risk and the Future of Humanity

Toby Ord
Hachette
2020
480 pp.

On 16 July 1945, in a desert near Socorro, New Mexico, the U.S. Army conducted the test of the first nuclear weapon. The code name for the experiment, “Trinity,” was coined by J. Robert Oppenheimer and inspired by one of his favorite poems, John Donne’s Holy Sonnet XIV, “Batter my heart, three-person’d God.” Oppenheimer was also among the observers who witnessed the detonation. While watching the mushroom cloud after the explosion, a verse from the Hindu scripture the Bhagavad Gita crossed his mind: “Now I am become Death, the destroyer of worlds.”

Earth was not destroyed in the summer of 1945, nor after. But we can only achieve long-term security if we put deliberate efforts into the reduction of existential risks, writes Toby Ord in The Precipice. He argues that with the Trinity test, society entered a new era—the Precipice—in which we are capable of endangering our own future.

Ord, a philosopher at the University of Oxford’s Future of Humanity Institute (FHI), begins the book with an overview of civilization’s journey from early settlements to modern society. Over time, we progressed through a number of major technological revolutions to achieve unprecedented power and welfare. According to Ord, this could be just the beginning of our story. He presents strong moral arguments for why it would be reasonable to make safeguarding our future a top priority. Currently, he notes, “humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us.”

It is in society’s best interest to prevent existential catastrophes by reducing threats that Ord broadly classifies into three categories: natural risks, anthropogenic risks, and future risks. His goal is not merely to aggregate research findings but to assign a quantitative estimate to individual threats with regard to their potential to cause an existential catastrophe.

Although the fossil record can provide us with useful hints for how to assess certain risks, we must rely, in some cases, on expert opinions. In such situations, especially when the stakes are very high, Ord advocates using a Bayesian statistics approach. He first sets the a priori probability of the threat (on the basis of prior knowledge or an estimate about the likelihood of an event) and then updates it in the light of available scientific evidence. This method allows some space for subjectivity, but it also enables comparison across different risks.
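Ord presents this approach in prose rather than formulas, but the updating step he describes is the standard application of Bayes' rule. The following sketch, with entirely hypothetical probabilities chosen for illustration, shows how a small prior estimate of a threat can be revised once as new evidence arrives:

```python
# Illustrative Bayesian update of a risk estimate.
# All numbers here are hypothetical; they are not Ord's figures.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return posterior P(threat) after observing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start with a small prior probability that a given threat materializes
# this century, then update it on evidence that is twice as likely to be
# observed if the threat is real than if it is not.
prior = 0.01
posterior = bayes_update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.3)
print(round(posterior, 4))  # prints 0.0198
```

The posterior roughly doubles the prior, reflecting the 2:1 likelihood ratio of the evidence; this is the sense in which the method leaves room for subjective priors while still letting evidence move the estimate in a comparable way across different risks.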

According to Ord’s analysis, natural perils such as asteroids and supervolcanic eruptions represent only a minor existential risk over the next century, especially when compared with the profound anthropogenic risks that we already face. Despite an 80% reduction in the number of nuclear warheads since the mid-1980s, for example, he argues that it is crucial that we continue the disarmament process and increase maintenance and security standards for remaining facilities.

Some of our biggest perils, however—namely, pandemics and unaligned artificial intelligence (AI)—lie ahead. The intentional development of a novel virus with the “right balance” of virulence and contagion would be challenging, but Ord urges serious consideration of the possibility that some entity might weaponize known pathogens or that such pathogens could accidentally be unleashed from the laboratory. And it would be even more difficult to mitigate the risk posed by an extremely powerful AI that is not aligned with human values. [This particular threat is discussed in more detail in Superintelligence, written by Ord’s FHI colleague Nick Bostrom (1).]

Existential risks are often positively correlated; if we succumb to one threat, we are more vulnerable to another. Nevertheless, Ord remains an optimist who believes that we can fulfill our long-term potential. He summarizes specific policy and research recommendations in an appendix, advocating, for example, that the Intermediate-Range Nuclear Forces Treaty be restarted and that the annual budget of the Biological Weapons Convention be increased ($1.4 million in 2019, less than the average annual budget of a single McDonald’s restaurant). Given the significance of this material, the book would have benefited from a full chapter offering a deeper analysis of current policy and future strategies to prevent a cataclysm.

I urge caution against setting our action threshold at the level of a global catastrophe, which could distort the way we prioritize our next decisions. But Ord’s map of the existential risk landscape is an engaging read for anyone who wants to learn more about this important and interdisciplinary field of research.

References and Notes:
1. N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford Univ. Press, 2014).

About the author

The reviewer is at the Department of Genetics, Harvard University, Boston, MA 02115, USA.