
A Microsoft researcher unpacks the power and perils of today’s artificial intelligence

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence

Kate Crawford
Yale University Press
336 pp.

Kate Crawford’s new book, Atlas of AI, is a sweeping view of artificial intelligence (AI) that frames the technology as a collection of empires, decisions, and actions that together are fast eliminating possibilities of sustainable futures on a global scale. Crawford, a senior principal researcher at Microsoft’s FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, conceives of AI as a one-word encapsulation of imperial design, akin to Calder Willingham’s invocation of the word “plastics” in his 1967 screenplay for The Graduate (1). AI, machine learning, and other concepts are here understood as efforts, practices, and embodied material manipulations of the levers of global power.

By taking power and materiality seriously and leaving aside questions of what intelligence is, Crawford maps answers to how AI is made and how we are trapped by its making. The primary thesis of her book is that AI has nothing to do with understanding or seeking intelligence but is a “registry of power,” a metaphor meant to encompass social, political, and economic power as well as the insatiable demands AI places on electric power grids and on nonhuman nature.

Why an “atlas” of AI? Because those in control of AI have a desire for AI “to be the atlas—the dominant way of seeing,” to be the single way in which humans understand and run the world. Crawford’s anticolonial manual advances an alternative mapping, one that resists AI’s extractive, exploitative, and destructive aims. To comprehend Crawford’s argument is to understand that AI’s danger lies not in a hypothetical future superintelligence but in the reality of its current manifestation.

The book begins with a stark chapter (titled “Earth”) on the destructive power of lithium and rare earth element mining that provides the raw materials that underlie artificial processing power. In “Labor,” a visit to an Amazon fulfillment center in New Jersey inspires reflection on the crushing effects of the “logics of production” that undergird just-in-time synchronization of humans by machines and their builder-owners.

In “Data” and “Classification”—two of her most effective chapters—Crawford traces the pragmatics of predictive analytics, which she argues are rooted in promises of beneficence without attention to nonmaleficence. Here, she describes how AI constructs digital gates that lock us into data cages fixed to a mismeasured atlas over which we have no consent or other control. She offers as an example UTKFace, a database of facial images scraped from the internet that uses restrictive gender binary and ethnic identity classification schemes (“an integer from 0 to 4, denoting White, Black, Asian, Indian, and Others”) without attempting to contact the persons in the dataset to ask for their consent or providing them with the opportunity to articulate their own gender and ethnic identities (2).

In “Affect,” Crawford applies the lessons of the previous two chapters to highlight the dangers of automating human emotion detection. She effectively strikes down the notion that machine classification of human emotion in policing, security, law, hiring, education, and psychiatric medicine will be bias-free, given its existing track record of othering persons from already marginalized communities.

Crawford’s final chapter (“State”) describes the US Department of Defense’s Project Maven, an initiative to use AI to expand the surveillance and targeting capabilities of drones. Google, the project’s first host, tried to keep its work on the project secret, but when the company’s employees found out, more than 3000 signed a letter expressing ethical concerns about the company’s involvement in such a program. After Google declined to renew the initial contract, Project Maven moved to Palantir, a start-up whose funding was partially derived from a CIA-affiliated venture capital group. Crawford shows how Palantir’s business model has already made its way into domestic deportation efforts, local policing, and supermarket chains, arguing that the imminent threat posed by these present-day deployments of weaponized AI must supersede nagging worries about hypothetical automated weaponry of the future.

With Atlas of AI, Crawford has written a timely and urgent contribution to the interdisciplinary projects seeking to humanize data science practice and policy. One might reasonably object to her view that “we must focus less on ethics and more on power” or push back against her recurrent use of “myth” and “mythologies” to mean “falsehood” and “lies,” yet such qualms in no way diminish the value of this book.

References and Notes:
1. C. Willingham, The Graduate, screenplay (1967).
2. UTKFace, Large-Scale Face Dataset.

About the author

The reviewer is a faculty member of the Psychology, Neuroscience, and Data Science programs at Scripps College, Claremont, CA 91711, USA.