An engaging synthesis highlights the value of field experiments in the social sciences

Randomistas: How Radical Researchers Are Changing Our World

Andrew Leigh
Yale University Press
283 pp.

Social scientists once restricted their research to carefully controlled laboratory experiments. Over the past 25 years, however, they have increasingly made use of field experiments. The insights gained span nearly every imaginable segment of society, providing tests of theory, advice to policymakers, and guidance to nonprofit and for-profit firms alike.

In his new book, Randomistas, Andrew Leigh takes stock of a slice of this research in an even-tempered, scientific, and accessible way. The work reads like a stroll down memory lane, as Leigh digs into historically rich areas of research, ranging from the education production function to crime prevention and useful poverty interventions. Yet perhaps the most exciting aspect of the book concerns field experiments in politics and philanthropy.

Politics is interesting in its own right. Why people vote and how we can enhance voter participation remain first-order questions in well-functioning democracies. In chapter 9, Leigh cleverly illustrates how Barack Obama made keen use of field experiments to figure out what was working and why during his first presidential campaign. For instance, in 2007 Obama's team used a field experiment to determine the best image and slogan to motivate web visitors to subscribe to future campaign emails. The version that yielded the most email addresses, a black-and-white photo with the message "Learn More," surprised even the most seasoned experts, who had expected that a video accompanied by the message "Sign Up" would be the top performer. The experiment's results served as a scientific basis for a successful campaign.

Later, Leigh describes how political scientists Alan Gerber and Donald Green determined the efficacy of get-out-the-vote interventions. As Leigh, a politician himself, points out: "I've met 'experts' who are convinced that partisan letters work best when paired with doorknocking, that telephone calls work best in the final week of the campaign, or that posters outside the election booth make a huge difference. But ask them about their evidence base and it's quickly apparent that their war stories lack a control group."

In their most effective intervention, Gerber and Green show that a letter revealing the recipient's turnout record, as well as their neighbors', increases turnout by 8 percentage points. This remarkable effect likely reflects social-image concerns.

Charitable giving, meanwhile, is much more important than most people realize. The number of U.S. nonprofits registered with the Internal Revenue Service grew by nearly 60% from 1995 to 2005, and charitable gifts of money have more than doubled since 1990, now exceeding 2% of GDP.

The market for charitable giving primarily revolves around three major players: donors, who provide the resources to charities; charitable organizations, which develop strategies; and the government, which decides (among other issues) the tax treatment of individual contributions, the level of government grants given to various charities, and what public goods to provide itself.

Leigh catalogs interesting details of recent field experiments, which lend insights into all three actors, but he focuses most of his efforts on the relationships between charities and individuals. Here, he summarizes some of my own field experiments from the 1990s, which showed the importance of a “lead donor” and matching funds in raising charitable giving. (The mere mention of a lead donor raised giving by 50 to 100%.)

What is particularly appealing about Randomistas is that it does not stop at discussing the overarching literatures associated with the topics Leigh chooses to focus on. It also details how to build a better feedback loop and how organizations can get over hurdles that prevent them from running effective interventions. These include fairness concerns, replication needs, and the discomfort associated with admitting when you do not know something. (The latter is a challenging one for most managers I know.)

I do not have any qualms about what was written in this book, but I do feel that an important element of the field experiment equation was omitted. Although in the past few decades social scientists have done a superb job of developing methods with which to generate field data showing how the world works and detailing intervention effects, how we should use those data for policy purposes is often neglected. Do the results scale to a larger setting? What factors affect that scaling? Without this information, empirical research can be quickly undermined in the eyes of the policymaker, the broader public, and even the scientific community itself.


About the author

The reviewer is at the Department of Economics, University of Chicago, Chicago, IL 60637, USA.