Books

A sober treatise on the future of warfare warns of the perils of autonomous robotic combatants

Army of None: Autonomous Weapons and the Future of War

Paul Scharre
Norton
2018
456 pp.

Sooner than you may think, robotic swarms will intercept incoming missiles at hypersonic speed, while dueling cyberattacks and countermeasures transpire at nearly the speed of light. Such strikes and counterstrikes will quickly overwhelm the capacities of human combatants to respond.

Fiscal constraints and shrinking human resources, together with the promise of enhanced precision, effective risk management, and force multiplication, are driving Pentagon planners (and their counterparts in other nations) to pursue battlefield automation aggressively. In Army of None, Paul Scharre offers an authoritative and sobering perspective on the automated battlefields that will very soon come to characterize military conflict, predicting that autonomous robots, many fully armed and capable of independent targeting decisions, will inevitably come to rule the waves, as well as prevail on the ground, in the air, and especially in space.

The logic behind such developments seems inescapable. Why should human pilots fly risky missions in hostile territory when remotely piloted, and increasingly autonomous, airframes can conduct the same missions at a fraction of the cost and often with greater endurance, efficiency, and precision? Why should combat supply convoys use human drivers in zones of conflict when autonomous vehicles can move the same supplies without risk of harm? Robotic combatants and logistical support troops, moreover, require no pensions or veterans’ benefits—obligations owed to their human counterparts that, if left unchecked, will very soon come to consume the entire annual defense budget.

The drive to automate seems relentless and unstoppable, and the headlong rush toward endowing automated combat weapons platforms and logistical support devices with the ability to operate without continuous human oversight or intervention seems the only way of realizing the economies of risk reduction and force multiplication that automation promises. Yet such considerations do not fully address the range of underlying policy questions at stake.

PFC. RHITA DANIEL/US MARINE CORPS/WIKIMEDIA COMMONS

U.S. Marines conduct a foot patrol with the Modular Advanced Armed Robotic System on 9 July 2016.

Scharre reminds readers of the compelling example of Soviet Air Defense Forces Lieutenant Colonel Stanislav Petrov, who, in September 1983, recognized that an automated surveillance system’s alert of an incoming U.S. nuclear missile strike was, in fact, a grave system malfunction and thus very likely averted a nuclear holocaust. Subsequent advances in artificial intelligence notwithstanding, no automated weapons-delivery platform is yet capable of the kinds of discerning judgments that humans have made, and continue to make, routinely in such circumstances.

Machines lack the concerns, empathy, and self-consciousness that factor into normal human situational awareness. Thus, despite the numerous advantages such systems offer, Scharre marshals his engagingly detailed accounts of them to argue that a policy of blindly arming and deploying autonomous robots in armed conflict would constitute an error fraught with peril.

Debates over the future of military robotics have raged for more than a decade among delegates to the periodic meetings of the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva, while members of the International Committee for Robot Arms Control (ICRAC), including the eminent Irish roboticist Noel Sharkey, have lobbied strenuously against the further development or deployment of such weapons.

Indeed, Wendell Wallach, a technology ethicist and robotics expert at Yale University, proposed several years ago that any attempts to wed full autonomy or general (deep learning) artificial intelligence with lethal weaponry ought to be preemptively condemned as constituting a means of warfare that is malum in se—wrong or evil in itself. By contrast, computer scientist and military roboticist Ronald Arkin proposed addressing such concerns with design modifications that would mimic emotions such as guilt and empathy and thus provide an “ethical governor” for emergent robotic behaviors.

To be sure, a kind of generalized technology anxiety with respect to military robotics is nothing new. P. W. Singer’s Wired for War covered much of the same well-worn ground in 2009 (1). But Scharre brings these discussions up to date from a perspective forged in combat as a former Army Ranger and subsequently honed as a senior policy analyst in the Department of Defense. His is thus, to a large extent, an insider’s perspective on the current and likely future course of these revolutionary and disruptive military technologies. As such, it is all the more compelling for its clarity and exhaustive detail—and well worth reading.

References

  1. P. W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (Penguin, London, 2009).

About the author

The reviewer is the emeritus distinguished chair in ethics, U.S. Naval Academy, Annapolis, MD 21402, USA, and the author of Ethics and Cyber Warfare (Oxford University Press, New York, 2017).