Straight out of defense labs, autonomous and semi-autonomous weapons are already in use, but there’s no overarching agreement among key stakeholders on how to control their implementation and diffusion. Unlike nuclear or biological weapons, whose proliferation has been largely controlled, autonomous weapons pose some tricky problems.
The first is the absence of an international treaty regulating them.
The second is the comparative ease by which autonomous weapons are developed. Nuclear weapons are hard. The nine countries with nuclear weapons have built them with multi-decade projects backed substantially by state resources and administrative capacity.
Not so for autonomous weapons. The hardware is getting cheaper and cheaper. Drones in use by insurgent forces cost only a few hundred dollars. And the software engineering know-how (machine vision, autonomous navigation, collision detection) is widely understood and rapidly dispersed through legitimate channels (such as ArXiv and GitHub), and increasingly in commercial products. An autonomous weapon does not need to be a T-1000. It could be a $500 drone carrying a shaped charge. (Longish review of autonomous drone weapons currently in use by state military and non-state actors here.)
In sum: The knowledge to build these subsystems is widely available, and the tools to do so are cheap. This 8-minute video published by the Future of Life Institute depicts one possible scenario, too realistic to be ignored. An immediate threat is already on the horizon in the form of machine learning-enabled malicious software that learns normal behavior patterns and uses them to get past security gates (WSJ paywall).
Paul Scharre, a security analyst, warns that there’s not much time left for negotiating proper regulations:
Four years ago, the first diplomatic discussions on autonomous weapons seemed more promising, with a sense that countries were ahead of the curve. Today, even as the public grows increasingly aware of the issues, and as self-driving cars pop up frequently in the daily news, energy for a ban seems to be waning. Notably, one recent open letter by AI and robotics company founders did not call for a ban. Rather, it simply asked the UN to “protect us from all these dangers.”
AI researcher Subbarao Kambhampati questions, too, whether any ban is worth supporting because a ban is likely to be ineffective and “a pyrrhic victory” for the proponents of peace.
I do think there are two things to bear in mind. First, by calling these powerful machines “killer robots” (as the Campaign to Stop Killer Robots does), we erase the most important variable in today’s systems: the humans who design, maintain, and manage them. As this excellent, comprehensive study by Maaike Verbruggen and Vincent Boulanin shows, the largest militaries do not immediately envision fully autonomous systems taking over the battlefield:
The focus on full autonomous systems is somewhat problematic as it does not reflect the reality of how the military is envisioning the future of autonomy in weapon systems, nor does it allow for tackling the spectrum of challenges raised by the progress of autonomy in weapon systems in the short term. Autonomy is bound to transform the way humans interact with weapon systems and make decisions on the battlefield, but will not eliminate their role. […] What control should humans maintain over the weapon systems they use and what can be done to ensure that such control remains adequate or meaningful as weapon systems’ capabilities become increasingly complex and autonomous?
The failure of system designers to account for human error is particularly exposed in air travel, another industry with high stakes. In 2009, Air France Flight 447 disappeared into the ocean after its autopilot handed control back to the befuddled pilots. (See also this story of an Airbus whose control systems went rogue, resulting in several passenger injuries.)
The second is that, yes, the technologies and raw materials are widely dispersed. Yellowcake and centrifuges, it isn’t. Cheap drones and free software, it is. Rather than throwing our hands up in despair, we should spend some time seriously raising awareness of these issues. In turn, this might generate creative solutions to the problem. Any workable solution is likely to be multi-factor, spanning technical, ethical, regulatory, administrative, and even social concerns.
Worth looking into: PAX, a cofounding organization of the Campaign to Stop Killer Robots, put out two reports in the wake of the latest UN discussion: an overview of trends and weapons under development and a report on the positions of European states.
This story was originally published on Medium. Copyright 2017.
Azeem Azhar is a product entrepreneur and analyst who serves on the editorial board for the Harvard Business Review.