What we actually have to fear from killer robots

You’ve probably seen the latest Boston Dynamics video, which shows one of its recent quadruped creations, the SpotMini, opening a door despite being repeatedly accosted by a company employee. Boston Dynamics’ videos are notorious for eliciting both excitement and fear across social media. Nicholas King’s new parody of the Planet Earth documentary shows herds of SpotMinis taking over the planet. And the most recent season of Netflix’s Black Mirror features a murderous, highly autonomous SpotMini look-alike. So should we be concerned about killer robots taking over?

My take: I don’t expect the robot uprising anytime soon, but there’s plenty for us to worry about here — as a society, a culture, and a species.

The autonomous killer robots we imagine are so fearsome because (a) they’re autonomous, (b) they’re driven, for some reason, to kill, and (c) they’re armed. Looking at each of these in turn lets us isolate the true areas of concern and understand how we can work to head off our fears.

Autonomy

Autonomy is a layered, even nuanced concept. At the base level, autonomy can refer simply to the ability to get from A to B without human intervention.
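
To make that base level concrete, here’s a minimal sketch of A-to-B navigation in Python: breadth-first search over a toy occupancy grid. The grid, obstacles, and coordinates are all invented for illustration, and no real robot reduces to this, but it captures “get from A to B without human intervention” in its narrowest sense.

    from collections import deque

    def plan_path(grid, start, goal):
        """Breadth-first search over a toy occupancy grid.
        grid: list of strings where '.' is free and '#' is an obstacle.
        Returns a list of (row, col) cells from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}   # maps each visited cell to its parent
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []           # walk parent pointers back to the start
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = step
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != '#' and step not in came_from):
                    came_from[step] = cell
                    frontier.append(step)
        return None                 # no route from A to B

    grid = ["....#....",
            "....#....",
            "........."]
    print(plan_path(grid, (0, 0), (2, 8)))  # routes around the wall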


When I see videos like the SpotMini demonstration, I immediately think back to my interview with Aaron Ames, professor of mechanical and civil engineering at Caltech, an interview that focused on intelligent robots. Our discussion inevitably turned to what was then the latest Boston Dynamics video, which featured the Atlas robot performing a backflip. Ames’ take on the level of autonomy involved essentially boiled down to: not much.

“It’s a preplanned behavior, so this robot has no knowledge of its environment, in the sense that it’s not observing where those blocks are and in real time adjusting its behavior and learning how to do this behavior,” said Ames. Rather, “they put those obstacles in the memory of the computer, they preplan those behaviors, [and] they do a bunch of experiments until they get the right behavior.”

In other words, there’s still lots of work to do before we see autonomously walking robots able to deftly navigate the real world. According to Ames, not only are we not there yet, but researchers don’t even agree on the right basic approach to get us there. Some, like Pieter Abbeel, advocate an approach based on end-to-end deep learning, while Ames suggests a more integrative approach.
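
Ames’ distinction is easy to see in code. Below is a deliberately toy contrast, written in Python with invented numbers that model no real robot: replaying a preplanned command sequence (open loop, no sensing) versus a simple feedback controller that observes the world and corrects in real time.

    # Open loop: execute commands tuned offline, with no sensing.
    # If the world differs from what was assumed during tuning, it fails.
    PREPLANNED_STEPS = [0.3, 0.3, 0.5, 0.3]   # invented joint targets

    def replay_preplanned(steps):
        for target in steps:
            yield target                      # no observation, no adjustment

    # Closed loop: repeatedly observe and correct toward the target.
    def closed_loop(sense, target, gain=0.5, ticks=20):
        command = 0.0
        for _ in range(ticks):
            error = target - sense(command)   # observe the world
            command += gain * error           # adjust in real time
            yield command

    # Here the "world" applies a disturbance the offline tuning never saw.
    final = list(closed_loop(sense=lambda cmd: cmd - 0.2, target=1.0))[-1]
    print(round(final, 3))                    # ~1.2, compensating for -0.2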

So, for the time being, you can probably evade and outrun these robots, especially on varying terrain. But they’re getting there.

Agency

Still, autonomy in the sense of locomotion doesn’t quite get at what’s scary about the “autonomous killer robots” scenario. This is more about agency: the idea that the robot can have a beef with a human in the first place.

I can think of a few scenarios in which a robot would have it in for a human:

  • The robot is acting under its own volition, and it has decided that a particular offending individual needs to go. This implies some degree of general intelligence, goal-directedness, and intrinsic motivation. We’re very far from achieving this type of artificial intelligence, and don’t even really know how to define it. My interview with Greg Brockman explores this topic in detail. It seems quite premature to worry about this.
  • The robot kills a person as an unintended consequence of some human instruction that it’s following. This strikes me as more worthy of concern, especially given how bad we humans are at anticipating unintended consequences (a toy illustration follows this list). It’s what makes AI safety research so important. Check out my interview with Greg’s colleague Dario Amodei to hear about OpenAI’s work in this space.
  • The robot kills a person as an intended consequence of some human tasking. This is really the most likely near-term scenario, and thus the one we should be most concerned about. Essentially, the robot is a weapon, and its autonomy acts both as a potential multiplier on the damage it can do and, critically, as a means of decoupling any human from the ultimate taking of life.
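
To make the unintended-consequence failure mode concrete, here’s a toy, entirely invented scoring example in Python: a cleaning robot rewarded per unit of mess it picks up, a proxy for the clean room we actually want. Under the literal reward, creating mess to re-collect it scores better than honest cleaning.

    # Toy illustration of a misspecified objective; the scenario and
    # numbers are invented, not taken from any real system.
    def literal_reward(events):
        """The objective as written: +1 per unit of mess picked up."""
        return sum(1 for e in events if e == "pickup")

    honest = ["pickup", "pickup", "idle", "idle", "idle", "idle"]
    gaming = ["pickup", "spill", "pickup", "spill", "pickup", "spill"]

    print(literal_reward(honest))  # 2: the room ends up clean
    print(literal_reward(gaming))  # 3: the room is never clean, yet it wins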

I think it stands to reason that, for the foreseeable future, humans, not robots, are the greater concern.

Armed

Accidents aside, what makes a killer robot a killer robot comes down to the fact that it was armed in the first place.

This seems like an inevitability we’re quickly racing toward. Military robots already exist, and many governments are researching and developing more of them. According to Statista, global spending on military robotics was $6.9 billion in 2015 and is expected to grow to $15 billion by 2025.
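
As a quick sanity check (my arithmetic, not Statista’s), that trajectory implies a compound annual growth rate of roughly 8 percent:

    # Implied compound annual growth rate, $6.9B (2015) to $15B (2025).
    cagr = (15 / 6.9) ** (1 / 10) - 1
    print(f"{cagr:.1%}")   # -> 8.1% per year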

If you agree that the real risk of autonomous killer robots lies in humans using them as weapons, not in the robots becoming self-aware, then it seems natural that our best defense is to stop arming them.

It turns out that roboticists, ethicists, and AI researchers are already calling for a ban on weaponized autonomous robots. A number of organizations have formed around or taken up this cause, including Human Rights Watch, the International Committee for Robot Arms Control, Article 36, and the Future of Life Institute, which is backed by Stephen Hawking, Elon Musk, and others.

So, back to our original question: Should we fear autonomous killer robots? To be honest, probably not. The chances that anyone reading this will perish at the hands of an autonomous killer robot are really, really, really small. But that doesn’t mean we shouldn’t be thinking about them and the many moral issues that they raise.

This story originally appeared in the This Week in Machine Learning & AI newsletter. Copyright 2018.

Sam Charrington is host of the podcast This Week in Machine Learning & AI (TWiML & AI) and founder of CloudPulse Strategies.