If you grew up in the U.S., you’ve probably seen at least one episode of The Jetsons, a cartoon from the 1960s depicting a 21st-century futuristic society with push-button meals, floating cities, and a robot named Rosie.
In the episode titled “Rip-Off Rosie,” George Jetson fixes the fried memory chips of a robot called Robotto and earns himself a raise and a day off. He takes the faulty part home to show his family, and his robot maid Rosie accidentally eats it, mistaking it for candy. The faulty part makes Rosie go crazy. Her demeanor gets menacing, her eyes pop out, and she uncontrollably destroys everything in the house.
This scene may have been crafted by the creators of a children’s cartoon, but today, destructive robots are not a made-up scenario. Not only are robots taking over our jobs, but they may soon surpass our physical strength as a species.
Robots sometimes take human lives
Everyone loves a good bot battle in a virtual environment, but put a robot up against a human and it’s an unfair fight.
As with Rosie the robot, all it takes is a glitch or an oversight for a robot to become deadly. Although robots are programmed using the best in AI technology, it’s impossible to program empathy into them. Like Data from Star Trek, a robot can learn, but it can’t feel.
Perhaps if robots were isolated, the danger would be smaller. But these robots often work alongside humans in factories, where they have caused numerous injuries and deaths.
In 1981, a motorcycle factory worker named Kenji Urada was killed by an industrial robot working nearby. For reasons that remain unclear, the robot identified him as a threat and pushed him into an adjacent machine with its hydraulic arm, killing him instantly. It then returned to its job duties.
In 2015, a 22-year-old man working at a Volkswagen plant in Germany was killed by the robot he was assembling. He was setting up a robot that grabs and assembles various automobile parts when it seized him and slammed him against a metal plate. He died of his injuries.
Also in 2015, Ramji Lal was killed at a factory in Manesar, Haryana, India, when he approached a robot from behind. As he adjusted a piece of sheet metal the robot was carrying, he was pierced by welding sticks attached to its arm. Coworkers say his mistake was approaching from behind rather than from the front, but the fact that the accident happened at all is cause for concern.
Who is responsible when robots kill?
When a robot kills, who can be held accountable? Is it considered murder? Is it reckless homicide? According to criminal law expert Rowdy Williams, murder is defined as “knowingly or intentionally killing another human being or unborn child” and reckless homicide is “recklessly causing the death of another.”
If the consequences of murder include life in prison, fines, and even the death penalty, how can they be applied to a robot? If a human is found responsible for the robot’s actions, is it fair to apply those consequences to someone who didn’t actually commit murder?
What happens if someone decides to use AI technology to program robots to kill? What happens when a driverless car malfunctions and mows down innocent people on the sidewalk?
In his book When Robots Kill, law professor Gabriel Hallevy discusses the criminal liability of using AI entities in commercial, industrial, military, medical, and personal spheres. He explores many of the concerns mentioned above.
Hallevy sets out his purpose in the book’s preface: “The objective of this book is to develop a comprehensive, general, and legally sophisticated theory of the criminal liability for artificial intelligence and robotics. In addition to the AI entity itself, the theory covers the manufacturer, the programmer, the user, and all other entities involved. Identifying and selecting analogies from existing principles of criminal law, the theory proposes specific ways of thinking through criminal liability for a diverse array of autonomous technologies in a diverse set of reasonable circumstances.”
The most important question Hallevy explores is whether criminal liability and criminal punishment are applicable to machines. His book focuses solely on the criminal liability of AI entities and does not delve into ethics.
Perhaps Hallevy’s work will lay the foundation for a further conversation about the ethics of AI entities, built on the framework he has provided. It’s a complex matter with no clear answer yet, but perhaps we’ll find one before the next deadly incident.
Larry Alton is a contributing writer at VentureBeat covering artificial intelligence.