AI leaders Musk, Tegmark, and DeepMind call for autonomous weapons systems ban

Prominent artificial intelligence thought leaders, including SpaceX and Tesla CEO Elon Musk, Skype cofounder Jaan Tallinn, three cofounders of Google’s DeepMind subsidiary, and Future of Life Institute president Max Tegmark, protested autonomous weapons this week at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden.

They, along with 2,400 other executives, researchers, and academics from 160 companies in 90 countries, signed an open letter pledging not to “participate in nor support the development, manufacture, trade, or use” of autonomous weaponry, which they warned could be “dangerously destabilizing” for every country and individual.

“Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems,” they wrote. “[T]he decision to take a human life should never be delegated to a machine.”

The signatories also called on governments to preemptively ban autonomous weapons.

“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” Tegmark said in a statement. “AI has huge potential to help the world — if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way.”

This is just the latest effort to muster support for AI regulation.

In April, a group of researchers and engineers from the Centre on Impact of AI and Robotics published a letter calling for a boycott of the Korea Advanced Institute of Science and Technology (KAIST), which they accused of working with defense contractor Hanwha Systems on AI for military systems. In November 2017, over 300 Canadian and Australian scientists penned letters to Canadian Prime Minister Justin Trudeau and Australian Prime Minister Malcolm Turnbull urging bans on autonomous weaponry. And in 2015, Musk, Stephen Hawking, Steve Wozniak, and hundreds of tech leaders signed a Future of Life Institute open letter in support of autonomous weapons legislation.

So far, their pleas have fallen on deaf ears. In the past year, countries such as India, Chile, Israel, China, and Russia have pursued autonomous tanks, aircraft, reconnaissance robots, ship-based missile systems, and weaponized drones. And in the U.S., federal agencies including the Defense Department and the Department of Homeland Security are seeking to modernize programs with machine learning. (The Defense Department’s Law of War Manual explicitly endorses the use of autonomous systems in the armed forces.)

The private sector has taken matters into its own hands, to a degree. Google, under pressure from employees and the general public, released a set of guiding AI ethics principles in June and canceled its controversial Project Maven drone contract with the Pentagon. Microsoft, meanwhile, discontinued its work with Immigration and Customs Enforcement and created an internal advisory panel — the Aether Committee — to look critically at its use of artificial intelligence.

In a blog post in July, Microsoft president Brad Smith also called on lawmakers to investigate issues with facial recognition algorithms and craft policies guiding their use.

But there’s work to be done. A recent report on artificial intelligence and war commissioned by the Office of the Director of National Intelligence concluded that because of AI’s potential to “massively magnify” military power, countries will almost inevitably build autonomous weapons systems. That, opponents argue, is why it is critical to curtail these weapons through regulation before it’s too late.

“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale that is greater than ever, and at timescales faster than humans can comprehend,” Elon Musk, Mustafa Suleyman, and 116 machine learning experts from 26 countries wrote last year. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”