Sorry, Elon Musk. AI is not a bigger threat than North Korea

Tesla chief executive Elon Musk enters the lobby of Trump Tower in Manhattan, New York, U.S., January 6, 2017.
Image Credit: Reuters / Shannon Stapleton

Regulations, sanctions, rules — they are not always “pure evil,” as some might suggest. The regulations that keep a commuter rail safe or the sanctions the U.S. government uses to manage relations with foreign countries are necessary, not evil.

Yet, when it comes to AI, do we really need to worry?

Elon Musk has gone on the offensive, attempting to convince us that AI needs more regulation because it could spin out of control. He tweeted that AI and machine learning pose “vastly more risk” than North Korea. He followed up that tweet by saying that “everything that’s a danger to the public” is regulated, including cars and planes.

The problem with this line of thinking, of course, is that an AI is a piece of software. A fully loaded airliner weighs over 350,000 pounds and can fall out of the sky. Where are we on the continuum of machines taking over? At an infant stage — not even crawling or walking. We might want to avoid hysterics.

Still, some of the reactions have been quite interesting.

One user said it was “inappropriate” to compare a nuclear threat to AI. One said the real danger is humans creating AI that doesn’t work. Another pointed out the obvious — if there is a nuclear war, it might not matter if the machines take over. We’ll all be dead.

The problem with the “end of the world thanks to AI” discussion is that we never get into specifics. It’s a random tweet comparing machine intelligence to nuclear war. It’s another random tweet talking about regulation. But what kind of AI should be regulated? By whom — and where? What are the actual dangers? The trouble with fear-mongering about AI is that there are no obvious examples of a machine actually causing mass destruction…yet. We hear about failed automations, cars that drive themselves off the road, and chatbot apps that crash.

Musk has noted before that we should regulate now, before things get out of hand. Again, he hasn’t explained what should be regulated — Microsoft Word? Chatbots? The subroutine in a home sensor that shuts off your sprinkler system? Satellites? Autonomous trucks? Let’s get the subject out in the open, get into the specifics of regulation, and see where that takes us, because my guess is that the companies making chatbots don’t need to be regulated so much as they need to be told to make better, more useful bots with the funding they already have.

Or is this all about the laws of robotics? If that’s the case, we get into a brand new problem — what is a robot? I’m sure Isaac Asimov never predicted that there would be a catbot that tells us the weather forecast (if he did, I apologize to all science fiction fans everywhere). Let’s regulate the catbots before they get out of hand, right? Next up — the dogbots.

The issue is pretty clear: When you start talking about specific regulations and specific dangers, the conversation becomes a bit laughable. What are we really asking Congress to do, anyway? And when you start talking about machines taking over because they want to destroy humanity…well, it’s too late. You’re toast and the bots won. We need to get granular, not broad.

Do you agree? Disagree? If you have a reasonable argument to make about the dangers (or maybe the catbots), please send it to me. I promise to respond if you’re interested in civil discourse.