Artificial intelligence is easily the buzzword of the year, and perhaps the decade. What started last year as simple text chatbots has quickly evolved into machine learning algorithms that know when you leave the house and turn down the temperature, trucks that adjust themselves for highway driving to get better gas mileage, and voicebots that can tell you jokes and fun facts.
Curiously, AI is treated as both an imminent danger and a non-threat, depending on whether you are driving a Tesla.
The truth is always a little layered.
Recently, Mark Zuckerberg noted that the existential threats of AI are overblown, an obvious reference to Elon Musk’s prognostications about the need for regulation. This week, Musk blasted back at the famous social media mogul and suggested that he doesn’t know much about the field. As with any blanket statement made on Facebook Live or in a tweet, the motivations behind these remarks, the context, the implications, and even how to interpret what was said and what was meant are all highly nuanced.
Both are very right — and very wrong.
First, we know that AI can cause serious problems. The car I drove last week used adaptive cruise control to adjust its speed, and an algorithm (technically not AI, but more of a driving automation) didn’t quite realize that a dark shadow on the road wasn’t a car. My wife gasped as the car started to slow down dramatically; something seemed seriously wrong. It was tense, and then it wasn’t.
Here’s the problem: When AI gets out of control, it can be extremely dangerous. Your car can plow into a guardrail on its own.
And yet … these algorithms are incredibly difficult to program. I’m using a car analogy because it’s the easiest one to understand, and I test a lot of car automations. It’s a complex math problem: the yaw of the vehicle, the sun hitting the sensors, the speed of surrounding traffic. In a future autonomous car, there might be a thousand calculations behind one simple lane change on the highway. A human is behind that programming; humans are flawed. Therefore, AI is flawed.
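To make that concrete, here is a deliberately oversimplified sketch, in Python, of the kind of human-written rule that could confuse a shadow with an obstacle. None of this reflects any real vehicle’s code; the function names, sensor values, and thresholds are all hypothetical, invented only to show how a hand-picked cutoff meets an input the programmer never anticipated.

```python
# Hypothetical, heavily simplified illustration -- not any vehicle's real logic.
# A naive rule that treats any sufficiently dark, car-width patch ahead as an
# obstacle will brake for a hard shadow on the asphalt as readily as for a car.

def looks_like_obstacle(patch_darkness: float, patch_width_m: float) -> bool:
    """Naive rule: a dark, car-width patch ahead is assumed to be a vehicle.

    Both thresholds are made up for this example; a real system fuses radar,
    camera, and map data instead of leaning on a single brightness cue.
    """
    DARKNESS_THRESHOLD = 0.6   # hypothetical cutoff on a 0.0-1.0 scale
    MIN_WIDTH_M = 1.5          # roughly car width, also an invented constant
    return patch_darkness > DARKNESS_THRESHOLD and patch_width_m > MIN_WIDTH_M


def cruise_control_speed(current_speed_kph: float, obstacle_ahead: bool) -> float:
    """Slow down sharply if the (possibly wrong) obstacle flag is set."""
    return current_speed_kph * 0.5 if obstacle_ahead else current_speed_kph


# A hard shadow across the lane: dark (0.8) and wide (2.0 m), so the naive
# rule brakes even though nothing is actually there.
shadow = looks_like_obstacle(patch_darkness=0.8, patch_width_m=2.0)
print(cruise_control_speed(100.0, shadow))  # 50.0 -- the car slows for a shadow
```

The flaw here isn’t malice or a mind of its own; it’s a human-chosen threshold colliding with a situation the human never thought to handle.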
The problem, as always with doomsday scenarios, is that AI tends to be difficult to unleash. The machine does not have “a mind of its own”; it has the mind of its programmer. Alexa doesn’t even understand the context of a conversation. Is the voicebot really going to make your house explode? And if Alexa can suddenly raise the temperature in your house to a sweltering level on a hot summer day, can’t we just reach for the thermostat and shut it off? The answer to the first question is no, and to the second, yes.
Where Musk is wrong: A bot army is not going to take over. There isn’t going to be a bot apocalypse tomorrow, or even next week. There’s no cause for alarm. Yet.
Where Zuckerberg is wrong: AI can be dangerous. My favorite example is a bot that recommends unhealthy food over a few decades, causing us all to die too young. Or maybe it is a little more subtle: A bot on your satellite television suggests reality shows that dumb us down over the next 40 years. Some would argue that has already occurred, if you’ve ever seen The Bachelorette. The point is not that AI will suddenly emerge as a fully formed humanoid with a steely expression, reaching for the kill switch on humanity. The point is that an over-reliance on AI could mean we cede control in minor ways to flawed algorithms in our cars, our thermostats, and our Bluetooth speakers, and that’s a legitimate issue.
At the same time, ceding some control to flawed algorithms is not going to lead to mass hysteria. When my test car slowed down on the highway, I pressed the gas pedal. It’s happened before, and as long as humans are writing code, it will happen again.