Elon Musk caused a media stir recently. Not for his innovative technologies or promises to commercialize space travel. In front of a meeting of the National Governors Association, the Tesla CEO warned attendees that “AI [artificial intelligence] is a fundamental existential risk for human civilization.” Based on his observations, Musk cautioned that AI is “the scariest problem.”
It’s not the first time he’s sounded this alarm. He made headlines with it a few years ago. In fact, Musk is so concerned, he suggested something almost unthinkable for most tech leaders: government regulation.
What AI needs, in fact, is a human touch.
AI is most certainly already a fixture in our lives, apparent in everything from suggestions of news articles we might like to Siri on our phones to credit card fraud detection and autonomous-driving capabilities in cars. But are we having the right conversations about its impact? There is discussion of the job losses that might result from technologies like self-driving cars, and of the blue-collar work that might disappear as other processes become increasingly automated. But do we really need to look far into the future to see AI’s impact and its potential for harm? And will the impacts be limited to entry-level jobs in transportation or manufacturing?
The reality is much more complicated, widespread, and immediate than our current public dialogue, or Musk’s diatribe, suggests.
There is both immediate opportunity and immediate risk here: early variations of AI, built from historical data, are destined to repeat the problems that already exist. But what happens when you need to move beyond a historical mold?
When managed by and for people, AI creates new opportunities for ingenuity.
For example, many mid-sized and large companies already use AI in the hiring process to source candidates via technologies that search databases like LinkedIn. These sourcing methods typically use algorithms based on current staff and will, therefore, only identify people who look a lot like the current employees. Instead of moving an organization forward by finding people who complement its current capabilities, this approach builds a culture of sameness and homogeneity that does not anticipate future needs.
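To see why this happens, consider a minimal sketch of similarity-based sourcing. This is not any vendor’s actual algorithm; the feature vectors, candidate names, and scoring choice (cosine similarity to the average current employee) are all hypothetical, chosen only to illustrate the dynamic described above.

```python
# A minimal, hypothetical sketch of similarity-based candidate sourcing.
# It shows how ranking candidates against existing staff rewards sameness.
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical numeric profiles of current employees
# (e.g., years of experience, count of "standard" keywords, degree tier).
current_staff = [
    [5.0, 8.0, 2.0],
    [6.0, 7.0, 2.0],
    [4.0, 9.0, 2.0],
]

# The model's notion of a "good fit" is simply the average current employee.
centroid = [sum(col) / len(current_staff) for col in zip(*current_staff)]

candidates = {
    "looks_like_staff": [5.0, 8.0, 2.0],  # mirrors the incumbent profile
    "nontraditional":   [2.0, 3.0, 9.0],  # strong but different background
}

# Rank candidates by similarity to the centroid: the candidate who
# duplicates existing skills outranks the one who would add new ones.
for name, profile in sorted(
    candidates.items(),
    key=lambda item: cosine_similarity(centroid, item[1]),
    reverse=True,
):
    print(f"{name}: {cosine_similarity(centroid, profile):.3f}")
```

Run as written, the candidate who mirrors the incumbent profile scores a perfect 1.000 while the nontraditional candidate scores roughly 0.556, even though nothing in the model measures whether the company actually needs more of the skills it already has.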
As these AI sourcing methods become increasingly pervasive, HR and talent acquisition professionals are beginning to think about the implications for the industry and for their own jobs. Will we still need recruiters now that we have AI to cover many hiring responsibilities?
The answer is a resounding yes.
Where AI algorithms encourage sameness and disqualify huge swaths of potentially qualified candidates simply because they don’t look like current employees, humans can identify the gaps and use that insight to promote more innovative hiring. Companies are looking for fresh approaches, creative solutions, and new talent. To evolve, they need to anticipate future directions and adapt to meet those challenges. They need a diverse range of problem-solvers, people with new and varied skills. AI cannot deliver those candidates. People can.
While AI can be incredibly useful, if it is used without human input, it has the potential to inflict real harm. We need humans to think creatively and abstractly about the problems we face, to devise new and innovative strategies, to test out different approaches, and to look to the future for upcoming challenges and opportunities. We need to be sure we aren’t using algorithms to replicate a past that does not meet the needs of the future.
Laura Mather is the founder and CEO of Talent Sonar.