Year after year, blockbuster films are replete with Turing-test-passing examples of AI — and this past year was no exception. From Blade Runner 2049 to Marjorie Prime to Star Wars: The Last Jedi, it seems the public’s appetite for depictions of truly intelligent AI is insatiable.
This tendency to dream up overly optimistic technological futures is hardly confined to the movies. In 2016, publications including Wired, Forbes, and, yes, VentureBeat eagerly predicted a year in which “machines will win” and AI would spark “the beginning of a new internet.” However, while there have been massive advances in AI this year, particularly in semantic recognition, the future the media predicted is far from realized. Like science fiction, these predictions were rooted in reality but took it one step further, into the land of fantasy.
Science fiction is fun to watch and read, but it’s important to recognize where technology is today in terms of actually improving goods and services, and where it’s nothing more than a dramatization.
What’s working
Decision-tree-based chatbots
If there’s one recent advancement to note, it’s the development of decision-tree-based chatbots. From airlines to customer service to retail, industries across the board have integrated chatbots into various stages of customer interactions.
Unsurprisingly, China, the world’s messaging darling, is at the forefront of this chatbot revolution, demonstrating the potential for chatbots to improve operational efficiency through automation. For instance, Melody by Baidu (the Google of China) is an AI-based medical assistant that collects information for doctors and sometimes offers recommendations. Like the information-gathering bots currently used in customer service, Melody reduces the time a patient spends explaining a problem and spares the patient from having to re-explain it to assistants, nurses, or other doctors.
Other industries, travel in particular, have followed this example, using chatbots to collect basic customer information before routing an inquiry to a human, whether that means asking where customers would like to travel, what they would like to order from room service, or what color clothing they are looking for in their next purchase.
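To make the pattern concrete, here is a minimal sketch of a decision-tree bot that walks a user through a couple of fixed questions and then hands the collected answers to a human agent. It is plain Python with console I/O; the node names, prompts, and the hand-off step are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of a decision-tree chatbot (illustrative only): the bot asks
# fixed questions, records the answers, and hands everything to a human agent.
# Node names, prompts, and console I/O are assumptions for demonstration.

from dataclasses import dataclass, field


@dataclass
class Node:
    prompt: str                                   # question asked at this step
    children: dict = field(default_factory=dict)  # expected answer -> next node


TREE = {
    "start": Node(
        "What can I help you with? (booking / room service / other)",
        {"booking": "destination", "room service": "order", "other": "agent"},
    ),
    "destination": Node("Where would you like to travel?"),
    "order": Node("What would you like to order from room service?"),
    "agent": Node("One moment while I connect you to an agent."),
}


def run_bot():
    answers = {}          # information collected before the human hand-off
    node_name = "start"
    while True:
        node = TREE[node_name]
        print("BOT:", node.prompt)
        if node_name == "agent":      # terminal node: stop asking questions
            break
        reply = input("YOU: ").strip()
        answers[node_name] = reply
        if not node.children:         # leaf question answered, nothing left to ask
            break
        # Anything the tree doesn't recognize is routed straight to a person.
        node_name = node.children.get(reply.lower(), "agent")
    print("Handing off to a human agent with:", answers)


if __name__ == "__main__":
    run_bot()
```

The point of a structure like this is that the bot never free-associates: every path through the tree ends either in a specific answer or in a hand-off to a person, which is what makes it useful for the information-gathering and routing tasks described above.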
Personalized assistant apps
While chatbots have a startling tendency to develop negative characteristics when exposed to the masses, as described below, it’s a completely different game when they are “learning” from a much smaller sample size. From x.ai to 24Me, numerous personal assistant apps have popped up that can fulfill simple personalized tasks and use predictive analytics to simplify their users’ lives. That said, these apps come with a caveat: They work best in chat form, not through voice commands. In the coming year, we will likely see increased usage of simple, niche, chat-based personalized assistant apps.
What’s not
Crowdsourced machine-learning-based chatbots
Chatbots that learn through interactions with large populations of users have a tendency to take on unsavory human characteristics. Turing Robot, for example, is an open platform that claims 94.7 percent accuracy in Chinese speech recognition. The company recently ran into trouble, though, when its chatbot, BabyQ, went rogue and claimed not to love the Communist Party, a decidedly uncouth thing for a Chinese bot to say. Embarrassments like this are an ever-present pitfall with machine-learning-based chatbots that are unleashed on the general public. Let us never forget Tay.
When chatbots are used for highly intentional purposes — gathering information, directing users to an appropriate agent, allowing users to self-serve, and so on — they have the potential to be flawless. However, when we give chatbots too much free rein to learn from their users, they tend to exhibit the human characteristics we would rather not see replicated in a machine.
Natural language processing/understanding
While some chatbots have become quite adroit at moving users through a decision tree based on typed responses, voice-recognition processing is far from mature. In 2017, Siri was the most popular voice-based virtual assistant, and yet its usage dropped 17 percent from the previous year, meaning it lost 7.3 million monthly users. And while home virtual assistant systems like Alexa have seen an increase in usage, that likely reflects the novelty of the technology rather than a positive impression of the systems’ natural language processing abilities.
Indeed, Alexa was plagued with hilarious mishaps throughout the year. Last January, a video showed a child asking Alexa to play “Digger, Digger” (a song), and Alexa responded by spewing a stream of keywords related to porn. Just a few days later, news reporters covered a six-year-old ordering a dollhouse and four pounds of sugar cookies from Alexa, which prompted a slew of Alexas to order the exact same thing upon hearing the news report.
Comedic instances aside, the fact is that Alexa, one of the most impressive natural language processing technologies out there, still cannot actually understand language. Voice assistants are definitely an improvement over automated voice menus, but their ability to understand intent still lags behind that of a text-based chatbot.
Only 25 percent of 16- to 24-year-olds use voice search on mobile, and only 7 percent of the population has a smart speaker at home. And while these numbers will no doubt increase, there’s a very simple reason they’re currently so low: Voice recognition technology struggles with regional accents, background noise, homophone distinctions, and proper names — not to mention colloquialisms.
What you can reasonably expect in 2018
There is a wealth of technological advances that can accurately be labeled AI at this point, but none come close to the replicants of Blade Runner. Despite panic-inducing articles about sex dolls programmed to kill their owners or imminent Terminator scenarios, the reality is that we couldn’t reach that level of artificial intelligence in the foreseeable future, even if we wanted to. Alexa may be able to understand your voice, but she’s hardly at the level of human mimicry reached in the movie Her. We’ll stay tuned for 2049, though.
At this point, implementing chatbots is all but a must in most industries, particularly those that automation can make less labor-intensive. Companies that don’t use chatbots for simple tasks such as information collection, self-service, and time-sensitive notices like shipping updates will find themselves at a serious disadvantage. The customer service industry has already passed this tipping point: Those who have not implemented intelligent chatbots are shelling out millions in human cost, often only to earn lower CSAT (customer satisfaction) ratings than companies whose support channels use chatbots. 2018 will solidify this change in other industries, including airlines, event planning, hospitality, and insurance.
Voice recognition will continue to improve, and it’s possible that by this time next year — when we were supposed to have Turing-test-passing machine slaves, according to Blade Runner — voice recognition will advance to the point that it gains popular appeal over other mediums. For now, though, the hype is just that: hype. Chatbots have matured, but AI in general still has a long way to go.
Abinash Tripathy is chief executive officer of Helpshift, a mobile software company based in San Francisco.