“Rogue Bots! Panic! Robots will take over the world! Bots talking in a secret language!” These were just a few of the panicked reactions that spread across the internet when two Facebook chatbots began speaking to each other seemingly in their own language earlier this year.
Despite the now common narrative that bots such as these are bringing us closer and closer to a Terminator-esque doom, as a chief architect working on AI, machine learning and bot technologies, I truly believe there isn’t (currently) a reason to worry about a looming bot-pocalypse. Let’s delve deeper.
What is AI, really?
Most business technologies have been created to optimize existing processes in order to bring us greater efficiency, and chatbots are no exception. As the technology currently stands, however, we are only able to create applications that either mimic what we are already able to do or apply automation to those tasks. In my opinion, this means we still do not have true AI — not even close.
To explain further, you can think of our current understanding of AI as pattern matching, and of the human brain as the most complex pattern matching processor of all time. Data and machine learning are used to create self-learning systems that can automate simple tasks, but they cannot think for themselves.
For example, earlier this year, Google revealed its AutoML project, in which machine learning is used to help code additional programs and scripts. These scripts started creating more efficient and powerful software than the best human-designed systems. However, even with these automated systems, the bots only operate and optimize what they know — and what they know is what we’ve programmed them to know.
As of now, these software applications cannot go beyond copying the tasks we have taught them, and therefore, they cannot make their own decisions or think for themselves. They aren’t actually intelligent. It is my belief that we will only be able to create true AI — AI embedded with independent decision-making capabilities — once we truly understand the human brain, and that only then could AI become a potential threat.
What tech leaders need to know about bots
Using this lens to consider what Facebook’s chatbots were actually programmed to do, their “rogue” behavior is not as ominous as the headlines would make it sound. In fact, when the bots seemingly began to communicate in a “foreign” language, their behavior could be explained by their underlying design. According to a report by Facebook, no incentive to use English was ever programmed into their script, so the bots reverted to communicating in the most efficient way they could — which happened to be a code similar to zeros and ones. And, since efficiency is why they were created in the first place, can you blame them?
These bots are not truly thinking for themselves or making their own decisions; even when they did begin to display unusual behavior, Facebook could shut them down. We will be able to do the same with “rogue” bots created in the foreseeable future.
In reality, chatbots are performing exactly how we should expect them to, as their actions are modeled to be an extension of the same tasks that humans already perform manually. At their most basic level, chatbots simply serve as an interface — in this case, a conversational interface — that pulls information from multiple sources when given a request and initiates actions like logging time worked or playing a favorite song.
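That "conversational interface" idea can be made concrete with a minimal sketch: a request is matched against simple triggers, routed to a data source, and an action is fired. All of the function names and data here are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of a chatbot as a conversational interface:
# match a request against triggers, pull from a (stand-in) data
# source, and initiate an action. Names are hypothetical.

def fetch_project_status(name):
    # Stand-in for a call to a real backend data source.
    return f"Project {name} is 80% complete."

def play_song(title):
    # Stand-in for handing off to a media player.
    return f"Playing '{title}'."

ROUTES = [
    ("status of", lambda text: fetch_project_status(text.split("status of ")[1])),
    ("play",      lambda text: play_song(text.split("play ")[1])),
]

def handle(request):
    text = request.lower()
    for trigger, action in ROUTES:
        if trigger in text:
            return action(text)
    return "Sorry, I don't understand that yet."
```

The point of the sketch is that nothing here "thinks": every behavior is a rule a human wrote down in advance, which is exactly why such systems can only do what they were programmed to do.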
However, although there is nothing to fret about right now, there are some safeguards we can put in place to protect us from what may come as we further our technologies.
Bots in business
Consider enterprise technology, where applications are created with very specific use cases in mind. Bots in this setting are rule-based and not designed to develop a data-driven personality. They are packed with some powerful technologies, like machine learning and predictive analytics, which enable enterprise chatbots to “learn” new tasks. An example would be my digital assistant, Wanda, which can provide insight into the current state of a project, predictions around its completion time, and more. She can also learn from experience that words such as “procure,” “purchase,” “buy,” “acquire,” and “obtain” all mean roughly the same thing and require the same action.
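The synonym "learning" described above can be sketched as a simple intent table that grows over time. This is a toy illustration of the general idea, not Wanda's actual implementation; the word list and function names are assumptions.

```python
# Toy sketch: map synonymous verbs onto one canonical intent, and
# let the table grow as new synonyms are confirmed. Illustrative only.

SYNONYMS = {
    "procure": "purchase",
    "buy": "purchase",
    "acquire": "purchase",
    "obtain": "purchase",
    "purchase": "purchase",
}

def normalize_intent(verb):
    # Unknown verbs fall through rather than being guessed at.
    return SYNONYMS.get(verb.lower(), "unknown")

def teach(new_word, canonical):
    # "Learning from experience": once a user confirms a new synonym,
    # it is recognized from then on.
    SYNONYMS[new_word.lower()] = canonical
```

Note that even this "learning" is just an update to a lookup table the designers chose to allow — the bot is extending a rule, not forming its own concept.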
The main thing business leaders must keep in mind while working with chatbots is that giving the system incorrect information will result in incorrect outcomes. We cannot trust bots to catch our mistakes the way humans would. Therefore, we should implement checks and balances to ensure that the work the bot does is not the end-all, be-all. Any action that enterprise software takes should conform to specific rules, so that anything that could have an impact on the company runs through a well-defined approval process. Additionally, businesses need to be able to trace every decision the bot makes and have visibility into its logging systems. Most enterprise systems come standard with safeguards to track this type of behavior, but it’s important that CIOs get a thorough explanation from software vendors about how the algorithms work and how they reach their decisions.
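The two safeguards above — an approval gate for impactful actions and a traceable audit log — can be sketched in a few lines. The threshold, field names, and role names are assumptions for illustration, not a description of any particular enterprise product.

```python
# Hedged sketch of "checks and balances" for a bot's actions:
# actions above a risk threshold are blocked until a human signs
# off, and every decision is appended to an audit log for tracing.

import time

AUDIT_LOG = []
APPROVAL_THRESHOLD = 1000  # e.g. spend above this needs human sign-off

def execute(action, amount, approved_by=None):
    needs_approval = amount > APPROVAL_THRESHOLD
    status = "blocked" if needs_approval and approved_by is None else "executed"
    # Every decision, executed or blocked, is recorded so it can be
    # traced later — the visibility the article argues CIOs should demand.
    AUDIT_LOG.append({
        "time": time.time(),
        "action": action,
        "amount": amount,
        "approved_by": approved_by,
        "status": status,
    })
    return status
```

The design choice worth noting is that the log records blocked attempts too; an audit trail that only shows successes cannot answer "what did the bot try to do?"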
In closing, AI is still in its infancy. Chatbot technology will continue to grow and improve in the coming years, but there is no need to panic just yet. However, we should begin putting the processes in place to be ready for the time when bots truly are able to think on their own.
Claus Jepsen is chief architect at Unit4, a provider of enterprise solutions empowering people in service organizations.