For most of us, building a chatbot serves a real business purpose and isn’t just an amusing project. Since a chatbot is meant to serve or interact with customers, it should be given as much information and intelligence as possible to interface with them properly. The sooner the chatbot learns this data, the better, and it’s best when that happens before the chatbot even goes live.
There are a few reasons why this makes absolute sense:
- You already have data you can use to avoid a “cold start” that would add friction to the user experience and cause big drop-offs in usage when the chatbot fails to reply effectively.
- You are investing a lot of time and effort into building a chatbot, and thus expect it to serve a strategic purpose in the grand scheme of things.
- Nobody likes a chatbot that keeps saying “Sorry, I didn’t understand your question. Can you rephrase?”
Raiding your support archives
A customer-facing chatbot naturally lends itself to using information from customer support tickets. Support tickets are a treasure trove illuminating the types of concerns and issues customers already have, and they can inform your decisions about which features to build into the chatbot and help you anticipate the kinds of questions it should be able to answer. Since these are actual, concrete data points, you don’t run the risk of imagining problems and solutions unrelated to your customers. In our experience, this is the best approach.
Frequently asked questions are another great resource for provisioning your chatbot with a first cut of replies. Since these are distilled from repetitive queries (hopefully real ones, not just imagined), they can serve as the baseline from which your chatbot draws responses. However, not all FAQs are updated regularly or relevant to customers, so be sure to vet these questions rather than haphazardly popping them into your chatbot.
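As a rough illustration, here is a minimal sketch of how vetted FAQ entries could serve as that baseline, matching an incoming query to the closest FAQ question with TF-IDF similarity. It assumes scikit-learn is available, and the FAQ entries, threshold, and function name are hypothetical:

```python
# A minimal sketch: match an incoming query against vetted FAQ questions
# using TF-IDF cosine similarity. The FAQ entries and threshold are
# illustrative assumptions, not values from the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("How do I reset my password?", "Go to Settings > Security and choose Reset password."),
    ("What is your refund policy?", "We offer full refunds within 30 days of purchase."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer()
faq_vectors = vectorizer.fit_transform(questions)

def answer(query, threshold=0.3):
    """Return the closest FAQ answer, or None to trigger a fallback."""
    scores = cosine_similarity(vectorizer.transform([query]), faq_vectors)[0]
    best = scores.argmax()
    return faq[best][1] if scores[best] >= threshold else None

print(answer("how can I reset my password"))  # matches the first FAQ entry
```

Returning None below the threshold matters: it is better to hand off to a fallback (or a human) than to serve a low-confidence FAQ answer.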
Training the chatbot
While it is ideal to use past tickets to train your chatbot, it can be challenging if you or your chatbot provider lack the expertise to analyze and extract the important information at scale. If you have a big archive that you believe will be important to aggregate, it will take some knowledge of text mining and natural language processing to classify, topic-model, and summarize this data into usable nuggets for your chatbot development process.
You can of course take the time to manually go through the tickets and categorize them yourself, or get someone on your team to do so. Either way, be sure to select a representative sample of tickets across these parameters (one way to stratify the sample is sketched after this list):
- Products: to represent all product types that you carry
- Time frame: to account for events or seasonal fluctuations
- Account type: to capture the variety of customer accounts
- Priority: to understand differences between urgent and normal issues
- Resolution time: to sample how easily/quickly different issues are resolved (the easier or faster, the more likely your chatbot can handle them automatically)
- Tags or categories: to make use of human-classified data for training
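Here is a minimal sketch of such a stratified sample using pandas; the file name and column names (product, priority) are hypothetical stand-ins for whatever your ticketing system exports:

```python
# A sketch of stratified sampling over a ticket export with pandas.
# The file and column names (product, priority) are hypothetical; adapt
# them to whatever your ticketing system exports.
import pandas as pd

tickets = pd.read_csv("support_tickets.csv")

# Take up to 50 tickets per (product, priority) cell so that every
# combination is represented, not just the high-volume ones.
sample = (
    tickets.groupby(["product", "priority"], group_keys=False)
    .apply(lambda g: g.sample(n=min(len(g), 50), random_state=42))
)
print(sample.groupby(["product", "priority"]).size())
```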
There will be many other factors you can use to make your sample of support tickets representative, depending on your business and product structure.
Now, the hard part to do without machine learning and NLP expertise is classifying and counting the results of your analysis by hand for a large number of tickets. Just classifying about 1,500 to 2,000 tickets, at minimum, would be overwhelming for the average person and could take anywhere from three days to two weeks, depending on the complexity of the issues and the time your team can spare.
Assuming you do have some expertise on hand, you can consider semi-supervised training: start off with labeled instances for classification, run some topic modeling (for example, through a hybrid vector-based LDA), and have some fun experimenting with different models and hyperparameters that do well on your dataset.
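As a concrete starting point, here is a minimal sketch of both steps with scikit-learn: self-training as the semi-supervised classifier, and plain LDA for the topic modeling (the hybrid vector-based variant would be substituted here). The ticket texts and labels are illustrative:

```python
# A sketch of semi-supervised classification (self-training) plus plain
# LDA topic modeling with scikit-learn. Texts and labels are illustrative.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

texts = [
    "cannot log in after password reset",
    "refund has not arrived for order",
    "login page keeps timing out",
    "charged twice for my subscription",
    "password reset email never came",
    "how do I cancel my subscription",
]
labels = np.array([0, 1, 0, 1, -1, -1])  # -1 marks unlabeled tickets

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(texts)

# Semi-supervised classification: pseudo-label the unlabeled tickets.
clf = SelfTrainingClassifier(LogisticRegression()).fit(X, labels)

# Topic modeling: print top words per topic so a human can name them.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [vocab[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```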
At the end of your training phase, you should have a relatively stable and production-ready model that can classify incoming queries that your chatbot receives from users. To maximize the usefulness of the effort you put into training this model, you can package it into an API and let your chatbot call this API when it needs to classify new queries from customers.
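For illustration, here is a minimal sketch of such an API using Flask, assuming you have pickled a pipeline that bundles the vectorizer and classifier together; the endpoint, file name, and payload shape are hypothetical:

```python
# A minimal sketch of serving the trained classifier over HTTP with Flask.
# The endpoint, payload shape, and pickled pipeline file are hypothetical.
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

with open("ticket_classifier.pkl", "rb") as f:
    model = pickle.load(f)  # e.g., a scikit-learn Pipeline over raw text

@app.route("/classify", methods=["POST"])
def classify():
    query = request.get_json()["query"]
    label = model.predict([query])[0]
    return jsonify({"label": str(label)})

if __name__ == "__main__":
    app.run(port=8000)
```

The chatbot would then POST a JSON body like {"query": "where is my refund"} to /classify whenever it needs a label for a new customer message.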
Testing the results of training
It’s really important to test and retest the results of your trained model with real users. Sometimes (actually, oftentimes) your models will overfit the data you trained them on and turn out to be useless in the face of new data. Hence, beyond the split test/dev sets you use to evaluate the model during training, you should give the first iteration of the chatbot equipped with this model to users.
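For the in-training check, a minimal held-out evaluation with scikit-learn might look like the sketch below; the texts and labels are toy stand-ins for your ticket data:

```python
# A sketch of the in-training check: hold out a stratified test split and
# inspect per-class metrics. The texts and labels are toy stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["cannot log in", "refund not received", "password reset fails",
         "where is my refund", "login keeps failing", "refund for my order"]
labels = ["login", "billing", "login", "billing", "login", "billing"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=0, stratify=labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```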
You can usually fit this into the trial phase or pilot/beta launch of your chatbot, when you’re testing the experience with a group of customers or users. Since you probably won’t have that many users at this stage, take the time to look through each response and figure out whether there are systematic patterns in the examples the model ignored or misclassified.
Also, take advantage of the chatbot’s ability to ask users for feedback, such as a “Did I answer your question? [Yes/No]” prompt at the end of every response, as a way to collect structured data for continuing to train the supervised classification model you have set up.
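One lightweight way to capture those clicks is to append them to a labeled log that feeds the next retraining run; this sketch uses a hypothetical CSV schema, file name, and function:

```python
# A sketch of logging [Yes/No] feedback as labeled rows for retraining.
# The CSV schema, file name, and function are hypothetical.
import csv
from datetime import datetime, timezone

def log_feedback(query, predicted_label, helpful, path="feedback.csv"):
    """Append one feedback event; a Yes click confirms the predicted label."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            query,
            predicted_label,
            int(helpful),  # 1 = Yes, 0 = No (No rows go to manual review)
        ])

log_feedback("how do I reset my password", "login", helpful=True)
```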
Launch and optimize
After the trials, it’s time to launch your chatbot and see how its response abilities hold up in the field. If all goes well, you may be able to answer 50 to 80 percent of questions properly at launch. During this launch period, continue to observe and note what kinds of questions people are asking the chatbot, and factor that into your next version for optimization.
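A simple way to track that figure is to measure the share of queries that did not fall back to the “I didn’t understand” response; here is a minimal sketch over a hypothetical event log:

```python
# A sketch of tracking the share of queries answered without a fallback
# response, over a hypothetical log of (query, was_fallback) events.
def answered_rate(events):
    events = list(events)
    fallbacks = sum(1 for _, was_fallback in events if was_fallback)
    return 1 - fallbacks / len(events)

log = [
    ("reset my password", False),
    ("refund status", False),
    ("do you ship to the moon", True),  # fell back to "I didn't understand"
]
print(f"answered: {answered_rate(log):.0%}")  # answered: 67%
```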
Realistically, you’ll have a lot more tuning to do before it works at the level you expect. Be patient: plan to update your model with new information every two to four weeks, if you can afford to. The more data you have amassed, the better; there is almost no exception to this rule (except perhaps when the data is of low quality or improperly handled).
Want to go deeper?
So you have heard about the wonders of advanced deep learning techniques and feel you can beat the performance of your vanilla machine learning model with them. Deep learning certainly could help, especially if you want your chatbot to interpret the context of sentences or conversational turns. In the context of chatbots, it is probably most useful in the form of long short-term memory (LSTM) architectures, and it can improve the experience over time.
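To make the contrast concrete, here is a minimal sketch of an LSTM intent classifier in Keras; the vocabulary size, number of intents, and random dummy data are all placeholders for a real tokenized, padded dataset:

```python
# A minimal sketch of an LSTM intent classifier in Keras. The vocabulary
# size, number of intents, and random dummy data are placeholders for a
# real tokenized, padded dataset.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len, n_intents = 5000, 30, 8

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 64),   # token IDs -> dense vectors
    layers.LSTM(64),                    # reads the query token by token
    layers.Dense(n_intents, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for tokenized, padded queries and intent labels.
X = np.random.randint(1, vocab_size, size=(256, max_len))
y = np.random.randint(0, n_intents, size=(256,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```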
However, don’t jump straight into this option until you have tested the other machine learning models first: it’s important to understand your baseline performance, and it’s hard to find vendors or engineers who can tune these models enough to be useful in production. Many of the techniques (at the time of writing) are either still in, or fresh out of, research labs and may not yet be robust enough for mass production in front of your customers. Additionally, building a meaningful deep learning model requires a significantly bigger dataset, likely orders of magnitude bigger, which you may not have.
In summary, always make sure that you use actual data to train your chatbot so that it can have a better start with your users; sample your data rigorously; and test, optimize, and throw out models if you need to.
Carylyne Chan is the cofounder of KeyReply, an enterprise AI chatbot company.