As AI evolves, the tech industry is learning how bots can fail humans, whether through people's inherent distrust and fear of robots (think of the uncanny valley) or through the problems of artificial ignorance, which reinforces harmful tropes like gender bias. That is why my team at Mezi considered human psychology and the personality traits that facilitate communication as we developed our chatbots. We studied how we could improve everyday interactions between our customers and our chatbots by infusing these traits into the bots' programming.
Over half (52 percent) of consumers believe AI has a positive effect on their lives, and we aimed to make that experience even better. Using this goal as a starting point, we focused on identifying and implementing the elements of machine learning that make collaboration possible. We also researched how people interact with customer service professionals and used our human agents as test subjects to help us refine our product.
Below are some guiding principles on how to develop bots that facilitate, not prevent, better communication with humans.
Let the teacher become the student
Zen teacher and monk Shunryu Suzuki said, "In the beginner's mind there are many possibilities, but in the expert's there are few." Suzuki was a thinker who significantly influenced Steve Jobs. Like Jobs, tech leaders such as Marc Benioff and Jeff Bezos believe the key to innovation is keeping an open mind and not letting your presumptions guide you. I do my best to challenge my own assumptions, and like any good student, I know a team can only develop a better bot by doing its research.
So how can you start researching to build a better bot? First, have your engineers and data scientists analyze popular words and phrases that your users typically search for using your technology. Then use these words and phrases as the basis for creating your “virtual persona.”
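One way to start that analysis is a simple frequency count over your search logs. Here is a minimal sketch, assuming the queries arrive as a list of raw strings; the stopword list and the travel-themed examples are illustrative, not Mezi's actual pipeline:

```python
from collections import Counter
import re

def top_phrases(queries, n=10):
    """Count the most common words across user search queries.

    `queries` is assumed to be a list of raw search strings pulled
    from your product's logs; the stopword list is illustrative.
    """
    stopwords = {"the", "a", "an", "to", "for", "in", "of", "and"}
    words = []
    for q in queries:
        words.extend(w for w in re.findall(r"[a-z']+", q.lower())
                     if w not in stopwords)
    return Counter(words).most_common(n)

# Example: surface the vocabulary a travel bot's persona should lean on
queries = ["flight to Paris", "cheap flight deals", "hotel in Paris"]
print(top_phrases(queries, n=3))
```

The words that rise to the top of a count like this become candidate vocabulary for the virtual persona, since they reflect how your users actually talk about your product.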
Your virtual persona should feel human — not like it’s spitting out words that sound programmed. The more human your bot sounds, the more you can personalize the experience you create for your users.
Which leads us to…
Model the bot after yourself
I mentioned in a previous VentureBeat article that to create a bot that increases customer engagement, you should design the bot to function more like a human. It's not just the element of humanity that matters; recreating a positive user interaction encourages customer loyalty and leads to repeat transactions.
An example of this process in action could be the decision to add emojis to a chatbot's vocabulary. Usage of emojis on Instagram increases engagement, and that phenomenon is not limited to social media; it applies to all forms of communication. Think about it: Aren't you more willing to engage with texts from friends and family when humorous anecdotes are personalized with emojis? That is a compelling reason for any company to start using emojis in chatbot communication.
But don’t just let anecdotal evidence inform the decisions you make about your chatbot — make sure you have the research to back them up. For instance, if you do the research, you’ll discover people react to emojis like they do to real smiling faces. You might also find that emojis break the barrier of understanding tone through text. In fact, many psychologists believe emoji use allows us to communicate more effectively and adds a human element to text.
As you develop your own bot, identify elements of speech that you respond well to, and do your research to understand why that might be the case. If it turns out to be a viable tactic with research to back it, ask your customer service team to replicate that type of speech and communication with your customers to see if it holds up in tests. After implementing emojis with our bots, we immediately noticed an uptick in engagement and customer satisfaction, and we’ve used the same model to inform how our chatbots speak to customers ever since.
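In practice, adding emojis to a bot's replies can be as simple as mapping detected intents to appropriate emoji. The sketch below is a hypothetical illustration; the intent names and emoji choices are assumptions, not Mezi's actual implementation:

```python
# Illustrative intent-to-emoji mapping; names and choices are assumed.
INTENT_EMOJI = {
    "booking_confirmed": "\N{PARTY POPPER}",
    "greeting": "\N{WAVING HAND SIGN}",
    "thanks": "\N{SMILING FACE WITH SMILING EYES}",
}

def decorate_reply(reply: str, intent: str) -> str:
    """Append an emoji to a reply when the intent has one mapped;
    otherwise return the reply unchanged."""
    emoji = INTENT_EMOJI.get(intent)
    return f"{reply} {emoji}" if emoji else reply

print(decorate_reply("Your flight is booked!", "booking_confirmed"))
```

Keeping the mapping in one place makes it easy to A/B test which emojis actually move engagement before rolling them out broadly.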
Step back from instant communication
For most people, overcommunication or rapid-fire communication from a customer service agent can feel like harassment. If a bot answers questions too quickly, it can make users feel like the bot isn't actually listening to them or addressing their needs and concerns. Although this might sound counterintuitive, there are times when a customer would prefer to wait for a response; an instantaneous reply can come across as intrusive or abrupt, and that generates distrust around the interaction.
If you allow some time before your chatbots respond to a user’s request, you’ll find that your customers are more likely to ask more follow-up questions related to the other customer services that you provide. Instead of one transaction, customers can get all of their needs met at the same time from your technology.
Take cues from human interaction
It’s important to take real-life learnings and apply them to machine learning. By researching and testing how your customers prefer to communicate, you’ll be able to develop a better way for your bots to anticipate and respond to customers’ future requests and collaborate with users.
The more you model your bots and AI after your own communication style, the more naturally your bot will communicate and successfully deliver on the promise of how bots and people can work together to achieve the best customer experience possible.
Snehal Shinde is the chief technology officer and cofounder of Mezi, the travel and shopping app.