Chatbots bridge the gap between messaging and application frameworks. Users no longer need to install multiple apps or visit numerous web pages to get things done or to access information. The way people live and work is rapidly changing as a result.
However, the incorporation of chatbots into daily life is not all sunshine and rainbows. A few serious, even dark, questions arise. What are the security implications of commercial chatbots for the regular consumer on Main Street? Are the inherent security risks associated with chatbots different from traditional cybersecurity woes? Is the chatbot simply another channel for attacking people for profit (or fun) using existing hacking techniques? Finally, what do bot creators need to do in order to ensure consumers are not unduly exposed to security threats from malicious attackers (or even breaches generated directly by bots)?
There are no simple answers to any of these questions — especially in a technology environment rife with security flaws, user gullibility, and vulnerability to social engineering. To add a layer of complexity, the convergence of chatbot technology and traditionally human tasks increases the potential for even more threats against the consumer due to chatbots’ human-like interface. Let’s take a look at a few possible scenarios that could result from the proliferation of human-like chatbots.
Humans attacking chatbots
As with any technology, a hacker could target the underlying infrastructure or application framework of a chatbot. And, like any service provider, bot creators need to ensure that all of the usual security mechanisms are in place and accounted for in terms of patching, secure architecture, and high availability. Data flowing through the chatbot system should be encrypted at rest and in transit, especially if the bot handles sensitive data.
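To make the at-rest half of that recommendation concrete, here is a minimal sketch using the Python cryptography package’s Fernet symmetric encryption to protect a chat message before it is written to storage. The key handling and storage layer shown here are assumptions made only to keep the example self-contained; in a real deployment the key would live in a secrets manager, and transport security would typically be handled separately by TLS.

```python
from cryptography.fernet import Fernet

# Assumption: in production the key would come from a secrets manager or KMS
# and be rotated; it is generated inline here only to keep the sketch runnable.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_message(plaintext: str) -> bytes:
    """Encrypt a chat message before persisting it (encryption at rest)."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def decrypt_message(token: bytes) -> str:
    """Decrypt a stored message when the bot needs to read it back."""
    return cipher.decrypt(token).decode("utf-8")

# The raw user message never touches storage unencrypted.
stored = encrypt_message("My account number is 12345678")
print(decrypt_message(stored))
```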
A chatbot’s interface presents an opportunity for attackers looking to inject malicious commands that unlock secured data. The degree of an attack’s success and its complexity may depend on the security of the messenger platform. To that end, bot creators should apply due diligence to ensure that data is secured in transit and that input is validated before it is used to access information. Bot creators also need to follow secure development practices that guard against well-known flaws leading to SQL injection, XSS (cross-site scripting), XXE (XML external entity), and other attacks.
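As a rough illustration of the input validation and secure query handling meant here, the sketch below (Python with the standard-library sqlite3 module; the table, field, and account-ID format are made up for the example) checks a user-supplied account ID against an allow-list pattern and uses a parameterized query instead of string concatenation, which removes the most common SQL injection path.

```python
import re
import sqlite3

# Assumed account-ID format for the example: two letters followed by six digits.
ACCOUNT_ID_PATTERN = re.compile(r"^[A-Z]{2}\d{6}$")

def lookup_balance(conn: sqlite3.Connection, account_id: str):
    """Fetch an account balance for a chatbot reply, with basic input validation."""
    # Allow-list validation: reject anything that does not match the expected
    # shape before it gets anywhere near the database.
    if not ACCOUNT_ID_PATTERN.fullmatch(account_id):
        raise ValueError("invalid account id")

    # Parameterized query: the driver treats account_id strictly as data, so a
    # classic injection payload such as "'; DROP TABLE accounts; --" has no effect.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE account_id = ?", (account_id,)
    ).fetchone()
    return row[0] if row else None
```

The same principle applies in other layers: treating every message as untrusted input, encoding output to blunt XSS, and disabling external entity resolution in XML parsers to blunt XXE.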
Humans attacking humans that use chatbots
Thanks to advances in security, the days of attacking server infrastructure directly are disappearing, albeit at an excruciatingly slow pace. Hackers are now more concerned with attacking the human, using technology as a proxy. If chatbots aspire to human-like behavior, they can be an excellent proxy for attacks that defraud humans for hacker profit, or just for fun and amusement. These attacks take two common forms:
- Technical attack: An attack could be scripted into an ‘evil bot’ that uses the messaging exchange with fellow bots as a means of reconnaissance. The goal would be to profile the victim bot or its framework and look for known or possible vulnerabilities that could later be exploited. This could be followed up with targeted payloads, delivered with freely available tools such as Metasploit, to compromise the bot service or the framework protecting the data. The outcome could be data theft.
- Social engineering attack: An ‘evil bot’ could attempt to masquerade as a legitimate user by using an accumulation of data about a targeted victim, taken from public sources (such as social media), the dark web (using auctioned passwords or personal information), or both, all in an attempt to gain access to another user’s data through a bot that provides such services.
Drilling down a bit, is it possible to trick a bot that has implemented machine learning behavior or AI into revealing information it should not? Chatbots created so far do not appear to exhibit such advanced capabilities. But when they do, will attackers be able to trick them with social engineering techniques and masked identities? Probably. Will hackers be able to persuade chatbots to do something they are not supposed to do? All signs indicate yes. There is, therefore, a greater risk that future chatbots will include human-like flaws, such as trust without sufficient verification.
Humans and chatbots working together to prevent attacks
As chatbots become more human, could they become more vulnerable to the well-known attacks of phishing, whaling, CSRF (cross-site request forgery), and clickjacking? Could they also end up generating devastating security breaches without human intervention? That’s to be determined. Regardless, chatbot creators need to use available methods to get ahead of the potentially disastrous trend.
In practice, this means subjecting bots to penetration testing to uncover complex and hard-to-detect attack paths, then applying remediation accordingly. Testing in this manner is not a new concept, but its importance is not universally understood among bot authors — especially new builders. To close the knowledge gap quickly, bug bounty programs can be an effective means of crowd-sourcing penetration testing. As a final line of defense, engineers and builders should also consider publishing security guidelines for their consumers to warn of the increasingly human dangers that accompany chatbot innovation.
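Short of a full penetration test, one lightweight starting point is to fold known attack payloads into the bot’s own test suite. The sketch below is only an illustration: it assumes a hypothetical handle_message entry point in a module called mybot, and uses pytest to assert that injection-style inputs are refused rather than reflected back to the user.

```python
import pytest

# Assumption: handle_message is a stand-in for the bot's real message entry point.
from mybot import handle_message

INJECTION_PAYLOADS = [
    "'; DROP TABLE users; --",                                     # SQL injection
    "<script>alert('xss')</script>",                               # cross-site scripting
    "<!DOCTYPE foo [<!ENTITY xxe SYSTEM 'file:///etc/passwd'>]>",  # XXE probe
]

@pytest.mark.parametrize("payload", INJECTION_PAYLOADS)
def test_bot_does_not_reflect_hostile_input(payload):
    reply = handle_message(payload)
    # A safe bot refuses or sanitizes hostile input; it never echoes it verbatim.
    assert payload not in reply
```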
Kriti Sharma is the Vice President of bots and AI and Robin Fewster (CSSLP, CISSP, CCSK) is the Lead Security Specialist at Sage Group, a global technology company.