Chatbots Gone Rogue: How Weak Chatbot Security Enables Bad Actors

Posted by Christian Hibbard on 22 Apr 2021

Over a short time span, chatbots have become standard practice in customer service. Everything from basic troubleshooting advice to full-fledged payment services is now available to consumers with minimal intervention from human staff. As with any automated process, great care should be taken to make chatbots robust and secure. As chatbots have grown more sophisticated, the potential for malfunction or even exploitation has grown with them. This raises the stakes for chatbot providers and users alike. It also raises the question—what could go wrong if I use an unsafe chatbot?

In 2017, German regulators raised a red flag about a line of children’s toys. The government found that these dolls could be used as a clandestine surveillance device, which is illegal under German law. The dolls were marketed as remarkable new technology when, in actuality, they were just a chatbot. Children could ask the doll questions and get answers based on the doll's fictional life. Concerns arose when parents realized that their children were having lengthy, if somewhat one-sided, conversations with the doll. The children believed they were having a private chat, but all of that data was being sent to the toy company's chatbot operator. In addition, several consumer awareness groups and tech organizations demonstrated that the Bluetooth receiver in the doll was not secure. It could be paired with any phone from up to fifty feet away by anyone who knew how. That phone could then access the microphone and speakers embedded in the doll, listening and speaking through it. It's no surprise that after the German ban went into effect, owners were told to destroy the doll or face either a heavy fine or a two-year jail sentence.

The doll's chatbot was vulnerable to what's called a "man-in-the-middle" attack, in which a third party inserts itself into a chatbot conversation. The attacker can then passively monitor the chat log or even alter the messages sent, perhaps to carry out a phishing attack or trick the user into divulging sensitive information. This is far from the only way a malicious party can take advantage of a chatbot, however.
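The standard defense against this kind of interception is a verified, encrypted channel. As a minimal sketch (not any particular vendor's implementation—the function name here is hypothetical), a chatbot client in Python can insist on validated TLS before sending a single message:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Build a TLS context that rejects unverifiable peers,
    the basic defense against man-in-the-middle interception."""
    ctx = ssl.create_default_context()            # loads the system's trusted CA certificates
    ctx.check_hostname = True                     # certificate must match the server's hostname
    ctx.verify_mode = ssl.CERT_REQUIRED           # unverified peers are refused outright
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse obsolete, breakable protocol versions
    return ctx
```

With a context like this, a connection to an impostor endpoint fails before any chat data is exchanged, rather than silently flowing through the attacker.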

Other Kinds of Chatbot Attacks

Direct attacks on individual consumers are bad for business, but what can be even more devastating is an attack on the back-end systems directly. Delta Air Lines found itself in one such situation a few years ago. Delta claimed that subpar cybersecurity practices from their chatbot provider opened a backdoor to highly sensitive information. Hackers were able to breach the chatbot provider's systems. They then modified the chatbot’s source code to give themselves unrestricted access to other information entered on Delta's website. In total, some 825,000 customers' sensitive information was stolen. Chatbots can be a great way for organizations to unify many different services into one convenient location. That same convenience creates an attractive target for hackers, who can exploit weak links in the chatbot's implementation and access many systems through it.

Other types of attacks attempt to overwhelm or manipulate chatbots from the client side. Similar to a denial-of-service attack, bots are deployed to create a huge amount of traffic at once. This overwhelms the chatbot, leading to delays or errors for genuine users. A related attack uses bots over a long period to mislead the chatbot's implementer. It's common for businesses to analyze the usage of their chatbot to inform their development decisions. By consistently sending erroneous queries, an outside party can skew that analysis away from what genuine customers actually want. Even worse, if the chatbot utilizes machine learning, it can be trolled into giving unhelpful or even offensive answers. Chatbots like this learn from past exchanges to produce more complex, human-like answers. An unsupervised chatbot can easily be manipulated into producing outlandish messages, as in the infamous case of Microsoft's Tay.

What Can My Business Do?

The potential outcomes from these attacks range from lost leads and wasted resources up to existential threats to an organization. Some organizations opt to develop their chatbots in-house. Reducing external exposure helps limit risk but can be too costly an option for smaller businesses. The only way to secure the benefits of automating customer service without leaving yourself open is to pick a trustworthy partner. A reliable chatbot operator has strong cybersecurity practices. They also take care to monitor their chatbots closely as part of a larger culture of vigilance.

Chatbot security is crucial for those accepting payments that involve sensitive financial information. Providers of this service must do their due diligence to ensure the experience is convenient and safe for both user and operator. For the security of the messages themselves, all information should be encrypted. It’s standard practice to ensure that even if the data is somehow intercepted, only those with the right credentials can read it. Redundant security practices can further protect messages. One such practice is the use of payment profiles, which let users select from an account on file rather than entering sensitive information directly into the chat.
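The payment-profile idea can be sketched in a few lines. In this hypothetical example (the class and method names are illustrative), card details live only in a server-side vault, and the chat conversation ever carries just an opaque token and a masked display name:

```python
import secrets

class PaymentProfiles:
    """Illustrative sketch: sensitive card data is stored once, server-side,
    and the chat channel only ever sees an opaque, unguessable token."""

    def __init__(self):
        self._vault = {}  # token -> card number, never exposed to the chat

    def add_card(self, card_number: str) -> str:
        token = secrets.token_urlsafe(16)   # cryptographically random reference
        self._vault[token] = card_number
        return token

    def display_name(self, token: str) -> str:
        # The chatbot shows only the last four digits, never the full number.
        return "Card ending in " + self._vault[token][-4:]
```

Even if an attacker intercepts the conversation, the token is useless outside the operator's own systems, and the full card number never crosses the wire in the chat.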

Authentication is another crucial piece of the security puzzle. Essentially, the chatbot needs to be able to verify the identity of the user interacting with it. Without this, impersonators could initiate fraudulent payments. Many operators simply require the user to sign in before they can access the payment chatbot. Generally, the easiest way to authenticate users is via a username and password. For operator-side accounts, more robust authentication, such as two-factor or even biometric authentication, is necessary.
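For the username-and-password case, the baseline practice is to store only salted, slow-hashed passwords and compare them in constant time. A minimal sketch using Python's standard library (function names here are illustrative, not any vendor's API):

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Derive a salted PBKDF2 hash; the plain-text password is never stored."""
    salt = salt or os.urandom(16)                 # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)
```

A sign-in gate like this in front of the payment chatbot stops casual impersonation; two-factor methods add a second, independent proof of identity on top of it.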

The only thing better than having many different security practices is having one holistic culture of security. Utilizing these practices is crucial, but each covers only one area. As mentioned earlier, redundancy is crucial. A good chatbot provider will stress-test their systems under varying circumstances, from penetration attempts to natural disasters. In doing so, they see how their remaining security systems function when any one of them fails.

Chatbots can make your services more accessible and potentially shrink customer service costs. On the other hand, they can frustrate your customers and leave your organization vulnerable. The difference, as always, comes down to design and implementation. From the individual messages to the back-end systems, the same security standards that banks hold themselves to should apply to every new service they hope to provide. Whether it’s designed in-house or outsourced, a properly secured chatbot is a worthwhile investment.

To learn more about chatbots and AI, read our resources, Financial Institutions Are Investing in Chatbots and How AI is Saving Customer Service.

Alacriti created Ella, an AI-powered, highly secure payments chatbot that facilitates seamless, personalized, and context-aware interactions with customers through messaging apps, intelligent personal assistants, and directly on your website. To find out how Ella can transform how you engage with your customers, contact us at (908) 791-2916 or

Christian Hibbard, Marketing Associate

Christian Hibbard is the newest member of the Marketing team at Alacriti. His areas of interest are: Faster Payments, Data-driven Marketing, and Sustainable Business. Christian holds a Bachelor of Arts in Philosophy from Temple University.
