New security vulnerability: Chatbots in the age of mobile messaging

Chatbots have everybody talking – both about chatbots and to chatbots. What’s sometimes lost in the conversation, however, is much acknowledgement of the security risks this beguiling interface technology can introduce.

Broadly speaking, chatbots allow people to engage conversationally with messaging and other applications. The sophistication of the chatbots can range from rudimentary to I-can’t-believe-it’s-not-a-person levels of engagement. Millions of people already routinely use chatbots with Facebook, Microsoft, Google, Apple and Amazon, and dozens of smaller players and platforms are pouring into the chatbot market.

Even casual users sometimes have qualms about the amount of personal information they share with these conversational platforms. If your Facebook or Microsoft chatbot knows not just about your likes and preferences, but also your calendar schedule and your minute-by-minute location, it’s reasonable to wonder whether your privacy may be compromised. That compromise could come intentionally, via the chatbot owner’s exploitation of your personal information for marketing or other uses, or maliciously, if hackers break into the chatbot database or message stream.

For enterprises, as well as individuals, poorly secured chatbots can also put everything from credit card numbers to intellectual property at risk. Individual users will often provide their card numbers to chatbots so they can automatically order and pay for items. For their part, company employees interacting with a chatbot may unthinkingly provide sensitive corporate information that they’d normally be hesitant to enter in a “less personable” application. Indeed, criminals are designing malicious chatbots specifically to trick employees into doing just this.

Sound security

As with all cybersecurity issues, the first defense against chatbot risks is education. You need to teach your employees to exercise caution and common sense when interacting with these applications, whether they are consumer-focused or business bots. Another standard element of defense also holds true: Survey your employee base to learn which chatbots your workers are using in their professional lives so you can intelligently evaluate your risk.

Ideally, any chatbots your employees use will support encrypted communications and data, either optionally or – better yet – by default. You should also do your best to determine where data provided to a chatbot is stored, how long it is retained and who has (legitimate) access to it.
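You can verify the transport-encryption piece yourself before approving a service. The short Python sketch below – the endpoint name api.example-chatbot.com is hypothetical, so substitute the host your vendor actually uses – relies only on the standard library to confirm that a chatbot endpoint negotiates modern TLS and presents a certificate your systems trust.

    import socket
    import ssl

    # Hypothetical chatbot API endpoint; substitute your vendor's real host.
    HOST = "api.example-chatbot.com"
    PORT = 443

    def check_tls(host: str, port: int = 443) -> None:
        """Report the TLS version, cipher and certificate details for an endpoint,
        failing loudly if the certificate does not validate."""
        context = ssl.create_default_context()  # enables hostname checks and cert validation
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print(f"Protocol:  {tls.version()}")   # e.g. TLSv1.3
                print(f"Cipher:    {tls.cipher()[0]}")
                print(f"Issued to: {cert['subject']}")
                print(f"Expires:   {cert['notAfter']}")

    if __name__ == "__main__":
        check_tls(HOST, PORT)

A handshake that fails here, or that negotiates an obsolete protocol version, is a strong signal to keep the service off your approved list until the operator can explain why.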

As when evaluating a cloud services provider, you should perform due diligence in assessing the security controls and practices employed by the chatbot operator. Not surprisingly, chatbot-specific security standards and policies are even less mature than the chatbot technology itself. Still, you should press chatbot operators for whatever security information they can provide, and blacklist any services that fail to meet your corporate requirements.
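In practice, that blacklist can be as simple as a maintained record of vetting decisions that your web proxy or firewall consumes. The sketch below is illustrative only – the vendor domains and the specific policy thresholds are made up – but it shows one lightweight way to turn due-diligence answers into an enforceable blocklist.

    # Vetting records for chatbot services; every entry here is hypothetical.
    CHATBOT_VETTING = {
        "bot.example-vendor-a.com": {"encrypts_in_transit": True, "encrypts_at_rest": True, "retention_days": 30},
        "bot.example-vendor-b.com": {"encrypts_in_transit": True, "encrypts_at_rest": False, "retention_days": 365},
        "bot.example-vendor-c.com": {"encrypts_in_transit": False, "encrypts_at_rest": False, "retention_days": None},
    }

    def meets_policy(profile: dict) -> bool:
        """Example corporate requirements: encryption in transit and at rest,
        and a stated retention period of 90 days or less."""
        return (
            profile["encrypts_in_transit"]
            and profile["encrypts_at_rest"]
            and profile["retention_days"] is not None
            and profile["retention_days"] <= 90
        )

    # Emit the domains that fail policy, one per line, for a proxy or firewall to block.
    for domain in sorted(d for d, p in CHATBOT_VETTING.items() if not meets_policy(p)):
        print(domain)

The point is less the code than the discipline: record what each operator tells you, compare it against written requirements, and block anything that falls short.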

Given that a huge amount of chatbot access and communication is smartphone-based, it’s also important to understand the security controls provided by your mobile operator(s). Although chatbots and chatbot traffic introduce a new target for cyberattacks, many existing network controls have a role to play in protecting this proliferating technology. Better to talk to your mobile operators now about your chatbot security concerns and needs than to have others chatting about your data exposure sometime in the future.

Dwight Davis has reported on and analyzed computer and communications industry trends, technologies and strategies for more than 35 years. All opinions expressed are his own. AT&T has sponsored this blog post.