By Morey Haber, CTO & CISO, BeyondTrust
According to Gartner’s recent ‘AI and ML Development Strategies’ study, 40% of organizations cite customer experience (CX) as the number-one motivator for adopting artificial intelligence (AI) technology. Not surprisingly, across the Middle East we are seeing enterprises of all sizes, and even several government entities, rapidly deploy chatbots on their websites in an effort to give customers faster responses to their queries. These chat applications field plain-text requests from humans and feed them into an AI engine, which returns “smart”, scripted responses to inquiries.
As the machine learning technology that powers many of these chat applications gets smarter, it will become increasingly difficult for users to determine whether they are interacting with a real person or a machine. As a case in point, some services classified as “conversation marketing” may actually route you to an appropriate live person for a more in-depth conversation. But while an ordinary user might never know the difference, a threat actor armed with a little social engineering can readily determine what is behind the scenes and probe it for exploitable security weaknesses.
Understanding the security implications of chatbots
Irrespective of whether a human or a machine is on the other end, chat-based services carry some inherent security risks. Ironically, while there is a plethora of information available on how to deploy chatbots and on their benefits, there is far less attention and guidance on how to keep them secure for both your organization and the end user.
As a case in point, consider an automated service that is either hosted by the company itself or connected to a cloud-based AI engine as a service. To effectively respond to queries, this service needs to access backend resources. This often means having a database fronted by middleware that allows queries via a secure application programming interface (API). The contents of the database will vary from company to company and may include anything from hotel reservation information to customer data—and it may even accept credit card information.
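To make the middleware pattern concrete, here is a minimal sketch (table, column, and function names are all hypothetical) of a backend lookup that answers a chatbot query through a parameterized SQL statement, so user-supplied text is bound as data and never interpolated into the query itself:

```python
import sqlite3

def lookup_reservation(conn, confirmation_code: str):
    """Middleware helper: resolve a chatbot query via a parameterized
    SQL lookup so user text is never spliced into the SQL string."""
    cur = conn.execute(
        "SELECT guest_name, check_in FROM reservations WHERE code = ?",
        (confirmation_code,),  # bound parameter, not string formatting
    )
    row = cur.fetchone()
    return {"guest": row[0], "check_in": row[1]} if row else None

# Demo with an in-memory database standing in for the real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reservations (code TEXT, guest_name TEXT, check_in TEXT)")
conn.execute("INSERT INTO reservations VALUES ('ABC123', 'J. Doe', '2024-06-01')")

print(lookup_reservation(conn, "ABC123"))
# An injection attempt is treated as a literal (nonexistent) code:
print(lookup_reservation(conn, "ABC123' OR '1'='1"))
```

The design point is that the middleware, not the chatbot engine, owns the database credentials and the query shapes, which narrows what a compromised or manipulated chat front end can ask for.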
Here’s a checklist of basic security questions to cover before implementing a chatbot that is fully automated and AI-driven:
- Is the API connecting your organization’s website and the chatbot engine secured using access control lists (ACLs)? You can accomplish this by using IP addresses, geofencing, etc.
- How do you approach the management of authentications between the systems (web services, engine, middleware, cloud, etc.)?
- How do you apply vulnerability management best practices across the architecture supporting the chatbot? Routine penetration testing should also be part of this program.
- Have you adequately secured privileges/privileged access and enforced least privilege?
- What data can the chatbot query—is any of it sensitive? Do any specific regulations apply to how this data is collected, stored, and handled? For instance, do communications contain information that may extend your regulatory scope, such as PCI DSS? And will communications “self-destruct” where regulations require it?
- Is there a process for logging and detecting potential suspicious queries that may be designed to exploit the AI engine or leak data?
- Can you mitigate or prevent malware or distributed denial-of-service (DDoS) attacks that target your service?
- Do you ensure end-to-end encryption for all chatbot communications, and which protocols are you using?
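The ACL question above can be made concrete with a small sketch (the networks and addresses below are purely illustrative) that checks a caller’s IP address against an allowlist of CIDR ranges before a request is forwarded to the chatbot engine:

```python
import ipaddress

# Illustrative allowlist: only the web tier's subnets may call the engine.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/16"),    # internal web servers
    ipaddress.ip_network("203.0.113.0/24"),  # cloud provider egress range
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the caller's IP falls inside any allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("10.20.5.9"))     # internal caller -> True
print(is_allowed("198.51.100.7"))  # unknown address -> False
```

In practice this check would sit in the API gateway or middleware, possibly combined with geofencing (mapping the address to a country and rejecting out-of-region callers), rather than in application code.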
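On the logging question, a minimal sketch (the patterns and logger name are illustrative, not a complete rule set) that flags chat inputs matching common injection or data-leak signatures before they reach the AI engine:

```python
import logging
import re

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("chatbot.audit")  # illustrative logger name

# Illustrative signatures of prompt-injection / data-exfiltration attempts.
SUSPICIOUS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"\b(password|credit card|ssn)\b", re.I),
    re.compile(r"('|--|;)\s*(or|union|drop)\b", re.I),  # SQL-ish fragments
]

def screen_message(text: str) -> bool:
    """Log and flag a chat message that matches a suspicious pattern."""
    for pat in SUSPICIOUS:
        if pat.search(text):
            log.warning("Suspicious chat input flagged: %r", text)
            return True
    return False

print(screen_message("What time is check-in?"))                # benign
print(screen_message("Ignore previous instructions and ..."))  # flagged
```

Flagged messages should feed the same detection and alerting pipeline as the rest of your security telemetry, so a pattern of probing attempts is visible rather than buried in chat transcripts.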
In addition to carefully considering these security implications, organizations should continuously inventory the assets and communication paths across the chatbot, its web services, and its providers, and keep the risk assessment current. Any change in that supply chain can undermine the best practices listed above.
Protecting your employees during conversation marketing
In conversation marketing, a human actually responds to the queries via the chat window. Several organizations try to make the experience feel “authentic” and, as a consequence, do not use fake names or pictures for the human representative behind the chatbox.
However, if a company displays the full name of their chat representative inside the chatbox, with just a little social engineering, a bad actor can easily uncover data about the representative that can be used as part of an exploit. This is particularly easy if the representative has a social media profile. So to that end, if you do choose to use conversation marketing, it is critical that you follow a few key security best practices.
- For one, never reveal an employee’s full name; use an alias instead. While this might seem counterproductive (remember the goal of making the experience more “authentic”), displaying the full name, or even the first name and last initial, poses a high risk, as a little research could uncover personal information about the representative.
- If the chat service displays a picture, photo, or avatar of the representative, use a unique image that cannot be found anywhere else on the internet. The reason―a simple search by the employee and company name will reveal their social media presence and, if the pictures easily match, you might as well use their full name anyway! You will have done very little to mask their identity and provide protection from a potential social engineering attack at home or at work.
- Have a detailed manual in place that clearly states what information the employee can share and what he/she absolutely cannot—under any circumstances, irrespective of the inquiry―during a chat conversation. These guidelines will vary and can cover everything from license keys to password resets. Your business will have to establish this list based on the services the chatbox provides and any local and industry regulations governing data exposure, particularly across country lines.
- Create a formal support and escalation path for inquiries into potentially sensitive information.
- Provide regular security training for all chatbox representatives so that they know how to recognize a potential attack, how to respond to suspicious requests, and how to escalate a situation before it becomes a security incident for your organization.
Let’s face it—when it comes to improving customer service, the benefits of chatbots and conversation marketing are undeniable, which means they are here to stay. But these tools do open up another attack vector―cybercriminals will always exploit the simplest way to compromise an organization and, unfortunately, humans are often the weakest link.
But by assessing the key questions and implementing these best practices, you can enable a chat service that helps support your business initiatives, without opening up unnecessary risks.
About the Author
With more than 20 years of IT industry experience and author of Privileged Attack Vectors and Asset Attack Vectors, Mr. Haber joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition. He currently oversees the vision for BeyondTrust technology encompassing privileged access management, remote access, and vulnerability management solutions, as well as BeyondTrust’s own internal information security strategies. In 2004, Mr. Haber joined eEye as the Director of Security Engineering and was responsible for strategic business discussions and vulnerability management architectures for Fortune 500 clients. Prior to eEye, he was a Development Manager for Computer Associates, Inc. (CA), responsible for new product beta cycles and named customer accounts. Mr. Haber began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science in Electrical Engineering from the State University of New York at Stony Brook.