Straight Talk About Chatbots: Minimizing the Risks to Reap the Rewards

March 28, 2018

Using technology to sell insurance and gather claims information once seemed like the stuff of science fiction. Today, advances in natural language processing and artificial intelligence (AI) have led insurance companies to adopt smarter, more automated solutions for assisting customers in the purchase of insurance policies, as well as in claims processing. Rather than waiting on hold and spending time speaking with a salesperson or claims investigator, customers can interact with a chatbot that can share information about policy options or hold a conversation to gather all of the necessary details about a claim, and then process that claim, sometimes in a matter of minutes.


But with the use of chatbots come potential risks. Here’s what insurance companies need to know about these risks and how to minimize them.

Liability is typically not an issue when chatbots are used for customer support functions, like answering questions about when policies are up for renewal. But when it comes to the sales process, the waters become muddier. Who is liable if a chatbot provides a customer with the wrong information? And how can insurance companies minimize the consequences that occur when a customer takes legal action because of a chatbot’s error?

Just as insurance companies are at fault when an employee conveys inaccurate information to customers during the sales stage, they shoulder the liability when a chatbot does the same. In the event of litigation, the onus is on the insurance company to prove what the chatbot told the customer — for example, the extent of coverage they would get by choosing a particular insurance product, the differences between one product and another, and the conditions that apply to a policy or policies. Information about the price of coverage also falls into this category.

Mitigating the risk that comes with using chatbots as sales assistants starts with using web archiving software to keep legally defensible records of every interaction between these chatbots and customers. Insurance companies must also carefully train their chatbot software to properly understand, interpret, and respond to language, including generic language, industry-specific terms, and terms used only within their own operations (e.g., product names). Archiving this information in a legally defensible manner, again using an archiving solution, is critical as well, because as a general rule, insurance companies bear the liability for their chatbots' misinterpretations. A company that cannot prove its chatbot did not, in fact, misunderstand a query will have a harder time mounting a successful defense in court.
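
The article does not prescribe a specific tool, so the following is only a minimal Python sketch of what legally defensible interaction logging can look like, not Hanzo's product or any vendor's API. Each chatbot exchange is appended to a write-once log along with the hash of the previous record, so later alteration of the file is detectable. The function and file names (archive_exchange, chat_archive.jsonl) and the sample dialogue are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def archive_exchange(log_path, session_id, customer_msg, bot_reply, prev_hash):
    """Append one chatbot exchange to an append-only log.

    Each record carries the hash of the previous record, so any later
    change to an earlier entry breaks the chain and is detectable.
    """
    record = {
        "session_id": session_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_message": customer_msg,
        "bot_reply": bot_reply,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record["record_hash"]

# Illustrative use: chain two exchanges from one sales conversation.
h = archive_exchange("chat_archive.jsonl", "sess-001",
                     "Does the silver plan cover flood damage?",
                     "Flood damage is covered up to the policy limit, subject to the deductible.",
                     prev_hash="GENESIS")
h = archive_exchange("chat_archive.jsonl", "sess-001",
                     "What is the monthly premium?",
                     "The silver plan is $42 per month.",
                     prev_hash=h)
```

A production archive would add more (user identity verification, retention policies, exportable chain-of-custody reports), but even this simple hash chain illustrates the point: the record of what the chatbot actually said is fixed at the moment of the conversation, not reconstructed later.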

What happens if a chatbot takes information for a claim, then rejects that claim based on the historical data that was used to train the chatbot? Can the customer sue? Will the insurance company be liable if the way the chatbot was trained caused the rejection?

When it comes to falsely rejected claims — or any mistakes a chatbot might make, for that matter — the rules that apply to insurance claims representatives also apply to chatbots. If a customer believes human error or bias caused a human insurance company representative to reject his or her claim, that customer has the right to initiate a lawsuit against the insurance company. Should the case be decided in the customer’s favor, the insurance company would, as the claim representative’s employer, be liable for damages. In other words, it’s your employee who caused the problem, so you pay.

Similarly, if a chatbot denies an insurance claim and the claimant chalks it up to bias or a mistake on the chatbot’s part, the customer can sue the insurance company. If the suit is decided in the customer’s favor, the insurance company, as the owner and “employer” of the chatbot, would be responsible for the error and for any financial penalties imposed by the court. Quite simply, your chatbot, your fault — and your loss.

What steps can insurance companies take to defend contested chatbot decisions, keep chatbots from rejecting claims that should be accepted, and mitigate the risk of litigation in the first place?

The first step involves harnessing web archiving technology to document every aspect of the claims decision process: all claims data that is input into the chatbot software, all of the data the software produces, the software itself, and the details of the data analysis that led to the disposition of each claim. These legally defensible records can be presented in court to demonstrate how and on what basis claims decisions were made and, hopefully, prove their validity to the presiding judge.
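
As a rough illustration of what such an audit trail could capture, the hypothetical Python sketch below writes one record per claim decision: the claim inputs, the software version that decided, the outcome, and the factors the chatbot reports relying on. The record layout, field names, and sample values are assumptions for illustration, not a description of any particular archiving solution.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_claim_decision(log_path, claim_id, claim_inputs, model_version,
                          decision, decision_factors):
    """Write one tamper-evident record of a chatbot claim decision.

    Captures what went in (claim data), what came out (approve/deny),
    which software version decided, and the factors it relied on, so the
    basis of the decision can be reconstructed later.
    """
    record = {
        "claim_id": claim_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": claim_inputs,
        "decision": decision,
        "decision_factors": decision_factors,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

# Illustrative use with invented claim data.
record_claim_decision(
    "claims_audit.jsonl",
    claim_id="CLM-2018-0457",
    claim_inputs={"policy_type": "auto", "damage_estimate": 3200,
                  "incident_date": "2018-03-01"},
    model_version="claims-bot-1.4.2",
    decision="approved",
    decision_factors=["damage below policy limit",
                      "policy active at incident date"],
)
```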

Insurance companies should also assess whether their chatbot software is making appropriate claims determinations, or whether some type of bias or other factor appears to be interfering with its ability to do so. One way to figure this out is to manually compare a sampling of decisions generated by the chatbot software with a sampling of decisions made by humans. Another, more time-consuming, approach involves going back into the system and analyzing every decision made by a chatbot against a dataset of correct decisions in order to ferret out anomalies. In either scenario, additional data that can improve the accuracy of decisions can then be fed into the software, with further training to follow if needed.
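
A simplified sketch of that comparison might look like the following Python, where both the chatbot's decisions and the reference decisions (whether from human adjusters or from a vetted dataset of correct outcomes) are plain claim-ID-to-outcome mappings. The function name, data, and thresholds are invented for illustration.

```python
def compare_decisions(chatbot_decisions, reference_decisions):
    """Compare chatbot claim decisions against a reference set.

    Both arguments map claim IDs to "approved" or "denied". Returns the
    agreement rate and the claims where the two disagree, which are the
    candidates for manual review and for additional training data.
    """
    shared = set(chatbot_decisions) & set(reference_decisions)
    disagreements = {cid for cid in shared
                     if chatbot_decisions[cid] != reference_decisions[cid]}
    agreement_rate = 1 - len(disagreements) / len(shared) if shared else 0.0
    return agreement_rate, sorted(disagreements)

# Illustrative use with invented decisions.
bot = {"CLM-001": "approved", "CLM-002": "denied", "CLM-003": "denied"}
human = {"CLM-001": "approved", "CLM-002": "approved", "CLM-003": "denied"}
rate, flagged = compare_decisions(bot, human)
print(f"Agreement: {rate:.0%}, review: {flagged}")  # Agreement: 67%, review: ['CLM-002']
```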

Regardless of which approach insurance companies take to address claims rejections based on bias or other data problems, it is a good idea to program boundaries into the chatbot software within which the system should remain when determining whether to approve or deny customers' claims. Examples of these boundaries include variables that should not be considered and associations between data points that should not be made, both meant to avoid the risk of denying valid claims. The system can then be programmed to generate alerts when these boundaries have been or are about to be crossed, and retraining can be initiated to reduce the potential for future bad claims-related decisions and legal action by disgruntled customers.
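
One possible, simplified way to express such boundaries in code is shown below. The prohibited variables and associations are placeholders only; in practice those lists would come from legal and compliance review, and the check would run against the decision factors logged in the audit record sketched earlier.

```python
# Placeholder lists: variables the model must never weigh, and feature
# pairs it must never combine, when approving or denying a claim.
PROHIBITED_VARIABLES = {"zip_code", "claimant_age", "marital_status"}
PROHIBITED_ASSOCIATIONS = {("prior_claims_count", "neighborhood_income")}

def check_decision_boundaries(factors_used):
    """Return alerts if a claim decision relied on out-of-bounds inputs.

    `factors_used` is the list of feature names the chatbot reports
    having relied on for a given decision.
    """
    alerts = []
    for factor in factors_used:
        if factor in PROHIBITED_VARIABLES:
            alerts.append(f"prohibited variable used: {factor}")
    for a, b in PROHIBITED_ASSOCIATIONS:
        if a in factors_used and b in factors_used:
            alerts.append(f"prohibited association used: {a} + {b}")
    return alerts

# Illustrative use: this decision leaned on a prohibited variable.
alerts = check_decision_boundaries(["damage_estimate", "zip_code"])
if alerts:
    print("Decision flagged for review and possible retraining:", alerts)
```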

Kevin Gibson is CEO and chairman of Hanzo, a firm that provides legally defensible collection, preservation and analysis of web and social media content for Global 2000 companies.
