There’s an old saying that “#$%^ runs downhill.” So, if you want to find the source of the #$%^, you go to the top of the hill. A similar saying, more to the point, is “Garbage in, garbage out” (GIGO). We hear and use that phrase an awful lot in claims. Far too often.
Yet, how many of us really focus on the root source and commit to eliminating the cause? Incomplete, inaccurate, and improperly formatted data causes a tremendous amount of wasted time during claim handling.
Most claims start at the intake level, where clerical personnel manually enter loss details, which are then married to policy data within the carrier’s core system (usually the policy management system). That combined data set is output as the First Notice of Loss (FNOL) and transmitted, sometimes via multiple channels and across multiple paths, to multiple stakeholders, sometimes all at once and sometimes sequentially as the claim life cycle progresses.
Each of these downstream stakeholders relies on the FNOL and policy data to perform their duties, one of which is exchanging communiques with other stakeholders and with the insureds themselves. Multiple systems and platforms exchange data, which, again, is reliant upon the integrity of the data from the source system. In the modern world of claims, duplicating data entry is a workflow relic that everyone knows should be avoided at all costs; yet it is still common and, unfortunately, necessary, because the sourced data is often unusable. Not only does duplicating data entry slow the claims process, but it creates another opportunity for even more human data entry mistakes. And so rolls the #$%^…
What, then, does the claim handler do when the data they are handed is bad? Who is responsible for fixing it? Well, surely that falls on the adjuster, right? If the adjuster sends a letter to the insured, and that letter misspells their name, that is on the adjuster, right? What if the letter never reaches the insured because the policy incorrectly shows the mailing address to be the same as the risk address? Still on the adjuster? What if the insured is not contacted in a timely manner because the policy hasn’t been updated in five years and the insured changed their phone number? What if the policy doesn’t contain the email address for the agent, or an opt-in flag for SMS?
What if, what if, what if? The “ifs” are exceedingly abundant…
As a more specific example, what if the county name is sometimes SAINT JOHNS, sometimes St Johns, sometimes St. Johns, and sometimes Saint John or some other variation? Yes, those might all be acceptable in a report intended for human eyes, but what about system-driven processes (including analytics) that rely on the county name being what the US Census Bureau data actually says it is, and the mailing address being formatted according to USPS standards? Why doesn’t the policy system create these records with correct data, consistently, from the very beginning? I’ll tell you why: because the system developers and managers have never worked in the claims department!
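To make the county-name problem concrete, here is a minimal sketch of the kind of normalization step a source system could run at the point of entry. The alias table and function name are hypothetical and purely illustrative; a real implementation would map every variant to the official Census Bureau spelling:

```python
import re

# Hypothetical alias table mapping common variants (with periods and
# whitespace stripped, upper-cased) to the canonical Census spelling.
# Illustrative entries only.
CANONICAL_COUNTIES = {
    "ST JOHNS": "St. Johns",
    "SAINT JOHNS": "St. Johns",
    "SAINT JOHN": "St. Johns",
}

def normalize_county(raw: str) -> str:
    """Reduce a county name to a lookup key (drop periods, collapse
    whitespace, upper-case), then return the canonical form; unknown
    names fall back to simple title case."""
    key = re.sub(r"\s+", " ", raw.replace(".", " ")).strip().upper()
    return CANONICAL_COUNTIES.get(key, raw.strip().title())
```

With this in place, SAINT JOHNS, St Johns, St. Johns, and Saint John all come out as a single canonical value, so downstream analytics and system-driven processes match on one string instead of four.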
It is all on the adjuster, right? At the end of the day, that is the view: any garbage out is the fault of the adjuster. Like the concept of “last clear chance” when assigning fault in a vehicle accident, the adjuster has the last chance, and thus the duty, to prevent the mishap from occurring. The adjuster is expected to “mop the floor” and make sure all the data is completed and corrected at some point during the life of the claim. Because of this mindset, no one ever goes back to place any fault at the source, and thus no improvements are made; the #$%^ keeps on rolling. Let someone else deal with it, right?
Well, that is exactly the wrong answer, because claims adjusters need to be supported so they can focus on claim handling, not clerical work. If the claim staff inherits issues, then instead of making appointments and inspecting losses, they are cleaning data: they cannot contact the insured without valid contact details. Letters cannot be sent. Routes cannot be plotted. Geodata-driven operations cannot be deployed. The list goes on and on, and too often the claims cannot even be created in the CMS unless entered manually.
And yet, it is the adjuster whose cycle time is taken as a key measure of success. Timely claim handling is measured from the claim reported date to insured contact, inspection, first report, and so on, and everything required in between adds to the timeline. So why not provide pristine data to the department that is scrutinized most for timeliness, evaluated most critically for how fast the insured is serviced, and yet has the least resources for fixing data issues?
This challenge is only exacerbated with CAT claims. Not only are claim resources stressed, but often the upstream processes (FNOL) are compromised, leading to more bad data. Often, different systems are utilized for CAT and daily claims, and guess which one gets the focus? In the rush to get CAT claims into the hands of the field adjuster, data is neglected, and normal workflows often prove unworkable when volumes suddenly surge. Data transmission paths are interrupted and don’t travel via the same channels (yes, many carriers still distribute claims via a faxed FNOL or in an email body, requiring the IA firm to deal with it as best they can), but at all costs, “get the claim to the field adjuster” because “the insured must be contacted within 24 hours.”
In many ways, the field adjuster is still viewed as the red-headed stepchild and left to deal with junk data. As a systems analyst who deals with hundreds of carriers, vendors, systems, and platforms, I can tell you firsthand: the industry gets a “C+”. Considering the importance of good data and the wide availability of tools to force good data at the point of entry, that C+ is being kind; there is no valid excuse.
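As one illustration of what “forcing good data at the point of entry” can look like, here is a minimal, hypothetical FNOL intake check. The field names and rules are assumptions for the sketch, not any carrier’s actual schema; the point is simply that a record with missing or malformed contact data gets flagged before it ever reaches the adjuster:

```python
import re

# Hypothetical required fields for a bare-minimum FNOL record
# (string values assumed).
REQUIRED = ("insured_name", "mailing_address", "phone", "email")

def validate_fnol(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record
    may proceed to the claims system."""
    problems = [f"missing {f}" for f in REQUIRED
                if not record.get(f, "").strip()]
    phone = record.get("phone", "")
    # Accept any format that contains exactly 10 digits (optionally
    # preceded by a +1 country code).
    if phone and not re.fullmatch(r"\+?1?\D*(\d\D*){10}", phone):
        problems.append("phone is not a 10-digit US number")
    email = record.get("email", "")
    if email and "@" not in email:
        problems.append("email looks malformed")
    return problems
```

A gate like this, run where the data is first keyed in, is cheap to build; the person entering the loss still has the caller on the phone and can fix the record in seconds, instead of the adjuster discovering the gap days later.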
If you want the very best outcome from your claim handlers, then give them the very best data to work with. They deserve it. Your insureds deserve it, and frankly, it isn’t hard!
In a follow-up blog post, I’ll identify the most common data errors we see as custodians of our clients’ data. I’ll also make suggestions about how to mitigate exposure to bad data, how to anticipate and auto-correct issues with inbound data, and finally, how to keep the #$%^ from coming downhill in the first place. (Ambitious, right?)