I came across an extremely thought-provoking article this morning on Artificial Intelligence (A.I.) and the law, by Dennis Anderson on Law360.com, which began with the following futuristic, bone-chilling scenario:
On the evening of Dec. 23, 2016, at seven seconds after 5:49 p.m., the holder of a renter’s policy issued by upstart insurance company Lemonade tapped “submit” on the company’s smartphone app. Just three seconds later, he received a notification that his claim for the value of a stolen parka had been approved, wire transfer instructions for the proper amount had been sent and the claim was closed. The insured was also informed that before approving the claim, Lemonade’s friendly claims-handling bot, “A.I. Jim,” cross-referenced it against the policy and ran 18 algorithms designed to detect fraud.
The piece goes on to ask a question I have yet to stop thinking about: how do you defend an insurance company's claims decision when the decision is made by an algorithm, not a human being?
Before attempting a reasonably good answer to this question, if there is a good answer to be had at all, a quick review of history is in order: this is not the insurance claims industry's first foray into using artificial intelligence to process claims. Countless bad faith claims in the past have been premised on that very thing, i.e., the use of a computer program to put a value on a bodily injury claim, for example. The very use of software to value a claim was the central theme of the bad faith complaint. Many of those claims, however, were successfully defended by lawyers for insurers who argued that such computer-provided data was merely a starting point, and that claims representatives with blood pulsing through their veins then went to work, taking that piece of information, along with countless other pieces of information, to value, negotiate, and otherwise process the claim in good faith. Call it the Human Element Defense.
Now, a stolen parka, I grant you, is a far cry from a soft tissue neck injury. But it is not hard to see that in the future, algorithms can and will be developed to adjust property damage and homeowners' claims, commercial coverage claims, and, yes, let's be bold here, personal injury claims.
I have spent decades defending bad faith claims, and every defense begins and ends with the same thing: what was the claims representative's thought process? Can that process be traced, documented, demonstrated, and shown in the light of day to be a reasonable approach to a difficult problem?
Are we coming to a time now when claims logs and insurer communications will simply be replaced with massive strings of zeroes and ones? How can you tell a story made out of zeroes and ones?
The immediate question, of course, becomes how to defend a claims algorithm in a bad faith case to a jury of humans, or to a human judge sitting in a bench trial. There is, I'm afraid, no immediate answer, except perhaps this one: it is best to continue to include human beings, in some capacity, in a claims process that may later have to be explained and legally justified to other human beings.
Stated another way, a purely mathematical, algorithmic defense of a bad faith claim may not be fully successful until the day judges and juries are also algorithms, and so, too, are the lawyers.
I hope I’m retired by then.