Brace for an avalanche of AI claims

Claims in which the failure of AI to behave as expected has already caused insurers losses running into millions of pounds are just the forerunners of what is to come

Imagine, if you will, a future a mere stone’s throw from the present day. Mid-afternoon in a care facility for the elderly in Japan. An 82-year-old woman’s grandchildren are visiting for an hour or so, playfighting at her bedside while a softly humming carebot tends to their grandmother’s needs – propping up her pillows, adjusting the blinds. As it busies itself, the carebot observes the two young boys trading playful blows and notes their behaviour. Then, having fulfilled its duties, the robot glides towards the door. En route, it reaches out and strikes one of the boys. Unsurprisingly – to everyone except the carebot – all hell breaks loose.

If this scenario strikes you as fanciful, think again. Robots like this imagined carebot – machines programmed to learn from their environment and the behaviour they observe – already exist. After all, this is how humans learn. But there is a difference. If a small human strikes another small human, they are likely to receive a swift reprimand followed by a realignment of their moral compass, namely ‘hitting people = bad’. If a robot strikes a small human, we find ourselves in an entirely different world. Yet if the robot learns in a similar way to the child, is the robot really defective?

The purpose of this scenario is to demonstrate the challenges that complex artificial intelligence (AI) systems pose to the insurance industry. Already, we are seeing claims in which the failure of AI to behave as expected has caused losses running into the millions, or in some cases hundreds of millions, of pounds. But these are just the forerunners of what is to come. AI systems in the form of automotive construction robots, transportation managers, schedulers and vast storehouses of data have only just gone mainstream. So, the claims we have seen to date are merely the leading edge of a type of claim that has the potential to become ubiquitous in the London insurance market, if the situation is not managed.

Over recent years, the London market has engaged in a systematic hunt for cyber exposures hidden within non-cyber insurance products – so-called non-affirmative cyber. As cyber risk grew significantly, the market and its regulators became concerned that the wording of standard marine and aviation policies, among many others, could conceivably be triggered by a cyber attack or failure – an event for which the policies were not designed to provide protection. Now that same concern is beginning to be voiced about AI-related risks – and with good reason.

Incidents

Just consider for a moment the range and breadth of AI-related incidents that have been reported in recent years. In America in 2015, Wanda Holbrook, a maintenance technician, was killed by a robot at a Michigan motor parts manufacturer. Her death is just one of many examples of workers injured or killed by robots used in manufacturing and product assembly.

But the problems need not relate to physical injury. In 2016, Microsoft launched Tay, an AI chatbot designed to engage with Twitter users and learn from what it read on the platform. Alas, things did not go as planned. Once exposed to the full force of internet commentary, it took less than 24 hours for Tay to begin posting racist and sexist tweets, going as far as to voice support for the views of the Nazi Party. Microsoft pulled the plug on Tay to "make adjustments" but strongly denied it had been at fault. Two years on, Tay has yet to tweet again.

The Amazon Web Services outage in 2017 revealed that a simple typographical error by an engineer can be enough to "take out a sizeable chunk of the internet", as technology bible The Register put it, crippling huge numbers of businesses. Other examples include a major international airline that suffered a week of severe disruption when its systems were accidentally interrupted, and hundreds of millions of dollars of trading losses caused by minor coding errors.

Even with such examples, we are still at a relatively early stage where AI is concerned. But in this short time two things have become clear. First, the increasing complexity of AI systems means that, when things go wrong, the severity of the resulting financial loss is growing dramatically.

Second, that same complexity makes the identification of any defect within the product difficult, to say the least. Take our friend the imaginary Japanese carebot: was its striking of the child the result of a defect in its software, its coding or its hardware? Or was it even a defect at all? At present, there are few, if any, international coding standards in the field of AI by which to assess the existence of a defect.

So how should the market respond to the growth of AI? Assembling a panel of experts for the industry to consult will be vital. Only professionals with an in-depth understanding of AI will be able to disentangle the coding from the software to identify where within an AI system a defect resides.

A review of policies – potentially on the same scale as the hunt for non-affirmative cyber – will also be necessary. What’s apparent from many of these early claims is that underwriters were unaware of the manner in which their insured interacted with the AI system. For example, if an underwriter provides cover for a firm of electrical contractors, one of whose staff accidentally shuts down an AI system, the potential loss could be far larger than anything envisioned. Exclusions will also need to be reviewed and tightened where necessary.

The reality is that AI will permeate more and more of our world, replacing human employees and offering computing power with the ability to learn and adapt. As such, underwriters and claims teams will increasingly need to be aware of these systems and the way in which their products interact with them – assuming, of course, those underwriters are still human beings.

AI brings with it the promise of a revolution in the way we live and work. But all revolutions have their casualties. The London insurance market needs to act now if it is not to become one.

This article was first published in Insurance Day

Neil Beresford, Partner