Artificial intelligence: make sure you’re aware of the risks

Willis Towers Watson's Tom Srail says insurers should take care when adopting AI solutions 

Caspar Herzberg


This article first appeared in the Summer 2017 issue of The Record.

Artificial intelligence (AI) is expected to become a US$36 billion industry by 2025, affecting almost every sector. The insurance industry is no exception: AI systems will allow insurers to predict risk using multiple data points and algorithms. Armed with this information, they will be able to price premiums with greater granularity and make decisions with greater uniformity.

While its potential is plain to see, AI also presents a number of risks, many of which are overlooked.

The first risk concerns bias. Contrary to the vision of neutrality and objectivity that the technology industry may promote, there is always an element of bias in how an algorithm is written. If AI is used for decision-making, such as predicting risk for insurers, it could lead to a biased decision on a particular individual policy, with the added difficulty that it may be impossible to pinpoint where the bias comes from. Using technology to make a decision lends that decision an air of certainty, but conclusions drawn from machines should never be treated as flawless or absolute. Humans invented these machines, and human reasoning should not be cut out of the decision-making process.

Responsibility is also a concern. Using machines to complete tasks traditionally performed by humans raises the question of who bears the burden of liability if something fails: the machine's manufacturer or its owner? In the automotive industry, autonomous vehicle manufacturers have said they will accept liability in the event of an accident. This is a major shift from the insurance sector's current model, which attributes responsibility to the driver. The legal issues surrounding the liability of machines will only grow as AI develops.

Another issue is privacy. The increased collection and processing of personal data to make machines more intelligent raises questions about who should control that data, whether consent is required and how it should be given. It will become easier to identify an individual, even from minimal data points, and many users may not be aware of how much personal data they give out in the course of their online activities. Too high a regulatory burden might hinder the progress of the market, but the lack of a strong framework may also lead to abuse.

Tom Srail is regional industry leader for Willis Towers Watson’s Technology, Media and Telecommunications practice.

 
