This article first appeared in the Summer 2017 issue of The Record.
Few people would dispute that the insurance industry has struggled to sustain an exemplary customer experience at scale. Evaluating and managing risk in the real world is a complex mix of regulation, nuance, subjectivity and humanity.
As artificial intelligence (AI) technology evolves, the systems that insurance companies use to support their customers’ experience will become more personable. Natural language interfaces and agent bots that advocate and act for the customer are just a few of the technologies that will help them to make this change.
Insurers must ensure that the tools they provide to their workforce keep pace as well. This is critical both for closing the knowledge gaps left by an aging workforce that is starting to retire, and for addressing the more human-centred problems that employees are increasingly expected to grapple with. These include: what events (natural, personal and professional) are likely to happen? What is an appropriate response to a customer in a given situation? How do groups of people across various social networks respond to insurance issues?
In the same way that television was more than just a radio with pictures, AI will provide more than just smarter help systems. Insurers will be left behind if they continue to apply design principles that are optimised only for existing technology. However, AI will help them to push the envelope with DesignOps so they can implement thoughtful, user-centric systems to engage naturally with customers at scale. Similarly, the insurance industry needs to re-visit its long-held beliefs about ‘the way things work’ to encourage innovation.
Moving to AI will bring new challenges for insurers. A business’s reputation can collapse in the blink of an eye if its intelligent systems result in damage to customer privacy, equipment or people. It’s impossible to anticipate all the potential consequences, so carriers need to understand the interplay of risk, design errors, negligence and poor ethical judgment.
Trust relationships between insurer and insured will be challenged. For example, products that involve continuous monitoring of confidential information – such as machine operations, driving habits, and activity in the home – could make the insurer an inadvertent source of damage or ethical misuse.
The notion of acceptable risk, liability and compensatory damages will evolve along with AI. The now classic example of a self-driving car that weighs 'greater good' scenarios prior to a collision does not represent a product failure. Instead, it represents a choice that was informed by, but not actually made by, a human.
By thoroughly understanding these issues, insurance carriers can help articulate and quantify the costs and benefits of the technology.
Steve Magennis is North America insurance lead at Avanade