How can banks make use of explainable AI?
The technology is becoming a critical component in regulated industries like finance

Elly Yates-Roberts

This article was originally published in the Autumn 2019 issue of The Record.

With the growth in data generated across the banking and financial sectors, the use cases for artificial intelligence (AI) in front-office, middle-office and back-office activities in banks are growing steadily, including fraud detection, risk management, predictive analytics, automation and more. However, there are concerns about how well AI models fit regulatory frameworks, as well as risks of bias in machine learning (ML) algorithms, which can stem from poor data quality or from failing to apply the proper business context to the model.

Hence, regulators globally are mandating that financial institutions supply transparent models which can be easily analysed and understood. Many view Explainable AI (XAI) as a critical component in making AI work in heavily regulated industries like banking and finance.

According to the US Defense Advanced Research Projects Agency (DARPA), there are three main approaches to realising XAI. The first approach is 'deep explanation': modified deep learning techniques that learn more explainable features, although the resulting explanations still cannot be easily analysed or augmented by a lay user.

The second approach is 'interpretable models': techniques for learning more structured and interpretable causal models, which could apply to statistical models, graphical models or random forests. However, even logistic regression or decision tree models (which are widely accepted as transparent) turn into black boxes when used with a large number of features. The third XAI approach is 'model induction', which simply adjusts the weights and measures of the inputs to evaluate their effect on the outputs, drawing logical inferences in the process. Other means of model induction interpretability revolve around surrogate or local modelling, in which a simpler, interpretable model is fitted to mimic the behaviour of the black box.
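
To make the surrogate idea concrete, the following is a minimal Python sketch (an illustration only, not any system cited in this article) in which a hypothetical black-box credit model is approximated by a shallow decision tree trained on the black box's own predictions; the feature names and data are invented for the example.

```python
# Global surrogate sketch: approximate a black-box classifier with a shallow,
# inspectable decision tree trained to mimic the black box's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical applicant data: income (thousands), age (years), debt (thousands)
X = rng.uniform([10, 18, 0], [150, 80, 60], size=(2000, 3))
y = ((X[:, 0] > 45) & (X[:, 2] < 30)).astype(int)  # synthetic "approve" label

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns the black box's outputs, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the interpretable surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["income", "age", "debt"]))
```

The depth limit is the key design choice here: a three-level tree stays readable by a human reviewer, at the cost of some fidelity to the original model.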

Another facet of explainability relates to rules, which not only aid explanation but are also influential in customer relations. Anchor Local Interpretable Model-Agnostic Explanations were proposed to explain individual predictions with crisp If/Then rules. However, a crisp anchor model will struggle with variables which do not have clear crisp boundaries, such as income or age, and will not be able to handle models with a large number of inputs.
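
To illustrate that limitation with a toy example (the thresholds below are invented, not drawn from any published anchor system), a crisp rule gives opposite answers for near-identical applicants who happen to fall on either side of a hard boundary:

```python
# Crisp anchor-style rule: IF income > 50,000 AND age >= 30 THEN "approve".
# Applicants a few pounds or months apart can receive opposite answers.
def crisp_anchor(income: float, age: float) -> str:
    return "approve" if income > 50_000 and age >= 30 else "refer"

for income, age in [(50_100, 30), (49_900, 30), (50_100, 29)]:
    print(f"income={income:,}, age={age} -> {crisp_anchor(income, age)}")
```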

It seems that offering the user If/Then rules which include linguistic labels can facilitate the explainability of a model and its outputs. One AI technique which employs If/Then rules and linguistic labels is the Fuzzy Logic System (FLS), which can model and represent imprecise and uncertain linguistic human concepts. Furthermore, FLSs employ linguistic If/Then rules which represent information in a human-readable form.
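
As a minimal sketch of how such a system can work (the labels, membership functions and rules below are hypothetical, written in plain Python rather than any particular FLS toolkit), income is described by the linguistic labels 'low', 'medium' and 'high', and a three-rule base maps those labels to a risk score:

```python
# Type-1 fuzzy inference sketch: linguistic labels over income, a small
# If/Then rule base, and a weighted-average (Sugeno-style) crisp output.
def trapmf(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership: rises from a to b, flat from b to c, falls to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def risk_score(income_k: float) -> float:
    """Fire the linguistic rule base and return a crisp risk score in [0, 1]."""
    # Degrees to which an income (in thousands) is low / medium / high.
    low = trapmf(income_k, -1, 0, 25, 45)
    medium = trapmf(income_k, 30, 45, 70, 90)
    high = trapmf(income_k, 70, 90, 10_000, 10_001)  # open right shoulder

    # Rule base:
    #   IF income is low    THEN risk is high     (0.8)
    #   IF income is medium THEN risk is moderate (0.5)
    #   IF income is high   THEN risk is low      (0.2)
    rules = [(low, 0.8), (medium, 0.5), (high, 0.2)]
    total = sum(strength for strength, _ in rules)
    return sum(strength * risk for strength, risk in rules) / total if total else 0.5

for income in (20, 40, 60, 100):
    print(f"income={income}k -> risk={risk_score(income):.2f}")
```

Each rule reads directly as a sentence, 'if income is low then risk is high', which is what makes the rule base auditable by a compliance team rather than only by a data scientist.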

Another way to represent linguistic labels is to employ type-2 fuzzy sets, which embed all the type-1 fuzzy sets within the Footprint of Uncertainty of the type-2 set. One of many example uses would be the allocation of customers into behavioural groups which are then used to drive expected behaviour in both sales and fraud detection contexts. If customers are allocated to groups based on many variables, some of which interact with each other and some of which have human-like fuzzy boundaries (such as stage of life or socioeconomic group), a crisp If/Then approach can undermine the effectiveness of the model by introducing false simplifications.
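
One way to picture the Footprint of Uncertainty (again a hypothetical sketch, not Logical Glue's patented approach): represent an interval type-2 set for a label such as 'young' by a lower and an upper type-1 membership function, so that each age maps to a membership interval rather than a single number.

```python
# Interval type-2 fuzzy set sketch: the label "young" is bounded by a lower and
# an upper type-1 membership function; the band between them is the footprint
# of uncertainty that absorbs disagreement about where "young" really ends.
def falling_edge(x: float, full_until: float, zero_at: float) -> float:
    """Membership 1.0 up to full_until, falling linearly to 0.0 at zero_at."""
    if x <= full_until:
        return 1.0
    if x >= zero_at:
        return 0.0
    return (zero_at - x) / (zero_at - full_until)

def young_membership(age: float) -> tuple[float, float]:
    """Lower and upper membership of an age in the type-2 set 'young'."""
    lower = falling_edge(age, full_until=25, zero_at=35)  # strictest reading
    upper = falling_edge(age, full_until=30, zero_at=45)  # most generous reading
    return lower, upper

for age in (22, 28, 33, 40):
    lo, hi = young_membership(age)
    print(f"age={age}: membership in 'young' is in [{lo:.2f}, {hi:.2f}]")
```

A downstream rule can fire on the whole interval, so customers near the edge of a group are not forced into or out of it by a single arbitrary cut-off.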

However, FLSs are not widely explored as an XAI technique. One reason might be that FLSs are associated with control problems and are not widely perceived as an ML tool. Logical Glue has produced novel patented systems which use evolutionary systems to generate FLSs with short If/Then rules and small rule bases while maximising prediction accuracy.

Type-2 FLSs generate If/Then rules which get the data to speak the same language as humans. People can easily analyse and interpret the generated models and augment the rule bases with rules that capture their own expertise, covering gaps in the data and providing a unique framework for integrating data-driven and expert knowledge. This gives the user full trust in the generated model and addresses the XAI components related to transparency, causality, bias, fairness and safety. The approach can be seen in areas of customer interaction such as loan decisioning, where it is important not only to make correct decisions on the availability and size of credit, but also to explain them both to compliance departments and to the customers themselves. This clearly helps the bank generate more appropriate offers, but it can also build customer loyalty, because customers are included in the decision process and encouraged by the insight and communication demonstrated by their bank.

Hani Hagras is chief science officer at Logical Glue, a Temenos Company
