Martin Smith, Director of Strategy & Innovation

AI delivers more accurate credit and risk scoring models than traditional techniques can achieve, but there is an intense regulatory focus on ensuring best practice in the development of non-linear models. 

In this article, we look at some of these areas of focus, and explain how our approach to controlling, and then explaining, a neural network is generating big performance improvements for our clients.

Recently, the Bank of England and the FCA established a public-private working group on AI to further the discussion on ML innovation and explore some of the questions raised by recent research and guidance papers. Holding their first meeting in October 2020, the group considered a number of issues, including:

  • The need for caution around adoption within financial services given concerns over accountability, privacy, and avoiding bias
  • Barriers to entry, due to lack of high-quality data, transparency, governance and adequate documentation of complex models
  • The need to adapt existing risk management frameworks to a non-linear world – through continuous monitoring and validation
  • Ethics, fairness and transparency, and the need for best practice advice.

Meanwhile, the European regulator has issued guidance which touches on similar themes, with a particular focus on explainability and interpretability. Its view is that a model is explainable either when its internal behaviour can be directly understood by humans (interpretability) or when explanations can be provided for the main factors that led to its output. A key consideration is the extent to which the decision being made could have an adverse impact on a customer.

They further note that a lack of explainability could represent a risk in the case of models developed by external third parties and then sold as ‘black box’ (opaque) packages. Explainability is just one element of transparency. Transparency consists of making data, features, algorithms and training methods available for external inspection and constitutes a basis for building trustworthy models.
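To make “explanations for the main factors that led to its output” concrete, here is a minimal sketch of one common technique in credit scoring: ranking each feature’s contribution to a single decision and reporting the largest adverse ones as reason codes. It is shown for a linear scorecard for simplicity; the feature names and weights are entirely illustrative, and this is a generic industry technique, not a description of any particular regulator’s requirement or vendor’s product.

```python
# Minimal sketch: per-decision "reason codes" for a linear scorecard.
# Feature names and weights are hypothetical, for illustration only.

weights = {"utilisation": -1.2, "missed_payments": -2.0, "age_of_file": 0.8}
applicant = {"utilisation": 0.9, "missed_payments": 2.0, "age_of_file": 0.5}

# Each feature's contribution to this applicant's score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# The most negative contributions are the main factors behind a decline.
reason_codes = sorted(contributions, key=contributions.get)[:2]
print(reason_codes)  # ['missed_payments', 'utilisation']
```

For non-linear models the same idea survives, but the contributions must be computed per decision (e.g. via attribution methods) rather than read directly off fixed weights.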

So, it’s understandable that some have concluded that the development of such models requires a degree of compromise: that the best possible results can’t be achieved without trading away the ability to explain the model and understand how it works. Indeed, the FCA argues that an approach of ‘interpretability by design’ rules out more advanced techniques such as neural networks in favour of simpler but less powerful models.

If you believe this conclusion, then two possibilities emerge. The first is that lenders might start to use models where full explainability and control are not features of the modelling approach – taking the best available performance the model data can generate, whilst drawing comfort from easily demonstrated average behaviours. In doing so, they risk making decisions that disadvantage the consumer or are difficult to justify: someone with a completely clean credit report declined for credit, say, or someone with bad credit given huge credit limit increases that they cannot afford to repay. An unconstrained AI model will almost always exhibit such unintuitive and undesirable features for some combination of data inputs.

The second possibility is that lenders might deploy compromised models which are explainable but which don’t fully exploit the benefits of the technology, ceding advantages to their competitors. Using traditional techniques to approximate an AI model reduces its predictive power.

Neither of these feels like an adequate outcome: the first goes against all established principles of credit scoring, and may require lenders to ‘unwind’ their risk models in response to regulatory challenge, bad press arising from unfair decisions, bias or other external pressures. The second goes against sound business sense and means that lenders aren’t making the best possible decisions given their significant investment in data. A compromise between these two extremes is difficult to strike, simply because the first requirement – not getting customer decisions wrong – is so compelling.

It’s our view, therefore, that explainability by itself isn’t sufficient. You also need to control the model, to prevent it from making indefensible decisions.

But the good news is that this trade-off isn’t necessary. These problems have been resolved for neural networks. The approach that we have deployed within our own software, Archetype – mathematically controlling the way the model is trained so that it doesn’t exhibit unwanted behaviours, and then enabling the user to explain the predictive features of the model – is the only one which appropriately balances these concerns. We have delivered interpretability by design AND applied it to the most predictive form of AI models, neural networks.
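As an illustration of what “mathematically controlling the way the model is trained” can mean in general, the sketch below uses a textbook technique – it is not a description of Archetype’s method. One way to guarantee that a small neural network’s score moves monotonically with an input, so that a cleaner credit file can never produce a worse score, is to force every weight to be non-negative via a softplus reparameterisation:

```python
import numpy as np

# Illustrative sketch only: guaranteeing monotone behaviour in a tiny
# neural network by forcing all weights to be non-negative (softplus of
# raw, unconstrained parameters). A generic technique, not Archetype's.

rng = np.random.default_rng(0)

def softplus(z):
    return np.log1p(np.exp(z))  # always >= 0

W1_raw = rng.normal(size=(3, 4))  # 3 inputs -> 4 hidden units
W2_raw = rng.normal(size=(4, 1))  # 4 hidden units -> 1 output

def score(x):
    # Non-negative weights composed with a monotone activation (tanh)
    # make the output monotone non-decreasing in every input.
    h = np.tanh(x @ softplus(W1_raw))
    return (h @ softplus(W2_raw)).item()

# Check: increasing any single input never decreases the score.
x = np.array([0.2, -0.1, 0.5])
assert all(score(x + np.eye(3)[i]) >= score(x) for i in range(3))
```

The point is that the guarantee holds for every value of every variable by construction, rather than being checked on average after training – which is the distinction the article draws between demonstrating average behaviour and controlling the model.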

We say that this trade-off isn’t necessary not only because Archetype is proven to deliver excellent model performance results through over 30 client projects using real data, but also because we can demonstrate that there is no meaningful reduction in model performance when governance rules are applied: the model finds another route to an optimised solution without introducing counterintuitive relationships between variables.

Indeed, we recently undertook a head-to-head comparison of Archetype against a competitor modelling tool, Data Robot. Not only did Archetype outperform Data Robot by 2.5 Gini points (a 5% relative uplift), it did so using only a third as many variables – reducing the risk of overfitting.

Furthermore, within Archetype these variables were constrained to rule out unexpected model behaviour – a feature not available in Data Robot. Other commercial solutions tend either to reduce predictive power or to lack the ability to explain the model’s construction – but that is a design choice rather than a limitation of the technology.
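For readers less familiar with the metric: in credit scoring the Gini coefficient is conventionally related to the ROC AUC by Gini = 2 × AUC − 1, and a 2.5-point uplift being a 5% relative improvement implies a baseline of around 50 Gini points. The short sketch below just makes that arithmetic explicit; the baseline figure is inferred from the numbers quoted above, not stated in the comparison itself.

```python
# Relating the quoted figures. Gini = 2 * AUC - 1 is the standard
# definition; the ~50-point baseline is inferred from the article's
# own numbers (2.5 points == 5% relative), not independently measured.

def gini_from_auc(auc: float) -> float:
    return 2.0 * auc - 1.0

uplift_points = 2.5
relative_uplift = 0.05
baseline_gini = uplift_points / relative_uplift  # -> 50.0 Gini points
new_gini = baseline_gini + uplift_points         # -> 52.5 Gini points

# A 52.5-point Gini corresponds to an AUC of roughly 0.7625.
print(round(gini_from_auc(0.7625) * 100, 1))  # -> 52.5
```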

Consequently, regulators can look favourably on neural network modelling: whilst concerns over the general use of ‘black boxes’ persist, the right techniques enable lenders to develop AI models which behave in a predictable way for every value of every variable in the model. And in doing so, lenders can maximise model performance whilst being able to explain and justify the approach to regulators, customers and internal stakeholders alike.

You can find out more about Archetype here, or get in touch.