Controlling and explaining a neural network

Martin Smith, Head of Product Development


In our recent webinar with UK Finance, Martin Benson and I outlined what we see as the increasing role of Artificial Intelligence within the credit industry. In this blog, we look beyond the hyperbole and into how AI genuinely has the potential to solve real business problems and help credit teams make significant improvements to the way they make decisions, process applications or handle customer enquiries.

At the heart of our interest is the role of AI in making improved credit decisions. There is a fair challenge from the industry around how lenders can ensure that decisions are accurate, understood by the business and justifiable, in a world in which the deep learning processes used are somewhat opaque. This ‘black box’ problem is one of the reasons [1] why neural networks haven’t seen widespread adoption within credit scoring.  It concerns the fact that most lenders want or need to be able to explain their scores to at least one of three audiences:

  • Customers – understanding why individual customers were rejected
  • Regulators – giving assurance around the robustness of credit scores and models
  • Internal stakeholders – demonstrating that the right people are being accepted and rejected

[1] The other reasons, historically, were ensuring that models generalise appropriately (to data other than that used in the model development) and the computing power required to train neural networks on sufficient data, given their comparative complexity. These two concerns have since been addressed by mathematical and computing-power advances, respectively.

Due to the nature of neural network processing, the solutions are almost certain to produce some unexpected or undesirable behaviours, as the non-linear interactions between variables conspire – albeit rarely – to identify cases where risk levels are high despite the customer’s data record looking squeaky clean.

In our predictive modelling product, Archetype, we have developed an answer to the black box problem: a mechanism by which each variable can be constrained so that it always behaves in an expected manner – for instance, ‘credit risk decreases as salary increases’. Constraints are optional per variable and only needed where the business requires a fixed relationship between an input and the output; in certain contexts (fraud identification, collections) such constraints may not be desirable.
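Archetype’s constraint mechanism is proprietary (and patent pending), but the general idea behind this kind of guarantee can be illustrated with one well-known technique: if every weight on the path from a constrained input to the output is kept non-negative, and the activations are monotone, then the output is mathematically guaranteed never to decrease as that input increases. A minimal sketch – the tiny architecture and all values here are purely illustrative, not Archetype’s internals:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer; every weight on the path from the constrained input to
# the output is forced non-negative. Because tanh is monotone increasing,
# the network output is then guaranteed non-decreasing in that input --
# e.g. "the score never falls as salary rises".
W1 = np.abs(rng.normal(size=(1, 8)))   # input -> hidden weights, >= 0
b1 = rng.normal(size=8)                # biases are unconstrained
w2 = np.abs(rng.normal(size=8))        # hidden -> output weights, >= 0

def score(x):
    """Model output for a 1-D array of input values."""
    hidden = np.tanh(x[:, None] * W1 + b1)   # shape (n, 8)
    return hidden @ w2                       # shape (n,)

xs = np.linspace(-3.0, 3.0, 100)
ys = score(xs)
assert np.all(np.diff(ys) >= 0)  # monotone non-decreasing, by construction
```

A decreasing relationship (‘risk decreases as salary increases’) follows by negating the constrained input, and an unconstrained variable simply keeps unrestricted weights.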

Equally, there are some characteristics where the business will have no particular view on how the data ought to influence the outcome. These can be left as free agents, influencing the model outcome as it suits the neural network. Where constraints are applied our solution provides a firm mathematical guarantee that the outputs of the model will always adhere to them – and we have a patent pending for this unique solution to the ‘black box’ problem.

Clearly, in the context of credit scoring there are considerable benefits to gaining this level of control over the behaviour of the model – in fact, there’s a strong argument that it’s a firm prerequisite in many cases. But the great news is that introducing data constraints within a neural network need not significantly reduce the predictive power of the model – we’ve found that constrained models are often only slightly weaker than unconstrained ones. Significant uplifts over a linear approach are still seen even with a fully constrained set of input data.

This approach fundamentally shifts the way in which governance operates. Under normal model governance, the onus is on the modeller to demonstrate after the fact that the model behaves reasonably – and that proof is not easily achieved. Using Archetype, governance instead has an up-front role in determining how the model is to behave, and Archetype is then guaranteed to produce a model which adheres to these business rules.

Once these rules are in place, the business is in a very strong position to re-optimise the model very quickly, and as frequently as needed – because users can re-develop models with confidence, safe in the knowledge that they will operate consistently with the approved Governance framework.

That, in itself, doesn’t resolve the black box problem, of course. There’s still the question of how the user understands how the model has behaved.

That’s where Archetype’s other outputs come in:

  • It produces model performance charts quoting Gini or R-squared measures that will be familiar to any model developer, which indicate the predictive power of the model and confirm that it is not overfitted.
  • It generates performance charts for each variable (both those that are used in the model and those that aren’t) showing how well-aligned the model outputs are to actuals over its range. These charts allow the user to confirm that the model appears to be optimal given the data available to produce it.
  • It shows the marginal impact of each variable within the model: in other words, for a range of data records, what happens to the model output if only the current variable is changed? These charts provide confirmation that the appropriate constraints have been applied, as well as indicating the general ‘shape’ of the contribution of the variable in the model. 
  • It’s also possible to produce a view of which variables were most influential in generating the prediction for any particular case. This enables lenders to state with certainty that a particular customer received a particular score because specific variables differed from those of the ‘average’ applicant.
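The marginal-impact and per-case views in the last two bullets can be sketched in a few lines. Everything here is illustrative – the variable names and the stand-in `model` function are invented, not Archetype’s internals. The marginal curve sweeps one variable across its range while the others keep their observed values; the per-case ranking swaps one applicant’s values into the portfolio-average record, one variable at a time:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))        # 500 synthetic records, 3 variables
names = ["salary", "utilisation", "age"]

def model(X):
    # Stand-in scoring function with a non-linear interaction term.
    return 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2]

def marginal_impact(X, var, grid):
    """Average model output as one variable sweeps its range."""
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, var] = v               # other variables keep observed values
        curve.append(model(Xv).mean())
    return np.array(curve)

def reasons(X, i):
    """Variables ranked by influence for record i vs the 'average' applicant."""
    base = X.mean(axis=0, keepdims=True)
    deltas = []
    for j in range(X.shape[1]):
        Xj = base.copy()
        Xj[0, j] = X[i, j]           # swap in this applicant's value alone
        deltas.append(float(model(Xj)[0] - model(base)[0]))
    order = np.argsort(np.abs(deltas))[::-1]
    return [(names[j], deltas[j]) for j in order]

grid = np.linspace(-2.0, 2.0, 21)
curve = marginal_impact(X, var=1, grid=grid)  # 'utilisation' marginal curve
top_reasons = reasons(X, i=0)                 # most influential variable first
```

Plotting `curve` against `grid` gives the per-variable chart; `top_reasons` is the per-case ranking against the ‘average’ applicant.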

Although significant benefits are available through the use of advanced machine learning techniques in credit scoring, the resulting models still need to be understood by the business.  Most lending businesses will take time to be completely comfortable with AI-based models, and it’s likely that they will take even longer to approve approaches which constantly re-calibrate and automatically learn from new cases.

However, Archetype begins this journey by making it easy to re-configure a model using refreshed data, meaning that lenders can easily re-optimise their scores using an already approved development approach, as regularly as they choose. This avoids the gradual loss of predictive power seen in traditional scorecard development cycles.

Archetype enables the best of both worlds - unlocking the power of deep learning, while also allowing appropriate levels of control and oversight of models. This enables lenders to generate and confidently deploy models that generate significant benefits compared to traditional approaches.

For more information on how AI could help you, get in touch.