
Steve Finlay

Lead Data Science Consultant

12 December 2022

(Machine) learning to live with CP6/22

Not that long ago, the financial services industry was somewhat hesitant about the adoption of AI-based technologies such as machine learning. In part, this was driven by the (rightly) cautious stance taken by regulators towards these new, highly complex systems, which were difficult to understand, explain and control.

However, many organisations have realised that, in a world increasingly awash with data, they need to be leveraging AI-based technologies to understand and extract maximum value from that data to remain competitive. Consequently, this reluctance to use AI is melting away. In a recent survey, the Bank of England reported that 72% of financial services organisations said that they used or were developing machine learning-based solutions.

Rather than debating if AI should be used in financial services, the question today is very much how it should be used: in particular, how to use AI for business benefit while mitigating any risks that this new technology may create. Consequently, regulators are stepping up the tempo in terms of thinking about how best to apply regulatory principles in an AI-driven world. As the FCA said recently in relation to AI: “We are moving from fear to trust.”

From a strategic perspective, the Bank of England and FCA’s recent discussion paper (DP5/22) is the latest piece in the jigsaw that forms the UK supervisory authorities’ wider programme of work to address the regulation of AI-based tools across all public and private sectors.

DP5/22 is not proposing new legislation per se. Instead, it aims to move forward the discussion on how well existing regulation covers AI development and usage, and to seek opinions as to what additional protections and safeguards may be required. DP5/22 is broad and far-reaching, but one can see that the principles being discussed are already flowing into industry-specific regulation that is currently on the cards, such as the PRA’s recent consultation paper (CP6/22), which covers Model Risk Management (MRM) principles for banking institutions.

Looking at the wider picture, the timing of CP6/22 makes a lot of sense. At the heart of pretty much every AI system in use today are predictive models. These models are widely applied in everything from the systems used to target customers with offers, through to credit granting, customer management, business forecasting, stress testing and financial planning.

Historically, MRM was mainly seen as the domain of larger organisations as part of the regulatory framework covering models used for capital and impairment. Smaller organisations may have used a few models here and there, but model risk management was a relatively minor consideration in the wider scheme of things. However, as the PRA make clear in CP6/22, while a proportionate approach will be applied based on a firm’s size and complexity, all firms in the wider banking sector will be expected to comply with the regulator’s MRM principles going forward.

What this means is that, in the rush to reap the benefits that AI has to offer, many organisations will need to enact a step change in their approach to model development and usage. Not only do they need to grow their technical expertise and business knowledge to understand and use AI-based tools effectively, but they will also need to expand and upskill those who manage risk within their business to meet the requirements of CP6/22.

So, how should organisations approach CP6/22? The first step is to acknowledge that the PRA now sees MRM as a discipline in its own right and recognises that a specialist skill set is required to undertake it effectively. Organisations therefore need to fill skills gaps where they exist.

Next, firms should review their existing MRM frameworks against the CP and the supervisory statement that will follow, looking across policy, governance and their operating model to produce a gap analysis and a plan to remediate any issues before the expected policy implementation date of Q1 2024.

Finally, when it comes to AI deployments specifically, what needs to be done from a CP6/22 perspective? It can be argued that AI-based models should simply be subject to the same regulatory considerations as traditional (non-AI) models: for example, inclusion in an organisation’s model inventory and independent model validation. In one sense that’s right – all these things should apply to AI. However, the complexity and opaque nature of many AI-based solutions (the black box problem), combined with the use of new and often unstructured data sources, results in a shift in focus.

Traditional models can be subject to issues such as bias, and they also need to comply with general legislation such as the GDPR’s principles of transparency and fairness. However, these issues tend to be more pronounced and more difficult to deal with when it comes to AI. For example, with a traditional credit scoring model, it is relatively simple to reduce gender bias by explicitly excluding data items with known gender bias, such as gross income. However, AI-based systems may implicitly include gender bias through the use of multiple proxy variables that display no association with gender or income individually, but do when combined in a non-linear way. Consequently, increased time and resources are required during both development and deployment to identify and mitigate these types of risk.
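To make this concrete, here is a minimal sketch of the proxy problem using purely synthetic data (the feature construction and effect sizes are invented for illustration, not drawn from any real credit model):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20_000

# Synthetic protected attribute (e.g. gender, encoded 0/1).
# It is never supplied to the model as a feature.
g = rng.integers(0, 2, size=n)

# Two proxy features: each shows ~zero correlation with g on its own,
# but x2 follows +z when g == 1 and -z when g == 0, so the pair
# jointly encodes g through their interaction.
z = rng.normal(size=n)
x1 = z
x2 = z * np.where(g == 1, 1.0, -1.0) + rng.normal(scale=0.1, size=n)
X = np.column_stack([x1, x2])

# Single-variable checks look clean: both correlations are ~0.
print(f"corr(x1, g) = {np.corrcoef(x1, g)[0, 1]:+.3f}")
print(f"corr(x2, g) = {np.corrcoef(x2, g)[0, 1]:+.3f}")

# Yet a non-linear model recovers g from the combination of the two
# individually "clean" features, scoring well above the 0.5 chance level.
X_tr, X_te, g_tr, g_te = train_test_split(X, g, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, g_tr)
print(f"accuracy predicting g: {model.score(X_te, g_te):.2f}")
```

This is also one reason why single-variable screening is not enough for AI-based models: a common supplementary check during validation is to attempt to predict the protected attribute from the model’s full feature set, exactly as the sketch does.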

This need for increased oversight and governance of AI may sound off-putting to those currently exploring the use of AI in their organisation, but all these issues are surmountable and, in one sense, it’s a matter of perspective.

We shouldn’t consider new regulation, such as CP6/22, as a barrier to successful AI implementation, but as a useful control mechanism that forms one pillar of the wider strategy supporting the use of advanced models across the business.

The key to doing this successfully is a two-pronged approach. The first is making sure that one has the right resources and skill sets to draw upon: AI and MRM are both highly specialist areas, and without the right people and skills you are likely to encounter problems. The second is to embed the right MRM practices into your organisation from the very outset, with due consideration of the type, usage and materiality of the models being deployed – and the PRA is likely to want to see evidence of this. Do these two things and success, while not guaranteed, is far more likely.

If you have any questions about AI, the MRM principles or the advice provided, don’t hesitate to get in touch.

Our analytical, AI and modelling expertise can support you when regulatory requirements put pressure on your teams to deliver more than their day job. Regulation shouldn’t be a burden, and with an expert outsourced or insourced arrangement, it doesn’t have to be.