As financial institutions adopt AI, they face the dual challenge of harnessing AI's potential while adhering to stringent regulatory requirements and ethical standards.
In this post, we'll examine current regulatory expectations, discuss strategies for developing explainable and unbiased AI models, and address the critical role of governance and validation.
A quick recap: The regulator’s stance on AI risk models
The UK's regulatory framework does not impose specific barriers on the use of AI in risk modelling. However, regulators such as the PRA and FCA emphasise that AI solutions must meet the same standards as traditional models, including transparency, accuracy, accountability and control.
Challenges of AI complexity
- AI models are more sophisticated than traditional techniques like logistic regression
- This complexity makes it harder to interpret and explain decision-making processes (the short example after this list shows the contrast)
- There are greater social and regulatory concerns about bias with AI-based models
- Regulators expect the same level of oversight, understanding and control as with simpler models
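To make the interpretability gap concrete, here is a minimal sketch (Python with scikit-learn, on synthetic data; all variable names are illustrative) of why a traditional logistic regression is straightforward to explain: each coefficient states directly how a variable moves the log-odds of default, something a gradient-boosted ensemble or neural network does not offer out of the box.

```python
# Minimal sketch: the interpretability of a traditional logistic regression.
# Synthetic data; variable names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "income_k": rng.normal(40, 12, n),             # annual income, thousands
    "months_at_address": rng.integers(1, 240, n),
    "prior_defaults": rng.poisson(0.3, n),
})
# Synthetic default flag, loosely driven by the inputs above.
log_odds = -0.05 * X["income_k"] + 0.8 * X["prior_defaults"] + 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = LogisticRegression(max_iter=1_000).fit(X, y)

# Each coefficient is a direct, auditable statement of how one variable
# moves the log-odds of default -- the kind of explanation that is much
# harder to extract from a complex AI model.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.4f}")
```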
Key compliance requirements
To meet regulatory expectations, organisations must:
- Demonstrate a clear understanding of their AI models
- Implement appropriate governance frameworks
- Maintain full control over AI-driven decisions
- Be able to explain and justify model outcomes to regulators and stakeholders
In short: organisations must demonstrably understand and control their AI models. The challenge is meeting these requirements while continuing to innovate. Here's how.
Steps for building compliant AI risk models
It’s one challenge to create AI models. It’s another to ensure compliance. That’s why implementing AI in risk modelling requires a comprehensive approach. Here are some of the key steps organisations should take:
Step 1: Maintain in-house expertise
Organisations cannot delegate responsibility for AI models. Instead, they must:
- Develop core expertise in AI model building, implementation, and monitoring
- Understand models' strengths and weaknesses within their operational environment
- Be prepared for additional resource overhead when adopting AI-based approaches
Step 2: Create cross-disciplinary collaboration
- Ensure business and regulatory experts work closely with technical experts
- Create common management structures with shared objectives and responsibilities
- Where possible, employ cross-disciplinary experts with both business and technical expertise
Step 3: Develop AI-specific standards
- Create codes of practice detailing legal and ethical requirements for AI solutions
- Establish rules and constraints that must be adhered to
- These may be similar to, or modified versions of, existing model development standards
Step 4: Integrate AI into risk frameworks
- Include model risk alongside other types of risk in risk appetite statements
Step 5: Conduct independent model validation
- Ensure models are independently reviewed by experienced developers
- Recognise that regulators and auditors require evidence of independent validation before models are deemed fit for purpose
- Address the challenge of finding validators with the necessary blend of AI skills, industry knowledge, and regulatory expertise
On top of this, we recommend implementing these best practices:
💡Ensure good practice in variable selection: Use only high-quality data with clear provenance, stability, and guaranteed future availability, and include only data that is fully understood.
💡Apply robust variable reduction: Reduce the number of data inputs to improve model explainability, and remove highly correlated variables to avoid ambiguity (see the sketch after this list).
💡Include business-sensible data relationships: Prioritise data items that display sensible relationships at a univariate level.
💡Implement appropriate model interrogation methods: Use tools that can explain model outputs at both portfolio and individual case levels, as also illustrated in the sketch below.
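To make two of these practices concrete, here is a minimal sketch (Python with scikit-learn and pandas, on synthetic data; column names are hypothetical). It removes one variable from each highly correlated pair, then uses permutation importance as a portfolio-level interrogation of a gradient-boosted model; case-level tools such as SHAP values would extend this to individual decisions.

```python
# Hedged sketch: variable reduction plus portfolio-level model interrogation.
# Synthetic data; column names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

def drop_highly_correlated(X: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop one variable from each highly correlated pair to avoid ambiguity."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return X.drop(columns=to_drop)

rng = np.random.default_rng(42)
n = 2_000
income = rng.normal(40_000, 12_000, n)
X = pd.DataFrame({
    "income": income,
    "income_monthly": income / 12 + rng.normal(0, 100, n),  # near-duplicate
    "age": rng.integers(21, 70, n),
})
y = (income + 500 * (X["age"] - 40) + rng.normal(0, 10_000, n) > 45_000).astype(int)

X_reduced = drop_highly_correlated(X)   # income_monthly is removed here
model = GradientBoostingClassifier().fit(X_reduced, y)

# Portfolio-level interrogation: which variables drive outcomes overall?
result = permutation_importance(model, X_reduced, y, n_repeats=10, random_state=0)
for name, score in zip(X_reduced.columns, result.importances_mean):
    print(f"{name}: {score:.3f}")
# For individual case-level explanations, SHAP values (e.g. shap.TreeExplainer)
# are one widely used option.
```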
We've covered some best practices; now let's delve a little deeper into ethics.
Spotlight: Addressing potential bias
When it comes to addressing bias, the most important step is to design the model appropriately at the outset, rather than trying to review or adapt it once it has been created. That said, a model should always be checked for bias after it is built, and corrected if needed.
In particular, if specific input data displays bias, then consider excluding that data from the model build.
Of course, it is almost impossible to exclude all bias, given the bias inherent in society. Suitable outcome analysis should therefore be performed to ensure that any remaining bias is not unreasonable or unfair. In particular, this means ensuring that model outputs are accurate for each group, even if that means some groups are treated differently.
For example: the gender pay gap is a feature of society that should not exist, but unfortunately does. All other things being equal, men are, on average, granted greater amounts of credit than women because they earn more. The best solution is to address the pay gap directly. However, while progress is gradually being made, it will be many years before the gap is fully closed. In the meantime, financial institutions need a way to manage this problem while treating everyone fairly.
So, in this example, one argument is that "treating fairly" means ensuring that on average, men and women with the same salary receive the same amount of credit—but there are several other approaches that could also be adopted.
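As a rough sketch of what such an outcome analysis might look like (Python with pandas; the decision log and its columns are entirely hypothetical), one could band applicants by salary and compare the average credit granted to each gender within a band:

```python
# Hedged sketch: per-group outcome analysis on a hypothetical decision log.
import pandas as pd

df = pd.DataFrame({
    "salary":       [30_000, 30_000, 45_000, 45_000, 31_000, 44_000],
    "gender":       ["F", "M", "F", "M", "M", "F"],
    "credit_limit": [5_000, 5_500, 9_000, 9_200, 5_400, 8_800],
})

# Band salaries, then compare average credit granted within each band.
df["salary_band"] = pd.cut(df["salary"], bins=[0, 35_000, 60_000],
                           labels=["low", "mid"])
by_group = (df.groupby(["salary_band", "gender"], observed=True)["credit_limit"]
              .mean().unstack())
print(by_group)
print("Within-band gap (M - F):")
print(by_group["M"] - by_group["F"])
```

A persistent within-band gap would flag that applicants with the same salary are, on average, receiving different amounts of credit.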
It is also very important that the data samples used to build AI solutions are as representative as possible: if certain groups are under-represented in the training data, the resulting models will be less accurate for those groups. Where under-representation exists, the data sample can be adjusted (weighted) to provide more equal representation, as sketched below.
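A minimal sketch of that weighting adjustment (Python with pandas; the group labels are illustrative) scales each record inversely with its group's frequency, so that every group carries equal total weight:

```python
# Hedged sketch: inverse-frequency weighting for an under-represented group.
import pandas as pd

groups = pd.Series(["A"] * 900 + ["B"] * 100)   # group B is under-represented

# Weight each record so that every group has the same total influence.
weights = groups.map(len(groups) / (groups.nunique() * groups.value_counts()))

print(weights.groupby(groups).sum())  # both groups now sum to 500.0
# Most scikit-learn estimators accept these weights directly, e.g.
# model.fit(X, y, sample_weight=weights)
```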
The future of AI in risk modelling
Currently, most large language models (such as ChatGPT) lack the capabilities to undertake the specialist tasks involved in building or validating risk models. However, these technologies are evolving rapidly, and it is not inconceivable that specialist versions will emerge in the next few years that can take on significant elements of both model development and model validation.
Challenges with generative AI and LLMs
A further critical challenge lies in the data and modelling approaches used for generative AI and Large Language Model (LLM) applications. These are typically not accessible to end users, which creates significant hurdles:
- Regulatory compliance: Firms struggle to meet their regulatory obligations for model risk, particularly when assessing the appropriateness, completeness, and quality of the underlying data.
- Bias identification: The lack of transparency makes it nearly impossible to identify the root causes of bias in model outcomes.
- Operational risk: Any organisation that embeds externally supplied models within business-critical systems needs an extremely high level of assurance of the ongoing performance, availability and cost of those models.
These issues underscore the importance of careful consideration and regulatory guidance as AI technologies continue to advance in credit risk.
Key takeaways: Balancing ethics, compliance and innovation in AI risk modelling
As AI continues to transform risk assessment in financial services, organisations must strike a delicate balance between innovation and compliance. The journey towards AI-driven risk modelling is complex, but with the right approach, it can yield significant benefits.
Key takeaways:
- Regulatory alignment: UK regulators expect AI-based models to meet the same stringent standards as traditional approaches.
- In-house expertise: Organisations cannot outsource responsibility for AI models. Developing and maintaining internal expertise and model understanding is crucial for compliance and effective model management.
- Cross-disciplinary collaboration: Successful AI implementation requires seamless cooperation between business, regulatory, and technical experts.
- Ethical considerations: Addressing bias in AI models requires proactive design, careful data selection, and continuous monitoring.
- Transparency and explainability: Despite their complexity, AI models must be interpretable and their decisions explainable to both regulators and stakeholders.
- Future challenges: As AI technologies like large language models evolve, new challenges in data transparency and regulatory compliance will emerge.
- Continuous adaptation: The rapidly evolving nature of AI technology and the regulatory landscape demands that organisations remain agile and continuously refine their approaches.
As we look to the future, the key to success will be remaining vigilant, adaptive, and committed to responsible AI implementation.
For practical examples of AI successfully used in credit risk, take a look at these case studies:
- Nationwide was seeking to boost the performance of the application risk models it uses to approve existing customers for unsecured loans.
- Newcastle Building Society wanted to understand whether applying AI-based techniques to mortgage risk modelling could yield additional benefits.
- Secure Trust Bank needed to boost the predictive power of its models to make substantial reductions to bad debt.
Ready to start your AI risk modelling journey? Get in touch; we'd love to help.
You might also like our AI risk modelling maturity model ebook.