The AI Talk: Key Takeaways

In a recent webinar, Jaywing’s Dr Steve Finlay and Deep Future Analytics’ Dr Joe Breeden were joined by host Lydia Edwards to explore the ever-evolving landscape of artificial intelligence (AI) and machine learning (ML) applications in the financial sector. With over 70 attendees from prominent UK financial organisations, the discussion covered topics from fraud detection to the use of alternative data sources, as the experts debunked the myths surrounding AI and presented the possibilities that await.

1. Your models are only as good as your data.

The speakers stressed the need to tailor models to the specific characteristics of the data and agreed that the effectiveness of machine learning depends on the complexity and nuance of the dataset.

For simpler, more traditional portfolios, machine learning modelling might not be necessary. As Joe put it, “If you’re not bringing in any new data, and if there’s nothing new you need to learn, then why do we need it?”

For more complex datasets, commonly known as “big data”, automation is “almost essential”, said Steve. In an environment where dealing with tens of thousands of variables is not unusual, you need the support of machine learning or AI technology to derive, extract and analyse useful pieces of information.
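As a purely illustrative sketch of what that automation can look like (the synthetic data, variable counts and regularisation strength below are assumptions for the example, not anything covered in the webinar), an L1-regularised model can shortlist informative variables from thousands of candidates for human review:

```python
# Illustrative only: shortlisting variables from a wide dataset.
# The data is synthetic and every setting here is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(1_000, 5_000))  # far more variables than observations
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=1_000) > 0).astype(int)

X_std = StandardScaler().fit_transform(X)

# An L1 penalty drives most coefficients to exactly zero, leaving a
# small shortlist of candidate variables worth a closer look.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
model.fit(X_std, y)

selected = np.flatnonzero(model.coef_[0])
print(f"{selected.size} of {X.shape[1]} variables retained")
```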

Data provenance was a hot topic of the webinar, with improper ownership of data potentially leading to severe consequences for financial services organisations. Joe cited instances where institutions have faced a “death sentence” from the US Federal Trade Commission for building AI models on data they didn’t own. The remedy in these cases is to delete all the models and, if you’re a vendor, for your clients to delete them too. Relying on data without proper ownership poses legal risks, and a thorough examination of contractual agreements is an absolute must.

2. Your models aren’t necessarily biased, but your data might be.

Steve brought this debate to the table: “Is it the models that are biased, or is it the data you’ve got?”

Joe answered, “The data we get reflects the world we live in, and I don’t know any societies on Earth that don’t have bias.”

In the past, linear models and bureau data were given a “free pass” on the assumption that biases were acknowledged and accepted. According to Steve, “We have accepted bias in credit scoring systems and other similar systems built on regression methods, and there is this well-established view that that almost doesn’t matter.”

However, with the introduction of machine learning and alternative data, the burden now falls on financial institutions to prove model impartiality. Steve compared this to self-driving cars: “When humans drive cars, we accept a certain level of accidents, but as soon as you get into the self-driving vehicle world, they have to be almost perfect. To demonstrate that they are as safe as humans isn’t sufficient.”

Steve continued, “We’re now using those machine learning algorithms to bring in data sources that we would never, or would be very unlikely to, have used in regression methods. So the bias looks like it’s worse, but it’s actually the new data.”

Steve shared a pivotal perspective on addressing bias: begin with the data, not the model. He suggested a shift from post-model assessment to pre-model scrutiny, thoroughly evaluating data sources for potential biases and dangers before any model is built. Implementing explainable AI or machine learning models is also highly beneficial here, as they provide transparency to the underwriter, allowing them to understand and investigate why the model has made certain assessments and decisions.
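As a minimal sketch of what that pre-model scrutiny might involve (the column names and threshold here are hypothetical, not a method described by the speakers), you can compare historical outcome rates across groups in the raw data before any model exists:

```python
# Illustrative pre-model bias check on raw data. The column names
# ("group", "approved") and the 0.8 threshold are assumptions.
import pandas as pd

data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Historical approval rate per group, straight from the data.
rates = data.groupby("group")["approved"].mean()

# Disparate-impact ratio: the common "four-fifths rule" heuristic
# flags ratios below 0.8 for further investigation.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Potential bias in the data itself (ratio = {ratio:.2f}): "
          "investigate the source before modelling.")
```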

The dialogue underscored the need to assess data and spot potential dangers at scale, with both speakers envisioning a future where this becomes a collaborative, industry-wide endeavour.

3. AI is NOT going to steal the credit risk modeller’s job.

Unpacking one of the most popular myths in the industry, host Lydia posed the question: Can credit risk modellers take a backseat and let AI fully design risk models and assessment tools with minimal human input?

Joe immediately rejected this notion, stating, “So much of the time, we work with edge cases, where you don’t have enough data. In these situations, you must bring your own knowledge to the problem. It’s not a case of just turning the machine on, because the machine only knows what the data shows.”

Steve expanded on this, commenting, “There’s also the issue of data provenance. In the traditional credit scoring world, you’ve got very well-controlled data sources. For example, you can rely on data from the bureau because you know it is managed, looked after, and maintained in a certain way. But, if I’m pulling data from other sources, like customer services, or any other external data, including open banking data, I don’t necessarily know its provenance. Real expertise is needed here to do what I call the meta-analysis.”

The focus then narrowed to large language models and their potential application in credit risk assessment. Steve noted, “There’s part of the underwriting journey where you could, technically, use the model interface for very basic information gathering, so there’s potentially a front-end use. But for the actual decision-making process, including the creation of bespoke models, lending experience and knowledge is absolutely critical.”

Joe followed on by sharing an insightful experiment in which a large language model attempted to assess credit risk for companies in the Russell 2000. The results, while initially promising, revealed a crucial concern: the reliability and authenticity of the underlying data. It was a cautionary note on the need for due diligence before incorporating generative AI into financial decision-making processes, and on the potential legal consequences of overlooking data ownership.

4. Model risk management principles apply to AI-powered models too.

Steve commented, “There’s almost nothing different in terms of model risk management, in that the assessments you’re making about the inputs, the outputs, the materiality of the models, and any supplier risk are all things you would consider for any other model. You must not put those aside just because it’s AI; all those things still apply.”

LEARN HOW JAYWING CAN HELP YOU MEET AND EXCEED MODEL RISK MANAGEMENT REQUIREMENTS.

5. AI and machine learning models are powerful for fraud detection.

Steve and Joe discussed how machine learning models prove particularly powerful in fraud detection, where considerations such as device usage, time of day, and location play a crucial role. The speakers emphasised the uniqueness of fraud modelling and the benefits of leveraging advanced techniques for screening.

Predictive models have been highly effective at reducing fraud losses. However, it is crucial that they remain agile and respond to changes in the threats an organisation faces. To ensure robust defences, the models must be refreshed frequently and ingest data from multiple sources. By doing this, organisations give themselves the best possible defence against fraud and, crucially, the best level of protection for their customers.
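As an illustration of that “refresh often, from multiple sources” point (a sketch only: the feature names echo the signals mentioned above, but the model choice, synthetic data and cadence are assumptions), a fraud-model refresh might look like this:

```python
# Illustrative sketch of an agile fraud-model refresh. Feature names
# echo the signals discussed (device, time of day, location); the
# model choice, synthetic data and weekly cadence are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["device_risk_score", "hour_of_day", "distance_from_home_km"]

def refresh_fraud_model(X: np.ndarray, y: np.ndarray) -> GradientBoostingClassifier:
    """Retrain on the latest labelled transactions pooled from all sources."""
    model = GradientBoostingClassifier()
    model.fit(X, y)
    return model

# In production this would run on a schedule (e.g. weekly), pulling
# freshly labelled transactions so the model tracks emerging fraud patterns.
rng = np.random.default_rng(0)
X_latest = rng.normal(size=(1_000, len(FEATURES)))
y_latest = rng.integers(0, 2, size=1_000)
model = refresh_fraud_model(X_latest, y_latest)
print(f"Model refreshed on {len(X_latest)} recent transactions")
```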

6. The regulators are no longer a key barrier to innovation, but auditors and validators might be.

Steve noted, “At Jaywing, we’ve done quite a lot of modelling now with our Archetype software, which is a neural network-based tool. What we’re seeing is more and more acceptance of those types of solutions, particularly in the areas we know are open to machine learning, like fraud, for example.”

He added, “We’ve seen some caution from regulators in the UK, but I don’t know of any enforcement action, or any specific organisations in our world that have been reprimanded, specifically for their adoption of AI/ML approaches. So I think that’s a positive sign.”

Joe identified auditors and validators as potential obstacles to the broader adoption of advanced modelling techniques. He expressed concern about the limited hands-on experience of many validators and auditors, pointing out the need for a better rotation of skills between model developers and validators. He also highlighted the lack of a defined qualification or training path for model validators, and the discussion concluded with a call for skill development in these roles: the evolving landscape of sophisticated modelling methods demands a thoughtful approach to training, ensuring the individuals involved have a solid statistical background.

The AI Talk: Busting Myths, Mitigating Risks, Seizing Opportunities

Need support in building, developing, or validating your models?

Speak to our experts