The benefits of adopting new technologies, such as artificial intelligence (AI), are not always immediately apparent for businesses in all industries and sectors. But for the risk sector, it is now obvious that there are enormous gains to be made. AI can serve a wide range of functions – including fraud detection, risk management, credit scoring, customer service and retention, and improving the quality of financial advice.
There is a growing recognition within the financial services industry of this potential: 85% of businesses say that AI will allow them to obtain or sustain a competitive advantage. Despite this, adoption has yet to become ubiquitous. In fact, research shows that only 32% are actively developing AI solutions, and 11% have not yet started any AI-related activities.
Barriers to adoption
There is always a set of standard barriers that companies face when looking to adopt emerging technologies. Companies need the right infrastructure, the talent necessary to get the most from the tools, and a proper strategy for their use. All of this requires investment and means that take-up can be slow.
But there are barriers specific to AI that have long been of concern even to those organisations that are already convinced of its benefits. While they know that extraordinary results can be achieved, they remain wary of the challenge of understanding and explaining those results.
This is commonly referred to as the ‘black box’ nature of AI models, and it has prevented full implementation of the technology in the risk sector. In credit risk in particular, being able to explain exactly why a customer has been rejected for a mortgage application is essential. Meanwhile, regulators want to know how robust credit decisions are, and internally, organisations need to know that they are accepting and rejecting the right people.
AI systems have historically been unable to provide this information and the highly regulated nature of the risk sector has prevented many firms from being able to adopt it in the ways they would like. When asked, 76% of organisations stated that the potential for biases and lack of transparency was a major barrier to AI implementation.
The rise of ‘explainable’ AI
In the world of finance, insurance and banking, where firms must explain each and every decision taken by the model to both regulators and customers, ‘explainability’ is essential. Ignoring the black box problem when using AI for credit risk, for example, will always hinder the prospects for widespread adoption.
Left to their own devices, neural networks and other techniques will always generate some cases in which the decision the model makes is non-intuitive. Looking at average outcomes isn’t enough – the one customer who gets declined despite a squeaky-clean credit file is, no doubt, the one who’ll end up complaining.
It’s a challenge that we have addressed directly through the creation of ‘Explainable AI’. Here, the analyst remains in control of what data is used and how it is used, and lenders can also ensure that the approach remains ethical. When developing the modelling platform Archetype, we mathematically solved the problem of how to make each model behave intuitively, whilst generating outputs that enable the user to understand what’s going on under the hood.
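The idea of per-decision explainability can be illustrated with a deliberately simple sketch (this is a generic illustration, not Archetype's actual method): a logistic scoring model where each feature's contribution to the decision is surfaced alongside the score, so the main adverse factor behind a decline can be named. The coefficients and feature names below are hypothetical.

```python
import math

# Hypothetical trained coefficients for a toy credit model (illustrative only).
COEFFS = {"months_on_book": 0.04, "missed_payments": -1.1, "utilisation": -2.3}
INTERCEPT = 1.5

def score(applicant):
    """Return an approval probability plus a per-feature contribution breakdown."""
    contributions = {f: COEFFS[f] * applicant[f] for f in COEFFS}
    z = INTERCEPT + sum(contributions.values())
    probability = 1 / (1 + math.exp(-z))
    return probability, contributions

p, reasons = score({"months_on_book": 36, "missed_payments": 2, "utilisation": 0.9})
# The most negative contribution identifies the main adverse factor.
main_reason = min(reasons, key=reasons.get)
print(f"p(approve)={p:.2f}, main adverse factor: {main_reason}")
# → p(approve)=0.21, main adverse factor: missed_payments
```

Because every score decomposes into named contributions, the same breakdown can back both a customer-facing decline reason and a regulator-facing audit trail.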
This goes a long way in improving trust and confidence in the technology and has already increased adoption rates. Businesses want to innovate and stay ahead of competitors whilst remaining transparent and ethical – and this is now possible.
The next step is to ensure they have the proper set-up and the right considerations in place to begin implementation.
Key steps towards implementing AI
Here are the first steps businesses can take to begin integrating AI into their operations:
- Get your AI team in place early
Whether you’re building your own in-house team or getting outside help, it’s important to get an AI team in place early so that they can be involved in all of the decision-making steps.
- What types of problems does your business face?
Rather than implementing AI simply because everyone else is and you think you should, it’s important to think about the types of problems that your business faces. Is there a decision point or process that generates or uses large amounts of data but currently takes a long time to complete? These are the processes that AI can really help with.
- Shortlist candidate projects
AI is not a magic solution and cannot solve everything. It is currently good at tasks that would take a human around a second to complete. These might include:
- Assessing an application (e.g. scoring)
- Recognising an image (is it a bird? Is it a plane?)
- Recognising the intention behind some text (e.g. understanding what agents’ shorthand notes mean)
- Categorising a response
- Working out someone’s customer segment
- Predicting a value or an outcome
- Spotting a data anomaly (e.g. fraudulent behaviour) or a process failure (e.g. faulty parts)
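One of the tasks above, spotting a data anomaly such as fraudulent behaviour, can be sketched in a few lines. The example below flags transaction amounts that sit far from the median, using the median absolute deviation so the outlier itself doesn't distort the baseline; the amounts and threshold are illustrative assumptions.

```python
import statistics

def spot_anomalies(amounts, threshold=5.0):
    """Flag amounts far from the median, measured in units of the
    median absolute deviation (robust to the outliers being hunted)."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [a for a in amounts if abs(a - med) > threshold * mad]

transactions = [12.0, 9.5, 11.2, 10.8, 950.0, 10.1]
print(spot_anomalies(transactions))
# → [950.0]
```

A production fraud model would learn from many features rather than one, but the shape of the task is the same: score each record, flag the ones that don't fit the pattern.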
- Gather a data set together to train the model
To develop an AI model, you will need a few thousand records. Ideally, you should have many characteristics that can be used to predict the outcome, as well as the outcome itself that you would like the AI to learn to predict. It’s important not to include data that could introduce bias or unfairness.
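Preparing such a data set amounts to separating predictive features from the outcome label, and dropping fields that could introduce bias or act as proxies for protected characteristics. A minimal sketch, with hypothetical column names:

```python
# Hypothetical raw records; the column names are illustrative assumptions.
records = [
    {"age": 34, "postcode": "AB1", "income": 42000, "missed_payments": 0, "defaulted": 0},
    {"age": 51, "postcode": "CD2", "income": 23000, "missed_payments": 3, "defaulted": 1},
]

# Fields excluded because they could introduce bias or unfairness.
EXCLUDED = {"age", "postcode"}
TARGET = "defaulted"

def to_training_row(record):
    """Split a record into predictive features and the outcome label."""
    features = {k: v for k, v in record.items() if k not in EXCLUDED | {TARGET}}
    return features, record[TARGET]

X, y = zip(*(to_training_row(r) for r in records))
print(X[0])  # → {'income': 42000, 'missed_payments': 0}
```

Which fields belong in the excluded set is a policy decision as much as a technical one, which is why the AI team should be involved before the data is assembled.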
- Pass the data and problem over to your AI team
Once you know which problems you are looking to solve and have the data, you should pass both to your AI team, who will determine the best type of model to build and whether the data is suitable to predict the outcome you’re after. They will be able to build an initial model and work with your data specialists to understand whether there’s more value available from the available data.
- Test the model
Once you have followed these steps, you should be able to test your model against a different set of data and, if it works, deploy it as your first project and reap the benefits.
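The "different set of data" in this step is usually a held-out portion of the records that the model never saw during training. A minimal sketch of that split-and-measure loop, using a trivial rule-based stand-in for a real trained model:

```python
import random

def holdout_split(rows, test_fraction=0.25, seed=42):
    """Shuffle and split records into training and held-out test sets."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

def accuracy(model, test_rows):
    """Fraction of held-out records the model classifies correctly."""
    hits = sum(model(features) == label for features, label in test_rows)
    return hits / len(test_rows)

# Toy data and a toy rule stand in for real records and a trained model.
data = [({"missed_payments": m}, int(m >= 2)) for m in range(20)]
train, test = holdout_split(data)
model = lambda features: int(features["missed_payments"] >= 2)
print(f"hold-out accuracy: {accuracy(model, test):.2f}")
```

If the hold-out performance matches what was seen in training, the model generalises; a large gap is the signal to go back to the data before deploying anything.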
It’s now a matter of when, rather than if, the use of AI and automation will become standard practice in the risk sector. Adoption is already on the rise, and the benefits have been tangible – especially on the credit risk front, where AI models have significantly outperformed optimised linear models on the same data samples.
Where take-up has been slow, the reluctance has typically arisen from legitimate concerns around transparency and the need to be able to explain how decisions are made by the technology. The emergence of explainable AI should now put these to rest. The next step is to ensure that businesses have the basic set-up in place to enable them to integrate AI into their operations and maximise the benefits.