The author

Steve Finlay

Lead Data Science Consultant

04 December 2023

2024: What's next in AI?


2024 is set to be a transformative year for AI, particularly within financial services. Ahead of his webinar on February 13th 2024, Steve Finlay shares key AI insights, focusing on the importance of maintaining a holistic approach that accounts for ethical implications and regulatory compliance to ensure responsible and sustainable AI deployment.

Watch Steve Finlay’s AI predictions for 2024 or, alternatively, read the write-up below.


AI DEVELOPMENTS

AI is an incredibly exciting field that has been rapidly evolving in recent years. As we reflect on the progress of 2023 and gaze into the future of 2024, the potential for AI's impact, especially in financial services and credit risk management, is immense.

One of the things we saw in 2023 was artificial intelligence really coming into the public domain and, in the financial services world, regulators getting much more on board with companies using AI to drive everyday risk processes, measurements and associated activities. Both the PRA and FCA have notably incorporated AI into their discussions and publications, including the PRA's publication on Model Risk Management. This inclusion marks a significant shift in regulators' attitudes, acknowledging the value of AI while urging a cautious approach to manage the associated risks. It removes a barrier that many organisations have had to the fuller adoption of AI.

Many of the larger financial institutions have R&D functions and have introduced AI in certain areas, but until probably the last couple of years it's been the fintech industry that has driven the adoption of AI. However, 2024 is expected to see AI's transition into the mainstream. Some of the traditional technologies will take a back seat and become legacy, and the industry is really going to ramp up its use of AI.

THE IMPACT OF AI ON RESOURCING

Whenever we talk about AI, we often think of data scientists and mathematicians, and whilst those people are incredibly important, one of the risks we've seen is that letting the mathematicians run wild can often result in problems.

The problems arise because the focus is often very much on short-term business benefit, and in the wider world we've seen that occur in many instances, whether it's the use of facial recognition or algorithmic discrimination, for example. I have seen cases myself where the mathematics has taken all the glory and the wider business, social and ethical issues have not been given the attention they deserve. If you think about how we roll out products and services in the wider world, that's not how things work.

Take pharmaceuticals as an example: developing a drug is a big task, but arguably much more of the time and effort involved in bringing a new drug to market goes into the testing, the regulatory approval and so forth. Those same principles should also apply to AI solutions and AI products.

Therefore, while the people who know how to use AI will no doubt be in high demand in 2024, the biggest area of demand is going to be for people who can carry out the oversight and validation processes.

Independent model validation, auditing, seeking regulatory approval – all of these types of roles are going to be in big demand, and this will become an ever-growing area of the industry.