Ben O'Brien

Managing Director

17 May 2021

Upcoming AI regulations and how to get ahead of them

In April 2021, the EU proposed new regulation governing the use of AI. Although the UK has left the EU, the UK lending market remains intrinsically linked to the European banking sector, and the requirements of larger trading blocs tend to set the direction of travel for smaller markets such as the UK. We can therefore reasonably expect UK legislation, or at least the practice of UK lenders, to continue to mirror closely that of our nearest neighbours.

What does the new regulation cover?

The draft legislation seeks to regulate the use of AI in proportion to the level of risk each AI system presents. It is not primarily concerned with financial services: its clauses apply to all uses of AI, in particular where they may affect human rights, freedoms and other fundamental principles.

The legislation will ban AI systems that present unacceptable risk and impose strict requirements on those considered high risk. Lower-risk systems may be subject only to transparency requirements, but any company that develops, distributes or uses AI will need to consider how it will align with the regulatory requirements.

Not doing so could result in significant fines: up to €30m or 6% of total annual worldwide turnover, whichever is higher.

What’s in scope?

The proposed legislation applies a fairly broad definition of AI, covering outputs such as content generation, predictions and recommendations. It specifically includes statistical and machine learning approaches, as well as search and optimisation methods.

On the surface, this could include a variety of activities that lenders already undertake, particularly around credit scoring, where even the development of a simple scorecard using logistic regression would fall within the definition. The same would be true for much of the digital marketing that lenders undertake.
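
To make that concrete, here is a minimal sketch, with entirely synthetic data and illustrative feature names of our own invention, of a traditional logistic regression scorecard. Under the proposal's definition, even this would count as a machine learning system.

    # A minimal, hypothetical scorecard: logistic regression from two
    # illustrative features to a probability of default.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 1_000

    # Synthetic applicant data: age and credit utilisation.
    X = np.column_stack([
        rng.uniform(18, 75, n),   # applicant_age
        rng.uniform(0, 1, n),     # credit_utilisation
    ])
    # Synthetic default flag, loosely driven by utilisation.
    y = (rng.uniform(size=n) < 0.1 + 0.3 * X[:, 1]).astype(int)

    # Fit the scorecard and produce probability-of-default estimates:
    # squarely "machine learning" as the draft defines it.
    scorecard = LogisticRegression().fit(X, y)
    pd_estimates = scorecard.predict_proba(X)[:, 1]

Nothing here is exotic; the point is that scorecards of this kind have been standard lending practice for decades, yet they would sit within the proposal's scope.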

The legislation will apply equally to providers, users, distributors and importers of AI systems and processes that take effect within the EU, even if those entities are headquartered outside it.

Addressing rights and freedoms

As noted above, the regulation is not solely concerned with banking, but it has clear parallels there. For instance, the draft prohibits certain uses of AI, including the use of real-time biometrics in public places for law enforcement purposes; the corresponding implications for financial crime are clear. It defines as ‘high risk’ those uses of AI where it forms part of critical equipment (such as a medical device), or where a system uses biometric data for identification purposes; the use of AI in recruitment, promotion or employment termination is similarly high risk. Most importantly for risk management, the use of AI in evaluating the creditworthiness of individuals also falls into this category.

What are firms and their suppliers compelled to do?

The requirements will feel familiar: provide conformity assessments, report regulatory breaches, establish a risk management system, develop detailed technical documentation, retain logs, and register the system in an EU database. In addition, providers must make it clear to consumers and other end users that they are interacting with an AI system, unless this is obvious from the context of use.
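
As an illustration of the record-keeping point only (the draft does not prescribe a format, and the field names below are our own assumptions), log retention for an automated decision system might look something like this:

    # A minimal sketch: append one record per automated decision,
    # with enough context to reconstruct the decision later.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

    def log_decision(model_version: str, inputs: dict, score: float, decision: str) -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "score": score,
            "decision": decision,
        }
        logging.info(json.dumps(record))

    # Example: record an accept decision made by a specific model version.
    log_decision("scorecard-v1.2", {"credit_utilisation": 0.42}, 0.07, "accept")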

Our thoughts on the regulation

We welcome this focus: it is right that AI systems should be designed, deployed and regulated so as not to disadvantage consumers. Nevertheless, the broad definition of AI in play means that many activities, from acquisition targeting for marketing through to collections modelling, could unwittingly fall into scope and force a lot more friction into existing processes; as currently drafted, all credit scoring activities would meet the definition of AI.

As an organisation that has worked extensively on AI implementation, we have supported a large number of clients in applying sufficient rigour to their use of AI. For instance, the approach taken by our AI credit modelling tool, Archetype, avoids the possibility of developing risk models that generate unwanted results: first by designing governance rules into the very core of the models, and then by lifting the lid on their construction so that they can be understood by the business. Without constraints, an AI model will almost always display some unintuitive behaviours that could disadvantage consumers; our approach designs these out. There is nothing new in the requirement that risk decisions be fully explainable, but achieving that in the new world is often harder. The consequences of breaching legislation such as that proposed by the EU, which may reach our shores soon enough, are severe.
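
To illustrate the general technique of designing constraints into a model (this is a sketch of a common industry approach, not the Archetype implementation), gradient-boosted libraries such as XGBoost accept monotonicity constraints at training time, so that predicted risk can never move in an unintuitive direction as a given input changes. The features and data below are synthetic and purely illustrative.

    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    n = 5_000

    # Illustrative features: months on book, missed payments in the
    # last 12 months, and credit utilisation.
    X = np.column_stack([
        rng.integers(1, 120, n),
        rng.integers(0, 6, n),
        rng.uniform(0, 1, n),
    ])
    # Synthetic default flag, loosely driven by the features.
    logit = -3 + 0.8 * X[:, 1] + 2.0 * X[:, 2] - 0.01 * X[:, 0]
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

    # Governance rule baked into training: predicted risk may only fall
    # as months on book rises (-1), and may only rise with missed
    # payments (+1) and with utilisation (+1).
    model = XGBClassifier(
        n_estimators=200,
        max_depth=3,
        monotone_constraints="(-1,1,1)",
    )
    model.fit(X, y)

A constraint like this rules out, at source, a model that rewards a customer for missing payments: the unintuitive behaviour is designed out rather than patched afterwards.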

Your next steps in AI

Luckily, conformity can be straightforward. To get ahead of the regulation with AI risk models that don’t come with nasty surprises, contact us for a discussion and a demo.