For as long as I’ve worked in credit risk, which is now more than 30 years, the optimisation of credit decisioning has focused on automated, large-scale, algorithm-based approaches to improve the speed and accuracy of credit risk assessments.
From the 1960s through to the early 2000s, this meant using multivariate statistical methods to replace human underwriters with automated scorecard-based decisions. If the score was high enough the loan application was accepted. Otherwise, it was declined.
In the 2000s, the focus switched to advanced machine learning approaches, including support vector machines and neural networks. Over the last decade, things have advanced further, towards complex deep neural networks and boosting approaches, often utilising hundreds or thousands of individual models working in combination.
Today, deep neural networks and boosting dominate risk prediction for automated decision making. With the rise of Generative AI, attention is now turning to how these tools can also be used within automated decisioning processes.
Why manual underwriting remains a requirement
However, despite this push for increased prediction accuracy and greater automation, the underwriter's role remains, and is likely to remain, for the foreseeable future. There are several reasons for this.
Regulatory requirements and edge cases
In retail lending, such as credit cards and mortgages, there are many regulatory requirements and edge cases that require manual intervention. For example, GDPR gives individuals the right to request a manual review of fully automated decisions. Similarly, if there is a “notice of correction” on a credit bureau file, or a fraud indicator on one of the national databases, firms should review these cases manually as well.
The “personal touch” in lending
Small, often community-focused, lenders place value on the personal touch, building long-term relationships with customers and playing an important role within their communities. This relies on direct human contact between the borrower and a representative of the lender.
Complexity and limited data in corporate lending
In the corporate world, firms often deal with relatively small numbers of customers, with a wide range of borrower types, from farmers to pharmaceuticals, shoe repairers to shipbuilders. There are also relatively few, and sometimes no, defaults, creating real barriers for data-hungry, machine learning-based approaches.
Increasing demand for human intervention
In some areas, demand for manual review is increasing. There is greater focus on customer affordability, Know Your Customer checks and wider due diligence. Many of these processes can be automated to a degree, but some element of human review is often required.
Consequently, it is reasonable to ask how modern technologies, and machine learning and generative AI in particular, can be leveraged to help underwriters do their jobs better. This is not just about efficiency, but also about improving the quality of decisions people make.

How credit decisions are made today
Given the continued role of manual underwriting, it is worth stepping back to consider the different ways decisions can be made.
There are three main approaches used in credit underwriting.
- Machine learning
Ideally, you use machine learning-based models where you can, for fast, accurate, automated decisioning that scales easily and cheaply.
- Manual underwriting
You use manual underwriting where you can’t use machine learning. This works well for odd or unusual cases and allows a wider, more holistic view beyond standard underwriting criteria.
- Generative AI
Firms are trying to do a bit of both of the above with Generative AI. Noting, of course, that machine learning is in itself a key tool for training Generative AI systems. A core strength of Generative AI is its ability to process and manage diverse data types in a more human-like manner.
Each of these approaches offers its own set of benefits.

Many people view this as an either/or choice: you use one approach or the other. However, that is not the question here. Rather, the question is how a better overall solution can be delivered by combining the strengths of each approach.
This principle of collective decision making applies to forecasting in general. Statistical and algorithmic approaches often outperform human judgment, but combining them as part of a hybrid system can produce better results.
Supporting manual underwriting decisions
The focus in the remainder of this article is not on replacing manual underwriting with automation, but on how tools traditionally associated with automation can be repurposed to provide improved decision support, and hence help underwriters do their job better.
This is not a new problem. There are already established ways of supporting human decision-making. One of the most common of these is case-based reasoning.

Case-based reasoning reflects how underwriters already think
What we are going to focus on is a very common and well-established decision support tool: case-based reasoning, or CBR.
This is where an expert refers to past cases to help make a decision about a new case. One field where this has long been established is law. Case law requires lawyers to find similar past cases and then use these to build a legal argument as to why a given outcome is the right one in their current case.
In credit granting, CBR is also well established.
It is used in formal processes through reference to historic case files, for example, previous loan applications. It is also applied less formally through underwriters’ own historical experience and consultation with peers about cases they have worked on.
Historically, case-based reasoning has involved selecting similar cases based on characteristics such as:
- industry sector
- external ratings
- turnover
- financial ratios
- market conditions
- and so on
Underwriters then review similar cases to provide context for the current loan they are underwriting.
For example, an underwriter reviewing a loan application from a grocery business such as “Future Growth Organics” might look at previous loan applications from other grocery businesses. These reference cases are reviewed in terms of the decisions that were made and the outcomes that followed.
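As a rough sketch, traditional characteristic-based case selection amounts to filtering a case history on a few fields. The company names, fields and the turnover band below are all hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    sector: str
    turnover: float   # annual turnover, in millions (hypothetical unit)
    decision: str     # e.g. "approved" / "declined"
    outcome: str      # e.g. "repaid" / "defaulted"

def similar_cases(target: Case, history: list[Case],
                  turnover_band: float = 0.5) -> list[Case]:
    """Select past cases in the same sector whose turnover lies within
    +/- turnover_band (as a fraction) of the target's turnover."""
    lo = target.turnover * (1 - turnover_band)
    hi = target.turnover * (1 + turnover_band)
    return [c for c in history
            if c.sector == target.sector and lo <= c.turnover <= hi]

history = [
    Case("Green Grocer", "grocery", 2.0, "approved", "repaid"),
    Case("Mega Steel", "manufacturing", 40.0, "approved", "repaid"),
    Case("Corner Foods", "grocery", 10.0, "declined", "n/a"),
]
target = Case("Future Growth Organics", "grocery", 2.5, "", "")
matches = similar_cases(target, history)  # only same-sector, similar-turnover cases
```

The underwriter would then review the decisions and outcomes attached to the matching cases. The limitation is obvious: simple rules on a handful of fields can miss relevant cases and include irrelevant ones.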
It is important to note that this type of case-based reasoning operates alongside the firm’s underwriting guidelines and policy. It is not a replacement for them.
So, this is not new. It is a structured way of supporting decisions that underwriters are already making.
Machine learning improves how similar cases are identified
So now let’s think about how machine learning supports the case-based reasoning process, where we are trying to select the best similar cases for underwriters to refer to.
Machine learning encompasses a broad range of algorithmic and data analysis techniques, from simple statistical regression to large, complex neural networks.
In credit risk modelling, supervised machine learning, which uses outcome data, such as whether a loan defaulted, to make predictions, is probably the best-known type.
There is, however, another family of machine learning algorithms that does not require outcome data. These are unsupervised machine learning approaches.
Unsupervised machine learning focuses on similarities, correlations, linkages and dependencies between data items. The aim is to build a representation of the data rather than to make predictions.
Examples include:
- graph-based approaches based on data linkages
- clustering methods, which measure differences across multiple data dimensions
These approaches generate similarity scores between cases, allowing underwriters to be presented with the most similar cases to the one they are currently assessing.
For instance, an underwriter reviewing a loan application from “Future Growth Organics” can be shown similar historical applications, along with the decisions that were made and the outcomes that followed.
Because machine learning can consider hundreds or even thousands of features in each application, it provides a more nuanced way to select cases than manual approaches based on simple rules.
It can also be tailored, such as by weighting certain factors more heavily or focusing on specific types of similarity. If lending policy or risk appetite changes, this can be reflected in the model's identification of comparable cases.

Generative AI enhances interpretation and usability
That’s all well and good. However, even if a machine-learning-based process can efficiently select the most relevant cases, the amount of information the underwriter must review can still be very large.
This may include company reports, credit reports, audit reports and so on, which can run to hundreds of pages and contain thousands of individual data points.
Generative AI, and large language models (LLMs) in particular, can condense this information into what is most relevant to the underwriter. They can extract key points, highlight specific features and present information in a more usable format.
This ability to distil, compare, and contrast information is one of the strongest features of LLMs and is generally less prone to errors and hallucinations than in more open-ended use cases.
The LLM’s human-like interface also allows underwriters to explore the data directly, without needing coding skills such as SQL or Python. For example, an underwriter might prompt the LLM with questions such as:
- “Please provide cases that are most similar based on turnover, EBITA and industry sector”
- “Please provide examples of similar firms that have had negative profits at any point in the last three years”
The model can also be provided with the firm’s underwriting policy and generate an initial recommendation based on both policy and similar past cases. The underwriter then decides whether to accept that recommendation.
Importantly, this approach supports case reasoning. Advanced LLMs can summarise the key factors behind a recommendation, including the data used. This provides a clear audit trail.
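One way to picture this step is as prompt assembly: the policy, the applicant summary and the machine-selected similar cases are combined into a single, auditable prompt for the model. The sketch below deliberately stops short of calling any actual LLM API (which provider and client library is used is an assumption left open), and all the field names are hypothetical:

```python
def build_review_prompt(policy_text: str, applicant_summary: str,
                        similar_cases: list[dict]) -> str:
    """Assemble a decision-support prompt from policy, applicant data and
    the similar cases selected upstream. The resulting string can be sent
    to whichever LLM the firm uses, and logged for the audit trail."""
    cases = "\n".join(
        f"- {c['name']}: decision={c['decision']}, outcome={c['outcome']}"
        for c in similar_cases)
    return (
        "You are assisting a credit underwriter. Using the lending policy "
        "and the similar historical cases below, summarise the key factors "
        "and give an initial recommendation with your reasoning.\n\n"
        f"Lending policy:\n{policy_text}\n\n"
        f"Applicant:\n{applicant_summary}\n\n"
        f"Similar past cases:\n{cases}\n")

prompt = build_review_prompt(
    "Maximum loan-to-value of 80% for food retail.",
    "Future Growth Organics: grocery, turnover 2.5m.",
    [{"name": "Green Grocer", "decision": "approved", "outcome": "repaid"}])
```

Because the prompt itself records exactly which policy text and which cases informed the recommendation, logging it alongside the model's response gives a natural audit trail, while the underwriter still makes the final call.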
Used in this way, generative AI forms part of a broader decision support approach:
- better decisions, based on improved case selection
- greater efficiency, through clearer presentation of information
- improved consistency, supported by an auditable trail of reasoning
All of this is grounded in data drawn from defined, controlled sources, aligned to internal data quality standards.
A hybrid approach: combining ML, GenAI and human judgement
Manual underwriting is, and probably always will be, a requirement in credit risk decisioning, albeit for a minority of cases in certain sectors and jurisdictions. However, it is possible to improve customer, business and regulatory outcomes by supporting underwriters with two forms of modern technology:
- Machine learning
Traditional, particularly unsupervised, machine learning approaches can be used to better identify useful past cases that provide insight into the current case being assessed.
- Generative AI
This can then be augmented with generative AI tools to process the data from those cases, distil it into an information-rich and easily understood format, and support interpretation.
This can involve both the AI and the underwriter to varying degrees, but it is the underwriter who retains the final decision-making responsibility.

By embracing both traditional machine learning and generative AI tools, underwriters can improve the efficiency, consistency and quality of their decision-making, while remaining firmly in control of the final outcome.