Implementing IFRS 9 is a huge undertaking for lenders of all sizes. The requirements of this accounting standard are likely to mean more models and an increase in complexity. Yet, while lenders have been busy developing new models, are they taking the opportunity to improve data management infrastructures at the same time?
The level of granularity of data required for IFRS 9 is extremely high, and every lending portfolio needs to be taken into consideration. The challenges lenders will face with IFRS 9 model testing and implementation are: obtaining all required data at a granular level and to the right quality; aggregating it consistently; and consolidating data across multiple source systems using a best practice approach.
From our experience of IFRS 9 and IRB implementations, without a best practice approach to data management, many lenders may end up with tactical data approaches that: are executed inefficiently on each refresh cycle; rely on key individuals to produce the required data; and need data to be double-checked both before and after use because the data production processes are not industrial strength. What’s more, this approach could leave lenders with incorrect Expected Credit Loss (ECL) numbers, which may have a significant impact on provisions.
Generating the right ECL number requires the right models, underpinned by current and historical data of the right quality. As ECL models should present a real-world view of the risk a lender is exposed to in the future, it is essential that the data used for these models can be shown to reflect reality.
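To illustrate why data quality matters at the level of the individual loan, a lifetime ECL is commonly computed as the discounted sum of marginal probability of default (PD) × loss given default (LGD) × exposure at default (EAD) over each forecast period. The sketch below is purely illustrative; the input values and function name are hypothetical, not a reference implementation:

```python
# Illustrative lifetime ECL calculation for a single loan.
# Inputs are hypothetical: per-period marginal PDs, LGDs, EADs and a discount rate.

def lifetime_ecl(marginal_pd, lgd, ead, annual_rate, periods_per_year=12):
    """Discounted sum of marginal PD x LGD x EAD over the loan's remaining life."""
    rate = annual_rate / periods_per_year
    ecl = 0.0
    for t, (pd_t, lgd_t, ead_t) in enumerate(zip(marginal_pd, lgd, ead), start=1):
        discount = 1.0 / (1.0 + rate) ** t
        ecl += pd_t * lgd_t * ead_t * discount
    return ecl

# Three-period example: small marginal default probabilities,
# 40% loss given default, amortising exposure.
print(lifetime_ecl([0.010, 0.012, 0.015], [0.4] * 3, [100_000, 98_000, 96_000], 0.05))
```

Because every term in the sum is driven by loan-level inputs, a small error in the underlying data propagates directly into the provision number.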
Once a lender starts parallel runs, it will generate significant volumes of data across different periods from the different scenarios, simulations, models and model data inputs used. Maintaining data governance and auditability across this entire process is a big challenge.
Unlike traditional point-in-time models, there is an inherent lack of outcome data with which to demonstrate the accuracy of IFRS 9 models, so the onus is on proving that the input data are correct and that the model method is fit for purpose. For the former, being able to prove, through data lineage and regular reconciliation, that the data used for ECL models accurately reflects the state of a lender’s portfolio provides confidence that the models have access to the necessary data. It also means that any issues observed are likely to be real-world or model-method related rather than data issues, which typically take longer to investigate and resolve due to the volume and granularity of the data involved. As well as reducing the probability of data issues, good data management makes the investigation of such issues easier.
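A minimal sketch of the kind of regular reconciliation that builds this confidence is shown below, assuming hypothetical source-system and model-input extracts that carry account identifiers and balances:

```python
# Illustrative reconciliation between a source-system extract and the
# model-input data set. Record layouts and field names are hypothetical.

def reconcile(source_rows, model_rows, tolerance=0.01):
    """Compare record counts and total balances; return a list of breaks."""
    breaks = []
    if len(source_rows) != len(model_rows):
        breaks.append(f"count mismatch: source={len(source_rows)} model={len(model_rows)}")
    source_balance = sum(r["balance"] for r in source_rows)
    model_balance = sum(r["balance"] for r in model_rows)
    if abs(source_balance - model_balance) > tolerance:
        breaks.append(f"balance mismatch: source={source_balance} model={model_balance}")
    return breaks

source = [{"account_id": "A1", "balance": 150_000.0}, {"account_id": "A2", "balance": 80_000.0}]
model = [{"account_id": "A1", "balance": 150_000.0}, {"account_id": "A2", "balance": 80_500.0}]
for issue in reconcile(source, model):
    print(issue)  # flags the balance break for investigation
```

In practice such checks would run automatically on every refresh cycle, so breaks surface before the model runs rather than after the ECL numbers have been produced.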
Volume and lineage challenges
IFRS 9 models (alongside other model implementations such as stress testing and IRB) produce large volumes of both working and output data compared to traditional models, and this data needs to be stored and made available for use. IFRS 9 data will typically be at the level of an individual loan, so the larger the lender’s portfolio, the greater the volume. Portfolios with longer-lifespan products will also create larger volumes due to the need to forecast through each loan’s lifespan.
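A back-of-envelope calculation, using purely illustrative figures, shows how quickly loan-level forecast data grows:

```python
# Purely illustrative sizing: loan-level ECL output grows with portfolio size,
# remaining term and the number of scenarios run.
loans = 1_000_000          # accounts on book
months = 25 * 12           # forecast horizon for a 25-year mortgage book
scenarios = 4              # e.g. base, upside, downside, severe

rows_per_run = loans * months * scenarios
print(f"{rows_per_run:,} output rows per reporting run")  # 1,200,000,000 rows
```

Multiply that by monthly reporting cycles and parallel runs, and the need for industrial-strength storage and retrieval becomes clear.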
Where a lender has a well-built data warehouse linked into source systems, the issue of lineage can be solved relatively easily. Where a lender has a portfolio grown through acquisition, for example, there may be a need to take data from different sources, and there will be a temptation to take data from existing analytic or reporting sources. While this can help with the original provision of data for model development purposes, these sources are commonly not well documented, so lineage can be hard to determine, or at least very time-consuming to establish, even though lineage is a key requirement for model implementation purposes.
Five ways to improve data management for a smooth IFRS 9 implementation:
1.) It’s not too late to start now
To improve data management for IFRS 9, lenders should start thinking about it during the model build phases. Ultimately, the models need to be built against the same data that they run against to ensure that outcomes predicted in development are replicated in live.
Obtain the data requirements for the models as soon as possible: there needs to be a cut-off in model development and refinement to give enough time to implement the data infrastructure the models require. Overrunning requirements are the main reason for rushed or inadequate data implementations.
In addition, agreeing a data structure with model developers as soon as possible helps them build the models against early instances of that structure. This means providing model developers with the required joined-up data rather than relying on them to assemble their own, which helps with consistency of data from development to live.
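One way to pin that agreement down is a shared, versioned record layout that both the data team and model developers build against. The sketch below is hypothetical; the field names are assumptions for illustration only:

```python
# Hypothetical joined-up, loan-level model-input record, agreed between the
# data team and model developers before the data infrastructure is built.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelInputRecord:
    account_id: str
    reporting_date: date
    product_type: str        # e.g. "mortgage", "credit_card"
    origination_date: date
    current_balance: float
    credit_limit: float
    days_past_due: int
    behavioural_score: float
    stage: int               # IFRS 9 stage 1, 2 or 3
```

Once such a layout exists, early extracts can be delivered in exactly this shape, and any later schema change becomes a visible, negotiated event rather than a silent divergence between development and live.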
2.) Make full use of metadata to describe the data
Time spent populating metadata can greatly assist data lineage and the interpretation of data. Allowing users to see descriptions of data, including sourcing and transformations, can greatly enhance the model build and monitoring processes. Building a data dictionary from the metadata assembled during data implementation is a great way of delivering benefits to a wide range of users while achieving documented compliance.
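As a minimal sketch of what such metadata might look like, and how a data dictionary could be generated from it, consider the following. The field names, sources and transformation descriptions are hypothetical:

```python
# Hypothetical metadata entries from which a data dictionary can be generated.
# Recording source and transformation alongside each field supports lineage.
FIELD_METADATA = {
    "days_past_due": {
        "description": "Days the account is past its contractual due date",
        "source": "core_banking.arrears_feed",
        "transformation": "max(0, reporting_date - last_due_date) in days",
    },
    "stage": {
        "description": "IFRS 9 stage derived from staging rules",
        "source": "derived",
        "transformation": "stage 3 if default flag set; stage 2 if SICR triggered; else 1",
    },
}

def print_dictionary(metadata):
    """Render the metadata as a simple, human-readable data dictionary."""
    for field, meta in sorted(metadata.items()):
        print(f"{field}: {meta['description']}")
        print(f"  source: {meta['source']}")
        print(f"  transformation: {meta['transformation']}")

print_dictionary(FIELD_METADATA)
```

Because the dictionary is generated from the same metadata the pipelines use, documentation and implementation cannot drift apart.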
3.) Identify data gaps early on
Data gaps are generally identified during model build, when model developers express a desire for certain types of data or a particular depth of history, only to find such data is not available within the organisation or not easily accessible. Early identification of the gaps helps either in the provision of such data or in a decision that such data cannot be used in the models.
Historic data gaps are a perennial problem (again, particularly with lenders who have grown through acquisition), with such data either unavailable or incompatible with current data and requiring work to make it compatible.
If the gaps are historic, they can usually be filled with one-off extracts transformed into the same format as the current data. If the gaps are in breadth or types of data, then these need to be accommodated in the structures of current data at the earliest opportunity.
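A minimal sketch of such a one-off transformation is shown below; the legacy field names, mappings and unit conversion are all hypothetical:

```python
# Illustrative one-off transformation of a legacy extract into the current
# layout. Legacy field names and mappings are hypothetical.
LEGACY_TO_CURRENT = {
    "acct_no": "account_id",
    "bal": "current_balance",
    "dpd": "days_past_due",
}

def transform_legacy_row(row):
    """Rename legacy fields and normalise units and types to the current schema."""
    out = {LEGACY_TO_CURRENT[k]: v for k, v in row.items() if k in LEGACY_TO_CURRENT}
    out["current_balance"] = float(out["current_balance"]) / 100  # pence to pounds
    out["days_past_due"] = int(out["days_past_due"])
    return out

print(transform_legacy_row({"acct_no": "L123", "bal": "1500000", "dpd": "30"}))
```

Keeping the mapping explicit and scripted also documents exactly how the historic data was made compatible, which supports lineage.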
4.) Bring in the experts
All lenders have individuals who have a good understanding of both the business and the data. These people should be heavily involved in delivering data for IFRS 9, bridging the gap between business teams, who have a deep understanding of the business but little appreciation of good data management, and technology teams, who have a deep understanding of data management but little of the business. What’s more, these teams can gain deep insight by working alongside external consultants who bring a best practice methodology from working with many different lenders on the same problems.
5.) Ongoing data management governance and controls
On an ongoing basis, banks will need to produce IFRS 9 measurements and related disclosures within short timeframes. The systems and processes that banks build, and the associated controls, will need to be sufficiently automated and streamlined to deliver reliable results that are subject to appropriate review and challenge in the required timeframe. The foundation for complying with varied and increasingly complex regulations is best practice data management, as it enables companies to demonstrate effective management of the complete data lifecycle. In addition, a foundation for compliance based on a solid data management strategy reduces the cost of managing new regulations and can result in a clear and competitive capital advantage.
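As one illustration of an automated control of this kind, the sketch below flags portfolios whose ECL moved more than a tolerance between runs, so reviewers can focus their challenge where it matters. Portfolio names, figures and the threshold are hypothetical:

```python
# Illustrative automated control: flag portfolios whose ECL moved more than a
# tolerance between runs. Portfolio names and thresholds are hypothetical.

def ecl_movement_checks(previous, current, tolerance=0.10):
    """Yield a warning for each portfolio whose ECL moved more than `tolerance`."""
    for portfolio, prev_ecl in previous.items():
        curr_ecl = current.get(portfolio)
        if curr_ecl is None:
            yield f"{portfolio}: missing from current run"
            continue
        move = abs(curr_ecl - prev_ecl) / prev_ecl
        if move > tolerance:
            yield f"{portfolio}: ECL moved {move:.1%} ({prev_ecl:,.0f} -> {curr_ecl:,.0f})"

previous_run = {"mortgages": 12_500_000, "cards": 4_200_000}
current_run = {"mortgages": 12_700_000, "cards": 5_600_000}
for warning in ecl_movement_checks(previous_run, current_run):
    print(warning)  # cards breaches the 10% tolerance and is escalated for review
```

Controls like this do not replace review and challenge; they make it feasible within short reporting timeframes by directing attention to the movements that need explanation.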