This year, many lenders will embark on the journey to gain Internal Ratings Based (IRB) status. Following the Basel Committee on Banking Supervision’s (BCBS) recent announcement of significant reforms to Basel III, including an aggregate output capital floor of 72.5%, lenders face squeezed interest margins and increased competition from the rise of challenger banks. This makes the significant benefits of IRB much easier to quantify.
The first step on the journey to IRB status is to ensure you have your data in order. In this blog, we talk through the three key elements you need to get started.
Why are data important?
Data are fundamental to IRB models – they are the infrastructure on which IRB models are developed, run and reported on. The breadth and depth of data required vary between organisations, and within an organisation between each stage of achieving and maintaining IRB status. Such data can consist of any data held by the organisation, though loan and default history data will typically dominate for IRB.
Interestingly, what is regarded as ‘an organisation’s data’ has changed recently with the use of external data being permitted for organisations seeking IRB status but without sufficient (under previous IRB criteria) loan or default history. The data landscape for IRB is therefore variable but, whether using internal or external data sources, there are principles that can be commonly applied to data management to make it both effective and compliant.
How do lenders currently manage data?
All organisations have different approaches to managing their data, and a ‘one size fits all’ approach is unlikely to succeed. However, in 2013 the BCBS published its standard BCBS 239, which provides principles that can be used to develop effective risk data management.
Based on these principles, we recommend organisations focus their data management approach on three areas: people, processes and knowledge. The approach is not restricted to achieving and maintaining IRB status but is a useful template that can be extended to other areas of data used in risk and beyond.
People

Data can only be taken seriously if the right people with the right perspectives are responsible for managing them. Data need high-level representation and focus within an organisation. There are people in every organisation who are very good with data – their abilities should be recognised, harnessed and channelled whilst following best practice.
Data management should not be seen as a chore to be undertaken before the exciting work of analysis, modelling and reporting can commence. The people who are great with data are often not utilised in the best way: they usually occupy roles in support of small teams rather than being part of a larger Data Management function. Bringing together a team of data specialists who are interested in making data management work will not only go a long way towards achieving a high standard of risk data management but will also free up analytical resources to focus on analysis, modelling and reporting.
Processes

Automated and manual processes for the provision of data should be as short and simple as possible. Data should be subjected to as few transformations as possible and, where transformations are necessary, they should be fully documented in the form of metadata, i.e. data about data.
Transformations can range from the simple renaming of a data item to complex aggregation, but all need to be recorded to preserve lineage. This recording needs to form part of the management process and data should not be provided unless corresponding metadata are also available. The same applies to data quality in that one of the aims for management should be to provide information on the quality of data supplied and to ensure that this information is maintained.
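As an illustration only, the principle of recording every transformation – and supplying quality information alongside the data – could be sketched as a simple metadata structure. All names here (the record types, the field names, the completeness metric) are hypothetical, not part of any standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransformationRecord:
    """One step in a data item's lineage, recorded as metadata."""
    source_field: str   # name of the input data item
    target_field: str   # name of the output data item
    rule: str           # human-readable description of the transformation
    owner: str          # who is accountable for this step

@dataclass
class DataItemMetadata:
    """Metadata ('data about data') that accompanies a supplied data item."""
    name: str
    description: str
    lineage: List[TransformationRecord] = field(default_factory=list)
    completeness_pct: float = 100.0   # an example data quality metric

# Even a simple rename is recorded, preserving lineage and quality info
balance = DataItemMetadata(
    name="month_end_balance",
    description="Credit card balance at calendar month end",
    completeness_pct=99.2,
)
balance.lineage.append(TransformationRecord(
    source_field="CC_BAL_ME",
    target_field="month_end_balance",
    rule="Renamed from source system extract; no value change",
    owner="Data Management",
))
```

The point of the sketch is the rule it encodes: data should not be released for use unless a record like this exists for every transformation applied to them.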
Consistency in the treatment of data is crucial.
Processes also need to be responsive (note that if you have the right people, as mentioned above, half of this particular battle has been won). Nothing is more likely to encourage data users to develop their own data ‘cottage industry’ than not being able to get hold of data or to get data issues rectified promptly. Tactical developments are often necessary, but they need to be incorporated into good risk data management practices as soon as possible.
Knowledge

As well as people who are good with data, organisations typically also have people who understand the organisation’s data very well. Often they will be the same people, and they are the ones users frequently turn to for advice: “Where can I find month end credit card balances?” and “What is the best source for collections data?”. Their knowledge needs to be captured and preserved so that it is readily available. This should be done through data dictionaries that pull together business descriptions of the data, lineage and transformation information, and data quality metrics – much of this captured, created and maintained as part of the data management processes.
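In its simplest form, a data dictionary like the one described above is just a searchable mapping from data item to business description, source, owner and quality metrics. The sketch below is purely hypothetical – the item names, sources and metrics are invented for illustration:

```python
# Hypothetical data dictionary: business description, source system,
# owner and a quality metric per data item, searchable by keyword.
data_dictionary = {
    "month_end_balance": {
        "description": "Credit card balance at calendar month end",
        "source": "cards_ledger",
        "owner": "Cards Finance",
        "completeness_pct": 99.2,
    },
    "collections_status": {
        "description": "Current collections treatment stage for the account",
        "source": "collections_system",
        "owner": "Collections Ops",
        "completeness_pct": 97.8,
    },
}

def find_items(keyword: str) -> list:
    """Return data items whose name or description mentions the keyword."""
    keyword = keyword.lower()
    return [
        name for name, entry in data_dictionary.items()
        if keyword in name.lower() or keyword in entry["description"].lower()
    ]

# "Where can I find month end credit card balances?"
print(find_items("balance"))   # -> ['month_end_balance']
```

Answering the questions users ask then becomes a lookup rather than a hunt for the one person who remembers, and each entry names an owner to whom issues can be reported.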
Data ownership is also important and needs to be publicised. Having plenty of information about the data and knowing who owns them will give users confidence not only to use the data but also to report issues.
Spreading knowledge of an organisation’s data across the user community will increase the appetite for using data, improve overall reliability (‘more eyes’ on the data means more issues will be identified) and provide confidence that the data being used for IRB models and reporting are fit for purpose.
Having the right people, the right processes and the right knowledge will result in effective risk data management. This not only addresses potential regulatory compliance issues but also makes the data easier to use. More importantly, it provides a smoother path to IRB status. How to achieve this will, of course, vary by organisation – but this is where data management can be as exciting as analysis, modelling and reporting!