How Small and Midsized Banks Should Address CECL
The Financial Accounting Standards Board (FASB) issued a new accounting standard in June 2016, introducing the current expected credit losses methodology (CECL) for estimating allowances for credit losses. While most banks were required to begin complying after December 15, 2019, smaller and midsized banks do not need to comply until after December 15, 2022.
What does CECL require banks to do?
Unlike the previous "incurred loss" standard, under which banks recognized credit losses on an asset, like a loan, only once a loss had become probable, CECL requires banks to forecast expected losses across their current balance sheet to determine the amount of reserves to have on hand.
Forecasting, however, is tough, and even U.S. regulatory agencies realize that no one is going to get it exactly right. But that is not an excuse to completely miss the mark: the responsibility is on banks to do their due diligence and prepare credible forecasts. The specific language that the Office of the Comptroller of the Currency (OCC) uses is "reasonable and supportable."
Historical data will play an integral role in producing reasonable and supportable forecasts, but how one leverages historical data, ranging from the type of data to the statistical model, heavily influences whether the forecasts are actually reliable and practically useful.
Unpacking CECL
To understand the new requirements, it helps to look at the standard's own language:
326-20-30-9: An entity shall not rely solely on past events to estimate expected credit losses. When an entity uses historical loss information, it shall consider the need to adjust historical information to reflect the extent to which management expects current conditions and reasonable and supportable forecasts to differ from the conditions that existed for the period over which historical information was evaluated. The adjustments to historical loss information may be qualitative in nature and should reflect changes related to relevant data (such as changes in unemployment rates, property values, commodity values, delinquency, or other factors that are associated with credit losses on the financial asset or in the group of financial assets). Some entities may be able to develop reasonable and supportable forecasts over the contractual term of the financial asset or a group of financial assets. However, an entity is not required to develop forecasts over the contractual term of the financial asset or group of financial assets. Rather, for periods beyond which the entity is able to make or obtain reasonable and supportable forecasts of expected credit losses, an entity shall revert to historical loss information determined in accordance with paragraph 326-20-30-8 that is reflective of the contractual term of the financial asset or group of financial assets. An entity shall not adjust historical loss information for existing economic conditions or expectations of future economic conditions for periods that are beyond the reasonable and supportable period. An entity may revert to historical loss information at the input level or based on the entire estimate. An entity may revert to historical loss information immediately, on a straight-line basis, or using another rational and systematic basis.
Here are the three main points:
1. Estimates of loan reserves based on historical data alone are insufficient
Standard forecasts combine historical data points and predict future values from a linear trend plus an autoregressive component (i.e., the value in the previous time period). But these forecasts fail to account for the non-linearities and structure inherent in the data. For example, for most small and midsized banks, national trends are not informative about the loan reserves they will need given their regional exposure – that is, what happens in Nashville, TN can be quite different from what happens in the nation as a whole.
By relying on aggregate data and simple statistical models that abstract from realistic interactions among variables of interest and from local information, forecasts are unreliable at best and misleading at worst. In short, historical data can and should be used, but it must be paired with a sound data-driven infrastructure and theory to harness its full capabilities.
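To make the critique concrete, here is a minimal sketch of the kind of trend-plus-autoregressive forecast described above (the numbers are invented placeholders, and the point is what the model leaves out, not the specific fit):

```python
import numpy as np

# Illustrative placeholder series: annual national loss rates (percent).
# A real exercise would use the bank's own history and regulatory data.
y = np.array([0.9, 1.1, 1.4, 2.5, 2.1, 1.6, 1.2, 1.0, 0.9, 0.8])
t = np.arange(len(y))

# "Standard" forecast: linear trend plus an AR(1) term, fit by least squares.
X = np.column_stack([np.ones(len(y) - 1), t[1:], y[:-1]])  # intercept, trend, lagged value
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# Roll the fitted model one year forward.
forecast = beta @ np.array([1.0, len(y), y[-1]])
print(f"Trend + AR(1) forecast for next year: {forecast:.2f}%")

# The limitation: nothing in this single national equation captures how a bank's
# local footprint (say, Nashville, TN) deviates from the national trend.
```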
2. Qualitative adjustments to historical data can be based on judgment, but that judgment should be anchored in relevant data
Good or bad, there are many phenomena that are not easy to convert into comparable categorical or continuous values for predictive modeling. While that should not stop us from paying attention to such qualitative factors, they need to be integrated into quantitative models in the right way to have a credible interpretation. Fortunately, there are almost always empirical counterparts to qualitative insights, which can be leveraged to efficiently integrate the qualitative factors into the model in a way that is scalable, transparent, and reliable.
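As a rough, hypothetical illustration (every number and variable name below is invented), a qualitative insight such as "a major local employer has announced layoffs" can be translated into an adjustment of a quantitative input the model already uses, rather than an unanchored add-on to the final reserve figure:

```python
# Hypothetical example of anchoring a qualitative judgment in data.
county_unemployment_rate = 3.8   # latest observed rate, percent (placeholder)
announced_layoffs = 1_200        # from local news and management judgment (placeholder)
county_labor_force = 95_000      # e.g., from public labor statistics (placeholder)

# Express the qualitative event as a shock to an input the model understands.
expected_shock = 100 * announced_layoffs / county_labor_force
adjusted_unemployment = county_unemployment_rate + expected_shock

# The adjusted input then flows through the loss model like any other variable,
# keeping the judgment transparent, scalable, and auditable.
model_inputs = {"county_unemployment_rate": round(adjusted_unemployment, 2)}
print(model_inputs)
```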
3. Entities can, but are not required to, develop reasonable and supportable forecasts over the contractual term of an asset; for periods beyond that forecast horizon, they should revert to credible historical estimates
Since many loans are originated with long anticipated life cycles, it is impossible to predict the expected losses in these categories without the use of historical data. For example, forecasting how many 30-year fixed-rate mortgages originated in 2020 will go delinquent is a loaded question: a loan could go delinquent at any point over its 30-year life, so the only way to tractably model these expected losses is to rely on the life cycle of comparable loans that were originated in the past and compare apples-to-apples as best as possible.
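A minimal sketch of that apples-to-apples logic, with invented vintage numbers standing in for real loan-level history, might look like the following:

```python
import numpy as np

# Illustrative placeholders: cumulative loss rates (%) by years since origination
# for past vintages of comparable 30-year fixed-rate loans.
historical_vintages = np.array([
    [0.1, 0.4, 0.9, 1.5, 1.9],   # vintage A, years 1-5
    [0.2, 0.5, 1.1, 1.8, 2.3],   # vintage B, years 1-5
    [0.1, 0.3, 0.8, 1.4, 1.8],   # vintage C, years 1-5
])

# Average life-cycle curve across comparable historical cohorts.
lifecycle_curve = historical_vintages.mean(axis=0)

# Apply the curve to the current cohort's balance to trace an expected-loss path;
# years beyond the forecast horizon revert to this historical information.
current_balance = 50_000_000  # placeholder balance for the 2020 cohort
expected_losses = current_balance * lifecycle_curve / 100
for year, loss in enumerate(expected_losses, start=1):
    print(f"Year {year}: expected cumulative loss ${loss:,.0f}")
```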
What data is relevant for creating a forecast?
That raises the question: what data is relevant for creating a forecast? As far as we know, existing consulting services and software packages fall into one of two categories:
1. Forecast economic conditions using national data
2. Produce multiple scenarios of high, medium, or low forecasts based on qualitative factors
Each approach has substantial limitations.
Approach #1 takes a one-size-fits-all approach using 1980s econometric methods, rather than state-of-the-art machine learning models that can accommodate big data at more disaggregated levels, such as a ZIP code or a county. Given that small and midsized banks are inherently local and specialize in niche areas, a one-size-fits-all approach will force banks to hold more reserves than they really need based on their own risk profile.
Setting aside the importance of regional heterogeneity and the specific ecosystem that each bank operates in, off-the-shelf macroeconometric tools are highly inaccurate. These packages do not follow best practices from computer science, where, for example, the modeler holds out a subset of years and uses the model to produce an out-of-sample forecast that is compared with the held-out data. Instead, the models are fit to the entire dataset, producing in-sample fits that are overly optimistic.
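For readers who want to see what that best practice looks like, here is a minimal backtest sketch (the series and the simple trend model are placeholders, not anyone's production methodology):

```python
import numpy as np

# Placeholder series of annual loss rates (%); the last three years are held out.
y = np.array([1.0, 1.2, 1.1, 1.5, 2.4, 2.0, 1.6, 1.3, 1.1, 1.0, 0.9, 0.8])
holdout = 3
train, test = y[:-holdout], y[-holdout:]

# Fit a simple trend model on the training years only.
t_train = np.arange(len(train))
slope, intercept = np.polyfit(t_train, train, deg=1)

# Forecast the held-out years out of sample and score the error honestly,
# instead of reporting an in-sample fit over all of the data.
t_test = np.arange(len(train), len(y))
forecast = intercept + slope * t_test
rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"Out-of-sample RMSE on held-out years: {rmse:.2f}")
```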
Moreover, these forecasts can change on a whim; organizations routinely update forecasts to "take into account updated economic conditions" without conceding that their earlier forecast was off. Simply put, forecasting today is highly inaccurate; the equations may give an impression of rigor, but in reality the forecasts fail to reflect the dynamic features of the data.
Approach #2, while much simpler, also has its own challenges, namely the arbitrary nature of the high, medium, and low forecasts. Who is to say that 3% gross domestic product (GDP) growth is high and 1% is low? A recent vein of economics research has pointed out that productivity growth has slowed over the past two decades, with some economists calling it an era of "secular stagnation," so does that mean that 1% GDP growth is high? The problem is that the thresholds are arbitrary and do not take advantage of the vast amounts of available data that could be used to discipline the forecast.
Dissatisfaction with current forecasting methods abounds
While the large banks have their own teams to handle bank-specific forecasts, which often make the news when these forecasts are released, smaller and midsized banks do not have the budget to hire teams of data scientists and economists. Michael Gullette, senior vice president, tax and accounting at the American Bankers Association (ABA), has commented that ABA roundtables have revealed substantial concern among these smaller banks about how to make adjustments to their forecasts based on emerging economic events. "The big banks had a grip on that," Gullette said. "They've understood progressively over the last few years that that is where we're headed, but the smaller banks [under $30 billion in assets] not so much. They're saying, 'How do we support these adjustments?'"
A data-driven approach is needed
We believe that sophisticated forecasting tools should be available to everyone, much like consumer products and services (e.g., the internet and iPhones). Using data from a wide array of seemingly disparate sources, ranging from the Census Bureau to the Bureau of Labor Statistics, we build highly reliable predictive models around each asset class. Crucially, we include local information, rather than just national averages, when computing forecasts for an area and asset class.
Further, we can explicitly show our degree of accuracy using standard methods from computer science, often referred to as cross-validation. That way, we can show exactly how accurate our models are and quickly identify where there are gaps so that they can be rectified and caveated.
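As a rough sketch of what that validation entails (with a placeholder series and a deliberately simple trend model standing in for the real one), a rolling-origin cross-validation refits the model on an expanding window of years and scores each one-year-ahead forecast:

```python
import numpy as np

# Placeholder series of annual loss rates (%).
y = np.array([1.0, 1.2, 1.1, 1.5, 2.4, 2.0, 1.6, 1.3, 1.1, 1.0, 0.9, 0.8])

errors = []
for end in range(5, len(y)):                     # require at least 5 years of history
    train = y[:end]
    slope, intercept = np.polyfit(np.arange(end), train, deg=1)
    one_step_forecast = intercept + slope * end  # forecast the next, unseen year
    errors.append(abs(one_step_forecast - y[end]))

# Average accuracy across folds, plus the worst fold, which flags gaps to caveat.
print(f"Mean absolute one-year-ahead error: {np.mean(errors):.2f}")
print(f"Worst fold error: {max(errors):.2f}")
```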
Once we obtain predictions for each asset class, we have both its contemporaneous value and a forecast. We run that forecast forward for however many years remain on the contractual term of the asset, whether it is a 30-year fixed-rate mortgage or a 5/1 adjustable-rate mortgage. We also obtain a confidence interval over these estimates so that we have a band of uncertainty.
Then, the decision about how large a reserve to hold is up to the bank: how much do you need to set aside so that you do not fall below a particularly worrisome threshold?
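As a final, rough sketch of how those pieces fit together (all numbers below are invented for illustration), the reserve decision can be framed as choosing a percentile of a simulated lifetime-loss distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder point forecast: annual expected loss rates (%) over the remaining
# contractual term of a 30-year fixed-rate mortgage pool.
years_remaining = 30
base_forecast = np.linspace(0.30, 0.45, years_remaining)

# Simulate uncertainty around the point forecast to form a band of estimates.
simulated_paths = base_forecast + rng.normal(0.0, 0.10, size=(10_000, years_remaining))
lifetime_loss = simulated_paths.sum(axis=1)          # lifetime loss rate per path, %
low, high = np.percentile(lifetime_loss, [5, 95])

# The bank's choice: hold reserves that cover losses up to a chosen percentile.
balance = 200_000_000                                # placeholder portfolio balance
reserve = balance * np.percentile(lifetime_loss, 95) / 100
print(f"90% band for lifetime loss rate: {low:.1f}% to {high:.1f}%")
print(f"Reserve sized to the 95th percentile: ${reserve:,.0f}")
```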
If you are interested in learning more, please contact us or sign up to follow our research and thought leadership on forecasting, credit, risk, and regulatory compliance in financial services.