How To Master The Expanding Credit Universe

As a universe is wont to do, the credit-eligible universe is expanding, with new scoring models and information sources — and more of them than ever before. Sarah Davies, SVP of Analytics, Product Management & Research for VantageScore Solutions, shares advice on how lenders can best leverage these new tools to capture a larger market share.


As it was for many financial institutions, the recession was a tough time for consumer lenders. Now that we’re on the other side of it, though, many lenders — as Sarah Davies, SVP of Analytics, Product Management & Research for VantageScore, recently discussed with MPD CEO Karen Webster — are still stuck in their old ways of credit scoring and assessment, methods that were arguably flawed to begin with.

The credit universe is opening up to a host of new scoring models and information sources, and it’s in lenders’ best interests to explore every viable option and potentially take on a sizable number of creditworthy consumers who were previously ignored because of legacy approaches. Davies and Webster talked through the best ways lenders can seize the day.


KW:  Last we spoke, we spent a lot of time looking at and talking through scoring models — which is of course the business of VantageScore.

Today I want to take the conversation into a little more depth around the three things that I think are very relevant for a lender to make the right lending decision: having access to the right information; having a risk model that they are comfortable with; and determining the lending-decision time frame.

What are your insights and observations of those three variables, as they relate to lenders making good credit decisions?

SD:  It’s a very interesting time. Now that we are far removed from the recession, we can actually learn about the interaction of those three variables.

Let’s start with the idea of a relevant time frame. If a lender were to change nothing about their strategy, nothing about the model they were using, and nothing about the information within it, a sensitivity to the time frame alone could still eliminate a tremendous amount of exposure in their credit management strategies.

A popular perspective historically has been that the credit score has a fixed relationship to risk. In reality, though, it does not.

A 620 credit score, let’s say in 2005, was associated with a default rate of roughly 3 percent. Prior to the recession, if a lender had a strategy in place with a 620 score cutoff, they were basically saying their P&L could handle a 3 percent default rate. Fast-forward to the middle of the recession, and that 620 — for all the reasons we’re well aware of — equated to a 6 percent default rate. A lender that had not updated that score cutoff and was still using a 620 was now bringing twice as much risk, and twice as much loss, into their business. During the recession, that cutoff should have been raised to maybe a 720 or a 750 in order to maintain the 3 percent default rate.
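
To make that concrete, here is a minimal sketch of the cutoff adjustment Davies describes — re-deriving the score cutoff from an updated performance chart so the target default rate holds. The chart values below are hypothetical illustrations built around the figures in this conversation, not published VantageScore data.

```python
# Hypothetical performance charts: score band -> observed default rate.
# These values are illustrative assumptions, not published figures.
CHART_2005 = {580: 0.050, 620: 0.030, 660: 0.020, 700: 0.015, 740: 0.010}
CHART_RECESSION = {580: 0.100, 620: 0.060, 660: 0.045, 700: 0.037,
                   720: 0.030, 750: 0.022}

def cutoff_for_target(chart: dict[int, float], target_rate: float) -> int:
    """Return the lowest score band whose observed default rate does not
    exceed the default rate the lender's P&L can absorb."""
    eligible = [score for score, rate in sorted(chart.items())
                if rate <= target_rate]
    if not eligible:
        raise ValueError("no score band meets the target default rate")
    return eligible[0]

TARGET_DEFAULT_RATE = 0.03  # "their P&L could handle a 3 percent default rate"

print(cutoff_for_target(CHART_2005, TARGET_DEFAULT_RATE))       # -> 620
print(cutoff_for_target(CHART_RECESSION, TARGET_DEFAULT_RATE))  # -> 720
```

Re-running that calculation against each period’s chart is what keeps the cutoff honest; holding the 2005 cutoff through the recession is exactly the doubled-loss scenario described above.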

When it comes to the relevance of time frame, it is critical for lenders to have their scores and their businesses set around the most current information available. A lot of them weren’t necessarily following that strategy during the recession; had they done so, they would now — on the other side of the recession — find that the risk associated with a 620 is much lower, perhaps at a 2 percent default rate. Instead, as a result, they’re unnecessarily excluding a large number of consumers and missing out on a lot of opportunities.

Keeping in sync with the risk-to-score relationship is essential, and that’s really a function of being on top of the state of the industry, the economy, and where consumers are defaulting — and where they’re not.


KW:  That seems like such an obvious thing. Is it just hard for lenders to make these kinds of adjustments? Is the information not available to them in a timely fashion, or are they simply too risk-averse to want to make the change?

SD:  That’s a good question. It’s a little of both, I think.

Heading into the recession, a lender didn’t want to be moving too many parts, so to speak — the more moving parts, the harder it is to isolate and control individual elements. Another contributing issue was the aforementioned misunderstanding that the relationship of score to risk is fixed.

The way in which lenders build their strategies can be extremely complex. To unravel and then rebuild them in order to reflect a new score cutoff can take an awful lot of resources. In an environment where everyone was greatly constrained, perhaps those resources weren’t available.


KW:  It might also be that a 620 is not always a 620, depending upon the other variables and attributes of the particular borrower seeking credit.

Are all 620s created equal? Is there an opportunity — by looking at different information — to distinguish between a “good” 620 and a not-so-good one?

SD:  Absolutely, there is.

In one of the studies we recently conducted, we looked at how a credit score of 620 plays out in a particular mortgage strategy, though the study has relevance for credit card issuers as well. Part of that strategy required that a consumer have a “thick file,” which is basically industry language for a lot of accounts on one credit file. Lenders were, in effect, making the qualitative statement that “more information is better.”

As reasonable as a thick-file 620 criterion seemed, what we found through the study was that consumers who met it — when they were assessed by only one credit bureau — scored as higher-risk than consumers who didn’t have a thick file but were assessed by multiple bureaus.

As a result, we’re adjusting our strategy a bit. It’s not necessarily about getting a 620 on a thick file; it’s about getting a 620 based on two different sources of information. That strategy, compared to the incumbent one, actually delivers lower-risk consumers.

More information is better — provided it’s not more of the same information. If we’ve learned anything, it’s that the information we’re bringing into our underwriting strategies has to be more diverse. Collecting from multiple independent sources allows for stronger corroboration of risk.
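
As a rough illustration of that shift, the sketch below contrasts the incumbent thick-file rule with a two-source corroboration rule. The applicant fields, bureau names, and thresholds are hypothetical assumptions for the example — not an actual bureau schema or VantageScore logic.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    """Hypothetical applicant record; not an actual bureau schema."""
    scores_by_bureau: dict[str, int]  # e.g. {"bureau_a": 630, "bureau_b": 625}
    trade_count: int                  # number of accounts on the file

CUTOFF = 620
THICK_FILE_TRADES = 3  # assumed thick-file threshold

def passes_incumbent_rule(a: Applicant) -> bool:
    """Old strategy: a 620 on a thick file, even if only one bureau scored it."""
    return (a.trade_count >= THICK_FILE_TRADES
            and max(a.scores_by_bureau.values()) >= CUTOFF)

def passes_corroborated_rule(a: Applicant) -> bool:
    """Revised strategy: a 620 corroborated by two independent sources."""
    return sum(1 for s in a.scores_by_bureau.values() if s >= CUTOFF) >= 2

thin_but_corroborated = Applicant({"bureau_a": 640, "bureau_b": 635}, trade_count=2)
print(passes_incumbent_rule(thin_but_corroborated))     # -> False
print(passes_corroborated_rule(thin_but_corroborated))  # -> True
```

Under the study’s finding, the second rule admits the thin-file applicant corroborated by two bureaus — the profile that turned out to be lower-risk.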


KW:  It sounds like, because of the deficits in getting the right kind of information at the right time, lenders are leaving a lot on the table. Have you been able to quantify just how much of the addressable universe of borrowers most lenders are actually able to serve?

SD:  Yes. We took a look at a couple of different ways of dissecting, as it were, the credit bureau databases.

Our general estimate is that there are approximately 220 million to 230 million consumers with a credit file in at least one of the credit bureau databases. That’s the minimum requirement to get a credit score — be it from VantageScore, or Scorex PLUS, or FICO. A number of scores — the more traditional ones; we can call them “conventional credit scores” — are predicated upon having a certain amount of information available. You have to have a trade on file that’s at least 6 months old, and the file has to have been updated within the last 6 months. If you don’t meet those requirements, you are deemed unscoreable and are effectively invisible to any lender that’s using automated underwriting approaches.
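
In rough terms, that conventional-model scoreability test boils down to two date checks. Here is a minimal sketch, with hypothetical field names and a simple six-month approximation:

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=183)  # rough six-month window

def is_conventionally_scoreable(oldest_trade_opened: date,
                                last_file_update: date,
                                today: date) -> bool:
    """Conventional-model test: at least one trade opened 6+ months ago,
    and the file updated within the last 6 months."""
    return (today - oldest_trade_opened >= SIX_MONTHS
            and today - last_file_update <= SIX_MONTHS)

# A consumer with one old but long-dormant account fails the second check
# and is invisible to automated underwriting:
print(is_conventionally_scoreable(oldest_trade_opened=date(2010, 1, 15),
                                  last_file_update=date(2013, 6, 1),
                                  today=date(2015, 3, 1)))  # -> False
```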

When we look at the databases, we find that roughly 180 million to 185 million consumers do meet those “conventional model” requirements. That means 30 million to 35 million consumers are unscoreable, and therefore unreachable by lenders using a conventional scoring model. Layering a thick-file requirement onto that strategy — 3 trades or more, with a minimum score of 620 — cuts out an additional 19 million.

Lenders are excluding a large number of eligible consumers — 50 million or so — not only because they’re using a more conventional credit scoring model, but also because their strategy is designed to focus on qualitative criteria, such as thick files.
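
The arithmetic behind that 50 million figure can be laid out in a few lines, taking midpoints of the ranges Davies cites:

```python
# Back-of-the-envelope check on the figures above (millions of consumers,
# midpoints of the cited ranges).
total_with_files = 225.0  # 220M-230M with a file at any bureau
unscoreable = 32.5        # 30M-35M fail the conventional-model requirements
thick_file_cut = 19.0     # additionally excluded by the thick-file criterion

excluded = unscoreable + thick_file_cut
share = excluded / total_with_files
print(f"{excluded:.1f}M excluded ({share:.0%} of consumers with credit files)")
# -> 51.5M excluded (23% of consumers with credit files): "50 million or so,"
#    and the "more than 20 percent" referenced below.
```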


KW:  More than 20 percent of the addressable market: that’s a pretty sizable swath of consumers who would be reasonable credit risks and would love to borrow.

Given all of this — the opportunity that is going unaddressed, and the complexity of gathering the right information and providing it at relevant points in time so that lending models can be adjusted — what do you advise is the right strategy for lenders?

SD:  First and foremost, the best and easiest way for a lender to address the time frame issue — the lag in information leading to a lag in updating strategies — is to get the latest performance charts from their score developer.

Whatever score developer a lender works with will produce performance charts reflecting the most up-to-date information about the risk level associated with a particular score. Those allow the lender to make appropriate adjustments to their strategy; it’s the most basic, essential, “care and feeding” sort of step in that regard. Had more lenders relied on up-to-date performance charts during the recession, it would have eliminated a tremendous amount of the exposure that occurred.

After that, a lender should think about which credit bureaus they’re working with. The more they work with, the more information they’re going to pull.

They also need to think about which credit-scoring model they use. They want to make sure it’s been designed to give them the capability to consider consumer populations that might not have a lot of information tied to their files.

We’re ready to start “opening up the credit universe,” as it were. Now is a good time for lenders that are using more conventional models to test out other ones, to test out different credit bureau data, as they look to find safe ways to bring more consumers into their business.


KW:  Asking for the latest performance charts seems like such a logical thing to do. Are they all updated on an annual basis, or are different charts updated with different frequencies?

SD:  All the major score developers provide publicly available performance charts on an annual basis. Major lenders will then build on top of those charts, adding their own customization — but any lender can do that within their own business.


KW:  Is an issue perhaps that most lenders don’t think that there’s much change year to year?

SD:  Certainly, I do think that was the case, yes. If a lender believes — mistakenly, as we’ve addressed — that a score of 620 is always going to equal a 3 percent default rate, then they don’t feel compelled to always look at the latest performance charts.

I will say that over the last four or five years, as we’ve spent a lot of time talking about this issue, we’ve seen an increasing number of requests for the information. I think the industry has moved away from that misunderstanding.


KW:  You mentioned that having access to other scoring models could allow lenders to expand the universe of borrowers. Do you have any specifics about how those that are using the VantageScore model have fared in capturing the 20 to 25 percent of the addressable market that has been unavailable to others?

SD:  Wanting to retain any competitive edge, lenders aren’t particularly eager to share specifics.

I will say that a significant number of lenders at this point in time are looking at, evaluating, and using VantageScore 3.0 in a universe-expansion strategy. They’re intentionally stepping into new consumer populations, using VantageScore so that they are able to accurately assess risk and determine whether these consumers are providing the right kind of profit dynamic for their business.

There’s a tremendous interest in this space right now, and a lot of testing being done within it using VantageScore.


To download the full version of the podcast, click here.