Many consumers do not have funds readily on hand to make big purchases like electronics or furniture and prefer turning to instant loan apps like Affirm, a point-of-sale installment lender established in 2013, rather than going into debt with a bank or credit card provider. Customers may feel thankful to be able to pay off a purchase over a year, but not at the cost of losing their identities to fraudsters or scammers.
Affirm is no stranger to digital identity methods that prevent this and employs a range of techniques to verify its users and protect its merchant partners from sacrificing profits to theft. In a recent interview, PYMNTS spoke with Affirm Chief Strategy and Risk Officer Sandeep Bhandari and Head of Data Science Nitesh Kumar about the company’s authentication techniques and how Affirm deters fraud by leveraging a sophisticated artificial intelligence (AI)-driven analytics system.
What is Affirm?
The installment lender enables customers to pay off an online or in-store purchase via a loan, offering assorted payment schedules of three to 12 months. The service was originally available only online but has since expanded to in-store purchases and now supports more than 3,000 merchants.
“What was really different about Affirm was that the product had a very deterministic payoff period,” Bhandari said. “Unlike open revolving credit such as credit cards, consumers knew what they were borrowing, how much they were paying every month, and at the end of the loan period, they were done with it.”
Another key difference from credit cards is that Affirm pays merchants directly and does not route funds through the customer. This, however, means Affirm is responsible for any fraudulent transactions, making proper user authentication a top priority.
Verification Relies on Existing Information
Identity verification is crucial for any financial product, but it is especially so for one involving loans: botched authentication could lead Affirm to extend a loan to a nonexistent person or the wrong one, watching its money vanish into the ether. The fraudster, meanwhile, would get away with high-value merchandise.
“The way we think about [verification] is in two distinct parts,” Bhandari explained. “The first part is about making sure that the identity exists and that this is a real person. The second part is about ensuring that the applicant’s identity actually matches the identity on the application, and that they are in fact who they say they are.”
The verification process relies on five distinct pieces of information pulled from major credit bureaus and credit checks the company runs on each customer. Affirm uses a customer’s date of birth, email address, name, phone number and the last four digits of their Social Security number (SSN) to create a cohesive identity, cross-referencing these data points to verify that they all belong to the same person. Looking up the last four digits of an SSN with a credit bureau can verify an applicant’s name, for example, while looking up a phone number can find the address to which it is registered.
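Affirm's actual verification pipeline is proprietary, but the cross-referencing idea can be sketched in a few lines. The sketch below is a hypothetical illustration: the bureau lookup is stubbed with an in-memory table, and all names and values are invented.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """The five data points the article describes."""
    name: str
    dob: str
    email: str
    phone: str
    ssn_last4: str

# Stand-in for a credit-bureau lookup keyed by SSN last four;
# a real system would query the bureaus over an API.
BUREAU_RECORDS = {
    "1234": {"name": "Jane Doe", "dob": "1990-05-01",
             "phone": "555-0100", "email": "jane@example.com"},
}

def verify(app: Application) -> bool:
    """Part one: does the identity exist at all?
    Part two: do all data points resolve to the same person?"""
    record = BUREAU_RECORDS.get(app.ssn_last4)
    if record is None:
        return False  # no such identity on file
    return all((
        record["name"].lower() == app.name.lower(),
        record["dob"] == app.dob,
        record["phone"] == app.phone,
        record["email"].lower() == app.email.lower(),
    ))
```

In practice each field would be fuzzily matched and scored rather than compared exactly, but the principle is the same: every data point must point back to one coherent identity.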
Fraudsters can fairly easily acquire one or more of the above data points, however, so Affirm backs up its credit report-based authentication by asking certain customers to provide physical identity documents if the lender deems any of the verifying information untrustworthy. Requesting documents adds friction for legitimate customers, though, potentially resulting in abandoned purchases.
“We want to minimize … the number of good users we’re putting through a high-friction [process],” Kumar said.
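One way to read this step-up approach is as a simple gating rule: only applicants whose verifying data looks shaky get routed to the high-friction document check. The function below is a hypothetical sketch of that logic; the trust scores and threshold are invented, not Affirm's.

```python
def requires_document_check(trust_scores: dict[str, float],
                            threshold: float = 0.6) -> bool:
    """Step up to a physical ID check only when any verifying
    data point scores below the trust threshold. In a real
    system the scores would come from bureau match quality."""
    return any(score < threshold for score in trust_scores.values())
```

Most good users pass every check and never see the document request, which is exactly the friction minimization Kumar describes.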
Fraud Faces the Machine
Striking a balance between convenience, fraud protection and verification for both customers and merchants is a constant challenge for Affirm. A zero percent fraud loss rate would be ideal, but not if achieving it forces merchants and customers through a stressful authentication process on every transaction and drives them to another lender.
The company turned to a machine learning (ML)-based system to seamlessly counter fraud behind the scenes. It analyzed every transaction Affirm customers have ever conducted, picking out those labeled fraudulent to determine how they differed from legitimate ones. The resulting trained model predicts any given transaction's likelihood of fraud based on those past warning signs.
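The article does not say which model Affirm uses, but the supervised pattern it describes — train on labeled historical transactions, then score new ones — can be illustrated with a minimal logistic regression in pure Python. Every feature name and value below is invented for the sketch.

```python
import math

def sigmoid(z: float) -> float:
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Toy labeled history: (amount_zscore, is_new_account, data_mismatches) -> fraud?
# Features are hypothetical, not Affirm's real signals.
HISTORY = [
    ((0.1, 0, 0), 0), ((0.3, 0, 1), 0), ((-0.5, 0, 0), 0), ((0.2, 0, 0), 0),
    ((2.5, 1, 3), 1), ((3.0, 1, 2), 1), ((2.8, 1, 3), 1), ((-0.2, 0, 1), 0),
]

def train(data, lr=0.5, epochs=500):
    """Fit logistic-regression weights with stochastic gradient descent."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def fraud_probability(x, w, b) -> float:
    """Score a new transaction against the trained model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

A production system would use far richer features and a more powerful model, but the workflow is the same: past fraud labels shape the weights, and each new transaction gets a risk score before the loan is approved.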
“We also look at these transactions in much more detail with our risk ops analysts, [who] will also look at some of the users or the applications that we have declined and estimate whether those particular transactions are fraudulent or not,” Kumar said.
Beyond seamlessness, using ML this way has the advantage of gaining accuracy over time. Fraudsters are also constantly learning, however, and Affirm’s solution can struggle to detect and deny certain types of fraud.
“You can think about it as a day-zero attack, or something unprecedented that we’ve never seen before,” Kumar explained. “Maybe there was a big leak which led to fraudsters having access to multiple identities. Regular fraud systems that worked in the past might not translate very well to stop this kind of fraud.”
Human analysts become critically important at that point, as they can stay ahead of data breaches and proactively weed out illegitimate transactions that originate from those breaches, even detecting new attack vectors or fraud rings before Affirm falls victim to them.
Perhaps future ML systems can grow smart enough to perform these same tasks, but the human touch is at least for now a vital fraud detection component.