AI Models Eliminate Bias For Digital ID

The age of artificial intelligence (AI) is upon us, where data feeds and models are used in the service of everything from verifying the digital identity of remote users to approving or denying a bank loan application.

But Labhesh Patel, chief technical officer and chief scientist at real-time identification firm Jumio, told PYMNTS in a recent interview that AI-underpinned models are only as good as their underlying training datasets.

Crafting AI models that make sure people are who they say they are and prevent fraud, while eliminating identity bias and other pitfalls, comes with its own set of challenges, he said. The key to creating robust machine-learning (ML) models that incorporate biometrics lies in the data.

“We oftentimes hear of companies that say they are ‘doing’ AI and have AI-driven models,” Patel said. “But when you ask them about the provenance of the data, they are not able to explain it or articulate how it works.”

He said AI models can handle identity verification well, but the teams building those models need access to data that doesn’t have built-in bias.

We’re a long way from the days when companies built datasets from huge swaths of pictures and profiles culled from social media, gathered without cross-checking or consent. Patel said regulation is catching up to bias in datasets, and in the not-too-distant future vendors may have to give assurance that there is no bias in the models they provide to financial institutions (FIs) and other companies.

Document checks are a common starting point for identity verification, he said. Traditionally, that begins with an individual presenting a government-issued ID to confirm identity.

In that event, “the first part of the whole equation is making sure that identity document is authentic,” Patel said. After that, AI-driven models tied to face verification and authentication must make sure the face being scanned and analyzed matches the picture on the ID.
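
To make that two-step flow concrete, here is a minimal Python sketch: first validate the document, then compare the live face to the picture on the ID. The function names, stub implementations and match threshold are assumptions for illustration, not Jumio’s actual system.

```python
import math
from dataclasses import dataclass

# Stand-ins for real models: a production system would call trained
# document-forensics and face-embedding models. Everything here is an
# illustrative assumption, not any vendor's actual API.

def check_document_authenticity(doc_image: bytes) -> bool:
    # A real check inspects fonts, holograms, MRZ checksums, etc.
    return len(doc_image) > 0

def extract_face_embedding(image: bytes) -> list[float]:
    # A real model returns a learned feature vector for the face.
    return [float(b) for b in image[:8]]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

MATCH_THRESHOLD = 0.85  # assumed similarity cutoff for a face match

@dataclass
class VerificationResult:
    document_authentic: bool
    face_match_score: float
    verified: bool

def verify_identity(id_document: bytes, selfie: bytes) -> VerificationResult:
    # Step 1: make sure the identity document itself is authentic.
    if not check_document_authenticity(id_document):
        return VerificationResult(False, 0.0, False)
    # Step 2: make sure the live face matches the picture on the ID.
    score = cosine_similarity(
        extract_face_embedding(id_document),
        extract_face_embedding(selfie),
    )
    return VerificationResult(True, score, score >= MATCH_THRESHOLD)
```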

Facial recognition is about discovering an unknown identity. Face verification, by contrast, is about confirming a claimed one.

In recognition, an individual’s picture might be compared, one by one, to millions of other faces in a database (a 1:N search). Verification simply matches skin texture, color and a host of other features to a single example that has already been enrolled or documented, say, on a mobile device (a 1:1 comparison).
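
A hypothetical sketch of that 1:N-versus-1:1 distinction, with a placeholder similarity function and an assumed threshold:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Placeholder scoring function over face feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def recognize(probe: list[float], database: dict[str, list[float]],
              threshold: float = 0.85) -> str | None:
    # 1:N search: compare the probe face against every enrolled face,
    # one by one, and return the best-scoring identity, if any clears
    # the threshold. This answers "who is this?"
    best_id, best_score = None, threshold
    for identity, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

def verify(probe: list[float], enrolled: list[float],
           threshold: float = 0.85) -> bool:
    # 1:1 comparison: match the probe face against a single template
    # enrolled earlier, e.g. on a mobile device. This answers "is this
    # the person they claim to be?"
    return cosine_similarity(probe, enrolled) >= threshold
```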

Boots on the Ground 

However, that additional layer of scrutiny demands supervised learning, and the right human governance, free of bias, must be in place to ensure that the people or entities who generated the data have consented to its collection.

Call it a form of boots on the ground. As Patel explained, “when Jumio started doing identity verification remotely several years back, we initially had a human in the loop.”

He said the person would examine a slew of facial images to determine authenticity, in effect auditing the data. Having localized teams in a given region or country can go a long way toward eliminating at least some of the bias that review teams based in other countries might introduce.

For FIs and other companies seeking to verify identities, Patel said the first steps usually involve ensuring that the individual isn’t a politically exposed person and is not on any sanctions list. Companies such as Jumio can provide such screening.
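
At its simplest, that screening step checks a customer against watchlists. The sketch below is illustrative only; the list entries are invented, and real screening services use fuzzy name matching, dates of birth and continuously updated lists rather than exact lookups.

```python
SANCTIONS_LIST = {"jane roe"}      # placeholder entries
PEP_LIST = {"john q. official"}    # placeholder entries

def screen(full_name: str) -> dict[str, bool]:
    # Normalize whitespace and case before the (illustrative) lookup.
    name = " ".join(full_name.lower().split())
    return {
        "on_sanctions_list": name in SANCTIONS_LIST,
        "politically_exposed": name in PEP_LIST,
    }

print(screen("Jane Roe"))
# {'on_sanctions_list': True, 'politically_exposed': False}
```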

“There is email verification, there is phone verification, there’s a whole bevy and buffet of services that are required,” he noted. But without connecting to platform models such as those on offer from Jumio, compliance and risk officers need to talk to scores of different vendors, then craft contracts and work with those providers on an individual basis to create verification strategies.

“If that vendor has problems because of load — because there is no disaster recovery, there is no redundancy — then they are stuck,” he said.

By contrast, Patel said a platform can offer a one-stop shop for know your customer (KYC), identity verification and other services to FIs. Platforms also offer flexibility.

For example, a bank could decide after looking at an email address and conducting a quick phone check that it has enough information to verify the individual’s identity. On the other hand, a different individual might be subject to further scrutiny to assign the person a risk profile based on additional, triangulated data provided by the platform.

“There’s a decision engine that looks at the output of all these signals,” Patel said.
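
A minimal sketch of such a decision engine might weigh each signal’s score and map the weighted total to an outcome. The signal names, weights and thresholds below are assumptions for illustration, not Jumio’s actual engine, which would be tuned and validated against labeled outcomes.

```python
SIGNAL_WEIGHTS = {
    "document_check": 0.4,
    "face_match": 0.3,
    "sanctions_screening": 0.1,
    "email_verification": 0.1,
    "phone_verification": 0.1,
}

def decide(signals: dict[str, float],
           approve_at: float = 0.8, review_at: float = 0.5) -> str:
    # Combine per-signal scores (0.0 to 1.0) into a weighted total,
    # then map the total to approve / manual review / deny.
    total = sum(SIGNAL_WEIGHTS.get(name, 0.0) * score
                for name, score in signals.items())
    if total >= approve_at:
        return "approve"
    if total >= review_at:
        return "manual_review"  # escalate for additional scrutiny
    return "deny"

# Strong document and face signals plus clean screening can be enough
# on their own; weaker evidence falls through to manual review.
print(decide({"document_check": 1.0, "face_match": 0.9,
              "sanctions_screening": 1.0, "email_verification": 1.0}))  # approve
```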

He said well-crafted datasets scrubbed of bias ensure that “there is enough evidence by dipping into all these different services that we are able to create a profile for the person.”