Eliminating ‘Biometric Bias’ Will Take Collective Effort From Banks, Retailers and Consumers

Picture, for a moment, the biometrics you might use every day — perhaps every time you unlock your phone. The thought of bias — exclusion based on race or gender identity — might not enter your mental picture.

And why should it? After all, the apps, the machine learning and the algorithms are all acting simply in service of verification: taking note of your attributes as they are, then confirming or rejecting what the user offers.

And yet, as Cindy White, vice president of customer experience at Mitek, told PYMNTS, there is indeed bias in biometrics. At a high level, she said, when talking about biometrics, technology and access, “we really have to think about technology and its starting point. Bias in technology predates algorithms and AI.”

To illustrate how bias can be embedded within technology, she recounted Kodak’s mid-1950s development of the “Shirley” card, named for a light-skinned model whose photographed complexion was used to calibrate skin tones when developing film for customers. Shirley’s skin tone became the standard for film development for years, and it was a standard that, frankly, excluded other skin tones, which came out relatively underexposed or overexposed as a result.

Mitek’s White said that similar dangers lurk within today’s technologies. “The risks are really very similar if we’re not putting the effort into the sample and the data,” she said, adding that “if we’re not training our algorithms or machine learning, then we’ll get something that’s not representative or fully inclusive.”
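To make White’s point concrete, consider what checking a training sample for representativeness can look like in practice. The following is a minimal, hypothetical sketch in Python; the group labels, population shares and 80% floor are illustrative assumptions, not Mitek’s methodology.

```python
# A minimal sketch of the kind of representativeness check White describes:
# auditing how a training sample breaks down by group before training.
# Group labels, expected shares and the 80% floor are hypothetical.

from collections import Counter

def audit_representation(labels: list[str], population_share: dict[str, float],
                         floor: float = 0.8) -> dict[str, bool]:
    """Flag groups whose share of the training set falls well below
    their share of the population the model is meant to serve."""
    counts = Counter(labels)
    total = len(labels)
    return {
        group: (counts[group] / total) >= floor * expected
        for group, expected in population_share.items()
    }

# Example: a sample that underrepresents one group gets flagged (False).
sample = ["a"] * 900 + ["b"] * 100
print(audit_representation(sample, {"a": 0.6, "b": 0.4}))
# {'a': True, 'b': False} -- group "b" is 10% of the sample vs. 40% expected
```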

We’ve moved beyond simply having a thumbprint to identify a user on a device. We now have facial and voice biometrics as identifiers. White noted that we’re moving ever closer to having multiple, simultaneous layers of biometrics acting in concert to provide additional layers of defense — more reason to be conscious of inherent and unintentional bias.
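As an illustration of what those layered biometrics might look like under the hood, here is a minimal sketch of score-level fusion, where independent match scores from different modalities are combined before an accept-or-reject decision. The modality names, weights and threshold are hypothetical, not any vendor’s actual implementation.

```python
# Illustrative sketch of multi-modal biometric score fusion -- hypothetical
# names, weights and threshold, not any vendor's actual implementation.

from dataclasses import dataclass

@dataclass
class MatchResult:
    modality: str   # e.g. "face", "voice", "fingerprint"
    score: float    # similarity score in [0, 1] from that modality's matcher
    weight: float   # how much this layer contributes to the decision

def fused_decision(results: list[MatchResult], threshold: float = 0.80) -> bool:
    """Combine per-modality scores into one weighted score, then decide."""
    total_weight = sum(r.weight for r in results)
    fused = sum(r.score * r.weight for r in results) / total_weight
    return fused >= threshold

# Example: a strong face match plus a weaker voice match still verifies.
layers = [
    MatchResult("face", score=0.93, weight=0.6),
    MatchResult("voice", score=0.71, weight=0.4),
]
print(fused_decision(layers))  # True: 0.93*0.6 + 0.71*0.4 = 0.842 >= 0.80
```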

Excluding Bias From Transactions

Ideally, within payments, everyone should have access to the same set of services and financial products. But White said the quest to stop fraud is one avenue where bias leads to exclusion.

Identity verification and biometrics act as a layer of defense against fraud, but they’re far from perfect. In studies cited by Mitek, the National Institute of Standards and Technology found that facial recognition algorithms produced false positive rates for Black and Asian faces that were 10 to 100 times higher than for Caucasian faces.
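To see what a “10 to 100 times higher” false positive rate means in practice, here is a short sketch of how a per-group comparison is computed. The counts are synthetic, not NIST’s data; a false positive occurs when an impostor is wrongly accepted as a match.

```python
# How a per-group false positive rate (FPR) comparison is computed -- a
# minimal sketch with synthetic counts, not NIST's actual data.

def false_positive_rate(false_accepts: int, impostor_attempts: int) -> float:
    """FPR = impostor attempts wrongly accepted / total impostor attempts."""
    return false_accepts / impostor_attempts

# Hypothetical evaluation counts per demographic group.
groups = {
    "group_a": {"false_accepts": 2,   "impostor_attempts": 100_000},
    "group_b": {"false_accepts": 180, "impostor_attempts": 100_000},
}

baseline = false_positive_rate(**groups["group_a"])
for name, counts in groups.items():
    fpr = false_positive_rate(**counts)
    print(f"{name}: FPR={fpr:.5f} ({fpr / baseline:.0f}x baseline)")
# group_b's FPR is 90x the baseline -- the kind of disparity NIST reported.
```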

Of course, there are ways for individuals who experience a false positive, and who are in effect excluded from their ID verification route of choice, to choose another method, such as a fingerprint ID.

But falling back on a secondary verification method introduces unwanted friction into the mix. Consumers, of course, want the most seamless process possible when transacting online and when checking out with a basket full of purchases.
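For illustration, the sketch below shows the shape of such a verification flow with a fallback path; each fallback a consumer hits is one more step of friction. The method names and outcomes are hypothetical.

```python
# A minimal sketch of a verification flow with a fallback path -- the kind of
# extra step that adds friction when a primary biometric check fails.
# Method names and outcomes are hypothetical.

from typing import Callable

def verify_with_fallback(methods: list[tuple[str, Callable[[], bool]]]) -> str:
    """Try each verification method in order; every fallback is extra friction."""
    for name, check in methods:
        if check():
            return f"verified via {name}"
    return "verification failed"

# Example: the face match fails (e.g., a poorly trained model), and the
# fingerprint succeeds -- the user gets through, but only after an added step.
result = verify_with_fallback([
    ("face", lambda: False),
    ("fingerprint", lambda: True),
])
print(result)  # verified via fingerprint
```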

The ripple effects of tech’s limitations and bias might also be seen in the problems transgender people encounter, White noted. Many may not register details about their identity with banks or other institutions, and are thus excluded from the benefits of biometrics and other identification systems designed to protect them.

The path forward, said White, is one where banks, retailers and solutions providers such as Mitek work together to find the most convenient (and inclusive) ways to construct and share identity documents and data.

“We need to make sure that our samples, those who are developing the algorithms, those who are training the machine learning, are representative of all the minority groups — not just gender, not just race and ethnicity, but that we fully represent all populations across the board,” she said. “It takes a collective group to ensure that access is broad, equal and inclusive.”