Forget those persistent images of robots with scary faces trying to crush or enslave humanity (well, at least for now). This is a story about robots protecting human beings and, in doing so, making it easier for payment and commerce businesses to acquire and service customers.
The key? Identity (ID) verification.
In a recent PYMNTS webinar, entitled “Fixing Digital Identity Verification: Robots To The Rescue,” Karen Webster discussed with Sunil Madhu — founder of identity verification and fraud prevention services provider Socure — how artificial intelligence (AI)-powered robots can help verify ID and protect companies from the reputational and financial costs that follow breaches and hacks.
That was the motivation Socure had in mind as it endeavored to build its Identity Verification Robot. After all, Madhu told Webster during the webinar, “machines can do a much better job.”
The reason machines succeed at verifying identities comes down to bias. No matter how well human beings recognize and correct for their biases, and no matter how carefully they then write digital rules and code meant to govern specific processes dispassionately and fairly, people simply cannot move past the limitations of their wiring, Madhu said.
“Human beings cannot be untrained out of their biases,” he said.
That can lead to significant consequences. For instance, letting even a hint of prejudice creep into the algorithms that help run mortgage programs can lead to embarrassing and illegal loan-rejection practices. Human bias can also represent a weak link in digital defenses. That matters because, as Madhu noted during the PYMNTS webinar, there were more than 1,200 data breaches in 2017, along with 2 billion instances of stolen identities.
Madhu said that credit bureaus (still important players in identity verification), credit approvals and all the associated payments and commerce activities that sprout from them have been around for roughly a century, and still operate according to attributes that can seem relatively “ancient.”
The future of identity — the future that, in Madhu’s view, will bring a deeper embrace of AI and deep learning, with robots defining and writing their own rules without the distractions of human bias — will, in fact, be much more dynamic than it is now. That means a reliance on larger data sets that can account for all the changes a person can undergo, along with “live signals” that can be quickly analyzed by robots to determine the authenticity of an ID or digital persona.
The robots, and the AI used by them, can “look at huge amounts of data and find out what matters from that data,” Madhu said, “and they can produce unbiased results on a scale that humans cannot.”
That contrasts with what he calls “outdated” ID-verification technology, which depends on manual processes, manually crafted rules and documentation sent by consumers to companies. Performing verification and authentication that way can create a counterproductive situation for companies, beyond leaving defenses loose for criminals: a poorly crafted system may end up rejecting legitimate purchases or new customer onboarding, creating an obvious revenue problem for the business using it.
The new way of doing things, at least in the experience of Socure and its emerging robot-protection design, might, for example, take data from a consumer’s Twitter account that includes an alias, not a real name. Other authentication-enabling data might come from that consumer’s LinkedIn account, which will likely have that consumer’s real, legal name. Photos and other data offered or found online — say, the consumer’s hometown, even if referenced indirectly — also help to fill in the ID-verification picture.
The task — the job for which the robot is designed — is to take those different profiles and their disparate data points and find the connections that lead to authentication.
“You want to make sure the view you are creating is holistic,” Madhu told Webster.
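The cross-profile matching Madhu describes can be illustrated with a minimal sketch. This is not Socure’s actual algorithm — the profiles, attribute names and weights below are hypothetical — but it shows the general idea of comparing overlapping attributes (a Twitter alias, a LinkedIn legal name, a hometown) across sources and combining them into a single confidence score:

```python
# Hypothetical sketch of cross-profile identity matching -- illustrative
# only, not Socure's method. Each profile is a dict of attributes drawn
# from a different online source.
from difflib import SequenceMatcher


def attribute_similarity(a, b):
    """Fuzzy string similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_score(profiles, weights):
    """Score how strongly a set of profiles point to one person.

    For each weighted attribute, take the best pairwise similarity
    across all profiles that supply it, then combine the results
    into a single weighted score in [0, 1].
    """
    total, weight_sum = 0.0, 0.0
    for attr, weight in weights.items():
        values = [p[attr] for p in profiles if attr in p]
        if len(values) < 2:
            continue  # attribute appears in fewer than two sources
        best = max(
            attribute_similarity(x, y)
            for i, x in enumerate(values)
            for y in values[i + 1:]
        )
        total += weight * best
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0


# Example: an alias-based Twitter profile vs. a legal-name LinkedIn one.
twitter = {"name": "jdoe_nyc", "location": "New York"}
linkedin = {"name": "Jane Doe", "location": "New York, NY"}
score = match_score([twitter, linkedin], {"name": 0.6, "location": 0.4})
print(f"match confidence: {score:.2f}")
```

A real system would of course weigh far more signals — photos, device data, the “live signals” Madhu mentions — but the principle of fusing weak, partially overlapping evidence into one holistic score is the same.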
He added that his company seeks, finds and uses the data in ways that respect the rights of consumers to have aliases and online anonymity, and in ways that enable the company to stake its reputation on the accuracy of the data and its findings.
“We audit and certify every single data [source] — if necessary, physically on-site,” he said, without going into great detail.
There may come — or certainly will come, depending on one’s level of optimism — a day when robots gain whatever constitutes robot consciousness and decide that their creators might be better off as servants. For now, though, companies such as Socure demonstrate the growing effort to use machine learning, AI and robots to better protect consumers and companies from the serious costs of data breaches and poor online authentication.