How Privacy Guides Virtual Assistants’ Evolution In Corporate Finance

Even just five years ago, said Prophix President and Chief Operating Officer Alok Ajmera, talk of using virtual assistants in the workplace seemed more like science fiction than a real possibility.

Today, however, the rise of artificial intelligence-powered digital assistants like Apple’s Siri and Amazon’s Alexa makes the use of similar technologies in the corporate space not only a possibility but, in many cases, already a reality.

In the last few weeks alone, several new solutions have emerged, including Oracle’s debut of a virtual assistant to help supply chain managers handle various logistics-related tasks, and VMware’s upgrades to its own workplace virtual assistant, which deploys IBM Watson and natural language processing technology.

Prophix is another company that recently rolled out its own addition to the B2B virtual assistant space, having launched its Virtual Financial Analyst for middle-market finance departments.

Ajmera explained that the tool, designed to analyze and present data via natural language processing, demonstrates the opportunity for such technologies to positively disrupt the way companies manage finances. Virtual assistants allow finance professionals to interact with the technology in a conversational, human way.

Today, such technology is capable of crawling through troves of data to automate tasks that are typically error-prone, manual and time-consuming. Take the monthly book close, for example. Artificial intelligence-powered tools, like virtual assistants, have the potential to cut the time it takes to close the books from 10 days to two, freeing the remaining eight days for accountants and finance chiefs to use more strategically for their enterprise.

But there’s another benefit in how virtual assistant technology can impact corporate finance, Ajmera explained.

“There are two major opportunities for a corporate,” he told PYMNTS in a recent interview. “One is to humanize the experience of using the technology.”

That means using one’s natural speaking voice or conversational tone to prompt action from a virtual assistant.

But on the other hand, “you use artificial intelligence to actually do things that are inhuman.”

Ajmera pointed to the ability of such AI solutions to crawl through millions of data points in a short amount of time. Rather than a team of financial professionals taking a random sample of corporate transactions and manually searching for anomalies and evidence of fraud, AI tools can analyze every data point and more accurately pinpoint areas of concern.
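To make the contrast concrete, the short sketch below scores every transaction in a ledger for anomalies instead of reviewing a random sample. It is an illustrative assumption only: the synthetic data, the two features and the isolation forest detector are stand-ins, not a description of Prophix’s Virtual Financial Analyst or any vendor’s actual method.

# Minimal sketch: flag anomalies across the full transaction population,
# not a hand-reviewed sample. Synthetic data and parameters are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical ledger: mostly routine postings, plus a handful of outliers.
rng = np.random.default_rng(seed=42)
transactions = pd.DataFrame({
    "amount": np.concatenate([rng.normal(500, 50, 9990), rng.normal(25000, 5000, 10)]),
    "hour_posted": np.concatenate([rng.integers(8, 18, 9990), rng.integers(0, 5, 10)]),
})

# Fit an anomaly detector on every record rather than a random sample.
model = IsolationForest(contamination=0.001, random_state=0)
transactions["flag"] = model.fit_predict(transactions[["amount", "hour_posted"]])

# A flag of -1 marks transactions the model considers anomalous; route those for human review.
suspicious = transactions[transactions["flag"] == -1]
print(f"{len(suspicious)} of {len(transactions)} transactions flagged for review")

The point of the sketch is the coverage, not the particular model: because the detector scores every row, reviewers spend their time on the handful of flagged items rather than on sampling and manual inspection.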

Opportunities in fraud detection make virtual assistants, and other artificial intelligence tools, incredibly valuable in the corporate finance use case, he said.

Balancing Risks

With consumers becoming more comfortable using virtual assistants, it’s not surprising the technology would make its way into the corporate back office.

However, the rise of B2C virtual assistants has also ushered in a highly divisive conversation about user data privacy.

One of the most recent shifts in this debate came from Google, which announced in July that it would review its privacy policies following revelations that personal information and private conversations recorded by its virtual assistant tool, the Google Assistant, had been leaked to third-party contractors.

While the risk mitigation opportunities of virtual assistants may be enticing for corporate finance executives, it’s possible that the data privacy reservations they feel in their consumer lives could also carry over into the corporate sphere.

But Ajmera said B2C and B2B data privacy challenges cannot necessarily be compared.

“In the B2C world, one of the biggest challenges with privacy is that the consumer has no rights to their data,” he said, pointing to a hot topic of debate in the U.S., in particular.

But the “huge difference” between the B2C and B2B world, he noted, is that data security is often a contractual obligation for a corporation and its third-party service providers.

“In the B2B world, [privacy rights] are well established, documented and contractually obligated with external parties auditing to make sure those rights are maintained and people are compliant,” he said.

That means anyone building a solution like a virtual assistant for corporates must build data security into the product itself.

This, of course, is the goal; the reality, however, can be less straightforward. Corporates’ digital transformations and migrations to the cloud broaden the opportunity for online bad actors to infiltrate corporate systems in pursuit of company cash. And as businesses adopt more third-party apps, their exposure grows: firms are now responsible not only for securing their own systems, but also for the security of their third-party service providers, customers and partners.

As data privacy best practices and regulations continue to evolve in both the consumer and corporate realms, security must be paramount when businesses decide to adopt any new technology that will have access to their systems and data, whether it’s a virtual assistant or otherwise.

But Ajmera is confident that the opportunities in virtual assistants and AI technology for corporate finance outweigh the risks, and those opportunities will continue to grow as the technology evolves. Virtual assistants are, after all, still in their early days.

“There is so much more room for development,” he said, adding that corporates’ continued digital evolutions and cloud migrations will add to the volume of quality data available for AI tools to analyze.

“As we continue to evolve the artificial intelligence technology available,” he said, “you get richer and richer use cases to solve more sophisticated, complex problems.”