
Prompt Engineering for Payments AI Models Is an Emerging Skillset

“AI (artificial intelligence) will not take your job, someone using AI will take your job.”

That’s according to Geneva Graduate Institute Professor Richard Baldwin, who was speaking about generative AI.

He wasn’t wrong. 

And as more and more enterprise AI offerings come to market, the latest being Amazon’s Q corporate chatbot announced Tuesday (Nov. 28), it is only growing more important for individuals to learn how to use AI tools to best effect. 

That’s where prompt engineering comes in: the practice of writing queries that steer AI tools toward effective, accurate results. 

Prompt engineering helps users guide generative AI solutions, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard, Microsoft’s Copilot tool, and others, toward producing accurate — and desired — outputs. 

After all, today’s large language models (LLMs) are instruction-following pieces of software, and there are specific prompt formats that work particularly well and align more reliably with the way these AI systems are built. 

The typical prompt progression runs from “zero-shot prompting” to “few-shot prompting” to “delimiting” to “prompt chaining.” 

With many companies undertaking system enhancements across areas like accounts payable (AP) and accounts receivable (AR), the benefits of having a firm grasp of how to get the most out of the AI tools available today can’t be overstated.

See also: Demystifying AI’s Capabilities for Use in Payments

Conditioning AI Models to Boost Their Performance 

Conditioning AI models to boost their performance also boosts enterprise performance. Typically, users tell the AI system what they want in a plain-text prompt, and the generative AI solution returns what was asked for. This helps reduce repetitive manual tasks and can vastly accelerate many common processes.
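As a minimal sketch of that plain-text, ask-and-receive pattern, assuming the OpenAI Python SDK (v1 or later) with an API key in the environment; the model name and the prompt itself are illustrative, not prescriptive:

```python
# A minimal sketch of the plain-text prompt-and-response pattern described above.
# Assumes the OpenAI Python SDK (v1 or later) and an OPENAI_API_KEY environment
# variable; the model name and the prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model can stand in here
    messages=[
        {
            "role": "user",
            "content": "Summarize the main steps in a typical accounts payable approval workflow.",
        }
    ],
)

print(response.choices[0].message.content)
```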

But how does it work? 

“Zero-shot prompting” requires users to simply ask the AI model a normal question without any additional context. If the output falls short, “few-shot prompting” conditions the AI on a few examples to coach a better result. In either case, it is crucial to be as specific and detailed as possible. 
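The difference is easiest to see side by side. In the hedged Python sketch below, the expense descriptions and categories are invented for illustration; either string would be sent as the user message in a request like the one above.

```python
# Zero-shot: a plain question with no examples attached.
zero_shot_prompt = 'Classify this expense: "AWS monthly hosting charge, $1,240.00"'

# Few-shot: the same request, conditioned on a handful of labeled examples so the
# model can infer the expected categories and output format. All examples are invented.
few_shot_prompt = """Classify each expense into one of: Software, Travel, Office Supplies.

Expense: "Delta flight ORD to JFK, $412.50"
Category: Travel

Expense: "Staples order, printer paper and toner, $86.20"
Category: Office Supplies

Expense: "AWS monthly hosting charge, $1,240.00"
Category:"""
```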

When adding context or additional content to an instruction, separating the pieces of the query with hashmarks (#) or quotation marks (“”) yields better results. 

For longer, more complex requests, users should engage in “prompt chaining,” which involves breaking a complicated ask into a series of smaller, more digestible (but very specific) steps. 
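One way to implement the chain in code is to give each step its own small, specific request and feed the answer forward. A rough sketch, again assuming the OpenAI Python SDK; the invoice text and the two-step extract-then-review flow are hypothetical.

```python
# A rough sketch of prompt chaining: each step is a separate, narrowly scoped
# request, and the output of one step becomes context for the next.
# Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set; the invoice text
# and the two-step flow are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


invoice_text = "Invoice #1042: 3x laptop dock $89.00, 1x standing desk $450.00, shipping $25.00"

# Step 1: a small, specific extraction task.
line_items = ask(f"List each line item and its amount from this invoice, one per line:\n{invoice_text}")

# Step 2: build on the previous answer rather than asking for everything at once.
review = ask(f"Here are extracted invoice line items:\n{line_items}\n\nFlag any single amount over $100 for manual review.")

print(review)
```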

If a high-quality, relevant output still isn’t being generated, users can employ “delimiters,” special character sequences that give the model structure and separate instructions from content. A delimiter is a sequence of one or more characters marking the boundary between separate regions of plain text; hashmarks and quotation marks are examples of delimiters. 
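A small sketch of a delimited prompt; the hashmarks mark where the instruction ends and the pasted content begins, and the email text is made up for illustration.

```python
# A delimited prompt: hashmarks separate the instruction from the content it
# applies to, so the model does not confuse the two. The email is invented.
delimited_prompt = """Summarize the customer's payment dispute in two sentences.

###
From: customer@example.com
Subject: Duplicate charge on invoice 8831

Hi, we were billed twice for invoice 8831, on March 3 and again on March 5.
Please reverse the second charge of $1,975.00.
###"""
```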

“AI is a tool — but it’s one that everyone should take time to learn and play with in order to explore how they can best take advantage of its capabilities,” Jeremiah Lotz, managing vice president, digital and data at PSCU, told PYMNTS.

Still, involving AI tools in security-sensitive areas requires a heightened awareness of the risks and limitations of the technology, and sensitive enterprise data should never be shared with the tools. 

Read more: Generative vs. Predictive AI’s Role Across the Future of Payments

Prompting With Purpose

For businesses to truly get the most out of leveraging AI, they need to understand how it works and be clear about what their desired outcome is — and this holds true across all areas where AI is applied. 

While OpenAI executives have said that AI will be able to do “any job within 10 years,” a study published Tuesday by the European Central Bank (ECB) found that although the speedy embrace of AI might lead to lower wages, it has so far created jobs rather than eliminating them.

PYMNTS reported earlier that tailoring AI solutions by industry is key to scalability.

LLMs and other AI-powered solutions can be particularly valuable within finance and accounting offices, assisting employees in areas like invoice processing, generating computer code, creating preliminary financial forecasts and budgets, performing audits, streamlining business correspondence and brainstorming, and even researching tax and compliance guidelines. 
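As one hedged illustration of the invoice-processing use case, the sketch below asks the model for structured output that a downstream AP system could consume; the invoice text and field names are invented.

```python
# A hedged illustration of the invoice-processing use case: asking the model to
# return named fields as JSON so the answer can flow into an AP workflow.
# The invoice text and the field names are invented for this sketch.
extraction_prompt = """Extract the following fields from the invoice below and
return them as JSON: vendor_name, invoice_number, invoice_date, total_amount.

###
Acme Office Supply
Invoice No. 2207
Date: 2023-10-02
Total due: $3,418.75
###"""
```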

Integrating prompts like “be excellent at reasoning… perform a step-by-step thinking before you answer the question,” “if you speculate or predict something, inform me. If you cite sources, ensure they exist and include URLs at the end,” or simply telling the AI to “provide accurate and factual answers” can help ensure that the model’s outputs are high quality. 
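Those standing instructions can also be packaged as a system message so that every request inherits them. A sketch assuming the OpenAI Python SDK, with the wording drawn from the examples above and an illustrative user question:

```python
# A sketch of baking the standing instructions quoted above into a system
# message so every user request inherits them. Assumes the OpenAI Python SDK
# (v1+) with OPENAI_API_KEY set; the user question is illustrative.
from openai import OpenAI

client = OpenAI()

system_instructions = (
    "Provide accurate and factual answers. "
    "Perform a step-by-step thinking before you answer the question. "
    "If you speculate or predict something, inform me. "
    "If you cite sources, ensure they exist and include URLs at the end."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": "What documentation should we keep to support a sales tax audit?"},
    ],
)

print(response.choices[0].message.content)
```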

As i2c CEO and Chairman Amir Wain told PYMNTS, “There is a new skillset that’s required… people need to understand how to best prompt the AI to get the appropriate answer… If you have two or three people ask ChatGPT the same question, they may get different answers depending on the nuances of their queries.”

Within the payments space, the opportunities to enhance value while streamlining legacy cost centers are immense. Areas like transaction routing optimization, checkout personalization, fraud protection and more can benefit from current applications of AI.

But as with all software innovations, AI applications still need a human in the loop to validate their outputs and course-correct as needed.

For further reading on AI’s at-work impact, the PYMNTS Intelligence “Working Capital Tracker®,” a collaboration with Billtrust, helps demystify the use of AI within payments. 

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.