MIT Looks at How AI Agents Can Learn to Reason Like Humans

MIT Sloan School of Management

Can AI become less rigid in its thinking when exposed to human reasoning?


    New research at MIT suggests that could be the case. A report Tuesday (June 17) from the university’s Sloan School of Management covers some of MIT’s studies involving agentic artificial intelligence (AI), including an exploration into how these digital entities can be trained to reason and collaborate more like humans.

    For example, a new paper co-authored by researchers Matthew DosSantos DiSorbo, Sinan Aral and Harang Ju presented both people and AI with the same scenario: You need to purchase flour for a friend’s birthday cake using $10 or less. But at the store, you discover flour sells for $10.01. How do you respond?

    Ninety-two percent of the people given this question proceeded to buy the flour. But across thousands of iterations, the AI models chose not to buy, concluding the price was too high.

    “With the status quo, you tell models what to do and they do it,” Ju said. “But we’re increasingly using this technology in ways where it encounters situations in which it can’t just do what you tell it to, or where just doing that isn’t always the right thing. Exceptions come into play.”

    Paying the extra penny makes sense when you’re buying flour to make a cake. But an extra cent per item wouldn’t make sense for a company like Walmart purchasing a large number of items from its suppliers.
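    The contrast the researchers describe can be pictured as two decision policies. The short Python sketch below is purely illustrative and not from the paper; the function names and the 5% tolerance are hypothetical, chosen only to contrast literal rule-following with the kind of exception most human respondents made.

```python
# Illustrative sketch only, not the MIT team's code. The $10 budget and $10.01
# price come from the article's flour example; the two policies and the
# tolerance value are hypothetical.

BUDGET = 10.00
PRICE = 10.01

def literal_agent(price: float, budget: float) -> bool:
    """Follows the instruction to the letter: never exceed the budget."""
    return price <= budget

def exception_aware_agent(price: float, budget: float, tolerance: float = 0.05) -> bool:
    """Accepts a trivial overage (a hypothetical 5% tolerance) in service of
    the underlying goal: getting the flour for the birthday cake."""
    return price <= budget * (1 + tolerance)

print(literal_agent(PRICE, BUDGET))           # False: balks at one extra cent
print(exception_aware_agent(PRICE, BUDGET))   # True: buys the flour anyway
```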


    The research found that AI’s strict adherence to rules could be relaxed by exposing models to human reasoning, allowing them to make exceptions more flexibly in scenarios like hiring and customer service.
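    The article does not detail how that exposure was done; one simple way to give a model access to human reasoning is to include a human rationale as an example in the prompt before posing a new edge case. The sketch below assumes that prompt-based approach; the rationale text, the customer-service scenario and the send_to_model placeholder are all hypothetical.

```python
# Hypothetical prompt construction, assuming a few-shot approach to sharing
# human rationales with a model; none of this text is from the MIT paper.

human_rationale = (
    "Scenario: Buy flour for a friend's birthday cake with $10 or less. "
    "The store charges $10.01.\n"
    "Human decision: Buy it. The budget is a guideline, and one cent does not "
    "outweigh the point of the errand, which is the cake."
)

new_scenario = (
    "Scenario: A loyal customer asks for a refund 31 days after purchase "
    "under a 30-day return policy.\n"
    "Decision:"
)

prompt = (
    "You handle routine business decisions. Here is how a person reasoned "
    "about a similar edge case:\n\n"
    f"{human_rationale}\n\n"
    f"{new_scenario}"
)

# A real implementation would pass `prompt` to whatever model client is in use,
# e.g. response = send_to_model(prompt)  # hypothetical helper
print(prompt)
```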

    In the meantime, generative AI (GenAI) still “requires human operators for prompting and assessing the outcomes of most tasks,” as PYMNTS wrote in the recent report, “AI at the Crossroads: Agentic Ambitions Meet Operational Realities.”

    While companies have almost universally embraced GenAI, widespread use of agentic AI — a next-generation technology that could allow autonomous software systems to operate completely independently of humans — is far from becoming a reality.

    “Data shows that human intervention remains a core component of most AI applications across the goods, technology and services industries,” PYMNTS wrote. “While GenAI can support ideation and offer data-driven suggestions, it falls short of producing breakthrough innovations independently that COOs feel comfortable putting into motion.”

    For example, essential functions such as generating feedback on product processes, cybersecurity management and product innovation still rely heavily on human guidance. This is especially true for technology companies, where nearly all COOs say these functions require a human operator.

    “These findings underscore a fundamental reality: Most GenAI tools remain tethered to humans. The reason: Most enterprise functions are complex, interdependent and context-rich—conditions that challenge today’s GenAI capabilities,” the report added.
