Wharton Profs: AI Bests MBA Students in Generating Business Ideas

Who can produce better business ideas: MBA students or ChatGPT?

Writing in The Wall Street Journal on Saturday (Sept. 9), Wharton School professors Christian Terwiesch and Karl Ulrich say conventional wisdom has held that artificial intelligence (AI) isn’t good at generating new ideas.

To test this notion, the professors pitted human ideas against machine-generated ones, randomly selecting 200 ideas from students and asking ChatGPT-4 for another 200 with the prompt: “Generate an idea for a new product or service appealing to college students that could be made available for $50 or less.”
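For a sense of what that step involves, here is a minimal sketch of issuing the same prompt through the OpenAI Python client; the model name, loop count, and client setup are illustrative assumptions rather than details from the professors’ study.

```python
# Minimal sketch: generate 200 ideas with the professors' prompt.
# Model name and client setup are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Generate an idea for a new product or service appealing to college "
    "students that could be made available for $50 or less."
)

ideas = []
for _ in range(200):  # the study collected 200 AI-generated ideas
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
    )
    ideas.append(response.choices[0].message.content)
```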

The AI produced its 200 ideas with roughly an hour of human supervision. But the professors wanted to test the quality of the ideas as well as the quantity, so they submitted both sets to an online purchase-intent survey.

When respondents were asked how likely they would be to purchase each item, they proved more willing to buy the products generated by ChatGPT than those conceived by the students. The results, the professors write, have significant implications for how people think about innovation.
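As a purely illustrative sketch of how such a comparison might be tallied, the snippet below averages purchase-intent ratings by idea source; the ratings are placeholder numbers, not data from the study.

```python
# Purely illustrative: average purchase-intent ratings by idea source.
# The ratings below are placeholders, not data from the Wharton study.
import statistics

# Each entry pairs an idea's source with a survey rating on a 1-5 scale.
responses = [
    ("student", 3), ("student", 2), ("student", 4),
    ("chatgpt", 4), ("chatgpt", 5), ("chatgpt", 3),
]

for source in ("student", "chatgpt"):
    ratings = [rating for src, rating in responses if src == source]
    print(f"{source}: mean purchase intent {statistics.mean(ratings):.2f}")
```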

“First, generative AI has brought a new source of ideas to the world,” they argue. “Not using this source would be a sin. Second, the bottleneck for the early phases of the innovation process in organizations now shifts from generating ideas to evaluating ideas.”

But rather than viewing this as a contest between people and machines, Terwiesch and Ulrich write, it’s better to think of it as a pilot/co-pilot relationship, with humans in the pilot role.

Meanwhile, as AI becomes increasingly commercialized and integrated into various industries, the risk that it might violate technology laws is growing, PYMNTS wrote last week. That’s why countries around the world are working on regulations that can contain AI’s risks while also supporting its potential for innovation.

“One of the questions that is immediately raised [around AI] is how do you draw the line between human-generated and AI-generated content,” John Villasenor, professor of electrical engineering, law, public policy and management at UCLA and faculty co-director of the UCLA Institute for Technology, Law and Policy, explained to PYMNTS as part of “The Grey Matter,” the monthly TechReg TV series presented by AI-ID.

However, he added, implementing disclosure requirements that flag AI-generated content at its source can be a complex undertaking.

“At the extremes, it’s easy to classify things as one or the other,” Villasenor said. “But if you look at, for example, grammar suggestions on an academic paper that somebody’s writing, to the extent that those grammar suggestions might be enabled by AI, I think most of us would agree that that shouldn’t convert the paper into an AI-generated paper.”