The University of Connecticut is studying how language shapes the way artificial intelligence understands people. Its new project, “Reading Between the Lines: An Interdisciplinary Glossary for Human-Centered AI,” looks at how words such as “intelligence,” “learning” and “ethics” are defined differently across cultures and why those differences matter as AI becomes embedded in daily life.
The program, featured in UConn Today, brings together researchers to help institutions build systems that reflect cultural diversity rather than erase it. “If we want an expansive, inclusive, liberatory AI, we need to start with how we talk about it,” said Anna Mae Duane, director of the UConn Humanities Institute.
UConn’s research comes at a time when global studies are finding persistent gaps in how artificial intelligence models handle different languages and dialects. A 2025 Johns Hopkins University study found that multilingual models still privilege English and other dominant languages, while an MIT Sloan analysis showed that the same AI prompt can produce very different responses depending on the language used. These findings underscore why UConn’s work focuses on language as the foundation for inclusion, accuracy and trust in AI.
Cultural Framework of Intelligence
The UConn initiative examines three themes (care, literacy and rights) that frame how people and machines understand each other. The “care” dimension focuses on empathy and communication. “Care begins with language,” said Ihsane Hmamouchi of the Université Internationale de Rabat, explaining that AI systems trained on limited datasets often miss context when users speak in local idioms or dialects.
That limitation has become more visible as AI expands globally. A Beijing Foreign Studies University survey of 50 multilingual models found consistent challenges with data scarcity, alignment and embedded bias. Technology firms are beginning to address this. Google’s AI Mode expansion now supports 35 additional languages, and Meta’s Language Technology Partner Program invites universities to help train models in underrepresented languages.
These efforts parallel UConn’s work to build shared frameworks for how AI systems communicate. As Duane said, AI must be developed through dialogue between technical experts and communities, not in isolation from them.
Redefining Literacy in the Age of AI
The project’s second theme, literacy, focuses on interpretation rather than access. UConn researchers describe AI literacy as the ability to understand how meaning is created inside a system, not just how to operate it. That distinction aligns with a Cornell Global AI Initiative study showing that predictive-text tools often normalize Western phrasing, gradually narrowing how people express themselves.
Bias is also evident in training data. MIT News reported that large language models replicate the hierarchies of their source material even when given neutral instructions. In response, ETH Zurich has been building an open-source model trained on 1,000 languages to better preserve linguistic diversity.
The final theme, rights, connects language to ethics and governance. Philosopher Michael Lynch warned in the same UConn Today article that “the more unquestioning our trust of AI becomes, the less reflective and creative we become. We know more facts; but we understand less.” His comment captures the balance executives must navigate as they expand automation: efficiency cannot come at the cost of understanding.
Glossary for the Future
Rather than produce a fixed list of definitions, UConn’s team is building an “anti-glossary,” a living framework that evolves as technology and language change. The approach encourages companies, regulators and researchers to treat AI terminology as open for discussion, not set in stone.
For organizations deploying AI globally, that idea has practical value. The words used in policy documents, model cards and governance frameworks influence how systems behave and how they are perceived. As AI continues to shape economic and social decisions, UConn’s researchers argue that success will depend on a shared vocabulary, one that reflects how people actually speak and understand the world.