Big Tech really wants augmented reality (AR) smart glasses to happen, no matter the Google Glass graveyard, Apple Vision Pro pivot or Meta’s failed metaverse roadmap.
On Monday (April 27), details about an alleged Samsung smart glasses product leaked, while earlier in the month news broke that Gucci and Google were partnering on a luxury pair of smart wearables set to debut next year.
And at the start of the year, smart glasses maker XReal raised $100 million at a valuation of over $1 billion. Meanwhile, Meta and eyewear maker EssilorLuxottica signaled to the market that they were considering doubling their capacity to produce Ray-Ban Meta smart glasses from 10 million to 20 million units by the end of the year, with room to scale further to 30 million.
Apple is also refining its approach to the space, moving from bulky virtual reality (VR) goggles to sleeker artificial intelligence (AI) wearables. Beyond the companies mentioned above, major tech firms like Amazon, Snap, Baidu and Xiaomi are all investing heavily in smart glasses, as are smaller, AI wearable-specific startups like Viture, Even Realities, Brilliant, Solos and Halliday, to name just a handful.
But while the introduction of AI has undoubtedly given the category a shot in the arm, it may not have fundamentally altered the core behavioral challenge: convincing users that wearing a computer, complete with a video camera, on their face is not just useful but necessary.
Or has it?
Read more: Wearables, Robotics and Infrastructure Become Big Tech’s New Focus
AI Changes the Interface, Not the Fundamentals
If there is a through line in the current wave of AR glasses, it is a sense of convergence. Hardware is becoming more wearable and AI is making interactions more intuitive. Enterprise use cases are proving viability in specific contexts, with the London Marathon this past weekend even featuring vision-impaired runners using AI-powered smart glasses.
Yet convergence does not guarantee adoption. The history of consumer technology is littered with products that were technically impressive but failed to find a durable place in everyday life.
Wearing early AR glasses in public often felt like announcing oneself as a beta tester. Today’s designs aim to disappear into daily life, a prerequisite for any consumer technology aspiring to ubiquity. Advances in microdisplays, waveguides and battery efficiency have allowed manufacturers to shrink components without sacrificing performance. Devices like Meta’s Ray-Ban smart glasses and Snap’s Spectacles are now closer to conventional eyewear than conspicuous headgear.
What has changed more dramatically is the software layer, particularly with the integration of generative AI. The rise of large language models and multimodal systems has given AR glasses a more compelling narrative: not just as display devices, but as intelligent companions.
In this framing, glasses become a gateway to real-time assistance. They can summarize conversations, translate languages on the fly, identify objects and provide contextual prompts based on what the wearer sees. Startups are leaning heavily into this “AI-native” positioning, arguing that the true breakthrough is not the hardware itself but the intelligence embedded within it.
But AI does not eliminate the need for a clear use case. It enhances interactions, but it does not define them. The question of why a user should wear AR glasses for hours each day, rather than pull out a smartphone when needed, remains open.
See also: How Big Tech’s XR Push Could Redefine Both Payments and AI
What Comes Next
The next phase of AR glasses will likely be defined less by breakthroughs and more by iteration. Incremental improvements in battery life, display quality and comfort will continue. AI capabilities will expand, becoming more personalized and context-aware. Partnerships between hardware makers and software developers may begin to seed more robust ecosystems.
A PYMNTS Intelligence report found that consumers often use connected devices to multitask, a behavior especially common among younger, digital-first generations. Smart glasses, for example, offer hands-free connectivity: users can tap the embedded AI assistant to run online searches, take photos or videos, read and write text messages, and translate foreign languages in real time, among other capabilities.
But incremental improvements are unlikely to drive behavioral change at scale.
For now, AR glasses remain a category defined as much by its potential as by its limitations. More are being made than ever before. The technology is better than it has ever been. But the fundamental questions—what they are for, and why they matter—remain unresolved.