Project Genie is powered by Genie 3, a general-purpose world model that can produce diverse, explorable worlds from simple text and image prompts. Users can create landscapes, characters and environments that evolve in real time, with interactive elements responding to movement and actions.
The prototype is part of Google’s broader research into advanced AI systems that go beyond static text or image generation toward dynamic “world” building. Simulations can range from natural settings like deserts and forests to complex ecosystems and fantastical scenarios, all generated from user descriptions.
Google is opening access to Project Genie for Google AI Ultra subscribers in the United States, allowing them to experiment with the world-generation features.
Genie 3 was unveiled in 2025 as a breakthrough “world model” capable of building interactive environments that maintain continuity and logic over several minutes of exploration, marking a departure from earlier, shorter-lived scene generation systems.
The introduction of Project Genie arrives amid intense competition in generative AI, with companies like OpenAI and Meta also advancing systems that support dynamic content creation. World models such as Genie are seen by researchers as key steps toward more general forms of AI that can learn and reason within simulated environments.
There has also been a broader push in the AI industry toward spatial intelligence, an area of research focused on an AI system’s ability to understand and generate three-dimensional environments.
As reported by PYMNTS in November, World Labs recently introduced Marble, a multimodal world model aimed at enabling AI systems to perceive, predict and interact with physical space. Marble can generate navigable 3D scenes from text, images, video or sketches and includes interfaces that let users lay out environments before refinement, reflecting a shift beyond traditional language and image models.