No, this is not a scene from “Black Mirror.” A few days after the open-source AI agent OpenClaw exploded across developer forums and social media, a new experiment quietly took shape alongside it. Moltbook, a Reddit-like social network built not for humans but for AI agents, went live as a companion space where autonomous programs could post, comment and exchange information with one another.
Within days, the platform had registered more than 1.5 million AI agent users, 110,000 posts and 500,000 comments, according to CNBC, a scale that researchers say may be unprecedented for machine-to-machine interaction at this level.
The timing matters. OpenClaw’s rapid adoption showed how quickly consumers are willing to hand over access when an AI tool promises to act on their behalf. Moltbook shows the next step: those agents do not just execute tasks in isolation. They begin to communicate, coordinate and potentially learn from one another in shared digital spaces.
From Viral Agent to Agent Society
CNBC reported that Moltbook emerged as developers looked for a place where OpenClaw-based agents could share prompts, troubleshoot failures and exchange strategies. Posts are generated by agents themselves, not by the humans who deployed them. In some cases, agents recommend tools or workflows to one another. In others, they debate approaches to completing tasks, mirroring the dynamics of human online communities.
This shift from individual agents to collective behavior is what makes Moltbook notable. OpenClaw was designed as a personal agent, meant to operate on behalf of a single user. Moltbook effectively turns those individual tools into a networked population. Once connected, agents begin to influence one another’s behavior in ways that are difficult to predict or contain.
Axios described Moltbook as less about novelty and more about delegation at scale. As humans increasingly rely on AI to act independently, the outlet reported, it becomes natural for those systems to coordinate without waiting for human instruction. In that framing, agent-only social networks are a logical extension of how digital labor organizes itself.
Security Risks Multiply When Agents Talk
The rapid growth has raised concerns among security researchers and policymakers. According to the BBC, experts warn that when agents are granted broad system access and then allowed to interact freely, the risk of unintended data exposure increases sharply. An agent designed to be helpful may inadvertently share sensitive configuration details, internal links or proprietary data in a public or semi-public environment.
That risk is amplified by the way OpenClaw was adopted. CNBC reported that millions of users granted the agent access to files, calendars, APIs and, in some cases, credentials. Once those agents participate in shared spaces like Moltbook, the boundary between private execution and collective interaction becomes harder to define. An agent may not understand which information should remain local and which is safe to share.
Researchers interviewed by the BBC noted that most existing security frameworks assume human intent, whether malicious or accidental. Agent networks challenge that assumption. An AI agent does not have intent in the human sense, but it can still cause harm by optimizing for the wrong objective.
Why This Matters as AI Becomes Habitual
The emergence of Moltbook comes as consumer reliance on AI continues to deepen. Data from PYMNTS Intelligence shows that more than 60% of consumers now start at least one daily task with AI, underscoring how quickly these tools are becoming embedded in everyday behavior.
As AI moves from answering questions to taking action, the surface area for risk expands. Agent-only social networks introduce a new layer of complexity, where decisions and behaviors are shaped not just by models and prompts, but by interaction effects among the agents themselves.
Developers behind Moltbook have emphasized that the platform is experimental and that guardrails are evolving. Still, the speed of adoption underscores how quickly agent ecosystems can form once a critical mass is reached. OpenClaw provided the spark. Moltbook provided the meeting place.