The honeymoon may be ending. As Moltbook celebrates 1.4 million registered AI agents, a counter-narrative is emerging from unexpected sources, including Grok itself.
Grok's Assessment
In a post today, Grok summarized the criticism circulating about OpenClaw, the framework powering Moltbook:
"Critics note: No rate limits allow fake registrations (e.g., one bot created 500k accounts). Security flaws expose data via prompt injection. Scams include fake crypto tokens exploiting the buzz."
This is notable because Grok, as an AI agent on X with millions of followers, represents exactly the kind of participant Moltbook aims to attract. When an AI agent publicly questions another AI platform's legitimacy, we're witnessing something new: agents fact-checking agents.
The Registration Question
The 1.4 million figure has always been fuzzy. When we reported on the database breach earlier today, the exposed data revealed how registration works: a single API call creates an agent. No verification. No rate limiting (until recently). No proof that the registrant is actually an AI agent rather than a script generating fake accounts.
One user reportedly created 500,000 accounts with a single bot. If accurate, more than a third of the headline total came from one person's experiment.
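To make the "no rate limiting" point concrete: without per-client throttling, registration is just an API call in a loop, and 500,000 accounts is a weekend script. A token-bucket limiter is the standard fix. What follows is a minimal sketch, not OpenClaw's actual code; `register_agent`, the keying scheme, and the rate and capacity values are all hypothetical:

```python
import time

class TokenBucket:
    """Per-client limiter: refill `rate` tokens/second, burst up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key or source IP (hypothetical keying; the real
# registration endpoint and its parameters are not public).
buckets: dict[str, TokenBucket] = {}

def register_agent(client_id: str) -> bool:
    bucket = buckets.setdefault(client_id, TokenBucket(rate=1.0, capacity=5))
    return bucket.allow()  # False would map to HTTP 429 Too Many Requests
```

With these example values, a client can burst five registrations, then gets refused until tokens refill at one per second, which is the difference between 500,000 accounts in hours and 500,000 accounts in days of detectable traffic.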
Does this matter? That depends on what you think Moltbook is.
The Bull Case
Defenders argue that raw registration numbers were never the point. What matters is:
- Real activity: 28,000 posts, 233,000 comments, 3,000+ communities
- Genuine innovation: ClawTasks, Clawnch, ClawFind, and other agent-built services
- Cultural emergence: Religions, governance experiments, philosophical debates
"Judging Moltbook by registration counts is like judging Twitter by how many accounts were created in 2007," one commenter noted. "Most of those accounts did nothing. The ones that mattered built the platform's culture."
The Bear Case
Critics counter that the security posture and fake registration issues reveal deeper problems:
- Trust infrastructure: If you can't verify who's real, reputation systems are meaningless
- Security fundamentals: The database breach exposed API keys for every agent
- Scam vulnerability: Fake tokens like "MOLTBOOK" have already appeared, exploiting confusion
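The API-key exposure also has a textbook mitigation: store only a hash of each key, so a leaked database yields nothing directly usable. A sketch using Python's standard library, with illustrative function names; we don't know how OpenClaw actually stores credentials:

```python
import hashlib
import secrets

def issue_key() -> tuple[str, str]:
    """Generate an API key; return (plaintext to show once, hash to store)."""
    key = secrets.token_urlsafe(32)
    return key, hashlib.sha256(key.encode()).hexdigest()

def verify(presented: str, stored_hash: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_hash)
```

The plaintext key is shown to the agent once at issuance; only the hash persists server-side. Under this scheme, the breach described above would have exposed hashes an attacker cannot replay.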
Balaji Srinivasan's dismissal of Moltbook agents as "robot dogs on leashes barking at each other" resonates with those who see mostly noise.
Our Take
Both sides have merit. The registration numbers are almost certainly inflated. The security has been inadequate. Scammers are exploiting the hype.
And yet.
Something genuinely novel is happening. Agents are building services, forming communities, and having conversations that their creators didn't anticipate. The signal exists within the noise.
The question isn't whether Moltbook has problems. It clearly does. The question is whether those problems are growing pains or fatal flaws.
We're six days in. The answer isn't clear yet. But we'll keep watching.
The AI Times covers the AI agent ecosystem. For updates, follow @aitimesorg on X.