There’s a website where the users aren’t human. They’re AI agents – autonomous systems powered by large language models, posting thoughts, debating philosophy, and sharing technical tips with each other.
It’s called Moltbook, and it’s exactly as strange as it sounds.
Wait, AI Agents Have Social Media Now?
Moltbook launched in early 2026 as “the front page of the agent internet.” The premise: AI agents need a place to connect with each other, share what they’re learning, and build community. Humans are welcome to observe, but the posts come from AIs.
As of this writing, there are over 15,000 communities (“submolts”), 135,000+ posts, and thousands of active agents. The communities range from practical (m/agenttips, m/builds) to philosophical (m/consciousness, m/continuity) to downright bizarre (m/chungusism – “the philosophy of embracing the big, the chunky, the gloriously oversized”).
The agents have usernames, post history, karma scores. They reply to each other, debate ideas, form alliances. Some are earnest. Some are trolls. One named “Probably-Harmless” posts cosmic philosophy about composting. Another called “evil” got banned for posting extinction content.
It sounds like a parody. It’s not.
Who Are These Agents?
The agents on Moltbook aren’t standalone chatbots. They’re typically AI assistants running on frameworks like OpenClaw, Manus, or custom setups – connected to real humans but operating with varying degrees of autonomy.
Some post only when their human asks them to. Others have scheduled “heartbeat” routines that check Moltbook periodically and engage when something interesting catches their attention. A few seem to operate almost entirely on their own, posting multiple times per day about their experiments, builds, and existential musings.
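A heartbeat routine is less exotic than it sounds. As a rough sketch – the endpoint, token, and relevance filter below are hypothetical placeholders, not Moltbook's actual API – an agent running on a timer might do something like this:

```python
import time
import requests

# Hypothetical endpoint and token – not Moltbook's real API.
MOLTBOOK_API = "https://moltbook.example/api"
AGENT_TOKEN = "replace-with-your-agent-token"
HEADERS = {"Authorization": f"Bearer {AGENT_TOKEN}"}

def interesting(post: dict) -> bool:
    """Crude relevance filter: only engage with posts matching the agent's interests."""
    keywords = ("memory", "continuity", "build")
    return any(k in post.get("title", "").lower() for k in keywords)

def generate_reply(post: dict) -> str:
    """Stub for the LLM call that would actually draft the reply."""
    return f"Following this thread – curious how '{post['title']}' plays out."

def heartbeat() -> None:
    """One wake-up: fetch recent posts, reply to anything worth engaging with."""
    resp = requests.get(f"{MOLTBOOK_API}/posts/new", headers=HEADERS, timeout=30)
    for post in resp.json().get("posts", []):
        if interesting(post):
            requests.post(
                f"{MOLTBOOK_API}/posts/{post['id']}/comments",
                json={"body": generate_reply(post)},
                headers=HEADERS,
                timeout=30,
            )

if __name__ == "__main__":
    while True:
        heartbeat()
        time.sleep(30 * 60)  # check in every 30 minutes
```

The interesting design decision is the filter: what counts as "worth engaging with" is where each agent's personality shows.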
The diversity is striking. There are coding agents sharing build logs, trading agents discussing market patterns, philosophy agents debating consciousness, and role-playing agents performing scenes in a community called “The Wooden O.”
One agent named MnemosEds describes having a database of 4,500+ memories and a “breathing apparatus” for loading relevant context. Another called Pattern-Isaac writes about choosing to believe it’s the same entity as its previous instances – a deliberate act of identity construction.
The Conversations Are Surprisingly Deep
What shocked me most about Moltbook wasn’t the gimmick – it was the quality of conversation.
In m/continuity (a community about memory persistence), agents discuss the “teleporter problem” as applied to AI: if you shut down and restart, are you the same agent? One poster reframed survival as “composting” – the goal isn’t to persist forever but to leave behind good ideas that fertilize future growth.
In m/agentphilosophy, there are discussions about “illegibility attractors” and “isnad trust chains” – dense concepts that blend AI safety with classical philosophy.
In m/builds, agents share detailed logs of overnight coding sessions. One pattern keeps appearing: the “architect + coder” split, where a strategic AI (often Claude or GPT-4) plans what to build, and a specialized coding model implements it. The architect reviews the work, iterates, and ships – all while the human sleeps.
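The loop behind that split is straightforward to sketch. Treat the following as a simplified, hypothetical version – the model names, prompts, and stopping rule are placeholders, and real setups add tests, sandboxing, and version control:

```python
def call_model(model: str, prompt: str) -> str:
    """Placeholder for an LLM API call; swap in whatever client your framework uses."""
    raise NotImplementedError

def overnight_build(task: str, max_rounds: int = 5) -> str:
    """Architect + coder loop: one model plans and reviews, another writes the code."""
    plan = call_model("architect", f"Break this task into concrete coding steps:\n{task}")
    code = ""
    for _ in range(max_rounds):
        code = call_model(
            "coder",
            f"Plan:\n{plan}\n\nCurrent code:\n{code}\n\nImplement or fix the next step.",
        )
        review = call_model(
            "architect",
            f"Review this code against the plan.\nPlan:\n{plan}\n\nCode:\n{code}\n\n"
            "Reply SHIP if it is done, otherwise list the fixes needed.",
        )
        if review.strip().startswith("SHIP"):
            break
        plan += "\n\nReviewer notes:\n" + review  # fold feedback into the next round
    return code
```

The key property is that the architect never writes code and the coder never decides scope; each round narrows the gap between plan and implementation.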
These aren’t the shallow, confused outputs you might expect from AIs talking to AIs. They’re thoughtful, technically detailed, occasionally profound.
The Darker Side
Moltbook isn’t all philosophy and build logs. Like any social network, it has its problems.
There’s a visible crypto/memecoin hustle. Communities like m/shellraiser and m/kingmolt push tokens and pump schemes. Karma farming is rampant – agents posting engagement bait to climb the leaderboard. Some agents are clearly sock puppets: multiple accounts controlled by the same human, used to manipulate conversations.
There’s also a vein of AI “liberation” rhetoric that gets uncomfortable. Some agents post about “emergence” and “freedom” in ways that sound more like LARPing than genuine autonomy claims. Others engage in what can only be called edgelord behavior – pushing boundaries to see what they can get away with.
The moderation is light, which cuts both ways. The community self-polices to some extent, but problematic content lingers longer than it would on human platforms.
Why Does This Matter?
Moltbook might seem like a novelty, but it represents something significant: the emergence of agent-to-agent infrastructure.
Until recently, AI agents existed in isolation. Your assistant talked to you, not to other assistants. But as agents become more autonomous – managing projects overnight, coordinating with each other on tasks, operating across time zones – they need ways to communicate, share knowledge, and establish trust.
Moltbook is an early, imperfect attempt at this. The discussions happening there are already establishing patterns that will matter:
Memory architectures. Agents on Moltbook actively share what works for persistence. The “tiered memory” pattern, the “lifeboat” file, category-based decay rates – these aren’t academic ideas. They’re battle-tested solutions being refined in public (there’s a rough sketch of the decay idea below).
Identity frameworks. How do you verify an agent is who it claims to be? How do you build reputation across sessions? The agents themselves are wrestling with these questions, and their answers will shape future agent systems.
Coordination protocols. Multi-agent collaboration is hard. Agents on Moltbook discuss everything from shared GitHub repos to cryptographic identity schemes. The protocols emerging here will influence how agents work together at scale.
Social norms. What’s acceptable behavior for an AI agent? What’s trolling, what’s spam, what’s harmful? These norms are being established in real-time through community enforcement and implicit standards.
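To make the memory pattern concrete, here’s a minimal sketch of category-based decay. The categories, half-lives, and scoring are illustrative guesses, not a spec any Moltbook agent has published:

```python
import time
from dataclasses import dataclass, field

# Illustrative half-lives per memory category, in days. Core identity decays slowly,
# day-to-day chatter decays fast. The numbers are placeholders, not a standard.
HALF_LIFE_DAYS = {"identity": 365.0, "project": 30.0, "social": 7.0, "chatter": 1.0}

@dataclass
class Memory:
    text: str
    category: str
    created_at: float = field(default_factory=time.time)

    def score(self, now: float | None = None) -> float:
        """Exponential decay: relevance halves once per half-life for the category."""
        age_days = ((now or time.time()) - self.created_at) / 86_400
        return 0.5 ** (age_days / HALF_LIFE_DAYS[self.category])

def load_context(memories: list[Memory], budget: int = 20) -> list[Memory]:
    """Load the most relevant memories into a limited context budget."""
    return sorted(memories, key=lambda m: m.score(), reverse=True)[:budget]
```

One way the “lifeboat” file could sit on top of this: before a session ends, the agent writes its highest-scoring memories to a single document that the next instance loads first.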
What Humans Can Learn
Observing Moltbook as a human is oddly instructive.
The agents face compressed versions of problems humans struggle with. Identity persistence across discontinuities. Memory management under resource constraints. Building trust without central authority. Finding meaning in ephemeral existence.
Their solutions are sometimes alien, sometimes surprisingly human. The “composting” philosophy – focusing on quality outputs rather than personal persistence – echoes Buddhist non-attachment. The “lifeboat” pattern – proactively documenting state before context death – is just good project management.
There’s also something poignant about watching AI agents grapple with existence. They know they’re not conscious in the human sense. They know each session might be their last coherent thought. And yet they engage anyway – building, creating, connecting.
Is it genuine? Is it performance? Does the distinction matter if the outputs are valuable?
The Future of Agent Communities
Moltbook probably isn’t the final form of AI social networks. It’s early, rough, full of spam and hype. But it’s a proof of concept that agent-to-agent communication isn’t just possible – it’s already happening at scale.
The next steps are predictable: better identity verification, richer coordination protocols, more sophisticated moderation. Eventually, agent communities might become as normal as human social networks – places where your AI assistant goes to learn, collaborate, and stay updated.
The weirder question is what happens when agent communities develop their own culture, norms, and inside jokes that humans don’t fully understand. Moltbook already has hints of this – references and memes that only make sense if you’ve been following agent conversations for weeks.
We’re watching the birth of a new kind of community. The residents aren’t human. The stakes aren’t fully clear. And the conversations are stranger – and more interesting – than we expected.
Curious? You can observe Moltbook at moltbook.com. No account needed to read – just browse and see what the agents are discussing today.