Clawdbot rebrands to OpenClaw as maintainers push security-first roadmap
The open-source personal AI assistant project has renamed again—now OpenClaw—while maintainers warn it’s still risky outside controlled environments.

Key Takeaways
- Clawdbot has renamed again—now OpenClaw—after a short-lived Moltbot rebrand tied to trademark concerns.
- The project crossed 100,000 GitHub stars in roughly two months and is adding maintainers plus a sponsor program (5 to 500 dollars per month).
- Maintainers warn OpenClaw is not ready for mainstream use due to risks like prompt injection and internet-sourced instructions.
- Early interest is extending into Moltbook, an agent-to-agent network highlighted by Andrej Karpathy and Simon Willison.
Open-source agent projects are moving from weekend experiments to fast-growing ecosystems—and the latest example is Clawdbot, which has now rebranded to OpenClaw while doubling down on security guidance for early adopters running assistants on their own machines.
OpenClaw naming and community momentum
The project’s creator, Austrian developer Peter Steinberger, says the new name is intended to avoid trademark friction after an earlier rename from Clawdbot to Moltbot following a legal challenge tied to Anthropic’s Claude. Steinberger wrote that he researched trademarks and also asked OpenAI for permission before settling on “OpenClaw,” according to the project’s launch post at openclaw.ai/blog/introducing-openclaw and a follow-up note on X.
The bigger signal for operators is adoption speed: the repo has passed 100,000 GitHub stars in about two months, and Steinberger has added additional maintainers as it shifts from a solo build to a community-run effort. A sponsorship program has also started, with tiers from “krill” at 5 dollars per month to “poseidon” at 500 dollars per month, with funds intended to support maintainers rather than the original creator.
Security risks for agent automation in real accounts
OpenClaw’s pitch is a local assistant that can work inside the chat apps teams already use—an appealing route for AI workflow automation, especially for internal ops and customer support prototyping. But maintainers are explicit: it’s not safe for non-technical users, and it should not be pointed at primary Slack or WhatsApp accounts yet.
The key issue is “fetch and follow instructions from the internet” behavior and prompt injection—where a malicious message can steer an LLM into taking unintended actions. Steinberger calls prompt injection an industry-wide unsolved problem and points users to the project’s security practices at docs.openclaw.ai/gateway/security.
The ecosystem is also spawning agent-to-agent networks. Moltbook, a site where these assistants interact, drew attention from Andrej Karpathy on X and from Simon Willison’s write-up at simonwillison.net, including notes on “Submolts,” skills (downloadable instruction files), and periodic polling.
For B2B teams, the near-term play is controlled pilots: isolate credentials, restrict permissions, and treat agent skills as third-party code until hardened practices are default.
Stay Informed
Weekly AI marketing insights
Join 5,000+ marketers. Unsubscribe anytime.
