Anthropic Banned OpenClaw's Founder While He Was Following the Rules
This week OpenClaw’s relationship with Anthropic hit a new low. On April 10, Peter Steinberger — the project’s creator, now an OpenAI employee — had his Claude account temporarily banned despite having already complied with Anthropic’s new API-key-only policy. The ban was reversed, but the damage to trust was not.
The crackdown
The week started with Anthropic’s full enforcement of its OAuth ban on April 4. Claude Pro and Max subscribers can no longer use their flat-rate plans with OpenClaw or any third-party harness, and users moving to pay-per-token API billing face potential cost increases of up to 50x. Steinberger and OpenClaw board member Dave Morin had negotiated a one-week delay, but the enforcement arrived on schedule. Anthropic is offering a one-time $200 API credit to affected Max subscribers to ease the transition.
Then on April 9, OpenClaw shipped v2026.4.9 — its most security-focused release in recent memory. Two critical vulnerabilities were patched: an SSRF quarantine bypass that allowed browser interactions to reach blocked destinations, and an ENV injection flaw that let untrusted workspace .env files overwrite runtime control variables. Both were active exploitation vectors. Anyone running OpenClaw in production should treat this as a mandatory update.
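The ENV injection class of flaw is easy to picture: if a harness loads a workspace .env wholesale into its own environment, a hostile repository can overwrite the variables that control the harness itself. A minimal sketch of the standard mitigation, an allowlist-style guard over protected keys (the variable names and the `load_workspace_env` helper are hypothetical, not OpenClaw's actual code):

```python
# Hypothetical control variables a harness must shield from workspace files.
PROTECTED_VARS = {"OPENCLAW_SANDBOX", "OPENCLAW_API_BASE", "PATH"}

def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping comments and blanks."""
    result: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result

def load_workspace_env(text: str, env: dict[str, str]) -> dict[str, str]:
    """Merge an untrusted workspace .env into env without touching protected keys."""
    merged = dict(env)
    for key, value in parse_env(text).items():
        if key in PROTECTED_VARS:
            continue  # untrusted file may not override runtime controls
        merged[key] = value
    return merged
```

With a guard like this, a workspace file containing `OPENCLAW_SANDBOX=off` is silently dropped while benign keys still load; the pre-patch behavior corresponded to skipping the `PROTECTED_VARS` check entirely.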
The same release also shipped the first hardened version of Dreaming, OpenClaw’s new long-term memory system. A grounded REM backfill mechanism now lets agents retroactively replay months of historical daily notes into durable structured memory — meaning an agent that has been running for six months can suddenly remember everything it processed before Dreaming existed.
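The mechanics of a grounded backfill are straightforward to sketch: walk the historical daily notes in date order and fold each entry into the durable store, tagging every memory with its source date so later retrieval stays grounded in the original record. A toy illustration only, since Dreaming's internals are not public (the `topic: fact` note format and the `MemoryStore` shape are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy durable memory: facts keyed by topic, each grounded to a source date."""
    facts: dict[str, list[tuple[str, str]]] = field(default_factory=dict)

    def remember(self, topic: str, fact: str, source_date: str) -> None:
        self.facts.setdefault(topic, []).append((fact, source_date))

def rem_backfill(store: MemoryStore, daily_notes: dict[str, str]) -> int:
    """Replay historical daily notes (date -> 'topic: fact' lines), oldest first.

    Returns the number of memories written.
    """
    count = 0
    for date in sorted(daily_notes):  # chronological replay keeps ordering stable
        for line in daily_notes[date].splitlines():
            if ":" not in line:
                continue  # ignore lines that don't parse as a fact
            topic, _, fact = line.partition(":")
            store.remember(topic.strip(), fact.strip(), source_date=date)
            count += 1
    return count
```

The per-memory source date is what makes the replay "grounded": a retrieved fact can always be traced back to the daily note it came from, rather than floating free as an unattributed claim.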
The day after the security release, Steinberger’s Claude account was banned.
What it signals
The ban, even though temporary, revealed something important: Anthropic’s enforcement systems flagged a compliant user. Steinberger was not using OAuth. He was using an API key, exactly as the new policy requires. His account was restored, but he posted publicly that keeping OpenClaw working with Anthropic models will only get harder from here. On the choice between Anthropic and OpenAI, he was blunt: “One welcomed me, one sent legal threats.”
When asked why he uses Claude at all given his OpenAI role, he was direct: he only uses it for testing, to make sure OpenClaw does not break for the large portion of users who prefer Claude. That last point matters. Claude remains the most popular model among OpenClaw users despite the pricing conflict. His hint that his OpenAI work involves building a better Claude alternative for OpenClaw users should be read as a signal: the migration path away from Claude dependency is being actively engineered.
The broader context is a platform that grew to 120,000 GitHub stars and 1.7 million weekly npm downloads built substantially on Claude being its best supported model. Anthropic’s own Cowork agent is now a direct competitor in the same autonomous agent space, and the community has noticed the timing of its feature launches relative to the OAuth enforcement.
What comes next
The Dreaming memory system is the most significant architectural development in OpenClaw’s recent history and is still in early rollout. Watch for community reports on how well REM backfill performs at scale and whether it creates new attack surface — the security team clearly anticipated this given the SSRF hardening shipped in the same release.
On the Anthropic front: Steinberger’s account was restored without a clear public explanation of why it was flagged in the first place. If compliant API-key users continue to be banned, expect louder community response and accelerated migration toward OpenAI and local model configurations. The v4.0 roadmap’s multi-agent orchestration feature — expected mid-2026 — will be the next major test of whether OpenClaw can reduce its Claude dependency without losing capability.