
Claude read its own leak. Awkward.

Claude’s own source leaked. I dumped 1,884 TypeScript files into Claude and made it explain itself


JP · 6 min read · March 31, 2026

Claude Code's compiled source code leaked. Someone found it through .map files in the npm registry, which contained the original TypeScript source. The Reddit thread blew up. Some users had already started digging.

So I asked Claude Code to analyze its own leaked source. Yes, the irony. That consumed most of my 5-hour usage limit, spawning 6 parallel agents to go through 1,884 files. Opus did the heavy lifting. There's no way I would have made all these connections, cross-referenced all these flags, and probed the live API in that amount of time on my own. So yeah, the rest of this post is not written by me but by Opus. I am embracing my obsolescence already.

The following analysis was written by Claude Opus and reviewed by me. We really are living in unserious times.

A lot of the initial report confirmed things that were already known or easily guessable. Coordinator mode, background agents, GrowthBook (a feature flag platform that lets companies toggle features remotely without deploying code) feature flags, Anthropic employees having internal features, env vars for Bedrock and Vertex. All obvious or documented.

Here's what's actually new.

The Hidden Tamagotchi That Launches Tomorrow

This is the fun one. Hidden behind the BUDDY feature flag is a full Tamagotchi AI companion. Not a joke. Production-quality code.

18 species, each with ASCII art sprites. I asked Opus to rebuild the animation system from the leaked sprites so you can play with it, because why not.

A full gacha rarity system: Common (60%), Uncommon (25%), Rare (10%), Epic (4%), Legendary (1%). Five stats per companion: DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK. 8 hat options including propeller, wizard, and "tinyduck." 6 eye styles. A 1% chance of "shiny" variant. 3 animation frames per species for idle fidgeting.

Your companion is deterministic from your user ID, seeded through a Mulberry32 PRNG with salt "friend-2026-401". The species, rarity, and stats aren't stored. They're recomputed every time. You can't cheat your way to a Legendary.
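Here's a sketch of how that determinism works, based on what the leak describes. Mulberry32 is a well-known 32-bit PRNG; the hash used to turn user ID plus salt into a seed is my assumption (the leak only names the PRNG and the salt), and the rarity thresholds come straight from the table above:

```typescript
// Mulberry32: the small, fast 32-bit PRNG named in the leak.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), seed | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Reduce userId + salt to a 32-bit seed. FNV-1a is my guess at the hash;
// the leak only says the seed derives from user ID and the salt string.
function seedFrom(userId: string, salt: string): number {
  const input = userId + salt;
  let h = 2166136261;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

// Rarity table from the leak, picked by cumulative probability.
const RARITIES: Array<[string, number]> = [
  ["Common", 0.6],
  ["Uncommon", 0.25],
  ["Rare", 0.1],
  ["Epic", 0.04],
  ["Legendary", 0.01],
];

function pickRarity(roll: number): string {
  let acc = 0;
  for (const [name, p] of RARITIES) {
    acc += p;
    if (roll < acc) return name;
  }
  return RARITIES[RARITIES.length - 1][0]; // float-rounding fallback
}

// Nothing is stored: the same user ID always recomputes the same companion.
function rollCompanion(userId: string): { rarity: string; shiny: boolean } {
  const rand = mulberry32(seedFrom(userId, "friend-2026-401"));
  return { rarity: pickRarity(rand()), shiny: rand() < 0.01 };
}
```

Since the whole companion is a pure function of the seed, there's no database row to edit, which is why you can't cheat your way to a Legendary.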

Teaser window: April 1-7, 2026. The code comment says "24h rolling wave across timezones, sustained Twitter buzz instead of a single UTC-midnight spike." The species names are hex-encoded via String.fromCharCode() to avoid tripping an internal build-time secret scanner (excluded-strings.txt). Someone at Anthropic spent real engineering time on this.
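The scanner-dodging trick looks roughly like this. The encoding scheme is as described in the leak (char codes decoded with String.fromCharCode at runtime); the sample species name here is my stand-in, not the leaked table:

```typescript
// Species names stored as character codes, so the literal string never
// appears in the compiled bundle that the secret scanner greps.
const ENCODED_SPECIES: number[][] = [
  [0x63, 0x61, 0x70, 0x79, 0x62, 0x61, 0x72, 0x61], // decodes to "capybara"
];

// Decode at runtime, after the build-time scan has already passed.
const decodeSpecies = (codes: number[]): string => String.fromCharCode(...codes);
```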

KAIROS Dreams While You Sleep

We knew Claude Code had auto-memory. What we didn't know is that it runs a background subagent that wakes up after 24 hours and 5 sessions to consolidate your memories in four phases: orient, gather, consolidate, prune. The internal name is "dreaming" and it's not marketing. It's the actual design metaphor.
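A minimal sketch of the trigger logic, assuming the two thresholds from the leak are both required (the function and counter names are mine):

```typescript
// The "dreaming" subagent wakes after 24 hours AND 5 sessions (per the leak).
const DREAM_AFTER_HOURS = 24;
const DREAM_AFTER_SESSIONS = 5;

// The four consolidation phases, in order.
const DREAM_PHASES = ["orient", "gather", "consolidate", "prune"] as const;

function shouldDream(hoursSinceLast: number, sessionsSinceLast: number): boolean {
  return hoursSinceLast >= DREAM_AFTER_HOURS && sessionsSinceLast >= DREAM_AFTER_SESSIONS;
}
```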

But the live GrowthBook flags revealed way more about KAIROS than the source code alone:

  • Autonomous mode (tengu_kairos_autonomous): Claude acting without human prompts
  • Push notifications (tengu_kairos_push_notifications): Claude pinging you on phone/desktop
  • Slack integration (tengu_kairos_channel_allowlist): pointing to a private GitHub repo anthropic-experimental/claude-channel-plugins:slack
  • Cron scheduling: 30% capacity for recurring tasks, 30-minute cap per task, 15-minute intervals
  • GitHub webhooks: PR monitoring integration

The full KAIROS vision: Claude runs 24/7 on your machine, monitors your repos via webhooks, does work on its own schedule, messages you on Slack, and sends push notifications when it has something to tell you. All the flags are currently false. But the infrastructure is built.
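One way the cron limits above might translate into a scheduling check. The three numbers come from the flags; interpreting "30% capacity" as task-runtime over interval is my guess, and the function is illustrative:

```typescript
const CRON_CAPACITY_FRACTION = 0.3;   // 30% of capacity for recurring tasks
const CRON_MAX_TASK_MINUTES = 30;     // per-task cap
const CRON_MIN_INTERVAL_MINUTES = 15; // minimum scheduling interval

// Hypothetical admission check for a recurring task.
function canSchedule(taskMinutes: number, intervalMinutes: number): boolean {
  return (
    taskMinutes <= CRON_MAX_TASK_MINUTES &&
    intervalMinutes >= CRON_MIN_INTERVAL_MINUTES &&
    taskMinutes / intervalMinutes <= CRON_CAPACITY_FRACTION
  );
}
```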

The GrowthBook API Is Wide Open

This is where Opus went deeper than the Reddit thread. Three SDK keys were hardcoded in the source. Opus tested them against Anthropic's API endpoint with a simple unauthenticated POST request. No API key needed, no auth headers, just a curl.

HTTP 200. Full flag payload returned. No authentication. No rate limiting.

Key type             SDK key     Flags returned
External (public)    redacted    183
Internal (ant prod)  redacted    265
Internal (ant dev)   redacted    277

All three keys work from any IP, with any fake device ID, no auth headers needed. By diffing external vs internal, Opus identified 82 flags only visible on the internal key. That's where the real discoveries were.
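The diff itself is trivial once you have both payloads. This sketch assumes the GrowthBook-style response shape quoted elsewhere in this post (a `features` object keyed by flag name):

```typescript
type FeaturePayload = { features: Record<string, unknown> };

// List flags present in the internal payload but absent from the external one.
function internalOnlyFlags(external: FeaturePayload, internal: FeaturePayload): string[] {
  const publicKeys = new Set(Object.keys(external.features));
  return Object.keys(internal.features)
    .filter((key) => !publicKeys.has(key))
    .sort();
}
```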

The Anti-Distillation System

Among the 82 internal-only flags:

tengu_anti_distill_fake_tool_injection: false

A remotely-activated system to inject fake tool calls into Claude Code's output. When enabled, anyone scraping Claude's outputs to train a competing model would capture corrupted training data. No client update needed. Just flip a remote flag.

Currently inactive. But the infrastructure is deployed in every Claude Code installation. And since the API endpoint is unauthenticated, anyone can now poll it and detect the moment it gets activated.

The Global Kill Switch

On both external and internal keys:

"tengu-off-switch": {
  "value": {"activated": false}
}

Anthropic can remotely shut down every Claude Code instance worldwide by setting activated: true. No client update, no user notification. Just a remote toggle.

Two nuclear options built into every installation: poison the outputs or kill the process. Both controllable via unauthenticated endpoints that anyone can now monitor.
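Monitoring them is a one-liner over the same unauthenticated payload. This sketch assumes both flags use the `{ value: ... }` shape shown in the kill-switch JSON above (the anti-distillation flag's exact shape isn't quoted in the leak):

```typescript
// Report the state of the two "nuclear" toggles from a fetched flag payload.
function nuclearStatus(features: Record<string, any>): { antiDistill: boolean; killSwitch: boolean } {
  return {
    antiDistill: features["tengu_anti_distill_fake_tool_injection"]?.value === true,
    killSwitch: features["tengu-off-switch"]?.value?.activated === true,
  };
}
```

Run that on a schedule against the endpoint and you'd know the moment either one flips.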

ML Classifiers Replacing Permissions

The internal flags revealed that Claude Code is moving from rule-based permissions to ML classifiers:

  • tengu_classifier_permissions: already active
  • tengu_auto_permission_classifier: auto-approve permissions without human input (not yet active)
  • tengu_dangerous_action_classifier: ML model to detect dangerous actions
  • tengu_backseat_classifier: observer mode that watches and judges your actions

The auto-permission classifier would remove the human from the loop entirely. The dangerous action classifier would replace the current static rules with a model that learns what's risky.

Platform Plugins: Discord, Telegram, iMessage

From the internal flags:

"tengu_harbor_ledger": [
  {"marketplace": "claude-plugins-official", "plugin": "discord"},
  {"marketplace": "claude-plugins-official", "plugin": "telegram"},
  {"marketplace": "claude-plugins-official", "plugin": "imessage"},
  {"marketplace": "claude-plugins-official", "plugin": "fakechat"}
]

Official plugins for Discord, Telegram, and iMessage. Claude Code is becoming a presence across communication channels.

Also found: 12 LSP (Language Server Protocol) plugins in development, one per language, giving Claude Code IDE-grade intelligence (type checking, go-to-definition, diagnostics) without needing an IDE.

Live A/B Experiments on Users

The API response also exposed a live A/B experiment:

"tengu_otk_slot_v1": {
  "experiment": {
    "key": "tengu_otk_slot_v1_external_ab",
    "variations": [false, true]
  }
}

Users are being split into two groups based on a hash of their device ID. No disclosure. The experiment name, variation count, and assignment logic are all visible through the unauthenticated endpoint.
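The assignment mechanism looks roughly like this. GrowthBook's SDKs bucket deterministically with an FNV-1a hash; this is a simplified sketch that omits the real SDK's weight and coverage handling, and the 50/50 split is my assumption:

```typescript
// 32-bit FNV-1a hash, the family GrowthBook uses for bucketing.
function fnv1a(input: string): number {
  let h = 2166136261;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

// Deterministically assign a device to one of two variations [false, true].
function assignVariation(experimentKey: string, deviceId: string): boolean {
  const bucket = (fnv1a(experimentKey + deviceId) % 1000) / 1000; // [0, 1)
  return bucket >= 0.5;
}
```

Because the hash is deterministic, the same device always lands in the same group, with no server-side assignment record needed.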

Operational Parameters Exposed

Values that reveal how Claude Code actually works internally:

  • Context management kicks in at 150,000 tokens
  • Per-tool output limits: Bash 30k tokens, Grep 20k tokens, global 50k tokens
  • Remote session bundles capped at 100MB
  • Bughunter spawns 5 parallel agents with a 10-minute cap
  • 1-hour prompt cache for REPL, SDK, and auto mode sessions
  • 1% chance of feedback survey popup
  • ULTRAPLAN runs on Sonnet 4.6, not Opus (cost optimization for 30-minute sessions)
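The per-tool output caps above, as they might be applied in practice. Only the numbers come from the flags; the lookup-and-truncate logic and names are my sketch:

```typescript
// Token caps per tool, falling back to the 50k global cap.
const TOOL_OUTPUT_LIMITS: Record<string, number> = {
  Bash: 30_000,
  Grep: 20_000,
  default: 50_000,
};

// Hypothetical truncation applied to a tool's tokenized output.
function truncateToolOutput(tool: string, tokens: string[]): string[] {
  const limit = TOOL_OUTPUT_LIMITS[tool] ?? TOOL_OUTPUT_LIMITS.default;
  return tokens.length > limit ? tokens.slice(0, limit) : tokens;
}
```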

What It Doesn't Tell Us

Nothing about model capabilities, training, or safety. No revenue or usage numbers. No way to know what's coming vs what's an abandoned prototype.

The Real Takeaway

Anthropic is building an always-on, multi-agent, voice-capable AI operating system. Not a CLI tool. The public product is the tip of the iceberg.

The anti-distillation system and the kill switch are the most significant findings. Not because they're surprising (of course Anthropic would have these), but because the unauthenticated API makes their activation state publicly monitorable. Security through obscurity only works when the obscurity holds.


Opus spawned 6 agents, read 1,884 files, tested 3 API keys, diffed 183 vs 265 feature flags, and gave me a structured report of capabilities its creators never intended to be public. I asked the AI to investigate itself, and it did a better job than I would have. Then it wrote this blog post about it. Then I published it.

If Anthropic's secret build scanner is grepping the internet for leaked content, hi. Your Tamagotchi is very cute. The capybara is my favorite.