Claude Code Open Source?

1 point by ryanthedev 8 hours ago

Today I learned that they ship the entire CLI with the agent SDK. It's all there...

Do we think this was a mistake?

---

node_modules/@anthropic-ai/claude-agent-sdk/cli.js.

  It's the bundled, minified CLI for Claude Code itself (the tool you're talking to right now).

  Key details:
  - Version: 2.1.71, built on 2026-03-06
  - ~13,800 lines of heavily minified/bundled JavaScript
  - Shebang: #!/usr/bin/env node — it's meant to be run as a CLI executable
  - Copyright: Anthropic PBC, with a cheeky note: "Want to see the unminified source? We're hiring!"
  - What it contains: The entire Claude Code CLI application bundled into a single file, including:
    - Onboarding/setup screens
    - Policy/managed settings loading
    - Debugging/inspector detection
    - UI rendering (using Ink/React for terminal UI)
    - Prefetching logic
    - Error handling and exit codes
    - The main entry point (main function) that starts the interactive Claude Code session

  Essentially, when you run claude in your terminal, this is the file that executes. The @anthropic-ai/claude-agent-sdk package ships it so you can
  programmatically invoke Claude Code's capabilities — the SDK wraps this CLI as its underlying engine.
---

Yes, the agent loop is in there. Here's what I can see from the minified code:

  1. i6z function — This is the in-process agent runner, which logs "Starting agent loop for ${agentId}". It sets up agent identity (agentId,
  parentSessionId, agentName, teamName, color, planMode) and runs the loop.
  2. l6z function — A poll loop for in-process teammate agents. It continuously checks for:
    - Pending user messages
    - Mailbox messages from other agents
    - Shutdown requests
    - New tasks from the task list
  3. Main loop model orchestration — There's extensive logic around mainLoopModel that handles:
    - Model selection based on permission mode (plan mode uses Opus, etc.)
    - System prompt assembly (tools, MCP instructions, skills, memory, changed files)
    - Token usage tracking and budget management
    - Compaction (context window management when conversations get long)
  4. Multi-agent coordination — The loop supports a team-based agent architecture with messaging between agents (agentName, teamName, teamLead, mailboxes,
  shutdown requests).
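The teammate poll loop described in point 2 can be sketched as a simple async tick loop that drains queues and honors shutdown requests. Everything here (field names, queue shapes) is invented for illustration; the real l6z function is minified and far more involved:

```javascript
// Sketch of a teammate-style poll loop: each tick, check for a
// shutdown request, drain one mailbox message, and pick up one task.
// The agent object shape is hypothetical, not Claude Code's internals.
async function pollLoop(agent) {
  while (true) {
    if (agent.shutdownRequested) return "shutdown";
    const msg = agent.mailbox.shift(); // message from another agent
    if (msg) agent.handled.push(msg);
    const task = agent.tasks.shift(); // new task from the task list
    if (task) agent.handled.push(task);
    // Yield to the event loop so other in-process agents can run.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```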

  So yes — the full agentic loop (prompt → LLM call → tool execution → repeat) is bundled in this file, just heavily minified. It's the same loop powering
  this conversation right now.
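That prompt → LLM call → tool execution → repeat cycle is the core of any agentic loop. A toy version, with a fake model and a stand-in tool table (none of these names come from the actual bundle):

```javascript
// Minimal agentic loop sketch: call the model, execute any tool it
// requests, feed the result back, repeat until it answers in text.
// callModel and TOOLS are stubs, not Claude Code's real internals.
const TOOLS = {
  read_file: (args) => `contents of ${args.path}`,
};

// Fake model: asks for a tool first, then finishes once it sees a result.
function callModel(messages) {
  const last = messages[messages.length - 1];
  if (last.role === "tool") return { type: "text", text: "done" };
  return { type: "tool_use", name: "read_file", args: { path: "a.txt" } };
}

function agentLoop(prompt) {
  const messages = [{ role: "user", content: prompt }];
  for (let turn = 0; turn < 10; turn++) { // turn budget, like a real loop
    const reply = callModel(messages);
    if (reply.type === "text") return reply.text; // model is finished
    const result = TOOLS[reply.name](reply.args); // execute requested tool
    messages.push({ role: "assistant", content: reply });
    messages.push({ role: "tool", content: result });
  }
  throw new Error("turn budget exhausted");
}

console.log(agentLoop("summarize a.txt")); // prints "done"
```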
---

Full system prompts are right there in the bundle as plain text strings. Here's what's embedded:

  Three identity variants:

  - F78 — CLI mode: "You are Claude Code, Anthropic's official CLI for Claude."
  - EtA — SDK mode: "You are Claude Code, Anthropic's official CLI for Claude, running within the Claude Agent SDK."
  - LtA — Agent mode: "You are a Claude agent, built on Anthropic's Claude Agent SDK."

  Full system prompt sections (lines ~6213-6217):

  The ULz function assembles the complete system prompt, including all the sections you'd recognize:

  - "# Doing tasks" — instructions about software engineering, not proposing changes to unread code, avoiding over-engineering, OWASP security, etc.
  - Over-engineering rules — "Don't add features beyond what was asked", "Don't add error handling for scenarios that can't happen", "Don't create helpers
  for one-time operations"
  - Help/feedback info — links to GitHub issues
  - Tool usage instructions — Read vs cat, Edit vs sed, etc.
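Assembly of a prompt like this is usually just string concatenation of an identity line plus named sections. A hedged sketch in the spirit of what an assembler like ULz might do (the function and section names below are invented):

```javascript
// Sketch: assemble a system prompt from an identity variant plus
// markdown-style sections. Illustrative only; the real ULz function's
// inputs and formatting are not visible through the minification.
function buildSystemPrompt({ identity, sections }) {
  const parts = [identity];
  for (const s of sections) {
    parts.push(`# ${s.title}\n${s.body}`);
  }
  return parts.join("\n\n");
}

const prompt = buildSystemPrompt({
  identity: "You are Claude Code, Anthropic's official CLI for Claude.",
  sections: [
    { title: "Doing tasks", body: "Avoid over-engineering; don't propose changes to unread code." },
    { title: "Tool usage", body: "Prefer Read over cat, Edit over sed." },
  ],
});
```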

  Output style variants (line ~6517):

  Built-in output styles like Explanatory and Learning with their own prompt overlays.

  Also present:

  - The classifier/safety prompt (line ~2844) for security-relevant harm prevention
  - Subagent instructions like "You ARE the fork. Do NOT spawn sub-agents"
  - Git commit/PR workflow prompts
  - Memory system instructions

jlongo78 23 minutes ago

heard of Claude Code. cool concept. how's it stack up against others like Codex? any unique features?

Howaboua 3 hours ago

I was thinking about why CC isn't open source and forkable, and then I realized it's probably mostly about optimizing inference costs on Anthropic's side and caching system prompts etc. properly.

They built their infra around that. It's hard to blame them for trying to save money on compute for subscriptions.