More tokens, less cost: why optimizing for token count is wrong

1 point by nicola_alessi 25 days ago

I ran a controlled benchmark on AI coding agents (42 runs, FastAPI, Claude Sonnet 4.6) and found something that broke my mental model of LLM costs.

The setup: I built an MCP server that pre-indexes a codebase into a dependency graph and serves pre-ranked context to the agent in a single call, instead of letting the agent explore files on its own.

The expected result: less input context → lower cost. Straightforward.

The actual result: total tokens processed went UP 20% (23.4M vs 19.6M) while total cost went DOWN 58% ($6.89 vs $16.29).

The explanation is in how Anthropic prices tokens. There are three pricing tiers:

Output tokens: most expensive (3-5x input price)
Input tokens (cache miss): full price
Input tokens (cache hit): 90% discount

The agent with pre-indexed context processes more total tokens because the structured context payload is injected every turn. But the token MIX shifts dramatically:

Output tokens: 10,588 → 3,965 (-63%)
Cache read rate: 93.8% → 95.3%
Cache creation: 6.1% → 4.6%

Output tokens dominate the cost equation. When the agent receives 40K tokens of unfiltered context, it generates verbose orientation narration ("let me look at this file... I can see that..."). When it receives 8K tokens of graph-ranked context, it skips straight to the answer: 504 output tokens per task → 189.

The cache effect compounds this: structured, consistent context across turns hits the cache more reliably than ad-hoc file reads that change every turn. So the additional input tokens cost almost nothing (90% discount), while the output token reduction saves the most expensive tokens.

The general principle: with tiered token pricing, optimizing for total token count is wrong. You should optimize for token mix — push volume from expensive tiers (output, cache miss) to cheap tiers (cache hit). More total tokens can cost less if you shift the distribution.

This seems obvious in retrospect, but I haven't seen it discussed much. Most context engineering work focuses on reducing input tokens. The bigger lever might be reducing output tokens by improving the input signal-to-noise ratio — the model writes less when it doesn't have to think out loud about what it's reading.
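The arithmetic behind the token-mix principle can be sketched in a few lines. The per-million-token rates below are illustrative assumptions (only the ratios matter: output >> cache miss >> cache hit), and the two token mixes are hypothetical, shaped roughly like the benchmark's totals (run B processes more tokens than run A but shifts volume into the cache-hit tier):

```python
# Sketch: under tiered pricing, the token *mix* matters more than the *count*.
# Rates are illustrative assumptions, not exact Anthropic prices.
RATES = {
    "output": 15.00,      # $/M — most expensive tier
    "input_miss": 3.00,   # $/M — uncached input, full price
    "input_hit": 0.30,    # $/M — cache read, ~90% discount
}

def cost(tokens: dict) -> float:
    """Dollar cost of a token mix, given per-million-token rates."""
    return sum(tokens[tier] / 1_000_000 * RATES[tier] for tier in tokens)

# Hypothetical mixes: run_b has MORE total tokens (~23.4M vs ~19.6M),
# but fewer output tokens and a higher cache-hit share.
run_a = {"output": 440_000, "input_miss": 1_200_000, "input_hit": 18_000_000}
run_b = {"output": 170_000, "input_miss": 1_050_000, "input_hit": 22_200_000}

assert sum(run_b.values()) > sum(run_a.values())  # more tokens processed...
assert cost(run_b) < cost(run_a)                  # ...yet a smaller bill
```

The asymmetry does the work: at a 50x spread between the output and cache-hit tiers, trading one output token away buys room for dozens of cache-hit tokens at the same cost.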

The tool is vexp (https://vexp.dev) — local-first context engine, Rust + tree-sitter + SQLite. Free tier available.

alexbuiko 25 days ago

This is a brilliant breakdown of the 'Token Mix' paradox. It aligns perfectly with what we’ve been seeing while developing SDAG.

When you optimize for a structured context payload (like your dependency graph), you aren't just hitting the Anthropic pricing cache—you are literally reducing the routing entropy at the inference level. High-noise inputs force the model into 'exploratory' output paths, which isn't just expensive in dollars, but also in hardware stress.

We found that 'verbose orientation narration' (the thinking-out-loud part) correlates with higher entropy spikes in memory access. By tightening the input signal-to-noise ratio, you're essentially stabilizing the model's internal routing. Have you noticed any changes in latency variance (jitter) between the pre-indexed and ad-hoc runs? In our tests, lower entropy usually leads to much more predictable TTFT (Time To First Token).

  • nicola_alessi 25 days ago

    Interesting framing — hadn't thought about it from the inference routing angle, but it maps well to what the data shows.

    On latency variance: yes, significantly. Cost standard deviation across runs dropped 6-24x depending on task type. The most extreme case was a refactoring task: baseline sigma $0.312 vs $0.013 with pre-indexed context. Duration variance also dropped in 6 out of 7 tasks. I didn't measure TTFT specifically, but overall duration went from 170s → 132s with much tighter clustering around the mean.

    The stabilization effect is probably the most underrated finding. Everyone focuses on the average cost reduction, but the predictability improvement matters more for production workloads — you can actually forecast spend instead of hoping the agent doesn't go on an exploration tangent.

    What's SDAG? Curious about your setup.

    • alexbuiko 24 days ago

      Those sigma numbers are incredible—dropping variance by 24x practically confirms that you’ve managed to 'trap' the model in a low-entropy state. In production, predictability (the 'anti-tangent' factor) is often worth more than the raw discount.

      SDAG (Systematic Defect Awareness & Guidance) is a protocol we’re developing for auditing AI infrastructure at the hardware-inference interface.

      Most observability tools look at the 'what' (tokens, logs), but we look at the 'how' (routing entropy and hardware stress). We use it to detect when a model's routing logic starts 'redlining' the hardware—essentially catching those exploration tangents you mentioned by monitoring physical signals like memory controller stress and cache thrashing before they even manifest as high latency or cost spikes.

      We're currently open-sourcing the core SDK [https://github.com/alexbuiko-sketch/SDAG-Standard]. Given your results, I’d be very curious to see if your 'pre-indexed context' approach shows a direct drop in hardware-level jitter. It sounds like you've found a software-level 'clamp' for what we’ve been measuring as physical entropy.

  • hkonte 25 days ago

    [dead]

    • alexbuiko 24 days ago

      Exactly. What you describe as 'parsing work' is, at the architectural level, a high-entropy search across the attention heads. When a prompt is a 'wall of text,' the model's routing logic has to maintain multiple competing states, which physically manifests as jitter and increased power draw per token.

      By using semantic blocks (like in your flompt framework), you are essentially performing Inference Pre-conditioning. You’re forcing the model into a narrow, low-entropy path from the very first token.

      This is why we focus on SDAG [https://github.com/alexbuiko-sketch/SDAG-Standard] — to provide a metric for this 'routing efficiency.' In the future, we might even be able to use SDAG signals to 'score' prompt architectures like flompt based on how much hardware-level stress they reduce. Structural clarity isn't just a convenience for the model; it's a physical optimization of the compute cycle.

gnabgib 25 days ago

You're overdoing the self-promotion (this is the 7th time you've submitted vexp). Share something with us you're curious about that you didn't build.

> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.

https://news.ycombinator.com/newsguidelines.html

  • nicola_alessi 25 days ago

    Fair point, appreciate the callout. I'll dial it back.

    • jacquesm 25 days ago

      No, don't dial it back. Just stop. The only way this will end otherwise is either with an account ban, a domain ban or both.

gilles_oponono 24 days ago

WOW...very interesting approach. Indeed, limiting output tokens is the real challenge, especially when you deploy.

verdverm 25 days ago

tl;dr AGENTS.md and the Anthropic post about putting MCPs behind search are a winning idea right now