Show HN: TraceRoot – Open-source agentic debugging for distributed services

github.com

40 points by xinweihe 2 days ago

Hey, Xinwei and Zecheng here; we're the authors of TraceRoot (https://github.com/traceroot-ai/traceroot).

TraceRoot (https://traceroot.ai) is an open-source debugging platform that helps engineers fix production issues faster by combining structured traces, logs, source code context, and discussions from GitHub PRs, issues, and Slack channels with AI agents.

At the heart are our lightweight Python (https://github.com/traceroot-ai/traceroot-sdk) and TypeScript (https://github.com/traceroot-ai/traceroot-sdk-ts) SDKs - they hook into your app using OpenTelemetry and capture logs and traces. These are sent either to a local Jaeger (https://www.jaegertracing.io/) + SQLite backend or to our cloud backend, where we correlate them into a single view. From there, our custom agent takes over.
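
The SDKs sit on top of standard OpenTelemetry. To give a feel for the layer underneath, plain OTel instrumentation exporting to a local Jaeger looks roughly like this (generic OpenTelemetry, not the TraceRoot SDK's own API; the service name and endpoint are just placeholders):

    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # Export spans to a local Jaeger instance listening for OTLP on port 4317
    provider = TracerProvider(resource=Resource.create({"service.name": "checkout-service"}))
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("charge_card") as span:
        span.set_attribute("order.id", "1234")  # attributes show up alongside logs in the trace view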

The agent builds a heterogeneous execution tree that merges spans, logs, and GitHub context into one internal structure. This allows it to model the control and data flow of a request across services. It then uses LLMs to reason over this tree - pruning irrelevant branches, surfacing anomalous spans, and identifying likely root causes. You can ask questions like “what caused this timeout?” or “summarize the errors in these 3 spans”, and it can trace the failure back to a specific commit, summarize the chain of events, or even propose a fix via a draft PR.
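
To make "execution tree" concrete, here is a simplified Python sketch of the idea (illustrative only, not our internal representation): each node is a span plus whatever is correlated to it, and pruning drops healthy subtrees so the LLM only reasons over branches that actually contain errors.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ExecNode:
        """One node of the execution tree: a span plus correlated context."""
        span_id: str
        name: str
        duration_ms: float
        status: str                                            # e.g. "OK" or "ERROR"
        logs: list[dict] = field(default_factory=list)         # log records emitted inside this span
        code_refs: list[str] = field(default_factory=list)     # file:line / commit references
        children: list["ExecNode"] = field(default_factory=list)

    def prune(node: ExecNode) -> Optional[ExecNode]:
        """Keep only branches that contain an error somewhere below them."""
        kept = [c for c in (prune(ch) for ch in node.children) if c is not None]
        has_error = node.status == "ERROR" or any(l.get("level") == "ERROR" for l in node.logs)
        if has_error or kept:
            return ExecNode(node.span_id, node.name, node.duration_ms, node.status,
                            node.logs, node.code_refs, kept)
        return None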

We also built a debugging UI that ties everything together - you explore traces visually, pick spans of interest, and get AI-assisted insights with full context: logs, timings, metadata, and surrounding code. Unlike most tools, TraceRoot stores long-term debugging history and builds structured context for each company - something we haven’t seen many others do in this space.

What’s live today:

- Python and TypeScript SDKs for structured logs and traces.

- AI summaries, GitHub issue generation, and PR creation.

- Debugging UI that ties everything together.

TraceRoot is MIT licensed and easy to self-host (via Docker). We support both local mode (Jaeger + SQLite) and cloud mode. We're inspired by OSS projects like PostHog and Supabase: the core is free, while enterprise features like agent mode, multi-tenancy, and Slack integration are paid.

If you find it interesting, you can see a demo video here: https://www.youtube.com/watch?v=nb-D3LM0sJM

We’d love you to try TraceRoot (https://traceroot.ai) and share any feedback. If you're interested, our code is available here: https://github.com/traceroot-ai/traceroot. If we don’t have something, let us know and we’d be happy to build it for you. We look forward to your comments!

autorinalagist an hour ago

Very cool! I have a question: how are you evaluating performance while you develop this? Do you have some golden set of examples that you evaluate against?

sand_9999 19 hours ago

I can connect MCPs for Datadog/New Relic/CloudWatch logs. Cursor or Claude Code would give me all that I need. Are you doing something new here?

  • xinweihe 17 hours ago

    Fair question. Here’s how TraceRoot is different.

    - We don’t just stream raw logs/traces into an LLM; we build execution trees and correlate data across services and threads (toy sketch after this list). That gives our agent causal context, not just pattern matching.

    - It’s designed to debug real issues in production, where things are messy, not just dev or staging.

    - We are aiming for automatic bug detection and remediation soon: not just copiloting, but a debugging agent that can spot regressions and trigger fixes proactively.

    - We are working on persisting historical incidents, fixes, and infra quirks, so the agent improves with each investigation and doesn’t start from scratch every time.
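
    On the first point, the correlation step is essentially a join on trace/span IDs before any LLM sees the data. A toy sketch (not our actual pipeline; field names are just illustrative):

        from collections import defaultdict

        def correlate(spans: list[dict], logs: list[dict]) -> dict[str, list[dict]]:
            """Attach each log record to the span it was emitted under, grouped by trace."""
            logs_by_span = defaultdict(list)
            for record in logs:
                logs_by_span[(record["trace_id"], record["span_id"])].append(record)

            traces = defaultdict(list)
            for span in spans:
                enriched = dict(span, logs=logs_by_span.get((span["trace_id"], span["span_id"]), []))
                traces[span["trace_id"]].append(enriched)
            return traces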

    Happy to dive deeper! Let me know if you have more questions.

lmeyerov 2 days ago

I'm curious -- let's say we have claude code hooked up to MCPs for jaeger, grafana, and the usual git/gh CLIs it can use out-of-the-box, and we let claude's planner work through investigations with whatever help we give it. Would TraceRoot do anything clever wrt the AI that such a setup wouldn't/couldn't?

(I'm asking b/c we're planning a setup that's basically that, so real question.)

  • xinweihe 15 hours ago

    Good question! Your setup already covers a lot — but TraceRoot tries to go a bit further in a few areas:

    In TraceRoot, we organize all logs, metrics, etc. around traces and build an execution tree. This structured view makes it much easier for our agent to reason through large amounts of telemetry data using context-aware optimizations. (We plan to support Slack and Notion integrations as well.)

    It’s not a one-off tool: TraceRoot is a live monitoring platform that continuously watches what’s happening in prod. So when something breaks, the agent already has full, team-visible context, not just a scratchpad session spun up in the moment.

    Down the line, we're aiming for automatic bug detection and remediation - not just smarter copiloting, but proactive debugging workflows. The system also retains team-level memory of past bugs, fixes, and infra quirks, so the agent gets smarter over time.

    We’ve open sourced a lot of the core. Would love to jam on this if you're up for it. Always down to trade ideas or even hack on something together!

thatrandybrown 2 days ago

I like the idea of this and the use case, but don't love the tight coupling to OpenAI. I'd love to see a framework for allowing BYOM (bring your own model).

  • Onawa 2 days ago

    It's been 2.5 years since ChatGPT came out, and so many projects still don't allow for easy switching of OPENAI_BASE_URL or the related parameters.

    There are so many inferencing libraries that serve an OpenAI-compatible API that any new project being locked in to OpenAI only is a large red flag for me.
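
    For reference, with the official openai Python client the switch is a single constructor argument, so there's little excuse (the endpoint and model name below are hypothetical):

        from openai import OpenAI

        # Any OpenAI-compatible server works: vLLM, Ollama, a LiteLLM proxy, etc.
        client = OpenAI(
            base_url="http://localhost:8000/v1",   # or set OPENAI_BASE_URL in the environment
            api_key="not-needed-locally",
        )
        resp = client.chat.completions.create(
            model="llama-3.1-8b-instruct",         # whatever the local server exposes
            messages=[{"role": "user", "content": "Summarize the errors in this trace..."}],
        )
        print(resp.choices[0].message.content)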

    • xinweihe 2 days ago

      Thanks for the feedback! Totally hear you on the tight OpenAI coupling - we're aware and already working to make BYOM easier. Just to echo what Zecheng said earlier: broader model flexibility is definitely on the roadmap.

      Appreciate you calling it out — helps us stay honest about the gaps.

  • zecheng 2 days ago

    Yes, there is a roadmap to support more models. For now there is an in-progress PR to support Anthropic models: https://github.com/traceroot-ai/traceroot/pull/21 (contributed by some active open-source contributors). Feel free to let us know which (open-source) model or framework (vLLM, etc.) you want to use :)

    • 44za12 2 days ago

      Why not use something like litellm?

      • zecheng 2 days ago

        That's also an option; we'll consider adding it later :)

  • ethan_smith 2 days ago

    Adding model provider abstraction would significantly improve adoption, especially for organizations with specific LLM preferences or air-gapped environments that can't use OpenAI.

    • xinweihe a day ago

      Yep, you're spot on - and we're hearing this loud and clear across the thread. Model abstraction is on the roadmap, and we're already working on making BYOM smoother.

jinusunil 12 hours ago

How do you evaluate the output of your trace tool? Are there benchmarks for tracing tools?