Show HN: Agent framework that generates its own topology and evolves at runtime

github.com

37 points by vincentjiang 6 hours ago

Hi HN,

I’m Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: Chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools.

Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections:

1. The "Toy App" Ceiling & GCU Trap Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session.

The GCU hype (agents "looking" at screens) is skeuomorphic. It’s slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless.

2. Inversion of Control: OODA > DAGs

Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior:

- Observe: Exceptions are observations (FileNotFound = new state), not crashes.

- Orient: Adjust strategy based on Memory and Traits.

- Decide: Generate new code at runtime.

- Act: Execute.

The topology shouldn't be hardcoded; it should emerge from the task's entropy.
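
To make the inversion concrete, here is a minimal sketch of an OODA-style loop. It is illustrative only, not Hive's actual API; the llm.plan / llm.goal_satisfied interface and the execute callable are placeholders for whatever planner and executor you wire in.

    # Illustrative only -- not Hive's actual API. Exceptions become observations
    # that feed the next planning step instead of crashing the run.
    import traceback

    def run_ooda(goal, llm, execute, max_iterations=10):
        memory = []                                    # observations accumulated across turns
        for _ in range(max_iterations):
            plan = llm.plan(goal=goal, memory=memory)  # Orient + Decide: pick the next step
            try:
                result = execute(plan)                 # Act
            except Exception:
                # Observe: the stack trace is new state, not a crash
                memory.append({"plan": plan, "error": traceback.format_exc()})
                continue
            memory.append({"plan": plan, "result": result})
            if llm.goal_satisfied(goal=goal, memory=memory):
                return result
        raise RuntimeError("iteration budget exhausted before the goal was met")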

3. Reliability: The "Synthetic" SLA

You can't guarantee one inference (k=1) is correct, but you can guarantee a System of Inference (k=n) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80%-accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down, trading latency and tokens for certainty.
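
The arithmetic behind that claim, assuming independent attempts and a verifier that reliably rejects wrong outputs (both idealizations):

    # Back-of-the-envelope math for best-of-n with verification, assuming
    # independent attempts and a verifier that reliably rejects wrong outputs.
    def residual_error(single_pass_accuracy: float, n: int) -> float:
        # The run only fails if every one of the n attempts fails.
        return (1 - single_pass_accuracy) ** n

    for n in (1, 2, 3):
        print(n, f"{1 - residual_error(0.8, n):.1%}")  # 80.0%, 96.0%, 99.2%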

4. Biology & Psychology in Code "Hard Logic" can't solve "Soft Problems." We map cognition to architectural primitives: Homeostasis: Solving "Perseveration" (infinite loops) via a "Stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift. Traits: Personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking.

To move the industry beyond brittle scripts, we need engineers interested in the intersection of biology, psychology, and distributed systems. It'd be great to have you roasting my code and sharing feedback.

Repo: https://github.com/adenhq/hive

CuriouslyC an hour ago

Failures of workflows signal assumption violations that ultimately should percolate up to humans. Also, static DAGs are more amenable to human understanding than dynamic task decomposition. Robustness in production is good though, if you can bound agent behavior.

Best of 3 (or more) tournaments are a good strategy. You can also use them for RL via GRPO if you're running an open weight model.

  • ipnon 38 minutes ago

    In HNese this means "very impressive, keep up the good work."

mhitza an hour ago

3. What, or who, is the judge of correctness (accuracy), regardless of how many solutions run in parallel? If I optimize for max accuracy, how close can I get to 100% mathematically, and how much would that cost?

Multicomp 3 hours ago

I am of course unqualified to provide useful commentary on it, but I find this concept to be new and interesting, so I will be watching this page carefully.

My use case is less about hooking this up as some sort of business-workflow ClawdBot alternative and more about seeing whether this can be an eventually consistent engine that lets me update state over various documents across the time dimension.

Could I use it to simulate some tabletop characters and their locations over time?

That would perhaps let me drop some of the bookkeeping of figuring out where a given NPC would be on a given day after so many days pass between game sessions, which would let me advance the game world without having to step through it manually per character.

  • timothyzhang7 2 hours ago

    That's a very interesting use case you brought to the table! I've also dreamt about having an agent as my co-host running the sessions. It's a great PoC idea we might look into soon.

foota 3 hours ago

I was sort of thinking about a similar idea recently. What if you wrote something like a webserver that was given "goals" for a backend, then told agents what the application was supposed to be, told them to use the backend to meet those goals, and had them generate feedback based on their experience?

Then have an agent collate the feedback, combined with telemetry from the server, and iterate on the code to fix it up.

In theory you could have the backend write itself and design new features based on what agents try to do with it.

I sort of got the idea from a comparison with JITs: you could have stubbed-out methods in the server that would do nothing until the "JIT" agent writes the code.

  • vincentjiang 2 hours ago

    Fascinating concept: you essentially frame the backend not as a static codebase but as an adaptive organism that evolves based on real-time usage.

    A few things that come to my mind if I were to build this:

    The 'Agent-User' Paradox: To make this work, you'd need the initial agents (the ones responding and testing the goals) to be 'chaotic' enough to explore edge cases, but 'structured' enough to provide meaningful feedback to the 'Architect' agent.

    The Schema Contract: How would you ensure that as the backend "writes itself," it doesn't break the contract with the frontend? You’d almost need a JIT Documentation layer that updates in lockstep.

    Verification: I wonder if the server should run the 'JIT-ed' code in a sandbox first, using the telemetry to verify the goal was met before promoting the code to the main branch.

    It’s a massive shift from Code as an Asset to Code as a Runtime Behavior. Have you thought about how you'd handle state/database migrations in a world where the backend is rewriting itself on the fly? It feels to me that you're almost building a Lovable for backend services. I've seen a few open-source projects in this vein (e.g. MotiaDev), but none has executed it perfectly yet.

  • timothyzhang7 2 hours ago

    The "JIT" agent closely aligns with the long-term vision we have for this framework. When the orchestrating agent of the working swarm is confident enough to produce more sub-agents, the agent graph(collection) could potentially extend itself based on the responsibility vacuum that needs to be filled.

vincentjiang 6 hours ago

To expand on the "Self-Healing" architecture mentioned in point #2:

The hardest mental shift for us was treating Exceptions as Observations. In a standard Python script, a FileNotFoundError is a crash. In Hive, we catch that stack trace, serialize it, and feed it back into the Context Window as a new prompt: "I tried to read the file and failed with this error. Why? And what is the alternative?"

The agent then enters a Reflection Step (e.g., "I might be in the wrong directory, let me run ls first"), generates new code, and retries.
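
Roughly, that loop looks like the sketch below. This is a simplified illustration of the idea, not the production implementation; llm.generate_code and sandbox.execute are placeholders for the real interfaces.

    # Simplified sketch of the reflection/retry loop described above -- not the
    # production code. llm.generate_code and sandbox.execute are placeholders.
    import traceback

    def run_with_reflection(task_prompt, llm, sandbox, max_retries=3):
        context = [task_prompt]
        for attempt in range(max_retries):
            code = llm.generate_code(context)   # Decide: write code for the task
            try:
                return sandbox.execute(code)    # Act
            except Exception:
                # Observe: serialize the failure and feed it back as a new prompt.
                context.append(
                    "The previous attempt failed with:\n"
                    + traceback.format_exc()
                    + "\nWhy did it fail, and what is the alternative?"
                )
        raise RuntimeError("all retries failed")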

We found this loop alone solved about 70% of the "brittleness" issues we faced in our ERP production environment. The trade-off, of course, is latency and token cost.

I'm curious how others are handling non-deterministic failures in long-running agent pipelines. Are you using simple retries, voting ensembles, or human-in-the-loop?

It'd be great to hear your thoughts.

andrew-saintway an hour ago

Aden Hive is a goal-driven Agent framework whose core philosophy represents a shift from “hard-coded workflows” to a “result-oriented architecture.” Traditional Agent frameworks typically rely on predefined procedural flows, which become fragile when faced with complex or uncertain business logic. Hive treats the “goal” as a first-class entity. Developers define objectives, success criteria, and constraints in natural language, and the system automatically generates and evolves an executable Node Graph to achieve them.

A key innovation in Hive is the introduction of a Coding Agent. Based on the defined goal, it automatically generates the code that connects nodes and constructs the execution graph. When failures occur during execution, the system does not merely log errors; it captures runtime data and triggers an Evolution Loop to regenerate and redeploy the agent graph. This closed-loop self-healing capability fundamentally differentiates Hive from traditional “process-oriented frameworks” such as LangChain or AutoGen.

Architecturally, Hive adopts a highly modular monorepo structure and uses uv for dependency management. The core runtime resides in the core/ directory and is responsible for graph execution, node scheduling, and lifecycle management. Tool capabilities are encapsulated in tools/ (aden_tools) and communicate with the core runtime via the MCP (Model Context Protocol), ensuring strong decoupling. The exports/ directory stores agent packages automatically generated by the Coding Agent, including agent.json and custom logic. Claude Code skill instructions are placed in the .claude directory to guide AI-assisted agent construction and optimization.

At runtime, each Hive agent is defined by an agent.json specification. This file includes the goal definition (goal), node list (nodes), edge connections (edges), and a default model configuration (default_model). Nodes may represent LLM calls, function executions, or routing logic. Edges support success, failure, and conditional transitions, enabling non-linear execution flows.
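
Based on that description, an agent.json might look roughly like the structure below (the values and exact schema are guesses, not taken from the repo):

    # Hypothetical agent.json contents, guessed from the fields described above
    # (goal, nodes, edges, default_model); the real schema may differ.
    agent_spec = {
        "goal": "Reconcile purchase orders against invoices and flag mismatches",
        "default_model": "some-default-model",  # placeholder, not a real config value
        "nodes": [
            {"id": "fetch_invoices", "type": "function"},
            {"id": "match_po", "type": "llm"},
            {"id": "escalate", "type": "llm"},
        ],
        "edges": [
            {"from": "fetch_invoices", "to": "match_po", "on": "success"},
            {"from": "match_po", "to": "escalate", "on": "failure"},
        ],
    }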

When a node executes, a NodeContext object is injected. NodeContext provides memory (shared cross-node state), llm (a multi-provider model client), tools (a registry of available tools), input (data from the previous node), and metadata (execution tracing information). This “dependency injection” design ensures that nodes remain stateless and highly testable while enabling large-scale composability.
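
A node under that dependency-injection pattern would look something like this; the field names mirror the description above, but the actual NodeContext class in Hive may differ.

    # Illustrative node shape under the dependency-injection pattern described
    # above; the real NodeContext in Hive may differ.
    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class NodeContext:
        memory: dict        # shared cross-node state
        llm: Any            # multi-provider model client (interface assumed)
        tools: dict         # registry of available tools
        input: Any          # output of the previous node
        metadata: dict = field(default_factory=dict)  # execution tracing info

    def summarize_invoice(ctx: NodeContext) -> str:
        # The node stays stateless: everything it needs arrives via the context.
        return ctx.llm.complete(f"Summarize this invoice: {ctx.input}")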

AgentRunner serves as the execution core. Its lifecycle includes loading and validating agent.json (ensuring no cyclic dependencies), initializing MCP tool connections, establishing credential environments, and traversing the graph from the entry node. During execution, all inputs and outputs, token usage, and latency metrics are streamed in real time to a TUI dashboard for monitoring and observability.
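
The "no cyclic dependencies" validation step could be as simple as a topological-sort check; this is a sketch, not Hive's actual validator.

    # Sketch of a cycle check over the node graph (Kahn's algorithm);
    # not Hive's actual validation code.
    from collections import defaultdict, deque

    def assert_acyclic(node_ids, edges):
        indegree = {n: 0 for n in node_ids}
        adjacent = defaultdict(list)
        for edge in edges:
            adjacent[edge["from"]].append(edge["to"])
            indegree[edge["to"]] += 1
        queue = deque(n for n, d in indegree.items() if d == 0)
        visited = 0
        while queue:
            node = queue.popleft()
            visited += 1
            for nxt in adjacent[node]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    queue.append(nxt)
        if visited != len(node_ids):
            raise ValueError("agent graph contains a cycle")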

One of Hive’s most forward-looking features is its Self-Healing mechanism. When runtime exceptions occur, a SelfHealingRunner initiates a “healing cycle.” This process includes diagnosing the failure (analyzing stack traces and source code), generating a patch (LLM-produced diff), writing updates back to the filesystem, hot-reloading modified modules, and resuming execution. Each failure is treated as a training signal, allowing the system to iteratively improve its success probability. Theoretically, as iterations increase, the probability of success P(S) converges upward.
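
The healing cycle reads roughly like the sketch below; importlib.reload is standard Python, while llm.generate_patch and write_patch stand in for whatever diff-and-write machinery the runner actually uses.

    # Rough shape of one healing cycle as described above: diagnose, patch,
    # hot-reload, resume. llm.generate_patch and write_patch are placeholders.
    import importlib
    import traceback

    def healing_cycle(module, run_node, llm, write_patch):
        try:
            return run_node(module)
        except Exception:
            diagnosis = traceback.format_exc()                       # diagnose the failure
            patch = llm.generate_patch(module.__file__, diagnosis)   # LLM-produced diff
            write_patch(module.__file__, patch)                      # write the fix back to disk
            module = importlib.reload(module)                        # hot-reload the module
            return run_node(module)                                  # resume execution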

In terms of extensibility, Hive standardizes tool invocation through MCP. Tools are registered via register_all_tools(), and each tool name is mapped to a unique runtime identifier to guarantee precise invocation. Current integrations include file processing, Slack/Gmail communication, HubSpot/Jira/Stripe systems, logging utilities, and web scraping. The tool layer remains isolated from the runtime, preventing external dependencies from contaminating execution stability.

Hive implements a layered memory system, including Short-Term Memory (STM), Long-Term Memory (LTM), and Reinforcement Learning Memory (RLM). It is transitioning to a message-based fine-grained persistence model, where each message is stored as an atomic record. Before executing an LLM node, relevant historical context can be precisely reconstructed per session. The system also supports “proactive compaction” strategies to manage token limits and extends LLMResponse to track reasoning tokens and cached tokens for accurate cost accounting.
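
A minimal version of message-level persistence with a token-budgeted context rebuild might look like this; the schema and field names are assumptions, not the repo's actual storage layer.

    # Minimal message-level persistence with a token-budgeted context rebuild.
    # Schema and field names are assumptions, not Hive's actual storage layer.
    import sqlite3

    class MessageStore:
        def __init__(self, path="memory.db"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS messages "
                "(session_id TEXT, role TEXT, content TEXT, tokens INTEGER)"
            )

        def append(self, session_id, role, content, tokens):
            self.db.execute("INSERT INTO messages VALUES (?, ?, ?, ?)",
                            (session_id, role, content, tokens))
            self.db.commit()

        def context_for(self, session_id, token_budget=8000):
            # Walk the session newest-first and stop at the token budget --
            # a crude stand-in for "proactive compaction".
            rows = self.db.execute(
                "SELECT role, content, tokens FROM messages "
                "WHERE session_id = ? ORDER BY rowid DESC", (session_id,)
            ).fetchall()
            context, used = [], 0
            for role, content, tokens in rows:
                if used + tokens > token_budget:
                    break
                context.append({"role": role, "content": content})
                used += tokens
            return list(reversed(context))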

For observability and evaluation, AgentEvaluator generates multidimensional performance metrics, including success rate, latency, cost, and composite scoring. FailureAnalyzer categorizes errors into input validation failures, logic errors, and external API failures. When error frequencies exceed defined thresholds, ImprovementTrigger automatically signals the Coding Agent to optimize prompt structures or validation logic. This establishes an automated evaluation-to-improvement feedback loop.

The development workflow is tightly integrated with Claude Code. Developers define goals, generate node graphs, apply design patterns, and auto-generate test cases through structured skill commands. Generated agents can be validated structurally via CLI commands and monitored in real time through the TUI dashboard. For learning purposes, manual_agent.py demonstrates how to construct a simple agent purely in Python without external APIs.

Overall, Aden Hive transforms Agents from “pre-scripted workflow executors” into “goal-driven, self-evolving systems.” Its core mechanisms—automatic goal-to-graph generation, graph-based execution, MCP-based decoupled tool integration, and runtime failure feedback with self-healing loops—form a cohesive architecture. This design enables Agents to progressively improve reliability and resilience in complex environments, representing a shift from manual maintenance toward autonomous system evolution in AI software engineering.

https://docs.google.com/document/d/1PyBzm2GCOswBNlKWpgJxOr8c...