Show HN: Agents.json – OpenAPI Specification for LLMs

github.com

136 points by yompal 16 hours ago

Hey HN, we’re building an open specification that lets agents discover and invoke APIs with natural language, built on the OpenAPI standard. agents.json clearly defines the contract between LLMs and APIs as a standard that's open, observable, and replicable. Here’s a walkthrough of how it works: https://youtu.be/kby2Wdt2Dtk?si=59xGCDy48Zzwr7ND.

There are two parts to this:

1. An agents.json file describes how to link API calls together into outcome-based tools for LLMs. This file sits alongside an OpenAPI file.

2. The agents.json SDK loads agents.json files as tools for an LLM; each tool can then be executed as a series of API calls.
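
To make that concrete, here's a rough sketch of the flow in Python. The module and function names below (`agents_json_sketch`, `load_agents_json`, `execute_flow`, `to_tool_definitions`) are illustrative placeholders, not the SDK's actual surface: an agents.json is loaded, its flows are handed to the model as tools, and the chosen flow is executed as a chain of API calls from your own infra.

    import json
    from openai import OpenAI  # any chat-completions client with tool calling works

    # Illustrative names only -- not the real agents.json SDK surface.
    from agents_json_sketch import load_agents_json, execute_flow

    client = OpenAI()

    # 1. Load an agents.json (which sits alongside an OpenAPI file) and
    #    expose its outcome-based flows as JSON-schema tool definitions.
    bundle = load_agents_json("https://example.com/agents.json")
    tools = bundle.to_tool_definitions()

    # 2. The LLM picks an outcome, not an individual endpoint.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Reply to the latest thread from Alice"}],
        tools=tools,
    )

    # 3. The chosen flow runs as a deterministic chain of API calls from
    #    your own infrastructure; the client keeps all memory/state.
    call = response.choices[0].message.tool_calls[0]
    result = execute_flow(bundle, call.function.name, json.loads(call.function.arguments))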

Why is this worth building? Developers are realizing that to use tools with their LLMs in a stateless way, they have to manually wrap each API to make it usable by an LLM. We see devs sacrifice agentic, non-deterministic behavior for hard-coded workflows just to get outcomes that work. agents.json lets LLMs stay non-deterministic about the outcomes they want to achieve while being deterministic about the API calls it takes to get there.

We’ve put together some real examples if you're curious what the final output looks like. Under the hood, these LLMs have the same system prompt and we plug in a different agents.json to give access to different APIs. It’s all templatized.

- Resend (https://demo.wild-card.ai/resend)

- Google Sheets (https://demo.wild-card.ai/googlesheets)

- Slack (https://demo.wild-card.ai/slack)

- Stripe (https://demo.wild-card.ai/stripe)

We really wanted to solve real production use cases, and knew this couldn’t just be a proxy. Our approach allows you to make API calls from your own infrastructure. The open-source specification + runner package make this paradigm possible. Agents.json is truly stateless; the client manages all memory/state and it can be deployed on existing infra like serverless environments.

You might be wondering - isn’t OpenAPI enough? Why can’t I just put that in the LLM’s context?

We thought so too, at first, when building an agent with access to Gmail. But putting the API spec into the LLM's context gave us poor accuracy in both tool selection and tool calling. Even after cutting our output space down to 5-10 endpoints, we’d see the LLMs fail to select the right tool. We wanted the LLM to just work given an outcome, rather than having it reason each time about which series of API calls to make.

The Gmail API, for example, has endpoints to search for threads, list the emails in a thread, and reply with an email given base64 RFC 822 content. All that has to happen in order with the right arguments for our agent to reply to a thread. We found that APIs are designed for developers, not for LLMs.
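
For reference, here's roughly what that chain looks like as raw Gmail REST calls. This is a sketch only, assuming you already have an OAuth access token; error handling is omitted:

    import base64
    import requests
    from email.mime.text import MIMEText

    BASE = "https://gmail.googleapis.com/gmail/v1/users/me"
    HEADERS = {"Authorization": "Bearer <access_token>"}  # OAuth token assumed

    # 1. Search for the thread.
    threads = requests.get(f"{BASE}/threads", headers=HEADERS,
                           params={"q": "from:alice@example.com"}).json()["threads"]
    thread_id = threads[0]["id"]

    # 2. List the emails in the thread and grab the one we're replying to.
    thread = requests.get(f"{BASE}/threads/{thread_id}", headers=HEADERS).json()
    last = thread["messages"][-1]
    hdrs = {h["name"]: h["value"] for h in last["payload"]["headers"]}

    # 3. Build an RFC 822 reply and send it base64url-encoded.
    reply = MIMEText("Sounds good, see you then!")
    reply["To"] = hdrs.get("From", "")
    reply["Subject"] = "Re: " + hdrs.get("Subject", "")
    reply["In-Reply-To"] = hdrs.get("Message-ID", "")
    reply["References"] = hdrs.get("Message-ID", "")
    raw = base64.urlsafe_b64encode(reply.as_bytes()).decode()

    requests.post(f"{BASE}/messages/send", headers=HEADERS,
                  json={"raw": raw, "threadId": thread_id})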

So we implemented agents.json. It started off as a config file we were using internally, and we slowly added features like auth registration, tool search, and multiple API sources. Three weeks ago, Dharmesh (CTO of HubSpot) posted about the concept of a specification that could translate APIs for LLMs. It sounded a lot like what we already had working internally, so we decided to make it open source. We built agents.json for ourselves, but we’re excited to share it.

In the weeks since we put it out there, agents.json has 10 vetted API integrations (some of them official), and more are being added every day. We recently made the tool search and custom collection platform free for everyone, so it’s even easier for devs to scale the number of tools. (https://wild-card.ai)

Please tell us what you think! Especially if you’re building agents or creating APIs!

thomasfromcdnjs 11 hours ago

I've been following agents.json for a little while. I think it has legs, and would love to see some protocol win this space soon.

Will be interesting to see where the stateful/stateless conversation goes; my gut tells me MCP and "something" (agents.json perhaps) will co-exist. My reasoning is purely organisational: MCP focuses on a lot more, and their ability to make a slimmed-down stateless protocol might be nigh impossible.

---

Furthermore, if agents.json wants to win as a protocol through early adoption, the docs need to be far easier to grok. An example should be immediately viewable, with the schema close by. The pitch should be very succinct, and the fields in the schema need the same clarity at first glance. Maybe build a tool that anyone can paste their OpenAPI schema into, which passes it to an LLM to generate a first pass of what their agents.json could look like.

---

The OpenAPI <> agents.json portability is a nice touch, but might actually be overkill. OpenAPI is popular but it never actually took over the market imo. If there is added complexity to agents.json because of this, I'd really question whether it's worth supporting. They don't have to be 100% interoperable; custom converters could manage partial support.

---

A lot of people are using agentic IDEs now; it would be nice if agents.json shared a snippet with instructions on how to use it, where to find the docs, and how to pull a list and/or search the registry, that people can just drop straight into Windsurf/Cursor.

  • yompal 10 hours ago

    1) Thanks for being a part of the journey! We also want something that works for us as agent developers. We didn't feel like anything else was addressing this problem and felt like we had to do it ourselves.

    We love feedback! This is our first time doing OSS. I agree - MCP and agents.json are not mutually exclusive at all. They solve for different clients.

    2) Agreed. Something we're investing in soon is a generic SDK that can run any valid agents.json. That means the docs might be getting a revamp soon too.

    3) While many API services may not use OpenAPI, their docs pages often do! For example, readme.com lets you export your REST API docs as OpenAPI. As we add more types of action sources, agents.json won't be 1:1 with OpenAPI. In that way, we left the future of agents.json extensible.

    4) Great idea! I think this would be so useful

winkle 15 hours ago

In what ways is the agents.json file different from an OpenAPI Arazzo specification? Is it more native for LLM use? Looking at the example, I'm seeing similar concepts between them.

  • yompal 15 hours ago

    We've been in touch with Arazzo after we learned of the similarities. The long-term goal is to be aligned with Arazzo. However, the tooling around Arazzo isn't there today and we think it might take a while. agents.json is meant to be more native to LLMs, since Arazzo serves other use cases than LLMs.

    To be more specific, we're planning to support multiple types of sources alongside REST APIs, like internal SDKs, GraphQL, gRPC, etc.

    • winkle 15 hours ago

      Thanks, that's helpful. I agree there are many other sources besides REST APIs where this would be helpful. Outside of that, I would be interested in understanding the ways where Arazzo takes a broader approach and doesn't really fit an LLM use case.

      • yompal 14 hours ago

        It's not that Arazzo can't work for LLMs, just that it's not the primary use case. We want to add LLM-enabled transformations between linkages. Arazzo, having to serve other use cases like API workflow testing and guided docs experiences, may not be incentivized to support these types of features.

linux_devil 2 hours ago

Pardon my ignorance, but how is this different from MCP servers plus a supervisor agent selecting and executing the right MCP tool?

bberenberg 13 hours ago

Cool idea, but it seems to be dead on arrival due to licensing. Would love to have the team explain how anyone can possibly adopt their AGPL package into their product.

  • yompal 12 hours ago

    A couple of people have mentioned some relevant things in this thread. This SDK isn't meant to be restrictive. It can be implemented into other open-source frameworks as a plugin (e.g. BrowserUse, Mastra, LangChain, CrewAI, ...). We just don't want someone like AWS to flip this into a proxy service.

    Some have asked us to host a version of the agents.json SDK. We're torn on this because we want to make it easier for people to develop with agents.json, but acting as a proxy isn't appealing to us or to many of the developers we've talked to.

    That said, what do you think is the right license for something like this? This is our first time doing OSS.

  • favorited 12 hours ago

    Sounds like the spec is Apache 2.0. The Python package is AGPLv3, but the vast majority of the code in there looks to be codegen from OpenAPI specs. I'd imagine someone could create their own implementation without too much headache, though I'm just making an educated guess.

  • froggertoaster 13 hours ago

    Echoing this - is there a commercialization play you're hoping to make?

sidhusmart 12 hours ago

How does this compare to llms.txt? I think that’s also emerging as a sort of standard to let LLMs understand APIs. I guess agents.json does a better job of packaging and structuring the understanding of different endpoints?

  • yompal 12 hours ago

    llms.txt is a great standard for making website content more readable to LLMs, but it doesn’t address the challenges of taking structured actions. While llms.txt helps LLMs retrieve and interpret information, agents.json enables them to execute multi-step workflows reliably.

luke-stanley 15 hours ago

This could be more simple, which is a good thing, well done!

BTW I might have found a bug in the info property title in the spec: "MUST provide the title of the `agents.json` specification. This title serves as a human-readable name for the specification."

  • yompal 14 hours ago

    It now reads "MUST provide the title of the `agents.json` specification file. ..." Thanks for the heads up!

alooPotato 13 hours ago

Can someone help me understand why agents can't just use APIs documented by an OpenAPI spec? It seems to work well in my own testing, but I'm sure I'm missing something.

  • yompal 13 hours ago

    LLMs do well with outcome-described tools, but APIs are written as resource-based atomic actions. By describing an API as a collection of outcomes, LLMs don't need to re-reason each time an action needs to be taken.

    Also, when an OpenAPI spec gets sufficiently big, you face a needle-in-the-haystack problem: https://arxiv.org/abs/2407.01437.

    • thomasfromcdnjs 3 hours ago

      Does anyone have any pro tips for large tool collections? (mine are getting fat)

      Plan on doing the two-layered system mentioned earlier, where the first layer of tool calls is as slim as it can be, then a second layer for more in-depth tool documentation.

      And/or chunking tools, creating embeddings, and using RAG.
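
      Roughly what I have in mind for the embeddings/RAG route (just a sketch; the embedding model and tool format are placeholders):

        import numpy as np
        from openai import OpenAI  # any embedding provider works

        client = OpenAI()

        def embed(texts):
            # Placeholder embedding model -- swap in whatever you use.
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return np.array([d.embedding for d in resp.data])

        # Index tool descriptions once, up front.
        tools = [
            {"name": "send_email", "description": "Send an email to a recipient"},
            {"name": "list_threads", "description": "List recent email threads"},
            # ...hundreds more
        ]
        tool_vecs = embed([t["description"] for t in tools])

        def search_tools(query, k=5):
            # Rank tools by cosine similarity; pass only the top-k full
            # definitions into the model's context window.
            q = embed([query])[0]
            sims = tool_vecs @ q / (np.linalg.norm(tool_vecs, axis=1) * np.linalg.norm(q))
            return [tools[i] for i in np.argsort(-sims)[:k]]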

      • yompal 3 hours ago

        Funnily enough, a search tool to solve this problem was our product going into YC. Now it’s a part of what we do with wild-card.ai and agents.json. I’d love to extend the tool search functionality for all the tools in your belt

        It took us a decently long time to get the search quality good. Just a heads up in case you want to implement this yourself

    • ahamilton454 10 hours ago

      I can agree this is a huge problem with large APIs; we are doing it with Twilio's API and it's rough.

      • paradite 9 hours ago

        Thinking from the retrieval perspective, would it make sense to have two layers?

        First layer just describes, at a high level, the tools available and what they do, and makes the model pick or route the request (via a system prompt, or a small model).

        Second layer implements the actual function calling or OpenAPI spec, which then gives the model more details on the params and structure of the request.

        • yompal 7 hours ago

          That approach does a lot better, but LLMs still have a positional bias problem baked into the transformer architecture (https://arxiv.org/html/2406.07791v1). This is where the LLM biases toward information that appears earlier in the prompt over information that appears later, which is unfortunate for tool selection accuracy.

          Since two steps are required anyway, you might as well use a dedicated semantic search for tools, like in agents.json.

          • paradite 6 hours ago

            Interesting. This is the first time I'm hearing about intrinsic positional bias in LLMs. I had some intuition about this but nothing concrete.

codenote 8 hours ago

Our team was just exploring an approach to building an AI Agent Builder by making API calls via LLM, so this is very helpful. I'll give it a try!

  • yompal 7 hours ago

    Interesting! Reach out if you want to chat about it :)

sandinmyjoints 16 hours ago

Looks cool! How is it similar/different from MCP?

  • yompal 15 hours ago

    Thanks! MCP takes a stateful approach, where every client maintains a 1:1 connection with a server. This means that for each user/client connected to your platform, you'd need a dedicated MCP server. We're used to writing software that interfaces with APIs as stateless and deployment-agnostic; agents.json keeps it that way.

    For example, you can write a web-based chatbot that uses agents.json to interface with APIs. To do the same with MCP, you'd spin up a separate Lambda or deployed MCP server for each user.

ripped_britches 13 hours ago

Can you explain what the LLM sees in your Gmail example instead of the chain?

And how is that translation layer created? Do you write it yourself for whatever you need? Or is the idea for API owners to provide this?

I’m sure the details are there if I dig deeper but I just read the readme and this post.

  • yompal 12 hours ago

    We work with API providers to write this file. It takes a non-negligible amount of thought to put together, since we're encoding which outcomes would be useful to enable/disable for an LLM. The standard is open, so anyone can write and read an agents.json, but it's mainly intended for API providers to write.

ahamilton454 16 hours ago

Hey, this looks pretty interesting. I saw that you guys are a YC company - how do you intend to make money deploying a protocol?

  • yompal 16 hours ago

    We think the main opportunity is to charge API providers to get white-gloved onto this standard.

TZubiri 12 hours ago

Is this agents.json file automatically generated, or is one supposed to invest thousands of lines into it?

  • yompal 12 hours ago

    The end developer doesn't even need to see or read the agents.json file. It's a means for transparency and is meant to be implemented by the API provider. Tooling to make creating an agents.json easier is on our roadmap. We have a process internally where we use a validator to guide creating an agents.json.

    • TZubiri 12 hours ago

      So, the API provider, like Stripe, is supposed to publish a second API?

      And then the "end developer" who is going to be making a chatbot/agent, is supposed to use that to make a chatbot?

      Why does the plan involve there being multiple third-party developers building n products per provider? If the plan is to have third parties be creative and combine, say, Stripe with Google Ads, then how is a second API for LLMs useful?

      I'm not seeing the vision here. I've seen something similar in a project where a guy wanted LLM developers to use his API for better browsing of websites. If your plan involves:

      1- Bigger players than you implementing your protocol
      2- Everybody else doing the work.

      It's just obviously not going to work and you need to rethink your place in the food chain.

      • yompal 11 hours ago

        We're grateful that bigger players like Resend, Alpaca, etc. do want to implement the protocol. The problem is honestly onboarding them fast enough. That's one of the main areas we're going to build out in the next few weeks. Until then, we're writing every agents.json ourselves.

        If you check out wild-card.ai and create your own collection, you'll find that it's actually really easy to develop with. As a developer, you never have to look at an agents.json if you don't want to.

        • doomroot 9 hours ago

          The Resend API has around 10 endpoints.

tsunego 15 hours ago

I like your approach, but it's not clear to me whether it's MCP-compatible.

Anthropic just announced an MCP registry.

  • yompal 15 hours ago

    MCP is great for stateful systems, where shared context is a benefit, but that's a rarity. Developers generally write clients that use APIs in a stateless way, and we want to help that majority of users.

    That said, agents.json is not mutually exclusive with MCP. I can see a future where an MCP server for agents.json is created to access any API.

    • winkle 15 hours ago

      I think MCP being stateful is only true in the short term. Adding stateless support to the protocol is currently at the top of their roadmap: https://modelcontextprotocol.io/development/roadmap.

      • yompal 15 hours ago

        We've been keeping a close eye on this topic: https://github.com/modelcontextprotocol/specification/discus...

        The options being considered to do this are:

        1) maintain a session token mapping to the state -- which is still statefulness

        2) create a separate stateless MCP protocol and reimplement -- agents.json is already the stateless protocol

        3) reimplement every MCP as stateless and abandon the existing stateful MCP initiative

        As you can tell, we're not bullish on any of these.

    • esafak 13 hours ago

      Isn't the idea to create a data lake to better inform models? Why are you bearish on stateful protocols? Could you elaborate on your thinking?

      • yompal 12 hours ago

        Bearish on everyone needing to be on stateful protocols. Developers should have the option to have their state managed internal to their application.

        • esafak 11 hours ago

          Can't you simply use a stateful protocol and not report any state? Doesn't statefulness subsume statelessness? I am beginning to wrap my head around this space, so excuse the naive questions.

          • yompal 11 hours ago

            No worries! In other cases, I believe you would be right. But splitting up context is not optional with MCP. Part of the whole state will always reside in an external entity.

jeffrsch 5 hours ago

I've been down this road - with OpenPlugin. It's all technically feasible - we did it successfully. The question is, so what? If the new models can zero-shot the API call and fix issues with long responses, boot parameters, lookup fields, etc, what's your business model?

  • yompal 5 hours ago

    The tail of the problem is quite long. Even if the average model is perfect at these things, do we want it to re-reason each time there's an impasse of outcomes? Often, the outcomes we want to achieve have well-traversed flows anyway, and we can just encode that.

    In fact, I'm looking forward to the day that models are better at this so we can generate agents.json automatically and self-heal with RL.

    On the business model, ¯\_(ツ)_/¯. We don't charge developers, anyway.