Show HN: Call Multiple LLMs with GraphQL and AI Chainer

lq.1nb.ai

1 points by bobatnet 9 hours ago

Many of the issues with LLMs can be solved by prompting as a chain: transforming a single prompt into multiple sub-prompts. With the wide variability in LLM pricing and model sizes, it also helps to offload some of those sub-prompts in the chain to smaller models. LQ provides a GraphQL endpoint that lets you call multiple LLMs (Gemini, Claude, OpenAI, Mistral) as a tree. You can extract structured XML tags from responses and use them to fill placeholders in prompts further down the chain/tree. With conditionals, you can add condition-based branching as well. We also provide a built-in AI Chainer that creates a chain for you. As an added benefit, there is a request history, so you can repeat a query while changing some of the placeholders as variables.
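To make the chaining idea concrete, here is a rough sketch of the two core mechanics described above: extracting XML tags from one model's response and filling placeholders in the next prompt. The function names, tag names, and prompts are ours for illustration; they are not LQ's actual API.

```python
import re

def extract_tags(response: str) -> dict:
    """Extract <tag>value</tag> pairs from an LLM response."""
    return dict(re.findall(r"<(\w+)>(.*?)</\1>", response, re.DOTALL))

def fill_placeholders(prompt: str, values: dict) -> str:
    """Substitute {name} placeholders with the extracted values."""
    return prompt.format(**values)

# Step 1: a large model decomposes the task and emits tagged output
# (hypothetical response shown here).
step1_response = "<topic>rate limiting</topic><style>concise</style>"

# Step 2: a smaller, cheaper model can handle the filled-in sub-prompt.
step2_prompt = fill_placeholders(
    "Write a {style} explanation of {topic}.",
    extract_tags(step1_response),
)
# step2_prompt == "Write a concise explanation of rate limiting."
```

In LQ this flow would be expressed declaratively inside a single GraphQL request rather than in client code like this.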

We are a team of two and have been using LLMs in many projects. Every LLM-backed operation ends up making several LLM calls, and we have had to write our own tooling to optimize those queries as a group. Even during development, evaluating multiple LLMs, prompt tuning, condition-based calling, etc. is messy. We are building LQ as a middle layer that eases some of those pains and lets users express the data flow within the request itself.

Our next goals include support for local LLMs as well as longer-running queries via subscriptions. Large chains/trees will also be supported, with the context residing on the server.