Show HN: A lightweight LLM proxy to get structured results from most LLMs
l1m.io

Hey HN!
After struggling with complex prompt engineering and unreliable parsing, we built L1M, a simple API that lets you extract structured data from unstructured text and images.
curl -X POST https://api.l1m.io/structured \
  -H "Content-Type: application/json" \
  -H "X-Provider-Url: demo" \
  -H "X-Provider-Key: demo" \
  -H "X-Provider-Model: demo" \
  -d '{
    "input": "A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913",
    "schema": {
      "type": "object",
      "properties": {
        "items": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "year": { "type": "number" }
            }
          }
        }
      }
    }
  }'
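For that input, output conforming to the schema would look something like this (illustrative only; the exact response envelope may differ):

{
  "items": [
    { "name": "Federal Reserve Act", "year": 1913 }
  ]
}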
This is actually a component we unbundled from our larger product because we think it's useful on its own. It's fully open source (MIT license) and you can:
- Use with text or images
- Bring your own model (OpenAI, Anthropic, or any compatible API)
- Run locally with Ollama for privacy (see the sketch below)
- Cache responses with customizable TTL
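Here's a sketch of the bring-your-own-model flow against a local Ollama instance, in TypeScript. Assumptions on my part: you're running the l1m API locally so it can reach Ollama, it listens on port 3000 (adjust to your setup), Ollama exposes its OpenAI-compatible endpoint at http://localhost:11434/v1, and "llama3.1" is just an example model. The header names mirror the curl example above.

// Sketch: point the provider headers at a local Ollama instance.
const res = await fetch("http://localhost:3000/structured", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Provider-Url": "http://localhost:11434/v1", // Ollama's OpenAI-compatible API (assumed setup)
    "X-Provider-Key": "ollama",                    // placeholder; Ollama doesn't check keys
    "X-Provider-Model": "llama3.1",                // any model you've pulled locally
  },
  body: JSON.stringify({
    input: "A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913",
    schema: {
      type: "object",
      properties: {
        items: {
          type: "array",
          items: {
            type: "object",
            properties: {
              name: { type: "string" },
              year: { type: "number" },
            },
          },
        },
      },
    },
  }),
});

console.log(await res.json());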
The code is at https://github.com/inferablehq/l1m with SDKs for Node.js, Python, and Go.
Would love to hear if this solves a pain point for you!
Looks useful. Could you explain how it works? Do you have to chain it after the call to another LLM?
Thanks.
We use a minimal schema [1] to prompt the LLM under the hood; we were inspired by BAML [2]. Then the output is wrangled into shape with a converter and validated with ajv [3].
[1] https://github.com/inferablehq/l1m/blob/main/api/src/schema....
[2] https://github.com/boundaryml/baml
[3] https://ajv.js.org
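For the curious, here's a minimal sketch of that idea in TypeScript. This is not l1m's actual code: schemaToPrompt is a hypothetical stand-in for the serializer linked in [1], and the prompt wording is invented; only the ajv validation step reflects what the reply above describes.

import Ajv from "ajv";

// Sketch: render the JSON Schema as a terse type hint for the prompt,
// then validate the model's JSON reply against the same schema with ajv.
function schemaToPrompt(s: any): string {
  if (s.type === "object") {
    const fields = Object.entries(s.properties ?? {})
      .map(([key, value]) => `${key}: ${schemaToPrompt(value)}`)
      .join(", ");
    return `{ ${fields} }`;
  }
  if (s.type === "array") return `${schemaToPrompt(s.items)}[]`;
  return s.type; // "string", "number", ...
}

const schema = {
  type: "object",
  properties: {
    items: {
      type: "array",
      items: {
        type: "object",
        properties: {
          name: { type: "string" },
          year: { type: "number" },
        },
      },
    },
  },
};

// Compact type hint instead of dumping the raw JSON Schema into the prompt.
const prompt = `Answer with JSON matching this type: ${schemaToPrompt(schema)}`;

// ...send `prompt` plus the input text to the LLM, then validate its reply:
const ajv = new Ajv();
const validate = ajv.compile(schema);

const reply = JSON.parse('{"items":[{"name":"Federal Reserve Act","year":1913}]}');
if (!validate(reply)) throw new Error(ajv.errorsText(validate.errors));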