Show HN: PicoFlow – a minimal Python workflow for LLM agents
Hi HN,
I’ve been experimenting with LLM agents for a while and often felt that for simple workflows (chat, tool calls, small loops), existing frameworks add a lot of abstraction and boilerplate.
So I built a small Python library called PicoFlow. The goal is simple:
express agent workflows using normal async Python, not framework-specific graphs or chains.
Minimal chat agent
Each step is just an async function, and workflows are composed with >>:
from picoflow import flow, llm, create_agent
LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"
@flow
async def input_step(ctx):
    return ctx.with_input(input("You: "))
agent = create_agent(
    input_step >>
    llm("Answer the user: {input}", llm_adapter=LLM_URL)
)
agent.run()
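If you're curious how `>>` composition can work, here's a minimal, hypothetical sketch — not PicoFlow's actual implementation, just the general pattern: a wrapper whose `__rshift__` chains two async steps into one.

```python
import asyncio

class Step:
    """Toy illustration of >>-style composition over async steps."""
    def __init__(self, fn):
        self.fn = fn  # async callable: ctx -> ctx

    def __rshift__(self, other):
        async def chained(ctx):
            # Run this step, then feed its result into the next one
            return await other.fn(await self.fn(ctx))
        return Step(chained)

async def add_greeting(ctx):
    return {**ctx, "input": "hello"}

async def answer(ctx):
    return {**ctx, "output": f"Answer the user: {ctx['input']}"}

pipeline = Step(add_greeting) >> Step(answer)
result = asyncio.run(pipeline.fn({}))
print(result["output"])  # Answer the user: hello
```

The composed object is itself a `Step`, so chains of any length stay flat and debuggable — a breakpoint in `add_greeting` or `answer` works like in any async function.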
No chains, no graphs, no separate prompt/template objects. You can debug by putting breakpoints directly in the async steps.

Control flow is just Python

Loops and branching are written with normal Python logic, not DSL nodes:
from picoflow import Flow

def repeat(step):
    async def run(ctx):
        # Re-run the wrapped step until the context signals completion
        while not ctx.done:
            ctx = await step.acall(ctx)
        return ctx
    return Flow(run)
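To show the loop pattern end to end, here's a self-contained sketch with a stub `Flow` and a toy context (the names mirror the `acall`/`done` convention above, but the stub is mine, not PicoFlow's):

```python
import asyncio
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Ctx:
    count: int = 0
    done: bool = False

class Flow:
    """Stub flow: wraps an async callable behind an .acall() method."""
    def __init__(self, fn):
        self._fn = fn

    async def acall(self, ctx):
        return await self._fn(ctx)

def repeat(step):
    async def run(ctx):
        # Plain Python loop: keep calling the step until ctx.done
        while not ctx.done:
            ctx = await step.acall(ctx)
        return ctx
    return Flow(run)

async def count_to_three(ctx):
    # The step itself decides when the loop is finished
    n = ctx.count + 1
    return replace(ctx, count=n, done=(n >= 3))

looped = repeat(Flow(count_to_three))
final = asyncio.run(looped.acall(Ctx()))
print(final.count)  # 3
```

Because `repeat` returns an ordinary flow, it composes with `>>` like any other step — loops are just functions over flows.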
The framework only schedules steps; it doesn't try to own your control flow.

Switching model providers = change the URL
Another design choice: model backends are configured via a single LLM URL.
OpenAI:
LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"
Switch to another OpenAI-compatible provider (for example SiliconFlow or a local gateway):

LLM_URL = "llm+openai://api.siliconflow.cn/v1/chat/completions?model=Qwen/Qwen2.5-7B-Instruct&api_key_env=SILICONFLOW_API_KEY"
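One nice property of using a URL as the config carrier: the standard library can already split it into its parts. This is just `urllib.parse` applied to the SiliconFlow example above, not PicoFlow's own parser:

```python
from urllib.parse import urlsplit, parse_qs

LLM_URL = "llm+openai://api.siliconflow.cn/v1/chat/completions?model=Qwen/Qwen2.5-7B-Instruct&api_key_env=SILICONFLOW_API_KEY"

parts = urlsplit(LLM_URL)
query = parse_qs(parts.query)

print(parts.scheme)             # llm+openai  (adapter selector)
print(parts.netloc)             # api.siliconflow.cn
print(parts.path)               # /v1/chat/completions
print(query["model"][0])        # Qwen/Qwen2.5-7B-Instruct
print(query["api_key_env"][0])  # SILICONFLOW_API_KEY
```

So the one string carries the adapter, endpoint, model, and the name of the env var holding the key — which is exactly why swapping providers is a one-line change.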
The workflow code doesn't change at all; only the runtime configuration does. This makes A/B testing models and switching providers much cheaper in practice.

When this is useful (and when it's not)
PicoFlow is probably useful if you:
- want to prototype agents quickly
- prefer explicit control flow
- don't want to learn a large framework abstraction
It’s probably not ideal if you:
- rely heavily on prebuilt components and integrations
- want a batteries-included orchestration platform
Repo:
https://github.com/the-picoflow/picoflow
This is still early and opinionated. I’d really appreciate feedback on whether this style of “workflow as Python” is useful to others, or if people are solving this in better ways already.
Thanks!