Show HN: Operational web infra (Why we moved away from pure Computer Use agents)
Hi HN,
I am from the engineering team behind Mino by Tiny Fish.
We’ve spent the last year building web agent infrastructure for companies like Google and DoorDash. Today, we’re opening Mino, our API, to developers.
95% of the web's value is no longer in static HTML documents. It's in operations: checking inventory behind a login, verifying compliance across government portals, or aggregating pricing. These tasks were designed for human attention, but the web has outgrown human comprehension.
Traditional RPA scripts are too brittle. The current wave of Computer Use Agents (Claude, OpenAI Operator, Gemini, Browser Use) tries to solve this with continuous inference: take a screenshot, reason, click, repeat. This is impressive as a demo, but fatal in production: 1. Latency: reasoning loops introduce massive overhead. 2. Cost: "looking" at a page 1,000 times costs 1,000x the tokens. 3. Indeterminacy: a probabilistic model might click "Back" instead of "Book Now" 1% of the time.
Mino’s architecture is different. We use AI to learn the workflow (navigation graph, logic), then codify that logic so subsequent runs execute as deterministic code. 1. Fast: no reasoning loop. 2. Cheap: pennies per job. 3. Deterministic: structured JSON, every single time.
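To make the "learn once, then replay as code" idea concrete, here is a minimal sketch of the general pattern. This is not Mino's actual implementation; learn_workflow and compile_to_script are hypothetical placeholders for the expensive, LLM-driven first pass and the codified output it produces.

    import hashlib

    _compiled = {}  # codified workflows, keyed by (url, goal)

    def run_job(url: str, goal: str) -> dict:
        key = hashlib.sha256(f"{url}|{goal}".encode()).hexdigest()
        if key not in _compiled:
            # First run only: an LLM-driven agent explores the site and records
            # the navigation graph and extraction logic (slow, token-heavy).
            workflow = learn_workflow(url, goal)          # hypothetical
            _compiled[key] = compile_to_script(workflow)  # hypothetical: deterministic steps
        # Every later run replays the codified steps -- no reasoning loop,
        # so it is fast, cheap, and returns the same structured JSON shape.
        return _compiled[key].execute()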
We designed Mino to be the "Hands" for your application. Input: URL + Natural Language Goal ("Login, find the invoice from May, return total as JSON"). Output: Structured JSON. Concurrency: Stateless, parallel execution across websites via async calls.
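For example, calling such an API from an application could look like the sketch below. The endpoint, payload fields, and auth header are illustrative assumptions, not Mino's documented API (see https://mino.ai/ for that); the point is the shape: one stateless HTTP job per (URL, goal), fanned out concurrently.

    import asyncio
    import httpx

    API = "https://api.mino.ai/v1/jobs"  # hypothetical endpoint

    async def run_job(client: httpx.AsyncClient, url: str, goal: str) -> dict:
        resp = await client.post(
            API,
            headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical auth scheme
            json={"url": url, "goal": goal},                    # hypothetical payload shape
        )
        resp.raise_for_status()
        return resp.json()  # structured JSON result

    async def main():
        async with httpx.AsyncClient(timeout=120) as client:
            # Stateless jobs, so they fan out in parallel across sites.
            results = await asyncio.gather(
                run_job(client, "https://portal.example.com",
                        "Log in, find the invoice from May, return the total as JSON"),
                run_job(client, "https://supplier.example.com",
                        "Check inventory for SKU 12345, return the quantity as JSON"),
            )
            print(results)

    asyncio.run(main())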
We believe the internet will increasingly be used by non-human users: agents, robots, and devices. This is the beginning of the infrastructure for using and building on top of that "deep web".
Check it out: https://mino.ai/
Your first 50 completed jobs are free so you can test the architecture. Feedback is welcome.