Show HN: GoatDB – A lightweight, offline-first, realtime NoDB for Deno and React
Hey HN,
We've been experimenting with a real-time, version-controlled NoDB for Deno & React called GoatDB. The idea is to remove backend complexity while keeping apps fast, offline-resilient, and easy to self-host.
* Runs on the client – No backend required, incremental queries keep things efficient.
* Self-hosted & lightweight – Deploy a single executable, no server stack needed.
* Offline-first & resilient – Clients work independently & can restore state after server outages.
* Edge-native & fast – Real-time sync happens locally with minimal overhead.
Why We Built It: We needed something that’s simpler than Firebase, lighter than SQLite, and easier to self-host. GoatDB is great for realtime collaboration, offline, prototyping, single-tenant apps, or ultra-low-cost multi-tenant setups—without backend hassles.
Would love feedback from HN:
* Are there specific features or improvements that would make it more useful?
* How do you handle similar problems today, and what’s missing in existing solutions?
If you're interested in experimenting or contributing, the repo is here: https://github.com/goatplatform/todo
Looking forward to your thoughts!
What is the story for querying? Can it filter and rank objects?
Simple. You write plain TS functions for sorting and filtering. GoatDB runs them as a linear scan inside a coroutine so it doesn't block the UI thread. From that point on, it uses its version-control underpinnings to incrementally update the query results.
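To make that concrete, here's a minimal sketch of the idea only, not GoatDB's actual API: a plain predicate and comparator, with the scan chunked so it yields back to the event loop and the UI stays responsive.

```typescript
// Conceptual sketch -- not GoatDB's API. Plain TS functions drive a linear
// scan that periodically yields to the event loop instead of blocking it.
interface Task {
  title: string;
  done: boolean;
  createdAt: number;
}

// Caller-supplied filter and sort functions (hypothetical shapes).
const predicate = (t: Task) => !t.done;
const comparator = (a: Task, b: Task) => b.createdAt - a.createdAt;

// Scan in chunks inside an async "coroutine"; await a macrotask between
// chunks so rendering and input handling are never starved.
async function scan(items: Task[], chunkSize = 1000): Promise<Task[]> {
  const results: Task[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      if (predicate(item)) results.push(item);
    }
    await new Promise((resolve) => setTimeout(resolve, 0)); // yield to the UI
  }
  return results.sort(comparator);
}
```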
Could this be used without Deno or React, just a vanilla webpage? Can it p2p sync two client databases with WebRTC?
Does it use OPFS or IndexedDB?
Currently only Deno (React is optional), but we're working on supporting other frameworks and runtimes.
GoatDB has backends for both OPFS and IndexedDB
I’m a bit confused: if it runs on the client, why does it require Deno?
It's a symmetric design that runs on both the client and the server
I don't think you've answered the question - if it runs in the browser, it runs in an ES6-compatible environment. So why doesn't it support Node or Bun by default? What specific Deno facility does it use that somehow also works in a browser, but not in Node/Bun?
A few things which are not a big deal, but do require some work:
- Back when we started, only Deno had the ability to compile to a single exec
- We're using Deno's module resolution, which is superior in every way (with an ESBuild plugin)
- Deno's filesystem API
All of the above can be implemented for other runtimes, and it's definitely on our roadmap
I evaluated reactive databases for a client-side app recently - what a mess there is to be found. Node polyfills in the browser? No thanks. So yes, there is a need, and I hope this could be an option in the future.
Thank you for the kind words, it really motivates us!
Seems like the database is reactive on the client, and React components get re-rendered automatically when content changes.
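That's the general pattern at work; here's a minimal sketch of it in plain React, not GoatDB's implementation: a store notifies subscribers on every change, and any component reading it through useSyncExternalStore re-renders automatically.

```tsx
// Minimal sketch of client-side reactivity (not GoatDB's code): the store
// notifies listeners on each change, and React re-renders subscribed
// components via useSyncExternalStore.
import { useSyncExternalStore } from 'react';

type Listener = () => void;

const store = {
  snapshot: { count: 0 },
  listeners: new Set<Listener>(),
  subscribe(l: Listener) {
    store.listeners.add(l);
    return () => store.listeners.delete(l);
  },
  getSnapshot() {
    return store.snapshot;
  },
  set(count: number) {
    store.snapshot = { count };          // replace the snapshot immutably
    store.listeners.forEach((l) => l()); // notify React
  },
};

export function Counter() {
  // Re-renders automatically whenever store.set() is called.
  const { count } = useSyncExternalStore(store.subscribe, store.getSnapshot);
  return <button onClick={() => store.set(count + 1)}>{count}</button>;
}
```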
How does it compare to Minimongo?
Similar effect, completely different tech. GoatDB is not another B-tree wrapped as a DB, but a distributed, replicated commit graph similar to Git's.
I was thinking about CRDTs last night and now this. This is awesome.
This is nice as long as your data doesn't exceed allowed memory.
Right. But you can actually push it a bit further than that by explicitly unloading portions of the data to disk, kind of like closing a file in a desktop app.
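Something like the sketch below, with hypothetical names rather than GoatDB's API, using Deno's filesystem API to spill cold data to disk and reload it on demand:

```typescript
// Conceptual sketch of explicit unloading (hypothetical names, not GoatDB's
// API): keep hot data in memory and spill cold collections to disk, the way
// a desktop app closes a file it is no longer editing.
type Items = Record<string, unknown>;

const inMemory = new Map<string, Items>();

// Evict one collection of items from RAM to a JSON file on disk.
async function unload(name: string): Promise<void> {
  const items = inMemory.get(name);
  if (!items) return;
  await Deno.mkdir('./data', { recursive: true });
  await Deno.writeTextFile(`./data/${name}.json`, JSON.stringify(items));
  inMemory.delete(name);
}

// Bring the collection back when the user opens it again.
async function load(name: string): Promise<Items> {
  const cached = inMemory.get(name);
  if (cached) return cached;
  const items = JSON.parse(await Deno.readTextFile(`./data/${name}.json`));
  inMemory.set(name, items);
  return items;
}
```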
Why not just use pouchdb? It's pretty battle-tested, syncs with couchdb if you want a path to a more robust backend?
edit: https://pouchdb.com/
But how many goats does pouchdb have? I'm betting 0.
you can fit a lot of goats into a pouch, depending on the size of the pouch
"A pouch is most useful when it is empty" - Confuseus
[flagged]
Scale, really. GoatDB easily handles hundreds of thousands of items being edited in realtime by multiple users.
so can couch/pouch? (pouch is a façade over leveldb on the backend and client-side storage in your browser)
have you done benchmarks to compare the two?
i know from personal experience leveldb is quite performant (it's what chrome uses internally), and the node bindings are very top notch.
GoatDB is web scale. PouchDB isn't web scale.
[dead]
Hundreds of thousands of items and multiple users could be done on a $5 Pi Zero 2 W (1 GHz quad-core A53) with the C++ standard library and a mutex.
People were working at this scale 30 years ago on 486 web servers.
Doing concurrent editing AND supporting offline operation?
What do you mean by "offline operation"? Which part is non-trivial?
Your server/network goes down, but you still want to maintain availability and let your users view and manipulate their data. So now users make edits while offline, and when they come back online you discover they made edits to the same rows in the DB. Now what do you do?
The problem really is about concurrency control - a DB creates a single source of truth so it can be either on or off. But with GoatDB we have multiple sources of truth which are equally valid, and a way to merge their states after the fact.
Think about what Git does for code - if GitHub somehow lost all their data, every dev in the world still has a valid copy and can safely restore GitHub's state. GoatDB does the same but for your app's data rather than source code
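To illustrate the merge step conceptually (GoatDB's actual merging is CRDT-based, as mentioned elsewhere in this thread; this is just a plain three-way merge against a common base):

```typescript
// Conceptual three-way merge of two offline edits against a shared base.
// Illustrates reconciling divergent replicas after the fact instead of
// relying on a single locked source of truth.
type Doc = Record<string, string>;

function merge(base: Doc, ours: Doc, theirs: Doc): Doc {
  const result: Doc = { ...base };
  const keys = new Set([...Object.keys(ours), ...Object.keys(theirs)]);
  for (const key of keys) {
    const ourChange = ours[key] !== base[key];
    const theirChange = theirs[key] !== base[key];
    if (ourChange && theirChange && ours[key] !== theirs[key]) {
      // Both sides touched the same field: pick a deterministic winner here.
      // Real systems use CRDTs, timestamps, or user-driven resolution.
      result[key] = ours[key];
    } else if (theirChange) {
      result[key] = theirs[key];
    } else if (ourChange) {
      result[key] = ours[key];
    }
  }
  return result;
}

// Two users edit the same row while offline, then sync:
const base = { title: 'Buy milk', status: 'open' };
const alice = { title: 'Buy oat milk', status: 'open' }; // edited title
const bob = { title: 'Buy milk', status: 'done' };       // edited status
console.log(merge(base, alice, bob)); // { title: 'Buy oat milk', status: 'done' }
```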
I swear we've been going backwards for the past 15 years
[flagged]
You can do whatever you want, but if you reach out to other people because you want them to use it, you better be able to convince them why
> lighter than SQLite
You’re concerned that a < 1 MiB library is too heavy, so you wrote a DB in TS?
> easier to self-host
How is something that requires a JS runtime easier than a single-file compiled binary?
Have you tried using SQLite in the browser and having it play nice with a DB on the backend?
No, and admittedly I misunderstood the purpose, but I don’t understand the need any better now. I’m not a frontend (nor backend) dev FWIW, I’m a DBRE.
If this is meant for client-side use, that implies a single user, so there aren’t any concerns about lock contention. It says it’s optimized for read-heavy workloads, which means the rows have to be shipped to the client, which would seem to negate the point of “lighter weight.”
If the purpose is to enable seamless offline/online switching, that’s legitimate, but that should be called out more loudly as the key advantage.
Think about the modern cloud-first architecture, where you have a thick backend with a complex DB, a thin client with a temporary cache of the data, and an API layer moving data between them.
This is an experiment in flipping the traditional design on its head and pushing most of the computation to the client using an architecture similar to what Git is using, but for your actual application data.
You get all kinds of nice byproducts from this design, like realtime collaboration, secure data auditing, multiple application versions coexisting on different branches in production, and so on. It's really about pushing the boundaries of what's possible with modern client hardware.
Shipping SQLite as a WASM module can increase your bundle size significantly, depending on your baseline.
> How is something that requires a JS runtime easier than a single-file compiled binary?
You can compile your JS to bytecode and bundle it with its runtime if you want to, getting a single-file executable. QuickJS and Bun both support this, and I think Node.js these days does as well.
If you expect your user to already have a runtime installed and you're not using QuickJS, you can just give them the script as well.
This is clearly intended for use in web applications, so a JS runtime comes for free, and the package is only 8.2 kB.
In the age of Docker, almost anything can be a single-file binary if you don’t mind pushing gigs of data around.
Right. But how do you scale it?
Pros and cons vs something like Replicache, Triplit, InstantDB, or Zero...?
Cons: much newer tech, based on ephemeral CRDTs combined with a distributed commit graph; less mature ecosystem, tooling, etc.
Pros:
- Branch-based deployment, so multiple versions can coexist nicely in prod
- Completely synchronous API that fully hides the networking
- Clients can securely restore a crashed server
- Your DB functions as a signed audit log that prevents cheating (similar to a blockchain)
- Stronger consistency guarantees that simplify development
can goats use it to manage grazing sites etc to not overgraze and offend local farmers or is it human centric?
Yup. Conflict-free grazing everywhere.