Besides DDL changes, pgstream can also do on-the-fly data anonymization and data masking. It can stream to other stores, like Elasticsearch, but the current main focus is on PG to PG replication with DDL changes and anonymization.
We've been using it at Xata to power our `xata clone` functionality, which creates a "staging replica" on the Xata platform with anonymized data that closely resembles production. From that anonymized staging replica, you then get fast copy-on-write branching. This is great for creating dev branches and ephemeral environments.
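To make "on-the-fly anonymization" a bit more concrete, here's a minimal, hypothetical Go sketch of the general idea: masking selected columns of a decoded row before it's written to the target. This is not pgstream's actual API or configuration; the function name, column names, and hashing choice are illustrative assumptions only.

```go
// Illustrative only: NOT pgstream's API. Sketch of what "on-the-fly
// anonymization" means in a CDC pipeline: rows are transformed between
// decoding from the source WAL and writing to the target.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// maskColumns replaces the values of the named columns with a short
// deterministic hash, so the shape of the data (uniqueness, joins) is
// roughly preserved while the original values never reach the target.
func maskColumns(row map[string]any, cols ...string) map[string]any {
	out := make(map[string]any, len(row))
	for k, v := range row {
		out[k] = v
	}
	for _, c := range cols {
		if v, ok := out[c]; ok {
			sum := sha256.Sum256([]byte(fmt.Sprint(v)))
			out[c] = hex.EncodeToString(sum[:8])
		}
	}
	return out
}

func main() {
	row := map[string]any{"id": 42, "email": "jane@example.com", "name": "Jane"}
	fmt.Println(maskColumns(row, "email", "name"))
}
```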
Nice to see tooling like this pop up. At a previous company, where we built a mostly self-hosted analytics platform and devs averaged about one schema migration per day, we spent a lot of time dealing with this semi-manually, leading to all kinds of breakage and hiccups downstream. We had something working fairly automatically in the end, but it really felt like tooling that should exist for everybody.
I wish this had been around 5 or 6 years ago when I was writing a direct native consumer of a logical replication slot, coincidentally also in Go. It's way easier to do natively with Postgres than I'd have guessed, but still a bit of a PITA. I wish I still had access to that code to do a side-by-side.
From what I remember, though, this looks great by comparison.
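For anyone curious what the "easier than I'd have guessed" path looks like, here's a minimal Go sketch using the jackc/pgx driver and Postgres's SQL-level interface (pg_logical_slot_get_changes with the built-in test_decoding plugin), rather than the streaming replication protocol a production consumer like pgstream would use. The connection string and slot name are placeholders, and wal_level = logical must be set on the source.

```go
// Minimal sketch: poll a logical replication slot via Postgres's SQL-level
// interface. Each returned row is one decoded change; *_get_changes also
// advances the slot. Placeholders: connection string, slot name.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()

	conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Create the slot once, using the test_decoding plugin that ships
	// with Postgres. In real code, tolerate "already exists" explicitly.
	if _, err := conn.Exec(ctx,
		`SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding')`); err != nil {
		log.Printf("create slot: %v (may already exist)", err)
	}

	// Fetch and consume pending changes from the slot.
	rows, err := conn.Query(ctx,
		`SELECT lsn::text, xid::text, data
		   FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL)`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var lsn, xid, data string
		if err := rows.Scan(&lsn, &xid, &data); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s xid=%s %s\n", lsn, xid, data)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```

A real consumer would instead open a replication connection, stream WAL continuously, and send standby status updates so the slot doesn't retain WAL indefinitely; the polling approach above is just the shortest way to see decoded changes.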
We also just released v0.8.1 today. You can read the release blog here, if you're interested: https://xata.io/blog/pgstream-v081-update
Great to see pgstream on HN!