Developing directly on the production database with no known backups. Saved from total disaster by pure luck. Then a bunch of happy talk about it being a "small price to pay for the lessons we gained" and how such failures "unleash true creativity". It's amazing what people will self-disclose on the internet.
That's the first thing I took away. The author ignores every sane software engineering practice, is saved by pure luck, and then dives into what commands not to use in Supabase. Why do this? Why not spend a week or two before you launch to set up a decent CI/CD pipeline? That's the real lesson here.
While I agree with everything said here about making backups etc., which I have done in my career at later-stage companies, when you are just starting out and building MVPs, I'd argue (as I do in the newsletter) that losing 2 weeks to set up CI/CD pipelines and backups before you can pay the rent is a waste of time! I was a Supabase noob back then so I had not explored their features for local development, which is the learning I try to share in this post.
I cut my dev teeth in a financial institution so I'll concede I'm biased away from risk, but devving directly on the prod DB, not having a local environment to test changes against, and worse: literally no backups... it screams reckless, stupid, cheap, arrogant, and immature (in the tech sense). Nothing I'd like my name against publicly.
Twenty years ago, a colleague accidentally upgraded the production database for a securities financing settlement system on a Friday evening.
We were devs with root access to production and no network segregation. He wanted to upgrade his dev environment, but chose the wrong resource file.
He was lucky it was a Friday, because it took us the whole weekend working round the clock to get the system and the data to a consistent state by start of trading.
We called him The Dark Destroyer thereafter.
So I would add network segregation to the mix of good ideas for production ops.
I'm building my toy project and I have an LTO drive taking backups every night. Here I am complaining that having 2TB of backups is too much.
lol good luck op
Right?! This whole post is kinda absurd. It has the feel of a kid putting a fork into an outlet, getting the shock of a lifetime and then going “and thanks to this, everyone in my household now knows not to put a fork into an outlet.” You didn’t have to go through all this to figure out that you need backups. The fluff is the cherry on top
Maybe the post is an attempt to save face in front of his colleagues. Owning up to the mistake and listing lessons learned.
Yeah. Imagine everything else that's completely wrong in that app.
I dunno. The effort needed to ensure you have backups is tiny compared to the work done to create the product. And to pull a backup before deleting stuff in production only needs a smidgen of experience.
They were extremely lucky. Imagine what the boss would have said if they hadn't managed to recover the data.
This _was_ one of the bosses.
Ah, yes.
> I immediately messaged my co-founders.
Owww. The first or second paragraph of this made me cringe
"I had just finished what I thought was a clean migration: moving our entire database from our old setup to PostgreSQL with Supabase" ... on a Friday.
Never do prod deploys on a Friday unless you have at least 2 people available through the weekend to resolve issues.
The rest of this post isn't much better.
And come on. Don't do major changes to a prod DB when critical team members have signed off for a weekend or holiday.
I'm actually quite happy OP posted their experiences. But it really needs to be a learning experience. We've all done something like this and I bet a lot of us old timers have posted similar stories.
It's hard to have 2 people available when you have a 2-person tech team. We were very early back then, MVP stage.
No. Never release/upgrade on a Friday. Had too many late-night weekends when I should have been happily drinking beer. Never release at EOD Friday. Never.
This is such a poorly written post, and I'm sure there are ongoing disasters waiting to happen -- I've built 3 startups and sold 2 of them and never ever developed on production. What level of crazy is this?
Supabase kinda pushes you in that direction though.
I agree. They also push you not to keep your migrations in git at first, which is definitely not the best practice.
I hope the poster will learn about transactions at some point. Postgres even lets you alter the schema within a transaction.
What I learned, once upon a time, is that with a database, you shouldn't delete data you want to keep. If you want to keep something, you use SQL's fine UPDATE to update it, you don't delete it. Databases work best if you tell them to do what you want them to do, as a single transaction.
I use transactions all the time for my other projects and I've read the great Designing Data Intensive Applications, which covers the topic of linearization in depth.
I mean
UPDATE users SET name='test'
is still effectively a delete...
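For the record, here is a minimal sketch of what the transaction suggestion looks like in Postgres -- table and column names are invented for illustration. DDL and DML can share one transaction, so a botched migration can be rolled back instead of committed:

  -- Everything between BEGIN and COMMIT is atomic, including the ALTER.
  BEGIN;

  ALTER TABLE orders ADD COLUMN archived_at timestamptz;

  -- "Delete" by updating, as suggested above, instead of removing rows.
  UPDATE orders
     SET archived_at = now()
   WHERE created_at < now() - interval '1 year';

  -- Sanity-check the effect before making it permanent.
  SELECT count(*) FROM orders WHERE archived_at IS NOT NULL;

  -- If the count looks wrong, ROLLBACK; otherwise:
  COMMIT;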
>Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys.
The technical takeaway, as others have said, is to do prod deployment during business hours when there are people around to monitor and to help recover if anything goes wrong, and where it will be working hours for quite a while in the future. Fridays are not that.
When you are a 3-person startup, I'd argue there is no such thing as "business hours". I worked every day back then. I'll concede that the "Friday Night" part in the title might be a bit clickbait in that regard.
Also: don't brag about doing the opposite of what this guy says.
I'm sorry, but there's "move fast and break things" and then there's a group of junior devs not even bothering to google a checklist of development or moving to production best practices.
Your Joe AI customers should be worried. Anyone actually using the RankBid you did a Show HackerNews on 8 months ago should be worried (particularly by the "Secure by design: We partner with Stripe to ensure your data is secure." line).
If you don't want to get toasted by some future failure where you won't be accidentally saved by a vendor, then maybe start learning more on the technical side instead of researching and writing blogspam like "I Read 10 Business Books So You Don't Have To".
This might sound harsh, but it's intended as sound advice that clearly nobody else is giving you.
Did I read that correctly? They're on Supabase's free plan in production?
We're just getting started and we're even on Supabase's paid plan.
Why do you take the paid plan when getting started?
Once you’re at a point where some of your business depends on it, you probably want the things like backups they provide…
Definitely! I had just finished the migration back then, which is why we were still on the free plan, but we had planned on enabling PITR as well.
this is exactly how you earn your prod stripes. dropped the db on day 3? good. now you’re officially a backend engineer.
no backups? perfect. now you'll never forget to set one up again. friday night? even better. you got the full rite of passage.
people act like this's rare. it’s not. half of us have nuked prod, the other half are lying or haven't been given prod access yet.
you’re fine. just make the checklist longer next time. and maybe alias `drop` to `echo "no"` for a while
Dropping DB on day 3 of your business? Probably fine. Dropping it on your day 3 but on day 300 of your business when you have paying customers? Seriously?
I dropped the production database at the first startup I worked at, three days after we went live. We were scrappy™ and didn’t have backups yet, so we lost all the data permanently. I learned that day that running automated tests on a production database isn’t a good idea!
Here is another one: Don't trust ops when they say they have backups. I asked and was told there are weekly full backups, with daily incrementals. The time came when I needed a production DB restored due to an upgrade bug in our application. That was bad - thank $DEITY we have backups.
OPS: Huh, it appears we can't find your incremental.
ME: Well, just restore the weekly, it's only Tuesday.
Two days later.
OPS: About that backup. Turns out it's a backup of the servers, not the database. We'll have to restore to new VMs in order to get at the data.
ME: How did this happen?
OPS: Well the backups work for MSSQL Server.
ME: This is PostgreSQL.
OPS: Yeah, apparently we started setting that up but never finished.
ME: You realize we have about 20 applications using that database?
OPS: Now we do.
Lesson: Until you personally have seen a successful restore from backup, you do not have backups. You have hopes and prayers that you have backups. I am forever in the Trust but Verify camp.
If your company is big enough to have dedicated ops then it should be running regular tests on backups. A disaster recovery process if you will.
At some point though it's not your problem when the company is big enough. Are you gonna do everyone's job? You tell 'em what you need in writing, and if they drop the ball it's their head.
It’s relative. No, I’m not sitting on the shoulder of the team that manages that (nor should I, there’d be 40 EMs bothering them!) but I fully expect my CTO has done it. And if not? Well, one day it’ll blow up and I’m looking for another job but that’s no different to any other possible major issues.
> I learned that day that running automated tests on a production database isn’t a good idea!
There's novel lessons to be learned in tech all the time.
This is not one of them.
Learn lessons from other people. You can't learn them all yourself.
I got deep pangs of pain and anguish for you and everyone involved. These lessons hurt so much to learn the hard way.
Your website title is "Profitable Programming" with a blog post "How I Dropped the Production Database on a Friday Night"
That's not very profitable.
Who is this guy? He seems like a poser. I wouldn't be surprised if these articles are AI-generated.
Harsh but untrue (for the AI-generated part).
To be fair, this was the norm 10 years ago. Just seems like he is stuck in the past. Really no excuse not to provision an EC2 volume and dump all backups there. I'm not even in prod yet and have full backups to LTO to be ready for launch next month.
This was never the norm for successful companies. This is only the norm for cowboys who have more pizza than good sense.
Uhh, no, the answer is not to avoid cascading deletes. The answer is to not develop directly on a production database and to have even the most basic of backup strategies in place. It is not hard.
Also, “on delete restrict” isn’t a bad policy either for some keys. Make deleting data difficult.
> Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys. Set them to NULL or use soft deletes instead. It's fine for UPDATE operations, but it's too dangerous for DELETE ones. The convenience of automatic cleanup isn't worth the existential risk of chain reactions.
I actually agreed 100% with this learning, especially the last sentence. The younger me would write a long email to push for ON DELETE CASCADE everywhere. The older me doesn't even want to touch Terraform, where an innocent-looking update can end up destroying everything. I would rather live with some orphaned records and some infra drift.
And still I got burnt a few months ago, when I inadvertently triggered some internal ON DELETE CASCADE logic of Consul ACL.
(I do agree with your other points)
Assuming storage cost is not a huge concern, I’m a big fan of soft deletes everywhere. Also leaves an easy “audit trail” to see who tried to delete something.
Of course - there are exceptions (GDPR deletion rules, etc.).
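A minimal soft-delete sketch, assuming made-up table and column names: flag rows instead of removing them, record who flagged them for the audit trail, and reserve hard DELETEs for cases like GDPR erasure requests:

  -- Rows are flagged as deleted, never physically removed.
  ALTER TABLE documents
      ADD COLUMN deleted_at timestamptz,
      ADD COLUMN deleted_by text;

  -- "Deleting" a row records when and by whom, giving the audit trail.
  UPDATE documents
     SET deleted_at = now(),
         deleted_by = current_user
   WHERE id = 42;

  -- Application queries read through a view that hides soft-deleted rows.
  CREATE VIEW live_documents AS
      SELECT * FROM documents WHERE deleted_at IS NULL;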
Echoing the other comments about just how bad the setup here is. Setting up staging/dev environments does not take so much time as to put you behind your competition. There's a vast, VAST chasm between "We're testing on the prod DB with no backups" and the dreaded guardrails and checkboxes.
That being said, I would love to see more resources about incident management for small teams and how to strike this balance. I'm the only developer working on a (small, but somehow super political/knives-out) company's big platform with large (F500) clients and a mandate-from-heaven to rapidly add features -- and it's by far the most stressed out I've ever been in my career if not life. Every incident, whether it be the big GCP outage from last week or a database crash this week, leads to a huge mental burden that I have no idea how to relieve, and a huge passive-aggressive political shitstorm I have no idea how to navigate.
The “and honestly?” phrase smells like AI writing to the point I stopped there and closed the post.
Don't fuck your database up, and do have point-in-time rollbacks. No excuses, it's not hard. Not something to be proud of.
Yeah, the whole thing is full of AI-isms. Started skimming and every other sentence has one.
"Picture this: Panic mode activated. You heard that right. But here's what surprised me the most" and so on. Ugh.
Let he who is without sin cast the first DELETE CASCADE.
This is a good story and something everyone should experience in their career even just for the lesson in humility. That said:
> Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys. Set them to NULL or use soft deletes instead. It's fine for UPDATE operations, but it's too dangerous for DELETE ones. The convenience of automatic cleanup isn't worth the existential risk of chain reactions.
What? The point of cascading foreign keys is referential integrity. If you just leave dangling references everywhere your data will either be horribly dirty or require inconsistent manual cleanup.
As I'm sure others have said: just use a test/staging environment. It isn't hard to set up even if you are in startup mode.
Thanks for your takeaway. Yes, the dev environment is definitely a must as soon as you start growing!
> The point of cascading foreign keys is referential integrity.
Not quite. Databases can enforce referential integrity through foreign keys, without cascading deletes being enabled.
“On delete restrict” vs “on delete cascade” still enforces referential integrity, and is typically a better way to avoid the OP’s issue.
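To make that concrete, here is a hedged sketch with invented table names: both constraints below enforce referential integrity, they only differ in what a DELETE on the parent row does:

  CREATE TABLE users (
      id bigint PRIMARY KEY
  );

  -- ON DELETE RESTRICT: deleting a user that still has invoices fails loudly.
  CREATE TABLE invoices (
      id      bigint PRIMARY KEY,
      user_id bigint NOT NULL REFERENCES users (id) ON DELETE RESTRICT
  );

  -- ON DELETE CASCADE: deleting a user silently takes its sessions with it.
  CREATE TABLE sessions (
      id      bigint PRIMARY KEY,
      user_id bigint NOT NULL REFERENCES users (id) ON DELETE CASCADE
  );

Either way there are never dangling user_id references; RESTRICT just turns the dangerous cleanup into an explicit, separate step.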
I dropped the dev database once at PayPal back in 2006.
I once remailed emails to IEEE and ACM. I was ready to quit and take the L for such a bad mistake. Not write a blog post for Friday evening consumption.