I think that it's true that at large companies there can be an obsession with over-design and over-engineering "for scale", but I actually think that's wrong to do at large companies too, you're just less likely to pay the ultimate price for wasting time on it.
The overall article is ambiguous enough that yes, it can be interpreted to be in alignment with my values. But based on my quarter-century of experience in both startups and Fortune 500s, and in the transition from the former to the latter, I would say that for every engineer shooting from the hip and creating an unmaintainable mess there is an equal number who will read it as a justification for over-engineering. Also, though of course most startups fail, I would say the latter archetype fails at a higher rate because they are focused on the wrong things.
The crux really is the nuance of this statement: "The disciplines that lead to successful software are always valid". This is tautological; everyone reads into it and sees what they want to see. But if we take the examples he gives, that's where judgement comes in. Double-entry bookkeeping? Yes, that's pretty universal. TDD? That really depends on what you are doing and what value you get out of it. Not only do specific disciplines and practices vary based on company stage, they also vary based on the product and the goals of the business. Anyone who doesn't understand this is fucked if they try to do a startup.
Agreed, but I would say it isn't necessarily tautological. I think there are lots of people out there who think that if you want to develop software faster you cut corners on things like testing (to be clear I'm not endorsing TDD, just saying having a reasonable number of tests beats the hell out of none), choosing a dynamically typed language vs a statically typed one, and just generally throwing code over the fence for the sake of moving fast. I think it's true that you will in fact move faster over time (not even a very long time, just like a few months) if you stick to whatever good principles you would have stuck to if you didn't have the time pressure of shipping while at a startup.
My read was that this was intentionally vague, since it's basically trying to say to the reader that whatever you think is a good idea to do while developing software at a non-startup is also a good idea to do at a startup. If the reader has bad ideas about developing software then clearly there's no helping them :P
Exactly. All those things that are critical in a startup are also a really good way to accelerate possibly stagnant development. However, short-sightedness and ignoring tech debt kill startups too.
For a startup achieving product market fit should be the primary objective, clean code is secondary. And honestly, a lot of the things that Robert Martin proposes are quite controversial and I’m not sure if I would recommend them for a more developed company.
Actually, people will use buggy software if it solves a real problem. If you need to polish your software to the nth degree to get and retain customers, you are in a crowded space and should go do something else with your life.
If chatgpt failed 50% of its requests from the UI, people would still use it. If it logged you out after every other chat request, people would still use it
Zero evidence, just stating the same unproven argument as fact over and over. Also directly countered by how much commercially successful/widely adopted software has fucking awful dogshit code.
But they are the same as anywhere else in a very different way than described here.
At almost any place (other than a company that only writes moon lander code, or similar) you will probably encounter many different situations over your career.
Sometimes quality is crucial and it's better to be late than buggy or wrong.
Sometimes timeline is crucial, and your users are willing to be more in the "beta tester" role as long as they get that first version quickly.
Sometimes the product shape isn't quite clear yet, so timeline (to iterate with real users) and future extensibility/adaptability are both super important.
Sometimes you can only afford the 6 month effort, financially. Having less tech debt won't help you if you ship nothing and make no money.
Sometimes you won't need to add many more features in the future, so certain kinds of "tech debt" are fine.
But there's no one-size-fits-all approach to software engineering across all these situations. Heck, it's often hard to tell with full certainty which situation you're in - predicting the future is hard.
However, any approach that doesn't start with "what do we need, and what do we think the right set of tradeoffs are" is likely to let you down. Since there's no freebies here - "discipline" in the "follow the checklist 100% every time every where" sense is not free.
Funny thing about the harping on TDD is that one of the real tricks is that if you don't know the situation, you can't even write the right tests. If the features/functionality is gonna churn, you need to be careful how you write your tests so you aren't constantly rewriting them by testing volatile boundaries of functionality. But that's very different from "TDD good, slow good, fast bad."
This post sears and burns with the heat of a thousand suns. How I wish I could have read this every day, ten times a day, as I was starting out on my startup journey three years ago.
So many corners were cut in the name of speed. So much pressure from my cofounder/investor to get things out way too fast. So many months of endless suffering and never-ending bugs due to poor architecture that doesn’t scale on the BE.
I know how to push back. I do it for a living at my day job and I’m really good at it. For some reason I decided that I couldn’t ever push back against the guy who put in some money. How horribly wrong I was.
I have pondered this question before and I have seen people recommend "A Philosophy of Software Design" by John Ousterhout, but my qualm with Clean Code is not that it needs a substitute; it's just that it's a fairly simple set of concepts about which Bob makes a big deal. I did read some of his books, but I realized it's only about 10% of what makes a competent software engineer. My suggestion to people starting out, or even seasoned programmers: get an idea of what he advocates (TDD, SOLID and all that), but the design of programs is just a small part. (And I can also debate the usefulness of both TDD and SOLID. Personal opinion coming: they are great for small or greenfield projects but almost always don't hold up in the real world.)
Learn about other kinds of (much more effective) testing like System/Integration testing, Property-based testing. Spend a lot of time learning about databases and SQL. Maybe get into somewhat esoteric topics like constraint solvers and logic programming. You may not use these but it helps to know there's a wide world out there, and they do bend your brain enough to enable you to think differently.
Time is limited. It does matter what we spend it on.
Note: There is apparently a large group of people who hate everything he does and also, seemingly, him personally. Everywhere he (or any of his books) is mentioned, the haters come out, with their vague “it’s all bad” and the old standard “I don’t know where to begin”. Serious criticism can be found (if you look for it), and he himself welcomes it, but the constant vague hate is scary to see.
I don’t want to be a hater but a lot of “Uncle Bob’s” advice just doesn’t seem very good, this article included. Robert has had a long and successful career writing books on code style and architecture diagrams, but he hasn’t built anything notable in industry during that period. And the opinion presented here seems to clash with nearly everybody who has ever actually built a successful startup. Some things you have to learn by doing.
While "Uncle Bob" certainly hasn't uncovered any silver bullets, I find the criticism of him to be generally unfair. It's the same type of criticism that commonly targets anyone that actually sticks their neck out far enough to make concrete recommendations.
So many teachers, careful not to invite the ire of the armchair critic and ACKCHYUALLY know-it-all, will hem and haw, leading you into analysis paralysis regarding "best practices". That then leads to exhortations to "find what works best for you", acknowledging there are many "right" ways among all the wrong ways.
As I've gotten older, I appreciate teachers like Uncle Bob who provide a specific prescription and say "try it this way". I've discovered I learn faster starting somewhere concrete.
> While "Uncle Bob" certainly hasn't uncovered any silver bullets
Which is compatible with Brooks, who famously had to point out that there are no Silver Bullets.
Anyone who is reading these kinds of things and expecting to find that the authors uncovered a Silver Bullet will be regularly disappointed. But that doesn't, as you point out, negate the value of the learned experience that they are trying to communicate. Even if there are no Silver Bullets, that doesn't mean that there are processes and methodologies that can make things more predictable, robust, reliable, and enjoyable.
I think the problem I have with Bob is that he doesn’t say "try it this way". He says “you must do it this way, and if you don’t do it this way you are a disgrace to the profession”. It’s obnoxious.
Even worse, his cultish followers who blindly believe everything he says without understanding it. “Why must it be done this way?” I ask. “Because Bob said so”. Without comprehension, his advice becomes toxic.
If you’re new and you simply don’t get it yet, that’s fine. If you think you’re now an all knowing being because you listen to Bob, please calm down.
All non-political critiques which I have seen (here and elsewhere) always seem to turn out to be based on misunderstandings or exaggerations of what he has actually said or written (sometimes wildly so). Or, as it sometimes turns out, people hate him not for anything he has said, but because of how other people have misunderstood and misinterpreted him.
From what I have seen, every time he has been criticized sincerely, and he has become aware of it, he has engaged his critics in open debate, and they have come to amicable results, with him sometimes altering his views.
What you describe is apparently a common impression of what he writes; I have seen many people express it. But I have not seen an actual quote to prove it. I think some people may feel so threatened by someone who says that what they do may not be very good, that cognitive dissonance kicks in and they instead elect to take offense at the tone of the message (and re-interpret what he says in the worst possible way in order for that to make sense).
of course one of the disciplines I’m talking about is TDD. Anybody who thinks they can go faster by not writing tests is smoking some pretty serious shit
That is not some made-up stuff: the "smoking some pretty serious shit" line is literally saying one is a disgrace if he is not doing TDD.
I call your reading comprehension into question. What it literally says is that you will not go faster by not writing tests, or, conversely, that writing tests will not make you go slower, and that anyone who thinks otherwise is wrong. You are imagining an insult in order to feel outrage, probably so that you can then avoid the actual issue.
I read it as useless exaggeration, used to incite outrage.
It feels correct for young, starry eyed devs.
But then I have to deal with a bunch of assholes in the workplace who read that kind of shit and think they have to be edgy and always right, and that only “right things” have to be done.
That is the whole context you should read here and why people don’t like Bob’s stuff.
The other thing was Linus and his code reviews: I had at least a couple of guys thinking they were Linus when what they worked on was yet another run-of-the-mill CRUD app. Good for Linus that he finally understood what toxicity is and toned it down.
If you insist on discussing only the tone, not the argument, you have to consider that the blog post you are commenting on is more than 10 years old. Would he write it in the same way today?
The tone is part of the message. In this article he describes the kind of strawman engineers he is disagreeing with as egotistical fools who believe “stupid” things. Elsewhere he’s writing oaths for programmers that say you should only build software according to his principles. It’s all very high and mighty.
I disagree with the idea that “The start-up phase is not different” and that engineers should follow the same set of good engineering principles regardless of company age. It’s just not true. Anyone who has worked in industry has experienced “good engineering principles” standing in the way of actually succeeding as a business.
And other than that point it’s just a badly written article. Yeah it’s fun to read and has swear words in it but Bob doesn’t actually back up any of his points with supporting arguments, the whole thing is just a polemic. And the reason for this lack of supporting arguments is that Bob doesn’t have any industry experience to draw on so he can’t provide any first hand accounts to support his opinion. It’s nothing but grand sweeping statements about an area he has zero familiarity with.
I think there's something like scaling laws for development teams. A number of things change, the cost of refactoring, the cost of communication, the need for communication, the plausibility of everyone knowing everything, etc.
Practices that are necessary for a large team can really hamstring a small team. When you're just a few people, you can cut corners larger organizations can not. If you really lean into that, you can kinda run circles around those larger organizations with just a few developers.
Technical debt is probably the biggest thing that changes with size. With a small team it's a tool you can use to get more done, a bit like a mortgage can let you do things you couldn't otherwise. Refactoring is cheap when you are few, so you can usually pay it off if it gets too bad.
Technical debt in a large project with many developers is very different, as large refactoring operations are prohibitively expensive, and you should go to great lengths to ensure it doesn't increase.
Having heard that, I'd caveat that with "Sometimes, those who have already done, teach."
Which then boils down to the student's task of ascertaining whether the advice a solid professor is giving you has changed since they last successfully did.
And also the observation that the number of students who are likely to be more successful ignoring education is pretty small: most people aren't that brilliant and will get better mileage out of learning and...
Your entire comment was "he used to be good at many things, but things I won't talk about have not been to my approval".
You presented no facts, no thoughts, just said "others say". Your comment is disinformation, because I'm less informed after reading it than I was before.
I'm puzzled why this comment is getting downvoted. Does Martin have any data or statistics to back up his claim? No. Is his assessment based on personal experience in building a startup? Also no. So what's left? This blog post constitutes an uninformed opinion, and uninformed opinions are worthless.
If you have a zero revenue business you generally shouldn't write tests. Just add some health checks for backend services and use strongly typed code for everything.
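For what it's worth, the "health checks instead of tests" idea can be sketched in a few lines of stdlib Python. This is a toy, and the dependency names (`database`, `cache`) and their check lambdas are made up for illustration:

```python
# Minimal health-check endpoint sketch: one route that verifies the
# service's critical dependencies and reports per-dependency status.
import json
from http.server import BaseHTTPRequestHandler
from typing import Callable

# Each check returns True if the dependency is reachable/healthy.
# In real code these would e.g. run `SELECT 1` or PING a cache.
CHECKS: dict[str, Callable[[], bool]] = {
    "database": lambda: True,
    "cache": lambda: True,
}

def run_checks() -> tuple[bool, dict[str, bool]]:
    """Run all checks; return overall health plus per-check results."""
    results = {name: check() for name, check in CHECKS.items()}
    return all(results.values()), results

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        if self.path != "/health":
            self.send_error(404)
            return
        healthy, results = run_checks()
        body = json.dumps(results).encode()
        # 503 tells load balancers to pull the instance from rotation.
        self.send_response(200 if healthy else 503)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
```

Wire `HealthHandler` into an `HTTPServer` (or the equivalent route in whatever framework you use) and you get a cheap liveness signal without a test suite.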
Not true. Kent Beck’s 3X is a much better take: tests and good practices for what is high-risk and hard to change, and moving fast on the rest to try to find that black swan as soon as possible.
Yes, I do feel this time is different and that I am a top-notch coder (I feel more comfortable sounding like a jerk on Hacker News). One year into my startup, the codebase is big, but I’m actually coding faster than ever, as the foundation code is more and more complete.
One of the huge reasons for it, beyond the right architecture, is type safety. Someone who is well seasoned in strongly-typed FP but now pragmatic can move incredibly fast, with enormous safety, by applying type strictness where the cost-benefit is best and being flexible where it doesn’t pay off.
Is this what you're talking about?
https://medium.com/@kentbeck_7670/fast-slow-in-3x-explore-ex...
Yes, exactly: from the experience he had at Facebook as it massively scaled up, while all good practices were thrown out the window (except for foundation and critical parts) and extreme go-horse PHP was written as fast as possible, until it was too big and a migration was needed.
I had the exact same experience at Booking, but with terrible Perl instead of PHP. Still, it built a massively successful business, until one day the move to better practices became inevitable.
The opposite of what uncle bob says
[flagged]
Can you please not post like this? We're trying for curious, respectful conversation here—not people putting each other down or ridiculing others.
https://news.ycombinator.com/newsguidelines.html
Erm, he wrote the article with “you” to evoke the feeling of the reader thinking about their own use case, which I did.
Different because I ran without good practices before; things got more and more messed up over time, ground to a halt, and it was all terrible. Then I worked for 10 years doing TDD for most of it, as well as pairing and other good practices; everything was better, and I started condemning people who don’t do it, much like Uncle Bob. I kept moving forward and entered an environment where everything was “wrong” but still somehow worked very well. Now I’m mature enough that I can have both: skip “good practices” and still not break things.
Finally, yes of course the types fucking matter
Oh man. You can write tests after the fact. I have done it successfully many times. I write something to see if it's viable; if it is, I find the repetitive parts and break them out into functions or components, which you can then unit test. Later you can also write end-to-end tests for all the important flows.
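To make that concrete, here's a toy sketch of the test-after flow. The `slugify` helper is invented for illustration: it stands in for the repetitive bit that got extracted from a prototype once it proved viable.

```python
# Test-after in miniature: repeated "slugify a title" logic is pulled
# out of the prototype into a function, and only then gets unit tests
# for the behaviors that turned out to matter.
import re

def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Tests written after the code, once the behavior settled:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Already   spaced  ") == "already-spaced"
```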
TDD is just a way of doing things, but it's not faster. It's testing for testing's sake. Why should I write a test if I don't have a single paying customer yet? Unless it's something massively important, like money or health data, there is just little incentive.
If you do test-after you have to keep a mental tally of new scenarios to test when making code changes. This makes it more unreliable since it's easy to forget one of those scenarios or mix it up with an already tested scenario.
TDD lets you safely forget by tying the test pass/failure directly to the code.
So, yes, you can do test-after but why?
The only reason I've ever heard for doing it after anyway is "I just prefer it that way".
Not writing tests at all makes sense (e.g. for a spike), but if I were going to start writing tests at any point I can't see any reason not to do it with TDD.
My experience is that TDD just ensures that the code is unit-testable. This can lead to more complex code when it need not be complex. I definitely write tests, but my default approach is to make the test simulate how a user would use the functionality: mostly a higher-level test, like a system test. And you can do this only when you have a bigger picture of the program that you write (which need not be driven by unit tests; you just need to elucidate what the program needs to do and break it into steps). I don't rule out unit tests, but my approach is to start with tests that resemble system tests, and if I specifically need a unit test for a hairy algorithm in a function, use one to help.
Also, the higher the level you test at, the less likely it is that you have to change the tests when you change a piece of functionality.
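A hedged sketch of what that looks like in practice: a made-up cart module exercised only through the operations a user-facing flow would hit, so internal refactors (different storage, different totaling logic) don't force test rewrites.

```python
# A tiny invented shopping-cart module, tested as a user flow rather
# than per-method unit tests.
class Cart:
    def __init__(self) -> None:
        self._quantities: dict[str, int] = {}
        self._prices: dict[str, float] = {}

    def add(self, name: str, price: float, qty: int = 1) -> None:
        self._quantities[name] = self._quantities.get(name, 0) + qty
        self._prices[name] = price

    def remove(self, name: str) -> None:
        self._quantities.pop(name, None)

    def total(self) -> float:
        return sum(self._prices[n] * q for n, q in self._quantities.items())

def test_checkout_flow() -> None:
    # One test that walks the path a user would: add, change mind, pay.
    cart = Cart()
    cart.add("book", 10.0, qty=2)
    cart.add("pen", 1.5)
    cart.remove("pen")
    assert cart.total() == 20.0

test_checkout_flow()
```

Because the test only touches `add`, `remove`, and `total`, you could swap the dicts for a database row without touching it.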
It's called TDD not UTDD.
I frequently do TDD with integration and end-to-end tests. The type of test should be dictated by the nature of the code, which TDD doesn't have an opinion on.
TDD is about following red-green-refactor.
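For anyone who hasn't seen the loop spelled out, a minimal sketch with FizzBuzz as the stand-in problem (the point is the cycle, not the code):

```python
# Red-green-refactor in miniature:
# 1. RED: write a failing assertion first (fizzbuzz doesn't exist yet).
# 2. GREEN: write the least code that makes it pass.
# 3. REFACTOR: restructure freely; the assertions are the safety net.

def fizzbuzz(n: int) -> str:
    """Simplest implementation that turns the red assertions green."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# These were written before the function body, and stay green through
# any later refactor:
assert fizzbuzz(3) == "Fizz"
assert fizzbuzz(5) == "Buzz"
assert fizzbuzz(15) == "FizzBuzz"
assert fizzbuzz(7) == "7"
```

The same loop works unchanged if the assertions hit an HTTP endpoint or a database instead of a pure function.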
Good for you, then. I myself have not come across the (loud/louder) TDD proponents advocating for using TDD in system/integration testing; they mostly focus on unit tests. If you can point to some examples, it would be a learning experience for me. If not, that's fine too; I am glad that there are voices out there like yours.
I wrote a systems/integration testing framework for this purpose specifically and I wrote a few essays alongside it. It has the same name as my handle.
Not much traction, unfortunately. I'd be interested in any comments you might have.
I write tests alongside the code. I also write tests before the code, and after the code.
Usually, I use test harnesses[0]. These generally start before the code and grow alongside it.
I’ll frequently write unit tests, after the code is done. Sort of like putting solder over a tightened bolt.
But I tend to spend a lot of time and effort on testing. It's my experience that I always find issues. I've never once written “perfect” code out of the starting gate.
Eh. WFM. YMMV.
IRT the post topic, I believe that every job I do -even “farting around” code- needs to be done as well as possible. If I always do a good job, then it becomes habit.
[0] https://littlegreenviper.com/testing-harness-vs-unit/
I’ve seen people spend 10 minutes testing things by hand when it would have taken them less time to write and run a test, especially with AI now.
When writing a test actually makes it faster to code, THEN it’s worth it. You can even throw the tests away later; it doesn’t matter.
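A toy example of that trade-off: the parser here is made up, but a couple of disposable assertions replace ten minutes of pasting dates into a REPL by hand.

```python
# Throwaway assertions instead of manual poking: pin down the behavior
# you'd otherwise verify by hand, then delete the file if you like.
from datetime import date

def parse_iso(s: str) -> date:
    """Parse a YYYY-MM-DD string, tolerating surrounding whitespace."""
    y, m, d = (int(part) for part in s.strip().split("-"))
    return date(y, m, d)

# Disposable checks, each faster than one round of hand-testing:
assert parse_iso("2024-01-31") == date(2024, 1, 31)
assert parse_iso(" 1999-12-01 ") == date(1999, 12, 1)
```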
There are thousands of successful technical startup founders and early employees that would disagree with the content of this article.
The truth is that it's actually totally fine to accumulate some amount of technical debt and to ship some buggy code if you're still early. Speed matters. Of course this will change if you're later-stage or if you're in an industry where you must be very careful (security, healthcare, gov, etc.).
> Is your code somehow less important than that accountant’s spreadsheets?
Shout-it-from-the-mountain-top "yes!"
The code should just get you through a demo, or securing the next round of financing.
The accountant can't cut corners, because they and other people could go to jail.
But the idea that prototyping in software is not a valid practice is laughably wrong.
Where you do go wrong is if the people only know how to prototype and nothing else; they can't get rid of the prototypey bits and evolve them into solid production code in which all traces of prototyping are gone. Multi-talented mega-beings who know numerous methodologies wouldn't be in this boat, right?
> as if any bug is acceptable
Windows still crashes in 2024, yet we can't get rid of it.
This article buries the lede with a bunch of horrendous strawmen. The actual thesis is that software craftsmanship should be applied in the same way at all stages of a company. This is a very dangerous line of thinking that has killed many a startup staffed by experienced engineers who had been working in large-scale systems and teams. I've seen it many times, and TBH it's not just an engineering problem; it's that people who have only worked in large companies don't have a real sense of what is truly essential, and so a huge percentage of their practices are effectively cargo-culted over without any real reflection.
In the case of engineering, you need to apply a lot of judgement based on the situation the company is in: how much runway, how much traction, actual product goals, etc. You must keep things as simple as possible at all times to optimize for future optionality as you search for product-market fit. All code is a liability, and you must fight tooth and nail against any individual who is getting out ahead of their skis and losing focus on the next thing needed to prevent the company from dying. The absolute worst thing you can do is bring in some journeyman engineer and not give them enough scope and ownership to satisfy their brain capacity, or you'll end up with ridiculously over-engineered systems that impose a huge velocity tax on what needs to be a very lean and agile phase. I say this disclaimer first because 99% of people in tech trying to do a startup will fail by trying to do too much too soon, having no intuitive sense of how narrow the tightrope from 0 to 1 success really is.
Of course that doesn't mean you shouldn't focus on code and system quality. Absolutely you should have tests, but you should apply serious judgement to the nature of the tests in light of your best predictions about the future. You should think about what code is foundational, and which decisions may be one-way doors, but not obsess over leaf nodes and experiments that are just as likely to be abandoned or scrapped as they are to be built upon. Making these calls is tough (no one can predict the future), but long tenures in fast-growing code bases help. Seeing the impact of one's decisions 2, 5, 10 years down the line is eye-opening; experience is useful here as long as one still thinks from first principles and doesn't just rely on rote practices because they are comfortable.
I think you are actually agreeing with the main thesis of the article, which I agree buries the lede and obfuscates it by focusing too heavily on TDD in particular. You're just adding the accurate caveat that most people's opinions on how to build good software are wrong.
I think it's true that at large companies there can be an obsession with over-design and over-engineering "for scale", but I actually think that's wrong to do at large companies too; you're just less likely to pay the ultimate price for wasting time on it.
The overall article is ambiguous enough that yes, it can be interpreted to be in alignment with my values. But also based on my quarter century experience in both startups and Fortune 500s and the transition from the former to the latter, I would say for every engineer shooting from the hip and creating an unmaintainable mess there is an equal number who will read it as a justification for over-engineering. Also, though of course most startups fail, I would say the latter archetype fails at a higher rate because they are focused on the wrong things.
The crux really is the nuance of this statement: "The disciplines that lead to successful software are always valid". This is tautological; everyone reads and sees what they want to see. But if we take the examples he gives, that's where judgement comes in. Double-entry bookkeeping? Yes, that's pretty universal. TDD? That really depends on what you are doing and what value you get out of it. Not only do specific disciplines and practices vary based on company stage, they also vary based on the product and the goals of the business. Anyone who doesn't understand this is fucked if they try to do a startup.
Agreed, but I would say it isn't necessarily tautological. I think there are lots of people out there who believe that if you want to develop software faster, you cut corners: skipping testing (to be clear, I'm not endorsing TDD, just saying a reasonable number of tests beats the hell out of none), choosing a dynamically typed language over a statically typed one, and generally throwing code over the fence for the sake of moving fast. I think you will in fact move faster over time (not even a very long time, just a few months) if you stick to whatever good principles you would have stuck to without the time pressure of shipping at a startup.
My read was that this was intentionally vague, since it's basically trying to say to the reader that whatever you think is a good idea to do while developing software at a non-startup is also a good idea to do at a startup. If the reader has bad ideas about developing software then clearly there's no helping them :P
Exactly. All those things that are critical in a startup are also a really good way to accelerate possibly stagnant development. However, short-sightedness and ignoring tech debt kill startups too
For a startup achieving product market fit should be the primary objective, clean code is secondary. And honestly, a lot of the things that Robert Martin proposes are quite controversial and I’m not sure if I would recommend them for a more developed company.
Clean code is like #5 or lower on the list.
Actually, people will use buggy software if it solves a real problem. If you need to polish your software to the nth degree to get and retain customers, you are in a crowded space and should go do something else with your life.
If ChatGPT failed 50% of its requests from the UI, people would still use it. If it logged you out after every other chat request, people would still use it
Zero evidence, just stating the same unproven argument as fact over and over. Also directly countered by how much commercially successful/widely adopted software has fucking awful dogshit code.
Worse than useless.
Startups aren't magically different.
But they are the same as anywhere else in a very different way than described here.
At almost any place (other than a company that only writes moon lander code, or similar) you will probably encounter many different situations over your career.
Sometimes quality is crucial and it's better to be late than buggy or wrong.
Sometimes timeline is crucial and your users are willing to be more in the "beta tester" role as long as they get that first version quickly.
Sometimes the product shape isn't quite clear yet, so timeline (to iterate with real users) and future extensibility/adaptability are both super important.
Sometimes you can only afford the 6 month effort, financially. Having less tech debt won't help you if you ship nothing and make no money.
Sometimes you won't need to add many more features in the future, so certain kinds of "tech debt" are fine.
But there's no one-size-fits-all approach to software engineering across all these situations. Heck, it's often hard to tell with full certainty which situation you're in - predicting the future is hard.
However, any approach that doesn't start with "what do we need, and what do we think the right set of tradeoffs is?" is likely to let you down. There are no freebies here: "discipline" in the "follow the checklist 100% every time, everywhere" sense is not free.
The funny thing about the harping on TDD is that one of the real tricks is this: if you don't know the situation, you can't even write the right tests. If the features/functionality are going to churn, you need to be careful how you write your tests so you aren't constantly rewriting them because they test volatile boundaries of functionality. But that's very different from "TDD good, slow good, fast bad."
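One way to read "volatile boundaries": while features churn, pin tests to the stable public contract rather than the internals. A sketch under made-up assumptions (the `cart_total` contract and its discount rule are hypothetical):

```python
# Hypothetical example: the stable public contract is "total price for a cart".
# The internal breakdown (discount rules, rounding order) churns constantly,
# so the tests target only the boundary.

def cart_total(prices, discount=0.0):
    """Stable contract: sum the prices, then apply the discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

# Boundary tests: they survive any internal refactor that preserves the
# contract, so feature churn inside cart_total doesn't force rewrites.
assert cart_total([10.0, 5.0]) == 15.0
assert cart_total([10.0, 5.0], discount=0.1) == 13.5

# What to avoid while things churn: asserting on intermediate internals
# (e.g. the exact discount-rule objects), which breaks on every change.
```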
Article's premise: software development practices are invariant to company size and stage.
Reality: team size dominates methodology, and user base size dominates architectural decisions.
Tests are less useful on features that may be canceled soon.
Tests are more valuable on more complex codebases.
Local tests (shifting left) are more valuable when deploying is hard/infrequent and screwing up prod has higher stakes.
These factors explain why most startups should code differently than big corps.
Slow is smooth, smooth is fast.
those are the words of a drummer...
Has Uncle Bob even written a single line of code in 20 years? And the stuff he did produce 20 years ago wasn’t particularly impressive.
Yes, he has: https://blog.cleancoder.com/uncle-bob/2021/11/28/Spacewar.ht...
This post sears and burns with the heat of a thousand suns. How I wish I could have read this every day, ten times a day, as I was starting out on my startup journey three years ago.
So many corners were cut in the name of speed. So much pressure from my cofounder/investor to get things out way too fast. So many months of endless suffering and never-ending bugs due to poor architecture that doesn’t scale on the BE.
I know how to push back. I do it for a living at my day job and I’m really good at it. For some reason I decided that I couldn’t ever push back against the guy who put in some money. How horribly wrong I was.
To those who are recommending against Uncle Bob’s Clean Code book, are there alternative books/resources you’d recommend?
I have pondered this question before, and I have seen people recommend "A Philosophy of Software Design" by John Ousterhout, but my qualm with Clean Code is not that it needs a substitute; it's that it's a fairly simple set of concepts about which Bob makes a big deal. I did read some of his books, but I realized they cover only about 10% of what makes a competent software engineer. My suggestion to people starting out, or even to seasoned programmers, is to get an idea of what he advocates (TDD, SOLID and all that), but program design is just a small part. (And I can also debate the usefulness of both TDD and SOLID. Personal opinion coming: they are great for small or greenfield projects but almost always don't hold up in the real world.)
Learn about other kinds of (much more effective) testing, like system/integration testing and property-based testing. Spend a lot of time learning about databases and SQL. Maybe get into somewhat esoteric topics like constraint solvers and logic programming. You may not use these, but it helps to know there's a wide world out there, and they do bend your brain enough to enable you to think differently.
Time is limited. It does matter what we spend it on.
> Time is limited. It does matter what we spend it on.
Definitely. Knowing this does make choosing what to learn easier though!
Note: There is apparently a large group of people who hate everything he does and also, seemingly, him personally. Everywhere he (or any of his books) is mentioned, the haters come out, with their vague “it’s all bad” and the old standard “I don’t know where to begin”. Serious criticism can be found (if you look for it), and he himself welcomes it, but the constant vague hate is scary to see.
"To quote Captain Sulu when the Klingon..."
Easy skip of a read
It's probably time to stop recommending Clean Code: https://qntm.org/clean
I don’t want to be a hater but a lot of “Uncle Bob’s” advice just doesn’t seem very good, this article included. Robert has had a long and successful career writing books on code style and architecture diagrams, but he hasn’t built anything notable in industry during that period. And the opinion presented here seems to clash with nearly everybody who has ever actually built a successful startup. Some things you have to learn by doing.
While "Uncle Bob" certainly hasn't uncovered any silver bullets, I find the criticism of him to be generally unfair. It's the same type of criticism that commonly targets anyone that actually sticks their neck out far enough to make concrete recommendations.
So many teachers, careful not to invite the ire of the armchair critic and ACKCHYUALLY know-it-all, will hem and haw, leading you into analysis paralysis regarding "best practices". That then leads to exhortations to "find what works best for you", acknowledging there are many "right" ways among all the wrong ways.
As I've gotten older, I appreciate teachers like Uncle Bob who provide a specific prescription and say "try it this way". I've discovered I learn faster starting somewhere concrete.
> While "Uncle Bob" certainly hasn't uncovered any silver bullets
Which is compatible with Brooks, who famously had to point out that there are no Silver Bullets.
Anyone who is reading these kinds of things and expecting to find that the authors uncovered a Silver Bullet will be regularly disappointed. But that doesn't, as you point out, negate the value of the learned experience that they are trying to communicate. Even if there are no Silver Bullets, that doesn't mean there aren't processes and methodologies that can make things more predictable, robust, reliable, and enjoyable.
https://en.wikipedia.org/wiki/No_Silver_Bullet
I think the problem I have with Bob is that he doesn't say "try it this way". He says "you must do it this way, and if you don't do it this way you are a disgrace to the profession". It's obnoxious.
Even worse, his cultish followers who blindly believe everything he says without understanding it. “Why must it be done this way?” I ask. “Because Bob said so”. Without comprehension, his advice becomes toxic.
If you’re new and you simply don’t get it yet, that’s fine. If you think you’re now an all knowing being because you listen to Bob, please calm down.
All non-political critiques which I have seen (here and elsewhere) always seem to turn out to be based on misunderstandings or exaggerations of what he actually has said or written (sometimes wildly so). Or, as it sometimes turns out, people hate him not for anything he has said, but because of how other people have misunderstood and misinterpreted him.
From what I have seen, every time he has been criticized sincerely, and he has become aware of it, he has engaged his critics in open debate, and they have come to amicable results, with him sometimes altering his views.
What you describe is apparently a common impression of what he writes; I have seen many people express it. But I have not seen an actual quote to prove it. I think some people may feel so threatened by someone who says that what they do may not be very good, that cognitive dissonance kicks in and they instead elect to take offense at the tone of the message (and re-interpret what he says in the worst possible way in order for that to make sense).
From the linked article:
of course one of the disciplines I’m talking about is TDD. Anybody who thinks they can go faster by not writing tests is smoking some pretty serious shit
That is not some made-up stuff ("smoking shit"): he is literally saying one is a disgrace if they are not doing TDD.
I call your reading comprehension into question. What it literally says is that you will not go faster by not writing tests, or, conversely, that writing tests will not make you go slower, and that anyone who thinks otherwise is wrong. You are imagining an insult in order to feel outrage, probably so that you can then avoid the actual issue.
I read it as useless exaggeration used to incite outrage.
It feels correct for young, starry-eyed devs.
But then I have to deal with a bunch of assholes in the workplace who read that kind of shit and think they have to be edgy and always right, and that only "right things" may be done.
That is the whole context you should read this in, and why people don't like Bob's stuff.
The other thing was Linus and his code reviews: I had at least a couple of guys thinking they were Linus when what they worked on was yet another run-of-the-mill CRUD app. Good for Linus that he finally understood what toxicity is and toned it down.
If you insist on discussing only the tone, not the argument, you have to consider that the blog post you are commenting on is more than 10 years old. Would he write it in the same way today?
The tone is part of the message. In this article he describes the kind of strawman engineers he is disagreeing with as egotistical fools who believe “stupid” things. Elsewhere he’s writing oaths for programmers that say you should only build software according to his principles. It’s all very high and mighty.
If you are attacking the tone in order to avoid engaging with his actual arguments, then you are exhibiting the exact behavior which I described.
Why not debate the points in the article on their own merits? Which parts do you agree or disagree with, and why?
I disagree with the idea that “The start-up phase is not different” and that engineers should follow the same set of good engineering principles regardless of company age. It’s just not true. Anyone who has worked in industry has experienced “good engineering principles” standing in the way of actually succeeding as a business.
And other than that point it’s just a badly written article. Yeah it’s fun to read and has swear words in it but Bob doesn’t actually back up any of his points with supporting arguments, the whole thing is just a polemic. And the reason for this lack of supporting arguments is that Bob doesn’t have any industry experience to draw on so he can’t provide any first hand accounts to support his opinion. It’s nothing but grand sweeping statements about an area he has zero familiarity with.
Yeah.
I think there's something like scaling laws for development teams. A number of things change, the cost of refactoring, the cost of communication, the need for communication, the plausibility of everyone knowing everything, etc.
Practices that are necessary for a large team can really hamstring a small team. When you're just a few people, you can cut corners larger organizations can not. If you really lean into that, you can kinda run circles around those larger organizations with just a few developers.
Technical debt is probably the biggest thing that changes with size. With a small team it's a tool you can use to get more done, a bit like a mortgage can let you do things you couldn't otherwise. Refactoring is cheap when you are few, so you can usually pay it off if it gets too bad.
Technical debt in a large project with many developers is very different, as large refactoring operations are prohibitively expensive, and you should go to great lengths to ensure it doesn't increase.
Yeah, the old adage applies that the people making money and the people writing books on how to make money are different people.
Also known as "those who can't do, teach".
Having heard that, I'd caveat that with "Sometimes, those who have already done, teach."
Which then boils down to the caution for the student: ascertain whether the advice a solid professor is giving you has changed since they last successfully did.
And also the observation that the number of students who are likely to be more successful ignoring education is pretty small: most people aren't that brilliant and will get better mileage out of learning and...
also known as those who can't do either (only), comment.
https://news.ycombinator.com/item?id=42353615
Your entire comment was "he used to be good at many things, but things I won't talk about have not been to my approval". You presented no facts, no thoughts, just said "others say". You are disinformation, because I'm less informed after reading your comment than I was before
Has Robert C. Martin ever started a successful product company?
Nope
So you shouldn't take this advice
https://en.wikipedia.org/wiki/Robert_C._Martin
I'm puzzled why this comment is getting downvoted. Does Martin have any data or statistics to back up his claim? No. Is his assessment based on personal experience in building a startup? Also no. So what's left? This blog post constitutes an uninformed opinion, and uninformed opinions are worthless.
If you have a zero revenue business you generally shouldn't write tests. Just add some health checks for backend services and use strongly typed code for everything.
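For what it's worth, the health check being suggested there can be tiny. A sketch using only the Python standard library; the `/healthz` path and the handler details are arbitrary choices, and a real service would also probe its dependencies (database, queues) before answering 200:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answers GET /healthz with a small JSON body; everything else is 404."""

    def do_GET(self):
        if self.path == "/healthz":
            body = b'{"status": "ok"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging; a real service would log properly.
        pass

def make_server(port: int = 0) -> HTTPServer:
    # Port 0 lets the OS pick a free port, which is handy for local testing.
    return HTTPServer(("127.0.0.1", port), HealthHandler)

# To actually serve: make_server(8080).serve_forever()
```

An uptime monitor or load balancer polling `/healthz` then gives you most of the "is it up?" signal, without maintaining a test suite.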
> If you have a zero revenue business you generally shouldn't write tests.
Can you think of an industry where this might not be true?
Sure there may be some exceptions. That's why I said "generally".
Naw.
Hopefully you test your code regardless of revenue. It's just a question of whether your tests are manual commands and clicks or whether they're automated.