When looking for advice, IMO the first question you need to ask is "what has this person actually built?"
Personally, I find that the people whose advice I love to read are guys like John Carmack or Jonathan Blow or Casey Muratori or Tim Sweeney. Those are guys who have solved, and continue to solve, difficult problems, and who have tons of hard-won practical advice. When I look at someone like Uncle Bob or Ron Jeffries, I see people who struggle to write sudoku solvers and are mostly famous for giving advice, not for building great software.
In my experience, the greatest teachers rarely have the official title "teacher". I learned more from one programmer in 5 months working with him than I did in my entire computer science degree. (And no, I'm not saying professors or degrees are useless, just that I don't think titles and reputation necessarily tell you much about how much a person will teach you.)
And honestly, reading "Uncle Bob's" advice, I find a lot of it is outright bad (here's a good breakdown: https://qntm.org/clean ), or specific to Java's quirks, or has no actual backing other than that Bob thinks it's a good idea.
It is great you had a mentor. I've met few people I could learn from, and books (including Bob's) were very helpful. He promoted TDD, good names, short functions, etc. Is he too dogmatic? Are his functions too short? Probably yes.
I never treat his proposals as a silver bullet, but as a great source of inspiration. I wish there were more people teaching those things.
I like the concept of Talkers vs Doers, which I learnt from Nassim Taleb. Whenever reading advice on the internet, one should check the author and ask themselves "what has this person actually *done* that lends credibility to their words?".
I don't entirely disagree, but I have to keep in mind that this is an ad hominem argument. It's also an argument that would be used to eliminate or discredit academia and the vast majority of those in public service - both of which I think have their place to an extent.
Well, I'm not saying you should discredit advice from people who haven't made great works (i.e., most of us), just that it's a question you need to ask to put their commentary in perspective.
Especially when someone like Uncle Bob says: "The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. This is not an assertion that I can justify. I can’t provide any references to research that shows that very small functions are better. What I can tell you is that for nearly four decades I have written functions of all different sizes."
Well, alright, in that context he's asking us to just take his word for it, but there are no tangible arguments here. Whereas Casey Muratori has a much more thoughtful exploration of this topic ( https://caseymuratori.com/blog_0015 ), and he's also written some excellent code (in terms of solving difficult problems and doing useful things).
Ad hominem doesn't necessarily mean fallacious or invalid. Bob's whole brand is being a self-styled authority on what constitutes clean, high quality code. So yes, his achievements and personal credibility are important here.
Yes, some Talkers have great ideas, and some Doers offer terrible advice. I think a more nuanced mental framework that avoids ad-hominem is "opinions" vs "experiences".
Some Talkers peddle opinions because they haven't got any experience. Most Doers peddle the experiences they had. Why does this matter? Anything can be opinion if it isn't backed by data. Many opinions that look great on paper are terrible in practice. If someone is offering you an experience (or anecdote), at least they have a single data point to back it up. A surprising amount of influence is exerted on the internet based on "one guy's opinion".
The better Talkers aggregate experiences of others to back up their opinions. This tends to provide better evidence than a single anecdote, and can be done even if you have no hands on experience with the subject.
So to your point a better way to frame it might be "what data/evidence is this person giving to support their argument?"
Developers who did not seek permission to write code their way. We used their software and in some cases can see the code. Their output is attributable to them rather than faceless teams.
Because I'm interested in video game development, and it's a good litmus test since the solutions have to be high performance and developed very quickly. I could also mention people like Peter Norvig or Alan Kay though, I just listed the ones that initially came to mind
But these guys develop their programs in isolation and they deploy to users only the final result or a limited number of beta versions. They almost never evolve their code bases to a second version. They work on video games.
If I work in isolation from the users, don't have external requirements, don't care about future versions of the software - sure, their advice might be useful to me.
> They almost never evolve their code bases to a second version.
This isn't necessarily true, a lot of code can and is reused between games (math, physics, audio, etc.)
In terms of isolation from the users, that isn't really true either -- the users are the rest of your team. You have to build tools for the team to use and they better be at least somewhat usable, and you need to have something workable quick so you're not blocking your artists and level designers, etc.
If somebody explains a concept eloquently and uses good logic, you should consider what they have to say, even if they're a dog or a cat and not a famous programmer. You shouldn't ignore arguments or explanations from people without glorious careers.
To use a games example, the game programming patterns gentleman is most famous for his book, not for his game engines - yet the book is extraordinary and anyone would be poorer for skipping it
Uncle Bob has clearly articulated a number of ideas I've found useful over the years, and so what if he also said some things we don't agree with? He's human.
That said, the opinions of accomplished people like Carmack and Blow are definitely worth listening to. They are eloquent and have very interesting ideas.
The answer to my question doesn't have to be famous or amazing, just a reasonable demonstration of individual capability. Would you want advice on, say, woodworking from someone who can't show you a well-crafted chair or table they made on their own? A lot of the refactorings he does in his book IMO are not good, and a lot of the SOLID principles don't really apply in languages outside of Java, and even there their effectiveness is a matter of opinion and taste more so than provable value. I don't really hate on Bob, I just would encourage new coders to carefully consider what's been accomplished by the people who follow a certain paradigm. The majority of the most successful projects in history have had nothing to do with TDD or SOLID. So, take what's useful from those methodologies certainly, but I'm more interested in how the undeniably great people and projects work as compared to Bob's opinions.
Clean Code was published in 2008, two years after jQuery. The world was different then. And there is a large audience of laggard adopters who still live in that world.
I do not agree. Take an example from football: Maradona was one of the greatest players, but he was unable to teach, or even to articulate, how he did what he did.
Having a talent for solving problems does not mean you can teach it. Many times the opposite is true: people who do not have great talent, but do have a good capacity for self-reflection, observation, and analysis, can extract the insights and general rules of a given discipline.
That capacity, plus good experience in the field, is the best source of advice, and to my knowledge Uncle Bob has both.
I may be completely missing the point of the article, but my pet peeve is that operator== is even defined for floats in most programming languages. It really, really shouldn't be.
Instead it should produce an error: "Floating point values should be compared using this library function, which takes an acceptable difference. If you want exact math, use BigDecimal or similar. If you know what you're doing, use the library function with acceptable difference = 0.0."
And yes, Uncle Bob is giving some terrible advice. My least favorite is his advice to split "too long" pure functions into a stateful class with "short enough" methods that later can be called in the wrong order because now there's a right and wrong way to call them.
Oh, but one generally wants an "acceptable difference" only when the numbers compared are close enough to zero, otherwise the "acceptable relative difference" should be used instead, right?
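Exactly that combination — an absolute tolerance near zero, a relative tolerance everywhere else — can be sketched in a few lines. This is a Python illustration (the function name and tolerance defaults are made up, though the idea is essentially what `math.isclose` implements):

```python
def nearly_equal(a, b, rel_tol=1e-9, abs_tol=1e-12):
    """Compare floats with a relative tolerance, falling back to an
    absolute tolerance for values close to zero."""
    # The relative term scales with the magnitudes being compared;
    # the absolute term takes over near zero, where a relative
    # tolerance would be uselessly tight.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# 0.1 + 0.2 is not exactly 0.3 in binary floating point...
print((0.1 + 0.2) == 0.3)            # False
# ...but it is well within a relative tolerance of it.
print(nearly_equal(0.1 + 0.2, 0.3))  # True
```

Setting `abs_tol=0.0` recovers a purely relative comparison, which is usually what you want away from zero.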
I'm thankful I'm not the only one with this critique. I see this so often and it makes navigating and/or debugging the code a nightmare, while I hop around these tiny single use methods. A long function I can read top to bottom is a lot easier to grok!
Examples of that were common in 2000s poor Object Oriented Programming introductory material, do people still advise that kind of design?
There were people who abhorred the sight of non-stateful code, since there was little opportunity to fiddle with objects in it, not realizing that they had just made their own and everyone else's jobs harder with more moving parts. That concept, despite being very easy to understand and having analogues in physical machines, was lost on many programmers.
It is a constant discussion. You can always find examples of long functions with lots of state which is hard to follow and then there are always examples of code where people went too short. And then there are always people taking any of those examples and showing why they were wrong.
I myself like going to extremes in fun/toy/throwaway projects (not only on function length but also with rules like "no `if`" or "no raw `for` loop", etc.) to try out all the different alternatives, and then resetting my defaults based on the cases where it went well and where it went badly.
No such rule is always right, but most stem from being exposed to too much of one extreme.
The correct way to do this, if it really is a significant simplification, is to factor out the minimal relevant code into a private helper class (or struct) that you instantiate anew for each computation. It then functions like a stack frame you can pass around and share between functions/methods, within private boundaries.
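A hedged Python sketch of that idea (all names are invented for illustration): the private helper holds the shared intermediate state, is created fresh for each computation, and never escapes the public entry point:

```python
class ReportBuilder:
    """Public API: a single, effectively stateless entry point."""

    def render(self, records):
        # A fresh helper per call: its fields act like an explicit,
        # shareable stack frame, so no state leaks between calls.
        return _Render(records).run()


class _Render:
    """Private helper; instantiated anew for each computation."""

    def __init__(self, records):
        self.records = records
        self.lines = []

    def run(self):
        # The small steps can freely share self.records / self.lines
        # without threading them through every signature.
        self._header()
        self._body()
        return "\n".join(self.lines)

    def _header(self):
        self.lines.append(f"{len(self.records)} record(s)")

    def _body(self):
        self.lines.extend(str(r) for r in self.records)
```

Because the helper is private and short-lived, the ordering constraints between its small methods stay an internal detail rather than a trap for other callers.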
I had a gig where a consultant had just been through (before I started), and totally changed the programming culture of the engineering teams. One of the weird things that happened was my new team no longer tolerated methods > 10 lines, which created exactly what you're talking about: a bunch of tiny single-use methods and helpers scattered across other classes/files/modules/libraries that totally obscured control flow.
It's amazing to me how even smart people can be taken in by Harold Hill types. Can I get a job going around giving advice for $100k a pop? I promise to do better than that guy haha.
In a team I was part of, we used the default in a popular Ruby linter, Reek. And it wasn't 10, it was 5.
The team treated that as puzzle-solving, and it was part of their culture to tackle those kinds of challenges. There was light hazing of newbies whenever the CI didn't pass because methods were too big. So, that's why at least in this case. There was no technical argument, just inertia.
Code quality was atrocious. Things that could be 10 lines were 30 or 40, and extremely stateful.
On that topic: I came to like John Ousterhout's notion (in [1]) of "deep" methods/classes/etc. He explains it like this: If you have a reusable piece of code, imagine drawing it as a rectangle. The width corresponds to the complexity of the interface, the depth corresponds to the complexity of the implementation.
Ousterhout is saying that we really want deep classes, i.e. the complexity of the implementation a construct hides should be considerably larger than the complexity it takes to use it. Which, at least to me, basically says "search for good abstractions". And what you're describing strikes me as a good way not to achieve that.
Of course, there's considerable wiggle room in that mental model, and one could still endlessly squabble about how (not) to break up the implementation.
I think the fallacy of the "Poltergeist Pattern" is to optimize solely on the - in itself valuable - metric of short simple methods, without considering anything else, like the complexity of using or reading the resulting code.
It's one of his solutions for refactoring long methods that operate on more than one variable, so that simple "extract method" refactoring isn't feasible.
He says to turn the function into a class and local variables into private fields. So that you can split the function into small methods that operate on these private fields. I used it a few times in a company that had strict "no Sonar warnings" policy. But I hate the resulting code much more than the initial code.
The jack-in-the-box (or hellraiser's box) style of OOP. You call a succession of 0 or 1 argument methods that change the object state until suddenly something happens.
Floats can be compared for equality, they're a pattern of bits in memory after all. operator== can be defined for floats.
Using it on the results of floating point arithmetic may be a bad idea, but that's a different matter. That should perhaps be a compiler or linter warning, rather than flat out be declared incorrect.
I'm not sure how subnormal numbers are relevant. For binary floats, the only duplicate values are (+0, -0) and the NaNs, and in both of those cases it's really up to you to decide if those are really representing the same value or not.
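For illustration, here are exactly those two cases where bitwise identity and `==` disagree, sketched in Python (whose `float` is an IEEE 754 double):

```python
import struct

def bits(x):
    # Raw IEEE 754 bit pattern of a double, as 8 bytes.
    return struct.pack("<d", x)

# +0.0 and -0.0: equal under ==, but different bit patterns
# (the sign bit differs).
print(0.0 == -0.0)              # True
print(bits(0.0) == bits(-0.0))  # False

# NaN: identical bit patterns, but never equal under ==.
nan = float("nan")
print(nan == nan)               # False
print(bits(nan) == bits(nan))   # True
```

So a purely "compare the bytes" equality and IEEE `==` are genuinely different relations, and which one you want depends on whether you mean "same value" or "same representation".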
I'm arguing against the notion that floats are just bit patterns in memory. If you base your operator== on such a notion, you will get very weird behavior.
A lot of Uncle Bob's advice comes from a time and a place where it actually had merit. There is a sort of enterprisey Java code that was very common in the '90s. OOP and Gang of Four were incredibly trendy, but few people were actually all that good at it. So now your code had an AbstractFactoryFacadeDelegatorImplBuilderVisitorVisitor. Methods were often extremely long, hundreds of lines. The code was complex in a lot of ways, lots of static instances being accessed from every which way.
In this context, Uncle Bob's style was compelling, it felt like a breath of fresh air. His opinionated attitude was also compelling. In general, associating a term like 'clean' to your methodology seems to be a good zealot recruitment strategy. 'Pure functions' is similarly brilliant marketing.
That said, a lot of his supposed solutions actually cause problems of their own if you let the pendulum swing too far. It creates its own kind of complexity. Books like these also tend to sort of get their own momentum, if you look back you see a lot of big names praising them. Surely they must be good, right? Well they were. Clean Code made sense when it was published. It's for the most part not good advice today.
I haven't seen anything much better than SOLID from Clean *Architecture in terms of organizing large projects into lots of small discrete interfaces. I guess globally declared pure functions are being pushed a lot today, but I find this approach organizationally wanting, particularly in promoting tight coupling (with all its drawbacks).
*Edit: Updated: I had forgotten this was not from "Clean Code".
I meant "globally declared" in contrast to instance-oriented (i.e. methods) rather than whether or not a function exists within a namespace, which is what I think you're implying.
I'm curious why you think clean code is for the most part not good advice today. I read the book recently. What is different today compared to in 2008?
You can easily end up with another sort of complexity, where the code is difficult to reason about because it is so decoupled and the methods are so short (but the call stack very long) and modules so small (but many). You can end up with unnecessary code that's difficult to spot because the logic is so spread out that the big picture is clouded.
Because lots of small functions increase the system complexity by creating interdependencies. You also don't have the context in each function to understand the big picture, and very small functions can be meaningless on their own. You have to jump through many functions just to figure out what is being done.
Code is a non-linear medium; depending on the path one takes through the graph and the scale one operates at, "clean" might mean a very different thing. Take a long look at the history of ideas: it might be obvious looking back that things should be the way they are, but no one just knows them as if they were a prophet. Even if there are signs of something being ugly or a hack, that does not give any guidance as to what direction to take next.
Ironically, on mobile at least, applying Uncle Bob's ideals via the VIPER framework creates a proliferation of classes, interfaces, and functions that are often as byzantine, and names as unwieldy, as stereotypical J2EE spaghetti code.
I often wonder what would currently be a way to work in a more enterprisey environment and actually make stuff a little bit more fun and not all "abstract factories" etc. I'm talking TypeScript/C#/.NET Core type of stuff, maybe with Vue. What's the "new Clean Code"? Might just learn Go next or something.
I don't think it's sensible to liken Pure Functions with "clean code". The former, while no panacea, is strictly defined in a way that the latter could never hope to be.
Integer-valued floats can be compared for equality perfectly well.
Uncle Bob is a fraud by the way. Turning a perfectly readable medium length function into a class with dozens of tiny methods is a cardinal sin in my eyes. Then later someone stumbles upon this class and reuses some of these methods in one way or another, while working on a problem completely unrelated to the original, thus entangling the two implementations…
Interesting. In the martial arts world there were lots of styles that claimed to be "the best", until the UFC came along and suddenly people discovered that if you don't know Brazilian jiu-jitsu, you will most likely lose to someone who does.
This analogy doesn't work because programming in "industry" has been like MMA since day 1, in the sense that you have always had to "test your skills" and make something that people wanted and compete with other products. In the martial arts world, a bunch of different martial arts just completely went without full-contact sparring/competition and instead built up a bunch of different rules and scenarios around how they were "too deadly to be done in practice". This is the bullshit that MMA exposed, and it's interesting to note that the two practices in the comment you're responding to, judo and karate, have had a long history of being practiced "for real" in the gym and in competition, and have thus spawned a long line of highly successful MMA competitors.
The analogy also doesn't work because BJJ isn't some silver bullet. What people discovered is that the first M in MMA is actually the important part and if all you know is BJJ you're going to get starched by a boxer with a sprawl, or more likely a wrestler with a modicum of submission knowledge, who will never let you get to the floor in the first place, and instead just grind you out.
So just like the question "what is the best martial art" currently has no answer outside of "you need a mix of striking and grappling, not just one thing", there is no answer to "what is the best programming style" outside of "think about the problem you have at hand and crib on examples and knowledge from other people who have solved a similar problem". This "unfortunately" points to boring industry-standard tools, like Java, C/C++, JavaScript, RDBMSs, IDEs, Linux, etc. Probably some newer stuff like Rust and React as well. And note that the answer isn't one specific technology; like MMA, it's a bag of different tools you combine.
Memory-based == is not unreasonable for floats, once in the exact-representation domain. Of course, if operating in the approximated domain, especially involving any of the existing math libraries, float comparison should be done with the corresponding functions/macros and be epsilon-based.
Off: floating point numbers can be used to store integer values, so equality comparison might be perfectly valid in some cases. For example, if the embedded SQL doesn't support 'int64_t' (only int32_t), it might still support 'double', which can store integers up to 2^53 exactly.
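A quick Python demonstration of where that exactness ends (Python's `float` is an IEEE 754 double, with a 53-bit significand):

```python
# Every integer with magnitude up to 2**53 has an exact double
# representation, so == is perfectly safe on them.
assert float(2**53) == 2**53
assert float(2**53 - 1) == 2**53 - 1

# Beyond 2**53 the gap between adjacent representable doubles
# exceeds 1, so distinct integers can collapse to the same float.
print(float(2**53 + 1) == float(2**53))  # True: 2**53 + 1 rounds down
```

So the "store integers in a double" trick is sound, but only as long as you can bound the magnitudes involved below 2^53.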
Yes, also there are deep magic gamedev tricks exploiting that (see quake sqrt :) ). There should be a way to do them. Something like bit_compare(x, y). I'd just prefer if rarely useful operations shouldn't use the most commonly used API. Too easy to make a mistake.
BTW I'd also love to have a built in float type that fails when you assign 0 to it.
Anyway, I guess in Java operator == is a lost cause anyway. My favorite example:
> Off: floating point numbers can be used to store integer values, so equality comparison might be perfectly valid in some cases.
Yeah, but then you're having to learn all the special cases for when it silently gives wrong answers, and hope to hell that you didn't miss any.
Much better to have consistency and behave the same way all the time, than to optimise for 3 keystrokes and introduce all sorts of special exceptions that the programmer must memorise.
The 'clean' pattern to me is a complex data structure that is operated on atomically by relatively simple functions, but not so simple that you need to keep a 6 level call stack in your head to follow what is going on. Function decomposition is fine as long as - like everything else, really - you don't overdo it.
> my pet peeve is that operator== is even defined for floats
Then people would start writing `signum(a - b) == 0` (or some equivalent) instead of `a == b`, and/or factor that out into a helper function. Not sure that would be an improvement.
> my pet peeve is that operator== is even defined for floats
I fixed two bugs at once in a softsynth caused by this stupidity. In the envelope generator code, which loosely modelled the ADSR circuit in a synth which charges and discharges a capacitor, there was an integrator implemented like `env = ((target - env) * timeconstant) + env`. The `target` value was set to 1 to allow the "capacitor" to "charge" at a rate set by `timeconstant`, and when it reached the top it would be flipped to 0 and `timeconstant` changed to the decay rate.
It failed about one go in ten, with some fudges to set `timeconstant` in odd values. It turns out that they detected the "capacitor fully charged" state with something like `if (env == 0.999) { dostuff(); }` and for some values of `env` and `timeconstant` it might never hit *exactly* 0.999 if you got it exactly wrong.
It was logically wrong (detecting a precise float value) and functionally wrong (envelopes switch over when the cap is about 2/3 charged). Changing the "flip over" condition to something more like `if (env > 0.999)` and changing the maximum value of `target` to 1.5 (mimicking a circuit running off 15V and trying to hit a 10V output, like a real analogue synth) cured all its problems.
Well, all its *envelope* problems. There were more, but they get into scary discussions of poles and zeroes and negative frequencies, and you don't want that at this time of night.
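For the curious, here's a toy Python model of that integrator (the constants are invented for illustration, not taken from the synth in question) showing why the threshold check always terminates while an exact-equality check may be skipped right over:

```python
def charge_steps(target, timeconstant, threshold, max_steps=10_000):
    """Count steps until the simulated 'capacitor' crosses threshold."""
    env = 0.0
    for step in range(max_steps):
        # One-pole integrator: env asymptotically approaches target.
        env = (target - env) * timeconstant + env
        # The buggy version tested `env == threshold`, which a float
        # sequence can jump over without ever landing on exactly;
        # `>=` fires as soon as the value crosses the line.
        if env >= threshold:
            return step
    return None  # never crossed: target was below the threshold

# Aiming the integrator well above the switch point (target 1.5,
# threshold 0.999) mimics the analogue-style fix: the crossing is
# fast and guaranteed.
print(charge_steps(1.5, 0.01, 0.999))
```

With `target` barely above `threshold` the crossing still happens, just slowly; with exact equality as the test, termination was down to floating point luck.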
Uncle Bob has great advice that a large swath of the industry can benefit from. Keep in mind that a large portion of the industry, especially in corporate America, writes really bad code and his advice helps these people to put guardrails around that. The 10% bad advice I would attribute to the 10% of instances when "the rules" should be broken. But the key to breaking the rules is having an understanding of why the rules are there and as such why they should be broken.
His dogmatic approach is partially, from what I've seen, a counterbalance to the sloppiness that a large part of the industry, especially corporate America, has with software development. Can you take it too far and create beautiful software that does not solve any problems? Sure, but on the flip side it is also easy to create a lot of technical debt.
With things like TDD, when devs are new to it, part of the learning process is to take it too far first and then as you understand it you learn the right balance.
TDD has been around for two decades now. One would expect there to be some high profile success story. Yet none of the important software projects we use on a day to day basis seems to utilize it. Not even one.
I'm not sure what tech stacks you work on, but I've worked with lots of tech stacks and high profile projects that use it and also saw the quality that resulted from the practice.
The whole idea that something has to have a high profile success story available to the public for it to be an actual success doesn't really hold. Lots of people are far more busy with their work than they are with blogging and/or publicity. There are plenty of examples in my practice of quiet successes, some using TDD, some without and for those companies it may well have been a factor.
Depending on the industry, reliability and ability to ship software that functions as advertised from day #1 can be a huge advantage, and for others it can be a wash or even a competitive disadvantage if it consumes too many resources. But I would not dismiss any such tool out of hand just because you personally have not seen or heard about the success stories.
Oh, absolutely. It's the lack of quiet successes that is notable here. Like Linux, or apt, or nginx, or Python, or at least a popular Python module. Or anything, really.
After two decades of consulting in the software industry, you belong to a certain professional network. People love to talk, even if they won't always name their customers. I know people using formal methods and verification, in niches of the industry really, but have no indication that TDD is utilized for any big flagship products where it really matters.
I'm sure there are a few highly blogged about industry stories somewhere, but there's a world of a difference between those and a long time success in software development as a whole. There were high profile books written about Rational Unified Process, for example, and ~ nobody uses that anymore.
That's a common misunderstanding, but it's not true. TDD involves writing the tests at the same time as the code in tiny little steps. Write a few lines of test code that fail --> write a few lines of production code to make it pass --> refactor if needed to clean things up --> repeat.
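A toy illustration of one such micro-cycle (Python; `slugify` is a hypothetical function invented for this example):

```python
# Step 1 (red): write a small failing test first. At this point
# slugify doesn't exist yet, so running the test fails.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough production code to make it pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3 (refactor): clean up while keeping the test green, then
# repeat the loop for the next small behaviour.
test_slugify()
print("test passed")
```

The point of the discipline is the tiny grain size of each loop, not merely the existence of tests at the end.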
Writing tests and doing TDD aren't the same thing. The commenter is talking about TDD in particular I think, not saying "unit testing has no success stories".
Linux and OpenSSL are two great examples of projects that probably should, but don't bother to, do much testing at all. Even if they are both steadily getting better at it.
They obviously produce useful results and most would love to see more of it, yet it is tacked on afterwards and often by completely different people than the original developers, not as part of the development process.
They have a long way to go to have good enough tests that could be relied on for integration purposes. TDD isn't even on the map. And will probably never be.
I've sort of half-suspected that TDD doesn't actually exist. I've never met anyone who does it. I've seen people talk it up a lot, and programming shops sometimes claim to be all about TDD, but when you look at how they work it's not TDD at all, but some token unit tests after the development is finished.
I have met one. His excuse for pushing broken code to trunk was "but it passes the unit tests". The same person also managed to write unit tests with >90% line coverage that just didn't check any results.
It's been twenty years. People would have moved on to new jobs, and spread the word with them. Some would have applied for jobs there. After all, non-googlers know a thing or two about their infrastructure, despite being mostly a trade secret.
If nothing else, apparently there are enough of disciples out there that it would be a great way of hiring the right people.
You'd have to believe there is some sort of cabal, completely bent on keeping this enormous advantage to themselves in absolute secrecy, to argue that it is a software development success story.
Uncle Bob gives some very bad advice. For example, he advocates passing temporary state between methods implicitly in instance variables instead of explicitly (and nonshared) via arguments and return values, increasing the statefulness of the code and creating complex ordering constraints and preconditions for method invocations. E.g. https://softwareengineering.stackexchange.com/questions/2887...
One can’t recommend his books with good conscience to anyone who doesn’t already have a very good judgment regarding software design and can pick out the parts that actually make sense.
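A contrived Python sketch of that hazard (the class and names are invented): once locals are promoted to instance fields, the tiny methods acquire hidden ordering constraints that nothing in the API enforces:

```python
# Implicit-state style: locals promoted to fields so methods stay tiny.
class PriceFormatter:
    def format(self, amount):
        self.amount = amount
        self._round()              # must run before _to_string(),
        return self._to_string()   # but only this method knows that

    def _round(self):
        self.amount = round(self.amount, 2)

    def _to_string(self):
        # Calling this before format() raises AttributeError, and
        # calling it before _round() silently skips the rounding.
        return f"${self.amount}"


# Explicit alternative: state flows through arguments and return
# values, so there is no wrong order in which to call things.
def format_price(amount):
    return f"${round(amount, 2)}"
```

Both produce the same string for a well-behaved caller; the difference is how many ways the first one can be misused once someone else starts reaching for those private methods.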
Any time someone claims "this is the only solution" for as wide a swath of problems as Uncle Bob does, I always find them suspect. When doing anything that is not rote work (aka super basic CRUD apps where you're just slapping obvious chains of events together, and even these are INCREDIBLY rare if they even truly exist in the wild), there are always exceptions, and yet here we are.
It is funny how much of these types of works/teachings exist across a lot of domains. Writing fiction you'll see similar people claiming "this is the ONLY way to write a novel," which is patently stupid, but they'll claim it all the same, using weird logic to cram all kinds of great works into their model even when they don't really fit.
I always wonder - why do people bother with Uncle Bob?
The guy has some opinions, but lacks any notable experience and knowledge.
The only known project he contributed to (and constantly mentions in his videos) is FitNesse (https://github.com/unclebob/fitnesse), but 59 commits doesn't make him look like an expert.
> That’s what makes Bob’s advice so toxic. By being so dismissive of everything but unit tests, he’s actively discouraging us from using our whole range of techniques.
I worked with programmers who worshipped at Uncle Bob's feet and never seemed to notice they never produced functional systems by any stretch of the definition.
They then adopted Bob's dismissive attitude about everything, you see, because they already had the answer to all the problems: just smurf -- I mean unit test -- it!
Yeah. It's a very good summary of why Uncle Bob is self-help for programmers.
> Uncle Bob is okay with software correctness: after all, he uses the phrase “unit testing” like a smurf uses “smurf”. But what about every other correctness technique? (but for Uncle Bob...) any correctness technique that isn’t unit tests can be dismissed.
I find it very hard to code in languages with no type checking. Looking at all other domains, the tools make people better (medicine, construction, etc.). I can't code in vi, sorry, nor can I write a unit test without a good editor :)
If I had enough time, I'd write perfect software, but mid-way through writing that perfect software my company needs the following 3 features or they go under.
Sure if I was working at Google I could spend the time making perfect stuff and after 3 years finally get that customer engagement up by 0.1% but working in startups we're often not given the luxury of time, or sometimes experimentation. We make decisions based on our current skills/knowledge/time constraints, and then we have to live with those decisions. I have to ramp up engineers with 1-5 years of experience who don't know all the patterns and sometimes are even learning the language we're working in, and they make mistakes.
Point is... from my experience... Good software is written when strong engineers at the top know how to train weaker engineers on the bottom, and together work towards a common vision. Struggle together, win together, fix the warts.
At my company, we have an unofficial motto for our learning reviews:
> Plan for a world where we are just as stupid tomorrow as we are today.
If an outage or issue was caused by a mistake, the solution can never be “don’t make that mistake again.” People don’t choose to make mistakes, so they can’t choose to not make them, either.
At the same time, the industry is chugging along fine despite his bad advice. Despite his attitude against type checking we're gladly moving from JavaScript to TypeScript, and despite his attitude against better tooling we're gradually dropping C++ for Rust. Engineering projects that needed ironclad correctness have proceeded to use TLA+ to great success.
Uncle Bob has influenced me during my career, but I don't preach his gospel, nor do I know any other senior developer who does. Most people just pick up the bits and pieces they like and forget about the rest.
Maybe you'll get some juniors who get overly zealous, but I think there are worse things a junior could be doing than following Bob a little too strictly.
Ruby has long been my favorite programming language, and I used to believe that the static typing of languages like Java was the main reason they were so much less effective. After having worked with TypeScript for a couple of years and seeing the magic the Rust community is pulling off, I now believe we'd benefit from static types in Ruby, as do an increasing number of Rubyists.
The true danger is getting stuck in your ideas, and ignoring the wisdom that's being developed around you.
I think the answer to the software apocalypse is both:
- get better tools
- get better programmers
And in fact we have been doing both - the tools are so much better than they used to be, and with TDD, refactoring, patterns, and clean code we have been building increasingly complex software with fewer bugs.
I think Bob’s message here is right, but the phrasing is scaring folks off. Maybe “lack of discipline” comes off as pejorative? Perhaps “lack of emphasis on quality” would be more popular?
> The author of the article interviewed many thought leaders in the industry, but chose only those thought leaders who were inventing new technologies. Those technologies were things like Light Table, Model Driven Engineering, and TLA+.
> I have nothing against tools like this. I’ve even contributed money to the Light Table project. I think that good software tools make it easier to write good software. However, tools are not the answer to the “Apocalypse”.
> Nowhere in the article did the author examine the possibility that programmers are generally undisciplined. The author did not interview software experts like Kent Beck, or Ward Cunningham, or Martin Fowler. The author completely avoided the notion that the software apocalypse might be avoided if programmers just did a better job. Instead, the author seemed convinced that the solution to the software apocalypse – the solution to bad code – is more code.
> I disagree. Tools are fine; but the solution to the software apocalypse is not more tools. The solution is better programming discipline.
If you step back for a second, this is basically the main gripe you see repeated on HN; business/PM is always in a hurry, the pressure is always to cut corners, nobody has time to build things right/properly.
His point is that in this environment (most software environments) just sprinkling some TLA+ in there is not going to solve the problem. If your PM is always rushing you, can you imagine them letting you pause on delivering features to prove your system is correct? Most shops do not care enough to justify this expense.
I think by phrasing it as discipline Bob makes it sound like it’s the fault of individuals, where in fact I think a lot of people would like to spend more time on quality but just don’t have org buy-in. (But there are sloppy/undisciplined individuals too.)
I bet quickcheck-style testing or a fuzzing framework would find those bugs.
I tried using "fast-check" for some parsing code in JavaScript that needs to handle floats and it was pretty good at reminding me of the corner cases. Or at least some of them.
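For readers unfamiliar with the technique, here is a hand-rolled sketch of quickcheck-style testing in plain Python. The parser, the generator, and the round-trip property are all hypothetical stand-ins for illustration, not fast-check's (or any library's) actual API:

```python
import random
import struct

def parse_float(s: str) -> float:
    """Toy 'parser' under test: round-trips via Python's float()."""
    return float(s)

def random_float() -> float:
    """Generate floats that cover corner cases, not just 'nice' values."""
    corner_cases = [0.0, -0.0, 1e-308, 1e308, float("inf"), -float("inf")]
    if random.random() < 0.3:
        return random.choice(corner_cases)
    # Random 64-bit pattern: also hits subnormals and huge exponents.
    bits = random.getrandbits(64)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def check_roundtrip(trials: int = 1000) -> None:
    """Property: printing then parsing a float yields the same value."""
    for _ in range(trials):
        x = random_float()
        if x != x:  # skip NaN: NaN != NaN by definition
            continue
        y = parse_float(repr(x))
        assert y == x, f"round-trip failed for {x!r}: got {y!r}"

check_roundtrip()
```

The key idea is that the generator deliberately over-samples the corner cases (zeros, infinities, subnormals) that hand-written example tests tend to miss.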
When looking for advice, IMO the first question you need to ask is "what has this person actually built?"
Personally, I find that the people whose advice I love to read are guys like John Carmack or Jonathan Blow or Casey Muratori or Tim Sweeney. Those are guys who have solved or are continuing to solve difficult problems and have tons of hard-won practical advice. When I look at someone like Uncle Bob or Ron Jeffries, I see people who struggle to write sudoku solvers and are mostly famous for giving advice, not for building great software.
The world needs many great teachers, and many great builders.
I love taking advice from both groups, treating neither as a silver bullet.
In my experience, the greatest teachers rarely have the official title "teacher". I learned more from one programmer in 5 months working with him than I did in my entire computer science degree. (And no, I'm not saying professors or degrees are useless, just that I don't think titles and reputation necessarily tell you much about how much a person will teach you.)
And honestly, reading "Uncle Bob's" advice, I find a lot of it is outright bad (here's a good breakdown: https://qntm.org/clean ), or specific to Java's quirks, or has no actual backing other than that Bob thinks it's a good idea.
It is great you had a mentor. I met few people I could learn from, and books (including Bob's) were very helpful. He promoted TDD, good names, short functions, etc. Is he too dogmatic? Are his functions too short? Probably yes.
I never treat his proposals as a silver bullet, but as a great source of inspiration. I wish there were more people teaching those things.
I like the concept of Talkers vs Doers, which I learnt from Nassim Taleb. Whenever reading advice on the internet, one should check the author and ask themselves "what has this person actually *done* that gives credit to their words?".
> I like the concept of Talkers vs Doers, which I learnt from Nassim Taleb.
That's so funny.
Yes, it is… Nassim Taleb, the man who blocks practically everyone who disagrees with him.
I'm not too partial to Taleb, but I like some of his ideas, including this one. Also, I don't think he is the one who invented it.
I don't entirely disagree, but I have to keep in mind that this is an ad hominem argument. It's also an argument that would be used to eliminate or discredit academia and the vast majority of those in public service - both of which I think have their place to an extent.
Well, I'm not saying you should discredit advice from people who haven't made great works (IE, most of us), just that it's a question you need to ask to put their commentary in perspective.
Especially when someone like Uncle Bob says: "The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. This is not an assertion that I can justify. I can’t provide any references to research that shows that very small functions are better. What I can tell you is that for nearly four decades I have written functions of all different sizes."
Well, alright, in that context he's asking us to just take his word for it, but there are no tangible arguments here. Whereas Casey Muratori has a much more thoughtful exploration of this topic ( https://caseymuratori.com/blog_0015 ), and he's also written some very excellent code (in terms of solving difficult problems and doing useful things)
Ad hominem doesn't necessarily mean fallacious or invalid. Bob's whole brand is being a self-styled authority on what constitutes clean, high quality code. So yes, his achievements and personal credibility are important here.
Yes, some Talkers have great ideas, and some Doers offer terrible advice. I think a more nuanced mental framework that avoids ad-hominem is "opinions" vs "experiences".
Some Talkers peddle opinions because they haven't got any experience. Most Doers peddle the experiences they had. Why does this matter? Anything can be opinion if it isn't backed by data. Many opinions that look great on paper are terrible in practice. If someone is offering you an experience (or anecdote), at least they have a single data point to back it up. A surprising amount of influence is exerted on the internet based on "one guy's opinion".
The better Talkers aggregate experiences of others to back up their opinions. This tends to provide better evidence than a single anecdote, and can be done even if you have no hands on experience with the subject.
So to your point a better way to frame it might be "what data/evidence is this person giving to support their argument?"
It's obvious to builders. Being a huge fan of these characters is a huge red flag for me; I've asked the same question many times.
You’ve named four developers who work primarily (exclusively?) in video games.
Why is that, do you think?
Developers who did not seek permission to write code their way. We used their software and in some cases can see the code. Their output is attributable to them rather than faceless teams.
Because I'm interested in video game development, and it's a good litmus test since the solutions have to be high performance and developed very quickly. I could also mention people like Peter Norvig or Alan Kay though, I just listed the ones that initially came to mind
But these guys develop their programs in isolation and they deploy to users only the final result or a limited number of beta versions. They almost never evolve their code bases to a second version. They work on video games.
If I work in isolation from the users, don't have external requirements, don't care about future versions of the software - sure, their advice might be useful to me.
> They almost never evolve their code bases to a second version.
This isn't necessarily true, a lot of code can and is reused between games (math, physics, audio, etc.)
In terms of isolation from the users, that isn't really true either -- the users are the rest of your team. You have to build tools for the team to use and they better be at least somewhat usable, and you need to have something workable quick so you're not blocking your artists and level designers, etc.
If somebody explains a concept eloquently and uses good logic, you should consider what they have to say, even if they're a dog or a cat and not a famous programmer. You shouldn't ignore arguments or explanations from people without glorious careers.
To use a games example, the Game Programming Patterns gentleman is most famous for his book, not for his game engines - yet the book is extraordinary, and anyone would be poorer for skipping it.
Uncle Bob has clearly articulated a number of ideas I've found useful over the years, and so what if he also said some things we don't agree with? He's human.
That said, the opinions of accomplished people like carmack and blow are definitely worth listening to. They are eloquent and have very interesting opinions
The answer to my question doesn't have to be famous or amazing, just a reasonable demonstration of individual capability. Would you want advice on, say, woodworking from someone who can't show you a well-crafted chair or table they made on their own? A lot of the refactorings he does in his book IMO are not good, and a lot of the SOLID principles don't really apply in languages outside of Java, and even there their effectiveness is a matter of opinion and taste more so than provable value. I don't really hate on Bob, I just would encourage new coders to carefully consider what's been accomplished by the people that follow a certain paradigm. The majority of the most successful projects in history have had nothing to do with TDD or SOLID. So, take what's useful from those methodologies certainly, but I'm more interested in how the undeniably great people and projects work as compared to Bob's opinions.
I wasn’t aware Sweeney spoke about software design. Any recommended articles?
clean code was published in 2008, two years after jquery. the world was different then. and there is a large audience of laggard adopters who still live in that world.
I do not agree. Take an example from football: Maradona was one of the greatest players, but he was unable to teach or to be conscious of how he did what he did.
Having a talent for solving problems does not mean you can teach it. Many times the opposite is true: people who do not have great talent, but do have a good capacity for self-reflection, observation, and analysis, can extract the insights and general rules of a given discipline.
That capacity plus good experience in the field is the best source of advice, and to my knowledge Uncle Bob has both.
This may completely miss the point of the article, but my pet peeve is that operator== is even defined for floats in most programming languages. It really, really shouldn't be.
Instead it should produce an error: "Floating-point values should be compared using this library function that takes an acceptable difference. If you want exact math, use BigDecimal or similar. If you know what you're doing, use the library function with acceptable difference = 0.0."
And yes, Uncle Bob is giving some terrible advice. My least favorite is his advice to split "too long" pure functions into a stateful class with "short enough" methods that later can be called in the wrong order because now there's a right and wrong way to call them.
Oh, but one generally wants an "acceptable difference" only when the numbers compared are close enough to zero, otherwise the "acceptable relative difference" should be used instead, right?
Yes, it's more complicated than that, but at least "straightforward" code with x==y won't compile so you'll have to think about it.
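A minimal sketch of such a comparison helper (in Python; the function name and default tolerances are made up for illustration), combining an absolute tolerance for values near zero with a relative one elsewhere, similar in spirit to Python's `math.isclose`:

```python
def nearly_equal(a: float, b: float,
                 rel_tol: float = 1e-9, abs_tol: float = 1e-12) -> bool:
    """True if a and b agree within a relative tolerance, or within an
    absolute tolerance when both are near zero (where relative error
    becomes meaningless)."""
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# Classic case where == fails but an approximate comparison succeeds:
assert 0.1 + 0.2 != 0.3
assert nearly_equal(0.1 + 0.2, 0.3)

# Setting both tolerances to zero recovers exact comparison,
# as the comment above suggests for people who know what they're doing:
assert not nearly_equal(1.0, 1.0 + 2**-50, rel_tol=0.0, abs_tol=0.0)
```

The `max(...)` form means whichever tolerance is looser wins, which is the same design choice `math.isclose` makes.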
> can later be called in the wrong order
I'm thankful I'm not the only one with this critique. I see this so often and it makes navigating and/or debugging the code a nightmare, while I hop around these tiny single use methods. A long function I can read top to bottom is a lot easier to grok!
Examples of that were common in 2000s poor Object Oriented Programming introductory material, do people still advise that kind of design?
There were people who abhorred the sight of non-stateful code because there was little opportunity to fiddle with objects in it, not realizing that they just made their own and everyone else's job harder with more moving parts, a concept that, despite being very easy to understand and having analogues to physical machines, was lost on many programmers.
> do people still advise that kind of design?
It is a constant discussion. You can always find examples of long functions with lots of state that are hard to follow, and then there are always examples of code where people went too short. And then there are always people taking any of those examples and showing why they were wrong.
I myself like going to the extremes for fun/toy/throwaway projects (not only on these but also with rules like "no `if`" or "no raw `for` loop" etc) to try out all the different alternatives and then resetting my default based on cases where it went well and where it went bad.
No such rule is always right, but most stem from being exposed to too much of one extreme.
The correct way to do this, if it really is a significant simplification, is to factor out the minimal relevant code into a private helper class (or struct) that you instantiate anew for each computation. It then functions like a stack frame you can pass around and share between functions/methods, within private boundaries.
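A minimal Python sketch of that shape (all names are hypothetical): the helper is instantiated anew for each computation, so no stale state survives between calls and the public entry point stays a plain function from the caller's perspective:

```python
class _TotalCalc:
    """Private helper: one instance per computation, acting like a
    shared stack frame that the small methods can all see."""
    def __init__(self, prices: list[float], tax_rate: float):
        self.prices = prices
        self.tax_rate = tax_rate

    def subtotal(self) -> float:
        return sum(self.prices)

    def tax(self) -> float:
        return self.subtotal() * self.tax_rate

def compute_total(prices: list[float], tax_rate: float) -> float:
    """Public entry point. The stateful helper never escapes this
    function, so its methods can't be called out of order later."""
    calc = _TotalCalc(prices, tax_rate)
    return calc.subtotal() + calc.tax()

assert compute_total([10.0, 20.0], 0.5) == 45.0
```

The point is the private boundary: callers only ever see `compute_total`, so the decomposition into small methods is an implementation detail rather than a new public API with ordering constraints.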
I used to call this the "Poltergeist Pattern".
Tons of tiny functions, each doing almost nothing, frantically calling each other, and somehow useful functionality results.
Each function is very easy to understand, but how they together do useful tasks is near incomprehensible.
I had a gig where a consultant had just been through (before I started), and totally changed the programming culture of the engineering teams. One of the weird things that happened was my new team no longer tolerated methods > 10 lines, which created exactly what you're talking about: a bunch of tiny single-use methods and helpers scattered across other classes/files/modules/libraries that totally obscured control flow.
It's amazing to me how even smart people can be taken in by Harold Hill types. Can I get a job going around giving advice for $100k a pop? I promise to do better than that guy haha.
That's funny, I wonder how nobody thought to challenge that.
In a team I was part of, we used the default in a popular Ruby linter, Reek. And it wasn't 10, it was 5.
The team treated that as puzzle-solving, and it was part of their culture to tackle those kinds of challenges. There was light hazing of newbies whenever the CI didn't pass because methods were too big. So, that's why at least in this case. There was no technical argument, just inertia.
Code quality was atrocious. Things that could be 10 lines were 30 or 40, and extremely stateful.
On that topic: I came to like John Ousterhout's notion (in [1]) of "deep" methods/classes/etc. He explains it like this: If you have a reusable piece of code, imagine drawing it as a rectangle. The width corresponds to the complexity of the interface, the depth corresponds to the complexity of the implementation.
Ousterhout is saying that we really want deep classes, i.e. the complexity of the implementation a construct hides should be considerably larger than the complexity it takes to use it. Which, at least to me, basically says "search for good abstractions". And what you're describing strikes me as a good way not to achieve that.
Of course, there's considerable wiggle room in that mental model, and one could still endlessly squabble about how (not) to break up the implementation.
[1]: https://www.amazon.com/Philosophy-Software-Design-John-Ouste...
Sounds about right!
I think the fallacy of the "Poltergeist Pattern" is optimizing solely on the - in itself valuable - metric of short, simple methods, without considering anything else, like the complexity of using or reading the resulting code.
Is that last paragraph actually a thing? Yikes! That sounds like the kind of advice one would give if they've only ever dealt with toy problems.
It's one of his solutions for refactoring long methods that operate on more than one variable, so that simple "extract method" refactoring isn't feasible.
He says to turn the function into a class and local variables into private fields. So that you can split the function into small methods that operate on these private fields. I used it a few times in a company that had strict "no Sonar warnings" policy. But I hate the resulting code much more than the initial code.
The jack-in-the-box (or hellraiser's box) style of OOP. You call a succession of 0 or 1 argument methods that change the object state until suddenly something happens.
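A small Python sketch (hypothetical names) of the hazard being described: a pure function turned into a stateful class whose tiny methods silently depend on being called in one specific order:

```python
# Before: one function, locals live on the stack, no ordering hazard.
def report_total(prices: list[float], tax_rate: float) -> float:
    subtotal = sum(prices)
    tax = subtotal * tax_rate
    return subtotal + tax

# After the "extract class" refactoring: locals become fields, and the
# short methods only work when called in exactly the right order.
class TotalReporter:
    def __init__(self, prices: list[float], tax_rate: float):
        self.prices = prices
        self.tax_rate = tax_rate
        self.subtotal = 0.0
        self.tax = 0.0

    def compute_subtotal(self) -> None:
        self.subtotal = sum(self.prices)

    def compute_tax(self) -> None:
        # Silently wrong (tax of 0.0) if compute_subtotal() hasn't run yet.
        self.tax = self.subtotal * self.tax_rate

    def total(self) -> float:
        return self.subtotal + self.tax

reporter = TotalReporter([10.0, 20.0], 0.5)
reporter.compute_tax()        # out of order: uses subtotal == 0.0
reporter.compute_subtotal()
assert reporter.total() == 30.0   # tax was silently dropped
assert report_total([10.0, 20.0], 0.5) == 45.0  # the original was fine
```

Nothing crashes in the wrong-order case; the object just quietly produces a wrong answer, which is exactly the jack-in-the-box effect.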
Floats can be compared for equality, they're a pattern of bits in memory after all. operator== can be defined for floats.
Using it on the results of floating point arithmetic may be a bad idea, but that's a different matter. That should perhaps be a compiler or linter warning, rather than flat out be declared incorrect.
Floats can be subnormal, and different bit patterns can represent the same value. It's not as easy as just comparing bits.
Here is some cursed C code to play with:
May print something like
I'm not sure how subnormal numbers are relevant. For binary floats, the only duplicate values are (+0, -0) and the NaNs, and in both of those cases it's really up to you to decide if those are really representing the same value or not.
I'm arguing against the notion that floats are just bit patterns in memory. If you base your operator== on such a notion, you will get very weird behavior.
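A quick Python illustration of that point, viewing the underlying bits with `struct`: +0.0 and -0.0 compare equal despite different bit patterns, while NaN has a stable bit pattern yet never compares equal to itself:

```python
import struct

def bits(x: float) -> str:
    """Render the 64-bit IEEE-754 pattern of a Python float as hex."""
    return format(struct.unpack("<Q", struct.pack("<d", x))[0], "016x")

# Same value, different bit patterns:
assert 0.0 == -0.0
assert bits(0.0) != bits(-0.0)    # 0000... vs 8000... (sign bit set)

# Same bit pattern, yet never equal (NaN != NaN per IEEE-754):
nan = float("nan")
assert bits(nan) == bits(nan)
assert nan != nan
```

So a bit-pattern `==` and a value `==` genuinely disagree in both directions, which is why "floats are just bits in memory" is a misleading basis for defining equality.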
Could you explain why that output is cursed?
I get the same output using the following (I switched to the portable memcpy solution.)
Since FLT_EPSILON is 1.19209290E-07F , I would expect a lot of small, close numbers to give the same output, even without touching subnormal values.
A lot of Uncle Bob's advice comes from a time and a place where it actually had merit. There is a sort of enterprisey Java code that was very common in the '90s. OOP and the Gang of Four were incredibly trendy, but few people were actually all that good at it. So now your code had an AbstractFactoryFacadeDelegatorImplBuilderVisitorVisitor. Methods were often extremely long, hundreds of lines. The code was complex in a lot of ways, with lots of static instances being accessed from every which way.
In this context, Uncle Bob's style was compelling, it felt like a breath of fresh air. His opinionated attitude was also compelling. In general, associating a term like 'clean' to your methodology seems to be a good zealot recruitment strategy. 'Pure functions' is similarly brilliant marketing.
That said, a lot of his supposed solutions actually cause problems of their own if you let the pendulum swing too far. It creates its own kind of complexity. Books like these also tend to sort of get their own momentum, if you look back you see a lot of big names praising them. Surely they must be good, right? Well they were. Clean Code made sense when it was published. It's for the most part not good advice today.
I haven't seen anything much better than SOLID from Clean *Architecture in terms of organizing large projects into lots of small discrete interfaces. I guess globally declared pure functions are being pushed a lot today, but I find this approach organizationally wanting, particularly in promoting tight coupling (with all its drawbacks).
*Edit: Updated: I had forgotten this was not from "Clean Code".
> I guess globally declared pure functions
They don't have to (and probably shouldn't) be globally defined.
I meant "globally declared" in contrast to instance-oriented (i.e. methods) rather than whether or not a function exists within a namespace, which is what I think you're implying.
A somewhat more common term is "free function". Comes from C++, in particular, where it means non-member functions (that is, not methods).
I'm curious why you think clean code is for the most part not good advice today. I read the book recently. What is different today compared to in 2008?
You can easily end up with another sort of complexity, where the code is difficult to reason about because it is so decoupled and the methods are so short (but the call stack very long) and modules so small (but many). You can end up with unnecessary code that's difficult to spot because the logic is so spread out that the big picture is clouded.
Because lots of small functions increase the system's complexity by creating interdependencies. You also don't have the context in each function to understand the big picture, and very small functions can be meaningless on their own. You have to jump through many functions just to figure out what is being done.
Code is a non-linear medium; depending on the path one takes through the graph and the scale one operates at, "clean" might mean a very different thing. Take a long look at the history of ideas: it might be obvious looking back that things should be the way they are, but no one just knows them in advance as if they were a prophet. Even if there are signs of something being ugly or a hack, that does not give any guidance as to what direction to take next.
The feasibility of functional programming in commercial settings.
Ironically, on mobile at least, applying Uncle Bob's ideals via the VIPER framework creates a proliferation of classes, interfaces, and functions that are often as byzantine as stereotypical J2EE spaghetti code, with names just as unwieldy.
I often wonder what would currently be a way to work in a more enterprisey environment and actually make stuff a little bit more fun and not all "abstract factories" etc. I'm talking TypeScript/C#/.NET Core type of stuff, maybe with Vue. What's the "new Clean Code"? Might just learn Go next or something.
I don't think it's sensible to liken Pure Functions with "clean code". The former, while no panacea, is strictly defined in a way that the latter could never hope to be.
Integer floats can be checked for equality perfectly.
Uncle Bob is a fraud by the way. Turning a perfectly readable medium length function into a class with dozens of tiny methods is a cardinal sin in my eyes. Then later someone stumbles upon this class and reuses some of these methods in one way or another, while working on a problem completely unrelated to the original, thus entangling the two implementations…
i like the idea that uncle bob subscribes to and teaches a certain style or school of coding. kind of like judo vs karate etc..
you may not be an adherent of judo and you may prefer and better understand karate, but that doesn’t make a judo teacher a fraud.
Interesting. In the martial arts world there were lots of styles that claimed to be "the best" until the UFC came along and suddenly people discovered that if you don't know Brazilian jiu-jitsu, you will most likely lose to someone who does.
What's the programming analog of UFC/MMA?
This analogy doesn't work because programming in "industry" has been like MMA since day 1, in the sense that you have always had to "test your skills" and make something that people wanted and compete with other products. In the martial arts world, a bunch of different martial arts just completely went without full-contact sparring/competition and instead built up a bunch of rules and scenarios around how they were "too deadly to be done in practice". This is the bullshit that MMA exposed, and it's interesting to note that the two practices in the comment you're responding to, judo and karate, have a long history of being practiced "for real" in the gym and in competition, and thus have spawned a long line of highly successful MMA competitors.
The analogy also doesn't work because BJJ isn't some silver bullet. What people discovered is that the first M in MMA is actually the important part and if all you know is BJJ you're going to get starched by a boxer with a sprawl, or more likely a wrestler with a modicum of submission knowledge, who will never let you get to the floor in the first place, and instead just grind you out.
So just like the question "what is the best martial art" currently has no answer outside of "you need a mix of striking and grappling not just one thing", there is no answer to "What is the best programming style" outside of "think about the problem you have at hand and crib on examples and knowledge from other people who have solved a similar problem". This "unfortunately" points to boring industry standard tools, like Java, C/C++, Javascript, RDBMSs, IDEs, Linux etc, etc. Probably some newer stuff like Rust and React as well. And note that answer isn't one specific technology, like MMA its a bag of different tools you combine.
Yeah, Uncle Bob is like aikido then.
Memory-based == is not unreasonable for floats, once in the exact-representation domain. Of course, if operating in the approximated domain, especially involving any of the existing math libraries, float comparison should be done with the corresponding functions/macros and be epsilon-based.
Do you practice TDD? That is usually where the long methods start to become an issue.
Unit testing pure functions is a pleasure, no matter if they have local variables.
When you split them in a way that makes the local state external - you now have to handle the state in unit tests which is much more hassle.
Off topic: floating-point numbers can be used to store integer values, so equality comparison might be perfectly valid in some cases. For example, if the embedded SQL doesn't support 'int64_t' (only int32_t), it might still support 'double', which can store 52-bit integers exactly.
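A quick Python check of that boundary (Python floats are IEEE-754 doubles): every integer of magnitude up to 2^53 is representable exactly (52 mantissa bits plus the implicit leading bit), so equality on such values is safe; one past that limit, rounding kicks in:

```python
# Every integer with magnitude <= 2**53 has an exact double
# representation, so == behaves like integer comparison in that range.
assert float(2**53) == 2**53
assert float(2**53 - 1) != float(2**53)

# One past the limit, exactness breaks: 2**53 + 1 rounds to 2**53,
# so two different integers now compare equal as doubles.
assert float(2**53 + 1) == float(2**53)
```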
Yes, and there are deep-magic gamedev tricks exploiting that (see the Quake inverse square root :) ). There should be a way to do them, something like bit_compare(x, y). I'd just prefer that rarely useful operations not occupy the most commonly used API. It's too easy to make a mistake.
BTW I'd also love to have a built in float type that fails when you assign 0 to it.
Anyway, I guess in Java operator == is a lost cause. My favorite example:
I think if you want to perform binary operations on floats you should have to cast them to a binary type.
> Off: floating point numbers can be used to store integer values, so equality comparison might be perfectly valid in some cases.
Yeah, but then you're having to learn all the special cases for when it silently gives wrong answers, and hope to hell that you didn't miss any.
Much better to have consistency and behave the same way all the time, than to optimise for 3 keystrokes and introduce all sorts of special exceptions that the programmer must memorise.
The 'clean' pattern to me is a complex data structure that is operated on atomically by relatively simple functions, but not so simple that you need to keep a 6 level call stack in your head to follow what is going on. Function decomposition is fine as long as - like everything else, really - you don't overdo it.
> my pet peeve is that operator== is even defined for floats
Then people would start writing `signum(a - b) == 0` (or some equivalent) instead of `a == b`, and/or factor that out into a helper function. Not sure that would be an improvement.
> my pet peeve is that operator== is even defined for floats
I fixed two bugs at once in a softsynth caused by this stupidity. In the envelope generator code, which loosely modelled the ADSR circuit in a synth which charges and discharges a capacitor, there was an integrator implemented like `env = ((target - env) * timeconstant) + env`. The `target` value was set to 1 to allow the "capacitor" to "charge" at a rate set by `timeconstant`, and when it reached the top it would be flipped to 0 and `timeconstant` changed to the decay rate.
It failed about one go in ten, with some fudges to set `timeconstant` in odd values. It turns out that they detected the "capacitor fully charged" state with something like `if (env == 0.999) { dostuff(); }` and for some values of `env` and `timeconstant` it might never hit *exactly* 0.999 if you got it exactly wrong.
It was logically wrong (detecting a precise float value) and functionally wrong (envelopes switch over when the cap is about 2/3 charged). Changing the "flip over" condition to something more like `if (env > 0.999)` and changing the maximum value of `target` to 1.5 (mimicking a circuit running off 15V and trying to hit a 10V output, like a real analogue synth) cured all its problems.
Well, all its *envelope* problems. There were more, but they get into scary discussions of poles and zeroes and negative frequencies, and you don't want that at this time of night.
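A compact Python sketch of that failure mode (the constants here are made up; the synth code is only paraphrased above): the one-pole integrator steps over the target value, so an exact equality test can be skipped forever while a threshold test always fires:

```python
def charge_steps(timeconstant: float, threshold: float,
                 exact: bool, max_steps: int = 10_000) -> int:
    """Run env = (target - env) * timeconstant + env until the
    'fully charged' test fires; return the step count, or -1 if
    the test never fires within max_steps."""
    target, env = 1.5, 0.0  # overshoot target, like a 15V supply aiming past 10V
    for step in range(max_steps):
        env = (target - env) * timeconstant + env
        hit = (env == threshold) if exact else (env >= threshold)
        if hit:
            return step
    return -1

# The exact-equality test essentially never fires: env steps past
# 0.999 without ever landing on it precisely...
assert charge_steps(0.13, 0.999, exact=True) == -1
# ...while the >= threshold test reliably does.
assert charge_steps(0.13, 0.999, exact=False) >= 0
```

Which is the general lesson: never gate control flow on a float reaching an exact value; gate it on crossing a threshold.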
Uncle Bob has great advice that a large swath of the industry can benefit from. Keep in mind that a large portion of the industry, especially in corporate America, writes really bad code and his advice helps these people to put guardrails around that. The 10% bad advice I would attribute to the 10% of instances when "the rules" should be broken. But the key to breaking the rules is having an understanding of why the rules are there and as such why they should be broken.
His dogmatic approach is partially, from what I've seen, a counterbalance to the sloppiness that a large part of the industry, especially corporate America, has with software development. Can you take it too far and create beautiful software that does not solve any problems? Sure, but on the flip side it is also easy to create a lot of technical debt.
With things like TDD, when devs are new to it, part of the learning process is to take it too far first and then as you understand it you learn the right balance.
TDD has been around for two decades now. One would expect there to be some high profile success story. Yet none of the important software projects we use on a day to day basis seems to utilize it. Not even one.
I'm not sure what tech stacks you work on, but I've worked with lots of tech stacks and high profile projects that use it and also saw the quality that resulted from the practice.
The whole idea that something has to have a high profile success story available to the public for it to be an actual success doesn't really hold. Lots of people are far more busy with their work than they are with blogging and/or publicity. There are plenty of examples in my practice of quiet successes, some using TDD, some without and for those companies it may well have been a factor.
Depending on the industry, reliability and ability to ship software that functions as advertised from day #1 can be a huge advantage, and for others it can be a wash or even a competitive disadvantage if it consumes too many resources. But I would not dismiss any such tool out of hand just because you personally have not seen or heard about the success stories.
agreed
Oh, absolutely. It's the lack of quiet successes that is notable here. Like Linux, or apt, or nginx, or Python, or at least a popular Python module. Or anything, really.
After two decades of consulting in the software industry, you belong to a certain professional network. People love to talk, even if they won't always name their customers. I know people using formal methods and verification, in niches of the industry really, but have no indication that TDD is utilized for any big flagship products where it really matters.
I'm sure there are a few highly blogged-about industry stories somewhere, but there's a world of difference between those and long-term success in software development as a whole. There were high-profile books written about the Rational Unified Process, for example, and ~nobody uses that anymore.
Wrong?
https://github.com/linux-test-project/ltp/wiki https://github.com/neovim/neovim/tree/master/test https://github.com/openssl/openssl/blob/master/test/README.s...
What are you trying to say, that people don't write tests for their code?
I would agree with this, but perhaps the commenter has a different definition of "high profile" ¯\_(ツ)_/¯
Are you trying to say that “Test-Driven Development” is “writ[ing] tests for their code”?
Test-Driven Development means that the tests are written before the features.
That's a common misunderstanding, but it's not true. TDD involves writing the tests at the same time as the code in tiny little steps. Write a few lines of test code that fail --> write a few lines of production code to make it pass --> refactor if needed to clean things up --> repeat.
> Write a few lines of test code that fail
OK, write the test…
> write a few lines of production code to make it pass
…before the feature.
I recognize the color you’re trying to add (it’s more like TFTFTF than TTTFFF), but it remains Test-Driven Development.
Sure, if you insist. I've seen enough people misunderstand TDD to be TTTFFF that I think it's important to clarify.
Writing tests and doing TDD aren't the same thing. The commenter is talking about TDD in particular I think, not saying "unit testing has no success stories".
No. Those are mostly integration tests (neovim has some unit tests)
You're conflating unit tests with general testing
Uncle Bob makes it very clear that he only considers "unit tests" to be valid
So, no, these dogmatically "don't count"
(they do count when Uncle Bob supporters want to play half-truths and push for TDD instead of more realistic tests)
I'm an Uncle Bob supporter, but also support pragmatic testing techniques depending on the platform. The two are not mutually exclusive.
Linux and OpenSSL are two great examples of projects that probably should do much more testing but don't bother to, even if they are both steadily getting better at it.
They obviously produce useful results and most would love to see more of it, yet it is tacked on afterwards and often by completely different people than the original developers, not as part of the development process.
They have a long way to go to have good enough tests that could be relied on for integration purposes. TDD isn't even on the map. And will probably never be.
I've sort of half-suspected that TDD doesn't actually exist. I've never met anyone who does it. I've seen people talk it up a lot, and programming shops sometimes claim to be all about TDD, but when you look at how they work it's not TDD at all, but some token unit tests after the development is finished.
I have met one. His excuse for pushing broken code to trunk was "but it passes the unit tests". The same person also managed to write unit tests with >90% line coverage that just didn't check any results.
How do you know what testing/designing practices all "important" software projects use?
Do you expect it to be advertised?
It's been twenty years. People would have moved on to new jobs, and spread the word with them. Some would have applied for jobs there. After all, non-googlers know a thing or two about their infrastructure, despite being mostly a trade secret.
If nothing else, apparently there are enough disciples out there that it would be a great way of hiring the right people.
You'd have to believe there is some sort of cabal, completely bent on keeping this enormous advantage to themselves in absolute secrecy, to argue that it is a software development success story.
Uncle Bob gives some very bad advice. For example, he advocates passing temporary state between methods implicitly in instance variables instead of explicitly (and nonshared) via arguments and return values, increasing the statefulness of the code and creating complex ordering constraints and preconditions for method invocations. E.g. https://softwareengineering.stackexchange.com/questions/2887...
One can’t recommend his books with good conscience to anyone who doesn’t already have a very good judgment regarding software design and can pick out the parts that actually make sense.
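To illustrate the criticized pattern with a toy example (both classes here are hypothetical, not taken from his books): the implicit style stores intermediate results in instance fields, so the private methods only work when invoked in one particular order; the explicit style threads data through arguments and return values, so each step stands alone.

```python
# Implicit style (the pattern being criticized): temporary results
# live in instance fields, so methods must run in exactly this order
# and cannot be tested or reused in isolation.
class ReportImplicit:
    def build(self, rows):
        self._rows = rows
        self._filter()      # swap these two calls and _total()
        self._total()       # blows up on a missing attribute
        return self._result

    def _filter(self):
        self._kept = [r for r in self._rows if r >= 0]

    def _total(self):
        self._result = sum(self._kept)

# Explicit style: each step takes arguments and returns a value;
# no hidden ordering constraints, no lingering state.
def keep_nonnegative(rows):
    return [r for r in rows if r >= 0]

def total(rows):
    return sum(rows)

def build_report(rows):
    return total(keep_nonnegative(rows))
```

Both produce the same answer, but only the explicit version makes the dataflow, and thus the legal call orders, visible in the signatures.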
Uncle Bob sees classes as animals with internal state, not as abstract structures grouping together consumable APIs and their internal workings.
Calling bound functions ”methods” is perhaps a bit misleading.
Any time someone claims "this is the only solution" for as wide a swath of problems as Uncle Bob does, I always find them suspect. When doing anything that is not rote work (aka super basic CRUD apps where you're just slapping obvious chains of events together, and even these are INCREDIBLY rare if they even truly exist in the wild), there are always exceptions, and yet here we are.
It is funny how much of these types of works/teachings exist across a lot of domains. Writing fiction you'll see similar people claiming "this is the ONLY way to write a novel," which is patently stupid, but they'll claim it all the same, using weird logic to cram all kinds of great works into their model even when they don't really fit.
I always wonder: why do people bother with Uncle Bob? The guy has some opinions, but lacks any notable experience or track record. The only known project he contributed to (and constantly mentions in his videos) is FitNesse (https://github.com/unclebob/fitnesse), but 59 commits don't make him look like an expert.
I think software is so hard to measure that having someone tell you there's a "right way" can be comforting to some people.
Comforting, maybe, but ultimately misleading. The surety of Bob's solutions combined with the disdain for any other styles is a cancer.
> That’s what makes Bob’s advice so toxic. By being so dismissive of everything but unit tests, he’s actively discouraging us from using our whole range of techniques.
I worked with programmers who worshipped at Uncle Bob's feet and never seemed to notice they never produced functional systems by any stretch of the definition.
They then adopted Bob's dismissive attitude about everything, you see, because they already had the answer to all the problems: just smurf -- I mean unit test -- it!
Yeah. It's a very good summary of why Uncle Bob is self-help for programmers.
> Uncle Bob is okay with software correctness: after all, he uses the phrase “unit testing” like a smurf uses “smurf”. But what about every other correctness technique? (but for Uncle Bob...) any correctness technique that isn’t unit tests can be dismissed.
I find it very hard to code in languages with no type checking. Looking at other domains, better tools make people better (medicine, construction, etc.). I can't code in vi, sorry, nor can I write a unit test without a good editor :)
My argument is:
If I had enough time I'd write perfect software, but midway through writing perfect software my company needs the following 3 features or they go under.
Sure if I was working at Google I could spend the time making perfect stuff and after 3 years finally get that customer engagement up by 0.1% but working in startups we're often not given the luxury of time, or sometimes experimentation. We make decisions based on our current skills/knowledge/time constraints, and then we have to live with those decisions. I have to ramp up engineers with 1-5 years of experience who don't know all the patterns and sometimes are even learning the language we're working in, and they make mistakes.
Point is... from my experience... Good software is written when strong engineers at the top know how to train weaker engineers on the bottom, and together work towards a common vision. Struggle together, win together, fix the warts.
I've seen some pretty crappy code come out of Google, and at times they don't really slow down to do things right.
At my company, we have an unofficial motto for our learning reviews:
> Plan for a world where we are just as stupid tomorrow as we are today.
If an outage or issue was caused by a mistake, the solution can never be “don’t make that mistake again.” People don’t choose to make mistakes, so they can’t choose to not make them, either.
I only find about 10% of Uncle Bob's advice is terrible. 90% is good.
The trouble is he delivers that terrible 10% with equal confidence and it really undoes all the good advice.
At the same time the industry is chugging along fine despite his bad advice. Despite his attitude against type checking we're gladly moving from JavaScript to TypeScript, and despite his attitude against better tooling we're gradually dropping C++ for Rust. Engineering projects that needed ironclad correctness have proceeded to use TLA+ to great success.
Uncle Bob has influenced me during my career, but I don't nor do I know any other senior developer who preaches his gospel. Most people just pick up the bits and pieces they like and forget about the rest.
Maybe you'll get some juniors who get overly zealous but I think there's worse things a junior could be doing than following Bob a little too strictly.
Ruby has long been my favorite programming language, and I used to believe that the static typing of languages like Java was the main reason they were so much less effective. After having worked with TypeScript for a couple of years and seeing the magic the Rust community is pulling off, I now believe we'd benefit from static types in Ruby, as do an increasing number of Rubyists.
The true danger is getting stuck in your ideas, and ignoring the wisdom that's being developed around you.
>I don't nor do I know any other senior developer who preaches his gospel.
Lucky for you, I guess. I've known a few. "Clean Code" is also still a pretty commonly recommended book.
People do cite him in PR arguments as a kind of appeal to authority, and that's kind of unfortunate.
I'm pretty on the same page as you.
Related:
Uncle Bob and Silver Bullets (2017) - https://news.ycombinator.com/item?id=26153823 - Feb 2021 (92 comments)
Uncle Bob and Silver Bullets - https://news.ycombinator.com/item?id=15415278 - Oct 2017 (218 comments)
I think the answer to the software apocalypse is both:
- get better tools
- get better programmers
And in fact we have been doing both: the tools are so much better than they used to be, and with TDD, refactoring, patterns, and clean code we have been building increasingly complex software with fewer bugs.
Still a long way to go.
I think Bob’s message here is right, but the phrasing is scaring folks off. Maybe “lack of discipline” comes off as pejorative? Perhaps “lack of emphasis on quality” would be more popular?
Quoting from “tools are not the answer” (http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotThe...)
> The author of the article interviewed many thought leaders in the industry, but chose only those thought leaders who were inventing new technologies. Those technologies were things like Light Table, Model Driven Engineering, and TLA+.
> I have nothing against tools like this. I’ve even contributed money to the Light Table project. I think that good software tools make it easier to write good software. However, tools are not the answer to the “Apocalypse”.
> Nowhere in the article did the author examine the possibility that programmers are generally undisciplined. The author did not interview software experts like Kent Beck, or Ward Cunningham, or Martin Fowler. The author completely avoided the notion that the software apocalypse might be avoided if programmers just did a better job. Instead, the author seemed convinced that the solution to the software apocalypse – the solution to bad code – is more code.
> I disagree. Tools are fine; but the solution to the software apocalypse is not more tools. The solution is better programming discipline.
If you step back for a second, this is basically the main gripe you see repeated on HN; business/PM is always in a hurry, the pressure is always to cut corners, nobody has time to build things right/properly.
His point is that in this environment (most software environments) just sprinkling some TLA+ in there is not going to solve the problem. If your PM is always rushing you, can you imagine them letting you pause on delivering features to prove your system is correct? Most shops do not care enough to justify this expense.
I think by phrasing it as discipline Bob makes it sound like it’s the fault of individuals, where in fact I think a lot of people would like to spend more time on quality but just don’t have org buy-in. (But there are sloppy/undisciplined individuals too.)
I bet quickcheck-style testing or a fuzzing framework would find those bugs.
I tried using "fast-check" for some parsing code in JavaScript that needs to handle floats and it was pretty good at reminding me of the corner cases. Or at least some of them.
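The quickcheck idea can be sketched without any library at all (this hand-rolled `check_property` harness is an illustration, not fast-check or Hypothesis): generate random inputs, bias the generator toward known corner cases, and report the first input that violates the property.

```python
import random

# Corner cases that plain uniform random floats almost never produce.
SPECIALS = [0.0, -0.0, float("inf"), float("-inf"), float("nan"),
            1e308, 5e-324]

def gen_float(rng):
    # Quickcheck-style generator: mix hand-picked special values
    # in with ordinary random floats.
    if rng.random() < 0.3:
        return rng.choice(SPECIALS)
    return rng.uniform(-1e6, 1e6)

def check_property(prop, tries=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(tries):
        x = gen_float(rng)
        if not prop(x):
            return x        # counterexample found
    return None             # property survived all tries

# A property that "obviously" holds for every float... until the
# generator hands the test a NaN, for which x + 0.0 == x is False.
counterexample = check_property(lambda x: x + 0.0 == x)
```

Real frameworks add shrinking and smarter generators on top, but even this toy harness is enough to surface the NaN corner case a hand-written example list might miss.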
If you like Uncle Bob you might be interested in https://github.com/unclebob/more-speech.