koonsolo 5 years ago

Great lessons! Although it didn't include one that I had to learn the hard way, multiple times:

Your throwaway prototype will be the codebase for the project.

In my career, I've seen multiple throwaway prototypes that were hacked together. None of them ended up in the waste bin.

"Just add X and Y and you're done". "But this is just a quick hack to show the interface, there is nothing behind it!" "Yes, and the interface is fine, so why start from scratch? Just add the rest".

Now I know: I'll never build a throwaway prototype again; it's always a base for the project to come.

  • hirundo 5 years ago

    Once I wrote non-prototype, non-throwaway code to start a large project. After about a month of work on that, due to a dead disk and my own stupidity, I lost all of it. It was throwaway code after all. I was about as upset and angry as I ever get.

    Then I sat down and rewrote it in about a week, and it was much better the second time, and turned out to be the start of a code base that's still in daily use and evolving 25 years later.

    The lesson I took from that is, don't write throwaway code, but given a chance, throw it away anyway.

    • mgkimsal 5 years ago

      I've had a couple of times where I've been called in to lend a hand on code I wrote 10-15 years earlier. I'm always surprised that it's still going, but perhaps shouldn't be (and may not be in the future). There are sometimes good reasons to upgrade/greenfield - in one case, structural issues prevented much more 'refactoring' without heavy work, and 'start over and migrate data' was the more cost-effective option.

      But, until you've done this for 20 years, it's sort of hard to imagine going back to code you wrote 20 years earlier to work on it again. Many of those "best practices" take on a much more concrete meaning. In many cases, I've been ... moderately proud of how well some decisions stood up, and in other cases I remember exactly what I was doing when I cut a particular corner (was late getting home that night, needed to get back home before a winter storm, etc).

  • trumbitta2 5 years ago

    Once I wrote a quick demo of a classic content-heavy portal with our in-house portal system (at the time). It took me 2 days over a weekend. Shipped for the demo with some partner, forgot about it.

    6 months later, an angry customer wrote to my boss about the "Energy Saving Monitoring System" somebody sold to them stating that some kind of SSO wasn't working as expected.

    My boss handed the case to me and I, flabbergasted as I was, proceeded to delve deeper into the strange report about that system we were sure we'd never written, let alone sold to somebody.

    Long story short, it was my quick demo of a content-heavy portal, turned into that monster of an "Energy Saving Monitoring System" - whatever that is - by the partner we demoed it to, and sold to several customers as a finished product.

  • lhuser123 5 years ago

    Some researchers wanted a 128-bit space for the binary address, Cerf recalled. But others said, "That's crazy," because it's far larger than necessary, and they suggested a much smaller space. Cerf finally settled on a 32-bit space that was incorporated into IPv4 and provided a respectable 4.3 billion separate addresses.

    "It's enough to do an experiment," he said. "The problem is the experiment never ended."

    https://www.networkworld.com/article/2227543/software-why-ip...

    • travisjungroth 5 years ago

      IMO, they should have gone with a wildly smaller number. Enough IP addresses to get ~10% of the world online is just about the worst possible pick. It’s enough to get entrenched, but not enough to finish the job. If it was 1024, they would have run into the problem while it was easily fixable. Y2K wouldn’t have been a problem if two-digit years started being used in 1999.

      • bdamm 5 years ago

        The irony here is Y2K was just the start. We have Y2K-like problems coming up in 2038, 2050, and 2106, and many others on smaller scales in between, such as those in 2036, 2040, and 2042. This tells me that it is not just poor judgement, but rather widespread systematic incomprehension regarding the longevity of systems, protocols, and data.

      • p_l 5 years ago

        They did - original network addresses were, iirc, around 16 bits. Meanwhile OSI went with a 20-byte maximum and a strong preference for separating service and implementation, putting numeric addresses on the implementation side...

  • lostmyoldone 5 years ago

    In a somewhat troubled company I worked at, and where I honestly stayed a bit too long, I once resorted to writing a prototype, in bash, and in the most idiosyncratic way possible with two goals in mind:

    1) Make sure that I didn't have to handle some messy deployment when it inevitably would become rebranded to production code.

    2) Make sure nobody in their right mind would try to extend the functionality and keep me responsible for fixing the inevitable wreck.

    Obviously it was put into production, still is, and has not been extended with anything except the initial functionality.

    I don't know if that counts as a success or a failure, but at least deployment was never an issue!

  • sadness2 5 years ago

    How about this: Code can add value when it's a lot lower quality than we think it needs to be. Customers will pay for even a crappy solution if you are addressing the right problem. Don't over engineer. But also employ staff for the explicit purpose of gradually increasing the quality of codebases which need to continue to churn.

    • mgkimsal 5 years ago

      > But also employ staff for the explicit purpose of gradually increasing the quality of codebases which need to continue to churn.

      No one ever wants to hear that part.

    • jmcqk6 5 years ago

      I put it this way: There is no floor to how bad code can be and yet still make people money.

      • hinkley 5 years ago

        It’s really a matter of how fast you chew through engineers. Your boss will just hire a replacement. But as engineers we should want to take care of our own.

        • jmcqk6 5 years ago

          Absolutely. It's a very depressing observation for myself, and an uncomfortable truth for the industry.

    • EdgarVerona 5 years ago

      I agree with your perspective on this, as hard as it can be to admit.

  • hinkley 5 years ago

    This got so bad for me that at one point I was doing all POC code as command line only. Proving the code worked without a UI got us green-lit, but any UI they wanted to ship.

  • hasbroslasher 5 years ago

    This is some great advice in my opinion. Every project starts from a base - give it a shitty base, and you're always going to have problems. Spend the time to write an OK base that doesn't try to do anything clever but organizes code in a clear and sane way and you'll be reaping the dividends for years to come. There is nothing more valuable than, from day one, unit testing, organizing files in a clear way, preventing clutter, and starting writing basic documentation of how to work with your software. These simple cues will remind you and others of what the expectations are for your project.

    As popular as it is to say "just solve the problem in front of you", no one actually does that. No one sits down to work on something with one problem - they sit down with likely hundreds, all vying to be solved. Average engineers will look at one, solve it, test it, put it in a text file, push it to prod, and say "look, done!" Good engineers will look at the whole class of problems and recognize commonalities that inform basic architectural and infrastructural decisions.

  • m463 5 years ago

    You can get around this (a bit) by squirreling away and enhancing a "template" program in the languages of your choice.

    For example, I have a "template" python script that does meaningful things like importing common stuff, parsing arguments with argparse, and setting up logging with debug/verbose flags, among other common things.

    So when starting a "throwaway" project, it's initially over-engineered, but that quickly averages out.
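
    A minimal sketch of what such a template can look like (my own illustration, not m463's actual script):

        #!/usr/bin/env python3
        """Template for new scripts: argument parsing, logging, a main()."""
        import argparse
        import logging

        def parse_args():
            parser = argparse.ArgumentParser(description="What this script does.")
            parser.add_argument("-v", "--verbose", action="store_true",
                                help="enable debug output")
            return parser.parse_args()

        def main():
            args = parse_args()
            logging.basicConfig(
                level=logging.DEBUG if args.verbose else logging.INFO,
                format="%(asctime)s %(levelname)s %(message)s")
            logging.debug("parsed args: %s", args)
            # real work starts here

        if __name__ == "__main__":
            main()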

  • Thespian2 5 years ago

    There is nothing as permanent, as that which is called "temporary," and nothing as temporary, as that which is called "permanent."

    Applies to almost every bit of code I've written.

    • hnick 5 years ago

      AKA "Nothing is more permanent than a temporary solution."

      I think that cuts both ways - often the quick fix is good enough, while the "proper fix" is over-engineered and in promising to be all things to all people, anything missed becomes a sticking point and sinks the project.

  • dyarosla 5 years ago

    Could not disagree more with the moral. Write prototypes for sure, toss code out a lot early on, and then once you’ve nailed the direction, start iterating on refactoring the prototype. But to eschew prototype code from the start? That’s a separate hard lesson in itself.

  • EdgarVerona 5 years ago

    Definitely true. And I think, in retrospect, that's not so bad. I think the important part is realizing - and convincing people to allocate time to it - when it needs to be refactored: when it reaches a point that it can't sustain itself and doesn't solve business problems anymore.

    Until then, if it's solving a business need and you're not pouring tons of effort/money into it, it's more about pride than a real problem. That is a lesson I had to learn the hard way, and still struggle with: accepting that sometimes a janky, cobbled-together solution that solves a business problem still solves that problem and might not be worth the cost of "fixing."

    • Gibbon1 5 years ago

      I'll hijack your comment to say, it's easier to justify refactoring something totally ass than something half-assed. That argues against trying to do too good a job initially.

  • mcv 5 years ago

    For this reason: write even your prototypes in a maintainable way. Don't go overboard, quick hacks are allowed where they save time (you might want to mark them with a // TODO), but even prototype code should be understandable.

    Also: don't be afraid of dramatic refactoring. If you know it's necessary, do it. It will improve your code base immensely. And the earlier you do it, the easier it is.

    And even when you do throw away your code, you're not going to throw away all of it. There are always bits that look good and work well, so you copy them into the new version.

  • avgDev 5 years ago

    When I started, I named one of my first projects at work somethingsomethingTest, and it's still that way 3 years later. Honestly, I tried changing it but it caused a cluster fuck, so every time I have to fix a bug it gently reminds me about being a fresh dev, writing test programs that just somehow wiggle their way into production. Management says job well done; if it works, it works. :\

  • maxxxxx 5 years ago

    I also try to write reasonable code for even the smallest projects. It doesn't have to be perfect, but it should be possible to add the bells and whistles in a straightforward manner. A lot of people seem to spend time thinking about whether they should do the right thing or not, but I find it easier to not think about it and do the right thing by default.

  • reacweb 5 years ago

    I love prototypes. You have the joy of the blank page, the freedom of the choice of COTS. You can ignore requirements that would introduce complexity. You do not need to follow the coding rules that impose verbosity. As a result, you can build something simple you can be proud of.

    The prototype is a showcase that shows how much faster development could be if there were fewer stupid requirements and fewer coding constraints.

    • hinkley 5 years ago

      Every problem is easy if you oversimplify the requirements.

      • reacweb 5 years ago

        The purpose of a prototype is generally to tackle one or two of the difficulties (among UX, thread architecture, dataflow, performance, complexity of data, encoding/decoding, robustness...) while ignoring most of the others. The other requirements will come back, but at least you have a good solution of the most problematic ones.

  • ChrisMarshallNY 5 years ago

    This is the reason I am less-than-thrilled with the way most companies get to MVP. Make MVP good quality ("V" should have "Quality" as a principal metric). It may be bare bones, but don't skimp on the quality. Or you'll be very, very sorry.

  • moocowtruck 5 years ago

    oh wow so this...I have about a dozen such stories!

    • pstuart 5 years ago

      Add 'em to the pile here!

kstenerud 5 years ago

One more point from my 20 years experience:

Keep your development environment deterministic and instantly rebuildable.

You should ideally be able to take your hard drive out, put in a new one, install a fresh OS, run 1 script, and be completely up and running in less than an hour.

As an added bonus, this means that you can replicate your development environment on any machine very quickly and easily, which makes losing your laptop during travel only an inconvenience rather than a travesty.

I've hacked together some scripts for that here [1], although I really should convert them to ansible or something.

[1] https://github.com/kstenerud/virtual-builders

https://github.com/kstenerud/ubuntu-dev-installer
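
As a sketch of the idea (my own illustration, not code from those repos): every step checks whether it has already been done, so the one script can be re-run safely at any time.

    #!/usr/bin/env python3
    """Idempotent machine setup: safe to run on a fresh OS or repeatedly."""
    import shutil
    import subprocess

    TOOLS = ["git", "curl", "tmux"]  # hypothetical list of needed tools

    def ensure_installed(tool):
        # Skip work that's already done -- the heart of idempotency.
        if shutil.which(tool):
            print(f"{tool}: already installed, skipping")
            return
        subprocess.run(["sudo", "apt-get", "install", "-y", tool], check=True)

    if __name__ == "__main__":
        for tool in TOOLS:
            ensure_installed(tool)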

  • smilliken 5 years ago

    I've always agreed in principle, but fell short in practice until I started using Nix[1]. Nix makes it significantly easier to pull off by verifying the hashes of all inputs, eliminating non-determinism in builds (timestamps, usernames, etc.), eliminating dependencies on OS packages/files/etc., and allowing you to pin to a specific snapshot of all packages. Pardon the evangelism, I'm just a happy user.

    [1] https://nixos.org/nix/

  • _frkl 5 years ago

    I agree. It's a game changer when you can easily replicate all or specific parts of your dev setup with one command, everywhere (remote server, new laptop, dev vm, even containers).

    I wrote a script for this too (actually using Ansible), but then everything got a bit out of hand and now it's a framework (as those things go): https://freckles.io

    • codethief 5 years ago

      Freckles looks interesting!

      If I may ask, what was the reason for moving away from Ansible and resorting to building something yourself? What kind of issues did you encounter that Ansible couldn't solve?

      • _frkl 5 years ago

        Oh, sorry, I didn't express myself clearly: freckles still uses Ansible under the hood, it's just that the whole project sort of went its own way and dragged me with it. Feels like it, anyway :-)

        freckles actually supports multiple backends, but Ansible is the only one that is implemented currently. I chose that mainly to take advantage of all the modules and roles out there, those can all be used from within freckles.

        As to its disadvantages: for some things Ansible is a bit heavy (needs to be installed, and Python available/installed on the target), or not as well suited (orchestration -- although I'm quite happy with what it can do in that space), so the plan is to have other backends (shell, Terraform, direct K8s, etc.) that are a better fit for whichever context you are operating in. And of course you will be able to mix and match.

        Anyway, if I could have only picked one backend, I'd still go with Ansible. It's a great platform, and it can do almost everything.

  • hinkley 5 years ago

    I’ve had to rebuild my machine at work twice and some people look at you like you told them your dog died. We never had it automated but we did have it written down.

    One of those times, we got a bad batch of laptops. Over a summer five of us had to go through the same thing. The only rough part was the whole-disk encryption. That took longer than the rest combined.

  • pmarreck 5 years ago

    > All scripts are idempotent.

    THANK YOU!

    This is not a property that is exploited nearly enough!

  • maxxxxx 5 years ago

    "You should ideally be able to take your hard drive out, put in a new one, install a fresh OS, run 1 script, and be completely up and running in less than an hour."

    This is great if you can pull it off, but if you have a lot of disjointed systems, plus maybe custom hardware, it gets really hard to automate the process. In general I agree though. I also have learned to just live with the defaults of my IDE(s) instead of setting them up to my taste every time.

    • jimpudar 5 years ago

      I keep all my Emacs config files in a git repository, so I can quickly pull it wherever I happen to be using Emacs. Might be practical for some IDEs too depending on their configuration file formats. I'd imagine it would work fine if the configuration is stored as XML.

  • wil421 5 years ago

    Pi-Hole, Plex, and Samba builds. Thank you. I’m working on automating my home lab builds and your code gives me more ideas.

  • nunez 5 years ago

    This is what makes Docker (or even Vagrant) perfect. Super repeatable development environments that can run anywhere.

fnord123 5 years ago

>ALWAYS use timezones with your dates

This is bad advice. NEVER use timezones with your dates because there is only one timezone: UTC. Anything else is a presentation format.

Of course, if you are in a system that already committed the cardinal sin of storing datetimes with timezones then you have to deal with it. Alas.

  • wvenable 5 years ago

    If your application is future-dating items, like say clients entering appointments in a calendar, this will be very wrong. If a customer enters an appointment 6 months in the future, it will end up at the wrong time due to daylight savings time.

    UTC is great for timestamps (when did this event happen) but not good for user-entered dates. You throw away useful information if you store in UTC.

    • dheera 5 years ago

      No. ALWAYS store user-entered dates in UTC, and simultaneously either (a) save the user's timezone preference or (b) use their system-specified timezone and only do that conversion at the last step when rendering the UI, preferably with client-side timezone functions. Also use client-side timezone functions to provide the server with UTC dates directly.

      You'll deal with DST hell on your server otherwise. Countries change their DST and timezone policies from time to time, and you don't want to have to deal with keeping track of that stuff or force upgrading your server for a petty politician/timezone issue. That's the client machine's job.

      Saving everything in UTC also makes comparisons, logging, abuse tracking much easier.
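
      A minimal sketch of that split in Python (zoneinfo is in the standard library since 3.9; the stored and rendered values here are my own illustration):

          from datetime import datetime, timezone
          from zoneinfo import ZoneInfo

          def store(dt: datetime) -> str:
              # Persist UTC only; reject naive datetimes at the boundary.
              assert dt.tzinfo is not None, "require an aware datetime"
              return dt.astimezone(timezone.utc).isoformat()

          def render(stored: str, user_tz: str) -> str:
              # Convert at the last step, per the user's saved tz preference.
              dt = datetime.fromisoformat(stored)
              return dt.astimezone(ZoneInfo(user_tz)).strftime("%Y-%m-%d %H:%M %Z")

          saved = store(datetime(2019, 6, 18, 22, 0, tzinfo=timezone.utc))
          print(render(saved, "America/New_York"))  # 2019-06-18 18:00 EDT
          print(render(saved, "Europe/Berlin"))     # 2019-06-19 00:00 CEST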

      • stouset 5 years ago

        Future times record intent, which requires storing a datetime (sans any kind of time zone information) and location. Past times can be stored as a UTC timestamp.

        Storing and using a time zone for points in the future is the path of madness. If a user has an event at 2pm six months from now, they almost certainly want that event to be at 2pm no matter what the UTC offset happens to be on that day. This becomes even more obvious for repeating events: an event at 2pm this week will still be at 2pm six months from now, even after DST flips. And even if my government decides to change the time zone my location is in, or decided to opt into/out of DST.
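
        A sketch of "record intent, resolve late" (my own illustration, with a zone name standing in for location): persist the wall time and place, and compute the concrete instant only when needed, under whatever tz rules are current then.

            from datetime import datetime
            from zoneinfo import ZoneInfo

            # What the user asked for: 2pm on that day, in that place.
            intent = {"wall_time": "2022-03-25T14:00:00", "zone": "America/New_York"}

            def resolve(event):
                # Applies today's tzdata; a later rule change simply
                # changes the resolved instant, not the stored intent.
                naive = datetime.fromisoformat(event["wall_time"])
                return naive.replace(tzinfo=ZoneInfo(event["zone"]))

            print(resolve(intent).isoformat())  # 2022-03-25T14:00:00-04:00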

        • lhorie 5 years ago

          > If a user has an event at 2pm six months from now, they almost certainly want that event to be at 2pm no matter what the UTC offset happens to be on that day

          Not necessarily. If the event is a long-distance meeting that spans multiple timezones, then by definition, the event isn't going to be at 2pm for all consumers of that data.

          Location is kind of a proxy for timezone, but it makes more sense to just let the client's OS figure out the offset from UTC than to manually manage timezones on the server yourself.

          Similarly, you let the client tell the server what timestamp "2pm two weeks from now" is (letting it consult its OS' timezone database to figure out what that means for the user's chosen geographic location), rather than trying to naively add up the seconds from now to then in the server.

          Saving the user's location in your DB to compute offsets on the server is problematic because you can't always get geographic data from the client (e.g. browsers will readily make timezone available to Javascript, but geolocation data might be unobtainable due to permissions/privacy concerns/etc). Also, users may move/travel, so getting geo data from side channels such as sign up forms also could lead to discrepancies.

          • wvenable 5 years ago

            If you store UTC, you automatically get the result that meeting occurs at the same time for everyone. But you could also get the situation where, depending on timezone and DST changes, that 2:00pm meeting doesn't occur at 2:00pm for any of the participants!

            The problem is really far more difficult and interesting than just storing all dates as UTC as a rule.

            • lhorie 5 years ago

              AFAIK that can only happen if the government changes the DST policy (or timezone) between the time the event was scheduled and the meeting date. Thankfully, DST policy changes are comparatively rare.

              And in actuality, even if the event doesn't happen at 2pm for the person affected by the DST policy change, it'll still be scheduled at the "expected" time for everyone else (unless they too get affected by a DST policy change).

              Also, most OSes are decent enough nowadays to update their timezone/DST databases regularly so that UIs that use UTC will reflect the discrepancy in future time intent soon after the policy change takes effect.

              • wvenable 5 years ago

                If you convert a future date-time to UTC using the current timezone database you've already lost. If the timezone database changes anytime from that point forward, you cannot correctly convert that UTC date back to local time.

                You have no way of knowing, just looking at a UTC date, under what conditions it was converted from local time.

                By converting a future date to UTC, you're making the assumption that the timezone information you have for that future date won't change from now until that date passes.

                > Thankfully, DST policy changes are comparatively rare.

                "Russia switched to permanent DST from 2011 to 2014, but the move proved unpopular...so the country switched permanently back to standard time in 2014... Florida legislature and Washington State Legislature passed bills to enact permanent DST, pending US Congressional approval, and California, Maine, Massachusetts, New Hampshire, and Rhode Island have introduced proposals or commissions to that effect... Under an EU directive, from 2021 twice-yearly adjustment of clocks will cease. Member states will have the option of observing either standard time or summer time all year round." -- Wikipedia

                • lhorie 5 years ago

                  You seem to be missing the forest for the trees. DST policy changes _are_ rare, compared to the millions of times when people schedule multi-timezone meetings, international flights, etc. Bursts of high frequency DST policy changes are so rare that they get articles written about them.

                  > you cannot correctly convert that UTC date back to local time.

                  Of course you can. It may not be the same local clock time with most implementations, sure, but it will still correctly point to the same physical time relative to the local times of everywhere else on Earth. It would be physically impossible to retroactively change the time of the meeting to account for the DST policy change and maintain the same clock time for participants in other timezones, so something's gotta give.

                  It's also not true that you can't recover the intended time after a DST policy change. Although it's monumentally laborious, it's definitely possible: you can keep track of the historical changes in the timezone database and work backwards from the UTC date and the date of that record change.

                  > By converting a future date to UTC, you're making the assumption that the timezone information you have for that future date won't change from now until that date passes.

                  Sort of. The assumption there is that the time delta between now and then is smaller than the delta between a notice of policy change and the time it goes into effect. It is, however, technically an engineering trade-off. The trade-off here is that you can have a far simpler and almost-always correct implementation by taking a tiny risk of a discrepancy between intent and actual local time in the extremely rare occasion that a policy changes without advance notice. Responsible governments will typically give plenty of advance notice so timezone databases can be updated and clients can then correctly calculate future offsets since they will know how to account for the policy change.

                  You could certainly forego UTC time and make an engineering decision of storing local clock time if your application matches a very strict set of timezone boxing requirements, but in my experience, it's more common that people pick this approach out of ignorance than careful analysis, and in some cases, it gets nasty to support when the requirements change (e.g. as soon as you have a new sales team office on the other side of the country)

                  • wvenable 5 years ago

                    Why go through all this just to follow the rule of storing UTC in the database? At what point does it just become the wrong thing to do?

                    Storing local time and timezone in the database for future dates is easy. It has none of these issues. It produces the correct results that people expect. You can convert to UTC at anytime.

                    • lhorie 5 years ago

                      Storing local time is fine if you and your server never ever leave your timezone. You can even get away with it for a long time. If your logic does ever require timezone conversions down the line though, it will most likely surface some nasty problems. And when I say nasty, I mean it in the will-leave-a-battle-scar-that-your-future-graybeard-self-will-warn-younguns-about sense.

                      The thing with the UTC advice and those who don't listen to it is that people often don't realize that both DST policy changes and factors that increase the number of applicable timezones in a business are rare to begin with. They often think that storing local time is safe because they never had to truly deal with the complexities of representing physical time.

                      But DST policy changes are much rarer than factors that fuck up local time based implementations, and issues can be fixed without huge schema changes and downtimes. Plus it's less likely you will actually need to fix them since higher-ups won't be able to grasp the issue, whereas a local time bug fix will often come as a demand from some higher-up suit who flew out of state.

                      • wvenable 5 years ago

                        I've said all over this thread to store local time and the associated timezone. You do that and no nasty problems can happen. And you are correctly storing the time that the user intended and not the time it might be converted into.

                        • lhorie 5 years ago

                          If you're storing the timezone (and by that I assume you're doing so unambiguously, because offset alone doesn't tell you hemisphere/DST rules, and timezone names are not unique), then you're effectively using UTC, albeit storing it in a non-normalized format.

                          The flaw in your logic is in assumption that "intended time" is always fixed relative to one user's clock time. For international events, intended time is physical time. If it's a friday and I verbally arrange a meeting with someone in Sao Paulo at 3pm BST the following monday and that translates to 11am PST today for me at San Francisco, but if there's a DST change for me over the weekend, the intended time is still 3pm BST, not 11am PST, because in such cases, the general logical expectation is that whoever is getting a time change is the one who should be on top of it.

                          In either case, you'd be able to recover "intended time" (for any meaning of the term) 99.9999% of the time regardless of whether you used UTC or local time+timezone. If there was indeed an unlikely event of an unannounced DST policy change, then whether intended time is wrong depends on who's affected. If it's an international meeting and intended time means physical time, then local time+timezone does not capture true intended time. Likewise, if intended time means local time for a single user, UTC will be the wrong one. But as I said, this case is vanishingly unlikely. A lot of code doesn't handle integer overflows correctly because they are unlikely too and the world is still humming along just fine.

                  • flukus 5 years ago

                    > You seem to be missing the forest for the trees. DST policy changes _are_ rare, compared to the millions of times when people schedule multi-timezone meetings, international flights, etc. Bursts of high frequency DST policy changes are so rare that they get articles written about them.

                    It's not just DST policy changes. If I want to run a report at 10PM every day in my timezone and we store that time as UTC, then my report will run at 11PM after the DST changeover - it will start screwing up twice a year. This does not happen when my local time and timezone are stored, because the software working with the data can make smarter decisions with more information.

                    • lhorie 5 years ago

                      > screwing up

                      Is it wrong just because the clock time is wrong? Or is the data actually going to get corrupted? In my experience, reports are usually reporting on a period of time, and it doesn't actually matter if the job runs an hour off since you don't want them to be time sensitive in the first place (e.g. you might get noise because some query became very expensive for some reason and delayed a portion of the job by an hour, or the job ran in a data center in a different state, etc). Besides, this isn't really the same class of problems that the "use UTC" advice targets. Crons are a far better tool for daily scheduled jobs. The UTC advice is for systems that communicate dates across multiple machines.

              • thirdusername 5 years ago

                In 2018 there were 9 updates to the Olson timezone database; it's not as irregular as you'd think. It doesn't always get communicated with notice, and sometimes applies retroactively depending on which government and point in history.

                Independent of all that the train still leaves at 2pm local time and I need to be on it.

        • dragonwriter 5 years ago

          > Future times record intent, which requires storing a datetime (sans any kind of time zone information) and location.

          This assumes a particular kind of intent, but not the only kind of intent future times might represent. Datetime + named time zone is often the actual intent (e.g., often for TV schedules); datetime + whatever the then-current legal time is at some particular location is, as you note, also often an intent, but it's not the only possible one. Note also that either of those may not be unique, depending on whether the named zone is offset-specific or includes DST shifts, due to DST and other rarer sources of duplicate times in one zone or location, and the intent often is for a unique time, so you may actually need to store even more to capture intent.

          • ShamelessC 5 years ago

            Can you clarify? If I've seen an ad for a TV show saying it's going to air at 6 PM CST, I'm going to miss the show if it's actually on at 5 PM CST because the DST law changed or what have you.

      • wvenable 5 years ago

        If you use the client's timezone preference to store the date and then use the client's timezone preference to recover the date at a future date, you might still mess it up (from the user perspective).

        It depends on the user's intent. If they book something at 4:00pm 2 years from now, will that become 5:00pm if the timezone rules change, or stay at 4:00pm? I'd argue that for most future-dated items it should stay at 4:00pm. The only way to guarantee that is to store the local time.

        Storing UTC doesn't relieve you of timezone problems.

      • reitzensteinm 5 years ago

        Wait, you're using "Countries change their DST and timezone policies from time to time" as an argument for storing UTC in the database?

        As far as I'm concerned, this is the only valid argument against UTC. If a user adds an event for e.g. 2pm March 25th 2022 local time, you convert it to UTC, and then later the conversion rules for that day change, you now have a date that's been improperly converted to UTC.

        I don't think it's a strong enough reason not to use UTC in the db, given the multitude of issues surrounding other choices, but I do think that it's important to be clear about the pros and cons.

    • dragonwriter 5 years ago

      > If your application is future-dating items, like say clients entering appointments in a calendar, this will be very wrong.

      Well, the future has problems no matter what you do.

      > If a customer enters an appointment 6 months in the future, it will end up at the wrong time due to daylight savings time.

      No, because the fact that you store UTC doesn't mean that you also convert entries to UTC using the offset on the entry date without looking at the date the time is attached to.

      OTOH, no matter what you store, the fact that DST rules and even timezone boundaries can change over time means that you have to store a lot more about the intent than just the time to be certain of being correct for future times. Did you want X time in Y time zone, or X time in the legal time applicable to Z venue? The former is often used as a proxy for the latter on the assumption that time zones don't change...

    • wil421 5 years ago

      You store the offset or you store the user's configuration. Use the user's config and/or offset to convert it from UTC. There should be a way to account for DST. I'll look into it when I get a chance.

      If the future date was input in EDT or EST you’d know what to do in the future when you convert back to either EDT or EST based on the time of year.

      • wvenable 5 years ago

        You can do a lot of things but, at some point, you have to ask yourself if the solution is significantly more complex than storing local time and the timezone.

        You also need to be aware that timezones change; even now there are discussions about eliminating daylight saving time in various regions. If you store 4:00pm on Nov 25th and convert that to UTC based on the timezone rules as they exist today, you'll be wrong when you convert them back under different timezone rules. There's just no way to do that cleanly.

        There are also things to consider: if the user changes their own timezone (by moving), did they intend the value of that time to change (4:00pm is now 1:00pm because it's a WebEx and they moved East), or did they intend it to remain at 4:00pm (it's when they take their medicine every day)?

  • kstenerud 5 years ago

    Please please please please don't perpetuate this idea that all stored timezones must be UTC. Timezones are always necessary, and while they can often just be UTC, they can't always be [1].

    [1] https://technicalsourcery.net/posts/time/

    • fnord123 5 years ago

      That is an interesting post and does discuss the issue of always-local-to-me calendaring and planning future dates when you don't know what the political regime will do with those times in the future. Those are fair points.

      For recording all events (i.e. times you see in logs, 99.9999% of timestamps you will see), there should only be one timezone: UTC.

      • Lendal 5 years ago

        What if you're building a $350M machine that weighs many tons, located deep inside a concrete facility and will never ever be moved? The people who operate this machine years in the future will read a date on a log file or database table and make decisions that could affect the lives of millions of people, all of whom live in the same time zone with said machine.

        I don't want these people to be forced to convert UTC in their heads when there is no reason for it except for some abstract rule created by someone who doesn't know the first thing about this machine and its purpose, or the people who live there.

        • lmm 5 years ago

          > The people who operate this machine years in the future will read a date on a log file or database table and make decisions that could affect the lives of millions of people, all of whom live in the same time zone with said machine.

          They live in the same time zone now. What makes you so sure it will be the same time zone for the lifetime of the machine? More insidiously, what makes you so sure that that timezone's DST rules won't change, leading to timestamps that are correct half the year and then almost, but not quite, correct the other half of the year? Or worse, that are correct for 50 weeks of the year and then out by an hour for 2 weeks a year because the DST change date got moved?

          Better to use UTC and get those operators in the habit of converting every time.

          • pstuart 5 years ago

            > Better to use UTC and have their tools translate at display time

            FTFY

        • fnord123 5 years ago

          If you have very specific design constraints, please don't use advice from random people on the internet as though it's absolute gospel.

          • twic 5 years ago

            I am intrigued by the implied possibility of a project without specific design constraints.

          • falsedan 5 years ago

            the design constraint almost 100% matches 'my desktop computer in my house'. please don't vapidly declare some logocentric nonsense and assume people who know better can just ignore your comment.

        • icebraining 5 years ago

          > convert UTC in their heads

          Why would they do that, instead of using the computer to do it?

    • pbowyer 5 years ago

      Re his rant on ISO 8601: is there a better format to use, one that doesn't hardcode the offset?

      • kstenerud 5 years ago

        What would be really cool would be a URI spec for time, like:

        time:2021-08-02/09.00.00/Europe/Berlin

        time:2021-08-02/11.00.00/UTC

        etc

        • frenchy 5 years ago

          This seems like a surprisingly good idea.

        • Aloha 5 years ago

          this is a startlingly good idea.

  • simonw 5 years ago

    Only using UTC makes sense if you are recording exact moments in time - when a log line was generated for example - but it falls over when you start interacting with actual human times.

    If I say an event is happening at 7pm, I mean 7pm in my local timezone. If I record that upcoming datetime in UTC but I forgot about a daylight savings time change between now and then that doesn't mean the event will happen at 6pm - it's still happening at 7.

    I've done some projects where I've stored dates as a localtime, a timezone AND a denormalized, calculated UTC datetime (for lookup/sorting).
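
    A sketch of that three-field pattern (my illustration of the idea, not the actual schema): the local time and timezone preserve intent, and the UTC column is derived and can be recomputed if tz rules change.

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo

        def event_row(local_naive: datetime, tz_name: str) -> dict:
            aware = local_naive.replace(tzinfo=ZoneInfo(tz_name))
            return {
                "local_time": local_naive.isoformat(),  # "7pm" as the user said it
                "tz": tz_name,                          # where they meant it
                # denormalized, for lookup/sorting only:
                "utc": aware.astimezone(timezone.utc).isoformat(),
            }

        print(event_row(datetime(2020, 1, 15, 19, 0), "America/Los_Angeles"))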

    • ubercow13 5 years ago

      >If I record that upcoming datetime in UTC but I forgot about a daylight savings time change between now and then

      There are good reasons for not storing in UTC but this doesn't seem to be one of them. Surely the problem in this case is just that you incorrectly converted to UTC? It's just a bug.

      • mbreese 5 years ago

        Daylight saving time rules can change... Just because you stored it correctly now doesn't mean it is correct in the future (when it's needed).

        • aeorgnoieang 5 years ago

          > Daylight saving time rules can change

          Which is why all of the good date-time libraries come with the standard timezone database, which contains a full history of timezone changes. So, (carefully) using one of those libraries, you should be able to handle any date-time ever.

          • mixmastamyk 5 years ago

            Timezone dbs lag, so do not solve every problem.

        • ubercow13 5 years ago

          Sure but that is a separate issue - in that case you didn't 'forget' about it

          • anamexis 5 years ago

            But it's not an issue to begin with if you just store it in the correct local time zone.

            • ubercow13 5 years ago

              I am not advocating anything other than that

    • treis 5 years ago

      >If I record that upcoming datetime in UTC but I forgot about a daylight savings time change between now and then that doesn't mean the event will happen at 6pm - it's still happening at 7.

      This breaks down when you start having to deal with multiple timezones. Then whenever you do sorting or availability checks you have to account for DST/timezones. You will pretty quickly make a method that converts your timezone stored data to UTC. Better to just store it all as UTC so that your DB queries are much simpler.

  • ed312 5 years ago

    This is bad advice. If you convert the timezone you're now taking on the risk that your conversion was incorrect. You also lose the valuable information of the source time zone. UTC isn't perfect and is _far_ from the only viable timezone for a given domain.

    • xwdv 5 years ago

      Nope. You shouldn’t be doing your own timezone conversions. If you are storing a time into a database the database should be handling the conversion of your date to UTC. If you care about the original timezone store that elsewhere. Do not make one piece of data serve two different functions.

      • dwild 5 years ago

        > If you care about the original timezone store that elsewhere

        I think that was his point, that you should always care about the original timezone and thus keep it.

    • hobs 5 years ago

      Saying "We might have bugs" isn't a good reason to reject a proposal.

      Generally you store an offset/name of the zone, which then isn't losing the valuable information about the source zone - if you don't do this, you are right, this will fall down almost immediately.

  • misja 5 years ago

    Apparently I'm in the minority together with OP but I agree that storing dates as UTC is good advice.

    What most repliers seem to have overlooked is that OP is saying that timezones should be used for presentation. Maybe it was not clear, but that also means they should be used for input conversion. But once the dates have to be used by the business logic of your system, you really want to prevent the headaches you will get when you have to deal with timezone transformations everywhere. That really adds a ton of complexity, is easy to overlook, and above all, an error in timezone conversion can go undetected for a long time.

    • beckler 5 years ago

      I don't think anyone is saying that you shouldn't store datetimes in UTC. It's just that time is complex and the errors can be subtle, and you don't want to mess up appointments across timezones because you didn't factor in the timezone when the appointment was originally made.

      http://www.creativedeletion.com/2015/03/19/persisting_future...

  • forkerenok 5 years ago

    I agree with the sibling, _this_ is also bad advice.

    It really depends on the use case. For example:

    * If the date is used as a "low precision timestamp", then I'd advise using a full timestamp+tz and extracting the date in presentation after applying tz adjustments.

    * Sometimes a date represents a sort of fiscal date, which refers to a time period that captures business hours in a certain location. There it is a bit trickier, but slamming a tz on it is not an answer.

    As for timestamps, stick to UTC if you can, but in any case keep the TZ explicit. There are too many middlewares out there that would be happy to assume a tz for you and give you hell.

  • java-man 5 years ago

    fnord123, you are confusing Instant with ZonedDateTime (to use java classes as an example).

    An instant is a moment in time, expressed in java as milliseconds in UTC.

    A ZonedDateTime (if we ignore time portion of it for the sake of brevity) is a LocalDate (year-month-day) with a time zone. One can convert an instant to LocalDate given the time zone, or vice versa. But one cannot convert between the two if the time zone is missing.

    So, time zone is not a presentation format, it's an essential part of the data. One can, of course, store a LocalDate in a database, iff the timezone is somehow assumed (and hit the land mine when the assumption is violated by a recent change).
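
    The same distinction can be sketched with Python's standard library (an analogue, not the Java API): a date only becomes an instant once a zone is supplied.

        from datetime import date, datetime, time, timezone
        from zoneinfo import ZoneInfo

        d = date(2019, 6, 19)  # like LocalDate: a calendar date, not an instant

        # Supplying a zone is what turns it into a moment in time:
        local = datetime.combine(d, time(9, 0), tzinfo=ZoneInfo("Europe/Berlin"))
        instant = local.astimezone(timezone.utc)  # like Instant

        print(instant.isoformat())  # 2019-06-19T07:00:00+00:00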

  • karmakaze 5 years ago

    Don't dogmatically choose either. I had one case where it made sense not to use UTC, but it was hard to get across to those who were stuck on ALWAYS. The case was for photos taken around the world on a photo site. Each photo effectively has a place and a time at that place. Storing UTC would have shown NYE midnight pictures at an arbitrary hour based on the viewer's tz.

    • koyote 5 years ago

      I'd argue that in that case you store two values:

      1. The time the photo was taken in UTC (as usual).

      2. The UTC offset for the photo location.

      • mceachen 5 years ago

        I banged my head on this several times until I came up with basically the same conclusion.

        PhotoStructure stores both the raw EXIF value and a nullable timezone that tries to adopt siblings' timezones (normally only inferable when you have GPS tags, including GPS acquisition time, which is always in UTC). It also stores the source of the inference, so other heuristics know how much they can "trust" that value.

        I open-sourced the GPS location and offset inference code: https://github.com/photostructure/exiftool-vendored.js

      • karmakaze 5 years ago

        Others before tried just that. Photo metadata is notoriously inconsistent and incomplete. You're much more likely to have an accurate local time and not much else.

  • gpvos 5 years ago

    No. UTC-only is only good for log files. In all other cases, you should store a timezone offset with your dates.

  • raxxorrax 5 years ago

    Maybe unpopular opinion: UTC with offset is actually better, because it contains information that might be worthwhile.

    If I live in 2019-06-19T00:00:00+0200 and save the current time as 2019-06-18T22:00:00Z and don't have any additional information, there could suddenly be dragons...

    • kstenerud 5 years ago

      Timezone data should always be the actual timezone, not the offset. Offsets don't take into account the political nature of time. For example, in Europe in 2021, countries will cease to use daylight savings, changing their time offsets permanently.

      • jp_sc 5 years ago

        For that exact reason you should store three values: The UTC date, the timezone for presentation, and the offset at the moment the date was saved.

  • husainalshehhi 5 years ago

    UTC is a timezone. So his advice still stands. He probably meant that you should not use time without knowing the timezone, and UTC is the easiest way to attach timezone to a time instance.

maxxxxx 5 years ago

One thing I have learned in almost 30 years of SW development: work somewhere where the CEO has at least a basic idea of your work and sees value in your work, not just cost. Work somewhere where people care for the craft and it’s not only about “business” goals.

  • ellius 5 years ago

    Life is essentially a big collection of systems or games, and choosing to play the wrong ones is worse than playing the right ones badly. You can learn tricks to get promoted at some place that has a shitty culture, shitty bosses, etc., but at the end of the day all you'll have done is win the wrong game.

  • ravroid 5 years ago

    This is an excellent piece of advice.

    Of the five companies I have worked for, the two where I was disconnected from leadership were where I felt the least valued (and consequently the least motivated). At the three where I worked directly with leadership, I was treated with significantly more respect and felt a much stronger sense of purpose.

  • 1PlayerOne 5 years ago

    Can you give examples of companies that fit your description?

    • maxxxxx 5 years ago

      Apple under Jobs, Google, Facebook, MS under Gates, lots of smaller companies. Probably the ones where a founder is still CEO.

      • savrajsingh 5 years ago

        And the founder was a software engineer

physicles 5 years ago

> Keep a record of "stupid errors that took me more than 1 hour to solve"

I’ve been doing this for the last three years, but electronically, and it’s amazing. In my home folder there’s a giant text file that’s like my personal log file. It gets a new line at least once an hour, when cron pops up a box and I type in what I’m working on. And for those times when I’m working on stuff that’s perplexing, I write to it as a stream of consciousness — “tried this, didn’t work” “wonder if it could be due to xyz” “yep, that was it.”

I’ve used it a few dozen times to either look up how I solved a weird problem, or why I decided to do something a certain way. I’ve even built pie charts of how I spent my time in the last quarter by sampling it Monte Carlo style.
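
A minimal sketch of that setup (my reconstruction, not physicles' actual script): an hourly cron job runs a small prompt that appends a timestamped line to the log.

    #!/usr/bin/env python3
    """Prompt for a one-line status and append it to a personal log file."""
    import datetime
    import pathlib
    import tkinter
    import tkinter.simpledialog

    LOG = pathlib.Path.home() / "worklog.txt"  # hypothetical location

    root = tkinter.Tk()
    root.withdraw()  # hide the empty main window; show only the dialog
    entry = tkinter.simpledialog.askstring("Log", "What are you working on?")
    if entry:
        stamp = datetime.datetime.now().isoformat(timespec="minutes")
        with LOG.open("a") as f:
            f.write(f"{stamp} {entry}\n")

    # crontab line (DISPLAY is needed so the dialog can appear):
    # 0 * * * * DISPLAY=:0 /usr/bin/python3 $HOME/bin/worklog.py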

  • AnimalMuppet 5 years ago

    I keep a log of "stuff I had to look up how to do". Mine isn't "stupid errors" so much as stuff like "what are the flags to get the compiler to show you the assembly code it produces".

  • stormthebeach 5 years ago

    Do you have this cron script hosted anywhere?

strken 5 years ago

The description of cognitive dissonance is wrong. It's not holding two things in your head at once[0], it's the feeling you get when you believe two contradictory things.

[0] that's closer to https://en.wikipedia.org/wiki/Working_memory

  • simplify 5 years ago

    Yes, and it's also not as severe or "a sign of being stupid" as many think. Cognitive dissonance occurs in situations as simple as walking to a store and discovering the store is closed.

    • 1123581321 5 years ago

      It’s also used in marketing to describe what happens when a buyer’s expectations are not met by the product - there it's used entirely to fault the seller and the seller’s marketing.

robodale 5 years ago

> Learn when you can't code anymore. Learn when you can't process things anymore. Don't push beyond that, it will just make things worse in the future.

I cannot agree more. (18-year developer here.) On those days when I just kept pushing, I mostly ended up correcting my poorly-created code the next day. When you know you're done, be done, or at least switch to the lowest mental energy task you can do. Sometimes staring into the screen is that lowest mental task.

  • diggan 5 years ago

    > Sometimes staring into the screen is that lowest mental task

    For the sake of your eyes (and perhaps well-being), don't just stare into the screen without doing anything. If you have a free moment, take your eyes off the screen and look out the window (or at nearest wall if you have none)

    • WhompingWindows 5 years ago

      This is when I take a slow walk around the building or outside for a few minutes.

bobm_db 5 years ago

I really like this one:

> Documentation is a love letter to your future self

This seems like a brilliant way of transcending what feels like a drudge-job, making it into something really meaningful.

  • snarf21 5 years ago

    Not that documentation is bad, but it is rarely correct, and then you have to spend time trying to reconcile documentation that doesn't match the code you see.

    I always prefer a different approach. There is no need to document what a function does, that is self evident from stepping through it. The thing that should be included in the code is the WHY, meaning why does this function even need to exist and how does it fit into the architecture?

    Now you have an accurate high-level architecture diagram and each function's raison d'être telling you everything you need to know, with much less risk of the documentation being inaccurate or out of date.

    • brianpgordon 5 years ago

      > Not that documentation is bad but it is rarely correct. Now you have to spend time trying to match documentation that doesn't match the code you see.

      It depends on what you mean by documentation. If you mean articles on Confluence or whatever, yeah, that's hopeless. If you mean comments in the code, I couldn't disagree more.

      I don't buy the common argument that comments easily become inconsistent with the code. I can't imagine changing a line of code and completely ignoring the comment above it, and I'd definitely require a fix if I saw someone else do that in a code review. That's just a ridiculous level of sloppiness.

      > There is no need to document what a function does, that is self evident from stepping through it.

      I'm afraid I disagree here too, though again conditionally depending on what you mean. Straightforward logic shouldn't be littered with superfluous comments. But on the other extreme, the fast invsqrt algorithm needs more of a comment than the canonical "what the fuck?"

      Yes, given weeks of head scratching any arbitrarily-dense block of code will release its secrets, but that's not good engineering. Anyone can build a bridge by dumping a hundred billion dollars of concrete into the bay until it's dry; good engineering is doing it cost-effectively. A well-placed comment in some non-obvious logic can save time for a developer (likely you) every time they interact with it.

    • mannykannot 5 years ago

      > Not that documentation is bad but it is rarely correct.

      Rarely? I often turn to the documentation to check the capabilities, syntax and semantics of the languages and interfaces that I use. If it were only rarely correct, I would be in trouble.

      If you are talking about application code specifically, my experience is that documentation is too rarely present to be an issue.

  • tomelders 5 years ago

    Writing code is also UX. Some future user will have to understand it. More often than not, that future user is you.

    • ehnto 5 years ago

      Which is why: Verbosity is not bloat. Abbreviations are a code smell.

      I remember encountering a platform in a language before namespacing was introduced in that language, and it had names for the classes like `Platform_Catalog_Model_Product_Simple_Collection`. It looked really silly, but I always knew what I was working with at a glance. It was honestly a really nice developer experience.

  • WhompingWindows 5 years ago

    Usually I write "why" documentation before code, that way the purpose is clear as I code it and later when I re-read or share it.

  • aeorgnoieang 5 years ago

    I like to include (mostly-professional) jokes or humor in my comments and documentation as it's pretty nice to read later.

sethammons 5 years ago

> People get pissed/annoyed about code/architecture because they care

> ... Yeah, you don't like that hushed solution 'cause you care" was one of the nicest things someone told about myself.

This is very similar to a foundation that we repeat at work. Everyone is trying to make things better, we might not agree on the how, but we must agree on the intent. When you assume good intent, conversations go much, much better. It can be a great way to frame disagreements. "We are disagreeing here because we care."

shakna 5 years ago

> For a long time, I kept a simple programming rule: The language I'm playing at home should not be the same language I'm using at work. This allowed me to learn new things that later I applied in the work codebase.

Though I've found this helpful in expanding what I can do, or how well I can do it, what using a different language at home than at work does best is prevent burnout.

I can be reckless, and play with the language, because I don't have the tired patterns of my day trying to rigidly enforce best practices and cover all edge cases.

My mind can relax and do the wrong thing if I want, and I don't have the autoformatter or linter in mind. I don't have a dozen rules that will spit red errors if I step out of line.

Those things are all good for work. The application must be solid, and the user deserves stability.

But when I'm hacking together a game for myself that I never intend to release, I just want to experience the fun side of programming without the hard part.

vmlinuz 5 years ago

Haven't read all of these yet, but a couple stood out as matching recent experiences...

"Future thinking is future trashing": we had a good engineer with a bad tendency to write complex generic code when simpler specific code would work. In one case, he took a data structure and wrote a graph implementation on top of it - it was only ever used once, by him, and I recently rewrote it in about 15 lines of structure-specific code. In another case, he wrote a query builder to make it easy to join data from a new service with our legacy DB, which makes the high-level code simple, but the full implementation much more complicated, and means incoming engineers can't obviously see what implementation to use. His last week at the company was... last week.

"Understand and stay way of cargo cult": Today, I was reviewing someone's code, when I saw a language construct I didn't recognize, and which made no sense to me. I asked him what it was meant to do, and if he could give me a link to somewhere which described the construct - he said no, he'd mostly just copied it from someone else's code. I asked 'someone else', and he pretty much said it was useless, he's worked in so many languages that he gets confused about language idiom/construct sometimes, he writes his code like he'd write a letter to a friend, and it wasn't like anyone was reviewing his code at the time anyway. I said I wasn't going to take his stuff out of the existing codebase, but I wouldn't let anyone copy his useless code in future, and linked him to https://blog.codinghorror.com/coding-for-violent-psychopaths...

WalterBright 5 years ago

> You can be sure that if a language brings a testing framework -- even minimal -- in its standard library, the ecosystem around it will have better tests than a language that doesn't carry a testing framework, no matter how good the external testing frameworks for the language are.

D not only has it as a library, it's a core feature of the language. As incongruous as it sounds, builtin unittests have been utterly transformative to how D code is written. Built in documentation generation has had the same effect.
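
Python's standard library makes the same point: a test is one import away, so tests actually get written. A minimal sketch (slugify is an invented example function):

    import unittest

    def slugify(title):
        """Lowercase a title and join the words with hyphens."""
        return "-".join(title.lower().split())

    class SlugifyTest(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_collapses_extra_whitespace(self):
            self.assertEqual(slugify("  Hello   World  "), "hello-world")

    if __name__ == "__main__":
        unittest.main()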

edw519 5 years ago

Excellent post! Thank you, Julio!

This could make great bulletin board material in some other format, but there's just too much here for that.

I started my reply by cutting and pasting the gems with my little follow-ups, but quickly realized that would take all day. There's just so much good stuff here. So instead I narrowed it down to just one:

Code is for humans to read. ALWAYS. Optimization is what compilers do. So find a smarter way to explain what you're trying to do (in code) instead of using shorter words.

This really strikes a nerve because in 40 years of programming, most of my problems have not been with building code myself, but with trying to figure out what my predecessor had written. I bet I've wasted 20 years of my life refactoring the crap of others who would have benefited greatly by reading posts like this one first.

There's lots to think about. Lots to smile and agree with. And lots to disagree with (OP reminds me a little of myself, a caveman who often claims to know "the one right way" :-)

If you don't have time to read this now, bookmark it and come back later. One good idea from this can make a big difference.

  • maxxxxx 5 years ago

    "This really strikes a nerve because in 40 years of programming, most of my problems have not been with building code myself, but with trying to figure out what my predecessor had written. I bet I've wasted 20 years of my life refactoring the crap of others who would have benefited greatly by reading posts like this one first."

    I also curse a lot about code my predecessors have written but I bet my successors will curse at my code too :-)

blablabla123 5 years ago

> Learn to recognize toxic people; stay away from them

> You'll find people that, even if they don't small talk you, they will bad mouth everything else -- even some other people -- openly.

So true. It has happened to me so often - at most brown-field projects, actually - that when I get onboarded, someone tells me what a bad state this and that is in, even how bad this or that person is supposed to be. Although I tend to be optimistic when the project seems nicely challenging at first (not only because of the code), I somehow take on this negative attitude for the rest of the project.

I think people should be able to talk informally about projects, even gossip sometimes. But it has its limits; team/project leads especially should IMHO be really careful about taking negative stances on certain projects/people/teams. In fact, the line between that and mobbing is a fluid one.

For me this is the single biggest annoyance in any job.

  • ShamelessC 5 years ago

    > So true. It has happened to me so often - at most brown-field projects, actually - that when I get onboarded, someone tells me what a bad state this and that is in, even how bad this or that person is supposed to be.

    I certainly don't want to encourage any sort of toxicity like this. I will say, however, that I recently started a new job. People haven't been particularly welcoming so far but I finally managed to bond with someone over a discussion of the skill gap that exists between some of the employees and the lack of quality code produced by a few of the stubborn veterans.

    It wasn't just a bonding experience either. I was able to learn more about the nature of the organization and how I can help to improve things in the future. It also had a calming effect because I was originally pretty intimidated by some of these veterans but now I know they're just flawed humans like the rest of us.

    • blablabla123 5 years ago

      Pretty close to that was also my last experience. But having seen it before, I wasn't intimidated by all the accomplishments. In fact, running a linter objectively showed the overall low code and design quality.

      Anyhow, complaining doesn't help; trying things out and gradually improving does. In fact it's possibly even a good opportunity, because nobody else wants to touch such code, so there's room for creativity (within the limits the stakeholders are fine with, of course).

js8 5 years ago

I think the most difficult lesson is that good software is like cheese or wine. You have to let it age, to find the proper way to approach the problem; adding more developers will not make the process faster.

peterwwillis 5 years ago

Something I learned: K.I.S.S. is the most important principle in technology, but keeping technology simple while still adding value and utility is really hard.

Example: how can we improve on the design of the bicycle, but keep it simple? You could add expensive composite materials, design a new gearing mechanism, create new complex geometries based on wind tunnel experiments... all of those are more advanced, but complex; simple is a tough nut to crack.

chamilto 5 years ago

> When you're designing a function, you may be tempted to add a flag. Don't do this. Here, let me show you an example: Suppose you have a messaging system and you have a function that returns all the messages to a user, called getUserMessages. But there is a case where you need to return a summary of each message (say, the first paragraph) or the full message. So you add a flag/Boolean parameter called retrieveFullMessage. Again, don't do that. 'Cause anyone reading your code will see getUserMessage(userId, true) and wonder what the heck that true means.

What about in languages with named parameters? I think keyword arguments solve this issue pretty well.

  • gregmac 5 years ago

    Another option is to use an enum, so it looks like getUserMessage(userId, MessageResultOption.FullMessage). Even if there are only two options, it's more readable than a boolean.

    I'd question this specific case in a different way, though: summary vs detail sounds like different object types, which means different return types and so there should be two different methods (eg: getUserMessageSummaries(userId) and getUserMessageDetail(userId) ). It's perhaps a bit more work up front, but when consuming this you don't end up with a bunch of properties that may or may not be null depending on how you called this (instead, they're just not there on the summary response), and in a typed language it will simply fail to compile (making your mistake very obvious).

  • ryanianian 5 years ago

    Similarly: over time almost every boolean becomes an enum. I'm a bit surprised the article doesn't mention enum parameters as a possible alternative to booleans.

    In this example the call-site could instead look like `getUserMessages(userId, MessageStyle.SUMMARY)`. Naming the enum is always harder than naming the boolean but that's kinda the point.
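
    In Python that might look like this (a sketch; the names follow the thread's example):

        from enum import Enum

        class MessageStyle(Enum):
            SUMMARY = 1
            FULL = 2

        def get_user_messages(user_id, style):
            """Return the user's messages, truncated per the given style."""
            ...

        # The call site now documents itself, unlike get_user_messages(42, True):
        get_user_messages(42, MessageStyle.SUMMARY)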

  • joncp 5 years ago

    Perhaps another way to word it is: "Adding a boolean parameter is a strong indication that you're about to cross the single-responsibility threshold. Try to think of different ways to structure the code in which the bool isn't needed."

  • EdgarVerona 5 years ago

    This one's definitely a weakness I still have. I read that and thought "yeah, I do this in a lot of places and often end up regretting it."

    I think named parameters don't actually help, because the underlying problem with the boolean (aside from the mystery of what it means when reading the calling code) is that it implies a potential separate "mode" of operation for the function - that the single function might actually serve two different purposes. It doesn't always imply that, I imagine, but it's pretty likely.

    I'm guilty of this in my code, and I know my code quality has suffered for it. For some reason, the way he put it in this article gave me a moment of introspection, and it's something I'm going to take away and use to improve my own code in the future.

    • tabtab 5 years ago

      However, using multiple names can create a combinatorial explosion. If you have 3 Boolean parameters, then you need up to 8 functions. I think named parameters are probably a better option.

      • pavon 5 years ago

        Only if you really need all combinations of those options. The flip-side is true as well - multiple booleans creates a combinatorial explosion of corner cases that you have to test for. I have definitely been guilty of adding flags to functions that really should have been separate functions, and being bit by permutations that were not tested. Some of which we couldn't even decide what the proper behavior ought to be when we went back to fix the function.

        • tabtab 5 years ago

          Wouldn't those same combos need to be tested regardless of which interface technique is used? If there are 8 variations then there are 8 variations regardless of the name-centric or flag-centric approach.

          Note that sometimes I add an optional Boolean parameter to avoid breaking existing method calls. The default of the new optional flag is the original behavior. It's a case of being backward compatible versus changing more code to fit a revamped interface.

  • physicles 5 years ago

    I agree. The only problem I see with “boolfixes” (as an old colleague called them) is the call site readability issue that the author pointed out. If you have the purpose of the bool right there at the call site, as with mandatory keyword arguments in Python, that issue goes away.

  • m4r 5 years ago

    Everyone who works on project A uses a properly configured IDE, so when you take a look at the code you'll actually see something like `getClient(liveMode: true)` instead of just `getClient(true)`.

    Is there really a point in creating an enum or writing more complex internal logic if you use a proper IDE?

    The only benefit I can see is doing CRs via a GitHub-like system - but still, I do not believe in CR without actually pulling, viewing, and testing the code on your own computer.

    • brianpgordon 5 years ago

      Even as a person who adores my properly-configured IDE with that feature, I vehemently support the idea of IDE-independence for even internal projects. Requiring developers to use a particular IDE evokes the dark days of Java projects based on Eclipse's build system. Developers are happy and productive with their pet tools, and discontented and resentful when forced to use tools they hate. The choice of editor/IDE is one of the most personal that a professional developer makes. Let's ship code that stands on its own (with an independent build system for heaven's sake) and not even implicitly force the use of particular IDEs.

      • m4r 5 years ago

        I see your point, however, function parameter lookup is present in all IDEs I've tried/worked with (VSC, Atom, Jetbrains').

        Are boolean flags an antipattern? Well, it all depends on the workflow. I follow the ideology of making the thing work first, then making it perfect. If introducing a flag saves me two hours that I can spend delivering on time - hell - why not? Knowing that everyone who works with me uses a modern text editor or IDE, they will know immediately what the parameter is doing. Also, I've observed that most people have the habit of `fn + click`-ing into functions they've never seen before, so they will learn about the parameters one way or another.

        Cheers

  • rntz 5 years ago

    If the name is mandatory, as it is in Smalltalk, then this basically solves the issue.

    If the name is optional, as it usually is in eg. Python, then there is a temptation for the writer to omit it, which means the reader won't know what "True" or "False" stands for.

    Are there languages besides Smalltalk and its descendants with mandatorily-named parameters?

    • rand_r 5 years ago

      In Python 2, if you need to, you can force people to use kwargs by defining the function in the form of

        def f(*args, **kwargs):
      
      and then making sure any params you use in the function come from kwargs, and raising an error if args is non-empty.
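
      Roughly like this (a sketch; get_message and full_message are placeholder names):

        def get_message(*args, **kwargs):
            # Force keyword use: positional arguments are rejected outright.
            if args:
                raise TypeError("get_message() accepts keyword arguments only")
            full_message = kwargs.pop("full_message", False)
            if kwargs:
                raise TypeError("unexpected arguments: %r" % kwargs)
            return "full text here" if full_message else "summary here"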

      Python 3 has a nicer way of doing the same thing, that I can’t remember.

      (Sourced from the book “Effective Python: 59 Specific Ways to Write Better Python”.)

      • Cpoll 5 years ago

        In Python 3 at least, anything after the *args is a keyword-only argument - and required, if it has no default:

            >>> def a(b, *args, c): print(c)
            >>> a()
            Traceback (most recent call last):
              File "<stdin>", line 1, in <module>
            TypeError: a() missing 1 required positional argument: 'b'
            >>> a(3)
            Traceback (most recent call last):
              File "<stdin>", line 1, in <module>
            TypeError: a() missing 1 required keyword-only argument: 'c'
            >>> a(3, c=5)
            5
        
        It's also valid to define it this way, if you don't need the *args:

            >>> def a(b, *, c): print(c)
      • dbcurtis 5 years ago

        Please tell me you have not been coding in Python for the last five years.

therufa 5 years ago

> Data flows beat patterns (This is personal opinion) When you understand how the data must flow in your code, you'll end up with better code than if you applied a bunch of design patterns.

This one is priceless. I had so many arguments in the past with cocky devs about this topic. Don't get me wrong, patterns are good, but they're only a small part of coding. Also, patterns aren't there to be followed as if they were the law; bend them to your needs. One doesn't even necessarily need to be implemented 100% as it is in the "book". So yes, I'm very fond of this part of the post; it's really wise.

  • cgrealy 5 years ago

    I cringe whenever I hear someone talking about "what pattern to use here" (I cry if the answer is "singleton").

    Patterns are a way of describing code. They are meant as a common language to use when explaining a codebase to another dev. If you are writing code with the express intention of "using X pattern", you're doing it backwards.

Const-me 5 years ago

Most of them are awesome, but some are questionable.

> don't add flags/Boolean parameters to your functions.

Many languages have strongly-typed enums for this, which makes the calling code readable, e.g. getUserMessages( user, eMessageFlags.Full ). Especially useful for more than one flag.

> ALWAYS use timezones with your dates

UTC usually does the trick?

> UTF8 won the encoding wars

UTF-8 is used on the web a lot, but OP seems to be generalizing their experience over all programming, not just web development.

> when your code is in production, you can't run your favorite debugger.

No but I can run my second favorite, WinDBG. A crash dump .dmp file is often more useful for fixing the bug than any reasonable amount of logging.

> Optimization is for compilers

Compilers are not that smart. They never change RAM layout. I recently improved a C++ project's performance by an order of magnitude by replacing std::vector everywhere with std::array (the size was fixed and known at compile time). Before I did, the profiler showed 35% of the time in malloc/free, but the real cost of the vectors was much higher because of RAM latency.

agentultra 5 years ago

Type systems exist on a spectrum. A rich type system is better at documenting your code than comments are, if the compiler and its type system are sound: the comments can be wrong, the types cannot.

A sufficiently expressive type system can check your logic for you. Push as much of your propositions and predicates into the types as possible.

Data types are better than exceptions, return codes, etc for the majority of unexceptional cases. Conditions and restarts are better than exceptions. Sometimes crashing is not an option when your computer is flying in the air with a human.

In addition to specifications: often a bunch of boxes with arrows will not be sufficient. There is a range of tools to check your designs, from automated model checking to interactive proofs. When the problem is difficult to understand on its own, the solution is probably even harder -- a good sign you should consider using one of these tools to make the problem clear and guide you to the right solution.

  • JamesBarney 5 years ago

    I agree with everything you said, except "the comments can be wrong, the types cannot". If you're arguing comments are valuable but type constraints are a better way to express requirements, then I agree, and sorry for the long post. But I've seen a lot of comment hate on Hacker News and I feel like I have to defend comments.

    I really think comments can be especially valuable. They're valuable when reviewing code because they tell the reader what the developer was attempting to do, not necessarily what they did.

    Not everything is expressible in every type system. Even something as simple as "this reference can't be null" isn't expressible in many languages.

    Comments are easier to read than type names; a sentence is much more expressive than code. Especially for code that is tricky due to performance concerns or domain complexity.

    Type and method names can rot too. If you have a method whose name starts with "get" and someone adds a little updating logic to it, the method name becomes misleading.

    And comments can be kept up to date easily by just adding an item to your code review checklist "Ensure comments match code".

    • agentultra 5 years ago

      Comments are useful and type constraints are a better way to express specifications.

      How valuable comments are varies with the expressivity and soundness of the type system employed. In my experience there's an inverse relationship: the more expressive the type system, the less valuable the comments, insofar as richer types can express more, with less ambiguity, than prose. In a way this makes the comments that do exist in such a codebase even more valuable, since they describe far less and are probably filled with interesting information, such as why certain propositions are important, etc.

      However when you're working in languages without a static type system or a rather weak one, comments can be necessary and invaluable.

      Update

      To clarify: in a type system as rich as a dependently typed language's, given an inductive definition of vectors and a type, Agda, for example, can often find the solution (the term) that implements the type. You could add a comment if you wanted to, but you'd only be elaborating on what is already obvious from the type signature.

      This does scale beyond trivial examples such as vector products, into the realm of business logic; it's possible to encode state machines this way. Comments on this kind of code end up looking superficial.

      A more common FP language like Haskell, often derided (even by me) for its lack of commenting and documentation, seems to be that way precisely because once you get used to reasoning about types, comments get redundant... and the compiler cannot check that comments are correct with respect to the term-level code. The comments are sparse and rich instead of redundant and full of exposition.

      That's not to say that comments don't exist! Other posters have given many good examples where they are definitely necessary.

      I subscribe to the rule that there's a balance: too many comments are as unhelpful as too few.

    • brianpgordon 5 years ago

      I'm on team comments too. The article puts it well:

      > When you start the code by writing the documentation, you're actually making a contract (probably with your future self): I'm saying this function does this and this is what it does.

      > If later you find out that the code doesn't match the documentation, you have a code problem, not a documentation problem.

      "Self-documenting code" is, in my opinion, a silly canard. Maybe it's appropriate for some brain dead glue logic or frontend boilerplate, but any time you really need to think about what the code is doing, a comment will be immensely helpful. I don't at all buy the argument that comments too easily get stale. They won't get stale if you maintain them.

      • gregmac 5 years ago

        Honestly, there's very little code where comments on the function calls can't be helpful. Let's use GetUserDetails(userId)

        A comment like "gets the details about a user" is of course a waste of time.

        What I like to see is everything about using it, without having to read the code. For example: "Gets details for a user, based on the Service.CurrentUser access rights. If userId doesn't exist, is not a valid id, or the current user doesn't have permission, throws an exception. Note this can return data for disabled and deleted users, so if used for non-admin functionality, you should check the State property. Due to caching, this might not show changes made in another session in the past couple of minutes."

        The lack of a check for disabled status (for example) won't be obvious if I'm not actively thinking about checking disabled status. Those few seconds spent on a comment can easily prevent a bug when someone consumes this - especially if they didn't write this API or are new to the codebase (maybe they don't even know disabled users are a thing).

        Sometimes when you write these types of comments you realize your naming is bad, or that there's an API change that would make it easier to avoid a mistake - such as having separate GetActiveUserDetails() and AdminGetUserDetails().
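
        Written as a Python docstring, the example above might look like this (names taken from the comment; the behavioral details are of course invented):

            def get_user_details(user_id):
                """Get details for a user, based on Service.CurrentUser rights.

                Raises an exception if user_id doesn't exist, isn't a valid
                id, or the current user lacks permission. Can return data for
                disabled and deleted users, so non-admin callers should check
                the State property. Due to caching, changes made in another
                session in the past couple of minutes may not be visible.
                """
                ...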

        I think about this during code reviews, too: I've often asked for comments to be added on existing methods being consumed (but not modified in the code I'm reviewing), because the new use made me wonder how something would be handled and I had to read a couple of levels deep to figure it out.

        • agentultra 5 years ago

          Agree. And in a richly-typed language you can be more specific:

              getUserDetails :: UserID -> DBAction (Maybe AnyUser)
          
          This type says a lot already. The definition of `AnyUser` would specify the parameters that the comment only says in prose. Or we could be more specific

              getActiveUserDetails :: UserID -> DBAction (Maybe ActiveUser)
          
          We could also include the authorization specification in the type constraints on the `DBAction`:

              getUserDetails :: (DBAction m, HasAdminContext m) => UserID -> m (Maybe AnyUser)
          
          This is what I mean when I say that a rich type system can enforce a lot of what you might traditionally use comments for in a weaker type system. Now my hypothetical comment might highlight some salient points about this function, such as: "performs N queries to validate all of the available authorization contexts; use with a user role that has a limited number of contexts for best performance."

  • mannykannot 5 years ago

    Of course, if a compiler and its type system are sound, then the fact that certain errors are eliminated follows trivially. But in practice this does not prevent types (and especially those of the rich type systems you are advocating) from having bugs in their implementation. Take, for example, floating-point numbers in Excel.

    Even something as fundamental as floating-point types needs documentation, which is why we have IEEE 754.

SamuelAdams 5 years ago

For as much grief as Medium gets, it sure would help blogs like this. The font color is a poor choice considering the background, and the font itself isn't that common for reading long-form text.

Not saying Medium is perfect, but it does help standardize the display of all these random, one-off blogs.

  • jp_sc 5 years ago

    I strongly disagree. This is like saying that every store everywhere should be a Walmart so one doesn't have to adapt to the organization of other stores.

    In my opinion, the style of the blog is as important as the content because we are dealing with the self-expression of a person here. The colors and fonts were conscious decisions, and presenting the data in the most efficient and standardized form is not the first priority.

    (Also calling it a "one-off" blog is kind of mean. Sure it might the first and the last time you ever read a post there, but that blog has more than one hundred posts, so it's not a "one-off" for the author.)

    • astine 5 years ago

      Right. The font at least is a monospace font common in terminals. It's a clear conscious design decision that technical readers are likely to pick up on.

  • lostmyoldone 5 years ago

    People have a right to their individuality, quirks and all. Claiming the article/text would be 'helped' by Medium implicitly asserts that your desire for the lowest possible reading effort is more important than a modicum of individual preference - and expression - from the writer. Respecting individuality is important, whether in how we style our text, dress ourselves, or look in any other way.

    It's fine if you don't like it, but then say that.

    Say "I didn't like the typography and colors, I prefer $style" if you really have to, but still, why?

    Bright on dark, fixed width, sans serif.

    Pretty much the colors and typography of programming from the first terminals onward, lasting - with some interruptions - into the modern day. Given the audience, the style could be said to be fitting, especially with the current retro trends.

  • enlyth 5 years ago

    Press F9 if you're on Firefox :)

  • kgraves 5 years ago

    Alternatively, I use reader/distill mode. The color scheme on this blog made me squint so much it almost damaged my eyes, and I nearly stopped reading until I switched reader/distill mode on.

    I agree, for a much wider (programming) audience, Medium would suit better for this blog.

  • abledon 5 years ago

    disagree, this was 20x more enjoyable to read than on medium's website.

  • sadness2 5 years ago

    Small price to pay?

chewxy 5 years ago

> A long time ago, a small language made the rounds by not evaluating expressions when they appeared, but when they were needed.

> Lisp did this a long time ago, and now most languages are getting it too.

I'm not sure this is correct. You can probably write a macro for lazy eval, but by default Lisp is a CBV (call-by-value) almost-lambda-calculus.

Or you could pass around quoted things, I guess

  • phoe-krk 5 years ago

    F-expressions were a thing back in the day. https://en.wikipedia.org/wiki/Fexpr

    Possibly the closest thing available in Common Lisp are Lisp macros - they receive their arguments unevaluated, and are free to utilize that input for further computation.

    • chewxy 5 years ago

      TIL. I only came across fexprs briefly on LtU once and never looked more into it. Thank you.

nunez 5 years ago

> Understand and stay away from cargo cult. "Cargo cult" is the idea that, if someone else did it, so can we. Most of the time, cargo cult is simply an "easy way out" of a problem: Why would we think about how to properly store our users if X did that?

_cries in kubernetes_

coryfklein 5 years ago

OMG I never have encountered this idea for function deprecation in libraries but I think it is hilarious:

> (A dickish move you can do is to create the new functions, mark the current function as deprecated and add a sleep at the start of the function, in a way that people using the old function are forced to update.)

This idea is sure to waste lots of people's time, lead to angry emails, and is probably worse than just changing the function signature in place without any deprecation schedule at all.

GuB-42 5 years ago

That's interesting because even if I agree with most technical points, I went the opposite way for most of the general points.

The big one is "spec first, then code". I used to believe that, and wasted time making plans for end results that turned out completely different. Now I prefer to start coding right away. Anything goes; it just has to be code.

Why code? That's because it keeps my feet on the ground. And because it is the early phase, I can still change things. Chances are that most of that early code will be scrapped, maybe several times. No big deal, I may have lost a few days of coding, but I have a much clearer view than if I spent that time writing specs.

As for documentation vs code, I put all my experience points into reading and understanding code. I sometimes even force myself not to read the documentation, going straight for the code instead. Comments are lies. For me, documentation will always be an afterthought, written for others. I have the code, and if I have trouble reading it later, that's because it is bad code, not because it lacks documentation; I try to learn my lesson and maybe rewrite the offending parts.

For the social part (codes of conduct, micro-aggressions, etc.), my strategy when things become unpleasant is to go technical. The bigger the asshole facing me, the more deeply technical I go. Two things can happen here. Either the asshole is incompetent, and because he doesn't want to look too much like a fool, he usually leaves me alone. Or the asshole turns out to be really good, and in that case the interaction may not be the most pleasant, but at least there is something to learn.

wiz21c 5 years ago

Well, mine: Unicode is trickier than you think; some date-times don't exist; transactions are trickier than one (usually) thinks; don't use floating point for money (because many people don't understand floating point); a quick fix always introduces another bug; any contract (WSDL, JSON Schema, whatever) is full of holes; integrate with others first; legacy databases are full of bad/unexpected data; ...
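
On the floating-point-for-money one, the classic Python demonstration:

    from decimal import Decimal

    # Binary floats can't represent 0.1 or 0.2 exactly, so the error leaks out:
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False

    # Decimal does exact base-10 arithmetic, which is what money needs:
    print(Decimal("0.10") + Decimal("0.20"))                     # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True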

bluejekyll 5 years ago

> Unit tests are good, integration tests are gooder

I would personally rephrase this as unit tests are necessary, as are integration tests, as are end-to-end tests.

The testing pyramid: write most of your tests as unit tests, then write a bunch of integration tests, and finally throw in a few end-to-end tests.

> Tests make better APIs

Yes! This is such an important point. A perfect example I love is that globals and even thread-local values make testing hard, and that’s reason enough to avoid using them.
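
A tiny sketch of why (a made-up example):

    # Hard to test: the result depends on hidden global state, so every test
    # has to mutate the global and remember to restore it afterwards.
    VERBOSE = False

    def greet(name):
        return ("Hello, dear " if VERBOSE else "Hello, ") + name

    # Easy to test: the dependency is an explicit parameter, so each test
    # simply passes in the state it needs.
    def greet_explicit(name, verbose=False):
        return ("Hello, dear " if verbose else "Hello, ") + name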

> Make tests that you know how to run on the command line

Again great advice. I would also add, get your tests running as early as feasible. This saves you rewriting APIs and other decisions that might make testing, even deployment (which you need for end-to-end tests), significantly harder.

Anyway, this entire set is a great read, and I agree with much, but not all.

  • sadness2 5 years ago

    I'm with Julio on this one. The pyramid ought to be a diamond, with the integration tests as the fattest part. This is a level where tests can live at the component level, where they can correspond to specifications comprehensible to stakeholders (which helps keep developers accountable and focuses test effort on things that really matter), and where refactoring of component internals does not require corresponding refactors of test code. This creates challenges, 90% of which are resolved by applying the same good development practices (re-use, abstraction, performance) to your test code that one is accustomed to applying to application code.

    • sbov 5 years ago

      Many people consider module-level tests to be unit tests.

      Integration tests would mix multiple modules.

      Depending upon the specifics of your application, unit tests at a lower level than the module level may be suitable. E.g. complicated algorithms a module might internally rely upon.

      Our language around testing is missing a slot. We have unit, integration, and end-to-end; it should be unit, module, multi-module, and end-to-end. Depending on the specifics of the application, you might have more unit- or module-level tests.

      • sadness2 5 years ago

        > unit, module, multi-module, and end-to-end.

        A great semantic insight.

        I still prefer to test complicated algorithms via the external interface. It does end up requiring some scaffolding code for generating inputs from test cases, but code is for automating repetitive tasks, and computers are pretty fast these days, so it's rarely a problem, even for many thousands of test variations. It enables you to test the interactions between your various unit-level variations, which is more likely to catch errors. Again, it also helps make your tests visible, along with the edge cases you're testing; many times developers working at the unit-test level are way more thorough than they actually need to be. Couple this with efficiencies around refactoring and it can actually save time.

    • bluejekyll 5 years ago

      You seem to imply that I think integration tests aren't valuable. But I'm not saying that. What I'm saying is that it's easier to produce unit tests that cover sections of code and test their contract across edge cases than it is to do the same with integration tests.

      An example: Let’s say you’re writing a DNS resolver.

      You need to parse all of the various RDATA portions of records. It’s far easier to write an exhaustive suite of unit tests that verify the code is parsing those data elements properly and throw in many tests for edge-cases that might mean insecure software if done incorrectly.

      But it’s absolutely invaluable that we also have integration tests that verify an entire Message parses, and even moves through the entire stack from parsing to response. But why construct an entire Message when all you need to test is the RDATA? It’s easier and less complex test code to localize the RDATA test, and then to do a smaller set of Message based tests that verify the message moves through all the stacks properly.

      Then, popping all the way up the stack, you actually need to generate network traffic, send real DNS queries, and verify that those work. That means multiple processes in your end-to-end test. Trying to produce a full set of tests that validate individual edge cases in all RDATAs would be an extreme amount of redundant code, and it would also fail for many reasons unrelated to an RDATA change, making failures more time-consuming to track down.

      Each area of tests is absolutely important and all are necessary for different reasons.

    • twic 5 years ago

      The model I like, keeping to the monumental theme of the test pyramid, is a testhenge:

      http://www.ox.ac.uk/sites/files/oxford/styles/ow_medium_feat...

      You need a thin layer of high-level tests (controller and/or browser) covering the entire application, and then columns of lower-level tests (integration and unit) diving deeper into it where there are areas of particular complexity or risk.

dheera 5 years ago

> "Don't use Booleans as parameters"

... except I think it's perfectly fine in languages that support named parameters. In fact this isn't even specific to booleans. Say we're using Python,

    draw_rect(width = 5, height = 10, top = 5, left = 6)
is much more readable than

    draw_rect(5, 10, 5, 6)
  • Izkata 5 years ago

    For languages that don't (and depending on the usage, maybe ones that do), I'd like to throw out another possibility for those who haven't seen it before: Fluent interfaces

      draw_rect(5, 10).at(5, 6)
      draw_rect().from(5, 6).to(10, 16)
    
    It's one I'd be careful with though, as it can be easy to end up with something less readable.
  • turingbike 5 years ago

    Named parameters _might_ be my favorite part of Python. I don't understand why every language doesn't have them.

    • astrange 5 years ago

      Objective-C and Swift also. Though my favorite implementation is AVISynth, the best language and most interesting execution model nobody uses.

pier25 5 years ago

> Solve the problem you have right now. Then solve the next one. And the next one.

Amen to that.

WalterBright 5 years ago

> I tried to become "paperless" many times. At some point, I did keep the papers away, but in the very end, it really do help to have a small notebook and a pen right next to you write that damn URL you need to send the data.

I thought I was old-fashioned because I keep one of those spiral notebooks on my desk. It's invaluable. I've never been able to take quick notes using a computer.

When I fill up the notebook, I run it through the scanner, toss it and get another.

joaobeno 5 years ago

Happy to see some things I learned the hard way over 15 years... Man, I wish I had this level of knowledge when I was 20...

lifeisstillgood 5 years ago

If a function description includes an "and", it's wrong

yes :-)

amelius 5 years ago

> Spec first, then code

I wish I could make my boss understand the importance of this one.

In his mind, the best coding strategy is:

1. build prototype

2. add feature

3. goto 2.

drderidder 5 years ago

I like the point "The function documentation is its contract". I experimented with using JSDoc instead of TypeScript on a recent JavaScript project and found it very enjoyable, partly for that reason.

jupake 5 years ago

Fantastic! This is so deliciously language/framework/editor/OS agnostic.

Nothing quite like experience to make you wish you could go back in time, with all your current knowledge, and take over the world.

cgrealy 5 years ago

> Always use a Version Control System

Please tell me no-one is writing anything without a VCS in 2019.

> One commit per change

Amen, brother.

> You're responsible for the use of your code

That's a tough one. Code is a tool, like any other. If you're making a gun, then you have to accept that your product will kill people. But if you're making a hammer, it's not really fair to hold you responsible when someone goes postal with it.

> Learn from your troubles

This. So much this. If you're not doing this... the rest is worthless.

hamburglar 5 years ago

This is a great, great list, but what the heck is this:

> This is what most people get wrong about adding booleans to check the number of True values.

This appears in two different sections. Who are all these people adding booleans as if they're integers? It's like someone wrote a whole treasure trove of sensible life advice that included "this is why most people have trouble when they use a dry-erase marker as a toothpick."

  • mixmastamyk 5 years ago

    Looks like a weird side effect of Javascript being exploited by the unscrupulous.

WalterBright 5 years ago

Most of these sort of lists are crap. But this one is good. I'm amazed I agree with nearly all of it. (>40 years in the business!)

JoeAltmaier 5 years ago

Good stuff, until it broke down into coding style.

rurp 5 years ago

I'm curious about the "Don't use Booleans as parameters" point. I primarily write Python and like to use keyword arguments in these cases. So the function call would look like

  get_message(full_message=True)
Is this an OK pattern for Python but not some other languages, or am I writing more confusing code than I had realized?
  • ndiscussion 5 years ago

    That's exactly what the post is warning against. It's really hard to explain without a concrete example, but basically, I would make two functions:

    get_full_message and get_partial_message or something like that

    • ShamelessC 5 years ago

      Is it safe to say that using named keyword arguments mitigates this issue?

  • prometheus76 5 years ago

    Well, if I'm reading that, I'm wondering what gets returned if I set that parameter to false. Does it return a summary of the message, or does it return the first word or does it return null? Using named parameters doesn't answer that question any more clearly than OP's example.

  • Izkata 5 years ago

    If you add a safeguard to the function itself that disallows full_message as a positional arg, so no one is able to write "get_message(False)", then it's okay.

    The key here is the reader being able to tell at a glance what the args are/do, without needing to hunt down documentation.
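
    In Python 3, a bare `*` in the signature is exactly that safeguard:

      def get_message(*, full_message=False):
          # Everything after the bare * can only be passed by keyword.
          return "full text here" if full_message else "summary here"

      get_message(full_message=True)   # OK
      # get_message(True)  ->  TypeError: takes 0 positional arguments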

cgrealy 5 years ago

A lot of people in this thread have argued that named parameters mean you can add bool params to functions.

e.g. getUserMessage(userId, fullMessage=true)

While that can help, I'd still argue against it unless your language has some mechanism to force use of the name. Because if you can omit the name (i.e. getUserMessage(userId, true)), then someone will omit the name.

WalterBright 5 years ago

> You can create the new functions and mark the current one as deprecated, either by documentation or by some code feature.

Another D feature that is simple in concept but miraculous in its effects: a "deprecated" attribute that can be applied to declarations. It lets plenty of warning be given to users before a function is removed.
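
Python spells the same idea as a runtime warning rather than a language attribute; a sketch (the function names are invented):

    import warnings

    def get_user_messages_v2(user_id):
        return []  # the new implementation would live here

    def get_user_messages(user_id):
        warnings.warn(
            "get_user_messages() is deprecated; use get_user_messages_v2()",
            DeprecationWarning,
            stacklevel=2,  # point the warning at the caller, not this shim
        )
        return get_user_messages_v2(user_id)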

  • astrange 5 years ago

    Well sure, but GCC and co have that too.

    It's a pretty good counter to people who think you should always use -Werror. One wrong deprecation could take you a month to fix!

lliamander 5 years ago

> Also, remember that most people that are against CoCs are the ones that want to be able to call names on everyone.

There's an XKCD comic which points out how people who claim to "hate drama" tend to be the most dramatic people [1].

That's how I tend to perceive projects that put a lot of emphasis on their CoC. The more they emphasize it, the more I expect pettiness, vindictiveness, and bad faith.

[1]https://xkcd.com/1124/

  • astrange 5 years ago

    It's not surprising that a large or corporate project would need an HR department, which is what the CoC is. Anyone who objects to that probably has never worked at a large company, or used a forum with no mods.

    But the important thing isn't the rules, it's that there's someone to enforce them. Ideally, the rules exist to constrain them, not you.

    • lliamander 5 years ago

      Whatever you may think about how a CoC ought to work, my observation about how they work in practice (and about the people who push them) remains the same.

      > Anyone who objects to that probably has never worked at a large company.

      I can assure you that's false.

dorusr 5 years ago

> If a function description includes an "and", it's wrong

Unless you have procedures.

Also, the 'don't add boolean switches' advice doesn't apply to PowerShell, since you can use named switches that are false if not supplied. For other languages you could add constants.

rainhacker 5 years ago

> Beware of micro-aggressions

> Toxic/micro-aggressors are only fixable if they are YOU

This is a good thing to keep in mind, but the article doesn't provide any suggestions for when you are the victim. Anyone have tips for dealing with a micro-aggressor?

  • davidjnelson 5 years ago

    The author's recommendation seemed to be to change jobs.

pmarreck 5 years ago

it’s a pretty long read... clearly the author did not learn brevity in his coding, or learn that his blog’s color scheme does not contribute to readability

also re: Gherkin: just don't Cucumber. If you focus on integration test technology like Cucumber or headless browser drivers, they will absolutely balloon your test suite runtime, partly because they are extremely inefficient (testing the same things over and over again, such as "set up a logged-in user with 3 assets", for 90% of the test cases in the suite). Avoid - maybe not entirely, but in large part.

  • MichaelApproved 5 years ago

    > "it’s a pretty long read... clearly the author did not learn brevity in his coding"

    Brevity shouldn't be a virtue of coding. The best code is self-documenting code and self-documenting code is rarely brief.

    To quote Wordpress PHP Coding Standards: In general, readability is more important than cleverness or brevity.

    https://make.wordpress.org/core/handbook/best-practices/codi...

    • pmarreck 5 years ago

      If you want code brevity, at all, PHP is not your language (nor is Go, nor is Java)

  • _asummers 5 years ago

    > it’s a pretty long read... clearly the author did not learn brevity in his coding, or learn that his blog’s color scheme does not contribute to readability

    It's also riddled with typos which made reading it hard. There's some wisdom in there, but it was clearly written in an editor without spell/grammar checks. Maybe there should have been a section about rereading your own work before sending to your team/prod? =)

    • reallydontask 5 years ago

      I would guess, based on his name and a link to a book he wrote in Portuguese, that the chap's first language is not English.

      • Kreotiko 5 years ago

        As a fellow non-native speaker, I don't think this should matter the moment you choose to use English. Also, personally, it wasn't just the grammar that put me off but also all the aggressive colloquialisms.

        • reallydontask 5 years ago

          I give non-native speakers a lot more leeway for typos, grammar and odd expressions for, hopefully, obvious reasons.

          A lot of people use English as it's the language of tech, which means that the potential audience is significantly larger.

    • tomelders 5 years ago

      > Turn off comments

      • pmarreck 5 years ago

        There should be some way to vet negative people so that comments wouldn't be toxic.

  • pytester 5 years ago

    > If you focus on integration test technology like Cucumber or headless browser drivers, they will absolutely balloon your test suite runtime, partly because they are extremely inefficient

    Between unit tests (fast, unrealistic, tightly coupled, catches fewer bugs) and integration tests (slower, more realistic and catches more bugs) I virtually always pick the latter.

    I'm happy to trade CPU churn for "catches more bugs". CPU time is cheap.

    • pmarreck 5 years ago

      It is not a dichotomy.

      Ideally you’ll have lots of unit tests and a few integration tests, all valid.

      I’m scarred by seeing test suite runtimes (THE most important element of programmer productivity, IMHO) completely balloon after too much focus on integration tests. There is a world of productivity difference between a 1 minute test suite and a 40 minute (or 4 day) test suite.

      • pytester 5 years ago

        >It is not a dichotomy.

        It is a dichotomy. You do not want the same thing being covered by two different types of test. Duplicated test coverage means duplicated maintenance.

        >There is a world of productivity difference between a 1 minute test suite and a 40 minute (or 4 day) test suite

        Not in my experience. Tests can easily run while I'm at lunch, overnight, working on something else or, if time is really of the essence, in parallel. If I'm running an individual test while developing anything that runs in under a minute has no effect on my productivity.

        Tests that don't catch bugs are a far, far bigger deal.

  • lostmyoldone 5 years ago

    Optimising for our reading effort is not a given, in either length or style. It's a choice.

    Insinuating incompetence when choice is clearly a reasonable explanation is, in my opinion, quite disingenuous.

    If you don't like the style, at least say so clearly, if you have to. But why do you have to?

    The author's tip to disable comments, I fully understand. I find writing hard, and I could never cope with all the things - other than the actual text - people find the need to complain about.

RocketSyntax 5 years ago

"Types say what you data is" Are you saying to first check the type of your data before running a boolean so that you can catch high level things that are Null when they should be char or int?

MichaelMoser123 5 years ago

What I learned: keep a log of problems solved and things that slowed me down. It helps me be more aware of what I'm doing, and I hope it will keep me from shooting myself in the foot, at least occasionally.

herodotus 5 years ago

> Spec first, then code

> If you don't know what you're trying to solve, you don't know what to code.

I like this, but I would add "Always read the spec before you code".

graphememes 5 years ago

The only one I have an issue with is

> Future thinking is future trashing

It really depends on what you're building, and how long term the project will be around.

Compare Git vs JavaScript frameworks.

thieving_magpie 5 years ago

I enjoyed reading that. I recognized a lot of things I used to do and a lot of things I currently do. It's nice to gain perspective on that.

Raphmedia 5 years ago

The sections on work relations are golden. If there is anything you should remember after reading this, it is these points.

luord 5 years ago

I agree... on the actual technical points. Particularly the bits about design patterns and thinking about the users first.

neves 5 years ago

Some of the things depend on the language

getUserMessage(userId, True)

in Python can become

getUserMessage(userId, getUserMessagesFull=True)

  • prometheus76 5 years ago

    What happens if I set it to False, and what happens if I don't set it? Does it default to sending a summary, or does it send a null? I don't see how having named parameters eliminates the problem OP is talking about.

tuananh 5 years ago

> Don't use Booleans as parameters

ah, he must have forgotten about named parameters

  • falcolas 5 years ago

    Even with named parameters, unless exquisitely named, it can still be more confusing (and less flexible) than constants.

    • tuananh 5 years ago

      Constants as in enums? In that case, the enum has to be named exquisitely as well.

m3kw9 5 years ago

#Future thinking is future trashing

Doubl 5 years ago

I don't see much wrong with true or false as arguments. It beats having two functions instead of one, which is his preference.

surajs 5 years ago

This is awesome, thank you!

majewsky 5 years ago

Things I learnt the hard way reading this article: Headlines with `font-size: 1em` make a large article unnecessarily hard to skim.

  • mrob 5 years ago

    You can remove all the author's CSS with one click via Firefox's Reader View. The HTML is structured just fine, so this works very well.

  • LoSboccacc 5 years ago

    1em cannot be at fault; it's literally 'whatever font size the parent has'. The culprit is somewhere else.

    • proaralyst 5 years ago

      Which is specifically the body text size, no?

      • LoSboccacc 5 years ago

        Yeah, but that's the issue: h's and p's are both coded to the same size, which happens to be 1rem.

        I.e., 1rem h's would be OK if you had 0.8rem p's, and conversely you'd have the same issue if both were coded to be 2rem.

  • tasty_freeze 5 years ago

    ctrl - will decrease the font size on your browser. (ctrl + increases; ctrl 0 resets it to the specified size).

draw_down 5 years ago

In half that time, basically everything I have ever built has ended up in a ditch. I suppose I have learned a lesson or two since I started, but any lesson fairly pales in comparison to that fact.

Kreotiko 5 years ago

I would add: learn to write properly, something the author didn't do.

-- A fellow non-native speaker.

molteanu 5 years ago

> Spec first, then code

and then later

> Be ready to throw your code away

Then why would it be useful to have the spec first, if you're expecting to throw the code away anyway?

Better combine this advice,

> Write steps as comments

with a higher-level language, and write code instead of comments, and have fun developing what you think is the product you need. You'll see plenty of corner cases, learn tons of new stuff about your domain that you didn't even know existed, and then you'll want to start over, since many of your initial assumptions will be wrong by that time - or you'll see new extra features that you couldn't possibly have discovered without actually having a toy product to play with and try new ideas on.

Tests? No, definitely not on the first run. Maybe only for documentation purposes in legacy systems. But don't take away all the fun by starting with tests. You'll paint yourself into a corner.

This reads like 30 years of experience in a corporate environment, so I would take these recommendations with a grain of salt. I would venture to say that whoever starts with "I have X years of experience" or "I'm a Y-language developer" is mostly bullshitting you or doesn't really know what he's talking about, and needs a badge to shove in your face to demonstrate his qualities.

  • MrGilbert 5 years ago

    > This seems like 30 years of experience in a corporate environment.

    Well... That's where the money's at, right?

    > Tests? No, definitely not on the first run. Maybe only for documentation purposes in legacy systems. But don't take away all the fun by starting with tests.

    Kindly disagree. In a corporate environment (at least the ones I've worked in so far), you HAVE to start with the tests. From my experience, the people maintaining your code after you've left the company (or you are sick, whatever) will be grateful to have a test suite that can verify the edge cases still work after their changes have been deployed.

    Also, once the first working lines have been written, there was no way we would revisit them to write tests on top. I need tests to verify that what I've just written actually works.

    Others might create an "ExampleConsoleApp1" for this, but then I can also write a test.

    • molteanu 5 years ago

      > Maybe only for documentation purposes in legacy systems.

      > people maintaining your code after you've left the company

      That's what a legacy system is. I agree. That's a good use of tests. But don't start WITH them. You'll build rigid systems.

      > Well... That's where the money's at, right?

      Agree with this also. But this is (or should be) Hacker News, so we look for better ways to make software, regardless of whether it makes money or not. Or maybe we should rename it Financial Times?

      • UK-Al05 5 years ago

        "But don't start WITH them. You'll build rigid systems" - Only if you design them badly.

        Most people write terrible tests, which are over-mocked. You should test the input (say, simulate an HTTP POST call), then check that the thing is in the fake database - or even better, make a call to retrieve the item. You can completely change the way it's implemented in between.
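
        A sketch of that shape (all names invented; a real version would drive the actual HTTP pipeline instead of calling the handler directly):

            class FakeUserStore:
                # In-memory stand-in implementing the same interface as
                # the real database layer. Stubbed, not mocked.
                def __init__(self):
                    self.users = {}

                def save(self, user_id, name):
                    self.users[user_id] = name

                def get(self, user_id):
                    return self.users.get(user_id)

            def handle_post_user(store, payload):
                # Stands in for the HTTP POST handler under test.
                store.save(payload["id"], payload["name"])
                return {"status": "created"}

            def test_post_then_get_roundtrip():
                store = FakeUserStore()
                handle_post_user(store, {"id": 1, "name": "Ada"})
                assert store.get(1) == "Ada"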

        • u801e 5 years ago

          > Most people write terrible tests, which are over-mocked. You should test the input (say, simulate an HTTP POST call), then check that the thing is in the fake database

          You're still using mocks in that case. You can avoid mocks entirely by decoupling your tests from external dependencies. That is, instead of making an HTTP POST request to the application, test a method that takes the content of the HTTP request as a parameter and assert on the return value. For the database, you can test a method that takes parameters and returns the appropriate query.

          Then integration tests can be used to test the interactions between different parts of the system.
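          For instance (a minimal Python sketch; the function names are invented), the units under test never touch HTTP or a database, so there is nothing to mock:

            def handle_create_item(body):
                # Takes the parsed request content directly; no HTTP layer.
                if "name" not in body:
                    return {"error": "name is required"}, 400
                return {"name": body["name"]}, 201

            def build_insert_query(table, fields):
                # Returns the query it would run instead of touching a DB.
                cols = ", ".join(fields)
                params = ", ".join(":" + f for f in fields)
                return f"INSERT INTO {table} ({cols}) VALUES ({params})"

            def test_create_requires_name():
                assert handle_create_item({})[1] == 400

            def test_insert_query_shape():
                assert (build_insert_query("items", ["id", "name"])
                        == "INSERT INTO items (id, name) VALUES (:id, :name)")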

          • UK-Al05 5 years ago

            " That is, instead of making a HTTP post request to the application, test a method that takes the content of the HTTP request as a parameter and assert on the return value"

            The HTTP POST is the input into OUR system under test, not an external dependency. Nothing is mocked. I find that going through the entire HTTP pipeline catches misconfiguration issues.

            "For the database, you can test a method that takes parameters and returns the appropriate query." - That's what the fake database does. Implements the interface to the database. In this case implements a fake query method. Nothing is mocked. Stubed but not mocked.

        • nicoburns 5 years ago

          > You should test the input (say, simulate an HTTP POST call), then check that the thing is in the fake database, or even better make a call to retrieve the item.

          Agree. The key to good tests seems to be making them neither unit tests nor e2e tests, but something in between. API tests if you have an API, or "medium-sized module" tests in other systems.

          Unit tests and e2e tests have value too, but they're a lot more work to maintain for less value IMO.

      • mateuszf 5 years ago

        > But don't start WITH them. You'll build rigid systems.

        It depends. If you test the implementation - sure. But if you test use cases - not necessarily.

  • de_watcher 5 years ago

    > Tests? No, definitely not on the first run. Maybe only for documentation purposes in legacy systems. But don't take away all the fun by starting with tests. You'll paint yourself into a corner.

    Trying to squeeze tests into an existing system is painful, so better to write the tests before it has become a "legacy system to document".

    Tests should be written before and during writing the new system.

    That's because they are the executable language for the results you want to get from the system. The two other options are: use plain text to document your manual actions and expected results, or don't document your manual actions and expected results at all.

    • protonimitate 5 years ago

      >Trying to squeeze tests into an existing system is painful, so better to write the tests before it has become a "legacy system to document".

      This, but also: if you code first and then write tests, you end up writing tests based on your interpretation of how the code you already wrote currently works, when it's better to write tests that outline and define how the code _should_ work in the future.

      >You'll paint yourself into a corner.

      I find it's the exact opposite. If you write all your code first, then write tests, you will inevitably find a bug, and refactoring working code to fit a test is infinitely harder than adding more tests or replacing irrelevant ones.

    • marcosdumay 5 years ago

      Trying to test ill-defined software is a recipe for failure. At the same time, trying to create well-defined software without tests also leads to certain doom.

      Know where you are.

  • nickjj 5 years ago

    I have ~20 years of experience as a freelancer and I don't like TDD, but I do think writing tests early is important because they give you the confidence to refactor your "fun / getting it to work" code.

    My workflow is basically:

    - Plan the project out, but not in great detail; this is more like a high-level overview of features. I once did a 90-minute live video on this process[0].

    - Think about each state of what a feature could be in. Here's another video[1] showing that process.

    - Design some pages around your states (if you planned a user registration system, start designing the registration form)

    - Dive into the code and implement one of the features (using pseudo code to help with the flow initially if needed)

    - Write some tests to demonstrate that what you have works as you intended

    - Heavily refactor your code if needed, leaning on your tests and refining them as you come across edge cases, etc.

    Honestly, the above has never failed me, for small and large projects alike. You always have something small and focused to work on, and the code you produce typically ends up in good shape by the time you reach the end of the workflow. It's a fast cycle too (easily multiple loops per day) once you have the states ironed out.

    [0]: https://nickjanetakis.com/blog/live-demo-of-planning-a-real-...

    [1]: https://nickjanetakis.com/blog/design-your-web-uis-faster-by...

    • hyperbole 5 years ago

      Code is legacy the moment it's checked in. Not writing tests is the equivalent of expecting the game of telephone to go flawlessly, always and forever, for every corner case and major case written... This type of optimism, when looked up in the OED, has one definition: nope.

      • Sahhaese 5 years ago

        That is why I like Michael Feathers' definition of legacy code, which is (paraphrasing):

        > Legacy code is code not under test

        That said, I'm not against the existence of legacy code, but any bugs damn well should be written up as tests to catch regressions.

        That way new development can be done without tests (full TDD is tedious and often way too coupled to the implementation, imo), but any bugs are captured as tests that aren't written against the implementation and that guard against regression.
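        As a Python sketch (the bug and every name here are hypothetical), the regression test pins down the reported failure rather than the implementation:

          def order_total(lines):
              # The fixed code: discounts are now subtracted per line.
              return sum(l["price"] * l["qty"] - l.get("discount", 0) for l in lines)

          def test_regression_discount_is_applied():
              # Bug report: a 1.00 discount on a 10.00 line was silently ignored.
              lines = [{"price": 10.0, "qty": 1, "discount": 1.0}]
              assert order_total(lines) == 9.0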

  • reallydontask 5 years ago

    > Then why would it be useful to have the spec first if you're expecting to throw it away anyway?

    The idea is that you should be ready to throw your code away if the requirements change. If the spec changes (e.g. from your own made-up spec to the client's actual spec), then be ready to throw away some (all?) of the code.

    • js8 5 years ago

      GP's question reminds me of the quote: Plans are worthless, but planning is essential.

  • gitgud 5 years ago

    >I would venture to say that whoever starts with "I have X years of experience" or "I'm a Y language developer" is mostly bullshitting you or doesn't really know what they're talking about, so they need a badge to shove in your face to demonstrate their qualities.

    A better approach is to demonstrate your skills and why they matter, then let people ask and wonder about your background...

    Basically, shoving experience or qualifications in my face doesn't make me respect your point of view more, it just makes me think you can't be humble and want to make yourself look good.

    This is actually great advice for both people with experience and people listening to them.

  • karlp 5 years ago

    > Write steps as comments with a higher-level language, and write code instead of comments, and have fun developing what you think is the product you need.

    Hard disagree on that. There is nothing more annoying than reading code without plain comments; it's always less obvious than the original developer thought, and the worst part is that it always happens with your own code.

  • pjc50 5 years ago

    > don't take away all the fun by starting with tests

    Depends what you're doing. For interactive programs, automated testing is more painful than just firing it up and testing manually, at least at the beginning. For noninteractive systems it may be much easier to start with a test as a spec: "parse this file and produce this output".
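    For example (a Python sketch; parse_counts is a made-up stand-in for whatever the tool actually does):

      def parse_counts(text):
          # Hypothetical parser; the test below is effectively its spec.
          counts = {}
          for word in text.split():
              counts[word] = counts.get(word, 0) + 1
          return counts

      def test_parse_counts_spec():
          # "Parse this input and produce this output."
          assert parse_counts("a b a") == {"a": 2, "b": 1}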

    • Nasrudith 5 years ago

      Reminds me of one unusual application of tests: when working with inadequate or overstuffed documentation (many pages, but hard to find what you need), they work well as "scratch pads" for details that are unclear, like "does it return null or throw an exception when you attempt to remove something that isn't in the collection?"
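      For example, in Python (using pytest):

        import pytest

        def test_remove_missing_element():
            # Scratch-pad test: does remove() fail quietly or raise when the
            # element isn't present? For Python lists, it raises ValueError.
            with pytest.raises(ValueError):
                [1, 2, 3].remove(99)

      The test doubles as a note to your future self, and it gets re-checked on every run.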

      • pjc50 5 years ago

        Exactly. They can be a great form of "case analysis", and can also be translated back into business-analyst-speak or use cases.