WalterBright 2 days ago

> Originally, if you typed an unknown command, it would just say "this is not a git command".

Back in the 70s, Hal Finney was writing a BASIC interpreter to fit in 2K of ROM on the Mattel Intellivision system. This meant every byte was precious. To report a syntax error, he shortened the message for all errors to:

    EH?
I still laugh about that. He was quite proud of it.
  • vunderba a day ago

    > EH?

    I feel like that would also make a good response from the text parser in an old-school interactive fiction game.

    Slightly related, but I remember some older variants of BASIC using "?" to represent the PRINT statement - though I think it was less about memory and more just to save time for the programmer typing in the REPL.

    • chuckadams a day ago

      It was about saving memory by tokenizing keywords: '?' is how PRINT actually was stored in program memory, it just rendered as 'PRINT'. Most other tokens were typically the first two characters, the first lowercase, the second uppercase: I remember LOAD was 'lO' and DATA was 'dA', though on the C64's default character glyphs they usually looked like L<box char HN won't render> and D<spade suit char>.

      All this being on a C64 of course, but I suspect most versions of Bill Gates's BASIC did something similar.

      • egypturnash a day ago

        C64 basic was tokenized into one byte, with the most significant bit set: https://www.c64-wiki.com/wiki/BASIC_token

        Each command could be typed in two ways: the full name, or the first two letters, with the second capitalized. Plus a few exceptions like "?" turning into the PRINT token ($99, nowhere near the PETSCII value for ?) and π becoming $FF.

        The tokens were expanded into full text strings when you would LIST the program. Which was always amusing if you had a very dense multi-statement line that expanded to longer than the 80 characters the C64's tokenizer routine could handle: you'd have to go back and replace some or all commands with the short form before you could edit it.

        • vrighter a day ago

          The ZX Spectrum did this too, except you could only type the "short forms" (which were always rendered in full). It had keywords on its keys, i.e. to type PRINT, you had to press the "PRINT" key.

        • mkesper a day ago

          As far as I remember you couldn't even run these programs after listing anymore.

          • LocalH a day ago

            You could run them just fine as long as you didn't try to edit the listed lines if they were longer than two screen lines. The same is true for a C128 in C128 mode, except the limit is extended to 160 characters (four 40-column lines).

  • nikau 2 days ago

    How wasteful, ed uses just ? for all errors, a 3x saving

    • ekidd a day ago

      Ed also uses "?" for "Are you sure?" If you're sure, you can type the last command a second time to confirm.

      The story goes that ed was designed for running over a slow remote connection where output was printed on paper, and the keyboard required very firm presses to generate a signal. Whether this is true or folklore, it would explain a lot.

      GNU Ed actually has optional error messages for humans, because why not.

      • teraflop a day ago

        https://www.gnu.org/fun/jokes/ed-msg.en.html

        "Note the consistent user interface and error reportage. Ed is generous enough to flag errors, yet prudent enough not to overwhelm the novice with verbosity."

        • fsckboy a day ago

          >not to overwhelm the novice with verbosity

          That doesn't make complete sense; in unixland it's the old-timers who understand the beauty of silence and brevity, while novices scan the screen/page around the new prompt for evidence that something happened.

          • Vinnl a day ago

            If I didn't know any better, I'd have thought they weren't entirely serious.

          • kstrauser 19 hours ago

            Ed helps induct novices into the way of the old-timers because it loves them and wants them to be happy.

      • llm_trw a day ago

        So many computer conventions evolved for very good reasons rooted in physical limitations.

        When each line of code was its own punch card, having a { stand alone on a line was somewhere between stupid and pointless. That also explains why Lisps were so hated for so long.

        By the same token, today you can tell which projects treat an IDE as the only way to work on them by their terrible documentation. It is, after all, not the end of the world to have to read a small function when you can just tab to see it. Which is true enough until those small functions call other small functions and you're 30 frames deep in a stack trying to figure out where the option you passed at the top went.

      • p_l a day ago

        /bin/ed did in fact evolve on very slow teletypes that used roll paper.

        It made the option to print file content with line numbers very useful (personally I only used very dumb terminals instead of a physical teletype, but the experience is a bit similar, just with a shorter scrollback :D)

        • euroderf a day ago

          Can confirm. Using ed on a Texas Instruments dial-up terminal (modem for phone handset) with a thermal printer.

          And taking a printed listing before heading home with the terminal.

    • nine_k a day ago

      There are very few systems where you can save a fraction of a byte! And if you need to output a byte anyway, it doesn't matter which byte it is. So you can indulge and use "?", "!", "*", or even "&" to signify various types of error conditions.

      (On certain architectures, you could use 1-byte soft-interrupt opcodes to call the most used subroutine, but 8080 lacked it IIRC; on 6502 you could theoretically use BRK for that. But likely you had other uses for it than printing error diagnostics.)

  • zubairq a day ago

    Pretty cool.. I had no idea Hal was such a hacker on the personal computers in those days... makes me think of Bitcoin whenever I hear Hal mentioned

    • WalterBright 13 hours ago

      He wasn't hacking. Hal worked for Aph, and Aph contracted with Mattel to deliver console game cartridges.

      There was once a contest between Caltech and MIT. Each was to write a program to play Gomoku, and they'd play against each other. Hal wrote a Gomoku-playing program in a weekend, and it trashed MIT's program.

      It was never dull with Hal around.

  • furyofantares a day ago

    I run a wordle spinoff, xordle, which involves two wordle puzzles on one board. This means you can guess a word and get all 5 letters green, but it isn't either of the target words. When you do this it just says "Huh?" on the right. People love that bit.

    • speerer a day ago

      Can confirm. I loved that bit.

    • dotancohen a day ago

      > People love that bit.

      Add another seven Easter eggs, and people could love that byte.

  • WalterBright 2 days ago

    I've been sorely tempted to do that with my compiler many times.

  • euroderf a day ago

    Canadians everywhere.

  • nl 2 days ago

    It'd be interesting and amusing if he'd made the private key to his part of Bitcoin a variation on that.

    RIP.

physicles a day ago

The root cause here is poorly named settings.

If the original setting had been named something bool-y like `help.autocorrect_enabled`, then the request to accept an int (deciseconds) would've made no sense. Another setting, `help.autocorrect_accept_after_dsec`, would've been required. And `dsec` is so oddball that anyone who uses it would've had to look it up.

I insist on this all the time in code reviews. Variables must have units in their names if there's any ambiguity. For example, `int timeout` becomes `int timeout_msec`.

This is 100x more important when naming settings, because they're part of your public interface and you can't ever change them.

  • TeMPOraL a day ago

    > I insist on this all the time in code reviews. Variables must have units in their names if there's any ambiguity. For example, `int timeout` becomes `int timeout_msec`.

    Same here. I'm still torn when this gets pushed into the type system, but my general rule of thumb in C++ context is:

      void FooBar(std::chrono::milliseconds timeout);
    
    is OK, because that's a function signature and you'll see the type when you're looking at it, but with variables, `timeout` is not OK, as 99% of the time you'll see it used like:

      auto timeout = gl_timeout; // or GetTimeoutFromSomewhere().
      FooBar(timeout);
    
    Common use of `auto` in C++ makes it a PITA to trace down the exact type when it matters.

    (Yes, I use IDE or a language-server-enabled editor when working with C++, and no, I don't have time to stop every 5 seconds to hover my mouse over random symbols to reveal their types.)

    • OskarS a day ago

      One of my favorite features of std::chrono (which can be a pain to use, but this part is pretty sweet) is that you don't have to specify the exact time unit, just a generic duration. So, combined with chrono literals, both of these work just like expected:

          std::this_thread::sleep_for(10ms); // sleep for 10 milliseconds
          std::this_thread::sleep_for(1s);   // sleep for one second    
          std::this_thread::sleep_for(50);   // does not work, unit is required by type system
      
      That's such a cool way to do it: instead of forcing you to specify the exact unit in the signature (milliseconds or seconds), you just say that it's a time duration of some kind, and let the user of the API pick the unit. Very neat!
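
      The enabling trick is that the API is templated on the duration type; a rough sketch of such a signature (hypothetical `set_timeout`, not a real std:: function, assuming C++14 chrono literals):

          #include <chrono>
          #include <iostream>

          // Hypothetical API: any std::chrono duration is accepted, so the caller picks the unit.
          template <class Rep, class Period>
          void set_timeout(std::chrono::duration<Rep, Period> d) {
              // Normalize internally; duration_cast keeps any lossy conversion explicit.
              auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(d);
              std::cout << "timeout set to " << ms.count() << " ms\n";
          }

          int main() {
              using namespace std::chrono_literals;
              set_timeout(10ms);  // 10 milliseconds
              set_timeout(1s);    // converted to 1000 ms
              // set_timeout(50); // does not compile: a bare int is not a duration
          }
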
      • twic 20 hours ago

        I do something similar in Java by taking a java.time.Duration in any method dealing with time. We don't have the snazzy literal syntax, but that means users have to write:

          someMethodDealingWithTime(Duration.ofMillis(10));
          someMethodDealingWithTime(Duration.ofSeconds(1));
          someMethodDealingWithTime(50); // does not compile
        
        Since these often come from config, i also have a method parseDuration which accepts a variety of simple but unambiguous string formats for these, like "10ms", "1s", "2h30m", "1m100us", "0", "inf", etc. So in config we can write:

          galactus.requestTimeout=30s
        
        No need to bake the unit into the name, but also less possibility of error.
        • TeMPOraL 19 hours ago

          > i also have a method parseDuration which accepts a variety of simple but unambiguous string formats for these, like "10ms", "1s", "2h30m", "1m100us", "0", "inf", etc.

          I did that too with parsers for configuration files; my rule of thumb is that the unit always has to be visible wherever a numeric parameter occurs - in the type, in the name, or in the value. Like e.g.:

            // in config file:
            { ..., "timeout": "10 seconds", ... }
          
            // in parsing code:
            auto ParseTimeout(const std::string&) -> Expected<std::chrono::milliseconds>;
          
            // in a hypothetical intermediary if, for some reason, we need to use a standard numeric type:
            int timeoutMsec = ....;
          
          Wrt. string formats, I usually allowed multiple variants for a given time unit, so e.g. all these were valid and equivalent values: "2h", "2 hour", "2 hours". I'm still not convinced it was the best idea, but the Ops team appreciated it.

          (I didn't allow mixing time units like "2h30m" in your example, so as to simplify parsing into a single "read double, read rest as string key into a lookup table" pass, but I'll think about allowing it the next time I'm in such a situation. Are there any well-known pros/cons to this?)
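
          For what it's worth, the mixed-unit variant doesn't complicate that loop much; a rough sketch (hypothetical parse_duration, using std::optional instead of the Expected above):

            #include <cctype>
            #include <chrono>
            #include <optional>
            #include <sstream>
            #include <string>
            #include <unordered_map>

            // Hypothetical mixed-unit parser: repeat "read a number, read a unit" until
            // the input is consumed, e.g. "2h30m" or "1m100us". Not production code.
            std::optional<std::chrono::microseconds> parse_duration(const std::string& s) {
                static const std::unordered_map<std::string, std::chrono::microseconds> units = {
                    {"us", std::chrono::microseconds(1)}, {"ms", std::chrono::milliseconds(1)},
                    {"s", std::chrono::seconds(1)}, {"m", std::chrono::minutes(1)},
                    {"h", std::chrono::hours(1)},
                };
                std::chrono::microseconds total{0};
                std::istringstream in(s);
                while (in.peek() != std::char_traits<char>::eof()) {
                    double value = 0;
                    if (!(in >> value)) return std::nullopt;     // expected a number
                    std::string unit;
                    while (std::isalpha(static_cast<unsigned char>(in.peek())))
                        unit.push_back(static_cast<char>(in.get()));
                    auto it = units.find(unit);
                    if (it == units.end()) return std::nullopt;  // unknown (or missing) unit
                    total += std::chrono::duration_cast<std::chrono::microseconds>(value * it->second);
                }
                return total;
            }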

          • twic 12 hours ago

            Mixed unit durations are in ISO 8601, so the idea has had at least some scrutiny:

            https://en.wikipedia.org/wiki/ISO_8601#Durations

            One place i have run into confusion is being able to express a given span of time in multiple ways. 1m30s and 90s are the same length, but are they the same thing? Should we always normalise? If we do, do we normalise upwards or downwards? This hasn't actually been a problem with time, but i do similar handling with dates, and it turns out we often want to preserve the distinction between 1y6m and 18m. But also sometimes don't. Fun times.

            • TeMPOraL an hour ago

              > Mixed unit durations are in ISO 8601

              Don't know why I never noticed it before; thanks for posting this! That does give the idea more weight, so I'll consider mixed-unit durations next time I find myself coding up parsing durations in config files.

              > Should we always normalise? If we do, do we normalise upwards or downwards?

              I'd say normalize, but on the business side, down to regular units - e.g. the config or UI can keep its "1m30s" or "90s" or even "0.025h", but for processing, this gets casted to seconds or millis or whatever the base unit is. Now, this is easy when we're only reading, but if we need to e.g. modify or regenerate the config from current state, I'd be leaning towards keeping around the original format as well.

              > i do similar handling with dates, and it turns out we often want to preserve the distinction between 1y6m and 18m

              Can you share specific examples of where this matters, other than keeping the user input in the format it was supplied in even when the underlying data values get regenerated from scratch?

    • theamk a day ago

      It should not matter though, because std::chrono is not int-convertible - so whether it is "milliseconds" or "microseconds" or whatever is a minor implementation detail.

      You cannot compile FooBar(5000), so there is never the confusion in C++ that C has. You have to write an explicit "FooBar(std::chrono::milliseconds(500))" or "FooBar(500ms)" if you have literals enabled. And this will handle conversion if needed - you can always write FooBar(500ms) and it will work even if the actual parameter type is microseconds.

      Similarly, your "auto" example will only compile if gl_timeout is a compatible type, so you don't have to worry about units at all when all your intervals are using std::chrono.

    • physicles a day ago

      Right, your type system can quickly become unwieldy if you try to create a new type for every slight semantic difference.

      I feel like Go strikes a good balance here with the time.Duration type, which I use wherever I can (my _msec example came from C). Go doesn’t allow implicit conversion between types defined with a typedef, so your code ends up being very explicit about what’s going on.

    • codetrotter a day ago

      > Yes, I use IDE or a language-server-enabled editor when working with C++, and no, I don't have time to stop every 5 seconds to hover my mouse over random symbols to reveal their types.

      JetBrains does a great thing where they show types for a lot of things as labels all the time instead of having to hover over all the things.

      • TeMPOraL 21 hours ago

        Right; the so-called "inlay hints" are also provided by clangd over LSP, so I have them in my Emacs too. Super helpful, but not always there when I need them.

  • scott_w a day ago

    Yes and it's made worse by using "deciseconds," a unit of time I've used literally 0 times in my entire life. If you see a message saying "I'll execute in 1ms," you'd look straight to your settings!

  • bmicraft a day ago

    > Variables must have units in their names if there's any ambiguity

    Then you end up with something where you can write "TimeoutSec=60" as well as "TimeoutSec=1min" in the case of systemd :)

    I'd argue they'd have been better off not putting the unit there. But yes, aside from that particular weirdness I fully agree.

    • physicles a day ago

      > Then you end up with something where you can write "TimeoutSec=60" as well as "TimeoutSec=1min" in the case of systemd :)

      But that's wrong too! If TimeoutSec is an integer, then don't accept "1min". If it's some sort of duration type, then don't call it TimeoutSec -- call it Timeout, and don't accept the value "60".

      • whycome 14 hours ago

        Can we call this the microwave paradox?

  • yencabulator a day ago

    I do that, but I can't help thinking that it smells like Hungarian notation.

    The best alternative I've found is to accept units in the values, "5 seconds" or "5s". Then just "1" is an incorrect value.

    • physicles a day ago

      That’s not automatically bad. There are two kinds of Hungarian notation: systems Hungarian, which duplicates information that the type system should be tracking; and apps Hungarian, which encodes information you’d express in types if your language’s type system were expressive enough. [1] goes into the difference.

      [1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...

      • yencabulator a day ago

        And this is exactly the kind the language should have a type for, Duration.

        • crazygringo 16 hours ago

          Not really.

          I don't want to have a type for an integer in seconds, a type for an integer in minutes, a type for an integer in days, and so forth.

          Just like I don't want to have a type for a float that means width, and another type for a float that means height.

          Putting the unit (as opposed to the data type) in the variable name is helpful, and is not the same as types.

          For really complicated stuff like dates, sure make a type or a class. But for basic dimensional values, that's going way overboard.

          • yencabulator 15 hours ago

            > I don't want to have a type for an integer in seconds, a type for an integer in minutes, a type for an integer in days, and so forth.

            This is not how a typical Duration type works.

            https://pkg.go.dev/time#Duration

            https://doc.rust-lang.org/nightly/core/time/struct.Duration....

            https://docs.rs/jiff/latest/jiff/struct.SignedDuration.html

            • crazygringo 14 hours ago

              I'm just saying, this form of "Hungarian" variable names is useful, to always include the unit.

              Not everything should be a type.

              If all you're doing is calculating the difference between two calls to time(), it can be much more straightforward to call something "elapsed_s" or "elapsed_ms" instead of going to all the trouble of a Duration type.

          • thaumasiotes 5 hours ago

            > I don't want to have a type for an integer in seconds, a type for an integer in minutes, a type for an integer in days, and so forth.

            > For really complicated stuff like dates, sure make a type or a class.

            Pick one. How are you separating days from dates? Not all days have the same number of seconds.

  • MrDresden a day ago

    > I insist on this all the time in code reviews. Variables must have units in their names if there's any ambiguity. For example, `int timeout` becomes `int timeout_msec`.

    Personally I flag any such use of int in code reviews, and instead recommend using value classes to properly convey the unit (think Second(2) or Millisecond(2000)).

    This of course depends on the language, its capabilities, and norms.

    • kqr a day ago

      I agree. Any time we start annotating type information in the variable name is a missed opportunity to actually use the type system for this.

      I suppose this is the "actual" problem with the git setting, insofar as there is an "actual" problem: the variable started out as a boolean, but then quietly turned into a timespan type, without triggering warnings on user configs that got reinterpreted as an effect of that.

  • bambax a day ago

    Yes! As it is, '1' is ambiguous, as it can mean "True" or '1 decisecond', and deciseconds are not a common time division. The units commonly used are either seconds or milliseconds. Using uncommon units should have a very strong justification.

  • deltaburnt 21 hours ago

    Though, ironically, msec is still ambiguous because it could mean milli or micro. It's usually milli so I wouldn't fault it, but we use micros just enough at my workplace that the distinction matters. I would usually go with timeout_micros or timeout_millis.

    • seszett 20 hours ago

      We use "ms" because it's the standard SI symbol. Microseconds would be "us" to avoid the µ.

      In fact, our French keyboards do have a "µ" key (as far as I remember, it was done so as to be able to easily write all SI prefixes) but using non-ASCII symbols is always a bit risky.

    • hnuser123456 21 hours ago

      Shouldn't that be named "usec"? But then again, I can absolutely see someone typing msec to represent microseconds.

    • 3eb7988a1663 16 hours ago

      ms for microseconds would be a paddlin'. The micro prefix is μ, but a "u" is sufficient for ease of typing on an ASCII alphabet.

  • jayd16 19 hours ago

    What would you call the current setting that takes both string enums and deciseconds?

    • physicles 9 hours ago

      help.autocorrect_enabled_or_accept_after_dsec? A name scary enough to convince anyone who uses it to read the docs.

  • miohtama 21 hours ago

    It's almost like Git is a version control system built by developers who only knew Perl and C.

thedufer a day ago

> Now, why Junio thought deciseconds was a reasonable unit of time measurement for this is never discussed, so I don't really know why that is.

xmobar uses deciseconds in a similar, albeit more problematic place - to declare how often to refresh each section. Using deciseconds is fantastic if your goal is for example configs to have numbers small enough that they clearly can't be milliseconds, resulting in people making the reasonable assumption that it must thus be seconds, and running their commands 10 times as often as they intended to. I've seen a number of accidental load spikes originating from this issue.

snet0 2 days ago

This seems like really quite bad design.

EDIT: 1) is the result of my misreading of the article, the "previous value" never existed in git.

1) Pushing a change that silently breaks things by reinterpreting a previous configuration value (1 = true) as a different value (1 = a 100ms confirmation delay) should pretty much always be avoided. Obviously you'd want to clear old values if they existed (maybe this did happen? it's unclear to me), but you'd also probably want to rename the configuration key.

2) Having `help.autocorrect`'s configuration argument be a time, measured in a non-standard (for most users) unit, is just plainly bad. Give me a boolean to enable, and a decimal to control the confirmation time.

  • jsnell 2 days ago

    For point 1, I think you're misunderstanding the timeline. That change happened in 2008, during code review of the initial patch to add that option as a boolean, and before it was ever committed to the main git tree.

  • iab 2 days ago

    “Design” to me intimates an intentional broad-context plan. This is no design, but an organic offshoot

    • snet0 2 days ago

      Someone thought of a feature (i.e. configurable autocorrect confirmation delay) and decided the interface should be identical to an existing feature (i.e. whether autocorrect is enabled). In my thinking, that second part is "design" of the interface.

      • iab a day ago

        I think that is something that arose from happenstance, not thoughtful intent - you can tell by how confusing the end result is.

userbinator 2 days ago

IMHO this is a great example of "creeping featurism". At best it introduces unnecessary complexity, and at worst those reliant on it will be encouraged to pay less attention to what they're doing.

  • cedws 2 days ago

    That's git in a nutshell. An elegant data structure masked by many layers of unnecessary crap that has accumulated over the years.

  • snowfarthing 16 hours ago

    What I don't get is why anyone would want to allow the automation. Is it really that difficult to use the up-arrow key and correct the mistake? Doing something automatically when it's sort-of correct is a recipe for doing things you didn't intend to do.

    • dtgriscom 16 hours ago

      Double this. If I don't type the command that I want, I never want my computer guessing and acting on that guess. Favors like that are why I hate Microsoft Word ("Surely you didn't mean XXXX; I'll help you by changing it to YYYY. Oh, you did it again, and in the same place? Well, I'll fix it again for you. High five!")

    • userbinator 13 hours ago

      Things seem to be going in that direction with LLMs, unfortunately.

zX41ZdbW 2 days ago

> Which was what the setting value was changed to in the patch that was eventually accepted. This means that setting help.autocorrect to 1 logically means "wait 100ms (1 decisecond) before continuing".

The mistake was here. Instead of retargeting the existing setting for a different meaning, they should have added a new setting.

    help.autocorrect - enable or disable
    help.autocorrect.milliseconds - how long to wait
There are similar mistakes in other systems, e.g., MySQL has

    innodb_flush_log_at_trx_commit
which can be 0 if disabled, 1 if enabled, and 2 was added as something special.
  • stouset 2 days ago

    The “real” issue is an untyped configuration language which tries to guess at what you actually meant by 1. They’re tripling down on this by making 1 a Boolean true while other integers are deciseconds. This is the same questionable logic behind YAML’s infamous “no” == false.
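
    For illustration, a rough sketch of the kind of guess-the-type parsing being described (hypothetical names, not git's actual implementation):

        #include <optional>
        #include <string>

        // Hypothetical sketch of the ambiguity being described - NOT git's actual code.
        // One untyped string has to be guessed into either a bool or a delay, and "1"
        // plausibly fits both readings.
        struct Autocorrect {
            bool enabled;
            int delay_deciseconds;  // 0 = run the corrected command immediately
        };

        std::optional<Autocorrect> parse_autocorrect(const std::string& value) {
            if (value == "false" || value == "no" || value == "off")
                return Autocorrect{false, 0};
            if (value == "true" || value == "yes" || value == "on")
                return Autocorrect{true, 0};
            // Everything else is read as a number of deciseconds, so a user who wrote
            // "1" meaning "true" silently gets a 100 ms confirmation window instead.
            try {
                return Autocorrect{true, std::stoi(value)};
            } catch (...) {
                return std::nullopt;  // not a bool, not a number
            }
        }

    A typed config value (or a separate key per meaning, as suggested elsewhere in the thread) removes the guess entirely.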

    • Dylan16807 2 days ago

      I'd say the new addition is more of a special case of rounding than it is messing up types.

      • stouset a day ago

        1 was also accepted as a Boolean true in this context, and it still is in other contexts.

        • Dylan16807 a day ago

          > 1 was also accepted as a Boolean true in this context, and it still is in other contexts.

          Is "was" before the change described at the end of the article, or after it?

          Before the change, any positive number implied that the feature is on, because that's the only thing that makes sense.

          After the change, you could say that 1 stops being treated as a number, but it's simpler to say it's still being treated as a number and is getting rounded down. The interpretation of various types is still messy, but it didn't get more messy.

          • stouset a day ago

            In an earlier iteration the configuration value was Boolean true/false. A 1 was interpreted as true. They changed it to an integral value. This is the entire setup for the problem in the article.

            Elsewhere, 1 is still allowed as a true equivalent.

            • Dylan16807 20 hours ago

              But then they made it not be a boolean when they added the delay. They went the opposite direction and it caused problems. How is this a situation of "tripling down"? It seems to me like they couldn't make up their mind.

              • stouset 19 hours ago

                The only reason they even need this further hack is because people can reasonably assume that 1 is bool.

                Now, because of this confusion, they’re special-casing 1 to actually mean 0. But other integers are still themselves. They’ve also added logic so that the strings "yes", "no", "true", "off" are interpreted as booleans now too.

  • smaudet a day ago

    Not sure where the best place to mention this would be, but 0.1 seconds (one decisecond) is not unreasonable either... yes, the fastest recorded reaction time to a random stimulus is maybe 1.5 ds (coincidentally about the average gamer reaction time), but non-random reaction times can be much faster (e.g. on a beat).

    So if you wanted to go that fast, you could; the invocation should have relatively stable timing (on the order of some milliseconds).

catlifeonmars 2 days ago

I enabled autocorrect (set to 3 seconds) a year ago and have the following observations about it:

1. it does not distinguish between dangerous and safe actions

2. it pollutes my shell history with mistyped commands

Reading this article gave me just enough of a nudge to finally disable it after a year.

  • layer8 2 days ago

    If anything, it’s better to set up aliases for frequent typos. (Still “pollutes” the shell history of course.)

  • darkwater 2 days ago

    About 2: well, you are the actual polluter, even if you just scroll back in history and use the same last wrong command because it works anyway.

    • bobbylarrybobby a day ago

      The issue is if you accept the wrong command instead of retyping it correctly, you never get the correctly spelled command into your history — and even worse, you don't get it to be more recent than the mistyped command.

    • catlifeonmars a day ago

      Well to put it into context, I use fish shell, which will only save commands that have an exit code of 0. By using git autocorrect, I have guaranteed that all git commands have an exit code of 0 :)

      • darkwater 4 hours ago

        TIL. And what about programs that don't have autocorrect? How does Fish handle the "up arrow and fix mistyped command" flow?

      • fsckboy 21 hours ago

        wow, our brains work differently, how can you smile in that circumstance? :)

        It's a terrible idea for fish not to save errors in history (even if the way bash does it is not optimal, ignoring/obliterating the error return) because running a command to look up the state of something can easily return the state you are checking along with an error code. "What was that awesome three-letter domain I looked up yesterday that was available? Damn, 'not a valid domain' is an error code" - and just like that SEX.COM slips through your grasp, and your only recourse would be to hijack it.

        but it's compoundedly worse to feel like the problem is solved by autocorrect further polluting your history.

        I would not want to be fixing things downstream of you, where you would be perfectly happy downstream of me.

Reason077 2 days ago

Deciseconds is such an oddball choice of units. Better to specify the delay in either milliseconds or seconds - either are far more commonly used in computing.

  • ralgozino a day ago

    I got really confused for a moment, thinking that "deciseconds" was some git-unit meaning "seconds needed to make a decision", like in "decision-seconds" xD

    Note: english is not my mother tongue, but I am from the civilised part of the world that uses the metric system FWIW.

    • legacynl 17 hours ago

      I get where you're coming from. Although deci is certainly used, it's rare enough not to expect it, especially in the context of git.

    • ssernikk 17 hours ago

      I thought of the same thing!

  • cobbal 2 days ago

    It's a decent, if uncommon, unit for human reactions. The difference between 0 and 1 seconds is a noticeably long time to wait for something, but the difference between n and n+1 milliseconds is too fine to be useful.

    • jonas21 a day ago

      Milliseconds are a commonly-used unit. It doesn't really matter if 1 ms is too fine a granularity -- you'll just have to write "autocorrect = 500" in your config file instead of "autocorrect = 5", but who cares?

      • bmicraft a day ago

        If you're going to store that value in one byte (possibly even signed), suddenly deciseconds start making a lot of sense.

        • lionkor 21 hours ago

          Why would you do that?

      • zxvkhkxvdvbdxz a day ago

        Sure, yes. But for human consumption, decisecond is something one can relate to.

        I mean, you probably cannot sense the difference in duration between 20 and 30 ms without special equipment.

        But you can possibly sense the difference between 2 and 3 deciseconds (200 ms and 300 ms) after some practice.

        I think the issue in this case was rather the retrofitting of a boolean setting into a numerical one.

        • LocalH a day ago

          And then you have the rhythm gamers who can adjust their inputs by 5 or 10ms. Hell, I'm not even that good of a player, but Fortnite Festival has a perfect indicator whenever you're within 50ms of the target note timestamp (and a debug display that shows you a running average input offset), and I can easily adjust my play to be slightly earlier or slightly later and watch my average fall or climb.

          Several top players have multiple "perfect full combos" under their belt, where they hit every note in the song within 50ms of the target. I even have one myself on one of the easier songs in the game.

        • adzm a day ago

          > But you can possibly sense the difference between 2 and 3 deciseconds (200 ms and 300 ms) after some practice.

          At 120bpm a sixteenth note is 125ms, the difference is very obvious I would think

          • zxvkhkxvdvbdxz 10 hours ago

            But then the tune is multiple beats long, and that misses my point.

        • fragmede a day ago

          The difference between 20ms and 30ms is the difference between 50 fps and 33 fps, which is entirely noticeable on a 1080p 60Hz screen.

          • zxvkhkxvdvbdxz 10 hours ago

            Sure, but that's a continuous stream of frames, which is not 300 ms long or whatever.

    • bobbylarrybobby a day ago

      But the consumers of the API aren't humans, they're programmers.

theginger 2 days ago

Reaction times differ by type of stimulus: auditory is slightly faster than visual, and tactile slightly faster than that, at 90-180 ms. So if git gave you a slap instead of an error message you might just about have time to react.

  • orangepanda 2 days ago

    The slapping device would need to build inertia for you to feel the slap. Is 10ms enough for that?

    • dullcrisp 2 days ago

      I think if it's spring-loaded then definitely. (But it's 100ms, not 10ms.)

      • orangepanda 2 days ago

        Assuming the best case scenario of feeling the slap in 90ms, it would leave 10ms to abort the command. Or did the 90-180ms range refer to something else?

        • dullcrisp 2 days ago

          Oh I see, you’re right.

    • Aerroon a day ago

      This is why any reasonable engineer would go with zaps instead of slaps!

cardamomo 2 days ago

Reading this post, the term "software archeology" and "programmer archeologist" come to mind. (Thank you, Vernor Vinge, for the latter concept.)

  • scubbo a day ago

    Grrrr, this is such a bugbear for me. I was so excited to read "A Fire Upon The Deep" because hackers talked up the concept of "software archeology" that the book apparently introduced.

    The concept is briefly alluded to in the prologue, and then...nada, not relevant to the rest of the plot at all (the _effects_ of the archeology are, but "software archeologists" are not meaningful characters in the narrative). I felt bait-and-switched.

  • choult 2 days ago

    I like to say that the danger of software archaeology is the inevitable discovery of coprolites...

  • schacon 2 days ago

    I can’t help but feel like you’re calling me “old”…

    • cardamomo 2 days ago

      Not my intention! Just an esteemed git archeologist

newman314 a day ago

For reference, Valtteri Bottas supposedly recorded a 40ms!!! reaction time at the 2019 Japanese Grand Prix.

https://www.formula1.com/en/video/valtteri-bottas-flying-fin...

  • amai a day ago

    Most probably that was a false start:

    "World Athletics rules that if an athlete moves within 100 milliseconds (0.1 seconds) of the pistol being fired to start the race, then that constitutes a false start."

    https://www.nytimes.com/athletic/5678148/2024/08/03/olympics...

    • arp242 a day ago

      That value has also been criticised as too high.

      • legacynl 21 hours ago

        What is the argument for it being too high?

        The argument for it being what it is is that our auditory processing (when using a starter pistol) or visual processing (looking at start-lights) takes time, as does transferring that signal to the relevant muscles. 100 milliseconds is a pretty good average, actually.

        • pyuser583 7 hours ago

          Reaction time can vary dramatically.

          Someone new to the “gunshot, run” dynamic could take longer, a soldier trained via repetition to react to a gunshot could be shorter, and a veteran with PTSD could be shorter still.

          100ms is both too long and too short (or so I’ve heard, I’m not an expert).

        • arp242 20 hours ago

          Basically, some people can consistently respond faster. The 100ms figure just isn't accurate.

          I don't have extensive resources/references at hand, but I've read about this a few times over the years.

          • legacynl 17 hours ago

            > I don't have references ... but I've read about this a few times over the years.

            Yeah well, I did a psych BSc and I'm telling you that it's impossible.

            It's certainly possible for people to do and notice things way faster than that, like a musician noticing a drummer being a few ms off beat, or speedrunners hitting frame perfect inputs, but in those cases the expectation and internal timekeeping is doing most of the heavy lifting.

            • Aerroon 15 hours ago

              It's rhythm vs reaction time. We can keep a much smaller time interval rhythm than we can react at.

  • voidUpdate a day ago

    Is there a random time between the red lights and the green lights, or is it always the same? Because that feels more like learning the timings than reacting to something

    • eknkc 19 hours ago

      No green lights - when the reds go out, that's the race start, but there is a random delay between all the reds lighting up and them going off.

    • jsnell a day ago

      Yes, the timing is random.

  • dotancohen a day ago

    I once had a .517 reaction time in a drag race. You know how I did that? By fouling too late. It was completely unrepeatable.

    I'm willing to bet Bottas fouled that, too late (or late enough).

kittikitti 2 days ago

I sometimes have this realization as I'm pressing enter and reflexively press ctrl+c. As someone whose typing speeds range from 100 to 160 WPM, this makes sense. Pressing keys is much different from Formula One pit stops.

  • otherme123 2 days ago

    Not about pit stops. They talk about pro drivers with highly trained reflexes, looking at a red light knowing that it will turn green in the next 3 seconds, so they must push the pedal to the metal as fast as they can. If they react in less than 120ms it is considered a jump start.

    As for 100WPM, which is a very respectable typing speed, it translates to 500 CPM, less than 10 characters per second, and thus slightly more than 100ms per keypress. But Ctrl+C is two key presses: reacting and typing them both in under 100 ms is equivalent to a typing speed above 200WPM.

    Even the fastest pro-gamers struggle to go faster than 500 actions (keypresses) per minute (and they use tweaks on repeat rates to get there), still more than 100ms for two key presses.

    • mjpa86 a day ago

      There is no green light at the start - it's the lights going out they react to. There's also no minimum time, you can get moving after 1ms - it's legal. In fact, you can move before the lights go out, there's a tolerance before you're classed as moving.

    • Aerroon a day ago

      >But Ctrl+C is two key presses: reacting and typing them both in under 100 ms is equivalent to a typing speed above 200WPM.

      I think people don't really type/press buttons at a constant speed. Instead we do combos. You do a quick one-two punch because that's what you're used to ("you've practiced"). You do it much faster than that 100ms, but after that you get a bit of a delay before you start the next combo.

      • otherme123 a day ago

        As mentioned, pro-gamers train combos for hours daily. The best of them can press up to 10 keys per second without thinking. For example, the fastest StarCraft II player, Reynor (Riccardo Romitti), can sustain 500 key presses per minute, and do short bursts of 800. He has videos explaining how to tweak the Windows registry to achieve such a rate (it involves pressing some keys once and letting the OS autorepeat faster than you can press), because it can't be done with the standard config dialogs. And you are trying to tell me that you can do double that... not only double that, "much faster" than that.

        I dare anyone to make a script that, after launching, will ask you to press Ctrl+C after a random wait between 1000 and 3000 ms. And record your reaction time measured after key release. It's allowed to "cheat" and have your fingers ready over the two keys. Unless you jump-start and get lucky, you won't get better than 150ms.

        • Aerroon 16 hours ago

          You don't make a typo, press enter and then start reacting to the typo.

          You start reacting to the typo as you're typing. You just won't get to the end of your reaction before you've pressed enter.

          The point of my combo comment is that pressing Ctrl + C is not the same thing as typing two random letters of a random word.

          Combine these two things and I think it's possible for somebody to interrupt a command going out. The question is whether you can press Ctrl+C while typing faster than 100ms, not whether you can react to it within 100ms.

          Also, people regularly type faster than the speed that pro StarCraft players play at. The sc2 players need the regedit because they will need to press Z 50 times in a row to make 100 zerglings as fast as possible, but you don't need that to type.

        • adzm a day ago

          I actually took you up on this, and the best I was able to get was about 250ms when I was really concentrating. The average was around 320!

  • snet0 2 days ago

    That reflexivity felt a bit weird the first time I thought about it. I type the incorrect character, but reflexively notice and backspace it without even becoming aware of it until a moment later. I thought it'd be related to seeing an unexpected character appearing on the display, but I do it just as quickly and reflexively with my eyes closed.

    That being said, there are obviously cases where you mistype (usually a fat-finger or something, where you don't physically recognise that you've pressed multiple keys) and don't appreciate it until you visually notice it or the application doesn't do what you expected. 100ms to react to an unexpected stimulus like that is obviously not useful.

    • grogenaut 2 days ago

      I type a lot while looking away from the monitors; it helps me think / avoid the stimulus of the text on the screen. I can tell when I fat-finger. It also pissed off the boomers at the bar who thought I was showing off, as I was a) typing faster than they could, b) not looking at the screen, and c) sometimes looking blankly past them (I'm really not looking when I do this sometimes).

      also I typed this entire thing that way without looking at it other than for red squiggles.

  • schacon 2 days ago

    I'm curious if the startup time, plus the overhead of Git trying to figure out what you might have meant is significant enough to give you enough time to realize and hit ctrl+c. In testing it quickly, it looks like typing the wrong command and having it spit out the possible matches without running it takes 0.01-0.03s, so I would venture to guess that it's still not enough time between hitting enter and then immediately hitting ctrl-c, but maybe you're very very fast?

    • johnisgood a day ago

      I think most programs you execute have enough startup overhead to hit Ctrl-C before they even begin, including CLI tools. I do this a lot (and factor in the time it takes to realize it was the wrong command, or not the flags I wanted, etc.)

    • rad_gruchalski 2 days ago

      The command is already running, you ctrl+c THE command. But I agree, 100ms is short.

dusted 2 days ago

I think it makes sense: if I typed something wrong, I often feel it before I can read it, and if I've already pushed enter, being able to ctrl+c within 100 ms is enough to save me. I'm pretty sure I've also aborted git pushes before they touched anything before I turned this on, but this makes it more reliable.

  • Etheryte 2 days ago

    Maybe worth noting here that 100ms is well under the human reaction time. For context, professional sprinters have been measured to have a reaction time in the ballpark of 160ms, for pretty much everyone else it's higher. And this is only for the reaction, you still need to move your hand, press the keys, etc.

    • shawabawa3 2 days ago

      In this case the reaction starts before you hit enter, as you're typing the command

      So, you type `git pshu<enter>` and realise you made a typo before you've finished typing. You can't react fast enough to stop hitting enter but you can absolutely ctrl+c before 100 more ms are up

      • Etheryte 2 days ago

        I'm still pretty skeptical of this claim. If you type 60 wpm, which is faster than an average human, but regular for people who type as professionals, you spend on average 200ms on a keystroke. 60 standard words per minute means 300 chars per minute [0], so 5 chars per second which is 200ms per char. Many people type faster than this, yes, but it's all still very much pushing it just to even meet the 100ms limit, and that's without any reaction or anything on top.

        [0] https://en.wikipedia.org/wiki/Words_per_minute

        • grayhatter a day ago

          For whatever it's worth*: I'm not skeptical of it at all. I've done this in a terminal before without even looking at the screen, so I know it can't have anything to do with visual reaction.

          Similar to the other reply, I also commonly do that when typing, where I know I've fat fingered a word, exclusively from the feeling of the keyboard.

          But also, you're not just trying to beat the fork/exec. You can also successfully beat any number of things: the pre-commit hook, the DNS lookup, the TLS handshake. Adding an additional 100ms of latency to that could easily be the difference between preempting some action, interrupting it, or noticing after it was completed.

        • shawabawa3 21 hours ago

          I just tried it out.

          I wrote this bash script:

              #!/usr/bin/env bash
              start_time=$(gdate +%s%3N)
              # Function to handle Ctrl+C (SIGINT)
              on_ctrl_c() {
                  end_time=$(gdate +%s%3N)
                  total_ms=$((end_time - start_time))
                  # Calculate integer seconds and the remaining milliseconds
                  seconds=$((total_ms / 1000))
                  millis=$((total_ms % 1000))
          
                  # Print the runtime in seconds.milliseconds
                  echo "Script ran for ${seconds}.$(printf '%03d' ${millis}) seconds."
                  exit 0
              }
          
              # Trap Ctrl+C (SIGINT) and call on_ctrl_c
              trap on_ctrl_c INT
          
              # Keep the script running indefinitely
              while true; do
                  sleep 1
              done
          
          
          And then I typed "bash sleep.sh git push origin master<enter><ctrl+C>"

          and got "Script ran for 0.064 seconds."

        • pc86 2 days ago

          Even if you typed 120 wpm, which is "competitive typing" speed according to this thing[0], it's going to take you 200ms to type ctrl+c, and even if you hit both more-or-less simultaneously you're going to be above the 100ms threshold. So to realistically beat the threshold during normal work, and not in a speed-centered environment, you're probably looking at regularly hitting 160 wpm or more?

          I'm not a competitive speed typist or anything but I struggle to get above 110 on a standard keyboard and I don't think I've ever seen anyone above the 125-130 range.

          [0] https://www.typingpal.com/en/documentation/school-edition/pe...

        • tokai 19 hours ago

          Typing is not a string of reactions to stimuli.

      • dusted a day ago

        Yes, exactly! This is what I'm trying to argue as well: it happens quite often for me that I submit a typo because it's already "on its way out" when I catch it (but before, or at about the same time, it's finished and enter is pressed), so the ctrl+c is already on its way :)

      • yreg 2 days ago

        Let's say you are right. What would be a reason for pressing ctrl+c instead of letting the command go through in your example?

        The delay is intended to let you abort execution of an autocorrected command, but without reading the output you have no idea how the typos were corrected.

      • brazzy 2 days ago

        > you can absolutely ctrl+c before 100 more ms are up

        Not gonna believe that without empirical evidence.

        • burnished 2 days ago

          I think they are talking about times where you realize a mistake as you are making it as opposed to hindsight, given that 100ms seems pretty reasonable.

          • dusted a day ago

            This is exactly what I'm trying to say. The actions are already underway in the muscles (or _just_ completed), the brain catches that something's off, and so the ctrl+c is queued.

          • brazzy a day ago

            "seems pretty reasonable" is not evidence.

        • bmacho 2 days ago

          I am not sure, have you read it properly? The scenario is that you are pushing enter, change your mind halfway through, and you are switching to ctrl+c. So it is not a reaction time, but an enter-to-ctrl+c scenario.

          Regarding reaction time, below 120ms (on a computer, in a browser(!)) is consistently achievable, e.g. this random yt video https://youtu.be/EH0Kh7WQM7w?t=45 .

          For some reason, I can't find more official reaction time measurements (by scientists, on world champion athletes, e-athletes), which is surprising.

          • brazzy a day ago

            That scenario seems fishy to me to begin with - is that something that actually happens, or just something people imagine? How would it work that you "change your mind halfway through" and somehow cannot stop your finger from pressing enter, but can move your fingers over and hit ctrl-c in a ridiculously short time window?

            > So it is not a reaction time, but an enter to ctrl+c scenario.

            At minimum, if we ignore the whole "changing your mind" thing. And for comparison: the world record for typing speed (over 15 seconds and without using any modifier keys) is around 300wpm, which translates to one keypress every 40ms - you really think 100ms to press two keys is something "you can absolutely" do? I'd believe that some* people could sometimes do it, but certainly not just anyone.

        • dusted a day ago

          That'd be interesting, but I don't know how to prove that I'm not just "pretending" to make typos and correcting them instantly ?

    • dusted a day ago

      There are different ways to measure reaction time. Circumstance is important.

      Reaction to unreasonable, unexpected events will be very slow due to processing and trying to understand what is happening and how to respond. Example: you are a racecar driver participating in a race, driving your car on a racetrack in a peaceful country.

      An armed attack: Slow reaction time, identifying the situation will take a long time, selecting an appropriate response will take longer.

      A kid running into the road on the far side of the audience stands: Faster.

      Kid running into the road near the audience: Faster.

      Car you're tailing braking with no turn to come: Faster.

      Crashed car behind a turn with bad overview: Faster.

      Guy you're slipstreaming braking before a turn: Even faster.

      For rhythm games, you anticipate and time the events, and so you can say these are no longer reactions, but actions.

      In the git context, where you typed something wrong, the lines are blurred: you're processing while you're acting, typing while you're evaluating what you're typing. The first line of defence is feeling/sensing that you typed something wrong, either from the feedback of your fingers touching too many keys, or from the rhythm of your typing being off; at least for me, this happens way faster than my visual input. I'm making errors as I type this, and they're corrected faster than I can really read them; sometimes I get it wrong and delete a word that was correct. But still, watching people type, I see this all the time: they're not watching and thinking about the letters exclusively, there's something else going on in their minds at the same time. 100 ms is a rather wide window in this context.

      Also, that said, we did a lot of experiments at work with a reaction time tester; most people got less than 70 ms after practice (an LED lights up at a random interval between 2 and 10 seconds).

      • tomatotomato37 20 hours ago

        I also want to add that in the context of human sprinters & F1 drivers, their reaction time is measured via leg actuation, which for a creature evolved to be an object-throwing endurance hunter is going to have worse neural & muscular latency than, say, your forearm. That is why using your finger to trigger a response in a conventional computer reaction-time tester can be so fast - we're essentially evolved for it.

  • SOLAR_FIELDS 2 days ago

    100 ms is an insanely short window. I would say usually even 1000ms would be too short for me to recognize and kill the command, even if I realized immediately that I had done something wrong.

    • jsjshsbd 2 days ago

      It's much too short to read an output, interpret it and realize you have to interrupt

      But often you type something, realize it's wrong while you are typing but not fast enough to stop your hand from pressing [Enter]

      That is one of the only situations where 100ms would be enough to save you.

      That being said, the reason in the article for 100ms is just confused commander. Why would anyone:

      1) encode a Boolean value as 0/1 in a human readable configuration

      2) encode a duration as a numeric value without unit in a human readable configuration

      Both are just lazy

      • SoftTalker 20 hours ago

        Absolutely. When I'm booting up an unfamiliar system and trying to catch the BIOS prompt for something non-normal, even 5 seconds is often too short. For me to notice that the prompt has been given, read "PRESS DEL KEY TO ENTER SETUP, F11 FOR BOOT OPTIONS, F12 FOR PXE BOOT" (or whatever), understand it, look for the F11 key on the possibly unfamiliar keyboard on my crash cart, and press it, can often take me more than 5 seconds. Especially if it's not a single key required but a control sequence. Maybe I'm slow. I always change these prompts to 10 seconds if they are configurable. Or I'll make a label with the options and stick it on the case so I can be prepared in advance.

      • Reason077 2 days ago

        > "Why would anyone ... encode a Boolean value as 0/1 in a human readable configuration"

        It may be lazy, but it's very common!

      • grayhatter a day ago

        laziness is a virtue of a good programmer.

        why demand many char when few char do trick?

        also

        > Why would anyone [...] encode a duration as a numeric value without unit in a human readable configuration

        If I'm only implementing support for a single unit, why would you expect or want to provide a unit? What's the behavior when you provide a unit instead of a number?

        > but not doing that extra work is lazy

        no, because while I'm not implementing unit parsing for a feature I wouldn't use, instead I'm spending that time implementing a better, faster diff algorithm. Or implementing a new protocol with better security, or sleeping. It's not lazy to do something important instead of something irrelevant. And given we're talking about git, which is already very impressive software, provided for free by volunteers, I'm going to default to assuming they're not just lazy.

  • tokai 19 hours ago

    You are talking about an anticipatory response. Human response times have been studied extensively and it is broadly accepted that ~100ms is the minimum for physiological processing and motor response to a stimulus. If you feel you go faster, you are anticipating your reaction.

  • frde_me a day ago

    But the point here is not that you need to realize you typed something wrong and then cancel (in that case just don't enable the setting if you always want to abort). The point is that you need to decide if the autocorrect suggestion was the right one. Which you can't know until it tells you what it wants to autocorrect to.

  • dankwizard a day ago

    Neo, get off HN and go destroy the agents!

politelemon 2 days ago

I agree that 'prompt' should be the value to set if you want git autocorrect to work for you. However, I'd want Y to be the default rather than N, so that a user can just press Enter once they've confirmed it.

In any case it is not a good idea to have a CLI command happen without your approval, even if the intention was really obvious.

  • misnome 2 days ago

    Yes, absolutely this. If I don’t want it to run, I will hit ctrl-c.

  • junon 2 days ago

    If prompt is the default, mistyped scripts will hang rather than exit 1 if they have stdin open. I think that causes more problems than it solves.

    • jzwinck 2 days ago

      That's what isatty() is for. If stdin is not a TTY, prompting should not be the default. Many programs change their defaults or their entire behavior based on isatty().
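
      A minimal sketch of that pattern, in Python rather than git's actual C code (the helper name and the fallback choice are made up):

          import sys

          def effective_autocorrect(configured: str) -> str:
              # Hypothetical helper: only prompt when stdin is an interactive
              # terminal; in scripts and pipelines fall back to suggesting
              # without running, so nothing blocks waiting for input.
              if configured == "prompt" and not sys.stdin.isatty():
                  return "0"  # show the suggestion but don't run it
              return configured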

      • junon a day ago

        isatty() is spoofed in e.g. Make via PTYs. It's a light check at best and lies to you at worst.

        • darthwalsh 17 hours ago

          If make is going to spoof the PTY, it should take responsibility for answering the autocorrect prompt

          • junon 16 hours ago

            There's no "prompt". That's not how TTYs work. Make has no idea the program is waiting for any input.

mmcnl 2 days ago

The most baffling thing is that someone implemented deciseconds as a unit of time. Truly bizarre.

IshKebab 2 days ago

> Junio came back to request that instead of special casing the "1" string, we should properly interpret any boolean string value (so "yes", "no", "true", "off", etc)

The fact that this guy has been the Git maintainer for so long and designs settings like this explains a lot!

1970-01-01 2 days ago

Deciseconds?? There's your problem. Always work in seconds when forcing a function for your users.

  • GuB-42 2 days ago

    Deciseconds (100ms) are not a bad unit when dealing with UI because it is about the fastest reaction time. We can't really feel the difference between 50 ms and 150 ms (both feel instant), but we can definitely feel the difference between 500 ms and 1500 ms. Centiseconds are too precise, seconds are not precise enough. It is also possible that the computer is not precise enough for centiseconds or less, making the extra precision a lie.

    Deciseconds are just uncommon. But the problem here is that the user expected the "1" to be a boolean value, not a duration. He never wanted a timer in the first place.

    By the way, not making the unit of time clear is a pet peeve of mine. The unit is never obvious, seconds and milliseconds are the most common, but you don't know which one unless you read the docs, and it can be something else.

    My preferred way is to specify the unit as part of the value (ex: "timeout=1s") with a specific type for durations; my second choice is to have it in the name (ex: "timeoutMs=1000"); documentation comes third (which is the case for git). If it's not documented in any way, you usually have to resort to trial and error or look deep into the code, as these values tend to be passed around quite a bit before reaching a function that finally makes the unit of time explicit.
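
    A rough sketch of that first option, with a hypothetical parse_duration helper (nothing git actually ships):

        import re
        from datetime import timedelta

        _UNITS = {"ms": "milliseconds", "s": "seconds", "m": "minutes", "h": "hours"}

        def parse_duration(text: str) -> timedelta:
            # Reject bare numbers like "100" so the unit is never ambiguous.
            m = re.fullmatch(r"(\d+)(ms|s|m|h)", text.strip())
            if not m:
                raise ValueError(f"duration needs a unit, e.g. '100ms' or '2s', got {text!r}")
            value, unit = m.groups()
            return timedelta(**{_UNITS[unit]: int(value)})

        print(parse_duration("1s"))     # 0:00:01
        print(parse_duration("250ms"))  # 0:00:00.250000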

  • synecdoche 2 days ago

    This may be something specific to Japan, which is where the maintainer is from. In the Japanese industrial control systems that I’ve encountered time is typically measured in this unit (100 ms).

  • gruez 2 days ago

    Better yet, encode the units into the variable/config name so people don't have to guess. You wouldn't believe how often I have to guess whether "10" means 10 seconds (sleep(3) on Linux) or milliseconds (Sleep in Win32).

  • userbinator a day ago

    My default for times is milliseconds, since that's a common granularity of system timing functions.

  • 331c8c71 2 days ago

    Seconds or milliseconds (e.g. if the setting must be integer) would've been fine as they are widely used. Deciseconds, centiseconds - wtf?

    • atonse 2 days ago

      Falls squarely within the "They were too busy figuring out whether they could do it, to ask whether they SHOULD do it"

  • dusted 2 days ago

    At least make it fractions of a second; 250 would already be much more noticeable. 100 is a nice compromise between "can't react" and "have to wait", assuming you already realize you probably messed up

  • UndefinedRef 2 days ago

    Maybe he meant dekaseconds? Still weird though..

    • TonyTrapp 2 days ago

      It reads like the intention was that, when turning the boolean (0/1) parameter into an integer parameter, the previous value enabled = 1 should behave reasonably close to the old behaviour. 1 decisecond is arguably close enough to instant. If the parameter were measured in seconds, the command would always have to wait a whole second before executing, with no room for smaller delays.

      • bot403 2 days ago

        No, smaller delays <1s are also a misdesign here. Have we all forgotten we're reacting to typos? It's an error condition. It's ok that the user feels it and is inconvenienced. They did something wrong.

        Do some think that 900ms, or 800, or some other sub-second value is really what we need for this error condition? Instead of, you know, not creating errors?

    • schacon 2 days ago

      We had this debate internally at GitButler. Deci versus deca (and now deka, which appears to also be a legit spelling). My assumption was that 1 full second may have felt too long, but who really knows.

      • a3w 2 days ago

        deci is 1/10, deca is 10/1. So decisecond is correct.

        • schacon 2 days ago

          I understand. I meant that I tried to say the word “decisecond” out loud, and we debated whether that was a real word or whether I was actually attempting to say “deca”, which would have been understandable.

          • zxvkhkxvdvbdxz a day ago

            It's very standardized (SI), meaning 1/10th. Although it's not so commonly used with seconds.

            You might be more familiar with decimeters, deciliters, decibels or the base-10 (decimal) numbering system.

            • quesera 19 hours ago

              Also "decimate" which used to mean "kill 1/10th of the soldiers", but now apparently means "destroy (almost) entirely". :)

          • CRConrad a day ago

            Sure, deca- as in "decade" is understandable. But why would deci- as in "decimal" be any less understandable?

kqr a day ago

This timeout makes me think about the type of scenario where I know I have mistyped the command, e.g. because I accidentally hit return prematurely, or hit return when I was trying to backspace away a typo. In those situations I reflexively follow return with an immediate ctrl-C, and might be able to get in before the 100 ms timeout. So it’s not entirely useless!

jakubmazanec a day ago

> introduced a small patch

> introduced a patch

> the Git maintainer, suggested

> relatively simple and largely backwards compatible fix

> version two of my patch is currently in flight to additionally

And this is how interfaces become unusable: through a thousand small "patches" created without any planning or oversight.

  • olddustytrail a day ago

    Ah, if only the Git project had someone of your talents in charge (rather than the current band of wastrel miscreants).

    Then it might enjoy some modicum of success, instead of languishing in its well-deserved obscurity!

    • jakubmazanec 21 hours ago

      Git has a notoriously bad CLI (as other commenters here noted). Your snarky comment provides no value to this discussion.

      • olddustytrail 20 hours ago

        On the contrary, it offers a little levity and humour, and possibly even the chance for some self-reflection as you consider why you thought it was appropriate to insult the folk who manage Git. I'm sure you can manage at least one of those?

        • jakubmazanec 20 hours ago

          Your comment isn't funny, just snarky. I suggest you read the HN guidelines again and do some reflection yourself.

          Also, if you see it as an insult, that's your mistake. It is just a simple empirical observation. I'm not saying it's an original thought; feel free to Google more about this topic.

          I won't waste any more time since you obviously aren't interested in discussion.

          • Ylpertnodi 17 hours ago

            >I won't waste any more time since you obviously aren't interested in discussion.

            Pot. Kettle. Black.

NoPicklez a day ago

Cool, but I don't know why it needs to be justified that it's too fast even for an F1 driver. Why can't we just say it's too fast without all the fluff about being a race car driver? The guy isn't even an F1 driver but a Le Mans driver.

  • blitzar a day ago

    My deodorant is good enough for an F1 driver; why wouldn't my git client adhere to the same standards?

  • benatkin a day ago

    The author is someone who went to conferences that DHH also attended, so for some of the audience it's a funny anecdote.

bobobob420 19 hours ago

Git autocorrect sounds like a very bad idea.

moogly 2 days ago

So Mercurial had something like this back in ancient times, but git devs decided to make a worse implementation.

ocean_moist a day ago

Fun fact: Professional gamers (esport players) have reaction times around 150ms to 170ms. 100ms is more or less impossible.

rossant 18 hours ago

First time I've heard of deciseconds. What a strange decision.

mike-the-mikado 2 days ago

I'd be interested to know if any F1 drivers actually use git.

  • schacon 2 days ago

    Not sure, but I do personally know two high profile Ruby developers who regularly race in the LMP2 (Le Mans Prototype 2) class - DHH and my fellow GitHub cofounder PJ Hyett, who is now a professional driver, owning and racing for AO (https://aoracing.com/).

    I mostly say this because I find it somewhat fun that they raced _each other_ at Le Mans last year, but also because I've personally seen both of them type Git commands, so I know it's true.

    • xeonmc 2 days ago

      Maybe we can pitch to Max Verstappen to use Git to store his sim racing setup configs.

    • pacaro 2 days ago

      I've also worked with engineers who have raced LMP. It's largely pay-to-play and this is one of those professions where if you're the right person, in the right place, at the right time, you might be able to afford it.

    • diggan 2 days ago

      Isn't Le Mans more of an "endurance" race though, especially compared to F1? It would be interesting to see the difference in reaction ability between racers from the two disciplines; I could see it being different.

      • schacon 2 days ago

        I feel like in the "racing / git crossover" world, that's pretty close. :)

meitham 2 days ago

Really enjoyable read

mscdex 2 days ago

This seems a bit strange to me considering the default behavior is to only show a suggested command if possible and do nothing else. That means they explicitly opted into the autocorrect feature, didn't bother to read the manual first, and just guessed at how it's supposed to be used.

Even the original documentation for the feature back when it was introduced in 2008 (v1.6.1-rc1) is pretty clear what the supported values are and how they are interpreted.

Theodores 2 days ago

0.1 seconds is a long time in drag racing where the timing tree is very different to F1. With F1 there are the five red lights that have to go out, and the time this takes is random.

With your git commands it is fairly predictable what happens next, it is not as if the computer is randomly taunting you with five lights.

I suggest a further patch where you can put git in either 'F1 mode' or, for our American cousins, 'Drag Strip mode'. This puts it into a confirmation mode for everything, where the whole timing sequence is shown in simplified ASCII art.

As a European, I would choose 'F1 mode' to have the five lights come on in sequence, wait a random delay and then go out, for 'git push' to happen.

I see no reason not to also have other settings, such as 'Ski Sunday mode', where it does the 'beep beep beep BEEEP' of the skiing competition. 'NASA mode' could be cool too.

Does anyone have any other timing sequences that they would like to see in the next 'patch'?

inoffensivename a day ago

Maybe a not-so-hot take on this... The only option this configuration parameter should take is "never", which should also be the default. Any other value should be interpreted as "never".

krab 16 hours ago

What if this was an intentional, yet overly clever, way to avoid one special case?

I mean, for all practical purposes, the value of 1 equals unconditional execution.

outside1234 17 hours ago

Regardless of the delay time, this just seems like an incredibly bad idea all around for something as important as source control.

moffkalast 2 days ago

> As some of you may have guessed, it's based on a fairly simple, modified Levenshtein distance algorithm

One day it'll dump the recent bash and git history into an LLM that will say something along the lines of "alright dumbass here's what you actually need to run"
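
For the curious, the unmodified version of that algorithm is small enough to sketch (plain dynamic-programming edit distance in Python, not git's actual implementation):

    def levenshtein(a: str, b: str) -> int:
        # Classic DP: at the start of row i, prev[j] is the distance
        # between a[:i-1] and b[:j].
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # delete ca
                               cur[j - 1] + 1,              # insert cb
                               prev[j - 1] + (ca != cb)))   # substitute
            prev = cur
        return prev[-1]

    commands = ["status", "stash", "checkout", "commit", "push"]
    print(min(commands, key=lambda c: levenshtein("stauts", c)))  # status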

bun_terminator 17 hours ago

clickbait, don't hide the truth in a pseudo-riddle

Pxtl 2 days ago

Pet peeve: Timespan configs that don't include the unit in the variable name.

I'm so sick of commands with --timeout params where I'm left guessing if it's seconds or millis or what.

  • skykooler 2 days ago

    I spent a while debugging a library with a chunk_time_ms parameter where it turned out "ms" stood for "microseconds".

    • grayhatter a day ago

      I have a very hard time relating to everyone else complaining about ~~lack of units~~ being unable to read/remember API docs. But using `chunk_time_ms` where ms is MICROseconds?! That's unforgivable, and I hope for all our sakes, you don't have to use that lib anymore! :D

      • Pxtl a day ago

        The sheer number of APIs in modern coding is exhausting; I can't imagine either trying to keep all the stuff I'm using in my head or having to go back to the docs every time instead of being able to just read the code.

        • grayhatter 20 hours ago

          do you primarily write rust, or js?

  • hinkley 2 days ago

    Be it seconds or milliseconds, eventually your program evolves to need tenths (or less) of that unit. Then you can either support decimal points, create a new field and deprecate the old one, or make a breaking change, which gives a migraine to the poor SOB who has to validate that breaking upgrade in production if they need to toggle back and forth more than a couple of times before turning it on. Code isn't always arranged so that a config change and a build/runtime change can be tucked into a single commit that can be applied or rolled back atomically.

    All because someone thought surely nobody would ever want something to happen on a quarter of a second delay/interval, or a 250 microsecond one.

  • echoangle 2 days ago

    Alternatively, you can also accept the value with a unit and return an error when a plain number is entered (so --timeout 5s or --timeout 5h is valid but --timeout 5 returns an error).

  • cratermoon 2 days ago

    I'll bounce in with another iteration of my argument for avoiding language primitive types and always using domain-appropriate value types. A Duration is not a number type, neither float nor integer. It may be implemented using whatever primitive the language provides, but for timeouts and sleep, what is 1 Duration? The software always encodes some definition of 1 unit in the time domain; make it clear to the user or programmer.
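
    A small sketch of the idea, with Python's timedelta standing in for a Duration type and a made-up run_with_timeout helper:

        import subprocess
        from datetime import timedelta

        def run_with_timeout(cmd: list[str], timeout: timedelta) -> None:
            # The signature states the unit explicitly; a bare number fails
            # loudly (no .total_seconds()) instead of being silently misread.
            subprocess.run(cmd, timeout=timeout.total_seconds(), check=True)

        run_with_timeout(["git", "status"], timeout=timedelta(seconds=5))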

baggy_trough a day ago

Whenever you provide a time configuration option, field, or parameter, always encode the units into the name.

snvzz a day ago

At 60fps that's 6 frames, which is plenty.

That aside, I feel the reason is to advertise the feature so that the user gets a chance to set the timer up to his preference or disable autocorrect entirely.

  • ninjamuffin99 a day ago

    6 frames is not enough to realize you made a typo / read whatever git is outputting telling you that you made a typo, and then respond to that input correctly.

    In video games it may seem like a lot of time for a reaction, but a lot of that “reaction time” is based on the previous context of the game, visuals, muscle memory and whatnot. If you're playing Street Fighter and, say, trying to parry an attack that has a 6-frame startup, you're already anticipating an attack to “react” to before it even starts. When typing git commands, you will never be on that kind of alert, anticipating your own typos.

    • snvzz a day ago

      >6 frames is not enough

      git good.

      (the parent post was a set up for this)

tester756 2 days ago

Yet another example where git shows its lack of user-friendly design

  • hinkley 2 days ago

    Well it is named after its author after all.

    • yreg a day ago

      At first I thought this is unnecessary name-calling, but apparently Linus has also made the same joke:

      > "I'm an egotistical bastard, and I name all my projects after myself. First Linux, now git."