silisili a month ago

Every time I use Git, I see how bad the UX is and marvel at how it ever became popular.

Even a simple merge/rebase leaves one confused. Which should I use? What is incoming? Why does incoming change as you progress? I didn't change anything (on purpose) but Git won't let me change branches. What the hell does stashing do? How do I just unfuck what I did and go back to a branch? These are rather common use cases, and today people still just nuke a directory because they can't figure out the arcane arts. I am one of those people, at times. Because it's faster to do so than read heaps of manpages for a situation I may never hit again.
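
(To be fair, the "unfuck it and get back to a clean branch" dance is only a few commands once you finally learn them. A sketch in a throwaway repo, assuming git >= 2.28 for `init -b`, so none of this touches real work:)

```shell
# Throwaway repo so nothing here touches real work (assumes git >= 2.28)
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git -c user.name=t -c user.email=t@e commit -q --allow-empty -m "initial"

echo "half-finished" > notes.txt && git add notes.txt

# "I didn't change anything (on purpose) but git won't let me switch":
git stash            # shelve the changes; the working tree is clean again
git switch -c other  # now switching works
git switch main
git stash pop        # take the shelved changes back

# "Just unfuck what I did and match the branch tip":
git reset --hard HEAD   # discard tracked modifications
git clean -fd           # careful: this deletes untracked files and dirs
```

But the point stands: nothing in the commands themselves tells you this is the incantation.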

Git might have awesome tech underneath, I really don't know or care. But someone needs to really spend time in a typical dev's shoes to make it nicer to use.

  • midenginedcoupe a month ago


    I've seen teams evaluate multiple revision systems before starting projects and each one has decided on Mercurial based on its technical merits.

    I'll take it on trust that git is a perfect solution for linux kernel development, but the number of teams who work the same way as them is a rounding error from zero. I see people on here complaining about cargo-culting from the cool kids (k8s, Spotify's team structures, etc., etc.), but I see git as exactly the same.

    I don't want to have to spend time understanding how my editor handles lines in memory in order to use it, why should I have to spend time understanding how my revision control system treats revisions?

    Tech is hard enough without having to sink time into unnecessarily complicated tools that can bite you in the bum really hard.

    • AlexandrB a month ago

      Mercurial is really nice. I find its CLI much less confusing than Git's. I think a lot of this has to do with Git's "index", where you have to "add" files and then "commit" them. This seems like the #1 source of confusion when people are learning Git, and it persists into operations like "rebase" and "cherry-pick", where the index is used for marking conflicts as resolved or not.

      • beej71 a month ago

        I personally always liked the flexibility the stage offers.

        • AlexandrB a month ago

          Oh, I agree. It definitely has advantages once you get the hang of it! It might be nice if it was optional though so newbies could ignore it entirely.

    • andyjohnson0 a month ago

      > I see people on here complaining about cargo-culting from the cool kids (k8s, spotify's team structures, etc., etc.), but I see git as exactly the same.

      I think this explains a lot of the initial popularity of git and github. Wanting to be like the cool kids, and git knowledge being seen as somehow "elite" back then.

      But given that initial popularity, it seems likely to me that network effects mostly explain its growth in usage from there. Convergence on a single revision control tool/site/representation/api/etc is probably inevitable, in the same way that there is basically a single social network (ie Facebook).

    • maccard a month ago

      > I see people on here complaining about cargo-culting from the cool kids (k8s, spotify's team structures, etc., etc.), but I see git as exactly the same.

      The big advantage of git is github and co, not git itself. By switching and using mercurial you lose all of the advantages of github too.

      • midenginedcoupe a month ago

        Well that's a different argument.

        The irony is I use git for all my projects, too, due to the network effect. It's vastly easier to go with the flow rather than swim against the tide. But every time I have to wrestle with un-fucking git's state I find myself considering my life choices and how I got there :(

        • sgbeal a month ago

          > But every time I have to wrestle with un-fucking git's state I find myself considering my life choices and how I got there :(

          cough Fossil cough

          • mirodin a month ago

            Amen to that. I love Fossil for its simplicity and batteries-included model: no external ticketing, wiki, and documentation to manually keep in sync. Plus, keeping everything inside a single SQLite file makes everything easier.

      • mikro2nd a month ago

        Or there's sourcehut so you can use Hg.

        • laurentlbm a month ago

          It's not as “accessible” though: it's not free and is less user-friendly.

          • tristan957 a month ago

            During the alpha, SourceHut is free to use, except for, which was abused by crypto script kiddies.

            I think you can pay as little as 2 dollars a month to use SourceHut.

    • B1FF_PSUVM a month ago

      > git is a perfect solution for linux kernel development, but the number of teams who work the same way as them is a rounding error from zero.

      Yeah, but saying that here a few years ago got you lynched.

  • sophacles a month ago

    > marvel at how it ever became popular.

    It's worth going through old HN comment sections that have flamewars of SVN vs git (or CVS vs hg or whatever) from 2006-2012 or so. The DVCS systems won because they changed a bad paradigm to one that's much, much better - literally, the way you think about version control these days is a radical departure from the previous way it was done. Merges are much, much nicer in git than they ever were in those old systems - I've never spent more than an hour dealing with the results of git issues, but I've lost literal weeks to a single SVN mishap.

    Why did git win over other dvcs systems?

    2 main reasons I see:

    1. Linux uses it - literally, that means it's the "cool one"

    2. Linux uses it - and that means the people developing it work similarly to the kernel: they are willing to try a lot of different ways to achieve the task, resulting in both some amazing cool stuff and a lot of cruft left hanging out for various compatibility and "accidentally super important structurally" reasons.

    • jerf a month ago

      There's a lesson here, too. To unseat git, you need to not just be slightly better than it; you need to have a big use case for many people and be radically better than it in that particular use case. You even can be worse than git in some others as long as you have that radical improvement to build on.

      Being merely better isn't enough; see mercurial.

      The use case actually has to be important to people in real life, too, not just be something that sounds cool or people might say is important but they don't actually exert any effort towards implementing; see fossil.

      The problem is that it's not clear what that will be. It wasn't hard to tell branch merging was a huge weak spot of SVN even if you were close enough to it to not see the forest for the trees. What is git's weak spot? People might jump up to say "interface!" but the reality is, no, it's not; again see Mercurial. Talk is cheap and complaints are cheaper, but the reality is the community is not switching en masse because of that.

      Large binary files are certainly a weak spot, and I understand there are some commercial solutions for that which edge out git, but that's not going to edge out git in general because the pain isn't enough for most projects, and while git-lfs isn't necessarily the slickest solution ever, it's good enough for programming projects.

      Submodules are awful, but it's not clear to me that there is enough pain there to make people switch even if you made that slick as can be. (Plus, if something did start eating git's lunch because of that, there are improvements git can make that would blunt the impact. Git's fundamental model means submodules are always going to be weird, but it doesn't have to be as bad as it is. I think backwards compatibility prevents git from fixing the problems, but if something was really a threat it could overcome that and they'd be improved.)

      I expect to still be using git for many many years yet.

      • silisili a month ago

        Well said, but I humbly disagree that it has to be better in some way; just easier AND as technically sound (unless that counts as "better" by your definition). Ease of use is probably one of the most important real-life factors in determining preferences for software.

        I strongly feel that if someone could match the merging ability of Git with the ease of simple, well-worded commands, it could win. And it doesn't even have to be a Git replacement, just a better frontend. Kind of like how yay is to pacman. Or how most people use something like HandBrake instead of ffmpeg.

        In a lot of ways, it reminds me of Python and Perl. Perl was definitely more powerful as a language, but Python felt like natural writing in pseudocode. Enthusiasts rightly point to the power of inbuilt regex, but like Git, few actually could harness it well enough to make that unreadability worth it.

        • jerf a month ago

          "I strongly feel if someone could match the merging ability of Git with the ease of simple, well worded commands, it could win."

          But we have that, and not only didn't it win, it shows no sign of winning. Mercurial isn't even growing in apparent mindshare.

          "Easier" and "technically sound" is, by observation (not by a long line of strained theorizing), not enough. Theories must take this fact into account or be useless in explaining the real world.

    • evouga a month ago

      But almost nobody actually uses git the way it was originally intended, eg. as decentralized version control? Instead there’s a canonical master repository (on GitHub) everyone pulls from/pushes to.

      • KrisJordan a month ago

        That your commits are made locally, in a clone of the entire repository, is decentralized. Your local repository accrues history as divergent from upstream (and unbeknownst to it) as you'd like it to.

        If you're going to collaborate in a decentralized way you ultimately need an accepted mainline source of truth.

      • sophacles a month ago

        Everyone uses it the way it was intended. There's not really another option. Any time you create a branch locally, without needing to contact the central server you are using distributed vcs. Same when you do local commits, or rebases or whatever. All of that is because you have a local copy of the repo and history, and can develop your way against your local repo then push the resulting changes upstream. The fact that this allows chaining of upstreams is not the main focus of "decentralized" wrt git and the other dvcs, that's just a side effect of the concept that appealed to Linus since it would better fit the kernel development model.

        From the point of view of someone stuck on Subversion, all of that is freaking magic. This is what I mean about "radical departure". By analogy, there are people who think of horses as slower cars and say things like "why can't we just start using horses again" without ever considering the horse issues that are no longer relevant, like: having to feed them every day, having to not ride them too long without resting the horse, what to do with all the poop, and so on, because cars don't even have analogous impediments.

        • COMMENT___ a month ago

          > Everyone uses it the way it was intended.

          "Everyone" uses GitHub as a single source of truth, but I believe that git itself was not designed with this in mind. GitHub users use git as centralised version control with "local commits, rebases or whatever".

          > Any time you create a branch locally, without needing to contact the central server you are using distributed vcs. Same when you do local commits, or rebases or whatever.

          I think that it is possible to add local branches and commits to centralised version control. Will it make it decentralised? I don't think so.

          > From the point of view of someone stuck on subversion, all of that is freaking magic. This is what I mean about "radical departure". By analogy, there are people who think of horses as slower cars and say things like "why can't we just start using horses again" without ever considering the horse issues that are no longer relevant, like: having to feed them every day, having to not ride them too long without resting the horse, what to do with all the poop, and so on, because cars don't even have analogous impediments.

          It's not a topic of "horses vs cars", you know. Not even close. This analogy is plain trolling IMO.

          • kfkdldldl a month ago

            > I think that it is possible to add local branches and commits to centralised version control

            Well, you think wrong, because it's not. Centralized change control will contact the central server for every change.

            If you can make local commits, it's decentralized source control. Prove me wrong: show in the SVN or p4 or CVS documentation where you can create local branches or commits while the central server isn't reachable over the network

            • COMMENT___ a month ago

              I believe that - hypothetically - centralised version control can have disconnected local commits with private local branches.

              But using GitHub as a single source of truth for git repositories makes git mostly centralised. Think of it as SVN with local commits and a central repository on GitHub (with its UI). And with git's awkward CLI.

              I don’t think that SVN, p4 or CVS have support for local commits. What I want to say is that local commits could be added into centralised version control systems. Come on, SVN has a local working copy. Won’t it handle local commits?

              • sophacles a month ago

                I think you are getting confused about what is decentralized by git (et al.). It's not that there's a notion of "a canonical copy" - the canonical-copy stuff is about user/developer organization. What a DVCS distributes is the history: the notion that you can have local commits, branches, etc. is a decentralization of a repo's history. In a centralized VCS, that history is always mediated by the server - if you want a new revision number, you have to ask the server what it is.

                The fact that you keep coming back to github to "prove" it's somehow centralized at the vcs level is clearly you just doing some contrarian trolling.

                • COMMENT___ a month ago

                  > In a centralized vcs, that history is always mediated by the server - you want a new revision number, you have to ask the server what it is.

                  I assume the most trivial case: say I contribute to e.g. MS documentation, whose source is now available exclusively on GitHub. Can I say that this MS docs repository is a canonical copy?

                  I think that the most common daily use workflows with git and GitHub are absolutely centralised regardless of the decentralised nature of git.

                  * I have a local git repository, its local version history and all the great features this provides. But I have to push to GitHub, you know. Can I somehow publish my changes if GitHub is down? So how is this different from centralised version control?

                  * GitHub provides extra features besides version control. It has a bug tracker, wiki, whatever. I'm tied to all these features, and they are not decentralised at all. I understand that this analogy is silly, but GitHub is a well-done SourceForge with Git. But it's still SourceForge, and it's centralised.

                  When I use git with GitHub, I usually only clone, commit and push and check my project's issue tracker. All these actions except commit require access to GitHub. So I think that the workflow is absolutely centralised even if git is decentralised by design and has local version history.

                  I think that's what @evouga meant in his comment above when he said:

                  > But almost nobody actually uses git the way it was originally intended, eg. as decentralized version control? Instead there’s a canonical master repository (on GitHub) everyone pulls from/pushes to.

      • atq2119 a month ago

        > Instead there’s a canonical master repository (on GitHub) everyone pulls from/pushes to.

        The company I work for has internal forks/clones of many projects hosted on GitHub.

        I routinely interact with at least four different clones of a project: the upstream one on GitHub; the local one on my machine; my personal clone on GitHub for upstream contributions; and the company-internal clone hosted on some company-internal server.

        So yes, we are using git in a decentralized fashion.

      • skybrian a month ago

        It might not be more decentralized in practice, but it's easier to make and accept outside contributions. Before "pull requests" were a thing there was the "patch" command. I don't miss manually figuring out how many directory levels to strip to get a patch to apply, or figuring out what to do when patch says "applied 7 of 9 hunks" or whatever.
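
      (For anyone who never had the pleasure: `-p` tells patch how many leading path components to strip from the filenames in the diff headers, and guessing it wrong was half the ritual. A minimal sketch, assuming `diff` and `patch` are on the PATH:)

```shell
cd "$(mktemp -d)"
mkdir -p old/src new/src
echo "int x = 1;" > old/src/main.c
echo "int x = 2;" > new/src/main.c

# diff exits 1 when the trees differ, so don't let that kill the script
diff -ru old new > fix.patch || true

# The patch headers read "old/src/main.c" / "new/src/main.c",
# so from inside the tree we strip one leading component with -p1:
cd old && patch -p1 < ../fix.patch
```

      With `-p0` the same patch would look for a literal `old/src/main.c` path and fail, which is exactly the guessing game pull requests made obsolete.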

    • koonsolo a month ago

      I see you never worked with coworkers who were able to mess up a git system. And because git is so complex, it's pretty easy to mess up.

      Just let some junior developers do some rebases and squashes, and see what happens.

    • Fnoord a month ago

      > 1. Linux uses it - literally, that means its the "cool one"

      Linus Torvalds developed it after the spat with BitKeeper, which was the main proprietary DVCS available to the public.

      In a short time it became the most popular one; all these people can't all be wrong.

      OpenBSD can keep using CVS because they have a small, relatively static team.

    • lenkite a month ago

      git won in the greater world because of GitHub; there was nothing at the time for Mercurial OSS hosting.

      • sophacles a month ago

        Sure there was: it was called Bitbucket (later bought by Atlassian). You could host Mercurial repos there until just a few years ago.

        And at that point, even the use of dvcs was in question, so you could use svn on google's forge, or sourceforge. Launchpad had bzr support too.

        It wasn't clear github was going to win until it was clear that git was going to win.

        • morelisp a month ago

          Google Code also had hg support before it had git support, if I remember right. Certainly the SVN -> hg migration was easier than SVN -> git if you were using it.

          • ralgozino a month ago

            And canonical's

            • morelisp a month ago

              At the time I’m talking about that was a proprietary forge for bzr, Canonical’s unnecessary version of baz, which was the community fork of tla, which was the GNU experiment in DVCS with an even worse interface than git.

  • bvrmn a month ago

    Do/did you use other VCSes? I did: ClearCase, CVS, SVN, Mercurial. They are a real horror to deal with (excluding the last). Sometimes basic operations like merge need a separate role (a dedicated human) to perform.

    Mercurial is nice if you are at most mid-level and do not use anything besides checkout/commit/push. Any non-trivial stuff requires reading the manual (the same as for git).

    I know you have no time to learn your tool. But I don't see any hurdle to allocating one evening and reading up on git's model. It gives you a model that you can apply to other distributed VCSes, and maybe makes you a better engineer.

    • quietbritishjim a month ago

      Mercurial is also great for rebasing (that's cherry picking in Git terminology - not the same as git rebase!). If I had experimental changes, I would often just commit things and mark them as private (so they wouldn't get pushed) rather than shelve (the Hg equivalent of git stash). Then, when I actually wanted them, I would rebase them on to the most recent revision (using --keep so the originals were still there if something went wrong - then I could strip the originals when the dust settled). Hg is so easy to use that you can feel really confident manipulating the commit graph, in a way you normally wouldn't risk in git.

      It helps that Mercurial has a really nice cross-platform GUI: TortoiseHg. Version control is one of those things that really benefits from a GUI, because you're manipulating small elements of a complex object - the commit graph. Using the command line is a bit like using ed to edit a text file (i.e. with commands like "insert xyz between lines 12 and 13").

      • troyvit a month ago

        Yeah Mercurial seems like what git should have been but never will be.

      • AvImd a month ago

        Is there any option to have this "private commits" feature in git? I just realized it's exactly what I wanted so many times. I've tried the assume-unchanged hack but it's too brittle.

        • bvrmn a month ago

          I have successfully used StGit for 10 years for private commits, and pretty much for the rest of my near-git workflows.


          • AvImd a month ago

            Does stgit help with the following scenario:

            1. I want some changes in my repo to be persistent whenever I switch to a new branch, pull, or merge a remote.

            2. They should not be shown in git status.

            3. They should not be pushed.

            From skimming the front page I haven't found whether something like this is supported.

            I see that with stgit I can `stg pop` my "persistent changes" before pushing to the remote and then apply it again with `stg push` but that requires that I use stg commands to create patches instead of `git commit`.

            • bvrmn a month ago

              For this case `git stash` is enough.
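
              Roughly like this (a sketch; the stash keeps the changes out of the way across pulls/switches and then restores them):

```shell
# Throwaway repo (assumes git >= 2.28 for `init -b`)
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git -c user.name=t -c user.email=t@e commit -q --allow-empty -m "init"

echo "local-only tweak" > local.cfg && git add local.cfg

git stash push -m "persistent local changes"  # working tree is clean now
# ... `git pull` / `git switch` / `git merge` would go here ...
git stash pop                                 # and the tweak is back
```

              It's manual rather than automatic, but it satisfies points 2 and 3: stashed changes don't show in status and never get pushed.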

      • bvrmn a month ago

        I heavily used the mq extension, and it was pretty inferior, in UX terms, to StGit at the time. With StGit, Mercurial has no nice features left for me.

      • lenkite a month ago

        If only someone had launched at the time of

        • bvrmn a month ago

          I remember issues with windows/unix interop for mercurial repos when github started to gain traction.

    • koonsolo a month ago

      > Do/did you use other VCSes? I did. ClearCase, CVS, SVN, Mercurial. It's a real horror (excluding later) to deal with.

      I did, plus MS SourceSafe, Team Foundation Source Control and Bazaar.

      I don't see the problem with SVN and Mercurial. If you have merge conflicts, no tool is going to solve that.

      If you love git so much, try to do a squash merge to the main, then do some other changes in your branch, and try to merge again. Have fun solving all the merge conflicts that actually shouldn't be there.

      Like I said, SVN and Mercurial are fine too. I don't see any reason why git would magically solve some merge conflict that SVN or Mercurial is not able to.

      • bvrmn a month ago

        > If you love git so much, try to do a squash merge to the main, then do some other changes in your branch, and try to merge again. Have fun solving all the merge conflicts that actually shouldn't be there.

        Why would I want to do that? It feels unnatural. What problem are you trying to solve with this workflow?

        I'm very curious how your favorite VCS handles this case.

      • maccam94 a month ago

        > try to do a squash merge to the main, then do some other changes in your branch, and try to merge again

        For this situation, at the end just do:

        git rebase -i main

        And delete all of the commits that you included in your squashed commit. If you want to preserve your detailed history you can create a new branch for the rebase like so:

        git checkout -b mybranch-rebase && git rebase -i main

      • minitech a month ago

        > do a squash merge to the main

        If you want Git to continue keeping track of your commits, don’t squash-merge them.

        Squash merging loses valuable history in general and use of it is a sign that commits on source branches are being made carelessly.

        (That said, if I had to do this, I’d rebase off of the squash before merging again and have no problems.)

      • morleyk a month ago

        I would just delete the branch after merge to the main and then create a new branch from main.

  • parasti a month ago

    Git has a terrible UI from a learning point of view. You can't learn Git by using its commands. You can, however, learn Git by reading about its architecture: blobs, trees, commits, pointers to commits (refs), and index (staging area where new commits are prepared). Because Git is really just a brilliantly simple data structure manipulated by dozens of ad-hoc commands.

    • bilekas a month ago

      > You can't learn Git by using its commands.

      From experience, this is just straight up wrong. I came from an SVN environment which was all GUI tools.

      Git's learning curve was steep but extremely short. Once you know what is happening, even those crazy states you can find yourself in really are not that hard.

      • parasti a month ago

        > Once you know what is happening, even those crazy states you can find yourself in really are not that hard.

        I think we're saying the same thing. Once you know how Git works, it's easy. But can you figure out "how Git works" by using a handful of commands like git add, git commit and git push?

        • bilekas a month ago

          > But can you figure out "how Git works" by using a handful of commands like git add, git commit and git push

          Once you can picture what's happening, absolutely. I can think of some cases that require a lookup later ('rebase vs. merge', for example, which I had to look up), but after a 5-minute documentation read, with a picture in your head of your state, it makes sense.

          You're right if you're looking for 'how git works under the hood' that's a bit different. I might have misunderstood your sentiment.

    • hardware2win a month ago

      Is there any other software where people seriously advise newbies to learn its internals?

      Everywhere else it'd be seen as a flaw from a UX standpoint, but git gets a "pass".

      Imagine having to read Excel's or Windows' code in order to use it consciously, lol.

      • omnicognate a month ago

        You don't need to learn its internals or read its code. You do need to learn what it does. This is not unreasonable, and is not the reason its interface is a mess.

        • hardware2win a month ago

          Previous poster literally writes about implementation details.

          >You can, however, learn Git by reading about its architecture: blobs, trees, commits, pointers to commits (refs), and index (staging area where new commits are prepared).

          Implementation details of e.g. DBs, std libs, runtimes, compilers, etc. are for advanced/expert cases, not for slightly-above-normal use.

          • carapace a month ago

            Blobs, trees, commits, refs, and index are not "internals" or "implementation details", they are the domain.

            It's like how for a word processor the domain is text et al., or for e.g. Inkscape the domain is SVG.

            Git has two levels, called "plumbing" and "porcelain" (I think.)

            The "plumbing" (blobs, trees, commits, refs, and index) is elegant and easy to understand and use.

            The "porcelain" is where everybody gets bogged down. I'll omit my own opinion (low) and just say that most of the problems people have with git seem to be related to trying to do complicated things with the crummy UX porcelain.

            And that seems to be because they learned the porcelain but not the plumbing.

            So what you're hearing is people advising people to learn the plumbing (as a side effect the porcelain becomes less unbearable.) In fact, if you know the plumbing you can write your own porcelain, and it's not hard and the code is brief because git is actually elegant and easy to understand and use.
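
            (A quick taste of the plumbing in a throwaway repo; the hashes will differ per run, and `-b main` assumes git >= 2.28:)

```shell
# Throwaway repo (assumes git >= 2.28 for `init -b`); hashes differ per run
cd "$(mktemp -d)" && git init -q -b main demo && cd demo

# A blob is just content, addressed by its hash:
blob=$(echo 'hello' | git hash-object -w --stdin)
git cat-file -p "$blob"   # prints: hello

# A commit points at a tree (a snapshot) and at its parent commits:
git -c user.name=t -c user.email=t@e commit -q --allow-empty -m "root"
git cat-file -p HEAD      # shows the tree id, author, committer, message

# A ref is nothing but a name for a commit hash:
git rev-parse main
```

            Four object types, plus names for them. That's the whole domain.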

          • elsjaako a month ago

            These aren't implementation details, these are the things you are actually dealing with when you use git.

            You have to have some mental model of what the program is doing. With SVN it's basically "the server has the latest version, you can download or update that". With git it's "You have commits (snapshots of the whole state of your directory) which are addressable by the hash of that commit and refer back to previous commits. Also, here's a bunch of tools to manipulate and share that". The gp list of terms you should know is pretty close to just covering that.

            For me, I like git because it doesn't guess what I want to do; it gives me tools to do what I want and then does what I tell it.
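
            (That whole mental model fits in a few commands; a throwaway sketch, assuming git >= 2.28 for `init -b`:)

```shell
# Throwaway repo (assumes git >= 2.28 for `init -b`)
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
c() { git -c user.name=t -c user.email=t@e commit -q --allow-empty -m "$1"; }
c "first"; c "second"; c "third"

# Every commit is addressed by its hash (%H) and names its parents (%P);
# that chain of hashes *is* the history, with no server involved.
git log --format='%H <- %P'
```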

          • ivanche a month ago

            For a "normal" use case one can get away with just git clone, git checkout, git add, git commit, git push, git pull. Anything more advanced, like rebasing onto master and squashing before merging, (unfortunately) demands at least some understanding of git objects. I've also noticed the reverse is true: the more understanding of git objects one has, the fewer catastrophic/unexpected situations one finds oneself in.
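
            (For the record, "rebase onto master and squash before merging" doesn't even need the interactive editor. A sketch in a throwaway repo, assuming git >= 2.28, and exactly the kind of thing that only makes sense once you picture the objects:)

```shell
# Throwaway repo (assumes git >= 2.28 for `init -b`)
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
c() { git -c user.name=t -c user.email=t@e commit -q -m "$1"; }
echo "base" > file && git add file && c "base"

git switch -q -c feature
echo "one" >> file && git add file && c "wip 1"
echo "two" >> file && git add file && c "wip 2"

# Replay onto main's tip (a no-op here, since main hasn't moved), then
# squash: keep the final tree, drop the wip commits, commit once.
git rebase -q main
git reset -q --soft main
c "feature: one tidy commit"
```

            `reset --soft` only moves the branch pointer; the index still holds the feature tree, so one commit captures all the wip work.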

      • lenkite a month ago

        git gets the "holy pass" because it was created by Saviour Linus Torvalds and thus to question it is Heresy. If you don't git it, it's because the problem is with you, never with git. Learn To Git, boy.

    • globular-toast a month ago

      There are a couple of problems with this in my experience. People who have done computer science or systems/OS programming in a lower level language like C are really comfortable with concepts like pointers/references/links, objects, trees etc. But there are loads of programmers who don't speak this language at all. When I try to explain that almost everything is immutable and append-only and they are only mutating a single reference called HEAD it just doesn't make sense.

      The other thing is, like you say, git itself is a terrible way to learn these concepts. A command line interface is great for expressing operations, but the state, before and after, is basically invisible. I think this might be especially hard for more visual people. When I was learning data structures drawing them out and "seeing" the structure was vitally important and I know it's not just me because computer science books are filled with pictures.

      I really want there to be a GUI I can recommend people to use for git, but I'm not aware of a good one. I use magit which helps a lot, but I can't recommend that to most people. All the other GUIs I've seen are just about doing command line stuff with a mouse.

    • LtWorf a month ago

      How to waste a week without learning anything useful about actually using git :D

      At least, that's how it went for me. It seemed horribly complicated, but once I started using it, it was no issue.

  • andyjohnson0 a month ago

    I agree. I use git on a daily basis, and I think I'm converging on some intermediate level of understanding, but I constantly find it difficult to use. I go backwards and forwards between wondering whether (after three decades as a developer) my brain is just too used to the SVN approach of centralised repositories and branches-are-directories -- and at the other extreme thinking that there is something unbalanced(?) about its abstractions and power.

    For very large projects (eg Linux-scale or Windows-scale) I can totally see why it is the best versioning solution. But most software isn't like that: it's relatively small teams maintaining a legacy codebase and making fairly incremental enhancements. I struggle with what git brings to that vs the cognitive overhead of using it.

    I've recently migrated a moderately large codebase from TFS to GitHub. A large part of what drove that was co-workers who were reluctant to use TFS - I suspect that they found it old-fashioned, and that kind of bothers me, even though git is clearly the future and TFS clearly isn't. I like diversity of tools, and tools that don't force me to think too hard about things that should be easy. The knowledge that git could turn into a foot-gun just when I most need to get things done, doesn't help.

    <edit> After some consideration, I think what I'd like is a tool that gives me a simple SVN-style UX over a real git repo, with the option to drop down into actual git if necessary, and so that colleagues can use plain git if they prefer. GitHub Desktop partly achieves this, as does Visual Studio's git integration, but it's not quite what I want.</edit>

    (Not a git hater. I've had a github account for over a decade and choose to use it a lot for personal projects. But mostly just simple, sequential check-ins over time.)

    • dasil003 a month ago

      This take is surprising to me as I spent a decade using cvs/svn and at the end of that I didn’t really understand how it worked well enough to have any confidence branching and merging. Switching to git was a steep learning curve but made so much more conceptual sense that in a few months I was very confident in branching and merging and with a couple years had a deep and intuitive understanding of rebasing. Git allows me to edit my work before I push it, committing early and often to help my local development, then editing and packaging into atomic and well-documented commits that will make sense years down the line without wading through the irrelevant hiccups that were part of the short-term development cycle.

      I won’t defend the git porcelain because it is incredibly baroque, but because it’s backed by a robust and simple data model underneath, I see that as more of a rote memorization challenge than a fundamental flaw. This is far preferable to svn, which conflates repos, directories, branches and tags, resulting in the possibility of nonsensical operations and unresolvable merge situations.

  • relyks a month ago

    You and me both. Git's interface has been very hard for me to understand (especially coming from Mercurial). I ended up finding Gitless, a wrapper around Git with a better interface, and loving it. The original author hasn't updated it in a long time, but I've been using a maintained fork that's been pretty sweet.

    • bmitc a month ago

      I came to Git from Perforce and have also had a tough time. Perforce had well-named actions. I was able to learn it quickly and even do custom devops things with its API. For Git, I do the absolute bare minimum in terms of workflow because very little of it is well-designed or makes sense. There's also a lot of snark in its language. I think I wanted to try interactive add on the command line, but bailed out because it was impossible to use without reading a manual, and I didn't appreciate "what now" and other such things.

      There are also very few use cases that require its complex distributed model.

  • indy a month ago

    I wonder how many centuries of developer time have been wasted trying to "unfuck what I did and go back to a branch"?

    • sophacles a month ago

      Not as many as svn or cvs for sure. That's part of why it became popular. There was a job title "merge master" - literally there were people whose job was to handle merging in various feature branches. If you didn't have this person you risked days or even weeks of downtime because no one could use the central server without risk of making the situation worse. You would regularly open a file and find your work had been stomped on because one person added a small doc fix in their version of a file and merged it after your work went in, but the vcs didn't really merge, it just overwrote.

      I'm not saying it can't be better - clearly it can.

      Just pointing out that one person spending a little time unfucking a branch is massively superior to everyone not being able to work because that same person fucked up everyone's branches.

    • 8n4vidtmkvmk a month ago

      hg up --clean

      I don't know why mercurial doesn't get more love.

      • geenat a month ago

        It was loved. I blame all the platforms dropping support for it.

        Github really is git's best PR.

      • mbfg a month ago

        mercurial was pretty nice, and if you have a "beginner developer team", mercurial is probably my recommendation. But how git does branching is just better, in a way you can't get past once you've used it. Luckily it's usually pretty simple to start with hg and move to git at a later date.

        • 8n4vidtmkvmk a month ago

          Hg branching makes more sense to me. You can update to any changeset and then just start coding and it will naturally branch. If you want something more permanent you can create a named branch. If you can't find your unnamed branches you can bookmark or tag them.

      • javajosh a month ago

        hg is indeed the better tool. Just as microkernels are the better kernel design. By existing and working (which is no small feat), Linux obliterates both alternatives in the zeitgeist. This is, of course, the great downside to "worse is better".

      • tempest_ a month ago

        We used hg when I started at my current position.

        From a developer stand point the experience was better but all the tooling and integrations are git based so we switched.

      • bvrmn a month ago

        As I remember `hg up` requires network access. What if you break a repo offline?

        • capitainenemo a month ago

          hg clean ?

          Actually, wait. hg up doesn't require network access. Maybe you're thinking of hg pull -u or svn up

          • bvrmn a month ago

            So you should also read the manual to know about it. There's no such thing as "intuitive" in the VCS world. Looks like most commentators confuse intuitiveness with familiarity.

            Don't get me wrong, I appreciate that the mercurial devs did a tremendous amount of work to hide a DVCS under an SVN-like ("simple", hah) interface. But every non-standard thing requires manual reading.

          • bvrmn a month ago

            > hg up doesn't require network access.

            So it's like `git checkout` and doesn't cover all breakage cases.

            • capitainenemo a month ago

              Not sure how to answer that, but sure, I can imagine plenty of ways in which a local repo could be screwed up that might require a strip or just a complete fresh pull. "All breakage cases" is pretty broad.

              My personal reasons for liking mercurial are some pretty amazing tooling (hg absorb, hg fa --deleted, revsets/filesets) combined with a friendly command line with fairly sensible defaults.

            • 8n4vidtmkvmk a month ago

              Dunno what scenario it would fail on. You can also clone a local repo as long as you didn't mangle the .hg dir

  • josteink a month ago

    > Every time I use Git, I see how bad the UX is and marvel at how it ever became popular.

    It's a tool aimed at technical people, which at the time it launched offered features and technical possibilities that were unmatched by most other popular source-control systems.

    I remember going from Microsoft's Team Foundation Version Control to Git, and everything just felt miles more flexible and capable.

    Especially branching was enormously limited with Microsoft's offer, while Git literally allowed me to merge anything I wanted, across any base, and usually end up where I wanted to be.

    In short: Git solved real world problems in a way which more than compensated for its somewhat clunky UX.

  • globular-toast a month ago

    > Every time I use Git, I see how bad the UX is and marvel at how it ever became popular.

    Did you see what came before? Git isn't popular because of its UI, it's popular despite its UI. It's considered worth it because of its distributed model and speed.

    Having said that, I've used magit pretty much from the start and can't understand people who don't switch to a better UI eventually. I don't think a command line UI works well for it at all.

  • CannisterFlux a month ago

    I find few of my fellow coworker devs know about or use gitk, and it is an awesome tool for visualizing where all the branch madness is. A lot of things that require good command-line git knowledge are a few mouse clicks or a menu option away with gitk. Undoing a merge for example (a classic "oh no, I pulled on the wrong branch and now it's all fucked up"), I've no idea how you'd do that with the command line, but in gitk it's just right click the parent commit and choose "reset the branch here".
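
    (For the record, the rough command-line equivalent of that right-click, assuming the bad merge was the most recent operation, is:)

    ```shell
    # undo a merge you just made (and haven't pushed yet):
    git reset --hard ORIG_HEAD        # ORIG_HEAD was set to the pre-merge tip

    # or point at the commit to rewind to explicitly:
    # git reset --hard <commit-before-merge>
    ```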

  • SassyGrapefruit a month ago

    I think git is wonderfully elegant and simple. I wish more people would take the time to sit down and understand it. It only takes maybe an hour or two.

    >Even a simple merge/rebase leaves one confused.

    I've never had this problem. Git always works exactly as it's designed to for me.

    • stonekyx a month ago

      Exactly. Git felt totally intuitive to me when I started using it as a teenager (with no background on SVN et al. of course), and most colleagues I've met so far also rarely had troubles using it. The latter is probably because I intentionally chose to work for companies that use Git though.

  • jansan a month ago

    > and marvel at how it ever became popular.

    You obviously have not used the alternatives that were available before Git. After CVS and Subversion, Git was just a breath of fresh air. But its UX still has a lot of room for improvement.

  • Aozora7 a month ago

    I find git reasonably understandable as long as you are using a GUI. When you have a GUI, every action you take provides visual feedback, so it's way easier to understand the purposes of merges, rebases, stashes, and resets. My understanding of the git workflow greatly improved thanks to it.

    • c7DJTLrn a month ago

      On the contrary, I've met plenty of people who struggle along with a GUI because they don't want to learn the CLI. They mash fetch and pull rhythmically, hoping it will do what they want, with no clue what it's really doing underneath. Eventually, they give up, nuke the repo and reclone it. Most CLI users I've met are at least competent at using git.

      • Aozora7 a month ago

        I have the opposite experience, the CLI users I see pretty much never use anything besides commit, push and pull, while GUI users tend to use more advanced commands since they can actually see what they do.

    • IshKebab a month ago

      I completely agree. A GUI also lets you avoid the terrible CLI. Plus the commit graph is a visual thing. It makes sense to view it in a GUI.

      Definitely the best way to learn.

  • dale_glass a month ago

    Let me try!

    > Even a simple merge/rebase leaves one confused. Which should I use?

    Merge merges two branches. You fork a branch off master, work on stuff until it's done, then merge it back into master. After that the branch can go away.

    Rebase is for rewriting history. To get rid of the 20 embarrassing commits where you tweaked stuff at random hoping to get it to work, or to redo a bunch of changes in a more coherent and easier to review way. It can be used to transplant work from one branch to another. It also can be used to keep up with the changes in the branch you forked off from.
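
    A minimal sketch of both workflows (branch names assumed):

    ```shell
    # merge: integrate a finished feature branch back into main
    git switch main
    git merge feature        # creates a merge commit joining the histories

    # rebase: replay your feature commits on top of the latest main
    git switch feature
    git rebase main          # history now looks linear

    # interactive rebase: squash/reword those 20 embarrassing commits first
    git rebase -i main
    ```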

    > What is incoming? Why does incoming change as you progress?

    Not quite sure what you mean, could you provide more details?

    > I didn't change anything (on purpose) but Git won't let me change branches.

    Probably because changing branches would lose your current work. It can happen either mid-rebase, or because you modified a file that would be overwritten by a branch change.

    > How do I just unfuck what I did and go back to a branch?

    You can abort a rebase with `git rebase --abort`. This loses progress.
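
    In commands, the common escape hatches look roughly like this ("other-branch" is a placeholder):

    ```shell
    # bail out of a half-finished rebase or merge, back to where you started:
    git rebase --abort
    git merge --abort

    # "git won't let me change branches": shelve the stray changes first
    git stash                # temporarily save and remove uncommitted edits
    git switch other-branch
    git stash pop            # re-apply them when you come back

    # scorched earth: drop every local change and match the branch tip
    git reset --hard HEAD
    ```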

    > Git might have awesome tech underneath, I really don't know or care. But someone needs to really spend time in a typical dev's shoes to make it nicer to use.

    I agree the UI leaves a lot to be desired. But the main thing is that you need to understand what it is that you want to accomplish first, especially in the merge/rebase question.

  • knighthack a month ago

    I've never understood this sentiment. We're now in 2022. When working on Linux, especially on desktop versions, most people don't need to use the terminal, ever. You could if you want to, but you don't have to.

    Similarly, for most things you don't ever have to touch Git on the command line. There are tons of great interfaces overlaying Git - e.g. Sublime Merge, which can visualize what you're doing, and makes Git significantly "nicer to use". You can still descend into the command line, but for whatever you've described above, you don't have to, and you don't have read manpages for common use cases.

    Your problem with Git is not an insurmountable issue - it's about finding the right tool to use Git with, which is the way forward anyway. Consider ffmpeg. It's extremely powerful, and about as arcane as git, and you'd need just as many manpages or Google searches to understand it. Yet most people just use software that overlays all the magical conversions that ffmpeg enables - and no one has to see or blame ffmpeg per se. The scenario should be similar with Git, I hazard.

  • tommyage a month ago

    Maybe you should try a proven front-end instead. Maybe you are kind of a visual learner.

  • arein3 a month ago

    Use Sourcetree (older versions) or SmartGit (similar UI to Sourcetree) if you are on Linux

    Makes life a lot easier

    • swozey a month ago

      I love gitkraken when I'm stumped

mpermar a month ago

>Nearly a decade later, new problems arose when Kubernetes (the operating system of the cloud) brought open-source collaboration to a new level.

I'd love to get more context to that statement to understand it better because as it is, it sounds as such an arbitrary statement that undermines the credibility of all the content below.

Kubernetes didn't bring open-source collaboration to a new level. No matter how relevant Kubernetes is today, it's just a drop in the huge ocean of OSS. Maybe "level" in this context refers to 'gitops', which many of us were doing years before the term was coined and without K8s involved. Or perhaps the author refers to the fact that most gitops K8s frameworks work via polling, which is a fundamental scalability flaw.

  • superb-owl a month ago

    To try and steelman this line - the CNCF (a major force behind k8s) has indeed been a game-changer for OSS. It has built a way for OSS projects created by large enterprises to move towards vendor-neutral community governance.

    IMO the way Kubernetes is built and maintained serves as a model for sustainable, enterprise-grade open source.

    • aaronblohowiak a month ago

      > It has built a way for OSS projects created by large enterprises to move towards vendor-neutral community governance.

      In what way does the cncf do this that the Apache foundation does not?

      • morelisp a month ago

        I personally can’t wait until we have even more moribund foundations shipping over-engineered corporate leftovers.

  • dan-robertson a month ago

    The introduction was indeed pretty cheesy. I read it and thought ‘I bet there will be a load of comments about the introduction instead of the article’ and then I read the rest of the article. Thankfully other comments did discuss the real content of the post.

  • msoad a month ago

    This reads like buzzword soup to me as well. All of the "ideas" presented are existing systems that the author wants folded into the source control software. Not sure why.

    Also, GitHub is the de facto monorepo? Since when can you fork/clone code in a monorepo? The whole point of a monorepo is to avoid that!

  • skratlo a month ago

    Came here to say this. I stopped reading after that, just skimmed the article; it's a bunch of horsecrap.

    • emn13 a month ago

      To read this charitably, interpret that statement not to mean that kubernetes-the-software itself is the best gift to OSS collaboration ever, but rather that Kubernetes-the-project "brought open-source collaboration to a new level" in the context of working on kubernetes-the-software. I think that makes more sense, and certainly isn't as pompous.

      • jpgvm a month ago

        OpenStack had similar levels of contribution at its peak and had much of the same solutions. I'm just guessing the author is newer to the game so might have missed out on 2010-2013 in the infrastructure space.

        One of the cool things to come out of it was Zuul, which is a merge queue system similar to Bors and friends.

    • damagednoob a month ago

      I dunno, the points about UX, merge queue and LFS all ring true for me. All those things are solved outside of Git AFAIK.

  • Semiapies a month ago

    The rest of the article is just a listicle of wouldn't-it-be-cool-to-haves, so it's hard for that to have much credibility to start with

  • nailer a month ago

    Kubernetes' main contribution to tech is “let’s do it all using kubernetes” as a meme for wasted time.

    It references the effort and money wasted by people that wanted to build cloud providers despite not being in the business of being a cloud provider.

    • jpgvm a month ago

      Just no.

      Kubernetes has its roots in Omega, which was a research project to explore improvements to Borg.

      It was created/released as a direct response to the increasing lock-in of AWS and Azure PaaS-like services that were becoming an existential threat to GCP ever gaining any marketshare.

      Unlike OpenStack, it did actually manage to mostly achieve its goal of preventing lock-in by creating a standardized API which all distributions/managed providers need to provide, and actually certifying that they do. OpenStack failed in this regard because it was overrun by vendor interests too quickly and suffered poor governance. Additionally, the leading vendors of the time simply ignored it because none of them offered a compatible API layer and none of them cared about any of the upstarts that did. Also, it turned out very few people wanted to build their own IaaS if it would be incompatible with AWS and bursting would be awkward.

      k8s successfully learnt from these mistakes.

      So its contributions are two-fold.

      1) Single-handedly forced the other 2 major vendors to implement a standard API.

      2) Created an infrastructure OSS ecosystem above this API layer that broadly has been successful with enterprise interests while abiding by the governance model set out by core k8s.

      These alone make it a very successful project even if you disagree with the technical implementation/merits.

      • nailer a month ago

        You've said a lot of things but none of these respond to the comment you're referring to.

        > 1) Single-handedly forced the other 2 major vendors to implement a standard API.

        I (and I'm sure you too, you seem intelligent) would be surprised if, say, EKS is anywhere close to ECS usage. k8s is considered so complex/poor that Amazon sells ECS on-prem.

        It's the epitome of resume driven development - nobody uses k8s for any other reason except to say they use k8s.

        • wvh a month ago

          What's your alternative for stuff that needs to run on more than one server for reliability or scale? I have been duct-taping HA solutions since the late nineties, it definitely wasn't prettier than Kubernetes. We are beyond the phase where we want to care about a physical server with a broken hard disk or power supply. Whether or not Kubernetes-the-software is the answer to the conceptualisation of a "computer" is beyond the point; there is a clear trend towards abstraction of computing hardware for good reasons, even for companies way smaller than the Googles and AWSes of this world.

          • nailer a month ago

            > What's your alternative for stuff that needs to run on more than one server for reliability or scale?

            Same as most people, and as previously mentioned: I pay a cloud provider, unless my business is being a cloud provider.

            • 0xCAP a month ago

              How do you host your stuff on, say, GCP? App Engine? Only 1 per project. Compute Engine? Basically a VPS. Cloud Run? OK, good luck with stuff that requires long-running requests or listening to events from a source other than Google Cloud Pub/Sub, because your instances are eventually going to scale down to 0 and not wake up if no HTTP requests are incoming - so basically not an option for microservices if you're rolling with a non-Google event broker. Need to host Kafka, Elasticsearch, or anything else non-trivially deployable? How do you do that?

              • nailer a month ago

                You're acting like IaC didn't exist before k8s.

                Right now I'd use Pulumi, years ago I'd use Terraform, or the AWS API - I was part of the first Node AWS API client and part of App Engine before k8s existed. k8s didn't invent infra as code or auto-provisioning capacity. The fact that its advocates act like it did is why k8s is a DevOps meme.

                > your instances are eventually going to scale down to 0 and not wake up if no HTTP requests are incoming

                That's a good thing. I think you don't understand Serverless.

        • jpgvm a month ago

          All of it responds to your original comment, which stated that it contributed nothing except padded resumes.

          I stated several things it contributed, even if you think it's too complex.

          If you don't understand why people use k8s you don't understand the problems it solves. Especially if you think ECS is a substitute.

          • nailer a month ago

            The thing you stated is that k8s allows people to easily deploy across different cloud providers due to lack of vendor lock-in. It doesn't, because nobody uses it for that - the cost of getting it running, compared to simpler alternatives, removes all value from the cross-platform abilities. That's why k8s is a DevOps meme.

pdmccormick a month ago

Am I the only one who thinks that Git's UX is fine, and maybe even rather enjoyable? It has taken time to learn, and I am by no means a power user, but its model is now in my brain so, for better or worse, it's how I think and work now too (interactive rebasing for the win, all the time, and lots of shell aliases to shorten things). I do wish I had an easier way to split up a commit that accidentally included several unrelated changes though.
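
(For what it's worth, splitting such a commit is doable with an interactive rebase, roughly:)

```shell
git rebase -i <commit>^        # mark the offending commit as "edit"
git reset HEAD^                # undo the commit, keep its changes in the worktree
git add -p                     # stage just the first logical change
git commit -m "first part"
git add -p                     # and the next...
git commit -m "second part"
git rebase --continue
```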

What's the lesson, that you can learn anything eventually, or that familiarity means you will lose the ability to accurately evaluate something?

  • pizza234 a month ago

    The checkout command is severely overloaded; I'd hardly remember the functions if it wasn't for aliases.

    The reset operations are also very inconvenient, due to the mix of: different types of reset (soft/hard); overlapping with the checkout command; different states of the files.

    Pushing is also overloaded, due to handling both branches and tags (this is probably due to the fact that both have refs).

    There are strange warts (e.g. adding with --patch doesn't include files not in the index; displaying the content of a given stash entry requires typing the whole - unnecessarily complex - entry name), which I don't doubt make sense technically, but from a user perspective, they're odd.

    There's probably a lot of stuff that one can find, depending on how wide their usage is. For example, I actually didn't realize how convenient patched (--patch) unstaging would be, since I typically perform a reset, then add (--patch) again.

    I've personally never got past the feeling that, not frequently but still with some regularity, git operations have a byzantine UX.

    edit: finding the merge commit that brought in a given commit is also sorely missed; it requires a non-trivial alias.

    • anonymous_sorry a month ago

      I was learning git around the time `switch` and `restore` were introduced to tackle the problem of checkout being overloaded. I started using the new commands and it instantly made more sense and began to click. I very rarely use checkout at all.

      Something to be aware of when training junior devs. Do them a favour and learn switch/restore first!
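
      Roughly, the mapping from the overloaded checkout is:

      ```shell
      git switch mybranch                     # was: git checkout mybranch
      git switch -c new-branch                # was: git checkout -b new-branch
      git restore file.txt                    # was: git checkout -- file.txt
      git restore --source=HEAD~1 file.txt    # was: git checkout HEAD~1 -- file.txt
      ```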

      • b0afc375b5 a month ago

        I think this is the first time I've heard of switch and restore. Before I moved to magit I always used git checkout and git reset.

        • jacobyoder a month ago

          Git switch and git restore were introduced in Git v2.23, way back in August 2019. So... they're not brand new in the last few months, but... like you (and another poster) I'm not aware of them, as I had ... 8-10 years of muscle memory before without them around. I'm actually trying to think back to when I first used git... I don't think I learned about it at all until around 2007 - company I was at had a lot of SVN and a couple of the folks on the team were exploring other options. And... IIRC, around mid-to-late 2008, someone at a local Ruby group demonstrated github. But... I'm not sure I actually started using git 'for real' until 2010 or so.

          So... there was 9+ years of learning certain commands/styles and... switch/restore weren't part of that.

          And I've mostly switched to GUIs for day to day stuff - the Tower Mac client and sometimes the JetBrains git tools. They might even now be using 'switch' and 'restore' for some basic operations behind the scenes.

        • jawilson2 a month ago

          I've been using git since 2009, first for me too

    • madeofpalk a month ago

      > The checkout command is severely overloaded

      I guess that's why they split git checkout into git switch and git restore

    • _huayra_ a month ago

      > The checkout command is severely overloaded;

      I've been using git since 2008. I just learned a few weeks ago that I had the opposite understanding of what --theirs and --ours does on git-checkout during a rebase operation. (briefly: --ours is the branch you are rebasing onto, --theirs is the changes from the branch with the changes you are repeatedly cherry-picking into the new branch. see this answer for more detail [0])

      I shudder to think how much I've screwed things up over my career as a result...
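
      To spell out the inversion (branch names assumed):

      ```shell
      git switch feature
      git rebase main
      # on a conflict, counterintuitively:
      git checkout --ours   file.txt   # version from main, the branch rebased ONTO
      git checkout --theirs file.txt   # version from your own feature-branch commit
      ```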


      • ecnahc515 a month ago

        It kind-of makes sense when you think of rebase as a composite command that does a checkout of the branch/newbase and cherry-picks the commits from previous branch.

        • _huayra_ a month ago

          Yes, it actually totally makes sense, but the issue is that if one draws the mental picture the wrong way the first time (and is left uncorrected), it's equally easy to draw the mental picture (i.e. of commits "flying around" from one branch to another) in the other direction too. That's what bit me

          I don't even know how to check for mistakes here with my current employer (god speed to the code I wrote at my previous jobs). Unlike a merge, a rebase leaves no trace except in the reflog :(

    • andreareina a month ago

      Push makes sense to me actually, branches and tags are both aliases to a particular commit. What makes them different is that the branch pointer gets updated on commit, while the tag pointer is (meant to be) static.

      But I do in general agree that the cli leaves much to be desired. I really have to give credit to magit for making a git ui that is simultaneously easy to use, powerful, and has actually made me more proficient at using the regular cli (the commands that underlie the operations are echoed so whenever I do something new I take a peek at how it's done).

      • pizza234 a month ago

        I think there is definitely a technical reason, but I'll explain the UX problem with an example: what does, intuitively, `push --force --tags` do?

        1. force push both the branch and the tags

        2. force push only the tags

        It's very ambiguous - both answers make sense. And that's a big UX problem!

    • bryanrasmussen a month ago

      if instead of checkout you had a bunch of separate functions, do you think you would remember those?

      Someone could probably make an argument that commands should be overloaded even more - clone, pull, and checkout could be merged for most of their common operations - as an example. Note: I am not necessarily for this but I'm not necessarily against it either.

      agree --patch is weird.

      • anonymous_sorry a month ago

        > if instead of checkout you had a bunch of separate functions you think you would remember those?

        It does (switch/restore) and I do.

      • afiori a month ago

        If clone push and pull were overloaded into a

           git sync --from <local or remote ref, or .> --target <local or remote ref, or .>
        It would not change much.

        The problematic kind of overloading is how

            git push origin master

            git push origin master:master
        push master to origin while

            git push origin :master
        deletes master from origin.
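
        (For reference, the general refspec form is <src>:<dst>, and an empty source means "delete the destination":)

        ```shell
        git push origin master:master    # push local master to remote master
        git push origin master           # shorthand for the same refspec
        git push origin :master          # empty <src>: delete remote master
        git push origin --delete master  # the clearer modern spelling
        ```
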
      • vvillena a month ago

        Check out pacman, the Arch Linux package manager, to see an extreme case of command overloading. It probably makes lots of sense from a conceptual point of view, but for users that simply use a very small subset of the tool, it can get a bit messy.

  • simias a month ago

    Anything can become "fine" with enough practice and muscle memory. I'm a fairly advanced git user and manage to use it mostly painlessly even for relatively advanced tasks but I still think it's pretty crap overall. Not as crap as it was 15 years ago mind you, but not great.

    I was a big proponent of Mercurial for a long time because I thought (and still think) that the UI and defaults were vastly superior for most projects which don't have the needs and workflow of the Linux kernel. I gave up a few years ago when it became clear that git was VHS to Hg's Betamax.

  • mabbo a month ago

    It's fine. It works. But it's not intuitive.

    A decade of near daily use of it, as the 'git guy' on most teams I'm in, and I'm still spending time searching how to do specific things. To use a UX term, there are very few affordances telling me what I should expect, guiding my intuition.

    Imagine a VCS where it was obvious and easy to figure out how to do anything that is possible. Along with all the power that git brings today.

    • alksdeef a month ago

      Seems like that's been done pretty well with all the wrappers, TUIs and GUIs available... what else is needed?

      • mabbo a month ago

        All the things built on top are limited to the things their creators decided were needed. They'll never be as powerful as git itself.

        I want a successor to git that provides as much or more power but with the intuitive usability.

        Is that so much to ask? (I joke)

  • marginalia_nu a month ago

    My main complaint is it's leaking abstractions. You basically need a PhD in git's internal data structures to use git. Well not the 5% of git you typically need in your day-to-day, but the other 95%, which you need when the 5% you do need somehow goes wrong, through one of the many foot-guns git offers.

    A concrete example: accidentally pushing a merge you didn't want to push, and now you're stuck staring at git-revert(1), which has the sentence below, scratching your head like "uh, what's a parent number?"

           -m parent-number, --mainline parent-number
               Usually you cannot revert a merge because you do
               not know which side of the merge should be
               considered the mainline. This option specifies
               the parent number (starting from 1) of the
               mainline and allows revert to reverse the change
               relative to the specified parent.
    Maybe you google a bit, and find this: ... which explains a bit, but is still confusing as all hell. The rabbit hole continues to this: which is also... not really clear.

    But you still can't find information about what a "parent number" is. It turns out, the parent number is the order in which the parents show up in `git show HASH`. Combining that clue with Linus Torvalds' email above may let you undo the merge, if you can make sense of his Feynman diagrams. Maybe.
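
    Once you finally put the pieces together, the fix is short. A sketch (the repo contents, and using `HEAD` as the merge commit, are hypothetical):

    ```shell
    # The parent number is just the position of each parent hash in the
    # merge commit's parent list, which git prints in order:
    git show --no-patch --format='parents: %P' HEAD

    # Revert the merge relative to parent 1 (normally the branch you
    # merged *into*), undoing everything the merge brought in:
    git revert -m 1 --no-edit HEAD
    ```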

  • krageon a month ago

    > Am I the only one who thinks that Git's UX is fine [...]

    No, I think most people that use it are fine with it. But those are usually not the ones you hear :)

    • troppl a month ago

      Are you really sure that is the case here? I think everyone that starts working with git is a bit hung up on its complexity, at least for a year, if not more.

      It seems to me, then, that every professional SW dev has managed to work with git at some point in their life, as they should. Because git is simply what you will most likely use nowadays.

      But still, everyone remembers how hard it was to start out. Which is why, I think, these blog posts about git are so popular all the time.

      • throw-ru-938 a month ago

        Come on, getting comfortable with a couple of incantations takes people a whole _year_? Because you don't need much more when you're starting out.

        • jacobyoder a month ago

          Not always, but if you only do some of those incantations a few times per year, and what you remember is that you tended to get them wrong... it's a bit of a pain. This is partially why I switched to GUI clients for day to day. Tower (and maybe others) have an 'undo' which gives me a bit more confidence to try/test out things, because I know if it's wrong I can hit 'cmd-z' and be back where I was (at least until I push!).

      • krageon a month ago

        > But still, everyone remembers how hard it was to start out

        I sure don't. I learned the commands I needed (branch, checkout, clone, push, pull and commit) and didn't step out of those bounds until much later. It's really no different than learning any other skill or platform. Nobody starts out a master, but that's no excuse to never start.

      • bvrmn a month ago

        > I think everyone that starts working with git is a bit hung up by its complexity at least for a year, if not more.

        I had enough SVN merging issues that when git appeared I forced a transition to git in a 3-month period, including writing a Git plugin for Hudson (Jenkins).

  • R0flcopt3r a month ago

    For you to enjoy using this tool you had to change the model your brain uses, and you call that good UX? And using aliases means that your git is now different from your co-workers' git. And when teaching the new guys, you throw all these aliases at them that they have no idea what they are or how they work?

    • ehnto a month ago

      Is it really so hard to imagine a tool that takes some thought to use?

      I didn't know how to use a welder just by looking at it, but the UX is fine once you know the concepts behind welding.

      I honestly can't see how git could be easier given the requirements of the tool. If you want to reduce its capabilities because it's too hard then go ahead, but please fork it or make something new instead of ruining a perfectly good developer tool.

      I do find the division over git to be really stark, though: either people don't get what the fuss is about, or they think git is just the worst.

      • martinvonz a month ago

        > I honestly can't see how git could be easier given the requirements of the tool.

        If you're curious how it can be done (IMO), take a look at it. It's its own VCS, but also compatible with Git, so individual developers on a team can migrate to it.

    • pydry a month ago

      Knowledge of git has become a sort of status signifier. It "makes you a developer".

      I think for this reason there's a lot less pushback on its bad UX than there would be for any other program. It would render knowledge of its arcane guts less...special. The juniors will be forced to deal.

      It makes me wonder though, if needlessly arcane knowledge is and always was a part of other apprentice relationships.

      • tehbeard a month ago

        How do you improve the ux though?

        I see a lot of complaints about it, and agree that for all the porcelain, you do have to become familiar with the plumbing to solve issues.

        But no one's shown me a good alternate UX story, just different porcelain/fittings. I still have to reach under the sink because said new porcelain didn't stop/avoid a case-sensitivity clash, or it barfed on a merge and left the repo still to merge.

        • Espressosaurus a month ago

          Make it use Hg's CLI.

          Mercurial and Git have, for most purposes, functionally identical capabilities. Atlassian at one point allowed you to checkout a project as either a git or a mercurial repo painlessly.

          Mercurial's porcelain makes sense: the command is exactly what you think it is, usually without any funky modifying flags. If there are flags, they're often obvious, and if they're not, hg help <command> will clear that right up.

          Contrast to git, which is a mishmash of commands and esoteric flags, and the help isn't even inlined.

          The underlying concepts are easy enough, but how you access them requires memorization of arbitrary command sets that are inconsistent and overloaded.

          Fix the porcelain and IMO you fix most of the problems with Git.

        • xigoi a month ago

          Making the CLI self-consistent would be a big improvement.

    • nextlevelwizard a month ago

      >"to use a tool the way it was intended you had to adjust to use the tool as it was intended to be used"

      I sure wouldn't call that a _bad_ UX. As with any tool you have to adjust to the tool or make your own unless by some miracle someone shares your particular idiosyncrasies.

      >"modifying tool makes it different from your co-workers tool"

      Yes, but that is a big plus instead of a minus. If your co-workers asks how to do something you can just give them the content of the alias. I also alias `git` as `g` in my terminal. Is that going to cause problems for my co-workers? No.

      Unless you are training a complete-fresh-out-of-school-junior the "new guy" should already know how git works and in either case that sounds like homework for them more than anything else.

    • ewindal a month ago

      That’s a complete non sequitur. Everything we learn changes the way we view the world. You cannot make version control software without a data model that needs to be learned. Git’s model is very easy to visualize.

    • sirmarksalot a month ago

      Depends what you mean by "change the model your brain [uses]". In terms of the fundamental model of how Git works, it perfectly describes what users actually want (get me the repo at revision X), but not what they expect (get me file A, revision 5, and file B, revision 7, and pray it builds).

      In terms of the metaphors for the actual commands, I would agree. Reset and checkout basically make no sense for what they're actually used for. Switch makes things a bit better, but yeah, it would be nice if the entire Git CLI could be redesigned from scratch.

    • bayindirh a month ago

      > For you to enjoy using this tool you had to change the model your brain uses, and you call that good UX?

      Every tool has its modus operandi, incl. but not limited to every programming language. Extending our understanding is hardly a bad thing.

      Git proposes a model for handling stuff, and I prefer it very much.

      And yes, the UX is fine.

    • pdmccormick a month ago

      I've never understood the argument that you should not optimize your own working environment (IDE/editor customization, alternate keyboard layouts, shell aliases, custom hotkeys, scripts, etc.) because it could be unfamiliar to someone else. I've also heard it in the context of, you shouldn't stray too far from the defaults because if you sit down to use a colleagues' computer, you will be out of your element. Unless your job is to pair program full time, or perhaps you're creating educational screencasts, maybe? If you slowly build out your own personal setup over time, presumably you should still always be able to explain what it is that you're doing at each step.

    • rsync a month ago

      Counterpoint: for me to enjoy using the bicycle tool I had to change models my brain used … and, yes, I think the bicycle has a good UX.

  • herbst a month ago

    Same here. I am totally fine with git as it is. If I weren't, I maybe would just try one of the hundreds of git plugins / clients to see if one better fits my needs.

  • Cthulhu_ a month ago

    I'm seeing a number of points in the post that aren't so much about Git, but online collaboration software / websites or dependency / package management, too; I don't think git should fix those.

  • IshKebab a month ago

    Yep I think you might be the only one!

    Actually I think you might be misunderstanding what people think is bad. Nobody dislikes the model of Git. It's great. That's partly why it's so popular.

    It's the CLI and terminology that are the issue. Some things are very badly named (e.g. the "index"; anyone sane would call that the "draft") and the CLI is a complete mess. Remind me how you list submodules? Or delete a remote branch?

  • MereInterest a month ago

    I do hear reasonable complaints here and there, such as the overuse of "checkout", but the majority of complaints I hear from new users fall into one of two categories. The first is complaining that git doesn't enforce a server/client model. The second is that git doesn't enforce a linear history. Both of these seem incredibly odd to me, as they are complaints about git accurately representing the development process.

  • prettyStandard a month ago

    To split a commit...

    1. Make a new commit to revert what you want to come last

    2. Make a new commit to revert what you want to come first

    3. Make a new commit to restore/revert line 2 above

    4. Make a new commit to restore/revert line 1 above

    5. Squash original into lines 1 & 2 above

    Alternatively you can use interactive rebase:

    1. Set edit on the commit you want to split

    2. When the rebase stops, `git reset --soft HEAD~1` (I think)

    3. `git add` and commit as necessary, then follow up with `git rebase --continue`
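
    For the simple case where the commit to split is the most recent one, the whole dance can be sketched like this (file names and messages are placeholders; a plain mixed reset leaves the changes unstaged so they can be re-added piecemeal):

    ```shell
    git reset HEAD^              # undo the commit, keep its changes in the worktree
    git add first_change.txt     # stage only the first logical change
    git commit -m "first change"
    git add second_change.txt    # stage the rest
    git commit -m "second change"
    ```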

  • bilekas a month ago

    For as little time as I end up needing to use the actual UI, I really don't see the issue. This isn't the first time someone has complained about it either.

    Reminds me of the slightly facetious anecdote that UI/UX has actually already been perfected so the complaints and problems you hear are just UI/UX people making work for themselves.

  • howinteresting a month ago

    Interactive rebases are what I use as well. They're such a terribly broken way to use Git. You can't even start an interactive rebase (to go back and update earlier commits in the stack) in the middle of another one.

  • tasuki a month ago

    Git's UX sucks, but git is built on like four simple concepts (blobs, trees, commits, refs) and I feel that's almost impossible to improve on.

  • OJFord a month ago

    > I do wish I had an easier way to split up a commit that accidentally included several unrelated changes though.

    Perhaps it's what you want something easier than, but I have `uncommit` aliased to `reset HEAD^`, and use it often as `git uncommit -p` (then amend, then the 'uncommitted' changes are unstaged ready to go in a different commit if they were wanted just elsewhere, or removed if not).
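
    Setting that up is a one-liner, for anyone who wants to try it (a sketch; `uncommit` is just my name for it):

    ```shell
    # Define the alias once:
    git config --global alias.uncommit 'reset HEAD^'

    # Then pull the last commit apart hunk by hunk:
    git uncommit -p
    ```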

    • pdmccormick a month ago

      Thank you, I think that is exactly what I was looking for, and just what I had secretly hoped someone might suggest in response to my comment. Thanks again!

  • bvrmn a month ago

    I think the git CLI is elegant and orthogonal and maps pretty cleanly onto basic commit-graph operations. It clicked when I started to think in terms of graph transformations and what graph state I want at the end.

    > I do wish I had an easier way to split up a commit that accidentally included several unrelated changes though.

    IMHO this is one of the cases where a GUI is better. I use `tig`.

  • Izkata a month ago

    Got another here with what seems to be a unique anecdote: In the early 2010s, with only svn knowledge, I tried out both git and mercurial. I don't remember the details from back then, so I couldn't explain why, but I do remember thinking mercurial was confusing and git was easy to understand.

  • flir a month ago

    The model's basically a directed acyclic graph (not a tree), with each edge being the diff between the two nodes it connects. I think the UX could be improved if it built on the language of graphs to make that model more apparent - node, edge, etc.

    (I also believe the same thing about SQL and sets).

    • avar a month ago

      It stores a full copy of every node, not the diff. The diff is just something that's rendered on-demand, and "gc"/"repack" compaction and it being content-addressable makes sure that the storage space doesn't balloon as a result of everything being a full snapshot.

      This distinction makes a difference in some cases, there are other VCSs that store diffs as a fundamental property.
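
      You can see this for yourself (a sketch; the file name is hypothetical):

      ```shell
      # Two commits touching the same file; each commit serves the file's
      # complete content, not a delta against the other commit:
      git cat-file -p HEAD:notes.txt     # full content at the latest commit
      git cat-file -p HEAD^:notes.txt    # full content at the previous commit
      ```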

      About your naming suggestion: I'm not saying you're wrong, but consider that git is used by a very wide audience, and even a lot of the programmer crowd isn't familiar with the graph theory terms you're using.

      So "branch", "tip" etc. is by no means perfect, but I think it's probably better.

      • flir a month ago

        There are a lot of people, even in this comment section, whose internal model of git is "it's a tree". Language like "branch" reinforces that model, and I don't think it does beginners any favours in the long run. Something something leaky abstractions, maybe?

        (Maybe you're right and I'm being a bit ivory tower here).

        • avar a month ago

          Even if you suppose that Git had some extreme UX overhaul to the point of renaming builtins like "branch", the existing names would still survive in the popular zeitgeist, as they predated Git.

          So, I think it's an interesting thought experiment, but practically speaking a non-starter.

          You'd never be able to fully migrate over, instead it would be another case of that xkcd about N standards.

  • mejutoco a month ago

    I agree with you. Adding new well-thought commands while keeping the old more arcane syntax would be enough for me (like git switch, git restore, git create-branch instead of git checkout -b, etc.).

    This way new users will find it better to learn and people familiar with it don't need to change it.

    • anonymous_sorry a month ago

      `git switch --create` (or `-c`) is the new `git checkout -b`.

      There is also plain `git branch <name>`, which creates the branch but doesn't switch you to it.
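
      Concretely (branch names are placeholders):

      ```shell
      git switch -c topic      # create 'topic' and switch to it
      git branch topic-2       # create 'topic-2' without switching
      ```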

      • mejutoco a month ago

        I made it up as an example of a future command `git create-branch` :)

        Thanks for the commands. They are useful.

  • newbieuser a month ago

    My favorite thing about git's UX is the clarity. People often hear criticism along the lines of "I don't understand what to do, or how." I do not agree. I think it is quite fun to use once you are aware that, no matter what you are working on, there is a history on top that you manage.

  • JoeyBananas a month ago

    My complaint is that the names of commands in Git are not intuitive. Otherwise, I think the UX of Git is near perfect, because it's possible to write things like Git graphical wrappers.

  • musicmatze a month ago

    > Am I the only one who thinks that Git's UX is fine, and maybe even rather enjoyable?

    Same here. Have been using git for over 10 years now though, so this might be an "experts view" kind of thing.

  • dan-robertson a month ago

    Even if you get over the bad commands, the internal model of snapshots (and eg the need for rerere) can still be unintuitive and not always line up with the behaviour one would expect

  • hnrj95 a month ago

    fwiw, and surely anecdotal, but i don’t know anybody who knows the dark arts of the git cli and uses some client (magit, lazygit, etc) who prefers to use the former

madmax108 a month ago

I personally believe that making diffs more human-friendly is the next step of evolution we need. I work with a team on a NodeJS+React project and nearly every other PR shows up as "Something changed in package.json, something changed in package-lock.json, some static assets added/modified, some JSON changed" etc., and it makes reviewing code quite unwieldy (esp. since it forces folks to use the GitHub UI to even see blob diffs, and that UI is quite opinionated about when it collapses a file in the PR and how it determines what changed in a file). I feel like git was perfect when "code" was nearly completely text and patches were sent over email, but there's a lot more boilerplate+blob data that goes into git today, and git needs to evolve to support this.

I feel that once tools like Difftastic [1] and similar get more mainstream, and ideally more firmly entrenched within git itself, it will make code reviewing much smoother process rather than having to depend on Github or any other proprietary service.


  • alexchamberlain a month ago

    I feel like you may be blaming the tool for a social problem there. We have a simple rule: no changes to package* in a feature PR. Adding a dependency? Cool, upgrade everything first, then add it fresh. (Be reasonable there- the idea is that the add should be clean, so if that can be done without doing a major upgrade, that is of course fine.)

    • bmitc a month ago

      It sounds like you're just working around the limitations of the tools.

      Edit: I think it's possible that I misunderstood, given the downvotes. I thought they meant that a dependency should be added in one PR, and then the feature needing that dependency in a separate PR. What I think they actually meant is that if adding a dependency requires upgrades of other dependencies, then upgrade the existing dependencies in one PR and then add the new dependency and feature in another single PR. That seems to make sense but not be necessarily a hard rule.

      As I said below though, I still think the tooling in this space is terrible. Even for dependencies, I don't want line and text changes in config and lock files. I want something that summarizes what dependencies were added or upgraded.

      • sshine a month ago

        Or a limitation of the human mind.

        If you jam too many unrelated changes into one diff, people stop paying attention, because now reviewing it requires many minutes and note-taking.

        Since hundreds of line changes can have little or no effect, but a single line change somewhere else can have drastic effects, it is essential to separate the unimportant from the important.

        Changes in dependencies are not an unimportant change, but they tend to get treated like noise because those changes are associated with automated tooling, like the package tool. Updating dependencies separately solves a social problem, not a technical one.

      • alexchamberlain a month ago

        Simply separating concerns so that each PR/commit focuses on 1 thing.

        • bmitc a month ago

          But what is the one thing if you're just adding a dependency, where you're only adding it so that you can subsequently add a feature using it in a separate PR? The atomic action is adding the feature which requires the new dependency. Unless I misunderstood what you wrote.

          I still agree with the original comment in that diffing technology feels decades behind, and that a lot of what we do as software engineers are working around things the tooling should be doing better on.

          • alexchamberlain a month ago

            Sorry, I don't think I explained myself well. Generally when you see a lot of package* noise, it's because you've upgraded your dependencies, added one and made a feature change all at once. The main thing that needs to be separated is the upgrading part.

          • sokoloff a month ago

            It seems no different than adding a feature dark in one PR and turning it on in a second. “The atomic action is launching the new feature to users” seems like an equivalent argument (and one with which I disagree).

          • LtWorf a month ago

            You can review pr commit by commit… if the commits are poorly made just reject the changes and ask to make them properly.

      • cxr a month ago

        > It sounds like you're just working around the limitations of the tools.

        It can be hard to recognize, esp. since it has so much cultural inertia and approval, but doing things like splitting things into packages, recording the links in package.json (whether by hand or by tool-assist), and then introducing something like `npm install` into the workflow as a way to lazily fetch parts of an application's codebase is nothing but one massive (and massively fragile) scheme to circumvent the version control system as a consequence of unacknowledged limitations of the relevant tool.

        • hinkley a month ago

          Circumventing version control systems in part because of cognitive dissonance. If your dependencies are too big to check in, maybe you should look at that.

          With NodeJS in particular, the issue of binaries is sort of dodged by exploding the files out instead of reading them from archives like Java does. However version conflicts can easily result in many versions of the same files in your repository, so you're still in a bit of trouble where bloat is concerned. While you should start by putting your dependencies on a diet, a different organizational structure for files than what git uses, where copies and moves are tracked better would help a lot.

  • WorldMaker a month ago

    Semantic diffs are great (I did some experiments with it myself [0]), but we can't entirely automate away "narrative problems" in a PR. No matter how good a diff is, a diff tells you what changed but not why and sometimes not even how. That's what we need good commit messages for. That's what we need good PR descriptions to do. If a branch has a ton of seemingly unrelated files modified (automated tool output or not), sometimes you have to ask the narrative questions: "Why did this change? What does it have to do with the other changes in this branch?"

    Admittedly, narrative problems are hard to solve in general. It's a lot easier to "build a new tool" than to train a junior developer to always explain why/how in commit messages, to order their commits to "tell a story" of what the branch accomplishes, to avoid unrelated changes in digressions and asides along the way (moving those into their own branches/narratives), to tell that story in a PR description in a way that is useful to understanding the whys/hows of the branch (especially if the commits themselves lack some of the narrative or aren't ordered well for proper storytelling). I often settle for one or two of those at a time from a given junior developer.

    I don't think we can automate ourselves out of narrative problems. I think narrative problems are one of the unsung creative problems of our industry and something that separates good programmers from great programmers. It's a human skill that takes practice.

    > I feel like git was perfect when "code" was nearly almost completely text and patches were sent over email

    That "perfect" never existed. Even codebases like the Linux kernel that work entirely in email still have their random binary blobs (often from outside vendors) and auto-generated files from tools. What the email flow focuses on, arguably (as many of the folks at Sourcehut would say) better than most other PR tools, is that conversation around the narrative and the whys/hows. You expect a lengthy narrative discussion in a mailing list. Sometimes people see those comment fields in a GitHub PR and expect them to be less about narrative and more about nitpicking specific lines than having narrative discussions. (Nitpicking happens in mailing list discussions, too. It's mostly unavoidable. But to nitpick on a mailing list you have to do a lot more copying and pasting by hand.) There is something of a different storytelling "pressure" when you are staring at an empty email with a file auto-attached than when you click the PR button in GitHub and get pages full of diffs and all the commit logs laid out before you before you ever start writing. (Sometimes people do see that and think their job is "done" and that what they would write in the PR description would be redundant, because it is not a "blank email" that needs a greeting and salutation and maybe a description of the attachment.)


sriku a month ago

I've been using Fossil for personal projects for like a decade now and I much prefer it over git. The characteristics that get me to stick with it are -

1. Single-file executable. No dependencies to "install". Just the executable and you're good.

2. The whole repo is a single SQLite DB file. Fabulous for backups, sharing, hosting, etc.

3. You cannot rewrite history, unlike git. Hence the name. Folks using git have no idea what kind of peace of mind this gives me.

4. Integrated issue tracker stored in the same repo. Complete with cross-references to commits.

5. Allows repeated use of the same tag name. This is so convenient in personal projects that I miss it in git. You can mark a commit as "published" and later look at the whole history of all previous commits tagged as "published".

Other niceties -

1. Integrated wiki - I've occasionally used it, but usually prefer to write documentation in separate files.

2. Integrated webserver - `fossil ui` runs on the same thing so I do use it. The webserver comes complete with user account management and permissioning.

3. Can export and import to/from git.

  • sgbeal a month ago

    (A long-time fossil dev here...)

    Re. integrated wiki: when i first saw fossil (Christmas break of 2007) two features made it a killer app for me: wiki and hosting as a CGI. The wiki aspect has long since taken a back seat to the so-called "embedded docs" feature, where the docs live in the source tree and become first-class SCM citizens. However fossil is, to the best of my knowledge, still the only SCM which is absolutely trivial to host as a CGI, which means it can be hosted on cheap shared hosters just as easily as it can on one's standalone VPS.

    As far as "what comes after git," though: git is the SCM needed by the 0.1% (or fewer) of the largest, most-active FOSS projects. Imagine the Linux kernel source tree if its SCM could not remove dead branches - it would quickly become uncloneable under all of that weight. Fossil is not designed to scale to projects of that size. Fossil is, however, an ideal SCM for that 98%+ of remaining projects which fall into the size categories of personal/small/medium.

    • open-source-ux a month ago

      "Fossil is, however, an ideal SCM for that 98%+ of remaining projects which fall into the size categories of personal/small/medium."

      I wonder why so few developers consider the scenario you describe. Git fits the development of Linux. But, a question rarely raised: why is Git considered suitable for small or medium projects?

      • sgbeal a month ago

        > A question rarely raised: why is Git considered suitable for small or medium projects?

        Quite frankly, _it's not_. There's _absolutely nothing_ ideal about git except for its ability to super-scale to that _exceedingly small_ percentage of projects which need that level of scaling. That's its _one and only_ killer feature. If it weren't for github and its ilk, git would be just another second-tier tool like the rest of the SCMs. Unlike every(?) other SCM, fossil doesn't require 3rd-party tools to host over CGI: that's built right in to it and CGI works on even the cheapest of shared-hosting platforms, so no equivalent of github is required in order to host one's own repositories.

        Granted, as a long-time fossil dev and advocate, i'm _severely_ biased in this regard, but there are _reasons_ i prefer fossil over git, why i use it for _all_ of my own projects, and why i support and advocate for it (just not for that small tier of "uncommonly large" projects, as fossil has, quite frankly, no business being used there).
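
        For the curious, the CGI script in question really is tiny; something like this (paths hypothetical), made executable and dropped into the hoster's cgi-bin, is the entire setup:

        ```
        #!/usr/bin/fossil
        repository: /home/username/repos/project.fossil
        ```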

      • metta2uall a month ago

        Network effects, and that once many people get the hang of using git "well enough", they'd rather put up with its annoyances than take the time to learn something that's marginally better.

      • pcthrowaway a month ago

        It's probably why so many people want to learn/use Hadoop and whatever else Big Tech is using to address their massive scale. Premature optimization and resume-driven development.

        I actually love git (even for small projects) but it's because I've been using it for so long now; I suspect if I learned another DVCS I'd be using it in isolation, and git works just fine for me

  • L3viathan a month ago

    1: What does it matter? I do $packagemanager install git and am done. What do I care whether it's got dependencies or consists of several binaries?

    2: Fair enough. Although a multi-file backup doesn't sound hard to me either.

    3: That would give me the opposite of peace of mind. I can't clean up my messy WIP commits?

    4: Sounds like a nice feature.

    5: This just uses the term "tag" to mean something other than git tags. But I agree that it would be nice to be able to label several commits (distinct from tags).

    • sgbeal a month ago

      > What does it matter? I do $packagemanager install git and am done.

      _Freedom_. Fossil is trivial to build on all modern platforms and we (in the fossil project) always recommend that folks use the trunk version, building it for themselves. Depending on an OS'es package manager just means that one is stuck with whatever version that package repo's volunteer package maintainers post.

      > I can't clean up my messy WIP commits?

      Nope. Fossil remembers what happened, not what "should have" happened. We (on the fossil project) consider that a feature, and fossil's own history is littered with "oopsies" (no small percentage of them from yours truly).

      • colejohnson66 a month ago

        I'm torn on that "feature." On one hand, I think it's a neat idea, but on the other, I amend commits in Git quite often before pushing. It would take a mindset change to switch. And then there's this: what if I accidentally commit a private key or database? Sure, I shouldn't be f-ing up, but we all know it happens sometimes. In Git, I can revert/reset back to the commit prior.

        • sgbeal a month ago

          > I amend commits in Git quite often before pushing.

          Fossil supports amending checkins at any time after committing, as often as you like - change the checkin comment, re-attribute it to a different user, change the timestamp, or similar. What it doesn't support is _modifying_ them.

          > And then there's this: what if I accidentally commit a private key or database?

          Then you "shun" (to use fossil's term) that artifact. Fossil isn't 100% merciless when it comes to removing content, it just makes doing so a 4th-class sub-citizen of a feature and recommends against doing so in every case except for the one of content which should never have been checked in. Removing content inherently punches holes in the project history, so it's not something we (in the fossil project) recommend doing unless it's absolutely necessary for reasons of security or legality.

          • colejohnson66 a month ago

            What do you mean by "modify"? As in rewriting history (commits before HEAD)?

            • sgbeal a month ago

              > What do you mean by "modify"? As in rewriting history (commits before HEAD)?

              Fossil flat-out does not support, with the exception of "shunning" (forcibly removing content), the modification of any history. It supports the "amending" of any history, however.

              For example, you can "change" the timestamp of a commit retroactively. It doesn't change the timestamp on the actual checkin (as that's cryptographically baked into the commit), but it changes how the checkin is displayed to the user in fossil's "timeline" view. Fossil also, however, makes it easy to see, in the details for that checkin, that the timestamp was modified later (and who did it, as well as when they did it).

        • Too a month ago

          Even in git, amending doesn’t rewrite the actual commit though. It creates a new one. You can find the old one in your reflog. Think about it: the commit hash is based on the contents, so it must change on any content change. It’s only after pushing that history is written in stone.

          So you can say git never removes anything either.

          The biggest issue with data loss in git is before you have made a commit. There, many actions can nuke your changes because you mixed up two flags, tried to rebase before committing, etc. As long as you have a commit, though, the reflog will save you from failed rebases.
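A throwaway-repo sketch of the reflog point above (the "demo" identity and commit messages are made up): `--amend` writes a new commit object, and the pre-amend commit stays reachable through the reflog.

```shell
# Everything happens in a temp dir; nothing touches your real repos.
cd "$(mktemp -d)" && git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "original"
before=$(git rev-parse HEAD)
git commit -q --amend --allow-empty -m "amended"
git rev-parse HEAD              # prints a different hash: amend replaced the commit
git reflog                      # the pre-amend commit is still listed here
git reset -q --hard "$before"   # ...and can be restored wholesale
```

Until `git gc` prunes unreachable objects, "amended" history is recoverable this way.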

      • gitgud a month ago

        > Nope. Fossil remembers what happened, not what "should have" happened.

        So if someone new on the team accidentally commits "node_modules" it's there in the history forever? Doesn't sound like a great feature...

        The choice to discard history gives freedom to developers

        • sgbeal a month ago

          > So if someone new on the team accidentally commits "node_modules"

          Then the 120k+ files in node_modules are in there forever, far outliving that team member's career as a software developer.

          If the user has not yet synced those changes to a central repository, they can still delete their copy, re-clone, and re-do their checkin to be "less encompassing." Once they push, however, all of their coworkers will hate them. The fact that pushing would take noticeably longer than it should for that case would be the first hint that tapping ctrl-c would be in order (that is, cancelling the sync with the upstream repo).

          > ... it's there in the history forever?

          Until/unless it's "shunned" (which would take extraordinarily long to do for 120k files, as shunning requires the artifact IDs of every file to be shunned, and if a single one of those hashes is the same as a file which should not be shunned (e.g. all empty files have the same hash) then tough luck).

          It seems likely, however, that new developer's colleagues would have long since added `node_modules/*` to the repository's `ignore-glob` setting so that the new colleague wouldn't accidentally add that.

          Sidebar: in my 14+ years of being active in the fossil community, nobody's yet posted saying they've accidentally checked in a node_modules directory and asked for advice on how to deal with it. Presumably node folks primarily exist in Enterprise environments, and Enterprise environments all use git because that's where the tooling is.

  • al2o3cr a month ago

        3. You cannot rewrite history unlike git. Hence the name.
    So if somebody accidentally commits customer data to the monorepo, just burn down the whole company and start over?
    • sgbeal a month ago

      > So if somebody accidentally commits customer data to the monorepo

      First, you use that as a "teaching opportunity" to educate that someone about why doing so is Bad for Business. Secondly, you "shun" the artifact(s) in question. Since shunning burns holes in the project history (DAG and blockchain), we (in the fossil project) invariably advise against doing so unless it's absolutely necessary (publishing customer data being an example of "necessary"). There's a major semantic difference between deleting content "just to clean up the history" vs. "to eliminate legal liability."

      > , just burn down the whole company and start over?

      If you prefer that approach then have at it.

    • leethargo a month ago

      I'm guessing you can still go back in time and just "forget" what came afterwards, by using backups of the repo file.

  • geenat a month ago

    How is large file support?

    What strategy is employed for binary file handling? How does it compare to git LFS / annex / mercurial?

    Aside from that, Fossil is very intriguing.

    • sgbeal a month ago

      > How is large file support?

      Fossil is, because of its sqlite dependency, limited to blobs no larger than 2GB each. Some of its algorithms require keeping two versions of a file in memory at once, so "stupidly huge" blobs are not something it's ideal for. Fossil is designed for SCM'ing source code, and source code never gets anywhere near 2GB per file. The only projects which use such files seem to be (based on fossil forum traffic) high-end games and similar media-heavy/media-centric projects which fossil is not designed for.

      > What strategy is employed for binary file handling?

      That's a vague question, so here's a vague answer: it handles binaries just fine and can delta them just fine. It cannot do automatic merging of binary files which have been edited concurrently by 2+ users because doing so requires file-format-specific logic. (AFAIK _no_ SCM can merge (as opposed to delta) binaries of any sort.)

      • geenat a month ago

        Fair enough, like vanilla git as of today.

        I do hope someday git and others employ either a git annex or mercurial-style scheme where if it's a large binary file: 1. no diff is performed, and 2. only the latest version is kept within the history.

        This would blow wide open the possibilities for using Fossil in binary-heavy projects such as machine learning, games, simulation.

        I could see the SQLite limitation worked around by just splitting up binary data into multiple pieces.

        • sgbeal a month ago

          > I do hope someday git and others employ either a git annex or mercurial-style scheme where if it's a large binary file: 1. no diff is performed, and 2. only the latest version is kept within the history.

          That will never happen in fossil: one of fossil's core-most design features and goals is that it remembers _everything_, not just the latest copy of a file. The way it records checkins, as a list of files and their hashes, is fundamentally incompatible with the notion of tossing out files. It is capable of permanently removing content, but that's a feature best reserved for removal of content which should never have been checked in (e.g. passwords, legally problematic checkins, etc.). Removing content from a fossil repo punches holes in the DAG/blockchain and is always to be considered a measure of last resort. In my 14+ years in the fossil community, i can count on 2 fingers the number of times i've recommended that a user use that capability.

          > I could see the SQLite limitation worked around by just splitting up binary data into multiple pieces.

          There's no need to work around that "limitation" because "source code" trees don't deal with files of anywhere _near_ that size. Fossil is, first and foremost, designed to support the sqlite project itself: it was literally designed and written to be sqlite's SCM. Projects with scales of 1000x that project's are nowhere near fossil's radar.

          Sharding large files over multiple blobs doesn't solve some of the underlying limitations, e.g. performing deltas. Fossil's delta algorithm requires that both the "v1" and "v2" versions of a given piece of content be in memory at once (along with the delta itself), and rewriting it to account for sharded blobs would be an undertaking in and of itself. That's almost certain to never happen until/unless the sqlite project needs such a feature (which, i'm confident in saying, it never will).

          TL;DR: fossil is, plain and simple, not the SCM for projects which need massive blobs.

          • geenat a month ago

            Fair enough, thank you for the detailed insight.

tedk-42 a month ago

Oh god this author again going after clicks for hit pieces on technology without providing any alternatives.

If you think you can do it better then go ahead and try build the product and then pitch it.

  • vaughan a month ago

    Usually problems arise before solutions. I’ve been thinking the same things and welcome this post.

    • codingdave a month ago

      Except the author did not really give problems. They gave a list of features they think would be needed, and some of those implied problems... but this post was a list of desired solutions. I have no idea what specifically they meant when they referred to git having pain points. I can guess based on my own experience, but a list of specific problems would have been lovely.

  • enasterosophes a month ago

    I didn't recognize the author, but came here to say the same thing: where are the ideas on how these improvements would be implemented?

    Without making an attempt at implementation, you (the generic you, not the person I'm replying to) have no idea what the real issues are. Even failed or partial implementations are more instructive than armchair criticisms, or thought experiments where you can just handwave away all the competing constraints that need to be considered, whether in Git or in any erstwhile successor.

    • niek_pas a month ago

      Fair, although I think we need to understand git’s shortcomings to understand what product needs to be built next. I’d argue that’s what this author is doing.

    • Too a month ago

      Of course concrete ideas are better. But even armchair criticisms can be important (or at least interesting), to open up the discussion.

      While git is good and powerful in many ways, git frankly has many deficiencies, yet everybody treats it as the holy grail of version control. Just because Linux uses it, GitHub exists or something else, I don’t know. The author lists many valid points that are not all sci-fi and that would multiply the usefulness of the vcs sevenfold.

      Someone has to point out the elephant in the room.

      It could spark the idea for someone to invest in something new and better or for someone to contribute improvements into git.

    • hackernewds a month ago

      The skills necessary to solve a problem, are also the skills necessary to realize there is one in the first place.

      • zaphirplane a month ago

        I’ve got one: poverty. I realise the problem, but I have no idea how to fix it, and I bet no one here has an idea how to fix it without breaking a lot of other things.

        The list goes on

  • drewcoo a month ago

    I feel your pain.

    And your pain is distributed systems. Not just of source control, but bug tracking. And general process-manager-y stuff.

  • lloydatkinson a month ago

    What’s wrong with the author?

    • enasterosophes a month ago

      Well, you can look at the author's past HN submissions to see what was meant.

      It's basically click-bait for hackers. "How to do X technology better" with a few paragraphs of ideas, and taking no responsibility for actually doing something about it, aside from hoping that their vision might inspire someone else to put in the hard work.

      • viraptor a month ago

        Some of us like to read ideas. The blog is more on the personal brand building side, sure. But I'm mainly coming to HN and comments to read what other people think about stuff.

      • vaughan a month ago

        Why do they have to take responsibility? I can point to so many oss projects where someone else’s vision was implemented.

        • enasterosophes a month ago

          Confuse "is" vs "ought" much?

          I didn't say they have to do anything. I did observe a pattern.

          There is a difference between saying everyone should take responsibility for fixing all their criticisms and noticing that a particular person never takes responsibility for any of their criticisms.

alenmilk a month ago

The thing is that git is not supposed to be smart when it comes to merging.

At the end of the article: This is a clearly ambiguous merge and it simply wouldn’t be right for your version control system to try "resolving" it for you.

So the strategy is that if there is any doubt you have to manually fix conflicts. This is by design.

  • tasuki a month ago

    Yes, better conflict resolution is needed, but it's out of scope for git. Do one thing well.
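A throwaway-repo sketch of that by-design behaviour (branch and file names are made up): git stops on an ambiguous merge and offers only mechanical per-side shortcuts, never a decision of its own.

```shell
cd "$(mktemp -d)" && git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
echo base > f.txt && git add f.txt && git commit -q -m "base"
git checkout -q -b feature
echo feature > f.txt && git commit -q -am "feature edit"
git checkout -q -                   # back to the original branch
echo trunk > f.txt && git commit -q -am "trunk edit"
git merge -q feature || true        # exits non-zero: conflict in f.txt
git checkout -q --theirs -- f.txt   # resolve by taking the incoming side...
git add f.txt                       # ...and mark it resolved
git commit -q -m "merge feature"    # complete the merge by hand
```

Note that `--ours`/`--theirs` only pick a whole side per file; anything finer-grained is left to you or your merge tool, which is exactly the trade-off being described.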

lucideer a month ago

A lot of people responding (in good faith) to the premise of the title but not really engaging with the absolute nonsense points made within it.

Aside from it all being very vague, it struck me that these are high-level concepts and keywords the author seems to have gleaned from experience working with knowledgeable peers but never quite grokked themselves. So it surprised me to read about their open-source maintainer experience on their about page (though the professional experience being at Google does fit my original assumption).

> Atomicity accross projects

Git repos are as atomic as you make them. Github is not a de facto monorepo. The only people likely to use it as such are monorepo aficionados (Googlers?) who are deliberately avoiding atomicity. Also, Github isn't Git.

> Package management

Package managers already use checksums. This entire point is just wrong and ignorant. Reproducible builds would be nice here, sure, but outside of a few weird exceptions in dynamic builds we already have what the author wants here.

> Semantic diff

This would be great but... the author is an engineer right. How much have they thought about this? This would be a gargantuan undertaking. An awesome feature no doubt, but the maintenance effort...

> Merge queue data structure

The body of this bullet doesn't relate to the heading. I guess they're talking about how in-progress merges are stored on the FS, but how would that impact testing (which the author rightfully points out is unrelated to VCS)? What? This is just mashing unrelated jargon keywords together.

> Fan-out pull requests

Github is not Git.

> git should be fully decoupled from the pull request and merge workflow

Oh dear. Where is Drew Devault...

> lfs

The first good point they've made

> fossil

Yes fossil is cool. If the title of the post was "we should all use fossil" it would be more realistic.

  • ghoward a month ago

    > Git repos are as atomic as you make them.

    Not with submodules, apparently. [1]

    > Package managers already use checksums. This entire point is just wrong and ignorant. Reproducible builds would be nice here, sure, but outside of a few weird exceptions in dynamic builds we already have what the author wants here.

    I agree with this point, but maybe using the cryptographic hashes from a VCS is better than a checksum? Other than that, I don't think there's any reason to tie the two together.

    > This would be great but... the author is an engineer right. How much have they thought about this? This would be a gargantuan undertaking. An awesome feature no doubt, but the maintenance effort...

    Actually, not really a gargantuan undertaking. Large, yes, but that's only to tell the VCS about the semantics of each language. The actual algorithms to do so are pretty small, and the semantics needed for each language consists of a lexer and some dumb knowledge about the structure, i.e., what a function looks like, what a type definition looks like, etc. Source: I am designing those algorithms right now.

    Other than that, I agree with your points, except that I'm now making a competitor to fossil. :)


leeoniya a month ago

> Semantic diff – Can we figure out how to use version control to have more context-aware merges? Can you believe that we still rely on a text diffing algorithm from 1976 (and its shortcomings)? Git still has trouble with file renaming. GitHub Copilot, but for merge conflicts? Semantic diff has been tried before, but language-specific implementations will likely never work.

in case anyone missed it:

  • nine_k a month ago

    > Difftastic output is intended for human consumption, and it does not generate patches that you can apply later.

    To me this looks like integrating it with tools of actual conflict resolution during merges will be a bit harder than one would like. I'd be glad to be wrong.

  • crmrc114 a month ago

    git mv?

    • kadoban a month ago

      That's just a convenience command. Git doesn't actually record moves as anything different from a delete and an add. Many of the querying commands (eg `git log`) use heuristics to show moved files, but they end up wrong fairly often, especially if you don't mess with the parameter(s) of the heuristics.
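A throwaway-repo sketch of the heuristic in action (file names made up): the move is stored as a delete plus an add, so plain `git log <path>` loses the earlier history, and `--follow` has to re-detect the rename.

```shell
cd "$(mktemp -d)" && git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
seq 1 20 > a.txt && git add a.txt && git commit -q -m "add a.txt"
git mv a.txt b.txt && git commit -q -m "move to b.txt"
git log --oneline -- b.txt            # only the "move" commit
git log --oneline --follow -- b.txt   # both commits: the rename was inferred
```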

      • saxonww a month ago

        In my experience they are correct all of the time for simple renames. It's when you move a file and make substantial edits that it gets confused.

        I think it's reasonable to argue that git shouldn't get confused in this scenario, but you could also do your renames in one commit and your changes in another.

        • Akronymus a month ago

          Would it make sense to make one commit for the move and one for the changes?

          • jraph a month ago

            I wish I didn't have to think about it and git supported file renaming properly, so that a unit of change that doesn't make much sense when split could stay a unit. But without that, yes, I think it makes sense to commit renames separately so history can be tracked more easily.

            I think git can be configured on how hard it tries to find renames from similarity between a deleted file and an added file.
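A throwaway-repo sketch of those knobs (file names made up): rename detection is a display-time similarity heuristic, so the same commit can show as a rename or as a delete plus add depending on the options.

```shell
cd "$(mktemp -d)" && git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
seq 1 100 > a.txt && git add a.txt && git commit -q -m "add a.txt"
git mv a.txt b.txt && git commit -q -m "move to b.txt"
git show --format= --name-status               # R100: detected as a pure rename
git show --format= --name-status --no-renames  # same commit, shown as D + A
```

The similarity threshold is tunable with `-M<n>` (e.g. `git log -M60%`), and `diff.renames` configures the default behaviour.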

          • kruador a month ago

            No, it makes no difference. Git doesn't look at the full history when performing a merge, it just looks for similarity between the paths and files in the commits being merged, and the common base commit (which you can discover by running `git merge-base`). It doesn't keep any metadata about whether something was a move: as far as the actual data structures are concerned, you deleted file a/b/c and added file d/e/f, that have the same file content hash.

            You can find more information on how Git now does it at . I think this first started to be released in v2.31.0, and completed in v2.33.0. v2.34.0 switched the default merge strategy to the new 'ort' strategy mentioned in that blog.
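A throwaway-repo sketch of the merge-base lookup mentioned above (branch names made up):

```shell
cd "$(mktemp -d)" && git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "base"
base=$(git rev-parse HEAD)
git checkout -q -b feature
git commit -q --allow-empty -m "feature work"
git checkout -q -                     # back to the original branch
git commit -q --allow-empty -m "trunk work"
git merge-base HEAD feature           # prints the hash of the "base" commit
```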

            • avar a month ago

              The "git log" "follow renames" logic isn't the same as conceptually related merge logic.

              I think there are unfortunately still cases where what the GP is suggesting improves the UX, i.e. I think some shortcuts are taken when following a file if the content doesn't change.

              IIRC this matters particularly for very large renames, at some point during revision walking we'll give up trying to match your A.txt to B.txt, but if they're the same...

          • tsimionescu a month ago

            No, because you often have to make changes to the file to get it to compile after moving it to a new location. It's much more annoying to review history with changes that can't possibly compile.

            • kadoban a month ago

              This isn't necessarily an issue, depending on what you mean by "review". The one case I really run into is bisects, but even for that you can tell it to just skip commits that won't build.

          • bloak a month ago

            Yes, in some cases, but it depends on your workflow. Some projects require each commit on the main branch to be fully working and pass all the tests.

        • shikoba a month ago

          > It's when you move a file and make substantial edits that it gets confused.

          It's your POV that it's the same file. One could argue that it's a new file with the content of the old one embedded in it. That's the huge problem with git users: people cling desperately to the idea of changes, when git is just about snapshots.

          • tsimionescu a month ago

            People "cling desperately" to the way they are actually working. That Git uses an alien model that happens to somewhat match what I'm doing doesn't mean I'm "desperately clinging" to the actual reality of what I'm doing (editing a file).

            • shikoba a month ago

              So if someone creates a new file and integrates the content of the old file to reach the same folder state as yours, do you think s/he is doing something different?

              • tsimionescu a month ago

                Yes? Editing a file is different from creating a new file with some code copied from somewhere else, obviously. They may reach the same end state, but that doesn't make them the same operation.

                It also happens that one is much, much more common in software development than the other, so it's a much better mental model of what SEs do.

                • shikoba a month ago

                  > They may reach the same end state, but that doesn't make them the same operation.

                  So you're clinging to your mental model. Thanks for proving my point.

                  • tsimionescu a month ago

                    This is how work is actually done, and my mental model matches how I, my colleagues, and you as well actually work.

                    I could contort my mental model to make it match how Git works, but it is a contortion that some tool imposes on me.

                    We work in computing, I would expect everyone's mental models to be computational, not equational - different algorithms with the same result are still different. QuickSort is not the same as Merge sort just because they have the same inputs and outputs.

                    • shikoba a month ago

                      The issue is that the path you used to go from A to B is yours, one could use a totally different path. But what really matters is that your product is now in state B. That's what you ship, not the path.

          • kadoban a month ago

            Git has both concepts, snapshots and changes. You'd have to contort yourself more than a bit to describe common uses of "git rebase" without viewing it as changes, for instance.

            You may be alluding to git's underlying storage being snapshot based, but it actually also has diffs in pack files.

            • shikoba a month ago

              > but it actually also has diffs in pack files

              Storage optimization, absolutely unrelated to the diff between two snapshots.

              • kadoban a month ago

                The semantic layer uses both, and the implementation layer uses both. Saying that it's one or the other just seems misguided.

                • shikoba a month ago

                  I don't understand your point. Mine was just that git could decide to store only snapshots, it would change nothing except for low level commands that deal directly with objects storage.

    • dotancohen a month ago

      Internally it's just rms and adds. The "Rename" feature is just a UI feature, displayed when the files have identical or near-identical content.

knighthack a month ago

The only major problem I see with Git is that it's just a pain if you're working in a gigantic monorepo.

Outside of this, I think it's achieved its ultimate form and tradeoffs for what it was originally intended to do - and the majority of projects fall under that category, meaning that while Git can be improved, it doesn't need to 'change' architecturally or philosophically to accommodate other things.

Separate version control software can be designed for solving a specific problem - but I don't think Git should need to evolve beyond the problems it's designed to take on.

  • sirmarksalot a month ago

    The large file problem is a major issue in games, and basically means that developers who would much rather use Git, don't have the option of using Git because it would be crushed under the weight of all the art assets. So instead everyone's forced to use Perforce or Subversion, with all the workflow impediments that involves.

    • WorldMaker a month ago

      A lot of groups with large file problems seem to be converging on Git LFS at this point over Perforce/Subversion. Most of the major hosts (Github, Bitbucket, Gitlab) all have LFS hosting support, though it is not always cheap. (Arguably still cheaper than Perforce/Subversion, especially in accounting for those workflow impediments and the developer time they cost.)

      LFS is a plugin to install, but that's maybe a strong indicator in favor of the git model that there are common plugins to solve some things like this. (And many distributions of git now also bundle LFS.)

gorgoiler a month ago

The project management features don’t have to live inside VCS, but it would be nice to sync them there or derive some of the primitive data structures there — comment trees, approvals, CI red/green results at integration time.

A project should encapsulate the code, how we got there, what we changed, and why we changed it.

The code is your HEAD, available as a working copy. How we got there is the stack of diffs that, when applied to an empty repository, accumulate to the current HEAD. What we changed is more nuanced than just the diffs: it’s the commit messages explaining the diffs and adding context. If the diff changes an algorithm from n^2 to n, then what we changed is the runtime complexity of x, which is good for reasons y and z.

Why we changed it is the bit that’s missing. Was this work originally from a bug report? Did real-life-n stay small for our first six months and has all of a sudden become much bigger? Who was involved in deciding this was the right thing to do, what did they say, and what other approaches did we think about? Which cat meme was deemed appropriate for the final approval of the change?

Right now, that stuff is all linked to from git but it’s not really a part of the workflow unless you remain inside GitHub’s or GitLab’s ecosystems. Seeing that in the underlying tool would be really cool.

  • sgbeal a month ago

    > Which cat meme was deemed appropriate for the final approval of the change?

    FWIW, that wins my internet for today and i'll aim to make that a factor in any future code reviews/approvals.

coxley a month ago

Mercurial solves a lot of the UX problems. Maybe if Meta open-sources the API-compliant rust rewrite, we'll have the scaling sorted too.

Commits are automatically "branches" off trunk. You typically do 1:1, commit:PR. As you make new changes to the same PR, you `hg amend` instead of another commit. Those commits get merged into trunk.

ribit a month ago

My main headache with git — which is absent from the list in the blog — is that it tracks snapshots instead of tracking changes. It's probably less of an issue for software developers, where a "version" is what this is all about — after all, that's what you ship — but if you try to use version control in the context of academic writing or data analysis, it can become very difficult to track individual contributions. All git can do is compare snapshots and allow you to impose a semblance of causal order onto them (which you can freely manipulate anyway with rebasing, squashing, etc.). If your workflow consists of trying out many different ideas and then choosing which of them to keep and which to discard, git can be extremely painful — you either end up with a history that is a complete mess or waste a lot of time rebasing and reorganising the (fake anyway) order of snapshots.

That's why I am excited for tools like Pijul that attempt to actually track changes.

  • CuriousSkeptic a month ago

    Snapshots do have the advantage of not having to know anything about the data.

    Tracking changes necessarily needs to define a way to describe how things may change.

    Perhaps another take on the issue is that changing data should be structured to be more snapshot friendly?

    • pmeunier a month ago

      I'm one of the authors of Pijul, and I have plans to turn it into a synthesis of snapshots and patches, I've implemented the formats and initial steps, see

      The formats are ready for the next steps, but since this is only useful for really massive repos, I want to wait a little bit before spending the time on these features. If more people start using it, it may provide the motivation needed.

i_have_an_idea a month ago

> Kubernetes (the operating system of the cloud)

The post just loses credibility after this statement. Yes, Kubernetes is important, but "operating system" is a clearly defined technical term. Using it arbitrarily for something that is clearly not an OS, to achieve some sort of off-topic emphasis effect, undermines the credibility of the content.

  • GuB-42 a month ago

    Yes, and it matches the definition of an "operating system".

    Originally, an "operating system" was a system that took the role of an operator. In the early days, operators were people who loaded programs, fed them data, fetched the results, etc., for the users. This matches the modern definition of an interface between the user and hardware.

    Kubernetes allocates servers (hardware) to containers just like a typical OS allocates CPU and memory to processes. And it stands between the user (here, the sysadmin) and the hardware (the servers). "Operating system of the cloud" is, I think, a good description of Kubernetes.

    • i_have_an_idea a month ago

      > Yes, and it matches the definition of an "operating system".

      No, it does not. An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs. [1][2]

      You cannot just arbitrarily take long-established technical terms and redefine them to suit your needs or your rough intuition. If you look in the authoritative sources below, there are over 100+ mentions of Linux and Windows, as well as several mentions of esoteric and defunct OS like Haiku, BeOS and others. Not a single mention of Kubernetes.

      This is because Kubernetes is not an operating system and it does not fit the technical definition of an OS. It is container orchestration software.

      It is important to call things using their real names, otherwise, eventually, communication breaks down and no one knows what we're talking about.



      • ogogmad a month ago

        Asking what the word "operating system" means is like asking what money is. There isn't really one true definition.

        "Operating systems" in a much narrower sense were invented in the 1950s. I cannot tell you what people meant by the term back then. Then it seems that they accreted features. And then they accreted more features, like a gigantic snowball or avalanche. Ultimately, this all happened because people were lazy and didn't want to repeat themselves -- and there were lots of things to be lazy about. Also, branding and marketing became important. And now an operating system is defined by its user interface guidelines, the artwork in its GUI, what web browser it comes with, and what other software it does or does not ship with. Some people gerrymander the meaning of the word to justify the architectural choices of their favoured operating system.

        By the way, being able to parrot the dictionary's definition of a term doesn't mean you know what it means. And knowing what something "means" (that is, knowing how to use it) does not mean you know its dictionary definition. The education system often forces people to remember these things and regurgitate them, which serves only to help you sound convincing in debates.

        • i_have_an_idea a month ago

          I feel like you ignored everything that I wrote, particularly the bit about it being important to call things using their correct names for the sake of effective communication. I.e., container orchestration software vs operating system.

          As for the rest of your argument, I feel ambivalent on whether it is worth responding. But here's my two cents:

          1) Random userland software bundled with the OS does not constitute part of the OS. And that's not just my opinion, that's the legal ruling in the 2001 United States v. Microsoft Corp case, where Microsoft tried that argument.

          2) I think it is a bit disingenuous to try to present Kubernetes as some sort of a natural evolution of the term, when it is pretty clear the intent behind calling something "THE operating system of the cloud" is marketing and to try to drive up the hype.

          As for the "being able to parrot" bit, please spare your personal attacks. They don't make your argument any stronger, they just make me think it is not worth talking to you.

          • ogogmad a month ago

            > As for the "being able to parrot" bit, please spare your personal attacks. They don't make your argument any stronger, they just make me think it is not worth talking to you.

            Sorry. I wasn't trying to attack you. But I can definitely see that it comes across that way, so I'll bear that in mind in the future. I might have been arguing for the sake of arguing too.

stellalo a month ago

> I saw the pain points of git

What are these? Asking for real: it’s the second time I read a similar sentence on HN this week, without finding any specifics, so I’m curious

  • nine_k a month ago

    On the surface:

    - Git is slow on large repos, even on an SSD.

    - Git has trouble with large objects; git-annex and git-lfs sort of help, but are bolted on, not integral.

    - Git's submodules are unergonomic at best.

    - Git's CLI is a mess.


    - Git has no idea of a conflict as a first-class object; hence merges and rebases with the user fixing the same conflicts multiple times (and `git rerere`). Compare this to Pijul.

    - Git is line-oriented and has no notion of semantic diffs and semantic merges. This makes it a raw tool when working with, ironically, source code.

    Don't get me wrong: the data structures and ideas on which git is based are beautiful and reliable. But something (even) better can be built on these ideas.

    • 5e92cb50239222b a month ago

      > Git is slow on large repos, even on an SSD.

      Maybe on Windows, but then everything is slow on Windows. On my 2015-era machine `git pull` on the Linux kernel source tree is nearly instantaneous after the remote objects are downloaded. Same with `git status`, `git diff`, etc. I mean, that's what it was developed for, because everything else was slow.

      • nine_k a month ago

        How about `git status`?

        The first SSD I bought back in 2008 was to put a large git repo on it; it helped. With much larger repos, like those I had to work with at Facebook, even an NVMe drive becomes a bit uncomfortable, and one has to use something like Watchman [1] to track changes without a rather noticeable delay.


    • mikewarot a month ago

      >Git is line-oriented and has no notion of semantic diffs and semantic merges. This makes it a raw tool when working with, ironically, source code.

      Git is a content addressable snapshot system, with bolted on code to make it retrospectively appear to be a line-oriented system.

      It's worse than you thought.

      • morelisp a month ago

        It's not worse. Snapshots are exactly what you want if you would like to have format-aware diff/merge or to experiment with alternate algorithms.

        But it's easier to complain about git and throw out pie-in-the-sky ideas about "modernizing our tools" than to try the actually-existing AST-based diff/merge tools and realize it's 100x more complex for no workflow gain.

    • globular-toast a month ago

      > - Git is slow on large repos, even on an SSD.

      I think this is an example of induced demand[0]. One of git's main advantages compared to other options is its speed. Git was so fast it completely changed the way you could work. It went from reluctantly interacting with version control when you needed to check in work, to integrating it tightly into your workflow. But, like with many things, people always find a way to "use up" the resource and make it slow again.


    • pmeunier a month ago

      > - Git is line-oriented and has no notion of semantic diffs and semantic merges. This makes it a raw tool when working with, ironically, source code.

      Compare this with Pijul as well!

      I've been working on (you can try it, but nothing is ready!), which leverages byte-level storage to get higher-level diffs (I know this sounds counter-intuitive, but finer storage granularity gives you more flexibility to compute diffs).

      That said, Git isn't actually line-oriented, 3-way merge is. But then even a byte-oriented 3-way merge would give the same shitty merges as Git.

    • formerly_proven a month ago

      Diffs aren't actually a first-class object either, they're made up on the spot. Git just stores a complete snapshot for each commit; delta compression in the repo is incidental and unrelated to the diffs you see.

    • hinkley a month ago

      Git doesn't understand move and copy operations very well, and has to be tricked into detecting them.

      With Java, checking in your dependencies was always complicated by the trouble handling binaries efficiently. With NodeJS that's not a problem, but conflict resolution often ends up with duplicate files, so checking them in is still challenging.

  • bergenty a month ago

    I’m not qualified to go into specifics but I hate it. All version control needs to do is pull, push and branch. Version on branch is newer? You need to pull down before you can check in.

    Instead what we get is over complicated nonsense with commits and stashes, rebases and heads, reparenting etc. I get it you don’t want to store your code on your local machine but that’s what backups are for, that’s not what the version control system should be doing.

    • samtheprogram a month ago

      These are just different ways to handle the same problem, and the fact that git provides different methods is a good thing.

      You can choose which one to use in your project, and someone could write a wrapper to enforce or encourage a certain method if they wanted to.

      Learning one of the above methods, especially stashing and/or pull conflicts, isn’t that difficult to grasp. Git even recommends this if you try to push to a more up-to-date upstream.

      • bergenty a month ago

        There shouldn’t be different ways to handle the same problem. It should literally be 4 features and call it a day. Simplicity is a feature.

    • oivey a month ago

      I’m confused. You do have to pull before you check in (aka push new commits)? Those systems don’t exist for backups. They’re for collaboration.

    • gspr a month ago

      With only push, pull and branch, how do you refer to an old version? Hence commits, or something like them, are needed.

      And do you seriously not see the need of rebasing?

      Furthermore, you seem to mistake git's distributed nature for some sort of backup scheme. That's not the case. The idea that every repo is equal is tremendously useful.

      • dotancohen a month ago

        > And do you seriously not see the need of rebasing?

        Git user for a decade. I never rebase, not professionally and not in my personal projects. I merge the work of other devs, no matter how ugly their history.

        I don't see any real problem that rebase solves, but I do see that it mangles history and makes troubleshooting e.g. git bisect much more difficult.

        • oivey a month ago

          Rebasing others’ stuff feels gross since it is altering externally visible history. Rebasing your own stuff before you push can make commits more clear, understandable, and meaningful. `git pull --rebase` is pretty unobjectionable.

          How does rebasing break bisect?

          • jacoblambda a month ago

            Presumably they are referring to how the `--first-parent` flag works as it only uses the head of the branch from a given merge instead of including each commit from the merge.

            Some projects prefer rebasing onto master instead of merging onto master or squashing onto master.

            If you rebase onto master but don't clean up the commits at the end of the PR, this litters master with a bunch of "top level" commits that don't build and cause git-bisect's test to fail due to those commits not working in the first place.

            If you rebase onto master but you do clean up your commits such that each commit onto master represents a fully functional version of the project, this isn't a problem however it can make a bisect take way longer than if just merge commits are tested.

            If you are rebasing to this degree, I don't really understand the purpose of the rebase for a feature or issue branch (since at that point the last commit is the only "completed" commit of this type of branch and you are effectively squashing). It makes sense for, say, a release branch so you can integrate hotfixes/patches, but that workflow can be handled just as effectively, if not more so, by a merge or a squash.

            • account42 a month ago

              > it can make a bisect take way longer than if just merge commits are tested.

              The whole beauty of bisect is that it is a binary search, where if you double the number of commits you only need to do ~1 additional check. So no, it can't take "way" longer.

              And it's not like you'd call it a day after finding the merge commit that breaks things - you then need to find the actual problem with that branch and the fastest way to do that is bisecting down to the individual commit so you are actually doing the same work but artificially restricting git bisect from evenly dividing the search space by restricting it to merge commits first.

            • oivey a month ago

              Makes sense. I guess I just implicitly assumed the workflow of rebasing personal stuff and usually merging externally visible stuff.

        • 5e92cb50239222b a month ago

          You must not ever work with junior developers. Rarely do I see a properly created commit history, what you usually get is something like:

          Rebasing that stuff before merging it into master feels mandatory, or you're left with history with a very low signal-to-noise ratio.
          • jacoblambda a month ago

            That or you are stuck with somebody fighting CI or issues that aren't reproducible in their environment. It's not necessarily a junior dev issue and can just be "our tooling sucks and management refuses to invest time or money into fixing it". It's not an ideal or sustainable work environment but it's something that even senior devs still have to deal with.

            Of course that should still be rebased down to a reasonable history.

          • menaerus a month ago

            > what you usually get is something like:

            That's like 9 out of 10 people in my experience and none of them are juniors. It's pretty hard to get developers more disciplined.

          • dotancohen a month ago

            If that's the case, I would have the dev go rebase that into a nice single commit. Not the guy doing the integration work. I expect that the dev's last operation was to merge from the dev branch, fix conflicts, then push a PR.

            At the least, if your dev refuses to give you a nice commit, you could merge-squash his branch.
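
            A squash merge like that is just two commands (a sketch; the branch name is made up):

```shell
# Stage the branch's net changes without committing or recording a merge:
git merge --squash messy-feature
# Then record them as one clean commit on the integration branch:
git commit -m "Add feature X"
```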

          • dreamcompiler a month ago

            The solution for this is not rewriting history but writing a smarter history viewing tool that filters out the noise.

        • rhdunn a month ago

          On some projects I maintain a set of local fixes that I regularly rebase on top of the latest code. I don't need to run a bisect on those.

          I've also managed other projects/branches like you have, merging the changes into my work.

          I also use the cherry picking feature (which rebase builds on) a lot. That is for things like creating hot fixes, pulling in some upstream patches to my local fixes, reordering a branch, etc.

      • funklute a month ago

        > And do you seriously not see the need of rebasing?

        Rebasing isn't actually necessary, and there's a good argument to be made that you should never rebase. Fossil (the version control system used by the sqlite team) doesn't have any rebasing mechanism:

        (there are of course also very good arguments in favour of rebasing, but my point is simply that it isn't strictly necessary in a "complete" version control system)

        • gspr a month ago

          Of course it isn't necessary. It's just extremely useful. Of course it rewrites history, and I abhor rewriting shared history, but it's extremely useful to be able to rewrite your own private history!

          • funklute a month ago

            > Of course it isn't necessary.

            Well....but you literally said above that there's a need for it. In a somewhat condescending tone. And now it's suddenly obvious that it's not necessary? Okay.

            • gspr a month ago

              It's possible to have a need for something even though that thing is not necessary.

              "Oh man, I really need a nice dinner right now" is impossible for you unless that dinner consists of the absolutely strictly minimum necessaries of nutrients in a gray tasteless slurry? Come on.

              • funklute a month ago

                And again, you could lose the condescending tone, thank you very much.

      • bergenty a month ago

        What do you mean? Why can’t you just pull down version 1.3 of a file by specifying the version number in the command?

    • Akronymus a month ago

      I use stashes all the time, when I realize I am working on the wrong branch.
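
      A minimal sketch of that recovery (the branch name is hypothetical):

```shell
# Oops -- these edits belong on feature-x, not here:
git stash push -m "wip for feature-x"   # shelve the uncommitted changes
git switch feature-x                    # working tree is clean now, so this succeeds
git stash pop                           # reapply the changes on the right branch
```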

      • dreamcompiler a month ago

        Stashes seem very mysterious to many people. They're one of the most sensible parts of git IMHO. They're just a stack of changes to your files.

        To me, rebases are the work of the devil and I never use them. To each their own I guess.

        • mariusor a month ago

          Personally I never understood why the stash needs to be a stack. I can't think of a time when the order in which changes have been stashed has mattered, outside of dropping the older ones.

        • Akronymus a month ago

          > Stashes seem very mysterious to many people.

          Huh. For me, they were one of the most intuitive parts of git. Although it was a long time before I ever used more than one layer of stash at once. (And even then, I rarely do so nowadays.)

  • PawBer a month ago

    You can treat the points in the article as those.

  • oconnore a month ago

    Perhaps, instead of trying to identify issues with git, think about how really deep team based collaboration works in other programs (say, online video games?) and then work backwards to see if there are opportunities in code collaboration tools.

    Of course, just making an IDE multi-player would introduce chaos. What if the tool solved that chaos?

  • drewcoo a month ago

    Yes. I am also asking "for a friend!"

  • spencerchubb a month ago

    that's what the whole rest of the article is about :)

chriswarbo a month ago

I like the patch-based approach of Pijul and Darcs: rearranging patches seems less fragile than rebasing snapshots.

I'd like to see git's content-addressable storage more integrated into projects like IPFS (and I'd like IPFS to get its resource usage down, so it can be run as a background task on my laptop!)

As for package management, Git and Nix work really well together: e.g. we can use `builtins.fetchGit` to fetch particular commits; and we can `import` Nix definitions from those git commits; and those definitions can fetch other repos, etc. so we get a Merkle tree of the exact code used by all of our dependencies. I also like to write commit IDs as default function arguments, which makes it easy to override any of these dependencies to use another commit.

  • pmeunier a month ago

    Pijul works quite differently from Darcs: the primary datastructure in Darcs is indeed a list of patches, and the main operation is rearrangement.

    Pijul is instead a CRDT, meaning that independent patches can be applied in any order without changing the result, which makes rearrangement unnecessary, and the system much faster.

pgt a month ago

Implicit branching in Pijul is a killer feature (IMO):

  • japanuspus a month ago

    Yes, came here to plug Pijul as well: for distributed version control, the "first-class conflicts" of Pijul seems to be a step in the right direction.

    For anyone who hasn't looked at Pijul, the theory part of the documentation [0] is well worth a read.


    • vagab0nd a month ago

      How production-ready is Pijul? I heard about it some time ago and was super impressed. But apparently at the time it had issues, and was going to be re-written in a different language.

      • pmeunier a month ago

        Wow, you heard about Pijul years ago then.

        I believe it is production-ready, in the sense that there hasn't been any real bug in months. I'll remove the "beta" label when more people use it.

        One thing that isn't production-ready is, but mostly for lack of funding: that platform uses the CRDT nature of Pijul to replicate all the repositories in different datacenters, but the machines it runs on are somewhat undersized, and the PostgreSQL databases don't like that very much.

        But you don't need that to use Pijul, a simple SSH server works fine.

        • vagab0nd a month ago

          I did not expect to get an answer from you directly :) I will definitely try it out with my next project!

Xenoamorphous a month ago

> Kubernetes (the operating system of the cloud)

Never seen it described like that.

> Previously, I was a software engineer working on open-source Kubernetes at Google, building and maintaining Kubernetes developer tools such as minikube and skaffold


torginus a month ago

My main issue with Git, other than the terrible UX of the CLI, is just how common it is for one to want to rewrite the commit history - an operation for which there's no version control.

You better get it right, or otherwise you get to nuke the whole repository.

  • roganartu a month ago

    Perhaps this is a byproduct of the UI (which I totally agree is bad), but this is not true.

    `git reflog` contains a full history of all refs you’ve been on in chronological order. Unless you explicitly delete them, dangling refs are not cleaned up immediately. If you rewrite history and realise you made a mistake, you can likely recover by simply resetting the mutated branch to something from the reflog, even days or weeks afterwards (the default reflog retention is 30 days).
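
    A sketch of that recovery (which `HEAD@{n}` entry you want depends on what your reflog shows):

```shell
# After a history rewrite went wrong:
git reflog                     # find the pre-rewrite entry, e.g. "HEAD@{1}: commit: last good state"
git reset --hard "HEAD@{1}"    # point the branch back at that state
```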

    • torginus a month ago

      You learn something every day!

      • alexchamberlain a month ago

        Another super simple technique is to create a branch to go back to if you need it; i.e. if you are rebasing `foo`, start by running `git branch foo-back` and you can always reset back there if needed.

        • flurdy a month ago

          Since I often play with razors by rebasing, resetting, cherry picking, etc locally - I created a `git tmp` alias so I can play without fear of needing to go reflog diving again.

          The `tmp` command creates a commit of all changes, branches it, then rolls back the commit.
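
          One possible definition of such an alias (my reconstruction of the behavior described above, not the author's actual alias; the branch name is made up):

```shell
# "git tmp": commit everything, bookmark the commit on a branch, then roll the
# commit back, leaving the working tree as it was plus a recoverable snapshot.
git config --global alias.tmp '!git add -A && git commit -qm "tmp snapshot" && git branch -f tmp-snapshot && git reset -q HEAD~1'
```

          After `git tmp`, you can rebase and reset freely; `git reset --hard tmp-snapshot` restores the snapshotted state.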


          • _huayra_ a month ago

            This temp branch (or even `git tag my_orig_branch`) approach is usually a better on-ramp than the reflog. It's still too easy to misread the line in the reflog of a prior HEAD change and go to the wrong commit, whereas the tmp branch is foolproof (and fatigue-proof).

  • nine_k a month ago

    There absolutely is version control for rewriting commit history, no fancy tools needed.

    Start another branch at the point where you want to rewrite history; don't switch to it: `git branch original-history-branch`.

    Now `git rebase` your branch to your heart's content. This branch will have the new, rewritten history.

    The original-history-branch still has your old history, refers to your old commits and prevents them from being garbage-collected, just in case you'd like to reset your target branch to that state.
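
    As a concrete sketch of the whole flow (the commit range is an example):

```shell
git branch original-history-branch   # bookmark the current tip; stays put
git rebase -i HEAD~3                 # rewrite the last three commits however you like
# Changed your mind? Point the branch back at the preserved history:
git reset --hard original-history-branch
```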

    • comex a month ago

      That’s like saying the operating system has version control built in, no fancy tools needed, because you can `cp my-code.c original-my-code.c`, edit `my-code.c`, and have `original-my-code.c` just in case you’d like to reset your code to the old version.

      That is to say - sure, it can work, but automated history tracking is far superior.

      Though, git does at least have reflog, so if you accidentally delete or overwrite a branch, you should be able to get it back. That’s far better than the equivalent situation with files on a filesystem. But it’s still not real history tracking.

      • nine_k a month ago

        This is true, and this is entirely the UI / workflow problem.

        Git keeps a version for you while you are doing a rebase, so you can say `git rebase --abort` and get back to the preserved state. But it does not keep a log of these, and, more importantly, does not ask you whether you are happy with the end result: you cannot `--abort` right after a rebase that completed without conflicts. One could argue that you should have to explicitly run something like `git rebase commit` (or `git merge commit`) after you have reviewed the result.

    • usrusr a month ago

      And that original-history-branch is either dropped or lives on happy ever after, polluting a crowded namespace. Or it's the base of some parallel development and then that's merged and you end up with every commit twice in the history. I guess the github model of forks and PRs might help, but that's not git, it's git with an added workaround.

      Another approach (which might not be git anymore, but close enough to talk about in git nomenclature) might be some kind of facade layer for the history where you fix wrong comments, bundle up old commits into linear groups and so on. A commit hash would still reference a code state, but the repository state (code and its revised history presentation) would be something like "g123abc as seen by g345fed", usually "g123abc as seen by branch/HEAD", perhaps with some clever defaults like "head of whatever branch has the most recent commit on top of g123abc"

  • hackernudes a month ago

    The old commits with old comments are still there for a while and 'git reset --hard' can be used to get back to the old way. After a rebase there is a ref named ORIG_HEAD and a log in .git/logs/refs/heads (though I didn't really know that, I just went looking through my .git folder).

    If you do a big force push on a remote repository you could keep the old stuff in a tag or a branch.
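
    For example, right after a rebase finishes (assuming nothing else has updated `ORIG_HEAD` since):

```shell
# ORIG_HEAD records where HEAD was before the last rebase/merge/reset:
git reset --hard ORIG_HEAD
```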

  • sedatk a month ago

    Which is why I love Mercurial: it's immutable by default. You have to try really hard to mess up your repository.

    • throw-ru-938 a month ago

      Which is why I don't love Mercurial, because once you mess up your repo, it's messed up forever.

      IMO "easy to break but easy to fix" is better than "hard to break but impossible to fix", and there ain't no such thing as "hard to break" after enough time passes.

      • sedatk a month ago

        How do you mess up an immutable repository?

        • throw-ru-938 a month ago

          Accidentally commit a huge video to a code repo (stupid as it sounds, this actually happened at a past employer). Make commits with unreadable messages, huge-ass commits with tens of logically different changes, split a single logical feature over several commits, etc.

  • martijnvds a month ago

    Even when you rewrite the commit history, the old objects are kept for a while (or until you forcibly expire them).

    You can find the "old" commits using "git reflog".

    I've fixed a lot of botched rebases with that :)

  • vaughan a month ago

    Big issue is the parent commit hash being included in the commit hash. You have to rehash all commits that follow a history change.

    • kjeetgill a month ago

      I wouldn't call that an issue, more like an indispensable core feature! The fact that git bled the content-addressable hash through to the front of the UI is an underappreciated stroke of genius as far as I'm concerned.

      It takes so much of the guess-work out of build tools, CI, etc.

  • byroot a month ago

    Doesn’t git reflog answer that?

    • 5e92cb50239222b a month ago

      It's local to a single repository, can't be pushed anywhere, and is collected periodically by git gc.

      • byroot a month ago

        I don’t see why OP would need any of that to undo a history rewrite they messed up.

  • drewcoo a month ago

    And as a black-eyed veteran of DCS and (oh, thank god, SVN) and P4 and other stuff, and migrations from one to another and . . . often stuff that wasn't actually source control . . .

    I say stuff it!

zambal a month ago

> I saw the pain points of git (and GitHub) firsthand working on Kubernetes open-source.

The article could have been interesting if the author actually expanded on what these pain points are. I'm genuinely curious about those.

eternityforest a month ago

I'm perfectly happy with Git. In the most basic uses, it's extremely easy, get git cola and you're done. The advanced uses are harder than they need to be, but not by that much. There's always google.

I wish it had native zip and sqlite support, I'd love to see issues and PRs more integrated, but that's about it.

It would be nice to have an unversioned sync directory in a repo, that could be updated without a commit and had no history, for implementing fast-changing stuff where history isn't critical, like stashing log files right on an internal config repo. But that's not exactly necessary.

I'm sure better things could be done. But would they be usable over ssh, no certs needed, as well as HTTP? Or would you need a domain name? Would they have decent GUIs? Would they still be decentralized? Would they have an equivalent to LFS? What features would they drop?

Would half the features be plugins so that every repo relied on a unique set of optional features?

I'm... not sure I'd like the kind of VCS that the current FOSS culture would like to make....

If people really tried to replace Git, I could see multiple VCSes getting big at once. Git is pretty unique in how popular it is; you probably don't need to know any others.

We can do better than git, but it's pretty unique. It's already a base primitive for so many things like package managers and notetaking. It's so deeply engrained in dev culture, it's almost like UTF-8, and I would hope whatever replaces it has that same property.

I think the easy solution would just be if the git devs themselves made a new first party front-end and added a few features.

There are lots of git frontends that everyone ignores, because you might as well just learn git, it's everywhere. But if some new git2 command was included and just as common, we could have a very smooth transition.

rswail a month ago

I'd really like an issues porcelain built over git, so that when I clone I get all of the project's history, merged with pull requests etc.

I hate that the github/lab hosting solutions end up with a central database to keep track of issues, CI, etc. It breaks the whole "distributed" model.

  • chriswarbo a month ago

    I use

    Issues are kept in the repo, inside a .issues folder, so they can be cloned, merged, etc. Each issue is just a maildir, with metadata stored as headers on the top-level message, and comments stored as reply messages.

    I really like this approach, since there's no need for an always-on server; it's decentralised; I can use any mail client to browse and update the issues (there's also a simple CLI for listing and printing issues/comments); etc. For example, I write artemis issues using the standard message-mode in Emacs, and I render issues to a Web site using MHonArc (a program originally designed to render mailing lists).

  • WorldMaker a month ago

    On the PR side, I'm pretty strict about PRs requiring merge commits (no fast-forwards/squashes/rebases), because that keeps a useful history of the PR in source control no matter if you switch hosts.

    If everything is PRed and all PRs make merge commits, `git log --first-parent` on your main branch is your PR history. If PRs are also your unit of CI, `git bisect --first-parent` is a bisect on your (PR) integration log.

    The only thing missing from your "central database" at that point is comments inside the PRs that aren't accounted for in your final merge message in the merge commit.
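
    A sketch of both commands (the branch name and the bad/good endpoints `HEAD` / `v1.0` are placeholders; `git bisect start --first-parent` needs git 2.29 or newer):

```shell
# One line per PR: only commits on the main branch's first-parent chain.
git log --first-parent --oneline main

# Bisect at PR granularity, following only first-parent history:
git bisect start --first-parent HEAD v1.0
```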

  • morelisp a month ago

    For magit users, there's - ultimately the store of record is still centralized as it's GitHub/GitLab/etc., but it does integrate a local copy of it nicely with your other git operations.

  • bandie91 a month ago

    I worried about issues/tickets being centralized too, so I put them in the repo itself as files and asked contributors to submit tickets by PR into a dedicated subfolder. It would be worth standardizing this.

ChrisMarshallNY a month ago

Personally, I’m mostly fine with Git.

But I started off with MPW Projector[0], so it's all sunshine, after that...

The “UX” doesn’t bother me, as it’s a fairly classic CLI application. Any “UI” is provided by apps that are written over the CLI. That’s a common pattern in development.

I use SourceTree, for the most part, with occasional drops into CLI, for specialized tasks.

Not perfect, but adequate. I do not use the Git integration in Xcode. I find it to be inconsistent and buggy (like many Xcode features).

I have found that submodules are all but worthless, which sucks, because I feel that they really are the best way to tag aggregate codebases. From what I understand, this is because of the author’s workflow. If submodules worked like package managers, that would be wonderful.

In my experience, I find sparse checkouts to be a bit “kludgy.” I find it annoying to have to check out an entire repo, for a single file (my testing code usually dwarfs the implementation code, in my packages).

Perforce used to be good at specifying “only what is needed” workspaces.

One aspect of VSS (dating myself —no one else will) that was very cool, was the ability to specify “artificial” workspaces.

You could create a workspace that aliased files from multiple workspaces into a new aggregate. When you modified and checked in work, it could be doled out to several different repos.

That’s pretty hairy. It would be difficult to implement safely.

Git has changed the way I work. As time has gone on, I have moved away from using feature and release branches, to using tags on a single mainline.

I think forks and PRs are great things, but they aren’t actually native Git features.


EdSchouten a month ago

- A client side virtual file system (FUSE), so that you can work with large repos that exceed the size of your own system.

Version control systems like Piper (Google-internal), Eden (Facebook), and GitVFS (Microsoft) already do this, but adoption is marginal.

  • Ericson2314 a month ago

    I really want a system-wide content addressable store that a lot of different things can use.

  • WorldMaker a month ago

    It's also interesting to note that at this point Microsoft seems to be pivoting away from GitVFS. They've put a lot of collective engineering work into git sparse clones and git partial clones and the intersection of the two ("sparse cones"), and optimizations like the git commit graph, pushing git itself to be better at virtualizing its object store, not touching objects it doesn't need, and better handling just-in-time scenarios for object retrieval when it really does need them.

  • skybrian a month ago

    Having used Piper, I think the Go package manager's approach is better, actually, at least for a collection of packages maintained by independent teams.

    If you're not going to commit to fixing downstream packages for other teams when you change something, it doesn't make a lot of sense to have a monorepo. Instead, let people upgrade at their own pace.

    • oivey a month ago

      This is a good way to put it. I think the opposite is also true: going with many repos means you are committing to other teams being on their own to upgrade. It’s difficult to know if you’ve fixed all downstream code if everything isn’t in one repo. That model makes sense for OSS. Not so sure about within companies.

SulphurSmell a month ago

I wish I could answer this. I have always had a penchant for any robust versioning system. Even the simple ones (CVS and the like) were amazeballs to me at the time. 20-odd years ago I spent a lot of time with ClearCase (initially Rational, now IBM Rational). I was amazed at what could be done. We had dozens of customers, maybe 3 major and 10-14 minor versions in the field, and probably dozens of "bug fix" or special customer branches. Oh, and likely 2 or 3 ongoing major-version dev branches. Somehow, it all worked. If you could describe (by whiteboard or hand-waving) what you wanted to see as your working branch, it could be done. "Give me a view that is exactly what customer x has, but for this directory, give me latest, except for this file...I want that one from customer y." And blam! The CC guys would make it happen. The magical config spec. I have had to dabble in many other SCMs (SVN, Git, etc.) and they all seemed to be a compromise, or just ran out of gas when the going got tough. In my mind, I wish I could argue that ClearCase was where it was at, and that the patterns it supported would be wonderful to have today...especially over Git. But I don't know enough to defend the point. All I am saying is that even with enormously complex version scenarios, the damn thing didn't break and we all got our work done.

  • Art_Wolf a month ago

    One reason we moved off it was because it was a bandwidth hog! I believe they had two clients to try and help with this, a thin and a thick client. Even with the thin one, with the CC servers hosted in the States and some developers located in Europe, it would take half an hour for the client to refresh and pull down the latest changes.

    • SulphurSmell a month ago

      Interesting. It was never an issue for us, as we were close to the CC servers. Some folks were not, so they ran their dev/build "network close" to the CC servers and VNC'd in from their actual location. In that case, only the stuff on the screen had to transit the "far away" network. Although I think today network issues would be less of a problem; I tend to think that networks have scaled faster than code base size. I could be horribly wrong about that, though. Any centralized repo is going to have this challenge. It also depends on whether you prefer snapshot or dynamic views. Snapshots were much easier on the network, at the expense of consistency. I also remember the CC team could work magic at optimizing things if you gave them time and a bit of flexibility. Crappy config specs were hard to read, and often slow to work off of. Any config spec that I had to scroll...I knew I was in for a shit week.

evouga a month ago

Auto merge using a “semantic diff” sounds like a complete nightmare to me.

How many people actually check that git has merged code correctly? When there are no conflicts and the merge passes CI? Exactly.

Now imagine that git automatically fixes conflicts by rewriting code Copilot-style. It will work perfectly 95% of the time, so the new feature is too useful to ignore and everyone uses it. The other 5% of the time, git resolves the conflict by riddling your code with subtle bugs and vulnerabilities…

geenat a month ago

Native large file support.

Git annex functionality built in = gg every other source control system.

  • WorldMaker a month ago

    Git annex "competitor" Git LFS is bundled in many distros of git today and supported by most of the major hosting providers at this point (if you are willing to pay for LFS space).
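    For reference, the tracking rules LFS uses are ordinary `.gitattributes` patterns; running `git lfs track "*.psd"` writes a line like the following (the `*.psd` pattern is just an example):

```
*.psd filter=lfs diff=lfs merge=lfs -text
```

    Everything matching the pattern is stored as a small pointer file in the repo, with the real content living on the LFS server.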

irrational a month ago

> Will a new version control system (or something that solves similar problems) spring up?

Of course. There is no way Git is the be all and end all of version control. Anyone who has been around long enough has seen lots of version control systems and knows that we will see lots more as time goes on.

  • hackernewds a month ago

    Especially given how primitive git still seems. I, for one, can't wait

gigatexal a month ago

I dunno. I’ve decided like with SystemD to just learn it and get on with life. Git is ubiquitous and so is SystemD. Learn them. Adapt. Move forward.

This is not to say they’re perfect but they’re open source so if I felt strongly I could offer code to change it.

polyrand a month ago

Not related to the problems listed in the blog post, but most of my problems with git went away when I started using worktrees[0] (I do a `--bare` clone and add worktrees in there).

Working on a new feature: new worktree
Doing a code review: new worktree
Testing random changes: new worktree (usually in detached mode)
Need to debug some code from the current production branch: new worktree

All the problems with `checkout`, `stash` and switching between branches disappeared.
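A minimal sketch of that setup (the repo and branch names here are invented):

```shell
# Build a tiny repo to act as the remote, then do a bare clone and
# hang one worktree per task off it, as described above.
cd "$(mktemp -d)"
git init -q origin-repo
cd origin-repo
git config user.email demo@example.com
git config user.name Demo
echo hello > README
git add README
git commit -qm initial
cd ..

git clone -q --bare origin-repo project.git
cd project.git
git worktree add -b feature-x ../feature-x   # new feature branch
git worktree add --detach ../scratch         # detached, for throwaway tests
git worktree list
```

Each worktree is a full checkout with its own branch and working state, so there is nothing to stash when you switch tasks.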


he0001 a month ago

I developed a couple of tools on top of git for various reasons. The problem I found is that many, many developers don’t understand how git actually works, so they can’t map git’s commands onto what it actually does; I have to explain how git works over and over again. I think git is extremely good compared to everything else, and it’s extremely simple yet advanced. But people don’t seem to understand what it actually does. Replacing it with something else is going to be hard, as I believe anything more advanced will be even harder to understand.

samgranieri a month ago

I've used Git since 2007, and mastered some of the esoteric parts of it a few years ago. I think it's gonna be here to stay for the long haul.

Granted, the learning curve can be a little steep, but once you learn it you're good to go. There's a rich ecosystem of documentation, helper tools and shell aliases you can find that can help you master the tool to get your job done.

Some of my favorite paid tools for working with git are Tower and Kaleidoscope (I'm a huge fan of native mac apps).

brendoncarroll a month ago

I've been working on a project "Got". Which deals with the LFS problem, mentioned in the post.

Got isn't really trying to do software version control better than Git. It's trying to make general purpose file versioning practical, with a workflow similar to Git's.

ziml77 a month ago

The semantic diffing bullet point has me wondering what attempts there are at making open source semantic diff tools. I love the closed source tool SemanticMerge, but the company behind it decided to be shitwads and pull the product so they could add a selling point to their proprietary version control system.

  • WorldMaker a month ago

    Several threads here point to difftastic:

    I know a lot of people who have a lot of hope for diffsitter (or something like it):

    Personally, I think the reason most "good" semantic diff tools are proprietary is that they are huge amounts of effort that are mostly "hacks" and "heuristics" bandaged together in ways that people don't want to let out how the sausage was made.

    But I also think "general, language agnostic AST-based semantic diff" is a mountain peak we cannot reach (probably ever), and I believe my experiments found an interesting local maximum that people are maybe passing by on the way to that ideal mountain (lexer-based diffs rather than parser-based diffs):
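    To make the lexer-based idea concrete, here is a deliberately naive sketch (the regex "lexer" and function names are invented for illustration): diff the token streams instead of the raw text, so formatting-only changes vanish from the diff.

```python
# Toy lexer-based diff: tokenize first, then diff the token streams,
# so whitespace/formatting-only changes produce an empty diff.
import difflib
import re

TOKEN = re.compile(r"\w+|[^\w\s]")  # crude stand-in for a real per-language lexer

def tokens(source: str) -> list[str]:
    return TOKEN.findall(source)

def token_diff(old: str, new: str) -> list[tuple]:
    sm = difflib.SequenceMatcher(a=tokens(old), b=tokens(new), autojunk=False)
    return [
        (op, sm.a[i1:i2], sm.b[j1:j2])
        for op, i1, i2, j1, j2 in sm.get_opcodes()
        if op != "equal"
    ]

before = "int x=1;\nint y = 2;"
after = "int x = 1;  int y = 3;"  # reformatted, and y's value changed

print(token_diff(before, after))  # only the 2 -> 3 change survives
```

    A real tool needs a proper lexer per language and smarter token alignment, but even this toy shows why the lexer level is a practical middle ground: no parser required, yet the noise of reformatting disappears.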

nailer a month ago

> Semantic diff has been tried before, but language-specific implementations will likely never work.

Why not? Storing code as code (ie storing an AST not text) and treating all changes as a CRDT, allowing your refactor to use a variable I just renamed without merge conflicts, seems completely reasonable.

kache_ a month ago

Nothing. Keep it simple. There's a reason it has stuck around

>inb4 it isn't simple

it is

  • tsimionescu a month ago

    Subversion is much simpler than Git (centralized is always simpler than decentralized). Being simple is not what made Git popular.

    Git is popular because it is free, fast, works well enough, and was popular in some major projects.

    Its major advantage for most orgs (those who actually use it in a centralized manner, with a corporate repo that everyone syncs with every day, unlike the Linux kernel team) as compared to Subversion is that it makes branches cheap and easy. The ability to work with history offline is also nice in niche situations, but definitely not the major selling point.

    Its major advantage compared to Hg is that it is more popular.

    • happyweasel a month ago

      > it makes branches cheap and easy.

      IMHO branching itself isn't expensive in Subversion; the problems arise when you merge (feature) branches back. If you branch off (for release branches) and then just selectively merge certain commits to that branch only when needed (the info being stored in svn:mergeinfo), I think it's not that bad. I prefer branch-by-abstraction and trunk-based dev anyway, so here you go ;)

      • COMMENT___ a month ago

        Subversion has its branching and merging quirks, but it works and works well in most cases. Most of the usage problems can be solved by sticking to best practices and using up-to-date SVN client versions.

  • vaughan a month ago

    No one chooses to use git. They don’t say: let’s use it because it’s simple. It’s really the only choice because of network effects. It’s one of the biggest glaring monopolies in tech. Think about the barriers to entry for a new VCS: GitHub/GitLab support, IDE support.

    • kjeetgill a month ago

      There are still git fans out here in its corner!

      I feel the same way when I hear griping about vim or Java. Like sure, there's surface stuff to cry about... Until you start to hear people's ideas. Then you're glad people have managed to mostly leave it alone!

      I also think there's just this class of tools that only make sense once you get to a certain level with them. Learning curves are just steep on some things. That's not the same as bad design.

    • ferruck a month ago

      I choose to use git, over and over again. I hardly use any of its "network effects" but rather its features. So, maybe try to be a bit less general the next time, please.

      • vaughan a month ago

        Do you use GitHub? Do you work with a team? Did you really review other alternatives and try them out?

        • ferruck a month ago

          Sorry for the late response.

          > Do you use GitHub?

          Just a bit, it's nothing I look forward to. I mainly use Gitlab due to work, but it's not that much better.

          > Do you work with a team?


          > Did you really review other alternatives and try them out?

          Only SVN some years ago and I hated it.

          It wasn't my intention to say that Git is the best VCS we'll ever get, but I enjoy using it and (besides the sometimes baroque command-line syntax) have nothing to complain about. So, whenever there's a need for version control, I choose Git without a second thought, and so I never even felt the urge to try something else. I'm sure that this will change at some point, but not yet.

  • Akronymus a month ago

    The beauty of git is in that it is a simple data structure, with simple functionality.

  • fargle a month ago

    agree. same question as "what comes after C ?"

    same answer.

    • tsimionescu a month ago

      I'm sure some have written the same about various assembly languages, about COBOL, about FORTRAN, about LISP in AI.

      No king rules forever. C has become a much more niche language than it was 20-30 years ago, and the trend continues. C++ has slowly eroded most of its niches, and there's a slew of new languages gunning for its throne - Rust being the most likely to succeed (especially once we will have some popular Linux modules in Rust).

      Will there still be a niche for C programmers 50 years from now? Absolutely. Just like there is a niche for COBOL maintainers today.

      • fargle a month ago

        But those aren't good analogies for "C". I think the best analogy is the long-standing belief prior to "C" that assembly would "always" be the systems programming language. Yes, someday a new king will be born (I doubt a coup).

        In a similar vein, git is a beautifully designed and conceived simple content-addressable object store, with a complex and challenging UI (I don't have any problem with it, but still).

        Neither of these tools is perfect; in fact they can sometimes objectively be more imperfect than many of their competitors. But they are special.

        > some have written the same about various assembly languages, about COBOL, about FORTRAN, about LISP in AI

        It's not what people write about - it's what they use. C, UNIX(/Linux), Git are all very similar - and Pervasive.

t43562 a month ago

Version control as a way of distributing and versioning the built code and also of rebuilding it minimally.

Goodbye to installers and package managers....and traditional build systems.

HelloNurse a month ago

It reads like a promotion for the worst past and future GitHub features of the Microsoft era, starting with the intentional confusion between git and GitHub.

jamesrom a month ago

>GitHub Copilot, but for merge conflicts?

Great idea. The corpus of data to train with would be incredible. Every merge commit across all of GitHub.

fsiefken a month ago

Project management and issue tracking can be done with, for example, an org-mode or TaskJuggler file inside or outside of git.

dvh a month ago

"Talk is cheap, show me the code"

  • tsimionescu a month ago

    Code is cheap, agreeing what problem to fix and how to fix it is the major cost of any software project.

nathias a month ago

for all complaining about UX, you can make your custom aliases and install extensions (eg. git undo) ...

  • tsimionescu a month ago

    For all those complaining that this tool box doesn't have any familiar useful tools, you can forge your own tools and use them instead.

bvrmn a month ago

Looks like the author has a typo in the title, and "Git" should be read as "GitHub".

ghoward a month ago

Funnily enough, I'm working on the next Git. (I'm not the author of Pijul; it is another one.)

So here's what I think of these points, as well as how my VCS will address them.

1. Atomicity across projects: this is a good point, a necessary one. When I asked people why Git submodules are so bad [1], that was the biggest point. My VCS will have this, even though I haven't quite figured out how to do it yet. I'm almost there.

2. Native package management: my VCS will have this, sort of. The thing is that I hate CMake, so I'm building a build system, and while doing the design of it, I realized people are doing build systems and package managers wrong. So I'm working on that too. Needless to say, my package manager/build system will be well-integrated into my VCS. (My package manager will also have another shtick: you can set security policies for individual packages, which means that situations like where npm packages become malware should not cause damage.)

3. Semantic diff: I have figured this one out. It will exist. It will be language-specific, but all that it needs is a lexer for the language and a dumb understanding of the structure. This same system will also be used for diffing and merging binary files, such as Blender files, PNG's, executables, PDF's, files for Microsoft Office, files for LibreOffice, etc.

4. Merge queue data structure: my VCS will have something that will serve this purpose and could probably implement a high-level interface to it, like Git has porcelain over low-level operations. However, what I have actually designed is so powerful, it will also be capable of real-time collaboration and of implementing full undo/redo.

5. Fan-out pull requests: this is currently not in my plans (I think better package management would handle most or all of this), but it would be trivial to implement if people want it.

6. Terrible UX of Git: I'm going to spend the time on this upfront. In fact, I want to do user testing with non-programmers until they find it easy. I will take inspiration from Mercurial for sure, but the user testing results will be the most important.

7. Large file storage: my VCS would be completely ineffective at binary files if I didn't have a plan for this. I do have a plan, and I will be testing on large files, including up to multiple terabytes, from the start.

8. Project management hooks, but not features: there will be built-in features for things that should always be there (issues is one, I believe), but there will also be a way to set up your own. They won't be hooks, per se, but it will be possible to create the project management "flavor of the month."

Comments on these are welcome.


vaughan a month ago

I’d love to move away from parent commit hash being included in a commit hash. It makes rewriting history so complex.

  • DougWebb a month ago

    That's kind of the point. The hash guarantees a chain of commits/changes. If you're rewriting history, you're creating a new chain, and all the changes along that chain need to be reverified.
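    The chain is visible directly in the object format; a quick sketch in a throwaway repo (repo and file names invented):

```shell
# Two commits, then dump the second commit object: it literally embeds
# the first commit's hash, so rewriting "first" forces a new hash for
# "second" and everything after it.
cd "$(mktemp -d)"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name Demo
echo one > file
git add file
git commit -qm first
echo two > file
git add file
git commit -qm second

git cat-file -p HEAD   # note the "parent <hash>" line
```

    This is exactly why history rewriting cascades: a commit's identity covers its content and its ancestry.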

nathants a month ago

git is like cpp, you have to use a subset:

- push

- pull

- checkout

- checkout -b

- merge --ff-only

- stash

- stash pop

- reset --hard origin/master

- reset --soft $hash

- commit -m

  • kjeetgill a month ago

    Interesting exercise. Mine looks more like:

    - push/fetch: I personally never liked pull, it's sugar over fetch and merge/rebase.

    - checkout -b: haven't looked into all the switch stuff yet

    - rebase -i: my bread and butter. I rarely use merge aside from PRs where rebase bugs people.

    - add/commit -am: obviously required

    - cherry-pick/reset: pretty much replaces stash without introducing a whole new set of tools

    - diff/log/status: if you get comfortable with these, you're not going to wedge yourself ever again

    - branch: very much an unsung hero. If you get comfortable with committing often and using branch, you can't ever lose your place doing anything ever again.

    • nathants a month ago

      lately i’ve been using stash, reset hard, stash pop. work on a single patch over main.

      use a backup branch prior to stash, just in case.
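      Spelled out, that looks roughly like the following (branch and file names invented; note a branch only protects the committed state — the stash carries the uncommitted part):

```shell
# Minimal walkthrough of the stash / reset --hard / stash pop cycle,
# with a backup branch taken first as a save point.
cd "$(mktemp -d)"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name Demo
echo base > file
git add file
git commit -qm base
git branch -M main               # normalize the branch name for this sketch

echo wip >> file                 # some in-progress, uncommitted work

git branch backup-before-reset   # save point for the committed state
git stash -q                     # park the dirty working tree
git reset --hard -q main         # repoint to a known-good state
git stash pop -q                 # replay the in-progress work
```

      After the pop, the working tree has the in-progress change back, and the backup branch is still there if anything went sideways.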

      • kjeetgill a month ago

        That's reasonable, but I think a commit + a rebase -i of just that single commit gets you to the same place. And now your backup branch can preserve the code you would have stashed.

        • nathants a month ago

          rebase is a multi-stage command that often requires a force push. as good as it can be, discouraging its use, especially in contexts where git is considered challenging, is probably the play.

          • kjeetgill a month ago

            That's a really good point.

  • shikoba a month ago

    > pull

    I find that command too dangerous. I only use git fetch and git merge --ff-only.

    You forgot git tag
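    For anyone following along, that safer-than-pull sequence looks like this (repo names invented; `@{upstream}` resolves to the tracked remote branch):

```shell
# Set up an "upstream" repo and a clone, land a new commit upstream,
# then update the clone with fetch + merge --ff-only instead of pull.
cd "$(mktemp -d)"
git init -q upstream
cd upstream
git config user.email demo@example.com
git config user.name Demo
echo v1 > file
git add file
git commit -qm v1
cd ..
git clone -q upstream work

cd upstream                      # a new commit lands upstream...
echo v2 > file
git add file
git commit -qm v2

cd ../work                       # ...and the cautious update:
git fetch -q origin
git merge --ff-only '@{upstream}'   # aborts rather than create a surprise merge commit
```

    If local commits have diverged from upstream, the `--ff-only` merge fails loudly and you decide explicitly whether to merge or rebase.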