ninjin a day ago

"The reason why we have Linux, and BSDs, and XNU is that they all provide the same baseline API, which was defined from the outside [by POSIX]. The coordination problem was pre-solved, and what remained is just filling-in the implementation."

But that is not at all how Posix operates or has operated. Posix standardises common denominators between existing implementations. The fact that we now have strlcpy(3) and strlcat(3) in Posix 2024, is not because Posix designed and stipulated them. Rather, they appeared in OpenBSD in 1998, were found useful by other *nix-es over time, spread, and were finally taken aboard by Posix that standardised what was already out there and being used! This to me is the very opposite of the point the author is trying to make!

  • dale_huevo a day ago

    Linux would have had strlcpy/strlcat 25 years ago but the glibc maintainer was famously being a giant chode and refused to allow "horribly inefficient BSD crap" into his precious code, and this fight went on for years:

    https://sourceware.org/legacy-ml/libc-alpha/2000-08/msg00053...

    So it wasn't for lack of trying. Yes, Open Source can't coordinate and this is why we can't have nice things.

    • oersted a day ago

      It's surprising how we ended up with such a robust open-source OS ecosystem (pretty much every server running Linux) with such emotional people at the helm.

      He is clearly not being rational there, but I could see how his aesthetic tastes might correlate pretty well with robust software. I suppose that saying no to new features is a good default heuristic, these additions could have easily added more problems than they solve, and then you have more surface area to maintain.

      That being said, this old-school ideology of maintainers dumping the full responsibility on the user for applying the API "properly" is rather unreasonable. It often sounds like they enjoy having all these footguns they refuse to fix, so they can feel superior and differentiate their club of greybeards who have memorised all the esoteric pitfalls, simply because they were along for the journey, from the masses.

      • graemep 21 hours ago

        > It's surprising how we ended up with such a robust open-source OS ecosystem (pretty much every server running Linux) with such emotional people at the helm.

        People developing proprietary software will not be any less emotional or any more rational. The difference is that it does not happen publicly.

      • bandoti a day ago

        I recommend folks give “The Cathedral and the Bazaar” a read. Another good book is “Negotiating Rationally” (see below).

        If the core developers/maintainers are putting in thousands of hours over several years, and a patch comes along, it is rightfully at the discretion of those doing 80-95% of the work.

        But as negotiating rationally discusses, we value our work more than others—and there’s some emotional attachment. We need to learn to let that go and try to find the best solution, and be open to the bigger picture.

        https://en.m.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar

        https://www.simonandschuster.com/books/Negotiating-Rationall...

      • aleph_minus_one a day ago

        > It often sounds like they enjoy having all these footguns they refuse to fix, so they can feel superior and differentiate their club of greybeards who have memorised all the esoteric pitfalls, simply because they were along for the journey, from the masses.

        Often these pitfalls exist because they enable some performance optimizations. The respective maintainer does care about performance.

        • pdimitar 3 hours ago

          Sure. One part of the time. Likely maximum 10% even. Very often it's just somebody's fragile ego.

          • aleph_minus_one 10 minutes ago

            > Likely maximum 10% even.

            Intel, AMD and Apple would very likely be willing to invest an insane amount of money for a 10% performance increase. So, if this indeed increases performance by about 10%, I'd call it a very good idea.

      • skywhopper a day ago

        Many of the people involved in the history of Linux (and most software) are jerks, but when people dig up a “this jerk blocked X for 25 years” story, we aren’t seeing the 100s of other (mostly bad) ideas that same jerk also blocked that would have changed things in other ways (possibly for the worse).

        My point being, not that the person isn’t a jerk or that the decision wasn’t wrong, but that one error by one jerk doesn’t tell us much.

        • MichaelZuo a day ago

          It does tell us that there has been no advancement in terms of jerk gatekeeping.

          Because there is no easy way to determine if they actually blocked hundreds of bad ideas.

          • vladms a day ago

            If a project does not attract several hundred bad ideas, then it is probably not popular. I don't follow that many projects in detail, but all that I do follow get a lot of bad ideas over time.

            I think whether someone qualifies as a jerk is orthogonal to the need for gatekeeping (required, in my opinion) or its quantity (higher for more popular projects).

      • AIPedant a day ago

        "BSD crap," "deserves to be punished"...

        There's also an element of "Linus Torvalds is an antisocial jerk, and he's a genius, therefore if I am an antisocial jerk I must be doing genius-level work." In particular, it's a lot easier to attack someone with empty insults than it is to defend your own position with substantive thought.

        • agumonkey a day ago

          is "antisocial" appropriate for Linus? I thought he was semi hot-headed and thin-skinned when it comes to quality, not necessarily harmful for no reason

          • mystraline a day ago

            The trend of 'savant but antisocial asshole' is not just a software thing.

            Gordon Ramsay - "this food is fucking raw!" /throws food

            We have sitcom dramas like House glorifying the same thing.

            • gbear605 18 hours ago

              Notably, Ramsay is mostly doing that as an act, so it’s really just the same thing as House. If you watch his UK shows, or see some of the other stuff he’s put out, he doesn’t bother with that whole antisocial performance.

            • pbhjpbhj a day ago

              I mean, he deserves to get upset if a professional preparing food serves raw chicken. I guess the programmer equivalent would be not sanitising input, or not even knowing what an injection attack is.

              • scott_w 18 hours ago

                True, the times I see him go fully mental, it's pretty justified. Stuff like out-of-date food, uncooked food, vermin in the kitchen, unclean surfaces, etc.

                This is the kind of stuff that can make people seriously ill and kills multiple people every year. This isn’t even lack of skill, it’s pure laziness.

              • pydry 17 hours ago

                The Linus rants I've seen were pretty much all over the equivalent of a chef undercooking chicken.

            • agumonkey a day ago

              but real question: is that antisocial? to me antisocial is much more toxic and unrelated to perfectionism

              an antisocial ramsay would just throw your stuff no matter what, even if you did well, for the sake of messing with your head

              • stonemetal12 a day ago

                You can be a perfectionist, and not throw a hissy fit when things aren't the way you want.

                >an antisocial ramsay would just throw your stuff

                A social Ramsay would refuse to eat your food, but not throw it at you or have a giant baby fit about it. Of course no one would watch him on TV if he were calm and collected.

              • AIPedant a day ago

                I think you're confusing antisocial personality disorder with antisocial behavior. I am not diagnosing Torvalds with anything, just describing his behavior.

                • agumonkey a day ago

                  hmm yeah, I thought these were nearly the same. I'd describe Linus as an edgy, angry perfectionist (I'm less angry, but I understand the strict spirit).

        • SecretDreams a day ago

          > "Linus Torvalds is an antisocial jerk, and he's a genius, therefore if I am an antisocial jerk I must be doing genius-level work."

          There's way too much of this in general. People use a talented individual with problematic behaviors to justify their own problematic behaviors. So many talented ICs are absolute dickheads to work with.

      • isaacremuant a day ago

        > It's surprising how we ended up with such a robust open-source OS ecosystem (pretty much every server running Linux) with such emotional people at the helm.

        As opposed to what? Unbiased and dispassionate? There's no such thing. What you're probably thinking of is careerism and authoritarianism within a corporation. That's not more efficient than the Darwinism of open source.

        Naturally, passionate builders and experts who rise to prominence controlling a tool will feel strongly about the vision for that tool. That's how it gets made in the first place.

        Calling them "emotional" is just cheap.

        Your so-called "rationality" is easy when you're not the one pouring intense effort into something.

        You keep diminishing and attacking these "arrogant" creators while you're clearly the model of rationality who have built... No, you use what they build. Funny, that.

        Maybe take a humble pill.

        • oersted a day ago

          I suppose that what I'm advocating is being passionate about the technical problem, and only the technical problem. Making decisions based on facts and principled reasoning, and not vague aesthetic preferences or personal animosity.

          This is no utopia, and it is not rare, it's pretty basic professionalism and engineering discipline. If you really care about the problem you are solving, you'll push the rest of the baggage aside, especially your ego.

          Surely name-calling and making unfounded gut judgements based on us-vs-them tribalism, as seen in that response, is not very productive. He demonstrated no intention to solve the problem, no acknowledgement that it exists, no explanation of why the solution is not appropriate or what alternative solutions might be better... He had no interest in working together to find the best path forward. He was simply being territorial and scaring off those who did not align with his Holy Taste, whatever that is.

          • vladms a day ago

            I see it as a trade-off. There will be people passionate and rational enough about a project to make it work 90% of the time while being total jerks the other 10%. Would that make me put in the effort to do all the work? If jerk people "push it" too much, on too many topics, projects will be forked. But I think we will always have some who manage to be "just acceptable"...

          • isaacremuant a day ago

            > Making decisions based on facts and principled reasoning, and not vague aesthetic preferences or personal animosity.

            You're like companies claiming that "we make decisions based on data".

            Believe your own Kool-Aid, but reality is much more nuanced and power/leadership/intuition-driven than "data-based".

            I don't want to get into politics but it would be extremely easy for me to find several examples where you'll claim something and when I say that's emotional and tribal you'll decide I must be <label>.

            I don't even care about this specific example, but about your initial generalization from it. Either you talk about this specific case only, or you make and prove your generalizations in a "rational and unemotional" way, right?

    • pif a day ago

      > The problem with strlcat and strlcpy is that they assume that it's okay to arbitrarily discard data for the sake of preventing a buffer overflow. The buffer overflow may be prevented, but because data may have been discarded, the program is still incorrect. This is roughly analogous to clamping floating point overflow to DBL_MAX and merrily continuing in the calculation.

      He was not that wrong!

      • oersted a day ago

        Just to be clear, someone else wrote that response. It's unclear if the maintainer had the same thought process. They probably did, to be fair, but the fact that they decided to throw an incoherent tantrum instead is not very helpful.

        • johhnylately535 a day ago

          I believe the responder is here on HN as kazinator. I remember him from back in the day.

      • WD-42 a day ago

        Wow so “Linux didn’t get it for 25 years because the glibc maintainer is a jerk” is an oversimplification??? Shocking.

      • cornstalks 21 hours ago

        That argument sounds totally wrong because you can easily detect truncation:

            if (strlcpy(dst, src, dst_len) >= dst_len) {
              // Truncation!
            }

        • patrakov 20 hours ago

          The argument is also totally wrong because the whole point of strlcpy is to copy the string and, if it fails, calculate the amount of storage that was really required, without making two passes over the data that does fit. The fact that the too-small buffer is overwritten with a truncated copy of a string is just a side effect.
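
          A sketch of that idiom (with a local stand-in for strlcpy, so it compiles even where libc lacks it; copy_or_grow is my own illustrative helper name):

```c
#include <stdlib.h>
#include <string.h>

/* Local stand-in for strlcpy so this sketch is self-contained; POSIX 2024
 * and the BSDs ship the real one in <string.h>. */
static size_t xstrlcpy(char *dst, const char *src, size_t size) {
    size_t srclen = strlen(src);
    if (size > 0) {
        size_t n = srclen < size - 1 ? srclen : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}

/* Try the fixed buffer first; on truncation the return value already says
 * exactly how much storage the full copy needs -- one pass over the data.
 * (Hypothetical helper, for illustration only.) */
char *copy_or_grow(char *fixed, size_t fixed_size, const char *src) {
    size_t need = xstrlcpy(fixed, src, fixed_size);
    if (need < fixed_size)
        return fixed;                  /* it fit: no allocation needed */
    char *heap = malloc(need + 1);     /* size known without a second scan */
    if (heap != NULL)
        xstrlcpy(heap, src, need + 1);
    return heap;
}
```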

      • cl0ckt0wer a day ago

        Don't let perfect be the enemy of good

        • Groxx 19 hours ago

          Making changes to ultra-core stuff like glibc is not really a "move fast and break things" area.

      • rendaw a day ago

        Isn't the point that it's harder to use those interfaces wrong than the null-terminator stuff?

      • CamperBob2 a day ago

        Eh, to be fair, if doubles are overflowing past DBL_MAX, things went off the rails in your code quite some time ago, and it doesn't much matter what the exact overflow behavior is.

        A better example would be signed integer overflow, which a conspiracy of spec authors who don't work in the real world and compiler maintainers with a perverse sense of humor have decided means "Anything goes."

  • maccard a day ago

    Actually I think this is exactly the point.

    BSD got them in 1998; it took 17 years for them to get to POSIX and another 8 before they made their way to glibc. 25 years to add a clear improvement.

    • geysersam a day ago

      Guess not everybody thought it was a clear improvement then? It's not like it made everyone adopt BSD instead of Linux. If it were easy to make all the right decisions, someone would have made them and sold it as a product instead.

      • collingreen 21 hours ago

        Even if profit isn't involved, this entire OP argument seems really weird to me. It seems exceptionally entitled and also myopic? Somehow the author wants the good (??), popular (??) open source projects (made of thousands and thousands of opinionated decisions, many aesthetic and not "rational") to decide to cooperate and suddenly share opinions and aesthetics (while simultaneously maintaining the unique opinions and aesthetics that made each project popular in the first place?). The whole thing feels a lot like consumers demanding even more from open source maintainers while continuing to pay nothing.

  • dragochat a day ago

    exactly, you can't standardize on a solution before any good one exists in the first place

    • Kranar 21 hours ago

      Tell that to the C++ Standard Committee. They have no problem standardizing things that don't even exist and may even be impossible to implement.

      • const_cast 17 hours ago

        Most additions to the standard library have been existing implementations found in Boost. But yes, they have a tendency to go a little too theoretical; the C++ standard targets an abstract machine and all that. Modules in particular were a hot mess.

taeric a day ago

I'm not entirely clear why/how this is an open source issue?

My assertion: Inertia of user base is by far the largest predictor of what will stick in a market. If you can create a critical mass of users, then you will get a more uniform set of solutions.

For fun, look into bicycles and how standardized (or, increasingly not) they are. Is there a solid technical reason for multiple ways to make shoes that connect to pedals? Why are there several ways to shift from your handlebars? With the popularity of disk brakes, why don't we have a standard size for pads?

I think there are a lot of things that we tell ourselves won on some sort of technical merit. The more you learn about these things, the less true this seems, though.

Not, btw, that there aren't some things that die due to technical progress.

  • bloppe a day ago

    It's really not an open source issue. It's a more general issue than that. The author provides MacOS and Windows as counterexamples, but that's a silly comparison. Neither Apple nor Microsoft ever coordinated with anybody else on their APIs. Closed-source developers are never as good at coordinating with others as OSS developers are. Sure, they can invest more in their own products, but that's a different issue.

    Also, I'm not sure what kind of standard this author is pining for. We have Wayland and freedesktop.org. Pretty much any Linux app can already run on pretty much any DE

    • mbreese a day ago

      I don’t think of it as a coordination issue. OSS can coordinate well. What it can’t do is make decisions. Most times there are at least two possible implementations for a given task (with different legitimate trade-offs). The OSS approach isn’t to pick the implementation that works for the most people, but rather to support both. Many times the only decision is not to decide.

      The best projects have someone who will make unilateral decisions. They might be right or wrong, but it’s done and decided. Companies with an organizational hierarchy do that much better than a decentralized group of OSS developers.

      • pjc50 a day ago

        The underappreciated benefit of OSS is that you have an escape hatch from coordination if you disagree enough. You can in theory just fork it and run your own version, even if nobody else agrees. But you have to bear the associated development cost yourself.

        Choice looks like fragmentation. But the existence of alternatives is very important.

        • transcriptase a day ago

          Until someone wants to switch from Windows/MacOS to Linux and is presented with two dozen popular distros, half a dozen desktop choices for each with no explanation of what the acronyms mean or why someone should care, immediately gets prompted about things like bootloaders and various partition schemes, until they finally get to a desktop and their mouse is stuck at 0.01% pointer speed regardless of what the mouse options are set to because the maintainer of sdctlxinputmd got mad at the maintainer of some dependency and decided the user can install and configure what lets sdctlxinputmd work with the OS settings panel. Oh except installing those requires configuring new sources and overriding disallowing proprietary drivers because some other maintainer is a purist who added friction because Nvidia doesn’t provide their firmware source code under the PLGPPGL 4.1 Licence.

          • noisy_boy a day ago

            Given that fairly well-made mainstream distros like Ubuntu basically just work nowadays, I think it is a bit rich to shit on Linux with snarky remarks. Nobody is denying that there is a certain lack of cohesiveness, niggling issues that annoy us, but most of the time they are not dealbreakers. There used to be a thousand papercuts, but not anymore; sure, it's not zero. After all, it is made by volunteers for free because they have an itch to scratch, or they like programming, or want to give something back to the world, etc.

            You didn't pay a dime for it, except with your time on things that annoy you, which is indeed not free. But if your time is that precious and you are that exacting, maybe Linux isn't a good fit for you. Maybe you should try paying for the perfectly crafted commercial alternatives. Except in the real world, those alternatives are far from perfect and have tons of issues like spyware, bloat, ads being shoved down the users throat and so on - on top of the papercuts/annoyances that sometimes take years to fix.

            • transcriptase 20 hours ago

              I’ve been using Linux daily since people were saying the same about Mandrake to people complaining about Gentoo requiring part-time job levels of effort to make it capable of basic functionality. Don’t mistake the recognition of very real hurdles to widespread adoption (brought on by fragmentation) with some sort of lack of experience on my part. I love Linux, except for when the simplest little things become nightmarish time-sucks and you find out it has nothing to do with technical complexity but because of some ideological argument a couple of guys had on a mailing list in 1999.

              • noisy_boy 13 hours ago

                > Don’t mistake the recognition of very real hurdles to widespread adoption (brought on by fragmentation) with some sort of lack of experience on my part.

                Everybody recognizes the hurdles. But they are there because of the nature of the ecosystem. Unpaid skilled maintainers are hard to find, and it takes a specific kind of person to maintain a driver or a subsystem for, sometimes, decades. Those personalities sometimes come with strong opinions and prejudices. Does that delay the fixing of end-user issues? Yes. If there were an easy solution, it would have been implemented by now.

                Funny you bring up Mandrake because I paid a good chunk of my salary to buy a box that came with the CD and a small printed book. I have been using Linux through thick and thin everyday since then.

                > I love Linux

                Me too! Let's not be too harsh on our loved one :)

          • bloppe a day ago

            This is funny but it's telling that people often resort to exaggerations to paint Linux as hard to use. Any major distro is a lot easier to set up than it was 15 years ago.

            Sure, most people's eyes glaze over the moment I try to explain the distinction between Linux and GNU, but if you're really that uncurious about how the system works then what's the point of switching to Linux anyway. Windows / Mac already get you most of the way there, especially now that WSL is a thing.

            • graemep 21 hours ago

              > what's the point of switching to Linux anyway. Windows / Mac already get you most of the way there

              What is the point of using Windows or Mac when Linux will get you most of the way there?

              Linux has a lot of advantages for users who are not interested in the technology: a longer hardware upgrade cycle, ease of maintenance and upgrades, more resource-efficient, more secure (partly because it is less targeted as a desktop, but what matters is that if you use Linux you are less likely to have issues), better privacy...

              • const_cast 17 hours ago

                I agree, which is why we have the trope of grandma running Elementary OS. Windows is not actually easy to use or intuitive at all; it's just popular. Which, if you squint hard enough, looks a lot like intuitiveness. But it isn't, actually.

              • bloppe 20 hours ago

                I've been daily-driving Linux for decades. I know how valuable it is. But I'm also "that guy" who likes examining every little piece of my system. I thought we were talking about the normies who just "hear good things" about Linux (especially soft-core developers) but aren't really interested in everything going on under the hood, and are willing to shell out a grand or two every couple years for new hardware and tech support.

                • graemep 17 hours ago

                  I am talking about normies. not even developers. I mean my family, for example. My late dad used Linux for many years, my daughters do, my ex-wife used to (possibly still does). None are developers, although my daughters are somewhat technically inclined and can program (the older one is an engineer).

                  The point I am trying to make is that there are advantages for people who are NOT interested in every detail of the system.

            • mbreese 15 hours ago

              > … paint Linux as hard to use. Any major distro is a lot easier to set up than it was 15 years ago.

              See, I don’t necessarily think Linux is hard to use. Most desktop environments are so similar in style to Windows and Mac that they are pretty intuitive.

              To me, setup is often the hardest part. It can be easy with some hardware. But if you aren’t lucky, getting a system running can be tricky.

              This is one area where having someone who can make decisions is really helpful. Apple can make sure their software works with their exact hardware specs. Microsoft has various compatibility guidelines and enough market share to make sure they are always supported. Linux is fragmented and hardware support lags because of this.

              Then you have smaller hardware vendors who support one obscure distro, but if that’s not what you want to run good luck. I spent last week trying to get Debian running on a Pine 64 laptop and still don’t have working sound.

          • nyrikki a day ago

            Still not an open source specific example.

            1) Due to the stability of the Linux kernel ABI, switching distros is far easier than between Interactive, SCO, SunOS, IRIX, Ultrix, etc...

            There have only really been two major bootloaders in x86 Linux, LILO and GRUB, and the disk partitioning was driven by DOS/Windows and is still simpler than the dozens of differences between the commercial UNIXes, or even add-ons like VxVM.

            I have switched Linux distros at a data-center level several times in my career for various reasons, and it has only ever been constrained by downtime budgets; it is typically faster and easier than a major Windows upgrade, even before modern tools existed.

            Different groups are always going to make different decisions, and those decisions will change over time within even single groups.

            Nvidia drivers are a complete mess, even without licence concerns.

            IMHO, with modern tools, you should let the application drive distro choices if needed, reserving your preferred distro for more generic needs that don't suffer from app vendor coupling.

          • ykclakuf a day ago

            That wasn't really a problem even when I started looking into switching things up a bit over a decade ago. You look at what is popular, you install Ubuntu: it goes on a USB stick, which you insert in the computer -> click next until you're on the desktop. Nothing special.

            I think the really annoying parts were more related to nothing being quite perfect: things tearing, iffy support for mixed resolutions and refresh rates, sometimes the browser not using hw acceleration and fans spinning up. Everything you could live with, but still papercuts to deal with.

          • lelanthran 18 hours ago

            "Show us on the doll where open source software touched you"

          • WD-42 a day ago

            Same old copium from the people that can’t bring themselves to give up their proprietary os

        • Asooka 17 hours ago

          > You can in theory just fork it and run your own version

          That is not feasible for a large portion of users. Even seasoned developers would struggle to maintain a personal fork of anything but the simplest open-source software. Proprietary software also has an escape hatch: it's called "voting with your wallet", i.e. giving your money to someone else who solves your problem better. The set of people who have money is much larger than the set of computer programmers.

          Giving money to software you want to support has proven to be the most reliable way to direct software development. Granted, sometimes you're a Softimage user and Autodesk acquires the company making your software and then kills it to remove competition for their other 3D programs. Those cases are much rarer than the case where you want a consistently supported GUI for your OS and your only option is to write it yourself.

          When you think about it, development of proprietary software is a lot more democratised than open-source software, because average users can direct where development goes by voting with their wallets, or even without their wallets in the case of piracy, which still drives development via second-order network effects. BitTorrent has done more for practical software freedom than GNU ever did.

    • tbrownaw a day ago

      > The author provides as counterexamples MacOS and Windows, but that's a silly comparison. Apple nor Microsoft never coordinated with anybody else on their APIs.

      No, but each of those two systems is ruled by one dictator and has one blessed way to do things. For example in the Windows / .NET world, it's WinForms, er I mean WPF, er I mean...

      > Also, I'm not sure what kind of standard this author is pining for.

      It sounds like a wish for the clarity of a cathedral rather than the chaos of a bazaar.

      • bloppe a day ago

        It's a silly comparison because the author implies that OSS is a monolith that's failing to coordinate amongst its parts, whereas proprietary software firms are individuals that excel at self-coordination but are never expected to coordinate amongst each other.

        Individual OSS projects are often good at self-coordination. You don't need standards if you just reinvent the wheel yourself AKA how proprietary software often works.

    • FinnKuhn a day ago

      In recent years it appears that, at least at Microsoft, different teams working on Windows aren't really coordinating with each other either, resulting in a scattered OS with duplicated implementations of similar features and a dozen different UI styles.

    • jowea 18 hours ago

      Maybe it's a bunch of volunteers vs large hierarchical for-profit organizations? Even though there are some of the latter working on FOSS.

  • skydhash a day ago

    > I'm not entirely clear why/how this is an open source issue?

    I think it's not even an issue. Most open source projects are implementations (maybe flawed), and few are new propositions. If something was not fully defined in the standard/protocol/interface, an implementation may come up with its own extensions that are incompatible with others. But that's how you choose implementations: you're supposed to commit to one and not use a mismatch.

    So if you're using GNOME, try not to depend on something that depends on having the whole KDE installed. If you're using OpenRC, anything systemd is prohibited. All projects are consistent within themselves; the only things you need to do are avoid conflicts from having two things doing the same job and follow the installed system's guidelines.

    I don't mind fragmentation. What I mind is dependency due to laziness rather than real need: using a couple of glibc-specific libraries for no real reason just because you're working on Debian, or taking the time to support Windows and macOS but tying yourself to glibc on Linux because reasons.

    • graemep 21 hours ago

      > So if you're using GNOME, try not to depend on something that depends on having the whole KDE installed.

      I disagree with this bit. When I used a Gtk desktop I still used KDE applications, so I had lots of KDE libraries installed. I use KDE now and I still have lots of Gtk applications installed.

      It's slightly wasteful of storage and memory, but still a lot less resource-hungry than Windows.

  • em3rgent0rdr a day ago

    > not entirely clear why/how this is an open source issue?

    Was going to say this too, because competing proprietary software companies generally don't coordinate either. Macs don't easily run Windows programs and vice versa. Unless an alliance is formed or some agreement is made to adhere to a standards body, the collaboration issue is part of both worlds.

  • AcaciaSkier a day ago

    > Is there a solid technical reason for multiple ways to make shoes that connect to pedals?

    Yes!

    - Toe clips give you more power than just flat pedals. You can use regular shoes with them.

    - Straps (similar to toe clips) can be used with regular shoes/trainers and allow you to control the pedal stroke both up and down. Fixed-gear riders use them a lot.

    - MTB style cleats are easier to unclip and re-clip than Road style cleats as you are more likely to need to clip / unclip quickly.

    - Road style cleats provide better power transfer, the shoes are far stiffer as well.

    > Why are there several ways to shift from your handlebars?

    Two reasons. The first is that the technology has improved over time. I have ridden old racing bikes where the shifter was not on the handlebar but on the downtube. You had to feel for the position of the next gear while steering with the other hand; it is difficult to get used to for people who haven't ridden a road bike before.

    Secondly, different types of bikes put the rider in different positions, and thus their hands will be in a different place. Hence the different shifter types.

    The moral of the story is that different requirements require different solutions.

    BTW almost everything else around the pedal and the shifter is standardised. Hubs are normally one of a few sizes, wheels come in a few standard sizes, bottom brackets are almost all the same size, headsets have a few standard sizes. I have a mountain bike from 1995 that I've put a brand-new stem on because headsets are the same size as they were in 1995.

  • Bouncingsoul1 a day ago

    I'm not sure which point you are trying to make with the bikes. For road racing the UCI quite famously sets quite strict standards. For "normal" use, if you live within the US or EU you will also have some standards (mostly concerning road safety). Of course you may cherry-pick some exceptions, but IMO this doesn't drive the point home.

    • prmoustache a day ago

      The UCI only has power over formats [1] used in races. It definitely influences the market, but it has no say on which width of handlebar I can use on my own bike as long as I don't pin a number on in a UCI-sanctioned race.

      [1] I prefer using that word because most aren't really standardised.

      • PaulDavisThe1st a day ago

        And also: not sure what the ratios are today, but 8 years ago an order of magnitude more people raced bicycles in triathlons, where the UCI has no control, than in UCI-sanctioned races.

  • FridgeSeal a day ago

    > For fun, look into bicycles and how standardized (or, increasingly not) they are.

    Because triangles are a fantastic, high-strength shape that suits the loads a bicycle is subject to. For the vast majority of cases it's a very solid choice. We deviate when specific UX requirements demand it (e.g. city bikes having a battery and a low stepover to suit a variety of clothing, with the motor making up for the additional weight).

    > Is there a solid technical reason for multiple ways to make shoes that connect to pedals?

    All of them attempt to address the same requirement, and make different tradeoffs depending on use-case and environment.

    > Why are there several ways to shift from your handlebars?

    Price points, performance reqs, handlebar setup, environmental (is it expected to be subject to mud rocks and trees constantly?) and weight.

    > With the popularity of disk brakes, why don't we have a standard size for pads?

    Same as for shifters: the braking compound and design for a race road bike will be really different to what a DH race bike requires.

    • prmoustache a day ago

      > All of them attempt to address the same requirement, and make different tradeoffs depending on use-case and environment.

      All road shoe/pedal interfaces have the same requirements. All XC ones share the same requirements; all DH ones, the same.

      > Same as for shifters: the braking compound and design for a race road bike will be really different to what a DH race bike requires.

      Regardless of whether we are talking shifters, brakes, or pedal interfaces, within a specific line (road, gravel, XC, DH) the requirements stay the same, and more importantly those formats/shapes/interfaces almost never vary across price points:

      All MTB shifters from a given manufacturer for a given number of gears are usually interchangeable: a Deore shifter can operate an XTR derailleur. The same SPD cleats work across all MTB pedals from Shimano. The same brake pad shape is used across all lines for a given number of pistons. More importantly, pads and calipers are usually interchangeable between road and MTB for a given manufacturer. Compound, requirements, and price point have little to do with it, as manufacturers release pads with different compounds but the same shape.

      What keeps all these formats from being standards is that every manufacturer wants to have its own, for two reasons: 1) it thinks it knows better; 2) it aims to capture a market and become a monopoly (e.g. through cleat formats).

      Only rarely do they talk to each other, or release a standard without asking for royalties. Same as proprietary software vendors.

      The open source fragmentation only really comes from reason 1.

    • ClumsyPilot a day ago

      > > With the popularity of disk brakes, why don't we have a standard size for pads? Same as for shifters: the braking compound and design for a race road bike will be really different to what a DH race bike requires

      I really don't understand how this mentality survives.

      For the past 100 years companies have been working to get an unfair advantage over each other by creating user lock-in, patent-trolling each other, putting DRM in games, changing their designs to break compatibility with generic products, etc.

      Surely you must realise that many motivations for product differentiation have no bearing on user benefit, or we would never have region-locking on DVDs or proprietary media formats.

      The bicycle market is not a healthy competitive market. Shimano makes almost all the gears for all bikes in Europe. For the price of an electric cargo bike that goes 15 mph and has a 0.6 kWh battery, I can buy an electric motorbike that goes 70 mph and has a 6 kWh battery.

      They are both about £4,000

  • pixl97 a day ago

    >why don't we have a standard size for pads?

    Because each manufacturer can't put a premium on their pads that way.

  • thewebguyd a day ago

    There's still agreement on what's beneficial, even if the how isn't standardized. To use your bicycle analogy: it's widely accepted that clipping into pedals is beneficial, as we've found it improves power transfer and efficiency. The idea of clipping in is standardized, but the implementation (cleat design) is still open for interpretation.

    We see this in open source too - we can coordinate and all agree on a core idea or problem that needs solving, but still end up with different, competing implementations. It's not a bad thing, choice is good and often leads to innovation as different approaches compete and evolve.

    I do think you are right on your first point - that the inertia of the user base is the predictor of what will stick. Even Linux stuck around due to licensing and availability (and then user inertia from that) rather than any technical superiority.

  • ClumsyPilot a day ago

    > Inertia of user base is by far the largest predictor of what will stick in a market

    Why are you calling open source a market?

    These appear to be feuds and battles of ideas fought between contributors, with minimal input from end users. There is no price signal at all.

    Re. bicycles: it's not a healthy market. Shimano dominates with a 70% share in gears and brakes. The top 3 manufacturers have what, 95%+? Also look at how cargo bikes cost 3x what a normal bike does, but have the same components.

    With that structure, users have zero input on the size of brake pads.

  • dkkergoog a day ago

    All those things have a price point.

  • skywhopper a day ago

    It’s not at all. As a counter-example, in the 00s, open source rapidly coalesced to using git, which has been so incredibly successful that it has now been adopted for many private corporate proprietary software repositories as well.

    I don’t think you can take a general lesson from any particular example here. Coordination of complex systems among thousands of competing and cooperative components is very hard and unpredictable, and why things happen depends on random events and personalities in ways that are not generalizable.

antonok a day ago

Open source has the best kind of coordination. If there's a real use-case for two things to work together, you or someone else can implement it and share it without anyone's permission. Meanwhile in proprietary land, people sometimes build things that nobody wanted, and also leave out features with high demand. Proprietary optimizes for the profit of some individuals; open source optimizes for maximum utility.

Thus far, open source has optimized for maximum utility for individuals who can write code... but AI may be changing that soon enough.

  • illiac786 a day ago

    I am a fan of open source, but it’s definitely not for the coordination part.

    Proprietary, money-driven development is top-down and generally has coordination. In very large software it starts failing sometimes (I'm looking at you, Oracle).

    Open source handles conflict by forking. I wouldn’t call that good coordination.

    But, at the same time, I don’t see a better (or less worse) solution so I shut up and I take their code =)

    • latchup a day ago

      > Open source handles conflict by forking. I wouldn’t call that good coordination.

      Forking is far from the first step in conflict resolution; it is the ultima ratio between projects in the open-source world, when all dialogue breaks down. In other words, the worst outcome is that people agree to disagree and go their separate ways, which is arguably as good a solution as is possible.

      In the corporate world, coordination mostly exists within companies through top-down decision-making, as you said. Between them, however, things look much grimmer. Legal action is often taken lightly, and more often than not, a core business goal is to not just dominate, but to annihilate the competition by driving them out of business.

      Coordination between corporations, such as through consortia, is only ever found if everyone involved stands to profit significantly and risks are low. And ironically, when it does happen, it often takes the form of shared development of an open-source project, to eliminate the perceived risk of being shafted.

      • aleph_minus_one a day ago

        > Forking is far from the first step in conflict resolution; it is the ultima ratio between projects in the open-source world, when all dialogue breaks down.

        You also do a fork if you simply want to try out some rather experimental changes. In the end, this fork can get merged into the mainstream version, stay independent, or become abandoned. People wanting to try out new things has barely anything to do with all dialogue breaking down.

        • baobun 2 hours ago

          You may also fork from having different goals, or different ideas about some mutually incompatible requirements, without any communication or coordination issues. Friendly forks happen all the time.

PaulHoule a day ago

Look at Microsoft Windows. You can still run Access '97 because all the DLLs with the GUI widgets from Win '95 are still there. Until 2012 or so, Microsoft would regularly come out with a whole new widget set every few years. You can see this most notably in the settings UI for Windows where there are some screens in the "modern" UI and plenty of dialogs that still come from the '95/NT 4 era.

Since then Microsoft has had no real answer for "how do I write desktop applications for Windows?" other than "use Electron".

(If they were still introducing new widget sets they'd be converting the 'modern' dialogs to something 'postmodern' while still having Win '95 dialogs in there)

  • Kranar 20 hours ago

    I decided to verify this and came up short. I went through the installation procedure which did work but required changing some workgroup settings and running the application as administrator... not ideal but understandable. However, when I tried to run it, no good. I get an error about missing files. I then went through the process of finding some of these missing files and installing them but ultimately ran up against the final fatal error that will not allow me to proceed.

    Access 97 depends on some Internet Explorer components and Microsoft has made it all but impossible to install Internet Explorer on the most recent Windows 10 and Window 11.

    Apart from that I also tried getting Minesweeper and SkiFree to work on Windows 10 and Windows 10 just straight up refuses to run them with the message "This app can't run on your PC."

    • PaulHoule 16 hours ago

      I installed Office ‘97 for work I think two summers ago. We had an old Access database that I had to import into a new Postgres-based system, and it installed just fine on Windows 10. It was hilarious that Clippy still tries to take over the desktop, but it looks a little funny because borderless windows don't work quite the same as they used to.

  • hgs3 9 hours ago

    > Since then Microsoft has had no real answer for "how do I write desktop applications for Windows?" other than "use Electron".

    Microsoft has been pushing WinUI the past few years, with WinUI 3 being the latest recommended UI toolkit [1]. I think what's interesting about WinUI 3 is it's not built into Windows - you have to ship the whole toolkit with your app, just as you would GTK or Qt on Windows. I find that a perplexing direction for a "native" toolkit.

    [1] https://learn.microsoft.com/en-us/windows/apps/winui/winui3/

  • kragen 18 hours ago

    You can run Access '97 on Linux under WINE. I'm not sure you can install it on current Microsoft Windows, even if it would hypothetically run.

pif a day ago

This post displays the same ignorance as that joke about Kissinger complaining that Europe didn't have a phone number.

Open source is a movement: it's neither an individual nor a committee. And people join it because it is a movement with no central authority.

scrapheap a day ago

Alternative view - Open Source projects have the freedom to do what they want to do. Which in turn gives me the freedom to choose how I set up my desktop environment. How many changes have been pushed on Windows users over the last 15 years with the only option being to change and get security updates or stay on an old insecure version?

And while there are lots of desktop environments for Linux you can usually run applications targeting one in any of them (I use Gnome's file manager in Enlightenment as it supports accessing CIFS shares directly).

  • reactcore a day ago

    Agreed. I have been thinking about this since the announcement of iOS and macOS 26. I really dislike the UI changes made and will not upgrade until my devices become obsolete. On my openSUSE PC, my desktop still looks almost the same as it did in 2006, which I love so much.

tbrownaw a day ago

> The underlying force there is the absence of one unified baseline set of APIs for writing desktop programs.

It's called the Common Desktop Environment.

  • skydhash a day ago

    Most desktop programs don't need to rely on a DE (apart from some utilities). If Emacs can run anywhere, your programs can too. GTK or Qt is more than enough. For anything else, you go with components on an as-needed basis, and they should preferably be desktop-independent.

    • hackyhacky a day ago

      > If Emacs can run anywhere

      Any desktop program needs to be programmed against some API. In the case of Emacs, it's probably raw Xlib or a wrapper library on top of it.

      The problems with that are (a) the dependency on X11, which is obsolete and has many documented inadequacies, (b) the lack of a modern widget library and toolkit, which makes extra, unnecessary work for the programmer, and (c) the lack of a cohesive visual language between programs, which makes the experience worse for the user.

      Toolkits like GTK and Qt solve all these problems. By avoiding them, you're just reinventing the wheel, poorly, every time.

      • skydhash a day ago

        Emacs has a GTK3 layer (among others) for its UI.

      • dragandj a day ago

        Emacs was there way before GTK and Qt appeared, though.

        • hackyhacky a day ago

          > Emacs was there way before GTK and Qt appeared, though.

          So your point is that we should use older technology even when it's been surpassed by better alternatives?

          • Narishma a day ago

            Yes if the better alternatives are worse.

dcreater a day ago

Solving the coordination problem in FOSS is one of the grand challenges of humanity. If we solve it, I think it will effect a tectonic shift with far-reaching implications and fix major socioeconomic problems like wealth concentration. E.g. a FOSS alternative to Visa, and of course to Windows/MS Office.

cadamsdotcom a day ago

To me it's about how low-bandwidth communication channels limit collaboration.

Linux and FOSS grew up (congrats!) and the important work got super big and complex.

Unfortunately, online communication - typically the channel preferred by FOSS projects - has much lower bandwidth than teams working full time in an office. Thus limiting the "depth" of collaboration.

It's not all bad. FOSS does have some profound successes to be proud of. For small and well-defined projects "benevolent dictator for life" works! For anything one person can lead - a desktop tool or a library - FOSS produces really good outcomes. But above, say, a package manager or a distro, things get wonky.

Again it's not all bad. FOSS is rolling up its sleeves. Folks are organically doing the "go down every avenue people are motivated to go down" thing. You could call it Darwinism. But many motivated communities lack resources to go far enough to reach value. Motivation stalls during projects (we're human!), and FOSS rarely comes with a motivation-boosting paycheck. Plenty of valiant efforts don't reach value and it's never malicious. It's OK!

So is there a way to better concentrate efforts?

If the paths are long, it follows that the community should tackle fewer paths. The path(s) taken should be well defined, charted in advance as much as possible, and not uncovered bit by bit - or the work will take decades.

Growing an entire ecosystem around one path forward (or a few) requires alignment. Can enough trust be fostered in leaders to get people to work on a shared vision?

A vision of what Linux on the desktop should/could converge to is the kind of problem that, if Linux were a company, would be bet-the-company strategic. A company can't afford to go down two paths. So it might lock its smartest people in a room to hash out one true strategy. Or have one smart person dictate one vision and align everyone on it.

Can that be done for FOSS?

In the bounds of a single project it has been proven that it can. But what about an entire ecosystem?

  • nottorp a day ago

    > Unfortunately, online communication - typically the channel preferred by FOSS projects - has much lower bandwidth than teams working full time in an office.

    I wonder if that's why open source projects get so much done and at such a high quality with so few people.

    Instead of 75% "communication" and 25% work, 90% of the time donated to FOSS is actual work :)

  • candiddevmike a day ago

    > Unfortunately, online communication - typically the channel preferred by FOSS projects - has much lower bandwidth than teams working full time in an office. Thus limiting the "depth" of collaboration.

    Source on this? There are tons of collaborative, 100% remote companies out there (and they release open source software). I think your assertion may be more that folks aren't as dedicated to open source, since contributing is a part-time or hobby thing.

    • mnahkies a day ago

      I often submit minor bug fixes or features to fairly popular projects, and as an outside contributor the communication can be very async. It's typically limited to GitHub PR/issue discussions, and sometimes the latency is measured in weeks/months.

      I think it's probably quite different if you're a "core contributor", likely using additional channels like Slack and scheduled meetings, more akin to how a company operates.

fergie a day ago

> There was a decade of opportunity for OSS to coordinate around an IDE protocol, but that didn’t happen, because OSS is bad at coordination.

It's also because a lot of the key people in Open Source, and senior hackers generally, don't actually use IDEs.

We should encourage more of the younger generation over to powerful configurable editors such as Emacs, rather than locking everybody into VSCode/JetBrains/etc.

  • tasuki a day ago

    > We should encourage more of the younger generation over to powerful configurable editors such as Emacs, rather than locking everybody into VSCode/JetBrains/etc.

    Isn't that precisely what LSP facilitates? I was using IDEs for like four years. Now I'm back to (neo)vim and couldn't be happier! There is no substitute for "jump to definition".
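
    For concreteness, here is a minimal sketch of what "jump to definition" looks like on the wire under LSP: each message is a JSON-RPC payload prefixed with a Content-Length header, and the request method is textDocument/definition. The file URI and cursor position below are made up for illustration.

```python
import json


def lsp_frame(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the Content-Length header LSP requires."""
    body = json.dumps(payload).encode("utf-8")
    header = "Content-Length: {}\r\n\r\n".format(len(body)).encode("ascii")
    return header + body


# A "jump to definition" request: a file URI plus a zero-based cursor position
# (both hypothetical here, for illustration only).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///home/user/project/main.c"},
        "position": {"line": 41, "character": 8},
    },
}

message = lsp_frame(request)
```

    Any editor that can speak this framing gets definition lookup from any conforming language server, which is exactly the editor/IDE decoupling being discussed.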

ozim 20 hours ago

It is a feature not a bug.

What author describes is about dominance.

Most OSS is fragmented because of different ideas and different people.

People nag about .NET not having much OSS outside Microsoft because .NET devs (I am one of them) will not use stuff that doesn't have the MSFT badge.

You don't want such power in general OSS. The lack of coordination is a sign of no dominant entity, and that is the feature.

bronlund a day ago

This is kind of the same reason I gave up on the Linux desktop and went for macOS. When I first learned about Linux I was thinking "Sweet! This is going to kick Microsoft's ass!", but that was 30 years ago, and instead of one kickass desktop OS, we got 1000 mediocre ones.

  • happymellon a day ago

    > looks at Windows Vista, 8, 10, 11

    I don't think it's the mediocre interface that's holding Linux back...

    Whether it's the abomination that's Windows 11, having to fight against an ad-ridden interface in 10, or otherwise. Teams hasn't dominated because of a coherent interface, or even because anyone actually wants to use it.

    Besides, you say 1000 desktops, but there are really only 2 (well, 1, since Gnome is the primary interface for the big 3 distros), along with a couple of hobby ones that you have to seek out to even learn they exist, and a lot of toys that no one outside HN has even heard of.

  • mixmastamyk a day ago

    I’d say Mint Cinnamon is better than mediocre.

    Further, try Little Snitch on macOS to see how completely out of control it is. Since they implemented the sealed system volume it is also a lot harder to configure permanently, so fixing it is less feasible.

dTal 2 hours ago

@dang

Why editorialize a question mark into titles like this? What does it signify? We are not talking about a factual assertion of debatable veracity, but a statement of opinion along the lines of "considered harmful". Disagreement and debate are the obvious expected result of such a statement. HN admin using special mod powers to editorialize a title which is neither misleading nor clickbait, simply to indicate skepticism, comes across as petty and not a little hypocritical in light of the "don't editorialize titles" site guideline.

TechPlasma a day ago

This feels very right. The problem is that few entities are invested enough in Linux as a consumer platform to have the motivation to push things forward and to make the decisions about what their "Reference" system is.

Valve is maybe the closest?

  • hackyhacky a day ago

    Ubuntu, for all their faults, were the first to make Linux really easy to install and made it "just work." That counts for a lot. Since then, their output has been disappointing.

    Part of the problem is that "Linux/Unix culture" is very averse to coordination. When someone does try to establish a common baseline, there is inevitable pushback. The classic example is systemd, which fills a desperately needed hole in the Linux ecosystem, but is to this day criticized for being antithetical to the ethos of, I guess, gluing together an operating system with chewing gum and bits of string. The fact is that many users would rather have a pile of software that can be hand-assembled into an OS than an actual cohesive, consistent platform.

    So I can't blame people too much for not trying to establish standards. If OSS had created LSP, there would be 20 different incompatible variations, and they would insist "We like it this way."

    EDIT: averse, not adverse

    • linguae a day ago

      There is another factor at play: different users value different things. For example, some people don't like systemd, not because they are enamored with classic startup scripts, but because they take issue with systemd's design. It's not that they dislike coherent, consistent platforms: they just disagree with the design decisions of that particular platform. For example, I like the classic Mac OS and Jobs-era Mac OS X, but I don't like GNOME. All of these are coherent platforms, but they have different philosophies.

      The difference between open source software and proprietary software is that if users don't like the changes made to proprietary software, their choices are limited to the following:

      1. Dealing with the changes even though they don't like it.

      2. Sticking to an older version of the software before the changes took place (which can be difficult due to needing to deal with a changing environment and thus is only delaying the inevitable).

      3. Switching to an alternative product, if available.

      4. Writing an alternative product (which can be a massive undertaking).

      Open source software provides additional options:

      5. Fork the older version of the software. If enough people maintain this fork, then this becomes a viable alternative to the changed software.

      6. Use the new version of the software, but modify it to one's liking.

      This is the blessing and the curse of open source software; we have the power to make our own environments, but some software is quite labor-intensive to write, and we need to rely on other people's libraries, systems, and tools to avoid reinventing wheels, but sometimes those dependencies change in ways that we disagree with.

      I think the best way to mitigate this is making software easier to develop and more modular, though inevitably there are always going to be disagreements when using dependencies that we don't directly control.

    • clipsy a day ago

      > The classic example is systemd, which fills a desperately needed hole in the Linux ecosystem

      If the hole is desperately needed, why would you want to fill it?

      • hackyhacky a day ago

        > If the hole is desperately needed, why would you want to fill it?

        Good point. Let me rephrase: "Systemd fills a hole in the Linux ecosystem, which desperately needs to be filled." This version of the sentence is more correct and conveniently functions as a double entendre.

        • j16sdiz a day ago

          systemd killed many projects and use cases along its way.

          Better integration for the mainstream, sure. But at the end we have less choice.

          • hackyhacky a day ago

            > but at the end we have less choice.

            This is exactly my point: you want "diverse choices", which is fundamentally at odds with "cohesive functionality."

            The article is about LSP, an imperfect standard, but nevertheless a standard. The prioritization of "choice" above all else is why the OSS world is incapable of creating standards.

            > systemd killed many projects

            The purpose of software is to fulfill a need. Creation of software projects is simply a side-effect of that process. It's good that systemd killed many projects, because those people who had worked on those projects can now work on a problem that hasn't already been solved.

          • shadowgovt a day ago

            Sometimes the end user actually suffers from too much choice.

            Choice implies complexity, and there are some places where less complexity is quite desirable. I still periodically, when setting up a new Linux machine, have to figure out why the audio frameworks are fighting, for example. The fact that "frameworks" is plural there makes everything harder for me, the end user.

            (I frequently compare Python and Node environment management here. Python standardized the protocol for setting up an environment. Wise, and better than nothing, but now I have to care whether something is using conda or poetry or one of several other options I don't know. Node has npm. If there's a package, it's in npm. To set up a Node service, use npm. One thing to know, one thing to get good at, one thing to use. Environment management with Node is much easier than in Python.)

    • Joel_Mckay a day ago

      Indeed, "good" doesn't matter if the OS is a pain to use.

      The driver support issues are essentially a theological war between FOSS ideals and mystery OEM binaries.

      Most of the linux kernel code is still the driver modules, and board support packages.

      The desktop options have always been a mess of forks and bodged applets to make it useful.

      Ubuntu balances the purity of Debian with practical user experience (we could all write a book about UEFI shenanigans). Red Hat focuses more on hardened server use-cases.

      Is it worse than Win11? Depends what you are doing, and what people consider the low bar for messing with users. =3

  • keyringlight a day ago

    Valve doesn't seem interested in doing much on the desktop; they seem to have constrained themselves (probably wisely) to working on areas directly related to their business and products.

  • charcircuit a day ago

    Google is the closest, with Android. They were even able to get Adobe to port Photoshop, which other Linux operating systems have failed to achieve.

    Despite Android's success the rest of the consumer Linux distributions chose to ignore it and continue on with what they were already doing. Trying to have them coordinate around what is succeeding is seemingly impossible.

    • TechPlasma a day ago

      Android is the most pervasive, yes, but I would consider it too focused on a specific type of platform, and one that is becoming more and more closed off. ChromeOS might be a better example, but much like Android, it is also very closed off from the rest of the ecosystem.

    • pxc 21 hours ago

      Android doesn't succeed at what free software developers (or users) care about. The present-day reality of Android is a huge, depressing disappointment to anyone who gives a shit about software freedom.

      It's a TiVo-ized spyware delivery platform, absolute riddled with (often non-removeable, often installed by entities other than the user) badware.

      Android is an abject, dismal failure when it comes to very basic things like empowering users.

      • charcircuit 17 hours ago

        >Android doesn't succeed at what free software developers (or users) care about.

        I disagree, unless you mean that they care about having their OS copy how UNIX worked 50 years ago.

        >It's a TiVo-ized spyware delivery platform

        Boiling things down to a pile of buzzwords is not productive especially when they aren't accurate.

        "TiVo-ized": Android fully supports a user unlockable bootloader. Such a term doesn't even refer to the operating system, but to the device / bootloader, so it doesn't make sense to describe Android like that.

        "spyware delivery": I assume this means that it includes a package manager that can install apps automatically. Several other Linux operating systems support that too. That isn't unique.

        >absolute riddled with (often non-removeable, often installed by entities other than the user) badware

        It is up to the vendor to pick what software they bundle with the OS. It's not inherent that "badware" has to be bundled.

        Your criticisms of Android are not even with the operating system itself, but with downstream versions of it.

    • hackyhacky a day ago

      > Despite Android's success the rest of the consumer Linux distributions chose to ignore it and continue on with what they were already doing. Trying to have them coordinate around what is succeeding is seemingly impossible.

      I'm not sure I understand you here. What do you think other Linux distros should have done?

      • dontlaugh a day ago

        Long before Android existed, they could’ve all agreed to have the same single desktop environment, UI toolkit, app packaging and distribution method, etc. And also agreed to ship drivers, even if proprietary.

      • charcircuit a day ago

        >What do you think other Linux distros should have done?

        Collectively contributing to getting AOSP running on desktops, and then also working on backwards compatibility to be able to package their preexisting apps into Android apps. This would allow for there to be a common app platform for developers to target Linux with.

        • hackyhacky a day ago

          > Collectively contributing to getting AOSP running on desktops, and then also working on backwards compatibility to be able to package their preexisting apps into Android apps. This would allow for there to be a common app platform for developers to target Linux with.

          As a common target, AOSP isn't a very good one.

          AOSP ran on desktops. (Maybe it still does, haven't tried it in a while.) It was still a mobile OS, though, so it wasn't good on the desktop, but it ran.

          It also uses very old kernels.

          Other than the kernel, the Android UI is completely different from conventional Linux. Any Gnome or Qt app would have to be completely rewritten to support it, and would probably have to run in the JVM.

          Basically, if the Linux community followed your plan, they would have to commit a huge effort to port everything to what is essentially a completely different, incompatible OS in every respect except the kernel, and their reward would be to live in subservience to the whims of Google in supporting their product which Google themselves never had enough faith in to make it a proper desktop OS. It seems that the benefit does not justify the investment.

          • charcircuit a day ago

            >so it wasn't good on the desktop, but it ran.

            Which is why it would benefit from people who are trying to optimize it, and extend it to offer a good desktop experience.

            >It also uses very old kernels.

            It's based on the latest LTS release of the kernel.

            >Any Gnome or Qt app would have to be completely rewritten to support it

            Which is why my comment said that distros would work on backwards compatibility, to avoid the expensive work of a complete rewrite and try to make it as seamless as possible.

            >and would probably have to run in the JVM

            Android does not use the JVM. It has ART, the Android Runtime, but you can still use native code.

            >and their reward would be to live in subservience to the whims of Google in supporting their product which Google themselves never had enough faith in to make it a proper desktop OS

            The benefit is being able to reap the fruits of the billions of dollars Google is investing into the OS, along with compatibility with a large number of apps. As a bonus, staple Linux applications might become installable on some of the billion existing Android devices today. Google may not have seen the benefit of supporting the desktop, but that's where smaller players can come in to play a role, focusing on more niche markets where there is less possible return.

            • skydhash a day ago

              I don't think Android is a good platform for desktop usage. First, the windowing system and the IPC mechanism are very limited. And one of the nice aspects of desktop computing is the ability to alter it for your own purposes (something that MacOS is running away from), meaning you extend it for some other domain, think music production, video production,... where you want to hook it up to some hardware and have the software talk directly to the latter. Which means having access to all the ports and coding bespoke protocols. I don't think the current Android API allows for that.

              • charcircuit a day ago

                >They are very limited.

                Sure the windowing is limited, but it could be extended. I disagree that the IPC is limited though.

                >Which means having access to all the ports and coding bespoke protocols. I don't think current android API allows for that.

                It's still all open source. The distros could add new APIs to expose new capabilities.

                • skydhash a day ago

                  > The distros could add new APIs to expose new capabilities.

                  Those exist already. With Debian, Alpine, Fedora,... you can put anything on top of the kernel in the userland. Android goes with a restricted version.

                  It's the same with MacOS. It's Unix, but with proprietary add-ons and systems. And lately, with more restrictions.

                  • charcircuit a day ago

                    How does the existence of those make Android not a good platform? I don't fully understand the point you are trying to make.

                    By restrictions do you mean having proper capability based security instead of letting all apps have access to everything? These restrictions are a good thing.

                    • skydhash a day ago

                      Limited userland and limited access to it. Sandboxing may be good, but the user may need some software that needs to escape it. Fedora Silverblue is a promising direction, but interacting with the base system is currently a pain.

        • o11c a day ago

          90% of the problems Linux has had in the last 10 years are due to trying to unify desktop UI with mobile. This is fundamentally a mistake and it is critical to avoid it.

          • hackyhacky a day ago

            > 90% of the problems Linux has had in the last 10 years are due to trying to unify desktop UI with mobile.

            To be fair, Apple and Microsoft have also tried and failed to unify desktop UI with mobile.

            • o11c a day ago

              Yeah, but they have enough other failures that this one alone can't reach 90%.

              Linux has had other major dramas but not failures.

        • happymellon a day ago

          A lot of effort went into Android x86.

          As far as I know Google has never accepted patches into Android, so everything has to be maintained outside the project in parallel, which helped kill it.

          Google is not your friend, and they will not work with you. Android has diverged several times, and they break everyone else without caring.

nyrikki a day ago

> I use NixOS. And NixOS isn’t a problem — it’s a solution.

> The past ten years saw a big shift in how we are writing software: baseline level of “interactive static analysis"

While I am making no judgement on what is 'better', the author's choices have impacts, and sometimes not all projects can work within the costs imposed by static analysis.

For example, remember that Rice's theorem generalizes the halting problem, and that static analysis always has to under- or over-approximate, either introducing over-constraints or missing things.

It is horses for courses: sometimes the added friction is worth it, other times it is damaging.

The question of whether Nix is a problem is context-specific.

Be careful about making default assumptions and patterns more than what they are, no matter how sensible they are as a default.

mongol a day ago

> But there was no one to coordinate Linux on desktop.

Freedesktop.org?

  • wmf a day ago

    I get the impression that some people don't want to participate in FreeDesktop. Maybe they see it as bloated or too aligned with GNOME.

  • dwheeler a day ago

    Yes. GNOME and KDE at least coordinate via freedesktop.org. At least they did!

  • j16sdiz a day ago

    Those are for DEs.

    For end-user application, it's a mess. You can't compile once and run everywhere.

    LSB (Linux Standard Base) tried to do that and failed.

    Flatpak/Snap/AppImage all tried to do that, and each has its own set of problems.

    • const_cast 17 hours ago

      > For end-user application, it's a mess. You can't compile once and run everywhere.

      Yes you can, you said it yourself: flatpak, snap, and appimage.

      Okay okay, those technologies have problems. But all technologies do. That's why Microsoft has half a dozen GUI frameworks and then they build their own in-house applications in Electron :P

    • LtWorf a day ago

      Can you compile on Windows 11 and run it on Windows XP? Yeah, didn't think so.

      • pjc50 a day ago

        Unless you choose to use APIs that aren't in Windows XP, then this isn't a problem. Win32 backwards compatibility is very impressive.

        Building a Win16 application would be more difficult.

        But this isn't directly comparable. The issue with Linux is getting a precompiled binary to work properly across all currently up to date distributions.

      • int_19h a day ago

        You can do this even with the official Microsoft SDKs (just need to have the XP targeting pack). And then there are numerous third party development tools that allow this.

        • LtWorf a day ago

          You can do this on Linux too: make a chroot, done.

          • freeone3000 a day ago

            Windows doesn’t need a chroot and a separate install base. You can build one binary for Longhorn, using the Windows 11 SDK, and it will run on everything from NT4 to 11 without modification.

            • LtWorf 15 hours ago

              *assuming you wrote it in such a way to be able to do it

          • int_19h 15 hours ago

            Yeah, and then hope that the tools that you need to build even run on some ancient glibc version.

            • LtWorf 15 hours ago

              Stuff like innosetup?

  • freeone3000 a day ago

    I’m still mad they stole the scriptable, composable DCOP and put it all behind a binary format.

    • LtWorf a day ago

      whatever happened with kdbus btw?

  • hackernoops a day ago

    [flagged]

    • bilkow a day ago

      I got curious and found a ton of red flags in about half an hour...

      Summary of the drama that resulted in Xlibre:

      - https://discuss.pixls.us/t/weekly-recap-8-june-2025/50638

      From the README at https://github.com/X11Libre/xserver:

      > This is an independent project, not at all affiliated with BigTech or any of their subsidiaries or tax evasion tools, nor any political activists groups, state actors, etc. It's explicitly free of any "DEI" or similar discriminatory policies. Anybody who's treating others nicely is welcomed.

      This exchange between him and Torvalds in 2021:

      - https://lkml.org/lkml/2021/6/10/903

      - https://lkml.org/lkml/2021/6/10/957

      • shadowgovt a day ago

        Looking at the history of that doc, I was most annoyed not about what was added but what was removed: the random anti-DEI stuff was in place of the part that described what an X server is.

        Like, my guy. If your goal is to provide a better solution than the existing one, telling people what you're making is step 1. No, the user can't be assumed to already know. You've already made your first mistake if you've made that assumption.

        • mariusor a day ago

          I'm pretty sure that's not addressed to any potential "users", unless you're thinking that the future developers of the project are them.

          • shadowgovt a day ago

            The changes were in the README. That file is addressed to everyone who will have reason to touch the software: developers, users, future archivists trying to figure out what this directory is for.

bobajeff a day ago

Maybe open source doesn't need to coordinate. Perhaps users and developers should demand standards and interoperability from their platforms. Perhaps that's why we have things like Electron, Unreal Engine and Unity. One way or another we'll coordinate on something.

a-dub a day ago

idk. i don't really follow the argument. large projects in open source coordinate internally and engage externally when they need to; i suspect that isn't all that different from what you'd see in a large megacompany like apple or microsoft.

open source people create reusable interfaces. i'd argue they go one step further and create open and public internet communities with standards, practices and distribution/release channels.

notepad0x90 19 hours ago

I wonder if the author has seen codebases for behemoth enterprise apps, designed by committee with low developer retention.

In contrast, open source software led by the same handful of people (typically just one guy) over years/decades is well coordinated by the BDFL(s) and has a clear direction.

I think the term the author is looking for is "opinionated". The mantra is "if you don't like it, fork it". The apparent lack of coordination is a feature of open source, not a bug.

Successful and popular projects rarely seem uncoordinated. The Linux kernel coordinates thousands of devs over mailing lists. Git was created to facilitate that, and now everyone uses it to coordinate development. I would even dare say modern dev coordination is spearheaded by open source projects.

kragen 18 hours ago

Open source, like private property, markets, science, and Satoshi consensus, enables people to benefit from the work of others that they don't have a means of coordinating with—for example, because they're dead, or they don't trust you, or they're hard to get along with. If you already have a way of coming to terms with somebody, you don't need open source. Different engineers inside Google can benefit from each other's work without making it open source because they're coordinated by Google's management. It's people who aren't coordinating with Google who benefit from them making it open source.

pif a day ago

> There’s no single entity that can coordinate the API, in contrast to Windows and MacOS.
>
> But then, how can Linux exist? How does that square with “never break the user space?”

The “never break the user space” philosophy is limited to the kernel, and the single entity coordinating that realm is called Linus.

_rm 5 hours ago

Something uncoordinated struggles to coordinate. More news at ten.

agumonkey a day ago

It's an important question. Paid enterprises enforce a certain level of coordination (supposedly overcoming the pain of dealing with people and ideas you don't necessarily like). Open source imposes no such thing, so efforts grow when and where they want.

jrm4 a day ago

I think that that two of our more instructive examples here might be Wayland and late stage GNOME -- namely, how did these two projects get as big as they did, despite being both 1) past compatibility breaking and 2) not obviously good?

initramfs a day ago

I think the definition of Linux is much broader than what is considered today: a platform for the IDE. It's kind of like the IDE is the cart and the kernel is the horse, but 30 years later Linux is an engine with a cabin virtual machine rather than a desktop per se. The parts interact at a different level now.

nixpulvis a day ago

This somewhat reminds me of arguments I've heard about waterfall vs agile over the years. Planning for ways to plan, and everything falling apart when faced with hard decisions and thick skulls.

throwaway2037 a day ago

    > But then, how can Linux exist? How does that square with “never break the user space?”
Hot take: This catch phrase is out of date. For Linux desktop normies like me who don't really care about the stability of the Linux user space API, user space does break when GUI libraries (and their myriad dependencies) change their APIs. For example, I mostly use KDE, which depends upon Qt libraries for its GUI. Qt regularly introduces breaking changes to its API with each major version increment: 4->5->6, etc. (I don't hate them for it; it is normally carefully done and well-documented.)
  • vhantz a day ago

    "don't break user space" is about the Linux kernel not breaking user space. Qt is user space as well as any desktop environment or GUI framework.

    Introducing breaking changes with major version releases is standard software development practice. Very few projects go out of their way to always keep backwards compatibility.

  • LtWorf a day ago

    The kernel breaks APIs all the time too. The rule only applies if something that Linus personally uses stops working.

    • LtWorf a day ago

      Funny that I'm getting downvoted after having spent hours investigating a failure caused by the kernel breaking an API a couple of weeks ago :)

      I guess y'all know better :)

      • Gracana a day ago

        Why did you write a comment inviting downvotes when you could have told that story instead?

        • LtWorf 15 hours ago

          Why do people presume there isn't a story? Like I'm just making up stuff?

simonebrunozzi a day ago

Key sentence here:

> But it is also clear why JetBrains didn’t do LSP — why would they? While the right solution on the technical grounds, you aren’t going to get paid for being technically right.

ashoeafoot a day ago

The selection and standardisation committee for open source is the usage data. Make public what is used where and under what circumstances, and standards emerge.

amelius a day ago

We need more people writing RFC style documents.

muglug a day ago

Ehh I don't buy that the market was ready 10 years earlier (in 2006) for open-source LSP implementations.

You gotta have someone write those language servers for free, and the language servers have to be performant. In 2006 that meant writing in a compiled language, which meant that anyone creating a language server for an interpreted language would need to be an expert in two languages. That was already a small pool of people.

And big multiplayer OSS platforms like GitHub didn't exist until 2008.

  • autarch a day ago

    > And big multiplayer OSS platforms like GitHub didn't exist until 2008.

    SourceForge launched in 1999. I think GitHub is better in many ways, but the basic building blocks (hosted repos, issue tracking, and discussions via email lists) were already on SourceForge. I collaborated with folks on a number of projects on SourceForge way back when.

  • skydhash a day ago

    And the fact is that anyone working professionally was using an IDE, and anyone else was fine with grepping and using ctags.

    I think LSP is only truly useful in two contexts: global symbols (even with namespacing) and confusing imports. In a language like C, you mostly have a few structs and functions, and for the ones in libraries, you only need to include a single header. With Python, the imports are concise and a good reference gives you the symbol identifier. But with languages that need an IDE or LSP, you find yourself dealing with many imports and many identifiers to write a few lines of code, and it quickly becomes unmanageable if you don't have completion or auto-imports.

margarina72 a day ago

Seems to be a pattern: take a general observation, then apply it to open source. You know what else can't coordinate? Most of the corporate world.

The article doesn't bring up any critical issue that the world of open source should suddenly deal with. It feels more like a morning rant after a shower, which is also how the post starts; it's basically on the front page because its title is trying to be provocative.

RossBencina a day ago

Who is being incentivised to reduce the friction of interoperation?

Coordination is hard. People who are good at coordinating are not necessarily the same people who are happy to contribute their time to FOSS. And FOSS may need to coordinate in ways that vertically integrated companies do not.

Coordinating between loosely aggregated volunteer projects is not the same as coordinating between vested stakeholders either. I would guess that most FOSS projects are more invested in their own survival than in some larger objective. Teams within a company are (presumably) by definition invested in seeing the company mission succeed.

The GNOME / KDE example mentioned elsewhere in this thread is interesting because these are two somewhat equivalent co-existing projects. Any coordination between them is surely not their highest priority. Same with all of the different distros. They each exist to solve a problem, as the fine article says.

I wonder how much the problem is actually "open source can't standardise on a single solution." Let one thousand flowers bloom, sure. But don't expect a homogeneous user experience. The great thing about standards is there are so many to choose from. xkcd 927. etc.

amelius a day ago

Yet most closed source stuff depends heavily on open source.

badsectoracula 21 hours ago

> I suspect that I have an outdated version of hotspot Linux profiler, but I can’t just go and download a fresh release from GitHub, because hotspot is a KDE app, and I use NixOS.

KDE (not to be confused with the Plasma desktop) is just a bunch of C++ libraries that can work on a variety of desktop environments and even OSes (though Hotspot, being a perf report alternative, is clearly meant for use with Linux).

I just went and downloaded the latest CI build from[0] and it ran just fine on my openSUSE Tumbleweed, running Xorg with Window Maker. I do have a bunch of KDE apps installed, like Kate (my currently preferred text editor), Dolphin (the file manager i use whenever i want thumbnails, usually for videos and images), Spectacle (for screenshots), Falkon (i use it as a "clean" browser to test out things), etc so i also do have the KDE libraries on my system, but that is just a `zypper install` away. Or an `apt-get install` or `pacman -S` or whatever package manager your distro uses, i've used a bunch of them and they all pretty much behaved the same. I'd expect Hotspot to be installable in the same way in any of them (and i'd expect the AppImage to have these libraries bundled in anyway so you probably wont need them[1]).

If there are issues with NixOS (i don't know, i haven't tried it) i think it might actually be a NixOS issue and not a KDE issue.

[0] https://github.com/KDAB/hotspot/releases/tag/continuous

[1] EDIT: i checked with --appimage-extract, it contains pretty much everything

xpe a day ago

Apple coordinates internally, since macOS runs on Apple hardware. Windows can drive coordination among hardware vendors. In the Linux world, many organizations and projects share power; there is no single focal point pushing a consistent end-user OS (dependencies, configuration). Declarative and deterministic build systems at the OS level allow different groups to package their subcomponents reliably. As various configurations get socialized, users get to trade off customization against popularity/vetting.

pacoxu2025 a day ago

But open source foundations do provide some guides/events/programs to coordinate.

mike_hearn a day ago

Many moons ago Scott Alexander wrote a critique of Marx. It starts by arguing that if capitalists can be said to produce anything, it's coordination. Coordination, Scott argues, is a thing every bit as real as coal or food or legal services. People need to manufacture it, and we call them executives/investors/marketing staff, and others want to buy it. When we buy coordination we call it brand value or similar. Open source has a notable absence of coordinators, because producing coordination is hard and non-fun, so without a capitalist market there's not much incentive to do it. Same reason desktop Linux historically struggled with anything that wasn't hobby programming (art, UI design, etc... eventually Red Hat and others hired such people using server profits).

The Linux kernel and GNU in general are projects that hacked around that problem by just copying the decisions of other people who were coordinated by capitalists (UNIX vendors), which worked long enough to bootstrap the ecosystem until some of the key people could be coordinated by Red Hat and others who monetized indirectly. But at every stage, the coordination was being produced by capitalists even though it was hard to see.

In other places where the mimic-and-support model didn't work, open source really struggled. This is most obvious on the desktop. Even there, ultimately this approach has been adopted for large chunks of it. If you play games on Linux today it's because people copied the Win32 API i.e. the coordination was produced by capitalists like Bill Gates.

Now Alex mentions LSP and JetBrains. The reason JetBrains didn't do the LSP isn't because of value capture. After all, IntelliJ has been open source for a long time. Other IDEs could easily have embedded it and used its plugins. The reason JetBrains use a Java API is because it's a lot more productive and effective to design in-process APIs than network protocols. As long as you aren't crossing runtime boundaries they're easier to write, easier to test, easier to reason about statically (especially w.r.t. concurrency), and much more performant. You can exchange complex object graphs in a shared address space and coordinate them using locks. All this is a highly effective way to extend an IDE.

Microsoft did the LSP because they had a bunch of energetic developers who only wanted to do web development, so they used Electron. Also for reasons of sticking with the crowd, and .NET being pretty useless for cross-platform desktop stuff... it's not just that experience with desktop programming is fading away.

But browsers were never designed for the challenges of large-scale desktop programming; in fact they weren't designed for building apps at all. So they don't let you use threads, static typing via TypeScript is an aftermarket hack, V8 has very low maximum heap sizes, and there are many other challenges with doing a clean JetBrains-style architecture.

To their credit, the VS Code team leaned into the architectural limits of the browser and did their best to turn them into advantages. They introduced this notion of a backend that could run independently of the frontend using a 'standard' protocol. This is technically not really different to the IntelliJ API being open source, but people like the idea of protocols more than embedding a JVM and using stuff in a company-specific namespace, so that created a lot of community good will and excitement for them, at the cost of many technical challenges.

Those challenges are why JetBrains only use the LSP style approach for one of their IDEs, which due to historical reasons doesn't share the same architectural approach as all the others. And it's also why, if you look at the Rider protocol, it's some fairly advanced state sync protocol thing, it's not a plain old HTTP RPC style protocol.

Given that both are open source and both are produced by teams of paid developers working in an office coordinated by capitalists, it's probably not right to identify this as an open source vs proprietary difference. It's purely a technical one to do with JVM vs web as foundational platforms.

fr4nkr a day ago

The OP defeats his own argument. LSP was a collaborative effort that benefited from a degree of coordination that only hierarchical organizations can provide, yet it still sucks ass.

OP blames FOSS for not providing an IDE protocol a decade earlier, but doesn't ask the rather obvious question of why language-specific tooling is not only still around, but as market-viable as ever. I'd argue it's because what LSP tries to do is just stupid to begin with, or at least exceptionally hard to get right. All of the best language tooling I've used is ad-hoc and tailored to the specific strengths of a single language. LSP makes the same mistake Microsoft made with UWP: trying to cram the same peg into every hole.

Meanwhile, Microsoft still develops their proprietary IntelliSense stuff because it actually works. They competed with themselves and won.

(Minor edit: I forgot that MS alone didn't standardize LSP.)

  • marcosdumay a day ago

    > OP blames FOSS for not providing an IDE protocol a decade earlier

    Everybody standardized on Eclipse plugins almost 2 decades earlier anyway. It got replaced because the standard sucked. The new one is better, but by how much is still a question.

  • oaiey a day ago

    He also overlooks that the central stable projects, like the Linux kernel, systemd, ..., also have a very strict hierarchy / dictatorship ongoing.

  • shadowgovt 20 hours ago

    Maybe I don't ask too much from LSP, but it has enabled autocomplete on arbitrary languages across two or three IDEs I have to use regularly, so it satisfied my goals in a way the previous solutions did not.

  • diegoperini a day ago

    > yet the end result was complete shit

    Could you elaborate why? It looks like a useful protocol.

    • fr4nkr a day ago

      I elaborated a bit when I edited my post, but to be more specific, I think LSP is a protocol that fails at its stated goals. Every server is buggy as hell and has its own quirks and behaviors, so editors that implement LSP have to add workarounds for every server, which renders the point of LSP moot. It's the worst of both worlds: editors are still duplicating effort, but with fewer, if any of the benefits of tools tailor-made for a specific editor-language combination. And that's not even touching on the protocol's severe performance issues.
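      For context on the performance complaint: the wire format is plain JSON-RPC with an HTTP-style framing header, per the LSP base protocol spec, so every hover or completion round-trip pays a serialize-and-parse tax. A minimal sketch of the framing (the `frame` helper and the file URI are made up for illustration; the method name and body shape are from the spec):

```python
import json

def frame(method: str, params: dict, msg_id: int = 1) -> bytes:
    """Frame a JSON-RPC request the way an LSP client sends it over stdio:
    an HTTP-style Content-Length header, a blank line, then the JSON body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": msg_id,
        "method": method,
        "params": params,
    }).encode("utf-8")
    return b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n" + body

# A textDocument/hover request, roughly as an editor would send it.
msg = frame("textDocument/hover", {
    "textDocument": {"uri": "file:///tmp/main.c"},
    "position": {"line": 0, "character": 4},
})
```

      Nothing exotic, which is part of the point: the spec pins down the envelope precisely, while the interesting behavior (what each server actually returns, and when) is where the per-server quirks live.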

      Unsurprisingly, the vast majority of servers work much better with VSCode than other editors. Whether this was a deliberate attempt by Microsoft to EEE their own product, or simply a convenient result of their own incompetence, is ambiguous.

      • RossBencina 10 hours ago

        LSP is underspecified for sure. I don't think this is a situation that is limited to LSP though. It happens when software interfaces are underspecified (or post-hoc specified) with a strong dependence on a reference implementation (VSCode in this case) and the absence of a canonical validation test suite.

        Exactly the same thing happened with VST audio plugins. Initially Cubase was the reference host, later Ableton Live became the reference and it was impossible to convince plugin developers that they were out of spec because "it works in Ableton".

        My impression, having programmed against both the LSP and VST specifications is that defining well-specified interfaces without holes in them is not a common skill. Or perhaps such a spec (maybe ISO C is an example) is too expensive to develop and maintain.

    • mike_hearn a day ago

      The original blog post links to a critique.

udev4096 a day ago

What? How is this even at the top? Some no-name program is not getting an update or is not perfectly installable, and suddenly it's an open source problem? Stop being an entitled prick

UltraSane a day ago

I was truly shocked at how bad the experience is when you are using an RPM-based distribution and a program is only available as a DEB

  • mroche a day ago

    Tools like Flatpak, AppImage, Snap, Toolbox, and Distrobox can go a long way in relieving the end user of the burden of trying to get things playing nicely in those situations. Not always a silver bullet, but a useful tool to keep in the back pocket.

    If it's FOSS, at least you have the option of trying to repackage it for your distribution. You're SOL if it's a proprietary application distributed in binary format, though.

    • UltraSane a day ago

      It was the Termius SSH Client

    • shadowgovt 21 hours ago

      Something I don't yet understand about flatpak et al: isn't using that for every app in the core OS experience going to chew through storage because shared libraries can't be shared across images? Or does the containerization solve that problem?

  • udev4096 a day ago

    There is an rpm-to-deb converter which works sometimes. Most modern projects include an AppImage these days, which is distro-agnostic and only requires FUSE to be installed

colesantiago a day ago

I would love to tell people about linux for their desktops, but the main issue I have with it is the fact that people who are interested in it ask me one question regarding Linux distributions:

“Which one?”

This is pretty much the cause of a 90% drop off of interest in Linux on the desktop.

I could say use Ubuntu (and I do) to some of the people who I’m close with that are interested in Linux, but they discover Lubuntu, or Linux Mint and Debian, then they get easily confused and give up.

And that is not even getting into the updates and the packaging and heaven forbid anything breaks.

  • udev4096 a day ago

    There aren't that many worth pondering when recommending one for daily use. For beginners, Fedora is the perfect choice. For people with a programming background, Arch. Ubuntu was sane some time ago, but not anymore, because of the bloat it ships by default

    • int_19h 16 hours ago

      > For beginners, Fedora is the perfect choice.

      Last time I tried another round of "let's install the most recent versions of popular distros on random laptops I have", Fedora was the most finicky about hardware. As in, it literally wouldn't even boot into the live CD on one of said laptops, and had trouble with graphics on others.

      The thing that worked every time? For the past decade or so, it had consistently been Linux Mint for me.

  • Milpotel a day ago

    > And that is not even getting into the updates and the packaging and heaven forbid anything breaks.

    How to spot the Ubuntu user...

    • imp0cat a day ago

      And lead him to Nix? :)

      And then watch his eyes glaze over as he realizes that he's bitten off a lot more than he can chew. :D

      • Milpotel 5 hours ago

        I have no experience with Nix, but Ubuntu is the distribution that always breaks on major updates...

shadowgovt a day ago

I am reminded of someone I read recently who decried, as a loss, GNOME adopting systemd components as a critical dependency, because they want alternatives to systemd.

... and this is a layer of open source flexibility I never wanted. I don't want alternatives to core system management; I want one correct answer that is rugged, robust, well-tested, and standardized, so that I don't have to play the "How is this service configured atop this service manager" game.

  • rjsw a day ago

    GNOME started out being able to run on more than just Linux.

    • shadowgovt a day ago

      That makes sense, and I can see how it'd be frustrating to someone if it can't anymore. But how relevant is that today, when there are enough flavors of Linux that you can install it on anything from space probes to toaster ovens?