I once sent out a proposal on the FreeBSD lists to merge /sbin with /bin, and /usr/sbin with /usr/bin. People were concerned that this would slow down the system, due to PATH lookups taking longer. Even when I demonstrated the opposite was true (it being faster due to fewer directories needing to be scanned), I wasn't able to get consensus. What a shame.
For me, the value in having a bin vs sbin split is in keeping system binaries (daemons, root-only tools) off the user's path. There's little value in a user starting inetd or apache2 from the command line, so why should those be present in the user's path? Same thing for system management tools that require root access for everything, such as dmsetup, blkdiscard, or shutdown (yes, Linux examples as I don't know FreeBSD).
Having only usable binaries in the path aids discoverability of the system.
There are many tools in sbin that should have been in bin instead. For example, there’s no need to run ifconfig as root if you only want to display the current set of addresses. Yet it lives in sbin.
This means that in practice people will just add sbin to PATH to get a somewhat usable system, which makes the division between bin and sbin useless.
Furthermore, on BSD-derived systems, binaries that should not be invoked by users directly (e.g., daemons) need to be stored in libexec.
/sbin is for statically linked executables, while /bin is for dynamically linked executables. It has nothing to do with daemons vs non-daemons, nor with things having to run as root.
Go take a look (using ldd) in your /sbin and tell me exactly how many of them are statically linked. On my system, only 170 out of the 838 items in /sbin are statically linked.
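A quick way to get those numbers yourself (a rough sketch using file(1); -L dereferences symlinks, and the exact wording of file's output can vary a bit between versions):

  file -L /sbin/* | grep -c 'statically linked'    # static binaries
  file -L /sbin/* | grep -c 'dynamically linked'   # dynamic binaries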
> Utilities used for system administration (and other root-only commands) are stored in /sbin, /usr/sbin, and /usr/local/sbin. /sbin contains binaries essential for booting, restoring, recovering, and/or repairing the system in addition to the binaries in /bin.
I believe they're referring to the old SunOS (at least) convention that /sbin was for utilities that could be run during the boot process before /usr was mounted. These tended to need to be statically linked, as the .so libraries were all under /usr. SunOS was how I learned the Unix filesystem layout, but of course that means a lot of my ideas of what "should" be where are outdated at this point.
Rather, the convention was that /sbin was for static binaries so that the system could still be fixed online if the dynamic linker got hosed. It's not about /usr not being mounted, but /lib/ld-linux.so not functioning correctly. For that reason, glibc still ships (or used to ship) an sln binary (static ln), and Debian still offers sash (stand-alone shell): so you could at least try to restore the dynamic library link farm by hand.
But I have only ever seen historic references to that argument, from back when dynamic linking was scary and unreliable. I certainly have never encountered that situation in almost 25 years of using Linux.
> I believe they're referring to the old SunOS (at least) convention that /sbin was for utilities that could be run during the boot process before /usr was mounted
My memory is hazy, but I recall the distinction being / vs /usr, not /bin vs /sbin.
The tools that root needs are better served by being statically linked than dynamically linked, for the situations where the volume holding the shared libraries fails to mount.
Having mnt be statically linked makes it much easier to recover that system.
The ideal of "/sbin for system tooling" isn't so much about static vs dynamic as about users accidentally finding system tools that don't work for them and emailing the admin saying "mnt gives me a permission denied error" when they have no business running it.
Pretty sure on both of those /sbin is just a symlink to /usr/sbin. If the static thing was ever true, I suppose once you've moved everything into /usr you wouldn't bother anymore.
> so why should those be present in the user's path
And why shouldn't they?
It's not as if a user could do anything damaging with them, if the system is set up properly.
> Having only usable binaries in the path aids discoverability of the system.
Except when someone new has to go online to ask "I found this tutorial telling me to use the `xyz` command to do this, but all I get is `bash: xyz: command not found`, please help!"
Yes, that's correct, it's only about searching the right directories to find and execute the program you asked for.
But autocomplete after sudo doesn't work for me on a stock Debian install anyway, and I'm not sure what one needs to do to get around that. I don't really rely on it. If I'm doing enough work that needs root, I start the session with "sudo su -" anyway, so not having autocomplete after sudo is not a big deal for me.
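That said, one common fix (assuming bash) is to install and enable the bash-completion package, or just add a minimal rule yourself:

  # in ~/.bashrc: complete command names and filenames after sudo
  complete -cf sudo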
> Having only usable binaries in the path aids discoverability of the system.
Downside is it stops the autocomplete, so if you, say, wish to quickly check what a binary is called on the system, e.g. whether you should sudo apache2 or httpd, it will not work...
> the value in having a bin vs sbin split is in keeping system binaries (daemons, root-only tools) off the user's path
I think it's nice to be able to keep admin utils out of an admin's PATH when the admin isn't intending to use them.
It's much less interesting to me to keep daemons and such out of anyone's PATH if running them can't do much, though usually those things really belong in a libexec directory and should be exec'ed intentionally only.
Hypothetically speaking, would forking FreeBSD or a *nix to use a simpler folder structure be feasible? I can imagine a lot of package managers and applications make assumptions about the folder structure though, so there would have to be a lot of changes made to make everything work.
I was thinking "just symlink /sbin with /bin", but there would probably be conflicts.
> I was thinking "just symlink /sbin with /bin", but there would probably be conflicts
Given how long /sbin et al have been around, there would always be some edge cases. However it is still possible to do. GoboLinux uses symlinks to achieve FHS[3] compatibility while still having friendly directory names. ArchLinux also just has one bin directory and uses symlinks for compatibility:
» ls -l / | grep bin
lrwxrwxrwx 1 root root 7 2021-12-07 02:41 bin -> usr/bin
lrwxrwxrwx 1 root root 7 2021-12-07 02:41 sbin -> usr/bin
» ls -l /usr | grep bin
drwxr-xr-x 5 root root 110,592 2022-05-06 09:23 bin
lrwxrwxrwx 1 root root 3 2021-12-07 02:41 sbin -> bin
I'm actually a bit surprised about `/bin` there. Maybe it's archaic but I've always considered the point of `/bin` to be a minimal set of tools that could allow an otherwise-hosed system to be booted/debugged. So it (and `/lib` and a few other directories) might be on a small, read-only partition while `/usr` and friends are on a much larger read-write partition.
Of course in the last twenty-five years I don't think I've ever really used a system set up like that. But it does seem nice to at least be able to do so.
IIRC, you are correct. And OpenBSD still sets up distinct partitions for `/bin` and `/lib` etc.
The first PC I built had 7 disk drives in a tower case, four distinct hard drives. Yes it was crazy. But the largest of these by far was 540 MB. It made sense to keep the boot stuff on its own hard drive.
Linux has `/boot`, of course, but `/boot` should never appear in $PATH. I think.
Why hypothetical? Gobo Linux[1] has already done it. Or if you want to just hide (rather than replace) the traditional Unix hierarchy from the user, you get macOS (inherited from NeXTSTEP).
The problem is that the actual benefits are pretty nebulous, so it's probably not worth the effort (and the drawbacks of using different conventions than most other *nix users).
I'm pretty sure Gobo Linux functions partially like macOS does, hiding system directories, by removing them from readdir with a custom kernel module[0].
Also FreeBSD (and other BSDs) usually mount /usr on its own partition. I think that causes issues in Linux these days. So yes, merging in the BSDs may be a big change.
FWIW, Slackware keeps them separate, following the Linux Standard Base.
/usr/games should never have existed in the first place, imnsho. If it's a small game, its binary could just have been put in /usr/bin. If it's a large game, it probably should be in /opt/$game.
It's a historical unix thing. Things in /usr/games (which were not all games) were frivolous and not essential to the OS, and were distributed as a separate tape or archive so that admins could easily choose whether or not to install them.
I'll also note /usr/games/dm ( https://github.com/vattam/BSDGames/tree/master/dm ) which allowed sysadmins to restrict when programs in /usr/games could be run. Setting up that structure in /usr/bin would be more work to maintain.
Please correct me if I'm wrong. Aren't binaries in /sbin and /usr/sbin statically linked as opposed to no requirement like this for files living in /bin and /usr/bin?
I always thought the rationale was that if statically linked binaries are on different partition they can be used to recover the system from a failure.
Edit: files in /bin are also statically linked, and I am unsure about what I wrote above but vaguely recall something like that
> /bin/
User utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
> /sbin/
System programs and administration utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
It's nice to be able to still run on a crippled system without access to dynamically linked executables, so you can recover. But in practice, wouldn't just about anyone simply boot to a more capable recovery system (via another partition, USB drive, netboot, etc...)?
That was indeed the tradition, but on Linux the GNU libc wants to be only dynamically linked, which creates a lot of problems for those who want static executables.
Because of that, in many Linux distributions there are few, if any, static executables. Due to this, it may happen that a botched glibc upgrade makes the system unusable, because no executable can be started to repair it (nowadays many distributions have a static busybox for such situations). I have seen this a couple of times, and the first time I could not understand what happened, because I was used to older systems, where the commands that I tried to execute (e.g. ls or mv) had been statically linked. Such a thing could never happen in a traditional UNIX or Linux system, before glibc disallowed static linking.
The GNU libc should have been split into a libc with most of the functions, which may be linked statically without problems, and into a small library with the name resolving functions, which could be linked dynamically only by the programs which need those functions.
Even better, the name resolving functions should have been organized in such a way to be able to use their default configuration with static linking and choose dynamic linking only when you really intend to override the default configuration when using less common services, e.g. NIS.
This happened to me on arch recently. I updated pacman but it didn’t warn me it needed an updated glibc. Now pacman refuses to run.
It should be easy enough to repair, but it was just an old laptop I wanted to test something on, so I ended up throwing the laptop back in the drawer instead.
The good thing about arch packages being just tar archives is when pacman fails, you can often fix it by `tar xf` ing the right packages at the root. It's ugly but it works most of the time
I once heard about a "ln" variant called "sln", statically linked, as opposed to the normal ln one, so you could fix a system where the dynamic loader is broken and thus ln is unusable. I can't find it on Ubuntu, though.
then: statically linked bins into /bin, all the others in /usr/bin, and 2 symlinks /sbin -> /bin, /usr/sbin -> /usr/bin. It requires duplicate binaries: one version statically linked and the other not: I still want "env" to exist as statically linked, but tons of scripts start with this horrible '#!/usr/bin/env MYPREFEREDSCRIPTENGINE'
Calls like execvp() do little more than splitting PATH on ':', followed by repeatedly invoking execve() on ${dir}/${filename}. The fewer elements you have in PATH, the fewer execve() calls need to be performed in the worst case.
It's probably not exactly going to be hot, but even failing execve is inherently semi-expensive since it needs to be a syscall and incurs context switches.
It's just outweighed by a couple orders of magnitude by all the overhead that comes with successfully launching another executable, unless you have, like, a thousand junk paths in your PATH.
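You can watch this happen with strace; as far as I know GNU env uses execvp, so looking up a made-up command tries each PATH entry in turn:

  # each directory in PATH shows up as one failed execve attempt
  strace -e trace=execve env no-such-command-xyz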
The fix is for the user to use a smaller $PATH when possible. Any method of checking that the command exists and is executable before trying to execute it leads to TOCTOU race conditions.
I’m assuming you are proposing to stat each candidate before trying to execve it. I’m also assuming that a stat system call is roughly as expensive as an execve of a nonexistent or non-executable path.
For every failed candidate, you are doing one system call, so roughly the same cost each way.
Now if you just do an execve, you’re just paying that cost. If you stat first, you pay the cost of another system call that doesn’t change the flow of your program at all (a nice way of saying you’re wasting time).
Unless stat is dramatically faster than exec on a nonexistent or non-executable path, there’s never a case where this is better.
Context switches could straightforwardly be saved by doing the PATH splitting and lookup in-kernel, or just providing a list of executable paths to check.
It didn't work out this way historically (doing unnecessary string processing, requiring extra memory, could've been more expensive than the context switches), and the performance impact of failed execve isn't normally a high priority, and there are other reasons not to want stuff in the kernel (not that it stops frankly less critical stuff from getting in the kernel), but there's definitely low-hanging fruit here if it like, mattered.
It's not really an accurate description anyway. Most shells will only perform the PATH lookup one time per command, then store the found fully-qualified file path in an in-memory hash table for quicker lookup each subsequent invocation. This is why you need to blast the cache if you delete or move an executable. Plus, many common utilities are replaced by shell built-ins anyway and they never require directory traversal at all.
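In bash, for instance, you can see and reset that cache with the hash builtin:

  cp --version >/dev/null   # running a command caches its resolved path
  hash                      # show the cached command-to-path table
  hash -r                   # clear the cache, e.g. after moving or deleting an executable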
A merge needs to be done carefully for backwards compatibility.
You could move all the things in /bin and /sbin to /usr/bin and /usr/sbin, then leave behind links (symbolic or hard).
Since everyone ends up having /bin and /usr/bin in PATH, this merge makes a lot of sense from a performance point of view.
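Roughly the idea, as a conceptual sketch only (not what the real usrmerge tooling does, and not something to run by hand on a live system):

  for f in /bin/*; do mv -n "$f" /usr/bin/; done   # -n: never overwrite an existing /usr/bin file
  rmdir /bin && ln -s usr/bin /bin                 # leave a compatibility symlink behind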
Merging bindirs and sbindirs is a touchier topic. Many things in sbindirs should have been in bindirs all along, and many should move to libexecdirs, but some should stay behind so that privileged users can keep sbindirs out of PATH when they're not wearing an admin hat.
> Change is breakage for someone, so there ought to be a reason to do it,
Simplicity is reason enough to change something.
When things break because of reasonable change, they can be fixed. And in this case, backwards compatibility can be ensured simply by symlinking things.
I think this is a pretty dangerous attitude, and it is really the only thing wrong with Linux, and probably leads to replacement of simple structure and functionality with a complex software suite that is merely more convenient, like systemd. "Let's change this thing because we want to, because it will improve performance 0.0024%"
Feature creep is what happens when restraint was not exercised.
IMO, since it really doesn't matter what the filesystem looks like, leave it be for standards and compatibility. Seriously, it takes, idk, maybe, a lack of humility to want to change fundamental characteristics of UNIX when the reasons for doing so are a little capricious.
I'm not really talking about the parent, fwiw. I'm talking about the crowd and ochlocracy.
The opposite attitude in the Linux world is also dangerous and tiring: don't dare change something that has been there for 30 years. Like this very article, there were plenty saying "the /usr split is there for a reason!". No, it's just an historical quirk.
There are plenty of greybeards for whom "Linux" is a full-screen terminal running emacs on decade-old hardware. "I don't use antialiased fonts, why the hell should I care about decent HiDPI support?" And then they protest every time some working group tries to modernise and improve the Linux desktop. You see them every time on this forum.
I'm a greybeard, I've used Linux full time on the desktop for 20 years. I don't get this conservative, "we don't need it" attitude.
> Like this very article, there were plenty saying "the /usr split is there for a reason!". No, it's just an historical quirk.
For those of us who ran small-disk NFS workstations back in the day having the split and a common /usr was no quirk and very useful. (There were also diskless (Sun, OpenFirmware netbooting) workstations: common /bin, /usr, but per-machine /var on the NFS server.)
The article states:
> Cheap retail hard drives passed the 100 megabyte mark around 1990, and partition resizing software showed up somewhere around there (partition magic 3.0 shipped in 1997).
Yeah, except if you have a fleet of several hundred or thousand workstations to provision. "Cheap" is relative, especially if you're an academic institution.
Even if a split was pragmatically warranted, the fact that the user directory was chosen is without a doubt a quirk, an accident of circumstance that has since been perpetuated out of tradition (or less charitably: cargo cult mentality.)
This is maybe why I gravitate towards NixOS now. It is already in its inception such a departure from tradition that the conservative crowd will probably not even attempt to use it, which in turn will make innovation more likely.
> The opposite attitude in the Linux world is also dangerous and tiring
You're literally saying that not arbitrarily changing the file structure of linux is dangerous. I don't think that's what you meant.
It's not about "because it's been that way for 30 years," even though it's been 50 years, but never mind that, it's about consistency and standards. It just does not matter one way or the other what the structure of the file system is, so any agenda to change something that doesn't matter is itself a specious agenda. Changing fundamental design introduces complexity for no good reason. As soon as you do it, you've created a special case that doesn't work anywhere else and jeopardizes compatibility.
I agree there'd be quite a bit of compatibility breakage and churn associated with trying to change these at this point.
That said, I think one of the better reasons (and ways) to weigh the value of changing some long-term practice is to focus on the anticipated costs of the change on one side of the ledger, and the ongoing (easy to ignore) unbounded costs of the status quo on the other (and appropriately weight them by who pays and how often). To shoot from the hip:
- If it's only a modest improvement that still supports a bit of misunderstanding, folksonomizing, and arguing about where things belong--it'll just waste time and energy better spent elsewhere. Any time would probably be better spent on writing and promoting/propagating a really good canonical reference to the status quo that can help drive out confusion and enable devs/admins answer practical questions (even if inefficiently).
- If (utopia warning) someone is able to significantly improve how accurately and quickly humans can make real dev/admin decisions from a clear mental model _and_ get enough buy-in to do it across all of the major Unix-alikes, it's probably worth some medium-term pain.
FWIW, the ongoing progress of NixOS, which doesn't really have any of these paths (beyond /usr/bin/env and /bin/sh), demonstrates that this pain is surmountable with enough eyes and hands.
> "the /usr split is there for a reason!". No, it's just an historical quirk.
It's a historical quirk on linux, where there is no clear separation between "base OS packages" and "3rd party packages".
On FreeBSD the split is very real, anything in /bin/ ships with my OS and is maintained and updated by the FreeBSD team. Anything in /usr/bin/ comes from ports and is thus a 3rd party package I installed and can be safely nuked and I need to maintain/update it.
> It's a historical quirk on linux, where there is no clear separation between "base OS packages" and "3rd party packages".
It was a historical quirk to start with. At Bell Labs, back in the early 1970s, Unix was being developed on PDP-11s with RK05 hard disks (with removable disk packs), which had an amazingly generous capacity of 2.5MB each. The Unix operating system had grown too big to fit on a single RK05 disk volume so they had to split it across two. Other operating systems of the period faced similar issues, but dealt with them in (arguably) more elegant ways – on IBM mainframes, OS/360 maintained a database ("catalog") mapping file paths (dataset names, to use the proper terminology) to volume names, so you could move a file to another disk without changing its path. True to Unix's penchant for simplicity, its authors decided instead to just split the OS into / and /usr. And the split survived long after they'd upgraded to more spacious disks.
Any other explanation for the split is essentially a retcon. Some of those retcons (even if, as other commenters have pointed out, not your own) may actually have become true – some of them may have been approximately true to begin with, and they influenced people's decisions, thereby making themselves more true over time. But its ultimate origins will forever remain this quirk of computing history.
Funny aside: yours is an excellent comment, and yet proof that you didn't read the article, as the first part is almost word-for-word identical to the post.
I don't mean to shame you, I sometimes comment without reading TFA, and in your case you add a few more details that were not present in the article. I just found it interesting.
A much better separation is achieved in a few Linux distributions where every package is installed in a separate directory.
All the files that might be expected by others to be in certain standard locations are sym-linked to those locations, e.g. the executables to /usr/bin,/usr/sbin,/bin or /sbin, in order to appear in PATH.
In this case you no longer need any kind of database to know which files may be safely nuked to delete any package.
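GNU Stow is a small tool built around exactly this idea and works on any distro: each package gets its own tree and is symlinked into the shared hierarchy (a sketch; the package name and prefix are made up):

  ./configure --prefix=/usr/local/stow/foo-1.2 && make && make install
  cd /usr/local/stow && stow foo-1.2   # symlinks bin/, lib/, share/ entries into /usr/local
  stow -D foo-1.2                      # removing the package is just removing its symlinks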
Moreover, in FreeBSD there is no such separation between the "base OS packages" and "3rd party packages", implemented as a difference between root and /usr. You might have misremembered /usr/local, which is indeed a place for "3rd party packages" in all UNIX-derived operating systems.
There are many "base OS packages" that are installed in /usr/bin or in /usr/sbin.
In any FreeBSD system, you can see their source files in /usr/src/usr.bin and in /usr/src/usr.sbin.
I have been using FreeBSD for a quarter of a century, since FreeBSD 2.0, and there has never been such a separation between root and /usr.
The separation between /bin and /usr/bin and the other similar pairs was made only to allow /usr to be unmounted, when it is on another device than the root device, but still have in the root file system the minimal set of tools needed for diagnosing and repairing any broken file system or network connection.
In ancient FreeBSD installations it was always recommended to have a separate small root partition, e.g. of a few hundred megabytes, and some large partitions for usr and var.
This original use has become completely obsolete, because now, for diagnosing and repairing problems, it is preferable to boot from a USB stick or from the network (using a ramdisk as the root file system), and then run diagnostics or repair programs without touching even the root file system unless modifying it is intentional.
In FreeBSD it might still be possible to put /usr on a different partition or device and then unmount /usr, but in many Linux distributions this traditional usage is broken, because some of the programs installed in the root directories need components installed in /usr, so when /usr is unmounted they stop working.
The split is even stronger on NetBSD, where /usr is the base OS and /usr/pkg what's installed by the user through pkgin (binary packages) or pkgsrc (ports).
Likewise, the system configuration goes to /etc while the userland configuration goes to /usr/pkg/etc.
All it takes to factory reset a NetBSD system is an rm -Rf /usr/pkg.
Well, if you have an argument against KISS, we'd all love to hear it. The opposite of KISS is KICKME (Keep It Complicated Keep Me Employed). Life is a pretty good example of successful complexity. But we didn't design life, and we do not maintain it (understatement). Simplicity for simplicity's sake is self-evidently advantageous. Complexity for the sake of complexity is not.
I don't think it has to be that black and white. KISS in the Linux world, from my experience, has been to see any software that isn't "simple" as bloat, while their software is like a car you can only turn left with.
To be clear, GP's stated intention was to simplify a complex structure into a "simple structure", about which the stated concern was a loss of performance, to which GP's rebuttal was that it actually improved performance. The main motivator for flattening the filesystem hierarchy isn't really performance, it's simplifying the organization, and (arguably) bringing it more in line with "pure UNIX", vs the quagmire of commercial SysV derivatives with a few dozen different bin directories in PATH, with esoteric justification.
> to merge /sbin with /bin, and /usr/sbin with /usr/bin
It's a bit more drastic than you make it out to be. This would give two valid paths to the same commands. It would make tab-completion slow. It would likely break all kinds of compatibility across the SUS. And it is incredibly arbitrary, no better or worse than eliminating the system hierarchy entirely and putting everything in /.
I've read this explanation a couple of times, and if you go all the way back to the PDP-11 the split does indeed sound ridiculous. I had my first contact with Linux from some magazine CDs in the late 90s, I think it was Red Hat or SUSE based. The documentation there had a much clearer explanation:
/sbin, /usr/sbin is for binaries that need root. You put them in separate directories so their permissions all match up, and so they don't show up when completing in bash.
The paths without /usr - /bin and /sbin - are available from the get go. It is the very first partition that is mounted, and what is guaranteed to be available if you do "init 1" or boot in single user mode. You can also do fsck from there (assuming the boot partition is not damaged). I don't know how this integrated with initrd (initramfs wasn't a thing yet). I think there was only one "base system" - either initrd was very basic, or the whole base was in initrd, or something similar.
The paths with /usr were managed by the package manager. Word of mouth was: don't install anything manually there. If you do (via make install), keep around the source so you can do make uninstall. But better install to /usr/local or /opt.
> /sbin, /usr/sbin is for binaries that need root. You put them in separate directories so their permissions all match up, and so they don't show up when completing in bash.
I also got this explanation, but it never made much sense to me. First of all, the binaries there are executable by everyone anyway. Second, it really doesn't matter that they show up during completion. Third, many of them work fine and are quite useful without root! I don't recall the specific examples that bothered me (/sbin and /usr/sbin have been in my PATH forever now), but I think it was something like ifconfig or ping.
>Third, many of them work fine and are quite useful without root
It's more complicated than that - many can do a subset of useful things without root.
Often they can read things as a normal user - things like `apt` or `sysctl` can show you information about your current system, but will only be able to change it as root.
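sysctl makes a good concrete example (a sketch; which keys exist depends on the kernel):

  sysctl net.ipv4.ip_forward              # reading works fine as a normal user
  sudo sysctl -w net.ipv4.ip_forward=1    # changing it is what actually needs root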
And even something like "shutdown" might be usable for a locally logged in normal user on a systemd system - or it might not be, depending on local configuration.
Finding things that actually always "need root" for everything is kind of hard, even discounting "print help" as a useful thing in its own right. And if you only came up with "chcpu" and "switch_root"... would you really want to have a top-level directory just for those? Plus the historical location for some things is in /sbin, so moving them out has a compatibility cost.
Tbh I find the only winning move here is not to play. There are so few binaries that are actually only useful to root that they don't really hurt in tab completion, and they could always grow non-root accessible features.
Yes, but you are effectively turning your box into a single user system. And that's fine if you are happy to work that way, but the origins of the directory structure is of course in multiuser UNIX. As a sysadmin, I would not want my /bin /sbin exposed to everyone. In your example I question the security implications of being able to run those binaries outside of root anyway (esp. in a professional environment) if you have your box exposed on a network.
> As a sysadmin, I would not want my /bin /sbin exposed to everyone.
Why not? It's not like most of them are suid (right?). Most Unix systems I've used allow any user to peruse /sbin at their leisure and run whatever they want.
Yes of course, just like on more or less any Linux system. But IIRC, shutdown is a suid binary that will do its own permission checks while running. The permissions on the /sbin/ directory should not matter.
This is exactly what I understood, too. The structure in Linux was familiar to me from SVR4 which I used in a number of implementations, most often Data General’s DG/UX (which was a fantastic system for its time).
It’s probably true that the distinction isn’t really important any more. The things we used to have to worry about in the (g)olden days of Unix (/s) are ridiculous by todays standards. We had one of the first 2.5GB RAID arrays in the country and could run a whole medical laboratory - maybe 100 people running Wyse 60 terminals - on it. We had a dedicated 500MB drive for the OS and a couple of other drives just for database logfiles.
These days the whole OS now fits on a single SSD which takes up a tiny fraction of the device. Large SSDs have made so much complexity obsolete for most people. I believe that one could, quite literally, run that old lab software from a single Raspberry Pi.
The point being, stuff that made sense in that old environment does not necessarily make sense any more. It’s good to have the discussion though.
Yes. And another benefit of /usr vs / was that it was simpler to read-only mount /usr than to r/o mount /.
Why do you want to do that? Well, when you have a machine with virtualization you can share the /usr partition across all instances, physically. Which makes a lot of sense if you want to virtualize hundreds of Linux guests on one physical box: you memory map the /usr partition in hypervisor ram, you share that ram across all guests and wham you have snappy fast virtual machines with low physical footprint.
That was actually done, e.g. on IBM mainframes running "your personal web server" for thousands of users in one single mainframe. Fun times.
And only when the root partition could also be mounted r/o, with just an individual /etc, and when large partitions became doable as /, only then did it start to make sense to abandon /usr.
> Why do you want to do that? Well, when you have a machine with virtualization you can share the /usr partition across all instances, physically.
Or you could share the whole /usr over NFS to hundreds of diskless workstations, each having their own separate / (also shared over NFS). Remember that disk space was expensive back then; having hundreds of identical copies of the large /usr tree on the NFS server would be a huge waste.
> I had my first contact with Linux from some magazine CDs in the late 90s, I think it was Red Hat or SUSE based.
Man that sounds awesome. I know we have it made these days with modern internet and computers, but sometimes I day dream about being 19 in the mid to late 90s and getting to experience that age of computing.
> I don't know how this integrated with initrd (initramfs wasn't a thing yet).
As far as I recall, early Linux didn't have initrd either; it's a novelty which came later.
> But better install to /usr/local or /opt.
I believe /opt is a novelty which appeared in either FSSTND or its successor FHS; I think /usr/local is older (perhaps even older than Linux), being the default --prefix for autoconf.
Bad/illogical/outdated directory structure is one of the most annoying things I've encountered while using Linux, because it makes the admin job feel unnecessarily messy (things are all over the place), and it feels as if there's a fundamental imbalance in the system that you can't get rid of.
Many admins feel like a Jedi when they memorize all the trivia about a file's path.
There's no shortage of people in a particular profession that feed on unnecessary complexity even when the original reason for said complexity (i.e. tiny drives) doesn't exist any more.
Now if you'll excuse me I have to figure out why sound doesn't work on Linux in 2022 like it's 1997. No seriously, I legit have to do that now. Someone should really develop another system for sound, again.
I just got done building an omni-channel recording system, with a SoC running ubuntu-server & alsa handling recording from several USB DACs connected to microphones. I feel your pain. Sound on Linux is a nightmare. But now that I have an understanding of it, here are some helpful things I learned.
- Make sure alsa-utils is installed
- Auto-configure hardware devices: alsactl init
- View hardware for playback (use arecord for the opposite): aplay -L | grep "^hw:"
^ Use that to make sure your hw is being detected
- Lower level list of sound cards, if having issues: cat /proc/asound/cards
- Base alsa conf: /usr/share/alsa/alsa.conf
^ go there to dive deeper into what alsa is actually doing. It will also show you the priority for config files, so you can go through that and check which ones are in use and modify accordingly. alsactl init should handle most configuration though.
- you will want to mess with this: /etc/modprobe.d/alsa-base.conf
…and get it working for your hardware. This is a resource to understand that file better: https://alsa.opensrc.org/MultipleCards
You can google configuration files and find one that works for you. Most issues for normal use will revolve around which card gets set to index 0 / default, so if you know your card you want as default, I’d recommend finding your device id (i think cat /proc/asound/cards will give you vendor/product ids you can use) then making a config using that id to set it as the default card, independent of indexing.
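For example, something along these lines (a sketch; snd-usb-audio and snd-hda-intel are just the usual suspects, adjust for your actual drivers):

  # /etc/modprobe.d/alsa-base.conf
  options snd-usb-audio index=0    # make the USB interface card 0, the default
  options snd-hda-intel index=1    # push the onboard HDA device to card 1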
Turned into a lot, stopping here. Sound really shouldn’t be this hard for end users or devs, but it is what it is right now. Anyway, it’s fresh on my mind so at the very least, I might be able to point you in the right direction.
>Bad/illogical/outdated directory structure is one of the most annoying things I've encountered while using Linux
Every OS I've ever used has had these kinds of quirks, save simple ones that just dump everything in the root folder or equivalent. It's really hard to move files once you ship software, and doubly so for an OS. Users expect files to be where they were last version.
There really are not that many places to look. I agree it could be better, but part of the time the issue is with package maintainers. And to some extent, systemd has made things a little more convoluted. Compared to Windows it is far better, because at least you don't have to go searching thousands of registry keys.
At least for the PATH, you can also automate the looking. When on a new POSIXy system, I usually try "(IFS=: ; ls $PATH)" at the shell to get a listing of all programs available.
Roughly the same reason why dotfiles became a thing on Unix: https://linux-audit.com/linux-history-how-dot-files-became-h... Fortunately more and more software is putting its config in ~/.config/ rather than dumping it all over users' home directories.
AFAIK the XDG spec isn't a thing on macOS, so you get those CLI utilities written by devs on their fancy Macbook Pro that pollute your home directory, such as Deno, Doom Emacs, Elixir, Rust/Cargo, Kubernetes, npm, vscode, etc.
There is no specific reason for a program that uses the XDG dirs on other unices to not use them on macOS, other than some idea that it's "alien".
You can have ~/.config/. Nothing in macOS prevents you from having it. And so, some programs do. The worst thing that happens is that, instead of having one directory ~/.foo, you now have one directory ~/.config/foo and nothing else in ~/.config. But as soon as you add the second thing that uses ~/.config, you now have two directories in there instead of a second dotdirectory in ~.
It's just that for a bunch of them the XDG path is only used if it exists - e.g. emacs predates the spec, so it uses ~/.emacs.d (and a few others) first.
> There is no specific reason for a program that uses the XDG dirs on other unices to not use them on macOS
Nobody stops Apple developers respecting a Freedesktop spec, but the point is many people that mostly know macOS probably didn't even know XDG was a thing. It's not like Apple encourages it in any of their command line utilities.
I notice that your comments often include trigger words/phrases like "devs on their fancy Macbook Pro". Then I realized that I do the same thing. You spot it, you got it. Maybe I'll start a 12-step group for snark addicts.
On macOS it's less of a problem because the OS tries to hide your home folder and shows Documents, Desktop, and Downloads in the Finder. Still, I much prefer .local and .config to a pile of dotfiles.
I haven't seen any software that conditionally disables XDG on macOS. What I do see common is software that hardcode paths. Many of these software use different paths depending on the platform. But those aren't XDG compliant because XDG paths are configurable through environment variables.
It's inconsistent, but certainly some programs adhere to XDG on macos. I've got pretty healthy looking ~/.config and ~/.local directories, and it's not all just my own stuff.
I can understand devs not using the right directories if their platform of choice doesn't come with an easy way to determine the right directory to put stuff in, let alone create it if necessary.
What I really want is an API that does "create/open/delete a file/directory for the relevant configuration/cache/resources store", be it user configured or platform default. What I get is an external package that gives me a list of potential storage locations (of which I'll probably just pick the first) that may or may not be actual directories on the system which I may or may not have access to touch files in.
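For the simple cases, the fallback logic in the XDG spec is short enough to inline (a sketch; myapp is a made-up program name):

  config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
  data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"
  mkdir -p "$config_dir" "$data_dir"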
Some devs are kindly reminded that there's a spec for these things but often it's too late as data is already in specific paths that users may have come to know. That way you end up with paths that get set by environment variables where you have to tell each and every program where to put their crap.
Other programs don't care enough to implement the standards (like Firefox; the bug report about XDG is old enough to vote [1] and it's still not implemented fully). Kubernetes has an open issue for its client that only ever gets bumped.
Even worse are devs that are reminded of standards like XDG and then decide to give everyone the middle finger. Snap is one of them: not only is the data directory hard-coded, it's hard-coded lowercase, unlike every other standard directory on Canonical's own distribution! Snap's biggest competitor, Flatpak, decided not following the standard is not a problem [3]. At least its special snowflake folder starts with a period so that it's hidden by default, I suppose. Even Bash doesn't support XDG [4], because not everyone uses Linux (and apparently no effort should be made to support OS-specific standards?), with the suggestion closed as won't fix.
Many tools that do support XDG only care about their own standards, of course; Windows has had SHGetKnownFolderPath since Vista, replacing SHGetFolderLocation, which dates back to Windows 2000. Still, developers like to push POSIX standards onto Windows, creating .dotfiles and not even bothering to at least mark them as hidden.
There's a big list on the Arch wiki[7] listing programs and their compatibilities with XDG.
Incidentally, did you know that PowerShell on Linux respects the XDG specification? It was rather unexpected when I first noticed it and it just tickles me pink.
Tangent, but that's what made getting into Linux/Unix really hard. You have all these folders and files and no README.md to explain what is what. And there seemed to be no logic at all with how things were organized or named (and names often were shortened to abbreviations that I couldn't comprehend). I'm wondering what a modern system made to be readable and understandable would look like.
The other thing, coming from windows, was not understanding where to install things. In windows there's like a single place where you install all your stuff.
> You have all these folders and files and no README.md to explain what is what.
Markdown is a novelty. Back then, it would be just README (with no file extension at all).
> In windows there's like a single place where you install all your stuff.
Windows was even worse. Whenever you installed something, parts of it went in a new directory at the root of C:\, and parts of it were dumped in C:\WINDOWS\SYSTEM together with everything else already there, often overwriting files of the same name (and the names were limited to 8 characters plus the extension, so they were quite opaque) used by other software you had installed earlier (that's the original scenario of what is now called "DLL hell"). On later Windows versions, instead of a new directory at the root of C:\ it was a new directory within "C:\Program Files" (or is it "C:\PROGRA~1"? Or perhaps "C:\Arquivos de programas" aka "C:\ARQUIV~1"? Or something else?), and instead of C:\WINDOWS\SYSTEM it was now C:\Windows\system32, and there's also the "Common Files" directory somewhere. And since there's no package manager (actually there is one, but not everything uses it, and it's very complex), you don't know which file came from which software. Oh, and if the program you installed overwrote a "protected" system file, the operating system overwrites the file again with its own copy.
There is a package manager, what’s missing is a directory tree owned by the package manager and protected from smuggling in unexpected crud without a big red warning for administrators.
It was, I really have no idea what you guys are talking about, everything I installed mostly went there and it was always easy to find applications. Again, as a normal user.
Not to mention applications can and do (could and did?) put things pretty much anywhere they liked and there’s never a way to really know for sure. I’ve had to hunt down dozens of directories for programs that just did not give a fuck about being easy to uninstall.
"Link a man to man and you solve his problem for a day, teach a man to man and you've enlightened him for life."
Side note, calling the file system layout "hier" has got to be the stupidest naming choice. Did they want this to be lost forever so that nobody ever finds it?
Once upon a time, the manpages were a printed object. This, coupled with some of Bell and later BSD's quirks about naming things, led to some historic naming conventions. See also: this entire damn conversation on naming directories.
One wasn't intended to call man directly, instead calling apropos first, finding the appropriate page to open.
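The layout itself is still documented that way: hier(7) on Linux and the BSDs, plus file-hierarchy(7) on systemd-based distros, and apropos will find them:

  apropos hierarchy
  man hier              # the traditional layout
  man file-hierarchy    # the systemd take on it, where available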
Well, you can look at MacOS for a basic inspiration. Hide all the ugly Unix parts and expose sensible directories like Applications, Preferences, Volumes, Users.
The irony is, Microsoft originally put a space in "Program Files" intentionally, to force software developers to support paths with spaces in.
I don't know why developers have apparently collectively decided to go backwards. If your software doesn't support spaces there's a reasonable chance it doesn't support more exotic characters either, which really sucks if you are not natively English speaking.
> If your software doesn't support spaces there's a reasonable chance it doesn't support more exotic characters either, which really sucks if you are not natively English speaking.
The problem with space is that it's often a separator, which will not be the case for exotic characters. Fixing issues with exotic characters will not necessarily fix issues with spaces, and vice versa.
It's not so much an issue with tools as it is an issue when working from the shell. You have to make sure to quote such paths, and autocomplete sometimes gets confused when it auto-escapes the space with \
The fact that there are several directories with binaries is not a problem by itself. The problem is that many applications use hardcoded paths instead of searching for these binaries using PATH.
It means that if someone decides to get away from this legacy structure and move the OS into something like /system/debian-11.1.2/, all those programs would break.
Examples: [1], [2]. I assume that developers have hardcoded those paths because /sbin is often not included in PATH.
> Standards bureaucracies like the Linux Foundation ... happily document and add to this sort of complexity without ever trying to understand why it was there in the first place.
That is because that is a standards organization's job. They exist to document what is actually being done, not editorialize about what should be done.
This seems to be a good example of the virtue of this sort of behaviour. The mostly arbitrary changes that have been done here have in themselves caused more problems and wasted effort than just keeping everything the same as it was.
Speaking of this, is there a good resource that elegantly but succinctly describes the intent of each of linux’s (Unix’s?) root directories?
I’ve spent like eight years with Ubuntu and realize it’s all symbol manipulation to me. I learn what is what and what goes where, but only in practice and never because I understand the semantics.
Many distros, including Debian & Ubuntu, have merged /bin and /usr/bin, with symlinks for backwards compatibility: /bin -> /usr/bin (and similarly for /usr/lib etc).
Note: This is the dpkg maintainer arguing an apparently fairly unpopular position of linking the specific files inside of /bin instead of /bin directly, in opposition to what appears to be the majority of linux distros.
He's even added a warning to dpkg and a "usrunmess" tool to switch a system to his preferred way of doing things.
It's not clear to me where the breakage lies and I've not seen any actual reports of it.
Suppose a package has a boot-time, size-optimized, limited binary in /bin/runk and a user-optimized, feature-complete binary that requires the entire system to be up in /usr/bin/runk. When /bin and /usr/bin link to the same directory, the package manager will extract these files and run into a problem.
Things become even more complicated when these tools are split into different packages (say runk-boot and runk-user). Tracking which file comes from which package can become near impossible.
Of course this can be resolved relatively easily; make the package manager link-aware by handling the merged-bin setup as a special case and warn or error when files conflict. People don't seem to want to do that for various reasons, some good, some based in opinion only. It's a mess.
This can also be resolved externally by controlling the repos and not fucking it up. Package conflicts are already a thing, Debian already has all the infra, you've always been able to cause the "theoretical" "breakage". Frankly, it's already a non-problem.
The wiki page has a pretty detailed list of breakages, e.g. "dpkg-query -S is currently broken by this approach". Hopefully the in-progress patch for some of these issues will get included.
I don't believe the list is detailed enough, because it just says "thing is broken", but not under what circumstances.
As best as I can tell, `dpkg-query -S` is broken by this iff it's passed a path to a file that's been installed under a different version of that path.
E.g. `dpkg-query -S /usr/bin/vim` fails if vim was installed via `/bin/vim`.
That's a minor bug, that should simply be fixed in dpkg, and that's also easy enough to workaround if the distribution simply installs all files in /usr/bin via /usr/bin.
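Until dpkg is fixed, the workaround is as simple as querying both spellings of the path (a sketch):

  dpkg-query -S /usr/bin/vim 2>/dev/null || dpkg-query -S /bin/vim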
None of any of that seems to be enough to unilaterally hold up a distribution-wide decision to move to a merged-usr, especially not via official sounding warnings in the install script for a major distribution component, and especially not when this way of doing things works without a lot of complaints in other distributions including the related Ubuntu, and especially not to call for a special Debian solution that has its own problems and to do so years after the fact.
Frankly if I was a debian developer I'd be quite cross with the dpkg maintainer.
What exactly is "behind dpkg's back" here? This was discussed, in the open, years ago!
This was implemented, as an option, years ago. This was implemented fully in other distributions years ago! Fedora has had it for a decade, with few problems.
Dpkg has a few minor bugs with it so it needs to be fixed. It's holding up progress here.
In that usrmerge adds symlinks that dpkg does not know about, doesn't manage and doesn't understand. Like if the sysadmin added random symlinks in various places. All bets are off after that. I'm surprised the amount of breakage isn't higher TBH.
Gentoo still uses the split setup by default. Unifying the directories is currently a work in progress and will eventually be the default from what I understand.
It's possible that Gentoo will still support the split setup even after the default is changed since it supports many different inits and libcs but I am not sure.
As it stands, I believe that Debian as a distribution did switch, but the .deb packaging software/dpkg package manager that Debian relies on doesn't support it well.
It's funny how many quirks of UNIX/C/etc go back to the severe limitations of early computers. Which is why using modern languages like Rust and its compiler really feels like coming up for air.
> Nobody questions why main drive is C:, remnant of [an] early computer having two floppy (not sure) drives on A: and B:
Recently I was trying to install some obscure driver for a device that doesn't autodetect in my Windows 10 work computer, I had to go through the old school "add device" wizard. When clicking to manually provide the driver, the dialog is exactly (or almost?) the same as the one from Windows 95, and the path defaults to... A:\! There's no floppy on this computer, there even isn't an optical drive!
Windows is a 32 bit shell for a 16 bit extension to an 8 bit Operating System designed for a 4 bit microchip by a 2 bit company which can't stand one bit of competition.
As a user, the main one that really annoys me is the "Program Files" vs "Program Files (x86)" split. I can kinda see why they have to be different folders, but why did they have to name it "... (x86)" instead of "... (32bit)"?
You can call the 64 bit architecture x64 all you like, but it's still using the x86 instruction set and it's frequently referred to as x86-64, so naming that 32 bit only folder "... (x86)" will just make things more confusing than they should be.
I think this was because, at the time of picking the name, Windows with a working 64-bit Windows-on-Windows subsystem only ran on x86 and x64, so the naming made sense. DEC builds weren't relevant at the time and ARM was still far away from gaining 64-bit support. There was a 64-bit version of XP for Itanium, but that couldn't run x86 code natively.
It'll be interesting to see what Microsoft will do if Windows on ARM actually takes off. As far as I know, the current translation layer can't execute amd64 on ARM, only x86. Will we see Program Files, Program Files (x64) and Program Files (x86)? It would make sense; the redirection system is ready to go and the naming scheme would also make perfect sense. ARM doesn't need a special 32-bit folder because there's no notable 32-bit vs 64-bit clash; nobody is upgrading their Windows CE device to Windows 11, after all.
x64 emulation for Windows on ARM already exists. It is not based on WOW-style technology. Furthermore, 32-bit ARM programs do exist on modern Windows on ARM, using a version of WOW64 very similar to WOW64 on x64 CPUs. But they also have x86 WOW64, based on the Itanium version, which had to do binary translation.
- "C:\Program Files" <- ARM64 programs go here, as do x64 programs!
- "C:\Program Files (Arm)" <- ARM32 programs go here
- "C:\Program Files (x86)" <- x86 programs go here
I'm not sure how things like "Common Files" work in C:\program files, unless they made mixing arm64 dlls with x64 exes and vice versa just work. Which they probably did. I'm guessing they did not want another WOW version, since it was already bad enough to have to ship 3 different copies of certain system components, and they did not want to need to include a 4th copy, especially as ARM devices are often a bit light on storage space.
Fun fact: this is the second time Microsoft has pulled this. The first time was for legacy 16-bit Windows applications running on Windows NT. Since most people have moved to 64-bit processors, it has been shuttered.
Yes, in 16-bit Windows it was system, and then 32-bit binaries would go into system32. By the time 64-bit arrived, so much stuff had system32 hard-coded in that there wasn't much point in trying to change it, so you ended up with SysWOW64 (when a 32-bit app runs under emulation, it 'sees' SysWOW64 as System32, and can't see the 64-bit system directory).
And the contents of both system32 and SysWOW64 are actually hard-linked from the side-by-side folder (WinSxS), which is why that folder is usually half the size of the Windows folder.
It's the Windows way to abstract system folders and provide binary compatibility across architectures. I'd much rather have ld.so.preload and multiarch than this hard links mess though.
I'm a big fan of Scheme, which is just about as old as UNIX, and which is based on the even older LISP (which is even older than UNIX, going back to the 50's).
it's my understanding that most distros by now have moved to have their stuff in /usr, though there might still be backwards compatibility symlinks of course.
Just don't put /usr on its own partition. What is the point anyway after we have merged /bin into /usr/bin, /lib into /usr/lib etc. Just put your operating system on a single partition and be happy.
You're kidding me, right? Nobody ever bothers with that for anything else, and the company I work at spends like more than half the time resolving stupid install-breaking changes that nobody asked for. This would just be one minor extra thing on that pile, but at least it would make sense for once.
I must say I really like the macOS idea that every app has its own folder. I think that apps scattered through the filesystem are not a good idea at all. Maybe they should symlink executables to /bin and shared libraries to /lib. Also, everything that is needed to boot should probably be in a sealed read-only filesystem or binary anyway. I think we have made a mess with the Unix filesystem structure and it really needs to be simplified.
macOS also does a more strict but tidier hierarchy, grouped into "domains" … /System/* is "stuff from Apple" (including /System/Library, etc.), / is for "stuff on the local machine" (/Applications, /Library, etc.), and then each user can have their own hierarchy in their user directory (~/Applications, ~/Library, etc.).
Of course, the "stuff from BSD" winds up in /bin and /usr/bin anyway, so it's still a mess.
A long time ago, as a novice sysadmin, I spent some unhappy time fixing a broken Solaris server. The problem was that fsck was in /usr/sbin, and /usr was a mount point on an external drive array that got its power yanked. Challenge: to boot you need to mount /usr, but first you have to fsck it using the binary in /usr/sbin ...
After that I would make sure to have some working (static) binaries for rescue on every *nix system: tar at least, and on Solaris an extra /usr/sbin/fsck under the /usr mount point. You can fix a lot of things with tar, sed and netcat.
These days a static-linked copy of busybox on the root partition is usually enough; assuming you have the space for that. A 'full' initramfs can also help in case you need to bring over a USB drive of tools from another system or have changed hardware.
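To make that concrete, here is a minimal sketch of what such a rescue session looks like, assuming a statically linked busybox at /bin/busybox (the backup tarball path is hypothetical):
  /bin/busybox sh                                  # still runs even if ld.so/libc is broken
  /bin/busybox ls -l /lib                          # inspect the damage
  /bin/busybox tar xf /root/libs-backup.tar -C /   # restore libraries from a backup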
Having the system mounted in its own subdirectory, rather than spread over multiple directories (/usr/bin, /usr/share, /usr/lib, etc.), has the advantage that a single read-only mount can cover the whole OS.
Having the OS mounted read-only provides some security benefits.
The other option would of course be to have / mounted ro and then have rw mounts for /home, /etc, /var and /tmp, but this is more complicated than a rw / and a ro /usr.
Disk space was not really the issue. Back in the day extra partitions would actually mean you waste space. It's more efficient to put them on one partition.
The issue is organisation. There is already so much junk in the bin folders. I think it would be much neater to further split the bins into various categories: "shell tools" like ls, [, echo; "applications" like firefox, inkscape, "helpers" like gnome-settings-daemon, ... There is no need to show weird daemons when pressing TAB in bash, and there is no need to show `ls` when picking an application via a GUI.
The problem with that suggestion is that some things belong in more than one category.
A much more flexible way of organizing is to use tags. This way a file could have more than one tag.
Having a tag hierarchy would be even better, so you can browse down the hierarchy as you'd traverse the tree structure of a typical file system (with the added advantage of allowing a single file to have multiple categories that it could be in).
One of my first exposures to *nix in the corporate world was on a Sun Sparc, running at the time SunOS 4.something.
The sysadmin at the time told me the /sbin versions of things were for statically linked binaries that didn't need any other filesystems mounted in order to read dynamic libs.
I'm not asserting it was right, but just another view into "tribal knowledge" vs "urban legend" vs ???
I'd forgotten that RK05s only had 1.5MB, which made me think about my disk quota as a student around the same time on a mainframe: 500 180-byte blocks, roughly 90KB. But I could back my programs up and take them home - on cards.
Path dependence -- just like the default 80x24 character terminal window size that goes back to the VT100 of 1978. A curious historical hangover. GUI defaults are obviously less consequential, though.
TL;DR. Storage used to be so expensive per gigabyte that binaries were arbitrarily categorized into distinct file system partitions. Nobody ever bothered to revisit this and merge back to just /bin.
> I don’t wanna be callous but on a scale of 0=I stubbed my toe and 10=children are starving in [insert location] this seems to be a -3.
The scale is a measure of what though?
> My theory is every so often someone stumbles upon it, thinks “oh that’s interesting!” and submits it, not realizing it’s a duplicate? ..and then tons of readers in the same boat upvote it? (Similar to how urban legends about dog-sized rats taken as pets in Mexico keep circulating and will forever)
I'm not sure this is a theory, this is literally how the site works, is it not?
You know what I do when I see an article title that I've seen before and it's something I don't care about? I don't click it. Most times though, I do click into the comments to see what a 'new' audience thinks about it.
> My theory is every so often someone stumbles upon it, thinks “oh that’s interesting!” and submits it, not realizing it’s a duplicate? ..and then tons of readers in the same boat upvote it?
The social bug being that you keep getting older and there are always young newcomers who don't have the knowledge you have already picked up?
I once sent out a proposal on the FreeBSD lists to merge /sbin with /bin, and /usr/sbin with /usr/bin. People were concerned that this would slow down the system, due to PATH lookups taking longer. Even when I demonstrated the opposite was true (it being faster due to fewer directories needing to be scanned), I wasn't able to get consensus. What a shame.
For me, the value in having a bin vs sbin split is in keeping system binaries (daemons, root-only tools) off the user's path. There's little value in a user starting inetd or apache2 from the command line, so why should those be present in the user's path? Same thing for system management tools that require root access for everything, such as dmsetup, blkdiscard, or shutdown (yes, Linux examples as I don't know FreeBSD).
Having only usable binaries in the path aids discoverability of the system.
There are many tools in sbin that should have been in bin instead. For example, there’s no need to run ifconfig as root if you only want to display the current set of addresses. Yet it lives in sbin.
This means that in practice people will just add sbin to PATH to get a somewhat usable system, which makes the division between bin and sbin useless.
Furthermore, on BSD derived systems binaries that should not be invoked by users directly (e.g., daemons) need to be stored in libexec.
/sbin is for statically linked executables, while /bin is for dynamically linked executables. It has nothing to do with daemons vs non-daemons, nor with things having to run as root.
Go take a look (using ldd) in your /sbin and tell me exactly how many of them are statically linked. On my system, only 170 out of the 838 items in /sbin are statically linked.
https://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html
> Utilities used for system administration (and other root-only commands) are stored in /sbin, /usr/sbin, and /usr/local/sbin. /sbin contains binaries essential for booting, restoring, recovering, and/or repairing the system in addition to the binaries in /bin.
I can't recall any Linux distro or Unix variant set up in the way you describe. In addition, the Filesystem Hierarchy Standard disagrees with you.
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html
You may be thinking of the /bin and /usr/bin difference, though.
I believe they're referring to the old SunOS (at least) convention that /sbin was for utilities that could be run during the boot process before /usr was mounted. These tended to need to be statically linked, as the .so libraries were all under /usr. SunOS was how I learned the Unix filesystem layout, but of course that means a lot of my ideas of what "should" be where are outdated at this point.
Rather, the convention was that /sbin was for static binaries so that the system could still be fixed online if the dynamic linker got hosed. It's not about /usr not being mounted, but /lib/ld-linux.so not functioning correctly. For that reason, glibc still ships (or used to ship) an sln binary (static ln), and Debian still offers sash (stand-alone shell): so you could at least try to restore the dynamic library link farm by hand.
But I have only ever seen historic references to that argument, from back when dynamic linking was scary and unreliable. I certainly have never encountered that situation in almost 25 years of using Linux.
> I believe they're referring to the old SunOS (at least) convention that /sbin was for utilities that could be run during the boot process before /usr was mounted
My memory is hazy but I recall the distinction being / vs /usr not /bin vs /sbin.
The article we're commenting on has that as the justification for /usr/bin and /bin in the second paragraph.
sbin as "static binary" is an ahistoric retcon, like claiming usr means 'universal system resources'. (It means 'user' and was the original /home.)
Then why is /sbin missing from PATH in Debian? Does that mean that only root can run statically linked binaries?
The tools that root needs are more often served by being statically linked than dynamically for the situations where the volume with the shared libraries fails to mount.
Having mnt be statically linked makes it much easier to recover that system.
The ideal of "/sbin for system tooling" isn't so much one of static vs dynamic but rather users accidentally finding system tools that don't work and sending email to the admin saying "mnt gives me a permission denied error" when they have no business running it.
Pretty sure on both of those /sbin is just a symlink to /usr/sbin. If the static thing was ever true, I suppose once you've moved everything into /usr you wouldn't bother anymore.
> so why should those be present in the user's path
And why shouldn't they?
It's not as if a user could do anything damaging with them, if the system is set up properly.
> Having only usable binaries in the path aids discoverability of the system.
Except when someone new has to go online to ask "I found this tutorial telling me to use the `xyz` command to do this, but all I get is `bash: xyz: command not found`, please help!"
Been using Linux for years and it still trips me up when it tells me "command not found".
I totally got thrown a curve ball when Debian started telling me 'shutdown' was not found...
Don't you need sbin in PATH anyway when you want to run them with sudo?
There's a "secure path" option in sudoers that you can use to add additional directories to the path that is searched when sudo is invoked.
Most examples will include the standard user path plus /sbin and /usr/sbin but you can add any directories you want to the option.
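For reference, the option is spelled secure_path, and a typical line in /etc/sudoers looks something like this (the exact directory list varies by distribution):
  Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"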
Isn't that for executing rather than autocompleting?
Yes, that's correct, it's only about searching the right directories to find and execute the program you asked for.
But autocomplete after sudo doesn't work for me on a stock Debian install anyway, not sure what one needs to do to get around that. I don't really rely on it. If I'm doing enough work that needs root I start the session with "sudo su -" anyway so not having autocomplete after sudo is not a big deal for me.
What if I want to run fsck on a drive image owned by my user? If I can run /sbin/fsck, I can rightly run that on such a file without using sudo.
> Having only usable binaries in the path aids discoverability of the system.
Downside is it stops the autocomplete, so if you, say, wish to quickly check what a binary is called on the system, e.g. whether you should sudo apache2 or httpd, it will not work...
> the value in having a bin vs sbin split is in keeping system binaries (daemons, root-only tools) off the user's path
I think it's nice to be able to keep admin utils out of an admin's PATH when the admin isn't intending to use them.
It's much less interesting to me to keep daemons and such out of anyone's PATH if running them can't do much, though usually those things really belong in a libexec directory and should be exec'ed intentionally only.
I do on my laptop dev environments, but I do understand your point; it's not a use case anyone but resource-constrained devs has.
Hypothetically speaking, would forking FreeBSD or a *nix to use a simpler folder structure be feasible? I can imagine a lot of package managers and applications make assumptions about the folder structure though, so there would have to be a lot of changes made to make everything work.
I was thinking "just symlink /sbin with /bin", but there would probably be conflicts.
> Hypothetically speaking, would forking FreeBSD or a *nix to use a simpler folder structure be feasible?
Not only feasible but it's been implemented a few times over the years. The most notable being GoboLinux[1][2], which is nearly 20 years old.
[1] https://en.wikipedia.org/wiki/GoboLinux
[2] https://gobolinux.org/
> I was thinking "just symlink /sbin with /bin", but there would probably be conflicts
Given how long /sbin et al have been around, there would always be some edge cases. However it is still possible to do. GoboLinux uses symlinks to achieve FHS[3] compatibility while still having friendly directory names. ArchLinux also just has one bin directory and uses symlinks for compatibility:
[3] https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
ArchLinux symlinks /sbin, /bin and /usr/sbin to /usr/bin, and also /lib, /lib64 and /usr/lib64 to /usr/lib:
  $ ls -la / | grep -e bin -e lib
  lrwxrwxrwx 1 root root 7 Dec 6 23:41 bin -> usr/bin
  lrwxrwxrwx 1 root root 7 Dec 6 23:41 lib -> usr/lib
  lrwxrwxrwx 1 root root 7 Dec 6 23:41 lib64 -> usr/lib
  lrwxrwxrwx 1 root root 7 Dec 6 23:41 sbin -> usr/bin
I'm actually a bit surprised about `/bin` there. Maybe it's archaic but I've always considered the point of `/bin` to be a minimal set of tools that could allow an otherwise-hosed system to be booted/debugged. So it (and `/lib` and a few other directories) might be on a small, read-only partition while `/usr` and friends are on a much larger read-write partition.
Of course in the last twenty-five years I don't think I've ever really used a system set up like that. But it does seem nice to at least be able to do so.
IIRC, you are correct. And OpenBSD still sets up distinct partitions for `/bin` and `/lib` etc.
The first PC I built had 7 disk drives in a tower case, four distinct hard drives. Yes it was crazy. But the largest of these by far was 540 MB. It made sense to keep the boot stuff on its own hard drive.
Linux has `boot`, of course, but `boot` should never appear in $PATH. I think.
I used to set up my system exactly like that, but that was in 20xx. Since then I've gotten lazy.
I think nearly all Linux distros did this when they adopted systemd. That’s where I first read this argument.
I know RHEL, Debian, and Arch do. Not a lot outside of those families.
Same on my (X)Ubuntu 20.04.
Why hypothetical? GoboLinux[1] has already done it. Or if you want to just hide (rather than replace) the traditional Unix hierarchy from the user, you get macOS (inherited from NeXTSTEP).
The problem is that the actual benefits are pretty nebulous, so it's probably not worth the effort (and the drawbacks of using different conventions than most other *nix users).
[1] https://www.gobolinux.org/
I'm pretty sure Gobo Linux functions partially like macOS does, hiding system directories, by removing them from readdir with a custom kernel module[0].
[0]: https://gobolinux.org/doc/articles/gobohide.html
Also FreeBSD (and other BSDs) usually mount /usr on its own partition. I think that causes issues in Linux these days. So yes, merging in the BSDs may be a big change.
FWIW, Slackware keeps them separate, following the Linux Standard Base.
https://en.wikipedia.org/wiki/Linux_Standard_Base
FreeBSD definitely doesn't create /usr as a separate filesystem by default. I think some people still do that, but I have no idea why.
It’s called Darwin.
What about /usr/games! Insensitive clod!
/usr/games should never have existed in the first place, imnsho. If it's a small game, its binary could just have been put in /usr/bin. If it's a large game, it probably should be in /opt/$game.
It's a historical unix thing. Things in /usr/games (which were not all games) were frivolous and not essential to the OS, and were distributed as a separate tape or archive so that admins could easily choose whether or not to install them.
I'll also note /usr/games/dm ( https://github.com/vattam/BSDGames/tree/master/dm ) which allowed sysadmins to restrict when programs in /usr/games could be run. Setting up that structure in /usr/bin would be more work to maintain.
/usr/games existing allows people to find the most important binaries; if they were all mixed into */bin, finding them could be difficult.
We have a hierarchical filesystem. /usr/bin/games could be a thing.
Please correct me if I'm wrong: aren't binaries in /sbin and /usr/sbin statically linked, with no such requirement for files living in /bin and /usr/bin?
I always thought the rationale was that if statically linked binaries are on different partition they can be used to recover the system from a failure.
Edit: files in /bin are also statically linked, and I am unsure about what I wrote above but vaguely recall something like that
It is specified this way on OpenBSD: https://man.openbsd.org/man7/hier.7
> /bin/ User utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
> /sbin/ System programs and administration utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
It's nice to be able to still run on a crippled system without access to dynamically linked executables, so you can recover. But in practice, wouldn't just about anyone simply boot to a more capable recovery system (via another partition, USB drive, netboot, etc...)?
I just checked and the programs in /bin on OpenBSD are in fact statically linked. The ones in /usr/sbin are not.
Yeah, this is received unix lore: anything needed to recover a system needs to be statically linked and in /bin or /sbin.
That was indeed the tradition, but on Linux the GNU libc wants to be only dynamically linked, which creates a lot of problems for those who want static executables.
Because of that, in many Linux distributions there are few, if any, static executables. Due to this, it may happen that a botched glibc upgrade makes the system unusable, because no executable can be started to repair it (nowadays many distributions have a static busybox for such situations). I have seen this a couple of times, and the first time I could not understand what happened, because I was used to older systems, where the commands that I tried to execute (e.g. ls or mv) had been statically linked. Such a thing could never happen in a traditional UNIX or Linux system, before glibc disallowed static linking.
The GNU libc should have been split into a libc with most of the functions, which may be linked statically without problems, and into a small library with the name resolving functions, which could be linked dynamically only by the programs which need those functions.
Even better, the name resolving functions should have been organized in such a way to be able to use their default configuration with static linking and choose dynamic linking only when you really intend to override the default configuration when using less common services, e.g. NIS.
This happened to me on arch recently. I updated pacman but it didn’t warn me it needed an updated glibc. Now pacman refuses to run.
It should be easy enough to repair, but it was just an old laptop I wanted to test something on, so I ended up throwing the laptop back in the drawer instead.
The good thing about arch packages being just tar archives is when pacman fails, you can often fix it by `tar xf` ing the right packages at the root. It's ugly but it works most of the time
I once heard about a "ln" variant called "sln", statically linked, as opposed to the normal ln one, so you could fix a system where the dynamic loader is broken and thus ln is unusable. I can't find it on Ubuntu, though.
Then: statically linked bins go into /bin, all the others into /usr/bin, plus 2 symlinks: /sbin -> /bin and /usr/sbin -> /usr/bin. It requires duplicate binaries, one version statically linked and the other not: I still want "env" to exist statically linked, but tons of scripts start with this horrible '#!/usr/bin/env MYPREFEREDSCRIPTENGINE'
What ancient system makes a speed difference in command lookup in PATH?
Calls like execvp() do little more than splitting PATH on ':', followed by repeatedly invoking execve() on ${dir}/${filename}. The fewer elements you have in PATH, the fewer execve() calls need to be performed in the worst case.
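In shell terms the search amounts to roughly the following sketch, where $cmd stands in for whatever command name is being resolved. (Real execvp() skips the -x test and simply calls execve() on each candidate, treating a failure as "try the next directory".)
  IFS=:
  for dir in $PATH; do
      if [ -x "$dir/$cmd" ]; then
          exec "$dir/$cmd" "$@"      # replaces the current process on success
      fi
  done
  echo "$cmd: command not found" >&2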
Is that ever going to be a hot path?
It's probably not exactly going to be hot, but even failing execve is inherently semi-expensive since it needs to be a syscall and incurs context switches.
It's just outweighed, by a couple orders of magnitude, by all the overhead that comes with successfully launching another executable, unless you have, like, a thousand junk paths in your PATH.
Theoretically can be. Every command you invoke without a path will need to look up PATH.
In practice well behaving shells cache the contents of PATH to speed up operations.
Sounds like they need to be fixed for inefficient handling of a simple operation.
The fix is for the user to use a smaller $PATH when possible. Any method of checking that the command exists and is executable before trying to execute it leads to TOCTOU race conditions.
https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use
I’m assuming you are proposing to stat each candidate before trying to execve it. I’m also assuming that a stat system call is roughly as expensive as an execve of a nonexistent or non-executable path.
For every failed candidate, you are doing one system call, so roughly the same cost each way.
Now if you just do an execve, you’re just paying that cost. If you stat first, you pay the cost of another system call that doesn’t change the flow of your program at all (a nice way of saying you’re wasting time).
Unless stat is dramatically faster than exec on a nonexistent or non-executable path, there’s never a case where this is better.
Context switches could straightforwardly be saved by doing the PATH splitting and lookup in-kernel, or just providing a list of executable paths to check.
It didn't work out this way historically (doing unnecessary string processing, requiring extra memory, could've been more expensive than the context switches), and the performance impact of failed execve isn't normally a high priority, and there are other reasons not to want stuff in the kernel (not that it stops frankly less critical stuff from getting in the kernel), but there's definitely low-hanging fruit here if it like, mattered.
Enlighten me how you would implement it instead.
It's not really an accurate description anyway. Most shells will only perform the PATH lookup one time per command, then store the found fully-qualified file path in an in-memory hash table for quicker lookup each subsequent invocation. This is why you need to blast the cache if you delete or move an executable. Plus, many common utilities are replaced by shell built-ins anyway and they never require directory traversal at all.
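In bash, that cache is the command hash table, and you can poke at it directly; a quick example:
  type ls        # reports whether ls is an alias, a builtin, or a file found via PATH
  hash           # list the commands bash has already resolved this session
  hash -r        # forget all remembered locations, e.g. after moving a binary
  hash -d ls     # forget just the entry for ls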
A merge needs to be done carefully for backwards compatibility.
You could move all the things in /bin and /sbin to /usr/bin and /usr/sbin, then leave behind links (symbolic or hard).
Since everyone ends up having /bin and /usr/bin in PATH, this merge makes a lot of sense from a performance point of view.
Merging bindirs and sbindirs is a touchier topic. Many things in sbindirs should have been in bindirs all along, and many should move to libexecdirs, but some should stay behind so that privileged users can keep sbindirs out of PATH when they're not wearing an admin hat.
What was the reason for wanting to merge? Change is breakage for someone, so there ought to be a reason to do it, rather than a reason not to.
We _could_ all decide to drive on the other side of the road, if the other side is better, but you have to incorporate the cost of the change.
> Change is breakage for someone, so there ought to be a reason to do it,
Simplicity is reason enough to change something.
When things break because of reasonable change, they can be fixed. And in this case, backwards compatibility can be ensured simply by symlinking things.
> What a shame.
I think this is a pretty dangerous attitude, and it is really the only thing wrong with Linux, and probably leads to replacement of simple structure and functionality with a complex software suite that is merely more convenient, like systemd. "Let's change this thing because we want to, because it will improve performance 0.0024%"
Feature creep is what happens when restraint was not exercised.
IMO, since it really doesn't matter what the filesystem looks like, leave it be for standards and compatibility. Seriously, it takes, idk, maybe, a lack of humility to want to change fundamental characteristics of UNIX when the reasons for doing so are a little capricious.
I'm not really talking about the parent, fwiw. I'm talking about the crowd and ochlocracy.
The opposite attitude in the Linux world is also dangerous and tiring: don't you dare change something that has been there for 30 years. Like with this very article, there were plenty of people saying "the /usr split is there for a reason!" No, it's just a historical quirk.
There are plenty of greybeards for whom "Linux" is a full-screen terminal running emacs on decade-old hardware. "I don't use antialiased fonts, why the hell should I care about decent HiDPI support?" And then they protest every time some working group tries to modernise and improve the Linux desktop. You see them every time on this forum.
I'm a greybeard, I've used Linux full time on the desktop for 20 years. I don't get this conservative, "we don't need it" attitude.
> Like this very article, there were plenty saying "the /usr split is there for a reason!". No, it's just an historical quirk.
For those of us who ran small-disk NFS workstations back in the day, having the split and a common /usr was no quirk and very useful. (There were also diskless (Sun, OpenFirmware netbooting) workstations: common /bin, /usr, but per-machine /var on the NFS server.)
The article states:
> Cheap retail hard drives passed the 100 megabyte mark around 1990, and partition resizing software showed up somewhere around there (partition magic 3.0 shipped in 1997).
Yeah, except if you have a fleet of several hundred or thousand workstations to provision. "Cheap" is relative, especially if you're an academic institution.
Even if a split was pragmatically warranted, the fact that the user directory was chosen is without a doubt a quirk, an accident of circumstance that has since been perpetuated out of tradition (or less charitably: cargo cult mentality.)
This is maybe why I gravitate towards NixOS now. It is already in its inception such a departure from tradition that the conservative crowd will probably not even attempt to use it, which in turn will make innovation more likely.
Folding this back onto the question at hand:
> It's also dangerous and tiring the opposite attitude in the Linux world
You're literally saying that not arbitrarily changing the file structure of linux is dangerous. I don't think that's what you meant.
It's not about "because it's been that way for 30 years," even though it's been 50 years, but never mind that; it's about consistency and standards. It just does not matter one way or the other what the structure of the file system is, so any agenda to change something that doesn't matter is itself a specious agenda. Changing fundamental design introduces complexity for no good reason. As soon as you do it, you've created a special case that doesn't work anywhere else and jeopardizes compatibility.
I agree there'd be quite a bit of compatibility breakage and churn associated with trying to change these at this point.
That said, I think one of the better reasons (and ways) to weigh the value of changing some long-term practice is to focus on the anticipated costs of the change on one side of the ledger, and the ongoing (easy to ignore) unbounded costs of the status quo on the other (and appropriately weight them by who pays and how often). To shoot from the hip:
- If it's only a modest improvement that still supports a bit of misunderstanding, folksonomizing, and arguing about where things belong--it'll just waste time and energy better spent elsewhere. Any time would probably be better spent on writing and promoting/propagating a really good canonical reference to the status quo that can help drive out confusion and enable devs/admins answer practical questions (even if inefficiently).
- If (utopia warning) someone is able to significantly improve how accurately and quickly humans can make real dev/admin decisions from a clear mental model _and_ get enough buy-in to do it across all of the major Unix-alikes, it's probably worth some medium-term pain.
FWIW, the ongoing progress of NixOS, which doesn't really have any of these paths (beyond /usr/bin/env and /bin/sh), demonstrates that this pain is surmountable with enough eyes and hands.
> "the /usr split is there for a reason!". No, it's just an historical quirk.
It's a historical quirk on linux, where there is no clear separation between "base OS packages" and "3rd party packages".
On FreeBSD the split is very real, anything in /bin/ ships with my OS and is maintained and updated by the FreeBSD team. Anything in /usr/bin/ comes from ports and is thus a 3rd party package I installed and can be safely nuked and I need to maintain/update it.
This is wrong (and dangerously so too).
On FreeBSD 3rd party packages go into /usr/local and not /usr
You absolutely will get base packages in /usr/bin (eg `env`) so nuking /usr/bin will break your FreeBSD install.
There's a good write up here: https://unix.stackexchange.com/questions/332764/role-of-the-...
> It's a historical quirk on linux, where there is no clear separation between "base OS packages" and "3rd party packages".
It was a historical quirk to start with. At Bell Labs, back in the early 1970s, Unix was being developed on PDP-11s with RK05 hard disks (with removable disk packs), which had an amazingly generous capacity of 2.5MB each. The Unix operating system had grown too big to fit on a single RK05 disk volume so they had to split it across two. Other operating systems of the period faced similar issues, but dealt with them in (arguably) more elegant ways – on IBM mainframes, OS/360 maintained a database ("catalog") mapping file paths (dataset names, to use the proper terminology) to volume names, so you could move a file to another disk without changing its path. True to Unix's penchant for simplicity, its authors decided instead to just split the OS into / and /usr. And the split survived long after they'd upgraded to more spacious disks.
Any other explanation for the split is essentially a retcon. Some of those retcons (even if, as other commenters have pointed out, not your own) may actually have become true – some of them may have been approximately true to begin with, and they influenced people's decisions, thereby making themselves more true over time. But its ultimate origins will forever remain this quirk of computing history.
Funny aside: yours is an excellent comment, and yet proof that you didn't read the article, as the first part is almost word-for-word identical to the post.
I don't mean to shame you, I sometimes comment without reading TFA, and in your case you add a few more details that were not present in the article. I just found it interesting.
A much better separation is achieved in a few Linux distributions where every package is installed in a separate directory.
All the files that might be expected by others to be in certain standard locations are sym-linked to those locations, e.g. the executables to /usr/bin,/usr/sbin,/bin or /sbin, in order to appear in PATH.
In this case you no longer need any kind of database to know which files may be safely nuked to delete any package.
Moreover, in FreeBSD there is no such separation between the "base OS packages" and "3rd party packages", implemented as a difference between root and /usr. You might have misremembered /usr/local, which is indeed a place for "3rd party packages" in all UNIX-derived operating systems.
There are many "base OS packages" that are installed in /usr/bin or in /usr/sbin.
In any FreeBSD system, you can see their source files in /usr/src/usr.bin and in /usr/src/usr.sbin.
I have been using FreeBSD for a quarter of a century, since FreeBSD 2.0, and there has never been such a separation between root and /usr.
The separation between /bin and /usr/bin and the other similar pairs was made only to allow /usr to be unmounted, when it is on another device than the root device, but still have in the root file system the minimal set of tools needed for diagnosing and repairing any broken file system or network connection.
In ancient FreeBSD installations it was always recommended to have a separate small root partition, e.g. of a few hundred megabytes, and some large partitions for usr and var.
This original use has become completely obsolete, because now, for diagnosing and repairing problems, it is preferable to boot from an USB stick or from the network (using a ramdisk as root file system), and then run diagnostics or repair programs without touching even the root file system unless modifying it is intentional.
In FreeBSD it might still be possible to put /usr on a different partition or device and then unmount /usr, but in many Linux distributions this traditional usage is broken, because some of the programs installed in the root directories need components installed in /usr, so when /usr is unmounted they stop working.
GNU Stow provides this facility to all unices. I use it as a secondary package manager to keep /usr/local under control with self-compiled programs.
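For anyone who hasn't used it, the Stow workflow is roughly this (package name and version are placeholders):
  ./configure --prefix=/usr/local/stow/foo-1.2 && make && sudo make install
  cd /usr/local/stow
  sudo stow foo-1.2       # symlink its bin/, lib/, share/ into /usr/local
  sudo stow -D foo-1.2    # later: remove exactly those symlinks again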
I think you've confused /usr/bin with /usr/local/bin. I'm pretty sure a default FreeBSD install has plenty of stuff in /usr/bin.
The split is even stronger on NetBSD, where /usr is the base OS and /usr/pkg what's installed by the user through pkgin (binary packages) or pkgsrc (ports).
Likewise, the system configuration goes to /etc while the userland configuration goes to /usr/pkg/etc.
All it takes to factory reset a NetBSD system is an rm -Rf /usr/pkg.
Please have an upvote for this clarity. I prefer the FreeBSD approach personally.
You hit the nail on the head there, I would add that today it's more the KISS crowd throwing a fit.
Can't imagine the frustration devs must have.
Well, if you have an argument against KISS, we'd all love to hear it. The opposite of KISS is KICKME (Keep It Complicated Keep Me Employed). Life is a pretty good example of successful complexity. But we didn't design life, and we do not maintain it (understatement). Simplicity for simplicity's sake is self-evidently advantageous. Complexity for the sake of complexity is not.
I don't think it has to be that black and white. KISS in the Linux world, from my experience, has been to see any software that isn't "simple" as bloat, while their own software is like a car you can only turn left with.
To be clear, GP's stated intention was to simplify a complex structure into a "simple structure", about which the stated concern was a loss of performance, to which GP's rebuttal was that it actually improved performance. The main motivator for flattening the filesystem hierarchy isn't really performance, it's simplifying the organization, and (arguably) bringing it more in line with "pure UNIX", vs the quagmire of commercial SysV derivatives with a few dozen different bin directories in PATH, each with esoteric justification.
> To be clear, GP's stated intention was
> to merge /sbin with /bin, and /usr/sbin with /usr/bin
It's a bit more drastic than you make it out to be. This would give two valid $PATHS to the same commands. It would make tab-completion slow. It would likely break all kinds of compatibility across the SUS. And it is incredibly arbitrary, no better or worse than eliminating system hierarchy entirely and putting everything in /.
I've read this explanation a couple of times, and if you go all the way back to the PDP-11, the split does indeed sound ridiculous. I had my first contact with Linux from some magazine CDs in the late 90s, I think it was Red Hat or SUSE based. The documentation there had a much clearer explanation:
/sbin, /usr/sbin is for binaries that need root. You put them in separate directories so their permissions all match up, and so they don't show up when completing in bash.
The paths without /usr - /bin and /sbin - are available from the get go. It is the very first partition that is mounted, and what is guaranteed to be available if you do "init 1" or boot in single user mode. You can also do fsck from there (assuming the boot partition is not damaged). I don't know how this integrated with initrd (initramfs wasn't a thing yet). I think there was only one "base system" - either initrd was very basic, or the whole base was in initrd, or something similar.
The paths with /usr were managed by the package manager. Word of mouth was: don't install anything manually there. If you do (via make install), keep around the source so you can do make uninstall. But better install to /usr/local or /opt.
> /sbin, /usr/sbin is for binaries that need root. You put them in separate directories so their permissions all match up, and so they don't show up when completing in bash.
I also got this explanation, but it never made much sense to me. First of all, the binaries there are executable by everyone anyway. Second, it really doesn't matter that they show up during completion. Third, many of them work fine and are quite useful without root! I don't recall the specific examples that bothered me (/sbin and /usr/sbin have been in my PATH forever now), but I think it was something like ifconfig or ping.
>Third, many of them work fine and are quite useful without root
It's more complicated than that - many can do a subset of useful things without root.
Often they can read things as a normal user - things like `apt` or `sysctl` can show you information about your current system, but will only be able to change it as root.
And even something like "shutdown" might be usable for a locally logged in normal user on a systemd system - or it might not be, depending on local configuration.
Finding things that actually always "need root" for everything is kind of hard, even discounting "print help" as a useful thing in its own right. And if you only came up with "chcpu" and "switch_root"... would you really want to have a top-level directory just for those? Plus the historical location for some things is in /sbin, so moving them out has a compatibility cost.
Tbh I find the only winning move here is not to play. There are so few binaries that are actually only useful to root that they don't really hurt in tab completion, and they could always grow non-root accessible features.
Yes, but you are effectively turning your box into a single-user system. And that's fine if you are happy to work that way, but the origins of the directory structure are of course in multiuser UNIX. As a sysadmin, I would not want my /bin /sbin exposed to everyone. In your example I question the security implications of being able to run those binaries outside of root anyway (esp. in a professional environment) if you have your box exposed on a network.
> As a sysadmin, I would not want my /bin /sbin exposed to everyone.
Why not? It's not like most of them are suid (right?). Most Unix systems I've used allow any user to peruse /sbin at their leisure and run whatever they want.
Apologies if I'm missing your point, but yikes - any user on your system can run /sbin/shutdown?
Yes of course, just like on more or less any Linux system. But IIRC, shutdown is a suid binary that will do its own permission checks while running. The permissions on the /sbin/ directory should not matter.
Do you realize /bin is a symlink to /usr/bin these days?
This is exactly what I understood, too. The structure in Linux was familiar to me from SVR4 which I used in a number of implementations, most often Data General’s DG/UX (which was a fantastic system for its time).
It’s probably true that the distinction isn’t really important any more. The things we used to have to worry about in the (g)olden days of Unix (/s) are ridiculous by todays standards. We had one of the first 2.5GB RAID arrays in the country and could run a whole medical laboratory - maybe 100 people running Wyse 60 terminals - on it. We had a dedicated 500MB drive for the OS and a couple of other drives just for database logfiles.
These days the whole OS now fits on a single SSD which takes up a tiny fraction of the device. Large SSDs have made so much complexity obsolete for most people. I believe that one could, quite literally, run that old lab software from a single Raspberry Pi.
The point being, stuff that made sense in that old environment does not necessarily make sense any more. It’s good to have the discussion though.
Yes. And another benefit of /usr vs / was that it was simpler to read-only mount /usr than to read-only mount /.
Why do you want to do that? Well, when you have a machine with virtualization you can share the /usr partition across all instances, physically. Which makes a lot of sense if you want to virtualize hundreds of Linux guests on one physical box: you memory map the /usr partition in hypervisor ram, you share that ram across all guests and wham you have snappy fast virtual machines with low physical footprint.
That was actually done, e.g. on IBM mainframes running "your personal web server" for thousands of users in one single mainframe. Fun times.
And only when the root partition could also be mounted read-only, with just an individual /etc, and when large partitions became doable as /, only then did it start to make sense to abandon /usr.
The split made lots of sense back in the days.
> Why do you want to do that? Well, when you have a machine with virtualization you can share the /usr partition across all instances, physically.
Or you could share the whole /usr over NFS to hundreds of diskless workstations, each having their own separate / (also shared over NFS). Remember that disk space was expensive back then; having hundreds of identical copies of the large /usr tree on the NFS server would be a huge waste.
> I had my first contact with Linux from some magazine CDs in the late 90s, I think it was Red Hat or SUSE based.
Man that sounds awesome. I know we have it made these days with modern internet and computers, but sometimes I day dream about being 19 in the mid to late 90s and getting to experience that age of computing.
> I don't know how this integrated with initrd (initramfs wasn't a thing yet).
As far as I recall, early Linux didn't have initrd either; it's a novelty which came later.
> But better install to /usr/local or /opt.
I believe /opt is a novelty which appeared in either the FSSTND or its successor, the FHS; I think /usr/local is older (perhaps even older than Linux), being the default --prefix for autoconf.
"/sbin, /usr/sbin is for binaries that need root"
No, they're for statically linked executables.
The s stands for superuser, not static.
Bad/illogical/outdated directory structure is one of the most annoying things I've encountered while using Linux, because it makes the admin job feel unnecessarily messy (things are all over the place), and it feels as if there's a fundamental imbalance in the system that you can't get rid of.
> it makes the admin job feel unnecessarily messy
Many admins feel like a Jedi when they memorize all the trivia about a file's path.
There's no shortage of people in a particular profession that feed on unnecessary complexity even when the original reason for said complexity (i.e. tiny drives) doesn't exist any more.
Now if you'll excuse me I have to figure out why sound doesn't work on Linux in 2022 like it's 1997. No seriously, I legit have to do that now. Someone should really develop another system for sound, again.
I just got done building an omni-channel recording system, with an SoC running ubuntu-server and ALSA handling recording from several USB DACs connected to microphones. I feel your pain. Sound on Linux is a nightmare. But now that I have an understanding of it, here are some helpful things I learned.
- Make sure alsa-utils is installed
- Auto-configure hardware devices: alsactl init
- View hardware for playback (use arecord for opposite): aplay -L | grep "^hw:"
^ Use that to make sure your hw is being detected
- Lower level list of sound cards, if having issues: cat /proc/asound/cards
- Base alsa conf: /usr/share/alsa/alsa.conf
^ go there to dive deeper into what alsa is actually doing. It will also show you the priority for config files, so you can go through that and check which ones are in use and modify accordingly. alsactl init should handle most configuration though.
- you will want to mess with this: /etc/modprobe.d/alsa-base.conf …and get it working for your hardware. This is a resource to understand that file better: https://alsa.opensrc.org/MultipleCards
You can google configuration files and find one that works for you. Most issues for normal use will revolve around which card gets set to index 0 / default, so if you know which card you want as the default, I'd recommend finding your device id (I think cat /proc/asound/cards will give you vendor/product ids you can use) and then making a config using that id to set it as the default card, independent of indexing.
Turned into a lot, stopping here. Sound really shouldn’t be this hard for end users or devs, but it is what it is right now. Anyway, it’s fresh on my mind so at the very least, I might be able to point you in the right direction.
Good luck!
- Someone with no more hair left to pull out
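To make the "default card" point concrete, the two usual approaches look roughly like this (card index and module name are examples; check /proc/asound/cards for yours):
  # /etc/asound.conf or ~/.asoundrc: treat card 1 as the default PCM/control device
  defaults.pcm.card 1
  defaults.ctl.card 1
  # /etc/modprobe.d/alsa-base.conf: have the USB audio driver claim index 0
  options snd-usb-audio index=0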
"I use arch btw"
>Bad/illogical/outdated directory structure is one of the most annoying things I've encountered while using Linux
Every OS I've ever used has had these kinds of quirks, save simple ones that just dump everything in the root folder or equivalent. Its really hard to move files once you ship software and doubly so do an OS. Users expect files to be where they were last version.
There are really not that many places to look. I agree it could be better, but part of the time the issue is with package maintainers. And to some extent, systemd has made things a little more convoluted. Compared to Windows it is far better, because at least you don't have to go searching through thousands of registry keys.
At least for the PATH, you can also automate the looking. When on a new POSIXy system, I usually try "(IFS=: ; ls $PATH)" at the shell to get a listing of all programs available.
Roughly the same reason why dotfiles became a thing on Unix: https://linux-audit.com/linux-history-how-dot-files-became-h... Fortunately more and more software is putting its config in ~/.config/ rather than dumping it all over users' home directories.
AFAIK the XDG spec isn't a thing on macOS, so you get those CLI utilities written by devs on their fancy Macbook Pro that pollute your home directory, such as Deno, Doom Emacs, Elixir, Rust/Cargo, Kubernetes, npm, vscode, etc.
There is no specific reason for a program that uses the XDG dirs on other unices to not use them on macOS, other than some idea that it's "alien".
You can have ~/.config/. Nothing in macOS prevents you from having it. And so, some programs do. The worst thing that happens is that, instead of having one directory ~/.foo you now have one directory ~/.config/foo and nothing else in ~/.config. But as soon as you add the second thing that uses ~/.config, you now have two directories in there instead of a second dot-directory in ~.
It's just that for a bunch of them the XDG path is only used if it exists - e.g. emacs predates the spec, so it uses ~/.emacs.d (and a few others) first.
Cargo doesn't use the XDG paths at all, apparently - https://github.com/rust-lang/cargo/issues/1734. However it also needs a directory for binaries (~/.cargo/bin) and ~/.local/bin isn't actually in the spec at the moment (https://gitlab.freedesktop.org/xdg/xdg-specs/-/issues/14).
It's a real shitshow.
https://wiki.archlinux.org/title/XDG_Base_Directory
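The lookup rule the Base Directory spec asks applications to follow fits in a few lines of shell; "myapp" is a placeholder, and the ":-" expansion covers both the unset and the empty-variable cases, as the spec requires:
  config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
  data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"
  cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"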
> There is no specific reason for a program that uses the XDG dirs on other unices to not use them on macOS
Nobody stops Apple developers respecting a Freedesktop spec, but the point is many people that mostly know macOS probably didn't even know XDG was a thing. It's not like Apple encourages it in any of their command line utilities.
I notice that your comments often include trigger words/phrases like "devs on their fancy Macbook Pro". Then I realized that I do the same thing. You spot it, you got it. Maybe I'll start a 12-step group for snark addicts.
I'm sorry you got triggered by "devs on their fancy Macbook Pro". I don't know how you noticed this "often on my comments". Are you a fan?
It was just an observation that there are many devs writing UNIX tools on Apple hardware. There was no snark.
On macOS it's less of a problem because the OS tries to hide your home folder and shows Documents, Desktop and Downloads in the Finder. Still, I much prefer .local and .config to a pile of dotfiles.
I haven't seen any software that conditionally disables XDG on macOS. What I do see common is software that hardcode paths. Many of these software use different paths depending on the platform. But those aren't XDG compliant because XDG paths are configurable through environment variables.
Also FYI Doom Emacs is currently XDG compliant.
https://developer.apple.com/library/archive/documentation/Fi...
It's inconsistent, but certainly some programs adhere to XDG on macos. I've got pretty healthy looking ~/.config and ~/.local directories, and it's not all just my own stuff.
I can understand devs not using the right directories if their platform of choice doesn't come with an easy way to determine the right directory to put stuff in, let alone create it if necessary.
What I really want is an API that does "create/open/delete a file/directory for the relevant configuration/cache/resources store", be it user configured or platform default. What I get is an external package that gives me a list of potential storage locations (of which I'll probably just pick the first) that may or may not be actual directories on the system which I may or may not have access to touch files in.
Some devs are kindly reminded that there's a spec for these things but often it's too late as data is already in specific paths that users may have come to know. That way you end up with paths that get set by environment variables where you have to tell each and every program where to put their crap.
Other programs don't care enough to implement the standards (like Firefox; the bug report about XDG is old enough to vote [1] and it's still not implemented fully). Kubernetes has an open issue for its client that only ever gets bumped.
Even worse are devs that are reminded of standards like XDG and then decide to give everyone the middle finger. Snap is one of them, not only is the data directory hard-coded, it's hard-coded lowercase unlike every other standard directory on Canonical's distribution itself! Snap's biggest competitor, Flatpak, decided not following the standard is not a problem [3]. At least it's special snowflake folder starts with a period so that it's hidden by default, I suppose. Even Bash doesn't support XDG [4] because not everyone uses Linux (and apparently no effort should be made to support OS specific standards?) with the suggestion closed as won't fix.
Many tools that do support XDG only care about their own standards, of course; Windows has had SHGetKnowlFolderPath since Vista, replacing SHGetFolderLocation which dates back to Windows 2000. Still, developers like to push POSIX standards into Windows, creating .dotfiles and not even bothering to at least mark them as hidden.
There's a big list on the Arch wiki[7] listing programs and their compatibilities with XDG.
[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=259356
[2]: https://github.com/kubernetes/kubernetes/issues/56402
[3]: https://github.com/flatpak/flatpak/issues/1651
[4]: https://savannah.gnu.org/support/?108134
[5]: https://docs.microsoft.com/en-us/windows/win32/api/shlobj_co...
[6]: https://docs.microsoft.com/en-us/windows/win32/api/shlobj_co...
[7]: https://wiki.archlinux.org/title/XDG_Base_Directory#Hardcode...
Incidentally, did you know that PowerShell on Linux respects the XDG specification? It was rather unexpected when I first noticed it and it just tickles me pink.
I hate the scripting language, but technology wise Powershell is one of the most solid scripting engines out there.
Tangent, but that's what made getting into Linux/Unix really hard. You have all these folders and files and no README.md to explain what is what. And there seemed to be no logic at all with how things were organized or named (and names often were shortened to abbreviations that I couldn't comprehend). I'm wondering what a modern system made to be readable and understandable would look like.
The other thing, coming from windows, was not understanding where to install things. In windows there's like a single place where you install all your stuff.
> You have all these folders and files and no README.md to explain what is what.
Markdown is a novelty. Back then, it would be just README (with no file extension at all).
> In windows there's like a single place where you install all your stuff.
Windows was even worse. Whenever you installed something, parts of it were in a new directory at the root of C:\, and parts of it were dumped in C:\WINDOWS\SYSTEM together with all the rest that's already there, often overwriting files of the same name (and the names were limited to 8 characters plus the extension, so they were quite opaque) used by other software you had installed earlier (that's the original scenario of what is now called "DLL hell"). On later Windows versions, instead of a new directory at the root of C:\ it was a new directory within "C:\Programs Files" (or is it "C:\PROGRA~1"? Or perhaps "C:\Arquivos de programas" aka "C:\ARQUIV~1"? Or something else?), and instead of C:\WINDOWS\SYSTEM it was now C:\Windows\system32, and there's also the "Common files" directory somewhere. And since there's no package manager (actually there is one, but not everything uses it, and it's very complex), you don't know which file came from which software. Oh, and if the program you installed overwrote a "protected" system file, the operating system overwrites the file again with its own copy.
There is a package manager; what's missing is a directory tree owned by the package manager and protected from smuggling in unexpected crud without a big red warning for administrators.
As a user everything was in program files/
It wasn't, though.
It was, I really have no idea what you guys are talking about, everything I installed mostly went there and it was always easy to find applications. Again, as a normal user.
The executables, yes. But poke around Common Files, AppData, etc and you’ll see precisely what everyone is talking about.
Not to mention applications can and do (could and did?) put things pretty much anywhere they liked and there’s never a way to really know for sure. I’ve had to hunt down dozens of directories for programs that just did not give a fuck about being easy to uninstall.
Take a look at "man hier": https://www.freebsd.org/cgi/man.cgi?hier
"Link a man to man and you solve his problem for a day, teach a man to man and you've enlightened him for life."
Side note, calling the file system layout "hier" has got to be the stupidest naming choice. Did they want this to be lost forever so that nobody ever finds it?
Once upon a time, the manpages were a printed object. This, coupled with some of Bell and later BSD's quirks about naming things, led to some historic naming conventions. See also: this entire damn conversation on naming directories.
One wasn't intended to call man directly, instead calling apropos first, finding the appropriate page to open.
But what if I need to read on how to use apropos? Then I need to do `man apropos` and I'm stuck in a cycle! /s
https://www.man7.org/linux/man-pages/man1/apropos.1.html
There is "man hier" and also "man file-hierarchy": https://www.man7.org/linux/man-pages/man7/file-hierarchy.7.h...
Well, as the man page itself says:
"that's what made getting into Linux/Unix really hard. You have all these folders and files and no README.md to explain what is what"
In the old days, I read books for that.
Well, you can look at MacOS for a basic inspiration. Hide all the ugly Unix parts and expose sensible directories like Applications, Preferences, Volumes, Users.
This. Although I wish Apple would follow MS's lead and remove spaces in directory names. Right now we have something like
which is _nasty_ when working in a terminal. MS tried to fix this by making directories like:
The irony is, Microsoft originally put a space in "Program Files" intentionally, to force software developers to support paths with spaces in.
I don't know why developers have apparently collectively decided to go backwards. If your software doesn't support spaces there's a reasonable chance it doesn't support more exotic characters either, which really sucks if you are not natively English speaking.
> If your software doesn't support spaces there's a reasonable chance it doesn't support more exotic characters either, which really sucks if you are not natively English speaking.
The problem with space is that it's often a separator, which will not be the case for exotic characters. Fixing issues with exotic characters will not necessarily fix issues with spaces, and vice versa.
I honestly prefer having these spaces. It forces tools to actually cope with them instead of pretending spaces don't exist and breaking when they do.
It's not so much an issue with tools as it is an issue when working from the shell. You have to make sure to quote such paths, and autocomplete sometimes gets confused when it auto-escapes the space with \
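A minimal sketch of what that looks like in practice, using macOS's "/Library/Application Support" purely as an example of a space-containing path:

    # Quoting or escaping both work; the unquoted form is the one that bites:
    ls "/Library/Application Support"
    ls /Library/Application\ Support
    # ls /Library/Application Support
    #   ^ the shell splits this into two arguments, "/Library/Application"
    #     and "Support", and both lookups fail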
>In windows there's like a single place where you install all your stuff.
Open a cmd box and type
How many folders do you see? They all count as places. But I completely agree with everything you said about Linux!
The fact that there are several directories with binaries is not a problem by itself. The problem is that many applications use hardcoded paths instead of searching for these binaries using PATH.
It means that if someone decides to get away from this legacy structure and move the OS into something like /system/debian-11.1.2/, all those programs would break.
Examples: [1], [2]. I assume the developers hardcoded those paths because /sbin is often not included in PATH. (A sketch of the PATH-based alternative follows the links.)
[1] https://github.com/blueman-project/blueman/blob/fcef83a01c80...
[2] https://github.com/blueman-project/blueman/blob/fcef83a01c80...
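For what it's worth, a minimal sketch of the PATH-based lookup in plain POSIX sh; "sometool" is a made-up name and the fallback directories are only illustrative:

    #!/bin/sh
    # Prefer whatever PATH says; only fall back to the historic sbin
    # locations if the lookup fails.
    TOOL=$(command -v sometool)
    if [ -z "$TOOL" ]; then
        for d in /sbin /usr/sbin /usr/local/sbin; do
            if [ -x "$d/sometool" ]; then
                TOOL="$d/sometool"
                break
            fi
        done
    fi
    "$TOOL" --version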
>Standards bureaucracies like the Linux Foundation ... happily document and add to this sort of complexity without ever trying to understand why it was there in the first place.
That is because that is a standards organization's job. They exist to document what is actually being done, not editorialize about what should be done.
This seems to be a good example of the virtue of this sort of behaviour. The mostly arbitrary changes that have been done here have in themselves caused more problems and wasted effort than just keeping everything the same as it was.
Speaking of this, is there a good resource that elegantly but succinctly describes the intent of each of linux’s (Unix’s?) root directories?
I’ve spent like eight years with Ubuntu and realize it’s all symbol manipulation to me. I learn what goes where, but only through practice, never because I understand the semantics.
You probably have a "hier" man page on your system. https://man7.org/linux/man-pages/man7/hier.7.html
Also systemd has a "file-hierarchy" man page for its understanding of the hierarchy, which includes e.g. its use of /run and which directory can be read-only - https://man7.org/linux/man-pages/man7/file-hierarchy.7.html
Aha! Thanks so much.
https://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.pdf
Many distros, including Debian & Ubuntu, have merged /bin and /usr/bin, with symlinks for backwards compatibility: /bin -> /usr/bin (and similarly for /usr/lib etc).
https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
The Debian/Ubuntu merged /usr is incomplete; various things are broken by the way it was achieved:
https://wiki.debian.org/Teams/Dpkg/MergedUsr
Note: this is the dpkg maintainer arguing the apparently fairly unpopular position of symlinking the individual files inside /bin rather than /bin itself, in opposition to what appears to be the majority of Linux distros.
He's even added a warning to dpkg and a "usrunmess" tool to switch a system to his preferred way of doing things.
It's not clear to me where the breakage lies and I've not seen any actual reports of it.
For more context see https://lwn.net/Articles/890219/
As far as I know, the breakage is theoretical.
Suppose a package has a boot-time, size-optimized, limited binary in /bin/runk and a user-optimized, feature-complete binary that requires the entire system to be up in /usr/bin/runk. When /bin and /usr/bin link to the same directory, the package manager will extract these files and run into a problem.
Things become even more complicated when these tools are split into different packages (say runk-boot and runk-user). Tracking which file comes from which package can become near impossible.
Of course this can be resolved relatively easily; make the package manager link-aware by handling the merged-bin setup as a special case and warn or error when files conflict. People don't seem to want to do that for various reasons, some good, some based in opinion only. It's a mess.
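Here's a rough, throwaway-directory sketch of the underlying collision, reusing the made-up "runk" name from the scenario above (no dpkg involved, just filesystem behaviour):

    # Recreate the merged layout: bin is just a symlink to usr/bin.
    mkdir -p /tmp/merged/usr/bin
    ln -s usr/bin /tmp/merged/bin
    # "Package A" ships /bin/runk, "package B" ships /usr/bin/runk.
    echo 'boot-time runk' > /tmp/merged/bin/runk
    echo 'full-featured runk' > /tmp/merged/usr/bin/runk
    cat /tmp/merged/bin/runk
    # prints "full-featured runk": both paths name the same file, so the
    # second unpack silently clobbered the first. This is the conflict a
    # link-aware package manager would have to detect and report.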
This can also be resolved externally by controlling the repos and not fucking it up. Package conflicts are already a thing, Debian already has all the infra, you've always been able to cause the "theoretical" "breakage". Frankly, it's already a non-problem.
The wiki page has a pretty detailed list of breakages, e.g. "dpkg-query -S is currently broken by this approach". Hopefully the in-progress patch for some of these issues will get included.
I don't believe the list is detailed enough, because it just says "thing is broken", but not under what circumstances.
As best as I can tell, `dpkg-query -S` is broken by this iff it's passed a path to a file that's been installed under a different version of that path.
E.g. `dpkg-query -S /usr/bin/vim` fails if vim was installed via `/bin/vim`.
That's a minor bug that should simply be fixed in dpkg, and it's also easy enough to work around if the distribution simply installs all files in /usr/bin via /usr/bin.
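A query-side workaround is also conceivable in the meantime; this is only an untested sketch (dpkg-query -S is real, the wrapper name is made up):

    dpkg_owner() {
        # Try the path as given, then the aliased variant with /usr
        # stripped, since the package may have shipped either form.
        dpkg-query -S "$1" 2>/dev/null && return
        dpkg-query -S "${1#/usr}" 2>/dev/null
    }
    dpkg_owner /usr/bin/vim   # also checks /bin/vim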
None of that seems enough to unilaterally hold up a distribution-wide decision to move to merged /usr: especially not via official-sounding warnings in the install script of a major distribution component, especially not when this way of doing things works without many complaints in other distributions (including the related Ubuntu), and especially not by calling for a special Debian-only solution with its own problems years after the fact.
Frankly if I was a debian developer I'd be quite cross with the dpkg maintainer.
I'm more cross at the usrmerge people for inserting such a hack behind dpkg's back.
What exactly is "behind dpkg's back" here? This was discussed, in the open, years ago!
This was implemented, as an option, years ago. This was implemented fully in other distributions years ago! Fedora has had it for a decade, with few problems.
Dpkg has a few minor bugs with it, so dpkg is what needs to be fixed; it's what's holding up progress here.
In that usrmerge adds symlinks that dpkg doesn't know about, doesn't manage, and doesn't understand. It's as if the sysadmin had added random symlinks in various places. All bets are off after that. I'm surprised the amount of breakage isn't higher, TBH.
Arch and Gentoo do the same, though I don't think they break their respective package managers.
Fedora has too, and doesn't have any issues.
Considering Debian is the only one that hasn't just switched, the package-manager breakage does sound like a mountain being made out of a molehill.
Gentoo still uses the split setup by default. Unifying the directories is currently a work in progress and will eventually be the default from what I understand.
It's possible that Gentoo will still support the split setup even after the default is changed since it supports many different inits and libcs but I am not sure.
Did Debian do it? I thought there was internal conflict in the project whether it would happen, and has been for a decade.
As it stands, I believe that Debian as a distribution did switch, but the .deb packaging software/dpkg package manager that Debian relies on doesn't support it well.
You are looking for the Filesystem Hierarchy Standard: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
It's funny how many quirks of UNIX/C/etc go back to the severe limitations of early-day computers. Which is why using modern languages like Rust and its compiler really feels like coming up for air.
I think Windows is even more permeated with legacy
Off the top of my head:
Nobody questions why the main drive is C:, a remnant of early PCs having their two floppy drives on A: and B:
Or more recently: C:\Windows\System32 holds 64-bit executables, while 32-bit executables live in C:\Windows\SysWOW64
> Nobody questions why the main drive is C:, a remnant of early PCs having their two floppy drives on A: and B:
Recently I was trying to install some obscure driver for a device that doesn't autodetect on my Windows 10 work computer, and I had to go through the old-school "add device" wizard. When clicking to manually provide the driver, the dialog is exactly (or almost?) the same as the one from Windows 95, and the path defaults to... A:\! There's no floppy drive on this computer; there isn't even an optical drive!
Every time I get one of those it's like falling back in time!
Windows is a 32 bit shell for a 16 bit extension to an 8 bit Operating System designed for a 4 bit microchip by a 2 bit company which can't stand one bit of competition.
As a user, the main one that really annoys me is the "Program Files" vs "Program Files (x86)" split. I can kinda see why they have to be different folders, but why did they have to name it "... (x86)" instead of "... (32bit)"?
You can call the 64 bit architecture x64 all you like, but it's still using the x86 instruction set and it's frequently referred to as x86-64, so naming that 32 bit only folder "... (x86)" will just make things more confusing than they should be.
More still, why do some apps install in other directories such as AppData?
https://stackoverflow.com/questions/12427245/installing-in-p...
I think this was because, at the time the name was picked, Windows with a working 64-bit Windows-on-Windows subsystem only dealt with x86 and x64, so the naming made sense. DEC builds weren't relevant at the time and ARM was still far from gaining 64-bit support. There was a 64-bit version of XP for Itanium, but that couldn't run x86 code natively.
It'll be interesting to see what Microsoft will do if Windows on ARM actually takes off. As far as I know, the current translation layer can't execute amd64 on ARM, only x86. Will we see Program Files, Program Files (x64) and Program Files (x86)? It would make sense: the redirection system is already there, and the naming scheme would stay consistent. ARM doesn't need a special 32-bit folder because there's no notable 32-bit vs 64-bit clash; nobody is upgrading their Windows CE device to Windows 11, after all.
x64 emulation for Windows on ARM already exists. It is not based on WOW-style technology. Furthermore, 32-bit ARM programs do exist on modern Windows on ARM, using a version of WOW64 very similar to WOW64 on x64 CPUs. But they also have x86 WOW64, based on the Itanium version, which had to do binary translation.
- "C:\Program Files" <- ARM64 programs go here, as do x64 programs! - "C:\Program Files (Arm)" <- ARM32 programs go here - "C:\Program Files (x86)" <- x86 programs go here
I'm not sure how things like "Common Files" work in C:\Program Files, unless they made mixing ARM64 DLLs with x64 EXEs and vice versa just work. Which they probably did. I'm guessing they did not want another WOW version, since it was already bad enough to have to ship 3 different copies of certain system components, and they did not want to need to include a 4th copy, especially as ARM devices are often a bit light on storage space.
And I'd never map a network volume on D:, that's reserved for the CD drive!
And E, F, G, H for the CD-W, CD-RW, DVD, DVD-RW... :)
No, the CD drive lives on SCSI id #4!
sph's reaction when someone puts network volumes in the wrong spot:
Fun fact: this is the second time Microsoft has pulled this. The first time was for legacy 16-bit Windows applications running on Windows NT. Since most people have moved to 64-bit processors, it has been shuttered.
https://en.wikipedia.org/wiki/Windows_on_Windows
Yeah that is a windows gotcha!
So C:\Windows\system is a remnant from the 16-bit era?
Yes, in 16-bit Windows it was system, and then 32-bit binaries went into system32. By the time 64-bit arrived, so much stuff had system32 hard-coded in that there wasn't much point in trying to change it, so you ended up with SysWOW64 (when a 32-bit app runs under emulation, it 'sees' SysWOW64 as System32 and can't see the 64-bit system directory).
And the contents of both System32 and SysWOW64 are actually hard-linked from the side-by-side folder (WinSxS), which is why that folder is usually half the size of the Windows folder.
It's the Windows way to abstract system folders and provide binary compatibility across architectures. I'd much rather have ld.so.preload and multiarch than this hard links mess though.
I'm a big fan of Scheme, which is just about as old as UNIX, and which is based on the even older LISP (which is even older than UNIX, going back to the 50's).
I'd infinitely prefer to use either than Rust.
> GPLv3: as worthy a successor as The Phantom Menace, as timely as Duke Nukem Forever, and as welcome as New Coke.
That's one way to make new friends :)
> There's no actual REASON for any of it anymore.
No reason for it on an embedded system. Lots of backward compatibility reasons on servers/desktops.
It's my understanding that most distros have by now moved their stuff into /usr, though there might still be backwards-compatibility symlinks, of course.
Good luck mounting /usr when mount is in /usr/bin. Not everybody uses a ramdisk to boot the system.
Just don't put /usr on its own partition. What is the point anyway, after we have merged /bin into /usr/bin, /lib into /usr/lib, etc.? Just put your operating system on a single partition and be happy.
Not using an initrd is unsupported on lots of distros these days.
You mean initramfs, initrd is unsupported on lots of distros these days.
Right.
> backward compatibility
You're kidding me right? Nobody ever bothers with that for anything else and the company I work at spends like more than half the time resolving stupid install breaking changes that nobody asked for. This would just be one minor extra thing on that pile, but at least it would make sense for once.
Backward compatibility can be ensured with a couple of symlinks.
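Concretely, the merged-/usr layout that distros ship boils down to roughly this (illustrative only, not something to run by hand on a live system):

    # Everything physically lives under /usr; the old top-level names are
    # just compatibility symlinks.
    ln -s usr/bin   /bin
    ln -s usr/sbin  /sbin
    ln -s usr/lib   /lib
    # A script's hardcoded #!/bin/sh still resolves, via the symlink,
    # to /usr/bin/sh.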
I must say I really like the macOS idea that every app has its own folder. I think that apps scattered through the filesystem are not a good idea at all. Maybe they should symlink executables to /bin and shared libraries to /lib. Also, everything that is needed to boot should probably be in a sealed read-only filesystem or binary anyway. I think we have made a mess of the Unix filesystem structure and it really needs to be simplified.
macOS also does a more strict but tidier hierarchy, grouped into "domains" … /System/* is "stuff from Apple" (including /System/Library, etc.), / is for "stuff on the local machine" (/Applications, /Library, etc.), and then each user can have their own hierarchy in their user directory (~/Applications, ~/Library, etc.).
Of course, the "stuff from BSD" winds up in /bin and /usr/bin anyway, so it's still a mess.
https://developer.apple.com/library/archive/documentation/Fi...
You should really check out GoboLinux then : https://gobolinux.org/
Now that we understand this can we please put all system tools into /bin?
Disk space for binaries has not been a problem for decades now.
A long time ago, as a novice sysadmin, I spent some unhappy time fixing a broken Solaris server. The problem was that fsck was in /usr/sbin, and /usr was a mount point on an external drive array that had its power yanked. Challenge: to boot you need to mount /usr, but first you have to fsck it using the binary in /usr/sbin ...
After that I would make sure to have some working (static) binaries for rescue on every *nix system (tar at least, and on Solaris an extra /usr/sbin/fsck under the /usr mount point). You can fix a lot of things with tar, sed and netcat.
These days a static-linked copy of busybox on the root partition is usually enough; assuming you have the space for that. A 'full' initramfs can also help in case you need to bring over a USB drive of tools from another system or have changed hardware.
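Checking that the rescue binary really is static takes seconds; a small sketch, assuming a busybox at /bin/busybox (paths and package names vary by distro):

    # On a statically linked binary, glibc's ldd prints
    # "not a dynamic executable", and file reports "statically linked".
    ldd /bin/busybox
    file /bin/busybox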
Having the system mounted in its own subdirectory, rather than spread over multiple directories (/usr/bin, /usr/share, /usr/lib, etc.), has the advantage that a single read-only mount can cover the whole OS.
Having the OS mounted read-only provides some security benefits.
The other option would of course be to have / mounted ro and then have rw mounts for /home, /etc, /var and /tmp, but this is more complicated than a rw / and a ro /usr.
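As a sketch, that second option could look something like this in /etc/fstab (device names and options are made up, and a real read-only root needs more care, e.g. for parts of /etc):

    # read-only root, writable data mounts
    /dev/sda2   /       ext4    ro,defaults          0 1
    /dev/sda3   /home   ext4    rw,defaults,nosuid   0 2
    /dev/sda4   /var    ext4    rw,defaults          0 2
    tmpfs       /tmp    tmpfs   rw,nosuid,nodev      0 0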
While that's true, these days initramfs is what performs this job.
Disk space was not really the issue. Back in the day, extra partitions actually meant wasted space; it's more efficient to put everything on one partition.
The issue is organisation. There is already so much junk in the bin folders. I think it would be much neater to further split the bins into categories: "shell tools" like ls, [ and echo; "applications" like firefox and inkscape; "helpers" like gnome-settings-daemon; and so on. There is no need to show weird daemons when pressing TAB in bash, and there is no need to show `ls` when picking an application via a GUI.
The problem with that suggestion is that some things belong in more than one category.
A much more flexible way of organizing is to use tags. This way a file could have more than one tag.
Having a tag hierarchy would be even better, so you can browse down the hierarchy as you'd traverse the tree structure of a typical file system (with the added advantage of allowing a single file to have multiple categories that it could be in).
It depends. Fsck is slow on big hard drives. Also separating data from programs is a good idea.
https://fedoraproject.org/wiki/Features/UsrMove (2012) and the linked systemd article https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor... include some motivation for changing the status quo and moving all the duplicate root paths into /usr.
This is by far the best explanation for the mess that is historical file hierarchies I have ever read.
I think modern distros have resolved the issue; Void Linux, for example, has merged its bin directories into /usr/bin.
Ubuntu, Debian and RHEL, on the contrary, still split them up.
Past comments: https://news.ycombinator.com/item?id=3519952, https://news.ycombinator.com/item?id=11647304, https://news.ycombinator.com/item?id=22614731
> Embedded guys try to understand and simplify...
It is indeed true that when you have limited resources, simpler - in the sense of better, more beautiful - solutions often emerge.
On my personal Linux desktop PC I have dumped all binaries into /bin. And /sbin, /usr/bin, and /usr/sbin are symlinks to /bin.
One of my first exposures to *nix in the corporate world was on a Sun Sparc, running at the time SunOS 4.something.
The sysadmin at the time told me the /sbin versions of things were for statically linked binaries that didn't need any other filesystems mounted to read dynamic libs.
I'm not asserting it was right, but just another view into "tribal knowledge" vs "urban legend" vs ???
> Then somebody decided /usr/local wasn't a good place to install new packages, so let's add /opt! I'm still waiting for /opt/local to show up...
Heh, MacPorts installs stuff to /opt/local on the Mac.
For those who liked this text and want to read more from the author, have a look at his busybox replacement toybox:
http://landley.net/toybox/
I'd forgotten that RK05s only had 1.5MB, which made me think about my disk quota as a student around the same time on a mainframe: 500 180-byte blocks, about 90k. But I could back my programs up and take them home, on cards.
Anyone ever tried to run GoboLinux as a sysadmin? What was the experience like?
- `/home/username/bin` for binaries installed under that specific user
- `/bin` for everything except the above, including binaries installed under root
- same pattern for configs, auxiliary and transient stuff
change my mind
...cache invalidation, naming things, filesystem hierarchy
Taxonomy, in general, consumes and perplexes us. It only seems to get worse as time goes on. Look at your typical react app...
Path dependence -- just like the default 80x24 character terminal window size that goes back to the VT100 of 1978. A curious historical hangover. GUI defaults are obviously less consequential, though.
TL;DR. Storage used to be so expensive per gigabyte that binaries were arbitrarily categorized into distinct file system partitions. Nobody ever bothered to revisit this and merge back to just /bin.
Also, why the heck was $HOME/bin never a thing?
> I don’t wanna be callous but on a scale of 0=I stubbed my toe and 10=children are starving in [insert location] this seems to be a -3.
The scale is a measure of what though?
> My theory is every so often someone stumbles upon it, thinks “oh that’s interesting!” and submits it, not realizing it’s a duplicate? ..and then tons of readers in the same boat upvote it? (Similar to how urban legends about dog-sized rats taken as pets in Mexico keep circulating and will forever)
I'm not sure this is a theory, this is literally how the site works, is it not?
You know what I do when I see an article title that I've seen before and it's something I don't care about? I don't click it. Most times though, I do click into the comments to see what a 'new' audience thinks about it.
> I don’t wanna be callous but on a scale of 0=I stubbed my toe and 10=children are starving in [insert location] this seems to be a -3.
Not sure what we are measuring here but the issue of this particular article annoying you seems like a -25 by comparison, so maybe just ignore it?
> If not… holy crap can we please stop repeating articles like this???
Is there really a world-stopping problem that requires this to be fixed?
> My theory is every so often someone stumbles upon it, thinks “oh that’s interesting!” and submits it, not realizing it’s a duplicate? ..and then tons of readers in the same boat upvote it?
The social bug being that you keep getting older and there are always young newcomers who don't have the knowledge you've already picked up?
https://www.youtube.com/watch?v=9gWtrnb4KjU