Pneumaticat 5 years ago

FYI, AppImages do not run everywhere. There are a lot of issues with them on NixOS, since NixOS is all about having explicitly linked dependencies, and AppImages still often have implicit dependencies that aren't in the image itself, since they are assumed to exist on the host system.

See https://github.com/NixOS/nixpkgs/pull/51060 for an example.

  • sjellis 5 years ago

    Yeah, AppImages are not really comparable to snap and Flatpak, which both address this issue by managing shared sets of libraries as well as applications. Flatpak calls these sets of libraries "runtimes", and snap calls them "base snaps". The metadata for each application declares one runtime or base snap that it uses.

    This ability to easily provide different userlands to different applications is one of the critical advantages of these new application formats.

  • ktpsns 5 years ago

    Something you can do always on a Linux system as a user is to run the AppImage with

      LD_LIBRARY_PATH=".:$LD_LIBRARY_PATH" ./Your*.AppImage
    
    and, before that, put all the missing dynamic libraries in the same path. That task can be nontrivial, though, due to the (hierarchical/nested) dependencies of the .so files themselves (try "ldd *.so" to inspect them). That's one of the reasons why we have package managers in the first place...
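
    In case it helps, a rough sketch of that workflow (the file names are placeholders, not from the comment): use ldd to find the unresolved libraries, drop copies next to the AppImage, and launch it as above.

      ldd ./libfoo.so | grep "not found"        # see which nested dependencies are missing
      cp /path/from/elsewhere/libbar.so.1 .     # put the missing libraries next to the AppImage
      LD_LIBRARY_PATH=".:$LD_LIBRARY_PATH" ./Your.AppImage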
  • allset_ 5 years ago

    It also requires FUSE to work, which may not be available.

  • tigrezno 5 years ago

    But who really uses NixOS? 0.001% of Linux users?

    • nyanloutre 5 years ago

      How did you come up with that number?

    • majewsky 5 years ago

      In my circle, roughly 30%.

im_down_w_otp 5 years ago

One of the issues I've had with things distributed as Snaps or Flatpaks is that they tend not to pick up the settings and preferences that I've configured on my workstation.

For example, if I've set up themes, fonts, scaling factors, custom keyboard shortcuts, etc., they tend not to be available/utilized by the applications which are distributed in these bundling formats. For the most part, that's why I avoid using them.

I assume, but don't actually know, that this is also true of AppImage. Can anybody who knows better confirm or deny that?

  • pushpop 5 years ago

    Honestly, I would much prefer using a package manager to manually installing packages. Using a package manager means you have only one place to look for the package (searching in a browser for software can be so error prone for inexperienced users, not to mention more disruptive and time consuming). It also means you have only one place for upgrades (3rd-party updaters are a plague on Windows), and having everything updated through a centralised package manager means you’re less likely to overlook upgrading a package, which is better for your overall system security.

    My biggest gripe with OSX is that the default way of installing software is via manual downloading. Sure, OSX has the App Store, but that’s mostly garbage. Homebrew is a hell of a lot better, but it’s frustrating that I have to install a 3rd-party package manager on a modern OS.

    I get your rant about shared libraries, but you can get around those problems surprisingly easily (e.g. you can ship your SOs in the package directory like you might with DLLs on Windows, or you could just statically compile your binaries and do away with the SO problem entirely). Problems with SOs are something you’d expect a junior Linux sysadmin to learn to handle, so any developer or maintainer worth their salt should have already figured this out.

    • omnimus 5 years ago

      I assume you install a different browser than Safari. So is it frustrating that you have to install a 3rd-party browser?

      Come on, Homebrew is awesome. Install is easy, and then you just point brew bundle to a file with everything you need, go for coffee, and your computer is ready.

      • pushpop 5 years ago

        > I assume you install a different browser than Safari. So is it frustrating that you have to install a 3rd-party browser?

        That’s not really the same because Firefox isn’t a core part of an OS and Safari is actually a pretty decent browser in its own right.

        Whereas a package manager should be a core part of an OS (just as it is on any other UNIX-like OS), and OSX is, in my personal opinion, crippled without running Homebrew (or similar).

        > Come on, Homebrew is awesome. Install is easy, and then you just point brew bundle to a file with everything you need, go for coffee, and your computer is ready.

        Homebrew is OK. It’s not the best example of a package manager out there, but it does its job well. However, my point wasn’t about Homebrew specifically but rather that OSX should have been built with a proper package manager from the outset. And no, the App Store doesn’t count, because that’s completely inadequate in almost every department.

        • Wowfunhappy 5 years ago

          > And no, the App Store doesn’t count because that’s completely inadequate in almost every department.

          Apple would disagree. They quite clearly believe that all software should be able to fit inside their sandboxed app store model. I think they're completely wrong about this, but that's beside the point.

          • pushpop 5 years ago

            Of course Apple would disagree. It’s hardly surprising that companies will defend their income streams even when those products are actually pretty rubbish.

  • vhhn 5 years ago

    I was surprised when the VSCode snap told me that I should start using a different snap to install new versions of VSCode and that my settings would be kept. And they were. So there is some way to respect user settings across snaps, at least.

    • Lorkki 5 years ago

      The official VS Code snap is unconfined, i.e. it only uses snap for package management, without the sandbox. This means it can access the settings stored in your home directory just the same as if it were installed through apt.

      Snaps with strict confinement can also ask for home directory access as a specific permission.
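
      For illustration (the snap name here is made up), the interface/permission model can be inspected and wired up from the CLI:

        snap connections some-editor            # list the interfaces the snap plugs into
        sudo snap connect some-editor:home      # grant access to non-hidden files in $HOME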

    • redmorphium 5 years ago

      Possibly how it's done: the settings are stored in a config directory, like ~/.vscode or something.

      For instance, my neovim AppImage works this way (it still reads ~/.config/nvim/init.vim like regular neovim).

  • viraptor 5 years ago

    Just from reading the docs, it looks like your settings will be respected. By default it doesn't look like there's any sandboxing / namespaces happening. Haven't tried it in practice though.

  • zaarn 5 years ago

    Not any better than Flatpak or Snap in my experience, since Appimage applications have to (technically) ship their own libraries for everything.

earenndil 5 years ago

It makes me really sad that this is necessary. Unix has a concept of shared libraries. And somehow it managed to get ruined so irrevocably that there's no going back. This—this was a solved problem! It really was. It was solved, and then we unsolved it when we decided that 'move fast and break things' was more important than ABI stability. And now shared libraries are completely useless. I struggle to name a single C library that's still useful as a system-wide shared library nowadays. Libc/m doesn't really count because it's part of the language, and libz is small enough that everyone who needs it statically links it. Everything else is neither ubiquitous nor stable enough to be used. HOW DID THIS HAPPEN?

  • vlovich123 5 years ago

    Private developer wants to distribute a binary + shared dependency libs. On Windows they package it into an installer which unpacks it into the target destination and everything works. On macOS the user gets a folder that acts like a file, within which everything is stored. Additionally there are reliable releases, so something targeting a minimum of macOS 10.14 has a reliable way to specify that in the toolchain and know that the prerequisite runtimes are there (Windows is a bit less elegant here but still manageable).

    On Linux you have to provide RPMs for Red Hat, DEB files for Debian variants, ??? for Gentoo users. Moreover, your dependencies have to be managed in a totally bizarre way, and you need special launchers to put your shared libraries elsewhere and add them to the library path, to avoid making assumptions about whether or not the user has the right prerequisites. Or you run your own apt/yum/etc. servers to host your packages and play nice within the ecosystem. Additionally, some distros do periodic releases, some do rolling releases. Considering how small a population Linux is, it makes it more headache than it's worth to target for commercial shops that are cross-platform, as most of their customers are. That also isn't getting into the mess that 32-bit vs 64-bit is on Linux.
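
    The "special launchers" bit usually boils down to a tiny wrapper script along these lines (paths and names are illustrative only):

      #!/bin/sh
      # launcher: run the bundled binary against the libraries shipped next to it
      HERE="$(dirname "$(readlink -f "$0")")"
      export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
      exec "$HERE/bin/myapp" "$@"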

    Finally, the big advantage is that the release is done by the author. No more package maintainers providing questionable maintenance across a bunch of distros.

    • bubblethink 5 years ago

      >Finally, the big advantage is that the release is done by the author

      I see the opposite, that the release is not done by the author, as the biggest net win of the current system, although it may seem inefficient or bureaucratic. I do not trust the authors. The only modicum of sanity and trust comes from the fact that Debian/Fedora maintainers are actually on your (the user's) side and have strong rules and guidelines about everything. Desktop Linux doesn't have much meaningful isolation or sandboxing that malicious apps cannot circumvent. It's only now that we are seeing some efforts in this direction. Still, it's quite far from something like Android, where you can quite safely run arbitrary applications.

      • oblio 5 years ago

        With your logic, you've already lost.

        If you don't trust the developer of an application you already run, you're screwed in any scenario.

        Yours is not a realistic threat model.

        • flukus 5 years ago

          Developers don't deserve that trust.

          It's not just the threat model: developers are increasingly focusing on fast iteration and annoying users with constant and often unwanted updates, something Debian saves users from; very few users care about always having the latest features and bugs, or want to become beta testers. Not to mention the privacy shitshow from developers wanting telemetry, or worse.

          Software repositories like Debian and the Apple App Store are great because they put a layer between the developer and the users, so users need only a single trust relationship with the repository instead of one with every developer.

          • oblio 5 years ago

            Distributions in their current form are almost harmful. I like what they do, conceptually, but the model you're describing should only apply to the base system. I want Firefox to update ASAP, I want VLC to update ASAP.

            The distribution model should only apply to libraries and base tools. And even those should be versioned so they can coexist easily and I'm easily able to install any app, from the ones that want GTK1 to the ones that want GTKLatest.

            • flukus 5 years ago

              > I want Firefox to update ASAP

              Firefox is the perfect example of why I hate user facing apps updating constantly. They're always adding random features, breaking plugins (still don't have vertical tabs working properly) and shifting the UI around. It was much better back when they had stable releases.

              > The distribution model should only apply to libraries and base tools.

              As long as nothing breaks, it doesn't worry me how many times libc is updated; it's the user-facing changes that interrupt me that I want to avoid.

              > And even those should be versioned so they can coexist easily and I'm easily able to install any app, from the ones that want GTK1 to the ones that want GTKLatest.

              If they can't commit to stable releases and a non-breaking API, then they aren't going to commit to maintaining the 15 versions of GTK that you'd end up with on your system; that's the worst of every world.

          • tobias3 5 years ago

            If upstream isn't interested in maintaining a stable version (or, more realistically, doesn't have the resources), someone will have to fork it, rename it, and release it as a "stable foo fork". Upstream makes calculated decisions (if you want to be charitable) w.r.t. the resources they have, the new features they want to add, stability, etc. If those trade-offs are not what you want, you'll have to use different software. The same applies to e.g. the telemetry. And from experience, Debian maintainers often don't look at the code of the packages they publish (e.g. jwz's time bomb in XScreensaver), let alone backport bugfixes to the packaged version from the earliest maintained upstream stable version.

        • earenndil 5 years ago

          I think the parent means that, for opensource software, they trust their distro maintainers to read the source code and only publish trustworthy software.

          • oblio 5 years ago

            I know, but with his model a random third party decides what's best for that software.

            That third party has screwed up the security of the package on occasion (Debian being a famous example: https://www.schneier.com/blog/archives/2008/05/random_number...), has delayed package updates for years if not decades (I don't even need to provide an example, just do a diff of stable upstream versions and your favorite distro's package versions), has even broken packages on occasion, etc. And let's not forget the frequent cases where there's a personality clash between the upstream developer and a package maintainer...

            And this model also assumes that a package maintainer has the time or expertise to actually audit the code fully and correctly. Really bold assumption!

    • beatgammit 5 years ago

      It really isn't that hard on Linux. Basically:

      1. Define your dependencies (try to be conservative so you're not reliant on bleeding-edge features)
      2. Make a good app that people want, with a fairly simple build step
      3. Support one or two major distributions
      4. Ask for help in bundling for everything else
      5. Fix issues as they're discovered

      If you ask nicely and people want your app, the community will help you out with the rest. Just look at Steam, which is really only supported on Ubuntu and SteamOS, but is packaged by pretty much everyone.

    • earenndil 5 years ago

      Private developer wants to distribute binary, have system include shared dependency libs because—guess what, they're shared. That's what we could have had. That's the problem this is getting around.

  • nemothekid 5 years ago

    >It was solved, and then we unsolved it when we decided that 'move fast and break things' was more important than ABI stability.

    I don't believe it was "solved", and I don't believe it was caused by people "moving fast and breaking things".

    Linux became HUGE, in terms of the number of people involved, and there is no master body to govern anything. As an application developer, you could build against libfoo, then your app breaks on Debian because they never updated libfoo besides some strange monkeypatch, and breaks on Red Hat because they decided to fix bug #12494 differently from the developers of libfoo.

    God himself has decided it's too hard, too complex and too much of a clusterfuck of competing motivations to expect that RedHat/Debian/FooBarLinux will reliably all provide the same shared library. And the user doesn't care; bandwidth is cheaper than disk, and it's most certainly cheaper than the user's time.

    • earenndil 5 years ago

      That's a pessimistic and, imo, unwarranted attitude. There are absolutely platforms that maintain backwards compatibility for decades on end. The most salient example is the Linux kernel, but, well, Linux distributions are not a monolith. Fine. How about the C language, or Common Lisp? Perl packages are installed relatively globally, and things don't really break. And sure, there's some fragmentation there, like with SBCL-only Lisp packages, or GNU extensions, but by and large I can compile an arbitrary Perl/Common Lisp/C package with an arbitrary implementation of that language and, at least in the case of Perl and Lisp, it works basically the same way it did 10 years ago. This is possible, because it has happened.

      I will grant, however, that it is hard. Newer languages (Rust, C#, Java, Python, JS) are trying. Of them I would say Rust is the best, followed closely by C#, with Python and JS taking a distant 3rd and 4th, but none of these match the older languages. But Rust is newer than C#, Java, Python and JS. They decided: we're going to make a high-quality, stable package repository, with a culture of stability in packages. And aside from stuff requiring nightly, that seems to be happening; it's still pretty good, and it's still a hell of a lot better than Linux. Granted, Linux has a much more difficult situation to wrangle, and because it's less monolithic than those other constructions, tragedy of the commons tends to occur. But I think all it takes is a group of people deciding that they will do the work to make things right, getting support, getting lucky, making a super-dynamic linker, and we can fix this.

  • kbumsik 5 years ago

    > HOW DID THIS HAPPEN?

    For example, distributions' package update cycles are so slow that some packages are never updated to the upstream version for some reason. x11vnc's latest version is 0.9.16, but Ubuntu/Debian have been locked to 0.9.13 for more than half a decade [1]. ddclient is also a good example [2].

    In my case, I made a program that requires Python >= 3.6, but Ubuntu was stuck on 3.5 for years, so I was forced to use AppImage to bundle an in-house compiled Python 3.6.

    Not to mention that Electron apps are simply not compatible with distributions' dependency management.

    [1]: https://packages.ubuntu.com/search?keywords=x11vnc&searchon=...

    [2]: https://packages.ubuntu.com/search?keywords=ddclient

  • jillesvangurp 5 years ago

    It was never really solved. Shared libraries create huge integration-testing headaches, and the Linux landscape fragmented early. This means that there are a lot of combinations of libraries out there, in lots of different versions. Some distributions stick to really old versions of stuff to preserve backwards compatibility. E.g. Red Hat is really awful if you are a developer, since it pretty much has the wrong version of anything you'd care about. I remember this being a real PITA when you wanted to use e.g. the current versions of Java, Node.js, Python or whatever, and then realized that all of these were basically 2-3 years behind because the LTS version of the distribution was released just before the major release of what you need. That, and the widespread practice of engineering around it (e.g. building from source, installing from Debian unstable, etc.), results in a lot of problems.

    Luckily, these days you package the correct versions up as Docker images, so it's much less of an issue. The amount of space you save by sharing your libraries does not really outweigh the benefits of shipping exactly those things you need as a statically linked binary.

    Doing the same for user-space software makes a lot of sense. Apple figured this out in the nineties already. Mac software never really needed installers: drag the app to the Applications folder, delete to remove. Just works. The only package managers on OSX are for managing OSS Linux software.

    • dTal 5 years ago

      Nowhere is it written that a system can only have one version of each library. Indeed, there is an entire system (soname) for resolving which of many versions of a library a binary should run with. There is no reason at all why you couldn't have a system where all the libraries are shipped in your application folder, but it uses the system ones where available. There's also no reason why every distro couldn't have every version of every library, and all software since ever would just work with the system version.
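
      A quick sketch of how that looks in practice (library and binary names are made up):

        ls -l /usr/lib/libfoo.so.*
        #   libfoo.so.1 -> libfoo.so.1.2.3
        #   libfoo.so.2 -> libfoo.so.2.0.1      # two major versions coexisting
        readelf -d ./someapp | grep NEEDED      # which soname the binary asks for, e.g. libfoo.so.1
        readelf -d /usr/lib/libfoo.so.1.2.3 | grep SONAME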

      Distros screwed the pooch here.

  • mrkeen 5 years ago

    I hit this when I wrote some C on a modern Ubuntu machine.

    I wanted to sort an array based on information in another array. I don't think qsort has the power to do this, so I used qsort_r.

    I later found it didn't compile on another system (Termux on Android) because qsort_r didn't exist there.

    I don't feel like 'move fast and break things' is the culprit here (although it undoubtedly is in other cases).

    Having qsort_r in some unixes and not others feels wrong. And doing without qsort_r also feels just wrong. I don't know the solution.
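
    For what it's worth, the usual workaround is an autoconf-style feature test at build time: compile a tiny probe and only define a HAVE_QSORT_R macro if the libc provides the (glibc-flavoured) function. A minimal sketch, assuming a cc on the PATH:

      printf '%s\n' \
        '#define _GNU_SOURCE' \
        '#include <stdlib.h>' \
        'static int cmp(const void *a, const void *b, void *c)' \
        '{ (void)c; return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b); }' \
        'int main(void){ int v[2]={2,1}; qsort_r(v,2,sizeof v[0],cmp,0); return v[0]!=1; }' \
        > conftest.c
      if cc -Werror conftest.c -o conftest 2>/dev/null; then
        CFLAGS="$CFLAGS -DHAVE_QSORT_R"         # libc has glibc-style qsort_r
      fi                                        # otherwise fall back to plain qsort
      rm -f conftest conftest.c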

  • flukus 5 years ago

    > I struggle to name a single useful c library that's useful as a system-wide shared library nowadays.

    Libcurl is the only one I frequently use that seems to take it seriously. There might be others, but frustratingly few will indicate what level of binary compatibility they guarantee; it would be nice to have a list of reliable libraries. Not just for binary compatibility: it would also be nice to know how much churn you can expect with any given library.

    On the other hand, considering the state of the software industry with respect to privacy (including some open source products), running software from arbitrary sources is becoming less and less viable anyway, so I'm not sure it's a problem worth fixing.

  • curt15 5 years ago

    >HOW DID THIS HAPPEN?

    Microsoft treats Windows APIs as a contract with the developer, with behavior defined by specifications. Are Linux libraries typically managed that rigorously?

    • earenndil 5 years ago

      Linux the kernel does that—see Linus's rants about how 'kernel does not break userspace'. Unfortunately, someone decided at some point that versioned libraries should be a thing, and then all hell broke loose when OpenSSL and glibc decided to break compatibility in minor versions, so now it's just a free-for-all.

  • kitotik 5 years ago

    Isn’t cross platform/multi vendor development the cause?

    Static binaries are just a form of vertical integration.

    • earenndil 5 years ago

      The problem that's being solved is that vendors are incompatible, which could easily be solved if they were compatible. They all run a Linux kernel. They all use ELF and X11 and OpenGL. If I compile 'hello, world' on one distro, I can drop it onto another random distro and it'll still work, but after a certain threshold of complexity, that stops working. It doesn't have to stop working.

      • flohofwoe 5 years ago

        Even a simple hello-world doesn't work across distros since it requires a specific version of glibc.

        • earenndil 5 years ago

          The glibc version tagging has to do with specific symbols; hello world just uses printf, which has been there since forever. (Actually the compiler probably optimizes it down to a syscall, but still.)
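
          For what it's worth, you can list exactly which versioned glibc symbols a binary demands (the binary name here is just a placeholder):

            objdump -T ./someapp | grep 'GLIBC_' | sort -u
            # a low maximum version (e.g. GLIBC_2.2.5) is what lets the same
            # binary run on both old and new distros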

hardwaresofton 5 years ago

Video introduction to AppImage (linked on the AppImage website): https://www.youtube.com/watch?v=mVVP77jC8Fc

Also, a side point -- is it wrong to want people to spend more energy on building fat binaries? To me they are the ultimate in portability (by definition), and investing in projects like musl libc, distributions like Alpine, and languages like Go and Rust that build portable static binaries is so much more accessible to me as a developer.

All the approaches to portable apps seem to just be hacking around the problem but I wonder if we should instead be pouring energy into making fully static binaries easier to build, then trying to optimize them to get them smaller.

  • pknopf 5 years ago

    What if another heartbleed happens? Wouldn't it be better to update a single shared library?

    • mickael-kerjean 5 years ago

      In theory. In practice it's much faster to push a fix to your users with a brand new fat binary than having to figure out the mess that is distributing your software on every possible Linux distribution (obligatory XKCD: https://xkcd.com/927/).

      Also, shared libraries assume your software will work with a different version of a library, which is quite a bold assumption that may or may not be true depending on the phase of the moon.

      • hardwaresofton 5 years ago

        I agree with this -- I also think it's much easier to check for vulnerabilities this way.

        It's a bit of a stretch but I think we're moving towards a more micro-kernel approach across the board -- trying to move more and more code/libs into the software artifacts we run (in part making them bigger, like with containers/AppImage/snaps/flatpak).

        I'm no security expert, but I think it's much easier to maintain the security of barebones systems + fat binaries than big systems with smaller binaries. Running programs that are supposedly self-sufficient (i.e. will never need to dynamically link) is easier to reason about and secure.

        Also, there's the current renaissance in virtual machines and sandboxing tech (NEMU, Firecracker, gVisor, etc.), which is currently being used for containers and cloud stuff but could usher in a huge level of security for the typical user as well.

      • Conan_Kudo 5 years ago

        > In theory. In practise it's much faster to push for a fix to your users with a brand new fat binary than having to figure out the mess that is distributing your software on every possible Linux distribution (obligatory XKCD: https://xkcd.com/927/).

        Electron seems to have disproved this. There are many Electron based applications that are broken with glibc >= 2.28 even though a fixed version of Electron has been out for it for nearly a year.

        Fat binaries (or fat binpacks) are a failure.

        • hardwaresofton 5 years ago

          Would you mind explaining more about this? I'm not sure I completely understand what you mean -- glibc is basically impossible to statically link against. It's part of the reason why "static" builds don't really exist on Debian and many other distributions. Correct me if I'm wrong, but glibc just isn't portable -- this is why I mentioned having to go into Alpine and build with musl libc. It seems like the Electron project has chosen not to support it [0].

          Another aspect worth considering is the software logistics/delivery problem -- it absolutely would be great to have dynamically linked software updates if:

          1) Your software could always be sure of getting the exact dependency versions it expects

          2) It wasn't hard to distribute the software (AKA X > 5 providers are hard to package for)

          Assuming I'm not completely misunderstanding your point, if the Electron-based applications you're discussing were truly fat binaries, nothing could break them, outside of a CPU-architecture-level mismatch.

          BTW, there are some systems like Nix & Guix that have solved #1 -- it's extremely easy to ensure that your program gets the exact version of some dependency.

          [0]: https://github.com/electron/electron/issues/9662

giancarlostoro 5 years ago

You know that if even Linus Torvalds likes it, you've done something right. I love the idea of improving on the current approaches to installing / maintaining software on Linux. This is one approach I do like, but I still appreciate maintaining packages through a package manager. I would love to see a best of both worlds: packages, like debs, that can both run directly and be managed through the package manager, depending on how you choose to run them.

  • stubish 5 years ago

    The quote doesn't say he liked it. The quote says it is 'just very cool'. Maybe I've dealt with too many out of context book blurbs and sound bites in my time, but a single dubious endorsement like that is worse than no endorsements.

znpy 5 years ago

Every app packaging its own shared library and runtime because someone didn't want to deal with packaging software.

I foresee someone in five years complaining about how much RAM linux on the desktop uses. And somebody else blaming it on shared libraries not actually being shared because all the "apps" load their own snowflake library. So multiple copies of glibc, multiple copies of gtk, multiple copies of everything.

RIP RAM AND WALLET.

shmerl 5 years ago

Don't forget about the trade-offs. These kinds of bundled packages have worse security than distro-packaged software, where dependencies get patches and fixes, because most developers won't ever bother patching their bundled dependencies.

So know what you are paying with.

voodootrucker 5 years ago

What people use it for at the top, what it does in the middle, and how it works at the bottom, buried in a video - typical modern sites (except this one at least has the video).

What I would like to see:

1. Problem statement
2. How this solves it
3. Usage guide
4. Source code link
5. No appeal to authority of who's using it

  • satori99 5 years ago

    > and how it works at the bottom, buried in a video

    A video that plays for 12 minutes before it explains the bit about using an ELF header which mounts a disk image payload using FUSE.

  • beatgammit 5 years ago

    Eh, I think 5 is important because it shows that there's a vested interest in keeping the project going. I absolutely do make decisions on what to use based on who is using it because switching to something else after a project dies is a royal pain, and I don't have the time to maintain it myself (though I can certainly submit patches here and there).

opan 5 years ago

I'd rather see people use Guix or Nix when their native package manager doesn't have something.

dorfsmay 5 years ago

Does anybody have strong feelings about AppImage vs Snap vs Flatpak (or any other similar ones)?

  • Wowfunhappy 5 years ago

    AppImage makes me nervous because it doesn't actually promise cross-distro compatibility: it depends on which libraries the developer decides to ship inside the AppImage, and which they assume the distro has. This also raises questions about forwards compatibility with future OS releases.

    Snap and Flatpak, by contrast, have explicit systems in place to prevent this. Of the two, I much prefer Flatpak for being a community-driven project. Also, Snap doesn't let you disable automatic updates (without hacks to your hosts file or such). Whatever you think about the security implications, this feels very against the Linux ethos of the user always retaining ultimate control.

    • ktpsns 5 years ago

      On the other hand, AppImages are dead easy. They do not assume any host infrastructure is installed, which is especially handy if you lack root on a shared system.

      • Wowfunhappy 5 years ago

        I'd prefer we just standardized on Flatpak being installed by default on most distros. Once the base package is installed, you can also install Flatpak apps without root. AppImages won't work either if certain base packages, like FUSE, haven't been installed (by a root user).
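
        For example (the application ID is made up), a per-user install needs no root at all once flatpak itself is present:

          flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
          flatpak install --user flathub org.example.SomeApp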

  • bengotow 5 years ago

    I'll bite. I ship the Mailspring email app on Linux and I prefer Snaps as an application developer. The big win is that the snap system provides automatic updates for packaged apps. You publish the build on Snapcraft and 24 hours later everyone on all Linux distros has it. I cannot overstate how incredible that is as a developer. I used to burn a lot of time investigating issues that users reported on Linux, only to find that they were running a version a year old because each update required them to visit the site, download the dpkg/rpm, and install it. I know I could set up an apt server and whatever else to vend updates to the major Linux distros, but we also ship on Mac and Windows and that's a lot of overhead.

    The downside is that Snap-packaged apps don't always integrate correctly with the underlying system. For a while theme support was pretty broken. And subtle configuration options aren't always passed in. Worth it in my case though!

    • dorfsmay 5 years ago

      Interesting, very similar to the Android app store. The concern though is: what if your new release breaks something and it's a really bad time for the user? Can they temporarily revert to a previous version?

      • Wowfunhappy 5 years ago

        They can, temporarily. They can’t revert permanently, however, which is a dealbreaker for me ever using Snaps.

  • sandov 5 years ago

    I like the concept of AppImage much more than Snap and Flatpak.

    I fully embrace the idea of decentralized distribution of applications, as opposed to the way package managers work (a central repository maintained by the distro).

    I believe the operating system should only be concerned with the base software and present a sane interface so that the user can then install the specific programs they need; the OS should not care about how or where the user gets those programs.

    AppImage is the only project I know of that respects that idea. Snap and Flatpak are centralized AFAIK (or are unnecessarily hard to use in a decentralized manner).

    • Wowfunhappy 5 years ago

      Flatpaks can be distributed as standalone ".flatpak" bundles that can be installed offline. I'm not sure why this isn't promoted more.
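
      Roughly like this, if I remember the CLI correctly (the repo path and app ID are placeholders):

        flatpak build-bundle ./repo SomeApp.flatpak org.example.SomeApp   # developer: export a single-file bundle
        flatpak install --user ./SomeApp.flatpak                          # user: offline install, no remote needed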

    • stubish 5 years ago

      How does AppImage solve distribution of applications? It seems to do packaging, but the actual distribution part is left to the developer. Maybe they put them in a somewhat trusted location like a GitHub releases page, or maybe they are pinned to a web forum post.

      Something like Snap tries to solve distribution and updating, using a store and cryptographic signatures. For decentralized use, the snaps can be downloaded along with a signature, and they can then be installed on computers with no net access. The snapd software can verify that the binary came via the store and can be trusted that far at least. Or you can avoid the store entirely, distributing .snap files unsigned or using your own verification mechanisms exactly as a developer does with AppImage, and force the installation using the relevant CLI arguments.
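
      Something like this, as I understand the snap CLI (the snap name is just an example):

        snap download some-app              # fetches some-app_NN.snap plus some-app_NN.assert
        snap ack some-app_NN.assert         # on the offline box: import the store signature
        snap install some-app_NN.snap       # verified install with no network access
        snap install --dangerous ./foo.snap # or skip verification for a self-distributed snap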

    • deviantfero 5 years ago

      How is this different from Windows? I think this is good for dependency-heavy apps, such as Krita, but you should still try to keep things as centralized as possible; it makes updates easier and less painful.

      • sandov 5 years ago

        It's not fundamentally different.

        The practical difference is that the ecosystem of Linux applications is composed almost entirely of open source software. Consequently, installing something you downloaded from the web is much less dangerous than installing a closed source program on Windows, provided that you trust the website.

        I agree that the centralized scheme is easier to use in 80% of cases, i.e. when:

        (1) The package you want is in the repos, and ... (2) The version of the package you want is in the repos.

        But, when those 2 conditions are not met, installing software is usually harder than on Windows. Additionally, I don't like the very nature of centralized things, even if they are managed by the good guys.

        • stewbrew 5 years ago

          Unless somebody else built the app from source and reproduced exactly the same binaries, there is no guarantee that the binaries you download were actually built from the source you're looking at. Open source per se doesn't magically imply any benefits w.r.t. security. Things look different if the binaries were built on a central & trusted platform or by trusted packers.

          • sandov 5 years ago

            > Things look different if the binaries were built on a central & trusted platform or by trusted packers.

            How so? I believe the same principle applies to centralized distribution. How do I know the packer didn't change the code? The same way I trust repo maintainers, I can trust application developers, or any other third party.

            And reproducible builds are possible both in decentralized and centralized modalities of distribution. Aren't they?

        • deviantfero 5 years ago

        Yeah, I agree that it is a pain when a package is not in the official repos, and maybe I should see this as a solution to that. Currently I think each distro tries to solve it somewhat, for example Arch and its AUR.

    • morpheuskafka 5 years ago

      I have a similar but different view. I don't mind a centralized distribution platform, but I like how snap separates that from the distro, so it's easy to target many distros with one package.

    • viraptor 5 years ago

      If you haven't seen it yet, you may find Fedora Silverblue interesting.

      • sandov 5 years ago

        I'll take a look at it. Thanks for the suggestion.

TBF-RnD 5 years ago

Come on guys, I see a lot of negativity here, but we can have our cake and eat it too. Look at the following imaginary but likely scenario. A promising coder creates an app in C using CMake or whatever. For a veteran Linux user, using git and compiling is not a problem. Our up-and-coming coder wants to reach a wider audience, however, so instead of creating X packages he provides an AppImage. All of a sudden all the Fedora, Debian and Ubuntu guys etc. can run it without the hassle.

Now imagine the project turns out to be a silver bullet for some really important problem. What will happen? The maintainers for the bigger distros will simply download the code, and there will be maintainers who step up and maintain the software for the repos.

Voila, the best of two worlds.

... and if the project doesn't become a huge mainstream success, users can still get it via source or AppImage.

As far as commercial projects are concerned, they will operate according to different dynamics. But who cares; we want open source solutions for our Linux systems anyway.

Lowkeyloki 5 years ago

What I'm disappointed about with AppImage is that it doesn't deal with differing architectures, as far as I can tell. That would really help me right now: I had to send my laptop to Dell for repairs, and I'm now using just my Android phone and my Raspberry Pi 3B+ as a desktop. And compiling stuff on the Pi is SO SLOW!

sprash 5 years ago

If you want portable apps, just make one fully statically compiled binary.

This is the worst of all worlds: no security updates for the bundled libraries, and none of the performance gain that comes with static linking.

  • zurn 5 years ago

    The static-linking ship sailed years ago for glibc-based apps (glibc doesn't support static linking).

    • sprash 5 years ago

      Always use musl for static linking. As a bonus you might get an even smaller binary than a dynamically linked glibc binary.
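
        A rough sketch of what that looks like on Alpine (package names assumed; hello.c is a placeholder):

          apk add gcc musl-dev                # musl is Alpine's system libc
          gcc -static -Os -o hello hello.c    # fully static binary, no glibc involved
          file hello                          # reports "statically linked"
          ldd hello                           # "not a dynamic executable"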

      • cesarb 5 years ago

        Keep in mind, however, that AFAIK musl won't respect /etc/nsswitch.conf, so if, for instance, the machine is configured to look up users via LDAP, a musl statically linked program won't be able to correctly look up users.

        • beatgammit 5 years ago

          There are certainly cases where you need the features of glibc over musl, but those are pretty rare IMO, and you can always implement a missing piece yourself if using musl saves you enough in maintenance overhead.

baroffoos 5 years ago

What is the difference between an AppImage and just a binary marked as executable?

  • osrec 5 years ago

    It's supposed to contain all its dependencies.

zurn 5 years ago

Does it work on Android? For cli apps running under adb, at least?

hpaavola 5 years ago

"To run an AppImage, simply:

Make it executable

$ chmod a+x Subsurface.AppImage

and run!

$ ./Subsurface.AppImage

That was easy, wasn't it?"

No. How about clicking/double-clicking the app icon in your menu/desktop/the folder you downloaded it into?

  • IloveHN84 5 years ago

    You need .desktop files in /usr/share/applications
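
    For example, a sketch of a per-user entry (every name and path below is made up), which also avoids touching system directories:

      mkdir -p ~/.local/share/applications
      printf '%s\n' '[Desktop Entry]' 'Type=Application' 'Name=SomeApp' \
        'Exec=/home/me/Apps/SomeApp.AppImage' 'Icon=someapp' 'Terminal=false' \
        > ~/.local/share/applications/someapp.desktop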

    • hpaavola 5 years ago

      I, as a user, do not need any files in /usr/share/applications. The system might. And I don't want to deal with those.

      All other major operating systems (can) work like this: download something > click it > it works. No need to launch a terminal, set executable permissions, and type the name of the file.

jhoh 5 years ago

This website just makes me angry:

- Fixed social media share buttons that cover the text on mobile

- It auto-translates to German even though my system language is set to English (seems like they use geolocation for this, which is bad practice)

- Center aligned text that is annoying to read

Annatar 5 years ago

"No need to install". This is every system administrator's nightmare: users running arbitrary executables bypassing operating system packaging. Come time to upgrade or reinstall what can happen? If there are security updates which are needed to the program, what could happen, since this is statically linked?

This is the pinnacle of destructive laziness and amateurism in IT: as a developer, it is one's job to master every operating system packaging format for the target platform one develops for. OS packaging is a tool invented for developers, not a tool meant to be subverted at every turn and opportunity like this.

  • etaioinshrdlu 5 years ago

    Literally the reason why PCs won over mainframes. Users want to run the software they want to run.

    If you don't trust your users to run software on your server you probably shouldn't let them on your server in the first place... or else contain and isolate them with a VM or similar.

    Multi-user operating systems are feeling a bit like they are going the way of the dodo, to me... That is, actual multi-user systems, not user accounts for system services.

    • Annatar 5 years ago

      What do you think powers the Internet that you're connected to right now? Multiuser systems, running applications under different logins. Even the much-hated systemd brought physical multiuser computing back to GNU/Linux.

      • etaioinshrdlu 5 years ago

        Yeah, but the user accounts are at the application level, not the OS level.

        I doubt there is a top internet company around that makes a unix account for each web user. That would be an antipattern...

        • Annatar 5 years ago

          Search for "free shell accounts". You might be surprised.

          • etaioinshrdlu 5 years ago

            It kind of reminds me of shared hosting providers without root access. Sure, they exist, but they really have been overtaken in a big way by virtual private servers... That's what I mean when I say they seem to be going out of style.

  • viraptor 5 years ago

    I don't believe this is really a problem for administrators. Reasonable multi-user hosts will have both home and tmp mounted noexec and will not allow FUSE, so AppImages will not work. This solution will work just fine on personal machines though. I'd rather native packages were distributed too, but if the choice is this or nothing... why disappoint people?
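
    For instance (device names and mount points are illustrative), the usual hardening looks like:

      mount -o remount,noexec,nosuid,nodev /home
      mount -o remount,noexec,nosuid,nodev /tmp
      # or persistently via /etc/fstab, e.g.:
      #   /dev/vg0/home  /home  ext4  defaults,noexec,nosuid,nodev  0 2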

  • pjmlp 5 years ago

    Any savvy UNIX admin will be able to prevent new executables from running under user accounts.

    • Annatar 5 years ago

      But it won't solve the problem of developers' unwillingness to learn how to make operating system packages now will it?

      I don't get it: people like that will sink and waste hundreds of hours learning useless garbage like Puppet, Ansible, Chef, Docker or Kubernetes without batting an eyelash or even thinking twice about it, but they'll argue and fight back like hell come time to deliver their software as clean OS packages because they don't want to learn the technology. Technology which exists for them first and foremost: OS packaging is meant to be a developer's tool and best friend.

      • pjmlp 5 years ago

        Because those providing said technology cannot agree on what it means to be a GNU/Linux OS, and we have limited time in our lives to bother with a thousand variants of it.

        • Annatar 5 years ago

          Linux is just a kernel, so every distribution is a somewhat similar yet completely different operating system.

          Reality is, most software targets CentOS / RHEL, OpenSUSE / SLES and Debian / Ubuntu. That's exactly two packaging formats, RPM and DPKG.

          Now, let's presume for the purpose of illustration that learning both of those takes 100 hours (it takes much less): to learn Chef, Puppet, Docker, Ansible and Kubernetes to any degree of proficiency takes about 1,000 hours. Where's the business value?

          • pjmlp 5 years ago

            Try to install a random RPM package targeted at Red Hat on SLES to see how far you will go.

            • Annatar 5 years ago

              That's again due to incompetence and not a fault of RPM. My own spec files build cleanly across both without any additional effort. It's not rocket science but insight.

              • pjmlp 5 years ago

                Ah, like C-based security exploits are the fault of the programmer and not the lack of safety features to start with.

                • Annatar 5 years ago

                  Absolutely correct. Those who do not understand the hardware on the register and machine code level should go master that first before dabbling with C.

                  One has to learn to walk before one attempts to run. Working on and with computers requires competence and insight; no technology can replace that nor ameliorate it.

                  • pjmlp 5 years ago

                    Nice point of view; then you wonder why devs prefer package managers that abstract over OSes.

                    • Annatar 5 years ago

                      I do, because in the long run their approach is irrational and expensive.

        • Annatar 5 years ago

          But you don't have limited time to re-invent the wheel over and over again by inventing new programming languages, learning new frameworks and re-implementing what already exists? How many headlines here were of the "why do it inefficiently? Because I can!" type? The limited time argument is a fallacy in this context.

          • pjmlp 5 years ago

            A language package manager works everywhere, regardless of the OS.

            Thankfully, modern languages are mostly OS-agnostic due to their rich runtimes and library ecosystems.

            • Annatar 5 years ago

              A language package manager is the system administrator's enemy and his or her nightmare.

              Now you explain to me why and how that is the case.