axaxs 5 years ago

This makes me sad, but it's not completely unexpected. I worked on Antergos for a while early on and had a lot of fun. That said, at least 2 of the core devs went more or less MIA for months at a time as they got busy with life, myself included. My hat's off to the team, but specifically to Gustau and Dustin, who trudged along the entire journey through the years. Seriously great developers to work with.

As an aside, I just checked the geolocation server I'd set up for the installer.

03:14:51 up 1357 days, 3:43, 1 user, load average: 0.00, 0.01, 0.05

It's been up 4 years continuously on a Scaleway ARM box. I can't recommend them enough for such projects.

  • subway 5 years ago

    Does that mean part of your infrastructure saw no kernel patches in 4 years?

    OS uptime gave me pride in the 90s. Today it's usually a bad sign.

    • buildzr 5 years ago

      Unless you have untrusted users with SSH you can get away with a lot. I've reviewed many major Linux patches for the past several years and found we weren't actually impacted by most of them.

      For example, I don't need the Zombieload/MDS patches, as I don't have anyone running untrusted code on the servers. I didn't need the rds_tcp vulnerability patch from last week because I don't have RDS modules loaded on any of my servers. I didn't need client-side OpenSSH patches on these servers either, nor OpenSSL patches for SSL over UDP (DTLS). Typically a quick check with ansible is all it takes to confirm whether these things are real risks for you.
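
      A sketch of the kind of ansible check I mean (the inventory target and package names here are illustrative, not a recipe):

        # is the rds module loaded anywhere? prints a count per host
        ansible all -m shell -a "lsmod | grep -c '^rds' || true"

        # which sshd/openssl builds are actually deployed?
        ansible all -m shell -a "dpkg -l openssh-server openssl | tail -n 2"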

      EDIT: Just looking at some CVE lists... It looks like, assuming the entire attack surface is the kernel and pre-auth OpenSSH, you may be in the clear running stock Ubuntu Server Minimal 14.04, a 5-year-old OS today.

      Kernel vulnerabilities resulting in code execution in the TCP stack or related code are few and far between. OpenSSH vulnerabilities... well, the last pre-auth OpenSSH vulnerability, one of the two in its entire lifetime, had the severe consequence of... being able to check usernames too fast.

      Please let me know if I've missed a big one, but I don't see anything that could even be used to do more than DoS a system like this running an old kernel and openssh server.

      • chaosite 5 years ago

        The thing about Zombieload/MDS (well, not really those, they're really more theoretical attacks... But Meltdown/Spectre in general, and any other local root exploit) is that they turn a remote shell, and perhaps a very limited one, into a remote root shell.

        Not having any ports open is one thing, but I do think your attack surface is larger than just the kernel and OpenSSH. Does Ubuntu not have UPnP open by default?

        But things connecting from your computer to the outside world can also be exploited. Just the very first one I thought of, dhcpcd, has a recent CVE. And there are many more programs on a default Ubuntu install that connect to the outside world without user interaction -- are you willing to let a vulnerability in any one of those become a remote root shell?

        • buildzr 5 years ago

          > The thing about Zombieload/MDS (well, not really those, they're really more theoretical attacks... But Meltdown/Spectre in general, and any other local root exploit) is that they turn a remote shell, and perhaps a very limited one, into a remote root shell.

          There are also many, many other local exploits in Linux that don't get nearly as much PR, and if an attacker wants to take advantage of one they can basically just wait. Local privescs are pretty common, as the attack surface is massive.

          Isolating workloads under separate kernels in separate VMs, or better, on separate physical hardware, always beats relying on Linux's privilege separation. All but my development servers could be run as root with no significantly greater risk.

          > Not having any ports open is one thing, but I do think your attack surface is larger than just the kernel and OpenSSH. Does Ubuntu not have UPNP open by default?

          On Ubuntu server, as configured by this provider at least, that's all I have exposed according to netstat -nlput.

          > But things connecting from your computer to the outside world can also be exploited. Just the very first one I thought of, dhcpcd, has a recent CVE. And there are many more programs on a default Ubuntu install that connect to the outside world without user interaction -- are you willing to let a vulnerability in any one of those become a remote root shell?

          Not really a serious concern; dhcpcd isn't running on any of my servers. Sorry if there's confusion - I meant Ubuntu server... not much runs, really. Yes, of course I wouldn't suggest browsing the web or similar operations, which opens a massive attack surface, but for a server the attack surface is much narrower. Not much is phoning home except perhaps an update check, if you have that enabled.

          • chaosite 5 years ago

            > Isolating to separate kernels in separate VMs or better, separate physical hardware is always better than relying on Linux's privilege separation.

            Sure, you can always do more. You can air-gap your machines and sneaker-net everything that's needed, or if your server needs to send updates you can send UDP over a tx-only link (use an optical link and only connect the tx).

            But there's a cost-benefit analysis here. Discounting MDS is one thing; I actually agree with Intel's risk assessment on it, biased as they are. But generally, installing security updates on an LTS distro is easy and painless; there's no real reason not to do it.

            > All but my development servers could be run as root with no significantly greater risk.

            Are we operating under a different definition of "risk" here? Running servers as root definitely increases risk. As root, an attacker can do much more persistent damage when an attack does happen, basically putting the machine in a state where the only solution is to wipe and install from scratch.

            • SXX 5 years ago

              > As root you can do much more persistent damage when an attack does happen, basically putting the machine in a state where the only solution is to wipe and install from scratch.

              In any reasonable project or company, a server that malicious actors ever had access to is counted as completely compromised, no matter what permissions they had. There is basically no option other than to wipe and reinstall, since the OS can't really perform a trusted self-check. For all you know, you could have a rootkit living in the bootloader.

              Of course, even hardware can't really be trusted, but that is another level of risk management, while "wipe and reinstall" (or wipe and restore from backups) is an industry standard.

          • chaosite 5 years ago

            Ubuntu Server's default install comes with (at least) a DHCP client and an NTP client. Maybe more things, those are just the two I checked.

            Sure, you can use static IPs, and disable NTP, and take other steps to harden your server and reduce your attack surface. But remote exploits for random default programs are routinely discovered, so defense in depth is just a good idea.

      • asveikau 5 years ago

        I am confused why you mention OpenSSH in a discussion about uptime and kernel patches. You don't need to reboot to patch sshd.

        • buildzr 5 years ago

          That's my bad, I was thinking of some personal scenarios in which I've had low value servers running single applications connected to the internet for years without updates at all.

    • axaxs 5 years ago

      If by 'part of your infrastructure' you mean a single API that emits geo coordinates based on client address, then yes.

      • subway 5 years ago

        That's dangerous thinking. Any unpatched service can turn into a pivot point. If the same folks who manage more critical pieces of infrastructure log in there, it can almost certainly be used to pivot onto their other systems.

        • axaxs 5 years ago

          I think you're thinking too big here. This is a single Scaleway machine, by itself. It has only one user (me) and one listening process (two if counting ssh). Even if you gained root access to it, there's literally nothing you could do that I would care too much about, including taking it offline.

          That wasn't meant to be a point of pride or accomplishment, more a testament to the reliability of Scaleway. I've been super happy with them.

          • scarejunba 5 years ago

            It's just the security cargo cultists. Some were arguing that I should be turning on the Spectre/Meltdown mitigations on my Hadoop cluster. It's my cluster, dude. My engineers have the right to run code on it. If someone who doesn't have that right is running code on it, I've already lost the game. If you can even contact one of my machines, the game is up. What even is the threat model here for Spectre/Meltdown?

            They have no sense of risk. Just security cargo-cultists.

            • tlavoie 5 years ago

              I don't know that it's cargo-cult behavior, but maybe it's a lack of perspective in general. I work in security, and yes, it's good practice to patch all the things, but only in that it's the easiest default policy that makes things happen. If you have to pick and choose, you need to understand things well enough to be able to judge.

              As a security consultant, I think that kind of perspective is where I can help add value to our clients; our usual point of contact is a project manager, whose eyes tend to glaze over when given a big vulnerability report, or worse, a spreadsheet. To them, every line feels like some sort of crisis. Now if I can get them to patch in a timely fashion, there is at least no pile of years-old issues, and we can take the time to discuss the few that remain.

            • chithanh 5 years ago

              Very true. We have some cluster users on Gentoo, who are happy that they can simply flip off all those pesky performance-eating security mitigations system-wide. Not only in the kernel, but also userspace side PIC/PIE/SSP/etc.

          • zokier 5 years ago

            > Even if you gained root access to it, there's literally nothing you could do I would care too much about, including taking it offline.

            Hosting a botnet is at minimum considered bad internet citizenship. Hosting child porn can ruin your life. Have fun.

          • kbenson 5 years ago

            > Even if you gained root access to it, there's literally nothing you could do I would care too much about, including taking it offline.

            Well, the person who has to deal with it being used as a jump point or IRC relay hiding some third party's behavior might care.

            Providing only one small service and OpenSSH and not being tied into other infrastructure directly means it's not really a desirable target for the rest of the project, but it also means it's not likely to cause too much of a problem if it gets a reboot every once in a while.

            The added benefit is that it gives you the occasional point in time to make sure everything is running cleanly with all the updates applied. You ensure SSH and the geolocation service are restarted after they get updates, right? What about after glibc updates? What about after a zlib update?

            If you really want to make sure updates are applied, you want to make sure any prior version of updated code still active in memory is cleared out. Knowing whether that's been accomplished isn't always easy, but one easy way to do so is a reboot after an update.
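
            One way to check without a reboot (a sketch, not exhaustive): processes still running code from a deleted, i.e. replaced, library show up in lsof:

              # any process still mapping a replaced .so?
              sudo lsof +c0 2>/dev/null | grep -E 'DEL.*\.so' | awk '{print $1, $2}' | sort -u

            If that prints anything, the old code is still live in memory, and restarting those services (or rebooting) is what actually clears it out.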

        • theamk 5 years ago

          Is it really dangerous to log into compromised machines, though?

          I think that unless one does something stupid like SSH agent forwarding or using shared passwords, it should be safe even if the machine is totally compromised.

          If you fetch a remote binary or script to your dev machine and run it, your dev machine could be compromised -- but I am not sure why anyone would want to do this.

          If you specify X forwarding, then anyone can own you. But you should not have any X apps on the remote server to begin with.

          If you are transferring files, then vulnerabilities in rsync/rcp could get you. But those would have to be on the client side, your desktop machine -- and hopefully this machine is well patched.

          If you are using IP filters / the machine is on LAN, then yes, it could be bad. But in this case, the machine was on the public network.

          There was an old bug with "get window title" putting stuff into the input buffer, but it was fixed years ago.

          Don't get me wrong, I think you do want to keep the machines up to date, and one should always enable unattended updates.

          But I also believe in defense-in-depth, so if one is "managing more critical pieces of infrastructure", they should always assume the remote machine they are managing is compromised, and always take precautions.
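
          To make that concrete (a sketch; the host alias is made up), the client-side precautions above can be pinned in a per-host ssh config:

            # ~/.ssh/config -- treat this box as hostile
            Host geo-server
                ForwardAgent no
                ForwardX11 no
                ClearAllForwardings yes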

          • buildzr 5 years ago

            There have actually been a few vulnerabilities in the OpenSSH client when connecting to untrusted servers.

            These come to mind:

            https://www.cvedetails.com/cve/CVE-2019-6111/

            https://www.cvedetails.com/cve/CVE-2016-0778/

            Generally speaking, though, you're correct - keep your client up to date and you'll be protected from a hacked server.

            Clients in general expose much larger attack surfaces, so they will likely see more frequent and significant security patches. There's a lot more to attack in a web browser than in, say, Nginx.

    • trishmapow2 5 years ago

      Is live kernel patching not commonly used these days?

      • baroffoos 5 years ago

        It seems to be something that is technically possible, but almost no one uses it because reboots aren't that bad, especially in the age of Docker where you just destroy the whole OS when you update.

      • subway 5 years ago

        Not really. It's technically possible with Ksplice, but almost no distro actually supports it.

        Beyond the kernel, you have various libs and binaries that will be replaced during upgrades. All can usually/mostly be restarted without a reboot, but just upgrading packages alone won't guarantee all running processes have been updated.

        • cyphar 5 years ago

          The core code behind kSplice/kGraft has been upstream since Linux 4.0, and both Red Hat and SUSE support it (in fact, many security patches are released this way). I believe that some less enterprise-y distros like Fedora and Ubuntu support it too.

          The issue isn't whether it's supported; the problem is that live patching is limited in what it can patch (when functions are inlined it can become impossible to patch them, and so on). So while a machine with 4 years of uptime might be live patched, there are some security issues that cannot be patched that way (for instance, the retpoline patches for Spectre require every indirect call in the kernel to be recompiled, and that requires a reboot).
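
          As an aside, on kernels built with CONFIG_LIVEPATCH you can see what has actually been live patched through sysfs; roughly:

            # each applied live patch shows up as a directory
            ls /sys/kernel/livepatch/
            # 1 = patch enabled, 0 = being disabled/removed
            cat /sys/kernel/livepatch/*/enabled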

        • genera1 5 years ago

          > but almost no distro actually supports it

          Ubuntu supports it officially, and so does Fedora. From my experience it works more or less fine on CentOS, so probably on RHEL too. For SUSE there is kGraft, so basically >90% of the install base supports live patching.

          • usr1106 5 years ago

            > Ubuntu supports it officially

            I don't think it's part of the usual Ubuntu distro. As I understood it, you need to register to get it, and it's free (as in beer) only for limited use cases. I don't remember the details.

        • regecks 5 years ago

          In addition to the solutions in my sibling comment, there is also the commercial (but cheap) KernelCare (https://www.kernelcare.com/), which supports basically every major server OS (https://patches.kernelcare.com/).

          I run it on all dedicated servers, as well as managed servers where we can easily pass the cost on.

          They're currently releasing livepatches across all the kernel builds to address the Intel MDS stuff (at least the kernel-based mitigations) and it's all very pleasant and hands-off.

      • discreditable 5 years ago

        It's cool but we run Linux in VMs. The VMs can complete a reboot in less than 20s. It's fast enough that it doesn't register on uptime monitors. Live patching adds complexity for not a lot of benefit.

        Not to mention that if you're trying to be rebootless, you have to worry about running services holding old versions of libraries in memory. Sure, there's checkrestart/needrestart, but when reboots are so fast it doesn't matter much.
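
        For reference, that check really is quick (sketch; on Debian/Ubuntu, checkrestart ships in the debian-goodies package):

          sudo checkrestart   # lists processes still using deleted libs
          sudo needrestart    # similar, and can offer to restart the affected services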

    • usr1106 5 years ago

      I don't think Scaleway has upgraded their bare metal ARM kernel (their 1st ARM gen) for a very long time. So if you don't build your own, there are no kernel patches.

      (Please correct me if I'm wrong. Being wrong here would be good news.)

    • duxup 5 years ago

      Oh man good point about uptime.

      What a serious flip in what we think about there.

  • cnf 5 years ago

    Uptime is bad, mmkay. I don’t tolerate uptimes longer than 90 days. Always have planned reboots as part of your scheduled maintenance. One day, one of those high uptime machines will go down, and won’t come back up.

koltax 5 years ago

I have been using Antergos for over 4 years, having dual-booted it with Windows on 3 different computers. I always liked the fact that it took just 15-20 minutes for me to start working.

Arch purists may scoff at these spin-offs, but they miss the point. The appeal was that even though I know how to set up Arch on my own, it takes about 2 hours or so. Setting up Nvidia was also a pain. With Antergos, you can just sit back and get a nice working system quickly.

Will definitely miss it

  • abtinf 5 years ago

    > The appeal was that even though I know how to set up arch on my own, it takes about 2 hours or so.

    I've used Arch on and off for about eight years. The first time I set it up, it took about 2 hours. I just tried again last week, and it took all day.

    They used to have a super-basic installer, but it sped things up. They removed that in favor of detailed instructions on the Arch wiki. Now they've basically obliterated the install instructions. Whereas it was once a step-by-step guide with callouts to more detailed pages, now it is just a set of stubs that send you to other detail pages. And it is not opinionated at all.

    Want a full-disk encryption setup, but haven't installed Arch in a couple of years? Be prepared to spend a lot of time researching everything that goes into that stack, with very little guidance as to what is typical practice.

    • boomboomsubban 5 years ago

      The "hard" parts of installing Arch are partitions and boot loaders. The old walkthrough gave you a basic method that many people just copied to get a working system, but they had no idea how those areas functioned leaving them screwed should they encounter any problems.

      If you want community chosen presets, why are you opting for Arch? And if you don't want presets, knowing how partitions and boot loaders function is necessary. This is also why encryption is a bitch, it adds tons of complexity to partitions and boot loaders.

      • cosarara 5 years ago

        > why are you opting for Arch?

        Because it has fast, rolling-release updates, little changes from upstream, and any package you will ever need (either in the repos or the AUR).

        There are many nice things about Arch apart from the basic install. You can install very minimal Debian systems, very minimal CentOS systems, very minimal Gentoo systems, all with this or that bootloader and partition layout. It's not what people usually pick a distro for.

        • boomboomsubban 5 years ago

          >Because it has fast, rolling-release updates, little changes from upstream, and any package you will ever need (either in the repos or the AUR).

          Which of these is missing from, say, Solus, Debian Rawhide, or OpenSuse Tumbleweed, not to mention the various Antergos-like distros? The differences between distros past the base install really are smaller than people realize, and people often pick distros based on misguided assumptions.

          • cosarara 5 years ago

            I think you meant Debian sid and Fedora Rawhide there. I know Debian fails on "little changes from upstream". Antergos obviously fails nothing because it is arch beneath. But it is dead now. Manjaro, the only other popular arch-based distro, has its own problems: https://web.archive.org/web/20150409040851/https://manjaro.g...

            With the rest, lets make a little test. Lately I've been using a program called syncplay: https://syncplay.pl/

            On Arch, you can find 3 different versions of it in the AUR: syncplay (latest stable, 1.6.3 at the time of writing), syncplay-git and syncplay-server-git https://aur.archlinux.org/packages/?O=0&K=syncplay

            I haven't found an online form to search packages on Solus, but after booting an ISO I can see it is there! https://u.teknik.io/EQOck.png (although a bit outdated, 1.5.2 from a year ago)

            Debian, nothing: https://packages.debian.org/search?keywords=syncplay&searcho...

            Fedora, no luck: https://apps.fedoraproject.org/packages/s/syncplay

            Tumbleweed, nothing: https://software.opensuse.org/search?utf8=%E2%9C%93&baseproj...

            Of course, this is a stupidly small test. But it is so easy to create and share Arch packages, and the user base is so large, that it is almost never the case that what I need doesn't have a build script already, or that the packaged version is too old. And when it does happen, and there is no build script, it is very easy to make one and share it with the world, as sketched below. Or when I really need the bleeding edge for a certain application, I can look for -git packages. I know that on Debian checkinstall can help you avoid an unpackaged /usr/local mess, but I have no idea how I'd share that with others.
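
            To give a sense of how easy that is, here is a complete PKGBUILD for a hypothetical tool (all names invented) - just a small shell file:

              # Maintainer: You <you@example.com>
              pkgname=sometool
              pkgver=1.0.0
              pkgrel=1
              pkgdesc="A hypothetical example tool"
              arch=('x86_64')
              url="https://example.com/sometool"
              license=('MIT')
              source=("$url/releases/sometool-$pkgver.tar.gz")
              sha256sums=('SKIP')

              package() {
                # install the prebuilt binary into the package root
                cd "sometool-$pkgver"
                install -Dm755 sometool "$pkgdir/usr/bin/sometool"
              }

            makepkg -si builds and installs it locally, and uploading that same file to the AUR is all that "sharing it with the world" takes.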

            In short, yes, the ideas behind all of these distros are pretty similar, but as far as package availability goes, Arch tends to win due to its low barrier of entry for packages and its user base size. Its install process is shit, but thankfully I've only needed to do it a couple of times.

            Oh, and it probably wins on documentation as well. The Arch wiki is just huge.

    • chungy 5 years ago

      Arch is basically made for Linux experts, and the installation process is the barrier to entry, to prove your worth (or at least your drive).

      Fundamentally, installing Arch really isn't any different for those of us who install Debian or Fedora from scratch, from within another environment, without using the hand-holding installers. The only difference is using `pacman` instead of `apt` or `dnf`.

    • tombh 5 years ago

      This is true. Though to be fair, a brief Google does reveal some turnkey full-disk encryption instructions noted by others. It would just be nice to have them on the wiki. Of course, there's nothing stopping us from adding those ourselves.

  • jchw 5 years ago

    Maybe Manjaro fits the bill?

    https://manjaro.org/

    I never used Antergos but last I used Manjaro it seemed pretty good.

    Personally, I’m looking to see concepts like GuixSD and NixOS be extended and remixed. I’m imagining a world where you have a control panel that changes declarative settings then commits them by rebuilding the system...

    • moviuro 5 years ago

      Manjaro is the one Linux distro that recommended users set their clocks back in the past because they f....d up big time with their HSTS+HTTPS cert. [0]

      Ok, that was 4 years ago, but still, I wouldn't dare use it.

      And for those who will use Manjaro, please don't ask for support on the Arch forums/subreddit/IRC chan.

      [0] https://www.reddit.com/r/linux/comments/31yayt/manjaro_forgo... https://webcache.googleusercontent.com/search?q=cache:NTDcUS...

      • LambdaComplex 5 years ago

        >the Arch Linux fork Manjaro ships a static private key on their installer image which also gets copied to the target system. The result is that all Manjaro users have the same "private" key and anybody can easily use this to sign any package that the package manager will accept

        https://pierre-schmitz.com/random-but-important/

        • pizzapill 5 years ago

          This post is 6 years old; does somebody know if this is still the case?

    • jplayer01 5 years ago

      Manjaro is fine, but I preferred Antergos because it used the Arch Linux packages, meaning I always had the latest ones. AFAIK, Manjaro is behind by like a month? In which case, why even bother with rolling release?

      • bashwizard 5 years ago

        Rolling release =/= bleeding edge.

      • narimiran 5 years ago

        > used the Arch Linux packages, meaning I always had the latest ones.

        This is provably false, as I can witness from my recent experience where Arch has a 14-month-old, long-superseded version of a package, while other Linux distros (including Manjaro, but also Fedora, Debian, Ubuntu, openSuse, etc.) use the most/more recent version.

        ----

        > AFAIK, Manjaro is behind by like a month? In which case, why even bother with rolling release?

        Even if this were true (and it isn't, see the sibling comment), I've seen responses like yours and I don't understand them.

        Where would you draw the line when it comes to rolling? You say that one month behind is too long. Is 2 weeks ok? One week? 3 days? 1 day? 12 hours? Why does it even matter?

        To answer you from my perspective:

        I "bother" with Manjaro because I like the rolling distro philosophy, but I also value stability. If those two weeks of delay will bring me packages which are better tested and more stability overall - I'm all for it.

        Oh, and I don't even update as soon as an update hits the stable channel; I regularly decide to wait at least a day or two to see if others have any problems with the update.

        For other people, who are more impatient (and more adventurous), Manjaro offers two other channels: testing and unstable.

        • cosarara 5 years ago

          > This is provably false, as I can witness from my recent experience where Arch has 14 month old, long superseded, version of a package, while other Linux distros (including Manjaro, but also Fedora, Debian, Ubuntu, openSuse, etc.) use the most/more recent version.

          Would you mind telling us what package that is?

        • jplayer01 5 years ago

          My line is that I want the latest upstream packages with as little delay as possible while not breaking things. It’s a vague line, but that’s how it is. Arch Linux has generally managed to hit that line consistently and I haven’t experienced any breakages in years. Most of the issues I’ve had were due to Antergos screw-ups.

          Unfortunately, I have seen out of date (1+ month) packages without any response from the maintainer, so I guess it’s just something that happens. My reasonable expectation is that I do have the latest ones, and I’m annoyed if I don’t.

      • pizzapill 5 years ago

        Manjaro stable is behind Arch by ca. 2 weeks, unstable by a day or two. Some packages get fast-tracked; for example, the Firefox update came yesterday, hours after release.

    • Teknoman117 5 years ago

      Manjaro was a gateway drug for me. I used Ubuntu from when it came out until ~2014, then switched to Manjaro, but after the SSL cert fiasco I switched to Arch proper. When I started my job, Gentoo was our office distro for workstations and servers, so I switched all my home machines over to make sure I understood it. Then I ended up liking it more than Arch...

      • tincholio 5 years ago

        I went a bit the other way around. I switched from RedHat to Gentoo ca. 2002 and used it for about 8 years, until I started using mostly Macs and some Ubuntu installs (less time to dick around with build flags and long compile times). Just last week I installed Manjaro on a new system, and it seems really nice. I'll probably switch my home laptop from Ubuntu to it (or plain Arch, maybe).

    • bevax 5 years ago

    Unlike Antergos, Manjaro a) uses solely its own repos, not also Arch's, and b) comes bloated (far too many applications installed by default) and themed out of the box.

    • unixhero 5 years ago

      Yes it does. Manjaro provides a GREAT user experience. I tried it on my laptop and I truly enjoyed it.

    • kungtotte 5 years ago

      I really enjoy Manjaro myself. I distro hop a bit but I always have Manjaro as a fallback since it's rock solid for me.

      Using the architect installer it's also fast and trivial to make your own custom install that's as minimal or fully loaded as you like.

  • jmontano 5 years ago

    The team behind Antergos did a great service to the community by offering a ready-to-run Arch experience that was the closest to the real thing.

    I will surely miss Antergos

  • forlorn 5 years ago

    This. Because while I'm able to install Arch the long, traditional way, at the end of the day I always end up with identical setups.

  • flanker 5 years ago

    Similar situation here. I have Antergos on 3 different systems and have been using it for the past 3 years, the longest I have used any distro before switching to a new one. It was such a good Arch spinoff.

zamalek 5 years ago

That's a picture-perfect exit. They have honestly disclosed the reasons and, most importantly, the continuance plan for loyal users. It's a shame they shone brightest at the end.

monetus 5 years ago

After this, is Manjaro the only wizard I can point people towards for Arch? (If you can call it that)

Edit: found this script at least. https://github.com/MatMoul/archfi

  • zdxt 5 years ago

    There is also ArcoLinux.

    • monetus 5 years ago

      Thanks a lot, I didn't know about this.

  • deviantfero 5 years ago

    You also have Anarchy Linux, but it is not actively maintained. Still, Manjaro and Antergos were different concepts.

    • tomcam 5 years ago

      > Anarchy Linux, but it is not actively maintained

      It’s right there on the tin

  • LambdaComplex 5 years ago

    >If you can call it that

    You can't. Manjaro uses its own repos. It'd be like calling Ubuntu a wizard for installing Debian.

ac29 5 years ago

As a long-time Arch user, what's the attraction of these spinoffs?

Seems to me they are mainly pitching being easier to install, but the Arch installer (or lack thereof) is perfectly fine in my opinion. You get the benefit of understanding a little better how your system is installed as well.

  • tazard 5 years ago

    I've done many full Arch installs (but never Antergos), but it's always the little things that I forget to do; then I look into what's the new best way to do it, then fall down the ricing rabbit hole. Although it's fun, it's also nice just to have a full system installed with no fuss. Ubuntu is fine for that, but I inevitably get the itch for something different. These spin-offs let me cheat my way into a fully set-up desktop that works great from day one. No more 'I forgot to auto-mount USB drives' or whatever. Plus it's fun to see how someone else sets it up and get some fresh ideas.

  • JadoJodo 5 years ago

    My $.02: As others have noted, there is some level of intimidation that contributed to me using Antergos. That being said, I did a reinstall a few months ago and decided I would do it "for real" this time. I followed the Arch wiki for each step, learned a huge amount about the way the OS installs and the drive partitioning works, and made it to the bootloader step. And that's where I ran into issues.

    Once I rebooted, I couldn't get the bootloader to load arch. I spent about two days trying different methods from the Arch Wiki before I decided that a) I had learned a huge amount and b) I didn't need the "geek cred" that came with a vanilla install of Arch.

    I went back to Antergos and am quite happy. I manually build packages from the AUR, read the wiki when I need to set something up, and am happy with it.

    I use i3 currently, so perhaps I'll give the install another go, but I may just switch to Manjaro.

    Happy to hear any tips, though!

    • nobleach 5 years ago

      The bootloader is the part I dread EVERY time I set up a new Arch install (basically every couple of years). I have to relearn it every stinking time! I'm using rEFInd these days, so that either helps or hurts the cause.

  • zanny 5 years ago

    At least for Antergos, it's a super fast way to get a "real" Arch install.

    Sure, I know the process of creating partition tables, formatting drives, bootstrapping wireless and whatnot, setting up system time, adding a bunch of services I want, etc. But for anyone who likes the Arch release process (and thus isn't interested in Manjaro) and the AUR, it was a place to point them without the massive learning curve, because nowadays it generally just works.

  • ethhics 5 years ago

    As a moderately technical user, I've run Arch VMs for school work for the past 3 years, but when it came time for me to dual-boot my laptop, I really didn't want to be as hands-on with a device I actually use frequently. I picked Manjaro instead so I can still enjoy the rolling release cycle, but with an already-working system out of the box.

  • ubercow13 5 years ago

    One reason is that I learned how to install a system that way with Gentoo 15 years ago and I don't need to learn it again now. It's still a slow process even when you know what you're doing (though I still usually do it the slow way)

    • cyphar 5 years ago

      One counterpoint to this: back when I used Arch, I found it to be the fastest-to-install distro if you already knew how to do it.

      No need to wait 2 minutes to boot a graphical installer that doesn't let you pick what packages you want (this is significant if you have Australian internet speeds), or that forces you to set up your partitions in a certain way, and so on. You just partition your disk, pacstrap, chroot, run a few setup commands and reboot.
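
      For the curious, the whole flow is roughly this (a sketch from memory; the disk name and the systemd-boot choice are illustrative):

        # from the live ISO, assuming UEFI and an empty /dev/sda
        sgdisk -n1:0:+512M -t1:ef00 -n2:0:0 /dev/sda   # ESP + root
        mkfs.fat -F32 /dev/sda1 && mkfs.ext4 /dev/sda2
        mount /dev/sda2 /mnt
        mkdir /mnt/boot && mount /dev/sda1 /mnt/boot
        pacstrap /mnt base linux linux-firmware
        genfstab -U /mnt >> /mnt/etc/fstab
        arch-chroot /mnt bootctl install   # plus a loader entry, passwd, etc.
        reboot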

      The last time I installed arch I had a working system in 15 minutes -- other distributions took 40 minutes to bootstrap and reboot so I could start configuring them (mostly because of how long it took to download the 3000 packages I didn't need).

      Obviously there is a level of "it only takes 3 commands to install gentoo" here, but I personally long for the minimalism you get from Arch's no-bs installation.

    • ac29 5 years ago

      I, too, learned how to Linux with Gentoo ~20 years ago, and while that semi-from-scratch approach still appeals to me, Arch 2019 is not Gentoo 2002.

      I find myself doing an Arch install once a year or so, and I'd say it takes about 20 minutes to get to my preferred desktop from completely un-initialized hardware - I remember Gentoo taking days!

      • 8draco8 5 years ago

        The complexity of installing Gentoo is about the same as Arch; the only thing that slowed the process down for Gentoo was compiling all the packages. 2002 CPU speeds weren't helping either.

  • kbumsik 5 years ago

    I'm an Arch user who came from Manjaro. Arch Linux seems like a kind of scary beast to begin with, especially when you are a newbie in the Linux world.

    There is a huge difference between having a good GUI for both installation and management (Manjaro) and having a text-based wiki for everything (Arch). I did not need to read anything when I installed Manjaro, though it is true that I learned a lot after I switched to Arch. Not everyone wants to experience the pain of learning at the beginning. Also, Manjaro provides a GUI that lets you switch between different versions of the kernel and GPU drivers with a few clicks.

    Lastly, Manjaro has complete desktop environments for Gnome, KDE, and Xfce. These DEs have pretty cool tricks and packages alongside their default packages, which are hard to figure out when you use Arch.

  • yk 5 years ago

    As another Arch user: you get a working quasi-Arch on your home theater system in something like half an hour. Arch is great if you want to do anything interesting, but a PC that is just supposed to run YouTube and mpv isn't supposed to do anything interesting.

  • O_H_E 5 years ago

    > is perfectly fine in my opinion

    Well, I guess we are not all you. Different people have different opinions and use cases.

  • agumonkey 5 years ago

    Desktop preconfig. I found fonts and similar things to be a pain to make nice, so here you go.

gchamonlive 5 years ago

This is huge news for me... I think of Antergos very fondly. I donated when I first installed it and was constantly amazed by it.

When I showed Antergos off at work, people were also impressed, and almost everyone on the team switched to Antergos from Ubuntu (and this is no small feat). It lowered the entry bar into Arch without lowering quality, and that is something really valuable. It made switching to arguably a better distro for development (latest packages, the vast AUR repository, incredible Arch wiki support...) time-effective: where before I couldn't imagine myself as SysOps recommending Arch to the frontend designer, now I can just go "oh, try this, installation is almost as easy as Ubuntu".

I had a small share of grief with Antergos too. With my laptop I had to disable the "nouveau" drivers so that I could boot to a stable installation environment using the Nvidia 150M GPU, and upgrading my home setup to Nvidia from a Radeon GPU killed my previous Arch installation, and now it won't boot from their live ISO.

But what it enabled me to do at work, prompting this switch from Ubuntu to Arch, is something I can't measure in value. I would love to be able to maintain this project (is it open for a public fork?), but I have neither the time (maybe a workday a week) nor the preparation to do so, and the community would also have to create momentum in the direction of adopting this project...

Anyway, this is sad, but in no way could I hang this over the Antergos team's heads. I can only be grateful and wish them good luck in their lives; they deserve all the internets they can get.

Jnr 5 years ago

Manually installing and managing Archlinux years ago was the best Linux tutorial I could have asked for.

Before switching to Archlinux I started with Ubuntu and other Debian-based distros. The problem with using those is that they wrap many things in nice automated packages and scripts, so you don't learn much. And understanding how Linux works can be quite useful at times.

  • gchamonlive 5 years ago

    That is unfortunately not scalable. It is nice to have, for example, a team of devs using the same distro. It makes it easier to help one another, debug the system, and install packages. At work, I couldn't have prompted the team's switch from Ubuntu to Arch without Antergos. It basically made widespread adoption of Arch with few compromises possible.

    • Jnr 5 years ago

      If your team is not ready to deal with the occasional problems that come from using Archlinux, then just stick to Ubuntu. It is a perfectly usable distribution, and you can always get the latest development software using custom repos, Snap packages, or even by running it from some Archlinux lxc container.
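
      The container route really is just a couple of commands; a sketch with LXD (the container name is arbitrary):

        lxc launch images:archlinux arch-dev
        lxc exec arch-dev -- pacman -Syu --noconfirm ripgrep
        lxc exec arch-dev -- rg --version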

      • gchamonlive 5 years ago

        I am ready to support their installations; I do it happily. I just can't justify wasting a couple of hours every time I need a fresh install just to get the bare OS up and running.

        Automation is something good!

        I can't really recommend Ubuntu for teams that are experimenting with tools. Lots of tools can't be installed with apt and therefore have to be updated manually. Arch with yay is convenient enough to justify base-system setup automation.

        Also, I stay away from snap whenever possible. I have seen several tools I use (ripgrep, for instance) pull support from snap, which makes me uneasy about using it.

    • Teknoman117 5 years ago

      We all run Gentoo for the most part where I work.

      • gchamonlive 5 years ago

        Cool, I haven't got the chance to test it. How is it regarding community packages?

eigenspace 5 years ago

Funnily enough, I almost installed Antergos a couple of weeks ago but then decided to subject myself to the full-fat command-line Arch install instead.

aesthethiccs 5 years ago

Thanks to Antergos my life changed for the better; now it will go away. I'd rather the maintainers offered a subscription for distros, just so they can maintain quality distributions, over donations.

AdmiralAsshat 5 years ago

It's a shame to see it go. I never used it, but the fact that it used vanilla Arch packages under the hood was certainly preferable to Manjaro's "don't call it Arch" approach.

I worked on a similar "batteries included" distro based on Fedora that shut down. Fortunately, by the time it ended, Fedora had come a long way towards making initial setup easier for newcomers, and our distro was mostly redundant save for a few extra packages and theming.

On the other hand, Arch still lacks a friendly installer.

  • 8draco8 5 years ago

    Antergos was pretty much a friendly installer for Arch. They were just packaging all the Arch goodies in a user-friendly box. Despite the fact that I don't like Arch, I liked the idea of Antergos and was checking it every once in a while to see what had changed in the Arch world.

ricjac 5 years ago

Funnily enough, distrowatch.com has already removed Antergos from its list. And it looks like ArcoLinux has climbed up a little, to 18th spot.

I personally will still be using Antergos for a little while longer. Then I'll try out ArcoLinux.

antouank 5 years ago

So sad to read this.

I've been using Antergos on my PC and laptop for a couple of years now; it's the best distro I've used.

Is Manjaro now the only alternative for a simple Arch setup?

jonotime 5 years ago

Very sad to hear this. Not sure where I will go from here. I used to run vanilla Arch (and still do on small servers), but once I found Antergos 5+ years ago there was no turning back. Soon I'll have another look at Manjaro or Arco, perhaps.

dman 5 years ago

I have really grown to like Antergos over the last two years. Deepin on Antergos comes the closest to a Linux that is so polished out of the box that it exceeds Windows and other Linux desktops in visual and product polish.

  • pnutjam 5 years ago

    Try OpenSuse; their KDE environment has an excellent level of polish.

    • dman 5 years ago

      I have, Deepin is on another level - trust me!

Inversechi 5 years ago

Sad times :( It was my first dive into Arch and I enjoyed most of the journey.

hankzter 5 years ago

Ahhh, can the installer perhaps be merged into Arch? I know and love KISS, but the installer is super nice!

rurban 5 years ago

Impossible to read/scroll on my Android phone

I_am_tiberius 5 years ago

That's a pity. It was the perfect distro for me.

highhedgehog 5 years ago

Damn, I just installed it 2 days ago.

What should I switch to?

  • bevax 5 years ago

    You don't need to switch. As they describe in the blog post, it essentially turns into plain Arch.

milleramp 5 years ago

Wow, sad to hear. Glad I chose Manjaro.

monsterbash506 5 years ago

Manjaro Linux already provides the whole "Arch with an installer" thing. There wasn't much of a purpose for Antergos.

usr1987 5 years ago

Nothing will be missed. Some dude will fork it in his basement since it does not come with some tool he wants, and he will make his own to show the world he is elite!

mastrsushi 5 years ago

Oh God, now I'll have to fall back on one of the 10 other Arch distros!

LinuXY 5 years ago

I've never quite understood the allure of wanting to reinvent the wheel by creating a "distribution." There was a time, when the Linux kernel did not support enough abstraction or a project was brought under some less open licensing, that these niche "OSes" made sense. I would much rather see fewer package managers rolling other projects' packages and more unity on a single declarative platform. Today it's systemd vs SysV init, apt vs yum vs pacman vs... ad infinitum. The Linux kernel is finally at a point where not every snowflake needs to be made. I'm waiting for the day when the concept of a distribution is rightfully pedantic. Maybe by then we'll have a real shot at Linux on the desktop. A collection of packages not an OS does make.

  • pushpop 5 years ago

    Distributions were never meant to describe distinct OSs. The term literally describes a collection of packages: or “distribution of packages” to be more precise ;)

    The point of distros was never about a limitation of abstraction at the kernel level. It was about different ways of packaging or user space tools; or about what tools came pre-configured as part of that particular package of Linux+tools. This is as true in the very early days of Slackware and Debian as it is now.

    Personally I like the variety out there, I call it “choice”. I don’t like Ubuntu Desktop - I’m not taking anything away from those who do but it’s just not a platform I feel at home on. If the choice were “Ubuntu or nothing” then I’d probably be running FreeBSD. But because we have choice it means I can run whatever opinionated flavour of Linux I want and you can run whatever opinionated flavour you want - even if our opinions differ - and thus we all still run Linux.

    The whole “Linux on the desktop” argument doesn’t make much sense anymore. We now have WSL, Chromebooks, netbooks, Android tablets, and GNU user space tools that run on OSX via homebrew and/or Docker. Not to mention various hardware vendors who do take Linux compatibility seriously on their laptops (even if they may not always ship Linux “out of the box”). Plus, many of the everyday GUI tools we use these days are now web applications, because of how platform-agnostic the wider computing landscape has become. So while we haven’t seen a surge of GNU/Linux desktops in the traditional Windows sense, I do think Linux / open source has already won in terms of breaking up the Microsoft monopoly.

  • abtinf 5 years ago

    It seems like you have some ideas you want to try out that involve system and community level decisions. If only there was a word for such a concept...