theamk 19 days ago

why would you do this?

If you are considering bare-metal servers with deb files, compare them to bare-metal servers with docker containers. In the latter case, you immediately get all the compatibility, reproducibility, ease of deployment, ease of testing, etc... and there is no need for a single YAML file.

man8alexd 18 days ago

If you need a reliable deployment without catching 500 errors from Docker Hub, then you need a local registry. If you need a secure system without accumulating tons of CVEs in your base images, then you need to rebuild your images regularly, so you need a build pipeline. To reliably automate image updates, you need an orchestrator or switch to podman with `podman auto-update` because Docker can't replace a container with a new image in place. To keep your service running, you again need an orchestrator because Docker somehow occasionally fails to start containers even with --restart=always. If you need dependencies between services, you need at least Docker Compose and YAML or a full orchestrator, or wrap each service in a systemd unit and switch all restart policies to systemd. And you need a log collection service because the default Docker driver sucks and blocks on log writes or drops messages otherwise. This is just the minimum for production use.
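For concreteness, the "wrap each service in a systemd unit" option above looks roughly like this; the unit, container, and image names are hypothetical:

```ini
# /etc/systemd/system/myapp.service -- hypothetical sketch of a
# systemd unit wrapping a docker container
[Unit]
Description=myapp container
After=network-online.target docker.service
Wants=network-online.target
Requires=docker.service

[Service]
# Remove any stale container left over from an unclean stop ("-" ignores failure)
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp registry.example.com/myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With podman the same pattern gets you `podman auto-update` for free: units generated via Quadlet (or `podman generate systemd`) with the container label `io.containers.autoupdate=registry` are re-pulled and restarted when the image changes.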

  • theamk 17 days ago

    Yes, running server farms in production is complex, and docker won't magically solve _every one_ of your problems. But it's not like using deb files will solve them either - you need most of the same components either way.

    > If you need a reliable deployment without catching 500 errors from Docker Hub, then you need a local registry.

    Yes, and with debs you need a local apt repository

    > If you need a secure system without accumulating tons of CVEs in your base images, then you need to rebuild your images regularly, so you need a build pipeline.

    Presumably you were building your deb with a build pipeline as well... so the only real change is that the pipeline now has to run on a timer as well, not just "on demand"

    > To reliably automate image updates, you need an orchestrator or switch to podman with `podman auto-update` because Docker can't replace a container with a new image in place.

    With debs you only have automatic updates, which are not sufficient for deployments. So either way, you need _some_ system to roll out new versions and monitor the servers.

    > To keep your service running, you again need an orchestrator because Docker somehow occasionally fails to start containers even with --restart=always. If you need dependencies between services, you need at least Docker Compose and YAML or a full orchestrator, or wrap each service in a systemd unit and switch all restart policies to systemd.

    deb files have the same problems, but here docker has an actual advantage: if you run the supervisor _inside_ the container, you can actually debug this locally on your machine!

    No more "we use fancy systemd / ansible setups for prod, but on dev machines here are some junky shell scripts" - you can poke the things locally.
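    A sketch of that supervisor-inside-the-container setup, using Debian's `supervisor` package (the program names and paths are hypothetical); the image just installs supervisor and runs `supervisord -n` as its CMD:

    ```ini
    ; Hypothetical example: /etc/supervisor/conf.d/app.conf copied into the image,
    ; started with CMD ["supervisord", "-n"] in the Dockerfile.
    [program:web]
    command=/usr/local/bin/myapp --serve
    autorestart=true
    stdout_logfile=/dev/stdout        ; forward child output to the container's stdout
    stdout_logfile_maxbytes=0         ; required when logging to a non-seekable file

    [program:worker]
    command=/usr/local/bin/myworker
    autorestart=true
    stdout_logfile=/dev/stdout
    stdout_logfile_maxbytes=0
    ```

    The same image then runs identically under `docker run` on a laptop and under the prod orchestrator.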

    > And you need a log collection service because the default Docker driver sucks and blocks on log writes or drops messages otherwise. This is just the minimum for production use.

    What about deb files? I remember the bad old pre-systemd days where each app had to do its own logs and handle rotation - or log directly to a third-party collection server. If that's your cup of tea, you can totally do this in the docker world as well, no changes for you here!

    With systemd's arrival, the logs actually got much better, so it's feasible to rely on systemd's journal. But here is the great news: docker has a "journald" log driver, so it can send its logs to systemd too. So there is feature parity there as well.
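    The journald driver can be set per container with `docker run --log-driver=journald ...`, or daemon-wide; the daemon-wide version is just this (standard `/etc/docker/daemon.json` location assumed):

    ```json
    {
      "log-driver": "journald"
    }
    ```

    After that, `journalctl CONTAINER_NAME=myapp` shows a container's logs next to everything else on the host.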

    The key point is that there are all sorts of so-called "best practices" and new microservice-y ways of doing things, but they are all optional. If you don't like them, you are totally free to use traditional methods with Docker! You still get to keep your automation, but you no longer have to worry about your entire infra breaking, with no easy revert button, because upstream released a broken package.

    • man8alexd 16 days ago

      You switched from

      > ease of deployment

      to

      > running server farms in production is complex

      You just confirmed my initial point:

      > "Dockerfile is simple", they promised. Now look at the CNCF landscape.

      > with debs you need local apt repository

      No, you don't need an apt repository. To install a deb file, you need to scp/curl the file and run `dpkg`.
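      That flow, sketched with hypothetical host and package names (this is infrastructure-dependent, so treat it as illustration only):

      ```shell
      # Hypothetical example: push and install a single deb, no repository involved
      scp myapp_1.2.3_amd64.deb deploy@web1.example.com:/tmp/
      ssh deploy@web1.example.com 'sudo dpkg -i /tmp/myapp_1.2.3_amd64.deb'
      # dpkg does not resolve dependencies; if it complains, finish with:
      ssh deploy@web1.example.com 'sudo apt-get install -f -y'
      ```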

      >presumably you were building your deb with build pipeline as well

      You don't need to rebuild the app package every time there is a new CVE in a dependency. Security updates for dependencies are applied automatically without any pipeline: you just enable `unattended-upgrades`, which is available out of the box.
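      On Debian/Ubuntu, enabling that amounts to two lines of apt configuration (this is the standard file `dpkg-reconfigure unattended-upgrades` writes):

      ```
      # /etc/apt/apt.conf.d/20auto-upgrades
      APT::Periodic::Update-Package-Lists "1";
      APT::Periodic::Unattended-Upgrade "1";
      ```

      Which origins qualify (security-only vs. all updates) is tuned in `/etc/apt/apt.conf.d/50unattended-upgrades`.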

      > With debs you only have automatic-updates, which is not sufficient for deployments.

      Again, you only need to run `dpkg` to update your app. The preinst/postinst scripts and systemd unit configuration included in the deb package should handle everything.

      > deb files have the same problems

      No, they don't. deb packages intended to run as a service ship with systemd unit configuration included, and every major distribution now runs systemd.
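      The unit a service deb ships is typically just a small file like this, installed under `/lib/systemd/system/` and enabled by the package's maintainer scripts (names hypothetical):

      ```ini
      # Hypothetical example: /lib/systemd/system/myapp.service, shipped in the deb
      [Unit]
      Description=myapp daemon
      After=network-online.target
      Wants=network-online.target

      [Service]
      User=myapp
      ExecStart=/usr/bin/myapp --config /etc/myapp/myapp.conf
      Restart=on-failure
      RestartSec=5

      [Install]
      WantedBy=multi-user.target
      ```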

      > but here dockerfiles have an actual advantage: if you run supervisor _inside_ docker, then you can actually debug this locally on your machine!

      Running a supervisor inside a container is an anti-pattern: it masks errors from the orchestrator or external supervisor, and it usually messes with logs too.

      > No more "we use fancy systemd / ansible setups for prod, but on dev machines here are some junky shell scripts" - you can poke the things locally.

      systemd/ansible are not fancy but basic beginner-level tools to manage small-scale infrastructure. That tendency to avoid appropriate but unfamiliar tools and retreat into more comfortable spaces reminds me of the old joke about a drunk guy searching for keys under a lamp post.

      > What about deb files? I remember bad old pre-systemd days where each app had to do its own logs, as well as handle rotations - or log directly to third-party collection server.

      Everything was there out of the box: a syslog daemon, the syslog function in libc, preconfigured logrotate, and logrotate configs included in packages.

      There are special people who write their own log files, bypassing syslog; they are still with us, and they still write logs into files inside containers.

      There are already enough rants about journald, so I'll skip that.

      > but you no longer have to worry about your entire infra breaking, with no easy revert button, because your upstream released broken package.

      Normally, updates are applied in staging/canary environments and tested first. If upstream breaks a package, you pin the package to a working version, report the bug upstream or fix it locally, and live happily ever after.
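      Pinning to a known-good version is a small apt preferences file; package name and version here are hypothetical (a priority above 1000 holds that version even if it means downgrading):

      ```
      # /etc/apt/preferences.d/myapp.pref -- hypothetical example
      Package: myapp
      Pin: version 1.2.3-1
      Pin-Priority: 1001
      ```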