Ask HN: While developing, do you run all the dependent services on your machine?

11 points by akkishore 5 years ago

Development requires running many dependent services: databases, caching servers, queues, other dependent (micro)services, etc. Do you run them on your machine? Or do you use other server infrastructure to run them while only running the specific service you are developing on your machine? How do you enforce a consistent dev environment across your teammates?

shoo 5 years ago

Last job: small web app with app servers, queue, workers, database. Plausible to run this all on one machine while hacking on one or more parts. After we migrated to AWS, we made it easy for devs to launch a new isolated copy of the whole environment in EC2: define CloudFormation templates & scripts to boot the whole stack. If you need a database in a well-defined state, automate that too (ideally, build the schema and initial data from scratch using an automated process; failing that, start with a snapshot/backup and then run schema migrations on it).

Current job: there are about 10-20x as many integration points with downstream services, many of them owned by other teams. No investment in making it possible to spin up an isolated service graph on a dev machine. Heavy use of persistent shared testing environments in the company data centre, usually in one or more states of brokenness. Some use of mountebank to replace troublesome downstream services with stubs returning canned data.
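
For reference, a mountebank stub for a troublesome downstream service is defined as an "imposter" that returns canned data; a minimal sketch (the port, path, and payload here are illustrative):

```json
{
  "port": 4545,
  "protocol": "http",
  "stubs": [
    {
      "predicates": [{ "equals": { "method": "GET", "path": "/users/42" } }],
      "responses": [
        {
          "is": {
            "statusCode": 200,
            "headers": { "Content-Type": "application/json" },
            "body": { "userId": 42, "plan": "pro" }
          }
        }
      ]
    }
  ]
}
```

POST this to mountebank's admin API (http://localhost:2525/imposters by default) and point the service under development at localhost:4545 instead of the real dependency.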

lkrubner 5 years ago

Yes, mostly I do. And I have not yet needed Docker to do this, though many people have argued that this is what Docker is for. I tried to point out how empty that argument was in "Docker is the dangerous gamble which we will regret":

http://www.smashcompany.com/technology/docker-is-a-dangerous...

When I need to run multiple databases, or want a database with a copy of production data, then sometimes we run that remotely. Running a few servers that all developers can use seemed to be standard practice 10 years ago, but the practice seems to be in retreat, replaced by the notion of "use Docker to run every service on your own machine." But I have not seen that Docker actually makes that easy to do, and if your team has a full-time devops person who can spin up some extra machines, it usually saves the whole team a lot of time to simply rely on a few central servers that the devops person has set up for development.

  • akkishore 5 years ago

    Any thoughts on how to enforce a consistent dev environment among teammates?

    • MH15 5 years ago

      Don't? I'm currently working on a Golang web app with a small team, and we're lucky enough to each be working on separate modules that fulfill a predefined specification we drafted before writing a line of code. As long as each module fulfills the requirements we drafted, we don't care about each other's dev environments, because Go compiles to a binary.

      Also: `go fmt` is incredible (and built into literally every Golang environment). We used to work primarily in Node.js, but since making the switch to Golang last year we've noticed an increase in code-review productivity. Everyone's code looks the same, so we don't have to spend time worrying about dev environments.

  • steve_taylor 5 years ago

    > I have not seen that Docker actually makes that easy to do

    docker-compose up -d

    What’s not easy about that?
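
    For reference, that one command just needs a docker-compose.yml describing the dependencies; a minimal sketch for the kind of stack in the question (image names and ports are illustrative):

    ```yaml
    version: "3"
    services:
      db:
        image: postgres:12
        environment:
          POSTGRES_PASSWORD: dev
        ports:
          - "5432:5432"
      cache:
        image: redis:5
        ports:
          - "6379:6379"
      queue:
        image: rabbitmq:3
        ports:
          - "5672:5672"
    ```

    `docker-compose up -d` starts all three in the background; `docker-compose down` tears them down again.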

    • richardknop 5 years ago

      It's a slow feedback loop when developing/testing/debugging. This of course depends on your tech stack; with a dynamic language such as Node.js it is actually quite good, as you can just mount your code base into the container, and when you edit files they are reloaded automatically.

      However, if your application's Docker container requires compilation and you need to restart docker-compose once you make changes to the codebase (and/or want to run integration tests), the whole docker-compose flow might add an extra minute or two to your feedback loop and slow down development.
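
      The mount-and-reload setup described above is just a bind-mounted volume plus a file-watching runner in the service definition; a sketch (paths and the nodemon command are illustrative):

      ```yaml
      services:
        api:
          image: node:12
          working_dir: /app
          volumes:
            - ./:/app                      # mount the local code base into the container
          command: npx nodemon server.js   # restart the process on file changes
      ```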

      In those cases I'd rather run everything locally (though I'd still include a docker-compose.yaml file for developers who prefer that and are not comfortable running all the required services locally).

      Developing Go applications, I have found my development flow much more agile without Docker; it was decreasing my productivity a bit, so I stopped using it.

      Our current integration test suite normally runs entirely through docker-compose (in CI or locally), but launching it outside of Docker, if you have all the dependencies running locally, is quite a bit faster and saves a lot of time, thus increasing my productivity.

      • steve_taylor 5 years ago

        I was referring to dependencies that you don’t work on yourself. For example, if you’re a front-end dev and there’s a bunch of microservices you want to have running, you can start them all in one command and get on with your job. No need to wait for anyone to spin up environments, refresh databases, etc.

    • lkrubner 5 years ago

      Go here:

      http://www.smashcompany.com/technology/docker-protects-a-pro...

      And then scroll down to where you see this:

      UPDATE 2018-07-09

      And read that Slack conversation. That is a real conversation that actually happened. We lost 3 days trying to fix that bug, which in the end was a complex interplay of the Docker cache and the way the Dockerfile was written.

      This is the sales pitch for Docker:

      docker-compose up -d

      But that Slack conversation is the reality.

      • steve_taylor 5 years ago

        This issue isn’t specific to Docker. Your tags should always be immutable.

true_religion 5 years ago

On my personal machine? No, but for work we just pre-install all the software needed to run the project before a new hire gets their machine.

For the people who prefer to use personal computers, there's a variety of options:

1. Virtualization. Previously we used VMware or VirtualBox, but now we offer Docker installs that replicate what CI runs.

2. Following the installation documents. Our stack is complicated but doesn't change very much, so one can run a script to install everything on a standard *nix box that we support, or edit it to suit their preferred configuration. People who have nonstandard *nix boxes tend to know what they're doing when it comes to compiling from source.

3. Using the dev machine you're given as a server, and working on your personal machine remotely.

All have their pros and cons, but it seems to work out.

iraldir 5 years ago

If those services are made by me or my team, I would usually run them on my machine. However, if they are made by a different team, I usually create a mock server of sorts so I can run things locally with no dependencies for most of the dev work. This keeps my team from being dependent on other teams. My flow is usually to work against this mocked server while I develop a feature, and then, before making a pull request, to test against the real thing. I've found a good 70% of the dev work I do can be done in those conditions. Of course, that's something you'll be able to pull off on a greenfield project, not on an existing giant codebase with billions of APIs.

tmm84 5 years ago

I have always had to run all the infrastructure when working for clients. I have used mostly VMs for the last two years, but before that it was always a real bare-metal box in the office.

yellow_lead 5 years ago

At a previous company, we had a project so large that you could only run portions of it locally. For everything else, we had Ansible scripts to create an AWS instance and deploy to it. Then you could use SSH port forwarding to make those pieces available to your machine, as well as JMX ports for debugging those other parts, if need be.
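
The port-forwarding part of that setup can live in ~/.ssh/config so every dev gets the same tunnels; a sketch (host name, user, and ports are illustrative, with 9010 standing in for a JMX port):

```
Host dev-stack
    HostName ec2-xx-xx-xx-xx.compute.amazonaws.com
    User ec2-user
    LocalForward 5432 localhost:5432    # database
    LocalForward 9010 localhost:9010    # JMX for remote debugging
```

After `ssh dev-stack`, the remote services appear on the corresponding local ports.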