3 Docker tips & tricks

Over the past few months, we’ve done a lot of development with Docker. There are a few things that we end up using over and over. I wanted to share three of these with other developers working with Docker:

  1. Remove all containers – Inevitably, during development you’re going to pile up a bunch of stale containers that are just lying around – or a bunch of running containers you no longer use. We frequently need to wipe out all the containers and start fresh. Here’s how we do it:

    docker ps -a -q | xargs --no-run-if-empty docker rm -f

    It’s pretty self-explanatory – it lists the IDs of all the containers, then removes each one. There are several incarnations of this, but this one has the advantage that it also works on Windows if you install UNIX command-line tools (by grabbing MinGW, for example). Alternatively, on Windows you can use:

    FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm -f %i
  2. Mount the Docker Unix socket as a volume – OK, the way we use Docker is a bit more advanced than the standard use cases, but it’s surprising how often we end up using this one. That’s because we constantly have to create Docker containers from within a Docker container. The best way to do this is to mount the Docker daemon’s Unix socket on the host machine as a volume at the same location within the container – that is, add the following when performing a docker run: -v /var/run/docker.sock:/var/run/docker.sock. Now, if a Docker client within the container (whether that’s the command-line one or a Java one, for example) connects to that Unix socket, it actually talks to the Docker daemon on the host. That means if you create a container from within the container with the volume, the new container is created by the daemon running on the host – so it will be a sibling of the container with the volume! Very useful!
  3. Consider Terraform as an alternative to Compose – Terraform is for setting up infrastructure really easily, and it’s great for that. For us, infrastructure means AWS when running in the cloud, and Docker when running locally. We have several containers that we have to run for our application – during development, we run all the containers locally, and in the cloud, we run the containers across various EC2 instances, each instance getting one or more containers. This is perfect for Terraform. We can use the Docker provider alone to configure resources for our local setup, and we can use it together with the AWS provider for our cloud setup. Note again that Terraform is for infrastructure setup, so you are working at a very high level – you may find that you need to do some prep using other tools before Terraform can take over. For example, you can’t use Dockerfiles – you have to build your custom images prior to referencing them from Terraform.
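For tip 3, a local setup might look roughly like the following Terraform configuration – a sketch only: the resource names and image tag are made up, and attribute names vary between Docker provider versions:

```hcl
# Docker provider talking to the local daemon (config is a sketch;
# names and tags here are illustrative, not from a real project).
provider "docker" {}

# The image must already exist locally – build it beforehand with
# `docker build -t myapp:latest .`, since Terraform will not run a
# Dockerfile build for you.
resource "docker_image" "app" {
  name = "myapp:latest"
}

resource "docker_container" "app" {
  name  = "myapp"
  image = docker_image.app.image_id # `latest` on older provider versions

  ports {
    internal = 8080
    external = 8080
  }
}
```

Running terraform apply against this brings the container up locally; the same resources can sit alongside AWS provider resources for the cloud setup.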
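The socket-mounting trick in tip 2 is easy to try from the command line. Here is a minimal sketch – the docker:cli image name is an assumption (any image that ships a Docker client will do):

```shell
# Run a container with the host's Docker socket mounted inside it,
# then invoke the container's own docker client. Because the client
# talks to the host daemon through the socket, this lists the
# containers running on the HOST, not inside the container.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```

Any container you start this way (e.g. with docker run inside the container) is created by the host daemon, so it appears as a sibling in the host’s docker ps output rather than as a nested container.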


Cloud pricing is unfair

Is it fair to round the CPU usage of a virtual machine to the nearest hour when charging customers for cloud computing? We were curious about this, so we thought we would ask the Internet. Of course, we wanted to get people’s opinions on cloud pricing overall, so we asked about more than just the rounding of CPU use. We are not statisticians, so we kept our approach simple: an online survey. Our audience was a broad group of people involved in software, including many independent developers as well as those working as part of an organization.


When making the decision to go with a particular platform, by far the most important factors were the cost and quality of service. Surprisingly, brand name and trust were only somewhat important for many developers, especially those who were independent. Brand name and trust mattered more to those making the decision for teams and organizations.


The question we were most interested in was which pricing model was most appealing to users. The results showed that customers preferred to be charged a flat fee per month for a virtual machine – the Digital Ocean model. A similar model of paying a flat fee per month for a cloud application was also deemed fair. The most prevalent model – charging per unit of resource used, as AWS, Azure, and many other providers do – was not particularly appealing when compared to the flat-fee approaches. Interestingly, those surveyed said that when their cloud applications exceeded a certain cost (when being charged per resource usage), they actually preferred to be switched automatically to a flat-fee model for the remainder of the billing period instead of having their applications suspended. This seems to indicate that users find being charged per unit of resource consumed complex and unpredictable. They strongly favor a pricing model that gives them a predictable cost per month.

Finally, to answer the original question: is it fair to round to the nearest hour when charging users for CPU use? A most definite no.

While the results seem to indicate some solid opinions, I do want to point out that the survey is still open and if you have experience with cloud platforms and want to opine – follow the link below to our survey:

Opinions on cloud pricing