I started writing this post with the idea of creating a step-by-step guide to using Docker to run a sample application, and then showing how to achieve the equivalent without knowing anything about Docker (the latter using Sandbox). But given that this post will probably catch the attention of less seasoned Docker users, I decided to invert the order. So, without further ado, here’s how to get all the benefits of running Docker containers, without the overhead of learning Docker:
Now that you’ve seen how to run an application without Docker using Sandbox, let’s talk about how to do it with Docker.
But first, a bit about the Docker hype
Besides being one of today’s top buzzwords in software, Docker is a great tool that every developer and operations team member should be familiar with, as it offers a great many benefits over the traditional way of handling environments (whether they’re dev, test, UAT, or production). Among those benefits, I would highlight reproducibility, and the ease and speed of scaling up or down as requirements change (especially when compared to more traditional VM approaches). On the other hand, containers can make other areas, like deployment and orchestration, considerably more difficult.
Docker adoption has grown significantly over the last few years (check out the graph at the bottom of this page), especially when you consider the project only launched in 2013. It obviously still has a long way to go, but it seems the engineering community has embraced the transformation to a containerised world. Consider, for example, the container standardisation efforts driven by the Open Container Initiative (among others), the support and involvement of big players in the Docker ecosystem with commercial products and open-source projects, and the support for Docker from the major IaaS and PaaS providers.
Let’s talk about some Docker basics – and when I say Docker basics, I mean just the concepts you need to know to follow this post. These descriptions of images and containers are straight from the Docker docs, as I couldn’t have put it any better:
- An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files
- A container is a runtime instance of an image – what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
If you’re looking for a more detailed overview of concepts, there’s a plethora of tutorials and documents online which will give you different levels of guidance on Docker. A good place to start is Docker’s own getting started documentation. For this post, knowing images and containers is enough.
Whether you want to try out the steps in this step-by-step guide, or achieve the equivalent using Sandbox, you’ll need to install Docker on your computer (Sandbox will aid in the installation process, so you might as well download it if you want to try it out). In any case, Docker currently offers binaries for all platforms, so please follow the steps for yours. You will also need to install Git.
A brief note about the sample application
For this post, we’re going to build a very simple echo service, using Java and Spring Boot. The code can be found in the following Git repository. I won’t go very deep into the code details, as they are mostly irrelevant to this post. Suffice it to say that the application will start an embedded Tomcat and expose an endpoint that echoes whatever is sent to it, as we’ll see later. To clone the repository, please execute the following:
git clone https://github.com/stackfoundation/java-echo-service.git
There’s no need to install Java or Maven on your computer, as everything we need will be contained in the image, as I’ll explain in the next section. Disclaimer: please note that this is not the ideal way of bundling your application in an image; I have chosen this approach for the sake of simplicity. Take a look at the README file in the project repository for instructions on a better approach.
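To give you an idea of what that better approach generally looks like: instead of shipping the sources and running Maven inside the container, you build the jar on the host and copy only the artifact into a JRE-only image. The sketch below is my own illustration (the base image tag and jar name are assumptions, not taken from the repository):

```dockerfile
# Build the artifact on the host first: mvn package
FROM openjdk:8-jre-alpine
EXPOSE 8080
# Copy only the built jar into the image, not the sources or Maven
COPY target/echo-service.jar /echo-service.jar
CMD ["java", "-jar", "/echo-service.jar"]
```

The resulting image is smaller and builds faster, because it doesn’t need to download Maven dependencies inside the container.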
Your first image
A Docker image is usually generated from a file called a Dockerfile (although the name can be arbitrary, and provided to the docker build command). In any case, this file contains a series of instructions that are used to build the image. For a full list of these, please check the Dockerfile reference. It can also help to follow the best-practices section when defining these instructions, although that is a bit more of an advanced topic.
In order to be able to run the sample application in this tutorial from Docker, we’ll define ours with the following instructions:
FROM maven:3.5.0-jdk-8-alpine          # (1)
EXPOSE 8080                            # (2)
ADD java-echo-service /echo-service    # (3)
WORKDIR /echo-service                  # (4)
CMD ["mvn", "spring-boot:run"]         # (5)
So let’s build an image from the above Dockerfile. Please make sure the file is located at the same level as the root directory of the application you cloned from Git:
# ls Dockerfile java-echo-service
Then, simply execute the following command within the above directory:
docker build -t sandbox/echo .

# Or, if you want to use a non-standard name for your Dockerfile, or point to a file in another location:
docker build -t sandbox/echo -f /path/to/myNonStandardDockerfileName .
The ‘-t’ argument is used to tag the image with a meaningful name; otherwise the Docker daemon will not assign a repository or a tag, and you’ll have to use a randomly generated image ID to refer to it.
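If you do build without ‘-t’, the image isn’t lost; you can give it a repository and tag afterwards. A quick sketch (the image ID would be whatever your own build prints):

```shell
# Tag an already-built image by its ID so it gets a repository and tag
docker tag <image-id> sandbox/echo:latest
```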
You should see an output like the following:
Sending build context to Docker daemon 14.59 MB
Step 1/5 : FROM maven:3.5.0-jdk-8-alpine
 ---> 3c2b824cf55f
Step 2/5 : EXPOSE 8080
 ---> Running in b0e91370344e
 ---> 55b73eafb776
Removing intermediate container b0e91370344e
Step 3/5 : ADD java-echo-service /echo-service
 ---> 8746d2e655f1
Removing intermediate container 64625f42ae4f
Step 4/5 : WORKDIR /echo-service
 ---> c51a16a88b23
Removing intermediate container a5388966c3eb
Step 5/5 : CMD mvn spring-boot:run
 ---> Running in c45dfa901a6a
 ---> 8869752d1093
Removing intermediate container c45dfa901a6a
Successfully built 8869752d1093
Now, if you list your images with docker images, you should see the following:
REPOSITORY     TAG                  IMAGE ID       CREATED          SIZE
sandbox/echo   latest               28bb92aa57d0   9 seconds ago    130 MB
maven          3.5.0-jdk-8-alpine   3c2b824cf55f   3 weeks ago      116 MB
As you can see, each step relates to one of the instructions defined in our Dockerfile. Let’s examine what the purpose of each of these instructions is:
- Define the base image to use – An image can be seen as a combination of layers (each layer corresponding to one instruction). In this case, we’re using the FROM instruction to define the image we want to extend: one from the official Maven repository, bundling version 3.5 of Maven. That image is in turn based on a JDK 8 image, which is based on an Alpine one. Do you see the pattern here? Images are built in layers, each layer adding a small change on top of the previous one. You can check the actual contents of that Dockerfile here, or even see a list of all the layers by executing
docker history sandbox/echo
- Expose a list of ports – This exposes a port of the container’s main process (the Spring Boot application running an embedded Tomcat), allowing us to bind that port to a port on the host when we start a container. This is a requirement if you want to reach a process running inside your container over a TCP connection from the host. For this reason, you will define an EXPOSE instruction in most of your Dockerfiles, unless the container is meant to run a process with no inbound connections at all (only outbound, or none), such as a cron job or any other task that isn’t triggered by external input.
- Add the project sources to the image file-system – The ADD instruction allows us to run the application much as we could straight after cloning the repository, by executing the spring-boot-maven-plugin (mvn spring-boot:run). Please note that the sources are added under a folder named echo-service within the root of the container’s filesystem (‘/’). We could have added them to any other path, though.
- Change the working directory to the root of the project directory – WORKDIR sets the working directory for a number of other instructions, CMD among them. It is the equivalent of executing cd /echo-service from the shell.
- Define the command to use – The CMD instruction allows you ‘to provide the defaults for an executing container’. It can be used in combination with the ENTRYPOINT instruction or on its own, and, as you can read in the Docker documentation, there are a series of rules and practices that will make you choose one over the other, all of them outside the scope of this post. For the time being, take it that this allows us to execute the application as mentioned in (3).
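One practical difference between the two is worth a quick sketch: with CMD (and no ENTRYPOINT), anything you pass after the image name on docker run replaces the default command entirely. The commands below are illustrative, not part of the tutorial’s steps:

```shell
# Runs the default command from the Dockerfile (mvn spring-boot:run)
docker run --rm sandbox/echo

# Replaces the default command for this one run only
docker run --rm sandbox/echo mvn -version
```

With ENTRYPOINT, by contrast, extra arguments are appended to the entrypoint rather than replacing it.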
Running your container
And finally, the long-awaited moment of running an instance of the image you just created. We’ll do so by executing the command below:
docker run --rm --name=echo -p 8888:8080 sandbox/echo
The syntax of the run command is as follows:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Let’s explain the different [OPTIONS] used for running it.
- ‘--rm‘ – Cleans up the container and removes its file-system after the container exits. I would not recommend this option unless you know upfront that you won’t need to perform any post-mortem inspection of your container. I would normally have omitted it, but I found it useful for this post, as it saves you having to manually delete the stopped container should you want to run it again.
- ‘--name‘ – Allows you to define a name of your choice for the container. If not provided, the Docker daemon will assign a random one. I find it good practice to specify the name, as it avoids having to execute the command below to figure out the container ID/name, which some Docker commands use to identify the container to target.
# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                    NAMES
c4914dcac811   sandbox/echo   "/usr/local/bin/mv..."   5 seconds ago   Up 3 seconds   0.0.0.0:8888->8080/tcp   echo
- ‘-p‘ – Publishes a container port (or range of ports) to the host. In this case, I have mapped port 8888 on the host to port 8080 inside the container, just to show that they don’t need to match. In fact, they usually don’t, as you may want to run multiple instances of the same image on the same host that all use the same container port – for instance, several containers each running a Spring Boot application on the default port (8080). I could have omitted the host port by using ‘-p 8080’ alone, but then the Docker daemon would assign a random host port that changes every time the container runs, forcing me to list the running containers (as we did above) in order to figure out which one it is.
Lastly, we provided the image repository ‘sandbox/echo’ to use.
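To see why host and container ports usually differ, here’s a sketch of running two instances of the same image side by side (the container names are my own choice, not from the tutorial’s steps):

```shell
# Two containers from one image; both listen on 8080 internally,
# but each is published on a different host port
docker run -d --name=echo-1 -p 8888:8080 sandbox/echo
docker run -d --name=echo-2 -p 8889:8080 sandbox/echo
```

Here ‘-d’ runs the containers in the background; you can stop and remove them with docker stop echo-1 echo-2 and docker rm echo-1 echo-2.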
Testing our application
Last but not least, let’s make sure the application running within the container we started in the previous step can be reached and performs as expected. Simply navigate to http://localhost:8888/echo/ohce. You can change the path parameter to anything you want, and it should be returned as part of the response body.
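If you prefer the command line, the same check can be done with curl (assuming the container from the previous step is still running):

```shell
# Hit the echo endpoint through the published host port
curl http://localhost:8888/echo/ohce
```

The response body should contain the path parameter you sent.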
Wrapping it up
The problem I see with Docker adoption, especially in more established enterprises, is the mindset change in ways of working, as well as the engineering and economic effort needed to make a Docker migration happen. We have only scratched the surface of Docker’s most basic concepts – enough to have the equivalent of a “Hello World” application running from a container. If this was your first contact with Docker, you may already find it a bit overwhelming to follow and digest. There is a lot to know about Docker, even before learning any of its ecosystem – and the more you learn, the more you’ll realise how much you’ve still got to learn! It is a steep learning curve, and due to Docker’s fast and constant evolution, it’s easy to get the feeling of being constantly outdated. Don’t worry, we’ve all been there.
We have been working really hard on Sandbox to offer the benefits of Docker without having to go through the steep learning curve. This means you can dive into running your application in a containerised fashion without needing to use the Docker client, or having to write and maintain your project’s Dockerfiles or Docker Compose files. You can later decide whether you want or need to do so, or whether only somebody on your team needs that expertise.