At SocialCops, we use a microservices-based architecture for our platform. In a system like this, where there are many interdependent applications running as services on the cloud, you need some way for them to find one another.

Traditionally, you would run multiple services on various servers and have them communicate with each other through a private network interface on the cloud. This is a problem because, if you move an application from one server to another for maintenance, any applications that depend on it will stop working. This not only makes it difficult to deal with hardware failures, but also makes upgrading your applications difficult. This is why many software companies, including SocialCops, have moved to microservices management software like Google’s Kubernetes.

Software like Kubernetes sandboxes your services in containers and makes it easy to manage your infrastructure through features like rolling updates and application discovery. Docker, meanwhile, is a tool that lets you build and package applications that can run anywhere, and Kubernetes relies on it to run this portable, packaged software. Docker ensures that your code is portable and executes in exactly the same environment wherever it is deployed. Thus, application developers can write platform-independent code, and operations engineers can deploy or update that code without involving the developers.

In this article, we will give a very brief introduction to Docker and show how you can use Docker Compose, a companion tool for Docker, to build a reliable, easy-to-use workflow in your development environment.

Docker

Docker is a tool, released in 2013, that packages software in portable Linux containers. It was originally built on a technology called LXC, which isolates processes and allocates them resources from the host machine. You can mount host directories (called volumes) into a Docker container so that its processes can read and write data on the host machine’s file system.

Writing a Dockerfile

A Dockerfile is like a script that tells Docker what it should do to build a portable image of your software. This includes tasks like installing dependencies, copying the required files, setting the working directory, and other trivial things you might want to do to prepare your runtime environment.

Here is a sample Dockerfile that packages a basic “Hello World” server written in Node.js.
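A minimal version might look like the sketch below; the file name, base image tag, and port are illustrative assumptions. First, the server itself:

// server.js: a bare-bones “Hello World” HTTP server (illustrative)
const http = require('http');

http.createServer((req, res) => {
  res.end('Hello World\n');
}).listen(3000); // the port number here is an assumption

And a Dockerfile that packages it:

# Use an official Node.js base image (the tag is illustrative)
FROM node:8

# Set the working directory inside the container
WORKDIR /app

# Copy the application code into the image
COPY server.js .

# Document the port the server listens on
EXPOSE 3000

# Start the server when the container runs
CMD ["node", "server.js"]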

Once you put this Dockerfile in the directory where you have the server.js file, you can build the image by running
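
docker build -t mycontainer .

Once the image is built, you can run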

docker run mycontainer:latest

to start the container process.

There are many ways that you can customize your containers, and all of them can be specified in the Dockerfile. For a full list of instructions, check out the official Dockerfile reference.

Docker Compose

Earlier in the article, we mentioned tools like Kubernetes that make it easy for applications to discover each other. They do this by providing an internal DNS system, which allows a program to call other applications on a private network using host names instead of the applications’ IP addresses. For example, if you have a Redis server running, you can register it as redis-server. Now, if an application needs to connect to it, it can use the host name redis-server instead of the IP address, and the DNS will automatically resolve the name to the correct location. This is extremely useful when an application crashes and is rescheduled on another server, or when a new version of an application is pushed to a different server and you need to reroute traffic to the new machine.
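To make this concrete, here is a sketch of what that looks like from application code, assuming the node_redis client (any client library works the same way):

const redis = require('redis');

// Connect by host name; the internal DNS resolves "redis-server"
// to whichever machine the service is currently running on.
const client = redis.createClient({ host: 'redis-server', port: 6379 });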

It may take a lot of time for standard automation tools to test your software, build the image, push it to a remote cluster, and start running it. This works well in production systems, since they focus on stability over speed of deployment. In your development environment, however, you need to move much faster. In Docker, you can simulate the above by linking containers. When two containers are linked, you can call them using a registered host name just like you would with Kubernetes DNS in your production systems.
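For illustration, manually linking two containers from the command line looks roughly like this (the application image name my-app is a hypothetical stand-in):

# Start Redis under a known container name
docker run -d --name redis-server redis

# Link the application container to it; the application can now
# reach Redis using the host name "redis-server"
docker run --link redis-server my-app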

While it is not hard to link containers manually, the Compose tool makes it much easier. It can also rebuild your images whenever the code or the Compose file changes, and run all your containers with a single command. This makes iterating much easier, since you can rebuild and rerun your containers as you change your code. Docker Compose is not only a good tool for testing your software; it also makes it easier to push your code to the server without changing configuration parameters (which are often passed as environment variables).
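In practice, this boils down to a single command run from your project directory, which rebuilds any changed images and starts all the services:

docker-compose up --build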

Writing a Compose File

Just like you write a Dockerfile for containers, you can write a Compose file to tell Compose how it should use those containers. The Compose tool is smart enough to detect dependencies between services and will start them before starting your main project. For example, if you have a web service that depends on a database, Compose will first build and run the database, then build and run the web service.
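For instance, such a dependency can be declared explicitly with depends_on (a sketch, not tied to any particular project):

version: '3'
services:
  web:
    build: .          # build the web service from the local Dockerfile
    depends_on:
      - db            # make sure the database starts first
  db:
    image: postgres   # use a stock PostgreSQL image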

A Compose file is named docker-compose.yml and written in YAML. Here is an example along the lines of the one in the official documentation, with a web service built from the local Dockerfile and a stock Redis image:
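version: '3'
services:
  web:
    build: .             # build the web image from the Dockerfile in this directory
    ports:
      - "5000:5000"      # map the container's port 5000 to the host
  redis:
    image: "redis:alpine"  # pull a prebuilt Redis image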

In the above file, we have declared two services — “web” and “redis”. Now, when you run docker-compose up in the directory where you have your Compose file, the Compose tool will automatically build the required images and link them at run time.

This is very useful if the names of your services in your Compose file are the same as the ones on Kubernetes. It lets you use the same configuration across development and production.

For more information on all the features of the Compose tool, read the official Compose reference.

What Next?

Using the flow mentioned in this article, you can create deployment configurations that are powerful, easy to use, and portable. You can then run these on any system that supports Docker and Docker Compose.

Docker is a powerful tool with many use cases. Big technology companies use it everywhere, from testing to deploying distributed services in large-scale production environments. Of course, you can continue to run your applications the way you do now. But I’d recommend trying to run a container-based infrastructure on Kubernetes to unlock the true power of Docker!