Docker was released as open source in March 2013, so in software terms it’s relatively new. As always with shiny things, nerds like me see their potential and start thinking of ways of using them everywhere.
A lot of bloggers are still focusing on all of Docker’s benefits, but we feel it’s time to get more serious about asking where and why it is the best choice – and, more importantly, when you might be better off with alternatives. After thinking through the points below we made the choice not to use Docker in production, but if you do use it, we’d love to hear your reasons too.
In this post we’re going to share some of our findings and outline the key questions you should be asking if you plan to use Docker.
We’d also like to hear from you: What do you think it is about Docker that is driving adoption? How can you see the tool changing in the future? What do you wish it could do?
1 - How much do you really need to do?
Here are a few examples of the wide range of functions that Docker offers:
- Images: Docker can build, pull, and push images from and to an index
- Containers: Docker can start/stop containers and manage their lifetimes
- Logging: Docker captures the stdout and stderr of all these containers internally
- Volumes: Docker creates and manages associated volumes for the containers
- Networking: Docker creates and manages virtual interfaces and an internal bridge for all the containers
- RPC: the Docker server provides an API that allows external processes to control its behaviour
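The breadth of that surface area shows up in the CLI alone. Here is a rough sketch of the commands touching each of the areas above (this assumes a running Docker daemon; `myorg/app` is a placeholder image name, and the API call needs curl 7.40+ for unix socket support):

```shell
# Images: build locally, pull from and push to a registry
docker build -t myorg/app .
docker pull ubuntu:14.04
docker push myorg/app

# Containers + logging: start one, read its captured stdout/stderr,
# then stop and remove it
docker run -d --name worker myorg/app
docker logs worker
docker stop worker && docker rm worker

# Volumes: -v creates a volume managed alongside the container
docker run -v /data myorg/app

# Networking: every container hangs off the docker0 bridge
ip addr show docker0

# RPC: the same daemon serves an HTTP API over a unix socket
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

One binary, one daemon, serving all six of these concerns.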
Providing all these features is inevitably going to involve some level of complexity – according to sloccount, the Docker codebase alone weighs in at 97,100 lines of code.
With Docker you’re in an all-or-nothing relationship. All these features are packed into a single binary and there are no half measures. So if you’re looking to use Docker, think about the functionality offered and whether you even need it.
2 - Is the complexity worth it?
We started using Docker in combination with Jenkins a year ago, looking for ways to simplify the management of build runtimes.
The idea was that we wouldn’t have to worry about build dependencies or environment pollution between builds anymore. Docker would isolate each build in a fresh container. This really simplified our worker setup: we just needed Java and Docker, and no longer had to handle clashing dependencies.
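The isolation itself is simple to express. A minimal sketch of the kind of invocation we mean (assuming a hypothetical `myorg/build-env` image with the build toolchain baked in):

```shell
# Each build runs in a fresh container: mount the checkout at /build,
# run the build command there, and let --rm discard the container after.
docker run --rm \
  -v "$PWD":/build \
  -w /build \
  myorg/build-env:latest \
  bundle exec rake test
```

Nothing from one build’s container survives into the next – which is exactly the appeal.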
This worked fine for a while but also introduced a couple of new issues. Managing the runtime of the containers wasn’t trivial. We also ended up with cleanup issues where old containers would leave their volumes around, eventually causing downtime. To solve these problems we actually had to build a wrapper tool (see cide) to manage the Docker containers for each build.
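To give a flavour of the cleanup problem: on the Docker versions we were running, removing a container did not remove its volumes unless you asked explicitly, so build workers slowly filled their disks. A sketch of the kind of periodic cleanup this forces on you (requires a running Docker daemon):

```shell
# Remove exited build containers together with their volumes.
# Without -v, `docker rm` leaves the volumes behind on disk --
# this is exactly the leak that eventually caused us downtime.
docker ps -a -q -f status=exited | xargs --no-run-if-empty docker rm -v
```

It works, but it is the sort of housekeeping you end up wrapping in a tool of your own – which is what cide became for us.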
While building cide we also noticed some flexibility issues with the Dockerfile builder, which doesn’t cater well for private repositories in Gemfiles. Just getting the runtime and cleanup to work properly took at least three iterations.
In the end the new solution is better than what we previously had, but we have a feeling that things could have been simpler with a less tightly coupled tool-set. As any good developer would, you can work around the abstractions to find a solution, but it’s not going to be pretty.
3 - Can you deal with the downtime?
Pusher’s use case is slightly niche because we have long running client connections - our customers pay for reliable and fast connections. So we must limit the amount of disruption to our customers. In fact, we’ve taken extra steps to limit disruption when doing deploys (see crank, for example).
Docker releases a new version once or twice a month, and you probably want to keep the binary up to date. But because of how it’s structured, it’s not possible to upgrade without shutting down all the containers on the machine. This inevitably introduces a new downtime challenge.
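Concretely, because every container is a child of the single Docker daemon, the upgrade path looks roughly like this (a sketch assuming an Ubuntu host using the lxc-docker package; your distribution and package name may differ):

```shell
# Count the containers that will be taken down by the upgrade
docker ps -q | wc -l

# Stopping the daemon stops every container on this host with it
service docker stop
apt-get install lxc-docker   # install the new binary
service docker start
```

There is no way to swap the binary underneath running containers, so every host upgrade is a small outage for everything on that host.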
For now, this is where we hit a wall and the main reason we don’t use Docker on our main product. We have plans to replace whole machines and switch DNS to redirect the traffic but until then this is a deal breaker for us. Depending on your application’s architecture this could be something for you to consider too.
If you aren’t careful, you could find yourself re-architecting the entire application just to fit that model. This is one of the reasons we decided not to use Docker: we suspect it could add latency and some additional overhead.
4 - Do you have the support?
Finally, and here’s the kicker, you need to ask yourself: do you have the operational knowledge? We found it very difficult to find detailed information on deploying Docker for specific use-cases. What operational issues will we encounter, and how do we work around them?
As soon as you dig a bit deeper into how Docker operates, there is little documentation available online. So you can either spend a lot of time tinkering, talking in forums and scouring the web for answers, or explore the dedicated support options offered by Docker.
Essentially there’s a lot of information on getting started but not so much available on optimisation and operations. The issue with this is that it is difficult to see over the horizon enough to understand if it is a practical solution in the long term.
This is one of the reasons we are sharing this post: to help inform people making the decision today.
So should you deploy Docker?
This final question is one only you can answer. Depending on your use case, Docker’s all-encompassing approach could be perfect. It’s also a great starting point if you’re building from the ground up.
But if you’ve already got an established architecture you have to ask yourself whether it is worth it.
We’d advise mapping out your application, identifying what functionality you’ll need and cross-checking it with what’s offered in Docker. If you’re building something simple, then it probably isn’t the tool for you. If uptime is a deal breaker, it probably isn’t the tool for you either.
For those who do run Docker in production, we’d love to hear what you have discovered about the tool and get a real conversation going on how the community could help it improve.