The CI/CD and DevOps Blog

Containerized Microservices with Docker & NodeJS


Your application has been chugging along nicely when suddenly it grinds to a halt, again! After debugging through thousands of lines of code you finally find the one tiny piece that caused it, but you have to reinitialize your whole service to fix it. How can you avoid this painful process every time your application inevitably breaks? By smashing it into pieces, and then letting each piece do its own thing! Thus your monolithic application becomes a series of interchangeable and easily manageable microservices.


Shippable is constantly evolving to help developers ship code faster by building the best continuous integration and delivery platform. The newest improvement to our infrastructure is the introduction of microservices. Until now, our API not only handled HTTP calls, it also processed tasks itself, so we're moving task processing into microservices. Microservices are self-managed, self-contained units that monitor their dependencies, listen for changes, and complete the tasks delegated to them. This way we can take out parts and put in new ones without disturbing other services.
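To make that concrete, here is a minimal sketch of what such a task-processing microservice could look like in Node.js. This is not Shippable's actual code: the queue name, the AMQP_URL variable, and the choice of RabbitMQ via the amqplib package are assumptions for illustration only. The worker listens for tasks, processes each one, and acknowledges or requeues it, entirely independently of the API that accepted the original HTTP call.

```javascript
// Hypothetical task-processing microservice (illustrative only).
// Assumes a RabbitMQ broker reachable at AMQP_URL and the 'amqplib' package.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect(process.env.AMQP_URL || 'amqp://localhost');
  const channel = await conn.createChannel();
  const queue = 'tasks';                      // hypothetical queue name

  await channel.assertQueue(queue, { durable: true });
  channel.prefetch(1);                        // handle one task at a time

  channel.consume(queue, async (msg) => {
    if (msg === null) return;
    const task = JSON.parse(msg.content.toString());
    try {
      await processTask(task);                // service-specific work goes here
      channel.ack(msg);                       // done: remove the task from the queue
    } catch (err) {
      console.error('task failed, requeueing', err);
      channel.nack(msg, false, true);         // failed: put the task back on the queue
    }
  });
}

async function processTask(task) {
  console.log('processing', task);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Because the worker owns nothing but its queue, you can stop it, replace it with a new version, and start it again without touching the API or any of the other services.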

Immutable containers with version tags on Docker Hub

Lately, several folks have asked us about our reasoning behind adding build numbers as the version tags for Docker Hub images. Briefly, our current flow is -

- Pull code from GitHub
- Pull image from Docker Hub (or build from a Dockerfile)
- Run CI in the container
- If CI passes and push to Docker Hub is configured in the yml or Project Settings, push image to Docker Hub with a version tag <image name>:<build number>

The question is - why don't we just tag the image with <image name>:latest? What is the value behind versioning images?
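As a hedged illustration of the flow above (the image name and build numbers are made up), tagging each successful CI run with its build number gives you an immutable, addressable image per build:

```bash
# Tag and push the image produced by this CI run (build number 57 here):
docker build -t myorg/myapp .
docker tag myorg/myapp myorg/myapp:57
docker push myorg/myapp:57

# Later, deploy or roll back to a specific, known-good build:
docker pull myorg/myapp:55
```

With :latest alone, both of those operations resolve to whatever image happened to be pushed most recently, so reproducible deployments and rollbacks are off the table.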

ApacheCon and Shippable's DevOps transformation with Slack, GitHub and Docker

This week, the Shippable team had the opportunity to present 'Modern DevOps with Docker' (describing our internal transformation - see below for more) and engage with the community at ApacheCon 2015 in Austin, TX.  We saw firsthand how the thriving group of dedicated professionals in the Apache community is tackling big challenges across the full tech spectrum.  In addition, while in Austin, we had the chance to connect with the tight-knit and talented DevOps Austin community and learn from their perspectives.  It was an energizing three days.

Docker overlay network using Flannel

This is the next blog post in the series where I’ll attempt to build a full multi-node Kubernetes cluster from scratch with a Docker overlay network using Flannel. You can find the previous post here, where I describe bringing up a two-node cluster without using an overlay network.

The first thing you need once you start scaling your containers across different hosts is a consistent networking model, the primary requirement of which is to enable two (or more) containers on different hosts to talk to each other.  Port forwarding might give you the same result when dealing with a small number of containers, but this approach gets out of control very quickly and you’re left to wade through a port-forwarding mess. What we want in situations like these is a network where each container on every host gets a unique IP address from a global namespace, so that all containers can talk to each other. This is one of the fundamental requirements of the Kubernetes networking model, as specified here.
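As a rough sketch of how this works (the address range, etcd key, and file paths are typical flannel defaults, not details from this series), flannel reads a network range from etcd, claims a per-host subnet out of it, and the Docker daemon is then told to hand out container IPs from that subnet, so every container ends up with a unique, routable address:

```bash
# 1. Publish the overlay network range in etcd (run once, from any node):
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'

# 2. Start flanneld on each host; it claims a per-host subnet and records it:
flanneld &
cat /run/flannel/subnet.env   # e.g. FLANNEL_SUBNET=10.1.42.1/24

# 3. Start the Docker daemon with that subnet as its bridge IP range:
source /run/flannel/subnet.env
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```

With each host owning a distinct subnet and flannel routing traffic between hosts, containers on different hosts can reach each other directly by IP, with no port forwarding involved.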

Caching containers to speed up your builds

Important update on this blog


This blog is based on the old shippable.yml format. A built-in yml translator automatically translates the code from the old format to the new one. Read more about the translation from the old to the new format here.

For the latest information, refer to our documentation on caching and/or open a support issue if you have questions.