The CI/CD and DevOps Blog

Learn about various tried-and-tested strategies that will help you ship code faster

Ambarish Chitnis

Recent Posts

Kubernetes Tutorial: Attaching A Volume Mount To Your Application

Kubernetes allows you to package multiple containers into a pod. All containers in the pod run on the same Node, share the IP address and port space, and can find each other via localhost. To share data between the containers in a pod and persist it across container restarts, Kubernetes has an abstraction called Volumes. In this blog, we demonstrate how you can easily hook up Kubernetes Volumes to your pod and define the containers in the pod using Shippable.

 

Kubernetes Volumes

A Volume is a directory with data that is accessible to all containers running in a pod and is mounted into each container's filesystem. Its lifetime is identical to the lifetime of the pod. Decoupling the volume lifetime from the container lifetime allows the volume to persist across container crashes and restarts. Volumes can be backed by the host's filesystem, by persistent block storage such as AWS EBS, or by a distributed file system. The complete list of the volume types that Kubernetes supports can be found here.

Shippable supports mounting all the volume types that Kubernetes supports via the dockerOptions resource. However, the specific volume type that we demonstrate in this blog is a gitRepo volume. A gitRepo volume mounts a directory into each container's filesystem and clones a git repository into it.
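For reference, a gitRepo volume is declared in the pod spec just like any other volume type. The sketch below is a minimal, hypothetical pod definition; the repository URL, image, and mount path are placeholders, and note that the gitRepo volume type has since been deprecated in newer Kubernetes releases:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: git-repo-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: source-volume                # must match the volume name below
          mountPath: /usr/share/nginx/html   # where the cloned repo appears in the container
  volumes:
    - name: source-volume
      gitRepo:
        repository: "https://github.com/example/sample-repo.git"  # hypothetical repository
        revision: "master"                   # branch, tag, or commit SHA to check out
```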

Kubernetes Tutorial: Deploying a load-balanced Docker application

Kubernetes is a production-grade container orchestration platform with automated scaling and management of containerized applications. It is also open source, so you can install Kubernetes on any cloud, such as AWS, DigitalOcean, or Google Cloud Platform, or even on your own machines on premises. Kubernetes was started at Google and is also offered as a hosted container service called GKE. With Shippable, you can easily hook up an automated DevOps pipeline from source control through deployment to your Kubernetes pods and accelerate innovation.
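As a quick illustration of what "load-balanced" means here, the sketch below pairs a Deployment running several replicas with a Service of type LoadBalancer. It is a minimal, hypothetical example; the image name and labels are placeholders, not taken from the post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # several identical pods to balance traffic across
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:latest   # hypothetical application image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer           # provisions a cloud load balancer on supported providers
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```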

CI/CD For Microservices Using Monorepos

We wrote a very popular blog a little over a year ago detailing the reasons behind our choice to organize our microservices codebase in a single repository, known as a monorepo.

Since then, we've often been asked: how do you set up a CI/CD pipeline for a monorepo? When a code change to the repository triggers CI, how does your CI know which microservice changed so that it can rebuild and test just that service?

In this blog, we will demonstrate how the Shippable platform makes it simple to independently build, test, and deploy microservices from a monorepo. For simplicity, we will use a sample monorepo (which you can fork) that consists of just two microservices, and create a CI/CD pipeline with Amazon ECR and ECS.

Scenario

  • Our Node.js application has two microservices: a front-end microservice www that makes API calls to a backend API microservice called api. The source code for each microservice lives in its own folder in the monorepo.

  • Each microservice is packaged as a Docker image during the build process and has its own independent unit tests.

  • Each Docker image is pushed to its own Amazon ECR repository. Both images get deployed to a common Amazon ECS cluster.   

  • Both microservices share some common code that is maintained in a separate folder in the monorepo.

  • A commit to a microservice's folder builds, tests, and deploys only that microservice (one common way to detect which folders a commit touched is sketched after this list).

  • A commit to the common code builds, tests and deploys both microservices.
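To make the folder-based triggering concrete, here is a generic, hypothetical CI step, written as a sketch rather than Shippable's actual configuration, that inspects the paths touched by the latest commit and decides which microservice to rebuild. The folder names (www, api, common) match the sample described above:

```yaml
steps:
  - name: detect-changed-services
    script: |
      # List the files changed by the latest commit
      CHANGED=$(git diff --name-only HEAD~1 HEAD)
      # Rebuild www if its folder or the shared common folder changed
      if echo "$CHANGED" | grep -qE '^(www|common)/'; then
        echo "www or common changed: build, test, and deploy www"
      fi
      # Rebuild api if its folder or the shared common folder changed
      if echo "$CHANGED" | grep -qE '^(api|common)/'; then
        echo "api or common changed: build, test, and deploy api"
      fi
```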

Multi-Stage Docker builds using Shippable

Docker 17.05 introduced a feature called multi-stage builds. This feature enables you to build an image in multiple stages, with each stage represented by a FROM statement.

A very common use case that motivated the feature is building a production image of an application with a much smaller disk (storage) footprint than the development image. In the first stage of the build, the application is compiled in an image that contains the entire language toolchain. In the second stage, only the built application and its runtime dependencies are copied over to a different base image. The process of copying selective artifacts from one stage to another is thus greatly simplified in a single multi-stage Dockerfile. To learn more about this feature, see Docker's documentation here.

Shippable supports multi-stage Docker builds out of the box. In this blog, we will learn how to build a Docker image for a Hello World Go application using a multi-stage Dockerfile.
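For orientation, the sketch below is a minimal multi-stage Dockerfile along the lines described above; the Go version tag, file names, and binary name are illustrative rather than taken from the post:

```dockerfile
# Stage 1: compile the application using the full Go toolchain image
FROM golang:1.20 AS builder
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /hello main.go

# Stage 2: copy only the compiled binary into a small runtime image
FROM alpine:3.18
COPY --from=builder /hello /usr/local/bin/hello
ENTRYPOINT ["/usr/local/bin/hello"]
```

The final image contains just the binary and the small runtime base layer, which is where the reduced storage footprint comes from.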

Configuring Multi-Stage CI

In this blog, we demonstrate how to use the Shippable platform to perform multi-stage CI on your repositories. The key benefit of multi-stage CI is that it splits a time-consuming CI process into smaller stages, so issues in code quality or tests are detected as early as possible and the feedback loop on every check-in is shortened. This often entails refactoring or designing your application into smaller components and testing each component in isolation before running more expensive integration tests against the rest of the system.

What is multi-stage CI?

In our multi-stage CI scenario, we split the CI of a Node.js app into several stages.

  • Stage 1: Stage 1 runs on every PR and lints the source code in the repository to find style errors. To learn more about the benefits of linting your JavaScript code, look at this article. The idea behind Stage 1 is to perform a quick code quality check on every PR and shorten the feedback loop for any errors in coding style and bugs found during static analysis. This allows developers to quickly find and fix issues in their PRs.
  • Stage 2: Stage 2 runs on successful completion of Stage 1. In Stage 2, we run a small subset of tests to quickly validate the PR.
  • Stage 3: Stage 3 runs on the merged commit to the repository. Here we run a broader set of core unit tests that take longer than the Stage 2 tests.
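Putting the three stages together, a generic pipeline for this scenario might look roughly like the sketch below. This is a hypothetical, tool-agnostic outline, not Shippable's actual syntax, and the npm script names (lint, test:smoke, test:unit) are placeholders:

```yaml
stages:
  - name: lint              # Stage 1: quick style/static-analysis check on every PR
    trigger: pull_request
    script: npm run lint
  - name: smoke-tests       # Stage 2: small, fast subset of tests, runs after Stage 1 succeeds
    trigger: pull_request
    depends_on: lint
    script: npm run test:smoke
  - name: unit-tests        # Stage 3: broader, slower unit tests on the merged commit
    trigger: push
    script: npm run test:unit
```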