The CI/CD and DevOps Blog

Kubernetes Tutorial: Deploying a load-balanced Docker application

Kubernetes is a production-grade container orchestration platform that automates the deployment, scaling, and management of containerized applications. It is also open source, so you can install Kubernetes on any cloud such as AWS, DigitalOcean, or Google Cloud Platform, or even just on your own machines on premises. Kubernetes was started at Google, which also offers it as a hosted container service called GKE. With Shippable, you can easily hook up an automated DevOps pipeline from source control all the way to deployment to your Kubernetes pods, and accelerate innovation.

In this blog, we demonstrate how to deploy a load-balanced, multi-container application to multiple Kubernetes environments on GKE. The deployment occurs in multiple stages in a Shippable-defined workflow.

 

Get Started: Kubernetes Deployment spec

The pods and services (load balancer) for the application are created using a deployment spec. Instead of creating and maintaining a separate deployment spec per environment, which is a common practice, we create a single deployment spec template. This template has placeholders for the image and for the service/pod labels. When we deploy the application to a specific environment, we use powerful yet simple Shippable platform functions and resources to replace these placeholders at run time, when the deployment actually happens.

The deployment spec template (located here in our public repository) defines the label selector placeholders in the .spec.selector section and the pod label placeholders in the .spec.template.metadata.labels section. Labels are defined both for the front-end voting application (FE_LABEL) and for the Redis service (BE_LABEL), which the front end calls via another load balancer.
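To make this concrete, here is a minimal sketch of what such a template might look like. The placeholder syntax, resource names, and port are illustrative assumptions, not the actual file from the repository; only the FE_LABEL placeholder name comes from the description above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote-fe                 # hypothetical name for the front-end deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ${FE_LABEL}          # placeholder replaced per environment at deploy time
  template:
    metadata:
      labels:
        app: ${FE_LABEL}        # must match the selector above
    spec:
      containers:
        - name: vote
          image: ${IMAGE}       # image name/tag injected by the pipeline
          ports:
            - containerPort: 80
```

Because both the selector and the pod labels reference the same placeholder, a single substitution keeps the deployment and its load-balancing service pointed at the same set of pods in every environment.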

CI/CD For Microservices Using Monorepos

We wrote a very popular blog a little over a year ago detailing the reasons behind our choice to organize our microservices codebase in a single repository, called a monorepo.

Since then, we've often been asked: how do you set up a CI/CD pipeline for a monorepo? When a code change to the repository triggers CI, how does your CI know which microservice changed, so that it can rebuild and test just that service?

In this blog, we will demonstrate how the Shippable platform makes it simple to independently build, test, and deploy microservices from a monorepo. For simplicity, we will use a monorepo sample (that you can fork) consisting of just two microservices, and create a CI/CD pipeline with Amazon ECR and ECS.

Scenario

  • Our Node.js application has two microservices: a front-end microservice, www, that makes API calls to a back-end API microservice called api. The source code for each microservice lives in its own folder in the monorepo.

  • Each microservice is packaged as a Docker image during the build process and has its own independent unit tests.

  • Each Docker image is pushed to its own Amazon ECR repository. Both images get deployed to a common Amazon ECS cluster.   

  • Both microservices share some common code that is maintained in a separate folder in the monorepo.

  • A commit to a microservice's folder builds, tests, and deploys only that microservice.

  • A commit to the common code builds, tests, and deploys both microservices.
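The routing logic behind the last two points can be sketched as a simple mapping from changed file paths to services. The snippet below shows that core logic in portable shell; the folder names follow the sample layout above, and the function name is our own, not part of the Shippable platform:

```shell
# decide_services: read changed file paths on stdin and print the
# microservices that need a rebuild, one per line. Folder names follow
# the sample repo layout (www/, api/, common/).
decide_services() {
  build_www=0
  build_api=0
  while IFS= read -r path; do
    case "$path" in
      www/*)    build_www=1 ;;
      api/*)    build_api=1 ;;
      common/*) build_www=1; build_api=1 ;;  # shared code: rebuild both
    esac
  done
  [ "$build_www" -eq 1 ] && echo www
  [ "$build_api" -eq 1 ] && echo api
  return 0
}

# In CI, the path list would come from something like
# `git diff --name-only HEAD~1 HEAD`. A change to the common
# folder triggers both services:
printf 'common/util.js\n' | decide_services   # prints: www, then api
```

A commit touching only api/ would print just api, so only that microservice's build, test, and deploy steps run.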

Security Best Practices At Shippable

In light of a recent blog post about a competitor's security vulnerabilities, I wanted to be completely transparent about our security best practices to reassure our customers that they're in good hands.

From the start, we've been very aware of the fact that when customers click the Authorize button to grant us access to their GitHub or Bitbucket repositories, they trust us with their intellectual property. This is a tremendous act of trust, especially since we're all aware of hackers attacking almost every major site and stealing personal information.

Our security measures fall under two pillars, Product and Process, both of which are explained below.

Multi-Stage Docker builds using Shippable

Docker introduced a new feature called multi-stage builds in Docker 17.05. This feature enables you to build an image in multiple stages, with each stage beginning with its own FROM statement.

A very common use case that motivated the development of this feature is building a production image of an application with a much smaller disk (storage) footprint than the development image. In the first stage of the build, the application is compiled in an image that contains the entire language toolchain. In the second stage, only the built application and its runtime dependencies are copied over to a different base image. The process of copying selected artifacts from one stage to another is thus greatly simplified in a single multi-stage Dockerfile. To learn more about this feature, see Docker's documentation here.

Shippable supports multi-stage Docker builds out of the box. In this blog, we will learn how to build a Docker image from a multi-stage Dockerfile for a Hello World Go application.
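A minimal sketch of such a Dockerfile is shown below. It assumes the Go source sits at the repository root in main.go; the stage name and base-image tags are illustrative choices, not the ones from this tutorial's repository:

```dockerfile
# Stage 1: compile the app with the full Go toolchain
FROM golang:1.20 AS builder
WORKDIR /src
COPY main.go .
# Build a statically linked binary so it can run on a minimal base image
RUN CGO_ENABLED=0 go build -o /hello main.go

# Stage 2: copy only the binary into a small runtime image
FROM alpine:3.18
COPY --from=builder /hello /hello
ENTRYPOINT ["/hello"]
```

Only the final stage ends up in the pushed image, so the resulting image carries the compiled binary without the multi-hundred-megabyte Go toolchain.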

Configuring Multi-Stage CI

In this blog, we demonstrate how to use the Shippable platform to perform multi-stage CI on your repositories. The key benefit of multi-stage CI is splitting a time-consuming CI process into smaller stages, so that issues in code quality and tests are detected as early as possible and the feedback loop on every check-in is shortened. This often entails refactoring or designing your application as smaller components and testing each component in isolation before running more expensive integration tests against the other components in the system.

What is multi-stage CI?

In our multi-stage CI scenario, we split the CI of a Node.js app into several stages.

  • Stage 1: Stage 1 runs on every PR and lints the source code in the repository to find style errors. To learn more about the benefits of linting your JavaScript code, look at this article. The idea behind Stage 1 is to perform a quick code-quality check on every PR and shorten the feedback loop for errors in coding style and bugs found during static analysis. This allows developers to quickly find and fix issues in their PRs.
  • Stage 2: Stage 2 runs on successful completion of Stage 1. In Stage 2, we run a small subset of tests to quickly validate the PR.
  • Stage 3: Stage 3 runs on the merged commit to the repository. Here we run a broader set of core unit tests that take longer to run than Stage 2.
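Conceptually, the three stages chain together as dependent jobs, each triggered by the success of the previous one. The following is a hypothetical sketch in the spirit of Shippable's runSh jobs; the job names, resource names, and npm script names are all assumptions for illustration, not the actual configuration used in this tutorial:

```yaml
jobs:
  - name: stage_1_lint            # runs on every PR
    type: runSh
    steps:
      - IN: app_repo              # the source repository resource
      - TASK:
        - script: npm install && npm run lint

  - name: stage_2_smoke_tests     # runs when stage_1_lint succeeds
    type: runSh
    steps:
      - IN: stage_1_lint
      - TASK:
        - script: npm run test:smoke

  - name: stage_3_unit_tests      # runs on the merged commit
    type: runSh
    steps:
      - IN: stage_2_smoke_tests
      - TASK:
        - script: npm test
```

The key design point is that each stage consumes the previous stage as an input, so a failure in linting stops the pipeline before any test time is spent.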