The CI/CD and DevOps Blog

Security Best Practices At Shippable

In light of a recent blog post about a competitor's security vulnerabilities, I wanted to be completely transparent about our security best practices to reassure our customers that they're in good hands.

From the start, we've been acutely aware that when customers click the Authorize button to grant us access to their GitHub or Bitbucket repositories, they trust us with their Intellectual Property. This is a tremendous act of trust, especially at a time when hackers are attacking almost every major site and stealing personal information.

Our security measures fall under two pillars, Product and Process, both of which are explained below.

Multi-Stage Docker Builds Using Shippable

Docker introduced a new feature called multi-stage builds in Docker 17.05. This feature enables you to build an image in multiple stages, with each stage beginning with a FROM statement.

A very common use case that motivated the development of this feature is building a production image of an application with a much smaller disk (storage) footprint than the development image. In the first stage of the build, the application is compiled in an image that contains the entire language toolchain. In the second stage, only the built application and its runtime dependencies are copied over to a different base image. The process of copying selective artifacts from one stage to another is thus greatly simplified in a single multi-stage Dockerfile. To learn more about this feature, see Docker's documentation.

Shippable supports multi-stage Docker builds out of the box. In this blog, we will learn how to build a Docker image using a multi-stage Dockerfile for a Hello World Go application.
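To make this concrete, here is a minimal sketch of such a multi-stage Dockerfile. The file name main.go, the image tags, and the paths are illustrative assumptions rather than the exact contents of the sample:

# Stage 1: compile the application inside the full Go toolchain image.
FROM golang:1.9 AS builder
WORKDIR /go/src/hello
COPY main.go .
# Build a statically linked binary so it can run on a minimal base image.
RUN CGO_ENABLED=0 GOOS=linux go build -o hello .

# Stage 2: copy only the compiled binary into a small runtime image.
FROM alpine:3.6
COPY --from=builder /go/src/hello/hello /usr/local/bin/hello
CMD ["hello"]

The final image contains only the Alpine base and the compiled binary; the Go toolchain from the first stage is discarded, which is exactly the disk footprint saving described above.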

Configuring Multi-Stage CI

In this blog, we demonstrate how to use the Shippable platform to perform multi-stage CI on your repositories. The key benefit of multi-stage CI is splitting a time-consuming CI process into smaller stages, so that issues in code quality or tests are detected as early as possible and the feedback loop on every check-in is shortened. This often entails refactoring or designing your application into smaller components and testing each component in isolation before running more expensive integration tests of that component with other components in the system.

What is multi-stage CI?

In our multi-stage CI scenario, we split the CI of a Node.js app into three stages; a sample configuration sketch follows the list below.

  • Stage 1: Stage 1 runs on every PR and lints the source code in the repository to find style errors. To learn more about the benefits of linting your JavaScript code, see this article. The idea behind Stage 1 is to perform a quick code-quality check on every PR and shorten the feedback loop for errors in coding style and bugs found during static analysis. This allows developers to quickly find and fix issues in their PRs.
  • Stage 2: Stage 2 runs on successful completion of Stage 1. In Stage 2, we run a small subset of tests to quickly validate the PR.
  • Stage 3: Stage 3 runs on every commit merged to the repository. Here we run a broader set of core unit tests that take longer than the tests in Stage 2.
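Here is a simplified sketch of what this can look like in shippable.yml. The npm scripts lint, test:smoke, and test:unit are placeholder names we made up for this illustration, and we use Shippable's IS_PULL_REQUEST environment variable to tell PR builds apart from merged commits:

language: node_js
node_js:
  - "8"
build:
  ci:
    # Stage 1: lint on every run for fast feedback on code quality.
    - npm install
    - npm run lint
    # Stage 2: a quick subset of tests, run on pull requests only.
    - if [ "$IS_PULL_REQUEST" == "true" ]; then npm run test:smoke; fi
    # Stage 3: the broader, slower unit test suite, run on merged commits.
    - if [ "$IS_PULL_REQUEST" != "true" ]; then npm run test:unit; fi

Because the ci steps run sequentially and stop at the first failure, Stage 2 only runs if Stage 1 succeeds, matching the flow described above.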

Configuring CI For A Postgres Database

Shippable makes it easy to set up database migrations and test them continuously. In this blog, we will go over the steps to execute and test migrations on a PostgreSQL database using Shippable CI.

Our sample uses Node.js and the node-pg-migrate module to set up migrations on a PostgreSQL database. Shippable integrates with PostgreSQL and allows you to automatically launch a PostgreSQL instance with a single line in the yml configuration. We will test migrations on this PostgreSQL instance.
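As a sketch, a shippable.yml along these lines launches PostgreSQL and runs the migrations against it. The database name, connection URL, and the direct call to the node-pg-migrate CLI are illustrative assumptions, not the exact commands from the sample:

language: node_js
services:
  - postgres   # this single line launches a PostgreSQL instance
build:
  ci:
    - npm install
    # create a scratch database for the migration tests
    - psql -U postgres -h 127.0.0.1 -c "CREATE DATABASE test;"
    # apply all pending migrations so the tests can verify the schema
    - DATABASE_URL=postgres://postgres@127.0.0.1:5432/test ./node_modules/.bin/node-pg-migrate up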

Sample project

The code for this example is on GitHub: devops-recipes/ci-migrate-postgresdb

You can fork the repository to try out this sample yourself, or just follow the instructions to configure your own use case.

Are You Stuck in the New DevOps Matrix From Hell?

If you google "matrix from hell", you'll see many articles about how Docker solves the matrix from hell. So what is the matrix from hell? Put simply, it is the challenge of packaging any application, regardless of language/frameworks/dependencies, so that it can run on any cloud, regardless of operating systems/hardware/infrastructure.

The original matrix from hell: applications were tightly coupled with underlying hardware

Docker solved the matrix from hell by decoupling the application from the underlying operating system and hardware. It did this by packaging all dependencies inside Docker containers, including the OS. This makes Docker containers "portable", i.e., they can run on any cloud or machine without the dreaded "it works on my machine" problem. This is the single biggest reason Docker is considered the hottest new technology of the last decade.

With DevOps principles taking center stage over the last few years, Ops teams have started automating tasks like provisioning infrastructure, managing configuration, and triggering production deployments. IT automation tools like Ansible and Terraform help tremendously with these use cases, since they allow you to represent your infrastructure as code that can be versioned and committed to source control. Most of these tools are configured with a YAML- or JSON-based language that describes the activity you're trying to achieve.
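For a flavor of what that configuration looks like, here is a tiny, illustrative Ansible playbook; the host group and package are arbitrary examples. The entire desired state is plain YAML that can be versioned alongside your application code:

# playbook.yml: desired infrastructure state, described declaratively in YAML.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true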