The CI/CD and DevOps Blog

Learn about various tried-and-tested strategies that will help you ship code faster

Devashish Meena


Recent Posts

Provisioning AWS Instances Using Terraform Modules

In the previous post, we walked through the steps to provision an AWS network using Terraform modules. If you missed it, here's the link: Provisioning AWS Network using Terraform Modules, and the full code is available at https://github.com/ric03uec/prov_aws_vpc_terraform.

In this post, we'll provision a few more components using Terraform modules. Additionally, we'll use the state file generated in the previous post as an input data source in this workflow, so it is highly recommended that you run that code first.
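To make that idea concrete, here is a minimal sketch of wiring the two workflows together with a `terraform_remote_state` data source. The state file path, the output name (`public_subnet_id`), and the AMI ID are assumptions for illustration, not the actual values from the post's code.

```hcl
# Minimal sketch (not the post's actual code): read the state file produced by
# the VPC workflow and feed one of its outputs into an EC2 instance.
provider "aws" {
  region = "us-east-1"
}

data "terraform_remote_state" "vpc" {
  backend = "local"

  config = {
    # assumed location of the state file from the previous post's workspace
    path = "../prov_aws_vpc_terraform/terraform.tfstate"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t2.micro"

  # assumed output name exposed by the VPC workflow's state
  subnet_id = data.terraform_remote_state.vpc.outputs.public_subnet_id

  tags = {
    Name = "demo-instance"
  }
}
```

Running `terraform apply` in this workspace then builds on top of whatever the VPC workflow created, without duplicating any of its configuration.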

Since Terraform makes it super easy to create and destroy infrastructure, you should be able to spin up all the components for testing and destroy them when done. I've probably done this at least ten times while testing the code for this post!

Provisioning AWS Network using Terraform Modules

At Shippable, we love using Terraform. From using it sparingly just a few years back, we've now reached a stage where every single component of all our environments is managed using Terraform. Hard to believe? Here's all our infrastructure code to prove it! We've also published a few earlier posts that outline our process for managing infrastructure. Some of these are:

- Provisioning AWS Infrastructure Using Terraform

- Provisioning AWS VPC With Terraform

- Provision AWS EC2 Virtual Machines Using Terraform

So why a new post? Terraform now supports Modules, which provide an easy way to break down different parts of the infrastructure into reusable components. Terraform also provides a Registry where users can publish their modules, and anyone can download "verified" modules from the registry and use them directly as building blocks for their infrastructure. We decided to give this a try by creating a complete, production-ready infrastructure (similar to what we use); a short sketch of the module pattern follows the list below. The objectives of the tutorial are to:

- Logically break down infrastructure components into modules

- Reuse and chain modules to decouple components

- Drive all configuration from one file
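The post builds its own modules, so the following is only a rough sketch of the pattern, not the post's actual layout: it pulls a verified module from the public registry (the terraform-aws-modules VPC module, used here purely as an example), chains its output into another component, and drives the inputs from variables that can all live in a single tfvars file.

```hcl
# Sketch only: inputs come from variables that can all live in one terraform.tfvars.
variable "vpc_cidr" {}
variable "public_subnet_cidrs" {
  type = list(string)
}

# A "verified" module pulled straight from the public registry.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name           = "demo-vpc"
  cidr           = var.vpc_cidr
  azs            = ["us-east-1a", "us-east-1b"]
  public_subnets = var.public_subnet_cidrs
}

# Chaining: a second component consumes the first module's output.
resource "aws_security_group" "web" {
  name   = "demo-web-sg"
  vpc_id = module.vpc.vpc_id
}
```

A single terraform.tfvars supplying `vpc_cidr` and `public_subnet_cidrs` then drives the whole configuration, which is essentially what "drive all configuration from one file" means here.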

Extend your CI workflows using Assembly Lines

Do more with less - no one disagrees with this famous quote, but how often do we really take a step back to think about pushing the envelope with what we have right now? At Shippable, we've tried to ask this question with every feature we've built. This ideology manifests itself in the capabilities of current Shippable workflows compared to what they could do a few years back, all while keeping things simple and at zero additional cost to customers. A lot of our customers started using Assembly Lines after the launch a year ago. Most of them didn't need much help, but we do admit that some steps in extending traditional CI with the new Assembly Lines are a bit complicated. The objective of this post is to provide a detailed, step-by-step guide to enabling any CI job on Shippable to use the power of Assembly Lines.

Setting permissions on Amazon EC2 Container Registry repositories


Setting up permissions for images on Docker Hub is pretty straightforward, given that it follows a simple GitHub-like model. Amazon EC2 Container Registry (Amazon ECR) is a great service for storing images, but setting correct permissions is slightly more complicated, especially when configuring user-specific permissions on the images. We'll create a few users and repos and set up repo permissions using the AWS command line tool. Using the CLI makes it easier to script all the steps and automate the entire process, and everything we do with the CLI can also be done through the web interface.

The objective is to set up the following rules for any image pushed to Amazon ECR (a sketch of what one such policy looks like follows the list):

- user usr1 should have push/pull permissions for Repo1 and Repo2
- user usr2 should have push/pull permissions for Repo2 only
- user usr3 should have only pull permissions for Repo1
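The post sets these permissions with the AWS CLI (the policy JSON below could equally be passed to `aws ecr set-repository-policy`); purely as a hypothetical sketch of what the first rule translates to, here it is expressed as a Terraform repository policy, with a placeholder account ID and a lowercase repo name (ECR requires lowercase names):

```hcl
# Hypothetical sketch of rule 1: usr1 gets push and pull on repo1.
resource "aws_ecr_repository" "repo1" {
  name = "repo1"
}

resource "aws_ecr_repository_policy" "repo1_usr1" {
  repository = aws_ecr_repository.repo1.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid    = "AllowPushPullForUsr1"
      Effect = "Allow"
      # placeholder account ID; usr1 is the IAM user from the rules above
      Principal = { AWS = "arn:aws:iam::123456789012:user/usr1" }
      Action = [
        "ecr:GetDownloadUrlForLayer",      # pull
        "ecr:BatchGetImage",               # pull
        "ecr:BatchCheckLayerAvailability", # pull
        "ecr:PutImage",                    # push
        "ecr:InitiateLayerUpload",         # push
        "ecr:UploadLayerPart",             # push
        "ecr:CompleteLayerUpload"          # push
      ]
    }]
  })
}
```

Rules two and three follow the same shape: usr2's statement is attached to Repo2 only, and usr3's statement on Repo1 keeps just the pull-side actions.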

Let's get started.

Kubernetes Cluster with Flannel Overlay Network

This is the third and final post in the series where we play around with Docker, Kubernetes, and the Flannel overlay network. The first two posts are available at:
- Multi node kubernetes cluster
- Docker overlay network using flannel

In this tutorial, I'll explain how to bring up a multi-node Kubernetes cluster with an overlay network, which essentially combines what I've explained in the previous posts. An overlay is necessary to fulfill the networking requirements of a fully functional Kubernetes cluster. All of this is taken care of auto-magically when the cluster is brought up on GCE, but the manual configuration is slightly complicated: it is non-trivial to set up so many components correctly, and with so many tools available for the same job, it is difficult to figure out which one to pick. I picked Flannel because of its simplicity and community backing.