The CI/CD and DevOps Blog

Manisha Sahasrabudhe

Recent Posts

Provisioning AWS infrastructure with Terraform

Provisioning and updating infrastructure is the first step in setting up your development, beta, or production environments. HashiCorp's Terraform is fast becoming the tool of choice for this use case. We love Terraform at Shippable due to its easy declarative syntax, similar to our pipelines syntax. Other advantages are:

  • Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

  • Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

  • Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

  • Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

At Shippable, we use Terraform to provision all our environments and automate the provisioning using our Pipelines feature. If you're interested in taking a look at our terraform scripts and pipelines config, we have made our repositories public so you can check them out:  

Interested in trying it yourself? The following example walks you through a sample project that provisions two t2.micro instances on AWS. We've kept it simple for easy understanding, but you can also automate provisioning of complex environments as seen in our beta infra scripts above.
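To give you a feel for what the sample project looks like, here is a minimal sketch of a Terraform configuration that provisions two t2.micro instances. The region, AMI ID, and resource names are illustrative placeholders, not the actual contents of our repositories:

```hcl
# Configure the AWS provider; credentials are read from the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
provider "aws" {
  region = "us-east-1"
}

# Two t2.micro instances. Since neither depends on the other, Terraform's
# resource graph lets it create both in parallel.
resource "aws_instance" "demo" {
  count         = 2
  ami           = "ami-xxxxxx"   # placeholder AMI ID
  instance_type = "t2.micro"

  tags {
    Name = "terraform-demo-${count.index}"
  }
}
```

Running `terraform plan` against this config shows the two instances that would be created, and `terraform apply` then provisions them.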

Deploy your first Continuous Deployment Pipeline

As you know, we released our new implementation of continuous deployment pipelines last month. While our basic documentation is up to date, we believe that learning the new pipelines is best done with quick tutorials that demonstrate the power of CD and how easy it is to get started.

We have created a sample project and sample configuration to deploy the project to a test environment, create a release with semantic versioning, and deploy the project to production. The entire end-to-end scenario should take less than 30 minutes to try out, and while you won't learn every little trick, it will definitely make you comfortable with the configuration and how to set things up.

So read on and try it out!

Shippable Launches Industrialized Continuous Deployment Platform

SEATTLE, WA (Aug 25, 2016) Shippable has announced the next generation of its continuous deployment platform. The enhanced platform adds key features like release management, multi-cloud capabilities, a declarative pipeline language and a unified view across all application pipelines. These features help software-powered organizations further streamline the process of shipping software and accelerating innovation.

Today, most organizations find it challenging to innovate fast enough to satisfy consumers. DevOps is a set of principles that tries to solve this problem. However, the workflow required to get applications from source code to running in production is complicated and riddled with fragmented technology solutions. The only way to achieve rapid, iterative innovation is to cobble these fragments together in one continuous pipeline. Unfortunately, these custom, homegrown pipelines are rigid, inflexible, and hard to maintain. The do-it-yourself approach is a distraction and takes valuable cycles away from product engineering.

Shippable’s integrated platform is built from the ground up to defragment and streamline the process of shipping applications, so that software-powered organizations can accelerate innovation.

Triggering a sequential, parameterized build after continuous integration

We are happy to announce the addition of sequential parameterized builds to our feature list. Using this feature, you can trigger a sequence of CI workflows for your projects and even pass parameters from one build to another!

You will want to do this in two situations:

  • You have build dependencies, and if one codebase changes, you want to trigger builds for all dependent codebases. A common example: you have a base Docker image foo/appBase for your application, and all services have a FROM foo/appBase:latest in their Dockerfiles. With this new feature, you can easily trigger builds for all your services if the base image appBase changes.
  • You have codebases that need to be triggered sequentially since each build produces parameters required by the next build in the sequence.
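To make the base-image scenario concrete, here is what one service's Dockerfile might look like. The image name foo/appBase comes from the example above; everything else is illustrative:

```dockerfile
# This service builds on the shared application base image. When
# foo/appBase is rebuilt, a sequential build trigger can rebuild this
# service (and every other service with the same FROM line).
FROM foo/appBase:latest

# Illustrative service setup.
COPY . /app
WORKDIR /app
CMD ["npm", "start"]
```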

Let's look at a more detailed example of how this will work in practice.

Triggering a custom webhook after continuous integration

Even with the best continuous integration platform on the planet, you will still have scenarios where you want to trigger custom workflows after your CI finishes. These workflows can be very targeted, and you should be able to customize them based on whether CI passed.

Today, I am going to introduce you to a very versatile feature we added recently - the ability to add a custom webhook to your CI workflows that will be triggered after your build finishes. You can configure this trigger based on several factors, including build result and branch, and you can also include one or more parameters.

This new feature is documented here in greater detail.

Let's look at a simple example to see how this works. I have a project manishas/sample_nodejs and I want to open a GitHub issue in the repository if my build fails. I also want to customize the issue title and description to include the project name, build number, commit message, and build URL.
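As a sketch of what the receiving end of such a webhook might do, here is a small Python example that composes the issue from build metadata and files it via the GitHub Issues API. The endpoint is GitHub's real REST API; the function names and the specific title/body format are assumptions for illustration:

```python
import json
import urllib.request


def build_issue_payload(project, build_number, commit_message, build_url):
    """Compose a GitHub issue title and body from CI build metadata."""
    title = f"Build #{build_number} failed for {project}"
    body = (
        f"Commit message: {commit_message}\n\n"
        f"Build URL: {build_url}"
    )
    return {"title": title, "body": body}


def open_github_issue(repo, payload, token):
    """Create the issue via GitHub's REST API: POST /repos/{repo}/issues."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Wired up behind the webhook, `build_issue_payload` would be filled from the parameters posted after the build, and `open_github_issue("manishas/sample_nodejs", payload, token)` would file the issue in the repository.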