Provisioning AWS infrastructure with Terraform

- By Manisha Sahasrabudhe on October 19, 2016

Provisioning and updating infrastructure is the first step in setting up your development, beta, or production environments. HashiCorp's Terraform is fast becoming the tool of choice for this use case. We love Terraform at Shippable due to its easy declarative syntax, similar to our pipelines syntax. Other advantages are:

  • Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

  • Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

  • Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

  • Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.
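For example, with the two-instance sample described later in this post, the plan step reports exactly what apply would create before anything is touched. The resource name below is hypothetical and the output is abridged from an imagined run:

```text
$ terraform plan
+ aws_instance.demo.0
    ami:           "ami-0d729a60"
    instance_type: "t2.micro"

+ aws_instance.demo.1
    ami:           "ami-0d729a60"
    instance_type: "t2.micro"

Plan: 2 to add, 0 to change, 0 to destroy.

$ terraform apply
...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
```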

At Shippable, we use Terraform to provision all our environments and automate the provisioning using our Pipelines feature. If you're interested in taking a look at our Terraform scripts and pipelines config, we have made our repositories public so you can check them out.

Interested in trying it yourself? The following example walks you through a sample project that provisions two t2.micro instances on AWS. We've kept it simple for easy understanding, but you can also automate provisioning of complex environments as seen in our beta infra scripts above.
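As a sketch of what such a config can look like, the following declares two t2.micro instances. The resource name, tag, and subnet ID are placeholders; the AMI and availability zone are the ones used in the sample project described below:

```hcl
# Sketch only: the resource name, tag, and subnet ID are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "demo" {
  count             = 2
  ami               = "ami-0d729a60"
  instance_type     = "t2.micro"
  subnet_id         = "subnet-xxxxxxxx"
  availability_zone = "us-east-1b"

  tags {
    Name = "terraform-demo-${count.index}"
  }
}
```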

Fork sample project

Fork sample_pipelines_terraform from the ShippableSamples org on GitHub.

Understand the project

Let's take a deeper look at the structure of this sample project.

  • The Terraform config file describes the infrastructure you want to provision. In this sample, we provision two t2.micro instances in a subnet in availability zone us-east-1b using AMI ami-0d729a60. You can edit this file to change what you need in terms of infrastructure.
  • A wrapper script sets up environment variables and calls terraform apply to provision the infrastructure. It also preserves the Terraform state file between runs.
  • shippable.resources.yml and shippable.triggers.yml, together with the jobs config, are the config files that describe the jobs, resources, and triggers required.

  • The resources config has the following:
    • An integration resource that has access to the VPC and Subnet you want to provision your infrastructure in.
    • a gitRepo resource pointing to the repository containing terraform scripts. In this case everything is in the same repository.
    • A params resource that contains the region you want to provision infrastructure in. 
  • The jobs config has the following:
    • A runSh job that calls the provisioning script
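The wrapper script's flow can be sketched as below. This is an illustration, not the sample's actual script: the state directory, variable names, and dry-run guard are all hypothetical, and it defaults to printing the terraform command rather than running it (set DRY_RUN=false for a real run). The pipeline's AWS integration is assumed to inject credentials, and REGION comes from the params resource.

```shell
#!/bin/bash -e
# Sketch of a provisioning wrapper; names and paths are hypothetical.

STATE_DIR="${STATE_DIR:-/tmp/tf-state}"  # hypothetical home for the preserved state file
DRY_RUN="${DRY_RUN:-true}"               # this sketch defaults to a dry run

run() {
  if [ "$DRY_RUN" = "true" ]; then
    echo "$@"                            # print the command instead of executing it
  else
    "$@"
  fi
}

mkdir -p "$STATE_DIR"
export AWS_DEFAULT_REGION="${REGION:-us-east-1}"

# Restore the state file preserved from the previous run, if any
if [ -f "$STATE_DIR/terraform.tfstate" ]; then
  cp "$STATE_DIR/terraform.tfstate" .
fi

run terraform apply

# Preserve the new state file for the next run
if [ -f terraform.tfstate ]; then
  cp terraform.tfstate "$STATE_DIR/"
fi
```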

Add integrations for AWS and GitHub 

  • Create an AWS integration from your Shippable UI by following the directions in the Adding the Account Integration section. Please make sure you assign the integration to the Subscription that contains the forked sample_pipelines_terraform.
  • Create an integration of type GitHub. Please make sure you assign the integration to the Subscription that contains the forked sample_pipelines_terraform.
  • Make sure both integrations are listed in <your-subscription-name>->Settings tab->Integrations. See the image below.
  • Note the names of both integrations as they appear in your Subscription Settings.


Edit config 

Now that you know what is included in the project, let's get started on the config. Here are the edits you'll need to make:

  • In shippable.resources.yml
    • Replace manishas-aws in the integration-aws resource with the name of your integration. This needs to be the name you used in Subscription Settings.
    • Replace github-manishas in the repo-tfScripts resource with the name of your integration.
    • In sourceName: manishas/sample_pipelines_terraform, replace manishas with your org name where the forked repository resides.
    • If you want to create your infrastructure in a region other than us-east-1, enter your region in the REGION: "us-east-1" field in the params resource.
  • In the Terraform config file
    • Replace subnet_id with the ID of the subnet you want to use.
    • Set availability_zone as needed.
    • Make any other changes you need. You can use your own Terraform files to try this out as well; just replace this file with your own config file.
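Putting the resources edits together, the relevant parts of shippable.resources.yml look roughly like the sketch below. The exact schema is abridged and partly assumed here; the values shown are the sample defaults you are replacing:

```yaml
resources:
  - name: integration-aws
    type: integration
    integration: manishas-aws          # replace with your AWS integration name
    # ...
  - name: repo-tfScripts
    type: gitRepo
    integration: github-manishas       # replace with your GitHub integration name
    pointer:
      sourceName: manishas/sample_pipelines_terraform   # replace org name with yours
      # ...
  - name: params-region
    type: params
    version:
      params:
        REGION: "us-east-1"            # change if provisioning in another region
```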

Seed your pipeline

Follow the instructions to seed your pipeline as described in our documentation. This should show the pipeline in the Single Pane of Glass (SPOG) tab.

Run the job and watch the magic!

After the pipeline shows up in the SPOG view, right-click on the tfDeploy job and click Run. This runs the job and provisions your infrastructure on AWS.

On success, the tfDeploy job turns green as shown below:



Your instances will be provisioned on AWS:



From here on, your pipeline is up and running. Any time you change the Terraform scripts or other config, your job will automatically run and make any infrastructure changes required.

Try it and let us know what you think!



Topics: how-to, terraform, AWS, infrastructure