Set Up a Container Cluster on AWS with Terraform, Part 1: Provision a VPC

- By Tom Trahan on February 23, 2016

This post is the first in a series covering the basics of using Terraform to configure a container cluster on AWS and run a service on the cluster.  If you're not already familiar, Terraform is a pretty incredible open source tool from HashiCorp for configuring and launching infrastructure across a variety of providers.  By enabling you to manage your infrastructure provisioning and configuration as code (i.e. "Infrastructure as Code"), Terraform gives you repeatability and consistency, which you'll find tremendously useful when setting up complicated infrastructures, such as a container cluster and its underlying infrastructure on AWS.

In this series, I'll walk through the following steps to configure the underlying infrastructure for a container cluster:

  1. Provision your VPC, subnet, routing table and security group.
  2. Set up IAM roles and instance profile.
  3. Set up an ALB.
  4. Set up an Autoscaling group and launch configuration.
  5. Create the ECS cluster.
  6. Create the Task definition and service to run on the ECS cluster.

 

In this first post, I'll cover the first step; the next post, which can be found here, covers the remaining steps.  Before you begin, make sure to install the Terraform CLI and create a directory to hold your Terraform scripts.  For this article, I've created a directory called demo to contain the scripts.  You can find all of the code for this article here on GitHub.

 

Step 1: Provision your VPC, subnet, routing table and security group

When you set up a cluster in Amazon ECS, you'll do so within a virtual private cloud (VPC) that is logically isolated from other virtual networks in the AWS cloud.  So, let's start by creating a Terraform script to do that.  Start by reading through the basics of Terraform syntax, then create a file called vpc.tf in your directory.

Within this file, we'll specify the instructions to:

  • Create a VPC
  • Create an Internet Gateway
  • Create a Public Subnet
  • Create a Routing Table
  • Associate the Routing Table to the Public Subnet
  • Create a Security group for the VPC
 
Here are drill-down instructions for building out each section of vpc.tf.
First, create the VPC itself. Add the following code to your vpc.tf file:

# Define a vpc
resource "aws_vpc" "demoVPC" {
  cidr_block = "200.0.0.0/16"
  tags {
    Name = "ecsDemoVPC"
  }
}

You'll see this same format throughout Terraform.  You specify the resource you'd like to provision, provide a name for the resource, provide any settings relevant for the resource, and lastly, add any optional tags (useful for filtering your views within AWS).

In this case, I specified the resource to be aws_vpc with the name demoVPC.  The only setting required for creating the VPC is to specify a CIDR block, which I set to  200.0.0.0/16.  Lastly, I tagged this resource with ecsDemoVPC.  You'll see that I tag all resources in this series with 'demo' included to make it easy to quickly pull up all of the resources related to these Terraform scripts within AWS.
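As a side note, hard-coded settings like the CIDR block can be pulled out into Terraform variables. Here's a minimal sketch of that pattern (the variable name demo_vpc_cidr is my own, not part of the article's scripts), using the same interpolation syntax as the rest of this post:

# variables.tf -- hypothetical example of parameterizing the CIDR
variable "demo_vpc_cidr" {
  default = "200.0.0.0/16"
}

# vpc.tf -- reference the variable instead of the literal value
resource "aws_vpc" "demoVPC" {
  cidr_block = "${var.demo_vpc_cidr}"
  tags {
    Name = "ecsDemoVPC"
  }
}

This makes it easy to reuse the same scripts with a different address range. It's also worth noting that private RFC 1918 ranges such as 10.0.0.0/16 are the conventional choice for a VPC CIDR, since 200.0.0.0/16 overlaps publicly routable address space.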

Next, create an Internet Gateway. To create an interface between this VPC and the internet, I'll add an Internet Gateway and attach it to the VPC.  Add the following code to your vpc.tf file:

# Internet gateway for the public subnet
resource "aws_internet_gateway" "demoIG" {
  vpc_id = "${aws_vpc.demoVPC.id}"
  tags {
    Name = "ecsDemoIG"
  }
}

This time I specified an aws_internet_gateway resource named demoIG.  This resource must be linked to the VPC specified above, so I used Terraform's variable format to refer to it as ${aws_vpc.demoVPC.id}.  Behind the scenes, Terraform smartly manages dependencies and executes resource provisioning in the appropriate order.  So even though I do not yet know what the ID of the VPC will be, Terraform will know since it will provision the VPC before provisioning the Internet Gateway (regardless of what order I put these in my scripts).
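When Terraform can't infer an ordering from variable references, resources also accept an explicit depends_on argument. This is a hedged sketch to illustrate the syntax, not something this VPC setup actually requires (the resource name exampleSN is hypothetical):

# Hypothetical: force this subnet to wait for the internet gateway,
# even though no attribute of the gateway is referenced here
resource "aws_subnet" "exampleSN" {
  vpc_id = "${aws_vpc.demoVPC.id}"
  cidr_block = "200.0.2.0/24"
  depends_on = ["aws_internet_gateway.demoIG"]
}

In practice, implicit dependencies via interpolated variables (as used throughout this post) are preferred, since Terraform keeps them accurate automatically.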

Then, you need to create a Public Subnet. Our Amazon ECS cluster will have container instances registered to it that reside in a Public Subnet.  To create a Public Subnet, add the following code to your vpc.tf file:

# Public subnet
resource "aws_subnet" "demoPubSN0-0" {
  vpc_id = "${aws_vpc.demoVPC.id}"
  cidr_block = "200.0.0.0/24"
  availability_zone = "us-east-1a"
  tags {
    Name = "ecsDemoPubSN0-0-0"
  }
}

Like before, I've created the resource, named it, and linked it to the VPC specified above by use of the variable. The additional settings specified include the subnet CIDR block, 200.0.0.0/24, and the availability zone, us-east-1a, in which this subnet will be created.
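For a production setup, you'd typically spread subnets across multiple availability zones for resilience. Here's a sketch of what a second public subnet might look like (the name demoPubSN0-1 and the us-east-1b zone are my own additions, not part of the article's scripts):

# Hypothetical second public subnet in another availability zone
resource "aws_subnet" "demoPubSN0-1" {
  vpc_id = "${aws_vpc.demoVPC.id}"
  cidr_block = "200.0.1.0/24"
  availability_zone = "us-east-1b"
  tags {
    Name = "ecsDemoPubSN0-1"
  }
}

Note that each subnet's CIDR block must fall within the VPC's 200.0.0.0/16 range and must not overlap any other subnet's range.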

After that, create a Routing Table. The Public Subnet will require a routing table, so add the following code to your vpc.tf file:

# Routing table for public subnet
resource "aws_route_table" "demoPubSN0-0RT" {
  vpc_id = "${aws_vpc.demoVPC.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.demoIG.id}"
  }
  tags {
    Name = "demoPubSN0-0RT"
  }
}

You know the drill: I specified the resource, named it, and linked it to the VPC via the variable.  Likewise, I referenced the Internet Gateway created above via the ${aws_internet_gateway.demoIG.id} variable within the route block.  Note that the route specified here directs all internet-bound traffic (0.0.0.0/0) to the Internet Gateway.

Next, associate the Routing Table to the Public Subnet. This is the last step in setting up your VPC routing, where you will associate the Routing Table with the Public Subnet you created above. Add this code to your vpc.tf file:

# Associate the routing table to public subnet
resource "aws_route_table_association" "demoPubSN0-0RTAssn" {
  subnet_id = "${aws_subnet.demoPubSN0-0.id}"
  route_table_id = "${aws_route_table.demoPubSN0-0RT.id}"
}

Here, I've created the new route table association resource, named it and provided the variables representing the Public Subnet and the Route Table created above.

Finally, create a security group to define the ingress and egress for the VPC. Add this code to your vpc.tf file:

# ECS Instance Security group
resource "aws_security_group" "test_public_sg" {
  name = "test_public_sg"
  description = "Test public access security group"
  vpc_id = "${aws_vpc.demoVPC.id}"

  # SSH access from anywhere
  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from anywhere
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Access to port 8080 from anywhere
  ingress {
    from_port = 8080
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # TCP access from the public subnet CIDRs; these variables
  # must be declared elsewhere (e.g. in a variables.tf file)
  ingress {
    from_port = 0
    to_port = 0
    protocol = "tcp"
    cidr_blocks = [
      "${var.test_public_01_cidr}",
      "${var.test_public_02_cidr}"]
  }

  # Allow all outbound traffic
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "test_public_sg"
  }
}
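Once the resources are defined, it can be handy to surface their IDs after an apply. This is an optional addition, not part of the original scripts: a sketch of output blocks you could append to vpc.tf so that terraform apply prints the values you'll need in later steps:

# Optional: print useful IDs after `terraform apply`
output "vpc_id" {
  value = "${aws_vpc.demoVPC.id}"
}

output "public_subnet_id" {
  value = "${aws_subnet.demoPubSN0-0.id}"
}

You can also retrieve these values at any time afterward with terraform output.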

 

Run your script

You should now have a file called vpc.tf that contains each of the code blocks above in it.  Note that the order that you have these in your file doesn't matter, as Terraform will evaluate each action and determine the appropriate order to execute the steps.

If you're not already there, change to the directory that holds the script:
$ cd demo

 

Check that you have everything set up correctly:
$ terraform plan

 

If everything looks good, run the scripts for real (note -- you will be provisioning resources on AWS and may incur charges):
$ terraform apply

Assuming no issues, you should see a response similar to the below:

[Screenshot: output of terraform apply]

That's it!  Assuming everything went off without a hitch, you've just provisioned a VPC and its related resources on AWS using Terraform.

To remove these resources at any time:
$ terraform destroy

 

Shippable sample

I have created a Shippable sample that provisions and deprovisions the VPC (step 1) in a workflow. You can find all of the code for the sample here on GitHub.

If you're not familiar with Shippable, here are some basic concepts you should know before you run the sample:

  • Configuration: The Assembly Lines configuration for Shippable resides in a shippable.yml file. The repository that contains this config in your source control is called a Sync Repository, aka syncRepo. You add a syncRepo through the Shippable UI to create your Assembly Line.
  • Jobs are executable units of your pipeline and can perform any activity such as CI, provisioning an environment, deploying your application, or running pretty much any custom script. A simple way to think of it is, if something can execute in the shell of your laptop, it can execute as a Job.
  • Resources typically contain information needed for Jobs to execute, such as credentials, a pointer to a cluster on a Container Engine or an image on a Hub, or any key-value pairs.  Resources are also used to store information produced by a Job, which can then be accessed by downstream Jobs.
  • Integrations are used to configure connections to third-party services, such as AWS, Docker Hub, GKE, Artifactory, etc.
  • The Single Pane of Glass view shows a real-time, interactive view of your Assembly Line(s).

 

The next blog in this series, which can be found here, implements the remaining steps required to set up a container cluster infrastructure on AWS.


Try Shippable

Topics: ECS, terraform, AWS