This tutorial explains how to manually provision an AWS EC2 virtual machine using Terraform. Before you start, you should be familiar with the following concepts:
The best way to get started is to install Terraform and run scripts manually on your local machine to provision a VM. Once you understand the mechanics of it, you should consider automating your workflow by following our documentation on Automated provisioning of AWS EC2 using Terraform.
Follow the steps below in order to provision your EC2 machine.
Step 1: Prep your machine
- Have your security credentials handy to authenticate to your AWS Account. Refer to the AWS Credentials documentation.
Execute the following commands to set up your AWS credentials as environment variables. Terraform will need these at runtime.
export AWS_ACCESS_KEY_ID=<enter your access key>
export AWS_SECRET_ACCESS_KEY=<enter your secret key>
- Install Terraform on your machine. Refer to the Terraform Installation guide.
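As a sanity check before running Terraform, you can verify that both credential variables are actually set in your shell. The helper below is a hypothetical convenience script, not part of the tutorial's files:

```shell
#!/usr/bin/env bash
# Hypothetical helper: fail fast if the credential variables that the
# Terraform AWS provider reads from the environment are missing.
check_aws_env() {
  local v missing=0
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
    if [ -z "${!v}" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Running check_aws_env before terraform apply surfaces a missing credential immediately, instead of failing partway through a provider call.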
Step 2: Prepare Terraform scripts
- Terraform loads all files with the .tf extension in the current folder and merges them into a single configuration before executing it. In our example, we are using the following files:
- terraform.tfvars supplies the values for all the dynamic variables needed
- variables.tf declares those variables in Terraform format
- ec2.tf is the actual script that provisions the EC2 instance
- If you do not have your own Terraform scripts, please feel free to clone our sample repository here: https://github.com/devops-recipes/prov_aws_ec2_terraform
- You will need to replace the following values in terraform.tfvars to customize the scripts for yourself:
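To make the three-file layout concrete, here is a minimal sketch of how the files relate. The variable names (aws_region, aws_ami, aws_instance_type) and the resource name are illustrative assumptions, not necessarily what the sample repository uses:

```hcl
# variables.tf -- declares the variables (names are hypothetical)
variable "aws_region" {}
variable "aws_ami" {}
variable "aws_instance_type" {}

# terraform.tfvars -- supplies the values for those variables, e.g.:
#   aws_region        = "us-east-1"
#   aws_ami           = "ami-xxxxxxxx"
#   aws_instance_type = "t2.micro"

# ec2.tf -- the script that provisions the EC2 instance
provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_instance" "demo" {
  ami           = "${var.aws_ami}"
  instance_type = "${var.aws_instance_type}"
}
```

Keeping declarations, values, and resources in separate files means only terraform.tfvars has to change when you customize the scripts.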
Step 3: Apply your Terraform scripts
- If this is the first run in this directory, execute terraform init to initialize the working directory and download the AWS provider. Then execute the following command from the directory that contains the .tf files to run your Terraform scripts:
terraform apply -var-file=terraform.tfvars
- Verify in the AWS console that the EC2 instance was provisioned.
Challenges with manual execution of Terraform scripts
There are a few challenges with manual execution of Terraform scripts:
- Terraform uses a state file to determine the current state of infrastructure and the delta that needs to be applied on each execution. This creates a problem: where should you store the state file? You can push it to a source control repository, but if you have multiple such files, managing state files quickly becomes a challenge. You will have to clone the state file to your machine every time and remember to push it back. If you forget one time, it can create a mess that will take you some time to clean up.
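One common answer is to keep the state file out of source control entirely and use a remote backend. Below is a sketch, assuming an S3 bucket you have already created for this purpose (the bucket and key names are hypothetical):

```hcl
# Hypothetical remote backend configuration: Terraform reads and writes
# terraform.tfstate in S3 instead of the local working directory, so
# nothing needs to be cloned or pushed back manually.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"      # assumed bucket name
    key    = "ec2/terraform.tfstate"   # assumed object key
    region = "us-east-1"
  }
}
```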
- Terraform templates can be reused since they are parameterized with variables. However, you need a programmatic way to supply variable values at runtime. Creating static variables files is an option, but it reduces reusability.
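For example, one programmatic approach is to keep one .tfvars file per environment and select it at runtime. The wrapper below is a hypothetical sketch, not part of the sample repository:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: select a per-environment variables file at runtime,
# e.g. `tf_apply_env staging` runs `terraform apply -var-file=staging.tfvars`.
tf_apply_env() {
  local env="$1"
  local varfile="${env}.tfvars"
  if [ ! -f "$varfile" ]; then
    echo "no variables file for environment: $env" >&2
    return 1
  fi
  terraform apply -var-file="$varfile"
}
```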
- Automating provisioning for different environments, and creating a dependency tree of all applications deployed into each environment, is tedious to achieve with manual steps. You need an automated workflow to effectively transfer information like subnet_id and security_group_id to downstream activities, e.g. EC2 provisioners.
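Terraform's own mechanism for exposing such values is the output block; an automated workflow can then read them with terraform output and feed them to downstream jobs. A sketch, assuming resources named demo (the resource names are hypothetical):

```hcl
# Hypothetical outputs: expose IDs that downstream activities
# (e.g. EC2 provisioners) need as inputs.
output "subnet_id" {
  value = "${aws_subnet.demo.id}"
}

output "security_group_id" {
  value = "${aws_security_group.demo.id}"
}
```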
- The machine has to be prepped with the right version of the Terraform CLI. If multiple teams are deploying and they need different versions of the CLI, you will need a separate deployment machine for each team.
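Terraform can at least guard against version drift: a required_version constraint makes a run fail fast when the installed CLI does not match. The version shown here is an illustrative assumption:

```hcl
# Fail fast if the installed Terraform CLI does not satisfy the constraint.
terraform {
  required_version = ">= 0.11.0"  # assumed version; pin to what your team uses
}
```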
In a nutshell, if you want to achieve frictionless execution of Terraform templates with modular, reusable scripts, you need to automate the workflow used to execute them.
Automated provisioning of AWS EC2 VMs using Terraform
To show you how to automate the provisioning of your AWS infrastructure, we have designed a step-by-step tutorial in our documentation:
If you want to see a live demo of the Shippable platform and watch this scenario in action, schedule a demo with us: