On-Demand Test Environments With Ansible and Shippable

- By Ambarish Chitnis on December 11, 2017

One of the biggest challenges in implementing an end-to-end Continuous Delivery pipeline is making sure adequate test automation is in place. However, even if you have automated your entire test suite, a second challenge remains: how do you manage test infrastructure and environments without breaking the bank?

If you want to move towards Continuous Delivery, you need to execute a majority of your tests for each code change in a pristine environment that is as close to your production environment as possible. This ensures that code defects are identified immediately and every code change is therefore 'shippable'. However, creating these environments and updating them with each new application version, or every time the config changes, adds a lot of overhead. If you're testing an application with many tiers or microservices, the complexity increases further, since each tier might need to be tested independently in its own environment against specific versions of the other tiers.

The utopia of test automation looks like this:

  • Environment definitions are represented by infrastructure-as-code tools like Ansible, Terraform, Puppet, or Chef. The provisioning scripts are committed to source control and versioned, so you can go back to an earlier state if needed. 
  • All (or at least a good majority) of your tests are automated and either committed to source control or hosted on services such as Nouvola, Sauce, etc.
  • You have a completely automated deployment pipeline that automatically spins up a production-like Test environment for every code change, triggers your automation, and if all tests succeed, destroys the environment. If tests fail, the right folks are notified and the environment is kept live until someone can debug the failures.

The first step is already happening in most organizations. The DevOps movement encouraged Ops teams to start writing scripts to provision and manage environments and infrastructure, and multiple vendors support this effort quite effectively. The second step is still a challenge in many organizations, but this is really something that needs executive buy-in and a commitment to automation, even if it slows down product development for a while.

This whitepaper presents a method of implementing the third step - spinning up test environments on-demand and destroying them automatically after the automated test suite is executed.

 

The Scenario

To make things simpler, we'll skip the CI step that builds and tests the application Docker image and pushes it to a Docker registry. This can be accomplished by following the instructions in CI: Run CI for a sample app.

 

on-demand-test-environment-workflow.png

Our example follows the steps below:

1. A service definition, aka manifest, is created, including the Docker image and some options.

2. A test environment is provisioned using Ansible under the covers. The Ansible config files are templatized using environment variables defined in Shippable, which makes the Ansible config highly reusable for provisioning multiple test clusters if needed.

3. The manifest is deployed to the test environment and the functional test suite is triggered.

4. If tests pass, the test environment is destroyed using Ansible and the test owner is notified.

5. If tests fail, the test owner is notified and the environment is not destroyed. The test owner can always destroy the environment manually after extracting the information they need about the failure.

 

Before we start

You will need the following to implement this scenario:

  • A GitHub or Bitbucket account that you will use to log in to Shippable
  • An AWS account
  • A Docker Hub account (or Amazon ECR/GCR/Quay)
  • Some familiarity with Ansible is desirable, though not required.

If you're not familiar with Shippable, here are some basic concepts you should know before you start:

  • Configuration: The Assembly Lines configuration for Shippable resides in a shippable.yml file. The repository that contains this config in your source control is called a Sync Repository, aka syncRepo. You add a syncRepo through the Shippable UI to create your Assembly Line (a minimal skeleton of this file is shown after this list).
  • Jobs are executable units of your pipeline and can perform any activity such as CI, provisioning an environment, deploying your application, or running pretty much any custom script. A simple way to think of it is, if something can execute in the shell of your laptop, it can execute as a Job.
  • Resources typically contain information needed for Jobs to execute, such as credentials, a pointer to a cluster on a Container Engine or an image on a Hub, or any key-value pairs. Resources are also used to store information produced by a job, which can then be accessed by downstream jobs.
  • Integrations are used to configure connections to third-party services, such as AWS, Docker Hub, GKE, Artifactory, etc.
  • The Single Pane of Glass view shows a real-time, interactive view of your Assembly Line(s).
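To make these concepts concrete, here is a minimal, illustrative skeleton of an Assembly Lines shippable.yml. The resource and job names are hypothetical placeholders, not part of the sample application:

resources:
  # a params resource holds key-value pairs that jobs can consume
  - name: my_params             # hypothetical resource name
    type: params
    version:
      params:
        GREETING: "hello"

jobs:
  # a runSh job executes arbitrary shell scripts
  - name: my_job                # hypothetical job name
    type: runSh
    steps:
      - IN: my_params           # the params are exposed to the job as environment variables
      - TASK:
        - script: echo "$GREETING from my_job"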

 

How the sample application is structured

Our sample repositories are in GitHub:

  • The sample application that we will run functional tests on is a voting app that is built using Python Flask and Redis. The source for the front end (Flask) can be found in the vote_fe repository and the backend (redis) in the vote_be repository. The shippable.yml in these repositories contains the CI configuration to build and deploy their Docker images to their public repositories on Docker Hub.
  • devops-recipes/on_demand_test_environments contains the Shippable configuration and required Ansible playbooks for this scenario. The sections below explain in detail how the  Shippable configuration is built.

 

Step 1: Enable CI for the sample application

  • Fork the vote_fe and the vote_be repositories into your SCM.
  • Login to Shippable with your SCM account and enable CI using these steps.
  • Create a Docker Registry integration using these steps and call it drship_dockerhub. If you use a different integration name, replace drship_dockerhub in the shippable.yml file.
  • Specify your Docker repository and account in the shippable.yml file and commit the file.
  • Trigger CI for these repositories using these steps.

At the end of Step 1, you should have both application images published to the Docker registry configured in your integration.
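For reference, the CI portion of the shippable.yml in each application repository builds the Docker image and pushes it to the registry configured in the drship_dockerhub integration. The snippet below is only an illustrative sketch; the key names and the branch.buildNumber tag convention are assumptions, so follow the linked CI instructions and the sample repositories for the exact configuration.

language: python

build:
  ci:
    # install dependencies and run unit tests here
    - pip install -r requirements.txt
  post_ci:
    # build and push the image; the branch.buildNumber tag convention
    # matches the versionName values (e.g. master.2) used later
    - docker build -t devopsrecipes/vote_be:$BRANCH.$BUILD_NUMBER .
    - docker push devopsrecipes/vote_be:$BRANCH.$BUILD_NUMBER

integrations:
  hub:
    # integration block keys shown here are assumptions; see the CI docs for the exact schema
    - integrationName: drship_dockerhub
      type: docker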

 

Step 2: Create the service definition

A. Define the resources in the shippable.yml file.

The shippable.yml file can be committed to one of the app repositories or to a separate repository. We have used a separate repository, devops-recipes/on_demand_test_environments, in our sample. The repository containing your jobs and resources configuration is called a sync repository and represents your workflow configuration.

 

resources:
###---------------------------------------------------------------#
###----------------------- BUILD/CI Resources --------------------#
###---------------------------------------------------------------#

# Back-end image
  - name: vote_be_odte
    type: image
    # replace dr-dockerhub with your docker registry integration name
    integration: dr-dockerhub
    pointer:
      # replace devopsrecipes/vote_be with your repository
      sourceName: "devopsrecipes/vote_be"
    seed:
      # specify the latest tag of the image in your docker registry
      versionName: "master.2"

# Front-end image
  - name: vote_fe_odte
    type: image
    # replace dr-dockerhub with your docker registry integration name
    integration: dr-dockerhub
    pointer:
      # replace devopsrecipes/vote_fe with your repository
      sourceName: "devopsrecipes/vote_fe"
    seed:
      # specify the latest tag of the image in your docker registry
      versionName: "master.3"

# Docker options to expose port 80 on the front-end container and link the redis container
  - name: vote_fe_options_odte
    type: dockerOptions
    version:
      memory: 128
      portMappings:
        - "80:5000/tcp"
      links:
        - vote_be_odte:redis

 

B. Define the jobs in shippable.yml file.

create_app_man_odte is a manifest job that defines all the containers that run in the ECS cluster. This definition is versioned and each version is immutable.

Add the following to your shippable.yml file and commit it.

jobs:

#---------------------------------------------------------------#
#------------------- BUILD/CI with SHIPPABLE CI ----------------#
#---------------------------------------------------------------#

# CI job definition. The image that is pushed to Docker hub is specified in an OUT image resource. 
# This image resource becomes an IN to the manifest job and triggers the manifest job whenever
# a new image version (tag) is created.

  - name: vote_be_runCI
    type: runCI
    steps:
      - OUT: vote_be_odte

  - name: vote_fe_runCI
    type: runCI
    steps:
      - OUT: vote_fe_odte

# Application service definition

  - name: create_app_man_odte
    type: manifest
    steps:
      - IN: vote_fe_odte
      - IN: vote_fe_options_odte
        applyTo:
          - vote_fe_odte
      - IN: vote_be_odte

 

Step 3: Provision the test environment

We use an Ansible playbook, whose implementation can be found here, to create the ECS cluster.

We templatize the Ansible configuration files to make them flexible. The configuration is then driven by Shippable-generated environment variables and resources.
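To illustrate the mechanism: the keys of every params resource wired into a job are exported as environment variables, and shipctl replace substitutes the ${...} placeholders in the checked-out files before ansible-playbook runs. The commands below mirror the ones used by the jobs later in this walkthrough.

# environment variables injected by the params resources, e.g.
#   EC2_INSTANCE_TYPE=t2.large
#   ECS_CLUSTER_NAME=test_env_ecs_odte

# substitute the ${...} placeholders in place
shipctl replace \
  $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible/ansible.cfg \
  $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible/group_vars/ecs-cluster-vars.yml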

 

A. The ansible.cfg file

Here we use the SCRIPTS_REPO_ODTE_STATE environment variable to point to the root of the repository when the playbook is run in a Shippable node.

[defaults]
# update, as needed, for your scenario
host_key_checking=false
inventory = ${SCRIPTS_REPO_ODTE_STATE}/infra/provision-ecs-ansible/inventory/

[ssh_connection]
# for running on Ubuntu
control_path=%(directory)s/%%h-%%r

 

B. Group variables.

All the variables used by Ansible modules to create the cluster are defined as placeholders. These placeholders are replaced at runtime by values defined in a params resource.

ec2_instance_type: "${EC2_INSTANCE_TYPE}"
ec2_image: "${EC2_IMAGE}"
ec2_keypair: "${EC2_KEYPAIR}"
ec2_user_data: "#!/bin/bash \n echo ECS_CLUSTER=\"${ECS_CLUSTER_NAME}\" >> /etc/ecs/ecs.config"
ec2_region: "${EC2_REGION}"
ec2_tag_Role: "${EC2_TAG_ROLE}"
ec2_tag_Type: "${EC2_TAG_TYPE}"
ec2_volume_size: ${EC2_VOLUME_SIZE}
ec2_count: ${EC2_COUNT}
STATE_RES_NAME: "${STATE_RES_NAME}"
ec2_security_group: "${TEST_PUBLIC_SG_ID}"
ec2_subnet_ids: ["${TEST_PUBLIC_SN_01_ID}","${TEST_PUBLIC_SN_02_ID}"]
ec2_tag_Environment: "${ENVIRONMENT}"
ECS_CLUSTER_NAME: "${ECS_CLUSTER_NAME}"
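For example, with the values from the test_conf_odte and test_vpc_conf_odte params resources defined in the next section, the rendered ecs-cluster-vars.yml would look roughly like this after shipctl replace runs (shown for illustration only):

ec2_instance_type: "t2.large"
ec2_image: "ami-9eb4b1e5"
ec2_keypair: "ambarish-useast1"
ec2_user_data: "#!/bin/bash \n echo ECS_CLUSTER=\"test_env_ecs_odte\" >> /etc/ecs/ecs.config"
ec2_region: "us-east-1"
ec2_tag_Role: "dr-on-demand-test-environments"
ec2_tag_Type: "ecs-container-instance"
ec2_volume_size: 30
ec2_count: 1
STATE_RES_NAME: "test_info_odte"
ec2_security_group: "sg-c30fc8b6"
ec2_subnet_ids: ["subnet-34378e50","subnet-34378e50"]
ec2_tag_Environment: "test"
ECS_CLUSTER_NAME: "test_env_ecs_odte"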

 

C. Define the resources used by the Ansible configuration in the shippable.yml file.

 

resources:

#---------------------------------------------------------------#
#-------------------- Common INFRA Resources -------------------#
#---------------------------------------------------------------#

# Ansible scripts repository
  - name: scripts_repo_odte
    type: gitRepo
    integration: "dr-github"
    pointer:
      sourceName: "devops-recipes/on-demand-test-environments"
      branch: master

# AWS integration that sets up the AWS CLI environment used by Ansible playbook
  - name: aws_cli_config_odte
    type: cliConfig
    integration: dr-aws-keys
    pointer:
      region: us-east-1

# SecOps approved AMI
  - name: ami_sec_approved_odte
    type: params
    version:
      params:
        AMI_ID: "ami-9eb4b1e5"

#---------------------------------------------------------------#
#----------------------- TEST VPC Resources --------------------#
#---------------------------------------------------------------#

# TEST environment config
  - name: test_conf_odte
    type: params
    version:
      params:
        EC2_REGION: "us-east-1"
        EC2_TAG_ROLE: "dr-on-demand-test-environments"
        EC2_TAG_TYPE: "ecs-container-instance"
        EC2_VOLUME_SIZE: 30
        EC2_COUNT: 1
        STATE_RES_NAME: "test_info_odte"
        ECS_CLUSTER_NAME: "test_env_ecs_odte"
        ENVIRONMENT: "test"
        EC2_INSTANCE_TYPE: "t2.large"
        EC2_IMAGE: "ami-9eb4b1e5"
        EC2_KEYPAIR: "ambarish-useast1"

# Test VPC Info
  - name: test_vpc_conf_odte
    type: params
    version:
      params:
        TEST_VPC_ID: "vpc-a36912da"
        TEST_PUBLIC_SG_ID: "sg-c30fc8b6"
        TEST_PUBLIC_SN_01_ID: "subnet-34378e50"
        TEST_PUBLIC_SN_02_ID: "subnet-34378e50"
        REGION: "us-east-1"

# Output of Test ECS Provisioning
  - name: test_info_odte
    type: params
    version:
      params:
        SEED: "initial_version"

# Reference to ECS Test Cluster
  - name: test_env_ecs_odte
    type: cluster
    integration: "dr-aws-keys"
    pointer:
      sourceName : "test_env_ecs_odte"
      region: "us-east-1"

 

D. Augment the Ansible playbook ansible-ecs-provision that provisions the ECS cluster.

After the cluster is created, we use Shippable platform resources and API to persist important cluster metadata, such as the cluster ARN and the public IPs of the container instances, in the params resource test_info_odte and the cluster resource test_env_ecs_odte.

The ansible-ecs-provision playbook calls two roles to provision the ECS cluster.

---
### provision AWS ECS cluster
- hosts: localhost
  connection: local
  gather_facts: false
  user: root
  pre_tasks:
    - include_vars: group_vars/ecs-cluster-vars.yml
  roles:
    - ecs-cluster-provision
    - ec2-container-inst-provision
  post_tasks:
    - name: refresh hosts inventory list
      meta: refresh_inventory
  • ecs-cluster-provision
    ---
    # update Shippable resource state with this job number
    - name: run cmd
      shell: |
        shipctl post_resource_state "" versionName "build-${BUILD_NUMBER}"
    
    # provision ECS cluster
    - name: Create ECS Cluster 
      ecs_cluster:
        name: ""
        state: present
      register: ecs
    
    # update shippable resource state with provisioned cluster_arn
    - name: run cmd
      shell: |
        shipctl put_resource_state "" CLUSTER_ARN ""
        shipctl put_resource_state "" TEST_ECS_CLUSTER_ID ""
  • ec2-container-inst-provision
    ---
    - name: Provision  instances with tag 
      local_action:
        module: ec2
        key_name: ""
        group_id: ""
        instance_type: ""
        instance_profile_name: "ecsInstanceRole"
        image: ""
        user_data: ""
        vpc_subnet_id: ""
        region: ""
        instance_tags: '{"Name":"","Role":"","Type":"","Environment":""}'
        assign_public_ip: yes
        wait: true
        exact_count: ""
        count_tag:
          Role: ""
        volumes: 
          - device_name: /dev/xvda
            volume_type: gp2
            volume_size: ""
            delete_on_termination: true
      register: ec2
    
    - add_host: 
        name: "{{item.public_ip}}" 
        groups: tag_Type_,tag_Environment_
        ec2_region: "" 
        ec2_tag_Name: ""
        ec2_tag_Role: ""
        ec2_tag_Type: ""
        ec2_tag_Environment: ""
        ec2_ip_address: "{{item.public_ip}}"
      with_items: ""
    
    - name: Wait for the instances to boot by checking the ssh port
      wait_for: host={{item.public_ip}} port=22 delay=15 timeout=300 state=started
      with_items: ""
    
    # update shippable resource state
    - name: run cmd
      shell: |
        shipctl put_resource_state "" "INST_{{item.ami_launch_index}}_PUBLIC_IP" "{{item.public_ip}}"
        shipctl put_resource_state "" "INST_{{item.ami_launch_index}}_ID" "{{item.id}}"
        shipctl put_resource_state "" "REGION" ""
        shipctl put_resource_state "" "INST_{{item.ami_launch_index}}_PUBLIC_IP" "{{item.public_ip}}"
        shipctl put_resource_state "" "INST_{{item.ami_launch_index}}_ID" "{{item.id}}"
      with_items: ""

shipctl provides a comprehensive library of utilities that can be used to extract and persist useful data in a Shippable params or state resource. This data can then be used by downstream jobs.
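For illustration, here is the typical shape of these calls. The resource-name arguments, which appear as empty strings in the extracted playbook above, are shown here as test_info_odte; adjust them for your own resources.

# mark a new version of a state/params resource for this run
shipctl post_resource_state test_info_odte versionName "build-${BUILD_NUMBER}"

# persist individual key-value pairs on a resource (the value shown is a placeholder shell variable)
shipctl put_resource_state test_info_odte CLUSTER_ARN "$CLUSTER_ARN"

# read the values back in a downstream job
PARAMS_JSON=$(shipctl get_resource_version_key test_info_odte params)
echo $PARAMS_JSON | jq -r .CLUSTER_ARN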

 

E. Define the Shippable job that runs the Ansible playbook ansible-ecs-provision.yml that provisions the ECS cluster.

Add a runSh job that runs the playbook to your shippable.yml file and commit it.
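A minimal sketch of the prov_test_vpc_odte runSh job, modeled on the deprov_test_infra_odte job shown in Step 6: it templatizes the Ansible config with shipctl replace, installs boto3, and runs ansible-ecs-provision.yml. The exact IN and OUT resources are assumptions based on the resources defined above and on how downstream jobs consume test_info_odte and test_env_ecs_odte, so check the sample repository for the authoritative definition.

jobs:

# PROVISION TEST Infra with Ansible
  - name: prov_test_vpc_odte
    type: runSh
    steps:
      - IN: aws_cli_config_odte
        switch: off
      - IN: test_vpc_conf_odte
      - IN: test_conf_odte
      - IN: ami_sec_approved_odte
      - IN: scripts_repo_odte
        switch: off
      - TASK:
        # replace the ${...} placeholders in the Ansible config with values
        # supplied by the params resources above
        - script: shipctl replace
            $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible/ansible.cfg
            $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible/group_vars/ecs-cluster-vars.yml
        - script: sudo pip install boto3
        - script: |
            cd $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible
            ansible-playbook -v ansible-ecs-provision.yml
      # the playbook writes cluster metadata back to these resources via shipctl
      - OUT: test_info_odte
      - OUT: test_env_ecs_odte
    on_success:
      - script: echo "SUCCESS"
    on_failure:
      - script: echo "FAILURE"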


 

Step 4: Deploy the application to the test ECS environment

deploy_app_test_odte is a deploy job which creates the service and task definition in the ECS cluster and starts the service. Since it needs to run after the ECS cluster is created in the workflow, prov_test_vpc_odte is specified as an input. 

Add the following to shippable.yml file and commit it.

jobs:

##---------------------------------------------------------------#
##-------------------- App Release Automation -------------------#
##---------------------------------------------------------------#

# DEPLOY to TEST environment
  - name: deploy_app_test_odte
    type: deploy
    steps:
      - IN: create_app_man_odte
        switch: off
      - IN: prov_test_vpc_odte
      - IN: test_env_ecs_odte
        switch: off
      - TASK: managed

 

Step 5: Run functional tests on the test cluster

Add the sit_odte job to your shippable.yml file. This job extracts the public DNS of the ECS cluster from the test_info_odte params resource and passes it to the script that runs the functional tests against that endpoint.

It is a runSh job that lets you run any shell script. Since it needs to run after the application is deployed in the workflow, deploy_app_test_odte is specified as an input. In addition, we provide the scripts repository (scripts_repo_odte) as an input so the job can access the test scripts.

jobs:

# RUN System Integration Testing
  - name: sit_odte
    type: runSh
    steps:
      - IN: scripts_repo_odte
        switch: off
      - IN: deploy_app_test_odte
      - TASK:
        # Run tests
        - script: |
            pushd $(shipctl get_resource_state "scripts_repo_odte")/tests
            PARAMS_JSON=$(shipctl get_resource_version_key test_info_odte params)
            CLUSTER_DNS=$(echo $PARAMS_JSON | jq -r .INST_0_PUBLIC_DNS)
            echo "ECS Cluster DNS: "$CLUSTER_DNS
            ./run-tests.sh $CLUSTER_DNS
            popd
    on_success:
      - script: echo "SUCCESS"
    on_failure:
      - script: echo "FAILURE"
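The run-tests.sh script lives in the tests directory of the sample repository. A minimal sketch of such a smoke test, assuming the front end answers on port 80 of the cluster DNS (per the vote_fe_options_odte port mapping), could look like this; it is illustrative and not the repository's actual script.

#!/bin/bash -e
# run-tests.sh <cluster-dns> : illustrative smoke test only
CLUSTER_DNS=$1

# the front-end container maps host port 80 to the Flask app (see vote_fe_options_odte)
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "http://${CLUSTER_DNS}/")

if [ "$STATUS" -ne 200 ]; then
  echo "Front end returned HTTP ${STATUS}"
  exit 1
fi

echo "Front end is up at http://${CLUSTER_DNS}/"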

 

Step 6: Deprovision the cluster

A. Add the deprov_test_infra_odte job to your shippable.yml file.

It is a runSh job that lets you run any shell script. Since it needs to run after the system integration tests, sit_odte is specified as an input.

jobs:

#---------------------------------------------------------------#
#----------------------- Deprov Test Infra----------------------#
#---------------------------------------------------------------#

# DEPROV TEST Infra with Ansible
  - name: deprov_test_infra_odte
    type: runSh
    steps:
      - IN: sit_odte
      - IN: aws_cli_config_odte
        switch: off
      - IN: test_vpc_conf_odte
        switch: off
      - IN: test_conf_odte
        switch: off
      - IN: test_info_odte
        switch: off
      - IN: scripts_repo_odte
        switch: off
      - IN: ami_sec_approved_odte
        switch: off
      - TASK:
        - script: shipctl replace
            $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible/ansible.cfg
            $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible/group_vars/ecs-cluster-vars.yml
        - script: sudo pip install boto3
        - script: |
            cd $SCRIPTS_REPO_ODTE_STATE/infra/provision-ecs-ansible
            ansible-playbook -v ansible-ecs-terminate.yml
    on_success:
      - script: echo "SUCCESS"
    on_failure:
      - script: echo "FAILURE"
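The ansible-ecs-terminate.yml playbook lives alongside the provisioning playbook in the sample repository. A rough sketch of what such a playbook needs to do, assuming the container instances are looked up by the tags applied during provisioning (the module choices and lookups below are assumptions, not the repository's actual implementation):

---
# illustrative teardown: terminate tagged container instances, then delete the ECS cluster
- hosts: localhost
  connection: local
  gather_facts: false
  pre_tasks:
    - include_vars: group_vars/ecs-cluster-vars.yml
  tasks:
    # find the instances tagged during provisioning
    - ec2_instance_facts:
        region: "{{ ec2_region }}"
        filters:
          "tag:Role": "{{ ec2_tag_Role }}"
          "tag:Environment": "{{ ec2_tag_Environment }}"
          instance-state-name: running
      register: tagged_instances

    # terminate them
    - ec2:
        region: "{{ ec2_region }}"
        instance_ids: "{{ tagged_instances.instances | map(attribute='instance_id') | list }}"
        state: absent
        wait: true
      when: tagged_instances.instances | length > 0

    # remove the (now empty) ECS cluster
    - ecs_cluster:
        name: "{{ ECS_CLUSTER_NAME }}"
        state: absent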

 

B. Commit shippable.yml and create a sync repo in your Shippable account.

Follow these instructions to import your configuration files into your Shippable account.

Your pipeline should now look like this in the SPOG view.

pipelin1.png

Here are different sections of the pipeline, expanded:

pipeline2.png

pipeline3.png

pipeline4.png

Step 7: Trigger your pipeline

Right-click create_app_man_odte in the SPOG view and click Build Job. This triggers the entire pipeline.

trigger.png

 

Screenshot of the Manifest job 

manifest.png

 

Screenshot of the Cluster provision job

provision.png

 

Screenshots of resources populated by the Cluster provision job

Screen Shot 2017-11-30 at 5.40.44 PM.png

Screen Shot 2017-11-30 at 5.41.07 PM.png

Screenshot of the Deploy job

deploy-2.png

 

Screenshot of the Deprovision Job

deprovision.png

 
