End-to-end container pipelines with Amazon EC2 Container Registry now available

- By Tom Trahan on December 21, 2015

This blog post is deprecated. We have significantly updated the Shippable platform with several new features since it was written.

Check out our blog post, Deploy your first Continuous Deployment Pipeline, which uses Docker Hub as the image registry and Amazon EC2 Container Service (ECS) as the container service.

For the latest information, refer to our documentation, or open a support issue if you have questions.

If you've thought about running containers on AWS, today's a big day.  With the introduction of Amazon EC2 Container Registry today, you can now establish and run a container-based pipeline entirely within AWS.  And with Shippable, setting up and running that pipeline with full visibility, history and control has never been easier.

During AWS re:Invent 2015, we announced our preview release of the integration between Shippable and Amazon EC2 Container Service (Amazon ECS).  Today, we're thrilled to announce that we've added integration with Amazon EC2 Container Registry (Amazon ECR) as well. With this integration, Shippable customers can now push and pull Docker images from Amazon ECR as part of their Shippable CI builds and deploy them with Shippable Formations across multiple clusters in Amazon ECS, without ever having to update a Task Definition manually.  Shippable automatically updates your ECS task definitions with the latest image information based on your CI builds, then either deploys automatically or deploys with a single click when you're ready.

If you missed the first blog, you can check it out here (you'll notice lots of improvements to our UI below).  In this installment, I'll focus on Amazon ECR.  Amazon ECR makes it easy for developers to store, manage, and deploy their Docker container images.  Now, with the extension of Shippable's preview to include Amazon ECR integration, you can push and pull images from Amazon EC2 Container Registry repos as part of your Shippable pipeline.  You can achieve particularly fast ingress/egress speeds when operating your pipeline entirely on AWS in the same region as your repo and leverage IAM security to control permissions on your Docker images.  You can also take advantage of namespaces to organize your images.
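One nice consequence of the IAM integration mentioned above is fine-grained control over who can push and pull each repository. As a sketch, an ECR repository policy granting pull-only access might look like the following (the account ID and user name are hypothetical):

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPullOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/ci-bot" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```

A push-capable principal would additionally need the image-upload actions (e.g. ecr:PutImage and the layer-upload actions); repository policies like this are attached per repository, so each image can have its own access rules.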

Let's take a look at how these pieces fit together to create an end-to-end software delivery pipeline.

Sample CI/CD pipeline powered by Shippable and Amazon Web Services


You can enable the above workflow with the following steps:

     OPTIONAL: Fork the sample application code
  1. Set up AWS and Amazon ECR integrations in Shippable Account Settings
  2. Execute CI and push a build to Amazon ECR with Shippable CI
  3. Set up your ECS cluster and container instances in AWS
  4. Create a Shippable Formations subscription for your Amazon ECS cluster
  5. Set up services to deploy to ECS in Shippable Formations 
  6. Execute the automated pipeline end-to-end (i.e. auto-deploy magic!)

OPTIONAL: Fork the sample application code

Step 1: Set up AWS and Amazon ECR integrations in Shippable Account Settings

  • Sign in to Shippable at shippable.com (if you don't have an account, you can create one in a few seconds by logging in via your GitHub or Bitbucket account)

  • Click the ‘Settings’ icon in the top-nav (upper right) and choose the 'Integrations' tab

  • Click the 'Add Integration' button and create two integrations - one with permissions to deploy infrastructure and one with permissions to push/pull from an Amazon ECR repository.  
    • Select "Amazon Web Services" from the ‘Master Integration’ drop-down list and complete the remaining information to enable deploying to AWS. The AWS integration will contain your AWS credential information to access the ECS instance. Enter your access and secret keys provided by AWS. See here for info on how to generate them.
    • Repeat for "Amazon ECR" from the 'Master Integration' drop-down list (choose us-east-1 for region).  When complete you should have two integrations saved:



Step 2: Push a build to Amazon ECR with Shippable CI

Next, we'll configure Shippable CI to push the results of a successful CI build to Amazon ECR:

  • From the CI menu, select the subscription that holds the source code you intend to build


  • Next, enable a repo as a Shippable CI project by selecting 'Enable Project' and selecting your repo.  For this demo, you can enable both 'micro-api' and 'micro-www':


  • Once enabled, we'll configure a few Project Settings to enable Docker Build and Docker Push for this project (both projects used in this walkthrough contain Dockerfiles used to build the CI build container).  Select the 'Settings' tab and configure the following:
    • Docker Build: On
    • Docker Build Order: Pre-CI
    • Push Build: Yes
    • Lighthouse: On
    • Push Image to: your Amazon ECR repo URL/(optional namespace)/repo name (see below for instructions to locate it), e.g. 288971733297.dkr.ecr.us-east-1.amazonaws.com/ttrahan/micro-api
    • Push Image Tag: default
    • Source Location: /root/micro-api or /root/micro-www

      Your Amazon ECR repo location is your AWS account ID + '.dkr.ecr.us-east-1.amazonaws.com'.  You can also find it by navigating to the EC2 Container Service via the management console, selecting 'Repositories' in the left-nav, and selecting 'Create Repository'.  The Repository URL will appear on the first screen.

  • In the 'Integrations' section, select the dropdown for 'Select hub integration' and choose the Amazon ECR integration you created in Step 1.


  • Select the 'Status' tab and click the 'Build' button.  Shippable CI will now run a CI build.  You can follow along in the console as your CI build executes, including steps for pulling a base Docker image, syncing your git repo, executing test scripts, and pushing a Docker image to Amazon ECR upon successful completion.


  • When complete, verify that the new image appears in your Amazon ECR repo.  Navigate to the EC2 Container Service via the AWS management console and select 'Repositories' in the left-nav.  You should see your newly created image now stored and available to download and/or deploy:


  • Repeat these steps for the other project (micro-www), as well
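The 'Push Image to' value configured above follows a predictable pattern; here's a quick shell sketch that assembles it, using the sample account ID and namespace from this post:

```shell
#!/bin/sh
# Assemble an Amazon ECR image location:
#   <account>.dkr.ecr.<region>.amazonaws.com/<namespace>/<repo>
ACCOUNT_ID="288971733297"   # your AWS account ID
REGION="us-east-1"          # the region chosen in Step 1
NAMESPACE="ttrahan"         # optional namespace for organizing images
REPO="micro-api"            # repository name

REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
PUSH_TARGET="${REGISTRY}/${NAMESPACE}/${REPO}"
echo "${PUSH_TARGET}"
```

Substitute your own account ID and repo name; the result is exactly what goes into the 'Push Image to' field.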

Now that Docker images are generated and pushed whenever we commit code to our repos, we'll turn our attention to another reason Docker is so powerful: how easy it is to deploy and run containers.

Step 3: Set up your Amazon ECS cluster and container instances in AWS

We need to start by setting up the cluster infrastructure:
  • Amazon has great instructions on how to create your Amazon ECS cluster and the related container instances; use them to create a cluster with registered container instances.
  • Or, you can use our sample Terraform scripts to get it created quickly (instructions are provided)
  • When complete, you should see your container instances registered to your cluster in your Amazon ECS console
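Whichever route you take, each container instance joins a cluster through the ECS agent running on it, which reads its cluster name from /etc/ecs/ecs.config. A minimal sketch (the cluster name here is just an assumption for this walkthrough):

```
# /etc/ecs/ecs.config — read by the ECS agent at instance startup
ECS_CLUSTER=demo-cluster
```

If ECS_CLUSTER is left unset, the agent registers the instance to the cluster named 'default', so double-check this file if an instance doesn't appear under the cluster you expect.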


Step 4: Create a Shippable Formations subscription for your Amazon ECS cluster

  • From the Shippable Home screen, select the 'Formations' menu and choose 'Add Formation'
  • Sign up for a new Formation
    • Select Plan type 'Container Service Deploy'
    • Leave the slider at 2GB to enable the free tier
    • Select 'Enable Free Plan'

Step 5: Set up services to deploy to ECS in Shippable Formations

You'll now be in your new Formation, prompted to provide a few setup details:

  • First, select the Deploy integration you created (Master Integration = AWS):


  • Next, select the Amazon ECS cluster you created in Step 3:


  • Leave the Provisioning section unselected.
  • Next, select the 'Status' tab and you'll set up two Services to deploy.  Select 'Add Service' and create the first service for 'micro-api':

  • Then, complete the config as follows in the Settings section of the Settings tab:
    • Auto Deploy: checked
    • Notifications: skip
    • Post-deploy Hook: skip

  • In the Images section, click 'New Image'.  You'll now specify the repository URL and repo name that holds the image you want to deploy into this service.  Navigate to Amazon ECS in the AWS management console, select Repositories, and click on the repo 'micro-api' in order to get the repository address.  Expand the 'Build, tag, and push docker image' section and copy the repository URL/repository name, but do not include the image tag.


  • Copy/paste this repo address into the Image field and select the Amazon ECR Hub Integration you created in step 1. Upon saving, a list of image tags will be retrieved for the repo.  Select one to deploy:


  • Lastly, enter Auto Deploy pattern 'master.*', add port 80 as the port to open for your service (Shippable will re-map this between the load balancer and the container service), leave Memory at the default of 400 (MB), leave Volume Mounts blank, and click 'Save Image'.

  • Enter the following Environment Configs for the 'micro-api' service:
    • API_PORT: 80
    • LOG_LEVEL: debug
    • NODE_ENV: dev
    • WWW_PORT: 80

  • Select the 'demoAPILb' for the 'micro-api' service, then check the 'Load Balanced' checkbox:



  • Leave 'Volumes' blank and click 'Deploy'
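Behind the scenes, the service configuration above corresponds roughly to an ECS container definition. Here's a hedged sketch using the values from this walkthrough (the image tag shown is hypothetical; Shippable generates and registers the real task definition for you):

```json
{
  "name": "micro-api",
  "image": "288971733297.dkr.ecr.us-east-1.amazonaws.com/ttrahan/micro-api:master.1",
  "memory": 400,
  "portMappings": [
    { "containerPort": 80, "hostPort": 80 }
  ],
  "environment": [
    { "name": "API_PORT",  "value": "80" },
    { "name": "LOG_LEVEL", "value": "debug" },
    { "name": "NODE_ENV",  "value": "dev" },
    { "name": "WWW_PORT",  "value": "80" }
  ]
}
```

This is the piece Shippable rewrites on each deploy, swapping in the newly pushed image tag, which is why you never have to touch task definitions by hand.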

You have just configured and deployed a Docker container into Amazon ECS!  In a few seconds, you will see your service running your container, ready to accept traffic.


You can repeat the above steps for the micro-www service, with changes to use the appropriate ELB and Amazon ECR repository.  Additionally, this demo application communicates from the micro-www service via a REST API, so you'll need to add an additional Environment Config, API_URL, for the micro-www service, providing the URL of the micro-api service you created and deployed. The service address is found in the Service Dashboard and is the Load Balancer address shown in the screen above (with an 'http://' prefix).  Once both services are running, you should be able to hit the micro-www service address from your browser and see the running application.


At this point, you've fully configured the continuous delivery pipeline diagrammed at the top of this post, including configuring an Amazon ECS cluster with underlying infrastructure, creating Amazon ECR repositories to store your Docker images, and configuring Shippable CI and Shippable Formations to automate pipeline flow and track pipeline activities.

Step 6: Execute the automated pipeline end-to-end (auto-deploy magic!)

Of course, the true power of an automated pipeline pays off after it's established, as your developers and testers now have the freedom to iterate on changes and to test and promote continuously into your various environments.  Let's see the continuous pipeline in action.
  • Make a change to the micro-api source code and commit it to your source code repository (I've made a change to update the API Message in the /routes/info.js file and committed the change to my GitHub repo):


  • That's it!  The entire pipeline is now triggered:
    • Webhook fires from source code repo to trigger Shippable CI build
    • Upon success, Shippable CI pushes a Docker image to Amazon ECR
    • Since we turned on Auto-Deploy for our service, it fires a deployment into our Amazon ECS cluster
    • The new service is brought up and the old service is brought down gracefully in Amazon ECS
    • Our changes are now visible, ready for use, testing, etc.
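The auto-deploy decision in that flow hinges on the 'master.*' pattern configured in Step 5. Here's a minimal sketch of that gate, assuming the pattern is applied as a regular expression to the incoming image tag (the tag value is hypothetical):

```shell
#!/bin/sh
# Decide whether a freshly pushed image tag should auto-deploy,
# by matching it against the configured Auto Deploy pattern.
PATTERN="master.*"
TAG="master.12"   # hypothetical tag produced by a CI build

if echo "${TAG}" | grep -Eq "^${PATTERN}$"; then
  DECISION="deploy"
else
  DECISION="skip"
fi
echo "${DECISION}"
```

Tags from other branches (say, 'feature.7') fall through to the skip branch, so only builds from your chosen branch flow into the cluster automatically.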

Shippable CI build auto-triggered and successfully completed.

New Docker image pushed to Amazon ECR.

New image auto-deployed into the service in Amazon ECS.

Updated app live and accessible in my environment.


Wrapping up

As you can see, with Shippable, you can quickly and easily configure continuous delivery of your software changes into Amazon ECS, now leveraging Amazon ECR.

In this demonstration, we didn't dive into other pipeline features, but a few of the things we could also put in place to expand the pipeline's functionality include:
  • Scale up our services to run multiple instances (try it by changing the Replicas value in a service's Settings)
  • Trigger automated functional testing after auto-deploying a new version of our service
  • Deploy to multiple Amazon ECS clusters representing different environments in our pipeline, e.g. dev, test, prod
  • Manage Amazon ECR repository permissions for fine-grained control for who can push, pull or manage the repo
  • Set up our cluster for auto-scaling
As you can see, it's not only possible but entirely practical to leverage containers to improve your software delivery pipeline flow.  And with the power of Amazon ECS and Amazon ECR, you can do so across any environment, from dev/test to production.




Topics: continuous deployment (CD), continuous delivery, tutorial, container registry, Amazon ECS, Amazon ECR