Kubernetes is a production-grade container orchestration platform that automates the scaling and management of containerized applications. It is also open source, so you can install Kubernetes on any cloud such as AWS, DigitalOcean, or Google Cloud Platform, or even on your own machines on premises. Kubernetes was started at Google and is also offered as a hosted container service called Google Kubernetes Engine (GKE). With Shippable, you can easily hook up an automated DevOps pipeline from source control all the way to deployment on your Kubernetes pods, and accelerate innovation.
In this blog, we demonstrate how to deploy a load-balanced, multi-container application to multiple Kubernetes environments on GKE. The deployment occurs in multiple stages in a Shippable-defined workflow.
Get Started: Kubernetes Deployment spec
The pods and services (load balancer) for the application are created from a deployment spec. Instead of creating and maintaining a separate deployment spec per environment, which is a common practice, we create a single deployment spec template. This template has placeholders for the image and for the service/pod labels. When we deploy the application to a specific environment, we use powerful yet simple Shippable platform functions and resources to replace these placeholders at deployment time.
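To illustrate the placeholder-replacement idea, here is a minimal sketch. The template format, placeholder syntax, file names, and values below are illustrative assumptions, not Shippable's actual mechanism:

```shell
# Hypothetical mini-template with image and label placeholders.
cat > deploy-template.yml <<'EOF'
image: {{FE_IMAGE}}
label: {{FE_LABEL}}
EOF

# At deployment time, substitute environment-specific values
# into the template to produce the final spec.
sed -e 's|{{FE_IMAGE}}|gcr.io/demo-project/vote:1.0|' \
    -e 's|{{FE_LABEL}}|vote-fe-beta|' \
    deploy-template.yml > deploy.yml

cat deploy.yml
```

The same template can be rendered once per environment (beta, production, and so on) simply by swapping in different values.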
The deployment spec template (located here in our public repository) defines placeholders for the label selectors in the .spec.selector section and for the pod labels in the .spec.template.metadata.labels section. Labels are defined both for the front-end voting application (FE_LABEL) and for the Redis service (BE_LABEL), which the front end calls through another load balancer.
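A minimal sketch of what such a template might look like follows. The placeholder syntax, API version, names, and ports here are assumptions for illustration; the actual template lives in the linked repository:

```yaml
# Hypothetical deployment spec template; ${FE_LABEL} and ${FE_IMAGE}
# are placeholders replaced at deployment time per environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ${FE_LABEL}        # label selector placeholder (.spec.selector)
  template:
    metadata:
      labels:
        app: ${FE_LABEL}      # pod label placeholder (.spec.template.metadata.labels)
    spec:
      containers:
        - name: vote
          image: ${FE_IMAGE}  # image placeholder
          ports:
            - containerPort: 80
```

The selector in .spec.selector must match the pod labels in .spec.template.metadata.labels, which is why both use the same placeholder.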