The CI/CD and DevOps Blog

Authenticating Against A Self-Hosted Kubernetes Cluster With A Service Account

This tutorial explains how to create a kubeconfig file for authenticating to a self-hosted Kubernetes cluster. If you use a hosted solution like GKE or AKS, you get the benefit of the cloud provider's authentication system; if your cluster is self-hosted, you'll have to take the DIY approach. This guide walks you through creating a service account on the cluster and generating a kubeconfig file that kubectl can use to interact with the cluster.

We assume that you have a working knowledge of Docker and Kubernetes.

The main reason to authenticate with a service account is to use it from a central deployment platform like Jenkins or Shippable. Since these platforms deploy your applications, you don't want to configure deployments with your personal account or tokens. A service account decouples deployments from any specific person, which makes them more secure and independent of who actually manages them.
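As a rough sketch of what the tutorial covers, the commands below create a service account, grant it access, and extract its token for a kubeconfig entry. The names deploy-bot and deploy-bot-context are placeholders, and the token lookup assumes a cluster that still auto-creates a token Secret for each service account.

# Create a service account for the deployment platform (name is illustrative)
kubectl create serviceaccount deploy-bot --namespace default

# Grant it access; cluster-admin is the broadest role, scope it down for production
kubectl create clusterrolebinding deploy-bot-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=default:deploy-bot

# Read the token from the Secret generated for the service account
SECRET_NAME=$(kubectl get serviceaccount deploy-bot -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)

# Add the credentials and a context to your kubeconfig
kubectl config set-credentials deploy-bot --token=$TOKEN
kubectl config set-context deploy-bot-context --cluster=<your-cluster> --user=deploy-bot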

Why the adoption of Kubernetes will explode in 2018

Kubernetes is an open-source orchestration engine for automating deployment, scaling, and management of containerized applications at scale. When your application requires a large number of containers, you need a tool to group containers into logical units and to track, manage, and monitor them all. Kubernetes helps you do that and is considered the de facto tool for container management.

The Kubernetes project is part of the Cloud Native Computing Foundation (CNCF) and has over 1500 contributors. It was started at Google, which still leads development efforts. 

Docker adoption is still growing exponentially, and more and more companies have started using it in production. It is important to use an orchestration platform to scale and manage your containers. Imagine a situation where you have been using Docker for a little while and have deployed to a few different servers. Your application starts getting massive traffic and you need to scale up fast: how will you go from 3 servers to the 40 you may now require? How will you decide which container should go where? How will you monitor all these containers and make sure they are restarted if they exit? This is where Kubernetes comes in.

Kubernetes Tutorial: How To Pull A Private Docker Image In A Pod

Docker images that comprise a production application are often deployed to private repositories in Docker registries. Kubernetes provides a feature called imagePullSecrets that allows pods to pull private Docker images. In this blog, we demonstrate how you can easily hook up imagePullSecrets to your pod using Shippable.


Creating an imagePullSecrets secret

imagePullSecrets is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It lets you specify the URL of the Docker registry and the credentials for logging in to it.

There are two ways an imagePullSecrets secret can be created.

1. Using the kubectl create secret docker-registry command. This is the approach we use in this blog.

ambarishs-MacBook-Pro:gke ambarish$ kubectl create secret docker-registry private-registry-key --docker-username="devopsrecipes" --docker-password="xxxxxx" --docker-email="username@example.com" --docker-server="https://index.docker.io/v1/"
secret "private-registry-key" created


2. Creating the secret via a yml file.

In this approach, a config.json file is created for the private registry. Its contents are then base64 encoded and specified in the .dockerconfigjson property.
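For example, assuming you have already run docker login against the registry so that ~/.docker/config.json holds its credentials, the encoded value can be produced like this:

# Base64 encode the Docker client config for the .dockerconfigjson field
# (assumes docker login has already stored the registry credentials locally)
cat ~/.docker/config.json | base64

If your base64 wraps long output onto multiple lines, join them into a single string before pasting it into the Secret definition below.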

apiVersion: v1
kind: Secret
metadata:
  name: private-registry-key
  namespace: default
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
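Once the secret exists, a pod references it by name under imagePullSecrets. A minimal sketch, where devopsrecipes/private-app is just an illustrative private image:

apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
    - name: private-app
      image: devopsrecipes/private-app:latest   # illustrative private image
  imagePullSecrets:
    - name: private-registry-key                # the secret created above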


Kubernetes Tutorial: Using Secrets In Your Application

Applications deployed to a Kubernetes cluster often need access to sensitive information, such as credentials to access a database or authentication tokens for making authenticated API calls to other services. Kubernetes lets you specify such sensitive information cleanly in an object called a Secret, which avoids putting sensitive data in a Pod definition or a Docker image. In this blog, we demonstrate how you can easily hook up Kubernetes Secrets to your pod using Shippable.


Creating a Kubernetes Secret

Secrets are defined in a Secret object in a yml file. A Secret object can specify multiple secrets as name-value pairs, and each value has to be base64 encoded before it is specified in the yml.

Let's define a secret for a fake API token, xxx-xxx-xxx.

1. Base64 encode the token.

ambarishs-MacBook-Pro:sources ambarish$ echo -n "xxx-xxx-xxx" | base64
eHh4LXh4eC14eHg=

2. Create the Secret yml file, called create-secret.yml.

apiVersion: v1
kind: Secret
metadata:
  name: auth-token-secret
type: Opaque
data:
  AUTH_TOKEN_VALUE: eHh4LXh4eC14eHg=

3. Create the secret in the Kubernetes cluster using kubectl.

$ kubectl create -f create-secret.yml
secret "auth-token-secret" created
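To consume the secret, a pod can surface it to a container as an environment variable through secretKeyRef; the pod and image names below are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: devopsrecipes/demo-app:latest    # illustrative image
      env:
        - name: AUTH_TOKEN                    # variable the application reads
          valueFrom:
            secretKeyRef:
              name: auth-token-secret         # the Secret created above
              key: AUTH_TOKEN_VALUE

Kubernetes decodes the value for you, so the container sees the original xxx-xxx-xxx token rather than the base64 string.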

Kubernetes Tutorial: Attaching A Volume Mount To Your Application

Kubernetes allows you to package multiple containers into a pod. All containers in the pod run on the same Node, share its IP address and port space, and can find each other via localhost. To share data between the containers in a pod, Kubernetes has an abstraction called Volumes. In this blog, we demonstrate how you can easily hook up Kubernetes Volumes to your pod and define the containers in the pod using Shippable.


Kubernetes Volumes

A Volume is a directory with data that is accessible to all containers running in a pod and gets mounted into each container's filesystem. Its lifetime is identical to the lifetime of the pod. Decoupling the volume lifetime from the container lifetime allows the volume to persist across container crashes and restarts. Volumes can further be backed by the host's filesystem, by persistent block storage such as AWS EBS, or by a distributed file system. The complete list of the different types of volumes that Kubernetes supports can be found here.
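As a simple illustration of that behaviour, the sketch below shares an emptyDir volume between two containers in the same pod, so both see the same /shared directory (the pod, container, and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                  # lives exactly as long as the pod
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /shared/hello.txt && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /shared
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /shared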

Shippable supports mounting all the volume types that Kubernetes offers via the dockerOptions resource. However, the specific volume type we demonstrate in this blog is a gitRepo volume. A gitRepo volume mounts a directory into each container's filesystem and clones a git repository into it.
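For reference, a hand-written pod spec using a gitRepo volume looks roughly like this; the repository URL and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: gitrepo-demo
spec:
  volumes:
    - name: source
      gitRepo:
        repository: "https://github.com/example/example-repo.git"   # placeholder repository
        revision: "master"
        directory: "."              # clone into the root of the volume
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /source && sleep 3600"]
      volumeMounts:
        - name: source
          mountPath: /source        # the cloned repo appears here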