Docker images that comprise a production application are often deployed to private repositories in Docker registries. Kubernetes provides a feature called imagePullSecrets that allows pods to pull private Docker images. In this blog, we demonstrate how you can easily hook up imagePullSecrets to your pod using Shippable.
Creating an imagePullSecrets secret
imagePullSecrets is a type of Kubernetes Secret whose sole purpose is to pull private images from a Docker registry. It lets you specify the URL of the Docker registry, the credentials for logging in, and the image name of your private Docker image.
There are two ways an imagePullSecrets can be created.
1. Using the kubectl create secret docker-registry command. We use this approach in this blog.
ambarishs-MacBook-Pro:gke ambarish$ kubectl create secret docker-registry private-registry-key --docker-username="devopsrecipes" --docker-password="xxxxxx" --docker-email="[email protected]" --docker-server="https://index.docker.io/v1/"
secret "private-registry-key" created
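Under the hood, kubectl packs these flags into a .dockerconfigjson payload. The sketch below shows a rough equivalent of what gets stored, using the placeholder credentials from the command above (this is an illustration, not the exact bytes kubectl produces):

```shell
# Sketch of the payload that `kubectl create secret docker-registry` builds
# from its flags. Credentials are the placeholders used above, not real keys.
USER="devopsrecipes"
PASS="xxxxxx"
EMAIL="[email protected]"
SERVER="https://index.docker.io/v1/"

# Docker's auth field is base64("username:password").
AUTH=$(printf '%s:%s' "$USER" "$PASS" | base64)

# Assemble the dockerconfigjson document.
printf '{"auths":{"%s":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}\n' \
  "$SERVER" "$USER" "$PASS" "$EMAIL" "$AUTH"
```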
2. Creating the secret via a yml file.
In this approach, a config.json file is created for the private registry. Its contents are then base64 encoded and specified in the .dockerconfigjson property.
apiVersion: v1
kind: Secret
metadata:
  name: private-registry-key
  namespace: default
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
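A minimal sketch of producing that base64 value, assuming a config.json in the shape Docker writes after docker login (the auth value below is the base64 of the placeholder credentials used earlier, not real keys):

```shell
# Write an example config.json. The auth value is base64("devopsrecipes:xxxxxx"),
# matching the placeholder credentials used earlier in this post.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{"auths":{"https://index.docker.io/v1/":{"auth":"ZGV2b3BzcmVjaXBlczp4eHh4eHg="}}}
EOF

# base64 -w 0 (GNU coreutils) disables line wrapping so the value can be
# pasted directly into the .dockerconfigjson field of the Secret yml.
ENCODED=$(base64 -w 0 < "$CONFIG")
echo "$ENCODED"

# Round-trip check: decoding must reproduce the original file.
echo "$ENCODED" | base64 -d | diff - "$CONFIG" && echo "round-trip ok"
```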
Referencing an imagePullSecret in a Pod
Once a Secret is created, the next step is to reference it in every Pod that needs to pull this image. The Shippable platform automatically attaches the secret to the pod spec when it creates the replicationcontroller spec for the pod.
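For reference, this is what the reference looks like in a raw Kubernetes pod spec (the pod name and secret name below are illustrative); Shippable generates the equivalent section for you:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dprk-kube-app
spec:
  containers:
    - name: dprk-kube-app
      image: devopsrecipes/deploy-kube-private-registry-node-app:latest
  # Kubernetes uses this secret when pulling the private image above.
  imagePullSecrets:
    - name: private-registry-key
```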
Creating secrets securely
Our scenario is a single-container Node.js application, whose image is in a private repository on Docker Hub. This application will be deployed to a Kubernetes cluster, provisioned on GKE using the gcloud CLI. Once the application is deployed, we will hook up a load balancer.
We will build this scenario with the following steps, using a Shippable workflow:
- Define the container packaged in the pod. The container runs a Node.js application that can be found in the sample repository.
- Create the Kubernetes cluster in GKE using Google cloud API
- Create the imagePullSecret on the cluster using kubectl.
- Create the pod that references the secret in the Kubernetes cluster.
- Create a Kubernetes load balancer/service for the application.
- Test the secret by loading the application in the browser using the public IP of the load balancer.
This is a pictorial representation of the workflow we're going to configure. The green boxes are jobs and the grey boxes are the input resources for the jobs. The workflow is defined across two configuration files: shippable.jobs.yml and shippable.resources.yml.
Resources (grey boxes)
- dprk_app_img is a required image resource that represents the private Docker image of your application.
- dprk_params is a required params resource used to specify key-value pairs that are set as environment variables for consumption by the application. We set environment variables needed to create the cluster, such as the cluster name, number of nodes, and machine type.
- dprk_cliConfig is a required cliConfig resource which is a pointer to the private key of your service account, needed to initialize the gcloud CLI.
- dprk_dockerhub is a required integration resource which is a pointer to the Docker Hub integration that contains the URL of the Docker registry, the credentials for logging in, and the email address associated with the credentials.
- dprk_pod_secret is a required dockerOptions resource which is used to reference the imagePullSecret key in the pod.
- dprk_cluster is a required cluster resource that represents the Kubernetes cluster in GKE.
- dprk_lb is an optional loadBalancer resource that defines load balancer properties such as labels, ports, and the cluster.
Jobs (green boxes)
- dprk-pod-def is a required manifest job that defines all the containers that run in the pod. This definition is versioned and each version is immutable.
- dprk_provision_cluster_and_secret is a required runSh job that creates the Kubernetes cluster using the Google Cloud API. It also uses kubectl to create the secret in the cluster using the Docker Hub integration data bound to the secrets template.
- dprk-pod-deploy is a required deploy job which builds the replicationcontroller spec for our application and deploys it to the Kubernetes cluster.
- dprk-provision-lb is an optional provision job used to create the load balancer for the Kubernetes cluster.
To run this sample, you will need:
- Any supported Docker registry with a private repository for your application. We have used Docker Hub as the Docker registry in this sample.
- A GitHub account where you will fork and run this sample.
- A Shippable account (sign in with GitHub to create one).
If you're not familiar with Shippable, we also recommend reading the Platform overview doc to understand the overall structure of Shippable's DevOps Assembly Lines platform.
The code for this example is in a GitHub repository called devops-recipes/deploy-kubernetes-secrets. You can fork the repository to try out this sample yourself or just follow instructions to add Shippable configuration files to your existing repository.
- The Node.js application source code and Dockerfile can be found here in the repository.
- This repository also has the Shippable configuration files to create the workflow.
1. Define the containers in a pod
A. Create an account integration using your Shippable account for your Docker registry.
Instructions to create an integration can be found here. Copy the friendly name of the integration, which we have set as drship_dockerhub.
B. Define dprk_app_img
dprk_app_img is an image resource that represents the docker image of your application. In our example, we're using a private docker image hosted on Docker Hub.
Add the following yml block to your shippable.resources.yml file.
- name: dprk_app_img
  type: image
  # replace drship_dockerhub with your docker hub integration name
  integration: drship_dockerhub
  pointer:
    # replace devopsrecipes/deploy-kube-private-registry-node-app with your
    # docker repository private image name
    sourceName: devopsrecipes/deploy-kube-private-registry-node-app
  seed:
    versionName: "latest"
C. Define dprk-pod-def
dprk-pod-def is a manifest job that defines all the containers that run in the pod. This definition is versioned and each version is immutable.
Add the following yml block to your shippable.jobs.yml file.
- name: dprk-pod-def
  type: manifest
  steps:
    - IN: dprk_app_img
    - TASK: managed
D. Commit config files and add them to your Shippable account.
Once you have these configuration files as described above, commit them to your repository. The shippable.jobs.yml and shippable.resources.yml can be committed to the same app repository, or to a separate repository.
The repository containing your jobs and resources ymls is called a Sync repository and represents your workflow configuration.
Follow these instructions to import your configuration files into your Shippable account.
2. Create the cluster and the imagePullSecret
A. Create account integration for Google Cloud.
Since your workflow will interact with GKE to create the cluster and deploy the application, you will need to create an integration for Google Cloud in the Shippable UI.
- Create a Google Cloud integration called drship_gcloud using instructions found here.
- Ensure that you have set the Subscription Scopes in the account integration to the subscription where your repository is located.
B. Create an account integration for your Docker registry.
- Name the integration drship_dockerhub. If you change the name, also change it in the yml in Step C.
- Specify the URL, credentials, and email of your Docker registry.
- Ensure you give access to the organization that your repository exists in under Subscription Scopes.
C. Define resources needed to create the cluster and the secret
Add the following resources to your shippable.resources.yml file:
resources:
  - name: dprk_app_img
    type: image
    # replace drship_dockerhub with your docker hub integration name
    integration: drship_dockerhub
    pointer:
      # replace devopsrecipes/deploy-kube-private-registry-node-app with your
      # docker repository image name
      sourceName: devopsrecipes/deploy-kube-private-registry-node-app
    seed:
      versionName: "latest"

  - name: dprk_params
    type: params
    version:
      params:
        DPRK_APP_LABEL: "dprk-kube-app"
        DPRK_CLUSTER_NAME: "dprk-test-cluster"
        DPRK_CLUSTER_NUM_NODES: 1
        DPRK_CLUSTER_MACHINE_TYPE: "n1-standard-1"
        DPRK_SECRET_KEY_NAME: "dprk-registry-key"

  - name: dprk_dockerhub
    type: integration
    # replace drship_dockerhub with your Docker hub integration name
    integration: drship_dockerhub

  - name: dprk_cliConfig
    type: cliConfig
    # replace drship_gcloud with your Google cloud integration name
    integration: drship_gcloud
    pointer:
      # replace us-central1-a with your availability zone
      region: us-central1-a
D. Add the dprk_provision_cluster_and_secret job to your shippable.jobs.yml file.
It is a runSh job that lets you run any shell script. Note that the dprk_cliConfig input to the job automatically initializes the gcloud CLI. The script creates the GKE cluster using the environment variables injected by the dprk_params resource. It then creates the imagePullSecret on the cluster.
Since it needs to run after the pod definition job in the workflow, dprk-pod-def is specified as an input.
jobs:
  - name: dprk_provision_cluster_and_secret
    type: runSh
    steps:
      - IN: dprk_params
      - IN: dprk_dockerhub
      - IN: dprk_cliConfig
        scopes:
          - gke
      - IN: dprk-pod-def
      - TASK:
        - script: |
            # check if the cluster already exists on GKE
            response=$(gcloud container clusters describe $DPRK_CLUSTER_NAME --zone $DPRK_CLICONFIG_POINTER_REGION || echo "ClusterNotFound")
            if [[ $response = "ClusterNotFound" ]]
            then
              echo "cluster not found, creating cluster"
              gcloud container clusters create $DPRK_CLUSTER_NAME --num-nodes=$DPRK_CLUSTER_NUM_NODES --machine-type=$DPRK_CLUSTER_MACHINE_TYPE
            else
              echo "cluster already exists, skipping creating cluster"
            fi
            # Generate the kubectl configuration
            gcloud container clusters get-credentials $DPRK_CLUSTER_NAME --zone $DPRK_CLICONFIG_POINTER_REGION
            # Delete and create the imagePullSecret
            kubectl delete secret $DPRK_SECRET_KEY_NAME 2>/dev/null || echo "secret does not exist"
            kubectl create secret docker-registry $DPRK_SECRET_KEY_NAME --docker-username="$DPRK_DOCKERHUB_INTEGRATION_USERNAME" --docker-password="$DPRK_DOCKERHUB_INTEGRATION_PASSWORD" --docker-email="$DPRK_DOCKERHUB_INTEGRATION_EMAIL" --docker-server="$DPRK_DOCKERHUB_INTEGRATION_URL"
E. Commit config files and add them to your Shippable account.
Once you have these configuration files as described above, commit them to your repository.
3. Define the imagePullSecret reference to the pod
dprk_pod_secret is a dockerOptions resource which is used to reference the imagePullSecret key dprk-registry-key in the pod.
Add the following yml block to your shippable.resources.yml file and commit the file.
- name: dprk_pod_secret
  type: dockerOptions
  version:
    imagePullSecrets:
      - name: dprk-registry-key
4. Deploy the pod
A. Define dprk_cluster
dprk_cluster is a cluster resource that represents the Kubernetes cluster in GKE.
Add the following yml block to your shippable.resources.yml file.
resources:
  - name: dprk_cluster
    type: cluster
    # replace drship_gcloud with your google cloud integration name
    integration: drship_gcloud
    pointer:
      # replace dprk-test-cluster with your google container engine cluster name
      sourceName: "dprk-test-cluster"
      # replace us-central1-a with your availability zone
      region: us-central1-a
B. Create deployment job
dprk-pod-deploy is a deploy job which builds the Deployment spec for our application and deploys it to the Kubernetes cluster. Since it needs to run after the secret is created in the workflow,
dprk_provision_cluster_and_secret is specified as an input.
Add the following yml block to your shippable.jobs.yml file.
jobs:
  - name: dprk-pod-deploy
    type: deploy
    steps:
      - IN: dprk_pod_secret
      - IN: dprk_cluster
      - IN: dprk-pod-def
      - IN: dprk_provision_cluster_and_secret
C. Commit the shippable.resources.yml and shippable.jobs.yml file to your repository.
Your pipeline should now look like this in the SPOG view.
5. Create a load balancer for the application
This is an optional step and the configuration required to create the load balancer can be found in this document. The sample application also has the load balancer configuration.
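If you prefer to see the underlying Kubernetes object, a loadBalancer resource ultimately maps to a Service of type LoadBalancer along these lines. The service name, ports, and the selector key below are assumptions for illustration; the label value comes from the DPRK_APP_LABEL param defined earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dprk-app-lb            # example name; yours may differ
spec:
  type: LoadBalancer           # GKE provisions a public IP for this Service
  selector:
    name: dprk-kube-app        # must match the label applied to the pod
  ports:
    - port: 80                 # example ports; use your app's actual ports
      targetPort: 80
```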
6. Trigger your pipeline
Right click on dprk-pod-def in the SPOG and click on Build Job. This will trigger the entire pipeline.
Screenshot of a run of the dprk-pod-deploy job.
7. Test the secret
Screenshot of the load balancer created in Google Cloud, since the Kubernetes cluster that we used runs in Google cloud.
Screenshot of the app running in the browser.
Try the sample above to automate the deployment pipeline for your own Kubernetes application using secrets. You can sign in for free: