Get that Kubernetes cluster working!!!

- By Devashish Meena on March 25, 2015

[NOTE: Please check out our documentation showing how you can automate creation of a Kubernetes cluster on GKE using GCloud SDK and Shippable.]

-----------------------------------------------------------------------------------------

[Figure: kubernetes cluster]

This guide demonstrates how to build a two-node Kubernetes cluster. Kubernetes ships with scripts that install it on various cloud providers or locally on a Vagrant box, but the idea here is to provide an installation mechanism that is provider and OS agnostic. So, at the end of this guide, we'll have a script that can be run on any two machines that can communicate with each other. The script downloads the specified Kubernetes and etcd releases, installs all the components needed for the Kubernetes master and slave nodes, and configures them before booting them up.

For the impatient folks who just want to copy-paste-run the script, here's the link: https://gist.github.com/ric03uec/81f6dc1208c87e4f4b86#file-kube-install-sh

Background:

Quoting from the Kubernetes GitHub page:

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.

Kubernetes provides a container management layer on top of Docker that makes it very easy to scale a container-based application or microservices. It introduces additional constructs like 'Pods', 'ReplicationControllers', 'Services' and 'Namespaces' which are used to interact with containers. Actual Docker containers and images are never manipulated directly, only through the Kubernetes constructs. Kubernetes provides a command line tool called kubectl to manipulate these objects, and a full REST API to do the same remotely. Reading the Kubernetes design documents is highly recommended.
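For a taste of what that looks like in practice, once the cluster built in this guide is up you can poke at it from the master with a few kubectl commands (a minimal sketch; exact resource names vary a bit across the early releases):

# list the nodes registered with the master (called 'minions' in early releases)
$ kubectl get minions

# list the pods and services defined in the cluster
$ kubectl get pods
$ kubectl get services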

A minimal Kubernetes cluster has two nodes: one acts as the master and the other as the slave. The following components are installed on the master (a quick sketch of how they fit together follows the two lists below):

- etcd: a highly available key-value store, used for storing all the cluster information

- kube-apiserver: provides the REST API endpoint

- kube-scheduler: decides which nodes will run the containers defined in Pod(s)

- kube-controller-manager: maintains the state of Pod(s) as defined in the manifest

and the following components are installed on the slave:

- kube-proxy: used by 'Services' to create iptables rules to connect to Pod(s)

- kubelet: talks to Docker to start/stop/destroy containers
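To make the split concrete, here is roughly how the master daemons are wired together once everything is installed (the flag values below are illustrative; the ones actually used by the install script are set in the /etc/default files described later):

# etcd listens on 127.0.0.1:4001 by default
$ etcd

# the API server persists all cluster state in etcd
$ kube-apiserver --address=0.0.0.0 --port=8080 --etcd_servers=http://127.0.0.1:4001

# scheduler and controller-manager talk to the API server
$ kube-scheduler --master=127.0.0.1:8080
$ kube-controller-manager --master=127.0.0.1:8080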

Environment Setup:

For this tutorial, I'll use Vagrant as the provider and bring up two fresh Ubuntu 14.04 (x86_64) machines on the local system. As mentioned earlier, this can just as well be done on two DigitalOcean or AWS machines that can connect to each other; absolutely no changes have to be made to the scripts for this.

Put the following Vagrant file in any folder, say /home/kube/Vagrantfile.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.define "kube-master" do |master|
    master.vm.box = "trusty64"
    master.vm.network "private_network", ip: "192.168.33.10"
    master.vm.hostname = "kube-master"
  end

  config.vm.define "kube-slave" do |slave|
    slave.vm.box = "trusty64"
    slave.vm.network "private_network", ip: "192.168.33.11"
    slave.vm.hostname = "kube-slave"
  end
end



This Vagrantfile creates two machines, named 'kube-master' and 'kube-slave'. We'll use 'kube-master' to install the Kubernetes master services and bring up etcd, and 'kube-slave' to install the Kubernetes slave components. Bring them up with the following commands:

[terminal-1] $ cd /home/kube
[terminal-1] $ vagrant box add https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box --name trusty64
[terminal-1] $ vagrant up kube-master
[terminal-1] $ vagrant ssh kube-master
[terminal-1] $ ./kube-install.sh master

[terminal-2] $ vagrant up kube-slave
[terminal-2] $ vagrant ssh kube-slave
[terminal-2] $ ./kube-install.sh slave


After this, you should have two terminals, one ssh'd into kube-master and the other into kube-slave.
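One thing the commands above gloss over: kube-install.sh has to be present inside each VM before the last command in each terminal can run. A quick way to fetch it (assuming curl is available in the box and the gist's raw-file URL layout):

[terminal-1] $ curl -L -o kube-install.sh https://gist.githubusercontent.com/ric03uec/81f6dc1208c87e4f4b86/raw/kube-install.sh
[terminal-1] $ chmod +x kube-install.sh
# repeat the same two commands in terminal-2 on kube-slave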

Steps:

The best documentation of code is the code itself, which is why I've tried to make the script as organized and readable as I could, with some comments thrown in just in case. So I'll just explain the functions used and what they do; you'll get a better idea once you go through the script itself. The script uses environment variables heavily to make almost everything configurable.

- update_hosts(): This function updates the /etc/hosts file to add entries for the master and slave nodes. Nothing fancy here.

- install_docker(): Installs Docker on the slave if it's not already there. You can install it manually and comment this function out.

- stop_services(): Runs a sanity check on the services and stops any that might accidentally be running already.

- install_etcd(): The fun begins here. Downloads and extracts the etcd server to a predefined path (/usr/bin in this case).

- download_kubernetes_release(): Downloads and extracts the Kubernetes binaries into the /tmp folder.

- update_master_binaries(): Copies the Kubernetes binaries to a predefined path (/usr/bin in this case).

- update_services_config(): This is the main function, where the configuration of all the services takes place. All the config files live under /etc/default. Since only the last value set for any parameter is read, we simply append the configuration at the bottom of those files. For example, the config file /etc/default/kube-apiserver ends up looking like the following:

# Kube-Apiserver Upstart and SysVinit configuration file

# Customize kube-apiserver binary location 
# KUBE_APISERVER="/opt/bin/kube-apiserver"

# Use KUBE_APISERVER_OPTS to modify the start/restart options
KUBE_APISERVER_OPTS="--address=127.0.0.1 \
--port=8080 \
--etcd_servers=http://127.0.0.1:4001 \
--logtostderr=true \
--portal_net=11.1.1.0/24"

# Add more environment settings used by kube-apiserver here
KUBE_APISERVER=/usr/bin/kube-apiserver
KUBE_APISERVER_OPTS="--address=0.0.0.0 --port=8080 --etcd_servers=http://localhost:4001 --portal_net=11.1.1.0/24 --allow_privileged=true --kubelet_port=10250 --v=0 "
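Appending those last two lines is essentially all the function has to do for this service. A minimal sketch of the idea (the shell below is illustrative, not the script's exact code):

# append the effective configuration to the bottom of the defaults file;
# upstart sources this file and the last assignment of each variable wins
cat <<'EOF' >> /etc/default/kube-apiserver
KUBE_APISERVER=/usr/bin/kube-apiserver
KUBE_APISERVER_OPTS="--address=0.0.0.0 --port=8080 --etcd_servers=http://localhost:4001 --portal_net=11.1.1.0/24 --allow_privileged=true --kubelet_port=10250 --v=0"
EOF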


- remove_redundant_config(): When running on the master, removes the config and upstart files for the slave services; when running on the slave, removes those for the master services.

- start_services(): Starts the services on the master and slave nodes.

- check_service_status(): Checks whether all the services are running correctly.
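Since every daemon is registered as an upstart service, the status check boils down to something like the loop below (a rough sketch, not the script's exact code; on the slave the list would be docker, kubelet and kube-proxy instead):

# on the master: make sure every daemon reports as running
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  if sudo service "$svc" status | grep -q running; then
    echo "$svc is running"
  else
    echo "$svc is NOT running" >&2
  fi
done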

Testing:

Moment of truth. Execute a few kubectl commands on the master node to verify that everything is wired up. After creating a pod, it might take a few minutes for its status to change to 'RUNNING' because the image has to be pulled from Docker Hub first.

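A minimal sequence for exercising the cluster from kube-master is shown below; the pod manifest and its apiVersion are illustrative and depend on the Kubernetes release the script downloaded, so adjust them to match your binaries.

# confirm the slave registered with the API server
[terminal-1] $ kubectl get minions

# write a simple nginx pod manifest (v1beta1-style, adjust for your release)
[terminal-1] $ cat > nginx-pod.json <<'EOF'
{
  "id": "nginx",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "nginx",
      "containers": [
        { "name": "nginx", "image": "nginx", "ports": [ { "containerPort": 80 } ] }
      ]
    }
  }
}
EOF

# create the pod and watch its status
[terminal-1] $ kubectl create -f nginx-pod.json
[terminal-1] $ kubectl get pods

The pod stays in a pending state until the nginx image finishes pulling on the slave, after which kubectl get pods reports it as running.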

Hope this helped you understand the basics of Kubernetes and get this rather complicated setup working correctly. Read the next two parts in this series: Docker overlay network using Flannel and Kubernetes cluster with Flannel overlay network.



Topics: containers, devops