Kubernetes Cluster with Flannel Overlay Network

- By Devashish Meena on August 31, 2015

 

This is the third and final post in the series where we play around with Docker, Kubernetes and Flannel overlay network. The first two posts are available at:
  • Multi node kubernetes cluster
  • Docker overlay network using flannel


In this tutorial I’ll explain how to bring up a multi-node Kubernetes cluster with an overlay network. This essentially combines what I’ve explained in the previous posts. An overlay is necessary to fulfill the networking requirements of a fully functional Kubernetes cluster. All of this is taken care of auto-magically when the cluster is brought up on GCE, but the manual configuration is slightly complicated -- both because it is non-trivial to set up so many components correctly, and because, with so many tools available for the same job, it is difficult to figure out which one to pick. I picked flannel because of its simplicity and community backing.

As before, the code for doing everything explained here is available HERE. Feedback/suggestions to improve this are most welcome. I’ll bring up the cluster on a local box using Vagrant, but the script can be run on any cloud. As of now, the script is only compatible with Ubuntu 14.04.

Bootstrapping:

  • Bringing up the cluster: running “vagrant up” from inside the directory will bring up two machines, a master and a node, with static IPs
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.define "kube-master" do |master|
    master.vm.box = "trusty64"
    master.vm.network "private_network", ip: "192.168.33.10"
    master.vm.hostname = "kube-master"
  end

  config.vm.define "kube-slave1" do |slave|
    slave.vm.box = "trusty64"
    slave.vm.network "private_network", ip: "192.168.33.11"
    slave.vm.hostname = "kube-slave1"
  end
end
Usage:
./kube-installer.sh

Options:
--master <master ip address> Install kube master with provided IP
--slave <slave ip address> <master ip address> Install kube slave with provided IP
  • for the master, run “sudo ./kube-installer.sh --master 192.168.33.10”
  • for the slave, run “sudo ./kube-installer.sh --slave 192.168.33.11 192.168.33.10”

Master:

The installer executes the following steps for the master node:

  • download and extract the Kubernetes master-specific binaries (kube-apiserver, kube-controller-manager, kube-scheduler, kubectl)
  • download and install etcd
  • copy configuration files for etcd, kube-apiserver, kube-controller-manager, and kube-scheduler to the appropriate locations
  • start all services on the master
  • update the subnet configuration for flannel in etcd
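The last step seeds etcd with the address range that flannel carves per-node subnets out of. A minimal sketch of what that looks like (the CIDR here is an illustrative assumption; /coreos.com/network/config is flannel's default etcd key, and this uses the etcd v2 etcdctl syntax current at the time of writing):

```shell
# Store the overlay network configuration in etcd so the flannel daemons
# on the nodes can pick it up when they start. Each node's flannel will
# claim a /24 out of this range for its local Docker bridge.
etcdctl set /coreos.com/network/config '{ "Network": "172.16.0.0/16" }'
```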

Node(s):

The installer executes the following steps for the slave nodes:

  • install Docker
  • download and extract the Kubernetes node-specific binaries (kube-proxy, kubelet)
  • install flannel
  • copy configuration files for flannel, Docker, kubelet, and kube-proxy to the appropriate locations
  • update the Docker config to use the flannel bridge
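The last step works because flannel, once running, writes the subnet it claimed into an environment file (by default /run/flannel/subnet.env) that Docker's options can be derived from. A sketch of that derivation, using a local copy of the file so the snippet is self-contained (the subnet and MTU values are illustrative; on a real node you would source /run/flannel/subnet.env and write DOCKER_OPTS into /etc/default/docker before restarting Docker):

```shell
# Stand-in for /run/flannel/subnet.env, which flannel writes at startup.
# The values below are illustrative assumptions.
cat > subnet.env <<'EOF'
FLANNEL_SUBNET=172.16.62.1/24
FLANNEL_MTU=1450
EOF

# Source flannel's allocated subnet and MTU, then build the Docker options
# that point the docker0 bridge at the flannel-assigned address range.
. ./subnet.env
DOCKER_OPTS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
echo "$DOCKER_OPTS"
```

On the real node, these options replace Docker's default 172.17.x.x bridge addressing so that every container IP is routable across the overlay.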

The routing process:

The picture below and this link provide more details on how the routing takes place using kube-proxy and the overlay network.

[Figure: how routing takes place using kube-proxy and the overlay network]
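A quick way to see the overlay in action once both machines are up (this is a sketch, assuming kubectl is on the master's PATH and the apiserver is reachable on its default port; the pod name and image are illustrative):

```shell
# Confirm the slave registered itself with the apiserver.
kubectl get nodes

# Start an nginx pod; it gets scheduled on the slave and receives an IP
# from the flannel-assigned subnet, visible in the wide output.
kubectl run nginx --image=nginx --port=80
kubectl get pods -o wide

# From the master, the pod IP is reachable through the overlay even
# though the pod runs on the other machine. Substitute the IP reported
# by the previous command.
curl http://<pod-ip>
```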

Let me know if you try this out and if you have any feedback/suggestions. 

[Devashish Meena is a senior developer at Shippable, which offers a cloud-based Continuous Integration and Continuous Delivery platform. For more technical posts from Devashish, visit his blog. You can also follow him on Twitter and GitHub.]


