Matthew Mihok posted:

Setting up Kubernetes from Scratch - Part 1


Last time, we set up a simple three-node cluster and ran Kubernetes on it. Super easy, and that's the recommended approach when getting your feet wet with Docker container orchestration.

This post is the first in a series that goes through setting up a custom Kubernetes cluster. The Kubernetes documentation suggests several other methods for deploying to your infrastructure, but I found that I didn't quite grasp all of the pieces and how they fit together until I went through it from scratch. This walkthrough will be a bit more difficult, so it's recommended that you have a decent understanding of:

  • Linux, bash, general command line experience (cd, cp, tar, mv, mkdir, etc)
  • Networking, specifically CIDR notation and network interfaces, plus a basic understanding of firewalls and perhaps port mapping
  • Docker
  • Ruby, very basic familiarity with the language, specifically its syntax and control flow

We'll also be using Vagrant again so that we don't have to spend any money on server infrastructure. That said, everything here should translate fairly well to an actual deployment on DigitalOcean, Amazon AWS, or another IaaS.

Today we'll specifically be looking at setting up etcd, flanneld, and Docker to all play nicely together. They are the core underlying "fabric" that helps Kubernetes achieve its goals, and I'll explain what each one does later on.

I always like to see what versions a blog post is working with in case there are small discrepancies or issues, so we'll be working with:

  • Vagrant 1.8.5
  • Ubuntu 14.04
  • etcd 3.0.1
  • flanneld 0.5.5
  • Docker 1.11.2

Okay, let's get started!

First, we'll need to start by setting up our Vagrantfile. Here is what we will begin with:


# -*- mode: ruby -*-
# vi: set ft=ruby :

$instances = 3
$instance_name_prefix = "app"

$app_cpus = 1
$app_memory = 1024

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  (1..$instances).each do |i|
    config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
      config.vm.hostname = vm_name

      # Set the VM memory and cpu allocations
      config.vm.provider :virtualbox do |vb|
        vb.memory = $app_memory
        vb.cpus = $app_cpus
      end

      # Create a private network, which allows host-only access to the machine
      # using a specific IP.
      ip = "44.0.0.#{i+100}"
      config.vm.network :private_network, ip: ip

      # Section (A) -- etcd

      # Section (B) -- flannel

      # Section (C) -- docker
    end
  end
end

Now, before we build our Vagrant machines, we'll want to add in etcd and flanneld:

etcd

From etcd's GitHub page:

etcd is a distributed, consistent key-value store for shared configuration and service discovery

etcd is used by Kubernetes to store information about each machine, shared across all machines within the application pool. First, we'll need to add a shell provisioner below section (A) to install it on our Vagrant machines:

# First machine gets a new state, and is the first in our cluster list
state = "new"
cluster = "app-01=http:\\/\\/44.0.0.101:2380"
if i > 1
  # All other machines get an existing state since they're set up sequentially 
  state = "existing"

  # Add each additional machine to our cluster list
  (2..i).each do |j|
    cluster = "#{cluster},app-0#{j}=http:\\/\\/44.0.0.#{j+100}:2380"
  end
end    

# The actual vagrant provision call
config.vm.provision "shell", path: "etcd.sh", name: "etcd", env: {"IP" => ip, "CLUSTER_STATE" => state, "CLUSTER" => cluster}

Try not to read too much into the snippet; the most important part is that we're provisioning a script, etcd.sh, and running it with three environment variables: IP, CLUSTER_STATE, and CLUSTER. The if statement and variables are there so that we can create three machines without duplicating code in our Vagrantfile.
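
To make that concrete: by the time the third machine (i = 3) is provisioned, the two variables expand to the following (the URL slashes are escaped, presumably so etcd.sh can use the value safely inside a sed substitution):

state   = "existing"
cluster = "app-01=http:\/\/44.0.0.101:2380,app-02=http:\/\/44.0.0.102:2380,app-03=http:\/\/44.0.0.103:2380"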

In our etcd.sh script, we'll want to:

  • Download the release, and install it onto the machine.
  • Create a service file for etcd to run at boot, and
  • Start the service

I've used Ubuntu's Upstart for our service file, but for newer versions of Ubuntu (16.04+) you'll want to look into systemd services. Since I'm using Upstart, I need to create two files in addition to my script. I've created a gist for all three files here (a rough sketch of etcd.sh follows the list). The gist comprises:

  • etcd.conf Upstart service definition file
  • etcd.override Upstart override file, and
  • etcd.sh Vagrant provision script.
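
In case the gist isn't handy, here is a minimal sketch of what etcd.sh does. The download URL matches the official etcd v3.0.1 release, but the __IP__-style placeholders in the Upstart override are my own convention, not necessarily the gist's:

#!/bin/bash
# Rough sketch of etcd.sh -- details may differ from the gist.
# IP, CLUSTER_STATE, and CLUSTER arrive via the Vagrant provisioner's env.

ETCD_VERSION="v3.0.1"
ETCD_DIR="etcd-${ETCD_VERSION}-linux-amd64"

# 1. Download the release and install the binaries
curl -sL "https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/${ETCD_DIR}.tar.gz" -o /tmp/etcd.tar.gz
tar xzf /tmp/etcd.tar.gz -C /tmp
cp "/tmp/${ETCD_DIR}/etcd" "/tmp/${ETCD_DIR}/etcdctl" /usr/local/bin/

# 2. Install the Upstart job, substituting our environment into the
#    override file (this sed is why the CLUSTER string escapes its slashes)
cp /vagrant/etcd.conf /etc/init/etcd.conf
sed -e "s/__IP__/${IP}/g" \
    -e "s/__CLUSTER_STATE__/${CLUSTER_STATE}/g" \
    -e "s/__CLUSTER__/${CLUSTER}/g" \
    /vagrant/etcd.override > /etc/init/etcd.override

# 3. Start the service
start etcd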

Place these files in the same folder as your Vagrantfile. Next, we'll provision flannel.

flannel

From flannel's GitHub page:

flannel is a virtual network that gives a subnet to each host for use with container runtimes.

flannel uses etcd to keep track of its subnets and is used by Kubernetes as a network layer. I should also mention that flannel is not the only option when setting up Kubernetes from scratch. If you're using Amazon AWS, you may want to use Amazon's own infrastructure to handle the network layer. If that's the case, or you just want to know your options, I suggest reading through the Kubernetes "Getting Started from Scratch" documentation, specifically the network section.

Similar to etcd, we'll need to provision flannel in our Vagrantfile, under section (B):

# Provision flannel binaries and services
config.vm.provision "shell", path: "flanneld.sh", name: "flannel"

if i == 1
  # Create our flannel configuration in etcd
  config.vm.provision "shell", name: "flannel-config", inline: "etcdctl mkdir /network; etcdctl mk /network/config </vagrant/flanneld.json"
end

# Start flannel
config.vm.provision "shell", name: "flannel", inline: "start flanneld"

# Add the next node if we aren't the last node
if $instances > 1 && i < $instances
  config.vm.provision "shell", name: "etcd-add", inline: "etcdctl member add app-0#{i+1} http://44.0.0.#{i+101}:2380"
end

Here we provision flannel, send our configuration to etcd, and then start flannel. The very last if statement is special in that we're registering the next node before we actually create it; etcd requires that a member joining a cluster in the "existing" state be added via etcdctl member add ahead of time. Within flanneld.sh, we'll follow the same pattern as above:

  • Download the release, and install it onto the machine.
  • Create a service file for flanneld to run at boot, and
  • Start the service

You can find my gist here (a sample flanneld.json follows the list), which contains:

  • flanneld.conf Upstart service definition file
  • flanneld.sh Vagrant provision script, and
  • flanneld.json JSON flannel configuration object for etcd
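
flanneld.sh mirrors etcd.sh: download the release, install the binary, and drop in the Upstart job, but it leaves starting the service to the Vagrantfile, since the configuration has to be in etcd first. As for flanneld.json, I can't reproduce the gist verbatim here, but a minimal configuration consistent with the interface output at the end of this post would be:

{
  "Network": "44.1.0.0/16"
}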

Like above, place these files in the same directory as the Vagrantfile. If everything works out as planned, flannel will create a bash source file at /run/flannel/subnet.env on each machine. This file contains FLANNEL_SUBNET and FLANNEL_MTU environment variables, which we'll use next to get Docker to run its containers on our flannel network layer.
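
To sanity-check, cat that file on any machine. With the flanneld.json above, it should look something like this (flannel also writes FLANNEL_NETWORK and FLANNEL_IPMASQ; the exact subnet will vary per host):

FLANNEL_NETWORK=44.1.0.0/16
FLANNEL_SUBNET=44.1.94.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false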

Docker

From docker.com:

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

If you've used Docker before, you'll be familiar with it creating several network interfaces: one for each running container, plus a bridge. What we're going to do today is modify its daemon parameters to use flannel for its bridge instead of the default Docker bridge.

In essence, we just need to make sure the docker daemon is run with the --bip and --mtu flags. Simply put:

docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}

Now, Docker installs its own Upstart script, so we don't need to write one ourselves, but we can leverage it to change some of the daemon options. The Upstart script sources /etc/default/docker, and this defaults file is where we'll make our additions. Save the following to the same directory as your Vagrantfile, under the name docker.default:

if [ -f /run/flannel/subnet.env ]; then
    . /run/flannel/subnet.env

    DOCKER_OPTS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
fi

Finally, our last step is to provision this file and restart Docker before Vagrant finishes. We'll put the following in section (C):

config.vm.provision "docker"

config.vm.provision "shell", name: "docker", path: "docker.sh"

The first line is Vagrant's built-in Docker provisioner, which installs Docker for us; docker.sh then swaps in our defaults file and restarts the daemon:

#!/bin/bash

# Docker Daemon Options
cp /vagrant/docker.default /etc/default/docker

service docker stop
service docker start

That's it! Place the docker.sh file in the same directory as your Vagrantfile and you should be able to run vagrant up!
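
For reference, bringing up the cluster and getting a shell on the first machine is just:

vagrant up
vagrant ssh app-01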

To test and make sure everything worked, once vagrant is up and we've SSH'd into one of the machines, we can run etcdctl ls to list all of etcd's keys and key directories. We've only added flannel's configuration, so the output will simply look like this:

/network
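
You can also confirm that the flannel configuration itself made it in; the value should match whatever is in your flanneld.json:

$ etcdctl get /network/config
{
  "Network": "44.1.0.0/16"
}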

Or, run ip a to see a new interface, flannel0, alongside Docker's network interface, docker0, using a subnet within the range of our flanneld.json configuration. It should look something like this:

4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default 
    link/ether 02:42:09:61:7b:47 brd ff:ff:ff:ff:ff:ff
    inet 44.1.94.1/24 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9ff:fe61:7b47/64 scope link 
       valid_lft forever preferred_lft forever
5: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none 
    inet 44.1.94.0/8 scope global flannel0
       valid_lft forever preferred_lft forever
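
As one final check of my own (not part of the original gists), you can verify that containers on different hosts can reach each other over flannel:

# On app-01: start a container and grab its flannel-assigned IP
docker run -d --name web nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web
# => e.g. 44.1.94.2

# On app-02: ping that container's IP from the other host
ping -c 3 44.1.94.2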

For le lazy, I've created a public repo that will contain all parts of this series. Part 1 can be found here.

In the next post in this series, we'll build on this and install all of the Kubernetes binaries, hopefully getting the Kubernetes dashboard running through Kubernetes itself!