So I’ve been running some of our development components on Docker at the company, but never to the point that we relied on them for development and production. Due to some major changes in our datacenter, I had to set up some database clusters and services from scratch to migrate our old data into.
I saw this as an opportunity to convert some of our classic VMs into Docker containers for an easier production and development experience.
For now I have prepared three VMs on three different hosts to be used by Kubernetes, all running CentOS 7 minimal.
The IP addresses of the machines are:
- 172.20.100.120 -> Kubernetes Master
- 172.20.100.121 -> Kubernetes Node1
- 172.20.100.122 -> Kubernetes Node2
Let's begin by setting up some tools on each machine:
yum install -y yum-utils device-mapper-persistent-data lvm2 git nano wget vim
Now let's configure the hosts files for cleaner communication within the cluster. I added the following at the end of /etc/hosts on each server:
172.20.100.120 kubemaster
172.20.100.121 kube1
172.20.100.122 kube2
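To sanity-check the entries, each hostname should resolve from every machine:

# should print the three addresses defined above
getent hosts kubemaster kube1 kube2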
Now let's get rid of SELinux. Run setenforce 0 on each server and then edit /etc/sysconfig/selinux as follows:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
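Note that SELinux is only fully disabled after a reboot; until then the setenforce 0 above keeps it permissive, which is enough for kubeadm. You can check the current state with:

getenforce
# prints Permissive now, and Disabled after the next reboot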
Now we need to enable the br_netfilter kernel module so that packets traversing the bridge are processed by iptables, both for port forwarding and for communication between nodes within the cluster.
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
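Keep in mind that neither of these settings survives a reboot by itself. A persistent setup, assuming the standard systemd drop-in paths, would be:

# load the module at every boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
# make the sysctl permanent and apply it
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-kubernetes.conf
sysctl --system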
Disable swap on each machine:
swapoff -a
Then edit /etc/fstab and comment out the swap partition line so swap stays off after a reboot.
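If you prefer a one-liner, something like this should do it (it keeps a backup of the original file):

# comment out any swap entry in /etc/fstab, saving the original as /etc/fstab.bak
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# verify: the Swap row should show all zeros
free -h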
Then install Docker:
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce
systemctl start docker && systemctl enable docker
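Before moving on, a quick sanity check that Docker works:

# pulls and runs a test container, removing it when it exits
docker run --rm hello-world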
Now add the Kubernetes repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
Then install kubelet, kubeadm and kubectl on all nodes, and reboot the servers.
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Once they're booted back up, let's start kubelet and enable it at boot:
systemctl start kubelet && systemctl enable kubelet
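Don't be alarmed if the service looks unhealthy at this point; kubelet restarts in a loop until kubeadm provides its configuration later on:

systemctl status kubelet
# an activating/auto-restart loop here is normal until kubeadm init or join runs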
Kubernetes and Docker both need to run under the same cgroup driver. Find out which driver Docker is using by running:
docker info | grep -i cgroup
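On a stock Docker CE install on CentOS 7 this typically prints:

Cgroup Driver: cgroupfs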
Now let's make sure the kubelet's cgroup driver is set to cgroupfs as well:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Now reload systemd and restart the kubelet service:
systemctl daemon-reload
systemctl restart kubelet
We need to make sure ports 6443 and 10250 are open in firewalld (shipped by default with CentOS 7). In my case I prefer to disable firewalld completely, as these machines are not publicly reachable and only machines behind our proxy servers can access them.
service firewalld stop
systemctl disable firewalld
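If you'd rather keep firewalld running, opening the two ports instead should look roughly like this:

# 6443 is the API server, 10250 the kubelet API
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload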
Let's create the cluster by running this command on the master:
kubeadm init --apiserver-advertise-address=172.20.100.120 --pod-network-cidr=10.244.0.0/16
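Once init completes, it prints instructions for giving kubectl access to the new cluster; on the master that comes down to:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

(Or, since we're working as root here, simply export KUBECONFIG=/etc/kubernetes/admin.conf.)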
It also outputs a join command that needs to be run on the nodes for them to join the cluster. I'd say keep it in your notes in case you add more nodes in the future.
kubeadm join 172.20.100.120:6443 --token r5x6ce.bsaypyibs1m169y9 --discovery-token-ca-cert-hash sha256:xxxx
Run the join command with your own token and hash on each node.
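One thing worth noting: the nodes won't report a Ready status until a pod network add-on is deployed. The --pod-network-cidr of 10.244.0.0/16 used above matches Flannel's default, so assuming Flannel is the intended add-on, this (run on the master) installs it:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml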
That's all! You can confirm the nodes have joined by running kubectl get nodes on the master.
[root@kubemaster ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
kube1        Ready    <none>   4d18h   v1.13.1
kube2        Ready    <none>   4d18h   v1.13.1
kubemaster   Ready    master   4d18h   v1.13.1
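You can also check that the control-plane pods came up cleanly:

kubectl get pods -n kube-system
# etcd, kube-apiserver, coredns, kube-proxy and the network add-on should all be Running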