Setting up a fully functional, highly available Kubernetes cluster has always been a challenging task. It involves understanding how the kubeadm, kubectl, and kubelet commands work, as well as networking among the pods.
With this article, we can understand how the bottleneck of manual installation/upgrade of a Kubernetes cluster can be alleviated with automation whose backbone is typical Ansible configuration management.
Well, Kubespray fits right in!
Glossary:
- Have a look at https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
- https://kubespray.io/#/?id=requirements
6. Let's get started with the steps!
6.A. Prepare the VMs (Ansible host and Target Hosts)
- We are going to use one VM for the project setup (referred to as the Ansible host, samarth@ansible-host) and two VMs for the Kubernetes cluster (referred to as the target hosts, namely k8suser@k8s-node1 and k8suser@k8s-node2). All of the machines in this article run Ubuntu 20.04 LTS.
- Enable passwordless login from the Ansible host to the target hosts.
- Generate an RSA key pair on the Ansible host with the command:
samarth@ansible-host:~$ ssh-keygen -t rsa
- Copy the public key to the target machines:
samarth@ansible-host:~$ ssh-copy-id k8suser@k8s-node1
samarth@ansible-host:~$ ssh-copy-id k8suser@k8s-node2
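A quick check confirms the setup; logging in should no longer prompt for a password (hostnames as assumed above):
samarth@ansible-host:~$ ssh k8suser@k8s-node1 hostname
k8s-node1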
6.B. Need to execute some commands as root
- The project's Ansible setup, commands like kubeadm, and a few docker commands need elevated privileges (or membership in particular groups) on the Linux machines. For this, edit the /etc/sudoers file on each machine (Ansible host and target nodes) and add the lines below the %sudo group entry.
(Adding these lines just below the %sudo group entry is mandatory.)
On Ansible Host: samarth ALL=(ALL) NOPASSWD:ALL
On Target Hosts: k8suser ALL=(ALL) NOPASSWD:ALL
Sample (on the Ansible host):
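The relevant /etc/sudoers fragment might look like this (illustrative; the %sudo line already exists, only the samarth line is newly added below it):
%sudo   ALL=(ALL:ALL) ALL
samarth ALL=(ALL) NOPASSWD:ALL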
6.C. Proceed with installation
On Ansible Host:
samarth@ansible-host:~$ sudo apt install python3-pip curl git
samarth@ansible-host:~$ sudo pip3 install --upgrade pip
samarth@ansible-host:~$ pip --version (confirm the version)
Output: pip 20.2.3 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)
samarth@ansible-host:~$ git clone https://github.com/kubernetes-sigs/kubespray.git
6.D. Install the requirements of the project.
The requirements file is inside the root of the cloned project.
samarth@ansible-host:~/kubespray$ pip install -r requirements.txt
The versions may vary as Kubespray is developed further by the Kubernetes community on GitHub:
ansible==2.9.6
jinja2==2.11.1
netaddr==0.7.19
pbr==5.4.4
jmespath==0.9.5
ruamel.yaml==0.16.10
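Once the requirements are installed, it is worth confirming that the pinned Ansible version is the one picked up on the PATH (output illustrative):
samarth@ansible-host:~/kubespray$ ansible --version
ansible 2.9.6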
6.E. Prepare configuration template for new k8s cluster
- Consider the inventory/sample directory of the kubespray project as a template. Make a copy of it in the same project under a new name, preserving the access rights on each file, with the command below.
samarth@ansible-host:~/kubespray$ cp -rpf inventory/sample inventory/mycluster
6.F. Create host inventory
samarth@ansible-host:~/kubespray$ declare -a IPS=(10.10.10.2 10.10.10.3)
samarth@ansible-host:~/kubespray$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
The contents of the inventory/mycluster/hosts.yaml file can be edited as per requirement before running the Kubespray automation. This host file is analogous to a typical Ansible host inventory. Below is a sample hosts.yaml produced by the command.
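(Exact group names and membership vary slightly across Kubespray releases and node counts; this sketch follows the layout inventory_builder generated for two nodes around that time.)
all:
  hosts:
    node1:
      ansible_host: 10.10.10.2
      ip: 10.10.10.2
      access_ip: 10.10.10.2
    node2:
      ansible_host: 10.10.10.3
      ip: 10.10.10.3
      access_ip: 10.10.10.3
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}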
- Apart from hosts.yaml, review the following configuration files:
a). inventory/mycluster/group_vars/all/all.yml (<- manage etcd and the internal load balancer)
b). inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (<- manage properties such as the k8s server version, drivers, networking, etc.)
c). There are many more add-ons/properties available, which can be viewed/modified under the inventory/mycluster/group_vars/[all|k8s-cluster]/ path.
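For example, a few settings commonly adjusted in inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (values shown are illustrative defaults of that era, not recommendations):
kube_version: v1.19.7
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
container_manager: docker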
6.G. Trigger the project from Ansible Host
- Now the cluster.yml playbook can be used for k8s cluster provisioning. Use the command below to start the automation.
samarth@ansible-host:~/kubespray$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --user=k8suser --become-user=root cluster.yml
** If your Ansible host and target hosts share the same user name, the '--user' flag becomes optional.
8.A. Within the k8s cluster master node
In this section, you'd need to execute a few commands on a target host, i.e. the Kubernetes master node.
**The kubectl command, by default, looks for the .kube/config file in the user's home directory for the k8s cluster details.
k8suser@node1:~$ mkdir .kube && sudo cp /etc/kubernetes/admin.conf /home/k8suser/.kube/config
k8suser@node1:~$ sudo chmod +r /home/k8suser/.kube/config
(The copied file is root-owned, hence the sudo; you can scope the read permission to user, group, or others as you prefer.)
8.B. From outside of the k8s cluster.
Download the stable kubectl binary as any user (samarth@ansible-host in this case) with:
samarth@ansible-host:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
(To install a specific version, replace the nested curl request with that version string.)
samarth@ansible-host:~$ chmod +x ./kubectl
samarth@ansible-host:~$ sudo mv ./kubectl /usr/local/bin/kubectl
samarth@ansible-host:~$ mkdir .kube && sudo scp k8suser@node1:/etc/kubernetes/admin.conf /home/samarth/.kube/config
samarth@ansible-host:~$ sudo chmod +r /home/samarth/.kube/config
(node1 denotes the master node, i.e. k8s-node1 in this case; the automation renames the host k8s-node1 to node1, per the inventory.)
After step 8, verify that the config file exists at the ~/.kube/ path (~ signifies the user's home directory).
Execute the kubectl commands below to verify the k8s infra is working correctly.
$kubectl version
$kubectl get nodes -o wide
$kubectl get all --all-namespaces -o wide
(If kubectl command is not recognized please follow https://kubernetes.io/docs/tasks/tools/install-kubectl/ )
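For the two-node cluster built above, a healthy install looks roughly like this (ages, roles, and the exact version will differ on your setup):
$kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   15m   v1.19.7
node2   Ready    <none>   14m   v1.19.7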
*Note: After a successful installation and kubectl configuration, we can install Helm with any package manager that suits the environment (https://helm.sh/docs/intro/install/);
on Ubuntu, use the command: sudo snap install helm --classic
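A quick check that the helm binary landed correctly (output illustrative):
samarth@ansible-host:~$ helm version
version.BuildInfo{Version:"v3.5.2", GoVersion:"go1.15.7"}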
- Upgrade the existing k8s cluster using Kubespray.
Clone the project afresh or pull the latest changes from git before executing the command below, so that the component versions pinned in the project match the target Kubernetes version.
samarth@ansible-host:~/kubespray-new$ ansible-playbook upgrade-cluster.yml --become --user=k8suser --become-user=root -i inventory/mycluster/hosts.yaml -e kube_version=v1.20.2
Mentioning kube_version is mandatory for a successful run.
Expect the upgrade process to take roughly 20-25 minutes.
Output:
PLAY RECAP *************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1 : ok=622 changed=70 unreachable=0 failed=0 skipped=1311 rescued=0 ignored=1
node2 : ok=481 changed=56 unreachable=0 failed=0 skipped=1003 rescued=0 ignored=1
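Once the recap shows failed=0 everywhere, the nodes should report the new version (illustrative, per the v1.20.2 upgrade above):
samarth@ansible-host:~$ kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
node1   Ready    control-plane,master   2d    v1.20.2
node2   Ready    <none>                 2d    v1.20.2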
** Now check the NAM resources you had deployed earlier (i.e. before upgrading the Kubernetes cluster).
Consider the YAMLs provided in the project root.
https://kubespray.io/#/docs/integration