Easily Install/Upgrade/Manage a Highly Available Kubernetes Cluster with Kubespray Automation


Setting up a fully functional, highly available Kubernetes cluster has always been a challenging task: it involves understanding how the kubeadm, kubectl, and kubelet commands work, as well as the networking among the pods.

With this article, we can understand how the bottleneck of manually installing and upgrading a Kubernetes cluster can be alleviated with automation whose backbone is typical Ansible configuration management.

Well, Kubespray fits right in!  

Glossary:

  1. What is Kubespray?
  2. Why do we want to use Kubespray?
  3. What are the wheels for Kubespray?
  4. How does Kubespray manage k8s infra?
  5. What are the basics to start with?
  6. Let's get started with steps! (Feel free to jump directly here.)
  7. What do we expect at the end of automation?
  8. How to administer/manage the k8s cluster using kubectl
  9. Upgrade existing k8s cluster
  10. What else can be done?
  11. Resources

 

  1. What is Kubespray?
    • Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks.
    • Kubespray project setup is a one-time job. The same project can be run against multiple sets of VMs (i.e. target hosts that are to become part of a Kubernetes cluster).

# Have a look at https://kubernetes.io/docs/setup/production-environment/tools/kubespray/

 

  2. Why do we want to use Kubespray?
    • Kubespray runs on bare metal and most clouds, using Ansible as its substrate for provisioning and orchestration. Kubespray supports kubeadm for cluster creation, and anyone who wants to create or manage a cluster can offload the generic configuration to it. # https://kubespray.io/#/docs/comparisons
    • Kubespray also supports deployments on different cloud providers. # https://kubespray.io/#/docs/cloud

 

  3. What are the wheels for Kubespray?
  • The minimum version of Kubernetes that can be configured/deployed is v1.17
  • kubeadm
  • Ansible v2.9, Jinja 2.11, and python-netaddr
  • Access to the Internet (to pull container images)
  • Your SSH key must be copied to all target servers

 # https://kubespray.io/#/?id=requirements
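
Before moving on, it may help to sanity-check the tooling on the Ansible host. A quick check could look like the following (section 6.D installs these via requirements.txt, so this is just an optional verification):

samarth@ansible-host:~$ ansible --version        (expect 2.9.x)

samarth@ansible-host:~$ python3 -c "import netaddr, jinja2; print(jinja2.__version__)"        (expect 2.11.x)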

 

  4. How does Kubespray manage k8s infra?
    • Kubespray basically uses Ansible for managing the configuration on target nodes, which is exactly what Ansible is good at. Every time configuration management is run via Ansible commands, a report is generated summarizing the changes that took place on the target hosts.
    • There are many networking add-ons that can provide pod-to-pod networking.
    • By default, Kubespray uses Calico as the networking plugin for pod-to-pod communication; the snippet below shows where that choice lives.
    • Calico also provides network policy enforcement and gains better performance and connectivity over Weave and Flannel.
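
For example, the networking plugin is selected via the kube_network_plugin variable in the cluster configuration (file path as created later in section 6.E; calico is the default, and the list of alternatives depends on the Kubespray release):

# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

kube_network_plugin: calico        # alternatives include flannel, weave, cilium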

 

  5. What are the basics to start with?
  • Ansible commands (just the ones mentioned in the official docs to get started and keep running; see the sample ad-hoc command after this list).
  • SSH keys: generate and manage them.
  • Python modules which are installed as part of the requirements.
  • YAML syntax and indentation.
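
As a minimal sketch of the kind of Ansible command you will rely on, an ad-hoc ping against the inventory built later in section 6.F verifies that Ansible can reach all target hosts:

samarth@ansible-host:~/kubespray$ ansible -i inventory/mycluster/hosts.yaml all -m ping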

 

     6. Let's get started with steps!

     6.A. Prepare the VMs (Ansible host and Target Hosts)

- We are going to use 1 VM for project setup (referred to as the Ansible host, samarth@ansible-host) and 2 VMs for the Kubernetes cluster (referred to as target hosts, namely k8suser@k8s-node1 and k8suser@k8s-node2). All of the machines in this article run Ubuntu 20.04 LTS.

- Enable passwordless login from the Ansible host to the target hosts.

- Generate an RSA key on the Ansible host with the command:

samarth@ansible-host:~$ ssh-keygen -t rsa

- Copy the public key to the target machines:

samarth@ansible-host:~$ ssh-copy-id k8suser@k8s-node1

samarth@ansible-host:~$ ssh-copy-id k8suser@k8s-node2
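
To confirm passwordless login works before handing the hosts over to Ansible, a quick test like the one below should print the remote hostname without prompting for a password:

samarth@ansible-host:~$ ssh k8suser@k8s-node1 hostname

o/p: k8s-node1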

  6.B. Need to execute some commands as root

- The project's Ansible setup, commands like kubeadm, and a few docker commands need special user permissions/elevated privileges/particular groups on Linux machines. For this, edit the /etc/sudoers file on each machine (Ansible host and target nodes) and add the lines below the %sudo group entry.

(Adding these lines just below the %sudo group entry is mandatory.)

On the Ansible host: samarth    ALL=(ALL) NOPASSWD:ALL

On the target hosts: k8suser    ALL=(ALL) NOPASSWD:ALL

Sample:

[Screenshot: /etc/sudoers file on the Ansible host]
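
A safe way to make this change is via visudo, which validates the syntax before saving. For example, on a target host:

k8suser@k8s-node1:~$ sudo visudo

(then add k8suser    ALL=(ALL) NOPASSWD:ALL on the line just below the %sudo entry and save)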

 

 

 

6.C. Proceed with installation

On Ansible Host:

  • Install pip for Python 3 and make sure it is the version being used.

samarth@ansible-host:~$ sudo apt install python3-pip curl git

samarth@ansible-host:~$ sudo pip3 install --upgrade pip

samarth@ansible-host:~$ pip --version (confirm the version)

o/p: pip 20.2.3 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)

  • Clone the repository

samarth@ansible-host:~$ git clone https://github.com/kubernetes-sigs/kubespray.git

                  

6.D. Install the requirements of the project.

The requirements file is always inside the cloned project.

samarth@ansible-host:~/kubespray$ pip install -r requirements.txt

The versions may vary based on Kubespray's further development by the Kubernetes community on GitHub:

ansible==2.9.6

jinja2==2.11.1

netaddr==0.7.19

pbr==5.4.4

jmespath==0.9.5

ruamel.yaml==0.16.10

 

6.E. Prepare configuration template for new k8s cluster

- Consider inventory/sample in the kubespray project as a template. Make a copy of it in the same project under a new name, preserving the access rights on each file, with the command below.

samarth@ansible-host:~/kubespray$ cp -rpf inventory/sample inventory/mycluster

 

[Screenshot: inventory/mycluster directory listing]

 

6.F. Create host inventory

  • Create a bash array with the IPs of the nodes you plan to use for the k8s master(s) and worker(s). The IPs mentioned in the command belong to k8s-node1 and k8s-node2, respectively.

samarth@ansible-host:~/kubespray$ declare -a IPS=(10.10.10.2 10.10.10.3)

  • Create a hosts.yaml with the nodes and IPs. (A sample of an HA k8s cluster's hosts.yaml is attached.)

samarth@ansible-host:~/kubespray$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

The contents of the inventory/mycluster/hosts.yaml file can be edited as required before running the Kubespray automation. This hosts file is analogous to a typical Ansible host inventory. Below is a sample of the hosts.yaml file after running the command.
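
Since the attached sample is not reproduced here, the generated file for two nodes may look roughly like the sketch below; treat it as illustrative, because group names and layout vary between Kubespray releases:

all:
  hosts:
    node1:
      ansible_host: 10.10.10.2
      ip: 10.10.10.2
      access_ip: 10.10.10.2
    node2:
      ansible_host: 10.10.10.3
      ip: 10.10.10.3
      access_ip: 10.10.10.3
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
        node2:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

(For a production HA setup, the etcd group should contain an odd number of members.)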

  • Check the file inventory/mycluster/hosts.yaml for the mentioned IP(s) and the roles they are about to take within the k8s cluster.
  • [OPTIONAL] If you want to customize the k8s infra and its properties, please check out the following files of this project (an illustrative snippet follows this list):

a) inventory/mycluster/group_vars/all/all.yml (<- manage etcd and the internal load balancer)

b) inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (<- manage the properties: k8s server version, drivers, networking, etc.)

c) There are many more add-ons/properties which can be viewed/modified under the inventory/mycluster/group_vars/[all|k8s-cluster]/ path.
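
For instance, pinning the Kubernetes server version or the cluster name is done in k8s-cluster.yml. An illustrative snippet (variable names as of the Kubespray releases around this writing; the values are examples):

# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

kube_version: v1.19.2          # Kubernetes server version to deploy

cluster_name: cluster.local    # DNS domain of the cluster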

  

   6.G. Trigger the project from Ansible Host

- Now the cluster.yml playbook can be used for k8s cluster provisioning. Use the command below to start the automation.

samarth@ansible-host:~/kubespray$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --user=k8suser --become-user=root cluster.yml

**If your Ansible host and target hosts share the same user name, the '--user' flag becomes optional.

 

  7. What do we expect at the end of automation?
    • The automation takes ~18 minutes to complete, at the end of which we get a fully provisioned k8s cluster infrastructure.
    • At the very end, a recap of the procedure gets printed on the Ansible host.
    • All of the target nodes that were mentioned get a new hostname as per the project standards (i.e. node1, node2, etc.).
    • The project does NOT change anything on the Ansible host. The same project can be used against another set of target hosts/VMs; we just need to create inventory/mycluster/hosts.yaml again (follow the commands from section 6.F).
    • The corresponding host mapping can be found in /etc/hosts on the target hosts, as illustrated below.
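
As an illustration of that host mapping, /etc/hosts on a target node ends up with entries along these lines (addresses from this article's setup; the exact format may differ by Kubespray release):

10.10.10.2 node1.cluster.local node1

10.10.10.3 node2.cluster.local node2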

 

  8. How to administer/manage the k8s cluster using kubectl
    • Copy /etc/kubernetes/admin.conf from a master to the user's home at the ~/.kube/ location, rename the copied file admin.conf to config, and make the file readable (+r).
    • This step is described in detail in 8.A (follow 8.A if the Kubernetes cluster is to be managed from one of the masters) and 8.B (follow 8.B if the Kubernetes cluster is to be managed from outside of the cluster), based on how the admin wants to manage the cluster.

       8.A. Within the k8s cluster master node

        In this section, you'd need to execute a few commands on a target host, i.e. a Kubernetes master node.

**By default, the kubectl command looks for the .kube/config file in the current user's home to work with the k8s cluster details.

k8suser@node1:~$ mkdir .kube && sudo cp /etc/kubernetes/admin.conf /home/k8suser/.kube/config

k8suser@node1:~$ sudo chmod +r /home/k8suser/.kube/config

(You can choose the read permission based on user/group/others.)

     8.B. From outside of the k8s cluster.

    Download the stable kubectl binary as any user (samarth@ansible-host in this case) with:

samarth@ansible-host:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

(To install a specific version, replace the nested curl request with that version.)

samarth@ansible-host:~$ chmod +x ./kubectl

samarth@ansible-host:~$ sudo mv ./kubectl /usr/local/bin/kubectl

samarth@ansible-host:~$ mkdir .kube && sudo scp k8suser@node1:/etc/kubernetes/admin.conf /home/samarth/.kube/config

samarth@ansible-host:~$ sudo chmod +r /home/samarth/.kube/config

(node1 denotes a master node, i.e. k8s-node1 in this case; the hostname k8s-node1 gets renamed to node1 after running the automation.)

 

After step 8, verify that the config file exists at the ~/.kube/ path (~ signifies the user's home).

Execute the kubectl commands below to verify the k8s infra is working correctly.

$ kubectl version

[Screenshot: output of kubectl version]

$ kubectl get nodes -o wide

[Screenshot: output of kubectl get nodes -o wide]

$ kubectl get all --all-namespaces -o wide

[Screenshot: output of kubectl get all --all-namespaces -o wide]
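
Since the screenshots are not reproduced here, the output of kubectl get nodes -o wide on this two-node setup should look roughly like the following (versions, ages, and addresses are illustrative):

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP

node1   Ready    master   15m   v1.19.2   10.10.10.2

node2   Ready    master   15m   v1.19.2   10.10.10.3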

 

 

(If the kubectl command is not recognized, please follow https://kubernetes.io/docs/tasks/tools/install-kubectl/ )

*Note: After a successful installation and kubectl configuration, we can install Helm with any package manager that suits the environment (https://helm.sh/docs/intro/install/);

on Ubuntu, e.g. using the command: sudo snap install helm --classic
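
If Helm was installed this way, a quick check confirms the client is on the path:

samarth@ansible-host:~$ helm version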

  9. Upgrade existing k8s cluster.

- Upgrade the existing k8s cluster using Kubespray.

Clone the project anew or pull the latest changes from git before executing the command below, because the binaries referenced by the playbooks are bumped to new versions to match the current Kubernetes release.

  • Prepare the host inventory: follow the commands mentioned in section 6.F.
  • Execute the command below to upgrade the Kubernetes cluster (server version).

samarth@ansible-host:~/kubespray-new$ ansible-playbook upgrade-cluster.yml --become --user=k8suser --become-user=root -i inventory/mycluster/hosts.yaml -e kube_version=v1.20.2

Mentioning kube_version is mandatory for a successful run.

Expect this upgrade process to take ~20-25 minutes to complete.

Output:

PLAY RECAP *************************************************************************************************************************************************************************************************

localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

node1                      : ok=622  changed=70   unreachable=0    failed=0    skipped=1311 rescued=0    ignored=1

node2                      : ok=481  changed=56   unreachable=0    failed=0    skipped=1003 rescued=0    ignored=1

 

** Now check that the NAM resources you had earlier (i.e. before upgrading the Kubernetes cluster) are intact.
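
To confirm the upgrade landed, the node list should now report the requested version (illustrative output):

samarth@ansible-host:~$ kubectl get nodes

NAME    STATUS   ROLES    AGE   VERSION

node1   Ready    master   2d    v1.20.2

node2   Ready    master   2d    v1.20.2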

 

  10. What else can be done?

Consider the playbook YAMLs provided in the project root (see the example commands after the link below).

https://kubespray.io/#/docs/integration
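
For example, the project root also ships playbooks for day-2 operations; with the same inventory they are invoked like cluster.yml in section 6.G (flags shortened here for brevity):

# add the new worker nodes listed in the inventory

samarth@ansible-host:~/kubespray$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --user=k8suser scale.yml

# tear the whole cluster down

samarth@ansible-host:~/kubespray$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --user=k8suser reset.yml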

 

  11. Resources

  • https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
  • https://kubespray.io/#/?id=requirements
  • https://kubespray.io/#/docs/comparisons
  • https://kubespray.io/#/docs/cloud
  • https://kubespray.io/#/docs/integration
  • https://github.com/kubernetes-sigs/kubespray
  • https://kubernetes.io/docs/tasks/tools/install-kubectl/
  • https://helm.sh/docs/intro/install/
