This is a brief guide on how to upgrade Tanzu Workload Management within a vSphere cluster.
Kubernetes Release and Patch Cycles
Kubernetes versions are specified as x.y.z, following Semantic Versioning terminology, where x is the major version, y is the minor version, and z is the patch version. For example, v1.22.6 denotes major version 1, minor version 22, and patch level 6. Minor versions are released approximately every 3-4 months, and several patch releases are published within each minor version.
The Kubernetes project maintains release branches for the three most recent minor versions (1.24, 1.23, 1.22). Since Kubernetes 1.19, each minor version receives patch support for about a year, so keeping the Kubernetes versions in Tanzu up to date is highly recommended.
Step 1 – Update vCenter
This step is not mandatory, but recommended. vCenter updates are often accompanied by new Kubernetes versions. You can see notifications about available updates in the vSphere Client.
K3s is a lightweight, highly available open source Kubernetes cluster platform designed for easy and resource-efficient installation. K3s is provided in a package of less than 60 MB. The package is optimized for ARM platforms and can therefore also be run on hardware such as a Raspberry Pi, or as a guest VM on ESXi-on-ARM.
Prerequisites and information gathering
K3s is a cluster solution, which is why the order in which the nodes are updated is important. The update starts on the master node, so first we need to find out which node holds this role. The easiest way to do this is with a kubectl command:
kubectl get node
NAME                STATUS   ROLES    AGE     VERSION
k3node1.lab.local   Ready    master   2y43d   v1.19.3+k3s3
k3node2.lab.local   Ready    <none>   2y42d   v1.19.3+k3s3
k3node3.lab.local   Ready    <none>   2y42d   v1.19.3+k3s3
The output above shows my three K3s nodes with FQDN, status, role, age and version. Here, k3node1 holds the master role.
As an alternative, you can also execute the command in verbose mode:
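Presumably the wide output format is meant here, which additionally prints each node's internal IP, OS image, kernel version and container runtime:

kubectl get node -o wide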
Added Network Security Policy support for VMs deployed via the VM Operator service – Security policies in NSX-T can be created via security groups based on tags. It is now possible to create an NSX-T based security policy and apply it to VMs deployed through the VM Operator, based on NSX-T tags.
Supervisor Clusters Support Kubernetes 1.22 – This release adds support for Kubernetes 1.22 and drops support for Kubernetes 1.19. The supported versions of Kubernetes in this release are 1.22, 1.21, and 1.20. Supervisor Clusters running on Kubernetes 1.19 will be auto-upgraded to version 1.20 to ensure that all your Supervisor Clusters run on supported versions of Kubernetes.
Check before update
If you upgraded vCenter Server from a version prior to 7.0 Update 3c and your Supervisor Cluster is on Kubernetes 1.19.x, the tkg-controller-manager pods go into a CrashLoopBackOff state, rendering the guest clusters unmanageable.
The image above indicates that we're already on version 1.21, so we are good to proceed with the update.
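If you prefer the CLI over the vSphere Client, a quick check is also possible with the kubectl vsphere plugin; the server address below is a placeholder for your Supervisor Cluster endpoint, and vmware-system-tkg is assumed to be the namespace vSphere with Tanzu uses for the TKG controller:

kubectl vsphere login --server=<supervisor-ip> --vsphere-username administrator@vsphere.local
kubectl get nodes
kubectl get pods -n vmware-system-tkg

The node list shows the Kubernetes version of the Supervisor Control Plane, and the pod list should show the tkg-controller-manager pods in a Running state rather than CrashLoopBackOff.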
Update
Before updating your VCSA, make sure you have a configuration backup! An optional VM snapshot is a good idea too; it can help you revert settings quickly in case something goes wrong.
You can apply the update either from the VAMI or from the shell. The image below shows an overview of the new packages included in this update.
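For the shell variant, a minimal sketch using the software-packages utility in the VCSA appliance shell could look like this, assuming the appliance can reach the default online repository:

software-packages stage --url --acceptEulas
software-packages list --staged
software-packages install --staged

Staging first lets you review the package list before committing to the installation.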
After the update is installed, you will have the option to deploy a new Kubernetes version on your Supervisor Control Plane.
You don't need an enterprise cluster to get an impression of VMware Tanzu and Kubernetes. Thanks to Tanzu Community Edition (TCE), anyone can now try it out for free. The functionality offered is not limited in comparison to the commercial Tanzu editions. The only thing you don't get with TCE is professional support from VMware; support is provided by the community via forums, Slack groups, or GitHub. This is perfectly sufficient for a PoC cluster or for CKA exam training.
Deployment is pretty fast and after a couple of minutes you will have a functional Tanzu cluster.
TCE Architecture
TCE can be deployed in two variants: as a standalone cluster or as a managed cluster.
Standalone Cluster
A fast and resource-efficient deployment without a management cluster, ideal for small tests and demos. The standalone cluster offers no lifecycle management, but it has a small footprint and can also be used in small environments.
Managed Cluster
Like the commercial Tanzu editions, there is a management cluster and 1 to n workload clusters. It comes with lifecycle management and Cluster API, so declarative configuration files can be used to define your Kubernetes clusters: for example, the number of nodes in the management cluster, the number of worker nodes, the version of the Ubuntu image, or the Kubernetes version. Cluster API ensures compliance with the declaration; for example, if a worker node fails, it is replaced automatically.
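To illustrate, a shortened, hypothetical cluster configuration for the Tanzu CLI could look like the following sketch; the variable names follow the Tanzu cluster configuration format, but the values and file name are examples only:

cat <<EOF > tce-workload.yaml
CLUSTER_NAME: tce-workload
CLUSTER_PLAN: dev
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 3
EOF
tanzu cluster create tce-workload --file tce-workload.yaml

Cluster API then reconciles the running cluster against this declaration, for example by recreating a failed worker node to keep WORKER_MACHINE_COUNT at 3.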
By using multiple nodes, the managed cluster of course also requires considerably more resources.
Deployment options
TCE can be deployed either locally on a workstation using Docker, in your own lab/datacenter on vSphere, or in the cloud on Azure or AWS.
I have a licensed Tanzu with vSAN and NSX-T integration up and running in my lab, so TCE on vSphere would not really make sense here. Cloud resources on AWS or Azure are expensive. Therefore, I would like to describe the smallest and most economical deployment possible: a standalone cluster using Docker. To do so, I will use a VM on VMware Workstation. Alternatively, VMware Player or any other hypervisor can be used.
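Assuming Docker is already installed and running inside that VM, the standalone deployment boils down to a few commands; the cluster name tce-test is arbitrary, and the exact flags may differ slightly between TCE releases:

brew install vmware-tanzu/tanzu/tanzu-community-edition
tanzu standalone-cluster create -i docker tce-test
kubectl config get-contexts

The first command installs the TCE CLI via the Homebrew tap, the second bootstraps the standalone cluster on Docker, and the last one shows the kubectl context created for the new cluster.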