Using ESXi on Arm as a tiny Kubernetes cluster

ESXi on Intel x86 has been a commodity for many years now. In recent years, at VMworld for example, we've seen early alpha versions of ESXi running on Arm hardware such as SmartNICs or even a Raspberry Pi. Meanwhile VMware developers have published a Fling named ESXi Arm Edition to deploy ESXi on Arm. Of course this is a lab project and not supported by VMware for production workloads. But it's still a great opportunity to play around with ESXi on a cheap and tiny computer like the Raspberry Pi. I will not explain how to deploy ESXi on Arm; check the detailed documentation on the Fling project page (PDF). Instead, I will focus on day-2 operations.

I would like to thank William Lam for providing a lot of background information, hacks and tricks around Photon OS and ESXi on Arm.

Now I’ve got an ESXi host on my Raspi. What can I do with it?

Just a few remarks before we start:

You can't run just any workload on the ESXi on Arm platform. As the project name says, it's Arm architecture, so you can't run operating systems built for Intel x86. All guest VMs need to be built for Arm. That rules out Windows guest systems and also most Linux distributions. Luckily there are a couple of Linux distributions built specifically for Arm, such as Ubuntu Server for Arm or Photon OS. For my demonstration I chose the latest Photon OS (version 4 beta). As host hardware I'm using the "big" Raspberry Pi 4 with 8 GB RAM. As you can imagine, 8 GB of RAM isn't very much for host OS and guest VMs, so we have to use resources sparingly.

Our aim is to deploy a 3-node Kubernetes cluster on an ESXi on Arm host on a Raspberry Pi with just 8 GB RAM and 4 cores. Sounds crazy, but it's possible, thanks to K3s, a lightweight Kubernetes distribution that runs on Arm.
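To give you an idea of what the K3s rollout looks like later on: the cluster is bootstrapped with the official get.k3s.io installer, once on the server node and once on each agent. A rough sketch follows; the server IP is a placeholder for my lab network, and the install commands (shown as comments) have to be run as root inside the Photon OS VMs, not on your workstation.

```shell
# Placeholder address of the first Photon OS VM (the K3s server / control plane).
SERVER_IP="192.168.1.50"
# API endpoint the two agent nodes will join.
K3S_URL="https://${SERVER_IP}:6443"

# On the server VM (as root):
#   curl -sfL https://get.k3s.io | sh -
# Read the join token on the server afterwards:
#   cat /var/lib/rancher/k3s/server/node-token

# On each of the two agent VMs (as root, insert the real token):
#   curl -sfL https://get.k3s.io | K3S_URL="$K3S_URL" K3S_TOKEN="<node-token>" sh -

echo "$K3S_URL"
```

K3s bundles the control plane into a single binary, which is exactly what keeps the per-node memory footprint small enough for this setup.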

Hardware used

  • Raspberry Pi 4, Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz
  • Heat sink for the Raspberry Pi 4 (your Raspi will get hot without one)
  • SD-card (only for UEFI BIOS)
  • USB stick for ESXi installation
  • USB 3 hub with external power supply (the Raspi doesn’t provide reliable power on its USB ports for an NVMe SSD)
  • USB 3 NVMe M.2 case
  • Samsung NVMe EvoPlus 250 GB M.2

Using ESXi on Arm in standalone mode

Although I have joined my ESXi on Raspi host to my vCenter 7, I will not use any vCenter features. All work is done as on a standalone ESXi host (with all the shortcomings and limitations that implies).

First we need 3 VMs for the 3 K3s nodes. It’s a good idea to build a VM with everything we need except K3s and then clone it. Well, if you think cloning a VM on a standalone ESXi on Arm host is just a mouse click in the UI, then welcome to the real world. 😉 I will come to that point later. Let’s build our first Photon OS VM.
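As a teaser for the cloning topic: on a standalone host the usual approach is to SSH into ESXi, thin-clone the template's disk with vmkfstools, copy and adapt the .vmx file, and register the result. The following is only a sketch; the datastore path and VM names are examples from my lab, and the actual host-side commands are shown as comments because they have to run on the ESXi host itself.

```shell
# Example names; adjust datastore path, template and clone name to your setup.
DS="/vmfs/volumes/nvme-ssd"
SRC="photon-template"
DST="k3s-node1"
VMX="$DS/$DST/$DST.vmx"

# On the ESXi host (via SSH):
#   mkdir -p "$DS/$DST"
#   vmkfstools -i "$DS/$SRC/$SRC.vmdk" -d thin "$DS/$DST/$DST.vmdk"   # thin-clone the disk
#   cp "$DS/$SRC/$SRC.vmx" "$VMX"                                     # copy the VM config
#   sed -i "s/$SRC/$DST/g" "$VMX"                                     # rename references inside it
#   vim-cmd solo/registervm "$VMX"                                    # register the clone

echo "$VMX"
```

When you power on the clone, ESXi will ask whether you moved or copied the VM; answer "I copied it" so it generates a new UUID and MAC address.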


vSphere with Kubernetes

What’s new in v7U1?

VMware will release vSphere 7 Update 1 shortly. Once Update 1 is released, users will be able to run Kubernetes workloads natively on vSphere. So far that was only possible for installations with VMware Cloud Foundation 4 (VCF). Beginning with Update 1 there will be two kinds of Kubernetes on vSphere:

  • VCF with Tanzu
  • vSphere with Tanzu

VCF offers the full stack but constrains some of your choices. For example, VCF requires vSAN as storage and NSX-T for networking. NSX-T provides load balancer functionality for the supervisor cluster and Tanzu Kubernetes Grid (TKG). Additionally it provides overlay networks for PodVMs, which are container pods that run directly on the hypervisor by means of a micro-VM.

In contrast to VCF with Tanzu, vSphere with Tanzu has fewer constraints. There's no requirement to use vSAN as the storage layer, and NSX-T is optional: networking can be done with ordinary vSphere Distributed Switches (vDS), and HAProxy can serve as load balancer for the supervisor control plane API and the TKG cluster API. The downside of this freedom is reduced functionality. Without NSX-T it is not possible to run PodVMs, and without PodVMs you cannot use the Harbor image registry, which relies on them. In other words: if you want to use the Harbor image registry together with vSphere with Tanzu, you have to deploy NSX-T.
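To make the HAProxy role more concrete: in vSphere with Tanzu the appliance is deployed from an OVA and configured automatically, but conceptually it just balances TCP 6443 across the supervisor control plane nodes. A hand-written equivalent would look roughly like the fragment below; the backend IP addresses are placeholders, not values the product prescribes.

```
# Illustrative HAProxy fragment (addresses are placeholders):
frontend supervisor-api
    bind *:6443
    mode tcp
    default_backend supervisor-nodes

backend supervisor-nodes
    mode tcp
    balance roundrobin
    server sv1 192.168.10.11:6443 check
    server sv2 192.168.10.12:6443 check
    server sv3 192.168.10.13:6443 check
```

TKG cluster APIs get analogous frontend/backend pairs from the virtual IP range you hand to the appliance during deployment.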

Feature          | VCF with Tanzu | vSphere with Tanzu
NSX-T            | required       | optional (vDS)
vSAN             | required       | optional
PodVMs           | yes            | only with NSX-T
Harbor Registry  | yes            | only with PodVMs (NSX-T)
Load balancer    | NSX-T          | HAProxy
CNI              | Calico         | Antrea or Calico
Overlay networks | NSX-T          | –

Tanzu Editions

In the future there will be four Tanzu editions:

  • Tanzu Basic – Run basic Kubernetes clusters in vSphere. Available as a license bundle together with vSphere 7 Enterprise Plus.
  • Tanzu Standard – Same as Tanzu Basic but with multi-cloud support. Add-on license for vSphere 7 or VCF.
  • Tanzu Advanced – Available later.
  • Tanzu Enterprise – Available later.

Links

vSphere Blog – What’s New with VMware vSphere 7 Update 1

vSphere Blog – Announcing VMware vSphere with Tanzu

Cormac Hogan – Getting started with vSphere with Tanzu

VMware Tanzu – Simplify Your Approach to Application Modernization with 4 Simple Editions for the Tanzu Portfolio

Announcement of VMware Cloud Foundation 4.1

Together with vSphere 7 and vSAN 7, VMware Cloud Foundation (VCF) 4.0 with Tanzu was released in March this year. Now VMware has announced VMware Cloud Foundation 4.1, along with vSphere 7.0 Update 1 and vSAN 7 U1, which builds on some of the new vSAN features.

What’s new?

  • vSAN Data Persistence Platform – This is an important feature for vendors of virtual appliances and container solutions that run on vSAN or VCF. Not all container workloads are stateless; some, like object storage or NoSQL databases, are stateful applications. Until now, separate application-level replication mechanisms were necessary. With the vSAN Data Persistence Platform, providers can directly use vSAN's high availability. The first providers include MinIO, DataStax, Dell EMC ObjectScale and Cloudian.
  • VMware Cloud Foundation Remote Clusters – A feature based on vSAN HCI Mesh. With this feature, vSAN datastores of other clusters can be mounted remotely, which is especially interesting for remote locations.
  • vVols in workload domains – Now you can deploy workload domains on vVol enabled storage. Supported protocols are FC, iSCSI and NFS.
  • Automatic deployment of vRealize Suite 8.1 – vRealize Suite Life Cycle Manager (vRSLCM) now integrates with SDDC manager. You can deploy and update vRealize products from vRSLCM.
  • New features and bugfixes in SDDC manager.
  • VMware Skyline support for VCF.

The update is expected in early October, during or shortly after VMworld 2020.