On October 20th, 2020, VMware released NSX-T version 3.1 (release notes).
Upgrade from version 3.0
I’ll outline the process of upgrading from version 3.0.x to version 3.1. In the example shown, a base version of 3.0.2 is upgraded, but the process is the same for all 3.0.x versions.
Requirements
We’ll need an upgrade bundle (MUB) from the VMware download site (login required).
Upgrade
First we need to log in to NSX-T Manager. Go to the Lifecycle Management section and select Upgrade. You’ll see your current version on the right. Start the process with Upgrade NSX.
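Before uploading the bundle to the manager, it’s worth checking that the download isn’t corrupted. A minimal sketch from any Linux shell (the file name is a placeholder; compare the output against the checksum published on the download page):

# Verify the integrity of the downloaded upgrade bundle (file name is an example)
sha256sum VMware-NSX-upgrade-bundle-3.1.x.mub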
ESXi on Intel x86 architecture has been a commodity for many years now. In recent years, at VMworld for example, we’ve seen early alpha versions of ESXi running on Arm hardware such as smart NICs or even a Raspberry Pi. Meanwhile VMware developers have published a Fling named ESXi Arm Edition to deploy ESXi on Arm architecture. Of course this is a lab project and not supported by VMware for production workloads. But it’s still a great opportunity to play around with ESXi on a cheap and tiny computer like the Raspberry Pi. I will not explain how to deploy ESXi on Arm; check the detailed documentation on the Fling project page (PDF). I will focus on day-2 operations.
I would like to thank William Lam for providing a lot of background information, hacks and tricks around Photon OS and ESXi on Arm.
Now I’ve got an ESXi host on my Raspi. What can I do with it?
Just a few remarks before we start:
You can’t run just any workload on the ESXi on Arm platform. As the project name says, it’s an Arm architecture, so you can’t run operating systems built for Intel x86. All guest VMs need to be made for the Arm architecture. That rules out Windows guest systems and also most Linux distributions. But luckily there are a couple of Linux distributions made specifically for Arm, like Ubuntu Server for Arm or Photon OS. For my demonstration I chose the latest Photon OS (version 4 beta). As host hardware I’m using the “big” Raspberry Pi 4 with 8 GB RAM. You can imagine that 8 GB of RAM isn’t very much for the host OS and guest VMs, so we have to use resources sparingly.
Heat sink for Raspberry Pi 4 (your Raspi will get hot without one)
SD card (only for the UEFI firmware)
USB stick for ESXi installation
USB 3 hub with external power supply (the Raspi’s USB ports don’t provide reliable power for an NVMe SSD)
USB 3 NVMe M.2 case
Samsung NVMe EvoPlus 250 GB M.2
Using ESXi on Arm in standalone mode
Although I have joined my ESXi on Raspi to my vCenter 7, I will not use any vCenter features. Everything is done as on a standalone ESXi host (with all its shortcomings and limitations).
First we need 3 VMs for the 3 K3s nodes. It’s a good idea to build a VM with everything we need except K3s and then clone it. Well, if you think cloning a VM on a standalone ESXi on Arm host is just a mouse click in the UI, welcome to the real world. 😉 I will come back to that point later. Let’s build our first Photon OS VM.
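As a teaser for the cloning part that comes later: without vCenter, a clone boils down to a few commands in the ESXi shell. This is only a minimal sketch, assuming the source VM is powered off; datastore and VM names are placeholders.

# Create a folder for the clone on the datastore (names are examples)
mkdir /vmfs/volumes/datastore1/photon-02

# Clone the virtual disk with vmkfstools, keeping the original file name, as a thin copy
vmkfstools -i /vmfs/volumes/datastore1/photon-01/photon-01.vmdk /vmfs/volumes/datastore1/photon-02/photon-01.vmdk -d thin

# Copy the VM configuration; it references the disk by its relative name, so it stays valid
cp /vmfs/volumes/datastore1/photon-01/photon-01.vmx /vmfs/volumes/datastore1/photon-02/

# Register the copy as a new VM and answer "I copied it" on first power-on
vim-cmd solo/registervm /vmfs/volumes/datastore1/photon-02/photon-01.vmx photon-02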
VMware vSphere and other products from the VMware ecosystem rely heavily on DNS resolution. Name resolution is crucial in the virtual world, and there’s a rule amongst troubleshooters:
“If you’ve ruled out DNS as the origin of your problem – check DNS again.”
In the corporate sector there are usually DNS servers of various types: either hardware appliances with DNS functionality, or entire Microsoft Active Directory servers. However, if you want to set up a homelab, your home office usually has only a small DSL router with (poor) DHCP server functionality. It is possible to run DNS servers or whole AD domain controllers inside a VM, but then we have a chicken-and-egg problem: the VM can only start once the cluster and vCenter are online, and until then wild things can happen in a vSphere cluster without DNS. So we are looking for a small, energy-saving, inexpensive and configurable hardware solution as a DNS server for our homelab. Sounds like a Swiss Army knife, but it can easily be realized with a Raspberry Pi.
In this article I will explain what you need to build your DNS server and how to configure a subnet for the lab.
Raspi as DNS server
For this project we don’t need the latest Raspberry Pi model. A Raspi 3b is fine for this purpose, and the accessories are also available at low prices.
Raspberry Pi 3b+ / 1 GB / 4-core / 1.4 GHz: 35 €
Micro SD card 32 GB: 9 €
Case (optional): 8 €
Power supply 2.5 A (optional): 10 €
HDMI cable: 5 €
Parts list with average prices as of June 2020
For well under 100 € you’ll get a tiny server which can also take on other tasks like home automation or run the ad-blocker Pi-hole.
There are a few things to consider. In principle you can power the Raspi via USB, but you have to make sure that the source delivers at least 1.2 A reliably. Power supplies with 2.5 A are recommended. My first boot attempts failed because my USB power supply did not provide enough power.
The Raspi requires a micro SD card as permanent boot and storage medium. You shouldn’t go for the cheapest product here, but for less than 10 € you can get 32 GB from a trustworthy brand.
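To give an idea of what the software side will look like: a lightweight DNS/DHCP service such as dnsmasq is enough for a lab subnet. This is only a sketch of one possible setup, assuming Raspberry Pi OS and dnsmasq; the domain, host names and addresses are placeholders.

# Install dnsmasq (assuming a Debian-based Raspberry Pi OS)
sudo apt install dnsmasq

# Minimal lab configuration: local domain plus static A/PTR records (all values are examples)
cat <<'EOF' | sudo tee /etc/dnsmasq.d/lab.conf
domain=lab.local
local=/lab.local/
host-record=vcenter.lab.local,192.168.100.10
host-record=esx01.lab.local,192.168.100.11
EOF

sudo systemctl restart dnsmasq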
After a failed firmware update on my Intel X722 NICs, one host came up without its 10 Gbit kernel ports (vSAN network). Every recovery attempt failed and I had to send my “bricked” host in to Supermicro. Normally this shouldn’t be a big issue in a 4-node cluster. But the fact that the management interfaces were up while the vSAN interfaces were not must have caused some “disturbance” in the cluster, and all my VM objects were marked as “invalid” on the 3 remaining hosts.
I was busy with projects and didn’t have much lab time anyway, so I waited for the 4th host to be repaired. Last week it finally arrived and I immediately assembled boot media, cache and capacity disks. I checked the MAC addresses and settings on the repaired host and everything looked good. But after booting the reunited cluster, all objects were still marked invalid.
Time for troubleshooting
First I opened SSH shells to each host. There’s a quick PowerCLI one-liner to enable SSH throughout the cluster. Too bad I didn’t have a functional vCenter at that time, so I had to activate SSH on each host with the host client.
From the shell of the repaired host I checked the vSAN network connection to all other vSAN kernel ports. The command below pings from interface vmk1 (vSAN) to IP 10.0.100.11 (the vSAN kernel port of esx01, for example):
vmkping -I vmk1 10.0.100.11
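To run the same check against every host without retyping the command, a small loop in the ESXi shell works as well (the additional addresses are placeholders for the vSAN kernel ports of the other hosts):

# Ping all vSAN kernel ports from vmk1; the IP list is an example
for ip in 10.0.100.11 10.0.100.12 10.0.100.13; do
  echo "--- $ip ---"
  vmkping -I vmk1 -c 3 $ip
done

With jumbo frames configured on the vSAN network, adding -d -s 8972 to the vmkping call also verifies that the MTU is consistent end to end.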
I received ping responses from all hosts on all vSAN kernel ports, so I could conclude there was no connectivity issue in the vSAN network.