VMware vSAN 8 – vSAN on steroids

VMware vSAN was developed about 10 years ago. The year was 2012, when magnetic disks were predominantly used for data storage and flash media was practically worth its weight in gold. It was during this time that the idea behind vSAN was born: hybrid data storage with spinning disks for bulk data and flash media as cache. Flash devices at that time used the same interfaces and protocols as magnetic disks, so they could not reach their full potential. The interface was always the bottleneck.

Today, more than 10 years later, we have more advanced flash storage with high data density and powerful protocols such as NVMe. The price per TB of flash is now on par with magnetic SAS disks, and flash has practically replaced magnetic disks. In addition, there are higher possible bandwidths in the network, higher core density in the CPU, and completely new requirements such as ML/AI or containers. The time has come for a new type of vSAN data storage that can fully leverage the potential of these new storage technologies.

vSAN Express Storage Architecture (ESA)

To put it in a nutshell: vSAN ESA is an optional data storage architecture. The traditional disk group model will continue to exist, even under vSAN 8.

VMware vSAN ESA is a flexible single-tier architecture. This means that it does not require disk groups and no longer distinguishes between cache and capacity layers. It is optimized for the use of modern NVMe flash storage. All storage devices of a host are gathered in a storage pool.

vSAN ESA Architecture (Source: VMware)

There is no upgrade path from the disk group model to ESA, so the new architecture can only be used in greenfield deployments. The vSAN nodes must be explicitly qualified for it, and there will be dedicated vSAN ReadyNodes for ESA.

Continue reading “VMware vSAN 8 – vSAN on steroids”

An Insight into vSAN-Striping

This article is the result of questions that my students frequently ask in vSAN classes. The subject of striping sounds very simple at first, but it turns out to be quite complex once you move away from the simple standard examples. We shed light on the striping behavior of vSAN objects with mirroring, with erasure coding, and for large objects. We also show how striping behavior differs before and after vSAN 7 Update 1.

What is striping?

Striping generally refers to a technique in which logically sequential data is segmented in such a way that successive segments are stored on different physical storage devices. Striping does not create redundancy. In fact, the opposite is true. In traditional storage, striping is also referred to as RAID 0 (note: RAID 0 -> zero redundancy). By distributing the segments over several devices that can be accessed in parallel, the overall data throughput is increased while latency is reduced.

The stripe width is the number of segments an object is split into.

With a stripe width of 2, an object of 100 GB, for example, is split into two components of 50 GB each and distributed across two storage devices. This corresponds to a RAID 0.

Striping with stripe width=2
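
In vSAN, the stripe width is requested through a storage policy. As a minimal sketch, such a policy could be created with PowerCLI’s SPBM cmdlets (this assumes an active vCenter connection; the policy name is illustrative):

# Build a rule that requests a stripe width of 2 and wrap it in a policy
$rule = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 2
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "StripeWidth2-Demo" -AnyOfRuleSets $ruleSet

Once the policy is assigned to a VM or VMDK, vSAN splits the backing object across the requested number of capacity devices.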
Continue reading “An Insight into vSAN-Striping”

vSAN Cluster Live-Migration to new vCenter instance

What can be done if the production vCenter Server appliance is damaged and you need to migrate a vSAN cluster to a new vCenter appliance?

In this post, I will show how to migrate a running vSAN cluster from one vCenter instance to a new vCenter under full load.

Anyone who works with vSAN will get a sinking feeling in their gut just thinking about this. Why would one do such a thing? Wouldn’t it be better to put the cluster into maintenance mode? In theory, yes. In practice, however, we repeatedly encounter constraints that do not allow a maintenance window in the near future.

Normally, vCenter Server appliances are solid and low-maintenance units. Either they work, or they are completely destroyed. In the latter case, a new appliance can be deployed and the configuration restored from backup. None of this applied to a recent project. The VCSA 6.7 was still partially working, but key vSAN functionality was no longer operational in the UI. An initial idea to fix the problem with an upgrade to vCenter 7, and thus to a new appliance, proved unsuccessful. Cross-vCenter migration of VMs (XVM) to a new vSAN cluster was not possible either, firstly because this feature only became available with version 7.0 Update 1c, and secondly because only two new replacement hosts were available. That is too few for a new vSAN cluster. To make things worse, the source cluster was also at its capacity limit.

There was only one possible way out: stabilize the cluster and transfer it to a new vCenter under full load.

There is an old but still valuable post by William Lam on this topic. With this and VMware KB article 2151610, I was able to work out a strategy that I would like to briefly outline here.

The process works because, once set up and configured, a vSAN cluster operates autonomously from vCenter. vCenter is only needed for monitoring and configuration changes.
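
This autonomy can be verified on any member host. As a minimal sketch, assuming a direct PowerCLI connection to the ESXi host itself rather than to vCenter, the host’s own view of the cluster membership can be queried via esxcli:

# Connect directly to the ESXi host (prompts for the root password)
Connect-VIServer -Server <Host-IP> -User root
# Query the host's view of its vSAN cluster membership
$esxcli = Get-EsxCli -VMHost (Get-VMHost) -V2
$esxcli.vsan.cluster.get.Invoke()

The output includes the sub-cluster UUID and the member list, confirming that cluster membership lives on the hosts themselves and not in vCenter.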

Continue reading “vSAN Cluster Live-Migration to new vCenter instance”

ESXi Configuration Restore fails with blank DCUI

Backing up and restoring an ESXi host configuration is a standard procedure that can be used when performing maintenance on a host. Not only the host name, IP address, and passwords are backed up, but also the NIC and vSwitch configuration, the object ID, and many other properties. Even after a complete reinstallation, a host can thus recover all properties of the original installation.
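
The backup itself is a PowerCLI one-liner. A minimal sketch (the destination directory is illustrative):

# Save the host's configuration bundle to a local directory
Get-VMHostFirmware -VMHost <Host-IP> -BackupConfiguration -DestinationPath <Backup-Directory>

This downloads a configBundle .tgz file, which serves as the source for the restore described below.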

Recently I wanted to reformat the boot disk of a host in my homelab and had to do a fresh installation of ESXi. The reboot into the clean installation worked fine, and the host got a new IP via DHCP.

Now the original configuration was to be restored via PowerCLI. To do this, first put the host into maintenance mode.

Set-VMHost -VMHost <Host-IP> -State "Maintenance"

Now the host configuration can be restored.

Set-VMHostFirmware -VMHost <Host-IP> -Restore -SourcePath <Path_to_configfile>

The command prompts for a root login and then automatically reboots the host. At the end of the boot process, an empty DCUI greeted me.

I had never seen this before. I was able to log in (with the original password), but all network connections were gone. The management network configuration was also unavailable for selection (grayed out). The host was both blind and deaf.

Continue reading “ESXi Configuration Restore fails with blank DCUI”