Connection to the PowerShell Gallery not possible

On older Windows systems, it may not be possible to contact the PowerShell Gallery. Any attempt returns an error.

Unable to resolve package source 'https://www.powershellgallery.com/api/v2'

Root cause: TLS

Transport Layer Security (TLS) is an encryption protocol for secure data transmission on the internet. Since 2021, TLS versions 1.0 and 1.1 have been considered obsolete and are no longer accepted by many applications; TLS 1.2 and 1.3 have become the new standard. The PowerShell Gallery has required at least TLS 1.2 since 2020 and rejects older protocols. Older PowerShell versions such as Windows PowerShell 5.1 do not negotiate TLS 1.2 by default.

Query the current security protocol

[Net.ServicePointManager]::SecurityProtocol

PowerShell usually returns the value 'SystemDefault'. This means that PowerShell uses the system-wide settings for TLS.

PS > [Net.ServicePointManager]::SecurityProtocol
SystemDefault

If an older TLS version is defined as the system-wide default, PowerShell will use it.

Enforce TLS 1.2

TLS 1.2 can be enforced in PowerShell with the command shown below. However, the command must be executed again in every new PowerShell session.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
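
To avoid retyping it, the setting can be written to the PowerShell profile so it is applied automatically in every new session. A minimal sketch, assuming the default profile path ($PROFILE) is used and the file may not exist yet:

# Create the profile file if it does not exist yet
if (-not (Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}
# Append the TLS 1.2 setting so every new session applies it automatically
Add-Content -Path $PROFILE -Value '[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12'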

The module query that previously failed can now be repeated for testing purposes.

PS > Find-Module -Name VMware.PowerCLI

The version of the module is now returned without an error message.

Version    Name             Repository  Description
-------    ----             ----------  -----------
13.3.0.... VMware.PowerCLI  PSGallery   This Windows PowerShell module contains VMware.PowerCLI

Sustainable solution

Forcing TLS 1.2 can only be a short-term fix. In the long term, the PowerShell version in the OS should be brought up to date. Older systems that have reached their end of life (EoL) according to Microsoft should no longer be used. That's easy to say, but in practice I often come across legacy systems that cannot be replaced for a variety of reasons.
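
For such legacy systems, a middle ground is to raise the system-wide default that 'SystemDefault' resolves to. A hedged sketch, assuming a .NET-Framework-based Windows PowerShell and administrative rights; the values below are the documented SchUseStrongCrypto registry switches:

# Make the .NET Framework (and thus Windows PowerShell) prefer strong crypto
# system-wide, so TLS 1.2 is negotiated by default; takes effect in new sessions
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' `
    -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' `
    -Name 'SchUseStrongCrypto' -Value 1 -Type DWord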

ESXi Config-Backup with PowerCLI requires HTTP

There is a really useful and convenient PowerCLI one-liner for backing up the host configuration. I have been using it for years and explained it in detail in an old blog post.

Get-Cluster -Name myCluster | Get-VMHost | Get-VMHostFirmware -BackupConfiguration -DestinationPath 'C:\myPath'

This is a command I always teach my students in my VMware courses. Backing up the host configuration is downright mandatory before making changes to a host, installing patches and drivers, or performing host updates. It takes just a few seconds of additional effort, but these configuration backups have saved me more than once from major trouble and many hours of extra work.

Recently, I was backing up host configurations in a major datacenter. Surprisingly, the command did not work on some of the vCenter instances and aborted with an error message.

Get-VMHostFirmware : 18.08.2023 12:05:49 Get-VMHostFirmware An error occurred while sending the request.
At line:1 char:28
+… et-VMHost | Get-VMHostFirmware -BackupConfiguration -DestinationPath …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Get-VMHostFirmware], ViError
+ FullyQualifiedErrorId : Client20_SystemManagementServiceImpl_BackupVmHostFirmware_DownloadError,VMware.VimAutomation.ViCore.Cmdlets.Commands.Host.GetVMHostFirmware

To understand the error, we must first understand how the PowerCLI command works. First, a backup of the host configuration is triggered on the host via vCenter. The host stores it locally as a zipped TAR archive (.tgz) named configBundle-HostFQDN.tgz (example: configBundle-esx01.lab.local.tgz). In a second step, the archive is downloaded from the host. The URL for this is:

http://[HostFQDN]/downloads/[Host-UUID]/configBundle-HostFQDN.tgz
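
For illustration, the two steps can also be reproduced manually via the vSphere API, which makes the mechanics visible. A sketch assuming an existing PowerCLI connection; host name and target path are placeholders:

# Step 1: trigger the configuration backup via the host's FirmwareSystem manager
$vmhost = Get-VMHost -Name 'esx01.lab.local'
$fwSystem = Get-View -Id $vmhost.ExtensionData.ConfigManager.FirmwareSystem
$url = $fwSystem.BackupFirmwareConfiguration()

# The API returns the download URL with '*' as a placeholder for the host name
$url = $url -replace '\*', $vmhost.Name

# Step 2: download the TGZ archive -- this is the plain-HTTP request
# that the firewall blocked in my case
Invoke-WebRequest -Uri $url -OutFile "C:\myPath\configBundle-$($vmhost.Name).tgz"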

The error message above shows that the download of the TGZ file failed. With the help of the network admins, it quickly became clear what had happened: my workstation, from which I had issued the PowerCLI command, tried unsuccessfully to establish an HTTP connection to the ESXi host. The connection was blocked by a firewall rule.

I wondered why the transfer is handled over unencrypted HTTP. The firewall log shows connection attempts from the workstation to the ESXi host over both HTTP and HTTPS.

Is there a way to force the download using HTTPS?

My first thought was that there might be a parameter to the command that enforces the HTTPS protocol. A query in the VMTN forum unfortunately brought disillusionment: there is no such parameter.

It is a bit surprising that VMware uses an unencrypted protocol for such sensitive data, all the more since the PowerCLI session to vCenter already runs over HTTPS anyway. The most plausible explanation is that securing the transfer via SSL was simply 'forgotten' for this quite old command.

So currently there is no choice but to create a firewall rule that allows the download via HTTP.

vSAN Cluster Live-Migration to new vCenter instance

What can be done if the production vCenter Server appliance is damaged and you need to migrate a vSAN cluster to a new vCenter appliance?

In this post, I will show how to migrate a running vSAN cluster from one vCenter instance to a new vCenter under full load.

Anyone who works with vSAN will have a sinking feeling in their guts thinking about this. Why would one do such a thing? Wouldn't it be better to put the cluster into maintenance mode? In theory, yes. In practice, however, we repeatedly encounter constraints that do not allow a maintenance window in the near future.

Normally, vCenter Server appliances are solid and low-maintenance units. Either they work, or they are completely broken. In the latter case, a new appliance could be deployed and the configuration restored from backup. None of this applied to a recent project. The VCSA 6.7 was still partially working, but key vSAN functionality was no longer available in the UI. An initial idea to fix the problem with an upgrade to vCenter 7, and thus a new appliance, proved unsuccessful. Cross-vCenter migration of VMs (XVM) to a new vSAN cluster was not possible either: firstly, because this feature only became available with version 7.0 Update 1c, and secondly, because only two replacement hosts were available, too few for a new vSAN cluster. To make things worse, the source cluster was also at its capacity limit.

There was only one possible way out: stabilize the cluster and transfer it to a new vCenter under full load.

There is an old, but still valuable post by William Lam on this topic. With this, and the VMware KB 2151610 article, I was able to work out a strategy that I would like to briefly outline here.

The process works because, once set up and configured, a vSAN cluster can operate autonomously, without vCenter. vCenter is only needed for monitoring and configuration changes.


ESXi Boot Media – New features in v7 and legacy issues from v6.x

With vSphere 7, fundamental changes to the structure of the ESXi boot medium were introduced. A fixed partition structure gave way to more flexible partitioning. More on this later.

With vSphere 7 Update 3, VMware also brought bad news for those using USB or SD card flash media as boot devices. Increasing read and write activity leads to rapid aging and failure of these types of media, as they were never designed for such a heavy load profile. VMware put these media on the red list, and the vSphere Client throws warning messages if such media are still in use. We will explore how to replace USB or SD card boot media.

ESXi Boot Medium: Past and Present

In the past, up to version 6.x, the boot medium was rather static. Once the boot process was complete, the medium was no longer important; at most, there was an occasional read request from a VM to the VMware Tools directory. Even a medium that broke during operation did not affect the running ESXi host; only a reboot caused problems. For example, it was still possible to back up the current ESXi configuration even if the boot medium was damaged.

Layout of the ESXi boot media before version 7


In principle, the structure was nearly always the same: a boot loader of 4 MB (FAT16), followed by two boot banks of 250 MB each. These contain the compressed kernel modules, which are unpacked and loaded into RAM at system boot. The second boot bank allows a rollback in case of a failed update. This is followed by a diagnostic partition of 110 MB for small coredumps in case of a PSOD. The locker or store partition contains, for example, ISO images with VMware Tools for all supported guest operating systems; from here, the Tools are mounted into guest VMs. A common source of errors during Tools installation was a damaged or lost locker directory.

The subsequent partitions differ depending on the size and type of the boot media. The second diagnostic partition of 2.5 GB was only created if the boot medium had a capacity of at least 3.4 GB: the base layout occupies 900 MB (4 MB boot loader + 250 MB boot bank + 250 MB boot bank + 110 MB diagnostic + 286 MB locker), and together with the 2.5 GB of the second diagnostic partition this adds up to 3.4 GB.

A 4 GB scratch partition was created only on media of at least 8.5 GB. It contains information for VMware support. Anything beyond that was provisioned as a VMFS datastore. However, the scratch and VMFS partitions were created only if the medium was not USB flash or SD card storage; in that case, the scratch partition was created in the host's RAM, with the consequence that in the event of a host crash, all information valuable for support was lost as well.
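
On hosts affected by this, the scratch location can be redirected to persistent storage. A hedged PowerCLI sketch; host name and datastore path are examples:

# Query the currently configured scratch location of a host
$vmhost = Get-VMHost -Name 'esx01.lab.local'
Get-AdvancedSetting -Entity $vmhost -Name 'ScratchConfig.ConfiguredScratchLocation'

# Point scratch at persistent storage instead of RAM (takes effect after a reboot)
Get-AdvancedSetting -Entity $vmhost -Name 'ScratchConfig.ConfiguredScratchLocation' |
    Set-AdvancedSetting -Value '/vmfs/volumes/datastore1/.locker-esx01' -Confirm:$false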

Structure of the boot media from ESXi 7 onwards

The layout outlined above made it difficult to use large modules or third-party modules. Hence, the design of the boot medium had to be changed fundamentally.

Changes of the partition layout between version 6.x and 7.x

First, the boot partition was increased from 4 MB to 100 MB. The two boot banks were also increased to at least 500 MB each; their size is flexible, depending on the total size of the medium. The two diagnostic partitions (small core dump and large core dump), as well as locker and scratch, have been merged into a common ESX-OSData partition with a flexible size between 2.9 GB and 128 GB. Remaining space can optionally be provisioned as a VMFS-6 datastore.

There are four different boot media size classes in vSphere 7:

  • 4 GB – 10 GB
  • 10 GB – 32 GB
  • 32 GB – 128 GB
  • > 128 GB
Dynamic partitioning in vSphere 7 depending on media capacity.
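
Which layout a given host actually ended up with can be checked by listing the partition table of the boot device. A sketch using esxcli through PowerCLI; the host name is a placeholder, and the exact property names of the returned objects may vary slightly between PowerCLI versions:

# List all device partitions of a host via esxcli (V2 interface)
$vmhost = Get-VMHost -Name 'esx01.lab.local'
$esxcli = Get-EsxCli -VMHost $vmhost -V2

# Show partition number, extents, type and size; filter for the boot device if known
$esxcli.storage.core.device.partition.list.Invoke() |
    Select-Object Device, Partition, StartSector, EndSector, Type, Size |
    Format-Table -AutoSize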

The partition sizes shown above are for freshly installed boot media on ESXi 7.0, but what about boot media migrated from version 6.7?
