Planning Phase
Testing software and playing with new technologies is a crucial part of my business. Some solutions can be deployed to a simple VMware Workstation VM, but others require complex server and networking architectures. In the past I did most of my tests with nested vSphere or vSAN clusters. Well, it works… somehow… but as you might imagine, a nested vSAN cluster with virtual flash devices backed by spinning (SATA) disks sucks, err… does not perform very well.
I needed some bare metal to perform realistic testing, so I kept looking for phased-out customer servers. The problem is that many customers use their ESXi hosts until they literally fall apart or drop off the HCL. Hardware that isn't capable of running the latest VMware products is just scrap iron. Furthermore, rackmount servers are usually noisy, energy-hungry and require a lot of space. Not the best choice to put in your office.
I’ve been searching for a while for a more compact solution. The Intel NUC series looked like a possible candidate. I know they’re quite popular in the vCommunity, but what kept me from buying one was the lack of network adapters and the limited ability to install caching and storage devices.
Earlier this year I got a hint to look at the Supermicro E300-9D series. This micro server looked promising: still small, but equipped with 8 NICs (four of which are 10G) and M.2 connectors for NVMe flash devices. William Lam has posted an excellent article about the E300-9D. This little gem can be equipped with a SATA DOM boot device and up to 3 NVMe devices, AND it is listed on the VMware HCL. How cool is that?!
The Setup
As mentioned above, William Lam’s post was very helpful in the planning phase. I made some adjustments because I wanted Intel Optane flash as the caching device. Luckily, the experts at my Supermicro vendor CTT pointed out some pitfalls with the dimensions of some of the M.2 devices I had chosen. Together we figured out a setup that fits into the chassis. Kudos to the technical sales team at CTT. You’ve done a very good job.
| Component | Part |
| --- | --- |
| Base unit | Supermicro SuperServer SYS-E300-9D-8CN8TP barebone |
| CPU | Intel Xeon D-2146, 2.30 GHz, single socket |
| NIC | 4x Intel i350, 4x Intel X722 (2x SFP+, 2x 10GBase-T) |
| PCIe riser card | Supermicro Riser Card RSC-RR1U-E8 |
| Memory | 2x Samsung 32 GB reg. ECC DDR4-2666 DIMM SDRAM |
| Boot device | Supermicro SATA DOM 16 GB SSD |
| Cache tier | Intel Optane SSD DC P4801X Series, 100 GB, 3D XPoint |
| Capacity tier | Samsung PM981 1024 GB M.2 PCIe 3.0 x4 NVMe SSD |
| M.2 to PCIe | Supermicro 2-port NVMe HBA |
How many units?
As we all know, the functional minimum for a vSAN cluster is 3 nodes, with the downside that any maintenance on a node puts the cluster into a degraded state. With four nodes it is possible to evacuate data, do maintenance and still not lose redundancy. You might argue that this is acceptable for a non-production homelab. True, but there’s another point. With an all-flash cluster at hand and a 10 Gbit vSAN traffic backbone, it is also possible to use erasure coding (which needs 2n+2 hosts for n failures to tolerate), and that requires at least 4 nodes. Besides having a stable and performant platform for testing VMware products, I will also do some research on the performance impact of erasure coding vs. mirroring (RAID-1) – see the rough capacity comparison below.
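To get a feel for what erasure coding buys on a cluster this small, here is a minimal back-of-the-envelope sketch in Python. It assumes four nodes with one 1 TB capacity device each (taken from the build above) and uses the standard vSAN space overheads for FTT=1: 2x for RAID-1 mirroring and roughly 1.33x for RAID-5 erasure coding. It is an illustration of the trade-off, not a sizing tool.

```python
# Rough usable-capacity estimate for a 4-node all-flash vSAN cluster (FTT=1).
# Assumption: one ~1 TB capacity-tier device per node (Samsung PM981, rounded).
# RAID-1 mirroring stores two full copies of the data (2x overhead);
# RAID-5 erasure coding stores 3 data + 1 parity fragments (~1.33x overhead).

NODES = 4
CAPACITY_PER_NODE_TB = 1.0

raw_tb = NODES * CAPACITY_PER_NODE_TB

layouts = {
    "RAID-1 mirroring (FTT=1)": 2.0,         # 100 % overhead, 3+ hosts
    "RAID-5 erasure coding (FTT=1)": 4 / 3,  # ~33 % overhead, needs 4+ hosts
}

print(f"Raw capacity: {raw_tb:.2f} TB")
for name, overhead in layouts.items():
    print(f"{name}: ~{raw_tb / overhead:.2f} TB usable")
```

On paper that works out to roughly 2 TB usable with mirroring versus about 3 TB with RAID-5 erasure coding, ignoring slack space, deduplication/compression and other vSAN overheads – which is exactly why erasure coding is tempting on a small all-flash cluster, even if it may cost some performance. That performance question is what I want to measure.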
Is it worth it?
To be honest, this little precious costs a considerable amount of €. For the equivalent we could go on a decent vacation to the other side of the planet and visit some dear friends living on a tiny patch of coral in the middle of the big blue ocean. So, one critical point will be the WAF score of the cluster. I’m very lucky that my wife is a vGeek too (and she can’t wait to get her hands on the new hardware). We both agreed that education and hands-on practice are priceless in our business. A homelab is a tool – not a fancy gadget. Like a fisherman needs a boat, an IT architect needs a lab.
Read part 2 – Unboxing