In my last episode, HomeLab Stage LXVIII: NSX-ALB, I wrote about my AVI implementation inside my #HomeLab. I really like the setup, which I use for customer demos and PoCs.
This time I am focusing on one of my favorite topics: New toys for vSAN….. 🙂
When VMware launched vSAN several years ago, I was blown away by this solution. I have done hundreds of vSAN installations at customer sites: some of them are 2-node clusters with remote witnesses, some are standard clusters with up to 64 hosts, and most of them are stretched clusters. German customers really love stretched clusters with all the different configuration options, and I love every vSAN implementation. I received a huge Captain vSAN flag after the official campaign ended, which has a special place in my office. (my Pearson Vue HomeOffice Testcenter system in front)
What do all these installations have in common? They are two-tier vSANs, now called Original Storage Architecture (OSA).
VMware announced a new single-tier vSAN called Express Storage Architecture (ESA). Now we have OSA and ESA (sounds like two Austrian dairy cows….).
I really wanted this new kind of technology inside my HomeLab environment, but the requirements for vSAN ESA are tough:
- vSAN ReadyNodes only (ESA is supported only when factory ordered, I will use my own….)
- 2 Socket CPUs only (No single socket systems, OK)
- 32 cores per socket (A high core count is needed to deliver the massive amount of IOPS, that could be a problem)
- One 25 GbE NIC minimum, 100GbE recommended (No problem within my HomeLab)
- One 1 GbE or faster NIC for VMs and Mgmt (Easy part….)
- Minimum of 512 GBs of RAM (Shouldn't be a problem)
- Dedicated boot device (Of course! Please always use a dedicated boot device)
- Minimum of four NVMe TLC devices per host (OK, that could be an issue)
- Disks at least 1.6 TBs in size (Holy cow… ESA)
OK, so let's see what I can do….
- I need CPUs with a higher core count, but I will start with my existing ones
- I need a minimum of 4 x TLC NVMe devices per host, each with minimum of 1.6TB
The CPU part was not my main focus, but how to achieve the 4 x TLC devices?
Intel for the win! They offered me some sample hardware! YES
I received 8 x Intel Optane P4800X 2.5″ 375GB devices! What perfect timing, guys. The only “issue” is that they are not 1.6TB, but it should work, right?
I ordered 4 PCIe x8 carrier boards to mount two Optanes on each board. But how do I get these boards recognized inside my servers?
I placed two boards with 2 x Optane on each board into my existing Dell R730 servers:
Both boards are located in the same riser, in slots supporting PCIe x8 connections; the left slot holds a Mellanox 100GbE dual-port NIC, which needs a PCIe x16 slot.
After booting the server, ESXi could only see 2 of the 4 NVMe SSDs. What went wrong?
The answer was not easy to find. The issue is related to PCIe addressing, which could be resolved with slot bifurcation.
After this configuration, ESXi is able to communicate with all 4 NVMe SSDs…. Yeah
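A quick way to double-check this from the ESXi shell is a sketch like the one below (the grep patterns are illustrative; actual display names vary by ESXi version and firmware):

```shell
# Sketch: verify that ESXi sees all four NVMe devices after enabling
# PCIe bifurcation for the riser slot in the server BIOS.

# List all NVMe controllers known to ESXi -- four Optane entries should appear
esxcli nvme device list

# Cross-check against the storage core device list
esxcli storage core device list | grep -i nvme

# lspci shows whether the PCIe topology exposes each Optane as its own device
lspci | grep -i intel
```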
The required network configuration for vSAN ESA is minimum 25GbE. I have 100GbE running inside my Lab:
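To confirm the uplinks actually negotiated their full speed, a simple check from the ESXi shell looks like this (vmnic numbering is illustrative and depends on your host's inventory):

```shell
# Sketch: confirm link speed on the vSAN uplinks.
esxcli network nic list
# The Speed column should report 100000 Mb/s for the Mellanox ports,
# comfortably above the 25 GbE minimum required by vSAN ESA.
```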
I created a 2 node cluster for my new vSAN ESA environment. This requires a witness node, which has dedicated ESA requirements:
The witness itself exists only in Large or Extra Large sizes for vSAN ESA….
My witness is running inside Datacenter II to maintain a separate location.
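Once both data nodes and the witness have joined, a quick sanity check from one of the data nodes can look like this (output fields vary by ESXi version):

```shell
# Sketch: verify 2-node cluster membership including the witness.
esxcli vsan cluster get
# In a healthy 2-node setup, the member count should reflect both data
# nodes plus the witness; the witness is listed with the WITNESS node type.
```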
My vSAN ESA configuration is showing some warnings, because the Intel Optane devices are not officially supported (they do not appear in any vSAN ESA ReadyNode).
I silenced the warning within Skyline Health; the system itself is running without any issues.
The ESA requirements for the NVMe SSDs are based on performance classes as well as endurance classes:
My Intel Optane P4800X disks fit these requirements perfectly:
Stay tuned for my performance test results based on HCIbench for my newest VMware vSAN ESA cluster, especially with the U1 improvements!
Thanks again to my friends at Intel for the sample hardware!
Check the next episode HomeLab Stage LXX: Power Consumption