Stage XXXVIII: Dell VRTX

I completed the SD-WAN implementation with the Velocloud in my last step: Stage XXXVII: Velocloud. Now back to the server side. Last year I wanted to invest in new servers for my HomeLab. I really wanted an environment with vSAN storage inside the nodes as well as a shared storage setup, and 10GbE and 40GbE connectivity were also requirements on my side. Some of my customers run the Dell VRTX system in production, and it combines everything I wanted…

So I found a chassis on eBay for a good price. Here are its specs:

Chassis

4 x 1600W PSU (should be enough, even for high-end blades and components)

10GbE Switch (R1-2210, 4 x external 10GbE and 16 x internal 10GbE). I configured an LACP Etherchannel to my Ubiquiti US-16-XG switch.

Dual Shared PERC 8 (internal shared RAID controllers for the SAS storage; I retrofitted the second controller)

Dual CMC (redundant internal management cards for the chassis)

OK, the chassis was ready to rock, but what about the servers?

Blades

I wanted state-of-the-art blades for my VRTX chassis, so I chose 4 x M640 VRTX blades with the following configuration:

Dual Intel Xeon Gold 5118, 24 cores at 2.30 GHz, with a TDP of 105W per socket

768GB DDR4-2400 RAM, configured in Fault Resilient Mode (12.5%) (see the quick cluster math after this list)

Quad-port 10GbE inside each blade, plus dual-port 40GbE via PCIe slot mapping

2 x 240GB M.2 SSDs in a BOSS device for the ESXi binaries

1 x 800GB SAS SSD for vSAN Cache

1 x 7.68TB SAS SSD for vSAN capacity
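To put the blade specs into perspective, here is a quick back-of-the-envelope calculation of the cluster resources as a small Python sketch. The 12.5% value is simply treated as memory that Fault Resilient Mode sets aside, so the usable-RAM number is only an approximation, not an exact figure from my setup.

# Rough cluster totals for 4 x M640 VRTX blades, based on the specs above.
# Assumption: the 12.5% Fault Resilient Mode region is not available as
# regular capacity, so usable RAM is approximated as 87.5% of installed RAM.
BLADES = 4
CORES_PER_BLADE = 24          # 2 x Xeon Gold 5118 with 12 cores each
RAM_PER_BLADE_GB = 768
FRM_RESERVED = 0.125          # Fault Resilient Mode setting

total_cores = BLADES * CORES_PER_BLADE
total_ram_gb = BLADES * RAM_PER_BLADE_GB
usable_ram_gb = total_ram_gb * (1 - FRM_RESERVED)

print(f"Cores: {total_cores}")                        # 96
print(f"Installed RAM: {total_ram_gb} GB")            # 3072 GB
print(f"Approx. usable RAM: {usable_ram_gb:.0f} GB")  # 2688 GB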

Network Config

I did not want to implement a 1GbE environment, so I chose the internal R1-2210 10GbE switch for the VRTX chassis.

The 40GbE setup is realized with the existing external Dell S6000 40GbE switch.

Each Blade has 4 x 10GbE and 2 x 40GbE.
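To verify that each blade really presents its 10GbE and 40GbE uplinks to ESXi, a small pyVmomi script can list the physical NICs and their negotiated link speeds per host. This is only a sketch: the vCenter hostname and credentials are placeholders, and certificate validation is disabled because it is a lab.

# List every physical NIC and its link speed per ESXi host (requires pyvmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for pnic in host.config.network.pnic:
        speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0  # 0 = link down
        print(f"{host.name} {pnic.device}: {speed} Mb/s")
view.DestroyView()
Disconnect(si)

On the M640 blades this should report four vmnics at 10000 Mb/s and two at 40000 Mb/s per host.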

Storage

vSAN

Each blade has 1 x 800GB SAS Mixed Use SSD for the cache tier and 1 x 7.68TB SAS Read Intensive SSD for the capacity tier. The vSAN network runs over 40GbE, and the resulting vSAN datastore capacity is roughly 28TB.

[Image: vSAN Datastore Capacity]
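The roughly 28TB shown in the datastore view is simply the raw capacity of the four 7.68TB capacity drives expressed in binary units, since the vSphere Client reports TiB while the drives are sold in decimal TB. A quick sanity check:

# Why 4 x 7.68TB capacity drives show up as ~28TB of vSAN datastore capacity.
TB = 10**12   # decimal terabyte, as used on the drive label
TiB = 2**40   # binary terabyte, as displayed by the vSphere Client

raw_bytes = 4 * 7.68 * TB   # one capacity drive per blade
print(f"Raw capacity: {raw_bytes / TB:.2f} TB = {raw_bytes / TiB:.2f} TiB")
# Raw capacity: 30.72 TB = 27.94 TiB  (~28 "TB" in the UI)

Keep in mind that this is raw capacity; the space actually usable for VMs depends on the vSAN storage policy, e.g. FTT=1 with mirroring roughly halves it.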

VRTX Chassis

Dual Shared PERC 8 SAS controllers for the internal disks

8 x 3.84TB SAS Enterprise SSDs in RAID 5 (usable capacity is sketched after this list)

4 x 900GB SAS Enterprise SSDs in RAID 0 for Cohesity
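As a small sketch, the usable capacities of the two shared PERC virtual disks work out as follows, assuming classic RAID 5 (one drive's worth of parity) and RAID 0 (plain striping):

# Usable capacity of the shared PERC 8 virtual disks (decimal TB).
def raid5_usable(drives, size_tb):
    return (drives - 1) * size_tb   # one drive's capacity is lost to parity

def raid0_usable(drives, size_tb):
    return drives * size_tb         # striping only, no redundancy

print(f"RAID 5: {raid5_usable(8, 3.84):.2f} TB")  # 26.88 TB
print(f"RAID 0: {raid0_usable(4, 0.9):.2f} TB")   # 3.60 TB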

GPUs

Nvidia P4 cards in 3 PCIe slots with dedicated slot assignment. Each Nvidia P4 draws a maximum of 75W, which should work inside the VRTX, but it does not: I always got a PSOD as soon as the GPU came under load…

I swapped the 3 x P4 for 2 x T4 GPUs, which consume only 70W each. The VRTX specs state that the 3 full-length slots support 150W, so it should work, but it does not… again a PSOD on the ESXi host.

Let's check the officially supported GPUs inside the VRTX: the Nvidia K2 (outdated) and dual Quadro P4000… OK, I ordered two P4000s (they are actively cooled and have external power connectors). Next challenge: finding the correct power cables for the GPUs…

Tape Library

An LSI SAS controller is assigned to one blade and handed to the tape server VM for Cohesity via PCI passthrough in ESXi.
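Before handing the HBA to the tape server VM, it helps to confirm that the controller actually shows up as passthrough-enabled on that blade. A minimal pyVmomi sketch (vCenter name and credentials are placeholders) that lists all passthrough-enabled PCI devices per host could look like this:

# List PCI devices that are enabled for passthrough on each ESXi host,
# e.g. to spot the LSI SAS HBA before attaching it to the tape server VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    enabled_ids = {p.id for p in host.config.pciPassthruInfo if p.passthruEnabled}
    for dev in host.hardware.pciDevice:
        if dev.id in enabled_ids:
            print(f"{host.name}: {dev.id} {dev.vendorName} {dev.deviceName}")
view.DestroyView()
Disconnect(si)

Remember that attaching a passthrough device requires the VM's memory reservation to be locked to the full configured memory.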

vSphere 7.0 Upgrade

The environment was initially configured with vSphere 6.7U3 and worked without any problems.

At that time the M640 VRTX blades were not supported for 7.0, although the same blades were supported inside the M1000e chassis. That should not be a problem, and indeed they work perfectly. In the meantime, the M640 blades have become fully supported inside the VRTX chassis, but a new UEFI version is required.

The main problem is the installed Dell add-ons, which must be uninstalled before the upgrade. The QLogic FCoE driver uses a deprecated vmkapi, so I removed that driver before starting the upgrade.
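The installed VIBs can be listed per host with esxcli software vib list; as an alternative, the same inventory can be pulled centrally via the host image config manager with pyVmomi (available since vSphere 6.5). This is only a sketch: the connection details are placeholders and the vendor filters are just examples.

# List installed software packages (VIBs) per host and flag likely candidates
# for removal before the 7.0 upgrade. The vendor filter is only an example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for pkg in host.configManager.imageConfigManager.FetchSoftwarePackages():
        if "dell" in pkg.vendor.lower() or "qlogic" in pkg.vendor.lower():
            print(f"{host.name}: {pkg.name} {pkg.version} ({pkg.vendor})")
view.DestroyView()
Disconnect(si)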

Dell OpenManage

I deployed an OpenManage appliance in the past to manage all my Dell equipment. Installation is pretty straightforward: simply deploy the OVA file…

First I started a discovery of the different systems running inside my HomeLab / HomeDC, and they showed up in their respective categories.

The S6000 network switch is equipped with only one PSU, which is why it is displayed with a warning.

OMIVV

I really love the OpenManage Integration for VMware vCenter (OMIVV) and ordered a license for it together with my new blades. The current version, 5.1, is compatible with vSphere 7.0. Just download the files and deploy the appliance.

You must register your vCenter Server system inside the OMIVV appliance (including the new Lifecycle Manager Integration).

The next step is to download your OMIVV license files from the Dell Digital Locker and upload them to the appliance. If you receive an SSL certificate error inside your vSphere Client, restart the vSphere Client service on the VCSA.

The next step is to create profiles within the OMIVV plugin inside vCenter. The plugin communicates with your hosts' iDRAC interfaces and with ESXi itself. Now you are able to view the host status inside the vSphere Client. Pretty cool!

Stay tuned for the next episodes of my HomeLab / HomeDC journey……

You can find the newest episode here: Stage XXXIX: Cohesity

#HomeLabKing