HomeLab Stage XLVII: New Hardware

In the last episode, HomeLab Stage XLVI: New Host, I covered the second life of my Apple Mac Pro.

What's next?

A massive hardware replacement! I want to retire my oldest ESXi hosts: the existing IBM x3650M3 cluster and the IBM x3690X5 ROBO cluster.

IBM x3650M3

It was time to say goodbye… I wanted to retire my oldest vSAN cluster to make room for something newer. I had been running these systems for years, and now they have found a second, or better said, a third home. A colleague wanted to build a HomeLab of his own and needed a starting point…

Farewell, my old friends…

Dell R730

I was able to get 5 x Dell R730 servers as a replacement. Each system is equipped with the following:

  • Dual Xeon E5-2680 v3 2.50 GHz
  • 128GB RAM DDR4-ECC Memory
  • 400GB SATA SSD
  • Dual 10GbE SFP+
  • Samsung 512GB NVMe M2
  • Nvidia P4 GPU
Rack Setup for the Bitfusion cluster

I really love the Dell Technologies VMware integration. I have been using the OpenManage Integration for VMware vCenter (OMIVV) for several years now. That integration is absolutely amazing!

The OMIVV license is required per host for this feature. I was lucky to get these licenses for my setup as well.

I am going to write a dedicated blog post about the Dell / VMware integration in the near future.

Dell OpenManage for VMware vCenter (OMIVV) Plugin

I wanted to use the new cluster for the VMware Bitfusion setup. vSAN? Of course! Each node is configured with the NVMe device for caching and the SATA SSD for capacity. The NVMe device is an M.2 SSD mounted in an M.2 PCIe adapter.
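For reference, claiming such a two-device disk group on each host can be sketched with esxcli (the NAA device IDs below are placeholders, not my actual devices — look them up on your own host first):

```shell
# List the devices ESXi sees, to identify the NVMe cache and SATA capacity disks
esxcli storage core device list

# Claim the disk group for vSAN:
# -s = cache tier device (NVMe), -d = capacity tier device (SATA SSD)
esxcli vsan storage add -s naa.5002538c00000001 -d naa.5002538c00000002

# Verify the resulting disk group
esxcli vsan storage list
```

The same can of course be done through the vSphere Client disk-claiming wizard; the CLI is just handy when preparing several identical hosts.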

M2 Samsung NVMe 512GB

vSAN network traffic runs over one of the 10GbE ports (via a Dell QSFP 40GbE to 4 x 10GbE SFP+ breakout cable from my Dell S6000 switch). The other 10GbE connection carries the VM and Bitfusion traffic.
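Dedicating one uplink to vSAN comes down to tagging a VMkernel interface for vSAN traffic. A minimal sketch — the interface name, port group, and IP settings are assumptions for illustration, not my actual config:

```shell
# Create a VMkernel interface on the vSAN port group (names are placeholders)
esxcli network ip interface add -i vmk1 -p vSAN-PG
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.50.11 -N 255.255.255.0

# Tag vmk1 for vSAN traffic
esxcli vsan network ip add -i vmk1

# Check which interfaces carry vSAN traffic
esxcli vsan network list
```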

I am using only 4 of the 5 servers in the cluster; the 5th host is just a spare-parts system…

IBM x3690X5

Another retirement… The IBM x3690X5 systems are also pretty old, and I wanted to replace them to free up rack space. These machines are special because of the number of memory slots available: mine have 256GB of DDR3 RAM in each system. I think I'm going to sell them to fund something new.

IBM 16-DIMM Internal Memory Expansion f/ x3690 X5 (81Y8926) | Convena

Dell XR2

Say hello to my newest HomeLab members… the Dell / OEM XR2 servers…

Customer Feedback Drove the Design of the New Dell EMC PowerEdge XR2 High  Performance Chassis | Dell Technologies
Dell Rugged Server for special environments: Hot, Cool, Dust and Vibration

I was able to get 4 of them! This will be my new main cluster. Dell builds this kind of server for special deployments where rugged devices are needed. There is also a VxRail version, the D Series!

Each of my servers is equipped with:

  • Dual Intel Xeon Silver 4214 CPUs (2.20 GHz, 12 cores)
  • 768 GB DDR4-RAM
  • 1 x Dell HBA 330
  • 1 x Dual 10GbE SFP+ LOM NICs
  • 1 x Nvidia T4 GPU
  • 1 x Special NIC (more on that in a later post)
  • 1 x 250GB Intel SATA M2 SSD (ESXi binaries)
  • 1 x 800 GB SAS SSD (vSAN Cache)
  • 2 x 3.84 TB SAS SSD (vSAN Capacity)
  • 1 x 7.6 TB SAS SSD (Cohesity-Cluster-Nodes)

My machines are configured with the high-performance fans for GPU usage.

My servers have the custom OEM front plate with the power-usage display option, which I really like.

Only the first and second servers were running when this picture was taken

I tweaked the servers (including the iDRACs) to produce only a small amount of noise. The systems are running inside my primary datacenter at the bottom of my house.
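On Dell servers of this era, the usual noise tweak is overriding the iDRAC's automatic fan curve via IPMI raw commands. A sketch of the commonly used sequence — host name and credentials are placeholders, and note this raw override works on older iDRAC firmware only (newer iDRAC9 releases have removed it), so check your firmware level first:

```shell
# Disable the automatic fan control (Dell iDRAC IPMI raw command)
ipmitool -I lanplus -H idrac-host -U root -P calvin raw 0x30 0x30 0x01 0x00

# Set a fixed fan duty cycle of 20% (0x14 = 20 decimal)
ipmitool -I lanplus -H idrac-host -U root -P calvin raw 0x30 0x30 0x02 0xff 0x14

# Hand control back to the iDRAC when done
ipmitool -I lanplus -H idrac-host -U root -P calvin raw 0x30 0x30 0x01 0x01
```

Keep an eye on component temperatures after fixing the fan speed, especially with GPUs installed.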

Each server draws up to 250W, which is pretty nice for such powerful machines. I am currently migrating every workload from the “old” main cluster, the Dell VRTX, to this new one.

Stay tuned for the next episodes of my #HomeLab #HomeDC journey. There are many more to come…..

Here is the newest posting: HomeLab Stage XLVIII: Nvidia GPU Power