Stage XXI: vSAN with 40GbE

After installing my vSAN 6.2-based systems in Stage XX: vSAN All Flash, it was time to update the environment again…

vSAN Hosts:

I upgraded my IBM x3650 M4 servers with additional NICs, more RAM, and Nvidia GPUs.


- 2 x Intel Xeon E5-2670 8-Core 2.6 GHz
- 256 GB RAM
- IBM ServeRAID M5110e
- 2 x SanDisk FusionIO ioDrive2 365GB
- 4 x SanDisk Optimus Eco 2TB SSDs
- 2 x Dual Port 10GbE NIC
- 1 x Dual Port 40GbE NIC
- 1 x Nvidia Grid K2

After upgrading the hardware, it was time to upgrade my environment to vSAN 6.6 (an awesome release with a lot of improvements).

I used my SanDisk FusionIO ioDrive2 365GB cards, formatted in high-performance mode, for the vSAN caching tier.

The vSAN capacity tier consists of SanDisk Optimus Eco 2TB enterprise SSDs.
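A disk group with one cache device and four capacity devices can also be created from the ESXi shell. A minimal sketch, assuming an ESXi 6.6 host; the device identifiers below are placeholders for the real NAA/eui names:

```shell
# List the available devices and note their identifiers
esxcli storage core device list | grep -i "Display Name"

# Create one vSAN disk group: the ioDrive2 as cache tier,
# the four Optimus Eco SSDs as capacity tier
# (device names are placeholders, not the real identifiers)
esxcli vsan storage add \
  --ssd eui.fusionio-cache-device \
  --disks naa.capacity-ssd-1 \
  --disks naa.capacity-ssd-2 \
  --disks naa.capacity-ssd-3 \
  --disks naa.capacity-ssd-4

# Verify the resulting disk group layout
esxcli vsan storage list
```

The `--disks` flag can be repeated, so one cache device can front all four capacity SSDs in a single disk group.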




The fault domains are unchanged since the initial configuration.


The vSAN Health Service in particular received a big improvement in 6.6. Now everything is green…
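The same state can be checked from the ESXi shell. A quick sketch, assuming a host running vSAN 6.6 (where the health checks are also exposed through esxcli):

```shell
# Cluster membership and overall state of this host's vSAN cluster
esxcli vsan cluster get

# Summary of the vSAN health checks (available in 6.6)
esxcli vsan health cluster list
```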


The deduplication ratio is a little higher than under vSAN 6.5.


Here is my configured vSAN storage policy:


Network Upgrade:

My previous vSAN environment was built on 10GbE. I decided to upgrade the new setup to 40GbE…

I was able to get 2 x Dual Port Mellanox 40GbE cards and 2 x 40GbE cables for a cross-connect setup.


The vmnic numbering turned out strange with these cards, probably because the servers already contain 2 x 1GbE and 4 x 10GbE NICs…
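The numbering can be verified from the ESXi shell: vmnic names are assigned in PCI bus enumeration order, so adding a card can shuffle them. A short sketch, assuming an ESXi host; `vmnic6` is a placeholder name:

```shell
# Show all physical NICs with driver, speed, and PCI address;
# useful to see where the new 40GbE ports landed in the numbering
esxcli network nic list

# Inspect a single vmnic to map it back to its physical card
# (vmnic6 is a placeholder for one of the 40GbE ports)
esxcli network nic get -n vmnic6
```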


I configured vSAN in a cross-connect setup using one port of the 40GbE card. The second port is used for vMotion, also at 40GbE.
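Splitting the two 40GbE ports between vSAN and vMotion comes down to tagging the matching vmkernel interfaces. A hedged sketch, assuming an ESXi 6.6 host; `vmk2` and `vmk3` are placeholder names for the two 40GbE vmkernel ports:

```shell
# Tag the first 40GbE vmkernel port for vSAN data traffic
esxcli vsan network ip add -i vmk2

# Tag the second 40GbE vmkernel port for vMotion
esxcli network ip interface tag add -i vmk3 -t VMotion

# Verify both assignments
esxcli vsan network list
esxcli network ip interface tag get -i vmk3
```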


I really love the performance of my vSAN 6.6 all-flash monster. Running both vMotion and the vSAN data network over 40GbE is incredibly fast…

The next step in my HomeLab is the VMware NSX configuration. Check out Stage XXII: NSX.