Stage XIII: PernixData FVP

I had played with PernixData FVP a little in the past and implemented it in customer environments. After building my custom NAS in Stage XII, it was time to implement FVP in my HomeLab….

My plan was to install Enterprise SSDs inside of all my Apple Mac Mini ESXi hosts and accelerate my FibreChannel storage.

I already had PernixData FVP installed at home, but only as a monitoring cluster, due to the lack of acceleration resources.

I searched the web for Enterprise SSDs and found the Intel DC S3700 series:

Capacity   Interface        Sequential Read/Write (up to)   Random 4K Read/Write (up to)   Form Factor
200 GB     SATA 6 Gb/s      500 MB/s / 365 MB/s             75,000 / 29,000 IOPS           1.8-inch
400 GB     SATA 6 Gb/s      500 MB/s / 460 MB/s             75,000 / 36,000 IOPS           1.8-inch
100 GB     SATA 6 Gb/s      500 MB/s / 200 MB/s             75,000 / 19,000 IOPS           2.5-inch
200 GB     SATA 6 Gb/s      500 MB/s / 365 MB/s             75,000 / 32,000 IOPS           2.5-inch
400 GB     SATA 6 Gb/s      500 MB/s / 460 MB/s             75,000 / 36,000 IOPS           2.5-inch
800 GB     SATA 6 Gb/s      500 MB/s / 460 MB/s             75,000 / 36,000 IOPS           2.5-inch

The 400GB or 800GB version should be perfect for my Mac Mini ESXi hosts….

So I got 3 of these little beasts.

Intel_DS_S3700_400GB

My Apple Mac Minis are the server version, so they have two internal SATA connections. All three of my hosts already use a Samsung SSD for ESXi boot and host caching, so I removed the default 1 TB HDD in each host and replaced it with an S3700 SSD.

Every Mac Mini now has the following specifications:

Quad-Core i7 @ 2.6 GHz

16 GB RAM

256 GB Samsung 830 SSD

400 GB Intel S3700 SSD

2 x 1 GbE NIC (onboard + Thunderbolt)

1 x 10 GbE NIC (Thunderbolt PCIe expansion)

1 x Dual Port 4 Gbit QLogic HBA (Thunderbolt expansion)

FusionIO Duo Drive or Teradici APEX card on esxmac1 and esxmac2

After the SSD installation you can find the details in the vSphere Client:

FVP_Implementation1

fusionio2
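
If you prefer checking this from a script instead of the vSphere Client, here is a minimal pyVmomi sketch that lists each host's local disks and their SSD flag. The vCenter address, credentials and the esxmac host-name prefix are placeholders for my lab, not anything FVP-specific:

```python
# Minimal sketch: list each host's local disks and flag SSDs via pyVmomi.
# vCenter address, credentials and the "esxmac" prefix are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    if not host.name.startswith("esxmac"):
        continue
    print(host.name)
    for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk):
            size_gb = lun.capacity.block * lun.capacity.blockSize / 1024**3
            print("  %-60s %6.0f GB  SSD=%s" % (lun.displayName, size_gb, lun.ssd))

view.Destroy()
Disconnect(si)
```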

A quick check confirms that the FVP host extension is up and running:

FVP_Implementation3
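
The host extension is just a VIB, so it can also be verified from the ESXi command line. A rough sketch using paramiko over SSH, assuming SSH is enabled on the hosts and that the VIB name contains "pernix"; hostnames and credentials are placeholders:

```python
# Sketch: verify the PernixData host extension VIB on each ESXi host over SSH.
# Assumes SSH is enabled; hostnames/credentials are placeholders for my lab.
import paramiko

HOSTS = ["esxmac1.lab.local", "esxmac2.lab.local", "esxmac3.lab.local"]

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab only
    ssh.connect(host, username="root", password="***")
    # 'esxcli software vib list' prints all installed VIBs; filter for PernixData
    _, stdout, _ = ssh.exec_command("esxcli software vib list | grep -i pernix")
    print(host, stdout.read().decode().strip() or "FVP host extension not found")
    ssh.close()
```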

This is my FVP monitoring cluster, still without acceleration resources:

FVP_Implementation4

FVP_Implementation5

All new SSDs are visible in the Add Resources wizard for my existing cluster:

FVP_Implementation6

FVP_Implementation7

My cluster now has 3 acceleration resources available.

FVP_Implementation8

It´s time to accelerate some virtual machines…. I chose Write Back with Local and Remote peer: writes are acknowledged from flash and replicated to the peers before being destaged to the array (see the sketch after the screenshots below).

FVP_Implementation9

FVP_Implementation10
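
For context, Write Back means a write is acknowledged as soon as it is committed to the local flash device and replicated to the configured peer hosts; it is destaged to the backing array asynchronously. The following is a purely conceptual Python sketch of that flow, not FVP's actual implementation:

```python
# Purely illustrative sketch of the Write Back idea with local and remote peers:
# a write is acknowledged once it sits on the local flash device and on the
# replica peers, and is flushed to the backing array later. Not FVP code.
class WriteBackCache:
    def __init__(self, local_flash, peers, backing_array):
        self.local_flash = local_flash        # acceleration resource on this host
        self.peers = peers                    # e.g. [local_peer, remote_peer]
        self.backing_array = backing_array    # the FC datastore behind the cache
        self.dirty = []                       # blocks not yet destaged

    def write(self, block, data):
        self.local_flash.put(block, data)     # commit to local SSD first
        for peer in self.peers:               # replicate for fault tolerance
            peer.put(block, data)
        self.dirty.append((block, data))
        return "ACK"                          # VM sees flash latency, not array latency

    def destage(self):
        # runs asynchronously in the background
        while self.dirty:
            block, data = self.dirty.pop(0)
            self.backing_array.write(block, data)
```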

I created another instance of the VMware IO Analyzer appliance and configured Write Back with Local and Remote peer, too.

FVP_Implementation11

This is the result of a quick Max Read IOPS test from the IO Analyzer appliance: 76,537 IOPS… nice 🙂

The PernixData FVP solution gives me more IOPS (read and write) and much better latency from my IBM DS3500 FibreChannel storage at home. I definitely recommend this solution to everyone who has shared storage!

This finishes Stage XIII: PernixData FVP for my HomeLab. The next story is already available: Stage XIV: APC vSphere Integration