VSAN Upgrades–NVMe & SATADOM

Over the last 18 months or so I put together several posts around configuring/designing/implementing VMware’s VSAN hyper-converged product, both in my lab and while working with customers. Almost a year ago, when VSAN added support for an All-Flash (AF) configuration, I updated my lab to ditch the spinning disks and move to an all-flash model. I thought I was set and good to go, but like most things, you can’t leave good enough alone. Over the last few months I made a few tweaks and changes to the lab: I added Intel PCIe flash devices for the write cache tier and moved the ESXi install from USB drives to SATADOMs on the hosts.

I Feel the Need…The Need for Speed…

First things first: everyone seems to care about IOPS numbers, so we will start with PCIe flash. After doing some research/digging on PCIe cards I settled on the Intel 750 series. In an effort not to break the bank, and not needing a large write tier, I went with the 400GB card for each of my three VSAN hosts. While the 750’s more expensive big brother, the Intel P3xxx series, is on the VSAN HCL, these cards worked without issue right out of the box. One thing of note: I did update the inbox NVMe driver to a driver provided by Intel that Pete Koehler (blog / twitter) recommended for overall performance gains.
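If you want to do the same driver swap from the command line, the usual pattern is to install the vendor’s offline bundle with esxcli and reboot. A minimal sketch follows; the datastore path and bundle filename are placeholders, not the exact Intel package I used:

    # Stage the vendor offline bundle on a datastore, then install it
    # (path and bundle name are hypothetical; substitute the actual Intel download)
    esxcli software vib install -d /vmfs/volumes/datastore1/intel-nvme-driver-bundle.zip

    # After a reboot, confirm which NVMe driver VIB and adapter are in play
    esxcli software vib list | grep -i nvme
    esxcli storage core adapter list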

With the drivers updated and a quick reconfiguration of the VSAN datastore, it was time to do some testing. For a testing model I leveraged three VMs, one on each ESXi host in the cluster, and IOmeter to generate a workload. While synthetic workloads aren’t the best method for truly capturing “real world” performance numbers, IOmeter met my needs for the details I wanted to capture. For a workload I leveraged a configuration file based on a 32K block size with 50% reads and 50% writes. I ran the workload three times on each VM simultaneously, and the table below details the averages:

    Host     IOPS    Avg Response (ms)    CPU Utilization (%)
    ESX01    5827    5.43                 24.34
    ESX02    4971    6.12                 28.83
    ESX03    5324    5.74                 30.68
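As a side note, if you’d rather drive a similar mixed workload from a Linux guest’s command line, a roughly equivalent fio job (fio standing in for IOmeter here, not what I actually ran) would look something like this; the job name, size, and runtime are just illustrative:

    # 32K blocks, 50/50 random read/write, direct I/O (sizes/runtime are assumptions)
    fio --name=vsan-32k-mix --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=50 --bs=32k \
        --iodepth=32 --size=10g --runtime=300 --time_based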

From the vSphere Web Client we can take a look at the backend performance numbers for VSAN. As you can see from the screenshot below, not too bad for whitebox/consumer-grade lab gear.

[Screenshot: VSAN backend performance in the vSphere Web Client]
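If you’d rather do a quick sanity check from a host shell instead of the Web Client, the esxcli vsan namespace shows cluster membership and which disks VSAN has claimed. A small sketch:

    # Confirm the host is a healthy member of the VSAN cluster
    esxcli vsan cluster get

    # List the disks claimed by VSAN (cache vs. capacity tier)
    esxcli vsan storage list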

As an additional bonus, with the upgrade to VSAN 6.2 and the ability to leverage deduplication, I’m getting a decent amount of capacity back on the cluster. It’s currently sitting at a 4x data reduction.

[Screenshot: VSAN capacity view showing the 4x data reduction]

Again, not bad for home lab gear. To really see the full data reduction, a fourth host would need to be added to the cluster to take advantage of erasure coding/RAID-5. Hmmmmmm…. For context on why that’s tempting, the quick capacity math is below.
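Here’s the back-of-the-napkin arithmetic on raw capacity consumed per 100 GB of VM data at FTT=1 (simple math for illustration, not VSAN output):

    # RAID-1 mirroring at FTT=1: two full copies of the data
    echo $(( 100 * 2 ))              # 200 GB raw consumed
    # RAID-5 erasure coding at FTT=1 (3+1, needs four hosts): ~1.33x overhead
    echo "100 * 4 / 3" | bc          # ~133 GB raw consumed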

USB to SATADOM

The other change to the environment was moving from booting off USB thumb drives to using SATADOMs. While not as cool as the PCIe flash cards I added, this was something I wanted to see and use in the lab. After a quick Google search to verify that my Supermicro motherboards supported SATADOM (they do, and this PAGE helped out), the following options were available:

[Screenshot: Supermicro SATADOM SKU options]

I ordered up the 64GB version from Newegg and started off on some net-new ESXi installs. Now, a few things to add here. VMware has a pair of very good articles outlining the endurance needed for SSD/USB/SATADOM devices when used either for data or for ESXi installs. For VMware VSAN 6.2 the recommendation is a SATADOM that can handle 384 terabytes written (TBW) when being used as the boot device or holding log and trace files. These Supermicro devices are rated at a paltry 68 TBW. Good enough for my lab, but I wouldn’t suggest/recommend it for production. For further details on drive endurance and minimum requirements, have a look at this VMware blog post and this VMware KB article.
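To put 68 TBW in perspective, and to stretch it further, a common trick is to point logs at another datastore so the SATADOM only absorbs boot and config writes. The daily write rate and datastore path below are assumptions for illustration, not measurements from my hosts:

    # Rough endurance math: 68 TBW at an assumed ~20 GB of writes per day
    echo $(( 68 * 1024 / 20 ))   # ~3481 days, roughly 9.5 years of life

    # Redirect ESXi logs off the SATADOM (datastore path is a placeholder)
    esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
    esxcli system syslog reload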

Thanks for reading!

-Jason
