Migrating From a 2-Node to a 3-Node vSAN Cluster

A few months back I put together a post outlining the deployment of a 2-node vSAN cluster (located HERE). Just like in a customer scenario, a 2-node cluster may simply not provide enough resources and there is a need to expand. My lab has proven to fall into that category: my need for additional compute and storage resources at my Secondary/DR site has grown, and a third host is being added. This post will step through the straightforward process of “breaking” the 2-node configuration.

[Read more…]

What’s In Your Lab?

From my first involvement with VMware technologies I have been running some sort of “Home Lab” to help me learn or test new functionality in one of their products. From the initial stages of running VMware Server on an old hand-me-down tower server from work, to my first true lab built on white-box AMD hardware running vSphere, having your own access to gear takes your training/education to a whole different level. Fast forward five years and my home lab looks drastically different from where I started, or even from where I thought I would be. From towers, to Intel NUCs, to a NUC management cluster alongside rack mounts, and now fully committed to rack mounts: your lab may start in one place, but as it changes and adapts it can take you and your career somewhere else entirely.

[Read more…]

VSAN Upgrades–NVMe & SATADOM


Over the last 18 months or so I put together several posts around configuring/designing/implementing VMware’s VSAN hyper-converged product, both in my lab and working with customers. Almost a year ago, when VSAN added support for an All-Flash (AF) configuration, I updated my lab to ditch the spinning disks and move to an all-flash model. I thought I was set and good to go, but like most things you can’t leave well enough alone. Over the last few months I made a few tweaks and changes to the lab: I added Intel PCIe flash devices for the write cache tier and moved from using USB drives for the ESXi install to SATADOMs on the hosts.

I Feel the Need…The Need for Speed…

First things first: everyone seems to care about IOPS numbers, so we will start with PCIe flash. After doing some research/digging on PCIe cards I settled on the Intel 750 series. In an effort not to break the bank, and not needing a large write tier, I went with the 400GB card for each of my three VSAN hosts. While the more expensive big brother of the 750 series (the Intel P3xxx series) is on the VSAN HCL, these cards worked without issue right out of the box. One thing of note: I did update the inbox driver to one provided by Intel that Pete Koehler (blog / twitter) recommended for overall performance gains.
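If you want to double-check which NVMe driver VIB each host ends up with before and after the swap, here is a minimal sketch that runs esxcli over SSH. It assumes SSH is enabled on the hosts, and the hostnames and credentials shown are purely hypothetical placeholders.

```python
# Minimal sketch: list the NVMe-related VIBs installed on each host.
# Assumes SSH is enabled; hostnames/credentials below are hypothetical.
import paramiko

HOSTS = ["esxi01.lab.local", "esxi02.lab.local", "esxi03.lab.local"]

def nvme_vibs(host: str, user: str = "root", password: str = "changeme") -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    # List installed VIBs and keep only the NVMe-related entries.
    _, stdout, _ = client.exec_command("esxcli software vib list | grep -i nvme")
    output = stdout.read().decode()
    client.close()
    return output

for host in HOSTS:
    print(f"--- {host} ---")
    print(nvme_vibs(host))
```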

With the drivers updated and the VSAN Datastore quickly reconfigured, it was time to do some testing. For a testing model I leveraged three VMs, one on each ESXi host in the cluster, with IOmeter generating the workload. While synthetic workloads are not the best method for capturing true “real world” performance numbers, IOmeter met the needs of the details I wanted to capture. For the workload I used a configuration file based on a 32K block size with 50% reads and 50% writes. I ran the workload three times on each VM at the same time, and the table below details the averages:
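As a side note for anyone scripting the number crunching, a minimal sketch of averaging the per-run IOmeter exports could look like the following. The CSV column names and file naming here are assumptions on my part; adjust them to match whatever your IOmeter export actually produces.

```python
# Minimal sketch: average IOmeter results across runs and VMs.
# Assumes each run was exported to CSV with (hypothetical) columns
# "IOps", "MBps" and "Avg_Latency_ms"; one CSV per run per VM.
import csv
import glob
from statistics import mean

def load_metric(path, column):
    """Read one numeric column from an IOmeter-style CSV export."""
    with open(path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f) if row.get(column)]

results = {"IOps": [], "MBps": [], "Avg_Latency_ms": []}

# Hypothetical file layout: results/vm1_run1.csv ... results/vm3_run3.csv
for path in glob.glob("results/vm*_run*.csv"):
    for column in results:
        results[column].extend(load_metric(path, column))

for column, values in results.items():
    if values:
        print(f"{column}: {mean(values):.1f} (average of {len(values)} samples)")
```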

[Read more…]

All Quiet on the Server Front

During the last round of home lab host upgrades (post HERE) I moved away from the traditional ATX mid-tower cases I had been using, passed on the current trend of micro-ATX or Mac Mini/NUC builds (though I recently purchased NUCs for a 2-node management cluster), and went with rack mount servers. So far I have no regrets in making that choice, as working on the systems has been far simpler than in the past: just unhook some cables and slide them out.

The trade-off I made for this choice is that these systems put off a far greater amount of noise than my previous systems. With 4 x 80mm fans per host along with active CPU coolers, they put off a decent hummm. While the systems passed the wife noise factor since they sit out in the garage, the hummm grabbed my attention each and every time I stepped in there. It was like my own version of “The Tell-Tale Heart”, or at the very least my adult ADHD kicking in.

I set out to do some research to see what options were available to me, specifically for the SuperMicro server chassis. Hitting the Googles, I stumbled across SuperMicro’s System Fan Matrix document, located HERE. Since I have SC822 model systems, it showed that they use a standard-size 80 x 80 x 25mm fan. The stock fan spins at 3,700 RPM, moves 48.5 CFM of air, and is rated at 36 dBA. From the onboard IPMI interface of my motherboard I could see that my CPU temp hovered around 40 degrees Celsius with the fans spinning at 3,000 RPM.
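For a rough sanity check on what a quieter replacement fan might cost in airflow, here is a minimal sketch using the fan affinity laws as a rule of thumb (airflow scales roughly linearly with RPM, and noise changes by roughly 50 times the log of the RPM ratio). The 2,500 RPM replacement speed is just a hypothetical example, not a specific fan model.

```python
import math

# Stock SC822 fan figures from the SuperMicro fan matrix (per the post).
STOCK_RPM = 3700
STOCK_CFM = 48.5
STOCK_DBA = 36.0

def estimate(replacement_rpm: float) -> tuple[float, float]:
    """Rough estimate of airflow and noise at a different fan speed,
    using the fan affinity laws: flow ~ RPM, noise ~ 50 * log10(RPM ratio)."""
    ratio = replacement_rpm / STOCK_RPM
    cfm = STOCK_CFM * ratio
    dba = STOCK_DBA + 50 * math.log10(ratio)
    return cfm, dba

# Hypothetical quieter fan spinning at 2,500 RPM.
cfm, dba = estimate(2500)
print(f"~{cfm:.1f} CFM at an estimated {dba:.1f} dBA")
```

By that rough math, a fan at 2,500 RPM would move around two-thirds of the stock airflow while shaving several dBA off the noise, which is the kind of trade-off the rest of this post weighs against keeping the CPUs cool.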

[Read more…]