What’s In Your Lab?

From my first involvement with VMware technologies I have been running some sort of a “Home Lab” to assist me with either learning or testing new functionality in one of their products. From the initial stages of running VMware Server on an old hand-me-down tower server from work, to my first true lab built on white-box AMD hardware running vSphere, having your own access to gear takes your training and education to a whole different level. Fast forward five years and my home lab looks drastically different from where I started, or even from where I thought I would be. From towers, to Intel NUCs, to a NUC management cluster with rack mounts, and now fully committed to rack mounts, your lab may start somewhere but take you and your career to another place as it changes and adapts.

[Read more…]

VSAN Upgrades–NVMe & SATADOM


Over the last 18 months or so I put together several posts around configuring, designing, and implementing VMware’s VSAN hyper-converged product, both in my lab and working with customers. Almost a year ago, when support for an All-Flash (AF) configuration became available, I updated my lab to ditch the spinning disks and move to an all-flash model. I thought I was set and good to go, but like most things, you can’t leave well enough alone. Over the last few months I made a few tweaks and changes to the lab: I added Intel PCIe flash devices for the write cache tier and moved from USB drives to SATADOMs for the ESXi install on the hosts.

I Feel the Need…The Need for Speed…

First things first, everyone seems to care about IOPS numbers, so we will start with PCIe flash. After doing some research and digging on PCIe cards, I settled on the Intel 750 series. In an effort not to break the bank, and also not needing a large write tier, I went with the 400GB card for each of my three VSAN hosts. While the more expensive big brother of the 750 series (the Intel P3xxx series) is on the VSAN HCL, these cards worked without issue right out of the box. One thing of note: I did replace the inbox driver with a driver provided by Intel that Pete Koehler (blog / twitter) recommended for overall performance gains.

With the drivers updated and a quick reconfiguration of the VSAN datastore, it was time to do some testing. For a testing model I leveraged three VMs, one on each ESXi host in the cluster, and IOmeter to generate a workload. While synthetic workloads are not the best method for truly capturing “real world” performance numbers, for the details I wanted to capture IOmeter met those needs. For a workload metric I leveraged a configuration file based on a 32K block size, 50% read and 50% write. I ran the workload three times on each VM at the same time, and the table below details the averages:
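As a side note, tallying the averages from runs like these is straightforward. Here is a minimal sketch of the math; the VM names and IOPS figures below are placeholders for illustration, not the actual results from my lab:

```python
# Sketch: averaging three IOmeter runs per VM and summing across the cluster.
# All numbers below are made-up placeholders, not the post's actual results.
results = {
    "vm-esx01": [21500, 21900, 21700],  # IOPS from runs 1-3
    "vm-esx02": [22100, 21800, 22000],
    "vm-esx03": [21600, 22200, 21900],
}

# Average the three runs for each VM
per_vm_avg = {vm: sum(runs) / len(runs) for vm, runs in results.items()}

# Aggregate across the cluster (all three VMs ran simultaneously)
cluster_total = sum(per_vm_avg.values())

for vm, avg in sorted(per_vm_avg.items()):
    print(f"{vm}: {avg:,.0f} IOPS average over {len(results[vm])} runs")
print(f"Cluster aggregate: {cluster_total:,.0f} IOPS")
```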

[Read more…]

All Quiet on the Server Front

During the last round of home lab host upgrades (post HERE) I moved away from the traditional ATX mid-tower cases I had been using, and passed on the current trend of micro-ATX or Mac-Mini/NUC builds (though I recently purchased NUCs for a two-node management cluster), to use rack mount servers. So far I have no regrets in making that choice, as working on the systems has been far simpler than in the past: just unhook some cables and slide them out.

The trade-off I made for this choice is that these systems put off a far greater amount of noise than my previous systems. With four 80mm fans per host, along with active CPU coolers, they put off a decent hum. While the systems passed the wife noise factor, as they are resting in the garage, the hum grabbed my attention each and every time I stepped into the garage. It was like my own version of “The Tell-Tale Heart,” or at the very least my adult ADHD kicking in.

I set out to do some research to see what options were available to me, specifically for the SuperMicro server chassis. Hitting Google, I stumbled across SuperMicro’s System Fan Matrix document, located HERE. Since I have model SC822 systems, it showed they use a standard 80 x 80 x 25mm fan. The stock fan spins at 3,700 RPM, moves 48.5 CFM of air, and is rated at 36 dBA. From the onboard IPMI interface of my motherboard I could see that my CPU temp hovered around 40 degrees Celsius with the fans spinning at 3,000 RPM.
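If you want to pull those readings from the command line rather than the web interface, `ipmitool sensor` emits a pipe-delimited table that is easy to scrape. A minimal sketch of parsing it follows; the sample text mimics ipmitool’s output format, but the sensor names and values are illustrative, not readings from my actual hosts:

```python
# Sketch: extracting fan RPM and CPU temp from `ipmitool sensor`-style output.
# The sample below imitates ipmitool's pipe-delimited format; the values are
# illustrative placeholders, not real readings from the lab hosts.
sample = """\
CPU Temp         | 40.000    | degrees C  | ok
FAN1             | 3000.000  | RPM        | ok
FAN2             | 3000.000  | RPM        | ok
FAN3             | 2900.000  | RPM        | ok
FAN4             | 3000.000  | RPM        | ok
"""

def parse_sensors(text):
    """Map sensor name -> (value, unit), skipping unreadable entries."""
    readings = {}
    for line in text.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 3:
            name, value, unit = fields[0], fields[1], fields[2]
            try:
                readings[name] = (float(value), unit)
            except ValueError:
                pass  # sensor reported 'na' or similar; skip it
    return readings

readings = parse_sensors(sample)
fans = {name: value for name, (value, unit) in readings.items() if unit == "RPM"}
print(f"CPU Temp: {readings['CPU Temp'][0]:.0f} C")
print(f"Average fan speed: {sum(fans.values()) / len(fans):.0f} RPM")
```

In a live setup you would feed this the output of `ipmitool sensor` (run on the host or remotely with `-H`/`-U`/`-P`) instead of the hard-coded sample.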

[Read more…]

Holy Switch! 10GbE Switching in the Home Lab

Over the last five to six months I set out to revamp and rebuild both the compute nodes and the networking layer in my home lab. These lab upgrades seem to pop up post-VMworld (posts here and here), as the idea of running the latest and greatest software from either VMware or its partner eco-system gets my mind racing and the PayPal account to open up.

With this latest round of host upgrades, I was able to introduce 10GbE networking into the lab, as the new host servers have dual integrated Intel X540 10GbE adapters. That put me in the market for a 10GbE switch, as my current switch (a Cisco 2960) only supports 1GbE links. Jumping over to NewEgg.com, I took a quick look at the available options for 10GbE switching, and as you can imagine there are not a lot of “affordable” options out there. The two main models that jumped out were both made by Netgear: the ProSAFE XS712T (12 ports) and the ProSAFE XS708E (8 ports).

In comparing the two models, I ended up selecting the 8-port XS708E, both for cost reasons (the price per port was cheaper) and because I would need to either add two more hosts or a 10GbE Synology array in the future to really tap into the extra ports (I keep telling myself this was good reasoning). As it stands now, with my three hosts I can dedicate one 10GbE interface to VSAN traffic and the other to vMotion traffic, and still have two ports free on the switch for future expansion.
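The port math behind that choice is simple enough to sanity-check. A quick sketch follows; the list prices are placeholders for illustration, not the figures I actually paid:

```python
# Sketch: price-per-port comparison and port budget for the two switches.
# Prices are illustrative placeholders, not actual quotes.
switches = {
    "XS712T": {"ports": 12, "price": 1300},
    "XS708E": {"ports": 8, "price": 750},
}

for model, spec in switches.items():
    per_port = spec["price"] / spec["ports"]
    print(f"{model}: ${per_port:.2f} per port")

# Current usage on the 8-port XS708E: three hosts, two 10GbE links each
# (one for VSAN traffic, one for vMotion)
hosts, links_per_host = 3, 2
used = hosts * links_per_host
free = switches["XS708E"]["ports"] - used
print(f"Ports used: {used}, ports free: {free}")
```

With these assumed prices the 8-port unit comes out cheaper per port, and three hosts at two links each consume six ports, leaving the two spares mentioned above.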

[Read more…]