What’s In Your Lab?

From my first involvement with VMware technologies I have been running some sort of "home lab" to help me learn or test new functionality in one of their products. From the initial stages of running VMware Server on an old hand-me-down tower server from work, to my first true lab built on white box AMD hardware running vSphere, having your own access to gear takes your training and education to a whole different level. Fast forward five years and my home lab looks drastically different from where I started, or even from where I thought it would be. From towers, to Intel NUCs, to a NUC management cluster with rack mounts, and now fully committed to rack mounts, your lab may start in one place, but as it changes and adapts it can take you and your career somewhere else entirely.

Logical and Physical Layout

As one might guess, I leverage my lab to run, test, and learn the latest VMware technologies (vSphere/NSX/Horizon View/vSAN/SRM/etc.) to get a stronger sense of the ins and outs of the products. Additionally, third-party ecosystem products such as Veeam, F5, and Cohesity are deployed in the lab for the same reason. To make sure these products look and feel like they are running in a production environment, vSphere design "best practices" have been implemented where and when I can. Most notable is the cluster layout: separate clusters for Management, Compute, and Disaster Recovery.

Figure 1 – Logical vSphere Clusters
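
For the curious, the cluster split is easy to see programmatically. Below is a minimal pyVmomi sketch (not my actual tooling) that connects to vCenter and prints each cluster with its member hosts; the vCenter hostname and credentials are placeholders you would swap for your own.

```python
# Minimal sketch: connect to vCenter and print each cluster with its member hosts.
# Assumes pyVmomi is installed; hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        hosts = ", ".join(h.name for h in cluster.host)
        print(f"{cluster.name}: {hosts}")  # e.g. Management, Compute, DR
    view.Destroy()
finally:
    Disconnect(si)
```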

As mentioned in the opening and discussed in a previous blog post HERE, I made the switch from the "traditional" Baby Dragon/MicroATX lab build to rack mount units. This was mostly due to the ease of working on the systems and the amount of space the lab was beginning to take up on shelving units in the garage. Towards the end of last year, with the Management, Compute, and DR clusters all moved to rack mounts, the lab was on the move to dedicated colo space. Working with a local provider, WOWRack, I was able to secure a pretty decent deal on a 21U cabinet and 8 amps of power. As you can see below, the gear all fits, albeit in some tight quarters.

Figure 2 – Physical Rack Layout

Gear…Gear…Gear…

So, what makes up the actual gear? The Management and DR server builds will look familiar to long-term VMware home lab aficionados, but the compute nodes are a bit different.

The management cluster is essentially a set of rack-mounted Baby Dragon builds; nothing special here other than the Supermicro 1U chassis. Storage for the cluster is provided by a Synology DS1518+ unit. As the name implies, this is where the vCenter Server, domain controllers, file servers, and other core services reside for the lab.

Table 1 – Management Cluster Hardware

Component     Make/Model
Chassis       Supermicro SuperChassis 813MTQ-350CB
Motherboard   Supermicro X9SCM-F MicroATX
Processor     Intel Xeon E3-1220 @ 3.10GHz (quad core)
Memory        32GB Kingston DDR3 1600MHz (4 x 8GB)

The compute cluster is where the heavy lifting happens. These boxes are built with beefier hardware and higher RAM specifications. With the included 10GbE LoM ports, these boxes have been my first foray into 10GbE networking in the lab. Alongside the networking, the primary storage for the cluster is provided by VMware vSAN, using Intel NVMe cards for the cache tier and Transcend SSDs for capacity (a quick sketch of how the resulting disk groups can be inspected follows the table below).

Table 2 – Compute Cluster Hardware

Component           Make/Model
Chassis             Supermicro SuperChassis 822T-400LPB
Motherboard         Supermicro X9SRH-7TF
Processor           Intel Xeon E5-2603 @ 1.80GHz (quad core)
Memory              64GB Kingston DDR3 1600MHz (8 x 8GB)
RAID Controller     LSI 9207-8i
vSAN Cache Tier     Intel 750 NVMe PCIe
vSAN Capacity Tier  2 x 512GB Transcend SSD
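
Since the disk group layout is what makes this cluster interesting, here is a rough pyVmomi sketch of how the cache and capacity devices could be inspected per host. It reuses the connection from the earlier sketch, the "Compute" cluster name is a placeholder, and the property path follows my reading of the vSAN host config API, so treat it as a starting point rather than gospel.

```python
# Rough sketch: print each compute host's vSAN disk groups (cache + capacity devices).
# Assumes the pyVmomi connection "si" from the earlier sketch; "Compute" is a placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    if host.parent.name != "Compute":
        continue
    vsan_cfg = host.configManager.vsanSystem.config
    if not vsan_cfg or not vsan_cfg.storageInfo:
        continue
    for dg in vsan_cfg.storageInfo.diskMapping:
        cache = dg.ssd.canonicalName
        # In an all-flash config the capacity SSDs still show up under "nonSsd".
        capacity = [d.canonicalName for d in dg.nonSsd]
        print(f"{host.name}: cache={cache} capacity={capacity}")
view.Destroy()
```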

This will look familiar: again, a rack-mounted Baby Dragon configuration. Everything is the same as the Management cluster except for the use of VMware vSAN as this cluster's primary storage resource. To accomplish that, the lone PCIe slot holds an LSI 9207-8i RAID controller fronting 64GB and 256GB Transcend SSD drives.

Table 3 – DR Cluster Hardware

Component           Make/Model
Chassis             Supermicro SuperChassis 813MTQ-350CB
Motherboard         Supermicro X9SCM-F MicroATX
Processor           Intel Xeon E3-1220 @ 3.10GHz (quad core)
Memory              32GB Kingston DDR3 1600MHz (4 x 8GB)
RAID Controller     LSI 9207-8i
vSAN Cache Tier     1 x 64GB Transcend SSD
vSAN Capacity Tier  1 x 256GB Transcend SSD

Plumbing or Networking?

Tying all the server assets together is a pair of switches, one from Cisco and one from Dell. The Cisco SG300-28 is used for IPMI connectivity to each of the nine hosts, while the Dell N4032F is the primary data mover in the lab. With its 24 x 10GBase-T ports, this switch is perfect for the all-flash vSAN configuration of the Compute cluster, and it also provides the core networking/VLANs for the remaining clusters. To provide a layer of edge security, a SonicWall NSA 220 keeps the bad peeps out and provides VPN access to the lab. From a vSphere perspective, everything is held together with the vSphere Distributed Switch (VDS).

Figure 3 – Example VDS Layout
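
If you want to sanity check the port group and VLAN layout from the API side, a rough pyVmomi sketch like the one below can dump each VDS port group and its VLAN tag. It assumes the connection from the first sketch; port group names and VLAN IDs will obviously vary from lab to lab.

```python
# Rough sketch: dump each Distributed Switch port group and its VLAN configuration.
# Assumes the pyVmomi connection "si" from the first sketch.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
for pg in view.view:
    vlan = pg.config.defaultPortConfig.vlan
    if isinstance(vlan, vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec):
        vlan_desc = f"VLAN {vlan.vlanId}"
    elif isinstance(vlan, vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec):
        vlan_desc = "trunk"
    else:
        vlan_desc = "other/private VLAN"
    print(f"{pg.config.distributedVirtualSwitch.name} / {pg.name}: {vlan_desc}")
view.Destroy()
```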

Wrapping Up

If you have made it this far, thanks for reading. If you have any questions about the setup or want to talk lab gear, give me a shout on Twitter or drop me an email.

Special shout out to Erik Shanks ( Blog / Twitter ) for this blog's inspiration. 🙂

-Jason
