A few months back I put together a post outlining the deployment of a 2-Node vSAN cluster (located HERE). Just like in a customer scenario, a 2-Node cluster may not provide enough resources, and there is a need to expand. My lab has fallen into that category: my need for additional compute and storage resources at my Secondary/DR site has grown, and a third host is being added. This post will step through the straightforward process of “breaking” the 2-Node configuration.
What’s In Your Lab?
From my first involvement with VMware technologies I have been running some sort of a “Home Lab” to help me learn or test new functionality in one of their products. From the initial stages of running VMware Server on an old hand-me-down tower server from work, to my first true lab built on white-box AMD hardware running vSphere, having your own access to gear takes your training/education to a whole different level. Fast forward five years and my home lab looks drastically different from where I started, or even from where I thought I would be. From towers, to Intel NUCs, to a NUC management cluster with rack mounts, and now fully committed to rack mounts: your lab may start in one place, but it will take you and your career somewhere else entirely as it changes and adapts.
vSAN and Fault Domains, aka Rack Awareness
Keeping your virtual workloads up and running at all times while also providing back-end data resiliency is key to any VMware vSphere deployment. This is true whether your shared-storage model consists of a “traditional” three-tier architecture (host/fabric/storage) or you leverage Hyper-Converged Infrastructure (HCI) to consolidate and provide compute/storage resources. How you accomplish this task, though, is different. With a traditional storage array you have redundant controllers front-ending your disk subsystem, or if scaling you might place multiple controllers across cabinets in a “cluster” configuration. With HCI/vSAN the concepts are still basically the same, but you are now leveraging both the hardware (compute/storage nodes) and the software to logically place your data across cabinets. In vSAN this means leveraging Fault Domains for rack awareness.
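To make the rack-awareness idea concrete, here is a minimal sketch (in Python, purely illustrative, not vSAN’s actual placement algorithm) of the constraint Fault Domains enforce: each copy of an object must land in a different fault domain, so losing an entire rack costs you at most one copy. The host and rack names are made up for the example.

```python
# Toy illustration of fault-domain-aware placement (NOT vSAN's real
# algorithm): each replica of an object is placed in a distinct
# fault domain (rack), so a whole-rack failure loses at most one copy.

def place_replicas(hosts_by_domain, count):
    """Pick `count` hosts, each from a different fault domain."""
    if count > len(hosts_by_domain):
        raise ValueError("not enough fault domains for requested replicas")
    placement = []
    for domain, hosts in list(hosts_by_domain.items())[:count]:
        placement.append((domain, hosts[0]))  # any host in the rack will do
    return placement

# Example: FTT=1 mirroring needs 2 data replicas + 1 witness,
# i.e. components spread across 3 fault domains.
racks = {
    "rack-a": ["esx01", "esx02"],
    "rack-b": ["esx03", "esx04"],
    "rack-c": ["esx05", "esx06"],
}
print(place_replicas(racks, 3))
# -> [('rack-a', 'esx01'), ('rack-b', 'esx03'), ('rack-c', 'esx05')]
```

The point of the sketch is simply that the domain, not the host, is the unit of failure the software reasons about; with only two racks, the third component has nowhere resilient to go, which is exactly why vSAN requires at least three fault domains for FTT=1.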
Notes from the Field–vSAN SBPM Tags and VR/SRM
Some of my favorite posts to write and put together are those under the “Notes from the Field” title/classification. The reason is that these posts come from my experiences with clients, helping solve a business requirement or design challenge that I am sure others have faced or are facing. This time around I am working with a customer on their Business Continuity/Disaster Recovery (BC/DR) initiative. Always a fun topic!
High Level Architecture
The customer I have been working with is already down the path of HCI, specifically with vSAN ROBO edition for some of their remote locations. When they went looking for both a primary storage uplift at their production site and a disaster recovery strategy, vSAN was an easy choice. For the replication, or “data mover,” task, vSphere Replication will be leveraged, tied with Site Recovery Manager (SRM) as the orchestration engine.