Migrating From a 2-Node to a 3-Node vSAN Cluster

A few months back I put together a post outlining the deployment of a 2-node vSAN cluster (located HERE). Just like in a customer scenario, a 2-Node cluster may simply not provide enough resources, and there is a need to expand. My lab has proven to fall into that category: my need for additional compute and storage resources at my Secondary/DR site has grown, and a third host is being added. This post will step through the straightforward process of “breaking” the 2-Node configuration and expanding to three nodes.

But First, Some Pre-Planning

To keep the number of screenshots and the length of this post manageable, a few things have already been taken care of in the lab that you will need to plan for in your own environment as well:

  • The new host, DR-ESX03, has the latest build of ESXi installed with all relevant patches to match the other nodes in the cluster
  • I am using a vSphere Distributed Switch (as you should be) for the cluster. I have already added DR-ESX03 to the vDS
  • The needed VMkernel ports have already been created and assigned to the host (vMotion, vSAN, and Management)

With the table set, let’s get to work!

Clicking and Screenshot Time

  • Log into your vSphere Web Client with an Administrative account
  • From the Home screen Click the Host and Clusters icon:

  • Select the corresponding cluster in the Navigator panel, Click the Configure tab in the right-hand pane, and under vSAN choose Disk Management:


  • Next, we will want to remove the vSAN Disk Group created on the vSAN Witness Host (in my lab, TUK-WTN01). Highlight the associated Disk Group and Click the Remove the Disk Group icon:

  • When the Remove Disk Group dialog is displayed, Click on Yes to continue and remove the Disk Group:

  • After successfully removing the Disk Group navigate to Fault Domains & Stretched Cluster under the vSAN menu in the Configure tab:

  • From the screenshot above you can see that the Preferred and Secondary Fault Domains are still in place from the original configuration of the 2-node cluster. You will also see that the third ESXi host, DR-ESX03.lab.local, is listed but not assigned to a fault domain. Highlight/Select one of the defined Fault Domains (for this example I am using Preferred) and Click the Move Hosts into Fault Domain icon:

  • This will display the Move Hosts into Fault Domain dialog box. I have selected both DR-ESX02 (currently in the Secondary Fault Domain) and the new host, DR-ESX03. Click on OK to continue:

  • A message dialog will be displayed informing you that a Fault Domain Will Be Deleted as it will now be empty. Click on Yes to continue:

  • Each of the three hosts will now be listed in the Preferred Fault Domain:
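To make the fault domain shuffle above a little more concrete, here is a minimal conceptual sketch in Python. This is not the vSphere API, just an illustration of the logic: hosts are moved into the target Fault Domain, and any Fault Domain left empty afterwards is deleted, exactly as the "Fault Domain Will Be Deleted" dialog warns. The `move_hosts` helper and the lab host names are purely illustrative.

```python
def move_hosts(fault_domains, target, hosts):
    """Move the given hosts into the target fault domain.

    A fault domain left empty afterwards is dropped, mirroring the
    'Fault Domain Will Be Deleted' warning in the vSphere Web Client.
    """
    # Pull each host out of whatever fault domain it currently occupies
    for domain in fault_domains.values():
        for host in hosts:
            domain.discard(host)
    # Place all of them into the target fault domain
    fault_domains[target].update(hosts)
    # Keep only non-empty fault domains
    return {name: members for name, members in fault_domains.items() if members}

# Starting point: the 2-node stretched layout from my lab
domains = {
    "Preferred": {"DR-ESX01"},
    "Secondary": {"DR-ESX02"},
}
# DR-ESX03 is the new host, not yet in any fault domain
domains = move_hosts(domains, "Preferred", {"DR-ESX02", "DR-ESX03"})
print({name: sorted(members) for name, members in domains.items()})
```

After the move, only the Preferred Fault Domain remains, holding all three hosts, and Secondary is gone, which matches what the UI shows.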

  • Next, we will want to Remove the configured Witness Host. In the Stretched Cluster portion of the right-hand pane Click the Disable button:

  • In the Confirm Witness Host Removal dialog, Click on Yes for the option Remove witness host from vSAN Stretched Cluster:

  • When completed, notice the Status for the Stretched Cluster is now listed as Disabled:

  • With the Stretched Cluster functionality disabled, the last bit to clean up is the Preferred Fault Domain. To remove it, Highlight the name and Click the Delete icon:


  • With everything cleaned up from the 2-Node configuration, we will want to Create a Disk Group for vSAN with the disks on the third host. Navigate to Disk Management, Select the new host, and Click the Create New Disk Group icon:

  • From the Create Disk Group screen Select the drive that will be used for the Cache Tier and the Drive(s) that will be used for the Capacity Tier. Click on OK when complete:
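The Create Disk Group screen enforces the vSAN disk group rules for you: exactly one cache-tier device and between one and seven capacity-tier devices per disk group. Here is a small hypothetical sanity-check helper (again, not a VMware API) that encodes those same rules; the device names are made up for illustration.

```python
def validate_disk_group(cache_devices, capacity_devices):
    """Check a proposed vSAN disk group layout against vSAN's limits:
    exactly one cache device and 1-7 capacity devices per disk group."""
    if len(cache_devices) != 1:
        raise ValueError("a vSAN disk group needs exactly one cache device")
    if not 1 <= len(capacity_devices) <= 7:
        raise ValueError("a vSAN disk group needs 1-7 capacity devices")
    return {"cache": cache_devices[0], "capacity": list(capacity_devices)}

# Example layout for DR-ESX03 (illustrative device names)
group = validate_disk_group(["naa.ssd0"], ["naa.hdd0", "naa.hdd1"])
print(group["cache"])          # the single cache-tier device
print(len(group["capacity"]))  # number of capacity-tier devices
```

If you need more capacity than seven devices allow, the answer is an additional disk group on the host rather than a larger one.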

  • With the additional host and drives added, we will verify the Health of the vSAN cluster. Navigate to Monitor, Select vSAN, and then Choose Health. You will notice that both the Data and Performance Service are listed in a Failed State. This has to do with the removal of the vSAN Witness Appliance and the corresponding vSAN objects. To rectify the issue, Click the Repair Objects Immediately button:

  • After waiting a few minutes for the objects to be recreated, Select the Virtual Objects context to expand/view the vSAN Object Health:
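As a quick aside on why three nodes make the cluster whole again: with vSAN's default RAID-1 (mirroring) policy, tolerating n failures (FTT=n) requires 2n+1 fault domains, i.e. two data replicas plus a witness component for FTT=1. The 2-node cluster needed the external Witness Appliance to reach three fault domains; the 3-node cluster satisfies FTT=1 on its own, which is why the objects can be repaired in place. A sketch of that arithmetic (the helper function is mine, not VMware's):

```python
def fault_domains_required(ftt):
    """Minimum fault domains (hosts, in a standard cluster) needed for a
    RAID-1 mirroring vSAN policy tolerating `ftt` failures: 2*ftt + 1."""
    return 2 * ftt + 1

# FTT=1 needs three fault domains: two replicas plus a witness component.
print(fault_domains_required(1))
```

This is also why a 3-node cluster is the minimum for FTT=1 without a witness appliance, and why tolerating two failures (FTT=2) would take five hosts.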


Wrapping It Up

While the expansion of vSAN from a 2-Node to a 3-Node configuration isn’t something I would expect to see often in the field, it is good to know that the process is straightforward and easy to accomplish.

Thanks for reading, and if you have any questions or comments, hit me up on Twitter or drop me an email.

-Jason
