Notes from the Field – vSAN Virtual Network Design

Virtual networking always plays a significant role in any VMware vSphere design, and even more so if you are leveraging IP-based storage like NAS or iSCSI. If you are using VMware’s vSAN product, I think it “turns the dial to 11,” as internode communication becomes that much more important than traditional host-to-target communication. A few months back (based on the date of this post), VMware released an updated vSAN Network Design document that I strongly encourage everyone to read if you are looking to deploy, or are already running, vSAN. For this post, however, I am going to dive into what I have used in the field for customer deployments: NIC teaming and redundancy, as well as Network I/O Control (NIOC) on the vSphere Distributed Switch (vDS).
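Since NIOC comes up throughout this post, here is a minimal Python sketch of the share-based idea behind it: shares only matter when the uplink is congested, and each traffic type’s worst-case slice is proportional to its share of the total. The share values below are illustrative assumptions, not the vSphere defaults, and bandwidth_under_contention is a hypothetical helper for this post, not a VMware API.

```python
# Minimal sketch of NIOC-style share allocation under contention.
# Share values below are illustrative assumptions, not vSphere defaults.

UPLINK_GBPS = 10  # single 10GbE uplink

# Hypothetical shares per system traffic type (a higher share means a larger
# slice of the uplink, but only when the link is actually congested).
shares = {
    "management": 20,
    "vmotion": 50,
    "vsan": 100,
    "virtual_machine": 50,
}

def bandwidth_under_contention(shares, link_gbps):
    """Return the worst-case bandwidth each traffic type is guaranteed
    when every type is pushing traffic at the same time."""
    total = sum(shares.values())
    return {name: round(link_gbps * s / total, 2) for name, s in shares.items()}

if __name__ == "__main__":
    for traffic, gbps in bandwidth_under_contention(shares, UPLINK_GBPS).items():
        print(f"{traffic:16s} ~{gbps} Gbps")
    # vSAN gets the largest slice here (~4.55 Gbps), which is the usual goal
    # when a host only has a single pair of 10GbE uplinks to work with.
```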

Example Scenario

To start, let’s put together a sample scenario to create context around the “how” and “why.” As suggested in the vSAN Network Design document, all the customer designs I have been involved with have incorporated a single pair of ten-gigabit Ethernet (10GbE) interfaces for the host uplink connectivity to a Top of Rack (ToR) or core switch, using either TwinAX or 10GBaseT for the physical layer. This is accomplished using a pair of dual-port Intel X520- or X540-based cards, and allows for future growth if network needs arise down the road. The uplink ports are configured as Trunk ports (if using Cisco) or Tagged ports (if using Brocade/Dell/HP/etc.), and the required VLANs for the environment are passed down to the hosts. On the virtual side, a single vDS is created, and each of the hosts in the vSAN cluster is added to the vDS. The required port groups are created and configured with the relevant VLAN ID and NIC teaming and failover policy (more on that later). The following figure provides a visual representation:

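To make the scenario a bit more concrete, here is a minimal Python sketch of the kind of single-vDS layout described above: two 10GbE uplinks and a handful of port groups, each carrying a VLAN ID and an explicit active/standby uplink order. The names, VLAN IDs, and uplink assignments are hypothetical placeholders for illustration; they are not taken from a real design, and nothing here calls a VMware API.

```python
# Hypothetical sketch of the single-vDS layout described above.
# VLAN IDs, names, and uplink ordering are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PortGroup:
    name: str
    vlan_id: int
    active_uplinks: list = field(default_factory=list)
    standby_uplinks: list = field(default_factory=list)

@dataclass
class DistributedSwitch:
    name: str
    uplinks: list
    port_groups: list = field(default_factory=list)

    def validate(self):
        """Sanity-check that every port group only references real uplinks
        and always has at least one failover path."""
        for pg in self.port_groups:
            assigned = pg.active_uplinks + pg.standby_uplinks
            assert assigned and set(assigned) <= set(self.uplinks), pg.name
            assert len(assigned) >= 2, f"{pg.name} has no failover uplink"

vds = DistributedSwitch(
    name="vds01",
    uplinks=["Uplink1", "Uplink2"],
    port_groups=[
        PortGroup("pg-mgmt",    10, ["Uplink1"], ["Uplink2"]),
        PortGroup("pg-vmotion", 20, ["Uplink1"], ["Uplink2"]),
        PortGroup("pg-vsan",    30, ["Uplink2"], ["Uplink1"]),
        PortGroup("pg-vm",      40, ["Uplink1"], ["Uplink2"]),
    ],
)
vds.validate()
```

The small validate() check mirrors the intent of the teaming and failover policy in this design: every port group should always have a second uplink to fail over to if one of the two 10GbE interfaces is lost.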

What’s In Your Lab?

From my first involvement with VMware technologies, I have been running some sort of “Home Lab” to assist me with learning or testing new functionality in one of their products. From the initial stages of running VMware Server on an old hand-me-down tower server from work, to my first true lab built on white-box AMD hardware running vSphere, having your own access to gear takes your training and education to a whole different level. Fast forward five years and my home lab looks drastically different from where I started, or even from where I thought I would be. From towers, to Intel NUCs, to a NUC management cluster with rack mounts, and now fully committed to rack mounts: your lab may start in one place, but as it changes and adapts it will take you and your career somewhere else.


vSAN and Fault Domains, aka Rack Awareness

Keeping your virtual workloads up and running at all times while also providing back-end data resiliency is key to any VMware vSphere deployment. This is true whether your shared-storage model consists of a “traditional” three-tier architecture (host/fabric/storage) or you leverage Hyper-Converged Infrastructure (HCI) to consolidate and provide compute/storage resources. How you accomplish this task, though, is different. With a traditional storage array you have redundant controllers front-ending your disk subsystem, or, if scaling, you might place multiple controllers across cabinets in a “cluster” configuration. With HCI/vSAN the concepts are still basically the same, but you are now leveraging both the hardware (compute/storage nodes) and the software to logically place your data across cabinets. In vSAN this means leveraging Fault Domains for rack awareness.
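As a rough illustration of the rack-awareness math, here is a minimal Python sketch built around the general vSAN rule that a mirrored object needs 2 x FTT + 1 fault domains, so its replicas and witness components can each land in a different rack. The rack-to-host mapping below is a hypothetical example, not a real cluster.

```python
# Minimal sketch of the vSAN fault-domain rule for mirrored (RAID-1) objects:
# to tolerate `ftt` concurrent failures you need at least 2 * ftt + 1 fault
# domains, so each replica and witness component can sit in a different rack.
# The rack layout below is a hypothetical example, not a real cluster.

racks = {
    "rack-A": ["esx01", "esx02", "esx03"],
    "rack-B": ["esx04", "esx05", "esx06"],
    "rack-C": ["esx07", "esx08", "esx09"],
}

def fault_domains_required(ftt: int) -> int:
    """Fault domains needed to tolerate `ftt` failures with mirroring."""
    return 2 * ftt + 1

def can_tolerate(racks: dict, ftt: int) -> bool:
    """True if the configured fault domains (racks) can satisfy the policy."""
    return len(racks) >= fault_domains_required(ftt)

for ftt in (1, 2):
    print(f"FTT={ftt}: need {fault_domains_required(ftt)} fault domains, "
          f"have {len(racks)} -> {'OK' if can_tolerate(racks, ftt) else 'not enough racks'}")
```

With three racks configured as fault domains, the sketch shows FTT=1 is satisfiable while FTT=2 would require five fault domains.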


Deploying a 2-Node ROBO #vSAN Cluster

While Hyper-Converged Infrastructure offerings such as Nutanix and VMware’s vSAN are popular topics for changing the dynamics of how compute and storage resources are consumed in the primary datacenter, one use case that sometimes gets overlooked is organizations that have or support a remote or branch office (referred to as ROBO going forward).

VMware addressed this customer need in the v6.1 release of vSAN by supporting 2-Node + Witness configurations, and has continued to introduce new enhancements and features since. Most recently, v6.5 added the ability to “Direct Connect” the nodes at the ROBO location, bypassing the need for a switch (at least for vSAN connectivity) to be deployed.
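To illustrate why the witness host matters in a 2-node configuration, here is a minimal Python sketch of the majority-vote idea: an object stays accessible as long as more than half of its components are still reachable. Treating every component as a single vote is a simplification for illustration only.

```python
# Simplified sketch of 2-node + witness quorum: a vSAN object stays
# accessible while more than 50% of its components are still reachable.
# One vote per component is a simplification for illustration.

components = {
    "node1": "replica",    # data copy on ROBO node 1
    "node2": "replica",    # data copy on ROBO node 2
    "witness": "witness",  # witness component on the remote witness appliance
}

def accessible(components: dict, failed: set) -> bool:
    """True if surviving components hold a strict majority of the votes."""
    surviving = len([c for c in components if c not in failed])
    return surviving > len(components) / 2

print(accessible(components, failed=set()))                  # True  - everything up
print(accessible(components, failed={"node2"}))              # True  - node1 + witness = 2 of 3
print(accessible(components, failed={"node2", "witness"}))   # False - no majority left
```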

While setting up vSAN via the vSphere Web Client is straightforward, there is a bit of “plumbing” that needs to be accomplished (on both the physical networking and ESXi networking side) to really get this use case up and running. Let’s see how it’s done!
