vSAN and Fault Domains, aka Rack Awareness

Keeping your virtual workloads up and running at all times while also providing back-end data resiliency is key to any VMware vSphere deployment. This is true whether your shared-storage model consists of a “traditional” three-tier architecture (host/fabric/storage) or you leverage Hyper-Converged Infrastructure (HCI) to consolidate and provide compute/storage resources. How you accomplish this task, though, is different. With a traditional storage array you have redundant controllers fronting your disk subsystem, or, when scaling, you might place multiple controllers across cabinets in a “cluster” configuration. With HCI/vSAN the concepts are still basically the same, but you are now leveraging both the hardware (compute/storage nodes) and the software to logically place your data across cabinets. In vSAN, this means leveraging Fault Domains for rack awareness.
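To make the rack-awareness rule concrete, here is a minimal Python sketch of the placement constraint (an illustrative model only, not vSAN's actual placement algorithm): a RAID-1 mirrored object with Failures To Tolerate (FTT) = n needs 2n+1 fault domains, because it has n+1 data replicas plus n witness components, and vSAN will not place two components of the same object in the same fault domain.

```python
# Illustrative model of fault-domain-aware component placement.
# A sketch of the placement rule, NOT vSAN's real algorithm.

def fault_domains_required(ftt: int) -> int:
    """RAID-1 mirroring needs 2*FTT+1 fault domains:
    FTT+1 data replicas plus FTT witness components."""
    return 2 * ftt + 1

def place_components(ftt: int, racks: list[str]) -> dict[str, str]:
    """Assign each replica/witness component to a distinct rack, so the
    loss of any single rack still leaves a quorum of components intact."""
    needed = fault_domains_required(ftt)
    if len(racks) < needed:
        raise ValueError(f"FTT={ftt} needs {needed} fault domains, have {len(racks)}")
    components = [f"replica-{i}" for i in range(ftt + 1)] + \
                 [f"witness-{i}" for i in range(ftt)]
    return dict(zip(components, racks))

# FTT=1 across three racks: two replicas and one witness, one per rack.
placement = place_components(ftt=1, racks=["rack-A", "rack-B", "rack-C"])
```

With fault domains defined per rack, losing an entire cabinet counts as a single failure, which is exactly what the FTT policy is sized to absorb.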

[Read more…]

Deploying a 2-Node ROBO #vSAN Cluster

While Hyper-Converged Infrastructures such as Nutanix and VMware’s vSAN are popular topics in changing the dynamics of how compute and storage resources are consumed in a primary datacenter, one use case that sometimes gets overlooked is organizations that have or support a remote or branch office (referred to as ROBO going forward).

VMware addressed this customer need in the v6.1 release of vSAN, supporting 2-Node + Witness configurations, and has continued to introduce new enhancements and features since. Most recently, v6.5 added the ability to “Direct Connect” the nodes at the ROBO location, bypassing the need for a switch (at least for vSAN connectivity) to be deployed.

While setting up vSAN via the vSphere Web Client is straightforward, there is a bit of “plumbing” that needs to be accomplished (on both the physical networking and ESXi networking side) to really get this use case up and running. Let’s see how it’s done!
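Before diving into the plumbing, it helps to see why the witness exists at all in a 2-node cluster. Here is a toy Python model (illustrative only, not VMware code) of the vote-based quorum idea: each object's components carry votes, and the object stays accessible only while strictly more than 50% of its votes sit on reachable hosts.

```python
# Toy model of vote-based quorum in a 2-Node + Witness vSAN setup.
# Illustrative only -- real vSAN tracks votes per object component.

def object_accessible(reachable: set[str], votes: dict[str, int]) -> bool:
    """An object stays accessible while strictly more than 50% of its
    total component votes are on reachable hosts."""
    total = sum(votes.values())
    live = sum(v for host, v in votes.items() if host in reachable)
    return live * 2 > total

# One mirrored object: a replica on each ROBO node, plus a witness
# component on the witness appliance, one vote apiece.
votes = {"node-1": 1, "node-2": 1, "witness": 1}

survives_node_failure = object_accessible({"node-2", "witness"}, votes)
survives_witness_loss = object_accessible({"node-1", "node-2"}, votes)
survives_double_fault = object_accessible({"node-2"}, votes)
```

With only two data nodes, the witness's vote is what breaks the tie: lose one node and the surviving node plus witness still hold 2 of 3 votes, but a lone node cannot reach quorum by itself.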

[Read more…]

Configuring vSAN iSCSI Targets

Test…Test…Test…This thing on?

Yes, it has been a while since I actually posted something here, so I thought I would kick out something I have recently been playing with in my home lab and share some thoughts: the ability in vSAN 6.5 to present physical or virtual guests with an iSCSI target served from a vSAN cluster.

Now some might be thinking to themselves, “Isn’t the whole idea of HCI to get AWAY from the concept of provisioning storage on a per-LUN basis?” And for those thinking that, you are correct! Sadly, that utopia doesn’t quite exist in the real world. Yet. I still have conversations with customers that have a few holdout workloads or requirements for a bare-metal server or some sort of in-guest iSCSI initiator (shivers) and see these as a roadblock or limitation to moving to HCI.
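Conceptually, the feature boils down to a target (an IQN bound to a vmkernel interface) exposing one or more LUNs, where each LUN is itself a vSAN object and therefore carries its own storage policy. Here is a rough Python sketch of that data model; all the names and values are hypothetical (the real service is configured through the Web Client), but the shape mirrors the idea.

```python
# Hypothetical data model of a vSAN iSCSI target configuration.
# Names/values are illustrative; this is not the vSAN API.
from dataclasses import dataclass, field

@dataclass
class Lun:
    lun_id: int
    size_gb: int
    storage_policy: str  # each LUN is a vSAN object, so it gets a policy

@dataclass
class IscsiTarget:
    alias: str
    iqn: str
    network_interface: str  # vmkernel port the target listens on
    luns: list[Lun] = field(default_factory=list)

    def add_lun(self, lun: Lun) -> None:
        """LUN IDs must be unique within a target."""
        if any(existing.lun_id == lun.lun_id for existing in self.luns):
            raise ValueError(f"LUN ID {lun.lun_id} already in use")
        self.luns.append(lun)

# Example: one LUN for a bare-metal host, protected like any vSAN object.
target = IscsiTarget(
    alias="bare-metal-sql",
    iqn="iqn.1998-01.com.vmware:vsan-target-01",
    network_interface="vmk1",
)
target.add_lun(Lun(lun_id=0, size_gb=500, storage_policy="FTT=1"))
```

The interesting bit is that last field: because the LUN is a vSAN object, the bare-metal consumer inherits vSAN's policy-based resiliency even though it's talking plain iSCSI.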

[Read more…]

Capacity Expansion & Disk Group Design Decisions–All Flash vSAN

One of the things I like/enjoy the most in my job as a consultant is working with people to help assist in the design process to come up with a solution that solves a specific customer challenge or meets design-requirement criteria. While working on these projects there usually is more than one solution or configuration that fits the stated needs; it comes down to a process of weighing the pros and cons or matching the project requirements for a given solution.

This was evident recently when I was working on a VMware vSAN design for a customer. A conversation occurred around the design and layout of the Disk Group construct that vSAN leverages to create the underlying datastore. Now these considerations are typically straightforward, but with the release of vSAN 6.2 and the inclusion of deduplication/compression for All-Flash (AF) implementations, there are both technical and operational decisions to take into account. But before we get into that, here is a quick primer on vSAN configurations.
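As a rough companion to that primer, here is a small Python sketch of how disk-group layout drives raw capacity. The per-host limits are vSAN's documented maximums (up to 5 disk groups per host, each with 1 cache device and 1–7 capacity devices); everything else (host counts, drive sizes) is an assumed example, and this is deliberately not a sizing tool.

```python
# Rough raw-capacity sketch for an all-flash vSAN cluster.
# NOT a sizing tool: ignores slack space, dedup/compression savings,
# FTT overhead, and the cache tier (cache never counts toward capacity).

MAX_DISK_GROUPS_PER_HOST = 5
MAX_CAPACITY_DEVICES_PER_GROUP = 7

def raw_capacity_tb(hosts: int, disk_groups: int,
                    capacity_devices: int, device_tb: float) -> float:
    """Raw datastore capacity: capacity devices only, per vSAN limits."""
    if not 1 <= disk_groups <= MAX_DISK_GROUPS_PER_HOST:
        raise ValueError("1-5 disk groups per host")
    if not 1 <= capacity_devices <= MAX_CAPACITY_DEVICES_PER_GROUP:
        raise ValueError("1-7 capacity devices per disk group")
    return hosts * disk_groups * capacity_devices * device_tb

# Two layouts with identical raw capacity but different operational
# characteristics: a cache-device failure takes down its whole disk
# group, and with AF dedup the disk group is also the dedup domain,
# so more, smaller groups shrink both failure and maintenance impact.
one_big_group    = raw_capacity_tb(hosts=4, disk_groups=1, capacity_devices=4, device_tb=1.92)
two_small_groups = raw_capacity_tb(hosts=4, disk_groups=2, capacity_devices=2, device_tb=1.92)
```

That trade-off (fewer large groups vs. more small ones at the same raw capacity) is precisely the design decision the rest of this post digs into.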

[Read more…]