Managing Veeam Agents in Backup & Replication 9.5U3

Over the years, Veeam’s Backup & Replication product has set the bar for what backup products could and should be in the virtualization space, whether VMware or Microsoft. But while Veeam was a hard act to follow in the pure virtualization crowd, there was still a need to back up and manage the physical servers that existed in one’s environment. For years, this and tape support were always at the top of the list of “must-haves” for future releases of the product. Veeam has worked hard to address those requests over the last few major and minor releases, but one thing stuck out: when the Veeam Agents for Windows/Linux were announced, they were standalone products that didn’t integrate with the Veeam Backup & Replication management console. This forced some “one-off” management of the agents that was a bit cumbersome. Well, that has all changed with the release of Update 3 for Veeam B&R v9.5: agent management and job creation can now all be handled via the console. This post will give a quick walkthrough of how to create the needed Protection Group objects as well as a backup job schedule, but for further reading and understanding, be sure to give the Veeam Agent Management Guide and the Veeam Agent for Microsoft Windows User Guide a look.

[Read more…]

Notes from the Field – vSAN Virtual Network Design

Virtual networking always plays a significant role in any VMware vSphere design, and even more so if you are leveraging IP-based storage such as NAS or iSCSI. If you are using VMware’s vSAN product, I think it “turns the dial to 11,” as internode communication becomes that much more important versus host-to-target communication. A few months back (based on the date of this post), VMware released an updated vSAN Network Design document that I strongly encourage everyone to read if you are looking to run, or are already running, vSAN. For this post, however, I am going to dive into what I have used in the field for customer deployments around NIC teaming and redundancy, as well as Network I/O Control (NIOC) on the vSphere Distributed Switch (vDS).

Example Scenario

To start, let’s put together a sample scenario to create context around the “how” and “why.” As suggested in the vSAN Network Design document, all the customer designs I have been involved with have incorporated a single pair of ten-gigabit Ethernet (10GbE) interfaces for host-uplink connectivity to a Top-of-Rack (ToR) or core switch, using either TwinAx or 10GBase-T for the physical layer. This is accomplished using a pair of dual-port Intel X520- or X540-based cards, which allows for future growth if network needs arise down the road. The uplink ports are configured as trunk ports (if using Cisco) or tagged ports (if using Brocade/Dell/HP/etc.), and the required VLANs for the environment are passed down to the hosts. On the virtual side, a single vDS is created, and each of the hosts in the vSAN cluster is added to the vDS. The required port groups are created and configured with the relevant VLAN ID, NIC teaming, and failover policy (more on this later). The following figure provides a visual representation:
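Since both 10GbE uplinks end up carrying every traffic type, NIOC shares are what keep vSAN traffic healthy under contention. The key idea is that shares only matter when the link is saturated, and each traffic type then gets a slice proportional to its share value. Here is a minimal Python sketch of that proportional math; the share values below are illustrative, not VMware defaults:

```python
# Sketch of how NIOC shares divide an uplink under contention.
# Share values are hypothetical examples, not VMware defaults.

def nioc_allocation(shares, link_speed_gbps):
    """Return each traffic type's share of the uplink (in Gbps) when
    all listed traffic types are actively contending at once.
    Bandwidth is divided proportionally to the configured shares."""
    total = sum(shares.values())
    return {name: link_speed_gbps * s / total for name, s in shares.items()}

# Hypothetical share assignments for a 10GbE uplink on the vDS
shares = {
    "vSAN": 100,
    "vMotion": 50,
    "Virtual Machine": 50,
    "Management": 20,
}

for name, gbps in nioc_allocation(shares, 10.0).items():
    print(f"{name}: {gbps:.2f} Gbps minimum under contention")
```

With these example values, vSAN is guaranteed roughly 4.5 Gbps of the 10GbE link even when everything else is bursting; when the link is idle, any traffic type is free to use the full pipe.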

[Read more…]

Migrating From a 2-Node to a 3-Node vSAN Cluster

A few months back I put together a post outlining the deployment of a 2-node vSAN cluster (located HERE). Well, just like in a customer scenario, a 2-node cluster might simply not provide enough resources, and there is a need to expand. My lab has proven to fall into that category, as my need for additional compute and storage resources at my secondary/DR site has grown, and a third host is being added. This post will step through the straightforward process of “breaking” the 2-node configuration.

[Read more…]

vSAN and Fault Domains, aka Rack Awareness

Keeping your virtual workloads up and running at all times while also providing back-end data resiliency is key to any VMware vSphere deployment. This is true whether your shared-storage model consists of a “traditional” three-tier architecture (host/fabric/storage) or you leverage Hyper-Converged Infrastructure (HCI) to consolidate and provide compute/storage resources. How you accomplish this task, though, is different. With a traditional storage array you have redundant controllers front-ending your disk subsystem, or if scaling, you might place multiple controllers across cabinets in a “cluster” configuration. With HCI/vSAN the concepts are still basically the same, but you are now leveraging both the hardware (compute/storage nodes) and the software to logically place your data across cabinets. In vSAN this means leveraging Fault Domains for rack awareness.
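The sizing math behind rack awareness is worth spelling out. For mirrored (RAID-1) vSAN objects, tolerating n failures requires n+1 data replicas plus n witness components, and each must land in its own fault domain, giving the familiar 2n+1 rule. A quick Python sketch (the function name is mine, but the rule itself comes from how vSAN places mirrored components):

```python
# Minimum fault domains (e.g. racks) a vSAN cluster needs so a
# RAID-1 (mirroring) object can survive `ftt` failures.
# Components: (ftt + 1) data replicas + ftt witnesses, each in its
# own fault domain -> 2 * ftt + 1 total.

def min_fault_domains(ftt):
    """Minimum fault domains for Failures To Tolerate = ftt (mirroring)."""
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    return 2 * ftt + 1

for ftt in (1, 2, 3):
    print(f"FTT={ftt} -> at least {min_fault_domains(ftt)} fault domains")
# FTT=1 -> at least 3 fault domains
# FTT=2 -> at least 5 fault domains
# FTT=3 -> at least 7 fault domains
```

So even the common FTT=1 policy means spreading hosts across at least three racks before rack-level fault domains buy you anything.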

[Read more…]