Managing Veeam Agents in Backup & Replication 9.5U3

Over the years, Veeam’s Backup & Replication product has set the bar for what backup products could and should be in the virtualization space, whether VMware or Microsoft. But while Veeam was a hard act to follow in the pure virtualization crowd, there was still a need to back up and manage the physical servers that existed in one’s environment. For years, this and tape support were always at the top of the list of “must-haves” for future releases of the product. Veeam has worked hard to address those requests over the last major and minor releases, but one thing stuck out: when the Veeam Agents for Windows/Linux were announced, they shipped as standalone products that didn’t integrate with the Veeam Backup & Replication management console. This forced some cumbersome “one off” management of the agents. That has all changed with the release of Update 3 for Veeam B&R v9.5 — agent management and job creation can now all be handled via the console. This post will give a quick walkthrough of how to create the needed Protection Group objects and a backup job schedule, but for further reading and understanding, be sure to give the Veeam Agent Management Guide and the Veeam Agent for Microsoft Windows User Guide a look.

[Read more…]

Notes from the Field – vSAN Virtual Network Design

Virtual networking always plays a significant role in any VMware vSphere design, and even more so if you are leveraging IP-based storage like NAS or iSCSI. If you are using VMware’s vSAN product, I think it “turns the dial to 11,” as internode communication becomes that much more important versus host-to-target communication. A few months back (based on the date of this post), VMware released an updated vSAN Network Design document that I strongly encourage everyone to read if you are looking to run, or are already running, vSAN. For this post, however, I am going to dive into what I have used in the field for customer deployments around NIC teaming and redundancy, as well as Network I/O Control (NIOC) on the vSphere Distributed Switch (vDS).

Example Scenario

To start, let’s put together a sample scenario to create context around the “how” and “why”. As suggested in the vSAN Network Design document, all the customer designs I have been involved with have incorporated a single pair of ten-gigabit Ethernet (10GbE) interfaces for host uplink connectivity to a Top of Rack (ToR) or core switch, using either TwinAX or 10GBase-T for the physical layer. This is accomplished using a pair of dual-port Intel X520- or X540-based cards, which allows for future growth if network needs arise down the road. The uplink ports are configured as Trunk ports (if using Cisco) or Tagged ports (if using Brocade/Dell/HP/etc.), and the required VLANs for the environment are passed down to the hosts. On the virtual side, a single vDS is created, and each host in the vSAN cluster is added to it. The required port groups are created and configured with the relevant VLAN ID and NIC Teaming and Failover policy (more to come on this later). The following figure provides a visual representation:
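To give a feel for how the NIOC side of this scenario behaves, the sketch below shows how share values carve up a single saturated 10GbE uplink. The share numbers are illustrative assumptions for this example only, not vSAN recommendations; NIOC only enforces these ratios when the uplink is actually congested.

```python
# Sketch: how NIOC-style shares divide a saturated uplink.
# Share values below are assumed for illustration, not recommendations.

UPLINK_GBPS = 10  # single 10GbE uplink under full contention

shares = {
    "vSAN": 100,
    "vMotion": 50,
    "Virtual Machine": 30,
    "Management": 20,
}

def allocate_bandwidth(shares, capacity_gbps):
    """Divide capacity proportionally to shares; NIOC applies this
    ratio per uplink only while the uplink is congested."""
    total = sum(shares.values())
    return {traffic: capacity_gbps * s / total for traffic, s in shares.items()}

for traffic, gbps in allocate_bandwidth(shares, UPLINK_GBPS).items():
    print(f"{traffic}: {gbps:.1f} Gbit/s")
```

With these assumed values, vSAN traffic is guaranteed half the pipe (5 Gbit/s) during contention, while still being free to burst higher when the other traffic types are idle.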

[Read more…]

Migrating From a 2-Node to a 3-Node vSAN Cluster

A few months back I put together a post outlining the deployment of a 2-node vSAN cluster (located HERE). Just as in a customer scenario, a 2-node cluster may simply not provide enough resources, creating a need to expand. My lab has proven to fall into that category: my need for additional compute and storage resources at my Secondary/DR site has grown, and a third host is being added. This post will step through the straightforward process of “breaking” the 2-node configuration.

[Read more…]

What’s In Your Lab?

From my first involvement with VMware technologies, I have been running some sort of “home lab” to help me learn or test new functionality in one of their products. From the initial stages of running VMware Server on an old hand-me-down tower server from work, to my first true lab built on white-box AMD hardware running vSphere, having your own access to gear takes your training and education to a whole different level. Fast forward five years, and my home lab looks drastically different from where I started, or even where I thought I would be. From towers, to Intel NUCs, to a NUC management cluster with rack mounts, and now fully committed to rack mounts — your lab may start in one place, but it will take you and your career somewhere else entirely as it changes and adapts.

[Read more…]