Checking up on Multi NIC vMotion

vSphere 5 has been GA for close to five months now, and during the initial launch the main discussion topic was the “vRAM entitlements”. After that subsided, more of the focus turned to the great new features vSphere 5 brings to the table. First and foremost (in my mind) was the concept of the “Monster VM”. With the ability to scale a VM to a 32 vCPU, 1TB of memory “Monster”, there should be no workload you wouldn’t consider running in a VM. Next up was the introduction of Storage DRS, which brings the workload balancing that DRS provides for VMs to your storage.

Both of those are great features (as are the many others I didn’t mention), but one I think flew a little under the radar is support for multi NIC vMotion. How much time have we, as VMware administrators, spent waiting for VMs to be evacuated from a host going into maintenance mode? If you were like me and only had 1Gb uplinks on your hosts, you were limited (i.e. supported by VMware) to four concurrent vMotions. Set Maintenance Mode and go to Starbucks. If you are one of the cool kids and have 10Gb uplinks, that number doubles to eight concurrent vMotions. Set Maintenance Mode and run to the break room for a coffee refill.

If these numbers look familiar they should; they are the same as vSphere 4.1. And this is where the joy of Multi NIC vMotion comes in! Since the number of concurrent vMotions hasn’t changed, how about changing the number of NICs that can be used? You can still only move four VMs at once, but instead of using a single NIC, how about two or four? Same number of VMs being moved, just more bandwidth to move them. If you are using 1Gb NICs, the maximum you can combine is 16; for 10Gb that number drops to four.

If you are not familiar with setting up Multi NIC vMotion, both VMware and Duncan Epping have fantastic posts on how to do so. The VMware KB article (KB2007467) is located HERE, and the blog post from Duncan is located HERE.
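For reference, the gist of what those guides walk you through can also be done from the ESXi command line. Treat the below as a rough sketch rather than gospel: the vSwitch, uplink, portgroup, and IP names are just examples from my lab, and the KB article is the authoritative source for the supported steps.

```
# Sketch only - vSwitch1, vmnic1/vmnic2, the portgroup names, and the IP address
# are placeholders; adjust for your environment.

# Add a second vMotion portgroup to the existing vSwitch
esxcli network vswitch standard portgroup add --portgroup-name vMotion-02 --vswitch-name vSwitch1

# Pin each vMotion portgroup to a different active uplink, with the other uplink as standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-01 --active-uplinks vmnic1 --standby-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-02 --active-uplinks vmnic2 --standby-uplinks vmnic1

# Create a second vmkernel interface on the new portgroup and give it an IP
esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-02
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.168.50.12 --netmask 255.255.255.0 --type static

# Enable vMotion on the new vmknic (on ESXi 5.0 this is still a vim-cmd job)
vim-cmd hostsvc/vmotion/vnic_set vmk2
```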

So after following either of the above guides, let’s do some checking to see if it truly is working. I find the easiest way to see this in action is via ESXTOP. Log into one of your hosts (or use resxtop from the vMA), launch ESXTOP, and press “n” to view the networking screen. Identify the vmk/vmnics that you configured and verify that traffic is crossing each.
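If you are not sure which vmk numbers to look for, a quick listing before launching ESXTOP helps, and batch mode is handy if you want to review the counters after the fact. Again, this is a sketch, not a prescription:

```
# List the vmkernel interfaces so you know which vmk numbers carry vMotion
esxcli network ip interface list

# Interactive: launch esxtop and press "n" for the network view
esxtop

# Or capture roughly a minute of counters (12 samples, 5 second delay) for later review
esxtop -b -d 5 -n 12 > multinic-vmotion.csv
```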

To demonstrate this I set up a vSwitch with two uplinks and configured two vMotion portgroups. In the screenshot below you can see traffic crossing over each interface.

[Screenshot: MultiNIC, ESXTOP network view showing traffic on both vMotion interfaces]

A more in-depth way to see this is to view /var/log/vmkernel.log. In the screenshot below you will see the kernel binding to the two vmk ports:

[Screenshot: var-log, vmkernel.log entries showing vMotion binding to both vmk ports]
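If you would rather not scroll through the whole log, filtering it works just as well. The exact wording of the entries varies between builds, so take the pattern below as a starting point:

```
# Pull the most recent vMotion-related vmkernel entries
grep -i vmotion /var/log/vmkernel.log | tail -n 20

# Or watch them arrive live while a migration is running
tail -f /var/log/vmkernel.log | grep -i vmotion
```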

For overall performance testing in my lab environment, I tested moving four VMs with only a single vMotion interface and then the same four VMs with multiple vMotion interfaces:

|            | Test 1 | Test 2 | Test 3 | Average   |
|------------|--------|--------|--------|-----------|
| Single NIC | 58 sec | 57 sec | 61 sec | 58.67 sec |
| Dual NIC   | 33 sec | 31 sec | 32 sec | 32 sec    |

As you can see from the table, there was almost a 50 percent reduction in time when comparing the single interface to the dual interface. If you have upgraded your environment to vSphere 5, be sure to revisit your network design to make sure you are making use of this great new feature.

-Jason