Converting in-guest iSCSI volumes to native VMDKs

In this guest post, fellow Seattle VMUG member Pete Koehler (@vmpete) writes about options for transitioning away from in-guest iSCSI-attached volumes to native VMDKs.

Over the years I’ve posted about the benefits of using in-guest iSCSI volumes. The need stemmed from a time several years ago when my environment had very limited tools in the war chest, but needed to take advantage of large volume sizes, get good application quiescing using VSS, and squeeze out every bit of performance with multi-pathing. As my buddy Jason Langer reminded me often, the thought of in-guest iSCSI volumes sounded a little, well… antiquated.

Interestingly enough, I couldn’t agree more. While they might have been the virtualization equivalent of wearing acid-washed jeans, they had served a purpose at one time. It also became painfully clear that in other ways, they were an administrative headache that no longer fit in today’s world. I wanted to change, but had limited opportunities to do so.

As times change, so do the options. We are all served well by doing a constant evaluation of design choices. So please welcome 2014. vSphere 5.5 has broken down the 2TB VMDK barrier. The best of the backup products leverage VMware’s APIs to see the data. And if you want to take advantage of any of the great host acceleration solutions out there, VMware needs to be aware of the volumes. Add that up, and in-guest iSCSI wasn’t going to cut it. This was no surprise, and the itch to convert all of my guest-attached volumes had been around for at least a couple of years. In fact, many had been migrated a while ago. But I’ve noticed quite a few questions have come my way on how I made the transition. Apparently I wasn’t the only one sporting acid-washed jeans.

That was the rationale for the change. Now for how to go about changing them. Generally, the most viable options are:

  • Conversion using VMware Converter.
  • Conversion by changing the connection to an RDM, then converting to a VMDK via Storage vMotion.
  • Transitioning data inside the guest to a pristine VMDK using rsync (Linux VMs).

What option you choose may depend somewhat on your environment. Honestly, I never thought of the RDM/Storage vMotion method until Jason suggested it over dinner recently. Chalk one up to sharing ideas with your peers, which was the basis for this post. Below I will outline the steps taken for each of the three methods listed.

Regardless of the method chosen, you will be best served by taking the additional steps necessary to remove the artifacts of the old connection method: remove the NICs used for iSCSI connections, as well as any integration tools (like the Dell EqualLogic Host Integration Toolkit), and finally, remove the old guest volumes from the storage array once they are no longer in use. Also remember to take any precautions necessary before the transition, such as backing up the data and the VM before making changes.

 

Conversion using VMware Converter
This method uses the VMware Converter tool installed in the VM, which allows you to convert the in-guest volumes to native VMDK files. Using this method is very predictable and safe, but depending on the size of the volumes, it might require a sizable maintenance window while you convert them.

1. Install VMware Converter inside of guest.

2. Make note of all services that touch the guest volumes, shut them off, and temporarily set them to “disabled”.

3. Launch Converter, and click “Convert Machine” > “This local machine”. Select a destination type of VMware Infrastructure virtual machine, and change the name to “[sameVMname]-guest”. Complete by selecting the appropriate VMware folder and destination location. You may select only the guest volumes needed, as the other files it creates will be unnecessary.

4. Remove the newly created “[sameVMname]-guest” VM from inventory, and copy the VMDK file(s) from the old datastore to the new location if necessary.
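
If that copy needs to happen at the datastore level, one way to do it is from the ESXi shell with vmkfstools, which clones both the descriptor and the flat file (a plain cp of just one of them leaves a broken disk). This is only a sketch; the datastore paths, file names, and the thin format are placeholders and assumptions, not part of the original steps.

   # Clone the converted data disk to its final datastore (paths are examples)
   vmkfstools -i /vmfs/volumes/old-datastore/sqlvm-guest/sqlvm-guest_1.vmdk \
              /vmfs/volumes/new-datastore/sqlvm/sqlvm_1.vmdk -d thin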

5. Once complete, disconnect all in-guest iSCSI volumes, remove the Host Integration Toolkit, disable the iSCSI NICs inside the VM, and power down.

6. Edit the VM properties to disable or remove the iSCSI NICs.

7. Attach the newly created VMDKs to the VM, ideally choosing a device node on anything other than the 0:# controller to improve performance (e.g., the new VMDK might be on 1:0 and an additional VMDK on 2:0, etc.).

8. Power on to verify all is running and that the new drives are mapped to the correct drive letters or mount points.

9. Return all services that were set to “disabled” earlier to their original settings, clear the event logs, and reboot.

10. Verify access and services are now running correctly.

This method was most commonly used on SQL servers that had smaller volume sizes. As the guest volumes grew, so did the maintenance window.

 

Conversion by changing the connection to an RDM, then converting to a VMDK via Storage vMotion
This method first changes the connection method of the in-guest volume to an RDM, then converts it to a native VMDK via a Storage vMotion. It is a very predictable and safe method as well, but offers the additional benefit that the maintenance window needed for conversion is NOT based on the size of the volume. Its maintenance window is only the brief time in which you power down the VM.

1. Make note of all services that touch the guest volumes, shut them off, and temporarily set them to “disabled”.

2. Once that is complete, disconnect all in-guest iSCSI volumes, remove the Host Integration Toolkit, disable the iSCSI NICs inside the VM, and power down.

3. On the storage system, present the iSCSI disk to all ESXi hosts.

4. Rescan the hosts so they see the disk.
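
The rescan can be run from the vSphere client, or per host from the ESXi shell. A minimal sketch, assuming ESXi 5.x:

   # Rescan all storage adapters so the newly presented LUN is discovered
   esxcli storage core adapter rescan --all
   # Confirm the new device shows up (look for its naa. identifier)
   esxcli storage core device list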

5. Add an RDM (virtual compatibility mode) disk to the VM, pointing it to the newly presented iSCSI disk.
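
The vSphere client builds the RDM mapping file for you when you add a new disk and choose Raw Device Mapping in virtual compatibility mode. For reference, the same pointer can also be created from the ESXi shell with vmkfstools; the device identifier and paths below are placeholders:

   # Create a virtual compatibility mode RDM pointer on a VMFS datastore
   vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
              /vmfs/volumes/datastore1/sqlvm/sqlvm-rdm.vmdk

Virtual compatibility mode matters here; a physical mode RDM only has its mapping file moved by Storage vMotion and will not be converted to a VMDK.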

6. Power on the VM, and verify that the RDM mounted and that the apps and/or data are present.

7. Return all services that were set to “disabled” earlier to their original settings.

8. Storage vMotion the VM, making sure you go into the “Advanced” settings.

9. Move the C: VMDK to a VMFS LUN and move the RDM to a VMFS LUN (then change the disk format from “Same” to Thick, Thin, or Thick Eager Zeroed on the RDM disk). Once the Storage vMotion is complete, the RDM should now be migrated to a VMDK.

10. Unmount the previously mounted iSCSI volume from the ESXi hosts, and verify that access and services are now running correctly.

The nice thing about this method is that the VM is up and in production while the storage vMotion happens. It also catches all of the changes during the move.

 

Transition data inside of guest to pristine VMDK using rsync
This method is for Linux VMs, whereby one creates a pristine VMDK, then transfers the data inside the guest via rsync. This process can take some time to seed the new volume, but it is essentially a background process for the VM. The actual cutover is typically just a change to /etc/fstab and a restart. It can use additional resources, but in certain circumstances may be a good fit.

1. Create the desired VMDKs for the VM, ideally choosing a device node on anything other than the 0:# controller to improve performance (e.g., the new VMDK might be on 1:0 and an additional VMDK on 2:0).

2. Inside the guest, create the new partition using parted or gparted, then format it using mkfs.
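
As a rough sketch of this step, assuming the new VMDK shows up inside the guest as /dev/sdb and that ext4 is the filesystem of choice (both are assumptions, not part of the original steps):

   # Label, partition, and format the new disk (device name is an example)
   parted /dev/sdb mklabel gpt
   parted -a optimal /dev/sdb mkpart primary 0% 100%
   mkfs.ext4 /dev/sdb1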

3. Create the device mount locations, and then add entries in /etc/fstab.
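
Mounting by UUID keeps the /etc/fstab entry stable even if device ordering changes later when the old iSCSI pieces are removed. The mount point and UUID below are placeholders:

   # Find the UUID of the new filesystem, then create the mount point
   blkid /dev/sdb1
   mkdir -p /data-new
   # Example /etc/fstab entry (substitute the UUID reported by blkid):
   # UUID=<uuid-from-blkid>  /data-new  ext4  defaults  0 2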

4. Restart and validate that the volume is mounting properly.

5. Begin the rsync process from the old location to the new location. Syntax will look something like rsync -av --delete --bwlimit=7500 root@[systemname]:/oldpath/todata /newpath/todata/

6. Once complete, redirect any symbolic links to the new location, and adjust mount points in /etc/fstab.
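
When it is time for the actual cutover, one approach (using the same placeholder paths as above) is to stop the services writing to the old volume, run one last delta pass so the new copy is current, and then repoint the symlinks and /etc/fstab entries:

   # Final delta pass after the writers have been stopped
   rsync -av --delete root@[systemname]:/oldpath/todata /newpath/todata/
   # Repoint a symlink that applications use to reach the data (example path)
   ln -sfn /newpath/todata /srv/app/data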

7. Restart to test and validate. Verify access and services are now running correctly.

8. Remove connections to the old guest volumes, and clean up the VM by disabling or removing the iSCSI-based NICs, etc.
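
On the guest side, assuming the volumes were attached with open-iscsi (an assumption; your initiator may differ), the cleanup is typically a matter of logging out of the sessions and deleting the stored node records before removing the iSCSI vNICs from the VM. The target IQN and portal address are placeholders:

   # Log out of the old in-guest iSCSI session and delete its node record
   iscsiadm -m node -T <target-iqn> -p <portal-ip> -u
   iscsiadm -m node -T <target-iqn> -p <portal-ip> -o delete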

This method allowed for some restructuring of data on some extremely large volumes, something that the Development team wanted to do anyway. It allowed IT to delegate the rsync processes off to the teams handling the data change, so that the actual cutover could be fit into their schedules.

 

The results
While I’m not completely finished with the conversion (a few more multi-terabyte volumes to go), the process of simplifying the environment has been very rewarding. Seeing these large or I/O-sensitive data volumes take advantage of I/O acceleration has been great. Simplifying the protection of our mission-critical VMs was even more rewarding.

– Pete