Configuring Roaming Profiles w/RDS Hosts in Horizon View 6

One of the best new features in the recently released VMware Horizon View 6 is support for Microsoft Remote Desktop Services (RDS) hosts for application delivery. This is something Citrix has long supported in its MetaFrame/Presentation Server/XenApp/XenDesktop products over the years, and something View administrators have been clamoring for. Now the ability to publish a single application to a user, without the need to launch a full desktop, is in our hands, and it brings back a little history for me.

The one thing View administrators have gotten good at over the years is managing user data, or the "Persona," for their users, whether that means leveraging native folder redirection in Windows, Persona Management from VMware, or a third-party product from Liquidware Labs or AppSense to do the job. Now with RDS, we just have one more profile to add to the list.

Managing RDS profiles (or TS profiles, as they were called the last time I worked with them) is a pretty simple process, as it leverages Microsoft Active Directory Group Policy Objects (GPOs) to control and manage the settings. In the Horizon View world not much has changed, other than VMware providing a set of ADMX files to import to control and manage the behavior. One thing of note from the VMware documentation is that the VMware settings are the preferred deployment strategy:

As a best practice, configure the group policies that are provided in the View ADMX files rather than the corresponding Microsoft group policies. The View group policies are certified to support your View deployment.

To begin testing in my lab environment I had a few prerequisites to get out of the way:

  • Downloaded the View GPO bundle zip file (the ADM/ADMX files are no longer located on a Connection Broker server)
  • Imported the ADMX files into the C:\Windows\PolicyDefinitions folder on my Domain Controller (a sketch of the copy commands follows this list)
  • Created an Active Directory OU (RDS_Hosts) to house my two RDS hosts, TS01-v6 and TS02-v6
  • Created a User Group named RDSH Users and placed a few user accounts in the group
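If you are importing from a command prompt on the Domain Controller, the copy looks something like the following. This is only a sketch: the extraction path, the bundle folder name, and the location of the .adml language files are assumptions from my lab, so adjust them to wherever you unzipped the GPO bundle.

rem Copy the ADMX templates and their language resource files into the local policy store
copy C:\Temp\View-GPO-Bundle\*.admx C:\Windows\PolicyDefinitions\
copy C:\Temp\View-GPO-Bundle\en-US\*.adml C:\Windows\PolicyDefinitions\en-US\

The .adml files are the language resources that accompany the .admx templates and are needed for the setting names and descriptions to display properly in the Group Policy Management Editor.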

With the housekeeping work taken care of, let's get down to some actual work. Within the Group Policy Management Editor I created a new GPO named RDS_Host_Policy (super technical, I know) and linked the policy to the RDS_Hosts OU created above. With the ADMX files properly imported, browsing to the following path will show you the Remote Desktop Session Host node:

Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Horizon View RDSH Services

Under Remote Desktop Session Host you will see eight additional nodes for configuration; see the VMware documentation for the settings available under each.

In a production deployment, pay close attention to all the settings that are available to you in the various nodes. For this post we are going to focus mainly on the Profiles node, and more specifically on the following setting: Set Path for Remote Desktop Services Roaming User Profile.

This will allow me to redirect my roaming profile to a share (RDS_Profiles$) that is hosted on my lab file server (FS01):

[Screenshot: RDSH_Profiles]
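For reference, the value entered in the policy setting is simply the UNC path to the share; Windows creates the individual per-user profile folders underneath it automatically. In my lab (using the server and share names above) that path is:

\\FS01\RDS_Profiles$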

A word on the folder permissions: follow this Microsoft KB article to get the required NTFS and SMB permissions configured. Below is a summary:

NTFS Folder Permissions

[Screenshot: NTFS_Perms]

SMB Folder Permissions

[Screenshot: SMB_Perms]

With the share in place, it was time to give it a test. In my Horizon View 6 lab I am currently publishing the all-important Calculator app as well as Internet Explorer:

[Screenshot: RDSH_Apps]

After launching each of the apps a few times and making some changes (home page, etc.), I checked my profile share to make sure everything checked out OK:

[Screenshot: RDSH_Profile]

Everything checked out and I am good to go!

Thanks for reading,

-Jason

Running a 2 Node VSAN with VMware View 5.3.1

Last week I posted (located here) about the new hosts that I set up and deployed in my lab. One goal for the new lab hosts was to work with and get familiar with VMware's VSAN product while leveraging it in my studies for my upcoming VCAP-DTA exam. Though the exam is based on View 5.2, I needed to upgrade my lab to View 5.3.1 (the minimum version of View that supports VSAN). While fighting the urge to upgrade the lab to the latest and greatest shiny object, View 6.0, I went to work on getting VSAN deployed. Though I only have two physical hosts, I wanted to find a way to set up VSAN and get it running without a third host (whether physical or spoofed with a nested virtual host).

Before going further I want to state that the following changes are NOT SUPPORTED and, for obvious reasons, are not recommended for a production environment. With these changes no "replica" virtual machine information is created, so with the loss of one of my hosts that VM and its data are lost. As I am running non-persistent desktops just for testing, this is not an issue.

On with the show

With the disclaimer out of the way, let's get to it. As you can see from the screenshot below, VSAN has been enabled on the two lab hosts. Each host is providing a single SSD for the read/write cache and two SATA magnetic drives for the persistent storage:

[Screenshot: VSAN_Configuration]
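As a side note, you can also confirm what each host is contributing from the command line. A quick check from an ESXi shell session (this is just the stock command on ESXi 5.5) lists each claimed device, whether it is an SSD, and the disk group it belongs to:

esxcli vsan storage list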

With the VSAN datastore created and presented to each of the hosts, I took to the Horizon View Administrator console and tried to spin up a quick desktop pool. After waiting a few minutes I received an error in vCenter that the replica creation had failed. I took a look at the settings for the desktop pool I created and saw the following error:

[Screenshot: ProvisioningError]

The challenge is clearly highlighted in the screenshot: VSAN knows that only two hosts participate in the VSAN cluster, and a minimum of three nodes is required for a proper configuration. Thinking that I could outsmart View/VSAN, I logged into the vCenter web client, created a new storage policy, and set "Number of failures to tolerate" to zero and "Force provisioning" to yes:

[Screenshot: StoragePolicy]

With the new storage policy created, I attached the policy to my View golden image desktop and tried to provision the desktop pool again. Second try, same result: I received the same message as in the screenshot above, because View/VSAN knows I am running with only two nodes. Keeping score: VSAN 2, Jason 0.

At this point I took to Google to do some research and see if anyone had tried this or run into a similar situation. While the research was light on folks trying to run a two-node VSAN cluster (not surprising, again NOT SUPPORTED), I did find an article that covered VSAN operability with VMware View 5.3.1:

Horizon View 5.3.1 on VMware Virtual SAN – Quick Start Guide

Reading through the KB article I stumbled across the answer I was looking for under the Virtual SAN Storage Policy Profiles section of the article:

With Horizon View 5.3.1, there is no need for any user action associated with the default policy. The policy applies to both linked-clone desktop pools and full-clone desktop pools. The default policy for all virtual machines using the Virtual SAN datastore is as follows:

  • Stripes: 1
  • Resiliency: 1
  • Storage provisioning: Thin

Even though I created a storage policy, VSAN with View 5.3.1 leverages the "Default Policy" out of the box for virtual machines. This can be seen by running the following command via a console session on your ESXi host:

esxcli vsan policy getdefault

The output from the command is displayed below:

[Screenshot: getdefault output]

From the screenshot we can see that for each of the "Policy Class" attributes the "hostFailuresToTolerate" setting is configured for 1 (one), which in my configuration is not supported. The other option to note is "forceProvisioning", set to 1 (one). Mentioned in the VMware KB article is an example of how to manipulate the default policy via ESXCLI commands.
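For reference, those commands take the following general form. This is only a sketch based on the ESXi 5.5 esxcli syntax (the quoting is the finicky part), with the same change applied to each of the policy classes (cluster, vdisk, vmnamespace, and vmswap):

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmswap -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"

Running esxcli vsan policy getdefault again afterwards should confirm the new values.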

I went to work and set the "hostFailuresToTolerate" and "forceProvisioning" options to 0 (zero) and 1 (one) respectively for each of the policy classes, as seen in the screenshot below:

[Screenshot: getdefault output after changes]

After the changes were made, I again tried to provision my test desktop pool with VMs on the VSAN datastore. This time I had a little more success, though it looks like I need to update the View Agent in my golden image. :-)

[Screenshot: Available]

Thanks for reading and post questions or comments below!

-Jason

Home Lab Host Upgrades

Funny what can change in a year. Last year about this time I purchased a third host for my lab environment (blog post here) based on the popular "Baby Dragon" build, leveraging micro-ATX motherboards and cases to lower the overall footprint. Since that post was published, host-based caching solutions (PernixData, Infinio, VMware's vFlash) and server-based storage solutions (specifically VSAN) have put higher demands on host-side resources, and I found my current hosts somewhat limiting for these technologies.

After doing some research I stumbled across a blog post by Erik Bussink (blog / twitter), who earlier this year was looking for new hosts as well and documented his build around the SuperMicro X9SRH-7TF motherboard. The motherboard provides some great features, including dual onboard Intel X540 10GbE Ethernet adapters, an integrated LSI 2308 controller, the ability to scale to 64GB of RAM with non-ECC memory (8 DIMM slots), and last but not least IPMI for remote management. For my new lab requirements this motherboard brings all the needed toys onboard in a simple package. My plan is to leverage the dual 10GbE interfaces to carry vMotion and VSAN VMkernel traffic between the two new hosts, and who doesn't want 10GbE in their lab? :-)

With the motherboard decision out of the way, I looked into CPUs and CPU coolers. In my lab environment, one resource I have never been short on is CPU processing power, so I went looking for the most economical (read: cheapest) LGA2011-compatible CPU. I landed on an Intel Xeon E5-2603 v2 Ivy Bridge 1.8GHz quad-core processor that fit the bill. With the CPU selection made, I needed to find a compatible CPU cooler. As mentioned in Erik's post, the SuperMicro motherboards utilize the Narrow ILM standard for coolers. With an idea of what I was going to do for a chassis (more below), I went with a Dynatron R13 70mm unit.

With the motherboard and CPU components chosen, this is where the build takes a slight departure from what I have used in my lab in the past. Again, to keep my options open for host-side resources (i.e., SSD and HDD drives), I chose a 2U SuperMicro rack mount chassis to house everything. Sticking with the SuperMicro theme, I purchased a 2U chassis equipped with a single 400 watt power supply, the SuperMicro CSE-822T-400LPB. An added bonus is that the unit provides six hot-swappable drive bays, perfect for loading up on SSD or HDD drives for various configurations and testing.

To round out the build I purchased four SuperMicro MCP-220-0080-0B 3.5-inch to 2.5-inch drive trays. Two units will be used per server chassis; for now one will hold the SSD for host caching solutions and the other will hold the SSD used for VMware VSAN. Last but not least, memory and a single dual-port 1Gb Ethernet adapter were recycled from the two legacy hosts to finish up the server builds.

With the departure from ATX and micro-ATX systems, this build brings some noise challenges that I should mention. While I wouldn't classify the units as loud, the additional fans in the chassis (4 x 80mm) and the server-class CPU cooler create a decent "hum," and I can say that I wouldn't want them running inside the house. So, like my previous lab hosts, they find their home in the garage.

If you have questions or comments on the build, let me know below!

-Jason

Full Parts List

Motherboard – SuperMicro X9SRH-7TF – http://www.amazon.com/Supermicro-LGA2011-Server-Motherboard-X9SRH-7TF-O/dp/B00BV2XXKK/ref=sr_1_1?ie=UTF8&qid=1411446325&sr=8-1&keywords=supermicro+x9srh-7tf

Case – SuperMicro CSE-822T-400LPB – http://www.newegg.com/Product/Product.aspx?Item=N82E16811152109

CPU – Intel Xeon E5-2603 v2 Ivy Bridge 1.8Ghz QC – http://www.newegg.com/Product/Product.aspx?Item=N82E16819116936

CPU Cooler – Dynatron R13 70MM Ball Bearing CPU Cooler – http://www.newegg.com/Product/Product.aspx?Item=N82E16835114122

Drive Trays – SuperMicro 3.5 to 2.5 Drive Trays – http://www.newegg.com/Product/Product.aspx?Item=0VE-006S-00006

Book Review–Networking for VMware Administrators

[Image: Networking for VMware Administrators cover]

When I first heard that Chris Wahl (blog / twitter) and Steven Pantol (twitter) were working on a book focused on networking topics for VMware administrators, I knew it was going to be a must-have for the tech library, and it did not disappoint. As an IT veteran of 15+ years, my focus has always been on the system administration/storage side of the house. I did not really get active in networking until I started working with VMware back in the VI 3.x days, and even then I was mostly interested in making sure my networking team provided the correct VLANs on my vmnic uplinks.

This has changed over the last few years as I have moved away from day-to-day administration of a virtual environment toward an architectural role. Along the way I have had to pick up networking skills from various resources, but nothing that was compiled together in a single book. Chris and Steve do a fantastic job of building up the basics of physical networking and then taking you into the advanced features of virtual networking in a vSphere environment.

The book is broken into four parts to help you across your networking journey:

  1. Physical Networking 101 – This section consists of six chapters, starting with Ethernet basics and Layer 2/Layer 3 networking concepts and finishing up with converged infrastructure solutions from Cisco, Nutanix, and others.
  2. Virtual Networking – Section two is the "meat" of this book. Seven chapters break down everything you need to know about configuring and designing virtual networking in your VMware vSphere environment. With full breakdowns of the vSphere Standard Switch, the vSphere Distributed Switch, and the Cisco Nexus 1000V, this section alone is worth the price of admission.
  3. You Got Your Storage in My Networking: IP Storage – Four chapters covering the design and implementation of IP-based storage. The chapters are split evenly between iSCSI and NFS best practices.
  4. Other Design Scenarios – The last two chapters in the book provide additional design scenarios and information. Chapter 18 provides four different network adapter configurations (2-, 4-, 6-, and 8-NIC servers) with and without IP-based storage. Chapter 19 covers multi-NIC vMotion architectures.

While the subjects covered are certainly complex and detailed, both authors have done an excellent job creating content that is easy to read and retain. With the addition of the design examples, you are sure to walk away from this book with the knowledge to implement the most advanced vSphere networking features.

Happy Reading!

-Jason

Notes from the Field: VSAN Design–Networking

A few weeks back I published a post on a VMware VSAN design I was working on for a customer (located here). That post focused mostly on the key area that VSAN addresses: storage. While the storage piece is where VSAN shines and has the most moving parts to understand from a design and implementation perspective, you can't forget about the network. With the scale-out nature of VMware VSAN, the network connectivity between hosts that carries replica VM storage traffic becomes increasingly important.

As this post and the previous post are based on a customer's design leveraging VSAN in a net-new infrastructure, we are implementing 10Gb Ethernet for the ESXi host connectivity. Two factors played into this decision: first, 10Gb Ethernet has come down in price over the last few years, allowing for greater adoption; second, as we are deploying VSAN, VMware recommends 10GbE to provide the network throughput and bandwidth needed to handle the storage traffic.

Since we are "building" our own VSAN nodes, as mentioned in the storage post, it was off to the I/O section of the VMware HCL to verify supported 10Gb Ethernet NICs for our Dell R720 servers. We will be using copper-based 10GBase-T switches for connectivity, so the servers will be configured with redundant Dell OEM Intel Ethernet 10G 2P X540-t adapters. For the initial deployment we will be using one port from each card to provide redundancy and availability.

Someone Mixed VDS with My VSAN

While VSAN brings with it some cool storage technology, one piece that is overlooked (or hasn't received enough attention, in my opinion) is that when licensing VSAN, VMware bundles in the ability to use the vSphere Distributed Switch (VDS). This feature is normally reserved for deployments with VMware's Cadillac licensing tier, Enterprise Plus. Leveraging the VDS along with Network I/O Control (NIOC), a feature that is only available on the VDS, allows for a streamlined installation and configuration of the vSphere environment. Additionally, deploying the VDS in a 10GbE VSAN environment is preferred by VMware. The quote below is taken from page 7 of the VMware Virtual SAN Design and Sizing Guide:

"Virtual SAN provides support for both vSphere standard switch and VMware vSphere Distributed Switch™, with either 1GbE or 10GbE network uplinks. Although both vSphere switch types and network speeds work with Virtual SAN, VMware recommends the use of the vSphere Distributed Switch with 10GbE network uplinks."

If you are not familiar with the VDS or NIOC, Frank Denneman has a great primer post on the feature and functionality; that post can be viewed here. Also, though a bit dated, VMware has an excellent whitepaper on VDS design and implementation, VMware vSphere Distributed Switch Best Practices, available here. The diagram below provides an overview of how the hosts will be configured and will communicate at both the physical layer and the VDS/portgroup layer.

[Diagram: host connectivity at the physical and VDS/portgroup layers]

For the sake of simplicity, the diagram above shows only the five portgroups that will need to be created on the VDS for our deployment. The traffic type and VDS teaming policy for each portgroup are listed in the table below (a command-line sketch for tagging the VSAN VMkernel interface follows the table):

Traffic Type | Port Group | Teaming Option | Active Uplink | Standby Uplink
Management | Mgmt | LBT | vmnic0/vmnic2 | N/A
vMotion | vMotion-1 | Explicit Failover | vmnic0 | vmnic2
vMotion | vMotion-2 | Explicit Failover | vmnic2 | vmnic0
VSAN | VSAN | Explicit Failover | vmnic0 | vmnic2
Virtual Machine | Virtual Machine | LBT | vmnic0/vmnic2 | N/A
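Once the VMkernel interface for the VSAN portgroup exists, tagging it for Virtual SAN traffic can also be done from the ESXi shell rather than the web client wizard. A minimal sketch, assuming ESXi 5.5 and that vmk2 is the VMkernel interface sitting on the VSAN portgroup:

# Tag the VSAN VMkernel interface (vmk2 is an assumption from my lab) for Virtual SAN traffic
esxcli vsan network ipv4 add -i vmk2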

 

Virtual SAN Networking Requirements and Best Practices

VMware has published guidelines for VSAN networking requirements and deployment best practices. Below is the list from the VMware vSphere 5.5 Documentation Center, located here (a couple of quick verification commands follow the list).

  • Virtual SAN does not support IPv6.
  • Virtual SAN requires a private 1Gb network. As a best practice, use a 10Gb network.
  • On each host, dedicate at minimum a single physical 1Gb Ethernet NIC to Virtual SAN. You can also provision one additional physical NIC as a failover NIC.
  • You can use vSphere standard switches on each host, or you can configure your environment with a vSphere Distributed Switch.
  • For each network that you use for Virtual SAN, configure a VMkernel port group with the Virtual SAN port property activated.
  • Use the same Virtual SAN Network label for each port group and ensure that the labels are consistent across all hosts.
  • Use Jumbo Frames for best performance.
  • Virtual SAN supports IP-hash load balancing, but cannot guarantee improvement in performance for all configurations. You can benefit from IP-hash when Virtual SAN is among its many consumers. In this case, IP-hash performs the load balancing. However, if Virtual SAN is the only consumer, you might not notice changes. This specifically applies to 1G environments. For example, if you use four 1G physical adapters with IP-hash for Virtual SAN, you might not be able to use more than 1G. This also applies to all NIC teaming policies that we currently support. For more information on NIC teaming, see the Networking Policies section of the vSphere Networking Guide.
  • Virtual SAN does not support multiple VMkernel adapters on the same subnet for load balancing. Multiple VMkernel adapters on different networks, such as VLAN or separate physical fabric, are supported.
  • You should connect all hosts participating in Virtual SAN to a single L2 network, which has multicast (IGMP snooping) enabled. If the hosts participating in Virtual SAN span across multiple switches or even across L3 boundaries, you must ensure that your network is configured correctly to enable multicast connectivity. You can change multicast addresses from the defaults if your network environment requires, or if you are running multiple Virtual SAN clusters on the same L2 network.
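As referenced above, a couple of quick esxcli checks (again assuming ESXi 5.5) help validate the last few items in the list by showing which VMkernel interfaces are tagged for Virtual SAN traffic and whether all of the expected hosts have joined the cluster over that L2/multicast network:

esxcli vsan network list
esxcli vsan cluster get

The output of the second command includes the member count for the cluster, which makes it easy to spot a host that cannot see the others because of a multicast misconfiguration.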
I hope this post, as well as the original post, is helpful in designing and implementing your VSAN environment.

-Jason

Notes From the Field: VSAN Design

With the official release of VMware VSAN a bit over a month ago on March 11th, when ESXi 5.5 U1 dropped, I am having more conversations with customers about the product and designing solutions around it. While for some customers it has been more of an inquisitive peek at the technology, I have had the chance to work on a few designs (OK, two) with customers looking to deploy VSAN instead of a "traditional" storage array for their storage needs.

For both configurations we went with a "roll your own" solution over the configurations available via the Ready Node program. For this reason I leaned heavily on three key resources for the builds:

*Dell documentation is listed because the server/compute/storage components are based on Dell platforms

I am not going to provide a deep-dive review of VSAN, as there are plenty of resources available on the internet and in the documentation listed above that provide the needed details. What I will give is a quick breakdown of the storage requirements laid out by VMware for a VSAN deployment.
Artifacts | Minimums | Maximums
Disk Groups | One per host | Five per host
Flash Devices (SAS, SATA, PCIe SSD) | One per disk group | One per disk group
Magnetic Disk Devices | One HDD per disk group | Seven HDDs per disk group
Disk Formatting Overhead | 750MB per HDD | 750MB per HDD

*Table from page 3 of the "VMware Virtual SAN Design and Sizing Guide"

For our specific use case and customer requirements, we will be deploying a three-node cluster (the minimum for VSAN) with the default settings of Number of Failures to Tolerate set to 1 and Number of Disk Stripes per Object set to 1. We are aiming for around twelve usable terabytes of space to start.

  • Number of Failures to Tolerate – This setting controls the number of replica copies of the virtual machine VMDK(s) created across the cluster. With the default value of 1, two replicas of the VMDK(s) will be created. As you increase this value you provide additional redundancy for the virtual machine, at the cost of additional storage consumed by the replica copies. The maximum value is 3 (a quick capacity example follows this list).
  • Number of Disk Stripes per Object – The number of HDDs across which each replica of a virtual machine object is striped. A value higher than 1 might result in better performance, but also results in higher use of system resources.
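As a quick example of what the first setting means for capacity: with the default value of 1, a 100GB VMDK consumes roughly 200GB of raw VSAN capacity (two full replicas), plus a small witness component used for quorum. This is also why the usable capacity math later in this post divides the raw capacity by ftt + 1.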

[Image: VSAN]

The Build

As mentioned above, I will be leveraging servers from Dell for this configuration. To meet the minimum requirements defined by the customer, we went with Dell R720s as the host servers, with the capability to hold 16 x 2.5-inch drives in a single chassis. The 16-drive chassis gives us the ability to create two fully populated VSAN disk groups (7+1) per host for future growth and expansion (one now, one down the road). To allow for the use of all 16 slots, we will leverage redundant SD cards for the ESXi installation (note: remember to redirect the scratch partition!). Again, since we are building our own solution, I checked and rechecked the VMware VSAN compatibility guide for I/O devices (controllers/HDDs/SSDs) as well as the ESXi compatibility guide for supported servers and components.

[Image: DellR720_2]

For the actual drive configuration, I took to the VSAN HCL to verify the supported drives from Dell. As stated above, each disk group needs to have one flash device and at least one magnetic device. To meet the overall storage requirement (calculation below) of twelve usable terabytes, the first disk group will be made up of 7 x 1.2TB 10K SAS drives. The flash device used for the read/write buffering will be the 400GB SATA Value SSD. Connectivity for all the drives will be provided by an LSI SAS 9207-8i controller. This controller was chosen as it allows for true pass-through (JBOD) mode to present the drives to VMware for the VSAN datastore creation.

Some might ask why we decided to go with 10K SAS drives over going "cheap and deep" with NL-SAS drives. The largest 2.5-inch NL-SAS drive offering from Dell is 1.0TB, while the largest 2.5-inch 10K SAS drive comes in at 1.2TB. Going with the 10K drives provided two design advantages: additional capacity per disk, and additional IOPS for when I/O needs to come from spinning disk.

Now for the capacity. The VMware documentation breaks down the math needed to come up with sizing calculations around capacity, objects, components, etc. What I am going to show below is how the chosen configuration gets us to the target number for the customer. On page 9 of the VMware Virtual SAN Design and Sizing Guide the following formula is provided for Cluster Capacity:

Formula: Host x NumDskGrpPerHst x NumDskPerDskGrp x SzHDD = y

My Configuration: 3 x 1 x 7 x 1.2TB = 25.2TB

But that is only one step in the process. After calculating the Cluster Capacity I need to get to the number I really care about, Usable Capacity. Again from the VMware documentation on page 10 I get the following:

Formula: (DiskCapacity – (DskGrp x DskPerDskGrp x Hst x VSANoverhead)) / (ftt + 1)

My Configuration: 25.2TB ≈ 25,804GB, so (25,804GB – (1 disk group x 7 disks x 3 hosts x 1GB overhead)) / (1 + 1) = 25,783GB / 2 ≈ 12,891GB, or roughly 12.8TB usable

Now a word on flash devices. One thing to note: the flash devices DO NOT participate in the capacity calculations. Since they are used only for read/write buffering, they don't contribute to the overall storage pool. Also, you may be wondering how and why I chose the 400GB SSDs for my flash tier. Stated on page 7 of the VMware documentation:

The general recommendation for sizing flash capacity for Virtual SAN is to use 10 percent of the anticipated consumed storage capacity before the number of failures to tolerate is considered.

By that statement I have oversized my flash tier, as initially the customer will be using only a percentage of the twelve terabytes of capacity. But I like to play things a little safer, so I sized the flash tier based on ten percent of the usable capacity (VMware's original sizing guideline), as the difference in price between a 200GB and a 400GB SSD is nominal. In addition, we have sized for future utilization of the usable capacity, keeping in line with VMware's statement above.
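To put some rough numbers behind that decision using the figures above: ten percent of the roughly 12.8TB of usable capacity works out to about 1.28TB of flash for the cluster, or roughly 430GB per host across the three hosts, so a single 400GB SSD per host lands right around that mark. Sizing against ten percent of the anticipated consumed capacity instead, per the newer guidance quoted above, would have justified the smaller 200GB drives.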