VCP5-DT Objective 2.12 – Create ThinApp Applications and a ThinApp Repository


Create ThinApp Applications

Prior to creating and capturing a ThinApp package we first need a “capture” system to use. For this blog post I used a clean installation of MS Windows 7 32-bit with the following configuration:

  • 1 vCPU
  • 2GB of RAM
  • All relevant Service Packs and Hot Fixes applied
  • VMware Tools installed
  • Installed ThinApp Enterprise software

With my capture system set up and ready to go, the last step is to take a snapshot of the virtual machine to allow us to “roll back” to the base image for additional ThinApp package creation. Now on with the show.

With a console session opened on the capture machine, Win7-ThinApp, launch the ThinApp Setup Capture utility:


You will be presented with the Setup Capture Welcome screen that gives you a review of the steps that will be used to create your application as well as a link to a quick start video. Click Next to continue when ready:


The first actionable step in the capture process is completing the Prescan. The capture process will scan the current state of the hard drive and registry of the system. This provides the baseline the Postscan function uses later in the capture process to see what has changed on the system (i.e., your application installation). Click Prescan to continue:


With the prescan complete it is now time to install the application you wish to capture. For this post I used Google Chrome as the test subject. You can minimize the Setup Capture – Install Application window as needed, just be sure not to close/cancel the task:


With the application installed click on Postscan to continue:


The postscan can take a few minutes, so you might want to pop open your Twitter stream to kill some time:


Once the postscan has completed we will need to select the application Entry Point. As one may guess, this is how the application will be launched.


Next up is the integration with Horizon Application Manager. That is beyond the scope of this post as well as the VCP-DT exam, so click Next to continue:


One cool thing I like about ThinApp packages is the ability to define who can/can’t launch the application. As you can see below I am using the TA_Chrome_Users Active Directory group to grant permissions. You can also define a custom Access Denied Message:


Covering two screens in one shot here. In the next two steps we configure the Isolation Mode and the Sandbox location for the application. These settings control how and where data is read and written when using the application. For additional details have a look at the following VMware KB article – Understanding the ThinApp Sandbox and Isolation Modes

For this example I selected Merged Isolation Mode and set the Sandbox location to the User Profile:
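For reference, the isolation mode choice ends up in the project's Package.ini. A minimal sketch of the relevant lines (by default the sandbox lands in the user profile under %APPDATA%\Thinstall unless overridden):

[BuildOptions]
DirectoryIsolationMode=Merged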



I chose not to participate in VMware’s quality assurance program, but the choice is yours:


ThinDirect provides the ability to have specific websites that are opened in Internet Explorer redirected and opened in Google Chrome instead:


Make note of the project location as this will contain the output of the build process. These files will later be copied over to your network share for user access:




Project being saved. :-)


Under Advanced Configuration we have the chance to make changes to the Package.ini file (covered in some detail below). This file allows for additional configuration options for the package. Click Build to move on:



The build process will take a few minutes to complete:


Once the build process completes you will have your first ThinApp package ready to go. Click Finish to wrap it up.


Create or Identify Supported File Share

Assign Permissions To The Share

Going to cover two objectives for the price of one. To allow access to the ThinApp packages, either via MSI or their executable, VMware lays out a few requirements (a sketch of the share setup follows the list):

  • The MSI packages need to be stored on a Windows network share that is accessible by the View Connection Server and the virtual desktops, i.e., in the same Active Directory domain or one that is trusted. The share must be configured for access leveraging computer accounts.
  • The file share permissions must provide Read access to the built-in Active Directory group Domain Computers.
  • For users to access and stream ThinApp packages you will need to configure Read & Execute NTFS permissions for the user group or groups.
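A quick sketch of what that setup might look like from an elevated prompt on the file server. The share name ThinApps, the path D:\ThinApps, and the LAB domain are assumptions for illustration, with TA_Chrome_Users standing in for your user group:

net share ThinApps=D:\ThinApps /GRANT:"Domain Computers",READ /GRANT:"TA_Chrome_Users",READ
icacls D:\ThinApps /grant "LAB\Domain Computers:(OI)(CI)RX" "LAB\TA_Chrome_Users:(OI)(CI)RX"

The first command publishes the share with Read share permissions; the second grants Read & Execute NTFS permissions that apply to the folder, subfolders, and files.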

Verify MSI Streaming Settings In The package.ini Files

To allow for streaming of MSI packages you need to modify the Package.ini file, and more specifically the MSIStreaming parameter. You can make this change during the initial capture of the application and set the value to 1.
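For illustration, the relevant portion of a Package.ini with streaming enabled might look like this (the InventoryName and MSIFilename values are from the Chrome example and will vary per capture):

[BuildOptions]
InventoryName=Google Chrome
MSIFilename=Google Chrome.msi
MSIStreaming=1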


Identify Necessary ThinApp Package Components To Put On The Share

Noting the capture directory from above when creating the ThinApp package, copy over the needed .exe and .msi files to the file share that will be used to host the applications. In the example below, for the Chrome package copy over Google Chrome.exe and Google Chrome.msi:
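Assuming the default project location and the ThinApps share sketched earlier, the copy is a simple file operation (the paths and server name here are illustrative, not prescriptive):

copy "C:\Program Files\VMware\VMware ThinApp\Captures\Google Chrome\bin\Google Chrome.exe" \\FS01\ThinApps\
copy "C:\Program Files\VMware\VMware ThinApp\Captures\Google Chrome\bin\Google Chrome.msi" \\FS01\ThinApps\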


Assign ThinApp Applications To Pools

Using your web browser of choice, access and log into the View Administrator console. Under Inventory expand View Configuration and select ThinApp Configuration. In the right hand pane click on Add Repository:


Provide the Display Name and Share Path to the application repository. Click Save when completed:


With the repository location added it is time to add the ThinApp packages to the inventory. In the left pane expand Inventory and select ThinApps from the tree. In the right hand pane click Scan New ThinApps:


From the ThinApp Repository drop down, select the repository location that was added. The setup wizard will then scan the location for any ThinApp packages. When completed click Next to continue:


With the scan complete select the corresponding MSI file or files you wish to scan. For the Google Chrome example I have selected the Google Chrome.msi file. Click Scan to continue:


With the scan completed you can see that the Google Chrome MSI package has been successfully added to the inventory. Click Finish to exit:



With our package added, let's get it assigned to some desktops. Again from the ThinApp menu click Add Assignment in the right hand pane. For this example I chose Assign Pools:


In my lab environment I currently only have a single desktop pool created, Test-Pool. Using the Add button I assigned the test pool to the Google Chrome application.


With the ThinApp assigned to a pool, let's take a look at our efforts. In the screen shot below I am connected to one of the desktops in the Test-Pool. As you can see, the shortcut for Google Chrome is on the desktop, and the package is also listed in the Add/Remove Programs section of Control Panel:


Phew, long post and lots of screen shots. Hope it was helpful!


Configuring Roaming Profiles w/RDS Hosts in Horizon View 6

One of the new and best features in the recently released VMware Horizon View 6 is support for Microsoft Remote Desktop Services (RDS) hosts for application delivery. This is something Citrix has long supported in its MetaFrame/Presentation Server/XenApp/XenDesktop products over the years, and something View administrators have been clamoring for. Now the ability to publish a single application to a user, without the need to launch a full desktop, is in our hands, and it brings back a little history for me.

The one thing View administrators have gotten good at dealing with over the years is how to manage the user data or “Persona” for their users, whether leveraging native folder redirection in Windows, Persona Management from VMware, or a 3rd party product from Liquidware Labs or AppSense to do the job. Now with RDS, we just have one more profile to add to the list.

Managing RDS profiles (or TS profiles, from the last time I worked with them) is a pretty simple process as it leverages Microsoft Active Directory Group Policy Objects (GPOs) to control and manage the settings. In the Horizon View world not much has changed, other than VMware providing a set of ADMX files to import to control/manage the behavior. One thing of note from the VMware documentation is that the VMware settings are the preferred deployment strategy:

As a best practice, configure the group policies that are provided in the View ADMX files rather than the corresponding Microsoft group policies. The View group policies are certified to support your View deployment.

To begin testing in my lab environment I had a few prerequisites to get out of the way:

  • Downloaded the View GPO bundle zip file (the ADM/ADMX files are no longer located on a Connection Broker server)
  • Imported the ADMX files to the C:\Windows\PolicyDefinitions folder on my Domain Controller (see the copy commands after this list)
  • Created an Active Directory OU (RDS_Hosts) to house my two RDS hosts, TS01-v6 and TS02-v6
  • Created a User Group named RDSH Users and placed a few users accounts in the group
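For reference, the ADMX import is just a file copy. A minimal sketch, assuming the GPO bundle has been extracted to the current directory and you are not using a Group Policy central store:

copy *.admx %SystemRoot%\PolicyDefinitions\
copy en-US\*.adml %SystemRoot%\PolicyDefinitions\en-US\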

With the housekeeping work taken care of, let's get down to some actual work. Within the Group Policy Management Editor I created a new GPO named RDS_Host_Policy (super technical, I know) and linked the policy to the RDS_Hosts OU created above. With the ADMX files properly imported, if you browse to the following you will see the Remote Desktop Session Host node:

Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Horizon View RDSH Services

Under Remote Desktop Session Host you will see eight additional nodes for configuration (with links to the settings under each per the VMware documentation):

In a production deployment pay close attention to all the settings that are available to you in the various nodes. For this post we are going to focus mainly on the Profiles node, and more specifically the following setting: Set Path for Remote Desktop Services Roaming User Profile.

This will allow me to redirect my roaming profile to a share (RDS_Profiles$) that is hosted on my lab file server (FS01):
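The value entered is simply the UNC path to the share; the RDS host then typically creates a folder per user beneath it:

\\FS01\RDS_Profiles$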


A word on the folder permissions: follow the relevant Microsoft KB article to get the required NTFS and SMB permissions configured. Below is a summary:

NTFS Folder Permissions

SMB Folder Permissions
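For a lab, a rough sketch of the share and NTFS setup from an elevated prompt (the D:\RDS_Profiles path and LAB domain are assumptions, and these permissions approximate rather than exactly match Microsoft's recommended minimums):

net share RDS_Profiles$=D:\RDS_Profiles /GRANT:"RDSH Users",FULL
icacls D:\RDS_Profiles /grant "CREATOR OWNER:(OI)(CI)(IO)F"
icacls D:\RDS_Profiles /grant "LAB\RDSH Users:(AD,RD,X)"

The CREATOR OWNER entry gives each user full control of the profile folder they create, while the group entry on the root folder only allows listing the folder and creating subfolders.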


With the share in place it was time to give it a test. In my Horizon View 6 lab I am currently publishing the all-important Calculator app as well as Internet Explorer:


After launching each of the apps a few times and making some changes (home page, etc.) I checked my profile share to make sure all checked out OK:


Everything checked out and I am good to go!

Thanks for reading,


Running a 2 Node VSAN with VMware View 5.3.1

Last week I posted (located here) on the new hosts that I set up and deployed in my lab. One goal for the new lab hosts was to work with and get familiar with VMware's VSAN product while leveraging it in my studies for my upcoming VCAP-DTA exam. Though the exam is based on View 5.2, I needed to upgrade my lab to View 5.3.1 (the minimum version of View that supports VSAN). While fighting the urge to upgrade the lab to the latest and greatest shiny object, View 6.0, I went to work on getting VSAN deployed. Though I only have two physical hosts, I wanted to find a way to set up VSAN and get it running without a 3rd host (whether physical or spoofing it with a virtual host).

Before going further I want to state that the following changes are NOT SUPPORTED and are for obvious reasons not recommended for a production environment. With these changes no “replica” virtual machine information is created, so with the loss of one of my hosts that VM and its data are lost. As I am running non-persistent desktops just for testing, this is not an issue.

On with the show

With the disclaimer out of the way, let's get to it. As you can see from the screenshot below VSAN has been enabled on the two lab hosts. Each host is providing a single SSD drive for the read/write cache and two SATA magnetic drives for the persistent storage:


With the VSAN Datastore created and presented to each of the hosts I took to the Horizon View administrator console and tried to spin up a quick desktop pool. After waiting a few minutes I received an error in vCenter that the replica creation had failed. I took a look at the settings for the desktop pool I created and received the following error:


The challenge is clearly highlighted in the screenshot: VSAN knows that only two hosts participate in the VSAN cluster, and a minimum of three nodes are required for a proper configuration. Thinking that I could outsmart View/VSAN, I logged into the vCenter web client and created a new storage policy, setting “Number of failures to tolerate” to zero and “Force Provisioning” to yes:


With the new storage policy created I attached the policy to my View golden image desktop and tried to provision the desktop pool again. Second try, same result. I received the same message as the screenshot above; View/VSAN knows I am running with only two nodes. Keeping score, VSAN 2, Jason 0.

At this point I took to Google to do some research and see if anyone had tried this or run into a similar situation. While the research was light on folks trying to run a two node VSAN cluster (not surprising, again NOT SUPPORTED), I did find an article that covered VSAN operability with VMware View 5.3.1:

Horizon View 5.3.1 on VMware Virtual SAN – Quick Start Guide

Reading through the KB article I stumbled across the answer I was looking for under the Virtual SAN Storage Policy Profiles section of the article:

With Horizon View 5.3.1, there is no need for any user action associated with the default policy. The policy applies to both linked-clone desktop pools and full-clone desktop pools. The default policy for all virtual machines using the Virtual SAN datastore is as follows:

  • Stripes: 1
  • Resiliency: 1
  • Storage provisioning: Thin

Even though I created a storage policy, VSAN with View 5.3.1 leverages the “Default Policy” out of the box for virtual machines. This can be seen by running the following command via a console session on your ESXi host:

esxcli vsan policy getdefault

The output from the command looks something like this (the values reflect the defaults discussed below; exact formatting may vary by build):
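Policy Class  Policy Value
cluster       (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vdisk         (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmnamespace   (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmswap        (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))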


From the output we can see that for each of the “Policy Class” attributes the setting “hostFailuresToTolerate” is configured for 1 (one), which in my two-node configuration is not supported. Also of note is the “forceProvisioning” option, set to 1 (one). Mentioned in the VMware KB article is an example of how to manipulate the default policy via ESXCLI commands.

I went to work and set the “hostFailuresToTolerate” and “forceProvisioning” options to 0 (zero) and 1 (one) respectively for each of the Policy Classes:
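The commands took the following general form, one per policy class (note the escaped quotes inside the policy string):

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmswap -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"

Running esxcli vsan policy getdefault again confirms the new values took effect.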


After the changes were made I again tried to provision my test desktop pool with VMs on the VSAN Datastore. This time with a little more success, though it looks like I need to update the View Agent in my golden image. :-)


Thanks for reading and post questions or comments below!


Home Lab Host Upgrades

Funny what can change in a year. Last year about this time I purchased a third host for my lab environment (blog post here) based on the popular “Baby Dragon” build, leveraging micro-ATX motherboards and cases to lower the overall footprint. In the time since that post was released, host-based caching solutions (PernixData, Infinio, VMware's vFlash) and server-based storage solutions (specifically VSAN) have put higher demands on host-side resources, and I found my current hosts somewhat limiting for these technologies.

After doing some research I stumbled across a blog post by Erik Bussink (blog / twitter), who earlier this year was looking for new hosts as well and documented his new build around the SuperMicro X9SRH-7TF motherboard. The motherboard provides some great features including dual onboard Intel X540 10GbE Ethernet adapters, an integrated LSI 2308 adapter, the ability to scale to 64GB of RAM with non-ECC memory (8 DIMM slots), and last but not least IPMI capabilities for remote management. For my new lab requirements this motherboard brings all the needed toys onboard in a simple package. My plan is to leverage the dual 10GbE interfaces to carry vMotion and VSAN vmkernel traffic between the two new hosts, and who doesn't want 10GbE in their lab? :-)

With the motherboard decision out of the way, I looked into CPUs and CPU coolers. In my lab environment, one resource I have never been short on is CPU processing power. With this in mind I went looking for the most economical (read that as cheapest) LGA2011-compatible CPU. I landed on an Intel Xeon E5-2603 v2 Ivy Bridge 1.8GHz QC processor that fit the bill. With the CPU selection made I needed to find a compatible CPU cooler. As mentioned in Erik's post, the SuperMicro motherboards utilize the Narrow ILM standard for coolers. With an idea about what I was going to do for a chassis (more below) I went with a Dynatron R13 70MM unit.

Now, with the motherboard and CPU components chosen, this is where the build takes a slight departure from what I have used in my lab in the past. Again, keeping my options open for host-side resources (i.e., SSD and HDD drives), I chose a 2U SuperMicro rack mount chassis to house everything. Sticking with the SuperMicro theme I purchased a 2U chassis equipped with a single 400 watt power supply, the SuperMicro CSE-822T-400LPB. An added bonus is that the unit provides 6 hot-swappable drive bays, perfect to load up on SSD or HDD drives for various configurations and testing.

To round out the remainder of the build I purchased four SuperMicro MCP-220-0080-0B 3.5 inch to 2.5 inch drive trays. Two units will be used per server chassis; for now one will hold the SSD for host caching solutions and the other will hold the SSD for VMware VSAN. Last but not least, memory and a single dual-port 1Gb Ethernet adapter were recycled from the two legacy hosts to finish up the server builds.

With the departure from ATX and micro-ATX systems, this build brings some noise challenges that I should mention. While I wouldn't classify the units as loud, the additional fans in the chassis (4 x 80mm) and the server-class CPU cooler create a decent “hum,” and I can say that I wouldn't want them running inside the home. So, like my previous lab hosts, they find their home in the garage.

If you have questions or comments on the build, let me know below!


Full Parts List

Motherboard – SuperMicro X9SRH-7TF

Case – SuperMicro CSE-822T-400LPB

CPU – Intel Xeon E5-2603 v2 Ivy Bridge 1.8GHz QC

CPU Cooler – Dynatron R13 70MM Ball Bearing CPU Cooler

Drive Trays – SuperMicro 3.5 to 2.5 Drive Trays

Book Review–Networking for VMware Administrators


When I first heard that Chris Wahl (blog / twitter) and Steven Pantol (twitter) were working on a book focused on networking topics for VMware administrators, I knew it was going to be a must-have book for the tech library, and it did not disappoint. Being an IT veteran of 15+ years, my focus has always been on the system administration/storage side of the house. I did not really get active in networking until I started working with VMware in the VI 3.x days. Even at that point, from a networking side I was mostly interested in whether my networking team provided the correct VLANs on my VMNIC uplinks.

This has changed over the last few years as I have moved away from the day-to-day administration of a virtual environment toward an architectural role. Along the way I have had to pick up networking skills from various resources, but nothing that was compiled together in a single book. Chris and Steve do a fantastic job of building up the basics of physical networking and then taking you into the advanced features of virtual networking in a vSphere environment.

The book is broken into four parts to help you across your networking journey:

  1. Physical Networking 101 – This section consists of six chapters and starts off with Ethernet basics, Layer 2/Layer 3 networking concepts, and finishes up discussing converged infrastructure solutions by Cisco, Nutanix, and others.
  2. Virtual Networking – Section two is the “meat” of this book. Seven chapters break down everything you need to know about configuring and designing virtual networking in your VMware vSphere environment. With full breakdowns of the vSphere Standard Switch, vSphere Distributed Switch, and the Cisco 1000v, this section alone is worth the price of admission.
  3. You Got Your Storage in My Networking: IP Storage – Four chapters covering the design and implementation of IP based storage. The chapters are split evenly between iSCSI and NFS best practices.
  4. Other Design Scenarios – The last two chapters in the book provide additional design scenarios and information. Chapter 18 provides four different network adapter configurations (2-, 4-, 6-, and 8-NIC servers) with and without IP-based storage. Chapter 19 covers multi-NIC vMotion architectures.

While the subjects that are covered are for sure complex and detailed, both authors have done an excellent job creating content that is easy to read and retain. With the addition of the design examples you are sure to walk away from this book with the knowledge to implement the most advanced of vSphere networking features.

Happy Reading!


Notes from the Field: VSAN Design–Networking

A few weeks back I published a post related to a VMware VSAN design I was working on for a customer (located here). That post focused mostly on the key area that VSAN addresses, storage. While the storage piece is where VSAN shines and has the most moving parts to understand from a design and implementation perspective, you can't forget about the network. With the scale-out nature of VMware VSAN, the network connectivity between hosts to carry replica VM storage traffic becomes increasingly important.

As this post and the previous post are based on a customer's design leveraging VSAN in a net-new infrastructure, we are implementing 10Gb Ethernet for the ESXi host connectivity. Two factors played into this decision: first, 10Gb Ethernet has come down in price over the last few years, allowing for a greater adoption rate. Second, as we are deploying VSAN, VMware recommends using 10GbE to provide the needed network throughput/bandwidth to handle the storage traffic.

Since we are “building” our own VSAN nodes as mentioned in the storage post, it was off to the I/O section of the VMware HCL to confirm/verify supported 10Gb Ethernet NICs to be used with our Dell R720 servers. We will be using copper-based 10GBase-T switches for connectivity, so the servers will be configured with redundant Dell OEM'ed Intel Ethernet 10G 2P X540-T adapters. For the initial deployment we will be using one port from each card to provide redundancy and availability.

Someone Mixed VDS with My VSAN

While VSAN brings along with it some cool technology related to storage, one piece that is overlooked (or hasn't received enough attention, in my opinion) is that when licensing VSAN, VMware bundles in the ability to utilize the vSphere Distributed Switch (VDS). This feature is normally reserved for deployments involving VMware's Cadillac version of licensing, Enterprise Plus. Leveraging the VDS along with Network I/O Control (NIOC), a feature that is only available on the VDS, allows for a streamlined installation/configuration of the vSphere environment. Additionally, deploying the VDS in a 10GbE VSAN environment is preferred by VMware. The below quote is taken from page 7 of the VMware Virtual SAN Design and Sizing Guide:

“Virtual SAN provides support for both vSphere standard switch and VMware vSphere Distributed Switch™, with either 1GbE or 10GbE network uplinks. Although both vSphere switch types and network speeds work with Virtual SAN, VMware recommends the use of the vSphere Distributed Switch with 10GbE network uplinks.”

If you are not familiar with the VDS or NIOC, Frank Denneman has a great primer post on the feature and functionality; that post can be viewed here. Also, though a bit dated, VMware has an excellent whitepaper on VDS design and implementation, VMware vSphere Distributed Switch Best Practices, available here. The diagram below provides an overview of how the hosts will be configured and will communicate at both the physical layer as well as the VDS/portgroup layer.


For the sake of simplicity the diagram above only shows the five portgroups that will need to be created on the VDS for our deployment. The traffic type and VDS teaming policy for each portgroup is listed in the table below:

[Table: Traffic Type, Port Group, Teaming Option, Active Uplink, and Standby Uplink for the five port groups. Three of the port groups use the Explicit Failover teaming option, and virtual machine traffic has a dedicated Virtual Machine port group.]

Virtual SAN Networking Requirements and Best Practices

VMware has published a guideline for VSAN requirements and deployment best practices. Below is the listing from VMware vSphere 5.5 Documentation Center located here.

  • Virtual SAN does not support IPv6
  • Virtual SAN requires a private 1Gb network. As a best practice, use a 10Gb network.
  • On each host, dedicate at minimum a single physical 1Gb Ethernet NIC to Virtual SAN. You can also provision one additional physical NIC as a failover NIC.
  • You can use vSphere standard switches on each host, or you can configure your environment with a vSphere Distributed Switch.
  • For each network that you use for Virtual SAN, configure a VMkernel port group with the Virtual SAN port property activated (see the command-line example after this list).
  • Use the same Virtual SAN Network label for each port group and ensure that the labels are consistent across all hosts.
  • Use Jumbo Frames for best performance.
  • Virtual SAN supports IP-hash load balancing, but cannot guarantee improvement in performance for all configurations. You can benefit from IP-hash when Virtual SAN is among its many consumers. In this case, IP-hash performs the load balancing. However, if Virtual SAN is the only consumer, you might not notice changes. This specifically applies to 1G environments. For example, if you use four 1G physical adapters with IP-hash for Virtual SAN, you might not be able to use more than 1G. This also applies to all NIC teaming policies that we currently support. For more information on NIC teaming, see the Networking Policies section of the vSphere Networking Guide.
  • Virtual SAN does not support multiple VMkernel adapters on the same subnet for load balancing. Multiple VMkernel adapters on different networks, such as VLAN or separate physical fabric, are supported.
  • You should connect all hosts participating in Virtual SAN to a single L2 network, which has multicast (IGMP snooping) enabled. If the hosts participating in Virtual SAN span across multiple switches or even across L3 boundaries, you must ensure that your network is configured correctly to enable multicast connectivity. You can change multicast addresses from the defaults if your network environment requires, or if you are running multiple Virtual SAN clusters on the same L2 network.
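As a quick sketch of the VMkernel requirement above from the ESXi command line (vmk1 is an assumption; substitute the VMkernel interface that carries your Virtual SAN traffic):

esxcli vsan network ipv4 add -i vmk1
esxcli vsan network list

The first command tags the interface for Virtual SAN traffic; the second lists the interfaces currently carrying it.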
I hope this post, as well as the original post, is helpful in designing and implementing your VSAN environment.