Sunday, February 24, 2013

VMware Horizon View 5.2

VMware's End User Computing strategy has certainly evolved over the last few years. It was formerly distributed across many products that each solved part of the VDI problem: View, Horizon, Mirage, Project Octopus, ThinApp, AppShift, et cetera. I can tell you first-hand that many customers just didn't get it, nor did they want to wade into such a cloudy pool of technologies.

At the Potomac Region VMUG event in Washington, D.C. last year, I listened intently as VMware explained that future versions of View would incorporate many of these solutions.

VMware View 5.2 represents a major consolidation of EUC products. You'll note right away that the product name has evolved: VMware Horizon View. View is now a major component of the VMware Horizon suite. In 5.2, we finally get clientless access to our virtual desktops via HTML5. And AppShift... Wow. If you haven't had a chance to see AppShift in action, I highly recommend checking it out. I saw it for the first time at VMworld 2012, and it was amazing. In short, AppShift re-imagines a Windows desktop or application with a touchscreen metaphor, complete with gesture support.

I suspect that 5.2 will entice many customers who have been considering VDI to finally take the plunge. Virtual infrastructure is kind of a done deal; pretty much everyone has already virtualized their server workloads. Now it's time to provide desktops in the same fashion.

Wednesday, February 20, 2013

Polls are Open! Vote on your favorite VMware Blogs!

Voting is now open for the 2013 Top VMware and virtualization blogs over at vsphere-land.com. And that's right, you can vote for my blog! It only takes a minute (or a few minutes, really), and you might even run into a few new blogs that you'll subscribe to.

I've been blogging about VMware for almost two months now, and I'm really surprised at how much traffic I've seen in that short time. Lots of people end up here looking for help with VMTools on various Linux distributions. I'm working on a CentOS post now, and will gladly write more based on your feedback.

Truth be told, I'm having a ton of fun with this blog. Sure, the votes would be nice, but as long as I'm learning and helping along the way, I'm satisfied.

Here's the link to cast your vote. Thanks for reading!

mike

Sunday, February 17, 2013

Port Group Options for vSS - Part 2

In my last post, I discussed the options available to a vSS VM Port Group on the General and Security tabs. Now that you've had a chance to let that sink in, we're moving on to the remaining tabs: Traffic Shaping and NIC Teaming. Here we go.

The Traffic Shaping Tab

The Traffic Shaping Tab.
The Traffic Shaping tab gives you a significant amount of control over the performance of a virtual interface within your port group. It's important to keep in mind that these policies, if enabled, only affect outbound traffic from the selected port group. By default, these options inherit the vSwitch's configuration, where traffic shaping is disabled. If you'd like to use these features, check the box and select Enabled. Now you can set the following configuration options:

Average Bandwidth - The amount of bandwidth your port group is allowed to use over time, measured in Kbits per second.

Peak Bandwidth - The absolute upper limit on bandwidth, also measured in Kbits per second; this is the highest rate allowed even while the port group is bursting.

Burst Size - The amount of "bonus" data, measured in Kbytes, that a port group can send when extra bandwidth is available. When your port group (well, really the VMs contained in your port group) requests more bandwidth than the Average Bandwidth allows for, and additional bandwidth is available, the Burst Size comes into play.

NOTE: These three items are very closely related; setting one improperly can muck up the other two pretty badly. For this reason, vSphere kindly displays an error if you've made a big mistake.

vSphere: "You're doing it wrong."
Take a moment and think about this: if you set your Average Bandwidth to 100,000 Kbits/sec, and your Peak Bandwidth to 9,000 Kbits/sec... not so good. You're basically saying that the average can never be reached, as it exceeds the Peak. Average Bandwidth must be less than or equal to Peak Bandwidth. Also remember that the Peak is the ceiling during a burst; the Burst Size just controls how much data can be sent at that rate before the port group is throttled back toward the Average.
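
If you'd rather script this than click through the GUI, here's a rough sketch using pyVmomi (Python bindings for the vSphere API). The hostname, credentials, and port group name are placeholders, and the unit conversions reflect my reading of the API docs (bits per second and bytes, versus the Kbit/sec and KB values the client displays), so treat it as a starting point rather than a definitive recipe.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect straight to an ESXi host (placeholder hostname and credentials).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)

    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    netsys = host.configManager.networkSystem

    # Reuse the existing port group spec so the VLAN and vSwitch settings stay intact.
    pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == "VM Network")
    spec = pg.spec

    # The API takes bits/sec and bytes; the client displays Kbit/sec and KB.
    spec.policy.shapingPolicy = vim.host.NetworkPolicy.TrafficShapingPolicy(
        enabled=True,
        averageBandwidth=100000 * 1000,  # 100,000 Kbit/sec average
        peakBandwidth=200000 * 1000,     # 200,000 Kbit/sec peak (>= the average)
        burstSize=102400 * 1024,         # 102,400 KB of burst
    )
    netsys.UpdatePortGroup(pgName="VM Network", portgrp=spec)
    Disconnect(si)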

In my experience, if you've considered capacity planning in your design (you did, didn't you?) then these values are less important. You don't want to build a vSwitch that is either too large or too small for the VMs contained therein.

You ready for the really fun tab? Good. Because next is...

The NIC Teaming Tab

The NIC Teaming Tab.
Now you know why the Port Group Properties page is so tall. The NIC Teaming tab has a ton of configuration options, at least relative to the other tabs. And these things are IMPORTANT. Don't make changes here, even if you're feeling bold and it's Friday afternoon, unless you know exactly what the impact will be.

Load Balancing - This setting allows you to select a different load balancing method than the one your vSwitch is using. You've got a decision to make here:
  1. Route based on the originating virtual port ID
  2. Route based on source MAC Address
  3. Route based on IP hash
  4. Explicit failover order
I won't get into the details of these methods in this post; they warrant a post unto themselves. Plus I have to create some drawings for each, which will take a bit of time.
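
For reference, here's a small pyVmomi (Python vSphere API) sketch that prints the effective load balancing policy for each port group on a host, along with how I understand the four GUI choices map onto the API's policy strings. Hostname and credentials are placeholders, and the mapping is my own notes rather than official documentation.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # GUI option                                   -> API policy string (as I understand it)
    #  Route based on the originating virtual port -> "loadbalance_srcid"
    #  Route based on source MAC address           -> "loadbalance_srcmac"
    #  Route based on IP hash                      -> "loadbalance_ip"
    #  Explicit failover order                     -> "failover_explicit"

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)

    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    netsys = host.configManager.networkSystem

    for pg in netsys.networkInfo.portgroup:
        teaming = pg.computedPolicy.nicTeaming  # the effective policy after inheritance
        print(pg.spec.name, "->", teaming.policy)

    Disconnect(si)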

Network Failover Detection - This option determines how the port group identifies a network failure at the upstream switch. You can select Link status only or Beacon probing (which also considers link status).
  • Link status detection will be tripped if a physical cable between your ESXi host and your upstream switch is unplugged, or if the switch itself loses power. But it can't see problems beyond that switch; if the upstream switch is isolated from the rest of the network, link status will not detect a failure, and the link will remain active. In this case, you'll want to enable Beaconing.
  • Beaconing - When you enable Beaconing, you're telling ESXi to send beacon probes out of all the physical uplinks for this port group or vSwitch and listen for them on the others. That makes ESXi far more capable of detecting failures beyond your upstream switch.

Notify Switches - When enabled, ESXi notifies the upstream physical switch whenever a virtual NIC is connected to the virtual switch or its traffic moves to a different physical uplink, so the switch can update its MAC address tables right away.

Failback - When set to Yes, this option will move your VM traffic back to the original physical Ethernet adapter once it becomes available again. I recommend setting this to No. You'll want to validate that your network connection is stable before moving traffic back to it.

Failover Order - When you create your vSS, you specify the failover order for your active physical NICs. This setting allows you to override the vSS's failover order based on the needs of your port group. A classic example is your management port group. Your vSwitch may be configured to use multiple active uplinks to provide greater bandwidth, but bandwidth typically isn't a problem for your management traffic. Instead, you'll want to provide high availability. Do so by choosing to override the failover order for your management port group and selecting one NIC as Active and at least one more NIC as a standby adapter.
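
To tie the last few settings together, here's a hedged pyVmomi sketch of that management example: explicit failover order with one active and one standby NIC, beacon probing, switch notification on, and failback off. The port group and vmnic names are placeholders, and the rollingOrder flag reflects my understanding of how the API expresses "Failback: No", so please verify in a lab before trusting it.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)

    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    netsys = host.configManager.networkSystem

    pg = next(p for p in netsys.networkInfo.portgroup
              if p.spec.name == "Management Network")
    spec = pg.spec

    spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",              # use the failover order defined below
        notifySwitches=True,                     # tell the upstream switch when traffic moves
        rollingOrder=True,                       # my reading: True is the same as Failback: No
        failureCriteria=vim.host.NetworkPolicy.NicFailureCriteria(checkBeacon=True),
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=["vmnic0"],                # one active uplink
            standbyNic=["vmnic1"],               # at least one standby uplink
        ),
    )
    netsys.UpdatePortGroup(pgName="Management Network", portgrp=spec)
    Disconnect(si)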

OK. So that's about it. Those of you with experience managing and designing vSphere environments will recognize that I've glossed over lots of stuff here, but for those of you who are just getting into vSphere networking, you should now have enough information to get started with some basic configurations.

As always, let me know if anything is unclear, or if you'd like to see additional information on this topic.

Tuesday, February 12, 2013

Critical Updates for Adobe Flash

In case you missed it: Adobe announced updates to its Flash player today that address critical security flaws. Just your standard "take over your system" type stuff. Update Flash on your workstations today.

You might ask why this is being mentioned here. If you're using the vSphere Web Client, you're using Flash. So do the right thing and update today.

Sunday, February 10, 2013

NFS, ESXi 5, NetApp, and You.

The VMware community is buzzing about problems when using NFS datastores on NetApp filers with ESXi 5. Here's the link to VMware's KB article that describes the problem and resolution in detail.

If you're able to, enable Storage I/O Control on the affected datastores, and you won't need to worry about the issue. Upgrading to a more recent version of NetApp's ONTAP operating system will also address the problem. Or you can modify the NFS.MaxQueueDepth advanced setting in ESXi.
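
If you'd like to see where a host stands before changing anything, here's a small pyVmomi (Python vSphere API) sketch that reads the current NFS.MaxQueueDepth value from the host's advanced settings. The hostname and credentials are placeholders, and the value you should actually set comes from the KB article, not from me.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)

    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    # QueryOptions returns the matching advanced setting(s) for the given name.
    for opt in host.configManager.advancedOption.QueryOptions("NFS.MaxQueueDepth"):
        print(opt.key, "=", opt.value)

    Disconnect(si)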

In any event, if you're doing NFS to NetApp for your ESXi 5 datastore, please read through the article and take the appropriate action.

Here's Cormac Hogan's post that raised awareness on the issue. If you're not reading his blog, you're missing out on tons of great posts about storage and virtualization.

Thursday, February 7, 2013

Maryland VMUG Meeting 2/6

I attended another great presentation by the Maryland VMUG last night. As always, the team put together a terrific evening. Here's a quick recap for those who are interested.

InMage gave a presentation on their vContinuum software, which provides automated disaster recovery for virtual machines. It's an alternative to SRM, and InMage claims that their product is less expensive due to the smaller licensing packs they offer (packs of 8 with InMage versus packs of 25 with VMware). The software replicates your guests to an alternate site; the hypervisor isn't aware of what's going on. The presentation was really well put together, and their engineer was very knowledgeable about the product and many use cases.

We were also introduced to VMware's Mirage product, which is AWESOME. It's a really clever solution to desktop migrations. As someone who has seen several organizations struggle with migrating from XP to Windows 7, I really latched on to Mirage. In short, you push a Mirage client to a Windows XP desktop. That client downloads a pre-defined Windows 7 image (one you've built according to your standards and policies) to the XP machine in the background. When the download is complete, you reboot, the install completes, and you're running Windows 7. (There's LOTS that happens in the background to make this work, and there's a bit more to it than this, but you get the idea.) No helpdesk visits to each desktop, no backing up user data to a temporary location (Mirage leverages USMT to move that to the new OS). It even works over long distances.

So it looks like I'll be installing Mirage today.

If you're in the area, I encourage you to sign up for Maryland VMUG events in the future.

Sunday, February 3, 2013

Can't Change BIOS Settings on VM in Fusion 5

An interesting question popped up in the discussions section at VMTN today. A Fusion user explained that he wasn't able to modify any of the settings in a VM's BIOS. Here's the screen he was referring to:

On the right, under Item Specific Help, you'll see a message that says, "All items on this menu cannot be modified in user mode. If any items require changes, please consult your System Supervisor."

As is usually the case, the person who runs into this problem tends to be the System Supervisor. So... now what?

Turns out that this is the result of the VM's config file (*.vmx) having a boot order defined. If you've got something like this in your VM's config:

bios.bootOrder = "hdd"

... the BIOS Setup Utility will prevent you from modifying anything on that screen. Since most of the changes you'd make here are more easily done via the Edit Settings GUI, it's probably not worth the trouble. But if you insist:


  1. Shut down your VM.
  2. Edit your VM's .vmx file, and remove the entire line starting with bios.bootOrder.
  3. Change bios.forceSetupOnce = "FALSE" to "TRUE".
  4. Save the changes to the config file.
  5. Start your VM.
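
If you'd rather not hand-edit the file, here's a rough Python sketch of steps 2 through 4. The .vmx path is a placeholder, it assumes the config file is plain text with one setting per line, and you should keep a backup before running it.

    from pathlib import Path

    vmx = Path("/path/to/your-vm.vmx")  # placeholder: point this at your VM's config file

    new_lines = []
    saw_force_setup = False
    for line in vmx.read_text().splitlines():
        if line.strip().startswith("bios.bootOrder"):
            continue  # step 2: drop the boot-order line entirely
        if line.strip().startswith("bios.forceSetupOnce"):
            line = 'bios.forceSetupOnce = "TRUE"'  # step 3: load BIOS setup on next boot
            saw_force_setup = True
        new_lines.append(line)

    if not saw_force_setup:  # add the setting if it wasn't in the file at all
        new_lines.append('bios.forceSetupOnce = "TRUE"')

    vmx.write_text("\n".join(new_lines) + "\n")  # step 4: save the changes
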
Now when your VM starts, it will load the BIOS Setup Utility, and the Boot menu will be editable:



Again, most changes you'll ever want to make to the boot order of a VM can be much more easily done via the VM's Edit Settings screen. But now you know how to change the BIOS Boot settings directly.


Saturday, February 2, 2013

VMTools and Fedora 18

You'd think that the process for installing VMTools on Fedora 18 would be the same as on Fedora 17. But... you'd be wrong.

You'll want to perform all the steps I listed here, but there's one additional step: changing the path for the kernel headers. The vmware-config-tools.pl script comes up with a bogus location on its own, then insists that the path is valid. When the script asks if "" is a valid path, say no and point it at the headers for your running kernel (on Fedora, they typically live under /usr/src/kernels/$(uname -r)/include).

The script will complete and, after a reboot, VMTools will be running and current.