Wednesday, May 13, 2015

PowerCLI for Modifying VM Network Adapters

A complex system of PowerShell and PowerCLI scripts manages the virtual machine lifecycle here. The scripts are remarkable in their implementation, and for years have been humming along just fine without much modification, even though the creator left two years ago.

Recent changes to the environment, however, caused portions of these scripts to break. So guess who gets to fix them? Correct: your humble correspondent.

Of course, I'm no PowerCLI god, or deity, or even apostle. I'm a spirited follower at best. Though to my credit, I've taken philosophy, logic, and ethics, and I have a beard, so you know, I'm qualified to debug and correct code.

So here's a fun overview of a problem I've dealt with lately, and what I learned in the process. Disclaimer: If you know everything about scripting already, you should have stopped reading by now.

The Problem

I replaced a few dozen vSwitches with a nice vDS, and there was much rejoicing. Except the scripting, which had been developed to use vSS cmdlets, was not happy. We had been using the following command to configure a newly-provisioned VM's network adapter:
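In rough terms (this is a sketch, not the verbatim one-liner), with $vmname and $vlan built from the provisioning request, it looked something like:

Get-VM -Name $vmname | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName $vlan -StartConnected:$true -Confirm:$false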

We build the variables based on the relevant information in the request for a new VM. So things like $vlan are defined based on the ultimate location of the VM. But when you're working with a vDS (and more importantly a VDPortGroup) you can't use vSS cmdlets.

The Solution

So after some research and a lot of trial and error, we ended up with this:
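Again in rough terms (a sketch of the approach, not the production script; $vds here is just a stand-in for the name of the distributed switch):

# Find the distributed port group on the vDS, then grab the VM's network adapter
$pg = Get-VDSwitch -Name $vds | Get-VDPortgroup -Name $vlan
$na = Get-VM -Name $vmname | Get-NetworkAdapter
Set-NetworkAdapter -NetworkAdapter $na -Portgroup $pg -StartConnected:$true -Confirm:$false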

The difference is significant. We're not just talking about changing a single cmdlet, or swapping out a single switch. We have to change our whole approach. Instead of just setting the port group, we need to define the VDPortGroup that we want the VM's interface to connect to, and that means identifying the vDS itself. So I built the $pg variable to contain that information. And $na holds the information needed to properly identify the network adapter we want to modify.

Logging: it's fannnnntastic!
You'll notice some additional lines that echo the values for these variables, and the output of a get-networkadapter command, into a logfile. I set this up to debug a problem I was having (see below). This logging was crucial to helping me see where things were breaking, and I ended up leaving these cmdlets in place in case things go south again. NB: Logging is always a good idea.
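The logging itself is nothing fancy; a minimal sketch of the idea, with $logfile standing in for wherever you want the log to live, looks like this:

# Record the values we care about, plus the adapter's current state
Add-Content -Path $logfile -Value "pg: $pg"
Add-Content -Path $logfile -Value "na: $na"
Get-NetworkAdapter -VM $vmname | Out-String | Add-Content -Path $logfile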


The Problem with the Solution

However. I was getting some really strange results with this script when run as part of the automation system. I could run these commands in a PowerCLI window without a problem. The VM's network adapter would be configured exactly the way I intended. But when the same script was run under the context of a service account, the VM's network adapter would be configured to connect to the vDS but not to any port group. And the logging confirmed that: the portgroup value in the get-networkadapter output was blank. It was the kind of thing that drives you bonkers. I mean really.

The Solution to the Problem with the Solution

It occurred to me that maybe the problem was related to the modules that were loaded under the service account's profile. So I logged into the script host using the service account and ran the PowerCLI 6 R1 installer. (I had previously upgraded PowerCLI from 4.x to 5.8 R something (and PowerShell from 2 to 4) with my own administrative credentials.) And I even had another administrator do the same. After both of these actions, all of the scripting started working as expected.

If you ever run into weirdness with certain cmdlets after you upgrade PowerCLI, PowerShell, or both, you should consider re-running the respective installer for each user profile on your scripting host.

Epilogue

You've probably seen some syntax that you disagree with here. PowerCLI-wise, I mean. The process to modernize these scripts is a slow one, and many of the piped-output to piped-output bits will go away. There's always a faster way to get things done when you're scripting. But performance and elegance are always secondary to function. Always.

Tuesday, May 12, 2015

The Battle For Your Data Center’s Brain

The complex ecosystem of symbiotic, technological, silicon-based organisms that is your data center: it’s the epicenter for your business, your mission, and your interactions with the world. Your applications, your data, your infrastructure, and a non-trivial amount of your capital all end up in the orthogonal confines of four walls, a raised floor, and a ceiling snaked with assorted cable types.

Your data center is populated with all manner of resources. But generally speaking, these resources can be categorized into the same three groups we’ve used for decades in IT: server, storage, and network. Maybe you’ve been in the industry long enough to remember when storage, as a discipline, was certainly not a peer to server and network with regard to complexity, criticality, and functionality. For many IT professionals, storage was just a remote disk attached to your server and network, a dedicated pool of capacity for a server with no open bays. An avoidance strategy for having to scale out to yet another Microsoft Exchange 5.5 server.

Today’s storage is markedly different. So different, in fact, that newcomers to the technology profession likely can’t imagine storage not being an active participant in not only the delivery of your data center’s services, but also in the management of said services. The elevation of storage from a simple resource to a first-class data center citizen means that a new revolution is underway: it’s the battle for the right to manage your data center.

Brainz.
Well, maybe that’s a bit hyperbolic. It’s not that war has been declared for the right to manage your data center. Rather, it’s a grudge match to determine where the intelligence that’s needed to effectively manage your data center’s resources lives. When you see buzzwords that start with “Software-Defined” you know you’ve found a contender: software-defined networking, for example, is a play to apply intelligence to the data center through contemporary, sophisticated networking technologies; software-defined storage, on the other hand, attempts to apply intelligence by efficiently serving and storing your data, which is arguably the most important asset in your entire data center (except for the occasional human being that can be found staring at an old flat-panel monitor on a crash cart, cursing while listening to hold music for tech support). And we can’t overlook virtualization, which would certainly have been named “software-defined servers” if that tech had been introduced in 2011. Marketing types lump these technologies into the concept of the “software-defined data center.” But perhaps what’s really happening here is better named, “software-defined intelligence.”

Why Storage, and Why Now?
Chris Evans wrote a great article last month titled, “End-to-End Data Management.” He argues that data management needs to be raised up through the stack into the application layer, not just relegated to the realm of the physical. And for the record, he is absolutely correct. But why are we only now making this realization?

Because we’re finally coming to terms with the quantities of data that we’re generating. And the approach we’ve taken to managing data up to now simply cannot scale to the phonetically-improbable order of magnitude that obscures the true meaning of 10^21.

For this reason, we demand that our storage solutions be more than just bit buckets with brushed bezels. We need storage that’s intelligent, that’s able to analyze its workload and not only report on its contents but also generate metadata that informs our data retention policy. We need storage that automates the chore of defining storage performance levels and of promoting and evicting data between tiers.

As for why storage: consider how your data center looked 10 years ago, how it looks today, and how it will look 10 years from now. Like any other complex organism, your data center will likely see a total replacement of components, from switches to servers to SANs, perhaps twice in this twenty-year period. Hardware breaks, becomes obsolete in function and fashion, and is readily replaced by the next revision. But your data is the constant in this equation. You may migrate data from one storage platform to another, but the data remains the same. Which is to say, we must stop treating data as just another resource to be managed, and start treating it for what it is: the digital representation of your business, mission, and research.


Managing the data center is comparatively easy when you consider the enormity of managing your data. Storage platforms will come and go. But intelligent data platforms will absolutely be the control point for data centers in the near future.

NB: This post is part of the NexGen sponsored Tech Talk series and originally appeared on GestaltIT.com. For more information on this topic, please see the rest of the series HERE. To learn more about NexGen’s Architecture, please visit http://nexgenstorage.com/products/.

Friday, May 1, 2015

Exporting from AWS EC2

I've decided to export my Ubuntu instance from AWS EC2 to test the process of migrating a workload to VMware's vCloud Air OnDemand service. Portability is important to everyone, and moving your virtual machines between cloud service providers shouldn't be the technological equivalent of climbing the Dawn Wall. This post will be less about VMware's offering, and more about how to get out of EC2. [1]

Checking Out of EC2

Getting your instance out of EC2 is... interesting. Unlike most actions in AWS, exporting an instance requires the use of a command-line toolkit that you need to download. I can tell you that, at this point, many people would throw in the towel. It's clear that getting your VM out is not going to be an easy task; the process alone will intimidate many people who launched EC2 instances because Amazon made the provisioning process so easy. What took a few clicks to create will take a bit more work to export. And here's an observation I made during this experience: Hemingway wrote the on-boarding script; Kafka wrote the off-boarding.

Installing the Amazon EC2 Tools

I'm following the steps listed in this article from Amazon: Setting Up the Amazon EC2 CLI and AMI Tools. More specifically, I'm following these instructions because OS X. I'll spare you the tedium of these instructions, and you'll have to trust that I've followed them properly. Just follow those links to get a sense of what's required. And be glad that you've created an IAM user instead of using a keypair that's bound to the root account. [2]

Once you've downloaded the tools and configured them according to the instructions for your OS, you're ready to move on.

Creating your S3 Bucket

"Oh, you want to stop using an AWS service? No worries! Just make sure you sign up for another one while you're on the way out." -Amazon

In other words, you need to use S3 to store your exported instance until you download it. But that's relatively easy: just follow these instructions. Keep in mind that the bucket itself costs nothing. Using the bucket (that is, transferring data into or out of the bucket) will cost you.

Exporting...

Once you've got everything set up (and I do mean everything; follow the steps to set up your EC2 tools environment exactly as documented, or it simply will not work. And this process requires you to review the particulars of your EC2 instance, including the region where your instance lives), you're ready to export. Just make certain you shut your instance down first; it can't be running during the export.

You'll end up with a command along these lines:


./ec2-create-instance-export-task <your-instance-id> -e VMware -f VMDK -c OVA -b export-mc-server --region us-west-2
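For what it's worth, the flags break down like this: -e is the target environment (VMware), -f the disk image format (VMDK), -c the container format (OVA), -b the destination S3 bucket, and --region the region (not the availability zone) where your instance lives.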

I forgot to mention something: prepare to be disappointed. Because you can only export an EC2 instance if it was originally imported into AWS. Any instance you create in EC2 cannot be exported using Amazon's tools.

So this is where the story ends. Hotel California moniker: well earned. Getting your instance out of EC2 will require the use of third-party tools, such as VMware's Converter, running inside the instance.

EC2 tools will never dismantle AWS's house. Or something along those lines.

[1] My consulting business (www.holdenllc.com) partners with both VMware and Amazon. It's similar to registering as a Democrat and a Republican, and experiencing the feelings of elation and outrage simultaneously, all the time.

[2] I mean, I certainly wouldn't have made that mistake, if someone had advised me not to. But they didn't, so I did.