Wednesday, December 30, 2015

Log Scraping in System Center Operations Manager 2012 R2

I haven't written a technical post in... a long time. So I'm shaking the dust off a bit, and getting back to the thing I enjoy most about work: solving a good problem.

The Problem

The hospital's paging system is supported by an application called SPOK (that's spoke, not Spock, FYI. And yes, I'm as disappointed as you are). SPOK runs on a few servers that support its various components. And like any good distributed application, SPOK establishes and maintains connections between servers.

Unfortunately, like nearly every other distributed application, SPOK offers little in terms of availability monitoring. Sure, you can monitor the various services. But the application has been observed to fail even with all of the services happily running.

So how do we monitor an application that is running but not working?

The Context

Luckily, the team that developed SPOK implemented some great logging for their application. And if you've got a monitor focused on the server, you can just have the agent look for certain event IDs in the standard Windows event logs, right?

Well, no, actually. Because SPOK's logging is done via a text file, which is preferable; the level of detail that can be stuffed into an application-specific logfile would overwhelm a standard Windows logging facility. So how do you deal with a logfile when you're using Microsoft System Center Operations Manager 2012 R2? (And why is SCOM such an awful thing to pronounce?)

The Solution

I should have started out by saying that I really don't like SCOM. It's clunky, the operations console is a huge waste of screen real estate, and the dashboards are so limited that you shouldn't even waste the time to set them up. SCOM is awful. Oh, all of these things will be fixed in the next version, you say? #idgaf

But in IT, we are often constrained to use the tools we have, not the tools we want.

So I set out to see what SCOM could offer in this case. And I was pleasantly surprised to find that SCOM has a very capable logfile monitoring feature to address this very problem.

First, Find the Facts

The first step was to review the SPOK logfile and identify two patterns:
  1. A pattern to indicate a healthy to unhealthy state change, and
  2. A pattern to indicate an unhealthy to healthy state change.
The first pattern was determined to be "Error sending handshake response: Client not connected." We observed that whenever the application fails, this exact message is logged. So we knew what to look for in order to denote the start of a problem.

The second pattern was more difficult to identify. I looked for a series of log entries that coincided with a server restart, which gave me what I was looking for: a clean set of startup messages in the logfile. It turns out that every time the application successfully connects to its network of servers, a specific message is logged: "Protocol supported. Sending response." 

(If you're asking why we wanted two patterns to look for: my goal was to set up a monitor in SCOM that would detect the problem and create an alert, which in turn sets off a series of events thanks to the integration we created between monitoring and ITSM solutions. But I also wanted to have this alarm auto-close and auto-clear if the application recovers. That functionality depends on a second pattern to indicate the application's return to a healthy state.)

I finally had the data I needed to set up the monitor. And in case it's not obvious: you should collect as much information as possible before you even log into SCOM.
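
Part of that collection, for me, was confirming that both patterns actually show up in the logs before building anything. A quick PowerShell sanity check does it; the UNC path and filename below are placeholders for wherever your SPOK logs actually live:

# Count occurrences of the unhealthy and healthy patterns in the current logfile.
# The path is a placeholder; point it at your actual SPOK log directory and file.
$log = '\\spokserver\logs\amc20151230log.txt'
$bad  = Select-String -Path $log -Pattern 'Error sending handshake response: Client not connected' -SimpleMatch
$good = Select-String -Path $log -Pattern 'Protocol supported. Sending response' -SimpleMatch
"Unhealthy hits: $($bad.Count); healthy hits: $($good.Count)"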

Creating the Monitor

Fire up the Operations Manager console and select the Authoring tab. Then navigate down to Authoring | Management Pack Objects | Monitors on the left of the console. On the right, navigate to Windows Computer | Entity Health | Availability. Right-click Availability, then choose Create a Monitor... Unit Monitor.


The Event Reset monitor type is what I wanted in this case: one state for healthy, the other for unhealthy. And I chose to create a new management pack for this monitor, to keep it separate from the rest of the SCOM deployment.


Because names are important, I gave the new monitor a good name: SPOK Logfile Scraping. This may seem like a detail that's not worth mentioning. But I'll counter that many objects get deployed into production with stupid names like "Test1" and "Demo1", and then people develop a phobia about changing those names for fear of breaking something. So use good names. Always. And make the description meaningful, too. Because what you create today will be examined by someone else long after you've moved on to a new position. This is your chance to document as you create; don't squander it.



The next step is not intuitive, so it warrants discussion.

Zork. The greatest IT simulation of all time.
In the Directory field, you enter the path (UNC in this case because the logfile is on a remote Windows server) to the logfile directory. You'll be tempted to include the name of the logfile here; do not give in to said temptation.

On to Pattern. In this field, you describe the pattern for the names of the logfiles to be monitored. For example, if your application rotates logfiles once a day, the filename may be configured to include the current date. And because that configuration results in logfiles that will always have a unique file name, you have to use a pattern that is loose enough to account for changes in filenames, but specific enough to identify the logfiles you want to watch. Easy, right? I went with amc*log.txt, which worked like a charm.
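
If you want to verify that your wildcard catches the rotated files (and nothing else) before committing to it, a one-liner will do; the server and share names here are made up:

# List everything in the log directory that matches the monitor's filename pattern.
Get-ChildItem -Path '\\spokserver\logs' -Filter 'amc*log.txt' | Select-Object Name, LastWriteTime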

It's now time to create the first expression. This expression will trigger the unhealthy state alert, so we'll use the first pattern identified. But the trick here is the Parameter Name. For log scraping, the Parameter Name must be set to Params/Param[1]. Set the Operator to Contains (because the log entry will surely contain a time and datestamp along with the pattern we're matching on). And finally, paste that text "Error sending handshake response: Client not connected" into the Value field. Your first expression is now complete.

The second expression will trigger the return of the application to a healthy state. The configuration is identical to the first expression, with an obvious exception: we're now looking to pattern match on "Protocol supported. Sending response" because this pattern indicates a successful application startup.

On to the Health tab. Here, we'll correlate the two expressions with their health state. Easy. Basic.


Finally, we configure the Alerting. Because that's the whole point of going through this exercise.

Choose to generate alerts for this monitor, and generate an alert when the monitor is in a critical health state. The first expression enables this functionality. Next, choose to automatically resolve the alert when the monitor returns to a healthy state. The second expression enables this functionality.

Use a meaningful name and description for your alert; these bits of information will appear in the operations console, and in the data sent via integrations with other software. Finally, set the Priority and Severity, based on the application's role in your environment (in my case, the paging system for a busy hospital was considered pretty goddamned important, as one could guess).


And there you have it: a nice little logfile scraper in SCOM 2012 R2. The monitor will dutifully inspect the logs for indications of errors, and create an alert as soon as one is detected. Of course, we tested the functionality during a scheduled outage of the application. Sure enough, when we shut down part of the application, the handshake failed, and the alert was tripped. Then we gracefully started the application, and the alert auto-closed.

The Conclusion

SCOM 2012 R2, in spite of itself, is a perfectly adequate monitoring solution. The problem, though, is that it's only just adequate. Barely adequate, even. If you've got nothing else, use SCOM. If you've got anything else, use that instead.

Application monitoring will always be more valuable to an organization than infrastructure monitoring. And until applications include a contemporary logging method, you can always rely on the tried and true log scraping approach.

Wednesday, November 4, 2015

Meaning

Yesterday, the good people at Twitter lost their goddamned minds and replaced the Favorite / Star with a Like / Heart. The furor online was palpable; in a world of constant change, even a trivial change such as this can evoke disproportionate anger from tweeps everywhere. I'm sure you've seen dozens of zingers on this topic, so I'll move on to the deeper problem with this change.

A History of Symbolism


The heart shape we know today has been in use for at least 800 years. Cursory research on the usage of the heart symbol reveals four hearts on a bible held by Jesus in the Empress Zoe mosaic in the Hagia Sophia. The heart symbol persists through the Sacred Heart devotion within the Roman Catholic faith, in which the heart was a symbol of Jesus's love and peace. The symbol appears frequently in Renaissance, Far Eastern, and eventually Western painting, sculpture, and pottery. More recently, the heart symbol represents the vitality of a hero clad in green bearing a wooden sword.


The shape of the heart symbol has changed only slightly since the early 13th century, but the meaning has remained intact. The heart is a symbol of love, most often romantic love, but love in a broader sense as well.

Modern Love

800 years of a direct correlation between the heart symbol and the concept of love; that's a hell of a legacy to carry into the 21st century. In fact, I argue that the symbolism behind the heart and the image itself cannot be separated. The heart doesn't symbolize love; the heart is love.

Now, if you'll forgive me, I'd like to hate on Facebook for a paragraph or two. Because Facebook is the harbinger for the end of meaningful interpersonal relationships and the death of free and courageous expressions of humanity. I don't believe this is hyperbole, either.

Facebook devalues the meaning of "friend" and "like" to the point where these terms would not be recognizable to 20th century humans. Friend now means someone who has a page on Facebook that you find agreeable for any reason, no matter how trivial. Friend no longer implies a personal, emotional connection between two people. In the same manner, like has been bastardized from its previous meaning of "to express personal interest in a person, place, or thing." Contemporary descriptive definitions of like skew towards "to express passing, fleeting, and temporal favor in a person, place, or thing, usually as a means to signify personal preference." Friend isn't friend, and like isn't like. Facebook is an awful, awful place.

With friend and like forever ruined, it's only fitting that Facebook (through Instagram), and now Twitter, have clandestinely agreed to morph the meaning of love by saturating our social media feeds with the heart symbol. Flick through IG, and throw hearts in the direction of #destroyedplates, #nofilter, and #tbt photos. And now, Twitter has equated the heart symbol with "like" in our timelines. So many hearts, so little emoted love.

Twitter Activity

The majority of my actions on Twitter were favorites. I'd scroll through while on a call, or while walking to my car, or while doing any number of mindless activities, and I'd throw a star to tweets that I found amusing, or relevant, or important, or indicative of the online persona I wanted to project to the world. In some ways, my collective favorites were representative of my interests, perhaps my entire being.

But make no mistake: I do not love any of the content I see on Twitter. I don't love funny tweets from @manwhohasitall. I don't love thought-provoking articles from @nytimes. I don't love the latest posts from the technology vendors whose products have enabled me to build an entire career. I don't love any of these things.

I love my family. I love my wife, my boys, my baby girl. I love old friends with whom I've traveled the world and lived to tell the tale. I love the thought of growing old in the mountains. I love myself. I reserve the use of the word love for the things that I, you know, love. And because love is wrapped up in the symbolism of the heart icon, I can't just spray hearts all over the Twitterverse.

My activity on Twitter will surely be reduced with this change. But I'm not such a curmudgeon that I expect to throw a fit and have Twitter reverse its decision. The heart is likely here to stay; the star is gone forever. I'll just quietly lose interest, as I did with Facebook all those years ago.

Truth be told, I'll be better for it.

Monday, November 2, 2015

Selling Yourself, GitHub Edition

Exposition


I really do like GitHub. Kinda.
I like GitHub because I'm supposed to like GitHub. Well, that's not entirely true; I have come to rely on GitHub Gists for posting PowerCLI / PowerShell / whatever code in my blog posts here because it's quick and easy to post. But mostly, I signed up to avoid community ostracism. Because ego.

It's always fun to sign up for a new service when you're not in the early adopter crowd; username selection quickly moves past the epoch of the purely alphabetic and straight into the era of the alphanumeric. And believe it or not, the username "mstump" is very, very common. I'm in constant competition with my eponymous doppelgänger when it comes to account names on the Internet. He beat me to GitHub (by four years, so not even close), but I beat him to Gmail. It's a struggle, and it's real.

Ok. Let's get into this story.

A Brief History of Exablaze

I graduated* from the University of Maryland, College Park in 1999 with a degree in that most lucrative of majors: English. To top that off, my concentration was in Language, Writing, and Rhetoric. (I'm sure you've got a hilarious joke about liberal arts majors and coffee shops; I've heard it before, and it's not as funny as you think. And furthermore, your major is stupid.) Right out of college, I worked as a proofreader for a financial services firm on the third shift. Yes kids, in the not too distant past, corporations hired legions of proofreaders to work on a 24x7 basis. One time I was sent to Philly to be on-call with two other proofreaders just in case a filing was being prepared. The 90's were weird.

After about one week of this job, it occurred to me that I had made a huge mistake. Proofreading is fun and all, when you're a pain-in-the-ass 20-something with a penchant for pedantry. But man, that's no way to live. So I went to a job fair at the hallowed Cole Field House on campus at UMD and met up with employees from a software firm named Avectra. (This is perhaps another sign of the times; no self-respecting startup today would select a name as linguistically grotesque.) After a few interviews, an offer, an acceptance, and a two-week notice, I started my first job in tech... as a technical writer.

Prolly shoulda resized this. Eh, F it.
I quickly learned that Avectra was a new company: it was the new name for two former competitors that had joined forces. One of those companies was TASS; the other, Ablaze. For the next 18 months, I witnessed a melting pot of corporate cults of personality: executives and senior directors from both companies positioning for leadership roles in the merged venture. Many winners, as many losers. I had no sense of stability while I was there, and eventually the churn and turmoil proved too much: just shy of the two-year mark, I accepted a position with another company, and left Avectra behind.

But Avectra was my first real job out of college, and to this day that company means something important to me. Not because their software was mind-blowing (though in retrospect, it certainly was). But because my coworkers were smart, funny, and motivated people, and a few of the suits were willing to take risks (including tolerating this former tech writer when he moved into the network administrator's office one day (because the previous network administrator quit to spend more time surfing)) and invest in their employees. So while I had terminated my employment with the company, Avectra has stayed very much with me to this day.

Before I left Avectra, it occurred to me that I had the most meaningful relationships and discussions with former Ablaze employees. I mean, most of those TASS people were just weird. So as I boxed up my office on my last day, I did the 2001 equivalent of creating a parody Twitter account: I created a new AIM screen name: exablaze.

For years, as long as AIM was the de facto instant messaging application on the Internet, I was exablaze. To friends, to family, to everyone. It wasn't just a username; it was quintessentially me.

Exablaze, the Australians

A couple of years back, a new company was incorporated in Australia. They selected the name Exablaze for their venture. You can search them up if you'd like; what they do and who they are is immaterial to this post. Some engineers from this company were interested in sharing code via GitHub, and they were keen to use the exablaze handle for posting. But wouldn't you know it? I'm exablaze on GitHub. Go ahead, take a look. My GitHub is pathetic. I'm cool with that.

An Indecent Proposal

In November 2014, I was contacted by GitHub support. They were relaying a message to me from Exablaze (the company), which bluntly asked if I'd change my username so they could have it. "Github will be able to help you to transition to a new name," they said. "LOL," I said. I politely explained that I wouldn't be handing over the username, as I'd been using it in various forums and communities and comms systems for a long, long time. (At one point, my LinkedIn profile used exablaze as its alias). GitHub replied, said they understood, and thanked me for my response. NBD.

But in August of 2015, I was contacted directly by someone from Exablaze. This time, the deal was slightly different: would you please hand over your GitHub username, and we'll hand over $1,000?

What's for Sale, What's Not

I completely empathize with Exablaze for going after my username on GitHub. I get it. I own a small business, and I get that names are important things. You want consistency throughout your social and technical communities. You want a common identity that is easily identifiable by those who you seek to engage. You want a name that means something to you.

But for these exact reasons, I'm not parting with my GitHub account. I'm not giving it away for free, and I'm not selling it for $1,000. I'm not interested in this transaction. The end.

Why? Because you can't just buy a name from someone. Names are more than characters in a URL, more than just an alias on a trendy code repository. Over time, the name of a thing becomes the thing. You cannot divorce meaning from name, certainly not after 14 years. And for me, using the name exablaze (admittedly, I do so infrequently these days, noted exception aside) is a way to honor the insanity that is my professional career in IT.

Footnote

* - This is true, technically. But it's a story for another time.

Saturday, September 19, 2015

Federal

As socially engaged members of the technical community, we thoroughly enjoy talking about the evolution of the IT discipline from the wild, wild west into a process-oriented technology factory. We espouse the benefits of repeatable processes in infrastructure and development, and eschew individual heroics that, while bringing immediate salvation, create enormous holes in the fabric of the IT operation at large. Ideally, new technologies beget new processes, and as engineers at any layer of the stack, we manage the process primarily, and the tech as an afterthought.

This is fine. This is good. This usually works.

Repeatable processes with reliable outcomes make any IT professional happy. Hell, even management cracks a subtle smirk when perched atop a process-oriented organization. Process, when done correctly, is a beautiful thing to behold.

But sometimes, process is the problem.
The triumph of the process, the defeat of endeavor.

To date, I've spent fourteen years architecting, implementing, and managing infrastructure technologies in the federal government. I've seen more than my fair share of dysfunctional, bureaucratic processes that, while well-intentioned, contribute more to chaos than calm. 

In these years, I've sought comfort from old friends: Sam, Jill, and especially Archibald. It's a lop-sided friendship; these are characters in Terry Gilliam's 1985 classic film, Brazil. But I don't mind the inequities of relationships that transcend the fourth wall. Whatever's cool with me.

I'll avoid the temptation to tell you what I think Brazil is about. I mean, the plot is simple enough: unchecked government bureaucracy is bad.

And now, I present to you, my dearest readers, the first of two vignettes on the topic of counterproductive processes at work.

The Lightbulb

My current desk is unremarkable in every way. It's gray, kinda. It's rectangular, mostly. It's safe from the ultraviolet light from the sun (which is to say, it's not near a single window). And it's replete with storage spaces: many file cabinets, many drawers, and many flippers1. If you've lived your work-life in modular monotony, you're aware that these flippers typically have light fixtures attached to the underside. These lights make for excellent task lighting, and their location keeps the glare from creeping through your retina, up through the optic nerve, and straight into the middle of your goddamned brain.

My getting-to-work routine is a thing of craft: I place my backpack on the right of the desk, remove both laptops, place the work-issued monstrosity in the docking station, power it on, and while I'm waiting for it to wake up, I flip the switches on both light fixtures. They flick on, and I sip the last drops of coffee #3 from the travel mug. But one morning, after the switches had been flipped, nothing happened.

So began the crisis.

I took a deep breath. And another. I'm a creature of habit, as it were, and such a seemingly trivial interruption to my morning routine felt like a head-on collision.

A coworker stopped by for a morning chat, and when I explained that the lights had gone out, he suggested I open a ticket with the facilities department. There's an intranet site devoted to building and facilities problems which, after more than a year of working in this environment, I had never heard of. I thanked him for the suggestion, and he floated a word of caution as he left the cube:

"It'll take a while. Few weeks, maybe more."

He wasn't kidding. Five weeks later, two uniformed gentlemen from the facilities department showed up with a print-out of my ticket in hand. They had difficulty locating my desk; there are no cube numbers, and no names affixed to the outer cube walls2. In fact, I ran into them by chance.

"You got a bulb out?"

So I walked them back to the desk, and let them inspect the scene. I braced myself for the usual questions:

"Is it plugged in?" Yes.
"Did you try turning the switch off and on a few times?" Yes.
"Has it ever worked?" Yes.

Satisfied with my responses, they left. LEFT. No indication of what would happen next. No chatter among them as they walked away. Nothing. I figured that was the end of it.

Thirty minutes later, they returned, each now carrying a cardboard box about the shape of a bulb. But the bulbs were fine, they said. The problem was the ballasts. They replaced the bulbs anyway, and when the lights still didn't work, they had the following advice.

"We can't help you. We are going to close your ticket. You need to contact your administrative officer, who will contact the furniture contractor to assess the problem and prepare a cost estimate. Once the estimate has been received, it will take 4-6 weeks to schedule the work, but that time depends on the number of requests in this building."

And like that, they left. This time, with no intention of returning. My ticket was closed (though I never received a notification to this effect). After five weeks, I still have no lighting above my workstation.

So I started thinking about their recommendation. I didn't know who the administrative officer was for my institute. I had no confidence that the AO would be interested in initiating any degree of procurement to replace a lightbulb for a contractor. And even if that was likely, waiting another few months to have another team of workers wander through my office looking for me... it was ridiculous. 

The process here was smothering its own output. In fact, I would argue that the process was intended to avoid output of any kind. The process was followed. And if I had pursued the administrative option, the process would consume well over two months just to replace two lights.

But I'm a divergent thinker, and when faced with the absurdity of such bureaucracy, I did the only thing I could: I went to Target and bought a $40 lamp.

Shadow IT is Real

You'll often hear management speak of the need to control (which is newspeak for eliminate) shadow IT. It's costly, they say. It eschews3 governance, they quip. It's a security risk, they squawk. In other words, they'll pile reason upon reason to enforce policy and process, and quash attempts to circumvent said policy and process. But thought is rarely given to what motivates employees to break out of the process in the first place.

If your organization's process for replacing a lightbulb takes over two months, it's time to get a new process. If your organization's process for ANYTHING takes over two months, it's time to get a new job.

1flippers - This is federal-speak for the cabinets above your desk.
2walls - Well, technically there are names affixed to the cube walls. They just list the names of people who left years ago.
3eschews - Ok, so this is one of my favorite words, but typically management doesn't use this one. It's a shame, and they should.

Wednesday, August 26, 2015

Vendor Shortlist for VMworld 2015

VMworld 2015 is next week! Social media vibrates with anticipatory tweets and puerile prognostications as we fall all over ourselves to predict what announcements will be made from temperate San Francisco. If you're a virtualization evangelist, it's truly the most wonderful time of the year.

I'm skipping VMworld this year, but wanted to share a short list of the vendors that I'll miss visiting and interacting with. And I'll admit right away: this list is driven primarily by the people I know and have developed a respect for, not purely the technology or the solutions each vendor offers. In fact, I'll argue that the people are the reason to go to a conference, with the technology a distant second. And while I've shed the vExpert title, I still carry the PernixPro and SolarWinds MVP designations. But that's for a good reason, which I'll explain later.

And now, the list.

VMware

It's so simple that it's easily overlooked. VMworld brings all of the vendors out of the woodwork, and it's understandable to forget that VMware itself is a vendor. They just happen to be the platform vendor. And even though it's easy and fashionable to criticize the company lately, VMware has a reputation for innovation that is undeniable. Take time to visit the VMware floorspace. And I highly recommend catching all of Jad El-Zein's VMworld sessions. Jad's sessions are always great, he's a dynamic presenter, and he's smart as hell.

PernixData

PernixData's FVP revolutionized the way we think about storage performance for vSphere. That's not an opinion, either. Overnight, FVP turned those 42U stacks of SAN hardware into a big, stupid bucket. Server-side caching, coupled with VM-aware intelligence, reminded us that storage performance is a server issue, not one to relegate to a storage network and array of increasingly complex devices. Two years ago I said that PernixData was a solution in search of a problem. Today that problem is pervasive, persistent, and solved.

PernixData isn't satisfied with solving the storage performance problem. At VFD5 in Boston this summer, we got our first view into the company's next killer app: PernixData Architect. It's a solution for the problem of building, managing, and maintaining virtualized environments. And PernixData Cloud is an ambitious attempt to break the silo you forgot about: your workplace.

Andy Daniel has been with PernixData for as long as I can remember, and it's always a pleasure to run into him at VMUG events. This VMworld, Andy is presenting "Solving Application Performance Issues with Infrastructure In-Memory Computing." Go check it out!

And it goes without saying that any session that Frank Denneman is presenting, or co-presenting in this case, is going to be killer. He's got a great talk lined up with Duncan Epping this year titled "5 Functions of Software-Defined Availability." It also helps that he's hilarious. Just ask him what he thinks about BlueJeans web conferencing software.

SolarWinds

I really like SolarWinds. A lot. I don't get to use their software nearly as much as I would like (actually, that's true for PernixData, too). But I can honestly say that I've used SolarWinds tools since the very first day of my IT career, which was... a while ago, in those heady, hazy late 90s when nothing could go wrong with an Internet startup. Anyway, SolarWinds has a reputation for building the right tools for whatever technology you're managing. And they have the best collection of subject matter experts in the industry. Lots of great people with the experience to help you out. It's what makes Thwack such a great place to hang out.

At VMworld, the infamous Thomas LaRock will deliver a session titled "Using Virtual SAN to Maximize Database Performance." Tom is the person you want to talk to about all things database. I mean, he's the SQL rockstar, after all.

Wrap-Up

Make every effort to catch these sessions, or queue up if you didn't register in time. And remember that it's the people that make VMworld such a great experience; the technology is just an excuse. Have fun in San Francisco, and I'll see you there in 2016!

Tuesday, June 30, 2015

Perfect

To prepare you for this post: I've been using a MacBook Air for about 4 years now, and before that I had (still have, in fact) a 13" MacBook (I even paid the dreaded black tax). And I've had an iPhone since the 3G (more specifically, I've had a 3G, a 4, another 4, and a 6). I think I have 4 iPads. Oh wait, I have a MacBook Pro, too. And 100 years ago when I was young I had a Performa that a guy stole from UPS and sold to me when I was too naive to know what I was getting myself into. (But the statute of limitations has expired, so NBD.)

I'm well aware that real problems and evils exist in the world. I understand that the world's collective energies are best spent addressing, if not solving, real problems like poverty, inequality, and cancer. As I write this from the safety and comfort of an environmentally pleasant office space in Bethesda, Maryland, I recognize that the problems I am about to convey have no meaning in the larger space we call Earth. What you are about to read is squarely in the domain of the #firstworldproblem.

And yet.

I must admit to you, dear reader, that I am the victim of conditioning by Apple, Inc. I have been conditioned to not simply use the devices that they market to me, but to examine the most minute detail of each product. It's a matter of having seen so many keynotes, where well-dressed executives use words like, "perfect," "gorgeous," and "beautiful" to describe electronics. It's a statement of how far we've come from the days of Radio Shack Tandys (which no one would dare describe using the previously listed adjectives).

I now expect my phone to be beautiful. I expect a seamless transition from screen to body. I expect perfect, gorgeous, and beautiful.

For years, Apple has lived up to these descriptors. And in many ways, they still do. But lately, two minor annoyances have crept into my day. These problems are trivial. They are petty. They are not worth the time it took me to put this post together. And yet I can't stop thinking about them, because I've heard Apple tell me time and time again, to look at the details. 

Exhibit A - The iPhone 6 Case

One reason I expect so much from Apple products is the price tag. The iPhone 6 silicone case will run you $35. And since the iPhone 6 is nearly impossible to hold without some type of case, you likely bought this one, too. It's my fault for buying one, I know. But I liked the fact that it didn't add much to the thickness of the phone, which was important.

The trouble started in the spring, when I noticed that the edges of the case were starting to disintegrate. The silicone was separating from the hard plastic shell that gives the case its form. And because I just bought a macro lens for the a6000, I took some photographs in an attempt to convey the magnitude of the problem.

CAN YOU FEEL MY OUTRAGE? JUST LOOK AT THIS SHIT.

I mean, where is the perfection here? I'm careful with my phone. It doesn't get dropped, or tossed about, or otherwise manhandled. This case fell apart within five months of light duty. I am certain that I would not have noticed these imperfections on any other case, or more to the point, any other brand of case. But again, this AAPL conditioning has me trained on detail. It's the attention to detail, from design to manufacture to consumer, that Apple has staked out as its territory. And it's in this territory where it is beginning to fail.

Exhibit B - The Repaired MacBook Air

I hope you've calmed down from the table-flipping rage that you assuredly experienced from the photos above. Because this next shit is straight-up cray cray.

I had the battery replaced in the MBA because, well, because it was failing. As in 100% charge to 0% charge in 15 minutes. So I took a long walk from my office to the Apple Store nearby, and had it repaired. AppleCare was still active, so the replacement was covered.

I'm happy to report that the repair was successful. This MBA will live a long and happy life.

But get ready for rage, because just look at the condition in which my precious MBA was returned to me:


HOLY SHIT DO YOU SEE THAT? One of the screws on the bottom of the laptop wasn't tightened properly. AND IT STICKS OUT ABOUT 1MILLIMETER.

(╯°□°)╯︵ ┻━┻

Every time I pick up my laptop, I feel this goddamned screw sticking out. It might as well be a goddamned thorn, or a wart, or something. Or a mountain.

Again, it's the expectation of perfection that makes me even notice this in the first place. I have a Dell something or other that I use for work. That thing has screws missing, edges worn, labels half peeled off, and I could not care less. F it. It turns on when I push the button, and does stuff when I faceroll the keyboard. It could have smoke pouring out of the vents and I wouldn't care. But a screw that isn't flush with the underside of the MBA? A goddamned crisis.

The Logical Conclusion

Kidding, sarcasm, rageface, hyperbole, and caps lock aside, this post isn't about problems with Apple, or problems with its iPhones and MacBook Air repair abilities. It's not about the marketing machine that has made Apple, Inc. the nearly $1T company it is today.  It's not about the legacy of Steve Jobs, who knew how to sell you something you never knew you needed. It's about my acquired delusion that form is as important as function.

And if you're wondering what I'm going to do about these two microcrises: I bought a new silicone case (because insanity is fun), and I'll pick up a Torx screwdriver and fix that goddamned screw. And with these two problems solved, for now, I'll move on to something new.

Wednesday, May 13, 2015

PowerCLI for Modifying VM Network Adapters

A complex system of PowerShell and PowerCLI scripts manages the virtual machine lifecycle here. The scripts are remarkable in their implementation, and for years have been humming along just fine without much modification, even though the creator left two years ago.

Recent changes to the environment, however, caused portions of these scripts to break. So guess who gets to fix them? Correct: your humble correspondent.

Of course, I'm no PowerCLI god, or deity, or even apostle. I'm a spirited follower at best. Though to my credit, I've taken philosophy, logic, and ethics, and I have a beard, so you know, I'm qualified to debug and correct code.

So here's a fun overview of a problem I've dealt with lately, and what I learned in the process. Disclaimer: If you know everything about scripting already, you should have stopped reading by now.

The Problem

I replaced a few dozen vSwitches with a nice vDS, and there was much rejoicing. Except the scripting, which had been developed to use vSS cmdlets, was not happy. We had been using the following command to configure a newly-provisioned VM's network adapter:
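
In broad strokes, it was a one-liner along these lines. Treat this as a sketch rather than the verbatim original; $vm is just an illustrative placeholder, and $vlan is the variable described below:

# Old approach: point the adapter at a standard vSwitch port group by name.
# $vm is illustrative; $vlan came from the provisioning request.
Get-VM -Name $vm | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName $vlan -Confirm:$false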

We build the variables based on the relevant information in the request for a new VM. So things like $vlan are defined based on the ultimate location of the VM. But when you're working with a vDS (and more importantly a VDPortGroup) you can't use vSS cmdlets.

The Solution

So after some research and a lot of trial and error, we ended up with this:
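
In rough form, the vDS flavor looks something like this. Again, a sketch rather than the exact production code; the switch name 'dvSwitch01' and $vm are placeholders:

# New approach: resolve the distributed port group explicitly, then bind the adapter to it.
# 'dvSwitch01' and $vm are placeholders; $vlan still comes from the provisioning request.
$pg = Get-VDSwitch -Name 'dvSwitch01' | Get-VDPortgroup -Name $vlan
$na = Get-VM -Name $vm | Get-NetworkAdapter
Set-NetworkAdapter -NetworkAdapter $na -Portgroup $pg -Confirm:$false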

The difference is significant. We're not just talking about changing a single cmdlet, or swapping out a single switch. We have to change our whole approach. Instead of just setting the port group, we need to define the VDPortGroup that we want the VM's interface to connect to, and that means identifying the vDS itself. So I built the $pg variable to contain that information. And $na holds the information needed to properly identify the network adapter we want to modify.

Logging: it's fannnnntastic!
You'll notice some additional lines that echo the values for these variables, and the output of a get-networkadapter command, into a logfile. I set this up to debug a problem I was having (see below). This logging was crucial to helping me see where things were breaking, and I ended up leaving these cmdlets in place in case things go south again. NB: Logging is always a good idea.
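
For the curious, the logging needs nothing fancy; something along these lines does the job (the logfile path is a placeholder):

# Echo the working values and the adapter's current state to a logfile for troubleshooting.
$scriptLog = 'D:\Scripts\logs\vmnet.log'   # placeholder path
Add-Content -Path $scriptLog -Value "VM: $vm  VLAN: $vlan  Portgroup: $($pg.Name)"
Get-VM -Name $vm | Get-NetworkAdapter | Out-String | Add-Content -Path $scriptLog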


The Problem with the Solution

However. I was getting some really strange results with this script when run as part of the automation system. I could run these commands in a PowerCLI window without a problem. The VM's network adapter would be configured exactly the way I intended. But when the same script was run under the context of a service account, the VM's network adapter would be configured to connect to the vDS but not to any port group. And the logging confirmed that: the portgroup value in the get-networkadapter output was blank. It was the kind of thing that drives you bonkers. I mean really.

The Solution to the Problem with the Solution

It occurred to me that maybe the problem was related to the modules that were loaded under the service account's profile. So I logged into the script host using the service account and ran the PowerCLI 6 R1 installer. (I had previously upgraded PowerCLI from 4.x to 5.8 R something (and PowerShell from 2 to 4) with my own administrative credentials.) And I even had another administrator do the same. After both of these actions, all of the scripting started working as expected.

If you ever run into weirdness with certain cmdlets after you upgrade PowerCLI, PowerShell, or both, you should consider re-running the respective installer for each user profile on your scripting host.

Epilogue

You've probably seen some syntax that you disagree with here. PowerCLI-wise, I mean. The process to modernize these scripts is a slow one, and many of the piped-output to piped-output bits will go away. There's always a faster way to get things done when you're scripting. But performance and elegance are always secondary to function. Always.

Tuesday, May 12, 2015

The Battle For Your Data Center’s Brain

The complex ecosystem of symbiotic, technologic, silicon-based organisms that is your datacenter: it’s the epicenter for your business, your mission, and your interactions with the world. Your applications, your data, your infrastructure, and a non-trivial amount of your capital, all end up in the orthogonal confines of four walls, a raised floor, and ceiling snaked with assorted cable types. 

Your data center is populated with all manner of resources. But generally speaking, these resources can be categorized into the same three groups we’ve used for decades in IT: server, storage, and network. Maybe you’ve been in the industry long enough to remember when storage, as a discipline, was certainly not a peer to server and network with regard to complexity, criticality, and functionality. For many IT professionals, storage was just a remote disk attached to your server and network, a dedicated pool of capacity for a server with no open bays. An avoidance strategy for having to scale out to yet another Microsoft Exchange 5.5 server.

Today’s storage is markedly different. So different, in fact, that newcomers to the technology profession likely can’t imagine storage not being an active participant in not only the delivery of your data center’s services, but also in the management of said services. The elevation of storage from a simple resource to a first-class data center citizen means that a new revolution is underway: it’s the battle for the right to manage your data center.

Brainz.
Well, maybe that’s a bit hyperbolic. It’s not that war has been declared for the right to manage your data center. Rather, it’s a grudge match to determine where the intelligence that’s needed to effectively manage your data center’s resources lives. When you see buzzwords that start with “Software-Defined” you know you’ve found a contender: software-defined networking, for example, is a play to apply intelligence to the data center through contemporary, sophisticated networking technologies; software-defined storage, on the other hand, attempts to apply intelligence by efficiently serving and storing your data, which is arguably the most important asset in your entire data center (except for the occasional human being that can be found staring at an old flat-panel monitor on a crash cart, cursing while listening to hold music for tech support). And we can’t overlook virtualization, which would certainly have been named “software-defined servers” if that tech had been introduced in 2011. Marketing types lump these technologies into the concept of the “software-defined data center.” But perhaps what’s really happening here is better named, “software-defined intelligence.”

Why Storage, and Why Now?
Chris Evans wrote a great article last month titled, “End-to-End Data Management.” He argues that data management needs to be raised up through the stack into the application layer, not just relegated to the realm of the physical. And for the record, he is absolutely correct. But why are we only now making this realization?

Because we’re finally coming to terms with the quantities of data that we’re generating. And the approach we’ve taken to managing data up to now simply cannot scale to the phonetically-improbable order of magnitude that obscures the true meaning of 1021.

For this reason, we demand that our storage solutions are more than just bit buckets with brushed bezels. We need storage that’s intelligent, that’s able to analyze its workload and not only report on its contents, but to generate metadata that informs our data retention policy. We need storage that automates the chore of defining storage performance levels and automatically promoting and evicting data between tiers.

As for why storage: consider how your data center looked 10 years ago, how it looks today, and how it will look 10 years from now. Like any other complex organism, your data center will likely see a total replacement of components, from switches to servers to SANs, perhaps twice in this twenty year period. Hardware breaks, becomes obsolete in function and fashion, and is readily replaced by the next revision. But your data is the constant in this equation. You may migrate data from one storage platform to another, but the data remains the same. Which is to say, we must stop treating data as just another resource to be managed, and start treating it for what it is: the digital representation of your business, mission, and research.


Managing the data center is comparatively easy when you consider the enormity of managing your data. Storage platforms will come and go. But the advent of intelligent data platforms will absolutely be the control point for data centers in the near future.

NB: This post is part of the NexGen sponsored Tech Talk series and originally appeared on GestaltIT.com. For more information on this topic, please see the rest of the series HERE. To learn more about NexGen’s Architecture, please visit http://nexgenstorage.com/products/.

Friday, May 1, 2015

Exporting from AWS EC2

:)
I've decided to export my Ubuntu instance from AWS EC2 to test the process of migrating a workload to VMware's vCloud Air OnDemand service. Portability is important to everyone, and moving your virtual machines between cloud service providers shouldn't be the technological equivalent of climbing the Dawn Wall. This post will be less about VMware's offering, and more about how to get out of EC2.1

Checking Out of EC2

Getting your instance out of EC2 is... interesting. Unlike most actions in AWS, exporting an instance requires the use of a command-line toolkit that you need to download. I can tell you that, at this point, many people would throw in the towel. It's clear that getting your VM out is not going to be an easy task; the process alone will intimidate many people who launched EC2 instances because Amazon made the provisioning process so easy. What took a few clicks to create will take a bit more work to export. And here's an observation I made during this experience: Hemingway wrote the on-boarding script; Kafka wrote the off-boarding.

Installing the Amazon EC2 Tools

I'm following the steps listed in this article from Amazon: Setting Up the Amazon EC2 CLI and AMI Tools. More specifically, I'm following these instructions because OS X. I'll spare you the tedium of these instructions, and you'll have to trust that I've followed them properly. Just follow those links to get a sense of what's required. And be glad that you've created an IAM user instead of using a keypair that's bound to the root account2.

Once you've downloaded the tools and configured them according to the instructions for your OS, you're ready to move on.

Creating your S3 Bucket

"Oh, you want to stop using an AWS service? No worries! Just make sure you sign up for another one while you're on the way out." -Amazon

In other words, you need to use S3 to store your exported instance until you download it. But that's relatively easy: just follow these instructions. Keep in mind that the bucket itself costs nothing. Using the bucket (storing your exported image in it and transferring it back out) is what will cost you.

Exporting...

Once you've got everything set up (and I do mean everything; follow the steps to set up your ec2 tools environment exactly as documented, or it simply will not work. And this process requires you to review the particulars of your ec2 instance, including the region where your instance lives), you're ready to export. Just make certain you shut your instance down first; it can't be running during the export.

You'll end up with a command along these lines:


./ec2-create-instance-export-task <your-instance-id> -e VMware -f VMDK -c OVA -b export-mc-server --region us-west-2

I forgot to mention something: prepare to be disappointed. Because you can only export an EC2 instance if it was originally imported into AWS. Any instance you create natively on EC2 cannot be exported using Amazon's tools.

So this is where the story ends. Hotel California moniker: well earned. Getting your instance out of ec2 will require the use of third-party tools, such as VMware's Converter, running inside the instance.

EC2 tools will never dismantle AWS's house. Or something along those lines.

1 My consulting business (www.holdenllc.com) partners with both VMware and Amazon. It's similar to registering as a Democrat and a Republican, and experiencing the feelings of elation and outrage simultaneously, all the time.

2 I mean, I certainly wouldn't have made that mistake, if someone had advised me not to. But they didn't, so I did.

Wednesday, April 1, 2015

Data Is A Four-Letter Word

We want to manage data based on value.
It’s no coincidence that data is a four letter word.

And it’s a word that holds different meaning depending on your place in the organizational chart. Managers interpret data to mean things like budgets, schedules, and white papers. To counsel, data is a liability subject to discovery. Storage administrators interpret the word to mean the consumed space within a large rectangular box in the data center. Network administrators hear “data” and think in terms of moving it, intact, from here to there as quickly as possible. And virtualization administrators understand that data is the assorted VMDKs and associated files that comprise their software-defined data center.

According to a recent IDG survey, 94% of IT professionals find it beneficial to manage data based on its value. But with traditional storage platforms and management tools, only 32% of us actually manage data based on its business value.

We all have personal stories of file servers that ran out of space because an iTunes library ended up in someone’s home directory. As a result, the executives couldn’t get to their archived email (because it was stored in a PST in their home directory… cringe). The impact to the business can be painful. Early solutions to this problem included file screening and directory quotas. But the problem quickly jumped from files to filesystems, and eventually to virtual machines and entire datastores. We all agree that storage needs better governance and control.

But we’re starting to re-evaluate our view of data. We’re shifting from a technology-based view to a business-driven view, which forces us to consider the value of not only storing data, but of under- and over-allocating resources to data. Ironically, it’s technology that enables our business-centric view of data: the hybrid array, with its combination of flash, SSD, and HDD tiers. But hardware alone is not a solution; we need software to exploit these tiers effectively without burdening the IT staff.

This is where Storage Quality of Service (or Storage QoS) enters the discussion. Storage QoS aims to address the most common storage-related constraints: throughput, transactions, and latency. It also introduces an upper bound on performance: QoS not only guarantees a minimum, but also enforces a maximum for throughput and for everyone’s favorite storage benchmark, IOPS. These limits prevent a burst in a single workload from overwhelming the storage platform and negatively affecting other IO workloads (i.e., the noisy neighbor problem). Storage QoS also provides a method for configuring a maximum value for latency, thereby ensuring that certain workloads get the responsiveness they require from the storage platform.

However, effective and efficient storage QoS depends on a simple method by which to create and apply these performance policies. Otherwise, we end up with technology that adds non-trivial administration to IT’s workload. To solve this problem, we need policy-based storage QoS.

With policies that guarantee performance, we will be ready to exploit new storage technologies such as VMware’s Virtual Volumes (VVOLs). VVOLs allow virtualization and storage engineers to eschew the creation of enormous LUNs for the purposes of creating a VMFS datastore, and introduce the ability to write virtual machine data directly to the storage array. Now, Storage QoS policy can be applied directly to individual virtual machines, which provides a high level of control over your infrastructure. We can configure our storage platform to treat messaging servers, for example, as a mission-critical workload (because if email is down, the business is down, right?). We can apply a business-critical Storage QoS policy to our application and database servers, and we can give our files servers a non-critical Storage QoS policy that will monitor and restrict performance and throughput in times of contention.
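
To make those three tiers concrete, here's a toy sketch of what the policies might look like on paper. Every number is invented for illustration, and none of this maps to any particular vendor's syntax:

# Purely illustrative policy definitions; the figures are made up, not vendor guidance.
$storageQosPolicies = @{
    'Mission-Critical'  = @{ MinIOPS = 10000; MaxIOPS = 50000; MaxLatencyMs = 5;   Tier = 'Flash' }
    'Business-Critical' = @{ MinIOPS = 2500;  MaxIOPS = 15000; MaxLatencyMs = 20;  Tier = 'SSD' }
    'Non-Critical'      = @{ MinIOPS = 0;     MaxIOPS = 2500;  MaxLatencyMs = 100; Tier = 'HDD' }
}
# Example: look up the policy you'd assign to the messaging servers.
$storageQosPolicies['Mission-Critical']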

The hybrid array is a critical component for Storage QoS. Not only because we, by definition, need multiple tiers of storage with varying performance characteristics, but we also need a reliable method for dedicating our most expensive storage, flash, to our most critical workloads. With VM sprawl and the ever-expanding SDDC, administrators can no longer efficiently manage storage performance by manually moving workloads between tiers. We require an intelligent, adaptable platform that can reliably implement and govern the policies we’ve applied to our workloads.

A policy-based approach to Storage QoS on a hybrid array platform enables us to guarantee service levels for our virtual workloads. And it’s these service levels that represent the intersection of the business needs and the storage platform’s capabilities.

NB: This post is part of the NexGen sponsored Tech Talk series and originally appeared on GestaltIT.com. For more information on this topic, please see the rest of the series HERE. To learn more about NexGen’s Architecture, please visit http://nexgenstorage.com/products/.

Sunday, March 29, 2015

Minecraft and Microsoft

Last fall, Microsoft acquired Mojang, the Swedish company that developed Minecraft, for a smooth $2.5 billion. Reactions to this news fell into one of two possible categories1:

  1. WTF is Mojang / Minecraft?
  2. Microsoft has lost its goddamned mind.

These are reasonable responses. Because we couldn't see what Microsoft could possibly want with this game. And we certainly couldn't see how the game was worth so. much. money. But over the weekend, I think I've figured it out.

What is Minecraft?

It's Steve!
If you're under 14, or a parent of a child under 14, you already know what Minecraft is. The concept is simple: you control a character in a 3D world filled with materials to collect, items to craft, and enemies to defeat. Several game modes allow for different rules to apply, such as whether you're able to fly, whether you can fall without getting hurt / killed, and whether your game ends after a single death. Some players like the challenge of playing in Adventure mode, while others prefer to play exclusively in Creative mode. And no one likes Hardcore mode. It's just too crazy.

But it's Creative mode that's worth discussing. In Creative mode, the players do not need to be on the lookout for hostile mobs. And there's no need to search for and collect materials such as wood and cobblestone; in Creative mode, players are given an infinite supply of every type of block in the game. Creative mode lets the player build structures underground, above ground, in the sky, underwater, and... well, anywhere. It's a blank canvas, or rather a canvas that is only the topographical suggestion of the player's world. And some people have built amazing things in Creative.

A Brief History of Versions

Mojang releases new versions of Minecraft on an irregular basis, and each new version includes major changes to the game. For example, Minecraft 1.6 (aka the Horse update) introduced rideable horses and a new launcher for the game. Minecraft 1.5, the Redstone update, overhauled and expanded redstone circuitry, which enables the creation of working machines and circuits. And most recently, and this is where the connection between Mojang and Microsoft becomes apparent, Minecraft 1.8 introduced twelve new commands that players can use to interact with and manipulate their worlds.

That means players can now use the console (or the command line, if you're looking for a metaphor) to create and destroy objects in Minecraft. It's like code.org, in that players who use these commands end up learning about programming without knowing it.

An Example: The Command Block

The Command Block.
I'll give you an example: my youngest son, who can navigate Minecraft with a trackpad faster than you ever could with a mouse, asked me a question yesterday. It was along the lines of, "can you help me with this command block?" (A command block is an object in Minecraft that you can load a command (or series of commands) into, and the command block will execute the command based on certain input. No, really.)

When I walked over to the space behind the sofa, which is where he perches when playing Minecraft, this is what I saw on his screen:

Editing the Command Block.

Inspect the Console Command field. Is there any denying that this is code? It certainly doesn't look like the over-referenced Nintendo codes of my 1980s-infused youth. No, this is serious stuff. And my boy was asking for help, because his command wasn't working right. (The goal of this command is to give the nearest player an object that looks like the head of a player named eager0. Naturally.) It turns out he was missing a colon between SkullOwner and "eager0".
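
For the record, the fixed command looked something like this (I'm reconstructing it here as a sketch of the 1.8 /give syntax, not a verbatim copy of what was in his command block):

  /give @p minecraft:skull 1 3 {SkullOwner:"eager0"}

The @p selector targets the nearest player, the data value 3 turns the skull item into a player head, and the SkullOwner tag is what pulls in eager0's skin. One missing colon in that last tag and the whole thing quietly refuses to work.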

I smiled when I saw him working on this, because I lost count years ago of the hours I've spent poring over code looking for syntax errors. We fixed the problem, tested it, got the expected result, and he moved on. Except now he knows the importance of each character in a command, and how each section of the command needs to be delineated from the others. Well played, Mojang. Well played, Microsoft.

Do you see the connection now?

Microsoft acquired a massively popular (an estimated 27.6 million people play Minecraft in one form or another) tool that encourages kids to write code in order to solve a creative problem. Kids aren't learning to code because it's part of their curriculum; it's part of their fun. Can you imagine what these kids will be capable of doing after they've mastered some coding and have applied that knowledge directly to their work (play is work, after all)? And when they've spent years perfecting this skill, they'll be poised to meet any challenge.

Satya Nadella is a genius.

1 - Actually, these categories are not mutually exclusive.

Monday, March 16, 2015

Paleontology

Last weekend, I spent a day with my family at the Maryland Science Center. If you've never been, and you've got plans to visit Baltimore, do yourself a favor and drop by. You won't be disappointed, especially if you're traveling with your kids. Hmmm, that sentence sounds like it should be at the end of this post. But I'm lazy, so I'm leaving it here.

Like many science-themed museums, the Maryland Science Center features exhibits on specific fields of scientific study: astronomy, physics, biology, chemistry, and my favorite: paleontology.

The king is dead.
Etymologically, paleontology means, quite literally, the study of ancient beings. So a stroll through an exhibit of the fossilized remains of dinosaurs is a stroll through the past. Much like gazing at the stars in your backyard is viewing light that left its source years, centuries, or even millennia ago, a snapshot of a sky that no longer exists; it's the kind of thing that makes you dizzy to think about, and can understandably lead to feelings of insignificance. It's borderline ungrokable.

In a quiet moment, while my wife and boys examined a cast of fossilized dinosaur eggs, I walked over to a Tyrannosaurus rex skull mounted on a metal frame. It's an image I've seen countless times; the mineralized cranial structure of what was once the planet's greatest predator. But this time, instead of looking upon this fossil with awe, I felt sadness. Because for all of its glory, this particular beast (or to be more accurate, a cast of a particular beast) ended up on display in a bright and spacious room in Baltimore, Maryland.

Today, we use the word dinosaur in a very different way, especially in the technology world. It's used pejoratively, a scoffed utterance to indicate the erstwhile utility of a technology, or worse yet, a human being. A dinosaur is a luddite, a troglodyte, a philistine, a provider and consumer of obsoleted solutions for obsoleted problems. A dinosaur is incapable of, or perhaps uninterested in, evolving. In other words, a dinosaur is a dysfunctional anachronism of the first order. An organizational obstruction in need of percussive sublimation.

What Killed the Dinosaurs?

When used in IT (and other industries that undergo near-constant change), the word dinosaur implies a failure to adapt to change over time. We observe the dinosaur as it writes a batch file to automate an administrative task, or as it insists that more vCPUs means a faster virtual machine. The dinosaur's thought processes are mired in late-1990s capabilities; we conclude that this stagnation is what will lead to the dinosaur's extinction. But what killed the true dinosaurs was not a failure to evolve; it was a failure to survive disruption.

Cloud as Extinction Event

Make no mistake: cloud computing disrupts traditional hosting environments. For many, the migration to the cloud is perceived as an unwelcome change. The dinosaurs emerge and dig in, defending traditional models. But just as a cascading failure of the food chain doomed the Earth's largest predator, an accelerating exodus of customers migrating to the cloud will doom IT's antiquated business models. The extinction burst will manifest as a last-ditch attempt to save on-premises1 hosting. It's an understandable reaction to the realization that migration to the cloud is a fait accompli.

Modern Dinosaurs

I've encountered many IT veterans over the years who would certainly meet the snarky criteria of a technology dinosaur. But I'm letting that term go. Today's dinosaurs were yesterday's luminaries; we'll all be dinosaurs one day. And maybe the next time you walk past corridors of quiet cubicles populated with sexagenarian sysadmins, you'll bite your tongue before dismissively declaring, "he's a dinosaur."

Such declarations are unnecessarily cruel and inhumane to both the subject and the predicate.

To be continued...

1. You're welcome, pedants.

Monday, February 9, 2015

#ExvExpert

It is with great pleasure that I share with you that, effective February 5, 2015, I have completed my #vExpert journey. And by that, I mean I quit.

The Journey to #ExvExpert

In 2013, I applied for the title, but was rejected. And rightly so; I had done nothing more than write a few boring blog posts and RT someone else's tweets. Not exactly what you'd call a "contribution to the virtualization community." But that rejection set off a hell of a series of events. Some serious networking ensued. I found a voice for my blog, and started writing posts that I actually liked reading. Sometimes, you liked them, too.

In 2014, I applied for the title again, and was accepted into the program. And the year I spent as a vExpert was exciting, and punctuated with numerous opportunities to learn about upcoming products and solutions. More networking, more social media activity. It was a great year, to be certain.

But something occurred to me at VMworld last year. I was navigating the circus that is the Solutions Exchange, doing my best to avoid eye contact with the over-caffeinated booth staff. Swag hunters with their bags, collecting all manner of tchotchkes and punching their raffle cards for a chance to win an iPad Air. Walking billboards throwing dice for a chance to win a Tesla. Mention "I'm a vExpert" and the amount of swag you walk away with doubles. But aside from coffee presses that no one needs and yet another fleece with your Twitter handle embroidered on the breast, what's the point?

"Early access to new products," you say.
"An invite to the VCDX / vExpert party at VMworld," you say.
"That little vE badge for VMTN," you say.

No, thanks.

Independence Matters

I'm an independent consultant, but I was losing sight of that independence when I aligned myself so closely with one particular vendor. Most clients I work with don't know what the vExpert program is, so they certainly aren't impressed by that credential. In fact, the only people who are impressed... are vExperts. The same is true for #CiscoChampions; I let that one go, too.

Membership in a group of like-minded practitioners is certainly rewarding, and fun. I'm not suggesting otherwise. I'm just calling out this program, and the others like it, for cultivating an ever-growing community of clones.

Thursday, February 5, 2015

Breaking out of the EC2 Jail

A few weeks back, I wrote about a project I'm working on where I'll be migrating my Minecraft server from EC2 to vCloud Air. You may be thinking that I've given up, or forgotten. Far from it.

Instead, I'm still struggling with exporting this stupid thing from EC2 in the first place. I've got a post I'm working on to explain that process, and I'll give you a preview: exporting an EC2 instance is more difficult than you can imagine. It's a process filled with command line options, security credentials, environment variables, and storage buckets. And there's a good measure of "invalid instance ID" errors that are not helpful at all.
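
To give you a taste, here's the general shape of the export step, assuming the AWS CLI and an S3 bucket that already exists and grants the export service write access (the instance ID, bucket, and prefix below are placeholders, not my real ones):

  aws ec2 create-instance-export-task --instance-id i-xxxxxxxx \
      --target-environment vmware \
      --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=my-export-bucket,S3Prefix=minecraft/

And that's the easy part; the command only succeeds once the credentials, the bucket permissions, and a supported instance configuration are all lined up. Miss any one of them and you're back to squinting at unhelpful errors.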

So fear not: this project is alive and well. And it confirms the snarky observations on Twitter: checking out of this hotel is close to impossible.

Stay tuned.

Sunday, January 18, 2015

A Declaration for 2015

No technology here, just cactus from the US Botanic Gardens in Washington, D.C.
I'm not making any predictions for 2015, or any year for that matter. Predictions are funny things, after all. Instead of a prediction that's based on sophomoric analyses of recent events and infused with the subconscious biases we all hold, I submit to you, dear readers, a declaration:

I'm losing interest in blogging about technology.

I don't mean that in any profound sense. It's not meant to be a statement on technology, or a passive-aggressive swipe at the pace of innovation across the various niches I've taken an interest in.

Instead, it's an acknowledgement that I've spent two years worrying about blogger stats, pageviews, Twitter interactions, LinkedIn connections, community designations, and certifications. And as I sit at my laptop, squinting at two years of blogging and social networking activities in my rear-view, I'm forced to ask myself what it's all for.1

For certain, I've benefited both professionally and personally (where personally, in this case, is a synonym for financially) as a direct result of this blog. I've met many wicked smart people, attended some amazing events, and have developed my writing in the process.

But I've also started to fall into the routine of posting once a week, even when I don't really have any technical information worth sharing. And I find that I'm posting because... well... that's what you do. And as the market for virtualization-centric blogs reaches saturation, I doubt that the world needs yet another VMware blog.

Exhibit A
So for 2015, I'm going to eschew the self-imposed limitations on what I post about. Creative non-fiction2 and photography will appear alongside whatever technical information I'm interested in sharing. And if I want to include a photograph of the chaotic congeries of knock-off Legos (see Exhibit A), then I will.

And I'm taking a break from the pursuit and collection of the various community designations. I'm no longer able to devote the time and effort required to truly participate in these groups, and to maintain these titles without contributing is disingenuous.

So that's what you can expect from this blog in 2015. Not a prediction, mind you. A declaration.

1 It's at this point where I'm tempted to go into stereotypical geek mode and tell you that I created a spreadsheet to capture the positive and negative results of these activities. But I'm tired of that trope as well, and you'll find no spreadsheet references in this post.

2 And it's at this point I should explain that I've been on a serious DFW bender lately, so footnotes will play a prominent role in upcoming posts.