We want to manage data based on value.
But “data” is a word that holds a different meaning depending on your place in the organizational chart. Managers interpret data to mean things like budgets, schedules, and white papers. To counsel, data is a liability subject to discovery. Storage administrators interpret the word to mean the consumed space within a large rectangular box in the data center. Network administrators hear “data” and think in terms of moving it, intact, from here to there as quickly as possible. And virtualization administrators understand that data is the assorted VMDKs and associated files that make up their software-defined data center.
According to a recent IDG survey, 94% of IT professionals find it beneficial to manage data based on its value. But with traditional storage platforms and management tools, only 32% of us actually manage data according to its business value.
We all have personal stories of file servers that ran out of space because an iTunes library ended up in someone’s home directory. As a result, executives couldn’t get to their archived email (because it was stored in a PST in their home directory… cringe). The impact to the business can be painful. Early solutions to this problem included file screening and directory quotas, but the problem quickly jumped from files to filesystems, and eventually to virtual machines and entire datastores. We can all agree that storage needs better governance and control.
But we’re starting to re-evaluate our view of data. We’re shifting from a technology-based view to a business-driven view, which forces us to consider not only the value of storing data, but also the cost of under- or over-allocating resources to it. Ironically, it’s technology that enables our business-centric view of data: the hybrid array, with its combination of flash, SSD, and HDD tiers. But hardware alone is not a solution; we need software to exploit these tiers effectively without burdening the IT staff.
This is where Storage Quality of Service (Storage QoS) enters the discussion. Storage QoS aims to address the most common storage-related constraints: throughput, transactions, and latency. It also introduces an upper bound on performance: QoS not only guarantees a minimum, but also enforces a maximum for throughput and everyone’s favorite storage benchmark, IOPS. These limits prevent a burst in a single workload from overwhelming the storage platform and negatively affecting other I/O workloads (i.e., the noisy neighbor problem). Storage QoS also provides a way to configure a maximum value for latency, thereby ensuring that certain workloads get the responsiveness they require from the storage platform.
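To make those bounds concrete, here’s a minimal sketch of what such a policy might look like. The field names (min_iops, max_iops, max_latency_ms) and the within_limits() check are hypothetical illustrations for this post, not any vendor’s actual API:

```python
# A minimal sketch of a Storage QoS policy, using hypothetical field
# names; real platforms expose their own terminology and units.
from dataclasses import dataclass


@dataclass
class StorageQoSPolicy:
    name: str
    min_iops: int          # guaranteed floor during contention
    max_iops: int          # ceiling that contains noisy neighbors
    max_latency_ms: float  # responsiveness target for the workload


def within_limits(policy: StorageQoSPolicy,
                  observed_iops: int,
                  observed_latency_ms: float) -> bool:
    """Return True if a workload is operating inside its policy bounds."""
    return (observed_iops <= policy.max_iops
            and observed_latency_ms <= policy.max_latency_ms)


mission_critical = StorageQoSPolicy("mission-critical",
                                    min_iops=50_000,
                                    max_iops=100_000,
                                    max_latency_ms=2.0)
print(within_limits(mission_critical, 75_000, 1.5))  # True: inside bounds
```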
However, effective and efficient storage QoS depends on a simple method by which to create and apply these performance policies. Otherwise, we end up with technology that adds non-trivial administration to IT’s workload. To solve this problem, we need policy-based storage QoS.
With policies that guarantee performance, we will be ready to exploit new storage technologies such as VMware’s Virtual Volumes (VVOLs). VVOLs allow virtualization and storage engineers to eschew creating enormous LUNs just to carve out a VMFS datastore, and instead write virtual machine data directly to the storage array. Now Storage QoS policy can be applied directly to individual virtual machines, which provides a high level of control over your infrastructure. We can configure our storage platform to treat messaging servers, for example, as a mission-critical workload (because if email is down, the business is down, right?). We can apply a business-critical Storage QoS policy to our application and database servers, and we can give our file servers a non-critical Storage QoS policy that will monitor and restrict performance and throughput in times of contention.
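As an illustration of that per-VM granularity, the sketch below assigns the three policy tiers described above to individual virtual machines. The VM names, policy names, and numbers are all invented for the example; a real platform would expose its own policy catalog:

```python
# Hypothetical per-VM policy assignment enabled by VVOLs: because each
# VM's objects live directly on the array, a QoS policy can follow the
# VM instead of the shared datastore. All names and numbers are invented.
policies = {
    "mission-critical":  {"min_iops": 50_000, "max_iops": 100_000, "max_latency_ms": 2.0},
    "business-critical": {"min_iops": 20_000, "max_iops": 50_000,  "max_latency_ms": 5.0},
    "non-critical":      {"min_iops": 1_000,  "max_iops": 10_000,  "max_latency_ms": 20.0},
}

vm_policy = {
    "exchange-01":  "mission-critical",   # if email is down, the business is down
    "sql-prod-02":  "business-critical",
    "fileserver-1": "non-critical",
}

for vm, tier in vm_policy.items():
    print(f"{vm}: {tier} -> {policies[tier]}")
```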
The hybrid array is a critical component of Storage QoS, not only because we, by definition, need multiple tiers of storage with varying performance characteristics, but because we also need a reliable method for dedicating our most expensive storage, flash, to our most critical workloads. With VM sprawl and the ever-expanding SDDC, administrators can no longer efficiently manage storage performance by manually moving workloads between tiers. We require an intelligent, adaptable platform that can reliably implement and govern the policies we’ve applied to our workloads.
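To show the kind of governance that software layer performs, here is a toy placement routine: workloads are sorted by policy priority, the most critical claim flash first, and the remainder spill to HDD. This greedy scheme and its numbers are invented for illustration; it is not NexGen’s actual algorithm:

```python
# A toy illustration of policy-driven tier placement on a hybrid array.
def place_workloads(workloads, flash_capacity_gb):
    """Greedy placement: highest-priority workloads fill flash first."""
    placements = {}
    remaining = flash_capacity_gb
    # priority 0 is the most critical, so sort ascending
    for name, size_gb, priority in sorted(workloads, key=lambda w: w[2]):
        if size_gb <= remaining:
            placements[name] = "flash"
            remaining -= size_gb
        else:
            placements[name] = "hdd"
    return placements


workloads = [
    ("exchange-01",  400, 0),  # mission-critical
    ("sql-prod-02",  600, 1),  # business-critical
    ("fileserver-1", 900, 2),  # non-critical
]
print(place_workloads(workloads, flash_capacity_gb=1_000))
# {'exchange-01': 'flash', 'sql-prod-02': 'flash', 'fileserver-1': 'hdd'}
```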
A policy-based approach to Storage QoS on a hybrid array platform enables us to guarantee service levels for our virtual workloads. And it’s these service levels that represent the intersection of business needs and the storage platform’s capabilities.
NB: This post is part of the NexGen-sponsored Tech Talk series and originally appeared on GestaltIT.com, where you can find the rest of the series. To learn more about NexGen’s architecture, please visit http://nexgenstorage.com/products/.