
by Michael Karp

In search of useful storage metrics

Mar 30, 2001 | 3 mins
CSO and CISO | Data and Information Security

It’s a curious thing, but CIOs, many of whom now spend more than half their annual budgets on storage, are hard-pressed to come up with a good answer when asked the hard question:

How efficient is the storage piece of your operation?

IT managers can often give their CIOs data on the total amount of storage capacity they have in their shops, and some even know how much of that is actually in use, but that is probably only a tenth of the real story. Understanding the other nine-tenths is going to be a challenge.

Vendors and academia have long provided us with tools to manage and measure performance: MIPS (millions of instructions per second) and FLOPS (floating-point operations per second), which apply to CPUs, are the best known. In storage, a typical performance metric is IOPS (input/output operations per second), which is particularly useful for measuring how quickly data travels from a disk drive to an HBA; we can also infer quite a bit by interpolating the storage piece of other standard benchmarks, such as the TPC-C, which measures database transaction performance. However, even when system costs are figured into the equation to give us a price-performance ratio, we still get no insight into system efficiency. Let’s acknowledge the need for speed and move on.
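To see why a price-performance ratio says nothing about efficiency, consider this minimal sketch. All figures are invented for illustration; neither array name nor number comes from the article:

```python
# Two hypothetical storage arrays with invented throughput and price figures.
arrays = {
    "array_a": {"io_ops": 9_000_000, "seconds": 60.0, "price_usd": 300_000},
    "array_b": {"io_ops": 4_500_000, "seconds": 60.0, "price_usd": 120_000},
}

for name, a in arrays.items():
    iops = a["io_ops"] / a["seconds"]          # input/output operations per second
    dollars_per_iops = a["price_usd"] / iops   # price-performance ratio
    print(f"{name}: {iops:,.0f} IOPS at ${dollars_per_iops:.2f} per IOPS")
```

Both numbers describe speed and cost, yet neither reveals how much of the capacity behind those operations is actually being used or managed, which is the article’s point.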

THE HURWITZ TAKE: Measuring efficiency can be done in many ways. At least one company is showing a megabytes-per-IT-staffer measurement in some of its presentations, but with very little data to back it up. We warn our readers to take such superficial analysis with a healthy grain of salt.

At Hurwitz Group, we find that efficiency is a qualitative as well as a quantitative calculation. For instance, dividing the number of terabytes of storage in your shop by the dollars you spend to acquire and maintain the data (S/C), a capacity-cost ratio, really offers no insight at all into efficiency. All it tells you is what you’ve spent and what you have on hand.

A somewhat better equation would reference MANAGED STORAGE (Sm/C), where a distinction is drawn between what is merely stored and what is actually managed. But because “managed” will have different definitions at different sites, a qualitative decision has to be made here: Does it mean backed-up data? Data that are remotely mirrored in case of disaster? Clustered servers that can fail over when needed?
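The contrast between S/C and Sm/C can be made concrete with a short sketch. The terabyte and dollar figures below are entirely hypothetical; only the two ratios come from the article:

```python
# Hypothetical figures for illustration only.
total_tb = 40.0          # S: total terabytes on the floor
managed_tb = 25.0        # Sm: terabytes backed up, mirrored, or clustered
annual_cost = 500_000.0  # C: dollars to acquire and maintain the data

s_over_c = total_tb / annual_cost      # capacity-cost ratio (S/C)
sm_over_c = managed_tb / annual_cost   # managed-storage ratio (Sm/C)

print(f"S/C  = {s_over_c * 1_000_000:.1f} TB per $1M")
print(f"Sm/C = {sm_over_c * 1_000_000:.1f} TB per $1M")
```

The gap between the two lines is the point: a shop can look cheap per terabyte stored while most of that capacity sits unmanaged and unprotected.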

No consensus currently exists, and thus no acceptable yardstick to measure various vendors’ offerings; vendors being what they are, we don’t expect to see any such metric in the near term either.

At the very least, storage efficiency is a function of the following:

  • Total data under management
  • Costs
  • Impact on core business functions during various management operations (backups, recoveries, provisioning, inventorying, etc.)
  • Time-to-recovery from disasters
  • Total personnel needed to manage the operation
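Since no standard formula combines these factors, the sketch below is purely speculative: every weight, penalty, and input value is invented, and the way the factors are composed is one arbitrary choice among many.

```python
# All inputs and the scoring formula are invented for illustration;
# no industry-standard composite metric exists.
factors = {
    "managed_tb": 25.0,          # total data under management
    "annual_cost_musd": 0.5,     # costs, in millions of dollars
    "backup_window_hours": 4.0,  # business impact of management operations
    "recovery_hours": 8.0,       # time-to-recovery from disasters
    "staff": 2.0,                # personnel needed to manage the operation
}

# Capacity delivered per dollar, penalized by downtime and headcount.
score = factors["managed_tb"] / (
    factors["annual_cost_musd"]
    * (1 + factors["backup_window_hours"] / 24)
    * (1 + factors["recovery_hours"] / 24)
    * factors["staff"]
)
print(f"composite efficiency score: {score:.1f}")
```

Even a toy score like this shows why the calculation is not simple: improving any one factor (say, halving the backup window) shifts the result, and real shops would also need to weigh the qualitative variables the article names, such as QoS and opportunity cost.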

Hurwitz Group thinks that the ultimate metric must include not only ROI and TCO, but also quality of service (QoS) and several general business impact variables, including in many cases opportunity costs.

The industry is a long way from settling on any standard metric. At this point, all we can say is that the calculation is not simple, and goes far beyond just capacity and expense. Until this situation is addressed, it remains a challenge to be an informed buyer.