Performance Metrics: Numbers Can Lie
I often think I have too many metrics. I know you can't manage what you don't measure, but how do you know when you've gone too far? Are there simple performance metrics I should use?
-- Name withheld by request
Depending on the nature of the business, performance metrics can vary widely. Maybe that's why so many businesses have so many metrics: the more processes, the more ways you can slice and dice performance.
Take productivity. There are a number of ways to measure productivity. A common measurement is throughput: the number of units (or transactions or actions or whatever) per unit time. If a call center rep handles 42 calls in three hours, his throughput is 14 calls/hour.
Sounds simple and objective. It's also misleading. What was the nature of the calls? Were they quick and simple? Did they require significant time and effort? Did they require consultation with other departments? If you don't know the answers to those questions, comparing that rep's throughput to another rep's is misleading at best. If handling more calls per hour is the goal and another rep's calls were all easy, then throughput is hardly a reasonable measure of call productivity or, more importantly, call effectiveness.
The same thing happened when I worked in book manufacturing. We ran thousands of titles a year with widely different quantities. Some run quantities were lower than 2,000 while others were over a million.
Running a number of small-quantity jobs during a single shift automatically decreased a crew's throughput, since changing from job to job took time. Put simply, the more jobs you ran in a day, the lower your throughput... even if you were incredibly efficient.
So we measured run and job changeover results separately. We used two simple formulas:
1. Changeover average = total changeover time / number of changeovers
2. Run average = total books produced / total run hours
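As a quick sketch, both formulas are simple enough to compute from a shift log. This is hypothetical Python with made-up shift numbers, not the actual reporting system described here:

```python
def changeover_average(total_changeover_minutes, num_changeovers):
    """Average minutes spent per job changeover."""
    return total_changeover_minutes / num_changeovers

def run_average(total_books, total_run_hours):
    """Books produced per hour of actual run time."""
    return total_books / total_run_hours

# Hypothetical shift: 80 minutes of changeovers across 4 jobs,
# 22,800 books produced over 6 hours of run time.
print(changeover_average(80, 4))    # 20.0 minutes per changeover
print(run_average(22800, 6))        # 3800.0 books per hour
```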
So for example, during a shift a crew may have run 3,950 books per hour at an average changeover time of 28 minutes. Sound good? It's hard to tell: comparing results, either between crews or over time, was a problem. Which is better:
1. 3,950 books per hour and a 28-minute changeover average, or
2. 3,750 books per hour and a 19-minute changeover average?
If you can't answer the question, don't worry because we couldn't either. (Not without layering in other information.) Whether a slower run average and a faster changeover average was better than a faster run average and a slower changeover average depended on each crew's total number of changeovers.
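To see why the answer hinges on the number of changeovers, here's a hypothetical Python sketch. It assumes a 12-hour shift and assumes changeover time simply subtracts from available run time:

```python
def books_per_shift(run_rate, changeover_minutes, num_changeovers, shift_hours=12):
    """Total books a crew produces in a shift: the run rate applied to
    whatever time is left after changeovers."""
    run_hours = shift_hours - (num_changeovers * changeover_minutes) / 60
    return run_rate * run_hours

for n in (2, 6, 10):
    crew_a = books_per_shift(3950, 28, n)  # faster run, slower changeovers
    crew_b = books_per_shift(3750, 19, n)  # slower run, faster changeovers
    print(n, round(crew_a), round(crew_b))
```

Under these assumptions the faster-running crew comes out ahead only when changeovers are few (three or fewer, in this case); with more changeovers per shift, the faster-changeover crew produces more total books.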
Plus the crews knew that, so they manipulated results, writing down shorter or longer changeover times as they saw fit. While a shift always added up to 8 or 12 hours, how that time was actually allocated on paper was open to manipulation. (If they wanted shorter changeover times they could report them, albeit at the expense of their run average.)
So instead of attempting and probably failing to enforce more accurate reporting, we created a metric we called adjusted books/hour (ABH).
Here's how it worked:
Our changeover goal was 20 minutes, so every time a crew performed a changeover they got a 20-minute credit regardless of how long that changeover actually took. If they had four changeovers during a 12-hour shift they were credited 80 minutes of changeover time. The remaining time was considered run time and was divided into the total books produced to determine ABH.
Here's the formula:
ABH = total books / (total shift time - (number of changeovers x 20 minutes))
Here's an example. Say a crew had six changeovers and ran 22,000 books during an 8-hour shift. Six changeovers times the changeover credit of 20 minutes equals 2 hours of credited changeover time. That leaves 6 hours of credited run time, and 22,000 divided by 6 equals 3,667 ABH.
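The ABH calculation from that example might be sketched like this (hypothetical Python; the 20-minute credit is the changeover goal from the text):

```python
def adjusted_books_per_hour(total_books, shift_hours, num_changeovers,
                            credit_minutes=20):
    """ABH: credit each changeover at a fixed 20 minutes regardless of its
    actual duration, and treat the rest of the shift as run time."""
    credited_run_hours = shift_hours - (num_changeovers * credit_minutes) / 60
    return total_books / credited_run_hours

# Example from the text: 22,000 books, 8-hour shift, 6 changeovers.
print(round(adjusted_books_per_hour(22000, 8, 6)))  # 3667
```

Because the credit is fixed, a crew that beats 20 minutes on a changeover keeps the extra run time, and its ABH rises; a crew that runs over loses run time, and its ABH falls.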
The ABH metric gave us a straightforward way to evaluate performance trends and compare crew-to-crew performance. If crews performed changeovers faster than 20 minutes, great--they had more run time available and could run more books, which should increase their ABH. If their changeovers took longer, their ABH suffered since they had less actual time to run books.
Keep in mind we still tracked actual changeover performance so we could spot areas for improvement; we just used ABH as an apples-to-apples metric.
So take a look at what you measure:
- Do some metrics leave room for manipulation?
- Can some metrics be interpreted in different ways?
- Do your metrics measure what is truly important to your business?
If those answers give you pause, think of ways to create your own metrics, especially if custom metrics make it easier for employees to evaluate their own performance.
Measuring is important, but measuring what you need to measure and measuring it the right way is critical.