How Does YOUR Call Center Stack Up?
Part 3: Benchmarking Peer Group Selection
How to Ensure a Fair, Apples-to-Apples Comparison of
Your Call Center Benchmarking Data
The first question we often hear from a client who wants to join a MetricNet benchmarking consortium is “How many companies do you have in your database from my industry?” An equally common question is “Do you have companies ABC and XYZ in your database?” Both of these questions assume that a valid call center benchmark must include only companies from your specific industry. Sometimes this assumption is accurate, but often it is not. The fact is, there are many other factors besides industry affiliation that are more important – sometimes far more important – when selecting a peer group for benchmarking comparison.
Call centers can improve their overall performance based on internal benchmarks alone, but will eventually experience diminishing returns in their improvement efforts unless they look outside their own organizations. It is in comparing themselves to peers that they can put their results into context, and begin to experience “breakthrough” improvements. For example, a call center may take pride in reducing its cost per call by 10%, but not realize that its peers still operate at a cost 30% lower! Your call center performance is therefore best examined in light of comparisons to appropriate peer groups. This raises the question of what constitutes an appropriate peer group…one that ensures a fair, apples-to-apples comparison of your call center?
Drawing on more than 30 years of benchmarking experience and more than 1,000 call center benchmarks, MetricNet has developed a proprietary technique called Dynamic Peer Group Selection™ that ensures a fair and accurate benchmark of your call center. Here, for the first time, MetricNet explains the process, and provides an approach for selecting a valid peer group for your benchmark.
Let me start by debunking a couple of common myths about benchmarking. This is important because it sets the stage for how to select your benchmark peer group.
Benchmarking is an inexact science. Because of differences in the way call centers define their metrics, account for their costs, and track their performance, there will always be some inconsistency in the way call centers report benchmarking data. As an example, one call center may define an abandoned call as any call that is dropped at any point after the call hits the ACD. By contrast, another call center may count a call as abandoned only if the caller has waited on the line for at least 20 seconds before hanging up. Clearly, these different definitions will yield different abandonment rates, even if the two call centers experience exactly the same caller behavior.
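To make the arithmetic concrete, the following sketch computes the abandonment rate for the same hypothetical call traffic under both definitions. The call records, the 20-second threshold, and the function name are illustrative assumptions, not MetricNet's actual methodology.

```python
# Illustrative sketch: two abandonment definitions applied to
# identical call traffic produce different reported rates.
# All data and names here are hypothetical.

def abandonment_rate(calls, min_wait_seconds=0):
    """Percent of offered calls counted as abandoned.

    A call counts as abandoned only if the caller hung up AND had
    waited at least `min_wait_seconds` before doing so.
    """
    offered = len(calls)
    abandoned = sum(
        1 for c in calls
        if c["hung_up"] and c["wait_seconds"] >= min_wait_seconds
    )
    return 100.0 * abandoned / offered

# Identical traffic for both centers: 100 offered calls,
# 10 hang-ups, 4 of which occur within the first 20 seconds.
calls = (
    [{"hung_up": False, "wait_seconds": 5}] * 90
    + [{"hung_up": True, "wait_seconds": 10}] * 4
    + [{"hung_up": True, "wait_seconds": 30}] * 6
)

# Center A: any hang-up after the call hits the ACD counts.
rate_a = abandonment_rate(calls)                       # 10.0 (%)
# Center B: only hang-ups after a 20-second wait count.
rate_b = abandonment_rate(calls, min_wait_seconds=20)  # 6.0 (%)
```

Same callers, same behavior, yet Center A reports a 10% abandonment rate while Center B reports 6% – which is exactly why definitions must be normalized before any benchmarking comparison.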