

Welcome everyone. I’m Jeff Rumburg, Managing Partner of MetricNet.

In Metrics Essentials for Contact Center Professionals, my goal is to teach you everything you need to know to leverage metrics for success in your contact center.

Today, we continue our discussion of contact center benchmarking in Part 2 of the sixth module. Specifically, we are going to discuss the various ways that benchmarking data can be presented.

To be clear, this is not just a discussion of aesthetics. Obviously, we want our benchmarking output to be beautiful and pleasing to the eye. But the bigger reason this topic is important is because the way you present your benchmarking data can make or break a benchmark. It can facilitate deep understanding of the strengths and weaknesses of a contact center. But it can also create confusion if not presented properly. Moreover, the way benchmarking data is presented can greatly facilitate the diagnosis and interpretation of the data, as well as the subsequent actions you take, based on the data. In short, the way you present a benchmark will actually drive the quality of your results.

Before I dive into the benchmarking data and presentation styles, let’s revisit a slide that I presented in Module 1 of our course. The 20 metrics shown on this page are among the most common in the industry, and they also yield the greatest insights. Not coincidentally, these are also the metrics that we use at MetricNet to benchmark inbound customer service contact centers. The reason I’m showing them now is because the subsequent slides that I go through are going to reference all of these metrics.

However, just so there’s no confusion about these metrics versus the seven KPIs that we have been discussing, let me make this point.

First, six of the KPIs are already included on this list. They are highlighted here. Only the balanced score, which is also a KPI, is not shown here.

At MetricNet, we collect data for all these metrics. So, we’re in a unique position. We have the luxury of using every one of these metrics when we conduct a benchmark.

However, for those who are not in the business of benchmarking, my advice is to use the KPIs highlighted on this page, plus the balanced score, when benchmarking your contact center.

Just as a quick reminder: The definitions for these metrics have been documented in three eBooks that you will find in the Resources section of Module 1. I would encourage you to download those so that you have the exact definition for every metric we discuss in this course.

With that said, let’s take a look at some benchmarking slides. The first chart we have is tabular; it’s a table of data. Let me explain how this slide works.

Down the left column of the table we have our metrics taxonomy. In the second column from the left, we have the benchmarking metrics themselves. Please note that we do not call these KPIs because not all the metrics shown are KPIs. Next to that, in the third column from the left, we have the benchmarking data for the contact center being benchmarked, Company XYZ. This is their actual benchmarking data. Please note that the benchmarking data represents the average performance over some period of time; it could be a year, six months, one quarter, or even one month. If you’re benchmarking for the first time, we recommend that you use a year of data, or at least six months of data if possible.

Finally, we have the four columns on the right, which show some important statistics. First is the arithmetic average. Next is the minimum, which doesn’t mean good or bad; it’s simply the lowest value in the benchmarking peer group. Then we have the median, which is different from the average. The average and median will only be the same for a symmetrical distribution, such as a normal distribution. But none of the distributions for a contact center are symmetrical, because they are all tied, either directly or indirectly, to the Erlang C distribution, which describes, probabilistically, the rate at which contacts come into a contact center. That distribution is highly asymmetrical: it has a spike on the left-hand side, hence the term spikiness, and a long tail on the right side. Next to the median, on the far right of this chart, is the maximum, which again does not mean good or bad; it’s simply the highest value in the benchmarking peer group.

There are 32 contact centers in this benchmark. So, what you are seeing here in the four columns on the right of the table are the statistics for those 32 contact centers.
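To see why the average and the median diverge on a right-skewed distribution like the ones described above, here is a minimal sketch. The peer-group numbers are purely illustrative, not MetricNet data; with a long right tail, the average is pulled above the median:

```python
import statistics

# Hypothetical Cost per Contact values for a peer group -- illustrative
# numbers only, chosen to have a long right tail (right-skewed).
peer_group = [6.1, 7.3, 8.0, 8.42, 9.5, 10.2, 11.0, 12.4,
              13.1, 14.0, 15.5, 17.2, 19.8, 24.0, 31.5, 45.0]

stats = {
    "average": statistics.mean(peer_group),   # pulled upward by the tail
    "median":  statistics.median(peer_group), # middle of the ranked data
    "minimum": min(peer_group),               # lowest value; not "best" or "worst"
    "maximum": max(peer_group),               # highest value; likewise neutral
}

# On skewed data the average exceeds the median.
print(stats["average"] > stats["median"])  # -> True
```

The same four statistics appear in the right-hand columns of the benchmarking table; only the peer group itself differs.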

Now, there’s a lot of insight to be gained from this chart. We could spend an hour on this chart alone, but I won’t do that.

Instead, let’s look at the KPIs. They’re highlighted on this chart, and it’s the KPIs that yield the greatest insights. We can see very quickly here that:

  • Costs for this contact center are below average: $8.42 vs. an average of $13.76 for the peer group.
  • Agent Utilization is above average: 54.9% vs. 53.2% for the peer group.
  • The % of Calls Answered in 30 Seconds is way below average: 2.6% vs. 42.5% for the peer group.
  • First Contact Resolution Rate is also way below average: 57.6% vs. 72.6% for the peer group.
  • Customer Satisfaction is above average: 87.6% vs. 78.5% for the peer group.
  • And finally, Agent Job Satisfaction is well above average: 99% vs. 71.5% for the peer group.

The most significant insight from this chart comes from the foundation metrics: Cost per Contact and Customer Satisfaction. In this case, Cost per Contact is below average, and Customer Satisfaction is above average. That is significant because these are the two most important KPIs, and in the case of Contact Center XYZ they fulfill two of the criteria for a world-class contact center: cost below average and customer satisfaction above average.

Here is what the service level and quality metrics look like when presented in a quartile table. Remember, quartile means one fourth, and because we have 32 contact centers in this peer group, each quartile will contain data from 8 contact centers.

When we stack rank these metrics from best to worst performance, the best performing 8 contact centers will make up the first quartile; the top quartile, which is the best performing quartile. The second group of 8 contact centers in the ranking will make up the second quartile, and so on.
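The stack-ranking procedure above can be sketched in a few lines. This is an illustrative implementation, not MetricNet's, and the 32 First Contact Resolution rates are hypothetical; only the endpoints match figures from the earlier table:

```python
def quartile_rank(values, better="lower"):
    """Stack-rank a peer group from best to worst and assign each value
    a quartile: 1 is the top (best performing) quartile, 4 the bottom.
    Assumes the group size divides evenly by 4."""
    ranked = sorted(values, reverse=(better == "higher"))
    per_quartile = len(ranked) // 4
    return {v: (i // per_quartile) + 1 for i, v in enumerate(ranked)}

# 32 hypothetical First Contact Resolution rates (higher is better),
# so each quartile holds 8 contact centers.
fcr = [57.6, 60.1, 62.0, 63.5, 64.2, 65.0, 66.1, 67.3,
       68.0, 68.8, 69.5, 70.2, 71.0, 71.8, 72.4, 73.0,
       73.6, 74.2, 74.9, 75.5, 76.1, 76.8, 77.4, 78.0,
       78.7, 79.3, 80.0, 80.8, 81.5, 82.3, 83.1, 84.0]

ranks = quartile_rank(fcr, better="higher")
print(ranks[57.6])  # -> 4: the lowest FCR lands in the bottom quartile
```

For a metric where lower is better, such as Cost per Contact, the same function is called with `better="lower"` so the cheapest contact centers rank in the first quartile.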

The cells that are highlighted in blue represent the quartile performance for Contact Center XYZ, the contact center being benchmarked. As you can see, all the service levels for Contact Center XYZ are in the fourth quartile, the bottom quartile. While all of the quality metrics are in the top quartile, the best performing quartile.

What I like about the quartile charts is that you can see very quickly, at a glance, which quartile your contact center lands in. Ideally, you want your contact center to be in the top quartile for every KPI. And as I mentioned in Module 5, where we covered Industry Benchmarks, we recommend that you establish performance targets that are top quartile in your benchmarking peer group.

You have already seen the balanced scorecard slides in this course. But I will show them again, just for the sake of completeness in this module. What’s shown here is the Balanced Scorecard table, which not only shows the balanced score in the lower right corner of the table, but it provides an explanation for how the scorecard is calculated.

Likewise, when you show the scorecard rankings in a distribution chart, or a bar chart, you get an illustration like this. Contact Center XYZ’s data point will always show up as a dark blue or green bar on these charts, while the peer group data is the light blue bars. Keep in mind that all of these bar charts are organized from left to right, good to bad. The better performers are on the left of the chart, while the weaker performers are on the right of the chart. The numbers on the x-axis are contact center designators. We know who these companies are, of course, but we don’t disclose their names. The red dashed line that goes horizontally across the chart is the average of the data points. And the legend that appears in the box in the upper right corner shows several statistics for the metric, including the maximum value, the average value, the median value, the lowest value, and the Contact Center XYZ value for the balanced score.

On this and the next five slides I will show you what the distribution chart looks like for each metric in the scorecard. This chart shows the Cost per Contact Data. Remember, left to right is good to bad. The lower cost per contact data points are on the left side of the chart, while the higher cost per contact data points are on the right side of the chart. You can see that Contact Center XYZ, the green bar, has the lowest Cost per Contact in the benchmark.

Here’s the chart for Customer Satisfaction. You can see at a glance that Company XYZ, the green bar, is above average for Customer Satisfaction.

Likewise for Agent Utilization. Contact Center XYZ is above average on this metric, which means that their agents are relatively efficient.

For First Contact Resolution Rate, Contact Center XYZ had the lowest value in the benchmark.

They had the highest Agent Job Satisfaction in the benchmark.

And they had the lowest percentage of Calls Answered in 30 Seconds because their Average Speed of Answer was so long, at 176 seconds.

Finally, to emphasize the primacy of Cost and Quality, Efficiency and Effectiveness, we always include this two-dimensional chart in our benchmarks. The x-axis shows the Cost per Contact from highest on the left, to lowest on the right. And the y-axis shows Customer Satisfaction, from lowest at the bottom of the chart, to highest at the top of the chart.

Moreover, the chart is organized into four quadrants. The upper right quadrant represents low cost and high quality. This is the golden quadrant, the one you want to operate in, and we already know from earlier charts that I have presented that this contact center, Contact Center XYZ, is low cost and high quality. The upper left quadrant is high quality, but it’s also high cost. The lower right quadrant is low cost but also low quality. And the lower left quadrant is the worst of all because it represents high cost and low quality.

Ideally, you want your contact center to operate in the upper right quadrant, where your costs are below average, and your quality is above average.
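The quadrant logic described above reduces to two comparisons against the peer-group averages. Here is a minimal sketch; the function name is my own, and the inputs are the Contact Center XYZ and peer-group figures from the earlier table:

```python
def quadrant(cost, csat, avg_cost, avg_csat):
    """Classify a contact center on the cost/quality grid relative to
    peer-group averages. With cost decreasing to the right and customer
    satisfaction increasing upward, 'upper right' is the low-cost,
    high-quality golden quadrant."""
    vertical = "upper" if csat > avg_csat else "lower"
    horizontal = "right" if cost < avg_cost else "left"
    return f"{vertical} {horizontal}"

# Contact Center XYZ vs. the peer-group averages from the table.
print(quadrant(cost=8.42, csat=87.6, avg_cost=13.76, avg_csat=78.5))
# -> "upper right": below-average cost, above-average satisfaction
```

A contact center with above-average cost and below-average satisfaction would land in the lower left, the worst of the four quadrants.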

This concludes Part 2 of our sixth module. I would invite you to join me for Part 3 of Module 6, where I will present a benchmarking case study and illustrate the power of benchmarking to put your contact center on the path to World-Class performance.

I want to thank you for joining me today. I’m Jeff Rumburg, Managing Partner of MetricNet.
