Welcome everyone. I'm Jeff Rumburg, Managing Partner of MetricNet. In Metrics Essentials for Contact Center Professionals, my goal is to teach you everything you need to know to leverage metrics for success in your contact center. Today, we continue our discussion of contact center benchmarking in Part 3 of the sixth module. Specifically, I am going to take you through a case study of a recently completed MetricNet benchmark.

This slide probably looks familiar to you because I presented it in the last module. However, the data in this table is different because our case study involves a different contact center. We're going to tackle this case study in four steps. First, we are going to identify the performance gaps in this contact center. Second, we are going to diagnose why the gaps exist; in other words, we will determine the root causes driving the performance gaps. Third, we will discuss how to close or mitigate the performance gaps, meaning the actions that will be taken based on the gap analysis. And finally, I'm going to tell you the outcomes this contact center achieved as a result of implementing its action plan.

So, let's talk about the performance gaps. We can go right down the third column from the left and compare Contact Center ABC's performance to its peer group average. The gaps are very obvious. First, the costs are high: both Cost per Contact and Cost per Minute of Handle Time are much higher than the benchmarking peer group average. Second, two of the productivity metrics are low: Inbound Contacts per Analyst per Month and Analyst Utilization are significantly below the peer group average. In fact, that's probably why the costs are high. Third, all three service levels are excellent, and are significantly higher than the peer group averages.
Next, all the quality metrics are lagging, and are much lower than the peer group averages. For the analyst metrics, we see that the training hours are well below the peer group averages, and Job Satisfaction, which is one of our KPIs, is also well below average. And finally, the IVR Containment Rate, sometimes called the self-help or self-service rate, is much lower than the peer group average. That's a lot of performance gaps, so we have a lot to work with here. In fact, except for the service level metrics, which were particularly strong, virtually every other metric in this benchmark has a negative performance gap compared to the peer group averages. So, that's step 1. Very simple: just identify the performance gaps.

Now for step 2. Let's analyze, or diagnose, these gaps and answer the question: what's driving them? Well, we know that cost and productivity are inversely related. As productivity goes down, costs go up. So, it's reasonable to deduce that this contact center has high costs because its productivity metrics are well below average. When a contact center has low productivity, that's the same thing as saying it is overstaffed: it has too many agents for the workload it's handling.

Let's dig a bit further and ask why they would have too many agents. In answering that question, the service levels give us a clue. Remember that the service levels are the only metrics where Contact Center ABC performed better than the peer group. We can see that the Average Speed of Answer, the % of Calls Answered in 30 Seconds, and the Call Abandonment Rate are all much better than the peer group averages. In fact, that's probably why they are overstaffed. You see, when you have aggressive service levels of the sort we see here, you need to staff up to hit that 12-second ASA and the 2.9% abandonment rate. The explanation they gave me for their aggressive service levels is that the targets had been mandated by the COO of the company.
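The inverse relationship between productivity and cost can be made concrete with a quick back-of-the-envelope calculation. The numbers below are purely illustrative assumptions, not the actual case-study data: labor cost for one analyst-month divided by the contacts that analyst handles in a month.

```python
# Hypothetical model of Cost per Contact as a function of analyst
# productivity. All figures are illustrative assumptions, not the
# actual case-study data.

def cost_per_contact(monthly_cost_per_analyst: float,
                     contacts_per_analyst_per_month: float) -> float:
    """Labor-only Cost per Contact: the cost of one analyst-month divided
    by the contacts that analyst handles in a month."""
    return monthly_cost_per_analyst / contacts_per_analyst_per_month

loaded_cost = 4000.0  # assumed fully loaded monthly cost per analyst ($)

# Low productivity (overstaffed): each analyst handles fewer contacts.
low = cost_per_contact(loaded_cost, 500)   # 500 contacts/analyst/month
# Higher productivity: the same workload spread over fewer analysts.
high = cost_per_contact(loaded_cost, 700)  # 700 contacts/analyst/month

print(f"Cost per Contact at 500 contacts/month: ${low:.2f}")   # $8.00
print(f"Cost per Contact at 700 contacts/month: ${high:.2f}")  # $5.71
```

Same workload, same cost per analyst, fewer analysts: productivity rises and Cost per Contact falls, which is exactly the lever the diagnosis points at.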
They had been told to keep the ASA under 15 seconds and the abandonment rate under 3%. Well, they did that. But at what cost? We know from the cost metrics that it came at a very high cost. This is a classic example of someone, in this case the chief operating officer, who is unfamiliar with contact center operations assuming that aggressive service levels will lead to high levels of customer satisfaction. But we know from hundreds of benchmarks that customers don't care that much about service levels; they do care a lot about quality, as measured by First Contact Resolution Rate. In other words, if you must make a tradeoff between the high cost of fast service levels and the high quality of first contact resolution, a high FCR wins out every time. It turns out that customers are willing to forgive an occasional long speed of answer or an abandoned call, as long as they reach a competent analyst when they get through, competent meaning an analyst who delivers a high first contact resolution rate.

Now, let's look at the quality metrics. All of them are below the peer group averages. We know that First Contact Resolution Rate is the biggest driver of Customer Satisfaction, and sure enough, the FCR is low, thereby driving a low Customer Satisfaction. But why is the FCR low? Well, look at the training hours, which we know have a significant effect on First Contact Resolution Rate. Both the initial agent training hours and the annual agent training hours are well below the peer group averages. So, I think we can accurately state that low agent training hours are leading to a low First Contact Resolution Rate, which in turn leads to low customer satisfaction.

To summarize our diagnosis: we have high costs due to low productivity and overstaffing, and the root cause is the aggressive service levels, which had been mandated under the false assumption that aggressive service levels would produce high levels of customer satisfaction.
On the quality side, inadequate analyst training led to low first contact resolution rates, which in turn drove low levels of customer satisfaction. By the way, the gap analysis I just went through follows our KPI cause-and-effect diagram perfectly. Remember from Module 4 of the course that the KPIs on this chart are in the dark blue boxes. On the left side of the diagram, you can see how the service levels drive Agent Utilization, and that Agent Utilization, in turn, drives Cost per Contact. The contact center in our benchmark was overstaffed because the service levels were too aggressive. This led to low Agent Utilization, which led to high Cost per Contact. On the right side of the diagram, you can see that agent training hours drive First Contact Resolution Rate, and that FCR, in turn, drives Customer Satisfaction. Because the agents in our case study did not receive adequate training, their FCR was low, and that caused customer satisfaction to be below average.

So far, we have identified the performance gaps and diagnosed them. Now we want to develop an action plan to close those gaps: specifically, to reduce the Cost per Contact and increase the customer satisfaction of this contact center. On the cost side of our action plan, a headcount analysis showed that the contact center was overstaffed by approximately 15 agents. We didn't recommend any layoffs. Instead, we simply recommended that the contact center shrink by 15 agents through normal attrition, and not replace those agents. At the turnover rates they were experiencing, it took this contact center only about four months to downsize by 15 agents. On the quality side of our action plan, we recommended additional training that was specifically focused on improving the First Contact Resolution Rate. We suggested an FCR goal of 85%, and 20 hours of training focused specifically on improving the first contact resolution rate. That was their action plan: very simple, very straightforward, but remarkably effective.
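A headcount analysis of this kind can be sketched with a simple utilization model: workload in handle-time minutes divided by available agent minutes gives utilization, and the target utilization implies the headcount needed. The workload and current headcount below are assumptions, not disclosed case-study figures; I chose them so the implied before-and-after utilization lines up with the 56.7% and 64.9% reported in the results, but MetricNet's actual headcount methodology may differ.

```python
import math

# Sketch of a headcount analysis using a simple utilization model.
# The workload and headcount are assumed (the actual figures were not
# disclosed); they were picked to reproduce the case study's reported
# before/after utilization of 56.7% and 64.9%.

def utilization(handle_minutes: float, headcount: int,
                minutes_per_agent_month: float = 9_600) -> float:
    """Fraction of available agent time spent handling contacts.
    9,600 minutes is roughly 160 working hours per agent-month."""
    return handle_minutes / (headcount * minutes_per_agent_month)

def required_agents(handle_minutes: float, target_utilization: float,
                    minutes_per_agent_month: float = 9_600) -> int:
    """Smallest headcount whose utilization stays at or below the target."""
    return math.ceil(handle_minutes / (minutes_per_agent_month * target_utilization))

workload = 648_000  # assumed handle-time minutes per month
current = 119       # assumed current headcount

print(f"Current utilization: {utilization(workload, current):.1%}")      # 56.7%
needed = required_agents(workload, target_utilization=0.65)
print(f"Headcount at ~65% utilization target: {needed}")                 # 104
print(f"Utilization after downsizing: {utilization(workload, needed):.1%}")  # 64.9%
print(f"Overstaffed by: {current - needed} agents")                      # 15
```

Under these assumptions the model says the center carries about 15 more agents than a 65% utilization target requires, which is the same order of reduction the action plan called for.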
Which brings me to Step 4: what happened? What were the results? As I mentioned, the contact center downsized its agent headcount by 15 FTEs within 120 days. As expected, this did increase their ASA; it went from 12 seconds to 40 seconds. But it also increased their Agent Utilization from 56.7% to 64.9%, and it reduced their Cost per Contact from $6.89 to just $4.40. So, on the cost side, we achieved our objective of reducing the Cost per Contact to less than the peer group average. On the quality side, it took about six weeks for all of the agents to go through the FCR training. The contact center saw almost immediate results, as their first contact resolution rate began to climb as soon as the training was completed. Within six months, the contact center had exceeded its FCR goal of 85%. More importantly, their customer satisfaction increased from 63% to almost 90%, driving their CSAT well above the peer group average of 79%.

What I hope you take away from this is the following: benchmarking is a very straightforward process. Selecting an appropriate peer group is critical, as is presenting your benchmarking results in a way that is easy to understand and digest. Once you have your benchmarking data organized in a chart like this, it's very easy to spot the performance gaps. And it's almost as easy to diagnose those gaps, as we did in our case study, by leveraging the KPI cause-and-effect diagram. Finally, once you know what drives the performance gaps, it's just a matter of pulling the right levers to close or mitigate them. Here again, the KPI cause-and-effect diagram is very effective in helping to formulate the actions that lead to improved performance.

This concludes Part 3 of our sixth module. I invite you to join me for Module 7, where we will discuss how process maturity drives your contact center performance. I want to thank you for joining me today. I'm Jeff Rumburg, Managing Partner of MetricNet.