Welcome everyone. I'm Jeff Rumburg, Managing Partner of MetricNet. In Metrics Essentials for Contact Center Professionals, my goal is to teach you everything you need to know to leverage metrics for success in your contact center. Today, in Module 8 of our course, we're going to discuss how metrics can be leveraged to drive accountability in your contact center. The premise of this module is that you can't get great performance from your contact center unless you get great performance from the individual agents who work in it. And the way you do that is by establishing agent-level performance targets, and then holding the agents accountable for achieving those targets. Now, I am aware that the word accountability has some negative connotations. It suggests that agents are being micromanaged, or that Big Brother is watching them. But in reality, nothing could be further from the truth. Accountability is a good thing, because clear goals and objectives go hand-in-hand with accountability. Without clear goals and objectives, you will never have accountability. Moreover, I have learned in 30 years of contact center benchmarking that the best-performing agents embrace accountability; they want accountability; they are not threatened by it. On the other hand, those who are threatened by accountability are almost always those who are not performing well. And they know it, so they don't want to be held to account. In Module 3 of the course I introduced the contact center balanced scorecard. What I am showing on this page is very similar, but it's an agent-level scorecard. The concept here is the same as for the overall contact center balanced scorecard, but in this case we're creating a monthly scorecard for each individual agent in the contact center.
You can think of this as a monthly report card for the agent. Every month, every agent gets a scorecard. We have found the agent scorecard to be far more effective at driving the right agent behaviors, and driving outstanding agent performance, than the more traditional approach of measuring just one metric, which is usually call quality. Creating an agent balanced scorecard in Excel is relatively straightforward. It's mechanically and mathematically the same as creating a scorecard for your overall contact center. You can follow along on this slide as I explain the process. And, if you'd like, you can download this template in Excel by clicking on the link in the description below the video. First, you select the metrics to include in your scorecard. We recommend using the four metrics shown here in the left column of the Excel table. Depending upon the agent metrics you track, you may choose fewer metrics, or a different mix of metrics, for your agent scorecard. A key point, however, and this is very important: each metric must be measurable at an individual agent level. And that's generally the case for the four metrics we have selected for this scorecard: Customer Satisfaction, Number of Contacts Handled per Month, First Contact Resolution Rate, and Schedule Adherence. Why these four metrics? Well, the first metric, Customer Satisfaction, is one of the foundation metrics. It's the single most important measure of quality. The second metric, Number of Contacts Handled, is a productivity metric, and is a proxy for cost. A high number of contacts handled translates into a low cost per contact, while a low number of contacts handled translates into a high cost per contact. Then, as you know, FCR, First Contact Resolution Rate, is the biggest driver of Customer Satisfaction. And finally, Schedule Adherence is important because it tells you how seriously the agent takes the workforce schedule.
A low score for Schedule Adherence tells you that the agent is not respecting the schedule, while a high score tells you that the agent is complying with the schedule. In other words, they are in the right place, at the right time. Second, you establish a weighting for each metric. This is a judgment call, but we have put an equal weighting of 25% on each metric in our sample scorecard. Step 3 is to insert the range of agent performance for the month, worst case to best case, for each metric. These performance ranges come directly from the performance of your agent pool for the month. In step 4, each agent's performance for the month is inserted into the third column from the right. This is what creates their unique scorecard for the month. Please keep in mind that you will be creating a scorecard for each agent on a monthly basis, and the scorecard will be used to coach the agent, where they need coaching, and to show each agent how they compare to other agents in the contact center. In our example here, we are calculating the monthly score for Agent #22. A score for each metric is then calculated based on the interpolation formula in step 5. This score tells each agent how far along the path they are from the worst performance to the best performance. And finally, a balanced score for each metric is determined by multiplying the metric weighting by the metric score. When the balanced scores for each metric are summed, you have the total balanced score for the particular agent whose performance you're measuring! In this example, Agent #22's balanced score for the month of June is 67.4%. Just as with the overall contact center balanced score, each agent's balanced score will range from 0%, if the agent has the worst possible score on every metric for the month, to 100%, if the agent has the best possible performance on every metric for the month. Here's a key insight to take away from this slide.
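The interpolation and weighting steps just described can be sketched in a few lines of Python. This is a minimal illustration, not MetricNet's actual spreadsheet: the worst-case/best-case range in the comment is a hypothetical example, while the four metric scores and the equal 25% weights are the ones quoted for Agent #22 in this module.

```python
def metric_score(actual, worst, best):
    # Step 5 interpolation: how far along the path this agent sits between
    # the pool's worst performance (0.0) and its best performance (1.0).
    return (actual - worst) / (best - worst)

def balanced_score(weighted_scores):
    # Step 6: multiply each metric score by its weighting, then sum the
    # results to get the agent's total balanced score for the month.
    return sum(weight * score for weight, score in weighted_scores)

# Hypothetical example of the interpolation: a 92% customer satisfaction
# result in a pool ranging from 70% (worst) to 95% (best) scores 88%.
example = metric_score(0.92, 0.70, 0.95)

# Agent #22's June metric scores as quoted in the module, each weighted 25%.
agent_22 = [
    (0.25, 1.000),  # Customer Satisfaction (best in the pool)
    (0.25, 0.863),  # Number of Contacts Handled per Month
    (0.25, 0.833),  # First Contact Resolution Rate
    (0.25, 0.000),  # Schedule Adherence (worst in the pool)
]
print(f"Balanced score: {balanced_score(agent_22):.1%}")  # Balanced score: 67.4%
```

Because the weights sum to 100%, an agent who is worst in the pool on every metric scores 0% and one who is best on every metric scores 100%, exactly as described above.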
The second column from the right, the Metric Score, will show you, at a glance, where a particular agent is performing well, and where that agent needs improvement. As you can see here, the 100% score on customer satisfaction tells us that Agent #22 has the highest customer satisfaction in the contact center for the month of June. Their score of 86.3% for the number of contacts handled is also excellent, and likewise for their FCR, First Contact Resolution Rate, metric score of 83.3%. So far, so good. However, for the last metric, Schedule Adherence, Agent #22 had the worst performance in the contact center for the month of June, so their metric score for Schedule Adherence is 0%. Any metric score above 50% tells you that the agent is performing above average for the month, while any score below 50% tells you that the agent is performing below average. A metric score of 75% or more tells you that the agent is in the top quartile of performance for the month. Since this particular agent scored well on the first three metrics, but had the worst score in the contact center for schedule adherence, my high-level feedback for this agent would be to maintain performance on Customer Satisfaction, Number of Contacts Handled, and First Contact Resolution, because the metric scores for those three KPIs are all very good. They're in the top quartile. However, since the metric score for Schedule Adherence is zero, there's clearly some room for improvement here. Moreover, this is not a hard problem to solve. To the agent, I would say: just follow the workforce schedule, and make sure that you are in the right place at the right time. It's not complicated, and it will improve your balanced score dramatically. As I mentioned a few minutes ago, the balanced scores can be used to rank the agents in your contact center. The dark blue row in the middle of this chart represents Agent #22's performance for the past six months, from January through June.
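Those reading rules (above or below 50%, 75% or more for top quartile) are easy to mechanize when you want to flag coaching areas automatically. Here is a small sketch; the band labels are my own shorthand for the thresholds quoted in this module.

```python
def performance_band(metric_score):
    # Thresholds from the module: 75%+ means top quartile, above 50% means
    # above average, below 50% means below average for the month.
    if metric_score >= 0.75:
        return "top quartile"
    if metric_score > 0.50:
        return "above average"
    if metric_score < 0.50:
        return "below average"
    return "average"

# Agent #22's June metric scores, as quoted on the scorecard slide.
june = {
    "Customer Satisfaction": 1.000,
    "Contacts Handled": 0.863,
    "First Contact Resolution": 0.833,
    "Schedule Adherence": 0.000,
}
for metric, score in june.items():
    print(f"{metric}: {performance_band(score)}")
```

Running this flags the first three metrics as top quartile and Schedule Adherence as below average, which matches the coaching feedback above: maintain the first three, fix the fourth.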
For the month of June, their balanced score of 67.4%, which we calculated on the previous page, puts them in 8th place when ranked against all other agents in this contact center. This type of ranking is becoming more common in the contact center industry. It's designed to foster transparency, visibility, and accountability, by enabling each agent to see how their performance compares to their peers in the contact center, and ideally, to motivate them to do better. Because, after all, no one wants to be at the bottom of these rankings. I mentioned earlier that in addition to driving agent accountability in the contact center, the agent scorecard also facilitates monthly coaching. Historically, agents have been evaluated on their call quality every month. While there may be some value in this, it's very time consuming for the supervisors who conduct these call quality audits. Moreover, call quality is very subjective, and by itself it neglects other important metrics: customer satisfaction, which is a quality metric; number of contacts handled, which is a productivity metric; first contact resolution rate, another quality metric, which drives customer satisfaction; and schedule adherence, which can impact wait times for customers during busy periods in the contact center. On the scorecard slide, I have already alluded to how I would coach this particular agent. Since they are performing well on the first three metrics, Customer Satisfaction, Number of Contacts per Month, and First Contact Resolution, my advice is to keep performing at this level. That's why the performance targets for these three metrics are the same as the agent's current performance. You may recall from our module on contact center benchmarks that performance targets should be established at the top quartile. Well, the same is true for individual agent performance targets.
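Producing that monthly ranking is a simple sort on the balanced scores. In this sketch, only Agent #22's 67.4% comes from the module; the other agents and their scores are invented purely for illustration.

```python
# Illustrative June balanced scores; only Agent #22's value is from the module.
june_scores = {
    "Agent #14": 0.912,
    "Agent #03": 0.845,
    "Agent #22": 0.674,
    "Agent #09": 0.520,
    "Agent #31": 0.488,
}

# Rank agents from highest balanced score to lowest.
ranking = sorted(june_scores.items(), key=lambda item: item[1], reverse=True)
for rank, (agent, score) in enumerate(ranking, start=1):
    print(f"{rank}. {agent}: {score:.1%}")
```

Publishing this list each month is what gives every agent visibility into where they stand relative to their peers.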
If an agent's metric score on the balanced scorecard is 75% or above, they are already in the top quartile. And that happens to be the case for Agent #22, whose metric scores are well above 75% for the first three metrics shown on this chart. However, for Schedule Adherence, this agent has lots of room for improvement. Their score would have to improve from the current 70% to 90% to get them into the top quartile for schedule adherence. That, in turn, would improve their agent balanced score for the month from 67.4% to 84.6%. One final point. These metrics were chosen for the agent scorecard not just because they are usually tracked at the agent level, but also because agents have control over them. It would be unfair to hold an agent accountable for any metric they can't directly control. For example, contact center uptime, agent turnover, and average speed of answer are not controllable at the individual agent level, so we would never want to include these in the agent balanced scorecard. By contrast, each of the four metrics in our sample agent scorecard is controllable at the agent level. So, we're not holding these agents accountable for any metric that is beyond their control. This concludes Module 8 of our metrics course. I would invite you to join me for Module 9, where we will discuss the Metrics Hierarchy, which addresses the nine key success factors for contact center metrics. I want to thank you for joining me today. I'm Jeff Rumburg, Managing Partner of MetricNet.