We offer Contact Center Benchmarks with Cost Metrics for organizations that operate their own internal, in-house Contact Centers.
We offer Desktop Support Benchmarks with Cost Metrics for organizations that operate their own internal, in-house Desktop Support Groups.
Find answers to commonly asked questions regarding MetricNet products and services.
Our sample reports are available for Service Desk, Desktop Support and Contact Center professionals worldwide.
Our call center resources and articles
Our desktop support resources and articles
Our service desk resources and articles
Our free metrics ebooks and introductory guides
Our regular featured metric
Many of our clients rely upon our webcasts as an effective tool for training, coaching, and improving the skill sets of their service and support professionals.
A range of free and downloadable whitepapers
Each month MetricNet highlights one Key Performance Indicator for the Contact Center, Service Desk, or Desktop Support. We define the KPI, provide recent benchmarking data for the metric, and discuss key correlations and cause/effect relationships for the metric. The purpose of this column is to familiarize you with the Key Performance Indicators that really matter to your support organization, and to provide actionable insight on how to leverage these KPIs to improve your performance. Our most recent Metric of the Month articles can be found below. To access additional content, please visit the Metric of the Month Archives.
The vast majority of IT support organizations are tracking too many metrics – oftentimes 20 or more! Unfortunately, this approach favors quantity over quality, resulting in wasted time and energy on a metrics bureaucracy that provides little insight and few tangible benefits…
Today’s contact center technologies and reporting packages make it easy to capture copious amounts of performance data. Most contact center managers can tell you everything from last month’s average speed of answer to yesterday’s average handle time. But what does it all mean? If my abandonment rate goes up, but my cost per contact goes down, is that good or bad? Is my contact center performing better this month than it was last month?…
Cost per unit is a common metric throughout our economy. Many of you know the cost of a gallon of gas, or the cost for a cup of coffee, or the cost of your monthly cell phone plan. Yet surprisingly, many service and support managers do not know their own Cost per Ticket…
A problem in ITIL is defined as the cause of one or more incidents – there is a cause-and-effect relationship between an incident and a problem. The cause is the problem, and the effect is the incident. If a user reports that they cannot log into an application, this is the incident. When this is reported, the cause of the incident is sometimes not known. If further investigation discovers that the application was inaccessible due to a server overload, that is the problem that caused the incident…
Cost per unit is a common metric throughout our economy. Many of you know the cost of a gallon of gas, or the cost for a cup of coffee, or the cost of a movie ticket. Yet surprisingly, many contact center managers do not know their own Cost per Contact…
The true test of any AI tool for service and support is the following: Without human intervention, will the tool reduce ticket volumes, resolve problems more quickly, decrease total cost of ownership (TCO), and improve the customer experience? If it checks all these boxes – and gets smarter over time – then it’s true AI, powered by machine learning….
We’ve all heard the joke about insanity. It’s defined as doing the same thing over and over again but expecting a different result. In IT support, it’s not hard to find organizations that behave this way. They get stuck in old habits, delivering services the same way they always have, and are surprised that longstanding challenges never seem to get resolved. But the key to getting better results, and the key to resolving challenges, is to change and improve the way you do things…
As a service level metric, same day/next day resolution is increasingly favored over mean time to resolve (MTTR) because it is more intuitive to most people. If I tell you that the MTTR for all field service tickets is 32 hours, that probably doesn’t mean much to you. But if I tell you that 84% of all field service tickets are resolved same day/next day, you have a pretty good idea of how quickly tickets are being resolved…
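The same day/next day rate described above is straightforward to compute from ticket open and close dates. The sketch below is illustrative only: the ticket data is made up, and calendar effects such as weekends and holidays are ignored for simplicity.

```python
from datetime import date, timedelta

def same_next_day_rate(tickets: list[tuple[date, date]]) -> float:
    """Percent of (opened, resolved) tickets closed the same day or the next day."""
    hits = sum(1 for opened, resolved in tickets
               if resolved - opened <= timedelta(days=1))
    return 100.0 * hits / len(tickets)

# Hypothetical field service tickets: (date opened, date resolved)
tickets = [
    (date(2022, 6, 1), date(2022, 6, 1)),  # same day
    (date(2022, 6, 1), date(2022, 6, 2)),  # next day
    (date(2022, 6, 1), date(2022, 6, 6)),  # five days later
    (date(2022, 6, 2), date(2022, 6, 3)),  # next day
]
print(f"{same_next_day_rate(tickets):.0f}%")  # 75%
```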
Poor ticket quality is a serious problem in the service and support industry. Imagine being a desktop tech or a level 3 applications engineer and receiving a ticket in your queue that simply says, “computer broken,” and the ticket is categorized as “other.” While this may seem humorous, it is not uncommon. Moreover, poor ticket quality creates a plethora of related problems…
Ticket backlog is the number of open tickets at a given point in time. It is a general IT support metric that can be measured at any level of support. So, for example, ticket backlog at level 1, desktop support, field services, level 3 IT, and vendor support are all important metrics to track. Additionally, much like a service level metric, there is a time frame associated with most ticket backlog metrics.
For level 1 support, the most common backlog metric is the average ticket backlog as a percent of the average daily ticket volume. So, for example…
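The backlog calculation described above can be sketched in a few lines. The figures used here are hypothetical, not MetricNet benchmarks.

```python
# Average ticket backlog as a percent of average daily ticket volume,
# the most common level 1 backlog metric described above.

def backlog_percent(avg_open_tickets: float, avg_daily_volume: float) -> float:
    """Average open-ticket backlog expressed as a percent of average daily volume."""
    return 100.0 * avg_open_tickets / avg_daily_volume

# A desk averaging 120 open tickets against 400 new tickets per day
print(f"{backlog_percent(120, 400):.1f}%")  # 30.0%
```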
ASA is one of the most widely tracked metrics in the technical support industry. It indicates how responsive a service desk is to incoming calls. Almost everyone who works in service and support can tell you their ASA off the top of their head. Since most service desks have an ASA target, ASA is tracked to ensure service-level compliance.
There is a common perception in service and support that faster (lower) ASAs are better. This would be true if faster ASAs did not cost anything.
Customer effort is both a service desk and a desktop support metric. It measures how easy it is for your customers to do business with you. The metric is captured by surveying your customers and asking them the following question: How easy was it to get the resolution you wanted today? Measured on a scale of 1–7, a customer effort score (CES) of 1 indicates an extremely difficult experience, while a CES of 7 indicates an effortless experience.
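A monthly CES roll-up from survey responses can be sketched as follows, assuming the 1–7 scale described above (1 = extremely difficult, 7 = effortless). The responses are made up for the example.

```python
def average_ces(responses: list[int]) -> float:
    """Average customer effort score on the 1-7 scale described above."""
    if any(r < 1 or r > 7 for r in responses):
        raise ValueError("CES responses must be on a 1-7 scale")
    return sum(responses) / len(responses)

monthly_responses = [7, 6, 5, 7, 4, 6, 7, 3]  # illustrative survey data
print(average_ces(monthly_responses))  # 5.625
```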
Schedule adherence is a service desk metric that measures whether analysts are in their seats ready to accept calls, chats, emails, or web tickets as scheduled. That is, it measures how well a service desk’s analysts are “adhering” to the work schedule. Schedule adherence is equal to the actual time that an analyst is logged in to the ACD or ITSM system ready to accept customer contacts, divided by the total time the analyst is scheduled to be available to accept customer contacts.
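The definition above reduces to a simple ratio. A minimal sketch, with hypothetical minutes in place of real ACD/ITSM data:

```python
def schedule_adherence(logged_in_minutes: float, scheduled_minutes: float) -> float:
    """Time logged in and ready to accept contacts, divided by time scheduled,
    expressed as a percent."""
    return 100.0 * logged_in_minutes / scheduled_minutes

# An analyst scheduled for 420 available minutes, logged in and ready for 390
print(f"{schedule_adherence(390, 420):.1f}%")  # 92.9%
```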
“The best ticket is the ticket that never happens!” I first heard this truism nearly 30 years ago when I got started in this industry. And today, it’s truer than ever! Preventing a ticket altogether is always better than handling a ticket that has been triggered by an incident or a service request. But can we prevent tickets from happening, and is there a way to measure tickets prevented? The answer to both questions is, Yes! So, let’s take a deeper dive into tickets prevented.
An increasing number of progressive IT support organizations recognize that when it comes to performance metrics, driving agent accountability really matters! They have discovered that clear and quantifiable agent performance targets have enormous benefits for both agents and the service desk overall. These include, but are not limited to, greater visibility into and accountability for agent performance, improved ability to coach agents in areas where improvements are needed, and dramatically improved performance for both agents and the service desk overall. In fact, MetricNet’s research shows that establishing a single, unified performance metric for your agents is critical to achieving world-class performance. We call this metric the agent balanced score because it truly does communicate a balanced picture of agent performance.
As many of you know, I’ve been in the IT service and support industry for nearly 30 years. During that time, I’ve had the good fortune to work with more than half of the Global 2000 on countless performance measurement and management initiatives, including benchmarking, metrics maturation, and continual service improvement. We now track more than 70 key performance indicators (KPIs), many of which didn’t even exist when I started in the industry. This is important because as IT service and support evolves, so too should the metrics we measure. For example, the introduction of alternate channels such as chat and AI has led to an entirely new category of metrics that are channel specific. By the same token, it’s important to have a precise, objective definition of CX.
Last month I began this two-part series on Return on Investment (ROI) for service and support. In part 1, I defined how value is created in IT service and support. This month, in part 2, I will go through a case study that calculates the ROI for a particular support organization.
Most IT departments can tell you how much they spend on support. But very few can quantify the economic impact of support. The result is that many IT service and support organizations are on the defensive when it comes to budgeting and spending and often struggle just to get the funding needed to deliver adequate levels of support.
In recent years a handful of pioneering organizations have adopted a different strategy when it comes to support—a strategy that emphasizes value over cost—and they routinely deliver benefits far in excess of their costs. Support groups that understand and quantify their ROI gain a number of important advantages; chief among them is the ability to obtain funding and other resources based upon the economic benefits of the support they deliver.
Most support organizations will tell you that they don’t do as much training as they would like. The most common reasons cited for the shortfall are budget and time constraints. But given the overwhelmingly positive impact that training can have on an organization, it’s no surprise that the staunchest defenders of the training budget tend to have the best performing support organizations.
An abandoned call is one where the caller hangs up before being connected to a live agent in the service desk. Call abandonment rate is the number of abandoned calls divided by all calls offered to the service desk and is one of the most widely tracked metrics in the service desk industry. Virtually every service desk with an ACD has the ability to track this metric.
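The definition above is a straight ratio of abandoned calls to calls offered. A minimal sketch with illustrative call counts:

```python
def call_abandonment_rate(abandoned_calls: int, calls_offered: int) -> float:
    """Abandoned calls divided by all calls offered to the desk, as a percent."""
    return 100.0 * abandoned_calls / calls_offered

# A month with 3,000 calls offered, 180 of which abandoned in queue
print(f"{call_abandonment_rate(180, 3000):.1f}%")  # 6.0%
```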
Ticket handle time is the average time that an agent spends on a service desk ticket, including talk time, chat time, wrap time, and after call or after chat work time (ACW). For non-live tickets, such as email and web submitted tickets, the ticket handle time is the average time that an agent spends working on the ticket before escalating or closing the ticket.
Service desk managers and supervisors frequently ask me about the proper ratio of agents to supervisors. Should it be 5 to 1? 10 to 1? 20 to 1? Like most KPIs, there are tradeoffs involved with this metric. If the ratio is too high, the management span of control is too broad, and agents can be working without the proper level of oversight and supervision. This, in turn, can lead to a multitude of issues ranging from low morale to inadequate training, coaching, and feedback.
Channel mix at level 1 is rapidly evolving (illustrated in the figure below), and is considered one of the industry’s megatrends. In 2007 voice calls represented almost 80% of all ticket volume. Today, voice accounts for just over 50% of incoming ticket volume. There are two key drivers behind this trend. One is economic, and the second is demographic.
User self-service measures the percentage of incidents that are self-resolved by the user, without the assistance of a live agent. Let’s say, for example, that the agents on a particular service desk handle 4,000 incidents per month through voice, chat, and email. Another 1,000 incidents per month are resolved through user self-service (e.g., through a self-help portal that includes a password reset tool). For this hypothetical service desk, the self-service rate is 1,000 self-service incidents ÷ 5,000 total incidents = 20% user self-service.
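The worked example above can be sketched directly in code, using the same hypothetical figures (4,000 agent-handled incidents and 1,000 self-service incidents per month):

```python
def self_service_rate(self_service_incidents: int, agent_handled_incidents: int) -> float:
    """Self-service incidents as a percent of total incidents."""
    total = self_service_incidents + agent_handled_incidents
    return 100.0 * self_service_incidents / total

print(f"{self_service_rate(1000, 4000):.0f}%")  # 20%
```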
There are a number of ways to measure the efficiency of a service desk or desktop support group. Metrics such as cost per ticket and agent utilization are the most common measures of efficiency. A less well-known metric that also drives cost per ticket is the ratio of agents to total headcount, and it applies equally to both service desk and desktop support groups.
If you are like most consumers, you have probably experienced a chat session. Perhaps you engaged in chat with an agent at your bank or insurance company to resolve a payment issue. Or you may have used chat to troubleshoot your new computer or a software application you installed.
Why chat? One reason is that some people simply prefer this channel for service and support. Chat is the channel of choice for a growing number of consumers and businesses, particularly among millennials. The second reason is economics. An effective chat channel can significantly reduce the cost per transaction versus a more traditional live voice support model. Because of this, chat has the potential to both improve customer satisfaction (by giving customers an alternative channel choice) and reduce the cost per ticket.
Some organizations make a distinction between good turnover and bad turnover. Bad turnover is when an agent leaves the company altogether because of performance issues or to pursue other job opportunities. So-called good turnover, by contrast, is when an agent who is otherwise performing well is moved or promoted to a non-customer-facing position in the service desk or accepts another position in the company that is outside of the service desk. Both types of turnover are included in the calculation of annual agent turnover because both types of turnover create a vacancy that must be filled.
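A minimal sketch of the annual agent turnover calculation, counting both "good" and "bad" departures as described above. The formula (total departures divided by average agent headcount) and the figures are an illustrative assumption, not a MetricNet benchmark.

```python
def annual_agent_turnover(total_departures: int, avg_headcount: float) -> float:
    """Annual departures from the service desk (all causes) as a percent
    of average agent headcount."""
    return 100.0 * total_departures / avg_headcount

good_turnover = 4   # promotions/transfers out of the service desk
bad_turnover = 8    # resignations and performance-related departures
print(f"{annual_agent_turnover(good_turnover + bad_turnover, 30):.0f}%")  # 40%
```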
The key to using KPIs diagnostically and prescriptively is to understand their cause-and-effect relationships. You can think of these relationships as a linkage where all of the KPIs are interconnected. When one KPI moves up or down, other KPIs invariably move with it. Understanding this linkage is enormously powerful because it provides insight into the levers you can pull to effect continuous improvement and achieve desired outcomes.
Incident mean time to resolve (MTTR) is a service level metric for both service desk and desktop support that measures the average elapsed time from when an incident is opened until the incident is closed. It is typically measured in business hours, not clock hours. An incident that is reported at 4:00 p.m. on a Friday and closed out at 4:00 p.m. the following Monday, for example, will have a resolution time of eight business hours, not 72 clock hours. Most ITSM systems can easily measure and track MTTR.
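The business-hours calculation behind the article's Friday-to-Monday example can be sketched as below. The 9:00–17:00, Monday–Friday calendar is an assumption consistent with the eight-business-hour result; holidays are ignored for simplicity.

```python
from datetime import datetime, time, timedelta

BUSINESS_START, BUSINESS_END = 9, 17  # assumed 8-hour weekday, 9:00-17:00

def business_hours(opened: datetime, closed: datetime) -> float:
    """Elapsed business hours between two timestamps (weekends excluded)."""
    total = 0.0
    day = opened.date()
    while day <= closed.date():
        if day.weekday() < 5:  # Monday-Friday only
            start = max(opened, datetime.combine(day, time(BUSINESS_START)))
            end = min(closed, datetime.combine(day, time(BUSINESS_END)))
            if end > start:
                total += (end - start).total_seconds() / 3600
        day += timedelta(days=1)
    return total

# The article's example: opened Friday 4:00 p.m., closed Monday 4:00 p.m.
opened = datetime(2022, 3, 4, 16, 0)   # a Friday
closed = datetime(2022, 3, 7, 16, 0)   # the following Monday
print(business_hours(opened, closed))  # 8.0
```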
Net promoter score (NPS) is based on the idea that every organization’s customers can be divided into three categories: Promoters, Passives, and Detractors. By asking one question—How likely is it that you would recommend our service to a friend or colleague?—you can track these groups and get a clear measure of your support organization’s performance from the customer’s perspective.
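On the standard 0–10 likelihood-to-recommend scale, promoters rate 9–10, passives 7–8, and detractors 0–6, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up survey responses:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 8, 7, 6, 10, 9, 3]  # illustrative survey responses
print(net_promoter_score(ratings))  # 25.0
```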
Percent resolved level 1 capable is a desktop support metric. It measures the percentage of tickets resolved by desktop support that could have been resolved by the level 1 service desk. This happens when the service desk dispatches or escalates a ticket to desktop support that could have been resolved by the service desk or when a user bypasses the service desk altogether and goes directly to desktop support for a resolution to their incident. Although the metric is tracked at desktop support, it has strong implications for both desktop support and the service desk.
Today’s service desk technologies and reporting packages make it easy to capture copious amounts of performance data. Most service desk managers can tell you everything from last month’s average speed of answer to yesterday’s average handle time. But what does it all mean? If my abandonment rate goes up, but my cost per ticket goes down, is that good or bad? Is my service desk performing better this month than it was last month?
First level resolution (FLR) is a measure of a service desk’s ability to resolve tickets at Level 1, without having to escalate the ticket to Level 2 (Desktop Support), Level 3 (internal IT professionals in applications, networking, the data center, or elsewhere), field support, or vendor support. FLR is not to be confused with its close cousin, first contact resolution.
Cost per ticket and customer satisfaction are often referred to as the foundation metrics in desktop support. They are the two most important metrics because ultimately everything boils down to cost containment (as measured by cost per ticket) and quality of service (as measured by customer satisfaction).
In any service delivery organization, cost, or more accurately unit cost, is critically important. Cost per ticket is a measure of how efficiently desktop support conducts its business. A higher than average cost per ticket is not necessarily a bad thing, particularly if accompanied by higher than average quality levels and lower mean times to resolve.
Customer satisfaction is top-of-mind for virtually every service organization, and for good reason: it is the single most important measure of quality for a service desk or desktop support group. But what about agent job satisfaction? How important is that, and why don’t more service desks track this metric? It turns out that it’s plenty important, and every support organization should track and trend this metric on an ongoing basis.
One goal of every business is to achieve the highest possible quality at the lowest possible cost. It stands to reason, therefore, that cost and quality should be measured on an ongoing basis. In fact, many would argue that cost and quality are the only two things that really matter in a service desk. In past articles, I’ve discussed the importance of using metrics as a diagnostic tool to improve performance. So, we must ask ourselves, if cost per ticket is one of the foundation metrics for the service desk, how can we affect it? How can we improve it? What are the primary levers we have to manage cost?
Customers tend to be impatient when they want service. It doesn’t matter if they are calling their bank, their cable company, or their service desk. They want a resolution to their problem or an answer to their question right then and there! In fact, research across many different industries bears this out. Customer satisfaction—for virtually any type of customer service—is strongly correlated with FCR!
Customer satisfaction is by far the most common measure of quality. It is widely used, not just in IT service and support, but in all industries. It is so ubiquitous that most of us have probably been surveyed within the last week, by our bank, an airline, our insurance company, a hotel, or some other service provider. The metric is so common, that most have an intuitive feel for customer satisfaction. We know, for example, that a customer satisfaction rating of 70% is probably not very good, while a customer satisfaction score of greater than 90% is very good indeed!
In this Metric of the Month, instead of discussing a single metric, I will explore the cause-and-effect relationships for service desk KPIs. This will give us an overarching framework and roadmap for discussing future KPIs in this column.
Cost per unit is a common metric throughout our economy. Many of you know the cost of a gallon of gas, or the cost for a cup of coffee, or the cost of a movie ticket. Yet surprisingly, many service and support managers do not know their own cost per ticket.
Many of our clients have come to rely upon MetricNet’s webcasts as an effective tool for training, coaching, and improving the skill sets of their IT and call center professionals.
©2022 MetricNet, LLC. All Rights Reserved Worldwide