
Incident mean time to resolve (MTTR) is a service level metric for both service desk and desktop support that measures the average elapsed time from when an incident is opened until the incident is closed. It is typically measured in business hours, not clock hours. An incident that is reported at 4:00 p.m. on a Friday and closed out at 4:00 p.m. the following Monday, for example, will have a resolution time of eight business hours, not 72 clock hours. Most ITSM systems can easily measure and track MTTR.
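
As a concrete illustration, the minimal Python sketch below computes the business-hours resolution time for the Friday-to-Monday example above. It assumes an 8-hour business day from 9:00 a.m. to 5:00 p.m., Monday through Friday; the function name and schedule are illustrative assumptions, not part of any particular ITSM tool. MTTR itself is then simply the average of this resolution time across all incidents closed in the reporting period.

```python
from datetime import datetime, timedelta

# Illustrative business calendar: 9:00-17:00, Monday-Friday (8 business hours/day).
BUSINESS_START = 9
BUSINESS_END = 17

def business_hours(opened: datetime, closed: datetime) -> float:
    """Count elapsed business hours between open and close, in one-minute steps."""
    total = 0.0
    current = opened
    while current < closed:
        # Only count minutes that fall on a weekday inside business hours.
        if current.weekday() < 5 and BUSINESS_START <= current.hour < BUSINESS_END:
            total += 1 / 60
        current += timedelta(minutes=1)
    return total

# Incident reported 4:00 p.m. Friday, closed 4:00 p.m. the following Monday.
opened = datetime(2020, 1, 3, 16, 0)   # Friday
closed = datetime(2020, 1, 6, 16, 0)   # Monday
print(round(business_hours(opened, closed), 1))  # 8.0 business hours, not 72 clock hours
```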

My examples in this Metric of the Month will focus on desktop support, but the MTTR metric is equally applicable to the service desk. Please note that I make a distinction between incidents and service requests. A desktop incident is typically unplanned work that requires the assistance of an on-site technician to resolve. Common examples include break/fix requests for a laptop computer, a printer or server failure, connectivity problems, or other issues that cannot be resolved remotely by the level 1 service desk. By contrast, most desktop service requests represent planned work. Among the most common desktop service requests are move/add/changes, hardware refresh/replacement, and device upgrades. MTTR as discussed in this article refers specifically to incidents, not service requests.

Why It’s Important

As you know from prior Metric of the Month articles, service levels at level 1, such as average speed of answer and call abandonment rate, are relatively unimportant: they have little influence on customer satisfaction. The same, however, cannot be said of service levels for desktop support. In fact, MTTR is one of the key drivers of customer satisfaction for desktop support. This makes sense, as a user may be completely down or forced to rely on workarounds until their incident is resolved, which in turn has a significant impact on their overall satisfaction with desktop support.

The figure below shows the relationship between customer satisfaction and incident MTTR for a representative cross-section of global desktop support groups. The strong correlation between MTTR and customer satisfaction is readily apparent.

Figure: Customer satisfaction vs. incident MTTR

Because customer satisfaction is driven by MTTR, many desktop support organizations take steps to actively manage this metric. Although user population density, and hence the travel time per incident, cannot be controlled, other factors affecting MTTR can be managed. These include maximizing the first visit resolution rate (the desktop counterpart of first contact resolution at level 1) and routing desktop technicians in real time. The latter technique allows an organization to manage the incident queue effectively by dispatching and assigning technicians based on the proximity, urgency, and geographic clustering of incidents rather than on a first-in, first-out (FIFO) basis, as is common in the industry, and it has been shown to significantly reduce MTTR for desktop support incidents. A simple illustration of scored dispatch versus FIFO appears below.
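
To make the contrast with FIFO concrete, this sketch scores queued incidents by urgency, technician travel distance, and time in queue, then works the queue in score order rather than arrival order. The scoring weights, field names, and ticket IDs are illustrative assumptions, not MetricNet's actual routing method.

```python
import heapq

def dispatch_score(urgency: int, distance_km: float, wait_hours: float) -> float:
    # Lower is better: weight urgency most heavily, penalize travel distance,
    # and give waiting incidents a gradual boost so nothing starves.
    # The weights are illustrative assumptions only.
    return -2.0 * urgency + 0.5 * distance_km - 0.25 * wait_hours

incidents = [
    # (ticket_id, urgency 1-3, km to nearest technician, hours in queue)
    ("INC-101", 3, 1.2, 0.5),
    ("INC-102", 1, 0.3, 2.0),
    ("INC-103", 2, 5.0, 0.1),
]

# FIFO would work the list in arrival order; scored dispatch works it by priority.
queue = [(dispatch_score(u, d, w), tid) for tid, u, d, w in incidents]
heapq.heapify(queue)

while queue:
    score, ticket_id = heapq.heappop(queue)
    print(f"dispatch {ticket_id} (score {score:.2f})")
```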

Benchmark Data for Incident MTTR


Jeffrey Rumburg

Jeff Rumburg is a co-founder and Managing Partner of MetricNet, where he is responsible for global strategy, product development, and financial operations for the company. A leading expert in benchmarking and re-engineering, Mr. Rumburg authored a best-selling book on benchmarking and has been retained as a benchmarking expert by such well-known companies as American Express, Hewlett-Packard, General Motors, IBM, and Sony. In 2014, he received the Ron Muns Lifetime Achievement Award for his contributions to the IT service and support industry. Prior to co-founding MetricNet, Mr. Rumburg was president and founder of The Verity Group, an international management consulting firm specializing in IT benchmarking. While at Verity, he launched a number of syndicated benchmarking services that provided low-cost benchmarks to more than 1,000 corporations worldwide. Mr. Rumburg has also held executive positions at META Group and Gartner. As a vice president at Gartner, he led a project team that re-engineered Gartner's global benchmarking product suite, and as a vice president at META Group, he focused on business and product development for IT benchmarking. Mr. Rumburg's education includes an M.B.A. from the Harvard Business School, an M.S. magna cum laude in Operations Research from Stanford University, and a B.S. magna cum laude in Mechanical Engineering. He is the author of A Hands-On Guide to Competitive Benchmarking: The Path to Continuous Quality and Productivity Improvement, and has taught graduate-level engineering and business courses.
