In the first article in this series, I outlined the 11 most common failure modes for IT outsourcing relationships.  These are summarized below for your reference:

  • The vendor over-promises, and fails to deliver on their commitments
  • The client fails to exercise proper governance over the vendor contract
  • The vendor underprices the contract and fails to earn a profit
  • The contract fails to align vendor with client goals and objectives
  • Vendor reports contain raw data, but rarely include proper diagnosis
  • The client does not understand the metrics included in vendor reports
  • Both client and vendor view the contract as a zero-sum game
  • Vendors spin data and reports to cast themselves in the most favorable light
  • Continuous improvement is ill defined or not included in the contract
  • Vendors experience extremely high turnover on a client project
  • Vendors and/or the client do not adequately train personnel

In this ninth installment of the series, I will address the problem of vendors spinning data for their own benefit.

The Problem of Vendor Data Spin

I was in a client meeting recently where a managed service provider presented a very rosy picture of their performance for the prior month.  At a summary level, here is what the performance report said:

  • Customer Satisfaction = 97.8% (contract target was 94%)
  • First Contact Resolution (FCR) = 96.3% (contract target was 84%)
  • Average Speed of Answer = 29 seconds (contract target was 45 seconds)
  • Monthly Analyst Turnover = 0.3% (contract target was 2.0%)

By any measure, this vendor appeared to be exceeding all of their performance targets by a significant margin! There were many other metrics in the report, but I am showing these particular metrics because there was a debit/credit structure built into the contract for these four metrics, and the vendor was going to receive a 20% monthly bonus for exceeding their targets.
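
For readers unfamiliar with debit/credit structures, here is a minimal sketch of how such a mechanism might work.  The targets and reported values are taken from the summary above; the all-or-nothing 20% bonus logic and the symmetric debit are illustrative assumptions, not the actual contract formula.

```python
# Hypothetical sketch of a debit/credit check against the four bonused metrics.
# Targets and reported values are from the vendor's summary report; the
# all-or-nothing bonus logic and the symmetric debit are illustrative assumptions.

REPORTED = {
    # metric: (reported value, contract target, direction)
    "csat_pct":             (97.8, 94.0, "higher_is_better"),
    "fcr_pct":              (96.3, 84.0, "higher_is_better"),
    "asa_seconds":          (29.0, 45.0, "lower_is_better"),
    "monthly_turnover_pct": (0.3,  2.0,  "lower_is_better"),
}

def meets_target(actual: float, target: float, direction: str) -> bool:
    return actual >= target if direction == "higher_is_better" else actual <= target

def monthly_adjustment(metrics: dict, bonus_rate: float = 0.20) -> float:
    """Return the bonus rate if every metric beats its target, else a debit."""
    if all(meets_target(a, t, d) for a, t, d in metrics.values()):
        return bonus_rate      # vendor earns the 20% monthly bonus
    return -bonus_rate         # illustrative: a missed target becomes a debit

print(monthly_adjustment(REPORTED))  # 0.2, based on the numbers as reported
```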

The performance reported by the vendor seemed suspiciously high to me.  Additionally, my client stated that the monthly reports, which came up almost entirely green every month (think red, yellow, green color scheme), did not comport with the general sentiment among enterprise users that the vendor was not performing well.  This is the so-called “watermelon effect”: reports that are green on the outside, but red on the inside once you open them up.  So, I did a bit of digging.  Here’s what I discovered.

Customer satisfaction was being measured on a scale of 1 – 5, with 1 being very dissatisfied and 5 being extremely satisfied.  This is a very typical scale for measuring CSAT.  However, what was not typical is that the vendor was counting all responses of 3, 4, and 5 as satisfied customers.  The industry as a whole almost always considers only the 4’s and 5’s as satisfied customers.  Once we made this adjustment to the CSAT measurement – counting only the 4’s and 5’s as satisfied customers – the real customer satisfaction score was just 88.2%.  That is still a respectable score, but it is nowhere near the 97.8% reported by the vendor.  Additionally, because they failed to meet the CSAT performance target of 94%, the credit they thought they had coming to them turned into a debit owed to the customer.
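
To make the difference concrete, here is a minimal sketch of the two counting methods.  The response counts are hypothetical, chosen only so that the two methods reproduce the 97.8% and 88.2% figures discussed above.

```python
# Illustrative CSAT calculation on a 1-5 scale.  The response counts are
# hypothetical; only the two counting rules reflect the point being made.
responses = {1: 4, 2: 8, 3: 52, 4: 230, 5: 250}   # score -> number of responses
total = sum(responses.values())

vendor_csat   = sum(n for score, n in responses.items() if score >= 3) / total
industry_csat = sum(n for score, n in responses.items() if score >= 4) / total

print(f"Vendor method (3-5 counted as satisfied):   {vendor_csat:.1%}")    # 97.8%
print(f"Industry method (4-5 counted as satisfied): {industry_csat:.1%}")  # 88.2%
```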

Regarding First Contact Resolution Rate, this metric has always been somewhat subjective because it involves “carveouts”.  A carveout is when certain tickets are excluded from the calculation of FCR because they cannot be resolved on first contact.  For example, if a vendor does not have access rights to a certain application, it would be unfair to penalize them for not resolving incidents related to that application.  Tickets related to the application would therefore be carved out of the denominator when calculating FCR.  When I reviewed the tickets that had been carved out of the FCR calculation, it was obvious that most of them should not have been excluded; in fact, more than 80% of the designated carveouts could have been resolved on first contact.  I recalculated FCR with this correction and came up with a revised number of 84.5%.  That is excellent performance, and it entitled the vendor to a small credit for exceeding the performance target.  But it was nowhere near the 96.3% FCR the vendor had initially reported.
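
Here is a minimal sketch of the recalculation.  The ticket counts are hypothetical, chosen only so the results land near the reported 96.3% and the corrected 84.5%; the audit logic is what matters: tickets that were wrongly carved out go back into the denominator.

```python
# Hypothetical sketch of the FCR audit described above (ticket counts are invented).
resolved_first_contact = 5_000   # tickets closed on first contact
not_resolved           = 192     # tickets not resolved on first contact, never carved out
carved_out             = 900     # tickets the vendor excluded from the calculation
invalid_carveouts      = 725     # carveouts that could have been resolved on first contact

vendor_fcr  = resolved_first_contact / (resolved_first_contact + not_resolved)
audited_fcr = resolved_first_contact / (resolved_first_contact + not_resolved + invalid_carveouts)

print(f"Vendor-reported FCR: {vendor_fcr:.1%}")                                       # ~96.3%
print(f"Audited FCR:         {audited_fcr:.1%}")                                      # ~84.5%
print(f"Share of carveouts that were invalid: {invalid_carveouts / carved_out:.0%}")  # ~81%
```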

The Average Speed of Answer appeared to be accurate, as it came directly from the client’s ACD.  However, the call abandonment rate was quite high: almost 20%!  These two metrics – average speed of answer and call abandonment rate – should always be viewed together, since either metric by itself provides an incomplete measure of vendor responsiveness.  Ideally, the client should have included an abandonment rate target, in addition to an ASA target, in the debit/credit formula.  Had that been the case, the short ASA would have been roughly cancelled out by the high abandonment rate, and the vendor would likely have been debited, rather than credited, for their weak performance.
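
A minimal sketch of why these two metrics must be read together: abandoned calls never enter the ASA average, so a vendor can post a short ASA even while a large share of callers simply give up.  The call records below are hypothetical.

```python
# Hypothetical call records: (seconds waited, whether the call was answered).
calls = [
    (12, True), (25, True), (40, True), (31, True),  # answered quickly
    (95, False),                                     # caller gave up waiting
]

answered_waits = [wait for wait, answered in calls if answered]
abandoned      = [wait for wait, answered in calls if not answered]

asa = sum(answered_waits) / len(answered_waits)   # abandoned calls are excluded
abandonment_rate = len(abandoned) / len(calls)

print(f"ASA: {asa:.0f} seconds, Abandonment rate: {abandonment_rate:.0%}")
# ASA: 27 seconds, Abandonment rate: 20% -- "responsive" by ASA alone, but not overall
```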

Finally, analyst turnover is an important metric because it is correlated with job satisfaction and absenteeism.  High turnover is almost always accompanied by low job satisfaction, and hence low morale in the workplace.  Here again, I dug a bit deeper to understand how monthly turnover was being calculated.  And, once again, the vendor had deviated from the standard definition of analyst turnover: in their calculation of monthly turnover, they had excluded analysts who left during the month due to promotions or job transfers.  But all turnover matters, because all turnover represents the amount of knowledge and expertise that must be replenished as analysts leave for other opportunities, are terminated for cause, are promoted, or are transferred.  It turns out that the real turnover, from all causes, was 4.2% for the month.  When compounded over 12 months, the annual turnover was an unacceptably high 64%.  As with CSAT, what the vendor thought was performance worthy of a credit for the month turned into a debit due to underperformance.
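
For reference, the compounding arithmetic behind the annualized figure quoted above is simply:

```python
# 4.2% monthly turnover compounded over 12 months.
monthly_turnover = 0.042
annual_turnover = (1 + monthly_turnover) ** 12 - 1
print(f"{annual_turnover:.0%}")   # ~64%
```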

These four examples are just the tip of the iceberg when it comes to data spinning.  Left unchecked, vendors always report performance levels that break in their favor.  They take liberties with the strict, industry-accepted definitions of metrics, and they engage in other tactics that are designed to cast their performance in the most favorable light.

The Antidote to Vendor Data Spin

Your best defense against the pervasive practice of vendor data spin is to specify very clearly in your contract how each metric is defined and how it will be measured.  For example, in the aforementioned First Contact Resolution example, it is not good enough to simply specify a performance target for FCR; the contract must explicitly define the metric and spell out exactly what will and will not be accepted as a carveout.  You don’t want to be haggling over the definition of the metric and the acceptable carveouts after the contract goes live.  Likewise for the measurement of Customer Satisfaction: it is best practice to measure this on a scale of 1 – 5, with only the 4’s and 5’s counting as satisfied or very satisfied customers.  This should be spelled out in the contract.
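
One practical way to remove this ambiguity is to capture each metric’s definition in an explicit, structured form as an exhibit to the contract.  The sketch below is purely illustrative; the field names and the specific carveout list are assumptions, not language from any actual agreement.

```python
# Hypothetical, machine-readable metric definitions: scale, counting rules, and
# allowable carveouts are stated explicitly so they are not open to interpretation.
METRIC_DEFINITIONS = {
    "customer_satisfaction": {
        "scale": "1-5",
        "satisfied_scores": [4, 5],   # only 4's and 5's count as satisfied
        "target_pct": 94.0,
    },
    "first_contact_resolution": {
        "allowed_carveouts": [
            "no_access_rights_to_application",
            # anything not explicitly listed here stays in the denominator
        ],
        "target_pct": 84.0,
    },
}
```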

Secondly, educate yourself about metrics and benchmarks, and hold your vendors accountable.  The topic of vendor accountability was discussed at length in the governance segment of this series.  But above all, ask questions!  Do not simply accept vendor reports at face value, because they are almost always misleading.  Ask questions until you are satisfied that you have gotten clear and honest answers.  And when necessary, require that your vendor revise their reports to reflect the industry-standard definitions of the KPIs and/or the KPI definitions specified in your contract with the vendor.

Jeffrey Rumburg

Jeff Rumburg is a co-founder and Managing Partner of MetricNet, where he is responsible for global strategy, product development, and financial operations for the company. As a leading expert in benchmarking and re-engineering, Mr. Rumburg authored a best-selling book on benchmarking, and has been retained as a benchmarking expert by such well-known companies as American Express, Hewlett-Packard, General Motors, IBM, and Sony. In 2014, Mr. Rumburg received the Ron Muns Lifetime Achievement Award for his contributions to the IT Service and Support industry. Prior to co-founding MetricNet, Mr. Rumburg was president and founder of The Verity Group, an international management consulting firm specializing in IT benchmarking. While at Verity, Mr. Rumburg launched a number of syndicated benchmarking services that provided low-cost benchmarks to more than 1,000 corporations worldwide. Mr. Rumburg has also held a number of executive positions at META Group and Gartner. As a vice president at Gartner, Mr. Rumburg led a project team that reengineered Gartner’s global benchmarking product suite. And as vice president at META Group, Mr. Rumburg’s career was focused on business and product development for IT benchmarking. Mr. Rumburg’s education includes an M.B.A. from the Harvard Business School, an M.S. magna cum laude in Operations Research from Stanford University, and a B.S. magna cum laude in Mechanical Engineering. He is author of A Hands-On Guide to Competitive Benchmarking: The Path to Continuous Quality and Productivity Improvement, and has taught graduate-level engineering and business courses.
