The 3 Whys Behind Yes/No Quality Scoring
This article originally appeared on the FCR blog on May 9, 2018.
Without exaggeration, I think I’ve seen quality forms for a hundred different companies in my tenure at FCR and can confidently say that they come in all different shapes, sizes, and flavors. Some of that’s to be expected given the variety of clients we work with. I’ve also sat in seminars about quality assurance and watched one half of the room act as though their quality form was the bomb while the other half seemed perpetually in search of a better way to monitor their quality.
When it comes to scoring methods on quality forms, the variety continues. Some forms feature broad rating scales like 1 to 10 while others use simpler scales like 1 to 3. In some cases the scale rates the degree to which an agent completed a behavior; in others the behavior has multiple parts and agents lose points for each part they fail to complete.
Others, including FCR, adopt a yes/no scoring model where agents either do or don’t complete a desired behavior. Behaviors that are more critical to the success of an interaction, like providing correct information or properly authenticating the customer, might be weighted more heavily in the overall score or result in an automatic failure if missed.
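To make that model concrete, here’s a minimal sketch in Python of how a weighted yes/no score with auto-fail behaviors might be computed. The behavior names, weights, and auto-fail rule are hypothetical illustrations, not our actual quality form.

# A minimal sketch of weighted yes/no scoring. Behavior names,
# weights, and the auto-fail rule are illustrative assumptions.

def score_interaction(results, weights, auto_fail):
    """Return an overall score (0-100) from yes/no behavior results.

    results   : dict mapping behavior name -> True (yes) or False (no)
    weights   : dict mapping behavior name -> relative weight
    auto_fail : set of behavior names that zero the score when missed
    """
    # A missed critical behavior fails the whole interaction.
    if any(not results[b] for b in auto_fail):
        return 0.0
    total = sum(weights.values())
    earned = sum(w for b, w in weights.items() if results[b])
    return round(100 * earned / total, 1)

# Example with hypothetical behaviors: critical items carry more weight.
weights = {
    "authenticated_customer": 3,  # critical
    "correct_information": 3,     # critical
    "used_greeting": 1,
    "confirmed_resolution": 1,
}
auto_fail = {"authenticated_customer", "correct_information"}

results = {
    "authenticated_customer": True,
    "correct_information": True,
    "used_greeting": False,
    "confirmed_resolution": True,
}
print(score_interaction(results, weights, auto_fail))  # 87.5

The point of the sketch is that each behavior is a binary judgment; the only tuning knobs are the weights and which behaviors are critical, which keeps the form itself simple.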
My goal in writing this article isn’t necessarily to convince you that one way is better than another. But I’ve been asked about this enough by my colleagues that I thought it fitting to share our reasoning for using yes/no scoring. Here are 3 reasons.
1. Easier to define
It’s challenging to define a quality customer service interaction, but doing so is essential if you’re going to deliver any level of consistent service to customers, and more essential still if you want agents and supervisors to understand and agree on the standard (this is called calibration). Our standard practice is to create a definitions document that accompanies quality forms so agents can read and understand what’s expected of them when they interact with customers.
With a numerical scale, regardless of what scale you choose, it becomes important to define what a 10 is versus a 9, 8, 7, and so on. This can be exhausting to document and difficult to maintain. With the yes/no model, it’s important to define what a yes is and then understand that anything that doesn’t meet that standard is a no. The goal then becomes to affirm the areas where agents excel and coach the areas where they need improvement.
2. Eliminate score haggling
Inevitably, the more scoring options that are available, the more tempting it is for the reviewer to say, “I gave you this score because…” when in fact it’s the agent who earns the score (I’ve written elsewhere about why I feel strongly about this distinction). It’s easy for the conversation to devolve into a negotiation where supervisors defend their ratings and agents haggle for a better score, essentially debating the degree to which the agent did or didn’t complete the desired behavior. It’s important to score the interaction correctly, and sometimes that means reviewing the interaction together with the agent, but yes/no scoring simplifies the process significantly.
The other temptation in this process is for reviewers to give agents partial credit for partially completing a behavior. While I appreciate the sentiment, especially when an agent is putting forth strong effort and showing improvement, partial credit gets tricky when what was partially complete is the answer given to a customer. At FCR, our quality reviewers are encouraged to think about the impact every interaction has on customer satisfaction. Partially correct answers often require customers to contact support again, and that rarely results in more satisfied customers.
3. Shift focus to coaching and excellence
Your time is better spent coaching agents to deliver consistently excellent customer service. Where scores are present, most agents will look directly at them, and if the score is low, they may not hear the rest of the feedback about their performance. It’s essential that they hear the feedback and put it into practice on future interactions. Some of our supervisors have gone so far as to stop showing agents quality scores altogether so the conversation stays focused on performance and opportunities for improvement.
So why do we track quality scores at all if they’re such a problem? This is a great question. There’s value in being able to track behaviors across our interactions and see where agents are excelling and where they need improvement, and it’s an opportunity for quality and training to work together to identify areas where agents can benefit from additional training. Also, in a regulated environment we simply can’t afford to compromise sensitive customer information, so we have to detect, track, and correct any errors right away.
A simple yes/no scoring method for quality allows us to place our focus on coaching and developing our agents to deliver a consistent, excellent level of service on every customer interaction. Let me know if you have any questions.