I Did Quality Assurance Without Scores and I Liked It
I’ve written and spoken extensively in the past on whether or not quality scores are essential to contact center quality assurance. You can read a whole lot about this topic here and read the opinions of 14 contact center professionals here.
Now for the honest truth about the scoreless QA discussion. While I’ve spent a lot of time working with contact center teams on quality assurance, I myself have not actually done a quality evaluation and coached an agent in over a decade.
In fact, the last time I evaluated an agent, I’m pretty sure I filled out a quality form, calculated the score, and then emailed the feedback to the agent, along with their score, inviting them to let me know if they had any questions. 99 times out of 100 there were no questions. And if you were to ask me if anyone on my team actually improved their performance based on this feedback, my response would be, “I have no clue, but probably not.” I fear that this method caused more harm than good in our contact center.
A quality coaching reboot
I’ve had a lot of time to think about quality over the past few years — and as I’ve re-entered the contact center operations world, I’ve been excited to correct some of my past QA mistakes. With that in mind, our team dreamed up a new set of criteria for evaluating the quality of our customer interactions. And on our quality form, we selected only the behaviors that truly help us achieve our mission as a team and company.
After evaluating a series of interactions, it was time to coach the members of the team. Here are the key activities during each coaching session:
- Discuss what worked – I first talked about everything that was good about the interactions, heaping on praise wherever possible. The positives far outweighed the negatives.
- Discuss areas for improvement – I then shared two or three key behaviors to improve. In this case, these were emails to customers, so we talked about ways to reword, reformat, or add to the message. In many cases, the response was already good and the goal was simply to raise the bar another notch or two.
- Summarize goals for next time – Talking through each interaction, I summarized the two to three most important things each person needed to work on and gained agreement that they would work on them before next time. We wrote these goals down to make sure we’d review them prior to the next coaching session.
There are two overarching keys to remember in these discussions. The first is to continually point back to our mission as a team and talk about how achieving these goals will help us achieve that mission. The second is to communicate a focus on helping each team member continuously improve and grow in their role.
What about scores?
Guess what? I still tracked scores, but I did not, and will not, share them with any of the team members. One note about these scores, however: they are simple averages tracking whether or not the team member performed each required behavior correctly. You won’t find an elaborate system of points or weights on our form.
Here’s how I’m using the scores:
- Tracking an average – I want an overall understanding of the quality average for each individual and for the team as a whole, and I’m looking for this to improve over time.
- Tracking individual behaviors – We looked at the average score for each individual behavior and found that the team could improve most in connecting with the customer and communicating effectively. With that information, we discussed some key ways to improve these behaviors during a team meeting.
By tracking this data, we can determine if the team is improving and ultimately if coaching efforts are successful.
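To make the arithmetic concrete, here’s a minimal sketch of how simple pass/fail checks can roll up into those averages. The agent names, behavior labels, and data are hypothetical, and this isn’t our actual form or tooling — just an illustration of an average with no points or weights.

```python
# Hypothetical evaluations: each records, per behavior, whether the
# team member performed it correctly (True) or not (False).
evaluations = [
    {"agent": "Alex", "scores": {"connect_with_customer": True,  "communicate_effectively": False, "resolve_issue": True}},
    {"agent": "Alex", "scores": {"connect_with_customer": False, "communicate_effectively": True,  "resolve_issue": True}},
    {"agent": "Sam",  "scores": {"connect_with_customer": True,  "communicate_effectively": True,  "resolve_issue": True}},
]

def average(checks):
    """Percentage of checks passed -- no elaborate points or weights."""
    return round(100 * sum(checks) / len(checks), 1)

# Quality average per individual, plus the team average overall.
by_agent = {}
for ev in evaluations:
    by_agent.setdefault(ev["agent"], []).extend(ev["scores"].values())
print({agent: average(checks) for agent, checks in by_agent.items()})
print("team:", average([p for checks in by_agent.values() for p in checks]))

# Average per behavior, to spot where the whole team can improve most.
by_behavior = {}
for ev in evaluations:
    for behavior, passed in ev["scores"].items():
        by_behavior.setdefault(behavior, []).append(passed)
print({behavior: average(checks) for behavior, checks in by_behavior.items()})
```

The point is that the “score” is nothing more than the share of behaviors performed correctly, which makes it easy to track over time without ever putting a number in front of the team.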
How does your team currently use quality scores in your contact center? If you’ve gone scoreless, leave a comment below and share your experience. Having gone from talking about this concept a whole lot to actually doing the work, I’m excited about what the future holds.
Hey Jeremy,
Awesome idea! Different people have different strengths. Scores feel like “picking” and are often perceived as negative. Positive reinforcement works much better for most CSRs. Great quality calls take a little extra time, however. What advice would you give a representative regarding call volume expectations versus quality, particularly using call control to prevent speaking over a voluble caller? How can the two be reconciled?