3 Reasons Why Customer Satisfaction Should Not Replace Quality Assurance
This article originally appeared on the CustomerThink blog as part of my monthly column on June 12, 2018.
Not long ago I saw a demo of a cool product for gathering customer feedback called Stella Connect. They have a fun angle on customer satisfaction where customers are invited to rate their experience with the specific agent who helped them. Customers can then recommend that agents be rewarded. One of our programs at FCR has three reward levels, from a coffee to lunch to a gift certificate, and after a certain number of recommendations, agents earn their reward. It's a great system and, from what I've seen, it drives strong engagement on the team.
During the demo, one of the salespeople made an interesting comment that got me thinking. They were positioning this not as a customer satisfaction tool but as a quality tool that allows customers to rate the quality of the service they received. Stella isn't the only company I've seen play this angle, either. That led me to ask: Can and should customer feedback ever replace our own internal quality assurance efforts?
Let's pause for a moment and acknowledge that the proposition of cutting quality assurance, a resource-intensive process in most contact centers, and letting customers handle it would certainly be attractive to many contact center leaders. For those who aren't familiar, quality assurance is a process where a random set of customer interactions (phone calls, emails, chats, etc.) is reviewed regularly against a predetermined set of criteria. A quality score is a standard metric on most agent scorecards, so agents are held accountable to it. It's an essential process for any company that wants to deliver consistently great customer service.
While I’d argue that customer satisfaction is most certainly a quality metric, it should never replace your quality assurance efforts. Here’s why.
1. CSAT isn’t a representative (enough) sample
Your quality assurance team should be reviewing a truly random, representative sample of your overall support contacts, regardless of whether the customer was satisfied. We typically see CSAT survey response rates somewhere between ten and thirty percent, depending on how well integrated and how complex the survey is. And because you're most likely hearing only from the really happy or really upset customers, you're missing a full cross-section of your client base. Random quality audits don't depend on customer feedback, so they can review a more representative sample.
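To see why that skew matters, here's a minimal, hypothetical sketch in Python (not from the original article). It assumes a made-up population of interactions and made-up response probabilities, and simply compares the CSAT you'd measure from self-selected survey responses against a random, QA-style audit sample.

```python
"""
Illustrative sketch only: all population sizes, score distributions, and
response probabilities below are hypothetical assumptions, not real data.
"""
import random

random.seed(42)

# Hypothetical "true" population of 10,000 support interactions, scored 1-5.
interactions = [random.choices([1, 2, 3, 4, 5], weights=[5, 10, 25, 35, 25])[0]
                for _ in range(10_000)]

def csat_percent(scores):
    """Share of interactions rated 4 or 5 (a common CSAT definition)."""
    return 100 * sum(s >= 4 for s in scores) / len(scores)

# Survey respondents: assume the very unhappy (1) and very happy (5) customers
# are far more likely to answer, giving an overall response rate around 20%.
response_prob = {1: 0.50, 2: 0.15, 3: 0.08, 4: 0.15, 5: 0.40}
survey_sample = [s for s in interactions if random.random() < response_prob[s]]

# QA audit: a purely random 5% sample, independent of how the customer felt.
qa_sample = random.sample(interactions, k=500)

print(f"True CSAT:         {csat_percent(interactions):.1f}%")
print(f"Survey-based CSAT: {csat_percent(survey_sample):.1f}%  "
      f"(n={len(survey_sample)}, self-selected)")
print(f"Random QA sample:  {csat_percent(qa_sample):.1f}%  (n=500, random)")
```

With these assumed numbers, the survey-based figure overstates satisfaction because the extremes are over-represented, while the random audit sample stays close to the true rate.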
Now if you’re looking to augment your quality assurance efforts with AI and/or an analytics solution, that’s a different conversation. This can potentially allow your quality team to review more interactions (up to 100%).
2. CSAT is sometimes measured too early in the process
In a recent quality calibration with a client, we were reviewing a particular interaction where the agent had given an incorrect answer that would likely require the customer to call back. During our discussion, we debated whether the customer would have been satisfied with the support they received. We ultimately agreed that the customer might well have been elated with the agent and the support they received in the moment, but once they realized they had to call back, they would likely have been dissatisfied.
Customers are often presented with a survey not long after their interaction with customer service, and in the case of a wrong answer, might not realize it was wrong until much later. So while a customer might make a terrific connection with the agent serving them, they may have also received poor customer service without knowing it. This is something a quality team would catch that a survey wouldn’t.
3. CSAT can be gamed
In my first exposure to customer satisfaction, our support team had to manually send a survey to each customer because we didn't have a system with that functionality built in. I'm sure you can guess how often my team sent the survey after a potentially bad customer interaction. Nowadays these surveys are automated in most cases. But when agents are incentivized for good marks from customers, they're more likely to beg customers to complete surveys or find other ways to game the system in their favor.
Ideally, incentives around CSAT should compel agents to raise the bar in the service they provide, but there’s a dark side to this if we motivate the wrong behavior. Keep in mind that our goal is honest feedback from customers that can help us improve.
What next?
It’s clear that I’d never completely replace quality assurance with customer satisfaction. Before we wrap up this topic, let’s address two questions I field frequently:
Where does customer feedback fit into our quality assurance efforts?
Customer feedback should absolutely be part of your quality efforts, but as an addition, not a replacement. In the past I've shared the idea of adding customer satisfaction to our quality forms to better align CSAT and quality. This is one easy way to insert CSAT into your quality calibrations and coaching. We should also be regularly reviewing our dissatisfied surveys to spot instances where the service an agent provided caused customer dissatisfaction, and then coaching that agent in much the same way we would after a quality audit.
How do I gauge how satisfied the customer was with the agent that served them regardless of what they think about the company?
While some companies believe agents should be held accountable to a single metric like NPS or CSAT, regardless of issues agents might not have control over, others look for ways to separate customer feedback about agents from feedback about the company. It's possible to scrub this data manually, but that can be exhausting and offer little return. Systems like Chattermill, Clarabridge, Medallia, and others can help immensely.
For those who want to expand their survey a bit and ask customers to rate the agent and the company separately, tools like Stella Connect and others certainly have their merit. When it's clear that the customer is rating the agent specifically, holding agents accountable for that performance becomes more palatable.
While it's nice to think that we could do away with quality assurance and let our customers be our quality department, I think it's a bad idea. There will always be aspects of interactions that require internal subject-matter experts to ensure they're consistently done right. A great idea, however, is to continue exploring ways quality and customer satisfaction can work together to drive a better customer experience. In the case of services like Stella Connect, I think you'll see significant benefits from allowing your customers to give kudos to your agents.