
Five Tips For Creating a Next-Level QA Scorecard


Your QA scorecard is one of the most important elements of the QA process.

If you design it well, your evaluators will be able to easily determine whether your agents are providing great customer service, adhering to key industry compliance regulations, and meeting the needs of your wider business.

It’s not easy to create a form that applies quantitative metrics to complex human interactions consistently, whilst balancing the needs of your business with the needs of your customer – but it’s not impossible either.

If you invest some time and forethought into your QA scorecard creation, you’ll end up with a tool that generates valuable data about overall business performance whilst assessing agent performance effectively.

Here are five tips to get you started. 

1. Find a Balance Between Business Requirements and Customer Needs

Whilst all organisations have different needs (and therefore different QA metrics to measure), there are three broad categories these tend to fall under: 

  1. Adherence to business requirements and internal practices
  2. Adherence to industry compliance regulations (for example GDPR)
  3. Quality of interactions with customers

Contact centres often place too much emphasis on scorecard criteria tied to internal processes, rather than criteria that make a tangible difference to customer experience. 

For example, an agent might be able to resolve a customer’s issue effectively, but if they fail to adhere to specific script requirements outlined by the scorecard, this can constitute a ‘fail’ – regardless of how well the agent and the customer felt the interaction went. 

There are situations where exact wording is necessary – for compliance or data protection reasons, for example – but otherwise give your agents flexibility in how they respond to customers. This leads to natural, human and empathetic interactions rather than stilted, script-based phone calls that don’t connect on a personal level. 

Avoid this trap by placing customer centricity at the heart of your QA scorecard. 

To provide a great experience to customers, and to reap the rewards of increased brand loyalty and customer lifetime value that come with this, your business needs to be customer obsessed. 

This means putting your customers’ stated and proven needs at the heart of every process you have – and QA is no different. When creating your QA scorecard, dive deep into the data you have about what your customers want and expect from your brand. 

Is it friendliness? Is it efficiency? Do they want a formal or informal experience? Explore the sources of customer feedback available to you to see if any trends appear. You could use: 

  1. CSAT or NPS scores directly linked to interactions
  2. General trends in customer metrics – was there a general upward trend in CSAT scores after you reconfigured your script to sound more approachable, for example?
  3. Reviews on public sites like TrustPilot and FeeFo. 

For extra insight, you could listen back to successful interactions to identify any common patterns or themes. Reviewing a percentage of the top-ranked calls by CSAT is a great way to identify standout call openings, for example, or to pinpoint where successful agents provide the right information to reduce callbacks.
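If your QA platform can export interaction data, pulling out that top slice of calls is easy to automate. The sketch below is a minimal example in Python using pandas; the file name and the call_id, agent_id and csat_score columns are assumptions about what your export might contain, so adjust them to match your own data:

```python
import pandas as pd

# Hypothetical export of evaluated interactions; adjust names to your data.
calls = pd.read_csv("interactions.csv")  # columns: call_id, agent_id, csat_score

# Keep the top 10% of calls ranked by CSAT for manual review – these are the
# interactions most likely to reveal standout openings and resolution patterns.
sample_size = max(1, int(len(calls) * 0.10))
top_calls = calls.nlargest(sample_size, "csat_score")

print(top_calls[["call_id", "agent_id", "csat_score"]])
```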

2. Design Your Scorecard to Maximise Evaluator Efficiency

Many quality teams spend a considerable amount of time conducting audits and observing customer interactions. To help them do their job to the best of their ability, structure your QA scorecard to be as easy to use as possible.

We’ve seen scorecards with over 50 questions, all of which require some sort of input. Your evaluators can’t fill these in with any degree of accuracy – it’s too much to concentrate on. 

Instead, start simple. Mapping out which criteria relate to which part of an interaction and structuring your form accordingly can double the efficiency of your QA teams. 

This helps your QA teams work consistently and efficiently across each month, rather than cramming evaluations in to meet monthly targets. Without having to grapple with cumbersome QA scorecards, your team’s time is freed up to focus on the trends, insights and improvement areas suggested by the data they gather. 
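As a simple illustration of structuring the form around the flow of a call, here is a minimal sketch in Python. The stage names and criteria are assumptions for the sake of the example, not a recommended set:

```python
# Illustrative scorecard grouped by interaction stage; the stage names and
# criteria here are assumptions, not a prescribed set.
scorecard = {
    "Opening": [
        "Verified the caller's identity",
        "Confirmed the reason for the call",
    ],
    "Resolution": [
        "Offered a solution that addressed the stated issue",
        "Set clear expectations for next steps",
    ],
    "Closing": [
        "Summarised the outcome",
        "Asked whether anything else was needed",
    ],
}

# Render the form in the order an evaluator hears the call.
for stage, criteria in scorecard.items():
    print(stage)
    for criterion in criteria:
        print(f"  [ ] {criterion}")
```

Grouping criteria this way means evaluators can score each section as they listen, rather than hunting back and forth through a flat list of 50 questions.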

3. Be Specific With the Language You Use and Make Guidelines Accessible 

If you want to maximise the value QA offers your business, your evaluators need to have a shared understanding of what constitutes a successful interaction and what constitutes a poor or failed one.

One half of this equation is to run open, collaborative calibration sessions. The other half is to use precise, objective language within your QA guidelines so that your evaluators are all on the same page, right from the get-go.

Loose terms like ‘good’ and ‘bad’ when scoring performance lead to inaccurate, unusable data, as most people define these terms differently.

Try using language that allows the evaluator to quantify interactions more easily. By ‘good’, do you mean ‘approachable’, ‘friendly’ or ‘efficient’, for example? Or is that ‘good/bad’ binary box on your scorecard really a case of indicating whether they took a particular action or not?

Being specific here will reduce effort in calibration sessions later and ensure that the feedback you provide to agents is useful, precise and actionable.

Read more about how to create effective QA guidelines for your evaluators

4. Allow Your Agents Some Autonomy

Compliance is almost always part of the quality process. Your agents may need to follow stringent processes to make sure you’re following key industry regulations. These could include scripted segments your agents need to read out to customers using very specific wording, for legal reasons. 

It’s important to make sure your agents do this – your customers and your business could be at risk if they don’t. 

Your agents will also need to stick to important internal processes as they interact with customers. However, unless it is an absolute requirement, you should avoid formatting these into a specific script to follow because it sounds clunky and doesn’t allow your agents to tailor their approach. 

Allowing your agents some autonomy here will create a more natural interaction and a better overall customer experience. Avoid specifying set phrases in your evaluation form wherever possible. 

5. Conduct a Field Test

Remember: test, measure, tweak, repeat. 

Once you’ve built your QA scorecard, give it a test run – and don’t be afraid to make changes if it isn’t working as well as it could.

Ask all key stakeholders for input here. Obviously senior management will get a say, but involve frontline agents and team managers as well – they often offer practical, on-the-ground insight that others may miss. 

Agents involved in creating your scorecards are also more likely to be open to working with QA reports because they feel the process has been designed with their needs in mind.

Refining your QA process should be an ongoing effort. Unfortunately, you’re not going to make it the best it can be with just one quick round of consultations. Regularly compare QA scores with key metrics like CSAT or NPS to ensure your QA process reflects success with customers, and keep an eye on QA score trends in relation to wider success metrics too.

If your QA scores are trending upwards and your customer retention isn’t – or vice versa – something needs to change.
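If you can export evaluation results alongside survey scores, this check is straightforward to automate. The sketch below is a rough example in Python using pandas; the file name and the evaluated_at, qa_score and csat_score columns are assumptions about what your export might look like:

```python
import pandas as pd

# Rough sketch: the file name and column names below are assumptions about
# what your evaluation export might contain.
df = pd.read_csv("evaluations.csv", parse_dates=["evaluated_at"])

# Average QA score and CSAT per month, side by side.
monthly = (
    df.groupby(df["evaluated_at"].dt.to_period("M"))[["qa_score", "csat_score"]]
    .mean()
)
print(monthly)

# If QA scores climb while CSAT stays flat or falls, the scorecard may be
# rewarding behaviours customers don't actually value.
print("Correlation:", round(monthly["qa_score"].corr(monthly["csat_score"]), 2))
```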

For more QA insight and practical tips, download our complete guide to contact centre QA 

By Chris Mounce
Chris is EvaluAgent’s Digital Training and Enablement Specialist and an award-winning performance coach. He takes the lead on developing innovative training solutions relating to our product to deliver maximum value.
