The Ultimate Guide to QA Scoring


Many people regard scoring as the most difficult part of the Quality Assurance (QA) process. In this blog we look at scoring in more detail and discuss why we score, the different types of quality scoring, how to score, and how to keep scoring consistent and fair.

If you’re just beginning with QA, or want to have a refresher, check out our Getting Started With QA Guide.

Why Do We Need To Score?

There are many reasons for QA scoring, but the three most common are to provide quality assurance, give feedback to Agents and improve performance.

Scoring an individual interaction is interesting, but it can also be misleading. It’s only when multiple scores are aggregated and analysed that a real picture emerges.

This is really powerful when presenting trends over a period of time as these can be used to highlight development needs and in time, demonstrate improvement.

By breaking an interaction into components and scoring each part, the output is a detailed analysis of that interaction, which can then be used to identify strong and weak areas. The more detail, the more accurate the score will be.
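The aggregation described above can be sketched in a few lines. This is a minimal illustration only; the week labels and scores are made-up sample data, not figures from any real contact centre.

```python
# Illustrative sketch: rolling individual QA scores up into a weekly trend.
from collections import defaultdict
from statistics import mean

# (period, score) pairs from individual evaluations -- sample data only.
evaluations = [
    ("2024-W01", 62), ("2024-W01", 70),
    ("2024-W02", 74), ("2024-W02", 78),
]

by_week = defaultdict(list)
for week, score in evaluations:
    by_week[week].append(score)

# Average per period; an upward trend across periods suggests improvement.
trend = {week: mean(scores) for week, scores in sorted(by_week.items())}
```

The same grouping works for any period (day, week, month) or for grouping by Agent or by question, which is how the strong and weak areas mentioned above are surfaced.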

One word of warning about QA scoring. Some people try to benchmark against others. This will only work if there are identical questions used in scoring and the service being delivered is comparable. If they’re not, then it’s not a valid benchmark. It’s much better to focus on trends internally and look for continuous improvement.

Top tip: Focus on improving the overall trend of QA scores for continuous improvement.

Different Types Of Quality Scoring

There are many different approaches to QA scoring, and this is often the biggest difficulty for evaluators. How do you ensure that the score reflects the quality of the interaction? This depends heavily on the quality scorecard or scoring mechanism developed.

The starting point is to ensure that the question, or scoring criterion, reflects what’s important. Assuming this is correct, the next stage is to look at the relative importance of each score compared with the others. For example, some compliance issues may be compulsory, and therefore more important than introducing yourself by name. This is handled by giving each question a weighting: using the previous example, the compliance matter may be weighted 10 while the use of the name may be weighted 5.
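The weighting idea can be shown with a short sketch. The question names and weights below are illustrative only, reusing the 10-vs-5 example from the paragraph above.

```python
# A minimal sketch of question weighting on a yes/no scorecard.
# Question names and weights are illustrative, not from a real scorecard.
questions = {
    "compliance_statement": {"weight": 10, "passed": True},
    "used_customer_name":   {"weight": 5,  "passed": False},
}

# Points earned vs points available, weighted per question.
earned = sum(q["weight"] for q in questions.values() if q["passed"])
possible = sum(q["weight"] for q in questions.values())
score_pct = 100 * earned / possible  # 10 of 15 points
```

Because the compliance question carries twice the weight, missing it would cost twice as much as missing the customer’s name, which is exactly the relative importance the weighting is meant to express.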

There are many different types of quality scoring used, we’re going to discuss three of them:


Binary

This is quite simply yes or no.

Using the compliance question from previously, was the compliance statement made correctly and in full? Yes or no. 

This is good when looking objectively at whether the compliance statement was delivered but it does not enable assessment of the way in which it was delivered.

Traffic Lights

Traffic light (RAG) scoring adds another option between yes and no: a partial yes, or a ‘yes, but not in full’.

Scoring of this kind is often used to decide whether a call has passed or failed its quality check. Care needs to be taken with traffic light scoring, as the amber stage is very variable.

Numerical Score

This is typically a range e.g. 0 – 10 where the evaluator determines the score for that interaction as a position on the scale.

This can be the most difficult for evaluators as it’s often regarded as a subjective score. The way to make it more objective is to provide details of what is expected from each score on the scale with examples being used to show how scores should be allocated.

This is the most useful way to determine accurate changes in performance as it provides the level of detail required to highlight the difference in quality between different interactions.

Strengths & Weaknesses Of Scoring

The different types of scoring all have strengths and weaknesses, which largely depend upon the level of objectivity or subjectivity being measured. The following scale highlights where each of the above fits.

The following table summarises the types of scoring to be used.

|           | Binary | Traffic Light | Numerical |
|-----------|--------|---------------|-----------|
| Score     | Yes/No | Red/Amber/Green | 0 – 10 |
| Measure   | Objective | Objective | Subjective |
| Uses      | Compliance | Compliance with variable | Variable scoring where difference can be identified |
| Strengths | Easy to score | Provides an option for ‘some’ or ‘partial’ | Enables a detailed score that differentiates performance |
| Weakness  | Can only be yes or no | Amber can be very variable | Difficult to clearly identify score |
| Notes     | Limited opportunity to track performance in detail | Does not provide clarity of performance | Needs a definition for each level |

One of the most successful methods is to use a mixture of scoring types on a single scorecard. If you have a range of scores between 0 and 10, you do not have to offer every score for every question. Using the compliance question as an example, the question may be:

Was the Compliance statement read out in full?

The choice of answers may be 0 or 10, as there is no point in between.

However, most areas allow greater variance. Looking at the compliance question again:

Was the compliance statement delivered in a way that the customer understood it?

This question allows a more variable score between no and yes.

This shows the difference in scoring and how the different types can be included in scorecards. Of course, the weighting discussed earlier will have an impact upon any final score.
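A mixed scorecard like the one above can be sketched by restricting the allowed answers per question. The question wording and answer values below are illustrative, echoing the two compliance questions just discussed.

```python
# Hypothetical mixed scorecard: some questions are all-or-nothing (0 or 10),
# others accept any value on the 0-10 scale. Values are illustrative.
def validate(answer, allowed):
    """Reject answers outside the set permitted for this question."""
    if answer not in allowed:
        raise ValueError(f"{answer} not in {sorted(allowed)}")
    return answer

# "Was the compliance statement read out in full?" -- only 0 or 10.
binary_q = validate(10, {0, 10})
# "Was it delivered in a way the customer understood?" -- any of 0..10.
graded_q = validate(7, set(range(11)))

total = binary_q + graded_q  # out of a possible 20, before any weighting
```

Restricting the answer set keeps the all-or-nothing questions objective while still allowing graded judgement where the quality genuinely varies.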

Finally, many organisations make the mistake of requiring a minimum score for an interaction to pass. This is fine until a particular scoring area becomes inappropriate.

For example, it is not always appropriate to ask a very angry customer if there is anything else you can help them with. In this case the question should be removed from the scoring without penalising the Agent.
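The key point is that a removed question must leave both the points earned and the points available, so the Agent’s percentage is unaffected. A small sketch, with illustrative question names and a 10-point scale per question:

```python
# Sketch of excluding a not-applicable question so the Agent isn't penalised.
# Each question is out of 10; None marks a question removed from scoring.
answers = {
    "compliance_statement": 10,
    "resolved_query": 8,
    "offered_further_help": None,  # inappropriate here: very angry customer
}

# Drop N/A questions from both numerator and denominator.
scored = {q: s for q, s in answers.items() if s is not None}
pct = 100 * sum(scored.values()) / (10 * len(scored))
# 18 of 20 applicable points, rather than 18 of 30 with the question counted
```

Had the N/A question been scored as zero instead, the same call would have dropped from 90% to 60% through no fault of the Agent.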

There are many example scorecards that can be downloaded to get you started, but it is recommended that these are adapted for each contact centre’s own use, to reflect the specifics of the organisation and the services it provides.

We’ve created three scorecard templates specifically for customer support teams.

Other Scoring Mechanisms

There are a huge number of variations on the above scoring types and how they can be used. Outputs are reported as a number, a percentage or, for some, just a number within a range.

It’s important to retain the same method in order to enable trend analysis and track performance. Continually changing the scores and the way in which they are calculated will only lead to confusion.

Evaluators often look for new ways of scoring as they seek to improve how they work. For example, some scoring mechanisms start with a perfect score and then deduct points for areas that are missed or not applied correctly. This can feel negative; a positive approach that awards points for including the correct ingredients/competences in a call is a better alternative.
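The two framings arrive at the same number; only the message to the Agent differs. A toy illustration, with made-up check names and point values:

```python
# Toy comparison of negative (deduction) vs positive (additive) scoring.
# The checks and point values are illustrative only.
checks = {"greeting": True, "compliance": True, "summary": False}
points_each = 10

# Negative framing: start from a perfect score and deduct for misses.
perfect = points_each * len(checks)
negative = perfect - sum(points_each for ok in checks.values() if not ok)

# Positive framing: award points for each ingredient delivered correctly.
positive = sum(points_each for ok in checks.values() if ok)
# Both framings give the same total; only the feedback framing differs.
```

Since the arithmetic is identical, the choice is purely about motivation: “you earned 20 of 30 points” lands better in feedback than “you lost 10 points”.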

The need to use scores to provide feedback to Agents is a consideration and especially when looking to motivate and generate improvement.

How To Score

Most evaluators find scoring difficult when they start.

Having clear scoring criteria, as discussed previously, helps, but for many it is about gaining experience and confidence.

Some useful tips are:

How to ensure consistency and fairness in scoring

Given that scoring inevitably mixes the objective and the subjective, it is important to ensure that it is fair and consistent. This can be checked by working with other people to review samples, holding calibration sessions with Evaluators and Agents to ensure that everyone scores to the same levels, and observing others while they score.

Consistency is important and essential when providing feedback to Agents.

In Summary…

There are numerous reasons for scoring the quality of customer interactions, but the most common are to provide quality assurance, give feedback to Agents and improve performance.

There are also different types of scoring but these need to be based on a set of evaluation criteria that are aligned to the organisation and the service it delivers.

Initially, scoring is not easy but it does get much easier with experience and confidence. Having a good understanding of the ingredients that are being measured really helps.  

A quality score is a great metric, yes, but QA can go way beyond simply presenting a score to your business. Click through to find out how scoring fits into the rest of the QA process.

Top Tip: Don’t be afraid to go with your instinct – if it sounds good from the customer’s perspective and meets the compliance requirements, then it probably is good.

For more resources on Quality Assurance, employee engagement, coaching and feedback and other CX topics, head over to the Knowledge Hub page.

By Tom Palmer
Tom is EvaluAgent’s Head of Digital and takes the lead on developing and implementing our digital and content management strategies, resulting in a compelling, digital-first customer marketing experience.
