How to develop a successful quality assurance framework


One of the biggest issues for growing customer service teams is the inability or failure to measure and manage the quality of service and conversation that front-line advisors are having with customers.

It is easy to focus on performance metrics, such as call answer times, because they are readily available from systems; but while the quality of conversation is arguably more important, it is not as simple to measure.

Run a quick Google search, and you’ll find thousands of scorecard templates that you could download. One of these may be a good starting point, but it is crucial to ensure that this is adapted to meet your specific needs.

In this blog, I will explain how to develop a quality framework that delivers real improvements to your QA process, your team, your organization and your customers, by building a quality scorecard that measures, manages and supports the delivery of a high standard of service.

The quality of service is of critical importance for contact centers, as it is an accurate indicator of performance relating to the customer. Quality scores and customer perception of service are two of the most widely used metrics for managing performance in successful operations, so it is critical that the measurement itself is of a high quality.

Creating a Quality Framework. Where should you start?

In contact centers, the people who make things happen are Team Leaders. A good Team Leader knows the strengths and weaknesses of each member of their team and can guide and motivate them to deliver their best. They have an organizational responsibility to meet corporate objectives, but they also provide the bridge between the company and individual Advisors. Yet many Team Leaders lack the required tools and processes. Having an insight into, and overview of, the quality of service provided by each member of their team is essential to support the development of agents.

When I get asked, “How do I create a Quality Assurance framework?” by call centers, I usually recommend a six-stage process.

This includes:

  1. Purpose
  2. Channels
  3. Values
  4. Measures
  5. Scoring
  6. Implement
Quality Assurance Framework

Step 1. Understand the purpose

The first area to address when building a scorecard or framework is the purpose of QA: what are we measuring, and why?

It’s often the case that measuring quality is about developing a score to report as a KPI, but quality scorecards can and should deliver so much more than this.

When Quality is used correctly, it can be an effective tool to:

  1. Measure compliance with regulations
  2. Assess adherence to policy
  3. Measure the customer experience
  4. Monitor agents’ performance
  5. Change agents’ behavior

There needs to be clarity about what the scorecard is going to be used for, so you can ensure that the design meets these objectives.

Step 2. Identify the channels

Call Recording Quality Management (CRQM) was originally developed around scoring and reviewing telephone call recordings. In today’s multichannel environments, some agents may not handle telephone calls at all; instead, many deal exclusively with webchat, email, social media or a wide range of other channels.

Customers may never interact with an organization over the phone, but the quality of the engagement still needs to be measured. Different channels have different components, and therefore the measures need to be adapted.

For example:

  1. Live chat involves a written discussion, so correct spelling and grammar are critical measures of quality.
  2. An email is also a written interaction, but it is usually more formal than live chat, so a different evaluation of style is required.

Step 3. Align with company objectives and brand values

The objectives of the contact center and the values of the brand should be incorporated into the design of a scorecard. There are thousands of scorecards available to download, but the type of organization and the value of the services that it provides will impact the points of assessment and the scoring.

For example:

  1. A high-value sales organization would require different scoring criteria to a low-value service.
  2. In Financial Services and other regulated sectors, there is a requirement for particular terminology and wording to be used, but this is not the case in all industries.
  3. Inbound communications will have a different context to outbound where a different level of introduction will be required.

All of these factors make a difference to the scoring criteria and often affect how agents operate and behave.

Step 4. Develop specific measures

The previous three steps highlight some of the factors involved in the design of a scorecard. Using the information gathered so far as a baseline, the next step is to map out what the measurements may be: the specifics that need to be assessed and measured.

These may be varied and cover a wide range of areas, but we advise that the overall composition of the scorecard be simple to use. This could result in the creation of sections to house questions that relate to one another.

For example:

  1. Compliance
  2. Customer experience
  3. Process

Within each of these areas, there may be several specific measures that contribute to a rating for that area.

One area of particular interest is developing measures that follow from the purposes identified in Step 1, which relate to a mixture of mandatory and discretionary components of a contact.

For example, if the contact relates to a regulated area, then it may be compulsory to state a specific phrase. If this is the case, then that should be a specific measure or contribute to a particular compliance measure.
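As a rough illustration, a sectioned scorecard with mandatory and weighted questions could be represented as a simple data structure. This is only a sketch in Python; the section names, question wording, weights and mandatory flags are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    mandatory: bool = False   # e.g. a compulsory regulatory phrase
    weight: float = 1.0       # relative importance when scoring

@dataclass
class Section:
    name: str
    questions: list = field(default_factory=list)

# Hypothetical scorecard grouping related questions into sections
scorecard = [
    Section("Compliance", [
        Question("Stated the required regulatory phrase", mandatory=True, weight=3.0),
    ]),
    Section("Customer experience", [
        Question("Built rapport with the customer"),
        Question("Used an appropriate closure"),
    ]),
    Section("Process", [
        Question("Followed the verification process", mandatory=True, weight=2.0),
    ]),
]
```

Grouping questions into sections like this keeps the scorecard simple to complete while still letting each section roll up into its own rating.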

Top tip: Struggling to get the right balance within your existing quality assurance framework and scorecard? Check out our blog all about building a great scorecard.

Step 5. Identify scoring

Scoring is the most contentious area of scorecard design, as measures can be both objective and subjective.

Where there are specific compliance matters, the outcome is simple: yes or no. However, when the issue is more subjective, such as building rapport, you could use a range of different scoring templates and outcomes, such as a three-, four- or ten-point scale.

  1. Binary (Yes/No): used where a clear compliant/non-compliant outcome is easily measured.
  2. RAG (red, amber, green): used where a measure can be partially met, in addition to yes/no.
  3. Scale (a range such as 1-10): a range of scores enables a more subjective measure to be applied, e.g. 7 = met the requirement but could improve.

For example:

Consider the call closure after a conversation with a disgruntled customer who is closing their account. Asking whether there is anything else you can help them with may not be appropriate, so this element should be removed from the call scoring for this interaction, or the Evaluator should have the option to score the question as N/A (not applicable).

It’s also the case that some areas of scoring are more important than others, so consider weighting questions to reflect this. Compliance is a good example: in a regulated environment it will carry much greater weight than delivering the correct call closure.
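Weighting and N/A handling combine naturally in the final calculation: applicable questions contribute their score multiplied by their weight, and N/A questions drop out of both the numerator and the denominator. Here is a minimal Python sketch; the scores and weights are made-up examples:

```python
def weighted_score(results):
    """results: list of (score, weight) tuples, where score is in the
    range 0-1 and a score of None marks the question as N/A."""
    applicable = [(s, w) for s, w in results if s is not None]
    total_weight = sum(w for _, w in applicable)
    if total_weight == 0:
        return None  # nothing applicable to score
    return sum(s * w for s, w in applicable) / total_weight

# A regulated call: compliance weighted heavily, and the call closure
# marked N/A because the customer was closing their account.
results = [
    (1.0, 3.0),   # compliance phrase stated (weight 3)
    (0.5, 1.0),   # rapport, partially met (weight 1)
    (None, 1.0),  # call closure: N/A, excluded from the total
]
print(weighted_score(results))  # → 0.875
```

Because the N/A question is excluded from the total weight, the agent is neither penalised nor credited for a question that did not apply to the interaction.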

Top tip: Do not overcomplicate a scorecard; it should be easy to complete, review and understand.

Step 6. Implementation

The final stage in developing the scorecard is planning the implementation. Introducing a new quality framework, or any form of assessment, can be a concern for agents, who may be suspicious of ‘being checked up on’. It’s essential to involve agents in the development of the scorecard at all stages and to launch it properly.

A launch should include:

  1. Active involvement from agents in the development
  2. Communication with all stakeholders about the purpose and objectives, including how it will be used
  3. Explanation about the questions and scoring criteria
  4. Briefing about the benefits to the organization, customers and agents
  5. A pilot process to test the scorecard.

When implementing in large teams, or when multiple Evaluators are involved, it is also essential to introduce a calibration process, because one Evaluator may score differently to others. Being able to check this by sharing completed scorecards and ensuring that scoring is aligned is vital to developing a fair and consistent approach.

Calibration discussions also help with the development of Evaluators and Team Leaders. This can be done by asking different members of the QA team to undertake the same assessment, or through group sessions where multiple stakeholders score the same interaction.
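A calibration check can be as simple as comparing the scores several Evaluators give the same interaction and flagging a wide spread. The sketch below assumes a ten-point scale and a spread threshold of two points; the evaluator names, scores and threshold are all illustrative:

```python
from statistics import mean

# Scores given by different Evaluators for the SAME interaction
scores = {"Evaluator A": 8, "Evaluator B": 7, "Evaluator C": 4}

# Spread between the highest and lowest score for this interaction
spread = max(scores.values()) - min(scores.values())

print(f"Average: {mean(scores.values()):.1f}")
if spread > 2:  # threshold is an assumption; tune it to your scale
    print(f"Calibration needed: scores differ by {spread} points")
```

A flagged interaction becomes the agenda for the next group calibration session, where the Evaluators talk through why their scores diverged.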

For more best practice on calibration, check out our blog: What is contact center calibration and why do you need it?

What now?

Building a great scorecard is one part of the puzzle, but it can quickly become just another tick-box exercise with little purpose. Feedback to agents is an essential part of managing and improving all areas of performance. Feedback should be shared with agents in real-time, and regular 1-2-1s should be scheduled so that agents and their line managers have the chance to discuss their overall performance, learning and development.

Having data from regular assessments enables Team Leaders and Managers to build a profile of each individual team member as well as the overall team. Analyzing this information will start to highlight trends which, in turn, enable prompt action to be taken to resolve issues.
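Trend-spotting of this kind amounts to aggregating assessment scores by section across the team. A minimal Python sketch, with hypothetical agents, sections and scores on a ten-point scale (the training threshold of 6 is an assumption):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical assessment records: (agent, section, score out of 10)
assessments = [
    ("Asha", "Compliance", 9), ("Asha", "Process", 5),
    ("Ben",  "Compliance", 8), ("Ben",  "Process", 4),
    ("Cara", "Compliance", 9), ("Cara", "Process", 6),
]

# Group scores by scorecard section across the whole team
by_section = defaultdict(list)
for _, section, score in assessments:
    by_section[section].append(score)

# A low team-wide average in one section suggests team training
for section, section_scores in by_section.items():
    avg = mean(section_scores)
    flag = "  <- consider team training" if avg < 6 else ""
    print(f"{section}: {avg:.1f}{flag}")
```

The same grouping by agent rather than by section builds the individual profiles that feed into 1-2-1s.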

It is interesting to observe how quickly bad practice starts and spreads through a contact center; with ongoing analysis, early intervention is possible. Similarly, if the majority of the team is performing poorly in the same area, additional team training can be delivered to address the issue.

Quality scorecards are an aid to measuring and improving performance. They’re challenging to get right, but by following these steps, you’ll have a much better chance.

I’ll leave you with one last top tip:
Review your scorecard at least every six months. Is it working and meeting your requirements? Has it adapted to your changing business? Is it helping to improve the quality of performance?

By Tom Palmer
Tom is EvaluAgent’s Head of Digital and takes the lead on developing and implementing our digital and content management strategies, creating a compelling, digital-first customer marketing experience.
