If you’re considering implementing a Quality Assurance programme or rubric, and are unsure exactly what that should look like in practice, here’s our guide on how to write great QA guidelines.
In this blog we're going to cover:
If you're just beginning with QA, or want to have a refresher, check out our Getting Started With QA Guide.
Quality Assurance processes and guidelines need, above all, to be clear. Without clarity, you’ll be building a set of data that is incomplete, biased, and inaccurate.
In striving for clarity, you are able to objectively codify what good support looks like from the inside of your organisation. Knowing and understanding how you want your customers to relate to the business is a great starting point for building your QA rubric. If you can define your service ideals, such as ‘accountability’, ‘empathy’, ‘accuracy’ and ‘responsiveness’, then these should be among the core tenets you attempt to capture and measure in your guidelines.
You can align your internal view of support, and therefore your QA rubric, with other known metrics such as Customer Satisfaction and NPS. Clear guidelines that align closely with other business metrics make their influence simple to track. QA guidelines that are closely aligned with the wider customer experience also build greater opportunities for clarity across business units, facilitating conversations that focus on the success of the customer rather than any single metric.
Within the Customer Support team, there are equally important reasons that clarity reigns supreme.
Perhaps most importantly, a well-defined QA programme is simple to score. It doesn’t impede the progress of your service provision. It’s easy to measure objectively, allowing the process itself to be easy to learn and train for. And, ultimately, it provides a valuable and unambiguous data set that is easy to map, find trends and improve on.
How does this work in practice, though? Look no further! We have the guidelines for your guidelines!
Ultimately, this comes down to a specific set of key features on which you should endeavour to model each item in your rubric.
Each guideline should:
Unless something is specific, it cannot be measured, and if it cannot be measured, it is meaningless as a data point.
The simplest way to ensure meaningful data is to have each guideline produce either a Yes/No answer or a simple score or tally.
The simpler the guideline, the simpler it is to train your QA conversation reviewers. Anyone with the appropriate contextual knowledge can then fairly review a communication based on the guidelines.
Each potential reviewer will know what each score on each line item in the rubric represents. Such simplicity reduces guesswork, and, even more importantly, reduces the likelihood of bias.
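To make the idea concrete, a binary rubric can be represented as nothing more than a list of Yes/No questions and a tally. Here's a minimal sketch in Python; the guideline wording and the `score_review` helper are illustrative, not part of any particular QA tool:

```python
# A binary QA rubric: each item is answered Yes (True) or No (False).
# Guideline names here are hypothetical examples.
RUBRIC = [
    "Did the agent make fewer than three SPaG errors?",
    "Did the agent confirm resolution with the customer?",
    "Did the communications comply with our style guide?",
]

def score_review(answers: dict) -> float:
    """Return the share of guidelines met, as a percentage."""
    met = sum(1 for question in RUBRIC if answers.get(question, False))
    return 100 * met / len(RUBRIC)

review = {
    "Did the agent make fewer than three SPaG errors?": True,
    "Did the agent confirm resolution with the customer?": True,
    "Did the communications comply with our style guide?": False,
}
print(f"{score_review(review):.0f}% of guidelines met")
```

Because every answer is binary, two reviewers scoring the same conversation should land on the same number, which is exactly the objectivity the guidelines are aiming for.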
The aim of each guideline in your rubric should be to reinforce and re-state the objectives of your whole QA plan.
When identifying the objectives of your implementation, consider what your current service challenges are, and how a QA programme surmounts them. Have solid reasoning for each measure on your rubric, and, above all, ensure it is measurable.
Shortfalls in the service you provide can only be improved if you can coach your team in the relevant behaviours.
Ensure your QA and leadership teams know how to coach for each guideline, and are able to model them clearly.
Go beyond just resolving the customer issue. Finding a solution is the base-level of your customer support. Consider instead what your customers’ ideal endpoint is. Is this best represented by confidence in the service or product? Or is it related to efficiency or responsiveness? What other factors are key to your customers’ success?
Once you have identified your ideal customer outcomes, you can link specific points and actions in your service that maximise the chance of achieving that outcome, and identify a guideline in your rubric that measures for that.
Every action and communication carried out by your agents will be clearly measurable and coachable. The team will unite around a clear set of common goals.
What do we mean by GREAT? No, the capslock isn't stuck on...
Here are 5 not-great examples, and how to make them better:
“How good was the agent’s spelling and grammar?” 1-10
This is not specific enough. It’s also open to personal judgement, and it’s not clear what the score relates to.
“Did the agent make fewer than three SPaG errors (within context of appropriate language, reflecting customer’s style)?” Y/N
“Was the customer issue solved?” Y/N
Resolving the customer issue is a bare minimum requirement of your service department. This isn’t clearly linked to ideal outcomes.
“Did the agent confirm resolution with the customer and provide supporting knowledge base article if one exists?” Y/N
True quality support goes beyond the minimum. This example demonstrates that extra steps were taken to ensure the customer was successful with the suggestion, and extra value was given.
“Was the agent empathic?” Y/N
This is a very subjective question, and probably not a binary answer.
“Did the agent mirror the customer’s challenges, confirming that understanding with the customer?” Y/N
Codifying the behaviours that best represent empathy in your organisation, and more generally on a social level, ensures that you are able to measure and coach for those behaviours, fostering greater empathy across your service team. In this case, we can simply ask whether the agent restated the customer's problem and confirmed its impact with the customer, helping the customer feel heard and improving the listening skills of the agent.
“How friendly and professional was the agent?” 1-10
Friendliness and professionalism are both quite different traits, and neither is easily measured on a ten-point scale.
“Did the communications comply with our style guide?” Y/N
A style guide gives a clear set of expectations that can be taught and coached. It also means you can very specifically design what “friendly” and “professional” are exhibited as in your customer conversations. Perhaps a certain formality is required, or conversely your customers typically use text speak!
“How well did the agent comply with procedures?” 1-10
Applying a ten-point scale to something as critical as trust and safety compliance is not advisable.
“Did the agent comply with internal procedures A, B and C?” Y/N
When you need clear alignment to a set of procedures or legislation, make the measure a simple binary Yes or No. Communications that fall short of expectations can be picked up simply, and appropriate coaching or retraining given.
You’ll notice that our examples all drill down to a binary response. The point of quality being measured was either met, or not. If you're at the start of your quality journey, this greatly simplifies the QA process in the early days, and the supporting training program. It also reduces the likelihood of the scores being open to interpretation by the reviewer.
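One practical payoff of binary responses is that trends fall out of the data almost for free: counting the pass rate per guideline across a batch of reviews shows exactly where coaching is needed. A small sketch, using made-up guideline keys and review data purely for illustration:

```python
from collections import Counter

# Illustrative batch of binary review results; keys and values
# are hypothetical, not taken from a real QA programme.
reviews = [
    {"resolution_confirmed": True,  "style_guide": True,  "empathy_mirrored": False},
    {"resolution_confirmed": True,  "style_guide": False, "empathy_mirrored": False},
    {"resolution_confirmed": True,  "style_guide": True,  "empathy_mirrored": True},
]

# Tally how many reviews passed each guideline.
passes = Counter()
for review in reviews:
    for guideline, met in review.items():
        if met:
            passes[guideline] += 1

# Pass rate per guideline: the lowest rates flag coaching priorities.
for guideline in reviews[0]:
    rate = 100 * passes[guideline] / len(reviews)
    print(f"{guideline}: {rate:.0f}% pass rate")
```

In this toy data, empathy mirroring has the lowest pass rate, so that behaviour would be the natural focus for the next round of coaching.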
Generally, the simpler the quality guidelines, the better. As you build out your quality programme, you might find it possible or even appropriate to expand the system to measure more nuanced notions of quality. Just be sure to train for it.
Invest time in setting clear, precise, well-crafted QA guidelines. Great QA guidelines make your programme robust and defensible. Management of the whole process is more straightforward, and a more frictionless experience for both agent and reviewer will ensure its long-lasting implementation and benefit. It becomes sticky, at both organisational and individual level.
Great QA guidelines build trust, directly influence your ability to coach agents effectively AND target behaviours specifically to customer outcomes. That maximises your support team's chance of success, and in turn, your customers' success.
For more resources on Quality Assurance, employee engagement, coaching and feedback and other CX topics, head over to the Resources page.