The QA professional in 2026: From analyst to change-maker
There’s a version of the QA professional’s job that most people in the industry would recognise – and not entirely fondly.
The quality police.
The person who picks calls, marks scorecards, sends feedback, and repeats. A role defined largely by what it measures rather than what it changes.
That version of the job is disappearing, but not because QA matters less. If anything, the opposite is true. As Matt Jones, Head of Product at evaluagent, wrote recently: “The QA function isn’t becoming less relevant. It’s becoming more mission critical than it’s ever been”.
What’s changing is what QA professionals are actually needed for – and the gap between teams that have recognised this and teams that haven’t is starting to show.
Contact centre QA teams have always sat on something remarkable: a continuous, unfiltered view of how customers actually experience a business. Not survey data, not NPS scores averaged across a quarter, but the real experience based on thousands of individual conversations – what customers are worried about, where agents are struggling, which processes are breaking down, and why.
The problem was capacity. When a team of three is manually evaluating a sample of calls, writing up feedback, and chasing acknowledgements, there isn’t much left over for analysis. The insight existed in principle, but in practice it rarely travelled further than the QA inbox.
AutoQA changes that equation fundamentally. When scoring is automated and 100% of interactions are covered, the time that was spent on evaluation becomes available for something else. The question, as we explored in a recent webinar with Xander Freeman of Call Centre Helper, is what that something else should be.
“I can’t remember a time in my lifetime where contact centre professionals have had more leverage over the wider business than they do right now,” Freeman observed. That leverage is real. But it only materialises if QA teams are ready to use it.
When you’re no longer spending the majority of your time completing evaluations, you can spend it understanding what the evaluations are telling you.
This is a different skill to scoring. It requires looking across data rather than through it: spotting patterns, identifying root causes, asking why a metric is moving in a particular direction rather than simply noting that it is. It means connecting quality scores to the business metrics that leadership teams care about: customer satisfaction, repeat contact rates, first contact resolution, agent attrition. When those correlations are visible, QA data stops being a compliance report and starts being a business intelligence tool.
Matt Jones sees this playing out in the most successful QA programmes: “The best QA teams are sharing all that insight across the business, because it can inform everything from process change to policy guidance to product development”.
The data has always been there. What’s changed is the capacity to do something with it.
For QA Managers, this means developing comfort with analysis that goes beyond the scorecard – building the habit of asking “So what?” after every metric, and being able to answer it in a way that means something to the people outside the QA team.
The second shift is perhaps the most personally significant for QA professionals, and the one that takes the longest to get right.
Automated scoring doesn’t remove the human element from quality; it relocates it. When the AI handles the evaluation, the QA professional’s relationship with agents changes from assessor to coach. That’s a fundamentally different dynamic, and it requires a different set of skills.
Coaching well means understanding the difference between a score and a conversation.
This matters more than it might seem. In our recent poll of contact centre professionals, 55% said that agents not trusting or engaging with QA results was their biggest challenge. Some of that is about explainability – agents need to understand why they’ve received a score, not just what it is. But a significant part of it is about the quality of the coaching relationship itself. Agents who feel that QA is something done to them, rather than for them, will always find reasons to disengage.
The teams that get this right tend to do a few things consistently.
None of this is complicated, but all of it requires deliberate effort. The capacity to give it that effort is what AutoQA increasingly provides.
The third shift is the one that determines whether the first two actually change anything.
QA teams can do excellent analysis and deliver brilliant coaching, and still find that nothing much changes at the organisational level. That’s because the insight hasn’t been communicated in a way that moves the people who need to act on it. This is the storytelling challenge, and it’s one that Freeman identifies as central to the evolving role: “You have to be an analyst, you have to be a coach, and you have to be a storyteller at the same time”.
Storytelling in this context doesn’t mean spin. It means translating data into the language of whoever’s in the room. For a frontline agent, that means connecting quality feedback to their own performance targets. For a team leader, it means showing how coaching outcomes are affecting team metrics. For senior leadership, it means correlating quality scores with the board-level numbers they’re accountable for – and being able to make that case clearly, without losing the nuance in translation.
The Rosetta Stone framing that Freeman uses is a useful one: the same underlying data tells a different story depending on who needs to hear it, and the QA professional’s job is increasingly to know how to translate that data to different audiences. “From a data perspective, you can more or less craft the narrative you want to be crafting, you just need to learn how to use the tools in the right way, and translate it into the metrics that matter for that stakeholder,” says Freeman.
This is also where the shift from quality police to internal oracle really takes hold. The QA teams that have made this transition aren’t waiting to be asked for their data. They’re bringing it proactively to the rooms where decisions are being made. As Matt Jones put it: “You’ll quite quickly be dragged into that room – they’ve got all this insight from all these conversations, and they’re the oracle”.
None of this happens overnight. Developing analytical instincts, building coaching relationships, learning to communicate data as narrative – these are skills that take time to acquire, and QA professionals who are still spending most of their day completing manual evaluations haven’t had much opportunity to practise them.
That’s why the capacity argument matters so much. AutoQA makes space for QA professionals to become something different. The teams that use that space intentionally, that treat the freed-up time as an investment in developing these capabilities rather than simply absorbing it back into existing workloads, are the ones that end up in that room.
Matt Jones frames it another way, pointing to the growing deployment of AI agents: “The discipline you’ve built – evaluating conversations against an operational standard, identifying patterns, feeding insight back into the business – is exactly what’s needed as AI agents are deployed at scale.”
The contact centre has always held more insight than the wider business knew what to do with. Now, the tools exist to surface that insight at scale. What the industry needs is QA professionals who know how to use it.
Watch the full From Scores to Signals webinar with Matt Jones and Xander Freeman on YouTube.
Discover how you can get more from AutoQA, with conversation intelligence, agent engagement features and intuitive workflows.
Book a demo