
The latest news and views from the EvaluAgent Team

6 Common Challenges with Calibration and How to Overcome Them

By Reg Dutton
Welcome to the first Webinar of 2019 as our ongoing Webinar Wednesdays series continues.

 

This week, subject matter expert and all-round calibration guru Reg Dutton looks at calibration. Originally titled 'The Six Common Challenges with Calibration and How to Overcome Them' - please enjoy the recording.

If you prefer reading, we've copied in the transcript below. The original slides are also available for you to download here.

Please note - the audio doesn't start until 0:11.

 

Webinar Transcript 

Good afternoon everyone, and welcome to the first EvaluAgent webinar in 2019.

The webinar today is around the six common challenges with calibration sessions and how to overcome them. Just let me introduce myself, I'm Reg Dutton, a customer success manager here at EvaluAgent.

I'll be coordinating a Q&A at the end but, if any ideas or questions spring to mind as I'm going through, there's a question box on the GoToMeeting panel for you to type questions into, and I'll go through those at the end.

So, let's move on.

About us

I'm not going to go into any major detail on this first slide; it's just for those who have joined us today and aren't familiar with who we are. The important thing to mention is that we work with a lot of different organisations of all different sizes, and we support them in a number of different ways. For any existing clients listening in with us today, I'm sure you're aware of the kind of support we can provide at EvaluAgent.

Why Calibrate?

Okay. So, first of all, why should we calibrate? As a little starting point, it's to:

  • Develop a fair and consistent approach to quality.
  • Eliminate any inconsistencies and make sure that everyone is on the same page. So everyone is scoring in exactly the same way.
  • Discuss more complex interactions in detail.
    • So, the good thing about calibration sessions is listening to those complex interactions, perhaps interactions that don't come up all of the time, that perhaps do create challenges within your contact centre.

So, those are a few summary reasons that give an umbrella overview of why we calibrate. But let's move on to our common challenges.

The 6 common challenges

Majority Rules

So, the first challenge - and I hope everyone who's been involved in calibrations will be aware of these - is the majority rules challenge. I'm sure anyone listening in with us today has experienced a calibration session run with the view that the majority decision gives the final outcome. So, most people in the room share a certain viewpoint, and at the end of the session that's the view that's taken and agreed upon. What comes with that is that it often excludes, or overlooks, the viewpoints of other participants, whether or not their thoughts or ideas are actually legitimate. Defaulting to majority rules can be an easy option when running calibrations: it closes down discussion, or prevents debate, wherever you've got difficulty reaching agreement.

Fear of speaking out

The fear of speaking out. Again, for those who have been part of calibrations, some participants can be particularly vocal, have strong opinions, or can just be quite stubborn and won't shift their opinion, no matter where the discussion is leading. Sometimes this can leave other participants in the session afraid to speak out, for fear of being shut down, or made to feel that their differences of opinion will be ignored or overruled.

Low attendance

Low attendance can be a problem with calibration sessions. Often your invitees either ...

  • Might not show up
  • Feel that other elements of their workload might take priority

This is particularly evident perhaps when previous calibration sessions have taken place and perhaps:

  • They've overrun the allotted time
  • You've gone through the whole session and not been able to reach a final consensus, or it's simply taken too long
  • People don't feel like they'll be listened to, or they don't feel involved

Whatever the reasons are for low attendance at your calibration sessions, the core of this is that, for whatever reason, people feel the sessions aren't adding value, or view them as a waste of time. That's an issue you really need to address: look at why you're getting low attendance in your sessions. I'll go on to some tips to address this shortly.

Independent Paper Scoring

So, independent paper scoring. I put 'paper' on there because it's probably the most commonly used. Generally, when you've got no central system in place for your calibration sessions, it's common that participants will evaluate prior to the session. They'll do that on a paper form, and they'll bring it along to the session. So the session starts, everyone sits down with a completed paper form in front of them, ready to start going through the contact and talking about results.

The challenge that comes with this is that individual scoring isn't really open and transparent. What can happen - certainly in my experience, and I'm sure other people out there have experienced this as well - is that those who maybe aren't so confident can change the scores on their paper, swayed to another person's point of view before they get an opportunity to speak. That holds them back and means they're not giving a true representation of how they scored the call.

I think keeping scoring open and transparent in calibration is really important.

Going through the motions

Calibration sessions often have no specific purpose and can come down to being a tick-box exercise.

For example: we hold our calibration sessions once a week, or once a month, so job done! We're doing the right thing and everything is great. At the end of the day, this isn't particularly engaging, and it's not giving anyone much motivation to attend and get involved.

So, it's just something to consider, and we'll pick it up a little bit later on in this presentation.

No follow up

Finally, the last point I just want to talk about is having no follow up to your calibration sessions.

Again, from my own experience, and I'm sure for many others here: you all get together in a room, you listen to your call, you talk about the scores, and then once the session is finished, whatever has happened in that calibration session stays in that room.

And potentially, what then happens is that any uncertainty or confusion, or any points of view or challenges that people haven't had the opportunity to raise, or that haven't been picked up appropriately in the session, means people leave the room none the wiser and no better off than they were before they entered.

What you really want to make sure is that people are leaving those sessions more aligned. But of course, how can you be sure of this, and ultimately how is that being measured?

So, how do we overcome these challenges and improve our Calibration sessions?

The most important thing, I think, is to make calibration conversational.

There's no 'wrong' point of view

The idea is to try to reach a consensus through conversation, while recognising that there is no wrong point of view. Just because a view isn't shared by others doesn't mean it's wrong. Everyone should be heard.

It's not a democracy, if you like, where majority rules. Calibration sessions are about allowing everyone to share their own views and opinions constructively, in an open and honest way, while everyone else in the room is willing to listen and then discuss whatever those points may be. It should be a comfortable environment for everyone involved. No one should be shut down for having a difference of opinion; in fact, all differences of opinion need to be considered, because it's the differences that often identify gaps in your quality guidelines, or even the scorecard itself, and through that, action can be taken to rectify these gaps more efficiently.

Calibration should have a purpose, not just be a tick-box exercise

Calibration should always have a purpose; it isn't just a tick-box exercise, as I mentioned earlier. Calibration can help fix a range of different quality challenges. It's more than getting together to, you know, squabble about a couple of random policies. It's really about asking yourself:

  • What is the purpose of this session?
  • Why are we holding a calibration session this week, or later today, or whenever that may be?
  • What do you want to achieve from this session?

If you want people to be engaged when they turn up to your calibration sessions they've got to have a clear understanding of what the purpose is of this particular session and the reason that this session is taking place.

Some of the main reasons, perhaps, that calibration can benefit you and your business are for example:

 

Creating or editing scorecards
When you are creating a new scorecard or you're modifying existing scorecards, calibration is a great way to get examples of contacts that might be included, and you can get people together from all areas of the business to look at what needs to be included in those scorecards, what needs to be considered, and so on.
 
New Channels
Calibration becomes particularly powerful if you are considering opening new channels for customer contact. If at the moment you're only dealing with, say, calls and emails, and you want to venture into social media or chat, how are you going to determine what you'll measure on your scorecards? How do you know what's going to be included in that additional element of your quality process? Getting together and calibrating existing contacts, or looking at them from the point of view of a forthcoming channel, can really make a difference and set you up for success.
 
Managing and reducing dispute volumes
Calibration is also great for managing and reducing any dispute volumes coming in through your dispute process. And calibration is great as a precursor to agent coaching as well. This is something, certainly in my experience, that I'm personally a big advocate of: getting your agents to calibrate contacts, and then using those calibrations as a basis for coaching sessions, can be really, really powerful. We'll touch on this a little more in the next slide or two, so just bear with me on that one, because it's actually a really good point.
 
Identify the opportunities for evaluating and coaching
So, you know, calibration isn't just there for your agents; it's for your evaluators too, especially if they're new to it. New evaluators aren't going to be great at it and master it straight away, so calibration can help give you a good idea of where they are and how you can support them moving forward.

Scoring should be clear and transparent

Calibration should always be a safe environment, as I mentioned. Everyone should be free to speak out and be heard without bias, or being made to feel that they're wrong. I also mentioned paper scoring earlier, and how it lets people keep their scores, and as a result their opinions, to themselves.

What I would say is that if you can keep your scoring open and transparent, and make everybody aware in your session that differences in scoring support the conversation that you're going to have, it will really help.

Also, it allows you to track those participants who regularly have differences of opinion across your sessions, so you can monitor them, support them and make sure that they become more aligned.

Calibration must drive coaching, support and change

Starting any calibration without a follow-up after the session is finished is, quite honestly, a wasted calibration session. You need a plan for how you'll follow up on the findings and results that come from the session.

I've already mentioned that having a purpose to the calibration gives people a clear reason to understand and attend the sessions, and it also eliminates that tick-box mentality of "well, we've done our calibration this month, so everything is great."

It really is about what happens next.

  • Think about the discussions in your calibration session, and how they're going to shape the changes to your scorecards, and your quality process, and then keep quality evolving.
  • Determine who needs coaching and support based on what you've identified from those individuals, and how you can help them become more consistent and aligned.
  • How are you going to communicate the discoveries and everything that you've learned from those calibration sessions with the wider teams and the business?

It's the actions you take following calibration sessions that really demonstrate their true value, and ultimately give people another good reason to turn up to the next session, making them feel like their contribution is making a difference.

Calibration Top Tips! 

Finally, some top tips. 

Include everyone 

Team leaders and coaches

It's very easy to limit calibration sessions to just the people in your organisation who evaluate, because ultimately they're the ones scoring those contacts. However, it's important to consider your team leaders and coaches as well. They're often the ones feeding back the evaluation results and having the conversations with your agents about these scores. So, they need to be absolutely clear in their minds exactly what's involved in the scorecard and how it's scored, so they can deliver that feedback in the most accurate way, and then hold a conversation with the agent around those results.

Operation managers

Your operations managers manage a range of different operational KPIs, and depending on the challenges within an organisation, it's quite easy for them to lose sight of the specific detail and challenges being faced - in this case regarding quality - which would otherwise help them manage their team's quality performance.

Depending on what the challenges are within a particular business area, you could discuss those calls if something has been identified as having an impact on the operation.

Calibration just helps to keep them in touch with this process.

Agents

Going back to agents again, and the value of including agents in calibration. I've mentioned the importance and value of it, and where this really stems from is this: just think about the agents in your organisation. Think about how they perceive quality.

Are they completely immersed in it, or is it something that's done to them on a day-to-day basis? Do they feel they have an active role in quality? Do they have a clear understanding of its purpose? Do they actually believe they can influence quality in any way?

I think things like this are often overlooked when it comes to agents.

Calibration is a great way of really bringing them into quality, immersing them into that process. Especially when you're getting groups of agents together. Top tip here: Try and get a mix of different levels of performers.

Any approach to getting your agents involved in calibration is going to help them feel more involved in quality, understand the whole end-to-end process a lot more, and be less likely to challenge the results coming through if they're clear on how it's scored.

And, of course, we're all familiar with the 'them and us' attitudes. The perception from agents can be:

  • They only pick my bad calls
  • They look for my worst calls this month

Which we all know isn't the case, but it can seem like that. Something simple like inviting agents into calibration, either with other agents or with evaluators, can really help shift those perceptions.

Calibrate to guidelines

I can't stress this enough. If you're not doing this, you're just calibrating against each other's opinions, and that really isn't going to get you anywhere; it'll turn into an almighty argument, you'll be scrapping it out, and you won't make any progress. What you really want is to keep all of the discussion on track, and keep it as [inaudible 00:22:38]. The only way to do this is by keeping the discussion strictly in accordance with your quality guidelines.

It's possible that through the discussion you may identify gaps, or a lack of clarity, in your guidelines. But you know what, that's not a bad thing, because it gives you the opportunity to tighten up your guidelines a little more, so all-in-all you're getting a positive result.

Etiquette and Standards

I've touched on it a couple of times, but I'm sure most of us have been involved in calibration sessions that ultimately descend into chaos: people start arguing, the sessions take too long, you don't reach any agreement - all of the things I talked about earlier.

The way to combat some of this is for whoever is facilitating the session - whether or not it's the same facilitator every time - to create some standards that people effectively sign up to.

For example:

  • The objective of the calibration session.
  • How long is the session going to last?
  • What is the role of the facilitator?
    • Are they going to take up any and all of the actions that arise during the session?
    • Are they going to control the conversation and make sure that that stays on track?

As a facilitator, it's your job to make sure that everybody is allowed to express their own point of view without being interrupted. And again, make sure that all discussion is focused around the guidelines.

Have a robust disputes process

Final top tip. This one is linked to calibration, and it's to have a dispute process in place. Most contact centres that measure quality these days, if not all of them, will have some form of dispute process. So perhaps the agent receives their results, they're looking at the results and the feedback they've been given, and there's something in there that they want to challenge and dispute, something they don't agree with; which, you know what, is fine.

But you need a robust process in place, because often disputes are reviewed on a case-by-case basis: a dispute arises, it's dealt with, handled, put to bed, and you move on. What you should really be doing is collecting all of this information from your disputes, so you can store and retrieve it all in one place, because disputes are usually driven by:

  1. Inconsistencies in scoring
  2. Genuine human error
  3. Misunderstanding or misinterpretation of quality guidelines by evaluators.

It's all of this information that gives you a really rich source of quality data in itself. For your agents and your operational teams out there, it helps improve trust between people, because action is being taken on the back of these disputes. Things are moving forward, and they're being continually improved.

How can our software help?

For the latest functions and features, please visit our software page.

In summary, EvaluAgent has a calibration feature, which allows you to:

  • Schedule calibration sessions from within the platform.
  • Enable your participants to blind evaluate, then automatically collect all of those results together so you can view them side-by-side, all in one place, ready for your calibration sessions.
  • Report on calibration performance, and provide your evaluators with feedback from whatever comes through those sessions.
  • See how calibrated all of your evaluators are, so you can start to target certain evaluators for more calibration sessions, or further coaching and support, and so on.
  • Check-the-Checker and support a dispute process: whether as a second or third line of defence, EvaluAgent enables you to take any completed evaluation and consistency check it against the evaluator.
  • Store everything securely for later analysis and reflection.

Exciting news: the calibration feature is launching this month (Jan 2019). If you're not using EvaluAgent at the moment, then you can 'Get started for free'.

It only takes a minute to fill out this form here.

Alternatively, for our existing customers - Calibration is available as a free trial so please contact us via the usual mechanisms to add it to your portal. 

Questions & Answers from the webinar

Q: How can you combat the rule of the majority in a calibration session? The majority might not be right.

A: Overcoming a majority based on each participant's subjective opinion can be combated by ensuring that you're calibrating to clear quality guidelines, where the information you're calibrating against is consistent. Calibration sessions should run on the basis of seeking consistency rather than simply getting agreement. It's also possible to identify inconsistencies in your guidelines through discussion, which is another great reason why calibration is so important.

Q: How can you address the difference between the client view and the call centre view, from within an outsourcer?

A: It's better to address differences between client and call centre at the point of creating your scorecard(s), because it's at this point you can collectively agree what is being measured, the required standards, and all of the guidelines that keep everyone straight. If this hasn't been the case, or you already have guidelines in place but there are still inconsistencies between the two parties, then the priority of any calibrations in the first instance should be agreeing those guidelines to support a consistent approach in the future.

If you have any questions, please tweet @evaluagent or email info@evaluagent.com. We'd love to hear from you. 

 

Okay, so on that note, we're close to time, so I'll wrap it up there. I look forward to hopefully speaking to some of you very soon. Thank you!

Trusted by contact centres of all sizes
