Customer complaints and general enquiries are on the rise, demanding faster and better responses, but achieving instant and meaningful resolutions is not always easy.

In the latest episode of our Experience CGI podcast, Director of Consultancy in Asset and Auto Finance, Philip Benké, talks to our AI and automation expert, Cheryl Allebrand. They dive into a real-world example: the FCA's recent investigation into discretionary commission arrangements, which has caused a surge in customer enquiries for lenders and auto finance companies.

Handling this surge manually is a big challenge, and a costly one. It could also be an opportunity to use AI to automate and streamline the complaint-management process while ensuring compliance with the FCA's rules and guidance.

Listen now to learn more about how AI can empower businesses, leveraging technology to take the load off and handle complaints more efficiently.


Transcript

Philip Benké: Hello, and welcome to our new CGI Experience podcast. I'm Philip Benké, and today I'm delighted to be joined by our AI and automation expert, Cheryl Allebrand.

Cheryl Allebrand: Hi, Philip. Nice to be talking to you today.

Philip: Great. I'm really looking forward to our chat today, to understand a little bit more about how AI can help the auto finance industry and lenders with the challenges they're facing in reacting to the FCA's investigation into discretionary commission, which was highlighted by Martin Lewis on his programme and led to a huge number of enquiries.

Cheryl: Yes, they've been inundated by hundreds of thousands of emails, haven't they?

Philip: Yes, I think actually he put a template onto his website, and that led to over a million enquiries to a whole range of funders. Huge numbers.

Cheryl: Now I'm assuming they're trying to deal with the repercussions of that.

Philip: Yes, it's a huge issue for the industry because of the resources required just to manage the number of enquiries coming through.

Cheryl: It sounds like they need a little bit of help there and maybe some AI to help do some of the heavy lifting.

Philip: Oh, yes. Without a doubt, there's going to be a requirement to manage this. The number of enquiries has gone up from normal levels to what they're experiencing today, and that will put a huge burden onto these businesses.

Cheryl: Do you have any insight into how they're currently handling it?

Philip: At the moment, the FCA has given them a period of nine months in which they can delay their responses. They have to acknowledge the complaint initially, but they don't have to come back with an answer on discretionary commission until September, when the outcome will be announced by the FCA. That does give them a period of time.

Cheryl: Yes, but still, when they're dealing with such a spike in emails, I think they should probably get a little bit of help from AI. There are a few things they could do there. Right upon receipt of an email, AI could start extracting information (if the template is good, it provides a lot of the information needed to investigate the claim) and use that to automatically create a ticket in an existing ticketing system. Otherwise, it could recognise the content of the email and route it to the correct department, or even triage it based on the perceived sentiment, if something sounds more urgent.

You could even use generative AI to start drafting responses, so that people aren't left wondering what's happening with their claim, or triggering new waves of calls trying to follow up on it. If you can anticipate the types of questions people will ask, you can use AI to help with those responses and make it a more communicative situation, without the call centres being inundated the same way the email servers are probably groaning under the strain.
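The extract-then-route flow Cheryl describes could be sketched roughly as below. This is a minimal illustration only: the field labels, department names, and urgency keywords are invented for the example, not taken from any real lender's systems, and a production system would use a trained classifier or language model rather than keyword matching.

```python
import re

# Hypothetical routing rules and urgency markers, for illustration only.
ROUTES = {
    "discretionary commission": "DCA claims team",
    "settlement": "Accounts",
    "arrears": "Collections",
}
URGENT_MARKERS = ("urgent", "complaint", "ombudsman", "distress")

def extract_fields(email_body: str) -> dict:
    """Pull structured fields out of a templated claim email.

    Assumes the template labels each field on its own line,
    e.g. 'Agreement number: 12345'.
    """
    fields = {}
    for label in ("Name", "Agreement number", "Vehicle registration"):
        match = re.search(rf"{label}:\s*(.+)", email_body, re.IGNORECASE)
        if match:
            fields[label.lower().replace(" ", "_")] = match.group(1).strip()
    return fields

def triage(email_body: str) -> dict:
    """Create a ticket: extracted fields, a routed department,
    and a crude urgency flag based on keyword 'sentiment'."""
    body_lower = email_body.lower()
    department = next(
        (dept for key, dept in ROUTES.items() if key in body_lower),
        "General enquiries",
    )
    urgent = any(marker in body_lower for marker in URGENT_MARKERS)
    return {"fields": extract_fields(email_body),
            "department": department,
            "urgent": urgent}
```

A templated email containing an agreement number and the phrase "discretionary commission" would come out as a ticket routed to the DCA team with its fields already populated, which is the "automate creating a ticket" step in the conversation.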

Philip: That's actually really important, because you've got one issue with discretionary commissions, but you'll also still have your normal complaints and the usual issues that arise. They do need to triage between the two: the standard complaints, and the ones generated by the recent publicity around DCA complaints.

Cheryl: Even the people who are handling these new complaints need to understand how they're going to respond to them. They're going to need an internal Q&A, a set of information on how to handle them. You can use AI to assist with that too, creating a document store that can then be queried. That can support internal people as much as it could be used to support external people.
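The "queryable document store" Cheryl mentions could be sketched as a simple retrieval step: score each guidance document against a handler's question and return the best match. The documents below are invented examples, and a real system would use embeddings plus a generative model to compose an answer rather than plain word overlap.

```python
# Minimal sketch of an internal Q&A over a guidance-document store,
# using word-overlap scoring. Document names and contents are illustrative.

def build_index(documents: dict) -> dict:
    """Map document name -> set of lowercased words for quick scoring."""
    return {name: set(text.lower().split()) for name, text in documents.items()}

def query(index: dict, question: str) -> str:
    """Return the name of the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(index, key=lambda name: len(index[name] & q_words))

docs = {
    "dca-pause-rules": "Under the FCA pause you must acknowledge the complaint "
                       "but may delay the final response until the outcome is announced.",
    "standard-complaints": "Standard complaints follow the usual eight week "
                           "response deadline and escalation to the ombudsman.",
}

index = build_index(docs)
```

The same index could serve internal handlers directly or sit behind a customer-facing assistant, which is the internal/external dual use described above.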

Philip: That's really interesting. One of the main challenges is also how they're going to manage and sort their data. Because the complaints go back to 2007, there are going to be multiple systems and multiple different types of agreement. The industry has changed; in fact, the FCA only took over regulation here in 2014, so that regulation and oversight wasn't there before. How are they going to manage that? How could AI or technology help?

Cheryl: That's a good question. There are multiple ways it can help. Part of it is that they're going to need to extract specific data, aggregate data in order to access it, or both. A lot of that work can be supported by language-based AI, which can understand information held in different formats, potentially even stored in different computer languages, and translate between them. Really, the important bit where AI can do some lifting is the data clean-up, so the data can then be used. It's really good at things like categorisation and labelling, and getting information from different formats into a single consistent format.

I think a lot of the heavy lifting, once they've identified which parts of the system the information is in, is that it can go and find it: finding the proverbial needle in a haystack. If you think about the specific points of data or specific agreements this might apply to, it can winnow those down to a workable number and then help make those bits of data queryable, so they can actually do the investigative work. Even beyond that, it can help with the analysis.
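The clean-up and winnowing steps could look something like the sketch below: map two hypothetical legacy record layouts into one canonical schema, then filter to agreements potentially in scope. The field names are invented, and the 2007 to January 2021 window (discretionary commission arrangements were banned in January 2021) is an assumption for illustration rather than a definitive scoping rule.

```python
from datetime import date

def normalise(record: dict) -> dict:
    """Translate either of two hypothetical legacy layouts into one schema."""
    if "agr_no" in record:  # older system's layout (assumed field names)
        return {
            "agreement_id": record["agr_no"],
            "start_date": date.fromisoformat(record["start"]),
            "commission_model": record.get("comm", "unknown").lower(),
        }
    return {  # newer system's layout (assumed field names)
        "agreement_id": record["agreementId"],
        "start_date": date.fromisoformat(record["inception"]),
        "commission_model": record.get("commissionType", "unknown").lower(),
    }

def in_scope(rec: dict) -> bool:
    """Winnow to agreements in the assumed review window
    that used a discretionary commission model."""
    return (date(2007, 4, 6) <= rec["start_date"] < date(2021, 1, 28)
            and rec["commission_model"] == "discretionary")
```

Once every record is in the same shape, the "needle in a haystack" search becomes an ordinary filter, and the resulting subset is what investigators would actually query and analyse.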

Philip: Okay, that sounds really important. Do you see future benefits as well from having this data sorted? Obviously we're looking at the inquiry at the moment, but how does that look when we consider, say, Consumer Duty?

Cheryl: That's one of the things I was talking about a year ago: letting people know how much Consumer Duty could be an opportunity for their companies, not just a set of obligations. It was very much around the two areas we're discussing right now, improving communication with customers and improving data quality, but really tied to decisioning. Decisioning is part of what they're going to need to do in this investigative work: deciding whether or not an agreement was impacted, how much needs to be paid back, if anything, those sorts of things.

One of the things that's nice about this is that it's all based on data, but it also generates data, and that data is saved in a very traceable manner. You can use explainable AI, or, I guess I should say, the principles around explainable AI, paired with this data trail, to show the FCA where and why decisions were made.

I didn't bring it up before, but even when it comes to communicating these things to the FCA, AI can help with report generation and other forms of communication. It doesn't just have to be with the customers or the internal staff; it can also be with the FCA, for whatever types of information they need. It can help the firms involved gather that together in different formats and for different end purposes, for different groups.

If we're thinking forward, you could automate your auditing process for the most part. This data transformation is quite important when you want to extract intelligence and when you want to automate processes, or use AI to support different processes. Equally, going back to that earlier point about how AI can help get your data into the right format and gather your information together, it can play that role as well. It doesn't have to be a chicken-and-egg situation.
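The traceable decision trail described here could be sketched as an append-only log where every decision, automated or human, records its inputs, outcome, and rationale, and the same log feeds reports for the regulator. The field names below are illustrative, not an FCA-prescribed format.

```python
import json
from datetime import datetime, timezone

def record_decision(log: list, claim_id: str, inputs: dict,
                    outcome: str, rationale: str, decided_by: str) -> dict:
    """Append one decision entry to the trail and return it.

    'decided_by' holds a model version or a staff ID, keeping the
    human-in-the-loop requirement visible in the record itself.
    """
    entry = {
        "claim_id": claim_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "outcome": outcome,
        "rationale": rationale,
        "decided_by": decided_by,
    }
    log.append(entry)
    return entry

def regulator_report(log: list) -> str:
    """Serialise the trail as JSON, so the same underlying data can be
    reshaped into different reports for different audiences."""
    return json.dumps(log, indent=2)
```

Because each entry carries its inputs and rationale, the "where and why decisions were made" question from the FCA becomes a query over the log rather than a manual reconstruction, which is also what makes the mostly-automated auditing plausible.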

Philip: That sounds really, really good in the sense that it's something that we're building for the future. Obviously, there's a major issue at the moment for the industry regarding DCA and how they're going to cope with that. What do you perceive as some of the pitfalls of using AI at the moment? Do you see any issues regarding that? Because it is, to some organisations, a new technology, a new way forward. They may have some reservations.

Cheryl: Absolutely. Anything that you don't understand well, you're going to, quite rightly, have some concerns about. There's much we can do to help get people up to the right maturity level, if it doesn't already exist within the organisation, so they can make the right decisions and work with these things in the right way.

I think one thing that we haven't really discussed here is what AI is and how it can be used, and that's really important if we're going to discuss where the pitfalls might lie or what companies need to be concerned about. There are different pieces of technology, different flavours of technology, that all fall under the umbrella term of AI. There's the more traditional AI, which is more accurate but takes more work, and not everyone can use it. Companies will likely already have some specialists, working either within the firm or engaged externally, if traditional AI is going to be the proper solution.

Now, that takes longer to get into place. It's a bit more involved, so it may or may not be appropriate for pieces of this particular work. Then there's generative AI; I think we used the term earlier. That's the newer technology that everybody got excited about a year and a half ago, when ChatGPT really hit the scene and started capturing everyone's imaginations.

That can be used for more things, and it's relatively fast and easy to use and implement, so you can take action more quickly. Because it's all language-based, as I mentioned, you can talk to it or write to it rather than having to be a coder, a specialist, or a data analyst. That not only opens it up to a broader range of uses, but means it can be used by a broader group of people within the organisation.

I think the important thing to remember is that in either case, you're going to need the right data in order to make the decisions, or for it to be helpful, and you have to have the data rights: the right to use that data for that particular purpose. Think about GDPR and other things along those lines. Both of those require your organisation to have an AI governance plan and strategy in place.

Now, firms may already have that in place if they're using the more traditional AI, but they might not have updated it to consider the extra risks around generative AI. Because, as that word "generative" suggests, it can create its own content; it comes up with its own answers. Even though you're going to want to ground that in your organisation's data, it absolutely requires a person to be part of the process and to be checking it.

Even though it can help do some of the heavy lifting, particularly, going back to that earlier point, around getting the data prepped, the things that would require too many person-hours or would be too difficult, once you integrate it into your processes it can do a bit of that work. Then it goes all the way to the other end: helping to respond to people, coming up with answers, or helping to generate reports that customise the same information for different groups, to give them the information they need.

That's still going to need a person as part of the process. It's just that AI can make it faster, easier, and potentially of better quality than a person could manage on their own.

Philip: Thank you, Cheryl. That's great insight. Would you say that having an organisation which understands AI, but also understands the industry it's implementing AI into, would be a good position, so you've got the two sides coming together?

Cheryl: Yes. I guess the part to consider here is that the more traditional AI, and I'm including machine learning in this, is something where you already have specialists. You don't really run the risk of doing something you'll regret unless your specialists aren't properly trained themselves, if they don't consider everything they need to consider, and that can partially be down to your organisational structures.

With generative AI, because it's in everyone's hands, everybody might think they can use it, or be tempted to just jump in and start using it to help with their work, particularly if they're being inundated with questions and just don't have enough hands or hours in the day. That doesn't mean it's right for organisations to use it straight off.

It's important for people to understand how to use it safely, so they don't fall into some of the pitfalls. Part of that is probably not just using, let's say, the free ChatGPT they can access on their phone or laptop. You're going to want a version of whatever language model is being used that your organisation has put in place and has already made some decisions around. That goes back to governance: they've already decided what safe use of this is, what proper use is, and when and where it's appropriate. It's not just a free-for-all.

That doesn't mean it can't be applied in different ways across your organisation, because it absolutely can, but you're going to want that governance in place before you let people loose.

Philip: It sounds as though, with companies having to plan and get ready for the FCA's outcome in September, now is the time to be putting these processes in place, so that they're ready. Would you have any recommendations on the next steps for companies to take?

Cheryl: If they don't already have the sort of people in place who are experts in this area and who can take them forward, then reaching out to a company such as CGI might be a good step. We can help, whether that's augmenting their staff, working on projects for them, or simply helping ensure they've got all of the pieces in place, with the proverbial i's dotted and t's crossed, so they can do things safely, in good time, and avoid stumbling blocks.

Philip: Okay, Cheryl. That's been really interesting. I think that, looking at the current situation for the funders, now is the time to put your plans in place, start the dialogue, start talking to people. Don't wait until September, because September may be too late. It would be really important to start the conversations now. Thank you so much for your input today. I've really enjoyed talking to you, and I've learned a lot about AI and how it can help with the situation. Thank you.

Cheryl: Thank you. I know that they've been given a bit of breathing room, but let's help them breathe a little bit more easily.

[END OF AUDIO]