Thomas Rauschen:
In today's episode, we will discuss the aspects of trust, ethics, and regulation of AI deployment in insurance. To shine a light on this question, I'm thrilled to be joined today by Tom Infante, who leads our cybersecurity and operational resilience practice COE in the UK. Hi Tom. I'm delighted to have you on my podcast.
Tom Infante:
Hi, Thomas. Thanks for having me.
Thomas Rauschen:
Perfect. Before we deep-dive into our conversation, I just want to lay out a little of what we see in the insurance industry. The insurance industry is changing, driven not only by the macro trends we are seeing across the globe but, of course, by modern technology and AI, which we discussed in previous podcasts. Based on our Voice of Our Clients survey, digital transformation and modernization are the top priorities across the insurance industry, and AI is everywhere. At the same time, as you know from your client conversations, resilience, cyber, and data security are seen as key to navigating these more digitally enabled ecosystems. And of course, we see a lot of regulatory pressure.
So, before we dive deep into the responsible AI topic, I just want to ask you to level-set for us what the difference is between being secure and being resilient in an insurance ecosystem.
Tom Infante:
Yeah, of course, thanks. I think the first thing worth pointing out is that being secure and being resilient used to be separate things. To be secure, we need to prevent and protect, to stop anything from happening; to be resilient, we need to react and recover. And the reason I say it's important that these used to be two separate things is that we used to do classic patching and cybersecurity to a minimum level compared with what's on offer today, while resilience was just a business continuity plan or a disaster recovery plan. Now it's becoming more and more important, and business-critical, to be continuously resilient and secure.
Thomas Rauschen:
Perfect. Thank you so much for the level-setting here. If we dive now into the term responsible AI, what does it mean in practice? We see a lot of rapid AI adoption across the industry, an accelerated deployment. If you read the newspapers and, obviously, social media, there are public trust concerns when it comes to bias, transparency, and misuse of AI. But we also see rising regulatory scrutiny across the globe, especially in the UK. Can you shine a light on this?
Tom Infante:
Yeah, of course. Financial services, and insurance in particular, is a human business. AI, like you just said, can do all manner of things, and in anyone's hands it can create and develop capability that goes way beyond what a human could do in the same timescale. So, I think the term responsible AI comes from the idea that we should design and build systems and models that are not only safe for the people using them and the people affected by them, but, most importantly, aligned with human values. Again, that relates back to the human aspect of a financial services business.
Thomas Rauschen:
Okay. And if I may, a follow-up question, Tom: when it comes to AI, from your perspective and experience, is it more a technological risk or a reputational risk? Some insurance companies might underestimate the trust dimension in their AI applications.
Tom Infante:
Well, this relates quite well to one of the key topics in responsible AI: who's accountable for it? You can have several vendors in a value chain utilizing different AI models for different scenarios, whether as part of insurance claims or to generate new capability for the business. The waters can be very muddied in terms of who should take on that responsibility, and who owns the outcome for the end user as well. But to be clear, accountability sits with the creator of the model and the capability. One of the things that I'm really keen on, and have recently written some articles about, is keeping the human in the loop. Again, that comes back to being responsible with the AI: continually assessing and reviewing what the model is doing and how it's impacting the business.
Thomas Rauschen:
That's a good segue into the next segment, Tom. Trust and accountability are the foundation for AI adoption. So here is my next question: how do we regulate AI, and what are the key aspects to consider? When I talk about regulating AI, it could be external regulation, but it could also be business-driven, right? When we think about trust building and accountability, terms like transparency and explainability, fairness, bias mitigation, and human oversight, which you mentioned, are all aspects that need to be considered. So, what is your view on how to regulate AI? I know we discussed the four pillars beforehand, so maybe you can lay them out as well.
Tom Infante:
Yeah, of course. With any new technology, and with any new processes or innovation that we bring into the industry, there come some buzzwords. You've mentioned the first one: automation bias, the unintentional disadvantaging of certain profiles, or the amplifying of historical bias that may exist within the data. The important thing there is not bringing any disadvantage to a particular profile, and ensuring that these things are reviewed. The other one is ethical drift. Each company, each organization has its own ethical way of doing business, governed within reason. AI can shift that away, because the business changes, the profiles change, the products change, and so on.
In terms of how we actually regulate that, my main advice, and what I talk to people about all the time, is ensuring that we have those governance checks: ensuring that everything the AI is doing is transparent and secure, and that humans remain involved. Again, back to it being a human business. And before we scale anything, we should solve the small problems first. Bringing this back to insurance, that means, for instance, signing off claims that are quite mundane, straightforward, by the book, and don't particularly need much human intervention. Start small with that, and speed up your go-to-market and your business with models and processes that will then allow you to start scaling properly once you understand how your business is working with AI.
Thomas Rauschen:
So, based on what you just described, it sounds like trust is something that you can design. Or is trust something that you earn over time?
Tom Infante:
I think everything that we're doing here, we're designing. One of the issues is that we trust AI a bit too much because we've seen what it's capable of. One area I think is quite interesting, and which an article I read touched on, is what happens when we assume what AI is going to do and take for granted that it's capable of certain things. Let's face it, we've probably all done this with an AI tool: we've asked it something and taken the answer as gospel. The important question here is: are we going to start suppressing the way we use AI?
Because, like anything, if we regulate and start to be more responsible with AI, is there a point where we think, okay, we need to start suppressing it now because it's getting a bit too smart in terms of the work we're trying to do, and we're getting carried away with it in our business? I can definitely see that for IT companies moving forward, but also for financial services, because one key thing here is that the regulators, and we're already talking about AI regulation globally, can't keep up with what some of the more innovative and large-scale financial services organizations are doing with AI.
Thomas Rauschen:
So that means companies need to build their own regulatory frameworks internally to build trust with their clients, because they can't wait for the regulator. What you're saying is that waiting for regulation is not always the right thing, because, as you mentioned, AI technology is moving so fast. Insurance companies need to build their own frameworks to govern AI in order to avoid reputational risk and, of course, operational risk. That's what I wanted to say. Does that make sense?
Tom Infante:
Okay, but let me add a caveat here: we're talking about the minority. Of all the clients, and I'm sure you talk to clients as well, there's only a handful out there that have a really significant, advanced AI strategy that takes them beyond a few years. Which is obviously where we come in, because then we can advise our clients and tell them what we think the best way of managing their AI capability is.
Thomas Rauschen:
I couldn't agree more. In my client conversations, there's always a strong focus on the technological aspects and the value that AI can bring, but sometimes less so on governance. So, I'm really pleased that you made those comments.
Let's move on to how insurance companies could build responsible AI. First, I want to start with the question: What is the one mistake that you see companies make with AI governance?
Tom Infante:
Good question. I would say it's probably that they don't have an outcome in mind. It should be rarer than it is, but when the technology department and the rest of the business are disjointed, that's where we start to see issues with AI implementation and new technology and innovation. Take a simple measurement for AI: we want to improve something by 20%. Every organization, whether you're in insurance or farming or manufacturing, has a measurement where you want to improve something by x percent. That's where the journey with innovation and artificial intelligence starts. Because a lot of customers, and we're dealing with some serious organizations here, are just implementing for implementation's sake, to make a process quicker, with no real business outcome at the end of it. That's what I would suggest is probably the biggest concern.
Thomas Rauschen:
Perfect. And if you walked into a client tomorrow and they talked about building responsible AI and trust from the outset, what would you recommend? Where should they start, and what is the first practical step? Also, consider what you described as the human in the loop.
Tom Infante:
I think it ultimately comes down to data: identifying a process that we can improve, a go-to-market or a product that we can enhance, the speed at which we can deal with a request or an incident, and so on. That's the simplest way to start.
I was talking with someone the other day who told me that over the last two years they've developed about 160 AI models, or however you want to put it, robots, processes, and so on, each to fix a small issue. And again, this isn't about taking responsibility away from the humans in the organization; it's about freeing up their time to do something revenue-generating or innovative that's going to improve or grow the business in a certain way. So, my suggestion is always to start with those small things. To give you an insurance example: let AI handle the straightforward cases, and keep a human adjuster on anything complex or emotional, which protects trust and empathy with the client. Start small.
Thomas Rauschen:
That's perfect. And look, we are coming to the end of our podcast. So, I'll summarize the three takeaways from my perspective, and then you can comment on whether I got them right or wrong.
The first one, for me, is that trust must be designed, not assumed. The second is that ethics is a leadership responsibility. And the third one, for me the most important, which you described earlier, is that there must be a real handshake between the business and IT in order to build AI governance around your technology from the outset. What are your reflections on those?
Tom Infante:
Correct. As you were talking, I was thinking that things go in cycles, just like when cloud was the latest and greatest thing people were investing in, and are now divesting from in favor of hybrid. Any huge investment by an organization, particularly in financial services, given the mass of data, the customer base, and the money they're dealing with, needs significant commitment to succeed. It becomes a case of taking people on the journey with you.
A CIO many years ago would go up to the CEO and everyone else on the board and say, we need to make this massive investment, and get told no because it wasn't seen as important. Things are different now, but it's still the same scenario playing out over and over again. But just to finish on that: if we look forward ten years from now, I think we'll be judged not by how fast we adopted AI, which is how the majority of organizations are approaching it, seeing it as a race to implementation, but by how deliberately we kept humans in the loop, and, above all, how well we protected the trust of the customer.
Thomas Rauschen:
Those are perfect final words. I couldn't have summarized it better. Basically, what you said is that AI is transforming the insurance industry at really high speed, but with that power comes responsibility. And those will be my final words.
Thank you, Tom. Thank you so much for joining the podcast. I'm pretty sure we'll have more conversations in the future; it was really a delight to have you on. Thank you so much.
Tom Infante:
Thanks, Thomas.