In industries where trust, compliance, and precision matter, rigid, one-size-fits-all AI solutions fall short. Join Ryan Cantor, Chief Product & Technology Officer at Origami Risk, for a webinar that explores why configurability is the key to meaningful AI adoption, enabling organizations to tailor intelligence to their workflows, policies, and risk tolerance. Topics include:

- Why high-trust environments demand AI that adapts to the organization, not the other way around.
- How configurable AI supports human-led, iterative decision-making in risk and insurance.
- Why evolving regulations demand adaptable AI.
- What to look for in AI tools that promise flexibility, transparency, and control.

Hello, everyone. Welcome to our webinar on Right Size, Not One Size: Rethinking AI for High-Trust Environments. My name is Aubrey Eyer, and I'm thrilled to be your host today. We have an exciting session lined up, and we'll have time for your questions at the end of the presentation. If you'd like to ask our speaker anything, please submit those questions through the Q&A function on the Zoom toolbar. Like I said, we'll address those after the presentation. Before we dive in, I'd like to introduce our esteemed speaker, Ryan Cantor. Ryan brings over fifteen years of executive leadership experience in SaaS, product strategy, and technology. As Chief Product and Technology Officer at Origami Risk, Ryan leads the global product and technology organization, spanning product management, engineering, UI/UX, cloud operations, and internal IT. With a team of over three hundred professionals, he drives innovation and execution across Origami's high-growth, cloud-based SaaS platform serving the risk management and core insurance markets. So I'm going to turn the presentation over to Ryan now. Please take it away.

Thanks so much, Aubrey. Good morning, everybody, or good afternoon or evening, depending on where in the world you may be. I appreciate you taking a little bit of your time today to chat with me. What we're going to do is talk a little bit about what we're seeing, what went into how we think about AI, the factors we considered when forming a strategy, and what we're hearing from clients and future potential clients as well. The idea here is to inform and educate your thinking about the right questions to ask and the right way to think about incorporating AI into your business for optimal results. So let's dive in.

The first thing we have to address when we think about AI is that there is a delta looming, right? And that is the gap between the AI hype (what you see in demos, on websites, in webinars, maybe even one like this) and reality. What we are seeing is that AI adoption is absolutely accelerating, but it's very clear that not all AIs, even in our industries, were specifically built for high-trust environments. What I mean by that, and I'll give you some tips and tricks, is that you've got to be a little wary when you're watching demos or pitches. There are plenty of companies out there chasing headlines. If you're a startup or an unprofitable software company, you're still chasing valuations, still chasing investment. And so they get rewarded for headlines, rewarded sometimes for flash over substance.
And so again, it gets people really excited, but the hype doesn't really match the real world. We'll talk a little bit about how it breaks down and things to look out for to help separate that hype from reality. But most importantly, in our specific risk, insurance, and compliance sectors, what we have to think about first and foremost is predictability. Predictability is the definition and bedrock of trust, right? If you have predictability, you know how something is going to perform. The cousin of predictability is transparency, and we'll cover that a little more later. Again, how did something become predictable? It's not a black box. It's not something you throw out and just hope that the actions that come back are going to be satisfactory to your business.

For that reason, we also talk about accountability. We're going to talk a little bit about changing regulations and the audit requirements that are showing up in various states. But really: how can you, at a granular level, every time AI is applied to your business, hold it accountable if needed? Can you audit those results, audit what went into them, audit what came out, and do so with confidence? And that leads you to regulatory alignment. This is an emerging technology, so we're going to give you a couple of examples today. Anytime there's something new, there's necessarily some uneasiness, sometimes overreactions, and then there will be corrections, but the regulatory environment is absolutely starting to impact AI usage rules, regulations, and adoption. So as a leader in your business, whatever AI strategy you're thinking about adopting and incorporating into your business processes, you have to ask yourself: will your tool set, will your capabilities, allow you to be agile? Will they allow you to continue to adapt as your business has new ideas, as regulations change, or as the technology evolves? If you look at a year ago today and what some of these AI models were able to do, it is already a night and day difference. And if you extrapolate those capabilities into future periods, maintaining that agility is going to be absolutely essential for your organization.

So what's the opposite? I want to call out the risks of what I'm calling binary AI. I call this the toggle button. It's on or off. You take it or you leave it, right? It's one-size-fits-all. It can sometimes demo pretty well, because people will have picture-perfect data sets, or they'll set up the environment just right, or they've been training it for a while leading up to your demonstration. But upon installation or execution for your organization, it falls a little flat and doesn't quite meet your needs. It really only allows you to turn it on or off, take it or leave it. And the lack of configurability leads to misalignment. Is it perfectly aligned to your individual business workflows? Does it have errors? Ultimately, what we're finding is that clients who rushed to adopt some of those feature sets have buyer's remorse pretty quickly, because it didn't quite do exactly what they wanted.
Maybe there were some initial gains, but as you tried to scale it throughout the organization, it caused more friction and more pain than it was worth. And being so rigid, it can absolutely create compliance gaps, because what we're going to show you today is that the regulations are going to vary from location to location. The lack of modularity may mean that if three quarters of your business is in an area that doesn't have these regulations, but one quarter does, you may have to turn the feature off entirely, simply because you don't have the modularity to conform to those unique circumstances. So again, think about things a little more strategically, big picture, in how you approach AI. Ultimately, the lack of configuration or the lack of modularity can and will lead to compliance failures, right?

So I've talked a little about these regulations. We're going to talk about three in particular today that we've done some research on, in the states of Colorado, New York, and Texas. Interestingly, in this area, Texas seems to have been the most restrictive, which was a little surprising. What you're seeing overall is that these states are now requiring some level of human involvement in, or review of, AI-driven claims decisions. And so while we definitely talk about the hype, and people are thinking about how AI can automate as much as humanly possible, these states are now putting in regulations for specific areas in your workflow where, under specific conditions, humans need to be involved in the decision making. So again, having that flexibility matters, because these regulations could change at any moment, and your ability to be agile and adapt to them will be essential.

There's a lot of language about bias audits: even if the AI isn't making decisions for you and is simply providing recommendations, there's a level of accountability and transparency needed from a reporting perspective, to understand any potential bias and to correlate results with demographic or firmographic information to see whether bias is coming out of these models. Almost all of these states have consumer disclosure requirements now. So whether it's auto-decisioning or the AI is just involved anywhere in the workflow at all, these states are now putting in consumer disclosures. And on accountability, many of these states are now requiring board-of-director oversight of and authority over some of these governance audit trails and things of that nature. So as you're looking across your business units, thinking about gaining buy-in and support for these kinds of decisions or automated workflows, these are just some of the things you're going to have to factor in.

So let's dive in and talk about Colorado. Colorado had a kind of trust-but-verify regulation, and they've already had some follow-on regulations as well. First, any AI being used in your environment needs to be subject to governance and risk management frameworks: development life cycles, testing, and ongoing monitoring. You need to conduct regular and routine bias testing and have it available on demand. There's an annual compliance attestation that all of these things are happening and being held true. And you have to have audit trails for any AI-driven decisions. And again, "decisions" is interesting terminology.
It doesn't really distinguish between a decision that was fully automated and one that was merely AI driven. You could argue pretty successfully that if AI made a recommendation to a particular employee or individual, and that information was used in the decisioning, that would be an AI-driven decision. So again, having those audit trails will be essential. This rule hasn't been around that long, and it's already been amended this calendar year to expand coverage beyond life insurance to auto and health insurers as well. So it's not the specifics of this particular regulation; it's the speed and frequency with which these rules are changing that will force your business to be able to toggle and be configurable in how you incorporate AI.

Another example is New York. This is a 2024 guideline, really aimed at evaluating whether AI or its data sources correlate with protected classes. So we're getting back to that bias piece; there's similar language around bias audits and tracking. New York added an interesting piece: in order to incorporate AI in the process, you have to justify your use of it based on a legitimate business necessity. I don't think that's a very high bar to cross, and there are plenty of arguments there and a pretty loose interpretation, but you have to be prepared and mindful that you actually have to justify it. You still need to maintain a governance framework, so that comes up again, and this is one of the first areas where board-level oversight is required at your organization. And again, notifying consumers when AI influences underwriting or pricing decisions at all. All important steps, all things to be aware of, but again, these things are changing under our feet as we go.

And then there's Texas, not usually known for being the strictest in these regulations, which has actually taken an even further stance: they've prohibited automated systems from making adverse determinations in health insurance claims. Anytime there's going to be a decline of any kind, it now requires a human in the loop. And I use this to make the point that if your AI was an on-off toggle switch, all or nothing, you'd really need a branch in your workflow and your logic that says: if it's in the state of Texas, I need to treat it slightly differently and send it to a human-in-the-loop queue to review, no matter how big, no matter how small. The rule makes no distinction on size or impact. So again, I'm just using this to illustrate that these things are changing. Agility and configuration really are must-have, essential capabilities here.

So let's talk about what to do with that. I've now made the case that a binary on-off switch is not great. So what's the alternative? This is where configurable AI comes in, and you're starting to see it emerge even at some of the large AI companies. OpenAI has a drag-and-drop configurable agent builder now. Why? So that you can customize the AI experience to specific needs and goals, right? This is going to happen more and more. This is what I mean by configurable AI, and I'll show you a visual of it in a minute. But we have to think about how AI adapts to your workflows, policies, and risk tolerance.
And this is where I'll pause and share a little of what I'm seeing. I talk to a lot of our existing clients and a lot of potential clients. I talk to a lot of CIOs and compliance people at these companies who are very interested in AI policies and data governance controls. What I've really taken away is that there is a spectrum of risk tolerance, right? We have companies who are really out there to go get it and be as aggressive as humanly possible, and we have people who are looking to dip their toe in the water first: first we try, then we trust, we build some momentum, and we move it along. It's important to be realistic about that. There's no right or wrong answer there. Every company's culture, organization, regulation, circumstances, climate, and budget are all different, right? And so having configurable AI was very important to us as we thought about our strategy, and I do think it allows the adoption of AI within your business to grow at a pace that is culturally acceptable for your organization.

The next one here is actually very important. We talk about AI as if it's a huge, life-changing feature, but there is this adoption question, and I'll give you some examples. Oftentimes when you're buying software, you configure the capabilities, you test it, then you launch it, and it works. But that's really an old framework. That framework works for code. Code is static; the code is supposed to do one thing, and it either does it or it doesn't, it passes or it fails. AI is a more amorphous thing, right? Interestingly, the AI functionality could work, but do you like the results? Are the results consistent? And when you think about testing, testing is only as good as the inputs you provide to the AI to test what the outputs are.

So let me give you another way of thinking about adopting this. Having configurable AI could mean that you've done that testing and you've, quote, set it live, but you've chosen not to bring the AI output back into the platform, or chosen not to incorporate it into the decisioning, or maybe not even to present it to the end user. You're allowing it to do these calculations, summarizations, or recommendations in parallel, so that you can pull a report thirty, forty-five, or sixty days later and see what it would have done in the wild. So again: first you try, then you trust. It's a different way of thinking about adoption, building that case, and getting comfortable with those actions. You then might say, okay, I do want to bring it into the system, but I want to put a beta flag on it, or I want it to make recommendations only. That's another phase of adoption. And when you've done that enough times, you can track: how many times does the human hit accept and just do what the AI says? How many times do they edit it? And when you reach an acceptable percentage, then you could have the AI just automate it. This idea of phased adoption, gaining trust and comfort in a responsible, organized, methodical way, is all predicated on configuration, the ability to have control over those phases and those individual decisions.
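To make that phased-adoption idea concrete, here is a minimal sketch in Python of what such a rollout control could look like. This is an illustration under assumptions, not Origami's actual implementation: the phase names, the audit-log shape, and the promotion threshold are all hypothetical.

```python
from enum import Enum

class AdoptionPhase(Enum):
    SHADOW = "shadow"        # AI runs in parallel; output is logged, never shown
    RECOMMEND = "recommend"  # AI suggestion is shown; a human accepts or edits
    AUTO = "auto"            # AI output is applied automatically

class PhasedAIStep:
    """Wraps an AI call so its rollout can be dialed up one phase at a time."""

    def __init__(self, ai_fn, phase=AdoptionPhase.SHADOW, promote_at=0.98):
        self.ai_fn = ai_fn            # underlying AI call (summarize, recommend, ...)
        self.phase = phase
        self.promote_at = promote_at  # accept rate required before automating
        self.accepted = 0
        self.reviewed = 0

    def run(self, record, audit_log):
        result = self.ai_fn(record)
        # Every invocation is logged with inputs and outputs for later audit,
        # so you can pull a report 30/45/60 days later and see what the AI
        # "would have done in the wild."
        audit_log.append({"record": record, "phase": self.phase.value, "output": result})
        if self.phase is AdoptionPhase.SHADOW:
            return None  # nothing surfaced to the user or the decisioning
        if self.phase is AdoptionPhase.RECOMMEND:
            return {"suggestion": result, "requires_human_review": True}
        return {"decision": result, "requires_human_review": False}

    def record_review(self, accepted_as_is: bool):
        """Track how often the human hits accept versus editing the suggestion."""
        self.reviewed += 1
        self.accepted += int(accepted_as_is)

    def ready_to_automate(self) -> bool:
        return self.reviewed > 0 and self.accepted / self.reviewed >= self.promote_at
```

The point is not the specific threshold; it's that each phase transition is an explicit, auditable configuration decision rather than a product default.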
I've just given you one example, but the reality is that every prospect or client I talk to has different ideas and different risk tolerances and may want to adopt it in slightly different ways. Having that command and control will be absolutely essential, so I'm really excited about that part. Then there's support, and this comes back to transparency at every step: What were the inputs? What were the outputs? How do you report on that at scale? How do you monitor it, maintain it, track it, and work it through? So again, things to think about in terms of configurable AI.

So when we think about how to evaluate AI: you're out talking to a lot of partners and platforms, attending webinars like this one, and thinking it over. How do you evaluate people, right? First, I want to talk about privacy-first architecture, and this may be a little controversial. If you're talking to any company at this point that isn't one of the multi-billion-dollar companies building its own large language models, and they tell you they have their own language model, you should run for the hills. The reason I'm telling you this is the reality that, as you read in the news about all the big AI factories and power plants consuming all that electricity, there are billions and billions of dollars being poured into training and building these models. There are tons of models, with iterations happening all the time. So when someone comes to you and says, you can use our AI, but we're going to ingest your data, your data is going to help train our models, but don't worry, we have fifty million claims, or fifty thousand claims, or whatever records they have: all I'm here to tell you is that the large language models have billions of records at this point, and the pace at which they're evolving is staggering. The smart players are leveraging those investments, those proven big models that are evolving much faster than anyone can even imagine, and they're layering on a special sauce. They're layering on controls, guardrails, guidelines, infrastructure, configurability, and transparency to enable you to bring AI features to market quicker. Little startup companies are not building their own language models, and if they are, they will not be able to maintain, compete, or scale with the big players out there, given the pace of quality we're seeing out of the big language models.

I won't disclose any names, to protect the innocent, but we've run a lot of tests with a lot of individual partners. Some of them proclaim, and I'll use claims as an example, that claims is all they do, that they are experts at claims. We ran the exact same test against a purpose-built model that was significantly more expensive, and then against some of the regularly available big models that are much less expensive and much easier to use. And we're talking about seventy-six percent accuracy for the purposely trained model versus ninety to ninety-five percent accuracy for these large language models, doing the exact same prompts with the exact same asks and the exact same summaries. So it's not hypothetical. I'm not making this up. This is where these models have now grown and morphed to.
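For what it's worth, the kind of side-by-side test described above is straightforward to run yourself. Here is a minimal sketch; the model names, the exact-match scoring, and the call_model stub are all hypothetical stand-ins for whatever vendor SDKs and grading rubric you actually use.

```python
def call_model(model_name: str, prompt: str) -> str:
    # Stand-in for a vendor SDK call; wire up the client of your choice here.
    raise NotImplementedError

def accuracy(model_name: str, cases: list[dict]) -> float:
    """Fraction of test cases where the model's answer matches the expected one.
    Real-world scoring is usually fuzzier (rubrics, human graders), but the key
    is identical prompts and identical expectations for every model tested."""
    correct = 0
    for case in cases:
        answer = call_model(model_name, case["prompt"])
        correct += int(answer.strip().lower() == case["expected"].strip().lower())
    return correct / len(cases)

test_cases = [
    {"prompt": "Summarize the following claim file: ...", "expected": "..."},
]

# Hypothetical model names; the point is the same prompts against both.
for model in ("purpose-built-claims-model", "general-purpose-frontier-model"):
    print(model, f"{accuracy(model, test_cases):.0%}")
```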
So again, I'm just giving you tips and tricks to ask about and think about when you're talking with partners. Obviously I've talked about this, but ensure that whatever partner or platform you're looking at has configurable AI. How will it adapt not just to the workflow you can imagine today, but to a workflow that is going to change, right? There will be new ideas, new technology, new regulations, new requests from stakeholders in the business. How does that tool morph and grow with your business, with a changing regulatory landscape, at scale? And I've said it, but I can't say it strongly enough. Our Chief Information Security Officer at Origami reports to me, and she would yell at me if I didn't hammer this point: transparent outputs, bias monitoring, ethical guardrails. How can you try, then trust, and how can you trust but verify? Make sure all of those things are in place and that you feel comfortable with those tools prior to making a decision.

So I keep talking about configurable AI workflows, embeddable workflows. I thought I'd put a visual on the screen to explain what I mean by configurable workflows. At the end of the day, software has three things. All software is exactly the same; maybe that's stealing a little of the magic. There is a front end, where the user interacts with buttons and fields and dropdowns. There's a middleware logic layer: I press this button, something happens, a calculation runs. That's usually where most of the secret sauce happens in software solutions. And then there's a database, a place where we store the data and can recall it quickly, efficiently, and easily for clients, right? All software is those three things, in the same order, over and over again.

So when I talk about embeddable workflows or AI workflows, close your eyes and imagine what you or your users do every single day. If you open up Outlook, click the new button, and start typing an email, that's a workflow. It's very basic. But when Microsoft is thinking about how to incorporate AI into Outlook, that's where they're thinking: how, in the normal course of doing business, do we incorporate AI into those practices? That's how Origami is thinking about it, and I think that's how you should be thinking about it. So: a trigger. Something happens. It could be a new record being created, a button being pressed, something happening in the system. It doesn't have to be human-initiated; it could be automated or scheduled, but that's a trigger. Then an action happens: a file is created or saved, data is changed or updated, a status is changed. Maybe another automated action happens, a calculation runs. And then maybe we want an AI step. That AI step could be: I want to summarize the record, I want to provide recommendations, I want to analyze it. There are all kinds of ways to use AI there, but you're incorporating AI into this particular workflow at a specific step that makes sense for your business. And then you save that information back to the record. If you think about this pattern, this logic gives you a lot of power. You could take out that AI step, and you're not incorporating AI at all.
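Here is a minimal sketch of that trigger, action, AI step, save pattern, with the AI step as an optional, swappable stage and a rule like the Texas one discussed next gating it. The field names and the rule itself are illustrative assumptions, not a real schema or Origami's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Workflow:
    """Trigger -> actions -> optional AI step -> save, as a configurable pipeline."""
    actions: list = field(default_factory=list)          # e.g., create file, set status
    ai_step: Optional[Callable[[dict], str]] = None      # remove it: no AI in the flow at all
    ai_allowed: Callable[[dict], bool] = lambda r: True  # e.g., a regulatory branch

    def on_trigger(self, record: dict) -> dict:
        for action in self.actions:
            record = action(record)                      # ordinary automated actions
        if self.ai_step is not None:
            if self.ai_allowed(record):
                record["ai_output"] = self.ai_step(record)  # summarize / recommend / analyze
            else:
                record["route_to_human_queue"] = True       # regulatory human-in-the-loop
        return record                                    # "save" back to the record

# Illustrative branch: skip the AI step for Texas health claims (where adverse
# determinations may not be automated) and route them to a human reviewer.
def texas_branch(record: dict) -> bool:
    return not (record.get("state") == "TX" and record.get("line") == "health")

workflow = Workflow(
    ai_step=lambda record: "summary of record ...",
    ai_allowed=texas_branch,
)
result = workflow.on_trigger({"id": 1, "state": "TX", "line": "health"})
```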
Again, in the case of some of the regulations, after action two you might have a branch: if it's the state of Texas, do this; if it's not the state of Texas, go ahead and use the AI and keep going on your merry way. Allowing yourself that comfort and flexibility using some basic logic and tools lets you very quickly adapt these workflows as your business changes, as regulations change, and keep moving forward. So I did want to provide a visual example; I think it helps contextualize what I mean when I talk about configurable AI.

And so again: building for what's next. Future-ready starts with configurability. On innovation: people who are rushing out AI features, again, that's a little bit more hype. Really great, well-thought-through things that solve really complex problems usually aren't built in two weeks, so be a little mindful of that. Look for innovation that's intentional and grounded. Does it solve real problems? Is it ultimately helping you get to a better place? Making sure that it's configurable enables resilience: it's stable, it's adaptable. And this is my personal opinion, but I think it's also the position of Origami: the future of AI is truly collaborative, transparent, and adaptable. We don't know what's next. We don't know what's going to happen a year from now, and we didn't know what was going to happen originally as this started to groundswell. So building for the unknown has to be intentional. You have to build with that mindset so that you can be adaptable and you're not backed into a corner in future periods.

And that's really the summary of my talk. I wanted to give you a framework of things to think about, outline a little of the regulatory landscape and things to look out for, and hopefully offer some helpful tips on how to have fruitful conversations with the various partners you may have. So with that, I can turn it over to some questions.

Okay, great. Thank you, Ryan. That was really interesting. So we're going to open up the program to Q&A. Like I said earlier, you can type those questions into the Q&A section. It's at the bottom of your Zoom toolbar on a computer; it's up at the top if you're on a cell phone or an iPad. And we'll start with our first question now. A big one that we get asked a lot: when we're looking at AI, how much visibility should we expect to have into how AI reaches a recommendation, and what should we be looking for? What's being done to monitor for bias in those recommendations?

Yeah, so other than diving deep into the model itself, which I would not advise anyone to do (even my team struggles with that a little bit, though we're getting better every day), you want to understand what the inputs are and what the outputs are. And believe it or not, and I can speak a little bit about what we've done at Origami, one of our AI features is what we're calling an analyze feature. This is going to sound a little bit like Inception for people who've watched the movie, but it actually asks AI to analyze the AI. What that can do is identify accuracy variations over time. You've maybe heard of words like hallucinations, but you can also have drift, right? Where things were working just fine, but then, if you're overtraining a model or the model shifts ever so slightly, the results can change over time, and you really want to be alerted when those things happen.
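As a sketch of what that kind of drift alerting could look like, assuming the quality signal is something like the human accept rate or a sampled accuracy score (the window size and threshold here are arbitrary illustration, not a recommendation):

```python
from collections import deque

class DriftMonitor:
    """Alert when a rolling quality metric (e.g., human accept rate on AI
    suggestions) drops more than max_drop below a frozen baseline window."""

    def __init__(self, window: int = 200, max_drop: float = 0.05):
        self.window = window
        self.baseline = []                  # first `window` scores, then frozen
        self.recent = deque(maxlen=window)  # rolling window of the latest scores
        self.max_drop = max_drop

    def observe(self, score: float) -> bool:
        """score: 1.0 if the output was accepted/correct, 0.0 otherwise.
        Returns True when downward drift is detected."""
        if len(self.baseline) < self.window:
            self.baseline.append(score)     # still establishing the baseline
            return False
        self.recent.append(score)
        if len(self.recent) < self.window:
            return False
        drop = (sum(self.baseline) - sum(self.recent)) / self.window
        return drop > self.max_drop         # e.g., 95% baseline falling below 90%

# Usage: feed it one outcome per reviewed AI suggestion.
monitor = DriftMonitor()
if monitor.observe(1.0):                    # human accepted this suggestion as-is
    print("Drift detected: pause automation and re-audit the model")
```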
And so having those kinds of capabilities is important. And I think having governance and regular audits, whether sample audits or spot checks, however is appropriate for your organization, matters too. Make sure you have access to the inputs, the outputs, the data, right? What went in, what came out, what the recommendations were, in a structured way. No one wants to read blobs of content and try to extrapolate. AI is great that way: you tell it to come out as structured data, you get structured data, and that makes it a lot easier to report on, audit, and view with transparency.

Okay, thanks. When you're looking at this, Ryan, where do you believe human oversight still needs to play a primary role? Do you think AI can really safely support our work in the insurance business?

Yeah. Listen, I think everything is on an adoption curve, folks, and right now, personally, my risk profile is that I would do human-in-the-loop on virtually everything, especially when you're getting started, and then track it over time. I think the best way to think about it is that humans have a certain expense and cost. AI will always have some sort of error rate, but humans have an error rate too, and we have plenty of programs, systems, audit trails, and compliance protocols in place to detect and prevent those human errors. I think the same is going to hold true with AI. So in the medium and long term it ends up being a mathematical game, given the tracking and transparency. Listen, at some point, if the human is reviewing it and clicking accept on ninety-eight or ninety-nine percent of the recommendations, you'll just do the math on whether that step is really essential for your business. That decision will be individual and unique for every client. And there will be some regulations, as you saw in Texas, where you'll always have to have it. But having that flexibility is going to be essential.

Okay, thanks. Another question. You talked about regulations. With them changing so quickly across states and the federal government, how can we be sure that the AI tools we adopt will keep us compliant over time?

I mean, obviously there's no certainty with anything. A state could come out right now and say, I ban all AI in any part of this process, right? I don't see that happening; I think there needs to be some level of oversight and control to ensure it's fair and equitable, and really just to enforce good diligence and good decision making, so that people and companies aren't just rushing to adopt the latest thing. But the best defense you have is simple control and agility. That configurability is going to be essential, because we just don't know, and you don't want to be boxed in.

Okay. Well, it looks like that was our last question. Ryan, is there anything else you'd like to say before I wrap us up?

No, that was great. I appreciate everyone's time. Hopefully it was a little bit insightful and helpful for you and your business.
And I hope everyone has a splendid holiday season with family and friends. I appreciate you taking a little bit of your day and sharing it with us.

Okay, well, thank you again for joining us today. I want to extend a special thank you to our speaker, Ryan, and I appreciate everyone's participation. Have a wonderful day and happy holidays. Thank you.