
Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Navigating the narrow waters of AI can be challenging for new users. Interviews with AI company founders, artificial intelligence authors, and machine learning experts, focusing on the practical use of artificial intelligence in your personal and business life. We dive deep into which AI tools can make your life easier and which AI software isn't worth the free trial. The premier Artificial Intelligence podcast, hosted by the bestselling author of ChatGPT Profits, Jonathan Green.
Is Artificial Intelligence Changing the Insurance Game? With Amber Moss
Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we delve into the transformative impact of AI on the insurance sector with our esteemed guest, Amber Moss. Amber is a seasoned expert in risk management within the healthcare insurance industry, and she shares her insights on the delicate balance between leveraging AI technologies and safeguarding sensitive data.
Amber discusses the cautious integration of AI in insurance, emphasizing the importance of strategic data management. She highlights how AI helps her team produce polished presentations and handle standard risk-management verbiage, while also stressing the critical need for careful legal consideration, especially when AI-generated material ends up in court.
Notable Quotes:
- "It's easy to let AI do 100% of the work, but that's where the glitches happen." - [Jonathan Green]
- "Each case in insurance is so different... the way you have to speak needs to come from yourself." - [Amber Moss]
- "With insurance, we're very careful on how we're using AI just because of the critical nature of the data." - [Amber Moss]
Amber underscores the legal risks associated with AI, such as "hallucinations" producing fake material that gets submitted in legal filings. Speaking as a current law student, she shares the legal field's concern that judges could one day be made obsolete by AI, a technology whose roots date back to 1950.
Connect with Amber Moss:
Website: https://hotchkissinsurance.com/
Amber also touches on cybersecurity insurance, the necessity of identity theft protection, and the evolving nature of AI-driven threats, such as ransomware and data breaches. She advises on the implementation of secure AI practices within organizations to protect sensitive client information.
Connect with Jonathan Green
- The Bestseller: ChatGPT Profits
- Free Gift: The Master Prompt for ChatGPT
- Free Book on Amazon: Fire Your Boss
- Podcast Website: https://artificialintelligencepod.com/
- Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
- Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
Artificial intelligence is changing the insurance game. Let's find out if it's for the better or for the worse with today's very special guest, Amber Moss.

Welcome to the Artificial Intelligence Podcast, where we make AI simple, practical, and accessible for small business owners and leaders. Forget the complicated tech talk or expensive consultants. This is where you'll learn how to implement AI strategies that are easy to understand and can make a big impact for your business. The Artificial Intelligence Podcast is brought to you by FractionAIO, the trusted partner for AI digital transformation. At FractionAIO, we help small and medium-sized businesses boost revenue by eliminating time-wasting, non-revenue-generating tasks that frustrate your team. With our custom AI bots, tools, and automations, we make it easy to shift your team's focus to the tasks that matter most, driving growth and results. We guide you through a smooth, seamless transition to AI, ensuring you avoid costly mistakes and invest in the tools that truly deliver value. Don't get left behind. Let FractionAIO help you stay ahead in today's AI-driven world. Learn more and get started at fractionaio.com.

Amber, I'm really excited to have you here, because one of the areas where artificial intelligence is at its best is organizing, sorting, and analyzing data. We often think about making videos and all those cool things, but that's not always the most useful part. Often the biggest value comes from the boring stuff, and when I think of boring, I think of actuarial tables and statistics, and that's really where I'm at my weakest. Spreadsheets are my great weakness. But I know that a lot is changing in insurance. How do you see it, especially when it comes to risk and risk management? How is AI changing everything?

I think right now it's slowly coming into our industry, and we have to be very strategic and careful about how we're using the data and where we're using the data, because we're working with very sensitive data. In my line of work, specifically healthcare, I'm working with medical diagnoses, I'm dealing with claims, I'm dealing with socials. So for me, it might not be the best idea to use AI in a situation where I'm presenting data like that. But a lot of our team is able to produce more beautiful presentations for our standard verbiage and risk management lingo, so it's helping us produce better presentations. On the legal side, though, I think what's going to happen is when we go to court, because you do get sued in insurance, we'll have to say, Judge, I didn't say that, I used artificial intelligence to say that. So we have to be careful about how we're doing our presentations and how we're noting when AI is being used, so that in a court of law it's clear who said what and where it came from.

That brings me to one of, I think, the biggest dangers with AI, which is that it leads us to be too lazy. AI can do 90% of the work, but we tend to let it just do a hundred percent, and that's where the glitches happen. It's that feeling of, it did it fine the first eight times, so do I really need to double-check it the ninth time? And that's when something slips through the cracks. I see this all the time, where people will post something they didn't read, right? That's really, I think, the most dangerous thing, and the biggest problem is that it's so easy to slide into that mindset of, that's probably fine.
And that's the thought that goes through your mind.

I'm a current law student, and one thing we're noticing is AI hallucinations. There have been recent cases, and I have one right here, where a lawyer submitted something fake. It was going to go to trial. He used ChatGPT on a legal briefing, it got sent to the judge, the citations looked correct, it looked legitimate, but it was purely a hallucination. So the legal field is also worried that judges might become obsolete. And AI started back in 1950, so it's been a long time coming; I did a lot of research on the founder of AI and all that good stuff. But on the legal side, it's also becoming a problem if you're not protected properly with cybersecurity, and that's something we offer all of our clients, cybersecurity protection and policies. There were at least seven cases submitted to a federal court that looked legitimate but were not. So while there are so many positives, my biggest concern is the legal side of things: how do we control what is being sent to our courts, or what is being sent at trial, and who said it? Where did it come from, and what software are you using?

Yeah, you bring up something really important, which is that when it comes to the law, you have to be correct a hundred percent of the time, not 90 or 95% of the time. There's a really big difference, and there's this mistaken belief that AI never makes a mistake. That only applies to a few small categories; it never makes spelling or grammar mistakes, but everything else? It makes a lot of mistakes. I saw something as recently as someone using the Twitter AI and asking, when did Elon Musk buy Twitter? And it got the date wrong. If you're an AI, you should know when your master's birthday is, or whatever. So it gets something small like that wrong, and it's very common. I've had many times where it gets something wrong and goes, oh, sorry, you're right, but then if you ask it again later, it's still going to get it wrong. That's really hard for people to understand: it's an AI, but it's still fallible. In our heads, we think something is either an AI, a perfect intelligent machine, or it's a human, but it's really somewhere in between. I think this is a really important area, because we've really changed the definition of AI, which I think is a big mistake. AI used to mean a sentient machine, and it doesn't mean that anymore. Now it's more like a pretty smart word processor or image maker. I never thought, watching Terminator 30 years ago, that someone would say, oh, if you push a button and it makes a picture of a cat riding a unicycle, that's also AI. It's very different, and we've had this mission creep, this definition creep, that's caused a lot of problems. Many people say, we're calling it AI, but it's not. They say strong AI, weak AI; now they're saying artificial general intelligence, which is a terrible name, like, let's just pick a name that doesn't sound very good, or synthetic intelligence, which they'll probably go with next. But I think the important point is that you still have to double-check the work, just like you would for any first-year associate or summer intern. Mistakes happen, yet the assumption is that AI never makes mistakes. And one of the things that happens as you work with AI a lot is that you start to notice
consistency in the mistakes. There are certain things ChatGPT does that let me tell right away when someone's used it. I'm working on a book project for someone, and I can tell when they used ChatGPT on certain chapters. I can say, oh, you didn't write this, right? It doesn't sound like you anymore, and it uses certain words we never use. It's different for every person, but there's a certain consistency to it. Like, in the ever-changing digital landscape: whenever I hear the word landscape followed by a comma, I know it's AI, because humans don't say that. It's a weird phrase. Or pondering; I've never pondered, I thought. So there are certain giveaways.

That's true. Certain giveaways. Yep. And that's why, going back to insurance, we have very smart clients. We're working with multimillion-dollar companies and CEOs, and we have to go in with our best foot forward. I personally use artificial intelligence a lot for my consulting firm, but when it comes to my insurance work, each case is so different, and the way they fund their plan, the way you have to speak, I think needs to come from yourself. We are slowly warming up to it, but I will be honest with you, it's only been in the last few months in my organization that we're seeing teams going, oh yeah, I used AI, and us going, well, where did you get that AI data? What are you using to do it? We need to be consistent as an organization. We pride ourselves on giving our clients a hundred percent of our best, and so we're very careful about what we use AI for, just because of the critical nature of the data.

Yeah, there are a lot of legal implications of AI that people don't pay attention to, so there are these different pieces you have to have. Do you actually have a contract with the company? Do you have a BAA that says they won't train on your data? On my main project, I work for a company where we have to be HIPAA and SOC 2 compliant, which means we have very specific requirements for our data and which tools we're allowed to use. So people ask, which AI can we use? And I'm like, none of them, until we sign a contract. That's really hard for people to understand, because accidentally revealing personal information is a really big deal, and it happens all the time, and there are all these ways you don't think about, like doing a screen recording and accidentally flashing the wrong tab for one second. Yeah, that counts. So I'm really interested in how things are changing going forward. You mentioned cybersecurity insurance earlier. Is AI changing the approach to that? Because now there's this new vector, which is that people can use AI to mimic a voice or mimic a video, or run these new phishing or smishing attacks. There are all these different types of attacks, and you can also socially engineer a chatbot now. You can actually trick a chatbot into revealing information if it's trained on too much information, which is something I struggle to convince my clients of. I'm always overly security conscious; I'm like, the chatbot shouldn't know anything you don't want everyone to know, because someone can always trick it. There's always a better mousetrap. So how do you see insurance changing? Do you think we're going to start seeing artificial intelligence insurance? Because we've seen the way
viruses work online change over time. It used to be that people released viruses just because it was funny to crash computers. Then we've seen mistakes from big companies like CrowdStrike, where they accidentally shut down a large portion of the internet because they made a mistake with an update. And in between, now we're seeing more of these data hostage attacks, where they lock down a company's data and say, give me money and I'll give you back your data. What do you see?

Yeah, ransomware.

Yeah, ransom, that's the word I was looking for, not hostage. So what do you think is the next iteration? Are we going to have to have insurance for the AIs you create, or insurance against AI attacks? Do you think it will change?

I do. I'm looking at a policy right now, and they look at a company's previous cyber incidents: extortion, a malware infection, and you can put AI under other if there's something there, data loss, privacy breaches, ransomware, a denial-of-service attack, theft of funds, that's another one, because AI can trick you into handing over funds. I don't write this personally, our commercial department writes these policies, but I do think there's room for coverage, depending on how the AI affects the actual attack on the information. But we pretty much tell every client with assets and employees who uses the internet, a hundred percent, protect yourself with a policy. It's just too scary now. And another thing I've learned: there are attacks now, and I know this is going off AI a bit, but AI is smart enough to maybe even start some of this, where an attacker gets into your system and stays for years. They hang out and chill, and you never know they're there. I just did a report on a situation where the attackers were in a company's data for over a year with no one knowing; they had no knowledge it was happening. So yes, I definitely think AI policies, or AI coverage within these cyber insurance policies, will probably become its own category. I don't think there's a lot of data, legal data, yet on the actual cost or threat it poses, because right now most of us are using it to make our presentations pretty. You're not going to have an actuary or somebody who's trying to define mortality and things like that use AI for that information. I don't think we're at a point, at least in insurance, where we have a huge risk, because we are very careful about how we're using AI.

Do you think that now that everyone's working remotely, it's increasing the risk of this type of vector, because people aren't in the same room?

I do. Actually, I met with my CEO last week and had a conversation with him, because it's come to my attention that a lot of folks are using AI in their daily work. I actually did a poll; it went to about a thousand people, and I didn't get a huge response, but it looks about 50/50 between those who use it daily and those who don't. That's one thing companies have to know: who is using AI in your organization, what are they using it for, and what program are they using? You can't just go out into the world and do anything, and then that data is available to anybody. So our organization will have one software program that is vetted, safe, and protected, and we will have training on how you use AI and what's acceptable and what's not acceptable.
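Jonathan's earlier point that the chatbot shouldn't know anything you don't want everyone to know, combined with Amber's single-vetted-tool approach, comes down to limiting what ever reaches the model in the first place. Here is a minimal sketch of that data-minimization idea in Python; the field names and allowlist are hypothetical examples, not drawn from any particular product:

```python
# Illustrative only: keep sensitive fields out of the chatbot's context entirely,
# so a prompt-injection trick has nothing confidential to extract.
PUBLIC_FIELDS = {"plan_name", "coverage_summary", "office_hours"}  # hypothetical allowlist

def build_chatbot_context(record: dict) -> dict:
    """Return only the fields explicitly approved for public exposure."""
    return {key: value for key, value in record.items() if key in PUBLIC_FIELDS}

client_record = {
    "plan_name": "Gold PPO",
    "coverage_summary": "In-network deductible $500",
    "office_hours": "Mon-Fri 8-5",
    "ssn": "###-##-####",          # never passed to the model
    "diagnosis_codes": ["E11.9"],  # never passed to the model
}

print(build_chatbot_context(client_record))
```

The design choice is that even a well-crafted jailbreak can only surface data that was already cleared for everyone to see.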
And that's one of the big challenges: AI changes so fast, the tools that are available, the promises that are made. There's a major AI news story every single day. It's my full-time job to keep aware of everything, and often I'm only aware of the changes; I haven't had time to read the whole new white paper or the new article every day, because so many things happen. That's where I think we're seeing almost a singularity: things are happening so fast it's mathematically impossible to keep up. I use an AI to tell me about other AI things, and I think we're going to start seeing people building AI to protect their infrastructure and AI to attack infrastructure; more and more of that, I think, is the future. And I actually think there's going to be a larger shift back towards in-person. One of the ways we used to handle the internet was that you didn't have access to the internet at your job. There was an intranet, so if you needed a piece of data, they would download it; they would download a website and you could view it, but you couldn't actually exit the building through the connection. That seems to have disappeared over the 25 years or so since I first started working in IT in 1999, and now we just block a few random websites, but you can go to almost anything. I think the expectation has changed so much. One of the challenges we've had is just convincing everyone that we have to put certain security infrastructure on their work laptops, and if they're using a personal laptop for work, we have to put it on there too. We have to secure the endpoint, absolutely. We have to secure your passwords, and we have a little app on there that tells us when you're not doing it, so I get a text every hour for anyone in the entire infrastructure who isn't. And it's really hard to create a culture of security where people take it seriously. Once a company enters certain compliance regimes, there are rules for how you have to discipline people who don't comply. If you don't update your software within a certain amount of time, you get a warning; do it again, you get another warning; the third time, the company has to fire you to maintain its compliance. We're entering this new phase where we just have to have these really strict rules, because there are still people falling for old scams, like sending money to a prince from Africa, or picking up a USB stick they find in the parking lot. It's amazing that these things still work, even on people who are very switched on; they just catch you at the right time. And I saw something really interesting happen: LinkedIn has become a really popular vector for attacks. They'll look at what company you work at and start sending you emails that seem like they're from the CEO of that company, trying to initiate some type of conversation, and it's easy to fall for that. You don't notice, because it shows the CEO's name unless you look at the from line. And part of it is that all of these inboxes and email servers claim they have an AI, but they're dumber than ever; they're letting more stuff through. If the name of the sender and the from address don't match, the email should not come through, or it should at least have a flag on it. Why is that so hard to mark?
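That display-name-versus-address mismatch is simple enough to sketch. Here is a minimal illustration in Python using only the standard library's email module; the executive list and addresses are hypothetical, and a check like this is a complement to, not a replacement for, the SPF/DKIM/DMARC validation a mail server should already be doing:

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical map of executive display names to the domain they really send from.
KNOWN_SENDERS = {"jane ceo": "examplecorp.com"}

def flag_suspicious_sender(raw_message: str) -> bool:
    """Flag a message whose display name claims a known executive
    but whose actual From address uses an unrelated domain."""
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.split("@")[-1].lower() if "@" in address else ""
    claimed = display_name.strip().lower()
    return claimed in KNOWN_SENDERS and domain != KNOWN_SENDERS[claimed]

# A spoofed "CEO" mail sent from a free webmail domain gets flagged.
spoofed = "From: Jane CEO <jane.ceo@freemail.example>\nSubject: Urgent wire\n\nPlease send funds."
print(flag_suspicious_sender(spoofed))  # True
```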
We literally still don't have security I thought we'd have 30 years ago. And what's interesting is that when you change a position, and this happened to me, you start getting tons of emails saying, congratulations on your new position, and we're definitely not friends. I actually can't tell the difference between really bad sales and an attack; either way, I don't want to be friends with you. One person, actually from a large company, because I saw the email, I didn't click anything, wrote, we're your vendor for this. And I thought, you're definitely not. I've been working in this role for three months, I don't change my LinkedIn very often, and I know our entire tech stack; I run the dev team, I know all of this stuff. So I messaged my lead engineer and asked, have we ever worked with this company? She said, absolutely not. So even large companies are trying to use this. I don't know if there's a better word for it than deception marketing.

I think another thing we have to be aware of is that there are things we can't assume we know. I'm learning that in law school. That's when you become weak, when you trust something too much, thinking, okay, we've got this software, we know we're in good hands. No, with this, you never know. You have to always be alert, and like you were saying, you have to keep up with the software updates, because more than likely we are all open to some type of attack these days, and we have to understand that. We don't know what we don't know.

I used to think, oh, my website's too small, no one would ever attack me. Then I put on software that would alert me every time there was an attack, and I had to turn off the alerts because they were too frequent; every three minutes an attack would come in. There are these massive tools that, any time a new website comes up, know it has the weakest security. That's the best time to slide something in, before someone's got everything installed, right while you're setting it all up and before you know what's going on. I'm very security conscious and very paranoid, but it doesn't matter, because all you need is one unlucky moment or something weird, and I've had credit card numbers stolen multiple times. But every time there's a transaction on any of my cards I get a text, so as soon as something weird happens, I know, that's definitely not me. So we're seeing more and more changes in the market to adapt to this, and I just wonder if it's ever going to get easier. I almost feel like we're shifting back to in-person meetings, like the only thing you trust is meeting someone in person and looking them in the eye, because that's the only way you know you're talking to an actual person.

True. Actually, there have been meetings just like this one where the person isn't who they claim to be; I've heard crazy things. As a risk manager, I think the best we can do is this: Americans with a business need to make sure they have cybersecurity coverage, and everybody needs identity theft protection. I've had it for 20 years, and I have a million dollars in coverage.
You need to have those things to protect you, and it makes me feel better knowing that if my identity is stolen, I can file a claim for a million dollars. So I think those three things: credit monitoring, identity theft protection, and cybersecurity policies. Then you're backing yourself up against that risk as much as you can. To me that makes sense, and it's not very expensive; to me it's worth it just because of the unknowns.

I want to ask about something else, because you brought something to mind. I see a lot of commercials now for these services where you can pay data brokers to stop selling your data, and I don't believe it's true. This is just my feeling, and I'm not going to name any specific companies, but it feels like they're saying, we stole your data, but if you pay us, we'll stop selling it. You've already sold it a thousand times; it's out there. Visa's had a breach, Wells Fargo's had a breach, Bank of America's had a breach. My identity's probably been stolen six times.

And I have had things happen. I personally had somebody use my Social to try to get an apartment in Oregon. I've never been to Oregon, so I had to fight that, but I had the insurance coverage to fight it for me, so it wasn't a big deal. Stealing your data isn't hard anymore; you just have to make sure that risk is protected as best as possible, in my opinion.

I think one of the challenges is that the vector is always what you don't expect. The ten biggest losses in Vegas were the things they didn't expect: they used to have insurance in case the tiger went into the audience, but they didn't have insurance for the tiger attacking the performer, and that cost them. It's the thing you don't think of that always gets you, and what we're seeing now is these different types of attacks. My mom sent me money for one of my kids' birthdays recently, and my sister called me ten minutes later and asked, did Mom really send you money, or did she just get tricked? And I thought, that actually makes a lot of sense; no, it really was us. But it was a really good question when I thought about it, because it's very common for people to pretend to be you. I was doing a video call with my sister so she would know it was me, and when she messaged me, I wasn't offended; it totally makes sense, because it's probably exactly what scammers pretend to do. And she gets alerts whenever any money goes out of my parents' accounts, so that nothing happens. You have to get to that point, because there are so many different ways of tricking people and misleading people, or man-in-the-middle attacks. Something I find interesting is that a lot of companies still use non-secure security methods. A lot of companies do the text message thing, we'll text you the code, and I'm like, that's been insecure for 12 years. Everyone knows how to get around that, and they're still doing it. So I always have to set up policies where I work where I say it has to be a device, it has to be a two-factor thing,
it has to be scanning the QR code, because the text message one is not that secure.

No, you're right, and it's really scary. That's why, with insurance, I joke around with my team that the hardest part of my job is chasing my passwords and authenticators, because I've got, say, a hundred websites I work on for carriers and different things, and we keep everything secure, so I have to go through three different processes to get into a website. It's worth it for my clients. I think the more you can do to protect your data, the healthier your organization will be as far as cybersecurity.

There's this story from high school. One of my friends was in a band, and he goes to the store to buy drums with $800, and they tell him, you can buy a set of $500 drums with cases, or a better set of drums for $800 with no cases. And it's better to get the cheaper drums, because they're going to get trashed. I can't remember which one he bought, but that's the important lesson: we often think, it'll never happen to me, or I'm too small, or all of these things. But actually, most attacks are mass attacks; they hit as many small websites as they can, because they're looking for volume. You only need one person to fall for it. I used to know someone who was a spam emailer for a living. Wow. He would just send people the dumbest emails, for something you spray in your throat and then you lose weight, and I was like, oh my gosh, who would ever click on that? And he said, enough people do. That's the problem: enough people do that it's worth it at the volume level, if you email enough people pretending to be their CEO, because all the data on LinkedIn is very easy to extract. Yes, it's very easy to download it all. And when I changed my position, it asked, do you want to post about it? I said, no, definitely not. I don't want a bunch of those trite messages, and I just don't want that on my feed. And even though I waited a really long time, I still started getting messages within minutes, all by email, not even through LinkedIn. All these people have automated systems for any time someone changes a job position. I have this saying that if someone sends me a social media message on my birthday, I know we're not friends. So last birthday, everyone who gave me a LinkedIn birthday message, I disconnected from, because you know who didn't give me a LinkedIn message? My kids, my wife, my sisters, my parents. The people that matter. Yeah, the people who know you in person. It would be super weird if you came to my birthday party and I asked, where's my present? and you said, check your LinkedIn wall. That's so true.

I think we work fast nowadays, too. I know in our industry we're doing 15 or 20 things at a time, and I think sometimes that will hurt you. We have to slow down sometimes. AI is making us look smarter, but we're also working even faster, so there are going to be more mistakes. I think it's a hard area to manage in a big organization, but you have to, and we're taking steps to ensure that everybody is doing the same thing with AI. I know that's hard to manage thoroughly, but that's our goal for sure. Yeah.
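On the two-factor point above, where SMS codes get replaced with scan-the-QR-code authenticator apps, here is a minimal sketch of how that TOTP flow typically works, using the third-party pyotp library (an assumption; any TOTP implementation would do). The account name and issuer are made-up examples:

```python
# pip install pyotp   (third-party library, assumed available)
import pyotp

# One-time enrollment: generate a per-user secret and a provisioning URI.
# The URI is what gets rendered as the QR code the user scans with an
# authenticator app (Google Authenticator, Authy, and so on).
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(
    name="user@example.com", issuer_name="Example Carrier Portal"
)
print(uri)  # encode this string as a QR code during setup

# At login: verify the six-digit code the user types in.
totp = pyotp.TOTP(secret)
code = totp.now()           # stand-in for the code the user would enter
print(totp.verify(code))    # True within the current 30-second window
```

Unlike an SMS code, the shared secret never travels over the phone network, which is the weakness being pointed at here.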
It's really hard to get everyone on the same page, because once people develop a habit, they're a ChatGPT person, and switching them to a different platform is really hard, or even just telling them, you can't use that at work. With my new project, I have a work laptop that I only use for work, and it's not super convenient to do that, but if I don't do it, nobody else will. You have to set the example. There was this famous story of a big video game company that got hacked for millions; attackers were deleting players' accounts and things like that for seven years, and the company couldn't figure it out. Every single user at the company changed their password, and one person didn't: the CTO, and that's who they'd hacked. I don't want to be one of those stories. I don't want to be the lawyer who submitted the brief that was all written by AI, or the one person who says, my password would never get cracked. So I don't have anything that's not a work app on that computer, just being that secure, because it's the only way you can set the tone. And that's one of the big problems I see: a lot of CEOs and C-suites go, everyone else has to be secure, but our stuff is fine, or, I don't really know how to use my computer. I've definitely known CEOs for whom the computer is more of a decoration; it's not even plugged in. And I guess if it's not plugged in, they're okay. Yeah, it's a centerpiece for some. And now that I get an alert every time someone has an un-updated app on one of their computers, I notice how fast updates come out and how many security breaches are constantly getting found, and it's really hard to keep up with it all. So I think it's very interesting to see how the insurance world is changing. I think it's important for a lot of people to start realizing that it's not about the size of your company. There are a lot of misunderstandings as well, like the idea that these different types of insurance are massively expensive. But because the typical attack is a mass attack, where they hit a bunch of people and hope for a random hit, the odds of you specifically getting hit are very low, so the cost of the premiums is very low. It's a statistics thing, a lot of spreadsheets and math, but it means the coverage doesn't have to be expensive. You're spinning the Wheel of Fortune: you probably won't hit bankrupt, but if you do... So it's a very interesting shift to watch. I think this is very valuable for people, starting to understand what's important and seeing how there's an intersection between AI and insurance, and how insurance people are being very cautious. For people who want to know more about what you do, connect with you online, and maybe even buy a little insurance, where's the best place to connect with you and find out what you're doing?

You can go to our website, which is hotchkissinsurance.com. My email is on there too, Amber Moss. We have over 200 employees, we've got very strong risk managers, and the company's been around for over 50 years. We can help you with any questions you may have relating to cybersecurity, or really any insurance; we do it all.
So we're risk managers, but we're happy to have the discussion and help you become more secure in your business and protect what most of us have worked our whole lives for, because it only takes one big attack to bring down your entire organization, which is a really sad thought.

It sure is. Thanks for ending us on a sad note. Thank you again.

Sorry! It is something that we see, but we're here to help; that's the good news. There are ways to protect yourself. Just be alert, be aware, and cover your risks as best as you can. That's all we can do: cover the risk.

You're absolutely right. Thank you so much for being here for today's amazing episode. Man, it was fun.

I had fun. Bye, everyone. Bye.

Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and strategies on how to leverage AI to grow your business and achieve better results. In the meantime, if you're curious about how AI can boost your business's revenue, head over to artificialintelligencepod.com/calculator. Use our AI revenue calculator to discover the potential impact AI can have on your bottom line. It's quick, easy, and might just change the way you think about your business. While you're there, catch up on past episodes, leave a review, and check out our socials.