Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

AI versus Cybersecurity with Chris Puderbaugh

Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits

Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we delve into the crucial topic of AI in cybersecurity with our special guest, Chris Puderbaugh, a leading expert in the field.

Chris shares insights into how AI is transforming cybersecurity by automating protection measures and addressing the ever-growing risks of digital transformation. He discusses how AI can serve as a safeguard against cyber threats by streamlining security processes and implementing defensive measures akin to advancements in the automotive industry over the years. This conversation highlights the necessity of fortifying human elements to remain secure amidst evolving threats.

Notable Quotes:

  • "There's not any type of cohesive fabric that's tying all those together, which really enables that automated response." - [Chris Puderbaugh]
  • "It's like we're too small or we're too big so we're not vulnerable." - [Jonathan Green]
  • "AI is certainly a business enabler, but the conversation needs to be around how can we grow the business responsibly and securely." - [Chris Puderbaugh]

Connect with Chris Puderbaugh:

Website: https://pellonium.com



Connect with Jonathan Green

Can AI help you manage your cyber risk? Let's find out with today's special guest, Chris Puderbaugh. Today's episode is brought to you by the bestseller ChatGPT Profits. This book is the Missing Instruction Manual to get you up and running with ChatGPT in a matter of minutes. As a special gift, you can get it absolutely free at ArtificialIntelligencePod.com/gift, or at the link right below this episode. Make sure to grab your copy before it goes back up to full price.

Are you tired of dealing with your boss? Do you feel underpaid and underappreciated? If you want to make it online, fire your boss, and start living your retirement dreams now, then you've come to the right place. Welcome to the Artificial Intelligence Podcast. You will learn how to use artificial intelligence to open new revenue streams and make money while you sleep. Presented live from a tropical island in the South Pacific by bestselling author Jonathan Green. Now here's your host.

Now Chris, I'm really excited by this because, when I first started off in technology way back in 1999, my very first job was in IT and cybersecurity. That was back during the I Love You virus, back when people would just open up an email and it would ruin their day. Turns out no one loves you. We've seen so many things happen and so many changes in the world of cybersecurity and risk, and we've seen a lot now where people are using AI to get past people's defenses and using it as an attack. But how can we start to use AI as a defensive measure to start increasing our security levels?

Yeah, so we always use the comparison of the automotive industry 30 years ago, where the driver had full control over the car, and as we progressed throughout the decades, more and more of that control started to transfer over to automated systems, whether it be anti-lock brakes, lane assist, and so on. That's the upside from a positive perspective with AI in cyber: a lot of those controls are beginning to become automated, and that is done by piecing together what is really a mosaic of security technologies. Throughout the past 10 years we've added a lot of really cool point solutions, which can be viewed as widgets at the end of the day, and there's not any type of cohesive fabric that's tying all those together, which really enables that automated response.

It's exactly that. No matter how many levels of security you have, if you just give someone your password or you get tricked by something on your computer, and more and more of that's happening. And now we have the ability with AI to mimic voices, so now someone can call my parents and sound like me and say, it's an emergency, I need money. We've seen that element of scams. How can we start to add in higher levels of security, especially for the older generation, who don't even know that voices can be mimicked, let alone video and imagery?

Yeah, it's a valid point for sure. We could implement whatever solution at the end of the day, but there's still a weak link in the actual human. So I think there are also opportunities to add AI defensive layers up front, especially with call centers, for example. What you're talking about is a help desk problem.
I've been at government agencies where that specific scenario has actually unfolded, and I think that using AI for verification on that frontline aspect is where you'll see a lot of improvement in terms of being able to detect some of what is really fraud at the end of the day.

We've seen a lot of companies and even some governments coming out with defensive AIs, like an old lady that will talk to someone who's a scammy telemarketer. But if it's just two AIs talking to each other, it feels like we're headed for a future where 90% of phone calls are AIs calling AIs, and the first battlefield is their offensive AI versus your defensive AI. How much energy and how many resources are just going to go toward this particular use case, which is just making the world a worse place? Is there a way to block out or decrease the negative use case, so it doesn't get to the point where you're receiving hundreds of AI-generated pseudo calls a day that are being answered by hundreds of your AI agents defending you every day?

Yeah, I think so. One of the newer technologies that I've seen come out is AI detection. I remember it from my days in college: you had to submit whatever paper you wrote to some type of proofreading system to determine that it was authentic and that it was you. You're seeing that capability catch on within AI, being able to detect things that were generated by prompts. That's the first step. The next step is passive listening to phone calls: is this AI, yes or no? That's the type of solution that's eventually going to solve the specific problem you're talking about.

I know right now most AIs won't say bad words, so if you curse on the phone and the person curses back, it's probably a person. But there are some unfettered AIs that will still do bad stuff, and every time we find a way to detect it, there's a way to avoid detection, right? Because there are open source AIs that will just do anything. How do you stay ahead, for someone on the outside who's thinking, oh my gosh, how can I possibly be secure, and is feeling overwhelmed by this, which is legitimate, because there are constant updates and constant releases? Is there some way to at least maximize your defensive matrix?

For that specific use case, I still think we're in the early days in terms of being able to detect that stuff on the fly. From more of a programmatic approach, where AI is doing something malicious within your environment, we're further ahead. But I think there's room for growth for sure on that specific scenario you're talking about, where someone is impersonating someone else, whether it be email, phone call, or some other type of communication medium.

There are a lot of people who think computer viruses seem to have disappeared. It was all about computer viruses 30 years ago, right? And now, of course, it's switched from attacking individuals to attacking companies. A lot of people say things like, my company's too small to be worth attacking, or, my company's so big that it must have really good security. And I think about the use case of a really large video game company that was just getting devastated.
The top accounts were getting taken advantage of through hacking, and they forced everyone in the entire company to change their password, but one person didn't, and that was the head of security, and that's the one who'd been hacked. There's always the person who thinks they're above it, like it could never be them, and it always turns out to be them. It took them like seven years, seven years of just getting taken advantage of. So it seems like there's a mindset problem, right? It's like we're too small or we're too big so we're not vulnerable. How do we create an environment where people are a little bit more aware of these types of things? Because it seems like most people are expecting the Nigerian prince when there are so many other types of attack vectors now.

Yeah, I think it starts with an acknowledgement that digital transformation is done. Everything has been transformed to digital, whether it be a local government or a large corporation, so there's implied attack surface for your larger corporations down to your local municipalities. There's just awareness that comes with that. Incidents are increasing in volume, and some of them are quite ridiculous for those small companies, to be honest, like a ransomware attack where all of a sudden there's $5 million in ransom being requested, based on the ability to pay it. So I think it's a two-pronged approach: as those incidents continue to increase in volume, awareness will increase, but there also needs to be a conversation that, yes, digital transformation is done, and we all have implied digital risk. At the end of the day, the different functions within our society run on top of digital infrastructure, and there are always going to be implied security risks with that.

I guess that's where people are baffled. They're like, what level of security do I need? One of the things I always find interesting is people think, my website, it's new, why would anyone attack it? And it's that attackers use a smaller one to attack a bigger one. When I started adding security to my website, I had to turn off alerts because I was getting so many alerts for how many attacks were happening. The volume is so much higher than you think. There are just so many things happening so fast, and a lot of people just aren't paying attention. Like I said, they think, oh, I'll just deal with this after there's an incident. It's kind of like getting insurance after your house burns down. So what are some things for people who are going, oh my gosh, we don't have any cybersecurity? Where should they start?

So if you look at what the big cloud portfolio players like Microsoft and Google have done in the past 10 years, you can achieve a lot of security just by adopting one of those portfolios. All the different widgets are included in it. So I think that moving from legacy systems to some of those newer cloud platforms is table stakes in terms of what you can do. With that, you're getting things like data loss prevention, intrusion detection, threat intelligence, and so on. It's all baked into a single portfolio, and I think that for the most part, people are migrating to the cloud.
I would say 90% of large corporations have made that shift, but that's the meat and potatoes of what you can do to protect yourself, regardless of the size of the corporation.

So if you move everything to a single ecosystem, like everything to Microsoft 365 or everything to Google, then because you're in a single ecosystem, it adds a lot more protection. I deal with a lot of companies that have been using some system from the 1990s, and it feels like everything's duct-taped together. And you get to this mindset, which I'm sure you're familiar with, which is, nothing's happened yet. And people are really worried that it'll be really complicated, it'll be really hard, it'll take a really long time. There's this learning curve aspect of moving things over, and there's also the idea of, now I'm all dependent on one company; the value of having different parts from different companies is that if one thing goes down, not everything goes down. So there's that element to it. But as far as the process of moving to an ecosystem, how hard is it for people to go through that process these days?

For smaller entities, not hard at all, pretty frictionless. For large corporations, where you're talking about a significant infrastructure transition that needs to occur, there's more of a conversation that needs to happen, and I would say it's not a next-day kind of thing; we're talking quarters for that transition. But for the most part, for those smaller entities, the likes of Microsoft and Google have made it pretty simple, a matter of clicks, to transition to the cloud and get the benefit of that portfolio security.

Yeah, I guess the best lesson here is do that before you become a big company: move to a single ecosystem. The idea of an ecosystem didn't even exist that long ago, and now you can have everything in one place. But in your experience, what are the most common mistakes that people are making in the small to medium sized business space? What is something that, every time you hear it, you go, I wish I didn't hear that all the time?

Investing in point solutions. There's a bunch of different things that can happen to an organization, whether it be data loss, ransomware, or denial of service, and sometimes people will get fixated on one specific scenario and make a significant investment in something that is designed to prevent just that, without piecing together that functionality with the broader cybersecurity capability that the company has. So what you end up with is this mosaic of technologies that aren't really talking to each other, and they're not working together to reduce risk within the organization.

That is very interesting, because people are always worried about what happened before rather than thinking about what will happen next. So with the advent of AI, are you seeing new types of attack vectors or different ways that companies are getting hit, other than using AI to create more content or to mimic voices, the things we've already talked about?

Yeah, malicious code generation is a big problem that is so far ahead of the, let's say, initial access part of the scenario that it's hard to control.
But that is something we're seeing within the industry in terms of how attacks are changing. There are AI-specific attacks starting to occur, whether it be LLM poisoning, training data poisoning, and whatnot. There's not a headline that we can point to and say, yes, there's implied revenue loss because of that, but it is starting to become a thing. For the most part, though, I think the path from ideation to impact has been significantly shortened from a threat actor perspective because of AI.

One of the things that I've seen is that people will train an AI or their chatbot with too much information. I build chatbots for clients and I am way more paranoid than them. My mindset is, if it doesn't know it, it can't be tricked, it can't be tortured out of it. They're like, my chatbot doesn't know enough, and I'm like, that's because I don't want it to, because people can always trick it. There's an element of it that can be socially engineered, and if it doesn't have the secret, the secret can't be stolen from it. That's something I deal with from a security perspective: I'll give you an AI that knows everything, but you can't let people outside your C-suite use it. You just can't, please. So it's the battle between convenience and security. And I can always tell, because there are three types of people. There's the IT department that just wants to minimize the number of complaints, right? They want to minimize tickets. Then there's the AI department: yeah, I want to get you cutting edge, but let's also build a cohesive environment, because all of your tools are from all over the place. And then there's cybersecurity: if you want to use your USB drive, you have to convince me I can trust you to use it. So there's a completely different mindset, and I see a lot of people merging those three deliverables. They'll go to their IT company and say, hey, can you set up cybersecurity for us? I think the problem is that IT is trained on the mindset of, we don't want you complaining all the time, whereas security is, I don't trust you. It's a completely different mindset, right? You want the person in charge of security to be someone who doesn't trust anyone, who's paranoid, who's checking everyone's pockets or scanning everyone walking in and out of the building. That's what a security officer does. So it's a very different mindset, and I guess AI is somewhere in the middle; it just depends on your personality or where you come from. But how do you see that, where people have the IT department set up security, and then maybe they do the thing you talked about, where they set up some parts but not other parts? How does that affect a company?

Yeah, so I view it as basically these islands where we're having different conversations within a company. I'd even add a fourth island, which is compliance, an entirely different mindset outside of IT and security. Really what needs to happen, and we're seeing a lot of this just from a company layout perspective, is that there needs to be a streamlined effort across all of those functions. You'll see a lot of CISOs that now report to CIOs; that's not degrading the value of security, it's streamlined management.
And the cross-cutting topic for all of those functions needs to be enabling the business to achieve whatever revenue target it has over the next 12 months. So we need to be business enablers as security. AI is certainly a business enabler, but the conversation needs to be around how we can grow the business responsibly and securely within the organization.

I love that you brought up compliance, because AI adds this new complexity. I deal with some medical clients and they go, this AI says it's HIPAA compliant, and I'm like, but is it? And there's this thing where, as long as the vendor signs a contract and says they're HIPAA compliant, it passes the buck along, so if something goes wrong you can pass the blame, which I get, right? But it's very challenging to know unless you know the infrastructure. Where's the data going? Where are the servers stored? How do you firewall between conversations? Once ChatGPT added the element of remembering, it changes everything. And we also see things like Twitter saying, just upload your scans, and it's, don't do that, not a good idea. So there's a lot of excitement, and maybe that's part of the problem: everyone's so excited about AI that they're doing things they would never do before. We're seeing companies adopt tools and policies very quickly; usually adoption is very slow, and suddenly AI is cool and everyone needs it. And then you see companies who ban AI, and that means something happened. I understand that as well. So when you look at the AI adoption element, and maybe one of your clients says, hey, we want to start using AI, how do you tell them what's okay and what's not okay, and how to maintain the level of security in their ecosystem? I know the best thing to do is to have an enterprise contract with Microsoft or an enterprise contract with Google, so there's a paperwork element that says, we're not going to train on your data, we promise. But beyond that, or for a smaller company that says, oh, we're not ready to jump to that level, even if they love your ecosystem idea, which I think is great, how can they transition with a level of caution, with their eyes open?

Yeah, so I've been really pleased with how far AI governance has come in, let's say, the past 12 months. I have a couple of clients that are actually going through that process right now, and that process includes defining an acceptable use policy, which really forces the conversation in terms of what we are using this for within the organization, what has some type of productivity use case, and what the unacceptable use cases are, whether it be uploading sensitive information or whatnot. It's not necessarily malicious. It could be that I have a presentation that's due in a week and I want to run it through AI just to blow it out a little bit and clean it up. But getting control of that is very important, and a lot of those details come out throughout that AI governance process. So the recommendation that we roll with for clients that are early on in their AI journey is: lock down the AI governance and have that conversation now, before implementation.
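To make the acceptable use policy idea concrete, here is a minimal sketch of what an automated check against such a policy might look like, assuming a simple allow-list of approved use cases and a few regex patterns for data that should never go to an external AI provider. The categories, patterns, and function names are illustrative assumptions, not anything Pellonium or the guest actually ships.

```python
import re

# Illustrative acceptable-use policy: which AI use cases are approved,
# and which kinds of data must never be sent to an external AI provider.
# These categories and patterns are assumptions for the sketch.
ALLOWED_USE_CASES = {"summarize_public_docs", "draft_marketing_copy", "polish_presentation"}
BLOCKED_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def check_request(use_case: str, prompt: str) -> list[str]:
    """Return a list of policy violations for a proposed AI request."""
    violations = []
    if use_case not in ALLOWED_USE_CASES:
        violations.append(f"use case '{use_case}' is not on the approved list")
    for name, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"prompt appears to contain {name} data")
    return violations

if __name__ == "__main__":
    issues = check_request("polish_presentation", "Clean up this deck. Contact: jane@example.com")
    print(issues)  # ['prompt appears to contain email address data']
```

In practice a check like this usually lives in a gateway or browser plugin rather than in application code, but the governance exercise Chris describes is the same either way: write down the approved use cases and the blocked data types before the tools roll out.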
I think that's really good, because so many companies don't even have any security policy, like what's private and what's not, just dividing things into two categories. A friend of mine was working on a project and he was getting the raw sales calls from the client, and I asked, do those include credit card numbers or social security numbers or addresses? And he said, yeah. And I said, don't let them send you those. Make them strip that data out before you get it, because if something goes wrong, they're going to blame you. I know how that game works. I have a lot of siblings and I have five kids: if you can blame someone else, you're going to do it. Exactly how I feel is, if I don't have the data, then I couldn't be the one who lost it. That's my school of thought. But there's the excitement of, oh, this AI can optimize our sales calls, which is cool, and I'm working on a project in that space, but I'm always like, host it with a local AI, right? Run a local AI that's inside your building. Again, when I worked in IT, people didn't have access to the internet at work. You had access to the intranet. So if you wanted a website, you would tell the IT department, they would download the website, and you could look at a photocopy of it. Even my friend who worked at T-Mobile in tech support, every day they would download the current website so they could see the pricing, but they couldn't see the real website. And now it's shifted to where instead of limiting access, we give access to everything and try to block the things you're not allowed to have access to. We've switched the mindset. I think it's possible to switch back to that mindset of, at work I don't get access to the internet because I'm at work. Or is that too late?

Oh, I think it's too late for that. I think what will end up happening is that some type of backfilling technology to control what information is being sent to AI will unfold, and you're starting to see a lot of that. It's effectively a proxy, where you're sitting in the middle between whatever AI provider you're using and your internal assets, and monitoring to make sure that sensitive information isn't sent to OpenAI, or sorry, not just OpenAI, but any AI provider. The same thing happened with cloud, right? I would say in the late two thousands we opened up the faucet to a lot of those websites you're talking about. Obviously, cloud players began to gain a lot of traction, and there was a lot of data leaving the network uncontrolled, so we had to develop technologies such as data loss prevention and CASB, which is basically that proxy for the cloud, to get a grip on a lot of the information that was leaving the network. I think that will unfold within AI, and it'll go hand in hand. You can't have AI without some type of proxying technology underneath it.
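Here is a minimal sketch of the kind of AI-traffic proxy Chris describes, one that redacts sensitive patterns before a prompt is forwarded to any provider. The patterns, the masking choices, and the placeholder call_ai_provider function are assumptions for illustration, not how any specific DLP or CASB product works.

```python
import re
from typing import Callable

# Illustrative patterns for data that should never leave the network in a
# prompt. Real DLP/CASB-style proxies use far richer detection than this.
REDACTIONS = [
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Mask sensitive patterns before the text is forwarded anywhere."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

def proxy_prompt(prompt: str, call_ai_provider: Callable[[str], str]) -> str:
    """Sit between the user and the AI provider, forwarding only the
    redacted prompt. call_ai_provider is a placeholder for whatever
    client the organization actually uses."""
    return call_ai_provider(redact(prompt))

if __name__ == "__main__":
    fake_provider = lambda p: f"(model saw) {p}"
    print(proxy_prompt("Summarize this call. Card 4111 1111 1111 1111, rep jane@example.com", fake_provider))
```

The key design point is that the redaction happens before the provider is ever called, so by the time the prompt leaves the network the card number is already gone and there is nothing to un-upload later.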
Yeah, I think most of the mistakes I'm seeing right now are incidental, people not realizing. Someone was like, oh, I accidentally uploaded all of our customers' credit cards, and I was like, oh, there's no undo for that, there's no reverse. So one of my big complaints about most of these big AIs is that they don't have a secretary AI. They say, oh, choose which model you want to use, and it's, I don't know. How's the regular user going to know whether I should be using o1 or o1 Pro or 4 or 3.5? There should be something that analyzes the question and says where to send it. And at that stage you can also have the kind of thing where, like when you send a letter home from the military, they choose what to redact. You can add in that element, because you don't think about it. One of the use cases is, oh, I want to connect it to my Google Sheets. What's in your Google Sheets? Most of us have something in there that we're not thinking about. Maybe it's in a different sheet, but once you connect it to all of them, it's all in there. And that's, I think, the most common use case. If you let an AI scan your computer, there's always something on the hard drive that you don't want there, or don't want getting added into the database.

I really like what you're talking about, having a secretarial or postmaster AI that's redirecting the data and going, wait, this isn't going out, we need to stop this, or, this is a mistake. Because maybe it's the report and you forget that on page seven it has some private data, or even worse, someone's email address or phone number in an example or in a case study.
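Jonathan's "secretary AI" or postmaster idea, an agent that decides which model a question should go to and what should be held back, could look roughly like this in code. The routing heuristics, model names, and keyword lists below are invented for the sketch, not a real router.

```python
# Illustrative "secretary AI" router: look at an incoming request, pick a
# model tier, and flag anything that should be held back for review.
# Model names, markers, and thresholds are made up for this sketch.

def choose_model(prompt: str) -> str:
    """Route simple requests to a cheap model and complex ones to a stronger one."""
    reasoning_markers = ("prove", "step by step", "analyze", "debug")
    if len(prompt) > 2000 or any(m in prompt.lower() for m in reasoning_markers):
        return "large-reasoning-model"   # placeholder name
    return "small-fast-model"            # placeholder name

def postmaster(prompt: str) -> dict:
    """Decide where a prompt should go, and whether it should go at all."""
    hold = any(word in prompt.lower() for word in ("password", "ssn", "credit card"))
    return {
        "model": choose_model(prompt),
        "hold_for_review": hold,   # a human or DLP check looks at it first
    }

if __name__ == "__main__":
    print(postmaster("Analyze this contract step by step"))
    print(postmaster("Here is my credit card number, book the flight"))
```

A real router would lean on a classifier rather than keyword matching, but the shape is the same: one layer that sees every outbound request and makes the send, redact, or hold decision in a single place.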
It's very interesting. I think these are things that people need to start thinking about more and more if they want these levels of security. For people who are starting to realize, oh my gosh, I need Chris's help, I don't have any security, I've been saying I'm too small to attack or we're too big to attack, how can they find out what you're doing, learn more about Pellonium, the project you're working on, the things that you're doing, and maybe get a little bit of your help?

Yeah. What we're doing is leveraging AI to really manage an organization's cybersecurity program, that mosaic of technology I was talking about, whether it be security technologies or SaaS applications. If you look at the enterprise stack, it's extremely bloated right now, and that's not a problem specific to any one industry; people are onboarding some of those new tools you're talking about that are a lot of fun. So in terms of how we can help: establishing a risk-based conversation as early as possible in your process and your journey is fundamental. We're looking at things relative to how they could impact your business, and from that, if you're early on in that journey, we can plan accordingly and make smart investments around that risk-centric conversation, and not get sucked into a lot of the different point solution conversations that people get stuck on.

I think that's amazing. So I'll make sure to put the links to your website below the video and in the show notes. For anyone who is listening to this and wants some help from Chris or wants a higher level of security, this guy knows what he's talking about. This has been an amazing episode. Thank you so much for being here for another amazing episode of the Artificial Intelligence Podcast.

Thanks for listening to today's episode. Starting with AI can be scary. ChatGPT Profits is not only a bestseller, but also the Missing Instruction Manual that makes mastering ChatGPT a breeze. Bypass the hard stuff and get straight to success with ChatGPT Profits. As always, I would love for you to support the show by paying full price on Amazon, but you can get it absolutely free for a limited time at ArtificialIntelligencePod.com/gift.