Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

The War Between AI and IT with DJ Eshelman

Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits | Episode 334

Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we delve into the evolving dynamics between AI and IT with our special guest, DJ Eshelman, an expert in end user computing and technology integration.

DJ shares insightful perspectives on how technology and business are increasingly intersecting. He discusses the integration of AI within IT departments, highlighting the challenges and opportunities this presents. DJ emphasizes the importance of understanding the core goal of a business and aligning AI and IT strategies to support that mission effectively.

Notable Quotes:

  • "There has to be a distinction. It's binary. It's either you're there to make money or you're not." - [DJ Eshelman] 
  • "AI is not just a tool; it's a process enabler. It's not about replacing staff but enhancing their efficiency." - [DJ Eshelman] 
  • "A factory robot that screws in a screw will always do it correctly. AI adds an element of randomness that needs managing." - [Jonathan Green] 
  • "Knowing how to prompt AI is akin to mastering good Google FU." - [DJ Eshelman] 

Connect with DJ Eshelman:

Website: https://thrive-it.com/JG24
LinkedIn: https://www.linkedin.com/in/djeshelman/

Connect with Jonathan Green

The war between AI and IT with today's special guest, DJ Eshelman, on today's episode of the Artificial Intelligence Podcast. Today's episode is brought to you by the bestseller ChatGPT Profits. This book is the missing instruction manual to get you up and running with ChatGPT in a matter of minutes. As a special gift, you can get it absolutely free at ArtificialIntelligencePod.com/gift, or at the link right below this episode. Make sure to grab your copy before it goes back up to full price. Are you tired of dealing with your boss? Do you feel underpaid and underappreciated? If you want to make it online, fire your boss, and start living your retirement dreams now, then you've come to the right place. Welcome to the Artificial Intelligence Podcast. You will learn how to use artificial intelligence to open new revenue streams and make money while you sleep. Presented live from a tropical island in the South Pacific by bestselling author Jonathan Green. Now here's your host.

Jonathan Green: I love talking to people from the IT world because the definition of IT has broadened so much, and the same thing is happening in AI. The two are almost merging; I'm now seeing people treat AI and IT as the same thing, as if anything where you touch a computer belongs to both. When I was young, IT was the department you called when your computer wouldn't turn on or when you had a virus. Now it's so broad it can mean someone who never fixes a computer: software only, hardware only, someone who works in a factory building computers. I want to start from that point because it's so fascinating. Why do you think that happened, and how can we start to separate these roles again and roll back all this confusion?

DJ Eshelman: I think, as I've seen this in the field and in talking with folks, we've seen a merging, and I would say an appropriate merging, of technology and business. Like you're saying, the modern IT person is just as much a project manager, or sometimes even an accountant, which is weird, as they are someone who actually works with the technology. Folks like myself rarely even touch a keyboard if we can avoid it, because we're busy instructing other people how to do things, putting things together in our heads based on requirements, or looking through spreadsheets and figuring out the best way to do this, that, and the other in the cloud. That's become a very necessary part of IT. In the 2010s or so we started to see a necessary merging of business goals with IT, a lot of data-driven work, and, on the deployment side, much more focus on making sure that what we're doing makes sense, because otherwise it will bankrupt us. That's been a big driver: money, effectiveness, but also agility. And that's a big part of the problems we're seeing now, too, because with agility can come irresponsibility and not following good, safe practices.
We've been seeing a lot of the downside of that as well. So yeah, it's been an interesting merging of business and technology, for sure.

Jonathan Green: One of the big challenges I faced early in my career, the very first thing I did in 2010, was selling search engine optimization services. I would help your website rank higher in Google, and then a client would say, "Do you do paid ads too?" and you'd think, I'd make twice as much money. Then, "How about social media?" You get tempted to almost Cheesecake Factory it, right? Whatever service you want, it's on the menu, and you move further and further outside your expertise. It's hard to say no when someone says, "We'll pay you twice as much, we'll pay you four times as much," because sometimes you just need the money. And what I see happening now is the same thing in other fields. You're an IT department, or an IT company serving a bunch of clients, and they ask, "Hey, do you do cybersecurity? Do you do server installs?" I've spoken to a lot of IT people in the last two weeks doing research, and a lot of them are now getting asked, "Can you deploy AI for us?" We've seen the shift in the last two years; the most common change in a LinkedIn profile is from programmer to AI expert. So I can completely understand why an IT company says, "Yeah, sure, we can figure it out." But now we're merging sectors that used to be completely different, with very different mindsets. Whenever I talk to a cybersecurity person, their mindset is, "I don't trust any of you not to mess up the computer." The IT mindset is, "I want your computers working, I don't want you mad, I don't want you calling because you can't work." Almost opposite mindsets: one wants you happy, one doesn't trust you, and that's exactly what you want from a security person. You shouldn't really like them. So I'm curious what your thoughts are on that.

DJ Eshelman: Very much. To come back first to a point you made, AI now sits on top of a stack of things in a world where the expectation from management is "we hired you to do this job," and they just keep stacking things on without ever changing the job description, which causes real trauma and people bouncing between jobs. Then we add the security element on top. In the niche where I made my name in IT, what we call end user computing, which is virtual application delivery onto your desktops and that sort of thing, we already had to know a lot of things about a lot of things. Now stack security on there. Where does security matter most? Where the end users are. So in my field it had to be front of mind for us as well. I find there's a certain synergy there: if security is front of mind in your design process, everything gets a lot easier. But the problem is that it's a constantly shifting target. I was talking to somebody I worked with twenty-some-odd years ago, let's not age ourselves too badly here, about the CrowdStrike incident that happened in July. He had CrowdStrike on every single client.
So his business was down, all of his clients' businesses were down, and he was just miserable. But at the same time he said, "Even though that happened, I still trust that this is the best way to do it," because it takes him out of the equation. It's an ironic thing: people are learning they can be taken out of the equation a little bit, but there are still risks they have to accept to do that. That's the balance between security and deployment, or operations, you could say: balancing risk against experience, and there's always going to be some detriment to the experience. I don't know what that's going to mean in AI yet, truly. We have some ideas, but they're not fantastic. I can tell you that with most computing processes, when you inject security it slows everything down; you cannot exceed the computational abilities, it's physics. With AI, you have things computing at a level that's not compatible with the security products of the time, so your security has to come after the fact. You start an AI query, and before the result reaches you it goes through a security filter, and the filter says no, but that work has already been done. There's a lot going on in that realm right now that is very frustrating for people who have to walk the line you're describing.

Jonathan Green: AI has gotten so broad, and because of the advertising, the marketing, and all the press releases, it's "AI can solve all your problems, AI can replace your entire IT team." Most of the time when I talk with a potential client, their first question is some version of "We don't really know what AI can do, because we've heard so many things. How do we separate the signal from the noise?" And there are so many things I do that are not AI at all. It's interesting that you mentioned planning out processes in your head, because most of what I do starts on a piece of paper. I draw out the process: what exactly do they want to happen, where does the data start, and where do they want it to end? Probably 90% of what I do is automations, which is just connecting two pieces of data, with an element where the AI does some processing. Anything with more than one step puts you in the world of automation. The challenge is that if you have a problem no one has ever had before, the AI won't know how to solve it, because it only pulls from the data that already exists. It's not good at new; it's good at repeating what's already out there. I run into this all the time. I was trying to set something up on a server last night and it kept giving me the wrong answer, insisting on a feature that the tool I was using doesn't have. Then I got access to a custom GPT someone had built specifically for this problem, and I was able to solve it, but it took decreasing the amount of data, not increasing it. And because of all this, what I call mission creep, your role keeps expanding into more and more things. It turns into, "Well, actually we want you to do this, and this."
I think it's very important for a consultant or an agency to spell out what clients get and what they don't get inside your contracts, and to stop the mission creep, because it would be very easy for someone to turn me into tech support or IT if I didn't clearly set that line: I'll tell you what to do, I'll tell you what tools to use, I'll help you with the setup, but there's a line I don't go across. You have to say, "Here's how many projects I do per month," or "Here are the tasks I do and the tasks I don't do." And it's very scary to say what you don't do, because you're afraid of losing the deal or losing the client. I think that's what leads newer or smaller companies, especially, to overpromise or to go further and further outside their area of excellence. So how do you help IT companies keep from doing it to themselves by overpromising, or from getting pulled into mission creep when it's the client driving it?

DJ Eshelman: That's a case of wearing multiple hats at the same time, so you have to call it a new hat. I call it "coach-sulting," because you're doing coaching and consulting at the same time, and you're teaching them how work works. The biggest problem goes back to your point about getting the signal from the noise. The message coming in is that AI is going to replace everything, and somebody is making those promises; the salesperson is making those promises. The way it usually happens is that the person playing golf with the CEO makes the promises, and then all of a sudden a team is on the hook to deliver. Sometimes I'll just flat out say, "That's not even possible." They haven't even examined what they're actually doing yet; they'll just say off the cuff that it will happen. In today's world, where it's make or break, I think what has to happen, to put on the coach side of the hat here, is that we go back to a formula we've been using since before the 1980s. There's a book called The Goal by Eliyahu Goldratt that describes this. What it comes down to is identifying constraints. The title comes from asking a company, "What is the goal of your company?" They'll give all these answers, and you keep nailing it down until you get to: "Oh, it's to make money." There you go. That's the goal of any company: to make money. If it's not, then you're a charity. There has to be a distinction; it's binary. Either you're there to make money or you're not, period. You're there to serve in some way, or you're there to make money for your family and for the families of your employees, and that's the goal. What you find in The Goal is a factory that's making a bunch of widgets, and they're stacking up and stacking up. The problem is that the goal is not to make widgets; the goal is to make a product that makes money for the company. Widgets don't make money; more widgets, more stuff, doesn't do that. They had robots churning out widgets while the other processes were too slow to turn them into actual products, so they had a backlog of work in progress, and that was causing issues across the whole operation.
AI, in my mind, is no different. If we don't identify what the company actually wants and walk them through how work is actually done there, and we just inject AI with this pie-in-the-sky notion that it's going to replace all their staff, which, by the way, is completely bogus, and you and I both know it's not a reality right now and may never be, then we're in trouble. Automation? Sure, that's great. But automation is only as good as your inputs, like you're saying. You can design better inputs, but you also have to be realistic: where is this adding actual value? What is it actually doing? Spoiler alert: in most cases I find, all it's really doing is helping the staff be more efficient with certain processes. And if you don't have a theory of constraints behind that, if you're not identifying where staff need to be to be effective, you're still wasting your time. You can automate everything and make AI do everything, but if it isn't following that theory of constraints, the work still isn't getting done, because in the end, work is never going to get done faster than your true constraint. That's what I remind people. The other problem that comes up is knowing how to work, which goes back to the signal-and-noise point. Take an example: a certain AI was trained on Reddit. Not exactly the best source material if you want effective results. I'm not knocking people on Reddit, you're great, wonderful, but you weren't trying to communicate effectively when you were posting; you were being efficient. So the AI is modeling that as if it were the best way to work, and it's not. We have cases like that where the models are just not right. And that's generative AI; there are other kinds of AI we need to consider. If we think generative AI is going to replace work, we're sorely mistaken; that's not what it's for. It's the same as hiring a worker: you wouldn't hire all engineers if you needed administrators, people to do deployments, help desk, and all the other things that go into IT services. Same thing with programmers; different people do different things. So you wouldn't hire an AI that does one thing and expect it to magically fix everything. That's the other thing: we have to be realistic with people and say, "Yes, this is going to help, but we need to set the right expectations for your particular way of working."

Jonathan Green: I often think about when Americans go on vacation and meet someone who doesn't speak English. What's our response when they don't understand us? Talk louder. It doesn't solve the problem. In the same way, AI is an accelerant, but if you accelerate the wrong process, or accelerate in the wrong direction, it doesn't solve the core problem. You're just going to run out of money, run out of runway, or make the problem worse, faster. And a lot of what I see is this idea, like you mentioned, of replacing workers with something that isn't good at those jobs.
And the thing about AI, the core principle a lot of people miss, is that every time you ask it the same question, you're going to get a slightly different answer. That means it's not like a factory robot. A factory robot that screws in a screw will always do it correctly. AI adds an element of randomness, of surprise, in that it will answer differently even if only slightly. If you ask, "Who was the first president of the United States?" it might say, "The first president was George Washington," or it might say, "George Washington was the first president." Same answer, slightly different wording, and that's a simple fact. Once you move further away from simple facts, I ask the same question all the time and get two very different answers, and that's where you start to run into danger. Even in automation, when you remove the check, the quality-control step or the human component, things get very dangerous. That's when you submit a brief to a judge that turns out to cite a bunch of fake trial cases and you end up censured, or your chatbot starts telling customers you're the worst company in the world and you're scrambling to shut it down before the press stories pile up. The danger is accelerating the wrong thing, or using it in the wrong place, and it's very murky right now. So for the world of IT, which is getting pushed into all these other sectors, "You can set up our servers now, and our hardware, and update our WordPress, and I downloaded a virus, and I need you to delete my browser history," you never know what you're going to be doing today. How can we start to delineate: this is an AI thing, this is an IT thing, this is a hardware issue, this is a software issue, this is something that needs someone there in person to fix? If the computer is smoking, I can't remote in and fix that; that's a different thing. But we've had so much mission creep and so much mingling of definitions that people don't know what isn't IT, and what isn't something a consultant does, or an AI officer does, or a CTO does. There's so much merging, because everyone wants to please everyone, that we end up with all of these problems. So coming from the IT world, how do you establish those boundaries and say, this is what we can do, this is what we can't do, and this is something you really shouldn't do at all? And also, like you mentioned, knowing the goal of your business, because a lot of what we see is, "I bought this tool, you set it up," and you ask, "Why did you buy it?" "It looked cool. I don't know why I bought it, but because I spent a lot of money, we have to use it to justify that expense. We have to look like we're using it." I always approach it from: tell me the problem you want AI to fix, and I'll tell you if it can. This is something I learned from watching so many episodes of House: people will always tell you the wrong thing first. Usually I'll give them a solution and they'll say, "Actually, I meant this," and I go, okay, wait a minute. It usually takes me two or three iterations to find out what the real problem is. What is it really that you want?
What do you really want to happen here? It's just the process: you have to dig a little deeper to find out what's actually going on, because nobody wants to admit they clicked the link they shouldn't have. They're never going to tell you that the first time, because they're afraid you're going to tell on them. So you sometimes have to dig; we can't fix it until we know what happened. With all of this going on, and this whole new world of AI, a whole new technology with an incredibly broad definition, everyone is telling their IT company, or the one tech person in the building, "Okay, we need you to do all these things. Set up an AI server. Do all of it." How can the modern IT company or IT person navigate these murky waters?

DJ Eshelman: It's funny that we have to go back forty or fifty years, really to the Toyota process, to get the answers here. The modern way to look at it is to say, okay, let's say your IT person, whatever that looks like, is troubleshooting an issue. They're locked into that. That's forensics, like you were saying; for somebody who enjoys IT, it's good that they enjoy digging into it, being House in that way, uncovering the truth. These are people who read philosophy and play music in their spare time, I guarantee it. But the question for the business process is: during that last stretch of troubleshooting, what wasn't being done? In my mind, that's one of the ways you identify a constraint. If there are things that weren't being done that should have been, then you need to create different job roles, or at least accommodate for it with additional staff, if your goal is to get those things done as well. It works for a lot of situations: if there are constraints, things that aren't happening, do that same forensics on your own processes. What isn't getting done when this occurs? Break it down and turn it into roles. Really, we have to think like a manufacturing floor. I haven't worked in manufacturing in well over thirty years; it's just how you start your career sometimes, and you go on from there. But the more I look at the modern world, the more I realize we need to think a lot more like that: stacking roles onto everybody is not going to work. Now with AI, interestingly enough, one of the main goals most companies are chasing is basically to create a job role that didn't exist before and hand it to AI, which in some cases might be appropriate, and might free up some time. But again, like you're saying, if there isn't an actual need there, if it hasn't been identified as a critical constraint, then you're just wasting your time. And the problem with that is, just like a human has idle cycles, think about it: how often is AI actually working for a company? How often is it actually doing work?
And the security problem of just letting somebody else's AI work for you is laughable, yet people are doing it all the time. It's amazing to me how much data is being exposed right now simply because people want to do AI work. If you don't have your own AI, or at least control over your own models and all of that, then you are essentially exposing your data to others. The flip side is that if you do have your own, it sits there idle, and now you have an expensive toy that isn't being "effective." So again, it comes down to: what's the value of this? To the point you made earlier, literally write it down. Here are our processes, here are our constraints, here's what each of these costs. Is AI actually worth it for that? Is it actually creating value? In some cases it literally is. Security is one example: it can keep monitoring when the human nods off, and it can look for things the human can't see. That's constant work, not thirty seconds here and thirty seconds there, so it genuinely makes sense as a supplement to a human process. And to bring up your House example again, I was actually thinking before you said that about how it works in medicine, where doctors get together to diagnose something difficult and form a consensus. We're starting to see some of that in AI, where multiple generative or analytic processes come together and try to form a consensus about a result. Proactive security and things like that; we're going to see a lot more of it, which is good. Before we shut this system down, let's make sure it's actually something that should be shut down. Let's make sure we're not imagining things, that when we each have different ways of saying the same answer and we come together in consensus, it really is the same answer, the same truth. That's where things get really difficult. But I think that's how we get away from that appearance of using AI just to look like we're using it: we actually put it to the right kind of work.

Jonathan Green: Whenever I design a new process and I'm thinking about having AI do it, I go through a decision-making calculus. First: how much time does this process take me, and how much time will it take to design an AI or an automation that will do it? Sometimes we'll spend two hours building a process to save ten minutes, and you have to resist that. The second part is: how much would it cost me to have someone else do it? Maybe I'll just pay someone, because it's a poor use of my time. There are things I don't know how to do; I'm not a programmer or a developer, that's outside my area of expertise. I can learn how to do those things, I can follow instructions and work through a set of steps, but it would take me four hours where someone else could do it in thirty minutes for $30, and if I do it myself, my time is suddenly worth $7 an hour. We do that to ourselves sometimes, especially when we're a little bit techie. So there are several steps in the process.
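Jonathan's calculus can be sketched as a quick back-of-the-envelope check. The following is a minimal illustration only; the function name, thresholds, and figures are hypothetical examples, not a tool either guest actually uses.

```python
# Rough "do it by hand, build the automation, or outsource it?" check.
# All numbers below are hypothetical examples, not figures from the episode.

def automation_decision(minutes_per_run: float,
                        runs_per_month: float,
                        build_hours: float,
                        hourly_rate: float,
                        outsource_cost_per_month: float) -> str:
    """Compare doing the task by hand, building an automation, and paying someone else."""
    manual_cost_per_month = (minutes_per_run / 60) * runs_per_month * hourly_rate
    build_cost = build_hours * hourly_rate  # one-time cost of designing the automation

    if outsource_cost_per_month < min(manual_cost_per_month, build_cost):
        return "Outsource it: cheaper than your own time either way."

    months_to_break_even = build_cost / manual_cost_per_month
    if months_to_break_even <= 3:
        return f"Automate it: the build pays for itself in about {months_to_break_even:.1f} months."
    return "Keep doing it by hand for now, and revisit if the volume grows."


# Example: a 10-minute task done 20 times a month, 2 hours to build the automation,
# your time valued at $100/hour, or $30/month to hand it to someone else.
print(automation_decision(10, 20, 2, 100, 30))
```

The point of writing it down, as both guests suggest, is simply that the downside (build cost) is certain while the upside (time saved) only materializes if the volume is really there.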
And for people or companies thinking about implementing AI, they have to go through that same process: what's the potential upside, and what's the definite downside? How much will it cost? How complex is the switch? What will the transition period be? How long will it take to onboard or retrain whoever is using that particular tool, and how long until they master the new process? Because I do it too: I'll forget and do something the old way. It happened earlier today. I have a new AI way of doing something, I started doing it the old way, and then I stopped: wait a minute. I had completely forgotten I'd designed an AI way to do it until I saw it sitting there. There's a transition period because you just forget, the way you sometimes drive home to your old house instead of your new one because you're on autopilot. Those are the steps people jump past when they're excited about something. And the downside is definite, whereas the upside is only possible; it's not guaranteed, because sometimes a process doesn't work the way you think it will, and sometimes it takes longer. I always have to explain, when I design an AI chatbot or an AI process for someone, that it's not going to work right the first time. It's going to do things we have to keep retraining and keep improving. There's a development process, just like when I write a book: there's a reason the rough draft isn't the final draft; there's stuff you want to change once you look at it. I always run into hurdles where someone never reads the rough draft or doesn't look at the process, and then things need to be tweaked, and I have to say: that's part of it. Even for a process I designed for myself, especially with AI, it works ten times and the eleventh time it goes off script. I have an AI process where I upload the transcript of a podcast episode and have it generate three pieces of content for me: the show notes, the description, and the blog post. Just an hour ago, on the first request it said, "Oh, I can't read that document," and then it answered the next two by reading the document and giving an answer. I said, obviously you can read it, liar. That's just what happens if I'm not checking it; otherwise my show notes for an episode might say, "I can't read a text document, please give me a different format," and I'll look like a silly goose. So there's a necessity to approach any project from: what problem is this solving? It's either generating more revenue, saving time, or decreasing employee dissatisfaction: everyone hates doing this, so we'll have the AI do it. We always make the robot do the job nobody wants, which is why we make robot vacuum cleaners and not robot TV watchers. When I'm designing a process and thinking about whether to implement a technology, it's the same process I go through when I think about moving my website to a new platform, changing to a new server or a new host. Something always goes wrong. You mess up a number in the DNS settings, or the new host has a different way of handling email. There's always something that goes wrong.
So you have to factor in how long it will take if everything goes right, and how long it's really going to take if something goes wrong. What's your approach to helping someone make those decisions, whether it's deciding to add a new service or to develop a new skill? I recently saw someone recommending that people go back to college for four years to learn AI, and I think everything you learn in the first three years won't be relevant by the end of year four, because it's changing so fast. If I'd done that, a pile of AI knowledge from early 2020 would be completely irrelevant now, because the technology has jumped fifty cycles since then. The college cycle is too slow to keep up with how AI is shifting.

DJ Eshelman: Honestly, for me it goes back to, and this is interesting, the fact that I actually wrote my second book first; it just took that much longer to get it out. A big part of that was some advice I got when I met Ryan Holiday, who had a book out at the time called Perennial Seller, about how you really need to write things that are going to last. It's the same as AI being obsolete while you're still learning it; that's absolutely true. So instead of writing a book I'd have to revise every six months, or every time there was a software release, I went back and asked what the actual process was, so I could teach the methodology. I threw out everything I had done first, then sat down to actually formulate a methodology, research other people's methodologies, and find a simplified way to communicate it that would last. The funny thing is that my methodology still works, even for AI processes. Many years later it's still the right way to do things. There may be nuances within each piece, but in my case you have four gateways you go through in each part of the methodology. In a similar way, what you're saying about prompting is no different from me learning, in the early 2000s, how to use Google properly so I could quickly get to how other people had solved the problem I was having, so that I looked smart. Really, the smart thing about me was good Google-fu: knowing how to prompt Google to give me an answer. That part has not truly changed. If you think about it, what you're describing is getting really good at prompting. That's why, and you and I were talking about this offline a while ago, you have prompt engineers making more than some doctors right now, which is insane in some ways. But at the same time: where's the value? The value is in the constraint. If somebody has recognized that a person who's really good at interfacing with AI is helping them with a constraint, then by all means pour money and talent into that; it makes sense. But really it comes down to having a methodology and a way of working that translates beyond whatever the current skill set requires, because the skills come and go.
The things that were true of Windows NT are not all relevant for me these days. The things that were true about Linux in 1997, when I was learning it, are not necessarily all true for me today. A lot of things change over time, and if we don't have a methodology around that, that's when we start to fail. Prompting is something that evolves, but like you said, it becomes a kind of mental default setting: "I'm going to ask it this way," and you don't even think about it anymore; it just becomes natural. There's a methodology behind it, though, and trying to teach it to someone else can be really difficult. But if you break it down, remove yourself from the process and ask, "What would the result be if I do this, and what did I do to get to that result?" then you can build a methodology out of it. And yeah, sometimes it's just discovery along the way.

Jonathan Green: I love that you brought up the Google-fu element of being in IT. This is something I try to explain to other people in the AI space: most of the time I'm asked to solve a problem I've never been asked to solve before. It's the very first time. The question is not whether I already know the answer; it's whether I can find the answer. I can hire someone on a contract basis to solve a specific problem, build a specific type of tool, or design a specific automation. I have access to other people's recipes and to other people's training, so there are different resources I'll go to. And there's this fear people have all the time: "What if they ask me a question I don't know the answer to?" They always will. People will ask you wild questions, and it's your ability to say, "Great question, give me three days and I'll give you the perfect answer." That's normal. We don't respond with, "Wait a minute, that means you don't know the answer right now," because the point is, I know how to find the answer. Every time, people ask really unique questions; what they want is different, everyone wants a slightly different result. So I love that you brought that up, and I think a lot of people in the AI space can take a breath. The second thing I want to talk about is that the process is more important than the prompt. Over the last year, more last year than this year, a lot of people were buying massive collections of prompts, and almost all of them no longer work, because you have a set of instructions but you don't know why it works, so you can't adapt it when something changes. What I try to teach people is a process, a way of talking to the AI to design the prompt. Most of the prompts I use, GPT wrote for me, because I know how to ask for them. So if a prompt stops working, I can ask it to design a new version or build on it: "This is no longer working, I'm getting the wrong result. Can you tell me why?" That's really the beauty of it. Even when I'm doing programming work and I get an error, and I don't understand Python at all, I'll just copy and paste it in, and it will tell me, "Oh, you need to download this repository, you need to copy and paste this code," and that solves it. That's really where the magic is: knowing what to ask, and finding the part that solves your process.
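As a rough illustration of that "ask the model why it stopped working" habit, rather than memorizing prompt wording, here is a minimal sketch of feeding a failing prompt (or, in the same pattern, a Python traceback) back to a model for a diagnosis and a rewrite. It assumes the OpenAI Python SDK and an API key in the environment; the model name, function name, and wording are placeholders, not anything prescribed in the episode.

```python
# Minimal sketch: ask the model to explain why a prompt stopped giving the
# result you wanted, and to write an improved version, instead of hand-tuning
# the wording yourself. Assumes the OpenAI Python SDK and OPENAI_API_KEY set;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

def repair_prompt(old_prompt: str, bad_output: str, desired_result: str) -> str:
    """Return the model's explanation of what went wrong plus a revised prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you actually run
        messages=[
            {"role": "system",
             "content": "You are helping debug a prompt. Explain why the old prompt "
                        "produced the wrong result, then write an improved version."},
            {"role": "user",
             "content": f"Old prompt:\n{old_prompt}\n\n"
                        f"What it produced:\n{bad_output}\n\n"
                        f"What I actually wanted:\n{desired_result}"},
        ],
    )
    return response.choices[0].message.content
```

The same loop works for error messages: paste the traceback in as the "bad output" and ask what it means and what to change, which is the workflow Jonathan describes using for Python errors.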
So more than becoming a master of writing the perfect prompt, because any prompt you write will become obsolete when a new version of AI or a new model is released, or your client switches which enterprise solution they're using, it's going to change, and memorized prompts will always die eventually. They'll always stop working, which is why understanding why, and how to ask the question, is more important than the wording of the question. So I'm really glad you brought that up. Now, I know you're working on a new fiction project, and you've written a lot of nonfiction books. Tell me about this idea, because a lot of people just assume every book now is written by AI.

DJ Eshelman: They're going to love mine, because it's set in 2018. I'm writing a book that takes place in a hospital IT system, and I'm honing in on not only the challenges but some of the traumas that come with that. The tentative title is Outage, because it focuses on a multitude of outages that happen in that world, and those can really damage a company. We've seen IT outages literally break companies before, some of them recently, and in a hospital system an outage can literally cost lives. Not having processes around that is a big part of the story. I actually dreamed up the whole concept a long time ago and said, "Nah, this will never work," but it's totally working. People resonate with the idea of having a story instead of just being blasted with a pile of information. And to your point, it flies right in the face of the usual AI books that just repeat what the model was told and hope for the best, with no QA. This is an actual story that follows that hero's-journey arc, which is a totally different thing, but it gives me an opportunity to teach the why behind things, too. It's fictional, but almost every situation in the book is a real situation I encountered in the field when I was doing consulting work, so I'm doing my best to change the details enough. For some of these I'm going to get phone calls, it's going to happen, and probably threatening emails too, because they resonate so deeply. But so many different people have had these problems that I'm safe enough. The story is built around a person who's been put into a position because he has potential, but he has to lead a team and immediately finds himself unqualified to do it. He knows the technology, but he doesn't know how to lead, and that becomes the problem. So it goes right back to what we were saying about injecting AI into a workflow; similar things happen even with people. Learning how to be effective and reduce risk is the big theme, which is also the theme of my second book, Just Do This. I literally call it a riskless methodology: you're trying to reduce the risks, and that really is the whole gist of it. It's November when we're recording this,
and I want to add the final fifty thousand or so words as part of that whole November process. But yeah, I'm looking forward to getting it out and being done.

Jonathan Green: I think it's great. It's really nice sometimes to learn while enjoying fiction, to actually enjoy the journey, because a lot of programming and computer books are tough to read; they're just instruction manuals. I really want to encourage more people to think about making the journey enjoyable. People are always surprised that I write business books but would never read one; they're so boring, so I read fiction, and that's why I inject so much of it into my own books. I want them to be an enjoyable journey. I always think about how, when you watch the Discovery Channel, you don't actually learn anything. I don't know how to pick up an old bicycle, fix it up, and sell it, but I've watched someone else do it about a thousand times. So I think what you're doing will give more people access to IT and get them thinking about these processes, and it will also help the people giving instructions to someone in IT finally understand the process in a way that's fun. I love it. Thank you so much for your time; it's been an amazing episode. Where can people find out more about you, find your books, connect with you online, and maybe even get some help growing their IT career with you?

DJ Eshelman: Absolutely. The best resource right now is thrive-it.com, and we have a lot of things there. In fact, we have a special gift for your audience; we'll have that link in the description.

Jonathan Green: What was that link again?

DJ Eshelman: Thrive-it.com. If you go there, you'll find some resources, and I'm actually going to give your audience access to my second book, Just Do This, on audio. They'll be able to get it and listen to the whole thing, because the most important thing to me is getting it into people's hands, and audio is a great way to do that. You get to hear my voice read my book, hopefully not as boring as a business book usually is. But we do our best.

Jonathan Green: Thanks for being here for another awesome episode of the Artificial Intelligence Podcast.

Thanks for listening to today's episode. Starting with AI can be scary. ChatGPT Profits is not only a bestseller but also the missing instruction manual to make mastering ChatGPT a breeze. Bypass the hard stuff and get straight to success with ChatGPT Profits. As always, I would love for you to support the show by paying full price on Amazon, but you can get it absolutely free for a limited time at ArtificialIntelligencePod.com/gift. Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and tactics on how to leverage AI to escape the rat race. Head over to ArtificialIntelligencePod.com now to see past episodes, leave a review, and check out all of our socials.