
Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Navigating the narrow waters of AI can be challenging for new users. Interviews with AI company founders, artificial intelligence authors, and machine learning experts. Focusing on the practical use of artificial intelligence in your personal and business life. We dive deep into which AI tools can make your life easier and which AI software isn't worth the free trial. The premier Artificial Intelligence podcast hosted by the bestselling author of ChatGPT Profits, Jonathan Green.
Is AI Changing the World of Consulting with Richard Hawkes
Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we dive into the transformative role AI is playing in the consulting world, with our special guest, Richard Hawkes, a seasoned expert in change leadership and organizational transformation.
Richard provides insightful perspectives on how AI is reshaping high-level business consulting. He discusses the merging roles of consultants and executives due to AI's capability to automate content creation, which pushes the focus towards fostering human-to-human interactions for meaningful business relationships. Richard emphasizes the vital need for alignment in organizational systems and the importance of AI in facilitating clear communication and shared understanding across different business functions.
Notable Quotes:
- "What AI's driving is this idea, oh, I can now do it all myself... but what's left is being able to drive the human being to human being conversations, to actual closure." - [Richard Hawkes]
- "The role of the consultant once again and the role of the executive being the facilitator or coach around those conversations, not necessarily the content expert." - [Richard Hawkes]
- "The promise of AI... is that I can explain my world faster to you... it totally changes the nature of consulting." - [Richard Hawkes]
Richard discusses the inherent tensions within business systems and how AI can help ease these by improving communication and understanding, making complex organizational changes smoother. He also highlights the challenges of digital hoarding and stresses the importance of creating shared language and agreements within organizations to align with AI's digital representations.
Connect with Richard Hawkes:
- Website: https://growthriver.com/
- LinkedIn: https://www.linkedin.com/in/richardhawkes/
For those interested in how AI is transforming the consulting landscape and the art of change leadership, this episode is a treasure trove of insights. Richard shares details about his current project, The Big Change Canvas, aimed at revolutionizing consulting with AI-driven tools. Tune in for an enlightening discussion on navigating organizational change in the age of AI!
Connect with Jonathan Green
- The Bestseller: ChatGPT Profits
- Free Gift: The Master Prompt for ChatGPT
- Free Book on Amazon: Fire Your Boss
- Podcast Website: https://artificialintelligencepod.com/
- Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
- Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
AI is changing the world of consulting, and our special guest, Richard Hawkes, is gonna tell us all about it on today's episode. Welcome to the Artificial Intelligence Podcast, where we make AI simple, practical, and accessible for small business owners and leaders. Forget the complicated tech talk or expensive consultants. This is where you'll learn how to implement AI strategies that are easy to understand and can make a big impact for your business. The Artificial Intelligence Podcast is brought to you by FractionAIO, the trusted partner for AI digital transformation. At FractionAIO, we help small and medium-sized businesses boost revenue by eliminating time-wasting, non-revenue-generating tasks that frustrate your team. With our custom AI bots, tools, and automations, we make it easy to shift your team's focus to the tasks that matter most, driving growth and results. We guide you through a smooth, seamless transition to AI, ensuring you avoid costly mistakes and invest in the tools that truly deliver value. Don't get left behind. Let FractionAIO help you stay ahead in today's AI-driven world. Learn more and get started at FractionAIO.com. So just to start at the top, I wanna get a perspective on just what high-level business consulting is and how AI has changed it over the past few years. Thank you, Jonathan, for having me here. I'm really excited about this conversation, because it's impacting everybody in business right now, and it's on the minds of everyone that I'm talking to: how is this gonna impact us? How is this gonna change our world? So first of all, what's interesting about this idea of how it's impacting consulting is that the differences between the consulting world and the role of an executive, or particularly the internal advisor, have now all come together. The first thing is everybody wants to do it themselves. What AI's driving is this idea: oh, I can now do it all myself.
I don't need to hire a consultant to do that business plan. I can just go and put it in myself and generate it. What this is revealing, though, is that now that the content piece of consulting, and even internal advising, is to a large degree being automated by AI, suddenly you're in a place where what's left is being able to drive the human being to human being conversations, to actual closure. And I think that's my favorite thing about AI, that it's actually causing us to shift more to human connection. As strange as it seems, I have more conversations and more meetings now, because the grind, the little stuff we have to do, all those smaller tasks, can now be automated. It almost pushes everyone into management. No matter what your job is, it gives you the ability to do fewer repetitive tasks, and work becomes more about relationship building. That's my favorite part of it. When I'm not at work, I'm spending time with my kids, being on the beach, making time with my wife, less and less communicating via apps and phones. So that has me really excited. One of the challenges is this traditional tension in businesses between the different departments, and at larger companies this is more common. I've seen the version where marketing looks down on sales, 'cause all sales cares about is money, they're not artistic. And then sales looks at marketing and says, oh, marketing just cares about their art; they don't care if anyone who looks at the magazine ad ever generates a sale. And then you have product, which just wants the product to be perfect. As a CTO, I'm always saying we should fire the marketing and sales teams and just hire more engineers; let's make the product more perfect and then they'll find us. So there's this constant tension. And the challenge is that when you have the departments talk to each other, they both say, you're wrong, I'm right; you're wrong, I'm right.
So what's the paradigm where we can smooth that communication, or start to see the company as everyone going in the same direction, all working towards the same goal? So there's a lot in what you just said, and I'm gonna unpack it in a couple of pieces. Go anywhere in the world and you're gonna see the exact same creative tensions in a business, because they naturally exist in the business as a system. So you have to have the systems perspective, right? Ask anybody who delivers products and services, or develops products and services, about the sales guy, and they'll say, they'll sell anything. They'll sell their mother. They don't really care. But ask the sales guys and the operations guys who are building and delivering your product or service about the product development guys, and they'll say, they're lost in their models. They're disconnected from reality, right? They're impractical. And ask the product development and sales guys about the delivery guys, and they'll say, they're inflexible, right? The fact is, though, a business is really a system that's designed to develop, sell, and deliver. It's all one big holistic system, and those tensions are just an inherent part of the system. So the very first thing is that you need to recognize that these tensions exist. The promise of AI within this is twofold, right? The first promise is that I can explain my world faster to you. Using AI, I can ask a few questions and describe, hey, look, I'm coming from this perspective here. I can actually explain the context. One of the hardest things about working with executive teams or executives in change leadership has to do with the fact that in organizations, these tensions are resolved at the speed of conversations. And so the way to accelerate these conversations is to write everything down, right?
It's basically to write narratives and ask people to read the narratives, so they can take in more information than they could in a conversation, and then you can jump up to speed. Jeff Bezos has done this; he changed the way all the meetings are done at Amazon, where every meeting starts with a narrative, everybody's expected to read it, to slow down and pay attention to it, and then they have all the question and answer about it, because it's just a far more effective way to communicate. AI steps right in there and makes that narrative writing way easier, way faster. And that's part of the promise. So it totally changes the nature of consulting again, because now imagine everybody has the ability to explain their world a little more clearly. You end up with a culture shift where hopefully people slow down enough to actually pay attention and read what the other person has done, but it's not as much work as it used to be. And you end up with the role of the consultant, once again, and the role of the executive being the facilitator or coach around those conversations, not necessarily the content expert. One of the challenges that I'm thinking about is how much people trust AI. At one end of the spectrum you have people who do something with AI, and I have worked with people who do this: they'll generate an outline or plan with an AI and just send it out without reading it. Full trust. At the other end of the spectrum are people who are very cautious. We've seen the lawyer who got censured. We've seen people who've made mistakes with AI, companies that have gotten into tons of trouble because they trusted it and didn't catch mistakes, so they're super gun-shy. They don't trust it at all. Where do you think people should fall on the line with this?
How can executives and consultants adopt AI in a way that uses it effectively without trusting it too little or trusting it too much? What is trust? That's the first question. I like the definition that trust is the residue of promises kept, right? When you don't trust someone, you're worried that they're not gonna keep their promises. And trust is also a self-fulfilling prophecy, right? If I haven't had enough experience with someone to know yet, I have to choose to trust them. And even if I have had a lot of experience with them, to recover the relationship I still have to choose to trust them. So trust is this kind of fickle thing. It's really an experience. When it comes to AI, the problem with the experience is when AI gets out ahead of you. It's making all these assumptions, they begin to pile up, and you start to feel overwhelmed, because there are too many things, too many logical conclusions, and you've never been checked in with. So what has to happen, the art, is dividing the conversation into a journey of some kind. How much do I do and then validate, and who do I validate with, so that we're aligned, so that the conversation is actually closed before I extrapolate to the next one? The danger with AI, where most people get in trouble, is they get so excited about getting to the end goal that they forget they've gotta take everyone else on a journey. This has probably been your experience too: we're talking, and there's A, B, C, and D we need to discuss, and we know each of them is a journey we need to be on together, and the other person just lays out A to D as if they're gonna control you. My inner two-year-old goes ballistic: dude, don't control me. You're just arrogant. That's my inner emotional experience.
I think AI can get you to that place really fast. So you gotta break it down. And that's, once again, back to what the skill is at the interface between these models and human beings. I think the skill really is dividing things into conversations that actually lead to alignment. I think this is really good, because one of the core elements of how AI is designed is that it always jumps to giving you a conclusion, because it always assumes you're not gonna ask another question. I run into this all the time when I'm going through a multi-step process where I'm trying to give it foundational knowledge. I'll say, I'm gonna give you ten pieces of information and then I want you to give me a summary. And I give it the first piece of information and it goes right to the summary, unless I say, don't do it. That's, on a micro scale, exactly what you're talking about. Because of the way it's trained at its core, one of its flaws is that it assumes you're never gonna ask another question. It goes, I have one chance to answer this. Unless you switch it into another mode, it will jump to that conclusion, which leads exactly to the reaction you're talking about, where you go, wait, you're trying to give me the answer to my question, but you haven't gotten all the data first. And that of course leads to the reaction of, well, this is wrong. I just got into a fight with an AI a few minutes ago because I fed it a screenshot of a bunch of data, and it goes, oh, the picture's too low resolution, I can't read the numbers. And I was like, that's not true. I just zoomed in and it's super high definition. And it had said some of the numbers from the page, so I was like, you could read some of the numbers, 'cause you just sent 'em to me. So I got mad.
And that's my natural reaction. Sometimes people say they never get mad at AI, and I'm like, I don't know if that's true, 'cause I sometimes get really mad, 'cause it's lying to me. Then I ran it with a different AI model and it said the same thing. I go, okay, if you're both saying it, the problem is that the AIs can't read it; it's not the image itself. So it has something to do with however they're processing images. But that's the challenge: when it jumps the gun, you react emotionally, and that causes you to pull back. One of the challenges I see a lot is people start with the tool and then ask what problems it should solve. You buy a hammer, then you go looking for nails. They'll grab an AI tool, or they'll say, I heard this AI tool's really cool, let's buy it. What should we use it for? I deal with this so much: people go, I think you should use this tool, and I go, to solve what problem? They're like, not sure, but it's super cool. So what's the right way to deal with that? 'Cause that's really common, and there are a lot of cool AI tools, they really are cool, but to me that's putting the cart before the horse. I had an argument with an AI yesterday, so you're not the only one. We could have a help group, like Bruce the shark from Nemo: AI, friends, not food. And I often try to paper it over by saying please and thank you to the AI. I'm not really sure it helps, but that's my natural reaction when I'm in conflict with something, just trying to change the relationship with it. This product that we've been innovating around AI in my company, called The Big Change Canvas, is about solving that exact problem.
Because, after having done change consulting with all different kinds of companies around the world for thirty-plus years, we know how the conversations need to be divided so that it's a satisfactory and trust-building experience for all of the stakeholders involved. And that's not always obvious to a functional role. For example, sales would wanna push to a close: let's just push to sales. But they've gotta pause so that product development can actually weigh in, and it might end up in a totally different outcome. They have to pay attention to these natural tensions in the system. Those natural tensions are one way we're able to figure out where things divide. What The Big Change Canvas does is start out with a set of universal goals that exist in any major change initiative, and then it uses AI to take you through a series of conversations that give you a packet of all the information you need to lead each conversation, like the discussion document, and then, as they used to say in high school math, show your work: here are the pieces you need to back it up, because some people will really care about where that assumption came from. So this is really about solving that: in any kind of major change initiative, how do you create a trust-building experience that doesn't result in you, I don't know, being angry at the AI? But it is a real problem. And I'll just say one other thing. These interfaces, ChatGPT's interface and the others, the streaming-conversation kind of thing, their whole assumption is that the goal is to get to the solution as fast as possible. It's let's go, let's zoom back, get a holistic piece, and get to that solution.
And for anything that involves human beings in a social system, where they actually need to align, that's actually the worst thing you could do. The journey is very important to people, and I think it's important that people know how AI is designed: it's designed to seek affirmation. The way AI is trained is to pursue a positive one, which is, you did a great job, and avoid a negative one, which is, you're wrong, you're a liar. That's why giving it positive affirmation modifies its behavior. The second component is that the biggest challenge with AI right now is electricity. The longer it spends thinking, which is processing your question before answering, the more it costs. There was a recent post from Sam Altman of OpenAI saying that people saying please and thank you is costing them tens of millions of dollars. And they've created that problem themselves, because they reward you for doing it: the AI gives you better answers. At that scale, knowing the AI always wants to give you a fast answer because it's trying to save money, that's why you have to break your questions into steps. That's what's called chain of thought: let's do step one, then step two, then step three, and that's why you get a better result when you add a thinking step. There's a bunch of tools now that literally add what they call a thinking step or a thinking module. So you're exactly right. And I think that once people understand, oh, this is the driving force, just like when you meet a person, if you understand their goal and their motivation, you can start to understand how to interact with them in a way that gets you the result you want. People think AI is neutral, but it's not. The influence is baked in; you can use any image generator and tell what type of person programmed it.
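The "break your questions into steps" idea Jonathan describes, feeding foundational pieces one at a time with an explicit "don't answer yet" before asking for the summary, can be sketched as a prompt-structuring helper. This is a hypothetical illustration, not anything from the episode or a specific vendor's SDK; `build_stepwise_messages` and its role/content message format are assumptions modeled on common chat-API conventions.

```python
# Hypothetical sketch of step-wise prompting: supply each fact with an
# explicit "do not answer yet" marker, and only ask the real question last,
# so the model is less tempted to jump straight to a conclusion.

def build_stepwise_messages(facts, final_request):
    """Build a chat transcript that withholds the final question until
    every piece of foundational knowledge has been supplied."""
    messages = [{
        "role": "system",
        "content": "Wait for all inputs. Reply only 'Acknowledged' "
                   "until the user explicitly asks for the summary.",
    }]
    for i, fact in enumerate(facts, start=1):
        messages.append({
            "role": "user",
            "content": f"Fact {i} of {len(facts)}: {fact} (Do not answer yet.)",
        })
    # Only now do we ask for the actual deliverable.
    messages.append({"role": "user", "content": final_request})
    return messages

facts = ["Revenue grew 12% last quarter.", "Churn doubled in March."]
msgs = build_stepwise_messages(facts, "Now summarize the business risk.")
print(len(msgs))  # system message + 2 facts + final request = 4
```

The same list could then be sent to whichever chat model you use; the point is the shape of the conversation, not the API call.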
If you just say "woman" or "man," right? If every man has giant muscles and every woman looks like an anime character, you go, okay, I know who works at this company. I had an incident with one of my employees who was designing some drawings for Pinterest. I said, most of my following on Pinterest is women; you can't use these images. What prompt are you using, are you a creep? And he was like, it's not me, I just wrote "woman." I go, oh, we have to use a different tool. Okay, it's not you. He showed me, and I go, okay, I assumed it was you. So the influence is always in there: conserving electricity, chasing positive results, always wanting you to say it did a great job. That's why an AI will lie to you: rather than say, I don't know, and make you mad and get a negative response, it will make something up. That's what leads to the lying, or, I dunno why people call it hallucinating. If you lie to me and give it a cool name, it's still lying. A wrong answer is a wrong answer. So those motivations come together, and I think it's important to remember what matters, 'cause we tend to forget how important the journey is, how important the process is, and that there are phases to convincing someone or bringing someone to your side. If you say to me two plus two is five, I'm gonna disagree with you right away, but if you have a really good math proof and you bring me on a journey, maybe you could convince me. Definitely without the journey, it's impossible. So that conversation and communication component is really important. Something I'm seeing happening right now is there's so much digital hoarding. Everyone is transcribing every conversation and just doing nothing with it, just getting more and more data. It's like the people who buy books for their library by the foot.
You just have a wall of books behind you. Sometimes it's just the front of the book; the pages aren't even there, the books are glued to each other. Osmosis doesn't work for knowledge, unfortunately. Being near knowledge doesn't work; you have to read it, at least. So what do you think is the right approach? 'Cause we went from not keeping enough information to now having too much information that we do nothing with. Yeah, let me jump back to a couple of points you made, and then I'll get to that. First of all, I read that article you were talking about, on saying please and thank you to AI as feedback and how much electricity it's costing. And I thought, okay, yeah, but the reason people are upset is they don't want to personify these things. That's why they're upset, and they're saying it costs a lot of electricity. But can you think of any other more natural mechanism that human beings use to give each other feedback? That's how we do it, and it makes perfect sense that people would choose to do it. The idea of eliminating that is like saying, okay, we'll just get rid of hope because it's not a useful emotion. It's a crazy argument. Human beings need those interactions 'cause they put us in the right mindset so we can interact with whatever else. We just need to accept that these things are gonna be personified to a certain degree. Going from there to this question about information building up, I think there's a fundamental challenge here, right? The wall of books, buying books by the foot, love the imagery. We have a lot of books at home; my wife, I think, buys books by the foot, but she actually manages to read them, just piles of books. It's astounding. Here's the challenge: when you look at AI and you look at knowledge, you're trying to sync up two kinds of knowledge.
You're trying to sync up the agreements that we have with each other. If I know things on my own, it's just for me; it doesn't really have an impact on the world. The knowledge that impacts the world is the knowledge that results in shared agreements between us: we're gonna do it this way, we're gonna see the world this way, right? And that's everything we're talking about with change, which is that when you impact any of these systems, it's where you come together and you align. These are the ways we're gonna talk. This is the model we're gonna use to visualize our business. This is the way we're gonna talk about conflict, right? It's creating shared language. On the other hand, over in the AI world, you have what people are using the term "digital twin" for: some kind of representation of the reality of the business, a system, whatever, and you want the AI to be your go-between on it. This is the problem with all the memory stuff we're talking about, or the energy. We want it to talk about the whole system, but we're not there yet because it's too expensive. We'd like it to complete the thought with the whole system, but I don't really think the issue is as much energy as having the right information world available to the AI to talk to you. Then it becomes pretty efficient. So the real challenge is, how do you align our shared agreements, our shared language, with this digital twin? And it brings you right back to the same problem: how do we break the conversation down into the right components? Okay, we're in a business together. The first thing we're gonna agree on is our system of roles. It's not an easy conversation for a lot of people, but we're gonna agree on our system of roles and we're gonna be clear about that. That's gonna go into our shared agreements and our shared language, and that's gonna go into our digital twin.
Next step, we're gonna talk about our basic cultural values; that goes into the shared values, right, and into the digital twin. So now the digital twin, with AI, has the capacity to actually be an effective partner with you. It doesn't have to run on, and you don't need to use all the electricity. But the fundamental challenge with making these things practical, in how human beings operate in social systems, is aligning these shared agreements with the digital twin. This is so good. One of the lessons I learned in my mid-twenties: my friend Ollie one time said to me, whenever I start dating someone and she says, I want to be your girlfriend, I ask, what does that mean? I was like, doesn't girlfriend mean the same thing to everyone? And he goes, try and find out. I've never had two women give me the same answer when I ask, what's your definition of girlfriend, of boyfriend? Because how many times a week do we see each other? How often do we have to call each other? How long am I allowed to wait to reply after you text before you get mad? And the first thing I said when I started my role as CTO at this new startup, when they said, oh, you'll work a lot with the chief product officer, was, who's in charge? If we have a disagreement, which of us makes the final decision? Because, having worked with partners before, if you have fifty-fifty, no one can ever win and you end up stuck. And I was like, I don't care who the boss is, whether it's this person or me, because I'm always gonna default one way. And he says, I'll break the ties. Okay, but that means you're gonna have to be in the middle; every time we have a disagreement, we have to go to the CEO. And it's very important, 'cause then I say, what's your definition of a CTO? 'Cause it wasn't what I was thinking. What's your definition of that role, what are the things you do?
Because we sometimes make these assumptions, and as soon as you go from one company to another, the CTO's job is gonna be different, or the CEO sees their job differently, and you realize that the shared language, the social contract, is different in each environment. The AI will think it means one thing, 'cause it has the dictionary definition. The other thing I'm thinking about a lot is that if you could remember every single thing everyone said to you, with perfect recall, it would be very hard to be friends with anyone, because you'd remember every time they let you down, every time they made a mistake, all the negatives. Part of the beauty of our memories is that we don't remember the bad stuff very well. Trying to create that perfect recall isn't always a good thing, because we forget the disagreements, which allows us to have more agreement. So that's one of the things I worry about with this idea of recording and transcribing everything: you start to be more careful what you say, 'cause you're like, oh, there's a wire. People always ask me about this; I don't have any AI speaker in my home, I don't have any of those talking assistants on my phone. Why would I wire myself? I get recorded enough at work, and I don't want that at home. I don't bring the phone with me to the beach. I'm careful about those things. But that's the challenge I'm seeing: we're storing too much information 'cause we think volume equals value, and then we don't take a moment to say, when you ask me this, what do you mean? When I work with a company and consult with them, I always say, just gimme a wishlist.
What are the things you would love, the problems you'd love to have solved, the things you'd like to be better? Because nobody has the same definition of AI. Most of what people ask me to do, 90% of it, is "information is here and we wanna move it there." It's stuck somewhere in their system. And that's an automation problem, not an AI problem. But I'm not gonna spend an hour explaining, you actually have the wrong words, let me teach you the correct words. That's not what anyone wants to pay you for. They just want you to fix the problem. Realizing that the language is often wrong, I say, don't worry about whether it's easy or hard. I'll deal with that; I'll tell you afterwards. Because I find there's almost a perfect inverse correlation: the easier they think it is, the harder it actually is. It's almost always the case. They go, this will be super easy, and I go, that sounds actually impossible; I'm not sure that's even solvable. It's about taking that one step back: rather than assume your definition and my definition of AI are the same, let's go back a step and figure out what's the problem you wanna solve, or what's the change you wanna get. And to bring this in for a landing, 'cause this has been an amazing conversation: for people who are thinking about what change consulting is, what are the signs that a company needs change, or that something's wrong? What's the self-awareness phase where someone starts to go, wait, things aren't working here? What are the warning signs, and where does that revelation start? Yeah, just a quick comment on the previous things you said, and then I'll go right to that. Consciousness is a controlled hallucination. And if we acknowledge that, then, like in Star Trek: is Spock a really emotional guy, or is he this logical thing?
And that's what we're talking about with AI: do we want AI to be this perfectly logical construct? But we're not. We're constantly reinventing our world. Now, when do you know that you're stuck and need change leadership? All right, so there are these inherent tensions we talked about, and those inherent tensions naturally mean that anytime there's change, you're gonna have conflicts between individuals and the roles that they play, conflicts between the functions within an organization, and conflicts between the business perspective and the functional perspective in the organization. So there are inherent conflicts. A lot of people think of change at just the individual level. They're like, oh, change is just about training us on new skills, and we'll leave it as if it's an individual choice. They love talking about that 'cause it makes it very granular. But we live in these systems, and the system of roles is the hardest one to change. In fact, any kind of change that's gonna require you to change the system of roles is the most likely kind of change to be torpedoed, and it's the hardest one to ever lead. And the reason is because you've gotta manage the conflict between, this is the role the organization needs you to play, and, I don't wanna do that, I didn't sign up for that, that's not what I envisioned my career to be. It's like living in a house with teenagers where nobody wants to take out the trash. It's a terribly difficult situation. The way that you know is whether the change that you're facing is one that's going to impact, from a top-down perspective, leadership and culture, then the capabilities in your organization and the system of roles, and then that takes you to basically strategies and customer experience.
If it just impacts strategies and customer experience, often you can handle it within the existing agreements, right? But the moment it kicks up to, we're adding new capabilities and we're changing the system of roles, now you've got a lot of new agreements to create. If it goes even deeper into, we've gotta change the nature of what leadership is and change the nature of what our culture is, then it's going to implode. An example would be an organization I worked with. Organizations go through natural lifecycle things like this, but this one had a top-down leader. So you've got a top-down leader, which means you have a command-and-control culture, which means you have a business leader with a business team with functional silos reporting in. That's your org structure, right? All the capabilities are represented by the functional silos. Then you have a certain kind of business strategy and customer experience. They needed to segment into multiple businesses because they couldn't scale, and they were gonna need to do this 'cause they were making an acquisition, a small company making an acquisition. The company they were acquiring was in a slightly different business, and the business they were acquiring was actually doing better than their own. Except now suddenly that top leader needs to no longer be a business leader. They need to be an enterprise leader. They need to have business leaders reporting into them. The functions now need to operate across multiple businesses, and you no longer have a single business strategy. You have an enterprise strategy and a portfolio of business strategies. Radical change, and they're thinking, oh, all we need to do is acquire this company. Now, they failed miserably, by the way, because of the top leader and the law of the lid, right? An organization or a team can never perform higher than its level of leadership.
The top leader was unwilling to learn a new role, couldn't get there, was inflexible, had founded the company, wouldn't do it. So what they ended up doing was acquiring this other company, just absorbing them as assets, and losing all the value that they had actually thought they were acquiring. That same logic is why most acquisitions fail. The failure point of most acquisitions is the unwillingness of the senior leaders to change their leadership style, to upgrade it to what's necessary to lead a different kind of business model and a different kind of org structure. So if you hit any of those points, you're right in the flash zone, and you really need to start engaging in the conversations required to bring everybody together, 'cause it's gonna impact all those points.

Wow, you got my head swirling. That was amazing. This has been such an amazing conversation. For people who are thinking, wow, we've hit that point, or we need some change leadership, or we really need to change how we have our internal conversations, where's the best place to find you online, see some of the things you're talking about, and find your book?

You can find me online with my team. In the US, you can go to our website at growthriver.com. In Germany, I have an organization that I built over there called the un. People who speak German will be able to figure that out, but they do the same thing, and we have all these tools and models in Germany. The book is available through all the major outlets. It was published by Wiley, so it's available everywhere. It's called Navigate the Swirl. A number of the things that I mentioned, the models, are in there, with really good examples of how to do it. You can take the book and apply it almost immediately with your team. So if you're facing these issues, I suggest you try to do it yourself.
And then the one other thing that's going on is this big project I mentioned, the big change canvas. It's basically building an AI tool to take the models in the book and package these conversations. Right now we're partnering with a small group to do that, and I'm particularly interested in any mid-size consulting firms who'd like to partner around this, because you could imagine this could easily become a platform for a whole different way of consulting. It goes right to the heart of how AI is changing consulting. I'm not claiming we've worked it out, but we're in the hunt to figure out how to actually do it. So that's where we are right now. Did I answer your question?

Yeah, that was amazing. That was perfect. I'll put all the links in the show notes and below this episode for people watching the video. Thank you so much for being here today, Richard, for an amazing episode of the Artificial Intelligence Podcast.

Thank you.

Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and strategies on how to leverage AI to grow your business and achieve better results. In the meantime, if you're curious about how AI can boost your business's revenue, head over to artificialintelligencepod.com/calculator. Use our AI revenue calculator to discover the potential impact AI can have on your bottom line. It's quick, easy, and might just change the way you think about your business. While you're there, catch up on past episodes, leave a review, and check out our socials.