
Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Navigating the narrow waters of AI can be challenging for new users. Interviews with AI company founders, artificial intelligence authors, and machine learning experts. Focusing on the practical use of artificial intelligence in your personal and business life. We dive deep into which AI tools can make your life easier and which AI software isn't worth the free trial. The premier Artificial Intelligence podcast hosted by the bestselling author of ChatGPT Profits, Jonathan Green.
AI Can’t Go Solo: Why the Human Touch Still Matters with Robert Brown
Welcome to the Artificial Intelligence Podcast with Jonathan Green! This episode delves into the nuanced world of decision-making as it intersects with artificial intelligence, featuring our exceptional guest, Robert Brown. With a wealth of experience in decision strategy, Robert offers insights into not just making decisions, but making the *right* decisions, highlighting the vital role of structured processes.
Robert discusses how current approaches often use AI tools backward, seeking problems for pre-existing solutions rather than vice versa. He emphasizes the necessity of understanding human decision-making weaknesses and crafting structured methodologies that guide better outcomes. Through this lens, Robert reveals how businesses and individuals can effectively navigate cognitive biases and uncertainties inherent in decision processes.
Notable Quotes:
- "We're doing everything backwards...I bought a hammer, now I need to find some nails." - [Robert Brown]
- "A good decision is not necessarily one that gives you the outcome you desire. It's one that conforms to a standard of quality associated with decision making." - [Robert Brown]
- "I know what you said, but I have to figure out what you meant to say and what's the problem we're trying to solve." - [Jonathan Green]
- "Solving irrelevant problems is the biggest waste of resources." - [Robert Brown]
Robert further discusses the importance of identifying core business objectives and aligning decision-making strategies with these goals. He introduces the concept of value-focused thinking, urging decision-makers to prioritize organizational values and preferences before embarking on major AI implementations.
Connect with Robert Brown:
- LinkedIn: https://www.linkedin.com/in/rdbrown3/
- Email: robbrown@cyberresilience.com
If you aim to hone your decision-making capabilities and leverage AI to its fullest potential, this episode is a treasure trove of insights that underscore the importance of structured frameworks in achieving desired business outcomes.
Whether you're a startup founder or an executive in a large corporation, tune in to glean from Robert's expertise and redefine your decision-making blueprint!
Connect with Jonathan Green
- The Bestseller: ChatGPT Profits
- Free Gift: The Master Prompt for ChatGPT
- Free Book on Amazon: Fire Your Boss
- Podcast Website: https://artificialintelligencepod.com/
- Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
- Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
You always need to keep a human in the loop when using artificial intelligence. We're gonna talk about it today with very special guest, Robert Brown. Welcome to the Artificial Intelligence Podcast, where we make AI simple, practical, and accessible for small business owners and leaders. Forget the complicated tech talk or expensive consultants. This is where you'll learn how to implement AI strategies that are easy to understand and can make a big impact for your business. The Artificial Intelligence Podcast is brought to you by FractionAIO, the trusted partner for AI digital transformation. At FractionAIO, we help small and medium-sized businesses boost revenue by eliminating time-wasting, non-revenue-generating tasks that frustrate your team. With our custom AI bots, tools, and automations, we make it easy to shift your team's focus to the tasks that matter most, driving growth and results. We guide you through a smooth, seamless transition to AI, ensuring you avoid costly mistakes and invest in the tools that truly deliver value. Don't get left behind. Let FractionAIO help you stay ahead in today's AI-driven world. Learn more and get started at FractionAIO.com.

Thank you, Robert. I'm so excited to have you here, because as we were just talking about before the show, the biggest problem I'm seeing right now is that we're doing everything backwards: here's a tool, I bought a hammer, now I need to find some nails. And on a deeper level, most people don't have a written-down decision-making process. We were talking about this with some other people I worked with last week: sometimes our clients don't have a template against which they measure, does this go in the direction of our business or does it not? So even when you're deciding, this is when you see mission creep, or they have add-ons to the brand, and I always say, this isn't a logical progression.
How did you decide to go from this to that? "I just had a vision," right? And people think of you as a car brand, and now you do helicopters. That's a huge jump, 'cause they go, wait a minute, those are very different. So that's what I wanna dive into first with you as an expert in decision making and this idea of processes. Where should someone start when they're thinking, oh, I don't have a process for decision making?

I think the first thing to recognize is that human beings are really bad at making decisions. And when I say bad, this doesn't mean that everything we do is just a disaster, but by adaptation, by evolution, we have developed certain mental heuristics that allowed us to live in, let's say, more primitive environments. The environment that we're in today, of course, is not the African veldt that we evolved from. The type of mental processes we used to survive in that highly complex, threatening, fast-paced world is not quite suited to the world we live in today. So we need, I think, more structured ways of thinking through decision-making situations, ways that lead us to short-circuit some of those cognitive illusions that arise as a result of the evolutionary process that brought us here. That's the first thing we need to understand: human beings really do need, let's say, handrails or guardrails to get us to the point of making good decisions. Which should also raise the question in our minds: what is a good decision? What constitutes a good decision? That's the other thing we need to understand before we even get into process. And that is: a good decision is not necessarily one that gives you the outcome you desire. It's one that conforms to a standard of quality associated with decision making.
And if we can conform to that standard of quality, the closer we do it and the more frequently we do it, the more often we make decisions that lead to the outcomes we want. I can always tell you beforehand whether or not you made a good decision, before you ever experience the outcome. In fact, I can guide you through the process and guarantee that you will make a good decision, but I can't guarantee that you'll get the outcome you want. And of course, the reason for that is that we live in a world of uncertainty and risk. There are many factors that occur outside our control. So ultimately we make a bet, or a series of bets in a portfolio, and what we're trying to do is increase the likelihood that we more often than not achieve the outcomes we want, not that we achieve the outcome we want on every single decision instance. I hope I set that up well for you.

Yeah, that's great. Because I'm actually thinking about what a lot of us deal with: we have a client or a boss or someone we're working with who gives us instructions, but the vagueness makes them impossible to achieve. One of the things I deal with a lot is someone will say, I want you to build an AI machine or a bot or an automation that does this. And I go, great, can you show me an example output? I call it an ideal output. I say, can you show me what a perfect result would look like, if the AI gave you this? And they go, no, but I'll know it when I see it. And I go, then whatever comes out of this, you're not gonna like, because you're asking me to add in guesswork. And every time you add in an impossible variable or a new variable, the odds of success go down dramatically. Absolutely. So when I have to guess, I tell them we'll have to iterate many times. Sure.
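A back-of-the-envelope way to see Jonathan's point that each added guess drops the odds of success dramatically: if the guesses are independent and each one matches the client's intent only some fraction of the time, the chances compound multiplicatively. The 90% per-guess figure below is an invented assumption for illustration, not a number from the episode.

```python
# Compounding guesswork: probability that every independent guess about
# an unstated requirement matches what the client actually wanted.
def success_odds(p_per_guess: float, n_guesses: int) -> float:
    """Chance that all n guesses land, assuming they are independent."""
    return p_per_guess ** n_guesses

for n in range(6):
    print(f"{n} guessed requirements -> {success_odds(0.9, n):.0%} chance of a first-pass match")
```

Even at 90% per guess, five guessed requirements put the odds of a first-pass match below 60%, which is why "I'll know it when I see it" turns into many paid iterations.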
Because every time I'm building it, I'm guessing at the output. And I deal with this so much, whether it's building an AI that will outline a book. I say, give me an example of a perfect outline. "I don't have one right now, but I'll know it when I see it." Anytime I hear "I'll know it when I see it," I go, oh no, I'm gonna have to raise the price.

It's the classic "find me a rock" dilemma. Somebody says, go find me a pretty rock. Can you tell me what you think of as pretty? No, but, as you say, I'll know it when I see it. And you're exactly right, it's a nearly impossible task to accomplish. You are lucky if you actually do find the rock that people want, but you're not smart. Not you specifically, but when people do this kind of thing and experience that sort of lucky outcome, it's not because they're smart. So you're right, we absolutely need to have a pre-structured or pre-framed idea about the outcome that we want. I think you're right to describe it in terms of, can you give me the ideal output? Now let's work backward: what does it take to get to that ideal output? And we can have a process that guides us from ideal output back to inputs, and then work our way forward to figure out how to optimize along the way.

Is there a way to get someone to switch into that mode, to realize how important it is? Because the two areas where I deal with a lot of vagueness are the description of the final result and when they want it. Sure. I was just talking to someone recently and I go, do you need this by Monday? And they go, I'm just doing a lot of things right now. And I'm like, but do you need it by Monday, yes or no? Because I need to know if I have to pay my team to work through the weekend, if this is an emergency. And sometimes people have a real challenge with "it's an emergency" versus "it's not an emergency." It's something I learned from my dad, who was a lawyer for many years.
He goes, it's always an emergency until it's on the client's desk. And suddenly you have to work through the weekend and work all night, and then they don't look at it for two weeks. Sure.

So I think the first issue is, how do we convince people that they need to change their behavior? And I'll be honest, this is almost impossible unless a person is aware that they've been wasting time and resources, or that they've had a large number of failures associated with what they've been doing. Unless that exists beforehand, it's a little difficult to convince people they need to change the way they're making decisions, because most people think of themselves as good decision makers, right? Particularly leaders in an organization, because they've convinced themselves, of course, that they wouldn't be a leader in an organization unless they were good decision makers. The fact of the matter is, if you were to go back and do an audit trail of all the various decisions they made, it's probably a 50-50 chance that anything they decided to do actually worked out the way they wanted. But once you're dealing with a person who does want to change, then the way to start is, I think, to frame out the objectives, the preferences, and the values that person has. And actually, there's a really good book about this. It's a little dated now, and when I say dated, from my point of view it's, oh, that was early in my career. It's a 1996 book by Ralph Keeney called Value-Focused Thinking. I think it actually flips the whole paradigm of the current idea behind data-driven decision making upside down. What you have to do is convince a person that they need to be values-focused in their decision making. So start off by framing out what is important to you in terms of your values.
And when I say values, I don't necessarily mean your morals and ethics, but what is it that you want to experience? What is valuable to you, and what are your preferences among those values? Once you can get that framed and clearly articulated, then you can begin to think about the decisions you would make to achieve it. Once you've identified the decisions you can use to achieve those objectives, then you can start to think about the various uncertainties and risks that could prevent you from actually achieving them. That gets into a lower-level, quantified layer of the decision making. But I say always start by identifying your goals, objectives, values, and preferences first, and follow that outline that Ralph Keeney gave us back in 1996.

Yeah, it's interesting, 'cause when I talk to my kids, especially my younger kids, the ones under six, I'll say, do you want toy one or toy two? And they go, I want both. That's how kids think. But when I deal with clients and I go, which one do you want first? I want both. And it reminds me of high school, when people are like, oh, I have seven best friends. I go, then you don't know what "best" means, right? Something interesting: in my twenties, one of my friends, Ollie, once said to me, whenever I ask a girl to be my girlfriend, I say to her, what do you think "girlfriend" means? And I was like, what? That sounds crazy. And then he goes, every girl gives you a different answer. And I realized that's true. Every person I've dated, I asked, what does girlfriend mean to you? Yeah. And it's always different. No two women have given the same answer. I'm very married now, but my wife's definition of married is probably different than every other woman's. And it's really surprising how much words have different meanings.
So now I've learned that when someone says, I want an automation, or I want this, I go, what exactly do you mean? And I often go through this Dr. House phase where I'm like, I know what you said, but I have to figure out what you meant to say and what's the problem we're trying to solve. Sometimes people use words loosely. I think there's actually a big problem with AI in that the word "agent" means 50 different things to 50 different AI people. We don't have any standard definition, so it's really hard. I talked to someone last week whose "agentic" is different from "agent," and I was like, okay, now it's really tough. So even getting to the point of what's important to you, or what is the most important thing, is another area of definition. Because I've worked with people before who go, what's the most important metric to you? And I go, dollars. Exactly. I don't care about followers, reach, engagement, likes. If nobody likes a post and it makes a thousand dollars, I like that way more than a thousand likes, 'cause I can use those thousand dollars to buy food, and you can't with likes. It's always fascinating to me that I have to go through this process of, let me explain what's the most important metric to me and what are the values here. Yes. I was explaining to someone the other day the thesis of my company, which is AI simplified. I said, if I teach a process and it's more complicated than another process out there, I can't do that. And it took me a long time to get to that thesis, like a year of thinking about how I can really specify what I do. So when someone is trying to figure out what their values are in this context, I realize we're not talking about moral values. We're talking about, what are the most important metrics for your company, or what are you trying to achieve?
I think this is a really important question, because this is where people go into mission creep. Then you go, does this decision match the core values of my company? This is like when a company has a mission statement. Yeah. I think a lot of companies miss the point of a mission statement. It's not to tell everyone how good you are. It's to create a tool to measure your other decisions against, to go, does this fit inside that? It's your heading, the directional heading that your company has.

What you're describing, you're describing from the point of view of talking to a single individual and then running into the ambiguity that they give you with the words they use. But imagine now, and it shouldn't be hard, right, you do this all the time, working in a larger organization where you have multiple stakeholders who presumably are all working toward the same sort of outcome. And yet, if you were to go and ask them, tell me what your company's goals or objectives are, or what is it that your company does, they will all give you a different answer. And then you wonder, how in the world does any company make money at all? I'm actually pretty convinced that companies, after they get through a certain phase, make money in spite of themselves. And this is not really to denigrate anybody, but the fact of the matter is, people set up systems and then they get stuck in various silos within their organization, for good reasons, by the way. There are some really good reasons why companies develop silos. It's because there's a common language and a common set of practices that allow that particular department to accomplish something more efficiently than if they were having to constantly explain themselves, right?
But at the same time, when you get so siloed that the silos aren't talking to each other anymore, then you've got a real pathological problem. This happens all the time in organizations. You ask one particular operating unit within a company, what's the most important objective for this decision? Let's say you've got an overarching decision to be made. They, of course, are going to give you the metric that they are measured on. They are not gonna give you the metric that matters to the overall company. I'm not saying they do this every time with 100% certainty, but the propensity is much more toward focusing on the thing they are responsible for, as opposed to thinking about what the overall organization is actually trying to achieve. And so this whole process, you mentioned how important it is, and I really cannot overemphasize it. When you embark on something like implementing a significant artificial intelligence system, one that involves the input or the use of multiple stakeholders, getting that framing right, what are my values and objectives, what are the things I'm trying to achieve, and why am I trying to achieve them, is absolutely table stakes. If you don't do this, you will literally fail.

That's a really good point, because I always think about how marketing hates sales and sales hates marketing across departments. I was talking to some people I work with the other day and I was like, we should just fire all the departments and hire more engineers. And they're like, that's exactly what every engineer says. We just need to keep making the product better, and that's the only thing that matters. And they're like, but we have to generate sales and customers and the other things. And I'm like, yeah, but if the product's better, it's like I have that Field of Dreams thought.
If the product's really good, everyone will find it. Yep, that's every engineer: build it and they will come. Yeah.

Talk about large-scale decision making, or using large amounts of data, 'cause I find that the more people whose input you get, the murkier the data gets. We have this idea that if we get everyone's opinion, we'll get a good answer. But if you ever watch Family Feud, there are always the numbers at the bottom that are wild: three people said this and two people said that. Those are not useful pieces of information. You never get "80 people agreed on the number one answer." There's this idea now, and I think it's really popular because of AI, because AI is good at analyzing large amounts of data, that if we have more data, we'll get better answers.

I'm not entirely convinced of that yet, particularly with the use of artificial intelligence for certain kinds of things, and particularly, let's say, setting strategic intent. And don't misunderstand me, by the way. I think the advent of what we're seeing today with artificial intelligence is huge. It's already changing things immensely, it's going to continue to change things immensely, and I think on net for the good, right? I think we're all gonna benefit from the trajectory that we're on with this. The problem is that in the short term, the failure rate for these artificial intelligence initiatives is extraordinarily high. And I think that's related to the discussion we've been having through all this. In fact, it reminds me of maybe 10 years ago, when the whole data science movement really began writ large, right? Everything needed to be data-driven decision making. In 2017, actually, maybe going back a year before that, 2016 or so, Gartner reported that something like 65% of all data analytics initiatives failed.
And you think, oh my gosh, that's a huge failure rate, particularly when we're talking about the field of inquiry that is supposed to make us better decision makers, right? So why does the physician not heal himself? Why is it that data-driven initiatives can't be more successful than all other types of initiatives? Because in fact, they were at that time failing at about the same rate as just about all other IT initiatives. And then a year later, Gartner came back and said, oops, we were wrong. The data analytics failure rate wasn't really about 65%. It was more like 85%. It was even worse. If you trace back the causal reasons for that failure rate, it all starts with what we've been talking about on this whole thread, and that is a failure of the executive function to clearly identify the right problem to be solved. In other words, all of these large data-driven initiatives were really science fair projects, large, complex science fair projects that didn't really have a problem to solve. They came up with a solution, then they went looking for a problem. The same thing is happening right now with artificial intelligence initiatives. In fact, late last year, RAND published a paper that showed that something like 80 to 85% of all artificial intelligence initiatives were failing. And when you look at the postmortem explanations for why they failed, it was the exact same reasons as we saw with data analytics. It was that failure of the executive function to clearly identify the right problem to be solved, with an understanding of why you wanted to solve that problem. In other words, people are solving irrelevant problems, and there's really, in my mind, no bigger waste of resources than to solve an irrelevant problem. So this, I think, has gotta be something we fix in the industry, by the way.

Yeah, I feel like so many companies are just doing something that's cool. Yeah, exactly.
Something clever, but it doesn't solve a problem. I always think the best inventions come from solving a problem you have and then finding out other people have the same problem. Exactly. Then you're onto something. But what we see instead, and you've heard this one: oh, they don't know what they want, and I'll show them. And okay, that works very rarely. That's the problem: it almost never works. And they're like, it's one in a million, and there are more than a million people, so it's gonna work for me. I think this is a really important lesson, and it gets back to first principles. When I teach people entrepreneurship, or starting a business, I say you wanna find the three Ps: are there enough people who have the problem you solve, and are they willing to pay for it? Those are the three elements. Now, if there aren't a lot of people but they'll pay a ton of money, sure, like surgery, you don't need that many customers. If it's something they'll pay a very small amount of money for, like a magazine, then you need a lot of customers. That's right. And that kind of determines things. And how big the problem is for them determines how likely they are to pay for it. I see that we're bypassing that so much right now, both in people investing in these AI ventures and in people who reach out to me to solve problems, or really to help them use a cool tool. And I can always tell, because I really am aware of the news, which article someone read based on the question they ask me. They're suddenly really interested in something. And I go, okay, let's talk about the good idea fairy. Great when it visits, but let's make sure we have a reason for this. Yeah. So I go through this really complicated process when I'm making a purchasing decision, and I teach AI, right?
I teach AI, I publish an article every single day on LinkedIn, and yet I only use seven tools, and I would only use two if it wasn't my job; I just have to run a few comparative tests sometimes. Sure. That's why when I see people say, I use 50 AI tools, I go, then you're not good at any of them. There's nobody who says, I do drywall and tile, and I paint, and I do electric and plumbing, and is great at all of them. You're probably bad at all of them; you're not a master. That's the difference: it's the person who narrows their focus who's really good. And that's fine, but then say you're a handyman, right? But there's this thing that's happening, and I think you're exactly right, which is that we think more information is always better. As I was saying earlier, I think about this all the time: now we transcribe every meeting and store it on cloud servers. Think about how many servers there are around the world with hundreds of terabytes of transcripts that nobody will ever listen to. It's hoarding, digital hoarding. All this data's out there, and we think it's fine because it's in the cloud, but the cloud's not a real place. The cloud is just a computer at someone else's house. It's still using electricity, still made out of computer parts. And I try to say this: more data that you're not using doesn't help, more of the wrong data doesn't help, and accelerating a broken process just means you crash faster. Take it away.

Yeah, and I think that's a really good distinction. To make good decisions, whether it's developing a new gas reactor or implementing some significant AI solution in your organization, if you're mining irrelevant information to support your decision making, you'll probably just find something to justify the decisions you've already made. Rather than that, start from the top like we've been describing: identify your goals and objectives, then create the decision strategies that could help you get there, and consider multiple strategies. Then you find the data or information that would help you make distinctions between those strategies as you move forward, right? You could have one artificial intelligence initiative in mind in terms of what you want to achieve, but you can probably come up with three or four different ways to get there. They all have their benefits, costs, and risks, and you need the kind of information that gives you the relevant means to make distinctions between those pathways. So that's the really important thing: distilling the information and data you have down to what's relevant, not just using all the possible data out there. We've run into a significant problem with this. Before the age of being able to house and warehouse all this data, of course, we said humans need data to make better decisions; we can't rely just on our intuition, which, by the way, is true. Then we went to the other extreme, and now we have so much data that we have a duplicate of the problem we had before. That is, the world was large and complex and somewhat scary because we didn't have data we could access in an easy way. Now we've got so much data that we don't know what to do with it; it's just a duplicate of all the information that surrounds us in the universe naturally. So which data set do we refer to? That's gonna be the big problem we face.
And fortunately, we are getting better at collecting information, but the problem, as we've just been discussing, is that we now have a lot of irrelevant data to parse through before we can find what's relevant. I think that's one really big problem. The other really big problem is that all of the data we have is a reflection, or an imprint rather, of what the past looks like. It doesn't tell us anything about the future except for short-term trajectories. It's very difficult to be successful at strategic thinking if you're using data that's only representative of the past, right? The real concept behind strategic thinking is that you do something different, which means you're generating a counterfactual, something that has never existed before in the world, if that makes sense. You're actually thinking about a future that has never existed, which means you need information about that future that you won't have. This is the conundrum of strategic thinking, right? So instead of relying on data, it forces us to rely on a different kind of information, and that information is probabilistic reasoning. We have to use probabilities, our willingness to make bets, and our willingness to assign probabilities to various outcomes, even if we don't necessarily have data to support us. We can at least quantify our propensity to believe certain things, and then we can make trade-offs based on the probabilities we assign. But this again brings us back down to looking for the relevant information to support the decision making we need to get to the outcomes we're seeking. The only way to do this, by the way, is to develop a structured business case analysis of what you're trying to achieve, so that you can make these informed trade-offs.
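Robert's move from historical data to probabilistic reasoning, assigning subjective probabilities to outcomes and treating strategies as bets, can be sketched numerically. This is a minimal illustration only: the strategy names, probabilities, and payoff figures below are invented assumptions, not anything from the episode.

```python
# Sketch of probability-weighted trade-offs between candidate strategies.
# All names and numbers are invented for illustration only.
strategies = {
    "build in-house": [(0.3, 500_000), (0.7, -100_000)],  # (probability, payoff in $)
    "buy vendor tool": [(0.6, 150_000), (0.4, -40_000)],
}

def expected_value(outcomes):
    """Sum of probability-weighted payoffs for one strategy's outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Rank the candidate pathways by the expected value of the bet.
for name in sorted(strategies, key=lambda s: expected_value(strategies[s]), reverse=True):
    print(f"{name}: expected value ${expected_value(strategies[name]):,.0f}")
```

Even this toy version forces the discipline Robert describes: you have to write down what you believe before you can compare pathways, and the comparison is explicitly a bet on likelihoods, not a guarantee of any single outcome.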
And I think that's actually one of the biggest problems I'm seeing myself with people who are going down this pathway of developing large, significant artificial intelligence solutions to problems. They aren't necessarily stopping to ask themselves: what is the business case rationale that supports this decision?

It reminds me of that saying that people are often fighting the previous war instead of the current war. You look at the data from the last one, and no, this one's different. One of the things I find interesting is people will often say, oh, I admire this business, or that billionaire. And I say, their wife left them, their kids changed their names, their family hates them. Do you want that outcome? You have to look at the totality of someone's results, and instead we go, I want this family from this person, and that business from that person, and that income. You can't pick and mix: whoever you model, you're gonna get their whole result. Maybe you can set the new world record for the biggest divorce settlement, that can be your dream, fine. But I see that so much. It's obviously a more personal decision, but even at larger levels, we model decisions on things people did in the past, or we look at the wrong data, and we don't look at enough of it. Everyone used to love that book Good to Great, right, about those ten companies, and then I think some of them are gone. I don't know if the book is the reason, but it was held up as the example of resilient companies. I think that's one of the things that's really interesting: large companies are slower to make decisions, and it's really challenging, because you get locked into a certain way of thinking, and if you model that, you lose the advantage of a smaller company, which is agility. And there's this temptation of going, I have a bunch of data.
Tell me this is a good decision. And I often go, what's the quality of the data? One thing I find really interesting is people making a decision based on online opinions. And I go, whose opinion did you ask? Did you ask your customers, or just everyone? It's very different with existing customers. When I get feedback from an existing customer, I pay a lot more attention than when it's a blind email from someone who's never bought anything from me, right? I weight them completely differently, but I see a lot of companies that weight them the same. A lot of people will never buy any of my products or read any of my books. Most people won't. The majority of the world isn't interested in what I do, which is fine. So I need to narrow it down to just existing customers or potential customers, and then make the decision about what they want. That's just one of many ways the data can be thrown off. So I wanna dive a little bit more into this kind of predictive process. Can you explain how you can develop a system that stops relying so much on the past and starts to factor in what the most likely outcomes are, what the likely future is?

So this really gets us a little bit into the weeds, if you will, of the concept behind decision quality. But I think it's a useful pathway for discussion, because I do think people assume, particularly when they have to implement a decision process like we were discussing earlier in our conversation, that this leads to a reduction in agility. And by the way, I've seen the exact opposite. When you have a structured way of moving and thinking with each other, it actually increases your agility. Oddly enough, the effect is that, at least over shorter periods of time, you're slowing down. There's a stoic saying: we go fast by slowing down.
What happens is that by having a structured decision making process, you're actually taking the time to think through what can happen before it happens. That way you can develop mitigation plans. Or the converse of that: instead of thinking in terms of the things you don't want to happen, you think about the things you do want to happen, and then you develop a, quote, mitigation plan to help promote those, right? So the first thing is to understand that a defined decision making process actually helps to accelerate your decision making when it's structured the right way. There is a bit of a caveat to it. Now, to get to the predictive part you were just asking me about, the way to start is actually to map things out. I like to do this visually. I'm a very visual thinker, and I find a tool like an influence diagram is a very powerful way to think through the predictive structure of any models we might need to help us think through the decision making. We start with the objective we originally identified that we want to achieve, then we disaggregate or decompose that objective outcome into the forces or events that would lead us there. To make this simple to understand, think of it like any generic business. What is it that you want to achieve? From the perspective of most businesses, at the end of the day, it's the maximization of corporate value, right? It's money. The measure for that over time with most organizations is net present value of cash flow. So that's something we can model. Net present value of cash flow would be the objective in our influence diagram, the node we end with. Then we might break this down: what causes net present value of cash flow? It would be revenue minus costs. So now we have two decompositions of net present value: revenues and costs.
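The objective node Robert describes, net present value of revenue minus costs, is simple to compute once the decomposition is in place. A minimal sketch; the discount rate and the three years of cash flows are invented for illustration:

```python
def npv(revenues, costs, rate):
    """Net present value of annual (revenue - cost) cash flows,
    each discounted back from the end of its year."""
    return sum((r - c) / (1 + rate) ** t
               for t, (r, c) in enumerate(zip(revenues, costs), start=1))

# Three years of hypothetical cash flows, 10% discount rate.
print(round(npv([100, 110, 120], [60, 60, 60], 0.10), 2))  # -> 122.76
```

Each node deeper in the influence diagram (sources of revenue, categories of cost) would just replace the flat `revenues` and `costs` lists with its own sub-calculation.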
Then we can decompose what the source of revenue is, and we can keep decomposing each little node in our influence diagram until we get back down to a level where there are inputs we have to think of as, let's say, assumptions, right? Those assumptions are the placeholders for the data we need to get to support that decision making process. What we want to do, of course, is align that overall influence diagram with each of the various decision pathways we could take to achieve the goal we want. Then we change the assumptions for each of the pathways and test the effect of those assumptions. Now, a key part of this is that the assumptions cannot just be single point values, because we don't live in a world where the future always delivers us a certain outcome. We have to get really comfortable with thinking in uncertainties. That means we have to think of these assumptions as distributions; we have to get comfortable with representing our assumptions as probability distributions. And there are some easy ways to do this. We don't have to have a huge amount of data to support building a distribution. We can actually go to a subject matter expert and ask them to give us their belief about how something might look in terms of its overall probability distribution, given the underlying qualifiers that describe a given pathway for creating value, like a decision pathway. I've been doing this for a little over 25 years now, and honestly it's astounding just how replicable this is. Once you are working with a subject matter expert who can think clearly about the problem they've been given, they're very good at finding 80th percentile ranges that are fairly accurate about the world they'd be facing. But it's really important to get this right to support the predictive capability you're asking about.
You have to ask the subject matter experts to describe the reasons why a particular assumption can vary, why it has a range to it. You can't ask them what the outcome will be. This is, by the way, a really important understanding about the quality of a subject matter expert you might go to for information. Subject matter experts are constrained by the same types of biases that all human beings are. In fact, they're pretty bad at making predictions about the future in an unaided way. But what subject matter experts are really good at, and this is what makes them experts, is giving you fine-grained explanations for why a system varies, what causes a system to behave the way it does. That's why they're experts. They understand those really fine-grained details, the possibilities that can happen. An amateur, or somebody not quite so sophisticated in their understanding of a specific type of event in the world, can't give you very detailed explanations for why that event might vary. Subject matter experts, on the other hand, are very good at giving you detailed explanations. Once they're able to think through those detailed explanations for why a system can vary, then they can give you their probabilities for the ranges you might see, and the central tendency you might see across that assumption represented as a distribution. Then you have to use something like Monte Carlo simulation to tie those assumptions with their ranges back to an outcome you're predicting. This takes a little bit of sophistication, I admit, but there are a lot of tools on the market that help people who are novices in this area pull themselves up by their bootstraps to get started. You don't have to have a massive data science force to help you through this. There are a lot of resources available.
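The elicitation-to-simulation loop Robert outlines, taking an expert's 80th percentile range for each assumption, treating it as a distribution, and running Monte Carlo to get a distribution over the outcome, can be sketched with the Python standard library alone. Every number here is a made-up assumption, and treating the 80% interval as the 10th-to-90th percentile band of a normal distribution is one common convention, not the only reasonable choice.

```python
import random
import statistics

Z90 = 1.2816  # standard normal 90th percentile

def from_80_range(low, high):
    """Interpret [low, high] as an expert's 80% credible interval
    (10th to 90th percentile) of a normal distribution."""
    mean = (low + high) / 2
    sigma = (high - low) / (2 * Z90)
    return lambda: random.gauss(mean, sigma)

# Hypothetical elicited assumptions for one decision pathway ($k/year).
draw_revenue = from_80_range(80, 160)
draw_cost = from_80_range(50, 90)

def one_trial(rate=0.10, years=3):
    # One simulated NPV: draw each assumption fresh, discount net cash flow.
    return sum((draw_revenue() - draw_cost()) / (1 + rate) ** t
               for t in range(1, years + 1))

random.seed(0)
trials = [one_trial() for _ in range(10_000)]
deciles = statistics.quantiles(trials, n=10)
print(f"mean NPV: {statistics.mean(trials):.0f}  "
      f"10th-90th pct: {deciles[0]:.0f} to {deciles[-1]:.0f}")
```

The useful output is the whole distribution, not just the mean: the 10th percentile tells you how bad a pathway can plausibly get, which is exactly the kind of trade-off information single point estimates hide.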
Yeah, I think that's just so interesting, 'cause now we so often shoot from the hip. It's an emotional decision. When I was a kid, 20 or 30 years ago, if you wanted to start a business, you had to have a business plan, go to the bank, get a loan, start the business. Now you don't need a business plan, so no one makes one, so no one has a plan. I'm not always perfect with this myself, but a friend of mine this week said, oh, I wanna start a side business. And I go, how much money do you need to make every month? And he goes, what? And I was like, let's start from there and work backwards, because that number will determine what you need. The factors I always look at, and I try to explain this, are how much money you want to make and how much risk you're willing to put into it. And risk management is something people really struggle with. I try to explain it like this: there's two ways you can pay me. You could pay me a flat fee upfront or a commission on the backend. The flat fee is more risk for you and less risk for me. If I write a book for royalties, I only make money every time the book sells. If you pay me to write your book, you pay me up front, and I make money no matter what happens. So it's less money, and royalties would be more money over time. But if you never release the book, which many of my clients don't do, or you never finish the project, or you mess it up, that's your risk. I try to explain it's risk management. So I try to always have fast money projects and slow money projects, and risk management is another element of it. First, how much money you want to make, then how fast you need it. If you need to make it fast, you need to go high risk, right? If you have one day, it's very risky, you have to do something really extra, versus if you have six months or two years. So it starts to bring in these other factors, and I think that's really important, because we often hear.
Everyone's a self-declared expert, and I have to deal with this all the time. People overestimate my level of expertise. And I try to explain: listen, here's what I can do, and everything outside of that, the further away we get, the more terrible I am. I'm very good in a very narrow area of AI. People are always surprised when I don't claim I know how to do everything, and I say, listen, I don't wanna make a promise I can't keep, so I don't wanna go down that path. I'm not very good at a lot of things. Most things, I'm not good at. I'm really good at two or three things, but there are thousands of things people can do. I think that's where you start to recognize a real expert: they know the limitations of their knowledge. So I always watch for when people go outside what they're good at, because we see so many experts on TV, and then their predictions are so wrong.

Yeah, you're right. And that kind of goes back to the point I was making earlier, that experts are terrible at making predictions. They're terrible at making predictions in an unaided way, shooting from the hip. And let's be honest, it seems that thinking has now become déclassé, right? Nobody is doing it anymore. Everything is driven by our intuition, it seems, or else we become self-proclaimed experts because we're relying on artificial intelligence in the background and not saying it upfront and explicitly. But yeah, I agree. Experts are subject to the same sorts of biases and mental failures that all of us are subject to, but they do serve a very good purpose when they're utilized in the right way. And that's something we have to figure out how to understand better: how do I use an expert the right way? Don't ask for predictions, ask for explanations. That's the way to get around it.

I think that's really good.
I think this is gonna help a lot of people who've been caught. I often compare making a decision to double Dutch: the two jump ropes are spinning and you try to jump in at the right moment. You don't wanna make the wrong decision, so you wait longer and longer, or you jump in too fast and get all tangled. And I think this will help a lot of people, 'cause my feeling is that the decision making before you implement is where people are really struggling now. For people who wanna learn more about what you're working on, and possibly get some of your help making better decisions and building a strategy, where can they find you online and find out more about the amazing things that you're doing?

Sure. The best place to find me online is probably LinkedIn, Robert D. Brown II. I know that sounds a little highfalutin, but the name Robert Brown is such a common name, I have to have a way to make a distinction so that people can find me.

Oh, I know. Yeah, I deal with it.

Those of us with colors for last names, and names like Robert and Jonathan, we run into this problem. But yeah, LinkedIn is the best way to find me, I think, and I maintain a sort of open network attitude. If a person is interested in having a truly professional, collegial sort of relationship with me on LinkedIn, I answer the messages I get there. I'm not really all that interested in people just jumping into my inbox and trying to sell me something. Probably one out of a thousand times has that ever turned into something where I go, oh, I'm actually glad you did that. Maybe even less than one in a thousand. But yeah, that's a good place to reach me. I'll also give my email address: Rob Brown at resilience, I'm sorry, I get my email addresses confused, Rob Brown at cyberresilience com. Currently I work for an insurance company called Resilience.
And I'm actually working within this company as an internal consultant, if you will, as the senior director of cyber resilience, to support making these kinds of decisions we've been discussing. Not necessarily AI decisions per se, but decisions related to new product development, supporting strategic planning, those kinds of things. But they're all related. Artificial intelligence is certainly a new technology that's available to us, like the internet was, and like many other really cool, powerful things that have been developed, but in the end the decision making around it is the same. It goes back to identifying what we want to achieve and why, then what can I do to get there, then finally taking into account the risks and uncertainties that could prevent me from doing that, and how I can structure my risk management efforts to maximize the likelihood that I get the outcome I want. But also keep in mind, just because you make that good decision, that is, you've taken the information into account in the hierarchical way we've described, that doesn't guarantee you get the outcome you want. It just means that over repeated trials at making decisions, you increase the likelihood of getting what you want. That's a really important thing to understand. It gets you out of what we call resulting, that is, looking at the final outcome to determine whether or not you made a good decision to initiate the process you're on.

I think that's gonna help a lot of people make a little better decisions, and hopefully some people who are interested in what you do reach out, and only people that are interested in what you do. I get far too many sales messages in my LinkedIn inbox too. But thank you so much for being here for an amazing episode of the Artificial Intelligence Podcast.
Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and strategies on how to leverage AI to grow your business and achieve better results. In the meantime, if you're curious about how AI can boost your business's revenue, head over to artificialintelligencepod.com/calculator. Use our AI revenue calculator to discover the potential impact AI can have on your bottom line. It's quick, easy, and might just change the way you think about your business. While you're there, catch up on past episodes, leave a review, and check out our socials.