
Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Navigating the narrow waters of AI can be challenging for new users. Interviews with AI company founders, artificial intelligence authors, and machine learning experts. Focusing on the practical use of artificial intelligence in your personal and business life. We dive deep into which AI tools can make your life easier and which AI software isn't worth the free trial. The premier Artificial Intelligence podcast hosted by the bestselling author of ChatGPT Profits, Jonathan Green.
Is There Too Much Noise Around AI with Fadi Hindi
Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we delve into the evolving definition of AI with our insightful guest, Fadi Hindi, a seasoned expert in digital transformation and AI consultancy.
Fadi shares his vast experience bridging the gap between academia and industry, highlighting the fluidity of the AI definition over the decades. He discusses the significant inflection points in AI history, particularly the impact of ChatGPT and how it revolutionized access for consumers. Fadi elaborates on the importance of understanding AI's capabilities and limitations, stressing the need for trust frameworks in AI applications.
Notable Quotes:
- "The definition of AI has evolved tremendously. It's no longer just about automation but about creating systems that mimic human intelligence." - [Fadi Hindi]
- "We've crossed the threshold of the Turing Test. With today's technology, many can't tell the difference between talking to a human or a chatbot." - [Fadi Hindi]
- "AI should be seen as a digital assistant that expands your capabilities, enhancing productivity across various domains." - [Fadi Hindi]
- "We need to have deeper conversations about AI's impact, not just stay at the headline level." - [Jonathan Green]
Fadi emphasizes the transformative power of AI, urging listeners to integrate AI tools into their workflows. He underlines the importance of custom AI assistants to boost productivity and stay competitive in the rapidly evolving digital landscape.
Connect with Fadi Hindi:
- LinkedIn: https://www.linkedin.com/in/fadihindi/
- Website: https://theconsultinglab.us/
Connect with Jonathan Green
- The Bestseller: ChatGPT Profits
- Free Gift: The Master Prompt for ChatGPT
- Free Book on Amazon: Fire Your Boss
- Podcast Website: https://artificialintelligencepod.com/
- Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
- Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
Is there too much noise around AI? Let's find out with today's special guest. Welcome to the Artificial Intelligence Podcast, where we make AI simple, practical, and accessible for small business owners and leaders. Forget the complicated tech talk or expensive consultants. This is where you'll learn how to implement AI strategies that are easy to understand and can make a big impact for your business. The Artificial Intelligence Podcast is brought to you by FractionAIO, the trusted partner for AI digital transformation. At FractionAIO, we help small and medium-sized businesses boost revenue by eliminating time-wasting, non-revenue-generating tasks that frustrate your team. With our custom AI bots, tools, and automations, we make it easy to shift your team's focus to the tasks that matter most, driving growth and results. We guide you through a smooth, seamless transition to AI, ensuring you avoid costly mistakes and invest in the tools that truly deliver value. Don't get left behind. Let FractionAIO help you stay ahead in today's AI-driven world. Learn more and get started at FractionAIO.com. I'm really interested, Fadi, in your perspective as someone who's been in the education world and is now doing more work with businesses, startups, and consulting. There have been a lot of challenges where the definition of AI has changed dramatically. When I was a child, artificial intelligence was a sentient machine. Now we're using AI to describe things like smart word processors, image generators, spell checkers, simple automations. What has really happened to the definition of AI? And what can the academic world do to solidify what AI actually means? Because first it was AI, then it was strong versus weak AI. Now we're saying AI's become AGI. All these words keep changing. How can people really know what we mean? Okay. Jonathan, first of all, thank you for having me on the show. I really appreciate it. It's a great opportunity and an honor to join, and hopefully your listeners can take my perspective, my two cents, for what it's worth. So just for entertainment purposes, I'll take you back to the late eighties, early nineties, when I was going to NC State University here in Raleigh, North Carolina, and I was majoring in computer engineering. So we're talking about hardcore engineering, like microprocessor design. And I minored in robotics and artificial intelligence. We know that the term artificial intelligence has been around since the fifties. That's what we always say. In the late eighties, early nineties, we were working with neural networks. Expert systems were really big. And we were also doing a lot of fuzzy logic, which disappeared; I think as a discipline it's no longer around. But a lot of neural networks, which we see a lot of today, especially with ChatGPT and the like. So over the 30 years of my career, I joined consulting, joined Andersen Consulting, which has become Accenture, and over 11 or 12 years I also joined Clarkston and so on, working domestically. And then the remaining, I would say, 18 years I spent working internationally in different roles. My focus has been predominantly on automation and digital, so there's been some shape or form of AI throughout the work that we did. People always threw around, oh, this is AI and this is that, and the definition of AI changes.
It really depends on the way that you look at it. What is artificial intelligence? Is it automation? Is it simply having a program like an expert system? That is artificial intelligence at the end of the day. But to answer your question, as we've moved forward over the past 30 years, the introduction of ChatGPT was really the major inflection point. The way that I like to describe it, and I can't remember where I read this, is that in the early nineties, Netscape, the browser, gave us access into the internet, and then it just exploded from there because all of a sudden it became user friendly. We talked about the internet, and it was DARPA and it was UC Berkeley and all these universities, but it was more in academia and defense; it was not widely available. Then they gave us Netscape and things changed, because now the average user had access to it. The same thing happened, in my opinion, with ChatGPT. Those guys did a good job. They figured out how to give us an interface into AI, which did not exist before, and hence the explosion. Now, to answer your question, there have been many different definitions for AI throughout the years. More recently, since 2023 and the introduction of ChatGPT, this thing has exploded, because every person, every consumer, all of a sudden had access to this massive, let's say, chatbot that's been trained on massive amounts of data. And they just started creating new concepts and new ideas about what AI is. But to me, the definition of AI, artificial intelligence, if you go back to the roots, is this: if you talk to a machine and you don't know whether that's a machine or a human, maybe that's what AI is. But in general, I think digital, automation, and AI get mixed and matched in conversations. I'll stop here and see if that makes any sense. Yeah, I'm glad you brought up the Turing test, which is the idea that it could be in another room and you don't realize you're talking to a computer. And if that's the test, has it passed or not yet? We've had different types of online bots since the nineties that have been trying to convince you that they're real and to click a link. Those have been around since the AOL days. So I guess the real question is, for people who are trying to understand what all of these terms mean, where is the line between where we are right now and what we thought of as AI 30 years ago, with movies like Terminator and WarGames? That's really where my initial idea of this came from, and we seem to be in a different place now. Yeah, my opinion is that we've crossed the threshold of the Turing test. I think we've passed it. ChatGPT and similar ones, Gemini, all these chatbots; some of them are better than others, and I'm not going to say which, in my opinion, but we've crossed that threshold. So if you think about the introduction of ChatGPT's 4o, it was interesting because that was our initial introduction to agentic AI and the ability for more sophisticated workflows. When you're talking to it, if you activate the voice mode or the audio so you can converse with it, you're taking your voice, converting it to text, taking that text, putting it through the language model, the chatbot, which comes up with an answer, and then it's converting that text back to voice.
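For the curious, here is a minimal sketch of that voice-mode loop. The three helpers are hypothetical stand-ins, not any vendor's actual API, for whatever speech-to-text, language-model, and text-to-speech services you happen to use:

```python
# A minimal sketch of the voice-mode loop described above: speech in, text through
# a language model, speech back out. The three helpers are hypothetical placeholders
# for whatever speech-to-text, chat, and text-to-speech services you actually use.

def transcribe(audio: bytes) -> str:
    """Placeholder STT step: swap in your speech-recognition service here."""
    return "What's the weather like tomorrow?"  # canned text so the sketch runs

def ask_llm(prompt: str) -> str:
    """Placeholder LLM step: swap in your chatbot / language-model call here."""
    return f"You asked: {prompt} Here's my best answer."

def synthesize(text: str) -> bytes:
    """Placeholder TTS step: swap in your text-to-speech service here."""
    return text.encode("utf-8")  # a real TTS service would return audio data

def voice_turn(audio_in: bytes) -> bytes:
    user_text = transcribe(audio_in)   # 1. voice -> text
    reply_text = ask_llm(user_text)    # 2. text -> language model -> answer text
    return synthesize(reply_text)      # 3. answer text -> voice

if __name__ == "__main__":
    print(voice_turn(b"fake-audio"))
```

In a real assistant each stub would call out to a hosted or on-device model; the point is simply the chain Fadi describes: audio to text, text to model, model answer back to audio.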
So if you look at that, and if we say the Turing test means you go behind the screen where you don't see what you're interacting with, how many of us do you think today would know that we're interacting with a chatbot? Especially if you do prompt priming to make it not so polite. Because one of the things with ChatGPT is that it is super polite, but the reality is, for people that are on the inside track of AI, I always tell ChatGPT to drop the nice act and let's just get to the regular, fast conversation; there's no need to put the niceties around it. So I think we've crossed that threshold. Sam Altman and team are pushing for AGI now. AGI is the concept of artificial general intelligence, where we're giving it capabilities beyond simply having multiple PhDs in so many different disciplines, and now we're beginning to trust it and give it more. I guess that's a bit also on the agent side. We're beginning to give it more autonomy in being able to conduct transactions on our behalf, and to me that's really the rub, and another inflection point, where we as humans start putting more trust into a machine to actually act on our behalf, have autonomy to even use our money to purchase things, a ticket or whatever. And that's where I think we're headed. I'm not particularly crazy about the idea, but that's where we're headed. You brought up something really interesting there, which is how much do you really trust an AI? Yeah. So many companies go, I want to use an AI chatbot for customer support. And I always ask the same question. I say, will you give it the power to initiate refunds? No, I would never. And that means every person talking to that chatbot knows it can't actually solve their problem. And then I ask another question, which is, have you ever had a good experience with someone else's customer service chatbot? No one ever says yes. No one has ever had a good experience. So this core problem is one of trust, because their fear is, what if the AI goes rogue and refunds everyone and all our money is gone? And I say, if you don't trust it with your money, why should they trust it with their money? And I certainly understand that. We've gone through these different phases of what we're willing to do with money, right? First it was even buying something online. There are many people who have never made an online transaction, a large number of people who never purchase online, and it's mind-boggling. But yeah, for a long time I was someone who would buy through a computer but not my phone. Only recently did I start doing transactions through my phone, and there are a lot of people now who don't even bring out a credit card or cash. The phone has their money in it, and they can do tap to pay and other different ways of using the phone as the core piece of their transaction, where the phone now controls all of their assets. So we are continuing to trust these tools more and more. How far away do you think we are from people actually giving the AI access to their credit card? Because here's where the question becomes interesting: if there's a mistake, if there's a purchase with your credit card and it's just someone else in your house, like if one of my kids gets my credit card and buys something, that's not fraud, correct? I still cover that because it's my fault. It's only fraud if it's someone outside your household.
So if the AI makes a bad purchase, where's the line on whether you can get a refund, whether you're stuck or not stuck, right? They buy something with a no-refund policy; now are you stuck with it? I think that's the question that has to be answered. And how far away do you think we are from that answer, and from people actually trusting an AI to buy stuff for them? Yeah. So I think we're already there, Jonathan. I'm on the fringes; I take a lot of risks as a person anyway, and as an entrepreneur, I guess, more recently. But I would say I would trust an AI with my credit card as long as I understand what it is operating on, and I've actually validated any biases or problems that it might have. But you're absolutely correct, there's almost a trust bridge that you need to cross over. You cannot just blindly give any AI access to your credit card. As a responsible, I guess, consumer or individual or intellectual, whatever, you need to make sure that you've got a piece of software that has been versioned and trained, and you've done enough testing on it that you feel comfortable giving it credit card access. I still think that you have to have certain stop measures, certain gates, where you say any spend over a hundred dollars, you have to come back to me; you cannot just simply go ahead. So I think the technology is there. Now it's about putting forth the effort as humans to figure out what those gates are going to be. What are you comfortable with, and what are you not comfortable with? My view as a business person and as an entrepreneur is that if you've given your AI your credit card, you've given it agency and you've given it authorization, so it's going to be a straightforward answer for me: there are no refunds. If your AI makes a mistake, that's on you. Again, that's just a business view, a capitalistic view, if you will. But that becomes the consumer's responsibility, because you've actually authorized that transaction. Yeah, I think that you're right. So I posted a poll on LinkedIn a while ago, and I said, if you post something to Twitter that's offensive, and then you say, wait, ChatGPT wrote it, not me, are you no longer in trouble? Everyone except for one person said no, it's still your fault. That one person is the one who's going to find out the hard way, the one person who lets the AI write something. That's why I still believe it's critical to have that man-in-the-middle moment where you make the final decision. The AI can set up the purchase, fill in everything, and go, here's the order, this is the price, should I do it? I think that for me, that's still where I'm at, and maybe we'll get to the point eventually where we have that security. There's also this element that AI is so exciting. We often see a ton of excitement, which causes a ton of security breaches, because people don't pay as much attention to security when they're excited. You now have an AI that can be socially engineered, so there's a new way an AI can be tricked out of your credit card, because what to us as humans looks obviously like a fake website, or something wonky, AIs don't see the same way. They just see the code, so they might not notice the same things, the same signals. So you have these new attack vectors that you have to be aware of, and that's one of the big challenges with AI and with these tools: we're getting so excited so fast.
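A minimal sketch of the kind of gate Fadi describes, where small purchases go through on their own but anything over a set threshold comes back to a human for the man-in-the-middle approval Jonathan mentions. The ProposedPurchase type, the threshold value, and the console prompt are illustrative assumptions, not a real payments API:

```python
# Sketch of a spend guardrail: the agent may draft any purchase, but anything over
# a configurable limit is held for explicit human approval before money moves.
from dataclasses import dataclass

APPROVAL_THRESHOLD_USD = 100.00  # spends above this always come back to a human

@dataclass
class ProposedPurchase:
    item: str
    amount_usd: float
    vendor: str

def approve_purchase(p: ProposedPurchase) -> bool:
    """Decide whether the agent may complete this purchase on its own."""
    if p.amount_usd <= APPROVAL_THRESHOLD_USD:
        return True  # small spend: agent proceeds autonomously
    # Large spend: the "man in the middle" moment, a human makes the final call.
    answer = input(f"Agent wants to buy {p.item} from {p.vendor} "
                   f"for ${p.amount_usd:.2f}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    order = ProposedPurchase(item="conference ticket", amount_usd=249.00,
                             vendor="example.com")
    print("Proceed" if approve_purchase(order) else "Blocked pending human review")
```

The design point is that the limit lives outside the model: even a perfectly prompted, or maliciously manipulated, agent cannot move more than the threshold without a human saying yes.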
I see this with a lot of companies. They give the chatbot too much information, and then someone can ask a question the right way. We've even seen people socially engineer the main chatbots from OpenAI, where they've gotten them to reveal their core coding. We've seen it happen with several of the other major AIs, where you ask a question the right way and it tells you all its secrets: the secret code name for the AI and its hierarchy of rules, like Asimov's three laws, its rules of this and then that, and the different personalities. So even the most elite and most expensive AIs can be manipulated or tricked into saying different things. So I guess the real question for people is this: can you have an AI that you have total control over? If you're just using a version of, say, ChatGPT, it's still connected to their servers, and the knowledge goes into it. So now your credit card isn't just on your computer, it's on their server. That's the first challenge. And if you have something that's totally local, then it's a different challenge: it's more secure, but it's not as smart. I actually think the direction of AI is moving towards smaller models, so I would rather have a model that's just inside my phone and works when I'm not connected to the internet, because, I don't know about you, but when I most need it is when I don't have internet, when I'm lost, right? Or I'm in the woods. When you're in the woods, that's when you want to ask those questions, and if you have no signal, that's when you really want the answer: how do I solve this problem? A smaller model can be super useful for that. And so we're seeing some really cool things with the smaller models. I think that's the direction we're actually going to shift towards, because we've seen these huge models that are hundreds of gigabytes or terabytes and that you need 55 graphics cards to run. And while those are cool, they know so many things. And this kind of leads to this other question I wanted to ask you. Whenever they release a new AI, they release these graphs to show how much smarter it is than every other AI. And there are two types of graphs. The first is when they just compare it to their own models. They go, this new ChatGPT is much better than the old ChatGPT, and you go, oh, there's a reason they're not showing the other companies: it's not better than whatever Google has or whatever Anthropic has, right? Or they show, this is us versus Anthropic. And the scales are always, here's how good it is at taking the LSAT, here's how good it is at taking the GMAT. And very rarely do I have a physics-based emergency. Like, I studied trigonometry and I've never had a triangle emergency in my life. Do I use trigonometry? Not really. Most of how I program is actually based on algebra, like when I'm designing AI prompts; most of my work is all based on algebra, which I never thought I would use. It's how I solve problems: A times B equals C; if I know A and C, I can solve for B. That's how I do most of my work. But these tests we see, and these massive databases they have, that stuff almost never comes up. I don't really need the entire history of philosophy in my phone. There's a certain amount of data I need, maybe medical stuff, maybe survival stuff, although the things that are useful are actually really small compared to the entire history of human knowledge.
So we build these really big models, but very rarely do we use any particular part. I probably use a small slice of that pie; maybe you use a different sector because you have different expertise. If you start to realize just the part you need, then you could probably fit that in a smaller model. I don't need physics, I don't need philosophy, and there are probably a lot of other categories I don't need. So that allows you to get a smaller model that can run locally, and I think that is where this is going. It's a very good question, Jonathan. It's funny that you mention this, because I was having a conversation with a buddy of mine, a gentleman from Google, or actually somebody else in industry, and we were talking about language models. No, actually it was the founder of a startup. He was working on smaller language models that run on the phone, for particular reasons similar to the things that you have raised. I've been around for a while, and I can tell you I've found that extremes on either end don't really materialize. Usually the answer is somewhere in the middle. So what I think we're going to see is a hybrid approach. I don't know, that's just my opinion; I just came up with it here on the spot, thinking through what you were saying. You're going to see some stuff happening on your phone, but then you're going to tap into a larger model that's running in the cloud. And I think developers are going to leverage both, so that maybe some of the earlier conversation we were having around payments could be relegated there: you specify that you only run this on the model that's on the phone, the one that is smaller, concise, that I trust fully, versus anything that's happening in the cloud. Potentially this evolution of a hybrid model as we start getting more and more sophisticated solutions out there. But one thing I wanted to go back to from the beginning of our conversation is customer service bots not being that good. I don't want to give any names, but you can just Google search, or ask ChatGPT or whatever your preferred chatbot, to see what the latest emerging voice chatbots out there are. Within the last couple of months, I got two or three calls from buddies of mine saying, I think I just talked to a bot. They spent like 25 minutes on the phone and they couldn't tell, and they finally just hung up when they realized that they might be talking to a bot. Jonathan, one of them was a job interview. The guy was on the phone with, I guess, a bot for twenty, twenty-five minutes, and then he started getting weird questions, and he's like, wait a minute, that's not my area of expertise; it says that clearly on my resume. And there was a 15-second pause, which is abnormal for a human, and then the voice came back and said, why don't you tell me what your area of expertise is? And he's like, wait a minute, you said that you have my resume in front of you. And this is good for people, for listeners of your show:
you really have to be vigilant. We used to get these fraudulent tax calls in the US where they're saying, this is the IRS, go to Walgreens or Walmart and give me a gift card, and all that. I had that happen to me personally. And since then it's definitely become more and more important: as a consumer, you have to be vigilant now. The first question you ask yourself is, am I talking to a bot? Am I being manipulated? Is this video on YouTube real, or is it just made up by faceless marketing? That's what we're talking about. We're definitely headed for some really interesting times. What was the statistic that I quoted in my last class? I think 90% of the internet's data was generated in the last two years. You've been around and you know how much data has been generated since the nineties, and we say every two or three years the data is doubling or quadrupling or whatever. Now they're saying 90% of the internet's data is from only the last two years. That's massive. The last two years, we're talking about 2022 and 2023, Jonathan, not very long ago. That is only going to accelerate, which means we're going to reach a certain convergence where the amount of internet data being generated grows a hundred percent every year, if not more. That data is actually out there, and bots are getting trained on it. So just to highlight the points: chatbots are definitely getting much, much more intelligent. They're passing the Turing test for sure. And with customer service agents, we don't know. I think companies should disclose that you're talking to a bot before you talk to it. I think there are experiments actually happening right now, and I know friends of mine have had the experience of being convinced they were talking to a bot, so we are definitely there. And the last point from our discussion is that I think we're going to see a hybrid of this LLM, this large language model: something on your phone and then something in the cloud. Because I never thought I'd put my data in the cloud. Did you? Did you ever think all of your data was going to be on the Google network or Apple's iCloud? We never thought that. If you said that in the early nineties, I'd tell you you're insane. But guess what? All my data's out there now. I think the hybrid model idea makes a lot of sense: my AI grabs what it knows I need the most, and the more it talks to me, the more it knows the type of questions I ask. Then when it needs something else, it can grab it from the cloud. That makes a lot of sense. I think that probably is the right answer, because here's what I've noticed: no matter how smart our phones get, we somehow end up with apps that use up all of the memory, no matter how much there is. That's true. My first cell phone, well, I had a pager, and then you had a flip phone with nothing, and then they were like, this phone from Nokia has a Snake game in it. And now you have more and more advanced things. Software seems like it will always stay ahead of hardware, so that makes a lot of sense to me. One of the big questions I get asked a lot is, which is the best model? And I always say, don't look at the charts. Here's what I can tell you.
There are certain models I don't like. I don't like GPT o1; I think it's sassy in the way it talks to me. I still use 4o almost exclusively. And the thing is that they still update it all the time. They pushed a major update earlier this week, because I had to reprogram a bunch of stuff that stopped working and update a bunch of things. So they're constantly pushing updates, and even though they don't change the name and don't tell you, things change all the time. But that doesn't mean that I'm right. There are plenty of people who hate 4o and love o1, or they love Anthropic, and they say all these charts don't matter. I've never asked an LSAT question. And in some of the other tests they do, they ask questions like, how many W's are in the word strawberry, or how many R's are in the word strawberry, or how many words are in your answer? These are tricky questions that you ask to see how good it is at different tasks, but what you're using it for is what's important, and everyone uses AI differently, so whatever model's right for you, that's really okay. There's no wrong answer for that, and I very rarely find that two people like the same model for the same reasons. Yeah. If I can maybe just give you my perspective over the past, I don't know, Jonathan, six months: I think I have significantly moved across, because just being an AI person or a digital person, you're always a critic and you're cynical, because you don't know how good this particular technology has gotten, or whether it really does what's advertised on the box, if you will. And I think I have significantly moved across the spectrum to really adopting something like ChatGPT, and again, no favoritism here, just something like ChatGPT, as almost a full-time digital assistant. As a matter of fact, with my business now, because I do consulting and career coaching, it's a requirement for me that the people on my team have their own digital assistant, and all of a sudden we are two-x, three-x the capacity: you have a team of two or three people and they're acting like a team of ten. And the reason I've come across is that with proper prompt engineering, with the proper way of prompting and priming and putting custom instructions into the chatbot, your ability to get things done is just amplified significantly, and you get to a point where you have these custom chatbots where different ones do different things. And now, with the advent of being able to connect them, again the agentic model, they can do multiple things for you. It becomes an indispensable tool. Because if you look at what I do, I've got three things running in parallel. I do consulting work for a company out here in Raleigh called Vaco, as well as through my own business, The Consulting Lab. I do teaching, and then I do coaching for startups and founders, and I help with things like fundraising. I did four startups in the past and I'm on my fifth, so I can bring some of that perspective. And honestly, without having some custom-built chatbots, or let's say large language models, whatever it is that you like, Grok, Gemini, ChatGPT, Copilot, without those I would not be able to get the work done that I need to get done in a day. It's impossible.
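As a rough illustration of the custom-instruction idea Fadi describes, here is a minimal sketch assuming the OpenAI Python SDK and an API key in the environment. The system prompt, writing samples, and model name are placeholders rather than his actual setup, and the same pattern works with any chat-style API:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Illustrative custom instructions: the tone rules and writing samples you would
# paste in here are placeholders, not anyone's actual prompt.
SYSTEM_PROMPT = """You are my writing assistant for LinkedIn posts.
Match the tone and rhythm of the writing samples below. Avoid generic filler.
--- WRITING SAMPLES ---
{samples}"""

def draft_post(topic: str, samples: str, model: str = "gpt-4o") -> str:
    """Ask the custom-instructed assistant for a first draft to review and amend."""
    response = client.chat.completions.create(
        model=model,  # swap in whichever model you prefer
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT.format(samples=samples)},
            {"role": "user", "content": f"Draft a LinkedIn post about: {topic}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_post("why small teams should adopt AI assistants",
                     samples="(paste two or three of your own posts here)"))
```

The draft still goes through the review-and-amend step described in the conversation; the custom instructions just keep the output in your voice instead of everybody else's.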
So it has just been a transformative journey for me personally to start seeing the power that we get with this. And to your listeners: if you guys are not using this technology, and this is not self-serving, I'm not going to get a commission or anything, but if you don't start using these large language models, find your favorite, start really using it and exploiting it to your benefit, you are going to be left in the dust, my friend, in less than, I would say, two years. The person that's going to be getting the job, getting the funding, getting the money, in whatever shape or form, is going to be somebody that is backed by an army of chatbots. And the person that's going to lose out is the person that keeps all that technology at arm's length, because you simply cannot compete. You will not be able to compete, Jonathan. If you're a blog writer now, there's good and bad, so let's set the question of content quality aside. I could crank out a hundred blogs in the next hour. Now, you're going to argue about the quality of those blogs, and that's where we say, all right, you need somewhere in the middle: you've got to be able to check the content, et cetera. The amount of time it used to take to write a post in the past, like on LinkedIn or whatever, it used to take me half a day. Now I direct a bot and I give it my tone of voice: here are my writing samples, the ones that I wrote. I create a custom bot that says, you are going to use this style, and then I do prompt priming so that it doesn't sound like everybody else; it sounds like me. Then I get the piece that I want, I review it and amend it, I change it as I see fit, but that's done in 10% of the time. So how can you compete with that? It's very important for people to start ramping up. This is not a nice-to-have anymore. Now, again, I'm on the fringes of AI and we are pushing the envelope, and some people could argue that this is not really the case, but I really believe it is. No, I think you're exactly right. And that's really the theme of this podcast: the AI transition is happening. It's no longer a question of if, it's just a question of how fast. Yeah. Your podcast is spot on, and we need to have these deeper conversations. I think you're doing that, and thanks for your show. We need to have these deeper conversations about how we deal with this. We've not dealt with this in the past, right? So more and more of these deeper, engaging conversations, not just staying at the headline level, but going down into the details to say, how does this affect me on a day-to-day basis? That's what we need to sort out. I think you are exactly right. For people who want to see more about your company, see what you're talking about, or see what you're writing on LinkedIn, where's the best place to find you online? It's LinkedIn, and you can reach us at theconsultinglab.us, all one word. We're a startup, a services company, a very small team, but we do digital transformation, AI consultancy, assessments, strategies, et cetera, as well as career coaching for
I come from a very deep technical background as I told you, and I've crossed over to the dark side, the business side 15 years ago. So I know both sides of the fence and I can help technical leaders become more . Business adapt at being able to speak business and relate to business so that their ideas are not so alien when they meet with business people. But that's where they can find me more than happy to talk to anyone. Amazing. I'll put links to those in the show notes and below the video. Thank you so much for being here. Thank you, Jonathan. Thank you for your time of the Artificial Intelligence podcast. Bye everyone. Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and strategies on how to leverage AI to grow your business and achieve better results. In the meantime, if you're curious about how AI can boost your business' revenue, head over to artificial intelligence pod.com/calculator. Use our AI revenue calculator to discover the potential impact AI can have on your bottom line. It's quick, easy, and might just change the way. Think about your bid. Business while you're there, catch up on past episodes. Leave a review and check out our socials.