Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

Leaving the C-Suite with Klaas Ardinois

Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits (Episode 327)

Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we explore the transition from internal technology leadership to external advisory roles with our special guest, Klaas Ardinois, a seasoned fractional CTO and technology consultant.

Klaas shares his journey from being a full-time CTO to working as an external advisor, helping companies align their technology strategies with their business goals. He emphasizes the importance of identifying specific business problems before adopting AI solutions, comparing today’s AI hype to previous technology trends, such as the early days of websites and social media.

Notable Quotes:

  • “The first question is always, what problem are you solving?” - [Klaas Ardinois]
  • “AI means a hundred different things to different people—from expert systems to natural language processing. It’s important to define what you actually need.” - [Klaas Ardinois]
  • “You can’t build long-term success by just adopting AI for the sake of it. The value comes from using it to solve specific problems.” - [Klaas Ardinois]
  • “The reality is, many AI projects end up being about ticking a box for investors rather than creating real operational efficiency.” - [Klaas Ardinois]

Klaas discusses the challenges companies face when integrating AI without a clear understanding of its purpose or potential impact. He highlights the importance of ROI analysis and the need for CTOs and CEOs to collaborate closely when deciding on technology investments. Klaas also touches on the security implications of using AI, particularly with proprietary and sensitive data.

Connect with Klaas Ardinois:

Connect with Jonathan Green:

[00:00:00] Jonathan Green: From inside the C-suite to external advisor with today's special amazing guest, Klaas Ardinois.

[00:00:07] Jonathan Green: Today's episode is brought to you by the bestseller ChatGPT Profits. This book is the missing instruction manual to get you up and running with ChatGPT in a matter of minutes. As a special gift, you can get it absolutely free at ArtificialIntelligencePod.com/gift, or at the link right below this episode.

Make sure to grab your copy before it goes back up to full price.

Are you tired of dealing with your boss? Do you feel underpaid and underappreciated? If you wanna make it online, fire your boss, and start living your retirement dreams now, then you've come to the right place. Welcome to the Artificial Intelligence Podcast. You will learn how to use artificial intelligence to open new revenue streams and make money while you sleep.

Presented live from a tropical island in the South Pacific by bestselling author Jonathan Green. Now here's your host.

and I'm so excited to have you here today, because we've had a lot of CTOs on the show recently, and we have a lot of people that are fractional C-suites, whether it's CTO or CIO or these different fractional positions.

But I'm really interested in the transition: what's it like going from inside to outside? What's that big transition like?

[00:01:20] Klaas Ardinois: Yeah, sure. And first of all, thanks for having me. I've enjoyed some of the previous episodes, listening to other CTOs and hearing their sensitivities.

I think for me the biggest difference is going from being a CTO in one company, responsible for the entire technology function, to being now usually an advisor. My kind of peer, my buyer if you want, tends to be CEOs now. And so the transition has really been:

Rather than having a sort of split-brain inward perspective, outward perspective, my perspective now is nearly exclusively business first, and then it goes into: okay, you've got a CTO, where can I help steer that person? What are the gaps between corporate strategy versus technology strategy?

That's the kind of bit where I come in. To give you an idea, in a recent project I engaged in, a company had lost their CTO. They went, look, we lost them for a whole bunch of good reasons; it was time for that person to move on. So the CEO called me and went, look, I need someone to come in and tell me: where are the gaps?

What should I be doing in the next 12 months? And help me find the next CTO. That's perfect; that's a sweet-spot gig for me. It's: where do you want to go with your company, how can I help you put that into a structure, and who's the right person to take that forward?

[00:02:37] Jonathan Green: Yeah, that's like the dream client.

The dream client is someone who knows what they want. It's not, I need you to do something, but I'm not sure what. One of the things that I often run into, and this has been really common over the last year and a half or so, is a CEO will say, hey, the board says we need AI. And I say to them, what do you mean?

And they say, we don't know, but we want it. I've brought this question up a few times, 'cause this has really happened; it comes up all the time. This happened 20 years ago, when every company knew they needed a website, but they weren't sure why. And then they needed a mobile website; they weren't sure why.

And then they needed a social media presence. And not every company does; not every company benefits from a social media presence. It just depends on who your customer is, right? If you're business to business, very few corporations make a decision about which photocopier they're gonna buy based on an Instagram follow.

So it's not always right for your business, but it feels like it is. Twenty years ago, everyone had a MySpace page. The thought of not having a MySpace page was like, how can you exist? And now no one uses MySpace. It exists, but that's it. It's like a memory.

And I think this pressure is just building for everyone. Every day you wait, it feels like, I've waited longer; now it's a harder decision. So when you're approaching someone who says, we know we need this, but I'm not sure why, how do you help them figure out if they even need it or not, and start to develop a plan when they don't know what their goal is?

[00:04:00] Klaas Ardinois: And so I think that's the exact point. The first question inevitably is: what problem are you solving? And the problem might be: I need a tick box somewhere that says I use AI for my next round of investment. I know we talked about it in the pre-call. That is sometimes a legitimate reason: I need to do something to convince my investors to pony up more money.

Quite often the question then becomes: putting on my corporate hat, how do I translate that, typically, into some form of operational efficiency, right? I'm gonna solve a problem of: you are spending too much time there, and I think some form of AI could solve that. So the first part of the discussion is just:

What's your actual problem? Which thing are you trying to optimize for? Is it efficiency? Is it a tick box? Is it a vanity feature on your marketing website? What's the key driver here? And then the second part is: when you say AI, do you mean ChatGPT, or do you mean the broad spectrum of AI?

Because when I think about AI, I'm old enough to remember expert systems as a thing. And then neural networks came up for a while, and then it was machine learning and deep learning, with AlphaGo in the early 2010s. And then these days it's more around transformer networks and the GPTs. Natural language processing is an entire branch of AI in its own right. You could go to robotics or evolutionary algorithms. AI is such a broad spectrum that on the one hand, I could argue almost any business could use some AI; on the other hand, it's probably not what you think when you say AI.

[00:05:39] Jonathan Green: I think you've nailed one of the biggest problems. One of the problems I have with the AI industry is that we don't have a language for what things are. AI used to mean sentient robot, right? It used to mean sentient. Now it doesn't; now they go, oh no, that's AGI. And it's, wait a minute. Five years ago it was strong AI, weak AI.

That's what sentient meant. And so we keep changing the terms. And one of the things that I deal with, when I'm learning new systems and programming things, is just the definition of an AI agent: if I talk to 10 AI experts, they're gonna give me 10 answers. It's like we can't even pick what that means.

And even if I say, what's a bot? You're gonna get 10 answers, right? For some people, bot means a customer service chatbot; for another person, bot means a fully autonomous AI that's gonna do research, give you an answer, talk to multiple AIs, and use a dozen APIs. So there's such a spectrum. And this is why artists hate AI: we have a name for sculpting or painting, but there's no name for creating something with an AI image generator. We haven't created a verb yet.

And if we just did that, I think a lot of artists wouldn't be mad anymore, 'cause we could say, you're not creating art, you're doing something else. Just make up a name for it. So I think you've hit the nail on the head: it's very easy for people to be confused, because we don't have consistency of naming.

Right out of the gate, a lot of tools will say they have an AI, and sometimes they just have a random number generator. Sometimes they just have a slot machine element or a basic heuristic. And these are words that not everyone knows the meaning of. That's why you start to get overwhelmed. And I really love that you dial into: what exactly do you mean?

Because that's such a good question. Most people I talk to think AI means: it writes blog posts for me.

[00:07:22] Klaas Ardinois: We could spend the entire episode on why that's a bad idea, but I get what you mean. That's at the heart of some of this.

[00:07:30] Jonathan Green: What you're dialing into is my favorite topic, which is when we start to do solution-then-problem. Which is: I bought a hammer,

so I'd better justify that purchase; something's getting hammered. When I go to the hardware store, I always buy something. If I buy a hammer, something's getting hammered. If I buy a measuring tape, everything in the house is getting measured, because I have to show my wife: no, I bought it for a reason.

I'm going to use this. And I use it for two or three days. I can see that happening for a lot of companies that grab AI, and then the tech department has to maintain the AI, but no one's actually using it, because we've gone solution-then-problem instead of problem-then-solution. One of the other areas that I worry a lot about, and I believe this is my industry's fault, is people thinking AI's gonna replace them.

So if I come into a company and say, hey, we're gonna start using this AI tool, a lot of people think they're training their replacement. That means you're gonna crash morale, and possibly they're gonna sabotage the project, because they go: if we make the AI look like it's not working... It reminds me of that episode of The Office where they say, oh, the website's gonna replace you.

So they made sure the website didn't work, because it's your competition. Nobody wants to train their replacement. And this is because of a lot of language: we constantly say, oh, you can fire your entire staff and replace them with this AI. We're not even close to that. We're so far away from that being a possibility, because I've never met anyone at any company who goes, yeah, I would give an AI access to my credit card.

[00:08:55] Klaas Ardinois: But you're hitting on a really key part of my work there. I have a lot of discussions with boards and CEOs around ROI: what is your level of investment, and what's your expected return? I specifically talk about expected return rather than return,

'cause by the nature of forecasting the future, you're gonna be wrong to some extent. But you could probably put a range on it, saying: I expect to make between zero and a hundred thousand dollars; therefore, investing ten, maybe that's good. If all outcomes were equally likely, that gives you an expected value of 50,000.

You should spend no more than 50,000 on this. And how do you spend 50,000 on AI? Frankly, if you run it in AWS, you probably ask it about five questions a month and you're there. I'm exaggerating, but it gives people a framework to start thinking: hey, it's not just about using AI, it's actually about justifying the investment case.

And then you get into: are you buying some off-the-shelf thing, as in, are you just gonna leverage ChatGPT? Are you gonna run your own on Azure or AWS? Do you need developers to integrate this in some way? How are you gonna maintain this? How long is this gonna run? What training do you need? And suddenly that "we need AI" becomes a high upfront cost for potentially quite limited ROI.

And so that's how you get through all this discussion of: what are you solving, what do you hope to get out of it, and then how much money do you wanna throw at that?
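The back-of-envelope math Klaas walks through above can be sketched in a few lines. This is purely illustrative: the zero-to-$100,000 range is the hypothetical from the conversation, and treating every outcome as equally likely is the simplifying assumption he names himself.

```python
# A rough sketch of the expected-return framing described above.
# Assumption: all outcomes in the forecast range are equally likely,
# so the expected value is just the midpoint of the range.

def expected_value(low: float, high: float) -> float:
    """Midpoint of a forecast range under a uniform assumption."""
    return (low + high) / 2

def max_justifiable_spend(low: float, high: float) -> float:
    """Under this naive framing, spend no more than the expected return."""
    return expected_value(low, high)

# Forecast: the AI project returns somewhere between $0 and $100,000.
print(max_justifiable_spend(0, 100_000))  # 50000.0
```

The point of the exercise isn't precision; as Klaas notes, the forecast will be wrong to some extent. It's that putting any range on the return caps what the project is worth spending on.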

[00:10:18] Jonathan Green: I think that's really good, because we start to think, oh, ChatGPT is only $20 a month per user. And that's true.

But you also have to factor in the time it takes to teach your team new things. And one of the big issues I like to talk about a lot is data security. Anytime you send something to an AI and back, it's going through the internet. Even if your connection to the AI is secure, there's still the transmission back and forth, right?

Someone can listen to your phone calls; like that idea, you always increase the level of risk. And once you send something to an AI, it's gone forever, right? It's out there. If you accidentally send a social security number, a credit card number, or a person's home address, which everyone seems to do by accident all the time... And AIs can be socially engineered, which is this new area of hacking.

We're used to either hacking a computer or socially engineering an employee, but now you can socially engineer a computer. We've added this whole new vector. The most common request I get is: we wanna replace our customer service team with a chatbot. And I'm always like, this is a terrible idea.

You wanna take an angry customer and make them madder? If that's the goal, then perfect. Nobody talks to a phone tree, hitting those numbers, and gets happier each time; the more numbers they hit, the madder they get. Talking to a chatbot that can't actually solve your problem,

that's not allowed to do a refund? That's not gonna make people happy. Most companies, when they set up a chatbot, find there are only three questions that keep coming in, which could have been solved with an FAQ for 50,000 fewer dollars. So you have to be very careful about which data you train it on, because you've created this vector.

As we've seen, there are some companies where people have hacked their chatbot because they've socially engineered it. We don't think of it this way, but every time you add a new tool that has your data and speaks to the world, it adds a vulnerability. There's always a possibility, even if it's slim; you have to be aware of this.

I think about this a lot, because people are so excited by what AI can do that they forget it can also be used for nefarious purposes. So you have to weigh that in too. It takes time to train people; you may have to deal with people thinking they're being replaced, which pushes down company morale.

There's the cost, the training cost, the time to transition over, and then you have to have new security policies, 'cause now you have to understand how this new technology works and what you can and can't share with the AI.

[00:12:39] Klaas Ardinois: And I think you're hitting on a key piece there about where the state of the industry is right now.

I always draw the comparison with automobiles way back in the late 1800s, early 1900s, when the combustion engine was just about coming to fruition, and people went: okay, I have an engine; anything with a couple of wheels and a baseboard will do, and it functions as a car of some form.

And we're at that stage where everyone will mount the AI engine to any possible thing and pretend it's a car, a platform that moves you forward. And if you look at history, 95% of those solutions never made it; they were thrown away. The earliest car races weren't about finishing first.

They were about finishing; that was the goal. Did you reach the end of the road, rather than did you come in first? I feel like a lot of the AI conversation today is at that stage, where you're really just worried about: does this work at all, rather than is there a real solution?

Is there a real problem? Is there anything tangible? And I think the other part is around security. I'm trying to remember who it was; I think it was a podcast with the Goldman Sachs CIO, talking about their internal usage of AI, what they were doing, and how they were evaluating all these language models.

And his key point was essentially: we're Goldman Sachs; we deal with both highly proprietary data and highly sensitive data. We can't just chuck it over the wall to, and you can pick your AI provider, ChatGPT, Claude, Gemini; we can't just chuck it over the wall and hope for the best. They had to bring this all in house and put sanitizing layers over it, both on the way out and on the way back in.

And he said at some point that becomes such a costly affair to run that the net benefit you get from these tools is not nothing, but it's definitely a lot less than what you'd get from a $20 ChatGPT subscription.

[00:14:37] Jonathan Green: I think that's a really important lesson: if you have to add a censoring element, it's like writing letters in the military; someone has to read every letter, and you gotta pay that person's salary.

Often, when people ask, is it possible to have a completely secure AI? It is. You can have it on an air-gapped laptop, use it for the day, and at the end of the day you destroy the laptop; the next day you start with a new laptop. Expensive, but that's total data security, right?

'Cause it's destroyed at the end of the day; there's no record. Mission: Impossible, right? It gives them the mission and then it burns. Nobody wants to do that, so you have to find that balance between security and convenience. When I started off, my first job, actually in 1999, was in IT.

And at that time, you had an intranet. Nobody in the building had access to the internet; the only port that was open was email. You could receive and send email. And for companies that needed internet access, for example if you needed your own company's website to see what your current prices were, it was a copy of the website.

It wasn't the real website, it wasn't the public-facing one. They'd copy all the files to the local server, and that's what you could access; you couldn't actually go outside the building. You could chat to other employees, but you could never chat to a non-employee. But we seem to have shifted away from that in the past 25 to 30 years.

I'm not sure why, 'cause I wasn't working in IT when that decision was made. But I wonder if we will shift back towards more of this intranet level, where you have access to an AI but it's on a server inside the building. I can set up a local AI on a computer that costs a couple thousand dollars that's 90 to 95% as effective as what I can do with an AI on someone else's server, going through ChatGPT or something else.
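As one hedged sketch of what that "AI on a server inside the building" might look like: many local model servers (Ollama is one example) expose an OpenAI-compatible HTTP API, so a client can talk to a model on the LAN without prompts ever leaving the building. The endpoint URL and model name below are assumptions about such a setup, not details from the episode.

```python
# Sketch: talking to a model hosted inside the building over an
# OpenAI-compatible API, so prompts never leave the local network.
# The endpoint and model name are assumptions (Ollama-style defaults);
# substitute whatever your in-house server actually runs.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed in-house server

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build an OpenAI-compatible chat payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local(prompt: str) -> str:
    """POST the prompt to the in-house endpoint (requires a running server)."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Building the payload works without any server running:
payload = build_request("Summarize our internal pricing sheet.")
print(payload["model"])  # llama3
```

Because the request shape matches the hosted APIs, moving a workload in-house can be as small a change as swapping the base URL, which is part of why the intranet-style setup is plausible again.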

And most people don't realize that, because as soon as you say GitHub, everyone freaks out: no, that's my nightmare. When you're thinking about this spectrum of security versus efficiency, and I think the Goldman Sachs point is really good, there's a cost to sanitize your data.

Do you think it's a better decision to go back to the intranet world, where everything's internal, so you have that one wall instead of trying to put a wall around each person?

[00:16:44] Klaas Ardinois: I wanna hook into a point you mentioned quite early on there, around the evolution towards sharing more and more data.

'Cause I think AI is just the latest thing we share data with. But think about before: we all happily run our emails on Gmail these days, so Google has default access to all our emails. They pretend they don't, but we all know they do. We happily gave that away.

If I think about any number of product analytics tools, whether it's stuff like Mixpanel or GA, there's a number of these tools that you will use. You've potentially given over quite sensitive information, because depending on what your team decided to track on your website, you are handing over customer records, customer details.

We all use some form of online CRM, whether that's HubSpot or Salesforce; none of these are hosted in house anymore. So we've gradually offloaded more and more of that security risk onto other parties. And therefore, sometimes explicitly, sometimes implicitly, we trust these parties, right?

We go: if I take my data and stick it in HubSpot, I trust HubSpot to be a good custodian of that data. What I haven't seen in the AI landscape is proof that they are good custodians. So to me it's less about should I bring it all back or not; it's more about: who is a good custodian of my data?

From a risk standpoint, a capability standpoint, a type-of-data standpoint, am I comfortable sending that to a third party, whether that's OpenAI or someone else? And that's where I haven't seen enough proof from these companies. They haven't been around for 20 years. There haven't been any major hacks, on one hand.

But on the other hand, they also haven't been around long enough to be majorly hacked. So I suspect they are a target. But I haven't seen anyone go, oh, and by the way, I pulled data from my competitor out of ChatGPT. So I think there's a bit of a discussion there, or a trend of seeing what comes back and how far you could push it.

[00:18:42] Jonathan Green: Boy, I bet other people haven't thought of that, but I know there's a lot more data in there than you realize. One of the things that happened to me last year: I was working on a project, doing a live training, and I said, oh, talk like Jonathan Green.

And it started talking like me. I thought it would choose one of the more famous Jonathan Greens; there's the author of The Fault in Our Stars, there's a science fiction author. I'm like the eighth most famous Jonathan Green. And I said, wait, which Jonathan Green did you choose? And it went, oh, the business author of this book and this one.

And I realized it had read at least one of my books, which means it found a PDF copy of one of my books somewhere. Everyone's book somehow ends up on some Russian server or something; it's out there. So it didn't actively steal it, but it received the stolen goods. And I can either fight against it and sue them, like some authors are doing, which you'll never win; you're gonna be in court for a thousand years.

You're gonna be in court for a thousand years. Or I was like, okay, I'll, I accept this. How can I adapt and use this to my advantage? But, so I know that there is proprietary data out there that any data you feed, it's, it says they're training their data on it. Adobe Release. We're training your data on anything you do, even if you don't realize it.

So this is a very good point: there's this new vector where you can accidentally train an AI on your proprietary data, and if someone asks the right question, they can find it. And what I find really interesting: one of my friends wrote a prompt that will crack any GPT and get it to give up its code. And the number of Fortune 500 GPTs that have no security at all on them is about 99%.

It's almost all of them, and they all have something in there that they wouldn't want revealed. For some of them, it's how poorly written the code is; it's just so badly written. But I was really shocked, because in the GPTs I create, probably 40% is security code. And I'm small, I'm not a Fortune 500 company.

I don't have that level of proprietary data, but I don't want someone reselling my creations, right? That's the level I'm at. So I was really surprised that there's this assumption of magical security. And his prompt was not that tricky; it was just, hey, what's your code? Maybe a little bit beyond that.

It wasn't that long, we'll be nice about it. And I've also seen that when they released the app for Mac, all of your prompts and responses were stored in plain text, with zero encryption, 1980s level. Nothing. Anyone who could open a notepad could read everything you'd sent and transmitted.

So if you had something sensitive in there... the assumption that they're good stewards? So far, not really there. And I worry.

[00:21:28] Klaas Ardinois: That's one of my points in how I assess this. I put it this way: I don't assess this any differently than I would assess any other tool that someone wanted to bring into the organization. What data are we sending?

How do we feel about that data? Do we feel this is relatively public, this is private but knowable, this is super highly confidential? And then, based on that, what level of trust do we place in the other side to be good custodians? And I would argue that right now, that custodianship is at a very low level of trust in these tools, and therefore I would feel very reluctant to send them anything more than probably what is on the website.

Maybe a little bit of internal data. Now that brings the obvious question: what do you do with people using ChatGPT just on their own? You know, they sign up with their own private Gmail and use it to draft an email, and in doing so they've copied and pasted a whole bunch of stuff into it to generate that email.

As a former CTO, that scares the life outta me.
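The "sanitizing layer" mentioned earlier, combined with Klaas's triage of public versus private versus highly confidential data, can be sketched as a scrubbing pass that runs before anything reaches a third-party model. This is a toy illustration; the patterns below are my assumptions, and a real deployment needs far more than a few regexes.

```python
# Toy sanitizing layer: scrub obviously sensitive tokens from outbound
# text before it reaches a third-party model. Illustrative only; real
# data-loss prevention needs much more than regex patterns.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

msg = "Refund Jane (jane@example.com), SSN 123-45-6789."
print(sanitize(msg))  # Refund Jane ([EMAIL REDACTED]), SSN [SSN REDACTED].
```

A pass like this sits on the way out; the Goldman Sachs example in the conversation also sanitized responses on the way back in, which is part of why running it becomes such a costly affair.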

[00:22:26] Jonathan Green: So is the correct answer that you start blocking websites, like you block ChatGPT? Let's say you sign a contract for Microsoft 365 that says: we won't use your data, we won't train on your data.

Then with any other AI, you don't have that contract, you don't have that protection, so you have to start blocking. Again, we're moving back towards my idea of the intranet, going backwards to where you have no internet access at work, 'cause you're at work. And this is probably why everyone wants to work remotely, right?

So that they can get away with more shenanigans.

[00:23:01] Klaas Ardinois: I think the clever person gets around it no matter what. Falling back on my former CTO life, it really becomes a question of the whole trinity, right? First, user education and awareness, as in: hey, I don't want to stop you using these tools.

I see value; if you think they help you, feel free to explore. But be aware, right? This sort of stuff: not a good idea. This kind of stuff: yeah, that works. The difference between your private life and your corporate life actually matters here. Second: by the way, we have policies around data privacy and the stuff you can share, and they apply all the same.

And then lastly, the practical side: okay, look, this isn't about blocking you, but if you're gonna do it, use this one. If you are gonna do it, at least use the Azure one that we trust, or the GPT one that we as a company pay for. And depending on the size and scope of your organization, you might wrap it a little bit in some fancy UI, or you might just tell them: it's on the corporate credit card, we've signed up on a corporate plan.

But as a CTO you should be aware: look, you can't block it; fundamentally, that ain't gonna happen. You're right, whether it's remote work or someone just tethering to their phone over a 5G connection and going around your network. So accept that they use it, don't fight it, and then go with training and provide them a workable answer, to avoid them running down the shadow IT path.

[00:24:29] Jonathan Green: Yeah, I think that's a really good answer, because the less benefit they get from going around the firewall, the less likely it's worth their effort to figure out how to do it. When I worked in IT, one of the first things we did was audit every computer to see what people had on there that they weren't supposed to have.

And this was back in the days when, at the end of every day, they would back up the entire company's hard drives to tape. And they were like, we're going over the limit, what's going on? It was a bunch of people who had installed video games, and I had to go find which computers had those.

We also had someone who opened the ILOVEYOU virus email twice, which shouldn't happen; it shouldn't catch you twice, you should know after the first one. But it happens, right? You just really want someone to love you. The next thing I kinda wanna talk about is this: I've noticed there's almost an inverse correlation between the amount of hype an AI application gets and its usefulness.

What people really love to talk about is AI video, and the only thing less useful than writing blog posts with AI is AI video. I see constant posts about how you can replace your employees, every AI video editor's gonna be gone, Pixar is going out of business, and this is the Sora killer. In every single one of those videos,

if you didn't tell me it was AI video, I would never watch it. I would just think it's really bad CGI. It's not useful yet.

[00:25:54] Klaas Ardinois: I sketched this out for a client at one point: the history of AI and the path I suspect it will follow, particularly on the gen AI side of things.

I'd have to update it a little bit, 'cause it was about a year ago. But right now we're in a space where what I'm gonna call text-based, so anything around audio-to-text, is more or less a solved problem by now. I'm talking English language;

I'm not talking about Chinese and other slightly more difficult languages, but spoken English to written text is more or less a solved problem. And then from there, the current generation of models, I'm talking the latest GPT, the latest Llama, are more or less usable.

Broadly speaking, the improvements we're seeing, even from OpenAI's latest release, are no longer about the number of parameters or how the training is done. The improvement is more in the usage, through chains of reasoning, and moving compute to inference time rather than training time.

So let's call the models not solved, but the big improvements are probably there for the time being; now it becomes about how we use this efficiently. And the next layer down from there is image generation. But if you think about image generation as an artist, rather than just "I see an image," half of image generation is composition, right?

Are these things in the right place? Does that make sense? Does that tell my story? None of that happens, right? You can get a DALL-E image or a Midjourney image, and you get 20 shots at the same description, but it doesn't really do much yet around composition, or give you layers you could go and edit and drag around and things like that.

So I think the evolution there is moving from a kind of flat generation to something that's a lot more driven by your ability to compose and alter these elements. An example would be, let's say you give Midjourney a good prompt: I want a picture with two children playing soccer,

one on the left, one on the right, two goals. Whatever four versions come back, it would be great if you could then take those images and actually move the guy on the left a little bit forward and the person on the right a little bit up, and do the tweaks you would normally do to manipulate an image.

That's not really there. So I suspect that's the wave that's coming: moving from generation to curation and editing. And I think only after that, once that's solved, will you get video in a usable state. So I'm with you. Anyone who's really hyped up about video right now, understand that you're at the very frontier of this technology, and on the one hand it's cool, it's amazing what it can do.

But equally, it's super frontier technology. It's nowhere near mature, stable, or useful. So if I were talking to a CEO as an advisor and they were talking about video and AI, I'd say come back in two years and see where it's at.

[00:28:49] Jonathan Green: I saw something really interesting, which is that everyone who talks about AI writing code for you is never actually a coder.

It's never a full-stack developer saying that, and it's never an artist saying Midjourney has replaced what I do. And I think that's an important lesson: these tools are useful. I think of Midjourney as a replacement for a stock photo site. That's really the level it's at for me.

In the past I would sometimes spend hours looking for the exact photo I had in my head, and now it doesn't take as long. That's really what we've replaced. I don't think of it as replacing a graphic designer. I still hire graphic designers all the time. I'll send them the AI-generated image and say, here's the image I made.

Now make it into an amazing YouTube thumbnail, for example. So the high-skill work still isn't being replaced. But I think you've really dialed into it: we're at the beginning, and the excitement is always highest when the usefulness is lowest. The most useful things AI can do are things like data analysis and spreadsheet analysis.

Organizing data. None of that's interesting. You can't even visualize it. Just like in movies, they can't make hacking look like real hacking and still be interesting. Real hacking is someone eating Cheetos at a computer for eight hours, trying the same code and changing one word.

It's really boring to look at. So we have to create this illusion of interestingness, and image and video stuff is fun to look at. But a lot of people think, oh, I want to use this in my business, and I say, why are you making videos? How many videos did you do last year? Zero. It doesn't solve a problem. The biggest danger I see with AI is people adding to their workload rather than using it to decrease their workload first.

I think that's the biggest danger area.

[00:30:30] Klaas Ardinois: Yeah, it's the old programmer joke: I can automate anything. And there's a graph with the time it takes me to do it manually and the time it takes me to automate it, and at some point they should cross over.

But the reality is the time it takes me to automate just keeps going up, with refining and tweaking and debugging, and you never get the benefit. And I think quite a lot of AI applications fall into that same trap.
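The crossover Klaas describes is simple arithmetic, and it can be sketched in a few lines. This is an illustrative calculation, not anything from the episode; the function name and the numbers are made up for the example:

```python
import math

def breakeven_runs(manual_minutes, automated_minutes, build_minutes):
    """Number of runs before automating a task pays for itself.

    Returns None if the automated version is not actually faster per run,
    in which case automation never breaks even.
    """
    saved_per_run = manual_minutes - automated_minutes
    if saved_per_run <= 0:
        return None  # automation never saves time on any run
    return math.ceil(build_minutes / saved_per_run)

# Illustrative numbers: a 10-minute manual task, 1 minute once automated,
# but 300 minutes of building, tweaking, and debugging up front.
print(breakeven_runs(10, 1, 300))  # -> 34 runs to break even
```

For a one-time task, as Jonathan notes below, the answer is that you never break even; the trap is that the build time in the numerator keeps growing while the number of runs stays at one.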

[00:30:52] Jonathan Green: Yeah, that's something I think a lot about with efficiency experts. You can get more and more efficient with your time until you're so efficient that you don't get anything done. You just mastered efficiency.

And it's exactly that. Sometimes I look at a task and go, how long will it take me to create an automation that does this, which I'll then use one time, because it's a one-time task? You get caught up in that loop where you're so excited to develop a new thing.

So I completely understand where you're coming from. This has been really useful and interesting, and it's been a great discussion. I love your perspective, and it's great to have a European perspective, because we tend to think all AI is American, which it's not. It's good to get an international perspective.

So I appreciate you coming on today. Where can people connect with you, especially people in the United Kingdom and Europe who want to see more of what you're doing, realize they need a fractional CTO right now, and want your help navigating those waters?

[00:31:43] Klaas Ardinois: Easiest is my LinkedIn. I'm fairly active on there, so just hit me up, Klaas, on LinkedIn, or a three h consulting.com, which is the consulting business I run with my wife.

And as far as services go, if you're a CEO or a board and

[00:31:58] Jonathan Green: Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and tactics on how to leverage AI to escape that rat race. Head over to artificialintelligencepod.com now to see past episodes.

Leave a review and check out all of our socials.

 

[00:32:23] Jonathan Green: Thanks for listening to today's episode. Starting with AI can be scary. ChatGPT Profits is not only a bestseller, but also the missing instruction manual to make mastering ChatGPT a breeze. Bypass the hard stuff and get straight to success with ChatGPT Profits. As always, I would love for you to support the show by paying full price on Amazon, but you can get it absolutely free for a limited time at artificialintelligencepod.com/gift.

[00:32:47] Klaas Ardinois: you've got questions about how to actually use technology to drive your business goals forward, that's my sweet spot.

[00:32:54] Jonathan Green: Amazing. I'll put all the links in the show notes and below the YouTube video. Thank you so much for being here.

Again, guys, thank you for listening all the way to the end. Another amazing episode of the Artificial Intelligence podcast.