Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

Managing Clients In An AI World with Joeri Billast

April 08, 2024 | Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits | Episode 303

Welcome to the Artificial Intelligence Podcast! This podcast is aimed at helping you find ways to create new revenue streams or make money online without dealing with an underpaid or underappreciated job. Our host is best-selling author, Jonathan Green.

Today's guest is Joeri Billast, a former business intelligence CEO who transitioned into the world of fractional CMOs. Initially running his own company, he realized his true passion lay in strategizing, starting projects, and coaching clients. This led him to discover the role of a fractional CMO, which perfectly aligned with his interests.

Instead of hiring a full-time CMO, Joeri offers his expertise temporarily, setting up everything and building a team for his clients. His unique background allows him to bridge the gap between understanding a company's requirements and assessing the capabilities of various agencies, ensuring the best partnership for the job.

In this episode, Joeri Billast shares valuable insights on navigating the world of AI as a fractional CMO. He discusses the importance of having a trusted advisor who can separate fact from fiction in an industry rife with claims and overpromises. Joeri also tackles the critical issue of AI security, highlighting the need for companies to develop robust policies and understand the risks associated with different AI solutions.

From exploring the potential of open-source AI to striking the right balance between convenience and security, this episode offers a comprehensive look at the challenges and opportunities that come with embracing artificial intelligence.

Notable Quotes:

  • "If you have some person in the middle that understands the requirements of the company, but also can assess the services or the skills of an agency, then you know, we have the best of both worlds." - [Joeri Billast]
  • "People need to be aware that wherever they put data, if it's on social media, if it's on a translation engine, like Google Translate, that they should be careful." - [Joeri Billast]
  • "With AI, you never know what is going... It's not mathematical. You don't know if you give A, you get B back, you give A, and you get C back or D back, and the next time you get something else back. So you're never really sure." - [Joeri Billast]
  • "If it's insecurity, testing is important, but you can never be guaranteed, of course." - [Joeri Billast]

Connect with Joeri Billast:
Website: https://webdrie.net
The Web3 CMO Stories Podcast: https://www.cmo-stories.com/
LinkedIn: https://www.linkedin.com/in/joeribillast/

Connect with Jonathan Green

Jonathan Green (00:02.75)
 Managing clients and operating as a fractional CMO in the world of artificial intelligence with today's very special guest, Joeri Billast. So I'm really interested to start with how you kind of got into the fractional space, because that really appeals to a lot of people. At one point, I was kind of in negotiations with a few companies to be a fractional CEO for some smaller companies, and I was like, but I never wanna come into the office.

I want my fraction to be zero. So I'm interested in kind of how you got into that world and maybe just how that began for you.

Joeri Billast (00:40.158)
Yeah. Hi Jonathan. And thanks for having me on the podcast, after you have been a guest on mine, actually. So really good to see you again. Yeah. So the fractional CMO part, actually, if you know my background, I had a business intelligence company already. I had that for many years before I sold it. And, you know, with that company I was already helping out clients, and I was already, as the CEO of a small company,

taking care of managing and starting up different clients. So it was already a bit of fractional work. But of course, when the company grew bigger, I was more busy with management than with the clients themselves. At a certain point I sold that company and started to work again as a consultant. And then I realized I really wanted to help more clients, but not, you know, always be the one doing the work.

But really the things that I like, which is strategizing, starting a project, coaching, all of that. That was something I liked to do, and I said, how can I do this? And then I was in the DigitalMarketer community, I don't know if you know them, and I learned about what the work of a fractional CMO is. And it really resonated with me. And so I was a certified partner at the time, and so I started.

Just, you know, when a client needs someone, instead of saying, yeah, you can hire me as the CMO, or whatever, or if they just want to hire a junior one, I said to them, you know what? Hire me as a fractional CMO. I come in with you, I set everything up, and I build a team for you. So you have all the experience and, you know, it's just a temporary mission. And so that's how it started.

Jonathan Green (02:30.974)
So one of the things a lot of people are trying to figure out right now is the idea of the C-level suite AI executive, right? There's talk about CAIO, AIO, whatever it's going to be called, right? We haven't picked the name yet. But a lot of people are trying to figure out what that role would be and what is really the line between a fractional AIO, a consultant, and a kind of outside agency, like a marketing agency or AI agency

that can help people. So a lot of our listeners are really trying to figure out their path because, like you mentioned, right, the digital marketer world, it started off as, you used to be an SEO agency, then it became, oh, we also do Google Places. Then we also do Facebook, and it kind of expanded. And now the kind of modern version is that every company knows they need AI, but there aren't enough people who know how to do it to match the need. So there's a huge opportunity for fractional. So where do you kind of see the lines between those three kinds of roles?

Joeri Billast (03:13.725)
Yeah.

Joeri Billast (03:28.979)
Yeah, for me it's actually being in the role of a fractional CAIO or fractional AI officer, or whatever you want to call it. You don't do the work yourself. But if you are not there, the problem is that, you know, the CEO of a bigger company, they know they want to be more efficient, but they don't know who to trust, which agency will be the right agency for them, because they will all come and say, we are the best and we know this and this.

But they don't really know how to communicate with them. And if you have some person in the middle that understands the requirements of the company, but also can assess the services or the skills of an agency, then, you know, we have the best of both worlds, I would say. And this is how I like to work: understand what is needed, define what is needed, and see who is the best partner that can help, and be that bridge.

They cannot invent stories to me, you know, so I really will see whether they are an added value. And it's really in that bridging role, if you want, that I see most of the value.

Jonathan Green (04:34.618)
Okay, that's a really good explanation. I think that's the best explanation I've ever heard of it because people often say to me, because I make a lot of revenue as an affiliate and they say, oh, your job is to recommend products. I say, no, my job is to not recommend the bad products. I screen out at least 90% of the things that come in front of me. I go, no, this isn't the right fit for my audience or I don't trust this or I can tell there's something wrong. Like I can look at a sales page for an online business model and go, there's a missing piece.

Whereas a lot of people read it and it's so convincing because the copywriting is so good, but I have that special ability. So for a fractional AIO then, the real power is that you can talk to different AI agencies and services and go, wait a minute. Because I do that a lot. I review a lot of AI products and I notice, oh, this company doesn't have any AI features. They just added AI to the name of the software, which everyone's done this year, right? Just like everyone on LinkedIn added AI consultants or AI something to their profile.

I could see why a CEO or someone running a company, when the board comes to you and says, hey, we have to add AI, and you go, well, I don't know anything about this. This is not my area of expertise. How do I know who's telling the truth and who's not? Because it's such a wild west time, there are a lot of people that are kind of taking advantage, right? And they know a little bit about it, or not as much as they say. So having someone on your side who can detect when someone's telling stories or massaging the truth or doesn't know as much as they say.

That's really the role, right? To kind of be the ally of the CEO.

Joeri Billast (06:03.15)
Yeah. And it has always been like that for me with new technologies. So my background is in business analytics, business intelligence, you had that. Then if you're in digital marketing, social media marketing, it's also the same. When that became a hype, it was, we need to do stuff on social media. Yes, but what? When something becomes a hype, there are suddenly a lot of consultants and people available that claim that they know it. So yes, it's about trust. It's about knowing where to go.

It's more about being in the role of, I would say, a pilot, and then seeing, you know, that you have the right crew to make sure that you arrive at your destination.

Jonathan Green (06:42.758)
Yeah, I think that's really helpful because people are trying to figure out how to leverage their expertise. Like, I had a really great meeting today with a founder, talking about their software. I was like, I don't build AI software. That's not my expertise. My expertise is figuring out how to use it, teaching regular people how to use it, and telling when something's wrong. Being able to tell, oh, this is not a software you want to use. The pricing is wrong, or there's something wrong in the technology, or the AI is giving a bad result. Those are the kinds of things that are very important. So that's who I...

represent, right, with my followers or my customers is that ability to tell them, oh, this is something you want or this is a tool you don't want. Because as you've seen on LinkedIn, right, so many people are posting AI content and they think no one can tell. A lot of us can.

Joeri Billast (07:26.696)
Yeah, I don't know if it's because it's me, because, you know, I directly see this. I imagine for most people, they need to see that, you know, it's the same kind of thing. It's typical ChatGPT language, you know, that you can see. Yeah.

Jonathan Green (07:40.286)
Yeah, there are certain phrases. Like I was working on something today, rewriting an email with ChatGPT, and I said, wait a minute, what's happening here? You've defaulted to your usual patterns, because it does the, it's not this, but it's that. Like ChatGPT loves to start with that. It's not horses, it's zebras, right? It's not this, but that. And then it said, in the coming digital landscape. Every time I see a digital landscape, I know it's ChatGPT, because no person has ever written that in a post ever. That's like a ChatGPT-only phrase.

Joeri Billast (08:07.854)
There are a few words like that that always come back. And also, you know what, Jonathan, it's not because I'm a non-native English speaker, there are words that I just don't use in my everyday language. So then if I ask the help of ChatGPT, I tell it I'm a non-native English speaker, or I ask it to, if you can say that to an AI, humanize it, to make sure that it does not use these words that are overused by ChatGPT.
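
For anyone who wants to try the same trick, here is a minimal sketch of that kind of prompt. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, the draft text, and the list of overused phrases are illustrative placeholders rather than Joeri's actual setup.

```python
# Minimal sketch of the "avoid overused AI phrasing" prompt described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and banned-phrase list are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BANNED_PHRASES = ["delve", "ponder", "digital landscape", "it's not X, it's Y"]

draft = "Our new tool helps marketers write landing pages faster."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are an editor. Rewrite the user's text in plain, natural "
                "English. I am a non-native English speaker, so keep the "
                "vocabulary simple and avoid these overused phrases: "
                + ", ".join(BANNED_PHRASES)
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```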

Jonathan Green (08:35.834)
I wonder if you have an advantage because you'll see a word and go, I've never seen this word before, right? Like a lot of posts have the word ponder. It says, oh, I'm pondering, which is the way we said thinking 500 years ago. It's not a modern word. Nobody actually says it anymore. So I wonder if you have this, why have I never seen that word before? It's because no one uses it. So you almost might have this advantage with AI detection because every time you see a word you've never seen before, it's because people don't say it.

People don't say pondering. People don't talk about landscapes. Those are words nobody uses. And that's kind of something I wanted to get into is like when you have clients who want to go all in on AI, I feel like there's two camps. There's people that want to go all in and go too far. And there's people that are like, let's wait and see if this is, if this AI thing is going to work out. How do you navigate those things, work with your clients, helping them find the right balance of using AI to accelerate their company without using it so far that it damages their reputation?

Joeri Billast (09:33.018)
Yes, it also depends on what the goal is, what you use AI for. So in marketing, of course, there is not so much you can do wrong if you want to be really creative with it. But if you want to put sensitive information into the AI, for instance AI translation, this can be an issue for some companies and some clients that are not sure if they really want that, or that don't have a policy yet. I had these clients

who asked me, can you come and give an AI presentation for management? And I said yes. And then they came back to me and they said, we don't have a policy yet. We first need to get the policy right, and then you can come. So that's one thing, clients that are afraid of, you know, how will it be used. And then you have those other clients that want to be ahead of the wave, as I say. And then what I suggest is, yeah, start with a pilot project, and start in marketing. There are lots of things that you can do

to better understand your customer or, for instance, to have ways to answer certain objections that a customer can have, or to be more creative in a campaign. Something I did, and actually that was for myself, was to just optimize a landing page. Instead of creating just one landing page, I used ChatGPT in different iterations to create the landing page. And yeah, I think marketing is probably a good place

to start.
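
As a rough sketch of the iterative landing-page approach Joeri describes, the loop below asks for several drafts and then refines one of them against a customer objection. It assumes the OpenAI Python SDK; the brief, the prompts, the model name, and the number of variants are all illustrative choices, not his actual process.

```python
# Rough sketch of iterating on landing page copy with ChatGPT: generate several
# variants instead of one page, then refine the chosen draft with feedback.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; everything below is illustrative.
from openai import OpenAI

client = OpenAI()

BRIEF = (
    "Product: fractional CMO services for mid-sized companies. "
    "Audience: CEOs who know they need AI in their marketing but not how. "
    "Goal: book a discovery call."
)

def generate(prompt: str) -> str:
    """One chat completion call; returns the text of the first choice."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Iteration 1: several rough drafts instead of a single page.
variants = [
    generate(f"Write landing page copy (headline, 3 bullets, CTA) for: {BRIEF}")
    for _ in range(3)
]

# Iteration 2: pick one draft (here simply the first) and refine it against a
# common customer objection.
chosen = variants[0]
refined = generate(
    "Rewrite this landing page so it answers the objection 'we already have "
    f"a marketing agency', keeping the same structure:\n\n{chosen}"
)

print(refined)
```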

Jonathan Green (11:02.674)
So you brought up something I think is really important, which is the lines of security. Because here's what I was thinking. Let's imagine that you're a large company and you start doing a lot of work in ChatGPT, I mean your company does, well, all of that data is going to OpenAI, and upstream of them is Microsoft. So if Microsoft is your competitor, you might be just feeding them all of your most important secrets because you think you're talking to a secure AI. Because a lot of people, even though they say they erase everything, do they?

People have been saying that for a long time about cookies, we erased your data, and then you find out, no, we've kept all your data. Or all those security cameras in people's homes, it turns out they've been recording and sending everything to a server the whole time. So what I think about, and I think this is an area where people lack knowledge, is that there are secure AIs. Like you can have an AI that's on your computer that's totally local, and when you finish using it, you can erase it, right? You can delete everything, run a magnet over the hard drive. That's the...

Joeri Billast (11:35.426)
Yeah.

Joeri Billast (11:44.329)
Yeah.

Jonathan Green (11:59.898)
if that's an air-gapped computer, that's the only secure AI, right? One that never touches the internet, and you kill it after you finish using it. And then, especially for medical companies, right, they have specific rules about customer data, and financial institutions, they have different laws in America about these things. So how do you educate people on the spectrum of security? Because it's really kind of a measurement of convenience, right? The most convenient version is the least secure.

Whereas the most inconvenient version, where you go into a room that's all walls and has a white noise machine, right, and a floating floor and no signals going in and out, a Faraday cage, that's the most secure, but no one wants to work in that type of office. So how do you balance that when people think about AI security and what type of policies they should develop?

Joeri Billast (12:26.251)
It is.

Joeri Billast (12:43.258)
Yeah, and it's a really interesting question, Jonathan, because I have also done projects with government. And I saw them, at the Belgian Ministry of Finance, they use Google Translate even for secret documents, if you want, but they use Google Translate. They don't worry about where this data is going. And actually, now with ChatGPT people are asking more questions, but actually it's the same kind of issue.

And then, yes, it's a matter of understanding how to use it, how to... People need to be aware that wherever they put data, if it's on social media, if it's on a translation engine like Google Translate, that they should be careful. It's a good question. I don't have the exact answer to it, because as you say, it's between the comfort and, really, you know... And those...

Medical companies, they need to build their own AI internally, like they are already doing, having their own AI internally, and then not use ChatGPT or other public AI for these kinds of things. I think that is a solution.
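
Joeri's "build it internally" point can be illustrated with a small sketch that sends the prompt to a model running on your own machine instead of a public service. It assumes a local runner such as Ollama is installed, listening on its default port, and already has an open-weight model pulled (the name llama3 is illustrative); this is only a sketch of the idea, not what any ministry or medical company actually runs.

```python
# Minimal sketch of the "keep it internal" idea: send the prompt to a model
# running locally instead of a public service. Assumes Ollama is installed,
# a model (illustratively named "llama3") has been pulled, and the server is
# listening on its default port 11434.
import requests

SENSITIVE_TEXT = "Internal memo: Q3 margins by product line ..."

resp = requests.post(
    "http://localhost:11434/api/generate",  # local endpoint; the request never leaves the machine
    json={
        "model": "llama3",
        "prompt": f"Translate the following into French:\n\n{SENSITIVE_TEXT}",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```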

Jonathan Green (13:59.262)
How do you feel about kind of the world of open-source AIs? Like Facebook puts out the almost open-source Llama model, and then there's a lot of things happening where you have versions of completely uncensored AI models, you have completely local AI models, and then there's this BLOOM model, which is kind of like BitTorrent, as in parts of the AI are stored on like a thousand computers so that you can have a much stronger AI, but...

Joeri Billast (14:22.243)
Mm-hmm.

Jonathan Green (14:25.518)
It's not localized. Like there's pieces of it everywhere. And I don't know if that's more or less secure. I'm not 100% sure because there's pieces of the data everywhere. So what do you think about that and kind of where open source is taking AI and kind of the pressure it's putting on the other companies?

Joeri Billast (14:42.446)
I think it's always good to have open source because it makes the other companies evolve and makes the software evolve. Security has always been a problem, or maybe a question mark, with open-source software, whether it's AI now or something else. So I would always be careful, of course, with those kinds of solutions. I'm not

technical enough to really understand, you know, where the real dangers are. But you mentioned this decentralized part, which is also in Web3, in blockchain. Everything that's decentralized, it's not in one place, so it should normally, by the way it is constructed, be more secure. But yeah, the power is not, or the data is not, in one place. That's my

two cents, but I'm not an expert in the technical part.

Jonathan Green (15:38.93)
Yeah, my thought is that people won't decide how secure to be until something happens. Remember, initially everyone just assumed ChatGPT always told the truth, until a lawyer got in trouble, a lot of trouble, for bringing in fake court cases. And it seems like the only way we learn is by touching the stove, right? Until someone has a bad incident, they go, oh, okay, be more careful. Until someone has a security incident, then we go, be more careful. And it seems like...

We as humans, as a group, still like to learn things the hard way. So I wonder if we'll only develop security policies after some bad things happen to people. We just kind of have to hope we're not the ones who get chosen, because you don't really know, especially people who are using tools on their phone. You don't know which tools secretly keep your microphone on all the time, right? Like they're just listening to everything and pulling in data, and that alone... Like, I don't...

Joeri Billast (16:28.844)
Yeah.

Jonathan Green (16:33.234)
There are certain apps that I don't have on my main phone because I'm suspicious of them. I have to use them for my work, so I use them and then I put that phone in another room. It's never in my pocket, because of those things. But what do you think about all that? Do you think that's how we're gonna have to learn, only the hard way?

Joeri Billast (16:48.558)
Actually, that's of course a good way to learn. If you make mistakes, you learn from them. That's probably the best way to learn. But, you know, if those mistakes are not too costly, then it's okay. If it's really a security problem, then those mistakes can be really costly. And yeah, I see it also with AI. For instance, my girlfriend, she's a French teacher, and she used ChatGPT to create some exercises for her students. But yeah, obviously there was one big mistake in there that ChatGPT made. So you can

never really trust it. And she's a French teacher, so she could have seen it. But if you do something that you don't understand, and you use ChatGPT to do it for you, like create a program, or, you know, if you want to create a translation in a language that you don't know, then there is a risk involved. And with security, yeah, I would be even more careful, because...

Joeri Billast (17:43.162)
With AI, you never know what is going... It's not mathematical. You don't know if you give A, you get B back, you give A, and you get C back or D back, and the next time you get something else back. So you're never really sure. So actually there is a kind of danger there. And yeah, testing, if it's in security, testing is important, but you can never be guaranteed, of course.
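
Joeri's point that the same input does not always give the same output is easy to see in code. The sketch below, assuming the OpenAI Python SDK and an illustrative model name, sends one prompt twice; lowering the temperature makes the answers more repeatable, but even then identical output is not guaranteed, which is why he stresses testing what you ship.

```python
# Quick illustration of the non-determinism described above: the same prompt,
# sent twice, can come back differently. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

prompt = "Give me a one-sentence tagline for a fractional CMO service."

# At a higher temperature the two answers will usually differ.
print(ask(prompt, temperature=1.0))
print(ask(prompt, temperature=1.0))

# A temperature of 0 makes answers more repeatable, but identical output is
# still not guaranteed run to run, so test before relying on it.
print(ask(prompt, temperature=0.0))
```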

Jonathan Green (18:08.306)
Yeah, I think this is really helpful. I know a lot of people are trying to figure out this AI world and getting different perspectives, especially perspectives from other countries, from non-native English speakers, and the idea of being fractional. I think that's where a lot of people are trying to go, because especially when we're trying to figure out, well, I don't want to go back to the office, right? There are still so many people working remotely years later, which I didn't know until I saw it in the news this week. I was like, oh, I thought I was the only one still working remotely. So it's really interesting to see how...

there are these new opportunities rising. I think this has been really cool. You've been a great guest. I really appreciate having you. I'm so excited for this episode to get out there. Where can people connect with you online? Where can they find you, find out more about what you're doing, see how you're operating as a fractional CMO, and maybe even work with you?

Joeri Billast (18:51.562)
Yeah, thank you, Jonathan. So if you just Google me, I have a difficult name, Joeri Billast, you will find a lot of stuff. But I have my blog, my website, which is webdrie.net, W-E-B-D-R-I-E dot net, which is actually Dutch for Web3. I also have my podcast, which is the Web3 CMO Stories podcast. The articles are on my blog, and there is also the podcast episode with you, Jonathan, on there.

And if you want to connect with me, I love to connect on LinkedIn or on X, or on one of the other social media platforms.

Jonathan Green (19:29.15)
Great. Thank you so much for being here. Of course, I'll put all of those in the show notes. And of course, if you're watching on YouTube, it'll be in the description. Everything is there for you guys. Thank you so much for being here, Joeri. It was an amazing episode of the Artificial Intelligence Podcast.