Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

AI Investing with Adrian Mendoza

Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits, Episode 326

Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we explore the world of AI from the perspective of an investor with our special guest, Adrian Mendoza, a venture capitalist and tech expert who has been at the forefront of AI investment since the early days.

Adrian shares insights on how AI has evolved, particularly focusing on the current hype cycle surrounding AI technologies. He discusses the key differences between past AI innovations and the recent explosion of interest sparked by generative AI models like ChatGPT. Adrian emphasizes the importance of identifying companies with long-term potential in AI, warning against investing in companies that rely solely on hype without a strong foundation of proprietary data.


Notable Quotes:

  • “AI is everywhere, but the biggest concern for many businesses is whether they’ll still have a business in five to ten years.” - [Adrian Mendoza]
  • “The companies that will win are the ones that secure proprietary data contracts with major platforms, not just using publicly available data.” - [Adrian Mendoza]
  • “Generative AI changed everything because it put AI right in front of the public where people could see its value.” - [Adrian Mendoza]
  • “You can’t build long-term success by just using someone else’s LLM. To thrive, you need your own data and models.” - [Adrian Mendoza]

Adrian also explains the concept of “hallucination” in AI, where models generate incorrect or fabricated information. He discusses why this is a critical issue for sectors like finance and healthcare, where accuracy is paramount. Investors must carefully evaluate which companies are solving this problem by integrating AI with proprietary data and not just relying on generic models.


Connect with Adrian Mendoza:

  • LinkedIn: https://www.linkedin.com/in/adrianmendozavc/
  • Twitter: https://twitter.com/adrianmendozavc
  • Instagram: https://www.instagram.com/mendozaventures/

Connect with Jonathan Green

[00:00:00] Jonathan Green: AI from the perspective of an investor with today's special guest, Adrian Mendoza. 

Today's episode is brought to you by the bestseller ChatGPT Profits. This book is the missing instruction manual to get you up and running with ChatGPT in a matter of minutes. As a special gift, you can get it absolutely free at ArtificialIntelligencePod.com/gift, or at the link right below this episode.

Make sure to grab your copy before it goes back up to full price.

[00:00:24] Jonathan Green: Are you tired of dealing with your boss? Do you feel underpaid and underappreciated? If you wanna make it online, fire your boss and start living your retirement dreams now, then you've come to the right place. Welcome to the Artificial Intelligence Podcast. You will learn how to use artificial intelligence to open new revenue streams and make money while you sleep.

Presented live from a tropical island in the South Pacific by bestselling author Jonathan Green. Now here's your host.

Now, Adrian, I'm really excited to have you here today because you have the perspective of someone who's coming from the top as opposed to from the bottom, from inside the hype cycle, from screaming, "AI's changing everything," but you're more strategic. So I'd love to know, for people who are listening and they're excited, they love AI and they love building businesses, what's really the very special thing that you're gonna teach them over the next 20 minutes? Why is this the episode that they need to be listening to today?

[00:01:13] Adrian Mendoza: Funny enough, I think part of what we have to understand is that there have been investments in AI as far back as 2015 and 2016, so we're in the second AI hype cycle, the first one being in 2016 when everything was a .ai. But early on, I was a tech founder in 2010, 2012, and we were building AI systems back then; they were called expert systems.

It was this basic, simple pattern matching. And really, back in 2016, when you started getting this open conversation where you're starting to overlay machine learning technologies and sentiment analysis, you're still overlaying on top of expert systems. We're still just doing pattern matching, even if you're using more algorithms.

And as we get closer to today, you're starting to see the advent of more unique models. Nothing really changes until two years ago, when you finally get the advent of LLMs, because there had been models out there, but there was nothing available to the public. I think the key thing that started changing the dynamic of what AI is in the public consciousness was the use of generative AI.

For years, in order to do any AI, you had to have a data scientist. They'd build the model in R, then they'd build charts and graphs and whisker and box plots. The general public and most investors don't care about that. They're not gonna be looking at your box plots and saying, oh my God, this is amazing.

It's how do you engage your customer? How do you engage your investor? How do you engage the acquirer of the company? And so ChatGPT, generative AI, is the thing that changes everything, because it puts the AI right in the forefront where people see it. And that's where we are right now: we're finally getting real use of it by people engaging it on an everyday basis. And we're gonna dig in deeper into what's working right now and what's not working next.

[00:03:23] Jonathan Green: So what really interests me is, from the perspective of the investor, what kind of companies do you look for? Because almost every AI company is pre-revenue, or they're just doing round after round.

Even OpenAI is not profitable yet. So how long is an investor willing to wait?

[00:03:46] Adrian Mendoza: So I have started to reframe how we look at AI companies as more of, we're looking at the tech, because this is gonna be a much longer timeframe. When you look at Sam Altman, all of five years ago when he started OpenAI, he was at a convention and they asked him, hey, when are you gonna get to profitability? And he's like, oh, at a certain point I'm gonna ask my AI how I get to profitability. And I'm like, that's not an answer. If you guys dig in deep, that interview's out there, and I was like, oh, this is painful. But we're right now just at the beginning phases, because right now, with the large language models that are out there, some of them are really good, some of them are garbage.

Right now what we're seeing is, in order for them to be at the level that we want them to be, and this is my view as an investor, there's right now too much hallucination that happens. And hallucination is when you use an LLM, even like OpenAI's. I'll use a good example right now of Google Bard, the one that just got deprecated by Google. 90% of the answer that you get when you use a ChatGPT or any of these, or a Google Bard, is based on whatever data; 10% is stuff that's just made up. That's what's called hallucination. So as the investor, when we look at that, we have to look at who owns the LLM.

Are you just going out and using someone else's LLM? Are you building your own LLM? We were invited into the OpenAI deal about six, nine months ago, and we were like, this valuation is insane. But at the same time, what they're suffering with is the data, because right now you're using publicly available data.

From an investor standpoint, the LLM that's going to win, or any startup, is the one that goes out and negotiates a data contract for proprietary data, with Facebook, with Reddit, with any of these content platforms. I'll use an example from right before Google deprecated Bard. We have a mascot here at the office.

His name is VC Puppy. Every time I find an LLM, I write in the question, who is VC Puppy? ChatGPT says, I don't know what this is. I forgot which other one I put in; I think we used Meta's. Google Bard at that point actually returned an answer. When I asked it, who is VC Puppy, it came back: VC Puppy is a real dog on social media. And I'm like, oh, we're starting to see hallucination. The reason being, it used one data point, which was his tag on Twitter, "real VC puppy." It took that as, oh, because you used the word real and you're on social media, this must be a real dog that's on social media.

And so it's great if you're using a generative AI solution to create a letter, part of a term paper, some basic things based on data that's publicly available on the internet. Now, if you're starting to look at insurance claims, banking information, underwriting, you cannot have hallucination.

And that's what we're starting to look into: how do you make the big leap from the everyday person that's writing and asking questions to the person that's sitting at a bank that has to ask a question about their customers, or even more, a physician that's actually trying to ask a question about who their patients are, or an insurance company trying to figure out if this is a claim that's gonna work or not work.

[00:07:53] Jonathan Green: So from my perspective, when you make an AI product for the general public, you try to make it idiot-proof, right? You try to make something that will give a pretty good response to most people, which is why some of the training inside of ChatGPT is that it will never tell you you've asked a dumb question.

It will never say, that's a bad question, or I need more information, because it doesn't wanna get a negative response. So I feel like for something that's programmed like that, programmed so that everyone has a pretty good experience, it's in the core programming to fill in that gap with something, right?

So it will almost always have an element of hallucination because of the core programming, which is why the lawyer got in trouble last year and other people have asked questions. 'Cause I caught it lying to me a year and a half ago. It told me an entire story about a pretty significant crime.

Then it gave me all the links. I go, wait a minute. I looked for all the links; they were all 404s. My first thought was, this guy has enough money to erase it from the internet. And then I was like, wait a minute, I don't know that that's possible, because it was entirely made up.

This was maybe last February, a year and a half ago, so I got suspicious then. But I think it's in the core programming: if you make something for the general public that is always aiming to please, that desire to please means that it will always shift away from what you're talking about, which is a hundred percent accuracy.

If it doesn't know, it should tell you. That's how you avoid hallucination on the first level.

[00:09:18] Adrian Mendoza: So part of it is, it's from the data side. Having run AI businesses before, 10 years ago when we were building expert systems, in order to have statistical significance, it's 10,000 data points per answer and per question we're generating.

Perfect example: you go back to the "real VC puppy" Twitter tag. It's only one data point, and it's hallucinating based on one data point, and it doesn't want to come back, as you said, with, hey, I don't have any more data, would you please give me more data? It really wants to create more user experience delight.

So that is a good segue to what's coming up next, and what's coming up next, what we're starting to look at as investors, is something called RAG, retrieval-augmented generation. This is enterprise-driven generative AI, where you're not just using one LLM, you're using multiple, but you're also using your own proprietary data set, because you can't just put generative AI inside an insurance company.

Now you're training an LLM with proprietary data. You can't use it in the military, you can't use it in the DOD, because then you're training it with data that shouldn't be shared with the general public. So that's the next generation that we're gonna start looking at in the next one to two years on one side, and then on top of that is beginning to look at applications.

What are we using in terms of applications? Because right now, if you are a small business, you are a customer, you're anything, you don't wanna be using the interface of ChatGPT. You want to embed it into travel, into booking, into ordering a sandwich, to try to get more sentiment from your customers.
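To make the RAG idea above concrete, here is a minimal sketch of the retrieval-augmented generation pattern Adrian describes: answers are grounded in a proprietary document store instead of whatever the base model happens to remember. The retriever, the `Document` type, and the `llm_complete` callable are all placeholders invented for this sketch; a real deployment would use licensed embeddings, a vector database, and the team's own LLM contract.

```python
# Minimal RAG sketch: ground the answer in proprietary documents.
# `Document`, the toy retriever, and `llm_complete` are stand-ins for whatever
# embedding model, vector database, and LLM provider a team actually licenses.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def retrieve(question: str, store: list[Document], top_k: int = 3) -> list[Document]:
    """Toy retriever: rank proprietary docs by naive keyword overlap.
    A real system would use vector similarity over embeddings instead."""
    terms = set(question.lower().split())
    scored = sorted(store, key=lambda d: -len(terms & set(d.text.lower().split())))
    return scored[:top_k]

def answer(question: str, store: list[Document], llm_complete) -> str:
    """Build a grounded prompt so the model answers from retrieved context,
    and is told to refuse rather than hallucinate when context is missing."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in retrieve(question, store))
    prompt = (
        "Answer ONLY from the context below. If the context does not contain "
        "the answer, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```

The investor-relevant point sits in the retriever: the value lives in the proprietary `store`, not in the general-purpose model that reads from it.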

[00:11:20] Jonathan Green: Yeah, I feel like what people get the most excited by is the least useful cases. Like everyone likes to talk about AI video, and every time a client talks to me about that and they go, oh, now we can do AI video, I say, how many videos did you make last year? None. So you don't make videos. Like, why?

But it's exciting, right? And everyone shows you a video. I've never seen an AI video that I would watch if you didn't first go, this is an AI video. It's interesting but not useful. And that's what we're distracted by in a lot of these areas. But at the enterprise level and with the larger clients, what you're talking about is really, where can we actually get use from it?

Where can we move the needle? And it's hard to match those two things. The hype and the excitement is video generation; there's a new video generation model, and none of them look real. I saw someone show me something last week that could do facial expressions when you move the mouse around on the page.

I was like, that's so hard. With a puppet, it takes six puppeteers to do facial expressions for an animatronic. If you're trying to do that, it's so much easier to just take a picture of a facial expression than to draw it with a mouse. Sometimes we take something and just make it way harder because it's cool. That's why no one shows any videos of it except for the demo video.

You don't see anyone actually using the tool, because I was like, this is impossible, this will never work. If I try to draw a facial expression, a smile, with a mouse, it's gonna look weird. It's so hard.

[00:12:44] Adrian Mendoza: Oh, absolutely. Part of it is, the next stage is being able to load in: if you had 10,000 expressions of happy, 10,000 expressions of sad, all different videos of happy, sad, everything, and then you loaded it into your own LLM, then of course you're gonna get some incredible video that comes out of it.

But I don't think that's what we're looking for. The reality is interesting. When you look back, I would say twenty-some years ago, now I'm dating myself, when the first open-use or rights-managed images came out, you had a bunch of Getty Images, and we were having the same discussions about IP.

People were like, this is gonna destroy the industry. Did it? No, it didn't. It just enhanced it. And this is where we're gonna be with AI: you're just gonna have it as back stock, as B-roll footage for things, but not really use it for your A-roll footage. I think it's gonna make the A-roll footage and video, the hero shots that photographers are doing, better, because you don't need the B-roll footage that we're paying thousands of dollars for.

[00:13:53] Jonathan Green: I think that makes a lot of sense, and I think there's also a lot of confusion from using the word AI, because when I grew up, late eighties, early nineties, AI meant sentient, right? It meant a thinking robot, Terminator. It can turn on us or it can help us, depending on whether you're reading Isaac Asimov or where you come from.

But now we've shifted to almost everything is an AI. Oh, this is an AI image generator. This is an AI voice clone. I think what used to mean AI, then it was strong and weak AI, and now they're saying AGI. They go, oh no, AI means AGI. We keep shifting the terms.

So when people think of ChatGPT, what exactly is it? Because there are so many shifting terms. It's not sentient, it's not thinking. I think of it as a really smart Magic 8 Ball, but you have a different perspective. So how do you think of it?

[00:14:49] Adrian Mendoza: You know how I think about it right now? Especially the LLMs: the LLM is a tool. And where I think we are is, we're still back in 1994.

In 1994, you could go into Yahoo or Lycos, any of the search engines, and when you typed something in, you would get pretty close to 90% accuracy if you wanted to learn about the history of something. And then we have close to 24 years of internet garbage. All of the search engines are searching everything and everyone, and because of that, when you now ask for the same thing, you're gonna get a fairly muddy answer.

All we're doing now with ChatGPT and these generative AIs is just trying to move more accuracy to the top, even though there is hallucination. Where I see it coming, I was asked about this last week: what we're gonna see in the next one to two years is what we're seeing with ads.

Having been in the ad space, we were getting ads and it was the wild west. Things were being personalized for you, and a lot of them were paid ads. And now we're finally seeing things that say, oh, this is a paid ad by so-and-so. In the next one to two years, when you ask generative AI, it's probably gonna come back to you and say, this is why I gave it to you, these were the data sources, because I think the validity of that response is going to be really important. But one of the things that we are looking at as investors is, alright, what's the short term, but also what's the long term? The long term is we're gonna use it to replace a bunch of things that we don't have access to.

A good example: people asked me last week, I was at a conference, hey, what do you think about AI replacing jobs? We're scared about this. And I was like, look, I'll use a good example: cybersecurity. The number one job that is unfilled is a cybersecurity risk analyst. We can't hire fast enough.

There are something like 50 to a hundred thousand jobs that need to get filled, and they're not being filled right now. So what could we use? We could use an AI system, maybe generative, to start picking up and reading through millions of data points a day of people trying to do attacks, any of the risk management that has to happen. That's good for us, because it's going to fill in a gap of what doesn't exist as we move over to more enterprise.

Perfect example: I'm wearing a T-shirt today of an AI company, one of our portcos, not advertising for them, but what they do is something called metadata labeling. They take someone's data and then they add labels to it, which just sounds mundane, but if you think about it, if you are a company and you're sitting on...

[00:17:55] Jonathan Green: Oh, that's my favorite topic.

[00:17:56] Adrian Mendoza: Oh my, we could go down... I'm gonna invite him on here, because he's gonna go deep down the rabbit hole.

[00:18:04] Jonathan Green: Because it's literally my favorite topic.

'Cause everyone can't find files. Trying to find that picture from seven years ago, and the naming is so bad. I always tell people, the two biggest time wasters are emails you don't wanna read and trying to find stuff on your computer. And now, my gosh, I love that topic.

[00:18:24] Adrian Mendoza: You've got, you're an insurance company and you're sitting on terabytes of things: claim files, receipts, customers. And what happens with these? Having worked at an insurance company, I was the entrepreneur in residence at John Hancock. We would meet these guys and they would be like, man, we have a data warehouse of stuff.

I'm like, wow, have you started mining it? And they're like, no, because there are no labels on it. If you asked it, which of my customers paid a claim, they wouldn't know. So what these guys are coming in and doing is building the rails to get to generative AI, because in order to get there, you've gotta label everything.

You've gotta say, this is a claim, this is that, and then transform it. Oh, we can't share PII, private information. We can't share gender information. We can't share whether this person has cancer or doesn't have cancer. But we do wanna know more information about them. In order to label it, it takes one body maybe a month to label 5,000, 10,000, because a human can't sit there and just click all day.

Now, if you use a collection of algorithms that are just going in, cleaning it up, and can do maybe a hundred to two hundred thousand a month, that data's now useful to us.
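As a rough illustration of the labeling-and-transformation step Adrian is describing, here is a minimal sketch of an automated pass that tags records and masks obvious PII before the data is used for anything generative. The keyword rules and regexes are invented for the example; a production pipeline would use trained classifiers and dedicated entity-recognition tooling.

```python
# Toy metadata-labeling pass: tag each record and mask obvious PII.
# The keyword rules and regexes below are illustrative only; real pipelines
# use trained classifiers and dedicated PII/NER tooling.

import re

LABEL_RULES = {
    "claim":    ("claim", "adjuster", "payout"),
    "receipt":  ("receipt", "invoice", "total due"),
    "customer": ("policyholder", "account", "beneficiary"),
}

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def label(text: str) -> list[str]:
    """Attach coarse labels based on keyword hits."""
    lowered = text.lower()
    hits = [name for name, words in LABEL_RULES.items()
            if any(w in lowered for w in words)]
    return hits or ["unlabeled"]

def mask_pii(text: str) -> str:
    """Redact the PII patterns we know how to spot."""
    return EMAIL_RE.sub("[EMAIL]", SSN_RE.sub("[SSN]", text))

def process(records: list[str]) -> list[dict]:
    return [{"labels": label(r), "text": mask_pii(r)} for r in records]

if __name__ == "__main__":
    sample = ["Policyholder jane@example.com filed a claim, payout pending, SSN 123-45-6789."]
    print(process(sample))
```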

[00:19:51] Jonathan Green: Yeah, there's so much of this. Like, I was watching the recent iPhone announcement, and they were like, you can take a picture of someone's dog and it'll figure out what kind of dog it is.

I was like, you could also just ask them. If I had the choice of you walking up and snapping a photo of my dog really close, or saying, what kind of dog is that, I would much rather have the human interaction. Like all the examples in their video, where he stands in front of a restaurant, takes a picture, and he goes, you can see the menu.

I was like, the menu's in the window. I've never been to a restaurant where you say, can I see the menu, and they go, no. So the three examples they have in the video are all things I would never do. As much as, maybe, I would rather order a pizza by text than a phone call,

I can read a menu in public, I can handle that. Or asking what kind of dog it is; also, the guy has no dog, he's at a dog park. Right away I was like, I can't believe they're showing this. That's weird. Maybe that's just how I see it,

[00:20:40] Adrian Mendoza: but these were what I refer to as useless use cases.

Yes. This is what we saw in 2016 with the beginning of AI. At the same time with blockchain, you were looking at blockchain technology, and there were a lot of useless use cases. You had the beginning of AI, useless use cases. I always use this example, and I feel bad to beat up this poor founder: we got a deck last year that said it was using AI computer vision cameras to check the number of pepperonis on a pizza.

And I was like, nobody cares. It's a 15-cent pepperoni. Nobody cares whether you have more pepperonis or fewer pepperonis. Useless use case. So we're clearly right in the middle of that hype cycle, and you're gonna see the really good players emerge. But it's funny, because as an investor you're seeing a couple of things happen.

You're seeing a bunch of these LLMs fighting it out to see who's gonna be the top LLM. These are OpenAI, Meta has one. Then you have this middle piece of people being like, oh my God, I need one, which one should I use? So we're gonna either make one up or we're gonna partner with somebody.

And then you have the third one, which are these useless use cases that don't need an LLM. Who cares?

[00:21:56] Jonathan Green: Yeah. The most common thing I hear, and this is so common, someone says, the CEO will say, the board director said, I need AI. And then they go, what do you mean? They go, we don't know, but we want it.

It's like, say, a restaurant. I'm like, there's nothing you can do. AI can't cook the food, it can't seat someone at the table. None of your functionality is gonna solve the problem. As much as I'm an AI guy, people think I'm an AI hammer and every problem needs to be hammered with AI.

It's not always the right case. I like human interaction; I'm glad we're talking face to face. I actually think AI is gonna cause a shift where, because AI video is so good, we meet face to face more often. I hope it actually leads to a shift to more human interaction. That's my hope.

I hope I'm right, but I feel like a lot of companies are overwhelmed, and a lot of it is in the middle. There's a lot of this communication. I see so many people in my industry saying, this is the Sora killer, this is the whatever killer. I'm like, Sora is not even out; they released a demo video like six months ago. But people get caught up in that, and then two weeks ago,

everyone: Anthropic's Claude is the best, it's over for ChatGPT. Then the same person messaged me today and said, Claude is trash, I would never use that trash. It reminds me of double Dutch jump rope, which I've never successfully done. There are two jump ropes spinning. You wait, when do I jump in?

When do I jump in? And it's, oh, you should use Claude. No, Claude is trash, you need to use Gemini. No, Gemini's trash, you need to use ChatGPT. No, you need to wait for Strawberry. And when people say, oh, why are companies afraid to use AI, I'm like, it's very reasonable, because there are so many. You don't wanna make the wrong decision, right?

You don't wanna do a massive shift, move your whole platform to Claude, and then everyone goes, oh, that's trash, you chose the wrong one, you've wasted $4 billion. So I think the fear is reasonable because of the marketing in the middle, between AI consultants and a lot of what we see in the media.

So I don't think it's that crazy for someone to be like, I don't know the right thing, because every day the recommendations change. Now, I interview a lot of CTOs on this show... Yeah, go ahead, please.

[00:23:55] Adrian Mendoza: Oh, I was gonna say, like you were right on the money. You're right on the money. And that's why I refer to it as the LLM war.

And I'm gonna use a great example, pre first hype cycle. Right as we're starting our firm, I got approached by one of the banks and I'm helping them build all their mobile functionality. I was a founder before, building fintech, great. And so one of the higher-ups, this is exactly like your board, says, we need something:

AI. Here's what we want: we want a chatbot. And I was like, alright, sounds good. So they spend millions of dollars on a chatbot, and they're bringing in sentiment analysis, they're making algorithms, they're rolling out the red carpet, and they put this out. Do you know the number one question that they ask the chatbot,

that customers asked the chatbot?

[00:24:44] Jonathan Green: Can I talk to a human?

[00:24:46] Adrian Mendoza: No. The number one question is, what's my routing number? That's it. So what did they do? They put the routing number on the first screen. No one ever used the chatbot again, and that's where we are right now: as enterprises are trying to figure out what to do with it, they're gonna go and spend a lot of money, but they're not really gonna figure out, is anyone actually doing anything with it?

And it's really been interesting, because of one of the things that's happened with two of our companies: they went from doing all the reporting and all the charts and graphs for the business team, and both companies scrapped what they were doing and built an interface.

Now customer service, the person that's at the bank, is literally using the user interface. I don't need charts and graphs anymore, because the closer you get to the person that's using it, the more you're gonna start collecting the feedback loop of what questions they are asking. And if they're asking a certain question, then let's find a solution, but also let's find training data in order to fill it in.

[00:25:57] Jonathan Green: Yeah, I'm glad you brought up chatbots, because I'm not really a big fan of customer service chatbots. I always ask people this: have you ever had a positive experience with a customer service chatbot? And no one has ever said yes.

Every company wants them. And I always ask this question: will the customer service chatbot have the ability to issue a refund? No, of course not. I would never let an AI control my money. But that means, as a customer, I know right out of the gate that this can't solve my problem at the highest level, right?

It doesn't have all the power that a person has, and I get that, because everyone's nightmare is, what if it hits the button and refunds everyone, right? We all imagine it just hits the button and refunds our entire bank account. I get it. And I always say, listen, this is my point, I wonder what you think:

I say, why don't you put one at the front, before the phone? If you wanna use a chatbot, fine. Put it where people are visiting the website, when they're not already mad, when it's answering presales questions, because people will just think, oh, this is cool, this is interesting, you can answer some questions while I'm waiting to talk to a person.

But if someone calls customer support, nobody's happy talking to a phone tree, right? I worked at a Fortune 50 company, and they said, oh, we moved our customer support to India because 10% of people hang up when they hear that accent and it saves us money. And I said, oh my gosh, that's horrible.

That's horrible, and that's your thing. And my friend, who was head of customer service for the whole company, had a major issue with a problem my sister had with one of their products. It took him like two weeks to get it fixed. And I was like, you're the highest-ranked person and you can't send a replacement part?

So now we have the bots in there, and I think that's the mindset that people always have. They think of the money they can save rather than the customer experience, right? I'd rather think, what's the right place to deploy this? Because yes, you save some money in the long run, but you also build up a lot of ill will.

Everyone talks about their bad customer service experience, right? That's when they're posting on all the websites and all of those things. So I just wonder, because that's where everyone thinks they're gonna replace people with a chatbot. I also hate the marketing language, and this is my industry: AI consultants that are just like, I'll teach you how to fire your entire staff.

I saw someone post something a couple of weeks ago that was about an optimization that I do, and he was like, it can save you a bunch of money. I was like, it only saves you money if you fire two of the 20 employees you're talking about, if that. Otherwise, if you're talking about staff, the only way to save money is to have fewer of them.

And I was like, instead, you could talk about increasing revenue, because with the same process, you don't fire anyone, you can just increase revenue 20% across the year. Most companies want that. But I do see this language, and it's what gets clicks, right? It's what people pay attention to.

This is gonna erase this industry, this is gonna erase that industry, your customer service team can be replaced with an AI. And I think that creates a culture of fear. So when I go into a company and I'm like, okay, here's a tool that will solve a problem, and I'm showing it to someone, they think they're training their replacement.

So the first fear that I have to alleviate, and their fear is reasonable, it's not a crazy fear to have since everyone says this is gonna replace your staff. And I'm like, no, I just wanna make you faster. I wanna give you an Iron Man suit. I wanna take the thing that annoys you the most and we're gonna fix that.

And if you can just have all your employees accomplish 20% more during the week... I also like to start with morale. I say, let's fix a problem that everyone hates doing, whatever the data entry thing or the task that everyone complains about. Let's fix that before you fix a moneymaker, so that way we can get everyone on board with it.

So I say, let me solve their problem first. That's how I approach it when I'm looking at implementing. And there's this other idea which is really scary, which is, I just signed a contract with an AI team, let's figure out how to use it. Sometimes there's this top-down, and I call this a visit from the good idea fairy, where you buy the tool and now you tell your team, we have to justify the purchase.

Let's figure out how we're gonna use it, right? We have this huge contract for this or that. I try to approach it from, what's a problem you have? Either too much money going out the door, something you can't find a person to do, something that's affecting morale, or something that just takes a lot of time and is repetitive.

Let's start with those areas rather than, let's make an AI video that looks really cool but also looks really creepy when it shows its teeth. Let's move in that direction. A lot of the messaging, I think the most exciting messaging is also the least useful messaging when it comes to AI.

[00:30:16] Adrian Mendoza: Yeah, I think you brought up two really good things, and the first one is when you talk about the efficiency piece. It's funny, because every time a founder or a company comes up to me, and this is my investor hat on now, and says, oh, I'm going to make the team more efficient: every startup that I've ever seen in my life that has tried to show efficiency has always failed.

The reason being, what they'll do is they'll say, oh, I'm gonna make everyone two hours more efficient. Alright, and then that two hours is worth, I don't know, 20 bucks an hour. So now they're $40 more efficient, times a staff of a hundred. And it ends up being literally a hallucination.

They make up the efficiency number, and I'm like, no, because you're still paying for that two hours. It doesn't matter; you may have recovered two hours, but they're still gonna take a two-hour lunch or they're gonna go out and do something else with it. So for me, every time that someone tries the efficiency play, I'm like, no.

Try again. Because we always, as investors, look at, what's the ROI? What's your return on investment? Someone pays, I don't know, a hundred thousand dollars for that group of AI people to come in for a month; you should be getting a hundred K back. How long is the payback period for the brand, for the enterprise, for the company? Because they want to know how much value it is. And this is where we've seen it go south, where someone brings that team or brings a product and then there's no payback period.

Because what ends up happening is, if you don't have a direct, oh, we got money back... Good example: we are investors in a fraud company that uses AI algorithms to catch fraud. I'm gonna make up a number, I'm not gonna use a real number. The client said, hey, this is a hundred thousand dollars a year.

In the first weekend, they caught about $95,000 worth of fraud. The customer is happy as a clam: oh my God, I literally got a payback period in one month. That's what's gonna make an AI customer happy: how long did it take for me to see payback?
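Using the made-up figures from Adrian's fraud example, the payback math he is describing is just cost recovered over time. A minimal sketch, with the same illustrative numbers from the conversation rather than any real client data:

```python
# Payback-period sketch using the illustrative numbers from the conversation.
annual_license = 100_000           # what the client pays per year (made-up number)
value_recovered_first_month = 95_000  # fraud caught right away (made-up number)

monthly_cost = annual_license / 12
months_to_payback = annual_license / value_recovered_first_month  # ~1.05 months

print(f"Monthly cost: ${monthly_cost:,.0f}")
print(f"Payback period: {months_to_payback:.2f} months")
```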

[00:32:28] Jonathan Green: Yeah. So in the example I was talking about before, I was specifically talking about a sales team, and I said one of the inefficiencies in a sales team is filling in the data, which they all hate doing.

Salespeople just like selling, right? They wanna generate as many sales as they can. And in my example, you have 20 people on the sales team. They spend two hours a day filling in data and six hours a day doing the calls, rough ratio. If we can cut that time in half, they can do one to two more calls a day, which means two or three more sales per week, and that's top-line revenue. Whereas someone else talking about it was saying you've saved all this money because you've saved these two hours. And I was like, yeah, but only if you fire someone, right? Only if you fire one of the people whose salary that is.
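As a quick back-of-the-envelope version of this point, here is the revenue-side arithmetic with the example figures above (20 reps, an hour of data entry recovered per day); the calls-per-hour, close rate, and deal value are invented placeholders just to show the shape of the calculation:

```python
# Revenue-uplift sketch for the sales-team example. Team size and hours saved
# come from the conversation; calls-per-hour, close rate, and deal value are
# made-up placeholders to illustrate the calculation.
reps = 20
hours_saved_per_rep_per_day = 1.0   # two hours of data entry cut in half
extra_calls_per_hour = 1.5          # assumption
close_rate = 0.10                   # assumption
deal_value = 2_000                  # assumption, dollars
working_days_per_year = 250

extra_calls = reps * hours_saved_per_rep_per_day * extra_calls_per_hour * working_days_per_year
extra_revenue = extra_calls * close_rate * deal_value
print(f"Extra calls per year: {extra_calls:,.0f}")
print(f"Estimated extra revenue: ${extra_revenue:,.0f}")
```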

So I really like what you're talking about, because a lot of times people like to talk about imaginary math. Like when I talk to social media people and they talk about reach. I had someone who worked for me, he goes, our reach is 4 million. And I said, I don't know what that means. What's reach?

He goes, that's the number of people that could have seen your post. And I said, oh, in that case, my reach in high school was 500, 'cause I could have had 500 friends, but I actually had three. So reach is how many people could have been friends with me? That's the most imaginary thing; I couldn't believe it.

I was like, that's what that means. That's so imaginary. Like how many people could have, 

[00:33:37] Adrian Mendoza: like 

[00:33:38] Jonathan Green: how many people did see 

[00:33:39] Adrian Mendoza: it. 

[00:33:39] Jonathan Green: Yeah. 

[00:33:40] Adrian Mendoza: And no, absolutely. I think you brought up a perfect case, because here's the thing with the sales example: those people that have two more hours can make two more calls.

If they close, they're making money on that. If they close, that is more money daily, weekly, monthly in their pocket. I'll tell you right now, they're gonna be more incentivized to bring that on if they know it's gonna help 'em close faster, because there's real money, not imaginary math. I love that term.

I'm gonna end up using it, 'cause that's such a social media thing. But this is real math. These are real dollars and cents that someone can actually put in their pocket.

[00:34:21] Jonathan Green: Yeah. The same person asked, he goes, what metric do you care about? I said, my favorite one is dollars.

My second favorite one is email addresses. If someone opts in and joins my email list, eventually they'll buy something. I said, but this is the furthest from that. It's the furthest possible. The number you want is, we'll just say, however many people are on Twitter, that's my reach. How many?

A hundred million, 'cause they could all see it. It could go perfectly viral. Elon Musk could retweet it.

[00:34:46] Adrian Mendoza: I couldn't agree more. People ask us as investors, hey Adrian, how do you know that a company has the perfect go-to-market strategy, that they have product-market fit? And I was like, that they're actually making money.

That you have someone writing them a check to buy it. And I'll use a good example. If somebody wants this so much, they will pay ahead to help make it, even before it's built. We had a company that was in cybersecurity. They sold to a brand. The brand wanted what they were building so badly, they paid for 90% of the engineering costs, and they still paid for a license afterwards.

That shows that they didn't just want this thing, they needed it. It's want a cookie versus need a cookie: we all want the cookie, but do you need the cookie?

[00:35:39] Jonathan Green: Yeah. The other imaginary number, when people come to me about investing, they always say, oh, this many of my friends said they would do it. I used to be in publishing,

and I'd always say, do not ask your friends' opinion on your book, do not ask your friends' opinions on the book cover, because they're not your customers. When I published my first book, which was a massive success, my mom paid me $20 to not have to read it. And that was a hard lesson to learn. I was like, what?

I was like, it's actually number two, it's doing really well, it's on all the charts, I'm getting calls from publishers. She goes, yeah, I already know you. And that's the thing: people who already know you will always say things like, yeah, of course I'll come to see your movie,

but then they never buy a ticket. And that's the real difference. There are lots of movies I said I was gonna see and then I don't go see. Something happens, right? Life gets in the way, or a different movie comes out that's more interesting. And that's really the only metric that is real.

Everything else is imaginary once you go past that: everyone said they liked it, everyone liked my idea. And I'm always fascinated by companies that really focus on this non-measurable growth. Like when Threads launched, they were like, we have 500 million people. I'm like, didn't you just transfer everyone with an Instagram account over?

That's not the same, because people have to actively do it. It's very different than passively. I actually met the first person ever who actually uses Threads last week. He told me, oh, I love it. And I was like, you're the one; I'd never met someone before.

I guess I'm glad there's someone who likes it. He was like, it's the best, I get the best responses to my posts, I go viral all the time, everyone's really nice. I was like, what? That sounds amazing. The way you described it made me wanna try it.

[00:37:24] Adrian Mendoza: I have a good way of describing it. Threads is like ChatGPT, because you've sifted through the garbage.

Because you don't have millions of people on it, you're getting a handful of people that are active, so you're gonna get great responses. It's like using ChatGPT to ask it a question. This is where it is: Twitter is the internet and you're trying to sift through and look for it.

Threads is just like ChatGPT: some of it may be real, some of it may be fake, but look, they're responding to me.

[00:37:55] Jonathan Green: It feels good. It feels good.

This has been amazing. There's one last area I wanna go into. There's a lot of AI software out there that is either just ChatGPT in a wrapper, and half of Product Hunt seems to be that, or there are a lot of tools that are just using heuristics. Like, I was testing every AI website builder, and a couple of them had no AI component.

They were faking it. It was a slot machine, as in you could get one of three responses for each box, and if you spun enough times, you would catch that, which is what I did. I said, wait a minute, you didn't ask me for any input, you just gave me choices from dropdowns, which means there's no AI component. And for the low-level consumer, they're buying all sorts of stuff that has no AI component, or they're massively overpaying,

not knowing that anyone can use the ChatGPT API; anyone can get access to these APIs. They're very widely available and you can pay 4 cents a use. I could go down the rabbit hole with Salesforce's recent announcement of charging $2 per conversation, which is, I wanna get in that business, because that's 99% profit.

So from your perspective on the outside, how do you separate it, right? When you're looking at it and going, this isn't even real AI. You're either reusing someone else's, which means you're beholden to their data set, so if something happens to them, something happens to you, right?

If they go outta business, you go outta business. Or you are faking it, and sometimes they're faking it with the intention of eventually building their own, or at least licensing one. Like Perplexity was very interesting to me, because they used someone else's and then they developed their own, I guess, like they say they did. I'm not a hundred percent sure, but they've been doing some really cool advancements.

When you're coming in there, when you're looking at stuff, what are the things you look for when you're doing your sniff test of, wait, something's wrong here? What are the things that get your spidey sense tingling, if you will?

[00:39:42] Adrian Mendoza: Yeah, absolutely. Great question. Part of it is you're looking at, I think, the front end, the LLM side, and then the back office. And on the front end it's, alright, did you build this interface?

Did you lease it or rent it? Do you own it? A lot of them are using APIs, a lot of them are using whatever, but this has been around for a while; people have been using search APIs and integrating them. So there's not that much IP there, because you're gonna have to build a front end. But then you shift the conversation: alright, how usable is this?

There is now a user experience component that has to come to the foreground. But it's also going to be the question of, did my mom give me 20 bucks to tell me that this is great? So in order to get past that due diligence sniff test, you start talking to

the customers. Are you using this? How many people are using this? Who is using it? Did this go to the data scientist, or is it actually going to the person that's opening an account, or the person that's in customer support? What is the process of them actually putting their hands on it? And also, what questions are they asking it?

Because if they're only asking, what's my routing number, nobody's using this thing. So there's that piece, which is actually fairly easy to sniff out because you're gonna engage them. The second one is you start digging into, alright, are you using an LLM? If not, whose are you using? How are you using generative AI?

Are you using RAG? Right now, what I've seen is 99% are using off-the-shelf LLMs; they're using ChatGPT. The problem is once they start charging, which is what's gonna happen: they're gonna start charging every time you do a conversation. Salesforce is smart, 'cause they know that this is what's starting to happen for these API-driven things, and then all of a sudden this is gonna happen.

We just saw this on the fintech side. For years, fintechs were built using people like Synapse. Two months ago, Synapse went out of business. It took down a bunch of fintechs, because they didn't have their own back office as a service. The same thing's gonna happen, because what if you're tied to a Google Bard and then they deprecate it? You're hosed, because you're writing your front end directly to that LLM.

But the other thing we're seeing, and this is a good place to start digging in, is, do they have anything? Because a lot of the time, what we're seeing is an expert system. There's no LLM used, because they're not actually pulling anything in, like the website builder. That's not real AI; it's an expert system.

And this is why the third portion is to go deep: what are your data sets? Because it's not like these website builders are using the plethora of everything on the internet. They have licenses with a handful of people, and it's just serving you up like a random slot machine.

Again, not real AI. There's no sentiment analysis, there's no natural language processing, there's no matching between one algorithm and the other. And that's when we start digging in, being like, alright, where's the data set? Whose data set? Because again, it's okay if you're gonna go in

and hire a bunch of people to make the decisions. We had a portco that did that. They were like, hey, in the meantime, we're gonna hire a bunch of staff in the Philippines to come up with the decisions, but they're collecting the data of qualified decisions. And more importantly, it's not just the decision,

it's the workflow. Did someone actually buy it? That's a huge thing that's missing, a massive red flag, because if you have a front end and you have an LLM and you have a data set, but you're not capturing the workflow, whether or not someone successfully used it, what they chose, and did they buy it, that's where we're seeing a lot of people not thinking about it; they're falling flat on this.

[00:43:51] Jonathan Green: Yeah, a lot of the tools that I encounter are basically an embedded prompt. You just fill something in, then they add their prompt and send it to an AI. And it's interesting to me, because of what I do, I always reverse engineer them. I can figure it out, because they usually don't have that many variations.

It'll send one of maybe five or ten prompts with it each time to try and mask it. And I just wonder how long these businesses last, and how long it will take everyone else to figure it out. How long until you notice, oh, my responses always have the same structure? It must be sending it, saying, respond with this structure back.

That's all that's hidden in the prompt, which is fine for the short term. But I feel that's where a lot of people, a lot of consumer-level or lower-level enterprise... Mostly I work with companies of like 10 to 50 employees, and that's where a lot of people are running into that problem.

And that's when I say, that's when you're overpaying, 'cause now you have to pay for that company, who's paying for their API. Anytime I deal with a company, I always ask, can I use my own API key? 'Cause I know that they have to play risk management. If they're using their own, they're gonna try and guess

and obviously charge over that, so that they never go upside down if I do massive use, if I suddenly do a million calls that month. So I'm always looking at those things, but I'm more of a technical person. So I try to find that balance of what's useful, what will be here for a while and will continue to work and has overhead control.

One of the stories I tell is that Midjourney, when they updated to version six, in their announcement, and no one noticed this, they said all previous prompt structures will no longer work; you have to use a completely new prompting structure. And I was thinking, if anything was built on that... First of all, everyone who bought a prompt collection's gotta be disappointed, right?

You're like, oh, that's useless now. So people are buying like a million prompts they'll never use. Or if someone's built their tool on it, you have to redo everything. Like they just announced with the recent ChatGPT, they said, oh, the prompting structure's changed. I haven't tested enough to see; I haven't run into my prompts not working anymore.

I use a pretty simple structure to avoid that problem, but it's a really scary thought: your entire company is built upon these really cool prompts you came up with, and now they don't work anymore, and your phone is ringing off the hook.

[00:46:05] Adrian Mendoza: You're right on. Because one of the things that no one's talking about is, how do you build

a microservices-based architecture that creates a layer between the prompt and the LLM, so that you can switch them out? You should be able to, quickly, when ChatGPT changes something, go alright, let me flip the switch, and I go from one service to another, to a different one.

We should be able to switch back and forth, and a lot of people are hard-coding these things. And this is the thing when you get into deep technical due diligence and you ask them: hey, walk me through the technical architecture. How does the request come in? Whose API is it? Whose key are you using?

What is the API? What do the transactions look like? And also, what's the SLA, the service level agreement, between you and the API at a certain number of calls? If you start sending a million calls to one of the LLMs, it's gonna get really expensive really fast, and all of a sudden the amount you have to pay is going to outweigh what you are getting out of the transaction, what you are selling.
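A minimal sketch of the switchable layer Adrian describes above, where the application talks to a thin interface rather than hard-coding one vendor; the provider classes and their behavior are placeholders invented for the example, not anything recommended in the episode:

```python
# Thin provider-abstraction layer so the app never hard-codes one LLM vendor.
# The concrete providers below are illustrative placeholders only.

from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderA(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Call vendor A's API here (client setup, auth, retries omitted).
        return f"[provider-a] {prompt[:40]}..."

class ProviderB(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Call vendor B's API here.
        return f"[provider-b] {prompt[:40]}..."

# One config switch moves the whole app to a different vendor.
PROVIDERS = {"a": ProviderA(), "b": ProviderB()}

def answer(prompt: str, provider_name: str = "a") -> str:
    return PROVIDERS[provider_name].complete(prompt)

if __name__ == "__main__":
    print(answer("Who is VC Puppy?", provider_name="b"))
```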

And we've seen this on the fintech side. This was Synapse, banking-as-a-service 101. People would be like, oh, I'm gonna put this transaction in, but the API calls to Synapse, the banking as a service, were more expensive than what they were getting from the customer, and so the unit economics aren't working.

And so that's one of the things that we as investors have to quickly look at: what's the cost of goods sold? How much are you paying per call? How much are you paying per prompt? How much are you paying for every model that you're associated with?

And also, what are your AWS charges? That's the cost of goods sold on one side of it; if you don't have that, you don't have a product. How much are you getting in revenue? Then let's deduct it and say, what's net revenue after that? But then, more importantly, can you scale?

Because if you don't have the right agreement, as I said, those APIs are just gonna break you.
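The unit-economics check Adrian describes reduces to a simple per-transaction margin. A rough sketch: the $2 and 4-cent figures echo Jonathan's earlier Salesforce and API-pricing examples, while the infrastructure share is an invented placeholder.

```python
# Per-transaction unit-economics sketch. Prices are illustrative, not quotes.
price_charged_per_conversation = 2.00   # what you bill the customer
llm_api_cost_per_conversation = 0.04    # what the LLM vendor charges you
infra_cost_per_conversation = 0.01      # hosting / AWS share (assumption)

cogs = llm_api_cost_per_conversation + infra_cost_per_conversation
margin = price_charged_per_conversation - cogs
margin_pct = margin / price_charged_per_conversation * 100

print(f"COGS per conversation: ${cogs:.2f}")
print(f"Margin per conversation: ${margin:.2f} ({margin_pct:.0f}%)")
# If the vendor raises per-call pricing, COGS can exceed the price and the model breaks.
```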

[00:48:21] Jonathan Green: Yeah. This is why, with everything I build, I tell someone, use your own API key; you control those fees. I just charge a smaller monthly fee, but you take on all the risk. And people are always like, why do you just build on top of ChatGPT?

I was like, I just don't lie about it. Everyone else is doing the same thing; I just tell you how I built it, because you'll figure it out eventually. And I think this is a really important lesson: this is the danger of thinking you're getting a great deal. It's a great deal until the company either doubles their prices or goes outta business.

You have to look at some of those things. A good deal doesn't last that long, and I've seen a lot of companies go outta business or just massively spike their prices. And it's very interesting, because these are things a lot of people miss, because most people are coming from the outside of AI.

They know they need it, they're hearing about it all the time, but it's hard to separate the signal from the noise, 'cause most of the news is about things that are exciting, not things that are useful. It's about AI music or AI video or AI image generation, which is just a replacement for stock photos, like you said. That's a simple thing, but they think it's

so much more because of the marketing. So it's very good to get to the center and say, here's what's useful, here's the things to look for, here's the questions to ask. 'Cause that's the most important thing: what questions should you ask to figure out, is this tool gonna solve a problem for me? Is this tool actually AI?

Is this something that's gonna be around for a while? Because there's a lot of hype. I see so many people posting videos of, oh, I made a piece of software with this tool, and I look at their profile and I'm like, you've never developed anything before. You don't really know. I can push a button on a Python thing and it either works or it doesn't.

I can have ChatGPT write code, maybe. I haven't done it; I don't sell that, for a reason, because there could be something in there that's broken and I won't find it and I can't fix it, because I'm not skilled enough. It's not my area of expertise. Just like when I see an AI video by someone who's already made videos before, a cinematographer, it's so much better,

'cause they know storyboarding, they're actually telling a story. It's not just a car going backwards. Because you're an expert at something, you can tell if the AI's drifted, or if it's hallucinated or dreamed or gone off track or lied to you, like you were talking about. So I think this has really been a great discussion.

We went way over time because we were having such a good time; this is so interesting to me. So I had a really great time. For people who wanna follow you, connect with you online, see more of what you're doing, and make sure they catch more of your media appearances, what's the best place for people to find you online?

[00:50:45] Adrian Mendoza: I'm on LinkedIn, follow me at Adrian Mendoza VC. I'm on Twitter as Adrian Mendoza VC, as well as Instagram at Mendoza Ventures. Happy to have you follow us across all social media. You can follow VC Puppy. I'll leave you with how I figure out what LLM a product is using: I ask it that question, and that question is, who is VC Puppy?

And that's how I know which LLM is doing what, because if you don't know and you don't have a key question like that, you'll never know what's under the covers. So yeah, follow us on social. Again, active on LinkedIn. Feel free to always reach out. 

[00:51:21] Jonathan Green: That's amazing. Thank you so much for being here, guys. I know you enjoyed it.

Thanks for listening to another amazing episode of the Artificial Intelligence podcast. 

[00:51:29] Adrian Mendoza: Thank you, Jonathan. This has been awesome, man.

[00:51:33] Jonathan Green: Thanks for listening to today's episode. Starting with AI can be scary. ChatGPT Profits is not only a bestseller, but also the missing instruction manual to make mastering ChatGPT a breeze. Bypass the hard stuff and get straight to success with ChatGPT Profits. As always, I would love for you to support the show by paying full price on Amazon.

You can get it absolutely free for a limited time at ArtificialIntelligencePod.com/gift.

Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and tactics on how to leverage AI to escape that rat race. Head over to ArtificialIntelligencePod.com now to see past episodes.

Leave a review and check out all of our socials.