Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

Pick up the Phone, ChatGPT is Calling with Alex Puga

Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits (Episode 338)

Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this engaging episode, we dive into the evolving landscape of AI in sales with our special guest, Alex Puga, from Hypergrowth. Alex shares his insights on the integration of AI technologies within sales processes, emphasizing the importance of human oversight to maintain authenticity and trust in customer interactions.

Alex underscores the necessity of "babysitting" AI tools, ensuring they augment current skills without compromising trust. He elaborates on how Hypergrowth operates as a managed service, focusing on lead generation through a detailed approach that prioritizes quality over quantity.

Notable Quotes:

  • "We want to get to this whole AI SDR world... But what Hypergrowth does is we're essentially handholding the AI agents through every step of the process." - [Alex Puga]
  • "The more we begin to use these AI agents, the more we are losing human touch... You're going to get replaced by someone that's using AI." - [Alex Puga]
  • "These little mistakes can get bigger and bigger." - [Jonathan Green]
  • "There is a desperation for authenticity that I'm seeing in the market..." - [Jonathan Green]

Connect with Alex Puga:

https://www.hypergrowthgtm.com/

Connect with Jonathan Green:

Pick up the phone. ChatGPT is calling, with today's special guest, Alex Puga. Today's episode is brought to you by the bestseller ChatGPT Profits. This book is the missing instruction manual to get you up and running with ChatGPT in a matter of minutes. As a special gift, you can get it absolutely free at artificialintelligencepod.com/gift, or at the link right below this episode. Make sure to grab your copy before it goes back up to full price.

Are you tired of dealing with your boss? Do you feel underpaid and underappreciated? If you want to make it online, fire your boss and start living your retirement dreams now, then you've come to the right place. Welcome to the Artificial Intelligence Podcast, where you will learn how to use artificial intelligence to open new revenue streams and make money while you sleep. Presented live from a tropical island in the South Pacific by bestselling author Jonathan Green. Now here's your host.

I'm really excited to have you here, Alex, because there have been a lot of changes recently in how AI is getting used, and now that there's more real-time capability and more AI voices, a lot of people are starting to wonder: should I have ChatGPT answer the phone for me? Should an AI do my sales calls? There are a lot of different versions of this, everywhere from an AI running the entire sales process, to AIs doing cold calling, to AIs doing video calling. Maybe start off by letting us know where the world is right now: what tools are actually working, and which tools are making promises they can't quite live up to just yet?

Yeah, definitely. First of all, thanks for having me, and that's a great question. There are a lot of tools out there, some that might be over-promising, and some that are building things of value but might not be where we hope or expect them to be. In my day to day, I use a lot of different tools.
I like to say Hypergrowth is a research and development firm, in a way. We like to experiment with a lot of different tools. Obviously, ChatGPT was released to the public about two years ago, which is crazy. Time flies; it seems like it was just this year. But a lot of people have been leaning on AI, or trying to cut corners with it, I'd like to say, and unfortunately we can't really trust that the tools are at that point yet, that point being letting them run loose to get the job done while you simply trust that the job is getting done. Currently, the way we do things is we're really hands-on. We want to make sure the AI is doing what it needs to be doing, and that requires a lot of manual checks: making sure the data we're sourcing is clean, and that the messaging is right.

I've played around with a lot of different tools. You may have heard of Clay, for example. They're big right now. They're essentially a platform that allows you to piece together your workflow, AI included. You're able to pass in your API keys and query their AI using information that was scraped online, to then build a personalized, relevant message. So Clay is one of those big players out there, and I think they're one of the good ones that aren't necessarily over-promising. They're just like: here's a platform. A lot of the time there is some skill that goes into prompting AIs, so it's not just about using them; you need to know how to use them correctly. That's where Clay comes in, and they really do have a lot of nice resources that explain and show you how to prompt these AIs. They're not promising any magic there.
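The scraped-data-to-personalized-message step Alex describes depends on constraining the prompt so the model can only use verified facts. Here is a minimal sketch of that step in Python; the field names, the prompt wording, and the `build_personalization_prompt` helper are illustrative assumptions, not Clay's actual API:

```python
def build_personalization_prompt(profile: dict) -> str:
    """Turn scraped profile fields into a constrained prompt for an LLM.

    Empty fields are dropped, so the model is never invited to guess
    (and hallucinate) details that were not actually scraped.
    """
    known = {k: v for k, v in profile.items() if v}
    facts = "\n".join(f"- {key}: {value}" for key, value in sorted(known.items()))
    return (
        "Write a two-sentence outreach opener.\n"
        "Use ONLY these verified facts; do not invent anything else:\n"
        + facts
    )

# The dict below would come from an enrichment step (the values are made up).
prompt = build_personalization_prompt(
    {"name": "Dana", "company": "Acme", "role": "", "city": "Austin"}
)
```

The point of the "use ONLY these facts" framing is exactly the babysitting Alex describes: the guardrail lives in the prompt, and the human still reviews what comes back.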
Other AI tools, beyond the typical Claude from Anthropic and ChatGPT, are very few. I think today a lot of tools claim to have AI, but it's just not as seamless as we think. I wish I could give out a few more tools, but off the top of my head, Clay is one of those big players that a lot of people are using and starting to look into.

That's really interesting to me, and I have a couple of ideas in my head. The first is that data mining often gets one or two things wrong. My name is super common; there are tens of thousands of people named Jonathan Green, so it's very common for people to mix me up with other Jonathan Greens. I'm not the only author: there's the John Green who writes novels like The Fault in Our Stars, there's a science fiction author, and there's a famous painter. There's also a lawn care company called Jonathan Green, and it used to come up first when people searched my name.

So one of the things I always worry about is false positives. It's like that thing you do where you try to show someone how much you care by saying the names of their kids, but then you get one wrong. That little mistake ends up undoing all the positives of doing the research. That's one of the challenges. Even when I take one of these podcasts and have an AI write the show notes, sometimes it mixes up me and the other person and writes stuff about the other person that is actually about me. These little mistakes can get bigger and bigger. So that's what I worry about with data mining when you're just not paying attention. I always want to ask: what's the source? Because we've all seen ChatGPT just lie to you.
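One practical guard against the many-Jonathan-Greens problem is to refuse any enriched record that a name alone supports, and require several independent fields to agree first. This is a sketch under stated assumptions: the field list, the threshold of 2, and the `is_confident_match` name are all illustrative, not any vendor's API:

```python
def is_confident_match(lead: dict, enriched: dict, min_agreeing: int = 2) -> bool:
    """Accept an enrichment result only if multiple independent fields agree.

    A name alone is never enough: "Jonathan Green" matches novelists,
    painters, and a lawn-care company. Tune the corroborating fields and
    the threshold to your own data.
    """
    corroborating = ("company", "email_domain", "city")
    agreeing = sum(
        1
        for field in corroborating
        if lead.get(field) and lead[field].lower() == enriched.get(field, "").lower()
    )
    return agreeing >= min_agreeing
```

Records that fail the check go to a human for review rather than straight into messaging, which is the "what's the source?" question made mechanical.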
The other thing I think about a lot is trust. Sales, especially over the phone, is so much about: do I trust this person? It's the same even when you're buying a car. Do I think this person is telling me the truth, or are they taking me for a ride? When you put AI in the middle and someone realizes it, that can shatter the trust. The best way to make people hate you is to trick them, because then either they're dumb or you're a jerk; those are their two choices for what happened. So they get dramatically angry at you. I've seen this happen most often when someone leaves a comment, gets what seems like a really nice reply from the author of the social media post, and then realizes it was an AI comment. They get so mad, really mad. I don't get that mad about it, I get annoyed, but I've seen people at level-10 rage. They want the person's social media accounts deleted; they want the person banned from the internet, because of the switch from feeling good to feeling bad.

That's my biggest fear with having AI in the process: these little mistakes can happen. When my YouTube channel was really small, I set up ChatGPT to automatically reply when someone would leave a comment. At first it would just say, oh, thanks for the comment; simple stuff like that. Then someone asked if I was the one who'd made a piece of software, and ChatGPT goes: yep, we sure did. What do you think of it? And I was like, that's a lie, a huge lie. So I turned it off immediately when I saw that, because up until then it was only supposed to respond positively when people left a mean comment, so that, unlike me, I don't get pulled into it. That's the danger: AI has variance. Sometimes it will drift a little and say something that's just completely not true, and now you're stuck.
Either you tell the person, oh, that was an AI, I've been tricking you the whole time, or you have to stick with whatever the AI promised, whatever lie it told. Those are the kinds of things I worry about, especially with AI on phone calls.

Yeah, and that's what I was mentioning earlier: you have to babysit it quite a bit, making sure it's not pulling the wrong person, not saying the wrong things, not quoting the wrong sources. That's the trust factor, right? Being able to trust AI as a user, and then, as you touched on, being able to trust it from the receiving standpoint. You're totally on point with that. If I were to receive an obviously automated message, it's so easy to ignore it, archive it, delete it, whatever the case is. You're just like: these people aren't even going to know if I saw the message. They probably sent it out to thousands of others. You have that feeling that it doesn't matter if you don't respond, because it's just an AI; you're not hurting anyone's feelings at that point. But then, yeah, there's also the trust factor from the user perspective: can I let it run loose, and will it do what I want it to do without breaking the trust of others?

And we've started to see phone companies and other big companies build fake AIs to answer the phone. They always seem to be elderly; I just saw a new one, a video grandma designed to answer cold calls and string the caller along as long as possible. So I feel like we're going to get to the point where it's just AIs talking to each other on the phone. Your AI calls, my AI answers, and nothing ever happens except that we use more and more electricity and resources for this kind of silliness.
There's a desperation for authenticity that I'm seeing in the market: people really want to know if you are you, if you are a person. It used to be that they wanted to know if you were using a pen name or someone else's pictures. Now it's: are you an AI? Are you a real person at all? Are you a composite? Most of the messages I get, when people email me or respond to me on LinkedIn, the first message is usually trying to see if I'm a real person, like touching a hologram to see if it's solid. We've already added this new layer: am I talking to a real person? And it can be really small things that break trust. I get asked a lot whether my background is real or generated, so I have to walk back toward the wall all the time to show people, because just saying it's real doesn't prove anything. If it were AI-generated, it would look better than it does; there are always changes I want to make.

We're really at a point now where trust is getting harder and harder to hold onto, because there are so many AI tools out there. Once you've been burned one time, once you've bought something online and didn't get it, or talked to someone who wasn't a real person, it's so much harder for the next person. They have to recover that trust. It's like when someone comes out of a bad relationship: the next person has to suffer for whatever the previous person did. All that baggage comes into it. So when you look at what you're doing in AI and sales now, how can you carve out a space where AI is useful, but not to the point where it puts trust in danger? Especially because, as an AI person, I have to massively over-calibrate. People expect me to use AI more, so I have to use it less to make up for that.
So how do you deal with that as you're building out AI in sales? This is the thrust of what you're working on.

Yeah, exactly. Like I mentioned, I think we really need to babysit; that's the biggest thing. We don't want to be checking every single response we're getting, but we are definitely checking most of them, and prompting as specifically as possible to get the results we're looking for. We do want to get to this whole AI SDR world of set it and forget it, but what Hypergrowth does is essentially handhold the AI agents we use through every step of the process. If at any point one begins to hallucinate or do something we're not expecting, that's where we put a halt to it and manually take over. Ultimately, the goal for us is to use AI to augment our current skills. And there's the trust factor you mentioned: the more we use these AI agents, the more we lose human touch. Everyone's saying, oh, AI's going to replace us. Not really; you're going to get replaced by someone that's using AI. We've probably heard that a million times now, but that's what Hypergrowth is really doing: human first, AI second. If the AI is proving to do what it should be doing, we let it run, but the second it doesn't, we step right in and take over. Not sure if that makes a lot of sense, but that's the direction we move in.

I think that's really good. One of the things I encounter a lot is lazy prompting, and I think it's not so much the user's fault as the AI companies' fault, because they've over-promised how easy it is to get a good result from ChatGPT or Claude or any tool. People don't realize there's a skill, an art, to it. And there seem to be two schools of thought.
One school of thought is that it's so easy it works on the first try; that's the big promise when you sign up. The other end of the spectrum is people saying, oh, it's just like learning Python. And I'm like, that's hard. Programmers think programming is easy, but everyone else thinks it's really hard. Sometimes people assume I know how to program, and they'll say something to me and I go: that sounds impossible, I have no idea how to do that. I can copy and paste; that's my level of programming ability. I can fill in a blank. When people start talking about specific code beyond that, I have no idea what those things mean. I can tell whether something's working or not, but that's it.

So those are the two ends of the spectrum: the people who think it's really easy and the people who think it's crazy hard, and it really falls somewhere in the middle. You can become a decent user if you just sit down for one day and say: today I'm going to try a lot of different prompts and methods and see what works for me. If you sit down for eight hours straight, you'll be pretty good at it. It's not five minutes, but it's also not weeks and weeks.

Unfortunately, it all falls apart like a house of cards, like you said, and there are two places where people make a lot of mistakes. The first is that they don't spend enough time on their prompting, and it doesn't take a lot of time; you just have to play around with it and try different approaches. The second is that people really like to skip quality assurance. Now, this was before my time and certainly before yours: it used to be that when you dictated a letter to your secretary, the end of it would say, "dictated but not read." Which means: I said it out loud, she wrote the letter, but I didn't check it.
So if something's wrong, it's her fault, not mine. Maybe we should add that when we have AI write a social media post: "prompted but not read." Because every time someone runs into trouble, it's when they don't double-check. I got lucky that I caught ChatGPT lying in that YouTube comment right away. I go, whoa, that was a huge jump from "oh, thanks for your comment" to "oh yeah, that's our company, what do you think of it?" It's like: slow down, I don't want trouble here. But if you're not checking, that's when you get into the biggest trouble, because we believe it's so good that we don't check the work.

It's the same problem you can have with employees. If you hire a phone salesperson and you don't check their calls, you can end up with a problem. They could be lying, they could be losing sales; anything could be happening. You always have to have some oversight, especially because there are a lot of laws about phone calls: you're supposed to record them and check them to make sure you're not breaking the law. I used to be in phone sales, and there are tons and tons of rules, all sorts of regulations. You can't promise things that aren't true, so you have to be very careful. If you're not checking that stuff, you can actually get into a lot of trouble. And it's the same with any employee: if an employee knows you're not checking their work, what's going to happen? The quality starts to go down. I know this because I've done it many times, where I haven't checked the work and suddenly it collapses after six months. You have to have an element of checking the work, of paying attention. But we treat AI as somewhere in between a tool and an employee.
And I think it's because we think of it almost like a magic genie that we don't check the work, and then it does stuff we regret. Since our first chat, I've thought a lot about the idea of an AI that sounds like me handling, say, the first half of a phone call. What if it promises something? The problem with ChatGPT is that it always wants to give a positive answer. If someone says, oh, I was bullied in high school, ChatGPT will go: oh yeah, I was bullied too, it's the worst, isn't it? Now I'm stuck with a backstory that's not true, or whatever it said, because it's always rapport-seeking. You can get stuck having to keep promises that aren't true, and little things can become a huge problem.

When I wrote my first book, I wasn't married yet; I was dating my wife, so in that book I call her my girlfriend. People expect me to go back and change it to "wife" in the book, which I'm not going to do. That's weird; I'm not going to re-edit a whole book just to change one thing. In later books I call her my wife. We've been together a long time, and the number of kids I have changes: right now I have five kids, which means if someone listens to this in a few years, maybe I'll have six or seven and the number will change. But people bring that little thing up. People have brought up the girlfriend thing many times: oh, when are you going to marry her? I'm like, we've been married for eight years now; that was a long time ago. If something as small as that becomes an issue, what happens when an AI version of you says, oh, I'm lactose intolerant, or even worse, I'm left-handed? Now you have to sign documents backwards. It's not even the big lies I'm worried about.
It's the little lie that you then have to maintain. Now you have to keep up with this promise, or the AI goes, yeah, I'm a vegan too, and every time you see this person you have to pretend to be vegan.

Yeah, man, that sounds like a nightmare. Keeping up with AI lies, that's a tough one. I think feeding the right data to the AIs, whether through a RAG application or not, and at this point I think it's mostly going to be RAG, means you have to be responsible for giving the AI, or the RAG application, the most up-to-date data yourself. That's a whole other rabbit hole: making sure your data is clean before you even put it into an AI.

Yeah, that brings up something really interesting to me. I'm working on a project now for a client who uses his conversations with other people, his transcripts, to train the AI. But the problem is, if you just feed in the transcript, it has both sides of the conversation. And if I strip out the other person completely, it becomes a bunch of non sequiturs, because you don't know what question was asked. So you have to go through this process, which is exactly what I'm building right now, of transforming each of the other person's turns into a really small question, so it won't get misread as data when it goes into the database for training the AI to talk like him. The critical part is that any small thing can accidentally infect your data. There's a saying that it takes only a teaspoon of urine to ruin a cask of whiskey, if someone sees you put it in there. One little thing can infect it and cause huge problems. So you're exactly right: it's about creating that policy, and also creating an update policy. How often are you going to update the information, and how much information do you want it to have?
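The transcript-preparation step described here, keeping the client's turns verbatim while collapsing the interviewer's turns into small question stubs, might look like the sketch below. The function name is made up, and in practice the collapsing step would be an LLM summarization call; plain truncation here is only a stand-in:

```python
def prepare_training_turns(turns, target_speaker, max_stub_len=80):
    """Keep the target speaker's turns verbatim; collapse everyone else's
    into short question-like stubs so the model never trains on the other
    side of the conversation as if it were the target's own voice.
    (Real collapsing would be an LLM summarization call; truncation is
    just a placeholder for it.)
    """
    cleaned = []
    for speaker, text in turns:
        if speaker == target_speaker:
            cleaned.append(("answer", text))
        else:
            stub = text.strip()[:max_stub_len].rstrip(" .!,")
            if not stub.endswith("?"):
                stub += "?"
            cleaned.append(("question", stub))
    return cleaned

pairs = prepare_training_turns(
    [
        ("host", "So walk me through how you keep the AI from drifting off script."),
        ("guest", "We review every response before it goes out."),
    ],
    target_speaker="guest",
)
```

Keeping question stubs instead of deleting the other side preserves the context each answer needs, without letting the interviewer's voice leak into the training data.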
I've seen people give too much information or too little. Sometimes people don't even want to tell the AI their name, and sometimes people give it their home address. You have to find that balance, because anything that's in the AI, eventually it will say to someone. So I think you're really onto something: there is a new skill here. We all thought the big AI job would be prompt engineer, and I don't think that's really the future. I think the future is in automation: bot training and automation building, which to me are very similar processes. And really thinking things through matters, because the process design is so important.

I'll give you another example. A friend of mine was working on a project where they were taking phone calls for a large company and using them to train the AI, and then to train other people. But the phone calls contain sensitive data: people's credit card numbers or Social Security numbers. I said: listen, you want that removed before the data gets to you. Not that he would ever do anything wrong, but if something goes wrong, who are they going to blame? Their own team, the contractor working on the AI project, or the AI? Removing it first gives everyone an out if something goes wrong, and thinking of that is more important than the rest of the process. The platform doesn't matter; it's your design of the process. I'm realizing more and more, as I build automations, that drawing out the workflow before you do anything is ten times more important than the technical part. And it's really interesting to see that, at least for the next year or two.
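The "remove it before the data gets to you" advice can be made concrete with a redaction pass over transcripts before they enter any pipeline. The regexes below are illustrative only; production redaction should use a vetted PII-detection library rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only -- real PII detection is much harder than this.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12}\d{1,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace obvious PII with placeholders before a transcript ever
    reaches the AI pipeline, or the contractor working on it."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text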
I think that's actually going to be the cutting edge of AI: the person who thinks of all the things that could go wrong and then prevents them, because otherwise it will happen to someone. Two years ago I said, some lawyer is going to get caught because ChatGPT lies and they believe it, and it happened to someone. You also know some company's chatbot is going to say something wrong. You just don't want to be the first one it happens to, the one who gets in the news. So, with how you're building out sales tools, and you're further in this direction, what are some of the things you have to stop your clients from doing? What kind of data getting put in can infect things or cause a problem down the line? It sounds small now, oops, I uploaded the wrong thing, but if you don't find it and pull it out of the database, eventually it becomes a problem.

Yeah. So currently, the way things are running, it's all run through us, so luckily we're not expecting any clients to do anything wrong. The big thing, as I touched on earlier, is data quality. For example, I was working on a use case where I was scraping individuals in a specific region, and it pulled someone who isn't anywhere near that region. I check out the profile and I don't see New York anywhere, for example. Now imagine I use that in my messaging, because I'm based out of New York and sometimes I like to meet in person, right? If we have the opportunity, why not? So imagine sending that message to someone based out of Bali: oh hey, I'm in New York, let's get a coffee. And they're like, wait, nowhere in my profile does it say I'm in New York. What are you talking about? Did you mean to send this to me? So yeah, a lot of data cleaning upfront is crucial. You want to make sure the data is clean.
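The New-York-versus-Bali mistake suggests a simple pre-send rule: never emit a region-specific line unless the region, or a known alias, actually appears in the scraped profile text. A minimal sketch under that assumption; the function name and alias lists are illustrative:

```python
def location_supported(profile_text: str, claimed_region: str, aliases=()) -> bool:
    """Gate a region-specific opener ("I'm in New York, let's grab coffee")
    on the region, or a known alias of it, actually appearing in the
    scraped profile text. Substring matching is crude but fails safe:
    when in doubt, the line is dropped and a generic opener is used."""
    haystack = profile_text.lower()
    return any(term.lower() in haystack for term in (claimed_region, *aliases))
```

A check like this never produces a false "I'm nearby" claim on its own; at worst it withholds the line from someone who really is local, which is the cheaper error.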
And that's where, like I said, we need to keep the human in the loop. We need to make sure we're babysitting, and that starts with data cleaning.

It's really funny that you mention that, because I've never gotten one of those messages that got my location correct. I just got one two hours ago: hey, you're in Boston, we can help you learn project management. I always get picked for the wrong region. I usually get a lot of messages about starting a franchise in Idaho, and I don't know why. What in my profile triggers "franchise" or "Idaho"? I've been to Idaho once, for three days, about ten years ago. I don't know if they're really finding that data or it's just random, but whenever it's a LinkedIn sponsored message or an InMail, I know it's going to be wrong. It always feels misaddressed. And you're exactly right: even when I do data pulling in Sales Navigator, it gives me tons of false positives, and I have to manually go through and remove all the people who aren't the right fit, the ones it pulled for the wrong reason. It will pull the wrong word from their profile, or it will pull a post they reposted rather than wrote, and that's very different.

So I think what you're talking about is really why managed services are still a critical component. The idea that we can completely replace salespeople with AI salespeople: maybe for a few years it will happen, and then people will hate them the same way people hate phone trees. You hate when it goes push seven, push eight, push six; just let me talk to someone. Whenever we see those futuristic science fiction movies with a hologram selling stuff, the people always hate the hologram. They never go: oh boy, a hologram salesperson, I'm excited. Even in fiction, they hate it. So I think you're really onto something. I think there's something very cool about humans still being in the loop.
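One way to make "human in the loop" concrete is a hold queue: nothing AI-drafted goes out until a person reviews it. This is a minimal sketch of the idea, not any particular product's API; the class and the sample messages are invented for illustration, and the reviewer here happens to reject drafts with unfilled merge variables:

```python
class OutboundQueue:
    """Hold AI-drafted messages until a human approves each one."""

    def __init__(self):
        self.pending = []
        self.sent = []

    def draft(self, to, body):
        self.pending.append({"to": to, "body": body})

    def review(self, approve):
        """`approve` is the human decision: True sends, False holds."""
        still_pending = []
        for msg in self.pending:
            if approve(msg):
                self.sent.append(msg)  # a real system would call the send API here
            else:
                still_pending.append(msg)
        self.pending = still_pending

queue = OutboundQueue()
queue.draft("a@example.com", "Hi {first_name}, loved your post!")   # unfilled variable
queue.draft("b@example.com", "Hi Dana, loved your post on RAG!")
queue.review(lambda msg: "{" not in msg["body"])  # reviewer rejects broken drafts
```

The design point is that the send step is unreachable except through `review`: the stop moment is enforced by structure, not by remembering to check.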
And another thing you brought up that I think is really important: a new automation tool came across my desk today whose USP is human in the loop, and I've actually been talking about this for a few weeks. There are some processes where I want a person to double-check. I've seen software that will write and send out emails for you, and I go: no, you have to read it before it's sent, especially if you're sending to your entire audience. Having a stop moment where you check before you go, okay, keep going, is so critical. So I think human in the loop is going to become a really big job in the future as well. I think what you're talking about is very cool; I think you're really onto something, and this has been a really awesome episode, because these things are really interesting and getting really important. Can you tell us a little about what you're working on, what you do at Hypergrowth, and where people can find you if they want to work with you?

Yeah, definitely, I appreciate it. So currently at Hypergrowth we're essentially, and this is a new term I haven't used, but I'm going to take it from you, a managed service, really, where we're helping companies generate leads, to put it simply. But internally it's a whole automation process: some AI, some not. We try to be careful about when and where to use it. If it's not necessary, there's no point; if you can build the automations without the AI portion, that still helps us do a lot more and get a lot more done. Essentially, it's a very, how would you say, detailed approach to lead generation. It's not about the quantity but the quality of everything: not just the leads we're reaching out to, but also the messaging.
We want to make sure the messaging is relevant. And relevant versus personalized: people use them interchangeably, but personalized can mean, oh hey, nice wiener dog, are you interested in cloud services? It's like, okay, what does my dog have to do with cloud? That's personalized, but it's not really relevant. So we want to make sure everything's relevant, from the messaging to the people we're targeting, in hopes of getting the highest response rates and conversion rates. You can check us out at hypergrowthgtm.com. We have a suite of solutions, but I'm happy to continue the conversation and provide any guidance or advice, even for anyone just getting into building AIs, or into sales automation before they even touch the AI part. I'm happy to have that conversation.

I'm really glad you brought up personalization versus relevance, because with personalization, I get it all the time where you can see they've just put in a variable, and sometimes they forget to fill it in and you just see the brackets that say "insert first name here" or "insert podcast episode." That's the worst, because then everyone knows, and it really throws everything off. I also think that relevance is really the future, because it's better to have 100 phone calls with 100 people who might become customers than a million people who are random. The chase for virality is mostly a chase for people who are not interested. It's such a small percentage, and you have to be so broad in order to appeal to everyone that you no longer appeal to your ideal customer. So I love that you brought that up. I think this is really cool, I think people are going to find it useful, and I think this was a really powerful episode of the Artificial Intelligence Podcast that people are really going to dig. Thank you so much for being here. This was awesome.

Thanks for listening to today's episode. Starting with AI can be scary.
ChatGPT Profits is not only a bestseller but also the missing instruction manual to make mastering ChatGPT a breeze. Bypass the hard stuff and get straight to success with ChatGPT Profits. As always, I would love for you to support the show by paying full price on Amazon, but you can get it absolutely free for a limited time at artificialintelligencepod.com/gift. Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and tactics on how to leverage AI to escape the rat race. Head over to artificialintelligencepod.com now to see past episodes, leave a review, and check out all of our socials.