
Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Navigating the narrow waters of AI can be challenging for new users. Interviews with AI company founders, artificial intelligence authors, and machine learning experts, focusing on the practical use of artificial intelligence in your personal and business life. We dive deep into which AI tools can make your life easier and which AI software isn't worth the free trial. The premier Artificial Intelligence podcast hosted by the bestselling author of ChatGPT Profits, Jonathan Green.
Is Your AI Knowledge Mature Enough With Eryn Peters
Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this insightful episode, we delve into the critical question surrounding AI literacy and job security with our expert guest, Eryn Peters, who specializes in AI maturity and workforce transformation.
Eryn shares her insights on the level of AI knowledge required to thrive in different career stages and industries. She highlights the significance of understanding foundational AI concepts like prompt engineering, data privacy, and ethics, tailoring the depth of knowledge to one's specific industry needs. Moreover, Eryn discusses the evolving landscape of AI terminology and how crucial it is for professionals to adapt to the rapid changes.
Notable Quotes:
- "Basic AI literacy... what does that even mean? It's different for an accountant in retail than for a senior software engineer in the tech industry." - [Eryn Peters]
- "There's a chase to know all the best tools, but the answer will constantly be changing... Let's focus on principles rather than just the tools." - [Jonathan Green]
- "Inertia is a challenge... but we don't need to go rocket speed right away. Small experiments can prove benefits without disrupting the workflow." - [Eryn Peters]
- "Half the traffic on the internet is just bots talking to each other." - [Jonathan Green]
Eryn also emphasizes the importance of overcoming fear and inertia in adopting AI tools, exploring ways to integrate AI into everyday work practices while maintaining data privacy and security. She provides practical advice on change management and the significance of aligning AI adoption with business goals.
Connect with Eryn Peters:
- Newsletter: https://weeklyworkforce.com/
- Company: https://www.ai-maturity-index.com/
Connect with Jonathan Green
- The Bestseller: ChatGPT Profits
- Free Gift: The Master Prompt for ChatGPT
- Free Book on Amazon: Fire Your Boss
- Podcast Website: https://artificialintelligencepod.com/
- Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
- Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
Is your AI knowledge mature enough? Let's find out with today's special guest, Eryn Peters. Eryn, I'm really glad to have you here today, because this is a critical question that a lot of people are asking: how much AI do I need to know to keep my job? That's the real question. People don't say it out loud, but it's what we're all thinking, so let's start from there. How much AI does someone need to know, and does it change based on where you're at in your career? Obviously, if you're in your twenties you need it more than if you're sixty, but how much is enough to get through this first phase of the AI question?

Well, 47% of people are afraid that the advancements in AI are gonna cause them to lose their job within the next five years, so it's very topical. But basic AI literacy is such a hot topic these days, and what does that even mean? The basic amount of AI knowledge that an accountant needs in the retail industry is very different from what a senior software engineer needs in the technology industry. But I think the baseline is: do you actually understand what these basic terms are and how they work, like an assistant versus an agent? Do you know basic prompt engineering? Do you know data privacy and ethics? Knowing a little bit about each of these things is going to be the most important aspect, although ethics, of course, is something that might be a little subjective to different people. Each person, based on their job role and industry, is gonna have to figure out what this means for them and then pick up those tools. One thing I'll mention as well, an interesting development in the EU AI Act, is that basic AI literacy is actually required of all companies that are going to be implementing or using AI in the region. So it's an even more important topic for us to discuss at this stage.

I think the big challenge, at a foundational level, is that we haven't decided in the AI world what terms mean. If you ask 50 people what an agent is, you're gonna get 50 different answers. The line between bot and agent and automation and agentic is very blurry. Whenever I work with people, it always comes up: they don't know what's possible and not possible, and then they feel bad, and I go, if you actually knew, I would be shocked, because what's possible changes constantly. What I was capable of doing two weeks ago and what I'm capable of doing now are very different. The rate of change, the rate of releases: just in the past week, first it's an update from a new AI out of China, then it's a different AI from China, then Claude puts out a micro update, then ChatGPT puts out an update to answer that. And it's, which one are you using? 'Cause if you're using the wrong one, you're not cool anymore. All the time people ask me, are you using this new one? I go, oh, no, I haven't tested it yet. They go, what? And then two weeks later they're like, oh, you're using that one? Gross. That one's so old. That's so 2008. It's too fast to be trendy with AI. I'm a boring ChatGPT guy because I'm good at it; I use other AIs for different things. But this starting point, a lack of consistent language, how can we overcome this barrier for people that are trying to come in? Because this kind of coded language is how we exclude people from lots of different societies, right?
You show up and you're the cool kid 'cause you know all these new words, and this is happening with AI far too much; even AI people haven't decided what different words mean. How can we solve this problem, which I think is foundational?

Oh yeah, you are absolutely right. The half-life of skills has totally shrunk in the last while, right? It used to be that you could learn a programming framework in university and it would be good for the next decade. That's simply not the case anymore, because as you said, by the time some people are even adopting these terms, the meaning has already changed. So I think the biggest thing is not getting hung up on the specificity of it, but just trying things out; knowing general buckets of terms is even more helpful. If you know a few examples of the types of things that might consider themselves to be agents, because you've used them, that might give you a better definition than trying to stay on top of some master glossary. That never exists in any industry, for us all to speak the same language, because the terms are evolving, as you said. So the biggest thing is just: try things, try to put some labels on them, and then keep talking to your peers, because at the end of the day, ideas about ways you can use AI are gonna come from other people doing similar tasks as you. You constantly have to remember to seek out new alternatives and try new things if you do wanna stay on top of it.

Yeah. There's this chase to know all the best tools, which is something I try to take a step back from. I was working on a project recently and they're like, what's the best tool for writing a blog post? What's the best tool for social media? What's the best tool for writing a video script? I'm like, let's take a step back, because the answer will constantly be changing, and a lot of it is your personality and prompting style. There are people who love Claude and hate ChatGPT and vice versa, or they love a different tool, or Grok, whatever. So I try to say, let's stay in core categories, so that even if the tool changes, the principles stay the same: the principles I use for prompting ChatGPT work with any LLM, and the principles for creating an image work with any image generator. And now I think the third area, which we fall into when you start talking about agents, is that there's been a merging of IFTTT, which is "if this, then that," and Zapier. And now Zapier, if you go to their website, it says we're a no-code platform; they've changed their branding to match what's happening in AI, which is multi-step processes. Because the first phase of AI is, I'm using a chatbot and doing a lot of copying and pasting: either you're copying your question in, or you're copying and pasting the answer into something else. That's really phase one. The next phase is, how do I get it to do the entire task instead of just one step of the task? How do I go from copying and pasting back and forth between two documents and the AI to where it does everything? That's where no-code or automation comes into play. Now, I mostly build in n8n, which is just a different version of Zapier; it's a little more technical and a little less expensive. But rather than focusing on the n8n element, I focus on the "if this, then that" element, which is: when this happens, then this happens, then it makes a decision, then this happens. That core principle is far more important than the specific tool you're using.
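To make that "when this happens, then this happens" principle concrete, here is a minimal, purely illustrative sketch of the trigger-decision-action shape that tools like Zapier and n8n wrap in a visual editor. Every name in it (the email trigger, the routing functions) is a hypothetical stand-in, not any real platform's API:

```python
# A minimal sketch of an "if this, then that" workflow:
# a trigger, a decision, and two possible actions.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def trigger_new_email() -> Email:
    """Stand-in trigger: in a real workflow this would poll an inbox."""
    return Email("client@example.com", "Invoice question", "Hi, quick question...")

def looks_like_invoice(email: Email) -> bool:
    """Decision step: route based on the content."""
    return "invoice" in email.subject.lower()

def send_to_accounting(email: Email) -> None:
    print(f"Forwarded to accounting: {email.subject}")

def draft_reply_with_ai(email: Email) -> None:
    print(f"Drafted an AI reply for: {email.sender}")

# The workflow itself: when this happens -> make a decision -> then this happens.
email = trigger_new_email()
if looks_like_invoice(email):
    send_to_accounting(email)
else:
    draft_reply_with_ai(email)
```

The point of the sketch is that the trigger-decision-action shape survives any tool swap; only the boxes you fill in change.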
It's important to look at solutions and not tools, right? Because if you're looking to get a blog post, and you want a high-quality blog post, the variance between these tools, as you mentioned, may be small or might be totally subjective. So if we start thinking about, hey, what is the cost of the gray work of me testing all of these tools versus the time to a certain quality of output, that's another major consideration when we're talking about tool selection. And the other big thing to remember is that AI adoption isn't just the frequency or volume of tools you're using, right? It's about the actual application, and how much is this an aspirin for the problems I'm experiencing every day? It's not about staying on top of all of these sexy new tools. I actually don't care if you use 500 or 10; I care what type of value you're getting out of these tools. Is it giving you better quality of life? Is it saving you money? What are the actual solutions you're trying to get by using these tools? I think that's far more important than simply having this huge roster and comparing with your friends in some weird competition of "I know more tools than you do." Congratulations, I bet the subscriptions on your credit card are absolutely insane.

Yeah. Whenever people make lists of their 50 favorite AI tools, I go, that's impossible. Who has that much time to use them all, right? You're using each one for less than an hour a week if you're using all of them, and it means you're not going deep on any of them. It takes me a lot of work and depth to understand the differences. So there's no way that's possible; you're just choosing whatever has the best affiliate program, I'm guessing, or something like that. And I think that's a lot of where the struggle is in the area of usefulness. So I like to talk about what's the problem we're solving, and then find the right tool for the problem.

And then the next thing is that there's a huge problem with adoption. Even when I build a custom tool for a client and they pay a significant amount of money, they don't realize that I can tell if they use it or not. I can see how much bandwidth is happening, 'cause I usually set something in there for monitoring: if they go, there's a problem, then I can go and check the code. And I can tell that people will pay me a shocking amount of money and never use the tool. It happens at the high and the low level. That's the constant struggle: to go from "I've used it once" to "I'm using it every day" and kind of achieve mastery.
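As a side illustration of the kind of lightweight usage monitoring described here, a few lines are enough to record whether a delivered tool is actually being invoked. This is a hedged sketch, not Jonathan's actual setup; the log file and the wrapped function are hypothetical:

```python
# A minimal usage-tracking sketch: append a timestamped record every
# time the tool is invoked, so adoption (or non-adoption) is visible.
import json
import time
from functools import wraps
from pathlib import Path

USAGE_LOG = Path("usage_log.jsonl")

def track_usage(func):
    """Log one JSON line per invocation of the wrapped tool."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        with USAGE_LOG.open("a") as f:
            f.write(json.dumps({"tool": func.__name__, "ts": time.time()}) + "\n")
        return func(*args, **kwargs)
    return wrapper

@track_usage
def summarize_report(text: str) -> str:
    return text[:100]  # placeholder for the real tool logic

summarize_report("Quarterly numbers...")
# If usage_log.jsonl is still empty weeks after delivery, the client
# paid for a tool they never adopted.
```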
How can we create a path where using these tools becomes normal? We've seen this with a lot of technology over the last 30 years, whether it was email in the nineties, search and the internet in the early two thousands, social media. We've gone through different tools. Now, if you showed up at a job and said, oh, I don't know how to turn on a computer, or I don't know how to use email, people would be shocked, but that used to be normal. Sending an email used to be as significant as sending a registered letter. That's not the same anymore; that's gone. We're so good at using these tools now, but the adoption was very slow. With PowerPoint, there were five years between when it was available and when you had to know how to use it. Same thing with email. But there's not five years to learn AI; you don't have that much time. So when you approach people that wanna learn enough AI to protect their jobs, how do you help them go from knowledge to implementation?

It's the classic tale of the difference between project management and change management, right? Project management is over as soon as I close the last Jira ticket and the product and technical implementation is done. Change management is when it's business as usual. You can be implementing the most perfect technical solution for your team, but if someone fears that it's going to steal their job, they're simply not going to use it. So it's not just a lack of skill that's gonna prevent them from using this type of technology; there's also fear, all kinds of things that might block them above and beyond skill sets, right? So I think we have to go, okay, what is the true reason people aren't using this technology? And the way they overcome that is going to be different, because some people simply don't use it because they forget, so you're actually trying to train them to develop a habit. I have a very non-technical solution for this: I have two sticky notes on my monitor. One is "Will money solve this problem?" and the other is "Will AI solve this problem?" It forces me to constantly stay in the habit of going, okay, should I use AI for this? The second is, how are we gonna help them reduce fear? What types of tools and training can we give them? Is it understanding more about how these things work and how the data is being shared? Is it an ethical concern, where they go, "I feel like this is stealing IP," and you can just show someone where that is coming from? Again, it's all back to these basic AI literacy topics. And then of course there's the skill aspect we're talking about, the actual technical skill to learn how to use these tools. This could be peer learning sessions with other people who are learning the tools: "oh, I experimented with this tool this week, and this application of that tool." And the other could be, how are we making a common language among these tools? For all of our AI tool developers: are we using similar UI/UX principles? Most people know how to use ChatGPT now, so for most chat and assistant environments, do they look and feel and smell similar, so people can pick up basic practices that weren't as common in a traditional SaaS environment? So when it comes to training and adoption, we really have to understand what the barrier is. Is it truly that they don't know how to use the technical aspect of the tools and just need to learn? Or is it more soft-skill related, where we need to teach them the background of how this stuff works to reduce fear, or gain confidence, or any of these other things in that area?

I find that one of the challenges is inertia, which is: I've always done it this way, it's a long-built habit. I'm just as guilty of this. Once I develop a good process, it's very hard to convince me to change it, because you have to look at how much time the new process will save me, but also how long it will take me to master the new process.
And I think that's where a lot gets lost. Especially when you build a tool and show it to someone, they say, that's amazing, but they don't feel it, because they're not doing it themselves. It's that bridge that's very challenging to cross, and it's that inertia. This is where we see people say things like: that's just the way it is, it's the way we've always done it, or if it ain't broke, don't fix it. I see this especially in larger companies, or companies that have been around longer. They have the advantage of inertia, but it also becomes a disadvantage at a time like this, when you need to get the train to turn. How can we rebuild habits and alter that inertia?

Yeah, I think it has to be both a bottom-up and a top-down approach in different ways. Everyone has to see and understand the value and try to understand what you're supposed to accomplish. Are we just using AI because we're a startup, and we got funding, and someone said we'll get more funding if we use the word AI, right, and we need to check a box? Or is it something where I truly have a specific business goal and outcome I wanna achieve by experimenting with new technology? What is the purpose of using AI? Let's get alignment both culturally and skill-wise, and then start to move in small amounts. I think the fear around inertia when you're not moving is that we have to go a hundred miles an hour all at once. But the whole point of inertia is that objects at rest stay at rest, and objects in motion stay in motion, right? You just need a little bit of motion; you don't need to go at rocket speed, making a moonshot right away. How can you do small experiments in ways that aren't disrupting a whole organization or team or workflow, but are starting to prove some of the benefits toward the goal you wanted to accomplish? That's the side of inertia that is: hey, we're not doing anything, I don't wanna change, I'm stuck in my ways, we're not moving. But we also have to pay attention to the opposite side of inertia, the one fewer people talk about, which is objects in motion staying in motion. What happens when everyone is experimenting, everybody is trying new tools, and it's an absolutely lawless mess of data privacy, payments, procurement, and everything else, right? We need this balance of top-down and bottom-up: what is actually needed versus how it maps to business goals. And you need the other side of the matrix, which is, what are the guardrails and controls that we have? Am I pushing people to do more, or am I guarding people to do less? Because if we don't govern both of these aspects at the same time, it could be absolute chaos. You could have shadow AI; there are all different types of things that can happen if you don't have the right level of motion and the right level of control. It's a tricky puzzle to fix.

Yeah, I think you brought up some topics which are really important, which is when people start experimenting outside the guardrails of the company. This is really common, and some companies have banned AI, and a lot of AI consultants go, why have they done that? It's because something happened, right? Whenever there's a new rule, whenever there's a new fence, there's a reason. If there's a sign that says don't feed the animals, someone fed the animals. Something happened, right?
That's why they put up a sign. You think no one would ever do that, and then someone does it. So there's this danger, because people don't realize, and I've had a lot of clients do this, that once you connect an AI to your computer, everything on your computer is out there. There are different layers of risk, right? There's the risk of transmission: as soon as data is going back and forth in the cloud, you have a level of risk just in the transmission. Then there's the question of whether the company you're using is a good steward of data. There's a lot of temptation right now to use the cheapest tools, and some of the tools out of China are very low cost. There's a saying in poker: if you look around the room and you don't know who the sucker is, it's probably you. If you're using a tool that's too good to be true, they're making money somehow, right? They're selling your data. If something is free, like Facebook is free, and yet they make so much money, how do they make all their money? They sell your pictures, they sell the pictures of your kids, and they sell your data again and again. If you're not paying for it, you're paying for it some other way. There's really a critical lesson here: when you start experimenting with tools, be aware of not just what the terms of service are, but what promises they make. Do they even say they'll protect your data? They all say they won't train on your data, and yet how did they get their original data? They're all winking, right? They've all used massive quantities of stolen data; it just is what it is. So the question really becomes: how much do you believe "yes, we stole your data before, but we're not gonna steal your data now"? When you play with a lot of different tools, that's when you have the risk. Now, when you have an enterprise contract, you can have an actual contract with the company that says, we're not gonna train on your data. That's the next level. But we've already seen companies change the terms of service. Adobe did this: they changed the terms of service to say, hey, anything you've ever done with us, we can train on. And it's like, wait, what? You can't. They made a backwards-facing rule, right? And then they changed it back; they go, just kidding. But because it was up for even a few hours, everything before that point they arguably still have coverage to grab, no matter that they changed it back. We see these companies do this thing where they get caught and go, oh, we're so sorry, we'll never do it again. And it's like, really? Is that what happened? Or did you get too much pushback and now you're gonna try it a different way? We constantly see that. So data protection, privacy, and being careful with your data matter. And what I find really interesting is that companies will use the AI of one of their competitors and feed it all of their data, and I'm like, what? Are you sure they're not gonna use it? And the effort to have an isolated AI, a company AI that's on your own server, it's not that hard. It's not that expensive. It's not that far out of reach, and compared to the cost and the risk, it's very low cost. But it's a big challenge 'cause it sounds scary and complicated.
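For a sense of how small that "company AI on your own server" step can be, here is a minimal sketch, assuming a locally hosted open model served by something like Ollama on its default port; the model name and prompt are illustrative of that setup, not a prescription:

```python
# A minimal sketch of querying a self-hosted model: prompts and answers
# stay on hardware you control instead of a third-party cloud.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # local; never leaves your network

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the self-hosted model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Sensitive text stays in-house: no external API, no terms-of-service risk.
    print(ask_local_model("Summarize this internal memo: ..."))
```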
So when people start thinking about where the line is, where do you start as far as protecting data, being cautious with your own data and your client data?

It's something each company has to evaluate: where they're adding this in and where it actually makes sense, right? If you're a healthcare company, you probably have way more concerns about what types of data you're uploading than if I'm a small boutique graphic design agency that's maybe using a little bit of AI to help me pump out designs in my own style. So everyone has to decide what the guardrails are, in which areas of the business, and what type of data you're protecting is a big one. But it's true, and not to sound too tinfoil-hat and morbid about it, I don't think data privacy is gonna exist in my lifetime, because we've seen all these times, just as you said, Meta caught red-handed stealing and selling your data, because if you're not paying for the product, you're the product. And then they said, we won't do it again, and then it happens again. So I think we're gonna start going down the route of these companies making more and more of their own LLMs, training them off base models, and then not sending data out into the world. But I'm also curious to hear your thoughts on this. The way I look at it, that's where these models go to die, right? Because the reason they're great is that they have all of this great information, but if all of us start taking and not giving back to the system, then the system starts to fail. So we get this big, kind of irrelevant and useless pool of information that we can't use as much. So where is the cost-benefit analysis of "I wanna protect my own things, but I still wanna get the benefit of everyone else's stuff coming through"? How do we both feed the machine and participate in the benefit of it?

So I have two personal theories that I think are getting stronger. The first is the dead internet theory: I think most traffic on the internet is bots talking to each other. It just is, and we're seeing it more and more; it's exploding with AI. My second theory is that we're gonna see a period where the amount of data not written by AI keeps decreasing. We've already seen it: they used the data from Facebook, not great. Then everyone bought the data from Google, then they bought the data from Quora, then they bought the data from Reddit, and that's when you started getting recommendations from Google like "you should eat one small rock a day to stay healthy." That was a post from someone whose username was a bad word, obviously a joke, but Google thought it was real. They also said for a while that you should put glue on your pizza to make the cheese stick a little better. Both of those things are toxic; don't do them, both of them will kill you. They give lots of bad advice because they can't detect sarcasm. So what we're seeing is they're buying worse and worse data. Eventually they're gonna be buying 4chan's data, and then I don't know what's worse than that, where it's 90% sarcasm and pranks. They've already run out of the good data, right? The smart data, the scientific papers, that's all in there. So what we're seeing is that most of the content generated on the internet in the last year, a large percentage of it, whether it's the majority or not, was generated by AI. This year it'll be a higher percentage. Eventually it'll be 90%, then 95%, then 99%. So now we're gonna see AI trained on AI content, and I think that's what we're seeing with GPT-4.5, which everyone was hating on recently. I was like, it's probably just trained on data from another AI; it's trained on stuff it wrote itself. When I read all these stories about AI going insane, I think this is what happens when you train only on stuff you've written. When the only books you have to read in your library are books you've written, how long until you start going fully crazy? Not very long, I believe, for an AI.
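That intuition has a classic toy demonstration, sketched below. It is not how LLM training actually works, just a stand-in: each generation fits a simple Gaussian to samples drawn from the previous generation's fit, so every generation learns only from the last generation's output:

```python
# Toy illustration of "training on your own output" (model collapse):
# fit a Gaussian to the previous generation's samples, then replace
# the data with the new fit's samples. Diversity tends to erode.
import random
import statistics

random.seed(42)
data = [random.gauss(0, 1) for _ in range(50)]  # generation 0: "human-written" data

for generation in range(1, 21):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # The next generation sees only the previous generation's output.
    data = [random.gauss(mu, sigma) for _ in range(50)]
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# Typical runs show the std drifting away from 1, often shrinking,
# while the mean wanders: the fit gradually forgets the original
# distribution, echoing the "library of books you wrote" point.
```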
So I think the current methodology of using AI for creating content is a poor use case, even though it's the most popular one, and hopefully it will die out. How many times have you seen a post on LinkedIn or some other platform saying this new AI video is gonna change the world, and it's a video you would never watch if someone hadn't said it was AI? It's just a bad video, but if someone puts the word AI in front of it, now you'll watch it. I've seen a car go down the street before. I've seen a bad animation before. I've seen all sorts of badly drawn cartoons. But now it's AI, so we watch it, and it's appalling. Unfortunately, there's an inverse relationship between usefulness and virality: the more useless something is with AI, the more people will watch it, click it, and like it, even on LinkedIn, where everyone's supposed to be smart. I've seen this with content I've posted myself: the lower the quality of the content, the more people respond to it. It's a revelatory lesson. My wife, who's not from America, one time she goes, oh my gosh, I thought everyone from America was smart. Then she met her first not-smart person and goes, never mind. I said, yeah, welcome; every country has a lot of people that will just click on anything. So I think we have to switch to this new mindset of usefulness and be more strategic. The least useful element of AI is content generation. It's so much better at content analysis, strategy, reverse engineering, planning, organizing, moving data around; that's where a lot of the corporate use cases are. But everyone always goes, I want AI to write my social media posts. And I'm like, yeah, and then an AI can comment on it, and then another AI can comment on that, and now you have no engagement with your audience. The actual result is really bad. So that's my personal theory: there's a lot that's useful with AI, I just think the popular use cases are the least useful.

I think that's the thing: it's almost inbred AI, right? I'm training it on its own thoughts and feelings and training, and all of a sudden you get this big mishmash of useless thoughts and content at the end of the day. And we know that people don't click because they've learned something new; they click based on emotion. If something seems crazy, or seems interesting, or is rage bait, people will interact with that type of content way more than anything else. And what's more infuriating than something that's completely useless? But I think this bot-versus-bot thing is super interesting, because my role is in the future-of-work space.
We're seeing this get really bad when it comes to hiring and the hiring process, because more people than ever are creating AI-generated applications, and more of the hiring process than ever starts with AI screening assessments. Workday had interesting stats that came out in the fall, asking, is the hiring and jobs market recovering? There was a 9% year-over-year increase in job postings and a 9% increase in job signings, so it is marginally increasing. But there was a 30% increase in job applications and a 30% increase in hiring by referral, which means we have bots talking to bots, and applicant tracking systems just going in loops, and people go, oh my gosh, I can't get quality candidates anywhere, while more people than ever are looking for jobs. And then they go, gosh, Jonathan, do you just have a friend that can do this job? Can you refer someone to me? I can't deal with this anymore. So we're going back to such old-school methods of hiring, because it's an absolute downward spiral: technology that's supposed to help solve a problem completely removes the human and useful element from it, and we end up going way old school: oh, did you go to school with anyone? Have you worked with anyone who can do this job? Let's just hire them.

Yeah. I've been going through this recently; we've hired a bunch of people, and what we see is that AI writes the application and AI reviews the applications. It's like ChatGPT is doing both ends. And now people in interviews will absolutely have ChatGPT open, looking up the questions, and you develop tricks to catch them out, so you go, oh, this person's playing a little game. There's so much dishonesty; it's this game of, I can just get a couple of paychecks before they realize I have no idea what I'm doing. And that mindset, of course, is a great way to make a bunch of enemies. More and more I say to all my employees, tell me if you know someone, because I can see their work, especially 'cause I hire a lot of coders, and so much of that market is filled with people who are overselling what they can do, and it just causes problems. But when you find someone who's good, you go, hey, do you know anyone who's good like you?

Exactly right. On average now, if you post a remote job on LinkedIn, you're gonna get 600, 800, 2,000 applications, of whom maybe four have any of the qualifications you're looking for. And there are more and more sophisticated tools that will create a resume that matches the job, so it gets trickier and trickier. I'm seeing that all the time. So we ask trickier and trickier questions and ask for more narrative answers; it's not about what you did, it's how you did it, why you did it, and how it made you feel, because most people don't prep those answers. And then what'll happen is they'll start prepping those too.

In a lot of spaces, it just reminds me of the dating market. On a dating website, if you're a guy, you send 10 messages, no response; the next day you send 20, the next day 30. And if you're a lady getting a lot of messages, sometimes a hundred a day, it's not that your response rate is dropping on purpose; you're getting to a point where you can't respond. I sometimes look at these posts on Instagram, like this lady is gonna scroll and see the 500,001st comment, and that's the one she's gonna respond to?
It's mathematically impossible to respond to that many comments; even just clicking like on 500,000 comments would take you years and years. But it's very interesting the way crowd brings crowd. So I think you're exactly right that we're seeing these markets shift. LinkedIn was really built on the principle of degrees of separation: you don't hire a stranger, you hire a friend of a friend. So I think you're very much onto something. We're gonna see a lot of shifts in the job market as people play these games and trust starts to drop, because that's what happens: you stop trusting systems when there's a lack of ethics in them. I think a lot of what you're doing is very interesting for people that want to figure out how to develop their AI skills, keep their job, or train their entire company. How can they find out more about what you're doing and find you online? What's the best place to find out about the projects you're working on and maybe even work with you?

Absolutely. The first thing is that I write a newsletter about this, and I take on the role of sifting through all of this noise so you don't have to when it comes to the future of work. You can follow my newsletter at weeklyworkforce.com. I also have a company that I created with my co-founder, Evo Chapar, called the AI Maturity Index. It was founded as a research project on the principle that the media is scaring people about how AI's gonna steal their jobs, and we just didn't think that was true, but it's really tricky for people to figure out what they're doing well and what they can do to improve. So we created a 15-minute diagnostic tool: you chat with our conversational AI and get a free, instant, five-page report on how you're doing, what your strengths and weaknesses are, benchmarks against your peers in the same industry or role, and some recommendations on how to improve. We do this for individuals, we've got offerings for teams and organizations, and we also have a tool that can be white-labeled for consultants who can help with the actual implementation of these recommendations. We're trying to be a compass, in the sense that you need to know where you are to know where you're going.

That's very cool. I'll make sure to put the links in the show notes and below the video on YouTube. Thank you so much for being here, Eryn, for another amazing episode of the Artificial Intelligence Podcast.

Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and strategies on how to leverage AI to grow your business and achieve better results. In the meantime, if you're curious about how AI can boost your business revenue, head over to artificialintelligencepod.com/calculator. Use our AI revenue calculator to discover the potential impact AI can have on your bottom line. It's quick, easy, and might just change the way you think about your business. While you're there, catch up on past episodes, leave a review, and check out our socials.