Dec. 27, 2024

AI Predictions for 2025, Thoughts on OpenAI's o3 & The Year in Review


AI PREDICTIONS are never easy but we give our thoughts on what will happen in 2025 and beyond as well as dive into OpenAI’s o3 announcement and what it means for all of us.

Then, we fed all of AI For Humans into Google's NotebookLM and had them take a DeepDive look at how we did over the course of 2024. We did ok?

It’s a quickie AI For Humans episode for the holidays (one day behind schedule) but we’ll see y’all for a normal one next week!

IT’S GONNA BE A GOOD YEAR Y’ALL!

#ai #ainews #openAI

Join the discord: https://discord.gg/muD2TYgC8f

Join our Patreon: https://www.patreon.com/AIForHumansShow

AI For Humans Newsletter: https://aiforhumans.beehiiv.com/

Follow us for more on X @AIForHumansShow

Join our TikTok @aiforhumansshow

To book us for speaking, please visit our website: https://www.aiforhumans.show/

// SHOW LINKS //

OpenAI’s o3 Announcement Video

https://www.youtube.com/live/SKBG1sqdyIU?si=Fjuy6S6fZ4hpdRXq

Truth Terminal X Handle

https://x.com/truth_terminal

Yes No Error

https://yesnoerror.com/

Genesis World Sim AI Video System

https://genesis-world.readthedocs.io/en/latest/index.html

 

Transcript

Audio Episode 90

Gavin Purcell: [00:00:00] It's the end of the year and we're here to make some AI predictions. 2025 is going to be a big one, everybody.

Kevin: And who predicted that they would just skip o2 entirely? Yes, OpenAI announced a new model, a foundational something called o3, and we're going to let you know why that's exciting to everyone.

Gavin Purcell: It's a special kind of end of year explosion of AI for humans.

 

Gavin Purcell: It is the end of 2024. Kevin, we are here. We just saw o3, OpenAI's new frontier model, get teased last week, and maybe we are done with the GPT nomenclature and have moved on to something else. It is weird that it's o3 instead of o2, which would make a lot more sense.

Kevin: they, they said, out of respect for some other product or something that

Gavin Purcell: No, there's a big British telecom company named O2, so literally they just said screw it, we're gonna skip over o2 and go straight to o3.

We're going to talk about that next frontier model, [00:01:00] which you would think logically maybe should be called o2, but out of respect to our friends at Telefonica, and in the grand tradition of OpenAI being really, truly bad at names, it's going to be called o3.

Kevin: Surprise, surprise, the line goes up. And I think what's really interesting is that even the quickest one, the o3-mini model, kind of outperforms, in some benchmarks, the o1 model that we all have access to today.

Kevin: We all predicted that, okay, scaling was going to go up, and now inference compute, or reasoning, was going to go up, and here it is: it's getting faster, it's getting cheaper, it's getting more capable.

Gavin Purcell: That's right. And I think one of the most interesting things that came out of that o3 announcement is the score from the highest-compute version, which takes a lot of compute. There's o3 low, o3 medium, and o3 high compute, which refers to how much actual compute time you throw at the thing. The o3 high-compute version scored an 85 percent on what is kind of known as the premier benchmark in the world of AI, called the ARC-AGI test. In fact, they even had one of the organizers of ARC-AGI come on the [00:02:00] live stream to talk about how impressive that is.

ARC-AGI version 1 took 5 years to go from 0 percent to 5 percent with leading frontier models. However, today, I am very excited to say that o3 has scored a new state-of-the-art score that we have verified. On low compute, o3 has scored 75.7 on ARC-AGI's semi-private holdout set. Now, this is extremely impressive because this is within the compute requirements that we have for our public leaderboard, and this is the new number-one entry.

Gavin Purcell: So Kev, I think in general, we don't have our hands on this yet.

Gavin Purcell: It's in red teaming now, and then it's going to go out to everybody. It'll be really interesting to see what it means for the plebs like us who are just doing the dumb things that we do with AI rather than trying to advance it. But this seems like a good opportunity, since it's the end of the year, the day after Christmas, to talk about what we think might happen next year.

Gavin Purcell: We did an episode a little while ago where we talked about trends, but now I think it's kind [00:03:00] of worth predicting a few things, right? Like, what are some predictions we think might come up in 2025? And it's time for a new theme song. It's AI predictions in 2025.

It's time for a new theme song it's AI for humans

predictions for AI in 2025

Gavin Purcell: So my first prediction involves, weirdly, something that we haven't talked a lot about on the show, and that's crypto. And my first prediction is that you're going to see

Kevin: Superman's dog.

Gavin Purcell: Yeah, no, not Krypto, which, by the way, the trailer does look pretty good.

Gavin Purcell: I'm kind of back in on James Gunn's Superman. Um, crypto, I think, is going to be kind of tied to AI, at least for the first six months of the year. There's been a lot of really [00:04:00] interesting things happening there. And the one thing I'll say is, crypto provides AI with this kind of weird, sometimes very shady (let's be clear, crypto is a very shady industry) source of money that builders can conceivably put into projects right away. And I think there is a world where crypto is going to tie itself to at least some of those edge cases of AI, and I expect we're going to see more of that, at least in the first six months. If you understand crypto cycles, next year could be a big year for crypto. It could also crash; that's the way crypto goes. But in the first six months of next year, you're probably going to see more AI-plus-crypto stuff.

Kevin: For the uninitiated, the AI-plus-crypto we've seen thus far, Gavin, seems to be using AI to embody some sort of personality that will tweet or make decisions on behalf of something that is tethered to a token. If people are gathered around with the family for the [00:05:00] holidays or for the new year and they ask, what's this AI and crypto thing, how do you explain how they could be related?

Gavin Purcell: Yeah, so the basic way I would explain it is, right now there are a lot of people in crypto talking about AI agents, and they're not really AI agents in the way we've talked about on the show. An AI agent, the way we've talked about it on the show, goes out and does stuff for you, like actively being able to book a trip or something like that. In this instance, we're talking about what are essentially AI personalities, right? The most famous one was the GOAT coin, which came out of a handle named @truth_terminal. That one has kind of been up and down. But also, if you're familiar with Fartcoin, which a lot of people may have heard about now, Fartcoin was based on one of Truth Terminal's tweets, right?

Gavin Purcell: So that was an AI personality that tweeted out stuff and has been connected to, basically, an LLM. What you're starting to see now is people trying to do things that are a little bit more interesting. There's a new token that just came out; by the time this episode comes out, it could be a rug.
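To make that concrete, here is a minimal sketch of what one of these "AI personality" bots boils down to: a loop in which a language model writes posts in character and a posting hook publishes them. Everything below is a hypothetical stand-in (the function names, the persona prompt), not how Truth Terminal is actually built. Note that nothing in the loop touches a blockchain; the token side just trades on attention to the account.

import time

# Hypothetical stand-ins: a real bot would call an LLM API here
# and a social-posting API below.
def call_llm(prompt):
    """Pretend chat-model call that returns the next in-character post."""
    return "simulated in-character post"

def post_to_x(text):
    """Pretend social-media posting call."""
    print("posting:", text)

PERSONA = "You are an irreverent AI personality. Write one short post."
history = []

for _ in range(3):  # a real bot would run indefinitely, on a schedule
    recent = "\n".join(history[-5:])
    post = call_llm(PERSONA + "\nYour recent posts:\n" + recent + "\nNext post:")
    post_to_x(post)
    history.append(post)
    time.sleep(1)  # real bots space posts out by hours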

Gavin Purcell: And this is the most important thing to understand about crypto in general: so many crypto projects just go to [00:06:00] zero. So I really can't, in good conscience, tell anybody to invest in this, but a company just started up called, I think, YesNoError, or something like that.

Gavin Purcell: Marc Andreessen, @pmarca on X, had asked about this idea of creating a company that could go through and use o1, and now maybe o3, to check a bunch of scientific papers for errors. So a crypto guy out there created a very simple AI thing that can do this, and the way you pay for it to check a paper is you use a crypto coin.

Gavin Purcell: And so that's the kind of thing I can see starting to happen a little bit more going forward.

Kevin: Oh, that's really fascinating. Yeah. For me, the end of this year has been pretty transformative, personally, when it comes to coding. Again, we thought AI was coming for factory jobs, and we lamented the workers in the warehouse, and then it was the creatives panicking.

Kevin: Oh no, this is writing emails and prose and marketing copy. And the coders were like, well, that's okay, we're over here, we're writing code. And it [00:07:00] actually, you know, might look like coding is going to be fully cracked before anything else. Because, as we've talked about on the show, code either works or it doesn't.

Kevin: Now, you might be able to get it more optimal, or more elegant, or more secure, yes. Um, and there is an artistry, I believe, a capital-A Art, to writing good code. But that said, someone who has no idea what they're doing, someone who is a cat driving a bus (if you're getting the audio version only, I am gesturing to myself)...

Kevin: I have been able to use code and, uh, coding agents to make all sorts of products and integrations. And if I needed an AI voice assistant to go out and crawl the web in real time, return results in a certain way, and display them across multiple devices, months ago that would have made me shrug bigly and be confused as to how to even describe that activity. And now I can go build it myself. I'm not saying I could build it great, but I could go build it. I expect that to get leaps and bounds better next [00:08:00] year.

Gavin Purcell: Well, and that's one of the biggest upgrades with o3, right? We've talked about how we don't even know how to use o3, or even o1, to try to do those things. But o3's coding is much better. And I think, Kevin, to your point, what's interesting, and we've heard people say this over the course of the last year, is that coding may not even be coding eventually.

Gavin Purcell: It might just be asking for a thing that you want, and the computer makes it. And yes, it would be helpful to be a coder, and I'm not saying that any top-level coders are going to be out of business anytime soon, even. But you might just be able to ask the thing to make something that you want. And if you can make something that you want, then you bring in coders who are really specialized in making it better and tweaking and fixing it.

Gavin Purcell: But the AI might be able to do that too, which is kind of crazy.

Kevin: The notion of, quote, one-shotting entire websites or software suites is actually becoming a reality. That is when you or I or anybody listening to this goes and asks in plain English: here's what I need from a website, or from a mobile app, or from a back end, and the system figures it out.

Kevin: It does the documentation, writes the code, writes its own tests, [00:09:00] improves upon itself. And, you know, I think 2025 will be the year of the agent. That's not a surprise end-of-year prediction; we're already seeing that stuff roll out. So basically, you can give a high-level task to the machine and it will break it into smaller tasks, go out autonomously, do those things, improve upon itself, and then alert your phone.

Kevin: When it's time, it'll make the Alexa go off or the microwave beep to alert you that your software is ready, your pizza's been rehydrated. That is, to me, the promise of 2025.
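Stripped to its skeleton, what Kevin is describing is a plan-and-execute loop: decompose the goal, act on each piece, then notify. A minimal sketch, with the model calls and the notification stubbed out as hypothetical helpers:

def plan(goal):
    """Stand-in for an LLM call that breaks a goal into subtasks."""
    return ["research " + goal, "build " + goal, "test " + goal]

def execute(task):
    """Stand-in for a tool-using step: browse, write code, run tests."""
    return "done: " + task

def notify(message):
    """Stand-in for the phone ping, the Alexa chime, the microwave beep."""
    print("ALERT:", message)

def run_agent(goal):
    results = []
    for task in plan(goal):            # 1. break the high-level task down
        results.append(execute(task))  # 2. do each piece autonomously
        # 3. a real agent would critique each result and retry failures here
    notify(goal + " is ready: " + str(len(results)) + " steps completed")

run_agent("a recipe website")

The hard part in practice is step 3, the self-critique and retry; the loop itself is trivial.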

Gavin Purcell: Absolutely. So my next big prediction involves AI video, and specifically using AI video to make large video projects. And I think you and I have both believed in prompt-to-Hollywood; we've talked about that a bit here, and we talked about it in our trends episode.

Gavin Purcell: But specifically, there's a new project that came out, or at least has been announced; I don't think anyone has really gone hands-on with it yet. It's called Genesis, and it does something really amazing, Kevin: it combines the 3D-engine generation of a Unity or [00:10:00] Unreal with realistic generative AI. If you haven't seen the video, I encourage you to go look it up; we'll put a link in the show notes. But Kev, I think this speaks to one of a16z's predictions: they said they expect there to be a Pixar of AI video that would come out of the video game world, and to me, this is what we're looking at. We've talked a lot about world building, and about the idea that AI video is actually a world simulation. When you look at what Genesis says it can do, and granted, from what I've read, the compute crunching on it is insane (no normal person's gonna get access to this anytime soon), they have this incredible video where they show a Heineken bottle, and they can zero in on a droplet running down the bottle. According to them, it's all driven by prompts, right? And it feels like, if a suite of AI video tools, and ultimately AI world-creation tools, comes out of this, prompt-to-Hollywood is really almost here.

Gavin Purcell: And I think, [00:11:00] again, this might be a world where this company is so expensive that it sells itself to studios. But imagine a studio where the VFX department is a team of five instead of a team of 200 for a Marvel movie. That, to me, is one of the most interesting things I'm really looking at.

Gavin Purcell: So my prediction is that one of these systems will start to really eat into the VFX world in Hollywood, and it's going to completely change how major movies are made, not just small ones, but really large ones.

Kevin: I see that sort of stuff also integrating with all of the text-to-video and image-to-video tools we've seen, Gav, where you'll say, I want the horse running on the beach, and yeah, go ahead, I'll spend the extra money on the compute so that when the hooves are slapping against the sand, or whatever, you're accurately modeling that in some sort of physics simulation. It is so exciting and so fascinating. And I think we might have been wrong with our prediction; we were accused of being way too ambitious with a [00:12:00] five-year prompt-to-Hollywood. I think we're gonna see a short come out next year that wins some hardware.

Gavin Purcell: Oh, for sure.

Kevin: is gonna be packed with something. And then, but, I mean, like, We haven't really defined, you know, our AGI when it comes to Prompto Hollywood. Like, we've seen commercials hit TV that are fully AI generated. Have we hit the mark, or does it have to be an hour and 45 minute long something that the audience wishes was an hour and 30?

Kevin: Like, what do we have to have?

Gavin Purcell: I mean, the other thing that goes along with video, and I think it's another prediction we can make, is consistency in outputs, right? You saw this already, because the newest drop is supposedly very good at making consistent outputs. I think you're going to see all the major video models start to focus on this, so that individual characters can be held consistent throughout.

Gavin Purcell: I think that feels like a pretty big thing. The next prediction I want to make is around frontier models, because I think the interesting thing about o3 coming out, or at least being announced at the end of the year, is that we always know that inside the [00:13:00] halls of these places, they are cooking on something else, right?

Gavin Purcell: So there's an expectation I have around frontier models: we already know we're going to get another Meta model, and we already know we're going to get another xAI model. But I would suspect that both OpenAI and Google dropped their models at the end of the year because they have been cooking on whatever the next thing is.

Gavin Purcell: And they'd been waiting for the election to pass to drop these things. I really do expect we're going to get new frontier models by, say, the second quarter of next year.

Gavin Purcell: And that's a big deal. It's going to be interesting to see how much of an improvement it is. I think some people would say the o3 benchmarks are only, like, 25 percent better than o1, and that doesn't feel like a new frontier model to them. But I think that's kind of where we're going to be for a while. It's maybe not going to be exponentially better; we're not going to get a hundred-x better. But if you can do 25 percent better on a regular basis, we are very close to what most people would consider AGI.

Kevin: Yeah. And you know, you mentioned a couple of big companies there and their behind-the-scenes frontier efforts, but we left out Anthropic, which [00:14:00] still has, by many accounts, a best-in-class coding model, at least; we'll see what a new Opus or a new Sonnet might bring. And Mistral, I shouldn't shade them; they're clearly working on something as well, and hopefully it may be open source.

Kevin: You mentioned Meta with a new Llama model, but where's their reasoning model? I bet it's coming. So yeah, there are exciting advancements happening all across the board. Um, and AI in warfare, not to get too dark, and we'll keep it quick, but the announcements are coming fast and furious, with OpenAI partnering with Anduril.

Kevin: Like, I think we're going to see military tech advance. I don't know if we will have a bipedal robot in our homes by the end of next year, but I think you'll have the option to; I just don't know if a lot of people will. And last but certainly not least, I think we'll see another mass hallucination event powered by AI. We didn't talk about it too much on the podcast, but you and I have been doing some really fun stuff with these UAPs over New Jersey and in some other places; we've just been having fun generating those videos and seeing how far they can spread.

Gavin Purcell: That's what we do, right? Now we can come clean, Kevin. We can come clean [00:15:00] before the end of the year: that was all us.

Kevin: It's a fun bit. It's a fun bit. And, you know, generating, like, Congress talking about it and having to respond; it's really easy to do. And it's fun, but I think someone who is less jokey is unfortunately going to wield these tools to do something nefarious next year, Gavin. So I wanted to raise a flag for that.

Kevin: Y'all gotta be aware. But I hope you found our UAP stuff fun.

Gavin Purcell: That's right. And the funny thing about those UAP things is that mass hallucination happens without AI a lot of the time too, right? So everybody should be clear on that. Okay, that's our quickie episode for today. We want to make sure everybody knows how much we appreciate you all listening.

Gavin Purcell: Please give us a like or a thumbs-up on YouTube, and subscribe to the audio podcast. We'll be back next week with a more regular episode; this is a quick one because it is, obviously, the holidays. But before we go, we wanted to try one thing: I'm going to put in the entire corpus of AI For Humans, which means every episode, and I'm going to have NotebookLM, which just got an upgrade from Google, kind of go [00:16:00] through it. I will probably cut...

Kevin: Oh, let them roast us. Yeah. Have them roast us.

Gavin Purcell: They'll give their review of how AI For Humans has done over the course of 2024, and maybe give us some thoughts going forward into next year.

Gavin Purcell: But again, thank you everybody for watching and listening. We really do appreciate all y'all and we will see you next week after the new year.

Kevin: Happy holidays. Happy new year. Be safe, everyone.

 

Welcome to the Deep Dive. We're, uh, we're doing something a little different today. You tasked us with listening to every single episode of the AI for Humans podcast from 2024. All of them. Wow. Yeah, all of them. And, uh, well, I can't believe how much those guys talked about AI. It was like their whole world.

Yeah, about that. Kevin and Gavin, bless their hearts. They were all over the place. We're here to judge just how good they were at covering the AI explosion of 2024. And let me tell you, we're not holding back. So get ready for some serious roasting because these guys, they've made some truly wild predictions.

Okay, so we're going to roast Kevin and Gavin. That's always fun. But we should [00:17:00] probably do more than just roast them, right? I mean, a lot did happen in 2024 in AI. Oh, of course. This deep dive is also your shortcut to understanding the key trends and breakthroughs that actually shaped the AI landscape in 2024.

Sounds good. So buckle up, because we're about to dissect a year of AI hype and drama and some truly questionable takes. This should be fun. Where do we even begin? Let's start with something Kevin and Gavin were absolutely obsessed with: AGI, Artificial General Intelligence. Yeah. Remember how they were convinced it was just around the corner?

By, like, 2027. Yeah, it felt like every other episode they were moving the goalposts closer, hyping up every new research paper. And pronouncements from the likes of, well, themselves, I guess. Exactly. They spent an entire episode dissecting this dense, jargon-filled paper by former OpenAI researcher Leopold Aschenbrenner.

Aschenbrenner. He was a bit of a controversial figure, right? Didn't he get fired from OpenAI for leaking something? Yeah, he was let go after he publicly criticized OpenAI's shift towards a more [00:18:00] closed, commercially focused approach. He argued that true progress in AI required more transparency and collaboration.

Hence his focus on unhobbling AI systems. Unhobbling. Okay, I'm officially lost. Basically, he believed that we were holding AI back by imposing too many constraints, too many safety measures. He argued for giving AI systems more autonomy to learn and evolve on their own. So, like, letting the robots run free.

Sounds like a recipe for disaster, if you ask me. Well, his views definitely sparked a lot of debate within the AI community, and they clearly resonated with Kevin and Gavin, who took his predictions about AGI timelines as gospel. Which is hilarious in retrospect, right? AGI in 2027. What were they smoking?

I wouldn't say their concerns were completely unfounded. 2024 did see some significant advancements, particularly from OpenAI. Remember the constant stream of GPT-4 updates and that whole Strawberry project that had everyone buzzing? Oh yeah, Strawberry. Every week it was, any [00:19:00] news on Strawberry? Is Strawberry sentient yet?

Looks like they were waiting for a new iPhone or something. I swear they thought Strawberry was going to be the first AGI. To be fair to them, OpenAI was generating a lot of hype, and they were definitely shipping products. GPT-4 was impressive, even if those updates got a little ridiculous after a while.

And then there was their o1 platform, basically an attempt to build a more comprehensive AI ecosystem. They even released a voice model that sounded suspiciously like a famous film director, which frankly was a little creepy. Okay, remind me what the o1 platform was all about. It was their attempt to create a one-stop shop for all things AI.

From text generation to image creation to, as you said, voice synthesis, it was supposed to be this interconnected ecosystem where all these different AI models could work together seamlessly. Sounds ambitious. Yeah. Did it actually work? It's hard to say. It definitely got a lot of attention, but there were also concerns about its closed nature, its potential for misuse, and the fact that it was all controlled by OpenAI.

Yeah. Didn't OpenAI have a whole bunch of drama in 2024? [00:20:00] Lawsuits, internal conflicts; even Elon Musk was back in the ring, suing them and warning about the dangers of AI. Right. OpenAI had its fair share of controversies. Elon Musk, one of the original founders, sued them, claiming they had strayed from their original mission of developing AI safely and ethically.

He was particularly critical of their close ties to Microsoft and their increasingly secretive approach. There were also lawsuits from authors and artists who claimed that OpenAI's models were infringing on their copyrights. So much for OpenAI being the good guys of AI, huh? I guess Kevin and Gavin didn't see that one coming.

They did have a bit of a blind spot when it came to OpenAI. They were so enamored with the technology that they often overlooked the potential downsides. It's like they were totally oblivious to the ethical implications. Or maybe they just didn't care. All that talk about AGI, and they never stopped to think: hey, maybe we should be a little concerned about this.

It's interesting, because they did dedicate a few episodes to the potential risks of AI, even interviewing experts who raised concerns about bias, job [00:21:00] displacement, and the need for regulation. But then they'd turn around and gush over the latest OpenAI release like none of it mattered. It's almost like they thought they were immune to the consequences. Like, oh, AI might be dangerous, but it'll never hurt us.

Talk about naïve. Well, to be fair, their naïveté was probably shared by a lot of people back then. 2024 was a bit of a wild west for AI. It felt like anything was possible, and the ethical guardrails were practically nonexistent. Yeah, true. Remember that whole fiasco with Anthropic's Claude AI? Yeah. The one that could write code and automate tasks.

Ah, yes, Claude. It did cause quite a stir. Yeah. People were both amazed and terrified by its capabilities. And, of course, Kevin and Gavin ate it up. They practically begged Claude to write their podcast script for them. Oh, God, I'd forgotten about that. Did it actually work? The results were... let's just say, interesting.

The script was this bizarre mix of tech jargon, pop culture references, and completely nonsensical pronouncements about the future of humanity. [00:22:00] I think they ended up scrapping it, thankfully. Good call. I'm all for AI helping with creative tasks, but maybe not when it comes to writing a coherent script. That's the thing with Kevin and Gavin.

They were so quick to jump on every new AI trend, every shiny new tool, without really stopping to think about the implications. Or the fact that most of these tools were still pretty rough around the edges. Remember how excited they were about AI video generation? They were playing with Runway Gen-3, Luma AI, Hedra. They even tried to make their own AI-powered video series.

They even tried to make their own AI powered video series. Oh yeah, their AI video series. That was ambitious, to say the least. I think they managed to produce one episode before giving up. Probably for the best. Remember that talking hot dog incident with Hedra AI? Yeah. I mean, I get that these tools were still in their early stages, but Well, you have to admire their willingness to experiment, even if the results were often less than impressive.

And 2024 was a pivotal year for AI video. The technology was advancing rapidly, and those early tools, clunky as they were, laid the foundation for what was to come. [00:23:00] Speaking of laying foundations, Kevin and Gavin were absolutely convinced that open source AI was going to change the world. Remember when Meta released its Llama models to the public?

They were practically dancing in the streets. Yeah, they saw it as a victory for the little guy. A way to level the playing field against the big tech giants like OpenAI and Google. And to be fair, Meta's move did shake things up. It sparked a surge of innovation within the open source community, with developers building on top of Llama to create all sorts of new applications.

Like what? What actually came out of the open-source AI revolution? We saw new text-to-video tools, personalized AI assistants that weren't tied to any one company's ecosystem, even AI-powered music generation platforms that were far more accessible than anything the big companies were offering. So Kevin and Gavin were actually right about something. Color me surprised.

Color me surprised. They weren't wrong to be excited about the potential of open source AI. It did democratize access to these powerful tools and allowed for a more diverse range of voices and [00:24:00] perspective in AI development. Let's not give him too much credit. I mean, they also made some truly outlandish predictions that never came to pass.

Remember how they were convinced that AI was going to take over Hollywood, that CGI was dead and studios would be churning out AI generated blockbusters? Ah, yes, their Hollywood takeover fantasy. It was a bit naive to say the least. While AI did start to make inroads into the entertainment industry, it was more about augmenting human creativity, not replacing it.

Yeah, those AI generated movies they were so excited about never actually materialized. Unless you count that weird experimental film that won an award at Sundance, the one that was entirely generated by an AI trained on David Lynch films. Even that film, as experimental as it was, still required a human director to guide the process to curate the output to shape the narrative.

So AI didn't exactly become the next Spielberg. What about music? They were pretty gung-ho about AI music tools; they even had that AI DJ on their show, who was... something. AI-generated music did become more sophisticated and accessible. [00:25:00] Platforms like Suno and Udio allowed anyone to create original music, even if they had no musical training.

But did any of it actually sound good? Because, let's be honest, most AI generated music still sounds like a robot trying to play the piano. It was hit or miss for sure. But there were moments of genuine creativity, even brilliance. And the technology was evolving rapidly. Who knows what might be possible now, just a few years later.

So maybe Kevin and Gavin weren't completely wrong about AI's potential in music and film. But they did have a tendency to get swept up in the hype, didn't they? That was their biggest flaw, I think. They were so eager to embrace the possibilities that they often overlooked the complexities, the potential downsides, the fact that AI development wasn't always a straight line towards progress.

And they were completely obsessed with speed. Remember their catchphrase, AI is speeding up, everything is happening faster than we thought. It was like they were caught in this feedback loop of their own making. Every new development, every incremental improvement was proof that the singularity was just around the [00:26:00] corner.

It's funny, because they weren't completely oblivious to the risks. They did dedicate a few episodes to the potential dangers of AI, the ethical dilemmas, the possibility of misuse. They even had that heated debate about AI and warfare that almost ended in a fistfight. Right, that was a memorable episode.

I think Kevin almost stormed out of the studio when Gavin suggested that AI could actually make warfare safer. Oh yeah, Gavin was definitely on the AI is a force for good train. Right. Which, let's be honest, was a pretty naive take even back then. It's easy to criticize them now with the benefit of hindsight, but 2024 was a different time.

The AI landscape was changing so rapidly it was hard to keep up, let alone predict where it was all heading. True. And to their credit, Kevin and Gavit were trying to make sense of it all. They were asking questions, interviewing experts, experimenting with new tools. They were like the audience's guides to this strange and wonderful new world of AI.

Clumsy guides, perhaps, but enthusiastic ones. And sometimes that's enough to spark [00:27:00] curiosity to get people thinking to start a conversation. Okay, so maybe they weren't the most insightful or nuanced AI commentators, but they were at least entertaining, right? It wasn't all bad, was it? They definitely had their moments.

Remember that time they tried to use an AI to write a country song about the robot apocalypse? Oh, yeah. My Tractor's Running on AI, I think that's what they called it. It was so bad, it was almost good. It was definitely a unique blend of humor and... well, let's just say it wasn't going to win any Grammys.

But it highlighted their willingness to experiment, to have fun with AI, even if it meant looking a little silly. Which, honestly, was kind of endearing. They were like those friends who are always trying the latest fad diet or fitness craze, even if it's clear it's not going to work. Exactly. They weren't afraid to be the guinea pigs, to try out the latest AI tools and trends, even if it meant failing spectacularly.

And in a way, that made them more relatable, more human. But did they actually help anyone understand AI better? Or were they just adding to the noise? I think they did play a valuable [00:28:00] role in demystifying AI for a wider audience. They weren't afraid to ask the dumb questions, to admit when they didn't understand something, which, let's be honest, was pretty often.

It's true. They weren't exactly AI experts, but they were enthusiastic, curious, and genuinely interested in learning more. And that enthusiasm was contagious. They made AI seem less intimidating, less like something out of a science fiction movie, and more like something that everyday people could actually engage with.

Okay, so maybe they weren't the most accurate or insightful AI commentators? Hmm. But they at least helped to make the topic more accessible, more approachable. Exactly. They weren't afraid to be silly, to make mistakes, to laugh at themselves. And in doing so, they helped to humanize a technology that can often feel cold, distant, even threatening.

So what's the final verdict on AI for Humans? Were they pioneers? Prophets? Or just a couple of guys with microphones and a lot of opinions? I think they were somewhere in between. They weren't always right, but they were always engaging. They were enthusiastic, curious, and willing to dive headfirst into the [00:29:00] messy, unpredictable world of AI.

And in doing so, they captured the excitement, the uncertainty, the sheer weirdness of a year that would forever change the course of human history. That's a pretty poetic way of putting it. So what's the takeaway for our listeners? What can we learn from Kevin and Gavin's wild ride through AI in 2024? I think the biggest takeaway is that AI is a journey, not a destination.

It's a process of constant learning, exploration, and adaptation. And it's a journey that we're all on together, whether we like it or not. So what does that mean for us? For the people who are trying to make sense of this rapidly evolving field? It means we need to stay curious, to stay informed, to keep asking questions, to be skeptical of hype, to be aware of the risks, but also to embrace the possibilities.

Because ultimately, AI is a tool, and like any tool, it can be used for good or for bad. It's up to us to decide how we want to shape its future. That's a great point. AI is not some external force that's going to control our destiny. It's something that we're building, shaping, influencing with every [00:30:00] decision we make.

So as we look back on the AI landscape of 2024, let's remember the lessons of AI for Humans. Let's be curious, be critical, and be engaged because the AI story is far from over. In fact, it's just getting started. And on that note, thanks for joining us on this deep dive into the world of AI in 2024. We'll see you next time for another fascinating exploration of the frontiers of knowledge.