AI for Humans

OpenAI's Growth Is Slowing. Is The AI Bubble Popping?


The Wall Street Journal reported OpenAI missed its end-of-year billion-active-user target. Is the AI bubble actually popping or is the panic overblown? 

This week on AI For Humans, the AI bubble panic hit a fever pitch after a Wall Street Journal report revealed OpenAI missed its weekly user, monthly revenue, and end-of-year billion-active-user targets. CFO Sarah Friar reportedly told peers she's worried OpenAI won't be able to pay for future compute contracts if revenue doesn't accelerate, and the board is now scrutinizing Sam Altman's deals more closely. 

AI stocks crashed, with Oracle, AMD, and CoreWeave all sinking on the news. Anthropic is eating into OpenAI's market share on coding and enterprise. We dig into whether the bubble is actually popping or whether this is a panic that conflates AI capability with the business of AI. Spoiler: we don't think it's over. 

Plus, DeepSeek v4 launched and made surprisingly small waves. OpenAI's deal with Microsoft expanded to other clouds. Meanwhile, AI itself is absolutely not slowing down: Tom Cruise is running faster than ever in a viral video, OpenAI dropped a new Chappie-style voice interaction, Claude got a Blender connector for 3D modeling, NVIDIA released Nemotron 3 Nano Omni with 30B parameters and 256K context, and Talkie is an LLM trained entirely on 1880s text. 

OPENAI MISSED ITS NUMBERS. IS THIS THE BUBBLE? YES. NO. SHRUG EMOJI?

#ai #ainews #openai 

Come to our Discord: https://discord.gg/muD2TYgC8f

Join our Patreon: https://www.patreon.com/AIForHumansShow

AI For Humans Newsletter: https://aiforhumans.beehiiv.com/

Follow us for more on X @AIForHumansShow

Join our TikTok @aiforhumansshow

To book us for speaking, please visit our website: https://www.aiforhumans.show/

 

// Show Links //

WSJ: OpenAI Misses Key Revenue and User Targets

https://www.wsj.com/tech/ai/openai-misses-key-revenue-user-targets-in-high-stakes-sprint-toward-ipo-94a95273

AI Stocks Sink on OpenAI News (Yahoo Finance)

https://finance.yahoo.com/markets/article/oracle-amd-and-coreweave-stocks-sink-after-report-says-openai-missed-sales-user-targets-130600628.html

DeepSeek v4 Launch

https://x.com/deepseek_ai/status/2047516922263285776?s=20

Sam Altman: OpenAI's Microsoft Deal Expands to Other Clouds

https://x.com/sama/status/2048755148361707946?s=20

Kwindla's Smart Take on AI's Importance

https://x.com/kwindla/status/2049161481149935668?s=20

Tom Cruise Runs Faster: Viral AI Video

https://x.com/Le_Chuck_81/status/2049027447304196297?s=20

New Chappie-Style Voice Interaction From OpenAI

https://x.com/OpenAIDevs/status/2048871260512473385?s=20

Blender Connector From Claude

https://x.com/claudeai/status/2049143438281445811?s=20

NVIDIA's Nemotron 3 Nano Omni Announcement

https://x.com/NVIDIAAI/status/2049159441870717428?s=20

Talkie: LLM Trained on 1880s Text

https://x.com/status_effects/status/2048878495539843211?s=20

 

===
Kevin Pereira: [00:00:00] Hugely capable releases from OpenAI and Anthropic have done very little to quell the screams that the big old AI bubble hath popped.
Gavin Purcell: Now, a new Wall Street Journal story is saying that OpenAI's growth is slowing down, and that failing to reach a billion users by the end of last year means that it's all over.
Kevin Pereira: Oh, it's all over again. Yes, AI stocks have crashed, but there's a lot more to this story. We're gonna get into how AI funding collides with its capabilities, and whether or not OpenAI is on the edge of the cliff, about to break, baby.
Gavin Purcell: Spoiler: it's not.
Kevin Pereira: Why would you say that now? Why are they gonna keep watching, Gavin?
Gavin Purcell: Because Kevin will be talking about OpenAI's brand-new voice integration and Claude's ability to interact with Blender.
Kevin Pereira: Oh, also Tom Cruise is running really, really fast now, and one of the fathers of ChatGPT has trained an AI model on old-time text.
Gavin Purcell: And lo, the appointed hour being arrived, we find ourselves come at [00:01:00] last to this particular entertainment of the modern age.
Kevin Pereira: Just, just tell 'em what the show is, Gavin. Just say it.
Gavin Purcell: This is AI for Humans, good chaps.
Welcome, everybody, to AI for Humans, your twice-a-week guide to the world of AI. And Kevin, today we have yet another, uh, iteration of a refrain that keeps coming back: the AI bubble has popped, and "Told you! This time, told you, losers! This is what you get for
Kevin Pereira: stealing from artists."
Gavin Purcell: You've been talking about this forever, Kevin. Your whole thing is "AI bubble's popped." You've got "AI bubble's popped" t-shirts on hold right now that you're ready to sell.
Kevin Pereira: Dude, you should see the Old English tattoo I have across my belly.
Gavin Purcell: "Ye olde AI bubble hath popped." It's just a big thing.
Kevin Pereira: The Wall Street Journal has a story about OpenAI's growth slowing down, which, you know, surprise, surprise, it's hard to attract a bunch of users. But on the back of raising so much money for so many massive initiatives, falling [00:02:00] short of, as you said, a billion users is actually, uh, well, a reason to cry uncle and for an entire industry to collapse. So what are some of these numbers? What are these forecasts? And what does this mean for anybody who's, like, having a romantic relationship with their chatbot?
Gavin Purcell: Well, there are two parts to the story. One is, let's talk about what's in this Wall Street Journal story and kind of why it came out.
Second of all, I think you and I both feel very strongly that there are two very different conversations happening here. One is the financial bubble that exists around AI. The other is the capabilities of these AI tools, which our audience and we know are getting better all the time.
But first, let's start with the Wall Street Journal story. This is a big, kind of exclusive story. It came out this morning, and you know it was big because OpenAI themselves have had a lot of people coming out and saying, "This isn't that big a deal, trust us," blah, blah, blah. The basics here are that the Wall Street Journal is saying that OpenAI's growth has slowed down.
And you know as well as I do, every startup is kind of judged on its growth before it gets to the [00:03:00] public markets. And one of the things OpenAI has been trying to do this year is get ready for an IPO. For those of you in our audience who are not, like, financial types, that means it's going to sell its stock to the world at large.
We also know that OpenAI has raised more private money than almost any company ever; in fact, they raised something like $125 billion before going to the public markets. So yeah, there's a lot of pressure on this particular company to grow and grow and grow. And the only way this funding makes sense, and this kind of represents the overall AI business at large, is if this company continues to grow to the size of billions of users, the same way Facebook did. So the idea that it would be slowing down, and that it would not hit a billion users, is putting a big kind of uh-oh flag on this for lots of people.
There's some important context here. Even in this story, they talk about Anthropic's growth taking away some of OpenAI's momentum, which we know, right? And a lot of other people online have talked about the idea that, look, [00:04:00] OpenAI has pivoted now to this kind of new slimmed-down, no-Sora, more-Codex, more programming, more business-oriented approach. So there's a couple of things going on, but I do think it's important to hear your take first, and then we should try to get into that deeper conversation about what this means for AI at large.
Kevin Pereira: Well, you scream at me every week, "You're either growing or you're showing." And I never understood what that meant, and I think I'm getting it now. It means if OpenAI isn't growing, they're showing weakness. Right?
Gavin Purcell: Is that what this is about?
Kevin Pereira: Is it fair to say they're on the hook?
Gavin Purcell: You're down the path much faster than I expected, but keep going.
Kevin Pereira: They're on the hook for $600 billion in future spending while having only raised $122 billion, right? Yes. And they're gonna burn through that $122 billion, projected, in three years. Now, with Sora gone, maybe they get 3.25 years out of it, I don't know; they were bleeding out with Sora. There are some who would say that OpenAI isn't even really a tech company anymore, that they're the [00:05:00] WeWork of AI: a leveraged infrastructure bet masquerading as an AI company. I don't particularly side with that, because a lot of the people screaming that about OpenAI are the same ones making memes about how Dario and Anthropic don't have enough compute to serve the models they're announcing. So which is it? Did Sam Altman get greedy eyes and reach for the infrastructure ring a little too soon? Or is he a genius who looked out on the horizon and thought, "Oh, we are going to need all of this, it's going to be like electricity, we have to be able to serve it"? Or is it somewhere in between?
Gavin Purcell: I think there's a really interesting moment we're sitting in right now. And by the way, these companies, and this idea of AI being this big, are affected by more than just our use of AI or the private funding of AI. Obviously there are geopolitical issues; there's all sorts of other stuff happening.
The most important thing right now, I think, is whether OpenAI and Anthropic, the two [00:06:00] juggernauts in this space, will continue to prove useful and continue to let people do more and more of their actual work with them, so that people will pay for them. And I think that's a big question right now.
Right? You and I have talked about it on the show many times. We personally find it useful, although I'm not sure how economically valuable some of the stuff we make with it is. But business-wise, that's a different argument; we're not really the right example of that. Business-wise, it does feel like a lot of businesses are finding use cases for this.
Now, the biggest question, ultimately, will be: can that wheel turn fast enough? Because the other thing that's tricky about OpenAI, and I believe there are lots of naysayers out there who have wanted this sort of story to come true for a while, is that there's so much money now tied up in all of these giant tech companies in this AI build-out that if this does fail in some form, or it does start to slow down, there's a little bit of a house-of-cards effect that could affect everything. A little Oracle? [00:07:00] Yeah, a little bit. I mean, Oracle is the biggest one, right?
Kevin Pereira: Well, you've seen the memes, and I think we've discussed them on the show, of the circular trading that happens between these companies, right? Oracle writes a big check to OpenAI to serve their models, OpenAI writes a check to NVIDIA, NVIDIA writes a check to Oracle. It's like six companies all handing money around. And not only are retail investors highly leveraged in all of this, if you throw your money into the S&P 500 or anything like that, but your papa, or your grand-papa if you're one of the younger listeners, their pension is tied up in all this. So there is a bit of a, oh god, what's the shape, Gavin? It's got three sides. It's like a triangle. It gets really tall, there's a few companies at the top, and everybody else is kind of buying in below. But it all goes back to, no, don't...
Gavin Purcell: Don't say it. It's not a pyramid scheme.
Kevin Pereira: No, that's right. No, no, no. It's more like a circular, like a rattler thing. It's like a snake eating its own tail.
Gavin Purcell: Yes, yes.
Kevin Pereira: Look, there's a little bit of that going on. But I do just wanna say, for all those saying that this particular story, and these, yeah, [00:08:00] the six or eight companies, if you will, that are really pushing all this stuff, that them slowing down at all is a sign the bubble is bursting: I don't think that's gonna be the first sign. I think it's the million-plus companies, which you and I have talked about time and time again, that went out and raised $30 million because they put a skin on one of these companies and released an app, or claimed they were an enterprise solution for something. The dam is going to break there. That's where the holes are gonna show up first, long before OpenAI's slightly slowing growth means this bubble hath burst.
Gavin Purcell: Well, and also, the number I wanna point out here is the three-year number, which for a normal person in a normal business might seem not that long at all. But three years in the AI space is a very long time. And the question I continually come back to, and we're gonna get into why I don't think either of us believes this affects AI, the actual technology, at this point, is this idea that who knows what this stuff will do three years from now. A year [00:09:00] ago, the vast majority of coders would have said there's no way they'd end up doing most of their coding with AI. And now a lot of them, I would say a majority at this point, are coding mostly with AI. That was one year ago. So imagine a year, two years, three years from now, when these things get way more capable, and I do think they're going to.
And maybe that's a good transition, Kevin, to think about the separation of these conversations, right? Because I think a lot of people in the mainstream continually say AI is gonna fail, it's gonna fail. And yes, if the money side of AI fails, it will be harder for these companies to make the bigger, newer models. But again, we just said it's not gonna fail money-wise. Even if OpenAI doesn't make more than they're making right now, they have three years of runway left. That is a lot of time for these things to get better, and you and I have seen the acceleration of these things go absolutely bonkers in the last three months.
Kevin Pereira: Right, right. Yeah. I would say, though, there are two real shiny, blinking-[00:10:00]red weak spots, like the weakest points on a final boss if you're a PlayStation gamer. One is that, as we know, a lot of this infrastructure is a massively depreciating asset. You spend billions of dollars on all these servers and high-end chips, and then the next year NVIDIA releases something new, and now your chips are, let's say, half as capable. So there's that massive unknown, which is happening right now. And the other one is this open-source thing we keep talking about, where it seems like every few months a new model gets released, usually from China, that is a fraction of the cost and almost as capable as the best things these companies, which are raising billions of dollars,
Gavin Purcell: Yes.
Kevin Pereira: are churning out. So yes, three years is a long time for OpenAI to figure it out. They could also cut their costs, do ads, and do a whole bunch more to boost their revenue, sure. But these depreciating data centers, the moment they're fired up, coupled with open source? Oh my God. It is a bubble, Gavin. We have to get out.
Gavin Purcell: Wait, wait, Kevin, I have two counterpoints here. Two things. Okay?
Kevin Pereira: Please, hit me.
Gavin Purcell: Counterpoint number one: [00:11:00] recently there's been a lot of talk, and SemiAnalysis put out something about this, about the idea that older chips can actually serve the newer models better than people thought they could. So even though those chips might depreciate, they're still very capable. They may not be as capable as the brand-new chips, but in fact they're better at serving the new models than they were at serving the old ones. That's one thing.
The second thing, Kevin, is that the new DeepSeek did come out, and if you didn't hear about it, I don't blame you. It didn't make the impact the last one did, because it was actually about a generation behind; right now it's equal to around Opus 4.5, maybe 4.6-level, in that kind of era. What's interesting about open source is that, so far, it might just be a distillation of what these frontier models are. To your open-source point, though, the bigger question is that at some point you run out of reasons to want the super-high-end model, right? Because right now, I would say there's a lot of stuff we could be doing with the [00:12:00] current models that we're not doing. So maybe there is a world, in a year or two, where every open-source model on the market is more than enough to serve every person's needs. And maybe that also makes inference cheaper for these companies. So there's all sorts of things that can happen from here on out.
I do wanna shout out: I tweeted something about this, and Kwindla, a good friend of ours in the open-source world who runs a company called Daily and also helped make Pipecat, the open-source AI audio software, actually replied to me and said something really interesting. We'll put the whole thing on screen here. His basic take is that LLMs and AI at large are going to be bigger and more important than the internet, and that we're just now scratching the surface of what the use cases are. This was in response to me asking whether this whole bubble conversation comes down to whether AI is as big as the internet, or maybe something less. And Kwindla's argument, which I think is a good one, is that it might actually be a lot more, and it's just a matter of finding ways [00:13:00] to integrate all of these new capabilities into the sorts of stuff we do. Obviously Kwindla does a lot with voice, but it does feel like that's the moment we're in right now: this new magical thing has come into our lives, and we have to figure out, well, what the hell do we even do with this? Where do we put it, and what does it go into?
Kevin Pereira: There was a time in my life, Gavin, and I'm gonna assume in yours as well, I remember it as a transition. Sorry for the old heads out there, stay with me; for the youngins, just try to follow along. There was a thing called a 486, Gavin, and it was a processor, sure. And there were many before it and a whole lot after it. But around the time the 486 gave way to the Pentium, Intel had this crazy new chip with all these new capabilities. They were plugging RAM into these computers in amounts that were insane. The hard drives went up from four gigs to like 16 gigs. The numbers got bigger. But something really interesting happened, which is that you could offer somebody a $6,000 high-end [00:14:00] Pentium rig with all the RAM in the world, and blah, blah, blah, and if it didn't have a modem in it and couldn't connect to something called the internet, I would take the 486 with a monochrome screen and barely any RAM instead. Because that, clearly, was it. And I wonder if we're gonna have that moment. I don't wonder; I have my opinion, but I want yours. I wonder if we're going to have that moment where we look at intelligence similarly: I'm not gonna be interested in a device if it doesn't natively have some sort of AI, some intelligence built in, or if it isn't readily, easily accessible by AI models to be interacted with in some capacity. Do you think we're gonna see AI become that present?
Gavin Purcell: I think that's true. I honestly think that's the thing that will prove the use case of this. And what's interesting is that, on the other hand, it also might be where AI becomes the commodity, right? Where it's just in everything. And then which AI is special versus not special, I don't know; that becomes another question of whether these companies are worth as much as they're worth. There's no shortage of [00:15:00] major things happening, especially if you, the viewer at home, push that like-and-subscribe button. Because you know what? The amazing stuff all begins here. We are the origin point, the big bang; everything interesting explodes out of our two little heads. That's right. Thank you.
Kevin Pereira: And I do wanna say, someone did say that we are the most impactful podcast ever released in history. Others said that we are the tastemakers for the community, that without us AI would be a bubble that bursts, that we're the only ones holding the door, so to speak. And I just wanna say thank you, and I agree. So please like, subscribe, leave a comment down below, juice that algo. And if you want, you can back us on Patreon. Go to aiforhumans.show, that's our official website, where you can sign up for the newsletter, which still drops twice a week, and you can buy us a coffee. How good is that? We like to stay caffeinated. Sip, sip.
Gavin Purcell: Newsletter drops once a week, unfortunately. The twice-a-week just couldn't keep going.
Kevin Pereira: Well, here's the thing: I get it in two different inboxes, Gavin, and I only check one of them later in the week, so it's really like a second treat for me.
Gavin Purcell: You can do that too. Let's talk about all the cool stuff people are [00:16:00] actually using AI to make. And Kevin, I do wanna start with this clip you sent me, which is just a perfect example of how money should be spent in AI: putting Tom Cruise into famous movies, running in them. Tell me a little bit about what this is and who made it.
Kevin Pereira: There is a YouTube channel called Alternate Reality Movies. They're still a sleeper channel, so shout out to them; give them a sub. They've got a bunch of kick-ass cameos starring Chuck Norris, where he just shows up unannounced in a bunch of movies, and they're mashing up different genres. But this one was, very simply, "Tom Cruise Runs Faster." It's clips from everything from Forrest Gump to Back to the Future where Tom Cruise shows up unannounced. He'll face-palm a T-1000 and run faster. He runs faster than buses in movies. He'll run faster than Arnold Schwarzenegger on a horse. It doesn't matter. It's just Tom Cruise bolting at high speed, and it even kind of has the Tom Cruise run, which is,
Gavin Purcell: Yeah,
Kevin Pereira: chef's kiss.
Gavin Purcell: It's unbelievable. And I think it goes to the [00:17:00] point that, obviously, you can't do this with a lot of current AI video models; this is probably being done with open-source tools. So it goes to our open-source point from before, this idea that you can face-swap in Tom Cruise and make this stuff. It's just a very, very fun use of the technology.
Kevin Pereira: But watching him throw the boys from Stand by Me away, or hip-checking Sylvester Stallone as he's running in Rocky. So good.
Gavin Purcell: Tom Cruise has become, like, the AI of Hollywood in a lot of ways. He's just gonna be all actors in the future. Kevin, we should keep going here. There is a cool voice interaction from OpenAI that they're calling Chappie, which I found kind of interesting. The coolest thing about this is that it actually allows you to use OpenAI's Realtime voice to fill out forms, which is a really good use case of the technology.
Kevin Pereira: Okay, this goes to the thing I said earlier, not to pat myself on the back, Gavin. What's really cool about this demo is that, yeah, it's someone using their voice to interface with [00:18:00] a web browser, but it's not doing the traditional agent thing where the AI takes a screenshot, or scrapes the DOM of the page, or whatever else. The tech stack gives the agent direct access to, and information about, what's on the page. So yes, filling out the form, like you said, or toggling dark mode versus light mode. This, to me, is the signal of, oh, right: there were rumors that the OpenAI wearable, the big device with Jony Ive, is going to be a phone.
Gavin Purcell: This would be a use case. This would be an actual use case, yes. Do you wanna play a second? It's got an audio demo. Let's take a listen to a brief moment of this.
OpenAI Demo: In this demo, I've even added a cursor just to show the user what's happening on the screen.
Let's start with a practical example here. Define a simple application with a wake word and the ability to go from light mode to dark mode. Hey, Chappie, can you go to dark mode?
Kevin Pereira: It's done.
Gavin Purcell: So what you're, yeah,
Kevin Pereira: To be clear, I had to say "it's done" because it happens so quick. Go ahead, Gav.
Gavin Purcell: Yeah, and what's interesting here is you realize the thing's not talking back. This is just you saying something, and then it does the stuff [00:19:00] automatically within the actual application.
OpenAI Demo: Can we do knight to f3?
Kevin Pereira: This is a chess demo. The piece moved already, by the way, that quickly. The piece moved because the AI didn't have to read the screen and figure out what he's talking about or whatever; it just knows what's happening on the page. And when you extrapolate this to all of your apps across your entire ecosystem, suddenly you get a device where you don't have to go hunt for the thing you want to load and then ask it to do something. You just tell the AI, and it inherently has access to all those things and can manipulate them. I do get excited for that device.
Gavin Purcell: Yeah. It goes to show you how important really good computer use is, right? Because when you can plug a very smart agent into very good computer use, suddenly you're able to do all this stuff. It also made me think of all the people who are paralyzed or who have disabilities: the ability for them to do this, instead of having to use speech-to-text and try to find the actual form, that's a massive level-up too. Again, this is just a cool [00:20:00] thing that came out of OpenAI's developer channel, and this is the future of it all.
Kevin Pereira: That is huge. When you talk about BCIs, or brain-computer interfaces, something like a Neuralink is what comes to mind, no pun intended. For a lot of people, usually that is: think about where you need the mouse to go, think about the click you need it to make. That will be gone. It's a huge, huge boon for accessibility. You're totally right.
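The "direct access" pattern Kevin describes, where the app registers its own state and actions with the agent instead of letting it screenshot or scrape the page, can be sketched in a few lines. To be clear, this is a toy illustration of the idea, not OpenAI's actual API; every class, function, and field name below is invented, and we assume the speech-to-intent step has already happened upstream.

```python
# Toy sketch: the app exposes its state and a registry of actions up front,
# so a transcribed voice command maps straight to a function call -- no
# screenshots, no DOM scraping. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class AppState:
    theme: str = "light"
    board: Dict[str, str] = field(default_factory=dict)  # chess square -> piece


class VoiceAgent:
    def __init__(self, state: AppState) -> None:
        self.state = state
        self.actions: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        # The app declares each capability; the agent never has to "see" the UI.
        self.actions[name] = fn

    def handle(self, intent: str, **kwargs) -> str:
        # In the real demo an LLM would map speech to (intent, kwargs);
        # here we assume that mapping already happened.
        return self.actions[intent](**kwargs)


state = AppState(board={"g1": "N"})
agent = VoiceAgent(state)


def set_theme(mode: str) -> str:
    state.theme = mode
    return "It's done."


def move_piece(src: str, dst: str) -> str:
    state.board[dst] = state.board.pop(src)
    return f"Moved to {dst}."


agent.register("set_theme", set_theme)
agent.register("move_piece", move_piece)

print(agent.handle("set_theme", mode="dark"))          # "Hey Chappie, go to dark mode"
print(agent.handle("move_piece", src="g1", dst="f3"))  # "Knight to f3"
```

The design point is the same one Kevin makes about the chess demo: because the agent already knows what's on the page, the action fires immediately, with no perception step in between.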
Gavin Purcell: Yeah. Another really cool thing that just came out: Blender is now plugged directly into Claude. This is a direct integration within Claude itself, where you can talk to Blender. If you're not familiar, Blender is an open-source 3D modeling tool that I've actually spent a little bit of time with lately. And guess what, Kevin? Claude does Blender really well. I made a little raccoon-gangster scenario, which is very cool. We're gonna talk about that a little more on Friday's show, but it is a very cool integration. And again, this is another example of a very complicated system. Blender is a nightmare; I tried to learn it myself. I shouldn't say it's a nightmare, [00:21:00] I'm just not built to learn complicated systems in that particular way. But Claude is a translator here: it translates your ideas directly into Blender and gives you 3D models, and it's not really difficult to do. So I was very excited about this too.
Kevin Pereira: I won't get too super weedsy about it, because it literally just came out, but Nemotron 3 Nano Omni, whew, boy, that's a fun mouthful. NVIDIA released this. It's a 30-billion-parameter model with a 256K context window, so it can cram a lot in. But what I really think is interesting is that it's an omni model, which means it's one tiny model, think of it running on your phone or on a small robot in the near future, and it natively handles audio, video, and text. It does not need to call a tool or connect to something else to understand what it's seeing or hearing. It can process that natively and across modalities, so you can feed it a video, ask a question, and it can answer you with audio. It's a really, really interesting space. And NVIDIA has a vested interest in [00:22:00] powering these things because, again, the future of robots; they're making things that plug directly into robots. I just thought it was interesting. Perhaps not as interesting as the Talkie LLM, though.
Gavin Purcell: Well, this is the thing: all those things we just mentioned are important for understanding how AI is advancing down different pathways, whether that's AI video or other stuff. AI is also going backwards, Kevin, and this is actually a very interesting scientific project. First of all, it was created by Alec Radford and David, uh, sorry, David, I'm going to butcher your last name. And Alec Radford, if you're not familiar, is the person who worked at OpenAI and was the engineer behind the first iterations of GPT, GPT-2, and models like that.
This is a model that is essentially trained only on texts from 1931 or earlier. So what you're getting out of this is an LLM trained to answer with [00:23:00] nothing past 1931. And what's really fascinating is that it gives you a sense of how these LLMs work by being trained on all this data; we do have a lot of data from before 1931, and it speaks like a person from that era. When you ask it who the president is right now, it thinks of somebody from 1931. It's just another good example of playing with what AIs are capable of, and what smaller, focused data sets could turn AIs into. Because to me, I immediately think about character stories, right? Like, what would it be like to have this AI write something, versus an AI we have right now? In general, I think it's a very cool idea.
Kevin Pereira: Yeah, and the important distinction, again, is that it was trained on that data exclusively. So it's not a newer model being told to role-play like it's an old-timey something. A lot of that data came from patents and from documentation that had to be scanned, so there was a lot of OCR involved, because there was no ye olde Reddit [00:24:00] for it to scrape. So I think that's cool. I think having a model like that, and seeing if it could investigate or invent new science, modern science, you know? How would it react to being told there's a device called a pager, or a cellular phone, or even a phone?
How would it react to that? And then it got me thinking,
Gavin Purcell: Oh?
Kevin Pereira: What sort of other LLMs would you want to train exclusively on something? Like, I was thinking a Guardians LLM trained only on Groot, that can only answer with Groot, would be interesting to me.
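Whether it's "nothing past 1931" or "nothing but Groot," the mechanism Kevin is describing is the same: a hard filter applied to the training corpus before the model ever sees a token. A toy sketch of that filtering step is below; the real project's pipeline (OCR over scanned patents and books) is far more involved, and the document structure and field names here are invented for illustration.

```python
# Toy sketch of a "period-locked" dataset: keep only documents dated at or
# before a cutoff year, so the trained model can't know anything newer.
# Corpus entries and field names are made up for illustration.

CUTOFF_YEAR = 1931

corpus = [
    {"year": 1887, "text": "Improvements in telegraphic apparatus."},
    {"year": 1923, "text": "A method of wireless signalling."},
    {"year": 1999, "text": "A system for serving hypertext pages."},
]


def period_locked(docs, cutoff=CUTOFF_YEAR):
    """Return only the documents from the cutoff year or earlier."""
    return [d for d in docs if d["year"] <= cutoff]


training_docs = period_locked(corpus)
print(len(training_docs))  # 2: the 1999 document is excluded
```

A character-locked model like the Groot idea would swap the date check for a vocabulary or source check, but the principle, restrict what goes in, and the model's whole world shrinks to match, is identical.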
Gavin Purcell: Well, that's a pretty easy one; you could fake that one pretty simply, 'cause it would just be Groot.
Kevin Pereira: Uh, a B-movie LLM. Or Cookie Monster.
Gavin Purcell: Well, see, this is what I'm getting at: this idea that you could create an LLM that would essentially be a much more diverse, much more interesting, very specific character. And you can imagine a future where everybody has a collection of AI agents, and one of your agents is, like, a B-movie hero, trained only on B-movie dialogue and what a B-movie hero would [00:25:00] do. That would be a lot of fun, right? And maybe useful for something. Maybe you give that B-movie hero whatever the most recent technology development is, and they find a way to open it up to a whole different world. But I don't know, this is just a cool way of looking at it.
Kevin Pereira: This literally makes me realize that we're never gonna be responsible for revenue at any of these companies.
Gavin Purcell: No. And I love that they're not. No one should ever give us responsibility for revenue, except for you when you go to our Patreon. Thank you, everybody. See you on Friday. Great show, and we'll talk to you soon. Bye-bye.