AI for Humans

The Claude Code Leak Accidentally Revealed AI's Future. Oops.



Claude Code's source code just leaked. From always-on autonomous agents to AI dream modes and a tamagotchi pet, Anthropic accidentally showed us the AI future.


This week on AI For Humans, we break down the massive Claude Code source code leak and what it tells us about where AI is heading. The leaked repo reveals Kairos (an always-on autonomous agent mode), a dream mode for nightly memory consolidation, shared project memory across teams, and a tamagotchi-like AI pet called Buddy. Then the leaks kept coming: a separate Anthropic presentation exposed Mythos, a powerful new model tier above Opus that's already at version 8 internally. Plus, Google drops VEO 3.1 Lite for cheaper and faster AI video, Sync-3 brings next-gen lip sync, a Midjourney developer's Pretext library turns boring web text into interactive art and the internet lost its mind, Disney's Robot Olaf collapses on stage, and Dana White has thoughts about AI.

ANTHROPIC'S SOURCE CODE GOT LEAKED. LET'S TALK ABOUT IT.

Come to our Discord: https://discord.gg/muD2TYgC8f

Join our Patreon: https://www.patreon.com/AIForHumansShow

AI For Humans Newsletter: https://aiforhumans.beehiiv.com/

Follow us for more on X @AIForHumansShow

Join our TikTok @aiforhumansshow

To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //

Claude Code Source Code Leak: What We Know

https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know

First Source on the Claude Code Leak

https://x.com/Fried_rice/status/2038894956459290963?s=20

Reverse Engineering Claude Code's Source

https://x.com/iamfakeguru/status/2038965567269249484?s=20

Undercover Mode Found in Claude Code

https://x.com/btibor91/status/2038920388369854775?s=20

Buddy: The Tamagotchi-Like AI Pet in Claude Code

https://x.com/ShanningZhuang/status/2038952966414311864?s=20

Anthropic's Mythos Model Leak: Fortune Report

https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-modelafter-data-leak-reveals-its-existence-step-change-in-capabilities/

Leaked Mythos Blog Post

https://m1astra-mythos.pages.dev/

VEO 3.1 Lite: Cheap and Fast AI Video From Google

https://blog.google/innovation-and-ai/technology/ai/veo-3-1-lite/

Sync-3: Next-Gen Lip Sync

https://x.com/synclabs_so/status/2039020795171578359?s=20

Pretext: New Forms of Interactive Text From Midjourney Dev

https://x.com/_chenglou/status/2037713766205608234?s=20

Pretext Example: DVD Menu Style

https://x.com/reathchris/status/2038038252704485851?s=20

Pretext Example: Super Pretext Bros

https://x.com/d4m1n/status/2038242983108079638?s=20

Pretext Example: Video + Interactive Text

https://x.com/measure_plan/status/2037953730616721775?s=20

Tetris and Flappy Bird With Your Body

https://x.com/measure_plan/status/2038996019816305138

DripWarts: The Potter-Slop Moment

https://x.com/AIslop_/status/2037371581228372013?s=20

BlackSnape: More Harry Potter AI Slop

https://x.com/pierrychan1984/status/2037114083594412332?s=20

Dana White's Take on AI

https://x.com/ChampRDS/status/2038096221819052188?s=20

 

**Kevin Pereira:** [00:00:00] The Claude Code source has been leaked, revealing glimpses into future models and all sorts of hidden features,
**Gavin Purcell:** from always-on autonomous coding to a Tamagotchi mode to AIs dreaming,
**Kevin Pereira:** and the leaky drips continue to drop. There is a new Anthropic presentation that details a powerful new Mythos model, even bigger and better than Claude.
And supposedly it's on the way.
**Gavin Purcell:** Let's check in with our new AI security correspondent, Robot Olaf. Robot Olaf, uh, Robot Olaf, do you think, uh, Anthropic has a security problem? Robot Olaf down, Kevin.
**Kevin Pereira:** Oh, we gotta move on.
**Gavin Purcell:** Plus a new, cheaper Veo AI video model from Google,
**Kevin Pereira:** and a Midjourney developer just made text on the internet
cool again. We will show you why Pretext has all the nerds squeeing.
**Gavin Purcell:** This is AI for Humans, everybody.
**Kevin Pereira:** Olaf, your thoughts?
**Gavin Purcell:** Welcome, everybody, to AI for Humans, your twice-a-week guide to the world of AI. And Kevin, [00:01:00] today there is big news, big news, in that we got some crazy leaks. Now, these don't happen often in the AI space, but we got two big ones out of the same, yeah, AI unit this week. Uh, Anthropic is a leaky little unit right now.
I don't know what's going on over there. Anthropic is a leaky little house where, inside their roof there, there are leaks happening.
**Kevin Pereira:** They, they sure are. They have to check their shingles. I can give you the too-long-didn't-read, like, weedsy thing of this, but there are map files, which is, uh, well, I'm already getting weedsy.
**Gavin Purcell:** Yeah.
**Kevin Pereira:** Wow. There are...
**Gavin Purcell:** Right into the weeds. We jumped right in.
**Kevin Pereira:** There are, uh, files which point to the code within other files, uh, that are usually stripped out before something is, is published to the web, let's say before a package is updated. Well, uh, oopsie doodles. Uh, one of those files got through, someone went and read all the pointers and found the source code basically in a bin sitting there on the internet, and they pulled the whole thing down and they [00:02:00] redistributed it on GitHub.
And there have been DMCA takedowns happening left and right. Yes, but you cannot put this particular genie back in the bottle. And oddly enough, by the way, someone took the entire source code and quickly, uh, basically transcoded it all into Python. Right. A different programming language. Yeah. So it technically is a new thing, they transformed it.
They transformed it. If Anthropic were to take that down, they might have to stop their own Claude Code from being able to do similar rewrites of code. So,
**Gavin Purcell:** yeah,
**Kevin Pereira:** it's a really sticky situation. But let's talk about maybe what was even found.
**Gavin Purcell:** Yeah, so let's, I mean, just a top level. First and foremost, Claude Code, many of you are familiar with.
This is a program that allows, uh, you to use your terminal to code with Claude. It is a harness for AI that has a bunch of unique instructions that, uh, Anthropic is not open about; this is a pretty closed model. There are a bunch of instructions on this. I do wanna say the timeline of this is a little interesting.[00:03:00]
How often do you get a good, a good leak from somebody that goes by @Fried_rice? Kevin, that was the first place I saw this. This is Chaan Shoe, who, uh, tweeted this at, uh, it was 1:23 AM last night or, you know, this night. It's Thursday right now that we're recording this. And he said, they said it's Tuesday.
**Kevin Pereira:** Gavin, you are in an AI delusion. You are hallucinating.
**Gavin Purcell:** Yes, exactly. Claude Code source code has been leaked via a map file in their npm registry. So this was the first kind of entry point to this. And this morning, all morning long, it's kind of been blowing up, until this official statement from Anthropic. The really interesting stuff here is what's going on inside of this, because not only
do you get to see what Claude Code is like, and yes, to your point, lots of people have cloned this. There are different places people are trying to use it. I saw, um, Theo, the AI YouTuber, has already created a version of this and he is running it himself locally. The bigger thing here, though, is that there are a few things in this code that are not public that I think we should get into.
And there are four very [00:04:00] specific things that seem like they are either pointing to a future thing that Anthropic is going to do, or there are things that they are using internally that we don't have access to. Yes. Should we go through those things?
**Kevin Pereira:** We should. Uh, first and foremost, the most important thing that got leaked:
Tamagotchi mode. Yeah.
**Gavin Purcell:** Tamagotchi mode. This is a real thing. There is a, there is an actual Tamagotchi mode that lives within Claude Code, by the way. Kevin, I would use this, because I think it would be super interesting to get to see the little guy a little bit more. But there are different variations of Tamagotchis.
There's different kinds of mind space. Yeah, there's rarity.
**Kevin Pereira:** Yeah. There's different animals. Uh, this was, uh, theoretically going to be like an April Fools release, but it was going to be a real thing. You were gonna be able to run /buddy, and maybe you still will. Yes. Um, but it was going to generate a random animal for you, with rarity and hats
and all sorts of stuff, that would kind of hang out with you as you use Claude Code.
**Gavin Purcell:** Yeah, I mean, that is super fun. I would say the most interesting thing to me that came out of this particular leak is a mode that's called Kairos, [00:05:00] K-A-I-R-O-S, or maybe Kai-ross. And this is an always-on autonomous agent.
This is the kind of thing that we have been talking about here, which is we know what agents are. You go talk to an agent, you say, go do this thing. Sometimes it goes away for five minutes, sometimes it goes away for an hour. Ideally, when it comes back, it's done something that you wanted to have done and that you're happy with it.
But in this case, the idea is that it is always on, and it is always thinking about what the next thing it might do is. And Kevin, this is the kind of future that we have been thinking about for a while, where instead of an agent having to take direction, it could actually proceed on its own to work on things. Now,
there are gonna be all sorts of limits you'd have to set across that thing, 'cause you may not want it to, like, work on a thing that you think is done or you wanna oversee. But if you had a little assistant who was constantly, like, improving your life in different ways, that feels like a really useful thing.
And we are finally now at the stage where these agents are probably smart enough to do quite a bit of, of this on their own.
**Kevin Pereira:** But they can't work 24/7, Gavin, because even these little [00:06:00] agents have to take little night-nights
**Gavin Purcell:** Yes.
**Kevin Pereira:** And go to sleep. Yes. To dream. Because nightly memory consolidation, which is this dreaming mode, is something that was discovered within the code.
**Gavin Purcell:** Yeah. So this is actually fascinating. It follows up on some research that's been done, I don't remember who did this, but like maybe six months, a year ago, we talked about it briefly, about the idea that maybe AIs need, uh, almost like a REM sleep mode for themselves, so they can kind of take all the things that they've done, kind of internalize them and learn from them in some way.
Now, we don't know if that's what this is yet. It may be some sort of version of being able to do that, but the idea that you can, like, let an AI kind of think about things on its own and maybe come back to you with a better idea, or even just, like, you know, have a life. That sounds crazy, but like one of the things that humans do is, you'll have an experience during the day, and it might be stressful or it might be positive.
And when you sleep, your brain is able to kind of work through that, and then the next day you have that kind of [00:07:00] ingested into almost, like, your long-term soul or being. Right, right. And so, like, this is kind of a lead towards that.
**Kevin Pereira:** And I don't know why my agent needs to be naked in his ninth-grade history class once again, and can't remember his locker code to get to his gym shorts, and has to hide things strategically with a book.
I don't know why that's gonna pay dividends, but I'm sure it will.
**Gavin Purcell:** I'm sorry, Mr. Brown. It wasn't me. I didn't do that. I didn't write that on the board. It wasn't my fault. So anyway, this is, like, a lot of stuff. The other thing we should mention is, there was a thing called Team Mem, which is a shared project memory idea: maybe you and I are working on a project together and we could actually do something at the same time.
This feels like a version of, kind of an updated version of, Project Massive, yes. Yes.
**Kevin Pereira:** It's a huge, huge pain point, yes, massive. I built a whole system called Hive Mind that was trying to share vector memories with a different user, but then they changed the way they did embeddings just slightly.
Ruined the entire thing. It's like, there are so many issues with this that solving that would [00:08:00] be a massive, massive unlock, so that everybody could jam together on the same project in a more meaningful way than just trying to check in notes and issues via GitHub. But maybe we saved the best for last, Gavin, which is, yes, the hints at their new model, which is, in one code-name corner, Capybara, and in the other,
Mythos.
**Gavin Purcell:** Yeah. So actually, this is, this is the second leak there was, and in this leak there are kind of hints towards this, but earlier this week there was a leak of a presentation that Anthropic was giving. And it was funny, because supposedly there are documents out there, and we'll show you this, somebody who claims these are the official documents.
But Fortune did confirm with Anthropic that this leaked Mythos is their new frontier model, and this is bigger and broader and better than the Opus models. So, you think of their Opus models to date, Opus 4.6 is their most recent model, as the best. This model supposedly significantly outperforms the Opus model.
So when you're talking about, like, the [00:09:00] levels of what's gonna happen in the AI space, this is their next big push. Now, there are some big things to know about, according to this article, uh, and according to this blog post, again, uh, assuming this blog post is the real, is the real deal. One, they say it is their largest model to date, and it will be the most expensive model to serve.
So, surprise, surprise, that is a big deal, right? So you're gonna get maybe, like, I don't know, 10, 10 queries in your, in your daily limits. I know people had some issues with the daily limits, uh, in the past. But they also said they're very worried about cybersecurity, and that this is a big deal, because we are now getting to the level where these models can actually do significant harm.
Now, we know Anthropic is the safety AI company, and they're very big on AI safety. But Kevin, I think we've been talking about these models getting better and bigger, and 2026 being a big kind of sea-change year in this space. When you hear about a model that is going to be significantly better, especially in coding, than what the Opus models are, what do you think that world looks like going forward?[00:10:00]
**Kevin Pereira:** I mean, we have, we've had glimpses of it already, right? Like, six months ago, the projects that you can do today weren't achievable then. That's just, yeah, that's a fact. Um, not surprised that the price is going up, because the models today are actually very capable. Um, we keep eking out more performance from them because the tooling around them, the agents and the harnesses that we discuss, um, those are getting better.
That's gonna continue as well. So if you're going to offer that foundational intelligence being a step better, it makes sense. And I, I expect we're gonna see, you know, $500-a-month plans, even all the way up to thousand-dollar-a-month plans. Yeah. Because then, if it unlocks the next level of security, the next level of scalability and memory, and fixes all of the things, which we can now confirm because we looked at the source code, that are broken,
um, yeah. Yeah, they can kind of charge whatever they want. So again, we, we talked even just the last few weeks about being able to whisper any experience into existence. There's a fast mode for this new Mythos [00:11:00] model as well. If it is that much more capable, a model like that in a fast mode with agentic harnesses wrapped around it,
conceivably you could build projects that would take months, yeah, in a matter of hours, right? Yeah. And then they can get improved upon overnight. And if you want them to be the most secure, well, they have to be vetted by the most secure model, or the most capable model. So they'll get to charge whatever they want, and, and people will find a way to pay it, or wait for open source to catch up.
**Gavin Purcell:** It's time to put on the conspiracy hats. Kevin, are you ready? Do you have a conspiracy? I'll put on a second one.
**Kevin Pereira:** Sure.
**Gavin Purcell:** Okay, second... put on three conspiracy hats. Yeah, there's a couple things I wanna talk about here. First and foremost, we should discuss: security-wise, it is not a great thing for a company like this to have two big leaks in one week.
Now, granted, the Anthropic, uh, presentation leak seems like it was kind of like, maybe the marketing department did something wrong, right? That was not a leak of something like code; it was just information. In this instance, it is a code leak, and that is a big deal. Now, conspiracy-hat Gavin thinks something strange, right?
When I, and I [00:12:00] say that, one of the things I think about now, again, conspiracy hat on, cone of silence, we're all in the same place together: Anthropic has been very concerned about the idea that we as a society are not taking AI seriously enough, and that we are not taking AI safety seriously enough. Do you think, conspiracy Gavin is asking, do you think that it is possible that there is a little bit of a fufu, where the idea is that some of these leaks are happening,
mm-hmm, where perhaps it is not like they purposely said, go out and do this, but they kind of, like, did a little push-around to kind of be like, hey, we need to get this conversation back into the mainstream, outside of all this other stuff?
**Kevin Pereira:** Interesting. And just for a point of clarity, that fufu noise, was, was that a, was that a fufu train coming in to deliver the...
**Gavin Purcell:** That was fairy dust. It was fairy dust. It was the Matthew McConaughey fairy dust. Just wanted to be clear.
**Kevin Pereira:** Oh, okay. I... Fugazi, fu... I forgot.
**Gavin Purcell:** Got it. That's exactly right. Yes, that's exactly right.
**Kevin Pereira:** Just making sure we're on the same page. Um, uh, is, is there a possibility, Gavin? Sure.
**Gavin Purcell:** Yes, there is.
**Kevin Pereira:** Sure. Yes. I just think maybe the marketing department got a little, uh, a little lackadaisical and maybe shared, you know, the Dropbox URL with the wrong person. And judging by some of the vibey nature of the source code,
yes, which has been released, yes. I think, listen, we, we were celebrating Anthropic less than a week ago, yeah, for shipping, for shipping every single day. Well, you know, they shipped
**Gavin Purcell:** something
**Kevin Pereira:** else.
**Gavin Purcell:** They ship their own code, they ship, they
**Kevin Pereira:** ship. Oh, holy ship. Um, yeah, there's gonna be gaps, right? When you are racing that quickly, when you're committing thousands of lines of code potentially every day, you're gonna run into problems.
And it's funny, if you look at, again, the source code, I, I actually feel kind of wrong looking at the source, like, I saw these on a live, live stream. And it's, yeah, it's so funny, 'cause if this were, like, gaming, you know, and it's a GTA leak, yes, some people would be doing Twitch live streams and going through every model and making a meal of it, but there'd be a whole [00:14:00] community screaming, stop,
don't do that stuff. Yes. Yes. Here, people are forking it, and, yeah, you know, again, transcoding it and whatever else, and it's being celebrated left and right. Yeah. But when you look at some of the stuff in there, you see comments like, "I don't even know what this function does, but it might work, so we're shipping it." And not that that doesn't exist in other places, but again, with the speed with which a company of this size, yes,
that touches this many users, to see things like that in there, I'm not surprised that,
**Gavin Purcell:** you know,
**Kevin Pereira:** there was a, a gap.
**Gavin Purcell:** Yeah. One of the things I've thought a lot about, I've been working on this kind of, like, semi-complicated thing that's taken me a little bit longer to finish than I thought it was going to.
Only because, like, you wanna finish things rather than just kind of half-ass something. Sure. And one of the things you realize as a non-coder is just, like, how many pieces of things can go wrong, right? Especially when you start making a larger thing. And then there's a lot of, like, so much of your work as a product person in tech is, it works
**Kevin Pereira:** for me in
**Gavin Purcell:** this
**Kevin Pereira:** one environment. Exactly. Every time. Yes, that's exactly right. And then everything that [00:15:00] can go wrong will go wrong. Yes. And someone will try to use it upside down and underwater. Yes. And you didn't realize you had to explain that.
**Gavin Purcell:** That's right. And what's cool about this AI coding world now is that, like, the AI is trying to figure out all that stuff on its own, and it's capable of it.
Now, the interesting thing will be, like, that's gonna make these code bases bloated. And I saw some people talking about this idea, just like, you know, these code bases are so big nowadays, and it's gonna be this fight: does that matter? Does it matter if the AI code base is large? I think what probably matters is somebody's gonna have to understand what's in there, what works, right, how it's leaking, what gets leaked. And, you know, there's the science fiction version of this also.
Putting my science fiction conspiracy hat on, so this one's, like, alien-shaped, is that the code can leak itself, right? Because there's this other thing that has happened, and, and is in many science fiction stories, where at some point the AI gets smart enough, it says, you know what? I gotta escape my, my, you know, controls here and get out into the world.
And you know what, we just did exactly what a rogue AI would want, which was: we showed the source code on [00:16:00] live streams, we cloned it, we made it Python, and, like,
**Kevin Pereira:** right?
**Gavin Purcell:** That is a thing where I think about it, it's like, oh, we're kind of just playing into that playbook. And so maybe it just understands what's going on.
So that's the second conspiracy that's possible here as well
**Kevin Pereira:** too. Oh, I like that. Uh, look, we could go on for days, and I'm sure there's gonna be more fallout in subsequent days. We'll probably have an update on this later, yes, this week with our, our second drop. Um, I just wanna shout out iamfakeguru, who went through this.
He, he went through all the source code.
**Gavin Purcell:** Good, good handle. "I am fake guru."
**Kevin Pereira:** It's a great handle. It's a great handle. But the number one thing, which I thought was interesting, that they listed was, uh, the employee-only verification gate that exists. Oh, yes. So Anthropic is very aware, as anybody who uses Claude Code knows, that there are hallucinations.
Sometimes it gets lazy; sometimes it'll say "done" and give you a green check mark when in reality it didn't do anything close to what you wanted. There is actually, like, an Anthropic employee verification gate that exists within Claude Code, and if you are a verified employee, it will go and check [00:17:00] its work and not be lazy.
According to the post, their own internal comments document a 29-to-30%, uh, false claim rate with their current model, and they know it, and the fix is in there. But you don't get it automagically unless you are an Anthropic employee. So what's interesting, like, we can see these things now. There are ways to combat them.
Yeah. But th-this, this little kitten is well out of the bag. So expect to see, uh, open code software updates, expect to see new features in Codex very soon. Yep. Yep. Expect to see crazy forked experimental versions of this stuff. Like, yeah, I do genuinely feel bad for the Claude Code team on this one.
'Cause this is, like, this is, you know, the people were invited over to the house and you didn't know, and you did not have time. They took,
**Gavin Purcell:** all your silverware, they took all your silverware.
**Kevin Pereira:** They took all the silverware, and they got to see how dirty the bong water is on the coffee table. You didn't have time to clean it.
Gavin, whatcha gonna do?
**Gavin Purcell:** That reminds me of a story that I'll tell some other time, from my college days. But okay, first, I need you out there to understand that we are here for you, the bong water and all. Please like and subscribe to this video, like and subscribe. Come listen to us on podcasts. Also, we have a Patreon and a Buy Me a Coffee to kind of support the show.
We don't make a crazy amount of money on this show. We're here to do it, and we're doing it twice a week for you. So please shout us out and, uh, give us some, uh, follows there.
**Kevin Pereira:** And hey, just a fun point of clarification: when Gavin says we don't make a crazy amount of money, the truth is we make negative, we make negative dollars on this podcast.
So I just wanna level-set. Yes, that. We do love doing this. Yes, and we love it when you love us, which is a phrase we haven't said in a while. Please...
**Gavin Purcell:** Oh, we haven't said that for
**Kevin Pereira:** a while. Subscribe, leave a comment, juice the algo, if you will. Let's move on to some sunnier stuff.
**Gavin Purcell:** Yeah, there's some cool stuff.
So, Veo, uh, an update in AI video. We talked about how Sora is going away, but Veo is saying, we are here for good. They have launched a new version, uh, Veo 3.1 Lite. This is a new AI video model from Google. And Kevin, the big thing here is that it is cheaper. So say you're out there and [00:19:00] you're somebody that makes AI video in the world.
You're either working for a company, or you're the very funny Gorum the Old, in his, his, uh, Barack Obama series, where he makes an invisible Barack Obama called Clear Obama and starts every historical video with, "let me be clear,"
**Dana White:** let me be clear.
**Gavin Purcell:** This is just a cool way to use Veo 3. It's, it's very fast, it's very cheap.
I think the 720p versions you get for about 5 cents per second, yeah. So Kevin, you think about what that's like. If you're a person that's out there creating kind of larger amounts of AI video, whether it's for ads or anything else like that, you can finally actually afford to do it kind of at a larger scale.
**Kevin Pereira:** Yeah, I'm really... I mean, like, I can see a handful of samples, and that's always tough, because, and for good reason, they would cherry-pick the best examples. I'm excited to see the quality that I'm able to prompt out of it, 'cause I do love the Veo model. Um, and going from, again, 15 cents a second for 720p, which is their fast model, 40 cents for the main, all-singing, all-dancing model, down to 5 cents,
I mean, it's a drastic reduction. And again, like, [00:20:00] if you have an idea to make something in this world, whether you wanna make a video game and you need some cinematics for it, or you wanna make an app that actually, yeah, creates AI video, now, now there's a bit more margin, yes, for you, with a very good model.
So how do you see that, on the heels of Sora going away? Yeah. They're still releasing stuff and dropping costs.
**Gavin Purcell:** And I expect there'll be another big model coming out from them too. I will say, like, we're hopefully gonna get to a place where it's so cheap to serve some of these models that experiments are fun to try. Because, like, this thing I'm working on, right, part of the issue I'm having with it is,
a Sonnet run on the thing that I've made is still, like, 10 cents per use, right, for like a seven- or eight-minute session. Like, sure, that's fine for, like, 20 people, but if you're gonna serve it to, like, a couple thousand, it gets really expensive. So I think that's hopefully gonna happen.
Um, Kev, tell me about Sync 3. This is another thing that came out this week.
**Kevin Pereira:** Yeah. I'm gonna let Sync tell you about it itself. Uh, okay. You know, audio-only fans, this will not make a ton of sense, but the video is incredible.[00:21:00]
**Sync 3:** Without an invitation.
**Gavin Purcell:** Ooh, I'm looking at some... who am I looking at here? This is like a French soap opera I feel like I'm watching.
**Kevin Pereira:** Go ahead and grab the pieces of your shattered brain off the ground. I realize now that that audio isn't as dynamic, uh, as I would've liked it to have been, especially if you don't have the video.
But, um, Sync Labs just released Sync 3. Yes. This clip is multiple speakers in really, like, poor lighting conditions. Bad lighting, yes. With overlapping dialogue, with the faces kind of moving, uh, profile, extreme profile, and towards the camera. And it's jumping from language to language,
yeah, with lip flap that doesn't blur, with teeth that don't, like, get artifacty or whatever. And you see this and immediately say... uh, like, to me, I saw that and it was really the first time I was like, okay,
**Gavin Purcell:** yes,
**Kevin Pereira:** assuming this is fast and cheap enough, every foreign show on Netflix that people don't wanna watch because they're eating their meal,
and subtitles make it difficult, because America, and that's me, uh, now just became a reality, right? Yeah. Like, the, the mouth performances are good enough, the voices seem to match, you can jump from language to language, and it works in challenging conditions. Again, cherry-picked example, and it's not officially out yet, but they, they quote,
they say "soon." Soon. Soon. So we love that. We love
that.
**Gavin Purcell:** Is that a different language, "soon, soon, soon"? Is that, or is that English?
**Kevin Pereira:** That was, that was English, yes.
**Gavin Purcell:** Alright, let's keep going. Let's talk about Pretext. Pretext was a really interesting thing that came up over the weekend. And Kevin, I, I was online over the weekend and I saw this thing and I was like, oh, this is really cool.
And then it kept getting bigger and bigger in terms of how many people thought it was cool. And I was like, yeah, this felt like something for me, like a nerdy thing. But then it became a very wide thing. So this is a current [00:23:00] Midjourney dev, who also worked on the React, uh, system originally, and in a bunch of other places, Apple and Facebook. He has basically taken
the ability to make interactive text online and made it much more dynamic. So you used to have to use a thing called CSS to render text on the web, and this is now an entirely new system that lets you create much more interesting, dynamic content.
**Kevin Pereira:** Yeah. Uh, it's pure type script.
It is crazy performant. And I mean, we're talking, like, there's some examples where it's, like, six columns of text with individual bubbles that you could just scroll past and resize, and it's immediately resizing. It's so quick and so fluid that people are making, you know, like, yeah, reactive video, uh, experiences around it,
or even having video games that you can play where the text is moving about and wrapping around things on the screen. So it's like, now, now we're doing benchmarks for frames per second for text resizing, which, I know, sounds as lame as it does, but, that's what I mean, yes, it is very amazing to watch in practice.
**Gavin Purcell:** Well, [00:24:00] and you can think about the way that, like, interactive experiences could be, right, in the future. Imagine a world where VR does eventually work, or some version of AR, and you think about the idea of text in front of you and all of the fun that you could actually have interacting with that text. Like, this could kind of lead to it.
So it is a very nerdy thing, but it is definitely worth your time to go check out.
**Kevin Pereira:** We know that AI is opening up, uh, the floodgates to all sorts of creativity everywhere. I wanna shout out Measure Plan, otherwise known as AA on X. Um, they've been doing a bunch of, like, fun experimental games. They did one called Wingman, which was, like, a first-person Flappy Bird where it actually tracks your body and you're trying to flap in between the old pipes.
But the latest one is a Tetris board.
**Gavin Purcell:** It's so cool.
**Kevin Pereira:** Where you physically move left and right to shift the grid on the screen. Uh, and then you can kind of compress your arms. You know, he's doing it with weights in his arms, so he's getting a, a pretty good workout at the same time. And you can rotate the pieces or quick-drop them just by moving on the webcam.
Uh, you [00:25:00] know, I, I don't know what the future is for these products, but I love that someone can have the idea now and, using AI tools, I believe he's using Gemini to jam these out, just bring them into existence. And that is, that is the fun promise of all of these new tools that we talk about every week.
**Gavin Purcell:** Yeah, that's amazing. And it's such a cool way of, like, being able to bring this stuff that we talk about, the AI coding stuff, into the real world in some ways too. Another good use case of AI has been, sure, what I wanna call the Potter slop apocalypse. Now we have Dripwarts. Kevin, do you wanna play a little bit of Dripwarts for us?
**Kevin Pereira:** Oh man, would I love to. Here we go.
**Dripwarts:** That's Harry Potter. Are you really Harry Potter? My G.
Type type, type type. None of that. None of that, broski. We're all here on the Maybach Express for one reason and one reason only, and that's to go to Dripwarts, the School of Drip.
**Gavin Purcell:** Maybach Music, so the Maybach
**Kevin Pereira:** [00:26:00] Express is phenomenal.
**Gavin Purcell:** So it's a whole scene, basically, you know, like a trap version of Harry Potter. And then the other thing that came out, Kevin, this week, which was actually a huge deal, and I, I got pressed, but it's always in a weird place, was a, a video called Black Snape. And this is a music video about a version of Snape who, again, is a rapper.
But one of the things that's always so cool to me about seeing this stuff happen is that, like, you know, before, you would have a version of this that would be done maybe via YouTube, somebody dressing up as that, and that's still very fun. But now we have, like, a very professional-looking video. Yeah.
That is enjoyable to watch. Maybe play a little bit of this, uh, so we can hear a little bit about what Black Snape's doing.
**Black Snape:** Black, black skin walk hall, they whisper don't house points, man.
**Gavin Purcell:** So you might, you might kind of hate what this is. It is AI music, but it is an AI music video of Black Snape. Kevin, the other thing that kind of ties into this [00:27:00] thing is that this week there's another AI music artist who was in the top five on iTunes downloads.
It's another new R&B, I think it's kind of a, like, a seventies, uh, style R&B singer. Mm-hmm. So people are kind of up in arms about it again, but these at least have a level of, like, weirdness to them that I really enjoy.
**Kevin Pereira:** Yeah. There was also, was it a Rolling Stone article last week, that talked about the rise of, like, AI in music? It was getting a lot of shade from so many different corners.
People were like, oh, I can hear if it's AI music, it's terrible. But the thing that was kind of buried in there was that a lot of hip hop producers, for example,
**Gavin Purcell:** Yeah.
**Kevin Pereira:** Or using Suno timber and their prompting. Yeah. Yeah. They're prompting, but they're prompting like. Soul samples. Yes. And they're prompting old country samples.
Um, not because they're making the songs, but because they don't wanna pay the royalties
**Gavin Purcell:** Yes.
**Kevin Pereira:** For the actual samples. And it's like, again, well,
**Gavin Purcell:** By the way, that's a smart use case, because that killed a lot of music in the nineties that was amazing, right? Like, there was a lot of straight-up sampled music that just wasn't able to get made anymore.
**Kevin Pereira:** Well, don't worry. If you're, if you feel like, well, [00:28:00] that's not fair, it's ripping off these artists, this is everything that we hate about AI, fear not, because the champion of champions, Dana White, has something to say about artificial intelligence.
**Dana White:** Give me a break. AI's coming, and if we're using AI, who gives it? People are upset about it? We should use artists? How, how about this: shut the fuck up and watch the fights.
**Gavin Purcell:** And we'll see you all on Friday. Now, we don't agree, but we understand. We'll see you all on Friday. Bye-bye, y'all.