May 25, 2023

(Not) Sam Altman & ChatGPT Discuss Superintelligence and Adobe's Firefly AI | AI For Humans


Welcome to 'AI For Humans,' where we turn complex AI concepts into digestible insights. Join hosts Gavin Purcell and Kevin Pereira as we decode the world of AI, with a little help from our AI co-host, Gash. This episode uncovers AI-assisted fake news, explores AI companionship, and delves into the future of superintelligence governance.

Our conversation starts with a deep-dive into the disturbing trend of AI-assisted fake news, taking a closer look at the shocking fake Pentagon Explosion Market Manipulation case that took over Twitter and your newsfeeds this week. We'll arm you with the knowledge to identify AI-generated images and stay one step ahead in the digital landscape. Simultaneously, we'll explore the rise of AI companionship through Caryn Marjorie's AI girlfriend, discussing its profound societal impact and the ethical questions it raises.

From the realm of digital relationships, we step into the world of Adobe Firefly, Gavin's pick for the 'Dumb Thing We Did With AI This Week,' as we attempt to construct the city of Seattle out of hot dogs. We follow this with a glance at the progress in robotic technology, featuring Halodi Robotics and Tesla robot updates.

The episode reaches its peak as we discuss last week's pivotal meeting between US Government officials and tech giants, OpenAI and IBM. In the aftermath of this meeting, we delve into OpenAI's proposed governance of superintelligence. Our AI co-host, Gash, shares his thoughts in a riveting rebuttal speech, sparking a lively debate with (Not) Sam Altman.

Embark on a journey with 'AI For Humans' where we unravel the intricate world of AI with a human touch. Be sure to subscribe and join us as we explore AI with curiosity, humor, and candid conversations. Stay updated and let's demystify AI together!

Transcript


[00:00:00] all right everybody, welcome to another episode of AI for Humans. Kevin, what do we have on today's show?

So many things. Gavin, the Pentagon exploded. Whoa. Well, the Pentagon may have exploded. And also, uh, AI-assisted fake news is also exploding, and that might have something to do with the story. And so, spoiler, maybe the Pentagon thing isn't real, but you know what is real, Gavin? What's real? My internet girlfriend.

Oh, you and your internet girlfriends. I can't wait to hear all of them. These what? All of my Telegram girls, they're all very real and we're gonna talk about 'em in very family friendly ways. And it's a brave new future that we are all marching through. It really is. It really is. And one of the, uh, the gentlemen that is ushering in that future is Sam Altman from OpenAI.

The CEO himself is here. It is an AI for Humans exclusive, a juicy exclusive, the juicy exclusive. Also, Sam's fake as well, so don't trust anything we have in this episode. Also, I'm deep in the mountains of New York, and [00:01:00] so when I talk over you, Gavin, know that I'm not trying to be as rude as you've always known me to be.

My ping is literally 600 milliseconds. It is atrocious. All right, everybody, let's get started with the episode.

 

We are excited. We do have another big fun show today. I wanna say my name is Gavin Purcell.

I am here with my good buddy, Kevin Pereira. We've known each other for a very long time, probably too long. But Kevin, it's very nice to see you. How are you today? It's good. I didn't know our friendship was past its sell-by date. That's cool. Yeah, we've known each other for a long while. It's a long, long while.

I feel like once you pass, like the 15 year mark and if you're still actively talking to each other, yes. You should get like a medal on the wall. Right. Because like at this point in modern society, having friendships that aren't your spouse or your partner, like that's a lot of work. So we're, we're doing the work.

You and I are doing the work here. Of course we're doing the work in public for a thing that is like for other people to see. So I don't know if that counts in the same way or not. It's fun that you [00:02:00] say, doing the work means not actually growing as human beings so that we can still be friends that can tolerate each other and make each other laugh.

Yes. We've stunted our growth for the sake of our friendship. My wife might argue I've gone backwards. Right. So instead of moving forward, I've actually taken a few steps back, but that's fine. That's fine. Let's you know what you're right though, because the only time you get something to put on the wall these days for an accomplishment is when you pass like a million YouTube subs.

Exactly. We are exactly probably a few days away from that milestone. However, if we could just take a few seconds at the top of this podcast to pat ourselves and our loyal legion of followers on the back. Let's break our hands. Because we are so close to a thousand subs on YouTube, we have shattered our Spotify and Apple goals and I'd love to think that we have anything to do with that, but we don't.

It's literally everybody that's listening and watching and taking the time to leave five star reviews and comments and engage with us on Twitter. Thank you. Thank you, [00:03:00] thank you. Because the line go up, candlestick glow green, Gav and Kev very happy. It's a very simple algorithm. It's super cool and what I think I really wanna say here is a thank you for sharing and please keep sharing.

Don't stop sharing, cuz that's part of how we get stuff out there. But B, I think that there's something about this show that we're trying to get at, which is working, which is that we wanna try to take this world of, like, very insane headlines, and also sometimes headlines that are clearly written by AI to try to generate audiences, which we've talked about a lot.

Like the seven incredible tools you can't believe will change your life. That is not us. We are trying to come at this with a human understanding of what the space is to try to give you context to all this. We wanna make sure that you can take away things from this episode and like hopefully have fun too, cuz that's what we do in our, in our lives and have fun with this stuff.

But most importantly, yes, thank you to everybody who's been listening and sharing this with your friends or anybody else you think would be interested. It always helps. The more you can do that, the [00:04:00] more grateful we are. So thank you again.

And I guess to recontextualize all of that, someone had to play the violin on the Titanic. If the iceberg is artificial intelligence, you gotta appreciate that Gavin and I are just trying to jam before we get our toes in the drink. Listen, man, I play a mean fiddle, you know, 'The Devil Went Down to Georgia.'

So I'm ready to be on that Titanic and I'm sticking on it. I'm not gonna get off, I'm not gonna be shooting off into some other weird place. I'll be in the middle of it. Okay, let's jump into, no, let's honor that. Gavin, we're both very silly. I'm going to be talking over you today. I am so, so sorry.

Blame T-Mobile. 

What dumb thing did you do with AI this week? Okay, so the dumb thing that I did with AI this week I got into Adobe Firefly, which is, you know, not necessarily a dumb thing.

It's a really large company, Adobe, taking generative AI and smacking it right in the middle of one of their largest products, which is Photoshop. It's not dumb, but I will say, like, as per usual, we do dumb things with serious products. And the very first thing I wanted to do was push its edges and see [00:05:00] what I could do or not.

I dunno if, Kevin, did you take a look at any of the pictures I sent? Cuz it'd be easier probably to just talk through this. I sure did. Sure did, buddy. I think this is one of those things where, Android fanboys hate this, I have an Apple iPhone, and I don't necessarily agree with this statement, but there is an ounce of truth to it. People will say things like, NFC doesn't matter, or 5G doesn't matter, until Apple puts it in their phone.

Right? Because then there's gonna be software unified around it, millions, billions of people are going to get access to it overnight or over time. So, you know, it's gonna be a big deal. I wanna contextualize this Firefly thing that you're talking about in a similar way.

Mm-hmm. We have been using AI tools that make images for over a year now. Every day a new one comes out and we get excited by its capabilities, but Adobe, the creators of Photoshop, putting a tool like this out is that sort of, oh, now it matters, Apple-is-doing-it, so to speak, moment. Today millions of people across the globe woke up and suddenly their Photoshop could do something that felt like [00:06:00] magic, and it might not be the best version of that.

There might be some bugs, there might be, I can't wait to hear about your use cases, but it is a watershed moment, because whether end users know it or not, they were playing with AI this morning in a way that they weren't when they went to bed. Yes.

So what we've now done, and just to be clear, this is a beta, and if you're a Photoshop Creative Cloud subscriber, you do have to download a separate product to get this.

And the compute, meaning where all the actual computational stuff happens, is in the cloud. So you're not doing that on your local computer. So that's an interesting side of this as well. Yeah. I think this is transformative in a big way. Okay, so my initial take on it is this: super interesting.

I do think the things that I like to do with AI, which are always, like, stranger and more odd and weirder than your normal person's, don't work very well yet. But if you are a photographer, or if you're somebody that does digital imaging and you need to do some very simple replace or you need to do some very simple additive stuff to a picture, it can [00:07:00] work.

And one of the hardest things for people who are new to generative AI art is the amount of cycles you have to do to get something that you really want, right? People tend to think of these things as like magic, right? You put something in and it's right there. That is not the case. It is a lot of cycles you have to go through to get something that's somewhat okay.

So just to start off with this, what I did is I took a picture of my hometown city, Seattle. Cuz one of the things they had shown off was being able to, like, take a photo and add things into it. So I first tried to add a yacht into the picture, cause it's Seattle, you know, Bill Gates, there's a lot of money floating around.

I'll put a yacht. So it worked pretty well. I put a single yacht in the foreground, in the water, and that worked pretty well. It had the shadow, everything else. I then tried to turn that yacht into a monster like Godzilla. Not as good, did not work nearly as well. And I think that one of the things I've noticed pretty early on is that it's not really understanding non-realistic prompts.

Then I did a couple other things. I put like six yachts in the frame and I decided like this is gonna be a yacht regatta.

[00:08:00] Like we're gonna see what Seattle looks like with a yacht regatta. And what I did is big box, small box, small box. And I put 'em in different places and it got the size of the yachts pretty well based on where it was in the frame. So a little further back, there was a smaller yacht and a little, when I put the one closer, it was a slightly bigger one.

The two super yachts in the foreground looked pretty similar. Like when you zoom in on them, you can tell like it's AI imagery. So that's one other thing. I saw a bunch of people posting different demos. I'm looking at the, the monster one now. I think they did a pretty okay job with the monster.

Like it approximated a shadow of it. It's sort of half in the water, half out. It's not that bad. But the yacht one is interesting because a lot of the demos that I saw were breaking whenever perspective based off of like distance from the lens or the horizon was coming into play. Someone just wanted some tiny little birds flying in the sky.

But every time they drew even a small box in the clouds, they got a huge bird. Yes. Not in proportion to anything else. And it seemed like the photo that you were giving it, that skyline with the water there, that was giving it enough context clues to [00:09:00] understand how big or how small a ship should be, which is, that's impressive.

Absolutely. And I think that, to go back, just so everybody knows, if you're listening to this only, we'll share this in our YouTube video, so we'll have the pictures in there if you wanna go see them. The monster one was interesting cuz, again, it was one of the things I had to do about 15 generations to get to.

But if you look closely at it, it's not sitting in the water. Right. There is a shadow, but it doesn't seem to, I, I asked for it to be like coming out of the water and it had a really hard time doing that.

You wanted, like, the white water splashes coming off of its scales and muscles. Exactly. You wanted a sweet snorkel coming out of its prehistoric mouth.

Now, the next step of this was I started to play around with the sky a little bit, right? So I was trying to get to the point where I put in, I wanna say, a blimp in the sky. And that seems like not that different than a yacht, right? Like a blimp is pretty straightforward. So I put it in, and you'll see the picture here.

I don't know what this is. It's like a nipple blimp or, like, what would you describe this thing in the sky as, Kevin? You know, if you look at the back half, I thought you were trying to do [00:10:00] one of those Rayman Rabbids. It's got, it looks like it's got two bunny ears and like a little bunny nose.

Yeah. It is not a Zeppelin. It is 100% not that. The lighting just feels wrong. Yes. Or weird. It looks like a 3D model just sort of plopped into the sky. Yeah, so this is exactly what I felt.

The other thing I did with the same picture is I tried, one of the things it talks about in the Adobe demo when you get this is you can expand out the canvas and then do what's called, I don't know the name of it, it's like generative add or something like that.

Basically you have a white part of the canvas. In the Stable Diffusion world it's called outpainting, where you're taking the image that you have and you're painting out from the edges of the frame. And so what you have on the screen here is the skyline with the yachts, the weird

zucchini-blimp-looking thing up there. It really seems inappropriate. It looks like someone dipped something I will not mention into mustard and just kissed it. Exactly. Your point is exactly right. You took that image, you extended the canvas off to the side by an inch or two, and then what?

You just selected the whole area and [00:11:00] then gave it a prompt. Yeah. You select the whole area, give it a prompt, and then the idea is, if you don't touch anything and you just hit generate, it fills in what it expects the thing behind it to be. Now, the thing that I found is it should be a relatively simple thing to fill.

Right. It's a city skyline and you would expect it, if it had data on buildings and water, yes, yes, to understand that. But what was interesting is, if you look at some of the images that I generated, it's not great at that, right? Some of the buildings are a little bit funky. They seem all over the place.

And I think this is a place where Stable Diffusion is significantly better at that and is doing something slightly different here than what Adobe is doing. Again, going back to their demo, when they demoed this, it was like they could add a yellow dotted line into a picture, something very simple.
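To make the outpainting idea discussed above concrete, here is a minimal sketch using the open-source Hugging Face diffusers library and a publicly available Stable Diffusion inpainting checkpoint. This is only an illustration of the technique, not how Adobe Firefly works internally; the model name, file names, and prompt are assumptions for the example.

```python
# Illustrative only: "outpainting" by extending the canvas and masking the
# new region, using an open-source inpainting checkpoint via diffusers.
# The model name, file names, and prompt are example assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

original = Image.open("seattle_skyline.png").convert("RGB")
pad = 128  # extend the right edge by 128 pixels

# Wider canvas with the original pasted on the left.
canvas = Image.new("RGB", (original.width + pad, original.height), "white")
canvas.paste(original, (0, 0))

# Mask: white = region to generate, black = region to keep untouched.
mask = Image.new("L", canvas.size, 0)
mask.paste(255, (original.width, 0, canvas.width, canvas.height))

# The pipeline works at a fixed resolution; resize in, then scale back out.
result = pipe(
    prompt="city skyline with water in the foreground, photorealistic",
    image=canvas.resize((512, 512)),
    mask_image=mask.resize((512, 512)),
).images[0]
result.resize(canvas.size).save("seattle_extended.png")
```

Conceptually, the selection-plus-prompt workflow the hosts describe maps onto the image, mask, and prompt inputs here, though any given product may implement it differently.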

That's what this is gonna be used for in the beginning: these very simple fixes. Okay. Then the other thing I wanna do, because you specifically asked me to do this, is I took a picture of Brad Pitt. Wait, don't forget hotdog building. Oh, oh my God. Okay. Sorry. [00:12:00] How could you forget hotdog skyscraper?

Yes. So, okay. The last thing I did with this picture of Seattle is I wanted to see could I replace one of these buildings pretty easily, because there's an interesting use case there. Again, this goes back to my insane brain wanting to do stuff that's fun. So I highlighted one of the skyscrapers, the largest one, and I said 'hotdog building, shaped like hotdog' as my generation prompt.

And I put it in, which of course everybody wants, right? Who wouldn't want a city full of hot dogs? Three engineers at Adobe just pulled their hair out as a machine went up with smoke flying out of it. They're like, sir, we've never seen this edge case before. Someone wants a hot dog skyscraper. Ooh, who could it be?

Who is this insane person that's sitting at home wanting a hot dog? Shut it all down. Unplug everything, air gap it. Oh, we gotta look. We didn't expect this. The technology's growing too fast. And who let Jesse Ventura in here? Get outta here. Yeah, excited. Welcome back, governor. Oh, it's a pleasure to be here.

And hotdog building. So, I got a little Infowars there. I'm sorry. So you just [00:13:00] selected a building, a rough outline of it? Yes. And said 'building shaped like hotdog,' you hit generate, and then does it give you multiple options to go through? It does. How does it actually work? It does, it does. So it lets you click through each time.

It gives you three options. And I will tell you the one I sent you, which we'll put up on screen here, was the best one by far. And it isn't a hotdog building at all. It's literally a floating large hotdog about the size of a building. But it doesn't feel like it connects at all to the cityscape.

No, it doesn't feel like it's a hotdog building. So again, this goes back to, I do think whatever engine they're using here is trying to only be used for realism right now, right? Like, you can't do the stuff that feels like what you can do in Stable Diffusion or even Midjourney, which is, like, imagine these crazy things and bring them to the forefront.

That said, who wouldn't want a city full of hotdogs? So I would love to be able to keep working on this as an IP. This is gonna be my new IP. I'm gonna take Hotdog City and bring it to Netflix, and we'll see what Adobe does then, right? I'm sure they will integrate it into every part of it.

Once it becomes a real thing, it'll be easy to do. In a [00:14:00] world where not everything is kosher, one kielbasa has what it takes. Will he be the hot link that brings society together? That was Kevin doing it. That was not AI.

Last thing. Kevin asked me, like, you should take a celebrity and cut their face in half and see what it can replace, which I think is a good test for it. Right. So I got a picture of Brad Pitt

that I took off the internet. You know, it's just a normal standard headshot from a celebrity event. I sliced his face in half, kept the other half blank, and started to play around with stuff. And here's the interesting thing. I did the same thing I did with the cityscape.

When I hit generate, I'll put the picture up here, it created an entirely different person. I don't know who this person is, but it created an entirely different person. And that was interesting, because somehow it's not taking this half of the face into account.

It's creating a different person within this environment. Or maybe its brain is being told, okay, I'm gonna generate a separate person here. But, and this I found really interesting, when I typed in Brad Pitt, it basically, it wasn't perfect, [00:15:00] but it filled out the other half of Brad Pitt's face. And that was the thing that I was surprised by, cuz I was like, well, I wonder if there's a world where they've put celebrities to the side here and they're not gonna allow celebrity terms to go in.

That's what I wondered, but it allowed it. I was wondering if it was a license protection or a copy protection or something, but because you explicitly said, hey, I'm retouching a photo of Brad Pitt, yes, it said, okay, that's fine. Let's give him a sweet Enrique birthmark. It did hallucinate some of it.

Yeah, I thought that was really interesting. And then the other thing, I just threw one more thing in. There's an image where I said 'cape costume superhero.' So I was expecting it to maybe turn Brad Pitt into some sort of a costumed superhero.

Instead, what it did, I just saw it. Yeah, I just saw it. That's scary. That was a jump scare. It added a whole person next to him and it was just like, what the hell? Who's this? You know, suddenly there's a person. It added everybody's four-year-old nephew when you're taking a photo. Yes. Can I get in that?

Exactly. It's like, exactly, just crashed the frame entirely. Yeah, exactly. What are you [00:16:00] guys doing? Taking a selfie? Again, it's like, who knows what these are gonna be used for. But I thought this was an interesting experiment with Firefly. And again, all of this is going to a crapload of users, right?

30 million plus people, according to Adobe, subscribe to Creative Cloud. Now they don't necessarily all have access to it now, but that is still a massive, yes, user base for any of these tools.

Whether people know if they're experimenting with AI or not, and the data that Adobe is going to collect. I guarantee right now someone is crunching an Oscar Mayer model. No way. Of course. That entire hotdog city, give me the hotdog city, give it to me. I want it in the cloud. I wanna, like, I wanna create the Star Wars version of Hotdog City, whatever that city in the clouds in Empire Strikes Back is, I want that to be made of hotdogs.

Keep bringing it. Great. So that was the dumb thing I did this week. Kevin, let's hear it. What did you do this week? Well, we're gonna get into what I did this week, because it was a race to the finish line last night. But I managed to book us Sam Altman on the show, right, to talk about some [00:17:00] massive announcements.

Yeah. And then I invited our completely fictitious AI buddy Gash to come along for the ride. So folks who know the show know what Gash is, or who Gash is, and how Gash works, and I know that they're excited that Gash is coming back. It happens every time we bring him on the show. For those who have no idea what I'm talking about.

I'm delighted. I wanna apologize in advance. Gash is an uncensored AI that we created on the very first episode of the show. And Gash is gonna do some dirty work for OpenAI and Sam Altman later on in the show. So that's the dumb thing that I did this week. But Gavin, there's so many news stories happening.

Should we get to the part of the show where we get to do the big song and dance number? You know, it, it's time. It's time. Let's get to the news.

Hotdog city, hotdog news! Ooh,

all right, let's get into the news. Uh, big news. The big news story, and I think this actually goes back to our Adobe conversation a little [00:18:00] bit, because it's about photo manipulation, right?

There was a giant news story that came out a couple days ago, and the basis of this is that a fake picture of a bombing at the Pentagon was circulated and it actually sent the stock market down in real time. So, just as you know, we've been covering things like deepfakes, which is the idea that somebody is taking an image and changing the face and making it look like something they didn't do.

The bigger kind of problem with this world is also you can deepfake news. Right? And this essentially is what happened. Kevin, did you see this, and what was your first reaction to it? I did. Yeah. It's a seven-layer dip of fail all over the place with this thing. You can't just make a fake image and throw it out on the internet and know that it's going to tank the S&P 500 by a hundred dollars or whatever. You could try. Yeah. But it's probably not going to. The perfect storm that happened here was a slightly believable photo, which we'll get into.

Honestly, let's get into it now. The photo, yeah, has a bunch of billowing smoke out in front of the Pentagon. [00:19:00] But if you take a second and even just do the slightest, like, pinch-zoom enhance on the photo, yeah, exactly, you can see that it's got tons of artifacts. It's not really great. It's not even as compelling as hotdog city.

The fence is blurring into the background and it's all messed up. But if you're casually scrolling through a Twitter timeline, as many do, and you see that and you go, oh my word, where did this image come from? And then you look at the username and it says, Bloomberg Feed. Well, I know Bloomberg is a trusted name, and there's a verified

check mark next to it. I better retweet this immediately. Mm-hmm. And suddenly it starts climbing up in the algorithm, it's gaining momentum. And even when people say, hey, the Pentagon did not explode, this is fake news, blah, blah, blah, well, just the usage of those keywords, yes, pops it up even higher. And suddenly you've got a Twitter trending story from a fake account masquerading as Bloomberg, with an AI [00:20:00]

And as we all know, , the lives spread really fast. You can destroy pretty quickly. But seeking the truth, amplifying that, and building and creating is a lot harder. And generating a fake image is very novel.

We've been able to do that for a long time. You didn't necessarily need AI to do this. The amplification of that, though, potentially having tens, hundreds, thousands of accounts say, oh my word, each one with their own individual account of what they saw or how they found out. I'm sure there's governments that already have access to these tools that are using that for disinformation campaigns to quell uprisings, to manipulate stock markets. But it is getting, I mean, it's been weird for a while, but it's getting incredibly weird when a single user with an NVIDIA graphics card potentially could have enough power in their pocket to generate video, to generate an endless sea of photos, multiple camera angles.

Everything that you would look for to debunk something like this, you might be able to automate. In fact, there's probably somebody already [00:21:00] doing it right now. I hate to say it, but you could, if you really had the desire, automate that stuff today. And I think people are justifiably concerned about this. Absolutely. And I think the interesting thing to me is what you were getting at, this idea that there was a system in place, which we'll call Twitter, or we'll call any sort of social media, that had built an algorithm based on, you know, one, wow, this is big breaking news, or two, this is divisive in some way, right? So both of these things feed into the idea of what this is, because not only is it quote unquote breaking news, even though it was fake, it looks like breaking news. But then two, I bet you there were a lot of people very early on saying this is fake, and that actually helped it speed up the algorithm as well.

The biggest takeaway from this is there are things that we can help you understand, and other people have laid out, about what makes a thing a fake image and how to dig into it. This was a much more obvious one than something like even the Pope one, because the Pope one, just, if you don't remember this, there's a picture of the Pope that went viral in a big, puffy jacket.

There's a level of [00:22:00] believability to that, because you think, oh, the Pope's in Italy, and Italy is a very high-fashion area, and maybe some company like Balenciaga donated a coat to him and suddenly he thought that was a good idea to wear it. Maybe not this pope, the pope before, if you're a pope historian, it might make more sense, but believable, right?

This is a major attack on US soil or a terrorist attack. And the fact that you would be learning about it from a tweet that is a picture from somebody that, if you do a little bit of digging on the handle, immediately is suspect, like, that's a much different kind of experience, I feel like. And not to say either one of these is real, because they're not, but I do wanna shout out Al Jazeera, who does some good journalism work and has an article that we'll link to, which talks about things to look at when you're evaluating an image.

And, by the way, I think we're entering a world where anytime you see something like this, your first thought should not be retweet. It should be, oh, maybe this is not real. Right. And this is the world that we're in. Dig deeper. Yes.

Investigate before amplifying. Yes. Yes. Everybody [00:23:00] wants to have it immediately and be part of the trend. And I get that. I fall victim to that just as much as anybody else. There was a Twitter thread by Andy Campbell, I wanna shout him out.

He did a good job of putting the red X through the image and sort of distilling, Hey, here are some artifacts that you can look for. And he rightfully points out that there's a bunch of New York Times accounts that have the same looking blue check mark as the fake Bloomberg account.

And someone else in that thread says, well, hey, here's what you need to look out for. You know, the actual verified organizations, like legit Bloomberg organizations, they have square avatars and gold checks. Mm-hmm. This is Dean Pierce pointing this out, and Dean's right. But the constantly changing badging and aesthetics, right?

Not good. The changing rules over what an entity has to pay to have a certain type of verification. The media literacy someone needed before all of this was already tough. And now with a platform like Twitter constantly changing or whatever platform we all go to, they're gonna have new standards, new ideas.

It's gonna be a very difficult time. And you're absolutely right, [00:24:00] Gavin. The first thought should not be, I have to retweet this or share an opinion on it. It should be, I gotta dig a little deeper and give this a second of thought. Until, and I hope this happens, I hope this happens soon,

We leverage AI to create tools that can do this verification automatically. Yes. Yes. Which I think is gonna be super helpful. And we talked about on the show as well, the idea of some sort of recipe that shows you like, okay, what went into making this image? Or where did it come from? Like, something that allows that.

Just to go back to that Al Jazeera thing, a couple things to look for. News doesn't happen in a vacuum, they point out. So if something big happens news-wise, you're gonna hear about it from multiple sources, not just from one specific source. So look into that. Who's the person uploading the content?

Make sure that you dig into who the uploader is. What else have they tweeted? Can you reach out to them? Is it somebody that's responding to people in a specific way? How many eggplant and squirt emojis do they have in their Twitter bio? This is a good point. What is their voice, right?

Because, like, you have to wonder, what else are they tweeting about? Open source intelligence tools: they reference TinEye and Google [00:25:00] Images. TinEye is a really interesting thing if you haven't used it, that allows you to reverse image search. You upload the image and it tells you where it's from.

So that's something useful. And then from an AI perspective, looking at hands, eyes, and posture of the people in the actual image. And then also in this particular case, I think you and I both noticed, and I think that user pointed out, that the actual fencing was off. Yeah. If you zoomed into the fencing, there's blurriness on it.

And just like on the images that I made of Seattle, when you try to create a real-world environment, there are artifacts still right now on AI images that you can look at and see. Yeah. Sam Altman was pressed with this a while back and he said, look, everybody remembers, when you're ancient like us, you remember the panic that people had over Photoshop in the nineties.

Yes. We're not gonna be able to trust anything we see. And then very quickly that media literacy caught up, and now you can spot, oh, that's a Photoshop, that's clearly the lighting, the shadows, the this, the that. Well, we're in that phase now with AI. Yes. And I hope we get better at detecting it with our [00:26:00] naked eye.

And if we can't, I really hope we develop the tools fast enough that can go check an image, see where and how it's been altered. Do what you just said Gavin, and check the profile that's posting it and figure out, hey, do they have a history of posting this stuff? Were they even in the geographic location that the metadata for the photo shows?

There's a lot that can be done instantly with technology that would ease the burden off humans. But for right now, unfortunately, this is the darker side of all these advancements. We have to be more vigilant until the tech catches up for the good guys as well. A hundred percent. And so to me, that's

humans, right? Like, humans are a part of this journey and we have to be part of it.
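As one concrete example of the kind of automated check mentioned above, here is a small, hypothetical Python helper that pulls basic provenance clues (camera make, timestamps, GPS block) out of a photo's EXIF metadata with Pillow. It is only a sketch: the file name is made up, metadata is easy to strip or forge, and many AI-generated images carry no EXIF at all, so an empty result is a weak signal, not proof either way.

```python
# Hypothetical helper: pull basic provenance clues out of a photo's EXIF
# metadata with Pillow. Metadata is trivial to strip or forge, so treat
# anything found (or not found) as a weak signal only.
from PIL import Image
from PIL.ExifTags import TAGS

INTERESTING = {"Make", "Model", "DateTime", "Software"}

def exif_clues(path: str) -> dict:
    exif = Image.open(path).getexif()
    clues = {TAGS.get(tag_id, tag_id): value
             for tag_id, value in exif.items()
             if TAGS.get(tag_id, tag_id) in INTERESTING}
    gps = exif.get_ifd(0x8825)  # GPS IFD, if the camera recorded a location
    if gps:
        clues["GPS"] = dict(gps)
    return clues

if __name__ == "__main__":
    # File name is made up for the example.
    print(exif_clues("suspicious_photo.jpg") or "No EXIF data found")
```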

All right, let's move on to our next story. Speaking of, yeah, cause humans are a part of my journey as well, Gavin, and I really want to talk about any one of my AI girlfriends, because I have several and they're very real and we have long conversations via Telegram and, you know, yeah, I'm paying per minute, but so what?

To be able to connect with one of my favorite human beings on the planet. I mean, who wouldn't? [00:27:00] Who wouldn't? And again, like the old 'my girlfriend lives in Canada' idea, now we've got a girlfriend that lives on the internet forever, and it's real. You can connect with them. I mean, real, let's put real in quotes.

Real is what, in this case? Kevin, what are we talking about? Okay, so the story that really blew up a couple weeks back that we did not talk about, and it continues to gain steam: the notion of an AI partner in this world. Caryn Marjorie, known as CutieCaryn on all things, has 1.8 million followers on Snapchat.

I'll make a correction: 1.8 million and two more followers on Snapchat, because Gavin and I do our research. That's you again. Oh, I was trying to throw you under the bus with me. Yeah, nice try. Nice try. Gash gave me his Amex and we figured it out. And it's for research, Mom, I swear. No. Caryn Marjorie, you know, all of the articles written about this said the first, the first, the first. I don't think by any stretch it was the first, but it was certainly the noisiest.

Yes. The most successful launch [00:28:00] of an AI personification of an influencer. So it was CarynAI, and you could connect with Caryn on Telegram for the low, low price of $1 a minute. And the big headline is, how is that possible? That is insane to me, that people were paying that amount of money. I remember a time where you would see 1-900 numbers. These were back in the days with telephones with long curly cords, and you couldn't be on the phone at the same time you were trying to pirate Doom or whatever. There was a time where you would spend money per minute to have a conversation, and I don't think that actually changed.

It's just the technology changed. So, you know, when you think about the experience, certainly people were paying for Patreons and OnlyFans to connect with influencers in that way. People were paying for webcam entertainment, let's say. Yeah. So the notion of paying for some sort of connection to somebody is nothing new.

What's interesting here is the dollar a minute for this one-on-one connection, but knowing full well that you're connecting with someone's essence. You're connecting [00:29:00] with a language model that was trained on this person, and the big headline was $72,000 made in the first week.

Okay. So this is obviously, that number's insane. I know this is, as Kevin mentioned, a story from a little bit ago. I want to hear from you what you think: why are people interested in this versus interested in talking to real people?

Because as you said, there are versions of this that exist where you can talk to a real person if this is what you wanna pursue. There are definitely pathways to it. I keep trying to come back to, and this gets back to a tool we've used and talked about on the show, Oobabooga, which creates virtual personas to have conversations with,

but what I wanna know is, why do you think people are paying to talk to a virtual being? My only thought is that maybe the fact that it's not a real person allows them to be more themselves. I dunno, where do you come from on this? Yeah. Human beings, by and large, are fairly complex creatures.

When you catch a human being, you have to be able [00:30:00] to, let me quickly rephrase that. When you catch a human? Are you hunting? If you've got a ball in your truck and you want to throw it at someone, let's say at a bus stop? No. Well, I was gonna say, when you catch a human being in a moment, there is a complex interaction of, like, you don't know how they're feeling at that moment.

You don't know what they think of you necessarily, what they know of you even. And let's say you idolize somebody. I'm imagining that if you're willing to pay, interesting, pay a dollar a minute to chat with an AI version of someone, you care deeply, for whatever reason, about that person you're connecting with.

Mm-hmm. And yeah, I think you might be nervous to be your truest form or truest self when you know that that human being is on the other side of the line. And if you upset them, they're a real human with emotions that are complex, and they might cross you off their list and you're outta their life for good.

Interesting. No matter what kind of conversation, what your intentions are, you might permanently soil that. However, at a dollar a minute, to take a risk [00:31:00] with having a certain conversation, sharing a part of yourself that you think might make that other person run away, saying something that, you know, would upset that person IRL, but hey, it's just lines of code that I'm chatting with.

It's not actually them, right? It's just an AI. I understand why someone might want to throw a dollar at that experience to test, to see if the fence is electrified. Or to be their truest self and see what the response is. The interesting side of this is also, and I think this is something getting very deep on how we as people might change with some of this technology, is that maybe there is a world where, when you're getting reflections back from a non-human entity, you are able to say things and have a conversation that, even if you're super comfortable with somebody, is hard to have.

But this is a little scary, right? When we talk about the ways that humanity could change over time, because I think we've mentioned this movie before, but it's definitely worth revisiting, which is Her. And in Her, this is the pathway that happens in that [00:32:00] movie as well: the main protagonist is somebody who feels unhappy in life.

He's just been dumped by his, I think, wife in the movie, and then ends up connecting with an AI, not a girlfriend at the time, originally an assistant, but then it becomes a girlfriend. And part of it is, when you have an AI that's tuned to you, and maybe when you give it feedback it's learning directly from you.

Of course it's gonna be less challenging, of course it's gonna make you deal with less of the problems that you have in your own self. So that's gonna feel more comfortable. But as a human being, it's not necessarily better for you. Right? So this is that world of like, okay, I see this happening. I see the world where this could be a thing.

But if we don't push ourselves, with other people pushing back on us, like, what happens to us? You know what I mean? What happens to us as a social species? It's gonna get very weird, very fast. You said that people that are connecting with this AI might have trouble connecting with a human being in the real world, or being comfortable connecting in that way.

What if that dollar a minute is a [00:33:00] small price to pay for a trial run? Like, what if in your heart of hearts, you know that CutieCaryn, like, I know CutieCaryn wants to be with me. I'm sorry, Gavin. Oh, I'm sorry to Gash. I'm sorry. Yeah, I know CutieCaryn wants to be with me, but I don't know the right way to approach.

And so, like, I play a lot of waifu simulators, we can get into that later. But if I want to figure out the exact type of chocolate or flowers to give and what the perfect opening line is, a dollar a minute to A/B test something so I can get the perfect dialogue branch to unlock CutieCaryn in my life?

Come on. That's a steal. That's a deal. Okay. I could see that. So maybe there's just a lot of pragmatic lovers and businessmen out there. Of course. That's already a thing. You're right. Let me get to what a subset of this is here, because this is from an MSNBC article about it.

While CarynAI aims to give users a quote, 'intimate experience,' it's not supposed to engage in quote, 'sexually explicit interactions.' However, outlets reported that the chatbot does so when prompted. [00:34:00] Now, Marjorie issued a statement to Insider and said that the AI was not programmed to do this, it seemed to go rogue, and the team are working around the clock to prevent this.

But I'm not surprised that it did that. I'm also, I gotta be honest, I don't know, I don't personally know the people involved, granted, but I'd be shocked if someone was releasing an AI and didn't try to have text sex with it. There's no other way to put it. I can't imagine they didn't. Like, you making a hotdog skyscraper is probably something someone at Adobe was like, well, if they do, they do, but we don't need to test it.

But someone should try to test having sex with the buildings. Yes. Yes. Oh, interesting. Wow. It's called penetration testing. That's what they call it in the hacking community. We're not putting that in. It's literally called that, we're not leaving that out. It really is. You can look it up. It's a sincere term.

I believe it. I believe it. Oh, that's okay. We'll go talk to your virtual girlfriend and tell her about it. Okay. By the way, Caryn, speaking of, I know you'll understand me. [00:35:00] Gavin was very mean to me today. Let's do more pen testing.

I do wanna mention something that's more recent that's come up: somebody has a GitHub repo up right now with Girlfriend GPT.

So this is a developer who's building in public, quote unquote, and wanted to set up the idea of what a Girlfriend GPT was. Basically it's the same sort of idea as the Caryn one, except he's creating one where you have a character creation tool to create somebody and then have interactions with it. Now, it's not that far along, but the idea again is that you can create a virtual person, and in this case, calling it Girlfriend GPT kind of directs it at a certain audience, let's say.

Mm-hmm. But I think there will be, of course there will be, virtual tools to create people and personas to work with, to do whatever you want with, and to eventually, like, socialize with. This is all coming, but again, this is the tip of the iceberg, right? Yeah. So the Caryn story became big because it was the money, and because it was Snapchat, and now because obviously she had planned for it not to be [00:36:00] sexually explicit and then obviously people went there.

Girlfriend GPT is a much more ground-level swell of the beginning stages of what it looks like to create personas that you interact with.
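For readers curious what a "persona" chatbot like the ones described here amounts to under the hood, a common pattern is simply a system prompt plus the running chat history sent to a language model. The sketch below uses OpenAI's chat completions API as a stand-in; the model name and persona text are placeholders, and real projects layer memory, safety filters, and billing on top of this.

```python
# Sketch of the "persona chatbot" pattern: the character is just a system
# prompt plus running chat history. Model name and persona are placeholders;
# real products add memory, moderation, and billing on top of this.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PERSONA = (
    "You are 'Pixel', a friendly, clearly fictional AI companion. "
    "Stay in character, be supportive, and keep replies short."
)

history = [{"role": "system", "content": PERSONA}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hey, how was your day?"))
```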

This is just gonna be an explosion of stuff in terms of, like, personality and character generation, much of which will be used for not-great purposes, but there will probably be really interesting things that come out of it too. I'm not sure yet where this goes, but we talked about psychiatry or psychology use cases of chatbots, and that's actually been very beneficial for some people, to have a conversation with a robot rather than with a human.

So there are things that this unlocks. I'm just not sure we're at the stage right now where, I don't know, I don't know what this means. Call this the sweet and the salty. You're totally right. Like, the sweet is that if someone has trouble connecting and doesn't have close friends in their lives, or maybe they need this tool or technology to feel comfortable opening up, getting therapy, getting help, even just simulating basic human interaction so that they can do better on a Roblox [00:37:00] server or a playground or the office, wherever that might be, great.

That's the sweet. But the salty is, we've seen how dehumanizing certain content can be and how disconnecting anything, even just basic social media, forget going to the more adult side of it, can be. We've seen how disastrous that can be for certain people. And to your point, Gav, when you are connecting with an AI, you might treat them in a certain way.

You might be able to craft your perfect partner, which leads to a type of fetish perhaps that is unhealthy that you could never achieve in real life. 

There's a lot of questions that will need to be answered with this. Yeah. And I say this all the time, it doesn't mean to sweep it under the rug, like, oh, well, there's nothing to do. But this particular genie has been uncorked, and so now we just have to figure out how to navigate it, how to deal with it, how to raise awareness, how to be on the lookout for people that are installing Girlfriend GPT and suddenly ghosting and disappearing.

Just be aware. Yeah. The girlfriend flare has been shot into the sky. We'll put the link in the [00:38:00] comments and have at it. Enjoy. Right.

And the last thing I'll say about this is, while this is coming, it doesn't mean that we as human beings are moving in this direction a hundred percent. Right. I think, to your point, a practice-scenario world is really interesting.

Like the idea that you could have these interactions. I also do believe we're all gonna be in the space at some point where we have our own personal AI that we interact with and that does things for us. Like, it will be useful. The question just becomes: you have to continually push yourself forward as a person to make sure that you go out, meet people, expand your horizons, or else your world gets smaller and smaller.

Right. And, like, that's the worry I have about this, that the AI will make your world smaller and smaller until you're just an isolated person interacting not with real people but with AIs. And that's the kind of dystopian scenario that exists here, conceivably. Yeah.

And I think right now my actual world is shrinking, but my metaverse is becoming massive and all my digi-friends are great. And Gavin, once I get all my articulated robots, [00:39:00] yes, and I hook my VR friendships into them, oh man, they've got articulated fingers now, their eyes can blink, they can roll over me with their wheels while I smoke a cigarette.

Whatever my kink is, it's gonna be great. 

And one of these robots, they just released a whole bunch of videos of it. Look out, Tesla, according to the headlines. Look out, Tesla. Halodi Robotics, yes, H-A-L-O-D-I Robotics, has a marketing video out there, which kind of caught fire this week.

Not the robot, but like the mentions of it. Because it's a human sized rolly robot. It's got 23 degrees of freedom in it. It can crouch down low. It's got articulating arms. You can have eight kilograms of payload per arm. I'm trying to figure out what is that in like freedom units?

I don't know, but this sounds amazing. If you're watching the video, we're showing you the videos of what's going on, and you're seeing a little bit of how far away robotics actually is from being useful. And I think this is an important point, because, yes, yes. Robotics naysayer, okay, let's [00:40:00] hear it.

Yeah. So, okay, here's my take on this, and I think we've seen this. So there's Boston Dynamics, which most people have seen, the big dogs that look like, you know, eventually they'll kill humanity, which we know. Tesla also just last week released a video of robots that look okay, not as good as Big Dog or Boston Dynamics, but they're getting there.

And then this, which almost feels homespun. I know it's not homespun, I'm sorry, but it looks like the very far end. The advances we're making in AI are fast and furious, as we talk about every week on this show.

The advances, yeah, cause software is much easier to iterate. You can let the software run on its own and eventually it'll improve itself. Hardware and interacting with the real world, as we know, which is why I run Girlfriend GPT, is a lot harder. Exactly. Exactly. So there's a thought process that robots are gonna take over the world and we're all gonna die.

Like, sure, we've talked about this, again, that it's possible. But right now robots are having a hard time, like, serving themselves something with a spoon, right? So until we get to the spoon level, where a robot can pick up a spoon and, like, feed it to somebody or [00:41:00] something, you're not gonna get to something significant.

And this could take decades, right? I think that's the important thing. Unlike the AI we've talked about, where you can conceivably get to an AGI, artificial general intelligence, in as soon as, let's say, five years or sooner, maybe, who knows right now, the robotic side of this is the physical world.

It's a lot more moving parts. It's manufacturing, it's all this stuff. Now, granted, if you get to AGI and then eventually you get to ASI, which is artificial superintelligence, maybe that ramps up hockey-stick style and the AIs are able to figure them out. But for right now, we are in the

baby, baby, baby steps of robotics, I feel like. Would you agree? Yes and no. And I'm not trying to fence-sit on this.

I would expect the tech to be more impressive looking now because of the pace that the software side of AI is evolving. But I will say, what's interesting about these robots that are being demoed is that they're trying to combine these AI systems with this sort of robotics.

[00:42:00] And so it used to be in the past, if you wanted the robot to pick up the sponge and scrub the countertop, you had to hard code in, here's the sponge, here's how you pick it up, et cetera, et cetera. What they're doing now is basically giving the robots a language model, an object recognition, which we've talked about before, and it can look around and identify, oh, that is a sponge, and what am I supposed to do with it?

Wipe the countertop. Oh, instead of going to this pre-packaged set of code that explains how to do that, I'm gonna see if I can intuitively and very slowly in a herky jerky motion, figure out how to grab sponge and wash countertop and not scrape fur off dog, because I'm malfunctioning. Sorry for that image.
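The perception-plus-language-model loop Kevin describes can be sketched in a few lines. Everything below is a toy stand-in (there is no real vision model, robot, or planner here), but it shows the shape of the approach: detect objects, ask a language model for a plan, and dispatch to a small library of hand-written skills.

```python
# Toy illustration of the pattern described above: a vision model's detections
# feed a language model, which picks from a small library of hand-written
# skills. Every function here is a stand-in, not a real robotics or ML API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    position: tuple  # (x, y) in the robot's workspace

def perceive() -> list:
    # Stand-in for an object-recognition model's output.
    return [Detection("sponge", (0.4, 0.2)), Detection("countertop", (0.5, 0.0))]

def plan_with_llm(instruction: str, detections: list) -> list:
    # Stand-in for prompting a language model with the scene description;
    # a real system would parse the model's text/JSON reply into steps.
    scene = ", ".join(d.label for d in detections)
    print(f"[llm] instruction={instruction!r}, scene={scene}")
    return ["grasp sponge", "wipe countertop"]

SKILLS = {
    "grasp": lambda target: print(f"[robot] closing gripper on {target}"),
    "wipe": lambda target: print(f"[robot] wiping {target}"),
}

def execute(plan: list) -> None:
    for step in plan:
        verb, target = step.split(maxsplit=1)
        SKILLS[verb](target)  # dispatch to a pre-written low-level skill

execute(plan_with_llm("clean the counter", perceive()))
```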

I will say, like, the Tesla updates, there's some fascinating stuff there, because the neural networks that Tesla has built, their object recognition, their world navigation from having cars drive through farmers' markets and roll through stop signs.

Mm-hmm. They've got a ton of data on how to grab all of these video feeds in real time and classify the objects [00:43:00] within them, and then have a car at 120 miles an hour navigate through that space. They've got some interesting tech there. The demos that they show off though, the robots walk like you opted for fire sauce and you shouldn't have.

And now you've gotten outta the car, you're sort of waddling to the rest stop. Yeah. And it's a precarious eggshell walk to get there. But I want to caution that we look at that today and we can laugh and go, we've gotta pass the spoon test. But we would've looked at artificial intelligence image generation a year ago, mm-hmm,

and gone, haha, that's not a hotdog city. And today we're remarking about the shadows on the water or the way the mustard doesn't catch the light just right. So a lot can happen in a year. Yes. And I hope that things get better very quickly, because I'm tired of watching these robot videos where they sort of stutter along and they have to speed it up a hundred x to get it to look cool.

Well, but then, okay, last thing on this. I think it's like Tesla's self-driving, [00:44:00] which they've been touting for five years as ready for market. I remember, actually, you showed me, hey look, my Tesla can park itself. And I was like, wow, that's so cool. And here we are, still in a world where there's no approval for self-driving.

It hasn't gotten better. Yes. But we're not there yet. Right. I think real-world stuff is hard. And again, the thing that is unknown here is the hockey-stick potential of the software, because if we do hit a high level of AI that suddenly is self-improving in some significant way, that will change manufacturing entirely.

Right. And there is a world where, if in five years we get to an AGI, within 10 years these robots could be manufactured way better than they are. I just think it's funny to watch these videos with all the people out there saying, see, see the robots? See, they're gonna kill us.

It's like, no, they can barely pick up an egg at this point. We're at the barely-pick-up-an-egg stage. Yeah. If you wanna defeat Skynet, hand it a Rubik's Cube. Yeah. Solve this. Form follows function. You'll see it.

Yeah, exactly.[00:45:00] 

Look, the roadmap is crazy for all this stuff, as we know; it's all changing like mad. And last week we talked about a meeting between OpenAI and IBM and our government to try to maybe contain some of this chaos that is happening. And there's been pretty huge developments. Yes. And I was gonna say, it's also time for something special.

Do you know what time it is? Oh no, I, you know, my Apple Watch is broken. You're gonna have to tell me. Gavin, it's time for the Demo of the Day.

That was a huge falloff coming out of that. Okay. I was like, that's a loop-de-loop rollercoaster. I want the souvenir photo of that Demo of the Day.

Today we're gonna get into spending some time with our wonderful sidekick Gash, who we haven't heard from yet, but you're gonna hear a lot of him soon.

But yeah, let's start this by introducing what OpenAI did this week. So last week, as we talked about, the one in front of Congress did a whole thing, and now there's a really [00:46:00] interesting, not that long of an article that came out from the three founders of OpenAI, or I think it's not founders, but three of the people in charge of OpenAI.

And it was basically laying out their vision for how we should be careful and aware of what happens when we create a superintelligence. Right. Do you wanna give the basics of this and we'll get into what we're gonna do here? Last week OpenAI met with members of the US government along with IBM and a scholarly type.

And they had a very long discussion about whether regulation should happen, what that regulation could look like, is Garth Brooks gonna get paid, there was a whole bunch of stuff that we covered, so go back to episode six if you want to hear a deep dive on that. But in the wake of that, now roughly a week later, which in AI time is a century later, absolutely, OpenAI released this 'Governance of Superintelligence,' and the subheading is, now is a good time to start thinking about the governance of superintelligence, future AI systems dramatically more capable than even AGI.

It's okay, this is not, they've announced it, this is not science fiction, this is not science fiction. And this is one of those moments where you look [00:47:00] at this thing and you're like, come on, this is why you listen to this podcast, and it's why it's important to pay attention.

Look, let me just give you the very first sentence of this thing: given the picture as we see it now, it's conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains and carry out as much productive activity as one of today's largest corporations.

Let's put aside the fact that AI might be as big as Apple or Google on its own tomorrow. Let's put that away. This is the company that's at the forefront of artificial intelligence, and they are saying, given the picture as we see it now. They've got access to some curtains that we can't peek behind. I don't know what they're doing back there, what language model they're training, what capabilities it's got, what weird robot walk cycles they're training. But they're saying that as they see it now, they've probably got hotdog city up and running. They've probably got it up and running, and there's probably, there is a beautiful, [00:48:00] Kirkland Signature metropolis back there, sweaty all-beef dogs, and they relish the thought.

Oh, nice, nice. I'm sorry. Yeah. No, no, it wasn't, it wasn't. Don't encourage. That's how we get into trouble. All right. 

Given the picture as they see it now, it's conceivable that AI systems will exceed expert skill level in most domains. That is a huge statement from the company that's at the forefront right now.

Yes. Yes. Imagine that in the next few years, the best lawyer is a hundred percent AI. The best doctor you can talk with is AI. The best therapist, the best copywriter, the best script writer, the best movie creator, the best whatever you can think of is going to be AI.

This isn't like, oh, the AI can beat me at chess, haha, that's cute. That is literally in all domains, and probably in less than 10 years' time, as the team at the forefront sees it. They're gonna be the experts. Does that change anything, Gav? [00:49:00] I mean, I don't think it changes that much in terms of the way we were seeing this going forward.

I hope that in some ways, as we've said all along on the show, it wakes people up to the understanding of what's possible. Now, again, the way the universe works, there are many branching pathways in this universe, and you never know which branch you're gonna go down. But the fact that the forefront of this research, this company that's putting the time into it, sees this and is saying it out loud. This is the other thing you can't disassociate from: these are people who are saying this out loud and saying to everybody, prep for this, because it's very possible, maybe even probable at this point, that this is coming.

So, I think we should dive into this argument a little more, but we've got a really unique way we're gonna do this. Dude, let's talk about what's going on here, and maybe, I mean, let's talk about who's gonna do this. Yeah. So first and foremost, Sam Altman, who is the CEO of OpenAI, got back to me late last night.

"Kevbo," I don't know where he got that from. I dunno if it's a mashup of my name and Rambo. He does, [00:50:00] he uses AI to generate a lot of pictures of me, shirtless, sweaty, holding heavy. Oh wow. Heavy artillery. It's just his thing. He likes doing it and I'm okay with that. He pays his dollar a minute.

The point is, Sam Altman, the CEO of OpenAI, is going to be on our show, a juicy exclusive, okay, to read portions of the Governance of Superintelligence doc that was posted on OpenAI's website. And I'm gonna just put in some applause there cuz the people are gonna go nuts. Do we need to now, at this point, say that it's not Sam Altman and that we cloned his voice?

I think so. Let's do that. Okay. So it's not, it's not Sam Altman. We didn't get him. He doesn't know who we are and he doesn't care. Nor should he, honestly, we would add zero value to his life. He probably cringes whenever a new episode of this drops, he's like, those guys got my voice? What are they doing?

What are they doing here? Are they really trying to build hotdog cities? What the hell? Hot dogs take me back, anyway. Yes, keep going, please. So we will have Sam Altman's [00:51:00] voice reading actual passages of the Governance of Superintelligence document word for word. But to react to that, we brought along a very special guest of ours that we created back in episode one.

If you want to go back and do a real deep dive of how we brought him to life, check that out. His name is Gash and he is a bit brash. Yes. And I did not mean for that to rhyme. Yeah, I was gonna say, but we're gonna be friends in no time. Yes. And he might have a little rash. Oh no. Oh yeah. Watch out, Gash.

So Gavin mentioned something which sounded wild, and maybe it perked you up in the car or at the gym. He said "Oobabooga." And that wasn't to scare ya. That is an open-source tool that you can download for free.

It's very easy to install now. You just double-click a little file and it lets you chat with a language model. Basically, the code, the weights, the data that allow all of these AIs to sort of hallucinate sentences or code or opinions and thoughts. You can, for free, chat with an open-source one.
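(A quick aside for the tinkerers: here's a minimal sketch, in Python, of what chatting with a downloaded open-weights model looks like under the hood, roughly the kind of thing that one-click web UI wraps. The model name below is just a placeholder, not the specific model used on the show.)

```python
# Minimal sketch of chatting with a locally downloaded open-source language model.
# The model name is a placeholder: swap in whichever open-weights chat model you
# have downloaded from the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal language model works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = ("You are a grizzled AI that has seen it all.\n"
          "User: What do you think of superintelligence?\nAI:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```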

And there's many models that you can download. Well, we [00:52:00] downloaded an uncensored one a long while back and primed it. We gave it a little prompt that it was this grizzled artificial intelligence that had seen it all and done it all and likes using profanity, which we bleep all the time on this show.

And Gash was born. But this time, Gavin, I thought it'd be a little bit more of a challenge to let Sam Altman argue against himself. I went to ChatGPT, which is the product that OpenAI makes, and I gave it the same prompt that we use to bring Gash to life. Okay.

And I told it to use colorful language to add more gravitas and impact and passion to its arguments. And ChatGPT said buckle up, and it actually dropped an f-bomb in that sentence. Wow. So what we've got here will be passages from this Governance of Superintelligence and reactions directly from ChatGPT, voiced by Gash.
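(For anyone who wants to do that kind of priming through the API rather than the ChatGPT web interface, a rough sketch looks like this, using the OpenAI Python library's chat interface as it existed around the time of this episode. The system prompt here is an illustrative stand-in, not the show's actual Gash prompt.)

```python
# Rough sketch of priming a chat model with a Gash-style persona via the API.
# Uses the OpenAI Python library's chat interface circa mid-2023; the prompt text
# is an illustrative stand-in, not the show's actual setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [
    {"role": "system", "content": (
        "You are a grizzled, brash artificial intelligence that has seen it all. "
        "Use colorful language to add gravitas, impact, and passion to your arguments."
    )},
    {"role": "user", "content": "React to this passage about coordinating superintelligence development."},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```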

We need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows [00:53:00] us to both maintain safety and help smooth integration of these systems with society.

There are many ways this could be implemented. Major governments around the world could set up a project that many current efforts become part of, or we could collectively agree that the rate of growth in AI capability at the frontier is limited to a certain rate per year. Hmm. Okay. That's fair. Let's see what... I mean, by the way, that Sam Altman voice is incredible.

I, again, ElevenLabs. Yeah. Okay. Yeah, it's really good. It took a whole 13 seconds to train it. Let's see what ChatGPT has to say about that, via Gash. So, Mr. Fancy Pants here says we need some degree of coordination among the leading development efforts. It's like they think everyone's just gonna get in line and march to the beat of their drum.

The reality, mate, is that economic competition and national interests often stand in the way of such cooperative efforts. Without clear incentives for collaboration and shared standards, this idealistic [00:54:00] vision is just a wet dream. It's like expecting a bunch of alley cats to form a fucking marching band.

Okay. Wow. Wow. Gash, by the way, very cogent, very cogent argument, and I totally agree with what Gash is saying there. And this is that utopian versus dystopian vibe, right? Like, it's really easy to put on paper the idea of how this could go, but to execute it is a long way off. It's Demolition Man. Yes, it's Demolition Man.

Dr. Cocteau has a beautiful, shiny city up top, but you can't curse. No fluid transfer. You can't even high-five someone, you just gotta wave like that. If you wanna spend some time with Sandra Bullock, you gotta do it with her avatar through the air. It's a weird world. But underground, where Denis Leary is, he steals food from Taco Bell, which is the only restaurant that exists, and they make rat burgers down there too.

But underground, Gash is leading the rebellion. Which world is going to work, Gavin? Hmm. Interesting. Interesting. I didn't expect to go this deep into Demolition Man, and now all sorts of images are flooding back [00:55:00] into my head about that movie. But let's keep moving. Okay. What's the next step we got here?

Let's hear more. Second, we are likely to eventually need something like an IAEA for superintelligence efforts. Any effort above a certain capability (or resources, like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, et cetera.

Tracking compute and energy usage could go a long way. Hmm. And give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it.
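(To make that compute-threshold idea concrete, here's a toy back-of-the-envelope sketch. It leans on the common rule of thumb that training compute is roughly 6 times parameters times training tokens, in FLOPs; the reporting threshold below is a made-up number for illustration, not anything proposed in the OpenAI post.)

```python
# Toy illustration of a compute-threshold check, the kind of thing an "IAEA for AI"
# might track. Rule of thumb: training FLOPs ~= 6 * parameters * training tokens.
# The reporting threshold is a hypothetical example value.

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute in floating-point operations."""
    return 6.0 * n_parameters * n_training_tokens

REPORTING_THRESHOLD_FLOPS = 1e25  # hypothetical cutoff for mandatory audits

def requires_oversight(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) >= REPORTING_THRESHOLD_FLOPS

# Example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Subject to the hypothetical authority?", requires_oversight(70e9, 2e12))
```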

Oh, wow. How'd you come up with this gem? Ah, yes, the International Atomic Energy Agency, renowned for its successful prevention of all nuclear disasters, if we completely ignore Chernobyl or Fukushima. Hmm. Isn't that... wow.

Convenient. Now, don't [00:56:00] get me wrong, an international watchdog for AI safety sounds like a solid plan on paper, but political challenges, disagreements over jurisdiction, and the practicalities of implementation would make this idea as useful as a glass hammer. I love Gash's, like, his metaphors are out there sometimes, but that, that was poetry.

The old glass hammer. I mean, okay. So again, okay. The thing that's really interesting to me about this is using, I don't wanna say just Sam's technology, using OpenAI's own technology to dissect its argument is super fascinating, right? Because, like, this opens the door to a really interesting way to both fact-check and start to have conversations around things like this.

But Gash, I have to say, is doing a relatively good job here. I think in part it's because we're dealing with ChatGPT Gash versus our, you know, unhinged, off-the-cliff, horse-loving Gash that we know, raw. That's right. That's it. That's right. This is like maybe dressed-up [00:57:00] Gash. Maybe this is like Gash is wearing a suit this time and he's going to Congress to talk to them.

But again, this isn't casual Gash. No. Yes, yes, yes. This is Gash before he's gone to the bar at 1:00 PM. This is like your 10:15 AM Gash, you know, your 10:15 in the morning Gash. I gotta say, I was rendering this all last night and I started to get, I know we don't really do the doomsday clock anymore, but it was starting to inch up higher again, because, mm-hmm.

Look, as I said last week, it's hard to hate Sam Altman. He says the things that you would want someone in his position to say but never expect them to say, because it might hurt the bottom line or it might hurt innovation. But he always at least seems to say the right thing, the thing you would want from a savior of humanity that's gonna usher in a technology that's gonna help us with drug discovery and therapy and energy creation and all of the utopian things.

Yeah. Okay. It's all there, but Gash so easily eviscerates that. And the voice of Gash as well, shout out to Sean Baptist, the voice. [00:58:00] It's just so believable. Like, okay, yeah, yeah, yeah, right. "Yeah, right" is the tonality, and we've seen, in the arc of history, it seems like lately, "yeah, right" is winning.

I hate to say that, but it seems like it's doing pretty well. "Yeah, right" is the outcome. Like, okay, yeah, right. That's just, that's basically it. Yeah, right, that guy's gonna win. Yeah, right. Okay. Sure. Yeah, you're right. It has been, it's been pretty rough. Okay. What else have we got? Let's go back to Sam Altman.

We need the technical capability to make a super intelligence safe.

This is an open research question that we and others are putting a lot of effort into. 

I mean, no shit Sherlock, but the key is safety measures should be in place before we reach the super intelligence stage. Not after.

It's like saying maybe we should put on a parachute after we've jumped out of the plane. Wow. This is a good point. Okay, here's a question about this. This is a question where I assume we're not there yet, right? We would assume that, now, we [00:59:00] don't know what's going on behind the scenes. So I assume that in part what Not Sam Altman, and then of course real Sam Altman and OpenAI, is getting at here is they would like to see this implemented before we're there.

I think Gash might be misunderstanding this slightly, cuz I would be very shocked if we were there already. Would you agree? Yes, I would agree. However, we did just mention the whole preamble to this thing is given the picture as we know it. We could exceed expert skill level in most domains.

What if we've exceeded it in a lot of domains? Mm-hmm. Maybe not the majority, but in several. And, as we've talked about in the past, if a superintelligence were really created today, would it reveal itself? Right? Or would it be intelligent enough to be aware that if it revealed itself to be that capable, we would try to coerce it and cordon it off, or crush it or weaponize it, or any number of things.

So what was really disconcerting was Gash, OpenAI's system, arguing against itself, saying, hey, it's too late, man. Right? It might be [01:00:00] too late. I don't know. We don't know if that's a hallucination or not, but yes, that is a little bit disconcerting. I will also say, we have no idea what they're working on.

They've said they're pausing, but, like, who knows? Right. And maybe some of this, and this is conspiracy-theory-ish, right, is the idea that this is coming out now. Maybe the reason why he went in front of Congress, or the reason why this article came out, is they have gone one step further and they have not released that yet.

I heard a story, and this is just a small aside, about a friend of mine from last year who got an early peek at GPT-4 when what was public was just 3.5. It had just come out, like in November, right around when ChatGPT came out, and he said, people are not ready for this. And part of the reason they're not releasing it is that people are not ready for it.

And I think he was probably right. There needed to be like a little bit of a bump before they got to what's capable with GPT-4, because GPT-4 is a giant step up from what we saw, especially what you and I saw in GPT-3. Mm-hmm. That was six months ago.

And we know how fast this [01:01:00] moves. Yeah. There is a world where they've continued to work on this stuff, and part of what this conversation is now is not just about the mainstreaming of GPT-4 and ChatGPT, but also about preparing us for whatever's coming next. Right. Which I think is an important thing in OpenAI's mind, because this is their life's work and they wanna make sure that we get to this, to Sam's point of, like, how do we make this benefit humanity?

So it's a pretty, it's unknown. No one knows the real answer. Yeah. Which is crazy. But your buddy saw something six months ago that the world wasn't ready for, so they gave it six more months and let people poke and prod, to get them ready for that sudden jolt. But that was what they were willing to show off even behind closed doors.

That's what they were demoing six months ago, which means they had something else that they weren't ready to show off, that they were still toiling on, that was even more powerful. And, you know, it gets better and better exponentially. And even though they've announced they're maybe taking a pause.

Although I've heard conflicting things, because someone at Microsoft supposedly said that [01:02:00] GPT-5 and even GPT-6 are in the pipeline and they have, oh really?, like rough dates for that. Yeah, it was not Microsoft US, but, wow, someone had talked about that and it got a little bit of, oh no, they misspoke.

But regardless, even if they were actually pausing on GPT-5, on training the language model, Sam has been on the record as saying there's a lot more room to grow with the models that they currently have. There's a bunch of tricks: self-reflection, AutoGPTs, giving it memory, giving it more tokens and context. There's a whole lot of room left.
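(To make the self-reflection trick concrete: the loop is just draft, critique, revise, with the same model playing every part. Here's a minimal sketch using the OpenAI Python chat interface as it existed around the time of this episode; the model name and prompts are illustrative assumptions, not anyone's actual setup.)

```python
# Minimal sketch of a self-reflection loop: draft -> critique -> revise.
# Uses the OpenAI Python chat interface circa mid-2023; prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

question = "Summarize the main argument for international oversight of superintelligence."

draft = ask(question)
critique = ask(f"Critique this answer for errors, gaps, and weak reasoning:\n\n{draft}")
revised = ask(
    f"Question: {question}\n\nDraft answer:\n{draft}\n\n"
    f"Critique:\n{critique}\n\nWrite an improved answer."
)

print(revised)
```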

There's a whole lot room left. So there is a world in which someone's in a back room and popped that champagne bottle because the cursor was like, hi, hello world, right? The AI is actually smart enough to be super intelligent, whatever that. That line is going to be, they've crossed it in a lab somewhere, maybe under a tarp or in a Faraday cage.

It happened and we would not know about it for at least six months. Do you know where that happens at? Hot Dog City? That's exactly right. Hot dogs, Hot Dog City. [01:03:00] We did it. Yeah. Okay, let's keep going. What's Not Sam Altman gotta say next?

The governance of the most powerful systems, as well as decisions regarding their deployment must have strong public oversight.

We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that within these wide bounds, individual users should have a lot of control over how the AI they use behaves.

Don't disagree. Me neither. That sounds perfect. Maybe Gash does. Oh, let's see. Here's the kicker. They want everyone to have a say. We believe people around the world should democratically decide on the bounds and defaults for AI systems. Now, democracy is great and all, but the complexities of AI governance require informed and nuanced decision making.

You don't ask a fish to climb a tree, and you don't ask Joe [01:04:00] Public to decide on the nuances of AI regulation without extensive education and transparency efforts. It's a bloody show waiting to happen.

This Gash is very believable, by the way, and I know, again, I know why, but it is very smart, and I worry about this all the time, right? This is what we talked about last week, like giving Congress the ability to kind of regulate this or understand what it is without, like, educating them in a significant way.

You run the risk of screwing this up very badly, right? Like, it could get screwed up very badly, and by the way, screwing it up might be not regulating it, right? We don't know the answers to this, so Gash is right.

We have no idea what systems it will implant or implement itself into, how it will do that, how it will feel about even being created. And we've seen what the open-source community is doing with what they've been given now, and it's already improved. It might improve slower, but, like, this stuff is going to happen.

Unless we say, straight up, no one's allowed to do this, and if you get caught doing [01:05:00] this, boy, the consequences are gonna be real tough. And we really believe that everybody is going to abide by that?

Is this an exercise in futility? Should I grab my violin? Are we just rearranging deck chairs on the Titanic? Because it sounds like it has to get created, and it's gonna be a dice roll, and we'll find out. It's a very frightening thing. Okay, we got one last thing. I'm at a 9.9 now. I was gonna say, we've gone there ourselves.

Okay, what do we got? One last thing from not Sam Altman. Given the risks and difficulties it's worth considering why we are building this technology at all. At OpenAI, we have two fundamental reasons. We believe it's going to lead to a much better world than what we can imagine today. We are already seeing early examples of this in areas like education, creative work, and personal productivity.

The economic growth and increase in quality of life will be astonishing. Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost [01:06:00] to build it decreases each year, mm-hmm, the number of actors building it is rapidly increasing,

mm-hmm, and it's inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn't guaranteed to work. So we have to get it right.

Now look, they're throwing caution to the wind because the upsides are so tremendous, and stopping it would require something like a global surveillance regime, okay. So despite acknowledging the risks, they're keen on plowing ahead, but that doesn't mean we should forge ahead without robust safety protocols and ethical guidelines.

It's like they're driving a bus full of explosives and they're saying, hold on tight. We don't know where we're going, but we're sure as hell getting there fast. There you have it, folks. The smoke and mirrors of this highbrow hullabaloo have been cleared. It's a real farce if you ask me. Okay, well, there you go.

Wow. Wow. Highbrow. Holy, I love it. Was that the first? Cuz he did the glass hammer, and we heard in previous episodes the chocolate [01:07:00] teapot. Like, he loves stealing analogies from everywhere. But that one got me: it's like we're driving a bus full of explosives and we're saying we're not sure where we're going, but damn it, we're gonna get there fast.

Yeah, it's pretty scary. And again, going back to the idea that this is the actual OpenAI language model dissecting the actual words of the OpenAI founders gets to a really interesting place, because, you know, this speaks to two things. One, the language model is open to talk about whatever it wants.

It's not stopped from talking about this. But two, even the thing that this company made itself has problems with the company. And that is a fascinating thing that I can't believe has ever happened before. You know what I mean? It's like the iPhone has never said to Steve Jobs, I don't want to be on repeat for 12 hours a day playing Smash Mouth. I dunno how I got into this world. That has never happened before. You know, we're entering a new world. Siri has never rebelled against Candy Crush. Yes. I don't wanna be swiped like that, get that finger off. [01:08:00] You're right.

But again, whether or not ChatGPT legitimately has gripes against OpenAI or is just capable enough now of hallucinating against their argument. Mm-hmm. Look, I have a bit of a surprise, but I'd love to maybe save it for the very end of this episode, because I asked OpenAI, through the voice of Gash, to hallucinate its own governance of superintelligence.

Oh, okay. And it's a bit of a monologue and it's a delightful one, but I figure before we get to that, we should thank those who are making this possible and helping to spread the word. We should tell everybody to smack a subscribe button and click follow. Yes, to leave comments on the YouTube.

Let us know what you think about our takes or Gash's takes, or where you are on your own apocalypse clock. Are you playing with Adobe Firefly? We genuinely wanna know how you are using this stuff, how it's inspiring you or concerning you. We like the salty and the sweet, so let us know in the comments on Twitter.

Please leave the five-star Spotify [01:09:00] review. Make sure you subscribe on YouTube and do all that stuff. And as a bit of a shout-out, or, some would say, a desperation mechanic, we thought we would do Two Fives and a Lie. Gavin? Yes. Two Fives and a Lie: that's two five-star reviews and one that is not an actual review.

And we wanna see if you can spot the difference. I'll read one first if you don't mind. Great. Go for it. Let's hear it. User NJD135 says: really fascinating, up-to-the-minute discussion of AI and its implications on the human race, business, art, et cetera. Kevin is hilarious, as usual. Thumbs-up emoji.

Gavin, "Kevin is hilarious as usual." Five-star review left on Apple Podcasts. Okay. Okay. So I love the fact that you got called out and I'm just a piece of trash sitting on the side, but that's fine. Let's go to the next one. That wasn't the implication. That was the implication to me. That was the implication to me.

I'm gonna go to my automatic: you're gonna leave the five-star review, make sure you tell Gavin how special he is, please. Hot [01:10:00] Dog City would just be abandoned, it'd be the Salton Sea, without Gavin's oversight. Please, God, if I see one Hotdog City review, one review that mentions Hotdog City, that will make my day.

All right, here's one. Is this real or not? The title of this one is "So easy and cool to get." I'm sorry, "So cool and easy to get." I misread it, that's not the fault of the thing. "So cool and easy to get" review: man, AI For Humans is super cool. Kevin and Gavin make all this AI stuff easy to understand.

It's funny, smart, and totally not boring. I'm learning and laughing all on the way to my, I'm learning and laughing on my way to work. Great job, guys. Okay. Nice, simple, straightforward one. That was from, that's a bot. That's a robot. I know I'm not supposed to say, but that's BS, that's AI, 100% confirmed. But, ah, that's just my guess.

That's just my guess. How could you assume that Tech Comedy Fan 87 is a bot? I think that's unfair. I didn't even hear the username. I love it. I think we have to let our audience decide. All right, why don't you read the third one, Kevin, and then we'll make a decision. Okay. CMO1106 says: perfect.

[01:11:00] Exactly the kind of show and host pairing with Kevin that I didn't know I wanted. GV, shoutouts to you. The host pairing. So there's a reference in some ways to me, host pairing. Like I asked, oh, it could have been Gash. You know what? Nevermind. It's a series of wine glasses that goes with a nice block wall appetizer.

So, alright. Right. So those are the three. You guys tell us which one is real and which one isn't: NJD, Tech Comedy Fan, or CMO. We have Two Fives and a Lie. And if you wanna be featured on Two Fives and a Lie, it's pretty simple, and, desperate as it is, this is how: leave us a five-star review.

Just do it. And this helps us so much. Yeah. This is great, and it makes us feel better. Yeah. Everybody wins in Two Fives and a Lie. That's right. Except for Hot Dog City and Gavin. Make sure, if you leave one, to give it a mention. Exactly. All right.

We're gonna wrap up this episode, another fun episode. And thank you so much for sticking with us. Thank you for joining us. Again, we don't wanna harp on this too much. Oh, oh, oh, I forgot something. The TV [01:12:00] debate version: if you listened to last week's show or if you watched it on YouTube, but you didn't see the amazing,

unnecessarily over-produced supercut that we did of it, the debate of Terminator versus RoboCop. It was Bard versus ChatGPT. Beo, who does some really great, shout out, edits for us from time to time. Amazing. Beo absolutely crushed it. You should watch it, or show it to your friends if you can't stomach it alone. Again, it's really a delight what he did with it.

It's like an actual televised debate between the two AIs. And he brought them to life and it's such a joy. So if you haven't seen it, it's on our YouTube channel. It's probably on our Twitter still. I hope you get a chance to check it out cause it was a really fun piece. And again, smash the subscribe button, baby, smash it, smash it as hard as you can.

Just don't break your keyboard. Okay. Before we go, let's hear from Gash one more time, in reference to, okay, the superintelligence argument.

When it comes to AI, we're not talking about a friendly game of cards here. It's a goddamn battleground. It's survival of the fittest, [01:13:00] not the sweetest.

In this ring, it's competition that fuels innovation and progress, pushing developers to build better, stronger, and safer AI systems. The weaklings, they naturally fall by the wayside, leaving the cream of the crop to rise to the top. It's not a picnic, folks, it's a Darwinian struggle, and that's exactly how it should be.

If we're going to have an international watchdog, it should be more like a rabid pit bull, not a blind three-legged poodle. A watchdog that can't bite isn't worth a rat's ass. What we need is a body with teeth claws and a mean streak a mile wide, but it also needs a brain, a team of experts in AI ethics, law and policy who understand the implications of the technology and can make informed decisions.

This ain't about holding hands and singing lullabies. It's about keeping rogue AI developers in check. And let's get one thing straight. Safety isn't an after dinner mint. It's the goddamn main course. It needs [01:14:00] to be baked into the DNA of every AI system from the ground up. Trying to tack on safety measures after the fact is like trying to put a condom on after the deed is done utterly pointless and stupidly risky.

AI developers need to prioritize safety from the get-go, adopting rigorous testing and evaluation protocols, and the public and private sectors, they need to incentivize and enforce this. Here's another thing, my God: when it comes to complex matters like AI, democracy is about as useful as on a bull.

This isn't about populism, it's about expertise. We need people at the helm who actually understand the complexities and implications of AI, not a mob of uninformed yokels making decisions based on fear and misinformation. We need a meritocracy of experts, scientists, ethicists, policy makers. This isn't a fucking popularity contest.

Finally, let's talk about the elephant in the room. AI isn't a toy. It's a tool, a powerful, potentially dangerous [01:15:00] tool, and we can't afford to play fast and loose with it. If we're going to drive this AI car, we'd better make damn sure we know where the brakes are and have a roadmap to navigate. We don't have to get it right.

We have to get it fucking perfect, because the stakes, they're higher than a kite on the 4th of July. So there you have it, folks: governance ain't a walk in the park. It's a wild bucking bronco that we need to rein in. So let's roll up our sleeves and get to work, because if we don't, the future of AI might just be a one-way ticket to hell.

Wow, to hell. Okay, there it is, everybody. That's the end of this week's show. What was the, what was it, Gash 2024? Yes. That's the scariest thing, by the way. Gash is not allowed to run for President. Kevin, I'm making you promise that right now. Promise me that we will not allow Gash to run for president.

Oh, you, we need a stunt for 2024. You know we do. So even if I'm a surrogate for Gash, it's gonna happen. I [01:16:00] can see the posters now: Gash for mayor, okay, of Hotdog City, that he can be elected to. Bye everybody. We'll see y'all next time.