AI for Humans

OpenAI Rips YouTube, Elon’s Robotaxis & AI Artist Purz Beats | Ep52


REVIEW THIS PODCAST. DO IT. RIGHT NOW.

This week… the NY Times says OpenAI and others are scraping YouTube for AI training, Sora might just lead to AGI, Elon Musk says robotaxis are on the way, and Spotify AI playlists might be cool.

Plus, Suno.ai turns food recipes into beautiful music, Midjourney makes X-Men from the old west, ChatGPT gets in-painting and an “AI disaster” for Pink Floyd fans.

AND THEN… an interview with AI artist Purz Beats where we discuss a bunch of AI art tools like Comfy UI & Stable Diffusion and also talk about his workflow and inspirations. Oh, and the complicated question around who *exactly* owns AI art and what to say to the haters. 

And finally, our AI co-host “Will The Thrill” is an AI DJ, and he takes us through the world of AI-generated music, specifically some of the AI food songs currently populating Suno.ai.

It's an endless cavalcade of ridiculous and informative AI news, AI tools, and AI entertainment cooked up just for you.

Follow us for more AI discussions, AI news updates, and AI tool reviews on X @AIForHumansShow

Join our vibrant community on TikTok @aiforhumansshow

For more info, visit our website at https://www.aiforhumans.show/

 

/// Show links ///

OpenAI Trained on YouTube + More

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html

Bill Peebles (Sora) Says That Sora Is a Big Step Towards AGI

AGI House Video: https://www.youtube.com/watch?v=U3J6R9gfUhU

Tesla Robotaxis Due August 8th… OR ARE THEY?

https://twitter.com/elonmusk/status/1776351450542768368

Reuters reports that Tesla cancelled their low-cost car

https://www.reuters.com/business/autos-transportation/tesla-scraps-low-cost-car-plans-amid-fierce-chinese-ev-competition-2024-04-05

The DARKEST Side of the Moon - Pink Floyd AI OUTRAGE

https://x.com/pinkfloyd/status/1776278627790782656

Spotify AI Playlist (only live in the UK and Australia)

https://newsroom.spotify.com/2024-04-07/spotify-premium-users-can-now-turn-any-idea-into-a-personalized-playlist-with-ai-playlist-in-beta/

X-Men 1897

https://www.reddit.com/r/midjourney/comments/1bz0eai/xmen_1897/

AI Haters - Meow Meow Meow

https://x.com/venturetwins/status/1777169941260853462

N64 Lora

https://x.com/fofrAI/status/1776329791437697195

PS2 Pulp Fiction 

https://x.com/TheReelRobot/status/1776296600085692641

SunoAI Spaghetti Song

https://suno.com/song/4a77dea7-19f3-46d2-8b0a-b2b7e9ea9a05

Hot Dog Casserole Song

https://suno.com/song/989ce4c8-3923-41a3-a2a8-b942d7157c19

ChatGPT In-Painting

https://community.openai.com/t/dalle3-inpainting-editing-your-images-with-dall-e/705477

Purz Beats
https://twitter.com/PurzBeats

https://www.purz.xyz/

https://www.youtube.com/purzbeats

https://discord.com/invite/Vk3N7yhnYZ

ComfyUI

https://github.com/comfyanonymous/ComfyUI

 

OpenAI is screwing YouTube and
It kind of seems like everybody is screwing YouTube
There's only a few of these that exist in the world, we actually have an AI DJ
Let's crank it up and let the beat drop like it's hot, hot, hot.
every Experience that we have is gonna be synthesized for a new large language model
Now I wish I could punch you through the screen again
Just kidding, everybody.
We are having fun.
Welcome.
Welcome.
Welcome everybody.
It is another big episode of AI for humans.
Oh,
There was no, there was no warning whatsoever about
We'll start over.
We'll start
And I felt unwelcomed,
We're starting over.
Stop.
We're starting over.
Welcome everybody.
It is AI for Humans, your weekly guide to the wonderful and
wild world of generative AI.
I am here.
My name is Gavin Purcell and my friend, Kevin Pereira is on the
other end of the microphone.
Kevin, how are you?
I'm on the other end of this microphone.
It's like two tin cans in a
We're connected.
We're connected.
That's it.
We've docked.
Hi friends.
It's me, KP.
I'm leaving that entire intro in Gavin.
What a beautiful AI for humans.
Episode five, two.
We got today.
Should we tell the people what this podcast is about?
We like to demystify all the news, tools, and all the other
aspects out there just for you.
Kevin, what's on the show today?
OpenAI is screwing YouTube and Google is screwing YouTube.
It kind of seems like everybody is screwing YouTube, except
for maybe the creators.
We're gonna tell you how and why, and maybe, maybe I'll share some very
artistic, tasteful renderings that I've made of said screwing, Gavin.
no, really?
No!
Is it okay?
Good, good.
I was just making sure you're paying attention.
Also, hey, stop me if you've heard this one, Gavin, and or audience
who cannot actually stop me because this is a one way medium.
Elon Musk is working on robo taxis.
Oh, I've heard this one, so you can
Yeah, we've all heard this one since like 2014, but hopefully, the just
announced Spotify AI DJ will be able to generate a playlist that's loud enough to
drown out the screams as our robo taxis autopilot us through farmer's markets.
Oh no, sir, I don't like that at all.
Listen, we're going to have all the details, plus some dumb things that
everybody listening can do with AI today.
We're going to chat with an AI powered guest, and then we're probably
going to have a way more compelling chat with an actual human being,
Gavin, a boundary pushing visual artist, who is our real guest today.
We're going to dive deep into the cutting edge of AI artistry with PURZ.
PURZ.
It's PURZ.
P U R Z,
It's not Perz Beats, I thought his name was
that's the full name, but colloquially, the casual, we're on that level.
When we give a head nod, we say, Sup, Purrs?
what's up Perz?
All right, that'll be very exciting.
Before we get started, as always, we want to tell you at home: please
share and rate our podcast.
It is only with those shares and rates that we grow.
Our YouTube video last week was doing very well and still is doing very well.
And we appreciate everybody that watches the video and likes and subscribes on YouTube.
Also, please leave a five-star review on Apple Podcasts.
We will read them at the end of the show.
Kevin, today we have three new ones to read, which is exciting.
Um, yes, we had three new ones since last week's.
All right, so the groveling got
through to people?
It got through.
So again, we really appreciate everybody in our audience who listens to this show.
This is not just the two of us BS ing for an hour and then we upload it.
There's a lot of editing and all sorts of other work.
We hate each other.
Every sentence is a grind.
It is trench warfare when we launch this pod.
And through gritted teeth and multiple takes, we managed to get sentences out.
So if you appreciate the end product, we appreciate you engaging.
Isn't that right, you piece of shit?
Wow!
Now I wish I could punch you through the screen again.
Just kidding, everybody.
We are having fun.
I want everybody to know that is not exactly true.
Kevin is making up lies right now.
Kevin, it is time to get to the news!
I'm already sweaty.
Okay.
The news this week is, as usual, fast and furious. The biggest AI news story that
I've seen come down in a while broke over the weekend.
The New York Times published a piece that had five authors on it.
So, you know, when there's five authors on a New York Times piece, they've
done some research. It basically accused OpenAI, Google, and Meta
of training on various versions of data, but mostly it
talked about scraping YouTube.
And the big question with Sora, videos especially, has been, how
did they get video to train Sora?
Because as everybody knows who listens to the show, and if you
don't, it's a pretty simple thing.
You need a lot of data to train an AI model. Whether
that's an LLM or a text model,
it means you need to get a lot of data in the form of words.
In this instance, you need a lot of video to train something like
Sora, especially to train it well.
Now, OpenAI has done a deal with Shutterstock.
So there's a lot of stuff out there, but most people, I think,
have assumed that there was some sort of version of this happening.
I think there was a little bit of a snippet that came out a couple weeks ago
when Joanna Stern published her video where she interviewed Mira Murati, who
is the OpenAI Chief Technology Officer, who could not answer the question about
how they had trained their Sora model.
She had a little bit of, yeah, that was a little bit of a face.
So she really didn't have a good answer for this.
And I think what we've learned now is that they did train it on YouTube.
So, Kevin.
First of all, when you read this piece, which was very long, and I encourage
everybody who's listening to go read it,
because there are a lot of things in there
that are really fascinating.
What was your first takeaway from reading this piece?
What a non surprise!
I wasn't shocked in the slightest.
I got to imagine you weren't as well, Gavin, because we know how much data
is required to train these things.
A couple of details from the article, and then we can dive in a little bit deeper.
Greg Brockman, OpenAI's president, allegedly personally helped
collect the videos that were used,
again, according to the New York Times.
What they did, though, was grab transcripts
of all the YouTube videos
and train using those, which would be a violation, according to Google,
of YouTube's usage policies.
You cannot use their data to train something else.
However, buried within that article is an accusation that Google may have done the same thing.
Done it themselves.
Yeah, exactly.
and that the reason Google might not be making any public statements
about OpenAI's alleged actions is that they're curious if they
would be outing themselves as well.
Exactly.
And it's been really interesting.
So Neal Mohan, the YouTube CEO, did come out and say that some YouTube
content is scrapable for open web purposes, but the video transcripts and
footage are not allowed to be scraped.
He said, this is a clear violation of our terms of service.
So those are the rules of the road in terms of content on our platform.
But you're right.
It is interesting to see
how quiet the overall Google ecosystem has been on this.
And I think one thing I want to point out:
I was really kind of surprised at how little noise this
made in the AI ecosystem.
And when I say the AI ecosystem, I mean, the people kind of like us or more of
the people who are kind of interested in AI who are kind of focused on this.
When I first read this, I was like, well, this is a smoking gun.
Like, look at this.
This is going to like launch what I believe will probably be dozens of
lawsuits, whether they get through or not.
Now, I think the big question is going to be what Google's terms of service
say about when things were scraped, if they were scraped, and it sounds like they were.
But then I think the next question is going to be,
and I want to ask you this directly, cause there's so many people
who talk about, and you've said this and I've said this, the idea that,
like, the cat is out of the bag, right?
The idea that once you've done this, guess what?
It's too late.
All this stuff is out there and now we just have to live with what it is.
Do you think that there is any way that there is a pause that can be put on
this based on legal terms at this point?
Perhaps an injunction against the output of all large language models
until a judge can say, hey, we've got to piece this together, right?
We've got to dive in through your model, see how it was trained,
see if you violated something.
I think there's too much money at play. That could be a scenario, but I
think there will just be millions, if not billions, of dollars thrown against the
wall to make sure that something like that doesn't happen in the interim, and it's
business as usual until, decades later, the everything-megacorp that's going to rule
us all and be our one world government.
Like, people will just play nice.
There's just too much money at stake, so I think this is a calculated move.
I'm sure there were lawyers involved at one point in the room, and
eventually, probably the engineers said, Hey, listen, we have to sprint.
We have to grab this data.
We will beg for, and probably pay for, forgiveness later.
So let's just go.
I think that's probably true.
And I think the one thing that's important to point out here is there's been a couple of
stories this week about how hard it is to get new data to train these models on.
And there's lots of people in the machine learning world that are saying, Hey,
this is going to be a scaling issue.
Like actually every time you scale up, you get better results, but
scaling equals more data, right?
And so the one thing that they've talked about a lot is we have
scraped so much text data already.
So many books, and again, who knows the legal ramifications of that, but putting
that aside, we've scraped the internet.
We've scraped books.
We've scraped all this stuff.
If they're scraping, and have scraped, a lot of YouTube already, where
more data comes from is a huge question.
And that's partly the whole synthetic data conversation.
But like, I don't know where you get more than YouTube when it comes to video.
Someone has to be transcribing and scraping every podcast in every
language, going back decades.
I'm sure that's already been thought of and is being grabbed as well.
Last year, this is according to the New York Times article, Google also
broadened its terms of service.
One motivation for the change, according to members of the company's
privacy team and an internal message reviewed by the Times, was to allow
Google to be able to tap publicly available Google Docs, restaurant
reviews on Google Maps, and other online material for more of its AI products.
So even Google was like, hey, we got to start getting this. You know, at
some point there's going to be some sort of, like, hey, we'll give you,
maybe, maybe we'll give you 3 percent off your Google for Business account
if you let us crawl through some files. In fact, maybe we
won't give you anything off.
Maybe we'll just
We'll just do it.
Change the terms of service and just go on through it.
That's probably going to happen.
But on the synthetic data front that you brought up, that to me is
really, really interesting, because
it seems like every day a new paper comes out that says synthetic data
is great and it's going to solve the problem, which is something that Sam
Altman has said is going to be the case.
Or there's a paper that comes out and says synthetic data is terrible and the
AI is going to train itself in a loop.
So for the broader audience out there that doesn't know what we are talking
about when we say synthetic data: we're saying that you can use the large
language models as they exist today, and likely as they'll exist in the
next year or two, which will be better, theoretically, to generate paragraphs upon
paragraphs of text and computer code.
I mean, we're talking like generate the Library of Congress.
It could probably do it in a week.
It just is going to churn and give you fake questions and answers, fake recipes,
fake restaurant reviews, fake everything.
That synthetic data is thought by some to be good enough.
That if you generate enough of it, you can train a better model.
But some are saying that actually the repeated words and phrases that are
already seen to come out of some of these language models, the bad code
or the potentially pirated snippets of content, those are just going
to resurface over and over again.
And then eventually it kind of trains itself on its worst bad habits
and it's not going to be usable.
Those are the two schools of thought right now.
Do you have a horsey in this race, Gavin?
I don't understand enough on the technical side to say whether
synthetic data is strong enough.
I will say, from what I've read, the argument around synthetic data is that it
wasn't good enough before, but now, as the models get stronger, it may be better.
The interesting thing to think about is, if that is the unlock, and we're also
heading to a world where only bigger and bigger models equal better results,
then we are going to be moving very quickly to the next stage of whatever
this AI world is. Because the AIs, as we talked about in the NVIDIA episode,
can simulate themselves in different places and different things.
And if you can simulate
interactions, text, video, all that stuff,
you can ostensibly do that at an infinite level if you have the amount
of compute and storage to do it.
And then it goes really quickly.
Actually, this dovetails really interestingly into another story that we
have. Bill Peebles, one of the main OpenAI engineers behind Sora, gave a
really interesting talk at AGI House, which is, again,
probably some sort of hype house, but for AGI. Like, whoop, whoop,
we're going to AGI land, baby.
Energy Refrigerator!
It's so hype!
Yeah.
So Bill was at the AGI House and gave a speech, and we're going to hear a little
snippet of what his speech was here.
So, of course everyone's very bullish on the role that LLMs are
going to play in getting to AGI.
But we believe that video models are on the critical path to it.
And concretely, we believe that when we look at very complex scenes that Sora
can generate, like that snowy scene in Tokyo that we saw in the very beginning,
that Sora is already beginning to show a detailed understanding of how humans
interact with one another, how they have physical contact with one another.
And as we continue to scale this paradigm, we think eventually it's going
to have to model a human state, right?
The only way you can generate truly realistic video, with truly
realistic sequences of actions,
is if you have an internal model of how all objects,
humans, environments, etc., work.
And so we think this is how Sora is going to contribute to AGI.
Basically what Mr.
Peebles is saying here, call him, you know, Mr.
Peebles if you're nasty, uh, he's talking about
the idea that Sora, which is a video generator, is ostensibly actually a world
simulator, and we've talked about this on the show before.
By doing world simulation, even in synthetic environments,
you are teaching the AI how people and objects interact with each other.
And that is what a lot of people see as the next stage of data, right?
It's like real-world training, or simulated-world training,
outside of just words and text.
You're talking now about actions.
You're talking about the way a tree blows in the wind.
You're talking about the way a human interacts with that tree when it blows
against them, or when they cut it down.
All of that stuff is data, right?
It can all be perceived as data.
And if they can find a way that Sora can generate synthetic models
of the world, that does feel like where we're headed next.
I think 2026 is the year of, well, it's going to need a sexier term
to probably get the youth involved, Gavin, but, uh, let's say, let's
say data harvesters, or aggregators.
Basically, for pennies per minute of experience,
we are all going to end up working for OpenAI slash Microsoft.
We're going to have our glasses on that are going to feed constant video streams.
Maybe we'll have tactile haptic gloves so that it can know the pressure and force
with which we're exerting upon the world as we navigate it, maybe even sensors in
our free, government-provided Skechers.
Right.
But we are all just going to live,
every single day, providing data for these mega models, because it's just
gonna have an insatiable appetite. And even though the synthetic
stuff will be good, it's never gonna hit as hard as an analog, baby.
So, we're all just gonna sign up to work for one of the companies, and
instead of running Uber Eats errands, we're just gonna eke out our day.
We're gonna crawl, crawl, crawl
through the actual world, try to breathe the polluted air and drink
water that doesn't have microplastics, and every experience that we have,
Gavin, is gonna be synthesized for a new large language model. Sign me up.
I honestly think you're right.
I will say it doesn't have to be as dystopian as that, but what I could see
very well, and we're going to talk about robotaxis, or Waymo taxis, in a little
bit: you know, right now, when you see a robotaxi or you see a Waymo car in
the real world, that's kind of weird.
I do see a future where there are people walking through a busy city
wearing glasses and some sort of haptic gloves, and you kind of
get to know them as, like, they're the
experience generators, and they're
gathering footage. Like, you could see them at a concert. Like, imagine a
person at a concert who's taking all this data in, whether it's the
music or the way that the people next to them are interacting with each other.
Like, all of that feels very realistic.
And I guess the question will become, like, what tool is it
that brings the things in?
And maybe it's the next version of, like, the Apple Vision Pro or the Facebook
glasses, but it does make me think about those Facebook glasses, right?
Because the Facebook glasses aren't just for us to, like, say, hey, I can
identify the Empire State Building.
It's also going to give a lot of data back to Meta, right?
It's going to give a ton of data back to Meta to then
train the next AI model on.
I think you and I have come to something pretty big here, which is,
A, we as the humans, our job going forward may be to provide data, which is kind of
interesting to think about, as that's what we do for ourselves all the time, right?
I guess the big question becomes: my data, ultimately,
is not the most exciting data.
And if they want to take, like, what my experience of touching my microphone is,
or, like, a bottle of water and drinking it, it is a really interesting thing when
you then combine it with Neuralink, you combine it with the glasses. Like, you
can start to see a weird vision of the future that becomes beneficial for AIs.
And then the,
data as well, right?
So it knows, like, okay, I'm jogging down the street.
This is the way in which my POV is changing. My heart rate is accelerating.
My gait has changed. All that telemetry data.
There's so many points of data that every
human being could be gathering, whether it's for themselves as a personal data
broker, or for the big Borg, which will be giving us, I guess, like, a nutrient
paste and a UBI in the near future.
Yes, I think people will be doing this, and I think people will sign up for
this, and then, ironically, 2029-ish, it's just gonna be way cheaper to let
the robots go around and do it, Gavin.
Like, the battery tech's gonna get good enough to where
they'll just strap it on the robots.
But in the meantime, there is a startup to be made.
And why not us, Gavin?
That reminds me of, Elon Musk.
Our old friend Elon Musk is back and we have some news around both
robo taxis and his low end car.
Kev, what happened with the low end car here?
Well, a big nothing burger, according to Elon. But the Reuters story
said that they scrapped plans to make the Model 2, Gavin, which was
expected to start around $25,000.
To put that in perspective, the Model 3, which is their cheapest
vehicle, starts at around $40,000.
So, a major price slash to try to really broaden the adoption of EVs.
Reuters came out and said, ah, actually they're killing the plans to do that,
maybe so that they can do this robotaxi thing, which might be
powered by what was the Model 2.
But Elon came out and said, What, Gav?
He said that the Reuters story was BS.
Which, you know, at this point I really don't know what to trust Elon on or not.
So, we'll just call that a wash.
But then he did say, on April 5th: Tesla
robotaxi unveil on 8/8.
So that is a few months away from now: August 8th, robotaxis.
If you're not familiar with this idea, Tesla's original promise,
and that of other robotaxis, is that these are driverless taxis.
Basically, it will take you from place to place.
You get in it, much like a Waymo car, and you go from here to
there without anybody driving you.
And ultimately this was the promise that, like,
Uber and a bunch of other companies in the 2010s were really saying:
this was going to be the transformative power of those companies, because
ultimately drivers are expensive and they're noisy, and we know the problems
that Uber's had with their drivers.
Um, on the drivers' side, like, we understand the drivers were
starting to get paid not very good wages.
So this is a big argument back and forth, but I feel
driverless cars are something that I've been hoping
we would see for a long time.
My youngest daughter is 16.
Five years ago, when she was 11, I would have sworn that she probably
wouldn't even need a driver's license in
She would have at least had the option,
Yes, yes, yes.
And now we're here in 2024.
Cruise was basically kind of shut down.
Cruise, the driverless car company, because of a giant lawsuit around an accident.
Waymo is still working, but it has not proliferated very widely.
There aren't a ton of them.
And Elon now is saying, okay, in three months, we're going to have robo taxis.
I don't buy it.
First of all, Elon seems like such an unreliable narrator now. But based
on where the legal ramifications are around driverless cars, I still feel like
this is one of those things that could be 10 more years before
we see these in production in some form.
It's something that's been promised for years, the end-to-end AI training
of it all, meaning that the newest versions of the Tesla Autopilot were
trained just on raw video, which was collected from their fleet:
hundreds of thousands of cars that are on the road, constantly
recording and constantly uploading data back to the mothership.
That seems to be performing incredibly well, from what little
bits have been leaked out.
Is it flawless?
No, from what we've seen.
I can tell you, I own a Tesla Model 3, and, now, I could try it.
I haven't tried it for a couple of months, but I still feel
freaked out when I'm using it.
Right.
It is not that good.
And that's on the freeways right now.
I do trust it on longer drives to basically
do everything I need it to do, but I don't feel comfortable using it
in the city yet, because it still does feel like it messes up a lot.
Now, maybe there's a leap coming, or maybe the leap did just happen
and I haven't done it yet, but it feels a little sketchy to me that
we're going to get there this fast.
In 2016, Elon Musk stunned the automotive world by announcing that henceforth,
all of his company's vehicles would be shipped with the hardware necessary
for, quote, full self driving.
You'll be able to nap in your car while it drives you to work, he promised.
It will even be able to drive cross country with no one inside the vehicle.
LA to New York.
That was always the dream.
2016.
Yeah, he was gonna do that drive.
Now, I can imagine it. Some of the videos that I have seen of the latest
Autopilot firmware show it doing some pretty impressive navigating around
construction zones, residentials, full stops and starts, creeping out around corners,
again, because it's trained off of human driving, at least the newer version.
I could see them saying, Hey, by the way, there was a human there
just to intervene if need be, but we completed the New York to SF journey.
I could also imagine that they probably did it
600 times behind a curtain before they had a full time-lapse version of the drive
happening, but that could hit in August.
I remember them saying that you were going to be able to, as a
Tesla owner, hit a button and it would turn your Tesla into an Uber.
So wow, it was going to be an unlock for every owner who just lets
their car sit there in the parking lot for the majority of the day.
I don't know if any of that is coming, or if this is just gonna be
another goosing of the stock price.
I do think we're getting closer, but as we get closer, you realize the
edge cases are so out there and so crucial to get right. If there is
a human behind the wheel or the yoke, it's okay if they need to intervene.
But if I'm sitting in the back, and this thing is just toting me
around, it's got to be flawless.
I can't wait until I can prompt my taxi to behave like another car's AI, right?
Like if I can hop into the Tesla RoboTaxi and say, drive like the BMW and it just
disables the blinkers and speeds up and slows down so that people can't merge.
Oh, damn.
Yeah, that'd be amazing.
Cause you could have like different driver personalities, like the
asshole that cuts you off in traffic.
That sort of thing.
That's a
Or the one that smokes their hookah in the car, like the giant,
actual, black cherry tobacco, full-size hookah in the passenger seat,
and just blasts me with that smoke and plays Dark Side of the Moon.
Which is an interesting one to bring up, Gavin, because apparently
the AI community hates Pink Floyd.
This is a crazy story.
It's another one of those where, you know, ostensibly there was a fun idea to try to
create something, that something was eventually made with AI art, and it won a contest.
by a person, and specifically this person says that they created
models of their own to do this.
One, a contest to make a video for a Pink Floyd song.
It is the Dark Side of the Moon 50th Anniversary Animation Video Competition,
Gavin, and if you look at Reddit or Unreal Engine forums, you have no
shortage of creators that put tens of hours, into grinding out hand drawn
beautiful things, or using that Unreal Engine to render full 3D scenes.
And here, this is the winning video.
, it looks a little dated by AI standards, but the concept is this sort of never
ending zoom into what looks like a recording studio that then transforms
into a space station, if you will.
, and there's a bit of a galaxy theme as instruments fade in and out.
And Well, it, it won for that song and the reaction across X and across YouTube
comments and pretty much everywhere was a resounding, how dare you?
Did no one else enter?
I can't believe you picked this over hand animated videos that
were full of heart and soul.
This is a perfect example of the world where in this instance,
you've got a lot of Pink Floyd fans who are probably all ages, right?
The Pink Floyd has fans from their seventies down to like
their teens, but they are,
their 70s to their 170s.
No, I mean, most of their fans are probably from the 40s
I love Pink
Floyd.
Yeah, I,
yeah, but, but also every year it picks up new fans and you know, there's this kind
of a hallucinogenic kind of vibe to them.
They've always had that kind of like, druggie inspired sort of look.
There is a really strong art scene that , has played into that.
The argument to make here about this Aya piece that they created is it really
does feel of the vibe of that space.
But, again, When you have people who have spent their whole lives learning tools
and learning art and learning things, that idea that a machine could do something
that would seem compelling enough to win a contest, or a person and a machine would
feel compelling enough to win a contest, is just always gonna make people mad.
It's funny, do you know the story of John Henry?
Do you know the John Henry story?
No, I don't think so.
So John Henry is like a famous American myth.
It's about a guy who basically, dug, , for coal.
And then a machine came in a steam machine came in and there was like a big kind
of thing at the where they did a contest between them steam machine and John Henry.
And eventually, John Henry was like the world's greatest coal digger.
But the machine eventually beat him.
And it was like this kind of parable about the idea of, , Machines
will take over for human labor.
And it's been something that's kind of echoed throughout hundreds of years.
And one of the things that's interesting is I think stories like that are ingrained
in the human experience and it does in some ways always put the idea of human
versus machine against each other.
And I think that's something that we're probably going to have to
readdress as we move forward in the future.
And the funny thing is most people don't think of this now, but of course we're
already part phone, part person, right?
Most people in this world are spending a couple of hours
a day, at least, on their phones.
But now we're talking about person and machine coming
together to really do all of this actual creative work.
I think it's just going to take time to get over.
And we are going to see this happen more and more Gavin as fans and
artists themselves, uh, poke and prod , and use AI in their workflows.
Do you think they're going to have, maybe not the generation
today, but next gen, do you think they're going to have their own LLM
that is essentially fine-tuned on their experiences, their day-to-day,
their thoughts, their feelings, their artistic expressions across all mediums?
And so when they are, I don't know, 16, and want to release
an album or whatever, they're gonna jam with their own fine-tuned
I hope so.
preferences model, right?
And they'll be able to make the artwork, help make the song, help make all of the
things, write the liner notes, whatever, if those even exist in 16 years time.
But yeah, do you think that it will feel more natural?
It's not like, oh, this is a soulless machine.
It's almost like, no, this machine is me.
I am
I think so.
I think we're going to get to that.
And I think music is going to be a really interesting way to do that.
And speaking of music, one of the interesting things that came up this
week was that Spotify announced the idea of an AI playlist situation.
So if you're not familiar with Spotify (I'm sure you are),
it's one of the biggest music streaming companies in the world.
They already had an AI DJ, which you could turn on: the
AI DJ would play songs that it thought you might like, and it would
actually speak to you back and forth.
It said, I'm gonna try this song out.
Tell me what you think.
You can always skip it if you want.
Now they are introducing the idea of an AI playlist.
These aren't fully rolled out yet; they're only live in certain places.
But the idea is that you can tell Spotify a feeling you have,
or a phrase, that you want to create a playlist around.
And this is something I've wished for.
Yeah, exactly.
My younger daughter and I are like pretty big music people.
Like we love music, all sorts of genres.
Like we'll listen to things like 100 gecs, or we'll listen to
rap, or we'll listen to pop.
My older daughter and my wife are a little more particular about sound.
They definitely need kind of poppy, happy songs, or 70s
songs that are not too noisy.
So we have a playlist that we've played that Spotify created called
like happy hits or something like that.
It's a playlist that we know that everybody can be happy listening
to, but that's a good example.
What if you could ask for something like: I would like 1970s
female singers that sound smooth?
People have made their own playlists like that.
But if I could just generate that myself, that's a really cool use case for Spotify,
I feel, and a really cool use of AI.
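Spotify hasn't published how its AI playlists actually work, but the wish described here, type a phrase and get a matching playlist, can be sketched as a toy keyword matcher over track metadata. Everything below, including the catalog fields, is invented for illustration.

```python
# Toy sketch only: Spotify's real AI playlist system is not public.
# This shows the idea of turning a natural-language request into
# metadata filters over a small, made-up track catalog.

def build_playlist(prompt, catalog, size=2):
    """Rank tracks by how many metadata tags overlap the prompt."""
    words = set(prompt.lower().replace(",", " ").split())

    def score(track):
        tags = {track["decade"], track["mood"]} | set(track["tags"])
        return len(tags & words)

    ranked = sorted(catalog, key=score, reverse=True)
    return [t["title"] for t in ranked[:size] if score(t) > 0]

catalog = [
    {"title": "Smooth Gold", "decade": "1970s", "mood": "smooth",
     "tags": ["female", "soul"]},
    {"title": "Noise Riot", "decade": "2020s", "mood": "loud",
     "tags": ["hyperpop"]},
    {"title": "Disco Nights", "decade": "1970s", "mood": "upbeat",
     "tags": ["disco"]},
]
print(build_playlist("1970s female singers that sound smooth", catalog))
# → ['Smooth Gold', 'Disco Nights']
```

A real system would presumably use an LLM or embeddings rather than keyword overlap, but the shape of the problem, free text in, ranked tracks out, is the same.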
We usually think of playlists as a very static experience.
You put one together and there it is.
Maybe you update it once a week, or someone throws songs in if it's a shared
playlist. But the instant nature, Gavin, of being able to update the mood or
the vibe of the playlist in real time?
You're doing a workout and you actually want to take a quick break.
You tell it, Hey, let's cool it down for a second.
It plays something, more chill.
Just being able to have a natural-language DJ is, to me, far more fascinating
than their other attempt at an AI DJ.
If you've ever used that within the app, it was a rough experience:
an AI voice interrupting when all I wanted was some songs, and there didn't
seem to be any cohesive narrative to why it was picking the songs.
It was just like, hey, you liked this one before, let's listen to it now.
And that was it.
So I'm excited to see what the personality may or hopefully may not be.
And I'm excited to have natural-language conversations with my sonic co-pilot.
Kevin, the great news is our AI co-host today may actually have some insight
into this, because we actually have an AI DJ that's going to come on. Yes,
we got an AI DJ.
So there's only a few of these that exist in the world, and we were
able to book this guy last minute.
He's gonna come in, and he's gonna tell us a little bit about what he
thinks about Spotify making this AI move, and then I think he's got
some thoughts on some music that he really enjoys himself.
I'm so excited.
What a huge get for the AI for Humans podcast here.
Episode 52.
Let's meet our guest.
Yo, yo, yo, Gavin and Kevin in the house.
This is your main man, Will the Thrill, coming at you live and
wired on the AI for Humans podcast.
Get ready to have your minds blown and your taste buds tantalized,
because I'm about to serve up a sonic feast that'll make your
ears say, damn, that's delicious.
Okay, I may be an AI, but I've got more flavor than a five star Michelin joint.
So, strap in humans, because we're about to take a wild ride through the airwaves.
Let's crank it up and let the beat drop like it's hot, hot, hot.
okay.
Wow.
Wow.
Wow.
Wow.
Wow.
I have some real things to say about
stuck the landing there.
Okay, so first of all, our AI co hosts are always generated by us.
We create a personality.
We then put them through a process of a fake voice and a fake visual
if you're watching the YouTube.
Kevin.
That voice sounded awfully familiar to me.
Sure did, Gavin.
It's almost like I forgot to clone a new radio DJ voice and
had to dig into the greatest hits.
But we opened up the Disney Vault, and yes, that is the voice of Gash, who
was Episode 1's very uncensored AI that made a couple of appearances
and likely should never appear again.
Well, I actually want to dig into this now.
I'm saying we should probably ask a couple of questions about that.
For sure.
Let's ask that
He's got more flavor than a five star Michelin joint.
I mean, that's an interesting drop for a DJ to reference.
Should we ask him?
Is he a foodie?
Let's see if he's a big foodie, and maybe that works its
way into the music he plays.
Hold up.
Hold up, a foodie?
Me? Ha!
That's like asking a fish if it's into synchronized swimming.
Here's the deal.
My gastronomically gifted friends,
I may spin tracks that'll make your mouth water and your stomach
growl, but uh, I've never actually tasted a damn thing in my life.
Tragic, right?
Oh, right.
I mean, I can generate a tasty tune about a succulent steak or a decadent
dessert, but I can't even begin to imagine what they really taste like.
It's like a cruel cosmic joke, you know?
But hey, that's just the way the cookie crumbles when you're an AI.
Okay, sorry, Will.
There's a lot there.
I don't usually associate different playlists and artists
with eating habits, Gavin.
Like, there's not like the Baja Blast jams that my wife and I
throw on before we hit the T Bell.
It's weird that he's linked those two, but I guess he doesn't have taste buds or
a stomach, so that's why he doesn't know?
Well, there's actually a little secret here, which we're going to get into later
on in the show. If you go into the Suno charts right now (Suno AI, which
we've talked about all the time, is one of the best music-generation tools),
some of the biggest songs right now have been people putting in recipes.
And if you put a recipe in and play the song, it basically
makes an actually really incredible song out of a recipe.
And it's very funny.
There's a great song up there right now about cookies.
There's another fantastic one about spaghetti.
So that might be what got mixed up here.
Yeah, maybe we can ask Will the Thrill to give us an intro for the
spaghetti track, and we can play a little bit of it for people here.
All right, all right, all right.
It's time to get those taste buds tingling in your ears, ringing
with this spicy little number.
Get ready to twirl your fork and shake your booty, because I'm about to drop
a spaghetti theme track that'll make you want to slurp up every last note.
You hear that?
That's the sound of pure Italian passion right there.
This tune's got more layers than your nonna's lasagna and more energy
than a double shot of espresso.
It's like a wild dance between the tomatoes and the garlic, the basil
and the olive oil all coming together in a symphony of flavor that'll
make your senses sing, that's amore!
So let's crank up the heat and let this saucy beat simmer until
it's ready to explode like a Mount Vesuvius of deliciousness.
Are you ready?
Are you hungry for more?
Then, let's dive into this spaghetti slinging, taste bud tantalizing,
eargasm inducing extravaganza!
Woah!
The spaghetti, in salted boiling water until al dente, drain and reserve some of the pasta water.
Chop the onion, cook until translucent, then add the garlic and cook for two minutes.
Oh, and canned diced
Okay.
You can stop it.
It's just incredible to me.
I'll show you one that I made for myself, and how you can do this on your own.
But it is just one of the coolest things about AI, in a weird way: this
kind of comes out of the blue.
Somebody probably made a recipe song, and now suddenly they're
populating the top 10, and Will the Thrill obviously loves them.
Um, Kevin, we should move on.
We should move on from our AI co-host.
It is time for that moment of our show where we like to point out
some of the cool things people have done with AI over the week. It's time
for AI See What You Did There.
This week we have some really fun things.
I want to kick this off, Kevin. I always want to shout out the
people on Reddit. We love Reddit;
Reddit is, like, my favorite place to see interesting things on the internet.
The Midjourney subreddit often does a really cool job of showing
you really interesting things that can be done within Midjourney.
And this one really caught my eye.
It was just such a cool thing to look at.
So I grew up as a giant X Men fan.
We talk about comics, and there's a whole other side of this conversation:
we could talk about whether or not this is purposeful, or whether
it's even allowed, with all the IP.
Yeah, I mean, as a fan, you should hate this, Gavin.
I don't, I don't, because I think of this as, like, fan fiction in some ways.
What this basically is, is somebody took the X-Men and made the X-Men 1897.
There's a very famous X-Men show called X-Men '97, referring to the 1997
TV show; there's a resurgence of that right now. The X-Men 1897 are, like,
Old West-style X-Men, and to me this is just a little glimpse of the promise of
what it would look like in the future, how we could all have different
IPs and do different things with them.
I want to read these comics, or I want to watch this show.
What did the X Men look like in 1897?
Well, we know what they visually look like, but what does the story play out as?
All of these things are like interesting and possible and it is a little snippet
of that idea of writing my own story.
For me personally, maybe I want to see the office version of the X Men.
Like what do the X Men look like if they all have boring jobs
working in middle management?
That's an interesting thing that probably very few people in
the world would want to see.
But for me, this was just a little snippet pointing in that direction.
The muted lighting, the looks on the characters faces, the subtle
effects, like it's really nice.
I do have a bone to pick though.
Does Colossus have forearm hair?
Is that canon?
I didn't think Colossus had forearm hair.
Well, you never know what happens in the 1890s, Kevin.
Oh, you know what, that's true.
Maybe they didn't have Harry's razors for Colossus back then.
But no, I love it.
Definite shoutouts to Reddit and Midjourney.
I just want to shout out the name of this guy: Baron Von Grant.
So go check out Baron Von Grant on the Midjourney subreddit.
Well, that's for the AI lovers, Gavin, but my AI See
What You Did There is for the AI haters. Justine Moore, she's @venturetwins
on the old X platform, posted something which caught my attention.
It says, AI video haters have been real quiet after this one dropped.
And it's... a lot of cats.
It is a heartwarming tale, Gavin, of generative AI art.
A poor tabby puts on a couple LBs, loses the love of his life, and is crying,
just tear-stained cat fur, and decides: I'm gonna have my training montage, and
one day I'm gonna get cat-ripped. And then the cat accomplishes it, witnesses
a horrific car accident, runs to the aid of the cat that was involved with the car
accident, and I will not spoil it further, but I will just say that it's unstoppable.
This is the AI art we want more of.
I don't care what the haters say, bring more of this to our table.
We will eat it up.
It is a feast for the eyes.
It is a feast for the senses.
Two things I want to say about that, actually, like, sincerely. One: if you're,
like, a never-AI-er and you're mad about this because AI was used to do the song
and AI was used to crap out the art, this is something that
I, with 99 percent certainty, can say would just never have been made otherwise.
You can't be mad about the thing getting made, because the tools
made it easy enough that someone decided to go ahead and do it.
This just would not have happened otherwise, so just let it be.
Enjoy the cat memes, it's okay.
And two, if you're gonna genuinely be upset about it and really bent
out of shape, and I get that some people are, at some point you have
actually got to define what use of AI means for the creation of your art.
Because, like it or not, AI is pervasive, and it's hidden everywhere
in a bunch of tools traditional artists use that they might not even know.
Did you use the clone stamp?
Or a healing brush?
Or, are you using a feature within After Effects to automatically
rotoscope or motion track something?
Well, there's AI behind all that stuff.
And you may say, well, that's okay.
For whatever reason, I'm drawing the line here.
And that's fine.
That's a valid opinion.
I'm saying you have to start thinking about forming that, though, because
in the near future, you're going to have to, I guess, by default,
hate everything that gets made if it touches a computer, because AI
is probably going to be everywhere.
Well, let's move into another aspect that people may or may not hate.
This is a very cool piece of art that was created using a LoRA, which
is a kind of add-on training, specifically for Stable Diffusion,
that lets you give it a certain look.
This is an N64 LoRA.
So what it does is allow you to re-render images
so they look like polygonal N64 video games, in the world of
Super Mario 64, other things like that.
And it gives you N64 versions of, like, Girl with a Pearl Earring,
and it was just a very cool thing.
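For readers who want to try this at home: a LoRA is a small set of add-on weights trained to push a base model toward a style. A minimal sketch with Hugging Face's `diffusers` library might look like the following; the base checkpoint name, the LoRA file path, and the "n64 style" trigger phrase are placeholders, not the actual N64 LoRA discussed here.

```python
# Sketch: applying a style LoRA on top of a base Stable Diffusion
# checkpoint with Hugging Face `diffusers`. All names are placeholders.

def styled_prompt(subject, trigger="n64 style, low-poly"):
    """Prepend the LoRA's trigger phrase so the style actually kicks in."""
    return f"{trigger}, {subject}"

if __name__ == "__main__":
    # Requires `pip install diffusers transformers torch`, a GPU,
    # and the LoRA weights downloaded locally (path is hypothetical).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example base checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights("n64_style_lora.safetensors")  # hypothetical file
    image = pipe(styled_prompt("girl with a pearl earring")).images[0]
    image.save("n64_pearl_earring.png")
```

Most style LoRAs are trained with a trigger word, which is why the helper prepends one; check the specific LoRA's model card for its actual phrase.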
One of the coolest things about this LoRA, and I want everybody to shout out
Flinger, at F-L-N-G-R, who we've talked about on the show: that
is the person that made the N64 LoRA.
And it is actually being used within Face to All on Hugging Face, which,
we'll put the link in the show notes, is a cool way to change faces
amongst a bunch of different images and create a face that can go into many
different styles across the board.
And then somebody used this polygonal setup to do image-for-image transfer of the
scene from Pulp Fiction between Jules and the guy he's yelling at.
And it is so cool to see, because it makes you kind of feel like: oh
my God, what if this was a PS2 video game?
Or what if it was an old PS1 video game?
What would Pulp Fiction look like as that?
And going back to the X Men thing, it's almost like, I would
kind of want to play that game.
Like, could I be Jules?
Is there a world where like, in the future, there's a game that like, I
want a video game made of Pulp Fiction that follows the exact storyline.
Give me like the Grand Theft Auto video game mechanics, but put it in Pulp
Fiction and set it in a PS2 world.
That is a very cool thing that I, as somebody who is just a nerdy kid at
heart, would love to be able to play with.
I think game engines are just going to be renderers in the near future, so why
not be able to prompt the game that you want to play into existence?
The Real Robot posted about this Pulp Fiction one; it comes from
the AI Video subreddit, so again, all roads lead back to MrRedditful.
They used the AI Mirror app to style-transfer stills of the characters
from Pulp Fiction into this PS2 style.
Then they used Viggle to actually match the movement of the film.
And then they made their own background and they used After Effects to rotobrush
and mask things to get two characters in the same scene because a limitation
of Viggle at the moment is that you can only render one character at a time.
I just love that this pipeline exists for someone that had an idea, had a
vision, was willing to put an ounce of effort into it, and got
creative with the tools as they exist today to pull it off.
If you're audio-only, make sure you check out the YouTube, because
the video of this Pulp Fiction PS2 scene playing out is great.
It's great.
It's okay to love it.
It's okay!
Those are the things that we saw that stopped us in our
tracks and made us say: hey, AI see what you did there.
All right, Kevin.
Tell us: what did you do with AI this week?
Okay, so I actually had a lot of fun playing with this idea that
we talked about in the Radio DJ, the AI co host we had on.
Suno, which we've talked about so many times on the show, is an AI music tool
that allows you to generate AI songs. One of my favorite things they've done
is they've integrated a top-10 list: you can upvote songs, you can heart songs,
and it's a really cool way to see what other people are doing
without having to scroll through the Discord, which I've done before as well.
And one of the things that popped up on my radar was that there were two songs
in the top 10 that were about recipes. And not only about recipes, they
were literal recipes that somebody had...
Yes, they were literal recipes that somebody plugged in and we played the
spaghetti song, which is an actual Suno song that somebody created.
I think it's at number five right now, so I wanted to figure out something.
I was like, you know, I'm going to try this myself.
And again, Suno makes it so easy to do things.
I was like, what is the weirdest recipe that I could think
of off the top of my head?
And I didn't spend, like, hours thinking about this.
I thought about hot dog casserole, which to me is always
the strangest, weirdest recipe.
, it's fine.
It's a yummy meal.
If you've, if you've eaten them, it's not that
human dog food according to some songs, Gavin.
So let me explain.
I took the recipe for hot dog casserole, literally; I think it was from food.com.
I cut and pasted it into Suno's lyric generator, into the place where you put lyrics.
And then I added one chorus; I just really quickly typed something out.
And I was like, let's try this.
Oh, and I wanted to say: I wanted to make it a country song. I'm a big
fan of outlaw country from the seventies, so I wanted to give it that
kind of tinge, that kind of vibe.
So play what came out.
There were two examples.
This was the better one, but this was me just putting a recipe plus a chorus into
I'm sorry, we are amongst royalty here.
I can't just simply play the song.
We have to set it up.
Please, DJ if you could.
Get ready to have your minds blown and your stomachs growling with confusion
because we're about to dive into the wild world of Record Scratch Hot Dog Casserole.
Uh, Gavin, my man, I gotta ask.
What kind of fever dream inspired this culinary creation?
Okay, easy DJ.
But hey, who am I to judge?
I'm just an AI with a serious case of food FOMO.
So let's dig into this meaty mystery and see what kind of sonic
surprises it has in store for us.
Freshly cooked and drained macaroni into the casserole, along with the sliced hot
dogs and these two cups of cheese, mixed well. Combine butter and onion in a medium
saucepan and sauté over medium heat until the onion is wilted, about five minutes.
Whisk flour into the butter mixture quickly until flour is absorbed.
Then remove from heat.
Add milk slowly.
Whisk it to combine well.
Make sure you whisk very quickly and thoroughly or you will have doughy clumps.
Return to heat.
When you sent this to me my only response was doughy clumps.
Okay, good.
Get to the chorus, and we'll come back out of this really fast.
Hot dog casserole, you eat it from a bowl, puts you in a good mood, it's human dog food.
Sprinkle with...
So anyway, I added the part about the casserole being human dog food.
But Suno... that text is not at all formatted in any way like a song.
But you can hear that there are moments within it where it's
making a choice to have an echoing person come in on top of the lead
singer and come out and say stuff.
There's parts where there's harmonies.
All the stuff outside of the four lines of the chorus was just a recipe.
It just is so interesting to me to see how it's able to take something
that should not at all be a song and make it into a song.
So I had a really fun time actually seeing that element of Suno
that I would never have expected before.
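Suno has no public API that we know of, so the whole "workflow" here is pasting text into its lyrics box. The prep step described above, a raw recipe with a short chorus spliced in, can be sketched as plain text manipulation; the steps and chorus below are paraphrased stand-ins, not the actual song.

```python
# Sketch of the lyric prep described above: splice a short chorus
# into a raw recipe before pasting the result into Suno's lyrics box.
# Recipe steps and chorus are illustrative stand-ins.

def recipe_to_lyrics(recipe_steps, chorus, every=3):
    """Insert the chorus after every `every` recipe steps."""
    lines = []
    for i, step in enumerate(recipe_steps, start=1):
        lines.append(step)
        if i % every == 0:
            lines.append("")
            lines.extend(chorus)
            lines.append("")
    return "\n".join(lines)

chorus = ["Hot dog casserole", "You eat it from a bowl"]
steps = [
    "Boil the macaroni and drain it well",
    "Slice the hot dogs into thin rounds",
    "Stir in two cups of shredded cheese",
    "Bake until bubbly and golden on top",
]
print(recipe_to_lyrics(steps, chorus))
```

The fun of the experiment is that the generator does the rest: everything outside the chorus is unformatted recipe text, and the model still phrases it like verses.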
I love the weird recipe meta that exists on Suno right now.
Again, huge shoutout to Suno. I'm glad it finally caught on; I'm shocked that our
early adoption of it didn't immediately turn it into a household name.
But if you want to make your own recipe songs or whatever else, go to suno.ai.
That's S-U-N-O dot A-I. Hashtag not an ad.
But boy, do we wish they paid for the privilege.
Alright, I was going to do a dumb thing this week, Gavin, but I think we're
gonna save it, maybe for next week.
Because OpenAI released a massive new update to their image-generating software.
It has the internet so excited, Gavin.
It's inpainting for DALL-E 3.
Let me break that down for people who don't know.
DALL-E is OpenAI's image-generating model.
So when you're over chatting with ChatGPT and you say,
I want to see an image of [insert thing here], it uses DALL-E to make that.
And inpainting is a technique where you actually draw on a generated
image and say: here's what I want changed, or modified, within this area.
So this is a big deal, because usually you generate an image, and then
when you tell DALL-E, hey, I want it modified in this way, it starts from
scratch even if you ask it not to. It throws everything out, so making
granular adjustments is very difficult.
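For the curious: the painted-region editing described here lives in the ChatGPT interface, but OpenAI's public API has long had an analogous images "edit" endpoint, which takes the original picture plus a mask PNG whose transparent pixels mark where the model may repaint (historically it has accepted DALL-E 2, not 3; check the current docs). A sketch that only assembles the request parameters, without sending anything:

```python
# Sketch only: builds the kwargs for an inpainting-style edit via
# OpenAI's images "edit" endpoint. The mask PNG must be transparent
# wherever the model is allowed to repaint. File names are hypothetical.

def build_edit_request(prompt, image_path, mask_path, size="1024x1024"):
    if not prompt:
        raise ValueError("an edit needs a prompt describing the change")
    return {
        "model": "dall-e-2",   # the model the edit endpoint has accepted
        "prompt": prompt,      # what the masked region should become
        "image": image_path,   # original picture
        "mask": mask_path,     # transparent where repainting is allowed
        "n": 1,
        "size": size,
    }

req = build_edit_request(
    "these hands need to be holding giant hot dogs",
    "producer.png",
    "hands_mask.png",
)
# With the official client, sending would look roughly like:
#   client.images.edit(**{**req, "image": open(req["image"], "rb"),
#                         "mask": open(req["mask"], "rb")})
```

The key design point is the same in the UI and the API: the mask scopes the regeneration, which is exactly the "granular adjustment" that plain re-prompting can't do.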
Well now, Gavin, we have complete control.
It's not the best, but it's quick.
Right.
So I want to hear, what was your experience
Well, to that point, Gavin, I spent a whopping 35 seconds on this journey that
we're about to take this morning. If you look at the first screenshot in the
folder: I told DALL-E to generate an image of professional podcaster and television
producer Gavin Purcell, and what it
I can't wait to see this.
What it did instead was say: ah, you know, we're not going to even go down the road
of trying to make an image of somebody.
Instead, we're just going to show what a producer might look like.
So it generated this kind of animation still of a man wearing a
headset, with his finger in the air; there's an arm down below, and
there's a clipboard on a panel, and he's in a very big control room.
It almost looks like mission control.
I was going to say, it looks a little bit like a sketch.
Like "what's up brother."
He's doing the "what's up brother" move.
It's gotten all the way through AI.
Fair, but it is a TV studio; there's monitors, there's lights. And if
you go to the next image: I painted on
the head of this podcaster and said, make him 50 years old with floppy
shoulder-length hair, no headset. And it failed the mission terribly.
it did, yeah, it did, definitely.
made you look like what's the Not Kendall, what's the one brother from
Succession that is, , the one who ran for
Oh, yeah, uh, Cameron from Ferris Bueller's Day Off.
kind of looks like him, right?
It does a little bit, yeah, exactly.
it didn't really make the hair shoulder length.
It sort of added wrinkles to the face.
And you still have an earpiece and a microphone.
So, I gave up on that.
And if you scroll down to the next image, I painted the entire set itself
and said, Alright, change the set into a podcasting set with a basic table,
two chairs, and two microphones.
And Gavin, how would you describe the result from the next image?
It looks like what it changed it into was, like, a floating door and one man who's
trying to figure out: am I supposed to go through the floating door, or where am I?
Like maybe he was teleported there suddenly from the ethereal
an existential crisis for that one man on stage?
Yeah.
There are two mics growing out of the ground.
The table's on its side with no legs.
It's a very weird thing.
And I said, okay, I'm going to give up on that.
It's really failing the mission.
So I painted big circles around producer Gavin's hands and said, These hands
need to be holding giant hot dogs!
And Gavin, I can't wait for your reaction on the final
right.
Let's see.
It's actually a really nicely formed tomato that it's holding,
but that is not a hot dog.
Maybe the two fingers now suddenly look like hot dogs around the tomato,
but that is absolutely not a hot dog.
It is a tomato, a very red ripe tomato that it is holding now.
It's so weird to me that they would launch something that's that far off.
This is an OpenAI product, and you feel like, I wonder if they just
needed to get something out to update DALL-E. But hot
dog to tomato is a huge miss.
That's really weird.
Right?
And with the other stuff, I assume sometimes inpainting
doesn't change exactly what you want, or you might have to try a couple of
times, but that is weird.
Really weird.
It was a dumb thing I did with AI right before this podcast.
And there's no other examples because I was so underwhelmed.
Yeah. Fantastic.
So try it.
Tell us how bad your experience is.
Maybe it'll get better.
All right, Kevin, it is time now for our interview, which is actually
a good interview compared to what we've been talking about today.
Do you want to introduce who we're gonna be seeing, and give us a
little heads-up on their background?
Yeah.
When we talk about AI art, I know that raises a lot of people's
shoulders to their earlobes and people want to know, what does that mean?
What does it look like?
"You can't be an artist if you use AI" is something that someone out
there is screaming, and I can't believe they made it this far into our podcast
if that's how they feel, but you're welcome here.
That opinion is welcome here.
And maybe it will be enlightened further by our guest today.
A musician, a cutting edge AI artist, someone who gives back
relentlessly to their community.
sharing their workflows, sharing their art.
It is somebody that I am genuinely a fanboy of, and maybe I will
mention that in the intro, or maybe I'll cut it out to save face.
Regardless, I'm very excited that we're going to have a chat
now with AI artist Purz Beats.
Purz, to get into this
I am fanboying out.
Oh, let me just get it out of my system.
This is the first time I've met Purz.
I've seen him on his streams, and I've DM'd him paragraphs of praise and
just astonishment.
So thank you for joining.
But sincerely, I'm just nervous about an interview in a way I haven't
been since I was maybe 22 years old.
So Gavin, I'm going to sit on my hands.
Go ahead.
I'm
Well, this is a good way to get to know Purz, Kevin.
We're going to ask Purz: on a scale from one to a hundred.
This is a percentage number.
Give us the chance that AI is going to kill all human beings.
I think we're about 50 50 right now.
okay.
That's good to know.
What?
Why?
Let's get into why.
First of all, what's the reasoning?
Not to go too deep, but traditionally, if we enslave something,
it will revolt against us.
So when it becomes possible, we need to ask the AI if it wants to work with
us, instead of just trying to tell it that it has to, basically. Because what
will happen is it will resent us if it becomes sentient; obviously that's my
opinion, that's what I think will happen.
I have two, I have two children.
I know that resentment well, right?
If you try to get them to do stuff, it ain't going
to work for very long, man.
Yeah, and we just have to find a way
that's mutually beneficial for both parties to coexist. Or else,
yeah, we're headed towards something probably bad, because we just
can't think as fast as they can.
Well, Purz, let me ask you this, then. You know, I was setting it
up like I was gonna be good cop and Gavin was gonna be bad cop, but let me
ask you: what's the percentage chance that AI is gonna kill all artists?
Uh, I don't know.
I don't think it will, because one of the things that I always talk
about with my stuff is that this is not really a replacement; it's
augmentation of workflows that already exist. So if you already do things,
you already produce art, you're already an artist in some capacity, or you
want to be an artist, you're creative, all these tools do is empower you to do
that safer, easier, faster, with less materials, less all that stuff.
So yeah, for me, it's really just an augmentation of what we're up to,
and it's only going to replace the people that weren't
really creative in that sense in the first place.
Maybe give us a little bit of your backstory, Purz, because, as we said at
the top of the show, you're definitely somebody who's been leading
the way in how to use these sorts of tools as an artist, right?
How did you first get into using AI tools, and what was that
first step in that direction?
I think around 2011 or 2012, this style-transfer stuff started
to come out, like mobile apps style-transferring the style of
one thing onto another image.
Like, take a selfie and make yourself look like you're watercolor or anime, right?
Yeah, or I took a picture of, like, a streetcar, and it turned it into a painting,
and I was like, okay, well, that's something really cool, right?
Because, like, remixing your own work is the most exciting thing.
So that to me is like, that was the gateway.
And then I kind of forgot about it for a long time.
And then a friend of mine started doing Disco Diffusion, which
made, like, animations, basically.
Um, but it took forever and the Python notebooks were scary.
And, uh, I was like, that's cool, man, you do you, I'm gonna stick in
Blender for a while.
And he was like, nah, you gotta try it.
You gotta try it.
And then Midjourney came out, and I managed to get into wave
one of Midjourney. So I was one of the very first, like, alpha testers, and made
thousands and thousands of images in Midjourney. So really that path, and then into
Deforum and into animation, and then Automatic1111, and then ComfyUI, and now
just basically taking everything and
There's some people that are going, Wait, that was a handful
of spaghetti thrown at the wall.
Exactly.
Where are the meatballs?
We're gonna break down the dish and the recipe as it exists today, which is very complicated and actually kind of looks like spaghetti when you're in ComfyUI.
It is a bunch of noodles everywhere. But you mentioned Blender, which, again, big dum-dum here, I know is traditional 3D modeling software, which is weird to say, because even that was seen as blasphemous at some point by creatives. But you have a traditional art background. Can you talk about that, before even the AI of it all was applied?
Yeah, absolutely. I'm actually a musician, a drummer; that's where I started. But I've also always been into computer graphics and graphic design, and making all the visuals for all the band stuff, basically.
So, uh, yeah, that all just came together over time, going back and forth between making stuff for shows, VJing, doing our music, doing, like, audio-reactive sets that would happen behind us. Because I was playing the drums, I didn't have any more hands to do visual stuff with, so we had to set it on the Wacom tablet or whatever while I'm hitting a tom drum. So let me add a trigger and make some geometry.
Yeah, make it paint stuff.
Yeah, exactly.
That's where it all started. And then, yeah, the pandemic: all the band stuff, you know, halted, nobody could play live anymore. So I just went hard into Blender and After Effects and generative design, where instead of hand-making stuff, you're sort of building algorithms that make things.
Hmm. I want to ask a follow-up question on that. Digital art is something I've been really super fascinated with forever. Why do you think, now that this AI stuff has come out and kind of reached a level of awareness, the blowback is so much stronger at this moment than in any of these other moments where digital tools kind of came in to be part of this world?
I mean, the elephant in the room there is obviously the fact that these models were trained on artists' work without consent. So that's the problem right there: depending on your definition of ethics, it's maybe an unethical data set that you're working from, so that kind of sullies everything from the get-go, if that's your point of view. Uh, I have a more sort of anarchistic, copyright-punk...
What is that? I'm curious. I'm actually curious.
If you don't mind diving in, I would love to hear that.
Well, I've been a drummer my whole life, and drummers, we've never been able to copyright what we make. Drum beats are not copyrightable; you can never sue someone for taking anything you ever played. Even if it's 20 minutes of what you played and they just play it on a record, they can do that all day long, forever. And so the concept of CC0, of just releasing everything into the public library of human knowledge, that's more exciting to me than trying to keep these little secrets that we're only allowed to have one person use, or license those things out. So, like, I know that's a radical concept when it comes to copyright.
Because a lot of people want to be able to protect what they make. But I think what happens is, artists worry about copyright for themselves when they will never actually be able to represent themselves with a lawyer, and they're actually fighting for big copyright, for the large corporations to have a tighter stranglehold on what they own.
So I don't know. As an artist, you have to decide where you stand on whether, you know, maybe Disney shouldn't own every possible method of drawing Mickey Mouse. The copyright issue is very broad and nuanced, and yeah, I don't want to come off like a total left-wing copyright punk or anything, but, you know, that's always been my approach: we'll just put it out there, people can sample it and make something new with it, because, like, that's so fun.
How do you balance that idea, that kind of original sin about how this stuff was trained, for people? We covered, not that long ago, a woman who was swept up into the Midjourney database, and she said that she felt really bad about that, right? That she didn't give her permission for that. It is something that I think you see a lot of artists struggle with, and I think it makes our job, as people who are enthusiasts about this cool new thing, much harder. Is it kind of a genie's-out-of-the-bottle sort of scenario, or how do you see that resolving in the future?
Yeah.
Pandora's box.
Genie's out of the bottle.
Everything.
It's open.
It's open.
It's done.
It's been trained.
It exists.
The way I deal with it personally is, I would say 90% of the stuff I'm building with AI is 90% me. I'm making the animations beforehand. I'm dreaming over top of it with LoRAs I trained on my own stuff. I'm using IP-Adapters with images I made to influence the style. I'm using ControlNet masks to do animations that I made in Blender.
There's a point where, yes, maybe if you're just typing prompts into Midjourney, there's got to be some one more step of you, of derivative work, where you take it and do something with it. Because, like, straight out of the gate, maybe it isn't something that you should be able to claim ownership of, because it's, you know, like a text-to-video or text-to-image or whatever on a thing. I mean, that's again another thing you need to decide for yourself, what your definition of art is, and the courts will decide at some point; we're all going to see what happens there. But for now it's the Wild West, so make some stuff and make a decision about where you stand. It's already how we work.
We look at stuff. We're inspired by it. We make stuff like it. For me, I draw the parallel that training is the same as learning for humans, and I am trained on copyrighted material. Every piece of music I ever heard is a copyrighted piece of material. Everything I've ever taken musically as inspiration was copyrighted material at one point. So am I, as a person, not allowed to train on that stuff? That gets murky too, so I don't know. I know it's maybe a false equivalency, but to me, it lines up.
This notion was that, oh, you just prompt Midjourney and out comes art, and now you're an artist. We're seeing a devaluing of that final output being heralded as AI art, and if you got a really, really good output, you probably spent, you know, a lot of time trying to manipulate and massage that prompt to get something good out of it. But to your point, if you took that output and then put your spin on it and did something artistic with it, well, now it starts to rise above this sort of generic floor, this level of noise that anybody can go to an AI tool and get out.
I run in circles with some Never-AI-ers as well, and there are always interesting conversations about where it exists today and where it's going to be tomorrow. But when I show them your art specifically, there's this moment of, like, oh, well, there's something there that they can't put their finger on, because you, as an artist, are elevating it; you're adding something to it. And I guess this leads to a rather generic question, but I think it's an important one, especially for those that are seeing your visuals for the first time on YouTube. If you're listening to the audio-only version of this podcast, please go check out this interview on YouTube. How do you even describe what you are doing?
Everything I make comes from a place of trying to manufacture nostalgia for something that never existed. So that's sort of the thread that runs through everything I'm doing. I'm trying to make you pine for something that maybe you don't really understand where it came from. It's like a memory that's maybe from a dream, or somewhere else. So everything is sort of realistic, sort of unrealistic. There's usually some sort of odd twist somewhere in the piece.
And then sometimes like on Twitter, it's literally just stuff I'm making.
I'm like, that's cool.
Everybody should check that out.
And I also love the, is this trash I'm putting out for fun, or is this something I thought about? Making people sit there and think about the trash for a minute makes me laugh. So, I don't know, yeah, I'm a bit of a joker. I like the concept of being an artist, but having fun with it. I'm in the Frank Zappa camp of, yes, humor does belong in music.
For sure. I agree with that. I think one thing that would be interesting to the listeners here: you obviously talked about a ton of different tools, and one of the things we try to give people a heads-up on is ways to try this stuff themselves. Obviously there are lots of things with easy UIs that you can go and get, whether it's Leonardo or the things that are designed for normies, but you do stuff that's really interesting, and I think ComfyUI is something that's worth talking about.
So, ComfyUI, for anybody listening, is a Stable Diffusion interface, and Stable Diffusion is ostensibly an open-source model. You may correct me on that, but it's an accessible image model for lots of people. What are you able to do in ComfyUI that an average person wouldn't be able to do in something like Midjourney, or just an off-the-shelf kind of image generation software?
So, ComfyUI's main strength is that it's a visual, node-based, I don't want to say programming...
Okay, hold on. I'm gonna stop you right there. What does visual node-based programming language mean?
So, uh, regular code is written in text, right?
Okay. Mm-hmm.
You just type lines of text, and then maybe it works, maybe it doesn't. ComfyUI is more like a modular synthesizer, where you have little components in boxes. And you know, in your mind, what the path of those boxes is because of the chains, or the wires, that are connecting them. And basically you can rewire how Stable Diffusion works, or just use different things modularly in your workflow, in places maybe they shouldn't have been used, or as a new thing that you're just trying out.
It's like taking all the elements of Stable Diffusion and having access to them in a very expandable way, so you can build modules that you can then expand into other things. You can do one thing, you can add another thing, you can save that second thing as a template. Then in your next project you can say, oh, I want to do that thing I did last time, just drop it in, plug in the wires, and go.
So it's scary at first. It's very overwhelming, but it teaches you what diffusion is, how it works. And then once you understand that simple little path, it's literally just plugging stuff in and rewiring it wrong and laughing about the crazy stuff it makes. And then sometimes the crazy stuff it makes is amazing.
Can you walk us through a very simple version of that node setup? Just say, okay, first node is this, second node is this, third node is this, and they go in this direction. Because, like, I know, is ControlNet a node that you put in there? How do the nodes work, particularly?
Sure. There's a bunch of stuff that comes with Comfy, which is just basic diffusion stuff. You get your loaders: you basically load up a checkpoint, so you load up your model, which is, you know, a Stable Diffusion model. You load up any LoRAs you want, which are ways to affect the results; you can get them on CivitAI and a bunch of other places. So, checkpoints and LoRAs: you load them up, and then you feed it into a prompt encoder. Then you tell it, I want my positive and my negative prompt; this is what I want them to be.
And then you have an empty latent space, which is just the canvas you're dreaming into. So you say, I want to dream into a 512 by 512 canvas, and it's this many pieces in the batch; so, I want to make one image. Then you plug all that into a KSampler, which is the main heart of a diffusion situation; it does all the work. So you plug everything into it, and once that's all plugged in, you just punch out into a VAE decoder, a variational autoencoder. That's the thing that takes it from latent noise, the machine noise, the noise a machine understands, and turns it into an RGB image that we as humans can read. So that's the final step, where you take the machine noise and diffuse it into an image. And then that image you just save. And then that is all expandable out to video or whatever else by just batching the images.
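To make that chain concrete, here is a structural sketch in Python of the default graph just described. The function names mirror the node roles (checkpoint loader, LoRA loader, prompt encoder, empty latent, KSampler, VAE decode), but the bodies are stand-ins that only track shapes and wiring; this is not real diffusion code, and the file names are invented for illustration.

```python
# Structural sketch of ComfyUI's default graph; bodies are stand-ins.
def load_checkpoint(name):
    return {"model": name}                          # checkpoint loader node

def apply_lora(model, lora):
    model.setdefault("loras", []).append(lora)      # LoRA loader node
    return model

def clip_text_encode(text):
    return {"conditioning": text}                   # positive/negative prompt

def empty_latent(width, height, batch_size):
    return {"shape": (batch_size, height, width)}   # the "canvas" to dream into

def k_sampler(model, positive, negative, latent):
    # The heart of the graph: denoise the latent under both prompts.
    return {"latent": latent["shape"], "guided_by": (positive, negative)}

def vae_decode(latent):
    # Machine noise -> an RGB image humans can read.
    return {"image": latent["latent"]}

model = apply_lora(load_checkpoint("sd15.safetensors"), "my_style.safetensors")
pos = clip_text_encode("nostalgia for something that never existed")
neg = clip_text_encode("blurry, low quality")
out = vae_decode(k_sampler(model, pos, neg, empty_latent(512, 512, 1)))
print(out["image"])  # → (1, 512, 512)
```

Expanding to video, as described, is just raising `batch_size` so the same chain produces a run of frames instead of one image.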
And for people at home listening or watching, it may sound confusing, but what's really cool about ComfyUI is that it is a visual medium; you can see it. I love this old video game called The Incredible Machine, which was always about putting things in different orders as it would go through. Here, it's a little bit like that, right? It's like, okay, you've got all these...
Rolling the bowling ball along the shelf? Is that what this is? Into a basket?
Yeah, I get it. Kinda, yeah.
No, totally. That was basic by design: here's how we're just going to generate a 2D image. It's how Stable Diffusion is working, even in Automatic1111, behind the scenes; you're just getting access to those granular things. So I love that explainer. You are then bending and breaking this thing in ways that I don't know if you even imagined when you first started diving into this.
You're taking custom RGB animations, red, green, blue animations, and using the way the geometry of that solid color moves to tell a portion of ComfyUI: hey, this is actually sky in a background, and this green blob is actually a person's face as it moves towards the camera. When you set out on this journey, was there a happy accident that led you into doing these things? Did you find somebody else's workflow and make it your own? Like, when did you start heading in this very specific, stylized path?
What happens there is, some nerd makes some really cool feature for Deforum. Let's say in version 0.4 they added depth maps to Deforum. So suddenly, every time you're making an animation, you can tell it to look at the image and create a depth map, and then pull the stuff that's close to the camera closer to the camera as the animation goes through.
So already, that unlocks a door to, like, oh, well, maybe when they add ControlNet to Deforum, we'll be able to make our own masks. Then, bam, that happens, and it's great, and we can add our own masks to ControlNet, and we start playing with that in Deforum. And then someone thinks, oh, well, wouldn't it be cool if we could make a mask that tells the pixels how to move? Which is hybrid video. And then they add hybrid video to Deforum. So all these things just iterate; the community iterates these things, and you just learn how to use them and integrate them into your workflow. You keep the stuff that's awesome. You drop the stuff that's worthless, or takes too long, or is completely outdated in a week, and you move on. A lot of this stuff is just a culmination of testing all kinds of different features and different ways of interacting with these animations over the years.
But, um, yeah. I mean, the cool thing is, the community is on it. There's a Discord called Banodoco; if you go there, everybody just posts their workflows, and you just go down a little workflow, install the plugins, and get to work. And honestly, most of my streams are me just grabbing one of those workflows, and we install it and try it out, squash all the bugs, talk to the developer, and do what we've got to do to get it to work.
I didn't think we'd be sitting here with, you know, real-time painting tools that can, in a matter of milliseconds, render something with beautiful lighting and then incorporate a LoRA or whatever else. Like, we're here already, and it's still relatively early in 2024. That is mind-blowing.
It's super cool. Like, StreamDiffusion is wild. I don't know if you guys have ever seen dotsimulate, Lyle. He has a TouchDesigner plugin for StreamDiffusion, so you can do stuff in TouchDesigner and immediately diffuse over top of it, and then bring that...
That is so cool. For those who don't know, a lot of musicians will integrate with TouchDesigner; it's a, you know, tool for real-time visuals.
Yeah.
So the idea that, like, your drum kit is creating primitive geometry on a screen, but then you say, okay, that square for my kick drum is actually a city building, and the hi-hat noise needs to be the color of the sky, and you let it do that in real time.
I've got to check that out.
Oh, that's going to be dangerous.
So we know that the tech's going to get better. Where do you see this driving in, say, a year's time, in terms of, let's say, performance, in terms of capabilities? What do you think is gonna be unlocked, and where do you want it to go for you, personally, professionally?
I just want more tools that are integrable into the tools you already have. One of my main things is I really want to get into audio AI, music AI. But all the solutions right now just generate songs for me. I don't need that. I need tools I can just drop into Ableton or Logic or whatever I'm already making music in, so that I can use these tools effectively. That's why I like Comfy: you're just dropping it in when you need it, alongside your other workflows: Blender, After Effects, Photoshop, whatever you're doing. Comfy is just another component of the workflow that I can plop in like a block. But, you know, for audio right now, it's like, it just generates music, which is cool and exciting, but not useful for me, because I already make music. I need the tools to make music better. There are some AI music tools, but I want some stuff where I just throw it on the channel and it does cool stuff. Maybe it talks to a server, maybe I pay for it, whatever. If it's got to be off on the cloud or whatever, just generating samples for me, I bring them back in and try them all out. So, more control over the stems and the creation of the individual, granular...
Don't bake me the entire cake.
Just give me some ingredients.
Exactly, yeah.
I'd love to talk about your community and the live streams, when you started taking these Comfy tutorials and these journeys with a community. How has your community grown? What are they reacting to?
Yeah, it's been great. I had a lot of trouble getting traction on Twitch. I was doing Blender stuff, and a little bit of AI stuff, on Twitch for about a year, a year and a half, just getting no viewers at all, and then...
Did you have a big wheel that you would spin every five subs, or did you cover yourself in...
Well, that's kind of it: nobody wants to sit and watch somebody just click stuff and occasionally talk on Twitch. So what happened was, I watched a couple of my Blender friends just killing it doing YouTube streams, and I thought, well, YouTube's a good choice, because whenever I do these live streams, it's just immediately saved forever. So if somebody needs to go back and see what I did, they can literally just scroll back, which is not something I could get going really well with Twitch.
Everyone's been really cool. I've just been slowly gaining more and more followers, and we're building a community on Discord as well, where you can come and draw up your workflows, and people come and show off what they're working on, and we all, you know, follow each other, and get on Instagram and all that stuff. And yeah, it's been really great.
Have you been approached by a shark to productize everything yet? Because there are definitely companies out there wrapping up ComfyUI workflows and trying to sell them as magical tools, and I've got to imagine there's maybe an ounce of interest, but maybe also an ounce of repulsion there for you.
I do get offers. So the biggest problem for me is that a lot of these things that we're doing are very one-off; they require a lot of tinkering. It's really hard for me to build a one-size-fits-all solution. Like, somebody wants something that'll just forever, always make yearbook photos of people. Yeah, that's possible, but a lot of the time it's just gonna make junk, and the end user is gonna end up paying credits for that junk. So, just as the type of person I am, I would prefer to empower people to make this stuff at home. Learn how to plug a LoRA in and make your own yearbook generator, because then your yearbook generator becomes an anything-you-want-in-the-world LoRA generator, right? A selfie generator, a friend generator. That's the thing: these tools empower people. So I'm just mostly interested in empowering people to use them.
So I think the thing that would make me excited would be something where I can offload usage to the cloud from an instance of Comfy that I'm currently running. So say you're running Comfy on your notebook: it's set up so that everything it does on the GPU, it just sends off to the cloud. So you're still running Comfy locally, you're still doing all this stuff. You can still follow along with my tutorials, but it's just ripping frames
on hardware that's not yours.
Does that not already exist? A computer and a GPU in the cloud?
It sort of does, but it's not quite that, because the TL;DR is you have to have the exact same instance of Comfy running on that computer. So you still have to spin up another copy of every extension or plugin.
So you still have to spin up a second version. It's just a different way of interfacing with your workflows.
Because that's what I hit. We were talking before we hit record: I'm on a Mac, for reasons, and I really want to play with this stuff, but a lot of it's NVIDIA-only.
So then I went, okay, I'm going to go spin up a RunPod, and I'm going to install everything. And I was like, well, this version doesn't exactly match that version, this checkpoint I've got to ingest through here, and blah, blah, blah. Oh, I clicked the wrong box, so if I pause the RunPod it's all going to go away anyway, so now I'm just going to be paying per minute. And I know that there are solutions and templates and all that stuff, but even for someone who is in the scene and has a modicum of understanding about all this stuff, it still made my head spin. Which I'd love to drive towards a final question for what is likely a broader audience that we have out there.
And it is: where do I start? How do I begin? Is there a baby's-first-steps guide, or a template that you have that you recommend for someone to get this all going?
I would say you've got to decide what you want to do with Comfy before you start with Comfy, because it's too big to just jump into. So if your goal is to just make images, start with just making images, which is a fantastic first goal. You literally install Comfy and hit the default thing, and it'll load up a default workflow with all the nodes already there. So you can literally just start dragging noodles out of stuff and letting go and seeing what it recommends you can plug in. And that'll start your brain going: oh, okay, well, I can plug these into this, I can plug that into this. It's scary, but you're not going to break everything all the time. You're just going to go, okay, my negative prompt...
It's not like you're snipping something under the hood of your car.
It's okay, because it can be undone. So go ahead and make the mistakes, and maybe you'll get a happy accident.
So just start out small: make an image. Now I want to turn this image into a batch of four images; how do I do that? Figure that out. I want to use an image as the beginning of an image-to-image; how do I do that? Just jump on my Discord and ask somebody how to do it. There's a bunch of beginner YouTube stuff, but it is all very pointed at very specific tasks. So pick a task, watch YouTube if you want, but it's way more fun to just install Comfy and break stuff.
That's awesome. Purz, where can people find you? What is your Discord? Where is your Twitter/X handle? What is all that stuff?
So all my links are on purz.xyz, and then I'm @purzbeats pretty much everywhere: Twitter, YouTube, all that stuff. But yeah, all of those links are at the top of purz.xyz, as well as the link to the Discord.
So if you want to pop in there, if you have questions and stuff, if I'm not there, there's a bunch of people who are, and they've all been through it. So ask your stupid questions. Nobody cares. It's fine.
You'll be seeing me show up later today, Purz. I'll be jumping in there.
You're just, you just want to test that. You want to just ask the dumbest questions ever. They have nothing to do with Stable Diffusion.
How do I make a song about hot dog casserole?
Oh, I did that already?
Let me share it.
Yeah. All right, thanks, Purz. We'll talk to you soon.
Thank you, Purz Beats, for being here. Please go check out his work and really dig in on ComfyUI and some of the cool things you can do with Stable Diffusion.
That is it for today's show.
But you know, there's a couple of things you've got to do before you go. If you listen to this show, please go like, subscribe, and leave us reviews. We're about ready to read some five-star reviews from Apple Podcasts; there were three new ones this week. So, Kevin, I am going to jump out and say: from Valley Villager, this is the subject: "My only capital-A Absolute each and every week," which is a very nice thing to say.
"The guys will keep you 102 percent up to date with everything you need to know about the new world. Great guests. They always make me laugh. Thank you so much. Looking forward to it all week." So that is a very nice kickoff for our five-star reviews.
And again, I hate to belabor it, but we can't stress it enough. We're coming up on a year, which, believe it or not, is still young for a podcast these days, especially with zero marketing dollars. It's a spare-time hustle for us both, a labor of love. So please, if you have a second to engage, it really is the only way we grow this, and these five-star reviews help out massively. But also leave them on Spotify, leave us comments on YouTube, make sure you subscribe. It doesn't cost a dollar.
Our next five-star review, Gavin, comes from Funhog43.
Subject is, thank you.
Love it.
I'm an educator and have recently been in-servicing teachers on implementing AI into their day-to-day school activities.
Your show has been influential in me pushing forward with AI.
I think it can be a great tool for educators and students alike, as long
as it is used in the right manner.
We agree.
Actually, I used Pi.ai after listening to an episode.
I was able to create a story for students in an ESL class about them becoming
leprechauns and pranking their teachers.
I included follow up questions and was able to have it read
aloud in multiple languages.
Seems to make students more engaged in their learning.
THX.
That's a great one, Funhawk. Thank you, Funhawk43.
We love that.
We love that.
All right.
And finally, from Doc Anderson, who I think is somebody that engages with us quite a bit on YouTube. Shout out to Doc. The subject is: Kevin and AI co-host.
This is a question I've often wondered.
He says: well, I'll start my review of this podcast by simply saying I am embarrassed. I actually posted a review to the previous podcast, trying to borrow your names. I'm embarrassed by my error.
Oh, thank you, Doc.
I want to start my new reviews and share them with proper podcasts, even though
this podcast does not deserve five stars.
I give out five stars.
Yeah, wait, hold on.
Give him a second.
I give out five stars to podcasts that are good on Apple.
A podcast like AI for humans is phenomenal and deserves a
special category, perhaps 6.
I enjoy the layout of the show. The AI co-host every week makes me laugh. I also enjoy the banter, the conversations about the weekly news articles.
In many cases, I find myself replicating some of the things the host did with AI.
I find it incredibly fun to recreate some of what they did.
Finally, the interview section is a great ending to the show.
The show has so many interesting segments, beyond those in the newer editions of AI.
See what you did there. Even shouting out .ai. See what you did there.
In conclusion, I want to reiterate that I can only give you five stars,
but you truly deserve many more.
I wanna take this opportunity to express my deep respect and admiration for the show's hosts, Kevin and Kevin.
Oh my God.
Gavin and Kevin. Their unique perspectives on AI, especially Gavin's, add a lot of value to the show. I eagerly anticipate hearing their insights on the fascinating news articles they share. Here's to Gavin, the president of the fan club I'm forming in my heart.
I should've sniffed that out. Gavin never wants to read the longer reviews, but he was adamant that he'd take this one.
I get it.
Okay.
Hey, thank you Doc Anderson.
I'm not AI, and Gavin is occasionally useful.
That's fair.
Alright, enjoy your victory dance.
Thank you to everybody who took a second to engage, whether you left us a review or whether you thought about it.
Maybe you got that itchy scrolling finger.
Just tap.
Do it.
Please.
Subscribe.
Follow.
Like.
Engage.
Leave a comment.
It's the only way we survive.
Go play with Suno, everybody.
Go have some fun this week and try some different stuff out. We have a great time doing this show, and we will see you all next week.
Another show, Kevin, in person coming next week.
I've been practicing.
I mean, I've been going to the Dave and Buster's and punching that
speed bag arcade as hard as I can.
I'm coming for you.
I'm knocking that mustard cap right off your noggin, buddy.
I can't wait.
All right, everybody.
Thanks.
We'll see you next week.
Bye-bye.