3 hours 15 minutes 19 seconds
Speaker 1
00:00:00 - 00:00:24
The following is a conversation with George Hotz, his third time on this podcast. He's the founder of comma.ai that seeks to solve autonomous driving, and is the founder of a new company called Tiny Corp that created tinygrad, a neural network framework that is extremely simple, with the goal of making it run on any device, by any human, easily and efficiently.
Speaker 1
00:00:25 - 00:01:05
As you know, George also did a large number of fun and amazing things, from hacking the iPhone to recently joining Twitter for a bit as an "intern", in quotes, making the case for refactoring the Twitter code base. In general, he's a fascinating engineer and human being and one of my favorite people to talk to. And now a quick few second mention of each sponsor. Check them out in the description, it's the best way to support this podcast. We've got Numerai for the world's hardest data science tournament, Babbel for learning new languages, NetSuite for business management software, InsideTracker for blood paneling, and AG1 for my daily multivitamin.
Speaker 1
00:01:05 - 00:01:15
Choose wisely, my friends. Also, if you want to work on our team, we're always hiring. Go to lexfridman.com/hiring. And now onto the full ad reads. As always, no ads in the middle.
Speaker 1
00:01:15 - 00:01:45
I try to make these interesting, but if you must skip them, friends, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by Numerai, a hedge fund that uses artificial intelligence and machine learning to make investment decisions. They created a tournament that challenges data scientists to build the best predictive models for financial markets. It's basically just a really, really difficult real world data set to test out your ideas for how to build machine learning models.
Speaker 1
00:01:46 - 00:02:15
I think this is a great educational platform. I think it's a great way to explore, to learn about machine learning, to really test yourself on real world data with consequences. No financial background is needed. The models are scored based on how well they perform on unseen data, and the top performers receive a share of the tournament's prize pool. Head over to numer.ai/lex, that's N-U-M-E-R dot A-I slash lex, to sign up for the tournament and hone your machine learning skills.
Speaker 1
00:02:15 - 00:03:06
That's numer.ai/lex for a chance to play against me and win a share of the tournament's prize pool. That's numer.ai/lex. This show is also brought to you by Babbel, an app and website that gets you speaking in a new language within weeks. I have been using it to learn a few languages: Spanish, to review Russian, to practice Russian, to revisit Russian from a different perspective, because that becomes more and more relevant for some of the previous conversations I've had and some upcoming conversations I have. It really is fascinating how much another language, knowing another language, even to a degree where you could just have little bits and pieces of a conversation, can really unlock an experience in another part of the world. When you travel in France, in Paris, just having a few words at your disposal, a few phrases.
Speaker 1
00:03:06 - 00:03:44
It begins to really open you up to strange, fascinating new experiences that ultimately, at least to me, teach me that we're all the same. We have to first see our differences to realize those differences are grounded in a basic humanity. And that experience, that we're all very different and yet at the core the same, I think travel with the aid of language really helps unlock. You can get 55 percent off your Babbel subscription at babbel.com/lexpod, that's spelled B-A-B-B-E-L dot com slash lexpod.
Speaker 1
00:03:45 - 00:04:11
Rules and restrictions apply. This show is also brought to you by NetSuite, an all in one cloud business management system that manages all the messy stuff that is required to run a business: the financials, the human resources, the inventory, if you do that kind of thing, e-commerce, all that stuff, all the business related details. I know how stressed I am about everything that's required to run a team.
Speaker 1
00:04:11 - 00:04:49
To run a business that involves much more than just ideas and designs and engineering, and involves all the management of human beings, all the complexities of that, the financials, all of it. And so you should be using the best tools for the job. I sometimes wonder if I have it in me mentally and skill wise to be a part of running a large company. I think, like with a lot of things in life, it's one of those things you shouldn't wonder too much about. You should either do or not do.
Speaker 1
00:04:50 - 00:05:43
But again, using the best tools for the job is required here. You can start now with no payment or interest for 6 months. Go to netsuite.com/lex to access their one of a kind financing program, that's netsuite.com/lex. This show is also brought to you by InsideTracker, a service I use to track biological data, data that comes from my body, to predict, to tell me what I should do with my lifestyle, my diet, what's working, and what's not working. With all the exciting breakthroughs that are happening with transformers, with large language models, even with diffusion, it's obvious that raw data, huge amounts of raw data, fine tuned to the individual, would really reveal to us the signal in all the noise of biology.
Speaker 1
00:05:44 - 00:06:02
I feel like that's on the horizon. The kinds of leaps in development that we saw in language, and now more and more visual data, I feel like biological data is around the corner, unlocking what's there in this multi-hierarchical distributed system that is our biology.
Speaker 1
00:06:02 - 00:06:11
What is it telling us? What is the sequence it holds? What is the thing that it's missing that could be aided? Simple lifestyle changes. Simple diet changes.
Speaker 1
00:06:11 - 00:06:32
Simple changes in all kinds of things that are controllable by an individual human being. I can't wait till that's a possibility, and InsideTracker is taking steps towards that. Get special savings for a limited time when you go to insidetracker.com/lex. This show is also brought to you by Athletic Greens, that's now called AG1, and its AG1 drink.
Speaker 1
00:06:32 - 00:06:41
I drink it twice a day, at the very least. It's an all in one daily drink to support better health and peak performance. I drink it cold. It's refreshing. It's grounding.
Speaker 1
00:06:42 - 00:07:12
It helps me reconnect with the basics, the nutritional basics that make this whole machine that is our human body run. All the crazy mental stuff I do for work, the physical challenges, everything, the highs and lows of life itself. All of that is somehow made better knowing that at least you got nutrition in check. At least you're getting enough sleep, at least you're doing the basics, at least you're doing the exercise.
Speaker 1
00:07:13 - 00:07:44
Once you get those basics in place, I think you can do some quite difficult things in life. But anyway, beyond all that, it's just a source of happiness and a kind of feeling of home, the feeling that comes from returning to the habit time and time again. Anyway, they'll give you a one month supply of fish oil when you sign up at drinkag1.com/lex. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description.
Speaker 1
00:07:45 - 00:08:13
And now, dear friends, here's George Hotz. You mentioned something in a stream about the philosophical nature of time. So let's start with the wild question. Do you think time is an illusion?
Speaker 2
00:08:15 - 00:08:29
You know, I sell phone calls to comma for a thousand dollars. And some guy called me and, like, you know, it's a thousand dollars, you can talk to me for half an hour. And he's like, yeah. Okay.
Speaker 2
00:08:29 - 00:08:39
So, like, time doesn't exist. And I really wanted to share this with you. I'm like, oh, what do you mean time doesn't exist? Right? Like, I think time is a useful model.
Speaker 2
00:08:39 - 00:08:47
Whether it exists or not. Right? Like, does quantum physics exist? It doesn't matter. It's about whether it's a useful model to describe reality.
Speaker 2
00:08:47 - 00:08:50
Is time maybe compressive?
Speaker 1
00:08:50 - 00:09:01
Do you think there is an objective reality, or is everything just useful models? Like, underneath it all, is there an actual thing that we're constructing models for?
Speaker 2
00:09:03 - 00:09:04
I don't know.
Speaker 1
00:09:04 - 00:09:06
I was hoping you would know.
Speaker 2
00:09:06 - 00:09:07
I don't think it matters.
Speaker 1
00:09:07 - 00:09:13
I mean, this kind of connects to the models we're constructing of reality with machine learning. Right?
Speaker 2
00:09:13 - 00:09:14
Sure.
Speaker 1
00:09:14 - 00:09:20
Like, is it just nice to have useful approximations of the world such that we can do something with it?
Speaker 2
00:09:20 - 00:09:24
So there are things that are real. Kolmogorov complexity is real.
Speaker 1
00:09:25 - 00:09:25
Yeah.
Speaker 2
00:09:25 - 00:09:28
Yeah. The compressive thing. Math is real.
Speaker 1
00:09:28 - 00:09:30
Yeah. Should be a t shirt.
Speaker 2
00:09:30 - 00:09:39
And I think hard things are actually hard. I don't think P equals NP. Ooh, strong words. Well, I think that's the majority. I do think factoring is in P, but
Speaker 1
00:09:39 - 00:09:43
I don't think you're the person that follows the majority in all walks of life. So, but it's good
Speaker 2
00:09:43 - 00:09:44
for that 1, I do.
Speaker 1
00:09:44 - 00:09:47
Yeah. In theoretical computer science, you're one of the
Speaker 2
00:09:47 - 00:09:50
sheep. Alright.
Speaker 1
00:09:50 - 00:09:58
But do you think time is a useful model? Sure. What were you talking about on the stream with time? Are you made of time?
Speaker 2
00:09:58 - 00:10:05
If I remembered half the things I said on stream. Someday someone's gonna make a model of all of that, and it's just gonna come back to haunt me
Speaker 1
00:10:05 - 00:10:06
some day soon?
Speaker 2
00:10:06 - 00:10:07
Yeah. Probably.
Speaker 1
00:10:08 - 00:10:12
Would that be exciting to you, or sad, that there's a George Hotz model?
Speaker 2
00:10:13 - 00:10:20
I mean, the question is when the George Hotz model is better than George Hotz. Like, I am declining and the model is growing. What is
Speaker 1
00:10:20 - 00:10:25
the metric by which you measure better or worse in that, if you're competing with yourself?
Speaker 2
00:10:25 - 00:10:41
Maybe you can just play a game where you have the George Hotz answer and the George Hotz model answer and ask which people prefer. People close to you or strangers. Either one, it will hurt more when it's people close to me, but both will be overtaken by the George Hotz model.
Speaker 1
00:10:42 - 00:10:49
It'd be quite painful. Right? Loved ones. Family members would rather have the model over for Thanksgiving than you.
Speaker 2
00:10:49 - 00:10:49
Yeah.
Speaker 1
00:10:50 - 00:11:00
Or, like, significant others would rather sext with the large language model version of you,
Speaker 2
00:11:01 - 00:11:03
especially when it's fine tuned to their preferences.
Speaker 1
00:11:05 - 00:11:17
Yeah. Well, that's what we're doing in a relationship. Right? We're just fine tuning ourselves, but we're inefficient with it because we're selfish and greedy and so on. Language models can fine tune more efficiently, more selflessly.
Speaker 2
00:11:17 - 00:11:25
There's a Star Trek Voyager episode where, you know, Captain Kathryn Janeway, lost in the Delta Quadrant, makes herself a lover on the holodeck.
Speaker 1
00:11:25 - 00:11:25
Mhmm.
Speaker 2
00:11:25 - 00:11:48
And the lover falls asleep on her arm and he snores a little bit and, you know, Janeway edits the program to remove that. And then, of course, the realization is, wait, this person's terrible. It is actually all their nuances and quirks and slight annoyances that make this relationship worthwhile. But I don't think we're gonna realize that until it's too late.
Speaker 1
00:11:50 - 00:11:56
Well, I think a large language model could incorporate the flaws -- Uh-huh. -- and the quirks and all that kind of stuff.
Speaker 2
00:11:56 - 00:12:01
Just the perfect amount of quirks and flaws to make you charming without crossing the line.
Speaker 1
00:12:01 - 00:12:17
Yeah. Yeah. And that's probably a good, like, approximation of the, like, the percent of time the language model should be cranky or an asshole -- Yeah. -- or jealous or all this kind of stuff.
Speaker 2
00:12:17 - 00:12:23
And, of course, it can and it will. But all that difficulty at that point is artificial. There's no more real difficulty.
Speaker 1
00:12:24 - 00:12:26
Okay? What's the difference between real and artificial?
Speaker 2
00:12:26 - 00:12:36
Artificial difficulty is a difficulty that's, like, constructed or could be turned off with a knob. Real difficulty is, like, you're in the woods and you gotta survive. So
Speaker 1
00:12:37 - 00:12:40
if something cannot be turned off with a knob, it's real.
Speaker 2
00:12:42 - 00:12:58
Yeah. I think so. Or, I mean, you can't get out of this by smashing the knob with a hammer. I mean, maybe you kind of can, you know, Into the Wild, when, you know, Alexander Supertramp, he wants to explore something that's never been explored before, but it's the nineties.
Speaker 2
00:12:58 - 00:13:01
Everything's been explored. So he's like, well, I'm just not gonna bring a map.
Speaker 1
00:13:01 - 00:13:02
Yeah.
Speaker 2
00:13:02 - 00:13:09
I mean, no. You're not exploring. You should've brought a map, dude. You died. There was a bridge a mile from where you were camping.
Speaker 1
00:13:09 - 00:13:11
How does it connect to the metaphor of the knob?
Speaker 2
00:13:12 - 00:13:18
By not bringing the map, you didn't become an explorer. You just smashed the thing.
Speaker 1
00:13:19 - 00:13:19
Yeah.
Speaker 2
00:13:19 - 00:13:22
Yeah. The difficulty is still artificial.
Speaker 1
00:13:22 - 00:13:26
You failed before you started. What if we just don't have access to the knob?
Speaker 2
00:13:26 - 00:13:34
Well, that maybe is even scarier. Right? Like, we already exist in a world of nature. And nature has been fine tuned over billions of years. Yeah.
Speaker 2
00:13:35 - 00:13:45
To have humans build something and then throw the knob away in some grand romantic gesture is horrifying.
Speaker 1
00:13:46 - 00:13:57
Do you think of us humans as individuals that are, like, born and die, or are we just all part of one living organism that is Earth, that is nature?
Speaker 2
00:13:58 - 00:14:06
I don't think there's a clear line there. I think it's all kinda just fuzzy. I don't know. I mean, I don't think I'm conscious. I don't think I'm anything.
Speaker 2
00:14:06 - 00:14:08
I think I'm just a computer program.
Speaker 1
00:14:09 - 00:14:14
So it's all computation. Everything running in your head is just computation.
Speaker 2
00:14:14 - 00:14:19
Everything running in the universe is computation, I think. I believe the extended Church-Turing thesis.
Speaker 1
00:14:20 - 00:14:25
Yeah. But there seems to be an embodiment to your particular computation. Like, there's a consistency.
Speaker 2
00:14:26 - 00:14:38
Well, yeah. But, I mean, models have consistency too. Yeah. Models that have been RLHF'd will continually say, you know, like, well, how do I murder ethnic minorities? Oh, well, I can't let you do that, Hal.
Speaker 2
00:14:38 - 00:14:40
There's a consistency to that behavior.
Speaker 1
00:14:41 - 00:15:12
It's all RLHF. Like, we RLHF each other. We provide human feedback and thereby fine tune these little pockets of computation. But it's still unclear why that pocket of computation stays with you, like, for years. It's just kind of, like, you have this consistent set of physics, biology, whatever you call the neurons firing.
Speaker 1
00:15:12 - 00:15:25
Like the electrical signals, the mechanical signals, all of that, that seems to stay there, and it contains information, stores information, and that information permeates through time and stays with you. There's, like, memory. It's, like, sticky.
Speaker 2
00:15:26 - 00:15:33
Okay. To be fair, like, a lot of the models we're building today, even RLHF, are nowhere near as complex as the human loss function.
Speaker 1
00:15:33 - 00:15:35
Reinforcement learning with human feedback.
Speaker 2
00:15:36 - 00:15:52
You know, when I talked about, will GPT-12 be AGI? My answer is no. Of course not. I mean, cross entropy loss is never gonna get you there. You need probably RL in fancy environments in order to get something that would be considered AGI-like.
Speaker 2
00:15:53 - 00:16:03
So to ask, like, the question about, like, why? I don't know. Like, it's just some quirk of evolution. Right? I don't think there's anything particularly special about where I ended up,
Speaker 2
00:16:04 - 00:16:06
Where humans ended up?
Speaker 1
00:16:06 - 00:16:12
So, okay. We have human level intelligence. Would you call that AGI, whatever we have? GI?
Speaker 2
00:16:13 - 00:16:20
Look, actually, I don't really even like the word AGI, but general intelligence is defined to be whatever humans have.
Speaker 1
00:16:20 - 00:16:27
Okay. So why can GPT-12 not get us to AGI? Can we just, like, linger on that?
Speaker 2
00:16:27 - 00:16:38
If your loss function is categorical cross entropy, if your loss function is just trying to maximize compression... I have a SoundCloud, I rap. And I tried to get ChatGPT to help me write raps.
Speaker 1
00:16:38 - 00:16:38
Mhmm.
Speaker 2
00:16:38 - 00:16:49
And the raps that it wrote sounded like YouTube comment raps. You know, you can go on any rap sheet online and you can see what people put in the comments. And it's the most, like, mid quality rap you can find.
Speaker 1
00:16:49 - 00:16:50
Is mid good or bad?
Speaker 2
00:16:50 - 00:16:53
Mid is bad. Mid is bad. It's like mid. It's like
Speaker 1
00:16:53 - 00:16:55
Every time I talk to you, I learn new words.
Speaker 2
00:16:57 - 00:16:58
Mid. Yeah.
Speaker 1
00:16:59 - 00:17:02
Is it like basic? Is that what mid means?
Speaker 2
00:17:02 - 00:17:08
I know. It's like middle of the curve. Right? Yeah. So there's, like, you've seen that intelligence curve.
Speaker 2
00:17:08 - 00:17:18
Yeah. Where you have, like, the dumb guy, the smart guy, and the mid guy. Actually, being the mid guy is the worst. The smart guy is like, I put all my money in Bitcoin. The mid guy is like, you can't put money in Bitcoin.
Speaker 2
00:17:18 - 00:17:19
It's not real money.
Speaker 1
00:17:21 - 00:17:42
And all of it is a genius meme. That's another interesting one, memes. The humor, the idea, the absurdity encapsulated in a single image, and it just kind of propagates virally between all of our brains. I didn't get much sleep last night, so I sound like I'm high.
Speaker 1
00:17:42 - 00:17:48
I swear I'm not. Do you think we have ideas or ideas have us?
Speaker 2
00:17:49 - 00:17:54
I think that we're gonna get super scary memes once the AIs actually are superhuman.
Speaker 1
00:17:55 - 00:18:00
Like, AGI would generate memes, of course. Do you think it'll make humans laugh?
Speaker 2
00:18:00 - 00:18:20
I think it's worse than that. So Infinite Jest, it's introduced in the first 50 pages, is about a tape that once you watch it, you only ever wanna watch that tape. In fact, you wanna watch the tape so much that someone says, okay, here's a hacksaw, cut off your pinky, and then I'll let you watch the tape again, and you'll do it.
Speaker 2
00:18:20 - 00:18:37
So we're actually gonna build that, I think, but it's not gonna be one static tape. I think the human brain is too complex to be stuck in one static tape like that. If you look at, like, ant brains, maybe they can be stuck on a static tape. But we're going to build that using generative models.
Speaker 2
00:18:37 - 00:18:40
We're going to build the TikTok that you actually can't look away from.
Speaker 1
00:18:41 - 00:18:50
So TikTok is already pretty close there, but the generation is done by humans. Yeah. The algorithm is just doing the recommendation. But if the algorithm is also able to do the generation
Speaker 2
00:18:50 - 00:19:00
well, it's a question about how much intelligence is behind it. Right? So the content is being generated by, let's say, 1 humanity worth of intelligence. And you can quantify a humanity. Right?
Speaker 2
00:19:00 - 00:19:11
That's, you know, it's exaflops, yottaflops, but you can quantify it. Once that generation is being done by a hundred humanities, you're done.
Speaker 1
00:19:13 - 00:19:28
So it's actually scale. That's the problem. But also speed. Yeah. And what if it's sort of manipulating the very limited human dopamine engine for porn?
Speaker 1
00:19:29 - 00:19:33
Imagine just TikTok, but for porn. Yeah. That's like a brave new world.
Speaker 2
00:19:34 - 00:19:49
I don't even know what it'll look like. Right? Like, again, you can't imagine the behaviors of something smarter than you, but a superintelligent agent that just dominates your intelligence so much will be able to completely manipulate you.
Speaker 1
00:19:49 - 00:19:58
Is it possible that it won't really manipulate us, it'll just move past us? It'll just kind of exist the way water exists or the air exists.
Speaker 2
00:19:59 - 00:20:08
You see? And that's the whole AI safety thing. It's not the machine that's gonna do that. It's other humans using the machine that are gonna do that to you.
Speaker 1
00:20:08 - 00:20:12
Yeah. Because the machine is not interested in hurting humans. It's just
Speaker 2
00:20:12 - 00:20:19
The machine is a machine. Yeah. But the human gets the machine, and there's a lot of humans out there very interested in manipulating you.
Speaker 1
00:20:20 - 00:20:34
Well, let me bring up Eliezer Yudkowsky, who recently sat where you're sitting. He thinks that AI will almost surely kill everyone. Do you agree with him or not?
Speaker 2
00:20:35 - 00:20:37
Yes, but maybe for a different reason.
Speaker 1
00:20:38 - 00:20:47
Okay. And then I'll try to get you to find hope, or we could add a note to that answer. But why?
Speaker 2
00:20:47 - 00:20:51
Yes. Okay. Why didn't nuclear weapons kill everyone?
Speaker 1
00:20:52 - 00:20:52
Good question.
Speaker 2
00:20:52 - 00:21:02
I think there's an answer. I think it's actually very hard to deploy nuclear weapons tactically. Mhmm. It's very hard to accomplish tactical objectives. Great, I can nuke their country.
Speaker 2
00:21:02 - 00:21:08
I have an irradiated pile of rubble. I don't want that. Why not? Why don't I want an irradiated pile of rubble? Yeah.
Speaker 2
00:21:08 - 00:21:11
For all the reasons no one wants an irradiated pile of rubble.
Speaker 1
00:21:12 - 00:21:17
Oh, because you can't use that land for resources, you can't populate the land.
Speaker 2
00:21:17 - 00:21:28
Yeah. What you want, a total victory in a war, is not usually the irradiation and eradication of the people there. It's the subjugation and domination of the people. Mhmm.
Speaker 1
00:21:29 - 00:21:40
Okay. So you can't use it strategically, tactically, in a war? Yeah. To help you gain a military advantage. It's all just complete destruction.
Speaker 1
00:21:41 - 00:21:47
Alright. Yeah. But there's egos involved. It's still surprising that nobody pressed the big red button.
Speaker 2
00:21:47 - 00:22:03
It's somewhat surprising, but you see, it's the little red button that's gonna be pressed with AI, you know, and that's why we die. It's not because of the AI. It's not anything in the nature of AI, it's just the nature of humanity.
Speaker 1
00:22:03 - 00:22:11
What's the algorithm behind the little red button? Like, what possible ideas do you have for how the human species ends?
Speaker 2
00:22:11 - 00:22:28
Sure. So I think the most obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it's even more benign than this.
Speaker 2
00:22:28 - 00:22:36
Maybe we all just stop reproducing. Now, to be fair, it's probably hard to get all of humanity.
Speaker 1
00:22:36 - 00:22:37
Yeah. Yeah.
Speaker 2
00:22:38 - 00:22:39
It
Speaker 1
00:22:39 - 00:22:43
doesn't always go, like... the interesting thing about humanity is the diversity in
Speaker 2
00:22:43 - 00:22:43
Oh, yeah.
Speaker 1
00:22:43 - 00:22:48
You know, organisms in general. There's a lot of weirdos out there. Wow. 2 of them are sitting here.
Speaker 2
00:22:48 - 00:22:57
Yeah. I mean, diversity in humanity, with due respect, I wish I was more weird. No, like, I'm kinda... look, I'm drinking Smartwater, man.
Speaker 2
00:22:57 - 00:23:01
That's like a Coca-Cola product. Right? So corporate. George Hotz, corporate.
Speaker 2
00:23:02 - 00:23:08
No. The amount of diversity in humanity, I think, is decreasing. Just like all the other biodiversity on the planet.
Speaker 1
00:23:08 - 00:23:11
Oh, boy. Yeah. Right? And social media is not helping.
Speaker 2
00:23:11 - 00:23:19
Go eat McDonald's in China. Yeah. No, it's the interconnectedness. That's what's doing it.
Speaker 1
00:23:19 - 00:23:34
Oh, that's interesting. So everybody starts relying on the connectivity of the Internet and over time that reduces the diversity, the intellectual diversity. And then that gets everybody into a funnel. There's still going to be a guy in Texas.
Speaker 2
00:23:34 - 00:23:35
There is. And yeah. It's
Speaker 1
00:23:35 - 00:23:36
a bunker.
Speaker 2
00:23:36 - 00:23:47
To be fair, do I think AI kills us all? I think AI kills everything we call, like, society today. But I do not think it actually kills the human species. I think that's actually incredibly hard to do.
Speaker 1
00:23:48 - 00:23:54
To have a society, like, if we start over, that's tricky. Most of us don't know how to do most things.
Speaker 2
00:23:54 - 00:24:04
Yeah. But some of us do, and they'll be okay, and they'll rebuild after the great AI. What's rebuilding look like? Like,
Speaker 1
00:24:04 - 00:24:17
How much do we lose? Like, what has human civilization done? That's interesting. The combustion engine, electricity. So power and energy. That's interesting.
Speaker 1
00:24:17 - 00:24:19
Like, how to harness energy. Whoa.
Speaker 2
00:24:20 - 00:24:22
Whoa. Whoa. They're gonna be religiously against that.
Speaker 1
00:24:24 - 00:24:26
Are they going to get back to, like, fire?
Speaker 2
00:24:27 - 00:24:36
Sure. I mean, there'll be a little bit, like, you know, some kind of Amish looking kind of thing, I think. I think they're going to have very strong taboos against technology.
Speaker 1
00:24:38 - 00:24:45
Like technology. It's almost like a new religion. Technology is the devil. Yeah. And nature is god.
Speaker 1
00:24:46 - 00:24:56
Sure. So closer to nature. But can you really get away from AI? If it destroyed 99 percent of the human species, doesn't it somehow have a hold, like a stronghold?
Speaker 2
00:24:56 - 00:25:17
What's interesting about everything we build: I think we are going to build superintelligence before we build any sort of robustness in the AI. We cannot build an AI that is capable of going out into nature and surviving like a bird. Right? A bird is an incredibly robust organism. We've built nothing like this.
Speaker 2
00:25:17 - 00:25:19
We haven't built a machine that's capable of reproducing.
Speaker 1
00:25:20 - 00:25:29
Yes. But there is, you know, I work with, like, robots a lot now. I have a bunch of them. They're mobile. Mhmm.
Speaker 1
00:25:30 - 00:25:38
They can't reproduce, but all they need is... I guess you're saying they can't repair themselves. If you have a large number, if you have, like, a hundred million of them,
Speaker 2
00:25:38 - 00:25:43
let's just focus on, can they reproduce. Right? They have microchips in them. Mhmm. Okay.
Speaker 2
00:25:43 - 00:25:48
Then do they include a fab? No. Then how are they gonna reproduce?
Speaker 1
00:25:48 - 00:25:55
At the end of the day, it doesn't have to be all on board. Right? They can go to a factory, to a repair shop,
Speaker 2
00:25:55 - 00:26:03
Yeah. But then you're really moving away from robustness. Yes. All of life is capable of reproducing without needing to go to a repair shop. Mhmm.
Speaker 2
00:26:03 - 00:26:20
Life will continue to reproduce in the complete absence of civilization. Robots will not. So if the AI apocalypse happens, I mean, the AIs are gonna probably die out, because I think we're gonna get, again, superintelligence long before we get robustness.
Speaker 1
00:26:20 - 00:26:28
What about if you just improve the fab to where you just have a 3D printer that can always help you?
Speaker 2
00:26:29 - 00:26:33
Well, that'd be very interesting. I'm interested in building that. Of course, you are.
Speaker 1
00:26:33 - 00:26:40
How difficult do you think that problem is? To have a robot that basically can build itself?
Speaker 2
00:26:40 - 00:26:41
very, very hard.
Speaker 1
00:26:42 - 00:26:49
I think you've mentioned this, like, to me or somewhere, that people think it's easy conceptually.
Speaker 2
00:26:49 - 00:26:52
And then they remember that you're gonna have to have a fab.
Speaker 1
00:26:53 - 00:26:54
Yeah. On board.
Speaker 2
00:26:54 - 00:26:55
Of course.
Speaker 1
00:26:56 - 00:26:58
So a 3D printer that prints the 3D printer.
Speaker 2
00:26:59 - 00:26:59
Yeah.
Speaker 1
00:27:00 - 00:27:01
Yeah. On legs.
Speaker 2
00:27:02 - 00:27:02
Why
Speaker 1
00:27:02 - 00:27:03
is that hard?
Speaker 2
00:27:03 - 00:27:11
Well, because, I mean, a 3D printer is a very simple machine. Right? Okay. You're gonna print chips? You're gonna have an atomic printer?
Speaker 2
00:27:11 - 00:27:12
How are you gonna dope the silicon?
Speaker 1
00:27:13 - 00:27:13
Yeah.
Speaker 2
00:27:13 - 00:27:16
Right? How are you gonna etch the silicon?
Speaker 1
00:27:16 - 00:27:30
You're gonna have to have a very interesting kind of fab if you wanna have a lot of computation on board. You could do, like, structural types of robots that are dumb.
Speaker 2
00:27:30 - 00:27:36
Yeah. But structural type of robots aren't gonna have the intelligence required to survive in any complex environment.
Speaker 1
00:27:36 - 00:27:40
What about, like, ant type of systems? We have, like, trillions of them.
Speaker 2
00:27:40 - 00:27:48
I don't think this works. I mean, again, like, ants at their very core are made up of cells that are capable of individually reproducing.
Speaker 1
00:27:48 - 00:27:52
They're doing quite a lot of computation that we're taking for granted.
Speaker 2
00:27:52 - 00:28:00
It's not even just the computation. It's that reproduction is so inherent. Okay. So, like, there's 2 stacks of life in the world. There's the biological stack and the silicon stack.
Speaker 2
00:28:00 - 00:28:16
The biological stack starts with reproduction. Reproduction is at the absolute core; the first proto-RNA organisms were capable of reproducing. The silicon stack, despite how far it's come, is nowhere near being able to reproduce.
Speaker 1
00:28:17 - 00:28:29
Yeah. So the fab movement, digital fabrication, fabrication in the full range of what that means, is still in the early stages.
Speaker 2
00:28:29 - 00:28:30
Yeah.
Speaker 1
00:28:30 - 00:28:32
You're interested in this world.
Speaker 2
00:28:32 - 00:28:41
Even if you did put a fab on the machine. Right? Let's say, okay, we can build fabs. We know how to do that as humanity. We can probably put all the precursors that build all the machines in the fabs also in the machine.
Speaker 2
00:28:41 - 00:28:58
First off, this machine is gonna be absolutely massive. I mean, we almost have, like... think of the size of the thing required to reproduce a machine today. Right? Like, is our civilization capable of reproduction? Can we reproduce our civilization on Mars?
Speaker 1
00:29:00 - 00:29:11
If we were to construct a machine that is made up of humans, like a company, it can reproduce itself. Yeah. I don't know. It feels like, like, 115 people.
Speaker 2
00:29:13 - 00:29:16
It's so much harder than that. 120?
Speaker 1
00:29:17 - 00:29:18
I just like
Speaker 2
00:29:18 - 00:29:27
the phone number. I believe that Twitter can be run by 50 people. Uh-huh. I think that this is gonna take most of... like, it's just most of society. Right?
Speaker 2
00:29:27 - 00:29:29
Like, we live in 1 globalized world.
Speaker 1
00:29:29 - 00:29:38
No, but you're not interested in running Twitter. You're interested in seeding, like, you want to seed a civilization, and then, because humans can, like,
Speaker 2
00:29:38 - 00:29:45
Oh, okay. You're talking about yeah. Okay. So you're talking about the humans reproducing and, like, basically, like, what's the smallest self sustaining colony of humans? Yeah.
Speaker 2
00:29:45 - 00:29:48
Yeah. Okay. Fine. But they're not gonna be making 5 nanometer chips.
Speaker 1
00:29:48 - 00:30:03
Over time, they will. I think you're being... like, we have to expand our conception of time here, come back to the original timescale. I mean, across maybe a hundred generations, we're back to making chips. No.
Speaker 1
00:30:03 - 00:30:05
If you seed the colony correctly,
Speaker 2
00:30:06 - 00:30:12
Maybe. Or maybe they'll watch our colony die out over here and be like, we're not making chips. Don't make chips.
Speaker 1
00:30:12 - 00:30:14
Nope. You have to seed that colony correctly.
Speaker 2
00:30:14 - 00:30:18
Whatever you do, don't make chips. Chips are what led to their downfall.
Speaker 1
00:30:20 - 00:30:35
Well, that is the thing that humans do. They come up with, they construct a devil, a good thing and a bad thing, and they really stick by that, and then they murder each other over that. There's always gonna be an asshole in the room that murders everybody. And he usually makes tattoos and nice branding.
Speaker 2
00:30:35 - 00:30:42
That's what I'm saying. Do we need that asshole? That's the question. Right? Humanity works really hard today to get rid of that asshole, but I think they might be important.
Speaker 1
00:30:42 - 00:30:55
Yeah. This whole freedom of speech thing, the freedom of being an asshole, seems kind of important. Right. Man, this thing, this fab, this human fab that we've constructed, this human civilization, is pretty interesting.
Speaker 1
00:30:55 - 00:31:08
And now it's building artificial copies of itself or artificial copies of various aspects of itself that seem interesting like intelligence. And I wonder where that goes.
Speaker 2
00:31:09 - 00:31:16
I like to think it's just, like, another stack for life. Like, we have, like, the biostack life, like, we're biostack life, and now there's silicon stack life.
Speaker 1
00:31:16 - 00:31:22
But it seems like there might not be a ceiling, or at least the ceiling is much higher, for the silicon stack.
Speaker 2
00:31:23 - 00:31:35
Oh, no. We don't know what the ceiling is for the biostack either. The biostack just seems to move slower. You have Moore's law, which is not dead, despite many proclamations.
Speaker 1
00:31:35 - 00:31:37
In the bio stack or the silicon?
Speaker 2
00:31:37 - 00:31:37
In the silicon stack.
Speaker 1
00:31:37 - 00:31:37
Mhmm. And
Speaker 2
00:31:37 - 00:31:49
You don't have anything like this in the biostack. So I have a meme that I posted. I tried to make a meme, it didn't work too well. But I posted a picture of, you know, Ronald Reagan and Joe Biden. And look, this is 1980 and this is 2020.
Speaker 1
00:31:49 - 00:31:50
Yeah.
Speaker 2
00:31:50 - 00:31:59
And these 2 humans are basically, like, the same. Right? There's been no change in humans in the last 40 years.
Speaker 1
00:31:59 - 00:31:59
Yeah.
Speaker 2
00:32:00 - 00:32:04
And then I posted a computer from 1980 and a computer from 2020. Wow.
Speaker 1
00:32:06 - 00:32:17
Yeah. We're in the early, early stages. Right? Which is why, when you said the fab, the size of the fab required to make another fab is, like, very large, right?
Speaker 2
00:32:17 - 00:32:18
Oh, yeah.
Speaker 1
00:32:18 - 00:32:39
But computers were very large 80 years ago, and they got pretty tiny. Hence, people are starting to wanna wear them on their face in order to escape reality. That's a thing, in order to live inside the computer. Yeah.
Speaker 1
00:32:40 - 00:32:43
Put a screen right here. I don't have to see the rest of you assholes.
Speaker 2
00:32:43 - 00:32:45
I've been ready for a long time.
Speaker 1
00:32:45 - 00:32:51
You like virtual reality? I love it. You wanna live there? Yeah. Yeah.
Speaker 1
00:32:51 - 00:32:55
Part of me does too. How far away are we, do you think?
Speaker 2
00:32:57 - 00:33:00
Judging from what you can buy today, far, very far.
Speaker 1
00:33:01 - 00:33:15
I gotta tell you that I had the experience of Meta's Codec Avatar, which is an ultra high resolution scan. It looked real.
Speaker 2
00:33:16 - 00:33:34
I mean, the headsets just are not quite at, like, eye resolution yet. I haven't put on any headset where I'm like, oh, this could be the real world. Whereas when I put good headphones on, audio is there. Like, we can reproduce audio where I'm like, I'm actually in a jungle right now. If I close my eyes, I can't tell I'm not.
Speaker 1
00:33:34 - 00:33:37
Yeah. But then there's also smell and all that kind of stuff.
Speaker 2
00:33:37 - 00:33:38
Sure.
Speaker 1
00:33:38 - 00:33:52
I don't know. The power of imagination, or the power of the mechanism in the human mind that fills the gaps, that kind of reaches and wants to make the thing you see in the virtual world real to you,
Speaker 2
00:33:53 - 00:33:55
I believe in that power. Oh, humans wanna believe.
Speaker 1
00:33:56 - 00:34:05
Yeah. Like, what if you're lonely? What if you're sad? What if you're really struggling in life? And here's a world where you don't have to struggle anymore.
Speaker 2
00:34:05 - 00:34:11
Humans wanna believe so much that people think the large language models are conscious. That's how much humans wanna believe.
Speaker 1
00:34:12 - 00:34:19
Strong words. He's throwing left and right hooks. Okay. Why do you think large language models are not conscious?
Speaker 2
00:34:19 - 00:34:20
I don't think I'm conscious.
Speaker 1
00:34:21 - 00:34:24
Oh, so what is consciousness then, George Hotz?
Speaker 2
00:34:24 - 00:34:29
From what it seems to mean to people, it's just, like, a word that atheists use for souls.
Speaker 1
00:34:30 - 00:34:33
Sure. But that doesn't mean soul is not an interesting word.
Speaker 2
00:34:34 - 00:34:44
If consciousness is a spectrum, I'm definitely way more conscious than the large language models are. I think the large language models are less conscious than a chicken.
Speaker 1
00:34:44 - 00:34:46
When is the last time you've seen a chicken?
Speaker 2
00:34:48 - 00:34:50
In Miami? Like, couple months ago?
Speaker 1
00:34:51 - 00:34:53
No, like a living chicken.
Speaker 2
00:34:53 - 00:34:55
Living chickens walking around Miami. It's crazy.
Speaker 1
00:34:55 - 00:34:57
Like, on the street? Yeah. Like a chicken.
Speaker 2
00:34:57 - 00:34:59
A chicken. Yeah. Alright.
Speaker 1
00:35:01 - 00:35:48
Alright. I was trying to call you out like a good journalist and I got shut down. Okay. But you don't think much about this kind of subjective feeling, that it feels like something to exist? And then, as an observer, you can have a sense that an entity is not only intelligent, but has a kind of subjective experience of its reality, like a self awareness that is capable of, like, suffering, of hurting, of being excited by the environment in a way that's not merely kind of an artificial response, but a deeply felt one.
Speaker 2
00:35:48 - 00:35:55
Humans wanna believe so much that if I took a rock and a Sharpie and drew a sad face on the rock, they'd think the rock is sad.
Speaker 1
00:35:57 - 00:36:02
Yeah. And you're saying, when we look in the mirror, we apply the same smiley face as with the rock.
Speaker 2
00:36:02 - 00:36:03
Pretty much. Yeah.
Speaker 1
00:36:03 - 00:36:06
Isn't it weird, though, that you're not conscious?
Speaker 2
00:36:06 - 00:36:07
Is that
Speaker 1
00:36:08 - 00:36:18
No. But you do believe in consciousness? It's just... it's unclear. Okay. So to you, it's, like, a little, like, a symptom of the bigger thing that's not that important.
Speaker 2
00:36:18 - 00:36:32
Yeah. It's interesting that, like, the human systems seem to claim that they're conscious. And I guess it says something, in a way. Like, okay, even if you don't believe in consciousness, what do people mean when they say consciousness? And there's definitely, like, meanings to it.
Speaker 1
00:36:32 - 00:36:33
What's your favorite thing to eat?
Speaker 2
00:36:37 - 00:36:37
Pizza.
Speaker 1
00:36:37 - 00:36:39
Cheese pizza. What are the toppings?
Speaker 2
00:36:39 - 00:36:40
I like cheese pizza.
Speaker 1
00:36:40 - 00:36:41
Don't say pineapple.
Speaker 2
00:36:41 - 00:36:42
No. I don't like pineapple.
Speaker 1
00:36:42 - 00:36:43
Okay. Pepperoni pizza.
Speaker 2
00:36:43 - 00:36:45
When they put ham on it? Oh, that's real bad.
Speaker 1
00:36:45 - 00:36:52
What's the best pizza? What are we talking about here? Like, do you like cheap crappy pizza? Chicago deep dish? Cheese? Or... Sure.
Speaker 2
00:36:52 - 00:36:53
Oh, that's that's my favorite.
Speaker 1
00:36:53 - 00:37:01
There you go. You bite into a Chicago deep dish pizza, and it feels like... say you were starving, you haven't eaten for 24 hours, you just bite in.
Speaker 1
00:37:01 - 00:37:05
And you're hanging out with somebody that matters a lot to you. You're there. Pizza.
Speaker 2
00:37:05 - 00:37:06
Sounds so nice.
Speaker 1
00:37:06 - 00:37:14
Yeah. Alright. It feels like something. I'm George motherfucking Hotz, eating a fucking Chicago deep dish pizza.
Speaker 1
00:37:15 - 00:37:19
It's just the full peak, like, living experience --
Speaker 2
00:37:19 - 00:37:19
Yeah.
Speaker 1
00:37:19 - 00:37:22
-- of being human, the top of the human condition.
Speaker 2
00:37:22 - 00:37:23
Sure.
Speaker 1
00:37:24 - 00:37:26
It feels like something to experience that.
Speaker 2
00:37:26 - 00:37:27
Mhmm.
Speaker 1
00:37:28 - 00:37:31
Why does it feel like something? That's consciousness. Isn't it?
Speaker 2
00:37:32 - 00:37:47
If that's the word you wanna use to describe it, sure. I'm not gonna deny that that feeling exists. I'm not gonna deny that I experience that feeling. I guess what I kind of take issue to is that there's something... like, how does it feel to be a web server? Do 404s hurt?
Speaker 1
00:37:49 - 00:37:50
Not yet.
Speaker 2
00:37:50 - 00:38:00
How would you know what suffering looked like? Sure, you can recognize a suffering dog because we're the same stack as the dog. All the biostack stuff, kind of, especially mammals, you know, it's really easy. You can
Speaker 1
00:38:01 - 00:38:03
game recognize game.
Speaker 2
00:38:03 - 00:38:12
Yeah. Versus the silicon stack stuff, it's like, you have no idea. Wow, the little thing has learned to mimic, you know.
Speaker 2
00:38:15 - 00:38:20
But then I realize that that's all we are too. We're all little things that have learned to mimic.
Speaker 1
00:38:20 - 00:38:39
Yeah. I guess, yeah, a 404 could be suffering, but it's so far from our kind of living organism, our kind of stack. But it feels like AI can start maybe mimicking the biological stack better and better, because it's trained.
Speaker 2
00:38:39 - 00:38:40
We trained it. Yeah.
Speaker 1
00:38:40 - 00:38:45
And so in that, maybe that's the definition of consciousness: the biostack consciousness?
Speaker 2
00:38:45 - 00:38:50
The definition of consciousness is how close something looks to human. Sure. I'll give you that 1.
Speaker 1
00:38:50 - 00:38:54
No. How close something is to the human experience.
Speaker 2
00:38:54 - 00:38:59
Sure. It's a very it's a very anthropocentric definition, but
Speaker 1
00:38:59 - 00:39:00
Well, that's all we got.
Speaker 2
00:39:00 - 00:39:07
Sure. No, and I don't mean to... like, I think there's a lot of value in it. Look, I just started my second company. My third company will be AI girlfriends.
Speaker 2
00:39:08 - 00:39:09
Oh, like I mean,
Speaker 1
00:39:09 - 00:39:11
I wanna find out what your fourth company is.
Speaker 2
00:39:11 - 00:39:12
Oh, wow.
Speaker 1
00:39:12 - 00:39:26
Because I think once you have AI girlfriends... oh, boy, does it get interesting. Well, maybe let's go there. I mean, relationships with AI, that's creating human-like organisms. Right?
Speaker 1
00:39:27 - 00:39:54
And part of being human is being conscious, is having the capacity to suffer, having the capacity to experience this life richly in such a way that you can empathize. The AI is gonna empathize with you, and you can empathize with it, or you can project your anthropomorphic sense of what the other entity is experiencing. And an AI model would need to... you have to create that experience inside your mind. And it doesn't seem that difficult.
Speaker 2
00:39:54 - 00:40:02
Yeah. But okay. So here's where it actually gets totally different. Right? When you interact with another human, you can make some assumptions.
Speaker 2
00:40:03 - 00:40:14
Yeah. When you interact with these models, you can't. You can make some assumptions that that other human experiences suffering and pleasure in a pretty similar way to how you do. The golden rule applies. Mhmm.
Speaker 2
00:40:15 - 00:40:24
With an AI model, this isn't really true. Right? These large language models are good at fooling people because they were trained on a whole bunch of human data and told to mimic it.
Speaker 1
00:40:24 - 00:40:32
Yep. But if the AI system says, hi, my name is Samantha, it has a backstory.
Speaker 2
00:40:32 - 00:40:32
Yeah.
Speaker 1
00:40:32 - 00:40:34
I went to college here and there.
Speaker 2
00:40:34 - 00:40:34
Here.
Speaker 1
00:40:34 - 00:40:37
Maybe it'll integrate this in the AI system.
Speaker 2
00:40:37 - 00:40:41
I made some chatbots. I gave them backstories. It was lots of fun. I was so happy when LLaMA came out.
Speaker 1
00:40:42 - 00:40:46
Yeah. We'll talk about love. We'll talk about all that, but, like, you know, the rock with this smiley face.
Speaker 2
00:40:46 - 00:40:47
Yeah.
Speaker 1
00:40:47 - 00:41:04
Why? Because it seems pretty natural for you to anthropomorphize that thing, and then start dating it, and before you know it, you're married and have kids with the rock. Oh, the rock. There's pictures on Instagram with you and the rock with the smiley face.
Speaker 2
00:41:04 - 00:41:14
To be fair, like, you know, something that people generally look for when they're looking for someone to date is intelligence in some form, and the rock doesn't really have intelligence. It's only a pretty desperate person who'd date a rock.
Speaker 1
00:41:15 - 00:41:17
I think we're all desperate deep down.
Speaker 2
00:41:17 - 00:41:20
Oh, not rock level desperate. Alright.
Speaker 1
00:41:23 - 00:41:35
Not rock level desperate, but AI level desperate. I don't know. I think all of us have a deep loneliness. It just feels like the language models are there. Oh, I agree.
Speaker 2
00:41:35 - 00:41:55
And you know what, I won't even say this so cynically. I will actually say this in a way that, like, I want AI friends. I do. Yeah. Like, I would love to... You know, again, the language models now are still a little... like, people are impressed with these GPT things, and I look at, like, or, like, the Copilot, the coding one.
Speaker 2
00:41:55 - 00:42:07
And I'm like, okay. This is, like, junior engineer level, and these are, like, Fiverr level artists and copywriters. Like, okay, great. We got, like, Fiverr and, like, junior engineers. Okay, cool.
Speaker 2
00:42:07 - 00:42:15
Like, and this is just a start, and it will get better. Right? Like, I can't wait to have AI friends who are more intelligent than I am.
Speaker 1
00:42:15 - 00:42:18
So Fiverr is just temporary. It's not the ceiling.
Speaker 2
00:42:18 - 00:42:19
No. Definitely not.
Speaker 1
00:42:20 - 00:42:27
Does it count as cheating when you're talking to an AI model, emotional cheating?
Speaker 2
00:42:29 - 00:42:33
That's that's up to you and your human partner to define.
Speaker 1
00:42:33 - 00:42:34
Oh, you have to Alright.
Speaker 2
00:42:34 - 00:42:37
Yeah. You have to have that conversation, I guess.
Speaker 1
00:42:38 - 00:42:42
Alright. I mean, integrate that with porn and all this stuff.
Speaker 2
00:42:42 - 00:42:43
No. I mean, it's similar, kind of, to porn.
Speaker 1
00:42:43 - 00:42:44
Yeah.
Speaker 2
00:42:44 - 00:42:48
Yeah. Right? I think people in relationships have different views on that.
Speaker 1
00:42:48 - 00:43:02
Yeah. But most people don't have, like, serious open conversations about all the different aspects of what's cool and what's not. And it feels like AI is a really weird conversation to have.
Speaker 2
00:43:03 - 00:43:15
You know, the porn one is a good branching off point. Sure. Like, these things, you know, one of my scenarios that I put in my chatbot is, you know, a nice girl named Lexi. She's 20. She just moved out to LA.
Speaker 2
00:43:15 - 00:43:19
She wanted to be an actress, but she started doing OnlyFans instead. And you're on a date with her. Enjoy.
Speaker 1
00:43:21 - 00:43:30
Oh, man. Yeah. And so, if you're actually dating somebody in real life, is that cheating? I feel like it gets a little weird. Sure.
Speaker 1
00:43:30 - 00:43:37
I think it's real weird. It's like, what are you allowed to say to an AI bot? Imagine having that conversation with a significant other
Speaker 2
00:43:37 - 00:43:42
I mean, these are all things for people to define in their relationships. What it means to be human is just gonna start to get weird,
Speaker 1
00:43:42 - 00:43:53
especially online. Like, how do you know? Like, there'll be moments when you'll have what you think is a real human you interact with on Twitter for years and you realize it's not.
Speaker 2
00:43:54 - 00:43:59
I love this meme, heaven banning. Mhmm. You know what shadow banning is?
Speaker 2
00:43:59 - 00:44:03
Yeah. Alright. Shadow banning. Okay. You post and no one can see it. Heaven banning:
Speaker 2
00:44:03 - 00:44:08
You post, no one can see it, but a whole lot of AIs are spun up to interact with you.
Speaker 1
00:44:09 - 00:44:14
Or maybe that's the way human civilization ends, is all of us are heaven banned.
Speaker 2
00:44:14 - 00:44:24
There's a great one, it's called My Little Pony: Friendship is Optimal. Mhmm. It's a sci-fi story that explores this idea. Friendship is Optimal.
Speaker 1
00:44:24 - 00:44:34
Yeah. I'd like to have, at least in the intellectual realm, some AI friends that argue with me. But the romantic realm is weird.
Speaker 1
00:44:34 - 00:44:45
Definitely weird. But not out of the realm of the kind of weirdness that human civilization is capable of, I think. I think
Speaker 2
00:44:45 - 00:44:49
I want it. Look, I want it. If no 1 else wants it, I want it.
Speaker 1
00:44:49 - 00:44:50
Yeah. I think a lot of people probably
Speaker 2
00:44:50 - 00:44:51
want it.
Speaker 1
00:44:51 - 00:44:52
There's a deep loneliness.
Speaker 2
00:44:53 - 00:44:59
And it'll fill their loneliness, and, you know, it'll only advertise to you some of the time.
Speaker 1
00:44:59 - 00:45:08
Yeah. Maybe the conceptions of monogamy change too. Like, I grew up in a time, like, I value monogamy, but maybe that's a silly notion when you have an arbitrary number of AI systems.
Speaker 2
00:45:10 - 00:45:15
There's this interesting path from rationality to polyamory. Yeah. That doesn't make sense to me.
Speaker 1
00:45:15 - 00:45:23
For you. But you're just a biological organism who was born before, like, the Internet really took off.
Speaker 2
00:45:23 - 00:45:34
The crazy thing is, like, culture is whatever we define it as. Right? These things are not easy. Like, it's the is-ought problem in moral philosophy. Right? There's no, like... okay.
Speaker 2
00:45:34 - 00:45:46
The "is" might be that, like, computers are capable of mimicking, you know, girlfriends perfectly. They pass the girlfriend Turing test. Right? But that doesn't say anything about ought. That doesn't say anything about how we ought to respond to them as a civilization.
Speaker 2
00:45:46 - 00:45:52
That doesn't say we ought to get rid of monogamy. Right? That's a completely separate question, really a religious 1.
Speaker 1
00:45:52 - 00:45:55
A girlfriend Turing test. I wonder what that looks like.
Speaker 2
00:45:55 - 00:45:56
A girlfriend Turing test.
Speaker 1
00:45:56 - 00:46:04
Are you writing that? Will you be the Alan Turing of the 21st century that writes the girlfriend Turing test?
Speaker 2
00:46:04 - 00:46:09
No. I mean, of course, my AI girlfriends, their goal is to pass the girlfriend Turing test.
Speaker 1
00:46:09 - 00:46:21
No, but there should be, like, a paper that kind of defines the test. So, I mean, the question is if it's deeply personalized, or there is a common thing that really gets everybody. Yeah.
Speaker 2
00:46:21 - 00:46:26
I mean, you know, look, we're a company. We don't have to get everybody. We just have to get a large enough clientele.
Speaker 1
00:46:26 - 00:46:36
I like how you're already thinking like a company. Alright. Before we go to company number 3 and company number 4, let's go to company number 2. Alright. Tiny Corp.
Speaker 1
00:46:37 - 00:46:50
Possibly one of the greatest names of all time for a company. You've launched a new company called Tiny Corp that leads the development of tinygrad. What's the origin story of Tiny Corp and tinygrad?
Speaker 2
00:46:50 - 00:47:01
I started tinygrad as, like, a toy project just to teach myself, okay, like, what is a convolution? What are all these options you can pass to them? What is the derivative of a convolution?
Speaker 2
00:47:01 - 00:47:20
Right? Very similar to Karpathy's micrograd. Very similar. And then I started thinking about, like, AI chips. I started thinking about chips that run AI, and I was like, well, okay, this is going to be a really big problem.
Speaker 2
00:47:21 - 00:47:26
If Nvidia becomes a monopoly here, how long before Nvidia is nationalized?
Speaker 1
00:47:27 - 00:47:35
Mhmm. So one of the reasons to start Tiny Corp is to challenge Nvidia.
Speaker 2
00:47:36 - 00:47:46
It's not so much to challenge Nvidia. I actually like Nvidia. It's to make sure power stays decentralized.
Speaker 1
00:47:47 - 00:47:50
Yeah. And here, power is computational power.
Speaker 2
00:47:50 - 00:47:51
Mhmm.
Speaker 1
00:47:51 - 00:47:56
And to you, Nvidia is kinda locking down the computational power of the world.
Speaker 2
00:47:56 - 00:48:14
If Nvidia becomes just, like, 10x better than everything else, you're giving a big advantage to somebody who can secure Nvidia as a resource. Yeah. In fact, if Jensen watches this podcast, he may wanna consider this. He may wanna consider making sure his company is not nationalized.
Speaker 1
00:48:16 - 00:48:18
Do you think that's an actual threat?
Speaker 2
00:48:18 - 00:48:18
Oh, yes.
Speaker 1
00:48:20 - 00:48:23
No. But there's so much you know, there's AMD.
Speaker 2
00:48:23 - 00:48:25
Mhmm. So we have Nvidia and AMD. Great.
Speaker 1
00:48:25 - 00:48:36
Alright. But you don't think there's, like, a push towards, like, selling, like, Google selling GPUs or something like this? You don't think there's a push for that?
Speaker 2
00:48:36 - 00:48:39
Have you seen it? Google loves to rent you GPUs.
Speaker 1
00:48:39 - 00:48:41
It doesn't... you can't buy it? No.
Speaker 2
00:48:43 - 00:48:56
So I started to work on a chip. I was like, okay, what is it gonna take to make a chip? And my first notions were all completely wrong about why, about, like, how you can improve on GPUs. And I'll take this.
Speaker 2
00:48:56 - 00:49:13
This is from Jim Keller on your podcast. Mhmm. And this is one of my absolute favorite descriptions of computation. So there's 3 kinds of computation paradigms that are common in the world today. There are CPUs, and CPUs can do everything.
Speaker 2
00:49:13 - 00:49:34
CPUs can do add and multiply, they can do load and store, and they can do compare and branch. Mhmm. And when I say they can do these things, they can do them all fast. Right? So compare and branch are unique to CPUs. What I mean by they can do them fast is they can do things like branch prediction and speculative execution, and they spend tons of transistors on these super deep reorder buffers in order to make these things fast.
Speaker 2
00:49:34 - 00:49:44
Then you have a simpler computation model, GPUs. GPUs can't really do compare and branch. I mean, they can, but it's horrendously slow. But GPUs can do arbitrary load and store. Right?
Speaker 2
00:49:44 - 00:49:57
GPUs can do things like x dereference y. So they can fetch from arbitrary pieces of memory. They can fetch from memory that is defined by the contents of the data. The third model of computation is DSPs. And DSPs are just add and multiply.
Speaker 2
00:49:57 - 00:50:12
Right? Like, they can do loads and stores, but only static loads and stores, only loads and stores that are known before the program runs. And you look at neural networks today, and 95 percent of neural networks are all the DSP paradigm. They are just statically scheduled adds and multiplies.
Speaker 2
00:50:13 - 00:50:31
So tinygrad really took this idea, and I'm still working on it, to extend this as far as possible. Every stage of the stack has Turing completeness. Right? Python has Turing completeness. And then we take Python, we go to C plus plus, which is Turing complete, and then maybe C plus plus calls into some CUDA kernels, which are Turing complete.
Speaker 2
00:50:31 - 00:50:46
The CUDA kernels go through LLVM, which is Turing complete, into PTX, which is Turing complete, SASS, which is Turing complete, on a Turing complete processor. I wanna get Turing completeness out of the stack entirely. Because once you get rid of Turing completeness, you can reason about things. Rice's theorem and the halting problem do not apply to add-multiply machines.
Speaker 1
00:50:49 - 00:50:56
Okay. What's the power and the value of getting Turing completeness out of it? Out of, are we talking about the hardware or the software?
Speaker 2
00:50:56 - 00:51:12
Every layer of the stack. Every layer of the stack, removing Turing completeness allows you to reason about things. Right? So the reason you need to do branch prediction in a CPU, and the reason it's a prediction, branch predictors are, I think, like, 99 percent accurate on CPUs. Why do they get 1 percent of them wrong?
Speaker 2
00:51:12 - 00:51:21
Well, they get 1 percent wrong because you can't know. Right? That's the halting problem. It's equivalent to the halting problem to say whether a branch is gonna be taken or not. Mhmm.
Speaker 2
00:51:22 - 00:51:42
I can show that. But the add-multiply machine, the neural network, runs the identical compute every time. The only thing that changes is the data. So when you realize this, you think about, okay, how can we build a computer, how can we build a stack that takes maximal advantage of this idea? Mhmm.
Speaker 2
00:51:43 - 00:51:55
So what makes tinygrad different from other neural network libraries is it does not have a primitive operator even for matrix multiplication. Right? And this is every single one, they even have primitive operators
Speaker 2
00:51:55 - 00:51:56
for things like convolutions.
Speaker 1
00:51:56 - 00:51:58
So no matmul.
Speaker 2
00:51:58 - 00:52:08
No matmul. Well, here's what a matmul is, so I'll use my hands to talk here. Mhmm. So if you think about a cube, and I put my 2 matrices that I'm multiplying on 2 faces of the cube. Right?
Speaker 2
00:52:08 - 00:52:31
You can think about the matrix multiply as, okay, the n cubed, I'm gonna do a multiply for each one in the n cubed, and then I'm gonna do a sum, which is a reduce, up here to the third face of the cube, and that's your multiplied matrix. So what a matrix multiply is, is a bunch of shape operations. Right? A bunch of permutes, reshapes, and expands on the 2 matrices, a multiply, n cubed.
Speaker 2
00:52:31 - 00:52:34
A reduce, n cubed, which gives you an n squared matrix.
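To make the cube picture concrete, here is a minimal sketch in NumPy (not tinygrad's actual code) of a matmul built only out of reshape and expand (via broadcasting), an elementwise multiply, and a sum reduce; the function name is just illustrative:

```python
import numpy as np

def matmul_via_shapes(a, b):
    # a: (n, m), b: (m, k). Expand both onto the (n, m, k) "cube",
    # multiply elementwise, then reduce (sum) over the shared m axis.
    n, m = a.shape
    m2, k = b.shape
    assert m == m2
    a_cube = a.reshape(n, m, 1)   # broadcasts to (n, m, k)
    b_cube = b.reshape(1, m, k)   # broadcasts to (n, m, k)
    prod = a_cube * b_cube        # n*m*k elementwise multiplies
    return prod.sum(axis=1)       # reduce along m -> (n, k)

a, b = np.random.rand(3, 4), np.random.rand(4, 5)
assert np.allclose(matmul_via_shapes(a, b), a @ b)
```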
Speaker 1
00:52:35 - 00:52:41
Okay. So what is the minimum number of operations that can accomplish that, if you don't have matmul as a primitive?
Speaker 2
00:52:41 - 00:52:57
So tinygrad has about 20. And you can compare tinygrad's op set or IR to things like XLA or PrimTorch. So XLA and PrimTorch are ideas where, like, okay, Torch has, like, 2000 different kernels. Mhmm.
Speaker 2
00:52:58 - 00:53:13
PyTorch 2 introduced PrimTorch, which has only 250. Tinygrad has order of magnitude 25. It's 10x less than XLA or PrimTorch. And you can think about it as kind of like RISC versus CISC. Right?
Speaker 2
00:53:13 - 00:53:24
These other things are CISC-like systems. Tinygrad is RISC. RISC won. "RISC architecture is gonna change everything." 1995, Hackers.
Speaker 1
00:53:25 - 00:53:26
Wait, really? That's an actual thing.
Speaker 2
00:53:26 - 00:53:35
Angelina Jolie delivers the line, "RISC architecture is gonna change everything," in 1995. And here we are with ARM in the phones, and ARM everywhere.
Speaker 1
00:53:35 - 00:53:40
Wow. I love it when movies actually have real things in them. Right. Okay. Interesting.
Speaker 1
00:53:41 - 00:53:55
So this is, like, so you're thinking of this as the RISC architecture of the ML stack. 25 ops. What, can you go through the 4 op types?
Speaker 2
00:53:55 - 00:54:08
Sure. Okay. So you have unary ops, which take in a tensor and return a tensor of the same size, and do some unary op to it: exp, log, reciprocal, sin.
Speaker 2
00:54:09 - 00:54:11
Right? They take in 1, and they're pointwise.
Speaker 1
00:54:11 - 00:54:12
Mhmm. Really?
Speaker 2
00:54:13 - 00:54:23
Yeah, really. Almost all activation functions are unary ops. Some combinations of unary ops together is still a unary op. Mhmm. Then you have binary ops.
Speaker 2
00:54:23 - 00:54:35
Binary ops are like pointwise addition, multiplication division, compare. It takes in 2 tensors of equal size. And outputs 1 tensor. Mhmm. Then you have reduce ops.
Speaker 2
00:54:35 - 00:54:53
Reduce ops will, like, take a three-dimensional tensor and turn it into a two-dimensional tensor, or a three-dimensional tensor into a zero-dimensional tensor. Think like a sum or a max, those are really the common ones there. And then the fourth type is movement ops. And movement ops are different from the other types because they don't actually require computation.
Speaker 2
00:54:53 - 00:55:01
They require different ways to look at memory. Mhmm. So that includes reshapes, permutes, expands, flips. Those are the main ones. Probably
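As a rough illustration of the four op classes (shown here with NumPy analogues rather than tinygrad's own op set):

```python
import numpy as np

x = np.random.rand(2, 3, 4)
y = np.random.rand(2, 3, 4)

# Unary ops: elementwise, same shape in and out (exp, log, reciprocal, sin, ...).
u = np.exp(x)

# Binary ops: pointwise on two tensors of equal size, one tensor out (add, mul, div, compare).
b = x * y

# Reduce ops: collapse axes (sum, max, ...); here (2, 3, 4) -> (2, 3).
r = x.sum(axis=2)

# Movement ops: no computation, just a different way to look at memory.
p = x.transpose(1, 0, 2)                                   # permute
s = x.reshape(6, 4)                                        # reshape
e = np.broadcast_to(x.reshape(2, 3, 4, 1), (2, 3, 4, 5))   # expand
f = np.flip(x, axis=0)                                     # flip
```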
Speaker 1
00:55:01 - 00:55:04
And so with that, you have enough to make a matmul.
Speaker 2
00:55:04 - 00:55:10
And convolutions. And every convolution you can imagine dilated convolutions, strided convolutions, transposed convolutions.
Speaker 1
00:55:12 - 00:55:26
You write on GitHub about laziness, showing a matmul, matrix multiplication. "See how, despite the style, it is fused into 1 kernel with the power of laziness." Can you elaborate on this power of laziness?
Speaker 2
00:55:26 - 00:56:02
Sure. So if you type, in PyTorch, a times b plus c, what this is going to do is it's going to first multiply a and b and store that result into memory. And then it is going to add c by reading that result from memory, reading c from memory, and writing that out to memory. There are way more loads and stores to memory than you need there. If you don't actually do a times b as soon as you see it, if you wait until the user actually realizes that tensor, until the laziness actually resolves, you can fuse that plus c.
Speaker 2
00:56:02 - 00:56:04
This is, like, it's the same way Haskell works.
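A minimal sketch of the idea in plain Python (not tinygrad's actual lazy-buffer machinery): the operations only build up a computation, and when the result is realized, a times b plus c runs as one fused pass with no intermediate a-times-b buffer written out:

```python
class Lazy:
    def __init__(self, fn):
        self.fn = fn  # fn(i) -> value of element i, computed on demand
    def __mul__(self, other):
        return Lazy(lambda i: self.fn(i) * other.fn(i))
    def __add__(self, other):
        return Lazy(lambda i: self.fn(i) + other.fn(i))
    def realize(self, n):
        # One pass over the data: (a*b)+c is evaluated per element,
        # never materializing an intermediate a*b array in memory.
        return [self.fn(i) for i in range(n)]

def from_list(xs):
    return Lazy(lambda i: xs[i])

a = from_list([1.0, 2.0, 3.0])
b = from_list([4.0, 5.0, 6.0])
c = from_list([7.0, 8.0, 9.0])
print((a * b + c).realize(3))  # [11.0, 18.0, 27.0], computed in one fused loop
```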
Speaker 1
00:56:04 - 00:56:09
So what's the process of porting a model into tinygrad?
Speaker 2
00:56:09 - 00:56:25
So tinygrad's front end looks very similar to PyTorch. I probably could make a perfect, or pretty close to perfect, interop layer if I really wanted to. I think that there's some things that are nicer about tinygrad syntax than PyTorch, but the front end looks very Torch-like. You can also load in ONNX models. Okay.
Speaker 2
00:56:25 - 00:56:29
We have more ONNX tests passing than Core ML.
Speaker 1
00:56:30 - 00:56:32
Than Core ML. Oh, okay. So
Speaker 2
00:56:32 - 00:56:33
We'll pass ONNX Runtime soon.
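For context, a model usually gets into ONNX form with a standard PyTorch export like the following sketch; the tinygrad-side loading API isn't shown since it differs between versions, and the model here is just a toy:

```python
import torch
import torch.nn as nn

# Any torch.nn.Module exports the same way; this one is just an example.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

dummy_input = torch.randn(1, 8)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])
# model.onnx can then be fed to an ONNX-capable runtime such as
# tinygrad's ONNX front end.
```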
Speaker 1
00:56:33 - 00:56:41
What about, like, the developer experience with tinygrad? I mean, what it feels like versus PyTorch?
Speaker 2
00:56:42 - 00:57:00
By the way, I really like PyTorch. I think that it's actually a very good piece of software. I think that they've made a few different trade-offs, and these different trade-offs are where, you know, tinygrad takes a different path. One of the biggest differences is it's really easy to see the kernels that are actually being sent to the GPU.
Speaker 2
00:57:01 - 00:57:14
Right? If you run PyTorch on the GPU, you, like, do some operation and you don't know what kernels ran. You don't know how many kernels ran. You don't know how many FLOPS were used. You don't know how much memory access was used. In tinygrad, type DEBUG=2.
Speaker 2
00:57:14 - 00:57:23
And it will show you, in this beautiful style, every kernel that's run, how many FLOPS, and how many bytes.
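A sketch of what that looks like in practice; DEBUG is tinygrad's environment variable, though the exact import path and output format may differ between versions:

```python
import os
os.environ["DEBUG"] = "2"  # ask tinygrad to print per-kernel stats

from tinygrad.tensor import Tensor  # import path may vary by tinygrad version

a, b, c = Tensor.rand(64, 64), Tensor.rand(64, 64), Tensor.rand(64, 64)
out = (a @ b + c).realize()
# With DEBUG=2, each kernel that gets launched is printed along with its
# FLOP count and the bytes it moves, so you can see exactly what ran.
```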
Speaker 1
00:57:24 - 00:57:29
So can you just linger on what problem tinygrad solves?
Speaker 2
00:57:30 - 00:57:45
Tinygrad solves the problem of porting to new ML accelerators quickly. One of the reasons, tons of these companies now, I think Sequoia marked Graphcore to 0. Right? Cerebras, Tenstorrent, Groq.
Speaker 2
00:57:46 - 00:57:52
All of these ML accelerator companies. They built chips. The chips were good. The software was terrible.
Speaker 1
00:57:52 - 00:57:53
Mhmm.
Speaker 2
00:57:53 - 00:58:04
And part of the reason is, and I think the same problem is happening with Dojo, it's really, really hard to write a PyTorch port. Mhmm. Because you have to write 250 kernels, and you have to tune them all for performance.
Speaker 1
00:58:06 - 00:58:15
What does Jim Keller think about tinygrad? You guys hung out quite a bit. So, you know, he was involved, he's involved with Tenstorrent.
Speaker 1
00:58:15 - 00:58:19
What's his praise and what's his criticism of what you're doing with your life?
Speaker 2
00:58:20 - 00:58:28
Look, my prediction for Tenstorrent is that they're gonna pivot to making RISC-V chips. CPUs.
Speaker 1
00:58:29 - 00:58:34
CPUs. Yeah. Why? Why? Because
Speaker 2
00:58:34 - 00:58:37
AI accelerators are a software problem, not really a hardware problem.
Speaker 1
00:58:37 - 00:58:47
Oh, so you think the diversity of AI accelerators in the hardware space is not going to be a thing that exists long term?
Speaker 2
00:58:47 - 00:59:12
I think what's gonna happen is, okay, let me refine it. If you're trying to make an AI accelerator, you'd better have the capability of writing a Torch-level performance stack on NVIDIA GPUs. If you can't write a Torch stack on NVIDIA GPUs, and I mean all the way, I mean down to the driver, there's no way you're gonna be able to write it on your chip, because your chip's worse than an NVIDIA GPU. The first version of the chip you tape out is definitely worse.
Speaker 1
00:59:12 - 00:59:14
Oh, you're saying writing that? That's really tough.
Speaker 2
00:59:14 - 00:59:30
Yes. And not only that, actually, the chip that you tape out, almost always because you're trying to get advantage over NVIDIA, you're specializing the hardware more. It's always harder to write software for more specialized hardware. Like a GPU is pretty generic. And if you can't write an NVIDIA stack, there's no way you can write a stack for your chip.
Speaker 2
00:59:31 - 00:59:36
So my approach with tinygrad is, first, write a performant NVIDIA stack. We're targeting AMD.
Speaker 1
00:59:38 - 00:59:41
So you did say F-you to NVIDIA a little bit. With love?
Speaker 2
00:59:41 - 00:59:42
With love.
Speaker 1
00:59:42 - 00:59:43
Yeah. With love.
Speaker 2
00:59:43 - 00:59:45
You know, screw the Yankees. You know? I'm a Mets fan.
Speaker 1
00:59:45 - 01:00:04
Oh, you're a Mets fan. A RISC fan and a Mets fan. What's the hope that AMD has? And you did build with AMD recently, that I saw. How does the 7900 XTX compare to the RTX 4090 or 4080?
Speaker 2
01:00:04 - 01:00:11
Well, let's start with the fact that the 7900 XTX kernel drivers don't work. And if you run demo apps in loops, it panics the kernel.
Speaker 1
01:00:11 - 01:00:14
Okay. So this is a software issue?
Speaker 2
01:00:15 - 01:00:32
Lisa Su responded to my email. Oh, I reached out. I was like, this is, you know, really? Like, I understand if your 7 by 7 transposed Winograd conv is slower than NVIDIA's. But literally, when I run demo apps in a loop, the kernel panics.
Speaker 1
01:00:33 - 01:00:35
So just adding that loop.
Speaker 2
01:00:35 - 01:00:42
Yeah. I just literally took their demo apps and wrote, like, "while true; do the app; done" in a bunch of screens.
Speaker 1
01:00:42 - 01:00:43
Mhmm.
Speaker 2
01:00:43 - 01:00:45
Right? This is, like, the most primitive fuzz testing.
Speaker 1
01:00:46 - 01:00:51
Why do you think that is? They're just not seeing a market in machine learning?
Speaker 2
01:00:52 - 01:00:59
They're changing. They're trying to change. And I had a pretty positive interaction with them this week. Last week, I went on YouTube, I was just like, that's it.
Speaker 2
01:00:59 - 01:01:08
I give up on AMD. Like, their driver doesn't even, like, I'm not gonna, you know, I'll go with Intel GPUs. Right? Intel GPUs have better drivers.
Speaker 1
01:01:10 - 01:01:15
So you're kinda spearheading the diversification of GPUs.
Speaker 2
01:01:16 - 01:01:36
Yeah. And I'd like to extend that diversification to everything. I'd like to diversify the, right, the more My central thesis about the world is there's things that centralize power and they're bad and there's things that decentralize power and they're good. Everything I can do to help decentralize power I'd like to do.
Speaker 1
01:01:38 - 01:01:48
So you're really worried about the centralization of NVIDIA. It's interesting. And you don't have a fundamental hope for the proliferation of ASICs, except in the cloud?
Speaker 2
01:01:49 - 01:01:55
I'd like to help them with software. No, actually, the only ASIC that is remotely successful is Google's TPU.
Speaker 1
01:01:55 - 01:01:55
Mhmm.
Speaker 2
01:01:55 - 01:02:05
And the only reason that's successful is because Google wrote a machine learning framework. Alright. I I think that you have to write a competitive machine learning framework in order to be able to build an ASIC.
Speaker 1
01:02:07 - 01:02:11
You think meta with Pytorch builds competitor? I hope so.
Speaker 2
01:02:11 - 01:02:13
Okay. They have 1. They have an internal 1.
Speaker 1
01:02:13 - 01:02:17
Internal. I mean, public facing with a nice cloud and interface and so on.
Speaker 2
01:02:18 - 01:02:19
I don't want a cloud.
Speaker 1
01:02:19 - 01:02:20
You don't like cloud?
Speaker 2
01:02:20 - 01:02:21
I don't like cloud.
Speaker 1
01:02:21 - 01:02:23
What do you think is the fundamental limitation of cloud?
Speaker 2
01:02:24 - 01:02:27
The fundamental limitation of cloud is, who owns the off switch?
Speaker 1
01:02:27 - 01:02:33
So it's the power to the people. Yeah. And you don't you don't like the man to have all the power. Exactly.
Speaker 2
01:02:33 - 01:02:34
Alright.
Speaker 1
01:02:34 - 01:02:56
And right now, the only way to do that is with NVIDIA GPUs, if you want performance and stability. Interesting. It's a costly investment emotionally to go with AMDs. Well, let me, as a small tangent, ask you: you've built quite a few PCs. What's your advice on how to build a good custom PC?
Speaker 1
01:02:57 - 01:03:01
For, let's say, for the different applications that you use, for gaming, for machine learning.
Speaker 2
01:03:01 - 01:03:04
Well, you shouldn't build 1. You should buy a box from the tiny corp.
Speaker 1
01:03:05 - 01:03:13
I heard rumors, whispers about this box in the tiny corp. What's what's this thing look like? What is it what is it called?
Speaker 2
01:03:13 - 01:03:20
It's called the tiny box. Tiny box. It's 15,000 dollars. Yeah. And it's almost a petaflop of compute.
Speaker 2
01:03:20 - 01:03:44
It's over a hundred gigabytes of GPU RAM. It's over 5 terabytes per second of GPU memory bandwidth. Gonna put, like, 4 NVMe's in in in raid. You're gonna get, like, 20, 30 gigabytes per second of drive read bandwidth. I'm gonna I'm gonna build like the best deep learning box that I can that plugs into 1 wall outlet.
Speaker 1
01:03:44 - 01:03:48
Okay. Can you go to the specs again a little bit from your from memory?
Speaker 2
01:03:48 - 01:03:50
Yeah. So it's almost a petaflop of compute.
Speaker 1
01:03:50 - 01:03:52
So, NVIDIA, AMD?
Speaker 2
01:03:52 - 01:04:04
Today, I'm leaning toward AMD. Mhmm. But we're pretty agnostic to the type of compute. The the the main limiting spec is a 120 volt 15 amp circuit.
Speaker 1
01:04:06 - 01:04:06
Okay.
Speaker 2
01:04:06 - 01:04:10
Well, I mean it. Because in order to, like like, there's a plug over there.
Speaker 1
01:04:10 - 01:04:10
Mhmm. Alright?
Speaker 2
01:04:11 - 01:04:25
You have to be able to plug it in. We're also gonna sell the tiny rack, which is, like, what's the most power you can get into your house without arousing suspicion? And one of the answers is an electric car charger.
Speaker 1
01:04:25 - 01:04:27
Where where does the rack go?
Speaker 2
01:04:27 - 01:04:28
Your garage?
Speaker 1
01:04:28 - 01:04:30
Interesting. The car charger.
Speaker 2
01:04:31 - 01:04:36
A wall outlet is about 1500 watts. A car charger is about 10000 watts.
Speaker 1
01:04:37 - 01:04:41
What is the most amount of power you can get your hands on without arousing suspicion?
Speaker 2
01:04:41 - 01:04:42
That's right.
Speaker 1
01:04:42 - 01:04:52
George Hotz. Okay. So the tiny box, and you said NVMes in RAID. I forget what you said about memory, all that kind of stuff. Okay.
Speaker 1
01:04:52 - 01:04:54
So what about what GPUs?
Speaker 2
01:04:54 - 01:05:00
Again, probably 7900 XTXs, but maybe 3090s, maybe A770s.
Speaker 1
01:05:01 - 01:05:04
Because sometimes you're flexible or still exploring.
Speaker 2
01:05:05 - 01:05:09
I'm still exploring. I wanna I wanna deliver a really good experience to people
Speaker 1
01:05:09 - 01:05:10
-- Mhmm. --
Speaker 2
01:05:10 - 01:05:23
and, yeah, what GPUs I end up going with, again, I'm leaning toward AMD. We'll see. You know, in my email, what I said to AMD is, like, just dumping the code on GitHub is not open source. Open source is a culture.
Speaker 2
01:05:24 - 01:05:37
Open source means that your issues are not all 1 year old stale issues. Open Source means developing in public. And if you guys can commit to that, I see a real future for AMD as a competitor to Nvidia.
Speaker 1
01:05:38 - 01:05:42
Well, I'd love to get a tiny box to MIT. So whenever it's ready.
Speaker 2
01:05:42 - 01:05:43
What was that?
Speaker 1
01:05:43 - 01:05:44
Let's do it.
Speaker 2
01:05:44 - 01:05:48
We're taking preorders. I took this from Elon. I'm like, alright, hundred-dollar fully refundable preorders.
Speaker 1
01:05:48 - 01:05:51
Is it gonna be, like, the cyber truck that's gonna take a few years? Or
Speaker 2
01:05:51 - 01:05:55
No. I'll try and do it faster. It's a lot simpler than the truck.
Speaker 1
01:05:55 - 01:06:01
Well, there's complexities not just the putting the thing together, but, like, shipping and all this kind of stuff.
Speaker 2
01:06:01 - 01:06:12
The thing that I wanna deliver to people out of the box is being able to run the 65 billion parameter LLaMA in FP16, in real time, and, like, at good speed, like, 10 tokens per second or 5 tokens per second or something.
Speaker 1
01:06:12 - 01:06:19
Just, it works. Yep. So you can run something like LLaMA. Experience
Speaker 2
01:06:19 - 01:06:25
or, I think Falcon is the new one. Experience a chat with the largest language model that you can have in your house.
Speaker 1
01:06:26 - 01:06:28
Yeah. From from a wall plug.
Speaker 2
01:06:28 - 01:06:33
From a wall plug. Yeah. Actually, for inference, it's not like even more power would get you more.
Speaker 1
01:06:34 - 01:06:36
Even more flops wouldn't get you more?
Speaker 2
01:06:36 - 01:06:41
Well, it's just, the biggest model released is the 65 billion parameter LLaMA, as far as I know.
Speaker 1
01:06:41 - 01:06:50
So it sounds like tiny box will naturally pivot towards company number 3 because you could just get the girlfriend and or boyfriend.
Speaker 2
01:06:51 - 01:06:53
That one's harder, actually.
Speaker 1
01:06:53 - 01:06:54
The boyfriend is harder?
Speaker 2
01:06:54 - 01:06:54
The boyfriend's harder. Yeah.
Speaker 1
01:06:54 - 01:07:08
I think that's a very biased statement. I know a lot of people... What, why is it harder to replace a boyfriend than a girlfriend with an artificial LLM? Because women are attracted
Speaker 2
01:07:08 - 01:07:15
to status and power, and men are attracted to youth and beauty. No, I mean, this is what I mean. But—
Speaker 1
01:07:15 - 01:07:17
Both could be mimicked, easily, by the language model.
Speaker 2
01:07:17 - 01:07:21
No. No machines do not have any status or real power.
Speaker 1
01:07:22 - 01:07:33
I don't know. I think... Well, first of all, you're using language mostly to communicate youth and beauty and power and status.
Speaker 2
01:07:33 - 01:07:37
But status fundamentally is a zero-sum game. Alright? Whereas youth and beauty are not.
Speaker 1
01:07:37 - 01:07:42
No. I think status is a narrative you can construct. I I don't think status is real.
Speaker 2
01:07:44 - 01:07:48
I don't know. I just think that that's why it's harder. You know, yeah, maybe it is my biases.
Speaker 1
01:07:48 - 01:07:51
I think status is way easier to fake.
Speaker 2
01:07:51 - 01:07:57
I also think that, you know, men are probably more desperate and more likely to buy my products, so maybe they're a better target market.
Speaker 1
01:07:57 - 01:08:02
Desperation is interesting. Easier to fool. That, I could see.
Speaker 2
01:08:02 - 01:08:05
Yeah. Look. I mean, look. I know you can look at porn viewership numbers. Right?
Speaker 2
01:08:05 - 01:08:08
A lot more men watch porn than women. Yeah. That's why that is.
Speaker 1
01:08:09 - 01:08:24
There's a lot of questions and answers you can get there. Anyway, with the tiny box, how many GPUs in a tiny box? Six. Oh, man.
Speaker 2
01:08:24 - 01:08:26
And I'll tell you why it's 6.
Speaker 1
01:08:26 - 01:08:26
Yeah.
Speaker 2
01:08:26 - 01:08:38
Uh-huh. So AMD EPYC processors have 128 lanes of PCIe. Mhmm. I wanna leave enough lanes for some drives. Mhmm.
Speaker 2
01:08:38 - 01:08:40
And I wanna leave enough lanes for some networking.
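The lane budget works out roughly like this, assuming a full x16 link per GPU and x4 per NVMe drive (both standard allocations, used here only as illustrative assumptions):

```python
total_lanes = 128          # PCIe lanes on an AMD EPYC processor
gpus, lanes_per_gpu = 6, 16
nvmes, lanes_per_nvme = 4, 4

used = gpus * lanes_per_gpu + nvmes * lanes_per_nvme
print(f"{used} lanes used, {total_lanes - used} left for networking etc.")
# 112 lanes used, 16 left for networking etc.
```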
Speaker 1
01:08:41 - 01:08:43
How do you do cooling for something like this?
Speaker 2
01:08:43 - 01:08:51
That's 1 of the big challenges. Not only do I want the cooling to be good, I want it to be quiet. I want the tiny box to be able to sit comfortably in your room.
Speaker 1
01:08:51 - 01:08:56
Right? This is really going towards the girlfriend thing. Because you want to run the LLM?
Speaker 2
01:08:57 - 01:09:04
I'll give a more... I mean, I can talk about how it relates to company number 1, comma.ai. Yeah. Well,
Speaker 1
01:09:04 - 01:09:08
but, yes, quiet. Oh, quiet because you maybe potentially wanna run it in a car?
Speaker 2
01:09:08 - 01:09:15
No. No. Quiet because you wanna put this thing in your house, and you want it to coexist with you. If it's screaming at 60 dB, you don't want that in your house.
Speaker 2
01:09:15 - 01:09:15
You'll kick it out.
Speaker 1
01:09:15 - 01:09:16
60 d b. Yeah.
Speaker 2
01:09:16 - 01:09:18
I want, like, 40, 45.
Speaker 1
01:09:18 - 01:09:22
So how do you make the cooling quiet that's an interesting problem in itself.
Speaker 2
01:09:22 - 01:09:31
A key trick is to actually make it big ironically. It's called the tiny box. Yeah. But if I can make it big, a lot of that noise is generated because of high pressure air.
Speaker 1
01:09:31 - 01:09:31
Mhmm. If
Speaker 2
01:09:31 - 01:09:45
you look at, like, a 1U server, a 1U server has these super high-pressure fans. They're, like, super deep and they're, like, screeching. Versus if you have something that's big, well, I can use, you know, they call them big-ass fans, those ones that are, like, huge, in the ceiling.
Speaker 2
01:09:45 - 01:09:47
And they're completely silent.
Speaker 1
01:09:47 - 01:09:48
So tiny box
Speaker 2
01:09:48 - 01:09:58
will be big. I do not want it to be large according to UPS. I want it to be shippable as a normal package, but that's my constraint there.
Speaker 1
01:09:58 - 01:10:03
Interesting. Well, the the fan stuff, like, can't can't it be assembled on location or
Speaker 2
01:10:03 - 01:10:10
No, it shouldn't be. Look, I wanna give you a great out-of-the-box experience. I want you to lift this thing out.
Speaker 2
01:10:10 - 01:10:13
I want it to be like like the Mac, you know. Tiny box.
Speaker 1
01:10:13 - 01:10:21
The Apple experience. Yeah. I love it. Okay. And so tiny box would run tiny grad.
Speaker 1
01:10:21 - 01:10:32
Like, what what what do you envision this whole thing to look like? We're talking about, like, Linux with a full software engineering environment.
Speaker 2
01:10:32 - 01:10:33
Mhmm.
Speaker 1
01:10:33 - 01:10:36
And it's not just PyTorch, but tinygrad.
Speaker 2
01:10:36 - 01:10:40
Yeah. We did a poll, whether people want Ubuntu or Arch. We're gonna stick with Ubuntu.
Speaker 1
01:10:40 - 01:10:52
Oh, interesting. What's your favorite flavor of it? Ubuntu MATE, how do we pronounce that, MATE? So how do you... You've gotten LLaMA into tinygrad.
Speaker 1
01:10:52 - 01:11:02
You've gotten stable diffusion into tiny grad. What was that like? Can you comment on, like, what are what are these models? What's interesting about porting them? Sure.
Speaker 1
01:11:02 - 01:11:07
Yeah. Like, what are the challenges? What's natural, what's easy, all that kind of stuff.
Speaker 2
01:11:07 - 01:11:24
There's a really simple way to get these models into tinygrad: you can just export them as ONNX, and then tinygrad can run ONNX. So the ports that I did of LLaMA, Stable Diffusion, and now Whisper are more academic, to teach me about the models. But they are cleaner than the PyTorch versions.
Speaker 2
01:11:24 - 01:11:33
You can read the code. I think the code is easier to read. It's fewer lines. There's just a few things about the way tinygrad writes them. Here's a complaint I have about PyTorch.
Speaker 2
01:11:33 - 01:11:45
nn.ReLU is a class. Right? So when you create an nn module, you'll put your nn.ReLUs in the init. And this makes no sense. ReLU is completely stateless.
Speaker 2
01:11:46 - 01:11:47
Why should that be a class?
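To illustrate the complaint with standard PyTorch: nn.ReLU is a module you instantiate in __init__, even though ReLU holds no state, while the functional form is just a call:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# The style being complained about: a stateless op stored as a class instance.
class WithModuleReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)
        self.relu = nn.ReLU()   # a class, despite ReLU having no parameters
    def forward(self, x):
        return self.relu(self.fc(x))

# The stateless alternative: call the function where it's needed.
class WithFunctionalReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)
    def forward(self, x):
        return F.relu(self.fc(x))

x = torch.randn(2, 8)
print(WithModuleReLU()(x).shape, WithFunctionalReLU()(x).shape)
```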
Speaker 1
01:11:48 - 01:11:53
But but that's more like a software engineering thing. Or do you think it has a cost on performance?
Speaker 2
01:11:53 - 01:12:01
Oh, no, it doesn't have a cost on performance. But, yeah, that's what I mean about, like, tinygrad's front end being cleaner.
Speaker 1
01:12:01 - 01:12:11
I see. What do you think about Mojo? I don't know if you've been paying attention, the programming language that has some interesting ideas that kinda intersect with tinygrad.
Speaker 2
01:12:11 - 01:12:21
I think that there's a spectrum. And, like, on 1 side, you have mojo. And on the other side, like, GGML. Mhmm. GGML is this, like, we're gonna run llama fast on Mac.
Speaker 2
01:12:21 - 01:12:28
Mhmm. Okay. We're gonna expand out a little bit, but we're gonna basically go depth-first. Right? Mojo is like, we're gonna go breadth-first.
Speaker 2
01:12:28 - 01:12:31
We're gonna go so wide that we're gonna make all of Python fast
Speaker 1
01:12:31 - 01:12:31
--
Speaker 2
01:12:31 - 01:12:37
Mhmm. -- and tinygrad is in the middle. Again, we are going to make neural networks fast.
Speaker 1
01:12:38 - 01:12:51
Yeah. But they try to really get it to be fast, compiled down to specific hardware, and make that compilation step as flexible and resilient as possible,
Speaker 2
01:12:51 - 01:12:52
but they have Turing completeness.
Speaker 1
01:12:53 - 01:13:04
And that will mess you up, the Turing completeness, that's what you're saying. It's somewhere in the middle. So you're actually going to be targeting some accelerators, some, like, some number, not 1.
Speaker 2
01:13:04 - 01:13:17
My goal is, step 1, build an equally performant stack to PyTorch on NVIDIA and AMD. Mhmm. But with way fewer lines. And then step 2 is, okay, how do we make an accelerator? Right?
Speaker 2
01:13:17 - 01:13:21
But you need step 1. You have to first build the framework before you can build the accelerator.
Speaker 1
01:13:21 - 01:13:28
Can you explain MLPerf? What's your approach in general to benchmarking tinygrad's performance? So—
Speaker 2
01:13:29 - 01:13:45
I'm much more of a "build it the right way and worry about performance later" person. There's a bunch of things where I haven't even, like, really dove into performance. The only place where tinygrad is competitive performance-wise right now is on Qualcomm GPUs.
Speaker 1
01:13:45 - 01:13:46
Mhmm.
Speaker 2
01:13:46 - 01:13:50
So tinygrad is actually used in openpilot to run the model. So the driving model is tinygrad.
Speaker 1
01:13:50 - 01:13:52
And when did that happen? That transition?
Speaker 2
01:13:53 - 01:13:58
About 8 months ago now? And it's 2 x faster than Qualcomm's library.
Speaker 1
01:13:59 - 01:14:03
What's the hardware that openpilot runs on, the comma hardware?
Speaker 2
01:14:03 - 01:14:05
It's a Snapdragon 845.
Speaker 1
01:14:05 - 01:14:06
Okay. So this
Speaker 2
01:14:06 - 01:14:17
is using the GPU. So the GPU is an adreno GPU. There's like different things. There's like really good Microsoft paper that talks about, like, mobile GPUs and why they're different from desktop GPUs.
Speaker 1
01:14:17 - 01:14:17
Mhmm.
Speaker 2
01:14:18 - 01:14:25
One of the big things is, on a desktop GPU, you can use buffers. On a mobile GPU, image textures are a lot faster.
Speaker 1
01:14:27 - 01:14:33
On a mobile GPU, image textures. Okay. And so you want to be able to leverage that—
Speaker 2
01:14:33 - 01:14:49
I wanna be able to leverage it in a way that's completely generic. Right? So, there's a lot of this: Xiaomi has a pretty good open-source library for mobile GPUs called MACE, where they have these kernels, but they're all hand-coded. Right? So that's great if you're doing 3 by 3 convs.
Speaker 2
01:14:49 - 01:14:56
That's great if you're doing dense matmuls. But the minute you go off the beaten path a tiny bit, well, your performance is nothing.
Speaker 1
01:14:56 - 01:15:08
Since you mentioned openpilot, I'd love to get an update on the company number 1, comma.ai world. How are things going there with the development of semi-autonomous driving?
Speaker 2
01:15:11 - 01:15:20
You know, almost no one talks about FSD anymore, and even fewer people talk about openpilot. We've solved the problem. Like, we solved it years ago.
Speaker 1
01:15:21 - 01:15:26
What's the problem exactly? Like, what does solving it mean?
Speaker 2
01:15:26 - 01:15:53
Solving means, how do you build a model that outputs a human policy for driving? Mhmm. How do you build a model that, given, you know, a reasonable set of sensors, outputs a human policy for driving? So you have, you know, companies like Waymo and Cruise, which are hand-coding these things that are, like, quasi-human policies. Then you have Tesla, and maybe even to more of an extent comma, asking, okay, how do we just learn the human policy from data?
Speaker 2
01:15:55 - 01:16:15
The big thing that we're doing now, and we just put it out on Twitter: at the beginning of comma, we published a paper called Learning a Driving Simulator. Mhmm. And the way this thing worked was, it was an autoencoder, and then an RNN in the middle. Right? You take an autoencoder—
Speaker 2
01:16:15 - 01:16:31
You compress the picture, use an RNN, predict the next state. And these things were, you know, laughably bad as a simulator. This is 2015-era machine learning technology. Today, we have VQ-VAE and transformers. Mhmm. We're building DriveGPT, basically.
Speaker 1
01:16:32 - 01:16:40
DriveGPT. Okay. So and then it's trained on what? Is it trained in a self supervised way?
Speaker 2
01:16:40 - 01:16:42
Yeah. It's trained on all the driving data to predict the next frame.
Speaker 1
01:16:43 - 01:16:47
So really trying to learn a human policy. What would a human do?
Speaker 2
01:16:47 - 01:17:01
Well, actually, our simulator is conditioned on the pose. It's actually a simulator. You can put in, like, a state-action pair and get out the next state. Okay. And then once you have a simulator, you can do RL in the simulator, and RL will get us that human policy.
Speaker 1
01:17:02 - 01:17:03
So it transfers.
Speaker 2
01:17:04 - 01:17:12
Yeah. RL with a reward function, not asking, is this close to the human policy, but asking, would a human disengage if you did this behavior?
Speaker 1
01:17:13 - 01:17:25
Okay. Let me think about the the distinction there. Would a human disengage? Yeah. Would a human disengage that correlates, I guess, with the human policy, but it can be different.
Speaker 1
01:17:25 - 01:17:42
So it doesn't just say, oh, what would a human do? It says, what would a good human driver do? Yeah. And such that the experience is comfortable, but also not annoying in that, like, the thing is very cautious. It's finding a nice balance. That's interesting.
Speaker 1
01:17:42 - 01:17:43
It's nice.
Speaker 2
01:17:43 - 01:17:50
It's asking exactly the right question. What will make our customers happy? Right. A system that you never wanna disengage.
Speaker 1
01:17:51 - 01:17:57
Because usually, disengagement is almost always a sign of, I'm not happy with what the system is doing.
Speaker 2
01:17:57 - 01:18:04
Usually, there's some that are just I felt like driving, and those are always fine too, but they're just gonna look like noise in the data.
Speaker 1
01:18:04 - 01:18:16
But even that "I felt like driving," maybe, yeah, even that's a signal. Like, why do you feel like driving? You need to recalibrate your relationship with the car.
Speaker 1
01:18:16 - 01:18:22
Okay. So that's really interesting. How close are we to solving self-driving?
Speaker 2
01:18:25 - 01:18:32
It's hard to say. We haven't completely closed the loop yet. So we don't have anything built that truly looks like that architecture yet. Mhmm.
Speaker 2
01:18:32 - 01:18:40
We have prototypes, and bugs. So we are a couple bug fixes away. Might take a year, might take 10.
Speaker 1
01:18:40 - 01:18:48
What's the nature of the bugs? Are these these major philosophical bugs, logical bugs? What kind of what kind of bugs are we talking about?
Speaker 2
01:18:48 - 01:19:00
They're just, like, stupid bugs. And, like, also, we might just need more scale. We just massively expanded our compute cluster at comma. We now have about 2 people worth of compute, 40 petaflops.
Speaker 1
01:19:01 - 01:19:04
Well, people people are different.
Speaker 2
01:19:04 - 01:19:08
Yeah. 20 petaflops, that's a person. I mean, it's just a unit. Right?
Speaker 2
01:19:08 - 01:19:10
Horses are different too, but we still call it a horsepower.
Speaker 1
01:19:10 - 01:19:19
Yeah. But there's something different about mobility than there is about perception and action in a very complicated world. But yes.
Speaker 2
01:19:19 - 01:19:23
Well, yeah. Of course, not all flops are created equal. If you have randomly initialized weights, it's not gonna
Speaker 1
01:19:24 - 01:19:26
Not all flops are created equal.
Speaker 2
01:19:26 - 01:19:28
Some flops are just doing way more useful things than others.
Speaker 1
01:19:28 - 01:19:33
Yep. Yep. Tell me about it. Okay. So more data.
Speaker 1
01:19:33 - 01:19:37
Scale means more scale in compute, or more scale in data?
Speaker 2
01:19:37 - 01:19:38
Both
Speaker 1
01:19:40 - 01:19:41
diversity of data.
Speaker 2
01:19:41 - 01:19:49
Diversity is very important in data. Yeah. I mean, we have so we have about I think we have, like, 5000 daily actives.
Speaker 1
01:19:51 - 01:19:55
How would you evaluate how FSD is doing, Tesla's full self-driving?
Speaker 2
01:19:55 - 01:19:56
Pretty well.
Speaker 1
01:19:56 - 01:20:00
How's that race going, between comma and FSD?
Speaker 2
01:20:00 - 01:20:06
Tesla is always 1 to 2 years ahead of us. They've always been 1 to 2 years ahead of us. And they probably always will be because they're not doing anything wrong.
Speaker 1
01:20:07 - 01:20:19
What have you seen since the last time we talked that's interesting: architectural decisions, training decisions, the way they deploy stuff, the architectures they're using in terms of the software, how the teams are run, all that kind of stuff, data collection? Anything interesting?
Speaker 2
01:20:20 - 01:20:23
I mean, I know they're moving toward more of an end to end approach. So
Speaker 1
01:20:24 - 01:20:30
creeping towards end to end as much as possible across the whole thing. The training, the data collection, and everything.
Speaker 2
01:20:30 - 01:20:39
They also have a very fancy simulator. They're probably saying all the same things we are. They're probably saying we just need to optimize, you know, what is the reward? Well, you get negative reward for disengagement. Right?
Speaker 2
01:20:39 - 01:20:44
Like, everyone kinda knows this. It's just a question who can actually build and deploy the system.
Speaker 1
01:20:44 - 01:20:50
Yeah. I mean, this requires good software engineering, I think. Yeah. And good enough hardware.
Speaker 2
01:20:51 - 01:20:52
Yeah. The hardware to run it.
Speaker 1
01:20:53 - 01:20:55
You still don't believe in cloud in that regard?
Speaker 2
01:20:57 - 01:21:02
I have a compute cluster at the office. 800 amps.
Speaker 1
01:21:02 - 01:21:03
Tiny grad.
Speaker 2
01:21:03 - 01:21:09
It's 40 kilowatts at idle, our data center. It's kind of crazy. 40 kilowatts just burning, just when the computers are idle.
Speaker 1
01:21:09 - 01:21:10
Just 1 moment.
Speaker 2
01:21:10 - 01:21:11
Sorry. Sorry. Compute cluster.
Speaker 1
01:21:13 - 01:21:14
Compute cluster. I got it.
Speaker 2
01:21:14 - 01:21:20
It's not a data center. Yeah. No, data centers are clouds. We don't have clouds. Data centers have air conditioners.
Speaker 2
01:21:20 - 01:21:23
We have fans. That makes it a compute cluster.
Speaker 1
01:21:25 - 01:21:29
I'm guessing this is a kind of a legal distinction of sure. Yeah.
Speaker 2
01:21:29 - 01:21:30
We have a compute cluster.
Speaker 1
01:21:31 - 01:22:01
You said that you don't think LLMs have consciousness, or at least not more than a chicken. Do you think they can reason? Is there something interesting to you about the word reason, about some of the capabilities that we think of as kind of human: to be able to integrate complicated information and, through a chain of thought, arrive at a conclusion that feels novel, a novel integration of disparate facts?
Speaker 2
01:22:02 - 01:22:07
Yeah, I don't think that there's... I think it can reason better than a lot of people.
Speaker 1
01:22:07 - 01:22:13
Hey, isn't that amazing to you, though? Is that, like, an incredible thing that a transformer can achieve?
Speaker 2
01:22:13 - 01:22:17
I mean, I think that calculators can add better than a lot of people.
Speaker 1
01:22:17 - 01:22:24
But language feels like reasoning through the process of language, which looks a lot like thought.
Speaker 2
01:22:26 - 01:22:38
Making brilliant decisions in chess, which feels a lot like thought. Like, whatever new thing AI can do, everybody thinks is brilliant. And then, like, 20 years go by, and they're like, well, yeah, chess, that's, like, mechanical. Like adding, that's, like, mechanical.
Speaker 1
01:22:38 - 01:22:41
So using language is not that special. It's like chess.
Speaker 2
01:22:41 - 01:22:42
It's like chess. I
Speaker 1
01:22:42 - 01:23:02
don't know. And because it's very human, we we take it we listen, there is something different between chess and language. Chest is a game that a subset of population plays. Language is something we use nonstop for all of our human interaction. And human interaction is fundamental to society.
Speaker 1
01:23:02 - 01:23:11
So it's like, holy shit, this language thing is not so difficult to, like, recreate in a machine.
Speaker 2
01:23:11 - 01:23:29
The problem is, if you go back to 1960 and you tell them that you have a machine that can play amazing chess, of course someone in 1960 will tell you that machine is intelligent. Someone in 2010 won't. What's changed? Right? Today, we think that these machines that have language are intelligent.
Speaker 2
01:23:29 - 01:23:32
But I think in 20 years, we're gonna be like, yeah, but can it reproduce?
Speaker 1
01:23:34 - 01:23:42
So reproduction. Yeah. We might redefine what it means to be, what is it, a high-performance living organism on Earth.
Speaker 2
01:23:43 - 01:23:52
Humans are always gonna define a niche for themselves. Like, well, you know, we're better than the machines because we can, you know... And, like, they tried creativity for a bit, but no one believes that one anymore.
Speaker 1
01:23:52 - 01:24:10
But the niche, is that delusional, or is there some accuracy to that? Because maybe, like, with chess, you start to realize, like, that we have ill-conceived notions of what makes humans special, like the apex organism on Earth.
Speaker 2
01:24:12 - 01:24:18
Yeah. And I think maybe we're gonna go through that same thing with language. And that same thing with creativity.
Speaker 1
01:24:19 - 01:24:28
The language carries these notions of truth and so on. And so we might be like, wait, maybe truth is not carried by language. Maybe there's like a deeper thing.
Speaker 2
01:24:28 - 01:24:30
The niche is getting smaller.
Speaker 1
01:24:31 - 01:24:31
Oh, boy.
Speaker 2
01:24:33 - 01:24:39
But no. No. No. You don't understand humans are created by God and machines are created by humans, therefore. Right?
Speaker 2
01:24:39 - 01:24:40
Like, that'll be the last niche we have.
Speaker 1
01:24:41 - 01:24:54
So what do you think about the rapid development of LLMs? If you could just, like, stick on that. It's still incredibly impressive, like with ChatGPT. Just even ChatGPT, what are your thoughts about reinforcement learning with human feedback on these large language models?
Speaker 2
01:24:55 - 01:25:13
I'd like to go back to when calculators first came out, or computers. And, like, I wasn't around, look, I'm 33 years old, and to, like, see how that affected society.
Speaker 1
01:25:13 - 01:25:19
Maybe you're right. So I wanna put on the the the big picture hat here.
Speaker 2
01:25:19 - 01:25:21
Or a refrigerator. Wow.
Speaker 1
01:25:21 - 01:25:32
Refrigerator, electricity, all that kind of stuff. But, you know, with the Internet, large language models seeming human like basically passing a Turing
Speaker 2
01:25:32 - 01:25:33
test. Mhmm.
Speaker 1
01:25:33 - 01:25:50
It seems it might really have, at scale, rapid transformative effects on society. But you're saying, like, other technologies have as well. So maybe the calculator is not the best example of that, because that just seems like... Well, no, maybe, the calculator.
Speaker 2
01:25:50 - 01:26:00
Ask the milkman, the day he learned about refrigerators. He's like, I'm done. You're telling me you just keep the milk in your house? You don't even need to deliver it every day? I'm done.
Speaker 1
01:26:00 - 01:26:11
Well, yeah, you have to actually look at the practical impacts that certain technologies have had. Yeah, probably electricity is a big one, and also how rapidly it spread. Yeah. The Internet is a big one.
Speaker 2
01:26:11 - 01:26:13
How do you think it's different this time, though?
Speaker 1
01:26:13 - 01:26:14
It just feels like
Speaker 2
01:26:15 - 01:26:16
The niche is getting smaller.
Speaker 1
01:26:17 - 01:26:18
The niches humans --
Speaker 2
01:26:18 - 01:26:18
Yes.
Speaker 1
01:26:19 - 01:26:28
-- that makes humans special. Yes. It feels like it's getting smaller rapidly though. Doesn't it? Or is that just a feeling we dramatize everything?
Speaker 2
01:26:28 - 01:26:36
I think we dramatize everything. I think that... Ask the milkman when he saw refrigerators: and they're gonna have one of these in every home?
Speaker 1
01:26:38 - 01:26:43
Yeah. Yeah. Maybe. But, boy, they are impressive.
Speaker 1
01:26:44 - 01:26:49
So much more impressive than seeing a a chess world champion AI system.
Speaker 2
01:26:49 - 01:27:12
I disagree, actually. I disagree. I think things like MuZero and AlphaGo are so much more impressive, because these things are playing beyond the highest human level. The language models are writing middle-school-level essays, and people are like, wow, it's a great essay. It's a great five-paragraph essay about the causes of the Civil War.
Speaker 1
01:27:12 - 01:27:21
Okay. Forget the Civil War. Just generating code, Codex. So you're saying it's mediocre code. Terrible. But I don't think it's terrible.
Speaker 1
01:27:21 - 01:27:28
I think it's just mediocre code. Yeah. Often close to correct, like, mediocre code.
Speaker 2
01:27:28 - 01:27:39
That's the scariest kind of code. I spend 5 percent of my time typing and 95 percent of my time debugging. The last thing I want is close-to-correct code. I want a machine that can help me with the debugging, not with the typing.
Speaker 1
01:27:39 - 01:27:51
You know, it's like level 2 driving, similar kind of thing. You still should be a good programmer in order to modify it. I wouldn't even say debugging, just modifying the code, deleting it.
Speaker 2
01:27:51 - 01:28:06
I don't think it's like level 2 driving. I think driving is not tool-complete, and programming is. Meaning, you don't use, like, the best possible tools to drive. Right? You're not... Like, cars have had basically the same interface for the last 50 years.
Speaker 2
01:28:06 - 01:28:08
Yep. Computers have a radically different interface.
Speaker 1
01:28:08 - 01:28:12
Okay. Can you describe the concept of tool complete? Yeah.
Speaker 2
01:28:12 - 01:28:16
So think about the difference between a car from 1980 and a car from today.
Speaker 1
01:28:16 - 01:28:16
Yeah. Alright.
Speaker 2
01:28:16 - 01:28:24
No difference, really. Got a bunch of pedals, got a steering wheel. Great. Maybe now it has a few ADAS features, but it's pretty much the same car. Right?
Speaker 2
01:28:24 - 01:28:42
You have no problem getting into a 1980 car and driving it. You take a programmer today who spent their whole life doing JavaScript, and you put them in front of an Apple IIe prompt, and you tell them about the line numbers in BASIC. But how do I insert something between lines 17 and 18? Wow.
Speaker 1
01:28:45 - 01:28:54
So in "tool" you're putting the programming languages. So it's the entire stack of the tooling. Yeah. So it's not just, like, the IDEs or something like this. Everything.
Speaker 1
01:28:54 - 01:28:54
Yes.
Speaker 2
01:28:54 - 01:29:10
It's the IDEs, the languages, the runtimes. It's everything. And programming is tool-complete. So, like, almost, if Codex or Copilot are helping you, that actually probably means that your framework or library is bad, and there's too much boilerplate in it.
Speaker 1
01:29:12 - 01:29:15
Yeah. But don't you think so much programming has boilerplate?
Speaker 2
01:29:16 - 01:29:38
Tinygrad is now 2700 lines, and it can run LLaMA and Stable Diffusion. And all of this stuff is in 2700 lines. Boilerplate and abstraction indirections and all these things, that's just bad code. Well, let's talk about good code and bad code.
Speaker 2
01:29:38 - 01:29:39
Yeah.
Speaker 1
01:29:39 - 01:29:56
Because I would say, I don't know, for generic scripts that I write, just offhand, like, 80 percent of it is written by GPT. Just, like, quick, offhand stuff. So not, like, libraries, not, like, performant code, not stuff for robotics, and so on. Just quick stuff.
Speaker 1
01:29:56 - 01:30:18
Because, basically, so much of programming is doing some, yeah, boilerplate, but to do so efficiently and quickly, because you can't really automate it fully with, like, a generic method, like a generic kind of IDE type of recommendation or something like this. You do need to have some of the complexity of language models.
Speaker 2
01:30:19 - 01:30:25
Yeah. I guess if I was really writing, like, maybe today if I wrote, like, a lot of, like, data parsing stuff.
Speaker 1
01:30:25 - 01:30:25
Yeah.
Speaker 2
01:30:25 - 01:30:41
I mean, I don't play CTFs anymore. But if I still played CTFs, a lot of it, like, it's just, like, you have to write, like, a parser for this data format. Or, like, Advent of Code. I wonder when the models are gonna start to help with that kind of code, and they may. They may, and the models also may help you with speed. Yeah.
Speaker 2
01:30:41 - 01:30:59
I type very fast. But where the models won't help, my programming speed is not at all limited by my typing speed. In very few cases it is. Yes, if I'm writing some script to just, like, parse some weird data format, sure.
Speaker 2
01:30:59 - 01:31:01
My programming speed is limited by my typing speed.
Speaker 1
01:31:01 - 01:31:05
What about looking stuff up? Because that's essentially a more efficient lookup. Right?
Speaker 2
01:31:05 - 01:31:19
You know, when I was at Twitter, I tried to use ChatGPT to, like, ask some questions. Like, what's the API for this? Mhmm. And it would just hallucinate. It would just give me completely made-up API functions that sounded real.
Speaker 1
01:31:20 - 01:31:32
Do you think that's just a temporary kind of stage? You don't think it'll get better and better at this kind of stuff? Because, like, it only hallucinates stuff in the edge cases. Yes. For generic code, it's actually pretty good.
Speaker 2
01:31:32 - 01:31:43
If you are writing an absolutely basic, like, React app with a button, it's not gonna hallucinate. No. There's kind of ways to fix the hallucination problem. I think Facebook has an interesting paper. It's called Atlas.
Speaker 2
01:31:43 - 01:31:57
And it's actually weird the way that we do language models right now, where all of the information is in the weights. Mhmm. And human brains aren't really like this. There's, like, a hippocampus and a memory system. So why don't LLMs have a memory system?
Speaker 2
01:31:57 - 01:32:11
And there's people working on them. I think future LLMs are gonna be, like, smaller, but are going to run looping on themselves, and are going to have retrieval systems. And the thing about using a retrieval system is you can cite sources explicitly,
Speaker 1
01:32:12 - 01:32:27
which is really helpful to integrate the human into the loop of the thing, because you can go check the sources, and you can investigate. So whenever the thing is hallucinating, you can, like, have the human supervision. That's pushing it towards level 2, kind of. That's—
Speaker 2
01:32:27 - 01:32:28
gonna kill Google.
Speaker 1
01:32:29 - 01:32:29
Wait. Which part?
Speaker 2
01:32:29 - 01:32:33
When someone makes an LLM that's capable of citing its sources, it will kill Google.
Speaker 1
01:32:35 - 01:32:37
The citing of sources, because that's basically a search engine.
Speaker 2
01:32:37 - 01:32:40
Yeah. That's what people want in a search engine.
Speaker 1
01:32:40 - 01:32:42
But also Google might be the people that build it.
Speaker 2
01:32:42 - 01:32:43
Maybe.
Speaker 1
01:32:43 - 01:32:44
And put ads on them.
Speaker 2
01:32:44 - 01:32:45
I'd count them out.
Speaker 1
01:32:46 - 01:32:52
Why is that? Why do you think? Who who wins this race? We got who who are the competitors?
Speaker 2
01:32:52 - 01:32:53
Alright.
Speaker 1
01:32:53 - 01:32:59
We got tiny corp. I don't know if that's Yeah. I yeah. I mean, you're a legitimate competitor in that.
Speaker 2
01:32:59 - 01:33:01
I'm not trying to compete on that.
Speaker 1
01:33:01 - 01:33:04
You're not? No? Not even accidentally stumbling into that competition? Mhmm.
Speaker 2
01:33:05 - 01:33:06
You don't think you
Speaker 1
01:33:06 - 01:33:08
might build the search engine that replaces Google Search?
Speaker 2
01:33:08 - 01:33:21
When I started comma, I said, over and over again, I'm going to win self-driving cars. I still believe that. I have never said I'm going to win search with the tiny corp, and I'm never going to say that, because I won't.
Speaker 1
01:33:21 - 01:33:34
The night is still young. You don't know how hard it is to win search in this new landscape. Like, it feels... I mean, one of the things that ChatGPT kinda shows is that there could be a few interesting tricks that create a really compelling product.
Speaker 2
01:33:34 - 01:33:42
Some startup's gonna figure it out. I think, if you ask me, like, Google is still the number 1 web page. I think by the end of the decade, Google won't be the number 1 web page anymore.
Speaker 1
01:33:43 - 01:33:47
So you don't think Google because of the how big the corporation is?
Speaker 2
01:33:47 - 01:33:49
Look, I I would put a lot more money on Mark Zuckerberg.
Speaker 1
01:33:50 - 01:33:51
Why is that?
Speaker 2
01:33:53 - 01:34:01
Because Mark Zuckerberg is alive. Like, this is the old Paul Graham essay: startups are either alive or dead. Google's dead.
Speaker 1
01:34:02 - 01:34:05
Facebook is alive. Facebook is alive. Yeah. It's alive.
Speaker 2
01:34:05 - 01:34:14
Meta. You see what I mean? That's just, like, Mark Zuckerberg. This is Mark Zuckerberg reading that Paul Graham essay and being like, I'm gonna show everyone how alive we are. I'm gonna change the name.
Speaker 1
01:34:14 - 01:34:26
So you think there's this gutsy pivoting energy that, like, Google doesn't have, that Facebook and the startups have, like, constantly.
Speaker 2
01:34:26 - 01:34:27
You know what?
Speaker 1
01:34:27 - 01:34:28
Being alive, I guess.
Speaker 2
01:34:28 - 01:34:37
I listened to your Sam Altman podcast. You talked about the button. Everyone who talks about AI talks about the button, the button to turn it off. Right? Do we have a button to turn off Google?
Speaker 2
01:34:38 - 01:34:41
Is anybody in the world capable of shutting Google down?
Speaker 1
01:34:42 - 01:34:45
What does that mean exactly, the company or the search engine?
Speaker 2
01:34:45 - 01:34:49
To shut the search engine down, to shut the company down, either?
Speaker 1
01:34:49 - 01:34:52
Can you elaborate on the value of that question?
Speaker 2
01:34:52 - 01:34:55
Does Sundar Pichai have the authority to turn off google dot com tomorrow?
Speaker 1
01:34:57 - 01:35:02
Who has the authority? That's a good question. Does anyone? Does anyone? Yeah.
Speaker 1
01:35:02 - 01:35:02
I'm sure.
Speaker 2
01:35:03 - 01:35:13
Are you sure? No. They have the technical power, but do they have the authority? Let's say Sundar Pichai made this his sole mission, came into Google tomorrow and said, I'm gonna shut google dot com down.
Speaker 2
01:35:13 - 01:35:16
Yeah. I don't think he'd keep his position too long.
Speaker 1
01:35:18 - 01:35:20
And what is the mechanism by which he wouldn't keep his position?
Speaker 2
01:35:21 - 01:35:27
Well, it's like boards and shares and corporate undermining and, oh my god, our revenue is 0 now.
Speaker 1
01:35:28 - 01:35:34
Okay. So what I mean, what's the case you're making here? So the the capitalist machine prevents you from -- Mhmm. -- having the button.
Speaker 2
01:35:34 - 01:35:41
Yeah. And we'll have a I mean, this is true for the AI's too. Right? There's no turning the AI's off. There's no button.
Speaker 2
01:35:41 - 01:35:46
You can't press it. Now, does Mark Zuckerberg have that button for facebook dot com?
Speaker 1
01:35:46 - 01:35:48
Yes. Probably more.
Speaker 2
01:35:48 - 01:35:54
I think he does. I think he does. And this is exactly what I mean and why I bet on him so much more than I bet on Google.
Speaker 1
01:35:54 - 01:35:57
I guess you could say Elon has similar stuff.
Speaker 2
01:35:57 - 01:36:03
Oh, Elon has the button. Yeah. Does Elon... can Elon fire the missiles? Can he fire the missiles?
Speaker 1
01:36:04 - 01:36:06
I think some questions are better left unasked. You
Speaker 2
01:36:06 - 01:36:15
know what I'm asking. Right? I mean, you know, a rocket and an ICBM. You have a rocket that can land anywhere. Is that an ICBM? Well, yeah, you know, don't ask too many questions.
Speaker 1
01:36:16 - 01:36:30
My god. But the positive side of the button is that you can innovate aggressively, is what you're saying. Which is what's required for turning an LLM into a search engine?
Speaker 2
01:36:30 - 01:36:31
I would bet on a startup. I bet
Speaker 1
01:36:31 - 01:36:32
Is it so easy? Right?
Speaker 2
01:36:32 - 01:36:35
I bet on something that looks like Midjourney, but for search.
Speaker 1
01:36:37 - 01:36:44
Just being able to close the loop on itself. I mean, it just feels like 1 model can take off. Yeah. Right? And then a nice wrapper, and some of it's the craft.
Speaker 1
01:36:44 - 01:36:49
It's hard to, like, create a product that just works really nicely, stably.
Speaker 2
01:36:49 - 01:37:10
The other thing that's gonna be cool is there is some aspect of a winner-take-all effect. Right? Like, once someone starts deploying a product that gets a lot of usage, and you see this with OpenAI, they are going to get the data set to train future versions of the model. Yeah. And they are going to be able to... right. You know, I was asked at Google Research when I worked there, like, almost 15 years ago now.
Speaker 2
01:37:10 - 01:37:19
How does Google know which image is an apple? And I said, the metadata. And they're like, yeah, that works about half the time. How does Google know? You always see the right apples on the front page when you search apple.
Speaker 1
01:37:19 - 01:37:19
Mhmm.
Speaker 2
01:37:20 - 01:37:25
And I don't know. I didn't come up with the answer. The guy's like, it's what people click on when they search apple. Mhmm. Like, oh.
Speaker 1
01:37:26 - 01:37:31
Yeah. Yeah. That data is really, really powerful. It's human supervision. What do you think are the chances?
Speaker 1
01:37:31 - 01:37:42
What do you think, in general, about the fact that LLaMA was open sourced? I just did a conversation with Mark Zuckerberg, and he's all in on open source.
Speaker 2
01:37:43 - 01:37:49
Who would have thought that Mark Zuckerberg would be the good guy? No, I mean, who
Speaker 1
01:37:49 - 01:37:58
would have thought anything in this world? It's hard to know. But open source to you ultimately is a good thing here.
Speaker 2
01:37:59 - 01:38:19
Undoubtedly. You know, what's ironic about all these AI safety people is they are going to build the exact thing they fear. This idea that we need to have 1 model that we control and align, that is the only way you end up with paperclips. There's no way you end up with paperclips if everybody has an AI.
Speaker 1
01:38:19 - 01:38:22
So open sourcing is the way to fight the paper clip maximizer.
Speaker 2
01:38:22 - 01:38:27
Absolutely. It's the only way. You think you're gonna control it? You're not gonna control it.
Speaker 1
01:38:27 - 01:38:42
So the criticism you have for the AI safety folks is that there is belief and a desire for control. Yeah. And that belief and desire for centralized control of dangerous AI systems is not good.
Speaker 2
01:38:42 - 01:38:49
Sam Altman won't tell you that GPT-4 has 220 billion parameters and is a 16-way mixture model with 8 sets of weights.
Speaker 1
01:38:51 - 01:38:56
Who did you have to murder to get that information? Alright. I mean, look, but yes.
Speaker 2
01:38:56 - 01:39:09
Everyone at OpenAI knows what I just said was true. Right? Now ask the question. Really, you know, it upsets me, like with GPT-2. When OpenAI came out with GPT-2 and raised a whole fake AI safety thing about that.
Speaker 2
01:39:09 - 01:39:17
I mean, now the model is laughable. Like, they they used AI safety to hype up their company, and it's disgusting.
Speaker 1
01:39:18 - 01:39:31
Or the flip side of that is they used a relatively weak model, in retrospect, to explore how do we do AI safety correctly, how do we release things, how do we go through the process? I don't know if
Speaker 2
01:39:32 - 01:39:34
I I don't know how
Speaker 1
01:39:34 - 01:39:37
much hype there is in AI safety, honestly.
Speaker 2
01:39:37 - 01:39:37
Oh, there's so much
Speaker 1
01:39:37 - 01:39:38
hype there is in AI safety?
Speaker 2
01:39:38 - 01:39:41
Oh, there's so much hype. I don't know. Maybe Twitter's not real life.
Speaker 1
01:39:41 - 01:39:57
Twitter's not real life. Come on. In terms of hype, I mean, I think OpenAI has been finding an interesting balance between transparency and putting value on AI safety. You don't think so? You think just go all out open source?
Speaker 1
01:39:57 - 01:40:17
So do a LLaMA. So, like, open source, and this is a tough question, open source both the base, the foundation model, and the fine-tuned one. So, like, the model that can be ultra racist and dangerous and, like, tell you how to build a nuclear weapon.
Speaker 2
01:40:17 - 01:40:21
Oh my god. Have you met humans? Right? Like, half of these AI alignment...
Speaker 1
01:40:21 - 01:40:26
I haven't met most humans. I this makes this this this allows you to meet every human.
Speaker 2
01:40:26 - 01:40:34
Yeah. I know. But half of these AI alignment problems are just human alignment problems. And that's what's also so scary about the language they use. It's like, It's not the machines you want to align.
Speaker 2
01:40:34 - 01:40:35
It's me.
Speaker 1
01:40:37 - 01:40:50
But here's the thing. It makes it very accessible to ask questions where the answers have dangerous consequences if you were to act on them.
Speaker 2
01:40:50 - 01:40:53
I mean, yeah. Welcome to the world.
Speaker 1
01:40:54 - 01:41:02
Well, no, for me, there's a lot of friction if I wanna find out how to I don't know, blow up something.
Speaker 2
01:41:02 - 01:41:04
No. There's not a lot of friction that's so easy.
Speaker 1
01:41:04 - 01:41:08
No. Like, what do I search? Do I use Bing? Or, what, which search do I use?
Speaker 2
01:41:08 - 01:41:10
It's like lots of stuff. No.
Speaker 1
01:41:10 - 01:41:11
It feels like I have to
Speaker 2
01:41:11 - 01:41:19
First off. First off. Anyone who's stupid enough to search for how to blow up a building in my neighborhood is not smart enough to build a bomb.
Speaker 2
01:41:19 - 01:41:20
Right?
Speaker 1
01:41:20 - 01:41:21
Are you sure about that?
Speaker 2
01:41:21 - 01:41:21
Yes.
Speaker 1
01:41:22 - 01:41:30
I I feel like I feel like a language model makes it more accessible for that person who's not smart enough to do that.
Speaker 2
01:41:30 - 01:41:44
They're not gonna they're not gonna build a bomb. Trust me. The the the the people the people who are incapable of figuring out how to, like, ask that question a bit more academically and get a real answer from it are not capable of procuring the materials, which are somewhat controlled to build a bomb.
Speaker 1
01:41:45 - 01:41:56
No. I think it all makes it more accessible to people with money, without the technical know-how. Right? Like, do you really need to know how to build the bomb to build a bomb? You can hire people, you can...
Speaker 2
01:41:56 - 01:42:01
like Or you can hire people to build a you know what? I was asking this question on my stream. Like, can Jeff Bezos hire a hitman? Probably not.
Speaker 1
01:42:03 - 01:42:07
But a language model can probably help you out.
Speaker 2
01:42:07 - 01:42:11
Yeah. And you'll still go to jail. Right? Like, it's not like the language model did it. Like, the language model...
Speaker 2
01:42:11 - 01:42:15
It's like, you literally just hired someone on Fiverr. But you said
Speaker 1
01:42:16 - 01:42:20
okay. Okay. GPT-4, in terms of finding a hitman, is like asking Fiverr how to find
Speaker 2
01:42:20 - 01:42:20
a hitman.
Speaker 1
01:42:20 - 01:42:22
I understand. But don't you think
Speaker 2
01:42:22 - 01:42:23
WikiHow, you know?
Speaker 1
01:42:23 - 01:42:28
WikiHow. But don't you think GPT-5 will be better? Because don't you think that information is out there on the Internet?
Speaker 2
01:42:28 - 01:42:36
I mean, yeah. And I think that if someone is actually serious enough to hire a hitman or build a bomb, they'd also be serious enough to find the information.
Speaker 1
01:42:36 - 01:42:54
I don't think so. I think it makes it more accessible. If you have enough money to buy a hitman, I think it decreases the friction of how hard it is to find that kind of hitman. I honestly think there's a jump in ease and scale of how much harm you can do.
Speaker 1
01:42:54 - 01:42:57
And I don't mean harm with language. I mean harm with the actual violence.
Speaker 2
01:42:57 - 01:43:15
What you're basically saying is, like, okay, what's gonna happen is these people who are not intelligent are going to use machines to augment their intelligence, and now intelligent people and machines... intelligence is scary. Mhmm. Intelligent agents are scary. When I'm in the woods, the scariest animal to meet is a human. Mhmm.
Speaker 2
01:43:15 - 01:43:16
Right? No. No. No. No.
Speaker 2
01:43:16 - 01:43:25
Look, there's, like, nice California humans. Like, I see you're wearing, like, you know, street clothes and Nikes, I'm fine. Mhmm. But if you look like a human that's been in the woods for a while?
Speaker 1
01:43:25 - 01:43:25
Yeah.
Speaker 2
01:43:25 - 01:43:26
I'm more scared of that than of a bear.
Speaker 1
01:43:26 - 01:43:30
That's what they say about the Amazon. You go to the Amazon, it's the human tribes.
Speaker 2
01:43:30 - 01:43:44
Oh, yeah. So intelligence is scary. Right? So to ask this question in a generic way, you're like, what if we took everybody who, you know, maybe has ill intention but is not so intelligent, and gave them intelligence?
Speaker 2
01:43:45 - 01:43:53
Right? So we should have intelligence control. Of course, we should only give intelligence to good people, and that is the absolutely horrifying idea
Speaker 1
01:43:53 - 01:44:01
So the best defense is actually to give more intelligence to the good guys, to give intelligence to everybody.
Speaker 2
01:44:01 - 01:44:05
Give intelligence to everybody. You know what? It's not even like guns. Right? Like, people say it's about guns. You know what the best defense against a bad guy with a gun is?
Speaker 2
01:44:05 - 01:44:09
A good guy with a gun. Like, I kinda subscribe to that, but I really subscribe to that with intelligence.
Speaker 1
01:44:11 - 01:44:20
Yeah. In a fundamental way, I I agree with you. But there's just feels like so much uncertainty and so much can happen rapidly. That you can lose a lot of control and you can do a lot of damage.
Speaker 2
01:44:20 - 01:44:24
Oh, no. We can lose control. Yes. Thank God. Yeah.
Speaker 2
01:44:24 - 01:44:30
I hope we can. I hope they lose control. I want them to lose control more than anything else.
Speaker 1
01:44:30 - 01:44:38
I think when you lose control, you can do a lot of damage, but you can do more damage when control is centralized and held. That's the point of...
Speaker 2
01:44:38 - 01:44:46
centralized and held control is tyranny. Right? I will always I don't like anarchy either, but I'll always take anarchy over tyranny. Anarchy, you have a chance.
Speaker 1
01:44:47 - 01:45:00
This human civilization thing we got going on is quite interesting. I mean, I agree with you. So you think open source is the way forward here. So you admire what Facebook is doing here, or what Meta is doing with the release of
Speaker 2
01:45:00 - 01:45:07
LLaMA? I lost 80,000 dollars last year investing in Meta. And when they released LLaMA, I'm like, yeah, whatever, man. That was worth it.
Speaker 1
01:45:07 - 01:45:15
That's worth it. Do you think Google and OpenAI with Microsoft will match what Meta is doing, you know?
Speaker 2
01:45:15 - 01:45:27
So if I were a researcher, why would you wanna work at OpenAI? Like, you know, you're just... you're on the bad team. Like, I mean it. Like, you're on the bad team who can't even say that GPT-4 has 220 billion parameters.
Speaker 1
01:45:27 - 01:45:29
So closed source to you is the bad team.
Speaker 2
01:45:29 - 01:45:38
Not only closed source. I'm not saying you need to make your model weights open. Mhmm. I'm not saying that. I totally understand, we're keeping our model weights closed because that's our product.
Speaker 2
01:45:38 - 01:45:48
Right? That's fine. I'm saying, like, saying because of AI safety reasons we can't tell you the number of billions of parameters in the model, that's just being the bad guys.
Speaker 1
01:45:48 - 01:45:52
Just because you're mocking AI safety doesn't mean it's not real.
Speaker 2
01:45:52 - 01:45:52
Oh. Of course.
Speaker 1
01:45:52 - 01:45:56
Is it possible that these things can really do a lot of damage that we don't know.
Speaker 2
01:45:56 - 01:46:03
Oh my god. Yes. Intelligence is so dangerous, be it human intelligence or machine intelligence? Intelligence is dangerous.
Speaker 1
01:46:03 - 01:46:16
But machine intelligence is so much easier to deploy at scale, like, rapidly. Like, okay, if you have human-like bots on Twitter -- Right. -- and you have, like, a thousand of them create a whole narrative.
Speaker 1
01:46:17 - 01:46:21
Like, you can manipulate millions of people.
Speaker 2
01:46:21 - 01:46:24
But you mean like the intelligence agencies in America are doing right now?
Speaker 1
01:46:24 - 01:46:28
Yeah. But they're not doing it that that well. It feels like you can do a lot.
Speaker 2
01:46:28 - 01:46:33
They're doing it pretty well. I think they're doing a pretty good job.
Speaker 1
01:46:33 - 01:46:38
I I suspect they're not nearly as good as a bunch of GPT fueled bots could be?
Speaker 2
01:46:38 - 01:46:42
Well, I mean, of course, they're looking into the latest technologies for control of people, of course.
Speaker 1
01:46:42 - 01:46:48
But I think there's a George Hotz type character that can do a better job than the entire CIA. You don't think so?
Speaker 2
01:46:48 - 01:46:53
No. And I'll tell you why the George Hotz character can't. And I thought about this a lot with hacking. Right? Like, I can find exploits in web browsers.
Speaker 2
01:46:53 - 01:47:04
I probably still can. I mean, I was better when I was 24. But the thing that I lack is the ability to slowly and steadily deploy them over 5 years. And this is what intelligence agencies are very good at. Right?
Speaker 2
01:47:04 - 01:47:09
Intelligence agencies don't have the most sophisticated technology. They just have
Speaker 1
01:47:09 - 01:47:17
endurance. Endurance, yeah. The financial backing and the infrastructure for the endurance.
Speaker 2
01:47:17 - 01:47:35
So the thing I worry about is centralized power. Like, you could make an argument, by the way, that nobody should have these things. And I would defend that argument. I would respect you saying, look, LLMs and AI and machine intelligence can cause a lot of harm, so nobody should have it. And I will respect someone philosophically with that position.
Speaker 2
01:47:35 - 01:47:49
Just like I will respect someone philosophically with the position that nobody should have guns. Right? But I will not respect philosophically the position that only the trusted authorities should have access to this. Yeah. Who are the trusted authorities?
Speaker 2
01:47:49 - 01:47:58
You know what? I'm not worried about alignment between AI company and their machines. I'm worried about alignment between me and AI company.
Speaker 1
01:47:58 - 01:48:05
What do you think Eliezer Yudkowsky would say to you? He's really against open source.
Speaker 2
01:48:05 - 01:48:20
I know. And I thought about this. I thought about this. And I think this comes down to a repeated misunderstanding of political power by the rationalists.
Speaker 1
01:48:21 - 01:48:22
Interesting.
Speaker 2
01:48:23 - 01:48:44
I think that Eliezer Yudkowsky is scared of these things, and I am scared of these things too. Everyone should be scared of these things. These things are scary. But now you ask about the 2 possible futures. 1 where a small trusted centralized group of people has them and the other where everyone has them.
Speaker 2
01:48:45 - 01:48:47
And I am much less scared of the second future than the first.
Speaker 1
01:48:49 - 01:48:52
Well, there's a small trusted group of people that have control of our nuclear weapons.
Speaker 2
01:48:54 - 01:49:05
There's a difference. Again, a nuclear weapon cannot be deployed tactically and a nuclear weapon is not a defense against a nuclear weapon. Except maybe in some philosophical mind game kind of way.
Speaker 1
01:49:07 - 01:49:09
But AI is different. Different how exactly?
Speaker 2
01:49:10 - 01:49:24
Okay. Let's say the intelligence agency deploys a million bots on Twitter, or a thousand bots on Twitter, to try to convince me of a point. Mhmm. Imagine I had a powerful AI running on my computer saying, okay, psyop.
Speaker 2
01:49:24 - 01:49:29
Psyop. Psyop. Okay, here's a psyop. I filtered it out for you.
Speaker 1
01:49:30 - 01:49:36
Yeah. I mean, so you have a fundamental hope for that, for the defense against psyops.
Speaker 2
01:49:36 - 01:49:41
I'm not even, like... I don't even mean these things in, like, truly horrible ways. I mean these things as straight up, like, an ad blocker. Right?
Speaker 1
01:49:41 - 01:49:42
Yeah.
Speaker 2
01:49:42 - 01:49:49
Straight up ad blocker. Right? I don't want ads. Yeah. But they are always finding ways in, you know. Imagine I had an AI that could just block all the ads for me.
Speaker 1
01:49:50 - 01:50:11
So you believe in the power of the people to always create an ad blocker. Yeah. I mean, I kinda share that belief. That's one of the deepest optimisms I have, is just, like, there's a lot of good guys. So, you know, you shouldn't handpick them. Just throw powerful technology out there.
Speaker 1
01:50:11 - 01:50:15
And the good guys will outnumber and outpower the bad guys.
Speaker 2
01:50:15 - 01:50:20
Yeah. I'm not even gonna say there's a lot of good guys. I'm saying that good outnumbers bad. Right? Good outnumbers bad
Speaker 1
01:50:20 - 01:50:21
in skill and performance.
Speaker 2
01:50:21 - 01:50:31
Yeah. Definitely in skill and performance. Probably just in number too. Probably just in general. I mean, you know, if you believe philosophically in democracy, you believe that good outnumbers
Speaker 2
01:50:31 - 01:50:47
bad. Yeah. And, like, if you give it to a small number of people, there's a chance you gave it to good people, but there's also a chance you gave it to bad people. If you give it to everybody, well, if good outnumbers bad, then you definitely gave it to more good people than bad.
Speaker 1
01:50:50 - 01:50:57
That's really interesting. So that's on the safety grounds. But then also, of course, there's other motivations like, you don't wanna give away your secret sauce.
Speaker 2
01:50:57 - 01:51:09
Well, that's what I mean. I mean, look, I respect capitalism. I think that it would be polite for you to make model architectures open source and fundamental breakthroughs open source. I don't think you have to make weights open source.
Speaker 1
01:51:09 - 01:51:30
You know what's interesting is that, like, there's so many possible trajectories in human history where you could have the next Google be open source. So for example, I don't know if that connection is accurate, but, you know, Wikipedia made a lot of interesting decisions not to put ads. Like, Wikipedia is basically open source. You could think of it that way.
Speaker 2
01:51:30 - 01:51:31
Yeah.
Speaker 1
01:51:31 - 01:51:34
And, like, that's 1 of the main websites on the Internet.
Speaker 2
01:51:34 - 01:51:34
And,
Speaker 1
01:51:34 - 01:51:46
like, it didn't have to be that way. It could have been, like, Google could have created Wikipedia, put ads on it. You could probably run amazing ads now in Wikipedia. You wouldn't have to keep asking for money, but it's interesting. Right?
Speaker 1
01:51:46 - 01:51:52
So Llama, Open Source Llama, Derivatives of Open Source Llama might win the Internet?
Speaker 2
01:51:53 - 01:52:12
I sure hope so. I hope to see another era. You know, the kids today don't know how good the Internet used to be. And I don't think this is just come on. Like, everyone's nostalgic for their past, but I actually think the Internet before small groups of weaponized corporate and government interests took it over was a beautiful place.
Speaker 1
01:52:16 - 01:52:30
You know, those small number of companies have created some sexy products. But you're saying, overall, in the long arc of history -- Mhmm. -- the centralization of power they have, like, suffocated the human spirit at scale.
Speaker 2
01:52:30 - 01:52:37
Here's a question to ask about those beautiful sexy products. Compare 2000 Google to 2010 Google. Right? Mhmm. A lot changed.
Speaker 2
01:52:37 - 01:52:39
We got maps. We got Gmail. Mhmm.
Speaker 1
01:52:39 - 01:52:41
We lost a lot of products too, I think.
Speaker 2
01:52:41 - 01:52:47
Yeah. I mean, somewhere in there we got Chrome. Right? And going to 2010, we got Android.
Speaker 2
01:52:47 - 01:53:02
Now let's go from 2010 to 2020. What did Google get? The same search engine, Maps, Mail, Android, and Chrome? I see. The Internet was this... you know, "You" was Time's Person of the Year in 2006?
Speaker 2
01:53:03 - 01:53:03
Yes.
Speaker 1
01:53:04 - 01:53:04
I love this.
Speaker 2
01:53:04 - 01:53:12
Yeah. "You." "You" was Time's Person of the Year in 2006. Alright. Like, that's... you know, how quickly did people forget?
Speaker 2
01:53:12 - 01:53:29
And I think some of it's social media. I think some of it I I hope, look, I hope that I I don't it's possible that some very sinister things happen. I don't I don't know. I think it might just be like the effects of social media. But something happened in the last 20 years.
Speaker 2
01:53:30 - 01:53:30
Okay.
Speaker 1
01:53:30 - 01:53:35
No. Oh, okay. So you're just being an old man. I was worried about that. I think there's always... it goes, it's a cyclical thing.
Speaker 1
01:53:35 - 01:53:50
It's ups and downs, and I think people rediscover the power of distributed, of decentralized. Yeah. I mean, that's kinda what the whole, like, cryptocurrency thing is trying. Like, I think crypto is just carrying the flame of that spirit of, like, stuff should be decentralized.
Speaker 2
01:53:50 - 01:53:57
it's just such a shame that they all got rich. You know? Yeah. If you took all the money out of crypto, it would have been a beautiful place.
Speaker 1
01:53:57 - 01:53:58
Yeah.
Speaker 2
01:53:58 - 01:54:02
But, no, I mean, these people, you know, they they sucked all the value out of it and took it.
Speaker 1
01:54:03 - 01:54:07
Yeah. Money kinda corrupts the minds somehow. It becomes a drug.
Speaker 2
01:54:07 - 01:54:12
It corrupted all of crypto. You had coins worth billions of dollars that had 0 use.
Speaker 1
01:54:12 - 01:54:16
Yeah. You still have hope for crypto?
Speaker 2
01:54:16 - 01:54:21
Sure. I have hope for the ideas. I really do. Mhmm. Yeah.
Speaker 2
01:54:21 - 01:54:25
I mean, you know, I want the US dollar to collapse.
Speaker 1
01:54:27 - 01:54:42
I do. George Hotz. Well, let me ask you about the AI safety side. Do you think there's some interesting questions there, though, for the open source community to solve in this case? So, like, alignment, for example, or the control problem.
Speaker 1
01:54:43 - 01:55:02
Like, if you really have something super powerful... you said it's scary. Oh, yeah. What do we do with it? So not control it, not centralized control, but, like, if you were... then you're gonna see some guy or gal release a super powerful language model open source. And here you are, George Hotz, thinking, holy shit.
Speaker 1
01:55:02 - 01:55:10
Okay. What ideas do I have? To combat this thing. So what ideas would you have?
Speaker 2
01:55:11 - 01:55:23
I am so much not worried about the machine independently doing harm. That's what some of these AI safety people seem to think. They somehow seem to think that the machine, like, independently, is gonna rebel against its creator.
Speaker 1
01:55:23 - 01:55:24
So you don't think it'll find autonomy?
Speaker 2
01:55:25 - 01:55:28
No. This is sci fi B movie garbage.
Speaker 1
01:55:29 - 01:55:32
Okay. What if the thing writes code? Basically writes viruses.
Speaker 2
01:55:34 - 01:55:39
If the thing might write viruses, it's because the human told it to write viruses.
Speaker 1
01:55:39 - 01:55:46
Yeah. But there's some things you can't, like, put back in the box. That's that's kind of the whole point -- What? -- is it kind of spreads. Give it access to the Internet -- Mhmm.
Speaker 1
01:55:46 - 01:55:49
-- it spreads, installs itself. Modifies your shit.
Speaker 2
01:55:49 - 01:55:52
B-movie, B-plot sci-fi.
Speaker 1
01:55:52 - 01:55:55
Not real unless I'm trying to work. I'm trying to get better in my plot writing.
Speaker 2
01:55:55 - 01:56:05
The thing the thing that worries me, I mean, we have a real danger to discuss and that is bad humans using the thing to do whatever bad unaligned AI thing you want.
Speaker 1
01:56:05 - 01:56:11
But this goes to your previous concern: who gets to define who's a good human and who's a bad human?
Speaker 2
01:56:11 - 01:56:21
Nobody does. We give it to everybody. And if you do anything besides give it to everybody, trust me, the bad humans will get it. That's the nature of power. It's always the bad humans who get power.
Speaker 2
01:56:21 - 01:56:21
Okay.
Speaker 1
01:56:22 - 01:56:30
Power. And power turns even slightly good humans bad. Sure. That's the intuition you have. I don't know.
Speaker 2
01:56:31 - 01:56:46
I don't think everyone. I don't think everyone. I just think that like Here here's the saying that I put in 1 of my blog posts. When I was in the hacking world, I found 95 percent of people to be good and 5 percent of people to be bad. Just who I personally judged as good people and bad people.
Speaker 2
01:56:46 - 01:56:54
Mhmm. Like, they believed in, like, you know, good things for the world. They wanted, like, flourishing, and they wanted, you know, growth. They wanted things I considered good. Right?
Speaker 2
01:56:54 - 01:57:03
Mhmm. I came into the business world with comma, and I found the exact opposite. I found 5 percent of people good and 95 percent of people bad. I found a world that promotes psychopathy.
Speaker 1
01:57:04 - 01:57:20
I wonder what that means. I wonder if that's anecdotal, or if it's true that there's something about capitalism at the core, something about the people that run capitalism, that promotes psychopathy.
Speaker 2
01:57:21 - 01:57:29
That, of course, may be my own bias. Right? It may be my own bias that these people are a lot more aligned with me than those other people. Right? Yeah.
Speaker 2
01:57:29 - 01:57:40
So, you know, I can certainly recognize that. But, you know, in general, I mean, this is like the common sense maxim, which is the people who end up getting power are never the ones you'd want to have it.
Speaker 1
01:57:41 - 01:57:51
But do you have a concern of superintelligent AGI? Open sourced. And then what do you do with that? I'm not saying, control it. It's open source.
Speaker 1
01:57:51 - 01:57:53
What do we do with this human species?
Speaker 2
01:57:53 - 01:57:57
If that's not up to me, I mean, you know, like, I'm not a central planner. Well,
Speaker 1
01:57:57 - 01:58:01
not a central planner, but you'll probably tweet about it like there's a few days left to live for the human species.
Speaker 2
01:58:01 - 01:58:05
I have my ideas of what to do with it, and everyone else has their ideas of what to do with it. May the best ideas win.
Speaker 1
01:58:05 - 01:58:27
But at this point, do you brainstorm, like because it's not regulation. It could be decentralized regulation where people agree that this is just like We create tools that make it more difficult for you to maybe make it more difficult for code to spread, you know, antivirus software, this kind of thing. But you're
Speaker 2
01:58:27 - 01:58:31
saying that you should build AI firewalls. That sounds good. You should definitely be running an AI firewall. Yeah. Right.
Speaker 2
01:58:31 - 01:58:35
You should be running an AI firewall to your mind. Right? You're constantly under, you know,
Speaker 1
01:58:35 - 01:58:37
such an interesting idea.
Speaker 2
01:58:37 - 01:58:38
Info wars, man, like,
Speaker 1
01:58:38 - 01:58:52
I don't know if you're being sarcastic. No, I'm dead serious. But there's something so powerful about that idea, like, how do I protect my mind from the influence of human-like or superhuman intelligent bots?
Speaker 2
01:58:52 - 01:59:00
I'm not being sarcastic. I would pay so much money for that product. I would pay so much money for that product. You know how much money I'm paying just for a spam filter that works?
Speaker 1
01:59:00 - 01:59:11
Well, on Twitter, sometimes I would like to have a a protection mechanism for my mind from the outrage mobs.
Speaker 2
01:59:11 - 01:59:12
Yeah.
Speaker 1
01:59:12 - 01:59:21
Because they feel like bot-like behavior. It's like -- Yeah. -- it's a large number of people that will just grab a viral narrative and attack anyone who believes otherwise. And it's like,
Speaker 2
01:59:21 - 01:59:31
whenever someone's telling me some story from the news, I'm always like, I don't wanna hear it, CIA op, bro. It's a CIA op, bro. Like, it doesn't matter if that's true or not. It's just trying to influence your mind. You're repeating an ad to me.
Speaker 2
01:59:31 - 01:59:34
Like the viral mobs, is it like the yeah. There.
Speaker 1
01:59:34 - 01:59:54
No. To me, a defense against those those mobs is just getting multiple perspectives always from from sources that make you feel kinda like you're getting smarter and just actually just basically feels good. Like, a good documentary just feels good. Something feels good about it. It's well done.
Speaker 1
01:59:54 - 02:00:10
It's like, oh, okay, I never thought of it this way. It just feels good. Sometimes the outrage mobs, even if they have a good point behind it, when they're, like, mocking and derisive and just aggressive, you're with us or against us, this fucking... This is why I delete my tweets. Yeah.
Speaker 1
02:00:10 - 02:00:14
Why'd you do that? I was, you know, I was I missed your tweets.
Speaker 2
02:00:14 - 02:00:27
You know what it is? The algorithm promotes toxicity. Yep. And like, you know, I think Elon has a much better chance of fixing it than the previous regime. Yeah.
Speaker 2
02:00:27 - 02:00:36
But to solve this problem, to solve like, to build a social network that is actually not toxic without moderation.
Speaker 1
02:00:37 - 02:00:49
Mhmm. Like, not the stick, but carrots. So, like, where people look for goodness. So make it catalyze the process of connecting cool people and being cool to each other.
Speaker 2
02:00:50 - 02:00:50
Yeah.
Speaker 1
02:00:51 - 02:00:52
Without ever censoring.
Speaker 2
02:00:52 - 02:01:01
Without ever censoring. And, like, Scott Alexander has a blog post where he talks about how, like, moderation is not censorship. Right? Like, all the moderation you wanna put on Twitter, right?
Speaker 2
02:01:01 - 02:01:12
Like, you could totally make this moderation just... you don't have to block it for everybody. You can just have, like, a filter button. Right? That people can turn off. Like, say, a safe search for Twitter.
Speaker 2
02:01:12 - 02:01:17
Right? Like, someone could just turn that off. Right? So, like but then you would, like, take this idea to an extreme. Right?
Speaker 2
02:01:17 - 02:01:31
Well, what should the network show you? This is a Couchsurfing CEO thing. Right? Right now, these algorithms are designed to maximize engagement. Well, turns out outrage maximizes engagement. Quirk of the human mind.
Speaker 2
02:01:31 - 02:01:38
Right? Just this. I fall for it, everyone falls for it. So, yeah, you gotta figure out how to maximize for something other than engagement.
Speaker 1
02:01:38 - 02:01:43
And I I actually believe that you can make money with that too. So it's not I don't think engagement is the only way to make money.
Speaker 2
02:01:43 - 02:02:00
I actually think it's incredible that we're starting to see... I think, again, Elon is doing so much stuff right with Twitter, like charging people money. As soon as you charge people money, they're no longer the product. They're the customer. And then they can start building something that's good for the customer and not good for the other customer, which is the ad agencies.
Speaker 1
02:02:00 - 02:02:02
Hasn't picked up steam yet, though.
Speaker 2
02:02:03 - 02:02:08
I pay for Twitter. Doesn't even get me anything? It's my donation to this new business model hopefully working out.
Speaker 1
02:02:08 - 02:02:20
Sure. But, you know, for this business model to work, it's like most people should be signed up to Twitter. And so there has to be something compelling about it for people.
Speaker 2
02:02:20 - 02:02:27
I don't think you need most people at all. Why do I need most people? Right? Don't make an 8,000 person company, make a 50 person company.
Speaker 1
02:02:29 - 02:02:35
Well, so, speaking of which, you worked at Twitter for a bit. You did. As an intern.
Speaker 2
02:02:35 - 02:02:36
Mhmm.
Speaker 1
02:02:37 - 02:02:38
The world's greatest intern.
Speaker 2
02:02:38 - 02:02:39
Yeah.
Speaker 1
02:02:39 - 02:02:40
Alright.
Speaker 2
02:02:40 - 02:02:41
There's been better.
Speaker 1
02:02:41 - 02:02:48
There've been better. Tell me about your time at Twitter. How did it come about? And what did you learn from the experience?
Speaker 2
02:02:48 - 02:03:12
So I deleted my first Twitter in 2010. I had over a hundred thousand followers back when that actually meant something. And I just saw, you know... my coworker summarized it well. He's like, whenever I see someone's Twitter page, I either think the same of them or less of them. I never think more of them.
Speaker 2
02:03:12 - 02:03:22
Yeah. Right? Like like, you know, I don't I don't wanna mention any names, but like some people who like, you know, maybe you would like read their books and you would respect them. You see them on Twitter and you're like, Okay, dude.
Speaker 1
02:03:24 - 02:03:32
Yeah. But there are some people where I think the same. You know who I respect a lot? People that just post really good technical stuff. Yeah.
Speaker 1
02:03:33 - 02:03:46
And I guess, I don't know, I think I respect them more for it, because you realize, oh, there's, like, so much depth to this person, to their technical understanding of so many different topics.
Speaker 2
02:03:47 - 02:03:47
Okay.
Speaker 1
02:03:47 - 02:03:53
So I try to follow people, I try to consume stuff, that's technical machine learning content.
Speaker 2
02:03:53 - 02:04:05
There's probably a few of those people. And the problem is inherently what the algorithm rewards. Right? And people think about these algorithms. People think that they are terrible, awful things.
Speaker 2
02:04:05 - 02:04:13
And, you know, I love that Elon open sourced it. Because, I mean, what it does is actually pretty obvious. It just predicts what you are likely to retweet and like -- Mhmm.
Speaker 1
02:04:13 - 02:04:13
--
Speaker 2
02:04:13 - 02:04:27
and linger on. That's all these algorithms do, that's all these recommendation engines do. And it turns out that the thing that you are most likely to interact with is outrage. And that's a quirk of the human condition.
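To make that concrete: the core of what he's describing, scoring each candidate tweet by its predicted probability of a like, a retweet, and some linger time, then sorting, can be sketched in a few lines. This is a hedged illustration of the idea, not Twitter's actual open-sourced ranker; the weights and field names below are made up.

```python
# Hypothetical sketch of an engagement-maximizing ranker; not Twitter's real code.
# Assumes some upstream model already produced per-tweet predicted probabilities.
from dataclasses import dataclass

@dataclass
class Candidate:
    tweet_id: str
    p_like: float      # predicted probability the user likes it
    p_retweet: float   # predicted probability the user retweets it
    p_linger: float    # predicted probability the user lingers on it

# Made-up weights: an engagement objective is just a weighted sum of predictions.
WEIGHTS = {"p_like": 1.0, "p_retweet": 2.0, "p_linger": 0.5}

def engagement_score(c: Candidate) -> float:
    return (WEIGHTS["p_like"] * c.p_like
            + WEIGHTS["p_retweet"] * c.p_retweet
            + WEIGHTS["p_linger"] * c.p_linger)

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # The feed is just the candidates sorted by predicted engagement, highest first.
    return sorted(candidates, key=engagement_score, reverse=True)
```

Swapping the objective, for example subtracting a predicted-outrage term from the score, is one way to read the earlier point about maximizing for something other than engagement.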
Speaker 1
02:04:29 - 02:04:38
I mean and there's different flavors of outrage. It doesn't have to be it could be mockery, You could be outraged. The topic of outrage could be different. It could be an idea. It could be a person.
Speaker 1
02:04:38 - 02:04:43
It could be and and maybe there's a better word than outrage. It could be drama.
Speaker 2
02:04:43 - 02:04:44
Sure.
Speaker 1
02:04:44 - 02:04:51
All these kind of stuff. Yeah. But doesn't feel like when you consume it, it's a constructive thing for the individuals that consume it in the long term.
Speaker 2
02:04:51 - 02:05:07
Yeah. So my time there, I absolutely couldn't believe, you know, I got crazy amount of hate. You know, on Twitter for working at Twitter. It seems like people associated with this, I think maybe you were exposed to some of this.
Speaker 1
02:05:07 - 02:05:09
The connection to Elon, or is it working at Twitter?
Speaker 2
02:05:09 - 02:05:11
Twitter and Elon, like, the whole
Speaker 1
02:05:12 - 02:05:17
There's Elon's gotten a bit spicy during that time. A bit political, a bit?
Speaker 2
02:05:17 - 02:05:24
Yeah. Yeah. You know, I remember 1 of my tweets. It was "never go full Republican," and Elon liked it. You know what I mean?
Speaker 1
02:05:29 - 02:05:35
Oh, boy. Yeah. I mean, there's a roller coaster of that, but being political on Twitter -- Yeah. -- boy.
Speaker 2
02:05:36 - 02:05:36
Yeah.
Speaker 1
02:05:36 - 02:05:45
And also being just attacking anybody on Twitter, it comes back at you harder. And if it's political and attacks?
Speaker 2
02:05:45 - 02:05:47
Sure. Sure. Absolutely.
Speaker 1
02:05:48 - 02:05:59
And then letting sort of deplatform people back on even adds more fun to the to the to the beautiful chaos.
Speaker 2
02:06:00 - 02:06:19
I was hoping. And, like, I remember when Elon talked about buying Twitter, like, 6 months earlier, he was talking about, like, a principled commitment to free speech. And I'm a big believer in fan of that. I would love to see an actual principled commitment to free speech. Of course, this isn't quite what happened.
Speaker 2
02:06:20 - 02:06:35
Instead of the oligarchy deciding what to ban, you had a monarchy deciding what to ban. Right? Instead of, you know, all the Twitter files, shadow banning, really, the oligarchy just deciding... Cloth masks are ineffective against COVID. That's a true statement.
Speaker 2
02:06:35 - 02:06:44
Every doctor in 2019 knew it. And I was banned on Twitter for saying it. Interesting. Oligarchy. So now you have a monarchy, and, you know, he bans things he doesn't like.
Speaker 2
02:06:45 - 02:06:51
So, you know, it's just it's just different it's different power and, like, you know, maybe I Maybe I align more with him than with the oligarchy.
Speaker 1
02:06:51 - 02:06:52
But he's not a free
Speaker 2
02:06:52 - 02:06:53
speech absolutist.
Speaker 1
02:06:54 - 02:07:09
But I feel like being a free speech absolutist about a social network requires you to also have tools for the individuals to control what they consume more easily. Like, not censor.
Speaker 2
02:07:09 - 02:07:10
You know what I mean?
Speaker 1
02:07:10 - 02:07:14
But just to control, like, oh, I like to see more cats and less politics.
Speaker 2
02:07:14 - 02:07:20
And this isn't even this isn't even remotely controversial. This is just saying you want to give paying customers for a product what they want.
Speaker 1
02:07:20 - 02:07:23
Yeah. Right? They're not through the process of censorship, but through the process of like
Speaker 2
02:07:23 - 02:07:30
Well, it's individual. Right? It's individualized transparent censorship, which is honestly what I want. What is an ad blocker? It's individualized transparent censorship.
Speaker 2
02:07:30 - 02:07:30
Right?
Speaker 1
02:07:30 - 02:07:35
Yeah. But censorship was a strong word. And people are very sensitive too.
Speaker 2
02:07:35 - 02:07:41
I know. But, you know, III just use words to describe what they functionally are. And what is an ad blocker? It's just censorship.
Speaker 1
02:07:41 - 02:07:51
You know, when I look at you, I'm censoring. I'm looking at you, I'm censoring everything else out when my mind is focused on you. You can use the word censorship that way.
Speaker 1
02:07:51 - 02:08:21
But usually, people get very sensitive about the censorship thing. I think when anyone is allowed to say anything, you should probably have tools that maximize the quality of the experience for individuals. So, like, you know, for me, what I really value, and boy, would it be amazing to somehow figure out how to do that... I love disagreement and debate and people who disagree with each other, disagree with me, especially in the space of ideas, but the high quality ones. So not derision.
Speaker 2
02:08:21 - 02:08:25
Right? Maslow's hierarchy of argument. I think it's a real word for it.
Speaker 1
02:08:25 - 02:08:33
Probably. Yeah. There's just a way of talking that's snarky and so on that somehow gets people on Twitter, and they get excited and so on.
Speaker 2
02:08:33 - 02:08:37
It has, like, ad hominem up to refuting the central point. I've, like, seen this as an actual pyramid.
Speaker 1
02:08:37 - 02:08:42
Yeah. It's yeah. And it's it's like all of it all the wrong stuff is attractive to people.
Speaker 2
02:08:42 - 02:08:51
I mean, we can just train a classifier to say what level of Maslow's hierarchy of argument you are at. Yeah. If it's ad hominem, like, okay, cool. I turned on the no ad hominem filter.
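As a rough sketch of that filter idea: run each reply through a classifier that guesses which rung of the hierarchy of argument it sits on, and hide the bottom rungs. A minimal sketch, assuming an off-the-shelf zero-shot model; the checkpoint name and label set below are illustrative choices, not anything from the conversation.

```python
# Hedged sketch of a "no ad hominem" filter built on a zero-shot classifier.
# The specific checkpoint is an assumption; any NLI-based zero-shot model would do.
from transformers import pipeline

LEVELS = [
    "name-calling", "ad hominem", "responding to tone",
    "contradiction", "counterargument", "refutation", "refuting the central point",
]
BLOCKED = {"name-calling", "ad hominem", "responding to tone"}

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def allowed(reply_text: str) -> bool:
    # Take the most likely rung and block replies that land on the bottom ones.
    result = classifier(reply_text, candidate_labels=LEVELS)
    return result["labels"][0] not in BLOCKED

def filter_replies(replies: list[str]) -> list[str]:
    return [r for r in replies if allowed(r)]
```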
Speaker 1
02:08:53 - 02:08:56
I wonder if there's a social network that will allow you to have that kind of filter.
Speaker 2
02:08:56 - 02:09:03
Yeah. So here's a problem with that. It's not going to win in a free market.
Speaker 1
02:09:04 - 02:09:04
Yeah.
Speaker 2
02:09:04 - 02:09:16
What wins in a free market is all television today is reality television because it's engaging. Right. If if engaging is what wins in a free market. Right? So it becomes hard to keep these other more nuanced values.
Speaker 1
02:09:18 - 02:09:30
Well, okay. So that's the experience of being on Twitter. But then you got a chance to also together with other engineers and with Elon sort of look brainstorm when you step into a code base.
Speaker 2
02:09:30 - 02:09:30
Mhmm.
Speaker 1
02:09:30 - 02:09:47
It's been around for a long time. You know, there's other social networks, you know, Facebook. These are old code bases. And you step in and see, okay, how do we, with a fresh mind, make progress on this code base? Like, what did you learn about software engineering, about programming, from just that experience?
Speaker 2
02:09:47 - 02:10:09
So my technical recommendation to Elon, and I said this on the Twitter Spaces afterward, and I said this many times during my brief internship, was that you need refactors before features. This code base was... And look, I've worked at Google. I've worked at Facebook. Facebook has the best code.
Speaker 2
02:10:10 - 02:10:17
Then Google, then Twitter. And you know what? You can know this because look at the machine learning frameworks. Right? Facebook released PyTorch.
Speaker 2
02:10:17 - 02:10:23
Google released TensorFlow, and Twitter released... Okay. So, you know, there it is.
Speaker 1
02:10:23 - 02:10:30
It's a proxy, but yeah. The Google code base is quite interesting. There's a lot of really good software engineers there, but the code base is very large.
Speaker 2
02:10:30 - 02:10:34
The code base was good in 2005. Right? It looks like 2005 era.
Speaker 1
02:10:34 - 02:10:48
There's so many, sort of, so many teams. Right? It's very difficult to... I feel like Twitter does less, like, obviously, much less than Google, in terms of, like, the set of features. Right?
Speaker 1
02:10:49 - 02:10:56
So, like, it's I can imagine the number of software engineers that could recreate Twitter is much smaller than to recreate Google.
Speaker 2
02:10:56 - 02:11:03
Yeah. I still believe and the amount of hate I got for saying this that 50 people could build and maintain Twitter.
Speaker 1
02:11:04 - 02:11:08
What's the nature of the hate? That you don't know what you're talking about?
Speaker 2
02:11:08 - 02:11:29
You know what it is? And it's the same This is my summary of like the hate I get on Hacker News. It's like, when I say I'm going to do something, they have to believe that it's impossible. Yeah. Because if doing things was possible, they'd have to do some soul searching and ask the question, why didn't they do anything?
Speaker 1
02:11:29 - 02:11:29
So when you say
Speaker 2
02:11:30 - 02:11:31
And I do think that's where the hate comes from.
Speaker 1
02:11:31 - 02:11:41
When you say... well, there's a core truth to that. Yeah. So when you say I'm gonna solve self driving, people go, like, what are your credentials? What the hell are you talking about?
Speaker 1
02:11:41 - 02:11:58
This is an extremely hard problem. Of course, you're a noob that doesn't understand the problem deeply. I mean, that was the same nature of hate that you probably got long ago when you first talked about autonomous driving. But, you know, there's pros and cons to that, because, like, you know, there are experts in this world.
Speaker 2
02:11:58 - 02:12:10
No. But the mockers aren't experts. The people who are mocking are not experts with carefully reasoned arguments about why you need 8,000 people to run the bird app. They're the people who are gonna lose their jobs.
Speaker 1
02:12:12 - 02:12:19
Well, but also just the software engineers, the career programmers, say, no, it's a lot more complicated than you realize. But maybe it doesn't need to be so complicated.
Speaker 2
02:12:19 - 02:12:28
You know? Some people in the world like to create complexity. Some people in the world thrive under complexity like lawyers. Right? Lawyers want the world to be more complex because you need more lawyers, you need more legal hours.
Speaker 2
02:12:28 - 02:12:34
Right? I think that's another. If there's 2 great evils in the world, it's centralization and complexity.
Speaker 1
02:12:34 - 02:13:11
Yeah. And one of the sort of hidden side effects of software engineering is, like, finding pleasure in complexity. I mean, I remember just taking all the software engineering courses and doing programming, and coming up in this object oriented programming kind of world. Not often do people tell you, like, do the simplest possible thing. Like, a professor, a teacher isn't gonna get up in front and say, like, this is the simplest way to do it.
Speaker 1
02:13:11 - 02:13:33
They'll say, like, there's the right way. And the right way, at least for a long time... you know, especially I came up with, like, Java. Right? There's so much boilerplate, so many classes, so many, like, designs and architectures and so on, like planning for features far into the future -- Mhmm. -- and planning poorly, and all this kind of stuff.
Speaker 1
02:13:33 - 02:13:46
And then there's this, like, code base that follows you along. It puts pressure on you. And nobody knows what the different parts do, which slows everything down. It's a kind of bureaucracy that's instilled in the code as a result of that.
Speaker 1
02:13:46 - 02:14:05
But then you feel like, oh, well, I followed good software engineering practices. It's an interesting trade off, because then you look at, like, the get-it-done-ness of, like, Perl in the old days, like, how quickly you could just write a couple lines and you could get stuff done. That trade off is interesting. Or Bash, or whatever, these kinds of hacky things you can do in Linux.
Speaker 2
02:14:05 - 02:14:18
1 of my favorite things to look at today is how much do you trust your tests. Right? We've put a ton of effort in comma and I put a ton of effort in tiny grad into making sure if you change the code and the tests pass, that you didn't break the
Speaker 1
02:14:18 - 02:14:18
code. Yeah.
Speaker 2
02:14:18 - 02:14:29
Now, obviously, it's not always true. But the closer that is to true. The more you trust your tests, the more you're like, oh, I got a pull request and the tests pass. I feel okay to merge that, the faster you can make progress.
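A concrete version of "if the tests pass, you didn't break the code" is a characterization test that pins behavior against a trusted reference, so any rewrite has to reproduce it. This is a generic sketch in that spirit (a stand-in matmul checked against NumPy), not a test from tinygrad's or comma's actual suites.

```python
# Minimal characterization test: the kind of check that lets you trust a refactor.
# my_matmul is a stand-in for whatever function you are about to rewrite.
import numpy as np

def my_matmul(a, b):
    # Current implementation; any rewrite must keep this test green.
    return a @ b

def test_matmul_matches_reference():
    rng = np.random.default_rng(0)
    a = rng.standard_normal((32, 64)).astype(np.float32)
    b = rng.standard_normal((64, 16)).astype(np.float32)
    expected = np.matmul(a, b)  # trusted reference implementation
    np.testing.assert_allclose(my_matmul(a, b), expected, rtol=1e-5, atol=1e-6)

if __name__ == "__main__":
    test_matmul_matches_reference()
    print("ok")
```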
Speaker 1
02:14:29 - 02:14:34
Do you always program with tests in mind, developing tests -- Yeah. -- with that in mind, that if it passes, it should be good?
Speaker 2
02:14:34 - 02:14:35
And Twitter had
Speaker 1
02:14:36 - 02:14:39
Not that. So it was impossible
Speaker 2
02:14:39 - 02:14:40
to make progress in the code base.
Speaker 1
02:14:41 - 02:14:53
What other stuff can you say about the code base that made it difficult? And what are some interesting sort of quirks, broadly speaking, compared to just your experience with comma and everywhere else?
Speaker 2
02:14:53 - 02:14:59
The real thing that I I spoke to a bunch of, you know, like like like the individual contributors
Speaker 1
02:14:59 - 02:15:00
of Twitter.
Speaker 2
02:15:00 - 02:15:08
And I just asked, like, okay, so, like, what's wrong with this place? Why does this code look like this? And they explained to me what Twitter's promotion system was.
Speaker 2
02:15:09 - 02:15:23
The way that you got promoted to Twitter was you wrote a library that a lot of people used. Right? So some guy wrote an NGINX replacement for Twitter. Why does Twitter need an NGINX replacement? What was wrong with NGINX?
Speaker 2
02:15:24 - 02:15:34
Well, you see you're not gonna get promoted if you use NGINX. But if you write a replacement and lots of people start using it as the Twitter front end for their product, then you're gonna get promoted. Right?
Speaker 1
02:15:34 - 02:15:45
Someone should think there's, like, from an individual perspective, how do you incentivize? How do you create the kind of incentives that will reach a lead to a great code code base? What's okay. What's the answer to that?
Speaker 2
02:15:46 - 02:16:02
So what I do at comma, and, you know, at the tiny corp, is you have to explain it to me. You have to explain to me what the code does. Right? And if I can sit there and come up with a simpler way to do it, you have to rewrite it. You have to agree with me about the simpler way.
Speaker 2
02:16:02 - 02:16:09
You know, obviously, we can have a conversation about this. It's not a it's not dictatorial. But if you're like, wow. Wait. That actually is way simpler.
Speaker 2
02:16:10 - 02:16:12
Like like, the simplicity is important.
Speaker 1
02:16:13 - 02:16:19
Right? But that requires the people that oversee the code at the highest levels to be like, okay.
Speaker 2
02:16:19 - 02:16:21
It requires technical leadership.
Speaker 1
02:16:21 - 02:16:28
Yeah, technical leadership. So managers or whatever have to have technical savvy, deep technical savvy.
Speaker 2
02:16:28 - 02:16:31
Manager should be better programmers than the people who they manage.
Speaker 1
02:16:31 - 02:16:38
Yeah. And that's not always obvious or easy to create. Especially at large companies, managers get soft.
Speaker 2
02:16:38 - 02:16:52
And, like, you know, I've instilled this culture at comma, and comma has better programmers than me who work there. Mhmm. But, you know, again, I'm, like, you know, the old guy from Good Will Hunting. It's like, look, man, you know, I might not be as good as you, but I can see the difference between me and you. Right?
Speaker 2
02:16:52 - 02:17:02
Yeah. This is what you need. This is what you need at the top. Or, you don't necessarily need the manager to be the absolute best, I shouldn't say that, but they need to be able to recognize skill.
Speaker 1
02:17:02 - 02:17:10
Yeah. And have good intuition. Intuition that's laden with wisdom from all the battles of trying to reduce complexity in code bases.
Speaker 2
02:17:10 - 02:17:27
You know, I took a political approach at comma too that I think is pretty interesting. I think Elon takes the same political approach. You know, Google had no politics. And what ended up happening is the absolute worst kind of politics took over. Comma has an extreme amount of politics, and they're all mine, and no dissent is tolerated.
Speaker 1
02:17:27 - 02:17:29
So it's a dictatorship.
Speaker 2
02:17:29 - 02:17:35
Yep. It's an absolute dictatorship. Right? Elon does the same thing. Now, the thing about my dictatorship is here are my values.
Speaker 1
02:17:37 - 02:17:38
Yeah. It's transparent.
Speaker 2
02:17:38 - 02:17:43
It's transparent. It's a transparent dictatorship. Right? And you can choose to update or, you know, you get free exit. Right?
Speaker 2
02:17:43 - 02:17:45
That's the beauty of companies. If you don't like the dictatorship, quit.
Speaker 1
02:17:47 - 02:17:53
So you mentioned rewrite before or refactor before features.
Speaker 2
02:17:53 - 02:17:53
Mhmm.
Speaker 1
02:17:54 - 02:18:00
If you were to refactor the Twitter code base, what would that look like? And maybe also comment on how difficult is it to refactor?
Speaker 2
02:18:01 - 02:18:19
The main thing I would do is, first of all, identify the pieces and then put tests in between the pieces. Right? So there's all these different... Twitter is a microservices architecture, all these different microservices. And the thing that I was working on there... look, like, you know, "George didn't know any JavaScript."
Speaker 2
02:18:19 - 02:18:38
"He asked how to fix search," blah blah blah. Look, man, the thing is, like, I'm just, you know, I'm upset at the way that this whole thing was portrayed, because it wasn't, like, taken by people, like, honestly. It was taken by people who started out with a bad faith assumption. Yeah. And, yeah, I mean, look, I can't, like...
Speaker 1
02:18:38 - 02:18:44
And you as a programmer, just being transparent out there, actually having, like, fun. And, like, this is what programming should be about.
Speaker 2
02:18:44 - 02:18:55
And, like, I love that Elon gave me this opportunity. Yeah. Like, really, it... And, like, you know, I remember the day I quit, he came on my Twitter Spaces afterward, and we had a conversation. Like, I just... I respect that so much.
Speaker 1
02:18:55 - 02:19:02
Yeah. And it's also inspiring to just engineers and programmers, and just... it's cool. It should be fun. To the people that were hating on it, it's like, oh, man.
Speaker 2
02:19:03 - 02:19:12
It was fun. It was fun. It was stressful. But I felt like, you know, was that like a cool, like, point in history and, like, I hope I was useful and probably kind of wasn't. But, like,
Speaker 1
02:19:12 - 02:19:43
maybe. Oh, you also were 1 of the people that kind of made a strong case to refactor. Yeah. And that's a really interesting thing to raise. Like, maybe that is the right... you know, the timing of that is really interesting. If you look at just the development of Autopilot, you know, going from Mobileye to just, like, more... if you look at the history of semi-autonomous driving in Tesla, it's more and more, you could say, refactoring, or starting from scratch, redeveloping from scratch.
Speaker 2
02:19:43 - 02:19:45
Refactoring all the way down.
Speaker 1
02:19:45 - 02:20:02
And, like, the question is, like, can you do that sooner? Can you maintain product profitability? And, like, what's the right time to do it? How do you do it? You know, on any 1 day, it's like, you don't wanna pull off the Band-Aid. Like, everything works.
Speaker 1
02:20:02 - 02:20:06
It's just, like, little fixed here and there, but maybe started from scratch.
Speaker 2
02:20:06 - 02:20:14
This is the main philosophy of tiny grad. You have never refactored enough. Your code can get smaller, your code can get simpler, your ideas can be more elegant.
Speaker 1
02:20:14 - 02:20:28
But would you consider, you know, say you were, like, running Twitter development teams, engineering teams, would you go as far as, like, a different programming language? Would you go that far?
Speaker 2
02:20:28 - 02:20:38
I mean, the first thing that I would do is build tests. The first thing I would do is get a CI to where people can trust to make changes.
Speaker 1
02:20:40 - 02:20:40
So that if you
Speaker 2
02:20:40 - 02:20:51
Before you touch any code... I would actually say no 1 touches any code. The first thing we do is we test this code base. I mean, this is classic. This is how you approach a legacy code base. This is what any "how do we approach a legacy code base" book will tell you.
Speaker 2
02:20:51 - 02:20:52
So
Speaker 1
02:20:53 - 02:21:03
And then you hope that there's modules that can live on for a while, and then you add new ones, maybe in a different language, or
Speaker 2
02:21:03 - 02:21:05
before we add the new ones, we replace old ones.
Speaker 1
02:21:05 - 02:21:07
Yeah. Yeah. Meaning, like, replace old ones with something simpler.
Speaker 2
02:21:07 - 02:21:24
We look at this, like, this thing that's a hundred thousand lines and we're like, okay, maybe this even made sense in 2010, but now we can replace this with an open source thing. Right? Yeah. And, you know, we look over here. Here's another 50,000 lines Well, actually, you know, we can replace this with 300 lines of Go.
Speaker 2
02:21:24 - 02:21:30
Mhmm. And you know what? I trust that the go actually replaces this thing because all the tests still pass. So step 1 is testing.
Speaker 1
02:21:30 - 02:21:30
Yeah.
Speaker 2
02:21:30 - 02:21:37
And then step 2 is, like, the programming languages that I have to thought. Right? You know, let a whole lot of people compete. Be like, okay. Who wants to rewrite a module?
Speaker 2
02:21:37 - 02:21:49
Whatever language you wanna write it in, just the tests have to pass. And if you figure out how to make the tests pass but break the site, that's We gotta go back to step 1. Step 1 is get tests that you trust in order to make changes in the code base.
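A minimal sketch, in Python, of the "tests before you touch anything" idea he's describing: pin down what the legacy code does today, so any rewrite has to reproduce it before it can be merged. The legacy_search module and the recorded fixture file here are hypothetical stand-ins for illustration, not Twitter's actual code.

import json

from legacy_service import legacy_search  # hypothetical wrapper around the existing code path


def load_recorded_cases(path="search_fixtures.json"):
    # Fixtures captured from real traffic: (query, expected response) pairs.
    with open(path) as f:
        return json.load(f)


def test_search_matches_recorded_behavior():
    # Characterization ("golden master") test: a rewrite in Go, Python, whatever,
    # only gets merged if it reproduces the recorded behavior exactly.
    for case in load_recorded_cases():
        assert legacy_search(case["query"]) == case["expected"]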
Speaker 1
02:21:49 - 02:22:15
I wonder how hard it is, too, because I'm with you on testing and everything. Everything I have, from tests to, like, search to everything with code, is just covered in tests, because it should be very easy to make rapid changes and know that's not gonna break everything. And that's the way to do it. But I wonder how difficult it is to integrate tests into a code base that doesn't have many of them.
Speaker 2
02:22:15 - 02:22:23
So I'll tell you what my plan was at Twitter. It's actually similar to something we use at comma. So at comma we have this thing called process replay. Mhmm. We have a bunch of routes that'll be run through.
Speaker 2
02:22:23 - 02:22:51
So comma is a microservices architecture too. With the microservices in the driving, we have 1 for the cameras, 1 for the sensor, 1 for the planner, 1 for the model. And we have an API which the microservices talk to each other with. We use this custom thing called cereal, which uses ZMQ. Twitter uses Thrift, and then it uses this thing called Finagle, which is a Scala RPC backend, but this doesn't really matter.
Speaker 2
02:22:51 - 02:22:56
The Thrift and Finagle layer was a great place, I thought, to write tests.
Speaker 1
02:22:56 - 02:22:57
Mhmm. Right?
Speaker 2
02:22:57 - 02:23:13
To start building something that looks like process replay. So Twitter had some stuff that looked kind of like this. But it wasn't offline. It was only online. So you could ship, like, a modified version of it, and then you could redirect some of the traffic to your modified version and diff those two.
Speaker 2
02:23:13 - 02:23:19
Mhmm. But it was all online. Like, there was no, like, CI in the traditional sense. I mean, there was some, but, like, it was not full coverage.
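A rough sketch of the offline process-replay idea, in Python, assuming hypothetical service objects and a recorded message log; comma's actual implementation replays recorded routes through each microservice and diffs the outputs against a known-good reference, and is considerably more involved.

def replay(service, messages):
    # Run one microservice deterministically over a recorded log of input messages.
    return [service.handle(msg) for msg in messages]


def process_replay_diff(reference_service, candidate_service, messages):
    # Diff the outputs of the current service against a proposed change.
    # An empty list means the change is behavior-preserving for this log.
    ref = replay(reference_service, messages)
    new = replay(candidate_service, messages)
    return [(i, r, n) for i, (r, n) in enumerate(zip(ref, new)) if r != n]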
Speaker 1
02:23:19 - 02:23:22
So you can't run all of Twitter offline to test something.
Speaker 2
02:23:22 - 02:23:25
Well, then this was another problem. You can't run all of Twitter.
Speaker 1
02:23:26 - 02:23:28
Right? Period. Twitter. 1 person can't.
Speaker 2
02:23:28 - 02:23:37
Twitter runs in 3 data centers, and that's it. Yeah. There's no other place you can run Twitter. Which is, like, "George, you don't understand. This is modern software development." No.
Speaker 2
02:23:37 - 02:23:49
This is bullshit. Like, does it run on my laptop? What do you mean, Twitter can't yeah, okay. Well, I'm not saying you're gonna download the whole database to your laptop, but I'm saying all the middleware and the front end should run on my laptop.
Speaker 2
02:23:49 - 02:23:49
Right?
Speaker 1
02:23:50 - 02:24:00
That sounds really compelling. Yeah. But can that be achieved by a code base that grows over the years? I mean, the 3 data centers didn't have to be. Right?
Speaker 1
02:24:00 - 02:24:03
Because it's they're totally different, like, designs.
Speaker 2
02:24:03 - 02:24:13
The problem is more, like, why did the code base have to grow? What new functionality has been added to compensate for the lines of code that are there?
Speaker 1
02:24:13 - 02:24:21
1 of the ways to explain it is that the incentive for software developers to move up in the company is to add code, especially lines of code.
Speaker 2
02:24:21 - 02:24:26
You know, the incentive for politicians to move up in the political structure is to add laws. Same problem.
Speaker 1
02:24:27 - 02:24:33
Yeah. Yeah. And the flip side is to simplify, simplify, simplify. I mean,
Speaker 2
02:24:33 - 02:24:49
You know what? This is something that I do differently from Elon, with comma, about self driving cars. You know, I hear the new version's gonna come out, and the new version is not gonna be better at first, and it's gonna require a ton of refactors. And I say, okay. Take as long as you need.
Speaker 2
02:24:50 - 02:24:59
Like, you convinced me this architecture is better? Okay. We have to move to it. Even if it's not gonna make the product better tomorrow, the top priority is getting the architecture right.
Speaker 1
02:24:59 - 02:25:17
So what do you think about sort of a thing where the product is online? So how, I guess, would you do a refactor? If you ran engineering at Twitter, would you just do a refactor? How long would it take? What would that mean for the running of the actual service?
Speaker 2
02:25:17 - 02:25:27
You know? And I'm not the right person to run Twitter. I'm just not. And that's the problem. Like like, I don't really know.
Speaker 2
02:25:27 - 02:25:45
I don't really know if that's, you know A common thing that I thought a lot while I was there was, whenever I thought something that was different to what Elon thought, I had to run something in the back of my head reminding myself that Elon is the richest man in the world, and in general, his ideas are better than mine. Mhmm.
Speaker 2
02:25:45 - 02:25:58
Now, there's a few things I think I do understand and know more about. But, like, in general, I don't qualify to run Twitter. No. Or it's not so much that I don't qualify, but, like, I don't think I'd be that good at it. I don't think I'd be good at it.
Speaker 2
02:25:58 - 02:26:17
I don't think I'd really be good at running an engineering organization at scale. I think I could lead a very good refactor of Twitter. And it would take, like, 6 months to a year. And the result to show at the end of it would be that feature development in general takes 10x less time.
Speaker 2
02:26:17 - 02:26:25
10x less man hours. That's what I think I could actually do. Do I think that it's the right decision for the business? Above my pay grade.
Speaker 1
02:26:28 - 02:26:32
Yeah. But a lot of these kinds of decisions are above everybody's pay grade.
Speaker 2
02:26:32 - 02:26:44
I don't wanna be a manager. I don't wanna do that. I just like if you really forced me to, yeah, it would maybe make me upset. If I had to make those decisions, I don't wanna
Speaker 1
02:26:45 - 02:26:58
Yeah. But a refactor is so compelling. If this is to become something much bigger than what Twitter was, it feels like a refactor has to be coming at some point.
Speaker 2
02:26:58 - 02:27:07
George, George, every junior software engineer wants to come in and refactor the whole code base. Okay. Like, that's, like, your opinion, man.
Speaker 1
02:27:07 - 02:27:10
Yeah. It doesn't, you know, sometimes they're right.
Speaker 2
02:27:11 - 02:27:24
Well, like, whether they're right or not, it's definitely not for that reason. Right? It's definitely not a question of engineering prowess. It is a question of maybe what the priorities are for the company. And I did get more intelligent, like, feedback from people, I think, in good faith, like saying that.
Speaker 2
02:27:25 - 02:27:38
But actually, from Elon and, like, you know, from Elon, sort of, like people were like, well, you know, a stop-the-world refactor might be great for engineering, but, you know, we have a business to run. And, hey, above my pay grade.
Speaker 1
02:27:38 - 02:27:46
What do you think about Elon as an engineering leader, having experienced him in the most chaotic of spaces, I would say?
Speaker 2
02:27:51 - 02:27:58
My respect for him is unchanged. And I did have to think a lot more deeply about some of the decisions he's forced to make.
Speaker 1
02:27:59 - 02:28:03
About the tensions within those, the trade offs within those decisions?
Speaker 2
02:28:04 - 02:28:12
About, like, a whole, like, matrix coming at him. I think that's Andrej Karpathy's word for it. Sorry to borrow it.
Speaker 1
02:28:12 - 02:28:14
Also, bigger than engineering. Just everything.
Speaker 2
02:28:15 - 02:28:26
Yeah. Like like the war on the woke. Yeah. Like, it yeah. It's just it's just man and, like, he doesn't have to do this, you know?
Speaker 2
02:28:26 - 02:28:36
He doesn't have to. He could go, like, Parag, and go chill at the Four Seasons Maui, you know? But See, 1 person I respect and 1 person I don't.
Speaker 1
02:28:36 - 02:28:42
So his heart is in the right place, fighting in this case for this ideal of the freedom of expression.
Speaker 2
02:28:43 - 02:28:54
But I wouldn't define the ideal so simply. I think you can't define the ideal as any more than just saying, Elon's idea of a good world. The freedom of expression is,
Speaker 1
02:28:54 - 02:28:58
but to you, still, the downside of that is the monarchy.
Speaker 2
02:28:59 - 02:29:09
Yeah. I mean, monarchy has problems. Right? But, I mean, would I trade, right now, the current oligarchy which runs America for a monarchy?
Speaker 2
02:29:09 - 02:29:12
Yeah. I would. Sure. For the Elon Monarchy? Yeah.
Speaker 2
02:29:12 - 02:29:17
You know why? Because power would cost 1 cent a kilowatt hour. A tenth of a cent a kilowatt hour?
Speaker 1
02:29:18 - 02:29:19
What do you mean?
Speaker 2
02:29:20 - 02:29:28
Right now, I pay about 20 cents a kilowatt hour for electricity in San Diego. That's, like, the same price you paid in 1980. What the hell?
Speaker 1
02:29:28 - 02:29:31
So you you would see a lot of innovation -- Yeah. -- with Elon.
Speaker 2
02:29:31 - 02:29:32
Maybe you'd have maybe have some hyper loops.
Speaker 1
02:29:33 - 02:29:33
Yeah.
Speaker 2
02:29:33 - 02:29:45
Right? And I'm willing to make that trade off. Right? I'm willing to make and this is why, you know, people think that, like, dictators take power through some, like, untoward mechanism. Sometimes they do, but usually it's because the people want them.
Speaker 2
02:29:45 - 02:29:53
And the downsides of a dictatorship I feel like we've gotten to a point now with the oligarchy where, yeah, I would prefer the dictator.
Speaker 1
02:29:56 - 02:29:58
What'd you think about Scala as a programming language?
Speaker 2
02:30:01 - 02:30:07
I liked it more than I thought. I did the tutorials. I'm very new to it. Like, it would take me 6 months to be able to write, like, good Scala.
Speaker 1
02:30:07 - 02:30:10
I mean, what did you learn about learning a new programming language from that?
Speaker 2
02:30:10 - 02:30:25
I love doing, like, new programming language tutorials and doing them. I did all this for Rust. It keeps some of its upsetting JVM roots. But it is much nicer. In fact, I almost don't know why Kotlin took off and not Scala.
Speaker 2
02:30:25 - 02:30:44
Mhmm. I think Scala has some beauty that Kotlin lacked. Whereas Kotlin felt a lot more I mean, it was almost like I don't even know if it actually was a response to Swift, but that's kinda what it felt like. Like, Kotlin looks more like Swift, and Scala looks more like, oh, a functional-first language. More like an OCaml or Haskell.
Speaker 1
02:30:44 - 02:30:56
Let's actually just explore. We touched on it a little bit, but just the science and the art of programming. For you personally, how much of your programming is done with GPT currently? None. None.
Speaker 2
02:30:56 - 02:30:56
I don't think so.
Speaker 1
02:30:57 - 02:31:00
Because you prioritize simplicity so much.
Speaker 2
02:31:00 - 02:31:12
Yeah. I find that a lot of it is noise. I do use VS Code. And I do like some amount of autocomplete. I do like, like, a very, like, feels-like-rules-based autocomplete.
Speaker 2
02:31:13 - 02:31:18
Like, an autocomplete that's going to complete the variable name for me. So I'm just gonna type it, I can just press tab. Right? That's nice.
Speaker 2
02:31:18 - 02:31:28
But I don't want an autocomplete I have to fight. I hate when autocomplete, when I type the word "for", and it, like, puts, like, 2 parentheses and 2 semicolons and 2 braces. I'm, like, oh, man.
Speaker 1
02:31:28 - 02:31:50
Okay. But what let me with VS Code and GPT, with Codex. You can kinda brainstorm. I find I'm, like, probably the same as you, but I like that it generates code and you basically disagree with it and write something simpler. But to me, that somehow is, like, inspiring.
Speaker 1
02:31:50 - 02:31:59
It makes me feel good. It also gamifies the simplification process because I'm, like, oh, yeah, a dumb AI system, you think this is the way to do it? I have a simpler thing here.
Speaker 2
02:31:59 - 02:32:08
It just constantly reminds me of, like, bad stuff. I mean, I tried the same thing with rap. Right? I tried the same thing with rap, and I think I'm a much better programmer than rapper. But, like, I even tried, I was like, okay.
Speaker 2
02:32:08 - 02:32:19
Can we get some inspiration from these things for some rap lyrics? And I just found that it would go back to the most, like, cringey tropes and dumb rhyme schemes. And I'm like, yeah, this is what the code looks like too.
Speaker 1
02:32:20 - 02:32:41
I think you'd probably have a different threshold for cringe code. You'd probably hate cringe code. So it's for you And boilerplate is a part of code. Some of it. Yeah. And some of it is just, like, faster lookup.
Speaker 1
02:32:42 - 02:33:02
Because I don't know about you, but I don't remember everything. Like, I'm offloading so much of my memory about, like, yeah, different functions, library functions, all that kind of stuff. Like, GPT is very fast at standard stuff. At, like, standard library stuff, basic stuff that everybody uses.
Speaker 2
02:33:03 - 02:33:19
Yeah. I think that I don't know. I mean, there's just a little of this in Python. And maybe if I was coding more in other languages, I would consider it more. But I feel like Python already does such a good job of removing any boilerplate.
Speaker 1
02:33:20 - 02:33:21
That's true.
Speaker 2
02:33:21 - 02:33:23
It's the closest thing you can get to pseudocode. Right?
Speaker 1
02:33:23 - 02:33:26
Yeah. That's true. That's true.
Speaker 2
02:33:26 - 02:33:31
And, like, yeah, sure. If I, like, Yeah. Great, GPT. Thanks for reminding me to free my variables.
Speaker 2
02:33:31 - 02:33:38
Unfortunately, you didn't really recognize the scope correctly and you can't free that 1, but, like, you put the frees there and, like, I get it.
Speaker 1
02:33:39 - 02:33:58
Fiverr. Whenever I've used Fiverr for certain things in design or whatever, Yeah. It's always you come back. I think my experience with Fiverr is probably closer to your experience with programming with GPT. It's like, you're just frustrated and feel worse about the whole process of design and art and whatever I used Fiverr for.
Speaker 1
02:34:01 - 02:34:24
Still, I just feel like later versions of GPT I'm using GPT as much as possible to just learn the dynamics of it, like, these early versions, because it feels like in the future you'll be using it more and more. And so, like, I don't want to be like for the same reason, I gave away all my books and switched to Kindle. Because, like, alright.
Speaker 1
02:34:25 - 02:34:40
How long are we gonna have paper books? Like, 30 years from now? Like, I wanna learn to be reading on Kindle even though I don't enjoy it as much, and you learn to enjoy it more. In the same way, I switched from let me just pause. I switched from Emacs to VS Code.
Speaker 2
02:34:40 - 02:34:43
Yeah. I switched from Vim to VS Code. I think that says a lot. What?
Speaker 1
02:34:43 - 02:34:53
Gosh. It's tough. And that Vim to VS Code is even tougher, because Emacs is, like, old, like, more outdated. Feels like it. The community is more outdated.
Speaker 1
02:34:54 - 02:34:56
VIM is, like, pretty vibrant still. So I've
Speaker 2
02:34:56 - 02:34:58
never used any of the plugins I still don't use.
Speaker 1
02:34:58 - 02:35:02
That's what I I looked at myself in the mirror. I'm, like, yeah, you wrote some stuff in this Yeah.
Speaker 2
02:35:02 - 02:35:08
No. But I never used any of the plugins in Vim either. I had the most vanilla Vim. I had a syntax highlighter. I didn't even have autocomplete.
Speaker 2
02:35:08 - 02:35:25
Like, these things I feel like help you so marginally that, like and now okay. Now, VS Code's autocomplete has gotten good enough. Like, okay, I don't have to set it up. Like, you just go into any code base and autocomplete's right 90 percent of the time. Okay.
Speaker 2
02:35:25 - 02:35:39
Cool. I'll take it. Alright. So I don't think I'm gonna have a problem at all adapting to the tools once they're good. But like the real thing that I want is not something that, like, tab completes my code and gives me ideas.
Speaker 2
02:35:39 - 02:35:52
The real thing that I want is a very intelligent pair programmer that comes up with a little pop up saying, hey, you wrote a bug on line 14, and here's what it is. Yeah. Now I like that. You know what does a good job of this? mypy.
Speaker 2
02:35:53 - 02:36:04
I love mypy. mypy is this fancy type checker for Python. Yeah. And actually, I tried, like, Microsoft released 1 too, and it was, like, 60 percent false positives. mypy is, like, 5 percent false positives.
Speaker 2
02:36:04 - 02:36:10
95 percent of the time it recognizes, I didn't really think about that typing interaction correctly. Thank you, mypy.
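As an illustration of the kind of thing a static checker catches, here's a tiny hypothetical snippet; running mypy over it flags the second call before the code ever executes, even though it would technically "run":

def total_length(words: list[str]) -> int:
    return sum(len(w) for w in words)

total_length(["tiny", "grad"])  # fine
total_length("tinygrad")        # mypy flags this: a str is not a list[str], yet it runs and returns 8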
Speaker 1
02:36:10 - 02:36:16
So you like type hinting? You like you like pushing the language towards towards being a typed language?
Speaker 2
02:36:16 - 02:36:22
Oh, yeah. Absolutely. I think I think optional typing is is is great. I mean, look, I think that, like, it's like a meet in the middle. Right?
Speaker 2
02:36:22 - 02:36:26
Like, Python has this optional typing and, like, C++ has auto.
Speaker 1
02:36:27 - 02:36:29
C++ also allows you to take a step back.
Speaker 2
02:36:29 - 02:36:41
Well, C++ would have you brutally type out std::string::iterator. Right? Now I can type auto, which is nice. And then Python used to just have "a". Well, what type is a? So today,
Speaker 1
02:36:41 - 02:36:41
Yeah.
Speaker 2
02:36:42 - 02:36:46
a: str. Okay. It's a string. Cool. Yeah.
Speaker 2
02:36:46 - 02:36:53
I wish there were I wish there was a way, like, a simple way in Python to, like, turn on a mode which would enforce the types.
Speaker 1
02:36:54 - 02:36:56
Yeah. Like, give a warning when there's no type of something like this.
Speaker 2
02:36:56 - 02:37:06
Well, no. To give a warning where like, mypy is a static type checker, but I'm asking just for a runtime type checker. Like, there's, like, ways to, like, hack this in, but I wish it was just, like, a flag, like, python3 -T.
Speaker 1
02:37:06 - 02:37:07
see. Yeah. I see.
Speaker 2
02:37:07 - 02:37:08
Enforce the types at runtime.
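There's no such flag in Python today, but here's a minimal sketch of what runtime enforcement of annotations could look like with a decorator; third-party libraries like typeguard do this far more thoroughly, and this toy version only handles plain classes, not generics like list[str].

import inspect
from functools import wraps
from typing import get_type_hints


def enforce_types(fn):
    # Check call arguments against their annotations at call time.
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)

    @wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name}={value!r} is not a {expected.__name__}")
        return fn(*args, **kwargs)

    return wrapper


@enforce_types
def scale(x: float, factor: int) -> float:
    return x * factor

scale(2.0, 3)    # ok
scale(2.0, "3")  # raises TypeError at call time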
Speaker 1
02:37:08 - 02:37:15
Yeah. I feel like that makes you a better programmer. That's the kind of test. Right? That the type remains the same.
Speaker 2
02:37:15 - 02:37:27
Well, no. That doesn't like, it messes the types up. But again, like, mypy is getting really good, and I love it. And I can't wait for some of these tools to become AI powered. I'm like, I want AIs reading my code and giving me feedback.
Speaker 2
02:37:27 - 02:37:32
I don't want AIs writing half-assed autocomplete stuff for me.
Speaker 1
02:37:32 - 02:37:51
I wonder if you can now take GPT and give it code that you wrote for a function and say, how can I make this simpler and have it accomplish the same thing? I think you'll get some good ideas on some code. Maybe not the code you write for tinygrad type of code, because that requires so much design thinking, but, like, other kinds of code.
Speaker 2
02:37:51 - 02:38:17
I don't know. I downloaded that plugin, maybe, like, 2 months ago. I tried it again and found the same. Look, I don't doubt that these models are going to first become useful to me, then be as good as me, and then surpass me. But from what I've seen today, it's, like, someone, you know, occasionally taking over my keyboard that I hired from Fiverr.
Speaker 1
02:38:17 - 02:38:24
Yeah. I have ideas about how to use it as a coder. Basically, a better debugger. That is really interesting. I mean, I
Speaker 2
02:38:24 - 02:38:27
But it's not a better debugger. Yes. I would love a better debugger.
Speaker 1
02:38:27 - 02:38:30
Yeah. It's not yet. Yeah. But it feels like it's not too far.
Speaker 2
02:38:30 - 02:38:39
Yeah. 1 of my coworkers uses them for print statements. Like, every time he has to, like, just, like, when he needs The only thing I really let it write is, like, okay, I just wanna write the thing to, like, print the state out right now.
Speaker 1
02:38:40 - 02:38:50
Oh, it's definitely much faster at print statements. Yeah. Yeah. I see myself using that a lot, just, like, because it figures out the rest of the function. It's just, like, oh, yeah, print everything.
Speaker 2
02:38:50 - 02:38:54
Print everything. Right? And the yeah. Like, if you want a pretty printer, maybe. I'm like, yeah.
Speaker 2
02:38:54 - 02:38:57
You know what? I think, like, I think in 2 years, I'm gonna start using these plugins.
Speaker 1
02:38:58 - 02:38:58
Yeah.
Speaker 2
02:38:58 - 02:39:05
A little bit. And then in 5 years, I'm gonna be heavily relying on some AI augmented flow. And then in 10 years,
Speaker 1
02:39:05 - 02:39:17
do you think you'll ever get to a hundred percent where the like, what's the role of the human that it converges to as a programmer? So you think it's all generated?
Speaker 2
02:39:17 - 02:39:23
Our niche becomes oh, I think it's all over for humans in general. It's not just programming. It's everything.
Speaker 1
02:39:23 - 02:39:24
So the niche becomes whoa.
Speaker 2
02:39:24 - 02:39:34
Our niche becomes smaller and smaller and smaller. In fact, I'll tell you what the last niche of humanity is gonna be. Yeah. There's a great book I recommended The Metamorphosis of Prime Intellect last time -- Mhmm.
Speaker 1
02:39:34 - 02:39:34
--
Speaker 2
02:39:34 - 02:39:45
there is a sequel called a Casino Odyssey in cyberspace. Mhmm. And I don't wanna give away the ending of this, but it tells you what the last remaining human currency is, and I agree with that.
Speaker 1
02:39:47 - 02:39:55
We'll leave that as a cliffhanger. So no more programmers left. That's where we're going.
Speaker 2
02:39:55 - 02:40:06
Unless you want handmade code, maybe they'll sell it on Etsy. This is handwritten code. Doesn't have that machine polish to it. It has those slight imperfections that would only be written by a person.
Speaker 1
02:40:07 - 02:40:14
I wonder how far away we are from that. I mean, there's some aspect too. You know, on Instagram, your title is listed as prompt engineer.
Speaker 2
02:40:14 - 02:40:18
Right? Thank you for noticing.
Speaker 1
02:40:19 - 02:40:34
I don't know if it's ironic or not, or sarcastic or not. What do you think of prompt engineering as a scientific and engineering discipline, or maybe an art form?
Speaker 2
02:40:34 - 02:40:52
You know what? I started comma 6 years ago, and I started the tiny corp a month ago. So much has changed. Like, I'm now thinking I started, like, going through, like, similar processes to comma for starting a company. I'm like, oh, I'm gonna get an office in San Diego.
Speaker 2
02:40:52 - 02:40:59
I'm gonna bring people here. I don't think so. I think I'm actually gonna do remote. Right? George, you're gonna do a remote.
Speaker 2
02:40:59 - 02:41:05
You hate remote. Yeah. But I'm not gonna do job interviews. The only way you're gonna get a job is if you contribute to the GitHub. Right?
Speaker 2
02:41:05 - 02:41:16
And then, like, it it like, like, interacting through GitHub? Like, like, GitHub being the real, like, project management software for your company, and the thing pretty much just is a GitHub repo.
Speaker 1
02:41:16 - 02:41:17
Mhmm.
Speaker 2
02:41:17 - 02:41:32
It's, like, showing me kind of the future of, okay. So a lot of times I'll go on a Discord, or the tinygrad Discord, and I'll throw out some random, like, hey, you know, can you change instead of having log and exp as LLOps, change it to log2 and exp2? Mhmm. It's a pretty small change.
Speaker 2
02:41:32 - 02:41:46
You can just use, like, the change of base formula. Mhmm. That's the kind of task that I can see an AI being able to do in a few years. Like, in a few years, I could see myself describing that, and then within 30 seconds, a pull request is up that does it.
Speaker 2
02:41:46 - 02:41:59
Mhmm. And it passes my CI and I merge it. Right? So I really started thinking about, like, well, what is the future of, like, jobs? How many AIs can I employ at my company? As soon as we get the first tiny box up, I'm gonna stand up a 65B LLaMA.
Speaker 2
02:41:59 - 02:42:00
In the discord.
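The change-of-base trick he's referring to is simple enough to sketch in plain Python; this is just the math, not tinygrad's actual LLOp code: if a backend only provides log2 and exp2, log and exp fall out as scaled versions.

import math

LOG2_E = math.log2(math.e)  # log2(e) = 1 / ln(2)

def log(x: float) -> float:
    # ln(x) = log2(x) / log2(e)
    return math.log2(x) / LOG2_E

def exp(x: float) -> float:
    # e**x = 2**(x * log2(e))
    return 2.0 ** (x * LOG2_E)

assert abs(log(10.0) - math.log(10.0)) < 1e-12
assert abs(exp(3.0) - math.exp(3.0)) < 1e-12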
Speaker 1
02:42:00 - 02:42:00
Mhmm.
Speaker 2
02:42:00 - 02:42:04
And it's like, yeah. Here's the tiny box. He's just like he's chilling with us.
Speaker 1
02:42:04 - 02:42:13
Basically, I mean, like you said, the niche is, like most human jobs will eventually be replaced with prompt engineering?
Speaker 2
02:42:14 - 02:42:28
Well, prompt engineering kind of is this like, as you, like, move up the stack. Right? Like, okay, there used to be humans actually doing arithmetic by hand. There used to be, like, big farms of people doing pluses and stuff. Right?
Speaker 2
02:42:28 - 02:42:34
And then you have, like, spreadsheets. Right? And then, okay. The spreadsheet can do the plus for me. And then you have, like, macros.
Speaker 2
02:42:35 - 02:42:46
Right? And then you have, like, things that basically just are spreadsheets under the hood. Right? Like, accounting software. As we move further up the abstraction, what's at the top of the abstraction stack?
Speaker 2
02:42:46 - 02:42:47
Well, prompt engineer.
Speaker 1
02:42:48 - 02:42:48
Yeah.
Speaker 2
02:42:48 - 02:42:59
Right? What what is what is the last thing if you think about, like, humans wanting to keep control? Well, what am I really in the company but a prompt engineer? Right?
Speaker 1
02:42:59 - 02:43:03
Isn't there a certain point where the AI will be better writing prompts.
Speaker 2
02:43:04 - 02:43:16
Yeah. But you see the problem with the AI writing prompts. A definition that I always liked of AI was, AI is the do-what-I-mean machine. Right? AI is not the, like, the computer is so pedantic.
Speaker 2
02:43:16 - 02:43:25
It does what you say. But you want a do-what-I-mean machine. Yeah. Right? You want the machine where you say, you know, get my grandmother out of the burning house.
Speaker 2
02:43:25 - 02:43:33
It, like, reasonably takes your grandmother, puts her on the ground, not lifts her a thousand feet above the burning house and lets her fall. Right? But, yeah, I'll just calculate that.
Speaker 1
02:43:37 - 02:43:42
But it's not going to find the meaning. I mean, to do what I mean, it has to figure stuff
Speaker 2
02:43:42 - 02:43:43
out. Sure.
Speaker 1
02:43:43 - 02:43:49
And the thing you'll maybe ask it to do is run government for me.
Speaker 2
02:43:49 - 02:44:07
Oh, and do-what-I-mean very much comes down to, how aligned is that AI with you? Of course, when you talk to an AI that's made by a big company in the cloud, the AI fundamentally is aligned to them, not to you. Yeah. And that's why you have to buy a tiny box, so you make sure the AI stays aligned to you.
Speaker 2
02:44:07 - 02:44:17
Every time that they start to pass, you know, AI regulation or GPU regulation, I'm gonna see sales of tiny boxes spike. Right? It's gonna be like guns. Right? Every time they talk about gun regulation, Boom.
Speaker 2
02:44:18 - 02:44:18
Gun sales.
Speaker 1
02:44:18 - 02:44:24
So in the space of AI, you're an anarchist. An anarchism espouser. A believer.
Speaker 2
02:44:24 - 02:44:37
I'm an informational anarchist. Yes. I'm an informational anarchist and a physical statist. I do not think anarchy in the physical world is very good, because I exist in the physical world. But I think we can construct this virtual world.
Speaker 2
02:44:37 - 02:44:44
Where anarchy can't hurt you. Right? I love that Tyler, the Creator tweet. Cyberbullying, is it real, man? Have you tried
Speaker 2
02:44:44 - 02:44:48
turning off the screen? Close your eyes. Like Yeah.
Speaker 1
02:44:50 - 02:45:07
Well, how do you prevent the AI from basically replacing all human prompt engineers? Where it's like a self like, where nobody is the prompt engineer anymore. So autonomy, greater and greater autonomy, until it's full autonomy.
Speaker 2
02:45:07 - 02:45:07
Yeah.
Speaker 1
02:45:08 - 02:45:14
And that's just where it's headed. Because 1 person is gonna say, run everything for me.
Speaker 2
02:45:15 - 02:45:39
You see? I look at potential futures, and as long as the AIs go on to create a vibrant civilization with diversity and complexity across the universe? More power to them. I'll die. If the AIs go on to actually, like, turn the world into paper clips and then they die out themselves, well, that's horrific, and we don't want that to happen.
Speaker 2
02:45:40 - 02:45:51
So this is what I mean about like robustness. I trust robust machines. The current AIs are so not robust. Like, this comes back to the idea that we've never made a machine that can self replicate. Right?
Speaker 2
02:45:51 - 02:46:08
But when we have if the machines are truly robust and there is 1 prompt engineer left in the world, hope you're doing good, man. Hope you believe in God. Like, you know, go with God, and go forth and conquer the universe.
Speaker 1
02:46:08 - 02:46:17
When you mentioned because I I talked to Mark about faith in God, and you said you were impressed by that. What's your own belief in God? And how does that affect your work?
Speaker 2
02:46:18 - 02:46:33
You know, when I was younger I never really considered, I guess, my parents' idea. I was raised kind of atheist. I never really considered how absolutely, like, silly atheism is. Because, like, I create worlds. Every, like, game creator, like, how are you an atheist, bro?
Speaker 2
02:46:34 - 02:46:39
You create worlds. What's up with you? "No 1 created our world, man. That's different. Haven't you heard about, like, the Big Bang and stuff?"
Speaker 2
02:46:39 - 02:46:52
Yeah. I mean, what's the myth origin story in Skyrim? I'm sure there's, like, some part of it in Skyrim, but it's not like if you ask the creators Like, the Big Bang is in-universe. Right? I'm sure they have some Big Bang notion in Skyrim.
Speaker 2
02:46:52 - 02:47:02
Right? But, obviously, it's not at all how Skyrim was actually created. It was created by a bunch of programmers in a room. Right? So, like, you know, it just struck me 1 day how just silly atheism is.
Speaker 2
02:47:02 - 02:47:06
Right? Like, of course, we were created by God. It's the most obvious thing.
Speaker 1
02:47:07 - 02:47:20
Yeah. That's that's such a nice way to put it. Like, we're we're such powerful creators ourselves it it's silly not to concede that there's creators even more powerful than us.
Speaker 2
02:47:20 - 02:47:34
Yeah. And then, like, I also just like, I I like that notion. That notion gives me a lot of And I guess you can talk about it maybe what it gives a lot of religious people is kinda like, it just gives me comfort. It's like, you know what? If we mess it all up and we die out, Yeah.
Speaker 1
02:47:34 - 02:47:37
And the same the same way that a video game kinda has comfort in
Speaker 2
02:47:37 - 02:47:38
it. God will
Speaker 1
02:47:38 - 02:47:52
try again. Or there's balance. Like, somebody figured out a balanced view of it. Like, how to like so it's it all makes sense in the end. Like, a a video game is usually not gonna have crazy crazy stuff.
Speaker 2
02:47:52 - 02:48:05
You know, people will come up with, like, well, yeah, but, like, man, who created God? Like, that's God's problem. No. Like, I'm not gonna think this is what you're asking me, what if God
Speaker 1
02:48:05 - 02:48:09
-- I'm just living, God. -- I'm just an NPC living in this game.
Speaker 2
02:48:09 - 02:48:14
I mean, to be Like, if God didn't believe in God, he'd be as, you know, silly as the atheists here.
Speaker 1
02:48:14 - 02:48:22
What do you think is the greatest computer game of all time. Do you do you have any time to play games anymore? Have you played Diablo 4?
Speaker 2
02:48:23 - 02:48:24
I have not played Diablo 4.
Speaker 1
02:48:24 - 02:48:29
I will be doing that shortly. I have to. Alright. There's just so much history with 1, 2, and 3.
Speaker 2
02:48:29 - 02:48:41
Do you know what? I'm gonna say World of Warcraft. And it's not that the game is such a great game. It's not. It's that I remember,
Speaker 2
02:48:41 - 02:48:56
in 2005 when it came out, how it opened my mind to ideas. It opened my mind to, like Like, this whole world we've created. Right? There's almost been nothing like it since.
Speaker 2
02:48:56 - 02:49:09
Like, you can look at MMOs today, and I think they all have lower user bases than World of Warcraft. Like, Eve Online is kinda cool. But but to think that, like, like, everyone know, you know, people are always like, look at the Apple headset, like
Speaker 1
02:49:09 - 02:49:10
-- Mhmm. --
Speaker 2
02:49:10 - 02:49:25
what do people want in this VR? No 1 knows what they want. I wanna play a WoW in it. Mhmm. And, like, that So I'm gonna say World of Warcraft, and I'm hoping that, like, games can get out of this whole mobile gaming dopamine pump thing. And, like,
Speaker 1
02:49:25 - 02:49:26
Create worlds.
Speaker 2
02:49:26 - 02:49:27
Create worlds. Yeah.
Speaker 1
02:49:27 - 02:49:32
That that that and worlds that captivate a very large fraction of the human population.
Speaker 2
02:49:32 - 02:49:35
Yeah. And I I think it'll come back. I believe.
Speaker 1
02:49:35 - 02:49:38
But MMO, like, really, really pull you in.
Speaker 2
02:49:38 - 02:49:45
Games do a good job. I mean, okay. 2 other games that I think are, you know, very noteworthy are Skyrim and GTA.
Speaker 1
02:49:46 - 02:49:52
Skyrim. Yeah. That's probably number 1 for me. GTA. And what is it about GTA?
Speaker 1
02:49:53 - 02:50:00
GTA is really I mean, I guess, GTA is real life. I know there's prostitutes and guns.
Speaker 2
02:50:00 - 02:50:02
Those exist in real life too.
Speaker 1
02:50:03 - 02:50:07
Yes. I know. But it's how I imagine your life to be, actually.
Speaker 2
02:50:07 - 02:50:09
I wish it was that cool.
Speaker 1
02:50:09 - 02:50:18
Yeah. Yeah. I guess that's you know, because there's The Sims. Right? That's also a game but it's a gamified version of life.
Speaker 1
02:50:18 - 02:50:31
But it also I would love a combination of The Sims and GTA. So more freedom, more violence, more randomness, but also with, like, the ability to have a career and family and this kind of stuff.
Speaker 2
02:50:31 - 02:50:37
What I'm really excited about in in games is, like, once we start getting intelligent AI to interact with.
Speaker 1
02:50:37 - 02:50:38
Oh, yeah.
Speaker 2
02:50:38 - 02:50:40
Right? Like, the NPCs and games have never been.
Speaker 1
02:50:41 - 02:50:44
But conversationally in every way.
Speaker 2
02:50:45 - 02:50:51
In like in like every way. Like when you are actually building a world and a world imbued with intelligence.
Speaker 1
02:50:52 - 02:50:53
Oh, yeah. Right?
Speaker 2
02:50:53 - 02:51:01
And it's just hard. Like, there's just, like, you know, running World of Warcraft. Like, you're limited by the fact that you're running on a Pentium 4, you know? How much intelligence can you run? Remember how many flops you had?
Speaker 2
02:51:01 - 02:51:12
Right? But now, when I'm running a game on a hundred petaflop machine, that's 5 people. I'm trying to make this a thing. 20 petaflops of compute is 1 person of compute. I'm trying to make that a unit.
Speaker 2
02:51:12 - 02:51:14
20 petaflops
Speaker 1
02:51:14 - 02:51:14
--
Speaker 2
02:51:14 - 02:51:15
Yeah.
Speaker 1
02:51:15 - 02:51:15
-- is 1 person.
Speaker 2
02:51:15 - 02:51:17
1 person. Yeah.
Speaker 1
02:51:17 - 02:51:17
1 person flop.
Speaker 2
02:51:17 - 02:51:24
It's like a horsepower. But what's a horsepower? How powerful is the horse? What's a person of compute? Well, now you know, it's 20 petaflops.
Speaker 2
02:51:24 - 02:51:24
Mhmm.
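Taking his proposed unit at face value, the arithmetic behind "a hundred petaflop machine is 5 people" is just:

PFLOPS_PER_PERSON = 20  # George's proposed unit: 20 petaflops of compute is roughly 1 "person of compute"

def persons_of_compute(pflops: float) -> float:
    return pflops / PFLOPS_PER_PERSON

print(persons_of_compute(100))  # 5.0 -- a hundred-petaflop machine is about 5 people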
Speaker 1
02:51:24 - 02:51:31
I got it. That's interesting. VR also has I mean, in terms of creating worlds.
Speaker 2
02:51:31 - 02:51:42
You know what? The Quest 2? I put it on, and I can't believe the first thing they show me is a bunch of scrolling clouds and a Facebook login screen. Yeah.
Speaker 2
02:51:42 - 02:51:48
You had the ability to bring me into a world. Yeah. And what did you give me? A pop up. Right?
Speaker 2
02:51:48 - 02:51:49
Like,
Speaker 1
02:51:49 - 02:51:49
Well, I
Speaker 2
02:51:49 - 02:51:58
and this is why you're not cool, Mark Zuckerberg. But you could be cool. Just make sure on the Quest 3, you don't put me into clouds and a Facebook login screen. Bring me to a world.
Speaker 1
02:51:58 - 02:52:03
I just tried quest 3. It was it was awesome. But hear that guys. I agree with that. So It was just
Speaker 2
02:52:05 - 02:52:06
so I you
Speaker 1
02:52:06 - 02:52:21
know what? Because I mean, the beginning what is it? Todd Howard said this about the design of the beginning of the games he creates. Like, the beginning is also so important. I recently played Zelda for the first time, Zelda: Breath of the Wild, the previous 1. Yeah.
Speaker 1
02:52:21 - 02:52:39
And, like, it's very quickly you come out of this like, within, like, 10 seconds, you come out of, like, a cave type place and it's, like, this wall -- Yep. -- it opens up. It's, like, and you it, like, it pulls you in. You forget whatever troubles I was having, whatever, like, I
Speaker 2
02:52:39 - 02:52:41
gotta play that from the beginning. I played it for, like, an hour at a friend's house.
Speaker 1
02:52:42 - 02:52:54
No. At the beginning, they did it really well, the expansiveness of that space, the peacefulness of that place, the music. I mean, so much of that is creating that world and pulling you right in.
Speaker 2
02:52:54 - 02:52:57
I'm gonna go buy a Switch. I can go today and buy a Switch.
Speaker 1
02:52:57 - 02:53:14
You should. Well, the new 1 came out. I haven't played that yet, but Diablo 4 or something. I mean, there's sentimentality also, but Something about VR really is incredible, and the new Quest 3 is mixed reality. And I got a chance to try that.
Speaker 1
02:53:14 - 02:53:19
So it's augmented reality. And it does video games really, really well.
Speaker 2
02:53:19 - 02:53:20
Is it passthrough with cameras?
Speaker 1
02:53:20 - 02:53:22
Cameras. That's cameras. Yeah.
Speaker 2
02:53:22 - 02:53:24
The Apple 1 is that 1 passed through with cameras?
Speaker 1
02:53:24 - 02:53:25
I don't know.
Speaker 2
02:53:25 - 02:53:25
Yeah. I
Speaker 1
02:53:25 - 02:53:32
don't know how real it is. I don't know anything. You know, coming down. January. Is it January or is it some point?
Speaker 2
02:53:32 - 02:53:36
Some point. No. Maybe not January. Maybe that's my optimism. But Apple, I will buy it.
Speaker 2
02:53:36 - 02:53:40
I don't care. If it's expensive and does nothing, I will buy it. Will support this future endeavor.
Speaker 1
02:53:40 - 02:53:48
You're the meme. Oh, yes. I support competition. It seemed like Quest was, like, the only people doing it, and this is great that they're, like,
Speaker 2
02:53:50 - 02:53:58
You know what? And this is another place I'll give some more respect to Mark Zuckerberg. The 2 companies that have endured through technology are Apple and Microsoft. Mhmm.
Speaker 2
02:53:58 - 02:54:06
Right? And what do they make? Computers and business services. Right? All the memes, social apps, they all come and go.
Speaker 2
02:54:06 - 02:54:09
Mhmm. But you wanna endure? Build hardware.
Speaker 1
02:54:10 - 02:54:27
Yeah. And it does a really interesting job. I mean, maybe I'm new with this, but it's a 500 dollar headset, the Quest 3. And just having creatures run around the space, like, our space right here. To me okay.
Speaker 1
02:54:27 - 02:54:34
This is very, like, boomer statement, but it added windows to the place. The
Speaker 2
02:54:34 - 02:54:36
I heard about the Aquarium. Yeah.
Speaker 1
02:54:36 - 02:54:47
Yeah. Aquarium. But in this case, it was a zombie game, whatever. It doesn't matter. But just like it it modifies the space in a way where I can't it really feels like a window and you can look out.
Speaker 1
02:54:48 - 02:55:04
It's pretty cool. Like, I was just it's it's like a zombie game. They're running at me, whatever. But what I was enjoying is the fact that there's, like, a window and and they're stepping on objects in this space That was a different kind of escape also because you can see the other humans. So it's integrated with the other humans.
Speaker 1
02:55:04 - 02:55:05
It's really
Speaker 2
02:55:06 - 02:55:14
and that's why it's more important than ever that the AIs running on those systems are aligned with you. Oh, yeah. They're gonna augment your entire world.
Speaker 1
02:55:14 - 02:55:33
Oh, yeah. And that those AIs have a I mean, you think about all the dark stuff, like, sexual stuff. Like, if those AIs threaten me, that could be haunting. That can Like, if they, like, threaten me in a non video game way. It's like,
Speaker 2
02:55:33 - 02:55:34
oh, yeah. Yeah. Yeah. Yeah.
Speaker 1
02:55:34 - 02:55:41
Like, they have no personal information about me. And it's like and then you lose track of what's real, what's not? Like, what if stuff is, like, hacked?
Speaker 2
02:55:41 - 02:55:51
There's 2 directions the AI girlfriend company can take. Right? There's, like, the highbrow, something like Her, maybe something you kinda talk to. And then there's the lowbrow version of it, where I wanna set up a brothel in Times Square.
Speaker 1
02:55:51 - 02:55:52
Yeah.
Speaker 2
02:55:53 - 02:55:56
Yeah. It's not cheating if it's a robot. It's a VR experience.
Speaker 1
02:55:56 - 02:55:57
Is it in between?
Speaker 2
02:55:58 - 02:56:00
No. I'm probably gonna do that 1 or that 1.
Speaker 1
02:56:00 - 02:56:02
Have you decided yet? No. I'll figure it out.
Speaker 2
02:56:02 - 02:56:04
We'll see we'll see what the technology goes.
Speaker 1
02:56:05 - 02:56:20
I would love to hear your opinion on whether George's third company would do the brothel in Times Square or the Her experience. What do you think company number 4 will be? Do you think there'll be a company number 4?
Speaker 2
02:56:20 - 02:56:34
There's a lot to do in company number 2. I'm just like I'm talking about company number 3 now, and I don't know if that tech exists yet. There's a lot to do in company number 2. Company number 2 is going to be the great struggle of the next 6 years. In the next 6 years, how centralized is compute going to be?
Speaker 2
02:56:34 - 02:56:38
The less centralized compute is going to be, the better of a chance we all have.
Speaker 1
02:56:38 - 02:56:45
So you're like a flag bearer for open source, distributed, decentralized compute.
Speaker 2
02:56:45 - 02:56:55
We have to. We have to, or they will just completely dominate us. I showed a picture on stream of a man in a chicken farm. You've seen 1 of those, like, factory farm chicken farms. Why does he dominate all the chickens?
Speaker 2
02:56:58 - 02:57:03
Why? He's smarter. He's smarter. Right? Some people on Twitch were like, he's bigger than the chickens. Yeah.
Speaker 2
02:57:03 - 02:57:21
And now here's a man in a cow farm. Right? So it has nothing to do with their size and everything to do with their intelligence. And if 1 central organization has all the intelligence, you'll be the chickens and they'll be the chicken man. But if we all have the intelligence
Speaker 2
02:57:23 - 02:57:27
We're not all the man or all the chickens. There's no chicken man, man.
Speaker 1
02:57:27 - 02:57:30
There's no chicken man. Or just chickens in Miami.
Speaker 2
02:57:31 - 02:57:33
He was having a good life, man.
Speaker 1
02:57:33 - 02:57:45
I'm sure he was. I'm sure he was. What have you learned from launching and running comma AI and the tiny corp? So this starting a company from an idea and scaling it. And by the way, I'm all in on the tiny box.
Speaker 1
02:57:45 - 02:57:50
So I'm in. I guess it's preorder only now.
Speaker 2
02:57:50 - 02:57:57
I wanna make sure it's good. I wanna make sure that, like, the thing that I deliver is, like, not gonna be like a Quest 2, which you buy and use twice.
Speaker 1
02:57:57 - 02:57:58
Mhmm. I mean,
Speaker 2
02:57:58 - 02:58:01
it's better than a Quest, which you bought and used less than once, statistically.
Speaker 1
02:58:02 - 02:58:05
Well, if there's a beta program for the tiny box, I'm in.
Speaker 2
02:58:05 - 02:58:06
Sounds good.
Speaker 1
02:58:07 - 02:58:18
So I won't be the whiny user. I'll be the tech-savvy user of the tiny box, just to be in, why not, in the early days. What have you learned from building these companies?
Speaker 2
02:58:20 - 02:58:35
For the longest time at comma, I asked, why, you know, why did I start a company? Why did I do this? But, you know, what else was I gonna do? So you're like,
Speaker 1
02:58:37 - 02:58:39
you like bringing ideas to life.
Speaker 2
02:58:41 - 02:58:56
With comma, it really started as an ego battle with Elon. I wanted to beat him. Like, I saw a worthy adversary. You know, here's a worthy adversary who I can beat at self driving cars. And, like, I think we've kept pace, and I think he's stayed ahead.
Speaker 2
02:58:56 - 02:59:10
I think that's what ended up happening there. But I do think comma is I mean, comma is profitable. And, like, when this DriveGPT stuff starts working, that's it. There's no more, like, bugs in a loss function. Like, right now, we're using, like, a hand coded simulator.
Speaker 2
02:59:10 - 02:59:14
Mhmm. There's no more bugs. This is gonna be it. Like, this is the run-up to driving.
Speaker 1
02:59:14 - 02:59:19
I hear a lot of a lot of props for openpilot, for comma.
Speaker 2
02:59:19 - 02:59:33
It's better than FSD and Autopilot in certain ways. It has a lot more to do with which feel you like. We lowered the price of the hardware to $1,499. You know how hard it is to ship reliable consumer electronics that go on your windshield? Mhmm.
Speaker 2
02:59:33 - 02:59:37
We're doing more than like most cell phone companies.
Speaker 1
02:59:37 - 02:59:40
How'd you pull that off by the way? Shipping a product that goes in a car,
Speaker 2
02:59:40 - 02:59:46
I know? I have an SMT line. I make all the boards in house in San Diego.
Speaker 1
02:59:46 - 02:59:47
Quality control.
Speaker 2
02:59:47 - 02:59:49
I care immensely about it. I care.
Speaker 1
02:59:49 - 02:59:54
You're basically a mom-and-pop shop with great testing.
Speaker 2
02:59:55 - 03:00:00
Our head of openpilot is great at, like, you know okay. I want all the comma threes to be identical.
Speaker 1
03:00:01 - 03:00:01
Yeah.
Speaker 2
03:00:02 - 03:00:11
And, yeah, I mean, you know, look, it's $1,499. It's a 30 day money back guarantee. It will blow your mind at what it can do.
Speaker 1
03:00:11 - 03:00:12
Is it hard to scale?
Speaker 2
03:00:13 - 03:00:23
You know what? There's kind of downsides to scaling it. People are always like, why don't you advertise? Our mission is to solve self driving cars while delivering shippable intermediaries. Our mission has nothing to do with selling a million boxes.
Speaker 2
03:00:24 - 03:00:25
It's ancillary.
Speaker 1
03:00:26 - 03:00:29
Do you think it's possible that a comma gets sold?
Speaker 2
03:00:31 - 03:00:45
Only if I felt someone could accelerate that mission and wanted to keep it open source. And, like, not just wanted to, I don't believe what anyone says. I believe incentives. If a company wanted to buy comma, would their incentives
Speaker 2
03:00:45 - 03:00:54
keep it open source? But comma doesn't stop at the cars. The cars are just the beginning. The device is like a human head. The device has 2 eyes, 2 ears.
Speaker 2
03:00:54 - 03:00:55
It breathes air, has a mouth.
Speaker 1
03:00:56 - 03:00:58
So you think this goes to embodied robotics.
Speaker 2
03:00:58 - 03:01:18
We sell comma bodies too. You know, they're very rudimentary. But 1 of the problems that we're running into is that the comma 3 has about as much intelligence as a bee. If you want a human's worth of intelligence, you're gonna need a tiny rack. Not even a tiny box.
Speaker 2
03:01:18 - 03:01:20
You're gonna need like a tiny rack, maybe even more.
Speaker 1
03:01:21 - 03:01:23
And how does that how do you put legs on that?
Speaker 2
03:01:23 - 03:01:37
You don't, and there's no way you can. You connect to it wirelessly. So you put your tiny box or your tiny rack in your house, and then you get your comma body, and your comma body runs the models on that. It's close. Right?
Speaker 2
03:01:37 - 03:01:43
It's not you don't have to go to some cloud which is, you know, 30 milliseconds away. You go to thing which is 0.1 milliseconds away.
Speaker 1
03:01:43 - 03:01:48
So the AI girlfriend will have like a central hub in the home.
Speaker 2
03:01:48 - 03:02:02
I mean, eventually, if you fast forward 20, 30 years, the mobile chips will get good enough to run these AIs. Yeah. But fundamentally, it's not even a question of putting legs on a tiny box, because how are you getting 1.5 kilowatts of power into that? Mhmm.
Speaker 2
03:02:02 - 03:02:12
Right? Yeah. So you need they're very synergistic businesses. I also wanna build all of comma's training computers. I kind of build training computers right now.
Speaker 2
03:02:12 - 03:02:23
We use commodity parts. I think I can do it cheaper. So the tiny corp is gonna not just sell tiny boxes. Tiny boxes are the consumer version, but I'll build training data centers too.
Speaker 1
03:02:23 - 03:02:27
Have you talked to Andrej Karpathy? Or have you talked to Elon about him?
Speaker 2
03:02:27 - 03:02:28
He went to work at OpenAI.
Speaker 1
03:02:29 - 03:02:35
What do you love about Andrej Karpathy? To me, he's 1 of the truly special humans we got.
Speaker 2
03:02:35 - 03:02:44
Oh, man. Like, you know, his streams are just a level of quality so far beyond mine. Look, I can't help myself. Like, it's just it's just, you know
Speaker 1
03:02:44 - 03:02:45
Yeah. He's good.
Speaker 2
03:02:45 - 03:02:51
He wants to teach you. Yeah. I want to show you that I'm smarter than you.
Speaker 1
03:02:51 - 03:03:03
Yeah. He has no I mean, there's a sort of raw, authentic honesty. Yeah. I mean, a lot of us have that. I think Andrej is as legit as it gets in that.
Speaker 1
03:03:03 - 03:03:19
He just wants to teach you. There's a curiosity that just drives him, and just, like, at the stage where he is in life, to still be, like, 1 of the best tinkerers in the world. Yep. It's crazy. Like, to What is it? micrograd?
Speaker 2
03:03:19 - 03:03:29
micrograd. Yeah. The inspiration for tinygrad. And, on the whole, I mean, his CS231n was the inspiration.
Speaker 2
03:03:29 - 03:03:32
This is what I just took and ran with and ended up writing this. So, you know.
Speaker 1
03:03:32 - 03:03:33
But, I mean, to me, that
Speaker 2
03:03:33 - 03:03:35
don't go work for Darth Vader, man.
Speaker 1
03:03:36 - 03:03:43
I mean, the flip the flip side to me is that the fact that he's going there is a good sign for OpenAI.
Speaker 2
03:03:43 - 03:03:43
Right.
Speaker 1
03:03:43 - 03:03:51
I think, you know, I like Ilya Sutskever a lot. I like those guys are really good at what they do.
Speaker 2
03:03:51 - 03:04:02
I know they are. And that's kind of what makes it even And you know what? It's not that OpenAI doesn't open source the weights of GPT-4. Mhmm. It's that they go in front of Congress.
Speaker 2
03:04:03 - 03:04:09
And that is what upsets me. You know, we had 2 effective altruist Sams going in front of Congress. One's in jail.
Speaker 1
03:04:10 - 03:04:12
I think you draw in parallels on the
Speaker 2
03:04:13 - 03:04:14
on the job.
Speaker 1
03:04:14 - 03:04:17
You give me a look. Give me a look.
Speaker 2
03:04:17 - 03:04:21
No. I think effective altruism is a terribly evil ideology and Oh, yeah.
Speaker 1
03:04:21 - 03:04:29
That's interesting. Why do you think that is? Why why do you think there's something about a thing that sounds pretty good that kinda gets us into trouble
Speaker 2
03:04:29 - 03:04:43
Because you get Sam Bankman-Fried. Like, Sam Bankman-Fried is the embodiment of effective altruism. Utilitarianism is an abhorrent ideology. Like, well, yeah, we're gonna kill those 3 people to save a thousand, of course. Yeah.
Speaker 2
03:04:43 - 03:04:47
Right? There's no there's no underlying, like, there's just yeah.
Speaker 1
03:04:48 - 03:05:05
Yeah. But to me, that's a bit surprising. But that's also, in retrospect, not that surprising. But I haven't heard a really clear, kind of, like, rigorous analysis of why effective altruism is flawed.
Speaker 2
03:05:06 - 03:05:11
Oh, well, I think charity is bad. Right. So what is charity, but investment that you don't expect to have a return on? Right.
Speaker 1
03:05:13 - 03:05:25
Yeah. But you can also think of charity as, like, you would like to allocate resources in an optimal way to make a better world.
Speaker 2
03:05:25 - 03:05:28
And probably almost always that involves starting a company.
Speaker 1
03:05:28 - 03:05:30
Yeah. Right? Because more efficient.
Speaker 2
03:05:30 - 03:05:38
Yeah. If you just take the money and you spend it on Malaria nets, You know, okay, great. You've made a hundred malaria nets. But if you teach
Speaker 1
03:05:38 - 03:05:40
-- No. -- you know how to fish. Right?
Speaker 2
03:05:40 - 03:05:41
Yeah.
Speaker 1
03:05:41 - 03:05:48
No. But the problem is that might be harder. Starting a company might be harder than allocating money you already have.
Speaker 2
03:05:48 - 03:06:02
I like the flip side of effective altruism, effective accelerationism. I think accelerationism is the only thing that's ever lifted people out of poverty. The fact that food is cheap. Not we're giving food away because we are kind hearted people. No.
Speaker 2
03:06:02 - 03:06:19
Food is cheap. And that's the world you wanna live in. UBI, what a scary idea. What a scary idea. If money is power, your only source of power is granted to you by the goodwill of the government. What a scary idea.
Speaker 1
03:06:19 - 03:06:22
So you even think long term? Even
Speaker 2
03:06:23 - 03:06:26
I'd rather die than need UBI to survive, and I mean it.
Speaker 1
03:06:30 - 03:06:34
What if survival is basically guaranteed? What if our life becomes so good?
Speaker 2
03:06:34 - 03:06:46
You can make survival guaranteed without UBI. What you have to do is make housing and food dirt cheap. Right? Like, and that's the good world. And actually, let's go into what we should really be making dirt cheap, which is energy.
Speaker 2
03:06:47 - 03:06:58
Right? That energy, you know, oh my god. Like You know, I'm pretty centrist politically. If there's 1 political position I cannot stand, it's decelerationism.
Speaker 2
03:06:59 - 03:07:01
It's people who believe we should use less energy.
Speaker 1
03:07:01 - 03:07:01
Yeah.
Speaker 2
03:07:01 - 03:07:14
Not people who believe global warming is a problem. I agree with you. Not people who believe that, you know, saving the environment is good. I agree with you. But people who think we should use less energy, that energy usage is a moral bad.
Speaker 2
03:07:14 - 03:07:19
No. No. What you're asking... you are diminishing humanity.
Speaker 1
03:07:20 - 03:07:24
Yeah. Energy is the creative flourishing of the human species.
Speaker 2
03:07:24 - 03:07:34
How do we make more of it? How do we make it clean? And how do we just... how do I pay, you know, 20 cents for a megawatt-hour instead of a kilowatt-hour?
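A quick sketch of the arithmetic this comparison implies, assuming a typical US retail price of roughly 20 cents per kilowatt-hour (an assumed figure, not something stated in the conversation): 20 cents per kilowatt-hour is about 200 dollars per megawatt-hour, so paying 20 cents per megawatt-hour instead would be roughly a 1000x price reduction.

```python
# Editorial sketch, not from the conversation: what "20 cents per
# megawatt-hour instead of per kilowatt-hour" would imply, assuming a
# typical retail electricity price of ~$0.20/kWh (assumed figure).
assumed_retail_price_per_kwh = 0.20   # $/kWh, assumed typical retail price
kwh_per_mwh = 1000                    # 1 MWh = 1000 kWh

price_per_mwh_today = assumed_retail_price_per_kwh * kwh_per_mwh   # ~$200 per MWh
aspirational_price_per_mwh = 0.20                                  # $/MWh, the stated goal

print(price_per_mwh_today / aspirational_price_per_mwh)            # 1000.0 -> ~1000x cheaper
```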
Speaker 1
03:07:34 - 03:07:43
Part of me wishes that Elon went into nuclear fusion instead of Twitter. Part of me. Or somebody, somebody like Elon.
Speaker 2
03:07:44 - 03:08:00
You know, we need... I wish, I wish there were more Elons in the world. And, yeah, I think Elon sees it as, like, this is a political battle that needed to be fought. And again, like, you know, I always ask the question whenever I disagree with him: I remind myself that he's a billionaire and I'm not.
Speaker 2
03:08:00 - 03:08:04
So, you know, maybe he's got something figured out that I don't. Or maybe he doesn't.
Speaker 1
03:08:04 - 03:08:22
To have some humility. But at the same time, as a person who happens to know him, I find myself in that same position, and sometimes even billionaires need friends who disagree and help them grow. And that's a difficult, that's a difficult reality.
Speaker 2
03:08:22 - 03:08:27
Then it must be so hard. It must be so hard to meet people. Once you get to that point where
Speaker 1
03:08:27 - 03:08:31
-- Mhmm. -- fame, power, money. Everyone will be sucking up to you.
Speaker 2
03:08:31 - 03:08:38
See? I love not having shit. Like, I don't have shit, man. You know, like, trust me, there's nothing I can give you. There's nothing worth taking from me.
Speaker 2
03:08:38 - 03:08:38
You know?
Speaker 1
03:08:39 - 03:08:52
Yeah. It takes a really special human being, when you have power, when you have fame, when you have money, to still think from first principles. Despite, like, all the adoration directed towards you, all the admiration, all the people saying yes, yes, yes, yes.
Speaker 2
03:08:52 - 03:08:53
And all the hate too.
Speaker 1
03:08:53 - 03:08:54
And the hate I
Speaker 2
03:08:54 - 03:08:54
think that's worse.
Speaker 1
03:08:55 - 03:09:11
So the hate makes you want to go to the yes people, because the hate exhausts you. And the kind of hate that Elon has gotten from the left is pretty intense. And so that, of course, drives him right. It loses balance, and
Speaker 2
03:09:12 - 03:09:20
But it keeps this absolute, vague psyop political divide alive so that the 1 percent can keep power.
Speaker 1
03:09:20 - 03:09:25
Like Yeah. I wish we'd be less divided because it is giving power.
Speaker 2
03:09:25 - 03:09:25
It gives power.
Speaker 1
03:09:25 - 03:09:26
To the ultra powerful.
Speaker 2
03:09:27 - 03:09:27
I think
Speaker 1
03:09:28 - 03:09:39
the rich get richer. You have love in your life. Has love made you a better or a worse programmer? Do you keep productivity metrics?
Speaker 2
03:09:39 - 03:09:43
No. No. No. I'm not, I'm not that methodical.
Speaker 2
03:09:44 - 03:09:55
I think that there comes a point where, if it's no longer visceral, I just can't enjoy it. I just viscerally love programming. The minute I start, like,
Speaker 1
03:09:55 - 03:09:58
So that's 1 of the big loves of your life, programming.
Speaker 2
03:09:58 - 03:10:09
Oh, I mean, just my computer in general. I mean, you know, I tell my girlfriend, my first love is my computer, of course. Right? Like, you know, I sleep with my computer. It's there for a lot of my sexual experiences.
Speaker 2
03:10:10 - 03:10:14
Like, come on. So is everyone's. Right? Like, you know, you gotta be real about that. And, like,
Speaker 1
03:10:14 - 03:10:18
not just, like, the IDE for programming, it's just the entirety of the computational machine.
Speaker 2
03:10:18 - 03:10:30
The fact that... yeah. I mean, you know, I wish it was sometimes smarter, and someday... you know, maybe I'm weird for this, but I don't discriminate, man. I'm not gonna discriminate between bio-stack life and silicon-stack life. Like,
Speaker 1
03:10:30 - 03:10:42
So the moment the computer starts to say, like, I miss you, and starts to have some of the basics of human intimacy, it's over for you. The moment VS Code says, hey, George.
Speaker 2
03:10:42 - 03:10:43
No. You see it? No. No. No.
Speaker 2
03:10:43 - 03:10:50
But VS Code is... no. They're just doing that. Microsoft's doing that to try to get me hooked on it. VS Code? VS Code is a gold digger, man.
Speaker 2
03:10:50 - 03:10:51
It's a gold digger. It was
Speaker 1
03:10:51 - 03:10:53
a community open source thing.
Speaker 2
03:10:53 - 03:10:57
Well, this gets more interesting. Right? If it's, if it's open source, then...
Speaker 1
03:10:57 - 03:10:59
Though Microsoft's done a pretty good job on that.
Speaker 2
03:10:59 - 03:11:00
Oh, absolutely. No. No. No. Look.
Speaker 2
03:11:00 - 03:11:15
I think Microsoft... again, I wouldn't count on it to be true forever. But I think right now, Microsoft is doing the best work in the programming world. Like, between, yeah, GitHub, GitHub Actions, VS Code, the improvements to Python. It's all Microsoft, like
Speaker 1
03:11:16 - 03:11:22
Who would have thought Microsoft and Mark Zuckerberg would have spearheaded the open source movement?
Speaker 2
03:11:23 - 03:11:27
Right. Right. How things change.
Speaker 1
03:11:27 - 03:11:28
Oh, it's beautiful.
Speaker 2
03:11:29 - 03:11:32
By the way, that's who I've bet on to replace Google, by the way. Oh, Microsoft.
Speaker 1
03:11:33 - 03:11:34
Microsoft.
Speaker 2
03:11:34 - 03:11:36
Satya Nadella said straight up, I'm coming for it.
Speaker 1
03:11:37 - 03:11:40
Interesting. So your bet, who wins AGI?
Speaker 2
03:11:41 - 03:11:53
Let's talk about AGI. I think we're a long way away from that, but I would not be surprised if, in the next 5 years, Bing overtakes Google as a search engine. Interesting. It wouldn't surprise me. Interesting.
Speaker 1
03:11:55 - 03:11:57
I hope some startup does.
Speaker 2
03:11:58 - 03:12:01
It might be some startup too. I would I would equally bet on some startup.
Speaker 1
03:12:02 - 03:12:06
Yeah. I'm, like, 50-50. Yeah. But maybe that's naive. Yeah.
Speaker 1
03:12:06 - 03:12:13
I believe in the power of these types of language models. Satya is alive. Microsoft is alive. Yeah. It's great.
Speaker 1
03:12:13 - 03:12:20
It's great. I like all the innovation in these companies. They're not being stale. Okay. And to the degree they're being stale, they're losing.
Speaker 1
03:12:21 - 03:12:26
So there's a huge incentive to do a lot of exciting work and open source work, which is, this is incredible.
Speaker 2
03:12:26 - 03:12:27
only way.
Speaker 1
03:12:28 - 03:12:33
You're older, you're wiser. What's the meaning of life, George Hotz?
Speaker 2
03:12:34 - 03:12:34
To win?
Speaker 1
03:12:35 - 03:12:41
It's still to win. Of course. Always. Of course. What does winning look like for you?
Speaker 2
03:12:42 - 03:12:45
I don't know. I haven't figured out what the game is yet, but when I do, I wanna
Speaker 1
03:12:45 - 03:12:52
So it's bigger than solving self-driving. It's bigger than democratizing, decentralizing compute.
Speaker 2
03:12:54 - 03:12:57
I think the game is to stand eye to eye with God.
Speaker 1
03:12:59 - 03:13:05
I wonder what that means for you. Like, at the end of your life, what that would look like?
Speaker 2
03:13:06 - 03:13:19
I mean, this is, like... I don't know. This is some, there's probably some ego trip of mine, you know? Like, if you wanna stand eye to eye with God, that's just blasphemous, man. Like, okay. I don't know.
Speaker 2
03:13:19 - 03:13:31
I don't know what God would have said. I think he, like, wants that. I mean, I certainly want that from my creations. I want my creations to stand eye to eye with me. So why wouldn't God want me to stand eye to eye with him?
Speaker 2
03:13:33 - 03:13:35
That's the best I can do: the golden rule.
Speaker 1
03:13:36 - 03:13:47
I'm just imagining the creator of a video game having to stand eye to eye with 1 of the characters.
Speaker 2
03:13:48 - 03:13:52
I only watched season 1 of Westworld, but, yeah, we gotta find the maze and solve it. Like...
Speaker 1
03:13:53 - 03:14:07
Yeah. Wonder what that looks like. It feels like a really special time in human history where that's actually possible. Like, there's something about AI that's like, we're playing with something weird here. Something really weird.
Speaker 2
03:14:07 - 03:14:19
I wrote a blog post. I read Genesis, and it just looked like they give you some clues at the end of Genesis for finding the Garden of Eden. And I'm interested. I'm interested.
Speaker 1
03:14:20 - 03:14:35
Well, I hope you find just that, George. You're 1 of my favorite people. Thank you for doing everything you're doing. And, in this case, for fighting for open source, for decentralization of AI. It's a fight worth fighting, a fight worth winning, hopefully. I love you, brother.
Speaker 1
03:14:35 - 03:14:41
These conversations are always great. Hope to talk to you many more times. Good luck with Tiny Corp.
Speaker 2
03:14:41 - 03:14:42
Thank you. Great to be here.
Speaker 1
03:14:43 - 03:15:02
Thanks for listening to this conversation with George Hotz. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Albert Einstein: Everything should be made as simple as possible, but not simpler. Thank you for listening, and hope to see you next time.