Speaker 1
00:00
The Joe Rogan Experience.
Speaker 2
00:03
And wondering what the potential for the future is and whether or not that's a good thing.
Speaker 1
00:10
I think it's going to be a great thing, but I think it's not going to be all a great thing. And that, I think, is where all of the complexity comes in for people. It's not this like clean story of we're gonna do this and it's all gonna be great.
Speaker 1
00:25
It's we're gonna do this, it's gonna be net great, but it's gonna be like a technological revolution. It's gonna be a societal revolution. And those always come with change. And even if it's like net wonderful, you know, there's things we're gonna lose along the way.
Speaker 1
00:40
Some kinds of jobs, some parts of our way of life, some parts of the way we live are gonna change or go away. And no matter how tremendous the upside is there, and I believe it will be tremendously good, you know, there's a lot of stuff we gotta navigate through to make sure it goes well. That's a complicated thing for anyone to wrap their heads around, and there are deep and super understandable emotions around that.
Speaker 2
01:04
That's a very honest answer that it's not all going to be good. But it seems inevitable at this point.
Speaker 1
01:13
Yeah, I mean, it's definitely inevitable. My view of the world, you know, when you're like a kid in school, you learn about this technological revolution and then that one and then that one. And my view of the world now, sort of looking backwards and forwards, is that this is like one long technological revolution.
Speaker 1
01:31
And sure, first we had to figure out agriculture so that we had the resources and time to figure out how to build machines. Then we got this industrial revolution, and that made us learn about a lot of stuff, a lot of other scientific discovery too, and let us do the computer revolution. And that's now letting us, as we scale up to these massive systems, do the AI revolution. But it really is just one long story of humans discovering science and technology and co-evolving with it.
Speaker 1
01:56
And I think it's the most exciting story of all time. I think it's how we get to this world of abundance. And although we do have these things to navigate, and there will be these downsides, think about what it means for the world and for people's quality of life if we can get to a world where the cost of intelligence dramatically falls.
Speaker 1
02:23
The abundance goes way up. I think we'll do the same thing with energy. And I think those are the two sort of key inputs to everything else we want. So if we can have abundant and cheap energy and intelligence, that will transform people's lives, largely for the better.
Speaker 1
02:37
And I think it will, in the same way that if we could go back 500 years and look at someone's life, we'd say, well, there were some great things, but they didn't have this, they didn't have that, can you believe they didn't have modern medicine? That's how people are gonna look back at us, but in 50 years.
Speaker 2
02:52
When you think about the people that currently rely on jobs that AI will replace, whether it's truck drivers, automation workers, or people that work on factory assembly lines, what strategies, if any, can be put in place to mitigate the downsides of those jobs being eliminated by AI?
Speaker 1
03:20
So, I'll talk about some general thoughts, but I find making very specific predictions difficult, because the way the technology has gone has been so different from even my own intuitions.
Speaker 2
03:34
Maybe we should stop there and back up a little. What were your initial thoughts?
Speaker 1
03:40
If you had asked me 10 years ago, I would have said, first AI is gonna come for blue collar labor, basically. It's going to drive trucks and do factory work, and it'll handle heavy machinery. Then maybe after that, it'll do some kinds of cognitive labor.
Speaker 1
04:00
But it won't be off doing what I think of personally as the really hard stuff. It won't be off proving new mathematical theorems. It won't be off discovering new science. It won't be off writing code.
Speaker 1
04:12
And then eventually, maybe, but maybe last of all, maybe never, because human creativity is this magic special thing, last of all it'll come for the creative jobs. That's what I would have said. Now, A, it looks to me like, and has for a while, AI is much better at doing tasks than doing jobs.
Speaker 1
04:32
It can do these little pieces super well, but sometimes it goes off the rails, can't keep very long coherence. So people are instead just able to do their existing jobs way more productively, but you really still need the human there today. And then, B, it's going exactly the other direction. It could do the creative work first, stuff like coding second, other kinds of cognitive labor third, and we're the furthest away from humanoid robots.
Speaker 2
05:01
So back to the initial question. If we do have something that completely eliminates factory workers, completely eliminates truck drivers, delivery drivers, things along those lines, that creates this massive vacuum in our society.
Speaker 1
05:21
So I think there are things that we're going to do that are good to do, but not sufficient. I think at some point we will do something like a UBI, or some other kind of very long-term unemployment insurance, some way of redistributing money in society as a cushion for people as they figure out the new jobs. But maybe I should touch on that.
Speaker 1
05:49
I'm not a believer at all that there won't be lots of new jobs. I think human creativity, desire for status, wanting different ways to compete, invent new things, feel part of a community, feel valued, that's not going to go anywhere. People have worried about that forever. What happens is we get better tools and we just invent new things and more amazing things to do.
Speaker 1
06:12
And there's a big universe out there. And I mean that like literally, in that space is really big, but also there's just so much stuff we can all do if we get to this world of abundant intelligence, where you can sort of just think of a new idea and it gets created. But again, to the point we started with, that doesn't provide great solace to people who are losing their jobs today. So saying there's going to be this great indefinite stuff in the future, people are like, what are we doing today?
Speaker 1
06:48
So I think we will, as a society, do things like UBI and other ways of redistribution, but I don't think that gets at the core of what people want. I think what people want is agency, self-determination, the ability to play a role in architecting the future along with the rest of society, the ability to express themselves and create something meaningful to them. And also, I think a lot of people work jobs they hate, and I think we as a society are always a little bit confused about whether we want to work more or work less. But that somehow we all get to do something meaningful, and we all get to play our role in driving the future forward, that's really important.
Speaker 1
07:36
And what I hope is that as those long-haul truck driving jobs go away, and people have been wrong about predicting how fast that's going to happen, but it is going to happen, we figure out not just a way to solve the economic problem by giving people the equivalent of money every month, but that, and we've had a lot of ideas about this, there's a way we share ownership and decision making over the future. I say a lot about AGI that everyone realizes we're going to have to share the benefits of it, but we also have to share the decision making over it and access to the system itself. Like, I'd be more excited about a world where we say, rather than give everybody on earth one eight-billionth of the AGI money, which we should do too, you get a one-eight-billionth slice of the system.
Speaker 1
08:34
You can sell it to somebody else, you can sell it to a company, you can pool it with other people, you can use it for whatever creative pursuit you want, you can use it to figure out how to start some new business. And with that, you get sort of like a voting right over how this is all gonna be used. And so the better the AGI gets, the more your little one-eight-billionth ownership is worth to you.
Speaker 2
08:56
We were joking around the other day on the podcast where I was saying that what we need is an AI government. We should have an AI president. And have AI write the- Just make all the decisions?
Speaker 2
09:07
Yeah, have something that's completely unbiased, absolutely rational, that has the accumulated knowledge of all of human history at its disposal, including all knowledge of psychology and psychological study, including UBI, because that comes with a host of pitfalls and issues that people have with it.
Speaker 1
09:29
So I'll say something there. I think we're still very far away from a system that is capable enough and reliable enough that any of us would want that. But I'll tell you something I love about that.
Speaker 1
09:42
Someday, let's say that thing gets built. The fact that it can go around and talk to every person on earth, understand their exact preferences at a very deep level, how they think about this issue and that one, and how they balance the trade-offs and what they want, and then understand all of that and collectively optimize for the collective preferences of humanity, or of citizens of the US, that's awesome.
Speaker 2
10:06
As long as it's not co-opted, right? Our government currently is co-opted.
Speaker 1
10:12
That's for sure.
Speaker 2
10:13
We know for sure that our government is heavily influenced by special interests. If we could have an artificial intelligence government that has no influence, nothing has influence on it.
Speaker 1
10:26
What a fascinating idea.
Speaker 2
10:27
It's possible, and I think it might be the only way you're gonna get a completely objective, absolutely most intelligent decision for virtually every problem, every dilemma that we currently face in society.
Speaker 1
10:44
Would you truly be comfortable handing over, like, final decision-making and saying, alright AI, you got it?
Speaker 2
10:50
No, no, but I'm not comfortable doing that with anybody, right? You know what I mean? I was uncomfortable with the Patriot Act. I'm uncomfortable with, you know, many decisions that are being made. It's just, there's so much obvious evidence that the decisions being made are not being made in the best interest of the overall well-being of the people. They're being made in the interest of whatever gigantic corporations have donated, and the military-industrial complex, and the pharmaceutical-industrial complex, and it's just the money. That's really what we know today: money has a massive influence on our society and the choices that get made, and on the overall good or bad for the population.
Speaker 1
11:34
Yeah, I have no disagreement at all that the current system is super broken, not working for people, super corrupt, and for sure unbelievably run by money. And I think there is a way to do a better job than that with AI in some way. But, and this might just be a factor of sitting with the systems all day and watching all of the ways they fail.
Speaker 1
12:01
We got a long way to go.
Speaker 2
12:02
A long way to go, I'm sure. But when you think of AGI, when you think of the possible future, like where it goes to, do you ever extrapolate? Do you ever like sit and pause and say, well, if this becomes sentient and it has the ability to make better versions of itself, how long before we're literally dealing with a god?
Speaker 1
12:27
So, the way that I think about this is: it used to be that AGI was this very binary moment. It was before and after, and I think I was totally wrong about that. The right way to think about it is as this continuum of intelligence, this smooth exponential curve, going all the way back to that sort of smooth curve of technological revolution.
Speaker 1
12:50
The amount of compute power we can put into the system, the scientific ideas about how to make it more efficient and smarter, to give it the ability to do reasoning, to think about how to improve itself, that will all come. But in my model for a long time, and I think if you look at the world of AGI thinkers, particularly around the safety issues you were talking about, there are two axes that matter. There's what's called short timelines or long timelines to the first milestone of AGI, whatever that's gonna be. Is that gonna happen in a few years, a few decades, maybe even longer? Although at this point, I think most people are at a few years or a few decades.
Speaker 1
13:32
And then there's takeoff speed. Once we get there, from there to that point you were talking about, where it's capable of that rapid self-improvement, is that a slower or a faster process? The world that I think we're heading into, that we're in, and also the world that I think is the most controllable and the safest, is the short-timelines-and-slow-takeoff quadrant. And, you know, there were a lot of very smart people for a while who were like, the thing you were just talking about happens in a day or three days.
Speaker 1
14:05
And that doesn't seem likely to me given the shape of the technology as we understand it now. Now, even if that happens in a decade or three decades, it's still like the blink of an eye from a historical perspective, and there are going to be some real challenges to getting that right. And the decisions we make, the sort of safety systems and checks that the world puts in place, how we think about global regulation or rules of the road from a safety perspective for those projects, it's super important, because you can imagine many things going horribly wrong.
Speaker 1
14:44
But I feel cheerful about the progress the world is making towards taking this seriously. And you