1 hour 9 minutes 14 seconds
Speaker 1
00:00:00 - 00:00:29
♪ Hello there, this is Chris Anderson, and I am hugely, hugely, tremendously excited to welcome you to a new series of the TED Interview. Now then, this season, we're trying something new. We're organizing the whole season around a single theme, albeit a theme that some of you may consider inappropriate. But hear me out. The theme is The Case for Optimism.
Speaker 1
00:00:31 - 00:00:59
And yes, I know the world has been hit with some extraordinarily ugly things in the last few years. Political division, a racial reckoning, technology run amok, not to mention a global pandemic and impending climate catastrophe. What on earth are we thinking? In this context, optimism can just seem so naive, unwanted, almost annoying. So here's my position: don't think of optimism as a feeling. It's not just this sort of shallow feeling of hope.
Speaker 1
00:01:00 - 00:01:38
Optimism is a search. It's a determination to look for a pathway forward. Somewhere out there, I believe, I truly believe, there are amazing people whose minds contain the ideas, the visions, the solutions that can actually create that pathway forward. If given the support and resources they need, they may very well light the path out of this dark place we're in. So these are the people who can present not optimism, but a case for optimism. They're the people I'm talking to this season.
Speaker 1
00:01:38 - 00:02:09
So let's see if they can persuade us. Now then, the place I want to start is with AI, artificial intelligence. This of course is the next innovative technology that is going to change everything as we know it, for better or for worse. Today we'll see it painted not with the usual dystopian brush, but by someone who truly believes in its potential. Sam Altman is the former president of Y Combinator, the legendary startup accelerator.
Speaker 1
00:02:10 - 00:02:55
And in 2015, he and a team launched a company called OpenAI, dedicated to one noble purpose: to develop AI so that it benefits humanity as a whole. You may have heard, by the way, recently a lot of buzz around an AI technology called GPT-3 that was developed by OpenAI and is proof of the quality of the amazing team of researchers and developers they have working there. We'll be hearing a lot about GPT-3 in the conversation ahead. But sticking to this lofty mission of developing AI for humanity, and finding the resources to realize it, haven't been simple. OpenAI is certainly not without its critics, but their goal couldn't be more important.
Speaker 1
00:02:55 - 00:03:18
And honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. Okay, let's do this. So, Sam Altman, welcome.
Speaker 2
00:03:18 - 00:03:19
Thank you for having me.
Speaker 1
00:03:20 - 00:03:30
So Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future?
Speaker 2
00:03:32 - 00:04:10
I think that the combination of scientific and technological progress and better societal decision-making, better societal governance, is going to solve, in the next couple of decades, all of our current most pressing problems. There will be new ones. But I think we are going to get very safe, very inexpensive, carbon-free nuclear energy to work. And I think we're going to talk about that time when the climate disaster looked so bad, and how lucky we are that we got saved by science and technology. I think we've already now seen this with the rapidity with which we were able to get vaccines deployed.
Speaker 2
00:04:10 - 00:04:56
We are going to find that we are able to cure or at least treat a significant percentage of human disease, including, I think, actually making progress in helping people have much longer, decades longer, health spans. I think in the next couple of decades, that will look pretty clear. I think we will build systems with AI and otherwise that make access to an incredibly high-quality education more possible than ever before. I think, you know, when we look forward like 100 years, 50 years even, the quality of life available to anyone then will be much better than the quality of life available in the very best case to any single person today. So yeah, I'm super optimistic.
Speaker 2
00:04:57 - 00:05:06
I think it's always easy to doomscroll and think about how bad the bad things are. But the good things are really good and getting much better.
Speaker 1
00:05:07 - 00:05:14
Is it your sincere belief that artificial intelligence can actually make that future better?
Speaker 2
00:05:15 - 00:05:35
Certainly. How? Look, with any technology, I don't think it will all be better. I think there are always positive and negative use cases of anything new, and it's our job to maximize the positive ones, minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the negative ones.
Speaker 2
00:05:36 - 00:06:02
I think we're seeing a glimpse of that now. Now that we have the first general purpose AIs built out in the world and available via things like our API, I think we are seeing evidence of just the breadth of services that we will be able to offer as this sort of technological revolution really takes hold. And we will have people interact with services that are smart, really smart, and it will feel as strange as the world before mobile phones feels to us now.
Speaker 1
00:06:04 - 00:06:28
Yeah, you mentioned your API. I guess that stands for Application Programming Interface. It's the technology that allows complex technology to be accessible to others. So Sam, give me a sense of a couple of things that have got you most excited that are already out there, and then how that gives you visibility to a pathway forward that is even more exciting.
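To make that idea concrete, here is a minimal sketch, in Python, of what calling a hosted text-completion API over HTTP might look like. The endpoint URL, model name, and response fields below are placeholders for illustration, not the exact production interface being discussed.

```python
# Hypothetical sketch of calling a text-completion service over HTTP.
# Endpoint, model name, and response shape are assumptions for illustration only.
import os
import requests

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Send a natural-language prompt to the service and return its continuation."""
    response = requests.post(
        "https://api.example.com/v1/completions",  # placeholder endpoint
        headers={"Authorization": f"Bearer {os.environ.get('API_KEY', '')}"},
        json={"model": "general-purpose-lm", "prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

print(complete("Suggest a title for a podcast episode about optimism and AI:"))
```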
Speaker 2
00:06:28 - 00:06:40
So I think that the things that we're seeing now are very much glimpses of the future. We released GPT-3, which is a general purpose natural language text model in the summer of
Speaker 1
00:06:40 - 00:06:41
2020.
Speaker 2
00:06:41 - 00:07:21
You know, there are hundreds of applications that are now using it in production. That's ramping up all of the time. There are things where people use GPT-3 to really understand the intent behind a search query and deliver results, to understand not only the intent but all of the data, and deliver the thing you actually want. So you can describe something fuzzy and it'll understand documents, short documents, not full books yet, and bring you back the context of what you want. There's been a lot of excitement about using the generative capabilities to create games or interactive stories, or letting people develop characters or chat with a sort of virtual friend.
Speaker 2
00:07:21 - 00:07:59
There are applications that, for example, help a job seeker polish a tailored application for each individual company. There's the beginning of AI tutors that can teach people about different concepts and take on different personas. You know, we could go on for a long time, but I think anything you can imagine doing today via computer, where you would like it to really understand and get to know you, and not only that, but understand all of the data and knowledge in the world and help you have the best experience possible, that will all happen.
Speaker 1
00:07:59 - 00:08:16
So what gets opened up? What new adjacent possible state is there as a result of these powers? Frame this question from the point of view of someone who's starting out on a career, for example. They're trying to figure out what would be a really interesting thing to do in the future that has only recently become possible. What are some new things that this opens up?
Speaker 2
00:08:17 - 00:09:14
In a world where you can talk to a computer and get back immediately, for almost no money, the output that would normally require hiring the world's experts, I would say, think about what's possible there. So that could be, as you said, what can normally only the best programmer in the world, or a really great programmer, do for me? And can I now instead just ask in English and have that program written? So all these people you know who want to develop an app, they have an idea but they don't know how to program, now they can have it. You know, what does a service look like when anyone on earth who wants really great medical advice can get better medical advice than any single doctor could ever give, because this has the total medical knowledge and reasoning ability that humanity has ever produced?
Speaker 2
00:09:15 - 00:09:46
You know, when you want to learn something, you have a sort of AI tutor that understands your exact style, how you best learn, everything you know, and custom-teaches you whatever concept you want to learn. Someday you can imagine that you have an AI that reads your email and your task list and your calendar and the documents you've been sent, and for any meeting prepares you as perfectly as possible, with all of the information that you need and all the context of your entire career right there for you. We could go on for a long time, but I think these will just be powerful systems.
Speaker 1
00:09:47 - 00:10:14
So it's really fun playing around with GPT-3. One compelling example, for someone who's not in the tech space, is to try Googling the Guardian essay that was written entirely from different GPT-3 queries stitched together. It's an essay on why artificial intelligence isn't a threat to humanity, and it's impressive. It's very compelling. I actually tried it on one of the online UIs.
Speaker 1
00:10:15 - 00:10:33
I asked the question, what is interesting about Sam Altman? Here's what it came back with. It was rather philosophical actually. It came back with, I don't understand what you mean by interesting. It seems to me that this word has no unambiguous meaning in the context of human society beyond its literal definition as somewhat pleasing or entertaining.
Speaker 1
00:10:34 - 00:10:51
I do not believe there to be any such thing as interestingness except in the mind of a human or other sentient being. But to my knowledge this is an entirely subjective state that varies from person to person. However, I will grant that this is an interesting question to ask. This does not mean it has been answered. There is no answer to be found.
Speaker 1
00:10:51 - 00:11:03
Well, so you can grade that as somewhere between profound and gibberish. Is that almost where the state of play is? I mean.
Speaker 2
00:11:03 - 00:11:39
That's where we are today. I think somewhere between profound and gibberish is the right way to think about the current capabilities of, say, GPT-3. Yeah, we definitely had a bubble of hype about GPT-3 last summer. But the thing about bubbles is the reason that smart people fall for them is there's a kernel of something really real and really interesting that people get overexcited about. And I think people definitely got, and still are, overexcited about GPT-3 today, but still probably underestimate the potential of where these models will go in the future.
Speaker 2
00:11:40 - 00:12:12
And so maybe there's this short-term overhype and long-term underhype for the entire field, for text models, for whatever you'd like, that's going on. And as you said, there's clearly some gibberish in there. But on the other hand, those were well-formed sentences, and there were a couple of ideas in there that I was like, oh, actually, maybe that's right. And I think if artificial intelligence, even in its current very larval state, can make us confront new things and sort of inspire new ideas, that's already pretty impressive.
Speaker 1
00:12:12 - 00:12:37
Give us a sense of what's actually happening in the background there. I think it's hard to understand, because you read these words and it seems like someone is trying to mean something. Obviously I don't think you believe that whatever you've built there is a sort of thinking, sentient thing that's going, oh, I must answer this question. So how would you describe what's going on? You've got something that has read the entire internet, essentially, all of Wikipedia, et cetera.
Speaker 2
00:12:37 - 00:13:05
We've trained something that's read a small fraction, a random sampling, of the internet. We will eventually train something that has read as much of the internet, or more of the internet, than we've done right now. But we have a very long way to go. I mean, relative to what we will have, we're still, I think, operating at quite small scale with quite small AIs. But what is happening is there is a model that is ingesting lots of text, and it is trying to predict the next word.
Speaker 2
00:13:07 - 00:13:54
So we use transformers, which are a particular architecture of AI model. They take in a context of a lot of words, let's say a thousand or something like that, and they try to predict the word that comes next in the sequence. And there are a lot of other things that happen, but fundamentally, that's it. And I think this is interesting, because in the process of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next. And I think it is maybe not perfectly accurate, but certainly worth considering, to say that intelligence is very near the ability to make accurate predictions.
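As a toy illustration of the training objective Sam describes, given some context, predict the next word, here is a tiny counting-based model in Python. Real systems use transformer networks with learned weights over contexts of roughly a thousand tokens; this sketch only shows the shape of the task.

```python
# A toy "predict the next word" model built by counting word pairs in a corpus.
# Real models learn a probability distribution over a whole vocabulary; this only
# illustrates the objective, not the architecture.
from collections import Counter, defaultdict

def train(corpus: str):
    """Count, for each word, which words tend to follow it."""
    words = corpus.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, context: str) -> str:
    """Predict the most likely next word given the last word of the context."""
    last = context.split()[-1]
    candidates = following.get(last)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

corpus = "the model reads text and the model predicts the next word in the text"
table = train(corpus)
print(predict_next(table, "the model reads the"))  # prints "model", the most common follower of "the"
```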
Speaker 1
00:13:54 - 00:14:16
What's confusing about this is that there are so many words on the internet which are foolish, as well as the words that are wise. And how do you build a model that can distinguish between those two? And this was prompted actually by another example that I typed in. Like, I asked, you know, what is a powerful idea? I'm very interested in ideas, that was my question.
Speaker 1
00:14:16 - 00:14:40
What is a powerful idea? And it came back with several things, some of which seemed moderately profound, some of which seemed moderately gibberish, but then here was one that it came back with. The idea that the human race has, quote, evolved, unquote, is false. Evolution or adaptation within a species was abandoned by biology and genetics long ago. So I'm going, whoa, wait a sec, that's news to me.
Speaker 1
00:14:40 - 00:15:14
What have you been reading? And I presume this has been pulled out of some recess of the internet. Is it possible, even in theory, to imagine how a model can gravitate towards truth, wisdom, as opposed to just majority views? How do you avoid something taking us further into the sort of, you know, maze of errors and bad thinking and so forth that has already been a worrying feature of the last few years?
Speaker 2
00:15:14 - 00:15:57
It's a fantastic question, and I think it is the most interesting area of research that we need to pursue now. I think at this point, the question of whether we can build really powerful general purpose AI systems, I won't say it's in the rear view mirror, we still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are, like, what should we build? And how, and why, and what data should we train on? And how do we build systems not just that can do these phenomenally impressive things, but that we can ensure do the things that we want, and that understand the concepts of truth and falsehood and, you know, alignment with human values and misalignment with human values.
Speaker 2
00:15:59 - 00:16:42
One of the pieces of research that we put out last year that I was most proud of and most excited about is what we call reinforcement learning from human feedback. And we showed that we can take these giant models that are trained on a bunch of stuff, some of it good, some of it bad, and then, with a really quite small amount of feedback from human judgment about, hey, this is good, this is bad, this is wrong, this is the behavior I want, I don't want this behavior, we can feed that information from the human judges back into the model, and we can teach the model to behave more like this and less like that. And it works better than I ever imagined it would. And that gives me a lot of hope that we can build an aligned system. We'll do other things too.
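A highly simplified sketch of that feedback loop, with every component stubbed out: generate candidate outputs, collect a small number of human good/bad judgments, fit a stand-in reward model to them, and prefer the outputs it scores highly. A real system fine-tunes the language model against the learned reward with reinforcement learning rather than filtering samples, so treat this as an outline of the idea only.

```python
# Simplified, stand-in version of learning from human feedback; not the production method.
def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Stand-in generator: a real system would sample from the language model."""
    return [f"{prompt} ... candidate {i}" for i in range(n)]

def human_judgment(text: str) -> int:
    """Stand-in for a human labeler marking an output good (+1) or bad (-1)."""
    return 1 if "candidate 0" in text else -1

def fit_reward_model(labeled: list[tuple[str, int]]):
    """Stand-in reward model: remember which outputs the judges liked."""
    liked = {text for text, score in labeled if score > 0}
    return lambda text: 1.0 if text in liked else 0.0

prompt = "Explain the weather to a child"
candidates = generate_candidates(prompt)
labels = [(c, human_judgment(c)) for c in candidates]   # small amount of human feedback
reward = fit_reward_model(labels)
best = max(candidates, key=reward)                      # steer toward judged-good behavior
print(best)
```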
Speaker 2
00:16:42 - 00:17:40
Like, I think curating data sets where there's just less bad data to train on will go a very long way. And as these models get smarter, I think they inherently develop the ability to sort out bad data from good data. And as they get really smart, they'll even start to do something we call active learning, which is where they ask us for exactly the data they need when they're missing something, or when they're unsure, or when they don't understand. But I think as a result of simply scaling these models up, building better, I hate to use the word cognition because it sounds so anthropomorphic, but let's say building a better ability to reason into the models, to think, to challenge, to try to understand, and combining that with this idea of aligning to human values via this technique we developed, that's going to go a very long way. Now there's another question, which you sort of just kicked the ball down the field to, which is: how do we as a society decide to which set of human values do we align these powerful systems?
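The "active learning" idea mentioned here can be sketched as uncertainty sampling: the model flags the inputs it is least confident about and asks a human for exactly those labels. The confidence scores below are made up for illustration; a real system would use the model's own predictive uncertainty.

```python
# Uncertainty-sampling sketch of active learning: ask humans about the least-confident inputs.
def request_labels(examples_with_confidence: list[tuple[str, float]], budget: int = 2):
    """Return the examples the model most wants a human to label (lowest confidence)."""
    ranked = sorted(examples_with_confidence, key=lambda pair: pair[1])
    return [example for example, _ in ranked[:budget]]

pool = [
    ("Has the human race evolved?", 0.35),            # model is unsure, so ask
    ("What is 2 + 2?", 0.99),                         # model is confident, so skip
    ("Is this claim about genetics accurate?", 0.40),
    ("Translate 'hello' to French.", 0.97),
]
print(request_labels(pool))  # the two least-confident queries go back to human judges
```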
Speaker 1
00:17:41 - 00:18:13
Yeah, indeed. So if I understand rightly what you're saying there, you're saying that it's possible to look at the output at any one time of GPT-3, and if we don't like what it's coming up with, some wise human can say, no, that was off, don't do that. Whatever algorithm or process led you to that, undo it. And the system is then incredibly powerful at avoiding that same kind of mistake in future, because it sort of back-propagates the instruction.
Speaker 2
00:18:14 - 00:18:26
Correct. And eventually, and not much longer, I believe that we'll be able to not only say that was good, that was bad, but say that was bad for this reason, and also tell me how you got to that answer so I can make sure I understand.
Speaker 1
00:18:27 - 00:19:01
But at the end of the day, someone needs to decide who is the wise human or humans who are looking at the results. Because there's a big difference: someone who grew up with an intelligent design worldview could look at that and go, that's a brilliant outcome, gold star, well done. And someone else would say, something's gone awfully wrong here. So how do you avoid, and this is a version of the problem that a lot of the, I guess, Silicon Valley companies are facing right now in terms of the pushback they're getting on the output of social media and so forth.
Speaker 1
00:19:01 - 00:19:09
How do you assemble that pool of experts who stand for human values that we actually want?
Speaker 2
00:19:12 - 00:19:50
I mean, we talk about this all the time. I don't think this is solely, or even close to majorly, up to OpenAI to decide. I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input in that, and how we sort of build these very difficult global governance systems. My personal belief is that we should have pretty broad rules about what these systems will never do and will always do, but then the individual user should get a system that kind of behaves like they want. And there will be, you know, people do have very different value systems.
Speaker 2
00:19:51 - 00:20:09
Some of them are just fundamentally incompatible. No one gets to use AI to exploit other people, for example, and hopefully we can all agree on that. But do you want the AI to, you know, support you in your belief in intelligent design? Do I think OpenAI should say it can't, even though I vehemently disagree with that as a scientific conclusion?
Speaker 2
00:20:09 - 00:20:36
No, I wouldn't take that stance. I think the thing to remember about all of this is that GPT-3 is still quite extraordinarily weak. It still has such big problems and is still so unreliable that for most use cases, it's still unsuitable. But think about a system that is a thousand times more powerful, and let's say a million times more reliable. You know, it just doesn't say gibberish very often.
Speaker 2
00:20:36 - 00:20:57
It doesn't totally lose the plot and get distracted. A system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people saying you can never use it for this thing that most of the world wants to use it for, because it doesn't match our personal beliefs.
Speaker 1
00:20:58 - 00:21:28
Talk a bit more about some of the other uses of it, because one of the things that's most surprising is it's not just about sort of text responses; it can take generalized human instructions and build things. So for example, you can say to it, write a Python program that is designed to put a flashing cursor in one corner of the screen and the Google logo in the other corner, and it can go and do something like that shockingly effectively.
Speaker 2
00:21:31 - 00:21:32
It can translate.
Speaker 1
00:21:32 - 00:21:50
That's amazing. That seems amazing to me. That opens the door to an entirely new way to think about programming in the future, where you could have people who program just in natural human language, potentially, and gain rapid efficiency and let the AI do the engineering?
Speaker 2
00:21:50 - 00:22:13
We're not that far away from that world. We're not that far away from the world where you will write a spec in English and, for a simple enough program, the AI will just write the code for you. As you said, you can see glimpses of that even in this very weak GPT-3, which was not trained to code. I think this is important to remember: we trained it on the language on the internet, and language on the internet, relatively rarely, also includes some code snippets.
Speaker 2
00:22:15 - 00:22:54
And that was enough. So if we really try to go train a model on code itself, and that's where we decide to put the horsepower of the model, just imagine what would be possible. It would be quite impressive. But I think what you're pointing to there is that models like GPT-3, to some degree or other, and it's very hard to know exactly how much, understand the underlying concepts of what's going on. They're not just regurgitating things they found on a website; they can really apply them and say, oh yeah, I kind of know about this word and this idea in code, and this is probably what you're trying to do, and I won't get it right always, but sometimes I will just generate this brand new program, for something no one has ever asked before, and it will work.
Speaker 2
00:22:55 - 00:23:12
That's pretty cool. And data is data, so it can do that from English to code, it can do that from English to French. Again, we never told it to learn about translation. We never told it about the concepts of English and French, but it learned them. Even though we never said, this is what English is, and this is what French is, and this is what it means to translate,
Speaker 1
00:23:13 - 00:23:48
it can still do it. I mean, for creative people, is there a world coming where the palette of possibility that they can be exposed to just explodes? I mean, if you're a musician, is there a near future where you can say to your AI, okay, I'm going to bed now, but in the morning I'd love you to present me with a thousand two-bar jingles with words attached that you think have a sort of meme factor to them. And you come down in the morning and the computer shows you this stuff, and one of them you go, wow, that is it. That is a top 10 hit.
Speaker 1
00:23:48 - 00:23:53
And you build a song from it. Is that going to actually be a value add?
Speaker 2
00:23:53 - 00:24:25
We released something last year called Jukebox, which is very near what you described, where you can say, I want music generated for me in this style or this kind of stuff, and it can come up with the words as well, and it's pretty cool. And I really enjoy listening to music that it creates, and you can do full songs, two bars of a jingle, whatever you'd like. And one of my very favorite artists reached out cold to OpenAI after we released this and said that he wanted to talk. And I was like, whoa, total fanboy here. I'd love to join that call.
Speaker 2
00:24:25 - 00:24:45
And I was so nervous that he was going to say, this is terrible. This is like a really sad thing for human creativity. Like, you know, why are you doing this? This is like, whatever. And he was so excited, and he's like, this has been so inspiring, I want to do a new album with this, you know, it's like giving me all these new ideas, it's making me much better at my job, I'm going to make better music because of this tool.
Speaker 2
00:24:45 - 00:25:13
And that was awesome, and I hope that's how it all continues to go. And I think it is going to lead to this. We see a similar thing now with DALL-E, where graphic designers sometimes tell us that they see this new set of possibilities, because there's new creative inspiration, and their cycle time, like the amount of time it takes to just come up with an idea and be able to look at it and then decide whether to go down that path or head in a different direction, goes down so much. And so I think it's going to just be this incredible creative explosion for humans.
Speaker 1
00:25:14 - 00:25:55
And how far away are we, Sam, before an AI comes up with a genuinely powerful new idea? An idea that solves a problem that humans have been wrestling with. It doesn't have to be quite on the scale of, okay, we've got a virus coming, please describe to us what a rational national response should look like. But some kind of genuinely innovative idea or solution. Like, one internal question we've asked ourselves is, when will the first genuinely interesting, purely AI-written TED talk show up?
Speaker 2
00:25:55 - 00:26:16
I think that's a great milestone. I will say it's always hard to guess timelines. I'm sure I'll be wrong on this, but I would guess the first genuinely interesting TED talk thought of, written, and delivered by an AI is within kind of a seven-ish year time frame. Maybe a little bit less.
Speaker 1
00:26:17 - 00:26:37
And it feels like, I mean, just reading that Guardian essay, which was a composite of several different GPT-3 responses to questions about, you know, the threats of robotics or whatever, if you throw a human editor into the mix, you could probably imagine something much sooner, indeed, like tomorrow.
Speaker 2
00:26:38 - 00:26:59
Yeah, so the hybrid version, where it's basically a tool-assisted TED talk, but it is better than any TED talk a human could generate in 100 hours or whatever. If you can combine human discretion with AI horsepower, I suspect that's like a next year, two years from now kind of thing, where it's just really quite good.
Speaker 1
00:27:01 - 00:27:19
That's really interesting. How do you view the impact of AI on jobs? The familiar story, obviously, is that every white-collar job is now up for destruction. What's your view there?
Speaker 2
00:27:20 - 00:27:49
You know, I think it's always hard to make these predictions. That is definitely the familiar story now. Five years ago, it was every blue-collar job is up for destruction. Maybe last year, it was every creative job is up for destruction, because of things like Jukebox. I think there will be an enormous impact on the job market, and I really hate it.
Speaker 2
00:27:49 - 00:28:09
I think it's kind of gross when people working on AI pretend like there's not going to be, or sort of say, oh, don't worry about it, it'll all obviously just be better. It doesn't always obviously get better. I think what is true is every technological revolution produces a change in jobs. We always find new ones, at least so far.
Speaker 2
00:28:09 - 00:28:55
It's difficult to predict from where we're sitting now what the new ones will be. And this technological revolution is likely to be, again, it's always tempting to say this time it's different, maybe I'll be totally wrong, but from what I see now, this technological revolution is likely to be more dramatic, more of a staccato note than most. And I think we as a society need to figure out how we're gonna cushion everybody through that. I've got my own ideas about how to do that. I wouldn't say that I have any reason to believe they're the right ones, but doing nothing and not really engaging with the magnitude of what's about to happen, I think is like not an acceptable answer.
Speaker 2
00:28:56 - 00:29:19
So there's going to be huge impact. It's difficult to predict where it shows up the most. I think previous predictions have mostly been wrong. But I'd like to see us all as society, certainly as a field, engage in what the shifts we want to make to the social contract are to kind of get through that in a way that is maximally beneficial to everybody.
Speaker 1
00:29:20 - 00:30:07
I mean, in every past revolution, there's always been a space for humans to move to that is, if you like, moving up the food chain. We've retreated to the things that humans could uniquely do, think better, be more creative, and so forth. I guess the worry about AI is that in principle, and I think you probably believe this, that there is no human cognitive feat that won't ultimately be doable, probably better by artificial general intelligence simply because of the extra firepower that ultimately they can have, the vast knowledge that they bring to the table and so forth. Is that basically right? There is ultimately no safe sort of space where we could say, oh, but they'll never be able to do that.
Speaker 2
00:30:07 - 00:30:45
On a very long time horizon, I agree with you. But that's such a long time horizon, I think, that, you know, like maybe we've merged by that point, like maybe we're all plugged in and then we're this sort of symbiotic thing. I think an interesting example is what we were talking about a few minutes ago, where right now we have these systems that have sort of enormous horsepower, but no steering wheel. Incredible capabilities, but no judgment. And there's these obvious ways in which today even a human plus GPT-3 is far better than either on their own.
Speaker 1
00:30:45 - 00:30:58
Many people speak about a world where AI is this external threat. You speak about a point where we actually merge with AIs in some way. What do you mean by that?
Speaker 2
00:30:59 - 00:31:28
There are a lot of different versions of what I think is possible there. You know, in some sense, I'd argue the merge has already begun, the human-technology merge. We have this thing in our hands that sort of dictates a lot of what we think, but it gives us real superpowers. And that can go much, much further. Maybe it goes all the way to the Elon Musk vision of Neuralink and having our brains plugged into computers, so we literally have a computer on the back of our head, or it goes the other direction and we get uploaded into one.
Speaker 2
00:31:28 - 00:31:49
Or maybe it's just that we all have a chatbot that kind of constantly steers us and helps us make better decisions than we could. But in any case, I think the fundamental thing is that it's not the humans versus the AI, competing to be the smartest sentient thing on earth or beyond. It's this idea of being on the same team.
Speaker 1
00:31:50 - 00:32:03
I certainly get very excited by the sort of medium-term potential for creative people of all sorts, if they're willing to expand their palette of possibilities with the use of AI.
Speaker 2
00:32:03 - 00:32:16
You have to be willing to. I mean, the one thing that the history of technology has shown again and again is that something this powerful and with this much benefit is unstoppable, and you will get rewarded for embracing it the most and the earliest.
Speaker 1
00:32:18 - 00:32:18
♪
Speaker 1
00:32:33 - 00:33:02
So talk about what can go wrong with AI. Let's move away from just the sort of economic displacement factor. You were a co-founder of OpenAI because you saw existential risks to humanity from AI. Today, what would you put as the most worrying of those risks, and how is OpenAI working to minimize them?
Speaker 2
00:33:02 - 00:33:56
I still think all of the really horrifying risks exist. I am more confident, much more confident than I was five years ago when we started, that there are technical things we can do, in how we build these systems and in the research and the alignment work, that make us much more likely to end up in the really wonderful camp. But, you know, maybe OpenAI falls behind and somebody else builds AGI that thinks about it in a very different way, or doesn't care as much as we'd like about safety and the risks, or has a different trade-off of how fast we should go with this, and where we should just say, let's push on for the economic benefits. But I think all of the risks that have traditionally been in the realm of sci-fi are real, and we should not ignore them, and I still lose sleep over them.
Speaker 1
00:33:56 - 00:34:23
And just to update people, AGI is artificial general intelligence. Right now we have incredible examples of powerful AI operating in specific areas. AGI is the ability of a computer mind to connect the dots and to make decisions with the same level of breadth that humans have. What's your sort of elevator pitch on AGI, about how to identify it and how to think of it?
Speaker 2
00:34:23 - 00:35:11
Yeah, I mean, the way that I would say it is that for a while we were in this world of very narrow AI, you know, that could classify images of cats or whatever, more advanced stuff than that, but that kind of thing. We are now in the era of general purpose AI, where you have these systems that are still very much imperfect tools, but that can generalize. A thing like GPT-3 can write essays and translate between languages and write computer code and do very complicated search. It's a single model that understands enough of what's really going on to do a broad array of tasks and learn new things quickly, sort of like people can. And then eventually we'll get to this other realm.
Speaker 2
00:35:11 - 00:35:22
Some people call it AGI, some people call it lots of other things, But I think it implies that the systems are like to some degree self-directed, have some intentionality of their own.
Speaker 1
00:35:23 - 00:35:53
Is a simple summary to say that the fundamental risk is that there's the potential, with general artificial intelligence, of a sort of runaway effect of self-improvement that can happen far faster than humans can even keep up with? So that the day after you get to AGI, suddenly computers are thousands of times more advanced than us, and we have no way of controlling what they do with that power.
Speaker 2
00:35:54 - 00:36:38
Yeah, and that is certainly in the risk space, which is that we build this thing, and at some point, somewhat suddenly, it's much more powerful than we are. We haven't really done the full merge yet. There's an event horizon there, and it's sort of hard to see to the other side of it. Again, lots of reasons to think it will go okay, lots of reasons to think we won't even get to that scenario, but that is something that I don't think people should brush under the rug as much as they do. It's in the possibility space for sure, and in the possibility subspace of that is one where we didn't actually do as good of a job on the alignment work as we thought, and this sort of child of humanity acts in a very different way than we think.
Speaker 2
00:36:38 - 00:37:07
A framework that I find useful is to think about a two-by-two matrix: short timelines to AGI and long timelines to AGI on one axis, and a slow takeoff and a fast takeoff on the other. And in the short-timelines, fast-takeoff quadrant, which is not where I think we're going to be, but if we get there, I think there are a lot of scenarios in the direction that you're describing that are worrisome, and that we would want to spend a lot of effort planning for.
Speaker 1
00:37:09 - 00:37:22
I mean, the fact that a computer could start editing its own code and improving itself while we're asleep and you wake up in the morning and it's got smarter. That is the start of something super powerful and potentially scary.
Speaker 2
00:37:24 - 00:37:41
I have tremendous misgivings about letting an AI system, not one we have today, but one that we might have in not too many more years, start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a great deal of societal discussion, you know: just because we can do that, should we?
Speaker 1
00:37:42 - 00:38:16
Yes, because one of the things that's been most shocking to me about the last few years has been just the power of unintended consequences. It's like you don't have to have a belief that there's some sort of waking up of an alien intelligence that suddenly decides it wants to wreak havoc on humans. That may never happen. What you can have is just incredible power that goes amok. So a lot of people would argue that what's happened in technology in the last few years is actually an example of that.
Speaker 1
00:38:16 - 00:38:46
You know, social media companies created these intelligences that were programmed to maximally harvest attention, for example. And the unintended consequences from that turned out to be in some ways horrifying and extraordinarily damaging. Is that a meaningful sort of canary in the coal mine, saying, look out, humanity, this could be really dangerous? And how on earth do you protect against those kinds of unintended consequences?
Speaker 2
00:38:47 - 00:39:24
I think you raise a great point in general, which is that these systems don't have to wish ill on humanity to cause ill. When you have very powerful systems, there are unintended consequences for sure, but another version of that, and I think this applies at the technical level, at the company level, at the societal level, is that incentives are superpowers. Charlie Munger had this thing, which is that incentives are so powerful that if you can spend any time whatsoever working on the incentive system, that's what you should do before you work on anything else. And I really believe that. And I think that applies to the individual models we build and what their reward functions look like.
Speaker 2
00:39:24 - 00:39:58
I think it applies to society in a big way. And I think it applies to our corporate structure at OpenAI. You know, we sort of observed that if you have very well-meaning people, but they have this incentive to sort of like maximize attention harvesting and profit forever, through no one's ill intentions, that leads to a quite undesirable outcome. And so we set up OpenAI as this thing called a capped profit model, specifically so that we don't have the system incentive to just generate maximum value forever with an AGI. That seems like obviously quite broken.
Speaker 2
00:39:58 - 00:40:37
But even though we knew that was bad, and even though we all like to think of ourselves as good people, it took us a long time to figure out the right structure, to figure out a charter that's going to govern us and a set of incentives that we believe will let us do our work. We have these three elements that we talk about a lot: research; engineering, development, and deployment; and policy and safety. Put those all together under a system where you don't have to rely on anything but the natural incentives to push in a direction that we hope will minimize the negative unintended consequences.
Speaker 1
00:40:39 - 00:41:26
So help me understand this, because I think this is confusing to some people. You started OpenAI initially, I think Elon Musk was a co-founder, and there was a group of you, and the argument was: this technology is too powerful to be left developed in secret, and to be left developed purely by corporations with whatever incentive they may have. We need a non-profit that will develop and share knowledge openly. First of all, even at that early stage, some people were confused about this. They were saying, if this thing is so dangerous, why on earth would you want to make its secrets even more available?
Speaker 1
00:41:26 - 00:41:32
You may be giving the tools to the sort of AI terrorist in his bedroom somewhere.
Speaker 2
00:41:32 - 00:42:15
Yeah, I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to build this super weapon and hand it to a terrorist. That's obviously awful. One of the reasons that we like our API model is it lets us make the most powerful AI technology anyone in the world has, as far as we know, available to whoever would like to use it, but to put some controls on its usage, and also, if we make a mistake, to be able to pull it back or change it or tweak it or improve it or whatever. But we do want to put, and this continues to be true, very powerful technology in the hands of people, with appropriate restrictions and guardrails.
Speaker 2
00:42:15 - 00:42:44
I think that is fair. I think that will lead to the best results for society as a whole, and I think it will sort of maximize benefit. But that's very different than shipping the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think, and this is part of the mission, that something the field was doing a lot of, which we didn't feel good about, was keeping the pace of progress and capabilities secret.
Speaker 2
00:42:44 - 00:43:05
That doesn't feel right, because I think we do need a societal conversation about what's going on here and what the impacts are going to be. And so, although we don't always say, you know, here's the super weapon, hopefully, we do try to say, this is really serious. This is a big deal. This is going to affect all of us. We need to have a big conversation about what to do with it.
Speaker 1
00:43:06 - 00:43:39
Help me understand the structure a bit better, Sam, because you definitely surprised a bunch of people when you announced that Microsoft were putting a billion dollars into the organization, and in return, I guess, they get certain exclusive licensing rights. So for example, they are the exclusive licensee of GPT-3. So talk about that structure. I mean, Microsoft presumably have invested not purely for altruistic purposes; they think that they will make money on that billion dollars.
Speaker 2
00:43:40 - 00:44:30
I sure hope they do. I love capitalism, but a thing that I really loved even more about Microsoft as a partner, and I'll talk about the structure and the exclusive license in a minute, is that we went around to people that might fund us and we said, one of the things here is that we're going to try to make you some money, but AGI going well is more important, and we need you to sign this document that says, if things don't go the way we think and we can't make you money, you just cheerfully walk away from it and we do the right thing for humanity. And they were like, yes, we are enthusiastic about that. We get that the mission comes first here. So again, I hope it's a phenomenal investment for them, but they really pleasantly surprised us on the upside of how aligned they were with us about how strange the world may get here, and the need for us to have flexibility and put our mission first, even if that means they lose all their money, which I hope they don't and don't think they will.
Speaker 1
00:44:31 - 00:44:46
So the way it's set up is that if at some point in the coming year or two, Microsoft decide that there's some incredible commercial opportunity that they could realize out of the AI that you've built, and you feel, actually, no, that's damaging, you can block it, you can veto it?
Speaker 2
00:44:46 - 00:45:17
Correct. So the full, most powerful version of GPT-3 and its successors are available via the API and we intend for that to continue. What Microsoft has is the ability to sort of put that model directly into their own technology if they want to do that. We don't plan to do that with other people because we can't have all these controls that we talked about earlier, but they're like a close, trusted partner and they really care about safety too. But our goal is that anybody who wants to use the API can have the most powerful versions of what we've trained.
Speaker 2
00:45:18 - 00:45:58
And the structure of the API lets us continue to increase the safety and fix problems when we find them. But on the structure: we started out as a non-profit, as you said. We realized pretty quickly that, although we went into this thinking that the way to get to AGI would be about smarter and smarter algorithms, we just needed bigger and bigger computers as well. And that was going to require a scale of capital that no one, certainly not me, could figure out how to raise as a non-profit. We also needed to be able to compensate very highly compensated, talented individuals who do this work. But a full for-profit company had this runaway incentives problem, among other things.
Speaker 2
00:45:58 - 00:46:56
Also just one about sort of fairness in society and wealth concentration that didn't feel right to us either. And so we came up with this kind of hybrid, where we have a non-profit that governs what we do, and it has a subsidiary LLC that we structured in a way that it can make a fixed amount of profit, so that all of our investors and employees, hopefully, if things go how we like, and if not, no one gets any money, get to make this one-time great return on their investment or on the time that they've spent at OpenAI and their equity here. And then beyond that, all the value flows back to the non-profit, and we figure out how to share it as fairly as we can with the world. And I think that this structure, and this non-profit with this very strong charter in place, and everybody who joins signing up for the mission coming first and the fact that the world may get strange, that was at least the best idea we could come up with.
Speaker 2
00:46:56 - 00:47:04
And I think it feels so far like the incentive system is working just as I sort of watch the way that we and our partners make decisions.
Speaker 1
00:47:04 - 00:47:11
But if I read it right, the cap on the gain that investors can make is 100x, and that's massive.
Speaker 2
00:47:11 - 00:47:19
Well, that was for our very first round investors. It's way, way lower now. As we take on incremental big capital, it's way, way lower.
Speaker 1
00:47:19 - 00:47:23
So your deal with Microsoft isn't you can only make the first hundred billion dollars?
Speaker 2
00:47:23 - 00:47:24
No, no, it's way lower.
Speaker 1
00:47:24 - 00:47:26
And after that we're giving it to the world.
Speaker 2
00:47:26 - 00:47:27
No, it's way lower than that.
Speaker 1
00:47:28 - 00:47:29
Have you disclosed what it is?
Speaker 2
00:47:29 - 00:47:33
I don't know if we have, so I won't accidentally do it now. All right.
Speaker 1
00:47:35 - 00:47:59
Okay, so explain a bit more about the charter and how it is that you hope to avoid, or I guess help contribute to an AI that is safe for humanity. What do you see as the keys to us avoiding the worst mistakes and really holding on to something that's beneficial for humanity?
Speaker 2
00:48:01 - 00:48:23
My answer there is actually more about like technical and societal issues than the charter. So if it's okay for me to answer it from that perspective, sure. Okay. I'm happy to talk about the charter too. I think this question of alignment that we talked about a little earlier is paramount.
Speaker 2
00:48:24 - 00:49:33
And then I think, to understand that, it's useful to differentiate between accidental misuse of a system and intentional misuse of a system. So intentional would be a bad actor saying, I've got this powerful system, I'm going to use it to hack into all the computers in the world and wreak havoc on the power grids. And accidental would be kind of the Nick Bostrom scenario: make a lot of paperclips, and view humans as collateral damage. In both cases, but to varying degrees, if we can really, truly, technically solve the alignment problem, and the societal problem of deciding to which set of human values we align, then the systems understand right and wrong, and they understand, probably better than we ever can, unintended consequences from complex actions in very complex systems. If we can train a system which is, like, don't harm humanity, and the system can really understand what we mean when we say that, then again, who is "we" and what "that" means have some asterisks on them.
Speaker 2
00:49:33 - 00:49:34
Sorry, go ahead.
Speaker 1
00:49:34 - 00:50:24
Well, I was going to say, that's if they could understand what it means to not harm humanity. There's a lot wrapped up in that sentence, because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the sort of Facebook and Twitter examples: the engineers building some of the systems would say, we've just designed them around what humans want to do. They said, well, if someone wants to click on something, we will give them more of that thing. And what could possibly be wrong with that? We're just supporting human choice, ignoring the fact that humans are complicated, weird animals who are constantly making choices that a more reflective version of themselves would agree are not in their long-term interests.
Speaker 1
00:50:24 - 00:51:01
So that's one part of it. And then you've got, layered on top of that, all the complications of systemic complexity, where multiple choices by thousands of people end up creating a reality that no one could possibly have designed. So how do you cut through that? An AI has to make a decision in a moment, based on a specific data set. As those decisions get more powerful, how can we be confident that they don't lead to this sort of system basically crashing in some way?
Speaker 2
00:51:03 - 00:51:43
I think what I've heard a lot of behavioral psychologists and other people that have studied this say, in different ways, is that, and I hate to keep picking on Facebook, but we can do it one more time since we're on the topic, maybe you can't, in any given moment at night where you're tired and you had a stressful day, stop yourself from the dopamine hit of scrolling on Instagram, even though you know that that's bad for you and it's not leading to your best life. But if you were asked in a reflective moment, where you were fully alert and thoughtful, do you want to spend as much time as you do scrolling through Instagram? Does it make you happy or not? You would actually be able to give the right long-term answer.
Speaker 2
00:51:43 - 00:52:28
It's sort of a "the spirit is willing but the flesh is weak" kind of moment. And one thing that I am hopeful about is that humans do know, on the whole, what we want, and if presented with research or an objective view about what makes us happy and what doesn't, we're, let's not say great about it, but pretty good. But in any particular moment, we are subject to our animal instincts, and it is easy for the lower brain to take over. The AI will, I think, be an even higher brain, and as we teach it, you know, here's what we really do value, here's what we really do want, it will help us make better decisions than we are capable of even in our best moments.
Speaker 1
00:52:29 - 00:52:54
So is that being proposed and talked about as an actual rule? Because it strikes me that there is something potentially super profound here: to introduce some kind of rule for the development of AIs, that they have to tap into not what humans want, which is an ill-defined question, but what humans in reflective mode want.
Speaker 2
00:52:54 - 00:52:56
Yeah, we talk about this a lot.
Speaker 1
00:52:56 - 00:53:17
I mean, do you see a real chance where something like that could be incorporated as a sort of absolute golden rule, and, if you like, spread around the community so that it seeps into corporations and elsewhere? I've seen no evidence of that. It would potentially be a game changer.
Speaker 2
00:53:17 - 00:53:52
Corporations have this weird incentive problem, right? What I was trying to speak about was something that I think should be technologically possible and it's something that we as a society should demand. And I think it is technically possible for this to be sort of like a layer above the neocortex that makes even better decisions for us and our welfare and our long-term happiness and fulfillment than we could make on our own. And I think it is possible for us as a society to demand that. And if we can do like a pincer move between what the technology is capable of and what we as a society demand, maybe we can make everybody in the middle act that way.
Speaker 1
00:53:54 - 00:54:49
I mean, there are instances where, even though companies have their incentives to make money and so forth, they also, in the knowledge age, can't make money if they have pissed off too many of their employees and customers and investors. By analogy, in the climate space right now, you can see more and more companies, even those that are emitting huge amounts of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people because they don't want to work for something that's evil, and their customers are saying, we don't want to buy something that is evil. And so, you know, ultimately you can picture processes where they do better. And I believe that most engineers, for example, working in Silicon Valley companies are actually good people who want to design great products for humanity. I think that the people who run most of these companies want to be a net contribution to humanity.
Speaker 1
00:54:49 - 00:55:15
It's that we've rushed really quickly and designed stuff without thinking it through properly, and it's led to a mess. So it's like, okay, don't move fast and break things; slow down and build beautiful things that are built on a real version of human nature, and on a real version of system complexity and the risks associated with systemic complexity. Is that the agenda that, fundamentally, you think you can push somehow?
Speaker 2
00:55:16 - 00:55:45
Yes, but I think the way we can push it is by getting the incentive system right. I think most people are fundamentally extremely good. Very few people wake up in the morning thinking about how they can make the world a worse place. But the incentive systems that we're in are so powerful, and even those engineers who join with the absolute best of intentions get sucked into this world where they're trying to go up, you know, from L4 to L5 or whatever Facebook calls those things.
Speaker 2
00:55:45 - 00:56:10
It's pretty exciting, you get caught up playing the game, you're rewarded for doing things that move the company's key metrics. It's fun to get promoted, it feels good to make more money. And the incentive systems of the company, and what it rewards in individual performance, are maybe not what we all want. And here I don't want to pick on Facebook at all, because I think there are versions of this at play at every big tech company, including, in some ways, I'm sure, at OpenAI.
Speaker 2
00:56:11 - 00:56:39
But to the degree that we can better align the incentives of companies with the welfare of society and then the incentives of an individual at those companies with the now realigned incentive of those companies, the more likely we are to be able to have things like AGI that follow an incentive system of what we want in our most reflective best moments, and are even better than what we can think of ourselves.
Speaker 1
00:56:41 - 00:57:07
Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of other corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream for? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails?
Speaker 2
00:57:07 - 00:57:24
I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first. And I think if you are the first, you have a lot of norm-setting power. And I think you've already seen that. We have released some of the most powerful systems to date.
Speaker 2
00:57:24 - 00:57:57
And I think the way that we have done that, kind of in a controlled release, where we've released a bigger model, then a bigger 1, then a bigger 1, and we sort of try to talk about the potential misuse cases, and we try to talk about the importance of releasing this behind an API so that you can make changes. Other groups have followed suit in some of those directions, and I think that's good. So yes, I don't think we can be the only one, but I do think we can be ahead. And if we are ahead, I think we can use that leverage to hopefully push people in a better direction. Or maybe we're wrong and somebody else has a better direction, and we're doing something bad.
Speaker 1
00:57:57 - 00:58:21
Do you have a structural advantage in that your mission is to do this for everyone, as opposed to for some corporate objective, and that that allows you... Why is it that GPT-3 came out of OpenAI and not somewhere else? It's surprising in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them.
Speaker 2
00:58:21 - 00:58:49
You know, in some sense it's surprising, and in some sense, like, the startup wins most of the time. I'm a huge believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine these 3 different clans of research, engineering, and sort of safety and policy that don't normally combine well. And I think we have an unusual strength there. We're clearly well-funded, we have super talented people.
Speaker 2
00:58:49 - 00:59:04
But what we really have is like intense focus and self-belief that what we're doing is possible and good. And I appreciate the implied compliment. But you know, we like work really hard and if we stop doing that, I'm sure someone would run by us fast.
Speaker 1
00:59:05 - 00:59:25
Tell us a bit more about some of your prior life, Sam, because for several years you were running Y Combinator, which has had this incredible impact on so many companies, and there are so many startup stories that began at Y Combinator. What were key drivers in your own life that took you on the path you're on, and how did that path end up at Y Combinator?
Speaker 2
00:59:27 - 00:59:51
No exaggeration, I think I have back to back had the 2 jobs that are at least the most interesting to me in all of Silicon Valley. I went to college to study computer science. I was a major computer nerd growing up. I knew a little bit about startups, but not very much. I started working on this project, and the same year I started working on that, this thing called Y Combinator started and funded me and my co-founders.
Speaker 2
00:59:52 - 01:00:26
And we dropped out of school and did this company, which I ran for like 7 years. And then after that got acquired, I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people and spirit and set of incentives and just badly misunderstood by most of the world but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. My company got acquired. PG, who is the founder of YC and like truly 1 of the most incredible humans and business people and...
Speaker 2
01:00:27 - 01:00:38
Paul Graham. Paul Graham asked me if I wanted to run it. And kind of the central learning of my career, across YC, OpenAI, and individual startups, has been that if you really scale them up, remarkable things can happen.
Speaker 1
01:00:40 - 01:00:41
And
Speaker 2
01:00:43 - 01:00:56
I did it and I was like, 1 of the things that would make this exciting for me personally and motivating would be if I could sort of push it in the direction of doing these hard tech companies, 1 of which became OpenAI. And if I
Speaker 1
01:00:56 - 01:01:03
could- So describe actually what Y Combinator is, how many people come through it, give us a couple of stories of its impact.
Speaker 2
01:01:03 - 01:01:36
Yeah, so you basically apply as a handful of people and an idea, maybe a prototype, and say, I would like to start a company, and will you please fund me? And we review those applications, and we, I shouldn't say we anymore, I guess they fund 400 companies a year. You get about $150,000. YC takes about 7% ownership and then gives you lots of advice and a network, and it's sort of this fast-track program for starting a startup. I haven't looked at this in a while, but at 1 point a significant fraction of all the billion-dollar-plus companies in the US that got started came through the YC program.
Speaker 2
01:01:38 - 01:02:03
Some recently-in-the-news ones have been Airbnb, DoorDash, Coinbase, Instacart, Stripe. And I think it has just become an incredible way to help people who understand technology get a 3-month course in business. But instead of hurting you with an MBA, we actually teach you the things that matter, and people kind of go on to do incredible, incredible work.
Speaker 1
01:02:04 - 01:02:18
What is it about entrepreneurs? Why do they matter? Some people just find them kind of annoying. But I think you would argue, I think I would argue that they have done as much as anyone to shape the future. Why?
Speaker 1
01:02:18 - 01:02:19
What is it about them?
Speaker 2
01:02:20 - 01:03:13
I think it is the ability to take an idea and, by force of will, to make it happen in the world. And in an incentive system that rewards you for making the most impact on the most people, like in our system, that's how we get most of the things that we use. That's how we got the computer that I'm using, the software I'm using to talk to you on it, all of this. You know, like everything in life, everything has a balance sheet. There's plenty of very annoying things about them, and there's plenty of very annoying things about the system that sort of idolizes them. But we do get something really important in return. And I think that as a force for making things happen that make all of our lives better, it's very cool.
Speaker 2
01:03:13 - 01:03:32
Otherwise, you know, if you have a great idea but you don't actually do anything useful with it for people, that's still cool, it's still intellectually interesting. But there's gotta be something about the reward function in society that asks, did you actually do something useful? Did you create net value? And I think entrepreneurship and startups are a wonderful way to do that.
Speaker 2
01:03:32 - 01:03:49
You know, we get all these great software companies, but I also think it's like how we're gonna get AGI, how we're gonna get nuclear fusion, how we're gonna get life extension. And like on any of those topics or a long list of other things I could point to, there's like a number of startups that I think are doing incredible work, some of which will actually deliver.
Speaker 1
01:03:51 - 01:04:31
I mean, it is a truly amazing thing when you pull the camera back: a human being can be lying awake at night, and something pops inside their mind, a repatterning of the neurons in their brain, that is effectively them saying, aha, I can see a way the future could be better. And they can actually picture it. And then they wake up, and then they talk to other people, and they persuade them, and they persuade investors and so forth. And the fact that this can happen, and that you can then actually change history, change it in some sense, it is mind-boggling that it happens that way. And it happens, you know, again and again.
Speaker 1
01:04:31 - 01:04:47
So you've seen so many of these stories happen. What would you say is the key thing that differentiates good entrepreneurs from others? If you could double down on 1 trait, what would it be?
Speaker 2
01:04:48 - 01:05:19
If I could pick only 1, I would pick determination. I think that is the biggest predictor of success, or at least the most differentiated predictor. And if you would allow a second, I would pick communication skills or evangelism or something in that direction as well. There are all the obvious ones that matter, like intelligence, but there are a lot of smart people in the world. And when I look back at the thousands of entrepreneurs I've worked with, many of whom were quite capable, I would say those are like 1 and 2 of the surprisingly differentiated characteristics.
Speaker 1
01:05:20 - 01:05:47
Well, when I look at the different things that you've built and been working on, I mean, it could not be more foundational for the future. I mean, entrepreneurship, AI, you know, I agree that this is really what has driven the future. Some people, you know, they look at Silicon Valley and they look at this story, and they worry about the culture, right? That it's a sort of bro culture. Do you see prospects of that changing anytime soon?
Speaker 1
01:05:47 - 01:05:57
And would you welcome it? Can we get better companies by really working to expand the group of people who can be entrepreneurs and who can contribute to AI, for example?
Speaker 2
01:05:57 - 01:06:36
For sure. And in fact, I'm hopeful, since these are the 2 things I've thought the most about. I'm excited for the day when someone combines them and uses AI to better, maybe even more fairly, select who to fund and how to advise them, and really make entrepreneurship super widely available. That will lead to better outcomes and sort of more societal wealth for all of us. So yeah, I think broadening the set of people able to start companies and get the resources that you need, that is an unequivocally good thing.
Speaker 2
01:06:36 - 01:06:56
And it's something that I think Silicon Valley is making some progress in, but I hope we see a lot more. And I do really, truly think of the technology industry and entrepreneurship as 1 of the greatest forces for self-betterment, if we can just figure out how to be a little bit more inclusive in how we do things.
Speaker 1
01:06:57 - 01:07:07
My last question, Sam. TED is all about ideas worth spreading. If you could inject 1 idea into the mind of everyone listening, what would that idea be?
Speaker 2
01:07:09 - 01:07:36
We've touched on it a bunch, but the 1 idea would be that AGI really is going to happen. You have to engage with it seriously and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen. Because it is going to affect everything and we all, I think, have an obligation, but also an opportunity to figure out what that means and how we want the world and this sort of one-time shift to go.
Speaker 1
01:07:38 - 01:07:46
Sam Altman, I'm kind of awed by the breadth of things you're engaged with. Thank you so much for spending so much time sharing your vision.
Speaker 2
01:07:46 - 01:07:47
Thanks so much for having me.
Speaker 1
01:07:58 - 01:08:24
Okay, that's it for today. You can read more about OpenAI's vision and progress at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky. You have to find a website that has licensed the API. The 1 I went to was philosopherai.com, where you pay a few dollars to get access to a very strange mind.
Speaker 1
01:08:24 - 01:08:51
It's actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Nederfein-Peterser and edited by Grace Rubinstein and Sheila Orfano. Samber is our mixer. Fact check is by Paul Durbin and special thanks to Michelle Quint, Colin Helms, and Anna Phelan.
Speaker 1
01:08:51 - 01:09:02
If you like the show, please rate and review it. It helps other people find us. We read every review. So, thanks so much for listening. See you next time.