Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419

Sam Altman

00:00 - 00:26

I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world. I expect that by the end of this decade and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable. The road to AGI should be a giant power struggle. I expect that to be the case.

Lex Fridman

00:26 - 01:13

Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power? The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day, the very company that will build AGI. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman. Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.

Sam Altman

01:13 - 01:59

That was definitely the most painful professional experience of my life. And chaotic and shameful and upsetting and a bunch of other negative things. There were great things about it too, and I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate them at the time. I came across this old tweet of mine, this tweet of mine from that time period, which was, like, you know, kind of going to your own eulogy, watching people say all these great things about you, and just, like, unbelievable support from people I love and care about.

Sam Altman

02:01 - 02:43

That was really nice. That whole weekend, I kind of like felt, with one big exception, I felt like a great deal of love and very little hate. Even though it felt like I just, I have no idea what's happening and what's going to happen here, and this feels really bad, and there were definitely times I thought it was going to be like one of the worst things to ever happen for AI safety. Well, I also think I'm happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy and explosive that happened.

Sam Altman

02:44 - 03:01

But there may be more crazy and explosive things still to happen. It still, I think, helped us build up some resilience and be ready for more challenges in the future.

Lex Fridman

03:02 - 03:08

But you had a sense that you would experience some kind of power struggle.

Sam Altman

03:08 - 03:17

The road to AGI should be a giant power struggle. Like the world should, well not should, I expect that to be the case.

Lex Fridman

03:17 - 03:38

And so you have to go through that, like you said, iterate as often as possible, in figuring out how to have a board structure, how to have organization, how to have the kind of people that you're working with, how to communicate, all that in order to de-escalate the power struggle as much as possible, pacify it.

Sam Altman

03:38 - 04:17

But at this point, it feels like something that was in the past that was really unpleasant and really difficult and painful. But we're back to work and things are so busy and so intense that I don't spend a lot of time thinking about it. There was a time after, there was like this fugue state for kind of like the month after, maybe 45 days after, that was, I was just sort of like drifting through the days. I was so out of it. I was feeling so down.

Lex Fridman

04:17 - 04:20

Just on a personal, psychological level?

Sam Altman

04:20 - 04:37

Yeah, really painful. And hard to, like, have to keep running OpenAI in the middle of that. I just wanted to, like, crawl into a cave and kind of recover for a while. But now it's like we're just back to working on the mission.

Lex Fridman

04:38 - 05:18

Well, it's still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff, so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future. So there's value there in going back, both to the personal, psychological aspects of you as a leader, and also just the board structure and all this kind of messy stuff.

Sam Altman

05:18 - 05:59

Definitely learned a lot about structure and incentives and what we need out of a board. And I think that is, it is valuable that this happened now, in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we've got to get right for AGI, but thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer.

Sam Altman

06:00 - 06:01

I think that's super important.

Lex Fridman

06:01 - 06:18

Do you have a sense of how deep and rigorous the deliberation process by the board was? Like, can you shine some light on just human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates and why don't we fire Sam kind of thing?

Sam Altman

06:19 - 07:00

I think the board members were, are, well-meaning people on the whole. And I believe that in stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges for OpenAI will be we're going to have to have a board and a team that are good at operating under pressure.

Lex Fridman

07:00 - 07:02

Do you think the board had too much power?

Sam Altman

07:02 - 07:43

I think boards are supposed to have a lot of power. But one of the things that we did see is, in most corporate structures, boards are usually answerable to shareholders. Sometimes people have, like, super-voting shares or whatever. In this case, and I think one of the things with our structure that we maybe should have thought about more than we did, is that the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don't really answer to anyone but themselves. And there's ways in which that's good, but what we'd really like is for the board of OpenAI to answer to the world as a whole, as much as that's a practical thing.

Lex Fridman

07:44 - 07:53

So there's a new board announced. Yeah. There's, I guess, a new smaller board at first, and now there's a new final board.

Sam Altman

07:53 - 07:56

Not a final board yet. We've added some. We'll add more.

Lex Fridman

07:56 - 08:05

Added some. Okay. What is fixed in the new one that was perhaps broken in the previous one?

Sam Altman

08:06 - 08:30

The old board sort of got smaller over the course of about a year. It was 9, and then it went down to 6. And then we couldn't agree on who to add. And the board also, I think, didn't have a lot of experienced board members. And a lot of the new board members at OpenAI just have more experience as board members. I think that will help.

Lex Fridman

08:31 - 08:42

Some of the people that were added to the board have been criticized. I heard a lot of people criticizing the addition of Larry Summers, for example. What's the process of selecting the board like? What's involved in that?

Sam Altman

08:43 - 09:21

So Bret and Larry were kind of decided in the heat of the moment over this very tense weekend. And that weekend was like a real roller coaster. It was like a lot of ups and downs. And we were trying to agree on new board members that both sort of the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members'. Bret, I think I had suggested even previous to that weekend, but he was busy and didn't want to do it. And then we really needed help, and he would.

Sam Altman

09:22 - 10:02

We talked about a lot of other people too, but I felt like if I was going to come back, I needed new board members. I didn't think I could work with the old board again in the same configuration, although we then decided, and I'm grateful that Adam would stay. We considered various configurations, decided we wanted to get to a board of 3, and had to find 2 new board members over the course of sort of a short period of time. So those were decided honestly without... You kind of do that on the battlefield.

Sam Altman

10:03 - 10:37

You don't have time to design a rigorous process then. For new board members since, and new board members we'll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive, where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness well. And so one thing that Bret says, which I really like, is that we want to hire board members in slates, not as individuals one at a time.

Sam Altman

10:38 - 10:49

And thinking about a group of people that will bring nonprofit expertise, expertise in running companies, sort of good legal and governance expertise, that's kind of what we've tried to optimize for.

Lex Fridman

10:49 - 10:52

So is technical savvy important for the individual board members?

Sam Altman

10:52 - 10:56

Not for every board member, but certainly for some you need that. That's part of what the board needs to do.

Lex Fridman

10:56 - 11:24

So, I mean, the interesting thing that people probably don't understand about OpenAI, I certainly don't, is like all the details of running the business. When they think about the board, given the drama, they think about you, they think about like, if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what's the conversation with the board like? And they kind of think, all right, what's the right squad to have in that kind of situation to deliberate?

Sam Altman

11:25 - 11:55

Look, I think you definitely need some technical experts there. And then you need some people who are like, how can we deploy this in a way that will help people in the world the most and people who have a very different perspective? I think a mistake that you or I might make is to think that only the technical understanding matters. And that's definitely part of the conversation you want that board to have. But there's a lot more about how that's going to just like impact society and people's lives that you really want represented in there too.

Lex Fridman

11:55 - 12:00

And you're just kind of, are you looking at the track record of people or you're just having conversations?

Sam Altman

12:00 - 12:17

Track record is a big deal. You of course have a lot of conversations, but I, you know, there's some roles where I kind of totally ignore track record and just look at slope, kind of ignore the y-intercept.

Lex Fridman

12:18 - 12:21

Thank you. Thank you for making it mathematical for the audience.

Sam Altman

12:21 - 12:33

For a board member, like, I do care much more about the y-intercept. Like, I think there is something deep to say about track record there, and experience is sometimes very hard to replace.

Lex Fridman

12:33 - 12:36

Do you try to fit a polynomial function or exponential one to the track record?

Sam Altman

12:36 - 12:39

That's not... the analogy doesn't carry that far.

Lex Fridman

12:39 - 12:53

All right. You mentioned some of the low points that weekend. What were some of the low points psychologically for you? Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever?

Sam Altman

12:53 - 13:30

I mean, there were so many low... like, it was a very bad period of time. There were great high points too. Like, my phone was just sort of nonstop blowing up with nice messages from people I work with every day, people I hadn't talked to in a decade. I didn't get to appreciate that as much as I should have, because I was just in the middle of this firefight, but that was really nice. But on the whole, it was a very painful weekend, and also just, like... it was like a battle fought in public to a surprising degree, and that was extremely exhausting to me, much more than I expected.

Sam Altman

13:30 - 14:07

I think fights are generally exhausting, but this one really was. You know, the board did this Friday afternoon. I really couldn't get much in the way of answers, but I also was just like, well, the board gets to do this. And so I'm going to think for a little bit about what I want to do, but I'll try to find the blessing in disguise here. And I was like, well, my current job at OpenAI is, or it was, to run a decently sized company at this point. And the thing I had always liked the most was just getting to work with the researchers.

Sam Altman

14:07 - 14:46

And I was like, yeah, I can just go do a very focused AGI research effort. And I got excited about that. It didn't even occur to me at the time that this was possibly all going to get undone. This was like Friday afternoon. So you've accepted the death of this baby, OpenAI? Very quickly. Like within, you know, I mean, I went through a little period of confusion and rage, but very quickly. And by Friday night, I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening, for the first time, that I heard from the exec team here, which is like, hey, we're going to fight this, and we think, whatever.

Sam Altman

14:46 - 15:09

And then I went to bed just still being like, okay, excited, onward. Were you able to sleep? Not a lot. One of the weird things was this, like, period of four and a half days where I sort of didn't sleep much, didn't eat much, and still kind of had, like, a surprising amount of energy. You learn a weird thing about adrenaline in wartime.

Lex Fridman

15:09 - 15:13

So you kind of accepted the death of, you know, this baby, OpenAI.

Sam Altman

15:13 - 15:17

And I was excited for the new thing. I was just like, okay, this was crazy, but whatever.

Lex Fridman

15:17 - 15:18

It's a very good coping mechanism.

Sam Altman

15:18 - 15:44

And then Saturday morning, two of the board members called and said, hey, we didn't mean to destabilize things. We don't want to destroy a lot of value here. You know, can we talk about you coming back? And I immediately didn't want to do that, but I thought a little more and I was like, well, I don't really care about it. The people here, the partners, shareholders, like all of the... I love this company. And so I thought about it and I was like, well, okay, but here's the stuff I would need.

Sam Altman

15:45 - 16:22

And then the most painful time of all was over the course of that weekend. I kept thinking and being told, and we all kept, not just me, the whole team here kept thinking, well, we're trying to keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit, whatever. We kept being told, like, all right, we're almost done, we're almost done, we just need a little bit more time. And it was this very confusing state. And then Sunday evening, when, again, every few hours I expected that we were going to be done and we were going to figure out a way for me to return and things to go back to how they were.

Sam Altman

16:23 - 17:03

The board then appointed a new interim CEO. And then I was like, I mean, that feels really bad. That was the low point of the whole thing. You know, I'll tell you something. It felt very painful, but I felt a lot of love that whole weekend. Other than that one moment, Sunday night, I would not characterize my emotions as anger or hate. I really just felt a lot of love from people, towards people. It was painful, but the dominant emotion of the weekend was love, not hate.

Lex Fridman

17:04 - 17:15

You've spoken highly of Mira Murati, that she helped especially, as you put it in the tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent. What do you admire about Mira?

Sam Altman

17:15 - 17:49

Well, she did a great job during that weekend in a lot of chaos, but people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning, and in just sort of the normal drudgery of the day-to-day. How someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments.

Lex Fridman

17:49 - 17:58

Meaning, like, most of the work is done on a day-by-day, meeting-by-meeting basis. Just be present and make great decisions.

Sam Altman

17:58 - 18:10

Yeah. I mean, look, what you have wanted to spend the last 20 minutes about, and I understand, is this one very dramatic weekend. Yeah. But that's not really what OpenAI is about. OpenAI is really about the other 7 years.

Lex Fridman

18:10 - 18:17

Well, yeah, human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still, that's something people focus on.

Sam Altman

18:17 - 18:19

Very, very understandable.

Lex Fridman

18:20 - 18:52

It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments. That's illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear facility? No. What about a regular secret facility? No. What about a nuclear non-secret facility? Neither. Not that either. I mean, this is becoming a meme at some point. You've known Ilya for a long time. He was obviously a part of this drama with the board and all that kind of stuff.

Lex Fridman

18:53 - 18:56

What's your relationship with him now?

Sam Altman

18:57 - 19:14

I love Ilya. I have tremendous respect for Ilya. I don't have anything I can say about his plans right now. That's a question for him. But I really hope we work together for certainly the rest of my career. He's a little bit younger than me. Maybe he works a little bit longer.

Lex Fridman

19:16 - 19:26

There's a meme that he saw something. Like he maybe saw AGI and that gave him a lot of worry internally. What did Ilya see?

Sam Altman

19:28 - 20:10

Ilya has not seen AGI. None of us have seen AGI. We've not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. As we continue to make significant progress, Ilya is one of the people that I've spent the most time with over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission.

Sam Altman

20:12 - 20:30

So Ilya did not see AGI. But Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.

Lex Fridman

20:30 - 21:05

I've had a bunch of conversations with him in the past. I think when he talks about technology, he's always doing this long-term thinking type of thing. So he's not thinking about what this is going to be in a year, he's thinking about in 10 years. Yeah, just thinking from first principles, like, okay, if this scales, what are the fundamentals here? Where's this going? And so that's a foundation for then thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he's been kind of quiet?

Lex Fridman

21:05 - 21:07

Is it he's just doing some soul searching?

Sam Altman

21:08 - 21:27

Again, I don't want to speak for Ilya. I think that you should ask him that. He's definitely a thoughtful guy. I kind of think Ilya is, like, always on a soul search, in a really good way.

Lex Fridman

21:27 - 21:35

Yes. Yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy, which, I've never seen that side of him.

Sam Altman

21:35 - 21:37

It's very sweet when that happens.

Lex Fridman

21:39 - 21:43

I've never witnessed a silly Ilya, but I look forward to that as well.

Sam Altman

21:43 - 21:54

I was at a dinner party with him recently, and he was playing with a puppy, and he was in a very silly mood, very endearing. And I was thinking, like, oh man, this is not the side of Ilya that the world sees the most.

Lex Fridman

21:55 - 22:03

So just to wrap up this whole saga, are you feeling good about the board structure, about all of this, and, like, where it's moving?

Sam Altman

22:03 - 22:37

I feel great about the new board. In terms of the structure of OpenAI, one of the board's tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don't have, I think, super deep things to say. It was a crazy, very painful experience. I think it was like a perfect storm of weirdness. It was like a preview for me of what's going to happen as the stakes get higher and higher, and of the need we have for robust governance structures and processes and people.

Sam Altman

22:39 - 22:47

I am kind of happy it happened when it did, but it was a shockingly painful thing to go through.

Lex Fridman

22:47 - 22:50

Did it make you more hesitant in trusting people?

Sam Altman

22:50 - 22:51

Yes.

Lex Fridman

22:51 - 22:52

Just on a personal level?

Sam Altman

22:52 - 23:21

Yes. I think I'm like an extremely trusting person. I've always had a life philosophy of, you know, like, don't worry about all of the paranoia, don't worry about the edge cases. You get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me, I was so caught off guard, that it has definitely changed, and I really don't like this, it's definitely changed how I think about just, like, default trust of people and planning for the bad scenarios.

Lex Fridman

23:21 - 23:25

You got to be careful with that. Are you worried about becoming a little too cynical?

Sam Altman

23:26 - 23:35

I'm not worried about becoming too cynical. I think I'm like the extreme opposite of a cynical person. But I'm worried about just becoming, like, less of a default-trusting person.

Lex Fridman

23:36 - 24:05

I'm actually not sure which mode is best to operate in for a person who's developing AGI, trusting or untrusting. So it's an interesting journey you're on. But in terms of structure, see, I'm more interested in the human level. Like, how do you surround yourself with humans that are building cool shit, but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get.

Sam Altman

24:05 - 24:35

You know, I think you could make all kinds of comments about the board members and the level of trust I should have had there, or how I should have done things differently. But in terms of the team here, I think you'd have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people that I work with every day. And I think being surrounded with people like that is really important.

Lex Fridman

24:40 - 24:51

Our mutual friend, Elon, sued OpenAI. What is the essence of what he's criticizing? To what degree does he have a point? To what degree is he wrong?

Sam Altman

24:52 - 25:23

I don't know what it's really about. We started off just thinking we were going to be a research lab, having no idea about how this technology was going to go. Because it was only 7 or 8 years ago, it's hard to go back and really remember what it was like then. This was before language models were a big deal, before we had any idea about an API or selling access to a chatbot. It was before we had any idea we were going to productize at all. So we were like, we're just going to try to do research and we don't really know what we're going to do with that.

Sam Altman

25:23 - 26:01

I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong. And then it became clear that we were going to need to do different things and also have huge amounts more capital. So we said, okay, well, the structure doesn't quite work for that. How do we patch the structure? And then patch it again and patch it again and you end up with something that does look kind of eyebrow raising to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way.

Sam Altman

26:01 - 26:11

And it doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time. But anyway, in terms of what Elon's real motivations here are, I don't know.

Lex Fridman

26:12 - 26:20

To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it?

Sam Altman

26:20 - 26:43

Oh, we just said, like, you know, Elon said this set of things. Here's our characterization, or here's, sort of, our characterization of how this went down. We tried to not make it emotional and just sort of say, like, here's the history.

Lex Fridman

26:44 - 27:08

I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys were, like, a small group of researchers crazily talking about AGI when everybody was laughing at that thought.

Sam Altman

27:09 - 27:20

It wasn't that long ago that Elon was crazily talking about launching rockets when people were laughing at that thought. So I think he'd have more empathy for this.

Lex Fridman

27:20 - 27:33

I mean, I do think that there's personal stuff here. That there was a split, that OpenAI and a lot of amazing people here chose to part ways with Elon. So there's a personal...

Sam Altman

27:33 - 27:35

Elon chose to part ways.

Lex Fridman

27:37 - 27:41

Can you describe that exactly? The choosing to part ways?

Sam Altman

27:41 - 28:05

He thought OpenAI was going to fail. He wanted total control to sort of turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of or have it merge with Tesla. We didn't want to do that, and he decided to leave, which that's fine.

Lex Fridman

28:06 - 28:22

So you're saying, and that's 1 of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla in the same way that, or maybe something similar, or maybe something more dramatic than the partnership with Microsoft.

Sam Altman

28:23 - 28:31

My memory is the proposal was just, like, yeah, get acquired by Tesla and have Tesla have full control over it. I'm pretty sure that's what it was.

Lex Fridman

28:31 - 28:43

So what did the word "open" in OpenAI mean to Elon at the time? Ilya has talked about this in the email exchanges and all this kind of stuff. What did it mean to you at the time? What does it mean to you now?

Sam Altman

28:43 - 29:22

I would definitely pick a different... Speaking of going back with an oracle, I'd pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free, as a public good. We don't run ads on our free version. We don't monetize it in other ways. We just say it's part of our mission. We want to put increasingly powerful tools in the hands of people for free and get them to use them. I think that kind of open is really important to our mission.

Sam Altman

29:22 - 29:54

I think if you give people great tools and teach them to use them, or don't even teach them, they'll figure it out, and let them go build an incredible future for each other with that, that's a big deal. So if we can keep putting free or low-cost powerful AI tools out in the world, I think it's a huge deal for how we fulfill the mission. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this, like, religious battle line where nuance is hard to have, but I think nuance is the right answer.

Lex Fridman

29:55 - 30:07

So he said, change your name to ClosedAI and I'll drop the lawsuit. I mean, is it going to become this battleground in the land of memes?

Sam Altman

30:09 - 30:20

I think that speaks to the seriousness with which Elon means the lawsuit. And yeah, I mean, that's, like, an astonishing thing to say, I think.

Lex Fridman

30:21 - 30:33

Well, I don't think the lawsuit, maybe correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of AGI and the company that's currently leading the way.

Sam Altman

30:36 - 30:48

So... Look, I mean, Grok had not open sourced anything until people pointed out it was a little bit hypocritical, and then he announced that Grok will open source things this week. I don't think open source versus not is what this is really about for him.

Lex Fridman

30:48 - 31:00

Well, we'll talk about open source and not. I do think maybe criticizing the competition is great. Just talking a little shit, that's great. But friendly competition versus like, I personally hate lawsuits.
