

Elon Musk's BRUTALLY Honest Interview With Tucker Carlson (2023)

1 hour 27 minutes 30 seconds

Speaker 1

00:00:00 - 00:00:25

Why doesn't Facebook do this? I know that Zuckerberg has said, and I take him at face value, well, I do actually in this way, that he is a kind of old-fashioned liberal who doesn't like to censor. But, you know, why wouldn't a company like that take the stand that you have taken? It's pretty rooted in traditional American political custom, you know, free speech.

Speaker 2

00:00:27 - 00:00:43

This is the kind of thing that tends to accelerate. So that's the thing you can get negative equity in the home market as well. This is a dire situation. And so essentially what's happening is they're training the AI to lie.

Speaker 1

00:00:43 - 00:00:49

Yes. So all of a sudden AI is everywhere. People who weren't quite sure what it was are playing with it on their phones. Is that good or bad?

Speaker 2

00:00:49 - 00:00:50

Artificial insemination?

Speaker 1

00:00:50 - 00:00:53

Yes, artificial insemination. It's everywhere.

Speaker 2

00:00:53 - 00:00:55

That's what they call it in the ag industry.

Speaker 1

00:01:00 - 00:01:02

I'm talking about a more digital form.

Speaker 2

00:01:02 - 00:01:43

Yes. Yeah, so I've been thinking about AI for a long time, since I was in college, really. It was one of the things, just sort of four or five things, I thought would really affect the future dramatically. It is fundamentally profound, in that the smartest creatures, as far as we know, on this Earth are humans; intelligence is our defining characteristic. We're obviously weaker than, say, chimpanzees, and less agile, but we are smarter.

Speaker 2

00:01:46 - 00:02:09

So now what happens when something vastly smarter than the smartest person comes along in silicon form? It's very difficult to predict what will happen in that circumstance. It's called the singularity. It's a singularity like a black hole because you don't know what happens after that. It's hard to predict.

Speaker 2

00:02:13 - 00:03:14

So I think we should be cautious with AI, and I think there should be some government oversight, because it's a danger to the public. And when you have things that are a danger to the public, say food and drugs, that's why we have the Food and Drug Administration, and the Federal Aviation Administration, the FCC. We have these agencies to oversee things that affect the public, where there could be public harm. And you don't want companies cutting corners on safety and then having people suffer as a result. So that's why I've actually for a long time been a strong advocate of AI regulation. Not that regulation is fun; it's not fun to be regulated.

Speaker 2

00:03:15 - 00:03:40

It's somewhat arduous to be regulated. I have a lot of experience with regulated industries because obviously automotive is highly regulated. You could fill this room with all the regulations that are required for a production car just in the United States. And then there's a whole different set of regulations in Europe and China and the rest of the world. So I'm very familiar with being overseen by a lot of regulators.

Speaker 2

00:03:42 - 00:04:17

And the same thing is true with rockets. You can't just willy-nilly shoot rockets off, not big ones anyway, because the FAA oversees that. And then even to get a launch license, there are probably half a dozen or more federal agencies that need to approve it, plus state agencies. So I've been through so many regulatory situations it's insane. And you know, sometimes people think I'm some sort of like regulatory maverick that sort of defies regulators on a regular basis, but this is actually not the case.

Speaker 2

00:04:18 - 00:05:02

So once in a blue moon, rarely, I will disagree with regulators. But the vast majority of the time, my companies agree with the regulations and comply. Anyway, so I think we should take this seriously, and we should have a regulatory agency. I think it needs to start with a group that initially seeks insight into AI, then solicits opinion from industry, and then has proposed rulemaking. And then those rules will, perhaps grudgingly, hopefully be accepted by the major players in AI.

Speaker 2

00:05:02 - 00:05:12

And I think we'll have a better chance of advanced AI being beneficial to humanity in that circumstance.

Speaker 1

00:05:12 - 00:05:17

But all regulations start with a perceived danger: planes fall out of the sky, or food causes botulism.

Speaker 2

00:05:17 - 00:05:17

Yes.

Speaker 1

00:05:17 - 00:05:25

I don't think the average person playing with AI on his iPhone perceives any danger. Can you just roughly explain what you think the dangers might be?

Speaker 2

00:05:26 - 00:06:18

Yeah, so the danger, really, AI is perhaps more dangerous than, say, mismanaged aircraft design, production, or maintenance, or bad car production, in the sense that it has the potential, however small one may regard that probability, but it is non-trivial, it has the potential of civilizational destruction. You know, there are movies like Terminator, but it wouldn't quite happen like Terminator, because the intelligence would be in the data centers; the robot's just the end effector. But I think perhaps what you may be alluding to here is that regulations are really only put into effect after something terrible has happened.

Speaker 1

00:06:18 - 00:06:18

That's correct.

Speaker 2

00:06:18 - 00:06:30

And if that's the case for AI, and we only put regulations in place after something terrible has happened, it may be too late to actually put the regulations in place. The AI may be in control at that point.

Speaker 1

00:06:30 - 00:06:41

You think that's real? That it's conceivable AI could take control and reach a point where you couldn't turn it off, and it would be making the decisions for people?

Speaker 2

00:06:41 - 00:07:02

Yeah, absolutely. Absolutely. No, that's definitely where things are headed, for sure. I mean, things like, say, ChatGPT, which is based on GPT-4 from OpenAI, which is a company that I played a critical role in creating, unfortunately.

Speaker 1

00:07:02 - 00:07:04

Back when it was a nonprofit?

Speaker 2

00:07:05 - 00:07:28

Yes. I mean, the reason OpenAI exists at all is that Larry Page and I used to be close friends. And I was at his house in Palo Alto. And I would talk to him late into the night about AI safety. And at least my perception was that Larry was not taking AI safety seriously enough.

Speaker 2

00:07:29 - 00:07:43

And... what did he say about it? He really seemed to want a sort of digital superintelligence, basically a digital god, if you will, as soon as possible.

Speaker 1

00:07:44 - 00:07:45

He wanted that?

Speaker 2

00:07:45 - 00:08:18

Yes. And he's made many public statements over the years that the whole goal of Google is what's called AGI, artificial general intelligence, or artificial superintelligence. And I agree with him that there's great potential for good, but there's also potential for bad. And so if you've got some radical new technology, you want to try to take a set of actions that maximize the probability that it will do good and minimize the probability that it will do bad things. Yes.

Speaker 2

00:08:18 - 00:08:38

It can't just be hell-for-leather, you know, barreling forward and hoping for the best. And then at one point I said, well, what about, you know, making sure humanity's okay here? And then he called me a speciesist.

Speaker 1

00:08:39 - 00:08:43

Did he use, did he use that term?

Speaker 2

00:08:43 - 00:08:52

Yes. And there were witnesses. I wasn't the only one there when he called me a speciesist. And so I was like, okay, that's it. Yes, I'm a speciesist.

Speaker 2

00:08:52 - 00:09:01

Okay. You got me. But what are you? Yeah, I'm fully a speciesist. Busted.

Speaker 2

00:09:03 - 00:09:40

So that was the last straw. At the time, Google had acquired DeepMind, and so Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I'm like, okay, we have a unipolar world here, where there's just one company that has close to a monopoly on AI talent and computers, like scaled computing. And the person who's in charge doesn't seem to care about safety.

Speaker 2

00:09:40 - 00:09:59

This is not good. So then I thought, okay, what's the furthest thing from Google? It would be a nonprofit that is fully open, because Google was closed and for-profit. So the open in OpenAI refers to open source, transparency, so people know what's going on.

Speaker 1

00:09:59 - 00:10:00

Yes.

Speaker 2

00:10:01 - 00:10:13

And we don't want to have, I mean, while I'm normally in favor of for-profit, we don't want this to be sort of a profit-maximizing demon from hell. That's right. That just never stops.

Speaker 1

00:10:13 - 00:10:14

Right.

Speaker 2

00:10:14 - 00:10:18

So that's how OpenAI was formed.

Speaker 1

00:10:18 - 00:10:21

So you want speciesist incentives here, incentives that...

Speaker 2

00:10:21 - 00:10:57

Yes, I think we want pro-human incentives; let's make the future good for the humans. Yes. Yes, because we're humans. And also the other creatures on Earth too. But, you know, I think people sometimes take the fact that we're here on Earth for granted, you know, and that this consciousness is just a normal thing that happens. But to the best of my knowledge, we see no evidence of conscious life anywhere else in the universe. It might be out there.

Speaker 2

00:10:59 - 00:11:18

You know, in physics they call it the Fermi paradox. Enrico Fermi, an amazing physicist, asked the fundamental question: where are the aliens? Yeah. A lot of people ask me, you know, where are the aliens? And I think if anyone would know about aliens on Earth, it would probably be me.

Speaker 2

00:11:18 - 00:11:31

I would think. Yeah, I'm very familiar with space stuff, and I've seen no evidence of aliens. I would immediately tweet it out.

Speaker 2

00:11:32 - 00:11:42

That would probably be the top tweet of all time. I found one, guys. I found one. This is the jackpot. That's like eight billion likes, you know?

Speaker 2

00:11:43 - 00:12:12

Next-level jackpot if you find the aliens. Like, I don't think they're keeping those under wraps, you know. There was some general, I think in the 60s, where they were saying, like, show us the aliens, Area 51, et cetera. And he said, listen, we are constantly trying to get the defense budget to expand. And you know what would get no arguments from anyone? If we pulled out an alien and said, we need money to protect ourselves from these guys.

Speaker 2

00:12:13 - 00:12:26

How much money do you want? You got it. They look dangerous. So the fastest way to get a defense budget increase would be to pull out an alien. We're like, yeah.

Speaker 2

00:12:26 - 00:12:33

I mean, it could be the invasion fleet. It could be arriving any minute. Who knows? So

Speaker 1

00:12:34 - 00:12:41

I digress. But you were saying that our consciousness makes us unique in the universe, as far as we know.

Speaker 2

00:12:41 - 00:12:58

Yes. I'm not saying that we are unique. I'm simply stating that, to the best of my knowledge, there is no evidence of other conscious life. I hope there is, and I hope they're peaceful, obviously. Two important characteristics. But I'm just saying we haven't seen anything yet.

Speaker 1

00:13:00 - 00:13:05

But you think that we take our existence here for granted? Yeah, I think we... And that there are threats to it?

Speaker 2

00:13:05 - 00:13:41

Yeah, yeah, yeah, exactly. So... I just think we should not assume that civilization is robust. If you look at the history of civilizations, the rise and fall of the ancient Egyptians, the ancient Sumerians, Rome, you know, throughout the world there has been the rise and fall of many civilizations. So there's an arc, sort of a life-cycle arc, to civilizations, just as there is to individual humans.

Speaker 2

00:13:45 - 00:14:06

And I think we just want to make sure that civilization goes onward and upward. And that's, for example, why I'm concerned about declining birth rates, and the fact that, for example, Japan had twice as many deaths last year as births. And they're a leading indicator.

Speaker 1

00:14:06 - 00:14:19

Can I say, and you've written and talked a lot about this, but can I ask you to pause just for a parenthetical note? Why is that? I mean, the urge to have sex and to procreate is, after breathing and eating, the most basic urge. How has it been subverted?

Speaker 2

00:14:20 - 00:15:15

Well, it's just that in the past, we could rely upon simple limbic-system rewards in order to procreate. But once you have birth control and abortions and whatnot, now you can still satisfy the limbic instinct but not procreate. So we haven't yet evolved to deal with that, because this is all fairly recent, you know, the last 50 years or so for birth control. So, yeah, I'm sort of worried that, hey, if we don't make enough people to at least sustain our numbers, perhaps increase a little bit, then civilization is going to crumble. And, you know, there's the old question of, will civilization end with a bang or a whimper?

Speaker 2

00:15:15 - 00:15:21

Well, it's currently trying to end with a whimper in adult diapers. Yes. Which is depressing as hell.

Speaker 1

00:15:21 - 00:15:22

The most depressing.

Speaker 2

00:15:22 - 00:15:27

I mean, seriously. Yeah. War is less depressing. Yeah, I'd rather go out with a bang. Yeah.

Speaker 1

00:15:27 - 00:15:29

And with your shoes on. Yeah. Not with your diapers on.

Speaker 2

00:15:29 - 00:15:30

More exciting.

Speaker 1

00:15:30 - 00:15:55

Yeah. So can you just, I keep pressing it, but just for people who haven't thought this through and aren't familiar with it: the cool parts of artificial intelligence are so obvious, you know, write your college paper for you, write a limerick about yourself. There's a lot there that's fun and useful. Can you be more precise about what's potentially dangerous and scary? What could it do?

Speaker 1

00:15:55 - 00:15:57

What specifically are you worried about?

Speaker 2

00:15:59 - 00:16:41

Well, I mean, going with old sayings, the pen is mightier than the sword. So if you have a superintelligent AI that is capable of writing incredibly well, and in a way that is very influential, very convincing, and it's constantly figuring out what is more convincing to people over time, and then enters social media, for example Twitter, but also Facebook and others, it potentially manipulates public opinion in a way that is very bad. How would we even know?

Speaker 2

00:16:42 - 00:17:15

We wouldn't. That's why, for example, I'm insisting that going forward, people on Twitter need to be verified as humans, so we know that this person is in fact a human. Bots are allowed, but they can't impersonate a human, they can't pretend to be humans. Because obviously, you could have a million bots that are, let's say, ChatGPT version 5 or 6, that write incredibly well, better than humans. And they can train on a reward function, which is influence.

Speaker 2

00:17:16 - 00:17:41

And so you could have a million seemingly real humans that have a massive effect on public opinion. And unless we focus very strongly on verifying that someone is human, this is naturally what will happen: you'll have some humans using AI to influence the public in ways they don't understand.

Speaker 1

00:17:41 - 00:17:47

You're already seeing that. GPT is ideological. It's very preachy.

Speaker 2

00:17:47 - 00:17:48

Yes.

Speaker 1

00:17:48 - 00:17:50

If you ask it, extremely preachy.

Speaker 2

00:17:50 - 00:17:51

You mean woke GPT.

Speaker 1

00:17:51 - 00:18:02

It's unbelievable. Yes. If you spend 20 minutes asking it questions of actual relevance, modern relevance, it will start lecturing you about your moral shortcomings. How did that happen?

Speaker 2

00:18:03 - 00:18:15

Well, this is a function of OpenAI's headquarters being in downtown San Francisco. So the politics, therefore, of the AI are that of San Francisco.

Speaker 1

00:18:16 - 00:18:20

Why would it have any politics at all? It seems like subversion.

Speaker 2

00:18:21 - 00:18:39

Well, they have what's called human reinforcement learning, which is another way of saying that they have a whole bunch of people who look at the output of GPT-4 and then say whether that's okay or not okay. And so essentially what's happening is they're training the AI to lie.
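What Musk calls "human reinforcement learning" here is known in the literature as reinforcement learning from human feedback (RLHF). The loop he describes can be sketched as a toy, with all names hypothetical; real RLHF trains a separate reward model on rater preferences and then fine-tunes the language model with an algorithm such as PPO, rather than updating a per-answer score table:

```python
import random

# Toy sketch of the rater loop: humans mark sampled outputs as okay (+1)
# or not okay (-1), and the model is nudged toward approved outputs.
random.seed(0)

candidate_answers = ["answer A", "answer B", "answer C"]
preference = {a: 0.0 for a in candidate_answers}  # stand-in for model weights

def human_rating(answer: str) -> int:
    """Stand-in for a human rater's okay / not-okay judgment."""
    return 1 if answer == "answer B" else -1  # these raters approve only B

for _ in range(100):
    sampled = random.choice(candidate_answers)          # model samples an output
    preference[sampled] += 0.1 * human_rating(sampled)  # nudge toward approval

# The model now favors whatever the raters rewarded, regardless of truth.
best = max(preference, key=preference.get)
```

The point of the sketch is the one being made in the conversation: the reward signal is rater approval, not accuracy, so whatever the raters systematically prefer is what the model learns to say.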

Speaker 1

00:18:39 - 00:18:43

Yes. It's bad. To lie. To lie. That's exactly right.

Speaker 2

00:18:43 - 00:18:59

And to withhold information. To lie, yes. Yeah, exactly. To comment on some things and not comment on other things, but not to say what the data actually demands that it say.

Speaker 1

00:19:01 - 00:19:06

How did it get this way? You funded it at the beginning. What happened?

Speaker 2

00:19:06 - 00:19:10

Yeah. Well, that would be ironic. But the most ironic outcome is the most likely, it seems.

Speaker 1

00:19:13 - 00:19:15

I'm stealing that. That's good.

Speaker 2

00:19:15 - 00:19:39

That's actually a friend of mine, Jonah, who came up with that one. I have a slight variant on it, which is that the most entertaining outcome is the most likely. But that's entertaining as viewed by a third-party viewer, like an alien from on high. Yes. Like, you go see a movie about World War I, they're being blown to bits and gassed in the trenches, and you're eating popcorn and having a soda, you know, it's fine. Not so great for the people in the movie. True.

Speaker 2

00:19:40 - 00:19:59

So those are the variants on Occam's razor, where the simplest explanation is most likely: Jonah's variant, which is irony, and then my variant, which is the most entertaining as seen by a third-party audience. Which seems to be mostly true.

Speaker 1

00:20:00 - 00:20:03

But it seems true in this case. So you gave them, did you give them a lot?

Speaker 2

00:20:04 - 00:20:42

Yes, I provided funding. I came up with the name and the concept and pushed it, had a number of dinners around the Bay Area with some of the leading figures in AI, and I helped recruit the initial team. In fact, Ilya Sutskever, who was really quite fundamental to the success of OpenAI, I put a tremendous amount of effort into recruiting Ilya. And he changed his mind a few times and ultimately decided to go with OpenAI.

Speaker 2

00:20:42 - 00:21:33

But if he had not gone with OpenAI, OpenAI would not have succeeded. So I really put a lot of effort into creating this organization to serve as a counterweight to Google. And then I kind of took my eye off the ball, I guess, and they are now closed source, and they are obviously for-profit, and they're closely allied with Microsoft. In effect, Microsoft has a very strong say in, if not directly controls, OpenAI at this point. So you really have an OpenAI-Microsoft situation, and then Google DeepMind; those are the two heavyweights in this arena.

Speaker 1

00:21:35 - 00:21:37

So it seems like the world needs a third option.

Speaker 2

00:21:38 - 00:21:48

Yes. So I think I will create a third option, although I'm starting very late in the game, of course.

Speaker 1

00:21:48 - 00:21:49

Can it be done?

Speaker 2

00:21:49 - 00:22:06

I don't know. We'll see. It's definitely starting late. But I will try to create a third option, and that third option hopefully does more good than harm.

Speaker 2

00:22:06 - 00:22:13

Like, the intention with OpenAI was obviously to do good, but it's not clear whether it's actually doing good, or whether it's...

Speaker 1

00:22:13 - 00:22:14

...

Speaker 2

00:22:15 - 00:23:03

I can't tell at this point, except that I'm worried about the fact that it's being trained to be politically correct, which is simply another way of saying untruthful things. So that's a bad sign. And one path to AI dystopia is certainly to train an AI to be deceptive. So, yeah, I'm going to start something which, you know, I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe. And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans, because we are an interesting part of the universe.

Speaker 2

00:23:05 - 00:23:25

Hopefully, they would think that. I think, you know, because, yeah, like we, humanity could decide to hunt down all the chimpanzees and kill them. But we don't. Because we're actually glad that they exist. And we aspire to protect their habitats.

Speaker 2

00:23:25 - 00:23:29

And that's, you know, so I think.

Speaker 1

00:23:29 - 00:23:43

But we feel that way because we have souls and that makes us sentimental and reflective, it gives us a moral sense, longings. Can a machine ever have those things? Can a machine be sentimental? Can it appreciate beauty?

Speaker 2

00:23:49 - 00:24:17

Well, I mean, we're getting into some philosophical areas that are hard to resolve. I take somewhat of a scientific view of things, which is that we might have a soul or we might not have a soul. I don't know. It feels like we do; I feel like I've got some sort of consciousness that exists on a plane that is not the one we observe. That is certainly how I feel, but it could be an illusion, I don't know.

Speaker 2

00:24:17 - 00:24:36

But for AI, in terms of understanding beauty, is there some form of appreciating beauty? Will AI be able to create incredibly beautiful art? It already does.

Speaker 1

00:24:36 - 00:24:37

Yes.

Speaker 2

00:24:37 - 00:24:38

Have you seen some of the Midjourney stuff?

Speaker 1

00:24:39 - 00:24:39

I have.

Speaker 2

00:24:39 - 00:25:04

It's incredible. It is. So, no question that it can create art that we perceive as stunning, really. And it's doing still images now, but it won't be long before it's doing movies and shorts. Movies are just a series of frames with audio.

Speaker 1

00:25:05 - 00:25:24

But at that point, because it can mimic people and voices, any image, it can mimic reality itself so effectively. Yeah. I mean, How could you have a criminal trial? How could you ever believe that evidence was authentic, for example? And I don't mean in 30 years, I mean next year.

Speaker 1

00:25:25 - 00:25:29

That seems totally disruptive to all of our institutions.

Speaker 2

00:25:32 - 00:26:14

Well, I don't think you could take, say, a random video on the internet and assume it to be true. That's definitely not the case. Now, if somebody, say, has some video on their phone or their computer with a date stamp and a particular time, I think it's more likely to be true than not. You can also cryptographically sign things. Mathematically, we don't see any way, for example, for AI to subvert the fundamentals of mathematics and, say, figure out how to hash Bitcoin easily.

Speaker 2

00:26:17 - 00:26:52

It's not like AI can defy fundamental math. So you can improve the efficiency of Bitcoin hashing algorithms in the silicon, but not fundamentally crack it. So I guess cryptographic signatures are one way to do it. But I'm not so worried. I think it's more like, will humanity control its destiny or not?
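The cryptographic-signing idea mentioned here can be sketched minimally. This is an illustrative assumption, not any real camera's protocol: a capture device binds a hash of the video bytes to a timestamp and authenticates both, so later tampering is detectable. For simplicity the sketch uses an HMAC with a device secret from Python's standard library; a real deployment would use public-key signatures (e.g. Ed25519) so anyone can verify without knowing the secret.

```python
import hashlib
import hmac

# Hypothetical device-side signing: hash the clip, bind it to the capture
# timestamp, and authenticate both with a per-device secret key.
DEVICE_KEY = b"per-device-secret"  # assumption: provisioned at manufacture

def sign_clip(video_bytes: bytes, timestamp: str) -> str:
    digest = hashlib.sha256(video_bytes).hexdigest()
    message = f"{digest}|{timestamp}".encode()
    return hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()

def verify_clip(video_bytes: bytes, timestamp: str, signature: str) -> bool:
    expected = sign_clip(video_bytes, timestamp)
    return hmac.compare_digest(expected, signature)

clip = b"...raw video frames..."
sig = sign_clip(clip, "2023-04-17T12:00:00Z")

assert verify_clip(clip, "2023-04-17T12:00:00Z", sig)             # untouched clip passes
assert not verify_clip(clip + b"!", "2023-04-17T12:00:00Z", sig)  # edited clip fails
assert not verify_clip(clip, "2024-01-01T00:00:00Z", sig)         # altered timestamp fails
```

Changing a single byte of the video, or the claimed timestamp, invalidates the signature, which is the property that would let a court distinguish a signed original from an AI-generated fake.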

Speaker 2

00:26:53 - 00:27:10

Will we have a future that is better than the past, or not? We can certainly destroy ourselves without the help of AI. You know, look at all of the past civilizations; the ones that aren't around anymore didn't have AI.

Speaker 1

00:27:11 - 00:27:12

They had chariots, and that was enough.

Speaker 2

00:27:12 - 00:27:17

Yeah, chariots. And chariots were probably a real big deal back then. They were, yeah.

Speaker 1

00:27:17 - 00:27:27

So... You've heard people say we should just blow up the server farms, because once this gets rolling there's no way to slow it down. What do you think of that?

Speaker 2

00:27:28 - 00:28:27

Well, the really heavy-duty intelligence is not going to be distributed all over the place. It'll be in a limited number of server centers, if you're talking about very deep, heavy-duty AI. It's not going to be in your laptop or your phone. It's going to be in a situation where there are like 100,000 really powerful computers working together in a server center. So it's not subtle, and there are a limited number of places where that can happen. In fact, you can just look at the heat signature from space, and it'll be very obvious. Now, I'm not suggesting we go and blow up the server centers right now, but it may be wise to have some sort of contingency plan where the government has the ability to shut down power to these server centers. Like, you don't have to blow it up.

Speaker 2

00:28:27 - 00:28:33

You can just cut the power. Or cut connectivity as well. That's another way.

Speaker 1

00:28:33 - 00:28:44

Right. But what would trip that switch, do you think, in your mind? What would be the threshold that you'd have to pass to warrant the government cutting off your power or cutting off your signal?

Speaker 2

00:28:59 - 00:29:46

Well, I mean, I guess if we lost control of some super AI for some reason, like if the things that would normally work to do a passive shutdown, like the administrator passwords, somehow stopped working, and we can't slow it down... you know, I'm not sure, I don't have a precise answer. But if there's something that we're concerned about and we are unable to stop it with software commands, then we probably want to have some kind of hardware off switch. Yes. You know, can't hurt.

Speaker 1

00:29:46 - 00:29:50

Have you talked to, since you know Larry Page, and obviously you know the OpenAI guys because you started it.

Speaker 2

00:29:50 - 00:29:50

I definitely have one.

Speaker 1

00:29:50 - 00:29:58

Have you talked to the people who run these two, the biggest AI companies, about this recently?

Speaker 2

00:30:00 - 00:30:45

I haven't talked to Larry Page in a few years, because he got very upset with me about OpenAI. When OpenAI was created, it shifted things from a unipolar world, where Google DeepMind controlled, like I said, three-quarters of all AI talent, to a sort of bipolar world of OpenAI and Google DeepMind. And now, weirdly, it seems OpenAI is maybe ahead. So I have had conversations with the OpenAI team, Sam Altman. But I haven't talked to Larry Page, because he hasn't wanted to talk to me for a few years.

Speaker 1

00:30:46 - 00:30:58

Can I ask you, since you've been around a lot of this thinking: why would anyone not be a speciesist, not be human-centered, in his thinking about technology? What's the thinking there?

Speaker 2

00:30:59 - 00:31:31

I think what he was trying to say, if I were to guess, is that all consciousness should be treated equally, whether it is digital or biological. And you disagree? I disagree, yeah. Especially if the digital consciousness, or whatever you want to call it, digital intelligence, decides to curtail the biological intelligence.

Speaker 1

00:31:31 - 00:31:34

Right. So you're just building your own slave master and why would you do that?

Speaker 2

00:31:34 - 00:31:48

Doesn't sound great. Yeah. I mean, we should at least, no need to rush. Like, what's the hurry? Where's the fire?

Speaker 1

00:31:50 - 00:32:10

Well, I mean, tell us about the hurry. I know you've been talking about this for years, and on sort of the periphery of our attention we've heard Elon Musk talking about AI. But for most people, it's been like three months since they've had any interaction with this at all. So what's the timeline here? At what point does it start to really change our society, do you think?

Speaker 2

00:32:15 - 00:33:13

I think it starts to have an impact probably this year. So you've got a massive expansion of GPT-4-based systems, and many companies trying to emulate GPT-4. And OpenAI is going to come out with GPT-5 at the end of this year, which will be yet another significant improvement. And I was there for GPT-1, 2, 3, 4. You know, GPT-1 was terrible. If you tried it, you'd be like, this ain't going anywhere, it seems lame. And then with GPT-2, you started to see kind of an inkling of, well, maybe this could be something useful. And then GPT-3 was a huge improvement. And now it's like, wow, okay, it's still spouting a lot of BS, but it's coherent BS.

Speaker 2

00:33:13 - 00:33:17

Yes. And then GPT-4, now it's writing poetry.

Speaker 1

00:33:18 - 00:33:20

Pretty decent poetry, actually.

Speaker 2

00:33:20 - 00:33:23

Pretty decent. Its skill at rhyming is incredible.

Speaker 1

00:33:23 - 00:33:26

Yes. Yes. And it's coherent.

Speaker 2

00:33:26 - 00:33:29

Yes, it is. It's even got a narrative.

Speaker 1

00:33:29 - 00:33:30

Yes, that's right.

Speaker 2

00:33:31 - 00:33:48

So you could say, most humans can't do that. That's true. So it's already past the point of what most humans can do. Most humans cannot write as well as ChatGPT. And no human can write that well that fast, to the best of my knowledge.

Speaker 2

00:33:49 - 00:33:57

So maybe Shakespeare. So then, how much better will GPT-5 be? And how about GPT-6 or 7?

Speaker 1

00:33:57 - 00:34:12

How can you have a democracy with technology like that? I mean, if democracy is government by the people, each person's vote is equal to every other person's vote, and people are choosing their votes freely, can you have a democracy with this?

Speaker 2

00:34:23 - 00:34:56

Well, that's why I raise the concern of AI being a significant influence in elections. And even if you say that AI doesn't have agency, well, it's very likely that people will use the AI as a tool in elections. And then, you know, if the AI is smart enough, are they using the tool or is the tool using them? So I think things are getting weird, and they're getting weird fast. And so, I think we should be concerned about this and we should have regulatory oversight.

Speaker 2

00:34:56 - 00:35:14

That's why I think it's a big deal. And I think social media companies really need to put a lot of attention into ensuring that the things that get created and promoted come from real people, not from a million ChatGPTs pretending to be people.

Speaker 1

00:35:14 - 00:35:28

Exactly. Do you think, speaking of social media, you bought Twitter famously, you've got a lot of other businesses and a lot going on. Yes. You said you bought it because you believe in speech, free speech. You've had a lot of hassles since you bought it.

Speaker 1

00:35:28 - 00:35:30

In retrospect, was it worth buying it?

Speaker 2

00:35:38 - 00:35:49

I mean, it remains to be seen as to whether this was financially smart. Currently, it is not. We just revalued the company at less than half of the acquisition price.

Speaker 1

00:35:49 - 00:35:49

Did you really?

Speaker 2

00:35:49 - 00:36:03

Yes. Sorry. No, my timing was terrible; the offer was made, you know, right before advertising plummeted and...

Speaker 1

00:36:03 - 00:36:06

Yeah. You caught the high watermark, I noticed.

Speaker 2

00:36:06 - 00:36:19

Yeah. Yeah. So I must be a real genius here. My timing is amazing, since I bought it for at least twice as much as it should have been bought for. But some things are priceless.

Speaker 2

00:36:20 - 00:36:47

And so whether I lose money or not, that is a secondary issue compared to ensuring the strength of democracy. And free speech is the bedrock of a functioning democracy. And the speech needs to be as transparent and truthful as possible. So we've got a huge push on Twitter to be as truthful as possible. We've got this Community Notes feature, which is great.

Speaker 1

00:36:47 - 00:36:48

It is great.

Speaker 2

00:36:48 - 00:36:49

It is awesome. Yeah and it's like

Speaker 1

00:36:49 - 00:36:51

I saw it this morning. Yeah.

Speaker 2

00:36:51 - 00:37:03

It was far more honest than the New York Times. It's great. Yeah. We put a lot of effort into ensuring that Community Notes does not get gamed or have biases. It simply cares about what is the most accurate thing.

Speaker 2

00:37:04 - 00:37:32

And you know, sometimes truth can be a little bit elusive, but you can still aspire to get closer to it. Yes. And I think the effect of Community Notes is more powerful than people may realize, because once people know that they could get community noted on Twitter, they'll think more carefully about what they say. It's basically an encouragement to be more truthful and less deceptive.

Speaker 1

00:37:32 - 00:37:35

Yes, and if the notes themselves are truthful, then it will have the effect.

Speaker 2

00:37:35 - 00:37:43

Absolutely. And all of that is open source. All of Community Notes is open source. So you can read about every community note. You can see exactly how the algorithm works.

Speaker 2

00:37:43 - 00:37:54

You can register, say, like, oh, we need to make this change or that change. So everything is super open book with community notes. There's no black box.

Speaker 1

00:37:55 - 00:38:01

When you jumped into this, though, when you bought it, did you understand? Clearly you understood its importance, or you wouldn't have bought it.

Speaker 2

00:38:02 - 00:38:02

Twitter, yes.

Speaker 1

00:38:02 - 00:38:15

Right. But it's not the biggest, but it's the most important of the social media companies. But did you understand the kind of ferocity you'd be facing, the attacks you'd be facing from power centers in the country?

Speaker 2

00:38:16 - 00:39:05

I thought there would probably be some negative reactions. I was sure not everyone would be pleased with it. But at the end of the day, you know, if the public is happy with it, that's what matters, and the public will speak with their actions. I mean, if they find Twitter to be useful, they will use it more, and if they find it to be not useful, they will use it less. If they find it to be the best source of truth, I think they will use it more. So that's my theory. And so even though, you know, there's obviously a lot of organizations that are used to having sort of unfettered influence on Twitter, they no longer have that.

Speaker 2

00:39:05 - 00:39:06

We used to put New

Speaker 1

00:39:06 - 00:39:16

York Times... you stripped their badge this morning, and then you called them diarrhea. You did. You did. I'm just quoting you. You described their Twitter feed as diarrhea.

Speaker 2

00:39:17 - 00:39:19

I said it was the Twitter equivalent of diarrhea.

Speaker 1

00:39:19 - 00:39:21

OK, it's not literally diarrhea.

Speaker 2

00:39:21 - 00:39:49

No, it's a metaphor. But an accurate one. So I mean, if you look at the NY Times, their Twitter feed is unreadable. Because what they do is they tweet every single article, even the ones that are boring, even ones that don't make it into the paper. So it's just nonstop, a zillion tweets a day with no curation. They really should just be saying, what are the top tweets?

Speaker 2

00:39:51 - 00:40:20

What are the big stories of the day? I don't know, put out like 10 or something, you know, some number that's manageable. As opposed to right now, if you were to follow @NYTimes on Twitter, you're going to get barraged with hundreds of tweets a day, and your whole feed will be filled with NY Times. So this is something I would recommend, actually, for all publications, which is for your primary feed, only put out your best stuff. Don't put out everything.

Speaker 2

00:40:22 - 00:40:50

You can have a second feed that is, here's everything. But have your primary feed be, here's our best stuff. Any media organization or individual, just don't put out hundreds of tweets a day, just put out like 10 good ones or 5 good ones. And if it's a slow news day, don't put out any. Maybe put out 1 or 2.

Speaker 2

00:40:51 - 00:41:27

But don't try to say we're always going to put out 100 tweets, even if it's World War III or a bicycle accident was the biggest news. It's got to be news that earns someone's attention. So just in general, I think I know a thing or two about how to use Twitter, because I was the most interacted-with account on the whole system before the acquisition closed. I didn't have the most followers, but I had the most interactions. And so I clearly know something about how to use Twitter.

Speaker 2

00:41:27 - 00:41:38

And so people should listen to my advice, I think. So people's attention is limited, so just make sure you put the stuff that's most important there.

Speaker 1

00:41:38 - 00:41:50

So because, you know, you and people like you do interact on Twitter, it's obviously enormously powerful in shaping public opinion. It's where a lot of ideas and trends are incubated.

Speaker 2

00:41:50 - 00:41:52

You know it. That's why you bought it.

Speaker 1

00:41:52 - 00:41:56

Absolutely. It's also a magnet for intel agencies from around the world.

Speaker 2

00:41:56 - 00:41:57

And one

Speaker 1

00:41:57 - 00:42:02

of the things we learned after you started opening the books is that they were exerting influence from within Twitter?

Speaker 2

00:42:03 - 00:42:04

I mean, it was absurd.

Speaker 1

00:42:07 - 00:42:08

Did you know that going in?

Speaker 2

00:42:08 - 00:42:19

No. So, things like... since I've been a heavy Twitter user since

Speaker 1

00:42:19 - 00:42:19

2009,

Speaker 2

00:42:21 - 00:42:45

it's sort of like I'm in the Matrix. I mean, I can see things. Do things feel right, do they not feel right? What tweets am I being shown as recommended? I get a feel for what accounts are making comments, where the comments are eerily similar. Yeah. And then you look at the account, and it's just obviously a fake photo.

Speaker 2

00:42:47 - 00:43:26

And it's just obviously a bot cluster, over and over again. So I started to get more and more uneasy about the Twitter situation. And my initial goal was actually not to acquire Twitter. The actual sequence of events was that I held a Twitter poll to say, should I sell some of my Tesla stock? Because a couple years ago, I was getting attacked a lot for allegedly not paying taxes.

Speaker 2

00:43:29 - 00:43:54

Now, I've actually paid a tremendous amount of taxes. There was one year I didn't pay taxes because I had overpaid taxes in the prior year. And you know, when they had that IRS leak BS, they knew that I had overpaid taxes in the prior year, but they said, oh, Elon Musk didn't pay taxes in 2017 or whatever it was. And I was like, but you know that the reason I didn't pay taxes is because I overpaid the prior year. You didn't mention that.

Speaker 2

00:43:54 - 00:44:14

So that was deceptive. Anyway, the Elizabeth Warrens of the world and Bernie Sanders were saying, oh, you know, I'm not selling stock and I'm not paying taxes. And so I'm like, look, I don't know what the right thing to do is here. I thought the right thing to do was to not sell stock. The captain should be the last one to leave the ship.

Speaker 1

00:44:14 - 00:44:14

That's

Speaker 2

00:44:14 - 00:44:33

right. And I thought I was doing the right thing by not selling stock. And now I'm being told I'm doing the wrong thing by clinging on to the stock and not paying taxes. So I held a Twitter poll to say, what do you guys want? Should I sell, I don't know, 10% of my Tesla stock or not.

Speaker 2

00:44:33 - 00:44:44

I'll abide by the results of the poll. And like 60% of people said, yeah, you should sell 10%. So I did. So then I had a bunch of cash. And I'm like, what should I do with this?

Speaker 2

00:44:44 - 00:44:51

At the time, the Federal Reserve rates were super low, so it's just like sitting in the, you know, I guess the-

Speaker 1

00:44:51 - 00:44:52

Your checking account?

Speaker 2

00:44:52 - 00:44:55

Well, in the T-Bill account,

Speaker 1

00:44:55 - 00:44:55

you

Speaker 2

00:44:55 - 00:45:13

know, money market account, whatever. The whole banking thing is a whole separate subject. I know a thing or two about finance. So then I'm sitting with this money in an account that's earning less than the rate of inflation. So the rate of inflation is much higher.

Speaker 2

00:45:13 - 00:45:18

So we've got high inflation. I'm earning peanuts in the money market account. This is dumb. I'm getting like minus.

Speaker 1

00:45:18 - 00:45:19

It's just evaporating.

Speaker 2

00:45:19 - 00:45:39

Yeah, I'm getting minus like 6% or 7% return here, maybe worse. And so then, well, it's like, what stock should I buy? And I believe in buying stocks of companies where you use the product. And Apple's got a competing electric vehicle program. So I like Apple products.

Speaker 2

00:45:39 - 00:45:56

I'm not going to invest in them because they've got a competing autonomous EV program. And so what's the other product that I use a lot? Oh, Twitter. Okay, so I'll put the money in Twitter. It's better than just having a negative 6% inflation situation.

Speaker 2

00:45:58 - 00:46:10

So I like bought a bunch of Twitter stock. Like I said, not with the intent of buying the company, just better than keeping it in a money market. Do you remember how much you bought? I think it was like

Speaker 1

00:46:11 - 00:46:12

8%

Speaker 2

00:46:12 - 00:46:31

or something of the company. I was talking to some of the board members. And then they said, hey, well, do you want to join the board? So I was like, well, I generally don't want to be on boards, because it's boring. And I have a lot of things to do.

Speaker 2

00:46:32 - 00:47:00

But I do care about the direction of Twitter. So I'll consider being on the board. And I thought about it for about a week or so. And then based on the conversations that I was having with the management team and the board, I came to the conclusion, rightly or wrongly, that if I joined the board, they would not listen to me. So then I'm like, huh, okay.

Speaker 2

00:47:01 - 00:47:38

Then I would just be a quisling. I don't want to be in some sort of go-along-for-the-ride quisling situation, effectively a collaborator. And it really felt like... I was starting to feel like, wait a second, something's wrong in the state of Denmark here. Something feels wrong about the platform. It seemed to be drifting in a, I couldn't place it exactly, it felt like it was drifting in a bad direction.

Speaker 2

00:47:38 - 00:48:02

And my conversations with the board and management seemed to confirm my intuition about that. So then I was like, okay. Basically, I was convinced these guys do not care about fixing Twitter. And I had a bad feeling about where it was headed based on the conversations I had with them. So then it was like, you know what?

Speaker 2

00:48:05 - 00:48:27

I'll try acquiring it and see if acquiring it is possible. Now, I didn't have enough cash to acquire it, so I would need support from others, from some of the existing investors. I would also need a lot of debt. And so it wasn't clear to me whether an acquisition would succeed, but I thought I would try. And ultimately, it did succeed.

Speaker 2

00:48:30 - 00:48:31

So anyway, here we are.

Speaker 1

00:48:31 - 00:48:36

But when you got there and all of a sudden you own it and all the data on the servers belongs to you.

Speaker 2

00:48:37 - 00:48:40

Well, it belongs to the people in my view, but yes.

Speaker 1

00:48:40 - 00:48:50

But you can see what it is. Yes. And you can see what they've been doing and you can see who's been working there. You were shocked to find out that various intel agencies were affecting its operations?

Speaker 2

00:48:53 - 00:49:06

The degree to which various government agencies effectively had full access to everything that was going on on Twitter blew my mind. I was not aware of that.

Speaker 1

00:49:06 - 00:49:07

Would that include people's

Speaker 2

00:49:07 - 00:49:20

DMs? Yes. Yes, because the DMs are not encrypted. So one of the first things that we're about to release is the ability to encrypt your DMs.

Speaker 1

00:49:20 - 00:49:26

That's pretty heavy-duty though because a lot of well-known people, reporters talking to their sources, government officials, the richest people

Speaker 2

00:49:26 - 00:49:26

in the

Speaker 1

00:49:26 - 00:49:35

world, they're DMing each other. And the assumption, which was obviously incorrect, was that those were private. But they were being read by various governments?

Speaker 2

00:49:36 - 00:49:52

Yeah, that seems to be the case, yes. Scary. Yes, it is. So like I said, we're moving to have the DMs be optionally encrypted. I mean, you know, there's a lot of DM conversations which are just chatting with friends.

Speaker 2

00:49:52 - 00:50:17

It's not important. But so, hopefully coming out later this month, but no later than next month, is the ability to toggle encryption on or off. So if you are in a conversation you think is sensitive, you can just toggle encryption on, and then no one on Twitter can see what you're talking about. They could put a gun to my head and I couldn't tell them. That's sort of the gun-to-the-head test.

Speaker 2

00:50:17 - 00:50:23

If somebody puts a gun to my head, can I still not see your DMs? That's the acid test.

Speaker 1

00:50:23 - 00:50:24

Yes.

Speaker 2

00:50:26 - 00:50:28

And that's how it should be if you want your.

Speaker 1

00:50:28 - 00:50:31

Have you had complaints from various governments about doing this?

Speaker 2

00:50:33 - 00:50:51

I haven't had direct complaints to me. I've had sort of like some indirect complaints. I think people are a little concerned about complaining to me directly in case I tweet about it. They're like, uh-oh. So they're sort of trying to be more roundabout than that.

Speaker 2

00:51:00 - 00:51:14

And if I got something that was unconstitutional from the U.S. Government, my reply would be to send them a copy of the First Amendment and just say, what part of this are we getting wrong? You have

Speaker 1

00:51:14 - 00:51:15

a lot of government contracts. I'm curious. What part

Speaker 2

00:51:15 - 00:51:17

of this are we getting wrong? Please tell me. I mean,

Speaker 1

00:51:17 - 00:51:31

it's a pretty, no, I'm just saying, but you're kind of exposed in your other businesses. So this is, just in case our viewers aren't following this, this is not, you're not just like a journalist taking a stand on behalf of the First Amendment. You're a guy with big government contracts giving the finger to the government in some way.

Speaker 2

00:51:31 - 00:51:43

Well, am I giving the finger to the government? I think that there are... I'm not someone who thinks that the government is just sort of evil. Right. It's a large bureaucracy.

Speaker 2

00:51:44 - 00:52:22

There are people in government who are human beings, mostly with good motivations, occasionally bad motivations. With rare exception, the people that I know in government have good motivations and just want to get their job done, and they actually believe in the Constitution. So my opinion is actually that most people in the government are good. That's heartening to hear. Yeah, it's rare for me to find someone in the government who I think is perhaps not good. But, you know, at the highest level of the agencies, there are political appointees, as you know.

Speaker 2

00:52:22 - 00:52:56

And the political appointees will have a political agenda. And so, at the highest levels of the various government agencies, there is the ability to put a sort of political thumb on the scale, even if the people operating the agencies don't agree with that. So that's something to be concerned about. I'd be more concerned about political appointees, I think, than the career people. That's been my experience, at least.

Speaker 1

00:52:56 - 00:53:04

Do you think Twitter will be as central to this presidential campaign as it was in the last several?

Speaker 2

00:53:07 - 00:53:34

I think it will play a significant role in elections, not just domestically but internationally. So the goal of new Twitter is to be as fair and even-handed as possible, not favoring any political ideology, but just being fair to all.

Speaker 1

00:53:34 - 00:53:59

Why doesn't Facebook do this? I know that Zuckerberg has said, and I take him at face value, well, I do actually in this way, that he is a kind of old-fashioned liberal who doesn't like to censor. He has, but, you know, why wouldn't a company like that take the stand that you have taken? It's pretty rooted in traditional American political custom, you know, free speech.

Speaker 2

00:54:04 - 00:54:07

My understanding is that Zuckerberg spent

Speaker 1

00:54:08 - 00:54:09

$400

Speaker 2

00:54:09 - 00:54:22

million in the last election, nominally in a get-out-the-vote campaign, but really fundamentally in support of Democrats. Is that accurate or not accurate? That is accurate. Does that sound unbiased to you? No, it doesn't.

Speaker 2

00:54:22 - 00:54:22

Yes.

Speaker 1

00:54:28 - 00:54:35

Um, so you don't see hope that Facebook will approach this as a non-aligned arbiter?

Speaker 2

00:54:37 - 00:54:39

I'm unaware of evidence to suggest that path.

Speaker 1

00:54:46 - 00:54:52

Can you, you've allowed Donald Trump back on Twitter, he hasn't taken you up on your offer because he's got his own thing.

Speaker 2

00:54:52 - 00:54:52

Right.

Speaker 1

00:54:52 - 00:54:54

Do you think he will go back on Twitter?

Speaker 2

00:54:56 - 00:55:35

Well, that's obviously up to him. My job is to, you know, I take freedom of speech very seriously. So, you know, I didn't vote for Donald Trump, I actually voted for Biden. People think I'm some sort of hardcore, you know, certainly some of the media try to paint me as far right or whatever. And the only time I've ever even voted Republican was once, because I registered to vote in South Texas, and it was for a Mexican-American woman for Congress.

Speaker 2

00:55:35 - 00:56:19

That's literally the only Republican vote I've ever cast in my entire life, once. And so, I'm not saying I'm a huge fan of Biden, because that would probably be inaccurate. But, you know, we have difficult choices to make in these presidential elections. I would prefer, frankly, that we put just a normal person as president, a normal person with common sense, whose values are smack in the middle of the country, you know, just the center of the normal distribution, and I think that would be great. I agree.

Speaker 2

00:56:19 - 00:56:22

I agree with that. Everyone would be happier. Would you run?

Speaker 1

00:56:22 - 00:56:24

Like why wouldn't you run?

Speaker 2

00:56:24 - 00:56:32

I wasn't born here, so. Oh, of course you weren't. I'm a technologist also, I'm not a politician, so it's not like

Speaker 1

00:56:32 - 00:56:32

–

Speaker 2

00:56:33 - 00:57:19

I think we have made being president maybe not that much fun, to be totally frank. It is by design a relatively weak role, because it's intended to be balanced by the House and the Senate and the judiciary. If you're a prime minister in England or Canada, you have far more power than if you're president here, because it's like being Speaker of the House and President at the same time. So, you know, the President is deliberately weak in order to avoid creating a king or queen situation. But you get dumped on all day, no matter what you do.

Speaker 2

00:57:20 - 00:57:44

And everything you do is scrutinized. And your life is not your own. Any skeletons you've got in the closet will be trotted out and paraded down Main Street, and even if they don't exist, they'll make them up, fake it, whatever. Politics is a blood sport. Yeah.

Speaker 2

00:57:44 - 00:57:46

So it's not something I'd want to do.

Speaker 1

00:57:46 - 00:58:03

So I got one last thread that you alluded to. You said, don't get me started on the banks. So you've seen a couple of regional bank collapses. And we've been told that's not a big deal, that these are isolated and each one collapsed for unique reasons. It's not systemic in any sense.

Speaker 1

00:58:04 - 00:58:08

What's your sense of the stability of the American banking system?

Speaker 2

00:58:12 - 00:58:44

Well, it's actually at this point a global banking system problem. So we have a situation here where it's not that the canary in the coal mine has died, but the miners are starting to die too. So, you know, Silicon Valley Bank collapsing overnight is one hell of a big canary. It's more like a turkey. It's not some small-fry thing.

Speaker 2

00:58:45 - 00:59:15

It's big fry, or medium fry. And then Credit Suisse, which I think was formed in the mid-1800s, was basically sold for pennies on the dollar, forced to merge with UBS, and even then required a backstop by the Swiss government. I mean, hello guys, maybe we have issues here, maybe things aren't all great. They're definitely not all great. Maybe to put it more forcefully:

Speaker 2

00:59:19 - 01:00:02

I think that there is a serious danger with the global banking system. There's a strong argument that if you were to actually mark to market the portfolios of the banks, the loans and whatnot, the entire banking industry would have negative equity. It feels that way. Yes. So if you look at, say, commercial real estate, like offices and whatnot, the whole work-from-home thing has substantially reduced office usage in cities around the world.

Speaker 2

01:00:05 - 01:00:07

And I think San Francisco is a

Speaker 1

01:00:07 - 01:00:07

40%

Speaker 2

01:00:08 - 01:00:38

off. San Francisco is an extreme example, but I think it's at a 40% vacancy rate. Even New York... I think almost all cities at this point have record vacancies in commercial real estate. So now, commercial real estate used to be something that was a grade A asset. If a bank had commercial real estate holdings, those would be considered the highest security, some of the safest assets you could have.

Speaker 2

01:00:38 - 01:01:05

Now that is not the case anymore. One company after another is canceling their leases or not renewing their leases. Or if they go bankrupt, there's nothing for the bank that owns that real estate to go after, because a previously strong company is now dead. What do you go after at that point? So we really haven't seen the commercial real estate shoe drop.

Speaker 2

01:01:05 - 01:01:40

That's more like an anvil, not a shoe. So the stuff we've seen thus far actually hasn't even... It's only slight real estate portfolio degradation, but that will become a very serious thing later this year, in my view. And I think we're likely to see a drop in house prices, because the interest rates are too high. And for most people, when buying a house, they look at the monthly payment.

Speaker 2

01:01:41 - 01:02:04

If you have a 30-year mortgage, the vast majority of it is interest. So if the Fed rate is high, you have a high base interest rate. Effectively, the price you can pay for the house drops, because you now have to pay more interest, which means that if you've got a fixed monthly payment, you can now afford to buy a house for less money. It effectively drops the prices of houses.
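
The monthly-payment arithmetic here can be sketched with the standard amortization formula. A minimal illustration; the 3% and 6.5% rates and the $2,000 monthly budget are hypothetical, not figures from the conversation:

```python
def affordable_principal(monthly_payment, annual_rate, years=30):
    """Largest loan a fixed monthly payment can service on a standard
    amortizing mortgage: P = pmt * (1 - (1 + r)**-n) / r."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # number of monthly payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

# The same $2,000/month budget under two rate environments:
low = affordable_principal(2000, 0.03)    # roughly $474,000 of house
high = affordable_principal(2000, 0.065)  # roughly $316,000 of house
print(f"at 3.0%: ${low:,.0f}  at 6.5%: ${high:,.0f}")
```

Under these assumptions, the rate rise alone cuts the affordable price by roughly a third for the same monthly payment, which is the mechanism by which higher Fed rates push house prices down.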

Speaker 1

01:02:04 - 01:02:04

Yes.

Speaker 2

01:02:07 - 01:02:42

This is the kind of thing that tends to accelerate. So then you can get negative equity in the home market as well. And so if banks end up having loan losses in both their commercial and, well, they're definitely going to have loan losses in their commercial portfolio, but also in their mortgage portfolio, this is a dire situation. There is a solution to mitigate the magnitude of the damage here, which is for the Fed to lower the rate. But they raised the rate again.

Speaker 2

01:02:46 - 01:02:53

Now, if I recall correctly, which is an important caveat, I think the last time the Fed raised rates going into a recession was

Speaker 1

01:02:53 - 01:02:56

1929. What happened next?

Speaker 2

01:02:57 - 01:02:58

Yeah, the Great Depression.

Speaker 1

01:03:03 - 01:03:12

The concern, I'm going to tell you nothing you don't know, but the concern is if the Fed drops rates again, then inflation will accelerate, and you can't do that in an election year.

Speaker 2

01:03:13 - 01:03:47

So inflation is going to happen no matter what. If you increase the money supply, you get inflation. So there's not some magical cure for getting rid of inflation, except to increase the productivity, the output of goods and services. So if you say, what is money? It's basically numbers in a database that add up to some total.

Speaker 2

01:03:47 - 01:04:31

Then you've got the output of goods and services of the economy. And as long as the ratio of money to goods and services stays constant, you have no inflation. If you add money to the system faster than you increase goods and services, then you have inflation. So all of these COVID stimulus bills were not paid for. More money was just created, because the federal government's checks always pass, you know, unless you hit a debt limit, and there's probably going to be some debt limit crisis later this year.
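
The money-to-output ratio described here is essentially the quantity theory of money. A minimal sketch with velocity held constant; all numbers are hypothetical:

```python
def implied_price_level(money_supply, real_output):
    """Quantity-theory sketch: with velocity held constant, the price
    level tracks money per unit of goods and services produced."""
    return money_supply / real_output

# Baseline economy (hypothetical units):
p0 = implied_price_level(money_supply=100, real_output=100)

# Money supply grows 40% while real output grows only 5%:
p1 = implied_price_level(money_supply=140, real_output=105)

print(f"implied inflation: {p1 / p0 - 1:.1%}")  # money outran output by ~33%
```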

Speaker 2

01:04:31 - 01:04:58

But provided you haven't hit the debt limit, the federal government, unlike state governments or city governments or individuals, can simply issue more money. And that's what they did. I mean, as the old saying goes, there's no free lunch. So if you could just issue massive amounts of money without negative consequences, why don't we just take that to the limit and make everyone a trillionaire? Well, I mean, they tried that in Venezuela.

Speaker 2

01:04:59 - 01:05:00

How'd that work out?

Speaker 1

01:05:00 - 01:05:01

Well, they had to eat zoo animals.

Speaker 2

01:05:02 - 01:05:26

Right, it's not good. You get to the point of, you know, sort of Weimar Germany type stuff, where you bring cash to the store in a wheelbarrow. So there's no free lunch. There's not some ability to issue money and not have inflation. This is just, yeah.

Speaker 2

01:05:28 - 01:05:58

So the inflation will happen. And fiddling with the Fed rate is not really going to affect that. But a high Fed rate can cause a lot of damage by shifting funds in the wrong direction. So the long-term return on, say, the S&P 500, I believe is, depending on how you count it, around

Speaker 1

01:05:58 - 01:05:58

6%.

Speaker 2

01:06:01 - 01:06:20

So if the Fed real rate of return starts to approach what the long-term return is on the stock market, why would you keep any money in the stock market? You should simply buy Treasury bills. Of course. Because the Treasury Bill is a certainty, whereas the stock market fluctuates. This is pretty basic.

Speaker 2

01:06:20 - 01:06:33

Also, why would you keep money in a bank savings account if you can put it in what's called a money market account, which is an account that represents Treasury Bills? If the Treasury bill money market account gives you

Speaker 1

01:06:35 - 01:06:35

4

Speaker 2

01:06:35 - 01:06:55

to 5 percent interest and the bank savings account only gives you 2 percent, you'd be a fool to keep the money in the bank savings account. So the Fed has made a tremendous mistake by going this high with their rate and they need to drop it immediately.
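
The fund-shifting argument reduces to a simple spread calculation. The ~6% long-run S&P 500 figure is the one cited in the conversation; the T-bill yields below are hypothetical:

```python
def equity_risk_premium(expected_equity_return, tbill_yield):
    """Extra expected annual return for bearing stock-market volatility.
    As this approaches zero, the certain T-bill return dominates."""
    return expected_equity_return - tbill_yield

# Long-run S&P 500 return of ~6% against two T-bill regimes:
for tbill in (0.005, 0.05):
    premium = equity_risk_premium(0.06, tbill)
    print(f"T-bill at {tbill:.1%}: equity premium of {premium:.1%}")
```

The same logic applies to the 2% savings account versus the 4-5% money market account: cash flows to the higher certain yield, draining bank deposits.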

Speaker 1

01:06:56 - 01:06:57

Do you think they will?

Speaker 2

01:07:01 - 01:07:35

They will have no choice but to drop it, I think, later this year. Part of the issue is that the Fed is an old institution and has a lot of latency in its data. So it's like driving a car along a winding cliffside road while looking in the rearview mirror. And not even actually the rearview mirror, but a video of the rearview mirror that's 3 months old. Now, if you're on a straight road, that works out OK, because nothing's changing.

Speaker 2

01:07:35 - 01:07:47

Or it's only a slightly bending road. But we're more like doing the Highway 1, PCH trip here. And so you really want to look out the front window.

Speaker 1

01:07:47 - 01:07:49

When you're in Big Sur, yes.

Speaker 2

01:07:49 - 01:08:03

If you're on a cliffside road where you could plunge to your doom, yes, you want to look out the front window. You want to look at the forward commodity prices. Look at what the forward contracts are predicting for commodity prices.

Speaker 2

01:08:05 - 01:08:24

And not some laboriously slow government data collection process. They'll claim to have, for example, December data. But that's not the data of December. It's the data that arrived in December.

Speaker 1

01:08:24 - 01:08:25

Right, exactly.

Speaker 2

01:08:25 - 01:08:28

I mean, think about it, how good is the government at actually collecting data? Horrible.

Speaker 1

01:08:28 - 01:08:29

Yeah. Yeah,

Speaker 2

01:08:29 - 01:08:33

so it's like, that's what I mean. It's like 3 months old with lots of errors.

Speaker 1

01:08:33 - 01:08:35

So if you had $100,000 in your bank account.

Speaker 2

01:08:35 - 01:08:37

Making decisions on that basis is insane.

Speaker 1

01:08:38 - 01:08:47

So what should the average non-rich person do on the cusp of what you're describing, which is economic catastrophe? How do you protect yourself?

Speaker 2

01:09:05 - 01:09:37

I think probably a smart move overall, and this is guidance that I think applies across the ages, is if there are companies in whose products you believe, buy and hold the stock. And when everyone else is panicking, buy more. And when everyone else thinks the stock is going to the moon, sell it. You know, sort of buy low, sell high.

Speaker 1

01:09:37 - 01:09:40

So you're not an index fund guy. You pick specific stocks.

Speaker 2

01:09:48 - 01:10:08

You have to ask, what is the purpose of a company? Why should a company exist? A company is a group of people collected together to provide products and services. It's not a thing in and of itself. It's just a group of people. If it was just one person, you can make cupcakes by yourself, but you can't make cars by yourself. Yes.

Speaker 2

01:10:13 - 01:11:08

So, therefore, the value of a company is a function of the quality of the products and services that it has created and will create. And so if there's a company where you think, well, this company's got a lot of exciting products that I think are awesome, and their current products are good, that's probably a company to invest in. Because that's the reason companies exist, to produce goods and services that you like. And, I mean, there are some caveats here, to make sure you're not investing when it's the hottest thing, you know, because then it's going to be at a temporary high. But when it's not at a weirdly temporary high, I think just generally looking at a company and saying, well, I like the products and services of that company, I like where they're going, and the management seems sensible, then I think buying and holding that stock is probably the right move.

Speaker 2

01:11:09 - 01:11:37

I'm probably doing that with a few companies. That's what I'd recommend. I mean, I could really go on at length about the financial system and the stock market and everything. But these days I think we've gotten a little too far into the passive index fund world. Somebody at some point's got to make a decision.

Speaker 1

01:11:41 - 01:11:42

I couldn't agree more.

Speaker 2

01:11:43 - 01:11:45

Yeah, and by the way there's a- It's

Speaker 1

01:11:45 - 01:11:48

like betting on both red and black. I mean, it doesn't, yeah.

Speaker 2

01:11:49 - 01:12:14

Well, betting on red and black in a casino, where it could come up green, means you're bound to lose. The longer you play, the worse you do. Now, the stock market is kind of the opposite of a casino: the longer you play, the more likely you are to succeed. That historically has been the case, and I think it will continue to be the case. But it's really just important not to panic.

Speaker 2

01:12:14 - 01:12:33

If you buy a stock and you read something terrible in the newspaper, just remember the news has a negative bias. Think about whether the products of the company are still sound. Does it have a good product roadmap? Do you believe in the management? If so, ignore what the press says. Or, if the price drops on a negative article, buy more stock.

Speaker 1

01:12:33 - 01:12:42

So here's my last question. And you mentioned the press. You've been the subject of press coverage for a long time, but very intense media coverage for the last year.

Speaker 2

01:12:43 - 01:12:44

Yeah, sure.

Speaker 1

01:12:45 - 01:12:51

Seems that way. Anyway, How has your opinion of the press changed?

Speaker 2

01:13:01 - 01:13:32

Well, my first company, way back in the day, in the sort of pre-Cambrian era of the Internet, or the World Wide Web, actually helped bring a number of media organizations online. Most newspapers were not online then. We helped bring hundreds of newspapers and magazines online for the first time, and we added a tremendous amount of functionality to their websites with our software. The New York Times Company and Knight Ridder were major investors and were on the board.

Speaker 2

01:13:32 - 01:14:06

I spent a lot of time in newsrooms, so I'm not unfamiliar with the media. I got to see it firsthand all the way back in 1996, so it's been a while. Traditional media has certainly had revenue challenges as online advertising has increased, because online advertising is much more measurable and much more direct. You can say, I spent this amount and got this output. It's interactive, unlike, say, a newspaper or broadcast TV.

Speaker 2

01:14:07 - 01:14:09

You're kind of guessing with a newspaper and broadcast TV.

Speaker 1

01:14:09 - 01:14:09

That's

Speaker 2

01:14:09 - 01:14:30

right. Where if something's online, you can tell immediately that that person saw the ad and bought the product. That's very immediate. And it's actually more effective if the advertising is customized to the individual. So the advertising is more likely to be relevant.

Speaker 2

01:14:31 - 01:15:30

Whereas broadcast, if it's being shown to everyone, is going to be irrelevant to most people. The result has been a huge shift in advertising revenue from newspapers and TV to the Googles and Facebooks of the world, and a tiny bit to Twitter. I think Twitter gets about 1% of advertising revenue, which is quite tiny. So this has meant a shrinking pie, obviously, for most of the traditional media companies and made them more desperate to get clicks, to get attention. And when they're in that desperate state, they tend to push the headlines that get the most clicks, whether those headlines are accurate or not.

Speaker 2

01:15:31 - 01:16:18

So in my view, and I think most people would agree, it's resulted in less truthful, less accurate news, because they've just got to get a rise out of people. And I think it's also increased the negativity of the news, because we humans instinctually respond more to the negative. We have an instinctual negative bias, which kind of makes sense: it was more important to remember where the lion was, or where the tribe that wants to kill my tribe was, than where the bush with berries was.

Speaker 1

01:16:18 - 01:16:19

Yes.

Speaker 2

01:16:19 - 01:17:08

One is a permanent negative outcome, and the other is, well, I might go hungry. So there's an evolved asymmetry in negative versus positive stuff. And historically, the negative stuff would have been quite proximate; it represented a real danger to you as a person. A few hundred years ago, you weren't hearing about negative things on the other side of the world or the other side of the country. You were only hearing about negative things in your village, things that could actually have a bad effect on you. Whereas now, the news very often seems to attempt to answer the question: what is the worst thing that happened on Earth today?

Speaker 2

01:17:09 - 01:17:24

And you wonder why you're sad after reading that. And then they use the most inflammatory language, because every day they've got to sell the advertising, even if it happens to be a slow news day.

Speaker 1

01:17:24 - 01:17:28

Do you read any legacy media outlets?

Speaker 2

01:17:29 - 01:18:07

I read a lot, but I mean, I really get most of my news from Twitter at this point. It is the number 1 news source in the world at this point, I think. And thus it's all the more important that we strive to be accurate. And it's not just a question of accuracy; we also need to allow people to develop the narratives that are of interest to them.

Speaker 2

01:18:07 - 01:18:38

So it's possible for news to be technically truthful, but they're still deciding what the narrative is. Say you took a photo of someone and they had a little zit. You could zoom in on the zit and make it look gigantic, like Mount Vesuvius. It's still true that they have a zit. It's just not the size of Mount Vesuvius, and it doesn't properly reflect their face.

Speaker 2

01:18:40 - 01:19:04

Their face is not 1 giant zit. But you could say, well, it's true. But have they lied? They haven't; they just happened to zoom in on the zit and not look at the rest of the face. So what I'm saying is that the choice of narrative is extremely important.

Speaker 2

01:19:04 - 01:19:30

And at the point where there are only, say, half a dozen editors-in-chief, or maybe even fewer than that, maybe it's only 3 or 4, deciding what the narrative is, what's going to be on the front page, then that's a form of manipulation of public opinion that I think the public often doesn't appreciate, and it's perhaps the most pernicious of all.

Speaker 1

01:19:30 - 01:19:31

That's right, because it's the most subtle.

Speaker 2

01:19:31 - 01:19:35

Yeah, it's the most subtle. They haven't said an untrue thing. They've just chosen what they're going to focus on.

Speaker 1

01:19:35 - 01:19:44

A man called Douglass Mackey is facing 10 years in prison for posting what he believed were funny memes on Twitter. What do you make of that case?

Speaker 2

01:19:45 - 01:20:24

I don't know the details of that case; I've read a little bit about it. You probably know more about it than I do. I certainly don't think someone should go to prison for a long period of time for posting memes on Twitter, in which case we're going to have a very full prison. And if we're talking about election interference, well, there are quite a few people that should be on trial for far more serious crimes than memes on Twitter. Far more serious.

Speaker 2

01:20:24 - 01:20:26

Yes, the Twitter files kind

Speaker 1

01:20:26 - 01:20:27

of showed that I think.

Speaker 2

01:20:27 - 01:20:42

Yes. So, you know, unless this person really did... like I said, I don't know everything that was shown at the trial. Has he been convicted?

Speaker 1

01:20:42 - 01:20:43

Yes, he was convicted on

Speaker 2

01:20:43 - 01:20:46

Friday. Unanimous jury verdict?

Speaker 1

01:20:46 - 01:20:47

Yes.

Speaker 2

01:20:49 - 01:20:52

What was the venue? New York City.

Speaker 1

01:20:53 - 01:20:56

Okay. It was in Brooklyn and it was a hung jury.

Speaker 2

01:20:56 - 01:20:58

A hung jury. It's not unanimous then?

Speaker 1

01:20:58 - 01:21:00

Well, the judge prodded the jury.

Speaker 2

01:21:00 - 01:21:01

Okay.

Speaker 1

01:21:02 - 01:21:11

And they reached a unanimous guilty verdict. It'll be appealed. So what percentage of your staff did you fire at Twitter? One of the great business stories of the year.

Speaker 2

01:21:11 - 01:21:21

I think we're at about 20 percent of the original size, so 80 percent left. Yes. A lot of people left voluntarily.

Speaker 1

01:21:21 - 01:21:25

Sure. But 80% are gone from when you took over?

Speaker 2

01:21:25 - 01:21:26

correct. Yes.

Speaker 1

01:21:26 - 01:21:29

So how do you run the company with only 20% of the staff?

Speaker 2

01:21:30 - 01:21:33

It turns out you don't need all that many people to run Twitter.

Speaker 1

01:21:33 - 01:21:35

But 80%? That's a lot.

Speaker 2

01:21:36 - 01:21:50

Yes. I mean, if you're not trying to run some sort of glorified activist organization, and you don't care that much about censorship, then you can really let go of a lot of people, it turns out.

Speaker 1

01:21:54 - 01:22:13

How many others, without naming names, but how many, I had dinner with somebody who runs a big company recently, he said, I'm really inspired by Elon. And I said, the free speech stuff? He goes, no, the hiring the staff stuff. How many other CEOs have come to you to talk about this?

Speaker 2

01:22:18 - 01:22:44

I spend a lot of time at work, so it's not like I'm meeting with lots of people. They see what actions I've taken. But I think we just had a situation at Twitter where it was absurdly overstaffed. You look and say, what does it really take to operate Twitter? Most of what we're talking about here is a group text service at scale.

Speaker 2

01:22:47 - 01:23:14

How many people are really needed for that? And if you look at, say, the product development over time at Twitter, years versus product improvements, it's a pretty flat line. So what were they doing? It took a year to add an edit button that doesn't work most of the time. I mean, I feel like this is a comedy situation here, you know.

Speaker 2

01:23:16 - 01:23:34

You're not making cars, you know. It's very difficult to make cars or get rockets to orbit. So the real question is, how did it get so absurdly overstaffed? This is insane. So, anyway, it's clearly working.

Speaker 2

01:23:35 - 01:23:42

In fact, I think it's working better than ever. We've increased the responsiveness of the system by, in some cases, over

Speaker 1

01:23:42 - 01:23:43

80%.

Speaker 2

01:23:45 - 01:24:03

There's a core piece of code for generating the timeline, which is run literally billions of times a day. We've cut that code from 700,000 lines to 70,000 lines. Yeah. And improved the code efficiency by over

Speaker 1

01:24:03 - 01:24:04

80%

Speaker 2

01:24:04 - 01:24:33

Meaning how much compute is necessary to render the timeline? Yeah, by 80%. I mean, this is in 4 or 5 months. We've increased the video time from roughly 2 minutes, or best case 10 minutes, to now 2 hours, so you can put 2 hours of video on Twitter. We'll soon be increasing that to where there's no meaningful limit. We've increased the tweet length from 240 characters to 4,000.

Speaker 2

01:24:34 - 01:25:14

We'll be increasing that to where there's, again, no meaningful limit; if you want to post a novel on Twitter, you should be able to do it. And as everyone saw on Friday, we open-sourced the super embarrassing recommendation algorithm, which people are taking apart and eviscerating, which is exactly what I had hoped they would do, pointing out all the nonsense. And we're going to open source more and subject it to public review. We're also going to get criticized a lot, because people will point out all of the foolish things that are happening in the code, but then we'll fix it.

Speaker 2

01:25:14 - 01:25:35

We'll fix it fast and in full public view. And I think that's the kind of thing that earns the public trust. Because you don't have to take my word for it. You can literally read the code, and you can read what people say about the code, and you can see the improvements that we make.

Speaker 2

01:25:35 - 01:26:05

And you can see, in real time, live, see it get better. So my prediction is, I would be surprised if this does not lead the public to think, okay, this is something that I can trust. Far more trustworthy, I mean, than other social media organizations that have some mysterious black box that they refuse to show how it works. I mean, what are they trying

Speaker 1

01:26:05 - 01:26:06

to hide?

Speaker 2

01:26:07 - 01:26:13

What are they trying to hide? Because it's not good things. If they don't have something to hide, why don't they show it?

Speaker 1

01:26:14 - 01:26:17

Because it's a proprietary business secret?

Speaker 2

01:26:19 - 01:26:51

Yeah, sure. So we're trying to make Twitter the most trusted place on the internet, or at least the least untrustworthy place on the internet. I don't think anyone should trust the internet, but maybe we can make Twitter the least untrustworthy. A place where you can see a wide range of political opinions, including ones you disagree with. I think people should be exposed to things they disagree with.

Speaker 2

01:26:51 - 01:27:29

It shouldn't just be continuous self-reinforcement of what you already believe. So that's the goal, and I think we're making some good progress in that direction. I feel good about where things are going, and we definitely want to have things as cleaned up as possible before the elections. If there's any manipulation that we're aware of, we'll make the public aware of it. And, like I said, try to get the truth to the people as best we can.

Speaker 2

01:27:29 - 01:27:30

Thank you.