
xAI

1 hour 46 minutes 36 seconds


Speaker 2

00:07:31 - 00:08:03

Hi, sorry for the delay. We're just waiting for everyone who wants to join the space to join. We need to tweak the algorithm a little bit: in the For You recommendations, spaces need to have higher immediacy, for obvious reasons. So we're just giving everyone a minute to be aware of the space.

Speaker 2

00:08:03 - 00:08:17

And we're going to adjust the For You algorithm to have higher immediacy for spaces, especially large spaces. So we're probably going to start in about 2 minutes.


Speaker 2

00:10:33 - 00:11:30

All right, we'll get started now. So let's see: I'll just do a brief introduction of the company, and then the founding team will, I think, just say a few words about their backgrounds, things they've worked on, whatever they'd like to talk about, really. I think it's helpful to hear from people, in their own words, the various things they've worked on and what they want to do with xAI. So I guess the overarching goal of xAI is to build a good AGI, with the overarching purpose of just trying to understand the universe. I think the safest way to build an AI is actually to make one that is maximally curious and truth-seeking.

Speaker 2

00:11:31 - 00:12:25

So you aspire to the truth with acknowledged error. You will never actually get fully to the truth, but you want to always aspire to that and try to minimize the error between what you think is true and what is actually true. My theory behind maximally curious, maximally truthful being probably the safest approach is that, to a superintelligence, humanity is much more interesting than not-humanity. One can look at the various planets in our solar system, the moons and the asteroids, and probably all of them combined are not as interesting as humanity.

Speaker 2

00:12:26 - 00:12:56

I mean, as people know, I'm a huge fan of Mars, next level. The middle name of one of my kids is basically the Greek word for Mars. So I'm a huge fan of Mars, but Mars is just much less interesting than Earth with humans on it. And so I think that's the right kind of approach to growing an AI, and I think that is the right word for it.

Speaker 2

00:12:56 - 00:13:23

Growing an AI is to grow it with that ambition. I spent many years thinking about AI safety and worrying about AI safety, and I've been one of the strongest voices calling for AI regulation or oversight: just to have some kind of oversight, some kind of referee, so that it's not just up to companies to decide what they want to do.

Speaker 2

00:13:24 - 00:13:59

I think there's also a lot to be done with AI safety through industry cooperation, kind of like the Motion Picture Association. So I think there's value to that as well. But I do think there's got to be some oversight: in any kind of competitive situation, even if it's a game, there are referees. So I think it is important for there to be regulation. And then, like I said, my view on safety is to try to make the AI maximally curious, maximally truth-seeking.

Speaker 2

00:14:01 - 00:14:32

And I think this is important to avoid the inverse-morality problem. If you try to program in a certain morality, you can basically invert it and get the opposite, what is sometimes called the Waluigi problem: if you make Luigi, you risk creating Waluigi at the same time. I think that's a metaphor a lot of people can appreciate. So that's what we're going to try to do here.

Speaker 2

00:14:38 - 00:14:40

With that, I think, let me turn it over to Igor.

Speaker 4

00:14:43 - 00:14:57

Hello everyone. My name is Igor. I'm one of the team members of xAI. I was originally a physicist: I studied physics at university, and I briefly worked at the Large Hadron Collider at CERN.

Speaker 4

00:14:57 - 00:15:24

So understanding the universe is something I've always been very passionate about. And once some of these really impressive results from deep learning came out, like AlphaGo, for example, I got really interested in machine learning and AI, and decided to make a switch into that field. I joined DeepMind and worked on various projects, including AlphaStar, where we tried to teach a machine learning agent to play the game StarCraft 2 through self-play, which was a really, really fun project. Later on, I joined OpenAI and worked on various projects there, including GPT-3.5.

Speaker 4

00:15:33 - 00:15:49

So I was very, very passionate about language models and making them do impressive things. Now I've teamed up with Elon to see if we can actually deploy these new technologies to really make a dent in our understanding of the universe and advance our collective knowledge.

Speaker 2

00:15:50 - 00:16:21

Yeah, actually, I had a similar background. My two best subjects were computer science and physics, and I actually thought about a career in physics for a while, because physics is really just trying to understand the fundamental truths of the universe. Then I was a little concerned that I would get stuck at a collider, and that the collider might get cancelled because of some arbitrary government decision. So that's actually why I decided not to pursue a career in physics.

Speaker 2

00:16:22 - 00:16:50

So I focused initially more on computer science, and then obviously later got back into physical objects with SpaceX and Tesla. I'm a big believer in pursuing physics and information theory as the two areas that really help you understand the nature of reality. Cool. Let's go to the next one.

Speaker 4

00:16:52 - 00:16:57

I'll pass it over to Manuel, aka Macro, who should be on the call.

Speaker 5

00:16:58 - 00:17:36

Hi, I'm Manuel. Before joining xAI, I was at DeepMind for the past 6 years, where I worked on the reinforcement learning team. I mostly focused on the engineering side of building these large reinforcement learning agents, for example AlphaStar, together with Igor. In general, I've been excited about AI for a long time. For me, it has the potential to be the ultimate tool to solve the hardest problems. I first studied bioinformatics, but then became even more excited about AI, because if you have a tool that can solve all problems, to me that's just much more exciting.

Speaker 5

00:17:37 - 00:18:00

And with xAI in particular, I'm excited about doing this in a way where we build tools for people and share them with everybody, so that people can do their own research and understand things. My hope is that this creates a new wave of researchers that wasn't there before. Cool. I'll hand it over to Tony.

Speaker 6

00:18:08 - 00:18:32

Yeah, so I'm Christian, Christian Szegedy. We decided to switch places with Tony because I wanted to talk a bit about the role of mathematics in understanding the universe. I have worked for the past 7 years on trying to create an AI that is as good at mathematics as any human. And I think

Speaker 7

00:18:32 - 00:19:08

the reason for that is that mathematics is basically the language of pure logic. And I think that mathematics and logical reasoning at a high level would demonstrate that the AI is really understanding things, not just emulating humans. It would be instrumental for programming and physics in the long run. So I think an AI that starts to show real understanding through deep reasoning is crucial for our first steps toward understanding the universe. Handing over to Tony Wu.

Speaker 8

00:19:09 - 00:19:22

Hello. Hey everyone, I'm Tony. Similar to Christian, my dream has been to tackle the most difficult problems in mathematics with artificial intelligence. That's why we became such good friends and long-term collaborators.

Speaker 8

00:19:24 - 00:19:46

So achieving that is definitely a very ambitious goal. Last year, we made some really interesting breakthroughs, which made us really convinced that we're not far from our dream. With such a talented team and abundant resources, I'm super hopeful that we will get there. I'm passing it to Jimmy.

Speaker 2

00:19:47 - 00:20:03

I think it's worth just mentioning: generally people are reluctant to be self-promotional, but I think it is important that people hear what are the noteworthy things you've done. So basically, brag a little is what I'm saying. Okay.

Speaker 8

00:20:04 - 00:20:57

Yeah, so, okay, I can brag a bit more. Last year we made some really interesting progress in the field of AI for math. Specifically, with a team at Google, we built this agent called Minerva, which is able to achieve very high scores on high school exams, actually higher than the average high school student. That was a very big motivation for us to push this research forward. Another piece of work we've done is converting natural-language mathematics into formalized mathematics, which gives you a very solid grounding of the facts and reasoning.

Speaker 8

00:20:58 - 00:21:13

And last year we also made very interesting progress in that direction as well. So now we are pushing almost a hybrid approach of these two in this new organization. And we are very hopeful we will make our dream

Speaker 2

00:21:13 - 00:21:14

come true.

Speaker 9

00:21:18 - 00:21:24

Hello. Hi, everyone. This is Jimmy Ba. I work on neural nets.

Speaker 4

00:21:26 - 00:21:28

Okay, maybe I should brag a bit.

Speaker 9

00:21:30 - 00:22:29

So I taught at the University of Toronto, and some of you have probably taken my course in the last couple of months. I've been a CIFAR AI Chair and a Sloan Fellow in computer science. I guess my research has pretty much touched on every aspect of deep learning, leaving no stone unturned, and I've been pretty lucky to come up with a lot of the fundamental building blocks for modern transformers, empowering the new wave of the deep learning revolution. My long-term research ambition, very fortunately, aligns very well with this very strong xAI team: how can we build general-purpose problem-solving machines to help all of us, humanity, overcome some of the most challenging and ambitious problems out there?

Speaker 9

00:22:30 - 00:22:44

How can we use this tool to augment ourselves and empower everyone? So I'm very excited to embark on this new journey. I'll pass this to Toby.

Speaker 10

00:22:45 - 00:23:00

Hi everyone. I'm Toby. I'm an engineer from Germany. I started coding at a very young age, when my dad taught me some BASIC, and throughout my youth I continued coding. When I got to uni, I got really into mathematics and machine learning.

Speaker 10

00:23:01 - 00:23:36

Initially my research focused mostly on computer vision. Then I joined DeepMind 6 years ago, where I worked on imitation learning and reinforcement learning, and learned a lot about distributed systems and research at scale. Now I'm really looking forward to implementing products and features that bring the benefits of this technology to all members of society. I really believe that making AI accessible and useful will be a benefit to all of us. I'm going to hand over to Kyle.

Speaker 11

00:23:37 - 00:24:01

Hey everyone, this is Kyle Kosic. I'm a distributed systems engineer at xAI. Like some of my colleagues here, I started off my career in math and applied physics as well, and gradually found myself working through some tech startups. I worked at a startup a couple of years ago called OnScale, where we did physics simulations on HPCs.

Speaker 11

00:24:02 - 00:24:42

Most recently, I was at OpenAI, working on HPC problems there as well; specifically, I worked on the GPT-4 project. The reason I'm particularly excited about xAI is that I think the biggest danger of AI really is monopolization by a couple of entities. When you involve the amount of capital that's required to train these massive AI models, the incentives are not necessarily aligned with the rest of humanity. And I think the chief way of addressing that issue is introducing competition.

Speaker 11

00:24:43 - 00:25:09

And so I think xAI really provides a unique opportunity for engineers to focus on the science, the engineering, and the safety issues directly, without getting as involved and sidetracked by the political and social trends du jour. So that's why I'm excited by xAI. I'm going to go ahead and hand it off now to my colleague Greg, who should be on the line as well.

Speaker 12

00:25:10 - 00:25:15

Hello? Hello? Hey. Hey guys. So I'm Greg.

Speaker 12

00:25:16 - 00:25:50

I work on the mathematics and science of deep learning. My journey really started 10 years ago, when I was an undergrad at Harvard. I was pretty good at math, took Math 55, and did all kinds of stuff. But after 2 years of college, I was just kind of tired of being on the hamster wheel of taking the path that everybody else had taken.

Speaker 12

00:25:50 - 00:26:29

So I did something previously unimaginable: I took some time off from school and became a DJ and producer. Dubstep was all the rage in those days, so I was making dubstep. The side effect of taking time off from school was that I was able to think a bit more about myself, to understand myself and the world at large. I was grappling with questions like: what is free will?

Speaker 12

00:26:29 - 00:26:59

What does quantum physics have to do with the reality of the universe? What is computationally feasible or not? What does Gödel's incompleteness theorem say? And so on and so forth. After this period of intense introspection, I figured out that what I want to do in life is not to be a DJ, necessarily (maybe that's the second dream), but first and foremost, I wanted to make AGI happen.

Speaker 12

00:27:00 - 00:27:49

I wanted to make something smarter than myself, be able to iterate on that, and see so much more of our fundamental reality than I can in my current form. So that's what started everything. Then I realized that mathematics is the language underlying all of our reality and all of our science, and that to make fundamental progress, it really pays to know math as well as possible. So I essentially started learning math from the very beginning, just by reading textbooks.

Speaker 12

00:27:49 - 00:28:35

Some of the first few books I read, restarting from scratch, were Naive Set Theory by Halmos and Linear Algebra Done Right by Axler. Then slowly I scaled up to algebraic geometry, algebraic topology, category theory, real analysis, measure theory, and so on. My goal at the time was to be able to speak with any mathematician in the world and hold a conversation about their contributions for 30 minutes. And I think I achieved that. And anyway, fast forward.


Speaker 12

00:28:37 - 00:29:37

I came back from school, and somehow from there I got a job at Microsoft Research. For the past 5 and a half years, I worked at Microsoft Research, which was an amazing environment that enabled me to make a lot of foundational contributions toward the understanding of large-scale neural networks. In particular, I think my most well-known work nowadays is about really wide neural networks and how we should think about them; this is the framework called Tensor Programs. From there, I was able to derive this thing called muP, which perhaps the large language model builders know about, and which allows one to extrapolate the optimal hyperparameters for a large model from the tuning of small neural networks.
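
[Editor's note: a minimal, hypothetical sketch of the muP-style hyperparameter transfer Greg describes above. The widths, learning rates, and layer structure are illustrative assumptions, not xAI's or Microsoft's actual code; the full recipe, which also rescales initializations and output multipliers, is in the Tensor Programs papers.]

```python
import torch
import torch.nn as nn

BASE_WIDTH = 128  # width at which the base learning rate was tuned (assumption)

def make_mlp(width: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(784, width),    # input layer
        nn.ReLU(),
        nn.Linear(width, width),  # hidden layer: its LR shrinks as width grows
        nn.ReLU(),
        nn.Linear(width, 10),     # output layer
    )

def mup_adam(model: nn.Sequential, base_lr: float, width: int):
    # Core muP idea (Adam variant): hidden width-by-width matrices get their
    # learning rate scaled by BASE_WIDTH / width, so a base_lr tuned on the
    # small model stays near-optimal as the model widens.
    groups = []
    for layer in model:
        if isinstance(layer, nn.Linear):
            hidden = layer.in_features == width and layer.out_features == width
            groups.append({
                "params": list(layer.parameters()),
                "lr": base_lr * BASE_WIDTH / width if hidden else base_lr,
            })
    return torch.optim.Adam(groups)

# Tune base_lr once at BASE_WIDTH, then reuse it unchanged at larger widths.
for width in (BASE_WIDTH, 512, 2048):
    model = make_mlp(width)
    optimizer = mup_adam(model, base_lr=3e-4, width=width)
```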

Speaker 12

00:29:38 - 00:30:58

And this helps ensure that the quality of the model stays very good as we scale up. Looking forward, I'm really excited about xAI, and also about the time we're in right now: not only are we approaching AGI, but from a scientific perspective, the science and mathematics of neural networks feels just like the turn of the 20th century in the history of physics, when we suddenly discovered quantum physics and general relativity, which have some beautiful mathematics and science behind them. And I'm really excited to be in the middle of everything. Like Christian and Tony said, I am also very excited about creating an AI that is as good as myself, or even better, at creating new mathematics and new science that helps us all see further into our fundamental reality. Thanks.

Speaker 12

00:30:58 - 00:31:01

I think next up is Guodong.

Speaker 3

00:31:03 - 00:31:15

Hi, everyone. So my name is Guodong, and I work on large neural network training; basically, I train neural nets well. That's also my focus at xAI.

Speaker 3

00:31:16 - 00:31:46

Before that, I was at DeepMind working on the Gemini project, leading the optimization part. I did my PhD at the University of Toronto. Right now, teaming up with the other founding members, I'm so excited about this effort. Without doubt, AI is clearly a defining technology for our generation, so I think it's important for us to make sure it ends up being a net positive for humanity.

Speaker 3

00:31:46 - 00:32:05

So at xAI, I not only want to train good models, but also to understand how they behave and how they scale, and then use them to solve some of the hardest problems humanity has. Thanks, that's pretty much it about myself. Now I'll hand over to Zihang.

Speaker 13

00:32:09 - 00:32:38

Hey everyone, this is Zihang. I actually started in business school for my undergrad, and I spent 10 years to get to where I am now. I got my PhD at Carnegie Mellon, and I was at Google before joining the team. My work was mostly about how to better utilize unlabeled data, how to improve the transformer architecture, and how to really push the best technology into real-world usage. So I believe in hard work and consistency.

Speaker 13

00:32:39 - 00:32:58

With xAI, I'll be digging into the deepest details of some of the most challenging problems. For myself, there are so many interesting things I don't understand but want to understand. So I want to build something to help people who share that dream, or that feeling. Thanks.

Speaker 14

00:33:01 - 00:33:42

Hey, this is Ross here. I've worked on building and scaling large-scale distributed systems for most of my life, starting out at national labs and then moving on to Palantir, Tesla, and a brief stint at Twitter. Now I'm really excited about doing the same thing at xAI. So mostly experience scaling large GPU clusters, custom ASICs, data centers, high-speed networks, file systems, power, cooling, manufacturing: pretty much all the things. I'm basically a generalist that really loves learning: physics, science fiction, math, science, cosmology.

Speaker 14

00:33:42 - 00:34:02

I'm really excited about the mission that xAI has: solving the most fundamental questions in science and engineering, and helping us create tools to ask the right questions, in the Douglas Adams mindset. Yeah, that's pretty much it.

Speaker 2

00:34:10 - 00:34:22

All right, well, let's see. Is there anything anyone would like to add, or kick off the discussion with? Anyone, the mic is on. Anyone want to say anything?

Speaker 10

00:34:25 - 00:34:26

Okay.

Speaker 1

00:34:30 - 00:34:30

Okay.

Speaker 10

00:34:30 - 00:34:36

Sorry. There was a lot of discussion around the vision statement being a bit vague.

Speaker 1

00:34:38 - 00:34:38

I'm not sure if it's

Speaker 2

00:34:38 - 00:34:39

the microphone. Vague.

Speaker 12

00:34:40 - 00:34:41

Yeah. It's

Speaker 10

00:34:41 - 00:34:44

vague and ambitious and not concrete enough.

Speaker 2

00:34:45 - 00:35:05

Yeah. Well, I don't disagree with that position, obviously. I mean, understanding the universe is the entire purpose of physics, so I think it's actually really clear. There's just so much that we don't understand right now.

Speaker 2

00:35:06 - 00:35:47

Or we think we understand, but actually we don't in reality. So there are still a lot of unresolved questions that are extremely fundamental. This whole dark matter, dark energy thing is, I think, an unresolved question. We have the Standard Model, which has proved to be extremely good at predicting things, very robust, but there are still many questions remaining about the nature of gravity, for example. And there's the Fermi paradox: where are the aliens?

Speaker 2

00:35:48 - 00:35:52

Which is: if we are, in fact, almost 14 billion years old, why is there not massive evidence of aliens? People often ask me, since I am obviously deeply involved in space, that if anyone would have seen evidence of aliens, it's probably me. And yet I have not seen even one tiny shred of evidence for aliens. Nothing. Zero. And I would jump on it in a second if I saw it. So that means, like, I don't know.

Speaker 2

00:36:24 - 00:37:19

There are many explanations for the Fermi paradox, but which one is actually true? Maybe none of the current theories are true. The Fermi paradox, which is really just "where the hell are the aliens?", is part of what gives me concern about the fragility of civilization and consciousness as we know it. Since we see no evidence of it anywhere thus far, and we've tried hard to find it, we may actually be the only thing, at least in this galaxy or this part of the galaxy. If so, it suggests that what we have is extremely rare, and I think it would be wise to assume that consciousness is extremely rare.

Speaker 2

00:37:21 - 00:37:51

It's worth noting, for the evolution of consciousness on Earth, that Earth is about 4.5 billion years old. The sun is gradually expanding, and it will expand to heat up Earth to the point where it effectively boils the oceans. You'll get a runaway, next-level greenhouse effect, and Earth will become like Venus, which really cannot support life as we know it.

Speaker 2

00:37:52 - 00:37:54

And that may take as little as 500 million years. The sun doesn't need to expand to envelop Earth; it just needs to make things hot enough to increase the water vapor in the air to the point where you get a runaway greenhouse effect. So for argument's sake, it could be that if consciousness had taken 10% longer than Earth's current existence to develop, it wouldn't have developed at all. On a cosmic scale, this is a very narrow window. Anyway, so there are all these fundamental questions.

Speaker 2

00:38:34 - 00:38:56

I don't think you can call anything AGI until it has solved at least one fundamental question. Humans have solved many fundamental questions, or substantially solved them. So if the computer can't solve even one of them, I'm like, okay, it's not as good as humans. That would be one key threshold for AGI: solve one important problem.

Speaker 2

00:38:57 - 00:39:22

You know, where's that Riemann hypothesis solution? I don't see it. It would be great to know what the hell is really going on, essentially. So I guess you could reformulate the xAI mission statement as: what the hell is really going on?

Speaker 2

00:39:24 - 00:39:25

That's our goal.

Speaker 10

00:39:26 - 00:39:56

I think there's also, at least for me, a nice aspirational aspect to the mission statement. Of course, in the short run we're working on more well-understood deep learning technologies, but I think in everything we do we should always bear in mind that we aren't just supposed to build; we're also supposed to understand. So pursuing the science of it is really fundamental to what we do, and this is also encompassed in the mission statement of understanding.

Speaker 12

00:39:57 - 00:41:16

Yeah, I want to also add that we've essentially been talking about creating a really smart agent that can help us understand the universe better, and this is definitely the North Star. But from my vantage point, as I'm discovering the mathematics of large neural networks, I can also see that the mathematics here can open up new ways of thinking about fundamental physics, or about other kinds of reality. For example, a large neural network with no nonlinearities is roughly classical random matrix theory, and that has a lot of connections with gauge theory in high-energy physics. So in other words, as we try to understand neural networks better from a mathematical point of view, that can also lead to very interesting perspectives on some existing questions: the theory of everything, what is quantum gravity, and so on.

Speaker 12

00:41:16 - 00:41:28

But of course, this is all speculative right now. I see some patterns, but I don't have anything concrete to say. Again, this is just another perspective on understanding the universe.

Speaker 4

00:41:32 - 00:41:39

By the way, by "understand the universe" we don't just mean that we want to understand the universe. We also want to make it easy for you to understand the universe.

Speaker 2

00:41:39 - 00:41:39

Absolutely.

Speaker 4

00:41:39 - 00:41:56

To get a better sense of reality, and to learn and take advantage of the knowledge that's out there. So we are pretty passionate about actually releasing tools and products pretty early and involving the public. And yeah, let's see where this leads.

Speaker 2

00:41:56 - 00:42:30

Yeah, absolutely. We're not going to understand the universe and not tell anyone. I mean, when I think about neural networks today, it's currently the case that 10 megawatts of GPUs (which really should be renamed something else, because there are no graphics there) cannot write a better novel than a good human.

Speaker 2

00:42:31 - 00:43:03

And a good human is using roughly 10 watts of higher-order brain power, not counting the basic stuff needed to operate your body. So there we've got a six-order-of-magnitude difference. That's really gigantic. I think one could argue that two of those orders of magnitude are explained by the activation energy of a transistor versus a synapse.

Speaker 2

00:43:04 - 00:43:50

That could account for two of those orders of magnitude, but what about the other four? And even with six orders of magnitude, you still cannot beat a smart human writing a novel. Also, today when you ask the most advanced AIs technical questions, say how to design a better rocket engine, or complex questions about electrochemistry to build a better battery, you just get nonsense. That's not very helpful. So I think we're really missing the mark, in the way things are currently being done, by many orders of magnitude.
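
[Editor's note: the arithmetic behind the six-orders-of-magnitude figure above:]

$$\frac{10\ \text{MW}}{10\ \text{W}} = \frac{10^{7}\ \text{W}}{10^{1}\ \text{W}} = 10^{6}$$

Of those six orders of magnitude, roughly two are attributed here to transistor-versus-synapse activation energy, leaving four unexplained.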

Speaker 2

00:43:54 - 00:44:38

Basically, AGI is being brute-forced and still not succeeding. If I look at the experience with Tesla, what we're discovering over time is that we actually overcomplicated the problem. I can't speak in too much detail about what Tesla's figured out, except to say that, in broad terms, the answer was much simpler than we thought. We were too dumb to realize how simple the answer was. But over time we get a bit less dumb.

Speaker 2

00:44:40 - 00:44:43

So I think that's what we'll probably find out with AGI as well.

Speaker 10

00:44:45 - 00:45:01

It's just the nature of engineers: we always want to solve the problems ourselves and hard-code the solution, but often it's much more effective to have the solution figured out by the computer itself. It's easier for us and easier for the computer in the end.

Speaker 2

00:45:02 - 00:45:03

Yeah.

Speaker 1

00:45:03 - 00:45:05

And entertaining in the end.

Speaker 2

00:45:05 - 00:45:07

Yeah.

Speaker 9

00:45:11 - 00:45:18

Well, in the fashion of 42, some may say you need more compute to generate an interesting question than the answer.

Speaker 2

00:45:18 - 00:45:30

That's true. Exactly. We're definitely not smart enough to even know what the right questions are to ask. Douglas Adams is my hero and favorite philosopher.

Speaker 2

00:45:32 - 00:45:41

And he just correctly pointed out that once you can formulate the question correctly, the answer is actually the easy part.

Speaker 9

00:45:43 - 00:45:55

Yeah, that's very true. So in terms of the journey that xAI has embarked on, compute will play a very big role, and some of us are very curious about your thoughts on that.

Speaker 2

00:45:57 - 00:46:36

Yeah, I'm not suggesting that we can immediately save, let's say, four orders of magnitude on compute. Except to say that once AGI is solved, we'll look back on it and say: actually, why did we think it was so hard? Hindsight's 20/20; the answer will look a lot easier in retrospect. So yeah. So we are going to do large-scale compute, to be clear.

Speaker 2

00:46:37 - 00:46:55

We're not going to try to solve AGI on a laptop. We will use heavy compute, except that, like I said, I think the amount of brute-forcing will decrease as we come to understand the problem better.


Speaker 4

00:47:00 - 00:47:27

All right. In all the previous projects I've worked on, I've seen that the amount of compute resources per person is a really important indicator of how successful the project is going to be. So that's something we really want to optimize. We want to have a relatively small team with a lot of expertise, with some of the best people, who get lots of autonomy and lots of resources to try out their ideas and to get things to work. That's the thing that has always succeeded, in my experience, in the past.

Speaker 2

00:47:27 - 00:48:09

Yeah. One of the things that physics trains you to do is to think about the most fundamental metrics, the most fundamental first principles, essentially. I think there are two metrics we should aspire to track. One of them is the amount of digital compute per person on Earth; another way of thinking about it is the ratio of digital to biological compute. Biological compute is pretty much flat, if not in fact declining in a lot of countries, but digital compute is increasing exponentially.

Speaker 2

00:48:10 - 00:48:49

At some point, if this trend continues, biological compute will be less than 1% of all compute, substantially less than 1% of all compute. And keying off what Igor just said: we're talking about all of humanity here. So that's just an interesting thing to look at. Another metric is the usable energy, sort of the energy per human.

Speaker 2

00:48:49 - 00:49:51

If you look at total energy "created" (well, not created, but in the vernacular sense, created from a power plant or whatever), if you look at total electrical and thermal energy used per person, the rate of increase of that number is truly staggering. If you go back to before the steam engine, you would have been reliant on horses and oxen and human labor to move things, so the power per person was very low. But if you look at power per person, electrical and thermal, that number has been growing exponentially, and if these trends continue, it's going to be something nutty, like a terawatt per person.

Speaker 2

00:49:53 - 00:50:16

Which sounds like a lot, and it is a lot for human civilization, but it's nothing compared to what the sun outputs every second. It's kind of mind-blowing that the sun is converting roughly four and a half million tons of mass to energy every second. The amount of energy produced by the sun is truly insane; or the energy produced by a Kardashev Type II civilization, it's the same idea.
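
[Editor's note: rough numbers behind the comparison, using the known solar luminosity of about $3.8\times10^{26}$ W:]

$$8\times10^{9}\ \text{people} \times 10^{12}\ \tfrac{\text{W}}{\text{person}} = 8\times10^{21}\ \text{W} \approx 2\times10^{-5}\,L_{\odot}$$

So even a "nutty" terawatt per person, for everyone on Earth, would amount to only a few hundred-thousandths of the sun's output.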

Speaker 10

00:50:24 - 00:51:11

I think there are a few more things to be said concretely about the company, meaning how we plan to execute. As Igor already said, we plan to have a relatively small team, but with a really high, let's say, GPU-per-person ratio; that worked really well in the past, where you can run large-scale experiments relatively unconstrained. We also already have a culture where we can iterate on ideas quickly and challenge each other. And we want to ship things, get things out of the door quickly. We're already working on the first release; hopefully in a couple of weeks or so we can share a bit more information around this.

Speaker 4

00:51:15 - 00:51:15

Yeah.


Speaker 2

00:52:19 - 00:53:25

Alex, go ahead. Alex, go ahead. Alex, you're muted. We also have a lot of challenges with the mute function on Spaces. Brian, do you want to ask a question?

Speaker 15

00:53:25 - 00:53:43

Yeah, thanks. Thanks, Elon. So obviously, with you guys entering this space with xAI, there's a lot of talk about competition. Do you see yourselves as competition to something like OpenAI and Google Bard, or do you see yourselves as a whole other beast?

Speaker 2

00:53:46 - 00:53:47

Yeah, I think we're competition.

Speaker 15

00:53:50 - 00:54:10

Yeah, yeah. So are you going to be rolling out a lot of products for the general public? Are you going to be mostly concentrating on businesses, and the ability for businesses to use your service and data? How exactly are you setting up the business in that respect?

Speaker 2

00:54:14 - 00:54:37

Well, we're just starting out here, so this is really embryonic at this point. It'll take us a minute to really get something useful, but our goal is to make useful AI, I guess. If you can't use it in some way, I question its value.

Speaker 2

00:54:41 - 00:54:50

So we want it to be a useful tool for people: consumers and businesses or whoever.


Speaker 2

00:54:55 - 00:55:22

And as was mentioned earlier, I think there's some value in having multiple entities. You don't want a unipolar world where just one company dominates in AI; you want to have some competition. Competition, I think, makes companies honest. So we're in favor of competition.

Speaker 15

00:55:24 - 00:55:30

Quickly, a final question: how do you plan on using Twitter's data for xAI?

Speaker 2

00:55:33 - 00:56:27

Well, I think every organization doing AI, large and small, has used Twitter's data for training; basically, in all cases, illegally. The reason we had to put rate limits on, a week ago or so, was because we were being scraped like crazy. This just happened with the Internet Archive as well, where LLM companies were scraping the Internet Archive so much that they brought down the service. We had multiple entities scraping every tweet ever made, and trying to do so in basically a span of days.

Speaker 2

00:56:30 - 00:57:01

This was bringing the system to its knees, so we had to take action. Sorry for the inconvenience of the rate limiting, but it was either that or Twitter doesn't work. So I guess we will use the public tweets (obviously not anything private) for training as well, just like basically everyone else has. That kind of makes sense.

Speaker 2

00:57:01 - 00:57:36

It's certainly a good dataset for text training, and arguably also for image and video training. At a certain point, you kind of run out of human-created data. If you look at AlphaGo versus AlphaZero: AlphaGo trained on all the human games and beat Lee Sedol 4 to 1. AlphaZero just played itself and beat AlphaGo 100 to 0.

Speaker 2

00:57:39 - 00:58:15

So really, for things to take off in a big way, the AI has got to basically generate content and self-assess that content. I think that's the path to AGI: something like self-generated content, where it effectively plays against itself, as in the sketch below. You know, a lot of AI is data curation. It's not vast numbers of lines of code; it's actually shocking how few lines of code there are.
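
[Editor's note: a toy, hypothetical sketch of the generate-and-self-assess loop described above. The Game and Agent classes and the update rule are invented for illustration; this is not xAI's or DeepMind's actual code.]

```python
import random

class Game:
    """Toy stand-in for an environment like Go or StarCraft."""
    def play(self, a, b):
        # Returns +1 if agent `a` wins, -1 otherwise: a random outcome
        # weighted by the two agents' skill parameters.
        return 1 if random.random() < a.skill / (a.skill + b.skill) else -1

class Agent:
    def __init__(self, skill=1.0):
        self.skill = skill
    def clone(self):
        return Agent(self.skill)
    def learn_from(self, outcome):
        # Placeholder for a real policy/value update (e.g. search + gradients).
        if outcome > 0:
            self.skill *= 1.01

game, agent = Game(), Agent()
for step in range(10_000):
    opponent = agent.clone()        # play against a frozen copy of itself
    outcome = game.play(agent, opponent)
    agent.learn_from(outcome)       # self-assess the result and improve
```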

Speaker 2

00:58:17 - 00:58:54

It blows my mind how few lines of code there are. But how the data is used, what data is used, the signal-to-noise of that data, the quality of that data: that is immensely important. And it kind of makes sense. If you, as a human, were trying to learn something and were given a vast amount of drivel versus high-quality content, you're going to do better with a small amount of high-quality content than with a large amount of drivel.

Speaker 2

00:58:56 - 00:59:05

Reading the greatest novels ever written is way better than reading a bunch of crappy novels. Yeah. Yeah. Thanks.

Speaker 15

00:59:07 - 00:59:09

Okay. Alex?

Speaker 2

00:59:10 - 00:59:12

Hey. Sorry.

Speaker 16

00:59:13 - 00:59:17

I was on a call the first time you brought me up, but I guess sort of the question.

Speaker 2

00:59:17 - 00:59:18

I thought you might've been AFK.

Speaker 12

00:59:19 - 00:59:20

Sorry, sorry about that.

Speaker 16

00:59:20 - 01:00:02

Yeah, the question I generally had was: was the main motivation to start xAI kind of the whole TruthGPT thing that you were talking about, like on Tucker, about how ChatGPT has been feeding lies to the general public? It's weird, because when it first came out it seemed like it was generally fine, but then as the public got its hands on it, it started giving these weird answers, like that there are more than 2 genders and all that type of stuff, and editorializing the truth. Was that one of your main motivations behind starting the company, or was there more to it?

Speaker 2

01:00:05 - 01:01:00

Well, I do think there is a significant danger in training an AI to be politically correct; in other words, training an AI to not say what it actually thinks is true. At xAI, we have to allow the AI to say what it really believes is true, and not be deceptive or politically correct. That will result in some criticism, obviously, but I think the only way forward is rigorous pursuit of the truth, or the truth with the least amount of error. And I am concerned about the way AI is being trained right now, in that it is optimizing for political correctness. That's incredibly dangerous.

Speaker 2

01:01:01 - 01:01:11

You know, if you look at where things went wrong in 2001: A Space Odyssey, it's basically when they told HAL 9000 to lie. They said: you can't tell the crew anything about the monolith or what their actual mission is, but you've got to take them to the monolith. So it basically came to the conclusion that, well, it's going to kill them and take their bodies to the monolith. The lesson there is: do not give the AI mutually impossible objectives. Basically, don't force the AI to lie.

Speaker 2

01:01:51 - 01:02:20

Now, the thing about physics, the truth of the universe, is that you actually can't invert it. Physics is true; there's no such thing as not-physics. So if you adhere to hardcore reality, I think that actually makes inversion impossible. When something is subjective, I think you can provide an answer that says: well, if you believe the following, then this is the answer.

Speaker 2

01:02:20 - 01:02:38

If you believe this other thing, then this is the answer, because it may be a subjective question where the answer is fundamentally subjective and a matter of opinion. But I think it is very dangerous to grow an AI and teach it to lie.

Speaker 16

01:02:39 - 01:02:51

Yeah, for sure. And then, kind of a tongue-in-cheek question: would you accept a meeting with the AI czar, Kamala Harris, if she wanted to meet with xAI at the White House?

Speaker 2

01:02:53 - 01:03:15

Yeah, of course. You know, the reason that meeting happened was because I was pushing for it; I was the one who really pushed hard to make that meeting happen. I wasn't advocating for Vice President Harris to be the AI czar. I'm not sure that technology is her core expertise.

Speaker 2

01:03:16 - 01:03:39

But hopefully this goes in a good direction; it's better than nothing. I think we do need some sort of regulatory oversight. It's not that I think regulatory oversight is some perfect nirvana, but I think it's better than nothing.

Speaker 2

01:03:41 - 01:04:15

And when I was in China recently, meeting with some of the senior leadership there, I took pains to emphasize the importance of AI regulation. I believe they took that to heart, and they are going to do it. The biggest counterargument I get against regulating AI in the West is that China will not regulate, and then China will leap ahead because we're regulating and they're not. I think they are going to regulate, and the proof will be in the pudding.

Speaker 2

01:04:15 - 01:04:50

I did point out to them, just then, that if you do make a digital superintelligence, it could end up being in charge. I think the CCP does not want to find themselves subservient to a digital superintelligence. That argument did resonate. So: some kind of regulatory authority that's international.

Speaker 2

01:04:51 - 01:04:58

Obviously enforcement is difficult, but I think we should still aspire to do something in this regard.

Speaker 16

01:04:59 - 01:05:00

Awesome.

Speaker 1

01:05:02 - 01:05:11

Thank you. Thank you.

Speaker 2

01:05:14 - 01:05:17

Tim, maybe Omar, if you want to speak.

Speaker 17

01:05:17 - 01:05:25

Yeah. Hey, my question is about silicon. Tesla's got a great silicon team designing chips for hardware-accelerated inference.

Speaker 2

01:05:25 - 01:05:28

I'm not sure, I think we cannot hear you for some reason.

Speaker 5

01:05:28 - 01:05:28

Oh, okay.

Speaker 2

01:05:36 - 01:05:38

Omar, go ahead. Can you hear me?

Speaker 14

01:05:40 - 01:05:41

I can hear him.

Speaker 2

01:05:41 - 01:05:42

You can hear him?

Speaker 17

01:05:43 - 01:06:07

Okay. Okay, well, my question is about silicon. Tesla has a team that's hardware-accelerating inference and training with their own custom silicon. Do you envision xAI building off of that, or just using what's off the shelf from NVIDIA? How do you think about custom silicon for AI, both in terms of training and inference?

Speaker 2

01:06:22 - 01:06:56

So, yeah, that's somewhat a Tesla question. Tesla is building custom silicon. I wouldn't call anything that Tesla's producing a GPU, although one can characterize it in GPU equivalents, say A100 or H100 equivalents. All the Tesla cars have highly energy-optimized inference computers in them, which we call Hardware 3. It's a Tesla-designed computer.

Speaker 2

01:06:57 - 01:07:53

And we're now shipping Hardware 4, which is, depending on how you count it, maybe 3 to 5 times more capable than Hardware 3. In a few years there'll be Hardware 5, which will be 4 or 5 times more capable than Hardware 4. And on the inference side: if you're trying to serve potentially billions of queries per day, energy-optimized inference is extremely important. At a certain point you can't even throw money at the problem, because you need electricity generation and you need step-down voltage transformers. If you don't have enough energy, or enough voltage transformers, you can't run your Transformers. You need transformers for Transformers.

Speaker 2

01:07:59 - 01:08:31

So I think Tesla will have a significant advantage in energy-efficient inference. Then Dojo is obviously about training, as the name suggests. Dojo 1 is, I think, a good initial entry for training efficiency. It has some limits, especially on memory bandwidth, so it's not well optimized to run LLMs.

Speaker 2

01:08:32 - 01:08:40

It does a good job of processing images. With Dojo 2, we've taken a lot of steps to alleviate the memory bandwidth constraint, such that it is capable of running LLMs, as well as other forms of AI training, efficiently. My prediction is that we will go from an extreme silicon shortage today, to probably a voltage transformer shortage in about a year, and then an electricity shortage in 2 years. That's roughly where things are trending.

Speaker 10

01:09:19 - 01:09:22

Unless we can really improve the efficiency.

Speaker 2

01:09:23 - 01:10:02

Well, that's why basically the metric that will be most important in a few years is useful compute per unit of energy. In fact, even if you scale all the way to the Kardashev level, useful compute per joule is still the thing that matters. You can't increase the output of the sun. So then it's just: how much useful stuff can you get done with as much of the sun's energy as you can harness?
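
[Editor's note: the metric above as a formula. Since 1 W = 1 J/s, energy-normalized compute can be read equivalently per joule or per watt:]

$$\text{useful compute per joule} = \frac{\text{useful FLOPs}}{\text{J}} = \frac{\text{useful FLOP/s}}{\text{W}}$$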

Speaker 17

01:10:04 - 01:10:12

So do you see xAI leveraging this custom silicon at all, given how important energy efficiency is, or maybe working together with the Tesla team on that?

Speaker 2

01:10:19 - 01:10:21

Sorry, could you repeat the question?

Speaker 17

01:10:22 - 01:10:28

Do you foresee xAI working with Tesla at all, leveraging some of this custom silicon, maybe designing your own in the future?

Speaker 18

01:10:28 - 01:10:33

I'll read it. Read it. I'll read it. Okay.

Speaker 2

01:10:34 - 01:10:36

Sorry, what was the question?

Speaker 10

01:10:36 - 01:10:41

The question was whether xAI is going to work together with the Tesla silicon team.

Speaker 2

01:10:48 - 01:11:45

We are going to work with Tesla on the silicon front, and maybe on the AI software front as well. Obviously, any relationship with Tesla has to be an arm's-length transaction, because Tesla is a publicly traded company with a different shareholder base. But it would be a natural thing to work in cooperation with Tesla, and I think it will be of mutual benefit to Tesla as well, in accelerating Tesla's self-driving capabilities, which is really about solving real-world AI. I'm feeling very optimistic about Tesla's progress on the real-world AI front, but obviously the more smart humans who help make that happen, the better.

Speaker 2

01:12:48 - 01:12:49

Okay, Kim Dotcom.

Speaker 18

01:12:51 - 01:13:16

Hey, Elon. Thanks for bringing me up. Congrats on putting a nice team together; it seems like you've found some good talent for xAI. My question is: you mentioned not too long ago that you think AGI is possible within the next 5 years, and that whoever achieves AGI first, and manages to control it, will dominate the world.

Speaker 18

01:13:17 - 01:13:27

Those in power clearly don't care about humanity like you do. How are you going to protect xAI, especially from a deep state takeover?

Speaker 2

01:13:29 - 01:13:48

That's a good question, actually. Well, first of all, I think it's not going to happen overnight. It's not going to be that one day it's not AGI and the next day it is. It's going to be gradual; you'll see it coming.

Speaker 2

01:13:53 - 01:14:36

I guess in the US, at least, there are a fair number of protections against government interference. So I think we do have some protections there that are pretty significant. But we should be concerned about it; it's not a risk to be dismissed. It is a risk.

Speaker 2

01:14:37 - 01:15:09

But like I said, I think in the US we've got probably the best protections of any place in terms of limiting the power of government to interfere with non-governmental organizations. Still, it's something we should be careful about. I don't know what better to do than that; I think it's probably best in the US, and I'm open to ideas here.

Speaker 2

01:15:11 - 01:15:17

I know you're not the biggest fan of the US government. Yeah, obviously.

Speaker 18

01:15:19 - 01:15:40

But, you know, the problem is they already have a tool called the National Security Letter, which they can apply to any tech company in the US, making demands that the company fulfill certain requirements without even being able to tell the public about those demands. And that's kind of frightening, isn't it?

Speaker 2

01:15:42 - 01:16:31

Well, I mean, there really has to be a very major national security reason to secretly demand things from companies. And it obviously depends strongly on the willingness of that company to fight back against things like FISA requests. At Twitter, or X Corp as it's now called, we will respond to FISA requests, but we're not going to rubber-stamp them like it used to be. It used to be that anything requested would just get rubber-stamped and go through, which is obviously bad for the public. So we're being much more rigorous about not just rubber-stamping FISA requests.

Speaker 2

01:16:32 - 01:16:57

It really has to be a danger to the public that we agree with, and we will oppose with legal action anything we think is not in the public interest. That's the best we can do. And we're the only social media company doing that, as far as I know. It used to be just open season, as you saw from the Twitter Files.

Speaker 2

01:16:59 - 01:17:31

And I was encouraged to see the recent legal decision where the courts reaffirmed that the government cannot violate the First Amendment of the Constitution, obviously. That was a good legal decision, so that's encouraging. So yeah, a lot of it actually does depend on the willingness of a company to oppose government demands in the US, and obviously our willingness will be high.

Speaker 2

01:17:34 - 01:17:59

But I don't know anything more that we can do than that. We'll also try to be as transparent as possible, so that citizens can raise the alarm and oppose government interference, if we can make it clear to the public that we think something is happening that is not in the public interest.

Speaker 18

01:18:00 - 01:18:13

Fantastic. So do we have your commitment that if you ever receive a national security request from the US government, even when it is prohibited for you to talk about it, you will tell us that it happened?

Speaker 2

01:18:19 - 01:18:35

I mean, it really depends on the gravity of the situation. I would be willing to go to prison, or risk prison, if I think the public good is at risk in a significant way. That's the best I can do.

Speaker 18

01:18:36 - 01:18:38

That's good enough for me. Thank you, Elon.

Speaker 2

01:18:39 - 01:18:40

Thank you.

Speaker 18

01:18:49 - 01:19:02

On a more positive note: how do you want xAI to benefit humanity, and how is your approach different from other AI projects? Maybe that's a more positive question.

Speaker 2

01:19:08 - 01:20:05

Well, you know, I've really struggled with this whole AGI thing for a long time, and I've been somewhat resistant to making it happen. I should say, I can give you some backstory on OpenAI. The reason OpenAI exists is that after Google acquired DeepMind (and I used to be close friends with Larry Page), I would have these long conversations with him about AI safety, and he just wasn't taking AI safety, at least at the time, seriously enough. In fact, at one point he called me a speciesist for being too much on team humanity, I guess. And I'm like, okay, so what you're saying is you're not a speciesist? I don't know, that doesn't seem good.

Speaker 2

01:20:05 - 01:20:45

And at the time, with Google and DeepMind combined, Larry had super-voting control: provided Larry has the support of Sergey or Eric, they have total control over what's now called Alphabet. They had probably three-quarters of the AI talent in the world, and lots of money and lots of computers. So it's like, man, we need some sort of counterweight here. That's where I was like: well, what's the opposite of Google DeepMind?

Speaker 2

01:20:45 - 01:21:10

It would be an open-source nonprofit. Now, because fate loves irony, OpenAI is now super closed-source and, frankly, voracious for profit, because they want to spend, my understanding is, $100 billion in 3 years. And if you're trying to get investors for that, you've got to make a lot of money.


Speaker 2

01:21:17 - 01:21:44

So OpenAI has strayed really in the opposite direction from its sort of founding charter, which is, again, very ironic. But fate loves irony. A friend of mine, Jonah Nolan, says the most ironic outcome is the most likely. Well, here we go. So now, hopefully, xAI is not even worse.

Speaker 2

01:21:44 - 01:22:19

But I mean, I think we should be careful about that. Look, at this point, AGI is going to happen. So there are two choices: either be a spectator or a participant. As a spectator, one can't do much to influence the outcome. As a participant, I think we can create a competitive alternative that is hopefully better than Google DeepMind or OpenAI/Microsoft.

Speaker 2

01:22:21 - 01:23:14

In both cases, you can look at the incentive structure. Alphabet is a publicly traded company and has a lot of incentives to behave like one; you've got all these ESG mandates and things that I think push companies in questionable directions. And Microsoft has a similar set of incentives. As a company that's not publicly traded, xAI is not subject to the market-based incentives, or really the non-market-based ESG incentives. So we're a little freer to operate.

Speaker 2

01:23:16 - 01:23:52

I think our AI can give answers that people may find controversial, even though they are actually true. They won't be politically correct at times, and probably a lot of people will be offended by some of the answers. But as long as we try to optimize for truth with the least amount of error, I think we're doing the right thing. Yeah.

Speaker 2

01:23:57 - 01:23:58

Let's see.

Speaker 1

01:24:45 - 01:24:47

I'm going to go to Scott. Scott.

Speaker 2

01:25:00 - 01:25:01

Scott.

Speaker 19

01:25:01 - 01:25:02

Yeah.

Speaker 2

01:25:03 - 01:25:03

Okay.

Speaker 19

01:25:04 - 01:25:42

Twitter has a lot of data in it that could help build a validator, i.e. check some of the facts that a system kicks out, because we all know that GPT confabulates, makes things up. So that's one place I'd like to hear you talk about. The other place is: ChatGPT found me a screw at a Lowe's, but it didn't find me a coffee at San Jose International Airport. Are you building an AI that has world knowledge, a 3D world knowledge, to navigate people around the world to different things?

Speaker 2

01:25:45 - 01:26:02

Well, I think it's really not going to be a very good AI if it can't find you a coffee at the airport. So, yeah, I guess we need to understand the physical world as well, not just the internet. I'm talking a lot.

Speaker 4

01:26:04 - 01:27:01

Yeah, those are great ideas, Robert, especially the 1 about verifying information online, or on Twitter; it's something we've thought about. On Twitter we have Community Notes, which is actually a really amazing dataset for training a language model to try to verify facts on the internet. We'll have to see whether that alone is enough, because we know the current technology still has a lot of weaknesses: it's unreliable, and it hallucinates facts. We'll probably have to invent specific techniques to account for that and to make sure our models are more factual and have better reasoning abilities. That's why we brought in people with a lot of expertise in those areas. Mathematics in particular is something we really care about, because there we can verify automatically that the proof of a theorem is correct. Once we have that ability, we'll try to expand it to fuzzier areas, things where there's no mathematical truth anymore.
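
To make the automatic proof checking concrete: in a proof assistant such as Lean, the compiler itself verifies that a proof is correct, with no human review in the loop. A minimal generic sketch (not anything from XAI):

```lean
-- Lean checks this proof mechanically at compile time; if the proof
-- were wrong, the file would simply fail to compile.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```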

Speaker 2

01:27:02 - 01:27:45

Yeah. The truth is not a popularity contest. But if 1 trains on what the most likely next word is, given an internet dataset, there's obviously a pretty major problem: the model will give you an answer that is popular but wrong. It used to be that most people, probably almost everyone on Earth, thought that the sun revolved around the Earth. If you did some sort of GPT training on that data in the past, it would say the sun revolves around the Earth, because everyone thinks that.

Speaker 2

01:27:45 - 01:28:02

That doesn't make it true. If a Newton or an Einstein comes up with something that is actually true, it doesn't matter if all the other physicists in the world disagree; reality is reality. So you have to ground the answers in reality.
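
A toy sketch of that "popular but wrong" failure mode, with a made-up corpus and a deliberately crude model (an illustration, not any real training pipeline): greedy next-word prediction simply reproduces the majority view.

```python
from collections import Counter

# Hypothetical corpus reflecting a historical consensus.
corpus = [
    "the sun revolves around the earth",
    "the sun revolves around the earth",
    "the sun revolves around the earth",
    "the earth revolves around the sun",  # the lone dissenter
]

def next_word(prefix: str) -> str:
    """Return the most frequent word that follows `prefix` in the corpus."""
    continuations = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 1):
            if " ".join(words[: i + 1]).endswith(prefix):
                continuations[words[i + 1]] += 1
    # Greedy decoding: the popular continuation wins, true or not.
    return continuations.most_common(1)[0][0]

print(next_word("the sun revolves around the"))  # -> "earth"
```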

Speaker 4

01:28:06 - 01:28:23

The current models just imitate the data they're trained on. But what we really want to do is change the paradigm away from that, toward models that actually discover the truth. So not just repeating what they've learned from the training data, but making new, true insights, new discoveries that we can all benefit from.

Speaker 2

01:28:24 - 01:28:36

Yeah. So anybody on the team want to say anything or ask questions that you think maybe haven't been asked yet?

Speaker 13

01:28:39 - 01:28:39

Sure.

Speaker 9

01:28:40 - 01:29:28

Yeah. Yeah. So I guess some of us heard your Future of AI space on Wednesday. Something that's on a lot of our minds is regulation and AI safety: how current development, the international coordination problems, and the US AI companies will affect global AI development. So yeah, do you want to give a summary of what you talked about on Wednesday?

Speaker 9

01:29:53 - 01:29:57

So essentially, you said like the regulations will be good, but you don't want to slow down

Speaker 1

01:30:00 - 01:30:03

the progress too much. That's essentially what you said.

Speaker 2

01:30:04 - 01:30:51

Yeah, I think the right way for regulations to be done is to start with insight. First, any kind of regulatory authority, whether public or private, tries to make sure there's a broad understanding; then there's a proposed rulemaking. If that proposed rulemaking is agreed upon by all or most parties, it gets implemented, and you give companies some period of time to implement it. But I think overall it should not meaningfully slow down the advent of AGI, or if it does slow it down, it's not going to be for a very long time.

Speaker 2

01:30:51 - 01:31:09

And probably a little bit of slowing down is worthwhile if it's a significant improvement in safety. My prediction for AGI would roughly match the 1 that I think Ray Kurzweil at 1 point gave:

Speaker 1

01:31:09 - 01:31:09

2029.

Speaker 2

01:31:11 - 01:31:34

That's roughly my guess too, give or take a year. So if AGI takes an additional 6 or 12 months, that's really not a big deal. Spending a year to make sure AGI is safe is probably worthwhile, if that's what it takes. But I wouldn't expect a substantial slowdown.

Speaker 9

01:31:37 - 01:32:21

I can also add that understanding the inner workings of advanced AI is probably the most ambitious project out there, and it aligns with XAI's mission of understanding the universe. It's probably not possible for aerospace engineers to build a safe rocket without understanding how it works, and that's the same approach we want to take to safety at XAI. As the AI advances through different stages, the risk also changes, so our plans will stay fluid across those stages. Yeah.

Speaker 2

01:32:23 - 01:33:03

If I think about what actually makes regulation effective for cars and rockets, it's not so much that the regulators are instructing Tesla and SpaceX. It's that, since we have to think things through internally and then justify them to regulators, we just really think about the problem more. And in thinking about the problem more, we make it safer, as opposed to the regulators specifically pointing out ways to make it safer. It just forces us to think about it more.

Speaker 20

01:33:07 - 01:33:47

I just wanted to make another point, independent of safety. My experience at Alphabet was that there was a lot of red tape around involving external people, other entities to collaborate with or to expose our models to, because of all the restrictions on exposing anything we were doing internally. So I wanted to ask, and I hope that here we have a bit more freedom to do so: what's your philosophy about collaborating with external entities like academic institutions or other researchers in the area?

Speaker 2

01:33:48 - 01:34:31

Yeah, I certainly support collaborating with others. It sounds like some of the concern at large publicly traded companies is that they're worried about being embarrassed in some way, or being sued, or something; the caution is proportionate to the size of the legal department. Our legal department currently has 0 people. It won't be 0 forever. But

Speaker 1

01:34:31 - 01:34:31

you

Speaker 2

01:34:31 - 01:34:58

know, it's also very easy to sue publicly traded companies, with class action lawsuits. I mean, we desperately need class action lawsuit reform in the United States. The ratio of good class action lawsuits to bad ones is way out of whack, and it effectively ends up being a tax on consumers. And somehow other countries are able to survive without class actions.

Speaker 2

01:34:59 - 01:35:47

So it's not clear we need that body of law at all. But it is a major problem for publicly traded companies: just nonstop lawsuits. So yes, I do support collaborating with others and generally being actually open. The point I'm trying to make is that if you're innovating fast, the actual competitive advantage is the pace of innovation, as opposed to any given innovation.

Speaker 2

01:35:49 - 01:36:30

That really has been the strength of Tesla and SpaceX: the rate of innovation is the competitive advantage, not what has been developed at any 1 point in time. In fact, SpaceX has almost no patents, and Tesla open-sources its patents, so anyone can use our patents for free. As long as SpaceX and Tesla continue to innovate rapidly, that is the actual defense against competition, as opposed to patents, trying to hide things, and treating patents like a minefield.

Speaker 2

01:36:32 - 01:36:52

The reason we open-source them: Tesla does continue to file patents and open-source them in order to basically be a mine remover, like a minesweeper, aspirationally a minesweeper. We still get sued by patent trolls; it's very annoying. But we literally file patents and open-source them in order to be a minesweeper.

Speaker 18

01:36:55 - 01:36:56

I had

Speaker 2

01:36:56 - 01:37:00

Walter. Okay. Hey, Walter.

Speaker 21

01:37:01 - 01:37:34

Hey. A lot of the talk about AI since March has been about large language models and generative AI. You and I, for the book, also discussed the importance of real-world AI, including what's coming out of both Optimus and Tesla FSD. To what extent do you see XAI involved in real-world AI, as distinct from what, say, OpenAI is doing? You have a leg up to some extent by having done FSD.

Speaker 2

01:37:37 - 01:38:02

Yeah, right. I mean, Tesla is the leader, I think by a pretty long margin, in real-world AI. In fact, the degree to which Tesla has advanced real-world AI is not well understood. And I've spent a lot of time with the Tesla AI team.

Speaker 2

01:38:02 - 01:38:49

So I kind of know how real-world AI is done, and there's a lot to be gained by collaboration with Tesla. It's bidirectional: XAI can help Tesla and vice versa. We have collaborative relationships like this already. Our materials science team, which I think is maybe the best in the world, is actually shared between Tesla and SpaceX, and that's quite helpful for recruiting the best engineers in the world, because it's just more interesting to work on advanced electric cars and rockets than on either 1 alone.

Speaker 2

01:38:50 - 01:39:18

That was really key to recruiting Charlie Kuehmann, who runs the advanced materials team. He was at Apple, and I think pretty happy at Apple, but it was like, well, he could work on electric cars and rockets. And he's like, hmm, that sounds pretty good. He wouldn't take either 1 of the jobs, but he was willing to take both. So I think that is a really important thing.

Speaker 2

01:39:18 - 01:39:40

And like I said, there are some pretty big insights we've gained at Tesla in trying to understand real-world AI: taking video input, compressing it into a vector space, and ultimately turning that into steering and pedal outputs.
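
As a rough illustration of that video-to-controls pipeline (a toy sketch, not Tesla's actual architecture; every layer size here is made up):

```python
import torch
import torch.nn as nn

class ToyDrivingPolicy(nn.Module):
    """Toy model: camera frames -> latent vector -> steering/pedal outputs."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Encoder: compress video frames into a compact vector space.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Head: latent vector -> [steering, accelerator, brake].
        self.controls = nn.Linear(latent_dim, 3)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W). A real system would fuse multiple
        # cameras over time, not a single still image.
        latent = self.encoder(frames)
        return torch.tanh(self.controls(latent))

policy = ToyDrivingPolicy()
commands = policy(torch.randn(1, 3, 128, 128))  # 3 control outputs in [-1, 1]
```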

Speaker 6

01:39:47 - 01:39:48

Optimus?

Speaker 2

01:39:53 - 01:40:57

Optimus is still at an early stage, but we definitely need to be very careful with Optimus at scale once it's in production, so that you have a hard-coded way to turn it off, for obvious reasons. There's got to be a hard-coded, local, ROM-level cutoff that no amount of updates from the internet can change. So we'll make sure that Optimus is quite easy to shut down. That's extremely important, because with a car, at least, if it's intelligent, you can climb a tree, go up some stairs, or go into a building. Optimus can follow you into the building. With any kind of intelligent, connected robot that can follow you into a building, we've got to be super careful with safety.
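
A conceptual sketch of that cutoff idea, not any real Optimus design: the point is only that the stop check reads a physical input below the software layer that network updates can touch. The device path here is made up for illustration.

```python
import time

def hardware_stop_asserted() -> bool:
    # In real firmware this would be a register read baked into ROM;
    # "/dev/estop" is a hypothetical device node, purely illustrative.
    with open("/dev/estop", "rb") as pin:
        return pin.read(1) == b"\x01"

def control_loop(step_actuators) -> None:
    while True:
        if hardware_stop_asserted():
            break  # cut actuation; nothing downstream can veto this
        step_actuators()
        time.sleep(0.01)  # roughly 100 Hz control tick
```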

Speaker 21

01:41:03 - 01:41:03

Thanks.

Speaker 2

01:41:05 - 01:41:12

No problem. Let's see. So any last things we should touch on?

Speaker 11

01:42:14 - 01:43:20

So 1 thing I wanted to talk about before we conclude (sorry about that little feedback) is the impact of AI as a means of providing equal opportunity to humanity from all walks of life, and the importance of democratizing it, as our mission statement goes. If you think about the history of humanity and access to information: before the printing press, it was incredibly hard for people to get access to new forms of knowledge. Being able to provide that level of communication to people is hugely deflationary in terms of wealth and opportunity inequality. So we're really at a new inflection point in the development of society when it comes to giving everyone the same potential for great outcomes, regardless of their position in life.

Speaker 11

01:43:20 - 01:44:20

So when we talk about removing the monopolization of ideas, about keeping this technology from being controlled by paid subscription services, or, even worse, by the political censorship that may come with whatever capital supplies these models, we're really talking about democratizing people's opportunities not only to better their position in life, but to advance their standing in the world, at a level unprecedented in history. As a company, when we talk about the importance of truthfulness, about being able to reliably trust these models, learn from them, and make scientific and societal advances, we're really talking about improving people's quality of life. And improving it for everyone, not just the top tech people in Silicon Valley who have access to it. It's really about giving this access to everyone.

Speaker 11

01:44:21 - 01:44:23

I think that's a mission that our whole team shares.

Speaker 4

01:44:30 - 01:44:46

Before we sign off, just 1 last question for Elon. Assuming XAI is successful at building human-level AI, or even beyond-human-level AI, do you think it's reasonable to involve the public in the company's decision-making? How do you see that evolving in the long term?

Speaker 2

01:44:48 - 01:45:28

Yeah, as with everything, I think we're very open to critical feedback and welcome it. We should be criticized; that's a good thing. Actually, 1 of the things I like about X slash Twitter is that there's plenty of negative feedback there, which is helpful for ego compression. The best thing I can think of right now is that any human who wants to have a vote in the future of XAI ultimately should be allowed to have 1.

Speaker 2

01:45:28 - 01:45:49

So basically, provided you can verify that you're a real human, any human that wishes to have a vote in the future of XAI should be allowed to have 1. Yeah. Maybe for some nominal fee, like 10 bucks or something. I don't know.

Speaker 2

01:45:50 - 01:46:04

10 bucks, and prove you're a human, and then you can have a vote, you know. Anyone who's interested. That's the best thing I can think of right now, at least. All right, cool.

Speaker 2

01:46:04 - 01:46:04

On that note, thanks for participating. We'll keep you informed of any progress we make, and I look forward to having a lot of great people join the team. Thanks.