2 hours 31 minutes 30 seconds
Speaker 1
00:00
The following is a conversation with Bryan Johnson, founder of Kernel, a company that has developed devices that can monitor and record brain activity. Previously, he was the founder of Braintree, a mobile payment company that acquired Venmo and then was itself acquired by PayPal and eBay. Quick mention of our sponsors: Four Sigmatic, NetSuite, Grammarly, and ExpressVPN. Check them out in the description to support this podcast.
Speaker 1
00:27
As a side note, let me say that this was a fun and memorable experience, wearing the Kernel Flow brain interface at the beginning of this conversation, as you can see if you watch the video version of this episode. And there's an Ubuntu Linux machine sitting next to me, collecting the data from my brain. The whole thing gave me hope that the mystery of the human mind will be unlocked in the coming decades as we begin to measure signals from the brain in a high-bandwidth way. To understand the mind, we either have to build it or to measure it.
Speaker 1
01:00
Both are worth a try. Thanks to Bryan and the rest of the Kernel team for making this little demo happen. This is the Lex Fridman Podcast, and here is my conversation with Bryan Johnson.
Speaker 2
01:13
You ready, Lex?
Speaker 3
01:14
Yes, I'm ready.
Speaker 2
01:15
Do you guys wanna come in and put the interfaces on our heads? And then I will proceed to tell you a few jokes.
Speaker 3
01:21
So we have two incredible pieces of technology and a machine running Ubuntu 20.04 in front of us. What are we doing? All right.
Speaker 3
01:31
Are these going on our head?
Speaker 2
01:32
They're going on our heads, yeah. And they will place it on our heads for proper alignment.
Speaker 3
01:41
Does this support giant heads? Because I kind of have a giant head. Is this just giant head fine?
Speaker 2
01:46
Are you saying as, like, an ego, or are you saying physically? Both?
Speaker 3
01:55
It's a nice massage.
Speaker 2
01:57
Yes. Okay, how does this feel?
Speaker 3
02:03
Is it okay to move around?
Speaker 2
02:04
Yeah.
Speaker 3
02:05
It feels, oh yeah... heh.
Speaker 2
02:07
Heh heh.
Speaker 3
02:09
Heh heh. I'm not gonna tell you at all.
Speaker 3
02:10
This feels awesome.
Speaker 2
02:11
It's a pretty good fit.
Speaker 3
02:12
Thank you.
Speaker 2
02:13
That feels good.
Speaker 3
02:15
So this is big head friendly.
Speaker 2
02:17
It suits you well, Lex.
Speaker 3
02:19
Thank you. I feel like when I wear this, I need to sound like Sam Harris: calm, collected, eloquent. I feel smarter, actually.
Speaker 3
02:33
I don't think I've ever felt quite as much like I'm part of the future as now.
Speaker 2
02:38
Have you ever worn a brain interface or had your brain imaged?
Speaker 3
02:42
Oh, never had my brain imaged. The only way I've analyzed my brain is by talking to myself and thinking, no direct data.
Speaker 2
02:53
Yeah, that is definitely a brain interface. That has a lot of blind spots.
Speaker 3
02:59
That has some blind spots, yeah. Psychotherapy. That's right.
Speaker 2
03:04
All right, are we recording? Yeah, we're good. All right.
Speaker 2
03:09
So Lex, the objective of this is: I'm going to tell you some jokes, and your objective is to not smile, at which, as a Russian, you should have an edge.
Speaker 3
03:23
Make the motherland proud, I gotcha. Okay, let's hear the jokes.
Speaker 2
03:30
Lex, this is from the Kernel crew: we've been working on a device that can read your mind, and we would love to see your thoughts. Is that the joke? That's the opening? Okay. If I'm seeing the muscle activation correctly on your lips, you're not gonna do well on this. Let's see. All right, here comes the first.
Speaker 3
03:56
I'm screwed.
Speaker 2
03:57
Here comes the first one.
Speaker 3
03:58
Is this gonna break the device? Is it resilient to laughter?
Speaker 2
04:07
Lex, what goes through a potato's brain?
Speaker 3
04:14
I already failed. That's a hilarious opener. Okay, what?
Speaker 3
04:19
Tater thoughts.
Speaker 2
04:24
What kind of fish performs brain surgery?
Speaker 3
04:27
I don't know.
Speaker 2
04:28
A neurosturgeon.
Speaker 3
04:35
And so we're getting data of everything that's happening in my brain right now? Real-time, yeah. We're getting activation patterns of your entire cortex.
Speaker 3
04:45
I'm gonna try to do better. I'll edit out all the parts where I laughed and Photoshop a serious face over me. You can recover. Yeah, all right.
Speaker 2
04:53
Lex, what do scholars eat when they're hungry?
Speaker 3
04:56
I don't know, what?
Speaker 2
04:58
Academia nuts.
Speaker 3
05:03
That was a pretty good one.
Speaker 2
05:05
So what we'll do is... so you're wearing Kernel Flow, which is an interface built using a technology called spectroscopy. It's similar to the wearables we wear on the wrist that use light; here we're using lasers, like LiDAR, as you know. And we're using that to do functional imaging of brain activity.
Speaker 2
05:30
And so as your neurons fire electrically and chemically, it changes blood oxygenation levels, and we're measuring that. So in the reconstructions we do for you, you'll see the activation patterns in your brain throughout the entire time we were wearing it, in reaction to the jokes and as we were sitting here talking. And we're moving towards a real-time feed of your cortical brain activity.
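[Editor's note: the measurement principle described here, neural activity changing blood oxygenation that is read out with light, is the basis of fNIRS. A minimal sketch of the standard modified Beer-Lambert inversion is below; the extinction coefficients and geometry are illustrative placeholders, not Kernel Flow's actual calibration values.]

```python
import numpy as np

# Sketch of how fNIRS-style systems turn light measurements into blood
# oxygenation changes via the modified Beer-Lambert law. All numbers
# below are illustrative placeholders.
# Rows: two wavelengths (e.g. ~760 nm, ~850 nm); columns: [HbO, HbR].
eps = np.array([[1.4, 3.8],    # illustrative extinction coefficients
                [2.5, 1.8]])
d, dpf = 3.0, 6.0              # source-detector distance, pathlength factor

def concentration_changes(delta_od):
    # delta_od: change in optical density at each wavelength.
    # Solve (eps * d * dpf) @ [dHbO, dHbR] = delta_od for the concentrations.
    return np.linalg.solve(eps * d * dpf, delta_od)

# Forward-simulate a known oxygenation change, then recover it.
true_delta = np.array([0.02, -0.01])           # [dHbO, dHbR]
delta_od = eps @ true_delta * d * dpf
print(np.round(concentration_changes(delta_od), 3))  # prints [ 0.02 -0.01]
```

The two-wavelength measurement is what makes the 2x2 solve possible: oxygenated and deoxygenated hemoglobin absorb the two wavelengths differently.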
Speaker 3
05:56
So there's a bunch of things that are in contact with my skull right now. How many of them are there, and what are they? What are the actual sensors?
Speaker 2
06:07
There are 52 modules, and each module has 1 laser and 6 sensors. The lasers fire pulses of about 100 picoseconds, and then the photons scatter and absorb in your brain. A bunch go in, a few come back out, we sense those photons, and then we do the reconstruction of the activity. Overall, there are about 1,000-plus channels sampling your brain activity.
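[Editor's note: the "1,000-plus channels" figure implies counting source-detector pairs beyond each module's own six sensors. A back-of-envelope sketch, where the average-neighbor count is a guess rather than Kernel Flow's published optode geometry:]

```python
# Illustrative arithmetic only; the average-neighbor count is an
# assumption, not Kernel Flow's actual optode geometry.
modules = 52
detectors_per_module = 6

within = modules * detectors_per_module          # 312 same-module channels
avg_neighbor_modules = 2.5                        # assumed usable neighbors
cross = int(modules * avg_neighbor_modules * detectors_per_module)  # 780

print(within, within + cross)  # prints 312 1092
```

So same-module pairs alone fall well short of 1,000, which is consistent with cross-module pairs contributing usable channels.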
Speaker 3
06:35
How difficult is it to make it as comfortable as it is? Because it's surprisingly comfortable.
Speaker 3
06:44
Something that's measuring brain activity, I would not think it would be comfortable, but it is. In fact, I want to take this home.
Speaker 2
06:52
Yeah, yeah, that's right. So people are accustomed to being in big systems like fMRI where there's 120 decibel sounds and you're in a claustrophobic encasement or EEG, which is just painful, or surgery. And so yes, I agree that this is a convenient option to be able to just put on your head.
Speaker 2
07:14
It measures your brain activity in the contextual environment you choose. So whether we want to wear it during a podcast, at home, or in a business setting, it's the freedom to record your brain activity in the setting that you choose.
Speaker 3
07:28
Yeah, but sort of from an engineering perspective, what is it? There's a bunch of different modular parts, and there's like a rubber-band thing where they mold to the shape of your head.
Speaker 2
07:40
That's right. So we built this version of the mechanical design to accommodate most adult heads.
Speaker 3
07:49
But I have a giant head and it fits fine. It fits well actually. So I don't think I have an average head.
Speaker 3
07:58
Okay, maybe I feel much better about my head now. Maybe I'm more average than I thought. Okay, so what else interesting can you say while it's on our heads? I can keep this on the whole time, this is kind of awesome.
Speaker 3
08:13
And it's amazing for me as a fan of Ubuntu, I use Ubuntu MATE, you guys should use that too. It's amazing to have code running to the side, measuring stuff and collecting data. I feel much more important now that my data is being recorded. Like somebody cares. You know when you have a good friend that listens to you, that actually listens, that actually is listening to you? This is what I feel like, like a much better friend, because it's accurately listening to me, Ubuntu.
Speaker 2
08:47
What a cool perspective, I hadn't thought about that, of feeling understood. Yeah. Heard.
Speaker 2
08:54
Yeah, heard deeply by the mechanical system that is recording your brain activity, versus the human that you're engaging with. Your mind immediately goes to the dimensionality and depth of understanding of this software system, which you're intimately familiar with. And now you're able to communicate with this system in ways that you couldn't before.
Speaker 3
09:19
Yeah, I feel heard. Yeah, I
Speaker 2
09:24
mean, I guess what's interesting about this is your intuitions are spot on. Most people have intuitions about brain interfaces that they've grown up with this idea of people moving cursors on the screen or typing or changing the channel or skipping a song. It's primarily been anchored on control.
Speaker 2
09:41
And I think the more relevant understanding of brain interfaces or neuroimaging is that it's a measurement system. And once you have numbers for a given thing, a seemingly endless number of possibilities emerge around that of what to do with those numbers.
Speaker 3
09:57
So before you tell me about the possibilities, this was an incredible experience. I could keep this on for another two hours, but I'm being told that, for a bunch of reasons, because we probably wanna keep the data small and visualize it nicely for the final product, we wanna cut this off and take this amazing helmet away from me. So Bryan, thank you so much for this experience, and let's continue helmet-less.
Speaker 3
10:25
All right. So that was an incredible experience. Can you maybe speak to what kind of opportunities are opened up by that stream of data, that rich stream of data from the brain?
Speaker 2
10:32
First, I'm curious, what is your reaction? What comes to mind when you put that on your head?
Speaker 2
10:41
What does it mean to you? And what possibilities emerge? And what significance might it have? I'm curious where your orientation is at.
Speaker 3
10:50
Well, for me, I'm really excited by the possibility of various information about my body and my mind being converted into data, such that that data can be used to create products that make my life better. So that to me is a really exciting possibility. Even just a Fitbit that measures, I don't know, some very basic measurements about your body is really cool.
Speaker 3
11:17
But the bandwidth of information, the resolution of that information, is very crude, so it's not very interesting. The possibility of building a data set coming in a clean way, in a high-bandwidth way, from my brain opens up all kinds of possibilities. I was kinda joking when we were talking, but not really: I feel heard in the sense that it feels like the full richness of the information coming from my mind is actually being recorded by the machine. I can't quite put it into words, but there is, genuinely for me, and this is not some kind of joke about me being a robot, a feeling that I'm being heard in a way that's going to improve my life. As long as the thing that's on the other end can do something useful with that data.
Speaker 3
12:17
But even the recording itself is, like, once you record, it's like taking a picture. That moment is forever saved in time. Now, a picture cannot allow you to step back into that world. But perhaps recording your brain is a much higher-resolution, much more personal recording of that information than a picture, one that would allow you to step back into where you were in that particular moment in history, and then map out a certain trajectory to tell you certain things about yourself.
Speaker 3
12:58
That could open up all kinds of applications. Of course, there's health that I consider, but honestly, to me, the exciting thing is just being heard. My state of mind, the level of focus, all those kinds of things, being heard.
Speaker 2
13:10
What I heard you say is that you have an entirety of lived experience, some of which you can communicate in words and in body language, and some of which you feel internally, which cannot be captured in those communication modalities. And this measurement system captures both: the things you can try to articulate in words, maybe in a lower-dimensional space, using one word, for example, to communicate focus, when it really may be represented in a 20-dimensional space of this particular kind of focus, and that this information is being captured. So it's a closer representation to the entirety of your experience, captured in a dynamic fashion, not just a static image of your conscious experience.
Speaker 3
13:53
Yeah, that's the promise, and that was the feeling. And it felt like the future, so it was a pretty cool experience. And from the mechanical perspective, it was cool to have an actual device that feels pretty good, that doesn't require me to go into a lab.
Speaker 3
14:11
And also the other thing I was feeling: there's a guy named Andrew Huberman, he's a friend of mine, amazing podcast, people should listen to it, the Huberman Lab Podcast. We're working on a paper together about eye movement and so on. He's a neuroscientist, and I'm a data person, a machine learning person, and we're both excited by how much the data measurements of the human mind, the brain, and all the different metrics that come from that can be used to understand human beings in a rigorous, scientific way. So the other thing I was thinking about is how this could be turned into a tool for science. Not just personal science, not just Fitbit-style "how am I doing on my personal metrics of health," but doing larger-scale studies of human behavior and so on.
Speaker 3
15:07
So like data, not at the scale of an individual, but data at the scale of many individuals, a large number of individuals. So personally being heard was exciting, and it's also exciting for science. And there's a very powerful thing to it being so easy to just put on: you could scale much more easily.
Speaker 2
15:28
If you think about that second thing you said, about the science of the brain: we, the human race, have done a pretty good job figuring out how to quantify the things around us, from distant stars to calories and steps and our genome. So we can measure and quantify pretty much everything in the known universe, except for our minds. We can do these one-offs, if we go get an fMRI scan or do something with a low-res EEG system, but we haven't done this at population scale.
Speaker 2
16:11
And if you think about it, human thought, or human cognition, is probably the single largest raw input material into society at any given moment. It's our conversations with ourselves and with other people. And we have this raw input that we haven't been able to measure yet. When I think about it through that frame, it's remarkable.
Speaker 2
16:42
It's almost like we live in this wild, wild west of unquantified communications within ourselves and between each other when everything else has been grounded. I mean, for example, I know if I buy an appliance at the store or on a website, I don't need to look at the measurements on the appliance to make sure it can fit through my door. That's an engineered system of appliance manufacturing and construction. Everyone's agreed upon engineering standards.
Speaker 2
17:10
And we don't have engineering standards around cognition; it has not entered as a formal engineering discipline that enables us to scaffold it in society with everything else we're doing, including consuming news, our relationships, politics, economics, education, all of the above. And so to me, the most significant contribution that Kernel technology has to offer would be the introduction of the formal engineering of cognition as it relates to everything else in society.
Speaker 3
17:45
I love that idea, that you kind of think that there is just this ocean of data coming from people's brains that's being reduced down, in a crude way, to tweets and texts and so on. Just a very hardcore, small-scale compression of the actual raw data. But maybe you can comment, because you used the word cognition: I think the first step is to get the brain data, but is there a leap to be taken to interpreting that data in terms of cognition?
Speaker 3
18:22
So your idea is basically that you need to start collecting data at scale from the brain, and then we start to be able to take little steps along the path to actually measuring some deep sense of cognition. Because, as I'm sure you know, we understand a few things, but we don't understand most of what makes up cognition.
Speaker 2
18:45
This has been one of the most significant challenges of building Kernel. And Kernel wouldn't exist if I wasn't able to fund it initially by myself.
Speaker 2
18:55
Because when I engage in conversations with investors, the immediate thought is: what is the killer app? And of course, I understand that heuristic; they're looking to de-risk. Is the product solved?
Speaker 2
19:09
Is there a customer base? Are people willing to pay for it? How does it compare to competing options? And in the case with Brain Interfaces, when I started the company, there was no known path to even build a technology that could potentially become mainstream.
Speaker 2
19:24
And then once we figured out the technology, we could commence having conversations with investors, and it became: what is the killer app? So I funded the first $53 million of the company, and to raise the first round of funding we did, I spoke to 228 investors. One said yes.
Speaker 2
19:44
It was remarkable. And it was mostly around this concept of what the killer app is. So internally, we think of the go-to-market strategy much more like the Drake equation: if we can build technology with the right characteristics, the data quality is high enough, it meets certain thresholds of cost, accessibility, and comfort, it can be worn in contextual environments, it meets the criteria of being a mass-market device, then the responsibility we have is to figure out how to create the algorithm that enables humans to then find value with it. So the analogy is that brain interfaces are like the early nineties of the internet.
Speaker 2
20:37
You wanna populate an ecosystem with a certain number of devices. You want a certain number of people who play around with them, who do experiments with certain data-collection parameters. You want to encourage certain mistakes from experts and non-experts. These are all critical elements that ignite discovery.
Speaker 2
20:51
And so we believe we've accomplished the first objective of building technology that reaches those thresholds. And now it's the Drake equation component of how do we try to generate 20 years of value discovery in a 2 or 3 year time period? How do we compress that?
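[Editor's note: the Drake-equation framing can be made concrete. Just as Drake multiplies independent factors to estimate the number of communicating civilizations, one can multiply deployment factors to estimate expected discoveries. Every factor and value below is a hypothetical illustration, not Kernel's internal model.]

```python
# Hypothetical Drake-style product: expected number of valuable
# applications discovered in an ecosystem of devices. Every factor
# here is an illustrative guess.
def expected_discoveries(devices, f_experimenters, experiments_each, hit_rate):
    # devices: devices in the field
    # f_experimenters: fraction of owners who actively experiment
    # experiments_each: experiments per experimenter
    # hit_rate: probability an experiment yields a valuable application
    return devices * f_experimenters * experiments_each * hit_rate

print(expected_discoveries(10_000, 0.2, 5, 0.001))  # prints 10.0
```

The point of the framing is that each factor can be worked on independently: more devices, easier experimentation, or a better hit rate all multiply through.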
Speaker 3
21:14
So just to clarify, when you say the Drake equation, which, for people who don't know... though I don't know how you wouldn't know what the Drake equation is if you listen to this podcast, since I bring up aliens in every single conversation... you mean the killer app would be like one alien civilization in that equation.
Speaker 3
21:31
So meaning, like, this is in search of an application that's impactful and transformative. By the way, we need to come up with a better term than "killer app."
Speaker 2
21:39
It's also violent, right?
Speaker 3
21:43
Very violent. You could go with, like, "viral app," but that's horrible too, right? It's some "very inspiringly impactful application."
Speaker 3
21:51
How about that? No. Okay, so let's stick with killer app, that's fine. But I concur with you.
Speaker 3
21:57
I dislike the chosen words for capturing the concept. You know, it's one of those sticky terms that's effective to use in the tech world, but when you become a communicator outside of the tech world, especially when you're talking about software and hardware and artificial intelligence applications, it sounds horrible.
Speaker 2
22:17
Yeah, no, it's interesting. I actually regret now having called attention to this; I regret having used that word in this conversation, because it's something I would not normally do. I used it in order to create a bridge of shared understanding, of what terminology others would use.
Speaker 2
22:32
Yeah. But yeah, I concur.
Speaker 3
22:34
Let's go with impactful application. Or just value creation. Value creation.
Speaker 3
22:41
Something people love using.
Speaker 2
22:43
There we go,
Speaker 3
22:44
that's it. Love app. Okay, so, do you have any ideas?
Speaker 3
22:49
So you're basically creating a framework where there's the possibility of the discovery of an application that people love using. Do you have ideas?
Speaker 2
23:00
We've begun to play a fun game internally where, when we have these discussions, we begin circling around this concept of: does anybody have an idea? Does anyone have intuitions? And if we see the conversation starting to veer in that direction, we flag it and say, human intuition alert, stop it.
Speaker 2
23:20
And so we really want to focus on the algorithm: there's a natural process of human discovery where, when you populate a system with devices and give people the opportunity to play around with them in expected and unexpected ways, we are thinking that is a much better system of discovery than us exercising intuitions. And it's interesting, we're also seeing this among a few neuroscientists who have been talking to us. I was speaking to one young associate professor, and I approached the conversation and said, hey, we have these 5 data streams that we're pulling off. When you hear that, what weighted value do you add to each data source?
Speaker 2
23:59
Like, which one do you think is gonna be valuable for your objectives, and which ones not? And he said, I don't care, just give me the data. All I care about is my machine learning model. But importantly, he did not have a theory of mind.
Speaker 2
24:10
He did not come to the table and say, I think the brain operates in this way and for these reasons, or has these functions. He didn't care, he just wanted the data. And we're seeing more and more that certain people are devaluing human intuitions, for good reasons, as we've seen in machine learning over the past couple of years. And we're doing the same in our value-creation market strategy.
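[Editor's note: the professor's "just give me the data" stance, letting a model weight the streams rather than hand-picking, can be sketched with a ridge regression over synthetic stand-ins for the five data streams. Nothing here is Kernel's actual data or pipeline.]

```python
import numpy as np

# Sketch: instead of hand-weighting five hypothetical data streams, fit a
# ridge regression on all of them and let the learned coefficients say
# which streams carry signal. Data are synthetic.
rng = np.random.default_rng(1)
n, streams = 200, 5
X = rng.normal(size=(n, streams))
true_w = np.array([2.0, 0.0, 0.0, -1.0, 0.0])   # only streams 0 and 3 matter
y = X @ true_w + rng.normal(0, 0.1, n)

# Closed-form ridge: w = (X^T X + lam I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(streams), X.T @ y)
print(np.round(w, 1))
```

The fit recovers large weights on the informative streams and near-zero weights elsewhere, which is the "let the data decide" point made above.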
Speaker 3
24:35
So: collect more data, clean data, make the products such that the collection of data is easy and fun, and then the rest will just spring to life through humans playing around with it.
Speaker 2
24:52
Our objective is to create the most valuable data-collection system of the brain ever. And with that, then apply all the best tools of machine learning and other techniques to try to extract insight. But yes, our objective is really to systematize the discovery process, because we can't put definite timeframes on discovery.
Speaker 2
25:24
The brain is complicated, and science is not a business strategy. And so we really need to figure out how to do this; this is the difficulty of bringing technology like this to market. It's why most of the time it just languishes in academia for quite some time. But we hope that we will cross over and make this mainstream in the coming years.
Speaker 3
25:48
The thing was cool to wear, but is there a good reason for millions of people to put this on their head and keep it on their head regularly? Who's going to discover that reason? Is it going to be people just kind of finding it organically?
Speaker 3
26:08
Or is there going to be an Angry Birds-style application that's just too exciting not to use?
Speaker 2
26:18
If I think through the things that have changed my life most significantly over the past few years: when I started wearing a wearable on my wrist that would give me data about my heart rate, heart rate variability, respiration rate, metabolic approximations, et cetera, for the first time in my life I had access to information, sleep patterns, that was highly impactful. It told me, for example, that if I eat close to bedtime, I'm not going to get deep sleep. And not getting deep sleep means you have all these follow-on consequences in life.
Speaker 2
26:54
And so it opened up this window of understanding of myself, these things I cannot self-introspect and deduce. This is information that was available to be acquired, but it just wasn't; I would have to get an expensive sleep study, and that's an n of one night, which is not good enough to run all my trials. And that's just the information one can acquire on their wrist.
Speaker 2
27:18
And now you're applying it to the entire cortex on the brain. And you say, what kind of information could we acquire? It opens up a whole new universe of possibilities. For example, we did this internal study at Kernel where I wore a prototype device and we were measuring the cognitive effects of sleep.
Speaker 2
27:36
So I had a device measuring my sleep, and 13 of my coworkers and I performed 4 cognitive tasks over 13 sessions. We focused on reaction time, impulse control, short-term memory, and then a resting-state task.
Speaker 2
27:52
And with mine, we found, for example, that my impulse control was independently correlated with my sleep, outside of behavioral measures of my ability to play the game. The point is that the brain study I did at Kernel confirmed my life experience: my deep sleep determined whether or not I would be able to resist temptation the following day. And my brain data showed that, as one example.
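[Editor's note: "independently correlated... outside of behavioral measures" is essentially a partial correlation: correlate two variables after regressing a covariate out of both. A sketch on synthetic data, as a stand-in for the actual Kernel study:]

```python
import numpy as np

# Sketch of a partial correlation: is deep sleep correlated with next-day
# impulse control after regressing out a behavioral covariate (here,
# practice effects across sessions)? The data are synthetic.
rng = np.random.default_rng(0)
n = 13                                           # sessions
practice = np.arange(n, dtype=float)             # covariate: session index
deep_sleep = rng.normal(90, 15, n)               # minutes of deep sleep
impulse = 0.05 * deep_sleep + 0.1 * practice + rng.normal(0, 0.5, n)

def residualize(y, covariate):
    # Regress y on [1, covariate] by least squares; return the residuals.
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation = correlation of the two residualized variables.
r = np.corrcoef(residualize(impulse, practice),
                residualize(deep_sleep, practice))[0, 1]
print(round(r, 2))
```

Residualizing both variables removes the shared influence of the covariate, so any remaining correlation is "independent" of it in the regression sense.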
Speaker 2
28:24
And so if you start thinking, if you actually have data on yourself, on your entire cortex, and you can control the settings, I think there's probably a large number of things that we could discover about ourselves, very, very small and very, very big. Just for example, when you read news, what's going on?
Speaker 3
28:44
Like when you use social media, when you consume news, all the ways we allocate attention with the computer. I mean, that seems like a compelling place where you would want to put on a Kernel... by the way, what is it called? Kernel Flux? Kernel what?
Speaker 2
29:01
Flow. We have two technologies, Flux and Flow.
Speaker 3
29:04
Flow, okay. So when you put on the Kernel Flow, it seems like a compelling time and place to do it is when you're behind a desk, behind a computer, because you could probably wear it for prolonged periods of time as you're taking in content. And because so much of our lives happens in the digital world now, coupling the information about the human mind with our consumption and behaviors in the digital world might give us a lot of information about the effects of the way we behave and navigate the digital world on the actual physical meatspace, the effects on our body.
Speaker 3
29:50
It's interesting to think about in terms of work: I'm a big fan of Cal Newport and his ideas of deep work. With few exceptions, I try to spend the first 2 hours of every day, and, if I'm at home with nothing on my schedule, up to 8 hours, in deep work, in focus, zero distraction. And I'm very aware of the waning of that, the ups and downs of that; it's almost like you're surfing the ups and downs as you're doing programming, as you're thinking about particular problems. You're trying to visualize things in your mind, you're trying to stitch them together, and when there's a dead end about an idea, you have to kind of calmly walk back and start again, all those kinds of processes.
Speaker 3
30:47
It'd be interesting to get data on what my mind is actually doing. And also, I recently started meditating; I just talked to Sam Harris a few days ago, and building up to that, I started meditating using his app, Waking Up, which I very much recommend. It'd be interesting to get data on that, because it's like you're removing all the noise from your head; it's an active process of noise removal, active noise canceling, like the headphones. And it'd be interesting to see what is going on in the mind before the meditation, during it, and after, all those kinds of things.
Speaker 2
31:29
And in all of your examples, it's interesting that everyone who's designed an experience for you, whether it be the meditation app or the deep work or all the things you mentioned, constructed their product with a certain number of knowns. Now, what if we expanded the number of knowns by 10x or 20x or 30x? They would reconstruct their product to incorporate those knowns.
Speaker 2
31:55
And so this is the dimensionality that I think is the promising aspect: people will be able to use this quantification, use this information, to build more effective products. And I'm not talking about better products to advertise to you or manipulate you. Our focus is helping individuals have this contextual awareness and this quantification, and then to engage with others who are seeking to improve people's lives. The objective is betterment, across ourselves individually and also with each other.
Speaker 3
32:33
Yeah, so it's a nice data stream to have if you're building an app. Like, if you're building a podcast-listening app, it would be nice to know data about the listener, so that if you're bored or you fell asleep, maybe it pauses the podcast. Really simple applications that could just improve the quality of the experience of using the app.
Speaker 2
32:51
I'm imagining, if you have your data, this is Lex, and there's a statistical representation of you, and you engage with the app, and it says: Lex, you're best off engaging with this meditation exercise in the following settings, at this time of day, after eating this kind of food or not eating, fasting, with this level of blood glucose and this kind of night's sleep. All these data combine to give you this contextually relevant experience, just like we do with our sleep.
Speaker 2
33:27
You've optimized your entire life based upon what information you can acquire and know about yourself. And so the question is, how much do we really know of the things going on around us? And I would venture to guess, from my own life experience, that my self-awareness captures an extremely small percentage of the things that actually influence my conscious and unconscious experience.
Speaker 3
33:50
Well, in some sense, the data would help encourage you to be more self-aware, not just because you trust everything the data is saying, but because it'll give you a prod to start investigating. Like, I would love to get a rating, a ranking, of all the things I do. This is probably important to do even without the data, but the data would certainly help. It's like: rank all the things you do in life, and which ones make you feel shitty, which ones make you feel good.
Speaker 3
34:23
Like you were talking about eating in the evening, Bryan. This is a good example: I do pig out at night as well, and it never makes me feel good.
Speaker 4
34:36
You're in a safe space. It's just a safe space. Just a safe space.
Speaker 2
34:39
Let's hear it.
Speaker 3
34:40
No, I definitely have much less self-control at night. And it's interesting. And the same, you know, people might criticize this, but I know my own body.
Speaker 3
34:49
I know when I eat carnivore, just eat meat, I feel much better than if I eat more carbs. The more carbs I eat, the worse I feel. I don't know why that is. There is science supporting it, but I'm not leaning on science, I'm leaning on personal experience.
Speaker 3
35:07
And that's really important. I don't need to read, I'm not gonna go on a whole rant about nutrition science, but many of those studies are very flawed. They're doing their best, but nutrition science is a very difficult field of study because humans are so different and the mind has so much impact on the way your body behaves and it's so difficult from a scientific perspective to conduct really strong studies that you have to be almost like a scientist of 1, you have to do these studies on yourself. That's the best way to understand what works for you or not.
Speaker 3
35:41
And I don't understand why, because it sounds unhealthy, but eating only meat always makes me feel good. Just eat meat, that's it. And I don't have any allergies, any of that kind of stuff. I'm not full-on like Jordan Peterson, where if he deviates a little bit from the carnivore diet, he goes off the cliff.
Speaker 3
36:03
No, I can have like chocolate, I can go off the diet, I feel fine. It's a gradual worsening of how I feel. But when I eat only meat, I feel great. And it'd be nice to be reminded of that. Like it's a very simple fact that I feel good when I eat carnivore.
Speaker 3
36:24
And I think that repeats itself in all kinds of experiences. Like I feel really good when I exercise. Not, I hate exercise, okay? But in the rest of the day, the impact it has on my mind and the clarity of mind and the experiences and the happiness and all those kinds of things, I feel really good.
Speaker 3
36:48
And to be able to concretely express that through data would be nice. It would be a nice reminder, almost like a statement: remember what feels good and what doesn't. And there could be things, many things, like you're suggesting, that I'm not aware of; they might be sitting right in front of me, things that make me feel really good or not good, and the data would show that.
Speaker 2
37:13
I agree with you. I've actually employed the same strategy. I fired my mind entirely from being responsible for constructing my diet.
Speaker 2
37:23
And so I started doing a program where I now track over 200 biomarkers every 90 days. And it captures, of course, the things you would expect like cholesterol, but also DNA methylation and all kinds of things about my body, all the processes that make up me. And then I let that data generate the shopping lists.
Speaker 3
37:42
And so
Speaker 2
37:43
I never actually ask my mind what it wants. It's entirely what my body is reporting that it wants. And so I call this goal alignment within Brian.
Speaker 2
37:51
And there's 200 plus actors that I'm currently asking their opinion of. And so I'm asking my liver, how are you doing? And it's expressing via the biomarkers. And so then I construct that diet and I only eat those foods until my next testing round.
Speaker 2
38:07
And that has changed my life more than I think anything else, because of the demotion of my conscious mind, which I gave primacy to my entire life. It led me astray because, like you were saying, the mind then goes out into the world and navigates the dozens of different dietary regimens people put together in books. And they all have their supporting science in certain contextual settings, but it's not N of 1. And like you're saying, this diet really is an N of 1.
Speaker 2
38:39
What people have published scientifically, of course, can be used for nice groundings, but it changes when you get to the N-of-1 level. And so that's what gets me excited about brain interfaces: if I could do the same thing for my brain, where I can stop asking my conscious mind for its advice or for its decision making, which is flawed, I'd rather just look at this data. And I've never had better health markers in my life than when I stopped actually asking myself to be in charge of it.
Speaker 3
39:09
The idea of demotion of the conscious mind is such a sort of engineering way of phrasing meditation.
Speaker 3
39:21
I mean, that's what
Speaker 2
39:21
we're doing, right?
Speaker 3
39:22
Yeah, yeah, that's beautiful. That was really beautifully put. By the way, testing round, what does that look like?
Speaker 3
39:28
What's that? Well, you mentioned.
Speaker 2
39:31
Yeah, the tests I do. Yes. So it includes a complete blood panel.
Speaker 2
39:37
I do a microbiome test. I do a food, a diet-induced inflammation test. So I look for cytokine expressions, so foods that produce inflammatory reactions.
Speaker 2
39:48
I look at my neuroendocrine systems, I look at all my neurotransmitters. And yeah, there's several micronutrient tests to see how I'm doing on the various nutrients.
Speaker 3
39:59
What about like self-report of like how you feel? Almost like, you can't demote your, you still exist within your conscious mind, right? So that lived experience is of a lot of value.
Speaker 3
40:16
So how do you measure that?
Speaker 2
40:17
I do a temporal sampling over some duration of time. So I'll think through how I feel over a week, over a month, over 3 months. I don't do a sampling of, if I'm at the grocery store in front of a cereal box, being like, you know what, Cap'n Crunch is probably the right thing for me today, because I'm feeling like I need a little fun in my life.
Speaker 2
40:36
And so it's a temporal sampling. If the data set's large enough, then I smooth out the function of my natural oscillations of how I feel about life, where some days I may feel upset or depressed or down or whatever. And I don't want those moments to then rule my decision-making. That's why the demotion happens.
Speaker 2
40:52
And it says, really, if you're looking at health over a 90-day period of time, all my 200 voices speak up on that interval. And they're all given a voice to say, this is how I'm doing and this is what I want. And so it really is an accounting system for everybody. So that's why I think that if you think about the future of being human, there's 2 things I think are really going on.
Speaker 2
41:17
1 is the design, manufacturing, and distribution of intelligence is heading towards 0 on a cost curve, over a certain timeframe. Evolution produced us, a form of intelligence. We are now designing our own intelligent systems, and the design, manufacturing, and distribution of that intelligence over a certain timeframe is gonna go to a cost of 0.
Speaker 3
41:45
Design, manufacturing, and distribution of intelligence, its cost is going to 0.
Speaker 2
41:49
For example.
Speaker 3
41:49
Again, just give me a second. That's brilliant, okay. And evolution is doing the design, manufacturing, distribution of intelligence, and now we are doing the design, manufacturing, distribution of intelligence.
Speaker 3
42:04
And the cost of that is going to 0. That's a very nice way of looking at life on earth.
Speaker 2
42:10
So if that's going on, and then in parallel to that, you say, okay, what happens when that cost curve is heading to 0? Our existence becomes a goal alignment problem, a goal alignment function. And so the same thing I'm doing where I'm doing goal alignment within myself of these 200 biomarkers, where I'm saying, when Brian exists on a daily basis and this entity is deciding what to eat and what to do, etc., it's not just my conscious mind, which is opining, it's
Speaker 1
42:48
200
Speaker 2
42:49
biological processes, and there's a whole bunch more voices involved. So in that equation, we're going to increasingly automate the things that we spend high energy on today, because it's easier. And then we're going to negotiate the terms and conditions of intelligent life.
Speaker 2
43:10
Now, we say conscious existence because we're biased, because that's what we have. But it will be the largest computational exercise in history, because you're now doing goal alignment with planet Earth, within yourself, with each other, with all the intelligent agents we're building, bots and other voice assistants. You basically have trillions and trillions of agents working on the negotiation of goal alignment.
Speaker 3
43:35
Yeah, this is in fact true. And what was the second thing?
Speaker 2
43:40
That was it. So the cost, the design, manufacturing, distribution of intelligence going to 0, which then means what's really going on? What are we really doing?
Speaker 2
43:50
We're negotiating the terms and conditions of existence.
Speaker 3
43:54
Do you worry about the survival of this process? That life as we know it on Earth comes to an end, or at least intelligent life; that as the cost goes to 0, something happens where all of that intelligence is thrown in the trash by something like nuclear war or the development of AI systems that are very dumb. Not AGI, I guess, but AI systems, the paperclip thing, that en masse is dumb but has unintended consequences that destroy human civilization.
Speaker 3
44:30
Do you worry about those kinds of things?
Speaker 2
44:32
I mean, it's unsurprising that when a new thing comes into the sphere of human consciousness, humans identify the foreign object, in this case, artificial intelligence. Our amygdala fires up and says: scary, foreign, we should be apprehensive about this.
Speaker 2
44:53
And so it makes sense from a biological perspective that the human knee-jerk reaction is fear. What I don't think has been properly weighted with that is that we are the first generation of intelligent beings on this Earth that has been able to look out over their expected lifetime and see there is a real possibility of evolving into entirely novel forms of consciousness. Yeah. So different that it would be totally unrecognizable to us today.
Speaker 2
45:32
We don't have words for it. We can't hint at it. We can't point at it. We can't, you can't look in the sky and see that thing that is shining.
Speaker 2
45:39
We're going to go up there. You cannot even create an aspirational statement about it. And instead we've had this knee-jerk reaction of fear about everything that could go wrong. But in my estimation, this should be the defining aspiration of all intelligent life on Earth, that we would aspire to it; basically, every generation surveys the landscape of possibilities that are afforded, given the technological, cultural, and other contextual situation that they're in.
Speaker 2
46:14
We're in this context, and we haven't yet identified this and said, this is unbelievable, we should carefully think this thing through, not just mitigating the things that'll wipe us out. We have this potential, and we just haven't given a voice to it, even though it's within the realm of possibilities.
Speaker 3
46:33
So you're excited about the possibility of superintelligent systems and the opportunities that would bring. I mean, there's parallels to this. Think about people before the internet, as the internet was coming to life. There's kind of a fog through which you can't see.
Speaker 3
46:50
What does the future look like? Or take predicting collective intelligence, which I don't think we're understanding that we're living through now. We've in some sense stopped being individual intelligences and become much more like collective intelligences, because ideas travel much, much faster now. And they can, in a viral way, sweep across the population. And so, I mean, it almost feels like a thought is had by many people now, thousands or millions of people, as opposed to an individual person.
Speaker 3
47:29
And that's changed everything. But to me, I don't think we're realizing how much that actually changed people or societies. To predict that before the internet would have been very difficult. And in that same way, we're sitting here with the fog before us thinking, what are superintelligent systems, how are they going to change the world?
Speaker 3
47:51
What is increasing the bandwidth, like plugging our brains into this whole thing, how is that going to change the world? And it seems like it's a fog, you don't know, and it could be, whatever comes to be could destroy the world. We could be the last generation, but it also could transform in ways that creates an incredibly fulfilling life experience that's unlike anything we've ever experienced. It might involve dissolution of ego and consciousness and so on, you're no longer 1 individual.
Speaker 3
48:32
It might be more, you know, that might be a certain kind of death, an ego death. But the experience might be really exciting and enriching. Maybe we'll live in a virtual world. It's funny to think about a bunch of sort of hypothetical questions: would it be more fulfilling to live in a virtual world? Like if you were able to plug your brain in, in a very dense way, into a video game.
Speaker 3
49:03
Like which world would you wanna live in? In the video game or in the physical world? For most of us, we're kinda toying with the idea of the video game, but we still wanna live in the physical world, have friendships and relationships in the physical world. But we don't know that. Again, it's a fog.
Speaker 3
49:23
And maybe in 100 years, we're all living inside a video game, hopefully not Call of Duty. Hopefully more like Sims 5. Which version is it on? For you individually though, does it make you sad that your brain ends?
Speaker 3
49:41
That you die 1 day very soon? That the whole thing, that data source just goes offline sooner than you would like?
Speaker 2
49:54
That's a complicated question. I would have answered it differently in different times in my life. I had chronic depression for 10 years.
Speaker 2
50:03
And so in that 10 year time period, I desperately wanted lights to be off. And the thing that made it even worse is I was in a religious, I was born into a religion. It was the only reality I ever understood. And it's difficult to articulate to people when you're born into that kind of reality and it's the only reality you're exposed to, you are literally blinded to the existence of other realities because it's so much the in-group, out-group thing.
Speaker 2
50:31
And so in that situation, it was not only that I desperately wanted lights out forever, it was that I couldn't have lights out forever. It was that there was an afterlife. And this afterlife had this system that would either penalize or reward you for your behaviors. And so it was almost like this indescribable hopelessness, of not only being in hopeless despair of not wanting to exist, but then also being forced to exist.
Speaker 2
51:04
And so there was a duration of my life where I'd say, like, yes, I have no remorse for lights being out, and actually wanted it more than anything in the entire world. There are other times where I'm looking out at the future and I say, this is an opportunity for future evolving human conscious experience that is beyond my ability to understand, and I jump out of bed and I race to work and I can't think about anything else. But I think the reality for me is, I don't know what it's like to be in your head, but in my head, when I wake up in the morning, I don't say, good morning, Brian, I'm so happy to see you.
Speaker 2
51:52
Like, I'm sure you're just gonna be beautiful to me today. You're not gonna make a huge long list of everything you should be anxious about. You're not gonna repeat that list to me 400 times. You're not gonna have me relive all the regrets I've made in life.
Speaker 2
52:04
I'm sure you're not doing any of that. You're just gonna help me along all day long. I mean, it's a brutal environment in my brain. And we've just become normalized to this environment, we just accept that this is what it means to be human.
Speaker 2
52:18
But if we look at it, if we try to muster as much soberness as we can about the realities of being human, it's brutal, at least it is for me. And so am I sad that the brain may be off 1 day? It depends on the contextual setting. Like how am I feeling?
Speaker 2
52:39
At what moment are you asking me that? And my mind is so fickle. And this is why, again, I don't trust my conscious mind. I have been given realities.
Speaker 2
52:47
I was given a religious reality that was a video game. And then I figured out it was not a real reality. And then I lived in a depressive reality, which delivered this terrible hopelessness. That wasn't a real reality.
Speaker 2
53:00
Then I discovered behavioral psychology and I figured out how biased we are, the 188 cognitive biases, and how my brain is distorting reality all the time. I have gone from 1 reality to another. I don't trust reality. I don't trust the realities given to me.
Speaker 2
53:15
And so to try to make a decision on whether I value or don't value that future state, I don't trust my response.
Speaker 3
53:22
So not fully listening to the conscious mind at any 1 moment as the ultimate truth, but allowing it to go up and down as it does, and just kind of observing it.
Speaker 2
53:35
Yes, I assume that whatever my conscious mind delivers up to my awareness is wrong upon landing. And I just need to figure out where it's wrong, how it's wrong, how wrong it is, and then try to correct for it as best I can. But I assume that on impact, it's mistaken in some critical ways.
Speaker 3
53:55
Is there something you can say by way of advice when the mind is depressive, when the conscious mind serves up dark thoughts? How do you deal with that? Like how in your own life you've overcome that, and how others who are experiencing that can overcome it? 2 things. 1,
Speaker 2
54:18
those depressive states are biochemical states, it's not you. And the suggestions that this state delivers to you, the suggestion of the hopelessness of life or the meaninglessness of it, or that you should hit the eject button, that's a false reality. And I completely understand the rational decision to commit suicide. It is not lost on me at all that that is an irrational situation, but the key is when you're in that situation and those thoughts are landing, to be able to say, thank you, you're not real.
Speaker 3
55:06
I
Speaker 2
55:06
know you're not real. And so I'm in a situation where for whatever reason I'm having this neurochemical state, but that state can be altered. And so it, again, it goes back to the realities of the difficulties of being human.
Speaker 2
55:20
And like when I was trying to solve my depression, I tried literally, you name it, I tried it, systematically, and nothing would fix it. And so this is what gives me hope with brain interfaces, for example. Like, could I have numbers on my brain? Can I see what's going on? Because I go to the doctor and it's like, how do you feel?
Speaker 2
55:38
I don't know, terrible. Like on a scale from 1 to 10, how bad do you want to commit suicide? 10. You know, like, okay.
Speaker 2
55:45
Yeah, at this moment. Here's this bottle. How much should I take? Well, I don't know, like just.
Speaker 3
55:50
Yeah, it's very, very crude. And this data opens up the possibility of really helping in those dark moments, to first understand the ups and downs of those dark moments. On the complete flip side of that, I am very conscious in my own brain, and deeply, deeply grateful that, it's almost like a chemistry thing, a biochemistry thing, that many times throughout the day, I'll look at like this cup and I'll be overcome with joy at how amazing it is to be alive.
Speaker 3
56:34
I actually think my biochemistry is such that it's not as common, like I've talked to people and I don't think that's that common. And it's not a rational thing at all. It's like, I feel like I'm on drugs. And I'll just be like, whoa.
Speaker 3
56:54
And a lot of people talk about the meditative experience will allow you to sort of look at some basic things like the movement of your hand as deeply joyful because that's life. But I get that from just looking at a cup. Like I'm waiting for the coffee to brew. I'll just be like, fuck, life is awesome.
Speaker 3
57:15
And I'll sometimes tweet that, but then I'll regret it later. Like, goddammit, you're so ridiculous. Yeah, but that is purely chemistry. There's no rational, it doesn't fit with the rest of my, I have all this shit, I'm always late to stuff, I'm always like, there's all this stuff, you know, I'm super self-critical, like really self-critical about everything I do, to the point I almost hate everything I do, but there's this engine of joy for life outside of all that, and that has to be chemistry.
Speaker 3
57:45
And the flip side of that is probably what depression is: the opposite of that feeling. Because I bet you that feeling of the cup being amazing would save anybody in a state of depression. Like, you're in a desert and it's a drink of water. Shit, man, the brain is, it would be nice to understand where that's coming from, to be able to understand how you hit those lows and those highs that have nothing to do with the actual reality.
Speaker 2
58:21
It
Speaker 3
58:21
has to do with some very specific aspects of how you maybe see the world, maybe it could be just like basic habits that you engage in and then how to walk along the line to find those experiences of joy.
Speaker 2
58:35
And this goes back to the discussion we were having, that human cognition is, in volume, the largest input of raw material into society. And it's not quantified. We have no bearings on it.
Speaker 2
58:50
And so we just, you wonder, we both articulated some of the challenges we have in our own mind. And it's likely that others would say, I have something similar.
Speaker 3
59:03
And
Speaker 2
59:03
you wonder, when you look at society, how does that contribute to all the other compounded problems that we're experiencing? How does that blind us to the opportunities we could be looking at? And so it really has this potential distortion effect on reality that just makes everything worse.
Speaker 2
59:27
And I hope, if we can assign some numbers to these things, just to get our bearings, so we're aware of what's going on, if we could find greater stabilization in how we conduct our lives and how we build society, it might be the thing that enables us to scaffold. Because, again, humans have done a fantastic job systematically scaffolding technology and scientific institutions.