Speaker 1
00:01
This is my voice. It can tell you a lot about me. And I'm not changing it for anyone. In NPR's Black Stories, Black Truths, you'll find a collection of NPR episodes centered on the Black Experience.
Speaker 1
00:16
Search NPR Black Stories, Black Truths, wherever you get podcasts.
Speaker 2
00:23
Hello and welcome to How I Built This Lab. I'm Guy Raz. So right now it's the beginning of the show and you're probably listening pretty actively.
Speaker 2
00:32
At least I hope you are. But the reality is that sometimes our attention drifts. So what if right before you reach that point, your headphones or your earbuds triggered a small audible tone that basically told you you're starting to lose focus. Well, that's where today's guest comes in.
Speaker 2
00:50
Ramses Alcaide is the co-founder and CEO of the non-invasive wearable tech company Neurable. Neurable's main product is the Enten, a set of headphones that adapt to your natural working rhythms to help prevent burnout. Neurable actually began in 2015, when Ramses was getting his PhD in neuroscience at the University of Michigan. But his interest in technology goes back much further.
Speaker 3
01:14
I came from Mexico. I was born in Mexico. And I came to the United States when I was 5.
Speaker 3
01:19
And I remember that I was so into computers, my parents bought me one, basically a throwaway computer discarded by some large company. And I remember taking it apart and being just so fascinated by it that I wanted to fix other people's computers. And so I would post up signs in my neighborhood where I would basically do computer repair for people for like $20 an hour. And that was great money back when I was a kid, especially when you're anywhere between 6, 7, 8 years old, right?
Speaker 3
01:47
And I remember I'd show up and they'd open the door and they're like, oh, it's so cute. You brought your son with you. And my dad's like, no, I don't know anything about computers. Like he's gonna fix whatever your problem is.
Speaker 2
01:57
So from an early age, you had this knack for computers and clearly your passion for tech continued. And from what I understand, when you were a kid, your family experienced a tragedy that really sparked your interest in robotics. Can you tell me a little bit more about that?
Speaker 3
02:13
Yeah, I mean, really the idea of Neurable and kind of the concept of what I've dedicated my life to started when I was about 8 years old. My uncle got into a trucking accident in Mexico and he lost both his legs, and it was a really intense time for the family. We brought him from Mexico to the United States to get his prosthetics made.
Speaker 3
02:34
And my uncle's kind of a genius. You know, he's been an inventor all his life. He's always been a hard worker. And seeing him through that struggle, and seeing how unnatural the prosthetic systems he started using were, that's what really motivated me: how do I leverage this curiosity that I have with electronics and computers to try to make something more natural for him?
Speaker 2
02:56
Like in your head, you were thinking, one day I'm going to make something that's going to let him control his prosthetics.
Speaker 3
03:02
Exactly.
Speaker 2
03:03
I read that you got an undergraduate degree in electrical and electronics engineering. And it seems like you could have taken a path towards robotics, right? Especially inspired by what happened to your uncle.
Speaker 2
03:15
Was that a path that you were potentially pursuing?
Speaker 3
03:18
Yeah, definitely. So when I was studying electrical engineering, I worked with the prosthetics teams at the University of Washington. And then I was like, I really wanted to connect the brain to these prosthetic limbs.
Speaker 3
03:31
So I went to grad school and I started working with brain-computer interfaces more, and that's when I worked with people with ALS and children with cerebral palsy, and I was like, wow, this is just a whole other level. I mean, as terrible as it is, for example, my uncle having to go through this experience and having prosthetics that don't work with him naturally, what if you can't even move your eyes, right? Like, that's just a whole different level of need. And so that's really what pivoted me from robotics, something I thought I was going to dedicate my life to, to something much broader.
Speaker 2
04:03
All right, let's talk about brain-computer interfaces, because I think a lot of people, when they hear that term, they think of, like, electrodes, EEGs, right?
Speaker 3
04:11
Correct, EEG, electroencephalography.
Speaker 2
04:13
Nodes that are attached to people's heads, you know, that track brain waves. But what were you trying to solve with this research and this concept?
Speaker 3
04:25
Yeah, definitely. So essentially, the work that I did as a graduate student is I was working with children who had severe cerebral palsy. And the issue there is that we were not able to give them these tests that they needed in order to be approved for physical rehabilitation.
Speaker 3
04:44
And the reason for that is because they couldn't communicate, at least not through traditional means like talking or pointing. And so we used brain-computer interfaces to solve that problem. And the blocker there was that, you know, you have this eight-year-old kid, he just got 20 minutes' worth of setup with goop and gel in his hair to make the technology work, then about 10 minutes' worth of calibrating the system, and then it would sometimes take between 1 and 5 minutes to even get a response from him. And so the work that I did was essentially machine learning classification, so that we could interpret their brain activity at a much higher level of accuracy.
Speaker 3
05:18
And what that enabled us to do is reduce the response time from 1 to 5 minutes down to anywhere between 30 seconds and a minute, which is enormous for an eight-year-old kid, because having them sit there for 5 minutes to answer a yes or no question is crazy.
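To make the arithmetic of that speedup concrete, here is a minimal Python sketch of the kind of evidence-accumulation loop a P300-style yes/no system runs. This is not Neurable's pipeline; every waveform shape, noise level, and threshold below is invented for illustration. The point it shows: the faster each repetition pushes the averaged signal past a decision threshold, the fewer repetitions, and so the fewer seconds, the child has to sit through.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 256                             # sampling rate in Hz (assumed)
T = np.arange(int(0.8 * FS)) / FS    # one 800 ms epoch per stimulus repetition

def epoch(is_target: bool) -> np.ndarray:
    """Simulate one EEG epoch: target epochs carry a P300-like bump near 300 ms."""
    noise = rng.normal(0.0, 5.0, T.size)                      # noise dominates single trials
    bump = 2.0 * np.exp(-((T - 0.3) ** 2) / (2 * 0.05 ** 2))  # invented P300 shape
    return noise + (bump if is_target else 0.0)

def decide(epochs: list) -> bool:
    """Average all epochs so far and check whether the mean amplitude in the
    250-400 ms window clears an (invented) confidence threshold."""
    avg = np.mean(epochs, axis=0)
    window = (T > 0.25) & (T < 0.40)
    return avg[window].mean() > 0.8

# Keep presenting the stimulus until the averaged response is confident enough.
collected = []
for rep in range(1, 41):
    collected.append(epoch(is_target=True))
    if decide(collected):
        print(f"confident answer after {rep} repetitions (~{0.8 * rep:.1f} s of data)")
        break
```

A better classifier is, in effect, a lower-noise `decide` step: it clears the threshold in fewer repetitions, which is exactly the 1-to-5-minutes-down-to-30-seconds gain described above.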
Speaker 2
05:33
All right. Help me understand what the technology is that you're talking about. You said it would take, you know, 5 to 10 minutes for setup. And what was it that you were doing differently?
Speaker 3
05:42
Yeah. So the reason the setup takes so long is because getting brain data is incredibly difficult. There's a lot of noise in the environment: even your blinking, you know, talking, electromagnetic noise, even the lights impact the ability to collect this data, because the sensors are so sensitive and brain signals are so small. And so we essentially developed an artificial intelligence that was able to use brain data that we had previously collected, plus the brain data coming in, to increase the signal-to-noise ratio so we could classify what a person intended to do.
Speaker 3
06:18
In this case, the child was selecting a multiple-choice answer at a greater fidelity. And when you can do that at greater fidelity, you don't need to repeat the question numerous times. And through less repetition, it gave people a better user experience. And so being able to make that brain-computer interface work in a seamless way really unblocks a lot of its use cases.
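One generic way to picture how previously collected data can raise per-trial fidelity, sketched below in Python: average past labeled epochs into a template of the target response, then score each incoming epoch by projecting it onto that template. This is a textbook matched-filter stand-in, not the actual learned system, and all numbers are fabricated.

```python
import numpy as np

rng = np.random.default_rng(1)
FS = 256
T = np.arange(int(0.8 * FS)) / FS

def epoch(is_target: bool) -> np.ndarray:
    """Fabricated epoch: a 300 ms bump for targets, buried in heavy noise."""
    noise = rng.normal(0.0, 5.0, T.size)
    bump = 2.0 * np.exp(-((T - 0.3) ** 2) / (2 * 0.05 ** 2))
    return noise + (bump if is_target else 0.0)

# "Previously collected" labeled data -> a normalized template of the response.
template = np.mean([epoch(True) for _ in range(200)], axis=0)
template -= template.mean()
template /= np.linalg.norm(template)

def score(x: np.ndarray) -> float:
    """Matched-filter score: how strongly the incoming epoch resembles the template."""
    x = x - x.mean()
    return float(x @ template)

# The template concentrates evidence from the whole waveform into one number,
# so single trials separate far better than thresholding any raw sample would.
targets = [score(epoch(True)) for _ in range(500)]
nontargets = [score(epoch(False)) for _ in range(500)]
ranking_acc = np.mean([t > n for t, n in zip(targets, nontargets)])
print(f"single-trial target-vs-nontarget ranking accuracy: {ranking_acc:.2f}")
```

Higher single-trial fidelity is what lets the system stop repeating the question, which is the user-experience win described above.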
Speaker 2
06:40
All right, so how much can brainwave data tell us? Like, in theory, could you look at a person's EEG reading and know, for example, how they'd answer a simple yes or no question, or even that they're thinking about pushing a button or moving an object?
Speaker 3
06:56
Yeah, I mean, there's a lot that you can take from brain data. So for example, just using EEG, you can identify a person's focus. You can identify, for example, measures of stress, whether they're going through a stroke, epilepsy, sleep responses like REM sleep.
Speaker 3
07:19
So there's a lot that you can do, because your brain is the central hub for everything. But the main issue is that in order to tap into those types of applications, you usually need to have a giant gel-cap system with lots of setup. So imagine you're an eight-year-old kid and we set you up with a system, and there's gel running down your face and you don't want to be there, but your future depends on it. Or imagine we try to bring this to the real world; no one's going to wear that, right?
Speaker 3
07:49
No one's going to want to wear a swimmer's cap with gel in their hair. So how do we unlock all these incredible value propositions? We created IP at the University of Michigan that helped us increase the signal-to-noise of those brainwaves so that we could actually bring them to everyday devices, instead of having them be trapped in a laboratory setting. Even if we could just unlock what we already have in the lab, it'd be a major step, and it would accelerate so many fields.
Speaker 3
08:16
And so that's what the company was focused on. Essentially, how do we create an everyday brain-computer interface? How do we unlock the brain for billions of people?
Speaker 2
08:26
It makes sense, because the brain just sends out electric signals to the rest of our body. And so if you could somehow harness or capture those signals, maybe you could make them work in ways that we haven't been able to make them work yet.
Speaker 3
08:40
Exactly. And even just the stuff that we were doing in the laboratory, for example, no one has an EEG device at home, right? But what if instead you just put on your Apple AirPods to go take a call, and then every single time you do that, we're tracking your brain just a little bit more, to be able to tell you, hey, actually, you know what, you're trending toward Alzheimer's. You're getting older, and we're starting to see this cognitive decline; this is when you should go seek out a doctor.
Speaker 3
09:06
Not 10 years into the disease, when it's, you know, already too late, right? And so, how do we unlock all these incredible value propositions of brain-computer interfaces for the masses? That's really the main work that we do at Neurable.
Speaker 2
09:23
We're going to take a short break, but when we come back in just a moment, more from Ramses about bringing brain-interface technology to the masses. Stay with us. I'm Guy Raz and you're listening to How I Built This Lab.
Speaker 2
09:39
Hey, it's Guy here and while we take a little break, I wanted to let you know about an episode of How I Built This we released a couple of weeks ago. It's about how Sal Khan, the creator of Khan Academy, is using AI to make education more accessible. We've all heard about the rise of generative AI technology and the popularity of things like ChatGPT. Well, at Khan Academy, Sal Khan is working on ways to build this technology into a new tool to help students learn more effectively by providing a personal learning experience.
Speaker 2
10:09
Of course, like all technology, this has its risks, but Sal believes that generative AI could reshape the classroom for good and help us close the massive learning gap left by the pandemic by supporting students and teachers alike. You can find this episode by following How I Built This and scrolling back a little bit to the episode titled When AI Is Your Personal Tutor with Sal Khan of Khan Academy, or by searching Khan Academy, that's K-H-A-N Academy, wherever you listen to podcasts. Welcome back to How I Built This Lab. I'm Guy Raz.
Speaker 2
10:51
My guest today is Ramses Alcaide, who launched his company, Neurable, back in 2015 to develop brain-computer interface technology. All right, so you basically, while you were a student, launched what eventually became Neurable, starting out just looking at how you could use brainwaves and patterns to help people, and now it's evolved into wearable devices. But essentially, it's about the technology. It's about building a way that you could really measure what's happening inside of our bodies in an accurate way by measuring brain waves.
Speaker 3
11:30
Exactly. And the hypothesis there was, even if we just build a reliable brain-computer interface system, which is what our core technology enabled us to do, that doesn't mean it's going to scale, because it has to bring people value, right? And so that's why we ended up going toward focus and helping individuals essentially prevent burnout from occurring. And then on top of that, with some of our groups, we actually do it for safety.
Speaker 3
11:56
So, preventing injuries due to fatigue. For example, the Air Force; you know, really big problem, right? So it can save billions of dollars. And that enables us to have a really strong, concrete step one for us to build a business and enable large numbers of these systems to go out.
Speaker 3
12:16
And then from there, really open it up to others to help solve some of these other problems as well.
Speaker 2
12:21
All right. So basically, we're trying to solve this challenge, this problem, which is how do you gather data from brainwaves in an easier way, right? Like for example, I've got an Apple Watch.
Speaker 2
12:32
And so I wake up and it can give me some data about my sleep. It'll tell me my heart rate, blood oxygen. So, I mean, you know, given what we can already gather from just our heart rate, right, which is quite a bit of data, beyond all the things we talked about, what other things potentially could we learn about our health or our general state from these devices, from these brainwaves?
Speaker 3
12:58
Yeah, you know, what's really interesting is that brain-sensing devices are actually the ultimate wearable. A lot of the devices that you wear right now, for example, accelerometers for movement or heart-rate sensors, measure things that can either be picked up through brain data or that originally come from the brain; they're just secondary sources of the signal, right? So for example, you can actually pick up Parkinson's responses using the Apple Watch.
Speaker 3
13:24
The issue is that by the time you pick it up at the hands or through walking metrics, your brain's already been dealing with it for the past 10 years. And so with the brain, you can actually pick up a lot of those things earlier. So there are two parts. One is that brain-based wearables are going to replace all the other wearables that you have.
Speaker 3
13:43
That's step one. And so all that data and all that value that we're seeing with existing wearables are going to be consolidated into one device. But then two, there are certain things that you can only pick up from the brain. You know, for example, traumatic brain injury information, tracking ALS, right, seizure detection.
Speaker 3
14:01
There are so many other things that you can only do with the brain that, you know, not only are you covering what your previous wearables did, but now you're adding a whole plethora of medical use cases that have already been tested in the scientific literature, but now are able to be used at scale.
Speaker 2
14:17
All right, so let's talk about these headphones, the Enten headphones that your team has been working on and is getting ready to release later this year. What are you able to track by putting these headphones on people now? What can you actually find out?
Speaker 3
14:32
Yeah, so there are kind of, like, three areas where we use the technology right now. The first is, for example, understanding an individual's focus over time, when they're fatiguing, when they should be taking a break in order to maintain it. Your brain with breaks is kind of like your body with dehydration.
Speaker 3
14:48
You should be, in the case of the body, drinking water throughout the day. Well, you should be taking breaks throughout the day too. Even though you may not feel thirsty, or may not feel tired, you should be doing that in order to maintain a high level of hygiene for your own work and life balance. And so that's kind of the first area.
Speaker 3
15:05
The second one is control. So we have the ability, for example, to use brain activity to do very minimal controls on the hardware as well: changing music tracks, playing and pausing music. And then the third is that there are so many incredible biomarkers that can be picked up using this type of technology.
Speaker 3
15:24
Like I said, tracking Alzheimer's or cognitive decline, or, you know, other types of biometrics. Essentially, just like how the Apple Watch started out as a system for tracking your movement and now can actually pick up heart arrhythmias, all of that medical landscape of biomarkers is also available through these brain-computer interfaces.
Speaker 2
15:46
But how do headphones, how do sensors around your ear capture as accurate information as those other sensors?
Speaker 3
15:55
Yeah, I guess to answer that, we have to break it down into two steps. First is, like, how do we capture those signals from headphones? Well, the brain is very conductive, right?
Speaker 3
16:04
So for example, one of the brain signals we look at is called the P300. We don't really have to get into the details of it, but essentially, that comes from an area of your brain called the parietal lobe. It's around the back of your head. And even though that signal comes from around the back of your head, it's such a strong response that it actually goes all over your brain.
Speaker 3
16:26
But the farther it goes from the signal source, the smaller that signal becomes, so it becomes harder to read. So we know that these signals travel across the head, but you lose the ability to record them easily. And that's where our AI comes in. It picks up these signals even though they don't come from the most perfect location; they come from the areas where headphones sit.
Speaker 3
16:47
And from there, we're able to boost those signals to a level that makes them usable for different applications.
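A toy model of that physics, in Python: a strong source far from the sensors arrives attenuated and noisy at several ear-level electrodes, and a data-driven spatial filter learned from a calibration recording recombines them to recover the source better than any single electrode could. Ordinary least squares stands in here for whatever the real system learns; the gains and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
FS = 256
t = np.arange(FS * 10) / FS                  # 10 s calibration recording
source = np.sin(2 * np.pi * 1.5 * t)         # stand-in for the parietal signal

# Each ear-level electrode sees the source scaled down (it sits far from the
# source) plus its own independent noise. Attenuation factors are invented.
gains = np.array([0.5, 0.35, 0.4, 0.3])
X = source[:, None] * gains + rng.normal(0.0, 1.0, (t.size, gains.size))

# Calibration step: learn spatial-filter weights that best reconstruct the
# known source from the noisy electrodes (least squares as a stand-in).
w, *_ = np.linalg.lstsq(X, source, rcond=None)
recovered = X @ w

best_single = max(abs(np.corrcoef(X[:, i], source)[0, 1]) for i in range(gains.size))
combined = abs(np.corrcoef(recovered, source)[0, 1])
print(f"best single electrode correlation: {best_single:.2f}")
print(f"spatially filtered correlation:    {combined:.2f}")
```

Combining several weak, differently-noisy views of the same source is what "boosting" a too-small signal amounts to in this picture.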
Speaker 2
16:53
So presumably, when people have access to, initially, the headphones, there'll be an interface, either on their watch or just on a webpage or their smartphone, where they can see this data.
Speaker 3
17:07
Correct, yes.
Speaker 2
17:08
And initially it will be, what kind of data will they have access to?
Speaker 3
17:12
Yeah, you know, it really depends on the audience. So for most individuals buying our tech, they're just going to be able to see their focus scores whenever they do work and suggestions on how to improve their focus over time, reduce fatigue, create more balance in their life. But there is a whole bunch of raw data and filtered data that is more granular, that's gonna be available to scientists or people who wanna dig in deeper.
Speaker 3
17:39
And from that, that's where you can pick up a lot more detail as to like, you know, sleep measures or, you know, epilepsy, et cetera.
Speaker 2
17:49
And so the idea is you would wear the headphones all day, and you would just go about your day?
Speaker 3
17:57
You would wear the headphones whenever you're working. So like right now I'm wearing headphones, so I just do a couple hours of work, wear the headphones, and then they would notify me when I should be taking a break in a way that's optimal for my mental health.
Speaker 2
18:11
So like what kind of notification?
Speaker 3
18:12
It would just be an audible notification, you know: we recommend you take a break. And you can ignore it if you want to. Sometimes you're in the middle of something, you need a few more minutes, that's fine.
Speaker 3
18:21
And then you would take a break and it can tell you how long you should be taking a break. And then when you come back, it's actually really surprising, especially in our user testing. Like people don't realize how much they needed it and they come back and then they just crush their work. And they're like, wow, like I didn't think I needed this break, but I came back so energized and like focused.
Speaker 3
18:40
And it's having people have that feeling consistently, ending their day feeling really motivated about the work that they did, so that they don't feel guilty about taking some time off when they get home. That's essentially the feeling we're able to give people with the technology.
Speaker 2
18:53
Essentially it's recommending when you need to take a break, right, when you're working. But what is that based on? What kind of data is it getting to suggest that you need to stop working?
Speaker 3
19:03
Yeah, so we did a large study with a professor from Harvard who's now a professor at Worcester Polytechnic Institute, and he had created this incredible method for identifying an individual's focus. And so what we were able to do, and we did close to a thousand individuals' worth of data collection on this, is we tracked people just doing their work while leveraging this algorithm that we co-developed with him. And what we saw is that there were very clear breaks in the data, where we could see that if we were to recommend them a break at this time point, it enabled them to have 3 to 4 hours of higher productivity afterwards, feel more refreshed, and reduce the errors in the work that they do, instead of just burning themselves out and feeling really, you know, bad about their day, essentially.
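The study's actual algorithm isn't spelled out here, so the Python below is only a hedged sketch of the shape of that logic: feed in one focus score per minute, and recommend a break once a smoothed recent average sinks well below the session's running baseline. The window size, the 0.75 ratio, and the focus scores themselves are all invented.

```python
from collections import deque

def make_break_monitor(window: int = 15, drop_ratio: float = 0.75):
    """Return an update function fed one focus score (0..1) per minute.
    It flags a break when the last `window` minutes average less than
    `drop_ratio` times the whole session's baseline. Parameters invented."""
    recent = deque(maxlen=window)
    history = []

    def update(score: float) -> bool:
        history.append(score)
        recent.append(score)
        baseline = sum(history) / len(history)
        if len(recent) == window:
            return sum(recent) / window < drop_ratio * baseline
        return False

    return update

# Simulated session: strong focus that slowly fatigues over three hours.
monitor = make_break_monitor()
for minute in range(180):
    focus = 0.9 - 0.004 * minute
    if monitor(focus):
        print(f"break recommended at minute {minute}")
        break
```

The payoff claimed above, 3 to 4 hours of higher productivity after a well-timed break, is exactly the kind of outcome such a detector would be tuned against.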
Speaker 2
19:55
Okay, so it can basically sense certain brainwaves that would suggest, according to this research, that it's time to take a break. What else can it do while you're working and you've got the headphones on?
Speaker 3
20:07
One of the best parts about the technology is you can actually build things on top of it. So, a few of the things that people are building: for example, the first one is with Audible, right? You can listen to an audiobook, and then when you get distracted, it'll actually automatically pause it, which is really great, because this happens to me a lot when I read or when I'm doing audiobooks; I have ADHD.
Speaker 3
20:30
I'll start reading or listening, and then I'll just start zoning off into something else, and then I'll get to the end of the page of the book or the end of that paragraph in the audiobook and realize I haven't paid attention at all and I've got to rewind it. So there are ways to, for example, automatically pause things. And there's a whole bunch of other things you can do with the technology too, but essentially, it's that reliable.
Speaker 2
20:52
On that note, because this happens to me too, right? When I'm reading a book and I just sort of zone out or even listening to something, how does it know that you're zoning out?
Speaker 3
21:03
Yeah, it's the same algorithm for focus and fatigue. Essentially, we notice that there's a sharp decrease in an individual's focus. And once we identify that, if it remains consistent, then we know that the person's not focused on the task that they were on previously.
Speaker 3
21:21
And so then we're able to just pause it or in the case of reading, for example, I have it output a small audible tone and it just reminds me, hey, that's right, I should be reading right now. You know, I just got distracted by a random cat that walked by, and now I should get back to reading.
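What Ramses describes, a sharp decrease in focus that remains consistent, maps naturally onto a small state machine. The Python below is a hedged sketch of that trigger logic, not Neurable's implementation; the drop size, hold count, and baseline smoothing are invented.

```python
def make_distraction_detector(drop: float = 0.3, hold: int = 5):
    """Return an update function fed a stream of focus scores (0..1).
    It fires once the score has sat more than `drop` below the running
    baseline for `hold` consecutive samples: a sharp, sustained decrease."""
    state = {"baseline": None, "low_count": 0}

    def update(score: float) -> bool:
        if state["baseline"] is None:
            state["baseline"] = score
            return False
        if score < state["baseline"] - drop:
            state["low_count"] += 1                 # still sharply below baseline
        else:
            state["low_count"] = 0                  # recovered; track the new level
            state["baseline"] = 0.9 * state["baseline"] + 0.1 * score
        return state["low_count"] >= hold

    return update

detect = make_distraction_detector()
stream = [0.8] * 20 + [0.4] * 10        # attentive, then zoned out by a passing cat
for i, score in enumerate(stream):
    if detect(score):
        print(f"pause the audiobook / play a gentle tone at sample {i}")
        break
```

Requiring the drop to persist for several samples is what keeps a one-sample blip, a blink or a noise spike, from pausing your audiobook.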
Speaker 2
21:38
We're going to take another quick break, but when we come back, more from Ramses on just how close we might be to mind-controlled technology. Stick around. I'm Guy Raz, and you're listening to How I Built This Lab.
Speaker 2
22:05
Welcome back to How I Built This Lab. I'm Guy Raz, and my guest today is Ramses Alcaide, co-founder and CEO of Neurable. There are a bunch of companies working in this space, and just like with AI and other categories, many of those companies are massively well-funded, you know, hundreds of millions of dollars in funding. You've raised $20 million, which is impressive, but tiny compared to some of these other companies, as I'm sure you know.
Speaker 2
22:31
How do you prevent those companies from just, you know, with their cash supercharging this technology and leaving smaller companies behind?
Speaker 3
22:40
Yeah, I think a lot of those companies aren't really our competitors. That's why, you know, like Neuralink is not a competitor. You know, we're non-invasive.
Speaker 3
22:47
We don't require surgery. Most of those companies that do invasive work need a ton more money. It makes sense. The way I would think of Neuralink is kind of like a hip replacement.
Speaker 3
22:56
You know, no one really wants a hip replacement until, you know, you actually need one. And even if they end up being 10 times better than your actual hips in the future, you probably want to be able to keep your hips for as long as possible regardless. When it comes to the competitors more in our space, which is non-invasive, we're probably one of, if not the, best-funded companies in the world. And so that's enabled us to continue staying ahead.
Speaker 3
23:24
And at first it was because of our technology; we're at least 5 to 10 years ahead of anybody else in the market. But now it's because of the business partnerships that we have that we continue to grow.
Speaker 2
23:34
So let's talk about the products that you're developing. Right now it's going to be a commercial product, right? Headphones initially, is that right?
Speaker 3
23:44
Yeah, headphones and then very soon after in earbuds.
Speaker 2
23:47
And let's talk about the headphones' rollout. When will they be available?
Speaker 3
23:51
They're gonna be available essentially Q4 this year or Q1 next year. And then from there, we'll continue improving things and working with the customers that we have to help us further evolve the product and help scientists unlock more capabilities. It's really going to be a community effort to build these types of devices.
Speaker 2
24:11
OK, and so presumably the idea is to build on this and to be able to create other features down the road. But lots of companies are working on ways for our thoughts to be turned into actions. Tell me about that side of the research that you're doing, because presumably down the road, you'd wanna do something around that.
Speaker 3
24:33
So, you know, some of this research, it's called silent vocalization research, has been around, and the main issue is that when you collect this type of data, you usually need sensors around the mouth and the face, and they pick up primarily muscle activations that are completely invisible to the user. And so we're essentially doing a very similar methodology, but inside the wearable devices that are going to be compatible with our platform. And with that, we're going to be able to open up more capabilities for individuals to interact with our technology.
Speaker 3
25:07
And the best part is, if you have any headphones or earbuds that are Neurable-compatible, you'll just be getting all of this through software updates, essentially. But at first, we're going to introduce a system for very simple forms of control, just launching Spotify, play, pause, next track. And then as we collect more data, with individuals' consent, obviously, we're going to be able to further expand those capabilities. But the main goal is, how do we essentially create more of a seamless interaction system between humans and their computers?
Speaker 2
25:39
Also on your webpage, there's a – in this video, it shows like a young woman walking and like thinking in her mind of a message she's sending to her dad, like a text message. She's like, hey dad, meet me at home for dinner, and she's just thinking it. How far away are we from that reality?
Speaker 2
25:58
I mean, are we 10 years away, 5 years away, a year away, 20 years away?
Speaker 3
26:04
So that thinking to text perspective, at least the very first versions of it are the ones that I was discussing earlier, where it's very simple, like play, pause, next track. And then as we collect more data and build out the system, it'll enable more things similar to the video that you saw.
Speaker 2
26:21
So just to clarify, you're saying that in a short period of time, you'll be able to wear these headphones and just think in your brain without saying it, play music or play this song and it'll play it?
Speaker 3
26:34
It's a little bit more nuanced than that, but essentially, yes. And like I said, we already have working systems of that in the lab, so really, everything in that video are things that are wholly within what we're building. So, you know, it's not like a vision video of 100 years from now; at least V1 is gonna be available within the next 2 years.
Speaker 2
26:54
So just for me to understand, I mean, when I'm thinking of the word play, is my brain making a specific brain wave for that specific word?
Speaker 3
27:05
So, think about, for example, you have an athlete, and the athlete thinks about throwing a football. Well, when they think about that, even though they're not throwing the football, the area of the brain that is associated with throwing a football is activating, and those muscles are activating.
Speaker 3
27:23
It's not happening at a visual level. You don't see the football player moving his arm. Just thinking about it activates those areas. And so whenever you think of a word like play, pause, or next, the same thing happens in the brain.
Speaker 3
27:37
And so we're able to pick up those signatures even though you're doing it silently. You're not saying anything out loud, audibly or visibly, but we're able to pick up those signals and then use them, essentially, as a source for how we control devices.
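To make the decoding step concrete, here is a deliberately toy Python sketch of a small-vocabulary silent-command classifier: featurize a window of signal into a handful of band powers, learn one centroid per command during calibration, then classify new windows by nearest centroid. The feature patterns are fabricated stand-ins; real silent-speech decoding is far harder than this, but the pipeline has roughly this shape.

```python
import numpy as np

rng = np.random.default_rng(3)
COMMANDS = ["play", "pause", "next"]

def features(command: str) -> np.ndarray:
    """Fabricated 4-band power pattern per silently-spoken command, plus noise.
    A real system would compute these features from the recorded window."""
    patterns = {"play":  [1.0, 0.2, 0.5, 0.1],
                "pause": [0.2, 1.0, 0.1, 0.5],
                "next":  [0.5, 0.1, 1.0, 0.2]}
    return np.array(patterns[command]) + rng.normal(0.0, 0.25, 4)

# Calibration: average feature vector (a centroid) per command.
centroids = {c: np.mean([features(c) for _ in range(50)], axis=0) for c in COMMANDS}

def decode(x: np.ndarray) -> str:
    """Nearest-centroid classification of an incoming feature window."""
    return min(COMMANDS, key=lambda c: float(np.linalg.norm(x - centroids[c])))

trials = 100
correct = sum(decode(features(c)) == c for c in COMMANDS for _ in range(trials))
print(f"accuracy on fabricated data: {correct / (trials * len(COMMANDS)):.2f}")
```

Starting with a few well-separated commands, rather than open vocabulary, is why play/pause/next is a plausible first shipping target while free-form thinking-to-text comes much later.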
Speaker 2
27:53
So, like, let me go back and sort of reverse engineer this. You know when you walk down the street and you see somebody talking really loud, and you now know that they're on a phone call, like they're wearing earbuds on a phone call? But even like 10 years ago, even 5 years ago, that was still jarring.
Speaker 2
28:11
You'd be like, wait, what are they doing? And then, oh, they're on a phone call. And now it's normal, totally normal, to see people walking down the street just talking to themselves. But if you took a human from, like, the 1950s and dropped them into some modern-day city and they saw people talking to themselves, they wouldn't understand what's going on.
Speaker 2
28:32
In 10 years from now, are we likely to see a version of that except in silence? Like people maybe having conversations with other people through their device, their earbuds or whatever it might be, but just in silence?
Speaker 3
28:50
I mean, I wouldn't say 10 years from now, right? I think that's gonna take longer. But what I would say is that within the next 10 years, people are going to be communicating with their technology silently. You know, I think that communicating via voice is still gonna be a more efficient method of communication with somebody, at least in the near term.
Speaker 3
29:09
But when it comes to, for example, let's say you're having a conversation with somebody and you get a notification, right? Being able to push it out of the way, or to reply to it real quick in a way that doesn't break the conversation, would be very valuable, right? Or let's say, for example, you were talking to somebody about a really great place that you went to eat, but you forgot the name, right? Having it pull up that information in a way that doesn't disrupt things, instead of, oh, hold on, let me pull out my phone, just wait a second while I figure everything out.
Speaker 3
29:40
So I think that we're going to be communicating with our technology seamlessly and invisibly, and that frees up some of our cognitive load so that we can continue to have these more engaged and connected conversations in the way that we traditionally do.
Speaker 2
29:56
I mean, the idea presumably is to initially sell the headphones, but tell me more about the broader vision, certainly around, you know, making this a profitable business.
Speaker 3
30:10
Yeah, definitely. You know, at least for us, the number one step is, how do we unlock all the potential of brain-computer interfaces for the world, right? And so the first step is we work with different companies, OEMs, some of the largest in the world are among the ones that we're working with, and we help them release Neurable-powered products.
Speaker 3
30:33
The first one is going to be a pair of headphones. Eventually it's going to be earbuds, AR glasses, helmets. We also work with a few groups that build helmets for, for example, pilots, or for individuals that are in high-risk environments. And so imagine being able to, as step one, track their mental health, track their fatigue, you know, to prevent accidents that could happen.
Speaker 3
31:01
And then longer term enabling them to control their technology much more easily, and then even longer term being able to make sure that key health markers can be caught ahead of time so that they're able to get care earlier and we're able to accelerate a lot of research.
Speaker 2
31:20
So the idea, long-term, is to have not just consumer products but enterprise products.
Speaker 3
31:27
Exactly, yeah, and we work across the realm. Like, a lot of people think that what we're doing is building headphones, but the reality of it is our technology can scale across any type of head-worn device. And so we partner with different head-worn device companies, and we help empower their devices to be compatible with our platform.
Speaker 3
31:47
And then that enables them to get access to this portfolio of use cases that can help their, you know, employees, that can help students, that can be used for medical applications, et cetera.
Speaker 2
31:58
Ramses, I'm not sure what science fiction book it was, and somebody listening will remember, but there's at least one book about how, in the future, it'll be possible to read our thoughts. Now, obviously, we're talking science fiction today, but if we think about where this technology is going, you can take that leap of faith and imagine that within my lifetime and yours, we might get to a place where our thoughts could be read. And it's amazing if we can achieve that as humans.
Speaker 2
32:28
But it's also really scary. Like, our brains, the stuff between our ears, it's one of the last private places left. You know, everyone's got a camera, there are drones everywhere, we talk on cell phones; the only place where we can really be private is in our heads. And that might go away.
Speaker 3
32:51
Yeah, I guess I'm a little bit less worried about that. The main reason is just that when you're talking about non-invasive, so non-surgical, measures, it's one of these things where we're just so far away from that level of detail that I'm less worried about it. And at the end of the day, you can just take it off, right? You can just take off your headphones or your earbuds.
Speaker 3
33:14
Where that really becomes more scary is invasive. Like, with invasive, we can get to that type of future, I agree. But at the same time, you know, I was actually at a panel with the DoD where they were asking us questions about this: hey, should we be worried about people leveraging brain-computer interfaces, at least the invasive kind, for all this kind of scary stuff, like controlling jets or something? And at the end of the day, it's like, one, everyone's more focused on how do we help that person with ALS communicate.
Speaker 3
33:47
Two, it's going to be way easier to fly a jet the way you fly it now for the next, like, 100 years. So let's not really worry about that right now. Let's just help that person with ALS. Let's help them at least reliably say yes, no, and that they love somebody, and let's get to that breakthrough.
Speaker 3
34:05
You know, I mean, obviously in the far, far future anything is possible, but I'm more of an optimist when it comes to where we're headed. I think we already have good enough ways to destroy one another. We don't need to do it in a more complicated and difficult way.
Speaker 2
34:19
You know, I just saw the film Oppenheimer, like probably millions of other people, and you just see the challenges that they were dealing with, and, you know, just how quantum physics developed from nothing into this incredibly powerful discipline in the course of a lifetime, half a lifetime. What is something that you're trying to figure out that kind of keeps you up at night, but that you're really excited about?
Speaker 3
34:45
Yeah, I mean, for me, there are two parts to that: what are some of the challenges, and what are some of the excitements? The challenge is, essentially, what we're doing right now is trying to validate as many of our assumptions as possible, trying to build things that are really sticky with customers, that add a lot of value to them, so that when the product comes out, you know, it is going to be successful.
Speaker 3
35:10
This is kind of like when the iPhone first came out. You know, we didn't really know how much impact it would have until it came out, right? For example, Uber wouldn't have existed without the iPhone. We wouldn't have GPS in a person's pocket, right? So there are going to be so many solutions that we don't even know people are going to start building with this.
Speaker 3
35:27
Once you have brain-computer interfaces as part of your everyday life, I'm really excited to see what others create from what we're building.
Speaker 2
35:37
Ramses, thanks so much.
Speaker 3
35:38
Yeah, thank you. This was an absolute pleasure. It was a pleasure meeting you as well.
Speaker 2
35:42
Yeah, nice meeting you. Good luck.
Speaker 3
35:43
Thank you.
Speaker 2
35:48
That's Ramses Alcaide, co-founder and CEO of Neurable. Hey, thanks so much for listening to How I Built This Lab. Please make sure to follow the show wherever you listen, on any podcast app.
Speaker 2
36:00
Usually there's just a follow button right at the top, so you don't miss any new episodes, and it is entirely free. If you want to contact our team, our email address is hibt at id.wondery.com. This episode of How I Built This Lab was produced by Ramel Wood and edited by John Isabella, with music by Ramtin Arablouei. Our production team at How I Built This includes Neva Grant, Casey Herman, JC Howard, Kerry Thompson, Alex Chung, Elaine Coates, Chris Messini, Carla Estevez, and Sam Paulson.
Speaker 2
36:31
I'm Guy Raz, and you've been listening to How I Built This Lab. Hey, Prime members, you can listen to How I Built This early and ad-free on Amazon Music. Download the Amazon Music app today, or you can listen early and ad-free with Wondery Plus in Apple Podcasts. If you want to show your support for our show, be sure to get your How I Built This merch and gear at wonderyshop.com.
Speaker 2
37:01
Before you go, tell us about yourself by completing a short survey at wondery.com slash survey.