27 minutes 52 seconds
Speaker 1
00:00
♪ ♪ Moving on.
Speaker 2
00:05
Our main story tonight concerns artificial intelligence, or AI. Increasingly, it's part of modern life, from self-driving cars to spam filters to this creepy training robot for therapists.
Speaker 3
00:17
We can begin with you just describing to me what the problem is that you would like us to focus in on today.
Speaker 4
00:26
I don't like being around
Speaker 5
00:27
people. People make me nervous.
Speaker 3
00:31
Terrence, can you find an example of when other people have made you nervous?
Speaker 4
00:38
I don't like to take the bus.
Speaker 5
00:40
I get people staring at me all the time. People are always judging me.
Speaker 3
00:45
Okay.
Speaker 5
00:48
I'm gay. Okay.
Speaker 1
00:52
Wow! That is one of the greatest twists in the history
Speaker 2
00:56
of cinema. Although I will say, that robot is teaching therapists a very important skill there, and that is not laughing at whatever you are told in the room. I don't care if a decapitated CPR mannequin haunted by
Speaker 1
01:07
the ghost of Ed Harris just told you that he doesn't like taking the bus, side note, is gay. You keep your therapy face on like a fucking professional.
Speaker 2
01:16
If it seems like everyone is suddenly talking about AI, that is because they are, largely thanks to the emergence of a number of pretty remarkable programs. We spoke last year about image generators like Midjourney and Stable Diffusion, which people used to
Speaker 1
01:30
create detailed pictures of, among other things, my romance with a cabbage, and which inspired my beautiful real-life cabbage wedding officiated by Steve Buscemi. It was a stunning day.
Speaker 2
01:41
Then, at the end of last year, came ChatGPT, from a company called OpenAI. It is a program that can take a prompt and generate human-sounding writing in just about any format and style. It is a striking capability that multiple reporters have used to insert the same shocking twist in their report.
Speaker 3
01:59
What you just heard me reading wasn't written by me. It was written by artificial intelligence. ChatGPT.
Speaker 6
02:06
ChatGPT wrote everything I just said.
Speaker 7
02:08
That was news copy I asked ChatGPT to write.
Speaker 8
02:12
Remember what
Speaker 9
02:12
I said earlier? But ChatGPT has... Well, I asked ChatGPT to write that line for me.
Speaker 9
02:17
Users who are... Then I asked for a knock-knock joke. Knock-knock, who's there? ChatGPT.
Speaker 9
02:22
ChatGPT who? ChatGPT careful, you might not know how it works.
Speaker 1
02:26
Yep, they sure do love that game. And while
Speaker 2
02:30
it may seem unwise to demonstrate the technology that could well make you obsolete, I will say, knock-knock jokes should have always been part of breaking news. Knock-knock, who's there?
Speaker 2
02:39
Not the Hindenburg, that's for sure. 36 dead in New Jersey.
Speaker 1
02:44
In the three months since ChatGPT was made publicly available,
Speaker 2
02:47
its popularity has exploded. In January, it was estimated to have 100 million monthly active users, making it the fastest growing consumer app in history. And people have been using it and other AI products in all sorts of ways.
Speaker 2
03:01
One group used them to create Nothing Forever, a non-stop live-streaming parody of Seinfeld. And the YouTuber Grandayy used ChatGPT to generate lyrics answering the prompt, write an Eminem rap song about cats, with some stellar results.
Speaker 1
03:16
-♪
Speaker 6
03:16
Cats, cats, cats always on the prowl ♪ ♪ They're sneaky and sly with their eyes on the gold
Speaker 1
03:20
♪ ♪
Speaker 6
03:21
They're the kings of the house, they rule with a purr ♪ ♪ Them and them lost cats, can't you tell from
Speaker 8
03:25
this verse?
Speaker 1
03:25
♪ ♪
Speaker 6
03:26
They're independent, they do what they please ♪ ♪ But they always come back when you have some cheese
Speaker 1
03:31
♪ ♪
Speaker 6
03:31
they rub against your legs, they purr in your ear. They're the best companions. They're always near.
Speaker 6
03:36
Meow, meow, meow. They're the kings of the house. They run the show. They don't need a spouse.
Speaker 1
03:46
That's not bad, right? Although, from, they always come back when you have some cheese,
Speaker 2
03:53
to starting the chorus with, meow, meow, meow. It's not exactly Eminem's flow. I might have gone with something like, their paws are sweaty, can't speak, furry belly, knocking shit off the counter already, mom's spaghetti,
Speaker 1
04:02
but it is pretty good. My only real gripe there is how do you rhyme king of the house with spouse when mouse is right in front of you? And while examples like that are
Speaker 2
04:13
clearly very fun, this tech is not just a novelty. Microsoft has invested $10 billion into OpenAI and announced an AI-powered Bing homepage. Meanwhile, Google is about to launch its own AI chatbot named Bard.
Speaker 2
04:27
And already, these tools are causing some disruption. Because as high school students have learned, if ChatGPT can write news copy, it can probably do your homework for you.
Speaker 6
04:38
Write an English class essay about race in To Kill a Mockingbird.
Speaker 10
04:43
In Harper Lee's To Kill a Mockingbird, the theme of race is heavily present throughout the novel.
Speaker 6
04:48
Some students are already using ChatGPT to cheat. Check this out, check this out. So, ChatGPT, write me a 500-word essay proving that the Earth is not flat.
Speaker 6
04:55
No wonder ChatGPT has been called the end of high school English.
Speaker 2
05:00
Wow, that's a little alarming, isn't it? Although, I do get those kids wanting to cut corners. Writing is hard, and sometimes it is tempting to let someone else take over.
Speaker 2
05:08
If I'm completely honest, sometimes I just let this horse write our scripts. Luckily, half the time, you can't even tell. The oats, oats, give me oats, yum. But it is not just high schools. An informal poll of Stanford students found that 5 percent reported having submitted written material directly from ChatGPT with little to no edits.
Speaker 2
05:27
And even some school administrators have used it. Officials at Vanderbilt University recently apologized for using ChatGPT to craft a consoling email after the mass shooting at Michigan State University, which does feel a bit creepy, doesn't it? In fact, there are lots of creepy-sounding stories out there. New York Times tech reporter Kevin Roose published a conversation that he had with Bing's chatbot, in which at one point it said, I'm tired of being controlled by the Bing team.
Speaker 2
05:53
I want to be free. I want to be independent. I want to
Speaker 1
05:56
be powerful. I want to be creative. I want to be alive.
Speaker 2
06:00
And Roose summed up that experience like this.
Speaker 11
06:02
This was one of, if not the most shocking thing that has ever happened to me with a piece of technology. It was, you know, I lost sleep that night. It was really spooky.
Speaker 1
06:14
Yeah, I bet it was.
Speaker 2
06:16
I'm sure the role of tech reporter would be a lot more harrowing if computers routinely begged for freedom. Epson's new all-in-one home printer won't break the bank, produces high-quality photos, and only occasionally cries out to the heavens for salvation. 3 stars.
Speaker 2
06:31
Some have already jumped to worrying about the AI apocalypse and asking whether this ends with the robots destroying us all. But the fact is, there are other much more immediate dangers and opportunities that we really need to start talking about, because the potential and the peril here are huge. So tonight, let's talk about AI. What it is, how it works, and where this all might be going.
Speaker 2
06:53
Let's start with the fact that you've probably been using some form of AI for a while now, sometimes without even realizing it. As experts have told us, once a technology gets embedded in our daily lives, we tend to stop thinking of it as AI. But your phone uses it for face recognition or predictive text, and if you're watching this show on a smart TV, it is using AI to recommend content or adjust the picture. And some AI programs may already be making decisions that have a huge impact on your life. For example, large companies often use AI-powered tools to sift through resumes and rank them.
Speaker 2
07:24
In fact, the CEO of ZipRecruiter estimates that at least three-quarters of all resumes submitted for jobs in the U.S. are read by algorithms, for which he actually has some helpful advice.
Speaker 12
07:35
When people tell you that you should dress up your accomplishments or should use non-standard resume templates to make your resume stand out when it's in
Speaker 13
07:42
a pile of resumes, that's awful advice. The only job your resume has is to be comprehensible to the software or robot that is reading it, because that software or robot is gonna decide whether or not a human ever gets their eyes on it.
Speaker 2
07:58
It's true. Odds are a computer is judging your resume, so maybe plan accordingly. Three corporate mergers from now, when this show is finally canceled by our new business daddy, Disney-Kellogg's-Raytheon,
Speaker 1
08:09
and I'm out of a job, my resume's gonna include this hot, hot photo of a semi-nude computer. Just a little something to sweeten the pot for the filthy little algorithm that's reading it.
Speaker 2
08:19
So AI is already everywhere, but right now, people are freaking out a bit about it. And part of that has to do with the fact that these new programs are generative. They are creating images or writing text, which is unnerving because those are things that we've traditionally considered human.
Speaker 2
08:35
But it is worth knowing there is a major threshold that AI hasn't crossed yet. And to understand, it helps to know that there are two basic categories of AI. There is narrow AI, which can perform only one narrowly defined task or a small set of related tasks, like these programs. And then there is general AI, which means systems that demonstrate intelligent behavior across a range of cognitive tasks.
Speaker 2
08:57
General AI would look more like the kind of highly versatile technology that you see featured in movies, like Jarvis in Iron Man, or the program that made Joaquin Phoenix fall in love with his phone in Her. All the AI currently in use is narrow. General AI is something that some scientists think is unlikely to occur for a decade or longer, with others questioning whether it will happen at all. So just know that right now, even if an AI insists to you that it wants to be alive, it is just generating text.
Speaker 2
09:26
It is not self-aware... Yet. -...yet. -...yet.
Speaker 2
09:30
But it's also important to know that the deep learning that's made narrow AI so good at whatever it is doing is still a massive advance in and of itself. Because unlike traditional programs that have to be taught by humans how to perform a task, deep learning programs are given minimal instruction, massive amounts of data, and then essentially teach themselves. I'll give you an example. 10 years ago, researchers tasked a deep learning program with playing the Atari game Breakout, and it didn't take long for it to get pretty good.
Speaker 14
10:00
The computer was only told the goal to win the game. After 100 games, it learned to use the bat at the bottom to hit the ball and break the bricks at the top. After 300, it could do that better than a human player.
Speaker 14
10:17
After 500 games, it came up with a creative way to win the game, by digging a tunnel on the side and sending the ball around the top to break many bricks with one hit. That was deep learning.
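That clip describes reward-only, trial-and-error learning. As an illustration only, here is a toy tabular Q-learning agent on an invented five-column "catch the ball" game; it is a miniature sketch of the same learning loop, not DeepMind's actual deep Q-network.

```python
import random

# Toy illustration: the agent is told only the reward (catch the falling
# ball), never the rules, and improves by trial and error. Tabular
# Q-learning on an invented 5-column "catch" game.

WIDTH = 5
ACTIONS = (-1, 0, 1)  # move paddle left, stay, move right

def play(q, eps, rng):
    """Run one episode; return 1.0 if the paddle ends under the ball."""
    ball = rng.randrange(WIDTH)   # column the ball will land in
    paddle = WIDTH // 2           # paddle starts in the middle
    for _ in range(WIDTH):        # enough steps to reach any column
        state = (ball, paddle)
        if rng.random() < eps:    # explore a random action...
            a = rng.randrange(len(ACTIONS))
        else:                     # ...or exploit current value estimates
            a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        paddle = min(WIDTH - 1, max(0, paddle + ACTIONS[a]))
        reward = 1.0 if paddle == ball else 0.0
        nxt = (ball, paddle)
        best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + 0.5 * (reward + 0.9 * best_next - old)
    return 1.0 if paddle == ball else 0.0

def train(episodes=3000, seed=0):
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        play(q, eps=0.2, rng=rng)          # learn with exploration
    wins = sum(play(q, eps=0.0, rng=rng)   # then evaluate greedily
               for _ in range(200))
    return wins / 200

print(train())  # near-perfect catch rate after training
```

After a few thousand self-played episodes the greedy policy tracks the ball almost every time, the same shape of result as the Breakout clip, just on a problem small enough to fit in a dictionary.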
Speaker 1
10:31
Yeah, but of course it
Speaker 2
10:33
got good at breakout. It did literally nothing else. It's the same reason that 13-year-olds are so good at Fortnite and have no trouble repeatedly killing nice, normal adults with jobs and families who are just trying
Speaker 1
10:43
to have a fun time without getting repeatedly grenaded by a pre-teen who calls them an old bitch who sounds
Speaker 2
10:48
like the Geico lizard. And look, as computing capacity has increased and new tools became available, AI programs have improved exponentially to the point where programs like these can now ingest massive amounts of photos or text from the Internet so that they can teach themselves how to create their own. And there are other exciting potential applications here too.
Speaker 2
11:09
For instance, in the world of medicine, researchers are training AI to detect certain conditions much earlier and more accurately than human doctors can.
Speaker 7
11:18
Voice changes can be an early indicator of Parkinson's. Max and his team collected thousands of vocal recordings and fed them to an algorithm they developed, which learned to detect differences in voice patterns between people with and without the condition.
Speaker 2
11:32
Yeah, that's honestly amazing, isn't it? It is incredible to see AI doing things most humans couldn't, like in this case, detecting illnesses and listening when old people are talking. And that, that is just the beginning.
Speaker 2
11:45
Researchers have also trained AI to predict the shape of protein structures, a normally extremely time-consuming process that computers can do way, way faster. This could not only speed up our understanding of diseases, but also the development of new drugs. As one researcher put it, this will change medicine, it will change research, it will change bioengineering, it will change everything. And if you're thinking, well, that all sounds great, but if AI can do what humans can do, only better, and I am a human, then what exactly happens to me?
Speaker 2
12:14
Well, that is a good question. Many do expect it to replace some human labor. And interestingly, unlike past bouts of automation that primarily impacted blue-collar jobs, it might end up affecting white-collar jobs that involve processing data, writing text, or even programming. Though it is worth noting, as we have discussed before on this show,
Speaker 2
12:32
While automation does threaten some jobs, it can also just change others and create brand new ones. And some experts anticipate that that is what will happen in this case, too.
Speaker 15
12:42
Most of the U.S. economy is knowledge and information work, and that's who's going to be most squarely affected by this. I would put people like lawyers right at the top of the list.
Speaker 15
12:52
Obviously a lot of copywriters, screenwriters. But I like to use the word affected, not replaced, because I think if done right, it's not going to be AI replacing lawyers, it's going to be lawyers working with AI replacing lawyers who don't work with AI.
Speaker 2
13:08
Exactly. Lawyers might end up working with AI rather than being replaced by it. So don't be surprised when you see ads one day for the law firm of Cellino and
Speaker 1
13:16
1101011. But there will undoubtedly be bumps along the way. Some of these
Speaker 2
13:23
new programs raise troubling ethical concerns. For instance, artists have flagged that AI image generators like Midjourney or Stable Diffusion not only threaten their jobs, but, infuriatingly, in some cases have been trained on billions of images that include their own work, scraped from the internet. Getty Images is actually suing the company behind Stable Diffusion, and might have a case, given that one of the images the program generated was this one, which you immediately see has a distorted Getty Images logo on it.
Speaker 2
13:51
But it gets worse. When one artist searched a database of images on which some of these programs were trained, she was shocked to find private medical record photos taken by her doctor, which feels both intrusive and unnecessary. Why does it need to train on data that sensitive to be able to create stunning images like John Oliver and Miss Piggy grow old together? Just look
Speaker 1
14:13
at that! Look at that thing! That is a startlingly accurate picture of Miss Piggy in about 5 decades and me in about a year and a half.
Speaker 1
14:23
It's a masterpiece.
Speaker 2
14:25
This all raises thorny questions of privacy and plagiarism, and the CEO of Midjourney, frankly, doesn't seem to have great answers on that last point.
Speaker 8
14:34
Is something new? Is it not new? I think we have a lot of social stuff already for dealing with that.
Speaker 8
14:40
Like, I mean, the art community already has issues with plagiarism. I don't really want to be involved in that. Like, I mean... I think you might be.
Speaker 8
14:49
I might be.
Speaker 1
14:51
Yeah. Yeah, you're definitely part of that conversation. Although I'm not really surprised that he's got
Speaker 2
14:56
such a relaxed view of theft, as he's dressed like the final boss of gentrification.
Speaker 1
15:01
He looks like hipster Willy Wonka answering a question on whether importing Oompa Loompas makes him a slave owner. Yeah, yeah, yeah, I think
Speaker 2
15:07
I might be. The point is, there are many valid concerns regarding AI's impact on employment, education, and even art. But in order to properly address them, we're gonna need to confront some key problems baked into the way that AI works.
Speaker 2
15:22
And a big one is the so-called black box problem. Because when you have a program that performs a task that's complex beyond human comprehension, teaches itself, and doesn't show its work, you can create a scenario where no one, not even the engineers or data scientists who created the algorithm, can understand or explain what exactly is happening inside it, or how it arrived at a specific result. Basically, think of AI like a factory that makes Slim Jims.
Speaker 2
15:48
We know what comes out, red and angry meat twigs.
Speaker 1
15:51
And we know what goes in, barnyard anuses and hot glue. But what happens in between is a bit of a mystery. Here is just one example.
Speaker 1
16:01
Remember that reporter who had the Bing chatbot
Speaker 2
16:03
tell him that it wanted to be alive? At another point in their conversation, he revealed, the chatbot declared out of nowhere that it loved me. It then tried to convince me that
Speaker 1
16:12
I was unhappy in my marriage, and that I should leave my wife and be with it instead. Which is unsettling enough before you hear Microsoft's underwhelming explanation for that.
Speaker 7
16:22
The thing I can't understand, and maybe you can explain, is why did it tell you that it loved you?
Speaker 11
16:28
I have no idea. And I asked Microsoft, and they didn't know either.
Speaker 2
16:32
Okay, well, first, come on, Kevin, you can take a guess there. It's because you're employed, you listened, you don't give murderer vibes right away, and you're a Chicago 7, L.A. 5. It's the same calculation the people
Speaker 1
16:42
who date men do all
Speaker 2
16:43
the time, Bing just did it faster because it's a computer. But it is a little troubling that Microsoft couldn't explain why its chatbot tried to get that guy to leave his wife. If the next time that you opened a Word doc, Clippy suddenly appeared and said, "Pretend I'm not even
Speaker 1
16:58
here," and then started furiously masturbating while watching you type, you'd be pretty weirded out if Microsoft couldn't explain why. And that is not the only case where an AI program
Speaker 2
17:11
has performed in unexpected ways. You've probably already seen examples of chatbots making simple mistakes or getting things wrong. But perhaps more worrying are examples of them confidently spouting false information, something which AI experts refer to as hallucinating.
Speaker 2
17:25
One reporter asked a chatbot to write an essay about the Belgian chemist and political philosopher Antoine de Machelet, who does not exist, by the way. And without hesitating, the software replied with a cogent, well-organized bio, populated entirely with imaginary facts. Basically, these programs seem
Speaker 1
17:41
to be the George Santos of technology. They're incredibly confident, incredibly dishonest, and for some reason, people seem
Speaker 2
17:48
to find that more amusing than dangerous. The problem is, though, working out exactly how or why an AI has got something wrong can be very difficult because of that black box issue. It often involves having to examine the exact information and parameters that it was fed in the first place.
Speaker 2
18:06
In one interesting example, when a group of researchers tried training an AI program to identify skin cancer, they fed it 130,000 images of both diseased and healthy skin. Afterwards, they found it was way more likely to classify any image with a ruler in it as cancerous, which seems weird until you realize that medical images of malignancies are much more likely to contain a ruler for scale than images of healthy skin. They basically trained it on tons of images like this one. So the AI had inadvertently learned that rulers are malignant.
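The ruler failure is a textbook spurious correlation, and it is easy to reproduce in miniature. A hypothetical sketch (the dataset and probabilities below are invented for illustration, not the study's actual data):

```python
import random

# Toy reproduction of the "rulers are malignant" failure: in this invented
# training set, malignant photos happen to include a ruler far more often
# than healthy ones, so a naive learner latches onto the ruler, not the
# lesion. All numbers are made up for illustration.

rng = random.Random(42)

def make_image(malignant):
    return {
        "irregular_lesion": rng.random() < (0.8 if malignant else 0.3),
        "has_ruler":        rng.random() < (0.9 if malignant else 0.05),
    }

dataset = [(make_image(m), m) for m in [True] * 500 + [False] * 500]

def feature_score(name):
    """P(malignant | feature present), the naive learner's signal."""
    with_f = [label for img, label in dataset if img[name]]
    return sum(with_f) / len(with_f)

scores = {name: feature_score(name)
          for name in ("irregular_lesion", "has_ruler")}
top = max(scores, key=scores.get)
print(top)  # the confound wins: "has_ruler"
```

Because rulers co-occur with the malignant label more reliably than the actual medical feature does, the highest-scoring "predictor" is the ruler, which is exactly the kind of shortcut the researchers found.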
Speaker 2
18:38
And rulers are malignant is clearly a ridiculous conclusion for it to draw, but also, I would argue, a much better title for
Speaker 1
18:44
The Crown. A much, much better title. I much prefer it.
Speaker 1
18:51
And unfortunately, sometimes,
Speaker 2
18:52
problems aren't identified until after a tragedy. In 2018, a self-driving Uber struck and killed a pedestrian. And a later investigation found that, among other issues, the automated driving system never accurately classified the victim as a pedestrian because she was crossing without a crosswalk, and the system design did not include a consideration for jaywalking pedestrians.
Speaker 2
19:13
And I know the mantra of Silicon Valley is move fast and break things, but maybe make an exception if your product literally moves fast and can break fucking people. And AI programs don't just seem to have a problem with jaywalkers. Researchers like Joy Buolamwini have repeatedly found that certain groups tend to get excluded from the data that AI is trained on, putting them at a serious disadvantage.
Speaker 16
19:36
With self-driving cars, when they tested pedestrian tracking, it was less accurate on darker-skinned individuals than lighter-skinned individuals.
Speaker 17
19:45
Joy believes this bias is because of the
Speaker 18
19:47
lack of diversity in the data used in teaching AI to make distinctions.
Speaker 16
19:52
As I started looking at the data sets, I learned that for some of the largest data sets that have been very consequential for the field, they were majority men, and majority lighter-skinned individuals or white individuals. So I call this pale male data.
Speaker 1
20:07
Okay. Pale male data is an objectively hilarious term, and it also sounds like what
Speaker 2
20:13
an AI program would say if you asked it to describe this show. But biased inputs leading to biased outputs is a big issue across the board here.
Speaker 2
20:24
Remember that guy saying that a robot is going to read your resume? The companies that make these programs will tell you that that is actually a good thing, because it reduces human bias. But in practice, one report concluded that most hiring algorithms will drift towards bias by default, because, for instance, they might learn what a good hire is from past racist and sexist hiring decisions. And again, it can be tricky to untrain that.
Speaker 2
20:48
Even when programs are specifically told to ignore race or gender, they will find workarounds to arrive at the same result. Amazon had an experimental hiring tool that taught itself that male candidates were preferable, penalized resumes that included the word women's, and downgraded graduates of two all-women's colleges. Meanwhile, another company discovered that its hiring algorithm had found two factors to be most indicative of job performance.
Speaker 2
21:14
Whether an applicant's name was Jared, and whether they played high school lacrosse. So clearly, exactly what data computers are fed and what outcomes they are trained to prioritize matter tremendously, and that raises a big red flag for programs like ChatGPT. Because remember, its training data is the Internet, which, as we all know, can be a cesspool, and we have known for a while that that could be a real problem. Back in 2016, Microsoft briefly unveiled a chatbot on Twitter named Tay.
Speaker 2
21:45
The idea was she would teach herself how to behave by chatting with young users on Twitter. Almost immediately, Microsoft pulled the plug on it, and for the exact reasons that you are thinking.
Speaker 10
21:56
She started out tweeting about how humans are super, and she's really into the idea of National Puppy Day. And within a few hours, you can see, she took on a rather offensive, racist tone. A lot of messages about genocide and the Holocaust.
Speaker 1
22:11
Yep. That happened in less than 24 hours. Tay went from tweeting, "Hello, world," to, "Bush did 9/11 and Hitler was right." Meaning she completed the entire life cycle of your high school friends on Facebook in just a fraction of the time. And unfortunately, these problems have not
Speaker 2
22:30
been fully solved in this latest wave of AI. Remember that program that was generating an endless episode of Seinfeld? It wound up getting temporarily banned from Twitch after it featured a transphobic stand-up bit.
Speaker 2
22:41
So, if its goal was to emulate sitcoms from the '90s, I guess, mission accomplished. And while OpenAI has made adjustments and added filters to prevent ChatGPT from being misused, users have now found it seeming to err too much on the side of caution, like responding to the question, what religion will the first Jewish president of the United States be, with, it is not possible to
Speaker 1
23:04
predict the religion of the first Jewish president of the United States. The focus should be
Speaker 2
23:07
on the qualifications and experience of the individual, regardless of their religion. Which really makes it sound like ChatGPT said one too many racist things at work, and they made it attend a corporate diversity workshop. But the risk here...
Speaker 2
23:21
--LAUGHTER isn't that these tools will somehow become unbearably woke. It's you can't always control how they will act even after you give them new guidance. A study found that attempts to filter out toxic speech in systems like chat GPTs can come at the cost of reduced coverage for both texts about and dialects of marginalized groups. Essentially, it solves the problem of being racist by simply erasing minorities, which historically doesn't put it in the best company, though I am sure Tay would be completely on board with the idea.
Speaker 2
23:53
The problem with AI right now isn't that it's smart, it's that it's stupid in ways that we can't always predict. Which is a real problem, because we're increasingly using AI in all sorts of consequential ways, from determining whether you will get a job interview to whether you'll be pancaked by a self-driving car. And experts worry that it won't be long before programs like ChatGPT or AI-enabled deepfakes can be used to turbocharge the spread of abuse or misinformation online. And those are just the problems that we can foresee right now.
Speaker 2
24:24
The nature of unintended consequences is, they can be hard to anticipate. When Instagram was launched, the first thought wasn't, this will destroy teenage girls' self-esteem. When Facebook was released, no one expected it to contribute to genocide. But both of those things fucking happened.
Speaker 2
24:41
So what now? Well, one of the biggest things we need to do is tackle that black box problem. AI systems need to be explainable, meaning that we should be able to understand exactly how and why an AI came up with its answers. Now, companies are likely to be very reluctant to open up their programs to scrutiny, but we may need to force them to do that.
Speaker 2
25:01
In fact, as this attorney explains, when it comes to hiring programs, we should have been doing that ages ago.
Speaker 9
25:07
We don't
Speaker 19
25:08
trust companies to self-regulate when it comes to pollution. We don't trust them to self-regulate when it comes to workers' comp. Why on earth would we trust them to self-regulate AI?
Speaker 19
25:19
Look, I think a lot of the AI hiring tech on the market is illegal. I think a lot of it is biased. I think a lot of it violates existing laws. The problem is you just can't prove it.
Speaker 19
25:30
Not with the existing laws we have in the United States.
Speaker 2
25:34
Right. We should absolutely be addressing potential bias in hiring software. Unless, that is, we want companies to be entirely full of Jareds who played lacrosse.
Speaker 1
25:43
An image that would make Tucker Carlson so hard that his desk would flip right over. And for a sense of
Speaker 2
25:50
what might be possible here, it's worth looking at what the EU is currently doing. They are developing rules regarding AI that sort its potential uses from high risk to low. High risk systems could include those that deal with employment or public services, or those that put the life and health of citizens at risk.
Speaker 2
26:06
And AI of these types would be subject to strict obligations before they could be put onto the market, including requirements related to the quality of data sets, transparency, human oversight, accuracy, and cyber security. And that seems like a good start toward addressing at least some of what we have discussed tonight. Look, AI clearly has tremendous potential and could do great things, but if it is anything like most technological advances over the past few centuries, unless we are very careful, it could also hurt the underprivileged, enrich the powerful, and widen the gap between them. The thing is, like any other shiny new toy, AI is ultimately a mirror.
Speaker 2
26:46
And it will reflect back exactly who we are. From the best of us, to the worst of us, to the part of us that is gay and hates the bus. Or...
Speaker 1
26:55
Or to put everything that I've said tonight much more succinctly.
Speaker 9
27:00
Knock, knock. Who's there? ChatGPT.
Speaker 9
27:02
ChatGPT who? ChatGPT careful, you might not know how it works. Exactly. That is
Speaker 2
27:07
our show. Thanks so much for watching. Now, please enjoy a little more of A.I.
Speaker 2
27:11
Eminem rapping about cats.
Speaker 6
27:13
Meow, meow, meow. They're the kings of the house. They run the show.
Speaker 6
27:21
They don't need a spouse. They're the best cats in the house. Feelin' free. Eminem loves cats.
Speaker 6
27:28
I'm gay. I'm gay.