2 hours 27 minutes 30 seconds
🇬🇧 English
Speaker 1
00:00
The following is a presentation of the AI Master Forum. Ladies and gentlemen, welcome to the AI Master Forum, hosted by the Yonglin Foundation and CommonWealth Magazine. Today, we have invited international AI masters to discuss with us how AI will shape the future of mankind. Today, we will stand on the shoulders of giants and feel the power of AI across every field.
Speaker 1
03:55
Let's move forward together towards the future. Ladies and gentlemen, we are going to stand on the shoulders of giants today. Together, we're going to explore the revolutionary power of AI and how the AI revolution is going to redefine our tomorrow. Next, I'm going to bring to the stage our distinguished guests and speakers.
Speaker 1
04:30
First, let's welcome Shirley Guo, CEO of the Yonglin Foundation, and our moderator today, Dr. Chi-Hung Lin, President of National Yang Ming Chiao Tung University.
Speaker 1
05:00
Let's welcome our two keynote speakers today, Dr. Andrew Ng and Dr. Jay Lee.
Speaker 1
05:12
It's a great honor to have the founder of the Yonglin Foundation and organizer of this forum, Mr. Terry Gou, and the father of ChatGPT, co-founder of OpenAI, Sam Altman. Please welcome Terry Gou and Sam Altman. First, let's look at the cameras in front of you. Let's have a photo together.
Speaker 1
06:00
And to your right. Next, to your right. And to your right. And to your right.
Speaker 1
06:07
And to your right. And to your left. And to your left. And to your left.
Speaker 1
06:17
Next, let's have Mr. Terry Gou lead the audience in a round of applause. Let's look at the camera in the middle. Thumbs up, everybody, for the second group photo.
Speaker 1
06:30
The camera for the photographer. Back, back. And in the back. And then to your right.
Speaker 1
06:44
And then to your left. And back to the middle for the last one. And our great speakers, please return to your seats.
Speaker 1
07:17
Now for the opening remarks, ladies and gentlemen, please welcome the host, Shirley Guo, CEO of the Yonglin Foundation. Welcome.
Speaker 2
07:38
Good afternoon, everyone. I am Shirley Guo, the CEO of the Yonglin Foundation. I'd like to welcome all of you here today.
Speaker 2
07:45
We have our speakers, invited guests, and everyone here today in person as well as online. Thank you so much for taking the time to join us today. The Yonglin Foundation was founded in the year 2000, and since then, we have always kept our eye on the future. We have worked tirelessly to push for fundamental change and long-term impact in places of need, here in Taiwan and around the world.
Speaker 2
08:13
In education, we launched the Yonglin School of Hope with the goal of equalizing basic education for underprivileged kids. We have teachers and social workers, and we created our own teaching materials and teacher training courses to support each and every one of our students. As of today, we have helped over 160,000 students, many of whom have gone on to bigger and better jobs in society. In philanthropy, we have stood on the front lines of disaster.
Speaker 2
08:46
In 1999, when the biggest earthquake in recent Taiwanese history collapsed over 50,000 houses, we were there to help rebuild. In 2009, one of the most disastrous typhoons wiped out entire towns, displacing hundreds of families. We were there to meet their needs. In 2021, at the height of the COVID-19 pandemic, the Yonglin Foundation and the Foxconn Group donated 5 million doses of the life-saving vaccine to help protect the people of Taiwan. In health, we have built a cancer hospital, the National Taiwan University Cancer Center.
Speaker 2
09:33
It is a state-of-the-art facility providing the most advanced treatment practices available internationally. The cancer center aims to offer precise treatment and personalized regimens to all cancer patients. Fast-forward to today: on the verge of the next technological revolution for mankind, we are here.
Speaker 2
09:56
We had the industrial revolution in the 1800s, the computer revolution in the 1900s and early 2000s. And now we are facing the artificial intelligence revolution. We understand the importance of AI to the future of the world, but most importantly, to the future of Taiwan. For this very reason, we are hosting this forum today that is absolutely free to everyone and anyone who wants this knowledge.
Speaker 2
10:25
We have over a thousand people here in this very room, and the entirety of this forum will be available via Yonglin's YouTube and Facebook channels for those who are not able to join us here live today. I was told that this event reached full capacity in just a matter of a few days, so thank you for that. But obviously, this is because of our esteemed speakers who are here today. I know that every one of our speakers here today is already a leader in their respective world and famous in their own right.
Speaker 2
10:57
So it is my great honor to give each and every one of them a proper introduction. Our first speaker today is Dr. Andrew Ng. Dr.
Speaker 2
11:07
Ng is Managing General Partner at AI Fund, CEO of Landing AI, founder of DeepLearning.AI, chairman and co-founder of Coursera, and an adjunct professor at Stanford University. He has also authored or co-authored over 200 research papers in machine learning, robotics, and related fields. He was the founding lead of the Google Brain team and chief scientist at Baidu. Dr.
Speaker 2
11:35
Ng now focuses his time primarily on his entrepreneurial ventures, looking for the best ways to accelerate responsible AI practices in the larger global economy. He holds degrees from Carnegie Mellon University, MIT, and the University of California, Berkeley. In 2023, he was named one of Time magazine's 100 most influential people in AI. So let's give a warm welcome to Dr.
Speaker 2
12:03
Andrew Ng.
Speaker 1
12:08
Thank you so much, Shirley Guo, CEO of the Yonglin Foundation. And ladies and gentlemen, the first keynote speaker today is Dr.
Speaker 1
12:19
Andrew Ng.
Speaker 3
12:37
Thank you, Shirley. Is this on? Can you hear me?
Speaker 3
12:41
Okay. Thank you, Shirley. It's good to see everyone here. What I want to do is share with you today what I see as some exciting opportunities in AI.
Speaker 3
12:53
So I've been saying for a few years that AI is the new electricity, and that's because one of the difficult things to understand about AI is that it's a general-purpose technology, meaning it's not useful for just one or two things. It's useful for a lot of different applications, like electricity. If I ask you what electricity is good for, it's almost hard to answer that question, because electricity is useful for so many different things. And like electricity about 100 years ago, I think AI is now going to revolutionize every sector of the economy.
Speaker 3
13:25
What I want to do is start off with a brief description of the technology landscape, and this will then lead into a discussion of what we can all do with AI. So there's a lot of hype and excitement about AI. And I think a good way to think about AI is as a collection of tools. The two most important tools in AI are supervised learning, which is very good at labeling things, and the new entrant, generative AI.
Speaker 3
13:54
If you study AI, you may have heard of some other tools as well, but today I want to focus on just the two most important, which are supervised learning, good at labeling things, and generative AI. So supervised learning is a technology that started to work really well about 10 or 15 years ago, where we could take inputs and label them with an output. Given input A, we could compute an output B. For example, given an email, we can label it with whether or not it is spam.
Speaker 3
14:24
The most lucrative application of this I've ever worked on is probably online advertising, where all the large ad platforms can input an ad and some information about a user, and label whether you're likely to click on the ad. And for a single company like Google, this drives more than 100 billion US dollars a year. I see we also have Appier here, a very successful Taiwanese MarTech company also making good use of this technology. In self-driving cars, we have AI that can input a picture of what's in front of the car and label it with the positions of the other cars, for driver assistance or self-driving. One of the projects one of my teams worked on: given a ship route, we can label it with how much fuel it is going to consume, and use that for logistics. My teams also do a lot of work in manufacturing, where we can take a picture of a smartphone that's just been manufactured and tell if there's a defect, for inspection.
Speaker 3
15:24
Or, given a restaurant, if you want to monitor its reputation, you can look at a restaurant review and label it as positive or negative sentiment, for reputation tracking. So one thing about AI, even supervised learning, is to notice how many different applications it is useful for. And when it started to work well 10 or 15 years ago, it was a lot of work for many of us to go and find and build these applications of supervised learning. So just to walk through one concrete example, this is what building a supervised learning labeling project looks like.
Speaker 3
15:58
If you want to build a restaurant reputation monitoring system, you start by getting data, pieces of text like "The oyster omelet here is amazing." You say that's positive sentiment. "Service was slow."
Speaker 3
16:13
That's negative. "My favorite place." That's positive. And so, given thousands of data points like this, you then have your labeled data. Then you can train your AI model and then find a cloud service to run it.
Speaker 3
16:28
And after that, you can input "Best bubble tea I've ever had," and it'll say that's positive sentiment. So this is the typical workflow of an AI supervised learning project. Now, I know there's a lot of excitement about generative AI.
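The label-train-deploy workflow just described can be sketched in a few lines. This toy bag-of-words scorer is only an illustration of the supervised pattern (labeled examples in, a model out, predictions on new text); real systems would use a trained neural network, and the example reviews are placeholders:

```python
# Toy illustration of the supervised-learning workflow from the talk:
# collect labeled examples, "train" a model, then run it on new input.
# Deliberately tiny; not a production classifier.

from collections import Counter


def train(labeled_examples):
    """Count how often each word appears in positive vs negative reviews."""
    weights = Counter()
    for text, label in labeled_examples:
        for word in text.lower().split():
            weights[word] += 1 if label == "positive" else -1
    return weights


def predict(weights, text):
    """Label new text by summing the learned word scores."""
    score = sum(weights[w] for w in text.lower().split())
    return "positive" if score >= 0 else "negative"


# Step 1: labeled data (hypothetical reviews).
data = [
    ("the oyster omelet is amazing", "positive"),
    ("service was slow", "negative"),
    ("my favorite place", "positive"),
    ("slow and cold food", "negative"),
]

# Step 2: train. Step 3: run on new input.
model = train(data)
print(predict(model, "amazing food, my favorite"))  # → positive
```

The point is the shape of the pipeline, not the model: swapping the word counter for a neural network keeps the same three steps of labeling, training, and deploying.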
Speaker 3
16:43
And I think that generative AI was set up for success by some of the early progress in supervised learning. In fact, I think the last decade was the decade of large-scale supervised learning. Starting about 10 or 15 years ago, we found that if we built a very small AI model, a very small neural network, then even as we fed it more and more data, it would not get much better.
Speaker 3
17:08
But when I started the Google Brain team, the primary mission I set for the team was: let's just build really, really large neural networks with Google's compute infrastructure. And we found that as we scaled to very, very large neural networks, we could feed in more and more data and just keep on getting better. I know Jensen Huang and Lisa Su were in town just recently, and I feel like GPU hardware has powered a lot of this rise of AI in the last decade. But if that was the last decade, I think this decade is turning out to be the decade of generative AI, which adds to, doesn't replace, the tools we have in supervised learning. We now have tools that can input a piece of text called a prompt, and if you run it once, it may say mango shaved ice, mother's braised pork rice, or tofu.
Speaker 3
18:00
And I think that when Sam and his team released ChatGPT in November, less than a year ago, it was almost a magical moment. But I want to take just a minute to demystify how the heart of large language models works. It seems so magical; how does it actually work? It turns out that the heart of generative AI, of these large language models, is that they're built by repeatedly predicting the next word, using supervised learning.
Speaker 3
18:26
So for example, if an AI has read a lot of text on the internet, like the sentence "My favorite food is tomato scrambled eggs" (I actually really like that), then it can use this as data and ask: if you see "My favorite food is," predict the next word. In this case, it is "tomato." Or if it sees "My favorite food is tomato," what's the next word?
Speaker 3
18:48
"Scrambled," and so on. If it sees "My favorite food is tomato scrambled," what's the next word? It's "eggs." And it turns out that if you train a very large AI system on a very large number of words, hundreds of billions of words, or sometimes more than a trillion words, then you get a large language model like ChatGPT or Google Bard or Bing Chat or others.
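The next-word setup described above can be made concrete: every prefix of a training sentence becomes an input, and the word that follows becomes the label. A minimal sketch, using the sentence from the talk:

```python
# Build (context, next-word) training pairs from raw text, as described
# in the talk. A real model does this over hundreds of billions of words;
# here we just show the pairs for one sentence.

def next_word_pairs(sentence: str):
    """Return every (prefix, following word) pair in the sentence."""
    words = sentence.split()
    return [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]


pairs = next_word_pairs("my favorite food is tomato scrambled eggs")
for context, target in pairs:
    print(f"{context!r} -> {target!r}")
```

Each pair is one supervised-learning example; training a large neural network to predict the right-hand word from the left-hand context, at scale, is what yields a large language model.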
Speaker 3
19:11
And I think that while many of you are familiar with tools like ChatGPT as a consumer tool, and I think it's a fantastic consumer tool, there's one other revolution, which is how generative AI is changing AI application development. And I think I'm as excited, even more excited, about generative AI as a developer tool than as a consumer tool. But let me show you what I mean.
Speaker 3
19:38
So if I wanted to build a restaurant sentiment review system with supervised learning, this is what I would do. I would get labeled data; maybe that takes a month. Train an AI model; maybe that takes 3 months.
Speaker 3
19:48
Deploy it in the cloud; maybe that takes me 3 months. So a typical realistic timeline for building a production commercial AI system is maybe 6 to 12 months. And many very good teams I worked with took about that long to build a system. But in contrast, with prompting, this is what the workflow looks like.
Speaker 3
20:05
You can specify a prompt; this takes minutes, maybe hours. And then you can deploy the model in maybe hours or days. What this means is that there are a lot of AI applications that used to take me 6 months to build.
Speaker 3
20:20
Today, hundreds of thousands of people, maybe millions of people, can do in one week what used to take me 6 months to do. And this is leading to a flourishing of AI applications. Now, I know you were not expecting me to write code in this presentation, but that's what I'm going to do. So Sam's here.
Speaker 3
20:40
I'm going to use OpenAI tools. But this is all the code it takes to write a sentiment classifier. I'm going to import some tools from OpenAI and load my API key. Then: "I really enjoyed the Yonglin Foundation meeting."
Speaker 3
20:59
"I came away feeling..." I don't know. I've never run this before, so I really hope it works. All right, so that's the prompt.
Speaker 3
21:08
So it says: classify the text below the three dashes as having either positive or negative sentiment. So now we're actually going to run it. And... oh, thank goodness, that worked. But this is all the code it takes in order to build a sentiment classifier.
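The exact code shown on stage isn't captured in the transcript, but a minimal sketch of the demo might look like the following. The prompt format (three dashes as a delimiter) follows the talk; the model name and client calls are assumptions based on OpenAI's published Python library, not the original code:

```python
# Sketch of the on-stage demo: a few-line sentiment classifier built by
# prompting a large language model instead of training one.

def build_prompt(text: str) -> str:
    """Wrap the input text in the classification instruction from the talk."""
    return (
        "Classify the text below the three dashes as having "
        "either a positive or negative sentiment.\n"
        "---\n"
        f"{text}"
    )


def classify_sentiment(text: str) -> str:
    """Send the prompt to an LLM. Requires OPENAI_API_KEY to be set.
    Model choice and client usage are assumptions, not the stage demo."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_prompt(text)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(build_prompt("I really enjoyed the Yonglin Foundation meeting."))
```

Compared with the supervised workflow earlier, all the project-specific work has collapsed into the prompt string; there is no labeled dataset and no training step.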
Speaker 3
21:26
And this is revolutionary, that so many people can now, with just a few lines of code, build and deploy a pretty good AI system. One of the things I've been working on over the last many months is working with many different industry partners to teach generative AI as a developer tool. And I've been privileged to work with many of the industry experts: OpenAI, AWS, LangChain, Lamini, Hugging Face, Google, Microsoft, Weights & Biases, Cohere, almost all the leading generative AI companies.
Speaker 3
21:57
I think we're probably working with all of them at this point, including many that have not yet been announced. And I also want to mention that key to our developer education efforts is our Taiwan engineering team. So DeepLearning.AI, my team, is working on these generative AI applications. Our engineering team is actually here in Taiwan, and we couldn't have done it without them.
Speaker 3
22:19
And I hope this will be our little contribution to the Taiwan engineering ecosystem as well, in the dissemination of knowledge. Now, given this technology landscape, let me share a little bit about where I think some significant AI opportunities are. The size of these circles shows what I think is the value of different AI technologies today, in terms of value generated from applications, compared to what the value will grow to 3 years from now. Supervised learning: really massive.
Speaker 3
22:51
For a single company like Google, it's worth more than $100 billion a year, and with millions of developers working on it, I think over the next 3 years, which is a very short time horizon, it may double, going from massive to maybe even more massive. Generative AI: very exciting, but small right now. But I think it will much more than double over the next 3 years. Sometimes people see this slide and think I'm not excited about generative AI.
Speaker 3
23:14
No, I'm very excited about generative AI. I think it will just take time to figure out and build the applications. And 3 years is a very short time. If it continues to grow at this rate, then in 6 years, maybe it'll be even vastly bigger.
Speaker 3
23:27
But the light-shaded region is where I think there are a lot of opportunities for developers, big companies, startups, everyone, to identify and build exciting applications. And one thing I hope you take away from this part of the presentation is that AI technologies are general-purpose technologies. Supervised learning and generative AI, they're both general-purpose technologies. And this means they're useful for many different tasks, and many applications remain to be identified and built.
Speaker 3
23:58
But there's one caveat, which is that there will be fads along the way. I'm not sure how many of you remember the app Lensa. This was the app where you could upload 10 or 15 pictures of yourself, and it would draw a cool picture of you as a scientist or an astronaut.
Speaker 3
24:16
And so it was a really good product. Its revenues took off like that through last December, and then they did that. And I think this is because Lensa was the first of, unfortunately, what may be multiple fads. It was a good idea, but it wasn't a long-term, sustainable business.
Speaker 3
24:32
When I think about Lensa, I'm reminded of when Steve Jobs, as well as Terry, gave us this. And there were fads then, too. Developers built a $1.99 app to do this: to turn on the flashlight. And it was a good idea, but this too was a fad that wasn't a long-term sustainable business.
Speaker 3
24:56
But on the flip side, the iPhone also allowed other smart entrepreneurs to build long-term sustainable businesses like Uber and Airbnb and Tinder. And I think on top of generative AI platforms, viewing this as a developer tool, we have opportunities to do that too, and that's what I'm excited about. So the first trend is generative AI as a... oh, sorry, AI as a general-purpose technology.
Speaker 3
25:18
There's just one more trend I want to share with you, and I only share this because I think it will impact many industries that Taiwan is strong in. It addresses why AI isn't more widespread yet. A bunch of us have been talking about AI for 10, 15 years now, but its value is still very concentrated in consumer internet. So why is that? If you were to take all current and potential AI projects and sort them in decreasing order of value, you get a curve that looks like this, where to the left are the most valuable projects, like online advertising and web search, where you can write one piece of AI software, if you're Google, say, apply it to a billion users, and generate massive economic value.
Speaker 3
26:02
So about 10 or 15 years ago, my friends and I figured out a recipe for how to build projects like that: hire 100 engineers, have them all work on one project, apply it to 100 million or a billion users, and this turns out to be very valuable. But once you go outside the internet sector, almost no one has 100 million users that you can apply one AI system to. Instead, these are the types of projects I've been working on.
Speaker 3
26:28
We're working with a factory that makes pizza, and we're taking pictures of the pizza; the factory needed to take pictures of the pizza to make sure that the cheese is evenly spread. This turns out to be about a $5 million project for the factory, but you can't hire 100 engineers to work on a $5 million project. Or agriculture. I know there's a lot of agriculture in Taiwan, obviously, but we're working with a company that makes agricultural machinery, and we figured out that if we can find out how tall the wheat is with cameras, with computer vision, and chop off the wheat at the right height, then we get more food for the farmer to sell, and it's better for the environment. But this is another $5 million project that the old recipe is not economical for.
Speaker 3
27:14
Or inspection, material grading: a lot of these $5 million projects. So to the left of this curve is a small number of billion-dollar projects and $100 million projects; we know how to do those. In other sectors, we see tens of thousands of these $5 million projects, and the struggle has been: how do we get all of these tens of thousands of projects economically built? Fortunately, the trend in AI has been toward better tools, specifically low-code, sometimes no-code, tools that enable the user to do the customization.
Speaker 3
27:47
So what this means is instead of me having to do work to figure out what to do with pictures of pizza, now the IT department of the pizza factory can build an AI system using their own images and realize that $5 million worth of value. And by the way, these pictures of pizza, they don't exist on the internet. They're proprietary to the factory. And so Google and Bing do not have access to these pictures either.
Speaker 3
28:11
And the key technology to do this is prompting, as well as a technology called data-centric AI, in which you can let, say, the IT department of the factory write prompts and provide data, and this lowers the barrier. And I'm sharing this because I think this is an important part of the recipe for taking a lot of the value of AI, which has so far been concentrated in consumer software internet, and pushing it to the rest of the economy, which is even bigger than the tech industry. So, where are the opportunities? I felt 5 years ago that many valuable opportunities were possible in AI because of supervised learning, but I feel even more strongly about it now because of generative AI.
Speaker 3
28:55
And the puzzle I wanted to solve 5 years ago was: how do I get many of these projects built? And even though I had led AI teams in big companies like Google and Baidu, I couldn't imagine how I could operate a team inside a big tech company to go after the very diverse set of opportunities that AI now makes possible. I think starting new companies is an efficient way to do this, which is why I started AI Fund, which is a venture studio. But of course, for incumbent companies, there are also many opportunities to integrate AI into existing workflows.
Speaker 3
29:28
And often incumbent companies have a huge distribution advantage. But concretely, where are the opportunities? This is what I think of as the AI stack. At the lowest level is the hardware layer.
Speaker 3
29:40
And I know Jensen Huang and Lisa Su were just in town. I think the semiconductor industry is very capital-intensive, very concentrated. Clearly, TSMC and UMC are doing very well. I personally don't play there because of how capital-intensive and how concentrated it is.
Speaker 3
29:58
But clearly, in Taiwan also, building servers is a great business. On top of that is the cloud infrastructure, also very capital-intensive, very concentrated. So I tend not to play there myself. And then there are the developer tools, including OpenAI, the API part of OpenAI.
Speaker 3
30:20
I see the developer tool market as potentially having some huge winners, but also being hyper-competitive. In fact, look at all the startups chasing OpenAI. So I personally tend to play here only when I have a significant technology advantage, because I think that gives you the right to have a shot at building a sustainable platform, to be one of the huge winners. And even though a lot of the media attention is on the infrastructure and tooling layer, we saw this with every wave of technology.
Speaker 3
30:48
When SaaS came up, all the attention was on the technology layer. It turns out that for the infrastructure and tooling layer to be successful, there's one other layer that needs to be even more successful, and that's the application layer. Because for the tools and infrastructure to be successful, the applications have to generate even more revenue to pay them, to make all the economics work. So maybe one example: I was talking the other day, on the upper right, with the CEO of Amorai, which is a company that applies AI to romantic relationship coaching.
Speaker 3
31:22
And, you know, what do I... I feel like I'm an AI guy. I know nothing about romance, right? And if you don't believe me, you can ask my wife, and she will confirm that I know nothing about romance.
Speaker 3
31:37
But when my team at AI Fund decided we wanted to do something applying AI to romance, we wound up partnering with the former CEO of Tinder, Renate Nyborg, who knows much more in a systematic way about relationships than anyone else I know. And with my team supplying the AI expertise and her supplying the relationship expertise, we were able to build something very unique. And at the application layer, I seem to keep on finding more and more opportunities where it looks like there's a huge market need, but the intensity of the competition is quite low.
Speaker 3
32:18
I'm going to share with you what this process is, because I think there will be many great startups to be built, hopefully here in Taiwan as well. But we always start off with the idea. Often, partners suggest ideas to us. And I want to share one case study: one of our investors, Mitsui, operates a line of ships.
Speaker 3
32:37
And so they came and suggested to me, they said, hey, Andrew, you should use AI to make shipping more fuel efficient. And this is one of those ideas that I would never have come up with myself, because, sure, I've been on a boat, but what do I know about global maritime shipping? So they suggested the idea to us.
Speaker 3
32:54
And in a typical process, we spend up to a month validating the idea for market need and technical feasibility. And if it passes that, we then recruit a CEO. We recruited Dylan Keil, a fantastic CEO with one exit before, to work with us to build the company. We then spent 3 months doing deep customer validation and building a technical prototype. The company survives this stage about two-thirds of the time.
Speaker 3
33:18
We then fund it. That gives it capital to build an executive team and get real customers, and it is then in a position to go and raise additional capital and be well on its way. So today, Bearing is steering many hundreds of ships on the high seas, like Google Maps for ships, where the AI software tells a ship where to go to get to the destination on time and with about 10% less fuel, thus saving about half a million dollars per ship and also reducing carbon emissions. And this startup would not have existed if Mitsui, which knew this domain, hadn't suggested it to us.
Speaker 3
33:54
And I've learned that my swim lane is AI. I don't try to be an expert in relationships and shipping and everything else. I try to be an expert in AI and instead work with partners in all of these different application sectors to find exciting opportunities to go after. And so while I'm in Taiwan, I'll be here for a few days.
Speaker 3
34:11
Hopefully there will be opportunities to explore locally meaningful ideas for Taiwan as well. I have just one last slide now to wrap up, which touches on risk and social impact. So there are many risks of AI. I think the biggest risks of AI are bias, fairness, and accuracy.
Speaker 3
34:27
But AI technology is rapidly improving and becoming much safer. And I think AI will disrupt jobs as well, even as it creates a lot of value. And I think we have an obligation to take care of the people whose jobs are disrupted. And there is excitement about AGI, artificial general intelligence. The most widely accepted definition of AGI is AI that can do any intellectual task that any human can.
Speaker 3
34:53
And I think AGI is still many decades away. There are different opinions on that. But for AI to be able to do any intellectual task a human can, I think that's still quite far away, but I hope we will get there.
Speaker 3
35:06
And then lastly, I know that even over the last couple of days, even today in Taiwan, some people have been asking me about extinction risk. And I think AI leading to human extinction risk is very overhyped. I hope that maybe in our lifetimes, there'll be AI smarter than any of us. It's already smarter than any of us in some ways, but humanity has lots of experience controlling things far more powerful than ourselves, like corporations and nation-states.
Speaker 3
35:30
We manage to make those mostly okay. So I don't really doubt our ability to control AI. And lastly, when it comes to the real existential risks to humanity, things like the next pandemic (fingers crossed, hopefully not), or climate change leading to massive depopulation of parts of the planet, or, lower chance, another asteroid doing to us what one did to the dinosaurs, wiping them out, I think that if we look at the real existential risks to humanity, AI would be a key part of the solution.
Speaker 3
36:01
So rather than trying to make AI go slower, if we want humanity to survive and thrive for the next thousand years, I would rather make AI go as fast as possible. So with that, let me say thank you all very much.
Speaker 1
36:15
Thank you so much, Dr. Andrew Ng, for your very informative and comprehensive talk. Thank you so much.
Speaker 1
36:26
Thank you, Dr. Ng, for your wonderful presentation. Andrew, thank you so much for taking us on a deep dive into the opportunities in AI. Before we move on to the fireside chat, I'd like to invite the photographer to take a group photo of all the guests of the AI Master Forum.
Speaker 1
36:49
And before we move on to the fireside chat, we're going to have our photographers take a photo of all of you who joined us for the AI Master Forum today. Okay, everybody, please look at our photographer on the stage. Please look at the camera on the stage. Okay, everybody a big smile.
Speaker 1
37:10
3, 2, 1. All right, let's do one more. Okay, 3, 2, 1. You can give a thumbs up or do whatever pose you like.
Speaker 1
37:28
Okay? 3, 2, 1. Please give yourselves a big round of applause. And we thank you so much for your participation today.
Speaker 1
37:44
Thank you. All right. Next is the fireside chat we've all been waiting for. To introduce our moderator and speakers, please welcome Shirley Goh, CEO of the Yonglin Foundation again.
Speaker 1
37:57
Next, let's welcome Shirley, the CEO of the Yonglin Foundation, to introduce the moderator and speakers of the fireside chat.
Speaker 2
38:05
For the much-anticipated fireside chat, we're honored to have Dr. Chi-Hung Lin as our moderator. Dr.
Speaker 2
38:11
Lin currently serves as the president of National Yang Ming Chiao Tung University. He has a PhD in cell biology from Yale University, his research interests focus on cancer biology, and he is an expert in cell and tissue microscopy. In 2010, he was appointed Health Commissioner of the Taipei City Government, and subsequently of the New Taipei City Government, giving him 8 years of experience in public health policies and their implementation. Please put your hands together and welcome Dr.
Speaker 2
38:42
Chi-Hung Lin to the stage. Also joining the fireside chat is Mr. Mark Chen. Mr.
Speaker 2
38:53
Chen is the head of frontiers research at OpenAI, where he focuses on multimodal and reasoning research. He led the team that created DALL·E 2 and the team that incorporated visual perception into GPT-4. Prior to joining OpenAI, Mark worked as a quantitative trader at several proprietary trading firms, including Jane Street Capital, where he built machine learning algorithms for equities and futures trading. His parents are Taiwanese; he spent his high school years in Taiwan, and he later graduated from MIT with a bachelor's degree in mathematics and computer science.
Speaker 2
39:38
So, give a round of applause to one of our very own, a favorite son of Taiwan, Mr. Mark Chen. And our main speaker for the fireside chat is, I'm sure, no stranger to anyone here. Mr.
Speaker 2
39:56
Sam Altman is the CEO of OpenAI and Chairman of Helion Energy. He has become known as the man behind the AI revolution and a legend in Silicon Valley, despite his young 38 years of age. His vision is to make the cost of energy and intelligence fall to near zero as a way to create abundance accessible to all. He's also the chairman of Worldcoin, a global UBI experiment, and Retro, a partial-reprogramming longevity company.
Speaker 2
40:30
He was formerly the president of Y Combinator, where he expanded the scope of the program and mentored thousands of founders of many of the most successful startups of the last decade. As you can see, his influence on the world is no joke. So let's give a round of applause to the favorite son of the world, Mr. Sam Altman.
Speaker 1
40:52
Thank you, Shirley. And now over to our moderator, Professor Lin. Next, we hand the time over to our moderator, President Lin Chi-Hung.
Speaker 4
41:00
Okay, Mark and Sam, welcome to Taiwan. Thank you. I know that you're just swinging by Taiwan and staying for less than a day.
Speaker 4
41:12
Thank you very much.
Speaker 5
41:13
Very happy to be here.
Speaker 4
41:15
So did the previous talk feel like being back in the classroom again?
Speaker 5
41:20
Yeah, Andrew was my professor when I was an undergrad, so it does bring me back.
Speaker 4
41:25
Okay, great. So in the next 25 minutes, I think I will go through the topics that the audience, both in this room and online, will be very interested in: what are your keys to success, your areas to implement and apply, ways to sustain, and things to worry about. So maybe we should follow that order. But before I begin, would you like to say a few words to the audience? Just say hello, and tell us how many times you have been to Taiwan.
Speaker 5
42:08
Sure. Hello. This is my third time in Taiwan, and I wish I had a longer visit. This is a very short one, but it's been a great day so far.
Speaker 5
42:23
I think the AI revolution is going to be a massively transformative and positively transformative thing. Humans, through every technological revolution, have always worried about what the jobs are gonna be like and what we're all gonna have to do. And it turns out that human desire for new things and creativity is pretty limitless. And so we're gonna find wonderful things to do.
Speaker 5
42:48
And what we're seeing with ChatGPT is people just using these AI tools to do more and do better; they can take their ideas and bring them to life even faster. It very much is an amplifier of human ingenuity and will. And although it's always a little scary when the ground shakes, where we're headed is probably much better than any of us can imagine. We all have tons to do, and we'll look back at this time as a sort of very barbaric age and won't believe how much better it's gotten.
Speaker 5
43:22
I think the rate of change will be fast. I don't think AGI is decades away, although it mostly depends on how fast Mark's team makes progress. But I think it'll be pretty quick. And you know each year more amazing things will be possible that weren't possible the year before.
Speaker 5
43:38
And that's also true even if we look backwards: most of the world, I think, didn't predict ChatGPT a year or three years ago. And I think that will remain true. But I think we're on to something pretty deep and fundamental, and it'll be great. Sure.
Speaker 6
43:54
Hey everyone, I'm Mark. It's nice to be back in Taiwan. I haven't been here in 4 or 5 years, but it still feels like home, so glad to be back.
Speaker 6
44:04
Like Sam said, AI is bringing about a lot of change, but maybe 1 unifying characteristic of people who work at OpenAI is they're more optimistic than pessimistic about the future of AI. And while we care about safety a lot, we really think it will bring about a better future.
Speaker 4
44:20
Okay, thank you. So first, keys to success. I'll let you say whatever you want to say. But I realize that you're just in the middle of a world trip.
Speaker 4
44:35
So what's your main takeaway from this trip, and what did you expect? Is there any critical feedback that you want to share with us?
Speaker 5
44:47
Yes. So Mark and I traveled together earlier this year on this crazy trip around the world, which was incredibly gratifying, tiring, but really amazing: to see, throughout many countries, six continents, many, many cities, the enthusiasm about AI, the belief that the future is going to be much better, and, more than that, what people are already doing today. GPT-4 and ChatGPT as they are today, when we look back at them, we'll already be embarrassed by them. Everybody's gonna be embarrassed.
Speaker 5
45:18
You know when you got that first iPhone, you thought it was really cool, but now if you look back at it, those pixels are each this big and just didn't hold up that well. But it still pointed to something amazing, and now we have these super computers in our pockets. Even in that iPhone 1 era that we're in now, seeing the creativity of people around the world, the enthusiasm for where it's gonna go, the businesses people are building, the way they're transforming their workflows, that was all amazing. It was interesting to see how different things are in different cultures and different ways, but the enthusiasm was pretty remarkable.
Speaker 6
45:53
Yeah, not too much to add to that. You know, we just went from place to place, and developers were coming to us with a wide diversity of apps. And I think they're really far ahead of the curve from what we had expected.
Speaker 4
46:05
Yeah. Good to know. So it seems that opportunities are everywhere, right? Based on whatever infrastructure and platform you have built. So that moves to my next question, which is the areas to apply. And also, I think very importantly, if it is a platform, like the electricity that Professor Ng mentioned, specifically which areas do you think will flourish in the next decade? And I know that OpenAI started as a non-profit, but now you are a profit-making company, just a regular company.
Speaker 4
46:53
So how do you position yourselves between these two roles, please?
Speaker 5
46:59
You want to do new areas and I'll do companies?
Speaker 6
47:01
Sure, yeah. Which direction do we want to take OpenAI?
Speaker 5
47:04
Or kind of like what areas are we most excited about for people to build on?
Speaker 6
47:08
Yeah, yeah, so I mean, I have to plug the areas that I personally research. So I focus on multimodal and reasoning research. And I really think that these are the 2 most promising directions going forward.
Speaker 6
47:19
First with multimodal, right? We want our models to access what we see, what we hear, and to be able to generate novel content, you know, that we couldn't generate ourselves. And I think this will really unlock a bunch of new interfaces for the model. And on reasoning, you know, today, the model still makes silly mistakes here and there, right?
Speaker 6
47:36
And we really want to get to the point where the model doesn't make these mistakes anymore, so that it can be truly assistive and robust, and can really be a kind of sparring partner for ideas.
Speaker 5
47:47
Yeah, I just wanna underscore how big of a deal it is if we get reasoning to work. You know, GPT-4 can reason the tiniest bit some of the time, but it clearly is not doing what we do, even like what a child can do. And GPT-4, although it can do a very good job mimicking what humans have done, doesn't feel like it's gonna go invent new science.
Speaker 5
48:14
And if we can teach these models to reason, then I think we can contribute to the sum total of human knowledge. And that's when the world will really transform. If we can make more scientific progress in 1 year with these tools than we could in 100 without them, the quality of life will go up for all of us so much. Some areas that I'm particularly excited about are education, healthcare, personal productivity tools like what we see people do using these coding tools.
Speaker 5
48:42
That's been amazing. I think those will all be great. And then on the company structure side: so, we started as a non-profit. We knew that training these systems was going to be expensive, but we had no idea how expensive.
Speaker 5
48:53
And when it became clear to us that we were eventually going to have to spend hundreds of billions of dollars on building out infrastructure if we wanted to get to our goal, we knew that the non-profit structure we started with wasn't quite going to work. But the reasons we started with the non-profit structure were still super important. We wanted to get the incentives right. We wanted a way to return the benefits and the governance of this technology to society as a whole.
Speaker 5
49:18
It wouldn't make sense for one company to own electricity. If we were successful enough to invent AGI, it wouldn't make sense for one company, I think, to control that either. And so we came up with this idea of a capped profit. We have a non-profit that governs us.
Speaker 5
49:33
We have a capped-profit entity to still let capitalism do its magic for our investors and our team. But then, beyond that cap, the non-profit gets the excess.
Speaker 4
49:41
Yeah, okay. Now, moving to my third question: what are the ways to sustain this? I know that the computation costs a lot, so people have been worrying about the energy consumption. And I also read that one item on your wish list, of what you want to do next, is to reduce the cost of energy and intelligence to near zero.
Speaker 4
50:15
So, you know, what's the way to make this kind of AI and its future development sustainable, both from the energy perspective and from the public's perception?
Speaker 5
50:34
Fusion. And if that doesn't work, then solar. The energy needs as we look out are enormous. I think the cost of intelligence over time should trend towards the cost of energy.
Speaker 5
50:48
And there's gonna be huge demand for intelligence and thus huge demand for energy. And if we can create abundant, cheap, clean, safe energy, and again, I think we're gonna have this very soon with fusion at massive scale, like the scale that you need for everybody on Earth to get to enjoy the fruits of AGI. That's our plan.
Speaker 4
51:07
Okay. Mark, you want to add something?
Speaker 6
51:09
He's the energy guy. I can't add anything to that.
Speaker 4
51:13
All right. Good. And then, of course, the risks, things to worry about.
Speaker 4
51:19
So are there any real things that we should be worried about, or are there just none?
Speaker 5
51:27
All right, I'll go first. You can add. Look, things are gonna go wrong.
Speaker 5
51:32
Things go wrong with every new technology. One thing that we think is important is to make our mistakes, not just OpenAI's but society's collective mistakes, while the stakes are low. So we believe in deploying the technology iteratively. We put these tools out in the world.
Speaker 5
51:49
We see how people use them and abuse them in ways we didn't think about. And every rep, every time we put out a new system, we want it to get better and better. So GPT-3 had many flaws. GPT-4 has fewer flaws.
Speaker 5
52:02
GPT-N will have even fewer. And the way that systems get safer, and the way that we collectively as a society get to steer them, is this iterative process of using them in the real world, seeing what breaks, fixing that, using them a little more, making them a little better, a little more powerful, and that'll keep going. So I agree with what Andrew said. I think we have, as a society, successfully confronted very risky technologies and very powerful systems and institutions before, and we'll do that again. But it doesn't mean that things won't go wrong along the way; it doesn't mean there will be no disruption. Things will go wrong. There will be disruption. And that's part of why we think it's so important to give people and our institutions time to gradually adapt.
Speaker 5
52:51
And what some people in the field advocate for of, like, basically build AGI in secret and then push the button 1 day, we think would be quite bad.
Speaker 6
53:00
Yeah. If the last five years of safety research have taught us anything, it's that you really can't do safety research in a vacuum. Some of the things that people predicted might be unsafe didn't turn out that way, and you really need this deployment process to have your AI be grounded in the world and then see how people use it. I think this also underscores why international cooperation is important here.
Speaker 6
53:22
We're here because we want to understand the norms, what's safe for 1 geography may not be safe for another. And we think this is not just a US kind of problem to solve.
Speaker 5
53:31
Yeah, that's really important. We have come together as a globe several times before. The IAEA for atomic energy is an example that we point to.
Speaker 5
53:41
There's several others. I think this is going to require another 1 of those because this is a technology that does have such global impact. And figuring out how we get the collective preferences of the world and make these decisions together, that's going to be super important.
Speaker 4
53:58
Do you worry about biased databases or misinformation? Some of the data actually could be contaminated on purpose. So do you have any thoughts about those?
Speaker 5
54:13
You want to go first? Sure.
Speaker 6
54:14
Yeah. Of course, yeah, we worry a lot about bias. And I think this goes back to the point of, you know, figuring out what the different geographies define as bias, because it really varies from place to place. And, you know, misinformation, people will put incorrect content out there.
Speaker 6
54:31
Our model is trained to predict content on the internet, so there are mitigations we have to apply after training to reduce, as much as possible, the regurgitation of misinformation.
Speaker 5
54:42
No two people on Earth will probably ever agree that one system is unbiased. And so one of the things that we think is really important is that we can make systems perform reliably, that society as a whole agrees on what the wide bounds are. And then in different contexts, different cultures, different subgroups, different individuals, there's a lot of ability to get the system to behave in a particular way. And that ability has worked far better than people have thought.
Speaker 5
55:13
So it's certainly true that if you just train a model on the raw internet, you're going to get something that most reasonable people would call biased in different ways. But with techniques that we and others have developed, like reinforcement learning from human feedback and a whole bunch of other things, you can surprisingly steer these models. And so you can kind of configure it how you want. We make some decisions, but you can change it a lot.
Speaker 5
55:39
And this idea of empowering individuals, but setting the edges and also the sort of standard behavior well, I think is really important. And again, this has surprised even the experts in bias. They've said this just works way better than they thought. GPT-4 out of the box is like less biased than a human expert in these topics.
Speaker 5
56:00
And the configurability per user is pretty doable. Now, on the question of misinformation, that's a really hard 1. It's hard to define that. It's hard to agree on it.
Speaker 5
56:13
I again come to this thing of the metric should be reliability. Like if you ask the model to follow a certain set of like ground truth facts, can it do that well? And I think a lot of Mark's team's work will turn out to be very important for that.
Speaker 4
56:30
Yeah, people also worry that technology, especially high tech, will cause inequality or inequity in society. So do you think generative AI will worsen that, or maybe the opposite?
Speaker 5
56:49
Absolutely not. I think, well, I think in general, technology is a force for equalizing. You know, the common example everybody uses is that a very poor person and the richest person in the world get the same iPhone.
Speaker 5
57:08
Things like that have not been true throughout history. But if you think about a tool like AI: if everybody on Earth can get access to the best tutoring system ever, the best medical care ever, that helps everybody. But it relatively helps poorer people more than richer people, who can already pay a lot for intelligence. Same thing with energy.
Speaker 5
57:36
You know, this is really true throughout history. If you study the decrease in the price of energy over time, it has always helped the poorest half of the world more than the richest half. So I think technology does this in general, and AI will be a super strong force for it.
Speaker 4
57:52
Okay, now do you... There was a "but". Please.
Speaker 5
57:57
The "but" is that all of this is only true if society makes the right decisions. You could certainly imagine a world where all of the compute power in the world belongs to one company or one small set of people. And in that case, it won't be such an equalizing force.
Speaker 5
58:15
But I don't think society's going to let that happen.
Speaker 4
58:17
Okay, so do you worry about over-regulation by government authorities?
Speaker 5
58:28
Not that much. I mean, it certainly could happen. I also worry about under-regulation.
Speaker 5
58:34
I think this is... People in our industry bash regulation a lot. We've been calling for some sort of regulation, but only of the most powerful systems. We say: let the small models go, let open source go.
Speaker 5
58:51
Models that are like 10,000 times the power of GPT-4, models that are as smart as human civilization, whatever, those probably deserve some regulation. It seemed to me like the least controversial thing I was going to say this year. And I got, and still am getting, bashed for it by the tech industry.
Speaker 5
59:09
And there's this reflexive anti-regulation thing. But regulation has been, not a pure good, but good in a lot of ways. You know, I don't want to have to form an opinion, every time I step on an airplane, about how safe it's going to be. I trust that planes are pretty safe.
Speaker 5
59:27
And I think regulation has been a positive good there. Same thing for medicines or whatever. So it's possible to get regulation wrong, but I don't think we sit around and fear it. In fact, we think some version of it's important.
Speaker 5
59:41
OK.
Speaker 4
59:42
You don't worry about it, Mark?
Speaker 6
59:44
Well, on regulation, I think Sam said a lot. But just going back to inequity, whether AI causes more inequity: I think there are two examples that come to mind. First, code generation.
Speaker 6
59:54
You know, code generation helps people who are learning to code or just kind of beginning at coding.