Hearing on the Use of AI in the Military and Combat

1 hour 37 minutes 29 seconds

Speaker 1

00:00:00 - 00:00:14

If you go to our website, cspan.org, you can find more of our coverage. We take you live now to a House Armed Services Subcommittee hearing on the Pentagon's utilization of artificial intelligence on the battlefield. You're watching live coverage here on C-SPAN 3.

Speaker 2

00:00:30 - 00:01:28

The subcommittee will come to order. I ask for unanimous consent that the chair be authorized to declare a recess at any time. Without objection, so ordered. I want to briefly review the three commandments of the CITI subcommittee. One is that we shall start on time, which we just did, so that's good. We're going to enforce the five-minute rule, but if time allows we'll entertain a second round of questions, and it often does. But I bring this up because I want to stress the third commandment, which is thou shalt not use acronyms or jargon, which I think is particularly important in this discussion, because discussions about AI can quickly generate, or degenerate, into jargon-laden discussions. We have three true experts on this topic, but just don't assume your average member of Congress, or let me just say, don't assume I understand what you're talking about when you get into the nuances of AI. So we want to have a discussion in the open that your average American can understand today.

Speaker 2

00:01:28 - 00:02:13

We're asking you to demystify a lot of the concepts surrounding AI. And in thinking about this topic, it may sound counterintuitive, but I've been going back to the history of the early Cold War. In particular, I'm obsessed with the Korean War, which is the moment in which the Cold War first turned very hot, at great cost to Americans, and at even greater cost to the Korean people themselves. And I was reading this sort of obscure book about it, and came across the words of a historian named David Rees, who said, at the heart of the West's military thought lies the belief that machines must be used to save its men's lives. Korea would progressively become a horrific illustration of the effects of a limited war where one side possessed the firepower and the other the manpower.

Speaker 2

00:02:13 - 00:02:46

There's a lot of different ways to interpret this in the current context, and particularly in the context of this hearing. One is that AI could potentially increase the destructive power of modern warfare. Another is that AI has the potential to decrease it, or at least decrease the exposure that our soldiers, sailors, airmen, and marines take on when they put themselves in a combat situation. Or the third, and what is unique in contrast to the early Cold War, is that the machines themselves might somehow take power and go beyond our ability to control them. Today we want to dig into all of these different hypotheses.

Speaker 2

00:02:46 - 00:03:08

As I've dug into this topic, I want to commend the ranking member, Mr. Khanna, for the way in which he's worked with me to really use the subcommittee to explore AI concepts. We had a very fascinating discussion with Elon Musk last week. I'll say there were some sources of disagreement. Mr.

Speaker 2

00:03:08 - 00:03:42

Musk believes China is on Team Humanity. I'm not persuaded of that point. And the only thing I've become convinced of is that the CCP, if they win this competition, or win the AI component of this competition, will likely use that technology for evil as a way of perfecting a repressive totalitarian surveillance state, as well as exporting that model around the world. Whereas we in the West, we in the free world, at least have the chance of using it for good. So to make sense of all these things, we're lucky to have three incredible witnesses.

Speaker 2

00:03:42 - 00:03:56

Mr. Alex Wang is the CEO of Scale AI. And you might be the most successful MIT dropout of all time at this point. But there's actually probably a unique subset of people that qualify there. Mr.

Speaker 2

00:03:56 - 00:04:20

Klon Kitchen is a senior fellow at the American Enterprise Institute, and someone many of us on Capitol Hill look to for advice when talking about the intersection of technology and warfare. And Dr. Haniyeh Mahmoudian of DataRobot is an absolute AI expert as well. I've been looking forward to this hearing for a long time. I look forward to an open and honest discussion. Just remember: no acronyms, no jargon.

Speaker 2

00:04:20 - 00:04:23

And with that, I yield to the ranking member, Mr. Khanna.

Speaker 3

00:04:24 - 00:04:51

Thank you, Mr. Chairman, and thank you for convening this panel and for your interest, in a bipartisan way, in addressing AI and making sure that our military is leading with AI. I've appreciated how you've approached this throughout your chairmanship. I'm not going to be long because I know people want to hear from the witnesses. I would just say that my understanding is that China is spending almost 10 times as much as the U.S.

Speaker 3

00:04:51 - 00:05:23

as a percent of their military budget on AI, and we really need to think about the modern technologies that are going to be needed to have the most effective national security strategy. So I'm particularly curious to hear from the witnesses how they think America can maintain the lead in AI technology going forward, what investments we need to make, and what standards we need to have to ensure that our AI is used most effectively. I'm looking forward to this panel.

Speaker 2

00:05:24 - 00:05:28

Thank you. Mr. Wang, you are now recognized for 5 minutes.

Speaker 4

00:05:29 - 00:06:01

Chairman Gallagher, Ranking Member Khanna, and members of the subcommittee, my name is Alexander Wang, and I'm the founder and CEO of Scale AI. It is an honor to be here today to testify at the dawn of this new era of warfare, one that will be dominated by AI, and to discuss what the United States must do to win. In 2016, I founded Scale with a mission to accelerate the development of AI. From our earliest days of working with the leading autonomous vehicle programs at General Motors and Toyota, with technology companies such as Meta, Microsoft, and OpenAI, and through partnerships with the U.S.

Speaker 4

00:06:01 - 00:06:17

government, including the U.S. Department of Defense's CDAO, the U.S. Army, and the U.S. Air Force, we've been at the forefront of AI development for more than seven years. The country that is able to most rapidly and effectively integrate new technology into warfighting wins.

Speaker 4

00:06:17 - 00:06:45

If we don't win on AI, we risk ceding global influence, technological leadership, and democracy to strategic adversaries like China. The national security mission is deeply personal for me. I grew up in the shadow of the Los Alamos National Lab. My parents were physicists and worked on the technology that defined the last era of warfare, the atomic bomb. The Chinese Communist Party deeply understands the potential for AI to disrupt warfare and is investing heavily to capitalize on the opportunity.

Speaker 4

00:06:46 - 00:07:21

I saw this firsthand four years ago when I went on an investor trip to China that was both enlightening and unsettling. China was making rapid progress developing AI technologies like facial recognition and computer vision, and using these for domestic surveillance and repression. That same year, President Xi Jinping said, quote, we must ensure that our country marches in the front ranks when it comes to theoretical research in this important area of AI and occupies the high ground in critical and core AI technologies, end quote. China is investing the full power of its industrial base in AI. This year, they're on track to spend roughly three times the U.S.

Speaker 4

00:07:21 - 00:07:48

government on AI. The PLA is also heavily investing in AI-enabled autonomous drone swarms, adaptive radar systems, and autonomous vehicles, and China has launched over 79 large language models since 2020. AI is China's Apollo project. To lead the world in the development of AI, we must lead the world in the amount of high-quality data powering AI. Scale is firmly committed to doing our part to support the U.S. government and ensure America maintains its strategic advantage.

Speaker 4

00:07:49 - 00:08:04

Today, we do so in three ways. One, the Scale Data Engine: we annotate and prepare vast troves of data for the U.S. government. Two, autonomous mission systems: we partnered with DIU to develop a data engine that will support the Army's robotic combat vehicle program.

Speaker 4

00:08:05 - 00:08:33

Three, we developed Scale Donovan, our AI-powered decision-making platform that helps the U.S. government rapidly make sense of real-world information. The DoD has also taken a number of steps in the right direction, most notably with the launch of the Chief Digital and Artificial Intelligence Office. While this progress is promising, more must be done to achieve AI overmatch. AI overmatch is our five-pillar plan to maintain the United States' security and technological edge in this new era. First, investment in AI.

Speaker 4

00:08:33 - 00:08:49

It is critical to increase America's investment to maintain our leadership. Despite record AI investment in the FY24 president's budget, the U.S. is still spending three times less than China. Second, data supremacy. AI systems are only as good as the data they are trained on.

Speaker 4

00:08:49 - 00:09:17

The DoD creates more than 22 terabytes of data daily, most of which is wasted. AI warfare requires leading the world in developing AI-ready data. Scale fully supports the CDAO and its legislative mandate to establish a centralized data repository, which would enable the DoD to harness the power of data with AI. Third, testing and evaluation. It is one of the most important ways to ensure that AI models are accurate, reliable, and uphold the DoD's ethical AI principles.

Speaker 4

00:09:17 - 00:09:41

The administration has embraced this concept by highlighting Scale's role building an evaluation platform for frontier large language models at DEF CON. Fourth, Pathfinder projects. Congress should authorize and fund new programs with the mission of developing innovative AI-powered warfighting capabilities. Since Project Maven was started more than six years ago, no new AI Pathfinder projects have begun. Fifth, upskilling the workforce.

Speaker 4

00:09:42 - 00:10:05

The U.S. should invest in rapidly training the DoD workforce for AI. Scale has already worked with the DoD to tackle this challenge head-on. In St. Louis, we established an AI center which has created more than 300 AI-focused jobs, ranging from entry-level labelers to machine learning engineers with advanced degrees. The race for global AI leadership is well underway, and I could not be more excited to do everything in my power to ensure that the U.S.

Speaker 4

00:10:05 - 00:10:21

wins. It is in moments like this that Congress, the DoD, and the tech industry can either rise to the challenge together or stand idle. I have included my further remarks in a written statement to be submitted for the record. Thank you again for the opportunity to be here today. I look forward to your questions.

Speaker 4

00:10:21 - 00:10:22

Thank you.

Speaker 2

00:10:22 - 00:10:25

Thank you, Mr. Wang. Mr. Kitchen, you are recognized for 5 minutes.

Speaker 5

00:10:25 - 00:10:58

Good morning, Chairman Gallagher, Ranking Member Khanna, and members of the committee. Thank you for the privilege of testifying. I'd like to use my opening statement to make 3 points. First, I believe artificial intelligence and particularly emerging capabilities like generative AI are a national security lifeline for the United States. The national security community has discussed the potential of AI for years, but now it seems these technologies are finally maturing to where they can be applied at scale, with few doubting that they will soon reshape almost every aspect of our lives, including how we fight and win wars.

Speaker 5

00:10:59 - 00:11:29

The importance of AI is felt as acutely in Beijing as it is in Washington. But until recently, I was not at all confident that the United States would hold the AI advantage. If you assume this advantage comes down to algorithms, data, and hardware, just 1 year ago, I would have given the United States the advantage on algorithms, the Chinese the advantage on data, and I would have called hardware a jump ball. But this deserves another look. Large language models and other generative AIs may be moving the competition back to the American advantage.

Speaker 5

00:11:30 - 00:12:13

The U.S. dominates the underlying computer science giving birth to these advancements, and we remain the home of choice for global talent. On hardware, a strong bipartisan consensus is allowing us to meaningfully constrain China's access to cutting-edge capabilities like advanced graphics processing units, and even more can and should be done. For example, limiting Chinese cloud services would be an excellent next step. Finally, on data. While the Chinese economy and people continue to generate a deluge of digitized data, and while the Chinese Communist Party continues to have unfettered access to these data, the promise of synthetic data and the fact that many of the new AI models are indexed on the open Internet may blunt the CCP's advantage.

Speaker 5

00:12:13 - 00:12:51

It is my hope, for example, that the Chinese government's political fragility, strict content controls, and general oppression of its own people will compromise or bias much of the data that it collects, diluting its utility and ultimately limiting the development of Chinese AI. At the very least, I think that the United States has an opportunity to surge ahead of Beijing if we are aggressive and deliberate. But AI offers the U.S. More than bespoke capabilities. Large language models and other generative technologies, if properly realized, could provide an economic base for a new era of American prosperity and security.

Speaker 5

00:12:52 - 00:13:37

For years, we have known that the United States is not investing in its military sufficiently to meet the demands of the nation. The truth of this has been laid bare as our defense industrial base struggles to keep up with the demands of the conflict in Ukraine, for example. But according to one recent study, existing generative AI capabilities could add the equivalent of $2.6 trillion to $4.4 trillion annually to the global economy, and this estimate would double if we include the impact of embedding generative AI into existing software that is currently in use. The bottom line is this. I believe AI is offering us an opportunity to get our economic house in order, to lay a foundation for our nation's long-term prosperity, and to build a national security enterprise that is properly resourced.

Speaker 5

00:13:39 - 00:14:21

But finally, while AI offers all this promise and more, it also has serious national security risks. Most acutely, a flood of misinformation and the exponential growth of conventional and novel cyber attacks. By now we have all seen the photos, videos, and other media generative AIs are creating, and these capabilities have already been democratized. Virtually anyone can create and distribute synthetic media that will undoubtedly be used to undermine American confidence in our democratic institutions. Similarly, generative AIs will offer hostile cyber actors potent tools for generating and automating traditional and new online attacks.

Speaker 5

00:14:22 - 00:14:37

In a world where we are already overwhelmed by online threats, generative AIs will soon pour gas on these fires. There is much more that I could say on these matters, but I trust we'll cover them more fully over the course of this hearing. Thank you again for the opportunity to testify, and I look forward to your questions.

Speaker 2

00:14:39 - 00:14:42

Thank you, Mr. Kitchen. Dr. Mahmoudian, you are recognized for 5 minutes.

Speaker 6

00:14:45 - 00:15:07

Thank you. Chairman Gallagher, Ranking Member Khanna, and members of the Cyber, Information Technologies, and Innovation Subcommittee, thank you for the opportunity today to testify before the subcommittee on the critical issue of man and machine, artificial intelligence on the battlefield. My name is Dr. Haniyeh Mahmoudian, and I am a global AI ethicist at DataRobot.

Speaker 6

00:15:07 - 00:15:35

In my personal capacity, I am an advisory member of the National AI Advisory Committee and co-chair of its AI Futures Working Group. Today, I testify in my individual capacity. AI holds immense potential and is increasingly becoming an essential component of modern military strategies and operations, with the potential to profoundly impact operational efficiency and decision making.

Speaker 6

00:15:37 - 00:16:41

In the realm of cybersecurity, AI can help the military protect its networks and systems against increasingly sophisticated cyber threats, and also assist in offensive cyber operations. AI can also play a critical role in the prediction and prevention of injuries among military personnel. AI can efficiently track fatigue and injuries in real time, which can aid in the prevention of musculoskeletal and other bodily injuries, which, along with their consequences, are a major reason for medical disability and consequent discharge from service. Thus, it is imperative that the United States expedite the adoption of AI to sustain our strategic military leadership and advantage. While these benefits are significant, it is crucial to ensure that the use of AI in the military context adheres to law and ethical guidelines.

Speaker 6

00:16:42 - 00:17:50

In recent years, insufficient scrutiny and evaluation of AI systems, coupled with a limited comprehension of AI's potential adverse effects, have led to numerous instances where AI, despite being developed with good intentions, ended up harming the individuals and groups it was designed to help. This suggests that considerations of AI ethics have often been relegated to a secondary thought when it comes to building and deploying AI systems. However, it is encouraging that the Department of Defense has taken the initiative to develop AI ethics principles that apply to both combat and non-combat functions. As former Secretary Esper remarked, AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior. Incorporating responsible AI frameworks and fostering trust in AI systems requires consideration of people, process, and technology.

Speaker 6

00:17:52 - 00:18:51

Investment in AI and AI ethics literacy for military personnel at all levels is a key step to ensuring responsible and appropriate use of AI. Successfully adopting AI at scale at the Department of Defense requires that the department implement AI governance frameworks and adopt risk management processes to manage and mitigate the risks associated with AI. One of the challenges in the adoption of AI in the government, especially in the Department of Defense, is a slow procurement process. As mentioned earlier, AI is an evolving space. Therefore, it is paramount for us to make sure that we have a faster procurement cycle, while ensuring that we also have proper evaluation of AI tools through robust governance processes.

Speaker 6

00:18:53 - 00:19:11

In conclusion, AI holds transformative potential. However, alongside these benefits, it is vital to establish ethical frameworks and comprehensive governance processes that ensure effectiveness, reliability, and human oversight. Thank you.

Speaker 2

00:19:12 - 00:19:18

Thank you to all our witnesses for your thoughtful testimony. I now recognize myself for 5 minutes. Mr.

Speaker 2

00:19:18 - 00:19:34

Wang, I'd like to begin by asking you to respond a bit to some of what Mr. Kitchen laid out in terms of our advantages and disadvantages relative to China in the AI race. How do you see those relative advantages and disadvantages?

Speaker 4

00:19:36 - 00:20:08

So, I certainly agree that America is the place of choice for the most talented AI scientists in the world, so we certainly continue to have an advantage there, and the evidence is clear. If you look at ChatGPT, GPT-3, GPT-4, as well as the transformer model that underpins them, all of those were invented in the United States. When it comes to data, I actually also agree that we have a potentially very powerful advantage here, specifically when it pertains to military implementations. In America, we have the largest fleet of military hardware in the world.

Speaker 4

00:20:08 - 00:20:41

This fleet generates 22 terabytes of data every day. And so if we can properly set up and instrument this data that's being generated into pools of AI-ready data sets, then we can create a pretty insurmountable data advantage when it comes to military use of artificial intelligence. Now, I think this is something that we need to work together and actually move towards as a country. Today, most of this data goes unused or is wasted in some manner. We need to fix that to create a longstanding and durable advantage in artificial intelligence data.

Speaker 4

00:20:41 - 00:20:56

And when it comes to computational power, NVIDIA, which is the world's leader in chips for artificial intelligence, is an American company. These technologies are innovated and built in America, and so again, there I think we have an advantage.

Speaker 2

00:20:56 - 00:21:13

Thank you. And, I mean, you've dealt a lot with the Pentagon. It's a customer of yours. Why is it at present, and I know a lot of this is in your written testimony, that that data is wasted? What's preventing us from harnessing that data? And I guess more broadly, why have we not had new Pathfinder projects since Maven?

Speaker 4

00:21:15 - 00:21:57

So data is something that is significantly more valuable with the advent of these artificial intelligence algorithms. A very simplistic way to look at AI is that you have these algorithms that analyze troves and troves of high-quality data, identify patterns in those data, and then can emulate those patterns going forward. We see that with models like ChatGPT, which are able to read troves and troves of language data, things that humans have written over years and years, and then emulate how a human might speak in a lot of these instances. So these artificial intelligence algorithms have made data significantly more valuable than it was in the past, and it's a new paradigm that the DoD needs to adapt to.
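[Editor's note: as a purely illustrative sketch of the pattern-learning idea Mr. Wang describes, learn which items tend to follow which in training data, then emulate those patterns, here is a toy "bigram" text generator in Python. The corpus and all names are hypothetical examples, not any actual Scale or DoD system.]

```python
import random
from collections import defaultdict

# Toy corpus standing in for "troves of language data".
corpus = ("the drone scans the area and the drone "
          "reports the result to the operator").split()

# Step 1 - analyze the data: record which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Step 2 - emulate the learned patterns: repeatedly sample a
# continuation that was actually observed in the training data.
def generate(start, length, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation for this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Modern large language models do the same thing at vastly larger scale, with learned statistics over billions of documents rather than a simple follow-table, which is why the volume and quality of training data matter so much.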

Speaker 4

00:21:58 - 00:22:25

As we all know, the DoD is a fragmented organization. There are many different constituencies and organizations that each have their own approach to data. And as one of my fellow witnesses mentioned, there's an education process and an upskilling process that needs to happen. Everyone within the DoD needs to understand that data is actually the ammunition in an AI war. If we have that recognition as an entire department and as a country, I think it becomes very clear for us to take the right actions to actually collect all this data and move forward.

Speaker 2

00:22:25 - 00:23:12

It sounds as though you're suggesting that with the right leadership and organization, the DoD could actually be a leader in this space. I wonder if it could also be a leader in terms of the guardrails that a lot of our constituents are asking us about. I think your average American understands we need to win this competition, but is concerned about uncontrolled AI, and everyone's seen Terminator, et cetera, et cetera. What is your assessment of the DoD's ethical framework? Is that potentially a foundation that could be built upon and expanded to ensure we're on the same page within the Five Eyes alliance, within the NATO alliance, and then gradually bring more and more people into that sort of free-world framework for AI?

Speaker 4

00:23:13 - 00:23:35

I definitely agree. I think it's really critical that the United States takes the lead on this topic, particularly as it pertains to ensuring that artificial intelligence is used in accordance with our values and our principles. The DoD has established ethical AI principles, which I believe are great, and those principles are ones that we should continue to adhere to. I think now it comes down to implementation.

Speaker 4

00:23:35 - 00:24:17

How are we going to actually make sure that these principles are followed? That's where I think a test and evaluation regime is incredibly important and critical to the increased deployment of these AI systems. As the DoD looks to apply AI to every function within its operation, everywhere from warfighting to back-office functions and logistics, we need to have proper test and evaluation mechanisms that ensure that every instance of artificial intelligence deployed follows our ethical AI principles. So I think we need to set up the framework by which we can ensure all this deployment follows those principles, and really lead the world in terms of thinking on how AI can be used in accordance with democratic principles. I have questions for

Speaker 2

00:24:17 - 00:24:22

the other witnesses that I'll have to save for a second round. I recognize Mr. Conner for 5 minutes.

Speaker 3

00:24:22 - 00:24:24

Mr. Kinning, would you like to go?

Speaker 8

00:24:24 - 00:24:39

Thank you, Mr. Chairman. Since I have to do this, let's see if I can do three quick questions, and I hope quick answers, in 5 minutes here. Mr. Kitchen mentioned global talent, and we have an advantage.

Speaker 8

00:24:40 - 00:24:56

But we also have immigration issues here that are hindering that talent. Indeed, should we change some of the immigration barriers that exist to get that global talent here to the U.S. and make sure we're not losing that talent to other countries?

Speaker 5

00:24:57 - 00:25:08

So immigration policy is outside of my area of expertise. What I would say is that maintaining our access and continuing to be the preferred home of global talent will be essential for national security.

Speaker 8

00:25:08 - 00:25:19

Okay. This question deals with the procurement issue that the doctor mentioned, if you could. The Department of Defense is fragmented.

Speaker 8

00:25:19 - 00:25:38

Can you give this committee, as a follow-up, some suggestions on procurement changes just within the area of AI? Is that something that could be carved out? Because this is an area of great significance, and you mentioned, and I agree, that it's a major, major problem. It's a problem generally, but can we do something specific there that you could suggest to this committee?

Speaker 6

00:25:40 - 00:26:15

So one area that we can think about, and I have to emphasize that the military is not my area of expertise, is one I can bring from the business perspective, because on the business side we also go through procurement cycles through proof of value or proof of concept. One of the challenges that we see there is a long process of evaluation. These types of processes, these types of evaluations, as long as they're standardized, can go much faster.

Speaker 8

00:26:15 - 00:26:19

Thank you. Mr. Wang, you're familiar, though, with the military side. Can you follow up on that? Mr.

Speaker 8

00:26:19 - 00:26:20

Wang

Speaker 4

00:26:20 - 00:26:50

Yeah. I think there have been immense strides in building fast procurement methods for the Department of Defense. Notably, the CDAO, the Chief Digital and AI Office, has set up the Tradewinds program, which is one of the fastest procurement methods for new and innovative technologies in the DoD. DIU as an organization has also been actively partnering with many innovative tech companies to bring their technologies into the DoD. And so there are current programs that I think we can double down on.

Speaker 4

00:26:50 - 00:27:02

Both of these instances I mentioned, at the CDAO and at DIU, are working, and I think what we need to look towards in the next era of AI is doubling down on some of these fast procurement methods and ensuring that we continue innovating.

Speaker 8

00:27:02 - 00:27:16

Is that something that you could follow up with the committee on and provide that kind of information, on how it could be tailored or doubled down on, as you said, more efficiently, something we could exchange with the military? Of course. Thank you. Then just an overview. I think Mr.

Speaker 8

00:27:16 - 00:27:46

Wang might be the proper person, but the others can comment in the 2 minutes I have left. Mr. Wang mentioned that we have a data advantage in the U.S., but we're not capturing all that data. But I think, inherently, in our democracy with privacy rights protections, we are at a disadvantage in terms of how, you know, the Chinese operate. And, you know, for them it can't just be broken down into military information and otherwise.

Speaker 8

00:27:46 - 00:27:59

All that information they gather is valuable. Is there an area where, because of our privacy protections, which are something we shouldn't change in our country, we might be at an inherent disadvantage with China?

Speaker 4

00:28:00 - 00:28:37

So, I actually look at our democratic values as an advantage when it comes to artificial intelligence. If you zoom in on the realm of large language models, this is an area where the United States has clearly raced ahead, and we've invented much of the technology. And if you compare that to what we know of how China views this technology, they are likely going to squash a lot of the technology because it's impossible to censor. Anyone can use ChatGPT and notice that ChatGPT can say all sorts of different things. In the United States, we have protection of free speech, and so we will continue innovating when it comes to large language models.

Speaker 4

00:28:37 - 00:28:48

In China, they view that as a risk to their socialist values. They recently came out with regulations that say their AI technology has to adhere to socialist values, and so, yeah, it's very interesting.

Speaker 8

00:28:48 - 00:29:05

And I'm glad I asked that question; I had never looked at that aspect of the answer. Lastly, quickly, with 30 seconds to go: Vladimir Putin has said whoever controls AI has a huge advantage. But look at Russia right now. Is it fair to say that they are way behind?

Speaker 8

00:29:05 - 00:29:16

Is it fair to say that their involvement in Ukraine and what it's doing to the economy and the sanctions are having an effect? Yes or no? In 14 seconds.

Speaker 5

00:29:17 - 00:29:26

Yes, I think there's good reason to suspect that the Russian AI capability, while they may have some basic research, is minimal in terms of applied deployment.

Speaker 8

00:29:26 - 00:29:30

Great. I thank the ranking member for switching his time so I could go to another hearing. Thank

Speaker 2

00:29:30 - 00:29:33

you. Dr. McCormick is recognized for 5 minutes.

Speaker 9

00:29:33 - 00:29:43

Thank you, Mr. Chair, and thank you to the witnesses. I wish I had time to talk to you all day, because this is fascinating. You're all obviously experts. Unfortunately, we get about enough time for about two questions.

Speaker 9

00:29:43 - 00:30:11

That's about it. So I'll go with the most pertinent question, which you actually brought up in your opening statements. Mr. Kitchen, you just discussed limiting Chinese access to our information, which totally makes sense; we see how they can develop very rapidly when they literally take our information and apply it. My concern is that we have an enormous number of foreign students at our universities right now in some of the leading technology areas, including AI development. Georgia Tech is right in my backyard.

Speaker 9

00:30:11 - 00:30:28

I went to Georgia Tech and did my pre-med there. And we're literally educating them and sending them right back there. That is access to leading technology in America. Is that what you're discussing when you talk about access or are you talking about in the industry itself or the stuff that's out on the internet or is it everything?

Speaker 5

00:30:29 - 00:31:37

Thank you, Congressman. It's an important question. Certainly, there's undeniably a level of risk associated with foreign and particularly Chinese student presence in the United States. However, the research that I have seen by organizations like Georgetown's Center for Security and Emerging Technology actually demonstrates that the vast majority of foreign research students, even Chinese students, stay in the United States, or more broadly in the West, for the course of their career and amplify our capability. What I'm most concerned about, however, when I talk about Chinese access to data, again, not dismissing an inherent built-in threat there, is, frankly, their acquisition of American data through purchase, through large data stores, but then also things that we've all been talking about and staring in the face for multiple years now, things like Chinese-owned and operated social media companies like TikTok, where every bit and byte of data that's generated via these applications on Americans' phones is by law made accessible to the Chinese Communist Party.

Speaker 5

00:31:38 - 00:31:48

And so while Chinese students and other foreign students may pose some type of risk, it pales in my view in comparison to the type of money, or the type of data, that we're just kind of giving away.

Speaker 9

00:31:48 - 00:32:19

I appreciate that, and I can totally understand where that's coming from. My other concern, though, in regards to that, and this is a quick comment, is that the Chinese government's not stupid, and they obviously don't really care about their people more than they do about their government. So when they allow people to come here for education or jobs, I think it's with nefarious intent, and that's my worry. I'm not saying we don't need to educate people from other foreign lands, but I'm worried about it, and I'm worried about anybody who's pushing their people over here, knowing they're not coming back for a reason. With that said, also, Mr.

Speaker 9

00:32:19 - 00:32:59

Wang, you made an interesting statement about investing in AI and how China's got 3 times more investment in their AI. Of course, the 1 thing where we do have a huge advantage is that we have a lot of private people investing in AI now, and China doesn't have that. They don't have the capacity to outperform our private industry, because they don't have a private industry. How do we compare when we combine our synergistic efforts between government and private industry with China? And you mentioned, Mr. Kitchen, to that effect, that we allow this freedom of flow and it's not controlled, so it does have the potential to outpace, as long as we put the right guardrails on it, when we're talking about our competition with China.

Speaker 4

00:33:00 - 00:33:38

Certainly, if you factor in the amount of private sector investment into AI in the United States, that is an incredible sum. You know, large technology companies, the venture capital industry, and now the sort of global enterprise are investing billions and billions of dollars into AI. And so if you tally all that up, it's an incredible investment into artificial intelligence in the United States. That being said, I don't think we should rest easy on that, because military implementations of AI are going to be incredibly important. We need to ensure that in this next phase the US is both economically dominant and also has military leadership when it comes to artificial intelligence.

Speaker 4

00:33:38 - 00:34:01

And so, you know, we need to consider what the overall investment into military implementations looks like, and that's where there's a large disparity. That's where China's investing 3x more. And if you compare as a percentage of their overall military investment, the PLA is spending somewhere between 1 to 2% of their overall budget on artificial intelligence, whereas the DOD is spending somewhere between 0.1 and 0.2% of our budget on AI.

Speaker 9

00:34:03 - 00:34:28

It's a good point. It is interesting to watch these private industries in the United States now, pairing with the DOD, develop a lot of this stuff, which is very cool, including yourself. I will say, since I'm out of time, that we shouldn't sleep on Iran and Russia, who obviously want to be players. They've used technology in the past to disrupt other countries, and they of course love misinformation, so this is something we need to be aware of. Thank you. With that, I yield.

Speaker 2

00:34:30 - 00:34:31

Mr. Khanna is recognized for 5 minutes.

Speaker 3

00:34:32 - 00:34:51

Thank you, Mr. Chairman. Mr. Kitchen, I thought it was interesting that you said that the advantage China may have because of data is diminishing, because things like ChatGPT are based on the entire universe of the Internet, which has both good and bad data in it. And then Mr.

Speaker 3

00:34:51 - 00:35:13

Wang, you said that the DOD is really relying on sort of tagged, annotated data. I guess I'm trying to understand: what is the best data that is needed for AI to be effective in military applications? And does China have an advantage on that kind of data or not? I'd love both of your answers on that.

Speaker 4

00:35:14 - 00:35:41

So both kinds of data are important: sort of open-source data that is accessible on the internet, which is a key data source for large models like ChatGPT, as well as high-quality annotated data sets. ChatGPT and its precursor InstructGPT were trained on large quantities of high-quality, expert-generated data. And it's an important data source to ensure that these systems are more trustworthy, truthful, responsible, et cetera. So both matter.

Speaker 4

00:35:41 - 00:36:23

But when we look towards, again, military implementations of AI, the key is, what is the military data that these models are trained on. Right now, the models that are used by consumers and are present in the private sector are trained on essentially no military data. As a result, if you try to apply these without any additional data towards military problems, they would not perform particularly well. So as we look towards applying artificial intelligence to the military, we need to have military AI-ready data sets that are ready for this kind of deployment. When it comes to that kind of data, I think probably today you would say it's a jump ball.

Speaker 4

00:36:23 - 00:36:43

I think the PLA is looking deeply at this issue and the DOD is looking deeply at this issue. But we have all of the fundamentals to have an insurmountable advantage, because the DOD generates 22 terabytes of data a day, far more than the PLA generates. So if we can instrument this data into 1 central repository, we can come out ahead.

Speaker 3

00:36:43 - 00:36:55

So, to Mr. Kitchen's comments, their being a surveillance state and just getting data from all their citizens is not really going to be helpful for the military data sets that are needed to solve military problems.

Speaker 4

00:36:55 - 00:37:02

Correct. It will be a very limited help, and military data is, you know, orders of magnitude more valuable for military problem sets.

Speaker 5

00:37:02 - 00:37:08

Mr. Kitchen. Yes, sir. I completely agree with what Alex was saying. I think the application matters.

Speaker 5

00:37:08 - 00:38:00

So in military applications, particularly anything that would be tactical or kinetic, military-generated, well-curated data is really gonna be the key differential. The point that I was trying to raise when I mentioned the data advantage perhaps swinging back our way is in 1 sense aspirational. Part of the hope of generative AI is that over the course of time we'll be able to generate what's called synthetic data. So instead of data that's been produced via normal economic activity or military activity, generative AIs are able to begin generating synthetic data sets that would be useful for training. I suspect that we're, number 1, not there yet, and number 2, that those data sets will be helpful for broad economic application, but will be supplemental to, not a substitute for, the type of military applications that Alex was discussing.

Speaker 3

00:38:01 - 00:38:14

Thank you. Dr. Mahmoudian, thank you for your testimony. I know Secretary Esper had introduced AI framework guidelines for the DoD. I'm not sure if that's been updated now.

Speaker 3

00:38:14 - 00:38:23

Are there things you'd want the DoD to do more in terms of the ethical guidelines framework for the use of AI?

Speaker 6

00:38:25 - 00:39:01

So the DoD, as I mentioned, already has AI ethics principles in place. 1 comment that I would have about that is how we can take these frameworks from abstract to practical. And that comes with the education of the personnel, to make sure that personnel understand what these principles mean and how they actually apply in practice to the use cases they're working on. So that's the first step. The second step is the implementation of AI governance.

Speaker 6

00:39:01 - 00:39:34

So when we are talking about the policies and processes that AI governance would have, the measurements that would be part of this process would include the principles that they have. So it's all about people and the process. And obviously the technology: how we are going to measure the risks that we may identify in a use case, these are all part of the technology aspect of it. Designing the technology in a way that it would provide an explanation of why the system made a certain decision. And I'm-

Speaker 3

00:39:34 - 00:39:35

Thank you.

Speaker 2

00:39:35 - 00:39:38

Thank you. Mr. Gaetz is recognized for 5 minutes.

Speaker 10

00:39:38 - 00:40:11

Mr. Wang, thank you for bringing into sharp relief the extent to which we have to think about all of these weapon systems that we have in contested environments as data collection platforms, almost primarily when it comes to integration with AI. And I took great interest in your call to the committee that, you know, we not waste the exquisite data that is being collected. What advice would you have for the committee about shaping some sort of access or utilization regime for the data that we are currently wasting?

Speaker 4

00:40:13 - 00:40:51

I think this is 1 of the most important things that we can do to set up America for decades and decades of leadership in military use of AI. Right now a lot of this data goes onto hard drives, and what ends up happening is the hard drives are either overwritten with new information, so the old data gets deleted, effectively, and lost, or these hard drives go into sort of closets or places where they never see the light of day. So first is instrumenting the data to sort of flow into 1 central data repository. The CDAO, the Chief Digital and Artificial Intelligence Office, has a legislative mandate to do so and set up a central data repository for the DoD. So I think that's of critical importance.

Speaker 4

00:40:51 - 00:41:15

And then this is a whole-of-DoD issue. Every service, every group, every program needs to be thinking about all of the data that their programs are collecting and that is being generated within their purview: how can they ensure that all these data flow through into 1 central data repository and then are prepared and tagged and labeled for AI-ready use down the line?

Speaker 10

00:41:15 - 00:41:49

And it would seem as though, under the normal construct of a mission set, someone might reasonably be stovepiped away from the broader utilization of some of that data. So it almost seems like something that is an appendage to a mission set, very hard to weave in, because as you're collecting data in contested environments, it could be for all kinds of reasons and all kinds of help. I wonder aloud, what will be commoditized first, the processing capability on some of these platforms or the data itself?

Speaker 4

00:41:51 - 00:42:18

Well, I think you're right that data is, you know, a new asset for this new regime of AI warfare. Data truly is the ammunition that will power our future efforts in the military. So it is a new paradigm to think about data as a key and central resource versus, as you mentioned, an appendage that doesn't feel particularly critical to the future operation of our programs.

Speaker 10

00:42:18 - 00:43:09

Yeah, you know, we do all kinds of domestic policy and military policy around who can access rare earth minerals, who can access various forms of energy, and I wonder if in the future a nation-state's access to exquisite data sets that have been properly stored and collected is viewed as just as precious. I also wanted to reflect on the smartest hour I ever spent, and it was listening to Elon Musk, with our chair and ranking member, discuss some of these issues. And I would encourage anyone watching this who has an interest in the issue: it's hard to find a conversation on the internet with a higher average IQ across the board than that 1. But what Mr. Musk presented as an argument was that China understands that AI control of governance is equally a threat to them and to the United States.

Speaker 10

00:43:09 - 00:43:53

And so Mr. Musk's argument was we really are ideal partners with China, because we share a common goal to not have the AI robots ultimately take over our governance. And our chairman offered, I think, a pretty strident critique of that perspective, saying that while we typically view China as thinking long-term, they're more team communistic genocide than they are team humanity. So I was just wondering, because you had so much in your written testimony about your time in China and how that shaped your perspective on the ethics of all this: do you think China sees an overlap of interests with the United States on this, or do they see us as explicitly an arm's-length competitor?

Speaker 4

00:43:55 - 00:44:35

I think it would be a stretch to say we're on the same team on this issue. If you look at the last generation of AI, computer vision technology, the way that China approached it was building a government-funded industrial base to immediately build advanced facial recognition technology for the suppression of their population and the suppression of Uyghurs, ultimately sort of tightening the grip of their totalitarian regime. I expect them to use modern AI technologies in the same way to the degree that they can. And that seems to be the immediate priority of the Chinese Communist Party when it comes to implementations of AI.

Speaker 10

00:44:35 - 00:44:54

So we'll count you on Team Gallagher, not Team Elon, on that. And just a question for the record: I would love to know everyone's perspective on what the most important alliances the United States is involved in are when it comes to these AI regimes. Is it AUKUS? Is it 5 Eyes? Does NATO have a role to play in some of the ethics around this?

Speaker 10

00:44:54 - 00:44:55

I'd love to submit that.

Speaker 2

00:44:55 - 00:45:04

Well, I'll break the second commandment, to which there's a corollary: if you say something nice about me or the ranking member, you get more time. Just quickly, what is the answer to Mr. Gaetz's question? That's an interesting question.

Speaker 4

00:45:06 - 00:45:38

I think they're all important. I'd probably start with 5 Eyes, given the strength of our partnerships within that group. But as we look towards artificial intelligence as a global technology that will shape much of the future of the world, I think we need to form as many key partnerships as possible to ensure that the governance of this technology, certainly for military use, for use in intelligence, and for commercial purposes, adheres to the democratic values that we have as a country.

Speaker 2

00:45:38 - 00:45:39

Quickly, Mr. Kitchen.

Speaker 5

00:45:40 - 00:45:56

From a traditional security alliance perspective, I would say 5 Eyes and NATO will be critical. However, I would say that the broader economic partnership with our friends and allies in the European Union is going to be critical long-term, and it is going in the wrong direction. Happy to talk about that more.

Speaker 2

00:45:56 - 00:45:57

Quickly, Dr. Mahmoudian.

Speaker 6

00:45:59 - 00:46:14

I'm echoing the sentiment that the other witnesses have had. Later in the year we are going to have our first AI summit, which is happening in the UK. So we need to expand this type of alliance, as mentioned earlier, with our allies in the area of AI.

Speaker 2

00:46:14 - 00:46:16

Great. Ms. Slotkin is recognized for 5 minutes.

Speaker 11

00:46:16 - 00:46:58

You know, I would just say, following on that last question: with 5 Eyes, we've had generations of learning how to share with each other and become interoperable. I don't actually know if we have data-sharing arrangements when we don't have a joint platform. It's just fascinating to think about getting those arrangements in place and sharing data, given the value of it is going up so precipitously. So, you know, I would say what we are doing here up on the Hill, with the help of industry who's invested in AI, is like admiring the problem, right? We are all talking about the problem of this new tool that we know has real potential, but also has potential real downsides.

Speaker 11

00:46:58 - 00:47:33

And so how do we govern it? And our constituents are asking us, what are the ground rules on this new technology? Because it sounds scary. And I would commend the Joint Artificial Intelligence Center at DoD for putting up some basic, really 40,000-foot guidelines on being responsible and equitable and traceable and reliable and governable, but it's real top-level stuff. But we are up here, you know, the flip-phone generation, trying to figure out how to govern AI, and it is complicated.

Speaker 11

00:47:34 - 00:47:59

But could you give us a sense, sort of in colloquial English, of what keeps you up at night about the military use of AI? If China is investing at least 3 times, and in some cases 10 times, the amount that we are, what is the number 1 thing, kind of the worst-case scenario, that you feel we could see in the next decade if we go unchecked? Mr. Kitchen, you're shaking your head.

Speaker 5

00:48:00 - 00:48:40

Thank you. While there are certain risks of what we would call kind of bespoke threats, I think the most acute challenge that we're likely to encounter in the near term is simply a more effective and efficient enemy. The chairman referenced a quote from the Korean War; I'll raise him with another 1 from General Pershing, who said, infantry wins battles, but logistics wins wars. And I think supply chain and military logistics, and a lot of what we call kind of back-office military capacity, is what is likely to be reshaped by AI in the near term, which can sound innocuous and not so scary.

Speaker 11

00:48:40 - 00:48:53

Not after Ukraine. I mean, not after watching Russia in Ukraine. I'll be happy to invest in more improved logistics, given the buffoonery we've just seen in the Russian military. But I just want to make sure Mr. Wang has an opportunity.

Speaker 11

00:48:53 - 00:49:01

That's a good 1. And it's not a scary thing. It's just a more capable and competent adversary, whoever they are. Mr. Wang.

Speaker 4

00:49:01 - 00:49:36

I would certainly agree that the application of AI to back-office functions, logistics, and just overall optimization is really critical. If you look towards the areas where the PLA is investing in artificial intelligence, it's autonomous drone swarms, whether aerial, subsurface, or ground; they're investing across all fronts. They're investing in adaptive radar systems, which jam and blind US sensors and information networks. So they're investing across the whole spectrum of artificial intelligence to sort of set the new tone of warfare with this technology.

Speaker 4

00:49:36 - 00:50:21

And so, you know, we need to be investing across the slate. That being said, I worry as well about the risks in deploying these AI systems without proper guardrails. And for me, it really comes down to implementation, which is test and evaluation. How do we ensure that each of the artificial intelligence systems the DoD is likely to deploy over the next few years and the next decade adheres to the DoD's ethical AI principles as stated? So I think a test and evaluation mechanism needs to be a standard part of the procurement process, so that in every instance where a program within the DoD is looking to use artificial intelligence, we have the right testing and evaluation to ensure that it adheres to our guardrails.

Speaker 11

00:50:23 - 00:50:46

And I know that the department is working hard on this data-labeling problem, and it's an enormous task to ask a tends-to-be-stovepiped organization to share data, make it available, label it, make it usable. If you were king or queen for the day and could get them to do 1 thing on reliability and availability of data, what would it be?

Speaker 4

00:50:48 - 00:51:14

I would say first establishing the central data repository, then creating a plan by which as much as possible of the 22 terabytes of data generated a day goes into that central data repository, and then creating a plan by which as much of that data as possible is processed and labeled and annotated to be AI-ready. You know, these are all multi-year efforts that are not going to be solved tomorrow at the snap of a finger. They need to be solved through long-term planning and long-term coordination.

Speaker 11

00:51:14 - 00:51:16

Great. Thank you very much. I yield back.

Speaker 2

00:51:17 - 00:51:19

Mr. LaLota is recognized for 5 minutes.

Speaker 12

00:51:19 - 00:51:50

Thank you, Chairman. Thank you, Chairman Gallagher, for your leadership on this issue, and to our witnesses for helping to inform Congress on these important issues. Along with my colleagues from both sides of the aisle, I'm concerned with the rapid advances in artificial intelligence and machine learning, specifically by our adversaries like China. What concerns me most is that the Chinese Communist Party has been making great strides and intends to be the world's leader by 2030. And while we here in the United States have made significant improvements in recent years, and we continue to advance, thankfully,

Speaker 12

00:51:50 - 00:52:30

We still have much work to do when it comes to ensuring the DOD is adopting and deploying these capabilities properly. With that I want to give a shameless plug for 1 piece of legislation that I have in the AI space. My legislation would require the Office of Management and Budget to issue guidance to federal agencies to implement transparency practices relating to the use of AI and machine learning, specifically when AI is being used to supplant a human's decision-making impacting American citizens. While my legislation focuses more broadly, I want to ask for your thoughts on where the DoD currently stands when it comes to AI and machine learning. Where is the U.S.

Speaker 12

00:52:30 - 00:52:48

compared to our adversaries such as Russia and China with respect to fully implementing the latest capabilities? Are we years ahead? Are we on par? Are we years behind? And what are some ways you would plan for the DoD to speed up the adoption and implementation of AI effectively at the department?

Speaker 12

00:52:49 - 00:52:49

Mr. Wang.

Speaker 4

00:52:51 - 00:53:33

So when we look at the new technologies, like large language models such as ChatGPT that have sort of really come to light over the past year, I think that's a jump ball. This is a new technology that we need to implement as quickly as possible; they're trying to implement it as quickly as possible, and we'll see how that develops. If you look towards the last generation of AI technologies, which is computer vision, AI for things like facial recognition, this is an area where the original techniques were invented in the United States, but then China quickly raced ahead. They built an industrial base within their country, funded it with government money to build facial recognition technology, and deployed it throughout their country to suppress Uyghurs and overall tighten the grip of their socialist regime.

Speaker 4

00:53:34 - 00:54:20

If you look today at the leaderboards for computer vision AI competitions globally, Chinese companies and Chinese universities dominate compared to American institutions. So if you look at that as a case study, the Chinese system clearly has the ability and the will to race forward when it comes to artificial intelligence deployments. Now as we look towards this next field of large language models, we have reasons to be optimistic. China is going to be more reticent to invest in large language models because they are difficult to censor. They recently released regulation that said AI needed to adhere to their socialist principles, which I think is a clear limitation if you have an AI that can sometimes misspeak, like ChatGPT.

Speaker 4

00:54:20 - 00:54:35

So we have reasons for optimism. And again, the DoD produces more data than the PLA by orders of magnitude; we generate 22 terabytes of data every single day. And so if we can properly build an advantage here, it will be quite durable.

Speaker 12

00:54:36 - 00:54:41

Mr. Kitchen, would you add something to that? Where is your scorecard at? Are we behind? Are we ahead?

Speaker 12

00:54:41 - 00:54:42

Are we on par?

Speaker 5

00:54:43 - 00:55:44

Well, I would say that the competition that matters most historically is between the 2 global powers, the United States and China. As I mentioned at the beginning of my testimony, a year ago I had very real concerns as to how the United States was going to be able to maintain its AI advantage. So much of the public conversation around AI focuses, legitimately so, on the risks and the kind of unknown; those are meaningful conversations. But I do think, just analytically, that we have a moment to reassert American dominance in a way that really matters; that some of the things I would have called a drag on our development and deployment from a national security perspective are actually lessening; and that if we realize this technology deliberately, then we can seize the advantage, and not just seize it now, but actually build an advantage that will be meaningful over the long term. And I think that we should do everything we can to do that.

Speaker 12

00:55:44 - 00:55:54

Thanks. And with 30 seconds to go, Doctor, I'll ask you the last question. What are the risks that this committee and the department should be aware of, and how do we address those risks as we leap forward?

Speaker 6

00:55:57 - 00:56:21

So when it comes to risk, there's obviously a risk of the United States falling behind with regard to the advancement of AI in the military. So the main area that we need to focus on is making sure that we have the advantage on research, investing in research, especially on the military side, and making sure that we are still a leader in the area of R&D in AI.

Speaker 12

00:56:21 - 00:56:22

Thanks. Chairman, my time has expired.

Speaker 2

00:56:23 - 00:56:24

Mr. Kim is recognized for 5 minutes.

Speaker 13

00:56:24 - 00:56:55

Thank you, Mr. Chair. Thank you so much for coming out and talking to us today. We've spent a lot of time today talking about who's got the development edge and where we're building in that direction, and certainly about competition with China, so I don't want to go over those again right now, as they were very well talked through. Mr. Wang, in your opening statement you talked about how some of the main technology of the past, nuclear development and whatnot, sort of shaped that era, and how this will very likely shape our era.

Speaker 13

00:56:55 - 00:57:31

So I wanted to get a sense from you all about what you think proliferation of this technology and possible weaponry would look like. You know, in the nuclear era, which, you know, we are still in, we have a situation where only a very few countries have been able to reach that threshold of technology, and proliferation has in many ways been contained. So I guess I wanted to ask you: for me, that doesn't necessarily seem like the kind of setup we're likely to see over the coming decade or 2. What does it look like to you? Are we going to have a situation where the U.S.

Speaker 13

00:57:31 - 00:57:52

and China, a handful of countries, are the major developers and gatekeepers of this technology, but the actual weapon systems and technology will be potentially mass-deployed and available for purchase by pretty much any nation that's out there? Just give us a sense of what that proliferation topography and landscape looks like.

Speaker 4

00:57:52 - 00:58:31

I think this is a really good question. You know, I think in terms of impact, artificial intelligence is going to be similar to nuclear weaponry, but as you mentioned, it is a technology that is likely to be ubiquitous. Artificial intelligence can be used across every single domain, every single function, every single activity that the military has today, so it's not sort of contained as 1 individual weapon. And it's a technology that is increasingly becoming a global technology. A few months ago, the UAE announced their own large language model that they had built, called Falcon 40B.

Speaker 4

00:58:32 - 00:59:06

They actually open sourced that model to the world so that anybody on the internet can go and download that model, that large language model for use. We're seeing with the open source community when it comes to large language models that this technology is likely to be accessible in some way, shape, or form to nearly everyone in the world. That being said, I think that is not a reason to give up hope because of 1 of the things I mentioned before, which is for military use cases and military applications, you need algorithms that are trained on military data.

Speaker 13

00:59:06 - 00:59:32

And with that, Mr. Kitchen, if you don't mind, I'd like to bring you in. Will we find a situation where, yes, some country or entity or company is doing that, but is then able to sell that type of technology and weaponry to a country or to a group? Mr. Kitchen, I'd also like to get your thoughts on the potential for rogue actors, non-state actors, to be able to get this type of technology and to be able to utilize it. So if you don't mind, give us some of your thoughts.

Speaker 5

00:59:33 - 01:00:03

So I agree with Alex in the sense of this technology having the same strategic impact as something like nuclear weapons. But 1 of the peculiarities of it is that this technology is overwhelmingly being developed in the private sector for commercial applications, unlike nukes. And so 1 of the implications, because of that and the fact that so much is done via the open-source model, is that it's instant proliferation. It's available in terms of the underlying technology and capacity.

Speaker 5

01:00:03 - 01:00:45

But it's going to be the applications, the particular applications, that really make the difference when it comes to capability distinctions. And that's where Alex's point about the United States having a potential advantage on military data comes in, right? How we apply the underlying capability is really, really going to matter, and that's where the advantage comes to us. Now, when we think about non-state actors or rogue actors, I think the most acute challenge there is probably novel and traditional cyber exploitation of these capabilities. The ability to generate malicious code and automate it and deploy it is now going to be democratized to a level and at a scale that is going to be difficult.

Speaker 2

01:00:45 - 01:00:48

I want to just get one last question in. Doctor, to bring you in on this,

Speaker 13

01:00:48 - 01:01:07

you know, when we're talking about this proliferation and seeing the potential for non-state actors and others. I guess, you know, we talked about some of these frameworks the U.S. needs to lead the way on, but should we be thinking about an actual international agreement here, an international treaty? What kind of structure should we be building toward to give us the ability to shape that as a whole?

Speaker 6

01:01:09 - 01:01:27

I completely agree. We need to think about both the domestic side, so within the United States, we need to think about how we should be governing these types of technologies, understanding their risks and having mitigation processes in place, but we also do need to work with allies at the international level.

Speaker 2

01:01:27 - 01:01:32

Okay, thank you. I yield back. Mr. Fallon is recognized for 5 minutes.

Speaker 7

01:01:32 - 01:02:02

Thank you, Mr. Chairman. I just wanted to follow up real quickly, Mr. Kitchen. I think ransomware is already a huge problem, and it's one that largely goes under the radar unless, you know, Colonial Pipeline or something like JBS is hit, and then everybody talks about it for a week and forgets about it and acts as if it's not a real problem, which it is. When you have friends in industry, small companies of 100-200 people, that are getting hit, half-million-dollar ransoms are now being asked, a million-dollar ransom, when a lot of the time it was 50 grand a few years back.

Speaker 7

01:02:02 - 01:02:10

Do you think that with AI we are going to face, as you just mentioned, but I want you to expound on it, an explosion in ransomware when you say it's democratized?

Speaker 5

01:02:13 - 01:02:56

I think that's certainly one of the potential implications. Honestly, I think one of the key developments over the last 2 years that has constrained ransomware, to the degree that it's been constrained, is the war in Ukraine. Many of those cyber syndicates that were prosecuting those attacks have been repurposed by the Russian government for attacks in Ukraine and elsewhere. I think if and when that ever slows down, we're going to feel the surge again. And I think that that surge will absolutely be enabled by generative AI, because there's a study that says there are 4 key areas that will constitute approximately 75% of the economic increase coming with generative AI.

Speaker 5

01:02:56 - 01:03:05

One of those is R&D, with software development being another. And so I think that applies, unfortunately, equally to the bad guys as it does to the good guys.

Speaker 7

01:03:06 - 01:03:42

Yeah, you know, look, nobody has ever accused the DoD of being highly efficient. They're large, but when you have inefficiencies, you're talking about wasting billions of taxpayer dollars, and particularly when we're in a competition with China, that's even more troubling, and we need to address it. We might envision AI in future wars being fought by robots and such, but within these walls, Mr. Wang, in your opinion, can the Department of Defense use AI to extract efficiencies in programming and budgetary activities?

Speaker 4

01:03:43 - 01:04:32

For sure. One of the areas that we've already worked with some of our DoD customers on is using artificial intelligence and large language models to help digest requirements that are given by the DoD. You know, there are so many groups within the Department of Defense that are generating requirements, and matching those requirements up with capabilities in the private sector, or new capabilities that the DoD develops, is an incredible potential efficiency gain. There are hundreds, if not thousands, of applications like that of artificial intelligence toward making the DoD a more efficient organization. So I'm incredibly optimistic about the ability to use AI, whether it's in logistics, back office, or personnel-related matters, to build a more efficient force that wastes fewer resources and ultimately is able to have more force protection capability.

Speaker 7

01:04:32 - 01:04:38

I think the same thing holds for, you know, increased accountability with DoD contracting and spending.

Speaker 4

01:04:39 - 01:05:04

I think that, you know, if you think about what the limitations are or what the challenges are, it's in processing huge amounts of information and data that are being generated by the DoD to, you know, understand not only how funds are being used, but also understand what capabilities are being generated. And so if you think about that problem set, it's one that's naturally suited for artificial intelligence and for the use of these large language models.

Speaker 7

01:05:04 - 01:05:31

Dr. Mahmoudian. You know, when you talk about AI, my mind starts to bend and hurt and break a little bit because it's just so intriguing. But when we just talk about basic concepts of some of the technology we've grown accustomed to, like with social media, some folks, believe it or not, in this building and on the other side of the building don't grasp even those concepts. I mean, I remember a major state's governor saying that we should use Twitter more.

Speaker 7

01:05:32 - 01:05:56

He didn't even get the name right. And one of the senators, I remember, was saying, like, how can they post a picture online, things like that. So while that's funny, it's also troubling that if they're not grasping basic concepts, and you talk about AI, which is this stuff on, you know, hyper steroids, how do we go about best educating our colleagues and the American public on AI and assuaging some of the fears associated with it?

Speaker 6

01:05:59 - 01:06:34

So when we are thinking about the education side of it, we need to understand that this education needs to be tailored to people's needs. So depending on their roles, depending on their responsibilities, we need to tailor that education for them. To give you an example, for senior leaders who may not be technical, you need to come up with an education that lets them know what AI is exactly, to your point, what it's capable of, what its limitations are. Versus someone who's technical, let's say a data scientist; for them it would be a different story.

Speaker 6

01:06:34 - 01:06:42

We can have a more technical education for them, but we also need to have this education in a continual form as AI evolves.

Speaker 7

01:06:42 - 01:06:49

So almost like how it can help them specifically and make their lives a little bit better. Thank you Mr. Chairman. I yield back.

Speaker 2

01:06:50 - 01:06:52

Mr. Ryan is recognized for 5 minutes.

Speaker 14

01:06:52 - 01:07:02

Thank you, Mr. Chair. Good morning. Thank you all for being here and for your insights. I wanted to start by building on some of your written testimony, Mr.

Speaker 14

01:07:02 - 01:07:36

Wang. You talked about data as the ammunition in AI warfare. You talked about what some of our adversaries, particularly China, are doing. And then you were, and I appreciate it, candid about areas where we need to improve. Can you talk about, based on your specific experience in your companies working with the DoD, who is doing relatively better and what lessons we can learn? And also, if you could, talk a little about the CDAO, the Chief Digital and Artificial Intelligence Office, and how you see that intersecting here, so that we can recognize the imperative around wrapping our arms around our data better?

Speaker 4

01:07:38 - 01:08:49

Certainly. The groups that we work with, by nature of us generally working with the more forward-leaning groups within the DoD, are forward-looking, they're extremely innovative, and they've been incredible in terms of taking on this technology as a key part of their go-forward strategy in building impressive capabilities. So we've worked with many of the early programs in the DoD for use of AI, and by and large, I've been incredibly impressed. That being said, I think now is an opportunity for us to build on those successes and really take this moment in the technology and speed up our deployment. It's incredibly important that we build on our past successes so that we're able to more scalably deploy this technology across the entire DoD, rather than being limited to a few innovative cells within the DoD. As I mentioned a bit ago, the DIU and the CDAO have been some of the groups within the DoD that have been able to have fast procurement cycles and generally innovate when it comes to the use of artificial intelligence, but that needs to happen across the entire Department of Defense.

Speaker 4

01:08:50 - 01:09:13

Lastly, just on the CDAO: it's a recently established organization, but they've done a good job of pushing forward on data labeling and a central data repository for the DoD. And now I think we need to ensure that that actually happens, in terms of collecting these 22 terabytes of data that are being generated every day.

Speaker 14

01:09:13 - 01:09:35

Thank you. And just to build on that, and to bring in anyone else who wants to add here: is it even possible to do that from the top down? I mean, I understand the importance of setting the right tone and direction, but creating a new office, while necessary, is not, I would argue, sufficient. If we're really serious about wrapping our arms around this,

Speaker 4

01:09:35 - 01:09:36

it should

Speaker 14

01:09:36 - 01:09:49

be emphasized and trained and reinforced much more broadly. Do you agree with that? Any ideas from anybody on how to do that, particularly looking at how others, adversaries or allies, are doing it?

Speaker 4

01:09:50 - 01:10:27

A combination of top down and bottom up is necessary here, because the individuals who are making the decisions, when they get a new hard drive off of a military platform and need to decide what they're going to do with that hard drive, we need all the way down to that individual to understand that that hard drive is full of data that will fuel the future of American military leadership. And they need to understand that as viscerally as we do from a top-down perspective, within the CDAO or within this conversation. So it requires a whole-of-DoD approach to be able to properly achieve this outcome.

Speaker 5

01:10:28 - 01:11:07

Congressman, the one thing I would add is that as we tackle these difficult challenges, and they are legion, just from a mentality standpoint, I would encourage Congress and the U.S. government to approach these as challenges that have to be managed, not solved. If we make the perfect the enemy of the good, if we try to find the exquisite solution, we will so delay ourselves that we will miss the opportunity. And that's one of the key narratives I'm really trying to emphasize: we really do have a meaningful strategic opportunity. And these guardrails and everything, they matter, they really do. But as we approach these things, seizing the opportunity, I think, is probably priority one.

Speaker 5

01:11:07 - 01:11:14

And then doing it well and carefully is a part of that. But it cannot be a bar that we have to leap over before we begin.

Speaker 14

01:11:15 - 01:11:34

I appreciate that and agree. Just very briefly, Dr. Mahmoudian and anyone else: you all hit on it a little bit, but we're talking about the DoD; in the research realm, the academic realm, how are we doing there, and what can we do better? I think I could guess, but.

Speaker 6

01:11:35 - 01:11:58

So when we are investing in the research side of it, it opens the door for us on the innovation side to also invest in research on the safety aspect of it, on these guardrails that were mentioned. So when we are investing in the research, we need to consider both in parallel, in order to make sure that we are always ahead of it.

Speaker 10

01:11:58 - 01:12:00

Thank you. I yield back, Mr. Chair.

Speaker 2

01:12:00 - 01:12:13

We'll now move to a second round of questions. I'll begin by recognizing myself for 5 minutes. I want to return to Mr. Gaetz's question about key allies in the AI competition. You all mentioned, you know, our most obvious allies.

Speaker 2

01:12:13 - 01:12:45

I mean, I think you're right, and I'm not detracting from that answer: Five Eyes, NATO, the EU. But I'd like to invoke Jared Cohen's concept of geopolitical swing states, perhaps countries that may not fit neatly within the free world paradigm. What are the emerging AI superpowers that we may not be thinking about, or let's just say states that punch above their weight when it comes to AI, that we need to be cultivating and ensuring they're not Finlandizing in the Chinese Communist Party's direction? I'll start with you.

Speaker 2

01:12:45 - 01:12:46

Mr. Wang.

Speaker 4

01:12:47 - 01:13:15

I do think it's really important. You know, as AI promises to be one of the most important technologies, both economically and militarily, there are a myriad of countries that are all getting involved. As I mentioned, the UAE has a very dedicated effort toward artificial intelligence. They have open source models, and they're continuing that series of developments toward building bigger and more powerful AI models. We don't know if they're going to open source them, but we'll see.

Speaker 4

01:13:15 - 01:13:52

I think it's important that, as they develop those, we try as hard as we can to make sure those follow our principles and our governance regimes. India is another key country, obviously, very critical when we think about geopolitical allies, but also when you think about their developments in AI: they have an incredibly active tech sector, and they have stated efforts to develop large language models within their country. So these are some of the countries, I would say, that from an AI perspective seem to be racing ahead, and ones where we want to ensure they are thinking about artificial intelligence and its impacts in the same ways that we are as a country.

Speaker 2

01:13:53 - 01:13:53

Mr. Kitchen?

Speaker 5

01:13:55 - 01:14:21

I would agree completely. I think this affects the way we think about our relationships. So right now, our technology supply chain is distributed in such a way that there are critical vulnerabilities. Many of the key nodes are deep within the Chinese sphere of influence. And where I think we're going is that we're going to try to build trusted technology ecosystems amongst trusted partners and allies.

Speaker 5

01:14:22 - 01:15:23

The idea is that we identify, particularly, Western democracies as being the types of organizations that we can partner with, so that we have mutually beneficial trade and technology relationships that are the core of future national security partnerships. That requires, however, a common purpose and a common understanding of the opportunities and the challenges. And one of the things I'm most concerned about is where many of our friends and allies in the European Union are, particularly on this issue. My point there being that when we think about military interoperability in these types of alliances, we also need to understand that military interoperability is going to be predicated on regulatory interoperability, and that is where we have a real gap between us and some of our key friends. The European Union seems to have concluded that to build their own domestic technology base, they have to deliberately constrain, and at times even decouple from, the American technology base.

Speaker 5

01:15:23 - 01:15:28

And that will not work for our shared purposes and is going to be a real problem going forward.

Speaker 2

01:15:28 - 01:15:29

Great point. Dr. Mahmoudian.

Speaker 6

01:15:32 - 01:16:03

So I completely agree with the other witnesses with regard to alliances. One of the things that we need to understand is that, for those types of swing states that were mentioned, we need to think about how we can align ourselves with them, to make sure that their advancement in AI is also aligned with the United States, so we would have that alliance with them while we are ensuring that we are still the leader in this space.

Speaker 2

01:16:04 - 01:16:41

Mr. Kitchen, you mentioned in written and oral testimony that, on hardware, we have a strong bipartisan consensus allowing us to constrain China's advancement. There's been some suggestion, maybe put to the Select Committee on China as we engage with Silicon Valley leaders, that while we admire the GPU export controls, and in fact being able to bring Japan and the Dutch along with us was great, and I give the Biden administration credit for that, there are loopholes whereby they're still able to access a tranche of the second most advanced chips right now. I'm curious for your comments on that, and Mr.

Speaker 2

01:16:41 - 01:16:43

Wang, yours as well, and I recognize I'm running out of time here.

Speaker 7

01:16:44 - 01:16:44

Mr. Wang.

Speaker 5

01:16:44 - 01:17:10

Yeah, so this goes to my previous point about an iterative process. I think you were referencing the October 7th rules, the export controls on integrated circuits. That was the first tranche, and now we're beginning to optimize and tighten those controls. It's not a surprise that government and industry are doing a bit of back and forth on this. I think there is a growing recognition between both stakeholders that action is necessary, and now we're trying to find the right way forward.

Speaker 5

01:17:10 - 01:17:12

I have high confidence that we'll do that.

Speaker 2

01:17:12 - 01:17:13

Quickly, Mr. Wang.

Speaker 4

01:17:15 - 01:17:28

It is true. You can see reports that ByteDance and other Chinese companies have bought billions of dollars of GPUs this year so far. So it is something that we need to be extremely careful and vigilant about. Mr. Khanna.

Speaker 3

01:17:32 - 01:17:49

When Chairman Gallagher and I had that conversation with Elon Musk, he said that AGI was 5 to 6 years away. I was surprised by that timeline. What is your sense of how long we are from AGI?

Speaker 4

01:17:53 - 01:17:58

AGI is an ill-defined concept. And I think many folks-

Speaker 2

01:17:58 - 01:18:00

Could you define it since we're not doing acronyms?

Speaker 4

01:18:00 - 01:18:02

Yeah. Yeah.

Speaker 2

01:18:02 - 01:18:03

You're the guy, sorry.

Speaker 4

01:18:04 - 01:19:10

AGI stands for artificial general intelligence, the idea that we would build an AI that is generally intelligent in the way that humans are. It's not a super well-defined concept, because even in using the current AI systems, you'll notice clear limitations, issues, and challenges that they have with doing even things like basic math. AGI as a concept is an enticing one that we in Silicon Valley talk about a lot, but I don't think it's very well defined, and it's certainly not something that should meaningfully affect how we think about putting one foot in front of the other for not only economic leadership but also military leadership. The reality is that the technologies of today, large language models, computer vision technology, and other AI systems that are being developed and deployed today, have immense bearing on the future of our world, whether that's from an economic perspective or from a military perspective. And that's why I think it's important that we set the foundations today, investing in data, investing in test and evaluation, to set up the foundations for long-term success.

Speaker 4

01:19:11 - 01:19:22

You know, my last comment as it comes to AGI prediction timelines: I think this is often a way to distract from the current conversation, which is, in my mind, very important.

Speaker 3

01:19:23 - 01:19:25

Mr. Kitchen, Dr. Mahmoudian.

Speaker 5

01:19:27 - 01:19:59

Yeah, I think Alex is exactly right about the idea of artificial general intelligence. I think what we will be seeing is increasingly agile and capable foundation models, these types of generative AI capabilities that are going to be more broadly applicable. So one of the features of these foundation models is something that's called emergent capabilities: the idea that we created this algorithm or this foundation model to be able to do a particular task, and lo and behold, it actually can do this other thing without having been trained to do so. So we're going to see that.

Speaker 5

01:20:00 - 01:20:09

That's a common feature. But I would say that the timeline that was given to you about artificial general intelligence in the next 5 years is aspirational.

Speaker 3

01:20:11 - 01:20:14

If it's good. Anyway. Yeah.

Speaker 6

01:20:15 - 01:21:01

Similar to the previous comments, it is aspirational, but what I would add to that is we are headed in that direction. We see, as mentioned with regard to foundation models, these types of models that can perform tasks that they were not necessarily trained on; they can generalize to some extent. However, while we are heading in that direction, obviously not in 5 years, we also need, while we are investing on the research side of it, to invest in the guardrails, the safety aspects of it, to make sure that we are able to mitigate the risks that we are anticipating with regard to artificial intelligence.

Speaker 3

01:21:01 - 01:21:21

Maybe I'll quickly ask my last question, which is: do you think we need any DoD clearance for any types of AI, like we have for nuclear technology? There are safeguards; there are only so many people who can get access to it. Mr. Wang, is there anything analogous in the AI space?

Speaker 4

01:21:22 - 01:22:02

So as we think toward military AI systems, so much of the next generation of capabilities are going to need to be built and trained on top of already classified data. So there's already an existing structure and regime to protect any models that are trained on classified data, whether it's at the secret or top secret or even beyond level, to ensure that those capabilities stay limited to certain audiences and stay controlled. I don't know if we need to build even more on that, but I think it's certainly true that most of the exquisite capabilities that the DoD looks to build are likely to be developed at the secret or top secret level.

Speaker 3

01:22:03 - 01:22:03

Thank you.

Speaker 2

01:22:04 - 01:22:05

Mr. Gaetz.

Speaker 10

01:22:05 - 01:22:28

I'm interested in the integration of AI and human performance. We are always very touched whenever we have casualties, in training or otherwise, that are preventable. What have any of you learned about where some of the potential lies in utilizing AI in integration with sensor technology or other types of human performance capabilities?

Speaker 4

01:22:30 - 01:23:04

You know, one of the areas where artificial intelligence, I think, has some of the greatest promise is, as Mr. Kitchen mentioned before, actually in logistics. If you look at one of the largest causes of casualties, it was actually in transporting fuel and other resources for the military. This is an area where autonomous vehicles, or even leader-follower setups, are able to greatly improve efficiency as well as reduce casualties for the military. And it's one of the goals of the Army's Robotic Combat Vehicle program that we're collaborating with them on.

Speaker 4

01:23:05 - 01:23:32

As we look further, these AI systems are assistive technologies. And with our Scale Donovan platform, we're able to assist in key decision making. This is being utilized right now in military planning exercises to help ensure that all of the data and information that the DoD has access to is being integrated into the correct military decisions. So there's an incredibly bright future, I think, for assistive use of artificial intelligence to make the DoD more effective.

Speaker 5

01:23:34 - 01:24:31

This is one of the most exciting things about AI, in my view: its ability to help expand human thriving. Many will have seen a commercial from one technology provider whose phone could help users who have speech pathologies or difficulties communicate more effectively. OpenAI's ChatGPT has a function for vision-impaired individuals where it describes images for them so that they can participate in knowledge gain and application. And then, when we think about the military context, it's going to be AI, the underlying technology and capability, that enables everything from allowing paraplegics to walk again to brain injury prevention and recovery. I mean, the things that this technology can do; again, I'm not an idealist on this, but the promise is real, and what it means for our society in general, I think, is very promising.

Speaker 10

01:24:32 - 01:25:03

As the son of a paraplegic mother, that's an inspiring concept. Doctor, I wanted to put a little different twist on that question to you. I've talked with my colleague, Mr. Khanna, to some degree about how we ought to measure the soft power capabilities of some of these AI platforms. How is it that ethicists are thinking about what it would mean for the United States, as opposed to China, to be the leader in deploying a hundred thousand AI robot doctors into Africa or Latin America or somewhere else in the global south?

Speaker 6

01:25:07 - 01:25:36

It's all about how we want to have our values embedded into these AI systems. When we are thinking about these principles, one area, especially for the DoD, is for the systems to be governable. So depending on the level of risk that these systems pose, we want to have oversight. In some cases the risk is low, so we may want to let the AI make the decision. Imagine a benign example, being recommended a movie.

Speaker 6

01:25:36 - 01:25:57

That might be bad. But in specific cases, especially the ones that are lethal, we do not want the AI to make the decision. We want human oversight. We want AI to be used to provide us information, patterns that we may not have seen, so we would use that information, and we humans would be able to make the judgment.

Speaker 6

01:25:57 - 01:26:00

So these are elements that we need to consider when we are thinking about this.

Speaker 10

01:26:00 - 01:26:07

That will substantially impact scalability and just the scale of being able to deploy the tech, I would think.

Speaker 6

01:26:07 - 01:26:36

If we have a comprehensive governance process, this actually does not necessarily have to be viewed as an obstacle with regard to scalability. A robust and comprehensive governance process actually enables us to have standards and policies in place that can easily apply to any AI use case that we have. So with that foundation of AI governance, we would be able to replicate the process for any AI use case

Speaker 7

01:26:36 - 01:26:36

that we have.

Speaker 10

01:26:36 - 01:26:59

Thank you. And I haven't given you enough time to answer this question, Mr. Wang, but one of the things that I'm sure we'd like to explore with you further, when we get into this test and evaluation paradigm that you keep coming back to in your testimony, is getting a concept of what the core principles of that test and evaluation regime would look like. And I hope you'll continue to work with the subcommittee on that.

Speaker 2

01:27:01 - 01:27:30

I don't think there are any more. I'm sorry, I'm going to do a third round, but it'll go very quick, trust me. And I'm going to apply what I call the Justice test, which is a reference to my 96-year-old grandmother, Virginia Justice, who's very smart but is not even a member of the flip phone generation, let alone the AI generation. So I want you to imagine you're sitting across from my grandma; you each have an old fashioned in hand. Her late husband was a World War II vet.

Speaker 2

01:27:30 - 01:27:48

And you get to explain to her what she needs to know about AI and why this conversation matters, both for the future of warfare as well as for her life and the lives of her children and grandchildren. What do you say to the great and beautiful Virginia Justice?

Speaker 4

01:27:50 - 01:28:50

If we look toward World War II and the last era of conflict, new technologies like the atomic bomb were critical in ensuring both that we had American leadership and that the values that America upholds were able to continue to prosper and set the tone for the development of the world. We're now embarking on a new era of the world, one in which a new technology, artificial intelligence, is likely to set the stage for the future of ideologies, the balance of global power, and the future of the relative peace of our world. Artificial intelligence is an incredibly powerful technology that underpins nearly everything that we do from an economic and military standpoint. And therefore, it's critical that we as a nation think about how we not only protect our citizens from the risks of artificial intelligence, but also protect our ideologies and democracy by ensuring we continue to be leaders. Mr.

Speaker 4

01:28:50 - 01:28:51

Kitchen.

Speaker 5

01:28:54 - 01:29:20

Ma'am, there is a new technology that under the right circumstances could protect your grandchildren and this nation, that could make this nation economically and militarily strong enough to defend its people and its interests, and a technology that in the wrong hands could imperil those same things. And it's really important that your government and industry work together to realize those promises and to mitigate those threats.

Speaker 2

01:29:21 - 01:29:23

Great, Dr. Mahmoudian.

Speaker 6

01:29:25 - 01:29:58

It's a technology that's pretty much embedded in our day-to-day lives. We are living with it, we are breathing with it. So we want to make sure that this technology that's part of our life has our values, the values that we fought for, incorporated into it, so we still would have our civil liberties and civil rights while using this technology and leveraging it to have a better quality of life.

Speaker 2

01:29:58 - 01:30:07

Great. By the way, it just occurred to me: though I love being a Gallagher, if I had my mother's maiden name, Justice, I mean, I'd probably be president at this point. That's such a better...

Speaker 3

01:30:07 - 01:30:08

And a progressive.

Speaker 2

01:30:08 - 01:30:12

Yes. Well played. Any other questions?

Speaker 3

01:30:13 - 01:30:13

Very good.

Speaker 2

01:30:14 - 01:30:43

Okay, a bit of housekeeping before we adjourn. I want to enter 3 things into the record quickly. The first is the article I referenced before by Jared Cohen on the rise of geopolitical swing states, published on May 15, 2023. The second is something that you, Mr. Wang, wrote in November of last year on the AI war and how to win it, in which you say: we must recognize that our current operating model will result in ruin; continuing on our trajectory for the next 10 years could result in us falling irrevocably far behind.

Speaker 2

01:30:43 - 01:31:12

Why do large organizations often continue on the path to their demise, even if the future is painfully obvious? The reason is inertia. Bureaucracies will continue to glide deep into the abyss for an eternity. And then the third is a recent article by Marc Andreessen, which articulates the optimistic case for AI, entitled "Why AI Will Save the World," in which he says, the single greatest risk of AI is that China wins global AI dominance, and we, the United States and the West, do not. I propose a simple strategy for what we do about this.

Speaker 2

01:31:12 - 01:31:34

In fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union, which is we win and they lose. So I ask unanimous consent to enter all 3 of those into the record. Without objection, so ordered. I ask unanimous consent that members have 5 days to submit statements for the record. And the hearing stands adjourned.

Speaker 2

01:31:35 - 01:31:36

Great.
