1 hour 21 minutes 57 seconds
Speaker 1
00:00:01 - 00:00:30
Welcome to yet another episode of Cast It. This is our second episode where we are on the road. We are recording at ALGO, the main European algorithms event, which in 2018, this year, is in Helsinki, Finland, hosted by Aalto University. My guest today is Tim Roughgarden. Tim is a professor at Stanford University in California in the US, and the winner of a gazillion prizes, including the Grace Murray Hopper Award and the Gödel Prize.
Speaker 1
00:00:31 - 00:00:37
Tim, welcome to the show. I'm really happy to have you here. Happy to be here. Thanks. So you're a professor of many things, actually.
Speaker 1
00:00:37 - 00:00:42
You're not only a computer science professor, but there's a very long title on your job description.
Speaker 2
00:00:42 - 00:01:11
Well, I consider myself first and foremost a computer scientist. It is true that at Stanford I have a courtesy, or 0 percent, appointment in the management science and engineering department. Yeah, so there used to be an operations research department and an industrial engineering department, and they sort of merged; that's now called management science and engineering. So they have a really nice optimization group that I connect with. And then I also connect a lot with the economists at Stanford, both in the econ department and also in the business school.
Speaker 2
00:01:11 - 00:01:13
And I don't have any formal affiliation with them. I
Speaker 1
00:01:13 - 00:01:36
see. But I guess this is what we want to talk about today, the intersection between economics, which is an academic discipline, and computer science, which is another academic discipline. And both of them have some effect on reality, even. So let's see how far we get there. So game theory, I'm old enough to have been brought up in a computer science universe that was completely innocent of any game theory.
Speaker 1
00:01:36 - 00:01:38
And that all changed in the, what, late 90s?
Speaker 2
00:01:39 - 00:01:49
Yeah, late 90s, I would say. Why? Well, you know, mid-90s or so was about the time the Internet started really exploding, taking over the world and becoming very commercialized
Speaker 1
00:01:49 - 00:01:52
because there was computer science before the internet
Speaker 2
00:01:52 - 00:02:38
There was, believe it or not. There was computer science before the internet. But I think most computer scientists would say the field changed quite a bit with the advent of the internet. And you know, to many people, not just in theory but across all branches of computer science, you saw a lot of research changing a lot in the mid to late 90s for this reason. And theoretical computer science was no different. I think the community did a great job of responding to the new challenges that were coming up because of the advent of the internet. And specifically, what you alluded to, this connection to game theory and economics: one thing that was really novel was that the computer science applications we really cared about, that were coming out at that time, fundamentally called out for game-theoretic reasoning. They really fundamentally involved the interaction of different competing parties.
Speaker 2
00:02:38 - 00:02:55
So initially, we were thinking about things like internet service providers. They're choosing how to route their traffic. They're choosing how to do their peering agreements. And it was clear that an internet service provider, they're kind of neither honest nor adversarial. They're not going to just sort of play along with some protocol if it's not in their own best interest.
Speaker 2
00:02:55 - 00:03:30
But they're also not going to bring the internet to a halt, because they have such a vested interest in it going on. And that was the big contrast. I would say there are exceptions, but for the most part, in work in computer science prior to that, either you just had one entity, you were just building software for a single desktop and it just did what you told it to, or, if you had multiple parties, like say in distributed systems, you'd model people as either cooperative slash honest or adversarial, like in Byzantine fault tolerance or something like that. Some of the nodes you would just assume obediently follow the protocol.
Speaker 2
00:03:30 - 00:03:38
Other nodes you would assume are adversarial, but there's no notion of a utility function and different parties acting according to some utility function. But that was really...
Speaker 1
00:03:38 - 00:03:56
That's a lot of words already. Let's unpack that a bit. Okay. So before the internet, it's basically an engineering discipline or a math discipline where the thing you build is also the thing you analyze. And then what happens with the Internet is that there are many different players with various incentives who not necessarily co-op...
Speaker 1
00:03:56 - 00:04:05
Well, maybe they have shared goals, but they don't have the same, what was the word, utility function? So not the same payoff from being active in this environment?
Speaker 2
00:04:05 - 00:04:23
Sure. So for example, you could imagine each internet service provider, they want the internet to survive. But they also want to grab as much traffic as they can for themselves. They want to grab customers from the other internet service providers. So they're simultaneously cooperating to make sure routing works on the internet, but also competing for market share.
Speaker 2
00:04:23 - 00:04:29
So that mixture of cooperation and competition didn't really have much of an analog in prior work in computer science.
Speaker 1
00:04:29 - 00:04:34
But there was another discipline who had studied this for a generation at that time, I guess?
Speaker 2
00:04:35 - 00:04:46
Well, so, I mean, you know, economics has been studying incentives and equilibria for a long time, since before the birth of computer science as an intellectual discipline, I would even say.
Speaker 1
00:04:46 - 00:04:49
Which is really young, which is 70s or something. True.
Speaker 2
00:04:50 - 00:04:52
True. Well, maybe 50s, but yeah,
Speaker 1
00:04:52 - 00:04:56
maybe 70s. But Stanford was the first department, wasn't it?
Speaker 2
00:04:56 - 00:05:00
They were very early. Yeah. I'm not sure they were the first, but they were very early,
Speaker 1
00:05:00 - 00:05:03
for sure. And let's put it at 45, 48.
Speaker 2
00:05:03 - 00:05:05
I'm happy with that.
Speaker 1
00:05:05 - 00:05:07
But economics is older. Absolutely.
Speaker 2
00:05:07 - 00:05:12
And game theory specifically, right? So John von Neumann, for example, was doing a lot of his foundational work in game theory in
Speaker 1
00:05:12 - 00:05:16
the 1920s. Who incidentally was also one of the first people to build a computer.
Speaker 2
00:05:16 - 00:05:23
Exactly. Yes. Exactly. So, in some sense, computer science and game theory have this shared heritage, in part coming from him.
Speaker 1
00:05:23 - 00:05:34
Oh, that's beautiful. But we were ignorant of that. If you want to reduce this to one guy, and you ignore Turing, then it's von Neumann. And he invents two disciplines. And he
Speaker 2
00:05:34 - 00:05:47
certainly played a crucial role in two disciplines. Yes. Yeah, exactly. As you say, on the computer science side, Turing and others. And even on the economics side, there were people like Borel thinking about equilibria in poker games and so on beforehand.
Speaker 2
00:05:47 - 00:05:56
But, you know, I think for most people, von Neumann's minimax theorem is really sort of a watershed event in the development of game theory. That field really exploded a little bit later in
Speaker 1
00:05:56 - 00:05:56
1944
Speaker 2
00:05:57 - 00:06:07
when von Neumann wrote his book with Oskar Morgenstern, Theory of Games and Economic Behavior. And at that point, really, you know, everybody knew that this was an important new field that required attention.
Speaker 1
00:06:09 - 00:06:14
But computer science and game theory continued, innocent of each other, until the 90s.
Speaker 2
00:06:14 - 00:06:34
That's right, that's right. So there's sort of this shared heritage, with von Neumann being important in both. But really, as the 20th century unfolded, they developed basically completely in parallel, with very little communication. Again, some exceptions. There are definitely some forward-thinking visionaries.
Speaker 2
00:06:34 - 00:06:47
Herb Simon would be an early one. Christos Papadimitriou would be an example: more recently, in the 1980s, he was starting to look at connections between the two fields. But for the most part, it's just been in the last 20 years that they've started talking to each other.
Speaker 1
00:06:48 - 00:07:14
And I guess even before we continue that, we need to spell out what game theory is, because we are, in this show, epistemologically extremely vociferous, and there is another conversation I have with somebody in game studies, and that's something completely different. That's a field of studies that studies computer games as narrative objects or as phenomena in literature. And when you say game theory, you mean something completely different.
Speaker 2
00:07:14 - 00:07:37
Yes, so game theory. It's really thinking about what happens when you get different parties in a common strategic situation. For example, a very early example would just be like a card game, a so-called zero-sum game, where either I win or you win. So in game theory, you have to identify, first of all, who are the participants? So in a card game, it might just be you and me.
Speaker 2
00:07:37 - 00:08:02
You have to identify what are the strategies available to each of the participants. So these could be, for example, different ways you could bet in Texas Hold'em or something like that. And then you need a notion of payoffs, which says, given a choice of action by every participant, what is the payoff earned by every player? Which in a card game could just be the amount of money that you win. So that's sort of its origins from games of chance.
Speaker 2
00:08:02 - 00:08:14
But increasingly, as the 20th century went on, people realized it might be interesting to think about real settings from this perspective, too. One famous and very important example is known as the Prisoner's Dilemma.
Speaker 1
00:08:15 - 00:08:16
Please spell that out.
Speaker 2
00:08:16 - 00:08:28
Sure. So this is a game. There are two players, two criminals, or alleged criminals, who have been charged with a crime. Each one has a set of two strategies. They can either confess to the crime or not.
Speaker 2
00:08:28 - 00:08:59
They can deny that they did the crime. And the payoff structure is such that if both of the criminals deny that they did the crime, they're instead locked up on a sort of lighter charge. So they each serve, say, a one-year prison sentence if they both say, no, we didn't do it. If, on the other hand, one of them rats out his colleague, so one person denies the crime, but the other person says, no, no, no, we did it, you got us.
Speaker 2
00:08:59 - 00:09:20
Then the confessor, as a witness, is set free and serves no jail time. But the other person, the person who denied the crime, gets convicted and receives a long jail sentence, say 5 years. If they both confess, then let's say they both serve 3 years of prison time. Now, what do you think is going to happen? So that's the question game theory asks: what's going to happen in the prisoner's dilemma?
Speaker 2
00:09:20 - 00:09:36
And it depends how you set it up. But for the simplest analysis, you could reason as follows. You could say, look, on the one hand, the prisoners collectively would be best off if they both denied the crime. Because they each serve one year of time. That's two years total.
Speaker 2
00:09:36 - 00:09:56
And that's less than they would get with any other combination of behaviors. So it's in some sense collectively optimal for both prisoners to deny the crime. However, it is not individually optimal for either one. If I'm one of the prisoners, I'm thinking, well, what are my options? I could either deny or confess.
Speaker 2
00:09:57 - 00:10:04
What are the two things my colleague could be doing? If he confesses, then if I deny, I get the 5 years.
Speaker 1
00:10:04 - 00:10:05
Really bad. Really bad.
Speaker 2
00:10:05 - 00:10:20
And if I confess, I do better. What if he denies? Well, then, it's the same thing. So either way, it's in your best interest to actually rat out your colleague. So this is the notion of a dominant strategy in a game.
Speaker 2
00:10:20 - 00:10:25
A strategy that's always in your best interest, that always maximizes your payoff independent of what the other person does.
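A minimal Python sketch of the dominance check just described, using the sentence lengths from the example; the dictionary encoding is illustrative, not anything from the conversation:

```python
# Prison years for (my_move, colleague_move), lower is better, using
# the numbers from the example: both deny -> 1 year each, both
# confess -> 3 years each, a lone confessor goes free while the
# denier serves 5 years.
YEARS = {
    ("deny", "deny"): 1,
    ("deny", "confess"): 5,
    ("confess", "deny"): 0,
    ("confess", "confess"): 3,
}

# "Confess" is a dominant strategy: whatever the colleague does,
# confessing never costs more prison time than denying.
for his in ("deny", "confess"):
    assert YEARS[("confess", his)] <= YEARS[("deny", his)]
    print(f"colleague plays {his}: confess -> {YEARS[('confess', his)]}y, "
          f"deny -> {YEARS[('deny', his)]}y")
```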
Speaker 1
00:10:25 - 00:10:27
In both cases of what the other person does.
Speaker 2
00:10:27 - 00:10:40
In both cases of what the other person could do. So that's the, Yeah, that's what dominant strategy means. So I don't even have to worry about whether you're confessing or not. Independent of whether you confess or don't confess, I'm always going to be
Speaker 1
00:10:40 - 00:10:56
better off. Which is counterintuitive, because if both prisoners were just zombies controlled by a third person who was able to determine what both of these zombie prisoners did, they would both be even better off.
Speaker 2
00:10:56 - 00:10:57
That's right. That's right.
Speaker 1
00:10:57 - 00:11:08
And then the mystery is why they can't just get their act together and collectively perform exactly the strategy that would make the entire community, in this case both prisoners, better off.
Speaker 2
00:11:08 - 00:11:25
Right. So definitely, the story usually goes with the prisoners being separated in two different rooms. They may have communicated in advance and planned out their strategy, but the idea is that once the prisoners are in their separate rooms, the jailers say, your colleague already told us what they're doing.
Speaker 1
00:11:25 - 00:11:31
I've watched enough crime shows to know this is exactly what happens. This is exactly what the police will do to you.
Speaker 2
00:11:31 - 00:11:54
Yeah, exactly. So, I mean, I think this is one thing game theory is really great for. These games are really almost like parables, which you can take with you and understand a large part of the world through. And the Prisoner's Dilemma is a great example. You can look at a lot of problems out there in the world where fundamentally what's going on is this conflict between collective optimality and individual optimality.
Speaker 1
00:11:55 - 00:12:05
And our normal tools from computer science, where we just see all these actors as programs that I as a programmer have written, are very poor at attacking these things.
Speaker 2
00:12:05 - 00:12:05
Good point.
Speaker 1
00:12:05 - 00:12:15
The whole idea of, say, engineering, of collectively optimizing various parts that work together, simply breaks down when analyzing these scenarios where there are different incentives.
Speaker 2
00:12:16 - 00:12:30
Right. So the old-school computer science approach might just be, well, let's just program each of the two parties so that they do the right thing. And then, as long as there are no adversarial or faulty machines, you get a guarantee of collective optimality. That's right. But that's not reality in a lot of applications.
Speaker 2
00:12:30 - 00:12:32
There is this notion of sort of individual autonomy
Speaker 1
00:12:33 - 00:12:49
and self-interest. OK, so games in the narrow sense, card games, board games, are covered by game theory. But game theory also wants to model the behavior of intelligent, independent agents that each maximize their own utility.
Speaker 2
00:12:49 - 00:13:03
That's right. So another really good example would be the tragedy of the commons. And this again shows up in all kinds of places in the real world. You can think, for example, about pollution. So imagine you have a bunch of factories.
Speaker 2
00:13:03 - 00:13:32
Those are going to be the parties in the game, these factories. And their strategy space is: either they use a cheap technology that creates a lot of pollution, or they use an expensive technology that creates less pollution. And each of them has to make this decision. And in many cases, the payoffs are such that the incentive to pay for the expensive technology is not sufficient for the individual to actually do so. So in other words, it could be that if some factory polluted less,
Speaker 1
00:13:32 - 00:13:33
if
Speaker 2
00:13:33 - 00:14:00
you took into account the benefits for everybody, it would far outweigh the cost to that factory. But in the absence of regulation or cultural norms or something, the factory will say, well, yeah, but the benefits to me of using this expensive technology don't outweigh my costs of having this expensive technology. So again, if you ask what you expect to happen, what's the equilibrium? Again, it's a dominant strategy for every factory to use the cheap technology and pollute.
Speaker 2
00:14:00 - 00:14:10
So you wind up with this worst case scenario where everybody's polluting as much as possible, and everyone's much worse off than they would be had they all agreed to buy the expensive technology.
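A toy numerical version of that equilibrium argument, as a sketch only; the numbers n, h, c are invented for illustration, chosen so that one polluter's harm h is below the clean-technology cost c, while the total harm n*h is far above it:

```python
# n factories each choose to pollute (cheap) or go clean (cost c).
# Each polluter imposes harm h on every factory, including itself.
n, h, c = 10, 1.0, 3.0   # invented numbers with h < c < n*h

def my_cost(i_pollute: bool, others_polluting: int) -> float:
    """One factory's total cost: its technology cost plus the harm
    it suffers from all polluters (possibly including itself)."""
    polluters = others_polluting + (1 if i_pollute else 0)
    return (0.0 if i_pollute else c) + h * polluters

# Polluting is a dominant strategy: whatever the other n-1 do...
for k in range(n):
    assert my_cost(True, k) < my_cost(False, k)

# ...yet all-pollute costs each factory n*h = 10.0, while all-clean
# would cost each factory only c = 3.0.
print(my_cost(True, n - 1), "vs", c)
```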
Speaker 1
00:14:10 - 00:14:28
So now you're sitting here as an American in one of the main Scandinavian welfare states, where... So this in the end is a question of politics in many, many respects. How much regulation do we want, and how much do we want individual agents in the market to just optimize their own welfare?
Speaker 2
00:14:28 - 00:14:30
Yeah, their own utility.
Speaker 1
00:14:30 - 00:14:31
Utility is the right word.
Speaker 2
00:14:31 - 00:15:11
And so a key term you hear here is the idea of an externality. An externality is some benefit or cost caused by your action that's borne by somebody else. So in the tragedy of the commons, there's what's called a negative externality, in that when I make my decision to build the cheap technology, there's a cost to society that is largely borne by everybody else. And what you learn in introductory economics is that when you have a market with externalities, that's somewhere you should consider regulation. So you could imagine trying to have regulation that coaxes everybody into actually playing the collectively optimal strategy.
Speaker 1
00:15:11 - 00:15:20
And just to spell out the commons and the tragedy of the commons: that's the large green pasture that everybody can put their cows or sheep out to graze on?
Speaker 2
00:15:20 - 00:15:22
Yeah, I think originally it was a bovine example.
Speaker 1
00:15:22 - 00:15:24
It's a bovine example.
Speaker 2
00:15:24 - 00:15:35
In the pollution context, I think the commons would just be air quality that we all breathe. And the tragedy would be that that resource gets degraded because of this uninternalized externality.
Speaker 1
00:15:36 - 00:15:41
Because none of the individual factories is individually incentivized to reduce the pollution.
Speaker 2
00:15:41 - 00:15:54
That's right. That's right. So, one way to think about it: what would it mean for this externality to be internalized? Well, suppose that I actually said, if you reduce your pollution, if you build the expensive technology...
Speaker 1
00:15:54 - 00:15:54
You are now the state?
Speaker 2
00:15:56 - 00:15:58
For example. Yeah, yeah.
Speaker 1
00:15:58 - 00:16:00
Finland. You are now Finland.
Speaker 2
00:16:00 - 00:16:20
Right, right. So, because the social value of your choice is so high, we will cover your cost, your investment in the expensive technology. Because overall, it will be a win: you'll have healthier people and lower medical costs. This is the kind of decision you could imagine making in the face of the tragedy of the commons.
Speaker 1
00:16:21 - 00:16:31
Hmm? Yeah. Yeah, so these are obviously very, very important questions that are on everybody's minds right now, not least with climate change and whatnot. And economics provides this model for reasoning about them. Absolutely.
Speaker 1
00:16:33 - 00:16:40
And this is where you come in with your own thesis, right? From 15, 20 years ago by now.
Speaker 2
00:16:40 - 00:16:59
Indeed. So both the prisoner's dilemma and the tragedy of the commons, they do show up everywhere. Once you learn to think about the world in this way, you can recognize so many things out there as really just instantiations of a couple of these canonical examples. And computer science is no exception.
Speaker 2
00:16:59 - 00:17:16
So you look around in computer science, and you see examples of the prisoner's dilemma. So when I was a graduate student and first started thinking about this, I was looking at network routing. That's what I was interested in at the time: networks. And then, with the internet coming out, it was clear it was time to think about game-theoretic behavior in networks.
Speaker 1
00:17:16 - 00:17:21
And networks here are good old-fashioned networks, not social networks, but actual networks, wires connecting computers.
Speaker 2
00:17:21 - 00:17:35
Good point, yes. So there are different interpretations. I mean, the really old-school interpretation of a network would be a road network. And this is the one that's still probably the easiest for most of us to understand. So literally, you just have roads meeting at intersections.
Speaker 2
00:17:36 - 00:17:50
The actors are just drivers. Each one has their own car. Obviously, as a driver, at least in most cases, you can pick whatever route you want. And there can, again, be a conflict between how good a route is for you and how good a route is for somebody else. Let me explain why that is.
Speaker 2
00:17:50 - 00:18:14
So why is it that a single driver, choosing which route to drive on, affects other people? Well, if you think about it, on a given road with some number of cars using it, every extra car, after a certain point, slows everybody down. By congestion. By congestion, exactly. So you might prefer, for your own sake, to take some route because you'll get there faster.
Speaker 2
00:18:14 - 00:18:32
But there might be 100 other people that are getting there slightly slower now that you've made that decision. So that again is an example of an uninternalized externality. So this is known as a congestion externality. You cause congestion which makes it worse for other people but you don't factor that into your own individual decision. You're just trying to get from point A to point B as quickly as you can.
Speaker 1
00:18:32 - 00:18:36
It's still faster than taking the bus, I guess, which everybody could take.
Speaker 2
00:18:37 - 00:18:46
Right, right. Yep. Right, so exactly. So if you have a car and you're going to get there faster, and you're okay with the gas cost versus the bus fare,
Speaker 1
00:18:46 - 00:18:46
you
Speaker 2
00:18:46 - 00:19:02
would just do it. It's hard to see why you wouldn't do it. You wouldn't not do it just because you make these other people's lives a little bit worse, typically, if you look at how drivers make decisions. Exactly. So this uninternalized externality, again, gives you the prisoner's dilemma all over again.
Speaker 2
00:19:02 - 00:19:07
So there's actually a very nice example discussed by the economist A.C. Pigou all the way back in
Speaker 1
00:19:07 - 00:19:07
1920.
Speaker 2
00:19:08 - 00:19:25
He only discussed it qualitatively, but still, he makes all of the right points about congestion externalities. So he says, imagine, again talking about these road networks, that you just have a point A and a point B. So you have a bunch of people leaving for work at roughly the same time, and then there's the place where the factory is.
Speaker 2
00:19:25 - 00:19:45
And imagine there are only two different routes to get from point A to point B. So it's the simplest routing decision you can imagine: you take the first road, or you take the second road. Now, Pigou's example gets interesting when you assume that the two roads have very different characteristics, where one of them, on the one hand, is pretty long. So it takes a long time to get from point A to point B.
Speaker 2
00:19:45 - 00:19:55
But it's very wide. So it really doesn't suffer from congestion effects. So for example, maybe it always takes an hour. It takes an hour if nobody's on it. It takes an hour if everybody's on it.
Speaker 2
00:19:55 - 00:20:07
It doesn't matter. That's 1 of the roads. The other road is going to be shorter but narrower. So it will suffer from congestion, which means the more people use it, the slower it will be to take that road.
Speaker 1
00:20:10 - 00:20:11
OK, I can see that.
Speaker 2
00:20:11 - 00:20:23
So you can imagine, for example, maybe if nobody takes the short road, it takes 30 minutes. If half the population takes it, it takes 45 minutes. And if 100% of the population takes it, it takes an hour. Everybody an hour. That being an example.
Speaker 1
00:20:23 - 00:20:27
An hour which would also be the same time as taking the first road. Good point. The wide road.
Speaker 2
00:20:27 - 00:20:28
Good point. So it's called
Speaker 1
00:20:28 - 00:20:31
the wide road and the narrow road, because otherwise I'm getting confused.
Speaker 2
00:20:31 - 00:20:43
Sounds good. Right. So the wide road is the one that always takes an hour. And let's assume that the narrow road gets congested: it's faster unless everybody uses it, at which point its travel time is also an hour.
Speaker 2
00:20:45 - 00:20:55
So now, thinking game-theoretically, you ask the usual question: given the setup, and assuming the drivers want to get from point A to point B as quickly as possible, what do we think is going to happen?
Speaker 1
00:20:55 - 00:21:06
OK, let's see if I understand this. Then I, as an individual driver, would take the narrow road no matter what. Because even if nobody else takes the narrow road, I'm happy: I'm there in, what did we say, 30 minutes?
Speaker 2
00:21:07 - 00:21:09
If nobody takes it, you're there in 30 minutes.
Speaker 1
00:21:09 - 00:21:17
30 minutes, great, I will take that. If everybody else takes it, I'm still there in just shy of 1 hour. And if I take the wide road, it will take 1 hour no matter what.
Speaker 2
00:21:17 - 00:21:17
Exactly.
Speaker 1
00:21:18 - 00:21:26
So by reasoning similar to the prisoner's dilemma, case by case, I can infer that it would be stupid not to take the narrow road.
Speaker 2
00:21:26 - 00:21:33
So your worst-case scenario on the narrow road is as good as the best-case scenario on the wide road. That's another way of saying it.
Speaker 1
00:21:34 - 00:21:36
Everybody taking the narrow road.
Speaker 2
00:21:37 - 00:21:49
Because you as an individual, right? The worst thing that could happen on the narrow road is that it's just as slow as the best case on the other road. For the individual. For the individual. So it's a no-brainer.
Speaker 2
00:21:49 - 00:21:51
It would be crazy to take this hour-long road.
Speaker 1
00:21:51 - 00:21:56
Unless I'm a really, really Scandinavian-minded person who proudly takes the wide road.
Speaker 2
00:21:56 - 00:21:59
Internalizes the externalities. Yes, yes, yes. Indeed, indeed. So, absent...
Speaker 1
00:21:59 - 00:22:00
Just a virtue signal.
Speaker 2
00:22:00 - 00:22:03
Absent some altruistic behavior of that form.
Speaker 1
00:22:03 - 00:22:04
Let's ignore that.
Speaker 2
00:22:04 - 00:22:24
This is what you have, yeah, exactly. And your response might be, OK, well, so what? I mean, this one road is better than the other road. What's the big deal if everybody takes it? But the thing to realize is that actually, collectively, people would be better off if you offloaded some of that traffic from the narrow road to the wide road, say 25% of the traffic.
Speaker 1
00:22:24 - 00:22:27
People here as in the collective people?
Speaker 2
00:22:27 - 00:22:39
Yeah, the average driver, put it that way. The average driver would be better off if you rebalanced a little bit, if you took some of the traffic off of the narrow road and put it on the wide road.
Speaker 1
00:22:39 - 00:22:45
So let's see if I can do that. If half the drivers take the narrow road, then each of them takes 45 minutes?
Speaker 2
00:22:45 - 00:22:46
Yeah.
Speaker 1
00:22:46 - 00:22:53
Is that the game? Yeah. And the other half still take the wide road. They take 1 hour. So we are, oh my god, somewhere in the low 50s.
Speaker 2
00:22:53 - 00:22:55
So the average travel time is 52 and a
Speaker 1
00:22:55 - 00:22:57
half at that point. Yes, 52 and a half.
Speaker 2
00:22:57 - 00:23:14
Exactly. But I think the intuition also should be clear, which is that, look, in the old equilibrium, it took everyone an hour. OK, so you put some people on this other road, but they also take an hour. So they're no worse off. But now the people still on the narrow road enjoy this relatively uncongested ride, so they get there faster.
Speaker 2
00:23:15 - 00:23:22
So you've made some of the people better off and nobody worse off. That's called a Pareto improving solution. And that's always sort of a good thing, basically.
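The arithmetic of that comparison, as a quick Python sketch; the cost function 30 + 30x for the narrow road is the linear interpolation implied by the numbers above:

```python
# Pigou's example: wide road always takes 60 minutes; the narrow road
# takes 30 + 30*x minutes when a fraction x of the drivers use it.
def avg_time(x_narrow: float) -> float:
    narrow = 30 + 30 * x_narrow        # travel time on the narrow road
    wide = 60                          # travel time on the wide road
    return x_narrow * narrow + (1 - x_narrow) * wide

print(avg_time(1.0))  # 60.0: the equilibrium, everyone on the narrow road
print(avg_time(0.5))  # 52.5: the 50/50 split discussed above
```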
Speaker 1
00:23:23 - 00:23:24
Except it feels wrong.
Speaker 2
00:23:24 - 00:23:26
Why does it feel? Because of the unfairness.
Speaker 1
00:23:26 - 00:23:26
Of the unfairness.
Speaker 2
00:23:26 - 00:23:36
Right. So that's an excellent point. You could imagine different ways of mitigating the unfairness. You could imagine, you know, that some people are allowed to take one road on even days and the other one on odd days.
Speaker 2
00:23:36 - 00:23:36
Oh, or
Speaker 1
00:23:36 - 00:23:41
flip a coin, or a government official tells me in the morning, today is a wide road day for you. Exactly.
Speaker 2
00:23:41 - 00:23:45
But I completely agree that is a potential impediment to actual implementation.
Speaker 1
00:23:45 - 00:23:53
Okay, but the fairness thing can be removed from the scenario, I guess, by something like coin flipping or buying the privilege of...
Speaker 2
00:23:54 - 00:24:21
Yes, you can also have an economic solution, that's right. And the other thing I'll say is, even if you discard all of these solutions as unrealistic, there are other examples, including a famous example known as Braess's paradox, where actually, if you had regulation, everybody would be better off by exactly the same amount. So there's a network where, if people just do what they want, it takes everybody an hour. But if you could actually dictate routes to the traffic, everybody would take 45 minutes.
Speaker 1
00:24:21 - 00:24:21
So
Speaker 2
00:24:21 - 00:24:42
you'd still be perfectly fair, and everybody would just be better off. So that's in a slightly more complicated network, with not just points A and B but also C and D. So there are even cases where fairness is clearly not the problem; the problem is clearly just the inefficiency of this individual optimization and the uninternalized externality.
Speaker 1
00:24:45 - 00:25:13
OK, but I guess I can then still say: why didn't we build the network in a way that avoids this? So this seems to me now a question of network design, right? So when I build society, I should build it so that I can have... oh, let me build this up differently. You're pointing to the fact that regulation would then help the global welfare?
Speaker 2
00:25:13 - 00:25:14
Potentially, yeah.
Speaker 1
00:25:14 - 00:25:19
Potentially, but that seems to be an artifact of the design of the network.
Speaker 2
00:25:20 - 00:25:46
Well, I would say, I mean, whenever you have congestion, it's not easy to design an actual network where congestion doesn't lead to uninternalized externalities. You know, there are cases where it works out fine. Like, if you just had two copies of the wide road, there'd be no problem. If you just had two copies of the narrow road, there would also be no problem. People would naturally split 50-50.
Speaker 2
00:25:47 - 00:26:07
So you're right that there are cases where it's not a big deal. But if you look at actual real-world networks, it's definitely there. And in fact, there have been a couple of empirical studies trying to measure exactly how much time is lost due to so-called selfish routing. So there have been studies in Boston and in Singapore. And people have found that there is a significant effect.
Speaker 2
00:26:07 - 00:26:12
So in other words, they measured the car traffic.
Speaker 1
00:26:12 - 00:26:13
Oh, empirically, with real drivers.
Speaker 2
00:26:13 - 00:26:22
Yes, with real drivers. So Boston did this. Singapore did this. They really asked: how many cars are going on any road at any given time? And by the way, where did they start and where did they end?
Speaker 2
00:26:23 - 00:26:49
So this gives them the data of who wanted to drive where during that period of time, and also where they actually went and how much time it took them. And then the counterfactual they consider is: well, now that we actually know who wanted to go where, what if we instead solved the big optimization problem and routed them in the most efficient way possible, to minimize the average travel time? How much faster on average would people have gotten to where they were going? And you regularly see gains of like
Speaker 1
00:26:49 - 00:26:49
10%
Speaker 2
00:26:49 - 00:26:57
to 20%, something on that order. So I do think it's a real effect, that you have these congestion externalities, and collectively people are worse off because of them.
Speaker 1
00:26:58 - 00:27:10
In particular, because you don't always have the luxury of building the network so as to avoid these things. Because, for instance, the internet is built by many, many competing companies. Some of them build really, really fast connections.
Speaker 2
00:27:10 - 00:27:11
Yes, that's
Speaker 1
00:27:11 - 00:27:11
a good point.
Speaker 2
00:27:11 - 00:27:11
And then
Speaker 1
00:27:11 - 00:27:13
exactly these things do appear.
Speaker 2
00:27:13 - 00:27:33
That's a good point. So many networks are not built by some central authority. They sort of arise by an organic process. And even if people were optimizing locally, like if they optimized how things look in their town, there's no reason to think things would be optimal once you look at the global picture. And I had another point to make.
Speaker 1
00:27:38 - 00:27:55
So let me get back to the 52.5 minutes. So I guess I can tweak the parameters a bit and get any other number there, right? If I had a super teleporting narrow road where I can just zip along that road if there are no other cars.
Speaker 1
00:27:55 - 00:28:01
Good question. I have an infinitely fast car. Okay, that's it. I have an infinitely fast car. So if I know there are no other cars.
Speaker 1
00:28:01 - 00:28:06
It will take me 0 minutes, or close enough, to get to work.
Speaker 2
00:28:09 - 00:28:14
So you're asking a very computer science-y question, which is great, which is how bad could things be?
Speaker 1
00:28:14 - 00:28:17
That seems to be the obvious question to ask here.
Speaker 2
00:28:17 - 00:28:28
Well, to a computer scientist, perhaps. Yeah. Somehow it was not, despite the fact that this routing model dates back at least to Wardrop in 1952; this question really was not asked until computer scientists
Speaker 1
00:28:28 - 00:28:33
started looking at the problem. As you told me, he only analyzed this qualitatively, right? He observes that it's not the optimal solution?
Speaker 2
00:28:33 - 00:28:50
Good. So Pigou didn't even write down a formal mathematical model. He just sort of told the parable, basically, about these two parallel roads and the congestion externality, which is completely accurate. I mean, it's completely on point. But he wasn't interested in doing a very mathematical version of it.
Speaker 2
00:28:50 - 00:28:51
Now, Wardrop, this is now in
Speaker 1
00:28:51 - 00:28:52
1952,
Speaker 2
00:28:53 - 00:29:25
he actually wrote down a model. He really said: these are the decision variables, these are the actions, these are the cost functions, et cetera. And in particular, he defined the notion of an equilibrium in the context of these routing networks, which is often called a Wardrop equilibrium for that reason. But still, even though he was doing it very mathematically, he never asked this question about how big the gap could be, how bad it could be. I mean, everybody recognized there would be efficiency loss at equilibrium, with people doing whatever they want.
Speaker 2
00:29:25 - 00:29:48
But nobody asked how bad it could be until basically the turn of the century, the turn of the 21st century. I think probably people mostly thought, well, probably it could be pretty bad, which is what you were just suggesting. But then, actually, and this is what made it interesting, it turned out there are assumptions under which you can prove bounds on how bad this inefficiency can be.
Speaker 1
00:29:48 - 00:29:49
And this is your famous result.
Speaker 2
00:29:50 - 00:30:05
Or mine and other people's famous results. So, right. So, I mean, I was working with Éva Tardos, my PhD advisor, when I was doing this work on selfish routing. And there is a 4/3 result, which I'll describe in a second. And that's joint work with Éva Tardos, from when I was a Cornell PhD student.
Speaker 2
00:30:05 - 00:30:21
And so we can go back to Pigou's example that we were talking about. We had the wide road and the narrow road. And you asked, OK, what if you tweak the parameters? Can you make it worse somehow? And actually, one way you could make it worse: I said it takes 30 minutes if it's empty, and an hour if it's full.
Speaker 2
00:30:21 - 00:30:47
And you interpolate linearly in between. So let me make the example less realistic, but sort of more extreme, by saying, OK, let's actually assume the narrow road takes, like you were saying, 0 time when it's empty and an hour when it's full. And again, you interpolate linearly in between. So now, this doesn't change what you expect people to do at equilibrium. It's still a dominant strategy to take the narrow road.
Speaker 2
00:30:47 - 00:31:02
You still expect everybody to have a one-hour commute time. And now we discuss this alternative solution where you just split them 50-50. And now instead of 52.5, if you think about it, it's 45. You have half the people taking the wide road. That takes an hour.
Speaker 2
00:31:02 - 00:31:19
Half the people take the narrow road, which I'm now saying is only 30 minutes of travel time, because we interpolated linearly between 0 and 1. So half at 60 minutes, half at 30 minutes, that's a 45 minute average travel time. So the increase in travel time from self-interested behavior is
Speaker 1
00:31:19 - 00:31:20
33%
Speaker 2
00:31:21 - 00:31:34
relative to what you could achieve in the best case. Instead of 45 minutes, you're getting 60 minutes, which is a 33% increase. OK, so now you could ask: maybe it could be even worse than this.
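Where that 33% comes from, checked numerically; the brute-force scan over splits is my own illustration, not how the bound is actually proved:

```python
# Extreme variant: narrow road takes 60*x minutes at traffic fraction x,
# wide road always takes 60 minutes. Equilibrium: everyone goes narrow.
def avg_time(x: float) -> float:
    return x * (60 * x) + (1 - x) * 60

equilibrium = avg_time(1.0)                          # 60 minutes
best = min(avg_time(i / 1000) for i in range(1001))  # 45.0, at x = 0.5
print(equilibrium, best, equilibrium / best)         # ratio 4/3 (~1.333)
```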
Speaker 1
00:31:34 - 00:31:36
I guess I can tweak the function and make it nonlinear.
Speaker 2
00:31:36 - 00:31:52
Right, so you could imagine tweaking the function. Let me postpone that for a second. I mean, I think what's particularly striking about this example is that it's just point A, point B, and two parallel roads. This is just a two-road network. Real networks are way more complicated.
Speaker 2
00:31:52 - 00:32:11
So why wouldn't you expect this sort of gap to just get arbitrarily bad once you look at large networks? And so the good news, and this was joint work with Éva Tardos, is we showed that as long as all of your delay functions have this linear form, right? So, like, we had our wide road, which was a constant function, always an hour.
Speaker 1
00:32:11 - 00:32:11
We had
Speaker 2
00:32:11 - 00:32:20
our narrow road, which we were saying was a linear function. Initially, we said it was one half plus x over 2; now we're saying it's the identity function. Doesn't matter. Some linear function.
Speaker 2
00:32:20 - 00:32:39
So it turns out that no matter what network you look at, it can be as big and crazy as you want. You can have traffic with lots of different origins and destinations. It doesn't matter. No matter what, as long as each of these delay functions has a linear functional form, 33% is the highest efficiency loss you will ever see from self-interested behavior.
Speaker 2
00:32:40 - 00:32:56
In other words, for every single network with these types of linear costs, at equilibrium, selfish drivers' average travel time will be at worst 33% more than what you could achieve in the hypothetical world where you could just dictate what everybody was going to do.
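Written as a formula, under the assumption that "linear form" means every road's delay function is affine, c_e(x) = a_e x + b_e with a_e, b_e nonnegative (a standard formalization of the result described):

```latex
% Roughgarden-Tardos bound for affine delay functions: f^{eq} is any
% equilibrium flow, f^* the flow minimizing average travel time, and
% C(f) is the total (equivalently, average) travel time under f.
\[
  \frac{C(f^{\mathrm{eq}})}{C(f^{*})} \;\le\; \frac{4}{3},
  \qquad\text{where } C(f) = \sum_{e} f_e\, c_e(f_e),\;
  c_e(x) = a_e x + b_e,\; a_e, b_e \ge 0 .
\]
```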
Speaker 1
00:32:56 - 00:33:06
So the gain in efficiency from a perfect, benevolent, totalitarian welfare state is 33% over letting everybody do whatever they want.
Speaker 2
00:33:06 - 00:33:14
Right, and I guess if you go the other way it would be 25%. So if you start from the equilibrium solution and ask how much you could drop it, you could drop it by 25%.
Speaker 1
00:33:15 - 00:33:25
Why should that be a constant, right? Why should a problem that is purely qualitative, that includes no numbers, be characterized by the number 33%?
Speaker 2
00:33:25 - 00:33:33
Yeah, it's a good question. I mean, there is intuition I like to give about why maybe not so much the
Speaker 1
00:33:33 - 00:33:34
33%
Speaker 2
00:33:34 - 00:34:08
per se, but just why there should be some constant factor independent of the network size. I'll explain as much as I can, given that we don't have a board and stuff. But the main idea is actually super cool. And this was really identified, if not in Wardrop then a few years later, in a famous book by Beckmann, McGuire, and Winsten, who were studying Wardrop's model from a transportation perspective. So the point is, back in the 1950s, transportation scientists realized that in this model of network routing, there is something remarkable known as a potential function.
Speaker 2
00:34:08 - 00:34:27
So let me say what I mean by that through an analogy. Many of us study physics in high school, and we study electrical networks, like networks of resistors. And you learn about Ohm's law and Kirchhoff's laws, stuff like this. And you learn about electrical current in two-terminal networks. Now, what do these laws tell you?
Speaker 2
00:34:28 - 00:34:55
One of them tells you that there's conservation at the nodes. So at every node, not including the terminals, the current that goes in is the current that goes out. And you also learn that the voltage drop is equalized across all paths. So in that sense, you learn that electrical current can be thought of as an equilibrium in these two-terminal networks of resistors, because the voltage drop is equalized along all paths. But then there's also, so that's one thing...
Speaker 1
00:34:55 - 00:34:58
Even though the electrons obviously don't communicate, they seem to optimize something.
Speaker 2
00:34:58 - 00:35:36
They just figure out how to balance the voltage drop across all paths, just like in a road network, where traffic manages to balance travel times across all the competing routes. So in this sense, electrical current can be thought of as an equilibrium. On the other hand, there's also something known as Thomson's principle, which says that electrical current can also be thought of as solving a natural optimization problem. Namely, suppose you had to route a certain amount of current from point A to point B, and you wanted to minimize the dissipated energy. Then, in fact, the solution would again be the equilibrium that you find in electrical current.
Speaker 2
00:35:36 - 00:35:43
So it's simultaneously an equilibrium, equalizing the voltage drop, and the solution to an optimization problem, minimizing the dissipated energy.
Speaker 1
00:35:43 - 00:35:58
So it's an equilibrium in the sense that no single electron would be incentivized, if that concept existed for electrons, to change its mind. And it solves the global optimization problem, in that even godlike powers would not be able to do better by routing all these electrons differently.
Speaker 2
00:35:58 - 00:36:19
Exactly right. Exactly right. Well put. And it turns out that these networks of resistors are actually a special case of this routing model discussed by Pigou, Wardrop, and Beckmann, McGuire, and Winsten. It's sort of the special case where basically there's a pure linear relationship between the delay along a road and the amount of traffic on it.
Speaker 2
00:36:19 - 00:37:00
So if the delay is always, say, 20 minutes times the fraction of the people using it, that would be an example of a pure linear function. That corresponds to how the voltage drop across a resistor is linear in the current, I believe. And right, so you can actually think of these road networks as generalizing networks of resistors. And so, given what we knew about resistors, which is that their equilibria are actually solving an optimization problem we really care about, you might hope that over in the routing networks, OK, we've been thinking about them so far as equilibria, but maybe they're also minimizing some generalized notion of energy.
Speaker 2
00:37:01 - 00:37:14
That's what I mean by the potential function: some version of energy. So maybe equilibrium routings in networks are not just equilibria, but are also the optima of an interesting optimization problem. And they are, it turns out. Oh, really?
Speaker 1
00:37:15 - 00:37:15
They are,
Speaker 2
00:37:15 - 00:37:28
in fact, yeah. So you can generalize Thomson's principle and get a more general version of a potential function. And you can prove that a routing of traffic through the network is an equilibrium if and only if it's a global minimizer of this potential function.
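In symbols, a sketch of that statement in the standard notation (the notation is mine: f_e is the traffic on road e, c_e its delay function): the flow f is a Wardrop equilibrium exactly when it minimizes the potential Phi, even though the objective we actually care about is the total cost C:

```latex
% Potential function for the routing model: equilibria are exactly
% the global minimizers of \Phi, while the real objective is C.
\[
  \Phi(f) \;=\; \sum_{e} \int_{0}^{f_e} c_e(t)\,dt,
  \qquad
  C(f) \;=\; \sum_{e} f_e\, c_e(f_e).
\]
```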
Speaker 1
00:37:28 - 00:37:31
And the function is known. The function is known.
Speaker 2
00:37:31 - 00:37:55
It would be great if the function were the average travel time, because then you would actually find that the equilibrium is optimal, just like it was in the networks of resistors. That's not quite right, though. So the potential function is not quite the average travel time. But this is where it'd be nice to just write down the formula. If you just looked at the formula for the average travel time and the formula for this generalized energy function, they would be almost the same.
Speaker 2
00:37:55 - 00:38:01
The only thing that would be different is that the generalized potential function would have an extra one half in it.
Speaker 1
00:38:01 - 00:38:05
Oh, there's a number now. There's a number now. That's what we're after. And yeah, so
Speaker 2
00:38:05 - 00:38:31
it's basically just the difference between the function f of x equals x squared and the integral of the function x, which is x squared over 2. And so that introduces a factor-of-one-half difference between these two functions. From that realization, it's very easy to determine that you're going to have, at most, a 100% blow-up: the outcome at equilibrium will be, at worst, double the travel time of an optimal solution.
Speaker 1
00:38:31 - 00:38:33
Even that is not clear naively.
Speaker 2
00:38:33 - 00:38:50
Exactly. Even that is not clear naively, because, again, these networks can be crazy big. So to get the 33%, you have to do a more specialized argument. But at that point, you're just kind of optimizing the constant. So that's my best intuition for why there is some universal constant that governs the efficiency loss.
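Filling in the step behind that factor of 2, under the same notation and the affine assumption c_e(x) = a_e x + b_e (a sketch, not the paper's argument): comparing term by term gives x c_e(x)/2 <= \int_0^x c_e(t) dt <= x c_e(x), hence

```latex
\[
  \tfrac{1}{2}\,C(f) \;\le\; \Phi(f) \;\le\; C(f)
  \quad\Longrightarrow\quad
  C(f^{\mathrm{eq}}) \;\le\; 2\,\Phi(f^{\mathrm{eq}})
  \;\le\; 2\,\Phi(f^{*}) \;\le\; 2\,C(f^{*}),
\]
% using that the equilibrium f^{eq} minimizes \Phi. That is at most a
% 100% blow-up; the sharper 4/3 needs the more specialized argument
% mentioned above.
```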
Speaker 1
00:38:50 - 00:38:58
And this entire thing you've boiled down to this very, very, what, catchy phrase that the price of anarchy is? Right. So this should
Speaker 2
00:38:58 - 00:39:04
be credited to Christos Papadimitriou and Elias Koutsoupias. They coined the term, price of anarchy.
Speaker 1
00:39:04 - 00:39:09
Yeah. The price of anarchy is 33%. Or is it 133? I guess you would say,
Speaker 2
00:39:09 - 00:39:14
I mean, technically, you would say it's four thirds. So you look at the ratio.
Speaker 1
00:39:15 - 00:39:25
Four thirds. OK, you look at the ratio of what you get with pure anarchy and what you get with a benevolent dictator. And that is never worse than 4 over 3. Exactly.
Speaker 2
00:39:25 - 00:39:40
And I should say, so economists particularly hate this phrase, price of anarchy. Because they don't see where the price is, and they don't see where the anarchy is. So here's the thing. To an economist, an equilibrium is not anarchy. It's called reality.
Speaker 2
00:39:40 - 00:40:12
It's called the real world. So the fact that computer scientists chose to call it this goes back completely to what we were saying at the beginning, that before computer scientists started thinking about game theory, everything was either honest or adversarial. And so the old world was basically: OK, you program a computer, it does what you tell it. Or maybe you program a network of computers, and they run a protocol honestly, and they do what you said. So compared to the old world of sort of centralized computation, an equilibrium felt like anarchy.
Speaker 1
00:40:12 - 00:40:17
To us, sir. To us. Very, very disciplined, single-threaded computer scientists.
Speaker 2
00:40:17 - 00:40:22
Exactly. But an economist would say, no, no, no, that's just people playing equilibrium. What else are they supposed
Speaker 1
00:40:22 - 00:40:29
to do? So they would call it the externality of... What would the economists' term for the price of anarchy be?
Speaker 2
00:40:30 - 00:40:37
The phrases that economists have suggested to me are not as catchy, perhaps more accurate, but less catchy, like efficiency loss ratio.
Speaker 1
00:40:38 - 00:40:48
Efficiency loss ratio of equilibria. Yeah. Yeah. We win this one. This must be the first time that computer scientists have actually found a catchier expression than the other disciplines.
Speaker 1
00:40:48 - 00:41:08
We're getting better at this. OK, so this is an interesting example now of computer science basically feeding back a result to economists, which from my perspective should have been available to them from the 1920s. But they don't think of it in our terms.
Speaker 2
00:41:09 - 00:41:45
Right. So, I mean, I have to tell you: Éva and I proved this result when I was, I think, in my second year of my PhD. And I spent a significant part of the rest of my PhD reading through dusty transportation science journals in total paralysis, total fear that one of them would have already proved this result. Because it was going to be one of the central results in my thesis. And there are literally hundreds, if not thousands, of papers in transportation science, and also to some extent operations research, written about exactly this model, exactly the model for which the 33% bound holds. And it seemed inconceivable to me that no one would have ever found it before.
Speaker 2
00:41:46 - 00:41:55
So fast forward to now, to 2018. As far as I know, I still haven't found that scary article, the article that
Speaker 1
00:41:55 - 00:41:57
proved it before us. And enough people are looking right now.
Speaker 2
00:41:57 - 00:42:03
You never know. You never know. I mean I'll always be ready for it to be discovered. It's a possibility.
Speaker 1
00:42:03 - 00:42:05
Maybe we learn that Gauss actually found it.
Speaker 2
00:42:05 - 00:42:11
Perhaps, yeah. Yeah. Yeah. And right. So I was in complete.
Speaker 2
00:42:12 - 00:42:45
So then, since I couldn't find anyone who had done it, and I was tired of just reading transportation science papers looking for it, I said, oh, can I at least make up a narrative that will make me feel more confident that perhaps this is the first time this result has been proven? So I asked myself what you just said: how could it be, with almost 50 years of study and hundreds if not thousands of papers, with Pigou's example from 1920 and Braess's paradox from 1968, and with lots of the work done in the 50s, that this escaped notice?
Speaker 2
00:42:46 - 00:43:21
The best story I've been able to come up with, to tell myself to assuage this fear, has to do with the role of approximation in different scientific cultures. So, theoretical computer science, which is sort of my background: theoretical computer science as a discipline has grown up entirely in the long shadow cast by NP-completeness. So NP-completeness is basically a way to classify optimization problems as being very difficult, as seeming to resist efficient solutions. Right.
Speaker 1
00:43:21 - 00:43:30
This is from a time before we found catchy names for stuff. Yes. So NP-complete should be called something like Herculean, or very, very hard, or yeah.
Speaker 2
00:43:30 - 00:44:02
Actually, my favorite suggestion: Don Knuth actually ran a survey asking what we should call these sorts of emerging problems, the ones that wound up being called NP-complete problems. And someone, I forget the name, suggested that maybe they should be called PET problems, P-E-T. PET would stand for probably exponential time. But then, if it turned out that P was not equal to NP, it would turn into provably exponential time. And if P equaled NP, you could repurpose the acronym again, so it works no matter how the conjecture goes.
Speaker 2
00:44:02 - 00:44:03
You can
Speaker 1
00:44:03 - 00:44:04
retrofit the acronym.
Speaker 2
00:44:04 - 00:44:10
Exactly. To either provably exponential time, or if P equaled NP, it could be previously exponential time.
Speaker 1
00:44:10 - 00:44:11
Oh, very nice.
Speaker 2
00:44:11 - 00:44:14
So let's leave aside the possibility that it's quasi-polynomial.
Speaker 1
00:44:14 - 00:44:21
Let's just stick with the established, though boring, terminology. So there's a class of problems which is really hard, and we call them the NP-complete problems.
Speaker 2
00:44:21 - 00:44:47
Exactly, ranging from the traveling salesman problem to constraint satisfaction and so on. And so NP-completeness was invented around 1971. By the mid-70s, we already knew over 100 NP-complete problems. By the 80s, we knew thousands of NP-complete problems. And we just had to realize that a huge fraction of the problems we care about are not going to have always-fast, always-correct algorithms.
Speaker 1
00:44:48 - 00:45:06
That's one of the main exports of computer science. Exactly. Many of these problems were studied in math and physics, and people had thought about them for decades, maybe even a century in some cases. And then there is this unifying theory that says that not only are all these problems probably very, very hard, they're also in some sense morally the same. They're just different versions of the same problem.
Speaker 1
00:45:06 - 00:45:07
That's right.
Speaker 2
00:45:07 - 00:45:20
That's right. I agree. So for example, it explained the asymmetry between characterizations of Eulerian graphs and characterizations of Hamiltonian graphs. Basically, we didn't have characterizations of Hamiltonian graphs. And NP-completeness gave you a reason to explain why.
Speaker 2
00:45:20 - 00:45:42
So that was a great example. And anyway, for those of us who do algorithms, who really care about trying to solve problems: OK, all these problems are NP-complete, but that doesn't mean they go away. Still, in real applications, the relevant problem really is an NP-complete problem. Now, NP-completeness is not a death sentence.
Speaker 2
00:45:42 - 00:46:03
It doesn't mean you literally can never solve this problem in practice. But it does mean you have to consider compromises. You have to give up on something. And so computer scientists have come up with many different types of compromises, but one of them is the idea of a heuristic algorithm: something which is always going to be fast, terminating in, say, seconds or something, but which won't always be quite optimal.
Speaker 2
00:46:03 - 00:46:18
Hopefully, it'll be close. But it's not guaranteed to be optimal. So we've been coming up with heuristic algorithms since before I was born. It's been a big part of computer science. And of course, to do theory, we wanted to have theorems that compare the performance of given heuristic algorithms.
Speaker 2
00:46:19 - 00:46:45
And so we do that in multiple ways, but it naturally led us to all of these formalisms for approximation, which allow us to assess how good or bad a particular heuristic is. And so ever since that time, really ever since the 70s for sure, there's been what I'd call a culture of approximation guarantees in theoretical computer science, where whenever there's some obstruction to full optimality, we like to measure the sub-optimality of whatever it is we are able to implement.
Speaker 1
00:46:46 - 00:47:13
And it's sub-optimality in the worst case. I guess that is our contribution. We're not running the experiment, but we are proving things. I mean, instead of selling a car by saying "this will go 130 miles an hour in the best case", we're saying that no matter how the road looks, here's a guarantee of how it will behave. It may not be optimal, but we can guarantee its behavior in the worst case, rather than giving an empirical analysis of how it will behave in the average case.
Speaker 2
00:47:13 - 00:47:30
I guess I would say it a little differently. I think there's actually 2 different concepts. So first of all, even just this idea of approximation for 1 specific instance. So forget about worst case. Just fix some instance and how good some equilibrium is, how well some algorithm performs, whatever.
Speaker 2
00:47:30 - 00:47:38
Even there, there's the question of how do I feel about this number, right? Say the average travel time is 60 minutes. Is that high? Is that low? How do I assess that?
Speaker 2
00:47:38 - 00:48:16
So idea number 1 is this idea of using a benchmark, OK, to do this sort of counterfactual, this thought experiment. What if actually we had a benevolent dictator who could pick the optimal outcome? And now we assess the outcome that happened according to how far or what multiple it is of this hypothetical benchmark. So that already, I think, takes you outside of what almost all traditional economic theory or transportation science has looked at. Then there's a sort of second question, which again, as you say, we also have to grapple with in algorithms a lot, which is that, OK, well, in some instances, you're going to be happy with a number.
Speaker 2
00:48:16 - 00:48:29
In some instances, you're going to be less happy with a number. And then there's a question of sort of how do you compare. So for example, in algorithms, suppose you have algorithm A and algorithm B. Algorithm A does better on some set of instances. Algorithm B does better on some other set of instances.
Speaker 2
00:48:30 - 00:48:51
How can you, in a precise way, call one of them better than the other? They're in some sense incomparable. So if you want to make these comparisons, you then have to make a further modeling compromise by taking an algorithm, say, and summarizing its performance in some simple way, like with one number. And there are multiple modeling choices you can make there. Worst-case analysis is one particular choice.
Speaker 2
00:48:51 - 00:49:11
But I would say the reason you see worst-case analysis in these price of anarchy analyses is more for technical convenience. I think it would be interesting to do an average-case analysis of the ratio as well. But I really think the conceptual jump from 20th-century research on the topic was actually measuring yourself on this yardstick, against this benchmark.
Speaker 1
00:49:12 - 00:49:19
And the benchmark is given by an infinitely powerful, benevolent dictator who has access to all information, infinite computational power.
Speaker 2
00:49:19 - 00:49:20
That's right.
Speaker 1
00:49:20 - 00:49:26
And that this makes sense and we can do it and we can prove stuff about it is indeed a big conceptual breakthrough.
Speaker 2
00:49:26 - 00:49:59
And so just to close the loop: back when I was the paranoid grad student, this was sort of what I could come up with. So OK, definitely there's a tradition of approximation in theoretical computer science. And now, as I've gotten older, I've learned it's almost unique to computer science and some very closely related fields, like operations research. Most other fields just do not have a tradition of approximation guarantees. And indeed, there was a very memorable conversation I had with a guy named Eric Maskin, who's an economist who's actually since won the Nobel Prize in
Speaker 1
00:49:59 - 00:50:00
2007.
Speaker 2
00:50:00 - 00:50:46
So when I first met him, in 2001, at the Institute for Advanced Study, I told him about this four-thirds bound. And you know, he's a super nice guy, so he listened to my proof and all this stuff, and when I was finished at the whiteboard, he kind of paused for a minute, and he said: it's a very nice question, I never would have thought to ask it. Which just kind of blew me away, because he's a brilliant economist. But now, as I've gotten older, I've realized that different fields just have different traditions for how they make certain arguments. And again, this kind of four-thirds guarantee, as natural as it sounds to a computer scientist, sounds very alien to people in most fields. And so, if it had never been proved before, that's my guess as to why it hadn't been.
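For context, the four-thirds bound is the worst-case price of anarchy for selfish routing with linear latency functions, and Pigou's two-road example (the standard illustration; it is not spelled out in the conversation) already achieves it. A small numeric sketch:

```python
def pigou_price_of_anarchy(grid=10_000):
    """One unit of traffic, two parallel roads. Road A always takes
    1 hour; road B takes x hours when an x-fraction drives on it.
    Selfish equilibrium: everyone takes road B (cost 1), since it is
    never worse than road A. The benevolent-dictator benchmark splits
    the traffic to minimize the average travel time."""
    equilibrium_cost = 1.0
    optimal_cost = min(
        x * x + (1 - x)  # x-fraction on road B, the rest on road A
        for x in (i / grid for i in range(grid + 1))
    )
    return equilibrium_cost / optimal_cost

print(pigou_price_of_anarchy())  # ~4/3: the optimum is 3/4, at a 50/50 split
```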
Speaker 1
00:50:47 - 00:51:22
So this is a beautiful example of a part of theoretical computer science, namely the analysis of approximation algorithms, feeding back into theoretical economics. But we are still firmly entrenched in pen and paper: stuffy academics sitting in armchairs, writing papers for each other and impressing each other with the relevance or beauty of various theories. But you also get your hands dirty, right? I mean, this is not entirely theoretical. This is not something completely irrelevant to actually building things.
Speaker 1
00:51:22 - 00:51:25
Right. You can talk about the auction developments.
Speaker 2
00:51:25 - 00:51:48
Sure, sure. So, right, in the early days, you're absolutely right, it was mostly econ theorists talking to computer science theorists. But to be honest, most fields do not collaborate with each other at all. So I'm actually very proud of the amount of collaboration that's happened at the computer science and economics interface. I think it's very unusual, and I think it's awesome.
Speaker 2
00:51:48 - 00:52:08
And it was natural that, to get things going, it started with theorists on both sides, because while there's lots of vocabulary that is not shared, you're both at least fundamentally mathematicians, right? You know how to talk about theorems and proofs. You agree on what a proof is, what a model is. So there was at least some common ground to get going. But as you say, that was 20 years ago.
Speaker 2
00:52:08 - 00:52:32
And as time went on, there was obviously a hunger on both sides. Computer scientists wanted to see their ideas having more impact. And economists who were actually building real systems wanted to see if computer science ideas could be somehow useful for them. So there are a couple of great examples, but a recent really great one is what's called the FCC Incentive Auction, which was a spectrum auction run in the United States in 2016.
Speaker 1
00:52:33 - 00:52:37
I guess I need to unpack that for a general audience. No problem. The FCC is an American...
Speaker 2
00:52:37 - 00:52:58
Sorry, yes. So good. So yes. So the FCC is the United States organization responsible for managing the airwaves. So 1 of the things that they do is they give people licenses to use spectrum, to basically broadcast on certain frequencies in a certain geographical region.
Speaker 1
00:52:58 - 00:53:00
Like a broadcasting corporation?
Speaker 2
00:53:00 - 00:53:08
For example. Yeah, so it could be. So the frequencies could be used for Wi-Fi. They could be used for television. They could be used for radio, et cetera.
Speaker 2
00:53:09 - 00:53:16
So the FCC is the organization within the US that's responsible for managing the airwaves.
Speaker 1
00:53:16 - 00:53:32
Because this is 1 of the things where everybody agrees that there has to be a managing organization because otherwise everybody would just broadcast on the same frequencies and then the person with the highest amplitude, whatever that is, wins. You couldn't hear anything, right? Everybody would be shouting.
Speaker 2
00:53:32 - 00:53:49
Right. I mean, it's a good question. So people do speculate about what a sort of decentralized market for spectrum would look like. But I think probably most experts are pretty nervous about the prospect. But in any case, if nothing else, historically it's been centralized for a long time.
Speaker 2
00:53:50 - 00:54:15
And so now the question is, how do these licenses get allocated? So if you wanted the rights to broadcast on some frequency in some location, where would you get that license? So since about the mid-90s, the way this has worked in the United States, and also in many other countries, is that governments have actually used auctions to sell licenses to the highest bidder, in effect.
Speaker 1
00:54:16 - 00:54:32
Because the market is dynamic, because there are new things like the internet that suddenly wants access to frequencies that have been originally held by, I don't know, television. And that doesn't produce enough revenue. So you want a mechanism for selling the old frequency to somebody who might be more interested in it.
Speaker 2
00:54:32 - 00:54:58
Oh, well, so let's take this in two steps. So first, imagine that currently nobody's allowed to use any of the airwaves, and you're just giving out those first few licenses. There's the question of who should get them. And so basically, auctions are useful especially in situations where you need so-called price discovery. So in other words, where you have something you're selling, and you don't really know what people are willing to pay for it.
Speaker 2
00:54:58 - 00:55:30
And most of us experience this in our own lives. So if we're selling some commodity item on eBay, like an iPhone 5, there's going to be some going price. You don't need an auction, because you know people are currently willing to pay 80 euros or whatever for an iPhone 5. On the other hand, in my spare time, one of the things I like to do is collect sort of strange records, vinyl records. That's a collector's market, and that's a classic example of where it's actually not so obvious what people are going to be willing to pay.
Speaker 2
00:55:30 - 00:55:39
So if you have a copy out of 100 of some limited edition record that came out 20 years ago, you really don't know if people are willing to pay just $20 or
Speaker 1
00:55:40 - 00:55:40
$220.
Speaker 2
00:55:41 - 00:55:44
And so that's exactly where you might want to use an auction.
Speaker 1
00:55:44 - 00:55:53
And "auction" here, I can just think of an auction where somebody's standing and showing an item and asking a roomful of people what they want to pay for it, and then there's a hammer? Definitely. That kind of auction.
Speaker 2
00:55:53 - 00:56:14
So I'm very happy, just for the moment, to think about the kind you see in the movies. Yes. Where there's some auctioneer naming higher and higher prices, everybody keeps their hand up as long as they're interested, and the auction concludes when only one person is left standing, and they pay the most recently, i.e. the highest, price announced by the auctioneer. So I'm totally happy to have you think about an auction like that.
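A toy simulation of that movie-style ascending auction, with made-up values (an illustration, not part of the conversation): the price ticks upward, bidders drop out once it passes their private value, and the winner pays roughly the second-highest value.

```python
def english_auction(values, increment=1):
    """values[i] is bidder i's private willingness to pay. The
    auctioneer raises the price; hands go down once it exceeds a
    bidder's value; the last bidder standing wins at the current price."""
    price = 0
    active = set(range(len(values)))
    while len(active) > 1:
        price += increment
        drops = {i for i in active if values[i] < price}
        if drops == active:   # everyone quits at once: tie at the top
            break
        active -= drops
    winner = min(active)      # arbitrary tie-break in this toy version
    return winner, price

print(english_auction([20, 220, 180]))  # bidder 1 wins at price 181
```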
Speaker 2
00:56:14 - 00:56:46
Or if you've ever used eBay or a different online auction, that's a good thing to think about as well. Anyway, so auctions are particularly useful when you're not confident about your guess about what people are willing to pay. And in the case of spectrum licenses, who's going to buy them? It's often telecom companies; T-Mobile, AT&T or Sprint would be the US players. And the government's not necessarily going to know what they're willing to pay, you know?
Speaker 2
00:56:46 - 00:57:08
So those companies will value spectrum according to their internal projections of what they'll be able to do with it, with their next generation of technologies. And so depending on, you know, exactly what their plans are, they may have a higher or lower valuation or value for the spectrum. And it would be hard for the FCC to really know what people are willing to pay.
Speaker 1
00:57:08 - 00:57:23
Because the company is not willing, or wouldn't be interested in, telling the FCC what their own internal valuation actually is. It's like in the record example: I would be stupid to tell you that I will actually pay $2,000 for your pristine copy of whatever it is.
Speaker 2
00:57:23 - 00:57:24
What have you, yeah, yeah.
Speaker 1
00:57:24 - 00:57:25
Yes.
Speaker 2
00:57:25 - 00:57:28
Yeah, Wolf Eyes or something, yeah, exactly. Because then
Speaker 1
00:57:28 - 00:57:31
you would ask me for $2,000 and then I would have to pay it.
Speaker 2
00:57:31 - 00:57:31
That's right.
Speaker 1
00:57:31 - 00:57:35
If I could just sort of get it for a lower price, that would be great.
Speaker 2
00:57:35 - 00:57:53
Exactly right. So you could imagine the FCC just saying: hey, T-Mobile, how much profit do you think you could make if we sold you this license? But you could imagine the participants would probably have an incentive to lowball the FCC if they did that, to under-report the value that they could get.
Speaker 1
00:57:53 - 00:58:02
Except if the CEO of T-Mobile happened to be this Scandinavian motorist who's also by themselves taking the wide
Speaker 2
00:58:02 - 00:58:03
road. Taking the high road.
Speaker 1
00:58:03 - 00:58:07
Of course, then he would not be the CEO of T-Mobile for very long.
Speaker 2
00:58:07 - 00:58:40
Probably not. Probably not. Good. So it is one of those cases where you'd really like to do price discovery, because the demand for spectrum kind of goes up and down, and it's hard to predict exactly where it is at a given moment. So, spectrum auctions: since the mid-90s, governments have been using auctions like this to sell these licenses. Now, something changed earlier this decade, maybe about 5 years ago, which was that they basically ran out of spectrum.
Speaker 2
00:58:41 - 00:58:46
Kind of all of the most useful parts of the spectrum, they'd already granted licenses for their use to various people.
Speaker 1
00:58:46 - 00:58:49
For physical reasons that we don't need to understand.
Speaker 2
00:58:49 - 00:58:49
Well, this sort
Speaker 1
00:58:49 - 00:58:53
of one. You can't infinitely subdivide the spectrum of available frequencies. OK, good.
Speaker 2
00:58:54 - 00:59:16
So for example, many of the people who had licenses for spectrum were television broadcasters. And to broadcast on a television channel, you need a certain block of spectrum. Historically, it's 6 megahertz. You can do channel sharing, but there's only so much you can do. There's still some minimum amount of the spectrum which is necessary for it to be useful for, say, a television station.
Speaker 2
00:59:16 - 00:59:31
So it's a scarce resource. It's a limited resource. And so for a while, they just had parts of the airwaves they could sort of give away, parts that just weren't owned. No one had the rights to them, so they could just sell the rights. But then everything filled up.
Speaker 2
00:59:32 - 01:00:19
But what was very unsatisfying about the situation is that a lot of the current license holders for the part of the spectrum that would be most useful for, say, wireless broadband, the thinking was that they were not actually extracting much value from that very valuable resource. So again, these were held by television broadcasters, some of whom are really big stations, but some of whom are just tiny mom and pop stations in the middle of nowhere that have had this license for 30 years. And so the thinking is that that's not the best use of that part of the spectrum. So the point of this incentive auction, which ran in 2016 and 2017 in the States, was really to repurpose spectrum. So actually to take it away from some of the people who currently have rights, and then to turn around and sell it to the highest bidder.
Speaker 1
01:00:19 - 01:00:20
And take away means buy.
Speaker 2
01:00:20 - 01:00:34
Buy, yes. So, to compensate people appropriately to get their licenses. And again, it's hard to imagine how you'd know what people would be willing to accept for their license. So that's why it's so nice to have an auction, because you have a price discovery process.
Speaker 1
01:00:34 - 01:00:43
The mom and pop station might know that their spectrum is going to be used by something extremely valuable. For example. And hence, they would correctly demand a very, very high price for it.
Speaker 2
01:00:43 - 01:00:44
Right.
Speaker 1
01:00:44 - 01:00:45
I understand.
Speaker 2
01:00:45 - 01:00:49
You could be worried about holdouts, yeah, exactly.
Speaker 1
01:00:49 - 01:00:59
So we need a mechanism that allows us to buy spectrum from current license holders and then resell that to others, hopefully making a profit for the FCC?
Speaker 2
01:00:59 - 01:01:36
Yeah, I mean, making a profit. But the real point of the auction is just to increase the value society is getting from the use of spectrum. So if there are people who are just not contributing much to the social good with this resource, and someone else is going to contribute a lot more good, then it makes sense to do a reallocation with some compensation. It is true, I should say, that the incentive auction did clear a fair amount of profit along the way, which was then applied to the massive US deficit; that helped it pass Congress in 2012. But I wouldn't call that fundamentally the number one goal of the auction.
Speaker 2
01:01:36 - 01:01:56
The number one goal was a more efficient reallocation of scarce resources. So good. So, exactly right. So the government was going to be compensating television stations, the current license owners, buying their licenses and then selling them to the highest bidder, again typically telecom companies. And so it had two phases, right?
Speaker 2
01:01:56 - 01:02:04
So the buyback phase and then the selling phase. And so the first one is called a reverse auction, or a procurement auction, because you're procuring licenses.
Speaker 1
01:02:04 - 01:02:06
The FCC is procuring licenses?
Speaker 2
01:02:06 - 01:02:18
Exactly. So the person running the auction, the government, is procuring the licenses. And then the second part is a forward auction, which is sort of a normal auction. That's the kind of auction you see in the movies. So the ascending auction is an example of a forward auction.
Speaker 2
01:02:18 - 01:02:35
So the second half of the auction has been around a long time. So the government actually has 20 years of experience of what to do once they have the licenses, how to go about selling them. So that was not a problem that they needed to solve. That problem was already solved. But the first part, they had literally never done this before.
Speaker 2
01:02:35 - 01:02:45
They'd never tried to run an auction to procure licenses back from its current holders. So they needed to design from scratch a reverse auction to accomplish that.
Speaker 1
01:02:46 - 01:02:53
I would expect you could just take that off the shelf. It seems to be a natural enough problem; there should be some literature on this.
Speaker 2
01:02:53 - 01:03:07
Good, yeah, so certainly on reverse auctions there's literature. This is a pretty tricky reverse auction. To explain why it's tricky, I have to go into a little bit more of the details of it, if that's something that interests you.
Speaker 2
01:03:13 - 01:03:40
So, good. So basically, the reverse auction, to name-check something we were talking about 10 minutes ago, fundamentally involved the solution of NP-complete problems. Specifically, a very famous NP-complete problem known as the graph coloring problem. So originally, people thought about graph coloring because they wanted to color maps and atlases so that countries with shared borders don't have the same color. But it turns out those kinds of coloring problems show up exactly in this FCC incentive auction.
Speaker 2
01:03:40 - 01:03:55
So why is that true? What's sort of so uniquely challenging about this particular reverse auction, as opposed to previous ones studied in the literature? So the goal of the auction... did you want to say something?
Speaker 1
01:03:55 - 01:03:58
I know, I think I see where this is going, but just continue.
Speaker 2
01:03:59 - 01:04:14
So the point of the auction is to reclaim a certain amount of spectrum. That's what it needs to do to be able to resell it in the forward part of the auction. So that is the basic goal of the reverse auction: get enough licenses back. So what does that mean? How do you know if you have enough licenses?
Speaker 2
01:04:15 - 01:04:44
So the way you measure your progress is you look at how much spectrum has been freed up. And 1 of the ways to measure spectrum is just in channels, each of which is a 6 megahertz block of the spectrum. So you might have a goal that says, clear me 60 megahertz of the spectrum, like 10 channels. So for example, you could say, let's look at all the stations broadcasting on the channels between 38 and 51. Let's say we're willing to leave 4 channels up on the air, but we want to clear 10 channels so we can sell them in the forward part of the auction.
Speaker 2
01:04:45 - 01:05:10
So that's what the auction's responsible for doing: clearing, let's say, 10 channels. Now, a decision that was made early on in the design was that clearing a channel should mean clearing it nationwide. So to clear, say, television channel 51, that means after the auction there should literally be not even a single television station anywhere in the country still broadcasting on channel
Speaker 1
01:05:10 - 01:05:24
51. I see. And I can get that by either everybody who holds a 51 selling their 51 to me, or just some of them doing that and then redistributing the other ones in a clever way. Exactly. So that the 51s happen to be free.
Speaker 1
01:05:24 - 01:05:36
Some of them sold their 51s to me; others, I just redistributed. They can't see the difference. But this redistribution problem is in itself NP-hard. A difficult problem, exactly. That is the graph coloring problem.
Speaker 2
01:05:36 - 01:05:56
So just to repeat what you said: you might cross your fingers and say, I really hope every single person on Channel 51 is willing to sell me their license for a really small amount of money. Then I'll just buy them all, and Channel 51 will be free. But now you really have a problem with holdouts, if you know you're the last station on Channel 51. The
Speaker 1
01:05:56 - 01:05:58
small mom and pop station
Speaker 2
01:05:58 - 01:06:26
where nobody watches. You're going to say, no, you know what, I really want $100 million for my license, because I know you need Channel 51. So to skirt this holdout problem, it was crucial that the US government used its power to unilaterally reassign people's channel assignments. So in fact, some stations simply had their licenses bought, and they were off the air completely. But even stations who kept their license, many of them had their channel reassigned.
Speaker 2
01:06:26 - 01:06:43
So they might have been broadcasting on channel 51 before the auction, and they were forced to broadcast on channel 41 after the auction. So that's a key point: the government was willing to reassign channels. So now let's go back. What is the goal of the auction? Clear 10 channels nationwide, let's say out of
Speaker 1
01:06:43 - 01:06:43
14.
Speaker 2
01:06:45 - 01:07:15
But what does that really mean? That really means you should buy back enough licenses so that for the stations that stay on the air, the stations whose licenses you didn't buy, it's possible to assign them channels such that they all fit into a target number of channels, like 4 channels. And that's where you start seeing these so-called coloring problems. So it's just like coloring a map: you want to use few colors. At least back in the day, the cost of producing a map depended on the number of colors of ink.
Speaker 2
01:07:15 - 01:07:36
So you wanted to say: with a target number of colors, like 4 colors, assign one to each of the countries so that there are no conflicts between neighboring countries; none of the neighbors get the same color. Same thing here. For the remaining stations, you want to color them with one of 4 colors, i.e. assign them one of the 4 channels you're willing to use, again so that none of them conflict.
Speaker 2
01:07:36 - 01:07:51
Meaning, whenever stations have overlapping broadcasting regions, they should get different colors, different channels. So that's why graph coloring is really a crucial part of this reverse auction, and that's why it's utterly unlike any of the reverse auctions that I'm aware of that have been studied in the past.
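In code, the repacking question is exactly a graph coloring feasibility check. A deliberately naive backtracking sketch follows (the names and the example are illustrative; the real auction used heavily engineered SAT solvers, discussed below):

```python
def repackable(stations, conflicts, channels):
    """Can each remaining station get a channel so that no two stations
    with overlapping broadcast regions (an edge in `conflicts`, a dict
    mapping a station to its neighbors) share one? This is graph
    coloring, so NP-complete; plain backtracking only scales to small
    instances."""
    assignment = {}

    def place(i):
        if i == len(stations):
            return True
        s = stations[i]
        for c in channels:
            if all(assignment.get(t) != c for t in conflicts.get(s, ())):
                assignment[s] = c
                if place(i + 1):
                    return True
                del assignment[s]
        return False

    return place(0)

# Three mutually conflicting stations cannot fit into two channels:
conflicts = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(repackable(["A", "B", "C"], conflicts, [41, 42]))      # False
print(repackable(["A", "B", "C"], conflicts, [41, 42, 43]))  # True
```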
Speaker 1
01:07:51 - 01:07:58
Because the round-by-round auction process requires the government to repeatedly solve an NP-hard problem.
Speaker 2
01:07:58 - 01:08:27
Right. So another obstacle is that you have to solve this graph coloring problem not just once, but over and over and over again. Basically, in every round of the auction (and there were roughly 2 to 3 rounds per day), and for every remaining participant (and there were thousands of participants), so we're talking about thousands of these every day, you had to solve a graph coloring problem, basically to check whether or not you could lower somebody's buyout price. Because you needed to know whether or not you could tolerate them refusing your price and dropping out of the auction.
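A stylized sketch of why that check sits in the auction's inner loop (a reconstruction of the mechanism as described here, not the FCC's actual rules): before lowering a station's buyout offer, the auctioneer confirms that everyone committed to staying on the air could still be repacked if this station also refused its offer.

```python
def reverse_auction_round(active, prices, on_air, decrement, repack_ok):
    """One stylized round of the descending-clock reverse auction.
    `repack_ok(stations)` is the NP-complete feasibility oracle from
    the sketch above; it was called thousands of times per day."""
    for station in list(active):
        # If this station refused its offer, it would stay on the air.
        if repack_ok(on_air | {station}):
            prices[station] -= decrement  # safe to push its offer lower
        else:
            # No room to repack it: its price stops falling, and it is
            # provisionally bought out.
            active.remove(station)
    return active, prices
```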
Speaker 1
01:08:28 - 01:08:32
And that's obviously NP-hard. So we go home and cry, or no?
Speaker 2
01:08:32 - 01:08:44
Good question. Yeah. So, right. So if you're familiar with NP-completeness: on the one hand, when a problem is NP-complete, as we were saying, it's not like it's a death sentence. It doesn't mean it's fundamentally unsolvable in practice.
Speaker 2
01:08:45 - 01:09:01
But it definitely means you need to up your game and possibly make some compromises. You're not going to just write 10 lines of Python code and have an algorithm that always runs fast. It's just not going to happen. So here, computer scientists brought in a different type of approximation... sorry, a different type of compromise.
Speaker 2
01:09:01 - 01:09:41
Not approximation, but rather techniques for exactly solving NP-complete problems. Now, that's not an easy thing to do. But one of the most successful application domains for the fast solution of NP-complete problems is so-called SAT solvers, for satisfiability, which is a family of constraint satisfaction problems. That has really important applications in formal verification and elsewhere. And as a result, for decades, hundreds or probably thousands of really smart computer scientists have been working to build better and better SAT solvers, which are capable of quickly solving more and more instances of this NP-complete problem.
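To connect the two problems: a repacking instance translates mechanically into CNF, the input language of SAT solvers. Below is a minimal encoder emitting the standard DIMACS format (a sketch; real encodings also handle details like adjacent-channel interference and per-station lists of allowed channels):

```python
from itertools import count

def repacking_to_dimacs(stations, conflicts, channels):
    """One boolean variable per (station, channel) pair. Clauses:
    every station gets at least one channel, and two conflicting
    stations never share a channel. The formula is satisfiable iff
    the stations can be repacked into the given channels."""
    fresh = count(1)
    var = {(s, c): next(fresh) for s in stations for c in channels}
    # Each station broadcasts on at least one channel.
    clauses = [[var[s, c] for c in channels] for s in stations]
    # Conflicting stations get different channels.
    for s in stations:
        for t in conflicts.get(s, ()):
            if str(s) < str(t):  # each unordered pair once
                for c in channels:
                    clauses.append([-var[s, c], -var[t, c]])
    lines = [f"p cnf {len(var)} {len(clauses)}"]
    lines += [" ".join(map(str, cl)) + " 0" for cl in clauses]
    return "\n".join(lines)

print(repacking_to_dimacs(["A", "B"], {"A": {"B"}, "B": {"A"}}, [41, 42]))
```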
Speaker 2
01:09:41 - 01:10:04
So the FCC made a great decision, and they decided to hire a computer scientist, specifically Kevin Leyton-Brown from the University of British Columbia. And his team led the effort to, in a practical sense, solve all of these graph coloring problems really quickly. By really quickly, I mean they were solving medium-sized instances usually in under a second, and almost always in under a minute.
Speaker 1
01:10:04 - 01:10:08
So medium is 1,000 vertices? Yeah, medium, think of it as
Speaker 2
01:10:08 - 01:10:21
maybe 1,000 or 2,000 stations. And then the number of constraints you need to worry about, the number of conflicts, might be in the tens of thousands. So it's not the biggest instance ever. But solving it in one second is not a trivial thing.
Speaker 1
01:10:21 - 01:10:22
It needs a lot of firepower.
Speaker 2
01:10:22 - 01:10:22
Exactly.
Speaker 1
01:10:22 - 01:10:32
And you can't just... I mean, the problem was not Python. It doesn't get better by rewriting it in C. You need a lot of intellectual work. That's right.
Speaker 2
01:10:32 - 01:10:38
The bottleneck is not the programming language. The bottleneck is the size of the search space, which is enormous.
Speaker 1
01:10:40 - 01:10:47
And which you don't solve by just throwing more computational power at it, but by really, really top-notch algorithmic breakthroughs.
Speaker 2
01:10:47 - 01:10:58
That's right. That's right. So good. So Leyton-Brown's team stood on the shoulders of SAT solvers, which, again, are already the combined work of hundreds of people. So those are already brilliant.
Speaker 2
01:10:59 - 01:11:20
And then, you know, they did pretty well with just off-the-shelf SAT solvers, which were pretty good at solving these repacking problems. But they really wanted to do better. They wanted to get almost 100% of them within the time budget. So they added two families of their own innovations to state-of-the-art SAT solving, one of which was to encode more of the domain knowledge.
Speaker 2
01:11:21 - 01:11:41
So in the FCC incentive auction, you actually have a lot of information about what your graph coloring problems are going to look like in advance. Because you know all of the stations that are going to be in the auction. You know that in advance. You know their broadcasting regions, so you know who conflicts with whom. And so that means that you're going to wind up solving graph coloring instances, all of which are sort of induced subgraphs of some master graph.
Speaker 2
01:11:41 - 01:12:09
And so that gives you an opportunity to tune your SAT solvers specifically for those kinds of instances. They also came up with a number of very clever caching tricks. So an obvious thing to do is, if you encounter the same graph coloring problem twice, cache the answer the first time and look it up. But they also wanted to not just have that, but also if you'd solved a very similar instance at some point in the past, you still want to reuse some of the work that you did for this new instance. And they had some clever solutions for that as well.
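A sketch of the caching idea (a reconstruction of what is described here, not the team's actual system): since every instance is an induced subgraph of one master conflict graph, an instance is determined by its station set, and, for a fixed set of target channels, feasibility is monotone under taking subsets.

```python
class RepackCache:
    """If a superset of these stations packed before, restricting that
    solution packs this instance too; if a subset already failed,
    adding stations can't help. Only cache misses hit the SAT solver."""

    def __init__(self):
        self.known_feasible = []    # station sets that packed
        self.known_infeasible = []  # station sets that provably don't

    def lookup(self, stations):
        s = frozenset(stations)
        if any(s <= good for good in self.known_feasible):
            return True
        if any(s >= bad for bad in self.known_infeasible):
            return False
        return None                 # miss: fall through to the solver

    def store(self, stations, feasible):
        bucket = self.known_feasible if feasible else self.known_infeasible
        bucket.append(frozenset(stations))
```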
Speaker 2
01:12:09 - 01:12:34
And you put all those together, and they really were solving these medium-sized instances of graph coloring, or SAT, usually in under a second, and really almost always in under a minute. And without this subroutine, which draws crucially on techniques from computer science, the US government literally could not have run this auction. They literally would have had to choose a different auction format were it not for these techniques borrowed from computer science.
Speaker 1
01:12:34 - 01:12:55
In particular, they would not have been able to find prices as good as they did. I guess the speed with which you solve this reassignment problem gives you, from the perspective of the FCC, a better baseline for what your next... lower or higher? Lower. ...lower bid should be.
Speaker 2
01:12:56 - 01:13:16
Yeah, that's right. But even more fundamentally, it's not clear you could run an auction that looked anything like this without these techniques. So you're right. The better the SAT solver, the more frugal the auction will be, the less it will pay out for a given amount of spectrum to be cleared. That's also true.
Speaker 2
01:13:18 - 01:13:30
So say the SAT solver almost never solved them; then you'd literally have to just start from scratch. I mean, really, you'd have to go back to square one and say, let's rethink how this works. So it's completely crucial for the viability of the format.
Speaker 1
01:13:30 - 01:13:47
That's a brilliant example. So, auction plus NP-hard problem solving. Okay, so they ran this, bought back all the frequencies. And then there's a new round where they basically do what they've done for a long time, as you say, 20 years?
Speaker 2
01:13:47 - 01:13:59
Yeah, yeah. So then they switched to the forward auction. And so, I mean, there's one detail I've been glossing over, which is that there's actually an outer loop. Actually, the auction took a year to complete. It ran from March 2016 to March 2017.
Speaker 2
01:14:00 - 01:14:13
And 1 of the reasons it took so long was because of this outer loop. What's the point of the outer loop? Well, in my example, I said, suppose you wanted to clear 10 channels. And a natural question you might have is, well, why 10? Where'd you get that number?
Speaker 2
01:14:13 - 01:14:15
That's just like a magic number. How did you know
Speaker 1
01:14:15 - 01:14:15
10?
Speaker 2
01:14:15 - 01:14:46
And so, in fact, there's this outer loop which searches for the right number of channels to try to clear. And so in the first round, they tried to clear I think 21 channels, 126 megahertz. And what happened is they would have had to pay out something like 80 billion dollars to clear 21 channels, and then also they wouldn't have made that much money in the forward auction. They would have made, I forget, $25 billion or something in the forward auction. So they would have run a huge deficit if they had insisted on selling 21 channels.
Speaker 1
01:14:46 - 01:14:50
They pretend-ran the auction, or simulated it? Or what did they do?
Speaker 2
01:14:50 - 01:15:04
So there was a mock auction before, just to get people familiar with it. But that wasn't the part that chose the target. So the part that chose the target, that was all real auction. So as a participant, you were in there for the reverse auction. You were in there for the whole year, potentially.
Speaker 1
01:15:05 - 01:15:10
And you knew that you were participating in an auction that might be nullified and run again with other parameters?
Speaker 2
01:15:10 - 01:15:34
So what's nice is that it's not really nullified. It's more like it just keeps going further, right? So the way the reverse auction works is you start by offering very high buyout prices, which everybody would be happy to take, and then you lower them over time. And so basically, suppose you're trying to clear 21 channels; that means you've got to get almost everybody. So you start having these holdout-like issues, and you might have to pay a lot.
Speaker 2
01:15:34 - 01:15:47
And so then in the second iteration, you're like, well, OK, I guess I don't want to clear 21 channels. How about 18? That was the second iteration. And now you just go back and start decreasing those prices further from where they were before. And it turned out they needed 4 iterations.
Speaker 2
01:15:47 - 01:16:04
So first they tried to clear 21, then 18, then 16. They wound up clearing 14 channels, or 84 megahertz of spectrum, in the fourth iteration. And the payout to the stations was around $10 billion. And they brought in about $20 billion in the forward part of the auction.
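The outer loop he describes, in stylized form (a sketch; the real final-stage rule also accounted for auction costs and other details):

```python
def incentive_auction(stage_targets, run_reverse, run_forward):
    """stage_targets is e.g. [21, 18, 16, 14] channels. Each stage runs
    the reverse auction (resuming from current prices) to find what
    clearing that much spectrum would cost, then checks whether the
    forward-auction revenue covers the payout; if not, relax the
    clearing target and keep going."""
    for target in stage_targets:
        payout = run_reverse(target)   # cost of buying licenses back
        revenue = run_forward(target)  # proceeds from the telecoms
        if revenue >= payout:          # final-stage condition met
            return target, payout, revenue
    raise RuntimeError("no clearing target closed the auction")
```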
Speaker 1
01:16:05 - 01:16:09
$10 billion, that's a small country's GDP.
Speaker 2
01:16:09 - 01:16:16
Yeah, that sounds about right. There's a lot at stake in this auction design.
Speaker 1
01:16:17 - 01:16:34
And it uses a lot of these ideas. So the computer science perspective is clear now, because you talked a little about the NP-hardness problems. And the economics contribution here is basically the idea of how you sell more than one commodity at a time?
Speaker 2
01:16:35 - 01:17:09
So, a couple of things. So economists have definitely thought about these simultaneous ascending and descending auctions for multiple items for quite some time. I'd say one real innovation in this reverse auction, well, I'd say the overall format: while natural, it was definitely a brilliant choice to use one of these so-called descending clock auctions, where you start with very high prices, at which everybody would be willing to sell their license, and you let these go down over time and people drop out.
Speaker 2
01:17:09 - 01:17:17
Because one of the things that was super important: there's a big asymmetry in the bidder sophistication between the reverse auction and the forward auction.
Speaker 1
01:17:17 - 01:17:18
Yes, I can imagine that.
Speaker 2
01:17:18 - 01:17:45
In the reverse auction, you're talking about television broadcasters, sometimes really small ones, who don't even have the resources to hire a consultant to tell them what to do. In the forward auction, you're talking about, like, T-Mobile or something, right? So there, there is a lot of sophistication. So for the reverse auction, it was totally crucial that the auction was super, super simple for the stations to be able to participate in, so that they could just do the obvious thing and rest assured that they were doing the
Speaker 1
01:17:45 - 01:17:51
right thing. The transparency of the mechanism by which you treat people is super important for them to interact with it. Exactly.
Speaker 2
01:17:51 - 01:18:03
And so this descending clock auction really, on the one hand, was about as simple as you could possibly imagine. But subject to that, it had just a ton of ingenuity in how the eventual outcome was actually computed. So getting that sweet spot.
Speaker 1
01:18:03 - 01:18:04
Behind the scenes.
Speaker 2
01:18:04 - 01:18:15
Exactly. Right. So from a single participant's perspective, you have no idea that all these graph coloring problems are being solved. You have no idea. All you're seeing is a sequence of yes/no questions.
Speaker 2
01:18:16 - 01:18:26
Would you be willing to sell your license to the government for $1 million, yes or no? Maybe if you say no, then they're like, fine. Go away. We never want to see you again. You're going to stay on the air, and we're not going to pay you.
Speaker 2
01:18:26 - 01:18:39
Or if you say, yes, I would be willing to sell, they're probably going to come back tomorrow and say, well, what about $950,000? If you say yes, they'll come back tomorrow and say, what about $900,000? At some point, it might get low enough that you're like, you know what? This license is worth more to me than that.
Speaker 2
01:18:39 - 01:18:49
So I'm out of here. I'm dropping out. And that's all you have to do as a participant. There's this sort of sequence of prices you're going to see. When it gets too low, you say no, that's it.
Speaker 2
01:18:50 - 01:19:08
So I'd say that was just a very smart choice, I think, to go with that overall auction format. Also, I think this way of thinking about the channel reassignment as a way of avoiding the holdout problem, in some sense, increasing the substitutability of the goods being procured. That was also a big idea.
Speaker 1
01:19:10 - 01:19:16
Fantastic. I think that got us through almost everything. OK. Any closing remarks?
Speaker 2
01:19:18 - 01:19:23
Let me see. I feel like I should have a good story to tell you.
Speaker 1
01:19:24 - 01:19:41
What's intellectually more satisfying: talking to the theoretical economists, or getting these things done? I mean, we nerds just like to see amazing stuff work, but we also get some intellectual pleasure out of admiring nice theorems.
Speaker 2
01:19:41 - 01:20:08
Yeah. I mean, I think one thing that's important: different researchers are different. And actually, I think that's a huge feature of how scientific communities are organized. So in a mathematical field like ours, there are some people who really just get lost in a math problem and work hard on it for a long period of time and come up with some brilliant proof. And where would we be without researchers willing to do that, to focus so intently on a single problem?
Speaker 2
01:20:09 - 01:20:28
On the other hand, another important part of the ecosystem are people who build connections. And it could be connections just purely between different areas of theory, for example. Or it could be connections between, say, theory and practice, more like what we were just discussing. And I find all aspects of this pretty fun. Hard math problems are fun by themselves.
Speaker 2
01:20:30 - 01:21:18
But there is definitely a satisfaction in seeing the computer science mindset slowly but surely starting to infiltrate economic theory. So we talked a lot about NP-completeness today, and I think that was a great early example of how theoretical computer science thinking started pervading many, many fields, for lots of reasons. And I think we're starting to see that this is sort of another example of that, where in some sense the techniques we've come up with are actually changing how a different field views some of their questions, and changing even what they recognize as legitimate solutions, in sort of a broadening way. So I think it's a very exciting time, actually, to be at the computer science and economics interface. We now have a very strong shared language.
Speaker 2
01:21:18 - 01:21:36
Computer science students have been learning lots of economics, and vice versa. And there have been some killer applications already. We talked about the incentive auction. We didn't talk about, you know, online advertising auctions, which drive the business models of companies like Google and Facebook. But I'm pretty confident there's going to be more coming around the corner.
Speaker 1
01:21:37 - 01:21:38
Excellent.
Speaker 1
01:21:38 - 01:21:40
Tim, thanks a lot for coming.
Speaker 2
01:21:40 - 01:21:41
Thanks so much for having me.
Speaker 1
01:21:41 - 01:21:41
And thank you all for listening. Bye.