Overview and Recent Results in Combinatorial Auctions - Matt Weinberg

2 hours 59 seconds

Speaker 1

00:00:15 - 00:00:49

Thank you very much for having me. So as I quickly mentioned, most of the talk is on the slides. But in the interest of giving a little bit more background and context than I normally do, I have some stuff over here. And I'll give one proof of kind of a basic result on this board here when I get to it. Okay, so I will try to cover as much of the new stuff as I can, but my intent is to spend as much time on the background material so that the context feels interesting.

Speaker 1

00:00:49 - 00:01:14

So feel free to interrupt as much as necessary. And if I don't get to new stuff, that's OK. So this, yeah, I guess this outline is actually not super useful. I took slides from a talk I just gave where it was condensed into an hour, so let me just skip over the outline. And okay, so what are combinatorial auctions?

Speaker 1

00:01:14 - 00:01:32

So this slide is just going to be the formal mathematical problem. So there is a set of bidders, there is a set of items. Every bidder has a valuation function. And just to confirm, these are functions that take as input a set. So fully describing a valuation function takes 2^m numbers.
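In symbols, the setup just described (with n bidders and m items; the notation is assumed, not taken from the slides):

```latex
% Each bidder i has a set-function valuation over the m items; writing one
% down explicitly takes 2^m numbers, one per subset.
\[
  v_i : 2^{[m]} \to \mathbb{R}_{\ge 0}, \qquad i \in \{1,\dots,n\}, \quad [m] = \{1,\dots,m\}.
\]
```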

Speaker 1

00:01:32 - 00:02:02

It's not necessarily something simple where your value for an apple and an orange is the apple plus the orange. It's some arbitrary function on the sets. And what happens is, bidders participate in some possibly interactive protocol with the designer. So they send messages, the designer asks some questions, and then at the end of this protocol, the auctioneer gives every bidder some set. These sets should partition the items, or sorry, they should not overlap.

Speaker 1

00:02:02 - 00:02:21

They don't have to award all items. And if the auctioneer wants to, they can charge prices. So you'll see on the next slide what the objective function is. The goal has nothing to do with prices. The goal is just to allocate the items in a way that optimizes the welfare.

Speaker 1

00:02:22 - 00:02:51

So the sum of the value of everyone for the set that they get. And we're gonna be looking for approximation algorithms. And so the thing I wanna highlight is that the prices don't play a role in your objective function. The only reason the designer might use those is if the designer also wants to make the protocol say incentive compatible. So the idea here is that if the designer doesn't use prices at all, then you're kind of stuck with protocols where the larger numbers you say, the more stuff you get, and you shouldn't really expect people to follow those protocols.

Speaker 1

00:02:52 - 00:03:24

If you can charge prices, you can kind of make people pay for the large numbers that they're saying. It's still hard to do that in the right way, but it's at least feasible to possibly incentivize someone to tell you their actual valuation. So the formal concept here, if you would like it to be the case, that for every bidder, they can just go through the following simple reasoning. They can say, I know my value, say V1, I don't know anyone else's values. The only thing I'm going to trust is that they follow the protocol based on their values.

Speaker 1

00:03:25 - 00:03:53

And given that, even though I have no idea what anyone else's values are, it's in my incentive to follow the protocol. And so if every bidder simultaneously believes this, then no bidder needs to have any beliefs over anyone else's valuations. They all just need to believe that everyone else knows that the mechanism has this property and that everyone should tell the truth. So that's called, I'll just call this truthful for the rest of the talk. The formal notion is called ex-post Nash.

Speaker 1

00:03:54 - 00:04:00

There are other formal concepts of incentive compatibility that people study, but this is going to be the one that comes up for the entire talk.
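A hedged formalization of the property just described, writing A_i for the set awarded to bidder i and p_i for the price charged (notation assumed):

```latex
% Ex-post Nash / truthful: for every bidder i, every true value v_i, every
% possible misreport v_i', and every profile v_{-i} of the others' values,
% truth-telling is a best response assuming the others follow the protocol.
\[
  v_i\bigl(A_i(v_i, v_{-i})\bigr) - p_i(v_i, v_{-i})
  \;\ge\;
  v_i\bigl(A_i(v_i', v_{-i})\bigr) - p_i(v_i', v_{-i}).
\]
```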

Speaker 2

00:04:00 - 00:04:05

So by mechanism, you mean the protocol? Correct, that's correct, yes.

Speaker 1

00:04:05 - 00:04:49

So when I use the term mechanism, I maybe will be implying that it should have something to do with incentives, but it's just the protocol. Okay, so if I were trying to make a philosophical pitch for why this question is interesting, I would say I think it's interesting because it's a lens to study the following question: are truthful mechanisms as powerful as algorithms? If I try to make that sound like a very grandiose question, I might ask: for any optimization problem you have in mind, if you can design an algorithm to solve it, is it possible to also solve it in a way that incentivizes people to participate? And maybe that's too grandiose to be a yes-or-no question, but combinatorial auctions are a very natural test bed. So why is that?

Speaker 1

00:04:49 - 00:05:16

Because, you know, computer scientists have been studying the algorithmic version of this problem. So once I give specific instantiations, you'll recognize them as optimization problems that are interesting without talking about incentives. And economists studied these problems before AGT existed as a field, too; people actually run large combinatorial auctions. So I think they're, at least right now, maybe the most, or at least among the most, natural problems to try and get at this question.

Speaker 2

00:05:17 - 00:05:23

AGT meaning algorithmic game theory, the field?

Speaker 1

00:05:23 - 00:05:55

Correct, yes, thanks. Okay, and so even when you specialize to combinatorial auctions, it's still kind of too grand a question to have a yes-or-no answer, but we can still try and use it as a lens. So just from a philosophical perspective, this is why I find it most interesting: these are super well-studied problems, they have a really rich theory, and how does the answer differ? Is it the same for algorithms versus mechanisms? And okay, let me give you some sample results.

Speaker 1

00:05:56 - 00:05:57

And...

Speaker 2

00:05:57 - 00:06:05

Just to ask: in the framework you describe, is it clear that mechanisms are special cases? How do we see that?

Speaker 1

00:06:05 - 00:06:36

Sorry, yeah, so let me describe exactly what I mean by that. So I would say, so for this sentence, what I'm saying is that algorithms are protocols that demand players follow them and players will just follow them. And mechanisms are protocols that additionally must be truthful. So the idea being that as soon as I have a truthful mechanism, that's just also an algorithm with a stronger property. So if I can do it with a mechanism, I can certainly do it with an algorithm.

Speaker 2

00:06:37 - 00:06:45

Because I think in general, mechanisms are in many ways more powerful. Yes, yeah. You will talk only about truthful mechanisms?

Speaker 1

00:06:45 - 00:07:22

So I'm gonna talk about, so I'm not gonna talk about price of anarchy or stuff like that at all. I'm using the term algorithms basically to just refer to protocols, and I'm using the term mechanisms to refer to truthful protocols. And I will try to be clearer every time I use it, but here that's what I mean. So here, what I'm basically trying to ask is: is it the case that anytime we can design, say, a polytime algorithm that gets a 2-approximation, there is also a polytime truthful 2-approximate mechanism, or are there separations? Yeah.

Speaker 2

00:07:22 - 00:07:28

And in the case of algorithms, will you regard the valuations of the players as the inputs?

Speaker 1

00:07:28 - 00:07:49

Great question. So I'm going to consider actually a couple of different models, and the answer is going to change depending on the model. I'm also going to make a pitch for which models I think are maybe more or less relevant. But I will consider both: two kinds of oracle models and the communication model. Yeah. Awesome.

Speaker 1

00:07:54 - 00:08:38

OK, so let me tell you what's already known. So this result, just because it's so seminal and also very simple, when I find a good pause point, I will just write it on the board. So this is by three economists, and Vickrey, this is the same Vickrey from the Vickrey auction, or the second-price auction. He got a Nobel Prize for all of his work on auction design; in particular, some of it contributed to this. This is not how they phrased their result, they weren't computer scientists, but as a computer scientist, I think we would look at their result and say that if you have n + 1 black-box calls to any optimal algorithm, so this is one that does not incentivize people to participate correctly.

Speaker 1

00:08:39 - 00:08:52

It's just an algorithm, but it maximizes welfare. If you have n + 1 black-box calls to that, for any class of valuations, then you can turn that into an optimal truthful mechanism. So basically, if you think through what that must mean.
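To make the "n + 1 black-box calls" concrete, here is a minimal Python sketch. The brute-force `optimal_allocation` is a stand-in for whatever optimal black box the theorem assumes (the theorem counts calls, not running time), and the example at the bottom checks that one item recovers the second-price auction.

```python
from itertools import product

def optimal_allocation(valuations, m):
    """Stand-in for the black-box optimal algorithm: brute-force welfare
    maximization. valuations[i] maps a frozenset of items to a float."""
    n = len(valuations)
    best, best_bundles = float("-inf"), None
    for assignment in product(range(n + 1), repeat=m):  # index n = unallocated
        bundles = [frozenset(j for j in range(m) if assignment[j] == i)
                   for i in range(n)]
        welfare = sum(v(b) for v, b in zip(valuations, bundles))
        if welfare > best:
            best, best_bundles = welfare, bundles
    return best_bundles

def vcg(valuations, m):
    """VCG with n + 1 black-box calls: one call selects the allocation,
    and one call per bidder (with that bidder zeroed out) sets their payment."""
    n = len(valuations)
    alloc = optimal_allocation(valuations, m)                  # call 1
    payments = []
    for i in range(n):
        zeroed = valuations[:i] + [lambda S: 0.0] + valuations[i + 1:]
        without_i = optimal_allocation(zeroed, m)              # calls 2..n+1
        # Clarke pivot: others' optimal welfare without i,
        # minus others' realized welfare with i present.
        payments.append(
            sum(valuations[j](without_i[j]) for j in range(n) if j != i)
            - sum(valuations[j](alloc[j]) for j in range(n) if j != i))
    return alloc, payments

# One item, three bidders: VCG reduces to the second-price auction.
vals = [lambda S, w=w: (w if 0 in S else 0.0) for w in (5.0, 8.0, 3.0)]
alloc, pay = vcg(vals, 1)  # bidder 1 wins item 0 and pays 5.0
```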

Speaker 2

00:08:52 - 00:08:54

So n is the number of.

Speaker 1

00:08:54 - 00:08:55

Number of bidders, correct.

Speaker 2

00:08:56 - 00:08:57

And 1 item.

Speaker 1

00:08:57 - 00:08:58

No, there are any number of items.

Speaker 2

00:08:58 - 00:08:59

Any number of items.

Speaker 1

00:08:59 - 00:09:09

Correct. And so there's any number of items. And these are whatever class of valuations you have in mind. They could be submodular. They could be monotone, whatever it is.

Speaker 1

00:09:09 - 00:09:31

But you need an optimal algorithm. And so if you think about what this must be saying, you say, well, the outcome it selects is clearly whatever the algorithm says, right? Because the algorithm is optimal and you're being optimal. So you have to select that. And so the only thing it's doing is somehow being clever with what payments it charges.

Speaker 1

00:09:31 - 00:10:05

And so it turns out that there is a way to charge clever payments with basically one black-box call per player. So this is to select the outcome. Each of these is to figure out the payments. Basically, the content of this theorem is there is always a clever way to charge payments, and you can do it with one black-box call, so that's something that will fit on one board, and I will do all the board stuff at once. Okay, so what's the catch? The catch is that you basically can never do this in poly time or poly communication for any interesting class of valuations.

Speaker 1

00:10:05 - 00:10:35

So it's basically known at this point, there's this class of valuations called gross substitutes, which is actually a mouthful to define. And if there's interest, I can define it, but it's a mouthful and it's not super intuitive; that's the limit of where this works. So you can exactly optimize over this set of valuations, but submodular is too rich; that's simply hard to do. So then this is just stating the state of the art.

Speaker 2

00:10:35 - 00:10:39

Yeah. So linear valuations, you mean just, I have a value per item?

Speaker 1

00:10:44 - 00:11:30

Yes, so that one, that is a subclass of gross substitutes, and that one you can do. So gross substitutes includes, maybe let me give a quick example of the most complicated thing I can quickly describe in gross substitutes: there is a set system of items that you like. You have a value for each item. And your value for a set of items is: look at all subsets I like of that set, sum the values for things that I like, and see what's the best. If the collection of subsets of things I like forms a matroid, this is gross substitutes.

Speaker 1

00:11:31 - 00:11:55

If the collection of subsets of things I like is more complicated than a matroid, it's probably not gross substitutes. So, for example, if I have a value for each item and my value for a set of 10 items is: I just find my favorite item in the set. That's called unit demand. This is gross substitutes. And if you think about the optimization problem, that turns into max-weight matching.

Speaker 1

00:11:55 - 00:12:29

So that you can do in poly time. If I just have a value for every item, and whenever you give me a set of items, I sum my value for every item in the set, that's called additive or linear. There the optimization problem is even simpler; that you can do in poly time. And what we can do is basically, it's a little bit more complicated: if matroids are involved instead, then you can also do it in poly time, but getting even a little bit beyond that becomes NP-hard. And gross substitutes is more expressive than this, but this is a good intuition for it.
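For the unit-demand case just mentioned, welfare maximization really is max-weight bipartite matching. A small sketch using SciPy (the value matrix is made up for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# values[i][j] = bidder i's value for item j; unit-demand welfare
# maximization is exactly max-weight bipartite matching.
values = np.array([[4.0, 2.0, 7.0],
                   [3.0, 6.0, 5.0]])
rows, cols = linear_sum_assignment(values, maximize=True)
print(list(zip(rows.tolist(), cols.tolist())))  # [(0, 2), (1, 1)]
print(values[rows, cols].sum())                 # optimal welfare: 13.0
```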

Speaker 2

00:12:30 - 00:12:35

So for that case, do you use Edmonds' matroid partition algorithm?

Speaker 1

00:12:36 - 00:13:08

Great question. So the algorithm I know has nothing to do with that; the algorithm I know is based on rounding an LP relaxation. And so actually maybe that's something else I can say. So there's an LP relaxation that is integral if and only if the valuations are gross substitutes. And so you can always solve the relaxation, and if it's integral, then you get the optimum. And this seems to be, yeah.
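Presumably the relaxation meant here is the standard configuration LP for welfare maximization, with one variable x_{i,S} per bidder-bundle pair:

```latex
\[
\begin{aligned}
  \max\;\; & \sum_i \sum_{S \subseteq [m]} x_{i,S}\, v_i(S) \\
  \text{s.t.}\;\; & \sum_{S \subseteq [m]} x_{i,S} \le 1 \quad \forall i
      && \text{(each bidder gets at most one bundle)} \\
  & \sum_i \sum_{S \ni j} x_{i,S} \le 1 \quad \forall j
      && \text{(each item is allocated at most once)} \\
  & x_{i,S} \ge 0.
\end{aligned}
\]
```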

Speaker 1

00:13:10 - 00:13:47

Yeah. So that seems to be the limit of where you can use this. Okay, so then you can also say, well, what if I also want my algorithm to hold in the most general case? So here, if you insist on your algorithm holding for all monotone valuations, we do know a root-m approximation in poly communication. And this is matched by randomized truthful mechanisms.

Speaker 1

00:13:49 - 00:14:07

And the best deterministic truthful mechanism is kind of barely non-trivial. So let me maybe just give quick intuition. You should think of getting an m-approximation as trivial, because you can just take all the items and treat them as a single item.

Speaker 1

00:14:08 - 00:14:20

And then we have things like the second price auction. And so that's fairly trivial. And this is just a little bit better than trivial and I will describe this result in a little bit more detail during the talk.

Speaker 2

00:14:20 - 00:14:29

Let's check everybody's happy with the second-price auction, or maybe you want to say something about it. That's a basic example, background to everything.

Speaker 1

00:14:29 - 00:14:57

Yeah, okay, so let me do this then. Let me take my pause at this point and I will show you the VCG mechanism that's on this slide. So let me just go back here. So I will show you this and I will claim that the second-price auction is a special case of it. And this one will fit on this board. So I have my notes just in case I need them.

Speaker 1

00:14:57 - 00:15:22

So here's my plan. So I wanna charge each bidder the following. I'm gonna say, how happy would everyone else be if I ran the optimal algorithm but I just took you out of the picture? So what would that be? So I'm going to start with that.

Speaker 1

00:15:22 - 00:16:13

So that's going to be the sum over all j not equal to i of their value for what they get in the optimal allocation on the input where I use the zero function for player i and then everyone else's valuation. Okay, so this is one half of the payment. The first thing I say is: if you weren't here, this is how happy everyone would be in the optimum. And then when I subtract, I kind of get, so this is what I'm charging you, and then I give you a rebate for sort of, well, how happy is everyone still, even though you're here?

Speaker 1

00:16:17 - 00:16:51

So this is the sum over all other players j of v_j of what they get on the real input, with you. OK, so this is the entire payment. So let me first start with this, say what it is, and how it specializes to the single-item second-price auction. So this is saying: I charge you, if you weren't in the picture, how happy could I possibly make everyone else, minus how happy they are when you're in the picture.
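Written out, the payment just described is (with OPT_j(·) denoting bidder j's bundle in the welfare-optimal allocation on the given input):

```latex
\[
  p_i \;=\;
  \underbrace{\sum_{j \ne i} v_j\bigl(\mathrm{OPT}_j(0, v_{-i})\bigr)}_{\text{others' optimal welfare without you}}
  \;-\;
  \underbrace{\sum_{j \ne i} v_j\bigl(\mathrm{OPT}_j(v_i, v_{-i})\bigr)}_{\text{others' welfare with you present}}.
\]
```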

Speaker 1

00:16:52 - 00:17:09

And so the idea here is that this is kind of like, how much do you hurt everyone else just by being in the auction? If you weren't here, we would have done this. But now we're doing this instead, and everyone else is definitely less happy. So thinking about, what if there's only one item? What is this saying?

Speaker 1

00:17:10 - 00:17:27

This is saying that like, let's say what happens if you don't get the item? So if you're not the highest bidder, then what happens? You're not the highest bidder. So without you, the highest bidder besides you is the 1 who gets the item. So this is just the value of the highest bidder besides you.

Speaker 1

00:17:27 - 00:17:52

With you, that person still gets the item. So this is the value of the highest person without you. You're not changing anything, so your price is 0. If you are the highest bidder, then this term is 0, because everyone else gets nothing since you get the item. This term is equal to the value of whoever would have gotten the item if you were not in the picture, that's the highest bidder besides you, which is the second-highest value, because you're the highest.

Speaker 1

00:17:53 - 00:17:59

Okay, so this is VCG, and that's how it specializes to the second-price auction.

Speaker 2

00:17:59 - 00:18:10

One more comment, maybe. This number, p_i, does not depend on v_i at all. Yes. And because of this, it is truthful. Exactly.

Speaker 2

00:18:12 - 00:18:12

Exactly.

Speaker 1

00:18:13 - 00:18:33

Yes. So I'll just repeat that. So that's the key thing: this does not depend on v_i. Right, so now what I can try and do is say, let's see what player i's utility would be if they reported some value.

Speaker 1

00:18:33 - 00:19:20

So what's going to happen is I want to look at v_i of whatever they get, minus the price they pay. So let's see. So this just means whatever allocation they cause to be selected. Let me phrase it like this, just to be clear. OK, so what I can do is: I can report any v_i prime that I want. And so that's going to cause this set to be selected for me.

Speaker 1

00:19:20 - 00:19:53

So I'm going to run the optimal algorithm on whatever input I provide together with v_{-i}. I'm going to have some value for that. And I'm going to pay this price. And so let's just see what that means. So maybe, okay, maybe I was a little bit ambitious thinking we can get all of this into two-thirds of a board, but what's going to happen is, when I subtract p_i, I'm going to get this minus negative this. And the thing to observe is that this is just the missing term from here.

Speaker 1

00:19:53 - 00:20:26

So this minus negative this is just going to turn into the sum over all j of v_j of this. And then this term has nothing to do with v_i. This term is just fixed. I can't control that. So what that means is my utility, I'm trying to maximize my value minus price, is going to be equal to the sum of everyone's value for what they get, minus something I have zero control over.

Speaker 1

00:20:27 - 00:21:17

So that means that I am happier when the sum of everyone's value is as large as possible. And so that means I want to help this algorithm do as well as possible. One way for me to do that is to just give it my real input. Okay, so, depending on how you see it, there's something magical going on where everything perfectly cancels out and it fits in three lines, or it's just kind of hard to understand because it's way too quick. But that's the key thing: this perfectly aligns the incentives so that you, player i, maximize your own utility by optimizing the welfare of the outcome that is selected. So these are clever payments.
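The cancellation on the board, written out: if bidder i reports v_i', the second sum in p_i combines with i's own value, so

```latex
\[
\begin{aligned}
  u_i(v_i') \;&=\; v_i\bigl(\mathrm{OPT}_i(v_i', v_{-i})\bigr) \;-\; p_i \\
  &=\; \underbrace{\sum_{j} v_j\bigl(\mathrm{OPT}_j(v_i', v_{-i})\bigr)}_{\text{welfare of the selected outcome}}
     \;-\; \underbrace{\sum_{j \ne i} v_j\bigl(\mathrm{OPT}_j(0, v_{-i})\bigr)}_{\text{independent of } v_i'}.
\end{aligned}
\]
```

Since the second term is out of bidder i's control, maximizing utility means maximizing the true welfare of the selected allocation, and reporting v_i' = v_i achieves that because OPT is optimal.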

Speaker 1

00:21:17 - 00:21:35

These are called the Clarke pivot payments, this is the Clarke from VCG, and so this is the VCG mechanism. Okay, so can I ask, are there questions about this? And then I will share some more thoughts about it that are going to be relevant for later in the talk.

Speaker 2

00:21:42 - 00:21:47

So the thoughts will be about approximation? Yes, perfect.

Speaker 1

00:21:48 - 00:22:20

So OK, so this gives you a framework, and you might just look at it and say, hold on, what if I don't have OPT up here? This could just be some algorithm. Like, what if I just give you black-box access to an approximation algorithm, what happens? So you can chase through exactly the same proof, and here's the catch. What you see is that you are incentivized to report whatever v_i prime will maximize the welfare of what the algorithm selects.

Speaker 1

00:22:21 - 00:22:47

So maybe let me just write this so that that part's easy to stare at. So this is the sum over all j of v_j. I'll just write Alg so you don't have to keep replacing OPT with Alg. So this is the thing you're trying to optimize. You can't control this other term.
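The board line, with Alg in place of OPT:

```latex
% Same cancellation as before, but with an arbitrary algorithm Alg:
\[
  u_i(v_i') \;=\; \sum_j v_j\bigl(\mathrm{Alg}_j(v_i', v_{-i})\bigr) \;-\; h_i(v_{-i}),
\]
% where h_i(v_{-i}) is the term bidder i cannot control. So i is incentivized
% to report whichever v_i' makes Alg's output have the highest true welfare,
% which need not be the truth unless Alg is maximal-in-range.
```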

Speaker 1

00:22:47 - 00:23:22

This is what you want to optimize. If I give you an optimal algorithm, then you optimize it by giving it the right input. And if you work in approximation algorithms a lot, you know most approximation algorithms do simplifying things. Like if you run a greedy algorithm, we can usually come up with examples where it's doing something a little silly, and if I change the input, it would actually just do better, right? And so, from the perspective of getting a 2-approximation, you're like, that's fine.

Speaker 1

00:23:22 - 00:23:36

I don't need to be optimal. But what that means is that a player i may want to manipulate it. It's weird. They want to manipulate it in a way that helps you, but they still want to manipulate it. So it's not going to be incentive compatible.

Speaker 1

00:23:37 - 00:24:01

And so you can ask, is it feasible to just take any approximation algorithm and change it in a way that makes it monotone? And largely, no. To fix this problem everywhere for an approximation algorithm, you get stuck in exponentially long paths, right? Where you say, oh, OK, I'll imagine player i changing their value to make it better.

Speaker 1

00:24:01 - 00:24:21

So I, the algorithm, will just do that for them. But then that means maybe some other player could change their value and make it even better. And this generally does not work. There are special kinds of algorithms that happen to have this property, where it perfectly aligns everything. Those are called maximal-in-range.

Speaker 1

00:24:22 - 00:24:59

And so let me just say exactly what that means: if you have an algorithm that actually perfectly aligns this, so that for every player, telling the correct input will result in more welfare than any lie they could possibly tell, that algorithm fits perfectly in here. And so those algorithms are called maximal-in-range. What are examples of maximal-in-range algorithms? One thing I might say is, I'm just going to restrict the space of allocations.

Speaker 1

00:25:00 - 00:25:27

So I'm just going to decide I'm only going to consider giving all items to the same bidder, and then I just want to maximize welfare subject to that. So that is maybe a very bad approximation algorithm, but it is monotone in this way. Right, if you give me the correct input, I am exactly optimizing, but I'm exactly optimizing over a restricted set of allocations. So that's an approximation algorithm that would fit in here.
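A minimal sketch of that restricted-range idea in Python; exact optimization over the tiny range (all items to one bidder, or to nobody) is what makes the VCG payments work, and here they degenerate to a second-price auction on the grand bundle:

```python
def grand_bundle_auction(valuations, m):
    """Maximal-in-range sketch: the range is just the n + 1 allocations that
    give ALL items to a single bidder (or to nobody). Optimizing exactly over
    this range is monotone in the required sense, even though it can be a very
    bad approximation overall."""
    grand = frozenset(range(m))
    bids = [v(grand) for v in valuations]
    winner = max(range(len(bids)), key=bids.__getitem__)
    second = max((b for i, b in enumerate(bids) if i != winner), default=0.0)
    return winner, second  # winner takes everything, pays the second-highest bid
```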

Speaker 2

00:25:29 - 00:25:39

And what about additive approximation algorithms, will they work? Just something that gets you, you know, 3 dollars less than the optimum.

Speaker 1

00:25:41 - 00:26:07

Let me think. It still needs to have this monotone property. It still has to be the case that for every player, the algorithm does better when it has the correct input. It can never be the case that you can somehow help the algorithm by giving it the wrong input. So that's the key thing.

Speaker 2

00:26:07 - 00:26:14

It's a constant, another constant that the player cannot control, so that it cancels out.

Speaker 1

00:26:16 - 00:26:40

Sorry, if it's always exact, yeah, so if it's always, yes, if it's always exactly 3 dollars below, that would be fine. If it's sometimes 3, sometimes 2, sometimes whatever, then that would be a problem. Yeah. So if, for example, you somehow found yourself with what Avi is saying, an algorithm that is always $3 suboptimal, that would be fine.

Speaker 1

00:26:40 - 00:27:04

If you found yourself with something that is always exactly a factor of 2 suboptimal, that would also be fine. The problem is that it's sometimes better: the algorithm may be exactly a half on the real input, but better than a half on the wrong input. And then the players are incentivized to give you the wrong input to help the algorithm do better.

Speaker 3

00:27:04 - 00:27:21

Can I think of it like this: you have an optimization problem, which has some objective function with an optimum? Yeah. The algorithm actually optimizes a different function. Yeah. And all that I need is that the two maxima are in the same place.

Speaker 1

00:27:21 - 00:27:24

That would be fine. That would also be fine.

Speaker 3

00:27:25 - 00:27:37

The problem is when the algorithm actually produces something whose maximum is somewhere else. Exactly. So there is an incentive for you to say: because I'm maximizing welfare via the algorithm, I'm going to put that instead of what I should.

Speaker 1

00:27:39 - 00:27:56

Exactly. And so now if you think, if you go through a short exercise, pick kind of a canonical approximation algorithm, say an LP-rounding-based algorithm. You should not expect those to be monotone. Right, like they just take some fractional solution.

Speaker 1

00:27:56 - 00:28:12

If I nudge the fractional solution to be slightly suboptimal, my rounding algorithm does who knows what with it, right? You should not expect it to have this nice property. Good. OK. OK, so that's this concept.

Speaker 1

00:28:12 - 00:28:18

This will come back up in the first set of results that I want to share. And as long as I am here,

Speaker 3

00:28:23 - 00:28:27

yes? Would you expect that property out of a greedy algorithm?

Speaker 1

00:28:27 - 00:28:45

No, they also wouldn't. Yeah. So let's go through a greedy algorithm. The idea would be that at some point, it does give an item to the wrong person. So say item 1 really should have gone to player 2 in OPT, but greedily it wound up with player 1.

Speaker 1

00:28:45 - 00:29:13

Then the moment that happens, you say, oh, I should have given it the wrong input. Right, I should have raised bidder 2's value for it, right? Basically, as soon as it does something wrong, and especially for greedy algorithms, it's normally easy to think how you would manipulate them to get more welfare. Yeah. So I'll give you examples of some algorithms that are maximal-in-range, and basically, that's the first technical thing I will do.

Speaker 1

00:29:14 - 00:29:31

OK. So now, let me just tell you, the next couple of slides are also just summarizing what's on the board, but the board has a little bit more; I normally don't go into all this detail, so the slides have a little bit less. So I just want to say what's known about this problem in different models.

Speaker 2

00:29:32 - 00:29:52

So, yeah. Is the only known way to prove truthfulness of a mechanism giving prices which are independent of the bidder's value, or are there ways to prove truthfulness that are different? The prices may depend on your input, but I just wonder if you have...

Speaker 1

00:29:52 - 00:30:23

So, prices depending on values: my guess is that whatever formal instantiation you have in mind, the prices have to be independent of the values. So let me give you a formal instantiation that has to hold. Say that, fixing the other bidders' values, it's possible for you to get set S and pay P, and it's possible for you to get set S and pay P prime.

Speaker 1

00:30:24 - 00:30:36

And those are different. That can't be incentive compatible. That can't be truthful. Because either P or P prime is smaller, and you will always want to go for the smaller one, because you get the same set and you pay less.

Speaker 1

00:30:37 - 00:31:07

So what that means is that an absolutely necessary condition is that for every set S, there is a unique price you can wind up paying to get it. And then normally, the harder part on top of that is: there's a different price for every set, and those have to be compatible in some global way. And that's normally the super hard part to get right. But at minimum, there has to be a unique price you can possibly pay for each set, and that cannot depend on your value.

Speaker 2

00:31:08 - 00:31:15

But are there proofs of truthfulness like that, where prices do depend on the value? I just wonder if such exist.

Speaker 1

00:31:15 - 00:32:05

Yeah, so I would phrase it as: because there's a unique price for each set, that is immediately determined without looking at your value. So I would immediately get a contradiction if it were the case that, say, fixing the values in the room, there's a v_i I can report to pay P for set S, and there's a v_i prime to pay P prime; one of those is wrong, because either P is less than P prime or vice versa, and I would want to switch. So it has to be like that. Now, if you look at the way I chose to write p_i here, technically v_i does sneak in, because v_i determines the output.

Speaker 1

00:32:05 - 00:33:00

So v_i happens to show up here. But what's kind of hidden is that whatever set you wind up with, if you wind up with the same set, that part doesn't depend on your value. So I could write this in a way that doesn't depend on my value. The economists, actually this will be on a slide later too, they call this the taxation principle, and I don't know how it got that name, but it just refers to the fact that once you fix everyone else, you can list all of the things that might happen to you, and you must get your favorite thing from this list. And in particular, this list can't contain options of the form: same set, multiple different prices. Yeah.
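A hedged formalization of the taxation principle as described (notation assumed): fixing v_{-i}, the mechanism presents bidder i with a menu of set-price pairs, with at most one price per set, and must award a utility-maximizing entry:

```latex
\[
  \mathrm{Menu}_i(v_{-i}) \;=\; \bigl\{\, (S,\, P_i(S; v_{-i})) \,\bigr\},
  \qquad
  \text{bidder } i \text{ receives some } S^* \in \arg\max_{S} \bigl[\, v_i(S) - P_i(S; v_{-i}) \,\bigr].
\]
```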

Speaker 1

00:33:04 - 00:33:42

All right. Okay, so let me share what's known. So I think the first model where this was studied was the computational model, or the value oracle model. The value oracle model would mean the valuations are given to you via oracle access: given a set S, what is your value for S? The computational model would mean you're explicitly given a poly-size circuit that computes a value oracle, but if desired, you can hack around with the circuit instead of just computing queries.

Speaker 1

00:33:46 - 00:34:30

So here's what's known. So here the state of affairs is actually fairly well understood, and there are strong separations, sorry, some separations. So for submodular, there's this algorithm, I believe it's called the continuous greedy algorithm; I don't remember all the references for these, but if you want references, I can probably quickly find some of them. It gets a 1 − 1/e approximation for submodular valuations. There is a simpler greedy algorithm, one that is reasonable to teach in an undergrad algorithms course, that gets a half approximation, but this 1 − 1/e is tight; this is the best you can do.

Speaker 1

00:34:31 - 00:34:33

As soon as you get um.

Speaker 2

00:34:36 - 00:34:43

In what way, maybe you can say something, in what way is it using the circuit?

Speaker 1

00:34:43 - 00:34:47

Yes, so, none of the algorithms use the circuit.

Speaker 2

00:34:49 - 00:34:52

The computational versus oh sorry.

Speaker 1

00:34:52 - 00:34:53

These are all the same. These are all the same.

Speaker 2

00:34:53 - 00:34:56

Yes. Okay, you distinguish the algorithm and the mechanism.

Speaker 1

00:34:56 - 00:34:57

Correct yeah

Speaker 2

00:34:57 - 00:34:59

So the, uh, algorithm knows how to use it?

Speaker 1

00:34:59 - 00:35:43

Correct, yes, correct. So look, basically some of the lower bounds are harder, because you can't just prove an oracle lower bound; you need to also, yeah. So just to repeat that: all of the algorithms are just based on value queries. The initial lower bounds were all information-theoretic, about value queries. And then I believe for all of them, maybe not all of them, there was also later work that said even if you have access to the circuit, it's still NP-hard, because there's a way to give a poly-sized circuit such that doing better would solve an NP-hard problem. But that's the harder part of the lower bound. That's the only role the explicit circuit plays: the lower bounds are harder.

Speaker 1

00:35:47 - 00:36:22

Okay, so here, submodular, you can get 1 − 1/e. There's a class of valuations called XOS that I actually would not expect everyone in the room to know. And let me pull up the definition really quick. Okay. Okay, so these are called fractionally subadditive, or XOS.

Speaker 1

00:36:23 - 00:36:51

So basically how you can think of it is: there is a possibly exponentially large set of additive functions. If you prefer to think of it as a matrix, you can think of it as a matrix. But there is a set of additive functions, and when I give you a set, you say, let me look at all of the additive functions that I have in this bank, and I will find the best one for this set. So this is richer than submodular.

Speaker 1

00:36:52 - 00:37:25

So every submodular function is also XOS. This is richer than what I was explaining before, about having a value for each item. So let's think about this. You can also phrase it as: I have a value for each item, and there are some sets I like. I can make a row in this matrix for every set I like, and just put 0 for items not in the set and the value of the item for everything in that set; that would write it in the XOS format.

Speaker 1

00:37:25 - 00:38:08

And then the generalization is that I can have different numbers in every row if I want. So this is even richer than that. This is equivalent to fractionally subadditive. Let me not go into too much detail for that definition, but if you're comfortable with fractional covers: fractionally subadditive says that for any set and any fractional cover of that set, the value of the set should be less than the weighted sum of the values in the fractional cover. Okay, and subadditive is the same thing for integral covers. So fractionally subadditive is a more restrictive property than subadditive.

Speaker 1

00:38:08 - 00:38:46

Subadditive says whenever I have, you know, S union T, the value of S union T should be less than v of S plus v of T. Fractionally subadditive is just saying that even when it's not sets S and T covering integrally, just a fractional cover of S or whatever, it should still hold. So this is XOS; it's genuinely in between submodular and subadditive. And okay, subadditive I just said, and then monotone is just all monotone valuation functions. And these in particular can be super complicated compared to subadditive.
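Collecting the three definitions just given (the a_k are the additive "clauses" in the XOS bank; λ is a fractional cover):

```latex
% XOS: the max over a finite bank of additive functions with a_{k,j} >= 0.
\[
  v(S) \;=\; \max_{k} \sum_{j \in S} a_{k,j}.
\]
% Fractionally subadditive: for every fractional cover of S, i.e. weights
% lambda_T >= 0 with sum over T containing j of lambda_T >= 1 for all j in S,
\[
  v(S) \;\le\; \sum_{T} \lambda_T\, v(T).
\]
% Subadditive: the integral-cover special case,
\[
  v(S \cup T) \;\le\; v(S) + v(T).
\]
```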

Speaker 1

00:38:46 - 00:39:24

I could just be like: I have a long list of sets of size m over 2, and if you get one of those you have value 1, otherwise you have value 0. So this is wildly not subadditive. Okay, so just saying what's known. So in the computational model, this is, in my opinion, the most interesting result. Okay, so in the computational model, we know up to lower-order terms the best truthful mechanism, and it's root m.

Speaker 1

00:39:25 - 00:39:52

In particular, the impressive part of this is the lower bound, whereas there is a constant-factor approximation algorithm. All of the other things match. I underlined this in red because one of the results I'll present slightly improved it; the red means that something I'm presenting at least slightly improved something here. So this used to be m over root log m and now it's m over log m. This is the most impressive result here.

Speaker 1

00:39:53 - 00:40:12

I didn't contribute to this. This is the impressive result here. Almost. So, they studied this related problem called combinatorial public projects, which I will actually briefly define in one of the background slides.

Speaker 1

00:40:13 - 00:40:40

I would say that there the story is much cleaner, and I would say is actually fairly resolved. The catch is that the problem was a little bit tailored to get these kinds of results. So it was much more naturally amenable to standard tools, whereas combinatorial auctions was a little bit more the first-principles problem, and then all of a sudden most of the tools didn't work. So this is a series of papers.

Speaker 1

00:40:40 - 00:41:04

So the last one in the series was by Shahar Dobzinski and Jan Vondrák, I believe actually building on their own prior work. So the last one is definitely Dobzinski and Vondrák; I think those are the right names to associate. Let me just, actually, that reference I have on my background slide, so I might as well pull that up to be safe.

Speaker 1

00:41:06 - 00:41:36

And sorry, it builds on one by Dughmi and Vondrák. Let me just pull that up. So, these two, and then it builds off those, and the last one in the series is the one that makes it hold even when you have access to the circuit; these earlier ones are about value queries. Okay, so, sorry, okay. So just to repeat again: in the computational model, things are basically resolved.

Speaker 1

00:41:36 - 00:41:46

This is the impressive result, the separation, that somehow they proved lower bounds that only applied to truthful mechanisms that did not apply to algorithms.

Speaker 2

00:41:54 - 00:41:56

I'm slowing you down a lot. I don't know if you can hear me.

Speaker 1

00:41:56 - 00:41:57

No, that's great.

Speaker 2

00:41:59 - 00:42:04

Are the economists interested or impressed by this lower bound?

Speaker 1

00:42:05 - 00:42:38

So I think economists are, good question, I think yes and no. So they are interested in what we do in this field, for sure. Are they specifically interested in this? The reason I bring it up: the best example is there was this huge FCC spectrum reallocation that I think was originally supposed to be in 2017, and then it finally happened at some point.

Speaker 2

00:42:39 - 00:42:47

And this is worth a lot of money? Yeah. Yeah, I think even tens of billions,

Speaker 1

00:42:47 - 00:43:35

I think. That's a pretty good idea. And the group that eventually did it for the US government, there's a lot of people, but the people who wrote the paper about it were Ilya Segal, who's an economist, Paul Milgrom, who's an economist who just won the Nobel Prize, and Kevin Leyton-Brown, who's a computer scientist who works in AI and economics and computation broadly. So the interesting thing that they realized is that the valuation functions that people have are complex. The idea being that if you're Verizon and you have one band to broadcast in the New York City area, it's much easier if you just have the same band in adjacent areas and you don't have to change your broadcasting frequency.

Speaker 1

00:43:36 - 00:44:04

Also, every broadcasting frequency, some of them are stronger, some are weaker. So cell phone companies like being able to penetrate, so that you can get service on the subway. And cable companies, it turns out, don't really care, because you don't watch cable TV on the subway. And each company also has different values over, what if I can reach all of this market but I can't extend to this market. So it's just very complicated.

Speaker 1

00:44:05 - 00:44:41

And on top of that, they also had a hard algorithmic problem, which is: once you know who you want to give stuff to, sometimes they're indifferent about the precise number of the frequency band they get, they just need to be packed. So they were regularly solving k-coloring problems on a very large graph. And so they're very interested in what computer scientists have to say about computation for these kinds of problems; I don't know that they're specifically interested in this specific result.

Speaker 2

00:44:43 - 00:44:52

I didn't mean the specific result, but the fact that there are lower bounds. Yeah. Yeah. So do they care that, you know, they cannot achieve everything truthfully?

Speaker 1

00:44:52 - 00:45:41

Yeah, so, oh, sorry, yeah. So I think they very much care about the philosophy of the interaction between computation and incentives, and the combinatorial auctions lens is much easier for them than, I don't wanna say made-up, but problems that we might create that don't necessarily come from something they already work with all the time. The other thing I can share is that economists are much more comfortable with communication complexity than computational complexity. I guess A, because it's easier to learn communication complexity than computational complexity. And B, I will also get to one of these slides saying that the definitions, I think, are harder to get right with computation and incentives.

Speaker 1

00:45:41 - 00:46:10

And I think they're natural for communication and incentives. So I think that economists do question some of our computational hardness results, because they can question whether the computational model is just inserting hardness for not necessarily a good reason. Whereas with communication, it's much clearer where the hardness is coming from. So let me now tell you what's known in the communication model. So here's what.

Speaker 1

00:46:10 - 00:46:37

So by algorithms in communication, I just mean communication protocols. And all of these are, I didn't say it up here, all of these are poly time or poly value queries; this is poly communication. So the state of affairs for protocols is quite good. So I'll just highlight as an example: this is still an open problem, if someone really wants to nail down what is the right constant for submodular valuations.

Speaker 1

00:46:38 - 00:47:01

We know that it's strictly better than 1 − 1/e. So this was an interesting algorithm, because it showed that you can do something in the communication model that you can't do with value queries. And this lower bound shows that you still can't get all the way to optimal, but this gap is open. This is a paper; I think it's not the same paper for both.

Speaker 1

00:47:04 - 00:47:33

I have a good guess at the authors, but let me not say something wrong. This I wrote in this very precise way just to emphasize that we really know what's going on for the most part with protocols. So this is basically 1 − 1/e, but we actually know exactly how it degrades with n. So this is a paper of Feige. This we know; we tightened the lower bound to exactly one half recently, that's the red underline.

Speaker 1

00:47:33 - 00:48:04

The one-half approximation algorithm is the same paper by Feige, and, um, the previous lower bound approached one half as n approaches infinity; it's now exactly one half already for two players. And this is Theta of root m; we've known this for a while. So this is fairly settled. On the other hand, the main thing I want to highlight about this half of the board is there are no omegas anywhere. So there are no lower bounds. These are the best lower bounds we knew prior to one thing I'm going to share in this talk.

Speaker 1

00:48:05 - 00:48:24

All we have is truthful mechanisms. And the other thing I want to highlight is these are constants, and this is like root m. It's actually the same as what we can do with value queries. We don't know anything clever that uses communication beyond value queries.

Speaker 1

00:48:26 - 00:48:57

These are doing clever things. They are randomized, and they're still super-constant. So I'm less turned off by the gap between the log log m and a constant than I am by the gap between root m and a constant. But still, even if you're OK with randomization, we still don't know whether these can be improved down to a constant. But I think these gaps are the more pressing ones: we really should understand whether there needs to be a gap at all, or do something better.

Speaker 1

00:48:57 - 00:49:24

And if you ask me to guess, I believe that there is a gap. I wouldn't even be surprised if this is the best we can do. I wouldn't be surprised if we can do a little bit better. But if you asked me to guess, I think the biggest hole here is we really lack techniques to prove communication lower bounds that only apply to truthful protocols. That makes sense.

Speaker 1

00:49:24 - 00:49:58

We can prove communication lower bounds, and in fact, we know, except for this, exactly the right ones. But proving lower bounds that only apply to truthful mechanisms seems to be super hard. That's the thing to do. So let me now get back to the story here. So, okay, I shared all of these on the board in more detail, so I'm gonna skip this.

Speaker 1

00:49:59 - 00:50:32

I also shared these on the board, so I'm gonna skip this. This just has the references. And now I want to give the context I also basically gave already: it seems like if you took all of these stories together, you would say this is kind of a killer result, right? This is a lower bound that only held for truthful mechanisms and did not hold for algorithms. So there's something killer about that; it's really unique.

Speaker 1

00:50:33 - 00:51:06

And also a lot of these things do kind of match: these numbers match in the most general settings, these numbers match. So you can say, yeah, maybe in the most general settings, and also in the super simple settings where you have an exact algorithm, there's not really a separation. And this one seems to suggest that there really is a separation. So now what I want to do is tell you why communication is easier to deal with with incentives.

Speaker 1

00:51:06 - 00:51:34

Why, especially if you want to convince an economist of something, is communication much easier? Computation is weirder. Let me give you the following reason why. So I told you this result, this is what I said was kind of the killer result; it's a hardness result that only applies to truthful mechanisms. On the other hand, these results that I phrased as being in the communication model, they're actually very simple mechanisms.

Speaker 1

00:51:34 - 00:51:47

Okay, so what are these mechanisms? They're what are called posted-price mechanisms. So how do those work? You post a price on each item. So item 1 costs $10, item 2 costs $13, whatever.

Speaker 1

00:51:47 - 00:51:56

You go to bidders one at a time. You say: what set of items would you like to purchase? You can pick any set of remaining items you like. You're going to get that set. You're going to pay the sum of the prices.

Speaker 1

00:51:57 - 00:52:10

And then you just do this for each bidder. These prices can be computed in poly time. They're not exactly posted-price mechanisms, there's a little bit of work to find the prices, but this is really what's going on.
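A minimal Python sketch of the posted-price mechanism just described. The `demand` callback is an assumed bidder-side oracle answering the demand query (the step discussed next, which is hard in general); everything else is bookkeeping:

```python
def posted_price_mechanism(bidders, item_prices):
    """Visit bidders one at a time; each answers a demand query over the
    remaining items, takes that bundle, and pays the sum of its prices.
    bidders: list of (name, demand) pairs, where demand(prices, available)
    returns the bidder's utility-maximizing affordable bundle."""
    available = set(item_prices)
    allocation, payments = {}, {}
    for name, demand in bidders:
        chosen = frozenset(demand(item_prices, frozenset(available)))  # demand query
        allocation[name] = chosen
        payments[name] = sum(item_prices[j] for j in chosen)
        available -= chosen  # sold items are gone for later bidders
    return allocation, payments
```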

Speaker 2

00:52:11 - 00:52:15

And can they be computed in polynomial time from the valuations?

Speaker 1

00:52:15 - 00:53:05

Fantastic, yeah, okay. So what I was going to do was try and create a little bit of drama before getting to that. But initially, it should seem like something's off here. And what's off is exactly what Avi said: I kind of swept under the rug that if you want this mechanism to actually run in poly time, there are two things that need to happen. You need to compute the prices in poly time, and the bidders need to tell you what set they want to purchase in poly time. And we have a phrase for this operation of "tell me what set you want to purchase": that's called a demand query, and demand queries happen to be NP-hard when the value oracle is given as a poly-size circuit, and require exponentially many value queries given only a value oracle.

Speaker 1

00:53:07 - 00:53:41

And so what this means, certainly if you're trying to get into a debate with an economist, is you don't want to be taking the side that a demand query is hard. A demand query is the same thing as: you walk into a grocery store, you leave with a set of items, right? So I think it's tough to be with an economist and take the stance that it's NP-hard for us to walk into a grocery store and find a set of items to leave with. Still, for me, I think this computational model is relevant, and I think we learn a lot from it.

Speaker 1

00:53:41 - 00:53:58

We design interesting, right, like we design interesting mechanisms. We prove interesting lower bounds. We learn what's hard. But I think it is hard to take this stance that this should be the only model, if that makes sense. That there should at least be a model where these things count as poly time.

Speaker 1

00:54:00 - 00:54:23

There should also be a model where they don't, because value oracles are natural models, and some people think that way, especially if it's really a large company. Maybe that's exactly the oracle access that the large company has for how much profit it can generate. But there should be at least some other model that doesn't have this issue.

Speaker 2

00:54:24 - 00:54:26

So when you say the demand query, you mean it should be the optimal set.

Speaker 1

00:54:26 - 00:54:36

Yes, that's correct. Yeah. Fantastic. Also, yes, also agree. So what I'm going to do.

Speaker 4

00:54:39 - 00:54:46

Yeah, a question about the previous thing. Here you're visiting each bidder one at a time. Does the order matter?

Speaker 1

00:54:46 - 00:55:19

Good question. So it turns out that all of the approximation guarantees hold no matter the order the bidders arrive in. The sequential aspect is just for truthfulness. It's maybe easy to see this is truthful: you're a bidder, you show up, and someone says pick what you would like, and you can leave with it right now, and then you leave. And so the sequential aspect is necessary for it to be the case that the moment you say something, you get your stuff and you leave, but it's not necessary for the approximation guarantees.

Speaker 2

00:55:19 - 00:55:24

But there may be different outcomes on different orders. Yes absolutely,

Speaker 1

00:55:25 - 00:55:30

Absolutely, yes, that's correct. But no matter the order, they all work out.

Speaker 3

00:55:32 - 00:55:35

Okay, if you do multiple runs, do you retain truthfulness?

Speaker 1

00:55:35 - 00:55:37

No, yeah, exactly.

Speaker 3

00:55:37 - 00:55:40

It's known not to retain it, or it's not known whether it retains it?

Speaker 1

00:55:41 - 00:56:10

Okay, so if you mean specifically these mechanisms: if you use these prices in multiple rounds, I know those are not truthful. If you're asking, do there exist prices so that they would be truthful with multiple rounds, if you change the prices in a certain way? I don't know. My guess would be that the way you can change the prices is very restrictive. That would be my guess.

Speaker 1

00:56:10 - 00:56:28

But my guess is that if your goal was just to create something that is technically a two-round mechanism, and you were willing to have the prices in the second round be way higher than the prices in the first round, my guess is you could probably tailor prices that would wind up being truthful, something like that.

Speaker 2

00:56:30 - 00:56:41

So just one last question; I asked you something and I don't know what the answer was. Can the prices be computed in polynomial time? Not the demand query, but the prices.

Speaker 1

00:56:41 - 00:56:43

For these, yes.

Speaker 2

00:56:43 - 00:56:45

They can be computed in the case of submodular?

Speaker 1

00:56:46 - 00:57:20

Yes, correct. So these can be computed; they're actually very simple. Let me think: this won't get you the optimal log log m cubed, but you can get, I believe, log m or log m squared by just the following. I need to ask half the bidders; I just need to get a poly approximation to the welfare on half the bidders. And then I take that number divided by m, and that's the per-item price. So I believe something like that will already get you like log squared m.

Speaker 1

00:57:20 - 00:57:58

And if you wanna get like all the way down, then you do need to have different prices for different items. But it really boils down to, you just need to get some reasonable approximation to the welfare with a randomly sampled half the bidders. And then based on that, you can, like, you need to do a little bit more work to understand how much is each item contributing. But largely, you do something like that and then set those as prices. Okay, so let me now give you another example just to further highlight the computation incentives issues.
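A sketch of that price-computation idea in Python; `approx_welfare` is an assumed black box returning any reasonable approximation to the optimal welfare of the given bidders, and the exact constants are not the point:

```python
import random

def sample_based_item_prices(bidders, m, approx_welfare):
    """Estimate welfare on a random half of the bidders, then spread that
    number evenly over the m items as a uniform per-item price. Getting the
    optimal guarantees needs non-uniform per-item prices, as noted above."""
    sample = random.sample(bidders, len(bidders) // 2)
    w = approx_welfare(sample)            # rough welfare of the sampled half
    return {j: w / m for j in range(m)}   # uniform per-item price
```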

Speaker 1

00:57:59 - 00:58:26

So this is the combinatorial public projects problem that Avi mentioned earlier. So here, a special version of this, so combinatorial public projects can also have multiple bidders, but on this slide, I'm just talking about one bidder. Okay, so a flag should immediately go off that says: if there's only one bidder, how can this possibly be hard as a mechanism design problem? There's one bidder. I'm trying to maximize the one bidder's welfare.

Speaker 1

00:58:26 - 00:58:46

I and the bidder are on the same team. This should be fairly trivial. But let's just dive through the definitions and see what you get. So I would ask first, is there a mechanism? Oh, and let me just quickly say the problem, the problem is that the bidder can get at most k items, and they have some submodular valuation.

Speaker 1

00:58:47 - 00:59:18

OK, so it's not a trivial optimization problem, but the incentives should be trivial. I basically just want the bidder to take whatever they want, I'm just going to approve it. So is there a mechanism that is truthful, maximizes welfare, computationally efficient? So what I would try and do, I would tell the buyer, we're on the same team, pick any set of K items you want. And the buyer would say like, that's nice, but I can't run in exponential time any more than you, I can't solve NP-hard problems any better than you can.

Speaker 1

00:59:18 - 00:59:58

So like, you're just offloading the computation to the bidder and saying like, I can't solve NP-hard problems, why don't you do it for me? So I think the right answer here is that this mechanism, calling it optimal is cheating, right? That I agree with. OK, so what if instead I said, you know, I want something that's truthful, I really want it to be truthful, but I'm okay if it approximately maximizes welfare, and I want it to be really computationally efficient, because that's an actual constraint. So here's what I would do, I would say, buyer, why don't you pick whatever set you like, just find the best 1 you can find in however much time you want to run for.

Speaker 1

00:59:58 - 01:00:08

If you have more resources, run an exponential time for all I care. That's fine. On top of that, there is a 1 minus 1 over e approximation. It's been known for a while. I'll even give you the source code.

Speaker 1

01:00:08 - 01:00:25

I'll give you the code to run that algorithm if you want. But if you have something better in mind, go for it. That's fine. And the catch is that technically, if you just really unpack the definition of truthful, this is not truthful. So why is it not truthful?

Speaker 1

01:00:25 - 01:00:55

Because truthful is supposed to say, of all the things that could have happened to you, you wind up with the 1 that maximizes your utility. But the best thing that could have happened to you is that you got lucky and picked the optimal set. And that's not what you did; you found some other set instead. So technically, it's not truthful. And there's actually a formal lower bound that says that nothing satisfying this definition of truthful beats a 1 over root m approximation.

Speaker 1

01:00:56 - 01:01:46

So why did I bring this up? This is a different problem. But what I'm trying to say is that this weirdness already comes out for 1 bidder, in a way that I hope makes it feel like the definition is wrong for this problem. For this problem, I think this mechanism should count as truthful, under the right definition, and I think it should be credited with a 1 minus 1 over e approximation. So what I'm saying is that there are issues with the technical definition of truthful and the weird way it interacts with computation. I'm not saying that this is what's going on in those earlier results.

Speaker 1

01:01:46 - 01:02:09

It's not. It's something different. But I'm just saying that there is something that's not being captured by looking for things that are poly-time and truthful, and we should do something about that. I'm not taking a super strong stance and saying there's something wrong with those results; I think they're fantastic, but I just think they're not the entire story. So I'm just stopping there and saying that there should be more to this story.

Speaker 2

01:02:09 - 01:02:20

So should it be more like a fair game over all the things that could happen to you, where computational limits on their side are what matter?

Speaker 1

01:02:20 - 01:02:49

Yeah, so I will give you, and I think I will actually have time for this, my pitch for a new solution concept that will sound more like what you're saying. And the other thing I will do is just say that communication doesn't have this issue. Communication is a natural model. Computer scientists are very comfortable looking at communication complexity, maybe not as much as computation, but very comfortable.

Speaker 1

01:02:50 - 01:03:11

Even some economists do communication complexity now, which is kind of cool. And there's just no weirdness. So let's just revisit this. Of course, the problem with communication is, like I just told you, we don't have the same kind of strong results here. So I'm trying to build up towards some of those same strong results, and we've gotten a teeny bit of the way there.

Speaker 1

01:03:14 - 01:03:37

And then 2, and I will make sure to save time for this, is just to give you a pitch at a new definition that I think does a better job of handling incentives and computation. The catch, again, is that we have some positive results, but for proving lower bounds against this kind of solution concept, I have sort of no idea how to get started.

Speaker 4

01:03:38 - 01:04:04

I have a question on the previous slide. I'm a little confused. Is the point that if you are in some model where the bidder's own optimization is NP-hard, then you're never going to be truthful? Because if you're approximating, you can never get the optimal.

Speaker 1

01:04:04 - 01:04:44

Yeah, so how would you get around that? For example, what if I just say you can't pick up to k items, you can only pick up to 1 item? Now, that can be truthful, because in poly time, you will find your favorite item. But the point is, if I put you in a situation where it is NP-hard for you to find the best thing for yourself in this mechanism, then it's not going to be truthful. Or it can't be truthful in poly time.

Speaker 1

01:04:44 - 01:04:46

That's correct. That's the barrier.

Speaker 4

01:04:46 - 01:05:07

And I know this might be technically impossible to do, but wouldn't a kind of good new definition of truthful be something like: it's truthful up to what you can compute in poly time? So instead of "you can't improve by doing anything," it's "you can't improve by doing anything that is poly-time computable."

Speaker 1

01:05:07 - 01:05:30

Yeah, so I think there are 2 ways you could approach this definition. 1 is like that. So 1 would be saying: I have a mechanism and I have a recommendation for you, and finding something better is NP-hard. I believe people have pursued that definition, though I'm not sure.

Speaker 1

01:05:31 - 01:06:16

I just have in my head that I remember seeing a paper with a result that has that as a solution concept. I don't remember what it was, but it hasn't been widely successful. The other thing I would say is that it sounds fine, but it goes in a slightly less robust direction, because I don't know who has more computation. If 1 person out there does have more computation, and they do find something a little bit better, just by luck or whatever, that could unravel the whole mechanism, because now it's not truthful for the other bidders either. The 1 I'll propose goes in the more robust direction, but is a little bit less natural to state.

Speaker 1

01:06:16 - 01:06:58

But basically, I think that's a fine definition to explore. And I have this recollection that I think someone has, but it just hasn't been that successful yet, probably because it's hard to design those things. Okay, so here's where I'm going, with a quick caveat: I also just thought this was a funny comic and wanted to put it on the slide. So the first 2 main results I'll show you do something like improving m over root log m to m over log m. And this is not the matrix multiplication exponent.

Speaker 1

01:06:58 - 01:07:36

I don't expect anyone in this room to necessarily care about log m versus root log m for a problem that you probably just heard about today. But it's interesting that these 2 numbers are now equal. So that's going to be my pitch. And the other 1 is going to be improving this 3 fourths to 3 fourths minus 1 over 240, which is interesting because those 2 numbers are different, not because you should really care about getting the precise number. If anyone here is a communication complexity whiz and wants to revisit that, there are people who would be interested, but I'm not trying to pitch to you that nailing down this number is the thing to do.

Speaker 1

01:07:37 - 01:08:03

Okay, so quickly revisiting: why did I pitch communication? I pitched communication because there's not this weird interaction with incentives. So revisiting posted price mechanisms specifically, how much communication does it take to run a posted price mechanism? Each bidder just needs to state a set, a subset of the m items, so that is at most m bits.

Speaker 1

01:08:03 - 01:08:51

So communication is not an issue, right? Posted price mechanisms will have low communication, and if we can prove communication complexity lower bounds, those will tell us the limits of what posted price mechanisms can do. So what's known here, partly captured here and also captured here, is that we don't have any separations between what can be done with a poly-communication protocol and a poly-communication truthful mechanism. OK, so what I mean by that is that even though the gap between these is substantial, in all cases it's like order of root m, for all we know there is a lurking half approximation for subadditive valuations that's deterministic and also truthful. And we don't have any separation here.

Speaker 1

01:08:54 - 01:09:19

If you want me to, I can pause and just say, what are the asterisks? There are some separations. Let me just go ahead and do that, since I'm taking a lot of time anyway. So here, they did show a separation, in a restricted setting, for what you can do with polylog n communication. So if you have really, really low communication, then they did show that there's something you can do with an algorithm but not a truthful mechanism.

Speaker 1

01:09:21 - 01:09:57

And these guys showed a really nice separation, but the lower bound holds for what's called dominant strategy truthful mechanisms. So the notion of truthful I proposed says, basically, it does let you say, I assume that everyone else is behaving rationally. I don't know their values, but I assume that they're rational given their values. Dominant strategy truthful is way stronger. That says, if I have an interactive protocol, as soon as I send a message, you could say, you sent bit 1.

Speaker 1

01:09:57 - 01:10:15

Now I'm going to try and screw you over. You sent bit 0. Now I'm going to try to help you as much as I can. The dominant strategy truthful is really strong. It says, even if you consider the possibility that the other players think like that, then it's still in your interest to be truthful.

Speaker 1

01:10:16 - 01:10:46

Even though every bit you send could be the difference between someone trying to help you as much as possible, assuming they know your value, and someone hurting you as much as possible, assuming they know your value. So we do have a separation between those, and this is only as of last year: a separation between these super strong truthful mechanisms and arbitrary protocols. They showed, down here, that you can't do better than m to the 1 minus epsilon if you want this super strong notion of truthfulness.

Speaker 1

01:10:48 - 01:11:21

And so the takeaway is just that we don't know nearly as much in the communication model as we do in the computational model. And OK, so what are the 2 kinds of things I want to show? I'm going to start now getting into newer stuff, but I am still happy to pause and answer context questions or revisit old results throughout the talk. But starting now, I will start telling you some newer things. So here's a fact.

Speaker 1

01:11:21 - 01:11:56

So I mentioned that these VCG ideas, you could try to use them to design monotone approximation algorithms. And I have a slide with a little bit more detail about that. All of the good deterministic mechanisms, like 100% of them, follow this framework. The posted price mechanisms, because we can randomly sample and learn some information, those are the best ones in the randomized setting. In the deterministic setting, because we can't even learn this, it's really hard just to learn the scale of the problem.

Speaker 1

01:11:57 - 01:12:33

If you can randomly sample half the bidders, you can learn the scale and set some prices. If you can't randomly sample, you can take the first half of the bidders, but that might tell you nothing about the second half of the bidders. So you can't really hope to set any prices here. The only thing we've been successful with is these VCG-like ideas. So the first thing I'm going to tell you, I'll show you this result, which says: now we know the best you can possibly do with this VCG-based idea for monotone valuations.

Speaker 1

01:12:34 - 01:12:55

And the second 1 is I'm going to describe for you the first communication complexity lower bound. It only applies to truthful mechanisms, but it's going to be a very small separation. OK. So I told you about this result before. I told you the downside.

Speaker 1

01:12:55 - 01:13:15

The downside is that you can't optimize over the entire space. I also told you the positive. The positive was I said, as long as your algorithm is monotone, then this framework works. But your approximation algorithm has to be monotone. So let me just repeat now with text on the slide to make it easier to process.

Speaker 1

01:13:15 - 01:13:34

How might I use this? So example 1 is I would say, don't consider every possible allocation. Only consider allocations that give all items to the same bidder. Now, find me the welfare maximizing 1. That I can do in poly time, right?

Speaker 1

01:13:34 - 01:13:49

Because it's poly communication too. From every bidder, I just need 1 number: what is your value if you got all items? I find the largest 1, I give them all items, plug in VCG payments; it's basically the second price auction. Here's another thing I can do.

Speaker 1

01:13:49 - 01:13:55

I can say, each bidder, you only... well, it depends on what approximation you get. This 1 will get m.

Speaker 2

01:13:55 - 01:13:56

M, yeah.

Speaker 1

01:13:56 - 01:14:38

Yeah, and so that's basically because it could be that all the items were supposed to go to different bidders, and you messed up. Okay, what if I give each bidder at most 1 item? Here I just need to ask each bidder, for every item, tell me your value. And I can actually do this in poly time too; it's just max weight bipartite matching. So here, if the valuations are subadditive, this is an m approximation; if they're arbitrary monotone, this gets me nothing, right? Because it could be that every bidder has a set of size root m that they like, and they need to get the entire set to get any value. But I don't give anyone more than 1 item, so I get nothing.

Speaker 1

01:14:40 - 01:15:10

Here's another thing I can do. I can just ask them for both of these pieces of information, find the best 1 in each class, and pick the better. So I can also do that. And as I just told you, this is the best deterministic mechanism that we know from all of these results. We slightly, slightly improved it to m over log m, but prior to this slight improvement, this was the best thing we knew how to do.
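(For concreteness, a toy sketch of exactly optimizing over those two simple allocation classes, grand bundle and one-item-each, and keeping the better; VCG payments are omitted. This is my rendering, not the slide's.)

```python
# Maximal-in-range over two simple classes: (a) all items to one bidder,
# (b) each bidder gets at most one item (max-weight bipartite matching).
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_in_simple_range(grand_values, item_values):
    # grand_values[i]: bidder i's value for all m items together.
    # item_values[i][j]: bidder i's value for item j alone.
    i_star = int(np.argmax(grand_values))          # class (a): grand bundle
    welfare_a = float(grand_values[i_star])
    W = np.asarray(item_values, dtype=float)       # class (b): matching
    rows, cols = linear_sum_assignment(-W)         # scipy minimizes: negate
    welfare_b = float(W[rows, cols].sum())
    if welfare_a >= welfare_b:
        return welfare_a, {i_star: "all items"}
    return welfare_b, {int(i): int(j) for i, j in zip(rows, cols)}
```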

Speaker 1

01:15:10 - 01:15:35

This is the best deterministic mechanism for submodular, subadditive, or fractionally subadditive valuations. And the proof that this works is fairly short. I'll give a shot at saying it out loud. It's basically every bidder either wants more than root m items in the optimum, in which case there's only root m such bidders. And this is now a root M approximation instead of an M approximation because there's only root M bidders to consider.

Speaker 1

01:15:36 - 01:15:54

Or they wanted fewer than root m items, in which case giving them their favorite single item among those is a root m approximation, because they're subadditive. And we cover all of those with this. This is the best thing we know how to do. Here, there's nothing.
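(Spelled out, my reconstruction of that argument: let $S_1^*,\dots,S_n^*$ be the optimal allocation and split

$$\mathrm{OPT} = \sum_{i:\,|S_i^*| > \sqrt{m}} v_i(S_i^*) \;+\; \sum_{i:\,|S_i^*| \le \sqrt{m}} v_i(S_i^*).$$

The $S_i^*$ are disjoint, so at most $\sqrt{m}$ bidders appear in the first sum, and the grand-bundle class alone gets $\max_i v_i([m]) \ge \frac{1}{\sqrt{m}} \sum_{\text{large}} v_i(S_i^*)$. For each small bidder, subadditivity gives $v_i(S_i^*) \le \sum_{j \in S_i^*} v_i(\{j\}) \le \sqrt{m}\cdot\max_{j \in S_i^*} v_i(\{j\})$, so the matching handing each small bidder their best single item recovers $\frac{1}{\sqrt{m}} \sum_{\text{small}} v_i(S_i^*)$. The better of the two classes is therefore an $O(\sqrt{m})$ approximation.)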

Speaker 1

01:15:54 - 01:16:20

You can give every bidder at most k items, and that will take you m to the k communication to do. So all these things you can do. The trade-off is that if your allocation class is simple, like any of these, then you can find the best 1, and that's great. The downside is that if it's too simple, then you're not getting a good approximation guarantee. So the best we saw on this slide for monotone valuations is m, and for subadditive it's root m.

Speaker 1

01:16:22 - 01:16:53

Okay, so for the rest of this part of the talk, I'm only going to look at all monotone valuations, and I will tell you what the best thing is. So here's the best thing: slightly better than trivial in poly communication. And this is also the best known deterministic mechanism with poly communication at all. Okay, so like I said, all the best deterministic mechanisms are VCG-based right now.

Speaker 1

01:16:55 - 01:17:26

Okay, so what's our algorithm? I'm not going to give the analysis of this 1, but I will give the analysis of our slightly improved 1, which will imply this. The algorithm is very simple. I was initially very scared of finally reading this paper, both because it was written more in econ language, and it always takes me a little extra time to turn everything from econ language into stuff I can understand. And second, I just thought, this must be super complicated if it got an approximation guarantee like m over root log m.

Speaker 1

01:17:26 - 01:17:58

But it turns out it's very simple. What you do is arbitrarily break the items down into log m chunks of size m over log m. And this is arbitrary: it doesn't have to be random, doesn't have to be done any special way. Ask each bidder for their value for every set of chunks. So now, we've basically said that functionally there are log m items, and when there are only log m items, there are only 2 to the log m, or m, different sets, so that's fine.

Speaker 1

01:17:59 - 01:18:20

And now I just select the welfare-maximizing allocation that keeps all the chunks together. Okay, so this is the entire mechanism. And, sorry, I briefly thought this was a typo on the slide, but it's right: stated for general k, it's an m over root k approximation with 2 to the k communication.

Speaker 2

01:18:21 - 01:18:23

like m over k

Speaker 1

01:18:26 - 01:18:46

No, this is right. Sorry, let me just repeat. They get m over root log m. I haven't done the analysis yet. The communication is 2 to the k. Yeah. And I haven't said why you get that approximation yet.

Speaker 2

01:18:46 - 01:18:49

Oh you get that 1 because within a chunk you're...

Speaker 1

01:18:49 - 01:18:52

Yes, yeah. Oh, within a chunk, sorry.

Speaker 2

01:18:53 - 01:18:56

for the log m chunks you get only a loop.

Speaker 1

01:18:56 - 01:19:25

Yes, exactly, because there could be conflicts within a chunk, exactly. Okay, so let me say what the new result is. Again, please don't be super excited just because this problem I told you about today is now m over log m instead of m over root log m; the point is that there's a follow-up result coming that says this is exactly tight. OK, so you can get m over k with 2 to the k communication. And the mechanism is not that complicated either.

Speaker 1

01:19:26 - 01:19:45

So here's what I'm going to do. I'm going to break the items down into 4k chunks of size m over 4k. Ask each bidder just like before, their value for every set of chunks. So I can do that in communication 2 to the 4k. And then I'm just going to repeat this exponential in k times.

Speaker 1

01:19:46 - 01:20:30

And this will cost me another factor of 2 to the k, which is fine. Correct, yes. And the only thing now is that I can't choose the chunkings arbitrarily, right? Because if I repeat this and just say "do it arbitrarily," I'm probably not actually varying anything. So I do need to do this a little bit carefully, but it turns out not that carefully; you'll see where it comes up on the next slide. And then what I do is just, again, the 1 thing that's kind of nice about these maximal-in-range things is that you can just union the allocation classes together and it works very smoothly, right?

Speaker 1

01:20:30 - 01:20:52

If I can find the best thing here, and I can find the best thing here, then I can find the best thing over either of them. So all I'm doing is saying: I have 2 to the k different ways of making chunks. For each way of making chunks, what's the best allocation? Do this 2 to the k times. Whichever 1 is best, those are the chunks I choose, and I allocate optimally for those chunks.
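(A toy sketch of the mechanism just described, assuming value-oracle access; the real mechanism fixes the collection of chunkings in advance, and a random collection works with high probability, as the analysis below shows.)

```python
# Sketch: exactly maximize welfare over allocations that keep each of 4k
# chunks intact, repeated over exp(O(k)) chunkings; keep the overall best.
from itertools import product
import random

def chunk_mechanism(values, m, k, rounds):
    # values[i](S) -> bidder i's value for a frozenset S of items.
    n = len(values)
    best_welfare, best_alloc = 0.0, None
    for _ in range(rounds):                      # exp(O(k)) rounds suffice
        items = list(range(m))
        random.shuffle(items)                    # one random chunking
        chunks = [frozenset(items[c::4 * k]) for c in range(4 * k)]
        # every way to hand whole chunks to bidders (-1 = unallocated)
        for assign in product(range(-1, n), repeat=len(chunks)):
            bundles = [frozenset().union(*[chunks[c] for c, a in enumerate(assign) if a == i])
                       for i in range(n)]
            w = sum(values[i](bundles[i]) for i in range(n))
            if w > best_welfare:
                best_welfare, best_alloc = w, bundles
    return best_welfare, best_alloc
```

The brute-force inner loop stands in for the designer's exact optimization over the restricted range; in the communication model, each bidder only reports their 2 to the 4k chunk values.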

Speaker 1

01:20:53 - 01:21:14

And it turns out this gets rid of the root k factor. So let me actually go through the analysis. It'll give you a taste of some of the simpler proofs that show up in this area. So here's what I'm going to do.

Speaker 1

01:21:14 - 01:21:46

So in the optimal allocation, there are some bidders who get more than k items, and some bidders who get fewer than k items. So I'll break the contribution of OPT down into welfare that comes from large bidders and welfare that comes from small bidders. Just like the proof I quickly said out loud, there are not that many large bidders: there are at most m over k, because each of them consumes k items. And all of these allocation classes do consider giving the same bidder all items, right?

Speaker 1

01:21:46 - 01:21:59

Because I do ask each bidder: what's your value if you got all the chunks? That's an option. So therefore, whoever is the largest large bidder, I get at least their value. That gets me an m over k approximation to the contribution of large bidders.

Speaker 1

01:21:59 - 01:22:15

So that's the easy part. The small bidders are still not that bad. It fits on this slide, modulo 1 combinatorial claim. And what I want to claim is that we get the same guarantee for the small bidders.

Speaker 1

01:22:18 - 01:22:41

So how do I do that? So here's the main intuition, right? I have m items. So that means if I can somehow attribute a value to each item, and I can just correctly allocate the k highest value items, then I'm good. I'm just trying to get a k over m approximation.

Speaker 1

01:22:42 - 01:23:10

If I can allocate the k highest value things, well, that's all I need to do. If I also had to worry about large bidders, this is kind of nonsensical because what if every bidder wants k squared items? And nobody has any value if they just get k items. I can allocate k items the best I can, everyone still has value 0. The nice thing about the small bidders is that nobody wants more than k items.

Speaker 1

01:23:10 - 01:23:53

So let me try to say what is the right set of not-much-more-than-k items that I want to allocate correctly, and then describe how we allocate them correctly. Sort the bidders in decreasing order of this: their value over the number of items they like. Then I'm just going to create a set of bidders, adding from the top of this list, until I have value at least k times the small-bidder contribution over m. Now the observation is: as soon as I have more than k items in the union of the sets these bidders want, I definitely have at least a k over m approximation.

Speaker 1

01:23:55 - 01:24:36

So that would be true even with the large bidders; that would still be true. The nice thing is that because all these bidders are small, as soon as I exceed k items, I can't go beyond 2k. Yes, I do need to include each bidder's entire set, but their entire set overshoots by at most an additional k. Okay, so now I'm in good shape. I have a set of at most 2k items such that there exists an allocation of only these 2k items where, as long as I get these 2k items to the right place, I get k times the small contribution over m.

Speaker 1

01:24:38 - 01:25:33

Okay, so now, what this means is that for them all to go to the right place, a sufficient condition is that all 2k items wind up in a different chunk, right? Because the chunks can be allocated freely. Here's the only part of the proof that I'm going to skip: if you randomly put items into 4k chunks independently, then the probability that 2k given items all wind up in different chunks is exponentially small, but not worse. So what this means is that you can just repeat it exponentially many times, take a union bound with the probabilistic method, and then everything is covered. Okay, so here's just a repeat of this. I told you the result.
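(The skipped step, spelled out under my reading: throw each item independently into one of $4k$ chunks. A fixed set of $2k$ items lands in all-distinct chunks with probability

$$\prod_{i=0}^{2k-1} \frac{4k - i}{4k} \;\ge\; \left(\frac{2k}{4k}\right)^{2k} = 4^{-k},$$

exponentially small but no worse. So with $T = 4^{k} \cdot 2k \ln m$ independent chunkings, a fixed $2k$-set is separated by none of them with probability at most $(1 - 4^{-k})^{T} \le e^{-2k \ln m} = m^{-2k}$, and a union bound over the at most $m^{2k}$ such sets shows a collection of $T = 2^{O(k)} \log m$ chunkings exists that separates every one. That is the probabilistic-method existence used above.)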

Speaker 1

01:25:34 - 01:26:16

And so like I said, you do have to choose them carefully, but not that carefully, like existence is guaranteed by the probabilistic method. If you do it randomly, it's extremely likely to work out. I don't have an explicit construction in mind. And maybe 1 thing, I don't know actually how many people in the room or audience are, you know, like learning theory people, but there is a connection that previous papers have observed between this kind of analysis and notions in like VC dimension. So at the end of the day, the reason that the proof worked out, I eventually got it to say I need a set, every set of 2k items, because I don't know which ones are going to get me this guarantee.

Speaker 1

01:26:17 - 01:27:07

Every set of 2k items needs to be able to be allocated completely freely. That sounds very similar to shattering a set in classification. It's a little bit different, because you're not classifying something as 1 side or the other; you're classifying it as being with any of n bidders or unallocated. But the concept is very similar: some of the early papers studying maximal-in-range mechanisms have VC dimension in the title, they use the term shattered, and I think that's the right thing to do, that the concepts are actually similar. And so, for the state of the art, some of the authors contributing to this are learning theory people. So that's a positive result.

Speaker 1

01:27:08 - 01:27:56

And speaking of learning theory people contributing to the state of the art in negative results, this was the state-of-the-art hardness. It says: if you want a VCG-based mechanism, so you want to use exactly this approach on the board behind me with the VCG prices, then they showed that if you want an m to the 1 minus epsilon approximation, you need exponential communication. I went through the paper to make my best guess at, if I wanted it to be more quantitative, for an exact k, how much does it require? I think it's something like this. So it's not quite 2 to the k, but the improvement we made is just a smidge; it was already almost there.

Speaker 1

01:27:56 - 01:28:32

But I still want to say, I sort of feel like I can finally rest on this front, because the numbers are exactly the same now. So I am just saying the previous algorithm was almost there, and the previous lower bound was actually also almost there. And now they're just all the way there. So that's everything that I was going to say about this VCG-based approach. If there are any questions, either about context or this result in particular, I'm happy to sit here for as long as people want.

Speaker 1

01:28:32 - 01:28:35

And if not, I will go on to arbitrary mechanisms.

Speaker 2

01:28:38 - 01:28:40

I just think that there may be an explicit construction.

Speaker 1

01:28:41 - 01:28:42

Oh, that hits

Speaker 2

01:28:42 - 01:28:46

the mark. I have seen on bioshields. I'll think about it.

Speaker 4

01:28:46 - 01:28:47

OK, cool.

Speaker 1

01:28:50 - 01:29:16

OK, so now what about arbitrary truthful mechanisms? So I told everyone the definition of XOS earlier. I also told everyone about these results over here, but I'll just remind you again: this is the VCG-based mechanism that I said out loud, the one that's either bipartite matching or giving everything to 1 bidder. This is the best we know.

Speaker 1

01:29:16 - 01:30:03

This is based on posted price, and this is based on LP rounding. And the other thing I want to highlight that didn't make it to the board is if I just focus on 2 bidders and m items, I still think things are actually very interesting. So if you just plug in n equals 2 here, this is 3 4ths, this is tight, this is the best thing you can do based on LP rounding. The best truthful mechanism we know is trivial. So like either of these, for them to kick in, you need more than 2 players for them to be interesting.

Speaker 1

01:30:03 - 01:30:23

This 1 starts by saying, randomly sample half the bidders and do something with the other half. For 2 players, it's not doing anything worthwhile. This 1 is, again, the bipartite matching or everything-together mechanism; it's not doing anything interesting when there are only 2 players. We don't know anything non-trivial for 2 players, randomized or deterministic.

Speaker 1

01:30:24 - 01:30:58

The only thing we know how to do is just say, tell me your value for everything, and I will do a second price auction. And if you're okay with being randomized, we actually don't know anything better than just giving all the items to a random player, which doesn't even need to collect input. So even though it's only 2 players, I think this context makes it more interesting: there is a non-trivial protocol, we don't know any non-trivial truthful mechanisms, and we don't know whether there even exists a non-trivial truthful mechanism.

Speaker 2

01:31:01 - 01:31:07

So OK, so this is a- Is it true also for 2 items, not just 2 bidders?

Speaker 1

01:31:08 - 01:31:19

Oh, that's a good question. That's a really good question. Sorry, for 2 items, I can just ask you for all 4 numbers.

Speaker 2

01:31:19 - 01:31:22

Oh, I see. Sorry, yeah, yeah. It's a real, yeah.

Speaker 1

01:31:23 - 01:31:55

It may be, though, that if you insisted on really low communication, like 2 bits, maybe you could get something there. But yeah, I haven't thought about that. That is a good question, though. So OK, this is just repeating the motivation slide. And like I mentioned, the main challenge here is: how do we normally prove communication lower bounds?

Speaker 1

01:31:55 - 01:32:45

We have lots of tools for proving communication lower bounds, but they apply to all protocols, right? And we need a communication lower bound that only applies to truthful protocols, but would not rule out the existence of a normal protocol, because normal protocols exist. So the natural thing to try, which has been tried but just hasn't worked out, is to characterize all truthful auctions. There's a seminal paper by economists that works in settings more general than combinatorial auctions. If, for example, I were allowed to have valuations that depend on what set of items you get, not just what items I get, then the valuations are really rich.

Speaker 1

01:32:45 - 01:33:21

And if you want to be truthful even for these really, really rich valuations, there's not a lot you can do. So this was kind of the most successful paper by computer scientists trying to take an approach like this. And basically, the problem is that posted price mechanisms don't fit these characterizations. There's no characterization I'm aware of that includes posted price mechanisms and, for example, the VCG-based mechanisms, and is somewhat natural or tractable to work with. So I don't want to say it's a dead end.

Speaker 1

01:33:21 - 01:34:01

And on top of that, there are weirder truthful mechanisms beyond just posted price. So I don't know for sure that it's a dead end, but it hasn't been largely successful. What I want to present now, and it's also not my work, is this taxation complexity approach proposed by Shahar Dobzinski. This is what got me into this space, and I think it really can work. It's a really cool idea. So, as I mentioned earlier in this talk, there's this concept called the taxation principle.

Speaker 1

01:34:02 - 01:34:19

So I told you that you can take any truthful mechanism; in fact, you can do this for any protocol. You can say: fix the inputs besides bidder i's. What are all the things that might happen to bidder i? You can do that for any protocol, right? Fixing all of those induces a list of everything that might happen to you.

Speaker 1

01:34:20 - 01:34:41

So let's call that the menu. All the things that might happen to you: for every set S, maybe it's feasible for you to get it, maybe not. If it's not feasible, say the price is infinite. If it's feasible, let it be the price you would pay if you got it. If the mechanism is truthful, every bidder must get their utility-maximizing thing on the menu.

Speaker 1

01:34:41 - 01:35:10

Because otherwise, we have explicitly said: here is something that could happen, and you like it better than what would happen if you told the truth; therefore it's not truthful. So this is actually necessary, and it's an equivalent characterization of truthfulness. Just to be explicit, in the second price auction, the menu you face is: you might get nothing and pay nothing, or you might get the item and pay the highest bid besides yours. So this is what your menu would look like.

Speaker 1

01:35:13 - 01:35:30

Now, what Shahar proposes is: let's not look at the size of the menu. We also have a term for that; we call that the menu complexity. That is relevant for some of these lower bounds. He says, let's look at the number of distinct menus that get presented to you.

Speaker 1

01:35:31 - 01:35:37

Also, let's look at the log of that. Okay, so again, just as an example.

Speaker 2

01:35:38 - 01:35:41

A menu is the set of all possibilities.

Speaker 1

01:35:42 - 01:35:45

Correct. Fixing the other bidders.

Speaker 2

01:35:45 - 01:35:48

The number of this thing, of overall price.

Speaker 1

01:35:48 - 01:36:07

Exactly, exactly, yeah. Or let me give this as an example. Think of it like this: after fixing the other bidders' valuations, that induces a menu. So, for example, once I fix all the bids besides mine, this price becomes the maximum of those bids.

Speaker 1

01:36:08 - 01:36:17

If the bids are integers between 0 and n, there are n plus 1 different possibilities for this. So there are n plus 1 different menus that I might see.

Speaker 2

01:36:17 - 01:36:22

That's the number of players anyway; the number of menus is not more than the number of players.

Speaker 1

01:36:22 - 01:36:59

No, it could be more. Okay, so let me go through this 1 then. Now let's think of VCG. The menu for VCG is: for every set, what's the price you would pay? If you trust me that I correctly rewrote the formula, this is the price that player 2 has to pay to get set S in VCG. This is uniquely identified by v1, right? So the menu is the entire list, for every set S, of: if you were to get S, how much would you pay?
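(My reconstruction of that formula from the standard VCG payment with 2 bidders: player 2's price for a set $S$ is the welfare player 1 loses when $S$ is taken away,

$$p_2(S) \;=\; \max_{T \subseteq [m]} v_1(T) \;-\; v_1([m] \setminus S) \;=\; v_1([m]) - v_1([m] \setminus S),$$

using monotonicity for the last step. Taking $S = [m]$ recovers $v_1([m])$ itself, since $v_1(\emptyset) = 0$, so the menu $(p_2(S))_S$ determines all of $v_1$; distinct $v_1$'s really do induce distinct menus.)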

Speaker 2

01:36:59 - 01:37:02

That's the size of the menu, not the number of menus.

Speaker 1

01:37:03 - 01:37:15

Okay, so now let's think of what would happen if I have v1 and v1 prime. Say that they differ on set S; then p2 of S is different.

Speaker 2

01:37:15 - 01:37:16

Oh, now it depends on

Speaker 1

01:37:16 - 01:37:19

the- Exactly, exactly, exactly, Yes.

Speaker 2

01:37:20 - 01:37:23

You look at the number of distinct menus of all possible valuations.

Speaker 1

01:37:23 - 01:37:59

Exactly, exactly. So I don't know if there's a connection to something more classical in computation. But the idea is that you want to say: fixing all the other inputs, how many different ways can the outcome be sensitive to my input, if that makes sense. So think of the other players as inducing a mapping from your value to what you get. And I want to know how many mappings there are.

Speaker 1

01:38:02 - 01:38:54

So let me pause here for a second and ask if these examples make sense. The idea here is that I am counting the number of different, let me use the language of mappings, maybe that's better. The other bidders' v minus i induces a mapping from my vi to the output. That mapping can be large, whatever, but what I want to know is how many different mappings there are. Every v minus i induces a mapping, and it could be that lots of v minus i induce the same mapping. For example, in the second price auction, the mapping is completely determined by the maximum bid in v minus i, so if the maximum bid is the same, they induce the same mapping. In VCG with 2 players, if the other player's value changes at all, it changes the menu. It changes the mapping.
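(A toy illustration of counting distinct mappings, not from the slides: in a single-item second-price auction with integer bids in 0 through B, bidder 1's menu depends only on the max competing bid.)

```python
# Count distinct menus faced by bidder 1 in a second-price auction
# (taxation complexity is the log of this count): B+1 menus, despite
# (B+1)^(n-1) profiles of other bids.
from itertools import product

def menu_for_bidder_1(other_bids):
    p = max(other_bids)      # price of the item = highest competing bid
    return (0, p)            # (price of the empty set, price of the item)

B, n = 5, 4
profiles = product(range(B + 1), repeat=n - 1)
menus = {menu_for_bidder_1(bids) for bids in profiles}
print(len(menus))            # 6 distinct menus, out of 6**3 = 216 profiles
```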

Speaker 2

01:38:55 - 01:39:18

This appears in classical complexity, in a very basic lower bound about, for example, formula sizes. For a function, you count its sub-functions, where sub-function is exactly the way you define it, and that actually lower-bounds the size: you fix some of the bits and you see how many functions you get

Speaker 1

01:39:18 - 01:39:19

Yeah yeah yeah.

Speaker 2

01:39:19 - 01:39:43

Of the remaining bits, then. You get a lower bound based on this and on the partition. It's really like the element distinctness problem: you have n numbers, let's say of size log n each, right? You want to know whether they are all different or not.

Speaker 2

01:39:46 - 01:39:58

It's very easy to see that you have many different sub-functions, because the function depends on each number, on whether it appears or not.

Speaker 1

01:39:58 - 01:40:00

That makes sense.

Speaker 2

01:40:01 - 01:40:14

And then you get something like a quadratic formula-size lower bound. I believe that's the earliest lower bound known for formula size; that's how they were proven. Okay.

Speaker 1

01:40:14 - 01:40:55

So it's possible that it would be super useful for me to learn about that. Basically, what I'm eventually going to say is: suppose you can prove lower bounds on the taxation complexity that are stronger than the communication complexity lower bound. Not just using it as a means to prove a communication complexity lower bound, but examples where this kind of measure is larger than the communication complexity. This is exactly, let me just get to the next slide, but basically, I did not know of prior work studying this, and I think it would be really good for me to understand

Speaker 2

01:40:56 - 01:41:00

that prior work. Yeah, we're really

Speaker 1

01:41:00 - 01:41:20

starting something bigger here. So okay, let me tell you now why this measure is cool. First, here's why it's not necessarily cool. Here are 2 facts. Fact 1 is that there are doubly exponentially many gross substitutes valuations.

Speaker 1

01:41:20 - 01:41:53

I told you gross substitutes is a mouthful. I didn't want to formally define it, but related to the fact that there are doubly exponentially many matroids on m elements, there's at least that many gross substitutes valuations. Here's another fact, is that you can execute VCG in poly time for gross substitutes. So what does this mean together? That VCG, I just told you, VCG has taxation complexity equal to the number of possible valuations, log of that.

Speaker 1

01:41:53 - 01:42:27

So VCG has exponential taxation complexity, even restricted to gross substitutes, but it has poly communication. So this is why it's not obviously useful. However, what Shahar showed is that if you have something that is truthful for rich enough classes of valuations, then basically you can't avoid learning what menu you're presenting. And so in particular, you at least have enough communication to learn the menu that you're presenting. And so this is the cleanest 1.

Speaker 1

01:42:28 - 01:42:58

And so what he showed is that if it's truthful for all monotone valuations, this is a lower bound on the communication. So what that means then related to what I was mentioning to you is that if we knew how to prove lower bounds on this, then we would get lower bounds that apply only to truthful mechanisms, but not necessarily arbitrary protocols. And he showed the same thing. They're polynomially related for XOS valuations. And he showed 1 other really cool thing.

Speaker 1

01:42:59 - 01:43:50

I realize every time I show this slide that this is not worded precisely the way I want, but you'll know what I mean, and I keep forgetting to change it. The other thing he showed is that if I give you a truthful mechanism for 2 bidders, then not only is its taxation complexity less than its communication, but the existence of that mechanism implies a simultaneous protocol whose communication is at most the taxation complexity. Okay, so let me just give you, not enough intuition to see how the proof works, but enough to help remember what this is saying. So suppose I tell you that there are at most T menus for this mechanism.

Speaker 1

01:43:50 - 01:44:06

How would I implement it? I would say, actually, I, player 1, I'm the 1 who fully determines the menu for player 2. So let me just use log T bits to state it. And then you, player 2, you fully know my menu. So you just use log t bits to state it.

Speaker 1

01:44:06 - 01:44:14

So that's 1 round. In round 2, I know my value. I'm looking at the menu.

Speaker 1

01:44:14 - 01:44:27

I will just say what set I want. You know your value, you're looking at the menu, you will say what set you want. So this gives you a 2-round protocol that just needs log of the number of menus, the taxation complexity, bits.

Speaker 1

01:44:27 - 01:44:50

And it turns out you can do something a little bit clever to say: actually, if you just look at the 2 menus, that's enough. You can't run the mechanism, but you can find an allocation that is at least as good as what the mechanism would provide. So the clever part is getting rid of the second round. I'm not giving you a proof of that, and I'm not claiming you should see how that goes; this is just intuition for why this happens.

Speaker 1

01:44:50 - 01:45:14

And why is it special to 2 players? Because with 2 players, I know your menu. As soon as you have 3 players, I still have to communicate with someone to figure out your menu. So this is really a two-player thing. And just to confirm, simultaneous really means both people speak once, and then an onlooker who doesn't know anything about the values has to decide the allocation from just those messages.
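(A sketch of that two-round protocol in code, my toy rendering: a menu here is a dict from candidate sets, frozensets, to prices, and menu_for_1 / menu_for_2 are the mechanism-induced maps, each computable by the player who determines it.)

```python
# Two bidders, two rounds: first each player announces the menu they induce
# for the other (log T bits if there are at most T distinct menus), then
# each names their utility-maximizing entry. The clever part of the actual
# result removes round 2: the onlooker decides from the two menus alone.
def two_round(v1, v2, menu_for_2, menu_for_1):
    m2 = menu_for_2(v1)      # round 1: player 1 determines player 2's menu
    m1 = menu_for_1(v2)      #          player 2 determines player 1's menu
    s1 = max(m1, key=lambda S: v1(S) - m1[S])   # round 2: pick from menu
    s2 = max(m2, key=lambda S: v2(S) - m2[S])
    return s1, s2            # in a truthful mechanism these are consistent
```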

Speaker 3

01:45:14 - 01:45:30

So I have a general question. Whenever you have this situation where 1 player fully determines things for the other, because they only have their own information, the natural thing is to go to the communication complexity model where you have number-on-forehead.

Speaker 1

01:45:30 - 01:45:37

Great. Yeah. Yeah. So that's fantastic. So I would say basically we thought about that.

Speaker 1

01:45:38 - 01:46:31

The challenging part is that if I know n minus 1 of the values, then if I just phrase it as protocols in the number-on-forehead model, what I would do is: I know n minus 1 of the values, so let me just find the optimal allocation for those n minus 1 values, share it, and also share how good it is. And then the onlooker will just pick the best of these allocations. That's actually going to give me a very good approximation guarantee. So we did think a little bit about what some kind of more restricted number-on-forehead model would be, but protocols in the plain number-on-forehead model are actually very good here. So we're not going to get good lower bounds by going there.

Speaker 1

01:46:31 - 01:46:33

But that would, but just to be clear.

Speaker 2

01:46:33 - 01:46:36

But there are lower bounds for this model.

Speaker 1

01:46:36 - 01:47:03

I know, I'm sorry; what I was saying was that for this specific problem, number-on-forehead algorithms are actually quite good. So that's not going to work here, but maybe there's a restricted number-on-forehead model. And just to be clear about your proposal: if there were better lower bounds on number-on-forehead protocols, those would imply the lower bounds we want, for the same reason. Yeah. Did that make sense, though, as an answer?

Speaker 1

01:47:03 - 01:47:04

Yeah, great.

Speaker 2

01:47:06 - 01:47:06

OK,

Speaker 1

01:47:08 - 01:47:42

so just to repeat where we are. Here's the new approach, and I would say, especially if you're a communication complexity person, I hope that this at least sounds like something you could tackle just knowing communication complexity and the definition of combinatorial auctions. You just need to separate, for 2 XOS bidders: I know that 3 fourths is what you can get with an interactive poly-communication protocol; I just need to show that you can't hit 3 fourths with a simultaneous poly-communication protocol.

Speaker 1

01:47:43 - 01:48:26

And if you can do that, then you get a separation. I'm almost out of time, so I can share this: when I first saw Shahar give this talk, in like 2015, I was a postdoc with Mark Braverman at the time, and I was naive. I thought, oh, if you're good at communication complexity, you can do everything in communication complexity. And I thought I was just going to come back to Princeton and we were going to solve it. It turned out to be super hard, and we only did it after like 5 years. But there is interesting stuff there, I think even for just communication complexity people, maybe not the kind of advanced communication complexity, but the "take a new combinatorial problem and prove lower bounds for it" kind.

Speaker 1

01:48:27 - 01:49:07

This is just a new instance of that that turned out to be pretty challenging and fun. OK. So let me, since I only have 10 minutes, I think I am going to skip any of the technical stuff here. I will just say 1 nugget is that normally, if I told you something like, oh, what I really want to do, I really want to prove a lower bound for a search problem, you would say like, oh, maybe you have more flexibility than proving lower bound for a decision problem. But why don't you try the decision problem first and see if that helps?

Speaker 1

01:49:07 - 01:49:30

And like almost all the time, that's the right advice to give someone. Unless specifically they're interested in proving lower bounds for simultaneous protocols. Because the trivial reduction I think we all have in mind is you say, well, if you can solve search, then here's what you do. You solve search, you know the answer. Take 1 extra round and evaluate it.

Speaker 1

01:49:30 - 01:49:59

Now you solve decision. And what's interesting is just that this isn't a formal separation, but it's not obvious how to do that simultaneously, right? Because when you share your simultaneous message, you actually don't know what the outcome is going to be. So you can't like pre-evaluate it. And the other thing that turned out to be really interesting for this problem is that we first did get a lower bound on the decision problem and the search problem was easy for our hard construction.

Speaker 1

01:50:00 - 01:50:24

And so then the reason it took like another 2 years was to figure out how to embed a hard search problem in there too. So that's maybe like 1 interesting nugget. The other thing I want to share is like, so what made this interesting? So I learned, hopefully I get the buzzwords right. So I learned something interesting about communication complexity stuff.

Speaker 1

01:50:24 - 01:50:44

So this specific problem, the decision version, is not total. It's a promise problem: I only care about certain inputs, I only care about deciding here. So that means some black-box techniques don't work. The other thing that's really interesting:

Speaker 1

01:50:45 - 01:51:05

There is a simultaneous 3 fourths approximate non-deterministic protocol. That's maybe easy to see: someone just tells you, here's the allocation, see if it's bigger than this. The other thing that's interesting: there is also a 3 fourths approximate co-non-deterministic protocol. So why is that?

Speaker 1

01:51:06 - 01:51:47

There is an LP relaxation with an integrality gap of 3 fourths. And it turns out that for the dual of that, you can verify simultaneously that it's a valid dual. So someone can also give you a hint: here's a dual that witnesses that the fractional optimum is less than whatever, and you can check that that's correct. So there is a non-deterministic 3 fourths approximation and, sorry, I misspoke before, a co-non-deterministic 3 fourths approximation, and there is not a randomized 3 fourths approximation with high probability. Hopefully I got the terms right.

Speaker 1

01:51:47 - 01:51:55

Something about that is interesting, right? Maybe gives at least some hint of like why maybe something interesting must be going on.

Speaker 2

01:51:55 - 01:51:57

So this is a result of yours and Mark's?

Speaker 1

01:51:58 - 01:52:24

Yes. So for the decision problem, this is Mark, and this is, I don't know if anyone knew Jieming Mao; he was a student here with Mark, and he's now at Google. This was studying the decision problem. And we were so happy when we figured this out, and then realized that Dobzinski's framework applies to the search problem. We looked at our construction, and it happened to be easy for the search problem.

Speaker 1

01:52:24 - 01:53:28

And then, almost at the last minute, we realized this was going to be a really hard paper to pitch: there was a lot of cool stuff in there, but ultimately the whole motivation for studying it was this framework, and then it doesn't actually say anything in the framework. So we tried really hard for a bit and couldn't extend it. Then, eventually, working with Rishi, who was an undergrad at Princeton, Sepehr, who was a postdoc at Princeton, and Raghuvansh, who was a PhD student at Princeton working mostly with Gillat Kol on communication complexity, there turned out to be a way; it looks more like you have a base construction and you need to do the right amount of extra work to get the thing you want to happen. And it did work out at the end. So this is, I would say, main result number 2 of the talk, which says: there is now some separation between what you can do with poly communication with a truthful mechanism and with an arbitrary protocol.

Speaker 1

01:53:28 - 01:54:12

It's not a large separation, but at least now we know there is a separation. And again, my ambitious hope is not a separation of 1 over 240, but a separation of like root m or something like that. So it's a very long way from that, but we now know how to leverage taxation complexity a little bit, and we have some idea of what hard instances can look like. With 5 minutes left, I think I'm not going to show you what the hard instance is, but it fits in a colorful picture. And what I would like to do in the last couple of minutes, since you guys were interested in the definition, is pose to you how I would propose getting around this.

Speaker 1

01:54:13 - 01:54:40

So this is a full reset. We're now going back to this weird sounding instance with computation incentives where I said I really just want the buyer to pick the best set of size k that they can find in poly time. I really wanted to call this truthful and I really wanted to call this poly time and I really wanted to say it got a 1 minus 1 over e approximation. So I was thinking through, how would I resolve this? So I think this is the right mechanism.

Speaker 1

01:54:41 - 01:55:04

I think the right mechanism is to tell the buyer: take any set of size k you want. And what do I think the buyer would really do? I think they would get a 1 minus 1 over e approximation. Because, at minimum, whatever they were thinking of doing, they would just afterwards run the approximation algorithm, see how it did, and take the better. Or maybe they'd run the approximation algorithm first and try to improve on it.

Speaker 1

01:55:04 - 01:55:28

I don't know what they would do but I feel like it would be silly to not get the 1 minus 1 over e approximation. Okay so I was trying to say so I think it should be polytime, it should be truthful and it should be getting a good approximation. So I feel like what's the issue? The issue is that I think we have the wrong definition of truthful. So I want to propose a different 1.

Speaker 1

01:55:29 - 01:55:58

So here it is. All right, I'll leave this on the slide in case your preferred method of learning is reading formal definitions. But let me say intuitively what I want to do. Every player, poly time or not, has in mind a strategy that they're going to use. What I'm going to do is give them code. I will hand them the code.

Speaker 1

01:55:58 - 01:56:21

That code will run in poly time. That code will take as input their strategy and output a new strategy. Now, what I want to claim is that what the bidder should do is instead of using their strategy, they should run my code and use whatever it outputs. Now, how would I possibly be able to claim that? What I would say is this.

Speaker 1

01:56:23 - 01:56:51

So for no matter what value they have, no matter what strategy they were thinking of using, maybe I give them the same strategy back anyway. In which case, they're like, fine, you didn't help, but whatever. Or I give them back something that dominates what they were going to use. So dominates is this very strong thing that says no matter what everyone else is doing, you are going to be better off. And there is some possibility that you're strictly better.

Speaker 1

01:56:52 - 01:57:16

So I would argue, if I gave you computationally efficient code that you could input whatever you were thinking of doing, and I would give you something that is either the same or strictly better, you would just use it. And I think that that's reasonable to say that a rational person would use it. Now, the other thing is I want to avoid getting stuck in this situation where you say, well, what if I run the advice again? I run the advice again. I run the advice again.

Speaker 1

01:57:16 - 01:57:35

Then it's no longer poly time. But the advice also has the property that if you run it twice, it doesn't actually help. So what I want to claim is that it is reasonable to say: you have whatever thought process goes on in your brain, and then you ask me for advice. My advice runs in poly time. And then you just use whatever I propose.

Speaker 1

01:57:35 - 01:57:59

I want to say that that is a reasonable behavior to predict. And I think that should be fairly robust. And so just to confirm, it could be that you have a dominant strategy, and I don't advise you to use it, that's fine because if you propose your dominant strategy, then I certainly would recommend it back to you. But if you propose something else, I might not. I'm just going to propose something better.

Speaker 1

01:58:01 - 01:58:34

So this definition, this "implementation in advised strategies," I want to say is actually still very permissive. But it says that a mechanism should guarantee some approximation whenever players are using an advised strategy, that is, something that can be output by my algorithm. Just to have an example in mind with this 1 minus 1 over e approximation: your strategy is to pick a set. I'm just going to run my 1 minus 1 over e approximation and output the better of the approximation's set and the set you input.

Speaker 1

01:58:34 - 01:58:41

And what I'm saying is I think it is just reasonable to say that you should follow that strategy.
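(A minimal sketch of that advice for the size-k public projects example, assuming value-oracle access to v and k at most the number of items: the classic greedy gives the 1 minus 1 over e guarantee for monotone submodular valuations, and the advice is idempotent, so re-running it on its own output changes nothing.)

```python
# Advice: map the bidder's intended set to one that dominates it.
def greedy(v, items, k):
    S = set()
    for _ in range(k):                 # classic (1 - 1/e) greedy
        x = max((j for j in items if j not in S),
                key=lambda j: v(S | {j}) - v(S))
        S.add(x)
    return S

def advice(v, items, k, proposed):
    g = greedy(v, items, k)
    better = set(proposed) if v(set(proposed)) >= v(g) else g
    return better                      # advice(advice(x)) == advice(x)
```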

Speaker 3

01:58:45 - 01:58:51

So yeah, a technical question: the algorithm is not allowed to look at the valuations of the other players? It's just not?

Speaker 1

01:58:51 - 01:58:59

That's correct. Yes. So OK, let me just maybe zoom ahead. So you do have to be careful with that definition.

Speaker 1

01:59:00 - 01:59:27

And maybe 1 thing I think that's nice about it is that all of the computational assumptions are with the designer. There is your mechanism should run in poly time, and the code for the advice you give should run in poly time. The bidders can be exponential time or exponential space, whatever, right? There's no assumptions on them. All we're saying is that you should use something that is not improved by my advice.

Speaker 1

01:59:29 - 01:59:47

And we found that actually this does turn out to be equivalent. There's like a couple of papers. This is the first 1. And then the same authors have some follow-up that they phrased it differently. And that they like focused on this improvement process rather than the advising process.

Speaker 1

01:59:47 - 02:00:37

And so they also found a use of this, but it doesn't seem to have been used in the last 15 years or so. In particular, this is the first time for combinatorial auctions in this posted-price setting. Basically, we're revisiting this concept that they proposed and saying: actually, you can get more out of it than they got before. And the key result is just that you take this posted price mechanism and roughly run it as is. The point is that posted price mechanisms really should be truthful even if people can't find their optimal set, because we can recommend pretty good sets for them to take, and they should get something that is at least as good for them as this pretty good set.

Speaker 1

02:00:37 - 02:00:57

And as long as they do something that is at least as good as this pretty good set, then you get the same approximation guarantee. So that was the point of that paper. So very quickly, here's just a reminder of the things that I showed you. And then I will pause here with a sample of open problems on the slide. So thank you very much for having me.

Speaker 1

02:00:57 - 02:00:57

All right. All right.