See all Flashbots transcripts on Youtube

CredibleCommitments.WTF | David C. Parkes - Illustrations on Commitment in Decentralized Systems

1 hour 3 minutes 26 seconds

Speaker 1

00:00:16 - 00:02:02

I'm going to use a microphone. Can we get the screen? Yeah, sure.

Speaker 2

00:02:04 - 00:02:16

This was supposed to be animated, but it's not. But that's OK. So first of all, I want to talk about credible mechanisms. This was an idea that was introduced by Li and Akbarpour,

Speaker 1

00:02:17 - 00:02:17

2020.

Speaker 2

00:02:19 - 00:02:56

They wanted to ask, at a high level: what happens if the entity that is operationalizing a mechanism is able to deviate from the intended rules of the mechanism? A canonical example to have in mind, which you're all familiar with in this audience, is a second-price auction, which would not be credible because there are deviations available that are not visible to the participants, and they formalize this. So I want to spend a minute on this. They said that a rule describes an intended behavior.

Speaker 2

00:02:58 - 00:03:30

For example, the intended behavior might be: take all bids and run a first-price auction. They describe the concept of a safe deviation. This is a deviation from the rule that generates an output for which there exists an input consistent with the rule, given each participant's knowledge of the input. So, you know, you may know your bid, you may not know the other bids, and then you observe the outcome. Is the outcome consistent with some possible completion of the input?

Speaker 2

00:03:31 - 00:04:09

If it is, then this is a safe deviation. The entity operationalizing the mechanism may have deviated from the intended behavior, but in a way that is not observable to any party based on what they know. And then they say that a rule is credible if there is no profitable safe deviation. So they don't require there to be no safe deviations, but they require there to be no useful safe deviations. So I think it's useful to imagine that deviations that are not safe are sanctioned in some way.
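The existential quantifier in this definition can be made concrete with a toy check. This is my own construction, not from the talk: a bidder in a first-price auction knows only their own bid and their own outcome (whether they won and what they paid), and we brute-force over a grid of possible opposing bids as a stand-in for "there exists an input consistent with the rule."

```python
# Hypothetical sketch of the "safe deviation" test from one bidder's view.
# An observed outcome is explainable (and a deviation producing it is safe
# for this observer) if SOME profile of unknown opposing bids, fed through
# the stated rule, reproduces exactly that outcome.

def first_price_outcome(bids, me):
    """Intended rule: the highest bid wins and pays its own bid.
    Returns (did `me` win, what `me` paid)."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return (winner == me, bids[winner] if winner == me else 0.0)

def outcome_is_explainable(my_bid, observed, others_grid):
    """Brute-force search over a grid of opposing bids, standing in for
    the existential quantifier in the definition (two bidders here)."""
    return any(
        first_price_outcome([my_bid, other], me=0) == observed
        for other in others_grid
    )

grid = [i / 100 for i in range(101)]
# Losing at price 0 is explainable: some higher opposing bid accounts for it.
assert outcome_is_explainable(0.4, (False, 0.0), grid)
# "You won but paid something other than your bid" is NOT explainable
# under a first-price rule, so such a deviation would be caught.
assert not outcome_is_explainable(0.4, (True, 0.5), grid)
```

The same skeleton, with a richer outcome space, is what leaves the first-price rule so little room for hidden deviations: the winner's payment is pinned to their own bid.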

Speaker 2

00:04:10 - 00:04:49

You're staking your reputation as a mechanism entity on following the rule, at least up until the point where your deviations can be caught. There are other things you can do that nobody will know about; you're not making any guarantees in that regard. And we still want to be able to say useful things about the mechanism. And what they proved in this paper, they proved a trilemma, which I'm not going to step through in detail, but they proved, for example, that revenue-optimal first-price auctions are credible, but second-price auctions are not credible.
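Why the second-price auction is not credible can be sketched in a few lines. This is my own toy example, not the paper's proof: the auctioneer inserts a shill bid just below the winning bid, and the winner's view stays consistent with an honest run, so the deviation is both safe and profitable.

```python
import random

# Toy model of the auctioneer's profitable safe deviation in a second-price
# auction: insert a fake "shill" bid just under the winner's bid. The winner
# observes only "I won and paid p", which is consistent with some honest
# second-highest bid, so no participant can detect the deviation.

EPS = 0.01

def second_price_revenue(bids, shill=False):
    ranked = sorted(bids, reverse=True)
    winner_bid, runner_up = ranked[0], ranked[1]
    if shill:
        # A self-interested auctioneer inflates the price toward the
        # winning bid, but never below the honest second-highest bid.
        runner_up = max(runner_up, winner_bid - EPS)
    return runner_up                    # winner pays the second-highest bid

def first_price_revenue(bids):
    return max(bids)                    # winner pays own bid: nothing to inflate

random.seed(0)
bids = [round(random.uniform(0, 1), 2) for _ in range(4)]
honest = second_price_revenue(bids)
deviant = second_price_revenue(bids, shill=True)
assert deviant >= honest                # shilling never hurts the auctioneer
```

In the first-price auction the winner's payment is their own bid, so there is no analogous room: any other price is immediately inconsistent with the rule.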

Speaker 2

00:04:50 - 00:05:17

And when we think about commitment, I'm going to try to tie things into commitment. I think we could say that credibility gives a self-interested mechanism commitment power. If my mechanism rules are credible, then I can commit in a credible way to follow those rules. It is incentive compatible for me to follow them and not use a safe deviation. Okay.

Speaker 2

00:05:17 - 00:05:49

So you can now situate this notion of credibility in the DeFi space. In the DeFi space, let's think about transaction ordering into an AMM. A rule is an intended behavior. It might be: take all transactions that you have seen and follow a particular sequencing method to order those transactions onto an AMM. That might be the intended behavior.

Speaker 2

00:05:49 - 00:06:27

And then a safe deviation is one that, again, generates an output for which there is an input that is consistent with the rule, given knowledge of the input. Oftentimes when we think about blockchain, we will think about the input as being observable, that is, the input that is used as being observable. So when I think about DeFi, for example, concretely, we can all observe the transactions that were pushed through Uniswap in that block. So in that sense, we all have information about the input.

Speaker 2

00:06:27 - 00:07:06

And what does it mean to be credible? A safe deviation is any deviation where, if there's some slack in the rules, you are still following the requirements of the rule, or a deviation where you perturb the input in some way. You're allowed to perturb the input; it's just that you have to follow the rule given what people see about the input.

Speaker 2

00:07:06 - 00:07:48

And here the input is all of the transactions that are written into the block. I'm going to mention in a second the result we have, and it's just a slight tweak on the credibility definition. So the definition I gave you earlier is there is no profitable safe deviation. The definition I'm now going to use is that any profitable safe deviation from the rule still achieves design goals that we care about. So there may be deviations, and I'm okay with that as long as any deviations that the mechanism chooses to follow still lead to properties that are good for me.

Speaker 2

00:07:48 - 00:08:21

For example, a property I might care about in the DeFi context is I'm okay with MEV, but only if the execution price on the transaction that is MEVed is good in a way that we could formalize. And again, think about other deviations being sanctioned. So this is credibility where I say, I'm going to follow this particular sequencing rule. If you ever see, based on what you see about the input that I'm using, that I didn't follow the rule, you can sanction me. Everybody will know that I deviated.

Speaker 2

00:08:22 - 00:09:12

And now going forward, people will choose not to submit transactions to my mechanism. So here, credibility gives a self-interested party commitment power to achieve desiderata, where the desiderata might be what we could think of informally as tolerable MEV. It is credible for me to achieve these, even if I might choose to use some safe deviations. So we have a forthcoming paper that I'm not going to get into in any detail other than to advertise it here (you can ask me about it later if we want to go there), which is coming up at STOC, where we give a verifiable sequencing rule that ensures this kind of tolerable-MEV property.

Speaker 2

00:09:13 - 00:09:48

So it ensures that for any executed transaction A, either the price of A is as good as if A were the only order to execute, or no MEV is gained by executing A. And you could think about, for example, builders and relays choosing to adopt that rule. And then they are committing to be observed and have staked their reputation on not deviating. So they're committing. And the question is whether those rules have nice properties.
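The first branch of that guarantee can be illustrated on a toy constant-product AMM. This is my own sketch, not the paper's rule or its verification procedure: compare a trade's execution price at its actual position in the block with the price it would have gotten alone against the block's initial state.

```python
# Sketch: did a swap execute at a price at least as good as if it had been
# the only trade in the block? Uses a constant-product x*y = k pool, selling
# dx of X for Y; all names and numbers here are illustrative.

def swap_x_for_y(x_reserve, y_reserve, dx):
    """Execute a swap against the pool; returns (dy out, new reserves)."""
    k = x_reserve * y_reserve
    new_x = x_reserve + dx
    new_y = k / new_x
    return y_reserve - new_y, (new_x, new_y)

def price_alone_vs_in_sequence(initial, sequence, index):
    """Per-unit output for sequence[index]: alone vs. at its position."""
    dx = sequence[index]
    dy_alone, _ = swap_x_for_y(*initial, dx)
    state = initial
    for tx in sequence[:index]:          # replay the earlier trades
        _, state = swap_x_for_y(*state, tx)
    dy_seq, _ = swap_x_for_y(*state, dx)
    return dy_alone / dx, dy_seq / dx

alone, in_seq = price_alone_vs_in_sequence((1000.0, 1000.0), [50.0, 10.0], 1)
# The earlier same-direction trade moved the price against the later one,
# so this ordering would fail the first branch of the guarantee for tx 1:
assert in_seq < alone
```

A verifiable sequencing rule would constrain orderings (or demand the second branch, that no MEV is gained) so that a check like this passes for every executed transaction.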

Speaker 2

00:09:50 - 00:10:06

That's the first thing that I wanted to talk about: the role of credibility. For the second thing, I want to jump back in time and talk about this idea of faithful implementations. The backdrop to this was that algorithmic mechanism design started in around

Speaker 1

00:10:06 - 00:10:08

1999, 2000.

Speaker 2

00:10:09 - 00:10:49

There was a flurry of activity around polynomial-time mechanisms. And then some people, particularly Joan Feigenbaum and Scott Shenker, started thinking about what would happen if we had distributed algorithms implementing the mechanisms. And so now the question wasn't so much: is there a polynomial-time centralized algorithm that can operationalize the rules of the mechanism? But: is there a decentralized algorithm with nice communication complexity and computational complexity that can run the algorithm? And they wrote some very nice papers.

Speaker 2

00:10:50 - 00:10:55

I would encourage you to read them. The question they didn't ask is

Speaker 3

00:10:55 - 00:10:55

whether it

Speaker 2

00:10:55 - 00:11:49

was incentive aligned to follow the distributed algorithm. So they said: here's a distributed algorithm where, if it is followed, it will effectively compute the output of the mechanism. They wanted mechanisms because mechanisms were providing incentive alignment around inputs, but they were not insisting that parties wouldn't want to deviate from the algorithm itself. And yet, at the same time, the motivating stories were often things like BGP, the Border Gateway Protocol for autonomous systems on the internet, which involves self-interested parties executing decentralized algorithms. And there's no reason to really think they would choose to follow the algorithm if they could benefit by deviating from it.

Speaker 2

00:11:49 - 00:12:06

And so what we did in this work was add this additional criterion of incentive alignment with choosing to follow the algorithm. And that's what I mean by a faithful implementation. So there was a PODC paper and there were two AAMAS papers. It's nice that we're adjacent to AAMAS today.

Speaker 2

00:12:07 - 00:12:45

I called this specification faithfulness. So this was work with Jeff Shneidman, and I want to contrast it to credibility. In credibility, there is a privileged party, say a mechanism, that receives inputs, can tamper with them in various ways, and then selects an output. When we were thinking about faithfulness, we didn't have any kind of center, any privileged party. We wanted a fully distributed system.

Speaker 2

00:12:45 - 00:13:10

So I would often draw these types of pictures. Here's a center, and here it's like: no, you take the rules of the mechanism and you give them to all of the participants in the system, and they're all doing some part of the distributed computation. And we wanted all of that communication and computation to be in equilibrium. So it's a weaker form of commitment, reflecting that multi-party aspect.

Speaker 2

00:13:11 - 00:13:44

It says it's incentive aligned for me to follow my part of the mechanism rules, given that others do the same. It's an equilibrium notion. All right. And what we did is we asked the question: can following a protocol be made an ex post Nash equilibrium? Roughly, this says: as long as others follow the protocol, then whatever the inputs, whatever the private inputs, you also want to follow the protocol.
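On a finite grid of types, the ex post condition is a universally quantified check that can be verified by brute force. Here is a sketch of mine, using a one-shot sealed-bid auction between two agents as a stand-in for a distributed protocol:

```python
from itertools import product

# Brute-force ex post Nash check: the prescribed strategy (here, truthful
# reporting) must be a best response at EVERY type profile, given that the
# other agent follows it. No prior over types is involved.

def second_price_utility(i, values, reports):
    # Highest report wins, ties broken toward the lower index.
    winner = max(range(len(reports)), key=lambda j: (reports[j], -j))
    if i != winner:
        return 0.0
    rival = max(r for j, r in enumerate(reports) if j != winner)
    return values[i] - rival

def is_ex_post_nash(types, actions, protocol, utility):
    for values in product(types, repeat=2):      # EVERY type profile
        honest = [protocol(v) for v in values]
        for i in range(2):                       # every unilateral deviator
            base = utility(i, values, honest)
            for a in actions:                    # every alternative action
                deviated = list(honest)
                deviated[i] = a
                if utility(i, values, deviated) > base + 1e-9:
                    return False                 # profitable deviation found
    return True

grid = [0, 1, 2, 3]
# Truthful reporting into a second-price rule survives the ex post test:
assert is_ex_post_nash(grid, grid, lambda v: v, second_price_utility)
```

In the faithfulness setting the strategy space is richer (revelation, computation, message passing), but the quantifier structure of the check is the same.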

Speaker 2

00:13:44 - 00:14:16

That's the ex post aspect of it. And if it's true, then we say that the specification is faithful. We formalized it by breaking up the strategy of an agent, which now represents the computation that they're doing. We thought about that in terms of a revelation component, a computation component, and a message-passing component. Revelation is computation you're doing that is touching your private input.

Speaker 2

00:14:16 - 00:15:16

Computation is work you're doing that depends on inputs that you've received, and message passing is you sending messages to others. And we developed a proof recipe, which I will flash and not spend a lot of time on, where we first of all defined this notion of algorithm compatible (AC), which says that an agent will choose to follow the suggested computation in equilibrium, and communication compatible (CC): an agent will choose to follow the suggested message passing in equilibrium. And then we strengthened AC, and we said, well, strong AC is that you will choose to follow your suggested computation, whatever you do in regard to your revelation action and your communication action. So assuming others do what they're supposed to be doing, but even if you were to deviate in arbitrary ways for revelation and message passing, it's still incentive aligned for you to do the computation correctly.

Speaker 2

00:15:16 - 00:15:55

And you can similarly strengthen communication compatibility. And then it's quite easy to prove that if a mechanism is strategy-proof and you have a correct distributed specification that is strong AC and strong CC, then the specification will be faithful. So then the game is all about how to establish these properties, and we provided techniques for that in these papers: AAMAS '04, almost 20 years ago, it's hard to believe, on distributed VCG mechanisms.

Speaker 2

00:15:56 - 00:16:27

We had our BGP protocol, which was in PODC, and then a more general distributed optimization algorithm in AAMAS '06. So that's faithfulness. Now go back one year: strategy-proof computing. This is something I was playing around with pretty early on as a professor at Harvard. And I remember giving this talk within a little room at IJCAI in

Speaker 1

00:16:27 - 00:16:28

2003.

Speaker 2

00:16:29 - 00:16:58

And I don't think I've talked about it since. So thank you for giving me the chance. I think this paper has 10 citations. So I was developing this notion of what it would mean to have something called strategy-proof computing. And I'd been motivated by what's known as the end-to-end principle in network systems, where somehow you don't try to do too much in the network.

Speaker 2

00:17:00 - 00:17:24

You facilitate innovation on the edges. And I wanted to think through what that would mean for mechanism design. And my main idea was: well, you don't want there to be one designer. You want there to be innovation amongst mechanism designers. You want anybody to be able to deploy a mechanism and have kind of a competitive landscape for which mechanisms get used.

Speaker 2

00:17:24 - 00:18:05

And then I was thinking through: okay, what would that mean? How might that work? Is there an AI agenda here? One thing I decided was that I wanted each mechanism to be strategy-proof, at least locally, for the things that it's doing. And what I mean by that is that strategy-proofness doesn't compose very well. If you're trying to buy a bundle of stuff, and I give you a strategy-proof mechanism for one part of the stuff you want to buy, and a strategy-proof mechanism for the second part, it doesn't mean that your dominant strategy is to bid truthfully for this part and bid truthfully for that part, because you might not put everything together correctly.
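The composition failure described here can be reproduced with two independently strategy-proof second-price auctions and a bidder who values only the bundle. A toy example of mine, with made-up numbers:

```python
# Exposure problem: two second-price auctions, each strategy-proof in
# isolation, and one bidder who values only the BUNDLE of both items at 10.
# No way of splitting that value into per-item bids is a dominant strategy.

def second_price(bids):
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    return order[0], bids[order[1]]          # (winner index, price paid)

def bundle_bidder_utility(split_bid, rival_a, rival_b, bundle_value=10.0):
    win_a, price_a = second_price([split_bid, rival_a])
    win_b, price_b = second_price([split_bid, rival_b])
    got_a, got_b = win_a == 0, win_b == 0
    value = bundle_value if (got_a and got_b) else 0.0
    return value - (price_a if got_a else 0.0) - (price_b if got_b else 0.0)

# Splitting the bundle value 5/5 is fine when rivals are weak...
assert bundle_bidder_utility(5.0, rival_a=1.0, rival_b=1.0) == 8.0
# ...but exposes the bidder when one rival is strong: win item A, pay 3,
# lose item B, and end up with no bundle at all.
assert bundle_bidder_utility(5.0, rival_a=3.0, rival_b=7.0) == -3.0
# Against THAT profile, bidding 0 on both items would have been better,
# so no single split is dominant across all rival profiles.
assert bundle_bidder_utility(0.0, rival_a=3.0, rival_b=7.0) == 0.0
```

Each auction is strategy-proof on its own; it is the bidder's value structure spanning both that breaks the composition, which is why only local, per-mechanism strategy-proofness is insisted on.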

Speaker 2

00:18:06 - 00:18:38

So there's this composability problem, which I raised in the paper. But what I did want to insist on is that if somebody deploys a mechanism to allocate a particular bundle of resources, that mechanism should be strategy-proof in a particular way and should be certified as such. So this is the abstract of the paper: I wanted to support continued innovation and competition, focusing on resource allocation, the arbitration of resources, so that individual users can treat shared resources as their own.

Speaker 2

00:18:38 - 00:19:04

This is the notion of simplicity that comes from strategy-proofness: you shouldn't have to game the requests that you're making for resources. You should just be able to describe what you're interested in, just as you would if you weren't competing with others for those resources. And I laid out some challenges. I'm just going to tell you what they were, and then I will point you to the paper, which I think we've now posted on the webpage.

Speaker 2

00:19:05 - 00:19:31

So I laid out five guiding principles: we should care about incentives, we should be focused on utility and not utilization, we want incentive compatibility, we want openness, and we want things to be decentralized. And then the challenges. The first was design: how to describe what it is your mechanism does, who it is designed for, and what properties it has.

Speaker 2

00:19:31 - 00:20:03

The second: how we can verify, in a data-driven way, the incentive properties of the mechanism. Because I had in mind that the infrastructure, which was only vaguely described in the paper, is monitoring inputs and outputs to these mechanisms and is able, by looking at the trace of inputs and outputs, to notice violations of strategy-proofness. I say: ah, I sanction you. You are not strategy-proof. I have proof, by looking at the things you've been doing over time.
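That monitoring idea can be rendered as a small trace checker. This is my own framing, not the paper's statistical procedure: hold the rivals' reports fixed and flag any case where an agent's misreport earned strictly more than their truthful report.

```python
# Sketch of data-driven strategy-proofness auditing over an input/output
# trace. Each observation is (my_report, rivals' reports, won, paid) for
# one agent, whose true value we take as known for the purpose of the demo.

def utility(value, won, paid):
    return (value - paid) if won else 0.0

def find_violation(trace, true_value):
    """Return a rival profile witnessing a profitable misreport, else None."""
    by_rivals = {}
    for report, rivals, won, paid in trace:
        by_rivals.setdefault(tuple(rivals), []).append(
            (report, utility(true_value, won, paid)))
    for rivals, rows in by_rivals.items():
        truthful = [u for r, u in rows if r == true_value]
        if truthful and any(u > truthful[0] + 1e-9 for _, u in rows):
            return rivals        # evidence: some misreport strictly beat truth
    return None

# A pay-your-bid (first-price) mechanism is not strategy-proof: against a
# rival bid of 3, shading from a true value of 10 down to 4 does better.
trace = [(10.0, [3.0], True, 10.0),    # truthful report: utility 0
         (4.0,  [3.0], True, 4.0)]     # shaded report:   utility 6
assert find_violation(trace, true_value=10.0) == (3.0,)
```

The real difficulty, which the talk points at, is doing this from observational traces without knowing true values, which is where the high-probability, data-driven guarantees come in.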

Speaker 2

00:20:03 - 00:20:18

I can see you're not strategy-proof. That's the way I was thinking about this. And we did later work on that; I have a paper on that verification step. Then compositionality. And, I'd forgotten about this until I looked at the paper yesterday:

Speaker 2

00:20:20 - 00:21:06

I talked about the idea of using machine learning to design mechanisms in that paper as well. So then fast forward to this year, where we are using machine learning to design mechanisms. I've been working on this idea of differentiable economics for a few years now with a number of collaborators. And the particular project that I wanted to briefly mention is about contract design, so it fits with commitment devices. Oh, and I didn't mention commitments here, but what I wanted to say is: assume the ability to commit to the rules of a mechanism, that you are not going to change the rules of the mechanism.

Speaker 2

00:21:07 - 00:21:25

This was an implicit assumption in my paper: that you can deploy something and you will not change it. Which today, I think, we can roughly do. So that's good. Differentiable economics. This was something we started in 2017.

Speaker 2

00:21:25 - 00:22:03

A whole bunch of co-authors, including Zhe Feng and Sai Ravindranath, and more recently Jeff Zhang and Tonghan Wang in my research group. The idea is simple: represent the rules of a mechanism via a differentiable function, a neural network, and use differentiable techniques to try to optimize the rules of the mechanism. There are a few technical challenges, but one is that you need incentive compatibility.

Speaker 2

00:22:05 - 00:22:45

So if I gave the network the objective of maximizing expected revenue, that is, if you say your loss function is negated revenue, well, it's going to do that in a non-incentive-compatible way. It's going to kind of charge you your bid, which of course is not sensible economically. So we have to have a way to regularize, to penalize in some way the mechanisms that are learned, which we do in this research in various ways. Just as a quick example before I talk about contracts: in the first work we looked at optimal auction design.
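A stripped-down instance of the idea, far simpler than the neural-network setting in the papers: parametrize the mechanism by a single number, a posted price for one bidder with value uniform on [0, 1], relax the discontinuous buy decision with a sigmoid so the objective has gradients, and ascend expected revenue. All numbers here are mine.

```python
import math, random

# Differentiable mechanism "design" in one dimension: revenue p * P(v >= p)
# equals p * (1 - p) for v ~ U[0, 1], maximized at p = 1/2. The hard
# buy/no-buy indicator is smoothed by a sigmoid with temperature `temp` so
# we can take gradients, mirroring how discontinuous allocation rules are
# relaxed in the neural-network versions.

def smooth_revenue_grad(p, values, temp=0.05):
    """Sample estimate of d/dp of E[p * sigmoid((v - p) / temp)]."""
    g = 0.0
    for v in values:
        s = 1.0 / (1.0 + math.exp(-(v - p) / temp))
        g += s - p * s * (1.0 - s) / temp   # product rule
    return g / len(values)

random.seed(1)
samples = [random.random() for _ in range(2000)]
p = 0.1                                      # initial price
for _ in range(300):
    p += 0.1 * smooth_revenue_grad(p, samples)   # gradient ascent
assert abs(p - 0.5) < 0.05   # recovers the theoretically optimal price
```

A posted price is incentive compatible by construction; in the multi-bidder, multi-item networks the talk refers to, incentive compatibility instead has to be pushed in through a regret-style penalty in the loss.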

Speaker 2

00:22:47 - 00:23:20

A very simple example, with just one bidder, to illustrate what happens. And you can have auctions for a single bidder. I know it sounds a little silly, but actually digital goods do tend to decompose across bidders because they're what's called a non-rival good. So suppose we had a single bidder with an additive valuation function: uniform [0, 1] for item 1 and uniform [0, 1] for item 2. The theoretical economics literature tells us what the optimal design of an auction is.

Speaker 2

00:23:20 - 00:23:39

This is what it is. It says that if the value is in this space, less than this number for item 1, less than this for item 2, then don't allocate. If your value is high for both, allocate. If it's high for 1 but not 2, allocate item 1. High for 2 but not 1, allocate item 2.

Speaker 2

00:23:39 - 00:24:07

And then the payments follow from standard auction theory. So if you decompose it, this is the allocation rule from theory for item 1, and this is the allocation rule from theory for item 2. So it has this threshold kind of structure to it. And when you train the network, you learn things that look like this. So this is the density plot of a tanh-based neural network.
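The threshold structure on the slide corresponds to a menu. The constants below, each item alone at price 2/3 and the bundle at (4 - sqrt(2))/3, are my reading of the optimal-auction literature for one additive bidder with two U[0, 1] items, not numbers quoted in the talk:

```python
import math

# Menu rendering of the optimal single-bidder, two-item auction (prices
# taken from the literature for additive U[0,1] values; treat the exact
# constants as an assumption of this sketch).

ITEM_PRICE = 2.0 / 3.0
BUNDLE_PRICE = (4.0 - math.sqrt(2.0)) / 3.0   # ~0.862, a bundle discount

def optimal_menu_choice(v1, v2):
    """The bidder picks the utility-maximizing menu entry, so reporting
    values truthfully is incentive compatible by construction."""
    menu = {
        "nothing": (0.0, 0.0),                # (value delivered, price)
        "item1":   (v1, ITEM_PRICE),
        "item2":   (v2, ITEM_PRICE),
        "bundle":  (v1 + v2, BUNDLE_PRICE),
    }
    return max(menu, key=lambda k: menu[k][0] - menu[k][1])

assert optimal_menu_choice(0.1, 0.1) == "nothing"   # both values low
assert optimal_menu_choice(0.9, 0.1) == "item1"     # high for 1, not 2
assert optimal_menu_choice(0.9, 0.9) == "bundle"    # high for both
```

Because the bidder simply picks their favorite menu entry, the mechanism is incentive compatible by construction; the learned tanh network approximates these sharp menu boundaries with smooth ones.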

Speaker 2

00:24:07 - 00:24:38

So you can see it kind of smooths out the boundaries, but it's approximately doing what it should do. OK. So, basically my last slide, and hopefully we will have enough time in this session. This is a working paper led by Tonghan Wang in my research group on using this technique for optimal contract design. Let me tell you what I mean by a contract here.

Speaker 2

00:24:38 - 00:25:41

And I mean it in the very classical sense from economics. So we think here about there being an agent with unobservable, costly actions, each action inducing a distribution over observable outcomes, and there being a principal, another actor in the system, with value for the outcomes. So think about the agent as your contractor. If you're improving your house, you don't watch what the contractor does all the time, but based on the behavior of the contractor, your house is high quality or not, is on time or late. You can offer your contractor payments that are conditioned on whether the outcome is high quality or low quality, on time or late. And the goal in our work is to learn a contract.

Speaker 2

00:25:42 - 00:26:22

What is a contract? It's these outcome-contingent payments, and the goal is to maximize the expected utility of the principal. So think about it this way: a given contract induces a distribution over outcomes; you have an expected value for that, and an expected cost because you're paying, and you want to find, out of all possible contracts, the one that is best. So what we do here is recognize something about the design space. This is a bit washed out, but this is a two-dimensional contract space showing the utility function of the principal.

Speaker 2

00:26:23 - 00:27:09

The utility function of the principal, as you move in contract space, is discontinuous, because as you move from one contract to another, the agent's best-response action can change, and that can lead to a discontinuity in the utility to the principal. And we know that, for example, ReLU networks give continuous, piecewise-linear approximations. So we introduced a new network architecture that can represent discontinuities, which is appropriate for modeling this problem. So we learn the utility function of the principal, and then we do inference on that space to find the optimal contract. Going back to commitment here: this is assuming in the background the ability to commit to a contract.
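The discontinuity is visible in a miniature version of the model, with my own made-up numbers: two hidden actions, two outcomes, and payments contingent on the outcome. As the high-outcome payment crosses the agent's incentive threshold, the best response jumps, and the principal's utility jumps with it.

```python
# Minimal principal-agent sketch: the agent privately chooses an action
# (cost, probability of the high-quality outcome); the contract pays the
# agent contingent on the realized outcome.

ACTIONS = [                     # (cost to agent, P(high-quality outcome))
    (0.0, 0.1),                 # shirk
    (0.4, 0.9),                 # work hard
]
PRINCIPAL_VALUE = {"high": 1.0, "low": 0.0}

def agent_best_response(pay_high, pay_low):
    def agent_u(action):
        cost, p = action
        return p * pay_high + (1 - p) * pay_low - cost
    return max(ACTIONS, key=agent_u)

def principal_utility(pay_high, pay_low):
    _, p = agent_best_response(pay_high, pay_low)
    return (p * (PRINCIPAL_VALUE["high"] - pay_high)
            + (1 - p) * (PRINCIPAL_VALUE["low"] - pay_low))

# Just below the incentive threshold (pay_high = 0.5 here) the agent
# shirks; just above it, the agent works. The jump in the agent's action
# makes the principal's utility discontinuous in contract space.
assert agent_best_response(0.49, 0.0) == (0.0, 0.1)
assert agent_best_response(0.51, 0.0) == (0.4, 0.9)
assert principal_utility(0.51, 0.0) > principal_utility(0.49, 0.0)
```

A ReLU network would have to smear this jump out; an architecture that can represent discontinuities can place it where the agent's best response actually changes.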

Speaker 2

00:27:09 - 00:28:06

If you can commit to a contract, which is these outcome-contingent payments, then there's a contract design problem that can be solved using machine learning. So just to wrap up, I wanted to give you these four illustrations on alignment and commitment, and to emphasize that each of them has a different notion of commitment. In credible mechanisms, your commitment power comes from it being in your own self-interest not to use a safe deviation, or in your own interest to only use deviations that are tolerable. In the faithful-implementation setting, your commitment power comes from it being in your self-interest to follow the rules of the protocol as long as others are doing the same.

Speaker 2

00:28:07 - 00:29:06

In strategy-proof computing, we assumed in that work that different parties are able to commit to the rules of mechanisms. And then we argued that it would be in the self-interest of a party to publish a strategy-proof mechanism, because the infrastructure is going to run a reputation system to catch deviations. And then in this optimal contract design work, we're assuming the ability to commit to a contract, which has an outcome-contingent payment rule, and using machine learning to design that commitment device. That's all I had. I wanted to share this with you because I think these are all examples where ideas from distributed AI, multi-agent systems research, and Econ-CS research can continue to get picked up in the crypto space.

Speaker 2

00:29:07 - 00:29:13

And I'm guessing a bunch of you are building out and designing things in that space.

Speaker 1

00:29:24 - 00:29:24

Thanks.

Speaker 3

00:29:27 - 00:29:30

I'm curious about the five challenges.

Speaker 2

00:29:34 - 00:29:34

Do you think any of

Speaker 4

00:29:34 - 00:29:36

them have been kind of like, I'm

Speaker 3

00:29:37 - 00:29:43

not exactly sure what they are but yeah, which ones do you kind of currently see as the kind of biggest outstanding ones that are

Speaker 2

00:29:43 - 00:29:46

still here? What do you think we've actually made a bit of progress on?

Speaker 3

00:29:46 - 00:29:51

Any different sources of case studies? Yeah, What is that standing in place of the API writing?

Speaker 2

00:29:52 - 00:30:07

Yeah, great. I would say good progress on mechanism design, good progress on the automation of mechanism design. People haven't really worried so much about developing languages to describe the properties of mechanisms.

Speaker 3

00:30:08 - 00:30:08

A lot of work

Speaker 2

00:30:08 - 00:30:18

on languages to describe inputs to mechanisms, but not all that much on the question of how you describe the mechanism itself. And

Speaker 1

00:30:21 - 00:30:22

I don't

Speaker 2

00:30:22 - 00:31:01

want to say there's been a lot of work on this idea, but I'm just writing up a paper on how you can trace the input-output decisions of mechanisms and develop algorithms that can give, with high probability, guarantees that the algorithm is in fact strategy-proof. We did some work on this; it creates space for more work. And maybe there's a whole different approach, which would be modern cryptographic techniques, which we weren't thinking about here. Maybe we can chat a little bit about that. Compositionality is something that people are very aware of. There was some work on this after I wrote this paper.

Speaker 2

00:31:01 - 00:31:40

There was a paper by Mu'alem and Nisan that gave one way in which you can compose mechanisms together. We also did some work on mechanisms for options: you don't buy the resource, you buy the option to use the resource, which gives you a kind of compositionality, because you're not exposed to having to buy the resource. We did some work on that. But I think it was Michael Wellman who once said to me: you're never going to have one mechanism for all of the resources in the world.

Speaker 2

00:31:40 - 00:32:04

And so there are always going to be boundaries between mechanisms, and I think mechanisms will still be very much local in that sense. I suppose the price-of-anarchy work in the theoretical computer science community has proved some things about what happens if you use very simple, decoupled auctions.

Speaker 5

00:32:07 - 00:32:32

Yeah. Building off the description point, I'm curious about your notions of credibility and faithfulness. I'm thinking about how other economic security models in the blockchain space play out. And if I have understood correctly, the notion of faithfulness is a notion of equilibrium where you consider only the deviations of individual agents.

Speaker 5

00:32:32 - 00:33:08

But another concept people look at often is collusion thresholds. And another angle is looking at the economic security budget: if some unbounded activity uses a blockchain, you never know the amounts of money moving around. There might be some upper bound, at which point the mechanism loses its faithfulness or credibility or whatever. So I'm curious if you've thought of other extensions to the notions of faithfulness and credibility to cover these kinds of security concerns.

Speaker 2

00:33:08 - 00:33:58

Yeah, great question. Let me see. First of all, I think that coalitional refinements of both concepts can certainly be defined. I don't know that people have really done that yet for either credibility or faithfulness; I haven't done that. The most constructive, positive thinking I've done about coalitional deviations is that you can clearly handle coalitional deviations within the automated mechanism design framework when we learn mechanisms.

Speaker 2

00:33:59 - 00:34:31

Because what we do in that framework is penalize unilateral deviations with a loss function. I didn't talk about it, but that's roughly what we did. So you could penalize coalitional deviations, and I think you could use that framing to build mechanisms that are more robust. I'm generally pretty bullish about using machine learning techniques to automate this kind of design, and I think it's quite flexible. So if there are coalitional things you care about, or other things, it would be interesting to study them in that framing.

Speaker 2

00:34:37 - 00:35:29

The other thing I would say is there was some nice work early on by people like Vanessa Teague and Joe Halpern on rational secret sharing models. You know, there's a secret-sharing, multi-party computation literature that is fairly classical, where adversaries can behave in arbitrary ways, and it talks about robustness against some fraction of the participants deviating. Their work, again in the mid-2000s, asked: okay, what if the deviators are not Byzantine in that way, but are rational and active in some way? And they have some kinds of coalitional robustness properties there.

Speaker 2

00:35:31 - 00:35:34

Did that help address your questions? I'm not sure I answered all of them.

Speaker 5

00:35:35 - 00:35:39

Yeah, one more question. Was that, you said Joe Halpern?

Speaker 2

00:35:39 - 00:35:49

Yeah, Joe Halpern and Vanessa Teague, and then Leo Badan and myself had follow-up work on that as well.

Speaker 1

00:35:57 - 00:35:58

Yeah.

Speaker 3

00:36:00 - 00:36:01

Thanks. All right.

Speaker 1

00:36:05 - 00:36:20

When you think what's in there. Composition added. Yes. So like. Compositionality.

Speaker 1

00:36:22 - 00:36:28

Yeah, so like.

Speaker 3

00:36:30 - 00:36:34

And beyond commitments, what kind of techniques can we use to achieve these

Speaker 2

00:36:34 - 00:37:11

things? Yeah. Yeah, I'm still not sure I have anything particularly exciting to say about commitment with compositionality and the way those two come together. But hopefully we can get to that during the conversation today. I think the last thing I can say in regard to compositionality is, first of all, to contrast it with differential privacy.

Speaker 2

00:37:11 - 00:37:54

You know, there are a lot of things people love about differential privacy, and one is that it composes. And so it's always been incredibly frustrating to me that these mechanism design settings seem not to be able to achieve the same properties, which is why we did think about it through options. But I think the best thing to say is, and actually this was part of my motivation for strategy-proof computing: part of the strategy-proof computing notion was to let there be innovation, where mechanism designers could kind of draw the boundaries in an appropriate way in resource space. Resource space could literally be spatial, could be temporal, could be other things.

Speaker 2

00:37:54 - 00:38:14

But try to draw boundaries in ways that match the separability of the utility functions of, say, the majority of participants in the system, so that their problems will broadly break along this way and that way, and they can independently make decisions here

Speaker 3

00:38:14 - 00:38:15

and then make decisions here.

Speaker 2

00:38:17 - 00:39:26

As I talk to you, I wouldn't be surprised if there were things we can do by thinking about some of the graphical modeling techniques that people use for probabilistic models, with their conditional independencies and DAG structures, where you're willing to make assumptions about the lack of dependencies between random variables in the world. There may be things like that that we can also leverage: graphical representations we can write down that capture the types of conditional independencies where, if I give you this resource, then your values for these resources are decoupled; you depend on this side over here, and you depend on that side over there. This is the type of way that we should be thinking if we want to think about how to break up our problems. But maybe we shouldn't be that naive, and maybe we should just be thinking about how the boundaries are structured so that other people can kind of move their mechanism domains around and try to be smart in a way that helps users.

Speaker 6

00:39:27 - 00:39:48

Kind of a follow-up question to that, and it's something from your other paper: in strategy-proof computing, it was the market itself that would find those regions, and now you're talking about a more top-down kind of thing. So what do you think is more promising for the meta-problem of designing the partition?

Speaker 2

00:39:48 - 00:40:33

Yeah, I mean, I would say definitely bottom-up, but those participants will still need the right heuristics to know how to carve out the resource space. By the way, I did do a lot of work on combinatorial auctions. That's something that I've thought a lot about. And there are various preference representations in that literature that capture these types of independencies or couplings. So I think there are things we can borrow. But yeah, I would say: try to build infrastructures that allow for innovation over time.

Speaker 2

00:40:37 - 00:40:39

Are they not very controversial in certain?

Speaker 4

00:40:41 - 00:41:27

So you mentioned rules a couple of times, and I was wondering what your thoughts have been about how to enforce those. We've talked about reputation at one point, but can verification be a big part of it? Like, you could verify that a sequence of transactions has not respected the rule, or that there was a deviation from what we expected it to do. Yeah, I was just wondering how you thought about that in the past.

Speaker 2

00:41:28 - 00:42:26

Yeah, yeah, thanks. I did initially think about that as more of an ecosystem, like a reputation system where there are checkers that are motivated to find problems and call out the problems. But there was a period of time when I did work on using homomorphic encryption type ideas. We were using a pretty lightweight version of that, where the way we were modeling it was: we didn't want to prevent the mechanism getting knowledge of the inputs, but we did want the mechanism to have to prove to anybody that the computation was done correctly.

Speaker 2

00:42:30 - 00:43:17

And we were using various, what we called randomized representation techniques with straight-line computations. I can talk to you about it more offline, but these were techniques I worked on with Michael Rabin, who was then not yet retired, and Christopher Thorpe at Harvard. And there it was this prove-and-verify model, where you run the mechanism and you could deviate, but if you deviated, you wouldn't be able to prove that you'd done the right thing. Right. So we were thinking about things like normal, non-decentralized exchanges. But there are all these problems with dark

Speaker 3

00:43:18 - 00:43:19

pools in

Speaker 2

00:43:19 - 00:44:14

finance, where it became apparent over the years that banks were not able to credibly commit not to use information they weren't supposed to use. And so I wanted to know whether we could use crypto techniques to have them commit to a rule and then be able to prove that they hadn't deviated. So I suppose all I'm trying to say is that over time, and in today's world with smart contracts and other technologies, I think we have much more ability to commit to rules. There is always the question about what happens upstream of that. I mean, there are various flavors of it, of course; it could literally be "I have no available deviations." But I think typically, in the crypto space, I can commit to the rule, but I cannot necessarily guarantee the inputs that get there, right?

Speaker 2

00:44:15 - 00:44:52

So the malfeasance is then in playing with the inputs that hit that function. And that's why I think that combining that ability to commit to a function, which acts on whatever inputs it does get, with credibility, which brings incentives to the table to complement it, works really nicely. Because then you've got the incentive alignment around the inputs being correct, or not modified in a bad way, with the crypto providing protection around the computation of the output.
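The commit-then-verify flow being described can be sketched as follows. This is a toy illustration, not the actual protocol from that work: the operator publishes a hash commitment to the bids it received, runs a fixed rule (a first-price auction here), and later reveals the bids so anyone can recheck both the commitment and the outcome. A real system would use per-bid cryptographic commitments, proper salts, and zero-knowledge proofs.

```python
import hashlib
import json

def commit(bids, salt):
    # Deterministic serialization, then a hash commitment.
    blob = json.dumps(sorted(bids.items())) + salt
    return hashlib.sha256(blob.encode()).hexdigest()

def first_price(bids):
    winner = max(bids, key=bids.get)  # intended rule: highest bid wins
    return winner, bids[winner]       # winner pays their own bid

# Operator side: publish the commitment before announcing the outcome.
bids = {"alice": 7, "bob": 5}
salt = "public-randomness"            # stand-in for a proper random salt
published = commit(bids, salt)
outcome = first_price(bids)

# Verifier side: after reveal, recheck the commitment and recompute the rule.
def verify(revealed_bids, revealed_salt, commitment, claimed_outcome):
    return (commit(revealed_bids, revealed_salt) == commitment
            and first_price(revealed_bids) == claimed_outcome)

print(verify(bids, salt, published, outcome))     # honest run checks out
print(verify(bids, salt, published, ("bob", 5)))  # a deviation is caught
```

The point of the sketch is the division of labor: the commitment pins down the inputs the operator claims to have used, and re-running the public rule catches any deviation from it.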

Speaker 4

00:44:56 - 00:45:18

Yeah. I wanted to expand on this a bit, and I wanted to see if you have any intuitions on how we can prove faithfulness of an optimal smart contract that's based on a machine learning model. Would we want to prove the learning? Would we want to prove that it was trained using correct data?

Speaker 2

00:45:19 - 00:45:27

What would it even mean for a smart contract not to be faithful, I suppose, is where my mind is going. Again, it's an open question, right?

Speaker 1

00:45:30 - 00:45:30

So

Speaker 2

00:45:33 - 00:45:40

is a smart contract already a lot closer to what I framed as being something more like credibility?

Speaker 4

00:45:41 - 00:45:51

On faithfulness: if the contract itself is triggered by an engine, a machine learning model or whatever else is up there, then wouldn't you want to prove that it is faithful to the initial training?

Speaker 2

00:45:53 - 00:46:23

Yeah. Maybe I could take your question this way, but feel free to push back, because this is actually really interesting, I think. Could we use differentiable economics to learn credible mechanisms? I think that's where my mind went when you said that. I mean, just recently

Speaker 3

00:46:25 - 00:46:27

in my group, and as you mentioned,

Speaker 2

00:46:28 - 00:47:28

there's the verifiable sequencing rules work for MEV. I've been thinking a lot about credibility in the blockchain space, and I can also continue that conversation, but I think credibility is very nicely suited to blockchain. So I think that would be where I would start. I would code up the types of deviations that are captured by the credibility concepts, and I would want to penalize them when they go against the declared rules.

Speaker 2

00:47:30 - 00:47:55

I think there are definitely directions there I'd want to hear more about. I'm interested, but it won't be the main thing. I feel like there's an ongoing discussion.


Speaker 5

00:48:55 - 00:48:56

But there are other limitations.

Speaker 2

00:49:05 - 00:49:27

So adoption is definitely a challenge. What I'm thinking about in response to your question is that you don't want to leave everything to a reputation mechanism. A reputation mechanism may not be enough by itself.

Speaker 2

00:49:27 - 00:49:37

It's not just about how you can build a reputation. By the way, there's an interesting story about the launch of eBay in that regard. eBay launched in

Speaker 1

00:49:37 - 00:49:38

1995

Speaker 2

00:49:39 - 00:50:11

and as of 1996 it had a reputation system, as far as I know the first ever reputation system on the internet. And they designed a two-sided reputation mechanism, where buyers could leave feedback on sellers and sellers on buyers. And people always ask: why would they have sellers provide reputation feedback on buyers? The reason usually given is that buyers can build a reputation as a buyer and then use that reputation to sell.

Speaker 2

00:50:12 - 00:50:43

So there's this idea of letting reputation be portable from one way you're acting to another way you're acting. So maybe one way to think about who's tracking reputation, and getting better efficiency, is to think about whether or not you can port your reputation across contexts, a little bit like the way we think about things like identity-related mechanisms. But I think our reputation ecosystem is not there yet.

Speaker 5

00:50:45 - 00:50:47

Most of them are there.

Speaker 2

00:50:47 - 00:51:04

And I'm also thinking that it shouldn't just be about reputation and collecting data. It should be about proving things as well. We don't want all of it to be about having to wait and see what happens. We want as much of it as we can to be about provable properties that anybody can check.

Speaker 5

00:51:06 - 00:51:15

On the reputation side of things, there's also this question of how, I don't know what the correct property is,

Speaker 2

00:51:15 - 00:51:31

but how gameable it is... Yeah, yeah. And actually, you get all of the same problems at the meta level. Yeah. And there's lots of interesting work on attacks on reputation systems.

Speaker 2

00:51:34 - 00:52:08

You know, PageRank-style scores are not robust to attacks. You can get a set of people pointing at each other to pump the feedback score around, so that the random walk sits at a vertex and holds it there. And John Hopcroft wrote a nice paper in 2008 where he used hitting time, which is another quantity of the stochastic process, and proved that that one is manipulation-resistant. There's good work there as well. I mean, for most of the questions you guys ask me,
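The contrast between stationary-distribution scores and hitting time can be illustrated on a toy trust graph (all names and edges here are hypothetical): a sybil ring can point at itself all it likes, but if trusted seeds never rate it, a random walk started from a seed never reaches it, so its hitting-time score stays bad.

```python
import random

# Toy trust graph: edges point from rater to rated. The sybil ring
# (s1, s2) links only to itself, trying to pump walk-based scores.
GRAPH = {
    "seed": ["a", "b"],
    "a": ["b", "seed"],
    "b": ["a", "seed"],
    "s1": ["s2"],
    "s2": ["s1"],
}

def hitting_time(target, start="seed", walks=2000, max_steps=50, seed=0):
    # Monte Carlo estimate of the expected number of steps for a random
    # walk from `start` to first reach `target`, capped at max_steps.
    rng = random.Random(seed)
    total = 0
    for _ in range(walks):
        node, steps = start, 0
        while node != target and steps < max_steps:
            node = rng.choice(GRAPH[node])
            steps += 1
        total += steps
    return total / walks

# Honest nodes are reached quickly from the seed; the sybil ring is
# unreachable from the seed, so its hitting time saturates at the cap
# no matter how densely its members point at each other.
print(hitting_time("a") < hitting_time("s1"))
```

This is only the intuition; the manipulation-resistance result itself concerns how much any coalition can improve its own score by rewiring its outgoing edges.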

Speaker 2

00:52:08 - 00:52:19

I can probably point to something from the past 20 years, but exactly what the right formulation is for the problem to be wrapped up with, I'm not sure.

Speaker 5

00:52:21 - 00:52:24

Yeah, the references are appreciated. Yeah, sure.

Speaker 2

00:52:25 - 00:52:31

Once I have all the references, I can send them around.

Speaker 3

00:52:32 - 00:52:41

I have a question. I want to go back to a part you already touched on,

Speaker 2

00:52:43 - 00:52:43

so you were

Speaker 3

00:52:43 - 00:53:14

mentioning what are some productive ways to think about coalitions. And also, on the blockchain, you can credibly commit to rules, but the inputs mostly have to be treated as given, say it's energy data or whatever. And over there, the way people usually solve this problem is to dedicate the job of actually processing the input, providing the input, to one party.

Speaker 2

00:53:14 - 00:53:17

So you're interested in this. Yeah, yeah, yeah. Yeah, I

Speaker 3

00:53:17 - 00:53:50

want to ask: can you somehow encourage those two parties, one party holding the data and letting it be known, and the other party running the execution mechanism, to work together as intended? However, this kind of mechanism design deployment is also too easy, in the sense that it is super easy for the two of them to collude, rather than the mechanism just being written for those two parties to use. It kind of ends up picking the single most trusted company, and that's it.

Speaker 3

00:53:50 - 00:53:59

That's what I'm thinking about. I guess I'd love some constructive ways to think about coalitions.

Speaker 2

00:54:06 - 00:54:21

So let's be precise as well. Coalitional deviations: I was really referring to coalitions of participants in a mechanism, whereas now I think you're thinking about coalitions of mechanisms. Am I right? Yes. Yeah.

Speaker 2

00:54:21 - 00:54:55

Coalitions of mechanisms that are somehow setting themselves up to be able to work together to defeat the design intent of some separation of concerns. Let's see. I don't think I've thought about this. You may have defeated me on this use of mechanisms. I mean, most of the mechanism design that you would think of is single-mechanism design.

Speaker 2

00:54:55 - 00:55:12

So even the notion of doing mechanism design in the context of multiple mechanisms is relatively understudied. And then, now I'm making excuses, but the notion of coalitions of mechanisms, I think, has been studied even less. Let's see, do I have anything to say?

Speaker 3

00:55:24 - 00:55:46

Yeah, I mean, you know, we have seen this, because very often many people want to deploy, for example, some oracle-based service, and a lot of the oracle-based services are not being deployed with any protection against this kind of collusion, or anything else.

Speaker 2

00:55:47 - 00:56:19

Yeah, I see. I see. So in some sense you're adopting a notion that the existence of the mechanism shouldn't create new incentive concerns, somehow. One thing that came to mind when you said that is work on information elicitation.

Speaker 2

00:56:21 - 00:57:05

But that work typically assumes that the only thing you're trying to do with the participants is to get them to share a bit of information. You don't have any outside incentives. So it doesn't directly apply to the oracle setting. But what you do want in that case is that, if you deploy an information elicitation mechanism, you don't want it to introduce new incentives to act in some different way in the world, to change the events, instead of just reporting the information as it stands. You can try to formalize the types of interaction that you don't want, and then try to prove that the mechanism is robust against them.
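The information elicitation literature referenced here typically builds on strictly proper scoring rules: assuming the reporter has no outside stake in the event, reporting their true belief maximizes their expected payment. A minimal sketch with the quadratic (Brier) score for a binary event, with the belief value chosen purely for illustration:

```python
# A strictly proper scoring rule pays a forecaster so that, absent
# outside incentives, reporting the true belief maximizes expected pay.

def quadratic_score(report, outcome):
    # outcome is 1 if the event happened, else 0.
    return 1 - (outcome - report) ** 2

def expected_score(report, belief):
    # Expectation of the score under the forecaster's true belief.
    return (belief * quadratic_score(report, 1)
            + (1 - belief) * quadratic_score(report, 0))

belief = 0.7
reports = [i / 100 for i in range(101)]  # candidate reports on a grid
best = max(reports, key=lambda r: expected_score(r, belief))
print(best)  # the truthful report maximizes expected score
```

The caveat in the talk is exactly what this sketch ignores: if the forecaster can also act in the world to change the event's probability, the scoring rule alone no longer guarantees good behavior.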

Speaker 2

00:57:06 - 00:57:10

That was very quick; we've been thinking a little bit about that. It's at the incubation stage.

Speaker 3

00:57:12 - 00:57:13

Sounds like a job for AI.

Speaker 2

00:57:14 - 00:57:18

There we go. And where I get stuck, I either wait in your guess or we think about the odds.

Speaker 3

00:57:21 - 00:57:25

Oh, yeah. So does anybody have any last questions that would...

Speaker 4

00:57:30 - 00:57:30

Contra,

Speaker 3

00:57:32 - 00:57:37

what can you solve with this? My list is... The gate service can be solved with...

Speaker 2

00:57:37 - 00:57:37

The differentiable...

Speaker 1

00:57:38 - 00:57:39

Yeah. ...Contracting.

Speaker 2

00:57:39 - 00:58:16

So, you're correct. The answer at the moment is... okay... which is not very satisfying, because I'm still working on it. So the computational technique for contract design that most people would use for problems like what I described would be linear programming, where you solve a separate linear program for every action.

Speaker 2

00:58:17 - 00:59:18

And so if you have a very large number of actions, that's a very large amount of computation. We avoid that here. But I don't think I have a sense yet of where the limits are, because, interestingly, as far as I know, the applied side of contract design is a little bit less developed, and so I don't have it yet. Yeah, so, multi-dimensionality.
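The "one program per action" structure mentioned here can be sketched on a hypothetical two-action, two-outcome principal-agent example. A coarse grid search stands in for the per-action linear program (all the numbers, names, and grid parameters are invented for illustration): for each action, find the cheapest limited-liability payment scheme under which the agent weakly prefers that action, then compare actions from the principal's side.

```python
from itertools import product

OUTCOMES = ["fail", "success"]
ACTIONS = {
    # action: (cost to agent, outcome distribution over OUTCOMES)
    "shirk": (0.0, [0.9, 0.1]),
    "work":  (1.0, [0.2, 0.8]),
}
REWARD = {"fail": 0.0, "success": 10.0}  # principal's value per outcome

def agent_utility(action, pay):
    # Expected payment under the action's outcome distribution, minus cost.
    cost, dist = ACTIONS[action]
    return sum(p * t for p, t in zip(dist, pay)) - cost

def cheapest_implementation(target, grid_max=6.0, step=0.25):
    # Subproblem for one action: minimize expected payment subject to the
    # agent weakly preferring `target` to every other action, with
    # nonnegative (limited-liability) payments. Grid search stands in
    # for the LP.
    grid = [i * step for i in range(int(grid_max / step) + 1)]
    best = None
    for pay in product(grid, repeat=len(OUTCOMES)):
        if all(agent_utility(target, pay) >= agent_utility(a, pay)
               for a in ACTIONS):
            _, dist = ACTIONS[target]
            cost = sum(p * t for p, t in zip(dist, pay))
            if best is None or cost < best[0]:
                best = (cost, pay)
    return best

# Outer loop: one subproblem per action, then the principal compares.
for action in ACTIONS:
    exp_pay, pay = cheapest_implementation(action)
    _, dist = ACTIONS[action]
    value = sum(p * REWARD[o] for p, o in zip(dist, OUTCOMES))
    print(action, pay, round(value - exp_pay, 2))
```

With many actions, this outer loop is exactly what blows up, which is the motivation for learning-based approaches that sidestep solving a subproblem per action.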

Speaker 2

00:59:23 - 00:59:45

Yeah, so multi-dimensional. You mean multi-dimensional types. Yeah, yeah, yeah. We've been thinking about a related problem in data market design, which sounds a bit like contracts. I shouldn't say contracts.

Speaker 2

00:59:46 - 00:59:59

Yeah, so if you have anything in mind, we can try to take it on. This is very new, so we haven't tried to apply it to anything really serious yet.

Speaker 3

01:00:02 - 01:00:08

So one of the outstanding challenges is about auditing, I forget all of the names,

Speaker 2

01:00:08 - 01:00:09

basic verification,

Speaker 3

01:00:16 - 01:00:17

Yeah, and you mentioned

Speaker 2

01:00:19 - 01:00:25

that you weren't relying on modern technology at the time. I

Speaker 3

01:00:25 - 01:00:35

was thinking of zero-knowledge mechanisms. Yeah. Do you think that solves it, unifies it?

Speaker 2

01:00:42 - 01:01:11

Not obviously. And I think even just with smart contracts, where anybody can look at the description of the object, not obviously, because the property I'm interested in proving something about is strategy-proofness, which is a kind of derived property of a function. So even if I describe the function to you, it may not be evident... I don't think we know very much about how tractable it is. There's a little bit of very recent work, but I don't remember the authors.

Speaker 2

01:01:12 - 01:01:39

But I think we don't really understand how tractable it is to somehow query that function and decide whether it is strategy-proof or not. Zero-knowledge techniques... Actually, I think there is work on zero knowledge for game-theoretic properties. So maybe that's where I'd go.
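The "query the function and decide" idea can be made concrete on a tiny discrete instance. This is only a sketch, feasible at toy scale: treat the mechanism as a black box and brute-force every value profile, opponent profile, and misreport. A second-price auction passes; a first-price auction fails.

```python
from itertools import product

BIDS = range(4)  # discretized value/bid space: 0..3

def second_price(bids):
    # Highest bid wins (ties to lowest index); winner pays the
    # highest competing bid.
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def first_price(bids):
    # Highest bid wins; winner pays their own bid.
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    return winner, bids[winner]

def utility(mechanism, values, bids, i):
    winner, price = mechanism(list(bids))
    return values[i] - price if winner == i else 0

def is_strategyproof(mechanism, n=2):
    # For every value profile, profile of others' bids, and possible
    # misreport, truthful bidding must be (weakly) best.
    for values in product(BIDS, repeat=n):
        for others in product(BIDS, repeat=n - 1):
            for i in range(n):
                bids = list(others)
                bids.insert(i, values[i])
                truthful = utility(mechanism, values, bids, i)
                for lie in BIDS:
                    bids[i] = lie
                    if utility(mechanism, values, bids, i) > truthful:
                        return False
                bids[i] = values[i]
    return True

print(is_strategyproof(second_price))  # True
print(is_strategyproof(first_price))   # False
```

The number of queries explodes combinatorially in players and bid granularity, which is one way to see why deciding this property for a black-box function is the open tractability question being described.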

Speaker 4

01:01:39 - 01:01:41

It's also a key to the whole solution.

Speaker 3

01:01:42 - 01:01:43

It's a playable strategy.

Speaker 2

01:01:44 - 01:01:51

It's just not your deal, which is developing the language. Right, right.

Speaker 3

01:01:53 - 01:01:56

So it's just a bit like you need a mathematical formalization

Speaker 2

01:01:56 - 01:01:58

for the zero-knowledge proof to work, but

Speaker 3

01:01:58 - 01:02:00

you have to do it.

Speaker 2

01:02:00 - 01:02:19

Or when you don't have the mathematical formalization, then, in fact, zero knowledge on its own would just prove that you correctly called the function, really, with the correct syntax. It wouldn't prove a property.

Speaker 3

01:02:22 - 01:02:28

Right, yeah, and I think that's the zero-knowledge thing, right? To commit to that.

Speaker 2

01:02:32 - 01:02:50

I guess what you could prove, and it's not really satisfactory, but you could prove that there's no useful unilateral deviation in the result of it. So that's something you could prove. That's some guarantee. That's not bad. You can prove that.

Speaker 2

01:02:50 - 01:02:53

You can prove that.

Speaker 1

01:02:54 - 01:02:54

You can prove that.

Speaker 2

01:02:54 - 01:02:54

All right, thank you. That's not bad. All right. Thank you.