1 hour 50 minutes 19 seconds
Speaker 1
00:00:00 - 00:00:24
All right, everyone. Welcome back to another episode of Bell Curve. Before we jump in, a quick disclaimer: the views expressed by my co-hosts today are their personal views, and they do not represent the views of any organization with which the co-hosts are associated. Nothing in this episode should be construed or relied upon as financial, technical, tax, legal, or other advice. You know the deal. Now let's jump into the episode.
Speaker 2
00:00:26 - 00:00:51
Hey everyone. Before we get into it today, just want to give a quick shout out to this season's sponsor, Rook. Close to a billion dollars' worth of MEV has been taken out of users' pockets, and that's just on Ethereum, and that number is only getting larger, unfortunately. Rook thinks that it's time for a change, and they've built a solution which is going to automatically redirect that MEV back to where it belongs: into your, the user's, pocket. So you're going to be hearing all about them later in the show.
Speaker 2
00:00:51 - 00:01:02
I'm a huge fan of this team and what they're building. So stay tuned to find out more. All right, partner. Uh, today we're going to be talking to Jon Charbonneau and Robert Miller. I am psyched for this episode.
Speaker 2
00:01:03 - 00:01:08
It's going to be a very good one. You want to talk a little bit about why we've got Jon and Robert on today?
Speaker 3
00:01:08 - 00:01:32
Yeah. So I think our topic is MEV in the modular stack, right? And I mean, I couldn't imagine two better guests to interview about this. I think Jon has written several posts about Ethereum's rollup-centric roadmap and recently also hit it out of the park with his post... Rollups Aren't Real.
Speaker 3
00:01:32 - 00:01:47
Rollups Aren't Real is what it was called. Like, I mean, it's even hard to say what the post was about, because it covered everything, right? Like different, uh, sequencer options for rollups to, like, decentralized block building. Yeah. I can really recommend this post to everyone.
Speaker 3
00:01:47 - 00:02:31
And I think we will explore in this episode, basically, what this rollup-centric roadmap will look like for crypto, like the different options for decentralizing sequencers, and the challenges that rollups face in doing that. We will talk about shared sequencing and cross-domain MEV, and then ultimately we will build the bridge to SUAVE. And I'm very glad that we have Robert Miller with us today, Head of Product at Flashbots. He's one of the main folks in charge of actually designing and implementing SUAVE. And yeah, he's also someone who has just this incredible wealth of knowledge about the MEV supply chain.
Speaker 3
00:02:32 - 00:02:35
And so, yeah, I think this would be just a blast of a conversation.
Speaker 2
00:02:36 - 00:02:49
Agreed, Hasu. All right, let's jump right into it. All right, everyone, welcome back to another episode of Bell Curve. Today, Hasu and I are joined by Robert Miller of Flashbots and Jon Charbonneau of DBA. Guys, welcome to the show.
Speaker 4
00:02:49 - 00:02:51
Thanks for having us on.
Speaker 5
00:02:51 - 00:02:53
Good morning, good afternoon. Thanks for having us.
Speaker 2
00:02:53 - 00:03:14
I thought you were going to give us a good morning, good afternoon, good night. Guys, we are really stoked for this episode. I've been looking forward to this one. The title of the episode is MEV in a Modular World. Thus far in the first two episodes of this season, Hasu and I have really explored the infrastructure from the perspective of main chain Ethereum.
Speaker 2
00:03:15 - 00:03:38
Obviously, as everyone in crypto knows, the roadmap for Ethereum is a modular one, and a lot of MEV-based activity is going to move up the stack onto some of these rollups and layer twos. So maybe we could just start from sort of a 10,000-foot vantage point. And Jon or Robert, whoever wants to take this first: how do you see MEV changing as Ethereum adopts this modular roadmap?
Speaker 5
00:03:39 - 00:04:25
Yeah, so obviously things will change a lot. There are kind of two parts to what I would say will change as you move to rollups. Some of it is what I'll call fundamental to rollups, in that there are different kinds of interactions at the boundaries between a rollup and an L1 versus other chains, where, for example, you can have layer 1 sequencing for a layer 2, which is not normally the case for any other kind of chain. The reality is that most of what will change is just the simple stuff, a factor of what rollups look like today. A simple example being that generally all of the MEV is going to be extracted in real time by whoever is the leader choosing the ordering of those blocks.
Speaker 5
00:04:25 - 00:05:30
So if I'm doing swaps on Uniswap on L1, then generally it's the Ethereum validators who are doing that. To the extent that I move to an L2, it's obviously going to be the sequencers who are doing that there, as opposed to the Ethereum L1 validators, who are no longer that kind of real-time party deciding the ordering for most transactions. And the big difference between most rollups today and the way that any other chain works is the fact that all of them run centralized sequencers today, which is obviously just not a thing that we see on any L1 chains, because if you had a centralized sequencer for an L1, I don't really know what the point of that would be. And that has had a lot of nice things for users, but also obviously a lot of negative externalities. The very simple setup that effectively all of them have taken is: we run a centralized sequencer, we keep all the order flow private, and we just do a simple first come, first serve. That sounds really nice for users: yeah, you can trust us, we'll give you fast confirmations, we're going to make sure that you don't get front-run, we're going to give you, quote unquote, fair ordering, as we see everything, we're going to put it in order. And that sounds really nice.
Speaker 5
00:05:30 - 00:06:24
The problems that you obviously start to see with that are a lot of the problems that we've seen with, for example, Arbitrum over the past months as they're thinking through their long-term strategy. When you have this simple first come, first serve, the only way that I can express my preference of what order I want to be in that block is, well, I'm going to race to be the first person in that block. So that means people are investing in latency infrastructure, trying to co-locate with the sequencer, and in many cases trying to spam. And you start to see the result of that: you end up with an incredibly heavy load on the sequencer, to the point where the Arbitrum sequencer had hundreds of thousands of connections to it and they were thinking about implementing proof of work to meter those connections. Don't do proof of work as a solution to MEV would be my only response there. So all of these rollups are trying to think through their longer-term strategy of: OK, we're going to have a lot of the same problems that L1s had to deal with.
Speaker 5
00:06:24 - 00:07:01
So you see people like Arbitrum starting to think about their longer-term plans of, OK, how do we implement some kind of way for people to express their preferences, and an ability to express those preferences by paying for them directly, as opposed to paying by investing in latency infrastructure and spamming the network. So those are things like their Time Boost proposal, or what Shannon proposed with the frequent batch auction variation of first come, first serve, where you allow that flexibility to layer on some kind of auction mechanism for people to pay, so that you can get much better, much more robust infrastructure, in the same way that we see on the layer 1 to deal with MEV today.
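A minimal sketch of the priority-payment idea being described here, assuming a Time Boost-style rule in which a priority fee buys a bounded time advantage; the boost function, constants, and names are illustrative assumptions, not the actual Arbitrum proposal:

```python
from dataclasses import dataclass

MAX_BOOST_MS = 500.0  # assumed cap on how much time advantage a bid can buy

@dataclass
class Tx:
    arrival_ms: float  # when the sequencer received the transaction
    bid: float         # priority fee attached by the sender

def boost(bid: float, half_point: float = 100.0) -> float:
    # Diminishing time advantage per unit bid, asymptotically capped
    # at MAX_BOOST_MS.
    return MAX_BOOST_MS * bid / (bid + half_point)

def time_boost_order(txs: list[Tx]) -> list[Tx]:
    # Order by arrival time minus the bought boost: paying directly
    # substitutes for latency investment, but only up to a bounded edge.
    return sorted(txs, key=lambda tx: tx.arrival_ms - boost(tx.bid))
```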
Speaker 4
00:07:01 - 00:07:43
I think the only thing that I would add to that is that L2s specifically give you a place where you can experiment more with these different types of ordering policies in a much faster way than on L1. It's kind of part of the thesis of the rollup-centric roadmap for Ethereum that you have more space for experimentation before you bake things down to the protocol level if something works. So I'm very interested to see how these experiments around fair ordering fare in the wild. We've already seen some of it play out with the Arbitrum airdrop a few weeks ago that Jon alluded to. I'm interested to see maybe some cryptographic solutions, or "solutions" in quotes, to MEV that I've heard might launch.
Speaker 4
00:07:43 - 00:08:10
The last thing that I would add is that as you have more rollups proliferating, you have more MEV that exists not just on these rollups, but between the rollups as well. So I think we're going to see the rise of cross-domain MEV, and more demand for solutions for some kind of synchronicity between different asynchronous execution environments, as liquidity fragments across many more places and you see more activity across different rollups.
Speaker 3
00:08:10 - 00:08:41
You know, I have a follow-up question to that, but I think we'll put a pin in it and move it to the section on shared sequencing, which we'll get to later. I have a follow-up question for you, Jon. So you've written a lot about the rollup-centric future for Ethereum. You had some iconic articles here, both about that and about sequencing. So can you give us your best guess, not in terms of MEV, but just overall, of what you think the ecosystem will look like in a couple of years?
Speaker 5
00:08:42 - 00:09:25
So the main thing that I expect out of most rollups is that they start to look like L1s, in the way that they run, in very simple, traditional ways. My guess is that it's not going to be all of these radically different setups where everyone's using a shared sequencer or everyone's using a centralized sequencer. My guess is that there's going to be meaningful pressure to decentralize, whether that is on a technical slash social level, or very possibly, quite frankly, on a regulatory level; we don't know what that is going to look like, but there are probably some questions when you're the only person operating this thing. So there's probably going to be pressure to do that.
Speaker 5
00:09:25 - 00:09:44
And the simplest thing that we know works is you strap a consensus set on there. You have some form of PBS, some form of auction between these validators who are on the rollups. And we know that's a system that works reasonably well. So my guess is we start to see that across a lot of rollups over the next year plus.
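A toy sketch of the "strap a consensus set on and add PBS" pattern: round-robin leader rotation, with each slot's leader committing to the highest-bidding builder's block. All names and numbers here are hypothetical:

```python
validators = ["val-0", "val-1", "val-2"]  # hypothetical small sequencer set

def leader_for_slot(slot: int) -> str:
    # Simplest possible leader rotation: cycle through the set.
    return validators[slot % len(validators)]

def winning_builder(bids: dict[str, float]) -> str:
    # PBS-style auction: the leader commits to the builder block carrying
    # the highest bid, without having to construct the block itself.
    return max(bids, key=lambda b: bids[b])

# Usage: slot 7's leader picks among competing builder bids.
leader = leader_for_slot(7)                                    # "val-1"
builder = winning_builder({"builderA": 0.8, "builderB": 1.1})  # "builderB"
```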
Speaker 4
00:09:45 - 00:10:08
I've got a question, if I can steal the mic from Michael and Hasu. So Jon, how do you square things like existing consensus sets and auctions like PBS, which generally take a little bit more latency, with users' existing desire for super low block times on rollups, as well as private mempools? How do you think these two things, which seem in tension to me, will play out in the future?
Speaker 5
00:10:09 - 00:10:28
Sure. So as far as the latency, I understand that some rollups care a lot more about that than I do. And I think low latency, certainly well below Ethereum block times, yeah, users want that. Does it have to be 300 milliseconds, 400 milliseconds?
Speaker 5
00:10:28 - 00:10:46
No, I don't really think it does. I think it's fine if your blocks are a second or whatever. I really don't notice the difference, quite frankly, as a user. I don't get the obsession with, let's do this literally at the speed of light. So that would be my general intuition on latency.
Speaker 5
00:10:46 - 00:11:40
As far as some of the privacy aspects and layering on these kinds of additional features that users like, you know, we don't want everyone to get front-run and stuff: my guess is it's going to be a little bit easier to implement a lot of that stuff on these rollups compared to something like, say, Ethereum. Because while I expect some form of PBS auction mechanism, et cetera, to arise in these rollups, they're also realistically going to be much smaller validator sets, because we have a much weaker trust assumption about what an L2 validator set can do compared to an L1 validator set. We have a much smaller, semi-trusted, weak-assumptions type of validator set that's probably running these; maybe you have 10 sequencers, 20 sequencers, whatever it is. You start to end up in a situation that looks a little more like MEV-Geth, or even a step further like Cosmos, where you start doing in-protocol builders, stuff like that.
Speaker 5
00:11:40 - 00:12:19
We're not going to have these super strict requirements of what you have to do for Ethereum today, where we have to do this commit-reveal process because, you know, you want to support this long tail of hundreds of thousands of validators. It's probably okay to have some level of trust of, hey, you know, we'll show you the bundles in this kind of builder interface with the proposer. And yeah, if you start to steal them, we're going to kick you off, because you're one of 10 or 20 validators and we know who you are, and, like, you're gone. So being able to offer those features when you have a known set of smaller validators becomes a lot easier, and similarly enforcing those things, like if you start front-running, et cetera, and, you know, having some kind of privacy trust around them.
Speaker 4
00:12:19 - 00:13:16
I think there are two interesting things here, and I'm going to go back to one thing that you said earlier about a single sequencer that I want to follow up on. One is latency in general in these systems. I think it's not very well understood, generally, that if you have an ordering protocol on your rollup that incentivizes latency gains, the structure of the market you're incentivizing is for all actors to eventually co-locate in one place. And so you may not get a rollup that has a single sequencer controlled by a single party, but you may get a rollup that is controlled by a single data center in a single jurisdiction somewhere, subject to a single set of laws, which is a huge surface area for regulatory risk, as you were saying earlier, Jon. I think having your entire chain co-locating in a single Amazon or similar data center somewhere in the world is only, like, one step removed from having a single sequencer.
Speaker 4
00:13:16 - 00:14:25
And what we really want is to have these systems geographically distributed across the world in many different places, such that no single sequencer is able to impose its utility function on the network and arbitrarily censor a set of transactions. And the other thing that I wanted to follow up on is a pushback against the notion that the PBS style is only useful if, I think what you were trying to say was, you have trust requirements between validators, and that if you have a smaller set that is more trusted, you can use something like MEV-Geth where you're sending bundles in the clear. I think what is interesting about PBS is not only the commit-reveal scheme and the privacy in it, but that it gives you the ability to iterate more quickly on features that are user-facing than validators would. If you've ever worked with these companies that run validators right now, they're not going to roll out features as quickly as your builder0x69 or your beaverbuild does on ETH mainnet.
Speaker 4
00:14:25 - 00:14:58
To bring this back: an example of a feature that you could see being offered by a builder on a rollup, but probably won't be rolled out as quickly by validators, would be something like pre-commitments, where a user is paying a small fee to a builder, and in return the builder commits to including the transaction in a certain place, with a certain state, before the builder's block has actually been included on chain. So I think this PBS style is still useful even in the absence of the super hard trustlessness requirements that you're gesturing towards.
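A sketch of what such a builder pre-commitment could look like as a signed message; the fields and the bare hash digest are assumptions for illustration, not any live builder's API:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class PreCommitment:
    tx_hash: str     # transaction the builder promises to include
    slot: int        # block slot it will be included in
    index: int       # position within that block
    state_root: str  # state the builder commits the tx executes against
    fee_wei: int     # what the user pays for the commitment

def commitment_digest(c: PreCommitment) -> str:
    # The builder signs this digest; if its block lands without the
    # promised inclusion, the signed message is evidence for a penalty.
    payload = json.dumps(asdict(c), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()
```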
Speaker 3
00:14:58 - 00:15:24
You know, that is a very interesting answer to me. And I think maybe we should jump ahead to this idea of rollup sequencing and shared sequencing, because you were saying that you would expect block builders to offer pre-commits or soft commits as a service for users. Who do you think is in the better position to offer this? Do you think it will be the block builder, or do you think it will be the sequencer who ends up selecting the block builder?
Speaker 3
00:15:24 - 00:15:27
Kind of the proposer in that kind of system?
Speaker 5
00:15:28 - 00:16:13
At the rollup level, I would fully expect it to be the sequencer. I think that there is a use for that type of service on Ethereum L1, particularly because the block times are so slow. I don't think you need that kind of thing at the rollup level, because that's the whole point of putting a consensus on a rollup: they're going to be giving you these half-second, one-second type pre-confirmations anyway, which is much faster than the L1 itself. And while there are potentially other features that builders could certainly give you, I don't think the pre-commit is necessarily as valuable at the L2 level compared to the L1 level for that typical UX, because you don't have these super long block times anymore at the L2 level. There certainly could be other features that builders give you and are very helpful with.
Speaker 5
00:16:13 - 00:16:16
But that one in particular, I would guess, is less applicable.
Speaker 4
00:16:16 - 00:17:09
So the unsaid reason why I brought that up as a feature is: if you start from the place that these consensus sets need to be decentralized, and that they have centralizing tendencies, and that you lose some of the properties that you want if they're all co-locating in one data center, then how do we provide users this really fast confirmation of the transaction being on chain in the absence of having low block times? And I think pre-commitments are one way to do that, even if you have block times that are a little bit slower. That's why I think they are still interesting in a rollup context, because I think we may have to lengthen block times anyway, and the 300, 500 millisecond instant confirmation thing may just be an aberration of the past. I'm not sure that that is true, but I think that is an interesting argument and an alternative way we can service those needs, which is why I brought it up.
Speaker 5
00:17:09 - 00:17:33
Do you think that any rollups will take that trade-off of trying to look more like the decentralization goal of an L1, where even the L2's consensus itself has longer block times and, say, a very decentralized committee? Because at that point, I would just say, why not just use the L1 as your sequencer? You do some variation of a based rollup if you want those super, super strong guarantees.
Speaker 4
00:17:37 - 00:18:14
Well, so I don't know whether a rollup will try that. I would like someone to try. And I think the design space here is pretty broad: doing a little bit faster block times than L1 but not a based rollup, or maybe a based rollup but with pre-commits. These are all different trade-off spaces that I'd like to see tried out in practice. I don't know what is best, but I do worry that this desire that users have for super fast block times, and the implementation of first come, first serve, would lead to geographically centralized chains, which would lead to censorship over time.
Speaker 4
00:18:14 - 00:18:35
And that's sort of what I'm seeing in the future. I think this is an interesting design space to play in, but frankly, I don't know what to expect people to adopt in the future. The only thing that I would touch on is that I think there are reasons why you may not want to be a based rollup independent of this too. So that's sort of a different dimension of choice than block times and pre-commits.
Speaker 2
00:18:35 - 00:19:20
Hey guys, I almost hate to jump in on such an interesting conversation here, but on the off chance that there are a couple of listeners out there who might not be fully following everything that we're saying, I'd love to just back up for a second and level-set on a definition of terms. So, at the risk of starting at a pretty basic level: maybe, Jon, I could call on you, because again, you've done so much work on this. Can you give us a high-level definition of what a sequencer looks like today? And for an audience that's very familiar with what a validator does from a functional standpoint, could you compare and contrast how they're different? And then maybe at the end of that definition, if you could segue into what we mean when we talk about a decentralized network of sequencers, or shared sequencing, I think that'd be really helpful.
Speaker 5
00:19:21 - 00:19:52
Sure. So the way that rollup sequencers work today is quite simple, because there's one entity running it for each of them. As a user, I send my transaction to this centralized sequencer. This centralized sequencer then determines, based on what they've received, the ordering of those transactions, and they give immediate soft confirmations to users, which is really easy for them to do because, in particular, they're not doing any kind of complex ordering, you know, waiting for long block times or anything. They're basically just ordering transactions first come, first serve as they receive them.
Speaker 5
00:19:52 - 00:20:32
They give you a soft commitment of, hey, I will eventually put this on the L1 in exactly this order. So you are trusting the sequencer for that ordering. They could lie if they wanted to; they could reorder transactions, put them in a different order from the feed that they've put out there, because there is no explicit penalty against them. So they're giving out that real-time feed. And then after the fact, once they have published that and the users have the soft commitments, they're applying the deterministic state transition function, whatever it is for that rollup: okay, I have all these transactions, I execute all of them, and I compute the updated state of all of those things.
Speaker 5
00:20:33 - 00:21:02
So, generating a new state root, et cetera. And pointing out, because this will probably come up later: sequencing and determining the state by executing, while they are done by the same party today, are two logically distinct roles, which can be separated. Just an important point, because I'm sure we'll touch on that later. But what they are doing today is they're also executing and generating that state. And then eventually they take that updated state and they post it on the L1, along with all the transaction data, et cetera.
Speaker 5
00:21:02 - 00:21:55
And that's the point at which you actually have a confirmation: now any full node can look at the data that's been posted to the L1, the state root that's been posted to the L1, and say, OK, this is definitely going to be the state of this rollup. In the case of optimistic rollups, the L1 isn't aware of what is finalized, because you have this fraud challenge timeout period, but once it is posted to the L1, presuming the full transaction data is there, anyone can compute what the state of this rollup will be. So the difference is obviously going to be in decentralizing what that actual sequencer role looks like. A lot of it is that today you are entirely trusting that sequencer for real-time censorship resistance, liveness, and the ordering of your transactions. They could censor you in real time, they could go down, they could reorder transactions as they want; they could do all of that stuff.
Speaker 5
00:21:55 - 00:23:06
And there's really no explicit penalty against them. So you decentralize that role in some way, whether that's just a simple leader selection algorithm, or you implement a whole new consensus mechanism on this layer 2 that looks very much like an L1's at that point, such that there is an actual penalty. So in the simple example there, if, say, we throw a Tendermint consensus on a rollup to decentralize that sequencer, then as opposed to the centralized sequencer just giving you a soft commitment, you now have a whole validator set on that L2 that, even prior to the layer 1 confirming it, says: yes, we also agree that this is the ordering, and this is what will eventually go on the L1. So even before it gets posted on the L1, if you tried to change that ordering, they would be slashed for reordering the chain, because they have now made a much stronger soft commitment to you, with the economic weight of whatever their stake is, saying: yes, this is actually what will end up on the L1. And that also improves the censorship resistance and liveness of the system, just because you're no longer waiting on just one operator who could potentially censor you to include your transaction; you now have, you know, maybe you're cycling through 100 validators.
Speaker 5
00:23:07 - 00:23:42
So it improves your real-time censorship resistance and ordering guarantees, with the important point that, for any mature rollup, you should get eventual censorship resistance and eventual inclusion at exactly the same guarantee as whatever the L1 itself provides. Because in a mature system you should always be able to force inclusion directly through the L1 itself and do stuff like that, such that even if the entire L2 is censoring me or it has gone down, I can always get in through the L1 directly. But those real-time guarantees get much, much better as you decentralize that operator.
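A compressed sketch of the lifecycle just walked through: FCFS ordering with instant soft confirmations, then execution and a batch-plus-state-root posting to the L1. Everything here is schematic; execute and post_to_l1 stand in for rollup-specific logic:

```python
class CentralizedSequencer:
    def __init__(self, execute, post_to_l1):
        self.execute = execute        # rollup's state transition function
        self.post_to_l1 = post_to_l1  # publishes data + state root on L1
        self.pending = []             # first-come-first-serve order

    def receive(self, tx) -> str:
        self.pending.append(tx)
        # Soft confirmation: a promise about eventual L1 ordering that,
        # today, nothing penalizes the sequencer for breaking.
        return f"soft-confirmed at position {len(self.pending) - 1}"

    def close_batch(self):
        # Ordering and execution are logically distinct: state is computed
        # after the ordering feed has already gone out to users.
        state_root = self.execute(self.pending)
        # Only once data and state root hit the L1 can any full node
        # verify what the rollup's state will be.
        self.post_to_l1(batch=self.pending, state_root=state_root)
        self.pending = []
```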
Speaker 2
00:23:44 - 00:24:21
Yeah. And I really like that example, actually. Just imagine the issue with having a single sequencer on a rollup: it's obviously okay for now, but you would never have that at the layer 1, for instance, right? Imagine having one validator on Ethereum; that defeats the entire point. And I think users can intuitively grasp that that's probably not a permanent state of affairs, but in the meantime, it's definitely sort of a honeypot for MEV-type activity. Maybe either Robert or Jon, before we get to that very important point about sequencers combining the roles of proposer and builder that we've separated on the layer 1...
Speaker 2
00:24:22 - 00:24:39
Could you actually just talk a little bit about the different models that we're seeing in terms of MEV on rollups? Like, if you could compare and contrast MEVA, the auction style that we're seeing on Optimism, versus Arbitrum, which kind of wants to be a fair-ordering type rollup. Whoever wants to take that.
Speaker 4
00:24:41 - 00:24:59
I can start riffing, and Jon, I'm sure you'll follow up when I'm done. So MEVA was this really early idea for how you handle MEV. It predates the official launch of Flashbots; it's in an ETH Research post, I think by Karl from Optimism, in 2018 or 2019.
Speaker 4
00:25:01 - 00:25:59
And at the time, rollups were just bubbling up as an idea; they weren't as mature as they are now. And I think the team was thinking about ways to perpetually fund rollups and to use that funding for public goods. And MEVA, which stands for MEV Auction, was this idea that the rollup sequencer would auction off the right to propose a block in an open market. So every single block, you could bid: hey, I'm willing to pay 1 ETH or 2 ETH for the right to sequence this entire block. That's the idea behind MEV auctions. It's kind of interesting, because there are some reasons why this structure isn't a good one, and why the Optimism team has moved away from MEVA.
Speaker 4
00:26:00 - 00:27:04
And it differs a little bit from the PBS, MEV-Boost style market that we see on ETH layer 1. The way that it differs is that, as I remember the MEVA structure, you're auctioning off a block in advance. So you are buying a block in the future, as opposed to bidding in real time for the current block, which is slightly economically inefficient compared to a bunch of people bidding in real time for inclusion of a full block like you have in MEV-Boost today. Both of these auctions are in contrast to first come, first serve, which is being pursued by the Arbitrum team right now, and is the way that most rollups actually work in practice today, where the sequencer is attempting to include transactions in the order that it receives them, with sort of no way of expressing your preference of priority over other transactions. And this can be problematic because it incentivizes one of two things, depending on the economics of the domain.
Speaker 4
00:27:05 - 00:27:56
The first is it incentivizes spam. If your transaction is just included on a first come, first serve basis, and there's really cheap, low-fee block space, then a way that you can optimize for MEV is just to hit the sequencer as hard as you can and do your MEV search on chain. So you actually implement, like, a backrun arbitrage model in a smart contract, and just repeatedly hit your smart contract, trying to backrun transactions probabilistically and reverting when you're not able to successfully do that. And as a searcher, you have no idea when this is going to be successful, but you're outsourcing the work of finding out to the sequencer, you're using up gas on chain because it's so cheap, and the single time when you probabilistically are able to find some arbitrage on chain is going to pay for all of your reverts.
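A back-of-the-envelope version of that searcher calculus, with entirely made-up numbers:

```python
# Hypothetical numbers: cheap blockspace makes spam rational under FCFS.
gas_per_attempt = 0.0001  # ETH burned by one reverted on-chain probe
attempts = 100_000        # probes fired at the sequencer
p_hit = 1 / attempts      # chance any single probe lands the opportunity
profit_if_hit = 50.0      # ETH captured by the one successful backrun

ev = attempts * p_hit * profit_if_hit - attempts * gas_per_attempt
print(ev)  # 40.0 -> spamming is +EV, and the sequencer eats the load
```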
Speaker 4
00:27:57 - 00:29:08
So this is kind of the way that we saw first come, first serve play out on Solana, as an example, where searchers would send millions, tens of millions of transactions to try to capture a single liquidation, you know, probabilistically hitting the chain as much as possible. The second way that you see this play out on first come, first serve domains, which is like Arbitrum today, and Optimism as well since they're also first come, first serve today, is optimizing for latency. So you try to find where the single sequencer is, co-locate with it, and optimize your code down to the absolute smallest millisecond of latency possible, to be able to have the first transaction hit the sequencer after some MEV extraction opportunity is created. And this creates all sorts of games that are being played today. Arbitrum had a problem where there was an endpoint that searchers would listen to on their sequencer, and searchers would try to open up many, many, many different connections to this endpoint, I think tens of thousands of them, and it was really hurting their sequencer's ability to operate efficiently. And searchers were doing this because they're able to get some level of edge in the latency game here.
Speaker 4
00:29:08 - 00:29:40
And like I said, the other externality of latency games is that they incentivize co-location in a single geographic location, and over time, this is a vector for censorship. So these are the three broad ordering protocols that I would highlight right now: there's the PBS style on ETH layer 1 with MEV-Boost, where you're bidding in real time; MEV auctions, where you're bidding in advance for a future block; and first come, first serve, which is trying to include transactions as they come. Was that what you were asking, Michael?
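The three ordering regimes Robert lists, rendered as toy selection rules; purely illustrative, not any chain's actual implementation:

```python
def fcfs(txs: list[dict]) -> list[dict]:
    # First come, first serve: arrival time is the only preference channel,
    # so competition leaks into latency races and spam.
    return sorted(txs, key=lambda tx: tx["arrival_ms"])

def meva_winner(advance_bids: dict[str, float]) -> str:
    # MEV Auction: the right to order a *future* block is sold up front.
    return max(advance_bids, key=lambda b: advance_bids[b])

def pbs_winner(blocks: list[dict]) -> dict:
    # PBS / MEV-Boost style: whole blocks are bid in real time for the
    # *current* slot, pricing the opportunity as it actually exists.
    return max(blocks, key=lambda b: b["bid"])
```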
Speaker 2
00:29:40 - 00:30:01
That's exactly it. Yes. Maybe we could actually dig into that. Actually, Hasu, maybe could I turn this over to you here? Because I know you gave a great talk about this at MEVconomics, which was definitely worth watching and which I would highly recommend, basically on the importance of PBS on rollups, or layer twos.
Speaker 2
00:30:01 - 00:30:13
So maybe with Robert's explanation there serving as a jumping-off point, could you describe why it's important to transport this concept of PBS from ETH main chain up to the rollup layer?
Speaker 3
00:30:14 - 00:31:14
Yeah, I think it connects to something that both Robert and Jon were already talking about, which is that right now the sequencer in these layer 2 domains basically plays two roles, right? It proposes new blocks, but it also decides on the ordering within those blocks. And when you try to decentralize the sequencer, you introduce these centralizing tendencies that especially Robert was mentioning: either to spam the chain, or to optimize your latency to the degree that the chain's sequencing would centralize into a single geographical domain or jurisdiction. And we actually know one mechanism from Ethereum layer 1 that satisfies all of these requirements.
Speaker 3
00:31:14 - 00:32:07
And that is to have some leader rotation mechanism with a decent amount of block time, so it's not so low that latency plays an overwhelming role. And then you outsource the construction of these blocks to a professional market of block builders, and you manage to isolate the centralizing pressure in the block building role that way, in a role where it's more easily contained, and you try to create as much competition as possible in the block builder market. And so the point of my talk was that there are different mechanisms to decentralize layer 2 sequencers, and the PBS that we know from layer 1 is still the best thing that we have.
Speaker 3
00:32:07 - 00:32:20
And so I think it would be very worthwhile for more layer twos and rollup ecosystems to think about how they can best port this model to their respective domains.
Speaker 2
00:32:22 - 00:32:51
Yeah, absolutely. And I think that's a good way to frame the rest of this discussion. So maybe this is a little bit tough, because this hasn't actually necessarily happened yet, but let's try to focus the rest of this discussion on the sequencing role at the rollup layer and talk about the roadmap to decentralizing sequencer sets, or even shared sequencer sets. And then I want to talk about decentralizing the role of block building. And then, Robert, this is really where we're probably going to call on you a lot to talk about SUAVE and what that's going to look like.
Speaker 2
00:32:52 - 00:33:28
But maybe, thank you for that. So before we get there, Jon, can we talk a little bit, because I know you've written about this pretty extensively, about what the roadmap looks like to decentralizing sequencer sets? So maybe for listeners, I almost picture this as: if you talk about decentralizing sequencer sets, that's one rollup with many dedicated sequencers that sequence transactions on that particular rollup. And there's this other world of shared sequencer sets, where you say: hey, there are a bunch of different rollups that settle back to Ethereum; wouldn't it be great if there was one network of sequencers that sequences transactions for all of these different rollups?
Speaker 2
00:33:28 - 00:33:31
So could you kind of talk about the roadmaps for both of those?
Speaker 5
00:33:31 - 00:34:10
So the basic consideration that a lot of these rollups are going to be making when they're thinking about how to decentralize is that rollups effectively want to inherit the guarantees of their layer 1 eventually. That includes the censorship resistance and liveness of that system. So the simplest form of shared sequencer is: just let the layer 1 sequence your blocks for you, effectively. This is some variation of what's been called total anarchy, based rollups, pure fork choice; there are a number of names. But the basic idea is you're letting the L1 choose your ordering for you.
Speaker 5
00:34:10 - 00:34:31
You have this kind of PBS interaction at the layer 1, where that's how your block gets selected. It works, but in the simple implementation of it you lose value to the L1. All the value would go to whoever the layer 1 block producer is. Some argue that's a good thing; that is an argument that, for example, Justin has made for based rollups, that that is a desirable thing. Whereas...
Speaker 3
00:34:32 - 00:34:48
I would point out that you don't actually lose all of the value to the layer 1, because you can create systems where the layer 1 sequencer has to burn some amount of tokens. So, very similar to an EIP-1559 mechanism.
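A minimal sketch of that burn idea, assuming the L1 proposer must burn rollup tokens at an EIP-1559-style floating base fee to validly sequence a rollup block; the parameters are invented:

```python
class BurnGate:
    def __init__(self, base_fee: float = 1.0):
        self.base_fee = base_fee  # rollup tokens burned per sequenced block

    def sequence(self, burn_amount: float, block_was_full: bool) -> bool:
        if burn_amount < self.base_fee:
            return False  # L1 proposer didn't burn enough: block invalid
        # Burned value accrues to rollup token holders instead of leaking
        # entirely to the L1 block producer.
        self.base_fee *= 1.125 if block_was_full else 0.875  # 1559-style
        return True
```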
Speaker 2
00:34:48 - 00:34:50
Yes, you can.
Speaker 5
00:34:50 - 00:35:07
There are still ways to get value back to your rollup. It is trickier to do, but you can do it. But yeah, in the simple implementation at least, that is what it would look like. And for certain layer 1s, that is not necessarily an option. Yeah.
Speaker 5
00:35:07 - 00:35:45
It also looks very different depending on what your layer 1 is, whether it's something like Celestia or something like Ethereum: do you have a smart contract that's arbitrating this kind of thing? It does start to look different across different data layers. In particular, when people were starting to talk about sovereign rollups on Bitcoin, some were very against it, because you could try to send that value down, and potentially rollups don't want a token, for whatever reason, whether that be regulatory or otherwise. In which case the value is forcibly going to go to the L1 if you don't implement that mechanism. But yes, it is not strictly the case that you have to lose all of the value to the layer 1.
Speaker 5
00:35:45 - 00:36:05
That is a good point, to be clear. The other main thing with this, which I think is the bigger problem quite frankly, is just the UX of it: OK, you're back to the layer 1 now. You get layer 1 block times. There's no really practical, effective way to implement faster block times than that if you're doing any form of this layer 1 sequencing.
Speaker 5
00:36:05 - 00:36:48
So that's where it comes in: OK, for these layer 2s, we implement something else in addition to what the layer 1 gives us, where we're always going to eventually get those 100% layer 1 guarantees at whatever the layer 1 block time is. The point of all these other mechanisms is: can we get, like, 99% guarantees, or whatever the number, in between those layer 1 block times? Because the reality is that most users do not want to wait that long. So the simple option is some form of, you know, you don't even necessarily strictly need consensus, because the layer 1 eventually provides you consensus. You just do some form of leader selection, which could be, you know, basically Tendermint minus the consensus, where it's just this kind of round-robin style thing.
Speaker 5
00:36:49 - 00:37:30
And then you just inherit the layer 1 consensus. There are problems with this, in my mind, when you don't have a consensus, particularly stuff like liveness. You're not going to be able to increment the leader quicker than the layer 1 blocks. You're not going to be able to give any good soft pre-confirmations if there isn't any kind of consensus signing off on them. So the reality is, I think most of them will implement some form of consensus. Whether that starts off as, you know, a proof-of-authority type of thing, I think most of it gravitates towards a consensus that looks pretty similar to a layer 1's, whether that's Tendermint, HotStuff, whatever, some variation of that. That probably gets implemented at most of these layer 2s.
Speaker 5
00:37:30 - 00:37:58
It seems like the simplest thing. If I were pressured as a rollup, whether that's regulatory pressure or whatever else, to decentralize my sequencer right now, that is what I would be doing, quite frankly: the tried and tested things of implementing a simple consensus set with some form of PBS on top of that. The other idea that a lot of people are starting to get excited about right now is: OK, it's kind of hard, it's kind of annoying, for all the rollups to figure that out for themselves.
Speaker 5
00:37:58 - 00:38:28
What if we just have one layer that figures it out for all of us, basically? And that's the idea of a shared sequencer. The layer 1 is a form of shared sequencer, but for the reasons that I described earlier, it's not very feature-rich; you're going to get slow confirmations, et cetera. So what if we make another shared sequencing layer that is actually optimized to be a shared sequencing layer? And then all of these rollups can just opt into this shared sequencer and say: hey, for all of these rollups on top of it, we will just use you as our sequencer.
Speaker 5
00:38:29 - 00:38:41
And then they can still give those fast pre-confirmations. It's nice and easy for the rollups; we don't have to worry about figuring this out for ourselves. But it does come with a host of other complications, which I imagine we'll get into as well.
Speaker 2
00:38:41 - 00:39:03
Yeah, could we get into some of those complications? Because this is a pretty complicated area and 1 that's definitely developing sort of in real time. And from my understanding, when I hear people talk about a shared sequencer network, it's something that's far off in the future. So can you describe why that is? What are some of the complications there?
Speaker 2
00:39:03 - 00:39:33
And then one question that I always have listening to people talk about this is: who do you envision these sequencers being? You know, from one standpoint, I could imagine validators on layer 1s, kind of like the Blockdaemons or Figments of the world, saying: hey, what makes sense for me and my business is to go and be a sequencer on some of these layer twos. Or do you see them being a totally independent network of sequencers? Can you give us kind of an idea of who these sequencers might end up being in practice? And then, yeah, some of the complications.
Speaker 5
00:39:33 - 00:40:20
As far as just who the entities are, my strong guess is that they are very similar, slash the same, entities. It's a very similar role, potentially at higher resource requirements, since some of them will be running at a bit higher speed than, for example, layer 1 Ethereum, but probably not that different from a lot of other chains. So I think it'll be quite similar from that perspective. And then as far as what the shared sequencers actually look like and what the difficulties are: I'll start with the simple case. You could have a shared sequencer that is just a fully stateful, normal sequencer that works the way any proposer normally works, where I'm fully executing all the transactions for all of these chains.
Speaker 5
00:40:20 - 00:40:53
That can obviously only scale so far, because I, as one entity, am probably not going to be fully executing, you know, a thousand chains. And if the goal is actually to stick a ton of rollups on top of this thing, you probably need to get a little bit more creative: okay, how do we scale this thing, open it up to everyone, and make it easier to deploy to? The two biggest shared sequencers that have announced so far have a very similar concept; Espresso and Astria are the two that have announced recently, and there are others building as well.
Speaker 5
00:40:53 - 00:41:46
But the basic idea of what they're doing to try to scale this out to many rollups (hey, we want this future of a million rollups, and we want to be able to provide this easy service) goes back to that point I was describing earlier: while a sequencer today does both of those roles, telling you the order of transactions and then executing them and computing the state, those are logically separate roles. And these shared sequencers completely strip those two parts apart. So shared sequencers like Espresso and Astria say: we will order your transactions for all these rollups and say this is the order that we're going to include them in, but they do not execute any of them. They have no notion or understanding of what these transactions actually are at all. And that removes the really difficult part of this, because: okay, now I don't need to hold the state for these anymore, I don't need to run the computation of executing these anymore.
Speaker 5
00:41:46 - 00:41:50
It's very simple: I'm just a dumb pipe that is saying the transactions go through in this order.
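A sketch of that "dumb pipe": the shared sequencer orders opaque blobs per rollup namespace and never executes anything. Schematic only, not Espresso's or Astria's actual interface:

```python
from collections import defaultdict

class SharedSequencer:
    """Orders transactions for many rollups without executing any of them."""

    def __init__(self):
        self.queues = defaultdict(list)  # rollup namespace -> ordered blobs

    def submit(self, namespace: str, tx_blob: bytes) -> None:
        # The blob is never decoded: no state, no execution. That is why
        # it scales, and why a builder must sit in front of it for the
        # ordering to be anything other than arbitrary.
        self.queues[namespace].append(tx_blob)

    def commit(self) -> dict[str, list[bytes]]:
        ordered, self.queues = dict(self.queues), defaultdict(list)
        return ordered  # each rollup's own nodes execute their namespace
```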
Speaker 3
00:41:50 - 00:42:10
If I can jump in here. I mean, I thought about this a fair bit when this came out. And to me, it really seems like such a huge cop-out, in a sense. Because you can claim all day long: oh yeah, my proposers don't have to execute any transactions, and so it's super scalable. But guess what?
Speaker 3
00:42:10 - 00:42:51
On Ethereum layer 1, the proposers also don't have to execute any transactions, right? They already commit to a block without knowing what's in it. And the same will be true in these new shared sequencing layers. There is a party that still needs to execute all the transactions, because they need to know the state change that every single transaction will cause, because otherwise they couldn't compute the most efficient ordering, and that's the block builder. And so in a sense they say, yeah, the sequencing layer is going to be decentralized, but so far they are not really presenting any idea for how the block building layer can be decentralized, right?
Speaker 3
00:42:51 - 00:43:19
Because when you have a shared sequencing layer, maybe with a hundred domains connected to it, then effectively what you need is this super builder, right? One that simulates state changes across hundreds of different domains, and it'll just be insanely resource-hungry and centralizing. And so I think that's the downside, right, of cross-domain MEV and of shared sequencing.
Speaker 5
00:43:21 - 00:43:54
Yes, I definitely agree with you. It does make it very easy on the sequencing layer, because you completely remove that; you no longer need validators that re-execute all these things. So yeah, you can in theory have a very decentralized shared sequencer set, but yes, you are pushing that problem off to another layer. And that's why, if you talk to teams like Espresso and Astria, they're like: yes, this does not work if you do not have someone sitting in front of the shared sequencing layer who actually knows what these transactions are, because otherwise, yeah, they're just including them in a random, dumb ordering.
Speaker 5
00:43:54 - 00:44:19
Most of the transactions probably won't even work; it'll be an incredibly inefficient ordering. For the orderings to be good, you do need some form of builder to sit in front of them. And is that going to be one gigantic super builder who's centralized, or is that going to be different builders running for different domains, accepting bids for different domains? It starts to get a little more complicated the more domains they're accepting.
Speaker 5
00:44:20 - 00:44:44
But in theory, what you would obviously like to have is a decentralized block builder sitting in front of them, something along the lines of SUAVE, such that you don't need a gigantic centralized entity sitting in front of them, because otherwise they are entirely reliant on that centralized entity to determine the ordering. They're not going to be able to enforce complicated things otherwise. They do need that entity sitting in front of them.
Speaker 3
00:44:45 - 00:45:28
Maybe, Robert, to bring you in here. So, we talked about how shared sequencing allows for the extraction of cross-domain MEV through this notion of enforced cross-domain synchronicity. But there are other ways as well to get more efficient at the extraction of cross-domain MEV, right, that don't necessarily rely on a shared sequencer. So could you maybe walk us through what these alternatives are, for example for a chain that is not part of a shared sequencer, or a chain that maybe doesn't even have the notion of proposer-builder separation?
Speaker 4
00:45:29 - 00:45:59
I think there are two dominant models in the market, or at least ideas that I know about, and maybe you're hinting at some third, hidden idea that I don't know about. The two models that I would know of would be shared sequencing, so providing some atomic synchronicity between domains; we've already talked about that. The second is that you may not be able to provide technical atomicity, but you could provide economic atomicity to users.
Speaker 4
00:45:59 - 00:47:05
So another way of saying this is that users could express the preference: I want transaction 1 included on domain A, or something to happen on domain A, and I want something to happen on domain B, and I'm only willing to pay if both of these conditions are met. And in effect, by providing this economic atomicity, you get the nice properties of technical atomicity, but you are outsourcing the execution of that to specialized parties that can break it down and make it happen on two different domains, in the absence of any actual communication or shared sequencing layer between the domains. And in that way, you don't need any integration between these domains. You don't need the domain to adopt a particular type of ordering model, you don't need PBS; you just need sophisticated actors that can understand risk and are really good at executing across these different domains, and those actors taking on these economically atomic bids. So is that what you're gesturing at, Hasu?
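A sketch of such an economically atomic preference: the user signs a conditional payment, and an executor gets paid only if both legs land. The structure is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CrossDomainPreference:
    leg_a: str      # tx the user wants included on domain A
    leg_b: str      # tx the user wants included on domain B
    payment: float  # reward, claimable only if both legs are included

def settle(pref: CrossDomainPreference,
           included_a: bool, included_b: bool) -> float:
    # No shared sequencer needed: the executor carries inclusion risk on
    # both domains and is paid only when the joint condition is met.
    return pref.payment if (included_a and included_b) else 0.0
```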
Speaker 3
00:47:05 - 00:47:36
Yeah, so I think it hints basically at the question of whether all chains will end up being sequenced by another chain, or whether there's any alternative to that world. And I think the idea of shared sequencing is quite compelling, but I'm personally also very interested to hear how we can provide similar guarantees without just giving in to the idea that everybody outsources their fork choice in that way.
Speaker 5
00:47:36 - 00:48:17
Yeah, one thing I should probably clarify, because I realize it is a bit complicated and I didn't talk about exactly the guarantee that a shared sequencer gives you. Following from that example that Robert gave: say you express a preference to SUAVE of, hey, I will only pay if, say I'm doing an atomic arbitrage, I only want the leg on rollup A to execute if the leg on rollup B also executes. A SUAVE executor can try to fulfill that preference for you and bid on both of those. What they can guarantee you is what the state will be if they execute these things, because they are stateful and they know what the result will be.
Speaker 5
00:48:17 - 00:48:49
So they understand that. What they can't guarantee, to your point, is: will both of these chains include these transactions atomically at these block heights? And that is the part that a shared sequencer can give you, that kind of last leg, cutting out that last risk. It can't tell you what the result of executing these things is, but it can tell you: I will include these two transactions at these two block heights. And then the builder is what guarantees the other part: if they are included at these positions, this is what the result will be. So it is cutting out that last bit of risk.
Speaker 5
00:48:49 - 00:49:21
It is unclear how useful that is, to be quite frank. I don't think that's honestly a selling point for rollups to opt into it, at the very least the pitch of "we get better assurances around cross-domain MEV." I think the selling point, if shared sequencers are going to be successful, is the laziness of plugging into them, and potentially that they offer better network effects. I don't think that you're going to see rollups opting in because "we can probably extract cross-domain MEV more efficiently and return more value to our rollup." Like, I don't think that's a strong selling point for them, quite frankly.
Speaker 5
00:49:22 - 00:49:56
The one other thing I would say, as far as how to get cross-chain atomicity, is a very different approach. It's also not live, but it's what Anoma has been looking at with Typhon, where you can have what are called Chimera chains pop up, which are effectively on-demand side chains: if you have validator overlap between different chains that are running Typhon consensus, they can make those multi-chain atomic commits kind of on demand, making those atomicity guarantees. But it's a very, very different approach, and you need to be using Typhon to do that.
Speaker 4
00:49:57 - 00:50:01
It's sort of analogous to what's going on in the Cosmos world right now, right?
Speaker 5
00:50:02 - 00:50:40
Yeah, Typhon's effectively a replacement for Tendermint; it's a kind of next iteration of that. There are other changes as part of it as well, but the one major change that would be applicable here is that if you have validator overlap between different chains that are running Typhon, which in practice tends to be the case, those validators can make multi-chain commitments. So you get a weaker economic guarantee, where whichever validators overlap are able to make this guarantee for you. But it is a very different way of trying to provide some kind of multi-chain atomicity guarantees.
Speaker 4
00:50:42 - 00:51:26
I think I want to come back to the question you asked, Hasu, which I thought was a very interesting prompt: will all chains be sequenced by another chain in the future, or is there some alternative? And I'm kind of musing on it as we talk here. My intuition is that you can't get away from having your chain's sequencing be influenced by another chain, and it's just a question of degrees: how much you want to submit to this, or be hostilely taken over by external forces in the market, to be honest. So, like, at the limit, if you want to do first come, first serve, and you have low, low, low block fees, what you're going to see is a market for spam come up in the world.
Speaker 4
00:51:26 - 00:52:17
So some chain somewhere, or some sort of abstraction, that allows the user to say: hey, spam my contract as hard as possible, because I have this economic preference, you know, I want to land this liquidation or something like that. And this, to me, is kind of a hostile auction taking over what should be a first come, first serve model. And I don't see how you prevent that from happening. The alternative would be something like a market for transaction ordering on top of first come, first serve, where you're paying sequencers, which are supposed to be honestly reporting when they receive transactions locally, to instead dishonestly report those times, because they're getting paid to do so. So I don't see how you can prevent these kinds of out-of-band payments.
Speaker 4
00:52:17 - 00:52:25
And that would lead me to the conclusion that all chains are going to be influenced by the ordering of another chain.
Speaker 3
00:52:26 - 00:53:06
When you say all chains are influenced by the ordering of another chain, I think this is even true today, right? Because the biggest block builders on Ethereum today are basically those that have the best connection to Binance, right? The ones that are best at extracting CEX-DEX arbitrage. And that shows the extent to which the ordering of one domain is really contingent on privilege in another domain that is systemically important. And so even though Ethereum has a decentralized proposer set and it has PBS, it still has this ginormous centralization spillover from Binance.
Speaker 3
00:53:07 - 00:53:21
And that's why it's so important that more and more trading volume actually moves to chains that do not have this kind of ordering privilege that Binance has.
Speaker 4
00:53:21 - 00:53:44
I think it's even stronger than that, right? It's so important that more and more liquidity and trading happens on actually decentralized domains, rather than just moving onto crypto rails, right? Because you can imagine a Binance rollup, which looks exactly like Binance, in exactly the same geographic location, except it's on a rollup. And I don't think it would make a substantial difference today if it's not actually properly decentralized.
Speaker 3
00:53:46 - 00:54:16
So I think at this point, it makes sense to build a bridge to SUAVE. So, Robert, what do you think? We talked a lot about a world where all of a sudden the execution of transactions is spread across many different domains, right? So we have all of these rollups, we have layer twos, we're probably even going to have a lot of layer threes, and you have shared sequencers. So how do you see the role of SUAVE in that world?
Speaker 3
00:54:17 - 00:54:26
What is the starting point for users, and how does a transaction track through this new and much more complicated landscape of domains?
Speaker 4
00:54:28 - 00:55:10
Very good prompt, and there's a lot there to unpack, Hasu. As a starting point, I'm making the assumption that you need some notion of a builder on all these different domains, because people will have complicated preferences on ordering and other types of actions, like pre-commits, that they want. So you need some builder; that's premise one. Premise two, it needs to be decentralized, and we can talk about why that is if you're interested. And premise three, in order for it to be decentralized, you need some notion of privacy and some ability for parties to make commitments to each other, to facilitate collaboration among untrusted actors.
Speaker 4
00:55:11 - 00:56:04
And so what I see SUAVE as is this platform for parties in the MEV supply chain to make commitments, to communicate, and to manage their privacy between each other, to provide ordering preferences not just to Ethereum, but to all of these different domains: the L3s, the L2s, L1s, maybe even centralized exchanges at some point in the future. So that's one part of your question. You asked me about a user's journey, I think. And the way that would work is that a user needs to have some funds on SUAVE, which is a separate domain. So they would bridge funds to that domain, and they interact with these specialized contracts that allow them to express their preferences over another domain's ordering or another domain's state.
Speaker 4
00:56:04 - 00:57:06
You may have some preference like, hey, to come back to the earlier example, I want this transaction included in domain A and another transaction included in domain B, and I'm only willing to pay if both of these things are included at the same time. So a user creates predominantly a signature, so not actually a transaction, but a signature that expresses this preference, and passes it to the SUAVE executor marketplace, where executors can then use the privacy and commitment abstractions within SUAVE to collaborate on this and make sure that the user's preferences get executed. And only after that atomic cross-domain preference is executed can parties claim the reward that the user has placed on that preference. So that's a long answer on how I think about SUAVE and its role within these different places, and a simple example of how a user would interact with it and what's going on in the back end with executors.
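As a rough illustration of that flow, here is a minimal sketch of such a signed preference. All of the names and the digest scheme are assumptions for illustration; SUAVE's actual encoding and signature format are not specified here:

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class Leg:
    domain: str   # e.g. "ethereum" or "polygon"
    tx_hex: str   # the signed transaction the user wants included there

@dataclass(frozen=True)
class Preference:
    legs: tuple           # one or more Legs across domains
    all_or_nothing: bool  # pay only if every leg is included
    reward_wei: int       # escrowed on SUAVE, claimable after execution
    deadline: int         # SUAVE block height after which this expires

def preference_digest(pref: Preference) -> bytes:
    """Canonical digest the user would sign; a stand-in for the real scheme."""
    blob = json.dumps(asdict(pref), sort_keys=True).encode()
    return hashlib.sha256(blob).digest()

# A user asking for atomic inclusion on two domains, with a 0.1 ETH reward:
pref = Preference(
    legs=(Leg("ethereum", "0xaa..."), Leg("polygon", "0xbb...")),
    all_or_nothing=True,
    reward_wei=10**17,
    deadline=1_000_000,
)
digest = preference_digest(pref)  # signed by the user, gossiped to executors
```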
Speaker 2
00:57:07 - 00:57:17
Hey guys, quick break from the show here. I want you to imagine something for me. Imagine swapping two stablecoins on-chain, paying zero dollars in gas, and instead getting a rebate of $2,000.
Speaker 2
00:57:18 - 00:57:40
This is something that's actually happened on-chain. To understand how, I want to introduce and thank this season's sponsor, Rook. Zooming out for a second, the current state of affairs in MEV is that billions of dollars have so far been extracted from users' pockets. Rook is coming in and saying, enough is enough. Blockchains should drive value to their users and the applications they use.
Speaker 2
00:57:40 - 00:58:00
It is time to leave the hobbyist era behind us. If we want to move forward, we need to get this right. That's why Rook has built a custom blockchain settlement network, one that gives you full control over the entire transaction lifecycle. Today, you can connect to an open source Rook node. The Rook protocol will automatically match, bundle, and auction your orders and transactions in seconds with zero gas overhead.
Speaker 2
00:58:00 - 00:58:28
Also, any MEV that's discoverable along the way will be returned to you, the user. Created as a collaboration between the industry's top mechanism designers and MEV engineers, Rook was built from the ground up to be scalable, safe, and programmable. You can get your own mempool, choose searchers and builders, and link your mempool with others to discover even more MEV. You can define how the MEV is shared and delivered as well. And Rook can process basically anything, from transactions to meta-transactions and more.
Speaker 2
00:58:28 - 00:58:49
This is the way that blockchains basically should have been from day 1. So if you're a user listening to this, here's what I want you to do. I want you to go to your wallets, go to your favorite app, your node provider, and say, Hey, I want you to be working with these guys, Rook. I want the MEV that I create to be redistributed back to me. If you're a developer and you want to stay ahead of the game, the best way to do that is to follow them on Twitter.
Speaker 2
00:58:49 - 00:59:11
They are @Rook, or even better yet, slide into their DMs. They are lightning responsive. They'll get you set up today. And if you do slide into those DMs, as always, please tell them that I sent you. I was actually just going to say, in all the research that we did before this season, SUAVE was the single topic that people most wanted to hear about.
Speaker 2
00:59:11 - 01:00:08
So Robert, I have a bunch of questions for you. I don't want to get too deep in the weeds too quickly, but I'm very curious about how standardizing preferences is going to work in practice. A question that I had, and this was actually flagged to me as something that searchers are pretty interested in, is: let's say you have two different blockchains with different block times. To use your example, you want to atomically place a transaction on blockchain A, like Ethereum with a 12 second block time, and another on, let's say, Polygon with a 2 second block time, and there's an arbitrage that you want to place. Let's say this arbitrage pops up 1 second into the Polygon block and 3 seconds into the Ethereum block, and you lock in the Polygon leg of that arbitrage, but then there's still an enormous amount of time, right, 9 seconds left in the Ethereum block time.
Speaker 2
01:00:08 - 01:00:13
So how is SUAVE going to make it so that you can close both legs of that arbitrage, for instance?
Speaker 4
01:00:14 - 01:00:41
So SUAVE isn't prescriptive about how exactly cross-domain MEV is executed. We anticipated a bunch of different approaches being taken in the market, like Typhon, like shared sequencing, like other things. And we are observing and seeing how these different execution models play out, but we're not prescriptive one way or another. That's the starting point. And different users will have different levels of risk that they're willing to take.
Speaker 4
01:00:41 - 01:01:17
So some users will be willing to take the risk of, hey, maybe one leg will fail and the other will not, and I'm willing to execute one part of the trade and not the other. On the other side of this market, there may be executors that are willing to take that risk. And they may charge some payment if one leg is successful but not the other. And some users may be willing to pay, or to make their trades, only if both sides of their trade are executed.
Speaker 4
01:01:17 - 01:02:10
And the abstractions of SUAVE support both of these models: all or nothing, or only one side of your trade being executed. The complexity is that if you want all or nothing, with both of your trades being executed, then I think the assets actually need to be settled on SUAVE, the domain itself, instead of on Polygon or Arbitrum. And that may be unacceptable to some users, but it is at least a possibility. And SUAVE is trying to support all these different potential models of who is taking on what risk, how much they're paying for it, and what sort of execution is happening in the background, whether that's Typhon, whether that's shared sequencing, whether that's just a market maker that's really good at evaluating risk on many different domains. So did that answer your question, Michael?
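One way to picture those two risk models is a simple settlement rule. This is a made-up illustration of the trade-off Robert describes, not SUAVE's actual payment logic:

```python
# All-or-nothing pays the full reward only if every leg landed; the partial
# model (a made-up pro-rata rule) pays per leg the executor managed to include.

def executor_payment(reward_wei: int, legs_landed: list, all_or_nothing: bool) -> int:
    if all_or_nothing:
        return reward_wei if all(legs_landed) else 0
    return reward_wei * sum(legs_landed) // len(legs_landed)

print(executor_payment(10**17, [True, True], True))    # full reward: atomic success
print(executor_payment(10**17, [True, False], True))   # 0: atomicity failed
print(executor_payment(10**17, [True, False], False))  # half: user accepted leg risk
```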
Speaker 2
01:02:10 - 01:02:11
Yeah, it did.
Speaker 4
01:02:11 - 01:02:13
Yeah, appreciate it.
Speaker 3
01:02:13 - 01:02:49
Yeah, so the way we think about SUAVE internally is definitely more as the demand side for these cross-domain transactions, and then it exposes a lot of tools that allow a supply side to emerge as well, right? And these definitely include the shared sequencers that John was talking about at length earlier, but also any other form. And they get easier to build and integrate because SUAVE provides the tools for trustless collaboration on private data, basically.
Speaker 4
01:02:51 - 01:03:36
And what I would say, too, is that by aggregating this demand side, by creating, to your point, Michael, standards for the bid, or value, that users are willing to pay for cross-domain atomicity, and by exposing that to the world, you then provide incentives for proposers to pursue cross-domain atomicity solutions like Typhon, like shared sequencing. Because they see, hey, there is all this value that I can get as a proposer if I integrate solutions that make it less risky for users to execute on these types of preferences. So in that way, it creates a market that I think incentivizes the exploration of more cross-domain solutions.
Speaker 2
01:03:36 - 01:03:55
Excellent. I have kind of a continuing set of questions here, but I really liked that idea of approaching things from the demand side. I think many arguments, especially investing arguments in crypto, tend to be focused on the supply side. So I love that approach. What's your strategy, from the SUAVE perspective, for aggregating that demand side?
Speaker 3
01:03:56 - 01:04:18
I think, Bert, that might be a great time to talk about the order flow auction and how it relates to SUAVE. So Mike, one way that we've been thinking about aggregating the demand is definitely by starting out with the next set of features that the order flow side of the market actually wants.
Speaker 4
01:04:19 - 01:04:54
What we're seeing on Ethereum L1 in particular is the rise of what we call order flow auctions. And in case your listeners don't know what this is: an order flow auction is this notion that you can send a transaction to an auction, and that auction will auction off the right to execute a transaction behind yours, usually. And instead of the block builder or the proposer getting paid, it is the user who sent that transaction who gets paid within this auction. So it's a way for users to internalize the MEV that they create and get better execution.
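A toy version of that inversion of payment flows might look like the following. The auction format here (sealed-bid, first-price, full rebate to the user) is an illustrative assumption; real order flow auctions differ in their details:

```python
# Searchers bid for the right to backrun a user's transaction; the winning bid
# is rebated to the user instead of being kept by the builder or proposer.

def run_order_flow_auction(user_tx: str, bids: dict):
    """bids maps searcher IDs to what they will pay (in wei) to backrun user_tx."""
    if not bids:
        return [user_tx], None, 0          # no MEV found; the tx goes through as-is
    winner = max(bids, key=bids.get)
    rebate_to_user = bids[winner]          # the key inversion: payment flows to the user
    bundle = [user_tx, f"backrun-by-{winner}"]
    return bundle, winner, rebate_to_user

bundle, winner, rebate = run_order_flow_auction(
    "0xuser...", {"searcher_a": 3 * 10**16, "searcher_b": 5 * 10**16}
)
print(winner, rebate)  # searcher_b 50000000000000000 (the user receives 0.05 ETH)
```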
Speaker 4
01:04:55 - 01:05:35
So, an auction for order flow. And we think this is super interesting as a way to live up to one of Flashbots' commitments of redistributing MEV. And we saw that there was a great amount of demand for this in the short term. I think you need to have order flow auctions as a starting point, but you also need these to be decentralized. I don't think it really matters what you're doing at the block builder level or the proposer level, and how decentralized those are, if the dominant way that users are interacting with the chain is through a single centralized order flow auction and one endpoint to the chain, basically.
Speaker 4
01:05:35 - 01:06:29
And so what we've been working on at Flashbots for some time is how you create an order flow auction that you can decentralize, that is permissionless for any searcher in the world to participate in. And the follow-up question that we're starting to work on now is: how do you take the outputs of that order flow auction, bundles that combine user transactions with searcher transactions, and have those be usable by any block builder in the universe? Because we don't want to create a world where only trusted block builders can get the outputs of these order flow auctions. So this is really taking what we're working on with SUAVE, these primitives of how parties in the MEV supply chain commit to each other and how you provide privacy, and pulling them forward, anchoring them in something that we see has market demand today in order flow auctions, so we can ensure that this critical infrastructure is decentralized.
Speaker 2
01:06:30 - 01:07:01
Yeah. I suppose my next question to you there is, and actually, order flow auctions and their increasing prevalence are definitely something Hasu and I are super interested in exploring. I think I've got a pretty good mental model for what a centralized order flow auction might look like, but I'm having a little more trouble reaching in my mind for what a decentralized order flow auction might look like. So can you describe some of the mechanism design that you guys are playing with at SUAVE, and what that would actually look like in practice?
Speaker 4
01:07:01 - 01:07:28
Yeah, so I would point your listeners to something that I wrote with the Flashbots team, called MEV-Share. It's on our forum, collective.flashbots.net. It details our design for an order flow auction, which has privacy at the core and is permissionless for any searcher in the world to participate in.
Speaker 4
01:07:28 - 01:08:24
And the way that it does that is through this notion of privacy. Normally a user and a searcher are going to have a hard time collaborating, because if the user sends the transaction to a searcher in clear text, the searcher can just take that, front-run the user, extract all the MEV, and run off with it. There's really nothing that guarantees the user gets MEV paid back, the user has no bargaining power, and there's no guarantee that they won't get front-run either. So MEV-Share gets around this and creates a permissionless market where any user can interact with any searcher through the notion of programmable privacy: instead of sharing transactions in clear text with searchers, we selectively share information about those transactions with searchers, who can use that information to probabilistically extract MEV.
Speaker 4
01:08:24 - 01:08:59
So instead of sharing your Uniswap V2 trade or SushiSwap trade details, you could share only, for example, the pool that you're trading on. And a searcher could see, hey, this user is trading ETH-USDC. I don't know what direction, I don't know how much, but if the price of ETH-USDC on Uniswap V2 moves to this amount, then I'm willing to buy; and if it moves the other direction, then I'm willing to sell, and still pay some amount for this. So the trick here is to share just enough information that you can optimize for MEV, but not so much that the user gets worse outcomes.
Speaker 4
01:09:00 - 01:09:45
But by not sharing the full transaction, you enable permissionless searching on this. The other thing that I think is important here is this notion of commitments. So you need some way for the user to get paid for the MEV that they're creating. And we have something we call a validity condition, which is passed on with the end user's transaction and requires that the user gets paid some ETH in order for the transaction to be executed. I'm sort of coming around to the decentralized bit of this, but these two things, privacy and commitments, are what we think enables MEV-Share to work and redistribute MEV back to users.
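Here is a minimal sketch of those two pieces, a selective-disclosure hint and a validity condition, with hypothetical field names rather than MEV-Share's actual message format:

```python
from dataclasses import dataclass

@dataclass
class SwapTx:
    pool: str        # e.g. "UniswapV2 ETH/USDC"
    direction: str   # "buy" or "sell" (kept private)
    amount_wei: int  # trade size (kept private)

def make_hint(tx: SwapTx) -> dict:
    # Selective disclosure: searchers learn where value might be, not what it is.
    return {"pool": tx.pool}

def validity_condition(balance_before: int, balance_after: int,
                       min_refund_wei: int) -> bool:
    # The bundle is only valid if the user actually received their MEV refund.
    return balance_after - balance_before >= min_refund_wei

tx = SwapTx("UniswapV2 ETH/USDC", "buy", 5 * 10**18)
print(make_hint(tx))                                        # {'pool': 'UniswapV2 ETH/USDC'}
print(validity_condition(10**18, 10**18 + 10**16, 10**16))  # True: refund was paid
```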
Speaker 4
01:09:45 - 01:10:36
And we're designing this in such a way that you're not relying on any trusted set of parties in order for the system to work, or at least not in the limit. So in the future, we expect MEV-Share to be run by a distributed set of nodes within a decentralized block building network, instead of a centralized entity like Flashbots itself. And we think that selective data sharing is an API that can be run within this decentralized network, as well as this notion of conditions of validity that are added to users' transactions to ensure that they're getting paid back. That's a long-winded answer saying: we think privacy and commitments are super important to the MEV supply chain, enabling parties to work together. And these are going to be features that our decentralized block building network and SUAVE offer.
Speaker 4
01:10:36 - 01:10:42
And they're kind of pulled forward into today's centralized world in MEV-Share's design for an order flow auction.
Speaker 2
01:10:43 - 01:11:02
Yeah, thanks, Rob. That makes an enormous amount of sense. And yeah, we will link to the MEV-Share article that you just referenced there. I'd like to give listeners a little bit more of a concrete sense of what SUAVE is. I know there are sort of three components to it, Robert.
Speaker 2
01:11:02 - 01:11:33
There's the preference environment, there's the execution market, and then there's the decentralized builder. I also know that SUAVE is kind of its own chain, and I almost think of it as an alternative mempool for being able to express these sorts of preferences, which then actually get routed. So can you describe, less at a high level and more in a concrete sense, just what exactly SUAVE is, and what's the timeline for development as well?
Speaker 4
01:11:33 - 01:12:09
Yeah, so SUAVE, like you said, is these three things: a preference environment, an execution market, and a decentralized block builder. And it is also a chain. These are the same thing. The way that they relate is that the chain is the place where you bring information from other domains and expose it through oracles that users can condition their preferences on. So the chain is how users are able to express their preferences, and the chain plus its messaging layer, or its mempool, makes up the environment for the preferences.
Speaker 4
01:12:09 - 01:13:07
And when we were thinking of how to communicate this, we came to "preference environment" as this notion of one single place where all these preferences can be aggregated and accessed together, because there are economies of scale in having these things in one environment: you can optimize for MEV more, the more preferences you have in one place. So that's the relationship between the chain and the preference environment. We also wanted to delineate the execution market, because it's a super important part of how SUAVE works. At the core of it is this notion of a competitive marketplace of specialized actors that can take these preferences and execute on them, regardless of whether it's Ethereum, whether it's Polygon, Arbitrum, Solana, a Cosmos chain, et cetera. It's very important to us that we have this specialized marketplace, and it provides a lot of interesting benefits to users.
Speaker 4
01:13:09 - 01:13:40
This is, again, separate from the decentralized block building network, because not all domains will have some notion of block building and PBS. And really, the decentralized block building network is just a specialized instance of an executor within the executor marketplace. But it is logically separate in some ways. I hope that makes it a little bit more concrete. You can think of SUAVE, and how all these things relate, as SUAVE being a chain where you express your preferences.
Speaker 4
01:13:41 - 01:13:56
On the back end, you have these specialized executors that are competing on them. And the decentralized block building network is just one example of that. Does that help concretize it a little bit, Michael? Are you looking for something else? And I won't be offended if you are.
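To make the oracle-conditioning idea from a moment ago concrete, here is a toy sketch, with invented names, of a preference that only becomes claimable once an oracle on the SUAVE chain reports the relevant state from another domain:

```python
# Toy version of "condition a preference on an oracle". How oracle reports are
# actually produced and trusted on SUAVE is not specified here.

oracle = {}  # domain -> latest reported state

def report(domain: str, state: dict):
    oracle[domain] = state

def preference_active(domain: str, predicate) -> bool:
    state = oracle.get(domain)
    return state is not None and predicate(state)

report("polygon", {"block": 52_000_000, "eth_usdc_price": 3050})
# "Only execute my order if ETH/USDC on Polygon trades above 3000":
print(preference_active("polygon", lambda s: s["eth_usdc_price"] > 3000))  # True
```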
Speaker 4
01:13:57 - 01:13:58
John, I see you on mute.
Speaker 5
01:13:59 - 01:14:27
Yeah, I have some questions on this. So I just wanted to clarify some stuff. You're saying that as a user, you express preferences via the actual SUAVE chain itself; you have funds there and use a contract to express those bids. So do you need to go through that process of using the SUAVE chain to express a preference? If I just want to send my order through the order flow auction that will eventually plug into SUAVE on the front end, is that the way that you have to communicate with it?
Speaker 4
01:14:27 - 01:15:05
I think if you have particular preferences that you want to express, like, in this order flow auction I only want this order to be executed if I am paid 1 ETH, as an example, then you'll probably need to craft a special SUAVE preference that includes those parameters. I think if you're just a regular user and you want to use SUAVE, it'll be as simple for you as, you know, rpc.flashbots.net: throw it at the RPC, and the magic happens on the back end for you. So, did that answer your question?
Speaker 5
01:15:05 - 01:15:55
Yeah. Another follow-up question I had on that: as far as expressing these preferences, let's say there's some arbitrage that I see, and I want certain transactions on another chain to close it. If going through that process requires me to express a preference on the SUAVE chain to communicate what I want done, that would imply that you're bounded by the SUAVE chain's block time. If another chain that I want to express a preference for has a really fast block time, maybe that opportunity is going to disappear. That would mean that the SUAVE chain needs a fast block time to be able to express that preference in the first place. So does that kind of pressure lead back to the point you described earlier, of the pressures of low block times?
Speaker 5
01:15:55 - 01:16:02
Does that incentivize the SUAVE chain to have super, super low block times, and then kind of have that race to centralization there?
Speaker 4
01:16:03 - 01:16:30
I think it is a little bit different on the SUAVE chain, for what it's worth, because we're really only posting data about other chains and settling payments. So latency is, I think, in some ways less important there than on the other domains we have. So I'll just point that out. But you raise a really good point. And I do think that SUAVE's block times are going to have to be faster, and should be a little bit faster, than other domains.
Speaker 4
01:16:30 - 01:17:10
On the other hand, I think there are ways to craft your preferences up front that don't require you to communicate in real time. So as an example, you could offer to an executor on a domain that's really fast: hey, if you spam my contract and it emits a log that says success, I'm willing to pay you some amount. And that gives an incentive up front for an executor to be spamming your contract at the precise moment it needs to happen, without you needing to communicate at that moment. It'll just spam your contract based on that prior commitment. I think you could probably do similar things with latency.
Speaker 4
01:17:11 - 01:17:18
And in that way, there are upfront ways to communicate your preferences that prime executors to be working on lower block time domains.
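A minimal sketch of that kind of pre-committed, log-conditioned bounty might look like this. The receipt format and names are assumptions for illustration:

```python
# "If my contract emits a Success log in some block, whoever triggered it can
# claim the bounty." Committed up front, so no real-time messaging is needed
# between the user and the executor.

def bounty_owed(receipts: list, target_contract: str, bounty_wei: int) -> int:
    """Scan a block's receipts and pay the bounty once if the target contract
    emitted a 'Success' log."""
    for receipt in receipts:
        for log in receipt.get("logs", []):
            if log["address"] == target_contract and log["event"] == "Success":
                return bounty_wei
    return 0

receipts = [{"logs": [{"address": "0xliq...", "event": "Success"}]}]
print(bounty_owed(receipts, "0xliq...", 2 * 10**16))  # 0.02 ETH, claimable
```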
Speaker 3
01:17:20 - 01:18:03
John, I would also point out that you can think of SUAVE almost as a bulletin board where basically anyone can submit their transaction execution requests. And once you've made such a request and it's floating in the SUAVE mempool, the request is basically out there, and anyone who goes and executes it can then come and claim their payment later on, after the transaction was executed. Basically, they execute it, an oracle reports the state change from the target domain, and then the payment can be unlocked on SUAVE. So that means the settlement is not bound to the block time of the target chain. The settlement can basically happen at any time later.
Speaker 3
01:18:03 - 01:18:24
And so that's why, in my opinion at least, SUAVE block times do not need to be lower than those of participating chains, because it's basically okay for executors to claim their payment with a little bit of delay. They know the payment is trustless: if they have the transaction, they can get it included any time later, and then they're going to get their payment.
Speaker 5
01:18:25 - 01:18:51
So, yeah, to be clear, the settlement after the fact isn't the part I was getting at there, because, yeah, that can happen whenever. It's more that initial step: if it requires a transaction on the SUAVE chain to communicate that preference in the first place, then if that block time is longer than that of the domain I want to express the preference for, I won't be able to express the preference in time. So that's where I was wondering where that centralization pressure comes from.
Speaker 3
01:18:51 - 01:19:06
I believe you can communicate all your, and correct me if I'm wrong, Robert, I believe you can communicate all your preferences just through these pre-signed transactions. They do not actually need to be mined on the SUAVE chain in order to be commitments.
Speaker 4
01:19:07 - 01:19:36
Yeah, that does work. We do have a notion of a special transaction in SUAVE that will carry your signed commitments if you want to throw it in the SUAVE mempool, which is what you would want if you want the maximum number of executors to access your transaction, and if you want it to be censorship resistant. If you want to skip all of that and really save on latency, and it's really important for you, you could communicate directly with executors. That is right.
Speaker 4
01:19:36 - 01:20:08
Just with this signature model that we have been talking about. I do think you're touching on something, John, which you can't ever really get around just because of the laws of physics: there will be some cross-domain MEV which is not possible to extract because of latency. And there is some pressure, like there is today, from latency and cross-domain extraction for domains to centralize and have faster block times, too. I think that's inherent to cross-domain MEV. I don't know that there's anything we can do about that.
Speaker 4
01:20:08 - 01:20:27
But it is one of the reasons why it is important for us, as a community, to align on having real decentralization. And as a part of that, I think probably slightly longer block times than we have today, in order to reduce this pressure of centralization from cross-domain MEV in the long run.
Speaker 5
01:20:27 - 01:21:08
A quick follow-up question. Can you elaborate a bit more on why it is exactly that you need a chain, in certain cases, to express these preferences and to settle them after the fact, as opposed to just having a more peer-to-peer protocol layer as part of SUAVE where I can just communicate my intent? And if you execute it on the other chain, you just get paid on that chain. Why do we need to go through this process of communicating through SUAVE, and then having this oracle problem of going back to settle that payment after the fact? Why can't we have this more global peer-to-peer layer, and then just settle on those chains themselves?
Speaker 5
01:21:08 - 01:21:11
And like, I'll give you a payment on that chain if you get my thing done.
Speaker 4
01:21:11 - 01:21:35
It's a good question, John. The TLDR of it, as I understand it, and correct me if I'm wrong, is: why do you need a chain, right? Why do you need a new blockchain for this thing? And there's a few reasons. In order to be economically efficient, we need some way of transmitting preferences that is both as low cost as possible and DoS resistant.
Speaker 4
01:21:36 - 01:22:21
One way to be DoS resistant is to force attackers to pay a cost if there is spam within the network. And with a blockchain, we can impose a fee during periods of network congestion by including preferences on chain, and this would deter attackers. We have designed a mechanism within SUAVE that we think allows expression of preferences at as low a cost as possible when there are no periods of congestion, and that requires changing the actual chain itself and adding a new type of transaction. And, you know, I may be wrong, and Tim Beiko can yell at me if I am, but I don't think this is something we could get through ACD, All Core Devs, today in the Ethereum core development community.
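As a toy illustration of that "free when idle, priced under congestion" property, consider the following sketch. The escalation rule and parameters are invented, not SUAVE's actual mechanism:

```python
# Preferences are gossiped for free, but once the network is congested,
# expressing one requires an on-chain fee that spammers must keep paying.

def required_fee_wei(pending_preferences: int, capacity: int, base_fee_wei: int) -> int:
    if pending_preferences <= capacity:
        return 0  # uncongested: expressing a preference costs nothing
    overload = pending_preferences - capacity
    return base_fee_wei * 2 ** min(overload // capacity, 10)  # escalates with spam

print(required_fee_wei(500, 1_000, 10**9))    # 0
print(required_fee_wei(3_000, 1_000, 10**9))  # 4000000000: spam gets expensive
```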
Speaker 4
01:22:22 - 01:23:25
So practically, it allows us to introduce new mechanisms that we think are more economically efficient for expressing preferences, while still being DoS resistant. And a standalone global peer-to-peer network wouldn't have the same mechanism; at least, I haven't seen any design for it thus far. The second, more general thing, beyond this one specific mechanism for achieving low cost but DoS resistant expression of preferences, is the notion that by owning the full stack, by being able to change things across the full stack, we are much more flexible: we can iterate faster on many different design parameters and make tailored optimizations that would be very difficult to retrofit onto an existing domain, and which you may not want on an existing domain either. So an example of this would be, it probably makes sense for the SUAVE chain to have a faster block time than Ethereum L1.
Speaker 4
01:23:26 - 01:24:06
That's just a non-starter on Ethereum L1, because it's optimizing for slightly different things, right? But we can offer that on the SUAVE chain. Another example: we thought of making the SUAVE chain a rollup that uses the rollup's derivation function to get trustless access to L1 data and all rollup data, so as not to need oracles that carry any extra trust assumptions, or at least to use the same trust assumptions as L1. These are examples of optimizations. One final one I'll throw out is that we've thought about replacing the existing mempool within geth with a different mempool that's optimized for even faster communication.
Speaker 4
01:24:06 - 01:24:17
So these are the kinds of optimizations that we think will make SUAVE a better domain, specific to MEV. Did I answer your question, John?
Speaker 5
01:24:17 - 01:24:18
Yeah, that was great.
Speaker 3
01:24:18 - 01:25:00
The last thing that I'd point out, maybe, is neutrality. So I think SUAVE kind of moves beyond Ethereum, maybe, in a couple of years. There's also the question: is it fair that all of these preferences get settled on one existing settlement layer, or should this be its own standalone neutral layer that has its own ownership and participation and so on, crafted in an entirely bottom-up way from these different domains? And I think that's something that we're thinking about, because part of getting adoption for a system like that is designing for political neutrality.
Speaker 3
01:25:02 - 01:25:15
And then maybe the existing ownership of Ethereum may not be optimized in such a way that it gets buy-in from Cosmos, buy-in from Solana, buy-in from these centralized exchanges, and so on.
Speaker 2
01:25:16 - 01:25:54
Yeah, that makes an enormous amount of sense. Guys, you've already been super generous with your time, and I know we could probably keep going for another two hours, but maybe we could transition to winding down. We've covered an enormous amount of ground today, and frankly, I think everything that Hasu and I were hoping to chat with both of you about, we've already talked about. But if you want to leave listeners with one idea, or bookend the conversation with either a hope for how MEV might play out on these rollups, or maybe something to avoid, or just anything you want listeners to take away from the conversation, maybe we could end with that.
Speaker 5
01:25:55 - 01:26:30
Sure. So I'm definitely very hopeful that it doesn't take years to decentralize these sequencers. I think there's going to be meaningful pressure to get that done in the nearer term. And when we do that, to hopefully do it in a way that isn't just going to incentivize all of the worst centralization pressures and land us back with the exact same guarantees, whether that's super-low-latency, first come, first served type rollups. Because if that's what we gravitate towards with them, then I don't really know how much that achieved.
Speaker 5
01:26:31 - 01:26:39
So hopefully we build these systems in a thoughtful way, such that we actually retain meaningful guarantees and decentralization across them.
Speaker 4
01:26:39 - 01:27:08
I think if we want to decentralize these chains, and personally, I do, we need to decentralize the MEV supply chain. And that's what we're working on at Flashbots. We think that to do that, you need some ability for parties that don't trust each other to make commitments to each other and to manage their privacy. And this is what we're trying to do with MEV-Share. It's an early experiment with those things within the use case of an order flow auction.
Speaker 4
01:27:08 - 01:27:38
If you're interested in this, and interested in ensuring that MEV doesn't become a centralizing force for Ethereum and every other domain within crypto, please come work with us. We are super interested in collaborating with others. And we're going to be working on a bunch of interesting problems in the future, like how to share order flow and how to make the most efficient order flow auction possible. So if you care about decentralization and MEV, and decentralization in crypto, please come work with us, check out MEV-Share, and let us know if you want to collaborate.
Speaker 2
01:27:40 - 01:27:46
Excellent. Thanks, guys. This has been a fascinating conversation. Thank you both so much for your time.
Speaker 3
01:27:47 - 01:27:50
Cheers. Thanks for taking the time guys.
Speaker 2
01:27:50 - 01:27:58
All right, Hasu. That was a great episode. Big payoff for listeners. I think there was a lot to digest from John and Robert.
Speaker 3
01:27:58 - 01:28:17
Yeah, I thought this episode was great. I had this mental map in my head of all the things that I wanted those guys to cover, and we didn't even have to steer very much at all, right? It just felt like they were jumping from topic to topic and getting to all of the important points. Yeah, so I really liked this one.
Speaker 2
01:28:17 - 01:28:41
Yeah, they both came out of the gate with a lot of energy. And for listeners, we actually recorded this at 8am on a Monday morning, so that was definitely tip-top performance from the two of them. One idea that I thought was very interesting, Hasu, and that I wanted you to unpack the implications of a little more, is that hypothetical you posed about every chain being destined to be influenced by the sequencing of another chain.
Speaker 2
01:28:42 - 01:28:50
Robert had a very interesting answer to that, but I'm still trying to digest the implications. Could you unpack that concept a little more?
Speaker 3
01:28:50 - 01:29:20
Yeah. So the reason that I asked this was that there is a lot of value to having cross-domain synchronicity, because it allows for better bridging, but it also allows for more economically efficient capture of cross-domain MEV. And I like how John actually also went into some of the drawbacks of that, right? So one drawback is definitely the value capture mechanism; it's entirely not figured out how that would work.
Speaker 3
01:29:21 - 01:30:00
Also, I think the loss of sovereignty is worth mentioning. In the Cosmos ecosystem today, we see a push towards applications actually taking control of their ordering, like through becoming their own blockchains, right, and then ABCI++, and so on. And I think that shared sequencing is on the far other end of that spectrum, where you actually give up all your sovereignty over your sequencing. It's not even like you are on one chain with other applications and share the same sequencing rules with them, like you would on Ethereum today.
Speaker 3
01:30:00 - 01:30:36
No, it's actually that you are on chain with all of the other chains, and all of these chains share the same rules. And so that's one thing that I guess we didn't say about these shared sequencing networks: they all need to opt into the exact same sequencing rules in order for that to work, right? And yeah, so I think that was worth pointing out. And I guess the question that I have is really, how big is this economic pool going to be?
Speaker 3
01:30:36 - 01:31:18
Right, so how big is cross-domain MEV going to be? And as a result, how far does this idea of all chains outsourcing their sequencing to the same chain go? And I mean, I think in a sense it's a really scary idea, right? Because the more chains are sequenced by the same sequencer, even if that sequencer maybe doesn't execute the transactions, then, as we discussed, there are block builders who will need to do that. And so that's why cross-domain MEV is really such a big centralizing force on the builder market, right?
Speaker 3
01:31:18 - 01:31:49
Because it really drives up the resource requirements to be a builder, not just in terms of executing the transactions at the software level, but also in terms of inventory management across these different domains, risk-taking, balance sheet size, and so on. It really creates this notion that someone who wants to be good at block building must also be good at cross-domain arbitrage, et cetera, on these different domains. And I think that's a really scary idea.
Speaker 2
01:31:49 - 01:32:48
Hasu, so when you talk about cross-domain MEV as an idea, do you think that ever shifts from being primarily CEX-to-DEX arbitrage? Because I remember the first time that I got very excited about this, and I'm sure for you this was years before, but it was the ATOM 2.0 white paper and this idea of the interchain scheduler. I thought to myself, man, that's a very cool, very compelling value proposition. But when I was doing a little bit of my research for this season, I actually found, ignoring CEX-to-DEX arbitrage, which is a great profit center for a bunch of these builders, that there's a problem when it comes to cross-domain arbitraging of price differences that you might see on two decentralized exchanges. There's a reason it hasn't necessarily been the big honeypot that a lot of people thought it was going to be, which is, one, that complexity around different block times that I was asking about with SUAVE.
Speaker 2
01:32:48 - 01:33:26
But then economically, the spreads on decentralized exchanges are much wider than the spreads on a centralized exchange. So when you think about how juicy an arb needs to be, you basically need to be able to pay for transaction costs on both decentralized exchanges, and the wider those spreads are, the less attractive the arbitrage is going to be. So I'm just curious how you think about cross-domain MEV evolving over time. Do you think it's this enormous, very sexy pot of potential profits? Or is it largely just going to be arbing against the price on Binance, which is kind of what it is today, since that's where price discovery happens?
Speaker 2
01:33:26 - 01:33:28
How do you think about the evolution of cross-domain MEV?
Speaker 3
01:33:29 - 01:33:55
Yeah, I should caveat that by saying that I'm really not the cross-domain MEV expert at Flashbots, or really in the MEV space in general. But I think you touched on some very interesting points. So definitely, Binance is the domain against which most arbitrage happens today. And so there is a lot of cross-domain MEV; it's just against a centralized domain.
Speaker 3
01:33:55 - 01:34:26
And so I guess the question then becomes, well, how does that change? First of all, what does it mean for crypto? I think it means today that many of the top block builders are engaging in Binance-Ethereum arbitrage, because that's basically how they can best monetize their block builder status, right? And so this has this spillover effect, a centralization spillover, almost, from Binance to Ethereum.
Speaker 3
01:34:27 - 01:34:55
And so how does that change? I think basically price discovery has to shift from Binance to some decentralized domain, whether that's an application on Ethereum, or on a layer 2, or on a layer 3, or whether that's its own totally independent app chain. But I think once another venue becomes the focal point for trading and liquidity, then all of a sudden arbitraging against that domain becomes much more important.
Speaker 2
01:34:56 - 01:34:58
Yeah. Yeah. Very well said.
Speaker 3
01:34:58 - 01:35:28
Maybe there's another thing that we can point out. There are ways that cross-domain arbitrage can evolve and become more prominent even without Binance becoming less prominent. Imagine you're a searcher on Ethereum today. There's a lot of arbitrage where you don't really have any way to close it without taking balance sheet risk, right?
Speaker 3
01:35:28 - 01:36:10
And so maybe you don't engage in these at all. And so maybe there's very little competition for them, because the Binance leg of the trade might be very centralized, right? And so now, all of a sudden, there is this shared sequencer, or this way to express these cross-domain preferences, maybe between Arbitrum and Ethereum instead of Binance, right? And so now, as a searcher, maybe I can just participate in that opportunity, whereas previously this market was closed to me. And it doesn't matter that I'm less efficient at closing this arbitrage opportunity than the searcher who does the Ethereum-Binance-Arbitrum leg, right?
Speaker 3
01:36:10 - 01:37:02
Or like Ethereum-Binance, Binance-Arbitrum. What matters to me is that it is very cheap now to pursue this opportunity. And so the party that ends up getting the opportunity is not the one who can extract the most from it, but whoever can pay the block builder or the validator the most, right? And so I think what we'd see is basically that the value capture for validators from these opportunities would go up, because there would be more competition. So ultimately, the winning trade might still be done by whoever controls the Binance leg of the trade, but the opportunity set at large would become much more competitive, the margins would get compressed, and more of the revenue would go to the validators on these respective domains.
Speaker 3
01:37:02 - 01:37:04
That would be one guess.
Speaker 2
01:37:05 - 01:37:51
Yeah, I think you might be absolutely right about that. One question that I had for you as well, Hasu: I didn't want to get too into the weeds technically here, but if you've tried to learn about this stuff on your own time, you'll have seen a lot of these new actors that get introduced with the implementation of things like SUAVE, like executors. There's also, in MEV-Share, the matchmaker, and there are all these kinds of new entities. And a question that I often ask myself is, as I close my eyes, who is actually doing these sorts of actions? And when I think about this, I think from a business standpoint it makes sense for the consensus infrastructure providers at main chain level to move up the stack.
Speaker 2
01:37:51 - 01:38:17
So that's why I asked that question about validators. Like, if I were validating Ethereum, I'd have the competence of securing a chain at the main chain level, and then I would try to move up to rollups. And when I think about executors and what I was hearing Robert describe, that actually sounds a lot like searchers, right? Am I right or wrong in describing it that way? Or how do you think the activity migrates up from main chain to this rollup environment?
Speaker 3
01:38:17 - 01:38:46
Yeah. So first of all, I'd say there is a reason why especially the more technically or architecturally minded folks tend to introduce a new role. I think it's because of the logical separation. Anything that doesn't have to be done by the same role, they basically call a new role, even though in the beginning it might be done by the same party. So at Flashbots, we are going to run a matchmaker and we are going to run a block builder, right?
Speaker 3
01:38:46 - 01:39:23
Even though a block builder can also be a matchmaker. So especially with regards to the executors in SUAVE, I acknowledge that as a term, or as a concept, it's a little bit confusing, because it really makes you think this is going to be a new role, whereas it's much more of an umbrella term for a bunch of roles that already exist. So SUAVE executors are not actually a new role, but really more of an umbrella term for different parties that already exist. A SUAVE executor is anyone who can get your transaction in at the target domain.
Speaker 3
01:39:24 - 01:40:04
And so think about who would be the best party to do that, because the executor role is also permissionless. So it's really about self-selection: who will step up to fill the role of the executor in equilibrium? We know at the farthest end we have the validator, because they have the ultimate monopoly over transaction ordering, right? But they also might outsource this to a block builder, and the block builders themselves might run an MEV auction where they outsource the ordering to searchers, right? But then you may even have shared sequencers, right?
Speaker 3
01:40:04 - 01:40:34
So they do the sequencing for different domains, and so they might be an executor. But then you might have chains where there isn't even a builder, and so then the searcher might just go straight to the blockchain in order to get a particular transaction mined. So for example, Robert was saying, what if there's no PBS on a chain and it just has first come, first served? What's going to happen is there's no reason why a kind of latency-auction-as-a-service shouldn't emerge.
Speaker 3
01:40:34 - 01:41:25
Because if there's some party who has preferential latency access on that chain, what they end up doing is monetizing that in some way, whether in a proprietary way, or by outsourcing it. And in TradFi, I think you've seen that already a couple of times. For example, there are multiple privately owned microwave tower networks that are, I think, on the order of 2 to 3 times as fast as the regular internet in the way that they send information packets across the globe. And these microwave tower networks are being used by all of the big trading firms. So that is exactly the concept that you would expect to emerge in a more latency-sensitive crypto system.
Speaker 2
01:41:26 - 01:42:00
Yeah, you know, the more I dig into SUAVE, it's just so impressive how audacious the scope of the work is. And I'm just very, very curious to see how it all plays out and gets implemented in practice. Because every time I try to learn a little bit more about it, there's this whole new rabbit hole of concerns that I wasn't necessarily thinking about. So it's going to be phenomenal to watch you guys build that. I think my last question to you on this is, again, we're risking getting into some pretty technical spots here, but that's what this whole episode was.
Speaker 2
01:42:00 - 01:42:45
So I'll just ask you: when I look at this idea of shared sequencers, the mental model that I have, and I think Robert mentioned this during the episode, is actually borrowed a little bit from Cosmos and their idea of interchain security, with validators securing multiple different chains. And my memory of how that works in Cosmos is that there's some kind of physical limit to how many chains one validator can actually validate. And one of the questions that I didn't really want to get into, but I'm very curious about, is: are those same limitations going to exist in shared sequencer networks on Ethereum? I would guess that they probably are. And we talked about the hardware requirements being a little bit higher as well.
Speaker 2
01:42:45 - 01:43:43
So let me steelman something that I feel is a non-charitable interpretation of how all this is going to work; I have a feeling you're strongly going to push back and tell me why I'm wrong. With this uncharitable interpretation: in the same way that the US sometimes gets accused of saying, we have very good labor practices over here in the US, but don't look at how we actually outsource the labor to Eastern markets, kind of like the Nike example of, well, we have really sterling, very upstanding labor practices over here, but really our sneakers get made over in Bangladesh, where there are less good labor practices. I think you could uncharitably make that argument with Ethereum, where we've limited what's going on on-chain, and Ethereum at the main chain level is this very neutral settlement layer where anyone can be a tiny little validator, but you've pushed some of the less desirable aspects up the ladder to rollups, and rollups are where all the execution is and where the users are going to interact.
Speaker 2
01:43:43 - 01:44:07
And they have a lot of the problems that some of the L1s rightly get pushed back on, right? You've got centralized sequencers, you've got a very strong push towards lower latency because people want faster confirmation times. So what's the reason why that's not necessarily the case? And how is this shared sequencer network going to solve some of these problems?
Speaker 3
01:44:07 - 01:44:34
Oh man. I think this is happening to a degree. Ethereum is definitely pushing these challenges, of making Ethereum scale and providing a great user experience, to rollups. So I think that's by design. I think maybe Ethereum core developers wish that some of the rollup choices with regards to UX had been different.
Speaker 3
01:44:34 - 01:45:37
But I think it's owed to the fact that rollups are non-custodial and have this censorship-resistance mechanism, so you can always get your transaction mined through the layer 1 contract. I think all of this gave the teams working on these rollups, I don't want to say a false sense of security, but a sense of security that they had a lot more freedom to make centralizing choices and innovate on the UX, more so than on the decentralization roadmap. And I think the other point is that this is almost a result of very intense competition between them, right? Because market forces just pushed all of them to launch before having fraud proofs and so on, before being really decentralized or having decentralized sequencers. And yeah, so I have a hard time blaming anyone for their choices.
Speaker 3
01:45:37 - 01:45:55
If I'd been in the same position, I'm sure I would have felt the same pressure from market forces. And I think they all did what they could, right? So I don't think it's anyone's fault. And with regards to what you asked about Cosmos, I thought it was an interesting question. I don't know the answer to it.
Speaker 3
01:45:55 - 01:46:15
Like, the answer to why interchain security in Cosmos has that scaling limit, right? We should definitely ask that in our episode on Cosmos. If I had to guess, it's maybe a mix of, well, Cosmos validators basically execute all of the transactions, right? They don't have proposer-builder separation. So that's one thing.
Speaker 3
01:46:16 - 01:47:00
And the other thing is maybe about stake rehypothecation, in the sense that if they "restake," quote unquote, the Cosmos Hub validator stake too many times, then maybe the security for everyone starts going down after a while, right? Because they basically spread the security out over too many different consumer chains. And so I could imagine it's a mix of these two, or something like that. And I don't necessarily think that shared sequencers will have the same problem. I think they will definitely have the problem that the builder role in these networks is just going to be incredibly centralized.
Speaker 3
01:47:02 - 01:47:36
Just because the requirements of being a cross-chain builder are so, so, so much higher than being an Ethereum layer 1 builder. And frankly, that's why we have this very audacious vision of decentralizing the MEV supply chain, not just on a single domain, but even horizontally across several domains, by providing different parties the tools to collaborate in a trustless way, just so all of these roles don't have to be played by the same parties.
Speaker 2
01:47:38 - 01:48:05
Yeah, well said. And by the way, I can't say that I would make any decisions differently than the layer twos on Ethereum did either. Although, I don't know if you remember giving this quote, but it stuck with me. You said it on, I think it was an I Pledge Allegiance podcast a little while ago, about the intense cyclical nature of crypto actually culling people who make short-term decisions. And I was listening to you at the time, during a bull market, and not agreeing with that quote.
Speaker 2
01:48:05 - 01:48:43
And then frankly, now, with the benefit of a full year of bear market and watching how it's played out, I actually do agree with you a little bit. And this is beyond the scope of this MEV podcast, necessarily. But I did find myself wondering about that exactly when I was watching this Arbitrum governance snafu play out, and wondering to myself, well, do you really need that billion dollars to play around with and kind of be fast and loose and match the Polygon deals? Or is that actually money that's not particularly well allocated that way, if you had a longer-term framework on it? I don't know; it's maybe beyond the scope of this MEV podcast.
Speaker 2
01:48:43 - 01:49:03
But I do find myself wondering that quite a lot, actually. All right, let's give listeners a little bit of a tease for the next episode here. So this next episode is going to be an interview with two searchers. This is kind of a fun callback to the original interview with a searcher that you did; it's got to be back in 2020.
Speaker 3
01:49:04 - 01:49:07
Yeah. 2020 or 2021. Yeah.
Speaker 2
01:49:08 - 01:49:19
Yeah. So this one will be fun, because we'll really get into the nitty-gritty of what the searching role looks like today and some of the PvP activities that go on for searchers.
Speaker 3
01:49:19 - 01:49:49
I think it would be very interesting to also listen back to the original episode, just to hear what changed in the two or three years that lay between these interviews. Because these two searchers we're interviewing, I think they're really on top of their game. And I think they'll give us a great overview of what it takes to be a searcher today, how much more sophisticated it is, and how much more resource intensive. Those would be two takeaways that I'm expecting to get.
Speaker 2
01:49:50 - 01:49:54
Absolutely. All right, Hasu. As always, this has been a fun one. See you.