24 minutes 27 seconds
🇬🇧 English
Speaker 1
00:00
So I will talk about EIP-4844 economics and market dynamics. This is joint work with Davide Crapis, who is a researcher at the Ethereum Foundation, and Edward W. Felten, who is a co-founder and chief scientist at Offchain Labs. And this is a joint academic grant between the Ethereum Foundation and Offchain Labs.
Speaker 1
00:29
Okay, so the motivation is clear. We all know that Ethereum needs scaling, and a lot of solutions have been proposed for that. But at the moment, rollups are assumed to be the best solution.
Speaker 1
00:46
So rollups can help Ethereum reach this scaling goal. And Ethereum, in turn, can help the rollups, in particular by providing more resources for data availability. I will explain what data availability means in a second. So this is a very short introduction to rollups.
Speaker 1
01:12
So rollups move the execution of transactions off-chain and use Ethereum only as a settlement layer. Transactions are sent by users to the sequencer, which is one dedicated validator of the rollup, in Ethereum language. The sequencer receives transactions and passes them to the execution stage in some order, and the most widely used transaction ordering is first come, first served. And when there are enough transactions, the sequencer posts a batch of transactions to Ethereum; in the case of Arbitrum the batch is compressed, but in other cases it does not have to be.
Speaker 1
02:01
And then validators of the rollup coordinate to advance the state, and they post the updated state to Ethereum. Now, how does this new proposal help the rollups? We know from the data that a big part of the costs that rollups incur are calldata posting costs, or batch posting costs.
Speaker 1
02:32
If this part is lowered significantly, then the total costs are lowered. And the improvement can be a factor of 3 to 10. I think these are more conservative numbers than the 10 to 100 claims that we often see. And if these costs are lowered, then rollups reach more users, and through rollups, Ethereum also reaches more users.
Speaker 1
03:03
So now, what exactly does the Ethereum improvement proposal look like? In economic terms, we can think of it as a parallel data market, and this data consists of blobs of approximately 128 kilobytes each. Originally it was proposed to target two data blobs per block, but this was later changed to three data blobs per block, and the maximum is double the target. So it's very similar to what happens in EIP-1559 with the regular gas market.
Speaker 1
03:49
So the dynamic pricing is exactly the same as the pricing of the Ethereum regular gas market. If the maximum number of blobs is reached, the price increases by 12.5%, that is, one-eighth. And if zero blobs are posted in the block, the price decreases by 12.5%. And in between, the percentage is interpolated linearly.
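The linear update rule just described can be sketched as follows. The function name `next_blob_price` and the default target of 3 and maximum of 6 blobs are illustrative, and note that the actual EIP-4844 specification updates the price with an exponential formula over the excess blob gas rather than this per-block linear interpolation.

```python
def next_blob_price(price: float, blobs_posted: int,
                    target: int = 3, max_blobs: int = 6) -> float:
    """Linear per-block update as described in the talk: +12.5% when the
    block holds the maximum number of blobs, -12.5% when it holds zero,
    interpolated linearly in between. The target sits exactly in the
    middle, so at the target the price stays flat."""
    assert 0 <= blobs_posted <= max_blobs
    step = 0.125 * (blobs_posted - target) / target
    return price * (1.0 + step)
```

For example, a full block (6 blobs) moves a price of 100 to 112.5, while an empty block moves it to 87.5.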
Speaker 1
04:19
The main question that we ask in this joint work is: what should rollups do? What should their strategies be? From a rollup's perspective, it is used by users and therefore receives some stream of transactions, so it can measure its transaction rate, or the current number of transactions that need to be posted on mainnet. What rollups do is group these transactions by time.
Speaker 1
04:48
In the case of optimistic rollups, they just compress the raw transaction data. In the case of ZK rollups, they either compress the raw data or they generate state diffs. But the main idea is that more data needs to be posted. Of course, rollups come in different sizes, ranging from, let's say, the biggest, which is Arbitrum, to very small ones.
Speaker 1
05:18
But each of them needs to decide how often it wants to post blobs. And the special feature of blob posting is that you cannot post half a blob. Or rather, you can post a blob that consists of half relevant data and then pad it with zeros or some useless data. But what is special is that you need to pay the full price.
Speaker 1
05:45
So there is a blob price that is determined by this dynamic mechanism, and no matter how much of that blob space you are using, you are paying the full price. We model transactions' preference to be included fast with a delay cost function: the longer transactions wait, the higher the cost.
Speaker 1
06:14
And we model it with a linear delay cost function; this is the most realistic one. And if we assume that transactions arrive at the rollup at a constant rate, then the rollup's aggregate delay cost is just quadratic. So the mathematics here is very simple.
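The quadratic form follows from a one-line computation. Writing λ for the constant arrival rate and c for the linear per-transaction delay cost (both symbols are our notation), a transaction arriving at time t waits T − t until the batch is posted at time T, so the delay cost accumulated over the posting interval is

```latex
D(T) \;=\; \int_0^T c\,\lambda\,(T - t)\,\mathrm{d}t \;=\; \frac{c\,\lambda\,T^2}{2},
```

which is the quadratic aggregate delay cost used throughout the talk.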
Speaker 1
06:30
So there are a few parameters that rollups, and also the mechanism designer, can use to design the system. First is the demand of the rollup, which is the transaction arrival rate. Then there is the blob price, which is determined in equilibrium, or rather which the dynamic update mechanism will drive to equilibrium; at least that is the promise of this dynamic price update. Then there is the L1 metadata posting cost.
Speaker 1
07:02
So each blob, when it's posted, consumes some gas on the main gas market, and that needs to be paid by the rollup. There is also the gas market price per byte on mainnet. And also on mainnet, when you post a batch, there is some metadata posting cost.
Speaker 1
07:25
This is also fixed. So the total cost of the rollup, when it is posting data, is a sum of two parts. First is the posting cost, whether on the blob market or the regular gas market. And second, there is the aggregate delay cost, which we measure as a quadratic function of time.
Speaker 1
07:53
There are a few differences between the blob market and the regular L1 gas market. The total cost in the blob data market at posting is the sum of the fixed blob price and the fixed L1 metadata cost. While on the L1 market, it's a fixed L1 gas fee, but per byte, so it is multiplied by how many bytes the rollup is using.
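The two per-posting cost structures just described can be sketched as follows; the function and parameter names are hypothetical.

```python
def blob_posting_cost(blob_price: float, l1_meta_cost: float) -> float:
    # Blob market: fixed blob price plus fixed L1 metadata cost,
    # regardless of how much of the blob is actually filled.
    return blob_price + l1_meta_cost

def calldata_posting_cost(gas_per_byte: float, n_bytes: int,
                          meta_cost: float) -> float:
    # L1 gas market: pay per byte actually posted, plus a fixed metadata cost.
    return gas_per_byte * n_bytes + meta_cost
```

So a half-empty blob costs the same as a full one, while the calldata cost shrinks with the batch size.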
Speaker 1
08:24
And then the fixed metadata cost is added to that. Okay, so there is a clear trade-off here. Of course, the blob market has a lower per-byte cost, and I will discuss why that is.
Speaker 1
08:39
And it can be used to save on posting costs. But the L1 gas market gives rollups flexibility, because they don't need to wait until they have a full blob or a very large batch, and then they economize on the delay costs. So the optimization problem that the rollup solves is quite simple.
Speaker 1
09:03
In each case, whether it's blob posting or batch posting, the rollup finds the optimal stopping time, and that can be found using a first-order condition. It's very simple because of the quadratic form of the delay cost function, and it is the same in both cases.
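With a fixed per-posting cost P, delay coefficient c, and arrival rate λ (our notation), the rollup minimizes cost per unit time over the posting interval T, and the first-order condition gives the optimal stopping time:

```latex
f(T) = \frac{P}{T} + \frac{c\,\lambda\,T}{2}, \qquad
f'(T^\ast) = -\frac{P}{(T^\ast)^2} + \frac{c\,\lambda}{2} = 0
\;\;\Longrightarrow\;\;
T^\ast = \sqrt{\frac{2P}{c\,\lambda}},
```

with minimized cost rate f(T*) = √(2Pcλ). So larger rollups (higher λ) post more often, and a higher fixed posting cost P lengthens the optimal interval.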
Speaker 1
09:23
But of course, there are other parameters that need to be taken into account. After the rollup knows the optimal strategy in both cases, it simply compares which one is better at each point and uses the one that is more favorable to it. And this already gives an upper bound on the blob price for which blob posting is better than posting L1 calldata, if we assume that rollups can switch between using data availability in these two markets.
Speaker 1
10:05
So they have, let's say, fraud proofs in the case of Arbitrum, and they can get data from both markets. That should be easily doable for optimistic rollups, but I think for ZK rollups it should also be fine. We observe that the per-byte cost in the blob market is always upper bounded by the per-byte gas market price.
Speaker 1
10:32
Because if not, then rollups will just switch to using the L1 gas market. It's as simple as that. So now, slightly more interesting results. We assume continuous time, and we assume that the equilibrium price is determined at every point.
Speaker 1
10:53
At least, that's the promise of the dynamic pricing mechanism. And we also assume that rollups are myopic: they only care about how much cost they incur in this round, not about long-term gains. We obtain the first result: for a given set of rollups, and therefore their fixed transaction arrival rates, there is some threshold, depending of course on the set and the transaction rates, such that rollups with a lower rate than this threshold prefer to use the L1 gas market.
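For a single rollup facing a fixed blob price, this threshold can be made concrete using the minimized cost rates from the first-order condition. The closed forms and parameter names below are our stylization; in the talk's model the threshold depends on the whole set of rollups through the equilibrium blob price.

```python
import math

def blob_cost_rate(lam: float, c: float, blob_price: float, meta: float) -> float:
    # Blob market: fixed per-posting cost (blob price + metadata), so the
    # minimized cost rate at the optimal interval is sqrt(2 * P * c * lam).
    return math.sqrt(2.0 * (blob_price + meta) * c * lam)

def calldata_cost_rate(lam: float, c: float, gas_per_byte: float,
                       bytes_per_tx: float, meta: float) -> float:
    # Calldata: the per-byte cost accrues with every transaction, so only
    # the fixed metadata part enters the optimal stopping problem.
    return gas_per_byte * bytes_per_tx * lam + math.sqrt(2.0 * meta * c * lam)

def threshold_rate(c: float, blob_price: float, gas_per_byte: float,
                   bytes_per_tx: float, meta: float) -> float:
    # Setting the two rates equal gives a unique crossing in lam:
    # below it calldata is cheaper, above it blob posting wins.
    a = math.sqrt(2.0 * (blob_price + meta) * c)   # blob sqrt coefficient
    d = math.sqrt(2.0 * meta * c)                  # calldata sqrt coefficient
    b = gas_per_byte * bytes_per_tx                # calldata linear coefficient
    return ((a - d) / b) ** 2
```

Below the threshold the fixed blob price is too heavy relative to the rollup's traffic; above it, the linear per-byte calldata cost dominates.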
Speaker 1
11:35
So they don't use blob posting strategies. If rollups are small enough, they don't use blob posting. Of course, it can be that there are no such rollups; we just determine the threshold.
Speaker 1
11:45
And if there were a rollup with a lower rate, then it would use the L1 gas market. But there is, of course, at least one rollup that will use a blob posting strategy; otherwise the blob price would be zero, which is a contradiction if no rollup posts blobs. But there can be situations where no rollup switches to L1.
Speaker 1
12:08
The second main result is that if rollups have a technology to join in blob posting, two or more of them (but we focus on two, because more gets much more complicated), and they somehow know how to split the cost, or the revenue they obtain by joining in blob posting, then any two rollups can actually improve their total cost by joining in blob posting. This says nothing about how they split the cost; that is actually a future research question. Okay, so this gives some hope for small rollups: if they operate alone, they cannot use a blob posting strategy, but if they join forces, maybe they cross that threshold.
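The gain from joint posting can be illustrated with the minimized cost rate from the first-order condition. This is a stylized sketch in our notation, and like the result itself it says nothing about how the pair splits the bill.

```python
import math

def solo_cost_rate(lam: float, c: float, post_cost: float) -> float:
    # Minimized cost rate when a rollup posts alone: sqrt(2 * P * c * lam).
    return math.sqrt(2.0 * post_cost * c * lam)

def joint_cost_rate(lam1: float, lam2: float, c: float, post_cost: float) -> float:
    # Two rollups fill blobs together: one fixed posting cost is shared,
    # and the combined arrival rate shortens everyone's wait.
    return math.sqrt(2.0 * post_cost * c * (lam1 + lam2))
```

Since √(λ1 + λ2) < √λ1 + √λ2 whenever both rates are positive, the joint rate is strictly below the sum of the solo rates, so there is always a surplus to split.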
Speaker 1
13:03
And also, the threshold will change, but hopefully not too much. So if they come together, they can still make it. And this is not only about those rollups that don't make it in the blob data market. It can be that rollups that are large enough come together, join in blob posting, and improve their total costs.
Speaker 1
13:28
And that happens because they lower the delays. Each of their transactions now has a lower delay, and in total, of course, the cost gets even lower, because it grows quickly, quadratically. Actually, if we look at how they can share this added value, it doesn't have to be proportional at all.
Speaker 1
13:55
And actually, if we look at fair sharing, we see that rollups that produce 70% and 30% of the blob data pay almost equally. Because the small rollup, even though it generated much less data, saved a lot on delays: if it had waited until it could fill a blob alone, the delay cost would have been huge. Therefore, the large rollup can exploit this and make the small one pay almost 50%.
Speaker 1
14:32
It's always more than 50% that the large rollup needs to pay, but its share doesn't grow proportionally. And how this added value is split is again an interesting question. I think some bargaining solution should be applied here, or maybe it will just be decided on the business level.
Speaker 1
14:54
For future work, we want to add compression rates. Here it becomes slightly more complicated, because with more data the compression rate gets better, up to some point, but still it gets better. It is also interesting what happens with the dynamic price update. We assume that it reaches the equilibrium price right away, but of course in reality this is not the case; sometimes the price jumps a lot. Another interesting direction is to consider more aggressive strategies, especially by bigger rollups. They can start posting blobs even faster, so not optimally.
Speaker 1
15:38
That increases their costs per transaction overall, but it also forces smaller rollups to leave the market. And that then feeds back into the equilibrium price: the equilibrium price goes down because there is less usage. And then, depending on the parameter sets, we see that this can actually improve their aggregate costs, or costs per transaction.
Speaker 1
16:05
Well, of course, it depends on the parameters, and this gets slightly more complicated than the simple myopic optimization of the posting strategy. We also look at the bigger question of how EIP-4844 should evolve, and this we try to model with user types. We think that there are more dimensions to user types.
Speaker 1
16:36
But let's say the three main ones are, first, security or decentralization, because in many cases they are quite related. Ethereum has the highest security slash decentralization; that is by definition, because rollups inherit their security from Ethereum, so they cannot have higher security than Ethereum. For rollups it of course varies, because there are many rollups and many different technologies and implementations.
Speaker 1
17:04
But let's say it's moderate on average and improving over time. Another dimension is fees, or willingness to pay fees. Ethereum has high fees; that's why it needs scaling in the first place. Rollups again vary, but let's say that on average their fees are moderate, and this is exactly where the improvement proposal should help. And there is also speed: the speed of including a transaction.
Speaker 1
17:38
But actually, it consists of two parts. The first is the rollup's soft finality; this is what the sequencer gives to users. In the case of first come, first served, it's the best you can get.
Speaker 1
17:50
There is nothing better you can do. While the Ethereum finality of a rollup transaction is slower than Ethereum itself, because the rollup first needs to post a batch and then wait for the batch confirmation on Ethereum. So it cannot have finality better than Ethereum: it's Ethereum finality plus the time it takes the rollup to post.
Speaker 1
18:17
But of course some users may not care about Ethereum finality; they only care about the rollup including their transaction. Okay, so the first kind of speed, the rollup's soft guarantee, can be traded for security, but that is not part of EIP-4844; it happens through decentralization or encryption, or both together. The second part, the Ethereum finality speed, of course depends on the EIP-4844 dynamics: the more resources, or blobs, Ethereum allocates, the shorter the posting delay should become. But then demand is also increasing, so everything depends on how EIP-4844 will evolve.
Speaker 1
19:05
It also depends on how the types are distributed. And, of course, it depends on how rollups will improve regarding their security and decentralization. So that's all from my side. Thank you for your attention.
Speaker 1
19:22
So if you have any questions.
Speaker 2
19:41
Anyone?
Speaker 3
19:43
Any questions? Any questions? Thanks for the talk.
Speaker 3
19:53
Are there any implications of the research you've done for Offchain Labs, and by extension Arbitrum, beyond just the strategy of when to use blobs versus L1? Does it go beyond that, or is that mostly the implication?
Speaker 1
20:12
Yeah, so the immediate implication is that calldata posting costs should go down, and also the strategy of how we post needs to change. But also, we may want to join with some other rollup and improve costs.
Speaker 1
20:28
But then, yeah, again, as I said, it requires that. So there are a lot of implications, at least these three that were in the talk. Thank you.
Speaker 4
20:43
With 4844 in particular, do you have any examples, outside of Ethereum, of what that could mean for a protocol? Beyond transactions, right? Beyond trying to help with throughput, are there other ways you could optimize with that same type of protocol and mechanism?
Speaker 1
21:06
So outside Ethereum, the immediate thing is that other L1s can also offer that. So base layers, if they want to encourage rollups to deploy on them. But I think the biggest demand is on Ethereum at the moment, and that's why they introduced it.
Speaker 1
21:29
I heard that Bitcoin, or the Bitcoin community, is also pushing to develop rollups; they could also introduce that. In principle, and this gets a bit tricky, rollups themselves can introduce a similar protocol, or allow rollup users to post their blobs. But it gets tricky because compression doesn't work anymore: once data is compressed, you cannot compress it twice.
Speaker 1
22:02
So there won't be much saving. But maybe there can be some other solutions.
Speaker 2
22:11
Are there additional open questions you have when it comes to just different market dynamics within roll-ups, L2s, anything within Arbitrum, et cetera, that you'd hope to get out of like
Speaker 4
22:24
a group of economics like experts to help solve?
Speaker 1
22:30
Yeah, so actually, the last part that I presented, we still don't have a very clear model of that, and I would be very happy if you have some suggestions to develop that model.
Speaker 3
22:45
The third implication that you mentioned, regarding potential collaboration of different rollups to optimize their costs: is anyone, or any project, working on that as far as you know, or is it kind of a wait and see?
Speaker 1
23:00
No, no, so far it's just a research question. There are no talks about that.
Speaker 4
23:08
I've also thought about the whole, like I read Patty's like bridging databases,
Speaker 2
23:14
type of thing, like the many, like
Speaker 4
23:16
Do you believe there's a future where there'd be thousands of rollups? Like by many different protocols or groups?
Speaker 1
23:23
So one of the results suggests there should not be, or there will not be, many rollups using blob posting, because it cannot be that there are thousands of rollups and every one of them has a high transaction rate. So there will be small ones, and those small ones will not use blob posting. So that's very related to that.
Speaker 1
23:41
So they cannot take advantage of EIP-4844; only the larger ones, the large enough ones, will take
Speaker 3
23:51
Some consortium of smaller rollups?
Speaker 1
23:54
Yes, if they manage to come together, all of them, and post common blobs and then split the costs. But I think it's too complicated. But in principle, yes, they can come together and post blobs.
Speaker 2
24:11
Any other questions from anyone?
Speaker 4
24:16
Akaki, thank you very much.
Speaker 2
24:17
Give him a hand, everyone.