1 hour 51 minutes 12 seconds
🇬🇧 English
Speaker 1
00:02
Okay, let's get started. So thanks everybody for tuning in to the 8th edition of the Lido Node Operator Community Call. There was a bit of a gap in between this call and the previous call, but we're going to try to get back to our monthly cadence. So kicking off, I'm just going to go over the agenda.
Speaker 1
00:24
I'm Izzy. I'm a LidoDAO contributor working in the NOM workstream. And today we have a series of guests as well as some presentations from fellow LidoDAO contributors. So first up, we have Will, who's going to give us an update on how the fifth onboarding wave for Lido on Ethereum is going.
Speaker 1
00:45
Then it's going to be followed up by Sasha and Max, who are going to do an in-depth presentation on the community staking risk analysis and bonding model that is being explored by LidoDAO contributors. Then we have a presentation by Lucas from Nethermind, who's going to tell us about the Nethermind execution layer client, changes that have been made in the last couple of months, and things to look forward to in the future. And rounding it off, we're going to have Justin Drake from the Ethereum Foundation, who's going to tell us about MEV burn. So Will, please take it away and tell us about Lido on Ethereum's Wave 5.
Speaker 2
01:21
Sounds good.
Speaker 3
01:22
Thanks, Izzy. As Izzy mentioned, I'm Will, a contributor to the Node Operator Management workstream at LidoDAO. And I'll be talking about the fifth onboarding round.
Speaker 3
01:32
So as a reminder, this onboarding round is being run in 2 stages. Stage 1 had 14 eligible candidates that were assessed by the Lido Node Operator Sub-Governance Group, or the LNOSG. These candidates were all return applicants to the Lido on Ethereum onboarding process that had previously received high scores from the LNOSG in prior onboarding rounds. So the LNOSG has suggested 2 of these node operators to the Lido DAO for onboarding onto the protocol.
Speaker 3
02:06
These are Launchnodes and SenseiNode. There's currently a Snapshot vote pending that closes on Thursday of this week. So if you've not voted, feel free to do so. And also feel free to participate in the discussion on the Lido research forums regarding these 2 node operators.
Speaker 3
02:23
Over the past week or so, optimistic onboarding and testnet activities have started. So currently there is a process of collecting contact information and walking through the policies and procedures related to running validators for the Lido protocol on Ethereum. This will continue over the next few weeks until there's an on-chain vote. Assuming that performance is solid and there are no issues, mainnet onboarding is tentatively planned for these 2 operators by mid-August.
Speaker 3
02:52
For the second stage of the onboarding round, there's an additional 114 applicants that need to be reviewed. So in total, 116 applications were received for the Wave 5 onboarding round. The evaluation of these next 114 applications, and of Stage 2 overall, is tentatively planned for the final week of August. So if you have applied to the onboarding round, expect to hear from the NOM workstream contributors in the next week or so, basically following up if any additional information is required.
Speaker 3
03:21
In some cases, potentially having follow-up calls. In other cases, basically just an update saying that the information that was received is sufficient for evaluation. This process will continue through the month of August, and additional details will come via the email addresses that you applied with. During the Stage 1 LNOSG evaluation round, the initial discussion among the participants suggested that a total shortlist of 7 to 9 new node operators would be proposed to the DAO.
Speaker 3
03:51
This is across both Stage 1 and Stage 2. It's important to remember that the Lido DAO has the potential to approve new modules to be added to the protocol over the next 12 to 18 months, where additional node operators will have the opportunity to participate. But in general, again, this is just a suggestion from the LNOSG. So feel free to join the discussion in the onboarding round thread that's live on the Lido research forums.
Speaker 3
04:15
Assuming all these timelines hold, mainnet onboarding is tentatively expected by mid-September, meaning that whatever number of node operators the DAO decides to onboard to the protocol, all should be running mainnet validators by the end of September. So with that, I'll keep it fairly tight, but happy to take questions if there are any. Again, I encourage you to go to the Lido research forums to join the discussion there. Thanks.
Speaker 1
04:40
So I'm keeping an eye on the chat in Zoom and on YouTube. I don't see any questions. And since we are trying to stay on schedule this time, I think we'll just move on to the next segment.
Speaker 1
04:52
So Sasha and Max, can you please present your analysis on risk assessment for community staking?
Speaker 4
04:59
Yep, sure. Hey everyone, I'm Sasha, a Lido contributor working on the Community Staking workstream as a product manager. The second section of today's call is supposed to be about the permissionless bond analysis.
Speaker 4
05:14
But first, it's worth providing a bit more context on why we are going to talk about such things. Several months ago, we presented the Community Validation Manifesto, where the Lido contributors gathered a common understanding of why solo staking matters and what we want to achieve as Lido contributors on our way to evolving the Lido node operator and validator set. Among the goals outlined in this document, we committed to increasing the number of independent node operators and to lowering the barriers to becoming a node operator. Since we established this manifesto, a lot of work has been done.
Speaker 4
06:02
And the most significant milestone was the deployment of the Staking Router smart contract, which basically opens up the opportunity to modularize the Lido validator set. It means that the Lido DAO can switch from the monolithic node operator set structure to a modular one. So that's why we've started working on the development of different modules which aim to onboard new node operators. But if we take a step back and look at these goals once again, we can see that there is a small note that these goals must be achieved without compromising on quality.
Speaker 4
06:52
It means that while increasing the number of independent node operators, we must always think about risks and implement risk mitigation mechanisms that keep the overall protocol stable and secure. And I can say myself that it requires a lot of effort. So, bonds and DVT are considered crucial risk mitigation mechanisms in a permissionless staking solution. They can be combined in various ways within different modules.
Speaker 4
07:29
For instance, a module might utilize bonds without DVT, or a combination of bonds and DVT, or whatever. And the most interesting thing about bonds is that the required amount of bond should be enough to cover the risks for Lido stakers. At the same time, it should be affordable for node operators, because we remember that another of our goals is to lower the barriers to becoming a node operator. So how do we reach this balance?
Speaker 4
08:06
So that's the question that Max is going to talk about. Yeah, Max, over to you.
Speaker 2
08:14
Thank you, Sasha. Yeah, hey, my name is Max. I'm also a contributor, part of the analytics workstream.
Speaker 2
08:20
With that introduction, I'll talk about what we're going to do: dive into some details about these mechanisms. The first thing is evaluating the risks involved in running a validator and ways to mitigate them. Of course, there are multiple ways to mitigate them, and we're focusing specifically on bonds within this talk. I will be talking about risks, mainly the risks involved with validator malfunction, whether it's going offline or slashing, and the risk of EL reward stealing by a malicious actor rerouting the EL rewards to different addresses.
Speaker 2
09:02
Of course, all of these risks could happen unintentionally or intentionally, and all of them should be mitigated to meet the expectations of all stakeholders, not just the node operator community but also the stakers. And there are different ways of risk mitigation, like bonding, reputation-based mechanisms, or explicitly reducing risks with DVT. But for all of this, the first thing we need is a way to evaluate the risks, and then try to build a mechanism that would reduce the risk exposure of our protocol. So the agenda is pretty simple.
Speaker 2
09:47
We will go through each of the risks listed here, talk about how we can evaluate the effect and how a bond can mitigate it, and then talk a little about the compound risk of these damaging events occurring at the same time or consecutively. A couple of words on assumptions. To model the risks and damage, we need some assumptions on mechanics, which are basically that the operator fully controls validator behavior, and that all the rewards and the validator balance after exiting end up in the protocol vault.
Speaker 2
10:28
And we need a crucial assumption that at some point in the future there will be technology providing smart-contract-triggerable exits, which is more of a reality now but still not within the spec. Also, to provide a risk valuation, we need to make some assumptions about the market, which is basically what situations or scenarios we are considering in terms of damage to the protocol. Throughout this presentation, we will be using a risk-averse approach, which could basically be formulated as taking numbers representing as much damage as possible. For the market assumptions, the most crucial parts are the level of CL rewards and the level of EL rewards, which we've taken at some generous numbers, and also the EL reward structure, which impacts the amount of damage that could be done.
Speaker 2
11:32
For our approach, we used the assumption of a consistent EL reward structure in terms of distribution and median values. Going straight to the risks listed above, the first is a validator going offline for some time. There are some assumptions to be made: the offline period, the APR on the consensus layer, and the APR on the execution layer. And basically what we are presenting right now is a framework that can turn these assumptions into numbers for the possible effect on the protocol.
Speaker 2
12:15
For a validator going offline, there is a missed-rewards part, the rewards that the validator didn't get for this time, which is basically the validator balance multiplied by the period, multiplied by the APR on CL and EL, and a penalties part that accumulates over this period, which could be measured as the missed rewards on the consensus layer multiplied by 3 quarters. The gist behind this coefficient is the structure of penalties: they apply for sync committees, not for proposals, and not for all the attestation components. The more precise number, given the whole spec, is 65%.
Speaker 2
12:57
But rounding this up a little is in line with the risk-averse approach, with penalties, for example, for sync committees being exactly the same as the missed rewards. Applying this pretty basic calculation, we end up with some amount of ETH that the protocol didn't get as a result of this risk being realized. It's important to mention that the penalties directly decrease the amount of ETH within the protocol and validators, while missed rewards are something that didn't land compared with the situation where the validator ran as usual.
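To make the offline-risk arithmetic above concrete, here is a minimal Python sketch of the calculation Max describes; the APR figures and the 30-day offline period in the example are illustrative assumptions, not numbers from the actual analysis.

```python
# Minimal sketch of the offline-risk framework described above.
# Inputs (APRs, offline period) are illustrative assumptions.
def offline_damage_eth(balance_eth: float, days_offline: float,
                       cl_apr: float, el_apr: float,
                       penalty_factor: float = 0.75) -> dict:
    """Missed rewards = balance * period * (CL APR + EL APR).
    Penalties ~= missed CL rewards * 3/4, the talk's risk-averse
    round-up of the ~65% implied by the full spec."""
    years = days_offline / 365.0
    missed_cl = balance_eth * years * cl_apr
    missed_el = balance_eth * years * el_apr
    return {
        "missed_rewards_eth": missed_cl + missed_el,
        "penalties_eth": missed_cl * penalty_factor,
    }

# e.g. one 32 ETH validator offline for 30 days at 3.5% CL / 1.5% EL APR:
print(offline_damage_eth(32, 30, 0.035, 0.015))
# -> ~0.13 ETH of missed rewards and ~0.07 ETH of penalties
```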
Speaker 2
13:40
As for transferring this risk, there are options: it could be compensated by the bond as a whole, or the bond could cover, for example, just the penalties part, while the missed rewards could be shared among all the participants. Again, this is a framework just to assess the numbers; the decisions on how this risk could be transferred or handled are the next step of this research.
Speaker 2
14:07
But getting the numbers is the crucial first part of this. For slashing, the situation is a little bit more complicated. Again, we need some assumptions, here on the exit period for validators getting slashed. We don't need forced exits here, as the validator would be exited by the beacon chain itself, but there could be a queue on exits that lengthens the time during which it is not generating rewards and is accruing penalties.
Speaker 2
14:43
Also some assumptions on APR. The logic on missed rewards is basically the same, just with a different exit period. The logic on penalties is quite simple: there is the initial penalty, and there are attestation penalties for being offline during the validator exit.
Speaker 2
15:01
Whether it's offline or not, it would still get these penalties. And the midterm, or correlation, penalty is a little bit trickier. There is a formula here, but the basic logic behind it is again simple: for each 1% of the network slashed at the time of the midterm penalty, 3% of the validator balance would be slashed as a correlation penalty, with integer mathematics behind it, meaning rounding down.
Speaker 2
15:34
The idea here is that the correlation penalty can get as high as the whole validator balance, but by making an additional assumption about the share of the network being slashed at that moment, we can limit this risk. For example, if no more than 3.13% of the network is in an ongoing slashing, the correlation penalty wouldn't exceed 2 ETH, because of this math with the multiplier of 3. So the framework here adds an additional assumption on which risks, in terms of ongoing slashings within the network, we are trying to protect against and transfer onto the bond. For this particular example, that would be 3.79 ETH.
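As a concrete illustration of the correlation-penalty arithmetic just described, here is a small sketch following the consensus-spec logic (a multiplier of 3, with integer math rounding down to whole-ETH effective-balance increments); the example values are only meant to reproduce the 2 ETH figure above, not the full 3.79 ETH bond decomposition.

```python
# Sketch of the midterm/correlation penalty for a 32 ETH validator,
# following the logic described above (multiplier = 3, rounding down
# to whole-ETH effective-balance increments).
EFFECTIVE_BALANCE_ETH = 32
PROPORTIONAL_SLASHING_MULTIPLIER = 3

def correlation_penalty_eth(slashed_fraction: float) -> int:
    """Penalty in whole ETH, given the fraction of total stake slashed
    in the surrounding window."""
    adjusted = min(slashed_fraction * PROPORTIONAL_SLASHING_MULTIPLIER, 1.0)
    return int(EFFECTIVE_BALANCE_ETH * adjusted)  # rounds down

print(correlation_penalty_eth(0.0312))  # -> 2 ETH (the ~3.13% case above)
print(correlation_penalty_eth(0.01))    # -> 0 ETH (matches the later remark
                                        #    that <1% ongoing slashing means 0)
```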
Speaker 2
16:24
So the recap on the assessment of the first 2 types of risk is pretty simple. There are some assumptions that have to be made outside of the framework; that is the question of which risks we're trying to prevent and which risks we would accept if they happened. Then there is an assessment part, where, based on the assumptions, we can calculate the level of exposure to risk. And then there is an application.
Speaker 2
16:54
For example, this is the thing I mentioned before: with a pretty generous level of CL rewards and EL rewards, with the assumptions that within no more than a year there would at least be the technology for triggerable exits, that there wouldn't be an inactivity leak for a substantial amount of time, and that we are not protecting against the case of more than 3.13% of the network being slashed within the same period, a 4 ETH bond is enough to compensate the risk of going offline and slashing for basically any number of validators. Moving on to the third type of risk, which is quite interesting in terms of this approach: the possibility of changing the recipient address and basically stealing EL rewards brings up a tough question. First of all, the major difference is that EL stealing is directly incentivized.
Speaker 2
17:53
If your validator gets slashed or goes offline, you don't get the rewards; it's basically rewards missed and ETH damaged. In contrast, EL stealing is an opportunity for a malicious actor to perform an attack and get more rewards than he's risking, due to the high variance in the amount of EL rewards.
Speaker 2
18:19
Basically, no amount of bond as a mechanism could prevent this situation in a purely economic sense, if we're speaking about an entity controlling only 1 validator and providing only 1 bond. If we have a block worth 400 ETH, no bond of 16, not even 32 ETH, would economically prevent the actor from stealing it and losing the bond. But the good news is pretty simple and could be formulated like this: there is a very, very small share of blocks that are worth that much ETH.
Speaker 2
18:59
For example, only 0.28% of blocks are worth more than 4 ETH. And if we assume the worst-case scenario, when all the actors act maliciously and steal them, the number of blocks subject to this attack is pretty low, but the amount of cumulative EL rewards in them is pretty high. If this 0.28% of blocks were stolen, we as a protocol would lose basically 25.8% of all EL rewards, which would be socialized across all the stakers. So no amount of bond could fully prevent this situation, but we can limit it. And the basic idea is that the more bond we have, the smaller the effect on EL rewards, supposing that all of the actors act in a purely economically malicious way, stealing the EL rewards and forfeiting the bond whenever they face a block worth more than the bond they provided.
Speaker 2
20:05
This can be combined into a graph, which basically shows that the more bond you have, the smaller the decrease in EL rewards we would observe in this situation. But there is a significant element of chance within this model, inherited from that significant variance of EL rewards. The basic question here is simple: what if we face more high-value blocks through the year, and hence more temptation for actors to steal?
Speaker 2
20:42
For this case, the approach can be transformed a little: basically, adding multiple dimensions of simulations and selecting some percentile, again using the risk-averse approach, in which we take the worst luck in terms of a 90% confidence interval and the worst actor behavior, where everybody who can steal profitably, getting more ETH than the bond they provided, would do so. This produces the basic curve showing what share of EL rewards would be stolen at different levels of bond. Again, this is a framework that could be used to assess the impact of this attack on the protocol. Elaborating a little further, we could challenge the last assumption: the percentage of node operators, or validators, that are subject to stealing. In terms of community staking, we're not going to distribute 100% of the stake to permissionless node operators. So, applying the curve above to the share of node operators considered subject to stealing, we can calculate the worst plausible impact on the protocol in terms of the drop in APR, and in terms of the relative drop in APR, given an assumption about what part of the whole APR the EL rewards are responsible for.
Speaker 2
22:29
Again, using the risk-averse approach: the more of the APR the EL rewards are responsible for, the more prone we are to this attack, since it affects the EL rewards. Basically, this is a framework to evaluate it. The use case for it is making some assumptions about the EL reward structure, the share of validators that are subject to stealing, and the level of risk tolerance in terms of the confidence interval, then using the risk assessment to evaluate which risks the DAO, the stakers, and the community are okay to take if we have no additional mechanisms apart from the bond, which is of course not true, as bonding is not the only mechanism. And the example here, based on the graphs above: for a 4 ETH bond, the decrease in APR would be less than 0.027%, with a probability of 90%, if only 5% of validators perform the attack when the opportunity occurs.
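A rough Monte Carlo sketch of the kind of simulation Max describes might look like the following; the lognormal EL-reward distribution and all its parameters are purely illustrative assumptions, not the actual data or model used by the contributors.

```python
# Toy simulation: share of EL rewards lost if every "malicious" proposer
# steals any block worth more than their bond. Distribution parameters
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(42)

def stolen_share(bond_eth: float, n_blocks: int = 200_000,
                 malicious_share: float = 0.05) -> float:
    rewards = rng.lognormal(mean=-3.0, sigma=1.5, size=n_blocks)  # ETH per block
    malicious = rng.random(n_blocks) < malicious_share            # who would steal
    stolen = rewards[(rewards > bond_eth) & malicious].sum()
    return stolen / rewards.sum()

for bond in (2, 4, 8):
    print(f"bond {bond} ETH -> ~{stolen_share(bond):.3%} of EL rewards stolen")
```

Repeating the simulation many times and taking, say, the 90th percentile of the result would reproduce the risk-averse confidence-interval treatment described above.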
Speaker 2
23:42
So at this point we've built the framework for evaluating 3 types of risk in terms of impact on the protocol, and in terms of how a bond could prevent, or at least limit, it. But taking this a bit further: these risks could happen at the same time. So we should consider combining the risk scenarios and evaluating what would happen in that situation. And the basic gist is: you can't steal EL rewards if your validator is not running online.
Speaker 2
24:18
And a validator being offline is worse than EL stealing at a general level, because impactful EL stealing is really, really rare, while a validator going offline means rewards not generated and penalties accruing for the whole period. So constructing the most dangerous scenario follows a simple idea: a permissionless operator joins, goes offline until triggerable exits arrive, and gets slashed at the moment they are just around the corner. So again, this is a framework that provides the instruments for evaluating this type of combined risk.
Speaker 2
25:04
And the first one is the apocalyptic scenario, when all of the risks occur together, with a risk-averse approach to each of the assumptions. These are, again, assumptions on APR, on the time until triggerable exits, on the possible level of ongoing slashings, on the time in the exit queue, and on the validators that are affected. With different levels of bond, we get either situations where the bond covers all of this damage, for example 7 ETH here, or situations where the bond doesn't cover all the damage, in which case we face a decrease from the missed rewards not compensated by the bond, which could be treated the same as an increased fee, just to put it in context. Again, this is an instrument that can provide this valuation, but these assumptions are, as I said, apocalyptic.
Speaker 2
26:00
Each of them is really, really rare, with a low probability of happening, and all of them happening at the same time is basically non-existent. Protecting against this type of risk would cost us too much in terms of market affordability. A more realistic approach uses a more realistic APR and a more realistic amount of ongoing slashing, which is basically capped at 1%, which is still huge; we have never faced it in history. And if ongoing slashing is less than 1%, the correlation penalty is 0 ETH.
Speaker 2
26:40
Adjusting the time in the exit queue for validators a little, for this situation the bond would cover all possible risks. And the assumption on the share affected basically doesn't matter, because whether it's 1%, 10%, or 20%, when the bond as a risk-transfer mechanism covers all the losses, there is no impact on the protocol. There are multiple different scenarios with some parts tweaked, for example, if triggerable exits only arrive in 3 years, which is really impactful, or scenarios where the midterm penalty would actually appear, for example if we want to protect against some Sybil attack.
Speaker 2
27:25
We could even consider the effect of going with a zero bond because, again, it's a framework, a tool to evaluate the risk. But I'll move to the conclusion, which is pretty simple: risk assessment is a framework, a tool that can be used to evaluate the different scenarios. And then the decision should be made not just on the amount of bond we should require, but in terms of scenarios and the actual risks that we as a DAO want to mitigate, and which ones we are willing to take because we consider them non-existent in terms of probability. And of course, I'll say it again: the bond is just 1 of multiple options that could be used to mitigate the risks analyzed within this framework.
Speaker 2
28:22
And thank you. If there are any questions in the chat or somewhere else, I would love to answer them.
Speaker 1
28:30
Thanks for the presentation, Max. I don't see any questions in the chat. I have a couple that I'll try to go through quickly, in the interest of time.
Speaker 1
28:39
Maybe we don't spend too much time on this, but I'm looking forward to seeing this fully fleshed out and posted, perhaps on the forum, so that people can dig into the math and ask specific questions when it's available. So one of the questions that I have, and actually I think Dimitri brought this up in the Zoom call, and maybe Dimitri even wants to talk about this himself. I'm going to wait to see if he's going to unmute or not. No?
Speaker 1
29:06
Yeah, yeah,
Speaker 5
29:08
yeah, yeah, yeah, here I am. So the important addition here is that the risk assessment is done under the assumption that each validator has its own bond and there is no connection between validators. But the actual situation, most of the time, is that 1 operator runs multiple validators.
Speaker 5
29:28
And if we're speaking about MEV stealing, one way to mitigate the risks even better, while keeping the same bond per validator, is to associate the bond not with a single validator but with all the validators of the node operator, and then make this combined bond subject to penalty application in case of a huge MEV steal. In this case, assuming the node operator has, let's say, 10 validators, we would have a 40 ETH bond. And this covers literally 99.9% of the scenarios. Yeah, so that's my addition here.
Speaker 5
30:12
Yeah, yeah.
Speaker 2
30:13
Yeah, absolutely true. You're completely right. On one side, I would add a bit: pooling all the bonds together is not only a way for the DAO to build a new level of risk protection, but it could also be used to lower the entry barrier. Because if the DAO is better protected against the risks with a mechanism like pooled per-operator bonds, we can lower the bond per validator, since we can share some of this risk premium with the stakers, requiring less capital for protection.
Speaker 2
30:55
So it's technology that's good for everyone, basically.
Speaker 1
31:00
Yeah. So Torsten, in response to your point, Dimitri, brings up a good point, which is: if there's a node operator who is likely to steal, or it's within their game-theoretic strategy to potentially steal, why don't they just Sybil, i.e. pretend to be multiple node operators, instead of being truthful, let's say, and grouping all of their validators under one entity?
Speaker 2
31:28
For sure. Yeah. Yeah.
Speaker 2
31:34
Sorry. And that's the point I was talking about. If we have an incentive for the node operator to show that he has 10 validators, or 100, and to provide the bond in one pool that can be used for protection, we could lower the bond to incentivize this, I wouldn't say good, but this risk-aware behavior where there is no intention to attack.
Speaker 2
32:02
And if there is an intention to attack, of course, it could be one entity controlling 100 validators pretending to be 100 solo stakers. But in that situation the bond would be higher, and different technologies, beyond bonding, could be applied to provide some incentivization against that behavior.
Speaker 1
32:28
Yeah, I think that nails it. And there's a bit of a difference of approach, I guess, that we're exploring. Like, do we do nonlinear bonding, or do we offer some sort of performance bonus, let's say, if you've been successfully running a large number of validators, and large doesn't need to be hundreds, right?
Speaker 1
32:47
If you're a solo staker, it might be low tens or something like that, to incentivize operators to self-group their validators, let's say. Yeah, definitely more carrot than stick, as Torsten says. So there's another comment in the chat, and this is the last one, and then we'll move on to Lucas and Nethermind. Large MEV profit isn't necessarily going to be captured by node operators in that sense; if the MEV takeaway is large enough, the attacker will set up their own validator set.
Speaker 1
33:18
So that's potentially true. It's very possible that modules that allow for permissionless entry into Lido may potentially be used by these actors, right, to increase their chances of landing a proposal. But that's why, in theory, you need at least a decent amount of bond to begin with. And it's also why permissionless entry into the Lido operator set is envisaged to occur through multiple channels and not just a pure-play solo staking module.
Speaker 1
33:52
So with DVT, depending on how you do cluster selection, it may be a little bit more difficult to get a complete Sybil, let's say, of the cluster, such that the operator could maliciously take over the proposals. Yeah, sorry, there's a question: if you think about it, why risk hacking or getting front-run, and if you can ensure no leakage or detection outside of Lido, then indeed the only thing at risk there is the non-productiveness of that ETH that's sitting in the queue while you're setting up the validators, or while it's getting exited when you want to wind down the validator operation, because nobody's going to trust you with a block going forward. But yeah, that's correct.
Speaker 1
34:47
But yeah, guys, keep the questions coming. Feel free to throw them in the chat and we'll send them to Max. When the presentation and the analysis are available on the forums, we'd welcome diving into this in depth, because there's still a lot of meat to uncover.
Speaker 2
35:02
Yeah, thank you for the questions. Yeah, Izzy, you mentioned that we're planning to publish this research on the forum with the base math behind it.
Speaker 2
35:11
And all the comments, all the ideas, are very, very welcome. Thank you. I think I need to...
Speaker 1
35:20
Thank you, Max. So up next we have Łukasz from Nethermind.
Speaker 6
35:31
Yes, I'm sharing the screen.
Speaker 1
35:35
We can see it. Go ahead.
Speaker 6
35:37
Okay. So, I'm Łukasz Rozmaj from the Nethermind Ethereum client team. I'm a tech lead there. I'm also an Ethereum core developer. And thank you for having me.
Speaker 6
35:50
So I will start with a very brief history. You might see Nethermind popping up in different places. The company right now is around 200 people strong, but the number of people actually working on the client is smaller; it's just a subset of this team. And Tomasz, our founder, started working on the Nethermind Ethereum client in 2017.
Speaker 6
36:17
We had our first release in 2019, and I joined the team then. Between 2020 and 2021, we added some of our unique features that I will tell you about a bit later. But as you can see, we only started to grow the team a lot in 2021, and then in '22, and now in '23. So we had quite a few things to catch up on, but I think we are catching up, or have caught up, on almost everything, and now we have the manpower to create the perfect Ethereum client that we envision.
Speaker 6
37:02
So why am I actually here? I'm here to promote client diversity. And client diversity can be seen along a few axes. We can have client diversity among the nodes on the network, we can have client diversity among the validators that are actually validating the chain and creating the blocks, and we can have client diversity among the builders, though with MEV it's even more complicated to achieve that.
Speaker 6
37:38
And why do we need this? Probably the most important one is the diversity of validators. No one cares if we have diversity among the Ethereum nodes if all the validators run Geth, for example, right? They need to run Nethermind, Besu, or Erigon in order to have proper client diversity.
Speaker 6
38:01
Why is this important? It reduces dependency on the Geth codebase. We don't want Geth to be Ethereum. We want the Ethereum spec to be Ethereum, with everyone implementing it doing the right thing.
Speaker 6
38:18
So if we have a bug in one of the codebases, it won't affect the whole chain, right? That would be bad. Also, when we have bugs in some of the clients, the effect is very explicit. For example, if clients covering more than one third of the network have bugs, then we would lose finality.
Speaker 6
38:46
This is not fine, it's bad, but it's not catastrophic, right? If we lose finality, the chain is actually being informed that there is a problem, and, for example, anyone who is averse to risk can stop trading during that time until the issues are solved.
Speaker 6
39:15
There is also one interesting aspect on the social layer. If we have more clients, we also have more protocol client developers. And those protocol client developers are, to a degree, gatekeepers of Ethereum's evolution, right? So for example, we decide, to some extent, which EIPs will be implemented in the next hard fork.
Speaker 6
39:52
And if you have multiple teams working there, you have less bias, right? If one team has some bias, or prioritizes some things based on their own personal experience or personal beliefs, having multiple teams makes this a more democratized process. So there are multiple ways we benefit from client diversity.
Speaker 6
40:18
The problem, like I said, is measuring client diversity. In terms of node diversity, we have a decent measurement in ethernodes.org. This is a slightly outdated image: Geth is around 50%, and Nethermind is around 20% to 25%.
Speaker 6
40:39
And we have Erigon at 10% to 15%, and Besu at around 10% of the nodes. So the node diversity is looking decent. It's not great, but it's not bad right now.
Speaker 6
40:53
But like I said, node diversity isn't the thing we are most interested in. What we're most interested in is validator diversity. And validator diversity isn't as good, and it's really hard to track, because we cannot easily measure which validator runs which kind of node. It would be a privacy breach.
Speaker 6
41:22
We don't even want to gather this data. So the only way this is currently being done is via social channels, like running surveys and just talking to people. And I think there is not enough diversity in the validator space right now, and we need to improve it. And improving it is not only in, for example, Nethermind's, Erigon's, or Besu's interest; it's in the interest of the whole Ethereum ecosystem.
Speaker 6
41:55
And now, to plug Nethermind: what are our unique features? Why might you be interested in moving to Nethermind? Firstly, we have super fast sync. Our sync is the fastest on the market.
Speaker 6
42:11
It scales really well. On a super fast machine it's super fast; on a fast machine it's only fast. But in the best-case scenarios, we can have a machine validating the chain in 30 to 40 minutes. And in some cloud scenarios, this is the time it takes to copy the Geth database.
Speaker 6
42:32
So we can sync to the head of the chain in a similar time to copying the Geth database from disk to disk. It's something we boast about. Just be aware, this covers the headers and state sync, which allows you to both create blocks and attest to the head of the chain. We still sync bodies and receipts in the background.
Speaker 6
43:03
And that takes a few more hours, let's say 2 or 3 hours on a good machine. So just be aware that we have a different sync than probably any other node. Next, full pruning.
Speaker 6
43:21
So we still use a fairly straightforward Yellow Paper implementation of Ethereum state. This means that some garbage gets into the database from time to time. We cannot prune everything in memory; we have to save tries to disk from time to time, for example just to support restarts correctly.
Speaker 6
43:46
Or because our cache in memory is full and we need to dump it. So there is garbage accumulating in the database. It's similar in Geth, but thanks to our unique design we can clean it up without stopping the node. So you can configure your node, for example, so that if your state database is over, let's say, 500 gigabytes, when it should be around 150.
Speaker 6
44:19
Then you will actually run the full pruning in the background, without stopping the node, without missing attestations, without missing block production. And this will clean it up. It will take, again, depending on the hardware, probably a few hours.
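For reference, the background full pruning Łukasz describes is configured through Nethermind's Pruning options; a sketch of such a setup might look like the following, though the option names and values here are from memory and should be treated as assumptions to be verified against the current Nethermind documentation.

```bash
# Illustrative only -- verify option names against the Nethermind docs.
# Hybrid mode keeps in-memory pruning and triggers a background full prune
# once the state database grows past the configured threshold (~500 GB here).
./nethermind \
  --Pruning.Mode Hybrid \
  --Pruning.FullPruningTrigger StateDbSize \
  --Pruning.FullPruningThresholdMb 500000
```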
Speaker 6
44:38
But then you will have your database reduced. Another thing is that our node is optimized for staking and for being a validator. So we are targeting stability, especially the stability of running the node. That's our top priority.
Speaker 6
44:58
So if we get any stability issues, that's the top priority for the team. And the second priority is attestation performance, right? We do not allow attestations to degrade. On a fast NVMe drive, I've seen nodes getting 100% attestations.
Speaker 6
45:18
There are some problems on slower drives, and this is because of our current state layout. But this will change; I'll plug that in one of the next slides. And one last thing that is fairly unique to Nethermind, quite a completely unique feature, is plugins.
Speaker 6
45:38
So a lot of our own features are implemented as plugins, and plugins work in such a way that they are loaded by the node into the same process and can provide unique functionality. This is maybe not really needed by node operators unless they are doing something quite custom, but I can imagine that you want to have unique alerts, for example, for your node. And while you might be able to achieve them via JSON-RPC and things like that, it might be easier to integrate them directly into your node and get them from there. For example, if you wanted to do something like that in Geth, you would have to fork the Geth code, maintain it, and build it yourself. But for Nethermind, you only need to build a simple plugin.
Speaker 6
46:35
You just drop it into the plugins folder, it loads, and you have additional functionality. And in the recent releases, and by recent I mean, let's say, the last half a year, we were focusing more on stability, but we also have a lot of performance improvements. We were decreasing how much memory we use, decreasing how many SSD writes and reads we do, and decreasing CPU usage where we can, or optimizing it.
Speaker 6
47:11
And all of that was done, again, to improve attestations. Like I said, attestations are the most important thing. And from the EL standpoint, attestations are connected to how long it takes the node, from when it gets the new payload, the new block, to when it tells the CL whether it's valid or invalid. So this time is crucial.
Speaker 6
47:38
It's the most important thing for attestation performance. And we also have these heavy processes that we can run in the background, right? Our old-block and receipt syncs, as well as pruning, are quite heavy. And in the first, older implementations, they had a big impact on attestations, because they were using a lot of CPU and a lot of SSD input-output.
Speaker 6
48:11
So they had a big impact on attestations, but we minimized this impact in recent releases. So you can have more reliable attestation performance even when they are running. Another thing was to decrease sync time and improve scalability. Sync time scales directly with your resources, right?
Speaker 6
48:35
So again, best-case scenario, 30 minutes, but even if you have a slower disk and slower CPUs, and for sync time your internet connection is also important, it's still really good. Increased stability; like I said, stability is king. And in the last release we improved the UX, and by UX I mean our logs.
Speaker 6
49:02
So for a long time we had quite dull, non-colored logs. Now we have colored logs. We removed a lot of unimportant logs, or collapsed them to just a header instead of a big block of statistics. And I think this will make the logs more readable, especially for new users.
Speaker 6
49:28
Another important thing: what is the future of Nethermind? Apart from, of course, following all the hard forks, one of the biggest changes that we have been working on for over a year now is changing our storage model. Like I said, we have a very straightforward storage model for now, but we are moving to path-based storage. And it allows us to achieve a few interesting things.
Speaker 6
49:59
So when Nethermind 2 is released with this new storage model, we will have no pruning. No garbage will end up in the database; it will be deleted, pruned on the fly while the block is being processed. And thanks to better read patterns from storage, we will have faster block processing.
Speaker 6
50:23
This will enable us to get better attestations even on slower hardware, especially, I mean, slower disks, because those matter the most. Just to give you an example from our profiling: when a block is being processed, currently 20% of the time is actually EVM execution, and 80% of the time is state access. After this change, our first benchmarks show that EVM execution is 80% and state access is 20%. So the whole thing is a few times faster already.
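The "few times faster" follows from simple arithmetic, assuming the EVM-execution time itself stays constant across the change:

```python
# If EVM execution time t is unchanged, flipping the profile from
# 20% EVM / 80% state access to 80% EVM / 20% state access shrinks
# total block-processing time from 5t to 1.25t, i.e. ~4x faster.
t = 1.0                        # EVM execution time, arbitrary units
old_total = t / 0.20           # EVM was 20% of the total -> total = 5t
new_total = t / 0.80           # EVM becomes 80% of the total -> total = 1.25t
print(old_total / new_total)   # -> 4.0
```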
Speaker 6
50:57
And we haven't really properly optimized the EVM execution yet, because it wasn't worth it before we delivered that. We also want to have fewer state corruptions. Sometimes we have corruptions, and we want to fix them. And actually, we are also adding state recovery.
Speaker 6
51:18
So when we have corruptions, we want to have more mechanisms to recover from them. Changing the storage model won't do this directly, but it will allow us to build a better archive node. Currently, we are not really competing in this regard; Erigon, for example, has a much better archive node than we do.
Speaker 6
51:39
But soon, I think, we will be competitive. Serving snap sync: snap sync is how we sync our state, and right now only Geth serves snap sync. Maybe Besu does, but I'm not sure.
Speaker 6
51:53
And after we change the state layout, because we need to be more performant there, we will start serving snap sync, which will improve client diversity at this level. You will also have faster JSON-RPC: anything from eth_call to anything that touches the state will be a few times faster. More optimizations; we are always looking for optimizations.
Speaker 6
52:20
And something to plug, not for Nethermind 2.0 but for the future, that is being worked on in the Ethereum community and that I think might be interesting for validators, is Verkle trees. Verkle trees will come in one of the future hard forks; not sure if directly after Cancun or maybe one more after that. But what they bring is something very interesting, because they bring light clients.
Speaker 6
52:45
And light clients will be able to validate blocks without having any state. So imagine that you run a node, and even without it syncing, it can validate blocks. You can have a validator with almost no storage. And even if you want to download the state, because, for example, you want to use JSON-RPC, you can download it in the background while attesting the whole time during the sync.
Speaker 6
53:15
So you will have attestations from the moment you start the application, when you start the node. And Nethermind actually prototyped this, and it's live on, I think the devnet is called Clostin or something, the Verkle 3 devnet. And yeah, we are the first there to do that. I also want to plug a tool from our DevOps team called Sedge.
Speaker 6
53:46
It's a one-click way of setting up your CL-EL pair, and it supports multiple ELs and multiple CLs, not only Nethermind, and multiple networks. So all the testnets too, and Gnosis Chain. And we are really eager to hear your feedback. So tell us what you need, tell us what's good, what's bad, what's not working.
Speaker 6
54:13
And I want to emphasize one thing: in our Discord server, we have a client support channel, and every week there's a different Nethermind developer there, someone working on the client, who can help you with your problems, diagnose them, and help you with your setup. So we try to leave no one behind, right? Every node counts. So if you want to switch, don't be afraid.
Speaker 6
54:41
We will try to help you however we can. And that's pretty much it. Maybe there will be some questions.
Speaker 1
54:51
Thank you, Łukasz, for that overview. I'm looking in the YouTube chat right now. There's a question around when we might have gRPC support for Nethermind.
Speaker 6
55:01
So gRPC support is currently not prioritized, but we might start working on it in the next month or so. It will probably take a few months to get there. But this is something we want to deliver.
Speaker 6
55:17
But it's a lower priority than, for example, the state redesign.
Speaker 1
55:25
OK, understood. That's fine.
Speaker 6
55:27
One thing: if two people ask, it will get bumped. So please tell us what you need. If more people need this gRPC support, we will probably shift it up in priorities.
Speaker 1
55:39
So spam the client support channel on Discord is what you're saying?
Speaker 6
55:43
Yeah, from multiple accounts.
Speaker 1
55:45
The more people there, the better in a certain way. So I guess that's how you get them in, right? Bait and switch.
Speaker 1
55:51
Yes. No, that's really cool. Really cool. Yeah, and Verkle tree support is something really, really cool to look forward to.
Speaker 1
55:58
And then potentially, you know, being able to validate with much less state stored. In terms of the syncing model, I just had a really quick question. So obviously, you use a really different syncing paradigm. How soon can the EL client be used to actually produce blocks?
Speaker 1
56:13
Do you still need to sync the full state first?
Speaker 6
56:16
You need to sync the full state, right? Because to produce blocks, you need to actually access the state, right? And to get the actual state root.
Speaker 6
56:26
So, but like I said, it's 30 minutes on a very fast machine, 1 hour on a decently fast machine, 2 hours on a mediocre machine, right? So you can get there very quickly.
Speaker 1
56:41
OK. And one last question before we jump into the world of MEV burn. What is the testnet where Verkle trees are being implemented and tried out? Somebody asked.
Speaker 6
56:53
I would have to look it up.
Speaker 1
56:55
Okay, no worries. We can find it and we can send it in the chat.
Speaker 6
56:59
I will put it in the chat in a second.
Speaker 1
57:02
Awesome, thank you. Okay, thanks a lot, Łukasz.
Speaker 6
57:06
Thank you.
Speaker 1
57:07
And up next, we have Justin Drake, who's going to be telling us about MEV burn.
Speaker 7
57:15
Okay, hi everyone. So I guess this is the same presentation that I gave at EthCC, but the opportunity here is for you guys to interrupt me whenever you want and ask as many questions as you want. So, yeah, I'm going to be talking about MEV burn, and the way that I think about it, and the way that I try to sell it to the general public, is as the equivalent of EIP-1559, but for MEV.
Speaker 7
57:42
So the mental model that I have in mind is that there are 2 use cases for block space. The first one is inclusion of transactions: you want your transactions to go on chain and get confirmed. And then the other one is specifically the ordering of transactions.
Speaker 7
58:00
So for example, if you're at the very top of the block, you're special and you can collect all the arbitrage. And when there is competition for inclusion, you get congestion, just like a road which is congested with traffic. And when you have competition for ordering, you have contention: you have various transactions that are contending, for example, to be at the very top of the block.
Speaker 7
58:25
And it turns out that the block value is really the sum of both of these things. On the one hand, you have the congestion fee, which EIP-1559 has managed to capture very well, and which varies over time. And then you have the contention value, which traditionally we call the MEV, which also varies over time but can be much more spiky. And just like we have a burn mechanism for congestion with EIP-1559, we can have a burn mechanism for contention as well.
Speaker 7
59:10
Okay, now what are the benefits of burning part of this block value? It basically boils down to 2 things. When you're burning, one of the consequences is that you're actually smoothing the rewards. From the perspective of a validator, you get a smooth income from issuance, and then you get this very bumpy income from being a proposer.
Speaker 7
59:40
And so by burning the proposer rewards, you're left with the attester rewards, which are very, very smooth. And that, as we'll see, comes with various security benefits, especially for staking pools, as we'll discuss. And then there's another aspect of burning, which is redistribution: the cash flow of this block value, instead of going to the stakers as a community, as a group, goes to ETH holders, which is a broader, wider group.
Speaker 7
01:00:16
And as I hope to argue, this has economic benefits very similar to EIP-1559. Okay, so I just want to give you a little bit of intuition as to how the construction actually works, and I'm happy to answer any questions here. Before we do that, we need to refresh ourselves on what proposer-builder separation is, because that is the context in which MEV burn lives. So we have builders that submit bids.
Speaker 7
01:00:50
These bids are gossiped or shared with relays to form the bid pool. So the bid pool is either maintained by nodes, in enshrined PBS, or maintained by relays, in the context of MEV-Boost. The proposer is able to access this bid pool and pick one of the bids. Generally, they'll pick the highest-paying bid, or, if they're censoring, they might pick the highest-paying bid that comes from a censoring builder.
Speaker 7
01:01:21
They commit to the bid by signing it, basically signing this block header and gossiping the block header. And this act of committing to the bid should give them a guaranteed payment as the proposer, either guaranteed natively with enshrined PBS or guaranteed by the relays. And then, once the proposer has committed to the bid, it's safe for the winning builder, the selected builder, to reveal the corresponding payload. And if we look at how enshrined PBS specifically is implemented, well, at least the suggested design, it basically has various periods within a slot.
Speaker 7
01:02:08
So before the slot even starts, we have this bidding period where the builders submit bids to the bid pool, and then within the slot there are these 2 phases that are very important. The first one is the proposer committing to the bid, and then the builder revealing the payload. And the way that we make sure everything goes smoothly is that we invoke the attesters. We ask the attesters, as a committee, to attest to the timeliness, meaning that these consensus messages were gossiped on time.
Speaker 7
01:02:44
So we ask the attesters to attest to the timeliness of the proposal, which is this committed bid, and we ask them to attest to the timeliness of the builder's payload reveal. And the reason why I went to all this trouble of giving you all this detail is because MEV burn is a very simple change on top of this picture, and here it is. MEV burn is the newly added orange portions: basically asking these attesters to observe the so-called burn floor, which we'll talk about.
Speaker 7
01:03:20
It's basically the highest block value that was observed 2 seconds before the start of the slot. And then, once they've observed this burn floor from the bid pool, they need to enforce it: they basically need to make sure that the proposal that was selected by the proposer has a value of at least the burn floor. And this burn floor will be value that is burnt, just like EIP-1559 burns the base fee. Any questions so far?
Speaker 7
01:04:01
Because this slide is, I guess, pretty dense.
Speaker 1
01:04:05
So I have a couple, maybe just specifically on this slide. Yeah. The existence, or the addition, basically, of a burn floor, and this interaction between bidders and proposers going forward: does that increase the possibilities or potential for collusion around where to set that floor?
Speaker 7
01:04:25
Right, so that's a very good and advanced question. Here's how I answer it. One very important property of this is that the incentives for the builders don't change, right?
Speaker 7
01:04:38
The way that builders profit is they will generally capture the delta between the top builder that is winning the auction and the second-best builder. The reason that doesn't change is because the only change is on the recipient side of things: the proposer is now no longer receiving the MEV, because it's being burnt. Now, on the topic of collusion, you might ask: okay, can the proposer, along with the builders, all collude in order to deactivate the MEV burn? And my answer here is that it's strictly simpler for the builders to collude among themselves than it is for all the builders to collude among themselves and also with the proposer, right?
Speaker 7
01:05:30
Because there's one more party involved, and so it's strictly more difficult to coordinate. Now let's assume that the builders are all able to collude among themselves today. Well, it turns out that they could get all the MEV for themselves. And that's something that we're not seeing today, right?
Speaker 7
01:05:52
There's about a thousand ETH of MEV that goes to the proposers, and the builders don't seem to be colluding with each other. They seem to be competing against each other. And there's no reason why MEV burn would change this, because, as I mentioned, there's no change to the incentives from the perspective of the builders. Does that answer your question?
Speaker 1
01:06:17
Yeah, no, that does. So basically you're saying that the solution space of what they could achieve by doing so is already here. This just narrows it, and kind of adds the requirement to also be able to collude with another party, the proposers, which doesn't exist in the current system.
Speaker 7
01:06:35
Yeah, exactly. And to be quite honest, I'm a little surprised that the builders are not colluding more with each other, because, you know, they're making a fairly small profit margin by competing, but they could be keeping most of the rewards for themselves if they were not giving them to the proposers. But that's a different story.
Speaker 1
01:06:57
But the corollary to this, to a certain extent, is that given that proposers are losing a lot of value, or at least marginal value, per block, they might be more incentivized to become builders themselves and therefore vertically integrate. And it's still the same number of parties that would need to collude, like n, but in certain cases these parties would be more vertically integrated, i.e. more centralized, thereby going around the whole point of PBS, which was to separate these actors in the block space ecosystem.
Speaker 7
01:07:33
So I don't think that's the case. And the reason is that if you have n builders, you just need one defecting builder for this mechanism to work. And if the proposer decides, okay, I want to vertically integrate and also become a builder, now you have n plus 1 builders.
Speaker 7
01:07:52
And if you can have one out of n builders defecting, then certainly you can have one out of n plus 1 builders defecting. So I don't think that changes things.
Speaker 1
01:08:05
I think that's a good answer. I hope it's true. Let's not stay on this for too long.
Speaker 1
01:08:09
Let's keep going.
Speaker 7
01:08:10
Okay, yeah, it is a very advanced question. Okay, moving on. So what is the definition of the burn floor?
Speaker 7
01:08:18
Basically, it's going to be the maximum over all bids that are 2 seconds early, and by 2 seconds early I mean bids that came 2 seconds before the start of the slot. And we're going to be maximizing the sum of the transaction base fees, known as the EIP-1559 base fees, and this new thing, which is a block base fee. So every block will itself have a base fee at the block level. And it will be, you know, basically the bid value from the builders, which they get from MEV. Okay, so that was the construction.
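As a toy illustration of the burn-floor selection just described (the Bid fields and the encoding of the 2-second cutoff here are assumptions for the sketch, not the actual enshrined-PBS data structures):

```python
# Toy burn-floor computation: the max, over bids seen at least 2 seconds
# before the slot starts, of transaction base fees plus the block base fee.
from dataclasses import dataclass

@dataclass
class Bid:
    arrival_time: float    # seconds relative to slot start (negative = early)
    tx_base_fees: float    # ETH burnt via EIP-1559 for the payload's txs
    block_base_fee: float  # ETH, the new block-level base fee (the MEV part)

def burn_floor(bids: list[Bid], cutoff: float = -2.0) -> float:
    early = [b for b in bids if b.arrival_time <= cutoff]
    return max((b.tx_base_fees + b.block_base_fee for b in early), default=0.0)

# Attesters then enforce that the committed proposal burns at least this floor.
```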
Speaker 7
01:09:00
It's fairly simple. And I guess the whole rest of the talk is about the benefits of MEV burn. The benefits fall into 3 categories. One is that we're enshrining infrastructure.
Speaker 7
01:09:15
And by doing so, we're providing a little bit more structure, a little bit more rigor, to the bid pool and things like that, and that right off the bat is going to have some advantages. We also have advantages from smoothing the rewards, as I mentioned in the introduction, and we're also going to have benefits from redistributing the cash flows from the stakers to the ETH holders. Okay, so let's talk about the advantages from enshrining. One of the cool things that we will get, very similar to the base fee for congestion, which provides an on-chain oracle that can be used by smart contracts, is a new oracle for contention.
Speaker 7
01:10:05
So just like Ethereum is aware of the value of its own congestion pricing, Ethereum as a blockchain can be aware of its own MEV. Nowadays, the bids from MEV-Boost can be totally manipulated in either direction. So for example, a proposer could just pick an empty block, which has 0 transactions and 0 MEV, despite the fact that there could be a lot of latent MEV in the system which is not being collected. And what a proposer can also do is spin up a builder and create a fake block which has a bid of, I don't know, 10,000 ETH.
Speaker 7
01:10:50
And then this bid is just the proposer paying themselves, you know, from the builder to themselves. And they can artificially make these high bids. And so you can't use MEV-Boost bids as reliable data points and as an on-chain oracle. And I guess another advantage of enshrining is that you're also making ETH, the asset, more of a unit of account within Ethereum.
Speaker 7
01:11:23
So now not only do you have to pay for congestion base fees in ETH, but you also have to, you know, pay or settle the MEV payment in ETH. Today if you wanted to, you could pay searchers and builders using any currency you wanted. So it kind of enshrines things a little bit here. Another advantage is around the trustlessness of bidding.
Speaker 7
01:11:55
So once we have enshrined PBS, builders and proposers will have the option to use 2 separate bid pools. They can keep using the MEV-Boost bid pool, or they can choose to use the enshrined PBS bid pool. And when we have MEV burn, we're basically truly enshrining the enshrined PBS bid pool, and we're removing the option to use the MEV-Boost bid pool. And the reason is that in order for the block to get included on chain, it must satisfy this burn floor, and so the proposers really need to be getting their bids from the enshrined PBS bid pool.
Speaker 7
01:12:47
Another advantage is around censorship. So today we effectively have 2 separate bid pools. We have a bid pool that is supplied by censoring builders, and we have a bid pool where all builders, whether or not they're censoring, are submitting. And some proposers have decided to only pick bids from the censored bid pool, and they're kind of pretending that the uncensored bid pool doesn't even exist.
Speaker 7
01:13:18
And so examples of this include Celsius, Jump Capital and others. And what MEV burn is doing is kind of forcing proposers to select bids from this unified bid pool. They can't discriminate between censored and uncensored because, again, they need to respect this burn floor. And that sometimes will mean that they have to pick a bid from the uncensored bid pool.
Speaker 7
01:13:48
And I've talked to a lot of the censoring builders, sorry, proposers. And what they've told me is that really the only reason why they are censoring is because they can censor. And so if it was technically impossible, or they had some form of plausible deniability, or whatever it was, they wouldn't be censoring, because censoring is not fun. And MEV burn really gives them this protection that they need, or possibly gives them the protection that they need, in order to stop censoring.
Speaker 7
01:14:22
Another benefit around censorship is the costliness of censorship. So before EIP-1559, we had this notion of a single transaction fee, before we had the base fee and the tip, and if you wanted to censor a transaction you would pay the full opportunity cost of not including that transaction. So if someone was willing to pay $10 for their transaction and you didn't include it, then you lost $10 in opportunity cost. With EIP-1559, you're only losing the tip. And generally, the tip is much, much smaller than the base fee.
Speaker 7
01:15:05
So for example, right now the base fee might be 30 gwei per gas, and generally a tip of 1 gwei per gas is sufficient, so a 30x delta. And so the cost of censorship today is actually extremely small, because it means that you're ignoring a few transactions, but you're really, at the end of the day, just losing a few pennies of opportunity cost per transaction. What MEV burn does is, because this burn floor is trying to maximize both the transaction base fees and the block base fees, we're returning to this model where the opportunity cost of censorship is the full transaction value. And if you really want to censor a transaction for a long period of time, not only do you have an opportunity cost, but it's even worse than that.
Speaker 7
01:16:04
You need to subsidize, kind of pay out of pocket for the burn in order to sustain the censorship over a long period of time. And that's assuming we don't have an inclusion list. Once we have inclusion lists, that's kind of yet another layer of defense against censorship. Okay, so that is the end of the first section in terms of benefits from enshrining.
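A back-of-the-envelope sketch of the censorship-cost comparison a moment ago, using the talk's example figures of a 30 gwei base fee and a 1 gwei tip; the gas amount and the framing of the three regimes are illustrative assumptions:

```python
# Opportunity cost of censoring one transaction, in three fee regimes.
GAS_USED = 21_000   # a simple transfer (illustrative)
BASE_FEE = 30       # gwei per gas, the example figure from the talk
TIP = 1             # gwei per gas, the example figure from the talk

pre_1559_cost  = GAS_USED * (BASE_FEE + TIP)   # the full fee was forfeited
post_1559_cost = GAS_USED * TIP                # only the tip is forfeited
mev_burn_cost  = GAS_USED * (BASE_FEE + TIP)   # the burn floor restores the full value

print(f"pre-1559:  {pre_1559_cost:,} gwei forfeited")
print(f"post-1559: {post_1559_cost:,} gwei forfeited "
      f"(~{pre_1559_cost // post_1559_cost}x smaller)")
print(f"MEV burn:  {mev_burn_cost:,} gwei forfeited")
```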
Speaker 7
01:16:30
Any questions on this first section here?
Speaker 1
01:16:35
No, I think that's pretty clear. I really do like the message to make validators dumb again. That's definitely something I'm a huge proponent of.
Speaker 7
01:16:45
Right, exactly. In a way, discriminating between different bid pools is a form of sophistication, and we're just asking them to do the simplest thing possible, which is to just pick the top-paying bid from the bid pool. Okay, so let's talk about the advantages of smoothing, and those are relevant to staking pools like Lido.
Speaker 7
01:17:13
So 1 of the features of MEV is that it's quite volatile and there's a lot of variance, and the median reward is significantly lower than the average, roughly 3x lower on a block-by-block basis. And there are some big extremes. For example, this block that Lido received, 691 ETH, is 12,000 times larger than the median. And so if you're a small validator, you're effectively playing a lottery game, where most of the time in a lottery, you'll lose your tickets. You buy a $1 ticket and most of the time you're gonna lose, but every once in a while you're gonna win the jackpot and win this lottery.
Speaker 7
01:18:01
And this lottery mechanism is very similar to proof of work. And the economics, basically the fact that the median is lower than the average, is an incentive for the small players to pool in order to get access to the average return instead of the median. So if you look at proof of work, there's this very staggering fact, which is that if you buy a mining rig, so for example you spend $5,000 on a mining rig, more often than not, over the lifetime of the mining rig, over the 5 years, you will mine 0 blocks. But you might get very, very lucky.
Speaker 7
01:18:45
You might mine 1 block, and you might make several times the value of your mining rig in this 1 time when you do win the lottery. And as we know with Bitcoin proof of work, there are enormous amounts of pooling involved, and we don't want to have this forcing function for pooling. I guess it is to the advantage of Lido in this case, but it is not necessarily healthy. And so burning basically provides an alternative way of achieving this fairness of income which doesn't involve pooling.
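A toy Monte Carlo makes the median-versus-average point concrete; the distribution below is invented purely for illustration and is not calibrated to real MEV data:

```python
import random

random.seed(42)

def block_mev() -> float:
    # Most blocks carry little MEV; a rare block carries a huge jackpot.
    return 1000.0 if random.random() < 0.0005 else random.uniform(0.01, 0.1)

samples = sorted(block_mev() for _ in range(200_000))
mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]

print(f"mean:   {mean:.3f} ETH per block")
print(f"median: {median:.3f} ETH per block")
# A solo validator most likely earns ~median per proposal; a large pool
# earns ~mean. Smoothing (or burning) the MEV removes this pooling pressure.
```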
Speaker 7
01:19:25
Another problem with the spikiness of MEV is around stealing MEV, and DoSing is 1 such example, which I've illustrated here. So imagine that you have 3 proposers in a row, and each proposer is entitled to some MEV reward. And now let's assume that the last proposer is evil and has decided to DoS the previous 2 proposers. So what does that mean?
Speaker 7
01:19:54
It means that the previous 2 proposers are not going to receive their share of MEV. But it's even worse than that, because the last proposer is actually going to receive the MEV that the previous 2 proposers would have received, because most of the time if you have some MEV it can be transferred to the next slot. And so here you see in this instance the last proposer is able to triple their expected amount of MEV, but they could do this opportunistically when there's a very, very, you know, juicy slot. So if there's, you know, 1,000 ETH of MEV, they can decide to DoS the proposer so that if they're the next proposer, they will receive the 1,000 ETH, not the previous proposer.
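A tiny sketch of the rollover mechanic behind this attack; the slot values and the assumption that unharvested MEV carries over in full are both illustrative simplifications:

```python
# Three consecutive proposers; the attacker (last slot) DoSes the first two.
slot_mev = [10.0, 10.0, 10.0]   # MEV available in each slot (invented values)
dosed    = [True, True, False]  # which proposers the attacker knocks offline

carried, rewards = 0.0, []
for mev, missed in zip(slot_mev, dosed):
    if missed:
        carried += mev                 # unharvested MEV rolls over to the next slot
        rewards.append(0.0)
    else:
        rewards.append(mev + carried)  # the attacker collects its slot plus the rollover
        carried = 0.0

print(rewards)   # [0.0, 0.0, 30.0] -> the last proposer tripled its MEV
```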
Speaker 7
01:20:40
And once you have MEV burn, you no longer have these incentives to go attack proposers. And it turns out DoSing is just 1 example of these types of attacks. You can try and do eclipse attacks at the peer-to-peer network level, and you can try and do these small reorgs as well. It can become quite adversarial.
Speaker 7
01:21:07
Another benefit of smoothing is that you're removing what I call rug pulling, which is basically rug pulling a staking pool itself. So in staking pools like Lido or Rocket Pool, you have the stakers that provide the ETH, and then you have operators that are trusted to do several things. And 1 of the things that the operators are trusted to do is to always redirect the MEV back to the stakers, back to some sort of smoothing pool. But there are instances where the operator is not incentivized to do that.
Speaker 7
01:21:48
The rational behavior in some instances is for the operator to take the MEV in a given slot and run away with it. And the reason is that operators are collateralized. So in the case of Lido, today they're collateralized with reputation, but in the future they might be collateralized with just financial assets. And whenever the slot MEV is greater than the total amount of collateral, they're incentivized to run away with the MEV instead of giving it to the stakers.
Speaker 7
01:22:24
And I think as we try and distribute these staking pools and allow more and more types of operators to join in, this attack is going to become more and more of a design consideration. Okay, another aspect which I guess is relevant to Lido is whenever the operator receives toxic MEV. So the definition, at least my definition, of toxic MEV is MEV where there's a clear victim, a clear user that has lost money, as opposed to arbitrage for example, which is just latent MEV. And so whenever there's toxic MEV there's a dilemma: should we return the MEV or should we not return the MEV?
Speaker 7
01:23:19
And it's a dilemma that is quite rich in the sense that, you know, there's a legal question, there's a moral question, there's an accounting question, and there may be a governance question. And I think Lido has received some requests, you know, from victims to return the MEV, and my understanding is that the policy of Lido is to not be returning the MEV. And 1 of the bad things is that if you try and be a very altruistic actor and you try and return MEV, you're actually putting yourself at a competitive disadvantage. And so in a way, all the good guys are being punished, whereas all the bad guys that never return the MEV kind of get prioritized economically.
Speaker 7
01:24:26
But there's a second flavor of this toxic MEV, which I call systemic MEV. Let's imagine that instead of a few hundred ETH of toxic MEV, we're talking about millions of ETH. And this could happen, for example, if 1 of the roll-ups gets hacked and there are, like, millions of victims. And so here, 1 of the questions we ask ourselves is, what will happen if the staker uses this ETH to mount an attack?
Speaker 7
01:25:00
What will happen if... Sasha, I'm doing a presentation right now. Sorry, that's my son. It could be that there's so much ETH that has been stolen here that we could find ourselves in a DAO-like situation, where we need to use social consensus in order to do a bailout, and that creates a fork, just like we have Ethereum Classic. And so by burning the MEV, we just completely remove these considerations around how to deal with toxic MEV.
Speaker 7
01:25:43
And then I guess a final slide on this smoothing aspect is that once we're burning the MEV, we're actually unlocking what I call stake capping. So we're basically making it feasible for the beacon chain to have a cap on the total amount of stake. And this is useful for a couple of reasons. The first reason is that it means that we can have a hard upper bound on the total number of validators, right?
Speaker 7
01:26:16
Because if, for example, we set a cap of 32 million ETH, and you know you need at least 32 ETH per validator, that means that in the worst case we'll have a million validators. And validator capping is useful because it means that we can potentially have smaller validators, it means you could potentially reduce the 32 ETH to something like 8 ETH, it means we could potentially have shorter slots because we can achieve finality faster, and it might even be a stepping stone toward achieving single-slot finality. Because right now, the biggest impediment to achieving single-slot finality is that in the worst case we have these 4 million validators, and we can't aggregate 4 million BLS signatures in 1 slot. Then another interesting thing about stake capping is that we basically have a mechanism to burn the restaking proceeds, the yield from restaking.
Speaker 7
01:27:23
Because not only do we have value that comes from congestion and contention, but restaking kind of adds a third component to the value of staking. And the way that stake capping would potentially be implemented is with an issuance curve which goes down as you get very, very close to the cap, going towards negative infinity, and so you'd be in a position where you have to pay for the privilege of being a validator, because you get to receive, for example, a restaking yield. And the reason why MEV burn is important here is because it smooths things out, it removes the variance, and it doesn't exacerbate the pooling dynamic that I was talking about before. Because there are 2 components to the reward right now.
Speaker 7
01:28:25
There's the issuance, which is extremely smooth, and then the MEV, which is very, very volatile. But if you remove the smooth part, then you have even more volatility. And so it's important to smooth the MEV before we can even think of doing stake capping. Okay, so that was the end of the second section, on smoothing.
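Nothing like the issuance curve described a moment ago is specified yet, but as a purely hypothetical sketch of its shape, one could imagine ordinary sqrt-shaped issuance with a penalty term that blows up near the cap; every constant and the functional form below are invented for illustration:

```python
import math

CAP = 32_000_000   # the example cap from the talk, in ETH
K = 166.0          # sqrt-issuance scaling constant (assumption)
C = 1_000.0        # penalty scaling constant (arbitrary, for shape only)

def annual_issuance(total_stake: float) -> float:
    base = K * math.sqrt(total_stake)         # ordinary sqrt-shaped issuance
    penalty = C / (1.0 - total_stake / CAP)   # grows without bound near the cap
    return base - penalty

for s in (10e6, 20e6, 30e6, 31.99e6):
    print(f"{s/1e6:5.2f}M ETH staked -> net issuance {annual_issuance(s):>12,.0f} ETH/yr")
# Near the cap the curve dives negative: you'd pay for the privilege of validating.
```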
Speaker 7
01:28:45
Any questions here?
Speaker 1
01:28:48
I do have a general question, which is: what are the trade-offs? Because so far everything looks super rosy and good, but maybe we can get to it at the end.
Speaker 7
01:28:58
Yeah for sure. Yeah let's talk about some of the trade-offs at the end. Okay, redistribution.
Speaker 7
01:29:08
So, here it's all about redistributing cash flows from the stakers to the holders. And the way that I think about it is that today we have a surplus of staked ETH. And if you want to look at this blue circle of staked ETH, it's composed of 2 parts. You have the stakers that are here because there are rewards from issuance, and there's some fraction of the stakers that are here because we have this additional reward from MEV.
Speaker 7
01:29:48
And the way that we've designed Ethereum is that it's meant to be secure even if there's 0 MEV. Even if there's no activity on chain, we still want the beacon chain to be secure. And actually, that is where we were before the merge. Before the merge you only had rewards from issuance.
Speaker 7
01:30:09
And so arguably any non-zero MEV that's going to the stakers is overpaying for security, because we have this security from issuance alone. And there are 2 consequences of overpaying for security. 1 of them is around the balance of ETH between what I call economic security and economic bandwidth. So you have ETH, this pristine collateral money, which could be used for 2 use cases.
Speaker 7
01:30:46
On the 1 hand, it could be used for staking, economic security, or it could be used in the context of DeFi, for example stablecoins, economic bandwidth. Now imagine that we enter this massive bull run with tons of MEV. What will happen is that the beacon chain will absorb all the liquidity, think of it like a sponge, because there's going to be so much reward from MEV that all the ETH will go to the beacon chain, and there's going to be a drought within DeFi land, within the EVM, and applications that need this economic bandwidth will have to pay a huge amount for it. So the cost of money goes up. And once we have MEV burn, we have predictability on the total amount of economic security, as opposed to having a potential imbalance and potential unpredictability.
Speaker 7
01:31:43
And the second aspect of having this excess or surplus stake is around issuance. So the way that the issuance formula is built is that the total amount of issuance is proportional to the square root of the total amount of stake. And so if we have a little bit less stake with MEV burn, we're going to have a little bit less issuance. And so we're going to be removing unnecessary dilution for ETH token holders.
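To put numbers on the sqrt relationship: a commonly used approximation for annual beacon chain issuance is roughly K * sqrt(total stake) with K around 166, but treat both the constant and the stake figures below as illustrative assumptions:

```python
import math

K = 166.0   # approximate annual issuance scaling constant (assumption)

def annual_issuance(total_stake_eth: float) -> float:
    # Total issuance scales with the square root of total stake.
    return K * math.sqrt(total_stake_eth)

before = annual_issuance(26_000_000)   # hypothetical stake before MEV burn
after  = annual_issuance(13_000_000)   # hypothetical stake after validators exit

print(f"reduced issuance: {before - after:,.0f} ETH/yr")
# With these invented figures the reduction is ~248,000 ETH/yr, in the same
# ballpark as the talk's ~250,000 ETH/yr estimate.
```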
Speaker 7
01:32:16
So basically, when we look at the total economic efficiency of MEV burn, there are 2 components. It's a double whammy. On the 1 hand, we're reducing the issuance, which improves the scarcity of ETH the asset, but we're also increasing the burn.
Speaker 7
01:32:35
And I tried to quantify these 2 things somewhat conservatively, and my estimate is that we're talking about roughly speaking 250,000 ETH per year of reduced issuance and 250,000 ETH per year of increased MEV burn. And so yeah, it is a very significant optimization, on the order of half a million ETH per year, which makes ETH, the asset, more ultrasound, I guess, more scarce, and may increase the price of ETH. And then there's this other very interesting optimization, which is around taxes.
Speaker 7
01:33:21
So imagine that you have a company and you have cash flows that you want to redistribute to shareholders. There are 2 preferred techniques. The first 1 is dividends, where you're basically giving dividends to the shareholders. But another technique, and this is what Apple is doing, is that they're buying back Apple shares, increasing the scarcity of total Apple shares and increasing the price of each share. And it turns out that dividends in most jurisdictions are taxed as income, whereas buybacks are taxed as capital gains, because the price of your asset has gone up.
Speaker 7
01:34:12
And I think in most jurisdictions, income tax is significantly larger than capital gains. So for example, in the UK where I am, income tax is 50%, and capital gains is only 20%. And so by moving to this different model of rewarding the actors within Ethereum, we're significantly reducing the sell pressure from taxes to nation states. And yeah, that's pretty much it. So happy to answer questions.
Speaker 1
01:34:49
Yeah, I think maybe you can just go over what some of the main trade-offs of MEV-Burn are. In general, it seems like it's a good thing in a lot of different respects. 1, it continues to isolate the responsibilities of these different actors in the block space and block proposal ecosystem. Especially in the last section, you talked about the indirect benefits it would have at the network layer, like fewer people would need to run validators and there would be fewer validators overall.
Speaker 1
01:35:26
So that's great. But what kind of trade-offs or cons might there be?
Speaker 7
01:35:32
Right, so I've actually written an ethresear.ch post where I go through some of the considerations. 1 of the downsides, as always, is that this is, I guess, more complexity, more things that we have to do at the consensus layer. And we have a limited amount of bandwidth in terms of what the client implementers and the researchers and the broader community can tolerate.
Speaker 7
01:36:06
And maybe you want to prioritize things other than MEV-Burn. I guess there are more technical downsides. 1 of them is that there's a splitting attack. It's a little technical, but basically what a malicious proposer can do is try and pick a bid value such that half the attesters think that the bid value is above the burn floor and half the attesters think the bid value is under the burn floor.
Speaker 7
01:36:46
And so they've split the attesters into 2 groups, and that leads to chain instability. I guess the answer that I have to this is that proposers can already split the attesters into 2 groups using the timeliness of the proposal. So proposers are meant to propose within a certain threshold, and they can propose just at the threshold such that half of the attesters see it as being on time and half of the attesters see it as being late. Another thing that I haven't talked about is that just like there would be a block base fee, there would also be a block tip.
Speaker 7
01:37:35
And the reason is that MEV burn is a partial burn. So I guess this is 1 of the downsides of the scheme: it's not perfect. And if we go back to the slide with the 2 seconds, basically any MEV that comes in the last 2 seconds won't get burnt. Instead, it will go to the proposer, as it does today, in the form of a tip.
Speaker 7
01:38:08
And so I guess there are 2 downsides here. 1 of them is that the benefits from MEV burn are really quantitative as opposed to qualitative. We're reducing the total amount of MEV by an order of magnitude, but we're not completely removing it. And so some of the attacks around MEV stealing still, to some extent, exist.
Speaker 7
01:38:37
And then the other downside is that there's an optimization game going on, where the proposers are incentivized to try and guess what the burn floor will be, in such a way that they pick the smallest burn floor possible and thereby maximize the tip. And the answer that I have here is very similar to the splitting attack, which is that this already exists today. So today, proposers are incentivized to be sophisticated by delaying the proposal as much as possible. There's a threshold today, a deadline of 4 seconds, and it's to the advantage of proposers to propose as close as possible to this four-second deadline so as to maximize the total amount of MEV in their block.
Speaker 7
01:39:32
Of course, they take the risk of not making it on-chain and getting reorg'd, but very sophisticated proposers will basically pick the time just right so that they still get included on-chain and they maximize the amount of MEV. So in a way, some of the downsides that already exist today, which are splitting attacks and optimization games, will still exist with MEV burn, and just the nature of these things changes a little bit. 1 of the questions that I get a lot is around what is going to be the economic impact for validators. And I actually argue that this is very much beneficial for the stakers.
Speaker 7
01:40:31
And it's quite a subtle argument, but let me try and make it for you guys. So 1 of the observations is that there's a cost of money associated with ETH. You know, once you have ETH and you lock it in a box, you're missing out on placing that ETH in some other box which might give you a yield. And so if we assume, for example, that the cost of money is some percentage, let's say 3% annualized, what we should expect when MEV burn happens is for the yield to temporarily tank, but then a bunch of validators to exit the system and go get yield in other places where they get the full cost of money.
Speaker 7
01:41:24
And then over time, the APR of staking will go back to where it was, back to the 3%, let's say, as the total number of validators shrinks. And so on a per-validator basis, you'll get the same APR in equilibrium. And so if you're making, let's say, 3 percent a year, which is about 1 ETH per 32 ETH validator, you're going to keep getting that in equilibrium. Now, if you're thinking from the perspective of USD-denominated returns, things actually get improved.
Speaker 7
01:42:08
Because not only have you maintained your 3 percent, your 1 ETH per year, from an ETH-denominated return standpoint, but this 1 ETH will actually be worth more from a USD-denominated standpoint, because the scarcity of ETH will have improved. Now, putting aside this argument, there are a couple of arguments for why MEV-Burn is better for some individual stakers. 1 of them is that the small stakers can now benefit from the average returns as opposed to the median returns.
Speaker 7
01:42:51
And because the median is lower than the average, almost all small validators will be better off with MEV burn. And then there's the second aspect around taxes, whereby your tax bill will be significantly lower as a staker. And so again, at the end of the day, net of taxes you may be making greater returns. And another thing I guess I should say on the USD-denominated returns is that, you know, stakers are huge ETH holders, right?
Speaker 7
01:43:31
You're getting 1 ETH per year, but you have 32 ETH of principal. And so the fact that your ETH will grow in value as an ETH holder should be to the benefit of stakers.
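The equilibrium step of the argument above can be sketched in a few lines; the sqrt-issuance approximation and the 3% cost of money are the only inputs, and both the constant and the resulting number are illustrative, not a prediction:

```python
import math

K = 166.0             # sqrt-issuance scaling constant, as in the earlier sketch
COST_OF_MONEY = 0.03  # 3% annualized, the talk's example

def issuance_apr(total_stake: float) -> float:
    # Yield per unit of stake: K*sqrt(S)/S = K/sqrt(S).
    return K / math.sqrt(total_stake)

# Stake exits until the issuance yield again matches the cost of money:
# solve K / sqrt(S) = COST_OF_MONEY for the equilibrium stake S.
equilibrium_stake = (K / COST_OF_MONEY) ** 2
print(f"equilibrium stake: {equilibrium_stake/1e6:.1f}M ETH "
      f"(check: APR = {issuance_apr(equilibrium_stake):.2%})")
# The adjustment happens through validators exiting, not through lower
# per-validator rewards: each remaining validator is back at ~3%.
```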
Speaker 1
01:43:48
All right, that was pretty extensive. Thank you. I see 1 last question in the chat.
Speaker 1
01:43:53
Max, maybe you want to ask it and then we can wrap up.
Speaker 2
01:43:58
Yeah, for sure. So the question is pretty general, in terms of MEV. There's a way of enforcing the efficiency of the market: you incentivize bots to find the transactions that generate income, and then that income is shared with builders and with proposers. With MEV burn existing, would it affect the incentives for the bots that find these arbitrage opportunities to exist? Because they can be bad in a way, but they're also good in the sense of making the market more efficient.
Speaker 7
01:44:48
Right. So I guess 1 interesting thing about MEV burn is that it doesn't change the incentives upstream, basically at the very beginning of PBS. The searchers, the builders, none of their incentives change. And the reason is that the top searcher will make the delta between it and the second-best searcher, and the same for the builders.
Speaker 7
01:45:17
The top builder will make the delta between it and the second best builder. I mean, 1 of the questions that I get is around, okay, are we basically forcing every single proposer to extract as much MEV as possible per slot? And isn't that bad? Because it means that we're basically maximizing the amount of toxic MEV.
Speaker 7
01:45:48
And if you look at stakers like Vitalik, for example, Vitalik has decided to opt out from PBS altogether, and he's just building these very naive blocks that don't extract MEV from users. And I guess my answer to this is that toxic MEV in a way, at least some part of it, is orthogonal to MEV burn. So we have different solutions, like encrypted mempools, that remove the toxic MEV. And so MEV burn really is about burning the excess MEV that isn't given back to the users.
Speaker 7
01:46:38
For example, with MEV-Share. So these encrypted mempools, not only do they prevent front-running, but they actually facilitate back-running and facilitate the refund of any excess MEV back to the users. So in the endgame, I expect that MEV burn is really about burning the latent MEV from arbitrage and not at all about changing the dynamics around maximizing toxic MEV, because there simply won't be any toxic MEV to extract from users. Now, I guess 1 question that I get a lot is around whether or not MEV burn removes some of the sovereignty of proposers, right?
Speaker 7
01:47:25
Because now proposers must, you know, select a bid which is above the burn floor, and that's constraining. So for example, it means that proposers can't self-build, so whatever Vitalik is doing today, he can't do it with MEV burn. And basically the answer that I have for this is that we have inclusion lists. An inclusion list is basically self-building.
Speaker 7
01:47:54
So how do inclusion lists work? Basically, as a proposer, I'm going to self-build this very simple block just by taking transactions from the mempool and ordering them by gas, just like what Vitalik is doing today. And then I'm going to submit this inclusion list to the next builder and basically force the next builder to include all of those transactions for the block to be valid. And so what that means is that if, for example, I'm an altruistic proposer and I really, really care about censorship resistance, what I could do today with self-building, I can do tomorrow with inclusion lists.
Speaker 7
01:48:38
And then there's this other use case that some people want, which is this idea of pre-confirmations. So some users want to have some sort of bilateral agreement with the proposer, some sort of promise, that their transaction will be included in their block. And this is also something that can be done with inclusion lists. Basically, what the proposer can do is promise that they will include the user's transaction in their inclusion list, so that when their time comes, their transaction will make it on-chain.
Speaker 7
01:49:19
So inclusion lists give you almost all of the sovereignty that proposers already have today. In particular, they give you sovereignty for censorship resistance and for pre-confirmations.
Speaker 1
01:49:35
Awesome, thanks a lot for that. So, I mean, inclusion lists obviously are great from a censorship resistance perspective, ensuring that block proposers at least get their chance to include any of the transactions that they themselves want. But I guess it removes the possibility, although I don't know why somebody would want to do this, maybe they're an artist, of proposing a completely empty block if they wanted to.
Speaker 7
01:50:02
Okay. Right. I mean, actually they could propose a completely empty block, but they'd have to pay for it.
Speaker 7
01:50:14
Yeah.
Speaker 1
01:50:17
As if the 32 ETH wasn't enough, but yeah, okay.
Speaker 7
01:50:21
Right.
Speaker 1
01:50:21
No, that's great. All right. Thank you very much, Justin.
Speaker 1
01:50:24
That was a really, really great presentation.
Speaker 7
01:50:27
Thank you so much, guys.
Speaker 1
01:50:29
I don't see any other questions. So I think we'll wrap up with that. Thanks, everybody, for tuning in.
Speaker 1
01:50:35
We'll try to do the next community call again in around a month's time. So probably the last or second to last Tuesday of August. Until then, please check out the presentations that will be attached as descriptions to this recording. If you have any questions, feel free to chuck them on the forums and we'll reach out to the authors of the presentations or the analyses and try to get you guys answers.
Speaker 1
01:50:57
Thanks a lot, and have a good evening if you're in Europe, or morning if you're in the States, or I guess it's like 3 a.m. if you're in Asia. I hope you're sleeping. I don't know why you're up.
Speaker 1
01:51:07
Thanks a lot. Bye everyone.