See all shafu transcripts on Youtube


The Bytecode #7 - Alphakey - Cover

1 hour 47 minutes 7 seconds

🇬🇧 English

S1

Speaker 1

00:01

Welcome to the Bytecode, where we look at smart contracts with the people who wrote them. I'm Rosh Shafoo and today's guest is Alpha Key. Alpha Key, thanks for taking the time, man.

S2

Speaker 2

00:15

Yeah, man, thanks for having me. I've been catching up on some of the episodes you've been doing and fantastic content so far. So congrats for that.

S1

Speaker 1

00:25

Awesome. Thank you very much, dude. I appreciate that. So yeah, AlphaKey wrote something he calls Cover, which, if I look at the GitHub description, is a liquidity tool to protect against directional risks such as impermanent loss, which I found interesting.

S1

Speaker 1

00:43

So yeah, AlphaKey, why don't you very briefly introduce yourself and introduce Cover. And then AlphaKey is going to present a high-level overview of Cover, which I think is useful first, and then we're going to jump into the code. So yeah, AlphaKey, go ahead.

S2

Speaker 2

01:06

Sure, so a bit of background about me. I've been doing software development for many years. I got really addicted to trading back in 2016-2017. It definitely became a very frequent topic at the office for me, just, like, opening up my brokerage account. I was actually trading the cannabis legalization event for Canada. I think it's really interesting, a lot of the parallels that exist between crypto and that industry.

S2

Speaker 2

01:48

You know, the growth of it is very much bound by regulation; to sort of move forward, you need the regulation, and the market gets stretched a lot in both directions. So the psychology of the markets can kind of turn people upside down sometimes, to where, when the market's doing really well, your perspective gets shifted to, oh, NFTs have all these use cases, everybody's gonna use NFTs, or whatever the product or protocol is that seems like it's getting mass adoption. And then when the market's down, you're like, is anybody actually going to use this stuff?

S2

Speaker 2

02:30

And that's sort of the arena that we find ourselves in now, and things get very stretched to the downside, both psychologically and price-action-wise, and that psychology really messes with you. And I've definitely had my own struggles with that myself. And so coming back into this space in early 2021, I'm thinking, okay, how can I make this work, how can I get this right this time?

S2

Speaker 2

02:56

How can I trade this correctly, right? How can I actually walk away with more than what I put in? And I was looking at Glassnode, I was looking at Glassnode Analytics and seeing what all of the whale wallets were doing and thinking, oh yeah, this is a really great way to actually kind of determine where the market's headed, perhaps.

S2

Speaker 2

03:20

And then I started really getting interested in automated trading on centralized exchanges, playing around with a platform that some people might be familiar with called 3Commas. And you know, you can basically open up a bunch of small trades. Each trade has both what is called a take profit, so you're gonna sell above the price that you entered, and also a stop loss, where you're going to sell below the price that you entered if the trade doesn't work out how you expect.

S2

Speaker 2

03:51

So then coming on-chain, I'm kind of looking for that same thing. I'm looking for, like, how can I do this same sort of strategy that I was doing on centralized exchanges, but bring that on-chain, and I started playing around with the old farms. The first immediate problem to me is like, hey, well, if you're going to get more liquidity on AMMs, on decentralized exchanges, you have to have some clear and obvious solution for impermanent loss, which is obviously just kind of the short-term opportunity cost of basically having sold too early. As the price of ETH moves up, you're selling out of your ETH position, and therefore you'll have sold at an average price that was lower than whatever the market price is.

S2

Speaker 2

04:41

So I was basically thinking, how do you hedge this potential opportunity cost? And that was like February 2021. So from there begins the whole two-year journey of trying to figure out how we improve slash scale this exchange experience that we had back then. When Uniswap v3 came out, I thought, okay, this is interesting, but it didn't really solve the problems that I was wanting to solve. And yeah, I think that really began the journey for me.

S1

Speaker 1

05:20

Awesome. That sounds really interesting. And, okay, so one solution trying to tackle this is Cover, right?

S2

Speaker 2

05:33

Yeah. So initially, back in 2021, I very naively thought that, well, maybe you can just pull the liquidity before you start experiencing the impermanent loss. But of course, if you're bootstrapping the liquidity for your own token, that's not really an option, right? Because then your token stops trading.

S2

Speaker 2

05:54

So you need to be able to have the continuous liquidity there, just as it has been for the last couple of years now with people bootstrapping their own markets through AMMs. Cover is really a way to kind of offset that opportunity cost. So as you would normally sell ETH on the way up, this allows you to create a position that is going to buy ETH on the way up. And of course, you have to protect against the possibility of flash loans, of any sort of manipulation. But basically, it will just unlock liquidity as the TWAP price moves somewhere, and then, okay, we're going to unlock that liquidity, and then it's going to do these little Dutch auctions to actually find the natural market price, because we can't just sell something at a flat price and assume that that's going to get filled.

S1

Speaker 1

06:57

Interesting. Maybe, do you want to start sharing your screen? I think the presentation would be really helpful to get a high-level overview before we jump into the code.

S2

Speaker 2

07:13

Sure. So, I'll just go through our slide deck a bit. So this is the solution that we were discussing just now, which basically looks like a stop loss, because when a certain price is hit, you're going to...

S1

Speaker 1

07:38

Sorry to interrupt you, but I can only see a black screen. Is that what you're sharing?

S2

Speaker 2

07:42

Oh Let's see,

S1

Speaker 1

07:48

I thought maybe it's like a dramatic introduction or something.

S2

Speaker 2

07:57

Let me just share my entire screen and see if that works.

S1

Speaker 1

08:00

Okay, this looks better.

S2

Speaker 2

08:06

Okay, yeah. So imagine that on the left, this could be your Uni v3 position, your Curve position, etc. And effectively, you're going to sell out into DAI.

S2

Speaker 2

08:23

So you're going to sell ETH into DAI as the price increases, right? And at a price of 5,000, from between 3,000 and 5,000 DAI per ETH, you will effectively have sold your ETH at about, let's say, 3,800 DAI per ETH, around there. So the opportunity cost is of course between that 3,800 and the 5,000.

S2

Speaker 2

08:55

So, effectively, on the right side, this is what Cover does. It will actually buy ETH on the way up, and it will purchase that ETH at an average price of 3,800, assuming you were using the same curve math. Currently, Cover uses constant product, the classic x times y equals k, to do the curve calculations. And it will buy at an average of 3,800.
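For reference, the ~3,800 figure is roughly what constant-product math gives for liquidity concentrated between 3,000 and 5,000 DAI per ETH: the average execution price over such a range works out to the geometric mean of the bounds. A sketch of the standard derivation, not Cover-specific math:

```latex
% Average sale price of a constant-product range position over [P_a, P_b]
\Delta y = L\,(\sqrt{P_b} - \sqrt{P_a}), \qquad
\Delta x = L\left(\frac{1}{\sqrt{P_a}} - \frac{1}{\sqrt{P_b}}\right)
\quad\Rightarrow\quad
\bar{P} = \frac{\Delta y}{\Delta x} = \sqrt{P_a P_b}
        = \sqrt{3000 \times 5000} \approx 3873 \ \text{DAI per ETH}
```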

S2

Speaker 2

09:26

So basically, you can dynamically rebalance your position. Here you're making a directional bet. If you're using something like an option, then it's not directional, but here you're able to work with the naked asset. So instead of having to resort to using an option or derivative, you can just hold the actual asset.

S2

Speaker 2

09:52

In this case, it'll be ETH, or sorry, well, DAI, which is going to be used to buy ETH. And you don't have to work with any extra derivative asset to dynamically rebalance. And then there's the other solution we're working on. We're not going to get to it today, because there are sort of 2 different code bases and we kind of have to pick 1 to deep dive on, right? But we're also working on this limit pool protocol, which will allow you to basically have a one-way position.

S2

Speaker 2

10:38

So it would trade the same way that your AMM positions would trade today, but it actually locks the position in. So this is really important for asset pairs that you don't necessarily want to put up a two-sided position on. So for example, something like an option token where you're very much worried about the price of the option token going to 0. So therefore you could still have a marketplace where you can trade those assets, but nobody is having to take 2 sided risk.

S2

Speaker 2

11:15

And also you could do cool stuff like, hey, I wanna sell ETH from 3,000 to 10,000 DAI per ETH, right, and you can pay a fixed transaction cost for that. So the TLDR here is that instead of trying to do conditional swaps off-chain, we are able to express a one-way position on-chain, and therefore we're not trying to do what we do today with limit orders and stops, where it's off-chain and it's a single swap and we try to shove a circle in a square a little bit. It's better to have the liquidity on-chain before the price actually moves there, rather than reacting and then potentially having to deal with getting front-run and all of those things.

S2

Speaker 2

12:21

So yeah, that's pretty much it as far as the deck goes. We're definitely talking to liquidity management protocols, because with something like Cover, if they're very confident that a position is going to go out of range, they can take a hedge position on that without having to resort to using something like an option or a derivative, which may be a little harder to explain to the users.

S1

Speaker 1

12:54

Okay, interesting.

S2

Speaker 2

12:59

Any questions about any of this stuff before we dive into the code?

S1

Speaker 1

13:04

No questions. Excited to look at the implementation.

S2

Speaker 2

13:10

Yeah, there's quite a lot there, so let me try to start from the very inception of one of these pools and kind of walk you all the way through it. So, there's a few choices that I'll probably change before we actually launch it, namely the fact that we're deploying bytecode each time, but I really tried to obey a lot of the standards that people expect, like token 0 and token 1. Those tokens, for anybody that's not familiar with AMMs, are usually sorted by address.

S2

Speaker 2

13:59

So, if you have 0xA and 0xB, 0xA will be token 0 and 0xB will be token 1. This is kind of... Sorry, go ahead.
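As a quick illustration of the sorting convention being described here, a minimal sketch with hypothetical names, not Cover's actual factory code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Minimal sketch of the token0/token1 convention: pairs are canonicalized by
// sorting the two token addresses, so the same two tokens always produce the
// same pool key.
library PairKey {
    function sort(address tokenA, address tokenB)
        internal
        pure
        returns (address token0, address token1)
    {
        require(tokenA != tokenB, "IdenticalAddresses()");
        // The numerically smaller address becomes token0.
        (token0, token1) = tokenA < tokenB ? (tokenA, tokenB) : (tokenB, tokenA);
        require(token0 != address(0), "ZeroAddress()");
    }
}
```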

S1

Speaker 1

14:12

Yeah, can you zoom in a little bit maybe?

S2

Speaker 2

14:16

Oh, sure.

S3

Speaker 3

14:16

That's way better. Okay.

S2

Speaker 2

14:22

Now I've got like red lines on the screen.

S1

Speaker 1

14:26

Yeah, well, what is this? Oh, it's okay. Okay.

S2

Speaker 2

14:31

It's Grammarly. Maybe I can...

S1

Speaker 1

14:33

Okay.

S2

Speaker 2

14:36

I can tell Grammarly to go away. But yeah. So essentially, we create a pool.

S2

Speaker 2

14:42

There's a TWAP source that we have. So either that comes from, you know, a liquid source like, for example, Uniswap v3 or Curve, and hopefully in the future we'll be able to use our own pools as a TWAP source. And then you'll also notice here that there's this mention of volatility tiers, and essentially this tells us how fast the position can unlock at most. And this is one of the mechanisms in the protocol to protect against really widespread manipulation of an asset: you can basically put a rate limit on how fast the price can move.

S2

Speaker 2

15:30

So you say, well, at most I expect it to move 10% per hour or something like that, right? And so that's why these tiers exist, to have that rate limit.
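To make that rate limit concrete, here is a minimal sketch of a volatility tier capping how far the reference tick may move per unit of elapsed time. The struct, names, and numbers are illustrative assumptions, not the actual Cover parameters.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Sketch of a volatility tier: cap how far the reference (TWAP) tick may move
// per elapsed second, regardless of what the oracle reports.
contract VolatilityTierSketch {
    struct VolatilityTier {
        uint32 auctionLength;    // seconds each Dutch auction runs
        uint16 maxTicksPerHour;  // ~953 ticks is roughly a 10% price move per hour
    }

    VolatilityTier public tier = VolatilityTier({auctionLength: 60, maxTicksPerHour: 953});

    /// @notice Clamp a newly observed TWAP tick against the tier's rate limit.
    function clampTick(int24 lastTick, int24 newTick, uint32 secondsElapsed)
        public
        view
        returns (int24)
    {
        int256 maxMove =
            int256(uint256(tier.maxTicksPerHour)) * int256(uint256(secondsElapsed)) / 3600;
        int256 clamped = int256(newTick);
        if (clamped > int256(lastTick) + maxMove) clamped = int256(lastTick) + maxMove;
        if (clamped < int256(lastTick) - maxMove) clamped = int256(lastTick) - maxMove;
        return int24(clamped);
    }
}
```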

S2

Speaker 2

15:49

And so, yeah, nothing crazy here beyond launching the pool. And then we go into the contract. And as we set up this pool, we first just set the immutable arguments. As far as I'm aware, in the newest version of Solidity, you actually don't have to set the immutable variables in the constructor.

S2

Speaker 2

16:19

You can set them outside of the constructor. But for anybody that's not familiar with the immutable keyword, essentially this will allow you to put a value in the bytecode, and that makes it basically zero cost, because you're reading directly from the bytecode to access that value. If you are putting something in storage, it's going to cost you 20,000 gas units to initialize it, and then 2,100 to read that value. So if you have a value that you don't expect to change, it's quite good.
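A small illustration of the cost difference being described, with rough gas numbers and hypothetical names: an immutable is baked into the deployed bytecode, while a storage variable pays for the slot write at deployment and a cold load on later reads.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Sketch contrasting immutable vs. storage for a value fixed at deploy time.
contract ImmutableVsStorage {
    // Embedded in the runtime bytecode at deployment: reading it later is
    // essentially free (no SLOAD).
    address public immutable twapSource;

    // Lives in a storage slot: roughly 20,000 gas to initialize a fresh slot,
    // and about 2,100 gas for a cold read afterwards.
    address public twapSourceInStorage;

    constructor(address twapSource_) {
        twapSource = twapSource_;          // assigned during construction
        twapSourceInStorage = twapSource_; // SSTORE to a zero slot (~20k gas)
    }
}
```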

S1

Speaker 1

17:01

It's a constant that you set at deploy time.

S2

Speaker 2

17:07

Yes, and like I said, in the newest version of Solidity, you'll actually be able to set that outside of the constructor if, for example, maybe you have an initialize method. So, the way that the contracts are actually laid out is that we're going to make a call. Why is this done?

S2

Speaker 2

17:34

This is done to actually split out the bytecode, because in this protocol there's about 100 kilobytes of bytecode. In comparison, something like Uniswap v3 is somewhere in the ballpark of 30 kilobytes of bytecode, so this is quite a bit more, and thus the reason that we split these out. You'll also notice that this is its own call as well. Essentially, what this does is every time the TWAP moves, we're going to sync up what's happening in the pool.

S2

Speaker 2

18:14

And there's always only one active auction at a given time, depending on where that TWAP price falls, right? So if the TWAP is at, let's say, 2,000 DAI per ETH, we will unlock a tick spacing's worth of liquidity, which might be something like 10 or 20 basis points, or 0.1 or 0.2%.

S2

Speaker 2

18:47

So on each one of these methods, we pretty much sync up the pool, then we're going to do a call to do a mint, and then we're going to save the state. So now we can go into the mint call, and like I said, the benefit of doing it this way is that you split out the bytecode. And everything here is built specifically for Arbitrum, so we're really trying to focus on reducing the amount of calldata that's passed around. So we just do one external call here and here.

S2

Speaker 2

19:39

And then everything else is internal. So all these functions that you see inside of here, these are all internal calls, so we don't have to pay for that calldata on L2. You kind of have this trade-off between L1 and L2, where on L1 Ethereum, the calldata is going to be the cheapest thing and the storage is going to be really expensive.

S2

Speaker 2

20:12

And then on L2 it's kind of the opposite, where, when there's a lot of activity on the L1, the calldata on the L2 is going to be really expensive. So you really want to pay attention to how you're packing your calldata, and thus the reason to use something like a struct. You want to use a struct so that it will compress all the calldata down, and that decreases the gas cost for the user.
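A rough sketch of the calldata-packing idea for an L2 like Arbitrum. The struct fields and names are hypothetical, not Cover's actual interface; the point is one tightly packed external entry point, with everything after it staying internal.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Sketch: one external entry point taking a tightly packed params struct.
// Everything after decoding stays internal, so no extra calldata is paid
// between contracts, which is what matters most on an L2 where calldata is
// the dominant cost.
contract MintEntrypointSketch {
    struct MintParams {
        address to;       // position recipient
        uint128 amount;   // input token amount
        int24 lower;      // lower tick of the range
        int24 upper;      // upper tick of the range
        bool zeroForOne;  // which side of the TWAP the position sits on
    }

    event MintRequested(address indexed to, uint128 amount, int24 lower, int24 upper, bool zeroForOne);

    function mint(MintParams calldata params) external {
        // In the real protocol, syncing, position math, and the state save
        // would all happen here via internal calls.
        emit MintRequested(params.to, params.amount, params.lower, params.upper, params.zeroForOne);
    }
}
```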

S2

Speaker 2

20:50

So, here what we do is we only allow the user to place a position on one side of the TWAP. And the TWAP here, you're going to see that it's referred to as the latest tick. So, if we go into this positions library, you'll see first we check whether these are valid ticks, then we initialize this cache. We check if they're minting an empty position, and then we enforce what's called this safety window.

S2

Speaker 2

21:35

Maybe we'll get to this later, but essentially this is done to reduce the impact of anybody being able to see the current state of the pool and also perhaps know where the price is going to go. Because, as we know, AMMs often lag behind centralized exchanges. So this effectively stops someone from seeing that, hey, you've accumulated all this filled amount, I'm just going to stick my position in, and even if the next auction doesn't get filled, I'll still get part of the previous fill.

S2

Speaker 2

22:17

But there's also this cool effect in this pool where, if the auctions are getting filled frequently, it reduces the risk for somebody who's creating a new position that they won't get filled. Everything will still be split pro rata, just like it is in any AMM. And the reason that we do that is so that we never have any arrays that we have to loop through. Right?
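The "pro rata with no arrays to loop through" point is the classic global-accumulator pattern AMMs use for fees: one running fill-per-unit-of-liquidity value, snapshotted per position. A minimal sketch under that assumption, not the actual positions library:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Sketch of pro-rata accounting without iterating over positions: a single
// global "fill growth per unit of liquidity" accumulator, snapshotted per
// position, determines each position's share when it is claimed.
contract ProRataSketch {
    uint256 public fillGrowthGlobal; // fill per unit of liquidity, scaled by 1e18

    struct Position {
        uint128 liquidity;
        uint256 fillGrowthSnapshot; // value of the accumulator at mint time
    }

    mapping(address => Position) public positions;

    function onAuctionFilled(uint256 amountFilled, uint128 totalLiquidity) internal {
        // Every active unit of liquidity earns the same share; no loop needed.
        fillGrowthGlobal += amountFilled * 1e18 / totalLiquidity;
    }

    function claimable(address owner) public view returns (uint256) {
        Position memory pos = positions[owner];
        return uint256(pos.liquidity) * (fillGrowthGlobal - pos.fillGrowthSnapshot) / 1e18;
    }
}
```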

S2

Speaker 2

22:51

So here we essentially, based off of whatever their initial range is, we're gonna calculate the liquidity that they would mint. And then...

S1

Speaker 1

23:03

I have a question.

S2

Speaker 2

23:05

Yes.

S1

Speaker 1

23:09

Okay, wouldn't you want to do line 73 at the start of the function to fail quicker? I think it's like the fastest thing that could fail. Just something I found interesting.

S2

Speaker 2

23:24

Yes, that definitely is true. You'll notice I also... So yes, this could definitely be moved up above the cache so that it fails sooner.

S2

Speaker 2

23:36

And then... is it likely that the user is going to send an amount of 0? Probably not. They pretty much grief themselves if they are trying to create a position with an amount of 0. But yes, for sure, it's good to follow the principle of failing somebody as soon as possible so that they don't burn extra gas.

S2

Speaker 2

24:06

And that makes the experience better for all the other users on that chain.
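On that fail-fast point: a cheap input check placed before any cache building or storage reads saves gas for the caller on the revert path. A tiny sketch with hypothetical names:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Sketch: order revert checks from cheapest to most expensive so bad calls
// fail before any cache building or storage reads happen.
contract FailFastSketch {
    function mint(uint128 amount, int24 lower, int24 upper) external pure {
        // Cheapest checks first: plain comparisons on the calldata arguments.
        require(amount > 0, "AmountIsZero()");
        require(lower < upper, "InvalidTickRange()");
        // ...only now build caches, read storage, query the TWAP source, etc.
    }
}
```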

S1

Speaker 1

24:13

Okay. Another question. I find the require statements interesting.

S2

Speaker 2

24:23

Yes, so initially I was using custom errors, and the problem that I ran into was that I couldn't actually see the revert message on Etherscan. And this is something that's been known for a while to be an issue. Now, if you call a contract directly, I believe it will give you the custom revert string.

S2

Speaker 2

24:52

So you could do revert and then whatever your custom error is. However, if you call a contract which calls another contract, then it will not bubble up the custom error. This is ultimately the issue that I ran into. And require does use more bytecode than custom errors, and it does use more gas.

S2

Speaker 2

25:28

So the way that I get around that is I do the if check for whether we should revert, and if that condition is true, then I just force the revert with require(false, ...), passing the string as the revert string as well. And this makes it so that someone can actually see the error on Etherscan. Yeah, it's a bit of an unfortunate thing with Etherscan.
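To illustrate the trade-off being discussed (a hedged sketch, not the exact Cover code): a custom error is cheaper in gas and bytecode, while the require(false, "...") workaround keeps a plain revert string that explorers reliably display even when the revert bubbles up through another contract.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Two ways to express the same failure. Names are illustrative.
contract RevertStyleSketch {
    error InvalidTickRange(); // custom error: cheapest in gas and bytecode

    function withCustomError(int24 lower, int24 upper) external pure {
        if (lower >= upper) revert InvalidTickRange();
    }

    // The workaround described above: do the if check, then force the revert
    // with a plain string so explorers can always decode it.
    function withRequireString(int24 lower, int24 upper) external pure {
        if (lower >= upper) require(false, "InvalidTickRange()");
    }
}
```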

S2

Speaker 2

26:02

There have been people prodding at Etherscan to go and fix this issue, but they haven't quite gotten around to it yet, unfortunately. But, you know, custom errors have been around for a really long time. Of course, yeah.

S2

Speaker 2

26:22

And I also was having some back and forth with Paul Razvan Berg, who's the developer of Sablier. And I was like, yeah, I think I'm just gonna do this because at least it works for now. And he's like,

S3

Speaker 3

26:38

no, don't do that. You're giving into them. And I'm like, well,

S2

Speaker 2

26:45

because I...

S1

Speaker 1

27:00

I can't hear you anymore. I think your AirPods probably ran out of battery or something. Okay, so while we're waiting for AlphaKey to get his mic back: I personally like to use custom errors.

S1

Speaker 1

27:54

I just think it looks better when you look at the code, and it's more expressive. The syntax used by AlphaKey, I get it because of the tooling, but I don't think it's the nicest thing to look at. But definitely interesting to see.

S1

Speaker 1

28:23

Let's see. I'm probably gonna timestamp when AlphaKey's mic is gonna come back, because I have no clue how to edit videos. So

S3

Speaker 3

28:49

let's see. Let's see

S1

Speaker 1

29:06

Okay, okay, so we're gonna stop this recording, create a new one, and continue from there.

S1

Speaker 1

29:28

Alpha Key, thanks for taking the time, man.

S2

Speaker 2

29:32

Yeah, man, thanks for having me. I've been catching up on some of the episodes you've been doing and fantastic content so far. So congrats for that.

S1

Speaker 1

29:42

Awesome. Thank you very much, dude. Appreciate that. So yeah, Alpha Key wrote something he calls Cover, which if I look at the GitHub description, is a liquidity pool to protect against directional risks such as impermanent loss, which I found interesting.

S1

Speaker 1

29:59

So yeah, AlphaKey, why don't you very briefly introduce yourself and introduce Cover. And then, yeah, Alfaki is going to present a high-level overview of Cover, which I think is useful first, and then we're going to jump into the code. So yeah, Alpha Key, go ahead.

S2

Speaker 2

30:23

Sure, so a bit of background about me. I've been doing software development for many years. It got really addicted to trading back in like

S1

Speaker 1

30:34

2016, 2017.

S2

Speaker 2

30:36

Definitely became a very frequent topic at the office for me just like opening up my brokerage account. But sort of post doing a couple trading, so I was actually I was trading the cannabis legalization event for Canada, And I think it's really interesting a lot of the parallels that exist between crypto and that industry, you know, it's very much the growth of it is very much bound by regulation, you know, is to sort of move forward, you need need the regulation. And the market gets stretched a lot in both directions.

S2

Speaker 2

31:16

So you know, the the psychology of the markets can kind of turn people upside down sometimes to where it's, you know, when the market's doing really well your perspective gets shifted to, oh, You know, NFTs have all these use cases, everybody's going to use NFTs or, you know, whatever the product or protocol is that, you know, seems like it's getting mass adoption. And then when the market's down, you're like, is anybody actually going to use this stuff? And that's sort of the arena that we find ourselves in now. And things get very psychologically and also price action-wise stretch the downside and that psychology really messes with you.

S2

Speaker 2

32:00

And I've definitely had my own struggles with that myself. And so coming back into this space in early 2021, you know, I'm thinking like, okay, how can I make this? How can I get this right this time? How can I trade this correctly?

S3

Speaker 3

32:15

Right?

S2

Speaker 2

32:16

How can I actually walk away with more than that I put in? And I was, you know, looking at Glassnode, I was looking at Glassnode Analytics and seeing what all of the whale wallets were doing and thinking, oh, yeah, this is a really great way to actually kind of determine where the market's headed, perhaps, and then started really getting interested in automated trading on centralized exchanges, playing around with the platform that some people might be familiar with called 3 commas. And, you know, you can basically open up a bunch of small trades.

S2

Speaker 2

32:54

Each trade has both a what is called a take profit. So like you're going to sell above the price that you entered and then also a stop loss where you're going to sell below the price that you entered if the trade doesn't work out how you expect. So then coming on chain, I'm kind of looking for that same thing. I'm looking for like, how can I do this same sort of strategy that I was doing on decentralized exchanges, but kind of bring that on chain and started playing around with the old farms?

S2

Speaker 2

33:24

The first immediate problem to me is like, hey, well, if you're going to get more liquidity on AMMs, on decentralized exchanges, like you have to have some clear and obvious solution for impermanent loss, which is obviously just kind of the short-term opportunity costs of basically having sold too early. As the price of ETH moves up, you're selling out of your ETH position and therefore you'll have sold at an average price that was lower than whatever the market price is. So basically thinking, how do you hedge for this potential opportunity cost? And then from there, that was like February, 2021.

S2

Speaker 2

34:08

So then from there begins the whole like two-year journey of trying to figure out like, how do we improve slash scale this exchange experience that we had back then. You know, when Uniswap v3 came out, I thought, okay, this is interesting, but this didn't really solve the problems that I was wanting to solve. And yeah, I think that really began the journey for me.

S1

Speaker 1

34:37

Awesome. That sounds really interesting. And okay, so 1 solution Trying to tackle this is cover, right?

S2

Speaker 2

34:50

Yeah, so initially back in 2021, I very naively thought that, well, maybe you can just pull the liquidity before you start experiencing the impermanent loss. But of course, that's not an option for, you know, if you're bootstrapping the liquidity for your own token, that's not really an option, right? Because then your token stops trading.

S2

Speaker 2

35:11

So you need to be able to have the continuous liquidity there, just kind of as it has been for the last couple of years now, people bootstrapping their own markets through AMMs, cover is really a way to kind of offset that that opportunity costs. So As you would normally sell ETH on the way up, this allows you to create a position that is going to buy ETH on the way up. And of course you have to protect against the possibility of flash loans, of any sort of manipulation. But basically it will just unlock liquidity, sort of like the TWAP price moves somewhere and then, okay, we're going to unlock that liquidity and then it's going to do these little Dutch auctions to actually find the natural market price because we can't just sell something at a flat price and assume that that's going to get filled.

S3

Speaker 3

36:14

Okay, interesting. Maybe do you want

S1

Speaker 1

36:16

to start sharing your screen? I think the presentation would be really helpful to get a high-level overview before we jump into the code.

S2

Speaker 2

36:30

Sure. So I'll just go through our slide deck a bit. So this is the solution that we were discussing just now, which basically looks like a stop loss, because when a certain price is hit, you're going to,

S1

Speaker 1

36:53

in this case, I'm sorry, sorry to interrupt you, but I can only see a black screen. Is that what you're saying?

S2

Speaker 2

36:59

Oh, Okay. Let's see.

S1

Speaker 1

37:05

I thought maybe it's like a dramatic introduction or something.

S2

Speaker 2

37:14

Let me just share my entire screen, see if

S1

Speaker 1

37:17

that works. Okay, this looks better.

S2

Speaker 2

37:23

Okay, yeah. So, imagine that on the left, this could be your Univ-E3 position, your curve position, etc. And effectively you're going to sell out into DAI.

S2

Speaker 2

37:40

So you're going to sell ETH into DAI as the price increases, right? And at a price of

S1

Speaker 1

37:49

$5, 000,

S2

Speaker 2

37:51

you will, from between $3, 000 and $5, 000 die per ETH, you will effectively have sold your ETH at about, let's say, like

S1

Speaker 1

38:01

$3, 800

S2

Speaker 2

38:03

die per ETH around there. So the opportunity cost is of course between that 3800 and the

S1

Speaker 1

38:10

5000.

S2

Speaker 2

38:12

So effectively on the right side, this is what cover does. It will actually buy ETH on the way up and it will purchase that ETH at an average price of $3, 800 you know assuming you were using the same curve math. Currently the cover uses constant product so classic x times y equals k to do the curve calculations.

S2

Speaker 2

38:40

And it will buy that an average of 3, 800. So basically, you can dynamically rebalance your position. Here you're making a directional bet. If you're using something like an option, then it's not directional, but here you're able to work with the naked asset.

S2

Speaker 2

39:01

So instead of having to resort to using an option or derivative, you can just hold the actual asset. In this case, it'll be ETH or sorry, well, DAI, which is going to be used to buy ETH, right? And you don't have to work with any extra derivative asset to dynamically rebalance. And then the other solution we're working on, we're not going to get to it today because sort of 2 different code bases kind of have to pick 1 to deep dive on, right?

S2

Speaker 2

39:43

But we're also working on this limit pool protocol, which will allow you to basically have a one-way position. So it would trade the same way that your AMM positions would trade today, but it actually locks the position in. So this is really important for asset pairs that you don't necessarily want to put up a two-sided position on. So for example, something like an option token where you're very much worried about the price of the option token going to 0.

S2

Speaker 2

40:21

So therefore you could still have a marketplace where you can trade those assets, but nobody is having to take 2 sided risk. And also you could do cool stuff like, hey, I wanna sell ETH from like 3000 to 10, 000 di per ETH. And you can pay a fixed transaction cost for that. So the TLDR here is instead of trying to do conditional swaps off-chain, we are able to express ourselves doing a one-way position on-chain, and therefore we're not trying to do what we do today with limit orders and stops where it's like off chain and it's a single swap.

S2

Speaker 2

41:10

And we try to like shove a circle in a square a little bit because it's better to have the liquidity on chain before the price actually moves there, rather than reacting and then potentially having to deal with getting front run and all of those things. So yeah, that's pretty much it as far as the deck goes. We're definitely talking to liquidity management protocols because something like cover, if they're very confident that a position's going to go out of range, they can take a hedge position on that without having to resort to using something like an option or a derivative which may be a little harder to explain to the users.

S1

Speaker 1

42:11

Okay, interesting.

S2

Speaker 2

42:16

Any questions about any of this stuff before we dive into the code?

S1

Speaker 1

42:21

No questions. Excited to look at the implementation.

S2

Speaker 2

42:27

Yeah, there's quite a lot there. So, let me try to start from the very inception of 1 of these pools and kind of walk you all the way through it. So there's a few choices that I'll probably change before we actually launch it, namely the fact that we're deploying bytecode each time.

S2

Speaker 2

43:00

But really tried to obey a lot of the standards that people expect like token 0 token 1 those are Those tokens for anybody that's not familiar with AMMs. Those are usually sorted by address So, you know, if you have 0xA and 0xB, 0xA will be token 0 and 0xB will be token 1. This is kind of... Sorry, go ahead.

S1

Speaker 1

43:30

Yeah, can you zoom in a little bit maybe?

S2

Speaker 2

43:33

Oh, sure.

S1

Speaker 1

43:36

Ah, that's way better.

S2

Speaker 2

43:38

Okay. Now I've got like red lines on the screen.

S1

Speaker 1

43:43

Yeah, what is this? Oh, it's okay. Okay.

S1

Speaker 1

43:47

Okay.

S2

Speaker 2

43:48

It's Grammarly. Maybe I can. Okay.

S2

Speaker 2

43:53

I can tell Grammarly to go away. But yeah, so essentially, you know, we create a pool, there's a TWAP source that we have. So either that comes from a liquid source, like, for example, Uniswap V3 or Curve. And hopefully in the future, our own pools, we'll use our own pools as a HWOP source.

S2

Speaker 2

44:19

And then we also, you'll notice here that there's this mention of volatility tiers. And essentially, this tells us how fast the position can unlock at most. And this is 1 of the mechanisms in the protocol to protect against really widespread manipulation of an asset. You can basically put a rate limit on how fast the price can move.

S2

Speaker 2

44:47

So you say, well, at most I expect it to move

S1

Speaker 1

44:51

10%

S2

Speaker 2

44:52

per hour or something like that, right? And so that's why these tiers exist is to have that rate limit. And so yeah, nothing crazy here beyond launching the pool.

S2

Speaker 2

45:16

And then we go into the contract. And as we set up this pool, we first just set the immutable arguments. As far as I'm aware, in the newest version of Solidity, you actually don't have to set the immutable variables in the constructor, you can set them outside of the constructor. But for anybody that's not familiar with the immutable keyword, essentially This will allow you to put a value in the bytecode.

S2

Speaker 2

45:49

And that makes it basically 0 cost, because you're reading directly from the bytecode to access that value. If you are putting something in storage, it's going to cost you 20, 000 gas units to initialize it, and then like 2, 100 to read that value. So if you have a value that you don't expect to change,

S1

Speaker 1

46:16

Quite good to use. It's a constant that you set at deploy time.

S2

Speaker 2

46:24

Yes, and like I said in the newest version of Slidi you'll actually be able to set that outside of the constructor if, for example, maybe you have an initialize method. So the way that the contracts are laid out is that we're going to make a call. Why is this done?

S2

Speaker 2

46:51

This is done to actually split out the bytecode because in this protocol, there's about 100 kilobytes of bytecode. Comparison to something like Uniswap v3, that's somewhere in the ballpark of 30 kilobytes of bytecode. This is quite a bit more, and thus the reason that we split these out. You'll also notice that this is its own call as well.

S2

Speaker 2

47:20

Essentially what this does is every time the TWAP moves, we're going to sync up what's happening in the pool. And there's always only 1 active auction at a given time, depending on where that TWAP price falls. Right. So if the TWAP is at

S1

Speaker 1

47:45

2000,

S2

Speaker 2

47:46

let's say 2000 die per ETH, we will unlock a tick spacings worth of liquidity, which might be something like 10 or 20 basis points or 0.1 or

S1

Speaker 1

48:02

0.2%.

S2

Speaker 2

48:04

So on each 1 of these methods, we pretty much sync up the pool. Then we're going to do a call to do a mint, and then we're going to save the state. So now we can go into the mint call and like I said the benefit of doing it this way is that you split out the bytecode and everything here is built specifically for Arbitrum.

S2

Speaker 2

48:41

So we're really trying to focus on reducing the amount of call data that's passed around. So we just do 1 external call here and here, and then everything else is an internal. So all these functions that you see inside of here, these are all internal calls. So then we don't have to pay for that call data on L2.

S2

Speaker 2

49:13

You kind of have this trade-off between L1 and L2 where L1 ETH, the call date is gonna be the cheapest thing. And then the storage is going to be really expensive. And then On L2, it's kind of the opposite, where when there's a lot of activity on the L1, the call data on the L2 is going to be really expensive. So you really want to pay attention to how you're packing your call data.

S2

Speaker 2

49:48

And so, thus the reason to use something like a struct. Well, you want to use a struct so that it will compress all the call data down and then, yeah, that decreases the gas cost for the user. So here what we do is we only allow the user to place a position on 1 side of the TWAP. And the TWAP here, you're going to see that it's referred to as latest tick.

S2

Speaker 2

50:26

So, if we go into this positions library, You'll see first we check are these valid ticks. Then we initialize this cache. We check if they're not, if they're minting an empty position. And then we enforce what's called this safety window.

S2

Speaker 2

50:52

Maybe we'll get to this later, but essentially this is done so that to reduce the impact of anybody being able to see the current state of the pool and also perhaps know where the price is going to go because as we know AMMs often lag behind centralized exchanges. So this effectively stops someone from seeing that, hey, you've accumulated all this filled amount, I'm just gonna stick my position in, and even if the next auction doesn't get filled, I'll still get part of the previous fill. But there's also this cool effect in this pool of like, if there is a lot, if the auctions are getting filled frequently, it reduces the risk of somebody who's creating a new position that they won't get filled. Everything will be split pro rata, just like it is in any AMM.

S2

Speaker 2

51:59

And the reason that we do that is so that we never have any arrays that we have to loop through. So here we essentially, based off of whatever their initial range is, We're going to calculate the liquidity that they would mint. I have

S1

Speaker 1

52:20

a question.

S2

Speaker 2

52:22

Yes.

S1

Speaker 1

52:26

Okay, wouldn't you want to do line 73 at the start of the function to fail quicker I think it's like the fastest thing that could fail just something I found interesting

S2

Speaker 2

52:41

yes that definitely is true I you'll notice I also so yes this this could definitely be moved up above the cache so that it fails sooner and then someone who is it likely that the user is going to send a mount 0 probably and probably not like they They pretty much grief themselves if they are trying to create a position with a mountain 0. But yes, for sure it's good to follow the principle of failing somebody as soon as possible so that they don't burn extra gas and that makes the experience better for all the other users on that chain.

S1

Speaker 1

53:30

Okay, another question. I find the require statements interesting.

S2

Speaker 2

53:40

Yes, so I initially I was using custom errors And the problem that I ran into was that I couldn't actually see the revert message on EtherScan. And this is something that's been known for a while that it's an issue. Now if you call a contract directly, I believe it will give you the custom revert string.

S2

Speaker 2

54:09

So you could do revert and then whatever your custom error is. However, if you call a contract which calls another contract, then it will not bubble up the custom error. So this is ultimately the issue that I ran into. And Require does use more bytecode than custom errors.

S2

Speaker 2

54:43

And it does use more gas. So the way that I get around that is I do the if check to like if we should revert. And if that string is true, then I just force the revert by require false. And then I pass the string as the revert string as well.

S2

Speaker 2

55:09

And this makes it so that I can actually see the error. Someone can see the error on EtherScan. Yeah, it's a bit of an unfortunate thing, Etherscan. There has been people prodding at Etherscan to go and fix this issue, but they haven't quite gotten around to it yet, unfortunately.

S2

Speaker 2

55:33

But custom errors have been around for a really long time. So, of course, yeah. And I also, I was having some back and forth with Paul Rosvanberg, who's the developer for Sablier. And I was like, yeah, I think I'm just gonna do this because at least it works for now.

S3

Speaker 3

55:54

And he's like, no, don't do that. You're giving it, you're giving into them. And I'm like, well,

S1

Speaker 1

56:03

The custom errors are definitely more beautiful to look at when you're reading the code, but I definitely get it from the tooling side: it's super annoying when you're debugging something and you don't get the error string. That's super annoying.

S2

Speaker 2

56:25

Yeah, it would definitely be annoying for the users of this protocol as well to not be able to see the revert string directly on Etherscan; they would have to deep dive on Tenderly, which I'm pretty sure very few people are going to do. So yeah, it's just an unfortunate side effect of the state of Etherscan. And apparently they don't have this on their roadmap at all to fix.

S2

Speaker 2

56:53

So that's where it's like, okay, what do you do? Pick your poison.

S1

Speaker 1

57:02

That's weird. Is there a reason for that?

S2

Speaker 2

57:07

I don't actually know, but yeah, it seems to be directly connected to when you delegatecall to another contract: it's supposed to decode the error that comes back, and Etherscan is just not decoding the error that comes back from the delegatecall. So that's why, if I just do a revert using the custom errors, you will not see it; you'll just see a fail on Etherscan and it won't give you any revert string.

S1

Speaker 1

57:44

Yeah, that sucks.

S2

Speaker 2

57:46

Yeah, it definitely does. And even for me, it took me longer to debug a lot of times because I wouldn't be able to see that. And I think even if you go to, I think Tenderly might have the same issue.

S2

Speaker 2

58:01

I don't remember what the case is with Tenderly, whether it decodes those custom revert strings or not. But yeah, I mean, most people are looking at Arbiscan or Etherscan anyways.

S3

Speaker 3

58:15

So

S2

Speaker 2

58:15

you kind of have to build around that, at least in my opinion. So hopefully this is something that gets fixed. It could be fixed by another explorer.

S2

Speaker 2

58:25

But of course, you know, the majority of people are on Etherscan. So it would definitely be very much appreciated if there were more diversity of block explorers out there. Not to say that Etherscan hasn't done a good job, just to say, hey, there could be some other team that could cover for this fault.

S1

Speaker 1

58:51

Yeah, it definitely feels like a monopoly right now. But don't get me wrong, Etherscan is a great product for sure. Okay, just before I forget, can you switch back to the CoverPool?

S1

Speaker 1

59:05

Yes. There's also something I find interesting. So line 41, why re-implement that and not just use OpenZeppelin or Solmate or something?

S2

Speaker 2

59:19

Yeah, so I think this is a philosophy that you see in some code bases where they implement things on their own such that they're not dependent on some other code base. Here, it's a pretty simple check. There's a lot of extra stuff that comes with something like an OpenZeppelin, where you're gonna have changing owner and stuff like that.

S2

Speaker 2

59:44

I guess perhaps you could import just the onlyOwner modifier, but here, you'll see at the bottom, it just checks. So I think my opinion on the OpenZeppelin stuff is that sometimes it can be a bit too heavyweight. We have seen issues arise from certain versions of the OpenZeppelin contracts here and there, like very rarely. But the auditor that we're working with on the limit pools, they pushed us towards using the OpenZeppelin lock.

S2

Speaker 2

01:00:28

So, yeah, my opinion is, I think those are definitely a good starting point. You might find that some of those contracts that they have might do more than you necessarily need. So I think from my perspective, it's just to keep it as simple as possible. And yeah, the security onus is definitely on the developer.

S2

Speaker 2

01:00:58

So here I find, like, okay, this is a pretty simple modifier. It's just checking if msg.sender is the owner. And we use this for the fee collection, for setting the fees here.

S1

Speaker 1

01:01:25

That's definitely an interesting opinion. I have another opinion. I'm not sure if we should.

S2

Speaker 2

01:01:31

Go for it. This is what we're here for. We're here for the hot debates.

S1

Speaker 1

01:01:36

Okay, let's do it. So, some of OpenZeppelin's implementations are definitely bloated, for sure. But if there's something bloated in OpenZeppelin that I need to use, I would use Solmate, right? Which is definitely not bloated.

S1

Speaker 1

01:02:00

And from a security point of view, I would definitely trust code that's been reused like 1,000 times more than my own implementation. onlyOwner is, of course, a very, very simple thing, and it's hard to mess that up. But even then, I personally would feel safer if I see the import on top and it's from Solmate or something.

S1

Speaker 1

01:02:27

That would make me feel safer. So I find it interesting that it actually came from the auditor.

S2

Speaker 2

01:02:35

Yes, yes. For the re-entrancy, you'll see that we also have a re-entrancy lock in here. My stance on re-entrancy is that, even if your lock fails, you should still be safe from re-entrancy because of checks, effects, interactions.

S2

Speaker 2

01:02:59

Before you interact with something external, you should always make sure to save the state so that you don't ever have the possibility of a stale state being there. Perhaps if you're really trying to optimize you might think differently, but I don't think it's really that expensive to write to the same slot twice, if I remember correctly.
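A minimal sketch of the checks-effects-interactions ordering he describes, with hypothetical names: state is written before the external call, so even if the re-entrancy lock were bypassed, a re-entrant call would never see stale balances.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

interface IERC20Minimal {
    function transfer(address to, uint256 amount) external returns (bool);
}

// Sketch of checks-effects-interactions: update storage first, interact last.
contract CEISketch {
    mapping(address => uint256) public balances;

    function withdraw(IERC20Minimal token, uint256 amount) external {
        // Checks
        require(balances[msg.sender] >= amount, "InsufficientBalance()");
        // Effects: state is saved before any external call.
        balances[msg.sender] -= amount;
        // Interactions: the external call happens last, so a re-entrant call
        // can never observe stale state.
        require(token.transfer(msg.sender, amount), "TransferFailed()");
    }
}
```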

S1

Speaker 1

01:03:30

Okay, maybe one last thing on the onlyOwner. I also find it interesting that the underscore onlyOwner function is called exactly once. Why create a separate function for that and not just inline the one line of code?

S2

Speaker 2

01:03:50

Yeah, so the interest... So this is only used 1 time. If you have something like this where it's used multiple times, it saves bytecode space.

S1

Speaker 1

01:04:06

Oh, I don't mean that. I don't mean that. I don't mean why the modifier at all.

S1

Speaker 1

01:04:10

I mean like line 42, that function is only used once. Why not just inline it in the modifier?

S2

Speaker 2

01:04:20

Yes, you'll notice that I was just following a pattern. Okay. But definitely you should get the same result and the same bytecode.

S2

Speaker 2

01:04:30

I don't recall if the byte code size was different between just inlining it and having a private function. I don't recall if there was a bytecode size difference, but definitely for the lock there is. So that's why it's just to kind of follow the same pattern.

S1

Speaker 1

01:04:51

Okay, okay, like a symmetry thing. I get that. Okay, interesting.

S2

Speaker 2

01:05:00

And the bytecode size difference for this is about 2 kilobytes. So you can shave off about 2 kilobytes of bytecode, because when you have a modifier, and perhaps you already know this, it will copy that bytecode to every usage of the modifier. So it's going to stick it in here.

S2

Speaker 2

01:05:21

It's going to also copy that same bytecode here and here. So you save a noticeable amount of bytecode by just having private functions like this.
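A small sketch of the pattern being discussed: the modifier body is copied into every function that uses it, so pushing the body into a private function means each usage only inlines a call rather than the whole check. Names are illustrative, not the actual CoverPool code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Sketch: keep modifier bodies tiny by delegating to private functions.
// The modifier body is duplicated at every usage site, so this shaves
// bytecode once the modifier is used in several functions.
contract OwnerLockSketch {
    address public owner = msg.sender;
    uint256 private unlocked = 1;

    function _onlyOwner() private view {
        require(msg.sender == owner, "OwnerOnly()");
    }

    function _lock() private {
        require(unlocked == 1, "Reentrancy()");
        unlocked = 2;
    }

    function _unlock() private {
        unlocked = 1;
    }

    modifier onlyOwner() {
        _onlyOwner(); // only a short call is copied per usage, not the whole check
        _;
    }

    modifier lock() {
        _lock();
        _;
        _unlock();
    }

    function collectFees() external onlyOwner lock {
        // ...fee collection logic
    }

    function setFees(uint16 /* feeBps */) external onlyOwner lock {
        // ...parameter update logic
    }
}
```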

S1

Speaker 1

01:05:38

Okay, okay. Okay, that makes sense.

S2

Speaker 2

01:05:41

And also, in the OpenZeppelin re-entrancy lock, they also use private functions.

S1

Speaker 1

01:05:52

I see. OK, that would be an interesting PR for solc or something, if someone can optimize that. It feels like something the compiler should do.

S2

Speaker 2

01:06:10

Yes, yes, I do agree with that. It would just basically say like, hey, anything that's inside the modifier, if it's used more than once, then just make it a private function.

S1

Speaker 1

01:06:24

Exactly.

S2

Speaker 2

01:06:27

But then you have to make two private functions, one for before and one for after.

S1

Speaker 1

01:06:40

Yeah, but even then, I think most modifiers only have the beginning part. I think you could do something, I don't know, it feels like it's solvable. It's not something intractable.

S2

Speaker 2

01:06:55

Yes, I agree.

S1

Speaker 1

01:06:57

Maybe an interesting hackathon project or something. Okay, where do you want to go next?

S2

Speaker 2

01:07:07

Yeah, so kind of taking a step back out, you'll again notice that every function has this syncLatest, which is what actually syncs the pool. So this might be called every couple of minutes when the price moves. Here in this burn function, you can actually skip the sync.

S2

Speaker 2

01:07:33

And this is just a safety mechanism. If for some reason you don't want to pay the gas cost to sync the pool, you can still withdraw and exit your position.

S1

Speaker 1

01:07:47

Okay.

S2

Speaker 2

01:07:51

So yeah, this is definitely the most complex part of this. There are two parts, and we can start with whichever one you want; they are the most complex. So one is the syncing process, where basically we put the current state of your position onto a claim tick and then sort of update all the elements in the pool.

S2

Speaker 2

01:08:22

So at the end of this sync function, we're saying, okay, the pool price is starting here. On one side, the liquidity will be 0. And yeah, then we're just setting it up for the next auction. And here we set the time as the current block timestamp.

S1

Speaker 1

01:08:52

Okay, by the way, looking at this now, it makes sense that this is a layer 2 arbitrum protocol. There is some heavy lifting that happens here, gas-wise.

S2

Speaker 2

01:09:12

Yes, so to give an approximation of how much gas this costs: every time the pool is synced, it will be somewhere between 300,000 and 400,000 gas. So it's not crazy expensive.

S2

Speaker 2

01:09:32

And that definitely pales in comparison to something like maybe an Aave or something like that, where there's a lot of risk parameters and a lot of stuff that has to be stashed in storage. Pretty much all of this stuff is done in memory. So what is very easily missed is that even though you see a lot of code here as I'm scrolling, it is actually almost all in memory.

S1

Speaker 1

01:10:16

Okay, yeah, but what makes me nervous gas-wise is that you have two while loops in there, and I don't know the exit condition of that. So it's probably like 400k on average, right? Because how often could this loop?

S2

Speaker 2

01:10:34

Yeah, it depends how far the price is moving. But if you're just moving from 1 tick to another, then it's between 300 and 400k gas. So basically what happens in this function here is that from the current auction, we roll over any unfilled amounts.

S2

Speaker 2

01:11:05

And when you go to claim your position, this will inform the pool of any unfilled amounts, and that is split pro rata back to the users. And then here we update an epoch, and this epoch is used as basically a control mechanism so that you claim on the correct tick. And yeah, this maybe runs, you know, once every couple of minutes, every 3 to 5 minutes, let's say. So then essentially, we have a cross tick.

S2

Speaker 2

01:11:54

This would be in the direction of swapping. So for 0 to 1, that is down. That is the lower. And then vice versa for 1 to 0, because we are representing both sides here.

S2

Speaker 2

01:12:14

And then we do this accumulate process, which essentially, if we hit a position boundary, will drop off whatever deltas onto that tick so that the user can claim their final position once their entire position range has been crossed. So pretty much once we get all the way to where the TWAP is now, that's when that loop stops. I also had an auditor point out, like, hey, you should just have a condition in that while loop to show when it breaks. And yeah, we definitely had a lot of stuff to work on for this.

S2

Speaker 2

01:13:10

So that was one of the things; it's not really a functional thing, but it would signal to the reader when the loop actually cuts off. So then we do this accumulate function here. And this will actually take off any deltas that were on the previous tick. So it just carries over anything that was checkpointed.

S2

Speaker 2

01:13:48

So basically as you're going along this position, you're checkpointing the state of the position and hey, here's how much has been filled, here's how much wasn't filled. And then you, from that tick, you unstash it. And then if we hit a position boundary then that's where we drop those deltas onto the tick which we're accumulating onto. And then we just clear everything out.

S2

Speaker 2

01:14:23

So this of course works such that it's always moving in 1 direction. So you're clearing everything out as you're moving liquidity in 1 direction. So yeah, that's pretty much that function. Any questions so far?
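A heavily simplified sketch of the loop shape being described, with hypothetical names: walk tick by tick from the last synced tick toward the new TWAP tick, with the exit condition made explicit so a reader can see where it breaks.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Simplified sketch of syncing the pool from the previously observed TWAP
// tick to the new one, one tick spacing at a time, working in memory and
// writing storage once at the end.
contract SyncLoopSketch {
    int24 public constant TICK_SPACING = 10;

    struct PoolState {
        int24 latestTick; // last TWAP tick the pool was synced to
        uint32 epoch;     // incremented each sync; used to validate claims
    }

    PoolState public state;

    function syncLatest(int24 newLatestTick) external {
        // Assumes ticks are aligned to the spacing, otherwise the loop
        // below would never terminate.
        require(newLatestTick % TICK_SPACING == 0, "InvalidTick()");

        PoolState memory cache = state;
        if (newLatestTick == cache.latestTick) return; // nothing to do

        int24 step = newLatestTick > cache.latestTick ? TICK_SPACING : -TICK_SPACING;
        int24 tick = cache.latestTick;

        // Explicit exit condition: stop once we reach the new TWAP tick.
        while (tick != newLatestTick) {
            tick += step;
            // ...roll over unfilled amounts from the finished auction and
            // drop accumulated deltas onto crossed position boundaries here.
        }

        cache.latestTick = newLatestTick;
        cache.epoch += 1;
        state = cache; // single storage write at the end
    }
}
```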

S1

Speaker 1

01:14:52

Yeah. Okay, so you said this function is called like every couple of minutes. And it seems like anyone can call that function, right? Yeah.

S2

Speaker 2

01:15:13

Yes.

S1

Speaker 1

01:15:15

What are you guys using to automate this?

S2

Speaker 2

01:15:21

Right, so the incentive is there for someone to go and update the pool, because it would unlock ahead of where the market price is. And that is a deliberate choice. So let's say that the range that we're looking at is from tick 0 to tick 20, and the spacing is 10. When you get three quarters past a single spacing, it will allow you to unlock the next tick, which is from 10 to 20.

S2

Speaker 2

01:16:03

And, you know, 10 to 20, some portion of that we expect to be ahead of the market price, or better than the market price. So then the incentive, once the price actually goes there and it's available to be unlocked, is for someone to basically unlock it and then go and take that arbitrage immediately. And of course, the benefit for the user is that they're giving up short-term arbitrage to be protected and to receive priority for liquidity ahead of anybody else, because they offer a slightly better price.

S1

Speaker 1

01:16:54

Okay, so there's some inbuilt incentive for someone to call this function. Okay.

S2

Speaker 2

01:17:02

Yes, and the parameters on the auction, like how fast the auction plays out, can help with that. Also, how often you allow the pool to be unlocked can help with making sure that incentive is there.

S2

Speaker 2

01:17:24

But yeah, in the swap function, we measure how much time has passed in the auction, and then we give a discount based on that. And that discount is capped at the tick spacing. So if the tick spacing is 10, which is like 0.1%, then we give max 0.1%.

S2

Speaker 2

01:17:48

And of course, the faster that we give that up, the better, right? Because then we're more likely to get filled, and the user cares about getting filled, not necessarily about getting the best price at that exact point in time. So just think of the user that's going to be away from their computer for some hours.
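A small sketch of the time-based discount idea, with illustrative numbers only: the longer the current auction has run, the larger the price improvement offered to a filler, capped at one tick spacing's worth (about 0.1% here).

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Sketch: a linear Dutch-auction discount, in basis points, that grows with
// the time elapsed in the current auction and is capped at one tick spacing
// (10 ticks, roughly 0.1%, in this example).
contract AuctionDiscountSketch {
    uint32 public constant AUCTION_LENGTH = 60;    // seconds per auction
    uint256 public constant MAX_DISCOUNT_BPS = 10; // ~0.1%, one tick spacing

    function currentDiscountBps(uint32 auctionStart) public view returns (uint256) {
        // Assumes auctionStart is not in the future.
        uint256 elapsed = block.timestamp - auctionStart;
        if (elapsed >= AUCTION_LENGTH) return MAX_DISCOUNT_BPS;
        return MAX_DISCOUNT_BPS * elapsed / AUCTION_LENGTH;
    }
}
```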

S2

Speaker 2

01:18:17

And to put this in the context of a Terra Luna situation, where the price is falling quite fast: basically, on something like, I don't know, UST, you would be offering a better price than the Curve pool or whatever other pool, and therefore someone will fill you. Your position is one-way. So you say, I'm only going to accept, let's say, USDC.

S2

Speaker 2

01:18:51

And then someone takes that short-term arbitrage and boom, you got filled, you got protected. And then obviously there are people that are taking two-sided risk, and those people hopefully and likely understand that that risk is there. And yeah, the people that are just holding the asset and want to exit it if it gets to some price, they can do so safely.

S2

Speaker 2

01:19:29

But this is what actually prevents this from being called too often. So here, if we don't have any update for the pool, then we would just return early, and then none of this other process happens. So, I think the way that the average tick is being calculated is quite interesting. Essentially what we do is we check how many auctions have elapsed over this time.

S2

Speaker 2

01:20:17

So, if the auction is, let's say, like a minute and the tick spacing is again like 10, or 0.1 percent, we allow the reference to move based on how much time has passed. You'll also notice here that we say, okay, if 3 quarters of the time has elapsed, we'll allow you to move 1 spacing.

S1

Speaker 1

01:20:56

Question: why is auctions elapsed an int?

S2

Speaker 2

01:21:03

That is just to reduce conversions, because, yeah, I'm pretty sure I'm using this against some other int32. So, let's see here. Time elapsed.

S2

Speaker 2

01:21:41

I believe it's because it could be negative. So if auctions elapsed is 0 and I subtract 1, then it's going to be negative 1.

S1

Speaker 1

01:21:55

Yeah, but what does auctions elapsed equals negative 1 even mean?

S2

Speaker 2

01:22:07

It just means that we want to say, if 3 quarters of the time has passed, then we add 1. But it's just so we don't move too early. I'm trying to think.

S2

Speaker 2

01:22:40

I believe it's so that we don't double count this. Because here I'm adding 1 if 3 quarters of the time has passed, and so that makes it so that we don't double count this addition here.

S1

Speaker 1

01:23:05

Okay. Okay.

S2

Speaker 2

01:23:11

So let's say that not enough time has passed, such that this is 0, right? Time elapsed over auction length is 0, then it would be negative 1.

S2

Speaker 2

01:23:27

I guess then this wouldn't be greater than that. Yeah, perhaps that's something that could be changed. It's essentially so we don't double count this.

S2

Speaker 2

01:23:54

So let's say if 2 auctions had elapsed, then if we didn't subtract 1 and then we add 1, then it would double count the same time period. So it's just to avoid having to do that. Yeah, we just add an extra auction if 3 quarters of the time has passed, but that only applies to the first auction, not for any amount greater than that.
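
Putting those pieces together, here is a compact sketch of the elapsed-auction count as it was just described: divide, subtract 1, then top up by 1 once 3 quarters of the time has passed. The names and the exact threshold check are one literal reading of the discussion, not the repo's code, and the real implementation may differ:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library AuctionCountSketch {
    function auctionsElapsed(uint256 timeElapsed, uint256 auctionLength)
        internal
        pure
        returns (int256 count)
    {
        // Signed on purpose: with a uint, "zero full auctions minus one" would
        // underflow and revert under Solidity >= 0.8 instead of sitting at -1.
        count = int256(timeElapsed / auctionLength) - 1;

        // "If 3 quarters of the time has passed, then we add 1"; the subtraction
        // above is what keeps this top-up from double counting a full auction.
        if (timeElapsed * 4 >= auctionLength * 3) {
            count += 1;
        }
    }
}
```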

S1

Speaker 1

01:24:43

Okay. Maybe another thing: you're doing a lot of conversions, and personally, if I do a lot of conversions, I always use like a SafeCast from OpenZeppelin. But same philosophy here, no OpenZeppelin or external contracts, right?

S2

Speaker 2

01:25:02

Yes, although in the code that I'm working on now I am using a safe cast, and I think it would be good for me to go back and apply that safe cast here as well, because the most common failure cases that you're going to see are definitely overflow and underflow, right? Outside of like deep logic issues, it's usually going to be overflow or underflow. So when it comes to smart contracts, it's always better to assume failure and then code as if that failure is going to occur.

S2

Speaker 2

01:25:45

So yeah, having those kinds of safe casts, it just reverts. If you are trying to cast a negative value, a negative int, to a uint, it protects you against doing that, because you can have some unforeseen downstream effects if you cast a negative into a uint, since of course that's cast into a very large number. So definitely there can be some unforeseen effects from that. So yeah, I think that's definitely a valid point.
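
For reference, a SafeCast-style helper simply range-checks before converting and reverts otherwise. A hand-rolled sketch of the two checks being discussed (down-casting and negative-to-unsigned), not OpenZeppelin's actual code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hand-rolled illustration of the checks a SafeCast-style helper performs;
// in practice you would pull in a vetted library rather than write this yourself.
library SafeCastSketch {
    // Reverts instead of silently wrapping a negative value into a huge uint.
    function toUint256(int256 value) internal pure returns (uint256) {
        require(value >= 0, "SafeCast: negative value");
        return uint256(value);
    }

    // Reverts instead of silently truncating when down-casting to int32.
    function toInt32(int256 value) internal pure returns (int32) {
        require(value >= type(int32).min && value <= type(int32).max, "SafeCast: out of int32 range");
        return int32(value);
    }
}
```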

S2

Speaker 2

01:26:41

And so then, after we figure out how many auctions have elapsed, we calculate the average tick. And for this we actually take 4 different samples. So what I'm doing here is I'm taking 2 samples at each of the ends of the window we want to grab the sample from.

S2

Speaker 2

01:27:23

Why am I doing this? It is so that even if 2 of the samples are manipulated, we can still take the 1 with the least variance from the last sample that we took, because manipulation is always going to increase the variance, right? So if the time delta here, let's say of 1 block, is 1 second or 2 seconds, we're going to take a sample at 0, so the most recent sample, then at 2 seconds, then 2 seconds before the length of the sample window we're taking, and then also at the sample length.

S2

Speaker 2

01:28:21

Then what we do is we calculate the average tick from matching up all of those 4 samples. So we have 0, 1, 2, 3, and we match here 0 and 2, 0 and 3, 1 and 2, and 1 and 3. Because of course, 2 and 3 are right next to each other, and then 0 and 1 are also next to each other. And then for safety, we make sure that whatever answer we got actually fits within our expected min and max tick range.

S2

Speaker 2

01:29:10

If it doesn't, then we just say, okay, well, it's the min tick or it's the max tick. This is really just bounds protection, but still definitely a case that you have to check for and handle. And then after we return back here, so first we call this internal function, then from there we go across those 4 items that we put in the array, compare each one to the sample that we took previously, ask which 1 has the least variance, and that's the 1 that we end up using.
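
A compressed sketch of that pairing-and-selection step, assuming the four observed ticks have already been fetched (hypothetical function, not the repo's implementation):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library TwapSampleSketch {
    // samples[0..3]: ticks observed at "now", now minus 2s, window end minus 2s, window end.
    // lastTick: the previously accepted sample to measure deviation against.
    function pickAverageTick(int256[4] memory samples, int256 lastTick)
        internal
        pure
        returns (int256 best)
    {
        // Pair non-adjacent samples: (0,2), (0,3), (1,2), (1,3).
        int256[4] memory candidates = [
            (samples[0] + samples[2]) / 2,
            (samples[0] + samples[3]) / 2,
            (samples[1] + samples[2]) / 2,
            (samples[1] + samples[3]) / 2
        ];

        // Keep the candidate closest to the last accepted sample; manipulation tends
        // to increase deviation, so outlier pairs get filtered out.
        best = candidates[0];
        int256 bestDiff = _abs(candidates[0] - lastTick);
        for (uint256 i = 1; i < 4; i++) {
            int256 diff = _abs(candidates[i] - lastTick);
            if (diff < bestDiff) {
                (best, bestDiff) = (candidates[i], diff);
            }
        }
    }

    function _abs(int256 x) private pure returns (int256) {
        return x >= 0 ? x : -x;
    }
}
```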

S2

Speaker 2

01:30:02

This way, like I said, even if 2 of the samples get messed up, you still have a valid sample to compare against. And yeah, of course, moving into proof of stake on ETH L1, the big issue is that you know which blocks you're going to be proposing and have control of. So then you could basically control the samples that come out of those blocks, but you have to get 2 blocks in a row to be able to manipulate any 1 sample. And to take it even a step further than the implementation I have here, if you put an extra block time between 0 and 1, it means that in order for them to manipulate both of those samples, they would have to have 3 blocks in a row, which is quite difficult.

S2

Speaker 2

01:31:21

Currently, if you control about 0.015% of the ETH supply, you can propose 2 blocks every 62 days, something like that. So yeah, I think there's hopefully some takeaways for people that are using on-chain TWAP oracles. My thoughts are that they can be improved to protect against manipulation, and if you require that for your system and you don't want to depend on some sort of off-chain oracle, then perhaps this is a path to improve that protection.

S2

Speaker 2

01:32:18

But yeah, I think there's a lot more research to come on the on-chain TWAP oracle side. I'm kind of excited to see the things that people come up with. I've seen someone working on Uniswap v4 hooks and calculating a median, which is sort of what's being done here, because we're kind of throwing out the bad samples. But yeah, this is how we're calculating the unlock price to figure out what liquidity should be unlocked.

S2

Speaker 2

01:33:02

And, as I said previously, we rate limit so that you can only unlock so much at a time, and of course this protects against any sort of flash crash stuff where, you know, the price maybe goes somewhere for a couple of blocks and then comes back. So, yeah, I think ultimately with a solution like this, there's gonna be 1 pool that is for people that are long-term holders that want to play on a bigger time frame, and then there's going to be people that want to be more reactionary because they're trading on a shorter timeframe. So I kind of see it playing out like that in the long run, where there's sort of 2 different parties to optimize for. Probably they're not gonna be in the same pool for that reason. So yeah, that's the process of how we actually figure out what the next price is that we're going to.
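
A tiny sketch of the rate limiting mentioned at the start of that answer (constants and names invented for illustration): clamp how far the unlocked boundary may move in a single sync, so a few blocks of bad prices can only drag it a bounded distance.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library UnlockRateLimitSketch {
    // Hypothetical clamp on how far the unlocked tick boundary may move per sync call,
    // so a brief flash crash only advances the boundary by a bounded amount per update.
    function clampTarget(int24 currentUnlocked, int24 desiredTarget, int24 maxTicksPerSync)
        internal
        pure
        returns (int24)
    {
        if (desiredTarget > currentUnlocked + maxTicksPerSync) return currentUnlocked + maxTicksPerSync;
        if (desiredTarget < currentUnlocked - maxTicksPerSync) return currentUnlocked - maxTicksPerSync;
        return desiredTarget;
    }
}
```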

S2

Speaker 2

01:34:19

From there, like I said, it's just a process of updating deltas on the ticks that you would claim from. And that's pretty much this whole syncing process. And for any position, we have to checkpoint up to what place they've been filled or what liquidity they've unlocked. So that's why we have this stash internal function here, where we checkpoint everything and then we can continue at the next auction.

S2

Speaker 2

01:34:58

Of course the price can move up and down, so we can partially cross a position range by going up, then the price moves down, and then once we come back up, that's when you would re-encounter this stash or checkpoint.

S1

Speaker 1

01:35:17

Stash or checkpoint.

S2

Speaker 2

01:35:19

So yeah, I'm definitely sure that there's some gas optimizations that can be done here. But yeah, I think it is pretty reasonable and definitely something that can scale, because if you were to have such a large amount of liquidity in this, then paying 300,000 or 400,000 gas, which is, you know, the equivalent of maybe 3 to 4 swaps, is pretty reasonable. So for that, you're talking about on mainnet, maybe the cost of that is like $15.
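
As a back-of-the-envelope check on those numbers, with an assumed gas price of about 20 gwei and ETH at around $1,900 (both purely illustrative):

400,000 gas × 20 gwei = 0.008 ETH, and 0.008 ETH × $1,900 is roughly $15,

which lines up with the ballpark figure mentioned above.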

S2

Speaker 2

01:36:10

But if I can get a couple basis points on something like $10,000, $20,000, or $30,000 even, then that's worth it for me to go and unlock and keep the pool up to date. So, yeah, I mean, all of this stuff is obviously experimental, but for me, I know I'd very much like to have a process like this on chain so that I'm not forced to exit all at 1 price. And yeah, it's just something that I would like to see, and run the experiment and kind of see what happens.

S2

Speaker 2

01:36:53

But yeah, any questions on any of this syncing process? I will be, in the future, doing more benchmarks as far as gas costs, and I think definitely for a version 1.5 it would be good to tackle a lot of those. Obviously most of those are going to come from storage reads and writes, so those are the heavy hitters.

S1

Speaker 1

01:37:21

Okay, where are you guys auditing-wise right now?

S2

Speaker 2

01:37:29

Yeah, so with this repo, this has been audited twice. So we have a report from Veridise. They do a mix of DeFi stuff, and they also have been doing a lot of zero-knowledge stuff.

S2

Speaker 2

01:37:48

So they've more recently been auditing the Scroll layer 2. And basically my advice to anybody that's close to doing audits or in the process of doing them is: to me, the report is the deliverable. You pay somebody for an audit, and I want the report to be top-notch, very detailed, showing the exact sort of case that was solved for. And so here, you can see they're showing 1 of the calculations, and that it was set to an incorrect value.

S2

Speaker 2

01:38:53

So you can see kind of all the code here, and yeah, first and foremost, I just look for reports that are quite lengthy. So I like the fact that in this report, they lay out the structure of all of the classes, and also they talk about the goals of the audit. Ultimately, the audit is trying to prove something, right? So I like that, and something I don't see in a lot of, I would say, less promising audit reports is that they don't lay out a list of goals.

S2

Speaker 2

01:39:48

For anybody that's been following the security space recently, people are talking a lot about invariants, and ultimately these goals are proving out a set of invariants that exist within the protocol. So having a section of what we expect the protocol to be able to do or not do, I think that's pretty important. I also think that something we're hopefully going to see more frequently is auditors running fuzzing tests, and in the other audit report that I'm going to show here, they plan to show exactly how they went about the fuzz testing. So I don't know if in your own projects you've been doing a lot of fuzzing, but I think it's something that's definitely going to be necessary as we move things forward in the security space. If you're writing a new, original protocol and you haven't fuzz tested it to hell and back, it becomes a bit more difficult to trust that there's not some edge case that wasn't handled.

S1

Speaker 1

01:41:41

Yeah, obviously Foundry makes it super easy to do fuzz tests. I just realized that you guys did this in Hardhat.

S2

Speaker 2

01:41:53

Yeah.

S1

Speaker 1

01:41:57

Why Hardhat over Foundry? I would find that interesting.

S2

Speaker 2

01:42:03

Yeah, so I chose Hardhat because I already knew how to set up a testing framework for it. And it was also much easier for the deployment. So if you go into the test folder and you go into the deployment here, and maybe there's some similar way to do this in Hardhat, or sorry, in Foundry, as well.

S2

Speaker 2

01:42:35

But essentially here we have a hook that runs before any of the tests, and it just runs all the deployment. And this is the same deployment that runs when I deploy onto testnet. So I haven't had the time to dig super deep into Foundry yet. For the auditing that we've been doing, we've actually been using Echidna to do all the fuzz testing, and I actually learned recently that that was also done during the audit process of Uniswap v3, that they also used Echidna to test for edge cases.
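
For anyone who hasn't seen Echidna in property mode, here is a minimal, self-contained sketch (not the project's actual harness, and the invariant is invented for illustration): Echidna calls the public functions with random inputs and sequences, and flags any echidna_-prefixed property that ever returns false.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Self-contained Echidna property sketch. The invariant here is invented for
// illustration: the auction discount handed to swappers should never exceed
// one tick spacing.
contract DiscountInvariant {
    uint256 internal constant TICK_SPACING = 10;
    uint256 internal constant AUCTION_LENGTH = 60; // seconds, arbitrary for the sketch

    uint256 internal lastDiscount;

    // Echidna fuzzes this entry point with random elapsed times.
    function computeDiscount(uint256 timeElapsed) public {
        uint256 d = (timeElapsed * TICK_SPACING) / AUCTION_LENGTH;
        lastDiscount = d > TICK_SPACING ? TICK_SPACING : d; // cap at one spacing
    }

    // Property: Echidna reports a failure if this ever returns false.
    function echidna_discount_capped() public view returns (bool) {
        return lastDiscount <= TICK_SPACING;
    }
}
```

You would then point Echidna at this contract and let it search for a sequence of calls that breaks the property.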

S2

Speaker 2

01:43:27

And so I think that's been a pretty effective tool up to this point. For me it was just that I didn't have to spend time learning how to set up Foundry and all that stuff, and once I have more free time, it's definitely something that I want to spend more time with. But for this, I effectively have a streamlined process to deploy all the contracts. And then in the actual tests, and I think this 1 has like 60 different test cases, I just have these standard functions here where I'm validating a set of conditions.

S2

Speaker 2

01:44:24

And let's go into details. So here, for example, I make sure that we have an expected balance difference for the ERC20, and also that the quote matches, so that's reflected too. I validate a bunch of different things in here. The swap one's pretty simple, because we mostly just focus on checking that the balance difference is respected.

S2

Speaker 2

01:45:20

And then for the mint, I'm checking liquidity on ticks, and I'm checking that the position liquidity changes as expected. So the answer is, it was much easier for me to jump into doing this than it was with Foundry. But it definitely seems that the majority of codebases now are using Foundry quite a bit; I just really haven't had the time to spend on that, because it's mostly been spent on protocol design.

S1

Speaker 1

01:46:07

Awesome, dude. Yeah, we're nearly approaching an hour and a half, so I think it would be a good time to stop here. But yeah, thank you very much.

S1

Speaker 1

01:46:17

This was really interesting, Alpha Key. So yeah, thank you for taking the time.

S2

Speaker 2

01:46:25

Yeah, 100%. Thanks for having me on, and hopefully all the listeners learn something. And yeah, talk soon.

S1

Speaker 1

01:46:36

Yeah, for sure. If you want to find Alpha Key, I think the best place is Twitter, right? So I'm gonna link Alpha Key's Twitter in the description below, along with a link to the GitHub.

S1

Speaker 1

01:46:58

Thanks again Alpha Key and see you guys at the next 1. Bye bye.