See all Delphi Digital transcripts on YouTube

Is the Future of Blockchain Interoperability Optimistic? w/ Arjun & Anna

1 hour 27 minutes 42 seconds

Speaker 1

00:00:00 - 00:00:21

We've seen so many bridge hacks happen, whether they're a smart contract bug or a compromise of keys like Ronin. These hacks might have been prevented if there was an optimistic window or some kind of delay that could have allowed time for external actors to respond to the threat. You're now plugged in to the Delphi Podcast.

Speaker 2

00:00:24 - 00:00:29

Hey, everyone. Welcome back to the Delphi Podcast. I'm your co-host Avi Zerlo.

Speaker 3

00:00:30 - 00:00:32

And I'm your co-host John Gural.

Speaker 2

00:00:32 - 00:00:53

Today we have 2 guests: Anna Carroll, protocol lead at Nomad, and Arjun Bhuptani, founder of Connext. Today, we're going to talk about everything bridges and the Connext and Nomad solutions. But before we do, Arjun and Anna, would you please introduce yourselves? Arjun, would you start?

Speaker 4

00:00:53 - 00:01:07

Yeah, absolutely. I'm Arjun. I'm 1 of the founders and project lead of Connext. Connext is an interoperability protocol that lets you move funds and data across chains. We do this in partnership with Nomad.

Speaker 4

00:01:07 - 00:01:44

So you can kind of think of Connext as like the liquidity layer to Nomad's messaging layer. As a team, we've been around since 2017, building, basically researching and developing Layer 2 scalability solutions on top of Ethereum. And then we ended up pivoting into interop in 2020 because we just realized that there was this very, very big problem around L2 fragmentation and being able to actually build applications that make sense to users, that aren't really fragmented across many different ecosystems. And we went live with a network to do this at the start of 2021.

Speaker 1

00:01:44 - 00:02:03

And hey everybody, I'm Anna Carroll. I'm the protocol lead at Nomad. Nomad is an optimistic interoperability protocol. We pass messages between chains and developers can use that to build applications on top of Nomad.

Speaker 1

00:02:04 - 00:02:10

I work on our smart contracts, particularly Solidity and everything related to that. Nice.

Speaker 2

00:02:14 - 00:02:33

Thank you, guys. I'm excited for this conversation and we're going to get in the weeds later on in the back half. And to begin, we're just going to set the scene. Now if you're a user or a listener who doesn't understand the importance of bridges, this is probably not the podcast for you. Bridges are important.

Speaker 2

00:02:33 - 00:02:57

We live in a multi-chain world. This is quite clear today. We're going to skip over all of that so you guys don't need to explain it. Now they're important for cross-chain asset transfers and smart contract calls and many other things and interesting designs that could be built on top of these systems. But I want to start with some of the challenges of actually building these systems.

Speaker 2

00:02:57 - 00:03:14

And Arjun, you wrote an article last year titled The Interoperability Trilemma, aka why bridging Ethereum domains is so damn difficult. So would you mind telling our listeners about the Interoperability Trilemma, aka why bridging Ethereum domains is so damn difficult?

Speaker 4

00:03:15 - 00:04:01

Absolutely. So 1 of the kind of annoying gotchas with building anything on top of blockchains, and I think this is true for any sort of distributed system, is that there are always trade-offs around desirable properties that you want as part of your system. If we think about what we want the ideal bridge to look like, the ideal bridge is cheap, fast, it allows you to pass any kind of arbitrary message between chains, it's easy to deploy to many different chains, and then lastly it retains the core security properties of the chains that it operates on top of. What we found is that it's actually very, very difficult to get all those properties at the same time. So when I wrote that article, at the time optimistic bridges as a concept were not really around yet.

Speaker 4

00:04:01 - 00:04:53

I mean, they've kind of been talked about a little bit, but we were largely analyzing the trade-off space around 3 of those properties, which is trust minimization, generalizability, so arbitrary message passing, and then extensibility, which is like, can you put the same system on many different chains very easily? And what we found was that all of the different systems that existed ended up having to sacrifice 1 of those properties in order to be able to function correctly. So the existing version of Connext that's live on mainnet, and has been live for some time, utilizes atomic swaps. Atomic swaps are a mechanism to trustlessly basically take liquidity on 1 side and then swap it for liquidity on another side between 2 chains. And they are very easy to deploy to many chains, and they retain a lot of the core trust minimization properties of the underlying chains.

Speaker 4

00:04:53 - 00:05:36

But the downside is that you can't use them to do generalized message passing. And we found that the same kind of thing existed for other types of bridges as well. So externally verified bridges, which include everything from multi-sig bridges to MPC systems, TSS systems, POS systems like Axelar, stuff like that. All of those also now introduce this new security assumption where you have an external set of verifiers, external set of parties that are validating data that goes between chains, and now you've introduced a new trust assumption against the chains themselves. And so in the case of these externally verified systems, you're able to use them on many chains very easily, you're able to do generalized message passing, but they're not trustless.

Speaker 4

00:05:38 - 00:06:23

When Nomad came along, we kind of had to change this mental model a little bit and change it from a trilemma to a quadrilemma, because Nomad actually does get all 3 of these properties at the same time, but it does it by sacrificing something else, which is, in Nomad's case, latency. So you have this very, very reasonably trust-minimized solution that is able to be deployed very easily to many different chains, and it can pass any kind of arbitrary messages. But the downside is that it takes 30 minutes for a message to go across chains. So yeah, the key takeaway from this is just that there is no easy way to actually get all of these properties at the same time. Any project that claims to be able to do all of these things at the same time is probably sacrificing trust minimization without really telling you.

Speaker 1

00:06:23 - 00:07:13

And I'll just weigh in to say that part of our thesis, our engineering thesis at Nomad is that for cross-chain protocols, latency is a feature. And what I mean by that is a couple of things. Every cross-chain protocol often will have some latency, at least to reach finality on the sending chain. There are a lot of chains like proof of stake chains that reach finality instantly or quickly, but chains like Ethereum, which are often at the center of this ecosystem, they have probabilistic finality. So you have to wait a certain amount of time after a message is sent to ensure that that message has truly been sent from the perspective of the chain's consensus, right?

Speaker 1

00:07:13 - 00:08:23

So every interop protocol has to have some latency on chains that have non-deterministic finality. Additionally, we think that latency on the receiving chain is also a feature that most interop protocols will want to incorporate as time goes on. The reason being that interop protocols that basically perform state changes immediately, as soon as a piece of information is passed to that chain, have no opportunity to respond to attacks, compromises, or threats to the system, which is why we've seen so many bridge hacks happen, whether they're a smart contract bug or a compromise of keys like Ronin. These hacks might have been prevented if there was an optimistic window or some kind of delay that could have allowed time for external actors to respond to the threat. And as we've seen, the only bridge hack that I know of that has been stopped was on an optimistic bridge, the NEAR Rainbow Bridge.

Speaker 1

00:08:23 - 00:08:44

And that was because there was a period of time where they could react to that. So overall, we think that regardless of whether it's an optimistic interop protocol or some proof of authority construction, latency is a feature that's desirable in interop protocols. And because of that, we think optimistic bridges have the best seat in the trade-off space.

Speaker 2

00:08:45 - 00:09:32

That's a good take. A bit contrarian with the latency, and I'm sure we'll dive into that a bit later. Before we do, I want to just make a point of the—in the article that you wrote, Arjun, on the trilemma, you ask a question of who is verifying the system and what are the costs of corruption. I think this is a great way to look at the architecture of bridges because it essentially comes down to this verifying cost. And I'm wondering, you mentioned it in your last answer there, but if you could just really explicitly overview sort of the 3 buckets that you've identified bridge designs falling into.

Speaker 4

00:09:33 - 00:10:08

Yeah. All right, it's 4 buckets once you include optimistic bridges, but the 3 buckets are, you know: the chains themselves are verifying any state update, which is what we call a natively verified bridge. So this is things like IBC, where you basically have 1 chain's validator set proving, or basically figuring out some mechanism to verify, updates from another chain. And typically, this happens by just verifying consensus directly. Then you have externally verified systems. So you have a third-party set of validators that are taking data from here and verifying that it's correct against data that's over here.

Speaker 4

00:10:08 - 00:10:30

So this is things like multi-sig bridges; it includes everything from LayerZero onwards. This is actually the bucket the vast majority of things out there fall into. So it's everything from LayerZero, Axelar, etc., to, you know, I mean, technically Coinbase is an externally verified bridge, right? It's a very, very explicitly custodial 1, but there is an external verifier, and that is Coinbase.

Speaker 4

00:10:32 - 00:11:06

And then you have locally verified bridges. This is kind of like the atomic swap case that I was talking about, where you don't actually have that. You basically limit the possible types of state transitions to be ones where all you can really do are like swaps or basically some sort of mechanism where you are just allowing somebody else to do something on your behalf in an atomic way. And in those cases they're locally verified because both of the parties of that transaction are only verifying each other. So you don't need to involve anybody else in that system, and it reduces the overhead massively.

Speaker 4

00:11:07 - 00:11:35

Lastly, with optimistic systems, you do, again, have a third-party set of verifiers. But the role of that verifier set has kind of changed: instead of validating that an update is correct, that set of verifiers now just submits a proof if the update is incorrect. It's a slightly different model, which is, you know, sort of like a 1-of-n honest party model rather than an m-of-n honest party model.
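The distinction Arjun draws between the two honest-party models can be sketched in a few lines; the function names and thresholds below are illustrative, not any specific bridge's parameters.

```python
# Toy contrast between the two honest-party models (illustrative only).

def externally_verified_accepts(approvals: list, m: int) -> bool:
    # m-of-n: an update is accepted only if at least m verifiers sign off.
    # Safety breaks as soon as m verifiers collude on a bad update.
    return sum(approvals) >= m

def optimistic_accepts(fraud_proofs: list) -> bool:
    # 1-of-n: an update is accepted unless someone proves it fraudulent
    # during the window. Safety holds if even one watcher is honest and live.
    return not any(fraud_proofs)

# A corrupt 2-of-3 majority pushes a bad update through...
assert externally_verified_accepts([True, True, False], m=2)
# ...while a single honest watcher is enough to stop one optimistically.
assert not optimistic_accepts([False, False, True])
```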

Speaker 3

00:11:36 - 00:11:55

I'm curious, do you think in the future there can be a new category in this framework that you come up with? Or do you think the design space has somewhat matured, and we all know the constraints of the space pretty well at this point?

Speaker 4

00:11:56 - 00:12:15

So that's a really good question. Basically, what it comes down to is, are there other categories? And then what do you need to do to expand those categories, or the existing categories? Are there other categories? It's possible, but at the moment I don't think so.

Speaker 4

00:12:15 - 00:12:53

I think there's only a limited set of people that could possibly verify updates that happen between chains, and they will inevitably end up falling into 1 of these buckets at this stage, at least from what I can tell. That said, you can expand the buckets. For example, in the optimistic space, we are only just starting to scratch the surface of optimistic bridge designs. And there is a lot that's out there that could be possible. For example, there's a bridge design, the Beamer bridge or something like that, that was made by the brainbot team, that uses a slightly different optimistic design than Nomad does.

Speaker 4

00:12:53 - 00:13:44

And of course, it's specifically targeted at transfers only, but it makes some other sort of trade-off concessions around not needing latency, instead having an insurance mechanism effectively against it. There's mechanisms like what UMA is doing that are sort of optimistic again, where you have somebody front the liquidity for an exit, similar to how Connext and Nomad would work together, but on the back end, you have this pool of funds that is used as an insurance bond. So I think there's an interesting design space that hasn't yet been explored there. On the local verification side, to be honest, people have been doing locally verified cross-chain communication for a very long time. We have optimized atomic swaps as much as we possibly could, and I think we're sort of starting to hit the limits of what can be accomplished there.

Speaker 4

00:13:45 - 00:14:09

So I don't personally think that there's a lot more there than there has been. And I would expect that for the other kinds of mechanisms, so externally verified mechanisms and natively verified mechanisms, we're in a similar spot where like in both of those cases, you would fundamentally need new cryptographic primitives or new sort of like tooling around cryptography to create better constructions, because that is the core dependency for both of them.

Speaker 3

00:14:10 - 00:14:32

Got it. Yeah, that was a great overview. So I think we can slowly start to dive into Nomad, the optimistic bridge. It just evokes optimistic rollups in my mind, because at the end of the day,

Speaker 3

00:14:32 - 00:15:02

They're just chains connected with some trust-minimized bridges. So when I look at it, the obvious difference is that in the rollup case there is sort of 1 place, the base chain, that guarantees the data availability, whereas in the Nomad case, the data availability is sort of guaranteed by each chain. So what are some implications of that? And please take your time to walk us through your design, your bridge design.

Speaker 1

00:15:02 - 00:15:33

Yes, I love this question. I have a piece of writing that I want to put out actually about this, comparing optimistic interoperability to optimistic rollups. So I'll start by giving a little bit of baseline context for what Nomad is and how it works. Nomad actually as a protocol is quite simple or elegant if you ask me. Basically the way that it works is that on the sending chain there's a Merkle tree.

Speaker 1

00:15:34 - 00:16:06

When users send a message, that message is hashed and inserted as a leaf into the Merkle tree. There's an updater, an external actor that signs attestations of Merkle roots in this tree. Those signed attestations can be relayed across chains. It's trivial and uses widely available cryptography to validate an updater signature, so its attestations can be accepted on the receiving chain.

Speaker 1

00:16:06 - 00:17:13

The catch is that these attestations have an optimistic timeout period during which watchers or guardians of the system can come and block fraudulent roots from the replica. Watchers are configured on the application level rather than the system-wide level. And this means that applications can have more control and sovereignty over blocking fraud from the system-wide root of the updater. So, in short: messages are enqueued, a bundle of those messages is attested to by an updater, that signature is relayed to the receiving chain, which kicks off an optimistic period during which guardians of the system can block fraudulent roots and slash the updater if necessary. So with that down, there's a couple of different things that I would emphasize comparing the security models of optimistic roll-ups with optimistic interop.
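The lifecycle Anna walks through (dispatch, attest, relay, optimistic window, watcher halt) can be sketched roughly as below. The class and method names are hypothetical stand-ins, not Nomad's actual contract interfaces, and the Merkle tree is collapsed to a running hash for brevity.

```python
# Minimal sketch of an optimistic message channel (illustrative, not Nomad's code).
import hashlib
import time

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Home:
    """Sending-chain side: messages are hashed into a tree the updater attests to."""
    def __init__(self):
        self.leaves = []

    def dispatch(self, message: bytes) -> None:
        self.leaves.append(h(message))  # each message becomes a leaf hash

    def root(self) -> bytes:
        # Stand-in for a real incremental Merkle root: fold leaves into one hash.
        acc = b"\x00" * 32
        for leaf in self.leaves:
            acc = h(acc + leaf)
        return acc

class Replica:
    """Receiving-chain side: accepts signed roots, enforces the optimistic window."""
    def __init__(self, optimistic_window_s: int = 30 * 60):
        self.window = optimistic_window_s
        self.pending = {}      # root -> time the signed root was relayed
        self.blocked = False   # a permissioned watcher can halt the channel

    def update(self, signed_root: bytes) -> None:
        # In the real system the updater's signature is verified here.
        self.pending[signed_root] = time.time()

    def watcher_block(self) -> None:
        self.blocked = True    # fraud was attested: stop processing messages

    def process(self, signed_root: bytes) -> bool:
        if self.blocked or signed_root not in self.pending:
            return False
        # Messages only execute once the fraud-proof window has elapsed.
        return time.time() - self.pending[signed_root] >= self.window
```

The key property is visible in `process`: nothing executes on the receiving chain until the window has passed, which is exactly the delay that gives watchers time to call `watcher_block`.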

Speaker 1

00:17:15 - 00:17:55

So a couple of things. In both systems, there's what I would call a strict hierarchy of truth. In both of these systems, optimistic roll-ups and optimistic interoperability protocols, you're attempting to post state transitions from 1 domain to another domain. When I say domain, I mean basically a source of block space, or sort of a security boundary: a chain or a rollup or a side chain, et cetera. So in both cases, you're trying to post state transitions from 1 to another.

Speaker 1

00:17:55 - 00:18:28

In rollups, it's state transitions from the L2 to the L1, and in Nomad, it's state transitions from the sending chain to the receiving chain. In both cases, like I said, there's what I would call a strict hierarchy of truth. So 1 domain is the arbiter of what is considered valid state. In an optimistic rollup, the layer 1 is the arbiter of truth. The layer 1 is where the decision about valid state gets made.

Speaker 1

00:18:28 - 00:19:07

In Nomad, the sending chain is the arbiter of truth. So if there's state anywhere else that doesn't match the sending chain, that's considered fraudulent. This actually introduces something really interesting, which is that fraud proofs and fraud games are a lot simpler to implement in Nomad than they are in roll-ups. The reason being that the place where state changes happen, the sending chain, is also the arbiter of truth. Whereas in roll-ups, the place where state changes happen, the L2, is not the arbiter of truth.

Speaker 1

00:19:07 - 00:19:38

That's the L1. So if there's a dispute about fraud, you have to, in a roll-up, you have to play a complicated fraud game to try to narrow in on some sort of proof that what has happened on the L2 is not valid. Whereas in Nomad, it's a lot simpler. Because the state change has happened in the same place where truth is decided, all you have to do is provide the invalid state change and it's simple. The updater can be slashed instantly.
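In code, the single-transaction fraud proof Anna describes is essentially one membership check against the home chain's own canonical state; the function and variable names here are hypothetical.

```python
# Hypothetical one-shot fraud proof on the home chain: a signed root is
# fraudulent iff it was never a real root of the home chain's Merkle tree.
def improper_update(home_roots: set, signed_root: bytes) -> bool:
    # One transaction, one check: no interactive bisection game needed,
    # because the home chain itself holds the canonical state.
    return signed_root not in home_roots

home_roots = {b"root-1", b"root-2"}
assert improper_update(home_roots, b"forged-root")  # updater gets slashed
assert not improper_update(home_roots, b"root-2")   # valid attestation
```

Contrast this with a rollup fraud game, where the L1 never held the L2's state and so must be walked through an interactive dispute to pin down the invalid step.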

Speaker 1

00:19:39 - 00:20:20

So that's 1 big difference. Another difference I would highlight between rollups and interop in the optimistic design space is that in rollups there's a strict hierarchy of power between these domains, and in optimistic interop that's not true. So in rollups, if fraud is proven, then the layer 1 has the power to roll back state on the layer 2. It has power over the layer 2. The layer 2 is essentially subservient to the source of truth, the layer 1.

Speaker 1

00:20:21 - 00:20:54

But in interop by definition, by necessity, there is no hierarchy of power between the 2 domains. The sending chain and the receiving chain are basically on equal footing. The sending chain can't roll back state on the receiving chain if fraud is proven, right? So that's where watchers come in and that's why you basically need watchers in optimistic interoperability. Watchers have the ability to block the channel.

Speaker 1

00:20:54 - 00:21:34

They don't have the ability to change state on it. So if fraud happens in optimistic interoperability, the only way to roll back and erase that fraudulent state is through governance. The watchers can only block the channel so that those state changes don't cause harm until governance is able to go in and roll back that state. Those are 2 big things I would highlight between optimistic roll-ups and optimistic interoperability. I have all of this in a HackMD that I'm meaning to actually write up, because I know it's a bit dense.

Speaker 1

00:21:35 - 00:21:39

And a lot of this is quite new to folks. So I think writing might help.

Speaker 3

00:21:39 - 00:22:20

So if I may summarize those 2 points, and this will be a very quick summary. The first point is that the source of truth in the optimistic bridge design is basically the home chain. So this is where the slashing occurs and where the fraud proof is resolved. And this first point basically encompasses the fact that you don't necessarily have a parent chain, baby chain kind of structure. And the second point was that you cannot automatically, through smart contracts, roll back the state.

Speaker 3

00:22:20 - 00:22:38

You can only like halt the system and then sort of social consensus kicks into play. Whereas in the rollup case, you can just like roll back the state and keep going without sort of interference from like a social consensus. Would that be a good summary?

Speaker 1

00:22:39 - 00:23:04

Exactly. So if I had to try to summarize it really quick myself, I would just say in interoperability, state changes and truth live in the same place. So proving fraud is easy. However, in interoperability, there is no hierarchy of power between domains. So rolling back fraud is more difficult.

Speaker 3

00:23:04 - 00:23:57

Got it. And if we were to dive into the deeper implications of this design: let's suppose that there was a fraud, and the fraud is sort of public on the home chain, and a slashing occurs on the home chain, so a bond gets slashed. And then the recourse on the destination chain is basically that nobody acts on that message anymore. And so that begs the question: couldn't this open a griefing vector, where these watchers just sort of attempt to halt the system as a griefing vector? And what would be a way to mitigate such a possibility?

Speaker 1

00:23:58 - 00:25:01

There could be a griefing vector, and that is inherent to this design. There's a couple of mitigations that can be done. First things first, it's important to emphasize that the watcher role has an untrusted element and a trusted element. On the sending chain, where the source of truth is, anyone can be a watcher and slash fraud; that is totally permissionless. On the receiving chain, it is a permissioned role to actually block fraud. The reason is that, by nature, we cannot verify on the receiving chain for sure what happened on the sending chain, unless obviously we have something like a full header relay. So because you cannot prove definitively that fraud happened on the home chain, there has to be a permissioned role that's capable of blocking fraud.

Speaker 1

00:25:03 - 00:26:04

So something I'd first emphasize is that watchers are permissioned, and they're permissioned at the xApp level, at the cross-chain application. So watchers should be reputable actors who are incentivized for the application to succeed. In Nomad's case, this is actually not that difficult to find. Anybody who wants to move tokens across chains, right, there's a lot of stakeholders in that act who are highly incentivized to have this system function the way it needs to function. If they're a user of the tokens on the other side, like Connext, for example, if they're a user of the underlying message channel, they have a strong incentive for this message channel to be live as long as it's safe to do so, and shut down as soon as it's not.

Speaker 1

00:26:05 - 00:26:59

So the first step is basically permissioned watchers that are incentivized not to do that. The second thing is just that we explicitly made the design choice to limit this vector not for the whole channel, but on a per-application basis. So the trust that you're placing in watchers, you can more tightly configure that, right? It doesn't have to be that every app building on top of Nomad trusts each other's set of watchers, because the incentives might get a little bit out of whack there. Whereas on a per-application basis, there's a tighter loop to ensure that the guardians of that app actually care about it being live.

Speaker 1

00:27:01 - 00:27:09

Aside from that, right, those are more social considerations. Well, I can pause there, but I have other thoughts.

Speaker 3

00:27:09 - 00:27:15

Please go, please go. This is a great, great discussion. I'm really enjoying it.

Speaker 1

00:27:15 - 00:27:43

Sure. So another answer for how you could mitigate this would be on the actual mechanism design and crypto-economic front. So you could imagine a construction wherein watchers are, similar to the updater, also bonded on the home chain. If fraud is proven on the home chain, which is permissionless, anyone can do that. There could be any number of watchers doing that.

Speaker 1

00:27:44 - 00:28:23

You could require that the watcher submits its own signature to attest that fraud has occurred within a given number of blocks from fraud being proven on the home chain. In which case, it would again be permissionless to relay that watcher's signature to the receiving chain, which would shut down the channel. So in that case, you could basically bond a watcher, slash them if anyone can provide a signature showing that they incorrectly attested to fraud, and slash them if anyone can prove that they did not attest to fraud when it indeed happened.
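As a sketch, the two slashing conditions Anna proposes could look something like this. Everything here (names, deadline, bookkeeping) is hypothetical: this bonding scheme is a design idea from the conversation, not a shipped Nomad mechanism.

```python
# Hypothetical bonded-watcher registry with the two slashing conditions
# described above: slash for falsely attesting to fraud, and slash for
# failing to attest within a deadline once fraud is proven on the home chain.

class WatcherRegistry:
    def __init__(self, attestation_deadline_blocks: int = 50):
        self.bonds = {}      # watcher -> bonded amount
        self.attested = {}   # (watcher, fraud_id) -> block at which they attested
        self.deadline = attestation_deadline_blocks

    def bond(self, watcher: str, amount: int) -> None:
        self.bonds[watcher] = self.bonds.get(watcher, 0) + amount

    def attest_fraud(self, watcher: str, fraud_id: int, block: int) -> None:
        # Watcher signs that fraud `fraud_id` occurred; relaying this is permissionless.
        self.attested[(watcher, fraud_id)] = block

    def slash_false_attestation(self, watcher: str, fraud_id: int,
                                fraud_proven: bool) -> None:
        # Condition 1: attested to fraud that was never proven on the home chain.
        if (watcher, fraud_id) in self.attested and not fraud_proven:
            self.bonds[watcher] = 0

    def slash_missed_attestation(self, watcher: str, fraud_id: int,
                                 fraud_proven_at_block: int,
                                 current_block: int) -> None:
        # Condition 2: fraud was proven, but the watcher stayed silent (or was
        # late) past the attestation deadline.
        attested_at = self.attested.get((watcher, fraud_id))
        missed = attested_at is None or attested_at > fraud_proven_at_block + self.deadline
        if current_block > fraud_proven_at_block + self.deadline and missed:
            self.bonds[watcher] = 0
```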

Speaker 4

00:28:23 - 00:29:34

I think something that's also important to highlight here is that the incentives around watching as a role in optimistic systems are still being researched more broadly, even in other kinds of optimistic systems like optimistic roll-ups, and they were a pretty big topic of research in state channels as well. There is an optimization function that you can create around, basically, the incentives for watchers to actually correctly watch the chain and to get around what's called the verifier's dilemma, which is: what incentive is there for people to actually perform these services if fraud doesn't occur very frequently, versus the disincentive for people to maliciously prove fraud? And that optimization function is still something that is being talked about, even in the rollup space. And that's part of the reason why there's no 1 really watching rollups at the moment, and why fraud proofs don't exist for rollups at the moment.

Speaker 4

00:29:34 - 00:30:00

And so I think the idea, the thesis, and I actually really agree with this thesis for Nomad is that it's better to work on figuring out the material and practical solutions that can be implemented today that give us the best possible level of security while working towards a model that can actually end up becoming permissionless through the use of these kinds of incentives and mechanism design.

Speaker 1

00:30:00 - 00:30:30

Yep, and we put out a blog post on this. I think it's titled The Nomad Design Philosophy, and it goes over this. We talk about safety as opposed to security. Not that security is not important, but the distinction is talking about theoretical versus practical concerns. We think Nomad holds up very strongly in the theoretical realm, but we measure our success also based on the true safety of our users in the wild.

Speaker 1

00:30:31 - 00:30:37

So that is something that I would highlight to add to Arjun's point.

Speaker 3

00:30:38 - 00:31:14

No, this is great color. I think you're kind of going after a very robust design here, and these are definitely not easy questions to answer, and even harder to implement. So yeah, I'm really very excited for how things will unfold here. I will ask 1 more difficult security question, just because I'm curious how you think about it. And then we can get into more fun questions after that.

Speaker 3

00:31:15 - 00:31:57

So rollups currently, they sort of implement a one-week challenge period. That's sort of an arbitrarily chosen number, but it's on purpose chosen as 1 week, and the motivation for that is for like the sort of, if things go wrong, or like a smart contract bug, or whatever, that humans can react to it. And yours is 30 minutes, so that begs the question, how were you comfortable with this choice? What were some motivations that sort of made you comfortable in picking this much shorter timeframe?

Speaker 1

00:31:58 - 00:32:27

Wait, I think that the security questions are the fun questions. We don't have to move on. Yeah, so actually I would take it really back to that conversation about the difference between roll-ups and interop in the optimistic design space. So remember how we talked about how fraud proofs are much simpler in optimistic interoperability compared to optimistic roll-ups? The answer comes down to that, exactly.

Speaker 1

00:32:28 - 00:33:04

In optimistic roll-ups, fraud games are extremely complicated. In fact, most optimistic roll-ups launched without functional fraud proofs in production. Whereas Nomad was able to launch with fraud proofs functioning in production, obviously on day 1. And really the core difference there is how simple fraud games are in Nomad compared to optimistic roll-ups. In roll-ups, there's a complicated challenge period which takes a lot of back and forth, many transactions being submitted in that kind of time.

Speaker 1

00:33:05 - 00:33:46

And I've chatted to some engineers at Optimism. I want to strengthen Nomad's collaboration with Optimistic Rollups in the ecosystem because I think there are design challenges that we're both facing around things like updater trustlessness, for example. But anyway, I chatted to folks at Optimism while we were parameterizing our fraud-proof window. And these fraud-proof windows, it's not a perfect scientific thing, right? But the thinking behind theirs was, first of all, it has to be obvious that miners are censoring the fraud games.

Speaker 1

00:33:47 - 00:34:08

And because there are so many transactions involved in the fraud games, they need a really long period to make it obvious that miners are censoring them. Whereas in Nomad, there's a single transaction in fraud proofs. It's dead simple. So that's basically the thinking behind the 30 minute period. It's really all about censorship of proving fraud.

Speaker 1

00:34:09 - 00:34:22

And that's actually a lot harder when it's just 1 single transaction that you can overprice 10x. It's a lot easier to demonstrate that censorship is happening.

Speaker 3

00:34:23 - 00:35:24

Can I counter you on this and put a bit more pressure on you with this question? So in the roll-up space, and this is my personal opinion, I just don't think the censorship of miners or validators in the base layer is a big concern. I don't view it as a big concern. And the reason I don't view it as a big concern is the particular reason that you alluded to, which is like it would be obvious, right? So if validators sort of censor it, it would be obvious and so that would sort of raise the question that like you know if the base layer like Ethereum moves into roll-up centric world then sort of like you would be harming yourself as a validator or miner like your own network whereas in the optimistic bridge case every chain is sort of on their own, right?

Speaker 3

00:35:24 - 00:36:51

So like if things go sort of rogue here, because you don't have this parent chain, baby chain kind of thing, every chain is on its own. So my point is, if on the destination chain, miners sort of collude to not accept a particular transaction there, they just have to do it for 30 minutes, and they won't get slashed by doing this, because censorship is not slashable. And so that could just prevent a fraud proof from being mined, and that wouldn't necessarily drop the token price of that chain, in my opinion, because it sort of targets a specific application rather than being an issue for the whole rollup space that's built on top of the chain. So don't you see this as a, and I'll just finish my thoughts with this. In the Rainbow Bridge, I mean, the very opposite happened, right? So the miners sort of front-ran the fraud proof, which wasn't sort of discussed before. But yeah, how do you think about these kinds of threats and, yeah, this difference?

Speaker 1

00:36:52 - 00:37:12

Well, if I could summarize your point, the feeling is that miners of Ethereum mainnet are incentivized not to mess with rollups because if rollups are not viable, that affects the price of Ethereum mainnet. Is that the claim?

Speaker 3

00:37:12 - 00:37:26

Yes, that's a good summary, more or less. I think the connection between token price and these kinds of actions is more tightly coupled in the rollup ecosystem.

Speaker 1

00:37:26 - 00:38:08

Well, I would actually kind of disagree, honestly. I think of rollups as an application built on top of Ethereum. And I think the main disincentive for obviously censoring transactions on mainnet is that it threatens the integrity of the chain, right? It threatens the qualities of the chain as censorship-resistant, decentralized, et cetera. So any chain that obviously censors transactions should, in my opinion, face economic consequences on its token, regardless of which application or what purpose the censored transactions serve.

Speaker 1

00:38:10 - 00:38:12

Yeah, that's kind of my thought.

Speaker 4

00:38:12 - 00:38:43

1 other thing to point out here is that we're talking about censoring transactions on a given chain as if it's actually an easy task. It's not. The cases where you can actually censor a transaction are pretty easy to define. There's a really good article by Ed Felten about this that breaks down the different kinds of censorship attacks you can do on chains, specifically for fraud proofs. It's written from the context of roll-ups, but it applies to bridges as well.

Speaker 4

00:38:44 - 00:40:11

And 1 of the key insights from that is that unless you control a majority of the validator stake on a chain — which, if you do, is inherently its own much more serious problem for the integrity of the chain itself — a censorship attack is a probabilistic attack. So you have to figure out the probability of success: the chance of censoring 1 block, iterated over every single block that happens within that 30-minute window, and then figure out whether the net EV of attempting the attack and successfully committing fraud, accounting for the risk of it failing, is greater than or equal to the downside that you suffer from slashing. Generally speaking, I have yet to find a case where it probabilistically makes sense. Because even if you say you have only a 1% likelihood of failing to censor this transaction in a single block — which is an astoundingly low failure rate, and I don't think you could achieve it unless you controlled the vast, vast majority of the validating or mining power in the network —

Speaker 4

00:40:12 - 00:40:16

Even in that kind of case, you basically have to take

Speaker 1

00:40:16 - 00:40:17

99%

Speaker 4

00:40:26 - 00:40:35

to the power of however many blocks that you're iterating over. And that becomes like an unreasonable probability of success pretty quickly.
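
To make that arithmetic concrete, here's a small sketch of how a per-block censorship success rate compounds over a fraud window. The per-block probabilities and 12-second block time are illustrative assumptions, not measured values:

```python
# Illustrative sketch: how a per-block censorship success rate compounds
# over an optimistic fraud window. All numbers here are hypothetical.

def censorship_success_probability(p_per_block: float,
                                   window_seconds: int,
                                   block_time_seconds: int) -> float:
    """Probability of censoring a fraud proof in EVERY block of the window."""
    n_blocks = window_seconds // block_time_seconds
    return p_per_block ** n_blocks

# A 30-minute window on a chain with ~12-second blocks spans 150 blocks.
# Even a 99% per-block success rate compounds down to roughly 22%,
# and a 95% rate compounds to well under 0.1%.
p_99 = censorship_success_probability(0.99, 30 * 60, 12)
p_95 = censorship_success_probability(0.95, 30 * 60, 12)
```

An attacker would then weigh that compounded success probability against the slashing downside, as described above.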

Speaker 1

00:40:36 - 00:41:05

That's exactly right. And that exact article that Arjun referenced, as well as other research on on-chain censorship attacks, is exactly what informed the parameterization of our fraud window. It just so happens that because our fraud proofs are so much simpler, the analysis becomes a lot simpler too: it's just the cost and the odds of censoring 1 transaction.

Speaker 4

00:41:07 - 00:41:54

There is a caveat here, which is 1 thing that we haven't talked about too much: chains that inherently have different liveness considerations than probabilistic-finality chains. For example, Tendermint chains explicitly allow for halting. A Tendermint chain can go down, and Tendermint chains have gone down for long periods of time in the past. That's definitely a concern, because if the chain itself becomes completely unresponsive, you now have a secondary set of problems. Fortunately, there is a way around that problem, which is to move towards using block numbers for everything, because then even if a chain halts, you're still able to use block numbers as your metric for deciding whether or not to progress.

Speaker 1

00:41:54 - 00:42:30

That is exactly right. And actually, in our next protocol sprint, we will be moving from block timestamps to block numbers. Frankly, the reason for starting with timestamps was just that we are going to be deployed on a lot of different chains, and block time is not a consistent measure across chains. Parameterizing that specifically for every single chain was initially a bottleneck to being able to deploy on all the chains we needed to.
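
As a rough sketch of what that per-chain parameterization looks like — with hypothetical average block times, since real averages vary — converting a target fraud window into block counts might look like:

```python
# Sketch: expressing a fraud window in block numbers instead of timestamps.
# The per-chain average block times here are rough, hypothetical figures.

AVG_BLOCK_TIME_SECONDS = {
    "ethereum": 12,
    "polygon": 2,
}

def fraud_window_in_blocks(chain: str, window_seconds: int = 30 * 60) -> int:
    """How many blocks to wait on `chain` to approximate the target window."""
    return window_seconds // AVG_BLOCK_TIME_SECONDS[chain]

# A halted chain stops producing blocks, so a block-number window simply
# pauses with it instead of silently expiring while the chain is down.
```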

Speaker 1

00:42:30 - 00:43:20

And akin to our thoughts on safety, what would have to happen is that a malicious updater would have to be trying to attack the system. They would have to have some kind of knowledge about when liveness was about to fail for a chain. They would have to somehow slide in a malicious update just before the chain failed, and then be able to race watchers to execute the fraudulent, malicious transaction before watchers were able to block it the second the chain came back alive. So it's a valid concern, and we will engineer around it.

Speaker 1

00:43:20 - 00:43:26

But it doesn't keep me up at night, personally. Hopefully I don't eat those words.

Speaker 2

00:43:27 - 00:44:05

Nice. Those are great answers and insights. So it sounds like optimistic bridges offer this desirable degree of security with a slight trade-off in latency, and as you pointed out earlier, the latency isn't really a bug but rather a feature. Now I want to use that as a segue into the relationship that Connext and Nomad have and what's going on there. So Arjun, could you talk a bit more about what Connext is doing and how it's plugging into Nomad?

Speaker 4

00:44:06 - 00:44:30

Yeah, absolutely. So the simple way to think about it is that Connext is the liquidity layer on top of Nomad's messaging layer, similar to the relationship that Stargate has with LayerZero. Or I guess every project is heading towards this now — I think Wormhole has Portal, and there are a couple of other things like that for some of the other systems that I'm not too familiar with.

Speaker 4

00:44:30 - 00:45:24

But in the Connext and Nomad case, there's also an additional piece of functionality, which is that Connext allows you to, in some cases, bypass this Nomad latency. Of course, latency is a key part of Nomad's security model. It's also a key part of what users and application and protocol developers will likely want to include in their design choices, to have security in the long term and to protect against contagion risk. For example, when the whole Terra collapse happened, there was this huge concern: now that the amount of staked capital in Terra is shrinking, what happens if you can attack the Terra chain and use that to exit this unbacked UST into other chains?

Speaker 4

00:45:25 - 00:46:01

What is the contagion risk of 1 chain's failure spreading to other ecosystems? Fortunately, that risk is addressed by adding latency and by adding these external watchers. But at the same time, users don't want to have to wait 30 minutes — that's the bottom line. So how do you get both of those things at the same time? How do you make sure that you have this systemic security under the hood, while also making sure users can use applications the way they normally do, without waiting 30 minutes to do anything basic between chains?

Speaker 4

00:46:02 - 00:46:36

And the answer to this is Connext. Connext is its own liquidity network that sits on top of Nomad. We have this bilateral, exclusive relationship with Nomad, and we consider ourselves part of the same overarching stack — that's why there's an extremely close relationship between both teams. This liquidity layer is made up of a network of Connext nodes, called routers, that are LPs who watch transactions happening on Nomad across different chains.

Speaker 4

00:46:37 - 00:47:03

And in the cases where they're able to — and I can get into which cases those are — they front capital for Nomad transfers. What they do is basically similar to the model that Hop pioneered a year, year and a half ago. They watch for a transaction that is going through the slow path of Nomad, they front liquidity to the user, and then they claim against the transaction that comes out of Nomad in the future.

Speaker 4

00:47:05 - 00:47:31

Now, under what conditions can they actually do this? This gets into an interesting question: what does generalized messaging mean? We've talked about transfers of funds as a simple use case that you can do in many different ways, including atomic swaps. And then there's generalized messaging as this sort of holy grail where you allow contracts to talk to each other directly. But what does that domain actually look like?

Speaker 4

00:47:31 - 00:47:59

What we found is that it's not as clear-cut as you might think. There are cases where you can allow some external party to call a contract on your behalf, and do so with funds, even without necessarily needing the generalized messaging piece. A good example of this: say, for instance, Avi, I pay you. We do this using escrows — HTLCs — so it's completely trustless. But it is possible for me to pay you.

Speaker 4

00:48:00 - 00:48:36

And instead of me just paying you USDC on Arbitrum for you to pay me USDC on Optimism, I instead pay you USDC on Arbitrum, plus some fees, for you to call Uniswap on Optimism with USDC, swap into ETH, and send me the ETH. And you can do all of that contractually, on-chain. The nice thing about this is that I have basically executed some function on a receiving chain and received the output of that function, which is ETH on the receiving chain. I haven't actually needed to pay the gas for it.
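
The escrow primitive mentioned here — an HTLC — can be sketched in toy form. This is plain Python with no chain involved; the class and names are illustrative, not Connext's actual contracts:

```python
# Toy hash-timelock (HTLC) sketch: funds are claimable only by revealing the
# preimage of a hashlock before an expiry time. Illustrative only.

import hashlib
import time

class HTLC:
    def __init__(self, amount: int, hashlock: bytes, expiry: float):
        self.amount = amount
        self.hashlock = hashlock
        self.expiry = expiry
        self.claimed = False

    def claim(self, preimage: bytes, now: float) -> int:
        """Release funds if the correct preimage is revealed before expiry."""
        assert not self.claimed, "already claimed"
        assert now < self.expiry, "expired; sender can refund instead"
        assert hashlib.sha256(preimage).digest() == self.hashlock, "bad preimage"
        self.claimed = True
        return self.amount

secret = b"s3cret"
lock = HTLC(100, hashlib.sha256(secret).digest(), expiry=time.time() + 3600)
```

2 such locks, 1 on each chain sharing the same hashlock, are what make the cross-chain payment trustless: revealing the secret to claim on 1 chain reveals it for the other.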

Speaker 4

00:48:36 - 00:49:20

I've delegated that to an external party, which in this case is effectively a relayer. And I've been able to do that just using escrows, without any more complex mechanism for generalized messaging. So it's clear that it is possible to do some types of messaging without needing the generalized piece, without needing Nomad's 30-minute latency, without needing an additional security model — instead doing it just by fronting. And when you dig deeper into this, what you find is that it's possible to do this in every case where the receiving-chain contract is not checking the origin — like tx.origin, effectively — where it is an unpermissioned call. So a Uniswap swap is an unpermissioned call, because any user can do it.

Speaker 4

00:49:21 - 00:49:53

Entering a Uniswap LP position is also an unpermissioned call, because any user can do that. Updating Uniswap's fee tiers on a different chain is not an unpermissioned call — only the Uniswap DAO can do that. That is something that does need to go through Nomad and would need to wait the full 30 minutes of latency. The conclusion is: in any system, any type of cross-chain call where you have an unpermissioned call — which is typically always going to be user-facing interactions —

Speaker 4

00:49:54 - 00:50:13

those can be short-circuited by Connext. Basically, Connext routers can execute those calls immediately and simply claim against Nomad at some point 30 minutes in the future. So any user-facing interaction that uses the combination of Connext and Nomad can still happen in the amount of time it takes for a transaction to become final on the sending chain.
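
A minimal sketch of that frontability rule — whether a router can short-circuit a call — might look like this. The data model is hypothetical, not Connext's actual interface:

```python
# Sketch: a router fronts only unpermissioned calls; permissioned calls must
# wait out the full optimistic window. Hypothetical model, not Connext's API.

from dataclasses import dataclass

@dataclass
class CrossChainCall:
    target: str         # destination-chain contract
    function: str
    permissioned: bool  # does the function check the caller's identity?

def can_fast_path(call: CrossChainCall) -> bool:
    """Routers can front calls any address could make, claiming via the slow path later."""
    return not call.permissioned

swap = CrossChainCall("Uniswap", "swap", permissioned=False)        # frontable
set_fees = CrossChainCall("Uniswap", "setFeeTier", permissioned=True)  # must wait
```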

Speaker 3

00:50:13 - 00:51:01

That's definitely something to think about: permissionless calls — anything anyone can do — can be fronted, and more sophisticated, faster use cases can be implemented on top of that. So 1 question I want to ask you guys — and Anna or Arjun, feel free to jump in — is this: Nomad just passes raw bytes across chains, and how those raw bytes are interpreted is completely up to applications. So how fancy do you think applications can get with this? What are some use cases that you're most excited about?

Speaker 3

00:51:01 - 00:51:03

Would love to hear your thoughts.

Speaker 1

00:51:03 - 00:51:35

Yeah, it's something I spend a lot of time thinking about. With an arbitrary byte array you can do arbitrary code execution, right? You can do anything. The thing is that there are some special design considerations for cross-chain applications that are a little bit different from smart contracts or applications built on a single chain. A couple of things: first, there's latency in between call execution.

Speaker 1

00:51:35 - 00:51:57

Something happens on the sending chain, there's a pause, and then something happens on the receiving chain. That's the case for every interop protocol, not just Nomad, as we covered. There's latency in between execution. The other thing is that there's no atomicity. On the sending chain, the thing happens.

Speaker 1

00:51:58 - 00:52:45

If the second half of the transaction fails on the receiving chain, there is no recourse on the sending chain. So there are some design considerations for cross-chain applications that any developer has to really take into account. Because of the atomicity issue, I think right now cross-chain applications make sense if, first, the second state transition on the receiving chain can never fail — or if it doesn't matter whether it fails, or it can safely be retried. So that's 1 class of concerns.
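
1 common way to satisfy the "safe to retry" condition is to make the receiving-chain handler idempotent. A toy sketch, with illustrative names:

```python
# Sketch: an idempotent cross-chain message handler. Since there is no
# atomicity across chains, retries must be harmless. Toy model only.

from typing import Callable

processed: set[bytes] = set()  # message ids whose effects were already applied

def handle_message(message_id: bytes,
                   apply_state_change: Callable[[], None]) -> bool:
    """Apply a message's state change at most once; repeats are no-ops."""
    if message_id in processed:
        return False  # already handled — a retry changes nothing
    apply_state_change()
    processed.add(message_id)
    return True
```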

Speaker 1

00:52:46 - 00:53:33

I think a lot of smart contract developers are not used to thinking that way. A lot of the applications we see on layer 1s might not map perfectly to the cross-chain world, and we're still figuring out which applications will work best there. There are definitely some applications that we know work for sure. And the first class of things that we know works for sure has to do with what I would call blockchain's first killer app: tokens. For example, mint-and-burn bridges like Nomad's in-house bridge, liquidity networks like Connext, even multi-chain ERC20 tokens.

Speaker 1

00:53:34 - 00:54:19

That last 1 is a little more niche — there are fewer examples of it in production right now. But it's a useful use case for someone like a centralized asset issuer. You can imagine a large stablecoin issuer, particularly 1 that wants to be regulatorily compliant: they can prove that there's exactly 1 token on-chain — on any chain — for every $1 in their treasury. And the way they would accomplish that multi-chain is to code a token that, when you send it across chains, instead of locking it in a bridge to be unlocked later, burns the token and mints it on the other chain.
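
The supply invariant behind such a burn-and-mint token can be sketched in a few lines of toy accounting. It assumes a messaging layer reliably delivers the mint instruction; the numbers are illustrative:

```python
# Sketch: burn-and-mint cross-chain transfer. Global supply stays constant,
# so the issuer can always show 1 token (somewhere) per $1 of reserves.

supplies = {"ethereum": 1_000, "optimism": 0}  # circulating supply per chain

def send_cross_chain(src: str, dst: str, amount: int) -> None:
    """Burn on the source chain; mint on the destination via the messaging layer."""
    assert supplies[src] >= amount, "cannot burn more than circulating supply"
    supplies[src] -= amount  # burn
    supplies[dst] += amount  # mint

send_cross_chain("ethereum", "optimism", 250)
```

Contrast this with a lock-and-unlock bridge, where the locked collateral sits on 1 chain and a wrapped representation circulates on the other.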

Speaker 1

00:54:20 - 00:55:12

So in my mind it's kind of a special case — almost a subset — of the mint-and-burn token bridge use case. That's everything related to tokens that we really know will work and have demand for. There's another use case for cross-chain messaging that I'm personally really excited about and have been spending a lot of time thinking about, and that's cross-chain governance — namely, cross-chain governance proposal execution or, more generally, cross-chain calls. The problem statement here is that a lot of DAOs have smart contracts which govern their protocols and systems.

Speaker 1

00:55:13 - 00:55:58

In most cases, these smart contracts are some variant of the Governor Bravo, Governor Alpha, and Comp token pattern that we know and love from some of these protocol DAOs. But the problem is that that governance system is deployed on a single chain, whereas a lot of protocols are deployed on multiple chains. This is almost a relic of how the multi-chain ecosystem developed, in parallel with the idea of token-based governance. And so a lot of projects have this problem. Some projects are just coming to realize it, whereas others have definitely taken attempts at fixing it.

Speaker 1

00:55:58 - 00:56:37

But what we don't have right now is a standard that can actually solve this problem for protocols. What we've seen in the governance space thus far is that when someone solves a problem like this for protocols, protocols are very happy to use the best-in-class solution. That's what we saw with Compound's Governor Bravo and Comp token: it became very common to just fork those contracts and use them. The reason, obviously, is that governance is not most protocols' core competency or core focus. They want to build their protocol, and they're happy to use best-in-class solutions for governance.

Speaker 1

00:56:38 - 00:56:52

So what I've been working on for a while — and talking about with several protocol and DAO communities in the space — is this idea of cross-chain governance. And I'm really excited about this use case right now.
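
As a sketch of the shape such a standard might take — with hypothetical names, not Nomad's actual API — a proposal's calls could be executed locally on the home chain or relayed per destination chain:

```python
# Sketch: cross-chain governance proposal execution. Calls targeting the home
# chain run directly; calls targeting remote chains are relayed through a
# messaging layer. `dispatch` is a hypothetical primitive, not Nomad's API.

from typing import Callable

def execute_proposal(calls: list[dict],
                     local_domain: int,
                     execute_local: Callable[[str, bytes], None],
                     dispatch: Callable[[int, str, bytes], None]) -> None:
    """Route each governance call to its destination domain."""
    for call in calls:
        if call["domain"] == local_domain:
            execute_local(call["target"], call["payload"])
        else:
            dispatch(call["domain"], call["target"], call["payload"])
```

The remote execution would itself wait out the optimistic window on each destination, which is usually acceptable for governance, since proposals already have multi-day voting periods.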

Speaker 4

00:56:52 - 00:57:48

There are a couple of others that I think are pretty key to highlight here too, that we've been interacting with more at the liquidity layer. First, just taking a quick step back to touch on a point that Anna made: the big shift we are experiencing right now is that we have to help developers move from building decentralized applications in a synchronous context — where everything exists on a single chain and executes within a single block — to an asynchronous context, which is more web-like. When we build applications on the web today, we interact with remote resources that you make asynchronous calls to, and you don't really know when you're going to get that information back, or whether you'll get it back at all. These sorts of assumptions are not built into application development on blockchains today, but they need to be, because synchronous development just doesn't scale.

Speaker 4

00:57:49 - 00:58:45

If that's the case, as Anna mentioned, a lot of the applications that exist today won't directly translate. But I think there are ways to emulate those experiences that lean on existing knowledge of how people build distributed systems on the web today — systems that may or may not achieve consensus, may or may not have internal consistency in their state. There is just a ton we can learn from things like locking and achieving consistency on replicated stores that translates really, really well into this ecosystem. A couple of key use cases associated with that, which we've been really excited about and which people are already building on top of Connext: there's a lot of interesting stuff happening now with cross-chain NFTs.

Speaker 4

00:58:45 - 00:59:38

So the first cross-chain NFT protocol — basically a low-level standard and bridge, and then a standard for creating cross-chain NFTs more generally — went live on the Connext testnet, I think, yesterday. And we already have several other teams either incorporating those standards or building their own core NFT bridges to unlock the NFT ecosystems that are currently siloed on their own specific chains. For example, Axie NFTs exist on Ronin, OpenSea is on Ethereum, there's a Polygon ecosystem of marketplaces and NFTs, there's the TreasureDAO ecosystem — but none of these ecosystems actually talk to each other. And that extends to metaverse use cases as well. Decentraland, running on Polygon, does not really talk to other metaverse ecosystems or other kinds of games on other chains.

Speaker 4

00:59:39 - 01:00:24

And that's a bit of a shame, because there's a lot of really important innovation that can happen once these games can interoperate with each other, or these ecosystems can work with 1 another more directly. And for marketplaces, of course, it's a huge boost to be able to interact with high-value NFTs coming from other chains. So that's something we're super excited about. The other 1 that I personally think is going to be quite big — I think it's sort of inevitable — is cross-chain DeFi. What we've inadvertently created right now is basically replicated instances of DeFi on each chain, where every major chain now has Aave, Curve, et cetera, running on it.

Speaker 4

01:00:24 - 01:00:57

And that creates this really fragmented experience where users have to consciously think about what chain they're on as part of interacting with Aave. They're not able to just aggregate the best interest rate from any chain; they have to specifically switch to a chain to access interest there. And that's just horrible for users. It's god-awful for people to have to say: okay, I want the best Aave rate I can get on USDC — now I need to switch through all 10 chains that Aave supports, figure out what the rates of return are on each, and then figure out how to bridge there.

Speaker 4

01:00:58 - 01:01:27

And even more awful is this idea that you have to be locked into 1 chain to begin with. Why can't you just lend on 1 chain and borrow on another? Why can't you use Yearn in a way where you zap into a Yearn vault, and that vault aggregates yield from a bunch of different sources on different chains, rather than all on a single chain? These things are entirely possible with this technology. You can do them without trust.

Speaker 4

01:01:27 - 01:01:33

You can do them with the same robustness that already exists for blue-chip DeFi projects.

Speaker 1

01:01:33 - 01:02:32

I would just chime in to add that there is a category of apps, like Arjun mentioned, that I didn't initially mention, because I think we're still working out the design patterns for them. For a lot of the apps we mentioned before — moving tokens across chains, moving NFTs across chains, cross-chain calls and proposal execution — the demand is there today, the design is there today, and they can be built and shipped today. Then there is a class of apps that I do think there will be demand and desire for, where we're still figuring out some of the design considerations. In that class are things like staking, lending protocols, yield farming like Arjun said. Also voting — the portion of governance that has to do with passing proposals rather than executing them.

Speaker 1

01:02:33 - 01:02:57

And something interesting I noticed when I made a list of these use cases was that all of them have to do with locking up tokens. Voting, staking, lending, yield farming — all of these have to do with that. I'm still thinking a little more about why, but I think it has something to do with the fact that it's hard to sync state across chains. I'm still figuring this out.

Speaker 1

01:02:57 - 01:02:59

I think we're all still figuring this out.

Speaker 4

01:02:59 - 01:03:26

Obviously, there are the use cases that are immediately apparent — the things that are actually generating a lot of value within the crypto ecosystem right now. And there are really 4 of them: NFTs, governance, DeFi, and token issuance. Those are still the 4 biggest things people do in the space. But I've always been keen to forecast out and see what other types of use cases might exist.

Speaker 4

01:03:26 - 01:04:16

And of course, there's a lot out there, but something I'm particularly excited about is when we start applying interoperability outside the scope of just smart contracting blockchains, and start applying it to other kinds of decentralized networks as well. What happens if, for example, within a smart contract that you're writing on top of Optimism, you are now able to utilize Sia? What happens if, within that same contract, you are now able to utilize Golem? And this is possible, right? The Filecoin blockchain allows you to do arbitrary scripting, so it is entirely possible to eventually build some sort of mechanism to bridge to that chain as well.

Speaker 4

01:04:16 - 01:05:04

And when you do that, it means you can access storage resources directly from inside contracts running on another chain. The mental model here: imagine you could eventually make it so that Ethereum or other smart contracting blockchains are your CPU, interop protocols like Nomad and Connext are your motherboard, and everything else plugs into that. You can now have legitimate long-term storage — like an HDD — through things like Filecoin; caching and RAM through what's eventually, theoretically realized through something like The Graph; and compute resources and other kinds of things that plug in as their own independent resource networks. You could build full-stack decentralized applications inside a smart contract running on top of Ethereum.

Speaker 2

01:05:04 - 01:05:19

Okay, that was pretty mind-expanding. I hadn't thought about that. That's really cool. Just to continue on this thought, what would it take to actually execute this?

Speaker 4

01:05:21 - 01:05:58

You basically just need to port Nomad to other chains — or to non-chain resource networks like Filecoin. I haven't kept up with the state of Filecoin actors and whether it's possible to write fully generalized actors yet. I believe that was the direction they were headed, so I'm not sure if it's entirely possible yet, but it will be at some point. And whenever that happens, it basically just comes down to giving somebody a grant to write a port.

Speaker 2

01:05:58 - 01:06:11

And this isn't exclusive to Nomad's design, right? This applies broadly to bridging and cross-chain communication networks, correct?

Speaker 1

01:06:11 - 01:06:29

Well, it definitely wouldn't work for certain internally verified systems like header relays, but it would work for the externally verified class of interop protocols — like proof-of-authority constructions — as well as for Nomad.

Speaker 2

01:06:30 - 01:06:47

Gotcha. Yeah, okay, that makes sense. And here's a closing thought or topic we could wrap this conversation up with, bringing it back to the beginning of the episode and the importance of bridges.

Speaker 2

01:06:48 - 01:07:23

We live in this multi-chain world; we need bridges. There is a single problem that more than a handful of projects are now attempting to solve. I'm curious who you believe out there in the space is — maybe not comparable, but — also doing work that may be appropriate for certain types of cross-chain communication. Because coming back to the design buckets we outlined at the beginning of the episode, they have these different advantages and trade-offs.

Speaker 2

01:07:25 - 01:07:57

Some may be more secure but have latency — thinking of something like Nomad. Others may have lower latency or higher performance, as some developers would see it, but maybe they're not as secure; some security assumptions are introduced. So, in a broad way, I'm asking: do you think this is a space that is 0-sum, where there is 1 solution to fit all?

Speaker 2

01:07:58 - 01:08:02

And if not, what other solutions are interesting?

Speaker 1

01:08:03 - 01:08:49

Oh, well, there's a class of interop protocols that are natively built into the chains they operate on, which are really great: IBC for Cosmos chains, XCM for Polkadot. Because these protocols sit very well in the trade-off space and are built directly into the chain — so you don't have to do things like header relays — they work extremely well within the bounds of the ecosystems they were designed for. And part of how we talk about Nomad is as kind of an IBC for EVM. We want to be the interop protocol for EVM chains.

Speaker 1

01:08:50 - 01:09:15

But part of that is because the EVM world didn't build interoperability into its chains the way other ecosystems did. So we at Nomad definitely think that applications that want to expand outside of the EVM world — which is totally reasonable to do — should absolutely use native message-passing and interop protocols if they're built into the chain and available.

Speaker 4

01:09:16 - 01:09:51

My response to this is going to be a bit controversial, and I recognize that. So I'll first say that I think everything is actually positive-sum. When people believe things are 0-sum — well, I've yet to find a case of a 0-sum game being played out in an open market, in the wild, that remains 0-sum indefinitely in the face of increasing innovation. So no, I don't think the interop market is 0-sum.

Speaker 4

01:09:51 - 01:10:41

I think everything we do in this space is positive-sum, and this is why Connext has a great relationship with, for example, the Hop team — I really love them. They're probably our most direct competitor, as 1 of the only other trustless liquidity networks out there for rollups. But at the same time, we have a lot of respect for them, because we think they're a legitimately awesome team trying to do something really great for the space, and pushing forward a new way of doing interop that other people hadn't really considered before. That said, I do think there will be other options out there in the future that are more specialized for certain use cases than others.

Speaker 4

01:10:41 - 01:11:12

I do think a lot of them will end up competing with Connext and Nomad, and I think that's a good thing, because I think competition is good. But I don't think it is objectively a good thing that many of the other types of protocols that exist out there right now explicitly trade off trust. I think that is a relic of the bull market. I think it exists because at the moment we have de-prioritized trust minimization as a space, and we have de-prioritized economic security. However, I think that's a mistake.

Speaker 4

01:11:12 - 01:12:11

And I think, as we saw from, you know, the Ronin bridge hack, which is a great example of this, but also the Terra ecosystem attack, or I guess implosion: Ronin's root of trust was attacked, and similarly, Terra's economic model was attacked, right? And what that means is that ultimately what we need to do is build systems that are resilient, not just against, you know, smart contract attacks and root-of-trust attacks, but also against economic attacks. Like, what happens when you have large-scale financial institutions anonymously trying to manipulate markets to change the economics of a system that relies on staking in order to maintain its security? What happens when LUNA's price is manipulated down to near 0?
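The economic-attack scenario Arjun describes can be made concrete with a toy model. This is purely illustrative, not Connext or Nomad code, and every number and function name below is hypothetical: a staking-secured system is only safe while acquiring a controlling share of stake costs more than the value the system secures, so pushing the token price down directly erodes security.

```python
# Toy model of economic security for a staking-secured bridge.
# All names and numbers are illustrative, not from any real protocol.

def cost_to_attack(total_stake_tokens: float, token_price: float,
                   attack_threshold: float = 2 / 3) -> float:
    """USD cost to acquire enough stake to control the system
    (assuming a 2/3 control threshold, as in many BFT designs)."""
    return total_stake_tokens * attack_threshold * token_price

def is_economically_secure(total_stake_tokens: float, token_price: float,
                           value_secured_usd: float) -> bool:
    """Secure only while attacking costs more than the attacker could steal."""
    return cost_to_attack(total_stake_tokens, token_price) > value_secured_usd

# A hypothetical bridge securing $100M with 10M tokens staked:
print(is_economically_secure(10_000_000, 30.0, 100_000_000))  # attack costs ~$200M
print(is_economically_secure(10_000_000, 1.0, 100_000_000))   # attack costs ~$6.7M
```

At $30 per token the attack is unprofitable; after the price is manipulated toward zero, the same system secures far more value than an attack costs, which is exactly the death-spiral risk raised above.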

Speaker 4

01:12:11 - 01:12:30

Does that cause a death spiral? These are the kinds of questions that people should be asking. And my thesis is that we may be able to get away with trusted models. Trusted options may actually survive. They might do extremely well for the next few years.

Speaker 4

01:12:30 - 01:13:15

But the whole point of this space, and the whole reason that we're trying to build trust-minimized systems, systems that have this kind of economic security, is because the threat model is not, can I win this market in the next 3 to 4 years? It's, can my protocol survive the next 100 years in the face of, 1 day, rogue governments or some billionaire deciding, hey, this thing is keeping me from extracting as much value as I could, so now I need to shut it down. And so my ideal outcome is that we, as a community, collectively work towards validating models like Nomad's and other models that actually prioritize security, things like IBC.

Speaker 4

01:13:16 - 01:13:40

And as a result of that, shift the Overton window away from, hey, let's just create another multi-sig at almost no cost and try to scale it. If we do that, I think I'm perfectly OK with competition. That actually is good competition, competition that we can respect and that we can learn from and that actually helps push the space in the

Speaker 1

01:13:43 - 01:13:47

right direction. The listeners can't see, but I'm vigorously nodding my head.

Speaker 2

01:13:51 - 01:14:19

Yeah, I appreciate those answers. And I don't think that's too contrarian. I think that's relatively on point. I will say that, at least for myself, I take a bit more of a holistic view: these are all trade-offs, and for certain users and applications, these trade-offs may be ones that they're willing to make, specifically around security.

Speaker 2

01:14:21 - 01:14:59

For example, latency: a 30-minute wait time for asset transfers is simply not desirable from a user-experience standpoint. Though it objectively provides greater security guarantees, it may not be what users want. I think we've seen that play out in the market. Now, whether that's objectively good or bad for the ecosystem at large, I think that's actually just a moral debate. I'm in the camp that thinks there will be many solutions.

Speaker 2

01:14:59 - 01:15:22

There will be a few category winners, and they'll probably fall in line based on the design buckets that you've outlined. And so I would say that Nomad is far ahead in that fourth category, but that there are others that will probably do quite well.

Speaker 1

01:15:22 - 01:15:36

I think users are often willing to sacrifice security until it's far too late. And I think a lot of users don't make that mistake twice.

Speaker 2

01:15:37 - 01:16:07

Yeah. Well, I just want to make 1 point on the Ronin hack, because I think that's a prime example that comes up. The bridge design never failed. It was an operational miscalculation, a huge oversight, a huge security flaw in where they stored the access controls of the bridge. So I think that's important to note: where you place these security assumptions, where exactly do they lie?

Speaker 3

01:16:07 - 01:16:24

Yeah, but that's part of the security model, though: there exists some set of keys that could collude. It doesn't matter if it was intentional or if they simply weren't able to secure it. So I would put that under the security model for sure.

Speaker 2

01:16:24 - 01:16:40

If Ronin were to be redesigned with much stronger security guarantees over where those keys were stored, then we wouldn't have faced the same outcome. The bridge functioned as it should.

Speaker 4

01:16:41 - 01:17:19

I definitely disagree with that. So I think you could build a better version of Ronin. For example, Ronin is now increasing the threshold of keys that you need in order to actually make updates happen. And they could store those keys more securely. But ultimately, again, all of this stuff from a security perspective comes down to: what is your threat model? If your threat model is, like, Lazarus basically just dicking around and happening to stumble across the fact that all 4 of these keys were on a single machine, that's obviously a very low bar.

Speaker 4

01:17:19 - 01:17:48

It's not a particularly demanding threat model. But the threat model for the space is: what happens when a large nation-state decides, OK, I'm going to just subpoena anybody that interacts with this set of servers? What happens when the US government says, OK, I'm going to go figure out which AWS boxes this stuff is running on and shut them down, or just subpoena Amazon to give us all of the data off of those boxes? These are legitimate attacks.

Speaker 4

01:17:48 - 01:18:13

And given that, if you want to build long-term sustainable public goods, you have to hedge against the fact that the political climate may completely change, and that the new political climate we end up in may be legitimately authoritarian. If it's possible for a government to come and shut down your system, it's not a decentralized system. At that point, you might as well just be a federally regulated institution. It'll be less work.

Speaker 4

01:18:14 - 01:18:41

And I think in the case of something like Ronin, the fact is the Ronin bridge hack would not have happened if Ronin was using Nomad. It just wouldn't have happened. It happened because it was possible for a certain threshold of signers to steal funds arbitrarily. And that is simply not possible in IBC, and it's not possible in Nomad.

Speaker 1

01:18:42 - 01:19:08

That's right. And there's a really simple example to explain why this is a problem of mechanism design. As you said, the Ronin bridge did work at the mechanism level how it was intended to. The problem is that it relied heavily on trust assumptions, and trust was compromised, meaning those keys were compromised by an attacker. This can be solved by mechanism design.

Speaker 1

01:19:08 - 01:19:36

So let's take a look at what happens in Nomad if an attacker gains access to every key in the system: the updater and every single watcher for every single application. They can sign a malicious root as the updater, and that root will be relayed to the receiving chain. And then what happens? Well, the attacker has access to the watcher keys, and they can choose not to submit fraud proofs.

Speaker 1

01:19:36 - 01:19:58

But all of the honest actors still also have access to their watcher keys, and they can block the fraud. So even if you gain access to every single key in the system, fraud is still blocked by even 1 honest watcher who still has access to their key. So this is an issue of mechanism design.
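Anna's walkthrough can be sketched as a toy state machine. This is an illustrative model only, not Nomad's actual implementation, and all class and method names below are invented. It shows the key property: a malicious root submitted by a compromised updater never commits as long as even one watcher, honest and still holding their key, flags it during the dispute window.

```python
# Toy model of an optimistic updater/watcher channel.
# Not Nomad's real code; names and structure are illustrative only.
from typing import Optional

class OptimisticChannel:
    def __init__(self) -> None:
        self.committed_root: str = "genesis"
        self.pending_root: Optional[str] = None
        self.flagged: bool = False

    def submit_update(self, root: str) -> None:
        """Updater (possibly compromised) signs and submits a new root."""
        self.pending_root = root
        self.flagged = False

    def watch(self, sees_fraud: bool) -> None:
        """Any single watcher can flag fraud during the dispute window."""
        if sees_fraud:
            self.flagged = True

    def finalize(self) -> str:
        """After the window closes: accept the root only if nobody flagged it."""
        if self.pending_root is not None and not self.flagged:
            self.committed_root = self.pending_root
        self.pending_root = None
        return self.committed_root

channel = OptimisticChannel()
channel.submit_update("malicious_root")
channel.watch(sees_fraud=False)  # attacker-controlled watchers stay silent...
channel.watch(sees_fraud=True)   # ...but 1 honest watcher still flags the fraud
assert channel.finalize() == "genesis"  # the malicious root never commits
```

Note the asymmetry: the attacker needs every watcher to stay silent, while the defense needs only one to act, which is why copying keys (rather than destroying them) doesn't defeat the mechanism.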

Speaker 2

01:19:59 - 01:20:04

Yeah, that's a good point. And I think you put it much more elegantly than I did.

Speaker 3

01:20:04 - 01:21:01

I have another tough question for you guys that I think squarely fits into security design, and we haven't talked about this: smart contract upgrades. In the case of a bug or a potential concern, those smart contracts get upgraded, and there are sort of 2 ways to approach this: you can have an instant upgrade, or you can have a delayed upgrade. And in the delayed-upgrade scenario, I think the question becomes: would Nomad's smart contract delay be less than the timeout period or more than the timeout period? And then a third option is that it could be non-upgradable.

Speaker 3

01:21:01 - 01:21:11

So I think all of this squarely applies to rollups as well, but I wonder how you guys specifically aim to implement this.

Speaker 1

01:21:13 - 01:22:11

Yeah, so for our upgradeability setup, we've used something called the upgrade beacon pattern, which was spearheaded originally by 0age, a protocol engineer I had the privilege of learning from while I was working at Dharma, who is now the head of protocol at OpenSea and just released Seaport, which you might have seen. He's a brilliant engineer. This pattern basically separates concerns between the implementation, the proxy, and the upgrade logic. And that's a property I love about this setup, because most upgradable proxies co-mingle the upgrade logic with either the proxy or the implementation. And because of that, if there's an issue with an upgrade, a lot of the time that proxy can be bricked, basically impossible to recover.

Speaker 1

01:22:13 - 01:23:07

But in an upgrade beacon setup, it's basically always possible to roll back to a previous implementation, or to roll forward to a new implementation, even if somebody malicious upgraded the contract to have no code at it, or even to self-destruct. Anyway, the upgrade beacon pattern is what we use for our upgrades, and that's just something I wanted to shout out for any protocol engineers listening. But in terms of how upgrades are executed, they are executed by our governance, our cross-chain governance system. We were talking about cross-chain governance a little earlier in the call; this is not a new use case for us.
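The separation of concerns Anna describes can be modeled in miniature. This is a hypothetical Python sketch of the upgrade beacon idea, not the actual Solidity pattern she references: the proxy stores no implementation pointer of its own, the beacon holds only that pointer, and the upgrade logic lives in a separate controller, so even a broken upgrade can always be rolled back at the beacon.

```python
# Toy model of the upgrade beacon pattern; all names are illustrative.
# Three separated concerns:
#   Beacon            - stores only the current implementation pointer
#   Proxy             - looks up the implementation via the beacon on every call
#   UpgradeController - the only component that mutates the beacon

class ImplementationV1:
    def greet(self) -> str:
        return "v1"

class BrokenImplementation:
    def greet(self) -> str:
        raise RuntimeError("bug introduced by a bad upgrade")

class Beacon:
    def __init__(self, implementation) -> None:
        self.implementation = implementation

class Proxy:
    def __init__(self, beacon: Beacon) -> None:
        self.beacon = beacon  # no implementation stored on the proxy itself

    def __getattr__(self, name):
        # Delegate every call to whatever the beacon currently points at.
        return getattr(self.beacon.implementation, name)

class UpgradeController:
    """Upgrade logic kept apart from both the proxy and the implementation."""
    def __init__(self, beacon: Beacon) -> None:
        self.beacon = beacon
        self.history = [beacon.implementation]

    def upgrade(self, new_implementation) -> None:
        self.history.append(new_implementation)
        self.beacon.implementation = new_implementation

    def roll_back(self) -> None:
        self.history.pop()
        self.beacon.implementation = self.history[-1]

beacon = Beacon(ImplementationV1())
proxy = Proxy(beacon)
controller = UpgradeController(beacon)

controller.upgrade(BrokenImplementation())  # a broken upgrade lands...
controller.roll_back()                      # ...but is always recoverable
assert proxy.greet() == "v1"
```

Because the upgrade path never routes through the (possibly broken) implementation, a bricked implementation cannot brick the proxy, which is the property Anna highlights.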

Speaker 1

01:23:07 - 01:23:39

We designed cross-chain governance over a year ago, and we've been using it in production since day 0 of our system. So our upgrades do go over the channel, the Nomad channel, and no, you cannot bypass the 30 minute period for those upgrades. So there is an inherent time lock for any upgrades that happen outside of the governor chain, which is Ethereum. And that's a model that we're comfortable with.

Speaker 2

01:23:39 - 01:24:07

Nice. Well, I think now might be a good time to wrap up the conversation and give some parting thoughts, and more specifically, where people can find you, that being on Twitter, and also, for developers who are listening and curious about learning a bit more on how to implement Nomad and Connext in their solutions, where can they learn more?

Speaker 4

01:24:09 - 01:24:47

So I think there's a handful of resources. Obviously, Nomad has a bunch of fantastic resources around their system, and I'm sure Anna can expand on this, but I definitely recommend checking out docs.nomad.xyz, which has a fantastic overview of the Nomad protocol, as well as some of the lower-level pieces needed to interact or integrate with Nomad directly. Generally, our mental model for integrations and for developers building against the system has been that it's like a protocol stack. And so there are instances where you will want to specifically build against Nomad, but for the vast majority of use cases, it's better to just call down through the stack.

Speaker 4

01:24:47 - 01:25:16

And the way that Connext, for example, handles calls that are for generalized messaging is that we just pass them down to Nomad. And so in order to do something like that, you'd integrate with the Connext API and SDK itself, which is now fully functional on testnet. And you can find that at docs.connext.network. Aside from that, definitely join both of our Discords. You can find our link at, I believe it's discord.gg/connext.

Speaker 4

01:25:18 - 01:25:48

And then, of course, follow us on Twitter as well. The Connext Twitter is @ConnextNetwork, and Nomad's Twitter is @nomadxyz_. Underscore, yes. In general, at a high level, Connext and Nomad are trying to push forward this idea of asynchronous Solidity, which is a new way of thinking about building applications in Solidity and on top of Ethereum-based blockchains. And it's still extremely early for that.

Speaker 4

01:25:48 - 01:26:15

There's still a lot of really first-principles thinking that needs to be done around designing these kinds of applications, and around building the developer tooling needed for people to actually utilize this framework and this communication system effectively. So if anyone who's listening to this finds that interesting, definitely reach out to either of our organizations. If you reach out to 1, we'll both hear about it anyway. And we would love for you to help and get involved.

Speaker 1

01:26:16 - 01:26:38

Yep, Arjun mentioned a couple of our links, but definitely come follow Nomad on Twitter: it's @nomadxyz_, with an underscore. Join our Discord at discord.gg/nomadxyz. Check out our homepage to learn a little more, nomad.xyz. Our docs are at docs.nomad.xyz.

Speaker 1

01:26:40 - 01:27:10

And you can bridge tokens with our token app at app.nomad.xyz. And yeah, definitely check out our docs to learn more about our protocol, to catch links to some of the resources we talked about today, and to learn how to build on top of Nomad. Our docs are undergoing a little refresh, so we're excited about that. And yeah, we'd love to see you all in the community, and we're excited to hear your voices.

Speaker 3

01:27:10 - 01:27:11

Thank you to both of

Speaker 2

01:27:11 - 01:27:18

you, Anna and Arjun. It was a pleasure, and we hope to talk to you again soon. Thanks, guys.

Speaker 1

01:27:18 - 01:27:19

Thanks so much, y'all.

Speaker 5

01:27:20 - 01:27:42

Hey, it's Tom. Our team at Delphi Media just launched a new show, NFT Collector, where 2 guests go head-to-head to each build a $50,000 NFT portfolio in short 10-minute episodes. They leak alpha all along the way, so click the link in the show notes and subscribe to NFT Collector on YouTube.