See all Parallel Polis transcripts on Youtube


Exploring the Future and Potential of Zero-Knowledge Technology

47 minutes 21 seconds

Speaker 1

00:00:00 - 00:00:09

Hello. Ah, there we go. Thanks everyone for coming. Yeah, it's a panel about ZK and the future potential. So let me get my guests, if they'll come through now.

Speaker 2

00:00:14 - 00:00:14

Hello. Yeah.

Speaker 3

00:00:17 - 00:00:20

So, please come sit down and then

Speaker 1

00:00:20 - 00:00:41

we'll give you a chance to introduce yourselves. So to start with, I think it'd be useful if you could just give your names, the company you're working for, and your role there before we get started. That way I won't get your names wrong. Yeah.

Speaker 2

00:00:41 - 00:00:51

Hi, everyone. First, I'm very happy to be here today, with lots of familiar faces. I'm Head of Product at StarkWare.

Speaker 4

00:00:53 - 00:01:05

Hello everyone. My name is Toghrul. I do research at Scroll. You can probably find me shitposting on Twitter a lot, but it's fun.

Speaker 5

00:01:06 - 00:01:11

My name is Alex. I'm co-founder of Matter Labs, the company behind zkSync.

Speaker 6

00:01:11 - 00:01:22

Hello, I'm Danny Holland. If you were here a few minutes ago, you'll know I'm with Vega Protocol. I'm a smart contract developer, and I gave a talk about five minutes ago about ZK rollups, so yeah, hi.

Speaker 1

00:01:23 - 00:01:29

And thank you for coming in at the last moment to join the panel. Thanks for doing that. I hope your voice lasts out. Yeah, it'll be fine. Okay, cool.

Speaker 1

00:01:29 - 00:01:44

All right. So our topic today is around the future of zero-knowledge proofs. Maybe it would be useful to start with a quick idea of your company's roadmap for the next few months or year, that kind of thing, where you're heading?

Speaker 2

00:01:44 - 00:02:21

Yeah, I think in this context I'll say a couple of words. So we are one of the companies in the ecosystem developing Starknet. Starknet right now is on mainnet. It's still in alpha mode, so it has all kinds of training wheels. And the next few versions coming this year will be mainly focused on significantly improving the TPS, then reducing transaction costs by enabling different data availability modes, so not just layer 1 data availability, but also L2 consensus, and then later on other layers.

Speaker 2

00:02:22 - 00:02:54

And then, following that, towards the end of the year we'll also work on adding a fee market to improve user experience and enable all kinds of levels of service. Maybe we can mention it later in the panel, but I also would like to say that there are a bunch of efforts on top of Starknet, and around Starknet, to create the right frameworks for privacy and the right framework for gaming, which also touches privacy in another way. So I hope we get there later.

Speaker 1

00:02:54 - 00:02:55

Yeah, definitely.

Speaker 4

00:02:55 - 00:03:19

I feel like for us, the next few months are gonna be mostly about launching mainnet. We're almost there. There are still a few things that need to be added and things that need to be audited, but it should be coming in the next few months. And once we're on mainnet, the next plan going forward is to start working on removing the training wheels.

Speaker 4

00:03:19 - 00:04:02

Because when we launch, a lot of the internal operations are going to be centralized, like the sequencer, the prover, et cetera. And there's a lot of research that we've been doing in the background for the last year, even more, on how to decentralize those components. Because while the user can use and interact with the protocol the same as with zkSync, Starknet, and everybody else, there are still risks involved that eventually won't be there. And the goal is to make sure that we minimize those risks as fast as possible and to make sure that the system is trust-minimized and secure to use without a lot of additional assumptions.

Speaker 5

00:04:06 - 00:04:45

At ZK-Sync, we are also working on a number of things in parallel. So we don't have a sequential roadmap, because we have many teams working on different aspects of what the guys were talking about just now. The scalability aspect, the decentralization of the sequencer, decentralization of the prover, the scalability includes the cost factor and the throughput factor. There is the ultimate vision of what we all want to accomplish with rollups. We want to build, we want to turn Ethereum into something that can scale limitlessly, that doesn't have any bounds on scalability.

Speaker 5

00:04:46 - 00:05:18

And this can only be solved with some hybrid data availability model, like using Volition, using zkPorter, in some decentralized way. Like, we don't want it to be run by centralized operators. So there is this final North Star that we're aiming for. And it also involves probably having multiple chains, not just one chain, not one rollup that can handle infinite load, because that's just not feasible. You cannot run all of the internet on a single server or in a single data center.

Speaker 5

00:05:18 - 00:05:38

We'll be in an internet of these chains, and they must be connected in some seamless way. So these are the priorities, and we prefer to talk about them as we ship. And we are about to ship something really interesting in the coming weeks. So I encourage you to follow zkSync on Twitter, and you will see some cool stuff.

Speaker 7

00:05:38 - 00:05:39

Sounds good.

Speaker 8

00:05:39 - 00:06:25

So I actually got into ZK rollups specifically to solve the gas problem for our multi-sig bridge. The biggest challenge at the moment, sorry, the roadmap at the moment, is that there isn't really one, because we can't find auditors for Circom. And even if we could, all the tech's too new, all the crypto's too new, so how do you guarantee it's going to be safe, especially when working with something like Ethereum? I mean, all of you are working on your own full ecosystems, but our usage of zero-knowledge proofs and whatnot is in interaction with Ethereum. So it's weird, because you cross security boundaries and whatnot.

Speaker 8

00:06:25 - 00:06:26

So yeah.

Speaker 1

00:06:26 - 00:06:36

Cool. I'll come back to security a little bit later on. It's a very pertinent topic, I think. Okay, so some very exciting things ahead of us. It's good to hear.

Speaker 1

00:06:36 - 00:06:47

So maybe to step back slightly: is there one breakthrough in the technology over the last year that has really impressed you, really excited you?

Speaker 3

00:06:48 - 00:06:51

You wanna start from the other side? Please, anyone.

Speaker 5

00:06:56 - 00:07:35

Oh, I can start. Maybe not the last year necessarily, but over the continuum of the last two years, I think zero-knowledge proofs in general have achieved a breakthrough: basically, that was the period of going from zero to one. Maybe it was not zero, it was 0.0-something, but we definitely are at one now. So zero-knowledge proofs are productized. We are at a point where the transactions are very cheap, the proof generation is very cheap, very affordable, you can scale it, and you can also do proofs for individual users, for privacy from the user perspective.

Speaker 5

00:07:35 - 00:08:00

You can generate proofs in the browser or on a mobile phone. This is the moment where we go from one to many, where we actually take that and just roll it out into production. There will be more waves of innovation in the prover technology, in the protocols themselves, but we're past that point; we have everything we need now to actually build stuff. Everything is going to go ZK.

Speaker 5

00:08:01 - 00:08:02

That is absolutely clear.

Speaker 1

00:08:02 - 00:08:03

That's a good quote.

Speaker 4

00:08:06 - 00:08:48

If we're talking specifically about ZK in the past year or so, I would say folding schemes have been quite a breakthrough, and also another path to scale going forward. And in terms of crypto in general, there are too many things to mention: a lot of things on the incentive side, on the protocol side, on the consensus side; for example, HotStuff-2 was published a few months ago. There are a lot of things happening, and the good thing about this space is that there are still a lot of things that haven't been discovered yet and are yet to be discovered. So there are even more things to come going forward. That's why I'm excited about this field in general.

Speaker 2

00:08:50 - 00:09:27

Yeah, on the proving front, I guess in the last year we saw so many teams making breakthroughs in more directions than I can mention. We saw a huge improvement in working with smaller fields. There are all kinds of thoughts on combining different fields; that's another topic that I think is now gaining some attention. On the use-cases front, I think the one thing that I would mention is the use of storage proofs, which we hadn't seen a year ago.

Speaker 2

00:09:27 - 00:09:37

And now it seems that it's going to have an effect on rollups, and ZK rollups in particular.

Speaker 1

00:09:37 - 00:09:52

Okay, so let's continue in that area. We're seeing some things like Axiom and the co-processor idea, and the fact that we can then interact with data on Ethereum. How do you see that moving ahead?

Speaker 4

00:10:01 - 00:10:49

I feel like the idea of storage proofs is great, because it allows us to create cross-chain bridges that don't actually pass messages: as long as you can read the message that was stored on the other chain, that's more than enough for you to have a trust-minimized way to interact between chains. And that can potentially have a massive effect on the UX and the way users bridge between chains, because especially going forward with Ethereum, we're going to have a lot of chains trying to interact with one another. A lot of effort is going to go into optimizing that cross-chain communication, and storage proofs are a good way to do it.
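
A minimal sketch of the idea, under simplifying assumptions: the destination chain never receives the message directly; it only verifies an inclusion proof that the message sits in the source chain's committed state, against a root it already trusts. A plain binary Merkle tree with SHA-256 stands in here for Ethereum's actual Merkle-Patricia storage trie and keccak, and all names are illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a simple binary Merkle tree (illustrative, not Ethereum's MPT)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes needed to recompute the root for leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])        # the other node in the pair
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Source chain wrote a message into its state; the destination chain only knows the root.
messages = [b"msg-0", b"bridge: send 5 ETH to 0xabc", b"msg-2", b"msg-3"]
root = merkle_root(messages)                  # e.g. posted to L1 along with a validity proof
proof = merkle_proof(messages, 1)
assert verify(root, messages[1], 1, proof)    # destination chain accepts the message
```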

Speaker 1

00:10:49 - 00:11:06

So Vitalik had a recent blog post where he talks about the problems of having multiple addresses across L2s and different chains. And you were talking about having many chains. How do we solve this problem of interoperability and have something that is useful and a good user experience?

Speaker 5

00:11:07 - 00:11:39

I think this problem is a little more fundamental. Vitalik mentions three problems in his last post: privacy, scalability, essentially the cost, and usability with smart contracts. I would add a fourth one, which is the segmentation or fragmentation of liquidity, fragmentation of the users, which is not solvable with standards alone, unfortunately. Like, you could introduce standards.

Speaker 5

00:11:39 - 00:12:06

We could all agree. We could just get all the rollups together and say, okay, let's have a unified URI for addresses on different rollups. There are already proposals for this, with a prefix before the address, like zksync:address or scroll:address. You can add more metadata. You can add more parameters.
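
A tiny sketch of what parsing such a chain-prefixed string could look like; the prefixes and query parameters here are purely illustrative, not an adopted standard.

```python
from urllib.parse import parse_qs

def parse_chain_address(uri: str) -> tuple[str, str, dict]:
    """Split a hypothetical '<chain>:<address>?<params>' string into its parts."""
    chain, _, rest = uri.partition(":")
    address, _, query = rest.partition("?")
    params = {k: v[0] for k, v in parse_qs(query).items()}
    return chain, address, params

print(parse_chain_address("scroll:0x1234abcd?amount=5&token=DAI"))
# ('scroll', '0x1234abcd', {'amount': '5', 'token': 'DAI'})
```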

Speaker 5

00:12:06 - 00:12:44

And eventually, it's just one string, which you copy-paste, or you encode in a QR code that you can display to someone. The problem is that even with storage proofs, even with cross-chain message passing, which theoretically, or even practically, can be done, at least between ZK chains, the optimistic chains are out of the equation. But in ZK chains, the latency will eventually fall to just a few seconds. So you will be able to finalize your block on Ethereum, and then the other rollup can read the state of this block and can access the messages that are committed in the block.

Speaker 5

00:12:44 - 00:13:48

The problem is, you still cannot bridge assets natively. You would still have to pass them all over Ethereum. If we're talking about NFTs, you actually have to pass the individual NFT all the way through, from one chain to another chain, because you cannot blindly trust the implementation of another rollup, for non-technological reasons: the other rollup can be upgraded by the team, or by its Security Council, or by its governance, in a malicious way, and then the risks of this upgrade must be contained to that rollup. It's a very fundamental security isolation assumption. And if the bridging goes all the way up through Ethereum, you can imagine it's like we have a lot of cities, but they are all connected by a single, very narrow bridge that everyone has to pass through. It doesn't matter how many highways you have inside the cities; if the road that connects them is a single pedestrian bridge, you can't carry much over it.

Speaker 5

00:13:48 - 00:14:12

So I'm afraid we will see fragmentation where there will be ecosystems emerging. Inside the ecosystem, there will be seamless bridging, and there will be many chains connected within it. Between those ecosystems, it's going to be hard. It's going to be more like countries. Some countries have really good infrastructure, have many cities, many railroads, many highways, et cetera.

Speaker 5

00:14:12 - 00:14:29

Some countries are less developed, but still good. But between the countries, there are checkpoints and customs, and you have to pay for moving things across the border. So this is kind of the scenario we're heading toward. I don't see how we can overcome that, like, from...

Speaker 8

00:14:30 - 00:14:43

I do. I think it'll kind of solve itself, because there will be fewer rollups. A lot of the projects will go away; they will die, they will fail. And for those that remain, the builders will come and they will find solutions to this.

Speaker 8

00:14:43 - 00:14:47

So that's honestly going to be a big one, I think, to solve interoperability.

Speaker 2

00:14:51 - 00:15:07

Yeah, I'm just optimistic, maybe not for all assets, but for messages, and maybe messages will include other things. This is a very complex issue. So I think that between the rollups, with the challenges that we have, if you have complex, interesting use cases, not just votes, you still have this way to bridge without going through Ethereum directly.

Speaker 2

00:15:19 - 00:15:26

So I agree that for some cases it's going to be more complex. But I think that between the rollups, we'll try to overcome at least some of the challenges directly.

Speaker 1

00:15:26 - 00:15:27

Okay, do you want to add to that?

Speaker 4

00:15:27 - 00:16:02

Bridging is probably the most complex problem that we have in scaling. So I don't want to say that it's an impossible feat, but it's definitely a difficult problem to solve. But I'm just hopeful that at some point someone's going to come up with a way to do it. Because if you look at what the bridges were like five years ago and what we have now, the difference in trust assumptions and security is drastic. So I don't see a reason why we should stop here, as if nothing else will be discovered and nothing else will help us make it more seamless in the future.

Speaker 1

00:16:04 - 00:16:28

All right, I want to move away a little bit from scalability now and on to other aspects such as privacy. But also, this conference, if you've been to it before, has quite an unusual manifesto compared to other conferences; they talk a lot about social good. As people in the ZK world, what can we do to further all of that?

Speaker 8

00:16:30 - 00:16:54

Build community, share the knowledge you guys have found. As I actually said in my talk a little while ago, be involved, ask questions on Stack Overflow and whatnot. That's just the way we're gonna get there. We gotta find that weird basement nerd who's like, "oh no, no, this is how we solve this," and he has to get his voice out, and it has to be there.

Speaker 1

00:16:56 - 00:16:57

Alex, do you have anything?

Speaker 5

00:16:57 - 00:17:11

I agree. This is why I think we have a pretty high bar in Ethereum on the way we should interact with... The way projects can build stuff. You have to be open source completely. Oh yeah.

Speaker 5

00:17:11 - 00:17:34

Fully permissive open source. Like, zkSync is licensed under MIT and Apache, and I would even scratch that if we didn't have to carry over the dependencies. You have to be completely transparent about things like the protocol specification, the protocol design. And then you have to give things over to the community. The ultimate owner of the protocol must be the community.

Speaker 5

00:17:34 - 00:18:12

So things like decentralizing the sequencer, decentralizing the prover, are non-negotiable. It's not a question of whether our strategy is to build it or not to build it; without it, it just won't take off. These are the fundamental core values of crypto that we represent, which go deep into the roots of the original cypherpunk movement, in which Parallel Polis, this place, has played a big role. It was a meeting point for a lot of cypherpunks in the past.

Speaker 1

00:18:12 - 00:18:17

I know Scroll, this is a very important thing for you, isn't it? The mindset of the community and being community led?

Speaker 4

00:18:18 - 00:18:56

I mean, we started essentially as a collaboration. Scroll was born as a collaboration between us and the Ethereum Foundation. So it's in our DNA and our roots to be open source and friendly to the community, to make sure that people are able to contribute, able to get in touch with us, and able to understand certain things that are, for example, not well documented, et cetera. So we always welcome people to come and speak to us. And I feel like that's important, and it's central to Ethereum's values.

Speaker 4

00:18:56 - 00:19:41

Ethereum was built on this value of collaboration. Even if you look at the number of founders, there were, like, 30 or 40 founders, and it still goes on to this day. You can't just say that one foundation, or somebody else, is responsible for everything; there are a lot of different teams working, interacting, and collaborating with each other. Even between us, we spend time talking to Starknet, to zkSync, to Optimism, trying to solve different problems together, trying to propose different standards and stuff like that. So even if from the outside it might look like we all just hate each other and we're trying to completely dominate and make sure that nobody else succeeds, it's not actually like that.

Speaker 4

00:19:41 - 00:19:52

We have quite a lot of conversations and like discuss how we can work together to help Ethereum grow and become a better protocol and a community as a whole.

Speaker 1

00:19:52 - 00:19:54

I know with StarkNet you have a very strong community.

Speaker 2

00:19:54 - 00:21:14

Yeah, I wanted to say something around that. With decentralization, sure, there is the part where you want to decentralize the sequencer and the prover, and there's the part where you want the core infrastructure to be open source. But I think one crucial thing we're trying to achieve with Starknet is that you can't have the project fully dependent, or even largely dependent, on one or two or a very small set of companies. So one thing that is happening, I think, in front of our eyes is that some companies are taking larger and larger shares of the work on some infrastructure parts. I can give you an example in Starknet: in the coming version, 0.12, there is going to be a huge improvement in TPS, and a large part of this work was done by a different organization, not StarkWare, but LambdaClass. And we see those examples also with ZKVMs being built on top of Starknet, Kakarot is an example, and with the infrastructure of the provers, where there are multiple teams building a prover that is compatible with Starknet, and this is all done outside. And I think it's a huge step for a project when some significant shares of the core work are done by different companies.

Speaker 8

00:21:16 - 00:21:26

To comment on that, I would say: remember when Parity and Geth were both coming out, having both parallel tracks was very useful for the community as a whole. So absolutely, I agree with that.

Speaker 1

00:21:27 - 00:21:44

Can I pick up on a point I think Alex made, but also maybe you, about standards in ZK? I know there's ZKProof.org, who are trying to establish standards, but do we have a good idea of what those standards are yet? Is it still too early? What would you like to see?

Speaker 2

00:21:44 - 00:21:47

10 protocols, 20 standards. It's gonna be...

Speaker 4

00:21:47 - 00:22:15

I'll give you an example. We were talking to StarkWare and Polygon, and we were trying to have Ethereum make an EIP that implements a Poseidon precompile. And we couldn't agree on a specific standard of Poseidon that would work for all of us, because there are so many different configurations, et cetera, that it would be difficult; one of us would have to change something.

Speaker 2

00:22:15 - 00:22:38

Yeah, just to build on this example, the challenge is not that we don't want to collaborate on how exactly Poseidon will be implemented, but at this point we work with different technologies that require some different changes. And it's not as simple as it might sound: there is a common name, but the implementation can be pretty different.
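
A tiny illustration of that "same name, different instantiation" problem, with placeholder parameter values rather than any project's real ones: a single precompile would have to pin one configuration down, and the others would simply not match it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PoseidonConfig:
    # Placeholder parameters; real deployments choose these to fit their field
    # and proof system, which is exactly why one precompile is hard to agree on.
    field_modulus: int
    state_width: int
    full_rounds: int
    partial_rounds: int

config_a = PoseidonConfig(field_modulus=101, state_width=3, full_rounds=8, partial_rounds=83)
config_b = PoseidonConfig(field_modulus=97, state_width=5, full_rounds=8, partial_rounds=60)

# Same hash family, incompatible instantiations: digests would differ even on equal inputs.
print(config_a == config_b)   # False
```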

Speaker 5

00:22:38 - 00:22:56

So I think we're in this phase of exploration where we just experiment, and it's good. Eventually some technologies will crystallize and stabilize over time, and then we will say, okay, this has been time-tested, we can now all agree on putting this into Ethereum and making it a precompile, for example.

Speaker 8

00:22:56 - 00:23:10

I mean, we don't even know which hash functions and signing schemes and whatnot will even be solid in 5 or 10 years, so everyone's implementing their own, and we don't know which one's gonna be the one that actually stands the test of time, like SHA has, and whatnot.

Speaker 5

00:23:10 - 00:23:13

It's likely going to be something we're not expecting.

Speaker 8

00:23:13 - 00:23:14

Yeah probably.

Speaker 2

00:23:14 - 00:23:37

I think one thing that is open in this discussion is whether we have support from Ethereum when it comes to working on those primitives, because sometimes we come and ask for something, and it requires a lot of flexibility for it to fit with all the other networks at the same time. So there is complexity involved, I admit that.

Speaker 1

00:23:38 - 00:23:53

Yeah. Okay. So I do courses for developers in this area. I present about, I think, all of your products, and I teach on those, but a lot of the developers ask me: which one should we use?

Speaker 1

00:23:54 - 00:24:07

What language should we learn? It's become a little easier because Rust is having such an influence in this space now, particularly with Cairo 1. What should I say to them? What's your advice?

Speaker 8

00:24:08 - 00:24:08

Circom.

Speaker 4

00:24:11 - 00:24:14

You're just trying to get people to learn it so someone can audit it for you.

Speaker 8

00:24:15 - 00:24:17

Exactly, Yeah, see? Someone was paying attention.

Speaker 5

00:24:20 - 00:24:53

I think the future is, unfortunately for Circom, that we will be very flexible in terms of language support. In the very short term, we'll be able to support basically everything, from obviously Solidity, which is already the case, to Rust, to anything else that people use. zkSync, for example, our compiler is LLVM-based, so we can integrate any language that doesn't use things like a garbage collector but has an LLVM front end, and we can just plug it in.

Speaker 5

00:24:55 - 00:25:30

And it would be possible to prove the execution trace of things like a RISC-V machine. So I don't think it makes sense to learn any specific esoteric language today, because even privacy can be implemented with private smart contracts. They will be implementable in Solidity and in Rust. Those are very rich ecosystems where a lot of tooling and things get improved, and as a developer you want to work on something that has a future.

Speaker 8

00:25:31 - 00:25:33

I don't know, man, the Solidity thing, that's a lot of gas.

Speaker 5

00:25:34 - 00:26:17

No, no, no. In this context, I'm talking about the proofs. You would be able to make a proof of something: you would just describe some private function in Solidity. Think of it like you describe a smart contract and you mark some of the functions as private, and they don't just become private in the sense that you cannot call them from outside and they're not public; they are actually private. Or some variables are private, right? So they are part of the state, but no one can see them except for the owner of this contract, who can then make a state transition by calling some function, which will not be called on the blockchain; they will generate the proof locally. They will only submit the proof.

Speaker 5

00:26:17 - 00:26:34

And there will be basically one function of the smart contract, like "accept the proof and make the proper transition". So developers will not need to bother about the correctness of the proofs or the details of the implementation of zero-knowledge proofs, which is great. That would be cool. That sounds cool.
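
A minimal sketch of that flow, with a made-up prove/verify pair standing in for a real proving backend: the owner updates a private variable locally, generates a proof about the transition, and the on-chain side only verifies the proof and stores the new state commitment. All names and the commitment scheme are illustrative.

```python
import hashlib, json

def commit(state: dict, salt: bytes) -> str:
    """Hiding commitment to the private state (illustrative only)."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode() + salt).hexdigest()

# --- hypothetical proving system (stand-in for a real SNARK/STARK backend) ---
def prove_transition(old_state: dict, new_state: dict, salt: bytes) -> dict:
    # A real prover would output a succinct proof that new_state follows from
    # old_state under the contract's rules, without revealing either state.
    return {"old": commit(old_state, salt), "new": commit(new_state, salt)}

def verify_transition(proof: dict, old_commitment: str) -> bool:
    return proof["old"] == old_commitment      # placeholder check, not sound on its own

# --- the only function the chain ever sees ---
class PrivateContract:
    def __init__(self, initial_commitment: str):
        self.commitment = initial_commitment   # public: just a hash of hidden state

    def accept_proof(self, proof: dict) -> None:
        assert verify_transition(proof, self.commitment), "invalid proof"
        self.commitment = proof["new"]         # state transition without revealing state

# Owner side: everything below runs locally, off-chain.
salt = b"owner-secret"
state = {"balance": 100}
contract = PrivateContract(commit(state, salt))

new_state = {"balance": 90}                    # "private function" executed locally
proof = prove_transition(state, new_state, salt)
contract.accept_proof(proof)                   # only the proof goes on-chain
```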

Speaker 2

00:26:34 - 00:26:58

So I want to say something about this that is maybe, in some ways, along similar lines, but with a slightly different conclusion. We have this framework being built on top of Starknet that is called Dojo, and this is a framework for gaming. That in particular also means that you need a client side for the users, and, well, not a server side, but a blockchain side.

Speaker 2

00:26:58 - 00:27:30

And today, the way it works is that you have many developers using Circom on one hand, and maybe they understand a bit of what is happening, and by the way, it's not just for gaming. And then on the other side you have the Solidity smart contract. So they already have to handle two different tools that don't necessarily look similar. And I think the reason many of them are still using Circom is because Circom offers this very convenient and efficient platform that you can use in your browser.

Speaker 2

00:27:30 - 00:28:35

And I think what we are trying to do, and I believe others here as well, is to create an intermediate level, in our case, in Starknet, it's Cairo, that is both very efficient and at the same time easy to use, easy to learn. And I think that Cairo 1 gets to the point where it's more Rust-based, it's pretty efficient, it's easy to use. And yes, I do think that there will be compilations from other languages to Cairo that won't necessarily go through Cairo 1. But I think that if you have a language you feel comfortable with, like Cairo 1, where you can write both the user side that runs in the browser and is efficient enough, and the smart contract side that can also be deployed on an efficient network, then you get one framework, one language that you can use to write your game, your DeFi application, your logic, or basically anything else. So to answer your question at the beginning, I would say: yeah, learn Cairo.

Speaker 1

00:28:36 - 00:28:53

So, to pick up on that area of games: for me this area seems very exciting at the moment, and I've seen a lot of enthusiasm around this in ZK. Do you have examples, or how do you feel about the area of developing games in ZK?

Speaker 8

00:28:54 - 00:29:19

So there's, what was that one, Dark Forest? Yeah. That was, you know, Circom, snarkjs, all that stuff. That was cool. But the thing that makes me happiest about that is we can offload a ton of compute, we can prove that compute happened, and then you just do little state updates on chain. Because obviously gas is the biggest issue. I mean, as a smart contract developer, that's like the number one annoyance, constantly.

Speaker 8

00:29:20 - 00:29:29

So being able to offload all this compute, no matter how big it is, the privacy doesn't even come into play. Just the compute alone is worth it.

Speaker 2

00:29:29 - 00:30:29

Yeah, I want to plus-one, or plus-100, what Dan said. I think Dark Forest was great, and it shows, but many people focused on the privacy aspect: oh, we can have a game where we don't know what is happening anywhere else. But one point that was a bit lost in what they did is that users actually play the game and then prove to the chain the number of steps they did. And, well, I don't know if that's the future of gaming, but I do hope that some of it is, and you can imagine that maybe in the future this is an actual technology for games, where users execute their code locally to avoid interactions with servers and to avoid interactions with many other users. So maybe it doesn't fit every game, but if there are big enough quanta of time in the game, you can actually build this kind of game with ZK technology today.

Speaker 2

00:30:29 - 00:30:43

So maybe in three years things will be efficient enough that you have an actual framework for games that is, for some cases, more efficient than using one server, and that would be amazing.

Speaker 8

00:30:43 - 00:31:04

So in my talk last year I gave an example of playing 10,000 rounds of rock, paper, scissors. The final proof that was submitted covered the entire game, all the interactions. So it was the entire server, not just "here are my moves, now I'm making a transaction; here are your moves, now you're making a transaction." It was the entire interaction.

Speaker 8

00:31:04 - 00:31:14

Yeah, that is going to be useful, because then you can just put the output of the game, the results, on chain, because the results are what matter for the leaderboard.
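
A minimal sketch of that pattern, with the proof itself stubbed out: the whole 10,000-round interaction happens off-chain, the transcript is hashed into a single commitment, and only the final score (plus, in the real version, a proof about that commitment) would be posted on-chain. Everything here is illustrative; no actual circuit is involved.

```python
import hashlib, random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_rounds(n: int, seed: int = 42):
    """Play n rounds of rock-paper-scissors entirely off-chain."""
    rng = random.Random(seed)
    transcript, score = [], {"a": 0, "b": 0}
    for _ in range(n):
        a, b = rng.choice(list(BEATS)), rng.choice(list(BEATS))
        if a != b:
            score["a" if BEATS[a] == b else "b"] += 1
        transcript.append((a, b))
    return transcript, score

def commit_transcript(transcript) -> str:
    """Single hash committing to every interaction in the game."""
    digest = hashlib.sha256()
    for a, b in transcript:
        digest.update(f"{a}:{b};".encode())
    return digest.hexdigest()

transcript, score = play_rounds(10_000)
commitment = commit_transcript(transcript)

# In the ZK version, a circuit would prove "score is the correct result of the
# game committed to by `commitment`"; only this tiny tuple goes on-chain.
onchain_update = {"commitment": commitment, "score": score}
print(onchain_update)
```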

Speaker 1

00:31:16 - 00:31:35

I should say as well, I think we've all got teams in the 0x Titans at the moment. I noticed we all have cars in that. I think we've got 15 minutes left. I would like to open up to the audience if you have questions. So I see a hand over there, could we get a microphone over there, please?

Speaker 1

00:31:38 - 00:31:39

Yes, please, thank you.

Speaker 9

00:31:43 - 00:31:50

Hello. Do you see any benefit in using recursive ZK proofs, and if so, which of the methods do you find most promising?

Speaker 1

00:31:51 - 00:31:53

Sorry, I couldn't hear what you were saying.

Speaker 9

00:31:54 - 00:32:02

So, do you see any benefit in using recursive ZK-SNARKs or STARKs, recursive ones? And if yes, which ones are the most promising?

Speaker 8

00:32:03 - 00:32:05

I'm missing the noun from that question.

Speaker 7

00:32:06 - 00:32:07

Recursive. Oh,

Speaker 6

00:32:07 - 00:32:11

yeah. So can you repeat the question?

Speaker 8

00:32:13 - 00:32:23

As far as I'm aware, you can't prove a circuit from another circuit. You can't embed them and make them recursive. So you can't, like... He was asking about recursion.

Speaker 1

00:32:23 - 00:32:23

I think the character,

Speaker 2

00:32:23 - 00:32:32

I don't know about Scroll, but I'm pretty sure they're all doing it. We use recursion, yeah. Yeah, it's a... it's an in-production thing.

Speaker 5

00:32:32 - 00:32:58

So recursion is what enables us to prove a basically infinite length of computation trace. You still have the sequential bottleneck of the naive execution, but we can parallelize zero-knowledge proof generation itself infinitely, because we just break it down into small chunks. And then we build the tree recursively, and we combine them one by one until we get one final proof, which is finalized.
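
A toy sketch of that recursion, with hashing standing in for actual proof generation: the trace is split into chunks, each chunk gets its own "proof", and pairs of proofs are combined level by level into one root proof. Proving the leaves (and the pairwise merges within a level) is what can be parallelized; the function names are illustrative.

```python
import hashlib

def prove_chunk(chunk: list[int]) -> str:
    # Stand-in for running a real prover over one slice of the execution trace.
    return hashlib.sha256(str(chunk).encode()).hexdigest()

def merge_proofs(left: str, right: str) -> str:
    # Stand-in for a recursive proof that verifies two child proofs inside a circuit.
    return hashlib.sha256((left + right).encode()).hexdigest()

def recursive_root(trace: list[int], chunk_size: int = 4) -> str:
    # Leaf layer: prove each chunk independently (embarrassingly parallel).
    proofs = [prove_chunk(trace[i:i + chunk_size])
              for i in range(0, len(trace), chunk_size)]
    # Combine pairwise until a single proof remains.
    while len(proofs) > 1:
        if len(proofs) % 2:
            proofs.append(proofs[-1])
        proofs = [merge_proofs(proofs[i], proofs[i + 1])
                  for i in range(0, len(proofs), 2)]
    return proofs[0]

final_proof = recursive_root(list(range(32)))   # one proof covering the whole trace
print(final_proof)
```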

Speaker 2

00:32:58 - 00:33:04

Yeah, this has been used extensively by ZK rollups today.

Speaker 5

00:33:10 - 00:33:29

We... like, everyone is using their own technology; we are currently using Plonk. Yeah, so those are proof systems: Plonk is a proof system, Halo is a proof system. Everyone is using a different proof system.

Speaker 1

00:33:30 - 00:33:32

Okay, we have another. Thank you.

Speaker 10

00:33:33 - 00:33:53

Thanks, very insightful. Look forward to learning more. I'm curious if the threat of quantum computers represents a unique threat to the new cryptography being developed for zero-knowledge technology, or if a lot of the same arguments that we would talk about with, for instance, transitioning from SHA or elliptic curve to quantum resistant encryption apply equally well to this new cryptography.

Speaker 4

00:33:55 - 00:34:47

I just had a discussion about this on a podcast the other day. First of all, assume that your proof system is quantum-resistant and that quantum computers are efficient enough to break all the non-quantum-resistant proof systems. The problem is, even if you make your proof system quantum-resistant, a lot of things that are built on top of it are still not quantum-resistant. For example, in Ethereum, the signature schemes are still not quantum-resistant. So let's say your proof system is sound, it works as well as it did before quantum computers became efficient; you can still steal money via a quantum computer by breaking the signature or something else.

Speaker 4

00:34:47 - 00:35:14

So I feel like it's a threat that we need to keep in the back of our minds; it's not something that is pressing at the moment. And also, if we get to a point where we haven't solved that problem yet, but quantum computers are already efficient enough to break cryptographic schemes, I think we have bigger problems than the proof systems that allow us to bridge.

Speaker 6

00:35:14 - 00:35:18

So to answer your question, yeah, it's the exact same conversation we're having in the rest of the field.

Speaker 2

00:35:19 - 00:35:48

I just want to say one more thing on that. The community as a whole is involved; we are involved, Ethereum is involved, and I'm sure that all the projects here are involved as well. There are some efforts to get to some primitives that are secure, even when it comes to replacing BLS signature aggregation with some equivalent that is ZK-friendly, for example STARK-based ones. This is at the level of the entire community at this point.

Speaker 1

00:35:48 - 00:35:51

Cool. Do we have any more? There's a hand up there.

Speaker 3

00:35:59 - 00:36:17

What do you think the limitations of zero-knowledge proofs are? As in, what do you think they won't be able to do, and where do you think we will eventually have to invoke things like optimistic compute, trusted execution environments, and so on? Okay, the limitations of zero-knowledge proofs.

Speaker 8

00:36:17 - 00:36:18

Data availability?

Speaker 2

00:36:19 - 00:37:11

I think I can go into something a bit more specific; I wouldn't go to anything too exotic. Sometimes you want to have some execution that is not known to any entity. For example, in the context of dark pools, you want to have matching that is potentially against anyone, and then you have to apply some kind of mechanism that is not using just zero knowledge but maybe something else, like some kind of MPC, or a small MPC on some parts of the system, so that it could work. I think you will start to see those systems living in harmony in the coming months and years, as the technology of zero knowledge is evolving.

Speaker 2

00:37:11 - 00:37:13

You'll also see these kinds of systems.

Speaker 5

00:37:13 - 00:37:46

Yeah, I agree, I wanted to say the same. The fundamental limitation of zero-knowledge proof systems is that someone has to know the secret. There must be one party who knows the secret that the SNARK or STARK is computed from. If you want it distributed, with multiple parties keeping their secrets separate, then you need to come up with something more sophisticated like multiparty computation, or some kind of interesting combination.

Speaker 5

00:37:47 - 00:38:43

You could argue the Zerocash protocol, what Zcash and Aztec are based on, is kind of a hybrid approach, where everyone has their own secret, but then they only generate SNARKs for state transitions that are public and shared with everyone else. So this is one fundamental limitation. And you can also add fully homomorphic encryption here, where maybe you want to share some data privately and then have some computation made on this data which must remain private until some people, or maybe a threshold of participants, can decrypt it. But going back to the second half of the question, whether optimistic fraud proofs and trusted execution can help us: I don't think they can help us as a primary solution. They could serve as a secondary solution, something we could have as a fallback.

Speaker 5

00:38:43 - 00:39:05

If the zero-knowledge proofs fail, then we have some fallback mechanisms that together give some degree of a safety net. It's not there to replace the floor, but something better. You don't want to rely on this mechanism; it's like an escape hatch in a plane. You don't want to actually be trying it out; you would rather land safely, right?

Speaker 5

00:39:05 - 00:39:23

But if things happen, then you need these kinds of backup mechanisms, which only work if they stay backups. Because if you rely on them as a primary means, then all of a sudden you introduce too many new trust assumptions. So no, I don't see anything that optimistic rollups can do better than ZK rollups.

Speaker 4

00:39:23 - 00:39:38

Just to add to that: whatever a fraud proof can do, a validity proof can also do. It's just a question... it's just better, yeah, basically. Gladly, there are no optimistic rollup builders on this stage.

Speaker 4

00:39:40 - 00:40:16

So essentially, the question is just efficiency. Currently, some things are quite inefficient to do in zero-knowledge proofs. But looking at the progress that proof systems have made in the last three, four, five years, you can expect that they're probably going to continue making a lot of progress in terms of efficiency in the next few years. And therefore, I feel like in two or three years' time, the efficiency difference is going to be negligible, to a point where every single optimistic rollup will become a ZK rollup.

Speaker 1

00:40:18 - 00:40:24

Okay. We have time for a couple more questions, I think. Just a couple more. We have a hand up here.

Speaker 11

00:40:28 - 00:40:35

Hi, thank you for your talk, guys. You mentioned earlier how it's hard to tell developers which language to choose. But just like the language, it's hard for

Speaker 12

00:40:35 - 00:40:45

them to choose your ZK solution. So could you maybe comment on what unique offering you have compared to the other guys on the stage, if I were to invest time into your project?

Speaker 5

00:40:47 - 00:40:56

That's a great question. That's a good one. I've been meaning... yeah. Nice.

Speaker 6

00:40:57 - 00:41:22

My least favorite part. I didn't make Circom, so I don't have a dog in the fight. Other than that, I was able to come across it as a noob, not knowing how to do any of this, and look at it and figure it out. And it was the most accessible one, in my opinion, at the time. And it's not general-purpose; it's bespoke for a very specific problem. You tackle just that one problem. But I don't know. You guys?

Speaker 6

00:41:23 - 00:41:24

OK, ZK-Sync.

Speaker 5

00:41:27 - 00:41:48

The main value proposition of zkSync is that it's a technology aimed at the future. It's a protocol that we developed, the core team of zkSync and the community that is contributing to it, by thinking about the ultimate mission of the protocol and the ultimate vision: what is it that we want to build? And we reverse-engineer from there.

Speaker 5

00:41:49 - 00:42:33

We're not basing it off past choices. We fast-forward a couple of years and ask: what are Ethereum, the world, and this web3 internet of value going to look like, and what will we need there? What will we need to get there? So coming back to Vitalik's post two days ago, where he mentions privacy, smart contracts, essentially user experience, and scalability: those were mentioned as three out of the four points that were named as the key goals for zkSync in its inaugural post. Three years ago, we wrote a post about what we were trying to accomplish.

Speaker 5

00:42:33 - 00:42:58

We're talking about all three points. And it manifests in the decisions that we make. We have account abstraction as one example, natively integrated into the protocol. So we had to have the courage to deviate from Ethereum slightly. Ethereum was pushing back then for EIP-4337, which we took as a base.

Speaker 5

00:42:58 - 00:43:25

We could not make it 100% compatible, because that would limit us. We would not be able, for instance, to have account abstraction with EOAs, which today comprise the majority of wallets. So with EIP-4337, if you use MetaMask, you cannot have gasless transactions. You cannot pay for gas in the tokens in which you transact. If you only have DAI, you cannot pay in DAI.

Speaker 5

00:43:25 - 00:43:41

You still need to get some ETH and pay in ETH. And this creates huge friction for the end users. So we said, okay, we're willing to push these boundaries. This is what L2s are for: to experiment, to come up with new ideas, to build stuff in a different way.
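
A rough sketch of the paymaster-style idea behind that, with made-up names and numbers: the user holds no ETH at all, a paymaster fronts the ETH gas, and optionally charges the user in the token they already have, for example DAI. This is a conceptual model, not zkSync's actual API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    eth: float = 0.0
    dai: float = 0.0

@dataclass
class Paymaster:
    """Fronts ETH gas for users and (optionally) charges them in an ERC-20."""
    treasury: Account
    dai_per_eth: float = 1800.0               # made-up exchange rate

    def sponsor(self, user: Account, gas_cost_eth: float, charge_in_dai: bool) -> None:
        self.treasury.eth -= gas_cost_eth      # paymaster pays the network fee in ETH
        if charge_in_dai:
            fee_dai = gas_cost_eth * self.dai_per_eth
            user.dai -= fee_dai                # user pays in the token they hold
            self.treasury.dai += fee_dai
        # else: fully gasless for the user (e.g. a dapp subsidises onboarding)

user = Account(eth=0.0, dai=50.0)              # no ETH at all
paymaster = Paymaster(treasury=Account(eth=10.0))

paymaster.sponsor(user, gas_cost_eth=0.001, charge_in_dai=True)
print(user.dai)             # ~48.2: the fee was taken in DAI
print(paymaster.treasury)   # paymaster spent 0.001 ETH, earned ~1.8 DAI
```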

Speaker 5

00:43:41 - 00:44:14

And by the way, StarkWare was doing the same; they have also taken it further on the developer experience side. The difference between us is that we're doing all these things, like state diffs, for example, which are going to be a huge evolution from the user experience and price point of view, but we're doing it in the Ethereum paradigm. We're keeping compatibility with the code base, so that you can take existing solutions and you don't have to rewrite the entire ecosystem from scratch. You can take all the projects in Solidity and just deploy them.

Speaker 1

00:44:14 - 00:44:17

We're almost out of time. I just wanted to get a couple more comments in.

Speaker 4

00:44:17 - 00:44:47

I feel like I was sat down here just to be implicitly bullied by these two. So, we took a bit of a different path. Our philosophy is that Ethereum has reached a point where it doesn't really make sense for us to rebuild the infrastructure, change things, et cetera. So our baseline goal is to be as compatible with Ethereum as is technically feasible. There are still a few things we have that are different from Ethereum.

Speaker 4

00:44:47 - 00:45:24

But for the vast majority of users and developers, I would say 99.99%, there's no difference. And all this tooling works out of the box, everything. But that doesn't limit us. We can still add things that the EVM doesn't have, and basically build a superset of functionality on top of the baseline EVM feature set that Ethereum supports. So we can add things that Ethereum doesn't have, like state expiry and native account abstraction, things that don't currently work in Ethereum, but we can still keep the things that work in Ethereum 100% compatible.

Speaker 4

00:45:25 - 00:45:49

So let's say for a user that just wants a one-to-one Ethereum experience, they can use the existing tooling, and if you want something else, you can just use different tooling that allows you to interact with those things. And that basically gets you the best of both worlds, where you have the familiar user experience, but if you need something else, you can always just use something that gives you a different user experience.

Speaker 2

00:45:50 - 00:46:17

Right. I only have, I see, 50 seconds, so just to cut to the point: three things, and I think the third one is the most important right now. So, obviously, deep, deep expertise in the technology, both experience from the past and looking into the future; that's one. The second thing I would mention is that we have the courage, like what Alex mentioned as well, but even more to the extreme: we are not afraid to change things when they need to be changed.

Speaker 2

00:46:17 - 00:46:57

That, for example, means account abstraction as the default. And that also means all kinds of hybrid data availability modes. We're just changing what's needed to bring the most scalable protocol. And the third, and most important, is the developer community. The fact that you can go and touch the deepest parts of the protocol and take part in developing its most important parts right now, and the fact that you can go to every conference and have a bunch of other people working and building around it, is very, very helpful, and I think it could and will help you as, I assume, a beginning developer in the ecosystem.

Speaker 2

00:46:57 - 00:46:58

Great.

Speaker 1

00:46:58 - 00:46:58

Thank you. We're out of time. Please join me in thanking the panel.