24 minutes 14 seconds
Speaker 1
00:00:01 - 00:00:09
Hello, hello. Let me just grab the clicker real quick. All right. So hello, hello. I'm DC Builder.
Speaker 1
00:00:09 - 00:00:57
I'm a research engineer and I work at WorldCoin on the protocol team. So my day-to-day, I usually spend it working on applied cryptography, some distributed systems engineering, Solidity, Rust, things of that nature. And recently, or for the past few months, I've been doing some research around the topic called zero-knowledge machine learning, ZKML. It's a really early stage technology that might have a lot of useful applications in the domain of blockchains. And since it's really early on, I just wanted to give an introductory talk that explains what it is, what you can do with it, where the state of the art is, and what tools you can use to do ZKML. And yeah, just give a brief introduction into what the topic is all about.
Speaker 1
00:01:04 - 00:01:19
The clicker is not working for some reason. All right, that's good. Yeah, so first, let me boil down ZKML into its two constituents. So first, we have ZK.
Speaker 1
00:01:19 - 00:01:40
As you heard in the previous talk, ZK is this piece of cryptography that allows you to essentially prove that some computation happened correctly, and you can also hide parts of this computation. So the workflow is that you have some computer, it does some computation, and it generates a proof that that computation happened.
Speaker 1
00:01:40 - 00:01:57
And any verifier can verify the correctness of that computation. So in the context of ML, right? My slide goes back. Oh, it works the opposite way. Okay, never mind. Yeah, so in the context of ML. So what is ML, right?
Speaker 1
00:01:57 - 00:03:12
So machine learning is a sort of group of algorithms where you have some form of goal or task you want to solve, and you create a heuristic solution for your task. By heuristic I mean just a good-enough approximation. The way you achieve these approximations is by using algorithms that do statistical learning in some way or another, right? You have these neural networks, if you've heard of them. They're able to get trained on lots and lots of data, lots and lots of examples, and they're able to learn how to solve a specific task, for example via reinforcement learning, where a specific rating mechanism rates their task solutions and they essentially get better over time. So in the context of ML, the problems you're usually solving are classification or regression; you're trying to solve some form of optimization for some sort of problem. And the reason why ZK is useful for this is that you are essentially able to prove these ML algorithms end to end. You're essentially able to prove that, hey, if I have some input and I feed it into some model, I get an output, and I can prove that all of this happened correctly.
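To make the classification idea concrete, here is a minimal scikit-learn sketch of training a classifier that only ever approximates the task; the synthetic dataset and model choice are illustrative, not from the talk:

```python
# A minimal sketch of "heuristic optimization" in the ML sense: a classifier
# trained on examples that only approximates the true decision boundary.
# Assumes scikit-learn is installed; dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# The model is a heuristic: good enough on held-out data, never perfect.
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
```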
Speaker 1
00:03:12 - 00:03:39
So for example, I can say to my friends, hey, I have computed this model on this specific input and I have this output, right? So let's say I have a model that is able to discern between a dog or a cat. I can just feed it an image of a dog or a cat, it has some output, it tells me, hey, this is a cat. And I can prove to anyone that I did indeed do this computation. So they don't have to run the model. They can just trust me, because they can verify the zero-knowledge proof.
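To show the shape of that prove/verify flow, here is a runnable toy; the "proof" below is a plain hash, so it is not zero-knowledge and the toy verifier still needs the image. A real ZKML stack (such as EZKL, discussed later in the talk) replaces these functions with a SNARK prover and a cheap verifier that never sees the input:

```python
# A runnable toy of the prover/verifier roles (NOT real zero-knowledge).
import hashlib
import json

def model(image: list[float]) -> str:
    """Stand-in classifier: real use would be a neural network."""
    return "cat" if sum(image) / len(image) > 0.5 else "dog"

def prove(image: list[float], output: str) -> str:
    """Prover: binds (input, output) together. A SNARK would also hide
    the input and attest that the model actually produced the output."""
    return hashlib.sha256(json.dumps([image, output]).encode()).hexdigest()

def verify(proof: str, image: list[float], output: str) -> bool:
    """Verifier: with a SNARK this check is cheap and needs no image."""
    return proof == prove(image, output)

image = [0.9, 0.8, 0.6]            # prover's input
output = model(image)               # "cat"
proof = prove(image, output)
assert verify(proof, image, output)
print(f"verified output: {output}")
```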
Speaker 1
00:03:41 - 00:04:39
So why zero-knowledge? The main property that zero-knowledge gives you that's useful in the context of machine learning is verifiability of execution. Usually when you're using any ML algorithm, let's say through an API, a programming interface, you go to, let's say, the OpenAI website, and they have a model, for example Whisper. Whisper is a model that does transcription for audio. But the API is very black-boxy. You don't really know whether OpenAI is indeed running Whisper on those servers. You're just trusting them as an intermediary that, hey, they did indeed run Whisper on your input, and you trust them because you're paying for some service and you have some legal agreement and they cannot cheat you out of it. However, there is no mathematical proof that they're actually running Whisper. They might be running a much cheaper or worse model on the back end and just tell you that it's Whisper on the front end, and you would have no way to know it.
Speaker 1
00:04:39 - 00:06:15
So for this sort of use case, that's where verifiability comes into play, right? And verifiability of execution comes from the properties of zero-knowledge cryptography: completeness and soundness. Another part of ZK that's useful is the zero-knowledge part. Zero-knowledge essentially means that you're able to prove the correctness of a statement without revealing any information besides the statement. A good intuition for this: I can prove to you that I computed some model, but I can hide the weights of that model. So I can tell you, hey, my model has X accuracy on some test data set, for example more than 99% accuracy, without revealing the weights of my model. So that's also useful in some contexts. And when we're talking about ZKML, you're usually talking about proof of inference. You may have heard of model training, right? Companies like Google and Meta have huge co-located data centers where they train machine learning models for years on end to try to get the best performance and have the biggest models and whatnot. This type of computation is really expensive and takes a long time to compute, and zero-knowledge cryptography only adds more computation. It has an overhead of about a thousand to a million X, depending on which computations you're trying to prove. By overhead, I mean that it's about a thousand to a million times more computationally expensive to compute a proof than to just run the computation yourself.
Speaker 1
00:06:15 - 00:07:10
However, verifying the proof is a lot easier than actually doing the computation. That's the reason why zero-knowledge proofs are useful, for example, in the context of blockchains, right? You can run some transactions off-chain, create a proof, verify it on-chain, and update the state of the network without everyone having to re-execute those transactions. So since proof of training would be too computationally intensive, the only thing that's currently feasible, and for the foreseeable future, is proof of inference: just evaluating an already trained model on some input to get some output. So this is a simple Venn diagram that combines different primitives and constituents to explain what properties ZKML has. We have three. The top one, the yellow circle, is computational integrity. This is computational correctness or verifiability.
Speaker 1
00:07:10 - 00:07:33
I can verify that some computation happened correctly. The bottom left, the blue one, is privacy. Privacy, as I said, stems from the property of zero-knowledge. And heuristic optimization, the last one, you can think of it as just ML. Heuristic optimization just means that you're trying to optimize a problem for which there's no perfect solution; you're only giving approximations.
Speaker 1
00:07:33 - 00:08:03
An approximation you can think of as a heuristic. And so if you combine all three of those, you get ZKML in the middle. If you only combine computational integrity and privacy, you get traditional ZK, right, what we use in the context of blockchains to verify transaction execution or do private transactions, let's say Zcash. If you only do computational integrity and heuristic optimization, you get validity ML. This essentially means that you're proving that some machine learning model ran correctly, but you're not hiding any parts of the computation.
Speaker 1
00:08:05 - 00:08:44
And the last one that I didn't mention: if you combine privacy and heuristic optimization, or ML, you get this field called fully homomorphic encryption machine learning. Fully homomorphic encryption is a different primitive within cryptography that allows you to do slightly different things. Essentially, fully homomorphic encryption allows you to encrypt some data, perform some computation on that encrypted data, and then decrypt it. That means you're able to do computations on private data, and the computer that's performing the computations on the private data never gets to learn what the original input was. So you can do private computation.
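As a hands-on illustration of that encrypt-compute-decrypt pattern, here is a toy Paillier cryptosystem; note Paillier is only additively homomorphic (real FHE schemes such as CKKS or TFHE support richer computation), and the tiny key size is purely for demonstration:

```python
# Toy Paillier: an untrusted party can add ciphertexts without ever
# learning the plaintexts. Key sizes here are far too small for real use.
import math
import random

p, q = 1009, 1013                     # toy primes; real keys are ~2048-bit
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # private key
mu = pow(lam, -1, n)                  # modular inverse of lam mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Server-side: multiply ciphertexts to add the hidden plaintexts.
a, b = encrypt(20), encrypt(22)
total = (a * b) % n2                  # homomorphic addition
assert decrypt(total) == 42
print("decrypted sum:", decrypt(total))
```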
Speaker 1
00:08:44 - 00:09:21
However, you don't have the computational integrity from zero-knowledge cryptography. So here I just have some simple definitions of what I just talked about. And use cases, right? You might be wondering, OK, this is a cool technology, but what are the actual applications for any of this? So the validity property, the one that allows you to verify that some computation happened correctly, is in my opinion the most useful property of ZK in the context of ML.
Speaker 1
00:09:22 - 00:09:57
So the first one that I have listed is machine-learning-as-a-service transparency. This is the example I mentioned at the beginning, where instead of trusting OpenAI to run some model for you, you can request that the OpenAI team or the API give you a proof that they're actually running the model they say they're running. If they give you a proof alongside the output, you have cryptographic verifiability that they ran the model they claim to be running. And this can work in any context. Another one is verifiable on-chain classifiers or regressors.
Speaker 1
00:09:58 - 00:10:23
That should just say, essentially, bring ML on-chain. So machine learning is very computationally intensive and blockchains are very computationally constrained, right? We have Ethereum with like 15 to 30 transactions per second, each block has about 30 million gas with EIP-1559, and the computation you can fit into it is very, very limited, right? So you cannot really do ML on-chain, even on L2s today. It's really hard.
Speaker 1
00:10:23 - 00:11:12
So where ZK is good is that instead of doing the computation on-chain, you can do the ML computation off-chain, make a zero-knowledge proof, and just verify it on-chain. And you can bring those results on-chain. So you can build applications that use ML without having to actually do the computations on-chain, which brings a whole set of new possible use cases. One good example of this that I've heard of in recent months is Yearn, which is a DeFi protocol that does yield aggregation, right, essentially trying to find which DeFi protocols can give me the best yield on some assets that I provided. Some of these strategies are fairly complex, but if you want to do a strategy that involves some form of ML optimization, then you cannot do it on-chain.
Speaker 1
00:11:13 - 00:12:05
So how do you prove that you're indeed running the strategy you claim? Essentially by running the ML algorithm off-chain, generating a proof, sending it to the chain, verifying it, and then doing some computations with it, right? Like running the strategy that you say you're running. Another one is anomaly detection or fraud prevention. So there are a few startups in the blockchain ecosystem that are trying to use ML to detect threats, security threats in this case, whether it's some malicious transaction or some anomaly. You can train a model to learn what normal is, like the normal day-to-day of a protocol, right? There are some transactions that happen in some DeFi protocol, these are the balances and the reserves of the AMM pool, for example, and then there's some weird transaction where somebody exploits the entire thing. So you can do something like that in order to prevent attacks.
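A minimal sketch of that anomaly-detection idea, with made-up feature values; in the ZKML version, the inference step would additionally be proven so that a contract can trust the flag:

```python
# Fit a detector on "normal" protocol activity, then flag outliers.
# Feature values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal activity: e.g. (trade size, pool reserve delta) pairs.
normal_activity = rng.normal(loc=[100.0, 0.0], scale=[10.0, 1.0],
                             size=(500, 2))

detector = IsolationForest(random_state=0).fit(normal_activity)

suspicious_tx = np.array([[5000.0, -80.0]])   # wildly off-distribution
print(detector.predict(suspicious_tx))         # -1 means anomaly
```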
Speaker 1
00:12:06 - 00:12:51
Instead of having to be reactive to some exploit, if the model detects an anomaly, the protocol can be instantly frozen, provided there's a proof that an agreed-upon model found something potentially risky. In the context of ZK, so now just zero-knowledge, you're hiding parts of the ML computation. One good use case is decentralized Kaggle. Kaggle is this machine learning competition platform where, if you are a company that wants to have a problem solved using ML, you can post a Kaggle competition with some prize attached to it. And different people will try to compete and win that prize by submitting the winning algorithm.
Speaker 1
00:12:51 - 00:13:45
So the way it works is that a company gives money to Kaggle and creates the competition; a user wins that competition, and when they win it, Kaggle exchanges the money from the one that created the competition to the user, and the user gives the algorithm they created to the company that posted the competition. So this has a trusted intermediary, it being Kaggle itself, the platform. However, if you use ZKML, what you could do is prove that your model has more than X accuracy on some test data, which is how you essentially measure what the best solution is. So if you prove that you have the best solution using ZK, you don't have to reveal your model; you can hide your model and just prove that you have some level of accuracy. Once you prove that, you can claim the prize that was associated with the competition, and you reveal the model that you've previously committed to.
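A sketch of that commit-then-reveal game: the hash commitment below is real, while the accuracy statement is the part a ZKML proof (not shown) would attest to; all names and values are illustrative:

```python
# Commit -> prove -> reveal. The commitment is a plain hash; the accuracy
# proof itself would come from a ZKML system and is elided here.
import hashlib
import json

def commit(weights: list[float], salt: str) -> str:
    """Publish this digest when entering the competition."""
    return hashlib.sha256(json.dumps([weights, salt]).encode()).hexdigest()

def reveal_and_check(weights, salt, commitment) -> bool:
    """After winning, reveal (weights, salt); anyone can check the match."""
    return commit(weights, salt) == commitment

weights = [0.12, -0.7, 3.4]        # the competitor's secret model
salt = "random-nonce-1234"          # blinds the commitment

c = commit(weights, salt)
# ... competitor submits c plus a ZK proof "my committed model scores
# >= 99% on the public test set"; once the prize is claimed ...
assert reveal_and_check(weights, salt, c)
```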
Speaker 1
00:13:45 - 00:14:21
So you can do these cryptographic games where you hide something and then show it only if you win some form of prize. Next, inference on private and sensitive data. So essentially, there are a lot of different types of sensitive data that you might not want to reveal to some verifier. You can create a proof that you ran some model on some private data, and the verifier does not learn anything about the data you ran it on, but they know that it is the data they wanted to have evaluated. This might have use cases in the medical field.
Speaker 1
00:14:21 - 00:15:13
Let's say that you want to classify whether some image of a cancer cell is malignant or not. You can prove to someone, hey, this is malignant, without revealing whose image it was or which original image it was, essentially hiding any sensitive data that you want. So I work at WorldCoin, which is a privacy-preserving proof-of-personhood protocol; that's World ID. And there are a lot of different challenges that we're facing, one of which is that we are building hardware. And when you're building hardware, it's sometimes hard to give people a good user experience. In this context, when you go to our piece of hardware, which is an iris biometric scanner, it generates an encoding of the randomness of the human iris, which is called an iris code.
Speaker 1
00:15:13 - 00:15:47
And so if we ever update the model for generating these iris codes, you would have to go to this hardware device again. And these hardware devices are spread out across different countries; you would have to go to the same place, physically get onboarded again, and regenerate a new iris code, because if we update the model, the previous iris code is no longer valid. So a good use case for ZKML here is that if the user keeps an image from the orb stored on their local device, in encrypted storage, let's say on their iPhone, they could regenerate their iris code using ZKML.
Speaker 1
00:15:47 - 00:16:16
They just download the new model, they download the prover for that specific model, they compute a new iris code locally on their phone, and they submit a proof to the protocol, which is an on-chain smart contract. They submit a proof that, hey, I've computed a new iris code, and this is the proof that I computed it correctly with the new model. So that's one use case. Another one is making the orb more trustless, or more transparent. So this hardware device, we call it the orb, for context.
Speaker 1
00:16:17 - 00:16:46
Essentially, there is a lot of execution that happens on the orb, a lot of computation. And since it's a piece of hardware, currently the way that we enforce that some computations are happening is through a trusted execution environment, a TEE. And currently, a TEE is not very transparent. It just enforces that some computations that you specify are indeed happening. However, to the outside user or to the outside observer, you don't really know what's actually happening on the hardware.
Speaker 1
00:16:46 - 00:17:32
So for example, we can commit to some code that's public and open source, and we'd be able to create a proof that, hey, this code that's public and open source did indeed run within the hardware device. Another one is two-factor authentication. So for example, when you get a World ID, which, without getting into much detail, is essentially this private-public key pair that allows you to prove that you're a unique person without revealing who you are, using zero-knowledge. So you just prove that you have a unique World ID, which is this unique identifier or passport that represents uniqueness, or can prove uniqueness, without revealing who you are.
Speaker 1
00:17:33 - 00:17:53
And the cool part of this is that if you, let's say, want to do 2FA, you can have an image of yourself stored on your own phone. And you could essentially map that image to that specific iris code. And you could essentially do two-factor authentication using ZK. And the way that you do this matching is using ML. So that's where ZKML might be potentially useful.
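One way such a matching step could look, sketched with cosine similarity over embeddings; the embedding values and the 0.9 threshold are placeholders, and in the ZKML version both the embedding inference and this comparison would run locally under a proof:

```python
# Compare a fresh biometric embedding against an enrolled reference.
# Values and threshold are illustrative placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

stored_embedding = np.array([0.1, 0.8, 0.3, 0.5])    # enrolled at signup
fresh_embedding = np.array([0.12, 0.79, 0.28, 0.52])  # from a new image

is_match = cosine_similarity(stored_embedding, fresh_embedding) > 0.9
print("2FA check passed:", is_match)
```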
Speaker 1
00:17:53 - 00:18:09
And these three use cases are the reason I originally got into ZKML. There are a lot of different projects building on top of it, so I suggest you read up more. I'm going to show you a few links. There are three links here, QR codes.
Speaker 1
00:18:10 - 00:18:51
The first one is the ZKML community. It's just a Telegram group where a bunch of people that are interested in ZKML can talk about different things around ZKML, whether it's new research papers from the ZK side or the ML side, or the combination of both, new code bases, new tools, new use cases, new applications you might be able to build, or events where people from the ZKML community are and where you can discuss it. It's essentially the Schelling point where you can talk about and discuss anything ZKML. The middle one is awesome-zkml. It's a GitHub repo with a readme, a markdown file, containing a list of resources that I put together about the different ZKML things.
Speaker 1
00:18:51 - 00:19:34
So if you want to learn more and deep dive, since this is a very introductory talk, it essentially aggregates all the resources that I found on the internet about ZKML, whether it's blog posts, podcasts, code bases, tools, or people that are interested in it, so if you want to reach out to them, you can, all these sorts of things. And the third one: every single conference, every single time I give this talk, I usually give a shout-out to one of my favorite articles that recently came out. The latest one that I recently read is from 1KX, and they wrote a really in-depth blog post, a lot more in-depth than this specific talk.
Speaker 1
00:19:34 - 00:19:49
So if you want to jump into what ZKML is all about and learn more than this introductory talk, I recommend you read the article on the right. OK. So thank you. That's everything for me. And I'm happy to take questions.
Speaker 2
00:20:02 - 00:20:03
Does this work? Yeah.
Speaker 1
00:20:07 - 00:20:08
Hello? Yeah.
Speaker 3
00:20:08 - 00:20:26
Thank you for the talk. So, quick question. Do you have knowledge of current real implementations of ZKML working in the world? And what are the overheads that you have seen? You mentioned like a thousand times; it's lower for training, but I don't know if you talked about overheads on inference.
Speaker 1
00:20:26 - 00:21:03
Yeah, 100%. So, on the current tooling: there are several companies building different types of tools for generating a proof of an arbitrary computation. The way that most companies or projects are approaching this is that you have some machine learning model in a library, let's say TensorFlow, Keras, PyTorch. There's a standardized representation of these machine learning models called ONNX. It's a standardized format that any of these frameworks can export to. ONNX stands for Open Neural Network Exchange.
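For example, exporting a small PyTorch model to ONNX looks like this; torch.onnx.export is the standard PyTorch API, and the tiny model here is illustrative:

```python
# Export a toy PyTorch model to the ONNX interchange format.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)    # fixes the input shape for the graph
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
# model.onnx can now be fed to a ZKML transpiler such as EZKL.
```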
Speaker 1
00:21:04 - 00:22:03
And so what these tools that create zero-knowledge proofs of these models do is take the ONNX representation of a model and transpile it into a zero-knowledge circuit. They create a circuit representation for every single type of computation that's represented inside of this ONNX file, and then they generate a proof. The tool that's currently most flexible for any given user to use is called EZKL. I'm happy to share a link, or, since the presentation may not be up anymore, if you scan the middle QR code, the awesome-zkml one, and scroll down to code bases, you'll find EZKL. It's essentially a Rust tool that creates a proof of an ONNX file using the Halo 2 proving system, the Privacy and Scaling Explorations fork, the one that Scroll and many other projects are using. So it creates a zero-knowledge proof using Halo 2.
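An approximate end-to-end flow with EZKL's Python bindings might look like the following; the EZKL API changes frequently, so the exact function names and arguments here are an assumption from memory and may differ from your installed version:

```python
# Sketch of the EZKL pipeline: settings -> compile -> setup -> prove ->
# verify. Treat names/arguments as approximate; check the EZKL docs.
import ezkl

ezkl.gen_settings("model.onnx", "settings.json")           # circuit params
ezkl.compile_circuit("model.onnx", "model.compiled", "settings.json")
ezkl.setup("model.compiled", "vk.key", "pk.key")           # key generation

ezkl.gen_witness("input.json", "model.compiled", "witness.json")
ezkl.prove("witness.json", "model.compiled", "pk.key", "proof.json")

# Anyone holding the verifying key can now check the inference cheaply:
assert ezkl.verify("proof.json", "settings.json", "vk.key")
```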
Speaker 1
00:22:03 - 00:22:40
And in terms of the overhead, it changes every single month because they constantly land changes and PRs. Right now, I wouldn't want to guess, but I think it's still around a thousand X overhead. Things I've heard from Jason, one of the co-authors of EZKL, are that they are already starting to be able to prove models in the sizes of 100-million-plus parameters, and it takes, I think, around 5 minutes on a really big AWS server with lots of RAM and lots of graphics power.
Speaker 2
00:22:42 - 00:23:14
Thanks, DC. Just one last question, please. So how do you think ZKML differentiates itself, or is better in terms of privacy, compared to running ML inside a trusted execution environment? I mean, in terms of computation, obviously a TEE requires hardware and ZKML doesn't, but in terms of privacy, how would you compare them?
Speaker 1
00:23:15 - 00:23:44
Yeah, so running ML inside of a trusted execution environment gives you some form of guarantee that it ran correctly within that specific piece of hardware. But if you want to verify that computation outside of that piece of hardware, it's impossible. So the first thing is, you just cannot do on-chain ML that way, because in order for that to work, you'd have to trust that some node has some trusted execution environment. It just doesn't add up.
Speaker 1
00:23:44 - 00:23:44
You don't really have the guarantees. What ZK provides you is the guarantee that some execution happened correctly, which trusted execution environments do not. They only guarantee that some computation wasn't tampered with within that specific device; outside of that environment, you have no guarantees that that is actually what happened.