RLN in resource-restricted clients #45
Thanks for your update, @weboko. Sounds like we have a working POC, at least in terms of publishing? It's reasonable to expect more bugs and fixes surfacing during productionising/dogfooding.
Good point. I think this depends on a variety of use cases though - for some applications the light node would need to be independent, whereas others may have such resource restrictions that they'd rather offload the RLN work to a service node.
Right. You're right in saying the Merkle tree is the only real blocker. In some sense this suggestion is a variation on (3) where instead of RLN-as-a-service, the service nodes could provide the necessary root (or preconstructed tree?) for the light clients to use. Trust is indeed the main concern here, but it's a good idea and worth considering.
Interesting approach. Not sure if the pros and cons of this have been discussed in another forum, but presumably this would require a "trusted" (centralised) entity to publish and update the roots, which may be an unacceptable tradeoff.
Would like @rymnc's input here. TL;DR: how can a light client use its own proof to publish a message via lightpush? As @jm-clius said in (1), one approach could be: let js-waku synchronize its own tree and create its proofs:
The cons may make it not practical to use, so I would like to think of other approaches. For example, rely on other Waku nodes to get the root/tree, so that we avoid i) the syncing time, ii) requiring an eth node, and iii) complexity on the js-waku side. Some kind of new protocol in the Waku network for this, that allows light clients to get the trees/roots from other nodes and use them to create proofs. This won't require eth nodes or long sync times. As per this, the tree takes <2MB for 10k memberships, so it should be feasible to share it in almost no time. But this solution will take some time to develop and test, and there are some cases to take into account (e.g. the tree is constantly changing). In other words, my concern is that even if we had "1. Full RLN in resource-restricted clients", it may not be usable in practice.
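For a rough sense of scale, the "<2MB for 10k memberships" figure is easy to sanity-check with back-of-envelope numbers; the 32-byte node size here is an assumption, not taken from the thread:

```typescript
// Back-of-envelope check of the "<2 MB for 10k memberships" claim.
// Assumes 32-byte nodes and a full binary Merkle tree (hypothetical numbers).
function treeSizeBytes(leaves: number, nodeBytes = 32): number {
  // A binary tree with n leaves has at most 2n - 1 nodes.
  return (2 * leaves - 1) * nodeBytes;
}

const size = treeSizeBytes(10_000);
// 19,999 nodes * 32 bytes ≈ 640 KB, comfortably under 2 MB
```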
This may help for verification but not for proof generation. AFAIK, to generate a proof you need the whole tree.
Correct, having the root onchain helps with verification. For the tree sync, ideally we would have a Waku protocol provide the tree to a requesting node, but there should be some incentive involved. Additionally, we could use https://github.com/rymnc/rlnp2p-sync as a starting point and provide an IPFS CID to light clients. They may choose to sync the tree to verify, or download the contents associated with the IPFS CID directly. WDYT?
Could we use some combination here to verify the tree downloaded from a service node to minimise trust assumptions? |
Yes! Having the root onchain, as discussed here, brings its own set of complexities. Off the top of my head: a user may generate a state transition proof, which indicates that they have inserted an id_commitment. This would make registrations cost approx. 15.7 USD (assuming ETH = $1800, gas price = 38 gwei) versus the initial cost of approx. 2.5 USD.
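The quoted costs can be cross-checked with simple gas arithmetic; the gas amounts below are back-derived from the USD figures in the comment, not measured on-chain:

```typescript
// Sanity-check of the quoted registration costs.
// Assumptions from the thread: ETH = $1800, gas price = 38 gwei.
function gasCostUsd(gasUnits: number, gweiPrice = 38, ethUsd = 1800): number {
  // gwei -> ETH is a factor of 1e-9
  return gasUnits * gweiPrice * 1e-9 * ethUsd;
}

// ~229.5k gas ≈ $15.7 with a state-transition proof; ~36.5k gas ≈ $2.5 without
// (gas figures back-derived from the USD numbers, for illustration only)
```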
This would be another separate discussion but with great impact on this one. I really doubt that we will end up deploying on Ethereum mainnet due to fees, so if we aim for a layer 2, gas will be less of an issue, which might open the door to having the root onchain.
Actually, that's pretty much it. We shouldn't expect the PoC to be adopted the way it works now: WASM (up to 10s download time in browsers) + tree sync (needs an RPC endpoint + takes from 4s to 10 minutes). (Of course I am looking into ways to address some of this, but still.)
Good point. So that means we need the tree anyway. Once we get the tree from a Waku node
I am not convinced using the REST API here makes sense. We have a libp2p tech stack for light clients (peer exchange, light push, filter), and we are now proposing to build a second tech stack, also for light clients? Assuming that a project wants to generate proofs for its light clients, then it would make more sense to enhance light push to attach proofs for incoming messages that are:
Such a model could enable the app to have other users offer said service. Going with a REST API model, we restrict ourselves to HTTP tech, meaning REST API, FQDN, web auth, etc.
Would be good to understand here whether the WASM blob can be smaller/compressed/split. For example, it is fine if the webapp starts validating incoming RLN proofs after 15s. Also, can we reduce the blob so it only contains the "generate proof" functionality, for example, so that the user can send messages ASAP? Regarding downloading the memberships and constructing the tree.
In terms of trust, when we get the full tree or individual events from an untrusted source, the moment the user sends a message via light push they will know whether the data can be trusted (light push request successful) or not (light push request rejected because of an invalid proof). Any measure we take could be seen as temporary, meaning that we could get the data from an untrusted source while also getting it from the blockchain, so that the app may operate in a "boot up mode" for a few minutes. Finally, we could use a snapshot approach to the tree issue, similar to what is done in Bitcoin or Ethereum, where we embed tree data in the source code:
In this case, the data is served by the web server that delivers the JS code. Ideally, a combination of solutions above enables an acceptable UX.
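A minimal sketch of the snapshot idea, with all names and numbers hypothetical: ship a serialised tree snapshot with the app bundle, then fetch and replay only events newer than the snapshot block:

```typescript
// Hypothetical sketch of the snapshot approach: embed a tree snapshot at build
// time and only replay membership events registered after the snapshot block.
interface TreeSnapshot {
  blockNumber: number; // chain height at snapshot time
  leaves: string[];    // id_commitments in insertion order
}

// Embedded at build time, served alongside the app's JS (values illustrative).
const EMBEDDED_SNAPSHOT: TreeSnapshot = {
  blockNumber: 4_000_000,
  leaves: ["0xabc"],
};

function restoreLeaves(
  snapshot: TreeSnapshot,
  newerEvents: { leaf: string; block: number }[]
): string[] {
  // Only events after the snapshot block need to be fetched and replayed.
  const delta = newerEvents
    .filter((e) => e.block > snapshot.blockNumber)
    .map((e) => e.leaf);
  return [...snapshot.leaves, ...delta];
}
```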
I think this is fair; the only issue is that I am not sure IPFS can be accessed from a webapp without using a centralised gateway. Could contract events instead be broadcast on a Waku content topic, letting incentivisation of storing said events be solved by the store incentivisation issue? Regarding UX, do note that we are operating in the decentralised domain, and hence it would be good to understand what boot-up time is acceptable to Web3 users. I understand that in Web2, page loading time should be sub half a second, but I assume that Web3 users are more open to slower-loading pages. Also, as mentioned above, there are alternatives that give a fast-loading app operating in a semi-trusted mode until all data is acquired.
Closing since this has already been researched, implemented and delivered to TheWakuNetwork. See:
Background
The public Waku Network's main DoS protection mechanism is rate-limiting publishers using RLN.
This roughly poses the following requirements on participants:
For both publishers and validators:
Publishers only:
5. Register an RLN membership in the configured on-chain RLN membership contract (currently on Sepolia)
6. Generate and attach a ZK proof to each published message, proving membership and rate-limit adherence and including a share of the private key associated with the membership.
Validators only:
7. Validate the proof in all received messages against the locally constructed tree. Continue processing if it passes; drop the message if it fails (and potentially descore the source peer).
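As an aside on requirement 6, the "share of the private key" works like Shamir secret sharing on a line: each message reveals one point, and two messages in the same epoch reveal the secret. A toy sketch follows; the prime field and hash below are illustrative stand-ins, whereas real RLN uses Poseidon over the BN254 scalar field:

```typescript
// Toy illustration of RLN's rate-limiting mechanism (NOT the real construction).
const P = 2n ** 61n - 1n; // toy prime field, not the real RLN field

function toyHash(...inputs: bigint[]): bigint {
  // stand-in for Poseidon; illustrative only
  return inputs.reduce(
    (acc, x) => (acc * 1_099_511_628_211n + x) % P,
    14_695_981_039_346_656_037n % P
  );
}

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Each published message reveals one point (x, y) on the line y = a0 + a1 * x,
// where a0 is the membership secret and a1 is derived from the current epoch.
function share(a0: bigint, epoch: bigint, messageHash: bigint): [bigint, bigint] {
  const a1 = toyHash(a0, epoch);
  const x = messageHash % P;
  return [x, (a0 + a1 * x) % P];
}

// Two shares from the SAME epoch let anyone interpolate the line and recover a0,
// which is what enforces the rate limit: the spammer's secret leaks.
function recoverSecret(
  [x1, y1]: [bigint, bigint],
  [x2, y2]: [bigint, bigint]
): bigint {
  const num = ((y1 - y2) % P + P) % P;
  const den = ((x1 - x2) % P + P) % P;
  const a1 = (num * modPow(den, P - 2n, P)) % P; // Fermat inverse, P prime
  return (((y1 - a1 * x1) % P) + P) % P;
}
```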
Problem statement
#23 provides some benchmarked insights into the resource impact of RLN requirements.
Not all requirements above are compatible with all resource-restricted clients (such as short-lived js-waku based browser nodes).
Specifically:
Possible approaches
Depending on use case, we likely need all three approaches below:
1. Full RLN in resource-restricted clients
Remaining effort: medium (implemented in client mode for nwaku and go-waku, WIP in js-waku - @weboko can we have a rough time estimate/breakdown of the work that's still required here?)
Clients simply download all RLN memberships, construct the Merkle tree and generate proofs themselves, which they attach to (lightpush-)published messages. These clients can use the same tree to verify messages received via filter/store.
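The client-side work in (1) can be sketched as a simple Merkle tree build over the downloaded id_commitments; SHA-256 stands in here for the Poseidon hash that real RLN trees use:

```typescript
// Minimal sketch of the tree construction approach (1) asks of the client.
// SHA-256 is a stand-in hash; real RLN trees use Poseidon.
import { createHash } from "node:crypto";

function hashPair(a: string, b: string): string {
  return createHash("sha256").update(a + b).digest("hex");
}

// Build all levels from the downloaded id_commitments; the final level holds
// the root that validators (and the contract) must agree on.
function buildTree(leaves: string[]): string[][] {
  let level = [...leaves];
  const levels = [level];
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // duplicate the last node when the level has an odd count
      next.push(hashPair(level[i], level[i + 1] ?? level[i]));
    }
    levels.push(next);
    level = next;
  }
  return levels;
}

const levels = buildTree(["c1", "c2", "c3"]);
const root = levels[levels.length - 1][0];
```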
Notes:
2. Dedicated RLN service node via REST API
Remaining effort: low-medium (requires extending lightpush API in go-waku/nwaku to attach proof on client's behalf)
This can serve as a workaround for (1). Projects can simply run full nodes, with registered RLN memberships, that act as a dedicated entry point into the network for their resource-restricted client(s). Clients use the existing REST API to lightpush messages to these dedicated service nodes, and the service node attaches an RLN proof on the client's behalf.
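The client side of (2) reduces to a plain HTTP call; the endpoint path, payload fields and pubsub topic below are illustrative rather than the exact nwaku REST schema:

```typescript
// Hypothetical shape of lightpushing via a service node's REST API.
// Endpoint path and payload fields are illustrative, NOT the exact nwaku schema.
function buildLightpushBody(
  contentTopic: string,
  payload: Uint8Array,
  pubsubTopic = "/waku/2/rs/1/0" // example shard, for illustration
) {
  return {
    pubsubTopic,
    message: {
      contentTopic,
      payload: Buffer.from(payload).toString("base64"),
      // nanosecond timestamp; precision loss beyond 2^53 is tolerated in this sketch
      timestamp: Date.now() * 1e6,
    },
  };
}

// The service node checks its membership quota, attaches the RLN proof and
// relays; the client only observes success or a rejection.
async function lightpushViaServiceNode(
  baseUrl: string,
  contentTopic: string,
  payload: Uint8Array
): Promise<boolean> {
  const res = await fetch(`${baseUrl}/lightpush/v1/message`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(buildLightpushBody(contentTopic, payload)),
  });
  return res.ok;
}
```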
3. Distributed RLN service
Remaining effort: high
This builds on (2), but allows the service to become generally available in the network. It requires a new protocol/extension to the lightpush protocol that allows a service node to offer one or more RLN memberships to clients and attach the proof to published messages on their behalf. It can be as simple as configuring multiple memberships on a service node, accounting for the use of these memberships per lightpush client, and providing appropriate feedback to clients (such as rate limit exceeded, no available memberships, etc.). Since membership registration (and publishing) costs money and resources, incentivisation will play an important role here.
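The membership accounting described above can be sketched as a small pool that tracks per-epoch usage and surfaces the suggested feedback codes; all names and limits are illustrative, not a specified protocol:

```typescript
// Sketch of the per-membership accounting a service node would need for (3).
// Names, limits and error codes are illustrative, not a specified protocol.
type LightpushResult = "ok" | "rate_limit_exceeded" | "no_available_memberships";

class MembershipPool {
  private usage: Map<string, number> = new Map(); // membershipId -> msgs this epoch

  constructor(
    private memberships: string[],
    private perEpochLimit: number
  ) {}

  // Pick a membership with remaining quota and account for the message.
  acquire(): { membershipId: string } | { error: LightpushResult } {
    if (this.memberships.length === 0) {
      return { error: "no_available_memberships" };
    }
    for (const id of this.memberships) {
      const used = this.usage.get(id) ?? 0;
      if (used < this.perEpochLimit) {
        this.usage.set(id, used + 1);
        return { membershipId: id };
      }
    }
    return { error: "rate_limit_exceeded" };
  }

  // Called when the RLN epoch rolls over.
  resetEpoch(): void {
    this.usage.clear();
  }
}
```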