New RFC: BCP Waku Simple Scaling #160

Closed · Tracked by #154
kaiserd opened this issue Jan 20, 2023 · 1 comment

kaiserd (Contributor) commented Jan 20, 2023

  • best current practice RFC explaining how to use currently available Waku components, plus simple additions / updates, to scale a Waku network to 1 million nodes
  • this comes with trade-offs in terms of decentralization and anonymity
  • The RFC will focus on Waku relay and Waku store, plus give outlooks for lightpush, filter, and peer exchange

Basic idea: scale to 1 million nodes with smaller multicast groups (communities)

Scale to 1 million nodes, with channels of at most 10000 nodes. (We still have to verify this number and may have to adjust it. The important point is that there is a limit on the number of nodes a single pubsub topic can support.)

  • assume a pubsub topic (single mesh) can scale to 10000 nodes
  • the layer above Waku Relay (the app, Status) allocates content topics (multicast groups) to pubsub topics (mesh networks); see the sketch after this list
    • this can be 1:1 for large multicast groups, or k:1 for smaller multicast groups
    • (sharding would automatically take care of this, but is left to a future milestone)
    • for a 1:1 mapping, we lose k-anonymity
    • nodes need bootstrap nodes for each pubsub topic that carries multicast groups they are interested in
      • these can be super-peers added to DNS discovery (super-peers are also needed for multicast groups beyond 10000 nodes, see below)
    • reactivate gossipsub peer exchange to get better connectivity (at the cost of anonymity)
  • store: centralized per community
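As an illustration of the k:1 allocation above, here is a minimal sketch in Go of a hash-based mapping from content topics to a fixed set of pubsub topics. The shard count and the pubsub topic naming scheme are assumptions for illustration, not part of the RFC:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// numPubsubTopics caps each mesh at roughly 10000 nodes. The exact
// shard count is an assumption here and would be tuned by the app layer.
const numPubsubTopics = 8

// pubsubTopicFor deterministically maps a content topic (multicast
// group) to one of numPubsubTopics pubsub topics, giving a k:1
// allocation whenever several content topics hash to the same shard.
// The "/waku/2/..." naming is illustrative only.
func pubsubTopicFor(contentTopic string) string {
	h := fnv.New32a()
	h.Write([]byte(contentTopic))
	return fmt.Sprintf("/waku/2/community-shard-%d", h.Sum32()%numPubsubTopics)
}

func main() {
	for _, ct := range []string{"/status/1/chat-a/proto", "/status/1/chat-b/proto"} {
		fmt.Println(ct, "->", pubsubTopicFor(ct))
	}
}
```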

Basic idea: scale to larger node numbers within a single multicast group (community)

(This is not in scope for the first version of the RFC. If desired, we could add a section or even a separate BCP RFC.)

  • for this, we need to split multicast groups onto several pubsub topics
  • the splitting is left to the app layer (inverse sharding could do this in a future milestone)
  • introduce super-peers (a trade-off at the cost of decentralization)
    • each pubsub topic has super-peers
    • the super-peers are responsible for providing messages across pubsub boundaries
  • carrying messages across pubsub topics would be done by store / super-peers:
    • (this requires adjusting the store, which is additional work to meet the tight timeline; future improved store versions will no longer use this)
    • use relay within a pubsub topic
    • super-peers will store all messages (within all pubsub topics; to scale this even further, the super-peers would have to be sharded too, but for 10x-100x it should be OK)
    • messages need a unique (time-sorted) identifier (this is a trade-off against anonymity, but is useful beyond a simple scaling solution; see the sketch after this list)
    • peers from any pubsub shard can ask the super-peers for messages associated with pubsub topics they are not part of
      • via filter
      • super-peers offer store to serve peers that have been offline
      • stronger peers can be part of several pubsub topics to reduce load
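To make the time-sorted identifier above concrete, here is a minimal sketch, again in Go. The exact layout is an assumption: it simply prefixes a SHA-256 digest of the payload with a big-endian nanosecond timestamp, so that byte-wise ordering of identifiers matches time ordering:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"time"
)

// messageID returns a 40-byte identifier: an 8-byte big-endian
// nanosecond timestamp followed by a SHA-256 digest of the payload.
// Lexicographic order on these IDs matches time order, which would let
// super-peers serve range queries such as "messages since T".
func messageID(payload []byte, sentAt time.Time) []byte {
	id := make([]byte, 8+sha256.Size)
	binary.BigEndian.PutUint64(id[:8], uint64(sentAt.UnixNano()))
	digest := sha256.Sum256(payload)
	copy(id[8:], digest[:])
	return id
}

func main() {
	fmt.Printf("%x\n", messageID([]byte("hello"), time.Now()))
}
```

Note the trade-off mentioned above: embedding a sender-supplied timestamp in the identifier leaks timing information, which is why this scheme weakens anonymity.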
kaiserd commented Feb 17, 2023

With 51/WAKU2-RELAY-SHARDING, and the rest of the simple scaling ideas potentially being specific to communities, the current plan is to merge the RFC tracked by this issue into #167.
If there turns out to be enough general material to warrant a dedicated RFC, we will resume work on the RFC tracked by this issue.
(Keeping this issue open until this decision is final.)


edit: We went that route. Closing this issue, and #167.
