update elastic scaling guide #6739
base: master
Conversation
//!
//! - The `DefaultCoreSelector` implements a round-robin selection on the cores that can be
//! occupied by the parachain at the very next relay parent. This is the equivalent of what all
//! parachains on production networks have been using so far.
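To make the round-robin idea concrete, here is a minimal sketch of that selection policy. This is illustrative only, not the actual `DefaultCoreSelector` implementation; the function name and signature are hypothetical.

```rust
// Illustrative sketch of round-robin core selection, not the real
// `DefaultCoreSelector` from cumulus. Names here are hypothetical.

/// Pick a core by cycling through the cores assigned to the parachain,
/// advancing once per authored block.
fn round_robin_core(block_number: u32, assigned_cores: &[u16]) -> Option<u16> {
    if assigned_cores.is_empty() {
        return None;
    }
    let idx = (block_number as usize) % assigned_cores.len();
    Some(assigned_cores[idx])
}

fn main() {
    // With cores [0, 3] assigned, consecutive blocks alternate between them.
    assert_eq!(round_robin_core(10, &[0, 3]), Some(0));
    assert_eq!(round_robin_core(11, &[0, 3]), Some(3));
}
```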
Hmm. Shall we rename this as part of this PR? It seems like `LookaheadCoreSelector` should be the "default", as we expect any new parachain to use asynchronous backing?
//! <div class="warning">If you configure a velocity which is different from the number of assigned
//! cores, the measured velocity in practice will be the minimum of these two. However, be mindful
//! that if the velocity is higher than the number of assigned cores, it's possible that
//! <a href="https://github.com/paritytech/polkadot-sdk/issues/6667">only a subset of the collator set will be authoring blocks.</a></div>
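The min relationship described in that warning can be sketched in one line; this is a hypothetical helper for illustration, not an API from the SDK.

```rust
// Hypothetical helper illustrating the warning above: the velocity observed
// in practice is the minimum of the configured velocity and the number of
// cores assigned to the parachain.
fn effective_velocity(configured_velocity: u32, assigned_cores: u32) -> u32 {
    configured_velocity.min(assigned_cores)
}

fn main() {
    // Velocity 4 configured but only 2 cores assigned: 2 blocks per relay block.
    assert_eq!(effective_velocity(4, 2), 2);
    // Velocity 2 configured with 3 cores: one core stays unused.
    assert_eq!(effective_velocity(2, 3), 2);
}
```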
The question is why we need to configure a velocity at all; it seems redundant.
Once the slot-based collator can produce multiple blocks per slot, we should also add that we recommend slot durations of at least 6s, preferably even 12s (better censorship resistance).
//! `overseer_handle` and `relay_chain_slot_duration` params passed to `start_consensus` and pass
//! in the `slot_based_handle`.
//!
//! ### Phase 2 - Configure core selection policy in the parachain runtime
Phase 2 assumes the candidate receipt v2 feature bit was enabled.
This phase will change after the feature bit is enabled on all networks and a form of #6939 is merged.
@@ -15,7 +15,9 @@ use polkadot_sdk::*;
use cumulus_client_cli::CollatorOptions;
use cumulus_client_collator::service::CollatorService;
#[docify::export(lookahead_collator)]
use cumulus_client_consensus_aura::collators::lookahead::{self as aura, Params as AuraParams};
use cumulus_client_consensus_aura::collators::slot_based::{
Changes in this file will be rolled back before merge, but they currently showcase what a parachain team using the template would need to do on the node side to use elastic scaling.
//!
//! ### Phase 3 - Configure maximum scaling factor in the runtime
//!
//! First of all, you need to decide the upper limit to how many parachain blocks you need to
Actually the thinking is the other way around - what is the minimum target block time? It is then no longer needed to configure any other parameters manually as you can compute them from this value.
you can also make all the calculations based on the velocity, which is what I describe here
I can see what is described here, but I want a better DX.
As you've noticed recently, people don't ask "how many parachain blocks can I produce per relay chain block?"; instead they ask "how can I get 500ms blocks?", because that is what their end users care about. The velocity of the parachain is largely an implementation detail.
With that being said, we can then remove all of the details about velocity and the concern that users need to compute all sorts of other constants.
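The DX suggested in these comments could be sketched as follows: derive the velocity (and hence the number of cores to assign) from a target block time, assuming the relay chain's 6-second slot. The function name and shape here are hypothetical, not an existing SDK API.

```rust
// Sketch of deriving velocity from a target block time, assuming a 6s relay
// chain slot. Hypothetical helper, not an existing API.
const RELAY_SLOT_MS: u64 = 6000;

/// Number of parachain blocks per relay chain block (and cores to assign)
/// needed to achieve at most `target_block_time_ms` per parachain block.
fn velocity_for_block_time(target_block_time_ms: u64) -> u64 {
    // Round up so the achieved block time never exceeds the target.
    RELAY_SLOT_MS.div_ceil(target_block_time_ms)
}

fn main() {
    assert_eq!(velocity_for_block_time(2000), 3); // 2s blocks -> 3 cores
    assert_eq!(velocity_for_block_time(500), 12); // 500ms blocks -> 12 cores
}
```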
//!
//! ## Current constraints
//!
//! Elastic scaling is still considered experimental software, so stability is not guaranteed.
After launching on Polkadot this is not true.
True, will update when that is the case
//! duration of 2 seconds per block.** Using the current implementation with multiple collators
//! adds additional latency to the block production pipeline. Assuming block execution takes
//! about the same as authorship, the additional overhead is equal to the duration of the authorship
//! plus the block announcement. Each collator must first import the previous block before
//! authoring a new one, so it is clear that the highest throughput can be achieved using a
//! single collator. Experiments show that the peak performance using more than one collator
//! (measured up to 10 collators) utilises 2 cores with an authorship time of 1.3 seconds per
//! block, which leaves 400ms for networking overhead. This would allow for 2.6 seconds of
//! execution, compared to the 2 seconds that async backing enabled.
//! The development required for enabling maximum compute throughput with multiple collators is
//! tracked by [this issue](https://github.com/paritytech/polkadot-sdk/issues/5190).
I think we can do much better in terms of structure here vs. a large blob of text, which is not easy to read and makes it hard to focus on the important information.
I rewrote this section. Let me know how it looks.
//! this should obviously only be used for testing purposes, due to the clear lack of decentralisation
//! and resilience. Experiments show that the peak compute throughput using more than one collator
//! (measured up to 10 collators) utilises 2 cores with an authorship time of 1.3 seconds per block,
//! which leaves 400ms for networking overhead. This would allow for 2.6 seconds of execution, compared
Let's add the formula as a function of latency to compute the max usable execution time.
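One possible form of that formula, under the assumptions already stated in the guide (each collator imports the previous block before authoring, and import takes about as long as authorship): with `cores` cores on a 6s relay slot, each block cycle is `6 / cores` seconds and must fit authorship + import + announcement latency, so authorship is `(6/cores - latency) / 2` and the total execution per relay block is `(6 - cores * latency) / 2`. A sketch (names are hypothetical):

```rust
// Sketch of the requested formula, under the assumptions in the guide:
// per block, authorship + import + announcement latency = 6/cores, with
// import time roughly equal to authorship time. Total execution per relay
// chain block is then:
//   cores * (6/cores - latency) / 2  =  (6 - cores * latency) / 2
const RELAY_SLOT_S: f64 = 6.0;

fn max_execution_per_relay_block(cores: u32, latency_s: f64) -> f64 {
    (RELAY_SLOT_S - cores as f64 * latency_s) / 2.0
}

fn main() {
    // 2 cores with 400ms networking overhead: 2.6s of execution per relay
    // chain block, matching the experiment quoted in the guide.
    let exec = max_execution_per_relay_block(2, 0.4);
    assert!((exec - 2.6).abs() < 1e-9);
}
```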
Resolves #5050

Updates the elastic scaling guide, taking into consideration:

This PR should not be merged until:

1. The `CandidateReceiptV2` node feature bit is enabled on all networks
2. The `experimental-ump-signals` feature of the parachain-system pallet is turned on by default (which can only be done after 1)

TODO: