Approve multiple candidates with a single signature #7554
base: sandreim/vrf_modulo_comapct_assignment
Conversation
TODO: Migration is not correctly handled, should be done before versi testing.
primitives/src/v5/mod.rs
Outdated
	}),
	DisputeStatement::Valid(ValidDisputeStatementKind::ApprovalChecking) =>
		ApprovalVote(candidate_hash).signing_payload(session),
	DisputeStatement::Valid(ValidDisputeStatementKind::ApprovalCheckingV2(
		candidate_hashes,
	)) => ApprovalVoteMultipleCandidates(candidate_hashes.clone()).signing_payload(session),
This should ensure that `candidate_hash` is included in `candidate_hashes`.
We could also save a bit of space by having the `ValidDisputeStatementKind::ApprovalCheckingV2` include only the other candidate hashes and the index in the vec where the candidate hash in question should be included. Maybe overcomplicated, but `payload_data` returns a `Vec<u8>` unconditionally, which is a problem: we can't represent a failure path with the current interface.
As it stands, we'd allow any set of candidate hashes to be treated as a valid dispute vote for any candidate.
Pseudocode
ValidDisputeStatementKind::ApprovalCheckingV2(candidate_hashes, index) => {
	let mut final_candidates = candidate_hashes.clone();
	// Clamp the index so the `insert` below cannot panic.
	let index = core::cmp::min(candidate_hashes.len(), index);
	final_candidates.insert(index, candidate_hash);
	ApprovalVoteMultipleCandidates(final_candidates).signing_payload(session)
},
Agree, this interface wasn't cleaned up yet; it was just in a state that worked on the happy path. I was thinking about making `candidate_hash` an `Option<CandidateHash>` and, when it is `Some`, just adding it to the payload.
What do you think about that?
Opted instead for throwing an error if the provided `candidate_hash` is not in the vec of `candidate_hashes`; I think that keeps the API easy to use and prevents accidental misuse.
See here: 2271c71#diff-e876407804ef19c762a4fe91db962a8eea5185a3aee42ed76b60fb8b549beb03R1288, let me know what you think.
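For reference, a minimal sketch of the shape this takes, mirroring the hunk quoted above (the error type and exact checks are assumptions, not the code from the linked commit):

```rust
DisputeStatement::Valid(ValidDisputeStatementKind::ApprovalCheckingV2(candidate_hashes)) => {
	// Refuse to build a signing payload for a candidate that is not
	// actually covered by the coalesced approval; this is the guard that
	// prevents any set of hashes being treated as a vote for any candidate.
	if !candidate_hashes.contains(&candidate_hash) {
		return Err(());
	}
	ApprovalVoteMultipleCandidates(candidate_hashes.clone()).signing_payload(session)
},
```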
It seems fine, so long as we throw an error in dispute handling, as is normally done.
I do feel this part of the code could be much better optimized.
primitives/src/vstaging/mod.rs
Outdated
pub max_approval_coalesce_count: u32,
/// The maximum time we wait for a candidate's approval to be coalesced with
/// approvals for other candidates before we sign it and distribute it to our peers.
pub max_approval_coalesce_wait_millis: u32,
IMO this shouldn't be in the runtime configuration; it is rather a strategy set on the node side.
The runtime should be concerned with validity, not necessarily how the nodes get there in the fastest possible way.
Are you referring just to the `max_approval_coalesce_wait_millis` part, or do you think that both configuration parameters shouldn't be there?
The count is a hard parameter defining a boundary between validity and invalidity, which could reasonably live in the runtime or else just be an implementation constant. Though it should probably be very high if defined in the runtime.
The wait time is just tactical from a validator and more advanced strategies would be precluded by hardcoding here.
Ok, I see. One reason I had for keeping at least `max_approval_coalesce_count` in the runtime is that it gives us a good knob to enable v2 approvals on all validators, instead of having to do basically 2 separate node releases to enable this logic.
But I guess just for that use-case I could go with the option where I make both `max_approval_coalesce_count` and `max_approval_coalesce_wait_ticks` implementation constants and just have a param in the runtime, `enable_approvals_v2`. Would that make more sense?
I haven't reviewed the changeset in-depth yet, but typically there are two approaches to enabling new features:
- Do a first node upgrade that adds support for the change, followed by a second one that activates the behavior, after giving enough time for nodes to upgrade to the first.
- Do a single node upgrade that adds support for the change, which is activated by a runtime switch such as a version increase.
(2) is usually better as it goes through chain governance, which it sounds like you are going for - great. (See the #5022 changeset for an example, where we use the `async_backing_params` runtime API to activate the feature.)

> I could go with the option where I make both max_approval_coalesce_count and max_approval_coalesce_wait_ticks implementation constants and just have a param in the runtime enable_approvals_v2. Would that make more sense?
Yep, makes sense to me, as long as the following statement is something you agree with: we don't mind coalesced bundles being arbitrarily large as long as they are valid, and don't expect that valid bundles will be of a size too large to handle within the disputes pallet. Setting a hard maximum seems valuable to protect per-block disputes load from causing overweight blocks.
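To make the activation path concrete, here is a minimal, self-contained sketch of the node-side gating being discussed. The field `enable_approvals_v2` and the constant value are assumptions taken from this thread, not a final API:

```rust
/// Illustrative implementation constant: the hard maximum protecting the
/// disputes pallet from arbitrarily large coalesced bundles.
const MAX_APPROVAL_COALESCE_COUNT: usize = 16;

/// Assumed shape of the runtime-provided parameters.
struct ApprovalVotingParams {
	enable_approvals_v2: bool,
}

/// How many approvals the node may coalesce under the given runtime params.
fn max_coalesce_count(params: &ApprovalVotingParams) -> usize {
	if params.enable_approvals_v2 {
		MAX_APPROVAL_COALESCE_COUNT
	} else {
		// v1 behaviour: one signature per candidate.
		1
	}
}
```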
> Setting a hard maximum seems valuable to protect per-block disputes load from causing overweight blocks.

Yes, we definitely need a maximum here to prevent per-block disputes from creating overweight blocks.
... additionally computed in ticks as it is done everywhere else. And added some tests to make sure approval-voting behaves the way we intended to.
	// Invariant: to our knowledge, none of the peers except for the `source` know about the
	// approval.
	metrics.on_approval_imported();
This should `inc_by(candidate_indices.count_ones())`.
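That is, something along these lines (the method name is hypothetical; the point is to count every candidate covered by the coalesced approval, not every message):

```rust
// Hypothetical sketch: bump the counter once per approved candidate in the
// coalesced message rather than once per message.
metrics.on_approval_imported_by(candidate_indices.count_ones() as u64);
```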
The CI pipeline was cancelled due to the failure of one of the required jobs.
@@ -162,6 +162,8 @@ where
			KeyOwnershipProof(relay_parent, validator_id, key_ownership_proof) => self
				.requests_cache
				.cache_key_ownership_proof((relay_parent, validator_id), key_ownership_proof),
			RequestResult::ApprovalVotingParams(relay_parent, params) =>
nit:
- RequestResult::ApprovalVotingParams(relay_parent, params) =>
+ ApprovalVotingParams(relay_parent, params) =>
@@ -682,6 +773,9 @@ struct State {
	clock: Box<dyn Clock + Send + Sync>,
	assignment_criteria: Box<dyn AssignmentCriteria + Send + Sync>,
	spans: HashMap<Hash, jaeger::PerLeafSpan>,
	// Store approval-voting params so that we don't always have to go to the
	// RuntimeApi to ask for this information, which requires a task context switch.
	approval_voting_params_cache: Option<LruCache<Hash, ApprovalVotingParams>>,
Can't this be cached per-session, to have fewer entries in this map? Maybe even add it to the `session_info_provider`.
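Something like the following, keyed by session index instead of block hash (a sketch assuming the `lru` crate and a `SessionIndex` alias; since the parameters only change at session boundaries, a handful of entries suffices):

```rust
use std::num::NonZeroUsize;
use lru::LruCache;

type SessionIndex = u32;

/// Assumed shape of the cached runtime parameters.
#[derive(Clone)]
struct ApprovalVotingParams {
	max_approval_coalesce_count: u32,
}

/// One cache entry per session instead of one per relay-chain block.
fn new_params_cache() -> LruCache<SessionIndex, ApprovalVotingParams> {
	LruCache::new(NonZeroUsize::new(8).expect("8 is non-zero; qed"))
}
```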
Initial implementation for the plan discussed here: paritytech/polkadot-sdk#701 on top of: #6782
Overall idea
When approval-voting checks a candidate and is ready to advertise the approval, defer it in a per-relay-chain-block queue until we either have `MAX_APPROVAL_COALESCE_COUNT` candidates to sign, or a candidate has stayed `MAX_APPROVALS_COALESCE_TICKS` in the queue; in both cases we sign whatever candidates we have available (see the sketch below). This should allow us to reduce the number of approval messages we have to create/send/verify. The parameters are configurable, so we should find some values that balance:
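A rough, self-contained sketch of that deferral rule (the constant values and types are assumptions, not the implementation):

```rust
/// Illustrative constants; the real values are the configurable parameters
/// discussed above.
const MAX_APPROVAL_COALESCE_COUNT: usize = 16;
const MAX_APPROVALS_COALESCE_TICKS: u64 = 12;

/// An approval we have checked but not yet signed, queued per relay-chain block.
struct PendingApproval {
	candidate_index: u32,
	queued_at_tick: u64,
}

/// Sign a single coalesced approval once the queue is full enough, or the
/// oldest entry has waited long enough.
fn should_sign_now(pending: &[PendingApproval], current_tick: u64) -> bool {
	pending.len() >= MAX_APPROVAL_COALESCE_COUNT
		|| pending
			.iter()
			.any(|p| current_tick.saturating_sub(p.queued_at_tick) >= MAX_APPROVALS_COALESCE_TICKS)
}
```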
TODO: