feat: gather v0 block signatures from stackerdb #4807
Conversation
Opening for draft reviews - there is one integration test implemented that demonstrates the full flow of proposing a Nakamoto block, gathering block responses from signers, and then broadcasting that block.
We do have a ticket in place to port some existing signer tests over that might be relevant, but in terms of testing the miner's signature aggregation, this seems like a good test and I wouldn't block this PR based on those other tests. I do think we should add some unit tests to the logic for accumulating signatures. It might be a bit difficult as you may need to forcibly write stackerdb events, but we should simulate bad signatures being written and check that the threshold calculations are as expected. You could maybe even add it as an integration test if you see how I force a block proposal manually in my one v0 signer test: instead of forcing a block proposal and having signers respond, do not spin up any signers (or shut them all down inside the test) and manually act as a signer, broadcasting bad signatures, not enough signatures, etc. to stackerdb.
@jferrant yeah good call, I'll add more logic and tests around verifying and gathering signatures.
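A minimal sketch of the kind of unit-level threshold check being discussed here. Everything below (`SignerEntry`, `threshold_reached`, the 70% figure) is an illustrative stand-in, not code or constants from this PR:

```rust
// Hypothetical unit-style check of the weight-threshold accumulation described above.
struct SignerEntry {
    weight: u32,
    signature_valid: bool,
}

fn threshold_reached(entries: &[SignerEntry], total_weight: u32, threshold_pct: u64) -> bool {
    let signed = entries
        .iter()
        .filter(|e| e.signature_valid) // a "bad" signature contributes no weight
        .fold(0u32, |acc, e| {
            acc.checked_add(e.weight)
                .expect("FATAL: total weight signed exceeds u32::MAX")
        });
    // integer arithmetic to avoid floating point
    u64::from(signed) * 100 >= u64::from(total_weight) * threshold_pct
}

#[test]
fn bad_signatures_do_not_count_toward_threshold() {
    let entries = vec![
        SignerEntry { weight: 10, signature_valid: true },
        SignerEntry { weight: 10, signature_valid: false }, // simulated bad signature
        SignerEntry { weight: 10, signature_valid: true },
    ];
    // 20 of 30 weight signed: below a 70% threshold
    assert!(!threshold_reached(&entries, 30, 70));
}
```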
@jcnelson @kantai @jferrant I've opened this as "ready for review" after adding additional logic and checks for the data the miner gets from StackerDB. I've really struggled with integration testing for these scenarios. For example, when using …
```rust
reward_set_signers
    .iter()
    .cloned()
    .map(|s| s.weight)
    .fold(0, |acc, w| {
        acc.checked_add(w)
            .expect("FATAL: Total signer weight > u32::MAX")
    });
```
I think this would be nice to implement as an instance method on `RewardSet`, i.e.:
```rust
pub fn total_signing_weight(&self) -> Result<u32, ChainstateError> {
    let Some(ref reward_set_signers) = self.signers else {
        error!("Could not initialize WSTS coordinator for reward set without signer");
        return Err(ChainstateError::NoRegisteredSigners(0));
    };
    Ok(reward_set_signers
        .iter()
        .fold(0, |acc, s| {
            acc.checked_add(s.weight)
                .expect("FATAL: Total signer weight > u32::MAX")
        }))
}
```
👍🏼, added and am now using it in `SignCoordinator::new` and `verify_signer_signature`.
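A rough sketch of how the new helper might be called from `SignCoordinator::new`. The variable names, error propagation, and log line here are assumptions, not the exact code from this PR:

```rust
// Assumed call site inside SignCoordinator::new
let total_weight = reward_set.total_signing_weight()?;
debug!("SignCoordinator: total signing weight is {total_weight}");
```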
```rust
        .checked_add(signer_entry.weight)
        .expect("FATAL: total weight signed exceeds u32::MAX");
}
debug!("SignCoordinator: Total weight signed: {total_weight_signed}");
```
This might be helpful with a bit more info:

```rust
debug!("Added another signature to block";
    "block_signer_sighash" => %block_sighash,
    "signer_pubkey" => %signer_pubkey,
    "signer_slot_id" => slot_id,
    "signature" => %signature,
    "signer_weight" => signer_entry.weight
    "total_weight_signed" => total_weight_signed,
);
```
Good call - I've added this, but I removed `signer_weight` as it otherwise goes over the "max recursion" limit.
This LGTM, just a few comments
Oh, that’s just because the comma was missing. Can you add it back in?
Ah, yeah added 😀
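For reference, the resulting call presumably looks like the suggestion above with the missing comma after `signer_weight` restored (inferred from this thread, not copied from the merged code):

```rust
debug!("Added another signature to block";
    "block_signer_sighash" => %block_sighash,
    "signer_pubkey" => %signer_pubkey,
    "signer_slot_id" => slot_id,
    "signature" => %signature,
    "signer_weight" => signer_entry.weight,
    "total_weight_signed" => total_weight_signed,
);
```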
LGTM!
```rust
    })
    .collect::<Result<HashMap<_, _>, ChainstateError>>()?;

let coordinator: FireCoordinator<Aggregator> = FireCoordinator::new(coord_config);
```
Can this be moved into the `#[cfg(test)]` block? Also, can the imports for wsts be moved there as well?
Both the `cfg(test)` block and the "non-test" block return a `SignCoordinator`, which has a `coordinator` field. So we'd have to return some other type if we wanted to only include the wsts stuff in the `cfg(test)` block. Ultimately, this whole file relies on both the v0 and v1 stuff (i.e. `begin_sign_v0` and `begin_sign_v1`) - there's no easy fix other than separate files / structs IMO.
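An illustrative sketch of the constraint described above, with a placeholder standing in for wsts's `FireCoordinator<Aggregator>` (the real struct has more fields; this only shows the shape of the problem):

```rust
// Placeholder for the wsts coordinator type; not the real import.
struct FireCoordinator;

pub struct SignCoordinator {
    // If this field were wrapped in #[cfg(test)], SignCoordinator would have a
    // different shape outside of tests, and both the test and non-test arms of
    // SignCoordinator::new (plus begin_sign_v1) would have to diverge with it.
    coordinator: FireCoordinator,
    // ... fields shared by begin_sign_v0 and begin_sign_v1 elided ...
}
```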
```rust
assert!(all_signed);

// Test prometheus metrics response
#[cfg(feature = "monitoring_prom")]
```
Is this getting tested in CI?
This test is running in the `integration-tests` workflow, and I believe those tests are run with the `monitoring_prom` feature enabled.
LGTM; just a couple of minor comments
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
This PR updates the miner's `SignCoordinator` to gather signer signatures using the new header format.
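As a summary of the flow discussed in this thread, here is a high-level sketch of the gathering loop. All types and helpers below are simplified stand-ins (not the real `SignCoordinator` API); only the overall shape (ignore signatures that don't validate over the block, count each signer's weight once, stop at the threshold) follows the discussion above:

```rust
use std::collections::HashMap;

struct Sig; // stand-in for a recoverable signature pulled from a stackerdb chunk
struct Entry {
    weight: u32,
}

fn gather_signatures(
    // (slot_id, signature, "validates over the proposed block's signer sighash?")
    events: impl Iterator<Item = (u32, Sig, bool)>,
    signers: &HashMap<u32, Entry>,
    total_weight: u32,
    threshold_pct: u64,
) -> Vec<Sig> {
    let mut gathered: HashMap<u32, Sig> = HashMap::new();
    let mut total_weight_signed: u32 = 0;
    for (slot_id, signature, valid) in events {
        // Ignore signatures that do not validate over the proposed block.
        if !valid {
            continue;
        }
        let Some(entry) = signers.get(&slot_id) else { continue };
        // Count each signer's weight at most once.
        if !gathered.contains_key(&slot_id) {
            total_weight_signed = total_weight_signed
                .checked_add(entry.weight)
                .expect("FATAL: total weight signed exceeds u32::MAX");
        }
        gathered.insert(slot_id, signature);
        // Stop once the signing threshold is reached; the caller can then
        // attach the gathered signatures to the block and broadcast it.
        if u64::from(total_weight_signed) * 100 >= u64::from(total_weight) * threshold_pct {
            break;
        }
    }
    gathered.into_values().collect()
}
```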