Conversation
Commits:
- SecretStore: pass real error in error messages
- SecretStore: is_internal_error -> Error::is_non_fatal
- warnings
- SecretStore: ConsensusTemporaryUnreachable
- fix after merge
- removed comments
- removed comments
- SecretStore: updated HTTP error responses
- SecretStore: more ConsensusTemporaryUnreachable tests
- fix after rebase
SecretStore: service pack (continue)
Test repeatedly failed:
@svyatonik @5chdn I think that error is …
Ah, gotcha. @niklasad1 wanna try to give this PR the first review?
parity/configuration.rs (Outdated)
"registry" => Ok(Some(SecretStoreContractAddress::Registry)), | ||
a => Ok(Some(SecretStoreContractAddress::Address(a.parse().map_err(|e| format!("{}", e))?))), | ||
fn into_secretstore_service_contract_address(s: Option<&String>) -> Result<Option<SecretStoreContractAddress>, String> { | ||
match s.map(|x| &**x) { |
I think this should be simplified to `s.map(String::as_str)` for readability purposes!
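For illustration, a minimal self-contained sketch of the suggested form; the enum payload, the `None`/`Some` arms, and `main` are stand-ins rather than the actual parity code:

```rust
// Stand-in for parity's SecretStoreContractAddress; the real variant
// payload is an Ethereum address, not a String.
#[derive(Debug)]
enum SecretStoreContractAddress {
    Registry,
    Address(String),
}

fn into_secretstore_service_contract_address(
    s: Option<&String>,
) -> Result<Option<SecretStoreContractAddress>, String> {
    // String::as_str names the conversion directly, unlike `|x| &**x`
    match s.map(String::as_str) {
        None => Ok(None),
        Some("registry") => Ok(Some(SecretStoreContractAddress::Registry)),
        Some(a) => Ok(Some(SecretStoreContractAddress::Address(a.to_owned()))),
    }
}

fn main() {
    let s = Some("registry".to_owned());
    println!("{:?}", into_secretstore_service_contract_address(s.as_ref()));
}
```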
	Some(FailedKeyVersionContinueAction::Decrypt(Some(ref origin), ref requester)) =>
		self.data.lock().failed_continue_with =
			Some(FailedContinueAction::Decrypt(Some(origin.clone().into()), requester.clone().into())),
	_ => (),
Use `if let` instead, because you only care about one pattern!
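A sketch of the `if let` form; the types here are simplified stand-ins so the example compiles on its own:

```rust
// Stand-in types; the real ones live in the secret-store session code.
#[derive(Debug)]
struct Origin(u64);
#[derive(Debug)]
struct Requester(u64);

enum FailedKeyVersionContinueAction {
    Decrypt(Option<Origin>, Requester),
}

#[derive(Debug)]
enum FailedContinueAction {
    Decrypt(Option<Origin>, Requester),
}

fn record_failure(
    action: Option<FailedKeyVersionContinueAction>,
    failed_continue_with: &mut Option<FailedContinueAction>,
) {
    // `if let` makes the single interesting pattern explicit and drops
    // the `_ => ()` catch-all arm entirely.
    if let Some(FailedKeyVersionContinueAction::Decrypt(Some(origin), requester)) = action {
        *failed_continue_with = Some(FailedContinueAction::Decrypt(Some(origin), requester));
    }
}

fn main() {
    let mut failed = None;
    record_failure(
        Some(FailedKeyVersionContinueAction::Decrypt(Some(Origin(1)), Requester(2))),
        &mut failed,
    );
    println!("{:?}", failed);
}
```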
@@ -361,8 +391,47 @@ impl<T> SessionImpl<T> where T: SessionTransport {
	let confirmations = data.confirmations.as_ref().expect(reason);
	let versions = data.versions.as_ref().expect(reason);
	if let Some(result) = core.result_computer.compute_result(data.threshold.clone(), confirmations, versions) {
	// when master node processing decryption service request, it starts with a key versions negotiation session
when the master node
a key version negotiation
@@ -361,8 +391,47 @@ impl<T> SessionImpl<T> where T: SessionTransport {
	let confirmations = data.confirmations.as_ref().expect(reason);
	let versions = data.versions.as_ref().expect(reason);
	if let Some(result) = core.result_computer.compute_result(data.threshold.clone(), confirmations, versions) {
	// when master node processing decryption service request, it starts with a key versions negotiation session
	// if negotiation fails, only master node knows about it
if the negotiation fails
@@ -361,8 +391,47 @@ impl<T> SessionImpl<T> where T: SessionTransport {
	let confirmations = data.confirmations.as_ref().expect(reason);
	let versions = data.versions.as_ref().expect(reason);
	if let Some(result) = core.result_computer.compute_result(data.threshold.clone(), confirmations, versions) {
	// when master node processing decryption service request, it starts with a key versions negotiation session
	// if negotiation fails, only master node knows about it
	// => if error is fatal, only master will know about it and report to the contract && request will never be rejected
if the error is fatal, only the master ...
`report it`
the request will ...
@@ -361,8 +391,47 @@ impl<T> SessionImpl<T> where T: SessionTransport {
	let confirmations = data.confirmations.as_ref().expect(reason);
Out of scope for this PR, but it would be nice if `fn try_complete` returned a `Result` instead!
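A hedged sketch of what that signature could look like; all names and the error variant here are hypothetical, not the real session code:

```rust
// Hypothetical sketch: surface the "confirmations/versions missing" case
// as an error instead of expect()-ing on it inside try_complete.
#[derive(Debug)]
struct SessionResult; // stand-in for the real session result type

#[derive(Debug)]
enum Error {
    InvalidStateForRequest, // stand-in variant
}

struct SessionData {
    confirmations: Option<Vec<u64>>, // stand-ins for the real collections
    versions: Option<Vec<u64>>,
}

fn try_complete(data: &SessionData) -> Result<Option<SessionResult>, Error> {
    // propagate the broken invariant instead of panicking
    let _confirmations = data.confirmations.as_ref().ok_or(Error::InvalidStateForRequest)?;
    let _versions = data.versions.as_ref().ok_or(Error::InvalidStateForRequest)?;
    // ...run the result computer; Ok(None) means "not yet complete"...
    Ok(None)
}
```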
Needs a 2nd review :)
Long shot, @folsen do you want to review this? 🙉
	}
	let key_share_owners = message.version_holders.iter().cloned().map(Into::into).collect();
	let new_nodes_set = data.new_nodes_set.as_ref()
		.expect("new_nodes_set is filled during consensus establishing; change sessions are running after this; qed");
Can you explain why `expect()` is used here instead of returning an error?
- Because I'm sure that the error won't ever happen. And I'm sure because...
- ...the `ServerSetChange` session consists of two (omitting details) phases. The first one (`EstablishingConsensus`) is when the master node prepares the session structure (i.e. asks all other nodes whether they want to participate in the session, what shares they have, and what to do with those shares: either leave them as-is, or add/remove them). The second phase (`RunningShareChangeSessions`) starts after the first phase is finished, and it is the phase where shares are actually altered (new shares are added, old shares are removed). `on_initialize_share_change_session` is called during the second phase; this is checked at the beginning of the method. And `new_nodes_set` is initialized during the first phase: on the master node during the session `initialize` call, and on other nodes when the `UnknownSessionsRequest` message is received (that's the transition from phase 1 to phase 2).
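To make that invariant concrete, here is a minimal sketch (all types are simplified stand-ins for the real `ServerSetChange` session code) of why the `expect` cannot fire: the phase check at the top of the method guarantees that phase 1, which fills `new_nodes_set`, has already completed:

```rust
// Simplified stand-ins for the real ServerSetChange session types.
#[derive(PartialEq)]
enum SessionState {
    EstablishingConsensus,      // phase 1: new_nodes_set gets filled here
    RunningShareChangeSessions, // phase 2: shares are actually altered
}

struct SessionData {
    state: SessionState,
    new_nodes_set: Option<Vec<u64>>, // stand-in for the real node-id set
}

#[derive(Debug)]
struct InvalidStateError;

fn on_initialize_share_change_session(data: &SessionData) -> Result<(), InvalidStateError> {
    // the phase is checked at the beginning of the method...
    if data.state != SessionState::RunningShareChangeSessions {
        return Err(InvalidStateError);
    }
    // ...so phase 1 must have completed, and the set must be filled
    let _new_nodes_set = data.new_nodes_set.as_ref()
        .expect("new_nodes_set is filled during consensus establishing; \
            change sessions are running after this; qed");
    Ok(())
}
```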
pub fn make_cluster_sessions() -> ClusterSessions {
	let key_pair = Random.generate().unwrap();
	let config = ClusterConfiguration {
		threads: 1,
		self_key_pair: Arc::new(PlainNodeKeyPair::new(key_pair.clone())),
		listen_address: ("127.0.0.1".to_owned(), 100_u16),
-		key_server_set: Arc::new(MapKeyServerSet::new(vec![(key_pair.public().clone(), format!("127.0.0.1:{}", 100).parse().unwrap())].into_iter().collect())),
+		key_server_set: Arc::new(MapKeyServerSet::new(false, vec![(key_pair.public().clone(), format!("127.0.0.1:{}", 100).parse().unwrap())].into_iter().collect())),
Replace `format!` with `"127.0.0.1:100".parse().unwrap()` to get rid of a String allocation!
Looks good, but I can't say that I fully understand everything, so it would probably be good to have another reviewer!
I think this is good to go once you resolve conflicts
* master:
  - Minor fix in chain supplier and light provider (#8906)
  - Block 0 is valid in queries (#8891)
  - fixed osx permissions (#8901)
  - Atomic create new files with permissions to owner in ethstore (#8896)
  - Add ETC Cooperative-run load balanced parity node (#8892)
  - Add support for --chain tobalaba (#8870)
  - fix some warns on nightly (#8889)
  - Add new ovh bootnodes and fix port for foundation bootnode 3.2 (#8886)
  - SecretStore: service pack 1 (#8435)
* SecretStore: error unify initial commit
  - SecretStore: pass real error in error messages
  - SecretStore: is_internal_error -> Error::is_non_fatal
  - warnings
  - SecretStore: ConsensusTemporaryUnreachable
  - fix after merge
  - removed comments
  - removed comments
  - SecretStore: updated HTTP error responses
  - SecretStore: more ConsensusTemporaryUnreachable tests
  - fix after rebase
* SecretStore: unified SS contract config options && read
* SecretStore: service pack
  - SecretStore: service pack (continue)
* fixed grumbles
…rp_sync_on_light_client
* 'master' of https://github.com/paritytech/parity: (29 commits)
  - Block 0 is valid in queries (openethereum#8891)
  - fixed osx permissions (openethereum#8901)
  - Atomic create new files with permissions to owner in ethstore (openethereum#8896)
  - Add ETC Cooperative-run load balanced parity node (openethereum#8892)
  - Add support for --chain tobalaba (openethereum#8870)
  - fix some warns on nightly (openethereum#8889)
  - Add new ovh bootnodes and fix port for foundation bootnode 3.2 (openethereum#8886)
  - SecretStore: service pack 1 (openethereum#8435)
  - Handle removed logs in filter changes and add geth compatibility field (openethereum#8796)
  - fixed ipc leak, closes openethereum#8774 (openethereum#8876)
  - scripts: remove md5 checksums (openethereum#8884)
  - hardware_wallet/Ledger `Sign messages` + some refactoring (openethereum#8868)
  - Check whether we need resealing in miner and unwrap has_account in account_provider (openethereum#8853)
  - docker: Fix alpine build (openethereum#8878)
  - Remove mac os installers etc (openethereum#8875)
  - README.md: update the list of dependencies (openethereum#8864)
  - Fix concurrent access to signer queue (openethereum#8854)
  - Tx permission contract improvement (openethereum#8400)
  - Limit the number of transactions in pending set (openethereum#8777)
  - Use sealing.enabled to emit eth_mining information (openethereum#8844)
  ...
on top of #8357
closes #7956
This is a PR for cumulative fixes (I didn't want to spam with many SS PRs), made during the SS initial nodes deployment on Kovan. Details: