feat: prevent banning of connected base node in wallet #3638
Conversation
Force-pushed from a5a7c77 to 0a07ced.
I would prefer the terms "allow_list" and "add_peer_to_allow_list" rather than "exclude_ban_peer".
comms/dht/src/connectivity/mod.rs
Outdated
```diff
@@ -67,6 +70,7 @@ pub struct DhtConnectivity {
     config: DhtConfig,
     peer_manager: Arc<PeerManager>,
     node_identity: Arc<NodeIdentity>,
+    ban_exclusions: Arc<RwLock<Vec<NodeId>>>,
```
Can this not just be local? I think the other `Arc`s are shared among multiple different structs, whereas there should only be one `DhtConnectivity`.
Suggested change:

```diff
-    ban_exclusions: Arc<RwLock<Vec<NodeId>>>,
+    ban_exclusions: Vec<NodeId>,
```
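The reviewer's point can be sketched as follows: since the actor is the sole owner of the list, a plain `Vec` with `&mut self` methods suffices and no `Arc<RwLock<..>>` is needed. This is a minimal illustration, not the PR's actual code; `NodeId` is stubbed as a `String` and the method names are hypothetical.

```rust
// NodeId is stubbed for illustration; the real type lives in tari comms.
type NodeId = String;

struct DhtConnectivity {
    // Owned exclusively by this actor, so no shared-ownership wrapper needed.
    allow_list: Vec<NodeId>,
}

impl DhtConnectivity {
    fn new() -> Self {
        Self { allow_list: Vec::new() }
    }

    // Mutation needs only &mut self because there is a single owner.
    fn add_peer_to_allow_list(&mut self, peer: NodeId) {
        if !self.allow_list.contains(&peer) {
            self.allow_list.push(peer);
        }
    }

    fn is_allow_listed(&self, peer: &NodeId) -> bool {
        self.allow_list.contains(peer)
    }
}
```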
comms/src/connectivity/requester.rs
Outdated
```diff
@@ -54,6 +54,7 @@ pub enum ConnectivityEvent {
     PeerConnected(PeerConnection),
     PeerConnectFailed(NodeId),
     PeerBanned(NodeId),
+    ExcludePeerBan(NodeId),
```
Is there something that subscribes to this event? I don't know if it's worth publishing.
It looks like this is being used to communicate with the DHT connectivity actor, which doesn't have a direct handle because it doesn't provide a service API. However, I don't think this is really the ideal way to plumb this communication in. While it might make sense to add a peer to the allowlist via the `Comms::connectivity` manager, how do you manage that list afterwards in terms of seeing who is on it and removing peers from it?
This functionality would typically be called an allowlist (formerly a whitelist).
I don't think it is ideal to make the interface for adding to the allowlist a call via the `Comms::connectivity` manager. You should add a service handle to the `Dht::connectivity` actor to build out a service API, because you will not just be adding to the allowlist but also querying who is on it and removing peers from it.
This PR adds to the allowlist but doesn't remove the previous base node from the allowlist when a new one is selected, as described in the ticket.
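The service-handle approach the reviewer describes could look roughly like this: an actor owns the list and callers talk to it through a small request enum over a channel, supporting add, remove, and query. All names here are hypothetical, and `std::sync::mpsc` with a thread stands in for the project's async actor machinery.

```rust
use std::sync::mpsc;
use std::thread;

type NodeId = String;

// Hypothetical request type for an allow-list service API.
enum AllowListRequest {
    Add(NodeId),
    Remove(NodeId),
    List(mpsc::Sender<Vec<NodeId>>),
}

// Hypothetical handle that other services would hold.
struct AllowListHandle {
    tx: mpsc::Sender<AllowListRequest>,
}

impl AllowListHandle {
    fn add(&self, peer: NodeId) {
        let _ = self.tx.send(AllowListRequest::Add(peer));
    }
    fn remove(&self, peer: NodeId) {
        let _ = self.tx.send(AllowListRequest::Remove(peer));
    }
    fn list(&self) -> Vec<NodeId> {
        let (reply_tx, reply_rx) = mpsc::channel();
        let _ = self.tx.send(AllowListRequest::List(reply_tx));
        reply_rx.recv().unwrap_or_default()
    }
}

// Spawn the actor loop that exclusively owns the list.
fn spawn_allow_list_actor() -> AllowListHandle {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut allow_list: Vec<NodeId> = Vec::new();
        for req in rx {
            match req {
                AllowListRequest::Add(p) => {
                    if !allow_list.contains(&p) {
                        allow_list.push(p);
                    }
                },
                AllowListRequest::Remove(p) => allow_list.retain(|x| x != &p),
                AllowListRequest::List(reply) => {
                    let _ = reply.send(allow_list.clone());
                },
            }
        }
    });
    AllowListHandle { tx }
}
```

With such a handle, removing the previous base node when a new one is selected becomes an ordinary `remove` call rather than another one-way event.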
Force-pushed from a764b7b to 46a9b37.
Not sure why the base node would be sending so many messages to the wallet; that doesn't seem right.
Assuming there is a valid reason to add this, an allowlist of peers that cannot be banned could be added either to comms connectivity (responsible for banning peers it is asked to: "don't ever ban these peers") or to DHT connectivity (responsible for checking message rates: "don't ban this peer for message-rate violations").
I think the former makes more sense, and there is already a service API to work with.

Observed this occurring during a stress test; the base node was relaying SAF messages at a high rate.
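The "former" option above, placing the check where bans are processed, can be sketched as a guard at the top of the ban path: a ban request for an allow-listed peer simply becomes a no-op. Struct and method names below are illustrative, not the actual `comms::connectivity` API.

```rust
type NodeId = String;

// Illustrative stand-in for the comms connectivity manager's ban state.
struct ConnectivityManager {
    allow_list: Vec<NodeId>,
    banned: Vec<NodeId>,
}

impl ConnectivityManager {
    fn new() -> Self {
        Self { allow_list: Vec::new(), banned: Vec::new() }
    }

    // Returns true if the ban was applied, false if the peer is protected.
    fn ban_peer(&mut self, peer: NodeId) -> bool {
        if self.allow_list.contains(&peer) {
            // "don't ever ban these peers" -- every ban reason is covered,
            // because all ban requests funnel through this one place.
            return false;
        }
        if !self.banned.contains(&peer) {
            self.banned.push(peer);
        }
        true
    }
}
```

Because every ban request funnels through this single point, the allowlist protects against all ban reasons at once, rather than only the message-rate case handled in DHT connectivity.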
Force-pushed from 46a9b37 to 3518491.
This PR excludes the connected base node in the wallet from being banned. Review comments addressed.
Force-pushed from 3518491 to 4fb8254.
SAF messages are sent in bundles of messages, should be
The sentiment is that there are several reasons a wallet might ban a base node, with new ones perhaps being added in the future, but I would like to preserve the concept that a wallet will never ban the base node it has been explicitly set to use, for any reason other than the WireMode byte. The WireMode byte, though, is enforced at a lower level, I believe.
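The ticket's requirement that only the currently selected base node is protected can be sketched as a swap on selection: setting a new base node removes the old one from the allow list so it becomes bannable again. Names are hypothetical and `NodeId` is stubbed as a `String`.

```rust
type NodeId = String;

// Illustrative wallet-side state: exactly one base node on the allow list.
struct WalletConnectivity {
    current_base_node: Option<NodeId>,
    allow_list: Vec<NodeId>,
}

impl WalletConnectivity {
    fn new() -> Self {
        Self { current_base_node: None, allow_list: Vec::new() }
    }

    // Selecting a new base node replaces the previous allow-list entry.
    fn set_base_node(&mut self, peer: NodeId) {
        if let Some(old) = self.current_base_node.take() {
            self.allow_list.retain(|p| p != &old);
        }
        self.allow_list.push(peer.clone());
        self.current_base_node = Some(peer);
    }
}
```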
This should not be implemented in `dht::connectivity`, where you need to communicate with the DHT actor via an event stream. Please implement this as an allowlist where the bans are processed: in the `comms::connectivity` service. Again, we are not focusing on a single ban case; the idea is to implement an allow list that prevents any bans from occurring for a node on the allow list.
See PR #3642
Description
This PR excludes the connected base node in the wallet from being banned.
Motivation and Context
Usability
How Has This Been Tested?

```
cargo test --all
nvm use 12.22.6 && node_modules/.bin/cucumber-js --profile "ci" --tags "not @long-running and not @broken"
```