Kad-GetProviders doesn't work on two nodes #1526
Comments
@michaelvoronov would you mind providing a unit test that reproduces the failure above?
@mxinden I would be happy to provide such a test, as I just hit the issue when trying to adapt the distributed-key-value-store example to put providers instead of putting values :) I'm not very familiar with the code base; where would it make more sense? Would ...
Sorry, I missed the notification. We've switched to the put/get functionality, but I think this issue can be seen from the source code.
@BlackYoup either in ...

@michaelvoronov I am still having difficulties following your reasoning above.
@mxinden I opened #1704, which updates the distributed-key-value-store example. With this, if you run it in two terminals, nothing will be printed in terminal 2. If you add a log when the ...
@mxinden Let's consider two nodes (1 and 2) in a network that know about each other. Node 1 starts providing some key and then node 2 calls get providers for this key, which triggers `provider_peers`:

```rust
fn provider_peers(&mut self, key: &record::Key, source: &PeerId) -> Vec<KadPeer> {
    let kbuckets = &mut self.kbuckets;
    self.store.providers(key)
        .into_iter()
        .filter_map(move |p|
            if &p.provider != source {
                let key = kbucket::Key::new(p.provider.clone());
                kbuckets.entry(&key).view().map(|e| KadPeer::from(e.to_owned()))
            } else {
                None
            })
        .take(self.queries.config().replication_factor.get())
        .collect()
}
```

From this code it can be seen that info about providers is collected only from the kbuckets. But the kbuckets don't contain an entry for the self node (1). So node 2 never learns that node 1 is a provider.
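A minimal, self-contained model of that filter (plain std collections standing in for the store and the k-buckets; the names `provider_peers_model`, `providers`, and `routing_table` are illustrative, not the crate's API) shows why a two-node setup comes up empty:

```rust
use std::collections::HashSet;

/// Stand-in for the filter inside `provider_peers`: keep every provider that
/// is neither the requesting peer nor missing from the routing table.
fn provider_peers_model<'a>(
    providers: &[&'a str],            // providers of the key, as stored locally
    routing_table: &HashSet<&'a str>, // peers in the k-buckets (never contains the local node)
    source: &str,                     // the peer that sent the GET_PROVIDERS request
) -> Vec<&'a str> {
    providers
        .iter()
        .copied()
        .filter(|p| *p != source)              // skip the requester itself
        .filter(|p| routing_table.contains(p)) // only peers present in the k-buckets survive
        .collect()
}

fn main() {
    // Two-node network: node 1 (the local node) provides the key, node 2 asks for providers.
    let providers = ["node-1"];
    // Node 1's routing table contains node 2, but never node 1 itself.
    let routing_table: HashSet<&str> = ["node-2"].into_iter().collect();

    // Node 2 is the source of the request; node 1 is filtered out because it
    // is not in its own routing table, so the reply is empty.
    let reply = provider_peers_model(&providers, &routing_table, "node-2");
    assert!(reply.is_empty());
    println!("providers returned to node-2: {:?}", reply);
}
```

With a third node the request can be answered by a peer whose routing table does contain the provider, which is why the problem only shows up with exactly two nodes.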
Thanks for the easy way to reproduce this @BlackYoup. Thanks @michaelvoronov for tracking the bug down all the way to `provider_peers`. Even though this is a bit of an edge case (2-node cluster), I am still of the opinion that it is worth fixing. @BlackYoup @michaelvoronov would one of you like to tackle this in a pull request? I took a couple of notes to describe how I would approach the issue.

```diff
diff --git a/protocols/kad/src/behaviour.rs b/protocols/kad/src/behaviour.rs
index dc06a444..5b9bccfd 100644
--- a/protocols/kad/src/behaviour.rs
+++ b/protocols/kad/src/behaviour.rs
@@ -790,6 +790,13 @@ where
                 if &p.provider != source {
                     let key = kbucket::Key::new(p.provider.clone());
                     kbuckets.entry(&key).view().map(|e| KadPeer::from(e.to_owned()))
+                } else if &p.provider == self.kbuckets.local_key().preimage() {
+                    // Create and return a KadPeer of ourself. Within `provider_peers` we are not
+                    // aware of our `Multiaddr`s. It might be good enough to only return our
+                    // `PeerId` (a bit hacky) as the remote obviously already knows how to contact
+                    // us. Otherwise we could enrich the event in `<Kademlia as
+                    // NetworkBehaviour>::poll` as the argument `parameters` contains our external
+                    // addresses via `parameters.external_addresses`.
                 } else {
                     None
                 })
```
So far, provider records are stored without their addresses, and the addresses of provider records are obtained from the routing table on demand. This has two shortcomings:

1. We can only return provider records whose provider peers happen to currently be in the local routing table.
2. The local node never returns itself as a provider for a key, even if it is indeed a provider.

These issues are addressed here by storing the addresses together with the provider records, falling back to addresses from the routing table only for backward-compatibility with existing implementations of `RecordStore` using persistent storage. Resolves libp2p#1526.
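A compact, self-contained model of that approach (standalone types rather than the crate's actual `ProviderRecord` or `RecordStore`; the names `lookup_providers` and `addresses` are illustrative):

```rust
use std::collections::HashMap;

/// Model of a provider record that carries the provider's addresses, so that
/// answering GET_PROVIDERS no longer depends on the routing table.
#[derive(Debug, Clone)]
struct ProviderRecord {
    provider: String,       // stand-in for a PeerId
    addresses: Vec<String>, // stand-in for Multiaddrs, stored with the record
}

/// Answer a GET_PROVIDERS request: prefer the addresses stored with the record
/// and fall back to the routing table only when none were stored (backward
/// compatibility with stores that did not persist addresses).
fn lookup_providers(
    records: &[ProviderRecord],
    routing_table: &HashMap<String, Vec<String>>,
    source: &str,
) -> Vec<(String, Vec<String>)> {
    records
        .iter()
        .filter(|r| r.provider != source) // never echo the requester back
        .map(|r| {
            let addrs = if !r.addresses.is_empty() {
                r.addresses.clone()
            } else {
                routing_table.get(&r.provider).cloned().unwrap_or_default()
            };
            (r.provider.clone(), addrs)
        })
        .collect()
}

fn main() {
    // Node 1 stored itself as a provider together with its listen address.
    let records = vec![ProviderRecord {
        provider: "node-1".into(),
        addresses: vec!["/ip4/192.0.2.1/tcp/4001".into()],
    }];
    // The routing table still does not contain node 1 itself.
    let routing_table: HashMap<String, Vec<String>> = HashMap::new();

    // Node 2 asks; node 1 can now return itself as a provider.
    let reply = lookup_providers(&records, &routing_table, "node-2");
    assert_eq!(reply.len(), 1);
    println!("{:?}", reply);
}
```

Because the local node keeps its own record together with its addresses, it can return itself as a provider even though it never appears in its own routing table.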
It seems that GetProviders in libp2p-kad doesn't work for two nodes. From the code here it can be seen that `provider_peers` tries to find providers only in the buckets, but the buckets don't contain any info about the current node... So it works perfectly on >2 nodes, but doesn't on 2.