Downloader for iroh-bytes #1334
A few thoughts on this topic: I think the downloader API should be separated into declarations of intent and hints. A declaration of intent would be e.g.

For a declaration of intent you get a `u64` handle that allows you to control/delete the intent. Then you have hints. Hints are something like:

It is then up to the downloader to make use of the hints, ask some discovery mechanism, etc. The downloader keeps track of the intents and makes sure that there are no duplicates; e.g. if two places want the same hash, we don't want two download tasks.

An extension of the intent/hint scheme would be something like topics. E.g. instead of saying "for these 10000 hashes you should try these 100 peers", have a "download topic" that in the case of doc sync might correspond to a document. Then you can say "I want to download these 10000 hashes for topic A".

A big question: should the downloader support collections? I would say yes, otherwise the downloader will be useless for many non-document-sync-related iroh tasks.
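The intent registry with deduplication described above could be sketched roughly like this. All names and types here are hypothetical illustrations, not iroh's actual API:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for illustration only.
type Hash = [u8; 32];
type Handle = u64;

#[derive(Default)]
struct Downloader {
    next_handle: Handle,
    // One entry per distinct hash: the set of intent handles that want it.
    intents: HashMap<Hash, Vec<Handle>>,
}

impl Downloader {
    /// Declare the intent to download `hash`. Returns a handle that can
    /// later be used to control/delete this particular intent.
    fn declare(&mut self, hash: Hash) -> Handle {
        let handle = self.next_handle;
        self.next_handle += 1;
        // Deduplication: a second intent for the same hash joins the
        // existing entry instead of spawning a second download task.
        self.intents.entry(hash).or_default().push(handle);
        handle
    }

    /// Delete one intent. The download itself is only dropped once no
    /// intent for the hash remains.
    fn cancel(&mut self, hash: Hash, handle: Handle) {
        if let Some(handles) = self.intents.get_mut(&hash) {
            handles.retain(|h| *h != handle);
            if handles.is_empty() {
                self.intents.remove(&hash);
            }
        }
    }

    /// Number of distinct download tasks that would be running.
    fn active_downloads(&self) -> usize {
        self.intents.len()
    }
}
```

With this shape, two places declaring the same hash share one download task, and the task only goes away when the last intent is cancelled.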
The parts that @Frando mentions seem clear to me to implement; @rklaehn's, not so much. Could you guys help me here? (Sorry, still a noob to these parts of the code.)

Thank you both!
You need the
Not sure I got the question right, and @rklaehn knows much more than me about hashes and ranges. I think a range always refers to part(s) of a single blob identified by a single hash. So you can say "gimme these ranges of this blob identified by this hash".
What @Frando said
The protocol for individual blobs goes like this: you ask for a hash and a set of ranges of chunks (1 chunk = 1024 bytes). Then you get back data only for what you have asked for. The default is to just ask for a hash and all of its chunks.

For collections it is more complex. You can specify ranges for the collection itself and for each of its children. E.g. you have downloaded a collection and half of its children and were interrupted. You then figure out what is missing and ask for exactly that.
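To illustrate the chunk arithmetic (1 chunk = 1024 bytes): a byte range maps to the range of chunks that covers it, with the end rounded up because a partially-needed chunk must be requested whole. This helper is an assumption for illustration, not part of iroh-bytes:

```rust
const CHUNK_SIZE: u64 = 1024; // 1 chunk = 1024 bytes, per the protocol description above

/// Map a byte range [start, end) of a blob to the chunk range covering it.
/// The end rounds up: a partially-covered chunk has to be fetched whole.
fn chunks_for_bytes(start: u64, end: u64) -> std::ops::Range<u64> {
    let first = start / CHUNK_SIZE;
    let last = (end + CHUNK_SIZE - 1) / CHUNK_SIZE;
    first..last
}
```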
Here is an issue that is related to the downloader. Probably not initially, but eventually we want to support this: #839
## Description

This PR adds `iroh-sync`, a document synchronization protocol, to iroh, and integrates it with `iroh-net`, `iroh-gossip` and `iroh-bytes`.

* At the core is the `iroh-sync` crate, with a set reconciliation algorithm implemented by @dignifiedquire. See [the old iroh-sync repo](https://github.com/n0-computer/iroh-sync/) for the prehistory and #1216 for the initial PR (fully included in this PR, and by now outdated).
* Iroh sync is integrated in the iroh node, with iroh-gossip, in the RPC interface, and the CLI.
* `LiveSync` is the handle to an actor that integrates sync with [gossip](#1149) to broadcast and receive document updates from peers. For each open document, a gossip swarm is joined with a `TopicId` derived from the doc namespace.
* mod `download` contains the new downloader. It will be improved in #1344.
* mod `client` is the new high-level RPC client. It currently only has methods for dealing with docs and sync; other things should be added once this is merged. CLI commands for sync are in `commands/sync.rs`. Will be much better with #1356.
* `examples/sync.rs` has a REPL to modify and sync docs. It does a full setup without using the iroh console. Also includes code to sync directories, and a hammer command for load testing.
* The PR also introduces `iroh::client::Iroh`, a wrapper around the RPC client, and `iroh::client::Doc`, a wrapper around the RPC client for a single document.

## Notes & open questions

#### Should likely happen before merge:

* [x] Make `iroh_sync::Store::list_authors` and `list_replicas` return iterators *fixed in #1366*
* [ ] Add `iroh_sync::Store::close_replica`
* [x] `ContentStatus` in `on_insert` callback is reported as `Ready` if the content is still `baomap::PartialEntry` (in-process download) *fixed in a8e8093*

#### Can happen after merge, but before `0.6` release

* [ ] Implement `AuthorImport` and `AuthorShare` RPC & CLI commands
* [ ] Sync store `list_namespaces` and `list_authors` internally collect; return iterator instead
* [ ] Fix cross-compiles to arm/android. See cross-rs/cross#1311
* [ ] Ensure that fingerprint calculation is efficient and/or cached for large documents. Currently calculating the initial fingerprint iterates once over all entries in a document.
* [ ] Make content downloads more reliable
  * [ ] Add some way to download content from peers independent of the first insertion event for a remote entry. The downloader with retries is tracked in #1334 and #1344, but independent of that, we currently only ever try to queue a download when the `on_insert` callback triggers, which is only once. There should be a way, even if manual for now, to try to download missing content in a replica from peers.
  * [ ] During `iroh-sync` sync, include info on whether content is available for each entry
* [ ] Add basic peer management and persistence. Currently live sync will stop doing anything after a node restart.
  * [ ] Persist the addressbook of peers for a document, to reconnect when restarting the node
  * [ ] Implement `PeerAdd` and `PeerList` RPC & CLI commands. The latter needs changes in `iroh-net` to expose information about currently-connected peers and their peer info.
* [ ] Make read-only replicas possible
* [ ] Improve reliability of replica sync
  * Sync is triggered on each `NeighborUp` event from gossip; check that we don't sync too much
  * Maybe include peer info in gossip messages, to queue syncs with those (but not all at once)
  * Track and exchange the timestamp of the last full sync for peers, to know if you missed gossiped messages and react accordingly
  * Add more tests with peers coming and leaving

#### Open questions

* [ ] `iroh_sync::EntrySignature`: should the signatures include a namespace prefix?
* [ ] Do we want the 1:1 mapping of `NamespaceId` and gossip `TopicId`, or would the topic id as a hash be better?

#### Other TODOs collected from the code

* [ ] Port `hammer` and `fs` commands from REPL example to iroh cli
* [ ] docs/naming: settle terminology about keypairs, private/secret/signing keys, public keys/identifiers and make docs and symbols consistent
* [ ] Make `bytes_get` streaming in the RPC interface
* [ ] Allow to unset the subscription on a replica
* [ ] `iroh-sync` shouldn't depend on `iroh-bytes` only for the `Hash` type -> #1354
* [ ] Move `sync::live::PeerSource` to iroh-net or even better -> #1354
* [ ] `StoreInstance::put`: propagate error and verify timestamp is reasonable
* [ ] `StoreInstance::get_range`: implement inverted range
* [ ] `iroh_sync`: remove some items only used in tests (marked with `#[cfg(test)]`)
* [ ] `iroh_sync` fs store: verify get method fetches all keys with this namespace
* [ ] `ranger::SimpleStore::get_range`: optimize
* [ ] `ranger::Peer`: avoid allocs?
* [ ] `fs::StoreInstance::get_fingerprint`: optimize
* [ ] `SyncEngine::doc_subscribe`: remove unwrap, handle error

## Change checklist

- [x] Self-review.
- [x] Documentation updates if relevant.
- [ ] Tests if relevant.

---------

Co-authored-by: dignifiedquire <[email protected]>
Co-authored-by: Asmir Avdicevic <[email protected]>
Co-authored-by: Kasey <[email protected]>
## Description

Adds the `Downloader` as specified in #1334 plus some backchannel convos. Features include:

- Support for collections
- Delays for downloads
- Download retries with an incremental backoff
- Keeping peers around for a bit longer than necessary, in hopes they will be useful again
- The concept of intents, and deduplicating download efforts
- Cancelling download intents
- Limiting how many concurrent requests are made in total
- Limiting how many concurrent requests are made per peer
- Limiting the number of open connections in total
- Basic error management, in the form of deciding whether a peer should be dropped, the request should be dropped, or the request should be retried

## Notes & open questions

### TODOs

- A remaining TODO in the code is whether something special should be done when dropping quic connections
- Should downloads have a timeout?
- ~~I know I've said this a hundred times with a hundred different things but would love to test this as well under stress scenarios and a large number of peers. don't hate me~~ In reality, after abstracting away all the IO, most scenarios can be simulated easily. What would remain for a _much_ later time, when the need and opportunity for a real-case testing scenario arises, is to tune the concurrency parameters.

### Future work

#### Downloading Ranges

There was the requirement of downloading a Hash, a range of a Hash, a collection, and (not mentioned but potentially implied) ranges of collections. There is no support for ranges right now because of the great duplication of the `get` code required to take advantage of the proper errors added in #1362. In principle, adding ranges should be really easy: it's an extension of the `DownloadKind` and would simply need calculating the missing ranges based not on the difference between what we have and the whole blob, but on the given range. I would prefer to find a way to deduplicate the get code before doing this extension.

Also, as far as I can tell, there is no need for this yet.

#### Prioritizing candidates per role: `Provider` and `Candidate`

A nice extension, as discussed at some point, is to differentiate candidates we know have the data from those that _might_ have the data. This has the added benefit that when a peer is available to perform another download under the concurrency limits, a hash we know they have could be downloaded right away instead of waiting for the delay. At this point this doesn't make sense, because we will likely attempt a download before the peer has retrieved the data themselves. To implement this, we would first need to add notifications into gossip that a hash has been fully downloaded and is available.

#### Leveraging the info from gossip

When declaring that a hash `X` should be downloaded, it's also an option to query gossip for peers that are subscribed to the topic to which `X` belongs, to use them as candidates. This could be done by connecting the `ProviderMap` to `gossip`. For now I don't see the need to do this.

### Open questions about future work

- In line with the work described above, the registry only allows querying for peer candidates for a hash, since that's as good as it gets in terms of what we know from a remote right now. It's not clear to me whether we would want to change this to have better availability information with #1413 in progress.
- More future work: downloading a large data set/blob from multiple peers would most likely require us to do a three-step process:
  1. Understanding the data layout/size.
  2. Splitting the download.
  3. Actually performing the separate downloads.

  Generally curious how this will end. My question here is whether we should do this for every download, or just on data that we expect to be big. Is there any way to obtain such a hint without relying on a query every single time?

## Change checklist

- [x] Self-review.
- [x] Documentation updates if relevant.
- [x] Tests if relevant.
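The per-peer and total concurrency limits from the feature list above can be modeled with a small bookkeeping struct. This is an illustrative toy, not the PR's actual types, and the limit values below are invented:

```rust
use std::collections::HashMap;

/// Illustrative limits; real values would be configurable.
struct Limits {
    max_total: usize,
    max_per_peer: usize,
}

#[derive(Default)]
struct ActiveRequests {
    total: usize,
    per_peer: HashMap<String, usize>,
}

impl ActiveRequests {
    /// Try to admit a new request to `peer`; refuse if either the total
    /// or the per-peer limit would be exceeded.
    fn try_start(&mut self, peer: &str, limits: &Limits) -> bool {
        let peer_count = self.per_peer.entry(peer.to_string()).or_insert(0);
        if self.total >= limits.max_total || *peer_count >= limits.max_per_peer {
            return false;
        }
        self.total += 1;
        *peer_count += 1;
        true
    }

    /// Mark a request to `peer` as finished, freeing capacity.
    fn finish(&mut self, peer: &str) {
        if let Some(count) = self.per_peer.get_mut(peer) {
            *count -= 1;
            self.total -= 1;
        }
    }
}
```

A request that cannot be admitted would sit in the queue until `finish` frees capacity.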
@Frando & @divagant-martian both agree that #1420 closes this issue.
The iroh-sync PR adds a downloader. In its current state it is unfinished: it cannot do retries and doesn't have a proper queue.

Features we need from the downloader, from a high-level iroh-sync point of view:

- Download (`Hash`, `PeerId`) pairs

This high-level API exists already in the PR, with public methods `push(hash, peer_id)` and `async finished(hash: Hash)`. The actual actor below misses many features though. We want:
copying some notes from the now-merged downloader improvements PR:

The `Downloader` currently acts immediately on new requests and doesn't do any retries. It also does not have a proper queue internally.

We want to be able to queue downloads with retries and different timeouts. My idea was to add a time-based queue, maybe using the `TimerMap` from `iroh_gossip::util`. Then I thought to change the API to expose `download(hashes: Vec<Hash>, peer: Peer, delay: Option<Instant>)`. The actor would maintain a timer queue of pending requests and their delay, poll that for the next operation, check if the operation is still needed, and if so enqueue the future to actually do the download. The actor should also keep track of running downloads, and have a (configurable) limit on how many peers are being dialed or connected to in parallel. Code-wise, I thought to add at least a `DownloadState` for each pending download, and then, similar to the current code, have indexes into those for `PeerId` and `Hash` (and `Instant` if using something like the `TimerMap`).

With this, we can then queue downloads for new entries (`SignedEntry`) with delays and retries.
Later on, we'd also want to add an explicit command to retry all missing hashes in a document from a peer, and optionally a mode that would do that automatically if we get new neighbors. This will initially be quite inefficient for large docs, because we have no `want`/`have` exchange in iroh-bytes. So maybe defer this a bit.

**Edit**
Another thought on the API: along with `peer` and `delay`, we'd likely also want to pass an `enum RetryStrategy { None, Auto }` or similar, with `Auto` being some incremental backoff with max retries.

**Edit 2**
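A sketch of what such a `RetryStrategy` could look like; the backoff constants here are invented for illustration, not decided values:

```rust
use std::time::Duration;

/// Sketch of the retry strategy idea: either fail immediately, or retry
/// with incremental (doubling) backoff up to a maximum attempt count.
enum RetryStrategy {
    /// Fail immediately, never retry.
    None,
    /// Retry with incremental backoff up to a max number of attempts.
    Auto,
}

impl RetryStrategy {
    /// Delay before retry `attempt` (0-based); `None` means "give up".
    fn next_delay(&self, attempt: u32) -> Option<Duration> {
        // Assumed constants; real values would be configurable.
        const MAX_RETRIES: u32 = 3;
        const BASE: Duration = Duration::from_millis(500);
        match self {
            RetryStrategy::None => None,
            RetryStrategy::Auto if attempt >= MAX_RETRIES => None,
            // Double the base delay on each failed attempt.
            RetryStrategy::Auto => Some(BASE * 2u32.pow(attempt)),
        }
    }
}
```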
Currently the `quinn::Connection`s are dropped, and thus cancelled, if there is no immediate next download pending after a download finishes. Likely we'll want to keep them around for a configurable (with default) amount of time, and then drop them if no operation happened. Redialing is quite cheap but not free (one roundtrip in the current setup).