borg extract: add --continue flag #1665
Conversation
src/borg/archive.py
fd.truncate()
if pi:
    pi.show(increase=prefix_length)
ids = [c.id for c in chunks]
for _, data in self.pipeline.fetch_many(ids, is_preloaded=True):
Not sure if this causes an issue with the preloaded chunks: IIRC it preloads all chunks of a file when processing its item, but then it fetches either none of them (existing, full file) or only some (existing, partial file) before it goes into fetch-all mode (non-existing files).
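The mismatch described above can be sketched with a toy model (hypothetical names, not borg's actual implementation): all of a file's chunk ids are queued for preload up front, so if extraction later fetches only a suffix of them, the un-fetched prefix stays queued and later preloaded fetches go out of sync.

```python
# Toy model of the preload bookkeeping: preload queues chunk ids in
# order; a preloaded fetch is expected to consume matching queue
# entries. Fetching only a suffix leaves stale prefix entries behind.
from collections import deque

class ToyPipeline:
    def __init__(self):
        self.preloaded = deque()

    def preload(self, ids):
        self.preloaded.extend(ids)

    def fetch_many(self, ids, is_preloaded=True):
        for cid in ids:
            if is_preloaded:
                # a preloaded fetch must consume the matching queue entry
                queued = self.preloaded.popleft()
                assert queued == cid, "out of sync: %r != %r" % (queued, cid)
            yield cid, b"data-%d" % cid

pipe = ToyPipeline()
pipe.preload([1, 2, 3])     # all chunks of the file were preloaded
suffix = [2, 3]             # a partially extracted file only needs these
try:
    list(pipe.fetch_many(suffix))
except AssertionError as e:
    print("desync:", e)     # chunk 1 was preloaded but never fetched
```

This is why skipping already-extracted chunks needs some way to also drop their preloads, which is what the discard mechanism discussed below attempts.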
Good catch, I didn't consider that. I made up RemoteRepository.discard_preload for this, which looks about right to me, but for some reason it makes things hang.
This is kinda strange. If I set a breakpoint in RR.discard_ids, I see that neither `cache` nor `responses` contains anything, but still, removing the IDs from `preload_ids` makes it hang in `call_many`, in the `while not self.to_send and (calls or self.preload_ids) and len(waiting_for) < MAX_INFLIGHT:` loop.
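A minimal, hypothetical model of that loop's continuation condition (names taken from the quote, logic heavily simplified) shows how it can spin forever: the loop keeps going while nothing is staged to send, work remains, and the in-flight window is open, so if an expected response never arrives, no exit condition is ever met.

```python
# Simplified model of the call_many continuation condition quoted
# above; this is a sketch for reasoning, not borg's actual code.
MAX_INFLIGHT = 10

def would_spin(to_send, calls, preload_ids, waiting_for):
    # True while: nothing staged to send, work remains,
    # and the in-flight window is not yet full.
    return bool(not to_send
                and (calls or preload_ids)
                and len(waiting_for) < MAX_INFLIGHT)

# All work drained -> the loop can exit.
print(would_spin(to_send=[], calls=[], preload_ids=[], waiting_for=[]))    # False

# Stuck state: preload ids remain, nothing staged, window open; if the
# matching response was discarded, this stays True indefinitely.
print(would_spin(to_send=[], calls=[], preload_ids=[42], waiting_for=[7]))  # True
```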
Current coverage is 84.67% (diff: 85.96%)

@@             master   #1665   diff @@
=======================================
  Files            20      20
  Lines          6548    6589    +41
  Methods           0       0
  Messages          0       0
  Branches       1112    1123    +11
=======================================
+ Hits           5547    5579    +32
- Misses          734     739     +5
- Partials        267     271     +4
FYI the reason this was moved to b3 is that with larger files it still hangs in RemoteRepository.call_many, in the same spot as earlier:
I haven't looked at it again yet.
fd.seek(prefix_length)
fd.truncate()
discarded_count = len(item.chunks) - len(chunks)
discarded_chunks_ids = [c.id for c in item.chunks[:discarded_count]]
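The arithmetic in the hunk above can be illustrated standalone (using the names suggested in the nitpick below; the prefix-boundary logic is a simplification, and `Chunk` is a stand-in type): after seeking past the already-extracted prefix, the chunks covered by that prefix are dropped from the work list and their ids collected so the matching preloads can be discarded too.

```python
# Hypothetical sketch: split a file's chunk list at the byte offset
# that is already on disk, then compute which preloaded ids to discard.
from collections import namedtuple

Chunk = namedtuple("Chunk", "id size")

item_chunks = [Chunk(b"a", 10), Chunk(b"b", 10), Chunk(b"c", 10)]
prefix_length = 20                    # bytes already extracted

# keep only chunks starting at or after the prefix boundary
offset = 0
chunks = []
for c in item_chunks:
    if offset >= prefix_length:
        chunks.append(c)
    offset += c.size

# chunks covered by the prefix were preloaded but won't be fetched
discard_count = len(item_chunks) - len(chunks)
discard_chunk_ids = [c.id for c in item_chunks[:discard_count]]
print(discard_count, discard_chunk_ids)   # 2 [b'a', b'b']
```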
nitpick: discard_count / discard_chunk_ids (not past tense)
"later" and no activity = close.
Fixes #1356