send/recv compatibility with 0.6.5.x #6616
Force-pushed from 3763821 to 2fdb9bf
Thanks! I'd suggest squashing the module_param() commit and its documentation commit together.
More fundamentally I think it's worth re-evaluating the decision to not add a feature flag to the send stream for this functionality. The comment in zfs_ioctl.h
clearly states it wasn't added initially because the stream format was thought to be fully backwards compatible.
+ * This is not implemented as a feature flag, because the receiving side does
+ * not need to have implemented it to receive this stream; it is fully backward
+ * compatible. We need a flag, though, because full send streams without it
+ * cannot necessarily be received as a clone correctly.
I haven't dug into the details, but if that's not going to be possible we should consider converting this module option into a proper zfs send command line option and disabling this flag by default for compatibility.
module/zfs/dmu_send.c
Outdated
@@ -4010,4 +4010,7 @@ dmu_objset_is_receiving(objset_t *os)
 #if defined(_KERNEL)
 module_param(zfs_send_corrupt_data, int, 0644);
 MODULE_PARM_DESC(zfs_send_corrupt_data, "Allow sending corrupt data");
+
+module_param(zfs_send_set_freerecords_bit, int, 0644);
+MODULE_PARM_DESC(zfs_send_set_freerecords_bit, "Do not set freerecords bit for backwards compatibility with 0.6.5.x");
nit: 80 character limit.
You can wrap the block with /* BEGIN CSTYLED */ ... /* END CSTYLED */ to make the style checker happy about the indent. See the bottom of arc.c for an example.
okay, so further bisecting shows that "OpenZFS 6393 - zfs receive a full send as a clone" (#4221) is NOT the actual culprit here. while it does change the behaviour for full streams (which now include FREEOBJECTS records), that change alone would indeed (as noted in the comment) be backwards compatible. this is a bit hidden by the fact that it introduces improved error handling on the receiving side, which meant that the actual backwards incompatibility was not noticed later on.

the first commit in the 0.6.5 -> 0.7.0 history which actually introduces the breaking behaviour is "OpenZFS 7104 - increase indirect block size" (d7958b4 / #5679), which bumps the default/max indirect block size shift by 3 (from 14/16K to 17/128K, i.e. 8x larger blocks). this value is included when calculating the BP_SPAN when dumping a hole, which affects the numobjs for the FREEOBJECTS record as well as introducing an additional 0,0 FREEOBJECTS record (the latter does not seem to be a problem).

test case: following pool setup:
dumping a full and incremental replication stream using:
0.7.0 produces streams that end in:
the full stream already hangs zfs recv on 0.6.5.11 on the receiving side. applying the following patch (a sort of revert of d7958b4) on top of 0.7.0:
now, the dumped streams end like this:
and sending both the full and the incremental stream to a 0.6.5.11 system works as expected. I only tested with 0.7.0, but I am pretty sure the same applies to 0.7.1 and master. is there any way to get out of this mess besides backporting the improved receive handling to 0.6.5.11 like in #6602?
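To make the numbers behind this concrete, here is a rough user-space sketch of the span calculation; the helper name and the exact shape of the formula are illustrative (this is not lifted from dmu_send.c), but the constants (128-byte block pointers, 512-byte dnodes, 16K meta-dnode data blocks) match what the discussion above assumes.

```c
#include <assert.h>
#include <stdint.h>

#define SPA_BLKPTRSHIFT 7 /* 128-byte block pointers */
#define DNODE_SHIFT     9 /* 512-byte dnodes */

/*
 * Sketch: how a BP_SPAN-style calculation turns a hole at a given
 * indirect level of the meta-dnode into a FREEOBJECTS object count.
 * datablksz is the meta-dnode data block size (16K), indblkshift the
 * indirect block size shift that d7958b4 bumped from 14 to 17.
 */
static uint64_t
hole_numobjs(uint64_t datablksz, int indblkshift, int level)
{
	/* log2 of block pointers per indirect block */
	int epbs = indblkshift - SPA_BLKPTRSHIFT;
	/* bytes of object data covered by one hole at this level */
	uint64_t span = datablksz << (uint64_t)(level * epbs);
	/* each 512-byte dnode is one object id */
	return (span >> DNODE_SHIFT);
}

/*
 * hole_numobjs(16384, 14, 1) ==     4096  (old shift: 2M span)
 * hole_numobjs(16384, 17, 1) ==    32768  (new shift: 16M span)
 * hole_numobjs(16384, 17, 2) == 33554432  (levels now multiply by 1024)
 */
```

The jump from a factor of 128 to a factor of 1024 per indirect level is why the final FREEOBJECTS record suddenly covers a huge numobjs, which 0.6.5 then iterates over one object at a time.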
@behlendorf @ahrens any chance either of you could take a look at this? am I correct in assuming that bumping the indirect block size shift default value and limit does not change the actual on-disk format of zpools, because otherwise it would have been hidden behind a pool feature flag and not just bumped between 0.6.5 and 0.7? if so, one way to ease upgrades would be to revert this change and postpone it (either to 0.8 or a later point in 0.7), at least for downstreams relying on replication using zfs send/recv..
@Fabian-Gruenbichler thanks for isolating the exact commit. To answer your question, increasing the indirect block size didn't require a feature flag at the time since the size was already being stored on disk. Older versions of the software are capable of handling this larger size. That all worked as expected; where things seem to have gone wrong is this accidental change to the send stream.
Unfortunately since this size is stored on disk that would only resolve the issue for new pools, not for existing pools. What I think might work would be updating
We should try and get @dankimmel's thoughts on this too since he's familiar with all this code. He may have a better idea.
To make sure I understand, let me summarize: Some FREEOBJECTS records have very large number of objects, which confuses (appears to hang) old code (0.6.5). These confusing FREEOBJECTS records are generated by both full and incremental sends from current (0.7) bits. This PR makes it possible to set a tunable which changes the full send stream such that it doesn't generate these confusing records (but receive full stream as a clone won't work). So this PR is essentially a workaround (requires setting tunable) for part of the bug (only fixes full send streams). While this is an improvement over the current state, I think we need to find a more complete solution -- at least to workaround the problem for incrementals too, and hopefully to actually fix the bug, without requiring changing tunables. @pcd1193182 any ideas?
For incrementals, I think we know that there can't be any objects after dn_maxblkid * DNODES_PER_BLOCK. So we don't really need to FREEOBJECTS any objects after that. We could use this knowledge to reduce the numobjs in the big FREEOBJECTS record, which might be enough to eliminate the "hang". For full sends, we don't know what the max objid is on the target, when receiving as a clone. But we could change the code on the receiving system to make it free all objects after the last object in the send stream. An older system couldn't receive-whole-send-as-a-clone from a system with this change, but "receive whole send as a clone" is a relatively recent, and rarely used feature, so this is much less bad than not being able to send from 0.7 to earlier releases.
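A minimal sketch of the clamping proposed above, assuming 16K meta-dnode blocks and 512-byte dnodes; the function name is made up, and whether the bound should be maxblkid or maxblkid + 1 blocks is a detail the real patch has to get right (the sketch uses maxblkid + 1 so objects inside the last allocated block survive):

```c
#include <assert.h>
#include <stdint.h>

#define DNODES_PER_BLOCK 32 /* 16K meta-dnode block / 512-byte dnodes */

/*
 * Trim a FREEOBJECTS record (firstobj, numobjs) to the maximum object
 * id that can possibly exist given the sender's meta-dnode maxblkid.
 * Returns the trimmed numobjs; 0 means the record can be dropped.
 */
static uint64_t
trim_freeobjects(uint64_t firstobj, uint64_t numobjs, uint64_t maxblkid)
{
	/* first object id that cannot exist on the sender */
	uint64_t maxobj = (maxblkid + 1) * DNODES_PER_BLOCK;

	if (firstobj >= maxobj)
		return (0); /* record is entirely past the end */
	if (firstobj + numobjs > maxobj)
		numobjs = maxobj - firstobj; /* trim the tail */
	return (numobjs);
}
```

With maxblkid = 3 this caps any runaway record at 128 objects, no matter how large the hole-derived numobjs was; that keeps the 0.6.5 receiver from iterating over millions of non-existent objects.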
Matthew Ahrens wrote:
Some FREEOBJECTS records have very large number of objects, which confuses (appears to hang) old code (0.6.5). These confusing FREEOBJECTS records are generated by both full and incremental sends from current (0.7) bits. This PR makes it possible to set a tunable which changes the full send stream such that it doesn't generate these confusing records (but receive full stream as a clone won't work).
So this PR is essentially a workaround (requires setting tunable) for part of the bug (only fixes full send streams). While this is an improvement over the current state, I think we need to find a more complete solution -- at least to workaround the problem for incrementals too, and hopefully to actually fix the bug, without requiring changing tunables.
summary sounds correct :) AFAICT, it's usually one such FREEOBJECTS record at the very end of the stream, presumably to free all the (coalesced) holes? that might just be a result of my test cases though, and for actual real world streams the structure might look a bit different. the problem is always the huge number of objects though, because the receiving (0.6.5) side does not skip the whole bunch on the first non-existing object, but actually iterates over all of them.
For incrementals, I think we know that there can't be any objects after dn_maxblkid * DNODES_PER_BLOCK. So we don't really need to FREEOBJECTS any objects after that. We could use this knowledge to reduce the numobjs in the big FREEOBJECTS record, which might be enough to eliminate the "hang".
that sounds like a direction worth investigating - I will include this and @behlendorf's feedback (moving the module parameter to a send/recv flag instead) and push something later on today or tomorrow.
For full sends, we don't know what the max objid is on the target, when receiving as a clone. But we could change the code on the receiving system to make it free all objects after the last object in the send stream. An older system couldn't receive-whole-send-as-a-clone from a system with this change, but "receive whole send as a clone" is a relatively recent, and rarely used feature, so this is much less bad than not being to send from 0.7 to earlier releases.
not sure if this is worth it - if we need a fix on the receiving side, the most straightforward one IMHO is my other PR backporting the "skip rest of FREEOBJECTS record if object does not exist" commits ;) ideally, we could fix this mess on the sender side only - in many scenarios, the target of a send/recv is not easily updatable (think centralized backup server for lots of infrastructure with conservative upgrade policy).
You're proposing that we need a fix on the receiving side in order to receive any full send stream. I'm proposing that we need a fix on the receiving side in order to receive a full send as a clone. That's a significant difference.
sorry, misunderstood your point there. what you mean is:
together with reducing the incremental FREEOBJECTS records to the needed minimum, this would allow sending regular streams from > 0.7.1 to 0.6.5.x again. receiving full streams as clones would still not be supported on 0.6.5.x. sending from 0.7.0/1 to > 0.7.1 would work for all combinations if we keep the receive compatibility. but sending from > 0.7.1 to 0.7.0/1 would no longer support receiving as full clone.. I still wonder whether putting the full stream with FREE(OBJECTS) behind a send feature flag would not be easier.. that would not require any changes on the receiving side.. |
Force-pushed from 4c09a49 to 930b12a
took a first stab at implementing the max possible object ID, and fixed some issues with the previous iteration. haven't done extensive testing yet and this is my first foray into the inner workings of ZFS, so possibly I missed some important aspects ;) did some basic sending/receiving between patched master and patched master as well as 0.6.5.11 which showed no obvious errors.
Force-pushed from 930b12a to 94f51fb
Codecov Report
@@ Coverage Diff @@
## master #6616 +/- ##
==========================================
+ Coverage 74.09% 74.28% +0.19%
==========================================
Files 295 295
Lines 93882 93920 +38
==========================================
+ Hits 69558 69767 +209
+ Misses 24324 24153 -171
pushed a new iteration to make the style checker happy and reduce code duplication.
Force-pushed from 94f51fb to 614ddfc
Actually, I meant to keep setting the DRR flag, but truncate the last FREEOBJECTS record based on
Yeah, we could require that you do |
The dsa_incremental_maxobj code is nice. Have you been able to test it out, sending to an older release of ZFS?
module/zfs/dmu_send.c
Outdated
/*
 * reduce numobjs according to maximum possible object id of
 * base snapshot
These comments say what is happening, but it isn't much more work to read the code to understand that.
It would be really useful to have a comment explaining why we are doing this (especially since the receive code (now) handles longer FREEOBJECTS records just fine).
yep, makes sense!
module/zfs/dmu_send.c
Outdated
@@ -934,7 +967,7 @@ dmu_send_impl(void *tag, dsl_pool_t *dp, dsl_dataset_t *to_ds,
     zfs_bookmark_phys_t *ancestor_zb, boolean_t is_clone,
     boolean_t embedok, boolean_t large_block_ok, boolean_t compressok,
     boolean_t rawok, int outfd, uint64_t resumeobj, uint64_t resumeoff,
-    vnode_t *vp, offset_t *off)
+    vnode_t *vp, offset_t *off, uint64_t incremental_maxobj)
I think this function can figure out incremental_maxobj instead of making the caller pass it in. In addition to making the interface cleaner, it looks like we can eliminate some duplicated code in the callers.
but we need the maxobj of the fromsnap dataset (because those are the objects we potentially free, unless I misunderstood something..), and here we only have the ancestor_zb (which could be a bookmark!). I figured we would not want to duplicate all the dataset finding code from dmu_send(_obj) in dmu_send_impl ? but maybe I am just missing the right helper to get from zfs_bookmark_phys_t to dnode_phys_t ;)
or should we use tosnap to get maxblkid, because it is guaranteed that maxblkid does not get smaller from one snapshot to the next (not sure about this?) - that would indeed make it easier, and also be reusable for the full send proposal..
yeah, I did some basic tests (lots of files, big files, holes in files, deleting lots of files) and all sends from patched master to patched master and to vanilla 0.6.5.11 worked as expected, and the test suite results look good as well.
Force-pushed from 614ddfc to 2ae3866
and a new iteration:
I haven't tested resuming, and actually receiving a full stream as clone yet ;)
Force-pushed from 5f0d1ab to f714e38
rebased, fixed the test case about receiving full streams as clone to use the new '-C' flag, and actually allow using the '-C' flag when sending.
I don't love having to use a flag for this... the feature should really just work transparently. Having different "kinds" of full sends just rubs me the wrong way. I think I prefer Matt's proposal for full sends in #6616 (comment) . It requires some receive changes, but they wouldn't be extensive, and it gets the behavior we want in pretty much every case. I'd be happy to help outline the receive changes if needed.
pushed a new iteration, now trimming freeobjects records unconditionally for both types of streams, and keeping track of objects referenced/touched when receiving streams, using that information to free all leftover objects after that mark when receiving a full stream as clone. as discussed with @pcd1193182 on IRC, this would mean that ZoL > 0.7.2 is incompatible regarding the "receive full stream as clone" feature with ZoL 0.7.0-0.7.2, but it would be fully compatible with ZoL 0.6.5.x regarding full and incremental streams without changes in 0.6.5. feedback welcome ;)
This strikes me as a pretty reasonable solution. We can live with the "receive full stream as clone" feature being incompatible with 0.7.0 - 0.7.2 tags. The most important thing is that full and incremental streams will be compatible again with 0.6.5.x as originally intended.
module/zfs/dmu_send.c
Outdated
for (obj = rwa->last_touched_object + 1;
    next_err == 0;
    next_err = dmu_object_next(rwa->os, &obj, FALSE, 0)) {
nit: I think rewriting this in the form while (next_err == 0) { ... } would be easier to read.
that was lifted from receive_freeobjects (https://github.com/zfsonlinux/zfs/blob/master/module/zfs/dmu_send.c#L2550), but since the loop condition is much simpler here converting to a while makes sense.
on second thought - with the pesky continue we'd either need two calls to dmu_object_next or a goto, so I am not so sure..
https://gist.github.com/Fabian-Gruenbichler/8632d82204237a6219ba40c1258d261d
https://gist.github.com/Fabian-Gruenbichler/6234f96c6a33397561541f75889a7d3b
which of the three would be preferred?
That makes sense, my preference would be for the last one: https://gist.github.com/Fabian-Gruenbichler/6234f96c6a33397561541f75889a7d3b
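For illustration, the while-loop shape being discussed could look roughly like this, with toy user-space stand-ins for dmu_object_next()/dmu_free_long_object() so the control flow can actually be exercised; everything here (the bitmap "object set", the helper names) is made up for the sketch and is not the real DMU API:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define NOBJ 8
/* toy object set: 1 = allocated, 0 = hole */
static int allocated[NOBJ] = { 0, 0, 1, 0, 1, 1, 0, 1 };

/* find the next allocated object id >= *objp; ESRCH when none is left */
static int
toy_object_next(uint64_t *objp)
{
	for (uint64_t o = *objp; o < NOBJ; o++) {
		if (allocated[o]) {
			*objp = o;
			return (0);
		}
	}
	return (ESRCH);
}

static int
toy_free_object(uint64_t obj)
{
	allocated[obj] = 0;
	return (0);
}

/*
 * Free every allocated object with id > max_object (the largest object
 * id referenced by the received stream). The single advance call at the
 * bottom of the loop is what avoids the duplicated dmu_object_next()
 * call (or goto) that a for-loop-with-continue version would need.
 */
static int
free_leftover_objects(uint64_t max_object)
{
	uint64_t obj = max_object + 1;
	int next_err = toy_object_next(&obj);

	while (next_err == 0) {
		if (toy_free_object(obj) != 0)
			return (-1);
		obj++; /* step past the object we just freed */
		next_err = toy_object_next(&obj);
	}
	return (0);
}
```

Running free_leftover_objects(3) on the toy set frees objects 4, 5 and 7 while leaving object 2 (below the high-water mark) untouched, which mirrors the intended receive-full-as-clone cleanup.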
module/zfs/dmu_send.c
Outdated
@@ -2150,6 +2151,7 @@ struct receive_writer_arg {
     boolean_t resumable;
     boolean_t raw;
     uint64_t last_object, last_offset;
+    uint64_t last_touched_object;
I think using last_touched_object for a name is misleading. To me it implies the last modified object in the stream, when in fact you're tracking the largest_object in the stream.
yes, I was thinking of "last" as in "highest ID", not "last" as in "last in stream". largest is also misleading though, as it could refer to size? maybe something like max_objid ?
Yes, good point. max_objid makes sense to me, or even max_object which would be consistent with last_object. While you're updating this could you also split last_object and last_offset from the line above so each is declared on its own line.
module/zfs/dmu_send.c
Outdated
@@ -3720,6 +3746,38 @@ dmu_recv_stream(dmu_recv_cookie_t *drc, vnode_t *vp, offset_t *voffp,
     }
     mutex_exit(&rwa->mutex);

+    /*
+     * if we are receiving a full stream as clone, we need to free leftover
+     * objects after the last one referenced in the stream.
I think you mean largest one referenced in the stream, not last.
"leftover objects with IDs greater than the maximum ID referenced in the stream"?
Yeah, how about something like this to make it perfectly clear.
"If we are receiving a full stream as a clone, all object ids which
are greater than the maximum ID referenced in the stream are
by definition unused and must be freed."
module/zfs/dmu_send.c
Outdated
@@ -3720,6 +3746,38 @@ dmu_recv_stream(dmu_recv_cookie_t *drc, vnode_t *vp, offset_t *voffp,
     }
     mutex_exit(&rwa->mutex);

+    /*
+     * if we are receiving a full stream as clone, we need to free leftover
nit: s/if we/If we (capital leading i)
updated PR description for (hopefully) final approach
All objects after the last written or freed object are not supposed to exist after receiving the stream. Free them accordingly, as if a freeobjects record for them had been included in the stream. Signed-off-by: Fabian Grünbichler <[email protected]>
Force-pushed from bf7a8de to 9c2f987
updated and included all of the style feedback. I reduced the while loop even further by skipping the call to
I wonder if we should not do the same in receive_freeobjects as well?
removed WIP from title - this is ready to get merged from my side. @ahrens @pcd1193182 any objections?
Looks good. Once everyone's happy with this we'll want to upstream it to OpenZFS.
Thank you Fabian for this patch!
@@ -2824,6 +2861,9 @@ receive_free(struct receive_writer_arg *rwa, struct drr_free *drrf)
     if (dmu_object_info(rwa->os, drrf->drr_object, NULL) != 0)
         return (SET_ERROR(EINVAL));

+    if (drrf->drr_object > rwa->max_object)
You don't actually need these, since we only consider this value when doing a receive of a full send as a clone, and in that case you have to get an object record before you get any other kind of record for that object. Doesn't hurt to have them, though, and to have it be accurate in all cases.
any other kind of record except FREEOBJECTS ;) I originally attempted to just re-use last_object, but that obviously did not work out.
I did not feel comfortable enough with my level of understanding of the code to play guessing games (regarding which patterns are possible and which aren't), so I went the "better safe than sorry" route :)
When sending an incremental stream based on a snapshot, the receiving side must have the same base snapshot. Thus we do not need to send FREEOBJECTS records for any objects past the maximum one which exists locally. This allows us to send incremental streams (again) to older ZFS implementations (e.g. ZoL < 0.7) which actually try to free all objects in a FREEOBJECTS record, instead of bailing out early. Reviewed by: Paul Dagnelie <[email protected]> Reviewed-by: Brian Behlendorf <[email protected]> Signed-off-by: Fabian Grünbichler <[email protected]> Closes #5699 Closes #6507 Closes #6616
Changes look good. Sorry I didn't have a chance to look at this before it was integrated (was on vacation last week). Is anyone signed up to port this to OpenZFS/illumos?
Hi, I know the issue has been closed almost a year now, but the fix for the issue has only been partially ported to the openzfs repo. The partial code commit is at openzfs/openzfs#631, or the code looks almost identical. The missing part is where backwards compatibility is introduced, when receiving a dataset that's been 'sent' from a newer version of zfs. I don't know if there is any need/want to merge the missing part to the openzfs repo, but I had to patch the FreeBSD 11.2 zfs tree because I still have servers deployed running on 10.3. I could try to make a pr for the patch, but looks like there is a lot of red tape to me.
@waikontse feel free to port it (or file an issue) - the backwards compatibility commit is very small and should be straightforward. I don't have the capacity atm to do it myself unfortunately.
@Fabian-Gruenbichler You're right, it's 10 lines of code max I believe. I'll try to have it merged on openzfs. Thanks.
Description
ZFS 0.6.5.x does not handle large FREEOBJECTS records referencing non-existing objects well. ZFS 0.7 separately introduced two changes which lead to an incompatibility when sending from ZFS 0.7.x to ZFS 0.6.5.x:
Receiving full streams as clones (#4221)
generates FREE and FREEOBJECTS records even when generating a full stream, to allow those streams to be received as clones of existing datasets.
Increasing indirect block sizes (#5679)
blows up the number of objects contained in the final FREEOBJECTS record for the "logical hole" of unused space at the end of most datasets.
This PR caps the FREEOBJECTS records at the maximum possible USED object ID. For incremental streams, this is no problem. For full streams received as clones, this means we need an extra step of freeing any potentially leftover objects of the origin dataset, after processing the whole stream. The latter breaks compatibility for receiving full streams generated with this patch as clones on systems without this patch.
How Has This Been Tested?
Sending from patched to patched, patched to non-patched (see caveat in description), and patched to 0.6.5.11.