Block Cloning #13392
Conversation
It's finally happening. Thanks for the hard work!
First, thanks for this, it's been something I've looked forward to since I heard about it on the drawing board. A couple of thoughts, before I go play with this on my testbeds and have more informed ones:
Is this a userland dedup implementation, similar to the one that just got ejected from the codebase? Yes. Could you mangle your bytes if it incorrectly claimed a block was a duplicate? Also yes. Would allowing people to pay the computational cost to optionally BRT their incoming send streams rather than live with the overhead forever be worth it, IMO? A third yes. (Not suggesting you write it, for this or in general, just curious about your thoughts on the feasibility of such a thing, since "all your savings go up in smoke on send/recv" seems like one of the few pain points with no mitigations at the moment...though I suppose you could turn on dedup, receive, and then use idea 1 if that's feasible. Heh.)
Come to think of it, I suppose if that works, you could just do that for dedup entries too and stop having to scan the DDT for them, if you wanted...which might make someone I know happy. (You might wind up breaking backward compatibility of resuming scrubs, though, I suppose. Blech.) Likely missing some complexity that makes this infeasible, just curious why you think this might not be a thing one could implement. Thanks again for all your work on this, and for reading my uninformed questions. :)
@rincebrain Here are my thoughts on easier manual deduplication and preserving deduplication during send/recv.

Let's say you would like to manually deduplicate the data on your pool periodically. To do that you would need to scan your entire pool, read all the data, and build some database that you can check against to see if there is already another copy of a given block. Once you have such a database, you could use zfs diff to get only the files that have changed and scan only them. Even better if we had a mechanism to read all BPs of a given file without reading the data - that would allow us to determine if a given block is newer than the snapshot and only read the data of the blocks that are newer (and not all the data within a modified file). One could teach zfs recv to talk to this database and deduplicate the blocks on the fly. Note that this database is pretty much DDT in userland.

Converting DDT to BRT is still doable. I think we would just need to teach ZFS that even though the BP has the D bit set, it may not be in the DDT.

As for the scrub, of course everything is doable :) Before you start the scrub you could allocate a bitmap where one bit represents 4kB of storage (2^ashift). Once you scrub a block you set the corresponding bit, so the next time you want to scrub the same block, you will know it has already been done. Such a bitmap would require 32MB of RAM per 1TB of pool storage. With this in place we could get rid of the current mechanism for DDT. This bitmap could be stored in the pool to allow the scrub to continue after a restart (it could be synced to disk rarely, as the worst that can happen is that we scrub some blocks twice).
@pjd Wow, I just noticed this (you work in stealth mode? :) ) - surprised at the progress, thanks! I had a question related to what @rincebrain asked about zfs send/receive and was curious if you knew how BTRFS does their reflink send preservation? (I think the -c and -p options? I've also read on mailing lists that btrfs can supposedly preserve reflinks over send/receive.) So I was wondering how they do it. thx

How can we donate anonymously? (to a specific dev, or to OpenZFS earmarked for a specific developer?)
Thank you, Pawel. It is a cool feature. I've only started reading the code, so only the first few comments so far:
module/os/freebsd/zfs/zfs_vnops_os.c
	td->td_retval[0] = done;

	return (done > 0 ? 0 : error);
}
As I see, FreeBSD already has sys_copy_file_range()/kern_copy_file_range() and even linux_copy_file_range(). Aren't you duplicating?
To be honest I was under the impression that copy_file_range(2) always copied the data, just avoiding copying it to and from userland, but Linux's manual page does mention that reflinks can be used. So, yes, that is a good candidate to use. I'd still prefer to have a flag controlling this behavior, but on Linux, I guess, reflink is used when available. One problem is that if the offsets and length are not recordsize-aligned, we would still need to implement the actual copy of the non-aligned bits.
All in all, please ignore the syscalls for now, they were added so it can be tested. This will be done separately, maybe under sponsorship of the FreeBSD Foundation.
module/zcommon/zpool_prop.c
@@ -116,6 +116,9 @@ zpool_prop_init(void)
	zprop_register_number(ZPOOL_PROP_DEDUPRATIO, "dedupratio", 0,
	    PROP_READONLY, ZFS_TYPE_POOL, "<1.00x or higher if deduped>",
	    "DEDUP", B_FALSE, sfeatures);
	zprop_register_number(ZPOOL_PROP_BRTRATIO, "brtratio", 0,
	    PROP_READONLY, ZFS_TYPE_POOL, "<1.00x or higher if cloned>",
	    "BRT", B_FALSE, sfeatures);
"brtratio" is not very informative. Why not "cloneratio", for example?
I agree this is not the best name... I didn't want to use "clone" as it may be confused with 'zfs clone', and blockcloneratio seems too long. But I agree this should be changed. If people are ok with cloneratio, then it's fine by me.
Thank you Alex for looking at this!
I am actually curious what it measures and how useful it is. It seems like the average number of references to each block that has more than one. For dedup the attribute made more sense, since it counted all blocks in the DDT, which included all blocks written since dedup was enabled.
blockrefratio is better, but maybe a bit long
Sorry, but I don't know how they do it.
I'd recommend donating to the FreeBSD Foundation, which is fully tax-deductible in the US, and the FF is supporting OpenZFS development.
@pjd I wish FreeBSD all the best, but I wanted to donate to a specific person; I only use Linux anyway. I would not mind paying taxes on the donation, without any tax deduction, so that the individual/recipient does not, since it is a gift. Also, via the foundation it seems you must give the foundation your full name/address etc.; the anonymity they offer is just not listing your name on their website. I'll try to do a bit of research to see if I can find a more direct way than the FreeBSD Foundation, or if that's the only way.

As far as btrfs reflinks over send/receive, I think they 'replay' a sort of playbook of actions on extents, that play back on receive.

Many thanks again for all your hard work and for surprising us with this pull request for a feature many of us have long wished for!
A few more thoughts, in order of appearance. Still reading.
module/zfs/brt.c
	spa_config_enter(brt->brt_spa, SCL_VDEV, FTAG, RW_READER);
	vd = vdev_lookup_top(brt->brt_spa, brtvd->bv_vdevid);
	size = vdev_get_min_asize(vd) / brt->brt_rangesize + 1;
Aren't you allocating one extra entry here if vdev_get_min_asize(vd) is a multiple of brt->brt_rangesize?
I am. Thanks.
I was thinking about DIV_ROUND_UP() AKA howmany().
	brt_vdev_sync(brt, brtvd, tx);

	if (brtvd->bv_totalcount == 0)
		brt_vdev_destroy(brt, brtvd, tx);
Do you need brt_vdev_sync() in case of brt_vdev_destroy()?
It is not strictly necessary, but in brt_vdev_destroy() I assert that all the counters are 0. If brt_vdev_sync() weren't called, we would need to give up the asserts, and I'd prefer not to do that, especially since this won't happen often. Do you agree?
I just meant that writing something just to delete it on the next line is a waste of time. I agree about the assertions point, though, but it is more of an implementation detail.
module/zfs/brt.c
	if (bre1->bre_offset < bre2->bre_offset)
		return (-1);
	else if (bre1->bre_offset > bre2->bre_offset)
		return (1);
Most if not all AVL compare functions were rewritten to use TREE_CMP().
module/zfs/brt.c
	/*
	 * Entries to sync.
	 */
	avl_tree_t *bv_tree;
Does it need to be a pointer? It is a very small structure, why add another pointer dereference?
module/zfs/brt.c
	if (bre1->bre_vdevid < bre2->bre_vdevid)
		return (-1);
	else if (bre1->bre_vdevid > bre2->bre_vdevid)
		return (1);
I see brt_entry_compare() is used only within one vdev. So I guess you could just assert that bre1->bre_vdevid == bre2->bre_vdevid.
Good catch! This is a leftover from when all BRT entries were together. We can actually eliminate bre_vdevid (which I just did). Thanks.
module/zfs/brt.c
	if (error == ENOENT) {
		BRTSTAT_BUMP(brt_decref_entry_not_on_disk);
		brt_entry_free(bre);
Allocating and immediately freeing memory for every freed block potentially present in the BRT is a bit of a waste. Could probably be optimized.
Good point. Fixed.
module/zfs/brt.c
	brt_entry_fill(&bre_search, bp);

	brt_enter(brt);
I haven't profiled frees recently, but I have a feeling that a global pool-wide lock here will cause contention. It would be good to have it at least per-vdev.
For now I've modified brt_lock to an rwlock. This allows using a read-lock in brt_may_exists(), which is the hot function called on every free.
module/zfs/brt.c
	{ "decref_no_entry", KSTAT_DATA_UINT64 }
};

#define BRTSTAT_BUMP(stat) atomic_inc_64(&brt_stats.stat.value.ui64)
It would be good to use wmsum_add() to not touch global variables with atomics under another (now) global lock.
module/zfs/brt.c
	/*
	 * TODO: Walk through brtvd->bv_bitmap and write only dirty parts.
	 */
	dmu_write(brt->brt_mos, brtvd->bv_mos_brtvdev, 0,
Brian pointed out this might need to be byte-swapped so it works properly across endianness.
Should we have a header to go with this, to store BRT_RANGE_SIZE and anything else that might change in the future? Otherwise we are likely to have compatibility issues if we ever want to change the range size.
You won't be able to change the range size once the pool is created. Well, it at least will be hard, as you would need to reread the entire BRT table and recreate the refcount table. Currently I store the range size in metadata to be able to change the define in the future without affecting existing pools.
I've got to the end of the patch, so the last batch for now.
	dl->dr_override_state = DR_OVERRIDDEN;
	if (BP_IS_HOLE(bp)) {
		dl->dr_overridden_by.blk_phys_birth = 0;
		dl->dr_overridden_by.blk_birth = 0;
Is this correct? This case is equivalent to a hole being punched after being written, so as I understand it, it should get the birth time of this TXG to replicate correctly. An exception can be made only if it was a hole and still remains a hole, in which case we may keep the pointer intact.
Does that look better?
--- a/module/zfs/dmu.c
+++ b/module/zfs/dmu.c
@@ -2272,12 +2272,11 @@ dmu_brt_addref(objset_t *os, uint64_t object, uint64_t offset, uint64_t length,
dl->dr_override_state = DR_OVERRIDDEN;
if (BP_IS_HOLE(bp)) {
dl->dr_overridden_by.blk_phys_birth = 0;
- dl->dr_overridden_by.blk_birth = 0;
} else {
dl->dr_overridden_by.blk_phys_birth =
BP_PHYSICAL_BIRTH(bp);
- dl->dr_overridden_by.blk_birth = dr->dr_txg;
}
+ dl->dr_overridden_by.blk_birth = dr->dr_txg;
/*
* When data in embedded into BP there is no need to create
Probably, but I am not completely confident in this db_dirty_records area. Additionally, if both source and destination are holes, I guess this could be skipped completely to not increase metadata and replication traffic, but with the same caution.
	    length, RL_WRITER);
	srclr = zfs_rangelock_enter(&srczp->z_rangelock, srcoffset,
	    length, RL_READER);
}
Should there also be a case for srczp == dstzp, comparing srcoffset to dstoffset, to avoid a deadlock if somebody tries to do it simultaneously in opposite directions?
Good catch!
module/zfs/zfs_vnops.c
	    length, RL_READER);
}

srcblksz = srczp->z_blksz;
I feel somewhere here there must be a check that the destination block size is equal to the source's, or that the destination file consists of only one block, or that it will be overwritten completely. zfs_grow_blocksize() used below will silently do nothing otherwise, which will probably result in data corruption.
module/zfs/zio.c
		return (zio);
	}
	if (BP_GET_TYPE(bp) != DMU_OT_PLAIN_FILE_CONTENTS &&
	    BP_GET_TYPE(bp) != DMU_OT_ZVOL) {
Should this be BP_IS_METADATA() instead, for symmetry with dmu_brt_readbps()?
	lr->lr_length = len;
	lr->lr_blksz = blksz;
	lr->lr_nbps = partnbps;
	memcpy(lr->lr_bps, bps, sizeof (bps[0]) * partnbps);
Do we have some guarantees that blocks from a different dataset won't be freed before the replay? For example in the case of an encrypted dataset (I just don't remember whether non-encrypted datasets are replayed strictly on import or possibly later).
IIRC, they are replayed at mount, so it could be late
For now I changed the code to only use ZIL when we clone within a single dataset. When sync=always is set on the destination dataset and we are cloning between separate datasets we call txg_wait_synced() before returning.
I suspect unless the file is really big it may be faster to return an error and just copy it than to wait for the TXG commit.
I've only started working my way through the PR, but I thought I'd go ahead and post at least some initial feedback. I'll keep working through it as I can. For other reviewers I'd suggest starting with the Big Theory comment at the top of brt.c. It does a great job explaining the high level design.
cmd/zdb/zdb_il.c
	    tab_prefix, (u_longlong_t)lr->lr_foid, (u_longlong_t)lr->lr_offset,
	    (u_longlong_t)lr->lr_length, (u_longlong_t)lr->lr_blksz);

	for (i = 0; i < lr->lr_nbps; i++) {
Suggested change:
-	for (i = 0; i < lr->lr_nbps; i++) {
+	for (unsigned int i = 0; i < lr->lr_nbps; i++) {
module/icp/include/sys/bitmap.h
@@ -0,0 +1,183 @@
/*
Is there a reason this needs to be added to the icp include directory? It'd be a bit more convenient to place it under the top level include/sys/ path. It would also be nice to either prune the header down to what's actually used, or add the missing bitmap.c source so all the publicly defined functions exist.
This still should be moved. It builds fine on Linux under include/sys/.
module/zfs/brt.c
Outdated
* algorithm or even with checksumming disabled. | ||
* | ||
* As mentioned above, the BRT entries are much smaller than the DDT entries. | ||
* To uniquely identify a block we just need its vdevid and offset. We also |
I'd suggest using vdev id instead of vdevid throughout.
module/zfs/brt.c
Outdated
* references? To avoid this dilemma BRT cooperates with DDT - if a given block | ||
* is being cloned using BRT and the BP has the D (dedup) bit set, BRT will | ||
* lookup DDT entry and increase the counter there. No BRT entry will be | ||
* created for a block that resides on a dataset with deduplication turned on. |
I think you mean no BRT entry will be created for a block that is already in the dedup table, right? Not that this is based on the current setting of the dedup property. It seems to me this also requires that once a block has been added to the BRT it can't be added to the DDT. Or put another way, the contents of the BRT and DDT are required to be mutually exclusive.
I'll try to clarify that. You are right that we may have a block residing on a dedup-enabled dataset and not having the D bit set (i.e. it was created before dedup was turned on).
Also note that it is impossible to set the D bit after block creation, so "once a block has been added to the BRT it can't be added to the DDT" cannot happen, as there is no way to add a block to the DDT after it was created.
 * ssize_t fclonefile(int srcfd, int dstfd);
 * ssize_t fclonerange(int srcfd, off_t srcoffset, size_t length,
 *     int dstfd, off_t dstoffset);
 *
For reference, on Linux the interfaces are:
ssize_t copy_file_range(int fd_in, off64_t *off_in, int fd_out, off64_t *off_out, size_t len, unsigned int flags);
int ioctl(int dest_fd, FICLONERANGE, struct file_clone_range *arg);
int ioctl(int dest_fd, FICLONE, int src_fd);
https://man7.org/linux/man-pages/man2/copy_file_range.2.html
https://man7.org/linux/man-pages/man2/ioctl_ficlone.2.html
At least on Linux there's no requirement that the start and length offsets be block-aligned. If unaligned, we'll have to copy the beginning and end, at least on Linux. My feeling is it'd be best to handle this in the common code so all the platforms could benefit from it. But it could be platform-specific if we need it to be.
I haven't yet gone through all the expected errnos from the man pages above, but let's make sure they match up with what zfs_clone_range() returns if possible.
module/zfs/brt.c
Outdated
* - The block we want to clone may have been modified within the same | ||
* transaction group. We could potentially clone the previous version of the | ||
* data, but that doesn't seem right. We treat it as the previous case and | ||
* return an error. |
For this case, and the one above, I think we're going to have to handle waiting on the txg sync and retrying somewhere in the ZFS code. Based on established errnos from the man pages, any existing callers aren't going to know what to do with EAGAIN.
Excellent to see this feature being worked on! I just posted a request for "fast-copying" between datasets (#13516) because I had assumed any reflink style support wouldn't initially handle it, but it's good to see it mentioned here. In that case my issue can probably be closed. To be clear, in my issue I'm assuming automatic "fast-copying" (cloning) rather than having to explicitly call

One thing I'd like to discuss from my issue is copying between datasets where certain settings differ, with the main ones being

If a target dataset has a

Meanwhile

To cover these I propose a new setting, probably better named

There may be other settings that need to be considered, but these are the ones I use that seem applicable to cloning. The aim is to allow users to ensure that cloning is not used to produce files in a dataset that are less redundant, less compressed etc. than files that are copied normally.
So, I tried a first pass at wiring up the Linux copy/clone range calls, and found a problem. You see, Linux has a hardcoded check in the VFS layer for each of these calls that:
That is, it attempts to explicitly forbid cross-filesystem reflinks in a VFS check before it ever gets to our code. The options I see for dealing with this are all pretty gross.
Thoughts?
If the aim is to clone as many copy/move operations as possible, then should we be worrying about explicit cloning commands in the first place? I think the ideal scenario is to avoid the need for those calls entirely, by detecting when a file is being copied between datasets in the same pool (including the same dataset) so we can use cloning instead. This way programs don't need to make any explicit cloning calls, since many don't/won't, so picking this up automatically on the ZFS side would be best, as it would mean more files "deduplicated" with no extra work by users or programs.
Changing the scope of this from "userland generated reflinking of ranges" to "reimplementing dedup inline" would significantly impact how useful it is, IMO. In particular, it would go from "I'd use it" to "I'd force it off". The goal here is not (I believe) "maximize dedup on everything"; the goal here is "allow userland to specify when something is wanted to be a CoW copy of something else, because that's where most of the benefit is compared to trying to dedup everything".
This seems like a huge overreaction; ZFS provides more than enough tools to control redundancy already, so there's simply no advantage to maintaining multiple copies of the same file beyond that (so long as the target dataset's settings are respected if necessary, see above). I'm very much of the opposite position; if this feature is manual only, then I guarantee you I will almost never use it, because so many tools simply do not support cloning even on filesystems that support it. If I have to jump to the command line to do it manually outside of a tool, or discard and replace copies after the fact, then that's an excellent way to guarantee that I'll almost never do it. While there's certainly potential for offline deduplication scripts to make this a bit easier, e.g. scan one or more datasets for duplicates and replace them with clones of one of them, why force it to be done retroactively? Otherwise it's limiting to just those with more specific needs, like cloning VMs or similar (where cloning datasets isn't suitable). And if the cloning system calls aren't going to work for cross-dataset cases anyway, then automatic may be the only way to go regardless, which is why I mentioned it; at the very least you could still turn it off by default, then turn it on temporarily when you want to clone to another dataset.
I don't know if I'd say they're not going to work...
It's more inconvenient, but it's certainly not unworkable. I'm not saying there's no use for anyone if it's used to replace the existing inline dedup implementation - further up the thread, I advocate for doing just that, and implementing similar functionality to allow you to do inline dedup on send/recv with this. But personally, the performance tradeoffs of inline dedup aren't worth it to me, especially compared to doing it as postprocessing later as needed, or with explicit notification, so if that was the only functionality, I would not be making use of it.
My understanding from reading the man page and kernel source code is that this is indeed a problem with all ioctl-based file clone/dedup calls, but not with the newer, more generic

If the src and dst file are from zfs but not on the same sb with

Still not optimal, since most tools that claim to be reflink compatible will first use the ioctl and only might fall back to copy_file_range.
coreutils (8.30) on 5.4 was where I saw EXDEV without ever executing our

So even if the kernel provides

(Not that I'm not ecstatic to be wrong and that we can have our cake and eat it too, and also amused that Oracle originated the patch, but I think we still get to have a fallback, albeit a much simpler one than what I implemented, for those cases.)

e: anyway, i'll polish up the thing I have for making this play nice under Linux with

e: I actually just tried coreutils git and it also refused cp --reflink with EXDEV, so I'm guessing their autodetection handling is...incorrect, because actually calling copy_file_range works great.
Huh, did I really never post anything more in here? Rude of me.

It seems the consensus is Linux should change here, and that coreutils can't do anything to make the behavior I write about there better without violating correctness, but I put the odds of that as infinitely approaching 0 without quite getting there, so I suppose we'll be stuck with

I've been busy and haven't done what I wanted to, which was extend the
i_sb is the superblock of the filesystem. Do you think it is actually lying, if the data can be linked freely across the pool? I wonder if it would be interesting to find the btrfs discussion on the topic - why they implemented it that way.
It's probably worth reading about some side effects of how btrfs handles subvolumes before thinking that might be a good route. |
So if leveraging the system clone call isn't an option, what information do we have to work with? My thinking was that if we can know the source dataset, target dataset, and the file/inode being copied or moved, then this should still be enough for ZFS to decide whether to clone or not, without having to be explicitly told? I don't know enough about the file system calls to know how easy it is to get that information though; I guess I was just thinking that if we know a file is being copied within the same dataset, or copied/moved between two datasets under the same (or no) encryption root, then ZFS can simply decide to clone automatically based on whatever rules are set on the target (e.g. always clone where possible, only clone if compression/recordsize etc. is the same, or never clone). It's never really made sense to me why an explicit call should be required to clone in the first place, because if you trust a filesystem enough to store your data then you shouldn't need to create extra redundancy by default, especially on ZFS. It should still probably be an opt-in feature, as administrators know their pools the best, but the three main options I outlined should be sufficient? Then if the clone system calls are improved in future, we could add a fourth setting "on" meaning cloning is permitted, but only when the appropriate system call is used? Actually thinking about it, this option could also be available immediately, as even if the system calls won't work for now, we could still presumably have a ZFS specific command to begin with (e.g. something under |
The whole reason why reflink exists, and why it is an explicit operation, is because:
And IMHO if you want to do things implicitly, ZFS already has dedup. Block Cloning/Reflink is purposely built around being an explicit operation so that the kernel knows the intention of userspace. |
I did another ZIL-focussed review.
Apologies for the formatting, I did it offline and then copy-pasted into GitHub.
Given the above commentary I take it this was not tested on Linux? We got reports of data and pool corruption in FreeBSD after the import which landed the feature, for example:
There is also a report claiming corruption can show up even with the feature disabled, I don't know if it is related yet. |
It was not. There are also no test cases in ZTS for this. We really should get those implemented sooner rather than later. A few bugs have already been found in this from a mix of static analysis and people attempting to adapt it for use on Linux. |
Given the above I don't think the feature is ready for prime-time and consequently should be reverted for the time being. |
we have these commits, they came with the merge. as noted above, issues are still there. |
Let's start with a bug report in a separate issue. |
Let me summarize what I found, as a lot of FUD is being spread. Reported corruptions when block cloning was disabled, AFAIU, were due to EXDEV leaking to userland's cp(1) and not being handled in the kernel. This resulted in cp(1) creating empty files. Not a data corruption inside ZFS. This was due to OpenZFS being merged to FreeBSD before plumbing was finished: I can confirm there is data corruption when block cloning is enabled. It happens for very small files where embedded blocks are used. I'm overwriting blk_phys_birth, which for embedded blocks is part of the payload. I'm also working on adding tests, but I'd like to be able to have more complete coverage and currently there is no userland tool apart from cp(1) that uses copy_file_range(2). Maybe integrating pjdfstest would be beneficial? I could extend the fstest tool to allow testing copy_file_range(2) with different offsets and sizes. I hope that helps. PS. I'm currently in the middle of spring break with my family, mostly traveling, so I can answer with delays. If someone wants to commit the fix to FreeBSD directly, be my guest. |
Don't overwrite blk_phys_birth, as for embedded blocks it is part of the payload. Reviewed-by: Richard Yao <[email protected]> Reviewed-by: Brian Behlendorf <[email protected]> Signed-off-by: Pawel Jakub Dawidek <[email protected]> Issue #13392 Closes #14739
Hi @pjd, Any new on the porting of this to linux (client side) ? Thanks |
Don't overwrite blk_phys_birth, as for embedded blocks it is part of the payload. Reviewed-by: Richard Yao <[email protected]> Reviewed-by: Brian Behlendorf <[email protected]> Signed-off-by: Pawel Jakub Dawidek <[email protected]> Issue openzfs#13392 Closes openzfs#14739
Hi all, I have a couple of questions (sorry if they were already asked):
Thanks. |
I think that's a reasonable one-sentence explanation, that I've used myself too. What actually happens is that
There's very little about block cloning in the docs, so no.
There's no clear indicator for block cloning specifically. The logical/physical birth times of a block can be an indicator:
Yes, it's all a bit limited at the moment. |
To expand on this a bit:
coreutils...9.1, I believe it was?...changed the default from
I thought dedup and BRT blocks had a different mismatch about pbirth/lbirth where one of them as pbirth < lbirth and the other was pbirth > lbirth - did that change, or is my memory completely out of it? |
Yeah, since 5.19 it checks if the
Yes, 9.1.
Right now they're both pbirth <= lbirth. It's been like that for at least as long as I've been working on dedup, but that's only this year :) |
Do some workarounds exist for the
I missed the P/L birth difference, thank you.
No worries, it remains a great feature for ZFS! Thanks to everybody involved in it. |
The above match my observations, thanks for the very clear recap. |
I'm not working on this in any capacity, but when I talked about it with people before, I was advocating having something optionally like the old send -D case ish, where you could optionally assemble your own mapping table of things that you might BRT against from your pool and then on recv compare the incoming things against it and proactively do the thing. If you added a backchannel, you could also theoretically have an incremental stream that only contained a new instruction of "BRT against thing matching this", but that'd be a security problem and also a problem for the result being not receivable except on the other end, races against the blocks being removed before you BRT against them, or a long tail of other edge cases. I think it'd be doable if someone wanted to write it, though. The other thing that was discussed was receiving into dedup=on and having a knob that let you lob everything from the DDT into the BRT eventually, which wouldn't cover every case. |
New Features - Block cloning (#13392) - Linux container support (#14070, #14097, #12263) - Scrub error log (#12812, #12355) - BLAKE3 checksums (#12918) - Corrective "zfs receive" - Vdev and zpool user properties Performance - Fully adaptive ARC (#14359) - SHA2 checksums (#13741) - Edon-R checksums (#13618) - Zstd early abort (#13244) - Prefetch improvements (#14603, #14516, #14402, #14243, #13452) - General optimization (#14121, #14123, #14039, #13680, #13613, #13606, #13576, #13553, #12789, #14925, #14948) Signed-off-by: Brian Behlendorf <[email protected]>
New features: - Fully adaptive ARC eviction (openzfs#14359) - Block cloning (openzfs#13392) - Scrub error log (openzfs#12812, openzfs#12355) - Linux container support (openzfs#14070, openzfs#14097, openzfs#12263) - BLAKE3 Checksums (openzfs#12918) - Corrective "zfs receive" (openzfs#9372) Signed-off-by: Brian Behlendorf <[email protected]>
Motivation and Context
Block Cloning allows cloning a file (or a subset of its blocks) into another (or the same) file by just creating additional references to the data blocks, without copying the data itself. Block Cloning can be described as fast, manual deduplication.
Description
In many ways Block Cloning is similar to the existing deduplication, but there are some important differences:
As mentioned above, the BRT entries are much smaller than the DDT entries. To uniquely identify a block we just need its vdevid and offset. We also need to maintain a reference counter. The vdevid will often repeat, as there is a small number of top-level VDEVs and a large number of blocks stored in each VDEV. We take advantage of that to reduce the BRT entry size further by maintaining one BRT for each top-level VDEV, so we can then have only offset and counter as the BRT entry.
Minimizing free penalty.
Block Cloning allows cloning any existing block. When we free a block there is no hint in the block pointer whether the block was cloned or not, so on each free we have to check if there is a corresponding entry in the BRT. If there is, we need to decrease the reference counter. Doing a BRT lookup on every free can be expensive, as it may require additional I/Os if the BRT doesn't fit into memory. This is the main problem with deduplication, so we've learned our lesson and try not to repeat the same mistake here. How do we do that? We divide each top-level VDEV into 64MB regions. For each region we maintain a reference counter that is the sum of the reference counters of all cloned blocks whose offsets fall within the region. This creates a regions array of 64-bit numbers for each top-level VDEV. The regions array is always kept in memory and updated on disk in the same transaction group as the BRT updates, to keep everything in sync. We can keep the array in memory because it is very small: with 64MB regions and a 1TB VDEV, the array requires only 128kB of memory (we may decide to decrease the region size in the future). Now, when we want to free a block, we first consult the array. If the counter for the whole region is zero, there is no need to look for a BRT entry, as there isn't one for sure. Only if the counter for the region is greater than zero do we do a BRT lookup, and if an entry is found we decrease the reference counters in the entry and in the regions array.
The regions array is small, but can potentially be larger for very large VDEVs or smaller regions. In that case we don't want to rewrite the entire array on every change. We therefore divide the regions array into 128kB chunks and keep a bitmap of dirty chunks within a transaction group. When we sync the transaction group we can then update only the parts of the regions array that were modified. Note: keeping track of the dirty parts of the regions array is implemented, but updating only parts of the regions array on disk is not yet implemented - for now we update the entire regions array if there was any change.
The implementation tries to be economic: if BRT is not used, or no longer used, there will be no entries in the MOS and no additional memory used (eg. the regions array is only allocated if needed).
Interaction between Deduplication and Block Cloning.
If both functionalities are in use, we could end up with a block that is referenced multiple times in both the DDT and the BRT. When we then freed one of the references we couldn't tell where it belongs, so we would have to decide which table takes precedence: do we first clear DDT references or BRT references? To avoid this dilemma, the BRT cooperates with the DDT - if a given block is being cloned using the BRT and the BP has the D (dedup) bit set, the BRT will look up the DDT entry and increase the counter there. No BRT entry is created for a block that resides on a dataset with deduplication turned on. The BRT may be more efficient for manual deduplication, but if the block is already in the DDT, then creating an additional BRT entry would be less efficient. This clever idea was proposed by Allan Jude.
Block Cloning across datasets.
Block Cloning is not limited to cloning blocks within the same dataset. It is possible (and very useful) to clone blocks between different datasets.
One use case is recovering files from snapshots. By cloning the files into the dataset we need no additional storage. Without Block Cloning we would need additional space for those files.
Another interesting use case is moving the files between datasets (copying the file content to the new dataset and removing the source file). In that case Block Cloning will only be used briefly, because the BRT entries will be removed when the source is removed.
Note: currently it is not possible to clone blocks between encrypted datasets, even if those datasets use the same encryption key (this includes snapshots of encrypted datasets). Cloning blocks between datasets that use the same keys should be possible and should be implemented in the future.
Block Cloning flow through ZFS layers.
Note: Block Cloning can be used both for cloning file system blocks and ZVOL blocks. As of this writing no interface is implemented that allows for ZVOL blocks cloning.
Depending on the operating system there might be different interfaces to clone blocks. On FreeBSD we have two syscalls:
Even though fclonerange() takes byte offsets and length, they have to be block-aligned.
Both syscalls call the OS-independent zfs_clone_range() function. This function was implemented based on zfs_write(), but instead of writing the given data we first read block pointers from the source file using the new dmu_read_l0_bps() function. Once we have BPs from the source file we call the dmu_brt_addref() function on the destination file. This function allocates BPs for us. We iterate over all source BPs. If the given BP is a hole or an embedded block, we just copy the BP. If it points to real data, we place this BP on a BRT pending list using the brt_pending_add() function.
We use this pending list to keep track of all BPs that got new references within this transaction group.
Some special cases to consider and how we address them:
When we free a block we have an additional step in the ZIO pipeline where we call the zio_brt_free() function. We then call brt_entry_decref(), which loads the corresponding BRT entry (if one exists) and decreases its reference counter. If this is not the last reference, we stop the ZIO pipeline here. If this is the last reference, or the block is not in the BRT at all, we continue the pipeline and free the block as usual.
At the beginning of spa_sync(), when there can be no more block cloning but before frees are issued, we call brt_pending_apply(). This function applies all the new clones to the BRT table: we load BRT entries and update their reference counters. To sync new BRT entries to disk we use the brt_sync() function. This function syncs all dirty top-level-vdev BRTs, regions arrays, etc.
Block Cloning and ZIL.
Every clone operation is divided into chunks (similar to write) and each chunk is cloned in a separate transaction. To keep ZIL entries small, each chunk clones at most 254 blocks, which keeps each ZIL entry at about 32kB. Replaying a clone operation is different from a regular clone operation, as when we log a clone operation we cannot use the source object (it may reside on a different dataset), so we log the BPs we want to clone.
The ZIL is replayed when we mount the given dataset, not when the pool is imported. Taking this into account, it is possible that the pool is imported without mounting datasets and the source dataset is destroyed before the destination dataset is mounted and its ZIL replayed.
To address this situation we leverage the zil_claim() mechanism, where ZFS parses all the ZILs on pool import. When we come across TX_CLONE_RANGE entries, we bump the reference counters for their BPs in the BRT, and then on mount and ZIL replay we just attach the BPs to the file without bumping the reference counters again.
Note it is still possible that after zil_claim() we never mount the destination, so we never replay its ZIL and we destroy the dataset. That would leave us with leaked references in the BRT. We address that too, as ZFS gives us a chance to clean this up on dataset destroy (see zil_free_clone_range()).
How Has This Been Tested?
I have a test program that can make use of this functionality that I have been using for manual testing.
Types of changes
Checklist:
Signed-off-by