Detect and prevent mixed raw and non-raw sends #8308

Closed
wants to merge 4 commits

Conversation

tcaputi
Contributor

@tcaputi tcaputi commented Jan 18, 2019

Currently, there is an issue in the raw receive code where
raw receives are allowed to happen on top of previously
non-raw received datasets. This is a problem because the
source-side dataset doesn't know about how the blocks on
the destination were encrypted. As a result, any MAC in
the objset's checksum-of-MACs tree that is a parent of both
blocks encrypted on the source and blocks encrypted by the
destination will be incorrect. This will result in
authentication errors when we decrypt the dataset.

This patch fixes this issue by adding a new check to the
raw receive code. The code now maintains an "IVset guid",
which acts as an identifier for the set of IVs used to
encrypt a given snapshot. When a snapshot is raw received,
the destination snapshot will take this value from the
DRR_BEGIN payload. Non-raw receives and normal "zfs snap"
operations will cause ZFS to generate a new IVset guid.
When a raw incremental stream is received, ZFS will check
that the "from" IVset guid in the stream matches that of
the "from" destination snapshot. If they do not match, the
code will error out the receive, preventing the problem.

WIP: Still working on reverse compatibility for existing encrypted datasets.

Signed-off-by: Tom Caputi <[email protected]>

Motivation and Context

This issue was originally found and reported by Jason King.

How Has This Been Tested?

send_mixed_raw.ksh has been added to ensure the new code detects and prevents mixed raw and non-raw sends. The patch has also been tested against some sample datasets that Jason provided, and it seems to work correctly now.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (a change to man pages or other documentation)

@tcaputi tcaputi added the Status: Work in Progress Not yet ready for general review label Jan 18, 2019
@DeHackEd
Contributor

DeHackEd commented Jan 18, 2019

Umm... Last time zfs_bookmark_phys_t was changed we ended up with errata 1 (2094). Is this safe now?
Never mind, I see what you're up to.

@tcaputi
Contributor Author

tcaputi commented Jan 18, 2019

@DeHackEd We're still working on on-disk reverse compatibility, but zfs_bookmark_phys_t was going to get bigger with redacted send / recv anyway. We may have to file an errata for this, but that isn't clear yet.

@ahrens ahrens changed the title Detect and prevend mixed raw and non-raw sends Detect and prevent mixed raw and non-raw sends Jan 18, 2019
@ahrens ahrens requested review from pcd1193182 and ahrens January 18, 2019 17:42
@@ -2485,6 +2529,8 @@ dsl_crypto_populate_key_nvlist(dsl_dataset_t *ds, nvlist_t **nvl_out)
fnvlist_add_uint64(nvl, "mdn_indblkshift", mdn->dn_indblkshift);
fnvlist_add_uint64(nvl, "mdn_nblkptr", mdn->dn_nblkptr);
fnvlist_add_uint64(nvl, "mdn_maxblkid", mdn->dn_maxblkid);
fnvlist_add_uint64(nvl, "to_ivset_guid", to_ivset_guid);
fnvlist_add_uint64(nvl, "from_ivset_guid", from_ivset_guid);
Member

maybe we should only add these if they are nonzero?

Contributor Author

My thought process at the time was that this would allow us to identify source systems that have the new version of zfs that supports ivset guids, but are sending an old snapshot without one set.

Member

Does the code care about that, or is that just in case developers want to inspect the send stream with zstreamdump and make inferences based on it?

Contributor Author

That's mostly just for developers / debugging.

@tcaputi
Contributor Author

tcaputi commented Feb 12, 2019

All the code and tests should now be ready for review. I'm still working on the wording for the errata page, but I expect that PR should be made tomorrow.

@codecov

codecov bot commented Feb 13, 2019

Codecov Report

Merging #8308 into master will increase coverage by 0.18%.
The diff coverage is 87.5%.


@@            Coverage Diff             @@
##           master    #8308      +/-   ##
==========================================
+ Coverage   78.53%   78.72%   +0.18%     
==========================================
  Files         380      380              
  Lines      116007   116091      +84     
==========================================
+ Hits        91108    91391     +283     
+ Misses      24899    24700     -199
Flag Coverage Δ
#kernel 79.19% <92.92%> (+0.22%) ⬆️
#user 67.47% <16.07%> (+0.24%) ⬆️


@lundman
Contributor

lundman commented Feb 13, 2019

This is what we are all waiting on, yes? :) Will it fix that last issue?

@tcaputi
Contributor Author

tcaputi commented Feb 13, 2019

Hopefully, yes.

@tcaputi tcaputi force-pushed the mixed_raw_recvs branch 2 times, most recently from b58560d to 6f87f6d Compare February 13, 2019 16:21
@tcaputi
Contributor Author

tcaputi commented Feb 13, 2019

Related errata documentation PR for the website: zfsonlinux/zfsonlinux.github.com#45

@ahrens ahrens added Status: Code Review Needed Ready for review and testing and removed Status: Work in Progress Not yet ready for general review labels Feb 13, 2019
@tcaputi
Contributor Author

tcaputi commented Feb 20, 2019

For reference, these are the commands used to generate the test pool for the errata handling test:

truncate -s 100M /root/missing_ivset.dat
zpool create -f missing_ivset /root/missing_ivset.dat
echo 'password' | zfs create -o encryption=on -o keyformat=passphrase missing_ivset/testfs
echo 'password' | zfs create -V 10M -o encryption=on -o keyformat=passphrase missing_ivset/testvol

touch /missing_ivset/filea
yes | dd of=/dev/zvol/missing_ivset/testvol bs=512 count=1 seek=0
zfs snap missing_ivset/testfs@snap1
zfs snap missing_ivset/testvol@snap1

touch /missing_ivset/fileb
yes | dd of=/dev/zvol/missing_ivset/testvol bs=512 count=1 seek=4096
zfs snap missing_ivset/testfs@snap2
zfs snap missing_ivset/testvol@snap2
zfs bookmark missing_ivset/testfs@snap2 missing_ivset/testfs#snap2
zfs bookmark missing_ivset/testvol@snap2 missing_ivset/testvol#snap2

touch /missing_ivset/filec
yes | dd of=/dev/zvol/missing_ivset/testvol bs=512 count=1 seek=8192
zfs snap missing_ivset/testfs@snap3
zfs snap missing_ivset/testvol@snap3
zpool export missing_ivset

Contributor

@behlendorf behlendorf left a comment


This looks good. I'll see about doing some additional manual testing later this week. In the meantime, it looks like this change introduced a small memory leak, and it needs to be rebased.

This patch adds the bookmark v2 feature to the on-disk format. This
feature will be needed for the upcoming redacted sends and for an
upcoming fix for raw receives. The feature is not currently
used by any code and thus this change is a no-op, aside from the
fact that the user can now enable the feature.

Signed-off-by: Tom Caputi <[email protected]>
@tcaputi
Contributor Author

tcaputi commented Feb 21, 2019

@behlendorf your comments have been addressed. The memory leak was actually an existing leak that the new tests exposed (now fixed in this PR). I would appreciate it if someone could look at the changes related to that to make sure I didn't miss any cases. The fixes for that are all in dmu_recv_stream().

Currently, there is an issue in the raw receive code where
raw receives are allowed to happen on top of previously
non-raw received datasets. This is a problem because the
source-side dataset doesn't know about how the blocks on
the destination were encrypted. As a result, any MAC in
the objset's checksum-of-MACs tree that is a parent of both
blocks encrypted on the source and blocks encrypted by the
destination will be incorrect. This will result in
authentication errors when we decrypt the dataset.

This patch fixes this issue by adding a new check to the
raw receive code. The code now maintains an "IVset guid",
which acts as an identifier for the set of IVs used to
encrypt a given snapshot. When a snapshot is raw received,
the destination snapshot will take this value from the
DRR_BEGIN payload. Non-raw receives and normal "zfs snap"
operations will cause ZFS to generate a new IVset guid.
When a raw incremental stream is received, ZFS will check
that the "from" IVset guid in the stream matches that of
the "from" destination snapshot. If they do not match, the
code will error out the receive, preventing the problem.

This patch requires an on-disk format change to add the
IVset guids to snapshots and bookmarks. As a result, this
patch has errata handling and a tunable to help affected
users resolve the issue with as little interruption as
possible.

Signed-off-by: Tom Caputi <[email protected]>
case ZFS_PROP_IVSET_GUID:
error = zap_lookup(ds->ds_dir->dd_pool->dp_meta_objset,
ds->ds_object, DS_FIELD_IVSET_GUID, sizeof (numval),
1, &numval);
Member

Doing this without checking if the dataset is zapified won't work right, for a few reasons:

  1. on debug bits, this will panic (see zap_lockdir())
  2. on nondebug bits on other platforms (e.g. illumos), this will probably panic, since the check in zap_lockdir_impl() is unique to ZoL.
  3. on nondebug bits on ZoL, this will return EINVAL (incorrect channel program) when we want ENOENT (property not present). See zcp_handle_error().

@lundman
Contributor

lundman commented Feb 28, 2019

(it would make my life easier if new changes were new commits instead of force-pushed over the one, and only squashed right before merging at the end :) )

Contributor

@pcd1193182 pcd1193182 left a comment


Other than Matt's comments, everything looks good to me. I was mostly looking at the send/recv logic, since that's what I know best.

@tcaputi
Contributor Author

tcaputi commented Feb 28, 2019

@ahrens a new version has been pushed to address your comments.
@lundman the changes were pushed as a new commit :)

@behlendorf behlendorf added Status: Accepted Ready to integrate (reviewed, tested) and removed Status: Code Review Needed Ready for review and testing labels Mar 3, 2019
Squash this commit before merging.

Signed-off-by: Tom Caputi <[email protected]>
Squash this before merging

Signed-off-by: Tom Caputi <[email protected]>
behlendorf pushed a commit that referenced this pull request Mar 13, 2019
This patch adds the bookmark v2 feature to the on-disk format. This
feature will be needed for the upcoming redacted sends and for an
upcoming fix for raw receives. The feature is not currently
used by any code and thus this change is a no-op, aside from the
fact that the user can now enable the feature.

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matt Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Issue #8308
behlendorf pushed a commit that referenced this pull request Mar 14, 2019
This patch attempts to address some user concerns that have arisen
since errata 4 was introduced.

* The errata warning has been made less scary for users without
  any encrypted datasets.

* The errata warning now clears itself without a pool reimport if
  the bookmark_v2 feature is enabled and no encrypted datasets
  exist.

* It is no longer possible to create new encrypted datasets without
  enabling the bookmark_v2 feature, thus helping to ensure that the
  errata is resolved.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Issue #8308
Closes #8504
@JMoVS
Contributor

JMoVS commented Mar 25, 2019

Is there any way for an existing pool whose top-level dataset is encrypted to transfer the data on the pool? Or will I have to back up the pool, destroy it, and start fresh?

@tcaputi
Contributor Author

tcaputi commented Mar 25, 2019

Hmmm. That is a good point. @behlendorf do you know of a way to do this?

@behlendorf
Contributor

Good question. Sorry no, I can't think of a great way to do this.

@JMoVS
Contributor

JMoVS commented Mar 25, 2019

ugh, ok, thx. Note to self: Never create root datasets with experimental features 😆

@JMoVS
Contributor

JMoVS commented Mar 26, 2019

@kpande It is possible to create a different encryption root, but that doesn't help me in getting rid of the top dataset. I think this should maybe be made clear for Admins in Errata #4? Like "if you have a top-level encryptionroot, you can either create a new one below it and live with the errata or backup the pool and recreate it with the bookmark v2 feature"
