Panic during zfs send pool1/dataset1 | zfs recv -v -u -s pool2/dataset2 #13937

Comments
It looks like something corrupted a metadata buffer before it was checksummed and written to disk. @512yottabytes Would you provide the backtrace that you get when this happens with …
Hello, @ryao
Sending the dataset's snapshot as raw via …
We need to find out which file on your pool is corrupted and restore it from backup (actually, the data is fine; it is just the metadata that is corrupt). A raw send does not check the contents, so the corruption simply travels from one pool to the other that way. I am busy this week. Could you contact me on Monday at around 1 pm in the OpenZFS Slack channel? I could try to help you then.
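As a hedged sketch of how one might locate the damaged object before restoring it (pool and dataset names are taken from the report; the 0xNN object number is a placeholder for whatever zpool status actually prints):

```shell
# After a scrub, list objects with permanent (checksum/metadata) errors.
zpool status -v pool1

# Errors shown as <dataset>:<0xNN> name an object number rather than a
# path. zdb can dump that object's details to help map it back to a file;
# this is a debugging aid, not something to run casually on a busy pool.
zdb -dddd pool1/dataset1 0xNN
```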
I'm running into a panic in a similar situation. I suspect it may be the same cause.

What distribution (with version) you are using: Arch Linux (kernel Linux machine-name 6.0.6-arch1-1 #1 SMP PREEMPT_DYNAMIC Sat, 29 Oct 2022 14:08:39 +0000 x86_64 GNU/Linux)
The spl and zfs versions you are using, installation method (repository or manual compilation): 2.1.6 (installed from a non-official repository)
Describe the issue you are experiencing: Kernel panic and an unkillable zfs process during send/receive.
Describe how to reproduce the issue: Similar to the original report, but I'm doing an incremental send. The dataset is encrypted. The panic happens if I load the encryption key only on the source and not on the destination. It does not panic if the key is loaded on both ends.
Include any warnings/errors/backtraces from the system logs: dmesg output
I think you're correct. Sorry for the noise.
What's the status here, please? :) Can we put together a reproducer if it's still an issue? I'd look into it, but I need to be able to reproduce it locally; that's the most time-effective option.

Edit: I'm just blind, it's right there in the report.
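The encrypted-incremental scenario described in the earlier comment could be sketched as a local reproducer along these lines. All names are made up, it uses throwaway file-backed pools, and it assumes a raw (-w) incremental send, which the comment does not explicitly state. Requires root and an OpenZFS install; do not run on production storage.

```shell
#!/bin/sh -e
# Throwaway file-backed pools (hypothetical names).
truncate -s 512M /tmp/src.img /tmp/dst.img
zpool create srcpool /tmp/src.img
zpool create dstpool /tmp/dst.img

# Encrypted source dataset with a throwaway passphrase.
echo 'passpass' | zfs create -o encryption=aes-256-gcm \
    -o keyformat=passphrase srcpool/data

# Full raw send; the destination copy stays locked (key never loaded there).
zfs snapshot srcpool/data@snap1
zfs send -w srcpool/data@snap1 | zfs recv -u dstpool/data

# Incremental raw send into the locked destination -- per the comment
# above, this key-on-source-only combination is what panicked on 2.1.6.
zfs snapshot srcpool/data@snap2
zfs send -w -i @snap1 srcpool/data@snap2 | zfs recv -u dstpool/data

# Cleanup (only reached if no panic occurs).
zpool destroy srcpool; zpool destroy dstpool
rm /tmp/src.img /tmp/dst.img
```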
Hello, I have a permanent panic during zfs send pool1/dataset1 | zfs recv -v -u -s pool2/dataset2. It appears just after 179 GB of 183 GB is transferred. It happens on Ubuntu, Fedora, FreeBSD 13.1, OpenIndiana Hipster (OI-hipster-gui-20211031), and OmniOS (omnios-r151042).
System information

Distribution Name: Ubuntu, Fedora, FreeBSD 13.1, OpenIndiana Hipster (OI-hipster-gui-20211031), and OmniOS (omnios-r151042)
Distribution Version: Ubuntu 22.04, Fedora 36, FreeBSD 13.1, OpenIndiana Hipster (OI-hipster-gui-20211031), and OmniOS (omnios-r151042)
Kernel Version: Linux hpws 5.15.0-48-generic #54-Ubuntu SMP Fri Aug 26 13:26:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux (as for Ubuntu 22.04)
Architecture: x86_64
OpenZFS Version: zfs-2.1.4-0ubuntu0.1, zfs-kmod-2.1.4-0ubuntu0.1 (as for Ubuntu 22.04)
Describe the problem you're observing:

I have a permanent panic during zfs send pool1/dataset1 | zfs recv -v -u -s pool2/dataset2. It appears just after 179 GB of 183 GB is transferred. It also occurs regardless of whether -v and/or -s are present (zfs send pool1/dataset1 | zfs recv -u pool2/dataset2, and so on).

Source dataset, encrypted with ZFS native encryption (aes-256-gcm):
Compression = lz4
Dedup = off
Checksum = sha512
Target dataset:
Compression = lz4
Dedup = off
Checksum = on
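For completeness, the source and target properties listed above can be read back with zfs get (dataset names are the ones from this report):

```shell
# Confirm the properties quoted above on both sides of the send/recv.
zfs get -o name,property,value \
    compression,dedup,checksum,encryption \
    pool1/dataset1 pool2/dataset2
```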
The target dataset can be unencrypted or sit on top of LUKS.
I also tried sending via SSH; the panic occurs on the sender's side.
Although the error occurs on the sender's side, zfs send pool1/dataset1 > /home/user/file.zfs works without any error.

UPD 1:
Just tried cat /home/user/file.zfs | zfs recv -v -u -s pool2/dataset2 and also got a panic.

UPD 2:
Also have errors in rsync, and the same panic when trying to: …
zpool scrub pool1 and zpool scrub pool2 finished successfully without any error.

UPD 3:
Sending the dataset's snapshot as unencrypted raw via zfs send -w pool1/dataset1@--2022-13-32--25-62-62--snapshot1 | zfs recv pool2/dataset2 works well, without errors.
Sending snapshots 2 and 3 with zfs send -w pool1/dataset1@--2022-14-33--26-63-63--snapshot2 | zfs recv pool2/dataset3 gives errors in the zfs and spl modules, but the send/recv ends successfully.

Describe how to reproduce the problem:
Include any warnings/errors/backtraces from the system logs:

Panic during rsync gives errors: