Cannot restore backups to encrypted pools #286
@artw can I get the node daemonset (openebs-zfs-node-xxxx) logs from the kube-system namespace?
@pawanpraka1 that was fast, thank you! I did not know where to look.
Any workarounds?
Can you share …
The pool is encrypted with aes-256-gcm; the snapshot is not (I can recv it on another box with encryption=off and it stays off).

Snapshot metadata:

```json
{
  "metadata": {
    "name": "pvc-51791429-c865-4439-8c76-387d335c8cd3",
    "namespace": "openebs",
    "uid": "50365668-58ba-45d2-9339-b23c684b4061",
    "resourceVersion": "1703137",
    "generation": 2,
    "creationTimestamp": "2021-02-05T08:16:01Z",
    "labels": {
      "kubernetes.io/nodename": "con-d1",
      "velero.io/namespace": "redis-test"
    },
    "finalizers": [
      "zfs.openebs.io/finalizer"
    ],
    "managedFields": [
      {
        "manager": "zfs-driver",
        "operation": "Update",
        "apiVersion": "zfs.openebs.io/v1",
        "time": "2021-02-05T08:16:01Z"
      }
    ]
  },
  "spec": {
    "ownerNodeID": "con-d1",
    "poolName": "data/k3s/pv",
    "capacity": "5368709120",
    "recordsize": "4k",
    "compression": "on",
    "dedup": "off",
    "volumeType": "DATASET",
    "fsType": "zfs"
  },
  "status": {
    "state": "Ready"
  }
}
```

The data (pipe through base64 and gunzip):
I can do recv (without -F) on the same pool fine, and it inherits encryption from the parent.
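A minimal sketch of the two receives being compared here (the `backup.stream` file and the `restored-test` name are placeholders; the pool and PVC names are taken from the metadata above):

```sh
# Receiving into a NEW child of the encrypted pool needs no -F, and the
# dataset created by recv inherits encryption from its parent:
zfs recv data/k3s/pv/restored-test < backup.stream

# Receiving over the EXISTING dataset (what the driver does on restore)
# requires -F to overwrite it, and that is the step that fails here:
zfs recv -F data/k3s/pv/pvc-51791429-c865-4439-8c76-387d335c8cd3 < backup.stream
```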
Are you restoring it on the same cluster and on the same node? If you are restoring to a different node, can you try keeping the encryption setting the same on both nodes? Also, can you try the recv with -F and let me know the error?
All nodes are the same; I can import it fine with -F.
I see: using "-F" while doing the restore is a problem for encrypted volumes, as zfs does not allow using -F on them. It was done to roll back any changes which might have been made, but it is highly unlikely that that happens. We can probably safely remove the -F option from here: https://github.com/openebs/zfs-localpv/blob/master/pkg/zfs/zfs_util.go#L345.
No, this would not work: -F is needed to overwrite a dataset. The proper solution would be to create and use another function, something like CreateVolumeFromSnapshot, that does not do zfs create first but goes straight to recv, and still creates all the needed metadata abstractions.
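In other words, the restore flow would change roughly like this (a sketch of the idea, not the driver's actual code; `pvc-xxxx` and `backup.stream` are placeholders):

```sh
# Current flow, as I read the thread: the ZFSVolume object triggers a
# zfs create first, so the receive must overwrite the dataset with -F:
zfs create -o recordsize=4k data/k3s/pv/pvc-xxxx
zfs recv -F data/k3s/pv/pvc-xxxx < backup.stream   # fails on encrypted pools

# Proposed flow: let recv create the dataset itself (inheriting encryption
# from the parent), then create the ZFSVolume object and apply properties:
zfs recv data/k3s/pv/pvc-xxxx < backup.stream
zfs set recordsize=4k data/k3s/pv/pvc-xxxx
```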
To answer this question: we need to have the ZFSVolume object, which in turn creates the volume; then we restore onto that volume. This object is needed so that we can mount the volume and do various operations like zfs property changes etc.
Good point. Yeah, if the dataset exists then we need to provide the "-F" option. Can you confirm that if we create a dataset in the encrypted pool, a restore with "-F" will fail on that dataset?
Already did; please see the snippet below my previous comment.
I tried to receive a "raw" snapshot created using …
Hmmm, so that means we can never restore data onto an already existing encrypted volume (strange!!! new learning for me), since we have to provide the -F option. However, it is possible to do that on non-encrypted volumes.
It seems that the only way you can work with existing encrypted volumes is through replication streams (incremental backups), and then you need to use -w (raw data).
By the way, zfs-localpv does not use -w. Is it configurable somewhere? A raw stream is less portable, but the snapshots stay both encrypted and compressed if the source is, which may be desirable. In my case I don't really care about encrypting the backup, since it is stored in a safe location, but the portable one is twice as large.
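For reference, this is roughly what such a raw replication stream looks like (snapshot and dataset names here are made up; as noted above, zfs-localpv does not expose -w today):

```sh
# -w (raw) sends the on-disk blocks, so the stream stays encrypted (and
# compressed) exactly as the source dataset is:
zfs send -w data/k3s/pv@snap1 | zfs recv data/k3s/pv/restored

# Incremental (replication) case, for updating an existing encrypted volume:
zfs send -w -i @snap1 data/k3s/pv@snap2 | zfs recv data/k3s/pv/restored
```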
No, not as of now. We need to find a proper solution for this that works for all cases.
@artw we need to make some design changes to fix this. The only solution I can think of right now is to restore the volume first and then create the ZFSVolume object; if the volume already exists, creating the ZFSVolume does not do anything. For now, if you want to recover the data, you can manually create the volume without encryption and then do the velero restore, as sketched below.
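Concretely, the workaround looks roughly like this (sketch only; the dataset name must match the PV the restore expects, taken here from the metadata above):

```sh
# Pre-create the target dataset inside the encrypted pool with encryption
# off, so the driver's `zfs recv -F` can overwrite it during the restore:
zfs create -o encryption=off data/k3s/pv/pvc-51791429-c865-4439-8c76-387d335c8cd3
# ...then run the velero restore as usual.
```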
@pawanpraka1 yes, this is a tricky one.
edit: wrong, it can be encrypted via send/receive with -o encryption=on. But it is something at least, thanks!
Yes @artw, we will fix it so that we don't need to restore it to a non-encrypted dataset. We are working on the design and will ping you to test the fix once it is ready.
Actually I was wrong, …
Adding the openzfs issue for reference here: openzfs/zfs#6793.
@artw I have made some design changes where we create the ZFSVolume after the restore is done. Here are the PR links (still work in progress).
I don't have an encrypted pool setup; could you help me verify this change? To verify, we need to install the below zfs operator yaml.
This operator is using my local build (amd64) image.
@pawanpraka1 wow, that was fast! Just tested it; it worked fine, and encryption is preserved after restore.
`velero restore logs redis-20210217105707`
Thanks for confirming @artw. Would you mind mentioning your use case in our Adopters file (openebs/openebs#2719)?
I'm having trouble restoring PVs with velero; it always fails with a mysterious error.
The detailed info is posted at the velero-plugin repo, as it seemed more relevant. Would be glad if someone could help me debug this.
openebs/velero-plugin#145