Description
Incremental backups fail in the flattening process when using Ceph and the rsync backup driver. There are no errors in the VM log, but the backup image shows this error:
Sat Jan 18 16:11:02 2025 : Error flattening backup increments: ERROR: reconstruct_chains: Command failed:
export LANG=C
export LC_ALL=C
set -e -o pipefail; shopt -qs failglob
qemu-img rebase -u -F qcow2 -b '/var/lib/one/datastores/104/28/997b7f/disk.0.rbd2' '/var/lib/one/datastores/104/28/31cdde/disk.0.1.rbdiff'
ERROR: reconstruct_chains: [STDOUT] ""
ERROR: reconstruct_chains: [STDERR] "WARNING: Image format was not specified for '/var/lib/one/datastores/104/28/31cdde/disk.0.1.rbdiff' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
qemu-img: Could not change the backing file to '/var/lib/one/datastores/104/28/997b7f/disk.0.rbd2': Operation not supported"
/var/lib/one/remotes/datastore/rsync/increment_flatten:149:in `<main>': Unable to reconstruct qcow2 chains: WARNING: Image format was not specified for '/var/lib/one/datastores/104/28/31cdde/disk.0.1.rbdiff' and probing guessed raw. (StandardError)
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
qemu-img: Could not change the backing file to '/var/lib/one/datastores/104/28/997b7f/disk.0.rbd2': Operation not supported
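For context on the error above: when `-f` is not given, qemu-img guesses an image's format by probing its header. A qcow2 file starts with the magic bytes `QFI\xfb`, while a diff produced by Ceph's `rbd export-diff` starts with the text header `rbd diff v1\n`; anything qemu-img does not recognize falls back to raw, and a raw image cannot carry a backing file, which matches the `Operation not supported` result. Whether the `.rbdiff` increments here really are RBD diffs is an inference from the file extension, so the sketch below uses synthetic files, not data from this setup:

```python
import os
import struct
import tempfile

QCOW2_MAGIC = b"QFI\xfb"        # first 4 bytes of every qcow2 file
RBD_DIFF_V1 = b"rbd diff v1\n"  # header of Ceph's 'rbd export-diff' v1 format

def probe(path):
    """Guess an image's format from its header, the way a probe would."""
    with open(path, "rb") as f:
        head = f.read(16)
    if head.startswith(QCOW2_MAGIC):
        return "qcow2"
    if head.startswith(RBD_DIFF_V1):
        return "rbd-diff"  # qemu-img has no notion of this format...
    return "raw"           # ...so it would fall through to raw instead

# Synthetic stand-ins for the two files named in the error message.
with tempfile.TemporaryDirectory() as d:
    qcow2_path = os.path.join(d, "disk.0.rbd2")
    rbdiff_path = os.path.join(d, "disk.0.1.rbdiff")
    with open(qcow2_path, "wb") as f:
        f.write(QCOW2_MAGIC + struct.pack(">I", 3))  # magic + version field
    with open(rbdiff_path, "wb") as f:
        f.write(RBD_DIFF_V1)
    print(probe(qcow2_path), probe(rbdiff_path))  # → qcow2 rbd-diff
```

The base image probes as qcow2, but the increment does not, so `qemu-img rebase` treats it as raw and refuses to set a backing file.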
Error from FSunstone:
Error from RSunstone:
No error on VM logs:
tail -f /var/log/one/28.log
Sat Jan 18 17:25:50 2025 [Z0][VMM][I]: VM Disk successfully attached.
Sat Jan 18 17:25:50 2025 [Z0][LCM][I]: VM Disk successfully attached.
Sat Jan 18 17:25:50 2025 [Z0][VM][I]: New LCM state is RUNNING
Sat Jan 18 17:29:37 2025 [Z0][VM][I]: New LCM state is BACKUP
Sat Jan 18 17:29:48 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Sat Jan 18 17:30:16 2025 [Z0][VMM][I]: Successfully execute datastore driver operation: backup.
Sat Jan 18 17:30:17 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Sat Jan 18 17:30:18 2025 [Z0][VMM][I]: VM backup successfully created.
Sat Jan 18 17:30:18 2025 [Z0][VM][I]: New LCM state is RUNNING
Sat Jan 18 17:30:47 2025 [Z0][VM][I]: New LCM state is BACKUP
Sat Jan 18 17:30:58 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Sat Jan 18 17:31:26 2025 [Z0][VMM][I]: Successfully execute datastore driver operation: backup.
Sat Jan 18 17:31:27 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Sat Jan 18 17:31:27 2025 [Z0][VMM][I]: VM backup successfully created.
Sat Jan 18 17:31:27 2025 [Z0][VM][I]: New LCM state is RUNNING
Sat Jan 18 17:31:53 2025 [Z0][VM][I]: New LCM state is BACKUP
Sat Jan 18 17:31:59 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Sat Jan 18 17:32:01 2025 [Z0][VMM][I]: Successfully execute datastore driver operation: backup.
Sat Jan 18 17:32:02 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Sat Jan 18 17:32:02 2025 [Z0][VMM][I]: VM backup successfully created.
Sat Jan 18 17:32:02 2025 [Z0][VM][I]: New LCM state is RUNNING
Sat Jan 18 17:32:42 2025 [Z0][VM][I]: New LCM state is BACKUP
Sat Jan 18 17:32:50 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Sat Jan 18 17:32:51 2025 [Z0][VMM][I]: Successfully execute datastore driver operation: backup.
Sat Jan 18 17:32:52 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Sat Jan 18 17:32:52 2025 [Z0][VMM][I]: VM backup successfully created.
Sat Jan 18 17:32:52 2025 [Z0][VM][I]: New LCM state is RUNNING
Sat Jan 18 17:33:26 2025 [Z0][VM][I]: New LCM state is BACKUP
Sat Jan 18 17:33:32 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Sat Jan 18 17:33:33 2025 [Z0][VMM][I]: Successfully execute datastore driver operation: backup.
Sat Jan 18 17:33:34 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Sat Jan 18 17:33:34 2025 [Z0][VMM][I]: VM backup successfully created.
Sat Jan 18 17:33:34 2025 [Z0][VM][I]: New LCM state is RUNNING
Sat Jan 18 17:33:34 2025 [Z0][LCM][I]: Removing 1 backup increments
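The log above is consistent with an increment retention limit of 3 (an assumption inferred from "Removing 1 backup increments" firing only on the 4th backup, not a confirmed setting): once the chain grows past the limit, the oldest increment must be flattened into the full backup, which is exactly the step that fails. A rough model of that rotation, with the hypothetical limit named `KEEP_LAST`:

```python
# KEEP_LAST = 3 is an assumption inferred from the log, where the 4th
# backup is the first one that triggers "Removing 1 backup increments".
KEEP_LAST = 3

def take_backup(chain, n):
    """Append increment n; pop the oldest increments once over the limit.

    Popped increments are the ones the flatten step (increment_flatten's
    qemu-img rebase, failing in this report) must merge into the full backup.
    """
    chain = chain + [f"disk.0.{n}.rbdiff"]
    removed = []
    while len(chain) > KEEP_LAST:
        removed.append(chain.pop(0))
    return chain, removed

chain = []
for n in range(1, 5):  # four consecutive backups, as in the log
    chain, removed = take_backup(chain, n)
    print(n, chain, removed)
# Backups 1-3 only grow the chain; backup 4 removes disk.0.1.rbdiff,
# which is when the flattening error appears.
```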
To Reproduce
1-Configure the system and images datastores for Ceph.
2-Create a VM with 2 disks from images in datastores with TM_MAD: ceph: one from an OS-type image and the other from an empty datablock image.
3-Partition, format and mount the second disk (your VM now has 2 disks online).
4-Configure the VM for Single VM Backups, with:
5-Execute the 1st, 2nd and 3rd backups; the 4th backup will not complete.
FSunstone shows 4 increments instead of 3.
RSunstone shows 4 increments instead of 3.
After this error you can't perform any further backup on that backup image. You need to delete the affected backup image and create a new one; the flattening process will then fail again and the new image ends up in the same situation as the old one. Backups work when using a different TM_MAD, for example a shared datastore.
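For reference, the reconstruct_chains step that fails here effectively re-points each increment's backing file at its predecessor with `qemu-img rebase -u`. The sketch below is a simplified model of that walk (names are illustrative, and the real `increment_flatten` script is Ruby, not Python): the chain can only be rebuilt if every member is qcow2, so a chain whose increments probe as raw aborts at the first rebase.

```python
# Simplified model of reconstruct_chains: each image in the chain must
# accept its predecessor as a qcow2 backing file. Names are illustrative.
def reconstruct_chain(images):
    """images: list of (name, format) tuples, full backup first."""
    for (name, fmt), (backing, _) in zip(images[1:], images):
        if fmt != "qcow2":
            # Mirrors qemu-img's behavior: a raw image has no
            # backing-file field to rewrite.
            raise RuntimeError(
                f"cannot set backing file of {name!r}: Operation not supported")
        print(f"qemu-img rebase -u -F qcow2 -b {backing} {name}")

# A healthy all-qcow2 chain rebases cleanly...
reconstruct_chain([("disk.0.full", "qcow2"), ("disk.0.1", "qcow2")])
# ...but a chain whose increment probes as raw (as in this report) aborts:
try:
    reconstruct_chain([("disk.0.rbd2", "qcow2"), ("disk.0.1.rbdiff", "raw")])
except RuntimeError as e:
    print("flatten failed:", e)
```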
Checking VM configuration:
The output is as follows:
Checking information from system datastore:
Checking information from images datastore:
Entire VM information:
Ceph version running in the storage nodes:
Expected behavior
The flattening should work and the 4th backup should have succeeded.
Details
Additional context
Progress Status