DietPi-Backup | Add support for compressed backup/restore #816

Closed
Fourdee opened this issue Mar 17, 2017 · 8 comments

Fourdee (Collaborator) commented Mar 17, 2017

E.g. mybackup.zip, to any location (for the purpose of file systems that do not support symlinks/permissions).

https://dietpi.com/phpbb/viewtopic.php?p=6188#p6188

Fourdee self-assigned this Mar 17, 2017
Fourdee changed the title DietPi-Backup | Add support for compressed save → DietPi-Backup | Add support for compressed backup/restore Mar 17, 2017
Invictaz commented May 6, 2017

Yeah, this is what I'm looking for :)

Invictaz commented

Any updates on this? Maybe PiClone from the other issue?

MichaIng (Owner) commented Apr 5, 2018

Ref: https://help.ubuntu.com/community/BackupYourSystem/TAR
https://wiki.archlinux.org/index.php/Full_system_backup_with_tar

The problem is that using tar or any other compression tool directly results in everything being backed up/transferred instead of just the changed files. Ideally the tool would compare the state of the files already inside the archive and then, like rsync, add only the changed files to it. But practically, as far as I could find, this requires the whole archive to be unpacked, compared and repacked again, which is a CPU-, disk-I/O- and storage-intensive operation until it finishes. I didn't find any method that does this nicely file by file, preferably in RAM, without the need to unpack and repack everything at once.

Running rsync first, then separately packing with tar/bzip2/... and removing the unpacked backup folder, does not sound like a good trade either. On the next backup everything would need to be unpacked (without losing time stamps), rsynced and repacked again, and the same applies when restoring a backup. Thus, to be safe, you would need twice the raw data size as backup drive capacity, completely destroying the advantage of the smaller backup size; see the sketch below.
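A minimal sketch of that two-step approach, assuming /mnt/backup as the backup drive mount point (a hypothetical path), illustrates why the drive must briefly hold both the archive and the raw copy:

```bash
# Step 1: unpack the previous archive so rsync can compare time stamps.
mkdir -p /mnt/backup/data
tar -xpzf /mnt/backup/backup.tar.gz -C /mnt/backup/data

# Step 2: sync only the changed files into the unpacked tree.
rsync -aHAX --delete \
    --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*"} \
    / /mnt/backup/data/

# Step 3: repack and remove the unpacked tree. Until the "rm" completes,
# the drive holds the archive plus the raw copy, i.e. roughly double the size.
tar -cpzf /mnt/backup/backup.tar.gz.new -C /mnt/backup/data .
mv /mnt/backup/backup.tar.gz.new /mnt/backup/backup.tar.gz
rm -R /mnt/backup/data
```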

Finally, I do not see a reasonable advantage to compressed backups, based on the possibilities I found. If backup drive I/O, backup time and resource usage really don't matter, but only the final backup size does, then the only adequate solution would be to back up/transfer and compress the whole system directly via e.g. tar, overwriting the old archive (as shown in the links above and sketched below). But this is also not preferable on SBCs, where the system is on an SD card, as the whole system would also be restored (instead of synced), leading to more SD card writes/wear. This is an essential factor, as corrupted SD cards are one of the most common causes of total server loss.
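For reference, the one-shot full-system archive from the linked guides looks roughly like this (a sketch; the target path is an assumption). Every run re-reads and re-compresses the entire system, no matter how little has changed:

```bash
# Full-system archive as in the Ubuntu/Arch wiki guides. --one-file-system
# skips /proc, /sys, /dev and other mounts; the backup drive is additionally
# excluded explicitly so the archive does not recurse into itself.
tar -cvpzf /mnt/backup/backup.tar.gz \
    --one-file-system \
    --exclude=/mnt/backup \
    /
```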

Tools that might offer what I am looking for (compressed + incremental backups): https://wiki.archlinux.org/index.php/Synchronization_and_backup_programs#Incremental_backups

Fourdee (Collaborator, Author) commented Apr 5, 2018

@MichaIng

Yep, agree 👍
rsync cannot write into a compressed archive on the fly; only the end result can be compressed, defeating the purpose by requiring ~2x the backup size plus additional resource usage.
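(Worth noting: rsync's own -z/--compress option only compresses data in transit over the network; the files still land uncompressed on the target. A sketch with a hypothetical NAS target:)

```bash
# -z compresses the network stream only; the files on the target stay
# uncompressed, so the backup at rest is not an archive.
rsync -azx --delete / backupuser@nas.local:/backups/dietpi/
```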

MichaIng (Owner) commented May 4, 2018

Testing: https://github.com/borgbackup/borg

  • Actively developed
  • In the Debian repo
  • Python-based
  • Allows everything needed as discussed above.
  • LZMA compression takes a while 🤣 ...
  • ... the exclude file needs a different syntax (than rsync's); continuing tomorrow (rough test sketch below)
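A rough sketch of the test setup, assuming a repository under /mnt/backup (hypothetical path; archive names are placeholders, and borg's exclude patterns are shell-style globs rather than rsync filter rules):

```bash
# Create a repository on the backup drive.
borg init --encryption=none /mnt/backup/borg-repo

# borg deduplicates on chunk level, so repeated runs only add changed data.
# LZMA is slow; lz4 (borg's default) or zstd are the faster options.
borg create --stats --compression lzma \
    --exclude '/proc/*' --exclude '/sys/*' --exclude '/dev/*' \
    /mnt/backup/borg-repo::dietpi-{now} /

# Keep the last 7 daily archives, prune the rest.
borg prune --keep-daily 7 /mnt/backup/borg-repo
```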

MichaIng (Owner) commented Aug 2, 2018

Okay, had another look. Actually, from my point of view this doesn't make sense for DietPi-Backup. Using raw rsync has many advantages regarding speed, RAM usage, disk usage during the process and overall reads/writes to the backup drive. My impression is that there are only rare cases where you really want your backups/files compressed, at least after rethinking all of the above. And the need to install and handle an additional software package, with dependencies and wrapper code required, just does not make up for it.

If one really needs compressed backups, then the above third-party tool(s) can be used, but I suggest we do not build a wrapper for this into dietpi-backup. The best we can do, from my point of view, is to add them to dietpi-software.

Will mark this as closed; please reopen if there is a different opinion.

MichaIng closed this as completed Aug 2, 2018
megusta-01 commented

Hey MichaIng,
could you maybe rethink borg? I want to use it as a borg server; this is what I imagine:

  • use it as an alternative to DietPi-Backup -> without compression, backups stored directly on the Pi
  • back up my clients (also Pi configs) automatically with compression -> stored externally (on a NAS) and in the cloud (RClone needed, or maybe not)
  • limit borg with dietpi-process_tool

I found some solutions for how to handle it:
https://github.com/witten/borgmatic

This one is in alpha status and not yet released, but has a web GUI. Maybe you have some better solutions?
https://github.com/marcpope/borgbackupserver
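For the server use case, plain borg over SSH already covers most of this; a sketch with hypothetical host, user and path names:

```bash
# On each client: create the repository on the Pi over SSH
# (the "borg" user, "dietpi.local" host and /srv/borg-repos are hypothetical).
borg init --encryption=repokey ssh://borg@dietpi.local/srv/borg-repos/client1
borg create --compression zstd \
    ssh://borg@dietpi.local/srv/borg-repos/client1::{hostname}-{now} \
    /etc /home

# On the Pi: lock the client's SSH key to borg and its repo path via
# ~borg/.ssh/authorized_keys:
# command="borg serve --restrict-to-path /srv/borg-repos/client1",restrict ssh-ed25519 AAAA... client1
```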

MichaIng (Owner) commented Jul 10, 2019

@megusta-01
Basically it should work well when following the official install instructions for Debian. By the way, dietpi-process_tool has been merged into dietpi-services with v6.25, and it will not handle borg, even if borg is active as a service/daemon.

We will keep dietpi-backup as a simple integrated solution, perhaps extended to allow creating multiple backups, but leave them uncompressed as per the discussion above. I will not implement borg in our scripts, but I might add it as an install option to dietpi-software. Everyone is free anyway to choose DietPi-Backup or any other backup solution that works on Debian 😉.
