
Upload from Supervisor's list of backups; use HA's name #7

Open · wants to merge 2 commits into main
Conversation


@ryansch ryansch commented Mar 8, 2023

Instead of using aws s3 sync, I loop through the backups from supervisor's backup endpoint.

The backup API gives us the friendly name from the UI along with metadata we can send along to S3.

Example:
[Screenshot 2023-03-08 at 15 07 38]

I'm using this alongside https://github.com/jcwillox/hass-auto-backup to automate taking the actual backups.


ehbush commented Mar 25, 2023

Love this @ryansch. @thomasfr, are you able to verify and approve this merge request?

@thomasfr thomasfr (Owner) left a comment

First of all: LOVE IT! 🚀 I was already experimenting with getting the name and replacing it, so I'm happy to see your PR. Thank you.

See my comments; happy to discuss and hear your opinions.


readarray -t backups < <(bashio::api.supervisor GET /backups/info false .backups | jq -c '.[]')
for backup in "${backups[@]}"; do
bashio::log.debug $(echo "${backup}" | jq -C .)
@thomasfr (Owner) commented:

Could you add a more descriptive debug message, please? Just a string saying what to expect, so it's obvious if an empty string is echoed.
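A minimal sketch of what such a labelled debug line could look like (the format_backup_debug helper and its wording are illustrative, not part of the add-on):

```shell
# Illustrative helper: prefix the dump with a fixed label so that an
# empty backup entry still produces a recognizable log line.
format_backup_debug() {
  echo "Supervisor backup entry (empty if none returned): $1"
}

# In the add-on this would feed bashio::log.debug, e.g.:
#   bashio::log.debug "$(format_backup_debug "$(echo "${backup}" | jq -c .)")"
format_backup_debug '{"slug":"skuwe823"}'
```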

@@ -14,13 +19,29 @@ local_backups_to_keep="$(bashio::config 'local_backups_to_keep' '3')"
monitor_path="/backup"
jq_filter=".backups|=sort_by(.date)|.backups|reverse|.[$local_backups_to_keep:]|.[].slug"


@thomasfr (Owner) commented:

Please remove the additional empty line.

bashio::log.debug $(echo "${backup}" | jq -C .)

slug=$(echo "${backup}" | jq -r .slug)
name=$(echo "${backup}" | jq -r .name)
@thomasfr thomasfr commented Mar 30, 2023

I think we should also handle special characters. See limitations for object keys: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html#:~:text=Characters%20that%20might%20require%20special%20handling

I suggest using $name as-is in the metadata, and creating a separate variable (for instance $object_key_name) for the object key, with the characters that need special handling (per the link above) replaced or removed.
The logic could be something like:

  1. Replace spaces/runs of whitespace characters with a single _
  2. Replace all characters outside the safe set linked above with -, collapsing runs of consecutive unsafe characters into a single -.

What do you think?
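A sketch of that logic (the function name and the exact safe-character set are assumptions; the set below is alphanumerics plus the special characters Amazon documents as safe):

```shell
# Hypothetical sanitizer following the two suggested rules:
#  1. collapse any run of whitespace into a single "_"
#  2. collapse any run of characters outside the S3 safe set
#     (alphanumerics and ! - _ . * ' ( )) into a single "-"
sanitize_object_key_name() {
  echo "$1" \
    | sed -E 's/[[:space:]]+/_/g' \
    | sed -E "s/[^A-Za-z0-9!_.*'()-]+/-/g"
}

sanitize_object_key_name "Awesome backup with     : some@weird=//characters"
# → Awesome_backup_with_-_some-weird-characters
```

Note that applying the rules in this order keeps a "-" where the lone colon was (giving "_-_"), slightly different from the hand-written example result; collapsing mixed underscore/dash runs would need an extra pass.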

slug=$(echo "${backup}" | jq -r .slug)
name=$(echo "${backup}" | jq -r .name)

if ! aws s3 ls "s3://${bucket_name}/${name}.tar" >/dev/null; then
@thomasfr (Owner) commented:

I think we could/should extract the date information from the /backups/info response (the date field) and use year, month, day, and hour as the object key prefix.

For example:

{
  "backups": [
    {
      "slug": "skuwe823",
      "date": "2020-09-30T20:25:34.273Z",
      "name": "Awesome backup with     : some@weird=//characters",
      "type": "partial",
      "size": 44,
      "protected": true,
      "compressed": true,
      "content": {
        "homeassistant": true,
        "addons": ["awesome_addon"],
        "folders": ["ssl", "media"]
      }
    }
  ],
  "days_until_stale": 30
}

Together with my previous comment, this would result in:

object_key_name="Awesome_backup_with_some-weird-characters"
object_key="$year/$month/$day/$hour/$object_key_name"
# Would result in: '2020/09/30/20/Awesome_backup_with_some-weird-characters.tar'

This would also make it easier to manually delete older files via the console or to create rules based on the key, and we would not need the additional aws s3 ls (although theoretically there could still be an object with the same key, that is very unlikely).
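The prefix can be derived with plain bash substring expansion, assuming the ISO 8601 date field from /backups/info (no extra date or jq calls needed; object_key_name is assumed to be the already-sanitized name):

```shell
# The Supervisor returns ISO 8601 timestamps (YYYY-MM-DDTHH:...), so the
# prefix parts can be sliced out by fixed position.
backup_date="2020-09-30T20:25:34.273Z"
year="${backup_date:0:4}"
month="${backup_date:5:2}"
day="${backup_date:8:2}"
hour="${backup_date:11:2}"

object_key_name="Awesome_backup_with_some-weird-characters"
object_key="${year}/${month}/${day}/${hour}/${object_key_name}"
echo "${object_key}.tar"
# → 2020/09/30/20/Awesome_backup_with_some-weird-characters.tar
```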


ryansch commented Apr 27, 2023

This is still on my radar.


ehbush commented Oct 4, 2023

@ryansch and @thomasfr: Any update(s) on this? Would really love to have this feature to make this addon perfect :-)


ryansch commented Jan 3, 2024

I swear I'm not dead.
