Upload from Supervisor's list of backups; use HA's name #7
base: main
Conversation
First of all: LOVE IT! 🚀 I was already trying around to get the name and replace it, so happy to see your PR. Thank you.
See my comments below; happy to discuss and hear your opinions.
readarray -t backups < <(bashio::api.supervisor GET /backups/info false .backups | jq -c '.[]')
for backup in "${backups[@]}"; do
    bashio::log.debug $(echo "${backup}" | jq -C .)
Could you add a more descriptive debug message, please? Just a string saying what to expect, so it's obvious if an empty string is echoed.
@@ -14,13 +19,29 @@
local_backups_to_keep="$(bashio::config 'local_backups_to_keep' '3')"
monitor_path="/backup"
jq_filter=".backups|=sort_by(.date)|.backups|reverse|.[$local_backups_to_keep:]|.[].slug"
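For illustration, here is a minimal run of that `jq_filter` against a fabricated two-backup payload (the slugs and dates are made up, and `local_backups_to_keep` is set to 1 just for the demo): the filter sorts by date, drops the newest N backups, and emits the slugs of everything older, i.e. the backups to prune.

```shell
# Sketch only: fabricated input showing which slugs the filter selects for pruning.
local_backups_to_keep=1
jq_filter=".backups|=sort_by(.date)|.backups|reverse|.[$local_backups_to_keep:]|.[].slug"

echo '{"backups":[{"slug":"a1","date":"2020-01-01T00:00:00Z"},{"slug":"b2","date":"2021-01-01T00:00:00Z"}]}' \
    | jq -r "$jq_filter"
# → a1   (the older backup; the newest one is kept)
```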
Please remove the additional empty line.
bashio::log.debug $(echo "${backup}" | jq -C .)

slug=$(echo "${backup}" | jq -r .slug)
name=$(echo "${backup}" | jq -r .name)
I think we should also handle special characters. See the limitations for object keys: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html#:~:text=Characters%20that%20might%20require%20special%20handling

I suggest using $name as-is in the metadata and creating a separate variable (for instance $object_key_name) for the object key, with the characters that need special handling (and should be avoided, per the link above) replaced or removed.

The logic could be something like:
- Replace spaces/runs of whitespace characters with a single _
- Replace all characters not listed as safe with -, collapsing multiple consecutive unsafe characters into a single -

What do you think?
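A rough sketch of that sanitization with plain sed; the variable names and the exact safe-character set (AWS's "safe" list: alphanumerics plus `! - _ . * ' ( )`) are assumptions, not settled behavior:

```shell
# Hypothetical sketch: keep $name intact for metadata, derive an S3-safe key name.
name="Awesome backup with : some@weird=//characters"

# 1) Collapse runs of whitespace into a single underscore.
object_key_name=$(printf '%s' "$name" | sed -E 's/[[:space:]]+/_/g')
# 2) Collapse runs of characters outside the S3 "safe" set into a single dash.
object_key_name=$(printf '%s' "$object_key_name" | sed -E "s/[^A-Za-z0-9_.!*()'-]+/-/g")

echo "$object_key_name"
# → Awesome_backup_with_-_some-weird-characters
```

Note that the ordering matters: doing whitespace first leaves the colon's surrounding underscores as `_-_`, while doing unsafe characters first would also swallow the spaces. The exact collapsing rules are open for discussion.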
slug=$(echo "${backup}" | jq -r .slug)
name=$(echo "${backup}" | jq -r .name)

if ! aws s3 ls "s3://${bucket_name}/${name}.tar" >/dev/null; then
I think we could/should extract the date information from the /backups/info response (the date field) and use year, month, day, and hour as the object key prefix.
For example:
{
  "backups": [
    {
      "slug": "skuwe823",
      "date": "2020-09-30T20:25:34.273Z",
      "name": "Awesome backup with : some@weird=//characters",
      "type": "partial",
      "size": 44,
      "protected": true,
      "compressed": true,
      "content": {
        "homeassistant": true,
        "addons": ["awesome_addon"],
        "folders": ["ssl", "media"]
      }
    }
  ],
  "days_until_stale": 30
}
Together with my previous comment, this would result in:
object_key_name="Awesome_backup_with_some-weird-characters"
object_key="$year/$month/$day/$hour/$object_key_name"
# Would result in: '2020/09/30/20/Awesome_backup_with_some-weird-characters.tar'
This would also make it easier to manually delete older files via the console, or to create rules based on the key, and we would not need the additional aws s3 ls (although theoretically there could still be an object with the same key, that's very, very unlikely).
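One way to derive that prefix is plain substring extraction from the ISO 8601 date field; this is only a sketch, and the pre-sanitized name below is just the value from the example above, not anything the add-on produces today:

```shell
# Hypothetical sketch: build a year/month/day/hour object key prefix from the backup date.
date_iso="2020-09-30T20:25:34.273Z"   # the "date" field from /backups/info

year=$(printf '%s' "$date_iso" | cut -c1-4)
month=$(printf '%s' "$date_iso" | cut -c6-7)
day=$(printf '%s' "$date_iso" | cut -c9-10)
hour=$(printf '%s' "$date_iso" | cut -c12-13)

object_key_name="Awesome_backup_with_some-weird-characters"  # assumed pre-sanitized
object_key="$year/$month/$day/$hour/$object_key_name"
echo "${object_key}.tar"
# → 2020/09/30/20/Awesome_backup_with_some-weird-characters.tar
```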
This is still on my radar.
I swear I'm not dead.
Instead of using aws s3 sync, I loop through the backups from Supervisor's backup endpoint. The backup API gives us the friendly name from the UI, along with metadata we can send along to S3.
I'm using this alongside https://github.com/jcwillox/hass-auto-backup to automate taking the actual backups.