Right now we dump the database on every periodic snapshot run.

We then hash the dump, and if that hash already exists in S3-compatible storage, we skip the upload.

We can avoid the dump/hash step entirely by keeping ephemeral metadata about the last time snapshotting ran: if no file has been modified since our last snapshot run, don't dump or hash the database at all. More concretely: get `fs::metadata` for each file in the directory, call `modified()` on the result, sort the times so the result is stable, then take the max or hash them together. If the max/hash of the modified times hasn't changed, skip this backup. A sketch of that check is below.
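A minimal sketch of the mtime check in Rust, assuming a flat snapshot directory (no recursion) and that the previous run's value is persisted elsewhere; the names `latest_mtime` and `should_snapshot` are illustrative, not existing code:

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::SystemTime;

/// Collect the modification time of every file in `dir`, sorted so the
/// result is stable across runs, and return the newest one.
/// Returns Ok(None) if the directory contains no files.
fn latest_mtime(dir: &Path) -> io::Result<Option<SystemTime>> {
    let mut times = Vec::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let meta = fs::metadata(entry.path())?;
        if meta.is_file() {
            times.push(meta.modified()?);
        }
    }
    times.sort();
    Ok(times.pop()) // max after sorting
}

/// Decide whether a new snapshot is needed by comparing against the
/// mtime recorded on the previous run (hypothetical `last_run` value).
fn should_snapshot(dir: &Path, last_run: Option<SystemTime>) -> io::Result<bool> {
    match (latest_mtime(dir)?, last_run) {
        (Some(current), Some(previous)) => Ok(current > previous),
        // No recorded state yet, or an empty directory: snapshot to be safe.
        _ => Ok(true),
    }
}
```

Hashing the sorted times together instead of taking the max would also catch files whose mtimes move backwards (e.g. after a restore), at the cost of storing a digest rather than a single timestamp.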