
Add official support for taking multiple snapshots of websites over time #179

Open · pirate opened this issue Mar 19, 2019 · 13 comments
Labels: size: hard · status: idea-phase (work is tentatively approved and is being planned / laid out, but is not ready to be implemented yet) · touches: data/schema/architecture

@pirate (Member) commented Mar 19, 2019

This is by far the most requested feature.

People want an easy way to take multiple snapshots of websites over time.

Here's how archive.org does it:
[screenshot: archive.org's snapshot history UI showing multiple captures of a page over time]


For people finding this issue via Google / incoming links: if you want a hacky way to take a second snapshot of a page, you can add the link with a new hash appended. It will be treated as a new page and a new snapshot will be taken:

echo https://example.com/some/page.html#archivedate=2019-03-18 | archivebox add
# then to re-snapshot it on another day...
echo https://example.com/some/page.html#archivedate=2019-03-22 | archivebox add
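
A small variation of the snippet above fills in the date automatically, so the same command can be re-run on any day (a sketch, assuming a shell where date +%F prints the current date as YYYY-MM-DD):

echo "https://example.com/some/page.html#archivedate=$(date +%F)" | archivebox add
# the hash changes each day, so each run is treated as a new, unique link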

Edit: as of v0.6 there is now a button in the UI to do this ^
[screenshot: the Re-snapshot button in the snapshot admin UI]

pirate added the size: hard, status: idea-phase, and touches: data/schema/architecture labels on Mar 19, 2019
@n0ncetonic (Contributor) commented:

Looking forward to this feature. Thanks for the hacky workaround as well; I have a few pages I'd like to keep monitoring for new content, but I was worried that my current backup would be overwritten by a 404 page if the content ever went down.

@pirate (Member, Author) commented Mar 19, 2019

I just updated the README to make the current behavior clearer as well:

Running archivebox add adds only new, unique links into your data folder on each run. Because it ignores duplicates and only archives each link the first time you add it, you can schedule it to run on a timer and re-import all your feeds multiple times a day. It will run quickly even if the feeds are large, because it only archives links added since the last run. For each link, it runs through all the archive methods. Methods that fail save None and are automatically retried on the next run; methods that succeed save their output into the data folder and are never retried or overwritten by subsequent runs. Support for saving multiple snapshots of each site over time will be added soon (along with the ability to view diffs of the changes between runs).
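
As a rough illustration of that dedup behavior (the page URL is just a placeholder), adding the same link twice does not create a second snapshot:

echo https://example.com/page.html | archivebox add   # first run: archived with every method
echo https://example.com/page.html | archivebox add   # later runs: skipped as a duplicate; only previously failed methods are retried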

@alex9099 commented Aug 1, 2020

Any updates on this? It would be really nice to have versions, like the Wayback Machine does :)

@pirate (Member, Author) commented Aug 1, 2020

You can still accomplish this right now by adding a hash to the end of the URL, e.g.

archivebox add https://example.com/#2020-08-01
archivebox add https://example.com/#2020-09-01
...

Official first-class support for multiple snapshots is still on the roadmap, but don't expect it anytime in the next month or two; it's quite a large feature with big implications for how we store and dedupe snapshot data internally.
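
To make that hack periodic, one option is a cron entry that bumps the hash with today's date (a sketch, assuming archivebox is on the cron user's PATH and is run from inside the data folder; the path and URL are placeholders):

# every day at 03:00, add the page again with today's date in the hash
# note: % is special in crontab and has to be escaped as \%
0 3 * * * cd /path/to/archivebox/data && echo "https://example.com/#$(date +\%F)" | archivebox add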

pirate closed this as completed on Aug 1, 2020
pirate reopened this on Aug 1, 2020
@TheOneValen commented:

It would be nice if there were also a migration path from the hash-date hack to the first-class support.

@Spacewalker2 commented:

Do I get this right? Once this is available, I could for example add a URL (not a feed) like archivebox schedule --every=day 'http://example.com/static.html' and the URL would get archived every day. If there are changes, ArchiveBox would then provide diffs of them.

Will it be possible for ArchiveBox to notify me when there are changes, maybe by using the local MTA?

@pirate (Member, Author) commented Jan 23, 2021

Scheduled archiving will not re-archive the initial page if snapshots already exist. The archivebox schedule feature is meant to be used with the --depth=1 flag to pull in new links from a source like an RSS feed, a bookmarks export file, or an HTML page with some links in it, without re-archiving the source page itself (it re-fetches the source to do the crawl, but does not re-snapshot it).

ArchiveBox has no first-class support for taking multiple snapshots and no built-in diffing system, only the #hash hack mentioned above. It's still on the roadmap, but not expected anytime soon due to the architectural complexity. If you absolutely need multiple snapshots of the same pages over time, I recommend checking out some of the other tools listed on our community wiki: https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#other-archivebox-alternatives
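
A minimal sketch of the two use cases side by side (the URLs are placeholders): archivebox schedule with --depth=1 for pulling new links out of a feed, versus the #hash hack for forcing a fresh snapshot of a single page:

# crawl a feed daily and archive only the links that are new since the last run
archivebox schedule --every=day --depth=1 'https://example.com/blog/feed.xml'

# force a fresh snapshot of one specific page by giving it a new hash
echo 'https://example.com/blog/post.html#2021-01-23' | archivebox add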

@Spacewalker2 commented:

Thanks for the quick answer and the very cool application! I already run an ArchiveBox instance on my FreeNAS, and it fits the purpose perfectly. Having the feature described above would be a nice extra. I asked because something like diffs is mentioned on the archivebox.io website itself. If archivebox schedule does not fit here, then maybe running archivebox add with some other time-based job scheduler would be possible too. I look forward to it.

BTW: I hope ArchiveBox will end up in the FreeNAS/TrueNAS plugins section at some point. Having ArchiveBox available there with one or two clicks would be very nice.

@pirate (Member, Author) commented Apr 10, 2021

This is now added in v0.6. It's not full support, but it's a step in the right direction. I just added a UI button labeled Re-snapshot that automates the process of creating a new snapshot with a bumped timestamp in the URL hash. I could also add a flag called --resnapshot or --duplicate that automates this step when archiving via the CLI too.

Later, when we add real multi-snapshot support, we can automatically migrate all the Snapshots with timestamps in their hashes to the new system.

[screenshot: the Re-snapshot button in the snapshot admin UI]

pirate changed the title from "Add support for taking multiple snapshots of websites over time" to "Add official support for taking multiple snapshots of websites over time" on Apr 21, 2021
pirate unpinned this issue on May 13, 2021
@GlassedSilver commented Jun 7, 2021

Sometimes websites remove pages and redirect them to something completely different.

An example I can think of: if you try to visit the original URL for the Xbox 360 sub-page on xbox.com these days, I think you get redirected to the Xbox One S page, which isn't really related other than "this is old-ish and it's cheap-ish, have this instead".

Try it for yourself:
http://xbox.com/en-US/xbox-360

Redirects at the time of writing to:
https://www.xbox.com/en-US/consoles/xbox-one-s

Not sure if the URL sends some HTTP status code along with the redirect.
I've been thinking about this issue for a few days and wondered what the strategy would be for a no-error redirect versus an error-returning redirect.

Also, I would be VERY careful about dropping URLs from the automated re-archival process after too many failures. It's not uncommon for a site to go missing for months and then come back. I'm not talking about the likes of Microsoft, but fan sites, hobby projects, niche software developers who work on something in their spare time and missed renewing their domain name registration and caught it a bit late, etc.

There are all sorts of curveballs that can be thrown at you, where a simple knockout after 3 errors would silently stop archiving something that's only temporarily unavailable. Maybe asking for user confirmation, at least per domain, would be the best approach, e.g.:

  • Yes, I know GeoCities is down and down forever, stop trying these URLs.
  • No, this <insert hobby dev's page here> project ain't gone forever, I checked the dev's Twitter and know they are working on a fix, keep trying, please.

Edit: Microsoft does send a 301 Moved Permanently status with the redirect. That's kind of them; I'm not sure how much we can rely on that in the real world. Anyone with ample experience here?
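
For anyone who wants to check what a given URL actually returns, plain curl can print the status code and redirect target (a sketch; the example output reflects the redirect described above and may have changed since):

curl -sI -o /dev/null -w '%{http_code} %{redirect_url}\n' http://xbox.com/en-US/xbox-360
# at the time of the comment above: 301 https://www.xbox.com/en-US/consoles/xbox-one-s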

@agnosticlines

This comment was marked as off-topic.

@pirate (Member, Author) commented Aug 2, 2022

Thanks for the support @agnosticlines, I got your donation! <3 (All the donation info is here for future reference: https://github.com/ArchiveBox/ArchiveBox/wiki/Donations)

This is still high on my priority list, but development speed is slow these days; I only have a day or so per month to dedicate to this project, and most of it is taken up by bugfixes. Occasionally I have a month where I sprint and do a big release, but I can't make promises about the timeline for when this particular feature will ship.

@sysfu

This comment was marked as off-topic.
