Archive of our own can get error of too many attempts #1149

Open
kyoam opened this issue Jan 20, 2025 · 9 comments

Comments

@kyoam

kyoam commented Jan 20, 2025

This happened on a recent download: AO3 responded saying there were too many requests.

Since the same thing happened in my actual browser, I think it's an anti-DDoS measure on AO3's side rather than an FFF bug.

Just be warned that too many requests to the site can cause errors when updating stories.

@JimmXinu
Owner

Yes, we are aware. Thanks.

@dudamoos

dudamoos commented Jan 21, 2025

How much work would it be to implement auto-sleep-and-retry in response to an HTTP 429 response code? That's the standard code for rate limiting, so handling it would improve the downloader's resilience and avoid the need to manually restart downloads. HTTP 429 responses can also include a Retry-After header, so the downloader could throttle intelligently rather than just sleeping for an arbitrary delay. However, apparently AO3 (or Cloudflare, which they use for DoS protection) sets that field to 0, so it isn't always useful.
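
Something along these lines, for illustration. This is a minimal sketch using requests; `fetch_with_retry` and the fallback delay are made up here, not FFF's actual code:

```python
import time
import requests

def fetch_with_retry(url, max_retries=5, default_delay=10):
    """Fetch url, sleeping and retrying on HTTP 429 (Too Many Requests)."""
    for _ in range(max_retries):
        resp = requests.get(url)
        if resp.status_code != 429:
            return resp
        # Retry-After may be missing, "0", or a delay in seconds; it can
        # also be an HTTP-date, which this sketch doesn't try to parse.
        try:
            delay = int(resp.headers.get("Retry-After", "0"))
        except ValueError:
            delay = 0
        time.sleep(delay if delay > 0 else default_delay)
    resp.raise_for_status()
    return resp
```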

@JimmXinu
Owner

FFF already has wait-and-retry code.

The configurable part is a semi-random sleep time between requests. Try setting a larger slow_down_sleep_time value (2 is the current default for AO3). I will set the default higher if users report that it helps.
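
For example, in personal.ini; a sketch, where the [archiveofourown.org] section name is FFF's usual convention and 5 is just an illustrative value:

```ini
[archiveofourown.org]
## Semi-random sleep (in seconds) between requests; the AO3 default is 2.
## Raise it if you keep hitting "too many requests" errors.
slow_down_sleep_time: 5
```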

FYI, the most active FFF discussion is on MobileRead about the plugin version.

@Samasaur1

I was just coming here to report the same error. I have a script that runs daily using FFF, and I've been seeing errors for the past four days. I'll try setting slow_down_sleep_time to 5 and report back.

> However, apparently AO3 (or Cloudflare, which they use for DoS protection) sets that field to 0, so it isn't always useful.

I know AO3 used to set Retry-After when it returned 429 (at least when accessing the history page; I don't know if the behavior varies per page), but in my logs this doesn't appear to have happened more recently than November.

@Samasaur1

I was able to get my setup to work with a slow_down_sleep_time value of 10.

@Samasaur1

Samasaur1 commented Jan 24, 2025

> > However, apparently AO3 (or Cloudflare, which they use for DoS protection) sets that field to 0, so it isn't always useful.
>
> I know AO3 used to set Retry-After when it returned 429 (at least when accessing the history page; I don't know if the behavior varies per page), but in my logs this doesn't appear to have happened more recently than November.

Also, AO3 does still set Retry-After, at least when accessing the history page; I saw it ~6 hours ago. It would be great if FFF could use the value in this header when present, and only fall back to the existing behavior when the header is absent.

@JimmXinu
Owner

I've not been seeing this behavior myself. What Retry-After header value(s) are you seeing? And are you sure it's not being honored?

FFF uses requests and urllib3 for HTTP requests. Looking at the Retry code, respect_retry_after_header is on by default. We just retry on more codes, and the backoff_factor=2 we set looks like it should be additive with Retry-After.
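
For reference, the setup looks roughly like this with requests plus urllib3 (illustrative values only; the retry count and status list here are guesses, not FFF's exact configuration):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Illustrative values only -- not FFF's exact settings.
retry = Retry(
    total=4,
    backoff_factor=2,                 # sleeps 2s, 4s, 8s, ... between attempts
    status_forcelist=[429, 500, 502, 503],
    respect_retry_after_header=True,  # urllib3's default, shown explicitly
)
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))
resp = session.get("https://archiveofourown.org/")
```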

@Samasaur1

I saw a value of 164 based on output from one of my own scripts. I didn't realize that FFF honored HTTP 429; it's entirely possible that it retries and there's just no log output. I was mostly replying to the person who said AO3's Retry-After headers always have a value of 0.
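
The check was something like the following (a sketch, not the actual script; the USERNAME placeholder and history-page URL are illustrative, and the header only shows up while you're actually being rate limited):

```python
import requests

# Hypothetical check: fetch the AO3 history page and report rate limiting.
resp = requests.get("https://archiveofourown.org/users/USERNAME/readings")
if resp.status_code == 429:
    print("429; Retry-After =", resp.headers.get("Retry-After"))
else:
    print("OK:", resp.status_code)
```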

@dudamoos

I've been able to get by with a slow_down_sleep_time of 8 while downloading a lot of fics. For sparser downloading, it might be possible to get away with 4 or 6.
