Limit processing of queued files to 500 per job #8
> I think in the case of a retrieve for more than 500 files (or maybe rather 10/… tapes) we should assume the user has made a mistake, cancel the whole thing and throw an error. Otherwise we run into the problem that users might accidentally trigger loading half the HSM into the cache.

> I see that 500 files can be a massive request and a mistake. I would argue that this limitation should, however, be done at the lowest level, like …

> Yeah, just saying that we should not try to bypass such limitations, because they are there for a reason.

> I see where you are coming from. Retrievals are now for the most part combined into a single …

> Yes. We feared it would be safest to read out the …

> @observingClouds It would be reasonable to read … Then extract the value of …
Originally posted by @neumannd in #3 (comment)
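As a rough illustration of the behaviour discussed above, the sketch below cancels an oversized retrieval instead of splitting it into many jobs. This is only an assumption-laden example: the names `check_retrieval_size`, `RetrievalLimitError`, and the `SLK_MAX_FILES_PER_RETRIEVAL` environment variable are hypothetical and not part of the project's actual code, and the default of 500 could later be replaced by a value read from the underlying tool's configuration.

```python
import os

# Assumed default; the discussion above suggests eventually reading the real
# limit from the underlying tool's configuration instead of hard-coding it.
MAX_FILES_PER_RETRIEVAL = int(os.environ.get("SLK_MAX_FILES_PER_RETRIEVAL", 500))


class RetrievalLimitError(RuntimeError):
    """Raised when a queued retrieval exceeds the configured file limit."""


def check_retrieval_size(queued_files: list[str]) -> None:
    """Abort the whole retrieval instead of silently splitting it.

    A retrieval of more than the configured number of files is treated as a
    probable user mistake (e.g. an overly broad glob), so the request is
    cancelled rather than loading large parts of the HSM into the cache.
    """
    if len(queued_files) > MAX_FILES_PER_RETRIEVAL:
        raise RetrievalLimitError(
            f"Refusing to retrieve {len(queued_files)} files: the limit is "
            f"{MAX_FILES_PER_RETRIEVAL} files per job. Narrow the request or "
            "raise the limit explicitly if this is intentional."
        )
```

Raising an error here, rather than transparently chunking the request into multiple jobs, follows the point made above that such limitations exist for a reason and should not be bypassed.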