--disk-limit and --disk-limit-tmp => limits not enforced #1001

Open
nick-youngblut opened this issue Dec 28, 2024 · 0 comments
nick-youngblut commented Dec 28, 2024

When I run fasterq-dump --disk-limit 0.1GB --disk-limit-tmp 0.1GB, the limits are not enforced: the fasterq-dump jobs never throw an error, regardless of the sizes of the output and temporary files.

Also, the docs at https://github.com/ncbi/sra-tools/wiki/HowTo:-fasterq-dump mention --disk-limit and --disk-limit-tmp, but do not actually explain what they do. The fasterq-dump CLI help also does not explain these parameters. For example, one could assume multiple possibilities for what happens when a disk limit is reached:

  1. fasterq-dump exits gracefully when the limit is reached, writing only the reads already downloaded to the file system
  2. fasterq-dump simply aborts with a non-zero exit code

I believe the 2nd option is the actual behavior. The 1st option would be much more desirable, especially for compute environments with small, fixed disk limits (e.g., cloud batch VMs). The 1st option would then act somewhat like --maxSpotId from fastq-dump, which fasterq-dump lacks.
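To make the desired behavior in option 1 concrete, here is a minimal sketch of a "graceful" disk-limit policy: stop before the next record would exceed the limit, so only complete records ever reach disk. This is an illustration with invented names, not sra-tools code.

```python
# Hypothetical sketch of option 1 ("graceful stop at the disk limit").
# Not sra-tools code; write_reads_with_limit and its arguments are invented
# for illustration only.

def write_reads_with_limit(reads, out_path, disk_limit_bytes):
    """Write pre-formatted FASTQ records until writing the next record
    would exceed disk_limit_bytes, then stop cleanly.

    Returns the number of complete records written.
    """
    written = 0
    used = 0
    with open(out_path, "w") as fh:
        for record in reads:  # each record is a complete FASTQ string
            size = len(record.encode())
            if used + size > disk_limit_bytes:
                break  # graceful stop: no partial record hits the disk
            fh.write(record)
            used += size
            written += 1
    return written
```

Under this policy the output file is always a valid (if truncated) FASTQ file, which is what makes it usable as a --maxSpotId-style cap on small VMs.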
