[Question/problem] "Resource temporarily unavailable" when pushing data to the output store #8
First of all, thanks a lot for developing DataLad and this amazing workflow, and congrats on the beautiful paper!

I'm trying to use the workflow on our HPC, and it mostly works fine. However, when trying to push the job-specific outputs from the datalad containers-run command back to the output store, I frequently encounter the following error message:

As a bit of context, in my batch jobs (using SLURM) I'm cloning a BIDS dataset and then preprocessing the BIDS data from a single participant using afni_proc.py. I'm defining the resulting pre-processed anatomical and time-series data, as well as some first-level statistical maps, as --outputs in datalad containers-run, so these are the files that should get pushed to the output store and later merged back into my main BIDS dataset.

I haven't yet been able to determine exactly when and for what kinds of files this error occurs; right now it seems pretty random. Of course, this might be very particular to the setup of our HPC, but I still wanted to ask whether you have any experience with, or ideas for dealing with, this kind of error.

Thanks a lot in advance!

Comments

Hi! Could you share some information about the system (in particular the file system) you are running this on?

Thanks a lot for the quick reply! Here's our

Please do let me know if you need further information 🙂

Any idea why this error may occur? After playing around a bit, I've noticed that:

Things I've tried to resolve the error (unsuccessfully so far):

Any help or ideas would be very much appreciated 🙏