Generating shell scripts instead of functions for wrappers without specifying them in container.yaml #565
Hmm, this sounds like a bug - the scripts should generate using the default templates from settings.yaml. @muffato was this added with the PR that updated wrapper scripts?
Yes, it sounds like a bug too! Scripts should be created regardless of whether they have a template defined / overwritten in container.yaml.
Hi. I can't reproduce the problem on a clean install.
@surak: what settings do you have? In particular the wrapper-scripts settings, including its subkeys.
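For what it's worth, one way to dump that block is sketched below; the settings path shown is the in-tree default, and whether shpc config can address this key may vary by version, so treat it as an assumption:

```bash
# Print the wrapper_scripts block (and its subkeys) from a settings file;
# adjust the path to wherever your active settings.yml lives.
grep -A 4 "wrapper_scripts:" shpc/settings.yml

# Or ask shpc itself, if your version supports getting this key:
shpc config get wrapper_scripts
```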
@surak that would be great if you could give the complete steps you took so we can reproduce! I can try as well.
```
$ find .
.
./nvcr.io
./nvcr.io/nvidia
./nvcr.io/nvidia/tensorflow
./nvcr.io/nvidia/tensorflow/.version
./nvcr.io/nvidia/tensorflow/22.06-tf2-py3
./nvcr.io/nvidia/tensorflow/22.06-tf2-py3/module.lua
./nvcr.io/nvidia/tensorflow/22.06-tf2-py3/99-shpc.sh
./nvcr.io/nvidia/tensorflow/22.06-tf2-py3/nvcr.io-nvidia-tensorflow-22.06-tf2-py3-sha256:ea76aabaa17ee0e308a443fa1e7666f98350865f266b02610a60791bd0feeb40.sif
```

I can confirm that quay.io works with the wrapper scripts.
So, I think we are talking about different things. In quay.io/biocontainers/blast/container.yaml, the aliases are like this:
Which make sense: they are aliases to binaries on the filesystem of the container. This is related to the aliases generated in the template, for example here: https://github.com/singularityhub/singularity-hpc/blob/main/shpc/main/modules/templates/singularity.lua, where you have:

```
- {|module_name|}-run:
    singularity run {% if features.gpu %}{{ features.gpu }} {% endif %}{% if features.home %}-B {{ features.home }} --home {{ features.home }} {% endif %}{% if features.x11 %}-B {{ features.x11 }} {% endif %}{% if settings.environment_file %}-B <moduleDir>/{{ settings.environment_file }}:/.singularity.d/env/{{ settings.environment_file }}{% endif %} {% if settings.bindpaths %}-B {{ settings.bindpaths }} {% endif %}<container> "$@"
- {|module_name|}-shell:
    singularity shell -s {{ settings.singularity_shell }} {% if features.gpu %}{{ features.gpu }} {% endif %}{% if features.home %}-B {{ features.home }} --home {{ features.home }} {% endif %}{% if features.x11 %}-B {{ features.x11 }} {% endif %}{% if settings.environment_file %}-B <moduleDir>/{{ settings.environment_file }}:/.singularity.d/env/{{ settings.environment_file }}{% endif %} {% if settings.bindpaths %}-B {{ settings.bindpaths }} {% endif %}<container>
- {|module_name|}-exec:
    singularity exec {% if features.gpu %}{{ features.gpu }} {% endif %}{% if features.home %}-B {{ features.home }} --home {{ features.home }} {% endif %}{% if features.x11 %}-B {{ features.x11 }} {% endif %}{% if settings.environment_file %}-B <moduleDir>/{{ settings.environment_file }}:/.singularity.d/env/{{ settings.environment_file }}{% endif %} {% if settings.bindpaths %}-B {{ settings.bindpaths }} {% endif %}<container> "$@"
- {|module_name|}-inspect-runscript:
    singularity inspect -r <container>
- {|module_name|}-inspect-deffile:
    singularity inspect -d <container>
- {|module_name|}-container:
    echo "$SINGULARITY_CONTAINER"
```

This creates shell functions when loading the module, and those won't work in Slurm. So if I understand it correctly, the "bug" is in the TCL file which generates the module file, like here:

and here:
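To make the failure mode concrete, here is a minimal sketch of the difference; the function name and container path are hypothetical placeholders, not shpc's actual output:

```bash
#!/bin/bash
# Roughly what loading the module gives you today: a shell function
# that exists only inside the current shell process.
tensorflow-run() {
    singularity run --nv /path/to/tensorflow.sif "$@"
}

# Interactive use works, because the calling shell expands the function:
tensorflow-run python mycode.py

# srun, however, exec()s its argument as a program looked up on $PATH
# on each node. A shell function is not a file, so there is nothing to
# exec, and the launch fails:
# srun tensorflow-run python mycode.py
```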
Hi @surak. Right, I think I understand, and your original message is becoming clearer. This issue is about
Ah yes, now I understand the issue. Correct, they indeed should be wrapper scripts and not aliases; it's just an oversight. I think I likely didn't change them because (I suspected) the use case for using them in a workflow wasn't as strong, and the aliases would work most of the time.
Exactly. Thing is, Slurm won't accept functions as executables. So, right now, to run the tensorflow container installed with shpc, I have to do this:

```bash
srun --mpi=pspmix --cpu_bind=none,v env PMIX_SECURITY_MODE=native \
    singularity ${SINGULARITY_OPTS} exec --nv ${SINGULARITY_COMMAND_OPTS} \
    /p/project/ccstao/cstao05/easybuild/juwelsbooster/modules/containers/nvcr.io/nvidia/tensorflow/22.06-tf2-py3/nvcr.io-nvidia-tensorflow-22.06-tf2-py3-sha256:ea76aabaa17ee0e308a443fa1e7666f98350865f266b02610a60791bd0feeb40.sif \
    python MYCODE.py
```

If that looks like

With a shell script instead of a function, I could simply do this in the batch submission script:

```bash
module load nvcr.io/nvidia/tensorflow/22.06-tf2-py3
srun tensorflow-run mycode.py
```
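For reference, the wrapper script behind a command like tensorflow-run could be as small as the sketch below; the file location, .sif path, and bind options are illustrative assumptions, not shpc's actual generated output:

```bash
#!/bin/bash
# Hypothetical <moduleDir>/bin/tensorflow-run, put on $PATH by the
# module file. Being a real file, srun can exec it on every task.
# The container path and flags below are placeholders.
exec singularity run --nv -B /p/project \
    /path/to/nvcr.io-nvidia-tensorflow-22.06-tf2-py3.sif "$@"
```

Using exec keeps the process tree flat, so signals from the resource manager reach the singularity process directly.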
Ah indeed, run should definitely be a non-alias / wrapper!
I'm pretty deep into the remote registry refactor, but I can fix this up right after!
Will work on this soon!
Heyo! Nothing is ready yet, but I just wanted to update you that I have a proof of concept - just for one command (a singularity shell, e.g., "python-shell"). I think this should work, so I should be able to do run/exec/container/inspects and then the same for docker, and probably just need a few more evenings for a PR. So stay tuned! Here is the progress: https://github.com/singularityhub/singularity-hpc/compare/shell-script-wrappers?expand=1 I refactored the wrappers logic into clearer functions for generation and put them in their own module; the singularity/docker-specific templates for the various commands will live in the wrappers templates folder, named for their respective container technology. So WIP - more soon (probably not tonight!)
Okay, here is a first shot! #586 This should generate the container command wrapper scripts, akin to the alias ones. The jinja2 is a bit hard to write, so review/additional testing would be greatly appreciated!
Wooohooo! Thanks!
As the wrapper_scripts option is now enabled by default, shpc will only create scripts in cases where the container.yaml file has the docker_scripts/singularity_scripts tag.
My suggestion would be that, instead of creating functions for calling the containers, simple default shell scripts with the same content be created instead. This way, slurm and others will be happy to call them. The current default is to only generate shell scripts if there's enough info in the container.yaml.
Changes needed:
What do you think? That's better than having to add a script/wrapper entry to every single container in the registry.