Support OCI images in apptainer #4543
I was not aware that Apptainer also has support for it. Apparently it works in the same manner as Singularity, but without requiring a dedicated flag: https://apptainer.org/docs/user/latest/docker_and_oci.html @marcodelapierre thoughts?
Actually, it seems to me like the Apptainer support lags behind. Apptainer allows:
Unlike Singularity, Apptainer does not yet seem to have:
I admit I have not used Apptainer as much as I have deep dived into Singularity, so some questions for you @emiller88:
@pditommaso I am going to edit the OCI mode docs in Nextflow, to describe the level of support in greater detail: see #4544
The docs show that the minimal example is
So, on the Nextflow side it should be enough to omit the `--oci` flag.
I have to admit that I haven't either 😂 we moved the modules action over to apptainer and nothing broke. I didn't understand the split, or that singularity was still a project being actively developed.
They both failed, so idk if it'll work in GitHub Actions.
I was hoping to have a single image for singularity and docker for the modules built by Wave. It was just a pipe dream; I think we'll still have to have two separate containers built and updated. I'd be interested if you have any recommendations or advice for how to tag those images in quay properly so they get picked up. (A note to myself and any nf-core people: we can possibly move the container declarations and conda to the config file in the module directory.)
Many thanks for the additional info @emiller88, very useful to better understand your context. What is your requirement to be able to use the same image for singularity and docker? I am wondering whether what you need is the ability to reuse a single image across both runtimes.
I also have a specific question. From the logs you attached to this issue, any of the
This would fail because, if

Note you are using the following container:

So:
As above, I think I am unable to grasp exactly what you are trying to achieve - though I think your statement about wanting to use a single container sounds like a good thread on which you can expand. Please let me know :-)
@marcodelapierre my question was for you 😆
Thanks Paolo, I was holding off on that aspect, but let me take your ping as a chance to further clarify with @emiller88. Have you tried running the test without the
OK, I have run some tests myself on an Ubuntu VM, without Nextflow - only Singularity 4.0.2, Apptainer 1.2.5, and some of the container images involved in this issue. Let's look at the Docker/OCI images from the commit mentioned at the start of this issue for
My goal was to have success in running
And NONE of these are successful, as the executable is not found in the

As a double-check, both

@emiller88 was this your goal as well? I suspect so, right? If this is the case, I think we should coordinate with @ewels and talk about minor edits to the base container image for nf-core modules, which may enable using a single image for both Docker and Singularity/Apptainer.

As such, right now I think the issue is more about setting up the container images in a way that suits the usage requirement, rather than making any edits to the Nextflow code base. Happy to be proven wrong. @emiller88 @ewels thoughts? I am happy to spawn an issue where relevant in
I believe the
@emiller88 please read this -- I have a solution for you! Thanks Paolo. To summarise my previous comments with regard to the original topic, at the moment there is nothing to be done on the Nextflow side to support specific OCI functionalities in Apptainer. That being said, @pditommaso @ewels I have just realised that we can get Nextflow to fix at runtime the missing
@emiller88 I have just tried the above with your test case, and IT WORKS! This would be in substitution for @pditommaso, what we could implement is a little utility option for Conda containers, e.g.
I think it's not correct. Apptainer is expecting the OCI container to be specified via

The PATH problem is irrelevant in this context.
Have you tested what you say @pditommaso ?
I cannot seem to find such a mode in Apptainer, neither in the docs nor at runtime in my tests.
You are saying that when running
oh yes
As described above, I cannot see any trace, in the docs or at runtime, of a proper OCI runtime. In fact, the wording in the docs only ever mentions the ability to pull OCI containers. Neither is there any mention of support for
Fair enough, but still this can be an implementation detail, meaning that from the user's point of view it's a transparent process, and effectively it allows running a docker (OCI) container via

We need to understand if it's smart enough to use the same cached converted SIF for containers having the same digest but different names, e.g.
As a user myself I don't have a specific goal. I'm just trying to understand what we can get away with in

There are two things going on:
I agree 100%. I've tested it, and even though apptainer does not support OCI mode, essentially it works in the same manner when using an OCI container prefixed with

It's even smart enough to cache the SIF based on the container digest checksum.

Regarding the problem with PATH and exec, this was fixed by adding, in the Dockerfile created by Wave, the export of the Mamba root path. Indeed
and
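For reference, the kind of fix described above (exporting the Mamba root path in the Wave-generated Dockerfile) amounts to something like the following - a sketch only, where `/opt/conda` is an assumed install prefix, not necessarily the path Wave actually uses:

```dockerfile
# Sketch: put the conda/mamba env's bin dir on PATH so that
# `singularity exec` / `apptainer exec` can find the packaged tools.
# /opt/conda is an assumption; Wave's generated Dockerfile may differ.
ENV PATH="/opt/conda/bin:${PATH}"
```

With a line like this baked into the image, the same container works under Docker (which honours `ENV`) and under Singularity/Apptainer `exec` (which otherwise would not source the conda activation scripts).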
To summarize, we need to make apptainer support
Hi Edmund, many thanks for summarising, that helps me a lot, brilliant! :-) I think your point 1 above, on being able to use a single container, can be achieved quickly. However, let me chunk it down into 2 sub-points: 1a- (this issue): you are getting a runtime error when using the Docker container (let's forget about the OCI label for now)
1b- nf-core/modules#4519: the issue you mention in there is having a copy of the SIF in the
2- using Wave instead of Mulled (nf-core/modules#4080): I see the NF team is helping there already, but please feel free to ping me there as well. Please ask me anything! :-)
This should do it: #4548
I think we do not need a new manual option for the required features; they can be implemented transparently in a general manner. Apptainer is not able to skip the conversion. Apptainer has no "true" or "full" OCI mode as Singularity does.
Both Singularity and Apptainer have been able to cache the SIF by default for quite some time (at least 4 years, I think - it was already there when I designed my online tutorial). It probably started with Singularity 3.0. [edited] The docs for Singularity 3.0 mention the cache: https://docs.sylabs.io/guides/3.0/user-guide/build_env.html?highlight=cache So, as I said, all we need is to turn off saving the SIF to work/. Singularity will cache anyway - not in OCI format, but in SIF format.
@pditommaso Let me show you
3 approaches shown:
Have a look at the cache listings:
@pditommaso have a look 👆, let me know what you think. As I said, all we need to do is to disable the pulling of the SIF artifacts into work/, always, with both singularity and apptainer.
And lastly, on this, after testing
I can confirm that both the singularity and apptainer caches comply with this requirement, as we speak - no extra config needed.
Summarising:

1a- executable not found error (this issue - non-Wave containers only): fix with

1b- getting rid of the SIF copy in work/ (issue in nf-core/modules#4519): this should be done transparently by Nextflow, based on whether the requested image starts with

2- Wave instead of Mulled: PR in progress by Edmund in nf-core/modules#4080

3- no need to formalise an OCI mode for apptainer

4- @pditommaso we may want to provide a singularity/apptainer option that allows switching between
Having issues with my dev & testing environment today, will update here by Monday.
Ok, this changes things a bit. I didn't know both apptainer and singularity could directly run an OCI container by doing an implicit conversion. Considering this, I agree that OCI mode should be a Singularity-only capability. I'm not sure, instead, that it's a good idea to completely get rid of the current conversion process made by Nextflow. It's widely used, and I'm not sure it could be replaced transparently by the one made automatically by the singularity/apptainer runtime. However, we should support it by adding a setting, e.g.
@edmundmiller this PR, #4548, was merged about 3 weeks ago, implementing the new `ociAutoPull` setting.

This should help with your issue: nf-core/modules#4519

Please feel free to close this issue or to comment further :)
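Based on the `ociAutoPull` name that appears in the commit notes later in this thread (the exact option name and semantics are an assumption here - check the Nextflow configuration docs to confirm), enabling the new behaviour would look something like this:

```groovy
// nextflow.config - sketch; 'ociAutoPull' per the commit notes in this
// thread, exact spelling/semantics to be confirmed against the docs
singularity {
    enabled     = true
    ociAutoPull = true  // let singularity/apptainer pull the OCI image and
                        // cache the converted SIF itself, instead of
                        // Nextflow pre-converting it into work/
}
```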
Awesome!
To clarify, will this put the images in

I've gotten around that with some tricks like this https://github.com/nf-core/configs/blob/89e1f18b8a7b071753fd5304e2dde0d61d51f26f/conf/utd_ganymede.config#L10 but it's a default that caused @drpatelh a lot of pain. So less converting is awesome, but my main concern is the cache location.
Oh, I hear you Edmund! In my 5 years at an HPC centre, I saw so many users of various applications hit the home-filling-up issue! My general suggestion is to fix these cache location issues on a per-caching-application basis, rather than within the "client" applications that make use of them. This makes the setup more robust and maintainable. I.e. in this case, because the cache is owned by Singularity/Apptainer, it should be addressed within their configuration, not within the Nextflow one. This would hold true for any other cache (NXF_HOME -> Nextflow; .local -> Python; .spack -> Spack; ..). My practical suggestions to sort it out:
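As a concrete example of the per-application fix suggested above - a sketch, where the scratch paths are placeholders to be adapted to your site's filesystem layout:

```shell
# Relocate the Singularity/Apptainer caches away from $HOME.
# The target paths below are placeholders - adjust to your site.
export SINGULARITY_CACHEDIR="/scratch/$USER/singularity_cache"
export APPTAINER_CACHEDIR="/scratch/$USER/apptainer_cache"
```

Putting these exports in the users' shell profile (or a site-wide module file) keeps the fix with the tool that owns the cache, per the reasoning above, rather than patching it in Nextflow.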
Hope this can help in streamlining your setup.
@edmundmiller I am closing this, as I trust the merged PR above addresses the key issue here. Happy to re-open and follow up further if needed.
* build: Add wave
* build: Set strategy to dockerfile, conda then container
* refactor: Remove container
* build: Add a repo to push to
* ci(wave): Add wave build
  https://github.com/nodejs/docker-node/blob/3c4fa6daf06a4786d202f2f610351837806a0380/.github/workflows/build-test.yml#L29
* ci(wave): Switch to all_changed_files
* ci(wave): Only look for envronment.ymls
* dummy: Change env
* ci(wave): Remove raw format
* ci(wave): Try a bunch of different things at once
* ci(wave): Remove redundant fromJson and wrap in an array
* ci(wave): I have no idea what I'm doing
* ci(wave): Wrap it
* ci(wave): Found an example
  https://github.com/tj-actions/changed-files/blob/main/.github/workflows/matrix-test.yml
* ci(wave): Maybe quotes?
* ci(wave): That'll do it
* ci(wave): Fix wave install
* ci(wave): Hard code an image
* ci(wave): Add secrets
* feat: Try a different files structure
* ci(wave): First stab at building singularity images
* fixup! feat: Try a different files structure
* ci(wave): Add profile to matrix
* ci(wave): Give up on fancy substitution
* ci(wave): Add await
  Co-authored-by: ewels <[email protected]>
* ci(wave): Switch to quay
* test(wave): Add freeze and update build repo
* refactor(wave): What happens if I add a container?
* refactor(wave): Have both bowtie modules use the same env
  For the sake of demonstration
* test: Cut out using wave on tests
* refactor: What happens if we use the singularity one?
* refactor: Keep container directives for offline download
  seqeralabs/wave#323
* feat: Try new singularity OCI setting
  nextflow-io/nextflow@f5362a7
* build: Update container name
  Guess #4327 broke that
* chore: Bump wave-cli version
* ci: Install runc
* ci: Switch to singularityhub action
  nextflow-io/nextflow#4543
* ci: Install new singularity manually
  Why that action trys to build from source, idk.
* ci: Install dependancies for singularity
* ci: runc => crun
* ci: Fix cgroup error
  https://blog.misharov.pro/2021-05-16/systemd-github-actions
* ci: That'll do it
* ci: Remove Dockerfile
  We'll have a seperate action for this I think
* ci: Update name
* ci: Push to the correct repos
* ci: Remove OCI stuff
* ci: Need a full URL
* ci: Fix // in container name
* ci: Remove push
  Build once, renovate should bump the images automagically
* build: Add containers back
* ci: Add cache repos
  Idk what this does exactly
* ci: Change registry name to use _
  Because "build" is a api end point on quay.io. So `bowtie/build` doesn't work. Other plus is this matches the conda env name.
* build: / => _ in container name
* Try ociAutoPull
* chore: Add renovate comments to samtools
  Just to trigger wave build
* test: Add ociAutoPull to nf-test
* ci: Bump wave version
* chore: Bump containers with new wave version
  Not sure why that's happening...
* build: Update to use commity.wave.seqera.io
* ci: Bump wave-cli to 1.4.1
* ci: Try apptainer
* ci: Remove build-repo to see what happens
* build: Bump Nextflow version requirement
* fix: Get rid of the environment name?
  Maybe this will get the auto generated tag?
* ci: Bump action versions
* ci: Try name-strategy tagPrefix
  seqeralabs/wave-cli@269df0e
* ci: Remove singularity build for now
* ci: Try imageSuffix
* ci: Try none
* ci: What is the bowtie container name
* ci: Remove --name-strategy
* style: Add back in container elvis operator
* ci: Remove cache repo
* Revert "build: Bump Nextflow version requirement"
  This reverts commit 69e1ea5.
* Revert "test: Add ociAutoPull to nf-test"
  This reverts commit 5a3d546.
* test(#6505): Snapshot the versions contents, not the hash
* ci(#6505): Update version snapshot after building containers
* test(#6505): Attempt a topic channel with tests
  askimed/nf-test#258
* chore: Bump to 1.5.0
* fix: Remove shard and filter on test bumping
* build: Bump images to match environment
* ci: Fix nf-test setup
* ci: Remove snapshot bumping
* build: Fix containers in bowtie

---------
Co-authored-by: ewels <[email protected]>
Bug report
I was excited to use the new OCI support for singularity and naively thought it would work with apptainer out of the box. They're different enough that there's an `apptainer oci` (or something) command, so when it runs the singularity command with `oci`, it throws a weird error.

Expected behavior and actual behavior

`singularity.oci` to work with apptainer.

Steps to reproduce the problem
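For context, a sketch of the kind of config that triggers this (the scope and option name follow the `singularity.oci` setting named above; treat the exact semantics as an assumption and check the Nextflow docs):

```groovy
// nextflow.config - sketch of the setup that errors out under apptainer
singularity {
    enabled = true
    oci     = true  // drives Singularity's native OCI mode;
                    // apptainer has no equivalent flag
}
```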
Check out this commit (nf-core/modules@c27c938) from nf-core modules and run
Program output
logs-singularity-.zip
Environment
- Bash version: (use the command `$SHELL --version`)
(Add any other context about the problem here)