
Support OCI images in apptainer #4543

Closed
edmundmiller opened this issue Nov 26, 2023 · 34 comments · Fixed by #4548

@edmundmiller
Contributor

Bug report

I was excited to use the new OCI support for Singularity and naively thought it would work with Apptainer out of the box. They're different enough that there's an apptainer oci (or similar) command, so when Nextflow runs the singularity command with the oci option, it throws a weird error.

Expected behavior and actual behavior

singularity.oci to work with apptainer

Steps to reproduce the problem

Check out this commit (nf-core/modules@c27c938) from nf-core/modules and run:

PROFILE=singularity nextflow run ./tests/modules/nf-core/bowtie/align -entry test_bowtie_align_single_end -c ./tests/config/nextflow.config -c ./tests/modules/nf-core/bowtie/align/nextflow.config

Program output

logs-singularity-.zip

Environment

  • Nextflow version: 23.11.0-edge
  • Java version: [?]
  • Operating system: [macOS, Linux, etc]
  • Bash version: (use the command $SHELL --version)

Additional context

(Add any other context about the problem here)

edmundmiller added a commit to nf-core/modules that referenced this issue Nov 26, 2023
@pditommaso
Member

I was not aware that Apptainer also has support for it. Apparently it works in the same manner as Singularity, but without requiring the --oci option.

https://apptainer.org/docs/user/latest/docker_and_oci.html

@marcodelapierre thoughts?

@marcodelapierre
Member

marcodelapierre commented Nov 27, 2023

Actually, it seems to me like the Apptainer support lags behind.

Apptainer allows:

  • like Singularity, to pull/build from OCI and convert into SIF (the default behaviour for a long time)
  • like Singularity, to unpack a SIF into an OCI-compliant directory tree (aka sandbox), via the oci subcommand, requiring sudo privileges
  • like Singularity, to enable a compatibility mode via the --compat flag, which more closely mimics some OCI default behaviours (isolation over integration and related)

Unlike Singularity, Apptainer does not seem to have yet:

  • support for an image format that is closer to OCI (similar to the recent OCI-SIF in Singularity)
  • support to run containers by means of runc/crun

I admit I have not used Apptainer as much as I have deep dived into Singularity, so some questions for you @emiller88:

  • I see in your commit you are installing runc: is Apptainer able to leverage it? (may sound naïve, but could not find information in the Apptainer docs)
  • What are you trying to achieve with your Apptainer + OCI setup? I am happy to share with you what I know
  • Note that the OCI mode in Nextflow is currently implemented to support the new --oci mode in Singularity v4.0. There is currently no support in Nextflow for the oci subcommand that both Singularity and Apptainer have. What benefits do you see in using it with Nextflow? (See the config sketch after this list.)
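
For reference, a minimal sketch of what the current Nextflow support looks like, using the singularity.oci option named in this thread (assumes Singularity >= 4.0; this is an illustration, not a quote from the docs):

// nextflow.config
singularity {
    enabled = true
    // Adds the --oci flag to the generated singularity commands;
    // at the time of this discussion there is no Apptainer equivalent.
    oci     = true
}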

@pditommaso I am going to edit the OCI mode docs in Nextflow, to describe the level of support with greater detail: see #4544

@pditommaso
Member

pditommaso commented Nov 27, 2023

The docs show that the minimal example is

apptainer run docker://sylabsio/lolcow:latest

So, on the Nextflow side it should be enough to omit the --oci option.
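
Concretely, the difference on the command line would just be the flag; a sketch mirroring the examples elsewhere in this thread:

# Singularity >= 4.0, new OCI mode (what singularity.oci = true is meant to drive)
singularity run --oci docker://sylabsio/lolcow:latest

# Apptainer: same docker:// URI, but no --oci flag exists, so none is passed
apptainer run docker://sylabsio/lolcow:latest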

@edmundmiller
Contributor Author

I admit I have not used Apptainer as much as I have deep dived into Singularity, so some questions for you @emiller88:

I have to admit that I haven't either 😂 We moved the modules action over to apptainer and nothing broke. I didn't understand the split, or that Singularity was still being actively developed.

  • I see in your commit you are installing runc: is Apptainer able to leverage it? (may sound naïve, but could not find information in the Apptainer docs)

They both failed, so I don't know if it'll work in GitHub Actions.

  • What are you trying to achieve with your Apptainer + OCI setup? I am happy to share with you what I know

I was hoping to have a single image for Singularity and Docker for the modules built by Wave. It was just a pipe dream. I think we'll still have to have the two separate containers built and updated. I'm interested in any recommendations or advice on how to tag those images in Quay properly so they get picked up.

(A note to myself and any nf-core people: we could possibly move the container declarations and conda to the config file in the module directory.)

@marcodelapierre
Member

Many thanks for the additional info @emiller88, very useful for better understanding your context.

What is your requirement for being able to use the same image for Singularity and Docker? I am wondering whether what you need is the ability to singularity run <image> / apptainer run <image>, using run and the pre-canned runscript as opposed to the exec syntax. Is this what you are after? Otherwise, please feel free to add more details.

@marcodelapierre
Member

marcodelapierre commented Nov 28, 2023

I also have a specific question. From the logs you attached to this issue, each of the .command.run files has:

singularity exec --no-home --oci -B /home/runner/pytest_workflow_mcou73_o/bowtie_align_paired-end/work docker://quay.io/nf-core/modules/bowtie:bowtie_build--df26d88a69745299 /bin/bash -c "cd $NXF_TASK_WORKDIR; /bin/bash -ue /home/runner/pytest_workflow_mcou73_o/bowtie_align_paired-end/work/ab/39b3aa93efb03b178dd8d1274a30b7/.command.sh"

This would fail because, if singularity is a symlink to apptainer, it would not accept the --oci flag.

Note you are using the following container: docker://quay.io/nf-core/modules/bowtie:bowtie_build--df26d88a69745299

So:

  • what happens if you do not enable OCI mode?
  • apptainer does not seem to support runc nor crun (see docs at https://apptainer.org/docs/user/latest/index.html), so the runc addition in the commit you are using should not play a role for apptainer

As above, I don't think I fully understand what you are trying to achieve, though your statement about wanting to use a single container sounds like a good lead to expand on.

Please let me know :-)

@pditommaso
Member

@marcodelapierre my question was for you 😆

@marcodelapierre
Member

Thanks Paolo,

I was holding off on that aspect, but let me take your ping as a chance to further clarify with @emiller88.

Have you tried running the test without OCI mode, with the Docker container image + the Singularity runtime? If so, and if it fails, could you post the error?
If it turns out that we need to enable Nextflow to execute the docker version of the container with singularity run instead of singularity exec, we can definitely look into implementing the option.

@marcodelapierre marcodelapierre self-assigned this Nov 29, 2023
@marcodelapierre
Member

OK, I have run some tests myself on an Ubuntu VM, without Nextflow: only Singularity 4.0.2, Apptainer 1.2.5, and some of the container images involved in this issue.

Let's look at the Docker/OCI images from the commit mentioned at the start of this issue for nf-core/modules:

  • quay.io/nf-core/modules/bowtie:bowtie_align--6c5b9c93546643d8
  • quay.io/nf-core/modules/bowtie:bowtie_build--df26d88a69745299

My goal was to successfully run bowtie --version from these images, which I can achieve with Docker. I tested:

  • singularity exec
  • singularity exec --oci
  • apptainer exec

And NONE of these are successful, as the executable is not found in the PATH.

As a double-check, both singularity and apptainer can successfully execute the command using the singularity SIF images as downloaded from https://depot.galaxyproject.org/singularity/, see master branch of the nf-core/modules repo.

@emiller88 was this your goal as well? I suspect so, right?

If this is the case, I think we should coordinate with @ewels and talk about minor edits to the base container image for nf-core modules, which may make it possible to use a single image for both Docker and Singularity/Apptainer.

As such, right now I think the issue is more on the side of setting up the container images in a way that suits the usage requirement, rather than performing any edit in the Nextflow code base. Happy to be proven wrong.

@emiller88 @ewels thoughts? I am happy to open an issue where relevant in nf-core, to discuss edits to the base container images for modules.

@pditommaso
Member

I believe the singularity exec problem with the bowtie module is not related to the goal of this issue.

@marcodelapierre
Member

marcodelapierre commented Nov 30, 2023

@emiller88 please read this -- I have a solution for you!

Thanks Paolo.

To summarise my previous comments with regards to the original topic, at the moment there is nothing to be done on the Nextflow side to support specific OCI functionalities in Apptainer.
If using apptainer as the runtime, all possible support is already available with no additional flags or configuration; i.e. there is no general way to get the images in this issue working without changing the images themselves.
There is no apptainer equivalent for the new --oci flag in singularity.

That being said, @pditommaso @ewels I have just realised that we can get Nextflow to fix at runtime the missing PATH in conda/mamba containers:

singularity.runOptions = '--env PATH=/opt/conda/bin:\\\$PATH'

@emiller88 I have just tried the above with your test case, and IT WORKS!

This would be used in place of singularity.oci = true, which is not required in this context.
With that, you can use the Docker-purposed container instead of having to resort to a second, Singularity-purposed one, without having to touch the image itself.
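
For illustration, here is how that workaround might sit in a config, reusing the container name from the logs above (a sketch assembled from the snippets in this thread, not a complete pipeline configuration):

// nextflow.config
process.container = 'quay.io/nf-core/modules/bowtie:bowtie_build--df26d88a69745299'
singularity {
    enabled    = true
    // Prepend the conda bin directory so tools installed via conda/mamba are
    // found at runtime; the escaping keeps $PATH literal until it is evaluated
    // inside the container.
    runOptions = '--env PATH=/opt/conda/bin:\\\$PATH'
}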

@pditommaso, what we could implement is a small utility option for Conda containers, e.g. condapath or similar, that simply adds the flag above without boilerplate, to make life easier for conda/mamba users. Thoughts?

@pditommaso
Member

To summarise my previous comments with regards to the original topic, at the moment there is nothing to be done on the Nextflow side to support specific OCI functionalities in Apptainer.

I think that's not correct. Apptainer expects the OCI container to be specified via docker://, but Nextflow interprets this as the need to pull it and convert it to an image file.

The apptainer.oci flag should prevent this from happening and allow running it in OCI mode.

The PATH problem is irrelevant in this context.

@marcodelapierre
Member

Have you tested what you say @pditommaso ?

and allow running it in OCI mode.

I cannot seem to find such a mode in Apptainer, neither in the docs nor at runtime in my tests.

@pditommaso
Member

You are saying that when running apptainer run docker://sylabsio/lolcow:latest it does a mere conversion from OCI image to SIF image file?

@marcodelapierre
Member

oh yes

@marcodelapierre
Member

As described above, I cannot see any trace, in the docs or at runtime, of a proper OCI runtime.

In fact, the wording in the docs only ever mentions the ability to pull OCI containers.

Nor is there any mention of support for runc or crun as the low-level engine.

@pditommaso
Member

pditommaso commented Nov 30, 2023

Fair enough, but this can still be an implementation detail: from the user's point of view it is a transparent process, and effectively it allows running a Docker (OCI) container via apptainer run.

We need to understand if it's smart enough to use the same cached converted SIF for containers having the same digest but different names, e.g.:

apptainer run docker://wave.seqera.io/wt/3a94c9b7c579/wave/build:cowpy--924cf9852a1402ab
apptainer run docker://wave.seqera.io/wt/179fcadc90ed/wave/build:cowpy--924cf9852a1402ab

edmundmiller added a commit to nf-core/modules that referenced this issue Nov 30, 2023
@edmundmiller
Contributor Author

As a user myself I don't have a specific goal. I'm just trying to understand what we can get away with in nf-core/modules now.

There's two things going on:

  1. Get rid of the scary container logic and just point to one container.
  2. Replace mulled biocontainers with containers built by wave

@pditommaso
Member

pditommaso commented Nov 30, 2023

I agree 100%. I've tested it, and even though apptainer does not support OCI mode, it essentially works in the same manner when using an OCI container prefixed with docker://.

It's even smart enough to cache the SIF based on the container digest checksum.

Regarding the problem with PATH and exec, this was fixed by adding the export of the Mamba root path to the Dockerfile created by Wave:

https://github.com/seqeralabs/libseqera/blob/6a7efe04f47717acbdfae2db989d6e943fa4ece7/wave-utils/src/main/resources/templates/conda/dockerfile-conda-file.txt#L7-L7

Indeed

$ apptainer exec docker://wave.seqera.io/wt/230d6111c473/wave/build:bedtools-2.30.0--4d32f5e1745982a7 bedtools --version
INFO:    Using cached SIF image
bedtools v2.30.0

and

$ singularity exec --oci docker://wave.seqera.io/wt/230d6111c473/wave/build:bedtools-2.30.0--4d32f5e1745982a7 bedtools --version
INFO:    Using cached OCI-SIF image
bedtools v2.30.0

To summarize, we need to make apptainer support an apptainer.oci=true flag, in a similar way to how it's done for Singularity.

@marcodelapierre
Member

marcodelapierre commented Dec 1, 2023

Hi Edmund, many thanks for summarising, that helps me a lot, brilliant! :-)

I think your point 1 above, on being able to use a single container, can be achieved quickly. However, let me break it down into two sub-points:

1a- (this issue): you are getting a runtime error when using the Docker container (let's forget about the OCI label for now)

  • This can be fixed right now for most non-Wave conda/mamba based containers (eg biocontainers) by adding the following to your config: singularity.runOptions = '--env PATH=/opt/conda/bin:\\\$PATH'. Wave solves it for you with no syntax needed.

1b- nf-core/modules#4519 : the issue you mention in there is having a copy of the SIF in the work directory, eg in the case of HPC clusters

  • when I was responsible for containers + Nextflow at an HPC centre (until last month), to reduce the duplication of SIFs I set up the following in the nextflow module, so that the Nextflow container cache was in a single shared location for all workflows. As an example (Lua syntax, but it gives the gist): setenv("NXF_SINGULARITY_CACHEDIR", "/home/username/.nextflow_singularity"). You can try this right now as well (see also the export example after this list).
  • alternatively, @pditommaso, to be honest we could disable the pull/copy of the SIF file entirely in the codebase; singularity and apptainer cache it anyway. Happy to do it if you green-light it.

2- using Wave instead of Mulled (nf-core/modules#4080): I see the NF team is helping there already, but please feel free to ping me there as well
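
Outside of a modulefile, the same effect as the Lua snippet in 1b can be obtained by exporting the variable in the shell or job script (the path is illustrative):

# Point Nextflow's Singularity/Apptainer image cache at a single shared location
export NXF_SINGULARITY_CACHEDIR=/shared/containers/nextflow_cache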

Please ask me anything! :-)

@pditommaso
Member

This should do #4548

@marcodelapierre
Member

marcodelapierre commented Dec 1, 2023

This should do #4548

I think we do not need a new manual option for the required features; they can be implemented transparently in a general manner.
All we need is to turn off the SIF save in work, that's it. All the rest, right now, is not necessary.

Apptainer is not able to skip the conversion. Apptainer has no "true" or "full" OCI mode as Singularity has.

@marcodelapierre
Member

marcodelapierre commented Dec 1, 2023

Both Singularity and Apptainer have been able to cache the SIF by default for quite some time (at least 4 years I think; it was there when I designed my online tutorial). I think it probably started with Singularity 3.0.

[edited] Docs for singularity 3.0 mention the cache: https://docs.sylabs.io/guides/3.0/user-guide/build_env.html?highlight=cache

So, as I said, all we need is to turn off saving the SIF to work/. Singularity will cache it anyway, not in OCI format, but in SIF format.

@marcodelapierre
Member

@pditommaso Let me show you

$ cd ~
$ rm -fr .singularity/
$ singularity exec docker://ubuntu echo ciao
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob aece8493d397 done   | 
Copying config e4c5895818 done   | 
Writing manifest to image destination
Getting image source signatures
Copying blob aece8493d397 done   | 
Copying config e4c5895818 done   | 
Writing manifest to image destination
2023/12/01 04:34:16  info unpack layer: sha256:aece8493d3972efa43bfd4ee3cdba659c0f787f8f59c82fb3e48c87cbb22a12e
INFO:    Creating SIF file...
ciao

$ singularity cache list -v
NAME                     DATE CREATED           SIZE             TYPE
aece8493d3972efa43bfd4   2023-12-01 04:34:16    28.17 MiB        blob
c9cf959fd83770dfdefd8f   2023-12-01 04:34:16    0.41 KiB         blob
e4c58958181a5925816faa   2023-12-01 04:34:16    2.24 KiB         blob
sha256:c9cf959fd83770d   2023-12-01 04:34:25    28.43 MiB        oci-tmp

There are 1 container file(s) using 28.43 MiB and 3 oci blob file(s) using 28.17 MiB of space
Total space used: 56.61 MiB

$ rm -fr .singularity/
$ singularity exec --oci docker://ubuntu echo ciao
Getting image source signatures
Copying blob aece8493d397 done   | 
Copying config e4c5895818 done   | 
Writing manifest to image destination
Getting image source signatures
Copying blob aece8493d397 done   | 
Copying config e4c5895818 done   | 
Writing manifest to image destination
INFO:    Converting OCI image to OCI-SIF format
INFO:    Squashing image to single layer
INFO:    Writing OCI-SIF image
INFO:    Cleaning up.
ciao
ubuntu@ip-172-31-5-28:~$ singularity cache list -v
NAME                     DATE CREATED           SIZE             TYPE
aece8493d3972efa43bfd4   2023-12-01 04:34:48    28.17 MiB        blob
c9cf959fd83770dfdefd8f   2023-12-01 04:34:48    0.41 KiB         blob
e4c58958181a5925816faa   2023-12-01 04:34:48    2.24 KiB         blob
sha256:c9cf959fd83770d   2023-12-01 04:35:00    28.43 MiB        oci-sif

There are 1 container file(s) using 28.43 MiB and 3 oci blob file(s) using 28.17 MiB of space
Total space used: 56.60 MiB

$ rm -fr .singularity/
$ ls
apptainer_1.2.5_amd64.deb   singularity-ce_4.0.2-jammy_amd64.deb  
$ sudo apt remove -y singularity-ce && sudo apt install -y ./apptainer_1.2.5_amd64.deb 
[...]

ubuntu@ip-172-31-5-28:~$ apptainer exec docker:ubuntu echo ciao
INFO:    /etc/singularity/ exists; cleanup by system administrator is not complete (see https://apptainer.org/docs/admin/latest/singularity_migration.html)
FATAL:   Unable to handle docker:ubuntu uri: failed to get checksum for docker:ubuntu: unable to parse image name docker:ubuntu: docker: image reference ubuntu does not start with //
ubuntu@ip-172-31-5-28:~$ apptainer exec docker://ubuntu echo ciao
INFO:    /etc/singularity/ exists; cleanup by system administrator is not complete (see https://apptainer.org/docs/admin/latest/singularity_migration.html)
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob aece8493d397 done  
Copying config e4c5895818 done  
Writing manifest to image destination
Storing signatures
2023/12/01 04:36:06  info unpack layer: sha256:aece8493d3972efa43bfd4ee3cdba659c0f787f8f59c82fb3e48c87cbb22a12e
INFO:    /etc/singularity/ exists; cleanup by system administrator is not complete (see https://apptainer.org/docs/admin/latest/singularity_migration.html)
INFO:    Creating SIF file...
ciao

$ apptainer cache list -v
INFO:    /etc/singularity/ exists; cleanup by system administrator is not complete (see https://apptainer.org/docs/admin/latest/singularity_migration.html)
NAME                     DATE CREATED           SIZE             TYPE
aece8493d3972efa43bfd4   2023-12-01 04:36:06    28.17 MiB        blob
c9cf959fd83770dfdefd8f   2023-12-01 04:36:06    0.41 KiB         blob
e4c58958181a5925816faa   2023-12-01 04:36:06    2.24 KiB         blob
4f1895695840d967579e77   2023-12-01 04:36:15    28.43 MiB        oci-tmp

There are 1 container file(s) using 28.43 MiB and 3 oci blob file(s) using 28.17 MiB of space
Total space used: 56.61 MiB

$ rm -fr .apptainer/
$ sudo apt remove -y apptainer && sudo apt install -y ./singularity-ce_4.0.2-jammy_amd64.deb 

@marcodelapierre
Member

3 approaches shown:

  • singularity exec
  • singularity exec --oci
  • apptainer exec

Have a look at the cache listings:

  • All three approaches cache blobs and images by default (by default under ~/.singularity and ~/.apptainer respectively)
  • OCI blobs: they are the same for all 3, see shasums, as it should be
  • Final image: singularity and apptainer create a SIF (called oci-tmp in their jargon)
  • Final image: singularity exec --oci creates the OCI-SIF, ie skips the conversion

@marcodelapierre
Member

@pditommaso have a look 👆 , let me know what you think.

As I said, all we need to do is to disable the pulling of the SIF artifacts into work/, for both singularity and apptainer.

@marcodelapierre
Member

And at last, on this, after testing

We need to understand if it's smart enough to use the same cached converted SIF for containers having the same digest but different names

I can confirm that both singularity and apptainer caches comply with this requirement, as we speak, no extra config needed.

@marcodelapierre
Member

marcodelapierre commented Dec 1, 2023

Summarising:

1a- executable not found error (this issue - non-Wave containers only): fix with singularity.runOptions = '--env PATH=/opt/conda/bin:\\\$PATH'

1b- getting rid of SIF copy in work/ (issue in nf-core/modules#4519): this should be done transparently by Nextflow based on whether the requested image starts with docker:// or not. PR in preparation.

2- Wave instead of Mulled : PR in progress by Edmund in nf-core/modules#4080

3- no need to formalise a OCI mode for apptainer

4- @pditommaso we may want to provide a singularity/apptainer option that allows switching between exec and run. Note that this is not 1:1 related to OCI mode per se; I think run can be made conditional on running a docker:// image, plus giving the option to turn it on for SIF images that can take advantage of it. PR in preparation.

@marcodelapierre
Member

Having issues with my dev&testing environment today, will update here by Monday

@pditommaso
Member

Ok, this changes things a bit. I didn't know both apptainer and singularity could directly run an OCI container by doing an implicit conversion.

Considering this, I agree that OCI mode should be a Singularity-only capability.

I'm not sure instead it's a good idea to get rid completely of the current conversion process made by Nextflow. It's widely used and I'm not sure it could be replaced transparently by the one made automatically by the singularity/apptainer runtime.

However, we should support it by adding a setting, e.g. directMode, for both singularity and apptainer, to enable this capability independently of the OCI mode.

@marcodelapierre
Member

@edmundmiller this PR, #4548, was merged about 3 weeks ago, implementing the ociAutoPull directive for both Singularity and Apptainer.

This should help for your issue: nf-core/modules#4519

With ociAutoPull, Nextflow won't cache the SIF file in the work directory; it will instead just leverage Apptainer cache, pretty much the same way as with Docker.
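
For reference, a minimal config sketch of the new directive (assuming it is set in the apptainer scope, as described above; the singularity scope works analogously):

// nextflow.config
apptainer {
    enabled     = true
    // Let apptainer pull and convert docker:// images itself and keep them in
    // its own cache (e.g. ~/.apptainer or $APPTAINER_CACHEDIR), instead of
    // Nextflow saving a converted SIF copy into the work directory.
    ociAutoPull = true
}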

Please feel free to close this issue or to further comment :)

@edmundmiller
Contributor Author

edmundmiller commented Jan 8, 2024

Awesome!

instead just leverage Apptainer cache

To clarify, will this put the images in $APPTAINER_CACHE? The main issue we were having is users on HPCs where that's not set properly, and it defaults (at the system level; in Nextflow it's work/singularity/cache) to something like ~/.singularity/cache, which was filling up their home directories. 😬

I've gotten around that with some tricks like this https://github.com/nf-core/configs/blob/89e1f18b8a7b071753fd5304e2dde0d61d51f26f/conf/utd_ganymede.config#L10 but it's a default that caused @drpatelh a lot of pain.

So less converting is awesome, but the main concern was the cache location.

@marcodelapierre
Member

Oh, I hear you Edmund! In my 5 years at an HPC centre, I saw so many users of various applications with the home filling up issue!

My general suggestion is to aim to fix these cache location issues on a per-caching-application basis, rather than within the "client" applications that make use of them. This makes the setup more robust and maintainable. I.e. in this case, because the cache is owned by Singularity/Apptainer, it should be addressed within their configuration, not within the Nextflow one. This would hold true for any other cache (NXF_HOME -> Nextflow; .local -> Python; .spack -> Spack; ..).

My practical suggestions to sort it out:

  1. move the cache to where you want it to be (i.e. a path with enough quota/space), then create a symbolic link from the default location. This is quite good because it works for ANY application without requiring configuration changes (see the shell sketch after this list);
  2. use application-specific options to configure the cache to be in the chosen path; this is more elegant, but it requires dedicated setup for each application (whether it is an environment variable or something else), hence extra work, which is why we found it less preferable.
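
A minimal shell sketch of option 1 for the Apptainer cache, assuming a scratch filesystem at /scratch/$USER (paths are illustrative):

mkdir -p /scratch/$USER
mv ~/.apptainer /scratch/$USER/apptainer_cache    # relocate the existing cache
ln -s /scratch/$USER/apptainer_cache ~/.apptainer # point the default location at it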

Hope this can help in streamlining your setup.

@marcodelapierre marcodelapierre linked a pull request Jan 18, 2024 that will close this issue
@marcodelapierre
Member

@edmundmiller I am closing this as I trust the merged PR above addresses the key issue here.

Happy to re-open and follow-up further if needed.

edmundmiller added a commit to nf-core/modules that referenced this issue Feb 16, 2024
edmundmiller added a commit to nf-core/modules that referenced this issue Feb 16, 2024
edmundmiller added a commit to nf-core/modules that referenced this issue Jun 11, 2024
edmundmiller added a commit to nf-core/modules that referenced this issue Jun 18, 2024
edmundmiller added a commit to nf-core/modules that referenced this issue Aug 8, 2024
edmundmiller added a commit to nf-core/modules that referenced this issue Nov 7, 2024
* build: Add wave

* build: Set strategy to dockerfile, conda then container

* refactor: Remove container

* build: Add a repo to push to

* ci(wave): Add wave build

https://github.com/nodejs/docker-node/blob/3c4fa6daf06a4786d202f2f610351837806a0380/.github/workflows/build-test.yml#L29

* ci(wave): Switch to all_changed_files

* ci(wave): Only look for envronment.ymls

* dummy: Change env

* ci(wave): Remove raw format

* ci(wave): Try a bunch of different things at once

* ci(wave): Remove redundant fromJson and wrap in an array

* ci(wave): I have no idea what I'm doing

* ci(wave): Wrap it

* ci(wave): Found an example

https://github.com/tj-actions/changed-files/blob/main/.github/workflows/matrix-test.yml

* ci(wave): Maybe quotes?

* ci(wave): That'll do it

* ci(wave): Fix wave install

* ci(wave): Hard code an image

* ci(wave): Add secrets

* feat: Try a different files structure

* ci(wave): First stab at building singularity images

* fixup! feat: Try a different files structure

* ci(wave): Add profile to matrix

* ci(wave): Give up on fancy substitution

* ci(wave): Add await

Co-authored-by: ewels <[email protected]>

* ci(wave): Switch to quay

* test(wave): Add freeze and update build repo

* refactor(wave): What happens if I add a container?

* refactor(wave): Have both bowtie modules use the same env

For the sake of demonstration

* test: Cut out using wave on tests

* refactor: What happens if we use the singularity one?

* refactor: Keep container directives for offline download

seqeralabs/wave#323

* feat: Try new singularity OCI setting

nextflow-io/nextflow@f5362a7

* build: Update container name

Guess #4327 broke that

* chore: Bump wave-cli version

* ci: Install runc

* ci: Switch to singularityhub action

nextflow-io/nextflow#4543

* ci: Install new singularity manually

Why that action trys to build from source, idk.

* ci: Install dependancies for singularity

* ci: runc => crun

* ci: Fix cgroup error

https://blog.misharov.pro/2021-05-16/systemd-github-actions

* ci: That'll do it

* ci: Remove Dockerfile

We'll have a seperate action for this I think

* ci: Update name

* ci: Push to the correct repos

* ci: Remove OCI stuff

* ci: Need a full URL

* ci: Fix // in container name

* ci: Remove push

Build once, renovate should bump the images automagically

* build: Add containers back

* ci: Add cache repos

Idk what this does exactly

* ci: Change registry name to use _

Because "build" is a api end point on quay.io.

So `bowtie/build` doesn't work.

Other plus is this matches the conda env name.

* build: / => _ in container name

* Try ociAutoPull

* chore: Add renovate comments to samtools

Just to trigger wave build

* test: Add ociAutoPull to nf-test

* ci: Bump wave version

* chore: Bump containers with new wave version

Not sure why that's happening...

* build: Update to use commity.wave.seqera.io

* ci: Bump wave-cli to 1.4.1

* ci: Try apptainer

* ci: Remove build-repo to see what happens

* build: Bump Nextflow version requirement

* fix: Get rid of the environment name?

Maybe this will get the auto generated tag?

* ci: Bump action versions

* ci: Try name-strategy tagPrefix

seqeralabs/wave-cli@269df0e

* ci: Remove singularity build for now

* ci: Try imageSuffix

* ci: Try none

* ci: What is the bowtie container name

* ci: Remove --name-strategy

* style: Add back in container elvis operator

* ci: Remove cache repo

* Revert "build: Bump Nextflow version requirement"

This reverts commit 69e1ea5.

* Revert "test: Add ociAutoPull to nf-test"

This reverts commit 5a3d546.

* test(#6505): Snapshot the versions contents, not the hash

* ci(#6505): Update version snapshot after building containers

* test(#6505): Attempt a topic channel with tests

askimed/nf-test#258

* chore: Bump to 1.5.0

* fix: Remove shard and filter on test bumping

* build: Bump images to match environment

* ci: Fix nf-test setup

* ci: Remove snapshot bumping

* build: Fix containers in bowtie

---------

Co-authored-by: ewels <[email protected]>