Commit 6524ac8 (1 parent: 4ca3563), committed by Thibault Jeandet on Oct 27, 2017. Showing 7 changed files with 130 additions and 286 deletions.
Cromwell provides a generic way to configure a backend relying on most High Performance Computing (HPC) frameworks with access to a shared filesystem.

The two main features needed for this backend are a way to submit a job to the compute cluster and a way to get its status through the command line.
You can find example configurations for a variety of these backends here:

* [SGE](SGE)
* [LSF](LSF)
* [SLURM](SLURM)
* [HTCondor](HTcondor)
## FileSystems

### Shared FileSystem
HPC backends rely on being able to access and use a shared filesystem to store workflow results.

Cromwell is configured with a root execution directory which is set in the configuration file under `backend.providers.<backend_name>.config.root`. This is called the `cromwell_root` and it is set to `./cromwell-executions` by default. Relative paths are interpreted as relative to the current working directory of the Cromwell process.

When Cromwell runs a workflow, it first creates a directory `<cromwell_root>/<workflow_uuid>`. This is called the `workflow_root` and it is the root directory for all activity in this workflow.

Each `call` has its own subdirectory located at `<workflow_root>/call-<call_name>`. This is the `<call_dir>`.
Any input files to a call need to be localized into the `<call_dir>/inputs` directory. There are several localization strategies that Cromwell will try until one works. Below is the default order specified in `reference.conf`, but it can be overridden:
* `hard-link` - Create a hard link to the file.
* `soft-link` - Create a symbolic link to the file. This strategy is not applicable for tasks that specify a Docker image and will be ignored.
* `copy` - Make a copy of the file.

Shared filesystem localization is defined in the `config` section of each backend. The default stanza for the Local and HPC backends looks like this:
```hocon
filesystems {
  local {
    localization: [
      "hard-link", "soft-link", "copy"
    ]
  }
}
```
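The fallback behavior of this list can be pictured with a short sketch. This is illustrative only, not Cromwell's actual implementation: each strategy is tried in order, and the first one that succeeds wins. For instance, a hard link fails when the input lives on a different filesystem than the execution directory, at which point the next strategy is attempted.

```python
import os
import shutil


def localize(src: str, dst: str,
             strategies=("hard-link", "soft-link", "copy")) -> str:
    """Try each localization strategy in order, falling back to the
    next one on failure. Returns the strategy that succeeded.
    A sketch, not Cromwell's code."""
    for strategy in strategies:
        try:
            if strategy == "hard-link":
                os.link(src, dst)       # fails across filesystems/devices
            elif strategy == "soft-link":
                os.symlink(src, dst)    # Cromwell skips this for Docker tasks
            elif strategy == "copy":
                shutil.copy(src, dst)
            return strategy
        except OSError:
            continue
    raise OSError(f"could not localize {src} to {dst}")
```

On a single local filesystem the first strategy (`hard-link`) normally succeeds, so the later ones are never reached.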
### Additional FileSystems

HPC backends (as well as the Local backend) can be configured to interact with other types of filesystems, where input files may be located, for example.
Currently the only other filesystem supported is Google Cloud Storage (GCS). See the [Google section](Google) of the documentation for information on how to configure GCS in Cromwell.
Once you have Google authentication configured, you can simply add a `gcs` stanza in your configuration file to enable GCS:
```hocon
backend.providers.MyHPCBackend {
  filesystems {
    gcs {
      # A reference to a potentially different auth for manipulating files via engine functions.
      auth = "application-default"
    }
  }
}
```
The following configuration can be used as a base to allow Cromwell to interact with an [LSF](https://en.wikipedia.org/wiki/Platform_LSF) cluster and dispatch jobs to it:
```hocon
LSF {
  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
  config {
    submit = "bsub -J ${job_name} -cwd ${cwd} -o ${out} -e ${err} /bin/bash ${script}"
    kill = "bkill ${job_id}"
    check-alive = "bjobs ${job_id}"
    job-id-regex = "Job <(\\d+)>.*"
  }
}
```
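The `job-id-regex` is matched against the output of the `submit` command to recover the cluster's job id, which Cromwell then substitutes into `kill` and `check-alive` as `${job_id}`. A minimal sketch of that extraction (not Cromwell's actual code):

```python
import re

# Same pattern as job-id-regex above; HOCON's "\\d" reaches the
# regex engine as a single \d escape.
JOB_ID_REGEX = re.compile(r"Job <(\d+)>.*")


def parse_lsf_job_id(submit_stdout: str) -> str:
    """Extract the job id that bsub prints, e.g. 'Job <12345> is submitted ...'."""
    match = JOB_ID_REGEX.search(submit_stdout)
    if match is None:
        raise ValueError(f"no LSF job id found in: {submit_stdout!r}")
    return match.group(1)
```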

For information on how to further configure it, take a look at the [Getting Started on HPC Clusters](../tutorials/HPCIntro) tutorial.
**Local Backend**

The local backend simply launches a subprocess for each task invocation and waits for it to produce its return code (rc) file, which contains the exit code of the task's command. It is pre-enabled by default, and no further configuration is needed to start using it.

This backend creates three files in the `<call_dir>` (see previous section):

* `script` - A shell script of the job to be run. This contains the user's command from the `command` section of the WDL code.
* `stdout` - The standard output of the process
* `stderr` - The standard error of the process
The `script` file contains:

```
#!/bin/sh
cd <container_call_root>
<user_command>
echo $? > rc
```
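To make the rc-file mechanism concrete, here is a small, hypothetical Python sketch that writes a wrapper of the same shape, runs it, and reads the exit code back, roughly what the local backend does (the real script also redirects stdout and stderr):

```python
import os
import subprocess


def run_with_rc_file(user_command: str, call_dir: str) -> int:
    """Write a Cromwell-style wrapper script, run it, and read the rc file.
    A simplified sketch; function and variable names are illustrative."""
    script = os.path.join(call_dir, "script")
    with open(script, "w") as f:
        f.write(f"#!/bin/sh\ncd {call_dir}\n{user_command}\necho $? > rc\n")
    # Cromwell launches the script and then polls for the rc file to appear.
    subprocess.run(["/bin/bash", script], check=False)
    with open(os.path.join(call_dir, "rc")) as f:
        return int(f.read().strip())
```

Because the wrapper writes `$?` to `rc` itself, Cromwell can detect completion and the exit status purely through the shared filesystem, with no direct channel to the subprocess.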

`<container_call_root>` is equal to `<call_dir>` for non-Docker jobs, or to a path under `/cromwell-executions/<workflow_uuid>/call-<call_name>` inside the container when the job runs in Docker.

When running without Docker, the subprocess command that the local backend launches is:

```
/bin/bash <script>
```

When running with Docker, the subprocess command that the local backend launches is:

```
docker run --rm -v <cwd>:<docker_cwd> -i <docker_image> /bin/bash < <script>
```
It uses the local filesystem on which Cromwell is running to store the workflow directory structure.

You can find the complete set of configurable settings with explanations in the [example configuration file](https://github.com/broadinstitute/cromwell/blob/b47feaa207fcf9e73e105a7d09e74203fff6f73b/cromwell.examples.conf#L193).

The Local backend makes use of the same generic configuration as HPC backends. The same [filesystem considerations](HPC#filesystems) apply.

**Note to macOS users**: Docker on Mac restricts the directories that can be mounted; only some are allowed by default. If you try to mount a volume from a disallowed directory, jobs can fail in an odd manner (for example, tasks failing with a missing rc file). Before mounting a directory, make sure it is in the list of allowed directories. See the [Docker documentation](https://docs.docker.com/docker-for-mac/osxfs/#namespaces) for how to configure them.
The following configuration can be used as a base to allow Cromwell to interact with a [SLURM](https://slurm.schedmd.com/) cluster and dispatch jobs to it:
```hocon
SLURM {
  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
  config {
    runtime-attributes = """
    Int runtime_minutes = 600
    Int cpus = 2
    Int requested_memory_mb_per_core = 8000
    String queue = "short"
    """
    submit = """
        sbatch -J ${job_name} -D ${cwd} -o ${out} -e ${err} -t ${runtime_minutes} -p ${queue} \
        ${"-n " + cpus} \
        --mem-per-cpu=${requested_memory_mb_per_core} \
        --wrap "/bin/bash ${script}"
    """
    kill = "scancel ${job_id}"
    check-alive = "squeue -j ${job_id}"
    job-id-regex = "Submitted batch job (\\d+).*"
  }
}
```
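At submit time, Cromwell fills in the `${...}` placeholders in `submit` with the values of the runtime attributes and task metadata. The sketch below mimics plain-name substitution only; Cromwell actually evaluates full WDL expressions here, so constructs like `${"-n " + cpus}` are out of scope for this toy version:

```python
import re


def fill_template(template: str, values: dict) -> str:
    """Replace plain ${name} placeholders with task values.
    Illustrative only; Cromwell evaluates full WDL expressions."""
    def replace(match):
        return str(values[match.group(1)])
    return re.sub(r"\$\{(\w+)\}", replace, template)
```

For example, with `runtime_minutes = 600` and `queue = "short"`, the `-t` and `-p` flags above become `-t 600 -p short` in the actual `sbatch` invocation.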

For information on how to further configure it, take a look at the [Getting Started on HPC Clusters](../tutorials/HPCIntro) tutorial.