backend pass
Thibault Jeandet committed Oct 27, 2017
1 parent 4ca3563 commit 6524ac8
Showing 7 changed files with 130 additions and 286 deletions.
118 changes: 21 additions & 97 deletions docs/backends/Backends.md


A backend represents a way to run the commands of your workflow. Cromwell allows backends conforming to
the Cromwell backend specification to be plugged into the Cromwell engine. In addition, the following backends are included with the
Cromwell distribution:

* **[Local](Local)** - Run jobs as subprocesses on the machine running Cromwell. Supports launching in Docker containers.
* **[HPC](HPC): [SunGridEngine](SGE) / [LSF](LSF) / [HTCondor](HTcondor) / [SLURM](SLURM), etc.** - Run jobs as subprocesses or via a dispatcher, using `bash`, `qsub`, `bsub`, etc. to run scripts. Supports launching in Docker containers.
* **[Google Cloud](Google)** - Launch jobs on Google Compute Engine through the Google Genomics Pipelines API.
* **[GA4GH TES](TES)** - Launch jobs on servers that support the GA4GH Task Execution Schema (TES).
* **[Spark](Spark)** - Supports execution of Spark jobs.

HPC backends are grouped under the same umbrella because they all use the same generic configuration, which can be specialized to fit the needs of a particular technology.

Backends are specified in the `backend.providers` configuration block. Each backend has a configuration that looks like this:

```hocon
backend {
  default = "Local"
  providers {
    BackendName {
      actor-factory = "FQN of BackendLifecycleActorFactory class"
      config {
        key = "value"
        key2 = "value2"
        ...
      }
    }
  }
}
```

The structure within the `config` block will vary from one backend to another; it is the backend implementation's responsibility
to be able to interpret its configuration.

In the example below, two backend types are listed within the `providers` section, so both
are available. The default backend is specified by `backend.default` and must match the name of one of the
configured backends:

```hocon
backend {
  default = "Local"
  providers {
    Local {
      actor-factory = "cromwell.backend.impl.local.LocalBackendLifecycleActorFactory"
      config {
        root: "cromwell-executions"
        filesystems = {
          local {
            localization: [
              "hard-link", "soft-link", "copy"
            ]
          }
          gcs {
            # References an auth scheme defined in the 'google' stanza.
            auth = "application-default"
          }
        }
      }
    }
    JES {
      actor-factory = "cromwell.backend.impl.jes.JesBackendLifecycleActorFactory"
      config {
        project = "my-cromwell-workflows"
        root = "gs://my-cromwell-workflows-bucket"
        maximum-polling-interval = 600
        dockerhub {
          # account = ""
          # token = ""
        }
        genomics {
          # A reference to an auth defined in the 'google' stanza at the top. This auth is used to create
          # Pipelines and manipulate auth JSONs.
          auth = "application-default"
          endpoint-url = "https://genomics.googleapis.com/"
        }
        filesystems = {
          gcs {
            # A reference to a potentially different auth for manipulating files via engine functions.
            auth = "user-via-refresh"
          }
        }
      }
    }
  }
}
```
The providers section can contain multiple backends which will all be available to Cromwell.

**Backend Job Limits**

All backends support limiting the number of concurrent jobs by specifying the following option in the backend's configuration
stanza:

```hocon
backend {
  providers {
    BackendName {
      config {
        # Limits the number of jobs this Cromwell instance will run concurrently on this backend.
        # `concurrent-job-limit` is the setting name used in the Cromwell example configuration; adjust the value as needed.
        concurrent-job-limit = 5
      }
    }
  }
}
```

**Backend Filesystems**

Each backend uses a filesystem to store the directory structure and results of an executed workflow.
The backend/filesystem pairings are as follows:

* The Local, HPC and Spark backends use the [Shared Local Filesystem](SharedFilesystem).
* The Google backend uses the [Google Cloud Storage Filesystem](Google/#google-cloud-storage-filesystem).

Additional filesystem capabilities can be added depending on the backend.
For instance, an HPC backend can be configured to work with files on Google Cloud Storage. See the [HPC documentation](HPC) for more details.
54 changes: 54 additions & 0 deletions docs/backends/HPC.md
Cromwell provides a generic way to configure a backend that relies on most High Performance Computing (HPC) frameworks and has access to a shared filesystem.

The two main features needed to use such a backend are a way to submit a job to the compute cluster and a way to get its status, both through the command line.
You can find example configurations for several of these backends here (a generic sketch follows the list):

* [SGE](SGE)
* [LSF](LSF)
* [SLURM](SLURM)
* [HTCondor](HTcondor)
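
To illustrate how those pieces fit together, here is a minimal sketch of such a provider. The `MyCluster` name and the `my-*` commands are placeholders rather than real scheduler commands; see the pages above for working values for a specific scheduler.

```
MyCluster {
  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
  config {
    # Command line used to hand the job script to the scheduler (placeholder).
    submit = "my-submit-command -N ${job_name} -o ${out} -e ${err} ${script}"
    # Command line used to cancel a previously submitted job (placeholder).
    kill = "my-cancel-command ${job_id}"
    # Command line used to check whether a job is still alive (placeholder).
    check-alive = "my-status-command ${job_id}"
    # Regular expression extracting the job id from the submit command's output.
    job-id-regex = "(\\d+)"
  }
}
```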

## FileSystems

### Shared FileSystem
HPC backends rely on being able to access and use a shared filesystem to store workflow results.

Cromwell is configured with a root execution directory which is set in the configuration file under `backend.providers.<backend_name>.config.root`. This is called the `cromwell_root` and it is set to `./cromwell-executions` by default. Relative paths are interpreted as relative to the current working directory of the Cromwell process.
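
For example, to keep all executions on a shared mount you could override it as follows (the path and the `MyHPCBackend` provider name are purely illustrative):

```
backend.providers.MyHPCBackend.config {
  root = "/shared/cromwell-executions"
}
```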

When Cromwell runs a workflow, it first creates a directory `<cromwell_root>/<workflow_uuid>`. This is called the `workflow_root` and it is the root directory for all activity in this workflow.

Each `call` has its own subdirectory located at `<workflow_root>/call-<call_name>`. This is the `<call_dir>`.
Any input files to a call need to be localized into the `<call_dir>/inputs` directory. There are different localization strategies that Cromwell will try until one works. Below is the default order specified in `reference.conf` but it can be overridden:

* `hard-link` - This will create a hard link to the file
* `soft-link` - Create a symbolic link to the file. This strategy is not applicable for tasks which specify a Docker image and will be ignored.
* `copy` - Make a copy of the file

Shared filesystem localization is defined in the `config` section of each backend. The default stanza for the Local and HPC backends looks like this:

```
filesystems {
  local {
    localization: [
      "hard-link", "soft-link", "copy"
    ]
  }
}
```
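
If one of those strategies is not suitable for your setup (hard links, for example, cannot cross filesystem boundaries), the list can be trimmed or reordered. A sketch, again assuming a provider named `MyHPCBackend`:

```
backend.providers.MyHPCBackend.config.filesystems {
  local {
    # Skip hard links and fall back from symbolic links to plain copies.
    localization: [
      "soft-link", "copy"
    ]
  }
}
```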

### Additional FileSystems

HPC backends (as well as the Local backend) can be configured to interact with other types of filesystems, for example to read input files located elsewhere.
Currently the only other filesystem supported is Google Cloud Storage (GCS). See the [Google section](Google) of the documentation for information on how to configure GCS in Cromwell.
Once you have a Google authentication scheme configured, you can simply add a `gcs` stanza to your configuration file to enable GCS:

```
backend.providers.MyHPCBackend {
  filesystems {
    gcs {
      # A reference to a potentially different auth for manipulating files via engine functions.
      auth = "application-default"
    }
  }
}
```
15 changes: 15 additions & 0 deletions docs/backends/LSF.md
The following configuration can be used as a base to allow Cromwell to interact with an [LSF](https://en.wikipedia.org/wiki/Platform_LSF) cluster and dispatch jobs to it:

```hocon
LSF {
  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
  config {
    submit = "bsub -J ${job_name} -cwd ${cwd} -o ${out} -e ${err} /bin/bash ${script}"
    kill = "bkill ${job_id}"
    check-alive = "bjobs ${job_id}"
    job-id-regex = "Job <(\\d+)>.*"
  }
}
```
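
If you want tasks to be able to request cores and memory, one possible extension is to declare runtime attributes and pass them through to `bsub`. This is only a sketch: the attribute names are arbitrary, and the exact `bsub` flags and memory units depend on how your LSF installation is configured, so check your site's documentation before relying on it.

```hocon
LSF {
  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
  config {
    runtime-attributes = """
    Int cpu = 1
    Int memory_mb = 2048
    """
    # -n requests job slots; -M and -R express the memory requirement.
    submit = """
        bsub -J ${job_name} -cwd ${cwd} -o ${out} -e ${err} \
        -n ${cpu} -M ${memory_mb} -R "rusage[mem=${memory_mb}]" \
        /bin/bash ${script}
    """
    kill = "bkill ${job_id}"
    check-alive = "bjobs ${job_id}"
    job-id-regex = "Job <(\\d+)>.*"
  }
}
```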

For information on how to further configure it, take a look at the [Getting Started on HPC Clusters](../tutorials/HPCIntro) tutorial.
63 changes: 8 additions & 55 deletions docs/backends/Local.md


**Local Backend**

The local backend simply launches a subprocess for each job invocation and waits for it to produce a return code (rc) file, which contains the exit code of the job's command.
It is enabled by default and needs no further configuration to start using it.

It uses the local filesystem on which Cromwell is running to store the workflow directory structure.

This backend creates three files in the `<call_dir>` (see previous section):

* `script` - A shell script of the job to be run. This contains the user's command from the `command` section of the WDL code.
* `stdout` - The standard output of the process
* `stderr` - The standard error of the process

The `script` file contains:

```
#!/bin/sh
cd <container_call_root>
<user_command>
echo $? > rc
```

`<container_call_root>` is equal to `<call_dir>` for non-Docker jobs; for jobs running in a Docker container it will be under `/cromwell-executions/<workflow_uuid>/call-<call_name>` inside the container.

When running without docker, the subprocess command that the local backend will launch is:

```
/bin/bash <script>"
```

When running with docker, the subprocess command that the local backend will launch is:

```
docker run --rm -v <cwd>:<docker_cwd> -i <docker_image> /bin/bash < <script>
```

You can find the complete set of configurable settings, with explanations, in the [example configuration file](https://github.com/broadinstitute/cromwell/blob/b47feaa207fcf9e73e105a7d09e74203fff6f73b/cromwell.examples.conf#L193).

The Local backend makes use of the same generic configuration as HPC backends. The same [filesystem considerations](HPC#filesystems) apply.
**Note to OSX users**: Docker on Mac restricts the directories that can be mounted; only some directories are allowed by default.
If you try to mount a volume from a disallowed directory, jobs can fail in odd ways (for example with a missing rc file). Before mounting a directory, make sure it is in the list
of allowed directories. See the [docker documentation](https://docs.docker.com/docker-for-mac/osxfs/#namespaces) for how to configure those directories.
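
For reference, a Local provider stanza along the lines of the example configuration file linked above might look like the following sketch. The keys shown follow that file; treat the values as illustrative rather than definitive.

```hocon
backend {
  default = "Local"
  providers {
    Local {
      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
      config {
        # Where the workflow directory structure is created on the local filesystem.
        root = "cromwell-executions"

        # Jobs are plain subprocesses running in the background on the Cromwell host.
        run-in-background = true

        # Command used to launch the job script generated by Cromwell.
        submit = "/usr/bin/env bash ${script}"
      }
    }
  }
}
```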
27 changes: 27 additions & 0 deletions docs/backends/SLURM.md
The following configuration can be used as a base to allow Cromwell to interact with a [SLURM](https://slurm.schedmd.com/) cluster and dispatch jobs to it:

```hocon
SLURM {
  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
  config {
    runtime-attributes = """
    Int runtime_minutes = 600
    Int cpus = 2
    Int requested_memory_mb_per_core = 8000
    String queue = "short"
    """
    submit = """
        sbatch -J ${job_name} -D ${cwd} -o ${out} -e ${err} -t ${runtime_minutes} -p ${queue} \
        ${"-n " + cpus} \
        --mem-per-cpu=${requested_memory_mb_per_core} \
        --wrap "/bin/bash ${script}"
    """
    kill = "scancel ${job_id}"
    check-alive = "squeue -j ${job_id}"
    job-id-regex = "Submitted batch job (\\d+).*"
  }
}
```
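
This block, like the other HPC examples, goes under `backend.providers`. To make SLURM the default backend, the surrounding structure would look something like this sketch:

```hocon
backend {
  default = "SLURM"
  providers {
    SLURM {
      # ... configuration from the block above ...
    }
  }
}
```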

For information on how to further configure it, take a look at the [Getting Started on HPC Clusters](../tutorials/HPCIntro) tutorial.