diff --git a/docs/backends/Backends.md b/docs/backends/Backends.md
index 26a42466c27..20022b2eeff 100644
--- a/docs/backends/Backends.md
+++ b/docs/backends/Backends.md
@@ -1,46 +1,22 @@
-_For the Doc-A-Thon_
-**Questions to answer and things to consider:**
-
-1. Who is visiting the General Backends page?
-*Do they know what a backend is?*
-2. What do they need to know first?
-
-3. Is all the important information there? If not, add it!
-*Add information about SLURM? See this [Github issue](https://github.com/broadinstitute/cromwell/issues/1750) for more information.
-4. Are there things that don't need to be there? Remove them.
-
-5. Are the code and instructions accurate? Try it!
-
----
- **DELETE ABOVE ONCE COMPLETE**
----
-
-
-A backend represents a way to run the user's command specified in the `task` section. Cromwell allows for backends conforming to
-the Cromwell backend specification to be plugged into the Cromwell engine. Additionally, backends are included with the
+A backend represents a way to run the commands of your workflow. Cromwell allows backends conforming to
+the Cromwell backend specification to be plugged into the Cromwell engine. Additionally, the following backends are included with the
 Cromwell distribution:
 
-* **Local / GridEngine / LSF / etc.** - Run jobs as subprocesses or via a dispatcher. Supports launching in Docker containers. Use `bash`, `qsub`, `bsub`, etc. to run scripts.
-* **Google Cloud** - Launch jobs on Google Compute Engine through the Google Genomics Pipelines API.
-* **GA4GH TES** - Launch jobs on servers that support the GA4GH Task Execution Schema (TES).
-* **HtCondor** - Allows to execute jobs using HTCondor.
-* **Spark** - Adds support for execution of spark jobs.
+* **[Local](Local)** - Run jobs as subprocesses. Uses `bash` to run scripts. Supports launching in Docker containers.
+* **[HPC](HPC): [SunGridEngine](SGE) / [LSF](LSF) / [HTCondor](HTcondor) / [SLURM](SLURM), etc.** - Run jobs via a dispatcher. Uses `qsub`, `bsub`, etc. to run scripts. Supports launching in Docker containers.
+* **[Google Cloud](Google)** - Launch jobs on Google Compute Engine through the Google Genomics Pipelines API.
+* **[GA4GH TES](TES)** - Launch jobs on servers that support the GA4GH Task Execution Schema (TES).
+* **[Spark](Spark)** - Supports execution of Spark jobs.
+
+HPC backends are grouped under the same umbrella because they all share the same generic configuration, which can be specialized to fit the needs of a particular technology.
 
-Backends are specified in the `backend` configuration block under `providers`. Each backend has a configuration that looks like:
+Backends are specified in the `backend.providers` configuration. Each backend has a configuration that looks like:
 
 ```hocon
-backend {
-  default = "Local"
-  providers {
-    BackendName {
-      actor-factory = "FQN of BackendLifecycleActorFactory instance"
-      config {
-        key = "value"
-        key2 = "value2"
-        ...
-      }
-    }
+BackendName {
+  actor-factory = "FQN of BackendLifecycleActorFactory class"
+  config {
+    ...
   }
 }
 ```
@@ -48,62 +24,11 @@ backend {
 The structure within the `config` block will vary from one backend to another; it is the backend implementation's
 responsibility to be able to interpret its configuration.
 
-In the example below two backend types are named within the `providers` section here, so both
-are available. The default backend is specified by `backend.default` and must match the `name` of one of the
-configured backends:
-
-```hocon
-backend {
-  default = "Local"
-  providers {
-    Local {
-      actor-factory = "cromwell.backend.impl.local.LocalBackendLifecycleActorFactory"
-      config {
-        root: "cromwell-executions"
-        filesystems = {
-          local {
-            localization: [
-              "hard-link", "soft-link", "copy"
-            ]
-          }
-          gcs {
-            # References an auth scheme defined in the 'google' stanza.
-            auth = "application-default"
-          }
-        }
-      }
-    },
-    JES {
-      actor-factory = "cromwell.backend.impl.jes.JesBackendLifecycleActorFactory"
-      config {
-        project = "my-cromwell-workflows"
-        root = "gs://my-cromwell-workflows-bucket"
-        maximum-polling-interval = 600
-        dockerhub {
-          # account = ""
-          # token = ""
-        }
-        genomics {
-          # A reference to an auth defined in the 'google' stanza at the top. This auth is used to create
-          # Pipelines and manipulate auth JSONs.
-          auth = "application-default"
-          endpoint-url = "https://genomics.googleapis.com/"
-        }
-        filesystems = {
-          gcs {
-            # A reference to a potentially different auth for manipulating files via engine functions.
-            auth = "user-via-refresh"
-          }
-        }
-      }
-    }
-  ]
-}
-```
+The `providers` section can contain multiple backends, all of which will be available to Cromwell.
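+
+For example, here is a minimal sketch of a complete `backend` block declaring two providers. The provider names, `actor-factory` values and `config` contents are illustrative placeholders; note that `backend.default` must match the name of one of the configured providers:
+
+```hocon
+backend {
+  # The backend used when a workflow does not specify one.
+  default = "Local"
+  providers {
+    Local {
+      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
+      config {
+        # Backend-specific settings go here.
+      }
+    }
+    MyClusterBackend {
+      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
+      config {
+        # Backend-specific settings go here.
+      }
+    }
+  }
+}
+```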
 
 **Backend Job Limits**
 
-You can limit the number of concurrent jobs for a backend by specifying the following option in the backend's config
+All backends support limiting the number of concurrent jobs by specifying the following option in the backend's configuration
 stanza:
 
 ```
@@ -118,12 +43,11 @@ backend {
 
 **Backend Filesystems**
 
-Each backend will utilize filesystems to store the directory structure of an executed workflow. Currently, the backends and the type of filesystems that the backend use are tightly coupled. In future versions of Cromwell, they may be more loosely coupled.
-
+Each backend will utilize a filesystem to store the directory structure and results of an executed workflow.
 The backend/filesystem pairings are as follows:
 
-* [Local Backend](Local) and associated backends primarily use the [Shared Local Filesystem](SharedFilesystem).
-* [Google Backend](Google) uses the [Google Cloud Storage Filesystem](Google/#google-cloud-storage-filesystem).
+* Local, HPC and Spark backends use the [Shared Local Filesystem](SharedFilesystem).
+* The Google backend uses the [Google Cloud Storage Filesystem](Google/#google-cloud-storage-filesystem).
 
-Note that while Local, SGE, LSF, etc. backends use the local or network filesystem for the directory structure of a workflow, they are able to localize inputs
-from GCS paths if configured to use a GCS filesystem. See [Google Storage Filesystem](Google/#google-cloud-storage-filesystem) for more details.
\ No newline at end of file
+Additional filesystem capabilities can be added depending on the backend.
+For instance, an HPC backend can be configured to work with files on Google Cloud Storage. See the [HPC documentation](HPC) for more details.
\ No newline at end of file
diff --git a/docs/backends/HPC.md b/docs/backends/HPC.md
new file mode 100644
index 00000000000..0fcf4070e5a
--- /dev/null
+++ b/docs/backends/HPC.md
@@ -0,0 +1,54 @@
+Cromwell provides a generic way to configure a backend that relies on most High Performance Computing (HPC) frameworks, provided it has access to a shared filesystem.
+
+This backend needs two things: a command line to submit a job to the compute cluster, and a command line to query its status; a generic sketch of such a configuration follows the list below.
+You can find example configurations for a variety of these backends here:
+
+* [SGE](SGE)
+* [LSF](LSF)
+* [SLURM](SLURM)
+* [HTCondor](HTcondor)
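+
+As a generic sketch, such a configuration tells Cromwell which command lines to run and how to read their output. The key names below are the ones used by the framework-specific examples above; the `MyCluster` name and the `my*` commands are placeholders to replace with your cluster's tooling:
+
+```hocon
+MyCluster {
+  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
+  config {
+    # Command line used to submit the generated job script to the cluster.
+    submit = "mysubmit -N ${job_name} -d ${cwd} -o ${out} -e ${err} ${script}"
+    # Command line used to abort a previously submitted job.
+    kill = "mykill ${job_id}"
+    # Command line used to check whether a job is still alive.
+    check-alive = "mystatus ${job_id}"
+    # Regular expression extracting the job id from the submit command's output.
+    job-id-regex = "(\\d+)"
+  }
+}
+```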
+
+## FileSystems
+
+### Shared FileSystem
+HPC backends rely on being able to access and use a shared filesystem to store workflow results.
+
+Cromwell is configured with a root execution directory which is set in the configuration file under `backend.providers.<backend_name>.config.root`. This is called the `cromwell_root` and it is set to `./cromwell-executions` by default. Relative paths are interpreted as relative to the current working directory of the Cromwell process.
+
+When Cromwell runs a workflow, it first creates a directory `<cromwell_root>/<workflow_uuid>`. This is called the `workflow_root` and it is the root directory for all activity in this workflow.
+
+Each `call` has its own subdirectory located at `<workflow_root>/call-<call_name>`. This is the `<call_dir>`.
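+
+As an illustration, the on-disk layout for a run of a workflow with a single call might look like the following sketch. All names in angle brackets are placeholders, and the exact set of files in the call directory depends on the backend and Cromwell version:
+
+```
+cromwell-executions/          # <cromwell_root>, the default root
+└── <workflow_uuid>/          # <workflow_root>
+    └── call-<call_name>/     # <call_dir>
+        ├── inputs/           # input files localized for this call
+        ├── script            # the script Cromwell submits for execution
+        ├── stdout            # standard output of the job
+        └── stderr            # standard error of the job
+```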
+Any input files to a call need to be localized into the `<call_dir>/inputs` directory. There are different localization strategies that Cromwell will try until one works. Below is the default order specified in `reference.conf`, but it can be overridden:
+
+* `hard-link` - Create a hard link to the file.
+* `soft-link` - Create a symbolic link to the file. This strategy is not applicable for tasks which specify a Docker image and will be ignored.
+* `copy` - Make a copy of the file.
+
+Shared filesystem localization is defined in the `config` section of each backend. The default stanza for the Local and HPC backends looks like this:
+
+```
+filesystems {
+  local {
+    localization: [
+      "hard-link", "soft-link", "copy"
+    ]
+  }
+}
+```
+
+### Additional FileSystems
+
+HPC backends (as well as the Local backend) can be configured to interact with other types of filesystems, for example where input files are located.
+Currently the only other filesystem supported is Google Cloud Storage (GCS). See the [Google section](Google) of the documentation for information on how to configure GCS in Cromwell.
+Once you have Google authentication configured, you can simply add a `gcs` stanza in your configuration file to enable GCS:
+
+```
+backend.providers.MyHPCBackend {
+  filesystems {
+    gcs {
+      # A reference to a potentially different auth for manipulating files via engine functions.
+      auth = "application-default"
+    }
+  }
+}
+```
\ No newline at end of file
diff --git a/docs/backends/LSF.md b/docs/backends/LSF.md
new file mode 100644
index 00000000000..cc0d9b85605
--- /dev/null
+++ b/docs/backends/LSF.md
@@ -0,0 +1,15 @@
+The following configuration can be used as a base to allow Cromwell to interact with an [LSF](https://en.wikipedia.org/wiki/Platform_LSF) cluster and dispatch jobs to it:
+
+```hocon
+LSF {
+  actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
+  config {
+    submit = "bsub -J ${job_name} -cwd ${cwd} -o ${out} -e ${err} /bin/bash ${script}"
+    kill = "bkill ${job_id}"
+    check-alive = "bjobs ${job_id}"
+    job-id-regex = "Job <(\\d+)>.*"
+  }
+}
+```
+
+For information on how to further configure it, take a look at the [Getting Started on HPC Clusters](../tutorials/HPCIntro) tutorial.
diff --git a/docs/backends/Local.md b/docs/backends/Local.md
index d42cb07ea34..f9824e368e2 100644
--- a/docs/backends/Local.md
+++ b/docs/backends/Local.md
@@ -1,61 +1,14 @@
-_For the Doc-A-Thon_
-**Questions to answer and things to consider:**
-
-1. Who is visiting the Local page?
-*This is the first in the list of Backends*
-2. What do they need to know first?
-
-3. Is all the important information there? If not, add it!
-*What is an rc file? Write out the full name with the abbreviation, Return Code (rc) file, then abbreviate after.*
-4. Are there things that don't need to be there? Remove them.
-
-5. Are the code and instructions accurate? Try it!
-
----
- **DELETE ABOVE ONCE COMPLETE**
----
-
-
 **Local Backend**
 
-The local backend will simply launch a subprocess for each task invocation and wait for it to produce its rc file.
-
-This backend creates three files in the `<call_dir>` (see previous section):
-
-* `script` - A shell script of the job to be run. This contains the user's command from the `command` section of the WDL code.
-* `stdout` - The standard output of the process
-* `stderr` - The standard error of the process
-
-The `script` file contains:
-
-```
-#!/bin/sh
-cd <container_call_root>
-<user_command>
-echo $? > rc
-```
-
-`<container_call_root>` would be equal to `<call_dir>` for non-Docker jobs, or it would be under `/cromwell-executions/<workflow_uuid>/call-<call_name>` if this is running in a Docker container.
-
-When running without docker, the subprocess command that the local backend will launch is:
-
-```
-/bin/bash