set default GPU to K80->P4 in BEAST task; parameterize beagle_order #442

Merged: 9 commits, Dec 21, 2022
31 changes: 25 additions & 6 deletions pipes/WDL/tasks/tasks_interhost.wdl
@@ -96,6 +96,7 @@ task multi_align_mafft {
task beast {
input {
File beauti_xml
String beagle_order="1,2,3,4"

String? accelerator_type
Int? accelerator_count
@@ -106,6 +107,21 @@ task beast {
String docker = "quay.io/broadinstitute/beast-beagle-cuda:1.10.5pre"
}

meta {
description: "Execute GPU-accelerated BEAST. For tips on performance, see https://beast.community/performance#gpu"
}
parameter_meta {
beagle_order: {
description: "The order of CPU(0) and GPU(1+) resources used to process partitioned data."
}
accelerator_type: {
description: "The model of GPU to use. For availability and pricing on GCP, see https://cloud.google.com/compute/gpus-pricing#gpus"
}
accelerator_count: {
description: "The number of GPUs of the specified type to use."
}
}

Int disk_size = 300
Int boot_disk = 50
Int disk_size_az = disk_size + boot_disk
@@ -119,10 +135,13 @@ task beast {
bash -c "sleep 60; nvidia-smi" &
beast \
-beagle_multipartition off \
-beagle_GPU -beagle_cuda -beagle_SSE \
-beagle_double -beagle_scaling always \
-beagle_order 1,2,3,4 \
${beauti_xml}
-beagle_GPU \
-beagle_cuda \
-beagle_SSE \
-beagle_double \
Review comment (Member), on the -beagle_double line:
One request -- can we parameterize with some boolean beagle_double=true that turns into either -beagle_double or -beagle_single on the command line? In >90% of the xmls I've run, we need double precision to converge well, but every once in a while, single-precision is enough and the speed gains are huge. The best performance/$ is actually on a T4 but only if you're running single-precision.
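
A minimal sketch of that parameterization (not part of this diff; the beagle_double input name and its default are assumptions):

    # in the input block:
    Boolean beagle_double = true   # hypothetical: double precision by default

    # in the command block, replacing the fixed -beagle_double flag:
    ~{true='-beagle_double' false='-beagle_single' beagle_double} \

WDL's true=/false= placeholder options expand the Boolean into the appropriate flag, so running single precision (e.g. on a T4) would only require setting beagle_double=false in the inputs.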

-beagle_scaling always \
~{'-beagle_order ' + beagle_order} \
~{beauti_xml}
}

output {
@@ -143,9 +162,9 @@ task beast {
gpu: true # dxWDL
dx_timeout: "40H" # dxWDL
dx_instance_type: "mem1_ssd1_gpu2_x8" # dxWDL
acceleratorType: select_first([accelerator_type, "nvidia-tesla-k80"]) # GCP PAPIv2
acceleratorType: select_first([accelerator_type, "nvidia-tesla-p4"]) # GCP PAPIv2
acceleratorCount: select_first([accelerator_count, 4]) # GCP PAPIv2
gpuType: select_first([gpu_type, "nvidia-tesla-k80"]) # Terra
gpuType: select_first([gpu_type, "nvidia-tesla-p4"]) # Terra
gpuCount: select_first([gpu_count, 4]) # Terra
nvidiaDriverVersion: "410.79"
}
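
For reference (not part of the diff): with these defaults, a caller only overrides the optional inputs to pick a different GPU. A hypothetical call from a workflow that imports this file as interhost might look like:

    call interhost.beast {
        input:
            beauti_xml        = beauti_xml,
            beagle_order      = "1,2",              # two resources, matching two GPUs
            accelerator_type  = "nvidia-tesla-t4",  # GCP PAPIv2 override (assumes T4 availability in the region)
            accelerator_count = 2,
            gpu_type          = "nvidia-tesla-t4",  # Terra override
            gpu_count         = 2
    }

Leaving these inputs unset falls back to the new defaults of four nvidia-tesla-p4 GPUs.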