Hi, thanks for the nice pipeline!
I tried to set up the workflow on a big server. I managed to use mamba to set up the environment, and the test code ran smoothly: sh setup_test.sh. Then I got stuck when running this command: snakemake --use-conda --cores $Ncores --configfile config/config.yml
I modified it to use another profile: snakemake --profile ~/profiles/server-slurm --use-conda --cores 1 --configfile config/config.yml
Because the cluster nodes have no internet access, I can only run it on a login node with 1 core. Here are the profile settings:
default-resources: mem_mb=6000
cluster: "submit.py"
cluster-status: "check_status.sh"
cluster-cancel: "scancel"
max-jobs-per-second: 5
max-status-checks-per-second: 0.2
local-cores: 1
latency-wait: 120
jobs: 50
keep-going: True
rerun-incomplete: True
printshellcmds: True
Here is the log:
Workflow defines that rule bwaindex is eligible for caching between workflows (use the --cache argument to enable this).
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cluster nodes: 50
Job stats:
job                                    count    min threads    max threads
bin_pairs_library                          8              1              1
chunk_fastq                                4              1              1
default                                    1              1              1
download_fastqs                            1              1              1
map_chunks_bwa_pipe                        4              1              1
merge_dedup                                4              1              1
merge_runs                                 5              1              1
merge_stats_libraries_into_groups          3              1              1
merge_zoom_library_group_coolers           6              1              1
parse_sort_chunks                          4              1              1
scaling_pairs_library                      4              1              1
zoom_library                               8              1              1
total                                     52              1              1
Select jobs to execute...
WorkflowError:
Error grouping resources in group '744b017c-b3d8-4ed6-8e0a-ee3979e6939e': Not enough resources were provided. This error is typically caused by a Pipe group requiring too many resources. Note that resources are summed across every member of the pipe group, except for ['runtime'], which is calculated via max(). Excess Resources:
_cores: 2/1
Do you have any ideas on how to solve this? Big thanks!
Best,
Menghan
The error occurs because Snakemake is trying to allocate more cores than are available: the log shows a group of piped jobs that needs 2 cores in total (_cores: 2/1) while only 1 is provided, since resources are summed across the members of a pipe group. A few things to try:
Limit cores per rule: ensure each rule requests only the cores that are actually available (1 in your case), e.g. by setting threads: 1 (or an explicit cores resource) on the rules in the Snakefile; see the sketch below.
Adjust the profile: modify the server-slurm profile so that it requests 1 core by default.
Check resource requests: ensure no rule is implicitly requesting more than 1 core.
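For illustration, here is a minimal sketch of that per-rule cap. The rule name, file names, and shell command below are hypothetical and only stand in for the rules of this pipeline; the point is the threads/resources pattern:

rule example_rule:                      # hypothetical rule, shown only as a pattern
    input:
        "data/{sample}.fastq"
    output:
        "results/{sample}.bam"
    threads: 1                          # never request more than the single available core
    resources:
        mem_mb=6000                     # matches the profile's default-resources
    shell:
        "some_command --threads {threads} {input} > {output}"

Alternatively, the same limits can be placed once in the server-slurm profile's config.yaml instead of in every rule, since profile entries map to command-line options (an entry such as cores: 1 is equivalent to passing --cores 1).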