When we use a non-zero minimum in the cluster config for a compute resource, those nodes come alive at cluster launch, and then this job-related check (1click-hpc/modules/40.install.monitoring.compute.sh, line 59 at 7a833d4) will never have a value of True:
if [[ $job_comment == *"Key=Monitoring,Value=ON"* ]]; then
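For reference, the way a user would opt in, I believe, is by putting that key/value string in the job comment at submission time (the job script name here is just a placeholder):

```bash
# hypothetical submission carrying the string the check above looks for
sbatch --comment "Key=Monitoring,Value=ON" my_job.sh
```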
Because this must run in the root context, the only chance to do it is in a prolog script that attaches it to a job. So basically the plan would be to:
install the docker container anyway in post-install but do not start it
use prolog and epilog to start and stop the container, depending on the user's choice to monitor or not (rough sketch below)
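Roughly, the prolog/epilog pair I have in mind would look like this. The container name (monitoring) and the gating variable are placeholders for whatever signal we settle on below:

```bash
#!/bin/bash
# Prolog sketch: runs as root on the compute node before the job starts.
# MONITORING_REQUESTED is a placeholder for the signal discussed below.
if [[ "${MONITORING_REQUESTED}" == "ON" ]]; then
    docker start monitoring    # container installed, but not started, by post-install
fi
exit 0
```

```bash
#!/bin/bash
# Epilog sketch: runs as root after the job ends.
docker stop monitoring 2>/dev/null    # harmless if the container was never started
exit 0
```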
The problem is how to send a signal about the job to the prolog and epilog, since custom user environment variables are not passed to them and the job comment is not available there either. Per the Slurm manuals we should not run scontrol from the prolog, because it would impair job scaling in the same way the API calls do (this is related to #34).
Looking at the variables available at prolog/epilog time, I only have 2 ideas so far:
SLURM_PRIO_PROCESS, the scheduling priority (nice value) at the time of submission, available in SrunProlog, TaskProlog, SrunEpilog and TaskEpilog. We can set #SBATCH --nice 0 or some other sensible value to uniquely identify the intention, then use TaskProlog and TaskEpilog to start/stop the monitoring container
use a crafted Slurm job name like [GM] my job name, then pick this up and interpret it from SLURM_JOB_NAME, the name of the job, available in PrologSlurmctld, SrunProlog, TaskProlog, EpilogSlurmctld, SrunEpilog and TaskEpilog. This also means using TaskProlog and TaskEpilog to start/stop the monitoring container (see the sketch below)
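To make both ideas concrete, a TaskProlog along these lines could do the detection. The nice value, the [GM] prefix and the container name are only placeholders for whatever convention we agree on, and this assumes the TaskProlog context is allowed to start the container:

```bash
#!/bin/bash
# TaskProlog sketch: per the slurm docs both variables are exported in this context.
if [[ "${SLURM_PRIO_PROCESS}" == "5" ]] || [[ "${SLURM_JOB_NAME}" == "[GM]"* ]]; then
    # idea 1: the agreed-upon nice value from the submission (as proposed above)
    # idea 2: the job name carries the agreed prefix, e.g. sbatch -J "[GM] my job" ...
    docker start monitoring    # placeholder container name
fi
exit 0
```

The matching TaskEpilog would simply run docker stop on the same container.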
The reason is that the stack name is also unique even if the cluster is recycled, and we can use SLURM_CLUSTER_NAME (the name of the cluster executing the job) inside the prolog scripts, thus avoiding undesired calls to scontrol.
I will eventually submit a PR; for now I am still brainstorming to find the best idea to make this work.
(Hm, the current value of SLURM_CLUSTER_NAME is always parallelcluster, so we might need to change it to parallelcluster-stack_name or something similar. I see this is set inside slurm.conf, therefore we would need to modify it during post-install on the head node, and on the compute nodes in the case of static nodes, I suppose...)
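A rough sketch of that change, assuming the post-install script has the stack name available in a ${stack_name} variable and that slurm.conf lives in the usual ParallelCluster location:

```bash
# post-install sketch for the head node (and static compute nodes); paths are assumptions
CONF=/opt/slurm/etc/slurm.conf
sed -i "s/^ClusterName=.*/ClusterName=parallelcluster-${stack_name}/" "${CONF}"
# the daemons need a restart to pick up the new name
# (slurmctld on the head node, slurmd on the compute nodes)
# note: slurmctld also stores the cluster name in its StateSaveLocation,
# which may need clearing if the name changes after first start
systemctl restart slurmctld 2>/dev/null || systemctl restart slurmd
```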