
Sherpa-related workflows get stuck due to a problem with opening an OpenMPI session #45165

Open
ArturAkh opened this issue Jun 7, 2024 · 11 comments

ArturAkh commented Jun 7, 2024

Dear all,

At KIT, we have been seeing problems with Sherpa-related workflows on our opportunistic resources (KIT-HOREKA), e.g.

data.RequestName = cmsunified_task_SMP-RunIISummer20UL18GEN-00048__v1_T_240312_112234_8747

The jobs seem to hang with CPU usage at 0%, leading to very low efficiency (below 20%) on the HoreKa resources:

https://grafana-sdm.scc.kit.edu/d/qn-VJhR4k/lrms-monitoring?orgId=1&refresh=15m&var-pool=GridKa+Opportunistic&var-schedd=total&var-location=horeka&viewPanel=98&from=1717406527904&to=1717579327904

After some investigation of the situation, we have figured out the following:

  • Logs from CMSSW are empty when connecting to the jobs themselves via HTCondor.
  • This is because CMSSW calls an external process (e.g. cmsExternalGenerator extGen777_0 777_0), which hangs and was identified to be a Sherpa process.
  • When trying to run the configuration ourselves on a machine we have control of, we see the following errors:
A call to mkdir was unable to create the desired directory:

  Directory: /tmp/openmpi-sessions-12009@bms1_0/52106
  Error:     No space left on device

So the entire process is unable to open an OpenMPI session. Even more problematic, the job does not fail properly but keeps hanging (i.e. it continues to run with 0% efficiency). When running locally, we often see this message in the logs:

Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

According to our local physics group, which has some experience with running Sherpa, this is a known problem.

Resetting the $TMPDIR variable to a different location allowed us to make the process work properly when running it manually. We are not sure, though, whether this is the right action to take for all worker nodes of an entire (sub)site...
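
For illustration, the check we do by hand before launching looks roughly like the standalone sketch below. This is not CMSSW code; the 100 MB threshold and the fallback path /scratch/local are made-up example values.

#include <cstdlib>
#include <filesystem>
#include <iostream>
#include <system_error>

// Standalone sketch: verify that $TMPDIR can host an OpenMPI session
// directory and fall back to another location before starting Sherpa.
int main() {
  namespace fs = std::filesystem;
  const char *tmpdir = std::getenv("TMPDIR");
  fs::path candidate = tmpdir ? fs::path(tmpdir) : fs::path("/tmp");

  std::error_code ec;
  const fs::space_info info = fs::space(candidate, ec);
  // "No space left on device" can also mean the filesystem ran out of
  // inodes, so a free-byte check like this is only a heuristic.
  if (ec || info.available < 100ull * 1024 * 1024) {
    std::cerr << "TMPDIR " << candidate << " looks unusable, falling back\n";
    setenv("TMPDIR", "/scratch/local", 1);  // site-specific fallback (example only)
  }
  return 0;
}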

We would like to know how to resolve this issue, and whether something needs to be done about the OpenMPI libraries in the CMSSW software stack.

Best regards,

Artur Gottmann

cmsbuild commented Jun 7, 2024

cms-bot internal usage

cmsbuild commented Jun 7, 2024

A new Issue was created by @ArturAkh.

@Dr15Jones, @antoniovilela, @makortel, @sextonkennedy, @rappoccio, @smuzaffar can you please review it and eventually sign/assign? Thanks.

cms-bot commands are listed here

@Dr15Jones

assign generator

@Dr15Jones

assign generators

cmsbuild commented Jun 7, 2024

New categories assigned: generators

@alberto-sanchez,@bbilin,@GurpreetSinghChahal,@mkirsano,@menglu21,@SiewYan you have been requested to review this Pull request/Issue and eventually sign? Thanks

@makortel

Ping @cms-sw/generators-l2

lviliani commented Sep 6, 2024

@shimashimarin did we observe this also in our recent tests?

@shimashimarin

Sorry for the late reply. I usually test the Sherpa processes locally or via private production. I have not encountered such an issue.

However, I noticed that OpenMPI is used here. The MPI parallelization is mainly used to speed up the integration step, i.e. Sherpack generation. Parallelization of event generation can simply be done by starting multiple instances of Sherpa.

Therefore, I think it is not necessary to use OpenMPI sessions for Sherpa event generation. Maybe we can test the Sherpa event production without using OpenMPI?

makortel commented Nov 6, 2024

Maybe we can test the Sherpa event production without using OpenMPI?

Just to note that avoiding OpenMPI from Sherpa in CMSSW would also avoid this thread-unsafe workaround:

int Unzip(std::string infile, std::string outfile) {
  /////////////////////////////////////////////
  /////////////// BUG FIX FOR MPI /////////////
  /////////////////////////////////////////////
  const char *tmpdir = std::getenv("TMPDIR");
  if (tmpdir && (strlen(tmpdir) > 50)) {
    setenv("TMPDIR", "/tmp", true);
  }
  // ...

(reported in #46002 (comment))
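
If the workaround were kept rather than removed, it could at least be restricted to a single invocation; a rough sketch of that idea (not a proposal for the actual code) is below. Note that std::call_once only de-duplicates the setenv() call: the environment write is still only safe if no other thread reads the environment concurrently, which is why dropping the MPI dependency altogether remains the cleaner option.

#include <cstdlib>
#include <cstring>
#include <mutex>

// Sketch only: perform the TMPDIR shortening exactly once instead of on
// every call to Unzip().
void shortenTmpdirOnce() {
  static std::once_flag flag;
  std::call_once(flag, [] {
    const char *tmpdir = std::getenv("TMPDIR");
    // 50-character limit taken from the existing workaround above.
    if (tmpdir && std::strlen(tmpdir) > 50) {
      setenv("TMPDIR", "/tmp", 1);
    }
  });
}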

makortel commented Jan 7, 2025

Is anyone looking into avoiding the use of OpenMPI from Sherpa during event production?

@shimashimarin

Hi @makortel, I haven't found time to work on it yet, but it seems that we can disable OpenMPI in SherpaHadronizer.cc. I will do some tests and let you know.
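
As a very rough illustration of that idea (a hypothetical sketch, not the actual SherpaHadronizer.cc change; the enableMPI flag and the class name are invented for the example), the MPI setup could be made optional along these lines:

#include <mpi.h>

// Hypothetical hadronizer-like class: only touch MPI when explicitly
// requested (e.g. for integration / Sherpack production), so plain event
// generation never opens an OpenMPI session directory under $TMPDIR.
class SherpaLikeHadronizer {
public:
  explicit SherpaLikeHadronizer(bool enableMPI) : useMPI_(enableMPI) {
    if (useMPI_) {
      int alreadyUp = 0;
      MPI_Initialized(&alreadyUp);
      if (!alreadyUp) {
        MPI_Init(nullptr, nullptr);  // MPI-2 allows null argc/argv
        ownsMPI_ = true;
      }
    }
    // ... set up Sherpa itself here ...
  }

  ~SherpaLikeHadronizer() {
    if (ownsMPI_) {
      int finalized = 0;
      MPI_Finalized(&finalized);
      if (!finalized) MPI_Finalize();
    }
  }

private:
  bool useMPI_ = false;
  bool ownsMPI_ = false;
};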
