PyMAPDL and PyHPS #2865

Open
germa89 opened this issue Mar 8, 2024 · 1 comment
germa89 commented Mar 8, 2024

Disclaimer:

I'm going to brainstorm in this issue, so do not expect a logical sequence of phrases or ideas. I just want to summarise the current situation and the future goals from my point of view and understanding.

Feel free to contribute through the comments. The main body of this issue might be updated without notice.

Considerations
In this issue, I am not:

  • Considering interaction with other PyAnsys products.

Objective

It is becoming a high priority to enable PyMAPDL to work on HPC clusters.

Scenarios

We are currently considering the following scenarios:

Scenario A: Scheduler > Python[HPC] > PyMAPDL Client[HPC] > PyMAPDL Server (MAPDL)[HPC]

Description

Run a batch job using scheduler commands to submit a Python script.
This Python script should be a normal PyMAPDL script, meaning it should also run in a normal PC setup.

In this case, everything happens on the HPC side: Python starts, runs PyMAPDL, and PyMAPDL launches MAPDL.
MAPDL should take advantage of the HPC environment and use as many cores and as much memory as specified in the job. Hence, the HPC scheduler configuration should be available to MAPDL, possibly passed through PyMAPDL.

This does require a Python venv accessible from the compute nodes so that PyMAPDL can be used.
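
A minimal sketch of what such a script could look like, assuming SLURM is the scheduler and the allocated cores are exposed through the SLURM_CPUS_ON_NODE environment variable (the core count is passed to launch_mapdl through its nproc argument):

import os

from ansys.mapdl.core import launch_mapdl

# SLURM exposes the allocated resources through environment variables;
# fall back to 2 cores when running outside an HPC context.
n_cpus = int(os.environ.get("SLURM_CPUS_ON_NODE", 2))

mapdl = launch_mapdl(nproc=n_cpus)

mapdl.prep7()
# ... rest of the normal PyMAPDL workflow ...
mapdl.exit()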

Variations according to the number of MAPDL instances

There are two possible variations according to the number of MAPDL instances:


One MAPDL instance (Scenario A-Single).

In this case, it makes sense to dedicate all the job resources to this single instance.

Multiple MAPDL instances (Scenario A-Multi).

In this case, resources need to be shared among all the MAPDL instances. Some schedulers (such as SLURM) support suballocation or job steps, where the main job is subdivided into smaller steps that still share the job's allocation. Another option is job arrays, but they seem focused on independent jobs, which is not our case, since we need a Python process to coordinate the MAPDL instances.
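
As a rough illustration of the job-step idea, the coordinating Python process could start each MAPDL instance as its own srun step inside the allocation. This is only a sketch: the MAPDL executable path and the port handling are assumptions, and this is exactly the plumbing that launch_mapdl/MapdlPool should eventually hide.

import subprocess

N_INSTANCES = 2
CPUS_PER_INSTANCE = 4
MAPDL_EXEC = "/ansys_inc/v241/ansys/bin/ansys241"  # assumed install path

# Each MAPDL instance becomes a SLURM job step limited to its share of the job.
steps = []
for i in range(N_INSTANCES):
    port = 50052 + i  # one gRPC port per instance
    steps.append(
        subprocess.Popen(
            [
                "srun",
                f"--ntasks={CPUS_PER_INSTANCE}",
                "--exact",
                MAPDL_EXEC,
                "-grpc",
                "-port",
                str(port),
            ]
        )
    )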

Variations according to the entrypoint

There are also other variations depending on the "entrypoint":

Using the scheduler (Scenario A-scheduler entrypoint)

Using the scheduler command (such as sbatch and srun for the SLURM scheduler).

sbatch --nodes=1 my_shell_script.sh

or

sbatch --nodes=1 --wrap "python my_python_script.py"
Using a custom CLI (Scenario A-custom entrypoint)

In this case, PyMAPDL (or PyHPS) should offer a CLI entrypoint to submit jobs.

pymapdl hpc start my_python_script

Presumably, there is no conflict between the "number of MAPDL instances" and "entrypoint" dimensions, meaning we will have four variations in total for Scenario A.

Example

Command

sbatch --wrap "python my_script.py"

Python script

from ansys.mapdl.core import launch_mapdl

mapdl = launch_mapdl()

mapdl.prep7()
...
mapdl.exit()

Solution

One MAPDL instance. Scenario A-Single

It seems that MPI can take care of getting the scheduler configuration. This is called "tight integration".

This can be configured by passing the I_MPI_HYDRA_BOOTSTRAP environment variable to MAPDL (see: https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-linux/2021-8/hydra-environment-variables.html#GUID-ED610469-4BCA-4ADF-8FC5-754CA80404D3).

As discussed with @Buzz1167, the number of cores should still be passed explicitly. If more than one machine is used, the ANS_MULTIPLE_NODES environment variable should also be set.

This approach partially deprecates #2754, although not completely, because we still need to pass the number of CPUs set by the scheduler.
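
A minimal sketch of how this could look from the PyMAPDL side, assuming launch_mapdl keeps accepting the core count through nproc and extra environment variables through add_env_vars:

import os

from ansys.mapdl.core import launch_mapdl

# Let Intel MPI bootstrap itself from the SLURM allocation ("tight integration").
extra_env = {"I_MPI_HYDRA_BOOTSTRAP": "slurm"}

# ANS_MULTIPLE_NODES is only needed when the job spans more than one machine.
if int(os.environ.get("SLURM_JOB_NUM_NODES", 1)) > 1:
    extra_env["ANS_MULTIPLE_NODES"] = "1"

mapdl = launch_mapdl(
    nproc=int(os.environ.get("SLURM_NTASKS", 2)),  # cores still passed explicitly
    add_env_vars=extra_env,
)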

Multiple MAPDL instances. Scenario A-Multi

This solution might require manually sharing the job resources across the MAPDL instances.
In this context, #2754 might be useful, because we can use the information gathered there to launch the n MAPDL instances.

How should those resources be split? One option is by the number of cores: simply dividing the job CPUs among the MAPDL instances. This is easy, but it might create MAPDL instances that span different machines. We should probably go for this option.

The whole multi-instance case should be addressed in the MapdlPool object, keeping the standard launch_mapdl for the single-instance approach. We should emit a warning when multiple MAPDL instances are spawned with launch_mapdl in HPC environments.
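
A sketch of what the MapdlPool-based split could look like, assuming MapdlPool forwards extra keyword arguments such as nproc to launch_mapdl:

import os

from ansys.mapdl.core import MapdlPool

N_INSTANCES = 4

# Divide the job CPUs evenly among the MAPDL instances.
total_cpus = int(os.environ.get("SLURM_NTASKS", os.cpu_count()))
cpus_per_instance = max(total_cpus // N_INSTANCES, 1)

pool = MapdlPool(N_INSTANCES, nproc=cpus_per_instance)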

Scheduler entrypoint. Scenario A-Scheduler entrypoint

Nothing to do here, since the user will manage the job configuration through the scheduler CLI.

Custom entrypoint. Scenario A-Custom entrypoint

Many users will expect some kind of wrapper on top of the scheduler CLI that allows them to submit jobs to the HPC cluster.

Should the CLI be provided by PyMAPDL or PyHPS? It probably makes more sense for PyHPS to provide it.
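
To make the idea concrete, here is a hypothetical sketch (using click and wrapping sbatch) of what such an entrypoint could do, regardless of which package ends up owning it:

import subprocess

import click


@click.command()
@click.argument("script")
@click.option("--nodes", default=1, help="Number of nodes to request.")
@click.option("--cpus", default=4, help="Total number of CPUs to request.")
def submit(script, nodes, cpus):
    """Submit a PyMAPDL Python script as a SLURM batch job."""
    cmd = [
        "sbatch",
        f"--nodes={nodes}",
        f"--ntasks={cpus}",
        "--wrap",
        f"python {script}",
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    submit()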

Scenario B: Python[VDI] > PyMAPDL Client [VDI] > Scheduler > PyMAPDL Server (MAPDL)[HPC]

Description

This scenario is based on PyMAPDL running on a VDI machine or another "smaller" machine. This PyMAPDL client connects to and interfaces with the multiple MAPDL instances running on the HPC cluster.

If PyMAPDL is not launching the MAPDL instances, MapdlPool is ready to handle remote MAPDL instances (see #2862).
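
For example, connecting to instances that are already running on the cluster is just a matter of pointing PyMAPDL at them (the hostnames and ports below are placeholders):

from ansys.mapdl.core import launch_mapdl

# Placeholder addresses for MAPDL instances already started on the cluster.
instances = [("node001", 50052), ("node002", 50052)]

mapdls = [
    launch_mapdl(start_instance=False, ip=ip, port=port)
    for ip, port in instances
]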

For launching MAPDL instances, things are a bit more difficult.

Should PyMAPDL launch the MAPDL instances? Yes. This is where PyHPS comes into play: it should allow us to easily launch, control, and manage remote MAPDL instances in an HPC cluster.

I believe that in this case it should not matter whether there is one instance or several. The plumbing should be similar, and the resource split should be done at a higher level.

Does it matter whether PyMAPDL is running in batch or interactive mode? From the HPC perspective, it does not, so for the moment we can ignore the distinction.

Example

Console command:

python my_script.py

Python script

from ansys.mapdl.core.hpc import HPCPool

pool = HPCPool(  # PyHPS under the hood
    n_workers=10,
    n_cores=2,        # per worker
    memory=2 * 1024,  # MB, per worker
    ...
)

for worker in pool:
    worker.prep7()
    ...

In this case, we will have to use MapdlPool too, so we can manage all the instances and work with them without waiting for any one in particular.

Solution

PyHPS and the pool module should be leveraged here. We need to couple PyHPS with PyMAPDL so that PyMAPDL can launch (and close) its own MAPDL instances.

Scenario C: Python[VDI] > PyMAPDL Client[VDI] > PyMAPDL Server (MAPDL)[VDI]

This is the default PyMAPDL behaviour where everything is local. There is nothing to add here.

Current PyMAPDL Situation

When you submit an HPC job (using SLURM, for instance), the configuration of the batch job is not used by PyMAPDL, and hence, I presume, it is not passed to MAPDL, so we cannot take advantage of the HPC cluster resources allocated to the job.

Roadmap

  1. Address Scenario A for one MAPDL instance.
  2. Implement HPC context detection: PyMAPDL should be aware of whether it is running on an HPC cluster (see the sketch after this list).
  3. Implement a warning when several launch_mapdl calls are made in an HPC context, and recommend using MapdlPool.
  4. If PyMAPDL is running on an HPC cluster, turn off plotting (however, MAPDL internal plots and savefig should still be allowed).
  5. Check whether we can address Scenario A-Multi, splitting resources in the simplest possible way.
  6. Implement Scenario B.
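
For item 2, a minimal sketch of the kind of detection that could be used (checking scheduler environment variables such as SLURM_JOB_ID or PBS_JOBID):

import os


def running_on_hpc() -> bool:
    """Return True when the process appears to be running inside an HPC job."""
    return any(var in os.environ for var in ("SLURM_JOB_ID", "PBS_JOBID"))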

References:

@germa89 germa89 self-assigned this Mar 8, 2024
@germa89 germa89 changed the title from "PyMAPDL and HPC clusters" to "PyMAPDL and PyHPS" Oct 7, 2024

germa89 commented Oct 7, 2024

See #3467 for a more SLURM-based approach.
