
Attempts to add Intel MPI support #57

Merged: 5 commits from feature/mathomp4/add-impi-support-ldas into develop on Oct 22, 2024

Conversation

mathomp4
Member

@mathomp4 mathomp4 commented Oct 2, 2024

This is a draft PR to add Intel MPI support to GEOSldas. Eventually on SLES15, we'll want to use Intel MPI, and GEOSldas was sort of "hardcoded" for Open MPI. I think I did this right, but @weiyuan-jiang and @biljanaorescanin might want to stare at it. The LDAS scripting is different for sure!
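To illustrate the kind of scripting difference at play here, a minimal sketch of how a setup script might branch on the MPI stack (this is not the actual GEOSldas setup code; the function name, stack labels, and launcher flags are illustrative assumptions):

```shell
#!/bin/sh
# Hedged sketch: choose an MPI launch command based on which stack
# the build used. Names and flags here are assumptions for illustration,
# not the real GEOSldas scripting.
choose_launch_cmd() {
  stack="$1"    # e.g. "openmpi" or "intelmpi"
  ntasks="$2"   # number of MPI tasks
  case "$stack" in
    openmpi)  echo "mpirun -np $ntasks" ;;   # Open MPI launcher syntax
    intelmpi) echo "mpiexec -n $ntasks" ;;   # Intel MPI (Hydra) launcher
    *)        echo "mpiexec -n $ntasks" ;;   # portable fallback
  esac
}

choose_launch_cmd openmpi 96    # prints: mpirun -np 96
choose_launch_cmd intelmpi 96   # prints: mpiexec -n 96
```

The point of a branch like this is that a script "hardcoded" for Open MPI would bake in only the first case, which is roughly the situation this PR addresses.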

@mathomp4 mathomp4 added the 0-diff label Oct 2, 2024
@mathomp4 mathomp4 self-assigned this Oct 2, 2024
Collaborator

@gmao-rreichle gmao-rreichle left a comment


Looks ok to me, but I can't say if it'll work. @biljanaorescanin's testing will tell.

@mathomp4
Member Author

mathomp4 commented Oct 2, 2024

Note that, hopefully, when SLES12 goes away this can be simplified, since we won't need the "built on SLES15" jazz.

@biljanaorescanin
Collaborator

All regression tests passed.

@gmao-rreichle
Collaborator

> All regression tests passed.

Thanks, @biljanaorescanin. Was the new functionality (i.e., use of IntelMPI) exercised to see if it works properly?

@biljanaorescanin
Collaborator

@gmao-rreichle I don't know. @mathomp4, did we?
I realize this is all in the setup scripts, so I'm not sure.

@gmao-rreichle
Collaborator

> this is all in setup scripts

@biljanaorescanin, maybe you're confusing OpenMP (`!$OMP`) with MPI? We use MPI throughout the model and assimilation elements of GEOSldas.x

@mathomp4
Member Author

mathomp4 commented Oct 3, 2024

> All regression tests passed.
>
> Thanks, @biljanaorescanin. Was the new functionality (i.e., use of IntelMPI) exercised to see if it works properly?

@gmao-rreichle The answer is no. The new functionality needs a new g5_modules. I mainly wanted @biljanaorescanin to test to make sure I didn't break the current functionality with Open MPI.

If you'd like to test the Intel MPI bits, I can make a branch of GEOSldas with a new ESMA_env for us to test with.

@gmao-rreichle
Collaborator

Thanks, @mathomp4. I think we should test the new functionality before we merge the PR. Is there anything in the "new ESMA_env" that prevents us from adopting it now (provided it works and tests 0-diff)? Maybe the "new ESMA_env" is not yet a release?

@mathomp4
Member Author

mathomp4 commented Oct 3, 2024

> Thanks, @mathomp4. I think we should test the new functionality before we merge the PR. Is there anything in the "new ESMA_env" that prevents us from adopting it now (provided it works and tests 0-diff)? Maybe the "new ESMA_env" is not yet a release?

Well, my forward-thinking move would be non-zero-diff (it's a move to Intel 2021.13, which has been NZD for the GCM).

But, I can make up a test g5_modules that should be zero-diff. (New MPI, old compiler.) Let me try that...
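A "new MPI, old compiler" g5_modules might look something like this fragment (hypothetical module names, not the actual NCCS modules; the real g5_modules uses csh syntax):

```shell
# Hypothetical module selection: keep the existing Intel compiler so
# results stay zero-diff, but swap the MPI stack from Open MPI to Intel MPI.
module purge
module load comp/intel/2021.6.0   # old compiler, unchanged (zero-diff)
module load mpi/impi/2021.6.0     # new: Intel MPI instead of Open MPI
```

The idea is that keeping the compiler fixed isolates the MPI swap, so any answer changes in testing can be attributed to the MPI stack alone.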

@mathomp4
Member Author

mathomp4 commented Oct 3, 2024

Okay. Here is a PR that just moves the MPI stack:

GEOS-ESM/GEOSldas#780

Eventually we'll move to Intel MPI with Intel 2021.13, but as I said, that has been NZD with the GCM (so we are folding it into an NZD change or into GEOSgcm v12).

@gmao-rreichle
Collaborator

Thanks, @mathomp4! @biljanaorescanin, when you get a chance, please test this PR via GEOS-ESM/GEOSldas#780
Assuming the test works, please make note of the run times. It's important to know if IntelMPI is noticeably faster or slower than OpenMPI for GEOSldas.
@mathomp4: I'm open to a non-zero-diff change if you think GEOSldas could benefit from Intel 2021.13.

@mathomp4
Member Author

mathomp4 commented Oct 3, 2024

> Thanks, @mathomp4! @biljanaorescanin, when you get a chance, please test this PR via GEOS-ESM/GEOSldas#780
> Assuming the test works, please make note of the run times. It's important to know if IntelMPI is noticeably faster or slower than OpenMPI for GEOSldas.
> @mathomp4: I'm open to a non-zero-diff change if you think GEOSldas could benefit from Intel 2021.13.

@gmao-rreichle Well, we need to move to it eventually for MAPL3 development. Older Intel compilers just have a bug with something in MAPL3. But I figure since a non-zero-diff GOCART or Physics or something is coming soon, we can just "hide" the change there. Once we know Intel MPI works, I can then whip up a test PR for Intel 2021.13 and we can see how different it is!

@gmao-rreichle gmao-rreichle merged commit 2bbd3a0 into develop Oct 22, 2024
9 checks passed
@gmao-rreichle gmao-rreichle deleted the feature/mathomp4/add-impi-support-ldas branch October 22, 2024 13:06