mpicc cleanup #4

Merged: 7 commits, Nov 29, 2022
configure.ac: 3 additions & 11 deletions

```diff
@@ -29,16 +29,6 @@ AC_REQUIRE_CPP
 
 AC_CHECK_SIZEOF([long int])
 
-AC_MSG_CHECKING([if mpicc is available])
-AC_PATH_PROG([MPICC],[mpicc])
-if test -z "$MPICC" ; then
-have_mpicc=0
-AC_MSG_RESULT(['mpicc' is not found])
-else
-have_mpicc=1
-AC_MSG_RESULT(['mpicc' is found])
-fi
-
 AC_MSG_CHECKING([if nvcc is available])
 AC_PATH_PROG([NVCC],[nvcc])
 if test -z "$NVCC" ; then
@@ -48,7 +38,6 @@ else
 have_nvcc=1
 AC_MSG_RESULT(['nvcc' is found])
 fi
-AM_CONDITIONAL([HAVE_GPU_TESTING], [test "x${have_nvcc}" = x1 && test "x${have_mpicc}" = x1])
 
 dnl
 dnl Verify pkg-config
@@ -140,6 +129,9 @@ AM_CONDITIONAL([HAVE_BAKE], [test "x${have_mpi}" = x1 && test "x${have_bake_clie
 AM_CONDITIONAL([HAVE_PMDK], [test "x${have_libpmemobj}" = x1 && test "x${have_argobots}" = x1])
 AM_CONDITIONAL([HAVE_SSG], [test "x${have_ssg}" = x1 && test "x${have_ssg_mpi}" = x1])
 AM_CONDITIONAL([HAVE_MPI], [test "x${have_mpi}" = x1])
+AM_CONDITIONAL([HAVE_GPU_TESTING], [test "x${have_nvcc}" = x1 && test "x${have_mpi}" = x1])
+
+
 
 AC_CONFIG_FILES([Makefile])
 AC_OUTPUT
```
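The rewritten `HAVE_GPU_TESTING` conditional reuses the existing `have_mpi` result instead of a separate `have_mpicc` probe. The `test "x${var}" = x1` idiom it relies on can be exercised in plain shell (a standalone sketch, not part of the build itself):

```shell
# Mimic the AM_CONDITIONAL test: HAVE_GPU_TESTING should be enabled only
# when both nvcc and MPI support were detected by configure.
have_nvcc=1
have_mpi=1

# The leading "x" guards against empty or dash-prefixed values
# being misparsed by `test` as options.
if test "x${have_nvcc}" = x1 && test "x${have_mpi}" = x1; then
    echo "HAVE_GPU_TESTING enabled"
else
    echo "HAVE_GPU_TESTING disabled"
fi
```

Either flag being unset or 0 makes the compound test fail, which is exactly the behavior the old two-variable version (`have_nvcc` plus `have_mpicc`) had before the cleanup.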
perf-regression/Makefile.subdir: 1 addition & 0 deletions

```diff
@@ -12,6 +12,7 @@ bin_PROGRAMS += perf-regression/gpu-margo-p2p-bw
 CUDA_CFLAGS := $(subst -pthread,,$(CFLAGS))
 
 #Set up mpi path for include/lib
+MPICC := $(shell which $(CC))
 STRIP_LAST = $(patsubst %/,%,$(dir $(MPICC)))
 MPICC_PATH = $(patsubst %/,%,$(dir $(STRIP_LAST)))
```
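The `STRIP_LAST`/`MPICC_PATH` pair strips two trailing path components from the located compiler, recovering the MPI installation prefix from the wrapper's location. The same derivation in plain shell, using a hypothetical install path for illustration:

```shell
# Hypothetical location of the MPI compiler wrapper (as found by `which`).
MPICC=/opt/mpich/bin/mpicc

# Equivalent of $(patsubst %/,%,$(dir $(MPICC))): drop the file name.
STRIP_LAST=$(dirname "$MPICC")      # /opt/mpich/bin

# Drop the bin/ component as well, leaving the installation prefix.
MPICC_PATH=$(dirname "$STRIP_LAST") # /opt/mpich

echo "$MPICC_PATH"
```

From that prefix the Makefile can then form include and library search paths (e.g. `$(MPICC_PATH)/include`, `$(MPICC_PATH)/lib`) for compilers such as nvcc that do not use the MPI wrapper.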
perf-regression/theta-gpu/README.md: 5 additions & 1 deletion

```diff
@@ -1,4 +1,8 @@
-This is an example script for executing an automated regression test on the
+To run the margo gpu test: gpu-margo-p2p-bw.cu with the current libfabric release,
+libfabric needs to be configured with cuda support.
+See the local variant added to mochi-spack-packages/packages/libfabric/package.py.
+
+There is an example script for executing an automated regression test on the
 Thetagpu system at the ALCF. The entire process is handled by the
 "./gpu-qsub" script. The script can be copied to a desired location where
 the test may be run and submitted via "qsub ./gpu-qsub" on a
```

Review discussion on this change:

Collaborator: I think rather than explaining the modifications a user would need to do, we should instead just make a PR to the mochi-spack-packages repo with the changes needed and then just describe that libfabric needs to be configured with the --with-cuda variant.

Reply: Yes, please do! That's exactly why we have our own libfabric package that inherits from the upstream one: so that we can add our own local variants/patches before they are available upstream. After we've tested it a bit we could contribute the +cuda variant to the spack maintainers, but we should start by just having it in our repo.

Author: I will add the changes described here to libfabric's package.py and create a PR at https://github.com/mochi-hpc/mochi-spack-packages.
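Assuming the local package.py exposes the CUDA support as a `+cuda` variant (the exact variant name is defined in mochi-spack-packages, not in this PR), enabling it would look like a typical Spack variant selection:

```
# Hypothetical invocations; variant name and package layout are assumptions.
spack install libfabric +cuda
spack install mochi-margo ^libfabric+cuda
```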