
Segmentation fault when writing large files collectively with Fortran API #5184

Open
maurerben opened this issue Dec 18, 2024 · 2 comments
Labels: Component - Fortran (Fortran wrappers)

maurerben commented Dec 18, 2024

Describe the bug
When writing large files collectively from more than three processes, the program ends with a segmentation fault during the write. This happens with gcc builds on various systems. "Large" here means roughly >= 8 GB per process. We call the HDF5 Fortran API.

I include a minimal failing example below. Each process allocates a rank-1 int32 array of 2147483647 elements (~8.59 GB), initialized with the process number, and writes it to a rank-1 dataset of size n_processes * 2147483647 and type H5T_NATIVE_INTEGER. The program ends in a segmentation fault when run with more than three processes.

Expected behavior
The program writes the expected file collectively and ends without an error.

Platform
The example code fails as described on two systems:

  • Laptop:
    - Dell Latitude 5430
    - 12th Gen Intel(R) Core(TM) i7-1265U
    - 32 GB RAM
  • HPC:
    - Dell R7525 server, 2x AMD EPYC 7713, 1 TB RAM, 2x 100 Gbit/s Ethernet
    - Rocky 9.3 or 9.4 as hypervisor
    - Runs a KVM in a Docker container
    - Connected to a Lustre filesystem over 100 Gbit/s Ethernet, 2 metadata and 4 storage servers
  • HDF5 version
    - 1.12.3 and 1.14.0 (on both systems)
  • OS and version
    - Ubuntu 24.04.1 LTS (Laptop)
    - OpenSUSE 15.5 (HPC)
  • Compiler and version
    - gcc 13.3.0 (Laptop)
    - gcc 14.2.0 (HPC)
  • Build system (e.g. CMake, Autotools) and version
  • Any configure options you specified
    - export CC=mpicc F9X=mpif90 ; ./configure --enable-build-mode=production --enable-parallel --enable-fortran (Laptop and HPC for HDF5-1.12.3)
    - export CC=mpicc CXX=mpicxx FC=mpifort F77=mpif77 ; ./configure --prefix=/software/hdf5/1.14.5-libaec-1.1.3-openmpi-5.0.6-gcc14.2.0 --with-pic --with-pthread --enable-shared --enable-threadsafe --enable-unsupported --enable-parallel --enable-fortran --with-fortran2003 --enable-build-mode=production (HPC for HDF5-1.14.0)
  • MPI library and version (parallel HDF5)
    - OpenMPI 4.1.6 (Laptop)
    - OpenMPI 5.0.6 (HPC)

Additional context

The program ends without an error if the h5dwrite_f call is commented out, or if, in the h5pset_dxpl_mpio_f call, H5FD_MPIO_COLLECTIVE_F is replaced with H5FD_MPIO_INDEPENDENT_F.
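
For illustration, a minimal sketch of that workaround, reusing the variable names from the full example further down; only the transfer-mode constant differs from the failing version:

    ! Workaround sketch: request independent instead of collective MPI-IO
    ! on the dataset transfer property list (avoids the crash, but gives
    ! up collective I/O)
    call h5pcreate_f(H5P_DATASET_XFER_F, file_id_plist, ierr)
    call h5pset_dxpl_mpio_f(file_id_plist, H5FD_MPIO_INDEPENDENT_F, ierr)
    call h5dwrite_f(dataset_id, H5T_NATIVE_INTEGER, buffer, ierr, &
            file_space_id = dataspace_id, mem_space_id = memspace_id, xfer_prp = file_id_plist)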

The program runs as expected on the LUMI HPC in Finland with cray-HDF5-12.2.11 built with gcc 12.2.0, and on Intel nodes (Xeon(R) Gold 6126 CPU @ 2.60GHz) built with intel-oneapi 2021.4.0. On the Intel machine, HDF5-1.12.0 is self-built with export CC=mpiicc F9X=mpiifort ; ./configure --enable-build-mode=production --enable-parallel --enable-fortran --enable-fortran2003

Additionally, I tested on the laptop with the HDF5 installed via apt-get install libhdf5-openmpi-dev, which is version 1.10.10. In this case the program fails as well, as described above.

All software on the HPC is self-built. Here are the specs:
GCC 14.2.0
./contrib/download_prerequisites followed by ./configure --prefix=/software/gcc/14.2.0 --disable-multilib

libaec 1.1.3
cmake -S ../ -DCMAKE_INSTALL_PREFIX=/software/libaec/1.1.3 ; make ; make install

OpenMPI 5.0.6 gcc/14.2.0
./configure --prefix=/software/openmpi/5.0.6-gcc14.2.0 ; make -j ; make install

HDF5 1.14.5 libaec/1.1.3 openmpi/5.0.6 gcc/14.2.0
export CC=mpicc CXX=mpicxx FC=mpifort F77=mpif77 ; ./configure --prefix=/software/hdf5/1.14.5-libaec-1.1.3-openmpi-5.0.6-gcc14.2.0 --with-pic --with-pthread --enable-shared --enable-threadsafe --enable-unsupported --enable-parallel --enable-fortran --with-fortran2003 --enable-build-mode=production

Failing example: parallel_write_int32.f90

program parallel_write_int32
  
    use iso_fortran_env
    use iso_c_binding
    use hdf5
    use mpi
  
    implicit none

    character(*), parameter :: file_name = "test_file.h5"
    character(*), parameter :: dataset = "dataset"
    integer(int64), parameter :: block_size=2147483647
    integer :: ierr, procs, rank
    integer(HID_T) :: file_id, file_id_plist, dataspace_id, dataset_id, memspace_id
    integer(HSIZE_T) :: dims(1), offset(1), datacount(1)
    integer(int32), allocatable, target :: data_block(:)
    type(c_ptr) :: buffer

    ! Initialize MPI
    call mpi_init(ierr)
    call mpi_comm_size(mpi_comm_world, procs, ierr)
    call mpi_comm_rank(mpi_comm_world, rank, ierr)
  
    ! Prepare Data
    dims = [int(block_size, HSIZE_T) * procs]
    datacount = [int(block_size, HSIZE_T)]
    offset = [int(rank, HSIZE_T) * block_size] ! HDF5 assumes c-like indexing
    allocate(data_block(block_size), source=rank)
    buffer = c_loc(data_block)
  
    ! Initialize HDF5 file
    call h5open_f(ierr)
  
    ! Create the file with parallel (MPI-IO) file access
    call h5pcreate_f(H5P_FILE_ACCESS_F, file_id_plist, ierr)
    call h5pset_fapl_mpio_f(file_id_plist, mpi_comm_world, MPI_INFO_NULL, ierr)
    call h5fcreate_f(file_name, H5F_ACC_TRUNC_F, file_id, ierr, access_prp = file_id_plist)
    call h5pclose_f(file_id_plist, ierr)
  
    ! Create dataset for the global array
    call h5screate_simple_f(size(dims), dims, dataspace_id, ierr)
    call h5dcreate_f(file_id, dataset, H5T_NATIVE_INTEGER, dataspace_id, dataset_id, ierr)
    call h5sclose_f(dataspace_id, ierr)
  
    ! Prepare dataspace for writing
    call h5screate_simple_f(size(dims), datacount, memspace_id, ierr)
    call h5dget_space_f(dataset_id, dataspace_id, ierr)
    call h5sselect_hyperslab_f(dataspace_id, H5S_SELECT_SET_F, offset, datacount, ierr)
  
    ! Create a property list to enable parallel writing
    call h5pcreate_f(H5P_DATASET_XFER_F, file_id_plist, ierr)
    call h5pset_dxpl_mpio_f(file_id_plist, H5FD_MPIO_COLLECTIVE_F, ierr) ! H5FD_MPIO_INDEPENDENT_F works
  
    ! Write data to file
    call h5dwrite_f(dataset_id, H5T_NATIVE_INTEGER, buffer, ierr, &
            file_space_id = dataspace_id, mem_space_id = memspace_id, xfer_prp = file_id_plist)
  
    ! Close open HDF5 objects
    call h5pclose_f(file_id_plist, ierr)
    call h5sclose_f(dataspace_id, ierr)
    call h5sclose_f(memspace_id, ierr)
    call h5dclose_f(dataset_id, ierr)
    call h5fclose_f(file_id, ierr)
    call h5close_f(ierr)
  
    ! Finalize MPI
    call mpi_finalize(ierr)
end program

Built with

h5pfc parallel_write_int32.f90 -g -O0 -fbounds-check -fbacktrace -Wall -Wextra -o parallel_write_int32
h5pfc parallel_write_int32.f90 -g -O3 -o parallel_write_int32
@DetlevCM

I could also add that make check passes in the HDF5 compilation step as described, and that the same problem can be reproduced bare-metal on a Linux laptop (also AMD-based) running Tumbleweed.

Using two ranks succeeds, but the moment 3 or more are used it fails...

hyoklee added the Component - Fortran (Fortran wrappers) label Dec 19, 2024
brtnfld (Contributor) commented Dec 20, 2024

This looks like an OpenMPI issue. For me, the program works fine with MPICH but fails with OpenMPI 4.0.2. Can you try with MPICH and see if you have the same problem?
