Environment Variables
@@ -477,5 +477,5 @@ variables to these tests will increase the time significantly.
-_Last updated: 05-16-2016_
+_Last updated: 11-18-2015_
*/
diff --git a/cime/externals/pio2/doc/source/testpio_example.txt b/cime/externals/pio2/doc/source/testpio_example.txt
index 21875023badd..07fb3b04b478 100644
--- a/cime/externals/pio2/doc/source/testpio_example.txt
+++ b/cime/externals/pio2/doc/source/testpio_example.txt
@@ -16,9 +16,9 @@
*/
/*! \page testpio_example testpio: a regression and benchmarking code
-The testpio directory, included with the release package, tests both the accuracy
-and performance of reading and writing data
-using the pio library.
+The testpio directory, included with the release package, tests both the accuracy
+and performance of reading and writing data
+using the pio library.
The testpio directory contains 3 perl scripts that you can use to build and run the testpio.F90 code.
@@ -65,7 +65,7 @@ block, io_nml, contains some general settings:
("bin","pnc","snc"), binary, pnetcdf, or serial netcdf
- rearr | string, type of rearranging to be done
+ | rearr | string, type of rearranging to be done
("none","mct","box","boxauto") |
@@ -77,15 +77,15 @@ block, io_nml, contains some general settings:
base | integer, base pe associated with nprocIO striding |
- stride | integer, the stride of io pes across the global pe set. A stride=-1
+ | stride | integer, the stride of io pes across the global pe set. A stride=-1
directs PIO to calculate the stride automatically. |
- num_aggregator | integer, mpi-io number of aggregators, only used if no
+ | num_aggregator | integer, mpi-io number of aggregators, only used if no
pio rearranging is done |
- dir | string, directory to write output data, this must exist
+ | dir | string, directory to write output data, this must exist
before the model starts up |
@@ -101,19 +101,19 @@ block, io_nml, contains some general settings:
compdof_input | string, setting of the compDOF ('namelist' or a filename) |
- compdof_output | string, whether the compDOF is saved to disk
+ | compdof_output | string, whether the compDOF is saved to disk
('none' or a filename) |
Notes:
- the "mct" rearr option is not currently available
- - if rearr is set to "none", then the computational decomposition is also
+ - if rearr is set to "none", then the computational decomposition is also
going to be used as the IO decomposition. The computation decomposition
must therefore be suited to the underlying I/O methods.
- if rearr is set to "box", then pio is going to generate an internal
IO decomposition automatically and pio will rearrange to that decomp.
- - num_aggregator is used with mpi-io and no pio rearranging. mpi-io is only
+ - num_aggregator is used with mpi-io and no pio rearranging. mpi-io is only
used with binary data.
- nprocsIO, base, and stride implementation has some special options
- if nprocsIO > 0 and stride > 0, then use input values
@@ -139,7 +139,7 @@ blocks are identical in use.
("xyz","xzy","yxz","yzx","zxy","zyx")
- grddecomp | string, sets up the block size with gdx, gdy, and gdz, see
+ | grddecomp | string, sets up the block size with gdx, gdy, and gdz, see
below, ("x","y","z","xy","xye","xz","xze","yz","yze",
"xyz","xyze","setblk") |
@@ -183,7 +183,7 @@ are provided below.
Testpio writes out several files including summary information to
stdout, data files to the namelists directory, and a netcdf
-file summarizing the decompositions. The key output information
+file summarizing the decompositions. The key output information
is written to stdout and contains the timing information. In addition,
a netcdf file called gdecomp.nc is written that provides both the
block and task ids for each gridcell as computed by the decompositions.
@@ -237,24 +237,24 @@ combinations of these cpp flags.
The decomposition implementation supports the decomposition of
a general 3 dimensional "nx * ny * nz" grid into multiple blocks
-of gridcells which are then ordered and assigned to processors.
-In general, blocks in the decomposition are rectangular,
-"gdx * gdy * gdz" and the same size, although some blocks around
-the edges of the domain may be smaller if the decomposition is uneven.
-Both gridcells within the block and blocks within the domain can be
+of gridcells which are then ordered and assigned to processors.
+In general, blocks in the decomposition are rectangular,
+"gdx * gdy * gdz" and the same size, although some blocks around
+the edges of the domain may be smaller if the decomposition is uneven.
+Both gridcells within the block and blocks within the domain can be
ordered in any of the possible dimension hierarchies, such as "xyz"
-where the first dimension is the fastest.
+where the first dimension is the fastest.
-The gdx, gdy, and gdz inputs allow the user to specify the size in
-any dimension and the grddecomp input specifies which dimensions are
-to be further optimized. In general, automatic decomposition generation
-of 3 dimensional grids can be done in any of possible combination of
+The gdx, gdy, and gdz inputs allow the user to specify the size in
+any dimension and the grddecomp input specifies which dimensions are
+to be further optimized. In general, automatic decomposition generation
+of 3 dimensional grids can be done in any possible combination of
dimensions, (x, y, z, xy, xz, yz, or xyz), with the other dimensions having a
fixed block size. The automatic generation of the decomposition is
based upon an internal algorithm that tries to determine the most
"square" blocks with an additional constraint on minimizing the maximum
number of gridcells across processors. If evenly divided grids are
-desired, use of the "e" addition to grddecomp specifies that the grid
+desired, use of the "e" addition to grddecomp specifies that the grid
decomposition must be evenly divided. The setblk option uses the
prescribed gdx, gdy, and gdz inputs without further automation.
@@ -263,7 +263,7 @@ in mapping blocks to processors, but has a few additional options.
"cont1d" (contiguous 1d) basically unwraps the blocks in the order specified
by the blkorder input and then decomposes that "1d" list of blocks
onto processors by contiguously grouping blocks together and allocating
-them to a processor. The number of contiguous blocks that are
+them to a processor. The number of contiguous blocks that are
allocated to a processor is the maximum of the values of bdx, bdy, and
bdz inputs. Contiguous blocks are allocated to each processor in turn
in a round robin fashion until all blocks are allocated. The
@@ -272,13 +272,13 @@ contiguous blocks are set automatically such that each processor
receives only 1 set of contiguous blocks. The ysym2 and ysym4
blkdecomp2 options modify the original block layout such that
the tasks assigned to the blocks are 2-way or 4-way symmetric
-in the y axis.
+in the y axis.
The decomposition tool is extremely flexible, but arbitrary
inputs will not always yield valid decompositions. If a valid
decomposition cannot be computed based on the global grid size,
-number of pes, number of blocks desired, and decomposition options,
-the model will stop.
+number of pes, number of blocks desired, and decomposition options,
+the model will stop.
As indicated above, the IO decomposition must be suited to the
IO methods, so decompositions are even further limited by those
@@ -306,7 +306,7 @@ Some decomposition examples:
Standard xyz ordering, 2d decomp:
note: blkdecomp plays no role since there is 1 block per pe
- nx_global 6
+ nx_global 6
ny_global 4
nz_global 1 ______________________________
npes 4 |B3 P3 |B4 P4 |
@@ -327,7 +327,7 @@ note: blkdecomp plays no role since there is 1 block per pe
Same as above but yxz ordering, 2d decomp
note: blkdecomp plays no role since there is 1 block per pe
- nx_global 6
+ nx_global 6
ny_global 4
nz_global 1 _____________________________
npes 4 |B2 P2 |B4 P4 |
@@ -345,11 +345,11 @@ note: blkdecomp plays no role since there is 1 block per pe
bdz 0
-xyz grid ordering, 1d x decomp
+xyz grid ordering, 1d x decomp
note: blkdecomp plays no role since there is 1 block per pe
note: blkorder plays no role since it's a 1d decomp
- nx_global 8
+ nx_global 8
ny_global 4
nz_global 1 _____________________________________
npes 4 |B1 P1 |B2 P2 |B3 P3 |B4 P4 |
@@ -369,7 +369,7 @@ xyz grid ordering, 1d x decomp
yxz block ordering, 2d grid decomp, 2d block decomp, 4 block per pe
- nx_global 8
+ nx_global 8
ny_global 4
nz_global 1 _____________________________________
npes 4 |B4 P2 |B8 P2 |B12 P4 |B16 P4 |
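For reference, the io_nml settings documented above can be collected into a single block of a testpio_in file. This is only an illustrative sketch: the group name io_nml and the variable names come from the documentation above, while the specific values (grid size, I/O task layout, directory) are assumptions for a small 4-task run, not values prescribed by this patch.

    &io_nml
      nx_global      = 8
      ny_global      = 4
      nz_global      = 1
      ioFMT          = 'pnc'        ! pnetcdf
      rearr          = 'box'        ! box rearrangement
      nprocsIO       = 2
      base           = 0
      stride         = 2
      num_aggregator = 0
      dir            = './'         ! must exist before the model starts up
      num_iodofs     = 1
      maxiter        = 10
      DebugLevel     = 0
      compdof_input  = 'namelist'
      compdof_output = 'none'
    /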
diff --git a/cime/externals/pio2/examples/basic/CMakeLists.txt b/cime/externals/pio2/examples/basic/CMakeLists.txt
index e992df176ba5..efa784e01e10 100644
--- a/cime/externals/pio2/examples/basic/CMakeLists.txt
+++ b/cime/externals/pio2/examples/basic/CMakeLists.txt
@@ -7,7 +7,7 @@ ADD_CUSTOM_COMMAND(
)
ENDFOREACH()
-SET(SRC check_mod.F90 gdecomp_mod.F90 kinds_mod.F90 namelist_mod.F90
+SET(SRC check_mod.F90 gdecomp_mod.F90 kinds_mod.F90 namelist_mod.F90
testpio.F90 utils_mod.F90 ${TEMPSRCF90})
SET(WSSRC wstest.c)
@@ -15,7 +15,7 @@ INCLUDE_DIRECTORIES(${PIO_INCLUDE_DIRS})
LINK_DIRECTORIES(${PIO_LIB_DIR})
ADD_EXECUTABLE(testpio ${SRC})
ADD_EXECUTABLE(wstest ${WSSRC})
-if(${PIO_BUILD_TIMING} MATCHES "ON")
+if(${PIO_BUILD_TIMING} MATCHES "ON")
SET(TIMING_LINK_LIB timing)
endif()
diff --git a/cime/externals/pio2/examples/basic/MPASA30km.csh b/cime/externals/pio2/examples/basic/MPASA30km.csh
index e030403c3591..a141354fa984 100755
--- a/cime/externals/pio2/examples/basic/MPASA30km.csh
+++ b/cime/externals/pio2/examples/basic/MPASA30km.csh
@@ -1,7 +1,7 @@
#!/usr/bin/csh
set id = `date "+%m%d%y-%H%M"`
set host = 'kraken'
-#./testpio_bench.pl --maxiter 10 --iofmt pnc --numvars 10 --pecount 120 --bench MPASA30km -numIO 20 --partdir /lustre/scratch/jdennis/MPAS --logfile-suffix trunk_close
+#./testpio_bench.pl --maxiter 10 --iofmt pnc --numvars 10 --pecount 120 --bench MPASA30km -numIO 20 --partdir /lustre/scratch/jdennis/MPAS --logfile-suffix trunk_close
#./testpio_bench.pl --maxiter 10 --iofmt pnc --numvars 10 --pecount 240 --bench MPASA30km -numIO 40 --partdir /lustre/scratch/jdennis/MPAS --logfile-suffix trunk_close
#./testpio_bench.pl --maxiter 10 --iofmt pnc --numvars 10 --pecount 480 --bench MPASA30km -numIO 80 --partdir /lustre/scratch/jdennis/MPAS --logfile-suffix trunk_close
#./testpio_bench.pl --maxiter 10 --iofmt pnc --numvars 10 --pecount 960 --bench MPASA30km -numIO 160 --partdir /lustre/scratch/jdennis/MPAS --logfile-suffix trunk_close
diff --git a/cime/externals/pio2/examples/basic/README.testpio b/cime/externals/pio2/examples/basic/README.testpio
index 7156a9e1e38a..db914e3c7727 100644
--- a/cime/externals/pio2/examples/basic/README.testpio
+++ b/cime/externals/pio2/examples/basic/README.testpio
@@ -1,7 +1,7 @@
TESTPIO README
-Testpio tests both the accuracy and performance of reading and writing data
+Testpio tests both the accuracy and performance of reading and writing data
using the pio library. The tests are controlled via namelists. There is a
set of general namelist settings and then namelists to set up a computational
decomposition and an IO decomposition. The computational decomposition
@@ -21,34 +21,34 @@ block, io_nml, contains some general settings:
nx_global - integer, global size of "x" dimension
ny_global - integer, global size of "y" dimension
nz_global - integer, global size of "z" dimension
- ioFMT - string, type and i/o method of data file
+ ioFMT - string, type and i/o method of data file
("bin","pnc","snc"), binary, pnetcdf, or serial netcdf
- rearr - string, type of rearranging to be done
+ rearr - string, type of rearranging to be done
("none","mct","box","boxauto")
nprocsIO - integer, number of IO processors used only when rearr is
not "none", if rearr is "none", then the IO decomposition
will be the computational decomposition
base - integer, base pe associated with nprocIO striding
stride - integer, the stride of io pes across the global pe set
- num_aggregator - integer, mpi-io number of aggregators, only used if no
+ num_aggregator - integer, mpi-io number of aggregators, only used if no
pio rearranging is done
- dir - string, directory to write output data, this must exist
+ dir - string, directory to write output data, this must exist
before the model starts up
num_iodofs - tests either 1dof or 2dof init decomp interfaces (1,2)
maxiter - integer, the number of trials for the test
DebugLevel - integer, sets the debug level (0,1,2,3)
compdof_input - string, setting of the compDOF ('namelist' or a filename)
- compdof_output - string, whether the compDOF is saved to disk
+ compdof_output - string, whether the compDOF is saved to disk
('none' or a filename)
Notes:
- the "mct" rearr option is not currently available
- - if rearr is set to "none", then the computational decomposition is also
+ - if rearr is set to "none", then the computational decomposition is also
going to be used as the IO decomposition. The computation decomposition
must therefore be suited to the underlying I/O methods.
- if rearr is set to "box", then pio is going to generate an internal
IO decomposition automatically and pio will rearrange to that decomp.
- - num_aggregator is used with mpi-io and no pio rearranging. mpi-io is only
+ - num_aggregator is used with mpi-io and no pio rearranging. mpi-io is only
used with binary data.
- nprocsIO, base, and stride implementation has some special options
if nprocsIO > 0 and stride > 0, then use input values
@@ -66,7 +66,7 @@ blocks are identical in use.
increasing this increases the flexibility of decompositions.
grdorder - string, sets the gridcell ordering within the block
("xyz","xzy","yxz","yzx","zxy","zyx")
- grddecomp - string, sets up the block size with gdx, gdy, and gdz, see
+ grddecomp - string, sets up the block size with gdx, gdy, and gdz, see
below, ("x","y","z","xy","xye","xz","xze","yz","yze",
"xyz","xyze","setblk")
gdx - integer, "x" size of block
@@ -89,7 +89,7 @@ are provided below.
Testpio writes out several files including summary information to
stdout, data files to the namelist dir directory, and a netcdf
-file summarizing the decompositions. The key output information
+file summarizing the decompositions. The key output information
is stdout, which contains the timing information. In addition,
a netcdf file called gdecomp.nc is written that provides both the
block and task ids for each gridcell as computed by the decompositions.
@@ -110,7 +110,7 @@ option to testpio_run.pl
There are several testpio_in files for the pio test suite. The ones that
come with pio test specific things. In general, there are tests for
- sn = serial netcdf and no rearrangement
+ sn = serial netcdf and no rearrangement
sb = serial netcdf and box rearrangement
pn = parallel netcdf and no rearrangement
pb = parallel netcdf and box rearrangement
@@ -121,7 +121,7 @@ and the test number (01, etc) is consistent across I/O methods with
02 = simple 2d xy decomp across all pes with all pes active in I/O
03 = all data on root pe, all pes active in I/O
04 = simple 2d xy decomp with yxz ordering and stride=4 pes active in I/O
- 05 = 2d xy decomp with 4 blocks/pe, yxz ordering, xy block decomp, and
+ 05 = 2d xy decomp with 4 blocks/pe, yxz ordering, xy block decomp, and
stride=4 pes active in I/O
06 = 3d xy decomp with 4 blocks/pe, yxz ordering, xy block decomp, and
stride=4 pes active in I/O
@@ -146,24 +146,24 @@ DECOMPOSITION:
The decomposition implementation supports the decomposition of
a general 3 dimensional "nx * ny * nz" grid into multiple blocks
-of gridcells which are then ordered and assigned to processors.
-In general, blocks in the decomposition are rectangular,
-"gdx * gdy * gdz" and the same size, although some blocks around
-the edges of the domain may be smaller if the decomposition is uneven.
-Both gridcells within the block and blocks within the domain can be
+of gridcells which are then ordered and assigned to processors.
+In general, blocks in the decomposition are rectangular,
+"gdx * gdy * gdz" and the same size, although some blocks around
+the edges of the domain may be smaller if the decomposition is uneven.
+Both gridcells within the block and blocks within the domain can be
ordered in any of the possible dimension hierarchies, such as "xyz"
-where the first dimension is the fastest.
+where the first dimension is the fastest.
-The gdx, gdy, and gdz inputs allow the user to specify the size in
-any dimension and the grddecomp input specifies which dimensions are
-to be further optimized. In general, automatic decomposition generation
-of 3 dimensional grids can be done in any of possible combination of
+The gdx, gdy, and gdz inputs allow the user to specify the size in
+any dimension and the grddecomp input specifies which dimensions are
+to be further optimized. In general, automatic decomposition generation
+of 3 dimensional grids can be done in any possible combination of
dimensions, (x, y, z, xy, xz, yz, or xyz), with the other dimensions having a
fixed block size. The automatic generation of the decomposition is
based upon an internal algorithm that tries to determine the most
"square" blocks with an additional constraint on minimizing the maximum
number of gridcells across processors. If evenly divided grids are
-desired, use of the "e" addition to grddecomp specifies that the grid
+desired, use of the "e" addition to grddecomp specifies that the grid
decomposition must be evenly divided. The setblk option uses the
prescribed gdx, gdy, and gdz inputs without further automation.
@@ -172,7 +172,7 @@ in mapping blocks to processors, but has a few additional options.
"cont1d" (contiguous 1d) basically unwraps the blocks in the order specified
by the blkorder input and then decomposes that "1d" list of blocks
onto processors by contiguously grouping blocks together and allocating
-them to a processor. The number of contiguous blocks that are
+them to a processor. The number of contiguous blocks that are
allocated to a processor is the maximum of the values of bdx, bdy, and
bdz inputs. Contiguous blocks are allocated to each processor in turn
in a round robin fashion until all blocks are allocated. The
@@ -181,13 +181,13 @@ contiguous blocks are set automatically such that each processor
receives only 1 set of contiguous blocks. The ysym2 and ysym4
blkdecomp2 options modify the original block layout such that
the tasks assigned to the blocks are 2-way or 4-way symmetric
-in the y axis.
+in the y axis.
The decomposition tool is extremely flexible, but arbitrary
inputs will not always yield valid decompositions. If a valid
decomposition cannot be computed based on the global grid size,
-number of pes, number of blocks desired, and decomposition options,
-the model will stop.
+number of pes, number of blocks desired, and decomposition options,
+the model will stop.
As indicated above, the IO decomposition must be suited to the
IO methods, so decompositions are even further limited by those
@@ -212,7 +212,7 @@ Some decomposition examples:
Standard xyz ordering, 2d decomp:
note: blkdecomp plays no role since there is 1 block per pe
- nx_global 6
+ nx_global 6
ny_global 4
nz_global 1 ______________________________
npes 4 |B3 P3 |B4 P4 |
@@ -231,7 +231,7 @@ Standard xyz ordering, 2d decomp:
Same as above but yxz ordering, 2d decomp
note: blkdecomp plays no role since there is 1 block per pe
- nx_global 6
+ nx_global 6
ny_global 4
nz_global 1 _____________________________
npes 4 |B2 P2 |B4 P4 |
@@ -248,10 +248,10 @@ Same as above but yxz ordering, 2d decomp
bdy 0 |______________|______________|
bdz 0
-xyz grid ordering, 1d x decomp
+xyz grid ordering, 1d x decomp
note: blkdecomp plays no role since there is 1 block per pe
note: blkorder plays no role since it's a 1d decomp
- nx_global 8
+ nx_global 8
ny_global 4
nz_global 1 _____________________________________
npes 4 |B1 P1 |B2 P2 |B3 P3 |B4 P4 |
@@ -269,7 +269,7 @@ xyz grid ordering, 1d x decomp
bdz 0
yxz block ordering, 2d grid decomp, 2d block decomp, 4 block per pe
- nx_global 8
+ nx_global 8
ny_global 4
nz_global 1 _____________________________________
npes 4 |B4 P2 |B8 P2 |B12 P4 |B16 P4 |
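The first decomposition example above (a 6x4x1 grid on 4 pes with one block per pe and xyz ordering) can be reproduced with a few lines of Fortran, which may help when reading the remaining examples. This is a sketch for illustration only: the 3x2 block size is inferred from the diagram rather than stated explicitly, and the program is not part of testpio or of this patch.

    program gdecomp_sketch
      ! Recompute the block map of the "Standard xyz ordering, 2d decomp" example:
      ! 6x4 grid, 4 pes, one block per pe, blocks assumed to be 3x2.
      implicit none
      integer, parameter :: nx_global = 6, ny_global = 4
      integer, parameter :: gdx = 3, gdy = 2      ! block size (assumed from the diagram)
      integer, parameter :: nbx = nx_global/gdx   ! number of blocks along x
      integer :: i, j, blk(nx_global, ny_global)

      do j = 1, ny_global
         do i = 1, nx_global
            ! xyz ordering: x varies fastest, so blocks are numbered row by row
            blk(i,j) = ((j-1)/gdy)*nbx + (i-1)/gdx + 1
         end do
      end do

      ! print rows with the largest y first to match the ASCII picture;
      ! with one block per pe, task Pn owns block Bn
      do j = ny_global, 1, -1
         write(*,'(6(a,i0,1x))') ('B', blk(i,j), i = 1, nx_global)
      end do
    end program gdecomp_sketch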
diff --git a/cime/externals/pio2/examples/basic/alloc_mod.F90.in b/cime/externals/pio2/examples/basic/alloc_mod.F90.in
index 7843c7f69f64..f70259d6ef1c 100644
--- a/cime/externals/pio2/examples/basic/alloc_mod.F90.in
+++ b/cime/externals/pio2/examples/basic/alloc_mod.F90.in
@@ -1,6 +1,6 @@
#define __PIO_FILE__ "alloc_mod.F90.in"
!>
-!! @file
+!! @file
!! $Revision$
!! $LastChangedDate$
!! @brief Internal allocation routines for PIO
@@ -15,18 +15,18 @@ module alloc_mod
!>
!! @private
-!! PIO internal memory allocation check routines.
+!! PIO internal memory allocation check routines.
!<
public:: alloc_check
!>
!! @private
-!! PIO internal memory allocation check routines.
+!! PIO internal memory allocation check routines.
!<
- public:: dealloc_check
+ public:: dealloc_check
interface alloc_check
! TYPE long,int,real,double ! DIMS 1,2
- module procedure alloc_check_{DIMS}d_{TYPE}
+ module procedure alloc_check_{DIMS}d_{TYPE}
! TYPE double,long,int,real
module procedure alloc_check_0d_{TYPE}
end interface
@@ -42,19 +42,19 @@ module alloc_mod
!>
!! @private
-!! PIO internal memory allocation check routines.
+!! PIO internal memory allocation check routines.
!<
public :: alloc_print_usage
!>
!! @private
-!! PIO internal memory allocation check routines.
+!! PIO internal memory allocation check routines.
!<
public :: alloc_trace_on
!>
!! @private
-!! PIO internal memory allocation check routines.
+!! PIO internal memory allocation check routines.
!<
public :: alloc_trace_off
@@ -66,7 +66,7 @@ contains
! Instantiate all the variations of alloc_check_ and dealloc_check_
!
- ! TYPE long,int,real,double
+ ! TYPE long,int,real,double
subroutine alloc_check_1d_{TYPE} (data,varlen,msg)
{VTYPE}, pointer :: data(:)
@@ -102,7 +102,7 @@ contains
end subroutine alloc_check_1d_{TYPE}
- ! TYPE long,int,real,double
+ ! TYPE long,int,real,double
subroutine alloc_check_2d_{TYPE} (data,size1, size2,msg)
{VTYPE}, pointer :: data(:,:)
@@ -214,7 +214,7 @@ end subroutine dealloc_check_0d_{TYPE}
!>
!! @private
!! @fn alloc_print_usage
-!! PIO internal memory allocation check routines.
+!! PIO internal memory allocation check routines.
!<
subroutine alloc_print_usage(rank,msg)
#ifndef NO_MPIMOD
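The {TYPE}, {VTYPE} and {DIMS} tokens above are template placeholders: the .F90.in sources are expanded by a preprocessing step into one concrete routine per type and rank, such as alloc_check_1d_int, which appears to be why the CMakeLists.txt hunk earlier runs a custom command and adds ${TEMPSRCF90} to SRC. A rough, self-contained sketch of the allocate-and-check pattern that each generated routine wraps is shown below; the error handling is simplified and is an assumption, not the module's actual code.

    program alloc_check_sketch
      ! Minimal sketch of the pattern behind alloc_check/dealloc_check;
      ! kinds_mod kinds, the msg argument and piodie() are left out.
      implicit none
      integer, pointer :: data(:)
      integer :: varlen, ierr

      varlen = 1000
      allocate(data(varlen), stat=ierr)
      if (ierr /= 0) then
         write(*,*) 'alloc_check: allocation of', varlen, 'words failed'
         stop 1
      end if
      data = 0                      ! use the freshly allocated array
      deallocate(data, stat=ierr)   ! dealloc_check performs a similar status check
    end program alloc_check_sketch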
diff --git a/cime/externals/pio2/examples/basic/build_defaults.xml b/cime/externals/pio2/examples/basic/build_defaults.xml
index 0bf3766e18ee..0bea13fd8e45 100644
--- a/cime/externals/pio2/examples/basic/build_defaults.xml
+++ b/cime/externals/pio2/examples/basic/build_defaults.xml
@@ -31,7 +31,7 @@
#BSUB -J testpio_suite
#BSUB -W 3:00
"
- />
+ />
@@ -65,7 +65,7 @@
#BSUB -J testpio_suite
#BSUB -W 1:00
'
- />
+ />
diff --git a/cime/externals/pio2/examples/basic/check_mod.F90 b/cime/externals/pio2/examples/basic/check_mod.F90
--- a/cime/externals/pio2/examples/basic/check_mod.F90
+++ b/cime/externals/pio2/examples/basic/check_mod.F90
@@ -62,7 +62,7 @@ subroutine check_1D_r8(my_comm, fname,wr_array,rd_array,len,iostat)
wr_array(maxbadloc), rd_array(maxbadloc)
if(present(iostat)) iostat = -20
endif
- call dealloc_check(diff)
+ call dealloc_check(diff)
end subroutine check_1D_r8
subroutine check_3D_r8(my_comm, fname,wr_array,rd_array)
@@ -76,17 +76,17 @@ subroutine check_3D_r8(my_comm, fname,wr_array,rd_array)
real(r8) :: lsum,gsum
integer(i4) :: ierr,cbad,rank
integer(i4) :: len1,len2,len3
-
+
len1 = SIZE(wr_array,dim=1)
len2 = SIZE(wr_array,dim=2)
len3 = SIZE(wr_array,dim=3)
-
+
allocate(diff(len1,len2,len3))
-
+
diff = wr_array - rd_array
cbad = COUNT(diff .ne. 0.0)
lsum = SUM(diff)
-
+
call MPI_Allreduce(lsum,gsum,1,MPI_REAL8,MPI_SUM,MY_COMM,ierr)
call CheckMPIReturn('Call to MPI_Allreduce()',ierr,__FILE__,__LINE__)
@@ -96,7 +96,7 @@ subroutine check_3D_r8(my_comm, fname,wr_array,rd_array)
if(lsum .ne. 0.0) print *,'IAM: ', rank, 'File: ',TRIM(fname),&
' Error detected for correctness test(3D,R8): ',lsum,' # bad: ',cbad
endif
- deallocate(diff)
+ deallocate(diff)
end subroutine check_3D_r8
@@ -111,17 +111,17 @@ subroutine check_3D_r4(my_comm, fname,wr_array,rd_array)
real(r4) :: lsum,gsum
integer(i4) :: ierr,cbad,rank
integer(i4) :: len1,len2,len3
-
+
len1 = SIZE(wr_array,dim=1)
len2 = SIZE(wr_array,dim=2)
len3 = SIZE(wr_array,dim=3)
-
+
allocate(diff(len1,len2,len3))
-
+
diff = wr_array - rd_array
cbad = COUNT(diff .ne. 0.0)
lsum = SUM(diff)
-
+
call MPI_Allreduce(lsum,gsum,1,MPI_REAL,MPI_SUM,MY_COMM,ierr)
call CheckMPIReturn('Call to MPI_Allreduce()',ierr,__FILE__,__LINE__)
@@ -131,7 +131,7 @@ subroutine check_3D_r4(my_comm, fname,wr_array,rd_array)
if(lsum .ne. 0) print *,'IAM: ', rank, 'File: ',TRIM(fname),&
' Error detected for correctness test(3D,R4): ',lsum,' # bad: ',cbad
endif
- deallocate(diff)
+ deallocate(diff)
end subroutine check_3D_r4
@@ -146,17 +146,17 @@ subroutine check_3D_i4(my_comm, fname,wr_array,rd_array)
integer(i4) :: lsum,gsum
integer(i4) :: ierr,cbad,rank
integer(i4) :: len1,len2,len3
-
+
len1 = SIZE(wr_array,dim=1)
len2 = SIZE(wr_array,dim=2)
len3 = SIZE(wr_array,dim=3)
-
+
allocate(diff(len1,len2,len3))
-
+
diff = wr_array - rd_array
cbad = COUNT(diff .ne. 0.0)
lsum = SUM(diff)
-
+
call MPI_Allreduce(lsum,gsum,1,MPI_INTEGER,MPI_SUM,MY_COMM,ierr)
call CheckMPIReturn('Call to MPI_Allreduce()',ierr,__FILE__,__LINE__)
if(gsum .ne. 0.0) then
@@ -165,13 +165,13 @@ subroutine check_3D_i4(my_comm, fname,wr_array,rd_array)
if(lsum .ne. 0) print *,'IAM: ', rank, 'File: ',TRIM(fname),&
' Error detected for correctness test(3D,I4): ',lsum,' # bad: ',cbad
endif
- deallocate(diff)
+ deallocate(diff)
end subroutine check_3D_i4
subroutine check_1D_r4(my_comm,fname,wr_array,rd_array,len,iostat)
integer, intent(in) :: my_comm
-
+
character(len=*) :: fname
real(r4) :: wr_array(:)
real(r4) :: rd_array(:)
@@ -181,11 +181,11 @@ subroutine check_1D_r4(my_comm,fname,wr_array,rd_array,len,iostat)
real(r4) :: lsum,gsum
integer(i4) :: ierr,len,cbad,rank
-
+
! Set default (no error) value for iostat if present)
if(present(iostat)) iostat = PIO_noerr
-
+
call alloc_check(diff,len,' check_1D_r4:diff ')
if(len>0) then
@@ -195,7 +195,7 @@ subroutine check_1D_r4(my_comm,fname,wr_array,rd_array,len,iostat)
else
lsum = 0
end if
-
+
call MPI_Allreduce(lsum,gsum,1,MPI_REAL,MPI_SUM,MY_COMM,ierr)
call CheckMPIReturn('Call to MPI_Allreduce()',ierr,__FILE__,__LINE__)
if(abs(gsum) > tiny(gsum)) then
@@ -205,7 +205,7 @@ subroutine check_1D_r4(my_comm,fname,wr_array,rd_array,len,iostat)
' Error detected for correctness test(1D,R4): ',lsum,' # bad: ',cbad
if(present(iostat)) iostat = -20
endif
- deallocate(diff)
+ deallocate(diff)
end subroutine check_1D_r4
@@ -221,11 +221,11 @@ subroutine check_1D_i4(my_comm, fname,wr_array,rd_array,len,iostat)
integer(i4) :: lsum,gsum
integer(i4) :: ierr,cbad,rank, lloc(1)
-
+
! Set default (no error) value for iostat if present)
if(present(iostat)) iostat = PIO_noerr
-
+
call alloc_check(diff,len,' check_1D_r4:diff ')
if(len>0) then
diff = wr_array - rd_array
@@ -245,7 +245,7 @@ subroutine check_1D_i4(my_comm, fname,wr_array,rd_array,len,iostat)
lloc, wr_array(lloc(1)), rd_array(lloc(1))
if(present(iostat)) iostat = -20
endif
- deallocate(diff)
+ deallocate(diff)
end subroutine check_1D_i4
diff --git a/cime/externals/pio2/examples/basic/config_bench.xml b/cime/externals/pio2/examples/basic/config_bench.xml
index 4c2f6840e7a1..0f8beaf2ea5d 100644
--- a/cime/externals/pio2/examples/basic/config_bench.xml
+++ b/cime/externals/pio2/examples/basic/config_bench.xml
@@ -75,39 +75,39 @@
256 256 256
+ 256 256 256
80 48 60
+ 80 48 60
32 48 60
+ 32 48 60
32 24 60
+ 32 24 60
16 24 60
+ 16 24 60
16 12 60
+ 16 12 60
-
+
90 72 100
+ 90 72 100
90 36 100
+ 90 36 100
45 36 100
+ 45 36 100
30 30 100
+ 30 30 100
45 18 100
+ 45 18 100
20 40 100
+ 20 40 100
20 20 100
+ 20 20 100
10 10 100
-
+
576 6 26
diff --git a/cime/externals/pio2/examples/basic/fdepends.awk b/cime/externals/pio2/examples/basic/fdepends.awk
index 2980920cf23a..03bab9769da9 100644
--- a/cime/externals/pio2/examples/basic/fdepends.awk
+++ b/cime/externals/pio2/examples/basic/fdepends.awk
@@ -19,9 +19,9 @@ BEGIN { IGNORECASE=1
#
-# awk reads each line of the filename argument $2 until it finds
+# awk reads each line of the filename argument $2 until it finds
# a "use" or "#include"
-#
+#
/^[ \t]*use[ \t]+/ {
@@ -30,7 +30,7 @@ BEGIN { IGNORECASE=1
if ( $0 ~ /_EXTERNAL/ ) next
# Assume the second field is the F90 module name,
- # remove any comma at the end of the second field (due to
+ # remove any comma at the end of the second field (due to
# ONLY or rename), and print it in a dependency line.
sub(/,$/,"",$2)
@@ -49,8 +49,8 @@ BEGIN { IGNORECASE=1
if ( $0 ~ /_EXTERNAL/ ) next
# Remove starting or ending quote or angle bracket
- sub(/^["<']/,"",$2)
- sub(/[">']$/,"",$2)
+ sub(/^["<']/,"",$2)
+ sub(/[">']$/,"",$2)
print PRLINE $2
-
+
}
diff --git a/cime/externals/pio2/examples/basic/gdecomp_mod.F90 b/cime/externals/pio2/examples/basic/gdecomp_mod.F90
index a4c08718fb65..e4f1921452ee 100644
--- a/cime/externals/pio2/examples/basic/gdecomp_mod.F90
+++ b/cime/externals/pio2/examples/basic/gdecomp_mod.F90
@@ -32,7 +32,7 @@ module gdecomp_mod
character(len=128):: nml_file ! namelist filename if used
character(len=16) :: nml_var ! namelist variable if used
end type
-
+
character(len=*),parameter :: modname = 'gdecomp_mod'
integer(i4),parameter :: master_task = 0
@@ -51,7 +51,7 @@ subroutine gdecomp_set(gdecomp,nxg,nyg,nzg,gdx,gdy,gdz,bdx,bdy,bdz, &
type(gdecomp_type), intent(inout) :: gdecomp
! NOTE: not all of these are optional, but optional allows
-! them to be called in arbitrary order
+! them to be called in arbitrary order
integer(i4),optional :: nxg,nyg,nzg ! global grid size
integer(i4),optional :: gdx,gdy,gdz ! block size
@@ -175,7 +175,7 @@ subroutine gdecomp_set(gdecomp,nxg,nyg,nzg,gdx,gdy,gdz,bdx,bdy,bdz, &
endif
end subroutine gdecomp_set
-
+
!==================================================================
subroutine gdecomp_read_nml(gdecomp,nml_file,nml_var,my_task,ntasks,gdims)
@@ -264,7 +264,7 @@ subroutine gdecomp_read_nml(gdecomp,nml_file,nml_var,my_task,ntasks,gdims)
endif
end subroutine gdecomp_read_nml
-
+
!==================================================================
subroutine gdecomp_print(gdecomp)
@@ -386,7 +386,7 @@ subroutine gdecomp_DOF(gdecomp,my_task,DOF,start,count,write_decomp,test)
!DBG print *,'IAM: ',my_task,'gdecomp_DOF: point #3 gsiz:',gsiz
!DBG print *,'IAM: ',my_task,'gdecomp_DOF: point #3 bsiz:',bsiz
- if(wdecomp) then
+ if(wdecomp) then
allocate(blkid(gsiz(1),gsiz(2),gsiz(3)))
allocate(tskid(gsiz(1),gsiz(2),gsiz(3)))
blkid = -1
@@ -564,7 +564,7 @@ subroutine gdecomp_DOF(gdecomp,my_task,DOF,start,count,write_decomp,test)
! ii = (n3-1)*gsiz(2)*gsiz(1) + (n2-1)*gsiz(1) + n1
nbxyz = ((n3-1)/bsiz(3))*nblk(2)*nblk(1) + ((n2-1)/bsiz(2))*nblk(1) + &
((n1-1)/bsiz(1)) + 1
- if(wdecomp) then
+ if(wdecomp) then
blkid(n1,n2,n3) = bxyzbord(nbxyz)
tskid(n1,n2,n3) = bxyzpord(nbxyz)
endif
@@ -581,7 +581,7 @@ subroutine gdecomp_DOF(gdecomp,my_task,DOF,start,count,write_decomp,test)
cntmax = maxval(cnta)
! --- map gridcells to dof ---
-
+
if (testonly) then
allocate(testdof(cntmax,0:gnpes-1))
testdof = 0
@@ -669,7 +669,7 @@ subroutine gdecomp_DOF(gdecomp,my_task,DOF,start,count,write_decomp,test)
write(6,*) trim(subname),' start and count could NOT be computed '
endif
-!------- MASTER TASK WRITE -------------------------------------
+!------- MASTER TASK WRITE -------------------------------------
if (my_task == master_task) then
@@ -720,7 +720,7 @@ subroutine gdecomp_DOF(gdecomp,my_task,DOF,start,count,write_decomp,test)
if (first_call) then
rcode = nf90_create(ncname,nf90_clobber,ncid)
else
- rcode = nf90_open(ncname,nf90_write,ncid)
+ rcode = nf90_open(ncname,nf90_write,ncid)
endif
rcode = nf90_redef(ncid)
dname = trim(gdecomp%nml_var)//'_nx'
@@ -742,9 +742,9 @@ subroutine gdecomp_DOF(gdecomp,my_task,DOF,start,count,write_decomp,test)
endif ! testonly
-!------- END MASTER TASK WRITE ---------------------------------
+!------- END MASTER TASK WRITE ---------------------------------
- if(wdecomp) then
+ if(wdecomp) then
deallocate(blkid,tskid)
endif
deallocate(cnta,cntb,bxyzbord,bxyzpord,bordpord)
@@ -948,7 +948,7 @@ subroutine calcbsiz(npes,gsiz,bsiz,option,ierr)
npes2 = npes2/m
bs = bs - 1
else
- write(6,*) trim(subname),' ERROR: bsiz not allowed ',n,gsiz(n),bsiz(n),m,npes,npes2
+ write(6,*) trim(subname),' ERROR: bsiz not allowed ',n,gsiz(n),bsiz(n),m,npes,npes2
call piodie(__FILE__,__LINE__)
endif
endif
@@ -1116,7 +1116,7 @@ end subroutine piodie
subroutine mpas_decomp_generator(dim1,dim2,dim3,my_task,fname,dof)
integer :: dim1, dim2, dim3
integer, intent(in) :: my_task ! my MPI rank
- character(len=*),intent(in) :: fname ! name of MPAS partition file
+ character(len=*),intent(in) :: fname ! name of MPAS partition file
integer(kind=pio_offset_kind), pointer :: dof(:)
! Local variables
@@ -1136,7 +1136,7 @@ subroutine mpas_decomp_generator(dim1,dim2,dim3,my_task,fname,dof)
! 1st dimension: vertical
! 2nd dimension: horizontal
- gnz = dim1
+ gnz = dim1
nCellsGlobal = dim2*dim3
call get_global_id_list(my_task,fname,nCellsSolve,nCellsGlobal,globalIDList)
@@ -1201,7 +1201,7 @@ end subroutine get_global_id_list
subroutine camlike_decomp_generator(gnx, gny, gnz, myid, ntasks, npr_yz, dof)
integer, intent(in) :: gnx, gny, gnz, myid, ntasks, npr_yz(4)
- integer(kind=pio_offset_kind), pointer :: dof(:), tdof(:), tchk(:)
+ integer(kind=pio_offset_kind), pointer :: dof(:), tdof(:), tchk(:)
real, pointer :: rdof(:)
integer(kind=pio_offset_kind) :: dofsize,tdofsize
@@ -1284,7 +1284,7 @@ subroutine camlike_decomp_generator(gnx, gny, gnz, myid, ntasks, npr_yz, dof)
end do
end do
- CALL qsRecursive(1_PIO_OFFSET_KIND, dofsize, dof) !kicks off the recursive
+ CALL qsRecursive(1_PIO_OFFSET_KIND, dofsize, dof) !kicks off the recursive
deallocate(tdof)
@@ -1327,7 +1327,7 @@ integer(kind=pio_offset_kind) FUNCTION qsPartition (loin, hiin, list)
hi = hi - 1
END DO
IF (hi /= lo) then !move the entry indexed by hi to left side of partition
- list(lo) = list(hi)
+ list(lo) = list(hi)
lo = lo + 1
END IF
DO !move in from the left
@@ -1335,7 +1335,7 @@ integer(kind=pio_offset_kind) FUNCTION qsPartition (loin, hiin, list)
lo = lo + 1
END DO
IF (hi /= lo) then !move the entry indexed by hi to left side of partition
- list(hi) = list(lo)
+ list(hi) = list(lo)
hi = hi - 1
END IF
END DO
diff --git a/cime/externals/pio2/examples/basic/namelist_mod.F90 b/cime/externals/pio2/examples/basic/namelist_mod.F90
index 1dbd343cb2d1..fa237a719844 100644
--- a/cime/externals/pio2/examples/basic/namelist_mod.F90
+++ b/cime/externals/pio2/examples/basic/namelist_mod.F90
@@ -12,7 +12,7 @@ module namelist_mod
use pio_support, only : piodie, CheckMPIReturn ! _EXTERNAL
use pio, only : pio_offset_kind
- implicit none
+ implicit none
private
public :: broadcast_namelist
@@ -21,7 +21,7 @@ module namelist_mod
integer(kind=i4), public, parameter :: buffer_size_str_len = 20
integer(kind=i4), public, parameter :: true_false_str_len = 6
integer(kind=i4), public, parameter :: romio_str_len = 10
-
+
logical, public, save :: async
integer(i4), public, save :: nx_global,ny_global,nz_global
integer(i4), public, save :: rearr_type
@@ -49,9 +49,9 @@ module namelist_mod
integer(kind=i4), public, save :: set_lustre_values = 0 !! Set to one for true
integer(kind=i4), public, save :: lfs_ost_count = 1
-
+
character(len=80), save, public :: compdof_input
- character(len=80), save, public :: iodof_input
+ character(len=80), save, public :: iodof_input
character(len=80), save, public :: compdof_output
character(len=256), save, public :: part_input
character(len=256), save, public :: casename
@@ -125,7 +125,7 @@ subroutine ReadTestPIO_Namelist(device, nprocs, filename, caller, ierror)
character(len=*), parameter :: myname_=myname//'ReadPIO_Namelist'
!-------------------------------------------------
- ! set default values for namelist io_nml variables
+ ! set default values for namelist io_nml variables
!-------------------------------------------------
async = .false.
@@ -175,14 +175,14 @@ subroutine ReadTestPIO_Namelist(device, nprocs, filename, caller, ierror)
open (device, file=filename,status='old',iostat=ierror)
- if(ierror /= 0) then
+ if(ierror /= 0) then
write(*,*) caller,'->',myname_,':: Error opening file ',filename, &
' on device ',device,' with iostat=',ierror
ierror = -1
else
ierror = 1
endif
-
+
do while (ierror > 0)
read(device, nml=io_nml, iostat=ierror)
enddo
@@ -318,7 +318,7 @@ subroutine ReadTestPIO_Namelist(device, nprocs, filename, caller, ierror)
stride = (nprocs-base)/num_iotasks
endif
elseif (nprocsIO <= 0) then
-#ifdef BGx
+#ifdef BGx
! A negative value for num_iotasks has a special meaning on Blue Gene
num_iotasks = nprocsIO
#else
@@ -333,7 +333,7 @@ subroutine ReadTestPIO_Namelist(device, nprocs, filename, caller, ierror)
endif
!------------------------------------------------
- ! reset stride if there are not enough processors
+ ! reset stride if there are not enough processors
!------------------------------------------------
if (base + num_iotasks * (stride-1) > nprocs-1) then
stride = FLOOR(real((nprocs - 1 - base),kind=r8)/real(num_iotasks,kind=r8))
@@ -342,9 +342,9 @@ subroutine ReadTestPIO_Namelist(device, nprocs, filename, caller, ierror)
!-------------------------------------------------------
! If rearrangement is 'none' reset to the proper values
!-------------------------------------------------------
- if(trim(rearr) == 'none') then
+ if(trim(rearr) == 'none') then
stride = 1
- num_iotasks = nprocs
+ num_iotasks = nprocs
endif
write(*,*) trim(string),' n_iotasks = ',num_iotasks,' (updated)'
@@ -381,7 +381,7 @@ subroutine Broadcast_Namelist(caller, myID, root, comm, ierror)
integer(i4) :: itmp
!------------------------------------------
- ! broadcast namelist info to all processors
+ ! broadcast namelist info to all processors
!------------------------------------------
if(async) then
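The nprocsIO/base/stride handling in ReadTestPIO_Namelist above includes a "reset stride if there are not enough processors" clamp. A small self-contained sketch of that check follows, with illustrative numbers that are assumptions rather than values from the patch: 8 I/O tasks at stride 4 starting from base 1 do not fit inside 16 ranks, so the stride collapses to 1.

    program stride_clamp_sketch
      ! Mirrors the stride reset shown in namelist_mod.F90: if the requested
      ! I/O tasks do not fit in the available ranks, recompute the stride.
      implicit none
      integer, parameter :: r8 = selected_real_kind(12)
      integer :: nprocs, base, num_iotasks, stride

      nprocs = 16; base = 1; num_iotasks = 8; stride = 4    ! illustrative values
      if (base + num_iotasks * (stride-1) > nprocs-1) then
         stride = floor(real(nprocs - 1 - base, kind=r8) / real(num_iotasks, kind=r8))
      end if
      write(*,*) 'clamped stride =', stride    ! prints 1 for these inputs
    end program stride_clamp_sketch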
diff --git a/cime/externals/pio2/examples/basic/perl5lib/ChangeLog b/cime/externals/pio2/examples/basic/perl5lib/ChangeLog
index a5bbed9f87b9..d5bfab836835 100644
--- a/cime/externals/pio2/examples/basic/perl5lib/ChangeLog
+++ b/cime/externals/pio2/examples/basic/perl5lib/ChangeLog
@@ -6,7 +6,7 @@ Originator(s): erik
Date: Sat Jun 13, 2009
One-line Summary: Add %ymd indicator for streams so can do year-month-days
-M Streams/Template.pm ---- Add ability to write out %ymd year-month-day
+M Streams/Template.pm ---- Add ability to write out %ymd year-month-day
for filenames in streams. It assumes a noleap
calendar -- could easily be extended to make
Gregorian optional.
@@ -14,7 +14,7 @@ M t/01.t ---- Change formatting of successful test
M t/02.t ---- Add more tests for %ymd, and offset
M t/03.t ---- Change formatting of successful test
M t/04.t ---- Change formatting of successful test
-M t/datm.streams.txt ---------- Add another year and the last-month
+M t/datm.streams.txt ---------- Add another year and the last-month
to start for testing
A t/datm.ymd.streams.txt ------ Add streams test file with %ymd
M t/datm.template.streams.xml - Add CPLHIST test section with %ymd
@@ -27,7 +27,7 @@ Date: Tue Jun 9, 2009
One-line Summary: add offset support for streams template
M Streams/Template.pm
-
+
==============================================================
Tag name: perl5lib_090424
Originator(s): erik
@@ -79,7 +79,7 @@ Build/Namelist.pm
. Change validate_variable_value() from an object method to a class method,
and remove the unused argument.
. add fix to _split_namelist_value method to replace embedded newlines by
- spaces.
+ spaces.
Build/NamelistDefaults.pm
. make the method interfaces case insensitive by converting all variable
@@ -146,7 +146,7 @@ Originator(s): erik (KLUZEK ERIK 1326 CGD)
Date: Mon Aug 11 10:44:52 MDT 2008
One-line Summary: Turn off printing of file existence if NOT -verbose
-M Streams/Template.pm ----------- Turn off printing of file
+M Streams/Template.pm ----------- Turn off printing of file
checking if NOT $printing;
==============================================================
@@ -190,8 +190,8 @@ about needing to do validation as is done now. Change the validate methods a bit
and make them more robust.
M Build/Config.pm --------------- Add get_valid_values method and use it internally.
-M Build/NamelistDefinition.pm --- Add namelist validate_variable_value to validate
- method. Add option to return without quotes to
+M Build/NamelistDefinition.pm --- Add namelist validate_variable_value to validate
+ method. Add option to return without quotes to
get_valid_values method.
M Build/Namelist.pm ------------- Make validate_variable_value more robust.
diff --git a/cime/externals/pio2/examples/basic/perl5lib/XML/Changes b/cime/externals/pio2/examples/basic/perl5lib/XML/Changes
index d7ad4ec13882..d0be5104f77e 100644
--- a/cime/externals/pio2/examples/basic/perl5lib/XML/Changes
+++ b/cime/externals/pio2/examples/basic/perl5lib/XML/Changes
@@ -1,27 +1,27 @@
-Revision history for Perl extension XML::Lite.
-
-0.14 31 January 2003
- - Fixed a major bug in parsing empty elements
- - Fixed some typos in documenation
- - Fixed error in documentation of XML::Element::get_attributes interface
-0.13 13 November 2001
- - Minor bug fixes?
-0.12 15 November 2001
- - Fixed bugs in test that failed on CPAN Testers
- - Fixed warnings in XML::Lite::Element->_find_self
- - Fixed bug where mutiple child lists failed (problem in opt code)
- - Added tests for above
- - Removed from CPAN because Matt Sergeant got upset
-0.11 6 November 2001
- - XML::Lite::Element->get_text() now removes CDATA tags (but leaves content)
-0.10 6 November 2001
- - Fixed children() and text() methods by re-vamping the
- tree.
- - Built tests for all exposed methods of all objects
- - Built tests for all contructor calls
-0.05 4 November 2001
- - Added get_text method
-0.01 Sat Aug 25 13:31:48 2001
- - original version; created by h2xs 1.20 with options
- -XA -n XML::Lite
-
+Revision history for Perl extension XML::Lite.
+
+0.14 31 January 2003
+ - Fixed a major bug in parsing empty elements
+ - Fixed some typos in documentation
+ - Fixed error in documentation of XML::Element::get_attributes interface
+0.13 13 November 2001
+ - Minor bug fixes?
+0.12 15 November 2001
+ - Fixed bugs in test that failed on CPAN Testers
+ - Fixed warnings in XML::Lite::Element->_find_self
+ - Fixed bug where multiple child lists failed (problem in opt code)
+ - Added tests for above
+ - Removed from CPAN because Matt Sergeant got upset
+0.11 6 November 2001
+ - XML::Lite::Element->get_text() now removes CDATA tags (but leaves content)
+0.10 6 November 2001
+ - Fixed children() and text() methods by re-vamping the
+ tree.
+ - Built tests for all exposed methods of all objects
+ - Built tests for all contructor calls
+0.05 4 November 2001
+ - Added get_text method
+0.01 Sat Aug 25 13:31:48 2001
+ - original version; created by h2xs 1.20 with options
+ -XA -n XML::Lite
+
diff --git a/cime/externals/pio2/examples/basic/perl5lib/XML/Lite.pm b/cime/externals/pio2/examples/basic/perl5lib/XML/Lite.pm
index c1f7c821eae9..d6aa32e978c0 100644
--- a/cime/externals/pio2/examples/basic/perl5lib/XML/Lite.pm
+++ b/cime/externals/pio2/examples/basic/perl5lib/XML/Lite.pm
@@ -35,12 +35,12 @@ my $xml = new XML::Lite( xml => 'a_file.xml' );
=head1 DESCRIPTION
-XML::Lite is a lightweight XML parser, with basic element traversing
-methods. It is entirely self-contained, pure Perl (i.e. I based on
-expat). It provides useful methods for reading most XML files, including
-traversing and finding elements, reading attributes and such. It is
-designed to take advantage of Perl-isms (Attribute lists are returned as
-hashes, rather than, say, lists of objects). It provides only methods
+XML::Lite is a lightweight XML parser, with basic element traversing
+methods. It is entirely self-contained, pure Perl (i.e. I<not> based on
+expat). It provides useful methods for reading most XML files, including
+traversing and finding elements, reading attributes and such. It is
+designed to take advantage of Perl-isms (Attribute lists are returned as
+hashes, rather than, say, lists of objects). It provides only methods
for reading a file, currently.
=head1 METHODS
@@ -50,7 +50,7 @@ The following methods are available:
=over 4
=cut
-
+
use XML::Lite::Element;
BEGIN {
use vars qw( $VERSION @ISA );
@@ -75,17 +75,17 @@ use vars qw( %ERRORS );
=item my $xml = new XML::Lite( xml => $source[, ...] );
Creates a new XML::Lite object. The XML::Lite object acts as the document
-object for the $source that is sent to it to parse. This means that you
-create a new object for each document (or document sub-section). As the
+object for the $source that is sent to it to parse. This means that you
+create a new object for each document (or document sub-section). As the
objects are lightweight this should not be a performance consideration.
The object constructor can take several named parameters. Parameter names
-may begin with a '-' (as in the example above) but are not required to. The
+may begin with a '-' (as in the example above) but are not required to. The
following parameters are recognized.
- xml The source XML to parse. This can be a filename, a scalar that
+ xml The source XML to parse. This can be a filename, a scalar that
contains the document (or document fragment), or an IO handle.
-
+
As a convenience, if only one parameter is given, it is assumed to be the source.
So you can use this, if you wish:
@@ -99,7 +99,7 @@ sub new {
my $proto = shift;
my %parms;
my $class = ref($proto) || $proto;
-
+
# Parse parameters
$self->{settings} = {};
if( @_ > 1 ) {
@@ -109,7 +109,7 @@ sub new {
while( ($k, $v) = each %parms ) {
$k =~ s/^-//; # Removed leading '-' if it exists. (Why do Perl programmers use this?)
$self->{settings}{$k} = $v;
- } # end while
+ } # end while
} else {
$self->{settings}{xml} = $_[0];
} # end if;
@@ -121,10 +121,10 @@ sub new {
$self->{doc} = '';
$self->{_CDATA} = [];
$self->{handlers} = {};
-
+
# Refer to global error messages
$self->{ERRORS} = $self->{settings}{error_messages} || \%ERRORS;
-
+
# Now parse the XML document and build look-up tables
return undef unless $self->_parse_it();
@@ -181,8 +181,8 @@ sub root_element {
Returns a list of all elements that match C<$name>.
C<@list> is a list of L<XML::Lite::Element> objects
If called in a scalar context, this will return the
-first element found that matches (it's more efficient
-to call in a scalar context than assign the results
+first element found that matches (it's more efficient
+to call in a scalar context than assign the results
to a list of one scalar).
If no matching elements are found then returns C<undef>
@@ -201,7 +201,7 @@ sub element_by_name;
sub elements_by_name {
my $self = shift;
my( $name ) = @_;
-
+
if( wantarray ) {
my @list = ();
foreach( @{$self->{elements}{$name}} ) {
@@ -241,7 +241,7 @@ sub elements_by_name {
# ----------------------------------------------------------
sub _parse_it {
my $self = shift;
-
+
# Get the xml content
if( $self->{settings}{xml} =~ /^\s* ) {
$self->{doc} = $self->{settings}{xml};
@@ -268,26 +268,26 @@ sub _parse_it {
$self->{doc_offset} = length $1; # Store the number of removed chars for messages
} # end if
$self->{doc} =~ s/\s+$//;
-
-
+
+
# Build lookup tables
$self->{elements} = {};
$self->{tree} = [];
# - These are used in the building process
my $element_list = [];
my $current_element = $self->{tree};
-
+
# Call init handler if defined
&{$self->{handlers}{init}}($self) if defined $self->{handlers}{init};
-
+
# Make a table of offsets to each element start and end point
# Table is a hash of element names to lists of offsets:
# [start_tag_start, start_tag_end, end_tag_start, end_tag_end]
# where tags include the '<' and '>'
-
- # Also make a tree of linked lists. List contains root element
+
+ # Also make a tree of linked lists. List contains root element
# and other nodes. Each node consists of a list ref (the position list)
- # and a following list containing the child element. Text nodes are
+ # and a following list containing the child element. Text nodes are
# a list ref (with just two positions).
# Find the opening and closing of the XML, giving errors if not well-formed
@@ -297,22 +297,22 @@ sub _parse_it {
$self->_error( 'ROOT_NOT_CLOSED', $start_pos + $self->{doc_offset} ) if $end_pos == -1;
my $doc_end = rindex( $self->{doc}, '>' );
$self->_error( 'ROOT_NOT_CLOSED' ) if $doc_end == -1;
-
+
# Now walk through the document, one tag at a time, building up our
# lookup tables
while( $end_pos <= $doc_end ) {
-
+
# Get a tag
my $tag = substr( $self->{doc}, $start_pos, $end_pos - $start_pos + 1 );
# Get the tag name and see if it's an end tag (starts with </)
my( $end, $name ) = $tag =~ m{^<\s*(/?)\s*([^/>\s]+)};
-
+
if( $end ) {
# If there is no start tag for this end tag then throw an error
$self->_error( 'NO_START', $start_pos + $self->{doc_offset}, $tag ) unless defined $self->{elements}{$name};
-
- # Otherwise, add the end point to the array for the last element in
+
+ # Otherwise, add the end point to the array for the last element in
# the by-name lookup hash
my( $x, $found ) = (@{$self->{elements}{$name}} - 1, 0);
while( $x >= 0 ) {
@@ -329,24 +329,24 @@ sub _parse_it {
# If we didn't find an open element then throw an error
$self->_error( 'NO_START', $start_pos + $self->{doc_offset}, $tag ) unless $found;
-
+
# Call an end-tag handler if defined (not yet exposed)
&{$self->{handlers}{end}}($self, $name) if defined $self->{handlers}{end};
-
+
# Close element in linked list (tree)
$current_element = pop @$element_list;
-
+
} else {
- # Make a new list in the by-name lookup hash if none found by this name yet
+ # Make a new list in the by-name lookup hash if none found by this name yet
$self->{elements}{$name} = [] unless defined $self->{elements}{$name};
-
+
# Add start points to the array of positions and push it on the hash
my $pos_list = [$start_pos, $end_pos];
push @{$self->{elements}{$name}}, $pos_list;
-
+
# Call start-tag handler if defined (not yet exposed)
&{$self->{handlers}{start}}($self, $name) if defined $self->{handlers}{start};
-
+
# If this is a single-tag element (e.g. <.../>) then close it immediately
if( $tag =~ m{/\s*>$} ) {
push @$current_element, $pos_list;
@@ -364,7 +364,7 @@ sub _parse_it {
} # end if
} # end if
-
+
# Move the start pointer to beginning of next element
$start_pos = index( $self->{doc}, '<', $start_pos + 1 );
last if $start_pos == -1 || $end_pos == $doc_end;
@@ -372,16 +372,16 @@ sub _parse_it {
# Now $end_pos is end of old tag and $start_pos is start of new
# So do things on the data between the tags as needed
if( $start_pos - $end_pos > 1 ) {
- # Call any character data handler
+ # Call any character data handler
&{$self->{handlers}{char}}($self, substr($self->{doc}, $end_pos + 1, $start_pos - $end_pos - 1)) if defined $self->{handlers}{char};
# Inserting the text into the linked list as well
# push @$current_element, [$end_pos + 1, $start_pos - 1];
} # end if
-
+
# Now finish by incrementing the parser to the next element
$end_pos = index( $self->{doc}, '>', $start_pos + 1 );
-
+
# If there is no next element, and we're not at the end of the document,
# then throw an error
$self->_error( 'ELM_NOT_CLOSED', $start_pos + $self->{doc_offset} ) if $end_pos == -1;
@@ -401,7 +401,7 @@ sub _parse_it {
#
# Returns: Scalar content of $file, undef on error
#
-# Description: Reads from $file and returns the content.
+# Description: Reads from $file and returns the content.
# $file may be either a filename or an IO handle
# ----------------------------------------------------------
# Date Modification Author
@@ -412,7 +412,7 @@ sub _get_a_file {
my $self = shift;
my $file = shift;
my $content = undef;
-
+
# If it's a ref and a handle, then read that
if( ref($file) ) {
$content = join '', <$file>;
@@ -422,12 +422,12 @@ sub _get_a_file {
open( XML, $file ) || return undef;
$content = join '', <XML>;
close XML || return undef;
- }
+ }
# Don't know how to handle this type of parameter
else {
return undef;
} # end if
-
+
return $content;
} # end _get_a_file
@@ -448,10 +448,10 @@ sub _error {
my $self = shift;
my( $code, @args ) = @_;
my $msg = $self->{ERRORS}{$code};
-
+
# Handle replacement codes
$msg =~ s/\%(\d+)/$args[$1]/g;
-
+
# Throw exception
die ref($self) . ":$msg\n";
} # end _error
@@ -462,7 +462,7 @@ sub _error {
#
# Args: $content
#
-# Returns: A reference to the CDATA element, padded to
+# Returns: A reference to the CDATA element, padded to
# original size.
#
# Description: Stores the CDATA element in the internal
@@ -498,13 +498,13 @@ sub _store_cdata {
sub _dump_tree {
my $self = shift;
my $node = shift || $self->{tree};
-
+
my $tree = '';
for( my $i = 0; $i < scalar(@$node) && defined $node->[$i]; $i++ ) {
if( (scalar(@{$node->[$i]}) == 4) && (defined $node->[$i][2]) ) {
$tree .= '[' . join( ',', @{$node->[$i]} ) . "] "
- . substr($self->{doc}, $node->[$i][0], $node->[$i][1] - $node->[$i][0] + 1)
- . "..."
+ . substr($self->{doc}, $node->[$i][0], $node->[$i][1] - $node->[$i][0] + 1)
+ . "..."
. substr($self->{doc}, $node->[$i][2], $node->[$i][3] - $node->[$i][2] + 1) . " (child $i)\n";
# Do child list
$i++;
@@ -530,7 +530,7 @@ END { }
=head1 BUGS
Lots. This 'parser' (Matt Sergeant takes umbrage at my use of that word) will handle some XML
-documents, but not all.
+documents, but not all.
=head1 VERSION
diff --git a/cime/externals/pio2/examples/basic/perl5lib/XML/Lite/Element.pm b/cime/externals/pio2/examples/basic/perl5lib/XML/Lite/Element.pm
index c611d6cd17fd..388511d89a01 100644
--- a/cime/externals/pio2/examples/basic/perl5lib/XML/Lite/Element.pm
+++ b/cime/externals/pio2/examples/basic/perl5lib/XML/Lite/Element.pm
@@ -33,18 +33,18 @@ print $elm->get_attribute( 'attribute_name' );
=head1 DESCRIPTION
-C objects contain rudimentary methods for querying XML
-elements in an XML document as parsed by XML::Lite. Usually these objects
+C<XML::Lite::Element> objects contain rudimentary methods for querying XML
+elements in an XML document as parsed by XML::Lite. Usually these objects
are returned by method calls in XML::Lite.
=head1 METHODS
-The following methods are available. All methods like 'get_name' can be
+The following methods are available. All methods like 'get_name' can be
abbreviated as 'name.'
=over 4
-=cut
+=cut
use strict;
BEGIN {
@@ -63,8 +63,8 @@ use vars qw();
Creates a new XML::Lite::Element object from the XML::Lite object, C<$owner_document>.
-Currently, you must not call this manually. You can create an object with one of
-the 'factory' methods in XML::Lite, such as C or C
+Currently, you must not call this manually. You can create an object with one of
+the 'factory' methods in XML::Lite, such as C or C
or with one of the XML::Lite::Element 'factory' methods below, like C.
=cut
@@ -77,15 +77,15 @@ sub new {
# The arguments are as follows:
# $owner_document is an XML::Lite object within which this element lives
# \@pointers is a two or four element array ref containing the offsets
- # into the original document of the start and end points of
+ # into the original document of the start and end points of
# the opening and closing (when it exists) tags for the element
-
+
# Validate arguments
return undef unless @_ >= 2;
return undef unless ref($_[0]) && (ref($_[1]) eq 'ARRAY');
-
+
# Load 'em up
-
+
# The data structure for the ::Element object has these properties
# doc A reference to the containing XML::Lite object
# node A reference to an array of pointers to our element in the document
@@ -94,11 +94,11 @@ sub new {
# name The name on our tag
# _attrs A string of the attributes in our tag (unparsed)
# attrs A hash ref of attributes in our tag
-
+
$self->{doc} = $_[0];
$self->{node} = $_[1];
-
- # Using the pointers, find out tag name, and attribute list from the
+
+ # Using the pointers, find out tag name, and attribute list from the
# opening tag (if there are any attributes).
my $tag = substr( $self->{doc}{doc}, $self->{node}[0], $self->{node}[1] - $self->{node}[0] + 1 );
if( $tag =~ m{^<\s*([^/>\s]+)\s+([^>]+)\s*/?\s*>$} ) {
@@ -111,7 +111,7 @@ sub new {
# Should have been caught in the parsing! maybe an assert?
$self->{doc}->_error( 'ELM_NOT_CLOSED', $self->{node}[0] + $self->{doc}->{doc_offset} );
} # end if
-
+
# Good. Now returns it.
bless ($self, $class);
return $self;
@@ -142,16 +142,16 @@ sub content;
sub get_content {
my $self = shift;
- # If we don't have any content, then we should return
+ # If we don't have any content, then we should return
# '' right away.
return '' unless defined $self->{node}[2];
-
+
# Using our pointers, find everything between our tags
my $content = substr( $self->{doc}{doc}, $self->{node}[1] + 1, $self->{node}[2] - $self->{node}[1] - 1 );
-
+
# Now, restore any CDATA chunks that may have been pulled out
$content =~ s//{doc}{_CDATA}[$1]]]>/g;
-
+
# And return the content
return $content;
} # end get_content
@@ -173,11 +173,11 @@ sub attributes;
*attributes = \&get_attributes;
sub get_attributes {
my $self = shift;
-
+
# Parse the attribute string into a hash of name-value pairs
# unless we've already done that.
$self->_parse_attrs() unless defined $self->{attrs};
-
+
# Just return a *copy* of the hash (this is read-only after all!)
if ( defined($self->{attrs}) ) {
return %{$self->{attrs}};
@@ -202,10 +202,10 @@ sub attribute;
sub get_attribute {
my $self = shift;
my( $name ) = @_;
-
+
# If we haven't parsed the attribute string into a hash, then do that.
$self->_parse_attrs() unless defined $self->{attrs};
-
+
# Now return the requested attribute. If it's not there
# then 'undef' is returned
return $self->{attrs}{$name};
@@ -233,9 +233,9 @@ sub get_name {
=item my @children = $element->get_children()
-Returns a list of XML::Lite::Element objects for each element contained
-within the current element. This does not return any text or CDATA in
-the content of this element. You can parse that through the L<get_content>
+Returns a list of XML::Lite::Element objects for each element contained
+within the current element. This does not return any text or CDATA in
+the content of this element. You can parse that through the L<get_content>
method.
If no child elements exist then an empty list is returned.
@@ -256,7 +256,7 @@ sub get_children {
my $self = shift;
my @children = ();
- # If we don't have any content, then we should return an emtpty
+ # If we don't have any content, then we should return an emtpty
# list right away -- we have no children.
return @children unless defined $self->{node}[2];
@@ -264,8 +264,8 @@ sub get_children {
# This will also load {children} and {parent} as well
$self->_find_self() unless defined $self->{self};
- # Now that we know who we are (if this didn't fail) we can
- # iterate through the sub nodes (our child list) and make
+ # Now that we know who we are (if this didn't fail) we can
+ # iterate through the sub nodes (our child list) and make
# XML::Lite::Elements objects for each child
if( defined $self->{children} ) {
my $i = 0;
@@ -276,7 +276,7 @@ sub get_children {
$node = $self->{children}[++$i];
} # end while
} # end if
-
+
return @children;
} # end get_children
@@ -304,14 +304,14 @@ sub get_text {
my $self = shift;
my $content = '';
- # If we don't have any content, then we should return
+ # If we don't have any content, then we should return
# $content right away -- we have no text
return $content unless defined $self->{node}[2];
# Otherwise get out content and children
my @children = $self->get_children;
my $orig_content = $self->get_content;
-
+
# Then remove the child elements from our content
my $start = 0;
foreach( @children ) {
@@ -320,10 +320,10 @@ sub get_text {
$start = ($_->{node}[3] || $_->{node}[1]) - $self->{node}[1];
} # end foreach
$content .= substr( $orig_content, $start ) if $start < length($orig_content);
-
+
# Remove the CDATA wrapper, preserving the content
$content =~ s/<!\[CDATA\[(.*?)\]\]>/$1/g;
-
+
# Return the left-over text
return $content;
} # end get_text
@@ -352,7 +352,7 @@ sub get_text {
# ----------------------------------------------------------
sub _parse_attrs {
my $self = shift;
-
+
my $attrs = $self->{_attrs};
if ( defined($attrs) ) {
$attrs =~ s/^\s+//;
@@ -364,7 +364,7 @@ sub _parse_attrs {
$attrs =~ s/^\s+//;
} # end while
}
-
+
return 1;
} # end _parse_atttrs
@@ -376,7 +376,7 @@ sub _parse_attrs {
# Returns: A reference to our node or undef on error
#
# Description: Traverses the owner document's tree to find
-# the node that references the current element. Sets
+# the node that references the current element. Sets
# $self-{self} as a side-effect. Even if this is already set,
# _find_self will traverse again, so don't call unless needed.
# ----------------------------------------------------------
@@ -387,8 +387,8 @@ sub _parse_attrs {
# ----------------------------------------------------------
sub _find_self {
my $self = shift;
-
- # We actually just call this recusively, so the first
+
+ # We actually just call this recusively, so the first
# argument can be a starting point to descend from
# but we don't doc that above
my $node = shift || $self->{doc}{tree};
@@ -405,10 +405,10 @@ sub _find_self {
# If this is our self, then we're done!
# NOTE: Since the list references are the same in the by-name hash
# and tree objects, we can just do a reference compare here.
- # If objects are ever created with non-factory methods then we need to
+ # If objects are ever created with non-factory methods then we need to
# use a _compare_lists call.
-# if( _compare_lists( $node->[$i], $self->{node} ) ) {
- if( $node->[$i] eq $self->{node} ) {
+# if( _compare_lists( $node->[$i], $self->{node} ) ) {
+ if( $node->[$i] eq $self->{node} ) {
$self->{parent} = $node;
$self->{self} = $node->[$i];
# If this list has children, then add a pointer to that list
@@ -453,16 +453,16 @@ sub _find_self {
# ----------------------------------------------------------
sub _compare_lists {
my( $rA, $rB ) = @_;
-
+
# Lists are not equal unless same size
return 0 unless scalar(@$rA) == scalar(@$rB);
-
+
# Now compare item by item.
my $i;
for( $i = 0; $i < scalar(@$rA); $i++ ) {
return 0 unless $rA->[$i] eq $rB->[$i];
} # end for
-
+
return 1;
} # end _compare_lists
diff --git a/cime/externals/pio2/examples/basic/perl5lib/XML/README b/cime/externals/pio2/examples/basic/perl5lib/XML/README
index fa16ec054383..6234a760cec3 100644
--- a/cime/externals/pio2/examples/basic/perl5lib/XML/README
+++ b/cime/externals/pio2/examples/basic/perl5lib/XML/README
@@ -7,7 +7,7 @@ for most things you need to do with XML files.
It is not dependent on any other modules or external programs for installation.
-NOTE that this parser will do many things that you want with XML but
+NOTE that this parser will do many things that you want with XML but
not everything. It is not a validating parser! It will not handle
international characters (unless run on those systems). Use
at your own risk.
diff --git a/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite.3 b/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite.3
index f16455c6713c..f2b3912ed74a 100644
--- a/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite.3
+++ b/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite.3
@@ -141,73 +141,73 @@
.TH Lite 3 "perl v5.6.0" "2003-03-17" "User Contributed Perl Documentation"
.UC
.SH "NAME"
-\&\s-1XML:\s0:Lite \- A lightweight \s-1XML\s0 parser for simple files
+\&\s-1XML:\s0:Lite \- A lightweight \s-1XML\s0 parser for simple files
.SH "SYNOPSIS"
.IX Header "SYNOPSIS"
-use \s-1XML:\s0:Lite;
-my \f(CW$xml\fR = new \s-1XML:\s0:Lite( xml => 'a_file.xml' );
+use \s-1XML:\s0:Lite;
+my \f(CW$xml\fR = new \s-1XML:\s0:Lite( xml => 'a_file.xml' );
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
-\&\s-1XML:\s0:Lite is a lightweight \s-1XML\s0 parser, with basic element traversing
-methods. It is entirely self-contained, pure Perl (i.e. \fInot\fR based on
-expat). It provides useful methods for reading most \s-1XML\s0 files, including
-traversing and finding elements, reading attributes and such. It is
-designed to take advantage of Perl-isms (Attribute lists are returned as
-hashes, rather than, say, lists of objects). It provides only methods
-for reading a file, currently.
+\&\s-1XML:\s0:Lite is a lightweight \s-1XML\s0 parser, with basic element traversing
+methods. It is entirely self-contained, pure Perl (i.e. \fInot\fR based on
+expat). It provides useful methods for reading most \s-1XML\s0 files, including
+traversing and finding elements, reading attributes and such. It is
+designed to take advantage of Perl-isms (Attribute lists are returned as
+hashes, rather than, say, lists of objects). It provides only methods
+for reading a file, currently.
.SH "METHODS"
.IX Header "METHODS"
-The following methods are available:
+The following methods are available:
.Ip "my \f(CW$xml\fR = new \s-1XML:\s0:Lite( xml => \f(CW$source\fR[, ...] );" 4
.IX Item "my $xml = new XML::Lite( xml => $source[, ...] );"
-Creates a new \s-1XML:\s0:Lite object. The \s-1XML:\s0:Lite object acts as the document
-object for the \f(CW$source\fR that is sent to it to parse. This means that you
-create a new object for each document (or document sub-section). As the
-objects are lightweight this should not be a performance consideration.
+Creates a new \s-1XML:\s0:Lite object. The \s-1XML:\s0:Lite object acts as the document
+object for the \f(CW$source\fR that is sent to it to parse. This means that you
+create a new object for each document (or document sub-section). As the
+objects are lightweight this should not be a performance consideration.
.Sp
-The object constructor can take several named parameters. Parameter names
-may begin with a '\-' (as in the example above) but are not required to. The
-following parameters are recognized.
+The object constructor can take several named parameters. Parameter names
+may begin with a '\-' (as in the example above) but are not required to. The
+following parameters are recognized.
.Sp
.Vb 2
-\& xml The source XML to parse. This can be a filename, a scalar that
+\& xml The source XML to parse. This can be a filename, a scalar that
\& contains the document (or document fragment), or an IO handle.
.Ve
-As a convenince, if only on parameter is given, it is assumed to be the source.
-So you can use this, if you wish:
+As a convenince, if only on parameter is given, it is assumed to be the source.
+So you can use this, if you wish:
.Sp
.Vb 1
\& my $xml = new XML::Lite( 'file.xml' );
.Ve
.Ip "my \f(CW$elm\fR = \f(CW$xml\fR->\fIroot_element()\fR" 4
.IX Item "my $elm = $xml->root_element()"
-Returns a reference to an \s-1XML:\s0:Lite::Element object that represents
-the root element of the document.
+Returns a reference to an \s-1XML:\s0:Lite::Element object that represents
+the root element of the document.
.Sp
-Returns \f(CW\*(C`undef\*(C'\fR on errors.
+Returns \f(CW\*(C`undef\*(C'\fR on errors.
.Ip "@list = \f(CW$xml\fR->elements_by_name( \f(CW$name\fR )" 4
.IX Item "@list = $xml->elements_by_name( $name )"
-Returns a list of all elements that match \f(CW\*(C`$name\*(C'\fR.
-\&\f(CW\*(C`@list\*(C'\fR is a list of the XML::Lite::Element manpage objects
-If called in a scalar context, this will return the
-first element found that matches (it's more efficient
-to call in a scalar context than assign the results
-to a list of one scalar).
+Returns a list of all elements that match \f(CW\*(C`$name\*(C'\fR.
+\&\f(CW\*(C`@list\*(C'\fR is a list of the XML::Lite::Element manpage objects
+If called in a scalar context, this will return the
+first element found that matches (it's more efficient
+to call in a scalar context than assign the results
+to a list of one scalar).
.Sp
-If no matching elements are found then returns \f(CW\*(C`undef\*(C'\fR
-in scalar context or an empty list in array context.
+If no matching elements are found then returns \f(CW\*(C`undef\*(C'\fR
+in scalar context or an empty list in array context.
.SH "BUGS"
.IX Header "BUGS"
-Lots. This 'parser' (Matt Sergeant takes umbrance to my us of that word) will handle some \s-1XML\s0
-documents, but not all.
+Lots. This 'parser' (Matt Sergeant takes umbrance to my us of that word) will handle some \s-1XML\s0
+documents, but not all.
.SH "VERSION"
.IX Header "VERSION"
-0.14
+0.14
.SH "AUTHOR"
.IX Header "AUTHOR"
-Jeremy Wadsack for Wadsack-Allen Digital Group (dgsupport@wadsack-allen.com)
+Jeremy Wadsack for Wadsack-Allen Digital Group (dgsupport@wadsack-allen.com)
.SH "COPYRIGHT"
.IX Header "COPYRIGHT"
-Copyright 2001\-2003 Wadsack-Allen. All rights reserved.
-This library is free software; you can redistribute it and/or
-modify it under the same terms as Perl itself.
+Copyright 2001\-2003 Wadsack-Allen. All rights reserved.
+This library is free software; you can redistribute it and/or
+modify it under the same terms as Perl itself.
diff --git a/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite::Element.3 b/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite::Element.3
index f31d1336e462..5eaf684214b8 100644
--- a/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite::Element.3
+++ b/cime/externals/pio2/examples/basic/perl5lib/XML/man3/XML::Lite::Element.3
@@ -151,19 +151,19 @@ my \f(CW$elm\fR = \f(CW$xml\fR->elements_by_name( 'element_name' );
print \f(CW$elm\fR->get_attribute( 'attribute_name' );
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
-\&\f(CW\*(C`XML::Lite::Element\*(C'\fR objects contain rudimentary methods for querying \s-1XML\s0
-elements in an \s-1XML\s0 document as parsed by \s-1XML:\s0:Lite. Usually these objects
+\&\f(CW\*(C`XML::Lite::Element\*(C'\fR objects contain rudimentary methods for querying \s-1XML\s0
+elements in an \s-1XML\s0 document as parsed by \s-1XML:\s0:Lite. Usually these objects
are returned by method calls in \s-1XML:\s0:Lite.
.SH "METHODS"
.IX Header "METHODS"
-The following methods are available. All methods like 'get_name' can be
+The following methods are available. All methods like 'get_name' can be
abbeviated as 'name.'
.Ip "my \f(CW$element\fR = new \s-1XML:\s0:Lite::Element( \f(CW$owner_document\fR, \e@pointers );" 4
.IX Item "my $element = new XML::Lite::Element( $owner_document, @pointers );"
Creates a new \s-1XML:\s0:Lite::Element object from the \s-1XML:\s0:Lite object, \f(CW\*(C`$owner_document\*(C'\fR.
.Sp
-Currently, you must not call this manually. You can create an object with one of
-the 'factory' methods in \s-1XML:\s0:Lite, such as \f(CW\*(C`element_by_name\*(C'\fR or \f(CW\*(C`root_element\*(C'\fR
+Currently, you must not call this manually. You can create an object with one of
+the 'factory' methods in \s-1XML:\s0:Lite, such as \f(CW\*(C`element_by_name\*(C'\fR or \f(CW\*(C`root_element\*(C'\fR
or with one of the \s-1XML:\s0:Lite::Element 'factory' methods below, like \f(CW\*(C`get_children\*(C'\fR.
.Ip "my \f(CW$content\fR = \f(CW$element\fR->\fIget_content()\fR" 4
.IX Item "my $content = $element->get_content()"
@@ -180,9 +180,9 @@ Returns the value of the named attribute for this element.
Returns the name of the element tag
.Ip "my \f(CW@children\fR = \f(CW$element\fR->\fIget_children()\fR" 4
.IX Item "my @children = $element->get_children()"
-Returns a list of \s-1XML:\s0:Lite::Element objects for each element contained
-within the current element. This does not return any text or \s-1CDATA\s0 in
-the content of this element. You can parse that through the the get_content manpage
+Returns a list of \s-1XML:\s0:Lite::Element objects for each element contained
+within the current element. This does not return any text or \s-1CDATA\s0 in
+the content of this element. You can parse that through the the get_content manpage
method.
.Sp
If no child elements exist then an empty list is returned.
diff --git a/cime/externals/pio2/examples/basic/testdecomp.F90 b/cime/externals/pio2/examples/basic/testdecomp.F90
index 6ea015fd56eb..9684e14a19dd 100644
--- a/cime/externals/pio2/examples/basic/testdecomp.F90
+++ b/cime/externals/pio2/examples/basic/testdecomp.F90
@@ -7,7 +7,7 @@ program testdecomp
use gdecomp_mod
implicit none
-
+
integer, pointer :: compDOF(:), ioDOF(:)
integer :: startcomp(3),cntcomp(3)
integer :: startio(3),cntio(3),gdims(3)
@@ -23,7 +23,7 @@ program testdecomp
num_tasks = 192
gdims(1) = 3600
gdims(2) = 2400
- gdims(3) = 40
+ gdims(3) = 40
! call gdecomp_read_nml(gdecomp,fin,'comp',my_task)
! print *,'after gdecomp_read_nml'
diff --git a/cime/externals/pio2/examples/basic/testpio.F90 b/cime/externals/pio2/examples/basic/testpio.F90
index b62a917fd574..2a6e62e427af 100644
--- a/cime/externals/pio2/examples/basic/testpio.F90
+++ b/cime/externals/pio2/examples/basic/testpio.F90
@@ -46,7 +46,7 @@ program testpio
integer(i4) :: indx
integer(i4) :: mode
- integer(i4) :: ip,numPhases
+ integer(i4) :: ip,numPhases
character(len=*), parameter :: TestR8CaseName = 'r8_test'
character(len=*), parameter :: TestR4CaseName = 'r4_test'
character(len=*), parameter :: TestI4CaseName = 'i4_test'
@@ -101,11 +101,11 @@ program testpio
real(r8) :: dt_write_r8, dt_write_r4, dt_write_i4 ! individual write times
real(r8) :: dt_read_r8, dt_read_r4, dt_read_i4 ! individual read times
! Arrays to hold globally reduced read/write times--one element per time trial
- real(r8), dimension(:), pointer :: gdt_write_r8, gdt_write_r4, gdt_write_i4
+ real(r8), dimension(:), pointer :: gdt_write_r8, gdt_write_r4, gdt_write_i4
real(r8), dimension(:), pointer :: gdt_read_r8, gdt_read_r4, gdt_read_i4
integer(i4) :: nprocs
- integer(i4) :: lLength ! local number of words in the computational decomposition
+ integer(i4) :: lLength ! local number of words in the computational decomposition
integer(i4), parameter :: nml_in = 10
character(len=*), parameter :: nml_filename = 'testpio_in'
@@ -149,7 +149,7 @@ program testpio
call MPI_INIT(ierr)
call CheckMPIReturn('Call to MPI_INIT()',ierr,__FILE__,__LINE__)
-
+
! call enable_abort_on_exit
@@ -200,7 +200,7 @@ program testpio
endif
#endif
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -212,7 +212,7 @@ program testpio
!----------------------------------------------------------------
if(Debug) print *,'testpio: before call to readTestPIO_Namelist'
- if(my_task == master_task) then
+ if(my_task == master_task) then
call ReadTestPIO_Namelist(nml_in, nprocs, nml_filename, myname, nml_error)
endif
if(Debug) print *,'testpio: before call to broadcast_namelist'
@@ -224,7 +224,7 @@ program testpio
! Checks (num_iotasks can be negative on BGx)
!-------------------------------------
-#if !defined(BGx)
+#if !defined(BGx)
if (num_iotasks <= 0) then
write(*,*) trim(myname),' ERROR: ioprocs invalid num_iotasks=',num_iotasks
call piodie(__FILE__,__LINE__)
@@ -234,7 +234,7 @@ program testpio
! ----------------------------------------------------------------
! if stride is and num_iotasks is incompatible than reset stride (ignore stride on BGx)
! ----------------------------------------------------------------
-#if !defined(BGx)
+#if !defined(BGx)
if (base + num_iotasks * (stride-1) > nprocs-1) then
write(*,*) trim(myname),' ERROR: num_iotasks, base and stride too large', &
' base=',base,' num_iotasks=',num_iotasks,' stride=',stride,' nprocs=',nprocs
@@ -243,7 +243,7 @@ program testpio
#endif
!--------------------------------------
- ! Initalizes the parallel IO subsystem
+ ! Initalizes the parallel IO subsystem
!--------------------------------------
call PIO_setDebugLevel(DebugLevel)
@@ -266,7 +266,7 @@ program testpio
call MPI_COMM_SIZE(MPI_COMM_COMPUTE,nprocs,ierr)
else
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -279,7 +279,7 @@ program testpio
call PIO_init(my_task, MPI_COMM_COMPUTE, num_iotasks, num_aggregator, stride, &
rearr_type, PIOSYS, base)
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -289,7 +289,7 @@ program testpio
end if
-
+
if(Debug) print *,'testpio: after call to PIO_init', nprocs,mpi_comm_io
@@ -339,7 +339,7 @@ program testpio
trim(ibm_io_sparse_access))
end if
end if
-! if(set_lustre_values /= 0) then
+! if(set_lustre_values /= 0) then
! call PIO_setnum_OST(PIOSYS,lfs_ost_count)
! endif
@@ -355,11 +355,11 @@ program testpio
if(index(casename,'CAM')==1) then
call camlike_decomp_generator(gdims3d(1),gdims3d(2),gdims3d(3),my_task,nprocs,npr_yz,compDOF)
- elseif(index(casename,'MPAS')==1) then
+ elseif(index(casename,'MPAS')==1) then
! print *,'testpio: before call to mpas_decomp_generator: (',TRIM(part_input),') gdims3d: ',gdims3d
call mpas_decomp_generator(gdims3d(1),gdims3d(2),gdims3d(3),my_task,part_input,compDOF)
else if (trim(compdof_input) == 'namelist') then
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -369,7 +369,7 @@ program testpio
if(Debug) print *,'iam: ',My_task,'testpio: point #1'
call gdecomp_read_nml(gdecomp,nml_filename,'comp',my_task,nprocs,gDims3D(1:3))
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -380,7 +380,7 @@ program testpio
if(Debug) print *,'iam: ',My_task,'testpio: point #2'
call gdecomp_DOF(gdecomp,My_task,compDOF,start,count)
if(Debug) print *,'iam: ',My_task,'testpio: point #3', minval(compdof),maxval(compdof)
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -413,7 +413,7 @@ program testpio
endif
endif
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -427,7 +427,7 @@ program testpio
call pio_writedof(trim(compdof_output),gdims3d, compDOF,MPI_COMM_COMPUTE,75)
endif
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -482,7 +482,7 @@ program testpio
else
call piodie(__FILE__,__LINE__,' rearr '//trim(rearr)//' not supported')
endif
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -505,19 +505,19 @@ program testpio
lLength = size(compDOF)
!----------------------
- ! allocate and set test arrays
+ ! allocate and set test arrays
!----------------------
- if(TestR8 .or. TestCombo) then
+ if(TestR8 .or. TestCombo) then
call alloc_check(test_r8wr,lLength,'testpio:test_r8wr')
endif
- if(TestR4 .or. TestCombo) then
+ if(TestR4 .or. TestCombo) then
call alloc_check(test_r4wr,lLength,'testpio:test_r4wr' )
endif
- if(TestInt .or. TestCombo) then
+ if(TestInt .or. TestCombo) then
call alloc_check(test_i4wr,lLength,'testpio:test_i4wr')
endif
- if(TestInt) then
+ if(TestInt) then
call alloc_check(test_i4i ,lLength,'testpio:test_i4i ')
call alloc_check(test_i4j ,lLength,'testpio:test_i4j ')
call alloc_check(test_i4k ,lLength,'testpio:test_i4k ')
@@ -527,27 +527,27 @@ program testpio
do n = 1,lLength
call c1dto3d(compdof(n),gDims3D(1),gDims3D(2),gDims3D(3),i1,j1,k1)
- if(TestInt) then
+ if(TestInt) then
test_i4dof(n) = compdof(n)
test_i4i(n) = i1
test_i4j(n) = j1
test_i4k(n) = k1
test_i4m(n) = my_task
endif
- if(TestR8 .or. TestCombo) then
+ if(TestR8 .or. TestCombo) then
! test_r8wr(n) = 10.0_r8*cos(20.*real(i1,kind=r8)/real(gDims3D(1),kind=r8))* &
! cos(10.*real(j1,kind=r8)/real(gDims3D(2),kind=r8))* &
! (1.0+1.0*real(j1,kind=r8)/real(gDims3D(2),kind=r8))* &
! cos(25.*real(k1,kind=r8)/real(gDims3D(3),kind=r8))
test_r8wr = compdof
endif
- if(TestR4 .or. TestCombo) then
+ if(TestR4 .or. TestCombo) then
test_r4wr(n) = 10.0_r4*cos(20.*real(i1,kind=r4)/real(gDims3D(1),kind=r4))* &
cos(10.*real(j1,kind=r4)/real(gDims3D(2),kind=r4))* &
(1.0+1.0*real(j1,kind=r4)/real(gDims3D(2),kind=r4))* &
cos(25.*real(k1,kind=r4)/real(gDims3D(3),kind=r4))
endif
- if(TestInt .or. TestCombo) then
+ if(TestInt .or. TestCombo) then
test_i4wr(n) = compdof(n)
! test_i4wr(n) = nint(10.0_r8*cos(20.*real(i1,kind=r8)/real(gDims3D(1),kind=r8))* &
! cos(10.*real(j1,kind=r8)/real(gDims3D(2),kind=r8))* &
@@ -558,7 +558,7 @@ program testpio
if(Debug) print *,'iam: ',My_task,'testpio: point #10'
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -581,7 +581,7 @@ program testpio
!--------------------------------
! allocate arrays for holding globally-reduced timing information
!--------------------------------
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -595,25 +595,25 @@ program testpio
call alloc_check(gdt_write_i4, maxiter, ' testpio:gdt_write_i4 ')
call alloc_check(gdt_read_i4, maxiter, ' testpio:gdt_read_i4 ')
if(Debug) print *,'iam: ',My_task,'testpio: point #11'
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
print *,__FILE__,__LINE__,'mem=',rss,' it=',it
end if
#endif
- if(splitPhase) then
+ if(splitPhase) then
numPhases = 2
else
numPhases = 1
endif
do ip=1,numPhases
- if(numPhases == 1) then
+ if(numPhases == 1) then
readPhase = .true.
writePhase = .true.
else
- if(ip == 1) then
+ if(ip == 1) then
writePhase = .true.
readPhase = .false.
else
@@ -623,9 +623,9 @@ program testpio
endif
if(log_master_task) print *,'{write,read}Phase: ',writePhase,readPhase
-
+
do it=1,maxiter
-#ifdef MEMCHK
+#ifdef MEMCHK
call GPTLget_memusage(msize, rss, mshare, mtext, mstack)
if(rss>lastrss) then
lastrss=rss
@@ -644,7 +644,7 @@ program testpio
IOdesc_r8,iostart=startpio,iocount=countpio)
glenr8 = product(gdims3d)
if(Debug) print *,'iam: ',My_task,'testpio: point #7.1'
-
+
if(TestR4 .or. TestCombo) then
call PIO_initDecomp(PIOSYS,PIO_real, gDims3D,compDOF,&
IOdesc_r4,iostart=startpio,iocount=countpio)
@@ -675,7 +675,7 @@ program testpio
if(TestInt .or. TestCombo) then
! print *,__FILE__,__LINE__,gdims3d
! print *,__FILE__,__LINE__,compdof
-
+
call PIO_initDecomp(PIOSYS,PIO_int, gDims3D,compDOF,&
IOdesc_i4)
if(Debug) print *,'iam: ',My_task,'testpio: point #8.4'
@@ -750,7 +750,7 @@ program testpio
endif
endif
if(Debug) print *,'iam: ',My_task,'testpio: point #9'
-
+
if(Debug) then
write(*,'(a,2(a,i8))') myname,':: After call to initDecomp. comp_rank=',my_task, &
' io_rank=',iorank
@@ -758,10 +758,10 @@ program testpio
call PIO_getnumiotasks(PIOSYS,num_iotasks)
!------------
- ! Open file{s}
+ ! Open file{s}
!------------
write(citer,'(i3.3)') it
-
+
fname = TRIM(dir)//'foo.'//citer//'.'//TRIM(Iofmtd)
fname_r8 = TRIM(dir)//'foo.r8.'//citer//'.'//TRIM(Iofmtd)
fname_r4 = TRIM(dir)//'foo.r4.'//citer//'.'//TRIM(Iofmtd)
@@ -776,7 +776,7 @@ program testpio
mode = 0
#endif
- if(writePhase) then
+ if(writePhase) then
if(TestCombo) then
if(Debug) write(*,'(2a,i8)') myname,':: Combination Test: Creating File...it=',it
ierr = PIO_CreateFile(PIOSYS,File,iotype,trim(fname), mode)
@@ -801,21 +801,21 @@ program testpio
call check_pioerr(ierr,__FILE__,__LINE__,' i4 createfile')
endif
-
+
allocate(vard_r8(nvars), vard_r4(nvars))
-
+
!---------------------------
- ! Code specifically for netCDF files
+ ! Code specifically for netCDF files
!---------------------------
- if(iotype == iotype_pnetcdf .or. &
+ if(iotype == iotype_pnetcdf .or. &
iotype == iotype_netcdf .or. &
iotype == PIO_iotype_netcdf4p .or. &
iotype == PIO_iotype_netcdf4c) then
- if(TestR8) then
+ if(TestR8) then
!-----------------------------------
- ! for the single record real*8 file
+ ! for the single record real*8 file
!-----------------------------------
call WriteHeader(File_r8,nx_global,ny_global,nz_global,dimid_x,dimid_y,dimid_z)
@@ -833,9 +833,9 @@ program testpio
call check_pioerr(iostat,__FILE__,__LINE__,' r8 enddef')
endif
- if(TestR4) then
+ if(TestR4) then
!-----------------------------------
- ! for the single record real*4 file
+ ! for the single record real*4 file
!-----------------------------------
call WriteHeader(File_r4,nx_global,ny_global,nz_global,dimid_x,dimid_y,dimid_z)
iostat = PIO_def_dim(File_r4,'charlen',strlen,charlen)
@@ -850,14 +850,14 @@ program testpio
call check_pioerr(iostat,__FILE__,__LINE__,' i4 enddef')
endif
- if(TestInt) then
+ if(TestInt) then
!-----------------------------------
- ! for the single record integer file
+ ! for the single record integer file
!-----------------------------------
call WriteHeader(File_i4,nx_global,ny_global,nz_global,dimid_x,dimid_y,dimid_z)
- iostat = PIO_def_var(File_i4,'fdof',PIO_int,(/dimid_x,dimid_y,dimid_z/),vard_i4dof)
+ iostat = PIO_def_var(File_i4,'fdof',PIO_int,(/dimid_x,dimid_y,dimid_z/),vard_i4dof)
call check_pioerr(iostat,__FILE__,__LINE__,' i4dof defvar')
-
+
iostat = PIO_def_var(File_i4,'field',PIO_int,(/dimid_x,dimid_y,dimid_z/),vard_i4)
call check_pioerr(iostat,__FILE__,__LINE__,' i4 defvar')
iostat = PIO_def_var(File_i4,'fi',PIO_int,(/dimid_x,dimid_y,dimid_z/),vard_i4i)
@@ -873,10 +873,10 @@ program testpio
iostat = PIO_enddef(File_i4)
call check_pioerr(iostat,__FILE__,__LINE__,' i4 enddef')
endif
-
- if(TestCombo) then
+
+ if(TestCombo) then
!-----------------------------------
- ! for the multi record file
+ ! for the multi record file
!-----------------------------------
call WriteHeader(File,nx_global,ny_global,nz_global,dimid_x,dimid_y,dimid_z)
iostat = PIO_def_var(File,'field_r8',PIO_double,(/dimid_x,dimid_y,dimid_z/),vard_r8c)
@@ -906,12 +906,12 @@ program testpio
endif
!-------------------------
- ! Time the parallel write
+ ! Time the parallel write
!-------------------------
-
+
dt_write_r8 = 0.
-
+
if(TestR8) then
if(iofmtd .ne. 'bin') then
iostat = pio_put_var(file_r8,varfn_r8,fname_r8)
@@ -966,7 +966,7 @@ program testpio
endif
if(Debug) print *,'iam: ',My_task,'testpio: point #13'
- if(TestInt) then
+ if(TestInt) then
dt_write_i4 = 0.
call MPI_Barrier(MPI_COMM_COMPUTE,ierr)
call CheckMPIReturn('Call to MPI_BARRIER()',ierr,__FILE__,__LINE__)
@@ -1005,10 +1005,10 @@ program testpio
call PIO_write_darray(File_i4,vard_i4m,iodesc_i4,test_i4m,iostat)
call check_pioerr(iostat,__FILE__,__LINE__,' i4m write_darray')
- call PIO_CloseFile(File_i4)
+ call PIO_CloseFile(File_i4)
endif
- if(TestCombo) then
+ if(TestCombo) then
if(iofmtd .ne. 'bin') then
iostat = pio_put_var(file,varfn,fname)
iostat = pio_put_var(file,varfruit,fruits)
@@ -1026,9 +1026,9 @@ program testpio
endif
if(Debug) then
- write(*,'(a,2(a,i8),i8)') myname,':: After calls to PIO_write_darray. comp_rank=',my_task, &
+ write(*,'(a,2(a,i8),i8)') myname,':: After calls to PIO_write_darray. comp_rank=',my_task, &
' io_rank=',iorank,mpi_comm_io
-
+
endif
endif
@@ -1037,17 +1037,17 @@ program testpio
if(Debug) print *,'iam: ',My_task,'testpio: point #14'
- if (readPhase) then
+ if (readPhase) then
!-------------------------------------
! Open the file back up and check data
!-------------------------------------
-
- if(TestR8) then
+
+ if(TestR8) then
ierr = PIO_OpenFile(PIOSYS, File_r8, iotype, fname_r8)
call check_pioerr(ierr,__FILE__,__LINE__,' r8 openfile')
endif
if(Debug) print *,'iam: ',My_task,'testpio: point #15'
-
+
if(TestR4) then
ierr = PIO_OpenFile(PIOSYS,File_r4,iotype, fname_r4)
call check_pioerr(ierr,__FILE__,__LINE__,' r4 openfile')
@@ -1063,13 +1063,13 @@ program testpio
if(Debug) then
write(*,'(2a,i8)') myname,':: After calls to PIO_OpenFile. my_task=',my_task
endif
-
+
if(Debug) print *,__FILE__,__LINE__
if(iotype == iotype_pnetcdf .or. &
iotype == iotype_netcdf) then
do ivar=1,nvars
- if(TestR8) then
+ if(TestR8) then
iostat = PIO_inq_varid(file_r8,'filename',varfn_r8)
@@ -1077,15 +1077,15 @@ program testpio
call check_pioerr(iostat,__FILE__,__LINE__,' r8 inq_varid')
endif
- if(TestR4) then
+ if(TestR4) then
if(iofmtd(2:3) .eq. 'nc') then
iostat = PIO_inq_varid(file_r4,'filename',varfn_r4)
end if
- iostat = PIO_inq_varid(File_r4,'field00001',vard_r4(ivar))
+ iostat = PIO_inq_varid(File_r4,'field00001',vard_r4(ivar))
call check_pioerr(iostat,__FILE__,__LINE__,' r4 inq_varid')
endif
end do
- if(TestInt) then
+ if(TestInt) then
iostat = PIO_inq_varid(File_i4,'field',vard_i4)
call check_pioerr(iostat,__FILE__,__LINE__,' i4 inq_varid')
endif
@@ -1097,7 +1097,7 @@ program testpio
! Time the parallel read
!-------------------------
dt_read_r8 = 0.
- if(TestR8) then
+ if(TestR8) then
if(iofmtd(2:3) .eq. 'nc') then
iostat = pio_get_var(file_r8,varfn_r8, fnamechk)
if(fnamechk /= fname_r8) then
@@ -1120,7 +1120,7 @@ program testpio
call t_stopf('testpio_read')
#endif
et = MPI_Wtime()
- dt_read_r8 = dt_read_r8 + (et - st)/nvars
+ dt_read_r8 = dt_read_r8 + (et - st)/nvars
call check_pioerr(iostat,__FILE__,__LINE__,' r8 read_darray')
endif
@@ -1170,12 +1170,12 @@ program testpio
endif
!-------------------------------
- ! Print the maximum memory usage
+ ! Print the maximum memory usage
!-------------------------------
! call alloc_print_usage(0,'testpio: after calls to PIO_read_darray')
#ifdef TESTMEM
-! stop
+! stop
#endif
if(Debug) then
@@ -1184,7 +1184,7 @@ program testpio
endif
!-------------------
- ! close the file up
+ ! close the file up
!-------------------
if(TestR8) call PIO_CloseFile(File_r8)
if(TestR4) call PIO_CloseFile(File_r4)
@@ -1200,13 +1200,13 @@ program testpio
! endif
!-----------------------------
- ! Perform correctness testing
+ ! Perform correctness testing
!-----------------------------
- if(TestR8 .and. CheckArrays) then
+ if(TestR8 .and. CheckArrays) then
call checkpattern(mpi_comm_compute, fname_r8,test_r8wr,test_r8rd,lLength,iostat)
call check_pioerr(iostat,__FILE__,__LINE__,' checkpattern r8 test')
endif
-
+
if( TestR4 .and. CheckArrays) then
call checkpattern(mpi_comm_compute, fname_r4,test_r4wr,test_r4rd,lLength,iostat)
call check_pioerr(iostat,__FILE__,__LINE__,' checkpattern r4 test')
@@ -1218,15 +1218,15 @@ program testpio
endif
if(Debug) print *,'iam: ',My_task,'testpio: point #21'
- if(TestCombo .and. CheckArrays) then
+ if(TestCombo .and. CheckArrays) then
!-------------------------------------
- ! Open up and read the combined file
+ ! Open up and read the combined file
!-------------------------------------
-
+
ierr = PIO_OpenFile(PIOSYS,File,iotype,fname)
call check_pioerr(ierr,__FILE__,__LINE__,' combo test read openfile')
-
+
if(iofmtd(1:2).eq.'nc') then
iostat = PIO_inq_varid(File,'field_r8',vard_r8c)
call check_pioerr(iostat,__FILE__,__LINE__,' combo test r8 inq_varid')
@@ -1260,22 +1260,22 @@ program testpio
call PIO_CloseFile(File)
if(Debug) print *,'iam: ',My_task,'testpio: point #22a'
et = MPI_Wtime()
- dt_read_r8 = dt_read_r8 + (et - st)/nvars
+ dt_read_r8 = dt_read_r8 + (et - st)/nvars
!-----------------------------
- ! Check the combined file
+ ! Check the combined file
!-----------------------------
call checkpattern(mpi_comm_compute, fname,test_r8wr,test_r8rd,lLength,iostat)
call check_pioerr(iostat,__FILE__,__LINE__,' checkpattern test_r8 ')
-
+
call checkpattern(mpi_comm_compute, fname,test_r4wr,test_r4rd,lLength,iostat)
call check_pioerr(iostat,__FILE__,__LINE__,' checkpattern test_r4 ')
-
+
call checkpattern(mpi_comm_compute, fname,test_i4wr,test_i4rd,lLength,iostat)
call check_pioerr(iostat,__FILE__,__LINE__,' checkpattern test_i4 ')
-
+
endif
!---------------------------------------
- ! Print out the performance measurements
+ ! Print out the performance measurements
!---------------------------------------
call MPI_Barrier(MPI_COMM_COMPUTE,ierr)
endif
@@ -1293,7 +1293,7 @@ program testpio
if(writePhase) call GetMaxTime(dt_write_r4, gdt_write_r4(it), MPI_COMM_COMPUTE, ierr)
endif
if(Debug) print *,'iam: ',My_task,'testpio: point #24'
-
+
if(TestInt) then
! Maximum read/write times
if(readPhase) call GetMaxTime(dt_read_i4, gdt_read_i4(it), MPI_COMM_COMPUTE, ierr)
@@ -1311,30 +1311,30 @@ program testpio
!--------------------------------
- ! Clean up initialization memory
+ ! Clean up initialization memory
! note: make sure DOFs are not used later
!--------------------------------
if (My_task >= 0) call dealloc_check(compDOF)
!----------------------------------
- ! Print summary bandwidth statistics
+ ! Print summary bandwidth statistics
!----------------------------------
if(Debug) print *,'iam: ',My_task,'testpio: point #26'
if(TestR8 .or. TestCombo .and. (iorank == 0) ) then
- call WriteTimeTrialsStats(casename,TestR8CaseName, fname_r8, glenr8, gdt_read_r8, gdt_write_r8, maxiter)
+ call WriteTimeTrialsStats(casename,TestR8CaseName, fname_r8, glenr8, gdt_read_r8, gdt_write_r8, maxiter)
endif
if(TestR4 .and. (iorank == 0) ) then
- call WriteTimeTrialsStats(casename,TestR4CaseName, fname_r4, glenr4, gdt_read_r4, gdt_write_r4, maxiter)
+ call WriteTimeTrialsStats(casename,TestR4CaseName, fname_r4, glenr4, gdt_read_r4, gdt_write_r4, maxiter)
endif
if(TestInt .and. (iorank == 0) ) then
- call WriteTimeTrialsStats(casename,TestI4CaseName, fname_i4, gleni4, gdt_read_i4, gdt_write_i4, maxiter)
+ call WriteTimeTrialsStats(casename,TestI4CaseName, fname_i4, gleni4, gdt_read_i4, gdt_write_i4, maxiter)
endif
!-------------------------------
- ! Print timers and memory usage
+ ! Print timers and memory usage
!-------------------------------
#ifdef TIMING
@@ -1455,7 +1455,7 @@ end subroutine WriteStats
!=============================================================================
- subroutine WriteTimeTrialsStats(casename,TestName, FileName, glen, ReadTimes, WriteTimes, nTrials)
+ subroutine WriteTimeTrialsStats(casename,TestName, FileName, glen, ReadTimes, WriteTimes, nTrials)
implicit none
diff --git a/cime/externals/pio2/examples/basic/testpio_bench.pl b/cime/externals/pio2/examples/basic/testpio_bench.pl
index a47eada67f0e..fe3a2239640c 100755
--- a/cime/externals/pio2/examples/basic/testpio_bench.pl
+++ b/cime/externals/pio2/examples/basic/testpio_bench.pl
@@ -275,7 +275,7 @@ sub usage{
if($attributes{NETCDF_PATH} =~ /netcdf-4/){
$enablenetcdf4="--enable-netcdf4";
}
- }
+ }
}
if(defined $suites){
@@ -314,7 +314,7 @@ sub usage{
ldz => 0,
partfile => 'null',
partdir => 'foo',
- iofmt => 'pnc',
+ iofmt => 'pnc',
rearr => 'box',
numprocsIO => 10,
stride => -1,
@@ -435,7 +435,7 @@ sub usage{
$configuration{$name}=$value;
}
$found = 1;
- }
+ }
}
#my $suffix = $bname . "-" . $pecount;
my $suffix = $bname . "_PE-" . $pecount . "_IO-" . $iofmt . "-" . $numIO;
@@ -587,8 +587,8 @@ sub usage{
}elsif(/ENV_(.*)/){
print "set $1 $attributes{$_}\n";
print F "\$ENV{$1}=\"$attributes{$_}\"\;\n";
- }
-
+ }
+
}
@@ -659,7 +659,7 @@ sub usage{
my \@testlist = \"$suffix";
# unlink("../pio/Makefile.conf");
# copy("testpio_in","$tstdir"); # copy the namelist file into test directory
-
+
chdir ("$tstdir");
my \$test;
my \$run = "$attributes{run}";
@@ -709,7 +709,7 @@ sub usage{
open(LOG,\$log);
my \@logout = ;
close(LOG);
-
+
my \$cnt = grep /testpio completed successfully/ , \@logout;
open(T,">TestStatus");
if(\$cnt>0){
@@ -724,7 +724,7 @@ sub usage{
close(T);
}
}else{
- print "suite \$suite FAILED to configure or build\\n";
+ print "suite \$suite FAILED to configure or build\\n";
}
}
print "test complete on $host \$passcnt tests PASS, \$failcnt tests FAIL\\n";
diff --git a/cime/externals/pio2/examples/basic/testpio_build.pl b/cime/externals/pio2/examples/basic/testpio_build.pl
index 720e92df12f8..56faccd69fd8 100644
--- a/cime/externals/pio2/examples/basic/testpio_build.pl
+++ b/cime/externals/pio2/examples/basic/testpio_build.pl
@@ -47,8 +47,8 @@
}elsif(/ENV_(.*)/){
print "set $1 $attributes{$_}\n";
$ENV{$1}="$attributes{$_}";
- }
-
+ }
+
}
diff --git a/cime/externals/pio2/examples/basic/testpio_run.pl b/cime/externals/pio2/examples/basic/testpio_run.pl
index 6c7b2ab60662..d0f8900bf864 100755
--- a/cime/externals/pio2/examples/basic/testpio_run.pl
+++ b/cime/externals/pio2/examples/basic/testpio_run.pl
@@ -79,8 +79,8 @@ sub usage{
# }elsif(/ENV_(.*)/){
# print "set $1 $attributes{$_}\n";
# print F "\$ENV{$1}=\"$attributes{$_}\n\"";
-# }
-
+# }
+
}
if(defined $suites){
@@ -233,7 +233,7 @@ sub usage{
# \$ENV{MP_PROCS} = 1;
#system("hostname > $tstdir/hostfile");
#\$ENV{MP_HOSTFILE}="$tstdir/hostfile";
-
+
# }
if("$host" eq "yellowstone_pgi") {
\$ENV{LD_PRELOAD}="/opt/ibmhpc/pe1304/ppe.pami/gnu/lib64/pami64/libpami.so";
@@ -242,7 +242,7 @@ sub usage{
if("$host" eq "erebus" or "$host" =~ /^yellowstone/){
# \$ENV{MP_PROCS}=\$saveprocs;
# delete \$ENV{MP_HOSTFILE};
- }
+ }
}
my \$test;
@@ -276,9 +276,9 @@ sub usage{
unlink("testpio") if(-e "testpio");
if($twopass){
- copy("$tstdir/testpio.\$suite","testpio");
+ copy("$tstdir/testpio.\$suite","testpio");
}else{
- copy("$tstdir/testpio","testpio");
+ copy("$tstdir/testpio","testpio");
}
chmod 0755,"testpio";
@@ -316,7 +316,7 @@ sub usage{
open(LOG,\$log);
my \@logout = <LOG>;
close(LOG);
-
+
my \$cnt = grep /testpio completed successfully/ , \@logout;
open(T,">TestStatus");
if(\$cnt>0){
@@ -331,7 +331,7 @@ sub usage{
close(T);
}
}else{
- print "suite \$suite FAILED to configure or build\\n";
+ print "suite \$suite FAILED to configure or build\\n";
}
}
if($twopass && \$thispass==1){
@@ -341,7 +341,7 @@ sub usage{
print "Run ($script) second pass with \$subsys\n";
}else{
exec(\$subsys);
- }
+ }
}
print "test complete on $host \$passcnt tests PASS, \$failcnt tests FAIL\\n";
diff --git a/cime/externals/pio2/examples/basic/utils_mod.F90 b/cime/externals/pio2/examples/basic/utils_mod.F90
index 6919390a8ede..203064da41a5 100644
--- a/cime/externals/pio2/examples/basic/utils_mod.F90
+++ b/cime/externals/pio2/examples/basic/utils_mod.F90
@@ -11,7 +11,7 @@ module utils_mod
!>
!! @private
-!! @brief Writes netcdf header information for testpio.
+!! @brief Writes netcdf header information for testpio.
!! @param File @copydoc file_desc_t
!! @param nx
!! @param ny
diff --git a/cime/externals/pio2/examples/basic/wstest.c b/cime/externals/pio2/examples/basic/wstest.c
index c2fa962855f4..de4f05dcd589 100644
--- a/cime/externals/pio2/examples/basic/wstest.c
+++ b/cime/externals/pio2/examples/basic/wstest.c
@@ -27,7 +27,7 @@ int main(int argc, char *argv[])
PIOc_Init_Intracomm(MPI_COMM_WORLD, npe, 1, 0, PIO_REARR_SUBSET,&iosysid);
- // Create a weak scaling test -
+ // Create a weak scaling test -
nx=6;
ny=6;
nz=2;
@@ -52,15 +52,15 @@ int main(int argc, char *argv[])
PIOc_createfile(iosysid, &ncid, &iotype, "wstest.nc", PIO_CLOBBER);
// Order of dims in c is slowest first
- PIOc_def_dim(ncid, "nx", (PIO_Offset) gdim[2], dimids+2);
- PIOc_def_dim(ncid, "ny", (PIO_Offset) gdim[1], dimids+1);
+ PIOc_def_dim(ncid, "nx", (PIO_Offset) gdim[2], dimids+2);
+ PIOc_def_dim(ncid, "ny", (PIO_Offset) gdim[1], dimids+1);
PIOc_def_dim(ncid, "nz", (PIO_Offset) gdim[0], dimids);
PIOc_def_var(ncid, "idof", PIO_INT, 3, dimids, &vid);
-
+
PIOc_enddef(ncid);
-
+
PIOc_write_darray(ncid, vid, iodesc,(PIO_Offset) (nx*ny*nz), iarray, NULL);
diff --git a/cime/externals/pio2/examples/c/example2.c b/cime/externals/pio2/examples/c/example2.c
index d86af67774c0..f4b75d075b74 100644
--- a/cime/externals/pio2/examples/c/example2.c
+++ b/cime/externals/pio2/examples/c/example2.c
@@ -48,8 +48,8 @@
* responsibilty for writing and reading them will be spread between
* all the processors used to run this example. */
/**@{*/
-#define X_DIM_LEN 20
-#define Y_DIM_LEN 30
+#define X_DIM_LEN 400
+#define Y_DIM_LEN 400
/**@}*/
/** The number of timesteps of data to write. */
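/*
 * Editor's illustrative sketch, not part of the patch: how the enlarged
 * 400x400 grid above is shared among MPI tasks, assuming the per-task
 * element count is simply the total divided by the task count, as in
 * example2.c.  The task count of 16 is made up.
 */
#include <stdio.h>

int main(void)
{
    int ntasks = 16;                      /* hypothetical MPI world size */
    int total  = 400 * 400;               /* new X_DIM_LEN * Y_DIM_LEN   */
    printf("elements per task: %d\n", total / ntasks);   /* 10000 */
    return 0;
}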
diff --git a/cime/externals/pio2/examples/c/valsupp_example1.supp b/cime/externals/pio2/examples/c/valsupp_example1.supp
deleted file mode 100644
index 63f3e073836d..000000000000
--- a/cime/externals/pio2/examples/c/valsupp_example1.supp
+++ /dev/null
@@ -1,15 +0,0 @@
-{
- cond_jump_1
- Memcheck:Cond
- fun:MPIC_Waitall
- fun:MPIR_Alltoallw_intra
- fun:MPIR_Alltoallw
- fun:MPIR_Alltoallw_impl
- fun:PMPI_Alltoallw
- fun:pio_swapm
- fun:rearrange_comp2io
- fun:PIOc_write_darray_multi
- fun:flush_buffer
- fun:PIOc_sync
- fun:main
-}
\ No newline at end of file
diff --git a/cime/externals/pio2/examples/cxx/examplePio.cxx b/cime/externals/pio2/examples/cxx/examplePio.cxx
index d526e6f9b53e..b4f03cc13e87 100644
--- a/cime/externals/pio2/examples/cxx/examplePio.cxx
+++ b/cime/externals/pio2/examples/cxx/examplePio.cxx
@@ -13,26 +13,26 @@ class pioExampleClass {
pioExampleClass::pioExampleClass(){
// user defined ctor with no arguments
-
+
cout << " pioExampleClass::pioExampleClass() "<< endl;
-
+
}
void pioExampleClass::init () {
-
+
cout << " pioExampleClass::init() " << endl;
-
+
}
int main () {
-
+
pioExampleClass *pioExInst;
-
+
pioExInst = new pioExampleClass();
-
+
pioExInst->init();
delete(pioExInst);
-
+
return 0;
}
\ No newline at end of file
diff --git a/cime/externals/pio2/examples/f03/CMakeLists.txt b/cime/externals/pio2/examples/f03/CMakeLists.txt
index 711b2beef3e5..e362ceacd137 100644
--- a/cime/externals/pio2/examples/f03/CMakeLists.txt
+++ b/cime/externals/pio2/examples/f03/CMakeLists.txt
@@ -12,7 +12,7 @@ LINK_DIRECTORIES(${PIO_LIB_DIR})
set(CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS} -g -O0")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -O0")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O0")
-if(${PIO_BUILD_TIMING})
+if(${PIO_BUILD_TIMING})
SET(TIMING_LINK_LIB timing)
endif()
SET(SRC examplePio.f90)
diff --git a/cime/externals/pio2/examples/f03/examplePio.f90 b/cime/externals/pio2/examples/f03/examplePio.f90
index d2baddf20966..f5eede9e26a2 100644
--- a/cime/externals/pio2/examples/f03/examplePio.f90
+++ b/cime/externals/pio2/examples/f03/examplePio.f90
@@ -10,7 +10,7 @@ module pioExample
use pio, only : PIO_nowrite, PIO_openfile
implicit none
-
+ save
private
include 'mpif.h'
diff --git a/cime/externals/pio2/scripts/prune_decomps.pl b/cime/externals/pio2/scripts/prune_decomps.pl
index 9bf3dd9e3e72..42336533aeee 100644
--- a/cime/externals/pio2/scripts/prune_decomps.pl
+++ b/cime/externals/pio2/scripts/prune_decomps.pl
@@ -44,15 +44,15 @@
open(F1,">$file");
foreach(@file1){
if(/\[(.*)\]/){
- my $decode = `addr2line -e ../bld/${CIME_MODEL}.exe $1`;
+ my $decode = `addr2line -e ../bld/cesm.exe $1`;
print F1 "$decode\n";
print "$decode\n";
}else{
print F1 $_;
}
-
+
}
close(F1);
}
-
+
diff --git a/cime/externals/pio2/src/CMakeLists.txt b/cime/externals/pio2/src/CMakeLists.txt
index 5a2df634c422..ba6863d85094 100644
--- a/cime/externals/pio2/src/CMakeLists.txt
+++ b/cime/externals/pio2/src/CMakeLists.txt
@@ -2,11 +2,11 @@
# PRELIMINARIES
#==============================================================================
-# Test for big-endian nature
-if (PIO_TEST_BIG_ENDIAN)
+# Test for big-endian nature
+if (PIO_TEST_BIG_ENDIAN)
include (TestBigEndian)
test_big_endian (PIO_BIG_ENDIAN_TEST_RESULT)
- if (PIO_BIG_ENDIAN_TEST_RESULT)
+ if (PIO_BIG_ENDIAN_TEST_RESULT)
set (PIO_BIG_ENDIAN ON CACHE BOOL "Whether machine is big endian")
else ()
set (PIO_BIG_ENDIAN OFF CACHE BOOL "Whether machine is big endian")
diff --git a/cime/externals/pio2/src/clib/CMakeLists.txt b/cime/externals/pio2/src/clib/CMakeLists.txt
index 8c01336969c5..093aeac70c16 100644
--- a/cime/externals/pio2/src/clib/CMakeLists.txt
+++ b/cime/externals/pio2/src/clib/CMakeLists.txt
@@ -14,21 +14,15 @@ set (PIO_C_SRCS topology.c
pioc_sc.c
pio_spmd.c
pio_rearrange.c
- pio_nc4.c
+ pio_darray.c
bget.c)
-set (PIO_GENNC_SRCS ${CMAKE_CURRENT_BINARY_DIR}/pio_put_nc.c
- ${CMAKE_CURRENT_BINARY_DIR}/pio_get_nc.c
- ${CMAKE_CURRENT_BINARY_DIR}/pio_nc.c)
+set (PIO_GENNC_SRCS ${CMAKE_CURRENT_BINARY_DIR}/pio_nc.c
+ ${CMAKE_CURRENT_BINARY_DIR}/pio_nc4.c
+ ${CMAKE_CURRENT_BINARY_DIR}/pio_put_nc.c
+ ${CMAKE_CURRENT_BINARY_DIR}/pio_get_nc.c)
-if (PIO_ENABLE_ASYNC)
- set (PIO_ADDL_SRCS pio_nc_async.c pio_put_nc_async.c pio_get_nc_async.c
- pio_msg.c pio_varm.c pio_darray_async.c)
-else ()
- set (PIO_ADDL_SRCS pio_darray.c ${PIO_GENNC_SRCS})
-endif ()
-
-add_library (pioc ${PIO_C_SRCS} ${PIO_ADDL_SRCS})
+add_library (pioc ${PIO_C_SRCS} ${PIO_GENNC_SRCS})
# set up include-directories
include_directories(
@@ -202,9 +196,12 @@ else ()
pio_get_nc.c
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/pio_nc.c
pio_nc.c
+ COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/pio_nc4.c
+ pio_nc4.c
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/pio_put_nc.c
${CMAKE_CURRENT_SOURCE_DIR}/pio_get_nc.c
- ${CMAKE_CURRENT_SOURCE_DIR}/pio_nc.c)
+ ${CMAKE_CURRENT_SOURCE_DIR}/pio_nc.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/pio_nc4.c)
endif ()
diff --git a/cime/externals/pio2/src/clib/bget.c b/cime/externals/pio2/src/clib/bget.c
index 020fb724b765..752ac219e749 100644
--- a/cime/externals/pio2/src/clib/bget.c
+++ b/cime/externals/pio2/src/clib/bget.c
@@ -397,6 +397,7 @@
BGET CONFIGURATION
==================
*/
+//#define PIO_USE_MALLOC 1
#ifdef PIO_USE_MALLOC
#include <stdlib.h>
#endif
diff --git a/cime/externals/pio2/src/clib/config.h.in b/cime/externals/pio2/src/clib/config.h.in
index 1722872c3056..e4be61d9e0c7 100644
--- a/cime/externals/pio2/src/clib/config.h.in
+++ b/cime/externals/pio2/src/clib/config.h.in
@@ -19,7 +19,4 @@
* will use the included bget() package for memory management. */
#define PIO_USE_MALLOC @USE_MALLOC@
-/** Set to non-zero to turn on logging. Output may be large. */
-#define PIO_ENABLE_LOGGING @ENABLE_LOGGING@
-
#endif /* _PIO_CONFIG_ */
diff --git a/cime/externals/pio2/src/clib/ncputgetparser.pl b/cime/externals/pio2/src/clib/ncputgetparser.pl
index fd2b65a896b1..f4bb89c6b95e 100644
--- a/cime/externals/pio2/src/clib/ncputgetparser.pl
+++ b/cime/externals/pio2/src/clib/ncputgetparser.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/perl
use strict;
my $netcdf_incdir = $ARGV[0];
@@ -22,7 +22,7 @@
chomp($line);
next if($line =~ /^\s*\/\*/);
next if($line =~ /^\s*[#\*]/);
- $line =~ s/\s+/ /g;
+ $line =~ s/\s+/ /g;
if($line =~ / ncmpi_(.*)(\(.*)$/) {
$func = $1;
@@ -41,7 +41,7 @@
foreach my $line (@file){
chomp($line);
next if($line =~ /^\s*\/\*/);
- next if($line =~ /^\s*[#\*]/);
+ next if($line =~ /^\s*[#\*]/);
$line =~ s/\s+/ /g;
if($line =~ /^\s*nc_([^_].*)(\(.*)$/) {
$func = $1;
@@ -93,7 +93,7 @@
}
if($line =~/function/){
if($line =~ s/PIO_function/PIOc_$func/){
- $line =~ s/\(\)/ $pnetfunc / ;
+ $line =~ s/\(\)/ $pnetfunc / ;
# $line =~ s/int ncid/file_desc_t *file/;
$line =~ s/MPI_Offset/PIO_Offset/g;
$line =~ s/\;//;
@@ -102,7 +102,7 @@
$line =~ s/, int \*request//;
-
+
}else{
my $args;
if(defined $bfunc){
@@ -116,7 +116,7 @@
}
}
if($line =~ s/nc_function/nc_$func/){
- $args = $functions->{$func}{netcdf} ;
+ $args = $functions->{$func}{netcdf} ;
if($pnetfunc =~ /void \*buf/ ){
$args =~ s/\*op/ buf/g;
$args =~ s/\*ip/ buf/g;
@@ -143,7 +143,7 @@
$args =~ s/MPI_Info //g;
$args =~ s/,\s+\*/, /g;
$args =~ s/\[\]//g;
-
+
$args =~ s/size_t \*indexp/\(size_t \*\) index/g;
$args =~ s/size_t \*startp/\(size_t \*\) start/g;
$args =~ s/size_t \*countp/\(size_t \*\) count/g;
@@ -156,7 +156,7 @@
# }else{
# $args =~ s/size_t /\(size_t\)/g;
# }
-
+
$line =~ s/\(\)/$args/;
}
}
@@ -164,10 +164,10 @@
# print F " file->varlist[*varidp].record=-1;\n";
# print F " file->varlist[*varidp].buffer=NULL;\n";
# }
- print F $line;
+ print F $line;
}
-
+
print F "\n";
}
@@ -193,7 +193,7 @@
# print "nctype $nctype\n";
$buftype = $typemap->{$nctype};
}
-
+
if($func =~ /var1/){
$bufcount = 1;
}
@@ -254,7 +254,7 @@
my $postline;
if($line =~/function/){
if($line =~ s/PIO_function/PIOc_$func/){
- $line =~ s/\(\)/ $functions->{$func}{pnetcdf} / ;
+ $line =~ s/\(\)/ $functions->{$func}{pnetcdf} / ;
# $line =~ s/int ncid/file_desc_t *file/;
$line =~ s/MPI_Offset/PIO_Offset/g;
$line =~ s/\;//;
@@ -272,7 +272,7 @@
}elsif($line =~ /ncmpi_function_all/){
$line = " ";
}
-
+
if($line =~ s/ncmpi_function\(/ncmpi_$func\(/){
if($allfunc==1){
$preline = "#ifdef PNET_READ_AND_BCAST\n";
@@ -284,9 +284,9 @@
$postline.="#else\n";
}
}
-
+
if($line =~ s/nc_function/nc_$func/){
- $args = $functions->{$func}{netcdf} ;
+ $args = $functions->{$func}{netcdf} ;
}
$args =~ s/int ncid/file->fh/;
@@ -317,7 +317,7 @@
$args =~ s/MPI_Info //g;
$args =~ s/,\s+\*/, /g;
$args =~ s/\[\]//g;
-
+
$args =~ s/size_t \*indexp/\(size_t \*\) index/g;
$args =~ s/size_t \*startp/\(size_t \*\) start/g;
$args =~ s/size_t \*countp/\(size_t \*\) count/g;
@@ -330,7 +330,7 @@
# }else{
# $args =~ s/size_t /\(size_t\)/g;
# }
-
+
$line =~ s/\(\)/$args/;
}
}
@@ -344,7 +344,7 @@
}
-
+
# print "$func $functions->{$func}{pnetcdf} $functions->{$func}{netcdf}\n";
print F "\n";
}
diff --git a/cime/externals/pio2/src/clib/pio.h b/cime/externals/pio2/src/clib/pio.h
index 07f4b8b20c55..5b8446bf21ef 100644
--- a/cime/externals/pio2/src/clib/pio.h
+++ b/cime/externals/pio2/src/clib/pio.h
@@ -1,8 +1,11 @@
/**
* @file
- * Public headers for the PIO C interface.
* @author Jim Edwards
* @date 2014
+ * @brief Public headers for the PIO C interface.
+ *
+ *
+ *
*
* @see http://code.google.com/p/parallelio/
*/
@@ -40,292 +43,209 @@
/** The maximum number of variables allowed in a netCDF file. */
#define PIO_MAX_VARS NC_MAX_VARS
+
/**
- * Variable description structure.
- */
+ * @brief Variable description structure
+ *
+ * The variable record is the index into the unlimited dimension in the netcdf file
+ * typically this is the time dimension.
+ * ndims is the number of dimensions on the file for this variable
+ * request is the id of each outstanding pnetcdf request for this variable
+ * nreqs is the number of outstanding pnetcdf requests for this variable
+ * fillbuf is a memory buffer to hold fill values for this variable (write only)
+ * iobuf is a memory buffer to hold (write only)
+*/
typedef struct var_desc_t
{
- /** The unlimited dimension in the netCDF file (typically the time
- * dimension). -1 if there is no unlimited dimension. */
- int record;
-
- /** Number of dimensions for this variable. */
- int ndims;
-
- /** ID of each outstanding pnetcdf request for this variable. */
- int *request;
+ int record;
+ int ndims;
- /** Number of requests bending with pnetcdf. */
- int nreqs;
+ int *request; // used for pnetcdf iput calls
+ int nreqs;
+ void *fillbuf;
+ void *iobuf;
- /** Buffer that contains the fill value for this variable. */
- void *fillbuf;
-
- /** ??? */
- void *iobuf;
} var_desc_t;
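/*
 * Editor's illustrative sketch, not part of the patch: user code drives the
 * `record` field above through the public API rather than by touching
 * var_desc_t directly.  The helper write_timesteps and the handles passed to
 * it are hypothetical; PIOc_setframe is the assumed entry point, and
 * PIOc_write_darray matches its use in examples/basic/wstest.c elsewhere in
 * this patch.
 */
#include "pio.h"   /* user-side include; types and prototypes assumed from this header */

static void write_timesteps(int ncid, int varid, int ioid, int ntimes,
                            PIO_Offset llen, double *local)
{
    for (int t = 0; t < ntimes; t++)
    {
        PIOc_setframe(ncid, varid, t);                    /* var_desc_t.record = t         */
        PIOc_write_darray(ncid, varid, ioid, llen, local, /* queues a pnetcdf iput request */
                          NULL);
    }
}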
/**
- * IO region structure.
+ * @brief io region structure
*
* Each IO region is a unit of data which can be described using start and count
- * arrays. Each IO task may in general have multiple io regions per variable. The
+ * arrays. Each IO task may in general have multiple io regions per variable. The
* box rearranger will have at most one io region per variable.
*
- */
+*/
typedef struct io_region
{
- int loffset;
- PIO_Offset *start;
- PIO_Offset *count;
- struct io_region *next;
+ int loffset;
+ PIO_Offset *start;
+ PIO_Offset *count;
+ struct io_region *next;
} io_region;
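/*
 * Editor's illustrative sketch, not part of the patch: one io_region
 * describing a contiguous slab of a 100x50 global array (rows 10..19, all
 * columns) on an IO task.  make_row_region is a hypothetical helper; the
 * field names are exactly those of the struct above, and the io_region and
 * PIO_Offset types come from this header.
 */
#include <stdlib.h>

static io_region *make_row_region(void)
{
    io_region *r = (io_region *)calloc(1, sizeof(io_region));
    r->start = (PIO_Offset *)calloc(2, sizeof(PIO_Offset));
    r->count = (PIO_Offset *)calloc(2, sizeof(PIO_Offset));
    r->start[0] = 10;  r->count[0] = 10;    /* rows 10..19               */
    r->start[1] = 0;   r->count[1] = 50;    /* every column              */
    r->loffset  = 0;                        /* offset into the IO buffer */
    r->next     = NULL;                     /* box rearranger: at most one region per variable */
    return r;
}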
/**
- * IO descriptor structure.
+ * @brief io descriptor structure
*
* This structure defines the mapping for a given variable between
* compute and IO decomposition.
- */
+ *
+*/
typedef struct io_desc_t
{
- /** The ID of this io_desc_t. */
- int ioid;
- int async_id;
- int nrecvs;
- int ndof;
- int ndims;
- int num_aiotasks;
- int rearranger;
- int maxregions;
- bool needsfill; // Does this decomp leave holes in the field (true) or write everywhere (false)
-
- /** The maximum number of bytes of this iodesc before flushing. */
- int maxbytes;
- MPI_Datatype basetype;
- PIO_Offset llen;
- int maxiobuflen;
- PIO_Offset *gsize;
-
- int *rfrom;
- int *rcount;
- int *scount;
- PIO_Offset *sindex;
- PIO_Offset *rindex;
-
- MPI_Datatype *rtype;
- MPI_Datatype *stype;
- int num_stypes;
- int holegridsize;
- int maxfillregions;
- io_region *firstregion;
- io_region *fillregion;
-
- bool handshake;
- bool isend;
- int max_requests;
+ int ioid;
+ int async_id;
+ int nrecvs;
+ int ndof;
+ int ndims;
+ int num_aiotasks;
+ int rearranger;
+ int maxregions;
+ bool needsfill; // Does this decomp leave holes in the field (true) or write everywhere (false)
+ int maxbytes; // maximum number of bytes of this iodesc before flushing
+ MPI_Datatype basetype;
+ PIO_Offset llen;
+ int maxiobuflen;
+ PIO_Offset *gsize;
+
+ int *rfrom;
+ int *rcount;
+ int *scount;
+ PIO_Offset *sindex;
+ PIO_Offset *rindex;
+
+ MPI_Datatype *rtype;
+ MPI_Datatype *stype;
+ int num_stypes;
+ int holegridsize;
+ int maxfillregions;
+ io_region *firstregion;
+ io_region *fillregion;
+
+
+ bool handshake;
+ bool isend;
+ int max_requests;
- MPI_Comm subset_comm;
-
- /** Pointer to the next io_desc_t in the list. */
- struct io_desc_t *next;
+ MPI_Comm subset_comm;
+ struct io_desc_t *next;
} io_desc_t;
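/*
 * Editor's illustrative sketch, not part of the patch: the mapping an
 * io_desc_t encodes starts from a 1-based "degree of freedom" map with one
 * entry per locally owned element (0 marks a hole).  block_dofmap is a
 * hypothetical helper that builds such a map for a simple block
 * decomposition of a length-gsize global vector; the map is what a
 * decomposition-initialization call would consume.  PIO_Offset comes from
 * this header.
 */
static void block_dofmap(int rank, int ntasks, int gsize,
                         PIO_Offset *dofmap, int *maplen)
{
    int chunk = gsize / ntasks;             /* assumes gsize divides evenly */
    *maplen = chunk;
    for (int i = 0; i < chunk; i++)
        dofmap[i] = (PIO_Offset)(rank * chunk + i) + 1;   /* 1-based global index */
}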
/**
- * IO system descriptor structure.
+ * @brief io system descriptor structure
*
- * This structure contains the general IO subsystem data and MPI
- * structure
- */
+ * This structure contains the general IO subsystem data
+ * and MPI structure
+ *
+*/
typedef struct iosystem_desc_t
{
- /** The ID of this iosystem_desc_t. This will be obtained by
- * calling PIOc_Init_Intercomm() or PIOc_Init_Intracomm(). */
- int iosysid;
-
- /** This is an MPI intra communicator that includes all the tasks in
- * both the IO and the computation communicators. */
- MPI_Comm union_comm;
-
- /** This is an MPI intra communicator that includes all the tasks
- * involved in IO. */
- MPI_Comm io_comm;
-
- /** This is an MPI intra communicator that includes all the tasks
- * involved in computation. */
- MPI_Comm comp_comm;
-
- /** This is an MPI inter communicator between IO communicator and
- * computation communicator. */
- MPI_Comm intercomm;
-
- /** This is a copy (but not an MPI copy) of either the comp (for
- * non-async) or the union (for async) communicator. */
- MPI_Comm my_comm;
-
- /** This MPI group contains the processors involved in
- * computation. */
- MPI_Group compgroup;
+ int iosysid;
+ MPI_Comm union_comm;
+ MPI_Comm io_comm;
+ MPI_Comm comp_comm;
+ MPI_Comm intercomm;
+ MPI_Comm my_comm;
+
+ /** This MPI group contains the processors involved in
+ * computation. It is created in PIOc_Init_Intracomm(), and freed my
+ * PIO_finalize(). */
+ MPI_Group compgroup;
- /** This MPI group contains the processors involved in I/O. */
- MPI_Group iogroup;
-
- /** The number of tasks in the IO communicator. */
- int num_iotasks;
-
- /** The number of tasks in the computation communicator. */
- int num_comptasks;
-
- /** Rank of this task in the union communicator. */
- int union_rank;
-
- /** The rank of this process in the computation communicator, or -1
- * if this process is not part of the computation communicator. */
- int comp_rank;
-
- /** The rank of this process in the IO communicator, or -1 if this
- * process is not part of the IO communicator. */
- int io_rank;
-
- /** Set to MPI_ROOT if this task is the master of IO communicator, 0
- * otherwise. */
- int iomaster;
-
- /** Set to MPI_ROOT if this task is the master of comp communicator, 0
- * otherwise. */
- int compmaster;
-
- /** Rank of IO root task (which is rank 0 in io_comm) in the union
- * communicator. */
- int ioroot;
-
- /** Rank of computation root task (which is rank 0 in
- * comm_comms[cmp]) in the union communicator. */
- int comproot;
-
- /** An array of the ranks of all IO tasks within the union
- * communicator. */
- int *ioranks;
-
- /** Controls handling errors. */
- int error_handler;
+ /** This MPI group contains the processors involved in I/O. It is
+ * created in PIOc_Init_Intracomm(), and freed by PIOc_finalize(). */
+ MPI_Group iogroup;
+
+ int num_iotasks;
+ int num_comptasks;
- /** The rearranger decides which parts of a distributed array are
- * handled by which IO tasks. */
- int default_rearranger;
+ int union_rank;
+ int comp_rank;
+ int io_rank;
- /** True if asynchronous interface is in use. */
- bool async_interface;
+ bool iomaster;
+ bool compmaster;
- /** True if this task is a member of the IO communicator. */
- bool ioproc;
+ int ioroot;
+ int comproot;
+ int *ioranks;
- /** MPI Info object. */
- MPI_Info info;
+ int error_handler;
+ int default_rearranger;
- /** Pointer to the next iosystem_desc_t in the list. */
- struct iosystem_desc_t *next;
+ bool async_interface;
+ bool ioproc;
+
+ MPI_Info info;
+ struct iosystem_desc_t *next;
} iosystem_desc_t;
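
An iosystem_desc_t is created by PIOc_Init_Intracomm() and released with PIOc_finalize(), both declared below. A hedged sketch follows; the choice of one IO task per four compute tasks and of the box rearranger is illustrative, not prescribed by the patch.

    #include <mpi.h>
    #include <pio.h>

    /* Start an IO system on `comm`, using roughly every 4th rank for IO. */
    static int start_io(MPI_Comm comm, int *iosysidp)
    {
        int ntasks;
        MPI_Comm_size(comm, &ntasks);
        int niotasks = (ntasks / 4) ? ntasks / 4 : 1;
        return PIOc_Init_Intracomm(comm, niotasks, /* stride */ 4,
                                   /* base */ 0, PIO_REARR_BOX, iosysidp);
    }
    /* ...later, after all files are closed: PIOc_finalize(iosysid); */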
/**
- * multi buffer.
- */
+ * @brief multi buffer
+ *
+*/
typedef struct wmulti_buffer
{
- int ioid;
- int validvars;
- int arraylen;
- int *vid;
- int *frame;
- void *fillvalue;
- void *data;
- struct wmulti_buffer *next;
+ int ioid;
+ int validvars;
+ int arraylen;
+ int *vid;
+ int *frame;
+ void *fillvalue;
+ void *data;
+ struct wmulti_buffer *next;
} wmulti_buffer;
+
+
/**
- * File descriptor structure.
+ * @brief File descriptor structure
*
* This structure holds information associated with each open file
- */
+ *
+*/
typedef struct file_desc_t
{
- /** The IO system ID used to open this file. */
- iosystem_desc_t *iosystem;
-
- /** The buffersize does not seem to be used anywhere. */
- /* PIO_Offset buffsize;*/
-
- /** The ncid returned for this file by the underlying library
- * (netcdf or pnetcdf). */
- int fh;
-
- /** The PIO_TYPE value that was used to open this file. */
- int iotype;
-
- /** List of variables in this file. */
- struct var_desc_t varlist[PIO_MAX_VARS];
-
- /** ??? */
- int mode;
-
- /** ??? */
- struct wmulti_buffer buffer;
-
- /** Pointer to the next file_desc_t in the list of open files. */
- struct file_desc_t *next;
-
- /** True if this task should participate in IO (only true for one
- * task with netcdf serial files. */
- int do_io;
+ iosystem_desc_t *iosystem;
+ PIO_Offset buffsize;
+ int fh;
+ int iotype;
+ struct var_desc_t varlist[PIO_MAX_VARS];
+ int mode;
+ struct wmulti_buffer buffer;
+ struct file_desc_t *next;
} file_desc_t;
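
A file_desc_t is created behind the scenes when a file is opened or created; callers only see the ncid. A short sketch of the usual define/write sequence built from functions declared below; the dimension and variable names, PIO_DOUBLE, and the NULL fill value are assumptions for illustration.

    /* ncid comes from PIOc_createfile(); ioid from PIOc_InitDecomp(). */
    static int define_and_write(int ncid, int ioid, PIO_Offset global_len,
                                PIO_Offset local_len, double *local_data)
    {
        int dimid, varid;
        PIOc_def_dim(ncid, "x", global_len, &dimid);
        PIOc_def_var(ncid, "foo", PIO_DOUBLE, 1, &dimid, &varid);
        PIOc_enddef(ncid);
        PIOc_write_darray(ncid, varid, ioid, local_len, local_data,
                          NULL /* no explicit fill value */);
        return PIOc_closefile(ncid);
    }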
/**
- * These are the supported methods of reading/writing netCDF
- * files. (Not all methods can be used with all netCDF files.)
- */
-enum PIO_IOTYPE
-{
- /** Parallel Netcdf (parallel) */
- PIO_IOTYPE_PNETCDF = 1,
-
- /** Netcdf3 Classic format (serial) */
- PIO_IOTYPE_NETCDF = 2,
-
- /** NetCDF4 (HDF5) compressed format (serial) */
- PIO_IOTYPE_NETCDF4C = 3,
+ * @brief These are the supported methods (iotypes) for reading and writing netCDF files
+ *
+*/
- /** NetCDF4 (HDF5) parallel */
- PIO_IOTYPE_NETCDF4P = 4
+enum PIO_IOTYPE{
+  PIO_IOTYPE_PNETCDF=1,  ///< Parallel Netcdf (parallel)
+  PIO_IOTYPE_NETCDF=2,   ///< Netcdf3 Classic format (serial)
+  PIO_IOTYPE_NETCDF4C=3, ///< NetCDF4 (HDF5) compressed format (serial)
+  PIO_IOTYPE_NETCDF4P=4  ///< NetCDF4 (HDF5) parallel
};
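
The iotype is passed to PIOc_createfile()/PIOc_openfile() by address because the library may need to adjust it. A minimal sketch; the file name, and the PIO_CLOBBER and PIO_NOERR macros (assumed to mirror the usual NC_* values), are illustrative.

    int iotype = PIO_IOTYPE_PNETCDF;   /* request Parallel NetCDF output */
    int ncid;
    if (PIOc_createfile(iosysid, &ncid, &iotype, "example.nc",
                        PIO_CLOBBER) != PIO_NOERR) {
        /* handle the error */
    }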
/**
- * These are the supported output data rearrangement methods.
- */
-enum PIO_REARRANGERS
-{
- /** Box rearranger. */
- PIO_REARR_BOX = 1,
-
- /** Subset rearranger. */
- PIO_REARR_SUBSET = 2
+ * @brief These are the supported output data rearrangement methods
+ *
+*/
+enum PIO_REARRANGERS{
+ PIO_REARR_BOX = 1,
+ PIO_REARR_SUBSET = 2
};
/**
- * These are the supported error handlers.
- */
-enum PIO_ERROR_HANDLERS
-{
- /** Errors cause abort. */
- PIO_INTERNAL_ERROR = (-51),
-
- /** Error codes are broadcast to all tasks. */
- PIO_BCAST_ERROR = (-52),
-
- /** Errors are returned to caller with no internal action. */
- PIO_RETURN_ERROR = (-53)
+ * @brief These are the supported error handlers
+ *
+*/
+enum PIO_ERROR_HANDLERS{
+  PIO_INTERNAL_ERROR=(-51), ///< Errors cause abort
+  PIO_BCAST_ERROR=(-52),    ///< Error codes are broadcast to all tasks
+  PIO_RETURN_ERROR=(-53)    ///< Errors are returned to caller with no internal action
};
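
These handlers are installed with the error-handling setters declared below, either for a whole IO system or for a single open file:

    /* Return error codes to the caller instead of aborting. */
    PIOc_Set_IOSystem_Error_Handling(iosysid, PIO_RETURN_ERROR);

    /* Or, for one open file only, broadcast error codes to every task. */
    PIOc_Set_File_Error_Handling(ncid, PIO_BCAST_ERROR);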
-/** Define the netCDF-based error codes. */
#if defined( _PNETCDF) || defined(_NETCDF)
#define PIO_GLOBAL NC_GLOBAL
#define PIO_UNLIMITED NC_UNLIMITED
@@ -418,283 +338,269 @@ enum PIO_ERROR_HANDLERS
#define PIO_EBADCHUNK NC_EBADCHUNK
#define PIO_ENOTBUILT NC_ENOTBUILT
#define PIO_EDISKLESS NC_EDISKLESS
+
#define PIO_FILL_DOUBLE NC_FILL_DOUBLE
#define PIO_FILL_FLOAT NC_FILL_FLOAT
#define PIO_FILL_INT NC_FILL_INT
#define PIO_FILL_CHAR NC_FILL_CHAR
-#endif /* defined( _PNETCDF) || defined(_NETCDF) */
-/** Define the extra error codes for the parallel-netcdf library. */
+#endif
#ifdef _PNETCDF
#define PIO_EINDEP NC_EINDEP
-#else /* _PNETCDF */
+#else
#define PIO_EINDEP (-203)
-#endif /* _PNETCDF */
-
-/** Define error codes for PIO. */
-#define PIO_EBADIOTYPE -255
-
-/** ??? */
-#define PIO_REQ_NULL (NC_REQ_NULL-1)
-
+#endif
#if defined(__cplusplus)
extern "C" {
#endif
- int PIOc_strerror(int pioerr, char *errstr);
- int PIOc_freedecomp(int iosysid, int ioid);
- int PIOc_inq_att (int ncid, int varid, const char *name, nc_type *xtypep, PIO_Offset *lenp);
- int PIOc_inq_format (int ncid, int *formatp);
- int PIOc_inq_varid (int ncid, const char *name, int *varidp);
- int PIOc_inq_varnatts (int ncid, int varid, int *nattsp);
- int PIOc_def_var (int ncid, const char *name, nc_type xtype, int ndims, const int *dimidsp, int *varidp);
- int PIOc_def_var_deflate(int ncid, int varid, int shuffle, int deflate,
- int deflate_level);
- int PIOc_inq_var_deflate(int ncid, int varid, int *shufflep, int *deflatep,
- int *deflate_levelp);
- int PIOc_inq_var_szip(int ncid, int varid, int *options_maskp, int *pixels_per_blockp);
- int PIOc_def_var_chunking(int ncid, int varid, int storage, const PIO_Offset *chunksizesp);
- int PIOc_inq_var_chunking(int ncid, int varid, int *storagep, PIO_Offset *chunksizesp);
- int PIOc_def_var_fill(int ncid, int varid, int no_fill, const void *fill_value);
- int PIOc_inq_var_fill(int ncid, int varid, int *no_fill, void *fill_valuep);
- int PIOc_def_var_endian(int ncid, int varid, int endian);
- int PIOc_inq_var_endian(int ncid, int varid, int *endianp);
- int PIOc_set_chunk_cache(int iosysid, int iotype, PIO_Offset size, PIO_Offset nelems, float preemption);
- int PIOc_get_chunk_cache(int iosysid, int iotype, PIO_Offset *sizep, PIO_Offset *nelemsp, float *preemptionp);
- int PIOc_set_var_chunk_cache(int ncid, int varid, PIO_Offset size, PIO_Offset nelems,
- float preemption);
- int PIOc_get_var_chunk_cache(int ncid, int varid, PIO_Offset *sizep, PIO_Offset *nelemsp,
- float *preemptionp);
- int PIOc_inq_var (int ncid, int varid, char *name, nc_type *xtypep, int *ndimsp, int *dimidsp, int *nattsp);
- int PIOc_inq_varname (int ncid, int varid, char *name);
- int PIOc_put_att_double (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const double *op);
- int PIOc_put_att_int (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const int *op);
- int PIOc_rename_att (int ncid, int varid, const char *name, const char *newname);
- int PIOc_del_att (int ncid, int varid, const char *name);
- int PIOc_inq_natts (int ncid, int *ngattsp);
- int PIOc_inq (int ncid, int *ndimsp, int *nvarsp, int *ngattsp, int *unlimdimidp);
- int PIOc_get_att_text (int ncid, int varid, const char *name, char *ip);
- int PIOc_get_att_short (int ncid, int varid, const char *name, short *ip);
- int PIOc_put_att_long (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const long *op);
- int PIOc_redef (int ncid);
- int PIOc_set_fill (int ncid, int fillmode, int *old_modep);
- int PIOc_enddef (int ncid);
- int PIOc_rename_var (int ncid, int varid, const char *name);
- int PIOc_put_att_short (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const short *op);
- int PIOc_put_att_text (int ncid, int varid, const char *name, PIO_Offset len, const char *op);
- int PIOc_inq_attname (int ncid, int varid, int attnum, char *name);
- int PIOc_get_att_ulonglong (int ncid, int varid, const char *name, unsigned long long *ip);
- int PIOc_get_att_ushort (int ncid, int varid, const char *name, unsigned short *ip);
- int PIOc_put_att_ulonglong (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned long long *op);
- int PIOc_inq_dimlen (int ncid, int dimid, PIO_Offset *lenp);
- int PIOc_get_att_uint (int ncid, int varid, const char *name, unsigned int *ip);
- int PIOc_get_att_longlong (int ncid, int varid, const char *name, long long *ip);
- int PIOc_put_att_schar (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const signed char *op);
- int PIOc_put_att_float (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const float *op);
- int PIOc_inq_nvars (int ncid, int *nvarsp);
- int PIOc_rename_dim (int ncid, int dimid, const char *name);
- int PIOc_inq_varndims (int ncid, int varid, int *ndimsp);
- int PIOc_get_att_long (int ncid, int varid, const char *name, long *ip);
- int PIOc_inq_dim (int ncid, int dimid, char *name, PIO_Offset *lenp);
- int PIOc_inq_dimid (int ncid, const char *name, int *idp);
- int PIOc_inq_unlimdim (int ncid, int *unlimdimidp);
- int PIOc_inq_vardimid (int ncid, int varid, int *dimidsp);
- int PIOc_inq_attlen (int ncid, int varid, const char *name, PIO_Offset *lenp);
- int PIOc_inq_dimname (int ncid, int dimid, char *name);
- int PIOc_put_att_ushort (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned short *op);
- int PIOc_get_att_float (int ncid, int varid, const char *name, float *ip);
- int PIOc_sync (int ncid);
- int PIOc_put_att_longlong (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const long long *op);
- int PIOc_put_att_uint (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned int *op);
- int PIOc_get_att_schar (int ncid, int varid, const char *name, signed char *ip);
- int PIOc_inq_attid (int ncid, int varid, const char *name, int *idp);
- int PIOc_def_dim (int ncid, const char *name, PIO_Offset len, int *idp);
- int PIOc_inq_ndims (int ncid, int *ndimsp);
- int PIOc_inq_vartype (int ncid, int varid, nc_type *xtypep);
- int PIOc_get_att_int (int ncid, int varid, const char *name, int *ip);
- int PIOc_get_att_double (int ncid, int varid, const char *name, double *ip);
- int PIOc_inq_atttype (int ncid, int varid, const char *name, nc_type *xtypep);
- int PIOc_put_att_uchar (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned char *op);
- int PIOc_get_att_uchar (int ncid, int varid, const char *name, unsigned char *ip);
- int PIOc_InitDecomp(const int iosysid, const int basetype,const int ndims, const int dims[],
- const int maplen, const PIO_Offset *compmap, int *ioidp, const int *rearr,
- const PIO_Offset *iostart,const PIO_Offset *iocount);
- int PIOc_Init_Intracomm(const MPI_Comm comp_comm,
- const int num_iotasks, const int stride,
- const int base, const int rearr, int *iosysidp);
- int PIOc_Init_Intercomm(int component_count, MPI_Comm peer_comm, MPI_Comm *comp_comms,
- MPI_Comm io_comm, int *iosysidp);
- int PIOc_closefile(int ncid);
- int PIOc_createfile(const int iosysid, int *ncidp, int *iotype,
- const char *fname, const int mode);
- int PIOc_openfile(const int iosysid, int *ncidp, int *iotype,
- const char *fname, const int mode);
- int PIOc_write_darray(const int ncid, const int vid, const int ioid, const PIO_Offset arraylen,
- void *array, void *fillvalue);
- int PIOc_write_darray_multi(const int ncid, const int vid[], const int ioid, const int nvars, const PIO_Offset arraylen,
- void *array, const int frame[], void *fillvalue[], bool flushtodisk);
-
- int PIOc_get_att_ubyte (int ncid, int varid, const char *name, unsigned char *ip);
- int PIOc_put_att_ubyte (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned char *op) ;
- int PIOc_set_blocksize(const int newblocksize);
- int PIOc_readmap(const char file[], int *ndims, int *gdims[], PIO_Offset *fmaplen, PIO_Offset *map[], const MPI_Comm comm);
- int PIOc_readmap_from_f90(const char file[],int *ndims, int *gdims[], PIO_Offset *maplen, PIO_Offset *map[], const int f90_comm);
- int PIOc_writemap(const char file[], const int ndims, const int gdims[], PIO_Offset maplen, PIO_Offset map[], const MPI_Comm comm);
- int PIOc_writemap_from_f90(const char file[], const int ndims, const int gdims[], const PIO_Offset maplen, const PIO_Offset map[], const int f90_comm);
- int PIOc_deletefile(const int iosysid, const char filename[]);
- int PIOc_File_is_Open(int ncid);
- int PIOc_Set_File_Error_Handling(int ncid, int method);
- int PIOc_advanceframe(int ncid, int varid);
- int PIOc_setframe(const int ncid, const int varid,const int frame);
- int PIOc_get_numiotasks(int iosysid, int *numiotasks);
- int PIOc_get_iorank(int iosysid, int *iorank);
- int PIOc_get_local_array_size(int ioid);
- int PIOc_Set_IOSystem_Error_Handling(int iosysid, int method);
- int PIOc_set_hint(const int iosysid, char hint[], const char hintval[]);
- int PIOc_Init_Intracomm(const MPI_Comm comp_comm,
- const int num_iotasks, const int stride,
- const int base,const int rearr, int *iosysidp);
- int PIOc_finalize(const int iosysid);
- int PIOc_iam_iotask(const int iosysid, bool *ioproc);
- int PIOc_iotask_rank(const int iosysid, int *iorank);
- int PIOc_iosystem_is_active(const int iosysid, bool *active);
- int PIOc_put_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned char *op) ;
- int PIOc_get_var1_schar (int ncid, int varid, const PIO_Offset index[], signed char *buf) ;
- int PIOc_put_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned short *op) ;
- int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid, void *IOBUF);
- int PIOc_put_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned long long *op) ;
- int PIOc_get_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned long long *buf) ;
- int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype) ;
- int PIOc_read_darray(const int ncid, const int vid, const int ioid, const PIO_Offset arraylen, void *array);
- int PIOc_put_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned int *op) ;
- int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], signed char *buf) ;
- int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned char *op) ;
- int PIOc_put_var_ushort (int ncid, int varid, const unsigned short *op) ;
- int PIOc_get_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], short *buf) ;
- int PIOc_put_var1_longlong (int ncid, int varid, const PIO_Offset index[], const long long *op) ;
- int PIOc_get_var_double (int ncid, int varid, double *buf) ;
- int PIOc_put_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned char *op) ;
- int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const short *op) ;
- int PIOc_get_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], double *buf) ;
- int PIOc_put_var1_long (int ncid, int varid, const PIO_Offset index[], const long *ip) ;
- int PIOc_get_var_int (int ncid, int varid, int *buf) ;
- int PIOc_put_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long *op) ;
- int PIOc_put_var_short (int ncid, int varid, const short *op) ;
- int PIOc_get_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], char *buf) ;
- int PIOc_put_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const int *op) ;
-
- int PIOc_put_var1_ushort (int ncid, int varid, const PIO_Offset index[], const unsigned short *op);
- int PIOc_put_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const char *op);
- int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const char *op);
- int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned short *op);
- int PIOc_put_var_ulonglong (int ncid, int varid, const unsigned long long *op);
- int PIOc_put_var_int (int ncid, int varid, const int *op);
- int PIOc_put_var_longlong (int ncid, int varid, const long long *op);
- int PIOc_put_var_schar (int ncid, int varid, const signed char *op);
- int PIOc_put_var_uint (int ncid, int varid, const unsigned int *op);
- int PIOc_put_var (int ncid, int varid, const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_put_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned short *op);
- int PIOc_put_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const short *op);
- int PIOc_put_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned int *op);
- int PIOc_put_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const signed char *op);
- int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned long long *op);
- int PIOc_put_var1_uchar (int ncid, int varid, const PIO_Offset index[], const unsigned char *op);
- int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const int *op);
- int PIOc_put_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const signed char *op);
- int PIOc_put_var1 (int ncid, int varid, const PIO_Offset index[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_put_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const float *op);
- int PIOc_put_var1_float (int ncid, int varid, const PIO_Offset index[], const float *op);
- int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const float *op);
- int PIOc_put_var1_text (int ncid, int varid, const PIO_Offset index[], const char *op);
- int PIOc_put_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const char *op);
- int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long *op);
- int PIOc_put_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const double *op);
- int PIOc_put_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long long *op);
- int PIOc_put_var_double (int ncid, int varid, const double *op);
- int PIOc_put_var_float (int ncid, int varid, const float *op);
- int PIOc_put_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], const unsigned long long *op);
- int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned int *op);
- int PIOc_put_var1_uint (int ncid, int varid, const PIO_Offset index[], const unsigned int *op);
- int PIOc_put_var1_int (int ncid, int varid, const PIO_Offset index[], const int *op);
- int PIOc_put_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const float *op);
- int PIOc_put_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const short *op);
- int PIOc_put_var1_schar (int ncid, int varid, const PIO_Offset index[], const signed char *op);
- int PIOc_put_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned long long *op);
- int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const double *op);
- int PIOc_put_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_put_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long *op);
- int PIOc_put_var1_double (int ncid, int varid, const PIO_Offset index[], const double *op);
- int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const signed char *op);
- int PIOc_put_var_text (int ncid, int varid, const char *op);
- int PIOc_put_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const int *op);
- int PIOc_put_var1_short (int ncid, int varid, const PIO_Offset index[], const short *op);
- int PIOc_put_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long long *op);
- int PIOc_put_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const double *op);
- int PIOc_put_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_put_vars_tc(int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], nc_type xtype, const void *buf);
- int PIOc_put_var_uchar (int ncid, int varid, const unsigned char *op);
- int PIOc_put_var_long (int ncid, int varid, const long *op);
- int PIOc_put_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long long *op);
- int PIOc_get_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], int *buf);
- int PIOc_get_var1_float (int ncid, int varid, const PIO_Offset index[], float *buf);
- int PIOc_get_var1_short (int ncid, int varid, const PIO_Offset index[], short *buf);
- int PIOc_get_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], int *buf);
- int PIOc_get_var_text (int ncid, int varid, char *buf);
- int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], double *buf);
- int PIOc_get_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], signed char *buf);
- int PIOc_get_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned short *buf);
- int PIOc_get_var1_ushort (int ncid, int varid, const PIO_Offset index[], unsigned short *buf);
- int PIOc_get_var_float (int ncid, int varid, float *buf);
- int PIOc_get_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned char *buf);
- int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_get_var1_longlong (int ncid, int varid, const PIO_Offset index[], long long *buf);
- int PIOc_get_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned short *buf);
- int PIOc_get_var_long (int ncid, int varid, long *buf);
- int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset index[], double *buf);
- int PIOc_get_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned int *buf);
- int PIOc_get_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long long *buf);
- int PIOc_get_var_longlong (int ncid, int varid, long long *buf);
- int PIOc_get_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], short *buf);
- int PIOc_get_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long *buf);
- int PIOc_get_var1_int (int ncid, int varid, const PIO_Offset index[], int *buf);
- int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], unsigned long long *buf);
- int PIOc_get_var_uchar (int ncid, int varid, unsigned char *buf);
- int PIOc_get_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned char *buf);
- int PIOc_get_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], float *buf);
- int PIOc_get_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long *buf);
- int PIOc_get_var1 (int ncid, int varid, const PIO_Offset index[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_get_var_uint (int ncid, int varid, unsigned int *buf);
- int PIOc_get_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_get_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], signed char *buf);
- int PIOc_get_var1_uint (int ncid, int varid, const PIO_Offset index[], unsigned int *buf);
- int PIOc_get_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned int *buf);
- int PIOc_get_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], float *buf);
- int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], char *buf);
- int PIOc_get_var1_text (int ncid, int varid, const PIO_Offset index[], char *buf);
- int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], int *buf);
- int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned int *buf);
- int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_get_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], double *buf);
- int PIOc_get_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long long *buf);
- int PIOc_get_var_ulonglong (int ncid, int varid, unsigned long long *buf);
- int PIOc_get_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned long long *buf);
- int PIOc_get_var_short (int ncid, int varid, short *buf);
- int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], float *buf);
- int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset index[], long *buf);
- int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long *buf);
- int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned short *buf);
- int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long long *buf);
- int PIOc_get_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], char *buf);
- int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset index[], unsigned char *buf);
- int PIOc_get_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
- int PIOc_get_vars_tc(int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], nc_type xtype, void *buf);
- int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], short *buf);
- int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned long long *buf);
- int PIOc_get_var_schar (int ncid, int varid, signed char *buf);
- int PIOc_iotype_available(const int iotype);
- int PIOc_set_log_level(int level);
- int PIOc_put_att(int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const void *op);
- int PIOc_get_att(int ncid, int varid, const char *name, void *ip);
- int PIOc_inq_type(int ncid, nc_type xtype, char *name, PIO_Offset *sizep);
+#define PIO_EBADIOTYPE -255
+#define PIO_REQ_NULL (NC_REQ_NULL-1)
+int PIOc_freedecomp(int iosysid, int ioid);
+int PIOc_inq_att (int ncid, int varid, const char *name, nc_type *xtypep, PIO_Offset *lenp);
+int PIOc_inq_format (int ncid, int *formatp);
+int PIOc_inq_varid (int ncid, const char *name, int *varidp);
+int PIOc_inq_varnatts (int ncid, int varid, int *nattsp);
+int PIOc_def_var (int ncid, const char *name, nc_type xtype, int ndims, const int *dimidsp, int *varidp);
+int PIOc_def_var_deflate(int ncid, int varid, int shuffle, int deflate,
+ int deflate_level);
+int PIOc_inq_var_deflate(int ncid, int varid, int *shufflep,
+ int *deflatep, int *deflate_levelp);
+int PIOc_inq_var_szip(int ncid, int varid, int *options_maskp, int *pixels_per_blockp);
+int PIOc_def_var_chunking(int ncid, int varid, int storage, const PIO_Offset *chunksizesp);
+int PIOc_inq_var_chunking(int ncid, int varid, int *storagep, PIO_Offset *chunksizesp);
+int PIOc_def_var_fill(int ncid, int varid, int no_fill, const void *fill_value);
+int PIOc_inq_var_fill(int ncid, int varid, int *no_fill, void *fill_valuep);
+int PIOc_def_var_endian(int ncid, int varid, int endian);
+int PIOc_inq_var_endian(int ncid, int varid, int *endianp);
+int PIOc_set_chunk_cache(int iosysid, int iotype, PIO_Offset size, PIO_Offset nelems, float preemption);
+int PIOc_get_chunk_cache(int iosysid, int iotype, PIO_Offset *sizep, PIO_Offset *nelemsp, float *preemptionp);
+int PIOc_set_var_chunk_cache(int ncid, int varid, PIO_Offset size, PIO_Offset nelems,
+ float preemption);
+int PIOc_get_var_chunk_cache(int ncid, int varid, PIO_Offset *sizep, PIO_Offset *nelemsp,
+ float *preemptionp);
+int PIOc_inq_var (int ncid, int varid, char *name, nc_type *xtypep, int *ndimsp, int *dimidsp, int *nattsp);
+int PIOc_inq_varname (int ncid, int varid, char *name);
+int PIOc_put_att_double (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const double *op);
+int PIOc_put_att_int (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const int *op);
+int PIOc_rename_att (int ncid, int varid, const char *name, const char *newname);
+int PIOc_del_att (int ncid, int varid, const char *name);
+int PIOc_inq_natts (int ncid, int *ngattsp);
+int PIOc_inq (int ncid, int *ndimsp, int *nvarsp, int *ngattsp, int *unlimdimidp);
+int PIOc_get_att_text (int ncid, int varid, const char *name, char *ip);
+int PIOc_get_att_short (int ncid, int varid, const char *name, short *ip);
+int PIOc_put_att_long (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const long *op);
+int PIOc_redef (int ncid);
+int PIOc_set_fill (int ncid, int fillmode, int *old_modep);
+int PIOc_enddef (int ncid);
+int PIOc_rename_var (int ncid, int varid, const char *name);
+int PIOc_put_att_short (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const short *op);
+int PIOc_put_att_text (int ncid, int varid, const char *name, PIO_Offset len, const char *op);
+int PIOc_inq_attname (int ncid, int varid, int attnum, char *name);
+int PIOc_get_att_ulonglong (int ncid, int varid, const char *name, unsigned long long *ip);
+int PIOc_get_att_ushort (int ncid, int varid, const char *name, unsigned short *ip);
+int PIOc_put_att_ulonglong (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned long long *op);
+int PIOc_inq_dimlen (int ncid, int dimid, PIO_Offset *lenp);
+int PIOc_get_att_uint (int ncid, int varid, const char *name, unsigned int *ip);
+int PIOc_get_att_longlong (int ncid, int varid, const char *name, long long *ip);
+int PIOc_put_att_schar (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const signed char *op);
+int PIOc_put_att_float (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const float *op);
+int PIOc_inq_nvars (int ncid, int *nvarsp);
+int PIOc_rename_dim (int ncid, int dimid, const char *name);
+int PIOc_inq_varndims (int ncid, int varid, int *ndimsp);
+int PIOc_get_att_long (int ncid, int varid, const char *name, long *ip);
+int PIOc_inq_dim (int ncid, int dimid, char *name, PIO_Offset *lenp);
+int PIOc_inq_dimid (int ncid, const char *name, int *idp);
+int PIOc_inq_unlimdim (int ncid, int *unlimdimidp);
+int PIOc_inq_vardimid (int ncid, int varid, int *dimidsp);
+int PIOc_inq_attlen (int ncid, int varid, const char *name, PIO_Offset *lenp);
+int PIOc_inq_dimname (int ncid, int dimid, char *name);
+int PIOc_put_att_ushort (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned short *op);
+int PIOc_get_att_float (int ncid, int varid, const char *name, float *ip);
+int PIOc_sync (int ncid);
+int PIOc_put_att_longlong (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const long long *op);
+int PIOc_put_att_uint (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned int *op);
+int PIOc_get_att_schar (int ncid, int varid, const char *name, signed char *ip);
+int PIOc_inq_attid (int ncid, int varid, const char *name, int *idp);
+int PIOc_def_dim (int ncid, const char *name, PIO_Offset len, int *idp);
+int PIOc_inq_ndims (int ncid, int *ndimsp);
+int PIOc_inq_vartype (int ncid, int varid, nc_type *xtypep);
+int PIOc_get_att_int (int ncid, int varid, const char *name, int *ip);
+int PIOc_get_att_double (int ncid, int varid, const char *name, double *ip);
+int PIOc_inq_atttype (int ncid, int varid, const char *name, nc_type *xtypep);
+int PIOc_put_att_uchar (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned char *op);
+int PIOc_get_att_uchar (int ncid, int varid, const char *name, unsigned char *ip);
+int PIOc_InitDecomp(const int iosysid, const int basetype,const int ndims, const int dims[],
+ const int maplen, const PIO_Offset *compmap, int *ioidp, const int *rearr,
+ const PIO_Offset *iostart,const PIO_Offset *iocount);
+int PIOc_Init_Intracomm(const MPI_Comm comp_comm,
+ const int num_iotasks, const int stride,
+ const int base, const int rearr, int *iosysidp);
+int PIOc_closefile(int ncid);
+int PIOc_createfile(const int iosysid, int *ncidp, int *iotype,
+ const char *fname, const int mode);
+int PIOc_openfile(const int iosysid, int *ncidp, int *iotype,
+ const char *fname, const int mode);
+int PIOc_write_darray(const int ncid, const int vid, const int ioid, const PIO_Offset arraylen, void *array, void *fillvalue);
+ int PIOc_write_darray_multi(const int ncid, const int vid[], const int ioid, const int nvars, const PIO_Offset arraylen, void *array, const int frame[], void *fillvalue[], bool flushtodisk);
+
+int PIOc_get_att_ubyte (int ncid, int varid, const char *name, unsigned char *ip);
+int PIOc_put_att_ubyte (int ncid, int varid, const char *name, nc_type xtype, PIO_Offset len, const unsigned char *op) ;
+int PIOc_set_blocksize(const int newblocksize);
+ int PIOc_readmap(const char file[], int *ndims, int *gdims[], PIO_Offset *fmaplen, PIO_Offset *map[], const MPI_Comm comm);
+ int PIOc_readmap_from_f90(const char file[],int *ndims, int *gdims[], PIO_Offset *maplen, PIO_Offset *map[], const int f90_comm);
+ int PIOc_writemap(const char file[], const int ndims, const int gdims[], PIO_Offset maplen, PIO_Offset map[], const MPI_Comm comm);
+ int PIOc_writemap_from_f90(const char file[], const int ndims, const int gdims[], const PIO_Offset maplen, const PIO_Offset map[], const int f90_comm);
+ int PIOc_deletefile(const int iosysid, const char filename[]);
+ int PIOc_File_is_Open(int ncid);
+ int PIOc_Set_File_Error_Handling(int ncid, int method);
+ int PIOc_advanceframe(int ncid, int varid);
+ int PIOc_setframe(const int ncid, const int varid,const int frame);
+ int PIOc_get_numiotasks(int iosysid, int *numiotasks);
+ int PIOc_get_iorank(int iosysid, int *iorank);
+ int PIOc_get_local_array_size(int ioid);
+ int PIOc_Set_IOSystem_Error_Handling(int iosysid, int method);
+ int PIOc_set_hint(const int iosysid, char hint[], const char hintval[]);
+ int PIOc_Init_Intracomm(const MPI_Comm comp_comm,
+ const int num_iotasks, const int stride,
+ const int base,const int rearr, int *iosysidp);
+ int PIOc_finalize(const int iosysid);
+ int PIOc_iam_iotask(const int iosysid, bool *ioproc);
+ int PIOc_iotask_rank(const int iosysid, int *iorank);
+ int PIOc_iosystem_is_active(const int iosysid, bool *active);
+ int PIOc_put_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned char *op) ;
+ int PIOc_get_var1_schar (int ncid, int varid, const PIO_Offset index[], signed char *buf) ;
+ int PIOc_put_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned short *op) ;
+ int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid, void *IOBUF);
+ int PIOc_put_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned long long *op) ;
+ int PIOc_get_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned long long *buf) ;
+ int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype) ;
+ int PIOc_read_darray(const int ncid, const int vid, const int ioid, const PIO_Offset arraylen, void *array);
+ int PIOc_put_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned int *op) ;
+ int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], signed char *buf) ;
+ int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned char *op) ;
+ int PIOc_put_var_ushort (int ncid, int varid, const unsigned short *op) ;
+ int PIOc_get_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], short *buf) ;
+ int PIOc_put_var1_longlong (int ncid, int varid, const PIO_Offset index[], const long long *op) ;
+ int PIOc_get_var_double (int ncid, int varid, double *buf) ;
+ int PIOc_put_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned char *op) ;
+ int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const short *op) ;
+ int PIOc_get_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], double *buf) ;
+ int PIOc_put_var1_long (int ncid, int varid, const PIO_Offset index[], const long *ip) ;
+ int PIOc_get_var_int (int ncid, int varid, int *buf) ;
+ int PIOc_put_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long *op) ;
+ int PIOc_put_var_short (int ncid, int varid, const short *op) ;
+ int PIOc_get_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], char *buf) ;
+ int PIOc_put_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const int *op) ;
+
+ int PIOc_put_var1_ushort (int ncid, int varid, const PIO_Offset index[], const unsigned short *op);
+ int PIOc_put_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const char *op);
+ int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const char *op);
+ int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned short *op);
+ int PIOc_put_var_ulonglong (int ncid, int varid, const unsigned long long *op);
+ int PIOc_put_var_int (int ncid, int varid, const int *op);
+ int PIOc_put_var_longlong (int ncid, int varid, const long long *op);
+ int PIOc_put_var_schar (int ncid, int varid, const signed char *op);
+ int PIOc_put_var_uint (int ncid, int varid, const unsigned int *op);
+ int PIOc_put_var (int ncid, int varid, const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_put_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned short *op);
+ int PIOc_put_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const short *op);
+ int PIOc_put_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned int *op);
+ int PIOc_put_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const signed char *op);
+ int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned long long *op);
+ int PIOc_put_var1_uchar (int ncid, int varid, const PIO_Offset index[], const unsigned char *op);
+ int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const int *op);
+ int PIOc_put_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const signed char *op);
+ int PIOc_put_var1 (int ncid, int varid, const PIO_Offset index[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_put_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const float *op);
+ int PIOc_put_var1_float (int ncid, int varid, const PIO_Offset index[], const float *op);
+ int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const float *op);
+ int PIOc_put_var1_text (int ncid, int varid, const PIO_Offset index[], const char *op);
+ int PIOc_put_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const char *op);
+ int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long *op);
+ int PIOc_put_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const double *op);
+ int PIOc_put_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long long *op);
+ int PIOc_put_var_double (int ncid, int varid, const double *op);
+ int PIOc_put_var_float (int ncid, int varid, const float *op);
+ int PIOc_put_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], const unsigned long long *op);
+ int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned int *op);
+ int PIOc_put_var1_uint (int ncid, int varid, const PIO_Offset index[], const unsigned int *op);
+ int PIOc_put_var1_int (int ncid, int varid, const PIO_Offset index[], const int *op);
+ int PIOc_put_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const float *op);
+ int PIOc_put_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const short *op);
+ int PIOc_put_var1_schar (int ncid, int varid, const PIO_Offset index[], const signed char *op);
+ int PIOc_put_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned long long *op);
+ int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const double *op);
+ int PIOc_put_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_put_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long *op);
+ int PIOc_put_var1_double (int ncid, int varid, const PIO_Offset index[], const double *op);
+ int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const signed char *op);
+ int PIOc_put_var_text (int ncid, int varid, const char *op);
+ int PIOc_put_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const int *op);
+ int PIOc_put_var1_short (int ncid, int varid, const PIO_Offset index[], const short *op);
+ int PIOc_put_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long long *op);
+ int PIOc_put_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const double *op);
+ int PIOc_put_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_put_var_uchar (int ncid, int varid, const unsigned char *op);
+ int PIOc_put_var_long (int ncid, int varid, const long *op);
+ int PIOc_put_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long long *op);
+ int PIOc_get_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], int *buf);
+ int PIOc_get_var1_float (int ncid, int varid, const PIO_Offset index[], float *buf);
+ int PIOc_get_var1_short (int ncid, int varid, const PIO_Offset index[], short *buf);
+ int PIOc_get_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], int *buf);
+ int PIOc_get_var_text (int ncid, int varid, char *buf);
+ int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], double *buf);
+ int PIOc_get_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], signed char *buf);
+ int PIOc_get_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned short *buf);
+ int PIOc_get_var1_ushort (int ncid, int varid, const PIO_Offset index[], unsigned short *buf);
+ int PIOc_get_var_float (int ncid, int varid, float *buf);
+ int PIOc_get_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned char *buf);
+ int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_get_var1_longlong (int ncid, int varid, const PIO_Offset index[], long long *buf);
+ int PIOc_get_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned short *buf);
+ int PIOc_get_var_long (int ncid, int varid, long *buf);
+ int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset index[], double *buf);
+ int PIOc_get_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned int *buf);
+ int PIOc_get_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long long *buf);
+ int PIOc_get_var_longlong (int ncid, int varid, long long *buf);
+ int PIOc_get_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], short *buf);
+ int PIOc_get_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long *buf);
+ int PIOc_get_var1_int (int ncid, int varid, const PIO_Offset index[], int *buf);
+ int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], unsigned long long *buf);
+ int PIOc_get_var_uchar (int ncid, int varid, unsigned char *buf);
+ int PIOc_get_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned char *buf);
+ int PIOc_get_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], float *buf);
+ int PIOc_get_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long *buf);
+ int PIOc_get_var1 (int ncid, int varid, const PIO_Offset index[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_get_var_uint (int ncid, int varid, unsigned int *buf);
+ int PIOc_get_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_get_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], signed char *buf);
+ int PIOc_get_var1_uint (int ncid, int varid, const PIO_Offset index[], unsigned int *buf);
+ int PIOc_get_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned int *buf);
+ int PIOc_get_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], float *buf);
+ int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], char *buf);
+ int PIOc_get_var1_text (int ncid, int varid, const PIO_Offset index[], char *buf);
+ int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], int *buf);
+ int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned int *buf);
+ int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_get_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], double *buf);
+ int PIOc_get_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long long *buf);
+ int PIOc_get_var_ulonglong (int ncid, int varid, unsigned long long *buf);
+ int PIOc_get_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned long long *buf);
+ int PIOc_get_var_short (int ncid, int varid, short *buf);
+ int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], float *buf);
+ int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset index[], long *buf);
+ int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long *buf);
+ int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned short *buf);
+ int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long long *buf);
+ int PIOc_get_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], char *buf);
+ int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset index[], unsigned char *buf);
+ int PIOc_get_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype);
+ int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], short *buf);
+ int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned long long *buf);
+ int PIOc_get_var_schar (int ncid, int varid, signed char *buf);
+ int PIOc_iotype_available(const int iotype);
+
#if defined(__cplusplus)
}
#endif
diff --git a/cime/externals/pio2/src/clib/pio_c_get_template.c b/cime/externals/pio2/src/clib/pio_c_get_template.c
index fa9b6ce17312..e846a0216798 100644
--- a/cime/externals/pio2/src/clib/pio_c_get_template.c
+++ b/cime/externals/pio2/src/clib/pio_c_get_template.c
@@ -18,7 +18,7 @@ int PIO_function()
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -53,7 +53,7 @@ int PIO_function()
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
diff --git a/cime/externals/pio2/src/clib/pio_c_put_template.c b/cime/externals/pio2/src/clib/pio_c_put_template.c
index 7d682e6ec0dc..6ffc2e93e719 100644
--- a/cime/externals/pio2/src/clib/pio_c_put_template.c
+++ b/cime/externals/pio2/src/clib/pio_c_put_template.c
@@ -1,8 +1,8 @@
///
/// PIO interface to nc_function
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
int PIO_function()
@@ -25,7 +25,7 @@ int PIO_function()
msg = 0;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -52,7 +52,7 @@ int PIO_function()
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
diff --git a/cime/externals/pio2/src/clib/pio_c_template.c b/cime/externals/pio2/src/clib/pio_c_template.c
index 4cbc47944482..033cfb98663c 100644
--- a/cime/externals/pio2/src/clib/pio_c_template.c
+++ b/cime/externals/pio2/src/clib/pio_c_template.c
@@ -17,7 +17,7 @@ int PIO_function()
msg = 0;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
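The get, put, and generic templates above all begin with the same async handshake: when the async interface is active and the caller is a compute task, the compute master pings the IO root and the netCDF file handle is broadcast over the intercommunicator. A stripped-down sketch of that shared pattern, using only names visible in the hunks above (msg value and error handling omitted):

    if (ios->async_interface && !ios->ioproc) {
        if (ios->compmaster)
            mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm); /* wake the IO root */
        mpierr = MPI_Bcast(&(file->fh), 1, MPI_INT, 0, ios->intercomm);           /* share the file handle */
    }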
diff --git a/cime/externals/pio2/src/clib/pio_darray.c b/cime/externals/pio2/src/clib/pio_darray.c
index bf734291a953..75a451cba59b 100644
--- a/cime/externals/pio2/src/clib/pio_darray.c
+++ b/cime/externals/pio2/src/clib/pio_darray.c
@@ -13,20 +13,16 @@
#include <pio.h>
#include <pio_internal.h>
-
#define PIO_WRITE_BUFFERING 1
PIO_Offset PIO_BUFFER_SIZE_LIMIT=10485760; // 10MB default limit
bufsize PIO_CNBUFFER_LIMIT=33554432;
static void *CN_bpool=NULL;
static PIO_Offset maxusage=0;
-
-/** Set the pio buffer size limit. This is the size of the data buffer
- * on the IO nodes.
+/** @brief Set the pio buffer size limit, this is the size of the data buffer on the IO nodes.
*
- * The pio_buffer_size_limit will only apply to files opened after
- * the setting is changed.
*
+ * The pio_buffer_size_limit will only apply to files opened after the setting is changed.
* @param limit the size of the buffer on the IO nodes
* @return The previous limit setting.
*/
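For reference, the limit setter documented above is typically used like this (a minimal sketch; the 64 MB value is arbitrary and not taken from this patch):

    PIO_Offset prev = PIOc_set_buffer_size_limit(64 * 1048576); /* raise the IO-node buffer limit */
    /* files opened after this point use the new limit */
    PIOc_set_buffer_size_limit(prev);                           /* restore the previous limit */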
@@ -39,108 +35,87 @@ static PIO_Offset maxusage=0;
return(oldsize);
}
-/** Initialize the compute buffer to size PIO_CNBUFFER_LIMIT.
- *
- * This routine initializes the compute buffer pool if the bget memory
- * management is used.
+/** @brief Initialize the compute buffer to size PIO_CNBUFFER_LIMIT
*
+ * This routine initializes the compute buffer pool if the bget memory management is used.
* @param ios the iosystem descriptor which will use the new buffer
*/
+
void compute_buffer_init(iosystem_desc_t ios)
{
#ifndef PIO_USE_MALLOC
- if (!CN_bpool)
- {
- if (!(CN_bpool = malloc(PIO_CNBUFFER_LIMIT)))
- {
+ if(CN_bpool == NULL){
+ CN_bpool = malloc( PIO_CNBUFFER_LIMIT );
+ if(CN_bpool==NULL){
char errmsg[180];
- sprintf(errmsg,"Unable to allocate a buffer pool of size %d on task %d:"
- " try reducing PIO_CNBUFFER_LIMIT\n", PIO_CNBUFFER_LIMIT, ios.comp_rank);
+ sprintf(errmsg,"Unable to allocate a buffer pool of size %d on task %d: try reducing PIO_CNBUFFER_LIMIT\n",PIO_CNBUFFER_LIMIT,ios.comp_rank);
piodie(errmsg,__FILE__,__LINE__);
}
-
bpool( CN_bpool, PIO_CNBUFFER_LIMIT);
- if (!CN_bpool)
- {
+ if(CN_bpool==NULL){
char errmsg[180];
- sprintf(errmsg,"Unable to allocate a buffer pool of size %d on task %d:"
- " try reducing PIO_CNBUFFER_LIMIT\n", PIO_CNBUFFER_LIMIT, ios.comp_rank);
+ sprintf(errmsg,"Unable to allocate a buffer pool of size %d on task %d: try reducing PIO_CNBUFFER_LIMIT\n",PIO_CNBUFFER_LIMIT,ios.comp_rank);
piodie(errmsg,__FILE__,__LINE__);
}
-
bectl(NULL, malloc, free, PIO_CNBUFFER_LIMIT);
}
#endif
}
-/** Write a single distributed field to output. This routine is only
- * used if aggregation is off.
- *
- * @param[in] file: a pointer to the open file descriptor for the file
- * that will be written to
- *
+/** @ingroup PIO_write_darray
+ * @brief Write a single distributed field to output. This routine is only used if aggregation is off.
+ * @param[in] file: a pointer to the open file descriptor for the file that will be written to
* @param[in] iodesc: a pointer to the defined iodescriptor for the buffer
- *
* @param[in] vid: the variable id to be written
- *
* @param[in] IOBUF: the buffer to be written from this mpi task
- *
- * @param[in] fillvalue: the optional fillvalue to be used for missing
- * data in this buffer
- *
- * @return 0 for success, error code otherwise.
- *
- * @ingroup PIO_write_darray
+ * @param[in] fillvalue: the optional fillvalue to be used for missing data in this buffer
*/
-int pio_write_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
- void *IOBUF, void *fillvalue)
+ int pio_write_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid, void *IOBUF, void *fillvalue)
{
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
var_desc_t *vdesc;
int ndims;
- int ierr = PIO_NOERR; /** Return code from function calls. */
+ int ierr;
int i;
- int mpierr = MPI_SUCCESS; /** Return code from MPI function codes. */
+ int msg;
+ int mpierr;
int dsize;
MPI_Status status;
PIO_Offset usage;
int fndims;
- PIO_Offset tdsize = 0;
-
+ PIO_Offset tdsize;
#ifdef TIMING
GPTLstart("PIO:write_darray_nc");
#endif
- /* Get the IO system info. */
- if (!(ios = file->iosystem))
- return PIO_EBADID;
+ tdsize=0;
+ ierr = PIO_NOERR;
- /* Get pointer to variable information. */
- if (!(vdesc = file->varlist + vid))
+ ios = file->iosystem;
+ if(ios == NULL){
+ fprintf(stderr,"Failed to find iosystem handle \n");
return PIO_EBADID;
+ }
+ vdesc = (file->varlist)+vid;
+ if(vdesc == NULL){
+ fprintf(stderr,"Failed to find variable handle %d\n",vid);
+ return PIO_EBADID;
+ }
ndims = iodesc->ndims;
+ msg = 0;
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = 0;
-
- if (ios->compmaster)
+ if(ios->async_interface && ! ios->ioproc){
+ if(ios->comp_rank==0)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- }
+ mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
}
ierr = PIOc_inq_varndims(file->fh, vid, &fndims);
- if (ios->ioproc)
- {
+
+ if(ios->ioproc){
io_region *region;
int ncid = file->fh;
int regioncnt;
@@ -160,157 +135,126 @@ int pio_write_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
if(vdesc->record >= 0 && ndimsiotype == PIO_IOTYPE_PNETCDF)
+ if(file->iotype == PIO_IOTYPE_PNETCDF){
+ // make sure we have room in the buffer ;
flush_output_buffer(file, false, tsize*(iodesc->maxiobuflen));
+ }
#endif
rrcnt=0;
- for (regioncnt = 0; regioncnt < iodesc->maxregions; regioncnt++)
- {
- for (i = 0; i < ndims; i++)
- {
+ for(regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++){
+ for(i=0;iloffset);
// this is a record based multidimensional array
- if (vdesc->record >= 0)
- {
+ if(vdesc->record >= 0){
start[0] = vdesc->record;
- for (i = 1; i < ndims; i++)
- {
+ for(i=1;i<ndims;i++){
start[i] = region->start[i-1];
count[i] = region->count[i-1];
}
if(count[1]>0)
count[0] = 1;
// Non-time dependent array
- }
- else
- {
- for (i = 0; i < ndims; i++)
- {
+ }else{
+ for( i=0;i<ndims;i++){
start[i] = region->start[i];
count[i] = region->count[i];
}
}
}
- switch(file->iotype)
- {
+ switch(file->iotype){
#ifdef _NETCDF
#ifdef _NETCDF4
case PIO_IOTYPE_NETCDF4P:
-
- /* Use collective writes with this variable. */
ierr = nc_var_par_access(ncid, vid, NC_COLLECTIVE);
- if (iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
- ierr = nc_put_vara_double(ncid, vid, (size_t *)start, (size_t *)count,
- (const double *)bufptr);
- else if (iodesc->basetype == MPI_INTEGER)
- ierr = nc_put_vara_int(ncid, vid, (size_t *)start, (size_t *)count,
- (const int *)bufptr);
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
- ierr = nc_put_vara_float(ncid, vid, (size_t *)start, (size_t *)count,
- (const float *)bufptr);
- else
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",
- (int)iodesc->basetype);
+ if(iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8){
+ ierr = nc_put_vara_double (ncid, vid,(size_t *) start,(size_t *) count, (const double *) bufptr);
+ } else if(iodesc->basetype == MPI_INTEGER){
+ ierr = nc_put_vara_int (ncid, vid, (size_t *) start, (size_t *) count, (const int *) bufptr);
+ }else if(iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4){
+ ierr = nc_put_vara_float (ncid, vid, (size_t *) start, (size_t *) count, (const float *) bufptr);
+ }else{
+ fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",(int) iodesc->basetype);
+ }
break;
case PIO_IOTYPE_NETCDF4C:
-#endif /* _NETCDF4 */
+#endif
case PIO_IOTYPE_NETCDF:
{
mpierr = MPI_Type_size(iodesc->basetype, &dsize);
size_t tstart[ndims], tcount[ndims];
- if (ios->io_rank == 0)
- {
- for (i = 0; i < iodesc->num_aiotasks; i++)
- {
- if (i == 0)
- {
+ if(ios->io_rank==0){
+
+ for(i=0;i<iodesc->num_aiotasks;i++){
+ if(i==0){
buflen=1;
- for (j = 0; j < ndims; j++)
- {
+ for(j=0;jio_comm); // handshake - tell the sending task I'm ready
mpierr = MPI_Recv( &buflen, 1, MPI_INT, i, 1, ios->io_comm, &status);
- if (buflen > 0)
- {
- mpierr = MPI_Recv(tstart, ndims, MPI_OFFSET, i, ios->num_iotasks+i,
- ios->io_comm, &status);
- mpierr = MPI_Recv(tcount, ndims, MPI_OFFSET, i, 2 * ios->num_iotasks + i,
- ios->io_comm, &status);
+ if(buflen>0){
+ mpierr = MPI_Recv( tstart, ndims, MPI_OFFSET, i, ios->num_iotasks+i, ios->io_comm, &status);
+ mpierr = MPI_Recv( tcount, ndims, MPI_OFFSET, i,2*ios->num_iotasks+i, ios->io_comm, &status);
tmp_buf = malloc(buflen * dsize);
mpierr = MPI_Recv( tmp_buf, buflen, iodesc->basetype, i, i, ios->io_comm, &status);
}
}
- if (buflen>0)
- {
- if (iodesc->basetype == MPI_INTEGER)
+ if(buflen>0){
+ if(iodesc->basetype == MPI_INTEGER){
ierr = nc_put_vara_int (ncid, vid, tstart, tcount, (const int *) tmp_buf);
- else if (iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
+ }else if(iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8){
ierr = nc_put_vara_double (ncid, vid, tstart, tcount, (const double *) tmp_buf);
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
+ }else if(iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4){
ierr = nc_put_vara_float (ncid,vid, tstart, tcount, (const float *) tmp_buf);
- else
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",
- (int)iodesc->basetype);
-
- if (ierr == PIO_EEDGE)
+ }else{
+ fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",(int) iodesc->basetype);
+ }
+ if(ierr == PIO_EEDGE){
for(i=0;iio_rank < iodesc->num_aiotasks)
- {
+ }else if(ios->io_rank < iodesc->num_aiotasks ){
buflen=1;
- for (i = 0; i < ndims; i++)
- {
+ for(i=0;iio_rank,tstart[0],
- tstart[1],tcount[0],tcount[1],buflen,ndims,fndims);*/
+ // printf("%s %d %d %d %d %d %d %d %d %d\n",__FILE__,__LINE__,ios->io_rank,tstart[0],tstart[1],tcount[0],tcount[1],buflen,ndims,fndims);
mpierr = MPI_Recv( &ierr, 1, MPI_INT, 0, 0, ios->io_comm, &status); // task0 is ready to recieve
mpierr = MPI_Rsend( &buflen, 1, MPI_INT, 0, 1, ios->io_comm);
- if (buflen > 0)
- {
- mpierr = MPI_Rsend(tstart, ndims, MPI_OFFSET, 0, ios->num_iotasks+ios->io_rank,
- ios->io_comm);
- mpierr = MPI_Rsend(tcount, ndims, MPI_OFFSET, 0,2*ios->num_iotasks+ios->io_rank,
- ios->io_comm);
+ if(buflen>0) {
+ mpierr = MPI_Rsend( tstart, ndims, MPI_OFFSET, 0, ios->num_iotasks+ios->io_rank, ios->io_comm);
+ mpierr = MPI_Rsend( tcount, ndims, MPI_OFFSET, 0,2*ios->num_iotasks+ios->io_rank, ios->io_comm);
mpierr = MPI_Rsend( bufptr, buflen, iodesc->basetype, 0, ios->io_rank, ios->io_comm);
}
}
break;
}
break;
-#endif /* _NETCDF */
+ #endif
#ifdef _PNETCDF
case PIO_IOTYPE_PNETCDF:
- for (i = 0, dsize = 1; i < ndims; i++)
+ for( i=0,dsize=1;ibasetype);
@@ -322,57 +266,56 @@ int pio_write_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
}
}
*/
- if (dsize > 0)
- {
+ if(dsize>0){
// printf("%s %d %d %d\n",__FILE__,__LINE__,ios->io_rank,dsize);
startlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
countlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
- for (i = 0; i < fndims; i++)
- {
+ for( i=0; imaxregions - 1)
- {
+ if(regioncnt==iodesc->maxregions-1){
// printf("%s %d %d %ld %ld\n",__FILE__,__LINE__,ios->io_rank,iodesc->llen, tdsize);
// ierr = ncmpi_put_varn_all(ncid, vid, iodesc->maxregions, startlist, countlist,
// IOBUF, iodesc->llen, iodesc->basetype);
int reqn=0;
- if (vdesc->nreqs % PIO_REQUEST_ALLOC_CHUNK == 0 )
- {
+
+ if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- for (int i = vdesc->nreqs; i < vdesc->nreqs + PIO_REQUEST_ALLOC_CHUNK; i++)
+ for(int i=vdesc->nreqs;i<vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK;i++){
vdesc->request[i]=NC_REQ_NULL;
+ }
reqn = vdesc->nreqs;
+ }else{
+ while(vdesc->request[reqn] != NC_REQ_NULL ){
+ reqn++;
+ }
}
- else
- while(vdesc->request[reqn] != NC_REQ_NULL)
- reqn++;
ierr = ncmpi_bput_varn(ncid, vid, rrcnt, startlist, countlist,
IOBUF, iodesc->llen, iodesc->basetype, vdesc->request+reqn);
- if (vdesc->request[reqn] == NC_REQ_NULL)
+ if(vdesc->request[reqn] == NC_REQ_NULL){
vdesc->request[reqn] = PIO_REQ_NULL; //keeps wait calls in sync
+ }
vdesc->nreqs = reqn;
// printf("%s %d %X %d\n",__FILE__,__LINE__,IOBUF,request);
- for (i=0;iiotype,__FILE__,__LINE__);
}
- if (region)
+ if(region != NULL)
region = region->next;
} // for(regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++){
} // if(ios->ioproc)
@@ -385,12 +328,11 @@ int pio_write_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
return ierr;
}
-/** Write a set of one or more aggregated arrays to output file
+/** @brief Write a set of one or more aggregated arrays to output file
* @ingroup PIO_write_darray
*
* This routine is used if aggregation is enabled, data is already on the
* io-tasks
- *
* @param[in] file: a pointer to the open file descriptor for the file that will be written to
* @param[in] nvars: the number of variables to be written with this decomposition
* @param[in] vid: an array of the variable ids to be written
@@ -411,11 +353,12 @@ int pio_write_darray_multi_nc(file_desc_t *file, const int nvars, const int vid[
const int maxiobuflen, const int num_aiotasks,
void *IOBUF, const int frame[])
{
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
var_desc_t *vdesc;
int ierr;
int i;
- int mpierr = MPI_SUCCESS; /** Return code from MPI function codes. */
+ int msg;
+ int mpierr;
int dsize;
MPI_Status status;
PIO_Offset usage;
@@ -430,39 +373,29 @@ int pio_write_darray_multi_nc(file_desc_t *file, const int nvars, const int vid[
#endif
ios = file->iosystem;
- if (ios == NULL)
- {
+ if(ios == NULL){
fprintf(stderr,"Failed to find iosystem handle \n");
return PIO_EBADID;
}
vdesc = (file->varlist)+vid[0];
ncid = file->fh;
- if (vdesc == NULL)
- {
+ if(vdesc == NULL){
fprintf(stderr,"Failed to find variable handle %d\n",vid[0]);
return PIO_EBADID;
}
+ msg = 0;
- /* If async is in use, send message to IO master task. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = 0;
- if (ios->compmaster)
+ if(ios->async_interface && ! ios->ioproc){
+ if(ios->comp_rank==0)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
}
- }
ierr = PIOc_inq_varndims(file->fh, vid[0], &fndims);
MPI_Type_size(basetype, &tsize);
- if (ios->ioproc)
- {
+ if(ios->ioproc){
io_region *region;
int regioncnt;
int rrcnt;
@@ -478,141 +411,104 @@ int pio_write_darray_multi_nc(file_desc_t *file, const int nvars, const int vid[
ncid = file->fh;
region = firstregion;
+
rrcnt=0;
- for (regioncnt = 0; regioncnt < maxregions; regioncnt++)
- {
+ for(regioncnt=0;regioncntstart[0],region->count[0],ndims,fndims,vdesc->record);
- for (i = 0; i < fndims; i++)
- {
+ for(i=0;irecord >= 0)
- {
- for (i = fndims - ndims; i < fndims; i++)
- {
+ if(vdesc->record >= 0){
+ for(i=fndims-ndims;istart[i-(fndims-ndims)];
count[i] = region->count[i-(fndims-ndims)];
}
- if (fndims>1 && ndims0)
- {
+ if(fndims>1 && ndims0){
count[0] = 1;
start[0] = frame[0];
- }
- else if (fndims==ndims)
- {
+ }else if(fndims==ndims){
start[0]+=vdesc->record;
}
// Non-time dependent array
- }
- else
- {
- for (i = 0; i < ndims; i++)
- {
+ }else{
+ for( i=0;istart[i];
count[i] = region->count[i];
}
}
}
- switch(file->iotype)
- {
+ switch(file->iotype){
#ifdef _NETCDF4
case PIO_IOTYPE_NETCDF4P:
- for (int nv = 0; nv < nvars; nv++)
- {
- if (vdesc->record >= 0 && ndims < fndims)
- {
+ for(int nv=0; nvrecord >= 0 && ndimsloffset));
}
ierr = nc_var_par_access(ncid, vid[nv], NC_COLLECTIVE);
- if (basetype == MPI_DOUBLE ||basetype == MPI_REAL8)
- {
- ierr = nc_put_vara_double (ncid, vid[nv],(size_t *) start,(size_t *) count,
- (const double *)bufptr);
- }
- else if (basetype == MPI_INTEGER)
- {
- ierr = nc_put_vara_int (ncid, vid[nv], (size_t *) start, (size_t *) count,
- (const int *)bufptr);
- }
- else if (basetype == MPI_FLOAT || basetype == MPI_REAL4)
- {
- ierr = nc_put_vara_float (ncid, vid[nv], (size_t *) start, (size_t *) count,
- (const float *)bufptr);
- }
- else
- {
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",
- (int)basetype);
+ if(basetype == MPI_DOUBLE ||basetype == MPI_REAL8){
+ ierr = nc_put_vara_double (ncid, vid[nv],(size_t *) start,(size_t *) count, (const double *) bufptr);
+ } else if(basetype == MPI_INTEGER){
+ ierr = nc_put_vara_int (ncid, vid[nv], (size_t *) start, (size_t *) count, (const int *) bufptr);
+ }else if(basetype == MPI_FLOAT || basetype == MPI_REAL4){
+ ierr = nc_put_vara_float (ncid, vid[nv], (size_t *) start, (size_t *) count, (const float *) bufptr);
+ }else{
+ fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",(int) basetype);
}
}
break;
#endif
#ifdef _PNETCDF
case PIO_IOTYPE_PNETCDF:
- for (i = 0, dsize = 1; i < fndims; i++)
- {
+ for( i=0,dsize=1;i0)
- {
+ if(dsize>0){
// printf("%s %d %d %d\n",__FILE__,__LINE__,ios->io_rank,dsize);
startlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
countlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
- for (i = 0; i < fndims; i++)
- {
+ for( i=0; iio_rank,iodesc->llen, tdsize);
// ierr = ncmpi_put_varn_all(ncid, vid, iodesc->maxregions, startlist, countlist,
// IOBUF, iodesc->llen, iodesc->basetype);
//printf("%s %d %ld \n",__FILE__,__LINE__,IOBUF);
- for (int nv=0; nvvarlist)+vid[nv];
- if (vdesc->record >= 0 && ndimsrecord >= 0 && ndimsnreqs%PIO_REQUEST_ALLOC_CHUNK == 0 )
- {
+ if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- for (int i=vdesc->nreqs;i<vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK;i++)
- {
+ for(int i=vdesc->nreqs;i<vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK;i++){
vdesc->request[i]=NC_REQ_NULL;
}
reqn = vdesc->nreqs;
- }
- else
- {
- while(vdesc->request[reqn] != NC_REQ_NULL)
- {
+ }else{
+ while(vdesc->request[reqn] != NC_REQ_NULL){
reqn++;
}
}
@@ -622,20 +518,16 @@ int pio_write_darray_multi_nc(file_desc_t *file, const int nvars, const int vid[
ierr = ncmpi_bput_varn(ncid, vid[nv], rrcnt, startlist, countlist,
bufptr, llen, basetype, &(vdesc->request));
*/
- if (vdesc->request[reqn] == NC_REQ_NULL)
- {
+ if(vdesc->request[reqn] == NC_REQ_NULL){
vdesc->request[reqn] = PIO_REQ_NULL; //keeps wait calls in sync
}
vdesc->nreqs += reqn+1;
// printf("%s %d %d %d\n",__FILE__,__LINE__,vdesc->nreqs,vdesc->request[reqn]);
}
- for (i=0;iiotype,__FILE__,__LINE__);
}
- if (region)
+ if(region != NULL)
region = region->next;
} // for(regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++){
} // if(ios->ioproc)
@@ -686,11 +578,12 @@ int pio_write_darray_multi_nc_serial(file_desc_t *file, const int nvars, const i
const int maxiobuflen, const int num_aiotasks,
void *IOBUF, const int frame[])
{
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
var_desc_t *vdesc;
int ierr;
int i;
- int mpierr = MPI_SUCCESS; /** Return code from MPI function codes. */
+ int msg;
+ int mpierr;
int dsize;
MPI_Status status;
PIO_Offset usage;
@@ -704,40 +597,30 @@ int pio_write_darray_multi_nc_serial(file_desc_t *file, const int nvars, const i
GPTLstart("PIO:write_darray_multi_nc_serial");
#endif
- if (!(ios = file->iosystem))
- {
+ ios = file->iosystem;
+ if(ios == NULL){
fprintf(stderr,"Failed to find iosystem handle \n");
return PIO_EBADID;
}
-
+ vdesc = (file->varlist)+vid[0];
ncid = file->fh;
- if (!(vdesc = (file->varlist) + vid[0]))
- {
+ if(vdesc == NULL){
fprintf(stderr,"Failed to find variable handle %d\n",vid[0]);
return PIO_EBADID;
}
+ msg = 0;
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (! ios->ioproc)
- {
- int msg = 0;
-
+ if(ios->async_interface && ! ios->ioproc){
if(ios->comp_rank==0)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
}
- }
ierr = PIOc_inq_varndims(file->fh, vid[0], &fndims);
MPI_Type_size(basetype, &tsize);
- if (ios->ioproc)
- {
+ if(ios->ioproc){
io_region *region;
int regioncnt;
int rrcnt;
@@ -753,29 +636,21 @@ int pio_write_darray_multi_nc_serial(file_desc_t *file, const int nvars, const i
rrcnt=0;
- for (regioncnt = 0; regioncnt < maxregions; regioncnt++)
- {
- for (i = 0; i < fndims; i++)
- {
+ for(regioncnt=0;regioncntrecord >= 0)
- {
- for (i = fndims - ndims; i < fndims; i++)
- {
+ if(vdesc->record >= 0){
+ for(i=fndims-ndims;i<fndims;i++){
tmp_start[i+regioncnt*fndims] = region->start[i-(fndims-ndims)];
tmp_count[i+regioncnt*fndims] = region->count[i-(fndims-ndims)];
}
// Non-time dependent array
- }
- else
- {
- for (i = 0; i < ndims; i++)
- {
+ }else{
+ for( i=0;i<ndims;i++){
tmp_start[i+regioncnt*fndims] = region->start[i];
tmp_count[i+regioncnt*fndims] = region->count[i];
}
@@ -783,31 +658,25 @@ int pio_write_darray_multi_nc_serial(file_desc_t *file, const int nvars, const i
region = region->next;
}
}
- if (ios->io_rank > 0)
- {
+ if(ios->io_rank>0){
mpierr = MPI_Recv( &ierr, 1, MPI_INT, 0, 0, ios->io_comm, &status); // task0 is ready to recieve
MPI_Send( &llen, 1, MPI_OFFSET, 0, ios->io_rank, ios->io_comm);
- if (llen>0)
- {
+ if(llen>0){
MPI_Send( &maxregions, 1, MPI_INT, 0, ios->io_rank+ios->num_iotasks, ios->io_comm);
MPI_Send( tmp_start, maxregions*fndims, MPI_OFFSET, 0, ios->io_rank+2*ios->num_iotasks, ios->io_comm);
MPI_Send( tmp_count, maxregions*fndims, MPI_OFFSET, 0, ios->io_rank+3*ios->num_iotasks, ios->io_comm);
// printf("%s %d %ld\n",__FILE__,__LINE__,nvars*llen);
MPI_Send( IOBUF, nvars*llen, basetype, 0, ios->io_rank+4*ios->num_iotasks, ios->io_comm);
}
- }
- else
- {
+ }else{
size_t rlen;
int rregions;
size_t start[fndims], count[fndims];
size_t loffset;
mpierr = MPI_Type_size(basetype, &dsize);
- for (int rtask=0; rtask<ios->num_iotasks; rtask++)
- {
- if (rtask>0)
- {
+ for(int rtask=0; rtask<ios->num_iotasks; rtask++){
+ if(rtask>0){
mpierr = MPI_Send( &ierr, 1, MPI_INT, rtask, 0, ios->io_comm); // handshake - tell the sending task I'm ready
MPI_Recv( &rlen, 1, MPI_OFFSET, rtask, rtask, ios->io_comm, &status);
if(rlen>0){
@@ -817,54 +686,40 @@ int pio_write_darray_multi_nc_serial(file_desc_t *file, const int nvars, const i
// printf("%s %d %d %ld\n",__FILE__,__LINE__,rtask,nvars*rlen);
MPI_Recv( IOBUF, nvars*rlen, basetype, rtask, rtask+4*ios->num_iotasks, ios->io_comm, &status);
}
- }
- else
- {
+ }else{
rlen = llen;
rregions = maxregions;
}
- if (rlen>0)
- {
+ if(rlen>0){
loffset = 0;
- for (regioncnt=0;regioncntrecord>=0)
- {
- if (fndims>1 && ndims0)
- {
+ if(vdesc->record>=0){
+ if(fndims>1 && ndims0){
count[0] = 1;
start[0] = frame[nv];
- }
- else if (fndims==ndims)
- {
+ }else if(fndims==ndims){
start[0]+=vdesc->record;
}
}
- if (basetype == MPI_INTEGER)
- {
+
+
+
+ if(basetype == MPI_INTEGER){
ierr = nc_put_vara_int (ncid, vid[nv], start, count, (const int *) bufptr);
- }
- else if (basetype == MPI_DOUBLE || basetype == MPI_REAL8)
- {
+ }else if(basetype == MPI_DOUBLE || basetype == MPI_REAL8){
ierr = nc_put_vara_double (ncid, vid[nv], start, count, (const double *) bufptr);
- }
- else if (basetype == MPI_FLOAT || basetype == MPI_REAL4)
- {
+ }else if(basetype == MPI_FLOAT || basetype == MPI_REAL4){
ierr = nc_put_vara_float (ncid,vid[nv], start, count, (const float *) bufptr);
- }
- else
- {
+ }else{
fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",(int) basetype);
}
@@ -875,8 +730,7 @@ int pio_write_darray_multi_nc_serial(file_desc_t *file, const int nvars, const i
}
size_t tsize;
tsize = 1;
- for (int i=0;imode & PIO_WRITE))
- {
+ if(! (file->mode & PIO_WRITE)){
fprintf(stderr,"ERROR: Attempt to write to read-only file\n");
return PIO_EPERM;
}
iodesc = pio_get_iodesc_from_id(ioid);
- if (iodesc == NULL)
- {
+ if(iodesc == NULL){
// print_trace(NULL);
//fprintf(stderr,"iodesc handle not found %d %d\n",ioid,__LINE__);
return PIO_EBADID;
@@ -950,57 +791,44 @@ int PIOc_write_darray_multi(const int ncid, const int vid[], const int ioid,
ios = file->iosystem;
// rlen = iodesc->llen*nvars;
rlen=0;
- if (iodesc->llen>0)
- {
+ if(iodesc->llen>0){
rlen = iodesc->maxiobuflen*nvars;
}
- if (vdesc0->iobuf)
- {
+ if(vdesc0->iobuf != NULL){
piodie("Attempt to overwrite existing io buffer",__FILE__,__LINE__);
}
- if (iodesc->rearranger>0)
- {
- if (rlen>0)
- {
+ if(iodesc->rearranger>0){
+ if(rlen>0){
MPI_Type_size(iodesc->basetype, &vsize);
+ //printf("rlen*vsize = %ld\n",rlen*vsize);
+
vdesc0->iobuf = bget((size_t) vsize* (size_t) rlen);
- if (vdesc0->iobuf==NULL)
- {
+ if(vdesc0->iobuf==NULL){
printf("%s %d %d %ld\n",__FILE__,__LINE__,nvars,vsize*rlen);
piomemerror(*ios,(size_t) rlen*(size_t) vsize, __FILE__,__LINE__);
}
- if (iodesc->needsfill && iodesc->rearranger==PIO_REARR_BOX)
- {
- if (vsize==4)
- {
- for (int nv=0;nv < nvars; nv++)
- {
- for (int i=0;i<iodesc->maxiobuflen;i++)
- {
+ if(iodesc->needsfill && iodesc->rearranger==PIO_REARR_BOX){
+ if(vsize==4){
+ for(int nv=0;nv < nvars; nv++){
+ for(int i=0;i<iodesc->maxiobuflen;i++){
((float *) vdesc0->iobuf)[i+nv*(iodesc->maxiobuflen)] = ((float *)fillvalue)[nv];
}
}
- }
- else if (vsize==8)
- {
- for (int nv=0;nv < nvars; nv++)
- {
- for (int i=0;i<iodesc->maxiobuflen;i++)
- {
+ }else if(vsize==8){
+ for(int nv=0;nv < nvars; nv++){
+ for(int i=0;i<iodesc->maxiobuflen;i++){
((double *)vdesc0->iobuf)[i+nv*(iodesc->maxiobuflen)] = ((double *)fillvalue)[nv];
}
}
}
}
}
-
ierr = rearrange_comp2io(*ios, iodesc, array, vdesc0->iobuf, nvars);
}/* this is wrong, need to think about it
else{
vdesc0->iobuf = array;
} */
- switch(file->iotype)
- {
+ switch(file->iotype){
case PIO_IOTYPE_NETCDF4P:
case PIO_IOTYPE_PNETCDF:
ierr = pio_write_darray_multi_nc(file, nvars, vid,
@@ -1016,8 +844,7 @@ int PIOc_write_darray_multi(const int ncid, const int vid[], const int ioid,
iodesc->maxregions, iodesc->firstregion, iodesc->llen,
iodesc->maxiobuflen, iodesc->num_aiotasks,
vdesc0->iobuf, frame);
- if (vdesc0->iobuf)
- {
+ if(vdesc0->iobuf != NULL){
brel(vdesc0->iobuf);
vdesc0->iobuf = NULL;
}
@@ -1025,38 +852,30 @@ int PIOc_write_darray_multi(const int ncid, const int vid[], const int ioid,
}
+
+
if(iodesc->rearranger == PIO_REARR_SUBSET && iodesc->needsfill &&
- iodesc->holegridsize>0)
- {
- if (vdesc0->fillbuf)
- {
+ iodesc->holegridsize>0){
+ if(vdesc0->fillbuf != NULL){
piodie("Attempt to overwrite existing buffer",__FILE__,__LINE__);
}
vdesc0->fillbuf = bget(iodesc->holegridsize*vsize*nvars);
//printf("%s %d %x\n",__FILE__,__LINE__,vdesc0->fillbuf);
- if (vsize==4)
- {
- for (int nv=0;nv<nvars;nv++)
- {
- for (int i=0;i<iodesc->holegridsize;i++)
- {
+ if(vsize==4){
+ for(int nv=0;nv<nvars;nv++){
+ for(int i=0;i<iodesc->holegridsize;i++){
((float *) vdesc0->fillbuf)[i+nv*iodesc->holegridsize] = ((float *) fillvalue)[nv];
}
}
- }
- else if (vsize==8)
- {
- for (int nv=0;nv<nvars;nv++)
- {
- for (int i=0;i<iodesc->holegridsize;i++)
- {
+ }else if(vsize==8){
+ for(int nv=0;nv<nvars;nv++){
+ for(int i=0;i<iodesc->holegridsize;i++){
((double *) vdesc0->fillbuf)[i+nv*iodesc->holegridsize] = ((double *) fillvalue)[nv];
}
}
}
- switch(file->iotype)
- {
+ switch(file->iotype){
case PIO_IOTYPE_PNETCDF:
ierr = pio_write_darray_multi_nc(file, nvars, vid,
iodesc->ndims, iodesc->basetype, iodesc->gsize,
@@ -1085,10 +904,13 @@ int PIOc_write_darray_multi(const int ncid, const int vid[], const int ioid,
flush_output_buffer(file, flushtodisk, 0);
+
return ierr;
+
}
-/** Write a distributed array to the output file.
+/** @brief Write a distributed array to the output file.
+ * @ingroup PIO_write_darray
*
* This routine aggregates output on the compute nodes and only sends
* it to the IO nodes when the compute buffer is full or when a flush
@@ -1098,23 +920,24 @@ int PIOc_write_darray_multi(const int ncid, const int vid[], const int ioid,
* @param[in] vid: the variable ID returned by PIOc_def_var().
* @param[in] ioid: the I/O description ID as passed back by
* PIOc_InitDecomp().
+
* @param[in] arraylen: the length of the array to be written. This
* is the length of the distrubited array. That is, the length of
* the portion of the data that is on the processor.
+
* @param[in] array: pointer to the data to be written. This is a
* pointer to the distributed portion of the array that is on this
* processor.
+
* @param[in] fillvalue: pointer to the fill value to be used for
* missing data.
*
* @returns 0 for success, non-zero error code for failure.
- * @ingroup PIO_write_darray
*/
#ifdef PIO_WRITE_BUFFERING
-int PIOc_write_darray(const int ncid, const int vid, const int ioid,
- const PIO_Offset arraylen, void *array, void *fillvalue)
+ int PIOc_write_darray(const int ncid, const int vid, const int ioid, const PIO_Offset arraylen, void *array, void *fillvalue)
{
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
file_desc_t *file;
io_desc_t *iodesc;
var_desc_t *vdesc;
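The comment block above documents the buffered PIOc_write_darray entry point. A hypothetical call matching those parameters (ncid, varid, ioid, the 100-element local array, and the fill value are placeholders, not values from this patch):

    double local[100];    /* this task's portion of the distributed array */
    double fill = -999.0; /* optional fill value for missing data */
    ierr = PIOc_write_darray(ncid, varid, ioid, (PIO_Offset)100, local, &fill);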
@@ -1134,53 +957,50 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
ierr = PIO_NOERR;
needsflush = 0; // false
file = pio_get_file_from_id(ncid);
- if (file == NULL)
- {
+ if(file == NULL){
fprintf(stderr,"File handle not found %d %d\n",ncid,__LINE__);
return PIO_EBADID;
}
- if (! (file->mode & PIO_WRITE))
- {
+ if(! (file->mode & PIO_WRITE)){
fprintf(stderr,"ERROR: Attempt to write to read-only file\n");
return PIO_EPERM;
}
iodesc = pio_get_iodesc_from_id(ioid);
- if (iodesc == NULL)
- {
+ if(iodesc == NULL){
fprintf(stderr,"iodesc handle not found %d %d\n",ioid,__LINE__);
return PIO_EBADID;
}
ios = file->iosystem;
+
vdesc = (file->varlist)+vid;
if(vdesc == NULL)
return PIO_EBADID;
- /* Is this a record variable? */
- recordvar = vdesc->record < 0 ? true : false;
-
- if (iodesc->ndof != arraylen)
- {
+ if(vdesc->record<0){
+ recordvar=false;
+ }else{
+ recordvar=true;
+ }
+ if(iodesc->ndof != arraylen){
fprintf(stderr,"ndof=%ld, arraylen=%ld\n",iodesc->ndof,arraylen);
piodie("ndof != arraylen",__FILE__,__LINE__);
}
wmb = &(file->buffer);
- if (wmb->ioid == -1)
- {
- if (recordvar)
+ if(wmb->ioid == -1){
+ if(recordvar){
wmb->ioid = ioid;
- else
+ }else{
wmb->ioid = -(ioid);
}
- else
- {
+ }else{
// separate record and non-record variables
- if (recordvar)
- {
- while(wmb->next && wmb->ioid!=ioid)
+ if(recordvar){
+ while(wmb->next != NULL && wmb->ioid!=ioid){
if(wmb->next!=NULL)
wmb = wmb->next;
+ }
#ifdef _PNETCDF
/* flush the previous record before starting a new one. this is collective */
// if(vdesc->request != NULL && (vdesc->request[0] != NC_REQ_NULL) ||
@@ -1188,27 +1008,25 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
// needsflush = 2; // flush to disk
// }
#endif
- }
- else
- {
- while(wmb->next && wmb->ioid!= -(ioid))
- {
+ }else{
+ while(wmb->next != NULL && wmb->ioid!= -(ioid)){
if(wmb->next!=NULL)
wmb = wmb->next;
}
}
}
- if ((recordvar && wmb->ioid != ioid) || (!recordvar && wmb->ioid != -(ioid)))
- {
+ if((recordvar && wmb->ioid != ioid) || (!recordvar && wmb->ioid != -(ioid))){
wmb->next = (wmulti_buffer *) bget((bufsize) sizeof(wmulti_buffer));
- if (wmb->next == NULL)
+ if(wmb->next == NULL){
piomemerror(*ios,sizeof(wmulti_buffer), __FILE__,__LINE__);
+ }
wmb=wmb->next;
wmb->next=NULL;
- if (recordvar)
+ if(recordvar){
wmb->ioid = ioid;
- else
+ }else{
wmb->ioid = -(ioid);
+ }
wmb->validvars=0;
wmb->arraylen=arraylen;
wmb->vid=NULL;
@@ -1217,71 +1035,68 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
wmb->fillvalue=NULL;
}
+
MPI_Type_size(iodesc->basetype, &tsize);
// At this point wmb should be pointing to a new or existing buffer
// so we can add the data
// printf("%s %d %X %d %d %d\n",__FILE__,__LINE__,wmb->data,wmb->validvars,arraylen,tsize);
// cn_buffer_report(*ios, true);
bfreespace(&totfree, &maxfree);
- if (needsflush == 0)
+ if(needsflush==0){
needsflush = (maxfree <= 1.1*(1+wmb->validvars)*arraylen*tsize );
+ }
MPI_Allreduce(MPI_IN_PLACE, &needsflush, 1, MPI_INT, MPI_MAX, ios->comp_comm);
- if (needsflush > 0 )
- {
+
+ if(needsflush > 0 ){
// need to flush first
// printf("%s %d %ld %d %ld %ld\n",__FILE__,__LINE__,maxfree, wmb->validvars, (1+wmb->validvars)*arraylen*tsize,totfree);
cn_buffer_report(*ios, true);
flush_buffer(ncid,wmb, needsflush==2); // if needsflush == 2 flush to disk otherwise just flush to io node
}
-
- if (arraylen > 0)
- if (!(wmb->data = bgetr(wmb->data, (1+wmb->validvars)*arraylen*tsize)))
+ if(arraylen > 0){
+ wmb->data = bgetr( wmb->data, (1+wmb->validvars)*arraylen*tsize);
+ if(wmb->data == NULL){
piomemerror(*ios, (1+wmb->validvars)*arraylen*tsize , __FILE__,__LINE__);
-
- if (!(wmb->vid = (int *) bgetr(wmb->vid,sizeof(int)*(1+wmb->validvars))))
+ }
+ }
+ wmb->vid = (int *) bgetr( wmb->vid,sizeof(int)*( 1+wmb->validvars));
+ if(wmb->vid == NULL){
piomemerror(*ios, (1+wmb->validvars)*sizeof(int) , __FILE__,__LINE__);
-
- if (vdesc->record >= 0)
- if (!(wmb->frame = (int *)bgetr(wmb->frame, sizeof(int) * (1 + wmb->validvars))))
+ }
+ if(vdesc->record>=0){
+ wmb->frame = (int *) bgetr( wmb->frame,sizeof(int)*( 1+wmb->validvars));
+ if(wmb->frame == NULL){
piomemerror(*ios, (1+wmb->validvars)*sizeof(int) , __FILE__,__LINE__);
+ }
+ }
+ if(iodesc->needsfill){
+ wmb->fillvalue = bgetr( wmb->fillvalue,tsize*( 1+wmb->validvars));
+ if(wmb->fillvalue == NULL){
+ piomemerror(*ios, (1+wmb->validvars)*tsize , __FILE__,__LINE__);
+ }
+ }
- if (iodesc->needsfill)
- if (!(wmb->fillvalue = bgetr(wmb->fillvalue,tsize*(1+wmb->validvars))))
- piomemerror(*ios, (1+wmb->validvars)*tsize , __FILE__,__LINE__);
- if (iodesc->needsfill)
- {
- if (fillvalue)
- {
+ if(iodesc->needsfill){
+ if(fillvalue != NULL){
memcpy((char *) wmb->fillvalue+tsize*wmb->validvars,fillvalue, tsize);
- }
- else
- {
+ }else{
vtype = (MPI_Datatype) iodesc->basetype;
- if (vtype == MPI_INTEGER)
- {
+ if(vtype == MPI_INTEGER){
int fill = PIO_FILL_INT;
memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else if (vtype == MPI_FLOAT || vtype == MPI_REAL4)
- {
+ }else if(vtype == MPI_FLOAT || vtype == MPI_REAL4){
float fill = PIO_FILL_FLOAT;
memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else if (vtype == MPI_DOUBLE || vtype == MPI_REAL8)
- {
+ }else if(vtype == MPI_DOUBLE || vtype == MPI_REAL8){
double fill = PIO_FILL_DOUBLE;
memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else if (vtype == MPI_CHARACTER)
- {
+ }else if(vtype == MPI_CHARACTER){
char fill = PIO_FILL_CHAR;
memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else
- {
+ }else{
fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",vtype);
}
}
@@ -1291,8 +1106,9 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
wmb->arraylen = arraylen;
wmb->vid[wmb->validvars]=vid;
bufptr = (void *)((char *) wmb->data + arraylen*tsize*wmb->validvars);
- if (arraylen>0)
+ if(arraylen>0){
memcpy(bufptr, array, arraylen*tsize);
+ }
/*
if(tsize==8){
double asum=0.0;
@@ -1306,37 +1122,30 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
// printf("%s %d %d %d %d %X\n",__FILE__,__LINE__,wmb->validvars,wmb->ioid,vid,bufptr);
- if (wmb->frame!=NULL)
+ if(wmb->frame!=NULL){
wmb->frame[wmb->validvars]=vdesc->record;
+ }
wmb->validvars++;
// printf("%s %d %d %d %d %d\n",__FILE__,__LINE__,wmb->validvars,iodesc->maxbytes/tsize, iodesc->ndof, iodesc->llen);
- if (wmb->validvars >= iodesc->maxbytes/tsize)
+ if(wmb->validvars >= iodesc->maxbytes/tsize){
PIOc_sync(ncid);
+ }
return ierr;
+
}
#else
-/** Write a distributed array to the output file.
+/** @brief Write a distributed array to the output file
* @ingroup PIO_write_darray
*
- * This version of the routine does not buffer, all data is
- * communicated to the io tasks before the routine returns.
- *
- * @param ncid identifies the netCDF file
- * @param vid
- * @param ioid
- * @param arraylen
- * @param array
- * @param fillvalue
- *
- * @return
+ * This version of the routine does not buffer, all data is communicated to the io tasks
+ * before the routine returns
*/
-int PIOc_write_darray(const int ncid, const int vid, const int ioid,
- const PIO_Offset arraylen, void *array, void *fillvalue)
+ int PIOc_write_darray(const int ncid, const int vid, const int ioid, const PIO_Offset arraylen, void *array, void *fillvalue)
{
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
file_desc_t *file;
io_desc_t *iodesc;
void *iobuf;
@@ -1349,14 +1158,12 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
file = pio_get_file_from_id(ncid);
- if (file == NULL)
- {
+ if(file == NULL){
fprintf(stderr,"File handle not found %d %d\n",ncid,__LINE__);
return PIO_EBADID;
}
iodesc = pio_get_iodesc_from_id(ioid);
- if (iodesc == NULL)
- {
+ if(iodesc == NULL){
fprintf(stderr,"iodesc handle not found %d %d\n",ioid,__LINE__);
return PIO_EBADID;
}
@@ -1365,20 +1172,20 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
ios = file->iosystem;
rlen = iodesc->llen;
- if (iodesc->rearranger>0)
- {
- if (rlen>0)
- {
+ if(iodesc->rearranger>0){
+ if(rlen>0){
MPI_Type_size(iodesc->basetype, &tsize);
// iobuf = bget(tsize*rlen);
iobuf = malloc((size_t) tsize*rlen);
- if (!iobuf)
+ if(iobuf==NULL){
piomemerror(*ios,rlen*(size_t) tsize, __FILE__,__LINE__);
}
+ }
// printf(" rlen = %d %ld\n",rlen,iobuf);
// }
+
ierr = rearrange_comp2io(*ios, iodesc, array, iobuf, 1);
printf("%s %d ",__FILE__,__LINE__);
@@ -1386,13 +1193,10 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
printf(" %d ",((int *) iobuf)[n]);
printf("\n");
- }
- else
- {
+ }else{
iobuf = array;
}
- switch(file->iotype)
- {
+ switch(file->iotype){
case PIO_IOTYPE_PNETCDF:
case PIO_IOTYPE_NETCDF:
case PIO_IOTYPE_NETCDF4P:
@@ -1404,22 +1208,17 @@ int PIOc_write_darray(const int ncid, const int vid, const int ioid,
free(iobuf);
return ierr;
+
}
#endif
-/** Read an array of data from a file to the (parallel) IO library.
+/** @brief Read an array of data from a file to the (parallel) IO library.
* @ingroup PIO_read_darray
- *
- * @param file
- * @param iodesc
- * @param vid
- * @param IOBUF
*/
-int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
- void *IOBUF)
+int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid, void *IOBUF)
{
int ierr=PIO_NOERR;
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
var_desc_t *vdesc;
int ndims, fndims;
MPI_Status status;
@@ -1443,8 +1242,7 @@ int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
if(fndims==ndims)
vdesc->record=-1;
- if (ios->ioproc)
- {
+ if(ios->ioproc){
io_region *region;
size_t start[fndims];
size_t count[fndims];
@@ -1465,27 +1263,21 @@ int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
// calling program to change the basetype.
region = iodesc->firstregion;
MPI_Type_size(iodesc->basetype, &tsize);
- if (fndims>ndims)
- {
+ if(fndims>ndims){
ndims++;
if(vdesc->record<0)
vdesc->record=0;
}
- for (regioncnt=0;regioncntmaxregions;regioncnt++)
- {
+ for(regioncnt=0;regioncntmaxregions;regioncnt++){
// printf("%s %d %d %ld %d %d\n",__FILE__,__LINE__,regioncnt,region,fndims,ndims);
tmp_bufsize=1;
- if (region==NULL || iodesc->llen==0)
- {
- for (i=0;illen==0){
+ for(i=0;illen - region->loffset, iodesc->llen, region->loffset);
- if (vdesc->record >= 0 && fndims>1)
- {
+ if(vdesc->record >= 0 && fndims>1){
start[0] = vdesc->record;
- for (i=1;istart[i-1];
count[i] = region->count[i-1];
// printf("%s %d %d %ld %ld\n",__FILE__,__LINE__,i,start[i],count[i]);
}
if(count[1]>0)
count[0] = 1;
- }
- else
- {
+ }else{
// Non-time dependent array
- for (i=0;istart[i];
count[i] = region->count[i];
// printf("%s %d %d %ld %ld\n",__FILE__,__LINE__,i,start[i],count[i]);
@@ -1517,24 +1304,16 @@ int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
}
}
- switch(file->iotype)
- {
+ switch(file->iotype){
#ifdef _NETCDF4
case PIO_IOTYPE_NETCDF4P:
- if (iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
- {
+ if(iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8){
ierr = nc_get_vara_double (file->fh, vid,start,count, bufptr);
- }
- else if (iodesc->basetype == MPI_INTEGER)
- {
+ } else if(iodesc->basetype == MPI_INTEGER){
ierr = nc_get_vara_int (file->fh, vid, start, count, bufptr);
- }
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
- {
+ }else if(iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4){
ierr = nc_get_vara_float (file->fh, vid, start, count, bufptr);
- }
- else
- {
+ }else{
fprintf(stderr,"Type not recognized %d in pioc_read_darray\n",(int) iodesc->basetype);
}
break;
@@ -1543,29 +1322,25 @@ int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
case PIO_IOTYPE_PNETCDF:
{
tmp_bufsize=1;
- for (int j = 0; j < fndims; j++)
+ for(int j=0;j 0)
- {
+ if(tmp_bufsize>0){
startlist[rrlen] = (PIO_Offset *) bget(fndims * sizeof(PIO_Offset));
countlist[rrlen] = (PIO_Offset *) bget(fndims * sizeof(PIO_Offset));
- for (int j = 0; j < fndims; j++)
- {
+ for(int j=0;jmaxregions, j,start[j],count[j],tmp_bufsize);*/
+ // printf("%s %d %d %d %d %ld %ld %ld\n",__FILE__,__LINE__,realregioncnt,iodesc->maxregions, j,start[j],count[j],tmp_bufsize);
}
rrlen++;
}
- if (regioncnt==iodesc->maxregions-1)
- {
+ if(regioncnt==iodesc->maxregions-1){
ierr = ncmpi_get_varn_all(file->fh, vid, rrlen, startlist,
countlist, IOBUF, iodesc->llen, iodesc->basetype);
- for (i=0;iiotype,__FILE__,__LINE__);
}
- if (region)
+ if(region != NULL)
region = region->next;
} // for(regioncnt=0;...)
}
@@ -1590,21 +1365,14 @@ int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
return ierr;
}
-/** Read an array of data from a file to the (serial) IO library.
+
+/** @brief Read an array of data from a file to the (serial) IO library.
* @ingroup PIO_read_darray
- *
- * @param file
- * @param iodesc
- * @param vid
- * @param IOBUF
- *
- * @returns
*/
-int pio_read_darray_nc_serial(file_desc_t *file, io_desc_t *iodesc,
- const int vid, void *IOBUF)
+int pio_read_darray_nc_serial(file_desc_t *file, io_desc_t *iodesc, const int vid, void *IOBUF)
{
int ierr=PIO_NOERR;
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
var_desc_t *vdesc;
int ndims, fndims;
MPI_Status status;
@@ -1628,8 +1396,7 @@ int pio_read_darray_nc_serial(file_desc_t *file, io_desc_t *iodesc,
if(fndims==ndims)
vdesc->record=-1;
- if (ios->ioproc)
- {
+ if(ios->ioproc){
io_region *region;
size_t start[fndims];
size_t count[fndims];
@@ -1648,114 +1415,80 @@ int pio_read_darray_nc_serial(file_desc_t *file, io_desc_t *iodesc,
// calling program to change the basetype.
region = iodesc->firstregion;
MPI_Type_size(iodesc->basetype, &tsize);
- if (fndims>ndims)
- {
+ if(fndims>ndims){
if(vdesc->record<0)
vdesc->record=0;
}
- for (regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++)
- {
- if (region==NULL || iodesc->llen==0)
- {
- for (i = 0; i < fndims; i++)
- {
+ for(regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++){
+ if(region==NULL || iodesc->llen==0){
+ for(i=0;irecord >= 0 && fndims>1)
- {
+ }else{
+ if(vdesc->record >= 0 && fndims>1){
tmp_start[regioncnt*fndims] = vdesc->record;
- for (i=1;istart[i-1];
tmp_count[i+regioncnt*fndims] = region->count[i-1];
}
if(tmp_count[1+regioncnt*fndims]>0)
tmp_count[regioncnt*fndims] = 1;
- }
- else
- {
+ }else{
// Non-time dependent array
- for (i = 0; i < fndims; i++)
- {
+ for(i=0;i<fndims;i++){
tmp_start[i+regioncnt*fndims] = region->start[i];
tmp_count[i+regioncnt*fndims] = region->count[i];
}
}
/* for(i=0;inext;
} // for(regioncnt=0;...)
- if (ios->io_rank>0)
- {
+ if(ios->io_rank>0){
MPI_Send( &(iodesc->llen), 1, MPI_OFFSET, 0, ios->io_rank, ios->io_comm);
- if (iodesc->llen > 0)
- {
- MPI_Send(&(iodesc->maxregions), 1, MPI_INT, 0,
- ios->num_iotasks + ios->io_rank, ios->io_comm);
- MPI_Send(tmp_count, iodesc->maxregions*fndims, MPI_OFFSET, 0,
- 2 * ios->num_iotasks + ios->io_rank, ios->io_comm);
- MPI_Send(tmp_start, iodesc->maxregions*fndims, MPI_OFFSET, 0,
- 3 * ios->num_iotasks + ios->io_rank, ios->io_comm);
- MPI_Recv(IOBUF, iodesc->llen, iodesc->basetype, 0,
- 4 * ios->num_iotasks+ios->io_rank, ios->io_comm, &status);
+ if(iodesc->llen > 0){
+ MPI_Send( &(iodesc->maxregions), 1, MPI_INT, 0, ios->num_iotasks+ios->io_rank, ios->io_comm);
+ MPI_Send( tmp_count, iodesc->maxregions*fndims, MPI_OFFSET, 0, 2*ios->num_iotasks+ios->io_rank, ios->io_comm);
+ MPI_Send( tmp_start, iodesc->maxregions*fndims, MPI_OFFSET, 0, 3*ios->num_iotasks+ios->io_rank, ios->io_comm);
+ MPI_Recv(IOBUF, iodesc->llen, iodesc->basetype, 0, 4*ios->num_iotasks+ios->io_rank, ios->io_comm, &status);
}
- }
- else if (ios->io_rank == 0)
- {
+ }else if(ios->io_rank==0){
int maxregions=0;
size_t loffset, regionsize;
size_t this_start[fndims*iodesc->maxregions];
size_t this_count[fndims*iodesc->maxregions];
// for( i=ios->num_iotasks-1; i>=0; i--){
- for (int rtask = 1; rtask <= ios->num_iotasks; rtask++)
- {
- if (rtask<ios->num_iotasks)
- {
+ for(int rtask=1;rtask<=ios->num_iotasks;rtask++){
+ if(rtask<ios->num_iotasks){
MPI_Recv(&tmp_bufsize, 1, MPI_OFFSET, rtask, rtask, ios->io_comm, &status);
- if (tmp_bufsize>0)
- {
- MPI_Recv(&maxregions, 1, MPI_INT, rtask, ios->num_iotasks+rtask,
- ios->io_comm, &status);
- MPI_Recv(this_count, maxregions*fndims, MPI_OFFSET, rtask,
- 2 * ios->num_iotasks + rtask, ios->io_comm, &status);
- MPI_Recv(this_start, maxregions*fndims, MPI_OFFSET, rtask,
- 3 * ios->num_iotasks + rtask, ios->io_comm, &status);
- }
+ if(tmp_bufsize>0){
+ MPI_Recv(&maxregions, 1, MPI_INT, rtask, ios->num_iotasks+rtask, ios->io_comm, &status);
+ MPI_Recv(this_count, maxregions*fndims, MPI_OFFSET, rtask, 2*ios->num_iotasks+rtask, ios->io_comm, &status);
+ MPI_Recv(this_start, maxregions*fndims, MPI_OFFSET, rtask, 3*ios->num_iotasks+rtask, ios->io_comm, &status);
}
- else
- {
+ }else{
maxregions=iodesc->maxregions;
tmp_bufsize=iodesc->llen;
}
loffset = 0;
- for (regioncnt=0;regioncntnum_iotasks)
- {
- for (int m=0; mnum_iotasks){
+ for(int m=0; mbasetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
- {
+ if(iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8){
ierr = nc_get_vara_double (file->fh, vid,start, count, bufptr);
- }
- else if (iodesc->basetype == MPI_INTEGER)
- {
+ }else if(iodesc->basetype == MPI_INTEGER){
ierr = nc_get_vara_int (file->fh, vid, start, count, bufptr);
- }
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
- {
+ }else if(iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4){
ierr = nc_get_vara_float (file->fh, vid, start, count, bufptr);
- }
- else
- {
- fprintf(stderr,"Type not recognized %d in pioc_write_darray_nc_serial\n",
- (int)iodesc->basetype);
+ }else{
+ fprintf(stderr,"Type not recognized %d in pioc_write_darray_nc_serial\n",(int) iodesc->basetype);
}
- if (ierr != PIO_NOERR)
- {
+ if(ierr != PIO_NOERR){
for(int i=0;inum_iotasks)
- MPI_Send(IOBUF, tmp_bufsize, iodesc->basetype, rtask,
- 4 * ios->num_iotasks + rtask, ios->io_comm);
+ if(rtask<ios->num_iotasks){
+ MPI_Send(IOBUF, tmp_bufsize, iodesc->basetype, rtask,4*ios->num_iotasks+rtask, ios->io_comm);
+ }
}
}
}
@@ -1808,21 +1531,14 @@ int pio_read_darray_nc_serial(file_desc_t *file, io_desc_t *iodesc,
return ierr;
}
-/** Read a field from a file to the IO library.
+
+/** @brief Read a field from a file to the IO library.
* @ingroup PIO_read_darray
*
- * @param ncid identifies the netCDF file
- * @param vid
- * @param ioid
- * @param arraylen
- * @param array
- *
- * @return
*/
-int PIOc_read_darray(const int ncid, const int vid, const int ioid,
- const PIO_Offset arraylen, void *array)
+int PIOc_read_darray(const int ncid, const int vid, const int ioid, const PIO_Offset arraylen, void *array)
{
- iosystem_desc_t *ios; /** Pointer to io system information. */
+ iosystem_desc_t *ios;
file_desc_t *file;
io_desc_t *iodesc;
void *iobuf=NULL;
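The comment above documents the corresponding read entry point; a sketch mirroring the write example (same placeholder names, no fill value on the read side):

    double local[100];   /* receives this task's portion of the distributed array */
    ierr = PIOc_read_darray(ncid, varid, ioid, (PIO_Offset)100, local);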
@@ -1832,46 +1548,35 @@ int PIOc_read_darray(const int ncid, const int vid, const int ioid,
file = pio_get_file_from_id(ncid);
- if (file == NULL)
- {
+ if(file == NULL){
fprintf(stderr,"File handle not found %d %d\n",ncid,__LINE__);
return PIO_EBADID;
}
iodesc = pio_get_iodesc_from_id(ioid);
- if (iodesc == NULL)
- {
+ if(iodesc == NULL){
fprintf(stderr,"iodesc handle not found %d %d\n",ioid,__LINE__);
return PIO_EBADID;
}
ios = file->iosystem;
- if (ios->iomaster)
- {
+ if(ios->iomaster){
rlen = iodesc->maxiobuflen;
- }
- else
- {
+ }else{
rlen = iodesc->llen;
}
- if (iodesc->rearranger > 0)
- {
- if (ios->ioproc && rlen>0)
- {
+ if(iodesc->rearranger > 0){
+ if(ios->ioproc && rlen>0){
MPI_Type_size(iodesc->basetype, &tsize);
iobuf = bget(((size_t) tsize)*rlen);
- if (iobuf==NULL)
- {
+ if(iobuf==NULL){
piomemerror(*ios,rlen*((size_t) tsize), __FILE__,__LINE__);
}
}
- }
- else
- {
+ }else{
iobuf = array;
}
- switch(file->iotype)
- {
+ switch(file->iotype){
case PIO_IOTYPE_NETCDF:
case PIO_IOTYPE_NETCDF4C:
ierr = pio_read_darray_nc_serial(file, iodesc, vid, iobuf);
@@ -1883,8 +1588,7 @@ int PIOc_read_darray(const int ncid, const int vid, const int ioid,
default:
ierr = iotype_error(file->iotype,__FILE__,__LINE__);
}
- if (iodesc->rearranger > 0)
- {
+ if(iodesc->rearranger > 0){
ierr = rearrange_io2comp(*ios, iodesc, iobuf, array);
if(rlen>0)
@@ -1895,14 +1599,6 @@ int PIOc_read_darray(const int ncid, const int vid, const int ioid,
}
-/** Flush the output buffer.
- *
- * @param file
- * @param force
- * @param addsize
- *
- * @return
- */
int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize)
{
var_desc_t *vdesc;
@@ -1918,20 +1614,17 @@ int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize)
ierr = ncmpi_inq_buffer_usage(file->fh, &usage);
- if (!force && file->iosystem->io_comm != MPI_COMM_NULL)
- {
+ if(!force && file->iosystem->io_comm != MPI_COMM_NULL){
usage += addsize;
MPI_Allreduce(MPI_IN_PLACE, &usage, 1, MPI_OFFSET, MPI_MAX,
file->iosystem->io_comm);
}
- if (usage > maxusage)
- {
+ if(usage > maxusage){
maxusage = usage;
}
- if (force || usage>=PIO_BUFFER_SIZE_LIMIT)
- {
+ if(force || usage>=PIO_BUFFER_SIZE_LIMIT){
int rcnt;
bool prev_dist=false;
int prev_record=-1;
@@ -1941,32 +1634,28 @@ int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize)
maxreq = 0;
reqcnt=0;
rcnt=0;
- for (int i = 0; i < PIO_MAX_VARS; i++)
- {
+ for(int i=0; i<PIO_MAX_VARS; i++){
vdesc = file->varlist+i;
reqcnt+=vdesc->nreqs;
- if (vdesc->nreqs > 0)
- maxreq = i;
+ if(vdesc->nreqs>0) maxreq = i;
}
int request[reqcnt];
int status[reqcnt];
- for (int i = 0; i <= maxreq; i++)
- {
+ for(int i=0; i<=maxreq; i++){
vdesc = file->varlist+i;
#ifdef MPIO_ONESIDED
/*onesided optimization requires that all of the requests in a wait_all call represent
a contiguous block of data in the file */
- if (rcnt>0 && (prev_record != vdesc->record || vdesc->nreqs==0))
- {
+ if(rcnt>0 && (prev_record != vdesc->record ||
+ vdesc->nreqs==0)){
ierr = ncmpi_wait_all(file->fh, rcnt, request,status);
rcnt=0;
}
prev_record = vdesc->record;
#endif
// printf("%s %d %d %d %d \n",__FILE__,__LINE__,i,vdesc->nreqs,vdesc->request);
- for (reqcnt=0;reqcnt<vdesc->nreqs;reqcnt++)
- {
+ for(reqcnt=0;reqcnt<vdesc->nreqs;reqcnt++){
request[rcnt++] = max(vdesc->request[reqcnt],NC_REQ_NULL);
}
free(vdesc->request);
@@ -1981,8 +1670,7 @@ int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize)
// if(file->iosystem->io_rank==0){
// printf("%s %d %d\n",__FILE__,__LINE__,rcnt);
// }
- if (rcnt > 0)
- {
+ if(rcnt>0){
/*
if(file->iosystem->io_rank==0){
printf("%s %d %d ",__FILE__,__LINE__,rcnt);
@@ -1993,16 +1681,13 @@ int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize)
}*/
ierr = ncmpi_wait_all(file->fh, rcnt, request,status);
}
- for (int i = 0; i < PIO_MAX_VARS; i++)
- {
+ for(int i=0; i<PIO_MAX_VARS; i++){
vdesc = file->varlist+i;
- if (vdesc->iobuf)
- {
+ if(vdesc->iobuf != NULL){
brel(vdesc->iobuf);
vdesc->iobuf=NULL;
}
- if (vdesc->fillbuf)
- {
+ if(vdesc->fillbuf != NULL){
brel(vdesc->fillbuf);
vdesc->fillbuf=NULL;
}
@@ -2017,65 +1702,40 @@ int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize)
return ierr;
}
-/** Print out info about the buffer for debug purposes.
- *
- * @param ios the IO system structure
- * @param collective true if collective report is desired
- */
void cn_buffer_report(iosystem_desc_t ios, bool collective)
{
- if (CN_bpool)
- {
+ if(CN_bpool != NULL){
long bget_stats[5];
long bget_mins[5];
long bget_maxs[5];
bstats(bget_stats, bget_stats+1,bget_stats+2,bget_stats+3,bget_stats+4);
- if (collective)
- {
+ if(collective){
MPI_Reduce(bget_stats, bget_maxs, 5, MPI_LONG, MPI_MAX, 0, ios.comp_comm);
MPI_Reduce(bget_stats, bget_mins, 5, MPI_LONG, MPI_MIN, 0, ios.comp_comm);
- if (ios.compmaster)
- {
- printf("PIO: Currently allocated buffer space %ld %ld\n",
- bget_mins[0], bget_maxs[0]);
- printf("PIO: Currently available buffer space %ld %ld\n",
- bget_mins[1], bget_maxs[1]);
- printf("PIO: Current largest free block %ld %ld\n",
- bget_mins[2], bget_maxs[2]);
- printf("PIO: Number of successful bget calls %ld %ld\n",
- bget_mins[3], bget_maxs[3]);
- printf("PIO: Number of successful brel calls %ld %ld\n",
- bget_mins[4], bget_maxs[4]);
+ if(ios.compmaster){
+ printf("PIO: Currently allocated buffer space %ld %ld\n",bget_mins[0],bget_maxs[0]);
+ printf("PIO: Currently available buffer space %ld %ld\n",bget_mins[1],bget_maxs[1]);
+ printf("PIO: Current largest free block %ld %ld\n",bget_mins[2],bget_maxs[2]);
+ printf("PIO: Number of successful bget calls %ld %ld\n",bget_mins[3],bget_maxs[3]);
+ printf("PIO: Number of successful brel calls %ld %ld\n",bget_mins[4],bget_maxs[4]);
// print_trace(stdout);
}
- }
- else
- {
- printf("%d: PIO: Currently allocated buffer space %ld \n",
- ios.union_rank, bget_stats[0]) ;
- printf("%d: PIO: Currently available buffer space %ld \n",
- ios.union_rank, bget_stats[1]);
- printf("%d: PIO: Current largest free block %ld \n",
- ios.union_rank, bget_stats[2]);
- printf("%d: PIO: Number of successful bget calls %ld \n",
- ios.union_rank, bget_stats[3]);
- printf("%d: PIO: Number of successful brel calls %ld \n",
- ios.union_rank, bget_stats[4]);
+ }else{
+ printf("%d: PIO: Currently allocated buffer space %ld \n",ios.union_rank,bget_stats[0]) ;
+ printf("%d: PIO: Currently available buffer space %ld \n",ios.union_rank,bget_stats[1]);
+ printf("%d: PIO: Current largest free block %ld \n",ios.union_rank,bget_stats[2]);
+ printf("%d: PIO: Number of successful bget calls %ld \n",ios.union_rank,bget_stats[3]);
+ printf("%d: PIO: Number of successful brel calls %ld \n",ios.union_rank,bget_stats[4]);
}
}
}
-/** Free the buffer pool.
- *
- * @param ios
- */
void free_cn_buffer_pool(iosystem_desc_t ios)
{
#ifndef PIO_USE_MALLOC
- if (CN_bpool)
- {
+ if(CN_bpool != NULL){
cn_buffer_report(ios, true);
bpoolrelease(CN_bpool);
// free(CN_bpool);
@@ -2084,38 +1744,24 @@ void free_cn_buffer_pool(iosystem_desc_t ios)
#endif
}
-/** Flush the buffer.
- *
- * @param ncid
- * @param wmb
- * @param flushtodisk
- */
void flush_buffer(int ncid, wmulti_buffer *wmb, bool flushtodisk)
{
- if (wmb->validvars > 0)
- {
- PIOc_write_darray_multi(ncid, wmb->vid, wmb->ioid, wmb->validvars,
- wmb->arraylen, wmb->data, wmb->frame,
- wmb->fillvalue, flushtodisk);
+ if(wmb->validvars>0){
+ PIOc_write_darray_multi(ncid, wmb->vid, wmb->ioid, wmb->validvars, wmb->arraylen, wmb->data, wmb->frame, wmb->fillvalue, flushtodisk);
wmb->validvars=0;
brel(wmb->vid);
wmb->vid=NULL;
brel(wmb->data);
wmb->data=NULL;
- if (wmb->fillvalue)
+ if(wmb->fillvalue != NULL)
brel(wmb->fillvalue);
- if (wmb->frame)
+ if(wmb->frame != NULL)
brel(wmb->frame);
wmb->fillvalue=NULL;
wmb->frame=NULL;
}
}
-/** Comput the maximum aggregate number of bytes.
- *
- * @param ios
- * @param iodesc
- */
void compute_maxaggregate_bytes(const iosystem_desc_t ios, io_desc_t *iodesc)
{
int maxbytesoniotask=INT_MAX;
@@ -2124,12 +1770,12 @@ void compute_maxaggregate_bytes(const iosystem_desc_t ios, io_desc_t *iodesc)
// printf("%s %d %d %d\n",__FILE__,__LINE__,iodesc->maxiobuflen, iodesc->ndof);
- if (ios.ioproc && iodesc->maxiobuflen > 0)
+ if(ios.ioproc && iodesc->maxiobuflen>0){
maxbytesoniotask = PIO_BUFFER_SIZE_LIMIT/ iodesc->maxiobuflen;
-
- if (ios.comp_rank >= 0 && iodesc->ndof > 0)
+ }
+ if(ios.comp_rank>=0 && iodesc->ndof>0){
maxbytesoncomputetask = PIO_CNBUFFER_LIMIT/iodesc->ndof;
-
+ }
maxbytes = min(maxbytesoniotask,maxbytesoncomputetask);
// printf("%s %d %d %d\n",__FILE__,__LINE__,maxbytesoniotask, maxbytesoncomputetask);
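As a quick sanity check on the arithmetic in compute_maxaggregate_bytes above, using the default limits defined at the top of this file and made-up decomposition sizes: with PIO_BUFFER_SIZE_LIMIT = 10485760 and maxiobuflen = 1048576 an IO task allows 10 bytes per element, with PIO_CNBUFFER_LIMIT = 33554432 and ndof = 1048576 a compute task allows 32 bytes per element, and the aggregate limit is the smaller of the two, 10 bytes per element.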
diff --git a/cime/externals/pio2/src/clib/pio_darray_async.c b/cime/externals/pio2/src/clib/pio_darray_async.c
deleted file mode 100644
index d9e41a8340ea..000000000000
--- a/cime/externals/pio2/src/clib/pio_darray_async.c
+++ /dev/null
@@ -1,2164 +0,0 @@
-/** @file
- *
- * This file contains the routines that read and write
- * distributed arrays in PIO.
- *
- * When arrays are distributed, each processor holds some of the
- * array. Only by combining the distributed arrays from all processor
- * can the full array be obtained.
- *
- * @author Jim Edwards, Ed Hartnett
- */
-
-#include <config.h>
-#include <pio.h>
-#include <pio_internal.h>
-
-/* 10MB default limit. */
-PIO_Offset PIO_BUFFER_SIZE_LIMIT = 10485760;
-
-/* Initial size of compute buffer. */
-bufsize PIO_CNBUFFER_LIMIT = 33554432;
-
-/* Global buffer pool pointer. */
-static void *CN_bpool = NULL;
-
-/* Maximum buffer usage. */
-static PIO_Offset maxusage = 0;
-
-/** Set the pio buffer size limit. This is the size of the data buffer
- * on the IO nodes.
- *
- * The pio_buffer_size_limit will only apply to files opened after
- * the setting is changed.
- *
- * @param limit the size of the buffer on the IO nodes
- *
- * @return The previous limit setting.
- */
-PIO_Offset PIOc_set_buffer_size_limit(const PIO_Offset limit)
-{
- PIO_Offset oldsize;
- oldsize = PIO_BUFFER_SIZE_LIMIT;
- if (limit > 0)
- PIO_BUFFER_SIZE_LIMIT = limit;
- return oldsize;
-}
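For reference, a minimal usage sketch of the limit setter removed above: the new limit only affects files opened after the call, so it is normally adjusted before any file is created or opened. The 64 MB figure is an arbitrary illustration and error handling is omitted.

    #include <pio.h>

    /* Raise the IO-node buffer limit before opening files, then restore it.
     * The call returns the previous limit, so it can be put back afterwards. */
    void example_buffer_limit(void)
    {
        PIO_Offset old_limit = PIOc_set_buffer_size_limit((PIO_Offset)64 * 1024 * 1024);
        /* ... open files and write distributed arrays here ... */
        PIOc_set_buffer_size_limit(old_limit);
    }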
-
-/** Initialize the compute buffer to size PIO_CNBUFFER_LIMIT.
- *
- * This routine initializes the compute buffer pool if the bget memory
- * management is used. If malloc is used (that is, PIO_USE_MALLOC is
- * non zero), this function does nothing.
- *
- * @param ios the iosystem descriptor which will use the new buffer
- */
-void compute_buffer_init(iosystem_desc_t ios)
-{
-#if !PIO_USE_MALLOC
-
- if (!CN_bpool)
- {
- if (!(CN_bpool = malloc(PIO_CNBUFFER_LIMIT)))
- {
- char errmsg[180];
- sprintf(errmsg,"Unable to allocate a buffer pool of size %d on task %d:"
- " try reducing PIO_CNBUFFER_LIMIT\n", PIO_CNBUFFER_LIMIT, ios.comp_rank);
- piodie(errmsg, __FILE__, __LINE__);
- }
-
- bpool(CN_bpool, PIO_CNBUFFER_LIMIT);
- if (!CN_bpool)
- {
- char errmsg[180];
- sprintf(errmsg,"Unable to allocate a buffer pool of size %d on task %d:"
- " try reducing PIO_CNBUFFER_LIMIT\n", PIO_CNBUFFER_LIMIT, ios.comp_rank);
- piodie(errmsg, __FILE__, __LINE__);
- }
-
- bectl(NULL, malloc, free, PIO_CNBUFFER_LIMIT);
- }
-#endif
-}
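The initializer above hands a malloc'd region to the bundled bget allocator once; afterwards the write path satisfies requests with bget() and returns them with brel(). A standalone sketch of that lifecycle, assuming the bget routines shipped with PIO (bpool, bget, brel) and a header named "bget.h"; the 1 MB pool size is illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include "bget.h"   /* header name assumed; declares bufsize, bpool(), bget(), brel() */

    int main(void)
    {
        bufsize poolsize = 1024 * 1024;              /* illustrative 1 MB pool */
        void *pool = malloc(poolsize);
        if (!pool)
            return 1;
        bpool(pool, poolsize);                       /* register the region with bget */

        double *work = (double *)bget(1000 * sizeof(double));   /* allocate from the pool */
        if (work)
        {
            work[0] = 42.0;                          /* use the block */
            brel(work);                              /* return it to the pool */
        }
        free(pool);                                  /* release the underlying memory */
        return 0;
    }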
-
-/** Write a single distributed field to output. This routine is only
- * used if aggregation is off.
- *
- * @param file a pointer to the open file descriptor for the file
- * that will be written to
- * @param iodesc a pointer to the defined iodescriptor for the buffer
- * @param vid the variable id to be written
- * @param IOBUF the buffer to be written from this mpi task
- * @param fillvalue the optional fillvalue to be used for missing
- * data in this buffer
- *
- * @return 0 for success, error code otherwise.
- * @ingroup PIO_write_darray
- */
-int pio_write_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
- void *IOBUF, void *fillvalue)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- var_desc_t *vdesc;
- int ndims; /* Number of dimensions according to iodesc. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int i; /* Loop counter. */
- int mpierr = MPI_SUCCESS; /* Return code from MPI function codes. */
- int dsize; /* Size of the type. */
- MPI_Status status; /* Status from MPI_Recv calls. */
- PIO_Offset usage; /* Size of current buffer. */
- int fndims; /* Number of dims for variable according to netCDF. */
- PIO_Offset tdsize = 0; /* Total size. */
-
- LOG((1, "pio_write_array_nc vid = %d", vid));
-
-#ifdef TIMING
- /* Start timing this function. */
- GPTLstart("PIO:write_darray_nc");
-#endif
-
- /* Get the IO system info. */
- if (!(ios = file->iosystem))
- return PIO_EBADID;
-
- /* Get pointer to variable information. */
- if (!(vdesc = file->varlist + vid))
- return PIO_EBADID;
-
- ndims = iodesc->ndims;
-
- /* Get the number of dims for this var from netcdf. */
- ierr = PIOc_inq_varndims(file->fh, vid, &fndims);
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = 0;
-
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- }
- }
-
- /* If this is an IO task, write the data. */
- if (ios->ioproc)
- {
- io_region *region;
- int regioncnt;
- int rrcnt;
- void *bufptr;
- void *tmp_buf = NULL;
- int tsize; /* Type size. */
- size_t start[fndims]; /* Local start array for this task. */
- size_t count[fndims]; /* Local count array for this task. */
- int buflen;
- int j; /* Loop counter. */
-
- PIO_Offset *startlist[iodesc->maxregions];
- PIO_Offset *countlist[iodesc->maxregions];
-
- /* Get the type size (again?) */
- MPI_Type_size(iodesc->basetype, &tsize);
-
- region = iodesc->firstregion;
-
- /* If this is a var with an unlimited dimension, and the
- * iodesc ndims doesn't contain it, then add it to ndims. */
- if (vdesc->record >= 0 && ndims < fndims)
- ndims++;
-
-#ifdef _PNETCDF
- /* Make sure we have room in the buffer. */
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- flush_output_buffer(file, false, tsize * (iodesc->maxiobuflen));
-#endif
-
- rrcnt = 0;
- /* For each region, figure out start/count arrays. */
- for (regioncnt = 0; regioncnt < iodesc->maxregions; regioncnt++)
- {
- /* Init arrays to zeros. */
- for (i = 0; i < ndims; i++)
- {
- start[i] = 0;
- count[i] = 0;
- }
-
- if (region)
- {
- bufptr = (void *)((char *)IOBUF + tsize * region->loffset);
- if (vdesc->record >= 0)
- {
- /* This is a record based multidimensional array. */
-
- /* This does not look correct, but will work if
- * unlimited dim is dim 0. */
- start[0] = vdesc->record;
-
- /* Set the local start and count arrays. */
- for (i = 1; i < ndims; i++)
- {
- start[i] = region->start[i - 1];
- count[i] = region->count[i - 1];
- }
-
- /* If there is data to be written, write one timestep. */
- if (count[1] > 0)
- count[0] = 1;
- }
- else
- {
- /* Array without unlimited dimension. */
- for (i = 0; i < ndims; i++)
- {
- start[i] = region->start[i];
- count[i] = region->count[i];
- }
- }
- }
-
- switch(file->iotype)
- {
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
-
- /* Use collective writes with this variable. */
- ierr = nc_var_par_access(file->fh, vid, NC_COLLECTIVE);
-
- /* Write the data. */
- if (iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
- ierr = nc_put_vara_double(file->fh, vid, (size_t *)start, (size_t *)count,
- (const double *)bufptr);
- else if (iodesc->basetype == MPI_INTEGER)
- ierr = nc_put_vara_int(file->fh, vid, (size_t *)start, (size_t *)count,
- (const int *)bufptr);
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
- ierr = nc_put_vara_float(file->fh, vid, (size_t *)start, (size_t *)count,
- (const float *)bufptr);
- else
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",
- (int)iodesc->basetype);
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif /* _NETCDF4 */
- case PIO_IOTYPE_NETCDF:
- {
- /* Find the type size (again?) */
- mpierr = MPI_Type_size(iodesc->basetype, &dsize);
-
- size_t tstart[ndims], tcount[ndims];
-
-                /* The IO master task does all the data writes; the
-                 * other IO tasks send their data to the master (why?). */
- if (ios->io_rank == 0)
- {
- for (i = 0; i < iodesc->num_aiotasks; i++)
- {
- if (i == 0)
- {
- buflen = 1;
- for (j = 0; j < ndims; j++)
- {
- tstart[j] = start[j];
- tcount[j] = count[j];
- buflen *= tcount[j];
- tmp_buf = bufptr;
- }
- }
- else
- {
- /* Handshake - tell the sending task I'm ready. */
- mpierr = MPI_Send(&ierr, 1, MPI_INT, i, 0, ios->io_comm);
- mpierr = MPI_Recv(&buflen, 1, MPI_INT, i, 1, ios->io_comm, &status);
- if (buflen > 0)
- {
- mpierr = MPI_Recv(tstart, ndims, MPI_OFFSET, i, ios->num_iotasks+i,
- ios->io_comm, &status);
- mpierr = MPI_Recv(tcount, ndims, MPI_OFFSET, i, 2 * ios->num_iotasks + i,
- ios->io_comm, &status);
- tmp_buf = malloc(buflen * dsize);
- mpierr = MPI_Recv(tmp_buf, buflen, iodesc->basetype, i, i, ios->io_comm, &status);
- }
- }
-
- if (buflen > 0)
- {
- /* Write the data. */
- if (iodesc->basetype == MPI_INTEGER)
- ierr = nc_put_vara_int(file->fh, vid, tstart, tcount, (const int *)tmp_buf);
- else if (iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
- ierr = nc_put_vara_double(file->fh, vid, tstart, tcount, (const double *)tmp_buf);
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
- ierr = nc_put_vara_float(file->fh, vid, tstart, tcount, (const float *)tmp_buf);
- else
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",
- (int)iodesc->basetype);
-
- /* Was there an error from netCDF? */
- if (ierr == PIO_EEDGE)
- for (i = 0; i < ndims; i++)
- fprintf(stderr,"dim %d start %ld count %ld\n", i, tstart[i], tcount[i]);
-
- /* Free the temporary buffer, if we don't need it any more. */
- if (tmp_buf != bufptr)
- free(tmp_buf);
- }
- }
- }
- else if (ios->io_rank < iodesc->num_aiotasks)
- {
- buflen = 1;
- for (i = 0; i < ndims; i++)
- {
- tstart[i] = (size_t) start[i];
- tcount[i] = (size_t) count[i];
- buflen *= tcount[i];
- // printf("%s %d %d %d %d\n",__FILE__,__LINE__,i,tstart[i],tcount[i]);
- }
- /* printf("%s %d %d %d %d %d %d %d %d %d\n",__FILE__,__LINE__,ios->io_rank,tstart[0],
- tstart[1],tcount[0],tcount[1],buflen,ndims,fndims);*/
- mpierr = MPI_Recv(&ierr, 1, MPI_INT, 0, 0, ios->io_comm, &status); // task0 is ready to receive
- mpierr = MPI_Rsend(&buflen, 1, MPI_INT, 0, 1, ios->io_comm);
- if (buflen > 0)
- {
- mpierr = MPI_Rsend(tstart, ndims, MPI_OFFSET, 0, ios->num_iotasks+ios->io_rank,
- ios->io_comm);
- mpierr = MPI_Rsend(tcount, ndims, MPI_OFFSET, 0,2*ios->num_iotasks+ios->io_rank,
- ios->io_comm);
- mpierr = MPI_Rsend(bufptr, buflen, iodesc->basetype, 0, ios->io_rank, ios->io_comm);
- }
- }
- break;
- }
- break;
-#endif /* _NETCDF */
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- for (i = 0, dsize = 1; i < ndims; i++)
- dsize *= count[i];
-
- tdsize += dsize;
- // if (dsize==1 && ndims==2)
- // printf("%s %d %d\n",__FILE__,__LINE__,iodesc->basetype);
-
- if (dsize > 0)
- {
- // printf("%s %d %d %d\n",__FILE__,__LINE__,ios->io_rank,dsize);
- startlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
- countlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
- for (i = 0; i < fndims; i++)
- {
- startlist[rrcnt][i] = start[i];
- countlist[rrcnt][i] = count[i];
- }
- rrcnt++;
- }
- if (regioncnt == iodesc->maxregions - 1)
- {
- // printf("%s %d %d %ld %ld\n",__FILE__,__LINE__,ios->io_rank,iodesc->llen, tdsize);
- // ierr = ncmpi_put_varn_all(file->fh, vid, iodesc->maxregions, startlist, countlist,
- // IOBUF, iodesc->llen, iodesc->basetype);
- int reqn = 0;
-
- if (vdesc->nreqs % PIO_REQUEST_ALLOC_CHUNK == 0 )
- {
- vdesc->request = realloc(vdesc->request,
- sizeof(int) * (vdesc->nreqs + PIO_REQUEST_ALLOC_CHUNK));
-
- for (int i = vdesc->nreqs; i < vdesc->nreqs + PIO_REQUEST_ALLOC_CHUNK; i++)
- vdesc->request[i] = NC_REQ_NULL;
- reqn = vdesc->nreqs;
- }
- else
- while(vdesc->request[reqn] != NC_REQ_NULL)
- reqn++;
-
- ierr = ncmpi_bput_varn(file->fh, vid, rrcnt, startlist, countlist,
- IOBUF, iodesc->llen, iodesc->basetype, vdesc->request+reqn);
- if (vdesc->request[reqn] == NC_REQ_NULL)
- vdesc->request[reqn] = PIO_REQ_NULL; //keeps wait calls in sync
- vdesc->nreqs = reqn;
-
- // printf("%s %d %X %d\n",__FILE__,__LINE__,IOBUF,request);
-            for (i=0;i<rrcnt;i++)
-            {
-                free(startlist[i]);
-                free(countlist[i]);
-            }
-        }
-        break;
-    default:
-        ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
-
- /* Move to the next region. */
- if (region)
- region = region->next;
-    } // for (regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++){
- } // if (ios->ioproc)
-
- /* Check the error code returned by netCDF. */
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
-#ifdef TIMING
- /* Stop timing this function. */
- GPTLstop("PIO:write_darray_nc");
-#endif
-
- return ierr;
-}
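The heart of the routine above is the region-to-start/count mapping: for a record variable the unlimited dimension gets start = record and count = 1 (when there is data), and the remaining dimensions are copied from the region. A self-contained sketch of that mapping; the region type here is a simplified stand-in, not PIO's io_region.

    #include <stdio.h>
    #include <stddef.h>

    /* Simplified stand-in for one contiguous region of a decomposition. */
    struct region {
        size_t start[3];
        size_t count[3];
    };

    /* Build netCDF start/count vectors for a record variable with fndims
     * file dimensions, where dimension 0 is the unlimited (record) dimension. */
    static void build_start_count(const struct region *r, int record, int fndims,
                                  size_t *start, size_t *count)
    {
        start[0] = (size_t)record;
        count[0] = 0;
        for (int i = 1; i < fndims; i++)
        {
            start[i] = r->start[i - 1];
            count[i] = r->count[i - 1];
        }
        if (fndims > 1 && count[1] > 0)
            count[0] = 1;          /* write exactly one record if there is data */
    }

    int main(void)
    {
        struct region r = { {4, 0, 0}, {10, 20, 0} };
        size_t start[3], count[3];
        build_start_count(&r, 7, 3, start, count);
        for (int i = 0; i < 3; i++)
            printf("dim %d start %zu count %zu\n", i, start[i], count[i]);
        return 0;
    }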
-
-/** Write a set of one or more aggregated arrays to output file.
- *
- * This routine is used if aggregation is enabled, data is already on
- * the io-tasks
- *
- * @param file a pointer to the open file descriptor for the file
- * that will be written to
- * @param nvars the number of variables to be written with this
- * decomposition
- * @param vid: an array of the variable ids to be written
- * @param iodesc_ndims: the number of dimensions explicitly in the
- * iodesc
- * @param basetype the basic type of the minimal data unit
- * @param gsize array of the global dimensions of the field to
- * be written
- * @param maxregions max number of blocks to be written from
- * this iotask
- * @param firstregion pointer to the first element of a linked
- * list of region descriptions.
- * @param llen length of the iobuffer on this task for a single
- * field
- * @param maxiobuflen maximum llen participating
- * @param num_aiotasks actual number of iotasks participating
- * @param IOBUF the buffer to be written from this mpi task
- * @param frame the frame or record dimension for each of the nvars
- * variables in IOBUF
- *
- * @return 0 for success, error code otherwise.
- * @ingroup PIO_write_darray
- */
-int pio_write_darray_multi_nc(file_desc_t *file, const int nvars, const int *vid,
- const int iodesc_ndims, MPI_Datatype basetype, const PIO_Offset *gsize,
- const int maxregions, io_region *firstregion, const PIO_Offset llen,
- const int maxiobuflen, const int num_aiotasks,
- void *IOBUF, const int *frame)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- var_desc_t *vdesc;
- int ierr;
- int i;
- int mpierr = MPI_SUCCESS; /* Return code from MPI function codes. */
- int dsize;
- MPI_Status status;
- PIO_Offset usage;
- int fndims;
- PIO_Offset tdsize;
- int tsize;
- int ncid;
- tdsize=0;
- ierr = PIO_NOERR;
-
-#ifdef TIMING
- /* Start timing this function. */
- GPTLstart("PIO:write_darray_multi_nc");
-#endif
-
- ios = file->iosystem;
- if (ios == NULL)
- {
- fprintf(stderr,"Failed to find iosystem handle \n");
- return PIO_EBADID;
- }
- vdesc = (file->varlist)+vid[0];
- ncid = file->fh;
-
- if (vdesc == NULL)
- {
- fprintf(stderr,"Failed to find variable handle %d\n",vid[0]);
- return PIO_EBADID;
- }
-
- /* If async is in use, send message to IO master task. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = 0;
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
- }
-
- ierr = PIOc_inq_varndims(file->fh, vid[0], &fndims);
- MPI_Type_size(basetype, &tsize);
-
- if (ios->ioproc)
- {
- io_region *region;
- int regioncnt;
- int rrcnt;
- void *bufptr;
- int buflen, j;
- size_t start[fndims];
- size_t count[fndims];
- int ndims = iodesc_ndims;
-
- PIO_Offset *startlist[maxregions];
- PIO_Offset *countlist[maxregions];
-
- ncid = file->fh;
- region = firstregion;
-
- rrcnt = 0;
- for (regioncnt = 0; regioncnt < maxregions; regioncnt++)
- {
- // printf("%s %d %d %d %d %d %d\n",__FILE__,__LINE__,region->start[0],region->count[0],ndims,fndims,vdesc->record);
- for (i = 0; i < fndims; i++)
- {
- start[i] = 0;
- count[i] = 0;
- }
- if (region)
- {
- // this is a record based multidimensional array
- if (vdesc->record >= 0)
- {
- for (i = fndims - ndims; i < fndims; i++)
- {
- start[i] = region->start[i-(fndims-ndims)];
- count[i] = region->count[i-(fndims-ndims)];
- }
-
-                    if (fndims>1 && ndims<fndims && count[1]>0)
- {
- count[0] = 1;
- start[0] = frame[0];
- }
- else if (fndims==ndims)
- {
- start[0] += vdesc->record;
- }
- // Non-time dependent array
- }
- else
- {
- for (i = 0; i < ndims; i++)
- {
- start[i] = region->start[i];
- count[i] = region->count[i];
- }
- }
- }
-
- switch(file->iotype)
- {
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- for (int nv = 0; nv < nvars; nv++)
- {
- if (vdesc->record >= 0 && ndims < fndims)
- {
- start[0] = frame[nv];
- }
- if (region)
- {
- bufptr = (void *)((char *) IOBUF + tsize*(nv*llen + region->loffset));
- }
- ierr = nc_var_par_access(ncid, vid[nv], NC_COLLECTIVE);
-
- if (basetype == MPI_DOUBLE ||basetype == MPI_REAL8)
- {
- ierr = nc_put_vara_double (ncid, vid[nv],(size_t *) start,(size_t *) count,
- (const double *)bufptr);
- }
- else if (basetype == MPI_INTEGER)
- {
- ierr = nc_put_vara_int (ncid, vid[nv], (size_t *) start, (size_t *) count,
- (const int *)bufptr);
- }
- else if (basetype == MPI_FLOAT || basetype == MPI_REAL4)
- {
- ierr = nc_put_vara_float (ncid, vid[nv], (size_t *) start, (size_t *) count,
- (const float *)bufptr);
- }
- else
- {
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",
- (int)basetype);
- }
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- for (i = 0, dsize = 1; i < fndims; i++)
- {
- dsize *= count[i];
- }
- tdsize += dsize;
-
- if (dsize>0)
- {
- // printf("%s %d %d %d\n",__FILE__,__LINE__,ios->io_rank,dsize);
- startlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
- countlist[rrcnt] = (PIO_Offset *) calloc(fndims, sizeof(PIO_Offset));
- for (i = 0; i < fndims; i++)
- {
- startlist[rrcnt][i]=start[i];
- countlist[rrcnt][i]=count[i];
- }
- rrcnt++;
- }
- if (regioncnt==maxregions-1)
- {
- //printf("%s %d %d %ld %ld\n",__FILE__,__LINE__,ios->io_rank,iodesc->llen, tdsize);
- // ierr = ncmpi_put_varn_all(ncid, vid, iodesc->maxregions, startlist, countlist,
- // IOBUF, iodesc->llen, iodesc->basetype);
-
- //printf("%s %d %ld \n",__FILE__,__LINE__,IOBUF);
-                for (int nv=0; nv<nvars; nv++)
-                {
-                    vdesc = (file->varlist)+vid[nv];
-                    if (vdesc->record >= 0 && ndims<fndims)
-                    {
-                        start[0] = frame[nv];
-                    }
-                    bufptr = (void *)((char *) IOBUF + tsize*(nv*llen));
-                    int reqn = 0;
-                    if (vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 )
- {
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
-
-                        for (int i=vdesc->nreqs;i<vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK;i++)
- {
- vdesc->request[i]=NC_REQ_NULL;
- }
- reqn = vdesc->nreqs;
- }
- else
- {
- while(vdesc->request[reqn] != NC_REQ_NULL)
- {
- reqn++;
- }
- }
- ierr = ncmpi_iput_varn(ncid, vid[nv], rrcnt, startlist, countlist,
- bufptr, llen, basetype, vdesc->request+reqn);
- /*
- ierr = ncmpi_bput_varn(ncid, vid[nv], rrcnt, startlist, countlist,
- bufptr, llen, basetype, &(vdesc->request));
- */
- if (vdesc->request[reqn] == NC_REQ_NULL)
- {
- vdesc->request[reqn] = PIO_REQ_NULL; //keeps wait calls in sync
- }
- vdesc->nreqs += reqn+1;
-
- // printf("%s %d %d %d\n",__FILE__,__LINE__,vdesc->nreqs,vdesc->request[reqn]);
- }
-                for (i=0;i<rrcnt;i++)
-                {
-                    free(startlist[i]);
-                    free(countlist[i]);
-                }
-            }
-            break;
-        default:
-            ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- if (region)
- region = region->next;
-        } // for (regioncnt=0;regioncnt<maxregions;regioncnt++){
- } // if (ios->ioproc)
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
-#ifdef TIMING
- /* Stop timing this function. */
- GPTLstop("PIO:write_darray_multi_nc");
-#endif
-
- return ierr;
-}
-
-/** Write a set of one or more aggregated arrays to output file in
- * serial mode.
- *
- * This routine is used if aggregation is enabled, data is already on the
- * io-tasks
- *
- * @param file: a pointer to the open file descriptor for the file
- * that will be written to
- * @param nvars: the number of variables to be written with this
- * decomposition
- * @param vid: an array of the variable ids to be written
- * @param iodesc_ndims: the number of dimensions explicitly in the
- * iodesc
- * @param basetype : the basic type of the minimal data unit
- * @param gsize : array of the global dimensions of the field to be
- * written
- * @param maxregions : max number of blocks to be written from this
- * iotask
- * @param firstregion : pointer to the first element of a linked
- * list of region descriptions.
- * @param llen : length of the iobuffer on this task for a single
- * field
- * @param maxiobuflen : maximum llen participating
- * @param num_aiotasks : actual number of iotasks participating
- * @param IOBUF: the buffer to be written from this mpi task
- * @param frame : the frame or record dimension for each of the
- * nvars variables in IOBUF
- *
- * @return 0 for success, error code otherwise.
- * @ingroup PIO_write_darray
- */
-int pio_write_darray_multi_nc_serial(file_desc_t *file, const int nvars, const int *vid,
- const int iodesc_ndims, MPI_Datatype basetype, const PIO_Offset *gsize,
- const int maxregions, io_region *firstregion, const PIO_Offset llen,
- const int maxiobuflen, const int num_aiotasks,
- void *IOBUF, const int *frame)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- var_desc_t *vdesc;
- int ierr;
- int i;
- int mpierr = MPI_SUCCESS; /* Return code from MPI function codes. */
- int dsize;
- MPI_Status status;
- PIO_Offset usage;
- int fndims;
- PIO_Offset tdsize;
- int tsize;
- int ncid;
- tdsize=0;
- ierr = PIO_NOERR;
-#ifdef TIMING
- /* Start timing this function. */
- GPTLstart("PIO:write_darray_multi_nc_serial");
-#endif
-
- if (!(ios = file->iosystem))
- {
- fprintf(stderr,"Failed to find iosystem handle \n");
- return PIO_EBADID;
- }
-
- ncid = file->fh;
-
- if (!(vdesc = (file->varlist) + vid[0]))
- {
- fprintf(stderr,"Failed to find variable handle %d\n",vid[0]);
- return PIO_EBADID;
- }
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (! ios->ioproc)
- {
- int msg = 0;
-
- if (ios->comp_rank==0)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
- }
-
- ierr = PIOc_inq_varndims(file->fh, vid[0], &fndims);
- MPI_Type_size(basetype, &tsize);
-
- if (ios->ioproc)
- {
- io_region *region;
- int regioncnt;
- int rrcnt;
- void *bufptr;
- int buflen, j;
- size_t tmp_start[fndims*maxregions];
- size_t tmp_count[fndims*maxregions];
-
- int ndims = iodesc_ndims;
-
- ncid = file->fh;
- region = firstregion;
-
-
- rrcnt = 0;
- for (regioncnt = 0; regioncnt < maxregions; regioncnt++)
- {
- for (i = 0; i < fndims; i++)
- {
- tmp_start[i + regioncnt * fndims] = 0;
- tmp_count[i + regioncnt * fndims] = 0;
- }
- if (region)
- {
- // this is a record based multidimensional array
- if (vdesc->record >= 0)
- {
- for (i = fndims - ndims; i < fndims; i++)
- {
- tmp_start[i + regioncnt * fndims] = region->start[i - (fndims - ndims)];
- tmp_count[i + regioncnt * fndims] = region->count[i - (fndims - ndims)];
- }
- // Non-time dependent array
- }
- else
- {
- for (i = 0; i < ndims; i++)
- {
- tmp_start[i + regioncnt * fndims] = region->start[i];
- tmp_count[i + regioncnt * fndims] = region->count[i];
- }
- }
- region = region->next;
- }
- }
- if (ios->io_rank > 0)
- {
- mpierr = MPI_Recv(&ierr, 1, MPI_INT, 0, 0, ios->io_comm, &status); // task0 is ready to receive
- MPI_Send(&llen, 1, MPI_OFFSET, 0, ios->io_rank, ios->io_comm);
- if (llen>0)
- {
- MPI_Send(&maxregions, 1, MPI_INT, 0, ios->io_rank+ios->num_iotasks, ios->io_comm);
- MPI_Send(tmp_start, maxregions*fndims, MPI_OFFSET, 0, ios->io_rank+2*ios->num_iotasks, ios->io_comm);
- MPI_Send(tmp_count, maxregions*fndims, MPI_OFFSET, 0, ios->io_rank+3*ios->num_iotasks, ios->io_comm);
- // printf("%s %d %ld\n",__FILE__,__LINE__,nvars*llen);
- MPI_Send(IOBUF, nvars*llen, basetype, 0, ios->io_rank+4*ios->num_iotasks, ios->io_comm);
- }
- }
- else
- {
- size_t rlen;
- int rregions;
- size_t start[fndims], count[fndims];
- size_t loffset;
- mpierr = MPI_Type_size(basetype, &dsize);
-
-            for (int rtask=0; rtask<ios->num_iotasks; rtask++)
- {
- if (rtask>0)
- {
- mpierr = MPI_Send(&ierr, 1, MPI_INT, rtask, 0, ios->io_comm); // handshake - tell the sending task I'm ready
- MPI_Recv(&rlen, 1, MPI_OFFSET, rtask, rtask, ios->io_comm, &status);
- if (rlen>0){
- MPI_Recv(&rregions, 1, MPI_INT, rtask, rtask+ios->num_iotasks, ios->io_comm, &status);
- MPI_Recv(tmp_start, rregions*fndims, MPI_OFFSET, rtask, rtask+2*ios->num_iotasks, ios->io_comm, &status);
- MPI_Recv(tmp_count, rregions*fndims, MPI_OFFSET, rtask, rtask+3*ios->num_iotasks, ios->io_comm, &status);
- // printf("%s %d %d %ld\n",__FILE__,__LINE__,rtask,nvars*rlen);
- MPI_Recv(IOBUF, nvars*rlen, basetype, rtask, rtask+4*ios->num_iotasks, ios->io_comm, &status);
- }
- }
- else
- {
- rlen = llen;
- rregions = maxregions;
- }
- if (rlen>0)
- {
- loffset = 0;
- for (regioncnt=0;regioncntrecord>=0)
- {
-                            if (fndims>1 && ndims<fndims && count[1]>0)
- {
- count[0] = 1;
- start[0] = frame[nv];
- }
- else if (fndims==ndims)
- {
- start[0]+=vdesc->record;
- }
- }
-
- if (basetype == MPI_INTEGER)
- {
- ierr = nc_put_vara_int (ncid, vid[nv], start, count, (const int *) bufptr);
- }
- else if (basetype == MPI_DOUBLE || basetype == MPI_REAL8)
- {
- ierr = nc_put_vara_double (ncid, vid[nv], start, count, (const double *) bufptr);
- }
- else if (basetype == MPI_FLOAT || basetype == MPI_REAL4)
- {
- ierr = nc_put_vara_float (ncid,vid[nv], start, count, (const float *) bufptr);
- }
- else
- {
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",(int) basetype);
- }
-
- if (ierr != PIO_NOERR){
- for (i=0;imaxregions;regioncnt++){
- } // if (rlen>0)
-            } // for (int rtask=0; rtask<ios->num_iotasks; rtask++){
-
- }
- } // if (ios->ioproc)
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
-#ifdef TIMING
- /* Stop timing this function. */
- GPTLstop("PIO:write_darray_multi_nc_serial");
-#endif
-
- return ierr;
-}
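The serial path above funnels every IO task's buffer through io_rank 0 with a handshake: rank 0 sends a small "ready" message, the sender replies with its length and then its data. A minimal, self-contained MPI sketch of that handshake, using generic tags and a plain int payload rather than the PIO message layout.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0)
        {
            for (int src = 1; src < size; src++)
            {
                int ready = 0, len = 0;
                /* Handshake: tell the sender we are ready, then receive its length. */
                MPI_Send(&ready, 1, MPI_INT, src, 0, MPI_COMM_WORLD);
                MPI_Recv(&len, 1, MPI_INT, src, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                if (len > 0)
                {
                    int data;
                    MPI_Recv(&data, 1, MPI_INT, src, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    printf("rank 0 got %d from rank %d\n", data, src);
                }
            }
        }
        else
        {
            int ready, len = 1, data = 100 + rank;
            MPI_Recv(&ready, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&len, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
            MPI_Send(&data, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }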
-
-/** Write one or more arrays with the same IO decomposition to the file.
- *
- * @param ncid identifies the netCDF file
- * @param vid: an array of the variable ids to be written
- * @param ioid: the I/O description ID as passed back by
- * PIOc_InitDecomp().
- * @param nvars the number of variables to be written with this
- * decomposition
- * @param arraylen: the length of the array to be written. This
- * is the length of the distributed array. That is, the length of
- * the portion of the data that is on the processor.
- * @param array: pointer to the data to be written. This is a
- * pointer to the distributed portion of the array that is on this
- * processor.
- * @param frame the frame or record dimension for each of the nvars
- * variables in IOBUF
- * @param fillvalue: pointer to the fill value to be used for
- * missing data.
- * @param flushtodisk
- *
- * @return 0 for success, error code otherwise.
- * @ingroup PIO_write_darray
- */
-int PIOc_write_darray_multi(const int ncid, const int *vid, const int ioid,
- const int nvars, const PIO_Offset arraylen,
- void *array, const int *frame, void **fillvalue,
- bool flushtodisk)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file;
- io_desc_t *iodesc;
-
- int vsize, rlen;
- int ierr;
- var_desc_t *vdesc0;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if (file == NULL)
- {
- fprintf(stderr,"File handle not found %d %d\n",ncid,__LINE__);
- return PIO_EBADID;
- }
- if (! (file->mode & PIO_WRITE))
- {
- fprintf(stderr,"ERROR: Attempt to write to read-only file\n");
- return PIO_EPERM;
- }
-
- iodesc = pio_get_iodesc_from_id(ioid);
- if (iodesc == NULL)
- {
- // print_trace(NULL);
- //fprintf(stderr,"iodesc handle not found %d %d\n",ioid,__LINE__);
- return PIO_EBADID;
- }
-
- vdesc0 = file->varlist+vid[0];
-
- pioassert(nvars>0,"nvars <= 0",__FILE__,__LINE__);
-
- ios = file->iosystem;
- // rlen = iodesc->llen*nvars;
- rlen=0;
- if (iodesc->llen>0)
- {
- rlen = iodesc->maxiobuflen*nvars;
- }
- if (vdesc0->iobuf)
- {
- piodie("Attempt to overwrite existing io buffer",__FILE__,__LINE__);
- }
- if (iodesc->rearranger>0)
- {
- if (rlen>0)
- {
- MPI_Type_size(iodesc->basetype, &vsize);
- //printf("rlen*vsize = %ld\n",rlen*vsize);
-
- vdesc0->iobuf = bget((size_t) vsize* (size_t) rlen);
- if (vdesc0->iobuf==NULL)
- {
- printf("%s %d %d %ld\n",__FILE__,__LINE__,nvars,vsize*rlen);
- piomemerror(*ios,(size_t) rlen*(size_t) vsize, __FILE__,__LINE__);
- }
- if (iodesc->needsfill && iodesc->rearranger==PIO_REARR_BOX)
- {
- if (vsize==4)
- {
- for (int nv=0;nv < nvars; nv++)
- {
-                    for (int i=0;i<iodesc->maxiobuflen;i++)
- {
- ((float *) vdesc0->iobuf)[i+nv*(iodesc->maxiobuflen)] = ((float *)fillvalue)[nv];
- }
- }
- }
- else if (vsize==8)
- {
- for (int nv=0;nv < nvars; nv++)
- {
-                    for (int i=0;i<iodesc->maxiobuflen;i++)
- {
- ((double *)vdesc0->iobuf)[i+nv*(iodesc->maxiobuflen)] = ((double *)fillvalue)[nv];
- }
- }
- }
- }
- }
-
- ierr = rearrange_comp2io(*ios, iodesc, array, vdesc0->iobuf, nvars);
- }/* this is wrong, need to think about it
- else{
- vdesc0->iobuf = array;
- } */
- switch(file->iotype)
- {
- case PIO_IOTYPE_NETCDF4P:
- case PIO_IOTYPE_PNETCDF:
- ierr = pio_write_darray_multi_nc(file, nvars, vid,
- iodesc->ndims, iodesc->basetype, iodesc->gsize,
- iodesc->maxregions, iodesc->firstregion, iodesc->llen,
- iodesc->maxiobuflen, iodesc->num_aiotasks,
- vdesc0->iobuf, frame);
- break;
- case PIO_IOTYPE_NETCDF4C:
- case PIO_IOTYPE_NETCDF:
- ierr = pio_write_darray_multi_nc_serial(file, nvars, vid,
- iodesc->ndims, iodesc->basetype, iodesc->gsize,
- iodesc->maxregions, iodesc->firstregion, iodesc->llen,
- iodesc->maxiobuflen, iodesc->num_aiotasks,
- vdesc0->iobuf, frame);
- if (vdesc0->iobuf)
- {
- brel(vdesc0->iobuf);
- vdesc0->iobuf = NULL;
- }
- break;
-
- }
-
- if (iodesc->rearranger == PIO_REARR_SUBSET && iodesc->needsfill &&
- iodesc->holegridsize>0)
- {
- if (vdesc0->fillbuf)
- {
- piodie("Attempt to overwrite existing buffer",__FILE__,__LINE__);
- }
-
- vdesc0->fillbuf = bget(iodesc->holegridsize*vsize*nvars);
- //printf("%s %d %x\n",__FILE__,__LINE__,vdesc0->fillbuf);
- if (vsize==4)
- {
- for (int nv=0;nvholegridsize;i++)
- {
- ((float *) vdesc0->fillbuf)[i+nv*iodesc->holegridsize] = ((float *) fillvalue)[nv];
- }
- }
- }
- else if (vsize==8)
- {
- for (int nv=0;nvholegridsize;i++)
- {
- ((double *) vdesc0->fillbuf)[i+nv*iodesc->holegridsize] = ((double *) fillvalue)[nv];
- }
- }
- }
- switch(file->iotype)
- {
- case PIO_IOTYPE_PNETCDF:
- ierr = pio_write_darray_multi_nc(file, nvars, vid,
- iodesc->ndims, iodesc->basetype, iodesc->gsize,
- iodesc->maxfillregions, iodesc->fillregion, iodesc->holegridsize,
- iodesc->holegridsize, iodesc->num_aiotasks,
- vdesc0->fillbuf, frame);
- break;
- case PIO_IOTYPE_NETCDF4P:
- case PIO_IOTYPE_NETCDF4C:
- case PIO_IOTYPE_NETCDF:
- /* ierr = pio_write_darray_multi_nc_serial(file, nvars, vid,
- iodesc->ndims, iodesc->basetype, iodesc->gsize,
- iodesc->maxfillregions, iodesc->fillregion, iodesc->holegridsize,
- iodesc->holegridsize, iodesc->num_aiotasks,
- vdesc0->fillbuf, frame);
- */
- /* if (vdesc0->fillbuf != NULL){
- printf("%s %d %x\n",__FILE__,__LINE__,vdesc0->fillbuf);
- brel(vdesc0->fillbuf);
- vdesc0->fillbuf = NULL;
- }
- */
- break;
- }
- }
-
- flush_output_buffer(file, flushtodisk, 0);
-
- return ierr;
-}
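A hedged usage sketch for PIOc_write_darray_multi as documented above: two record variables that share one decomposition are written in a single call. The ncid, variable ids, ioid, local array length and frame values are illustrative and would normally come from the file-definition and PIOc_InitDecomp calls; passing NULL for the fill values assumes the decomposition does not require filling.

    #include <stdlib.h>
    #include <string.h>
    #include <stdbool.h>
    #include <pio.h>

    /* Write two record variables that share the decomposition 'ioid' in one call.
     * local_len is the length of each variable's distributed portion on this task. */
    int write_two_vars(int ncid, int ioid, int varid_t, int varid_q,
                       PIO_Offset local_len, const double *t_data, const double *q_data)
    {
        int vids[2]  = { varid_t, varid_q };
        int frame[2] = { 0, 0 };                   /* record (time index) for each variable */
        double *buf  = malloc(2 * local_len * sizeof(double));
        if (!buf)
            return PIO_ENOMEM;

        /* The multi-variable buffer holds the nvars fields back to back. */
        memcpy(buf,             t_data, local_len * sizeof(double));
        memcpy(buf + local_len, q_data, local_len * sizeof(double));

        int ierr = PIOc_write_darray_multi(ncid, vids, ioid, 2, local_len, buf,
                                           frame, NULL, false);
        free(buf);
        return ierr;
    }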
-
-/** Write a distributed array to the output file.
- *
- * This routine aggregates output on the compute nodes and only sends
- * it to the IO nodes when the compute buffer is full or when a flush
- * is triggered.
- *
- * @param ncid: the ncid of the open netCDF file.
- * @param vid: the variable ID returned by PIOc_def_var().
- * @param ioid: the I/O description ID as passed back by
- * PIOc_InitDecomp().
- * @param arraylen: the length of the array to be written. This
- * is the length of the distrubited array. That is, the length of
- * the portion of the data that is on the processor.
- * @param array: pointer to the data to be written. This is a
- * pointer to the distributed portion of the array that is on this
- * processor.
- * @param fillvalue: pointer to the fill value to be used for
- * missing data.
- *
- * @returns 0 for success, non-zero error code for failure.
- * @ingroup PIO_write_darray
- */
-int PIOc_write_darray(const int ncid, const int vid, const int ioid,
- const PIO_Offset arraylen, void *array, void *fillvalue)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file;
- io_desc_t *iodesc;
- var_desc_t *vdesc;
- void *bufptr;
- size_t rlen;
- int ierr;
- MPI_Datatype vtype;
- wmulti_buffer *wmb;
- int tsize;
- int *tptr;
- void *bptr;
- void *fptr;
- bool recordvar;
- int needsflush;
- bufsize totfree, maxfree;
-
- ierr = PIO_NOERR;
- needsflush = 0; // false
- file = pio_get_file_from_id(ncid);
- if (file == NULL)
- {
- fprintf(stderr,"File handle not found %d %d\n",ncid,__LINE__);
- return PIO_EBADID;
- }
- if (! (file->mode & PIO_WRITE))
- {
- fprintf(stderr,"ERROR: Attempt to write to read-only file\n");
- return PIO_EPERM;
- }
-
- iodesc = pio_get_iodesc_from_id(ioid);
- if (iodesc == NULL)
- {
- fprintf(stderr,"iodesc handle not found %d %d\n",ioid,__LINE__);
- return PIO_EBADID;
- }
- ios = file->iosystem;
-
- vdesc = (file->varlist)+vid;
- if (vdesc == NULL)
- return PIO_EBADID;
-
- /* Is this a record variable? */
- recordvar = vdesc->record < 0 ? true : false;
-
- if (iodesc->ndof != arraylen)
- {
- fprintf(stderr,"ndof=%ld, arraylen=%ld\n",iodesc->ndof,arraylen);
- piodie("ndof != arraylen",__FILE__,__LINE__);
- }
- wmb = &(file->buffer);
- if (wmb->ioid == -1)
- {
- if (recordvar)
- wmb->ioid = ioid;
- else
- wmb->ioid = -(ioid);
- }
- else
- {
- // separate record and non-record variables
- if (recordvar)
- {
- while(wmb->next && wmb->ioid!=ioid)
- if (wmb->next!=NULL)
- wmb = wmb->next;
-#ifdef _PNETCDF
- /* flush the previous record before starting a new one. this is collective */
- // if (vdesc->request != NULL && (vdesc->request[0] != NC_REQ_NULL) ||
- // (wmb->frame != NULL && vdesc->record != wmb->frame[0])){
- // needsflush = 2; // flush to disk
- // }
-#endif
- }
- else
- {
- while(wmb->next && wmb->ioid!= -(ioid))
- {
- if (wmb->next!=NULL)
- wmb = wmb->next;
- }
- }
- }
- if ((recordvar && wmb->ioid != ioid) || (!recordvar && wmb->ioid != -(ioid)))
- {
- wmb->next = (wmulti_buffer *) bget((bufsize) sizeof(wmulti_buffer));
- if (wmb->next == NULL)
- piomemerror(*ios,sizeof(wmulti_buffer), __FILE__,__LINE__);
- wmb=wmb->next;
- wmb->next=NULL;
- if (recordvar)
- wmb->ioid = ioid;
- else
- wmb->ioid = -(ioid);
- wmb->validvars=0;
- wmb->arraylen=arraylen;
- wmb->vid=NULL;
- wmb->data=NULL;
- wmb->frame=NULL;
- wmb->fillvalue=NULL;
- }
-
- MPI_Type_size(iodesc->basetype, &tsize);
- // At this point wmb should be pointing to a new or existing buffer
- // so we can add the data
- // printf("%s %d %X %d %d %d\n",__FILE__,__LINE__,wmb->data,wmb->validvars,arraylen,tsize);
- // cn_buffer_report(*ios, true);
- bfreespace(&totfree, &maxfree);
- if (needsflush == 0)
- needsflush = (maxfree <= 1.1*(1+wmb->validvars)*arraylen*tsize );
- MPI_Allreduce(MPI_IN_PLACE, &needsflush, 1, MPI_INT, MPI_MAX, ios->comp_comm);
-
- if (needsflush > 0 )
- {
- // need to flush first
- // printf("%s %d %ld %d %ld %ld\n",__FILE__,__LINE__,maxfree, wmb->validvars, (1+wmb->validvars)*arraylen*tsize,totfree);
- cn_buffer_report(*ios, true);
-
- flush_buffer(ncid, wmb, needsflush == 2); // if needsflush == 2 flush to disk otherwise just flush to io node
- }
-
- if (arraylen > 0)
- if (!(wmb->data = bgetr(wmb->data, (1+wmb->validvars)*arraylen*tsize)))
- piomemerror(*ios, (1+wmb->validvars)*arraylen*tsize, __FILE__, __LINE__);
-
- if (!(wmb->vid = (int *) bgetr(wmb->vid,sizeof(int)*(1+wmb->validvars))))
- piomemerror(*ios, (1+wmb->validvars)*sizeof(int), __FILE__, __LINE__);
-
- if (vdesc->record >= 0)
- if (!(wmb->frame = (int *)bgetr(wmb->frame, sizeof(int) * (1 + wmb->validvars))))
- piomemerror(*ios, (1+wmb->validvars)*sizeof(int), __FILE__, __LINE__);
-
- if (iodesc->needsfill)
- if (!(wmb->fillvalue = bgetr(wmb->fillvalue,tsize*(1+wmb->validvars))))
- piomemerror(*ios, (1+wmb->validvars)*tsize , __FILE__,__LINE__);
-
- if (iodesc->needsfill)
- {
- if (fillvalue)
- {
- memcpy((char *) wmb->fillvalue+tsize*wmb->validvars,fillvalue, tsize);
- }
- else
- {
- vtype = (MPI_Datatype) iodesc->basetype;
- if (vtype == MPI_INTEGER)
- {
- int fill = PIO_FILL_INT;
- memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else if (vtype == MPI_FLOAT || vtype == MPI_REAL4)
- {
- float fill = PIO_FILL_FLOAT;
- memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else if (vtype == MPI_DOUBLE || vtype == MPI_REAL8)
- {
- double fill = PIO_FILL_DOUBLE;
- memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else if (vtype == MPI_CHARACTER)
- {
- char fill = PIO_FILL_CHAR;
- memcpy((char *) wmb->fillvalue+tsize*wmb->validvars, &fill, tsize);
- }
- else
- {
- fprintf(stderr,"Type not recognized %d in pioc_write_darray\n",vtype);
- }
- }
-
- }
-
- wmb->arraylen = arraylen;
- wmb->vid[wmb->validvars]=vid;
- bufptr = (void *)((char *) wmb->data + arraylen*tsize*wmb->validvars);
- if (arraylen>0)
- memcpy(bufptr, array, arraylen*tsize);
- /*
- if (tsize==8){
- double asum=0.0;
- printf("%s %d %d %d %d\n",__FILE__,__LINE__,vid,arraylen,iodesc->ndof);
-    for (int k=0;k<arraylen;k++){
-        asum += ((double *) array)[k];
-    }
-    printf("%s %d %d %g\n",__FILE__,__LINE__,vid,asum);
-    }
-    */
-    // printf("%s %d %d %d %d %X\n",__FILE__,__LINE__,wmb->validvars,wmb->ioid,vid,bufptr);
-
- if (wmb->frame!=NULL)
- wmb->frame[wmb->validvars]=vdesc->record;
- wmb->validvars++;
-
- // printf("%s %d %d %d %d %d\n",__FILE__,__LINE__,wmb->validvars,iodesc->maxbytes/tsize, iodesc->ndof, iodesc->llen);
- if (wmb->validvars >= iodesc->maxbytes/tsize)
- PIOc_sync(ncid);
-
- return ierr;
-}
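A minimal usage sketch for PIOc_write_darray as documented above. The ncid, variable id and decomposition id are assumed to come from the usual file-definition and PIOc_InitDecomp calls; the fill value is an illustrative assumption.

    #include <pio.h>

    /* Write one record of a distributed double field. local_len is the length
     * of this task's portion of the array. */
    int write_one_field(int ncid, int varid, int ioid,
                        PIO_Offset local_len, double *local_data)
    {
        double fill = -999.0;                       /* illustrative fill value */
        int ierr = PIOc_write_darray(ncid, varid, ioid, local_len, local_data, &fill);
        if (ierr != PIO_NOERR)
            return ierr;
        /* Data may still sit in the compute-side buffer; PIOc_sync forces it out. */
        return PIOc_sync(ncid);
    }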
-
-/** Read an array of data from a file to the (parallel) IO library.
- *
- * @param file a pointer to the open file descriptor for the file
- * that will be written to
- * @param iodesc a pointer to the defined iodescriptor for the buffer
- * @param vid the variable id to be read
- * @param IOBUF the buffer to be read into from this mpi task
- *
- * @return 0 on success, error code otherwise.
- * @ingroup PIO_read_darray
- */
-int pio_read_darray_nc(file_desc_t *file, io_desc_t *iodesc, const int vid,
- void *IOBUF)
-{
- int ierr=PIO_NOERR;
- iosystem_desc_t *ios; /* Pointer to io system information. */
- var_desc_t *vdesc;
- int ndims, fndims;
- MPI_Status status;
- int i;
-
-#ifdef TIMING
- /* Start timing this function. */
- GPTLstart("PIO:read_darray_nc");
-#endif
- ios = file->iosystem;
- if (ios == NULL)
- return PIO_EBADID;
-
- vdesc = (file->varlist)+vid;
-
- if (vdesc == NULL)
- return PIO_EBADID;
-
- ndims = iodesc->ndims;
- ierr = PIOc_inq_varndims(file->fh, vid, &fndims);
-
- if (fndims==ndims)
- vdesc->record=-1;
-
- if (ios->ioproc)
- {
- io_region *region;
- size_t start[fndims];
- size_t count[fndims];
- size_t tmp_start[fndims];
- size_t tmp_count[fndims];
- size_t tmp_bufsize=1;
- int regioncnt;
- void *bufptr;
- int tsize;
-
- int rrlen=0;
- PIO_Offset *startlist[iodesc->maxregions];
- PIO_Offset *countlist[iodesc->maxregions];
-
- // buffer is incremented by byte and loffset is in terms of the iodessc->basetype
- // so we need to multiply by the size of the basetype
- // We can potentially allow for one iodesc to have multiple datatypes by allowing the
- // calling program to change the basetype.
- region = iodesc->firstregion;
- MPI_Type_size(iodesc->basetype, &tsize);
- if (fndims>ndims)
- {
- ndims++;
- if (vdesc->record<0)
- vdesc->record=0;
- }
-        for (regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++)
- {
- // printf("%s %d %d %ld %d %d\n",__FILE__,__LINE__,regioncnt,region,fndims,ndims);
- tmp_bufsize=1;
- if (region==NULL || iodesc->llen==0)
- {
-                for (i=0;i<fndims;i++)
-                {
-                    start[i] = 0;
-                    count[i] = 0;
-                }
-                bufptr = NULL;
-            }
-            else
-            {
-                bufptr = (void *)((char *) IOBUF + tsize*region->loffset);
-
- // printf("%s %d %d %d %d\n",__FILE__,__LINE__,iodesc->llen - region->loffset, iodesc->llen, region->loffset);
-
- if (vdesc->record >= 0 && fndims>1)
- {
- start[0] = vdesc->record;
-                for (i=1;i<ndims;i++)
-                {
-                    start[i] = region->start[i-1];
- count[i] = region->count[i-1];
- // printf("%s %d %d %ld %ld\n",__FILE__,__LINE__,i,start[i],count[i]);
- }
- if (count[1] > 0)
- count[0] = 1;
- }
- else
- {
- // Non-time dependent array
-                for (i=0;i<ndims;i++)
-                {
-                    start[i] = region->start[i];
- count[i] = region->count[i];
- // printf("%s %d %d %ld %ld\n",__FILE__,__LINE__,i,start[i],count[i]);
- }
- }
- }
-
- switch(file->iotype)
- {
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- if (iodesc->basetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
- {
- ierr = nc_get_vara_double (file->fh, vid,start,count, bufptr);
- }
- else if (iodesc->basetype == MPI_INTEGER)
- {
- ierr = nc_get_vara_int (file->fh, vid, start, count, bufptr);
- }
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
- {
- ierr = nc_get_vara_float (file->fh, vid, start, count, bufptr);
- }
- else
- {
- fprintf(stderr,"Type not recognized %d in pioc_read_darray\n",(int) iodesc->basetype);
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- {
- tmp_bufsize=1;
- for (int j = 0; j < fndims; j++)
- tmp_bufsize *= count[j];
-
- if (tmp_bufsize > 0)
- {
- startlist[rrlen] = (PIO_Offset *) bget(fndims * sizeof(PIO_Offset));
- countlist[rrlen] = (PIO_Offset *) bget(fndims * sizeof(PIO_Offset));
-
- for (int j = 0; j < fndims; j++)
- {
- startlist[rrlen][j] = start[j];
- countlist[rrlen][j] = count[j];
- /* printf("%s %d %d %d %d %ld %ld %ld\n",__FILE__,__LINE__,realregioncnt,
- iodesc->maxregions, j,start[j],count[j],tmp_bufsize);*/
- }
- rrlen++;
- }
- if (regioncnt==iodesc->maxregions-1)
- {
- ierr = ncmpi_get_varn_all(file->fh, vid, rrlen, startlist,
- countlist, IOBUF, iodesc->llen, iodesc->basetype);
-                for (i=0;i<rrlen;i++)
-                {
-                    brel(startlist[i]);
-                    brel(countlist[i]);
-                }
-            }
-        }
-        break;
-    default:
-        ierr = iotype_error(file->iotype,__FILE__,__LINE__);
-
- }
- if (region)
- region = region->next;
- } // for (regioncnt=0;...)
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
-#ifdef TIMING
- /* Stop timing this function. */
- GPTLstop("PIO:read_darray_nc");
-#endif
-
- return ierr;
-}
-
-/** Read an array of data from a file to the (serial) IO library.
- *
- * @param file a pointer to the open file descriptor for the file
- * that will be written to
- * @param iodesc a pointer to the defined iodescriptor for the buffer
- * @param vid the variable id to be read.
- * @param IOBUF the buffer to be read into from this mpi task
- *
- * @returns 0 for success, error code otherwise.
- * @ingroup PIO_read_darray
- */
-int pio_read_darray_nc_serial(file_desc_t *file, io_desc_t *iodesc,
- const int vid, void *IOBUF)
-{
- int ierr=PIO_NOERR;
- iosystem_desc_t *ios; /* Pointer to io system information. */
- var_desc_t *vdesc;
- int ndims, fndims;
- MPI_Status status;
- int i;
-
-#ifdef TIMING
- /* Start timing this function. */
- GPTLstart("PIO:read_darray_nc_serial");
-#endif
- ios = file->iosystem;
- if (ios == NULL)
- return PIO_EBADID;
-
- vdesc = (file->varlist)+vid;
-
- if (vdesc == NULL)
- return PIO_EBADID;
-
- ndims = iodesc->ndims;
- ierr = PIOc_inq_varndims(file->fh, vid, &fndims);
-
- if (fndims==ndims)
- vdesc->record=-1;
-
- if (ios->ioproc)
- {
- io_region *region;
- size_t start[fndims];
- size_t count[fndims];
- size_t tmp_start[fndims * iodesc->maxregions];
- size_t tmp_count[fndims * iodesc->maxregions];
- size_t tmp_bufsize;
- int regioncnt;
- void *bufptr;
- int tsize;
-
- int rrlen = 0;
-
- // buffer is incremented by byte and loffset is in terms of the iodessc->basetype
- // so we need to multiply by the size of the basetype
- // We can potentially allow for one iodesc to have multiple datatypes by allowing the
- // calling program to change the basetype.
- region = iodesc->firstregion;
- MPI_Type_size(iodesc->basetype, &tsize);
- if (fndims>ndims)
- {
- if (vdesc->record < 0)
- vdesc->record = 0;
- }
-        for (regioncnt=0;regioncnt<iodesc->maxregions;regioncnt++)
- {
- if (region==NULL || iodesc->llen==0)
- {
- for (i = 0; i < fndims; i++)
- {
- tmp_start[i + regioncnt * fndims] = 0;
- tmp_count[i + regioncnt * fndims] = 0;
- }
- bufptr=NULL;
- }
- else
- {
- if (vdesc->record >= 0 && fndims>1)
- {
- tmp_start[regioncnt*fndims] = vdesc->record;
-                    for (i=1;i<fndims;i++)
-                    {
-                        tmp_start[i+regioncnt*fndims] = region->start[i-1];
- tmp_count[i+regioncnt*fndims] = region->count[i-1];
- }
- if (tmp_count[1 + regioncnt * fndims] > 0)
- tmp_count[regioncnt * fndims] = 1;
- }
- else
- {
- // Non-time dependent array
- for (i = 0; i < fndims; i++)
- {
- tmp_start[i + regioncnt * fndims] = region->start[i];
- tmp_count[i + regioncnt * fndims] = region->count[i];
- }
- }
-                /* for (i=0;i<fndims;i++)
-                       printf("%d %ld %ld\n",i,tmp_start[i+regioncnt*fndims],tmp_count[i+regioncnt*fndims]); */
-                if (region)
-                    region = region->next;
- } // for (regioncnt=0;...)
-
- if (ios->io_rank>0)
- {
- MPI_Send(&(iodesc->llen), 1, MPI_OFFSET, 0, ios->io_rank, ios->io_comm);
- if (iodesc->llen > 0)
- {
- MPI_Send(&(iodesc->maxregions), 1, MPI_INT, 0,
- ios->num_iotasks + ios->io_rank, ios->io_comm);
- MPI_Send(tmp_count, iodesc->maxregions*fndims, MPI_OFFSET, 0,
- 2 * ios->num_iotasks + ios->io_rank, ios->io_comm);
- MPI_Send(tmp_start, iodesc->maxregions*fndims, MPI_OFFSET, 0,
- 3 * ios->num_iotasks + ios->io_rank, ios->io_comm);
- MPI_Recv(IOBUF, iodesc->llen, iodesc->basetype, 0,
- 4 * ios->num_iotasks+ios->io_rank, ios->io_comm, &status);
- }
- }
- else if (ios->io_rank == 0)
- {
- int maxregions=0;
- size_t loffset, regionsize;
- size_t this_start[fndims*iodesc->maxregions];
- size_t this_count[fndims*iodesc->maxregions];
- // for (i=ios->num_iotasks-1; i>=0; i--){
- for (int rtask = 1; rtask <= ios->num_iotasks; rtask++)
- {
-                if (rtask < ios->num_iotasks)
- {
- MPI_Recv(&tmp_bufsize, 1, MPI_OFFSET, rtask, rtask, ios->io_comm, &status);
- if (tmp_bufsize>0)
- {
- MPI_Recv(&maxregions, 1, MPI_INT, rtask, ios->num_iotasks+rtask,
- ios->io_comm, &status);
- MPI_Recv(this_count, maxregions*fndims, MPI_OFFSET, rtask,
- 2 * ios->num_iotasks + rtask, ios->io_comm, &status);
- MPI_Recv(this_start, maxregions*fndims, MPI_OFFSET, rtask,
- 3 * ios->num_iotasks + rtask, ios->io_comm, &status);
- }
- }
- else
- {
- maxregions=iodesc->maxregions;
- tmp_bufsize=iodesc->llen;
- }
- loffset = 0;
- for (regioncnt=0;regioncntnum_iotasks)
- {
- for (int m=0; mbasetype == MPI_DOUBLE || iodesc->basetype == MPI_REAL8)
- {
- ierr = nc_get_vara_double (file->fh, vid,start, count, bufptr);
- }
- else if (iodesc->basetype == MPI_INTEGER)
- {
- ierr = nc_get_vara_int (file->fh, vid, start, count, bufptr);
- }
- else if (iodesc->basetype == MPI_FLOAT || iodesc->basetype == MPI_REAL4)
- {
- ierr = nc_get_vara_float (file->fh, vid, start, count, bufptr);
- }
- else
- {
- fprintf(stderr,"Type not recognized %d in pioc_write_darray_nc_serial\n",
- (int)iodesc->basetype);
- }
-
- if (ierr != PIO_NOERR)
- {
- for (int i = 0; i < fndims; i++)
- fprintf(stderr,"vid %d dim %d start %ld count %ld err %d\n",
- vid, i, start[i], count[i], ierr);
-
- }
-
-#endif
- }
- if (rtask < ios->num_iotasks)
- MPI_Send(IOBUF, tmp_bufsize, iodesc->basetype, rtask,
- 4 * ios->num_iotasks + rtask, ios->io_comm);
- }
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__, __LINE__);
-
-#ifdef TIMING
- /* Stop timing this function. */
- GPTLstop("PIO:read_darray_nc_serial");
-#endif
-
- return ierr;
-}
-
-/** Read a field from a file to the IO library.
- * @ingroup PIO_read_darray
- *
- * @param ncid identifies the netCDF file
- * @param vid the variable ID to be read
- * @param ioid: the I/O description ID as passed back by
- * PIOc_InitDecomp().
- * @param arraylen: the length of the array to be read. This
- * is the length of the distributed array. That is, the length of
- * the portion of the data that is on the processor.
- * @param array: pointer to the data to be read. This is a
- * pointer to the distributed portion of the array that is on this
- * processor.
- *
- * @return 0 for success, error code otherwise.
- * @ingroup PIO_read_darray
- */
-int PIOc_read_darray(const int ncid, const int vid, const int ioid,
- const PIO_Offset arraylen, void *array)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file;
- io_desc_t *iodesc;
- void *iobuf=NULL;
- size_t rlen=0;
- int ierr, tsize;
- MPI_Datatype vtype;
-
- file = pio_get_file_from_id(ncid);
-
- if (file == NULL)
- {
- fprintf(stderr,"File handle not found %d %d\n",ncid,__LINE__);
- return PIO_EBADID;
- }
- iodesc = pio_get_iodesc_from_id(ioid);
- if (iodesc == NULL)
- {
- fprintf(stderr,"iodesc handle not found %d %d\n",ioid,__LINE__);
- return PIO_EBADID;
- }
- ios = file->iosystem;
- if (ios->iomaster)
- {
- rlen = iodesc->maxiobuflen;
- }
- else
- {
- rlen = iodesc->llen;
- }
-
- if (iodesc->rearranger > 0)
- {
- if (ios->ioproc && rlen>0)
- {
- MPI_Type_size(iodesc->basetype, &tsize);
- iobuf = bget(((size_t) tsize)*rlen);
- if (iobuf==NULL)
- {
- piomemerror(*ios,rlen*((size_t) tsize), __FILE__,__LINE__);
- }
- }
- }
- else
- {
- iobuf = array;
- }
-
- switch(file->iotype)
- {
- case PIO_IOTYPE_NETCDF:
- case PIO_IOTYPE_NETCDF4C:
- ierr = pio_read_darray_nc_serial(file, iodesc, vid, iobuf);
- break;
- case PIO_IOTYPE_PNETCDF:
- case PIO_IOTYPE_NETCDF4P:
- ierr = pio_read_darray_nc(file, iodesc, vid, iobuf);
- break;
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- if (iodesc->rearranger > 0)
- {
- ierr = rearrange_io2comp(*ios, iodesc, iobuf, array);
-
- if (rlen>0)
- brel(iobuf);
- }
-
- return ierr;
-
-}
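A matching read sketch for PIOc_read_darray as documented above; the file is assumed to be open and the decomposition to describe the same layout that was used when the data was written.

    #include <stdlib.h>
    #include <pio.h>

    /* Read this task's portion of a distributed double field into a new buffer. */
    int read_one_field(int ncid, int varid, int ioid, PIO_Offset local_len,
                       double **out_data)
    {
        double *buf = malloc(local_len * sizeof(double));
        if (!buf)
            return PIO_ENOMEM;
        int ierr = PIOc_read_darray(ncid, varid, ioid, local_len, buf);
        if (ierr != PIO_NOERR)
        {
            free(buf);
            return ierr;
        }
        *out_data = buf;           /* caller frees */
        return PIO_NOERR;
    }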
-
-/** Flush the output buffer. This is only relevant for files opened
- * with pnetcdf.
- *
- * @param file a pointer to the open file descriptor for the file
- * that will be written to
- * @param force true to force the flushing of the buffer
- * @param addsize additional size to add to buffer (in bytes)
- *
- * @return 0 for success, error code otherwise.
- * @private
- * @ingroup PIO_write_darray
- */
-int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize)
-{
- int ierr = PIO_NOERR;
-
-#ifdef _PNETCDF
- var_desc_t *vdesc;
- int *status;
- PIO_Offset usage = 0;
-
-#ifdef TIMING
- /* Start timing this function. */
- GPTLstart("PIO:flush_output_buffer");
-#endif
-
- pioassert(file != NULL, "file pointer not defined", __FILE__,
- __LINE__);
-
- /* Find out the buffer usage. */
- ierr = ncmpi_inq_buffer_usage(file->fh, &usage);
-
- /* If we are not forcing a flush, spread the usage to all IO
- * tasks. */
- if (!force && file->iosystem->io_comm != MPI_COMM_NULL)
- {
- usage += addsize;
- MPI_Allreduce(MPI_IN_PLACE, &usage, 1, MPI_OFFSET, MPI_MAX,
- file->iosystem->io_comm);
- }
-
- /* Keep track of the maximum usage. */
- if (usage > maxusage)
- maxusage = usage;
-
- /* If the user forces it, or the buffer has exceeded the size
- * limit, then flush to disk. */
- if (force || usage >= PIO_BUFFER_SIZE_LIMIT)
- {
- int rcnt;
- bool prev_dist=false;
- int prev_record=-1;
- int prev_type=0;
- int maxreq;
- int reqcnt;
- maxreq = 0;
- reqcnt=0;
- rcnt=0;
- for (int i = 0; i < PIO_MAX_VARS; i++)
- {
- vdesc = file->varlist + i;
- reqcnt += vdesc->nreqs;
- if (vdesc->nreqs > 0)
- maxreq = i;
- }
- int request[reqcnt];
- int status[reqcnt];
-
- for (int i = 0; i <= maxreq; i++)
- {
- vdesc = file->varlist + i;
-#ifdef MPIO_ONESIDED
- /*onesided optimization requires that all of the requests in a wait_all call represent
- a contiguous block of data in the file */
- if (rcnt>0 && (prev_record != vdesc->record || vdesc->nreqs==0))
- {
- ierr = ncmpi_wait_all(file->fh, rcnt, request,status);
- rcnt=0;
- }
- prev_record = vdesc->record;
-#endif
- // printf("%s %d %d %d %d \n",__FILE__,__LINE__,i,vdesc->nreqs,vdesc->request);
-            for (reqcnt=0;reqcnt<vdesc->nreqs;reqcnt++)
- {
- request[rcnt++] = max(vdesc->request[reqcnt],NC_REQ_NULL);
- }
- free(vdesc->request);
- vdesc->request = NULL;
- vdesc->nreqs = 0;
- // if (file->iosystem->io_rank < 2) printf("%s %d varid=%d\n",__FILE__,__LINE__,i);
-#ifdef FLUSH_EVERY_VAR
- ierr = ncmpi_wait_all(file->fh, rcnt, request, status);
- rcnt = 0;
-#endif
- }
- // if (file->iosystem->io_rank==0){
- // printf("%s %d %d\n",__FILE__,__LINE__,rcnt);
- // }
- if (rcnt > 0)
- {
- /*
- if (file->iosystem->io_rank==0){
- printf("%s %d %d ",__FILE__,__LINE__,rcnt);
-            for (int i=0; i<rcnt; i++)
-                printf("%d ", request[i]);
-            printf("\n");
-        }
-        */
-        ierr = ncmpi_wait_all(file->fh, rcnt, request, status);
- }
- for (int i = 0; i < PIO_MAX_VARS; i++)
- {
- vdesc = file->varlist + i;
- if (vdesc->iobuf)
- {
- brel(vdesc->iobuf);
- vdesc->iobuf=NULL;
- }
- if (vdesc->fillbuf)
- {
- brel(vdesc->fillbuf);
- vdesc->fillbuf=NULL;
- }
- }
-
- }
-
-#ifdef TIMING
- /* Stop timing this function. */
- GPTLstop("PIO:flush_output_buffer");
-#endif
-
-#endif /* _PNETCDF */
- return ierr;
-}
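The flush above drains the nonblocking pnetcdf requests that the write path queued with ncmpi_bput_varn/ncmpi_iput_varn. A stripped-down sketch of that queue-then-wait pattern against the pnetcdf API; the file name, dimension size and attach-buffer size are illustrative.

    #include <mpi.h>
    #include <pnetcdf.h>

    /* Queue a buffered nonblocking write, then flush it with ncmpi_wait_all,
     * mirroring what flush_output_buffer does for PIO's queued requests. */
    int bput_and_flush(MPI_Comm comm, const char *path)
    {
        int ncid, dimid, varid, req, status, err, rank, nprocs;
        MPI_Offset start[1], count[1];
        double data[10];

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &nprocs);
        for (int i = 0; i < 10; i++)
            data[i] = (double)(rank * 10 + i);

        err = ncmpi_create(comm, path, NC_CLOBBER, MPI_INFO_NULL, &ncid);
        if (err != NC_NOERR) return err;
        ncmpi_def_dim(ncid, "x", (MPI_Offset)10 * nprocs, &dimid);
        ncmpi_def_var(ncid, "v", NC_DOUBLE, 1, &dimid, &varid);
        ncmpi_enddef(ncid);

        ncmpi_buffer_attach(ncid, 10 * sizeof(double));   /* room for the bput data */
        start[0] = (MPI_Offset)10 * rank;                 /* each rank owns 10 values */
        count[0] = 10;
        ncmpi_bput_vara_double(ncid, varid, start, count, data, &req);

        err = ncmpi_wait_all(ncid, 1, &req, &status);     /* the actual flush */
        ncmpi_buffer_detach(ncid);
        ncmpi_close(ncid);
        return err;
    }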
-
-/** Print out info about the buffer for debug purposes.
- *
- * @param ios the IO system structure
- * @param collective true if collective report is desired
- *
- * @private
- * @ingroup PIO_write_darray
- */
-void cn_buffer_report(iosystem_desc_t ios, bool collective)
-{
-
- if (CN_bpool)
- {
- long bget_stats[5];
- long bget_mins[5];
- long bget_maxs[5];
-
- bstats(bget_stats, bget_stats+1,bget_stats+2,bget_stats+3,bget_stats+4);
- if (collective)
- {
- MPI_Reduce(bget_stats, bget_maxs, 5, MPI_LONG, MPI_MAX, 0, ios.comp_comm);
- MPI_Reduce(bget_stats, bget_mins, 5, MPI_LONG, MPI_MIN, 0, ios.comp_comm);
- if (ios.compmaster)
- {
- printf("PIO: Currently allocated buffer space %ld %ld\n",
- bget_mins[0], bget_maxs[0]);
- printf("PIO: Currently available buffer space %ld %ld\n",
- bget_mins[1], bget_maxs[1]);
- printf("PIO: Current largest free block %ld %ld\n",
- bget_mins[2], bget_maxs[2]);
- printf("PIO: Number of successful bget calls %ld %ld\n",
- bget_mins[3], bget_maxs[3]);
- printf("PIO: Number of successful brel calls %ld %ld\n",
- bget_mins[4], bget_maxs[4]);
- // print_trace(stdout);
- }
- }
- else
- {
- printf("%d: PIO: Currently allocated buffer space %ld \n",
- ios.union_rank, bget_stats[0]) ;
- printf("%d: PIO: Currently available buffer space %ld \n",
- ios.union_rank, bget_stats[1]);
- printf("%d: PIO: Current largest free block %ld \n",
- ios.union_rank, bget_stats[2]);
- printf("%d: PIO: Number of successful bget calls %ld \n",
- ios.union_rank, bget_stats[3]);
- printf("%d: PIO: Number of successful brel calls %ld \n",
- ios.union_rank, bget_stats[4]);
- }
- }
-}
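The collective branch above reduces the five bget counters to their global minima and maxima so that only one task prints a single summary line per statistic. The same pattern in isolation; the stand-in statistic values and the use of MPI_COMM_WORLD are illustrative.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        long stats[5], mins[5], maxs[5];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < 5; i++)
            stats[i] = (long)(rank + 1) * (i + 1);   /* stand-in for bstats() output */

        /* Everyone contributes; only rank 0 receives the reduced values. */
        MPI_Reduce(stats, maxs, 5, MPI_LONG, MPI_MAX, 0, MPI_COMM_WORLD);
        MPI_Reduce(stats, mins, 5, MPI_LONG, MPI_MIN, 0, MPI_COMM_WORLD);

        if (rank == 0)
            for (int i = 0; i < 5; i++)
                printf("stat %d min %ld max %ld\n", i, mins[i], maxs[i]);

        MPI_Finalize();
        return 0;
    }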
-
-/** Free the buffer pool. If malloc is used (that is, PIO_USE_MALLOC is
- * non zero), this function does nothing.
- *
- * @param ios the IO system structure
- *
- * @private
- * @ingroup PIO_write_darray
- */
-void free_cn_buffer_pool(iosystem_desc_t ios)
-{
-#if !PIO_USE_MALLOC
- if (CN_bpool)
- {
- cn_buffer_report(ios, true);
- bpoolrelease(CN_bpool);
- // free(CN_bpool);
- CN_bpool = NULL;
- }
-#endif /* !PIO_USE_MALLOC */
-}
-
-/** Flush the buffer.
- *
- * @param ncid identifies the netCDF file
- * @param wmb
- * @param flushtodisk
- *
- * @private
- * @ingroup PIO_write_darray
- */
-void flush_buffer(int ncid, wmulti_buffer *wmb, bool flushtodisk)
-{
- if (wmb->validvars > 0)
- {
- PIOc_write_darray_multi(ncid, wmb->vid, wmb->ioid, wmb->validvars,
- wmb->arraylen, wmb->data, wmb->frame,
- wmb->fillvalue, flushtodisk);
- wmb->validvars = 0;
- brel(wmb->vid);
- wmb->vid = NULL;
- brel(wmb->data);
- wmb->data = NULL;
- if (wmb->fillvalue)
- brel(wmb->fillvalue);
- if (wmb->frame)
- brel(wmb->frame);
- wmb->fillvalue = NULL;
- wmb->frame = NULL;
- }
-}
-
-/** Compute the maximum aggregate number of bytes.
- *
- * @param ios the IO system structure
- * @param iodesc a pointer to the defined iodescriptor for the buffer
- *
- * @private
- * @ingroup PIO_write_darray
- */
-void compute_maxaggregate_bytes(const iosystem_desc_t ios, io_desc_t *iodesc)
-{
- int maxbytesoniotask = INT_MAX;
- int maxbytesoncomputetask = INT_MAX;
- int maxbytes;
-
- // printf("%s %d %d %d\n",__FILE__,__LINE__,iodesc->maxiobuflen, iodesc->ndof);
-
- if (ios.ioproc && iodesc->maxiobuflen > 0)
- maxbytesoniotask = PIO_BUFFER_SIZE_LIMIT / iodesc->maxiobuflen;
-
- if (ios.comp_rank >= 0 && iodesc->ndof > 0)
- maxbytesoncomputetask = PIO_CNBUFFER_LIMIT / iodesc->ndof;
-
- maxbytes = min(maxbytesoniotask, maxbytesoncomputetask);
-
- // printf("%s %d %d %d\n",__FILE__,__LINE__,maxbytesoniotask, maxbytesoncomputetask);
-
- MPI_Allreduce(MPI_IN_PLACE, &maxbytes, 1, MPI_INT, MPI_MIN, ios.union_comm);
- iodesc->maxbytes = maxbytes;
- // printf("%s %d %d %d\n",__FILE__,__LINE__,iodesc->maxbytes,iodesc->maxiobuflen);
-
-}
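The routine above caps aggregation so that neither the IO-side buffer (PIO_BUFFER_SIZE_LIMIT divided by maxiobuflen) nor the compute-side pool (PIO_CNBUFFER_LIMIT divided by ndof) can overflow, then takes the global minimum over all tasks. A small standalone illustration of that arithmetic; the buffer limits are the defaults quoted above, while maxiobuflen and ndof are made-up sizes, and the final MPI_MIN reduction is only noted in a comment.

    #include <stdio.h>
    #include <limits.h>

    static int int_min(int a, int b) { return a < b ? a : b; }

    int main(void)
    {
        long buffer_size_limit = 10485760;   /* 10 MB IO-side limit (default above) */
        long cnbuffer_limit    = 33554432;   /* 32 MB compute-side pool (default above) */
        long maxiobuflen = 250000;           /* illustrative: largest IO buffer length, in elements */
        long ndof        = 40000;            /* illustrative: local degrees of freedom */

        int maxbytes_io   = maxiobuflen > 0 ? (int)(buffer_size_limit / maxiobuflen) : INT_MAX;
        int maxbytes_comp = ndof > 0        ? (int)(cnbuffer_limit / ndof)           : INT_MAX;

        /* PIO then reduces this local minimum with MPI_MIN across union_comm. */
        printf("bytes that can be aggregated per decomposition element: %d\n",
               int_min(maxbytes_io, maxbytes_comp));
        return 0;
    }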
diff --git a/cime/externals/pio2/src/clib/pio_file.c b/cime/externals/pio2/src/clib/pio_file.c
index 5003288caa0f..83bf9835a298 100644
--- a/cime/externals/pio2/src/clib/pio_file.c
+++ b/cime/externals/pio2/src/clib/pio_file.c
@@ -1,592 +1,461 @@
-#include <config.h>
 #include <pio.h>
 #include <pio_internal.h>
-
-/** Open an existing file using PIO library.
- *
- * Input parameters are read on comp task 0 and ignored elsewhere.
- *
- * @param iosysid : A defined pio system descriptor (input)
- * @param ncidp : A pio file descriptor (output)
- * @param iotype : A pio output format (input)
- * @param filename : The filename to open
- * @param mode : The netcdf mode for the open operation
- *
- * @return 0 for success, error code otherwise.
- * @ingroup PIO_openfile
+/**
+ ** @public
+ ** @ingroup PIO_openfile
+ ** @brief open an existing file using pio
+ ** @details Input parameters are read on comp task 0 and ignored elsewhere.
+ ** @param iosysid : A defined pio system descriptor (input)
+ ** @param ncidp : A pio file descriptor (output)
+ ** @param iotype : A pio output format (input)
+ ** @param filename : The filename to open
+ ** @param mode : The netcdf mode for the open operation
*/
+
int PIOc_openfile(const int iosysid, int *ncidp, int *iotype,
- const char *filename, const int mode)
+ const char filename[], const int mode)
{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- LOG((1, "PIOc_openfile iosysid = %d", iosysid));
-
- /* User must provide valid input for these parameters. */
- if (!ncidp || !iotype || !filename)
- return PIO_EINVAL;
- if (*iotype < PIO_IOTYPE_PNETCDF || *iotype > PIO_IOTYPE_NETCDF4P)
- return PIO_ENOMEM;
-
- /* Get the IO system info from the iosysid. */
- if (!(ios = pio_get_iosystem_from_id(iosysid)))
- {
- LOG((0, "PIOc_openfile got bad iosysid %d",iosysid));
- return PIO_EBADID;
- }
-
- /* Allocate space for the file info. */
- if (!(file = (file_desc_t *) malloc(sizeof(*file))))
- return PIO_ENOMEM;
-
- /* Fill in some file values. */
- file->iotype = *iotype;
- file->next = NULL;
- file->iosystem = ios;
- file->mode = mode;
- for (int i = 0; i < PIO_MAX_VARS; i++)
- {
- file->varlist[i].record = -1;
- file->varlist[i].ndims = -1;
+ int ierr;
+ int msg;
+ int mpierr;
+ size_t len;
+ iosystem_desc_t *ios;
+ file_desc_t *file;
+
+ ierr = PIO_NOERR;
+
+ msg = PIO_MSG_OPEN_FILE;
+ ios = pio_get_iosystem_from_id(iosysid);
+ if(ios==NULL){
+ printf("bad iosysid %d\n",iosysid);
+ return PIO_EBADID;
+ }
+
+ file = (file_desc_t *) malloc(sizeof(*file));
+ if(file==NULL){
+ return PIO_ENOMEM;
+ }
+ file->iotype = *iotype;
+ file->next = NULL;
+ file->iosystem = ios;
+ file->mode = mode;
+  for(int i=0; i<PIO_MAX_VARS; i++){
+    file->varlist[i].record = -1;
+    file->varlist[i].ndims = -1;
#ifdef _PNETCDF
- file->varlist[i].request = NULL;
- file->varlist[i].nreqs=0;
+ file->varlist[i].request = NULL;
+ file->varlist[i].nreqs=0;
#endif
- file->varlist[i].fillbuf = NULL;
- file->varlist[i].iobuf = NULL;
- }
-
- file->buffer.validvars = 0;
- file->buffer.vid = NULL;
- file->buffer.data = NULL;
- file->buffer.next = NULL;
- file->buffer.frame = NULL;
- file->buffer.fillvalue = NULL;
-
- /* Set to true if this task should participate in IO (only true for
- * one task with netcdf serial files. */
- if (file->iotype == PIO_IOTYPE_NETCDF4P || file->iotype == PIO_IOTYPE_PNETCDF ||
- ios->io_rank == 0)
- file->do_io = 1;
- else
- file->do_io = 0;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- int msg = PIO_MSG_OPEN_FILE;
- size_t len = strlen(filename);
-
- if (!ios->ioproc)
- {
- /* Send the message to the message handler. */
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- /* Send the parameters of the function call. */
- if (!mpierr)
- mpierr = MPI_Bcast(&len, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)filename, len + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&file->iotype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&file->mode, 1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
- switch (file->iotype)
- {
+ file->varlist[i].fillbuf = NULL;
+ file->varlist[i].iobuf = NULL;
+ }
+
+ file->buffer.validvars=0;
+ file->buffer.vid=NULL;
+ file->buffer.data=NULL;
+ file->buffer.next=NULL;
+ file->buffer.frame=NULL;
+ file->buffer.fillvalue=NULL;
+
+ if(ios->async_interface && ! ios->ioproc){
+ if(ios->comp_rank==0)
+ mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
+ len = strlen(filename);
+ mpierr = MPI_Bcast((void *) filename,len, MPI_CHAR, ios->compmaster, ios->intercomm);
+ mpierr = MPI_Bcast(&(file->iotype), 1, MPI_INT, ios->compmaster, ios->intercomm);
+ mpierr = MPI_Bcast(&(file->mode), 1, MPI_INT, ios->compmaster, ios->intercomm);
+ }
+
+ if(ios->ioproc){
+
+ switch(file->iotype){
#ifdef _NETCDF
#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
+ case PIO_IOTYPE_NETCDF4P:
#ifdef _MPISERIAL
- ierr = nc_open(filename, file->mode, &(file->fh));
+ ierr = nc_open(filename, file->mode, &(file->fh));
#else
- file->mode = file->mode | NC_MPIIO;
- ierr = nc_open_par(filename, file->mode, ios->io_comm, ios->info, &file->fh);
+ file->mode = file->mode | NC_MPIIO;
+ ierr = nc_open_par(filename, file->mode, ios->io_comm,ios->info, &(file->fh));
#endif
- break;
+ break;
- case PIO_IOTYPE_NETCDF4C:
- file->mode = file->mode | NC_NETCDF4;
- // *** Note the INTENTIONAL FALLTHROUGH ***
+ case PIO_IOTYPE_NETCDF4C:
+ file->mode = file->mode | NC_NETCDF4;
+ // *** Note the INTENTIONAL FALLTHROUGH ***
#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_open(filename, file->mode, &file->fh);
- }
- break;
+ case PIO_IOTYPE_NETCDF:
+ if(ios->io_rank==0){
+ ierr = nc_open(filename, file->mode, &(file->fh));
+ }
+ break;
#endif
#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- ierr = ncmpi_open(ios->io_comm, filename, file->mode, ios->info, &file->fh);
-
- // This should only be done with a file opened to append
- if (ierr == PIO_NOERR && (file->mode & PIO_WRITE))
- {
- if(ios->iomaster)
- LOG((1, "%d Setting IO buffer %ld", __LINE__, PIO_BUFFER_SIZE_LIMIT));
- ierr = ncmpi_buffer_attach(file->fh, PIO_BUFFER_SIZE_LIMIT);
- }
- break;
+ case PIO_IOTYPE_PNETCDF:
+ ierr = ncmpi_open(ios->io_comm, filename, file->mode, ios->info, &(file->fh));
+
+ // This should only be done with a file opened to append
+ if(ierr == PIO_NOERR && (file->mode & PIO_WRITE)){
+ if(ios->iomaster) printf("%d Setting IO buffer %ld\n",__LINE__,PIO_BUFFER_SIZE_LIMIT);
+ ierr = ncmpi_buffer_attach(file->fh, PIO_BUFFER_SIZE_LIMIT );
+ }
+ break;
#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- break;
- }
+ default:
+ ierr = iotype_error(file->iotype,__FILE__,__LINE__);
+ break;
+ }
- // If we failed to open a file due to an incompatible type of
- // NetCDF, try it once with just plain old basic NetCDF.
+ // If we failed to open a file due to an incompatible type of NetCDF, try it
+ // once with just plain old basic NetCDF
#ifdef _NETCDF
- if((ierr == NC_ENOTNC || ierr == NC_EINVAL) && (file->iotype != PIO_IOTYPE_NETCDF)) {
- if(ios->iomaster) printf("PIO2 pio_file.c retry NETCDF\n");
- // reset ierr on all tasks
- ierr = PIO_NOERR;
- // reset file markers for NETCDF on all tasks
- file->iotype = PIO_IOTYPE_NETCDF;
-
- // open netcdf file serially on main task
- if(ios->io_rank==0){
- ierr = nc_open(filename, file->mode, &(file->fh)); }
+ if((ierr == NC_ENOTNC || ierr == NC_EINVAL) && (file->iotype != PIO_IOTYPE_NETCDF)) {
+ if(ios->iomaster) printf("PIO2 pio_file.c retry NETCDF\n");
+ // reset ierr on all tasks
+ ierr = PIO_NOERR;
+ // reset file markers for NETCDF on all tasks
+ file->iotype = PIO_IOTYPE_NETCDF;
- }
-#endif
- }
+ // open netcdf file serially on main task
+ if(ios->io_rank==0){
+ ierr = nc_open(filename, file->mode, &(file->fh)); }
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- {
- if ((mpierr = MPI_Bcast(&file->mode, 1, MPI_INT, ios->ioroot, ios->union_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- if ((mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->ioroot, ios->union_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- *ncidp = file->fh;
- pio_add_to_file_list(file);
}
-
- if (ios->io_rank == 0)
- LOG((1, "Open file %s %d", filename, file->fh));
-
- return ierr;
+#endif
+ }
+
+ ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
+
+ if(ierr==PIO_NOERR){
+ mpierr = MPI_Bcast(&(file->mode), 1, MPI_INT, ios->ioroot, ios->union_comm);
+ pio_add_to_file_list(file);
+ *ncidp = file->fh;
+ }
+ if(ios->io_rank==0){
+ printf("Open file %s %d\n",filename,file->fh); //,file->fh,file->id,ios->io_rank,ierr);
+// if(file->fh==5) print_trace(stdout);
+ }
+ return ierr;
}
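A minimal calling sketch for PIOc_openfile as declared above; the header name, the source of iosysid, and the read-only mode value are assumptions for illustration rather than part of this patch:

    /* Sketch: open an existing file read-only on a previously initialized IO system. */
    #include <pio.h>                     /* assumed public PIO header */
    int ncid;
    int iotype = PIO_IOTYPE_NETCDF;      /* iotype constant used in this file */
    int mode = 0;                        /* read-only; 0 corresponds to NC_NOWRITE (assumed) */
    /* iosysid would come from the PIO initialization call earlier in the program (not shown). */
    int ret = PIOc_openfile(iosysid, &ncid, &iotype, "history.nc", mode);
    if (ret != PIO_NOERR)
        /* handle the error collectively */;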
-/** Open a new file using pio. Input parameters are read on comp task
- * 0 and ignored elsewhere.
- *
- * @public
- * @ingroup PIO_createfile
- *
- * @param iosysid : A defined pio system descriptor (input)
- * @param ncidp : A pio file descriptor (output)
- * @param iotype : A pio output format (input)
- * @param filename : The filename to open
- * @param mode : The netcdf mode for the open operation
+/**
+ ** @public
+ ** @ingroup PIO_createfile
+ ** @brief open a new file using pio
+ ** @details Input parameters are read on comp task 0 and ignored elsewhere.
+ ** @param iosysid : A defined pio system descriptor (input)
+ ** @param ncidp : A pio file descriptor (output)
+ ** @param iotype : A pio output format (input)
+ ** @param filename : The filename to open
+ ** @param mode : The netcdf mode for the open operation
*/
-int PIOc_createfile(const int iosysid, int *ncidp, int *iotype,
- const char filename[], const int mode)
+int PIOc_createfile(const int iosysid, int *ncidp, int *iotype,
+ const char filename[], const int mode)
{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* User must provide valid input for these parameters. */
- if (!ncidp || !iotype || !filename || strlen(filename) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- /* Get the IO system info from the iosysid. */
- if (!(ios = pio_get_iosystem_from_id(iosysid)))
- return PIO_EBADID;
-
- /* Allocate space for the file info. */
- if (!(file = (file_desc_t *)malloc(sizeof(file_desc_t))))
- return PIO_ENOMEM;
-
- /* Fill in some file values. */
- file->next = NULL;
- file->iosystem = ios;
- file->iotype = *iotype;
-
- file->buffer.validvars = 0;
- file->buffer.data = NULL;
- file->buffer.next = NULL;
- file->buffer.vid = NULL;
- file->buffer.ioid = -1;
- file->buffer.frame = NULL;
- file->buffer.fillvalue = NULL;
-
- for(int i = 0; i < PIO_MAX_VARS; i++)
- {
- file->varlist[i].record = -1;
- file->varlist[i].ndims = -1;
+ int ierr;
+ int msg;
+ int mpierr;
+
+ size_t len;
+ iosystem_desc_t *ios;
+ file_desc_t *file;
+
+
+ ierr = PIO_NOERR;
+
+ ios = pio_get_iosystem_from_id(iosysid);
+ file = (file_desc_t *) malloc(sizeof(file_desc_t));
+ file->next = NULL;
+ file->iosystem = ios;
+ file->iotype = *iotype;
+
+ file->buffer.validvars=0;
+ file->buffer.data=NULL;
+ file->buffer.next=NULL;
+ file->buffer.vid=NULL;
+ file->buffer.ioid=-1;
+ file->buffer.frame=NULL;
+ file->buffer.fillvalue=NULL;
+
+  for(int i=0; i<PIO_MAX_VARS; i++){
+    file->varlist[i].record = -1;
+    file->varlist[i].ndims = -1;
#ifdef _PNETCDF
- file->varlist[i].request = NULL;
- file->varlist[i].nreqs=0;
+ file->varlist[i].request = NULL;
+ file->varlist[i].nreqs=0;
#endif
- file->varlist[i].fillbuf = NULL;
- file->varlist[i].iobuf = NULL;
- }
-
- file->mode = mode;
-
- /* Set to true if this task should participate in IO (only true for
- * one task with netcdf serial files. */
- if (file->iotype == PIO_IOTYPE_NETCDF4P || file->iotype == PIO_IOTYPE_PNETCDF ||
- ios->io_rank == 0)
- file->do_io = 1;
- else
- file->do_io = 0;
-
- /* If async is in use, and this is not an IO task, bcast the
- * parameters. */
- if (ios->async_interface)
- {
- int msg = PIO_MSG_CREATE_FILE;
- size_t len = strlen(filename);
-
- if (!ios->ioproc)
- {
- /* Send the message to the message handler. */
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- /* Send the parameters of the function call. */
- if (!mpierr)
- mpierr = MPI_Bcast(&len, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)filename, len + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&file->iotype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&file->mode, 1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
+ file->varlist[i].fillbuf = NULL;
+ file->varlist[i].iobuf = NULL;
+ }
+
+ msg = PIO_MSG_CREATE_FILE;
+ file->mode = mode;
+
+
+ if(ios->async_interface && ! ios->ioproc){
+ if(ios->comp_rank==0)
+ mpierr = MPI_Send( &msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
+ len = strlen(filename);
+ mpierr = MPI_Bcast((void *) filename,len, MPI_CHAR, ios->compmaster, ios->intercomm);
+ mpierr = MPI_Bcast(&(file->iotype), 1, MPI_INT, ios->compmaster, ios->intercomm);
+ mpierr = MPI_Bcast(&file->mode, 1, MPI_INT, ios->compmaster, ios->intercomm);
+ }
- if (ios->ioproc)
- {
- switch (file->iotype)
- {
+
+ if(ios->ioproc){
+ switch(file->iotype){
#ifdef _NETCDF
#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- // The 64 bit options are not compatable with hdf5 format files
- // printf("%d %d %d %d %d \n",__LINE__,file->mode,PIO_64BIT_DATA, PIO_64BIT_OFFSET, NC_MPIIO);
- file->mode = file->mode | NC_MPIIO | NC_NETCDF4;
- //printf("%s %d %d %d\n",__FILE__,__LINE__,file->mode, NC_MPIIO| NC_NETCDF4);
- ierr = nc_create_par(filename, file->mode, ios->io_comm,ios->info , &(file->fh));
- break;
- case PIO_IOTYPE_NETCDF4C:
- file->mode = file->mode | NC_NETCDF4;
+ case PIO_IOTYPE_NETCDF4P:
+ // The 64 bit options are not compatable with hdf5 format files
+ // printf("%d %d %d %d %d \n",__LINE__,file->mode,PIO_64BIT_DATA, PIO_64BIT_OFFSET, NC_MPIIO);
+ file->mode = file->mode | NC_MPIIO | NC_NETCDF4;
+ //printf("%s %d %d %d\n",__FILE__,__LINE__,file->mode, NC_MPIIO| NC_NETCDF4);
+ ierr = nc_create_par(filename, file->mode, ios->io_comm,ios->info , &(file->fh));
+ break;
+ case PIO_IOTYPE_NETCDF4C:
+ file->mode = file->mode | NC_NETCDF4;
#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_create(filename, file->mode, &(file->fh));
- }
- break;
+ case PIO_IOTYPE_NETCDF:
+ if(ios->io_rank==0){
+ ierr = nc_create(filename, file->mode, &(file->fh));
+ }
+ break;
#endif
#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- ierr = ncmpi_create(ios->io_comm, filename, file->mode, ios->info, &(file->fh));
- if(ierr == PIO_NOERR){
- if(ios->io_rank==0){
- printf("%d Setting IO buffer size on all iotasks to %ld\n",ios->io_rank,PIO_BUFFER_SIZE_LIMIT);
- }
- int oldfill;
- ierr = ncmpi_buffer_attach(file->fh, PIO_BUFFER_SIZE_LIMIT );
- // ierr = ncmpi_set_fill(file->fh, NC_FILL, &oldfill);
- }
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
+ case PIO_IOTYPE_PNETCDF:
+ ierr = ncmpi_create(ios->io_comm, filename, file->mode, ios->info, &(file->fh));
+ if(ierr == PIO_NOERR){
+ if(ios->io_rank==0){
+ printf("%d Setting IO buffer size on all iotasks to %ld\n",ios->io_rank,PIO_BUFFER_SIZE_LIMIT);
}
+ int oldfill;
+ ierr = ncmpi_buffer_attach(file->fh, PIO_BUFFER_SIZE_LIMIT );
+ // ierr = ncmpi_set_fill(file->fh, NC_FILL, &oldfill);
+ }
+ break;
+#endif
+ default:
+ ierr = iotype_error(file->iotype,__FILE__,__LINE__);
}
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- {
- if ((mpierr = MPI_Bcast(&file->mode, 1, MPI_INT, ios->ioroot, ios->union_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- file->mode = file->mode | PIO_WRITE; // This flag is implied by netcdf create functions but we need to know if its set
-
- if ((mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->ioroot, ios->union_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- *ncidp = file->fh;
- pio_add_to_file_list(file);
- }
-
- if (ios->io_rank == 0)
- LOG((1, "Create file %s %d", filename, file->fh));
-
- return ierr;
+ }
+
+ ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
+
+ if(ierr == PIO_NOERR){
+ mpierr = MPI_Bcast(&(file->mode), 1, MPI_INT, ios->ioroot, ios->union_comm);
+ file->mode = file->mode | PIO_WRITE; // This flag is implied by netcdf create functions but we need to know if its set
+ pio_add_to_file_list(file);
+ *ncidp = file->fh;
+ }
+ if(ios->io_rank==0){
+ printf("Create file %s %d\n",filename,file->fh); //,file->fh,file->id,ios->io_rank,ierr);
+// if(file->fh==5) print_trace(stdout);
+ }
+ return ierr;
}
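A corresponding create-side sketch; the filename and the mode of 0 are hypothetical:

    /* Sketch: create a new pnetcdf-format file for writing. */
    int ncid;
    int iotype = PIO_IOTYPE_PNETCDF;     /* constant appears in this file */
    int ret = PIOc_createfile(iosysid, &ncid, &iotype, "out.nc", 0);
    /* On success PIOc_createfile ORs PIO_WRITE into file->mode, so the file is writable. */
    if (ret != PIO_NOERR)
        /* handle the error */;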
-/** Close a file previously opened with PIO.
- * @ingroup PIO_closefile
- *
- * @param ncid: the file pointer
+/**
+ ** @ingroup PIO_closefile
+ ** @brief close a file previously opened with PIO
+ ** @param ncid: the file pointer
*/
int PIOc_closefile(int ncid)
{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* Sync changes before closing. */
- if (file->mode & PIO_WRITE)
- PIOc_sync(ncid);
-
- /* If async is in use and this is a comp tasks, then the compmaster
- * sends a msg to the pio_msg_handler running on the IO master and
- * waiting for a message. Then broadcast the ncid over the intercomm
- * to the IO tasks. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_CLOSE_FILE;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
- switch (file->iotype)
- {
+ int ierr;
+ int msg;
+ int mpierr;
+ iosystem_desc_t *ios;
+ file_desc_t *file;
+
+ ierr = PIO_NOERR;
+
+ file = pio_get_file_from_id(ncid);
+ if(file == NULL)
+ return PIO_EBADID;
+ ios = file->iosystem;
+ msg = 0;
+ if((file->mode & PIO_WRITE)){
+ PIOc_sync(ncid);
+ }
+ if(ios->async_interface && ! ios->ioproc){
+ if(ios->comp_rank==0)
+ mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
+ mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
+ }
+
+ if(ios->ioproc){
+ switch(file->iotype){
#ifdef _NETCDF
#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_close(file->fh);
- break;
- case PIO_IOTYPE_NETCDF4C:
+ case PIO_IOTYPE_NETCDF4P:
+ ierr = nc_close(file->fh);
+ break;
+ case PIO_IOTYPE_NETCDF4C:
#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_close(file->fh);
- }
- break;
+ case PIO_IOTYPE_NETCDF:
+ if(ios->io_rank==0){
+ ierr = nc_close(file->fh);
+ }
+ break;
#endif
#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- if((file->mode & PIO_WRITE)){
- ierr = ncmpi_buffer_detach(file->fh);
- }
- ierr = ncmpi_close(file->fh);
- break;
+ case PIO_IOTYPE_PNETCDF:
+ if((file->mode & PIO_WRITE)){
+ ierr = ncmpi_buffer_detach(file->fh);
+ }
+ ierr = ncmpi_close(file->fh);
+ break;
#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
+ default:
+ ierr = iotype_error(file->iotype,__FILE__,__LINE__);
}
+ }
+ if(ios->io_rank==0){
+ printf("Close file %d \n",file->fh);
+// if(file->fh==5) print_trace(stdout);
+ }
+
+ ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
+ int iret = pio_delete_file_from_list(ncid);
- /* Delete file from our list of open files. */
- pio_delete_file_from_list(ncid);
- return ierr;
+ return ierr;
}
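Because the body above calls PIOc_sync itself whenever the file carries PIO_WRITE, a normal shutdown needs only the close call; the error handling shown is an assumption:

    /* Sketch: close a file opened or created as in the earlier sketches. */
    if (PIOc_closefile(ncid) != PIO_NOERR)
        /* report and abort collectively (assumed policy) */;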
-/** Delete a file.
- * @ingroup PIO_deletefile
- *
- * @param iosysid : a pio system handle
- * @param filename : a filename
+/**
+ ** @ingroup PIO_deletefile
+ ** @brief Delete a file
+ ** @param iosysid : a pio system handle
+ ** @param filename : a filename
*/
int PIOc_deletefile(const int iosysid, const char filename[])
{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
- int msg = PIO_MSG_DELETE_FILE;
- size_t len;
-
- /* Get the IO system info from the id. */
- if (!(ios = pio_get_iosystem_from_id(iosysid)))
- return PIO_EBADID;
-
- /* If async is in use, send message to IO master task. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- if(ios->comp_rank==0)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- len = strlen(filename);
- if (!mpierr)
- mpierr = MPI_Bcast(&len, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)filename, len + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
- }
-
- /* If this is an IO task, then call the netCDF function. The
- * barriers are needed to assure that no task is trying to operate
- * on the file while it is being deleted. */
- if(ios->ioproc){
- MPI_Barrier(ios->io_comm);
+ int ierr;
+ int msg;
+ int mpierr;
+ int chkerr;
+ iosystem_desc_t *ios;
+
+ ierr = PIO_NOERR;
+ ios = pio_get_iosystem_from_id(iosysid);
+
+ if(ios == NULL)
+ return PIO_EBADID;
+
+ msg = 0;
+
+ if(ios->async_interface && ! ios->ioproc){
+ if(ios->comp_rank==0)
+ mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
+ // mpierr = MPI_Bcast(iosysid,1, MPI_INT, ios->compmaster, ios->intercomm);
+ }
+ // The barriers are needed to assure that no task is trying to operate on the file while it is being deleted.
+ if(ios->ioproc){
+ MPI_Barrier(ios->io_comm);
#ifdef _NETCDF
- if(ios->io_rank==0)
- ierr = nc_delete(filename);
+ if(ios->io_rank==0)
+ ierr = nc_delete(filename);
#else
#ifdef _PNETCDF
- ierr = ncmpi_delete(filename, ios->info);
+ ierr = ncmpi_delete(filename, ios->info);
#endif
#endif
- MPI_Barrier(ios->io_comm);
- }
-
- // Special case - always broadcast the return from the
- MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm);
+ MPI_Barrier(ios->io_comm);
+ }
+ // Special case - always broadcast the return from the
+ MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm);
+
+
- return ierr;
+ return ierr;
}
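PIOc_deletefile is likewise collective over the IO system; a hedged sketch with a hypothetical filename:

    /* Sketch: remove a file that is no longer open anywhere on this IO system. */
    if (PIOc_deletefile(iosysid, "scratch.nc") != PIO_NOERR)
        /* handle the error */;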
+///
+/// PIO interface to nc_sync
+///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
+/// Refer to the netcdf documentation.
+///
/**
- * PIO interface to nc_sync This routine is called collectively by all
- * tasks in the communicator ios.union_comm.
- *
- * Refer to the netcdf documentation.
- */
-int PIOc_sync(int ncid)
+* @name PIOc_sync
+*/
+int PIOc_sync (int ncid)
{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
- wmulti_buffer *wmb, *twmb;
-
- /* Get the file info from the ncid. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, send message to IO master tasks. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_SYNC;
-
- if(ios->comp_rank == 0)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
+ int ierr;
+ int msg;
+ int mpierr;
+ iosystem_desc_t *ios;
+ file_desc_t *file;
+ wmulti_buffer *wmb, *twmb;
+
+ ierr = PIO_NOERR;
+
+ file = pio_get_file_from_id(ncid);
+ if(file == NULL)
+ return PIO_EBADID;
+ ios = file->iosystem;
+ msg = PIO_MSG_SYNC;
+
+ if(ios->async_interface && ! ios->ioproc){
+ if(ios->compmaster)
+ mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
+ mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
+ }
+
+ if((file->mode & PIO_WRITE)){
+ // cn_buffer_report( *ios, true);
+ wmb = &(file->buffer);
+ while(wmb != NULL){
+ // printf("%s %d %d %d\n",__FILE__,__LINE__,wmb->ioid, wmb->validvars);
+ if(wmb->validvars>0){
+ flush_buffer(ncid, wmb, true);
+ }
+ twmb = wmb;
+ wmb = wmb->next;
+ if(twmb == &(file->buffer)){
+ twmb->ioid=-1;
+ twmb->next=NULL;
+ }else{
+ brel(twmb);
+ }
}
+ flush_output_buffer(file, true, 0);
- if (file->mode & PIO_WRITE)
- {
- // cn_buffer_report( *ios, true);
- wmb = &(file->buffer);
- while(wmb != NULL){
- // printf("%s %d %d %d\n",__FILE__,__LINE__,wmb->ioid, wmb->validvars);
- if(wmb->validvars>0){
- flush_buffer(ncid, wmb, true);
- }
- twmb = wmb;
- wmb = wmb->next;
- if(twmb == &(file->buffer)){
- twmb->ioid=-1;
- twmb->next=NULL;
- }else{
- brel(twmb);
- }
- }
- flush_output_buffer(file, true, 0);
-
- if(ios->ioproc){
- switch(file->iotype){
+ if(ios->ioproc){
+ switch(file->iotype){
#ifdef _NETCDF
#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_sync(file->fh);;
- break;
- case PIO_IOTYPE_NETCDF4C:
+ case PIO_IOTYPE_NETCDF4P:
+ ierr = nc_sync(file->fh);;
+ break;
+ case PIO_IOTYPE_NETCDF4C:
#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_sync(file->fh);;
- }
- break;
+ case PIO_IOTYPE_NETCDF:
+ if(ios->io_rank==0){
+ ierr = nc_sync(file->fh);;
+ }
+ break;
#endif
#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- ierr = ncmpi_sync(file->fh);;
- break;
+ case PIO_IOTYPE_PNETCDF:
+ ierr = ncmpi_sync(file->fh);;
+ break;
#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
+ default:
+ ierr = iotype_error(file->iotype,__FILE__,__LINE__);
+ }
}
- return ierr;
+
+ ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
+ }
+ return ierr;
}
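PIOc_sync is the collective flush point: it drains the write-multi buffers via flush_buffer and flush_output_buffer and then calls the underlying nc_sync/ncmpi_sync. A sketch of the usual checkpoint pattern; the preceding write calls are only a placeholder for whatever darray writes come before the sync:

    /* Sketch: force buffered output to disk at a checkpoint boundary. */
    /* ... PIOc_write_darray calls for this timestep (not shown) ... */
    if (PIOc_sync(ncid) != PIO_NOERR)
        /* handle the error */;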
diff --git a/cime/externals/pio2/src/clib/pio_get_nc.c b/cime/externals/pio2/src/clib/pio_get_nc.c
index 86d233ecaba2..0da690aaf216 100644
--- a/cime/externals/pio2/src/clib/pio_get_nc.c
+++ b/cime/externals/pio2/src/clib/pio_get_nc.c
@@ -1,7 +1,7 @@
#include
#include
-int PIOc_get_var1_schar (int ncid, int varid, const PIO_Offset index[], signed char *buf)
+int PIOc_get_var1_schar (int ncid, int varid, const PIO_Offset index[], signed char *buf)
{
int ierr;
int msg;
@@ -23,7 +23,7 @@ int PIOc_get_var1_schar (int ncid, int varid, const PIO_Offset index[], signed c
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -66,7 +66,7 @@ int PIOc_get_var1_schar (int ncid, int varid, const PIO_Offset index[], signed c
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -74,7 +74,7 @@ int PIOc_get_var1_schar (int ncid, int varid, const PIO_Offset index[], signed c
return ierr;
}
-int PIOc_get_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned long long *buf)
+int PIOc_get_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned long long *buf)
{
int ierr;
int msg;
@@ -100,7 +100,7 @@ int PIOc_get_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -143,7 +143,7 @@ int PIOc_get_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -151,7 +151,7 @@ int PIOc_get_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
return ierr;
}
-int PIOc_get_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned char *buf)
+int PIOc_get_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned char *buf)
{
int ierr;
int msg;
@@ -177,7 +177,7 @@ int PIOc_get_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -220,7 +220,7 @@ int PIOc_get_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -228,7 +228,7 @@ int PIOc_get_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], signed char *buf)
+int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], signed char *buf)
{
int ierr;
int msg;
@@ -254,7 +254,7 @@ int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -297,7 +297,7 @@ int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -305,7 +305,7 @@ int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], short *buf)
+int PIOc_get_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], short *buf)
{
int ierr;
int msg;
@@ -331,7 +331,7 @@ int PIOc_get_vars_short (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -374,7 +374,7 @@ int PIOc_get_vars_short (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -382,7 +382,7 @@ int PIOc_get_vars_short (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_var_double (int ncid, int varid, double *buf)
+int PIOc_get_var_double (int ncid, int varid, double *buf)
{
int ierr;
int msg;
@@ -412,7 +412,7 @@ int PIOc_get_var_double (int ncid, int varid, double *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -455,7 +455,7 @@ int PIOc_get_var_double (int ncid, int varid, double *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -463,7 +463,7 @@ int PIOc_get_var_double (int ncid, int varid, double *buf)
return ierr;
}
-int PIOc_get_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], double *buf)
+int PIOc_get_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], double *buf)
{
int ierr;
int msg;
@@ -489,7 +489,7 @@ int PIOc_get_vara_double (int ncid, int varid, const PIO_Offset start[], const P
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -532,7 +532,7 @@ int PIOc_get_vara_double (int ncid, int varid, const PIO_Offset start[], const P
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -540,7 +540,7 @@ int PIOc_get_vara_double (int ncid, int varid, const PIO_Offset start[], const P
return ierr;
}
-int PIOc_get_var_int (int ncid, int varid, int *buf)
+int PIOc_get_var_int (int ncid, int varid, int *buf)
{
int ierr;
int msg;
@@ -570,7 +570,7 @@ int PIOc_get_var_int (int ncid, int varid, int *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -613,7 +613,7 @@ int PIOc_get_var_int (int ncid, int varid, int *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -621,7 +621,7 @@ int PIOc_get_var_int (int ncid, int varid, int *buf)
return ierr;
}
-int PIOc_get_var_ushort (int ncid, int varid, unsigned short *buf)
+int PIOc_get_var_ushort (int ncid, int varid, unsigned short *buf)
{
int ierr;
int msg;
@@ -651,7 +651,7 @@ int PIOc_get_var_ushort (int ncid, int varid, unsigned short *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -694,7 +694,7 @@ int PIOc_get_var_ushort (int ncid, int varid, unsigned short *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -702,7 +702,7 @@ int PIOc_get_var_ushort (int ncid, int varid, unsigned short *buf)
return ierr;
}
-int PIOc_get_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], char *buf)
+int PIOc_get_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], char *buf)
{
int ierr;
int msg;
@@ -728,7 +728,7 @@ int PIOc_get_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -771,7 +771,7 @@ int PIOc_get_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -779,7 +779,7 @@ int PIOc_get_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], int *buf)
+int PIOc_get_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], int *buf)
{
int ierr;
int msg;
@@ -805,7 +805,7 @@ int PIOc_get_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -848,7 +848,7 @@ int PIOc_get_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -856,7 +856,7 @@ int PIOc_get_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_
return ierr;
}
-int PIOc_get_var1_float (int ncid, int varid, const PIO_Offset index[], float *buf)
+int PIOc_get_var1_float (int ncid, int varid, const PIO_Offset index[], float *buf)
{
int ierr;
int msg;
@@ -878,7 +878,7 @@ int PIOc_get_var1_float (int ncid, int varid, const PIO_Offset index[], float *b
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -921,7 +921,7 @@ int PIOc_get_var1_float (int ncid, int varid, const PIO_Offset index[], float *b
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -929,7 +929,7 @@ int PIOc_get_var1_float (int ncid, int varid, const PIO_Offset index[], float *b
return ierr;
}
-int PIOc_get_var1_short (int ncid, int varid, const PIO_Offset index[], short *buf)
+int PIOc_get_var1_short (int ncid, int varid, const PIO_Offset index[], short *buf)
{
int ierr;
int msg;
@@ -951,7 +951,7 @@ int PIOc_get_var1_short (int ncid, int varid, const PIO_Offset index[], short *b
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -994,7 +994,7 @@ int PIOc_get_var1_short (int ncid, int varid, const PIO_Offset index[], short *b
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1002,7 +1002,7 @@ int PIOc_get_var1_short (int ncid, int varid, const PIO_Offset index[], short *b
return ierr;
}
-int PIOc_get_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], int *buf)
+int PIOc_get_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], int *buf)
{
int ierr;
int msg;
@@ -1028,7 +1028,7 @@ int PIOc_get_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1071,7 +1071,7 @@ int PIOc_get_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1079,7 +1079,7 @@ int PIOc_get_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_
return ierr;
}
-int PIOc_get_var_text (int ncid, int varid, char *buf)
+int PIOc_get_var_text (int ncid, int varid, char *buf)
{
int ierr;
int msg;
@@ -1109,7 +1109,7 @@ int PIOc_get_var_text (int ncid, int varid, char *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1152,7 +1152,7 @@ int PIOc_get_var_text (int ncid, int varid, char *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1160,7 +1160,7 @@ int PIOc_get_var_text (int ncid, int varid, char *buf)
return ierr;
}
-int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], double *buf)
+int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], double *buf)
{
int ierr;
int msg;
@@ -1186,7 +1186,7 @@ int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const P
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1229,7 +1229,7 @@ int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const P
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1237,7 +1237,7 @@ int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const P
return ierr;
}
-int PIOc_get_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], signed char *buf)
+int PIOc_get_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], signed char *buf)
{
int ierr;
int msg;
@@ -1263,7 +1263,7 @@ int PIOc_get_vars_schar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1306,7 +1306,7 @@ int PIOc_get_vars_schar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1314,7 +1314,7 @@ int PIOc_get_vars_schar (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned short *buf)
+int PIOc_get_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned short *buf)
{
int ierr;
int msg;
@@ -1340,7 +1340,7 @@ int PIOc_get_vara_ushort (int ncid, int varid, const PIO_Offset start[], const P
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1383,7 +1383,7 @@ int PIOc_get_vara_ushort (int ncid, int varid, const PIO_Offset start[], const P
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1391,7 +1391,7 @@ int PIOc_get_vara_ushort (int ncid, int varid, const PIO_Offset start[], const P
return ierr;
}
-int PIOc_get_var1_ushort (int ncid, int varid, const PIO_Offset index[], unsigned short *buf)
+int PIOc_get_var1_ushort (int ncid, int varid, const PIO_Offset index[], unsigned short *buf)
{
int ierr;
int msg;
@@ -1413,7 +1413,7 @@ int PIOc_get_var1_ushort (int ncid, int varid, const PIO_Offset index[], unsigne
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1456,7 +1456,7 @@ int PIOc_get_var1_ushort (int ncid, int varid, const PIO_Offset index[], unsigne
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1464,7 +1464,7 @@ int PIOc_get_var1_ushort (int ncid, int varid, const PIO_Offset index[], unsigne
return ierr;
}
-int PIOc_get_var_float (int ncid, int varid, float *buf)
+int PIOc_get_var_float (int ncid, int varid, float *buf)
{
int ierr;
int msg;
@@ -1494,7 +1494,7 @@ int PIOc_get_var_float (int ncid, int varid, float *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1537,7 +1537,7 @@ int PIOc_get_var_float (int ncid, int varid, float *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1545,7 +1545,7 @@ int PIOc_get_var_float (int ncid, int varid, float *buf)
return ierr;
}
-int PIOc_get_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned char *buf)
+int PIOc_get_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned char *buf)
{
int ierr;
int msg;
@@ -1571,7 +1571,7 @@ int PIOc_get_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1614,7 +1614,7 @@ int PIOc_get_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1622,7 +1622,7 @@ int PIOc_get_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -1644,7 +1644,7 @@ int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datat
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1687,7 +1687,7 @@ int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datat
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1695,7 +1695,7 @@ int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datat
return ierr;
}
-int PIOc_get_var1_longlong (int ncid, int varid, const PIO_Offset index[], long long *buf)
+int PIOc_get_var1_longlong (int ncid, int varid, const PIO_Offset index[], long long *buf)
{
int ierr;
int msg;
@@ -1717,7 +1717,7 @@ int PIOc_get_var1_longlong (int ncid, int varid, const PIO_Offset index[], long
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1760,7 +1760,7 @@ int PIOc_get_var1_longlong (int ncid, int varid, const PIO_Offset index[], long
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1768,7 +1768,7 @@ int PIOc_get_var1_longlong (int ncid, int varid, const PIO_Offset index[], long
return ierr;
}
-int PIOc_get_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned short *buf)
+int PIOc_get_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned short *buf)
{
int ierr;
int msg;
@@ -1794,7 +1794,7 @@ int PIOc_get_vars_ushort (int ncid, int varid, const PIO_Offset start[], const P
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1837,7 +1837,7 @@ int PIOc_get_vars_ushort (int ncid, int varid, const PIO_Offset start[], const P
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1845,7 +1845,7 @@ int PIOc_get_vars_ushort (int ncid, int varid, const PIO_Offset start[], const P
return ierr;
}
-int PIOc_get_var_long (int ncid, int varid, long *buf)
+int PIOc_get_var_long (int ncid, int varid, long *buf)
{
int ierr;
int msg;
@@ -1875,7 +1875,7 @@ int PIOc_get_var_long (int ncid, int varid, long *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1918,7 +1918,7 @@ int PIOc_get_var_long (int ncid, int varid, long *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1926,7 +1926,7 @@ int PIOc_get_var_long (int ncid, int varid, long *buf)
return ierr;
}
-int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset index[], double *buf)
+int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset index[], double *buf)
{
int ierr;
int msg;
@@ -1948,7 +1948,7 @@ int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset index[], double
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1991,7 +1991,7 @@ int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset index[], double
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -1999,7 +1999,7 @@ int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset index[], double
return ierr;
}
-int PIOc_get_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned int *buf)
+int PIOc_get_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned int *buf)
{
int ierr;
int msg;
@@ -2025,7 +2025,7 @@ int PIOc_get_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2068,7 +2068,7 @@ int PIOc_get_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2076,7 +2076,7 @@ int PIOc_get_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long long *buf)
+int PIOc_get_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long long *buf)
{
int ierr;
int msg;
@@ -2102,7 +2102,7 @@ int PIOc_get_vars_longlong (int ncid, int varid, const PIO_Offset start[], const
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2145,7 +2145,7 @@ int PIOc_get_vars_longlong (int ncid, int varid, const PIO_Offset start[], const
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2153,7 +2153,7 @@ int PIOc_get_vars_longlong (int ncid, int varid, const PIO_Offset start[], const
return ierr;
}
-int PIOc_get_var_longlong (int ncid, int varid, long long *buf)
+int PIOc_get_var_longlong (int ncid, int varid, long long *buf)
{
int ierr;
int msg;
@@ -2183,7 +2183,7 @@ int PIOc_get_var_longlong (int ncid, int varid, long long *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2226,7 +2226,7 @@ int PIOc_get_var_longlong (int ncid, int varid, long long *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2234,7 +2234,7 @@ int PIOc_get_var_longlong (int ncid, int varid, long long *buf)
return ierr;
}
-int PIOc_get_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], short *buf)
+int PIOc_get_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], short *buf)
{
int ierr;
int msg;
@@ -2260,7 +2260,7 @@ int PIOc_get_vara_short (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2303,7 +2303,7 @@ int PIOc_get_vara_short (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2311,7 +2311,7 @@ int PIOc_get_vara_short (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long *buf)
+int PIOc_get_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long *buf)
{
int ierr;
int msg;
@@ -2337,7 +2337,7 @@ int PIOc_get_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2380,7 +2380,7 @@ int PIOc_get_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2388,7 +2388,7 @@ int PIOc_get_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_var1_int (int ncid, int varid, const PIO_Offset index[], int *buf)
+int PIOc_get_var1_int (int ncid, int varid, const PIO_Offset index[], int *buf)
{
int ierr;
int msg;
@@ -2410,7 +2410,7 @@ int PIOc_get_var1_int (int ncid, int varid, const PIO_Offset index[], int *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2453,7 +2453,7 @@ int PIOc_get_var1_int (int ncid, int varid, const PIO_Offset index[], int *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2461,7 +2461,7 @@ int PIOc_get_var1_int (int ncid, int varid, const PIO_Offset index[], int *buf)
return ierr;
}
-int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], unsigned long long *buf)
+int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], unsigned long long *buf)
{
int ierr;
int msg;
@@ -2483,7 +2483,7 @@ int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], unsi
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2526,7 +2526,7 @@ int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], unsi
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2534,7 +2534,7 @@ int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], unsi
return ierr;
}
-int PIOc_get_var_uchar (int ncid, int varid, unsigned char *buf)
+int PIOc_get_var_uchar (int ncid, int varid, unsigned char *buf)
{
int ierr;
int msg;
@@ -2564,7 +2564,7 @@ int PIOc_get_var_uchar (int ncid, int varid, unsigned char *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2607,7 +2607,7 @@ int PIOc_get_var_uchar (int ncid, int varid, unsigned char *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2615,7 +2615,7 @@ int PIOc_get_var_uchar (int ncid, int varid, unsigned char *buf)
return ierr;
}
-int PIOc_get_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned char *buf)
+int PIOc_get_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned char *buf)
{
int ierr;
int msg;
@@ -2641,7 +2641,7 @@ int PIOc_get_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2684,7 +2684,7 @@ int PIOc_get_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2692,7 +2692,7 @@ int PIOc_get_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], float *buf)
+int PIOc_get_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], float *buf)
{
int ierr;
int msg;
@@ -2718,7 +2718,7 @@ int PIOc_get_vars_float (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2761,7 +2761,7 @@ int PIOc_get_vars_float (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2769,7 +2769,7 @@ int PIOc_get_vars_float (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long *buf)
+int PIOc_get_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], long *buf)
{
int ierr;
int msg;
@@ -2795,7 +2795,7 @@ int PIOc_get_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2838,7 +2838,7 @@ int PIOc_get_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2846,7 +2846,7 @@ int PIOc_get_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_var1 (int ncid, int varid, const PIO_Offset index[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_get_var1 (int ncid, int varid, const PIO_Offset index[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -2868,7 +2868,7 @@ int PIOc_get_var1 (int ncid, int varid, const PIO_Offset index[], void *buf, PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2911,7 +2911,7 @@ int PIOc_get_var1 (int ncid, int varid, const PIO_Offset index[], void *buf, PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -2919,7 +2919,7 @@ int PIOc_get_var1 (int ncid, int varid, const PIO_Offset index[], void *buf, PIO
return ierr;
}
-int PIOc_get_var_uint (int ncid, int varid, unsigned int *buf)
+int PIOc_get_var_uint (int ncid, int varid, unsigned int *buf)
{
int ierr;
int msg;
@@ -2949,7 +2949,7 @@ int PIOc_get_var_uint (int ncid, int varid, unsigned int *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2992,7 +2992,7 @@ int PIOc_get_var_uint (int ncid, int varid, unsigned int *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3000,7 +3000,7 @@ int PIOc_get_var_uint (int ncid, int varid, unsigned int *buf)
return ierr;
}
-int PIOc_get_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_get_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -3022,7 +3022,7 @@ int PIOc_get_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3065,7 +3065,7 @@ int PIOc_get_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3073,7 +3073,7 @@ int PIOc_get_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
return ierr;
}
-int PIOc_get_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], signed char *buf)
+int PIOc_get_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], signed char *buf)
{
int ierr;
int msg;
@@ -3099,7 +3099,7 @@ int PIOc_get_vara_schar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3142,7 +3142,7 @@ int PIOc_get_vara_schar (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3150,7 +3150,7 @@ int PIOc_get_vara_schar (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_var1_uint (int ncid, int varid, const PIO_Offset index[], unsigned int *buf)
+int PIOc_get_var1_uint (int ncid, int varid, const PIO_Offset index[], unsigned int *buf)
{
int ierr;
int msg;
@@ -3172,7 +3172,7 @@ int PIOc_get_var1_uint (int ncid, int varid, const PIO_Offset index[], unsigned
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3215,7 +3215,7 @@ int PIOc_get_var1_uint (int ncid, int varid, const PIO_Offset index[], unsigned
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3223,7 +3223,7 @@ int PIOc_get_var1_uint (int ncid, int varid, const PIO_Offset index[], unsigned
return ierr;
}
-int PIOc_get_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned int *buf)
+int PIOc_get_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], unsigned int *buf)
{
int ierr;
int msg;
@@ -3249,7 +3249,7 @@ int PIOc_get_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3292,7 +3292,7 @@ int PIOc_get_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3300,7 +3300,7 @@ int PIOc_get_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], float *buf)
+int PIOc_get_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], float *buf)
{
int ierr;
int msg;
@@ -3326,7 +3326,7 @@ int PIOc_get_vara_float (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3369,7 +3369,7 @@ int PIOc_get_vara_float (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3377,7 +3377,7 @@ int PIOc_get_vara_float (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], char *buf)
+int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], char *buf)
{
int ierr;
int msg;
@@ -3403,7 +3403,7 @@ int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3446,7 +3446,7 @@ int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3454,7 +3454,7 @@ int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_var1_text (int ncid, int varid, const PIO_Offset index[], char *buf)
+int PIOc_get_var1_text (int ncid, int varid, const PIO_Offset index[], char *buf)
{
int ierr;
int msg;
@@ -3476,7 +3476,7 @@ int PIOc_get_var1_text (int ncid, int varid, const PIO_Offset index[], char *buf
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3519,7 +3519,7 @@ int PIOc_get_var1_text (int ncid, int varid, const PIO_Offset index[], char *buf
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3527,7 +3527,7 @@ int PIOc_get_var1_text (int ncid, int varid, const PIO_Offset index[], char *buf
return ierr;
}
-int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], int *buf)
+int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], int *buf)
{
int ierr;
int msg;
@@ -3553,7 +3553,7 @@ int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3596,7 +3596,7 @@ int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3604,7 +3604,7 @@ int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_
return ierr;
}
-int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned int *buf)
+int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned int *buf)
{
int ierr;
int msg;
@@ -3630,7 +3630,7 @@ int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3673,7 +3673,7 @@ int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3681,7 +3681,7 @@ int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -3703,7 +3703,7 @@ int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3746,7 +3746,7 @@ int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3754,7 +3754,7 @@ int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
return ierr;
}
-int PIOc_get_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], double *buf)
+int PIOc_get_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], double *buf)
{
int ierr;
int msg;
@@ -3780,7 +3780,7 @@ int PIOc_get_vars_double (int ncid, int varid, const PIO_Offset start[], const P
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3823,7 +3823,7 @@ int PIOc_get_vars_double (int ncid, int varid, const PIO_Offset start[], const P
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3831,7 +3831,7 @@ int PIOc_get_vars_double (int ncid, int varid, const PIO_Offset start[], const P
return ierr;
}
-int PIOc_get_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long long *buf)
+int PIOc_get_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], long long *buf)
{
int ierr;
int msg;
@@ -3857,7 +3857,7 @@ int PIOc_get_vara_longlong (int ncid, int varid, const PIO_Offset start[], const
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3900,7 +3900,7 @@ int PIOc_get_vara_longlong (int ncid, int varid, const PIO_Offset start[], const
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3908,7 +3908,7 @@ int PIOc_get_vara_longlong (int ncid, int varid, const PIO_Offset start[], const
return ierr;
}
-int PIOc_get_var_ulonglong (int ncid, int varid, unsigned long long *buf)
+int PIOc_get_var_ulonglong (int ncid, int varid, unsigned long long *buf)
{
int ierr;
int msg;
@@ -3938,7 +3938,7 @@ int PIOc_get_var_ulonglong (int ncid, int varid, unsigned long long *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3981,7 +3981,7 @@ int PIOc_get_var_ulonglong (int ncid, int varid, unsigned long long *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -3989,7 +3989,7 @@ int PIOc_get_var_ulonglong (int ncid, int varid, unsigned long long *buf)
return ierr;
}
-int PIOc_get_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned long long *buf)
+int PIOc_get_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], unsigned long long *buf)
{
int ierr;
int msg;
@@ -4015,7 +4015,7 @@ int PIOc_get_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4058,7 +4058,7 @@ int PIOc_get_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4066,7 +4066,7 @@ int PIOc_get_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
return ierr;
}
-int PIOc_get_var_short (int ncid, int varid, short *buf)
+int PIOc_get_var_short (int ncid, int varid, short *buf)
{
int ierr;
int msg;
@@ -4096,7 +4096,7 @@ int PIOc_get_var_short (int ncid, int varid, short *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4139,7 +4139,7 @@ int PIOc_get_var_short (int ncid, int varid, short *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4147,7 +4147,7 @@ int PIOc_get_var_short (int ncid, int varid, short *buf)
return ierr;
}
-int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], float *buf)
+int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], float *buf)
{
int ierr;
int msg;
@@ -4173,7 +4173,7 @@ int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4216,7 +4216,7 @@ int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4224,7 +4224,7 @@ int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset index[], long *buf)
+int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset index[], long *buf)
{
int ierr;
int msg;
@@ -4246,7 +4246,7 @@ int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset index[], long *buf
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4289,7 +4289,7 @@ int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset index[], long *buf
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4297,7 +4297,7 @@ int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset index[], long *buf
return ierr;
}
-int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long *buf)
+int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long *buf)
{
int ierr;
int msg;
@@ -4323,7 +4323,7 @@ int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4366,7 +4366,7 @@ int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4374,7 +4374,7 @@ int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned short *buf)
+int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned short *buf)
{
int ierr;
int msg;
@@ -4400,7 +4400,7 @@ int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const P
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4443,7 +4443,7 @@ int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const P
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4451,7 +4451,7 @@ int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const P
return ierr;
}
-int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long long *buf)
+int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long long *buf)
{
int ierr;
int msg;
@@ -4477,7 +4477,7 @@ int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4520,7 +4520,7 @@ int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4528,7 +4528,7 @@ int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const
return ierr;
}
-int PIOc_get_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], char *buf)
+int PIOc_get_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], char *buf)
{
int ierr;
int msg;
@@ -4554,7 +4554,7 @@ int PIOc_get_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4597,7 +4597,7 @@ int PIOc_get_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4605,7 +4605,7 @@ int PIOc_get_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO
return ierr;
}
-int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset index[], unsigned char *buf)
+int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset index[], unsigned char *buf)
{
int ierr;
int msg;
@@ -4627,7 +4627,7 @@ int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset index[], unsigned
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4670,7 +4670,7 @@ int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset index[], unsigned
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4678,7 +4678,7 @@ int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset index[], unsigned
return ierr;
}
-int PIOc_get_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_get_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -4700,7 +4700,7 @@ int PIOc_get_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4743,7 +4743,7 @@ int PIOc_get_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4751,7 +4751,7 @@ int PIOc_get_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
return ierr;
}
-int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], short *buf)
+int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], short *buf)
{
int ierr;
int msg;
@@ -4777,7 +4777,7 @@ int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PI
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4820,7 +4820,7 @@ int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PI
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4828,7 +4828,7 @@ int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PI
return ierr;
}
-int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned long long *buf)
+int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned long long *buf)
{
int ierr;
int msg;
@@ -4854,7 +4854,7 @@ int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4897,7 +4897,7 @@ int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
@@ -4905,7 +4905,7 @@ int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
return ierr;
}
-int PIOc_get_var_schar (int ncid, int varid, signed char *buf)
+int PIOc_get_var_schar (int ncid, int varid, signed char *buf)
{
int ierr;
int msg;
@@ -4935,7 +4935,7 @@ int PIOc_get_var_schar (int ncid, int varid, signed char *buf)
ierr = PIO_NOERR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4978,7 +4978,7 @@ int PIOc_get_var_schar (int ncid, int varid, signed char *buf)
ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
- if(ios->async_interface || bcast ||
+ if(ios->async_interface || bcast ||
(ios->num_iotasks < ios->num_comptasks)){
MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
}
diff --git a/cime/externals/pio2/src/clib/pio_get_nc_async.c b/cime/externals/pio2/src/clib/pio_get_nc_async.c
deleted file mode 100644
index 7f6aeb79ce3d..000000000000
--- a/cime/externals/pio2/src/clib/pio_get_nc_async.c
+++ /dev/null
@@ -1,921 +0,0 @@
-/**
- * @file
- * PIO functions to get data (excluding varm functions).
- *
- * @author Ed Hartnett
- * @date 2016
- *
- * @see http://code.google.com/p/parallelio/
- */
-
-#include <config.h>
-#include <pio.h>
-#include <pio_internal.h>
-
-/**
- * Internal PIO function which provides a type-neutral interface to
- * nc_get_vars.
- *
- * Users should not call this function directly. Instead, call one of
- * the derived functions, depending on the type of data you are
- * reading: PIOc_get_vars_text(), PIOc_get_vars_uchar(),
- * PIOc_get_vars_schar(), PIOc_get_vars_ushort(),
- * PIOc_get_vars_short(), PIOc_get_vars_uint(), PIOc_get_vars_int(),
- * PIOc_get_vars_long(), PIOc_get_vars_float(),
- * PIOc_get_vars_double(), PIOc_get_vars_ulonglong(),
- * PIOc_get_vars_longlong()
- *
- * This routine is called collectively by all tasks in the
- * communicator ios.union_comm.
- *
- * @param ncid identifies the netCDF file
- * @param varid the variable ID number
- * @param start an array of start indices (must have same number of
- * entries as variable has dimensions). If NULL, indices of 0 will be
- * used.
- *
- * @param count an array of counts (must have same number of entries
- * as variable has dimensions). If NULL, counts matching the size of
- * the variable will be used.
- *
- * @param stride an array of strides (must have same number of
- * entries as variable has dimensions). If NULL, strides of 1 will be
- * used.
- *
- * @param xtype the netCDF type of the data being passed in buf. Data
- * will be automatically converted from the type of the variable being
- * read from to this type.
- *
- * @param buf pointer to the buffer into which the data will be read.
- *
- * @return PIO_NOERR on success, error code otherwise.
- */
-int PIOc_get_vars_tc(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, nc_type xtype, void *buf)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
- int ndims; /* The number of dimensions in the variable. */
- int *dimids; /* The IDs of the dimensions for this variable. */
- PIO_Offset typelen; /* Size (in bytes) of the data type of data in buf. */
- PIO_Offset num_elem = 1; /* Number of data elements in the buffer. */
- int bcast = false;
-
- LOG((1, "PIOc_get_vars_tc ncid = %d varid = %d start = %d count = %d "
- "stride = %d xtype = %d", ncid, varid, start, count, stride, xtype));
-
- /* User must provide a place to put some data. */
- if (!buf)
- return PIO_EINVAL;
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* Run these on all tasks if async is not in use, but only on
- * non-IO tasks if async is in use. */
- if (!ios->async_interface || !ios->ioproc)
- {
- /* Get the length of the data type. */
- if ((ierr = PIOc_inq_type(ncid, xtype, NULL, &typelen)))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Get the number of dims for this var. */
- if ((ierr = PIOc_inq_varndims(ncid, varid, &ndims)))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- PIO_Offset dimlen[ndims];
-
- /* If no count array was passed, we need to know the dimlens
- * so we can calculate how many data elements are in the
- * buf. */
- if (!count)
- {
- int dimid[ndims];
-
- /* Get the dimids for this var. */
- if ((ierr = PIOc_inq_vardimid(ncid, varid, dimid)))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Get the length of each dimension. */
- for (int vd = 0; vd < ndims; vd++)
- if ((ierr = PIOc_inq_dimlen(ncid, dimid[vd], &dimlen[vd])))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
- }
-
- /* Figure out the real start, count, and stride arrays. (The
- * user may have passed in NULLs.) */
- PIO_Offset rstart[ndims], rcount[ndims], rstride[ndims];
- for (int vd = 0; vd < ndims; vd++)
- {
- rstart[vd] = start ? start[vd] : 0;
- rcount[vd] = count ? count[vd] : dimlen[vd];
- rstride[vd] = stride ? stride[vd] : 1;
- }
-
- /* How many elements in buf? */
- for (int vd = 0; vd < ndims; vd++)
- num_elem *= (rcount[vd] - rstart[vd])/rstride[vd];
- LOG((2, "PIOc_get_vars_tc num_elem = %d", num_elem));
- }
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_GET_VARS;
- char start_present = start ? true : false;
- char count_present = count ? true : false;
- char stride_present = stride ? true : false;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
-        /* Send the function parameters and associated information
- * to the msg handler. */
- if (!mpierr)
- mpierr = MPI_Bcast(&ncid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&ndims, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&start_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr && start_present)
- mpierr = MPI_Bcast((PIO_Offset *)start, ndims, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&count_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr && count_present)
- mpierr = MPI_Bcast((PIO_Offset *)count, ndims, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&stride_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr && stride_present)
- mpierr = MPI_Bcast((PIO_Offset *)stride, ndims, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&xtype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&num_elem, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- LOG((2, "PIOc_get_vars_tc ncid = %d varid = %d ndims = %d start_present = %d "
- "count_present = %d stride_present = %d xtype = %d num_elem = %d", ncid, varid,
- ndims, start_present, count_present, stride_present, xtype, num_elem));
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- /* Broadcast values currently only known on computation tasks to IO tasks. */
- if ((mpierr = MPI_Bcast(&num_elem, 1, MPI_OFFSET, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- if ((mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- {
-#ifdef PNET_READ_AND_BCAST
- LOG((1, "PNET_READ_AND_BCAST"));
- ncmpi_begin_indep_data(file->fh);
-
- /* Only the IO master does the IO, so we are not really
- * getting parallel IO here. */
- if (ios->iomaster)
- {
- switch(xtype)
- {
- case NC_BYTE:
- ierr = ncmpi_get_vars_schar(ncid, varid, start, count, stride, buf);
- break;
- case NC_CHAR:
- ierr = ncmpi_get_vars_text(ncid, varid, start, count, stride, buf);
- break;
- case NC_SHORT:
- ierr = ncmpi_get_vars_short(ncid, varid, start, count, stride, buf);
- break;
- case NC_INT:
- ierr = ncmpi_get_vars_int(ncid, varid, start, count, stride, buf);
- break;
- case NC_FLOAT:
- ierr = ncmpi_get_vars_float(ncid, varid, start, count, stride, buf);
- break;
- case NC_DOUBLE:
- ierr = ncmpi_get_vars_double(ncid, varid, start, count, stride, buf);
- break;
- case NC_INT64:
- ierr = ncmpi_get_vars_longlong(ncid, varid, start, count, stride, buf);
- break;
- default:
- LOG((0, "Unknown type for pnetcdf file! xtype = %d", xtype));
- }
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else /* PNET_READ_AND_BCAST */
- LOG((1, "not PNET_READ_AND_BCAST"));
- switch(xtype)
- {
- case NC_BYTE:
- ierr = ncmpi_get_vars_schar_all(ncid, varid, start, count, stride, buf);
- break;
- case NC_CHAR:
- ierr = ncmpi_get_vars_text_all(ncid, varid, start, count, stride, buf);
- break;
- case NC_SHORT:
- ierr = ncmpi_get_vars_short_all(ncid, varid, start, count, stride, buf);
- break;
- case NC_INT:
- ierr = ncmpi_get_vars_int_all(ncid, varid, start, count, stride, buf);
- for (int i = 0; i < 4; i++)
- LOG((2, "((int *)buf)[%d] = %d", i, ((int *)buf)[0]));
- break;
- case NC_FLOAT:
- ierr = ncmpi_get_vars_float_all(ncid, varid, start, count, stride, buf);
- break;
- case NC_DOUBLE:
- ierr = ncmpi_get_vars_double_all(ncid, varid, start, count, stride, buf);
- break;
- case NC_INT64:
- ierr = ncmpi_get_vars_longlong_all(ncid, varid, start, count, stride, buf);
- break;
- default:
- LOG((0, "Unknown type for pnetcdf file! xtype = %d", xtype));
- }
-#endif /* PNET_READ_AND_BCAST */
- }
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- switch(xtype)
- {
- case NC_BYTE:
- ierr = nc_get_vars_schar(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_CHAR:
- ierr = nc_get_vars_schar(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_SHORT:
- ierr = nc_get_vars_short(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_INT:
- ierr = nc_get_vars_int(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_FLOAT:
- ierr = nc_get_vars_float(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_DOUBLE:
- ierr = nc_get_vars_double(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
-#ifdef _NETCDF4
- case NC_UBYTE:
- ierr = nc_get_vars_uchar(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_USHORT:
- ierr = nc_get_vars_ushort(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_UINT:
- ierr = nc_get_vars_uint(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_INT64:
- ierr = nc_get_vars_longlong(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_UINT64:
- ierr = nc_get_vars_ulonglong(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- /* case NC_STRING: */
- /* ierr = nc_get_vars_string(ncid, varid, (size_t *)start, (size_t *)count, */
- /* (ptrdiff_t *)stride, (void *)buf); */
- /* break; */
- default:
- ierr = nc_get_vars(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
-#endif /* _NETCDF4 */
- }
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Send the data. */
- LOG((2, "PIOc_get_vars_tc bcasting data num_elem = %d typelen = %d", num_elem,
- typelen));
- if (!mpierr)
- mpierr = MPI_Bcast((void *)buf, num_elem * typelen, MPI_BYTE, ios->ioroot,
- ios->my_comm);
- return ierr;
-}
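
For illustration only (this sketch is not part of the patch): one way a caller might use the strided read interface documented above. The ncid, varid, and the 5x10 output shape are assumptions made purely for the example; the caller is assumed to have already opened the file and looked up the variable ID.

    #include <pio.h>

    /* Read every other row of a hypothetical 2-D double variable. Sketch only. */
    static int read_strided_example(int ncid, int varid, double out[5][10])
    {
        PIO_Offset start[2]  = {0, 0};
        PIO_Offset count[2]  = {5, 10};   /* elements to read along each dimension */
        PIO_Offset stride[2] = {2, 1};    /* take every other index along dim 0 */
        return PIOc_get_vars_double(ncid, varid, start, count, stride, &out[0][0]);
    }
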
-
-int PIOc_get_vars_text(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_CHAR, buf);
-}
-
-int PIOc_get_vars_uchar(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, unsigned char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_UBYTE, buf);
-}
-
-int PIOc_get_vars_schar(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, signed char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_BYTE, buf);
-}
-
-int PIOc_get_vars_ushort(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, unsigned short *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_USHORT, buf);
-}
-
-int PIOc_get_vars_short(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, short *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_SHORT, buf);
-}
-
-int PIOc_get_vars_uint(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, unsigned int *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_UINT, buf);
-}
-
-int PIOc_get_vars_int(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, int *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_INT, buf);
-}
-
-int PIOc_get_vars_long(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_LONG, buf);
-}
-
-int PIOc_get_vars_float(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, float *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_FLOAT, buf);
-}
-
-int PIOc_get_vars_double(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, double *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_DOUBLE, buf);
-}
-
-int PIOc_get_vars_ulonglong(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride,
- unsigned long long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_UINT64, buf);
-}
-
-int PIOc_get_vars_longlong(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, long long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, stride, NC_UINT64, buf);
-}
-
-int PIOc_get_vara_text(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_CHAR, buf);
-}
-
-int PIOc_get_vara_uchar(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, unsigned char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_UBYTE, buf);
-}
-
-int PIOc_get_vara_schar(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, signed char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_BYTE, buf);
-}
-
-int PIOc_get_vara_ushort(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, unsigned short *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_USHORT, buf);
-}
-
-int PIOc_get_vara_short(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, short *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_SHORT, buf);
-}
-
-int PIOc_get_vara_long(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_LONG, buf);
-}
-
-int PIOc_get_vara_uint(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, unsigned int *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_UINT, buf);
-}
-
-int PIOc_get_vara_int(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, int *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_INT, buf);
-}
-
-int PIOc_get_vara_float(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, float *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_FLOAT, buf);
-}
-
-int PIOc_get_vara_double(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, double *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_DOUBLE, buf);
-}
-
-int PIOc_get_vara_ulonglong(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, unsigned long long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_UINT64, buf);
-}
-
-int PIOc_get_vara_longlong(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, long long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, start, count, NULL, NC_INT64, buf);
-}
-
-int PIOc_get_var_text(int ncid, int varid, char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_CHAR, buf);
-}
-
-int PIOc_get_var_uchar(int ncid, int varid, unsigned char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_UBYTE, buf);
-}
-
-int PIOc_get_var_schar(int ncid, int varid, signed char *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_BYTE, buf);
-}
-
-int PIOc_get_var_ushort(int ncid, int varid, unsigned short *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_USHORT, buf);
-}
-
-int PIOc_get_var_short(int ncid, int varid, short *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_SHORT, buf);
-}
-
-int PIOc_get_var_uint(int ncid, int varid, unsigned int *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_UINT, buf);
-}
-
-int PIOc_get_var_int(int ncid, int varid, int *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_INT, buf);
-}
-
-int PIOc_get_var_long (int ncid, int varid, long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_LONG, buf);
-}
-
-int PIOc_get_var_float(int ncid, int varid, float *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_FLOAT, buf);
-}
-
-int PIOc_get_var_double(int ncid, int varid, double *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_DOUBLE, buf);
-}
-
-int PIOc_get_var_ulonglong(int ncid, int varid, unsigned long long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_UINT64, buf);
-}
-
-int PIOc_get_var_longlong(int ncid, int varid, long long *buf)
-{
- return PIOc_get_vars_tc(ncid, varid, NULL, NULL, NULL, NC_INT64, buf);
-}
-
-int PIOc_get_var1_tc(int ncid, int varid, const PIO_Offset *index, nc_type xtype,
- void *buf)
-{
- int ndims;
- int ierr;
-
- /* Find the number of dimensions. */
- if ((ierr = PIOc_inq_varndims(ncid, varid, &ndims)))
- return ierr;
-
- /* Set up count array. */
- PIO_Offset count[ndims];
- for (int c = 0; c < ndims; c++)
- count[c] = 1;
-
- return PIOc_get_vars_tc(ncid, varid, index, count, NULL, xtype, buf);
-}
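
For illustration only (not part of the patch): the var1 convenience wrappers above simply build a count array of ones and delegate to the vars call, so reading a single element needs only an index array. The 3-D shape and the element coordinates below are assumptions for the example; ncid and varid come from earlier open and inquiry calls.

    #include <pio.h>

    /* Read element (0,2,4) of a hypothetical 3-D int variable. Sketch only. */
    static int read_single_value_example(int ncid, int varid, int *val)
    {
        PIO_Offset index[3] = {0, 2, 4};
        return PIOc_get_var1_int(ncid, varid, index, val);
    }
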
-
-int PIOc_get_var1_text(int ncid, int varid, const PIO_Offset *index, char *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_CHAR, buf);
-}
-
-int PIOc_get_var1_uchar (int ncid, int varid, const PIO_Offset *index, unsigned char *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_UBYTE, buf);
-}
-
-int PIOc_get_var1_schar(int ncid, int varid, const PIO_Offset *index, signed char *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_BYTE, buf);
-}
-
-int PIOc_get_var1_ushort(int ncid, int varid, const PIO_Offset *index, unsigned short *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_USHORT, buf);
-}
-
-int PIOc_get_var1_short(int ncid, int varid, const PIO_Offset *index, short *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_SHORT, buf);
-}
-
-int PIOc_get_var1_uint(int ncid, int varid, const PIO_Offset *index, unsigned int *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_UINT, buf);
-}
-
-int PIOc_get_var1_long (int ncid, int varid, const PIO_Offset *index, long *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_LONG, buf);
-}
-
-int PIOc_get_var1_int(int ncid, int varid, const PIO_Offset *index, int *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_INT, buf);
-}
-
-int PIOc_get_var1_float(int ncid, int varid, const PIO_Offset *index, float *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_FLOAT, buf);
-}
-
-int PIOc_get_var1_double (int ncid, int varid, const PIO_Offset *index, double *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_DOUBLE, buf);
-}
-
-int PIOc_get_var1_ulonglong (int ncid, int varid, const PIO_Offset *index,
- unsigned long long *buf)
-{
-  return PIOc_get_var1_tc(ncid, varid, index, NC_UINT64, buf);
-}
-
-
-int PIOc_get_var1_longlong(int ncid, int varid, const PIO_Offset *index,
- long long *buf)
-{
- return PIOc_get_var1_tc(ncid, varid, index, NC_INT64, buf);
-}
-
-int PIOc_get_var (int ncid, int varid, void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VAR;
- ibufcnt = bufcount;
- ibuftype = buftype;
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_var(file->fh, varid, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_var(file->fh, varid, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_var(file->fh, varid, buf, bufcount, buftype);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_var_all(file->fh, varid, buf, bufcount, buftype);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-
-
-
-
-
-int PIOc_get_var1 (int ncid, int varid, const PIO_Offset *index, void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VAR1;
- ibufcnt = bufcount;
- ibuftype = buftype;
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_var1(file->fh, varid, (size_t *) index, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_var1(file->fh, varid, (size_t *) index, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_var1(file->fh, varid, index, buf, bufcount, buftype);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_var1_all(file->fh, varid, index, buf, bufcount, buftype);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_vara (int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count, void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARA;
- ibufcnt = bufcount;
- ibuftype = buftype;
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_vara(file->fh, varid, (size_t *) start, (size_t *) count, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_vara(file->fh, varid, (size_t *) start, (size_t *) count, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_vara(file->fh, varid, start, count, buf, bufcount, buftype);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_vara_all(file->fh, varid, start, count, buf, bufcount, buftype);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-
-
-
-
-
-int PIOc_get_vars (int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count, const PIO_Offset *stride, void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARS;
- ibufcnt = bufcount;
- ibuftype = buftype;
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_vars(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_vars(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_vars(file->fh, varid, start, count, stride, buf, bufcount, buftype);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_vars_all(file->fh, varid, start, count, stride, buf, bufcount, buftype);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
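The removed PIOc_get_var*, get_var1*, get_vara*, and get_vars* routines above all share one shape: when the async interface is active the compute master first forwards a PIO_MSG id to the IO root over the union communicator, the IO tasks then perform the actual read (for serial netCDF only the IO master touches the file), and the buffer is broadcast so every task ends up with the data. The following is a minimal sketch of that read-then-broadcast step, assuming the types and helpers declared in pio.h/pio_internal.h are in scope; get_var_bcast_sketch is an illustrative name, not a library function.

    static int get_var_bcast_sketch(file_desc_t *file, int varid, void *buf,
                                    int nelems, MPI_Datatype elemtype)
    {
        iosystem_desc_t *ios = file->iosystem;
        int ierr = PIO_NOERR;
        bool bcast = false;

        if (ios->ioproc)
        {
            /* Serial netCDF: only the IO master reads the file. */
            bcast = true;
            if (ios->iomaster)
                ierr = nc_get_var(file->fh, varid, buf);
        }

        /* Agree on the error status across tasks. */
        ierr = check_netcdf(file, ierr, __FILE__, __LINE__);

        /* Everyone else receives the data from the IO root. */
        if (ios->async_interface || bcast || ios->num_iotasks < ios->num_comptasks)
            MPI_Bcast(buf, nelems, elemtype, ios->ioroot, ios->my_comm);

        return ierr;
    }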
diff --git a/cime/externals/pio2/src/clib/pio_internal.h b/cime/externals/pio2/src/clib/pio_internal.h
index 40d7e12f4509..f7a3c9ddf220 100644
--- a/cime/externals/pio2/src/clib/pio_internal.h
+++ b/cime/externals/pio2/src/clib/pio_internal.h
@@ -1,19 +1,7 @@
-/**
- * @file
- * Private headers and defines for the PIO C interface.
- * @author Jim Edwards
- * @date 2014
- *
- * @see http://code.google.com/p/parallelio/
- */
-
#ifndef __PIO_INTERNAL__
#define __PIO_INTERNAL__
-
#include
-
-/* It seems that some versions of openmpi fail to define
- * MPI_OFFSET. */
+// It seems that some versions of openmpi fail to define MPI_OFFSET
#ifdef OMPI_OFFSET_DATATYPE
#ifndef MPI_OFFSET
#define MPI_OFFSET OMPI_OFFSET_DATATYPE
@@ -29,22 +17,17 @@
#include
#endif
-#if PIO_ENABLE_LOGGING
-void pio_log(int severity, const char *fmt, ...);
-#define LOG(e) pio_log e
-#else
-#define LOG(e)
-#endif /* PIO_ENABLE_LOGGING */
-#define max(a,b) \
- ({ __typeof__ (a) _a = (a); \
- __typeof__ (b) _b = (b); \
- _a > _b ? _a : _b; })
+#define max(a,b) \
+ ({ __typeof__ (a) _a = (a); \
+ __typeof__ (b) _b = (b); \
+ _a > _b ? _a : _b; })
+
+#define min(a,b) \
+ ({ __typeof__ (a) _a = (a); \
+ __typeof__ (b) _b = (b); \
+ _a < _b ? _a : _b; })
-#define min(a,b) \
- ({ __typeof__ (a) _a = (a); \
- __typeof__ (b) _b = (b); \
- _a < _b ? _a : _b; })
#define MAX_GATHER_BLOCK_SIZE 0
#define PIO_REQUEST_ALLOC_CHUNK 16
@@ -53,116 +36,121 @@ void pio_log(int severity, const char *fmt, ...);
extern "C" {
#endif
- extern PIO_Offset PIO_BUFFER_SIZE_LIMIT;
- extern bool PIO_Save_Decomps;
+extern PIO_Offset PIO_BUFFER_SIZE_LIMIT;
+extern bool PIO_Save_Decomps;
- /** Used to sort map points in the subset rearranger. */
- typedef struct mapsort
- {
- int rfrom;
- PIO_Offset soffset;
- PIO_Offset iomap;
- } mapsort;
- /** swapm defaults. */
- typedef struct pio_swapm_defaults
- {
- int nreqs;
- bool handshake;
- bool isend;
- } pio_swapm_defaults;
+/**
+ ** @brief Used to sort map points in the subset rearranger
+*/
+typedef struct mapsort
+{
+ int rfrom;
+ PIO_Offset soffset;
+ PIO_Offset iomap;
+} mapsort;
+
+/**
+ * @brief swapm defaults.
+ *
+*/
+typedef struct pio_swapm_defaults
+{
+ int nreqs;
+ bool handshake;
+ bool isend;
+} pio_swapm_defaults;
- void pio_get_env(void);
- int pio_add_to_iodesc_list(io_desc_t *iodesc);
- io_desc_t *pio_get_iodesc_from_id(int ioid);
- int pio_delete_iodesc_from_list(int ioid);
- file_desc_t *pio_get_file_from_id(int ncid);
- int pio_delete_file_from_list(int ncid);
- void pio_add_to_file_list(file_desc_t *file);
- void pio_push_request(file_desc_t *file, int request);
+ void pio_get_env(void);
+ int pio_add_to_iodesc_list(io_desc_t *iodesc);
+ io_desc_t *pio_get_iodesc_from_id(int ioid);
+ int pio_delete_iodesc_from_list(int ioid);
- iosystem_desc_t *pio_get_iosystem_from_id(int iosysid);
- int pio_add_to_iosystem_list(iosystem_desc_t *ios);
+ file_desc_t *pio_get_file_from_id(int ncid);
+ int pio_delete_file_from_list(int ncid);
+ void pio_add_to_file_list(file_desc_t *file);
+ void pio_push_request(file_desc_t *file, int request);
+
+ iosystem_desc_t *pio_get_iosystem_from_id(int iosysid);
+ int pio_add_to_iosystem_list(iosystem_desc_t *ios);
- int check_netcdf(file_desc_t *file,const int status, const char *fname, const int line);
- int iotype_error(const int iotype, const char *fname, const int line);
- void piodie(const char *msg,const char *fname, const int line);
- void pioassert(bool exp, const char *msg,const char *fname, const int line);
- int CalcStartandCount(const int basetype, const int ndims, const int *gdims, const int num_io_procs,
- const int myiorank, PIO_Offset *start, PIO_Offset *kount);
- void CheckMPIReturn(const int ierr,const char file[],const int line);
- int pio_fc_gather( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
- void *recvbuf, const int recvcnt, const MPI_Datatype recvtype, const int root,
- MPI_Comm comm, const int flow_cntl);
- int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
- void *recvbuf, const int recvcnts[], const int recvdispl[], const MPI_Datatype recvtype, const int root,
- MPI_Comm comm, const int flow_cntl);
+ int check_netcdf(file_desc_t *file,const int status, const char *fname, const int line);
+ int iotype_error(const int iotype, const char *fname, const int line);
+ void piodie(const char *msg,const char *fname, const int line);
+ void pioassert(bool exp, const char *msg,const char *fname, const int line);
+ int CalcStartandCount(const int basetype, const int ndims, const int *gdims, const int num_io_procs,
+ const int myiorank, PIO_Offset *start, PIO_Offset *kount);
+ void CheckMPIReturn(const int ierr,const char file[],const int line);
+ int pio_fc_gather( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
+ void *recvbuf, const int recvcnt, const MPI_Datatype recvtype, const int root,
+ MPI_Comm comm, const int flow_cntl);
+ int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
+ void *recvbuf, const int recvcnts[], const int recvdispl[], const MPI_Datatype recvtype, const int root,
+ MPI_Comm comm, const int flow_cntl);
- int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
- void *recvbuf, const int recvcnts[], const int rdispls[], const MPI_Datatype recvtype, const int root,
- MPI_Comm comm, const int flow_cntl);
+ int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
+ void *recvbuf, const int recvcnts[], const int rdispls[], const MPI_Datatype recvtype, const int root,
+ MPI_Comm comm, const int flow_cntl);
- int pio_swapm(void *sndbuf, int sndlths[], int sdispls[], MPI_Datatype stypes[],
- void *rcvbuf, int rcvlths[], int rdispls[], MPI_Datatype rtypes[],
- MPI_Comm comm, const bool handshake, bool isend, const int max_requests);
+ int pio_swapm(void *sndbuf, int sndlths[], int sdispls[], MPI_Datatype stypes[],
+ void *rcvbuf, int rcvlths[], int rdispls[], MPI_Datatype rtypes[],
+ MPI_Comm comm, const bool handshake, bool isend, const int max_requests);
- long long lgcd_array(int nain, long long*ain);
+ long long lgcd_array(int nain, long long*ain);
- void PIO_Offset_size(MPI_Datatype *dtype, int *tsize);
- PIO_Offset GCDblocksize(const int arrlen, const PIO_Offset arr_in[]);
+ void PIO_Offset_size(MPI_Datatype *dtype, int *tsize);
+ PIO_Offset GCDblocksize(const int arrlen, const PIO_Offset arr_in[]);
- int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offset compmap[], const int gsize[],
- const int ndim, io_desc_t *iodesc);
+ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offset compmap[], const int gsize[],
+ const int ndim, io_desc_t *iodesc);
- int box_rearrange_create(const iosystem_desc_t ios,const int maplen, const PIO_Offset compmap[], const int gsize[],
- const int ndim, io_desc_t *iodesc);
+ int box_rearrange_create(const iosystem_desc_t ios,const int maplen, const PIO_Offset compmap[], const int gsize[],
+ const int ndim, io_desc_t *iodesc);
- int rearrange_io2comp(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
- void *rbuf);
- int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
- void *rbuf, const int nvars);
- int calcdisplace(const int bsize, const int numblocks,const PIO_Offset map[],int displace[]);
- io_desc_t *malloc_iodesc(const int piotype, const int ndims);
- void performance_tune_rearranger(iosystem_desc_t ios, io_desc_t *iodesc);
+ int rearrange_io2comp(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
+ void *rbuf);
+ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
+ void *rbuf, const int nvars);
+ int calcdisplace(const int bsize, const int numblocks,const PIO_Offset map[],int displace[]);
+ io_desc_t *malloc_iodesc(const int piotype, const int ndims);
+ void performance_tune_rearranger(iosystem_desc_t ios, io_desc_t *iodesc);
- int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize);
- void compute_maxIObuffersize(MPI_Comm io_comm, io_desc_t *iodesc);
- io_region *alloc_region(const int ndims);
- int pio_delete_iosystem_from_list(int piosysid);
- int gcd(int a, int b);
- long long lgcd (long long a,long long b );
- int gcd_array(int nain, int *ain);
- void free_region_list(io_region *top);
- void gindex_to_coord(const int ndims, const PIO_Offset gindex, const PIO_Offset gstride[], PIO_Offset *gcoord);
- PIO_Offset coord_to_lindex(const int ndims, const PIO_Offset lcoord[], const PIO_Offset count[]);
+ int flush_output_buffer(file_desc_t *file, bool force, PIO_Offset addsize);
+ void compute_maxIObuffersize(MPI_Comm io_comm, io_desc_t *iodesc);
+ io_region *alloc_region(const int ndims);
+ int pio_delete_iosystem_from_list(int piosysid);
+ int gcd(int a, int b);
+ long long lgcd (long long a,long long b );
+ int gcd_array(int nain, int *ain);
+ void free_region_list(io_region *top);
+ void gindex_to_coord(const int ndims, const PIO_Offset gindex, const PIO_Offset gstride[], PIO_Offset *gcoord);
+ PIO_Offset coord_to_lindex(const int ndims, const PIO_Offset lcoord[], const PIO_Offset count[]);
- int ceil2(const int i);
- int pair(const int np, const int p, const int k);
- int define_iodesc_datatypes(const iosystem_desc_t ios, io_desc_t *iodesc);
+ int ceil2(const int i);
+ int pair(const int np, const int p, const int k);
+ int define_iodesc_datatypes(const iosystem_desc_t ios, io_desc_t *iodesc);
- int create_mpi_datatypes(const MPI_Datatype basetype,const int msgcnt,const PIO_Offset dlen,
- const PIO_Offset mindex[],const int mcount[],int *mfrom, MPI_Datatype mtype[]);
- int compare_offsets(const void *a,const void *b) ;
+ int create_mpi_datatypes(const MPI_Datatype basetype,const int msgcnt,const PIO_Offset dlen,
+ const PIO_Offset mindex[],const int mcount[],int *mfrom, MPI_Datatype mtype[]);
+ int compare_offsets(const void *a,const void *b) ;
- int subset_rearrange_create(const iosystem_desc_t ios, int maplen, PIO_Offset compmap[],
- const int gsize[], const int ndims, io_desc_t *iodesc);
- void print_trace (FILE *fp);
- void cn_buffer_report(iosystem_desc_t ios, bool collective);
- void compute_buffer_init(iosystem_desc_t ios);
- void free_cn_buffer_pool(iosystem_desc_t ios);
- void flush_buffer(int ncid, wmulti_buffer *wmb, bool flushtodisk);
- void piomemerror(iosystem_desc_t ios, size_t req, char *fname, const int line);
- void compute_maxaggregate_bytes(const iosystem_desc_t ios, io_desc_t *iodesc);
- int check_mpi(file_desc_t *file, const int mpierr, const char *filename,
- const int line);
+ int subset_rearrange_create(const iosystem_desc_t ios, int maplen, PIO_Offset compmap[],
+ const int gsize[], const int ndims, io_desc_t *iodesc);
+ void print_trace (FILE *fp);
+ void cn_buffer_report(iosystem_desc_t ios, bool collective);
+ void compute_buffer_init(iosystem_desc_t ios);
+ void free_cn_buffer_pool(iosystem_desc_t ios);
+void flush_buffer(int ncid, wmulti_buffer *wmb, bool flushtodisk);
+ void piomemerror(iosystem_desc_t ios, size_t req, char *fname, const int line);
+ void compute_maxaggregate_bytes(const iosystem_desc_t ios, io_desc_t *iodesc);
#ifdef BGQ
- void identity(MPI_Comm comm, int *iotask);
- void determineiotasks(const MPI_Comm comm, int *numiotasks,int *base, int *stride, int *rearr,
- bool *iamIOtask);
+ void identity(MPI_Comm comm, int *iotask);
+ void determineiotasks(const MPI_Comm comm, int *numiotasks,int *base, int *stride, int *rearr,
+ bool *iamIOtask);
#endif
@@ -170,215 +158,214 @@ extern "C" {
}
#endif
-/** These are the messages that can be sent over the intercomm when
- * async is being used. */
-enum PIO_MSG
-{
- PIO_MSG_OPEN_FILE,
- PIO_MSG_CREATE_FILE,
- PIO_MSG_INQ_ATT,
- PIO_MSG_INQ_FORMAT,
- PIO_MSG_INQ_VARID,
- PIO_MSG_DEF_VAR,
- PIO_MSG_INQ_VAR,
- PIO_MSG_PUT_ATT_DOUBLE,
- PIO_MSG_PUT_ATT_INT,
- PIO_MSG_RENAME_ATT,
- PIO_MSG_DEL_ATT,
- PIO_MSG_INQ,
- PIO_MSG_GET_ATT_TEXT,
- PIO_MSG_GET_ATT_SHORT,
- PIO_MSG_PUT_ATT_LONG,
- PIO_MSG_REDEF,
- PIO_MSG_SET_FILL,
- PIO_MSG_ENDDEF,
- PIO_MSG_RENAME_VAR,
- PIO_MSG_PUT_ATT_SHORT,
- PIO_MSG_PUT_ATT_TEXT,
- PIO_MSG_INQ_ATTNAME,
- PIO_MSG_GET_ATT_ULONGLONG,
- PIO_MSG_GET_ATT_USHORT,
- PIO_MSG_PUT_ATT_ULONGLONG,
- PIO_MSG_GET_ATT_UINT,
- PIO_MSG_GET_ATT_LONGLONG,
- PIO_MSG_PUT_ATT_SCHAR,
- PIO_MSG_PUT_ATT_FLOAT,
- PIO_MSG_RENAME_DIM,
- PIO_MSG_GET_ATT_LONG,
- PIO_MSG_INQ_DIM,
- PIO_MSG_INQ_DIMID,
- PIO_MSG_PUT_ATT_USHORT,
- PIO_MSG_GET_ATT_FLOAT,
- PIO_MSG_SYNC,
- PIO_MSG_PUT_ATT_LONGLONG,
- PIO_MSG_PUT_ATT_UINT,
- PIO_MSG_GET_ATT_SCHAR,
- PIO_MSG_INQ_ATTID,
- PIO_MSG_DEF_DIM,
- PIO_MSG_GET_ATT_INT,
- PIO_MSG_GET_ATT_DOUBLE,
- PIO_MSG_PUT_ATT_UCHAR,
- PIO_MSG_GET_ATT_UCHAR,
- PIO_MSG_PUT_VARS_UCHAR,
- PIO_MSG_GET_VAR1_SCHAR,
- PIO_MSG_GET_VARS_ULONGLONG,
- PIO_MSG_GET_VARM_UCHAR,
- PIO_MSG_GET_VARM_SCHAR,
- PIO_MSG_GET_VARS_SHORT,
- PIO_MSG_GET_VAR_DOUBLE,
- PIO_MSG_GET_VARA_DOUBLE,
- PIO_MSG_GET_VAR_INT,
- PIO_MSG_GET_VAR_USHORT,
- PIO_MSG_PUT_VARS_USHORT,
- PIO_MSG_GET_VARA_TEXT,
- PIO_MSG_PUT_VARS_ULONGLONG,
- PIO_MSG_GET_VARA_INT,
- PIO_MSG_PUT_VARM,
- PIO_MSG_GET_VAR1_FLOAT,
- PIO_MSG_GET_VAR1_SHORT,
- PIO_MSG_GET_VARS_INT,
- PIO_MSG_PUT_VARS_UINT,
- PIO_MSG_GET_VAR_TEXT,
- PIO_MSG_GET_VARM_DOUBLE,
- PIO_MSG_PUT_VARM_UCHAR,
- PIO_MSG_PUT_VAR_USHORT,
- PIO_MSG_GET_VARS_SCHAR,
- PIO_MSG_GET_VARA_USHORT,
- PIO_MSG_PUT_VAR1_LONGLONG,
- PIO_MSG_PUT_VARA_UCHAR,
- PIO_MSG_PUT_VARM_SHORT,
- PIO_MSG_PUT_VAR1_LONG,
- PIO_MSG_PUT_VARS_LONG,
- PIO_MSG_GET_VAR1_USHORT,
- PIO_MSG_PUT_VAR_SHORT,
- PIO_MSG_PUT_VARA_INT,
- PIO_MSG_GET_VAR_FLOAT,
- PIO_MSG_PUT_VAR1_USHORT,
- PIO_MSG_PUT_VARA_TEXT,
- PIO_MSG_PUT_VARM_TEXT,
- PIO_MSG_GET_VARS_UCHAR,
- PIO_MSG_GET_VAR,
- PIO_MSG_PUT_VARM_USHORT,
- PIO_MSG_GET_VAR1_LONGLONG,
- PIO_MSG_GET_VARS_USHORT,
- PIO_MSG_GET_VAR_LONG,
- PIO_MSG_GET_VAR1_DOUBLE,
- PIO_MSG_PUT_VAR_ULONGLONG,
- PIO_MSG_PUT_VAR_INT,
- PIO_MSG_GET_VARA_UINT,
- PIO_MSG_PUT_VAR_LONGLONG,
- PIO_MSG_GET_VARS_LONGLONG,
- PIO_MSG_PUT_VAR_SCHAR,
- PIO_MSG_PUT_VAR_UINT,
- PIO_MSG_PUT_VAR,
- PIO_MSG_PUT_VARA_USHORT,
- PIO_MSG_GET_VAR_LONGLONG,
- PIO_MSG_GET_VARA_SHORT,
- PIO_MSG_PUT_VARS_SHORT,
- PIO_MSG_PUT_VARA_UINT,
- PIO_MSG_PUT_VARA_SCHAR,
- PIO_MSG_PUT_VARM_ULONGLONG,
- PIO_MSG_PUT_VAR1_UCHAR,
- PIO_MSG_PUT_VARM_INT,
- PIO_MSG_PUT_VARS_SCHAR,
- PIO_MSG_GET_VARA_LONG,
- PIO_MSG_PUT_VAR1,
- PIO_MSG_GET_VAR1_INT,
- PIO_MSG_GET_VAR1_ULONGLONG,
- PIO_MSG_GET_VAR_UCHAR,
- PIO_MSG_PUT_VARA_FLOAT,
- PIO_MSG_GET_VARA_UCHAR,
- PIO_MSG_GET_VARS_FLOAT,
- PIO_MSG_PUT_VAR1_FLOAT,
- PIO_MSG_PUT_VARM_FLOAT,
- PIO_MSG_PUT_VAR1_TEXT,
- PIO_MSG_PUT_VARS_TEXT,
- PIO_MSG_PUT_VARM_LONG,
- PIO_MSG_GET_VARS_LONG,
- PIO_MSG_PUT_VARS_DOUBLE,
- PIO_MSG_GET_VAR1,
- PIO_MSG_GET_VAR_UINT,
- PIO_MSG_PUT_VARA_LONGLONG,
- PIO_MSG_GET_VARA,
- PIO_MSG_PUT_VAR_DOUBLE,
- PIO_MSG_GET_VARA_SCHAR,
- PIO_MSG_PUT_VAR_FLOAT,
- PIO_MSG_GET_VAR1_UINT,
- PIO_MSG_GET_VARS_UINT,
- PIO_MSG_PUT_VAR1_ULONGLONG,
- PIO_MSG_PUT_VARM_UINT,
- PIO_MSG_PUT_VAR1_UINT,
- PIO_MSG_PUT_VAR1_INT,
- PIO_MSG_GET_VARA_FLOAT,
- PIO_MSG_GET_VARM_TEXT,
- PIO_MSG_PUT_VARS_FLOAT,
- PIO_MSG_GET_VAR1_TEXT,
- PIO_MSG_PUT_VARA_SHORT,
- PIO_MSG_PUT_VAR1_SCHAR,
- PIO_MSG_PUT_VARA_ULONGLONG,
- PIO_MSG_PUT_VARM_DOUBLE,
- PIO_MSG_GET_VARM_INT,
- PIO_MSG_PUT_VARA,
- PIO_MSG_PUT_VARA_LONG,
- PIO_MSG_GET_VARM_UINT,
- PIO_MSG_GET_VARM,
- PIO_MSG_PUT_VAR1_DOUBLE,
- PIO_MSG_GET_VARS_DOUBLE,
- PIO_MSG_GET_VARA_LONGLONG,
- PIO_MSG_GET_VAR_ULONGLONG,
- PIO_MSG_PUT_VARM_SCHAR,
- PIO_MSG_GET_VARA_ULONGLONG,
- PIO_MSG_GET_VAR_SHORT,
- PIO_MSG_GET_VARM_FLOAT,
- PIO_MSG_PUT_VAR_TEXT,
- PIO_MSG_PUT_VARS_INT,
- PIO_MSG_GET_VAR1_LONG,
- PIO_MSG_GET_VARM_LONG,
- PIO_MSG_GET_VARM_USHORT,
- PIO_MSG_PUT_VAR1_SHORT,
- PIO_MSG_PUT_VARS_LONGLONG,
- PIO_MSG_GET_VARM_LONGLONG,
- PIO_MSG_GET_VARS_TEXT,
- PIO_MSG_PUT_VARA_DOUBLE,
- PIO_MSG_PUT_VARS,
- PIO_MSG_PUT_VAR_UCHAR,
- PIO_MSG_GET_VAR1_UCHAR,
- PIO_MSG_PUT_VAR_LONG,
- PIO_MSG_GET_VARS,
- PIO_MSG_GET_VARM_SHORT,
- PIO_MSG_GET_VARM_ULONGLONG,
- PIO_MSG_PUT_VARM_LONGLONG,
- PIO_MSG_GET_VAR_SCHAR,
- PIO_MSG_GET_ATT_UBYTE,
- PIO_MSG_PUT_ATT_STRING,
- PIO_MSG_GET_ATT_STRING,
- PIO_MSG_PUT_ATT_UBYTE,
- PIO_MSG_INQ_VAR_FILL,
- PIO_MSG_DEF_VAR_FILL,
- PIO_MSG_DEF_VAR_DEFLATE,
- PIO_MSG_INQ_VAR_DEFLATE,
- PIO_MSG_INQ_VAR_SZIP,
- PIO_MSG_DEF_VAR_FLETCHER32,
- PIO_MSG_INQ_VAR_FLETCHER32,
- PIO_MSG_DEF_VAR_CHUNKING,
- PIO_MSG_INQ_VAR_CHUNKING,
- PIO_MSG_DEF_VAR_ENDIAN,
- PIO_MSG_INQ_VAR_ENDIAN,
- PIO_MSG_SET_CHUNK_CACHE,
- PIO_MSG_GET_CHUNK_CACHE,
- PIO_MSG_SET_VAR_CHUNK_CACHE,
- PIO_MSG_GET_VAR_CHUNK_CACHE,
- PIO_MSG_INITDECOMP_DOF,
- PIO_MSG_WRITEDARRAY,
- PIO_MSG_READDARRAY,
- PIO_MSG_SETERRORHANDLING,
- PIO_MSG_FREEDECOMP,
- PIO_MSG_CLOSE_FILE,
- PIO_MSG_DELETE_FILE,
- PIO_MSG_EXIT,
- PIO_MSG_GET_ATT,
- PIO_MSG_PUT_ATT,
- PIO_MSG_INQ_TYPE
+enum PIO_MSG{
+ PIO_MSG_OPEN_FILE,
+ PIO_MSG_CREATE_FILE,
+ PIO_MSG_INQ_ATT,
+ PIO_MSG_INQ_FORMAT,
+ PIO_MSG_INQ_VARID,
+ PIO_MSG_INQ_VARNATTS,
+ PIO_MSG_DEF_VAR,
+ PIO_MSG_INQ_VAR,
+ PIO_MSG_INQ_VARNAME,
+ PIO_MSG_PUT_ATT_DOUBLE,
+ PIO_MSG_PUT_ATT_INT,
+ PIO_MSG_RENAME_ATT,
+ PIO_MSG_DEL_ATT,
+ PIO_MSG_INQ_NATTS,
+ PIO_MSG_INQ,
+ PIO_MSG_GET_ATT_TEXT,
+ PIO_MSG_GET_ATT_SHORT,
+ PIO_MSG_PUT_ATT_LONG,
+ PIO_MSG_REDEF,
+ PIO_MSG_SET_FILL,
+ PIO_MSG_ENDDEF,
+ PIO_MSG_RENAME_VAR,
+ PIO_MSG_PUT_ATT_SHORT,
+ PIO_MSG_PUT_ATT_TEXT,
+ PIO_MSG_INQ_ATTNAME,
+ PIO_MSG_GET_ATT_ULONGLONG,
+ PIO_MSG_GET_ATT_USHORT,
+ PIO_MSG_PUT_ATT_ULONGLONG,
+ PIO_MSG_INQ_DIMLEN,
+ PIO_MSG_GET_ATT_UINT,
+ PIO_MSG_GET_ATT_LONGLONG,
+ PIO_MSG_PUT_ATT_SCHAR,
+ PIO_MSG_PUT_ATT_FLOAT,
+ PIO_MSG_INQ_NVARS,
+ PIO_MSG_RENAME_DIM,
+ PIO_MSG_INQ_VARNDIMS,
+ PIO_MSG_GET_ATT_LONG,
+ PIO_MSG_INQ_DIM,
+ PIO_MSG_INQ_DIMID,
+ PIO_MSG_INQ_UNLIMDIM,
+ PIO_MSG_INQ_VARDIMID,
+ PIO_MSG_INQ_ATTLEN,
+ PIO_MSG_INQ_DIMNAME,
+ PIO_MSG_PUT_ATT_USHORT,
+ PIO_MSG_GET_ATT_FLOAT,
+ PIO_MSG_SYNC,
+ PIO_MSG_PUT_ATT_LONGLONG,
+ PIO_MSG_PUT_ATT_UINT,
+ PIO_MSG_GET_ATT_SCHAR,
+ PIO_MSG_INQ_ATTID,
+ PIO_MSG_DEF_DIM,
+ PIO_MSG_INQ_NDIMS,
+ PIO_MSG_INQ_VARTYPE,
+ PIO_MSG_GET_ATT_INT,
+ PIO_MSG_GET_ATT_DOUBLE,
+ PIO_MSG_INQ_ATTTYPE,
+ PIO_MSG_PUT_ATT_UCHAR,
+ PIO_MSG_GET_ATT_UCHAR,
+ PIO_MSG_PUT_VARS_UCHAR,
+ PIO_MSG_GET_VAR1_SCHAR,
+ PIO_MSG_GET_VARS_ULONGLONG,
+ PIO_MSG_GET_VARM_UCHAR,
+ PIO_MSG_GET_VARM_SCHAR,
+ PIO_MSG_GET_VARS_SHORT,
+ PIO_MSG_GET_VAR_DOUBLE,
+ PIO_MSG_GET_VARA_DOUBLE,
+ PIO_MSG_GET_VAR_INT,
+ PIO_MSG_GET_VAR_USHORT,
+ PIO_MSG_PUT_VARS_USHORT,
+ PIO_MSG_GET_VARA_TEXT,
+ PIO_MSG_PUT_VARS_ULONGLONG,
+ PIO_MSG_GET_VARA_INT,
+ PIO_MSG_PUT_VARM,
+ PIO_MSG_GET_VAR1_FLOAT,
+ PIO_MSG_GET_VAR1_SHORT,
+ PIO_MSG_GET_VARS_INT,
+ PIO_MSG_PUT_VARS_UINT,
+ PIO_MSG_GET_VAR_TEXT,
+ PIO_MSG_GET_VARM_DOUBLE,
+ PIO_MSG_PUT_VARM_UCHAR,
+ PIO_MSG_PUT_VAR_USHORT,
+ PIO_MSG_GET_VARS_SCHAR,
+ PIO_MSG_GET_VARA_USHORT,
+ PIO_MSG_PUT_VAR1_LONGLONG,
+ PIO_MSG_PUT_VARA_UCHAR,
+ PIO_MSG_PUT_VARM_SHORT,
+ PIO_MSG_PUT_VAR1_LONG,
+ PIO_MSG_PUT_VARS_LONG,
+ PIO_MSG_GET_VAR1_USHORT,
+ PIO_MSG_PUT_VAR_SHORT,
+ PIO_MSG_PUT_VARA_INT,
+ PIO_MSG_GET_VAR_FLOAT,
+ PIO_MSG_PUT_VAR1_USHORT,
+ PIO_MSG_PUT_VARA_TEXT,
+ PIO_MSG_PUT_VARM_TEXT,
+ PIO_MSG_GET_VARS_UCHAR,
+ PIO_MSG_GET_VAR,
+ PIO_MSG_PUT_VARM_USHORT,
+ PIO_MSG_GET_VAR1_LONGLONG,
+ PIO_MSG_GET_VARS_USHORT,
+ PIO_MSG_GET_VAR_LONG,
+ PIO_MSG_GET_VAR1_DOUBLE,
+ PIO_MSG_PUT_VAR_ULONGLONG,
+ PIO_MSG_PUT_VAR_INT,
+ PIO_MSG_GET_VARA_UINT,
+ PIO_MSG_PUT_VAR_LONGLONG,
+ PIO_MSG_GET_VARS_LONGLONG,
+ PIO_MSG_PUT_VAR_SCHAR,
+ PIO_MSG_PUT_VAR_UINT,
+ PIO_MSG_PUT_VAR,
+ PIO_MSG_PUT_VARA_USHORT,
+ PIO_MSG_GET_VAR_LONGLONG,
+ PIO_MSG_GET_VARA_SHORT,
+ PIO_MSG_PUT_VARS_SHORT,
+ PIO_MSG_PUT_VARA_UINT,
+ PIO_MSG_PUT_VARA_SCHAR,
+ PIO_MSG_PUT_VARM_ULONGLONG,
+ PIO_MSG_PUT_VAR1_UCHAR,
+ PIO_MSG_PUT_VARM_INT,
+ PIO_MSG_PUT_VARS_SCHAR,
+ PIO_MSG_GET_VARA_LONG,
+ PIO_MSG_PUT_VAR1,
+ PIO_MSG_GET_VAR1_INT,
+ PIO_MSG_GET_VAR1_ULONGLONG,
+ PIO_MSG_GET_VAR_UCHAR,
+ PIO_MSG_PUT_VARA_FLOAT,
+ PIO_MSG_GET_VARA_UCHAR,
+ PIO_MSG_GET_VARS_FLOAT,
+ PIO_MSG_PUT_VAR1_FLOAT,
+ PIO_MSG_PUT_VARM_FLOAT,
+ PIO_MSG_PUT_VAR1_TEXT,
+ PIO_MSG_PUT_VARS_TEXT,
+ PIO_MSG_PUT_VARM_LONG,
+ PIO_MSG_GET_VARS_LONG,
+ PIO_MSG_PUT_VARS_DOUBLE,
+ PIO_MSG_GET_VAR1,
+ PIO_MSG_GET_VAR_UINT,
+ PIO_MSG_PUT_VARA_LONGLONG,
+ PIO_MSG_GET_VARA,
+ PIO_MSG_PUT_VAR_DOUBLE,
+ PIO_MSG_GET_VARA_SCHAR,
+ PIO_MSG_PUT_VAR_FLOAT,
+ PIO_MSG_GET_VAR1_UINT,
+ PIO_MSG_GET_VARS_UINT,
+ PIO_MSG_PUT_VAR1_ULONGLONG,
+ PIO_MSG_PUT_VARM_UINT,
+ PIO_MSG_PUT_VAR1_UINT,
+ PIO_MSG_PUT_VAR1_INT,
+ PIO_MSG_GET_VARA_FLOAT,
+ PIO_MSG_GET_VARM_TEXT,
+ PIO_MSG_PUT_VARS_FLOAT,
+ PIO_MSG_GET_VAR1_TEXT,
+ PIO_MSG_PUT_VARA_SHORT,
+ PIO_MSG_PUT_VAR1_SCHAR,
+ PIO_MSG_PUT_VARA_ULONGLONG,
+ PIO_MSG_PUT_VARM_DOUBLE,
+ PIO_MSG_GET_VARM_INT,
+ PIO_MSG_PUT_VARA,
+ PIO_MSG_PUT_VARA_LONG,
+ PIO_MSG_GET_VARM_UINT,
+ PIO_MSG_GET_VARM,
+ PIO_MSG_PUT_VAR1_DOUBLE,
+ PIO_MSG_GET_VARS_DOUBLE,
+ PIO_MSG_GET_VARA_LONGLONG,
+ PIO_MSG_GET_VAR_ULONGLONG,
+ PIO_MSG_PUT_VARM_SCHAR,
+ PIO_MSG_GET_VARA_ULONGLONG,
+ PIO_MSG_GET_VAR_SHORT,
+ PIO_MSG_GET_VARM_FLOAT,
+ PIO_MSG_PUT_VAR_TEXT,
+ PIO_MSG_PUT_VARS_INT,
+ PIO_MSG_GET_VAR1_LONG,
+ PIO_MSG_GET_VARM_LONG,
+ PIO_MSG_GET_VARM_USHORT,
+ PIO_MSG_PUT_VAR1_SHORT,
+ PIO_MSG_PUT_VARS_LONGLONG,
+ PIO_MSG_GET_VARM_LONGLONG,
+ PIO_MSG_GET_VARS_TEXT,
+ PIO_MSG_PUT_VARA_DOUBLE,
+ PIO_MSG_PUT_VARS,
+ PIO_MSG_PUT_VAR_UCHAR,
+ PIO_MSG_GET_VAR1_UCHAR,
+ PIO_MSG_PUT_VAR_LONG,
+ PIO_MSG_GET_VARS,
+ PIO_MSG_GET_VARM_SHORT,
+ PIO_MSG_GET_VARM_ULONGLONG,
+ PIO_MSG_PUT_VARM_LONGLONG,
+ PIO_MSG_GET_VAR_SCHAR,
+ PIO_MSG_GET_ATT_UBYTE,
+ PIO_MSG_PUT_ATT_STRING,
+ PIO_MSG_GET_ATT_STRING,
+ PIO_MSG_PUT_ATT_UBYTE,
+ PIO_MSG_INQ_VAR_FILL,
+ PIO_MSG_DEF_VAR_FILL,
+ PIO_MSG_DEF_VAR_DEFLATE,
+ PIO_MSG_INQ_VAR_DEFLATE,
+ PIO_MSG_INQ_VAR_SZIP,
+ PIO_MSG_DEF_VAR_FLETCHER32,
+ PIO_MSG_INQ_VAR_FLETCHER32,
+ PIO_MSG_DEF_VAR_CHUNKING,
+ PIO_MSG_INQ_VAR_CHUNKING,
+ PIO_MSG_DEF_VAR_ENDIAN,
+ PIO_MSG_INQ_VAR_ENDIAN,
+ PIO_MSG_SET_CHUNK_CACHE,
+ PIO_MSG_GET_CHUNK_CACHE,
+ PIO_MSG_SET_VAR_CHUNK_CACHE,
+ PIO_MSG_GET_VAR_CHUNK_CACHE
};
-#endif /* __PIO_INTERNAL__ */
+#endif
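Both versions of the PIO_MSG enum above name the operations that a compute task can ask the IO tasks to perform when the async interface is active. In the code this patch removes, the compute master sends one of these ids to the IO root over the union communicator (see the MPI_Send in the removed PIOc_get_var functions earlier in this diff), and a receive loop on the IO side dispatches to the matching handler in pio_msg.c (deleted below; the loop itself is not shown in this excerpt). The sketch below is a hedged illustration of that dispatch, not the library's actual loop; pio_msg_dispatch_sketch and the use of MPI_ANY_SOURCE are illustrative choices.

    static int pio_msg_dispatch_sketch(iosystem_desc_t *ios)
    {
        int msg;

        for (;;)
        {
            /* Matches MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm)
             * on the compute side. */
            if (MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 1, ios->union_comm,
                         MPI_STATUS_IGNORE))
                return PIO_EIO;

            switch (msg)
            {
            case PIO_MSG_CREATE_FILE:
                create_file_handler(ios);      /* defined in pio_msg.c below */
                break;
            case PIO_MSG_INQ:
                inq_handler(ios);
                break;
            case PIO_MSG_EXIT:                 /* present in the old enum only */
                return PIO_NOERR;
            default:
                break;                         /* remaining handlers elided */
            }
        }
    }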
diff --git a/cime/externals/pio2/src/clib/pio_lists.c b/cime/externals/pio2/src/clib/pio_lists.c
index efa0868c4441..511918b52b53 100644
--- a/cime/externals/pio2/src/clib/pio_lists.c
+++ b/cime/externals/pio2/src/clib/pio_lists.c
@@ -9,34 +9,37 @@ static iosystem_desc_t *pio_iosystem_list=NULL;
static file_desc_t *pio_file_list = NULL;
static file_desc_t *current_file=NULL;
-/** Add a new entry to the global list of open files.
- *
- * @param file pointer to the file_desc_t struct for the new file.
-*/
void pio_add_to_file_list(file_desc_t *file)
{
file_desc_t *cfile;
+ int cnt=-1;
+  // On IO tasks the fh returned from netcdf should be unique; on non-IO tasks we
+  // need to generate a unique fh. We do this with cnt, a negative index.
- /* This file will be at the end of the list, and have no next. */
file->next = NULL;
-
- /* Get a pointer to the global list of files. */
cfile = pio_file_list;
-
- /* Keep a global pointer to the current file. */
current_file = file;
-
- /* If there is nothing in the list, then file will be the first
- * entry. Otherwise, move to end of the list. */
- if (!cfile)
- pio_file_list = file;
- else
- {
- while (cfile->next)
- cfile = cfile->next;
- cfile->next = file;
+ if(cfile==NULL){
+ pio_file_list = file;
+ }else{
+ cnt = min(cnt,cfile->fh-1);
+ while(cfile->next != NULL)
+ {
+ cfile=cfile->next;
+ cnt = min(cnt,cfile->fh-1);
+ }
+ cfile->next = file;
}
+ if(! file->iosystem->ioproc || ((file->iotype != PIO_IOTYPE_PNETCDF &&
+ file->iotype != PIO_IOTYPE_NETCDF4P) &&
+ file->iosystem->io_rank>0))
+ file->fh = cnt;
+
+ cfile = pio_file_list;
+
}
+
+
file_desc_t *pio_get_file_from_id(int ncid)
{
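The rewritten pio_add_to_file_list() above keeps file handles unique even on tasks that never receive a real id from the underlying library: it walks the list, tracks min(cnt, cfile->fh - 1), and assigns that negative placeholder to the new file on non-IO tasks. A small self-contained illustration of how the placeholder evolves; the helper name and the handle values are made up.

    #include <stdio.h>

    /* Return a handle one less than the smallest handle already in use,
     * mirroring the min(cnt, cfile->fh - 1) walk in pio_add_to_file_list(). */
    static int next_placeholder_fh(const int *fh, int nfiles)
    {
        int cnt = -1;
        for (int i = 0; i < nfiles; i++)
            if (fh[i] - 1 < cnt)
                cnt = fh[i] - 1;
        return cnt;
    }

    int main(void)
    {
        int open_fh[] = { 65536, -1, -2 };   /* one real netCDF id, two placeholders */
        printf("%d\n", next_placeholder_fh(open_fh, 3));   /* prints -3 */
        return 0;
    }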
diff --git a/cime/externals/pio2/src/clib/pio_msg.c b/cime/externals/pio2/src/clib/pio_msg.c
deleted file mode 100644
index e6220d850cde..000000000000
--- a/cime/externals/pio2/src/clib/pio_msg.c
+++ /dev/null
@@ -1,2119 +0,0 @@
-/**
- * @file
- * @author Ed Hartnett
- * @date 2016
- * @brief PIO async msg handling
- *
- * @see http://code.google.com/p/parallelio/
- */
-
-#include
-#include
-#include
-
-/* MPI serial builds stub out MPI functions so that the MPI code can
- * work on one processor. This function is missing from our serial MPI
- * implementation, so it is included here. This can be removed after
- * it is added to the MPI serial library. */
-/* #ifdef USE_MPI_SERIAL */
-/* int MPI_Intercomm_merge(MPI_Comm intercomm, int high, MPI_Comm *newintracomm) */
-/* { */
-/* return MPI_SUCCESS; */
-/* } */
-/* #endif /\* USE_MPI_SERIAL *\/ */
-
-#ifdef PIO_ENABLE_LOGGING
-extern int my_rank;
-extern int pio_log_level;
-#endif /* PIO_ENABLE_LOGGING */
-
-/** This function is run on the IO tasks to find netCDF type
- * length. */
-int inq_type_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int xtype;
- char name_present, size_present;
- char *namep = NULL, name[NC_MAX_NAME + 1];
- PIO_Offset *sizep = NULL, size;
- int mpierr;
- int ret;
-
- LOG((1, "inq_type_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&xtype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&size_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
-
- /* Handle null pointer issues. */
- if (name_present)
- namep = name;
- if (size_present)
- sizep = &size;
-
- /* Call the function. */
- if ((ret = PIOc_inq_type(ncid, xtype, namep, sizep)))
- return ret;
-
- LOG((1, "inq_type_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to find netCDF file
- * format. */
-int inq_format_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int *formatp = NULL, format;
- char format_present;
- int mpierr;
- int ret;
-
- LOG((1, "inq_format_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&format_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "inq_format_handler got parameters ncid = %d format_present = %d",
- ncid, format_present));
-
- /* Manage NULL pointers. */
- if (format_present)
- formatp = &format;
-
- /* Call the function. */
- if ((ret = PIOc_inq_format(ncid, formatp)))
- return ret;
-
- if (formatp)
- LOG((2, "inq_format_handler format = %d", *formatp));
- LOG((1, "inq_format_handler succeeded!"));
-
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to create a netCDF file. */
-int create_file_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int len;
- int iotype;
- char *filename;
- int mode;
- int mpierr;
- int ret;
-
- LOG((1, "create_file_handler comproot = %d\n", ios->comproot));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&len, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "create_file_handler got parameter len = %d\n", len));
-  if (!(filename = malloc((len + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)filename, len + 1, MPI_CHAR, 0,
- ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&iotype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&mode, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "create_file_handler got parameters len = %d "
- "filename = %s iotype = %d mode = %d\n",
- len, filename, iotype, mode));
-
- /* Call the create file function. */
- if ((ret = PIOc_createfile(ios->iosysid, &ncid, &iotype, filename, mode)))
- return ret;
-
- /* Free resources. */
- free(filename);
-
- LOG((1, "create_file_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to close a netCDF file. It is
- * only ever run on the IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int close_file_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int mpierr;
- int ret;
-
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "%d close_file_handler\n", my_rank));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
-  LOG((1, "%d close_file_handler got parameter ncid = %d\n", my_rank, ncid));
-
- /* Call the close file function. */
- if ((ret = PIOc_closefile(ncid)))
- return ret;
-
- LOG((1, "close_file_handler succeeded!\n", my_rank));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to inq a netCDF file. It is
- * only ever run on the IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int ndims, nvars, ngatts, unlimdimid;
- int *ndimsp = NULL, *nvarsp = NULL, *ngattsp = NULL, *unlimdimidp = NULL;
- char ndims_present, nvars_present, ngatts_present, unlimdimid_present;
- int mpierr;
- int ret;
-
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "%d inq_handler\n", my_rank));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&ndims_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&nvars_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&ngatts_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&unlimdimid_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "%d inq_handler ndims_present = %d nvars_present = %d ngatts_present = %d unlimdimid_present = %d\n",
- ndims_present, nvars_present, ngatts_present, unlimdimid_present));
-
- /* NULLs passed in to any of the pointers in the original call
- * need to be matched with NULLs here. Assign pointers where
- * non-NULL pointers were passed in. */
- if (ndims_present)
- ndimsp = &ndims;
- if (nvars_present)
- nvarsp = &nvars;
- if (ngatts_present)
- ngattsp = &ngatts;
- if (unlimdimid_present)
- unlimdimidp = &unlimdimid;
-
- /* Call the inq function to get the values. */
- if ((ret = PIOc_inq(ncid, ndimsp, nvarsp, ngattsp, unlimdimidp)))
- return ret;
-
- return PIO_NOERR;
-}
-
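inq_handler above illustrates the presence-flag convention used by every handler in this file: the compute root broadcasts one char per optional output pointer, and the handler substitutes NULL wherever the original caller passed NULL, so the collective PIOc_inq call sees the same argument shape on every task. A reduced sketch of the same idea for just two of the outputs; inq_ndims_nvars_sketch is an illustrative name, not part of the library.

    static int inq_ndims_nvars_sketch(iosystem_desc_t *ios, int ncid)
    {
        char ndims_present, nvars_present;
        int ndims, *ndimsp = NULL;
        int nvars, *nvarsp = NULL;

        /* The compute root tells the IO tasks which outputs the caller wanted. */
        if (MPI_Bcast(&ndims_present, 1, MPI_CHAR, 0, ios->intercomm))
            return PIO_EIO;
        if (MPI_Bcast(&nvars_present, 1, MPI_CHAR, 0, ios->intercomm))
            return PIO_EIO;

        /* Pass real addresses only where the caller did. */
        if (ndims_present)
            ndimsp = &ndims;
        if (nvars_present)
            nvarsp = &nvars;

        return PIOc_inq(ncid, ndimsp, nvarsp, NULL, NULL);
    }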
-/** Do an inq_dim on a netCDF dimension. This function is only run on
- * IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @param msg the message sent by the comp root task.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_dim_handler(iosystem_desc_t *ios, int msg)
-{
- int ncid;
- int dimid;
- char name_present, len_present;
- char *dimnamep = NULL;
- PIO_Offset *dimlenp = NULL;
- char dimname[NC_MAX_NAME + 1];
- PIO_Offset dimlen;
-
- int mpierr;
- int ret;
-
- LOG((1, "inq_dim_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&dimid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&len_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
-  LOG((2, "inq_dim_handler name_present = %d len_present = %d", name_present,
- len_present));
-
- /* Set the non-null pointers. */
- if (name_present)
- dimnamep = dimname;
- if (len_present)
- dimlenp = &dimlen;
-
- /* Call the inq function to get the values. */
- if ((ret = PIOc_inq_dim(ncid, dimid, dimnamep, dimlenp)))
- return ret;
-
- return PIO_NOERR;
-}
-
-/** Do an inq_dimid on a netCDF dimension name. This function is only
- * run on IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_dimid_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int *dimidp = NULL, dimid;
- int mpierr;
- int id_present;
- int ret;
- int namelen;
- char *name;
-
- LOG((1, "inq_dimid_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&id_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "inq_dimid_handler ncid = %d namelen = %d name = %s id_present = %d",
- ncid, namelen, name, id_present));
-
- /* Set non-null pointer. */
- if (id_present)
- dimidp = &dimid;
-
- /* Call the inq_dimid function. */
- if ((ret = PIOc_inq_dimid(ncid, name, dimidp)))
- return ret;
-
- /* Free resources. */
- free(name);
-
- return PIO_NOERR;
-}
-
-/** Handle attribute inquiry operations. This code only runs on IO
- * tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @param msg the message sent by the comp root task.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_att_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int mpierr;
- int ret;
- char *name5;
- int namelen;
- int *op, *ip;
- nc_type xtype, *xtypep = NULL;
- PIO_Offset len, *lenp = NULL;
- char xtype_present, len_present;
-
- LOG((1, "inq_att_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm)))
- return PIO_EIO;
- if (!(name5 = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)name5, namelen + 1, MPI_CHAR, ios->compmaster,
- ios->intercomm)))
-    return PIO_EIO;
- if ((mpierr = MPI_Bcast(&xtype_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&len_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
-
- /* Match NULLs in collective function call. */
- if (xtype_present)
- xtypep = &xtype;
- if (len_present)
- lenp = &len;
-
- /* Call the function to learn about the attribute. */
- if ((ret = PIOc_inq_att(ncid, varid, name5, xtypep, lenp)))
- return ret;
-
- return PIO_NOERR;
-}
-
-/** Handle attribute inquiry operations. This code only runs on IO
- * tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @param msg the message sent by the comp root task.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_attname_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int attnum;
- char name[NC_MAX_NAME + 1], *namep = NULL;
- char name_present;
- int mpierr;
- int ret;
-
- LOG((1, "inq_att_name_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&attnum, 1, MPI_INT, ios->compmaster, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "inq_attname_handler got ncid = %d varid = %d attnum = %d name_present = %d",
- ncid, varid, attnum, name_present));
-
- /* Match NULLs in collective function call. */
- if (name_present)
- namep = name;
-
- /* Call the function to learn about the attribute. */
- if ((ret = PIOc_inq_attname(ncid, varid, attnum, namep)))
- return ret;
-
- return PIO_NOERR;
-}
-
-/** Handle attribute inquiry operations. This code only runs on IO
- * tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @param msg the message sent by the comp root task.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_attid_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int attnum;
- char *name;
- int namelen;
- int id, *idp = NULL;
- char id_present;
- int mpierr;
- int ret;
-
- LOG((1, "inq_attid_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast(name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&id_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "inq_attid_handler got ncid = %d varid = %d attnum = %d id_present = %d",
- ncid, varid, attnum, id_present));
-
- /* Match NULLs in collective function call. */
- if (id_present)
- idp = &id;
-
- /* Call the function to learn about the attribute. */
- if ((ret = PIOc_inq_attid(ncid, varid, name, idp)))
- return ret;
-
- /* Free resources. */
- free(name);
-
- return PIO_NOERR;
-}
-
-/** Handle attribute operations. This code only runs on IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @param msg the message sent by the comp root task.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int att_put_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int mpierr;
- int ierr;
- char *name;
- int namelen;
- PIO_Offset attlen, typelen;
- nc_type atttype;
- int *op, *ip;
- int iotype;
-
- LOG((1, "att_put_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster,
- ios->intercomm);
- if ((mpierr = MPI_Bcast(&atttype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&attlen, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(op = malloc(attlen * typelen)))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)op, attlen * typelen, MPI_BYTE, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "att_put_handler ncid = %d varid = %d namelen = %d name = %s iotype = %d"
- "atttype = %d attlen = %d typelen = %d",
- ncid, varid, namelen, name, iotype, atttype, attlen, typelen));
-
-  /* Call the function to write the attribute. */
- if ((ierr = PIOc_put_att(ncid, varid, name, atttype, attlen, op)))
- return ierr;
- LOG((2, "put_handler called PIOc_put_att, ierr = %d", ierr));
-
- /* Free resources. */
- free(name);
- free(op);
-
- LOG((2, "put_handler complete!"));
- return PIO_NOERR;
-}
-
-/** Handle attribute operations. This code only runs on IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @param msg the message sent by the comp root task.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int att_get_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int mpierr;
- int ierr;
- char *name;
- int namelen;
- PIO_Offset attlen, typelen;
- nc_type atttype;
- int *op, *ip;
- int iotype;
-
- LOG((1, "att_get_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster,
- ios->intercomm);
- if ((mpierr = MPI_Bcast(&iotype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&atttype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&attlen, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "att_get_handler ncid = %d varid = %d namelen = %d name = %s iotype = %d"
- "atttype = %d attlen = %d typelen = %d",
- ncid, varid, namelen, name, iotype, atttype, attlen, typelen));
-
- /* Allocate space for the attribute data. */
- if (!(ip = malloc(attlen * typelen)))
- return PIO_ENOMEM;
-
- /* Call the function to read the attribute. */
- if ((ierr = PIOc_get_att(ncid, varid, name, ip)))
- return ierr;
-
- /* Free resources. */
- free(name);
- free(ip);
-
- return PIO_NOERR;
-}
-
-/** Handle var put operations. This code only runs on IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int put_vars_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int mpierr;
- int ierr;
- char *name;
- int namelen;
- PIO_Offset typelen; /** Length (in bytes) of this type. */
- nc_type xtype; /** Type of the data being written. */
- char start_present, count_present, stride_present;
- PIO_Offset *startp = NULL, *countp = NULL, *stridep = NULL;
- int ndims; /** Number of dimensions. */
- void *buf; /** Buffer for data storage. */
- PIO_Offset num_elem; /** Number of data elements in the buffer. */
-
- LOG((1, "put_vars_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&ndims, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
-
- /* Now we know how big to make these arrays. */
- PIO_Offset start[ndims], count[ndims], stride[ndims];
-
- if ((mpierr = MPI_Bcast(&start_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if (!mpierr && start_present)
- {
- if ((mpierr = MPI_Bcast(start, ndims, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "put_vars_handler getting start[0] = %d ndims = %d", start[0], ndims));
- }
- if ((mpierr = MPI_Bcast(&count_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if (!mpierr && count_present)
- if ((mpierr = MPI_Bcast(count, ndims, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&stride_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if (!mpierr && stride_present)
- if ((mpierr = MPI_Bcast(stride, ndims, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&xtype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&num_elem, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "put_vars_handler ncid = %d varid = %d ndims = %d start_present = %d "
- "count_present = %d stride_present = %d xtype = %d num_elem = %d typelen = %d",
- ncid, varid, ndims, start_present, count_present, stride_present, xtype,
- num_elem, typelen));
-
- for (int d = 0; d < ndims; d++)
- {
- if (start_present)
- LOG((2, "start[%d] = %d\n", d, start[d]));
- if (count_present)
- LOG((2, "count[%d] = %d\n", d, count[d]));
- if (stride_present)
- LOG((2, "stride[%d] = %d\n", d, stride[d]));
- }
-
- /* Allocate room for our data. */
- if (!(buf = malloc(num_elem * typelen)))
- return PIO_ENOMEM;
-
- /* Get the data. */
- if ((mpierr = MPI_Bcast(buf, num_elem * typelen, MPI_BYTE, 0, ios->intercomm)))
- return PIO_EIO;
-
- /* for (int e = 0; e < num_elem; e++) */
- /* LOG((2, "element %d = %d", e, ((int *)buf)[e])); */
-
- /* Set the non-NULL pointers. */
- if (start_present)
- startp = start;
- if (count_present)
- countp = count;
- if (stride_present)
- stridep = stride;
-
- /* Call the function to write the data. */
- switch(xtype)
- {
- case NC_BYTE:
- ierr = PIOc_put_vars_schar(ncid, varid, startp, countp, stridep, buf);
- break;
- case NC_CHAR:
- ierr = PIOc_put_vars_schar(ncid, varid, startp, countp, stridep, buf);
- break;
- case NC_SHORT:
- ierr = PIOc_put_vars_short(ncid, varid, startp, countp, stridep, buf);
- break;
- case NC_INT:
- ierr = PIOc_put_vars_int(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_FLOAT:
- ierr = PIOc_put_vars_float(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_DOUBLE:
- ierr = PIOc_put_vars_double(ncid, varid, startp, countp,
- stridep, buf);
- break;
-#ifdef _NETCDF4
- case NC_UBYTE:
- ierr = PIOc_put_vars_uchar(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_USHORT:
- ierr = PIOc_put_vars_ushort(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_UINT:
- ierr = PIOc_put_vars_uint(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_INT64:
- ierr = PIOc_put_vars_longlong(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_UINT64:
- ierr = PIOc_put_vars_ulonglong(ncid, varid, startp, countp,
- stridep, buf);
- break;
- /* case NC_STRING: */
- /* ierr = PIOc_put_vars_string(ncid, varid, startp, countp, */
- /* stridep, (void *)buf); */
- /* break; */
- /* default:*/
- /* ierr = PIOc_put_vars(ncid, varid, startp, countp, */
- /* stridep, buf); */
-#endif /* _NETCDF4 */
- }
-
- return PIO_NOERR;
-}
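put_vars_handler above never interprets the payload itself: it receives num_elem * typelen raw bytes over the intercomm and then fans out on the netCDF external type to the matching typed PIOc_put_vars_* call. A trimmed, hedged sketch of that fan-out for two of the types; put_dispatch_sketch is an illustrative name, and the real handler covers the full set of types as shown above.

    static int put_dispatch_sketch(int ncid, int varid, nc_type xtype,
                                   const PIO_Offset *startp, const PIO_Offset *countp,
                                   const PIO_Offset *stridep, void *buf)
    {
        switch (xtype)
        {
        case NC_INT:
            return PIOc_put_vars_int(ncid, varid, startp, countp, stridep, buf);
        case NC_DOUBLE:
            return PIOc_put_vars_double(ncid, varid, startp, countp, stridep, buf);
        default:
            return PIO_EBADTYPE;   /* remaining types dispatched as in the handler */
        }
    }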
-
-/** Handle var get operations. This code only runs on IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int get_vars_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int mpierr;
- int ierr;
- char *name;
- int namelen;
- PIO_Offset typelen; /** Length (in bytes) of this type. */
- nc_type xtype; /** Type of the data being written. */
- char start_present, count_present, stride_present;
- PIO_Offset *startp = NULL, *countp = NULL, *stridep = NULL;
- int ndims; /** Number of dimensions. */
- void *buf; /** Buffer for data storage. */
- PIO_Offset num_elem; /** Number of data elements in the buffer. */
-
- LOG((1, "get_vars_handler"));
-
-  /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&ndims, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
-
- /* Now we know how big to make these arrays. */
- PIO_Offset start[ndims], count[ndims], stride[ndims];
-
- if ((mpierr = MPI_Bcast(&start_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if (!mpierr && start_present)
- {
- if ((mpierr = MPI_Bcast(start, ndims, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
-    LOG((1, "get_vars_handler getting start[0] = %d ndims = %d", start[0], ndims));
- }
- if ((mpierr = MPI_Bcast(&count_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if (!mpierr && count_present)
- if ((mpierr = MPI_Bcast(count, ndims, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&stride_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if (!mpierr && stride_present)
- if ((mpierr = MPI_Bcast(stride, ndims, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&xtype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&num_elem, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "get_vars_handler ncid = %d varid = %d ndims = %d start_present = %d "
- "count_present = %d stride_present = %d xtype = %d num_elem = %d typelen = %d",
- ncid, varid, ndims, start_present, count_present, stride_present, xtype,
- num_elem, typelen));
-
- for (int d = 0; d < ndims; d++)
- {
- if (start_present)
- LOG((2, "start[%d] = %d\n", d, start[d]));
- if (count_present)
- LOG((2, "count[%d] = %d\n", d, count[d]));
- if (stride_present)
- LOG((2, "stride[%d] = %d\n", d, stride[d]));
- }
-
- /* Allocate room for our data. */
- if (!(buf = malloc(num_elem * typelen)))
- return PIO_ENOMEM;
-
- /* Set the non-NULL pointers. */
- if (start_present)
- startp = start;
- if (count_present)
- countp = count;
- if (stride_present)
- stridep = stride;
-
- /* Call the function to read the data. */
- switch(xtype)
- {
- case NC_BYTE:
- ierr = PIOc_get_vars_schar(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_CHAR:
- ierr = PIOc_get_vars_schar(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_SHORT:
- ierr = PIOc_get_vars_short(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_INT:
- ierr = PIOc_get_vars_int(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_FLOAT:
- ierr = PIOc_get_vars_float(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_DOUBLE:
- ierr = PIOc_get_vars_double(ncid, varid, startp, countp,
- stridep, buf);
- break;
-#ifdef _NETCDF4
- case NC_UBYTE:
- ierr = PIOc_get_vars_uchar(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_USHORT:
- ierr = PIOc_get_vars_ushort(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_UINT:
- ierr = PIOc_get_vars_uint(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_INT64:
- ierr = PIOc_get_vars_longlong(ncid, varid, startp, countp,
- stridep, buf);
- break;
- case NC_UINT64:
- ierr = PIOc_get_vars_ulonglong(ncid, varid, startp, countp,
- stridep, buf);
- break;
- /* case NC_STRING: */
- /* ierr = PIOc_get_vars_string(ncid, varid, startp, countp, */
- /* stridep, (void *)buf); */
- /* break; */
- /* default:*/
- /* ierr = PIOc_get_vars(ncid, varid, startp, countp, */
- /* stridep, buf); */
-#endif /* _NETCDF4 */
- }
-
- /* Free the data buffer. */
- free(buf);
-
- LOG((1, "get_vars_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** Do an inq_var on a netCDF variable. This function is only run on
- * IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_var_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int mpierr;
- char name_present, xtype_present, ndims_present, dimids_present, natts_present;
- char name[NC_MAX_NAME + 1], *namep = NULL;
- nc_type xtype, *xtypep = NULL;
- int *ndimsp = NULL, *dimidsp = NULL, *nattsp = NULL;
- int ndims, dimids[NC_MAX_DIMS], natts;
- int ret;
-
- LOG((1, "inq_var_handler"));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&xtype_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&ndims_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&dimids_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&natts_present, 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2,"inq_var_handler ncid = %d varid = %d name_present = %d xtype_present = %d ndims_present = %d "
- "dimids_present = %d natts_present = %d\n",
- ncid, varid, name_present, xtype_present, ndims_present, dimids_present, natts_present));
-
- /* Set the non-NULL pointers. */
- if (name_present)
- namep = name;
- if (xtype_present)
- xtypep = &xtype;
- if (ndims_present)
- ndimsp = &ndims;
- if (dimids_present)
- dimidsp = dimids;
- if (natts_present)
- nattsp = &natts;
-
- /* Call the inq function to get the values. */
- if ((ret = PIOc_inq_var(ncid, varid, namep, xtypep, ndimsp, dimidsp, nattsp)))
- return ret;
-
- if (ndims_present)
- LOG((2, "inq_var_handler ndims = %d", ndims));
-
- return PIO_NOERR;
-}
-
-/** Do an inq_varid on a netCDF variable name. This function is only
- * run on IO tasks.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int inq_varid_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int mpierr;
- int ret;
- int namelen;
- char *name;
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
-
- /* Call the inq_dimid function. */
- if ((ret = PIOc_inq_varid(ncid, name, &varid)))
- return ret;
-
- /* Free resources. */
- free(name);
-
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to sync a netCDF file.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int sync_file_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int mpierr;
- int ret;
-
- LOG((1, "sync_file_handler"));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "sync_file_handler got parameter ncid = %d", ncid));
-
- /* Call the sync file function. */
- if ((ret = PIOc_sync(ncid)))
- return ret;
-
- LOG((2, "sync_file_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to enddef or redef a netCDF file.
- *
- * @param ios pointer to the iosystem_desc_t.
- * @return PIO_NOERR for success, error code otherwise.
-*/
-int change_def_file_handler(iosystem_desc_t *ios, int msg)
-{
- int ncid;
- int mpierr;
- int ret;
-
- LOG((1, "change_def_file_handler"));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
-
- /* Call the enddef or redef function and check the result. */
- if ((ret = (msg == PIO_MSG_ENDDEF) ? PIOc_enddef(ncid) : PIOc_redef(ncid)))
- return ret;
-
- LOG((1, "change_def_file_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to define a netCDF
- * variable. */
-int def_var_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int len, namelen;
- int iotype;
- char *name;
- int mode;
- int mpierr;
- int ret;
- int varid;
- nc_type xtype;
- int ndims;
- int *dimids;
-
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "%d def_var_handler comproot = %d\n", my_rank, ios->comproot));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, 0,
- ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&xtype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&ndims, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(dimids = malloc(ndims * sizeof(int))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast(dimids, ndims, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((1, "%d def_var_handler got parameters namelen = %d "
- "name = %s ncid = %d\n",
- my_rank, namelen, name, ncid));
-
- /* Call the define variable function. */
- if ((ret = PIOc_def_var(ncid, name, xtype, ndims, dimids, &varid)))
- return ret;
-
- /* Free resources. */
- free(name);
- free(dimids);
-
- LOG((1, "%d def_var_handler succeeded!\n", my_rank));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to define a netCDF
- * dimension. */
-int def_dim_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int len, namelen;
- int iotype;
- char *name;
- int mode;
- int mpierr;
- int ret;
- int dimid;
-
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "def_dim_handler comproot = %d", ios->comproot));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, 0,
- ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&len, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "def_dim_handler got parameters namelen = %d "
- "name = %s len = %d ncid = %d", namelen, name, len, ncid));
-
- /* Call the define dimension function. */
- if ((ret = PIOc_def_dim(ncid, name, len, &dimid)))
- return ret;
-
- /* Free resources. */
- free(name);
-
- LOG((1, "%d def_dim_handler succeeded!\n", my_rank));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to rename a netCDF
- * dimension. */
-int rename_dim_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int len, namelen;
- int iotype;
- char *name;
- int mode;
- int mpierr;
- int ret;
- int dimid;
- char name1[NC_MAX_NAME + 1];
-
- LOG((1, "rename_dim_handler"));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&dimid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "rename_dim_handler got parameters namelen = %d "
- "name = %s ncid = %d dimid = %d", namelen, name, ncid, dimid));
-
- /* Call the rename dimension function. */
- if ((ret = PIOc_rename_dim(ncid, dimid, name)))
- return ret;
-
- /* Free resources. */
- free(name);
-
- LOG((1, "rename_dim_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to rename a netCDF
- * variable. */
-int rename_var_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int len, namelen;
- int iotype;
- char *name;
- int mode;
- int mpierr;
- int ret;
- int varid;
- char name1[NC_MAX_NAME + 1];
-
- LOG((1, "rename_var_handler"));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "rename_var_handler got parameters namelen = %d "
- "name = %s ncid = %d varid = %d", namelen, name, ncid, varid));
-
- /* Call the rename variable function. */
- if ((ret = PIOc_rename_var(ncid, varid, name)))
- return ret;
-
- /* Free resources. */
- free(name);
-
- LOG((1, "rename_var_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to rename a netCDF
- * attribute. */
-int rename_att_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int namelen, newnamelen;
- char *name, *newname;
- int mpierr;
- int ret;
-
- LOG((1, "rename_att_handler"));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast(name, namelen + 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&newnamelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(newname = malloc((newnamelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast(newname, newnamelen + 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "rename_att_handler got parameters namelen = %d name = %s ncid = %d varid = %d "
- "newnamelen = %d newname = %s", namelen, name, ncid, varid, newnamelen, newname));
-
- /* Call the rename attribute function. */
- if ((ret = PIOc_rename_att(ncid, varid, name, newname)))
- return ret;
-
- /* Free resources. */
- free(name);
- free(newname);
-
- LOG((1, "rename_att_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to delete a netCDF
- * attribute. */
-int delete_att_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int varid;
- int namelen, newnamelen;
- char *name, *newname;
- int mpierr;
- int ret;
-
- LOG((1, "delete_att_handler"));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&ncid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&varid, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(name = malloc((namelen + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast(name, namelen + 1, MPI_CHAR, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "delete_att_handler namelen = %d name = %s ncid = %d varid = %d ",
- namelen, name, ncid, varid));
-
- /* Call the delete attribute function. */
- if ((ret = PIOc_del_att(ncid, varid, name)))
- return ret;
-
- /* Free resources. */
- free(name);
-
- LOG((1, "delete_att_handler succeeded!"));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to open a netCDF file.
- *
- * @param ios pointer to the iosystem_desc_t data.
- *
- * @return PIO_NOERR for success, error code otherwise. */
-int open_file_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int len;
- int iotype;
- char *filename;
- int mode;
- int mpierr;
- int ret;
-
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "%d open_file_handler comproot = %d\n", my_rank, ios->comproot));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&len, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "open_file_handler got parameter len = %d", len));
- if (!(filename = malloc((len + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)filename, len + 1, MPI_CHAR, 0,
- ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&iotype, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if ((mpierr = MPI_Bcast(&mode, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- LOG((2, "open_file_handler got parameters len = %d filename = %s iotype = %d mode = %d\n",
- len, filename, iotype, mode));
-
- /* Call the open file function. */
- if ((ret = PIOc_openfile(ios->iosysid, &ncid, &iotype, filename, mode)))
- return ret;
-
- /* Free resources. */
- free(filename);
-
- LOG((1, "%d open_file_handler succeeded!\n", my_rank));
- return PIO_NOERR;
-}
-
-/** This function is run on the IO tasks to delete a netCDF file.
- *
- * @param ios pointer to the iosystem_desc_t data.
- *
- * @return PIO_NOERR for success, error code otherwise. */
-int delete_file_handler(iosystem_desc_t *ios)
-{
- int ncid;
- int len;
- char *filename;
- int mpierr;
- int ret;
-
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "%d delete_file_handler comproot = %d\n", my_rank, ios->comproot));
-
- /* Get the parameters for this function that the comp master
- * task is broadcasting. */
- if ((mpierr = MPI_Bcast(&len, 1, MPI_INT, 0, ios->intercomm)))
- return PIO_EIO;
- if (!(filename = malloc((len + 1) * sizeof(char))))
- return PIO_ENOMEM;
- if ((mpierr = MPI_Bcast((void *)filename, len + 1, MPI_CHAR, 0,
- ios->intercomm)))
- return PIO_EIO;
- LOG((1, "%d delete_file_handler got parameters len = %d filename = %s\n",
- my_rank, len, filename));
-
- /* Call the delete file function. */
- if ((ret = PIOc_deletefile(ios->iosysid, filename)))
- return ret;
-
- /* Free resources. */
- free(filename);
-
- LOG((1, "%d delete_file_handler succeeded!\n", my_rank));
- return PIO_NOERR;
-}
-
-int initdecomp_dof_handler(iosystem_desc_t *ios)
-{
- return PIO_NOERR;
-}
-
-int writedarray_handler(iosystem_desc_t *ios)
-{
- return PIO_NOERR;
-}
-
-int readdarray_handler(iosystem_desc_t *ios)
-{
- return PIO_NOERR;
-}
-
-int seterrorhandling_handler(iosystem_desc_t *ios)
-{
- return PIO_NOERR;
-}
-
-int var_handler(iosystem_desc_t *ios, int msg)
-{
- return PIO_NOERR;
-}
-
-int freedecomp_handler(iosystem_desc_t *ios)
-{
- return PIO_NOERR;
-}
-
-int finalize_handler(iosystem_desc_t *ios)
-{
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "%d finalize_handler called\n", my_rank));
- return PIO_NOERR;
-}
-
-int pio_callback_handler(iosystem_desc_t *ios, int msg)
-{
- return PIO_NOERR;
-}
-
-/** This function is called by the IO tasks. It loops, processing
- messages, and does not return until the PIO_MSG_EXIT message is
- received or an error occurs. */
-int pio_msg_handler(int io_rank, int component_count, iosystem_desc_t *iosys)
-{
- iosystem_desc_t *my_iosys;
- int msg = 0;
- MPI_Request req[component_count];
- MPI_Status status;
- int index;
- int mpierr;
- int ret = PIO_NOERR;
-
- int my_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
- LOG((1, "%d pio_msg_handler called\n", my_rank));
-
- /* Have IO comm rank 0 (the ioroot) register to receive
- * (non-blocking) for a message from each of the comproots. */
- if (!io_rank)
- {
- for (int cmp = 0; cmp < component_count; cmp++)
- {
- my_iosys = &iosys[cmp];
- LOG((1, "%d about to call MPI_Irecv\n", my_rank));
- mpierr = MPI_Irecv(&msg, 1, MPI_INT, my_iosys->comproot, MPI_ANY_TAG,
- my_iosys->union_comm, &req[cmp]);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- }
- }
-
- /* If the message is not -1, keep processing messages. */
- while (msg != -1)
- {
- /* Wait until any one of the requests are complete. */
- if (!io_rank)
- {
- LOG((1, "%d about to call MPI_Waitany req[0] = %d MPI_REQUEST_NULL = %d\n",
- my_rank, req[0], MPI_REQUEST_NULL));
- mpierr = MPI_Waitany(component_count, req, &index, &status);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- LOG((3, "Waitany returned index = %d req[%d] = %d",
- index, index, req[index]));
- }
-
- /* Broadcast the index of the computational component that
- * originated the request to the rest of the IO tasks. */
- mpierr = MPI_Bcast(&index, 1, MPI_INT, 0, iosys->io_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- my_iosys = &iosys[index];
- LOG((3, "index MPI_Bcast complete index = %d", index));
-
- /* Broadcast the msg value to the rest of the IO tasks. */
- LOG((3, "about to call msg MPI_Bcast"));
- mpierr = MPI_Bcast(&msg, 1, MPI_INT, 0, my_iosys->io_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- LOG((1, "pio_msg_handler msg MPI_Bcast complete msg = %d", msg));
-
- /* Handle the message. This code is run on all IO tasks. */
- switch (msg)
- {
- case PIO_MSG_INQ_TYPE:
- inq_type_handler(my_iosys);
- break;
- case PIO_MSG_INQ_FORMAT:
- inq_format_handler(my_iosys);
- break;
- case PIO_MSG_CREATE_FILE:
- create_file_handler(my_iosys);
- LOG((2, "returned from create_file_handler"));
- break;
- case PIO_MSG_SYNC:
- sync_file_handler(my_iosys);
- break;
- case PIO_MSG_ENDDEF:
- case PIO_MSG_REDEF:
- LOG((2, "calling change_def_file_handler"));
- change_def_file_handler(my_iosys, msg);
- LOG((2, "returned from change_def_file_handler"));
- break;
- case PIO_MSG_OPEN_FILE:
- open_file_handler(my_iosys);
- break;
- case PIO_MSG_CLOSE_FILE:
- close_file_handler(my_iosys);
- break;
- case PIO_MSG_DELETE_FILE:
- delete_file_handler(my_iosys);
- break;
- case PIO_MSG_RENAME_DIM:
- rename_dim_handler(my_iosys);
- break;
- case PIO_MSG_RENAME_VAR:
- rename_var_handler(my_iosys);
- break;
- case PIO_MSG_RENAME_ATT:
- rename_att_handler(my_iosys);
- break;
- case PIO_MSG_DEL_ATT:
- delete_att_handler(my_iosys);
- break;
- case PIO_MSG_DEF_DIM:
- def_dim_handler(my_iosys);
- break;
- case PIO_MSG_DEF_VAR:
- def_var_handler(my_iosys);
- break;
- case PIO_MSG_INQ:
- inq_handler(my_iosys);
- break;
- case PIO_MSG_INQ_DIM:
- inq_dim_handler(my_iosys, msg);
- break;
- case PIO_MSG_INQ_DIMID:
- inq_dimid_handler(my_iosys);
- break;
- case PIO_MSG_INQ_VAR:
- inq_var_handler(my_iosys);
- break;
- case PIO_MSG_GET_ATT:
- ret = att_get_handler(my_iosys);
- break;
- case PIO_MSG_PUT_ATT:
- ret = att_put_handler(my_iosys);
- break;
- case PIO_MSG_INQ_VARID:
- inq_varid_handler(my_iosys);
- break;
- case PIO_MSG_INQ_ATT:
- inq_att_handler(my_iosys);
- break;
- case PIO_MSG_INQ_ATTNAME:
- inq_attname_handler(my_iosys);
- break;
- case PIO_MSG_INQ_ATTID:
- inq_attid_handler(my_iosys);
- break;
- case PIO_MSG_GET_VARS:
- get_vars_handler(my_iosys);
- break;
- case PIO_MSG_PUT_VARS:
- put_vars_handler(my_iosys);
- break;
- case PIO_MSG_INITDECOMP_DOF:
- initdecomp_dof_handler(my_iosys);
- break;
- case PIO_MSG_WRITEDARRAY:
- writedarray_handler(my_iosys);
- break;
- case PIO_MSG_READDARRAY:
- readdarray_handler(my_iosys);
- break;
- case PIO_MSG_SETERRORHANDLING:
- seterrorhandling_handler(my_iosys);
- break;
- case PIO_MSG_FREEDECOMP:
- freedecomp_handler(my_iosys);
- break;
- case PIO_MSG_EXIT:
- finalize_handler(my_iosys);
- msg = -1;
- break;
- default:
- pio_callback_handler(my_iosys, msg);
- }
-
- /* If an error was returned by the handler, do something! */
- LOG((3, "pio_msg_handler checking error ret = %d", ret));
- if (ret)
- {
- LOG((0, "handler returned error code %d", ret));
- MPI_Finalize();
- }
-
- LOG((3, "pio_msg_handler getting ready to listen"));
- /* Unless finalize was called, listen for another msg from the
- * component whose message we just handled. */
- if (!io_rank && msg != -1)
- {
- LOG((3, "pio_msg_handler about to Irecv"));
- my_iosys = &iosys[index];
- mpierr = MPI_Irecv(&msg, 1, MPI_INT, my_iosys->comproot, MPI_ANY_TAG, my_iosys->union_comm,
- &req[index]);
- LOG((3, "pio_msg_handler called MPI_Irecv req[%d] = %d\n", index, req[index]));
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- }
-
- }
-
- return PIO_NOERR;
-}
-
-int
-pio_iosys_print(int my_rank, iosystem_desc_t *iosys)
-{
- printf("%d iosysid: %d\n", my_rank, iosys->iosysid);
- if (iosys->union_comm == MPI_COMM_NULL)
- printf("%d union_comm: MPI_COMM_NULL ", my_rank);
- else
- printf("%d union_comm: %d ", my_rank, iosys->union_comm);
-
- if (iosys->comp_comm == MPI_COMM_NULL)
- printf("comp_comm: MPI_COMM_NULL ");
- else
- printf("comp_comm: %d ", iosys->comp_comm);
-
- if (iosys->io_comm == MPI_COMM_NULL)
- printf("io_comm: MPI_COMM_NULL ");
- else
- printf("io_comm: %d ", iosys->io_comm);
-
- if (iosys->intercomm == MPI_COMM_NULL)
- printf("intercomm: MPI_COMM_NULL\n");
- else
- printf("intercomm: %d\n", iosys->intercomm);
-
- printf("%d num_iotasks=%d num_comptasks=%d union_rank=%d, comp_rank=%d, "
- "io_rank=%d async_interface=%d\n",
- my_rank, iosys->num_iotasks, iosys->num_comptasks, iosys->union_rank,
- iosys->comp_rank, iosys->io_rank, iosys->async_interface);
-
- printf("%d ioroot=%d comproot=%d iomaster=%d, compmaster=%d\n",
- my_rank, iosys->ioroot, iosys->comproot, iosys->iomaster,
- iosys->compmaster);
-
- printf("%d iotasks:", my_rank);
- for (int i = 0; i < iosys->num_iotasks; i++)
- printf("%d ", iosys->ioranks[i]);
- printf("\n");
- return PIO_NOERR;
-}
-
-/** @ingroup PIO_init
- * Library initialization used when IO tasks are distinct from compute
- * tasks.
- *
- * This is a collective call. Input parameters are read on
- * comp_rank=0; values on other tasks are ignored. This variation of
- * PIO_init sets up a distinct set of tasks to handle IO; these tasks
- * do not return from this call. Instead they go into an internal loop
- * and wait to receive further instructions from the computational
- * tasks.
- *
- * For 4 tasks, to have 2 of them be computational, and 2 of them
- * be IO, I would provide the following:
- *
- * component_count = 1
- *
- * peer_comm = MPI_COMM_WORLD
- *
- * comp_comms = an array with one element, an MPI (intra) communicator
- * that contains the two tasks designated to do computation
- * (processors 0, 1).
- *
- * io_comm = an MPI (intra) communicator with the other two tasks (2,
- * 3).
- *
- * iosysidp = pointer that gets the IO system ID.
- *
- * Fortran function (from PIO1, in piolib_mod.F90) is:
- *
- * subroutine init_intercom(component_count, peer_comm, comp_comms,
- * io_comm, iosystem, rearr_opts)
- *
- * Some notes from Jim:
- *
- * Components and Component Count
- * ------------------------------
- *
- * It's a cesm thing - the cesm model is composed of several component
- * models (atm, ocn, ice, lnd, etc) that may or may not be collocated
- * on mpi tasks. Since for intracomm the IOCOMM tasks are a subset of
- * the compute tasks for a given component, we have a separate iocomm
- * for each model component, and we call init_intracomm independently
- * for each component.
- *
- * When the IO tasks are independent of any model component then we
- * can have all of the components share one set of iotasks and we call
- * init_intercomm once with the information for all components.
- *
- * Inter vs Intra Communicators
- * ----------------------------
- *
- * For an intra you just need to provide the compute comm; pio creates
- * an io comm as a subset of that compute comm.
- *
- * For an inter you need to provide multiple comms - peer comm is the
- * communicator that is going to encompass all of the tasks - usually
- * this will be mpi_comm_world. Then you need to provide a comm for
- * each component model that will share the io server, then an
- * io_comm.
- *
- * Example of Communicators
- * ------------------------
- *
- * Starting from MPI_COMM_WORLD the calling program will create an
- * IO_COMM and one or more COMP_COMMs, I think an example might be best:
- *
- * Suppose we have 10 tasks and 2 of them will be IO tasks. Then 0:7
- * are in COMP_COMM and 8:9 are in IO_COMM In this case on tasks 0:7
- * COMP_COMM is defined and IO_COMM is MPI_COMM_NULL and on tasks 8:9
- * IO_COMM is defined and COMP_COMM is MPI_COMM_NULL The communicators
- * to handle communications between COMP_COMM and IO_COMM are defined
- * in init_intercomm and held in a pio internal data structure.
- *
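- * A rough sketch (illustrative only; the local variable names and the
- * MPI_Comm_split choice are not part of this library, and MPI_Init is
- * assumed to have been called) of how a caller might build the
- * communicators for the 10-task example above and start the IO tasks:
- *
- * @code
- * MPI_Comm comp_comm, io_comm;
- * int world_rank, iosysid;
- * MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
- * int color = (world_rank < 8) ? 0 : 1;    // 0:7 compute, 8:9 IO
- * MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &comp_comm);
- * if (color == 0)
- *     io_comm = MPI_COMM_NULL;             // compute tasks
- * else
- * {
- *     io_comm = comp_comm;                 // IO tasks
- *     comp_comm = MPI_COMM_NULL;
- * }
- * PIOc_Init_Intercomm(1, MPI_COMM_WORLD, &comp_comm, io_comm, &iosysid);
- * @endcode
- *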
- * Return or Not
- * -------------
- *
- * The io_comm tasks do not return from the init_intercomm routine.
- *
- * Sequence of Events to do Asynch I/O
- * -----------------------------------
- *
- * Here is the sequence of events that needs to occur when an IO
- * operation is called from the collection of compute tasks. I'm
- * going to use pio_put_var because write_darray has some special
- * characteristics that make it a bit more complicated...
- *
- * Compute tasks call pio_put_var with an integer argument
- *
- * The MPI_Send sends a message from comp_rank=0 to io_rank=0 on
- * union_comm (a comm defined as the union of io and compute tasks).
- * msg is an integer which indicates the function being called; in
- * this case the msg is PIO_MSG_PUT_VAR_INT.
- *
- * The iotasks now know what additional arguments they should expect
- * to receive from the compute tasks, in this case a file handle, a
- * variable id, the length of the array and the array itself.
- *
- * The iotasks now have the information they need to complete the
- * operation and they call the pio_put_var routine. (In pio1 this bit
- * of code is in pio_get_put_callbacks.F90.in)
- *
- * After the netcdf operation is completed (in the case of an inq or
- * get operation) the result is communicated back to the compute
- * tasks.
- *
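- * As a compute-side schematic of this handshake (using the
- * PIO_MSG_PUT_VARS message handled elsewhere in this file as an
- * example; only the first two broadcasts are shown, and the remaining
- * arguments follow in the order the matching handler receives them):
- *
- * @code
- * int msg = PIO_MSG_PUT_VARS;
- * if (ios->compmaster)
- *     MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
- * MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- * MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- * @endcode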
- *
- * @param component_count The number of computational (ex. model)
- * components to associate with this IO component
- *
- * @param peer_comm The communicator from which all other communicator
- * arguments are derived
- *
- * @param comp_comms An array containing the computational
- * communicator for each of the computational components. The I/O
- * tasks pass MPI_COMM_NULL for this parameter.
- *
- * @param io_comm The io communicator. Computation tasks pass
- * MPI_COMM_NULL for this parameter.
- *
- * @param iosysidp An array of length component_count. It will get the
- * iosysid for each component.
- *
- * @return PIO_NOERR on success, error code otherwise.
- */
-int PIOc_Init_Intercomm(int component_count, MPI_Comm peer_comm,
- MPI_Comm *comp_comms, MPI_Comm io_comm, int *iosysidp)
-{
- iosystem_desc_t *iosys;
- iosystem_desc_t *my_iosys;
- int ierr = PIO_NOERR;
- int mpierr;
- int iam;
- int io_leader, comp_leader;
- int root;
- MPI_Group io_grp, comm_grp, union_grp;
-
- /* Allocate struct to hold io system info for each component. */
- if (!(iosys = (iosystem_desc_t *) calloc(1, sizeof(iosystem_desc_t) * component_count)))
- ierr = PIO_ENOMEM;
-
- if (!ierr)
- for (int cmp = 0; cmp < component_count; cmp++)
- {
- /* These are used when using the intercomm. */
- int comp_master = MPI_PROC_NULL, io_master = MPI_PROC_NULL;
-
- /* Get a pointer to the iosys struct */
- my_iosys = &iosys[cmp];
-
- /* Create an MPI info object. */
- CheckMPIReturn(MPI_Info_create(&(my_iosys->info)),__FILE__,__LINE__);
-
- /* This task is part of the computation communicator. */
- if (comp_comms[cmp] != MPI_COMM_NULL)
- {
- /* Copy the computation communicator. */
- mpierr = MPI_Comm_dup(comp_comms[cmp], &my_iosys->comp_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Create an MPI group with the computation tasks. */
- mpierr = MPI_Comm_group(my_iosys->comp_comm, &my_iosys->compgroup);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Find out how many tasks are in this communicator. */
- mpierr = MPI_Comm_size(my_iosys->comp_comm, &my_iosys->num_comptasks);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Set the rank within the comp_comm. */
- mpierr = MPI_Comm_rank(my_iosys->comp_comm, &my_iosys->comp_rank);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Find the rank of the io leader in peer_comm. */
- iam = -1;
- mpierr = MPI_Allreduce(&iam, &io_leader, 1, MPI_INT, MPI_MAX, peer_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Find the rank of the comp leader in peer_comm. */
- if (!my_iosys->comp_rank)
- {
- mpierr = MPI_Comm_rank(peer_comm, &iam);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
- }
- else
- iam = -1;
-
- /* Find the lucky comp_leader task. */
- mpierr = MPI_Allreduce(&iam, &comp_leader, 1, MPI_INT, MPI_MAX, peer_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Is this the compmaster? Only if the comp_rank is zero. */
- if (!my_iosys->comp_rank)
- {
- my_iosys->compmaster = MPI_ROOT;
- comp_master = MPI_ROOT;
- }
- else
- my_iosys->compmaster = MPI_PROC_NULL;
-
- /* Set up the intercomm from the computation side. */
- mpierr = MPI_Intercomm_create(my_iosys->comp_comm, 0, peer_comm,
- io_leader, cmp, &my_iosys->intercomm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Create the union communicator. */
- mpierr = MPI_Intercomm_merge(my_iosys->intercomm, 0, &my_iosys->union_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
- }
- else
- {
- my_iosys->comp_comm = MPI_COMM_NULL;
- my_iosys->compgroup = MPI_GROUP_NULL;
- my_iosys->comp_rank = -1;
- }
-
- /* This task is part of the IO communicator, so set up the
- * IO stuff. */
- if (io_comm != MPI_COMM_NULL)
- {
- /* Copy the IO communicator. */
- mpierr = MPI_Comm_dup(io_comm, &my_iosys->io_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Get an MPI group that includes the io tasks. */
- mpierr = MPI_Comm_group(my_iosys->io_comm, &my_iosys->iogroup);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Find out how many tasks are in this communicator. */
- mpierr = MPI_Comm_size(my_iosys->io_comm, &my_iosys->num_iotasks);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Set the rank within the io_comm. */
- mpierr = MPI_Comm_rank(my_iosys->io_comm, &my_iosys->io_rank);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Find the rank of the io leader in peer_comm. */
- if (!my_iosys->io_rank)
- {
- mpierr = MPI_Comm_rank(peer_comm, &iam);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
- }
- else
- iam = -1;
-
- /* Find the lucky io_leader task. */
- mpierr = MPI_Allreduce(&iam, &io_leader, 1, MPI_INT, MPI_MAX, peer_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Find the rank of the comp leader in peer_comm. */
- iam = -1;
- mpierr = MPI_Allreduce(&iam, &comp_leader, 1, MPI_INT, MPI_MAX, peer_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* This is an io task. */
- my_iosys->ioproc = true;
-
- /* Is this the iomaster? Only if the io_rank is zero. */
- if (!my_iosys->io_rank)
- {
- my_iosys->iomaster = MPI_ROOT;
- io_master = MPI_ROOT;
- }
- else
- my_iosys->iomaster = 0;
-
- /* Set up the intercomm from the I/O side. */
- mpierr = MPI_Intercomm_create(my_iosys->io_comm, 0, peer_comm,
- comp_leader, cmp, &my_iosys->intercomm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Create the union communicator. */
- mpierr = MPI_Intercomm_merge(my_iosys->intercomm, 0, &my_iosys->union_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- }
- else
- {
- my_iosys->io_comm = MPI_COMM_NULL;
- my_iosys->iogroup = MPI_GROUP_NULL;
- my_iosys->io_rank = -1;
- my_iosys->ioproc = false;
- my_iosys->iomaster = false;
- }
-
- /* my_comm points to the union communicator for async, and
- * the comp_comm for non-async. It should not be freed
- * since it is not a proper copy of the communicator, just
- * a copy of the reference to it. */
- my_iosys->my_comm = my_iosys->union_comm;
-
- /* Find rank in union communicator. */
- mpierr = MPI_Comm_rank(my_iosys->union_comm, &my_iosys->union_rank);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
-
- /* Find the rank of the io leader in the union communicator. */
- if (!my_iosys->io_rank)
- my_iosys->ioroot = my_iosys->union_rank;
- else
- my_iosys->ioroot = -1;
-
- /* Distribute the answer to all tasks. */
- mpierr = MPI_Allreduce(&my_iosys->ioroot, &root, 1, MPI_INT, MPI_MAX,
- my_iosys->union_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
- my_iosys->ioroot = root;
-
- /* Find the rank of the computation leader in the union
- * communicator. */
- if (!my_iosys->comp_rank)
- my_iosys->comproot = my_iosys->union_rank;
- else
- my_iosys->comproot = -1;
-
- /* Distribute the answer to all tasks. */
- mpierr = MPI_Allreduce(&my_iosys->comproot, &root, 1, MPI_INT, MPI_MAX,
- my_iosys->union_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
- my_iosys->comproot = root;
-
- /* Send the number of tasks in the IO and computation
- communicators to each other over the intercomm. This is
- a one-to-all bcast from the local task that passes
- MPI_ROOT as the root (all other local tasks should pass
- MPI_PROC_NULL as the root). The bcast is received by
- all the members of the leaf group which each pass the
- rank of the root relative to the root group. */
- if (io_comm != MPI_COMM_NULL)
- {
- comp_master = 0;
- mpierr = MPI_Bcast(&my_iosys->num_comptasks, 1, MPI_INT, comp_master,
- my_iosys->intercomm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- mpierr = MPI_Bcast(&my_iosys->num_iotasks, 1, MPI_INT, io_master,
- my_iosys->intercomm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- }
- else
- {
- io_master = 0;
- mpierr = MPI_Bcast(&my_iosys->num_comptasks, 1, MPI_INT, comp_master,
- my_iosys->intercomm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- mpierr = MPI_Bcast(&my_iosys->num_iotasks, 1, MPI_INT, io_master,
- my_iosys->intercomm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- }
-
- /* Allocate an array to hold the ranks of the IO tasks
- * within the union communicator. */
- if (!(my_iosys->ioranks = malloc(my_iosys->num_iotasks * sizeof(int))))
- return PIO_ENOMEM;
-
- /* Allocate a temp array to help get the IO ranks. */
- int *tmp_ioranks;
- if (!(tmp_ioranks = malloc(my_iosys->num_iotasks * sizeof(int))))
- return PIO_ENOMEM;
-
- /* Init array, then have IO tasks set their values, then
- * use allreduce to distribute results to all tasks. */
- for (int cnt = 0 ; cnt < my_iosys->num_iotasks; cnt++)
- tmp_ioranks[cnt] = -1;
- if (io_comm != MPI_COMM_NULL)
- tmp_ioranks[my_iosys->io_rank] = my_iosys->union_rank;
- mpierr = MPI_Allreduce(tmp_ioranks, my_iosys->ioranks, my_iosys->num_iotasks, MPI_INT, MPI_MAX,
- my_iosys->union_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
-
- /* Free temp array. */
- free(tmp_ioranks);
-
- /* Set the default error handling. */
- my_iosys->error_handler = PIO_INTERNAL_ERROR;
-
- /* We do support asynch interface. */
- my_iosys->async_interface = true;
-
- /* For debug purposes, print the contents of the struct. */
- /*int my_rank;*/
- /* MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);*/
-
- /* for (int t = 0; t < my_iosys->num_iotasks + my_iosys->num_comptasks; t++) */
- /* { */
- /* MPI_Barrier(my_iosys->union_comm); */
- /* if (my_rank == t) */
- /* pio_iosys_print(my_rank, my_iosys); */
- /* } */
-
- /* Add this id to the list of PIO iosystem ids. */
- iosysidp[cmp] = pio_add_to_iosystem_list(my_iosys);
- LOG((2, "added to iosystem_list iosysid = %d", iosysidp[cmp]));
-
- /* Now call the function from which the IO tasks will not
- * return until the PIO_MSG_EXIT message is sent. */
- if (io_comm != MPI_COMM_NULL)
- if ((ierr = pio_msg_handler(my_iosys->io_rank, component_count, iosys)))
- return ierr;
- }
-
- /* If there was an error, make sure all tasks see it. */
- if (ierr)
- {
- mpierr = MPI_Bcast(&ierr, 1, MPI_INT, 0, iosys->intercomm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- if (mpierr)
- ierr = PIO_EIO;
- }
-
- return ierr;
-}
diff --git a/cime/externals/pio2/src/clib/pio_nc.c b/cime/externals/pio2/src/clib/pio_nc.c
index 52a376f0c266..f511475a4f6c 100644
--- a/cime/externals/pio2/src/clib/pio_nc.c
+++ b/cime/externals/pio2/src/clib/pio_nc.c
@@ -122,7 +122,7 @@ int PIOc_inq_dimname (int ncid, int dimid, char *name)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_DIM;
+ msg = PIO_MSG_INQ_DIMNAME;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -961,7 +961,7 @@ int PIOc_inq_vartype (int ncid, int varid, nc_type *xtypep)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_VAR;
+ msg = PIO_MSG_INQ_VARTYPE;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -1108,7 +1108,7 @@ int PIOc_inq_vardimid (int ncid, int varid, int *dimidsp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_VAR;
+ msg = PIO_MSG_INQ_VARDIMID;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -1340,7 +1340,7 @@ int PIOc_inq_attlen (int ncid, int varid, const char *name, PIO_Offset *lenp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_ATT;
+ msg = PIO_MSG_INQ_ATTLEN;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -1415,7 +1415,7 @@ int PIOc_inq_atttype (int ncid, int varid, const char *name, nc_type *xtypep)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_ATT;
+ msg = PIO_MSG_INQ_ATTTYPE;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -1561,7 +1561,7 @@ int PIOc_inq_natts (int ncid, int *ngattsp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ;
+ msg = PIO_MSG_INQ_NATTS;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -2322,7 +2322,7 @@ int PIOc_inq_attname (int ncid, int varid, int attnum, char *name)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_ATT;
+ msg = PIO_MSG_INQ_ATTNAME;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -2551,7 +2551,7 @@ int PIOc_inq_unlimdim (int ncid, int *unlimdimidp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ;
+ msg = PIO_MSG_INQ_UNLIMDIM;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -2702,7 +2702,7 @@ int PIOc_inq_ndims (int ncid, int *ndimsp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ;
+ msg = PIO_MSG_INQ_NDIMS;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -2848,7 +2848,7 @@ int PIOc_inq_nvars (int ncid, int *nvarsp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ;
+ msg = PIO_MSG_INQ_NVARS;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -3141,7 +3141,7 @@ int PIOc_inq_varnatts (int ncid, int varid, int *nattsp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_VAR;
+ msg = PIO_MSG_INQ_VARNATTS;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -3444,7 +3444,7 @@ int PIOc_inq_dimlen (int ncid, int dimid, PIO_Offset *lenp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_DIM;
+ msg = PIO_MSG_INQ_DIMLEN;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
@@ -3674,7 +3674,7 @@ int PIOc_inq_varndims (int ncid, int varid, int *ndimsp)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_VAR;
+ msg = PIO_MSG_INQ_VARNDIMS;
if(file->varlist[varid].ndims > 0){
(*ndimsp) = file->varlist[varid].ndims;
return PIO_NOERR;
@@ -3753,7 +3753,7 @@ int PIOc_inq_varname (int ncid, int varid, char *name)
if(file == NULL)
return PIO_EBADID;
ios = file->iosystem;
- msg = PIO_MSG_INQ_VAR;
+ msg = PIO_MSG_INQ_VARNAME;
if(ios->async_interface && ! ios->ioproc){
if(ios->compmaster)
diff --git a/cime/externals/pio2/src/clib/pio_nc4.c b/cime/externals/pio2/src/clib/pio_nc4.c
index a311ed003f4f..735e3755a99d 100644
--- a/cime/externals/pio2/src/clib/pio_nc4.c
+++ b/cime/externals/pio2/src/clib/pio_nc4.c
@@ -235,40 +235,28 @@ int PIOc_inq_var_deflate(int ncid, int varid, int *shufflep,
int PIOc_def_var_chunking(int ncid, int varid, int storage,
const PIO_Offset *chunksizesp)
{
- iosystem_desc_t *ios; /** Pointer to io system information. */
- file_desc_t *file; /** Pointer to file information. */
- int ierr = PIO_NOERR; /** Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /** Return code from MPI function codes. */
+ int ierr;
+ int msg;
+ int mpierr;
+ iosystem_desc_t *ios;
+ file_desc_t *file;
char *errstr;
+
errstr = NULL;
+ ierr = PIO_NOERR;
- /* Find the info about this file. */
if (!(file = pio_get_file_from_id(ncid)))
return PIO_EBADID;
ios = file->iosystem;
+ msg = PIO_MSG_DEF_VAR_CHUNKING;
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
+ if (ios->async_interface && ! ios->ioproc)
{
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_DEF_VAR_CHUNKING;
-
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
+ if (ios->compmaster)
+ mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
+ mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
- /* If this is an IO task, then call the netCDF function. */
if (ios->ioproc)
{
switch (file->iotype)
@@ -299,12 +287,20 @@ int PIOc_def_var_chunking(int ncid, int varid, int storage,
}
}
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
+ /* Allocate an error string if needed. */
+ if (ierr != PIO_NOERR)
+ {
+ errstr = (char *) malloc((strlen(__FILE__) + 20)* sizeof(char));
+ sprintf(errstr,"in file %s",__FILE__);
+ }
+
+ /* Check for netCDF error. */
+ ierr = check_netcdf(file, ierr, errstr,__LINE__);
+
+ /* Free the error string if it was allocated. */
+ if (errstr != NULL)
+ free(errstr);
+
return ierr;
}
diff --git a/cime/externals/pio2/src/clib/pio_nc_async.c b/cime/externals/pio2/src/clib/pio_nc_async.c
deleted file mode 100644
index 147fc548d7ab..000000000000
--- a/cime/externals/pio2/src/clib/pio_nc_async.c
+++ /dev/null
@@ -1,2513 +0,0 @@
-/**
- * @file
- * PIO interfaces to
- * [NetCDF](http://www.unidata.ucar.edu/software/netcdf/docs/modules.html)
- * support functions
-
- * This file provides an interface to the
- * [NetCDF](http://www.unidata.ucar.edu/software/netcdf/docs/modules.html)
- * support functions. Each subroutine calls the underlying netcdf or
- * pnetcdf or netcdf4 functions from the appropriate subset of mpi
- * tasks (io_comm). Each routine must be called collectively from
- * union_comm.
- *
- * @author Jim Edwards (jedwards@ucar.edu), Ed Hartnett
- * @date February 2014, April 2016
- */
-
-#include <config.h>
-#include <pio.h>
-#include <pio_internal.h>
-
-/**
- * @ingroup PIOc_inq
- * The PIO-C interface for the NetCDF function nc_inq.
- *
- * This routine is called collectively by all tasks in the
- * communicator ios.union_comm. For more information on the underlying
- * NetCDF commmand please read about this function in the NetCDF
- * documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__datasets.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- *
- * @return PIO_NOERR for success, error code otherwise. See
- * PIOc_Set_File_Error_Handling
- */
-int PIOc_inq(int ncid, int *ndimsp, int *nvarsp, int *ngattsp, int *unlimdimidp)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- LOG((1, "PIOc_inq ncid = %d", ncid));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ; /* Message for async notification. */
- char ndims_present = ndimsp ? true : false;
- char nvars_present = nvarsp ? true : false;
- char ngatts_present = ngattsp ? true : false;
- char unlimdimid_present = unlimdimidp ? true : false;
-
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&ndims_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&nvars_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&ngatts_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&unlimdimid_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- LOG((2, "PIOc_inq ncid = %d ndims_present = %d nvars_present = %d ngatts_present = %d unlimdimid_present = %d",
- ncid, ndims_present, nvars_present, ngatts_present, unlimdimid_present));
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- {
- LOG((2, "PIOc_inq calling ncmpi_inq unlimdimidp = %d", unlimdimidp));
- ierr = ncmpi_inq(ncid, ndimsp, nvarsp, ngattsp, unlimdimidp);
- LOG((2, "PIOc_inq called ncmpi_inq"));
- if (unlimdimidp)
- LOG((2, "PIOc_inq returned from ncmpi_inq unlimdimid = %d", *unlimdimidp));
- }
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype == PIO_IOTYPE_NETCDF && file->do_io)
- {
- LOG((2, "PIOc_inq calling classic nc_inq"));
- /* Should not be necessary to do this - nc_inq should
- * handle null pointers. This has been reported as a bug
- * to netCDF developers. */
- int tmp_ndims, tmp_nvars, tmp_ngatts, tmp_unlimdimid;
- ierr = nc_inq(ncid, &tmp_ndims, &tmp_nvars, &tmp_ngatts, &tmp_unlimdimid);
- LOG((2, "PIOc_inq called classic nc_inq"));
- if (unlimdimidp)
- LOG((2, "classic tmp_unlimdimid = %d", tmp_unlimdimid));
- if (ndimsp)
- *ndimsp = tmp_ndims;
- if (nvarsp)
- *nvarsp = tmp_nvars;
- if (ngattsp)
- *ngattsp = tmp_ngatts;
- if (unlimdimidp)
- *unlimdimidp = tmp_unlimdimid;
- if (unlimdimidp)
- LOG((2, "classic unlimdimid = %d", *unlimdimidp));
- }
- else if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- {
- LOG((2, "PIOc_inq calling netcdf-4 nc_inq"));
- ierr = nc_inq(ncid, ndimsp, nvarsp, ngattsp, unlimdimidp);
- }
-#endif /* _NETCDF */
- LOG((2, "PIOc_inq netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- {
- if (ndimsp)
- if ((mpierr = MPI_Bcast(ndimsp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- if (nvarsp)
- if ((mpierr = MPI_Bcast(nvarsp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- if (ngattsp)
- if ((mpierr = MPI_Bcast(ngattsp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- if (unlimdimidp)
- if ((mpierr = MPI_Bcast(unlimdimidp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_inq_ndims
- * The PIO-C interface for the NetCDF function nc_inq_ndims.
- */
-int PIOc_inq_ndims (int ncid, int *ndimsp)
-{
- LOG((1, "PIOc_inq_ndims"));
- return PIOc_inq(ncid, ndimsp, NULL, NULL, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_nvars
- * The PIO-C interface for the NetCDF function nc_inq_nvars.
- */
-int PIOc_inq_nvars(int ncid, int *nvarsp)
-{
- return PIOc_inq(ncid, NULL, nvarsp, NULL, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_natts
- * The PIO-C interface for the NetCDF function nc_inq_natts.
- */
-int PIOc_inq_natts(int ncid, int *ngattsp)
-{
- return PIOc_inq(ncid, NULL, NULL, ngattsp, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_unlimdim
- * The PIO-C interface for the NetCDF function nc_inq_unlimdim.
- */
-int PIOc_inq_unlimdim(int ncid, int *unlimdimidp)
-{
- LOG((1, "PIOc_inq_unlimdim ncid = %d unlimdimidp = %d", ncid, unlimdimidp));
- return PIOc_inq(ncid, NULL, NULL, NULL, unlimdimidp);
-}
-
-/** Internal function to provide inq_type function for pnetcdf. */
-int pioc_pnetcdf_inq_type(int ncid, nc_type xtype, char *name,
- PIO_Offset *sizep)
-{
- int typelen;
-
- switch (xtype)
- {
- case NC_UBYTE:
- case NC_BYTE:
- case NC_CHAR:
- typelen = 1;
- break;
- case NC_SHORT:
- case NC_USHORT:
- typelen = 2;
- break;
- case NC_UINT:
- case NC_INT:
- case NC_FLOAT:
- typelen = 4;
- break;
- case NC_UINT64:
- case NC_INT64:
- case NC_DOUBLE:
- typelen = 8;
- break;
- default:
- /* Unsupported type. */
- return PIO_EBADTYPE;
- }
-
- /* If pointers were supplied, copy results. */
- if (sizep)
- *sizep = typelen;
- if (name)
- strcpy(name, "some type");
-
- return PIO_NOERR;
-}
-
-/**
- * @ingroup PIOc_typelen
- * The PIO-C interface for the NetCDF function nctypelen.
- */
-int PIOc_inq_type(int ncid, nc_type xtype, char *name, PIO_Offset *sizep)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
- int typelen;
-
- LOG((1, "PIOc_inq_type ncid = %d xtype = %d", ncid, xtype));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_TYPE; /* Message for async notification. */
- char name_present = name ? true : false;
- char size_present = sizep ? true : false;
-
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&xtype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&size_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = pioc_pnetcdf_inq_type(ncid, xtype, name, sizep);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_inq_type(ncid, xtype, name, (size_t *)sizep);
-#endif /* _NETCDF */
- LOG((2, "PIOc_inq_type netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- {
- if (name)
- {
- int slen;
- if (ios->iomaster)
- slen = strlen(name);
- if ((mpierr = MPI_Bcast(&slen, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (!mpierr)
- if ((mpierr = MPI_Bcast((void *)name, slen + 1, MPI_CHAR, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
- if (sizep)
- if ((mpierr = MPI_Bcast(sizep , 1, MPI_OFFSET, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_inq_format
- * The PIO-C interface for the NetCDF function nc_inq_format.
- */
-int PIOc_inq_format (int ncid, int *formatp)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- LOG((1, "PIOc_inq_format ncid = %d", ncid));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_FORMAT;
- char format_present = formatp ? true : false;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&format_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_inq_format(file->fh, formatp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_inq_format(file->fh, formatp);
-#endif /* _NETCDF */
- LOG((2, "PIOc_inq_format netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- {
- if (formatp)
- if ((mpierr = MPI_Bcast(formatp , 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
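-/* Illustrative usage sketch (not part of the original source): a caller can
- * check which on-disk format an open file uses. The ncid here is assumed to
- * come from a prior PIOc_openfile() or PIOc_createfile() call.
- *
- *     int format;
- *     if (PIOc_inq_format(ncid, &format) == PIO_NOERR)
- *         printf("file format code = %d\n", format);
- */
-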
-/**
- * @ingroup PIOc_inq_dim
- * The PIO-C interface for the NetCDF function nc_inq_dim.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__dimensions.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param dimid the dimension ID.
- * @param name a pointer that will get the name of the dimension. Ignored if NULL.
- * @param lenp a pointer that will get the length of the dimension. Ignored if NULL.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_dim(int ncid, int dimid, char *name, PIO_Offset *lenp)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- LOG((1, "PIOc_inq_dim"));
-
- /* Get the file info, based on the ncid. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_DIM;
- char name_present = name ? true : false;
- char len_present = lenp ? true : false;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&dimid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
-            LOG((2, "PIOc_inq_dim Bcast name_present = %d", name_present));
- if (!mpierr)
- mpierr = MPI_Bcast(&len_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
-            LOG((2, "PIOc_inq_dim Bcast len_present = %d", len_present));
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
-            ierr = ncmpi_inq_dim(file->fh, dimid, name, lenp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
-            ierr = nc_inq_dim(file->fh, dimid, name, (size_t *)lenp);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- {
- if (name)
- {
- int slen;
- if (ios->iomaster)
- slen = strlen(name);
- if ((mpierr = MPI_Bcast(&slen, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if ((mpierr = MPI_Bcast((void *)name, slen + 1, MPI_CHAR, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- if (lenp)
- if ((mpierr = MPI_Bcast(lenp , 1, MPI_OFFSET, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_inq_dimname
- * The PIO-C interface for the NetCDF function nc_inq_dimname.
- */
-int PIOc_inq_dimname(int ncid, int dimid, char *name)
-{
- return PIOc_inq_dim(ncid, dimid, name, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_dimlen
- * The PIO-C interface for the NetCDF function nc_inq_dimlen.
- */
-int PIOc_inq_dimlen(int ncid, int dimid, PIO_Offset *lenp)
-{
- return PIOc_inq_dim(ncid, dimid, NULL, lenp);
-}
-
-/**
- * @ingroup PIOc_inq_dimid
- * The PIO-C interface for the NetCDF function nc_inq_dimid.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__dimensions.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param name the name of the dimension to look up.
- * @param idp a pointer that will get the dimension ID.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_dimid(int ncid, const char *name, int *idp)
-{
- iosystem_desc_t *ios;
- file_desc_t *file;
- int ierr = PIO_NOERR;
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* Name must be provided. */
- if (!name)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_inq_dimid name = %s", name));
-
- /* Get the file info, based on the ncid. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If using async, and not an IO task, then send parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_DIMID;
- char id_present = idp ? true : false;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- int namelen = strlen(name);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&id_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* IO tasks call the netCDF functions. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
-            ierr = ncmpi_inq_dimid(file->fh, name, idp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
-            ierr = nc_inq_dimid(file->fh, name, idp);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results. */
- if (!ierr)
- if (idp)
- if ((mpierr = MPI_Bcast(idp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
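-/* Illustrative usage sketch (not part of the original source): look up a
- * dimension by name, then read back its name and length. "lon" is only an
- * example dimension name; ncid is assumed to come from PIOc_openfile().
- *
- *     int dimid;
- *     char dimname[NC_MAX_NAME + 1];
- *     PIO_Offset dimlen;
- *     if (PIOc_inq_dimid(ncid, "lon", &dimid) == PIO_NOERR)
- *         PIOc_inq_dim(ncid, dimid, dimname, &dimlen);
- */
-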
-/**
- * @ingroup PIOc_inq_var
- * The PIO-C interface for the NetCDF function nc_inq_var.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__variables.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @param name a pointer that will get the variable name. Ignored if NULL.
- * @param xtypep a pointer that will get the type of the variable.
- * @param ndimsp a pointer that will get the number of dimensions.
- * @param dimidsp a pointer that will get the dimension IDs of the variable.
- * @param nattsp a pointer that will get the number of attributes.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_var(int ncid, int varid, char *name, nc_type *xtypep, int *ndimsp,
- int *dimidsp, int *nattsp)
-{
- iosystem_desc_t *ios;
- file_desc_t *file;
- int ndims; /* The number of dimensions for this variable. */
- int ierr = PIO_NOERR;
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- LOG((1, "PIOc_inq_var ncid = %d varid = %d", ncid, varid));
-
- /* Get the file info, based on the ncid. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_VAR;
- char name_present = name ? true : false;
- char xtype_present = xtypep ? true : false;
- char ndims_present = ndimsp ? true : false;
- char dimids_present = dimidsp ? true : false;
- char natts_present = nattsp ? true : false;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&xtype_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&ndims_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&dimids_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&natts_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- LOG((2, "PIOc_inq_var name_present = %d xtype_present = %d ndims_present = %d "
- "dimids_present = %d, natts_present = %d nattsp = %d",
- name_present, xtype_present, ndims_present, dimids_present, natts_present, nattsp));
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* Call the netCDF layer. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- {
- ierr = ncmpi_inq_varndims(file->fh, varid, &ndims);
- if (!ierr)
-                ierr = ncmpi_inq_var(file->fh, varid, name, xtypep, ndimsp, dimidsp, nattsp);
- }
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- {
- ierr = nc_inq_varndims(file->fh, varid, &ndims);
- if (!ierr)
- ierr = nc_inq_var(file->fh, varid, name, xtypep, ndimsp, dimidsp, nattsp);
- }
-#endif /* _NETCDF */
- }
-
- if (ndimsp)
- LOG((2, "PIOc_inq_var ndims = %d ierr = %d", *ndimsp, ierr));
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast the results for non-null pointers. */
- if (!ierr)
- {
- if (name)
- {
- int slen;
- if(ios->iomaster)
- slen = strlen(name);
- if ((mpierr = MPI_Bcast(&slen, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if ((mpierr = MPI_Bcast((void *)name, slen + 1, MPI_CHAR, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
- if (xtypep)
- if ((mpierr = MPI_Bcast(xtypep, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- if (ndimsp)
- {
- if (ios->ioroot)
- LOG((2, "PIOc_inq_var about to Bcast ndims = %d ios->ioroot = %d", *ndimsp, ios->ioroot));
- if ((mpierr = MPI_Bcast(ndimsp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- file->varlist[varid].ndims = *ndimsp;
- LOG((2, "PIOc_inq_var Bcast ndims = %d", *ndimsp));
- }
- if (dimidsp)
- {
- if ((mpierr = MPI_Bcast(&ndims, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if ((mpierr = MPI_Bcast(dimidsp, ndims, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
- if (nattsp)
- if ((mpierr = MPI_Bcast(nattsp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_inq_varname
- * The PIO-C interface for the NetCDF function nc_inq_varname.
- */
-int PIOc_inq_varname (int ncid, int varid, char *name)
-{
- return PIOc_inq_var(ncid, varid, name, NULL, NULL, NULL, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_vartype
- * The PIO-C interface for the NetCDF function nc_inq_vartype.
- */
-int PIOc_inq_vartype (int ncid, int varid, nc_type *xtypep)
-{
- return PIOc_inq_var(ncid, varid, NULL, xtypep, NULL, NULL, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_varndims
- * The PIO-C interface for the NetCDF function nc_inq_varndims.
- */
-int PIOc_inq_varndims (int ncid, int varid, int *ndimsp)
-{
- return PIOc_inq_var(ncid, varid, NULL, NULL, ndimsp, NULL, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_vardimid
- * The PIO-C interface for the NetCDF function nc_inq_vardimid.
- */
-int PIOc_inq_vardimid(int ncid, int varid, int *dimidsp)
-{
- return PIOc_inq_var(ncid, varid, NULL, NULL, NULL, dimidsp, NULL);
-}
-
-/**
- * @ingroup PIOc_inq_varnatts
- * The PIO-C interface for the NetCDF function nc_inq_varnatts.
- */
-int PIOc_inq_varnatts (int ncid, int varid, int *nattsp)
-{
- return PIOc_inq_var(ncid, varid, NULL, NULL, NULL, NULL, nattsp);
-}
-
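-/* Illustrative usage sketch (not part of the original source): inquire about
- * a variable in one call, or field by field with the wrappers above. The
- * ncid and varid are assumed to be valid; NC_MAX_VAR_DIMS comes from netcdf.h.
- *
- *     char varname[NC_MAX_NAME + 1];
- *     nc_type xtype;
- *     int ndims, dimids[NC_MAX_VAR_DIMS], natts;
- *     PIOc_inq_var(ncid, varid, varname, &xtype, &ndims, dimids, &natts);
- *     PIOc_inq_varndims(ncid, varid, &ndims);
- */
-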
-/**
- * @ingroup PIOc_inq_varid
- * The PIO-C interface for the NetCDF function nc_inq_varid.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__variables.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param name the name of the variable to look up.
- * @param varidp a pointer that will get the variable ID.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_varid (int ncid, const char *name, int *varidp)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* Caller must provide name. */
- if (!name || strlen(name) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- /* Get file info based on ncid. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- LOG((1, "PIOc_inq_varid ncid = %d name = %s", ncid, name));
-
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_VARID;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- int namelen;
- namelen = strlen(name);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
-            ierr = ncmpi_inq_varid(file->fh, name, varidp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_inq_varid(file->fh, name, varidp);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (varidp)
- if ((mpierr = MPI_Bcast(varidp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
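-/* Illustrative usage sketch (not part of the original source): translate a
- * variable name into its varid before further inquiries or I/O. The name
- * "temperature" is only an example.
- *
- *     int varid;
- *     int ret = PIOc_inq_varid(ncid, "temperature", &varid);
- *     if (ret != PIO_NOERR)
- *         return ret;
- */
-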
-/**
- * @ingroup PIOc_inq_att
- * The PIO-C interface for the NetCDF function nc_inq_att.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__attributes.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @param xtypep a pointer that will get the type of the attribute.
- * @param lenp a pointer that will get the number of values
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_att(int ncid, int varid, const char *name, nc_type *xtypep,
- PIO_Offset *lenp)
-{
- int msg = PIO_MSG_INQ_ATT;
- iosystem_desc_t *ios;
- file_desc_t *file;
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
- int ierr = PIO_NOERR;
-
- /* Caller must provide a name. */
- if (!name)
- return PIO_EINVAL;
-
-    LOG((1, "PIOc_inq_att ncid = %d varid = %d xtypep = %d lenp = %d",
- ncid, varid, xtypep, lenp));
-
- /* Find file based on ncid. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- char xtype_present = xtypep ? true : false;
- char len_present = lenp ? true : false;
- int namelen = strlen(name);
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&xtype_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&len_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_inq_att(file->fh, varid, name, xtypep, lenp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_inq_att(file->fh, varid, name, xtypep, (size_t *)lenp);
-#endif /* _NETCDF */
-        LOG((2, "PIOc_inq_att netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results. */
- if (!ierr)
- {
- if(xtypep)
- if ((mpierr = MPI_Bcast(xtypep, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- if(lenp)
- if ((mpierr = MPI_Bcast(lenp, 1, MPI_OFFSET, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_inq_attlen
- * The PIO-C interface for the NetCDF function nc_inq_attlen.
- */
-int PIOc_inq_attlen (int ncid, int varid, const char *name, PIO_Offset *lenp)
-{
- return PIOc_inq_att(ncid, varid, name, NULL, lenp);
-}
-
-/**
- * @ingroup PIOc_inq_atttype
- * The PIO-C interface for the NetCDF function nc_inq_atttype.
- */
-int PIOc_inq_atttype(int ncid, int varid, const char *name, nc_type *xtypep)
-{
- return PIOc_inq_att(ncid, varid, name, xtypep, NULL);
-}
-
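-/* Illustrative usage sketch (not part of the original source): find an
- * attribute's type and length before allocating a buffer for it. "units" is
- * only an example attribute name. On success, attlen values of type atttype
- * can then be read with PIOc_get_att().
- *
- *     nc_type atttype;
- *     PIO_Offset attlen;
- *     int ret = PIOc_inq_att(ncid, varid, "units", &atttype, &attlen);
- */
-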
-/**
- * @ingroup PIOc_inq_attname
- * The PIO-C interface for the NetCDF function nc_inq_attname.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__attributes.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @param attnum the zero-based attribute number.
- * @param name a pointer that will get the name of the attribute.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_attname(int ncid, int varid, int attnum, char *name)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- LOG((1, "PIOc_inq_attname ncid = %d varid = %d attnum = %d", ncid, varid,
- attnum));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_ATTNAME;
- char name_present = name ? true : false;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&attnum, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&name_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
-            ierr = ncmpi_inq_attname(file->fh, varid, attnum, name);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
-            ierr = nc_inq_attname(file->fh, varid, attnum, name);
-#endif /* _NETCDF */
- LOG((2, "PIOc_inq_attname netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- if (name)
- {
- int namelen = strlen(name);
- if ((mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- if ((mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->ioroot,
- ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
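-/* Illustrative usage sketch (not part of the original source): iterate over
- * a variable's attributes by number, fetching each name. natts is assumed to
- * have been obtained with PIOc_inq_varnatts().
- *
- *     char attname[NC_MAX_NAME + 1];
- *     for (int a = 0; a < natts; a++)
- *         PIOc_inq_attname(ncid, varid, a, attname);
- */
-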
-/**
- * @ingroup PIOc_inq_attid
- * The PIO-C interface for the NetCDF function nc_inq_attid.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__attributes.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @param name the name of the attribute.
- * @param idp a pointer that will get the number (ID) of the attribute.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_attid(int ncid, int varid, const char *name, int *idp)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* User must provide name shorter than NC_MAX_NAME +1. */
- if (!name || strlen(name) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_inq_attid ncid = %d varid = %d name = %s", ncid, varid, name));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_ATTID;
- int namelen = strlen(name);
- char id_present = idp ? true : false;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((char *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&id_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
-            ierr = ncmpi_inq_attid(file->fh, varid, name, idp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
-            ierr = nc_inq_attid(file->fh, varid, name, idp);
-#endif /* _NETCDF */
-        LOG((2, "PIOc_inq_attid netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results. */
- if (!ierr)
- {
- if (idp)
- if ((mpierr = MPI_Bcast(idp, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_rename_dim
- * The PIO-C interface for the NetCDF function nc_rename_dim.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__dimensions.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_rename_dim(int ncid, int dimid, const char *name)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* User must provide name of correct length. */
- if (!name || strlen(name) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_rename_dim ncid = %d dimid = %d name = %s", ncid, dimid, name));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_RENAME_DIM; /* Message for async notification. */
- int namelen = strlen(name);
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&dimid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- LOG((2, "PIOc_rename_dim Bcast file->fh = %d dimid = %d namelen = %d name = %s",
- file->fh, dimid, namelen, name));
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_rename_dim(file->fh, dimid, name);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
-            ierr = nc_rename_dim(file->fh, dimid, name);
-#endif /* _NETCDF */
-        LOG((2, "PIOc_rename_dim netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_rename_var
- * The PIO-C interface for the NetCDF function nc_rename_var.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__variables.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_rename_var(int ncid, int varid, const char *name)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* User must provide name of correct length. */
- if (!name || strlen(name) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_rename_var ncid = %d varid = %d name = %s", ncid, varid, name));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_RENAME_VAR; /* Message for async notification. */
- int namelen = strlen(name);
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- LOG((2, "PIOc_rename_var Bcast file->fh = %d varid = %d namelen = %d name = %s",
- file->fh, varid, namelen, name));
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_rename_var(file->fh, varid, name);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
-            ierr = nc_rename_var(file->fh, varid, name);
-#endif /* _NETCDF */
-        LOG((2, "PIOc_rename_var netcdf call returned %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_rename_att
- * The PIO-C interface for the NetCDF function nc_rename_att.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__attributes.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @return PIO_NOERR for success, error code otherwise. See
- * PIOc_Set_File_Error_Handling
- */
-int PIOc_rename_att (int ncid, int varid, const char *name,
- const char *newname)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI functions. */
-
- /* User must provide names of correct length. */
- if (!name || strlen(name) > NC_MAX_NAME ||
- !newname || strlen(newname) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_rename_att ncid = %d varid = %d name = %s newname = %s",
- ncid, varid, name, newname));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_RENAME_ATT; /* Message for async notification. */
- int namelen = strlen(name);
- int newnamelen = strlen(newname);
-
- if (ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((char *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&newnamelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((char *)newname, newnamelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_rename_att(file->fh, varid, name, newname);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_rename_att(file->fh, varid, name, newname);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- LOG((2, "PIOc_rename_att succeeded"));
- return ierr;
-}
-
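-/* Illustrative usage sketch (not part of the original source): the rename
- * calls above change metadata only. For classic-format files, a rename that
- * lengthens a name is assumed to require define mode (see PIOc_redef()).
- * The new names below are only examples.
- *
- *     PIOc_rename_dim(ncid, dimid, "longitude");
- *     PIOc_rename_var(ncid, varid, "air_temperature");
- *     PIOc_rename_att(ncid, varid, "unit", "units");
- */
-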
-/**
- * @ingroup PIOc_del_att
- * The PIO-C interface for the NetCDF function nc_del_att.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__attributes.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_del_att(int ncid, int varid, const char *name)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI functions. */
-
- /* User must provide name of correct length. */
- if (!name || strlen(name) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_del_att ncid = %d varid = %d name = %s", ncid, varid, name));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_DEL_ATT;
- int namelen = strlen(name); /* Length of name string. */
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((char *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_del_att(file->fh, varid, name);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_del_att(file->fh, varid, name);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- LOG((2, "PIOc_del_att succeeded"));
- return ierr;
-}
-
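-/* Illustrative usage sketch (not part of the original source): delete a
- * file-level attribute. NC_GLOBAL (from netcdf.h) selects global attributes,
- * and "history" is only an example name.
- *
- *     PIOc_del_att(ncid, NC_GLOBAL, "history");
- */
-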
-/**
- * @ingroup PIOc_set_fill
- * The PIO-C interface for the NetCDF function nc_set_fill.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__datasets.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_set_fill (int ncid, int fillmode, int *old_modep)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI functions. */
-
- LOG((1, "PIOc_set_fill ncid = %d fillmode = %d old_modep = %d", ncid, fillmode,
- old_modep));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_SET_FILL;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_set_fill(file->fh, fillmode, old_modep);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_set_fill(file->fh, fillmode, old_modep);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- LOG((2, "PIOc_set_fill succeeded"));
- return ierr;
-}
-
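-/* Illustrative usage sketch (not part of the original source): disable
- * prefilling before writing every element of every variable, keeping the old
- * mode so it can be restored later. NC_NOFILL comes from netcdf.h.
- *
- *     int old_mode;
- *     PIOc_set_fill(ncid, NC_NOFILL, &old_mode);
- */
-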
-/** This is an internal function that handles both PIOc_enddef and
- * PIOc_redef.
- * @param ncid the ncid of the file to enddef or redef
- * @param is_enddef set to non-zero for enddef, 0 for redef.
- * @returns PIO_NOERR on success, error code on failure. */
-int pioc_change_def(int ncid, int is_enddef)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI functions. */
-
- LOG((1, "pioc_change_def ncid = %d is_enddef = %d", ncid, is_enddef));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
- LOG((2, "pioc_change_def found file"));
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = is_enddef ? PIO_MSG_ENDDEF : PIO_MSG_REDEF;
- LOG((2, "sending message msg = %d", msg));
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- LOG((2, "pioc_change_def ncid = %d mpierr = %d", file->fh, mpierr));
- }
-
- /* Handle MPI errors. */
- LOG((2, "pioc_change_def handling MPI errors"));
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- LOG((2, "pioc_change_def ios->ioproc = %d", ios->ioproc));
- if (ios->ioproc)
- {
- LOG((2, "pioc_change_def calling netcdf function"));
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- if (is_enddef)
- ierr = ncmpi_enddef(file->fh);
- else
- ierr = ncmpi_redef(file->fh);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- if (is_enddef)
- ierr = nc_enddef(file->fh);
- else
- ierr = nc_redef(file->fh);
-#endif /* _NETCDF */
- LOG((2, "pioc_change_def ierr = %d", ierr));
- }
-
- /* Broadcast and check the return code. */
- LOG((2, "pioc_change_def bcasting return code ierr = %d", ierr));
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
- LOG((2, "pioc_change_def succeeded"));
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_enddef
- * The PIO-C interface for the NetCDF function nc_enddef.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__datasets.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_enddef(int ncid)
-{
- return pioc_change_def(ncid, 1);
-}
-
-/**
- * @ingroup PIOc_redef
- * The PIO-C interface for the NetCDF function nc_redef.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__datasets.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_redef(int ncid)
-{
- return pioc_change_def(ncid, 0);
-}
-
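-/* Illustrative usage sketch (not part of the original source): re-enter
- * define mode on an open file to add metadata, then return to data mode.
- * The attribute name and text are only examples.
- *
- *     PIOc_redef(ncid);
- *     PIOc_put_att(ncid, NC_GLOBAL, "title", NC_CHAR, 4, "demo");
- *     PIOc_enddef(ncid);
- */
-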
-/**
- * @ingroup PIOc_def_dim
- * The PIO-C interface for the NetCDF function nc_def_dim.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__dimensions.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param name the name of the new dimension.
- * @param len the length of the new dimension.
- * @param idp a pointer that will get the ID of the new dimension.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_def_dim (int ncid, const char *name, PIO_Offset len, int *idp)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* User must provide name. */
- if (!name || strlen(name) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_def_dim ncid = %d name = %s len = %d", ncid, name, len));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_DEF_DIM;
- int namelen = strlen(name);
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
-                mpierr = MPI_Bcast(&len, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- }
-
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
-            ierr = ncmpi_def_dim(file->fh, name, len, idp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_def_dim(file->fh, name, (size_t)len, idp);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
-        check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- if (idp)
- if ((mpierr = MPI_Bcast(idp , 1, MPI_INT, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
-/**
- * @ingroup PIOc_def_var
- * The PIO-C interface for the NetCDF function nc_def_var.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__variables.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param name the name of the new variable.
- * @param xtype the type of the new variable.
- * @param ndims the number of dimensions of the new variable.
- * @param dimidsp an array of ndims dimension IDs.
- * @param varidp a pointer that will get the ID of the new variable.
- * @return PIO_NOERR for success, error code otherwise. See
- * PIOc_Set_File_Error_Handling
- */
-int PIOc_def_var (int ncid, const char *name, nc_type xtype, int ndims,
- const int *dimidsp, int *varidp)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
- /* User must provide name and storage for varid. */
-    if (!name || !varidp || strlen(name) > NC_MAX_NAME)
-        return PIO_EINVAL;
-
-    /* Get the file information. */
-    if (!(file = pio_get_file_from_id(ncid)))
-        return PIO_EBADID;
- ios = file->iosystem;
-
- /* If using async, and not an IO task, then send parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_DEF_VAR;
- int namelen = strlen(name);
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&(ncid), 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&xtype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&ndims, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)dimidsp, ndims, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_def_var(ncid, name, xtype, ndims, dimidsp, varidp);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_def_var(ncid, name, xtype, ndims, dimidsp, varidp);
-#ifdef _NETCDF4
- /* For netCDF-4 serial files, turn on compression for this variable. */
- if (!ierr && file->iotype == PIO_IOTYPE_NETCDF4C)
- ierr = nc_def_var_deflate(ncid, *varidp, 0, 1, 1);
-
- /* For netCDF-4 parallel files, set parallel access to collective. */
- if (!ierr && file->iotype == PIO_IOTYPE_NETCDF4P)
- ierr = nc_var_par_access(ncid, *varidp, NC_COLLECTIVE);
-#endif /* _NETCDF4 */
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results. */
- if (!ierr)
- if (varidp)
- if ((mpierr = MPI_Bcast(varidp , 1, MPI_INT, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
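-/* Illustrative usage sketch (not part of the original source): define a
- * dimension and a variable on a newly created file, then leave define mode.
- * The ncid is assumed to come from PIOc_createfile(); names and sizes are
- * only examples.
- *
- *     int dimid, varid;
- *     PIOc_def_dim(ncid, "time", 12, &dimid);
- *     PIOc_def_var(ncid, "temperature", NC_FLOAT, 1, &dimid, &varid);
- *     PIOc_enddef(ncid);
- */
-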
-/**
- * @ingroup PIOc_inq_var_fill
- * The PIO-C interface for the NetCDF function nc_inq_var_fill.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__variables.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_inq_var_fill(int ncid, int varid, int *no_fill, void *fill_valuep)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
-
-    LOG((1, "PIOc_inq_var_fill ncid = %d", ncid));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_INQ_VAR_FILL;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_inq_var_fill(file->fh, varid, no_fill, fill_valuep);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_inq_var_fill(file->fh, varid, no_fill, fill_valuep);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. Ignore NULL parameters. */
- if (!ierr)
- if (fill_valuep)
- if ((mpierr = MPI_Bcast(fill_valuep, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
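-/* Illustrative usage sketch (not part of the original source): query the
- * fill settings of a variable. The buffer passed for the fill value must
- * match the variable's type (a float variable is assumed here).
- *
- *     int no_fill;
- *     float fill_value;
- *     PIOc_inq_var_fill(ncid, varid, &no_fill, &fill_value);
- */
-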
-/**
- * @ingroup PIOc_get_att
- * The PIO-C interface for the NetCDF function nc_get_att.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__attributes.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @return PIO_NOERR for success, error code otherwise. See
- * PIOc_Set_File_Error_Handling
- */
-int PIOc_get_att(int ncid, int varid, const char *name, void *ip)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
- PIO_Offset attlen, typelen;
- nc_type atttype;
-
- /* User must provide a name and destination pointer. */
- if (!name || !ip || strlen(name) > NC_MAX_NAME)
- return PIO_EINVAL;
-
- LOG((1, "PIOc_get_att ncid %d varid %d name %s", ncid, varid, name));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* Run these on all tasks if async is not in use, but only on
- * non-IO tasks if async is in use. */
- if (!ios->async_interface || !ios->ioproc)
- {
- /* Get the type and length of the attribute. */
- if ((ierr = PIOc_inq_att(file->fh, varid, name, &atttype, &attlen)))
- {
- check_netcdf(file, ierr, __FILE__, __LINE__);
- return ierr;
- }
-
- /* Get the length (in bytes) of the type. */
- if ((ierr = PIOc_inq_type(file->fh, atttype, NULL, &typelen)))
- {
- check_netcdf(file, ierr, __FILE__, __LINE__);
- return ierr;
- }
- }
-
- /* If async is in use, and this is not an IO task, bcast the
- * parameters and the attribute and type information we fetched. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_GET_ATT;
-
- /* Send the message to IO master. */
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- /* Send the function parameters. */
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- int namelen = strlen(name);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&file->iotype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&atttype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&attlen, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- /* Broadcast values currently only known on computation tasks to IO tasks. */
-        LOG((2, "PIOc_get_att bcast from comproot = %d attlen = %d typelen = %d", ios->comproot, (int)attlen, (int)typelen));
- if ((mpierr = MPI_Bcast(&attlen, 1, MPI_OFFSET, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if ((mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-        LOG((2, "PIOc_get_att bcast complete attlen = %d typelen = %d", (int)attlen, (int)typelen));
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_get_att(file->fh, varid, name, ip);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_get_att(file->fh, varid, name, ip);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Broadcast results to all tasks. */
- if (!ierr)
- {
- if ((mpierr = MPI_Bcast(ip, (int)attlen * typelen, MPI_BYTE, ios->ioroot,
- ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- }
- return ierr;
-}
-
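
A minimal usage sketch, assuming ncid and varid come from PIOc_openfile()/PIOc_createfile() and PIOc_def_var(), and that the attribute name "units" is illustrative only: the same inquire-then-read sequence PIOc_get_att() runs internally can be driven from user code when an attribute's type and length are not known in advance. As with PIOc_get_att() itself, every call here is collective over the I/O system's union communicator.

    #include <stdlib.h>
    #include <pio.h>

    /* Sketch: read a variable attribute of unknown type and length. */
    static int read_units_att(int ncid, int varid)
    {
        nc_type atttype;
        PIO_Offset attlen, typelen;
        void *data;
        int ret;

        /* Find the attribute's external type and element count. */
        if ((ret = PIOc_inq_att(ncid, varid, "units", &atttype, &attlen)))
            return ret;

        /* Find the size in bytes of one element of that type. */
        if ((ret = PIOc_inq_type(ncid, atttype, NULL, &typelen)))
            return ret;

        /* Allocate a buffer for the whole attribute and read it raw. */
        if (!(data = malloc((size_t)(attlen * typelen))))
            return PIO_ENOMEM; /* assumed to mirror netCDF's NC_ENOMEM */
        ret = PIOc_get_att(ncid, varid, "units", data);

        free(data);
        return ret;
    }
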
-/**
- * @ingroup PIOc_put_att
- * The PIO-C interface for the NetCDF function nc_put_att.
- *
- * This routine is called collectively by all tasks in the communicator
- * ios.union_comm. For more information on the underlying NetCDF command,
- * please read about this function in the NetCDF documentation at:
- * http://www.unidata.ucar.edu/software/netcdf/docs/group__attributes.html
- *
- * @param ncid the ncid of the open file, obtained from
- * PIOc_openfile() or PIOc_createfile().
- * @param varid the variable ID.
- * @param name the name of the attribute to write.
- * @param xtype the netCDF external type of the attribute.
- * @param len the number of elements (not bytes) in the attribute.
- * @param op pointer to the data to be written.
- * @return PIO_NOERR for success, error code otherwise. See PIOc_Set_File_Error_Handling
- */
-int PIOc_put_att(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const void *op)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- PIO_Offset typelen; /* Length (in bytes) of the type. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
-    int mpierr = MPI_SUCCESS, mpierr2;  /* Return codes from MPI function calls. */
-
-    /* User must provide an attribute name of legal length. */
-    if (!name || strlen(name) > NC_MAX_NAME)
-        return PIO_EINVAL;
-
-    LOG((1, "PIOc_put_att ncid = %d varid = %d name = %s", ncid, varid, name));
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* Run these on all tasks if async is not in use, but only on
- * non-IO tasks if async is in use. */
- if (!ios->async_interface || !ios->ioproc)
- {
- /* Get the length (in bytes) of the type. */
- if ((ierr = PIOc_inq_type(ncid, xtype, NULL, &typelen)))
- {
- check_netcdf(file, ierr, __FILE__, __LINE__);
- return ierr;
- }
-        LOG((2, "PIOc_put_att ncid = %d typelen = %d", ncid, (int)typelen));
- }
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_PUT_ATT;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- if (!mpierr)
- mpierr = MPI_Bcast(&file->fh, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- int namelen = strlen(name);
- if (!mpierr)
- mpierr = MPI_Bcast(&namelen, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)name, namelen + 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&xtype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&len, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast((void *)op, len * typelen, MPI_BYTE, ios->compmaster,
- ios->intercomm);
-            LOG((2, "PIOc_put_att finished bcast ncid = %d varid = %d namelen = %d name = %s "
-                 "len = %d typelen = %d", file->fh, varid, namelen, name, (int)len, (int)typelen));
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- return check_mpi(file, mpierr, __FILE__, __LINE__);
-
- /* Broadcast values currently only known on computation tasks to IO tasks. */
-        LOG((2, "PIOc_put_att bcast from comproot = %d typelen = %d", ios->comproot, (int)typelen));
- if ((mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, ios->comproot, ios->my_comm)))
- check_mpi(file, mpierr, __FILE__, __LINE__);
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- ierr = ncmpi_put_att(file->fh, varid, name, xtype, len, op);
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- ierr = nc_put_att(file->fh, varid, name, xtype, (size_t)len, op);
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- {
- check_mpi(file, mpierr, __FILE__, __LINE__);
- return PIO_EIO;
- }
- check_netcdf(file, ierr, __FILE__, __LINE__);
-
- return ierr;
-}
-
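
The write path is symmetric; a brief sketch, assuming ncid and varid come from PIOc_createfile() and PIOc_def_var() with the file still in define mode, and with illustrative attribute names and values. Attributes are written either through PIOc_put_att() or the typed wrappers that follow below; len counts elements of the given type, not bytes.

    #include <string.h>
    #include <pio.h>

    /* Sketch: write one text attribute and one double attribute. */
    static int write_example_atts(int ncid, int varid)
    {
        const char *long_name = "example variable";
        double fillval = -999.0;
        int ret;

        /* Text attributes go through the NC_CHAR convenience wrapper. */
        if ((ret = PIOc_put_att_text(ncid, varid, "long_name",
                                     strlen(long_name), long_name)))
            return ret;

        /* Numeric attributes pass an explicit external type and length. */
        return PIOc_put_att_double(ncid, varid, "_FillValue", NC_DOUBLE,
                                   1, &fillval);
    }
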
-/**
- * @ingroup PIOc_get_att_double
- * The PIO-C interface for the NetCDF function nc_get_att_double.
- */
-int PIOc_get_att_double(int ncid, int varid, const char *name, double *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_uchar
- * The PIO-C interface for the NetCDF function nc_get_att_uchar.
- */
-int PIOc_get_att_uchar (int ncid, int varid, const char *name, unsigned char *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_ushort
- * The PIO-C interface for the NetCDF function nc_get_att_ushort.
- */
-int PIOc_get_att_ushort (int ncid, int varid, const char *name, unsigned short *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_uint
- * The PIO-C interface for the NetCDF function nc_get_att_uint.
- */
-int PIOc_get_att_uint (int ncid, int varid, const char *name, unsigned int *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_long
- * The PIO-C interface for the NetCDF function nc_get_att_long.
- */
-int PIOc_get_att_long (int ncid, int varid, const char *name, long *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_ubyte
- * The PIO-C interface for the NetCDF function nc_get_att_ubyte.
- */
-int PIOc_get_att_ubyte (int ncid, int varid, const char *name, unsigned char *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_text
- * The PIO-C interface for the NetCDF function nc_get_att_text.
- */
-int PIOc_get_att_text (int ncid, int varid, const char *name, char *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_schar
- * The PIO-C interface for the NetCDF function nc_get_att_schar.
- */
-int PIOc_get_att_schar (int ncid, int varid, const char *name, signed char *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_ulonglong
- * The PIO-C interface for the NetCDF function nc_get_att_ulonglong.
- */
-int PIOc_get_att_ulonglong (int ncid, int varid, const char *name, unsigned long long *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_short
- * The PIO-C interface for the NetCDF function nc_get_att_short.
- */
-int PIOc_get_att_short (int ncid, int varid, const char *name, short *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_int
- * The PIO-C interface for the NetCDF function nc_get_att_int.
- */
-int PIOc_get_att_int(int ncid, int varid, const char *name, int *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_longlong
- * The PIO-C interface for the NetCDF function nc_get_att_longlong.
- */
-int PIOc_get_att_longlong(int ncid, int varid, const char *name, long long *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_get_att_float
- * The PIO-C interface for the NetCDF function nc_get_att_float.
- */
-int PIOc_get_att_float (int ncid, int varid, const char *name, float *ip)
-{
- return PIOc_get_att(ncid, varid, name, (void *)ip);
-}
-
-/**
- * @ingroup PIOc_put_att_schar
- * The PIO-C interface for the NetCDF function nc_put_att_schar.
- */
-int PIOc_put_att_schar(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const signed char *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_long
- * The PIO-C interface for the NetCDF function nc_put_att_long.
- */
-int PIOc_put_att_long(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const long *op)
-{
-    return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_int
- * The PIO-C interface for the NetCDF function nc_put_att_int.
- */
-int PIOc_put_att_int(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const int *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_uchar
- * The PIO-C interface for the NetCDF function nc_put_att_uchar.
- */
-int PIOc_put_att_uchar(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const unsigned char *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_longlong
- * The PIO-C interface for the NetCDF function nc_put_att_longlong.
- */
-int PIOc_put_att_longlong(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const long long *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_uint
- * The PIO-C interface for the NetCDF function nc_put_att_uint.
- */
-int PIOc_put_att_uint(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const unsigned int *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_ubyte
- * The PIO-C interface for the NetCDF function nc_put_att_ubyte.
- */
-int PIOc_put_att_ubyte(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const unsigned char *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_float
- * The PIO-C interface for the NetCDF function nc_put_att_float.
- */
-int PIOc_put_att_float(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const float *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_ulonglong
- * The PIO-C interface for the NetCDF function nc_put_att_ulonglong.
- */
-int PIOc_put_att_ulonglong(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const unsigned long long *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_ushort
- * The PIO-C interface for the NetCDF function nc_put_att_ushort.
- */
-int PIOc_put_att_ushort(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const unsigned short *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_text
- * The PIO-C interface for the NetCDF function nc_put_att_text.
- */
-int PIOc_put_att_text(int ncid, int varid, const char *name,
- PIO_Offset len, const char *op)
-{
- return PIOc_put_att(ncid, varid, name, NC_CHAR, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_short
- * The PIO-C interface for the NetCDF function nc_put_att_short.
- */
-int PIOc_put_att_short(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const short *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-/**
- * @ingroup PIOc_put_att_double
- * The PIO-C interface for the NetCDF function nc_put_att_double.
- */
-int PIOc_put_att_double(int ncid, int varid, const char *name, nc_type xtype,
- PIO_Offset len, const double *op)
-{
- return PIOc_put_att(ncid, varid, name, xtype, len, op);
-}
-
-
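
When the attribute's type is known ahead of time, the typed wrappers above can be called directly without the inquiry step. Note that these wrappers simply forward to the generic PIOc_get_att()/PIOc_put_att(), so no type conversion is performed and the buffer type must match the stored type. A read-back sketch under the same illustrative assumptions (ncid, varid, and attribute names are placeholders):

    #include <pio.h>

    /* Sketch: read back attributes whose types are known in advance. */
    static int read_known_atts(int ncid, int varid)
    {
        double fillval;
        char long_name[64]; /* assumed large enough for this example */
        int ret;

        if ((ret = PIOc_get_att_double(ncid, varid, "_FillValue", &fillval)))
            return ret;

        /* Like nc_get_att_text(), the text variant does not NUL-terminate. */
        return PIOc_get_att_text(ncid, varid, "long_name", long_name);
    }
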
diff --git a/cime/externals/pio2/src/clib/pio_put_nc.c b/cime/externals/pio2/src/clib/pio_put_nc.c
index 96783586ad2f..c0b621eb903d 100644
--- a/cime/externals/pio2/src/clib/pio_put_nc.c
+++ b/cime/externals/pio2/src/clib/pio_put_nc.c
@@ -4,11 +4,11 @@
///
/// PIO interface to nc_put_vars_uchar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned char *op)
+int PIOc_put_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned char *op)
{
int ierr;
int msg;
@@ -28,7 +28,7 @@ int PIOc_put_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARS_UCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -55,7 +55,7 @@ int PIOc_put_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
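
For the strided writers in this family, a short sketch under assumed inputs: start, count, and stride follow the usual netCDF vars conventions, given here as PIO_Offset arrays, and the 1-D variable of length 8 referenced by ncid/varid is illustrative only.

    #include <pio.h>

    /* Sketch: write every other element of a 1-D unsigned-byte variable. */
    static int write_every_other(int ncid, int varid)
    {
        unsigned char vals[4] = {1, 2, 3, 4};
        PIO_Offset start[1]  = {0};
        PIO_Offset count[1]  = {4};
        PIO_Offset stride[1] = {2};

        /* Collective over the union communicator, like the other put calls. */
        return PIOc_put_vars_uchar(ncid, varid, start, count, stride, vals);
    }
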
@@ -82,11 +82,11 @@ int PIOc_put_vars_uchar (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_vars_ushort
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned short *op)
+int PIOc_put_vars_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned short *op)
{
int ierr;
int msg;
@@ -106,7 +106,7 @@ int PIOc_put_vars_ushort (int ncid, int varid, const PIO_Offset start[], const P
msg = PIO_MSG_PUT_VARS_USHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -133,7 +133,7 @@ int PIOc_put_vars_ushort (int ncid, int varid, const PIO_Offset start[], const P
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -160,11 +160,11 @@ int PIOc_put_vars_ushort (int ncid, int varid, const PIO_Offset start[], const P
///
/// PIO interface to nc_put_vars_ulonglong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned long long *op)
+int PIOc_put_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned long long *op)
{
int ierr;
int msg;
@@ -184,7 +184,7 @@ int PIOc_put_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
msg = PIO_MSG_PUT_VARS_ULONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -211,7 +211,7 @@ int PIOc_put_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -238,11 +238,11 @@ int PIOc_put_vars_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
///
/// PIO interface to nc_put_varm
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -262,7 +262,7 @@ int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
msg = PIO_MSG_PUT_VARM;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -289,7 +289,7 @@ int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -316,11 +316,11 @@ int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
///
/// PIO interface to nc_put_vars_uint
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned int *op)
+int PIOc_put_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const unsigned int *op)
{
int ierr;
int msg;
@@ -340,7 +340,7 @@ int PIOc_put_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARS_UINT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -367,7 +367,7 @@ int PIOc_put_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -394,11 +394,11 @@ int PIOc_put_vars_uint (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_varm_uchar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned char *op)
+int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned char *op)
{
int ierr;
int msg;
@@ -418,7 +418,7 @@ int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARM_UCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -445,7 +445,7 @@ int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -472,11 +472,11 @@ int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_var_ushort
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_ushort (int ncid, int varid, const unsigned short *op)
+int PIOc_put_var_ushort (int ncid, int varid, const unsigned short *op)
{
int ierr;
int msg;
@@ -496,7 +496,7 @@ int PIOc_put_var_ushort (int ncid, int varid, const unsigned short *op)
msg = PIO_MSG_PUT_VAR_USHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -523,7 +523,7 @@ int PIOc_put_var_ushort (int ncid, int varid, const unsigned short *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -550,11 +550,11 @@ int PIOc_put_var_ushort (int ncid, int varid, const unsigned short *op)
///
/// PIO interface to nc_put_var1_longlong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_longlong (int ncid, int varid, const PIO_Offset index[], const long long *op)
+int PIOc_put_var1_longlong (int ncid, int varid, const PIO_Offset index[], const long long *op)
{
int ierr;
int msg;
@@ -574,7 +574,7 @@ int PIOc_put_var1_longlong (int ncid, int varid, const PIO_Offset index[], const
msg = PIO_MSG_PUT_VAR1_LONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -601,7 +601,7 @@ int PIOc_put_var1_longlong (int ncid, int varid, const PIO_Offset index[], const
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -628,11 +628,11 @@ int PIOc_put_var1_longlong (int ncid, int varid, const PIO_Offset index[], const
///
/// PIO interface to nc_put_vara_uchar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned char *op)
+int PIOc_put_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned char *op)
{
int ierr;
int msg;
@@ -652,7 +652,7 @@ int PIOc_put_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARA_UCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -679,7 +679,7 @@ int PIOc_put_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -706,11 +706,11 @@ int PIOc_put_vara_uchar (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_varm_short
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const short *op)
+int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const short *op)
{
int ierr;
int msg;
@@ -730,7 +730,7 @@ int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARM_SHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -757,7 +757,7 @@ int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -784,11 +784,11 @@ int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_var1_long
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_long (int ncid, int varid, const PIO_Offset index[], const long *ip)
+int PIOc_put_var1_long (int ncid, int varid, const PIO_Offset index[], const long *ip)
{
int ierr;
int msg;
@@ -808,7 +808,7 @@ int PIOc_put_var1_long (int ncid, int varid, const PIO_Offset index[], const lon
msg = PIO_MSG_PUT_VAR1_LONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -835,7 +835,7 @@ int PIOc_put_var1_long (int ncid, int varid, const PIO_Offset index[], const lon
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -862,11 +862,11 @@ int PIOc_put_var1_long (int ncid, int varid, const PIO_Offset index[], const lon
///
/// PIO interface to nc_put_vars_long
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long *op)
+int PIOc_put_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long *op)
{
int ierr;
int msg;
@@ -886,7 +886,7 @@ int PIOc_put_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARS_LONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -913,7 +913,7 @@ int PIOc_put_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -940,11 +940,11 @@ int PIOc_put_vars_long (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_var_short
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_short (int ncid, int varid, const short *op)
+int PIOc_put_var_short (int ncid, int varid, const short *op)
{
int ierr;
int msg;
@@ -964,7 +964,7 @@ int PIOc_put_var_short (int ncid, int varid, const short *op)
msg = PIO_MSG_PUT_VAR_SHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -991,7 +991,7 @@ int PIOc_put_var_short (int ncid, int varid, const short *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1018,11 +1018,11 @@ int PIOc_put_var_short (int ncid, int varid, const short *op)
///
/// PIO interface to nc_put_vara_int
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const int *op)
+int PIOc_put_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const int *op)
{
int ierr;
int msg;
@@ -1042,7 +1042,7 @@ int PIOc_put_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_
msg = PIO_MSG_PUT_VARA_INT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1069,7 +1069,7 @@ int PIOc_put_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1096,11 +1096,11 @@ int PIOc_put_vara_int (int ncid, int varid, const PIO_Offset start[], const PIO_
///
/// PIO interface to nc_put_var1_ushort
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_ushort (int ncid, int varid, const PIO_Offset index[], const unsigned short *op)
+int PIOc_put_var1_ushort (int ncid, int varid, const PIO_Offset index[], const unsigned short *op)
{
int ierr;
int msg;
@@ -1120,7 +1120,7 @@ int PIOc_put_var1_ushort (int ncid, int varid, const PIO_Offset index[], const u
msg = PIO_MSG_PUT_VAR1_USHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1147,7 +1147,7 @@ int PIOc_put_var1_ushort (int ncid, int varid, const PIO_Offset index[], const u
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1174,11 +1174,11 @@ int PIOc_put_var1_ushort (int ncid, int varid, const PIO_Offset index[], const u
///
/// PIO interface to nc_put_vara_text
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const char *op)
+int PIOc_put_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const char *op)
{
int ierr;
int msg;
@@ -1198,7 +1198,7 @@ int PIOc_put_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARA_TEXT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1225,7 +1225,7 @@ int PIOc_put_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1252,11 +1252,11 @@ int PIOc_put_vara_text (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_varm_text
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const char *op)
+int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const char *op)
{
int ierr;
int msg;
@@ -1276,7 +1276,7 @@ int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARM_TEXT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1303,7 +1303,7 @@ int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1330,11 +1330,11 @@ int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_varm_ushort
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned short *op)
+int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned short *op)
{
int ierr;
int msg;
@@ -1354,7 +1354,7 @@ int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const P
msg = PIO_MSG_PUT_VARM_USHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1381,7 +1381,7 @@ int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const P
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1408,11 +1408,11 @@ int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const P
///
/// PIO interface to nc_put_var_ulonglong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_ulonglong (int ncid, int varid, const unsigned long long *op)
+int PIOc_put_var_ulonglong (int ncid, int varid, const unsigned long long *op)
{
int ierr;
int msg;
@@ -1432,7 +1432,7 @@ int PIOc_put_var_ulonglong (int ncid, int varid, const unsigned long long *op)
msg = PIO_MSG_PUT_VAR_ULONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1459,7 +1459,7 @@ int PIOc_put_var_ulonglong (int ncid, int varid, const unsigned long long *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1486,11 +1486,11 @@ int PIOc_put_var_ulonglong (int ncid, int varid, const unsigned long long *op)
///
/// PIO interface to nc_put_var_int
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_int (int ncid, int varid, const int *op)
+int PIOc_put_var_int (int ncid, int varid, const int *op)
{
int ierr;
int msg;
@@ -1510,7 +1510,7 @@ int PIOc_put_var_int (int ncid, int varid, const int *op)
msg = PIO_MSG_PUT_VAR_INT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1537,7 +1537,7 @@ int PIOc_put_var_int (int ncid, int varid, const int *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1564,11 +1564,11 @@ int PIOc_put_var_int (int ncid, int varid, const int *op)
///
/// PIO interface to nc_put_var_longlong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_longlong (int ncid, int varid, const long long *op)
+int PIOc_put_var_longlong (int ncid, int varid, const long long *op)
{
int ierr;
int msg;
@@ -1588,7 +1588,7 @@ int PIOc_put_var_longlong (int ncid, int varid, const long long *op)
msg = PIO_MSG_PUT_VAR_LONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1615,7 +1615,7 @@ int PIOc_put_var_longlong (int ncid, int varid, const long long *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1642,11 +1642,11 @@ int PIOc_put_var_longlong (int ncid, int varid, const long long *op)
///
/// PIO interface to nc_put_var_schar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_schar (int ncid, int varid, const signed char *op)
+int PIOc_put_var_schar (int ncid, int varid, const signed char *op)
{
int ierr;
int msg;
@@ -1666,7 +1666,7 @@ int PIOc_put_var_schar (int ncid, int varid, const signed char *op)
msg = PIO_MSG_PUT_VAR_SCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1693,7 +1693,7 @@ int PIOc_put_var_schar (int ncid, int varid, const signed char *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1720,11 +1720,11 @@ int PIOc_put_var_schar (int ncid, int varid, const signed char *op)
///
/// PIO interface to nc_put_var_uint
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_uint (int ncid, int varid, const unsigned int *op)
+int PIOc_put_var_uint (int ncid, int varid, const unsigned int *op)
{
int ierr;
int msg;
@@ -1744,7 +1744,7 @@ int PIOc_put_var_uint (int ncid, int varid, const unsigned int *op)
msg = PIO_MSG_PUT_VAR_UINT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1771,7 +1771,7 @@ int PIOc_put_var_uint (int ncid, int varid, const unsigned int *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1798,11 +1798,11 @@ int PIOc_put_var_uint (int ncid, int varid, const unsigned int *op)
///
/// PIO interface to nc_put_var
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var (int ncid, int varid, const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_put_var (int ncid, int varid, const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -1822,7 +1822,7 @@ int PIOc_put_var (int ncid, int varid, const void *buf, PIO_Offset bufcount, MPI
msg = PIO_MSG_PUT_VAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1849,7 +1849,7 @@ int PIOc_put_var (int ncid, int varid, const void *buf, PIO_Offset bufcount, MPI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1876,11 +1876,11 @@ int PIOc_put_var (int ncid, int varid, const void *buf, PIO_Offset bufcount, MPI
///
/// PIO interface to nc_put_vara_ushort
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned short *op)
+int PIOc_put_vara_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned short *op)
{
int ierr;
int msg;
@@ -1900,7 +1900,7 @@ int PIOc_put_vara_ushort (int ncid, int varid, const PIO_Offset start[], const P
msg = PIO_MSG_PUT_VARA_USHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -1927,7 +1927,7 @@ int PIOc_put_vara_ushort (int ncid, int varid, const PIO_Offset start[], const P
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -1954,11 +1954,11 @@ int PIOc_put_vara_ushort (int ncid, int varid, const PIO_Offset start[], const P
///
/// PIO interface to nc_put_vars_short
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const short *op)
+int PIOc_put_vars_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const short *op)
{
int ierr;
int msg;
@@ -1978,7 +1978,7 @@ int PIOc_put_vars_short (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARS_SHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2005,7 +2005,7 @@ int PIOc_put_vars_short (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2032,11 +2032,11 @@ int PIOc_put_vars_short (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_vara_uint
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned int *op)
+int PIOc_put_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned int *op)
{
int ierr;
int msg;
@@ -2056,7 +2056,7 @@ int PIOc_put_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARA_UINT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2083,7 +2083,7 @@ int PIOc_put_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2110,11 +2110,11 @@ int PIOc_put_vara_uint (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_vara_schar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const signed char *op)
+int PIOc_put_vara_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const signed char *op)
{
int ierr;
int msg;
@@ -2134,7 +2134,7 @@ int PIOc_put_vara_schar (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARA_SCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2161,7 +2161,7 @@ int PIOc_put_vara_schar (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2188,11 +2188,11 @@ int PIOc_put_vara_schar (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_varm_ulonglong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned long long *op)
+int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned long long *op)
{
int ierr;
int msg;
@@ -2212,7 +2212,7 @@ int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
msg = PIO_MSG_PUT_VARM_ULONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2239,7 +2239,7 @@ int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2266,11 +2266,11 @@ int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
///
/// PIO interface to nc_put_var1_uchar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_uchar (int ncid, int varid, const PIO_Offset index[], const unsigned char *op)
+int PIOc_put_var1_uchar (int ncid, int varid, const PIO_Offset index[], const unsigned char *op)
{
int ierr;
int msg;
@@ -2290,7 +2290,7 @@ int PIOc_put_var1_uchar (int ncid, int varid, const PIO_Offset index[], const un
msg = PIO_MSG_PUT_VAR1_UCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2317,7 +2317,7 @@ int PIOc_put_var1_uchar (int ncid, int varid, const PIO_Offset index[], const un
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2344,11 +2344,11 @@ int PIOc_put_var1_uchar (int ncid, int varid, const PIO_Offset index[], const un
///
/// PIO interface to nc_put_varm_int
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const int *op)
+int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const int *op)
{
int ierr;
int msg;
@@ -2368,7 +2368,7 @@ int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_
msg = PIO_MSG_PUT_VARM_INT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2395,7 +2395,7 @@ int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2422,11 +2422,11 @@ int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_
///
/// PIO interface to nc_put_vars_schar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const signed char *op)
+int PIOc_put_vars_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const signed char *op)
{
int ierr;
int msg;
@@ -2446,7 +2446,7 @@ int PIOc_put_vars_schar (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARS_SCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2473,7 +2473,7 @@ int PIOc_put_vars_schar (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2500,11 +2500,11 @@ int PIOc_put_vars_schar (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_var1
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1 (int ncid, int varid, const PIO_Offset index[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_put_var1 (int ncid, int varid, const PIO_Offset index[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -2524,7 +2524,7 @@ int PIOc_put_var1 (int ncid, int varid, const PIO_Offset index[], const void *bu
msg = PIO_MSG_PUT_VAR1;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2551,7 +2551,7 @@ int PIOc_put_var1 (int ncid, int varid, const PIO_Offset index[], const void *bu
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2578,11 +2578,11 @@ int PIOc_put_var1 (int ncid, int varid, const PIO_Offset index[], const void *bu
///
/// PIO interface to nc_put_vara_float
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const float *op)
+int PIOc_put_vara_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const float *op)
{
int ierr;
int msg;
@@ -2602,7 +2602,7 @@ int PIOc_put_vara_float (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARA_FLOAT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2629,7 +2629,7 @@ int PIOc_put_vara_float (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2656,11 +2656,11 @@ int PIOc_put_vara_float (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_var1_float
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_float (int ncid, int varid, const PIO_Offset index[], const float *op)
+int PIOc_put_var1_float (int ncid, int varid, const PIO_Offset index[], const float *op)
{
int ierr;
int msg;
@@ -2680,7 +2680,7 @@ int PIOc_put_var1_float (int ncid, int varid, const PIO_Offset index[], const fl
msg = PIO_MSG_PUT_VAR1_FLOAT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2707,7 +2707,7 @@ int PIOc_put_var1_float (int ncid, int varid, const PIO_Offset index[], const fl
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2734,11 +2734,11 @@ int PIOc_put_var1_float (int ncid, int varid, const PIO_Offset index[], const fl
///
/// PIO interface to nc_put_varm_float
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const float *op)
+int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const float *op)
{
int ierr;
int msg;
@@ -2758,7 +2758,7 @@ int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARM_FLOAT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2785,7 +2785,7 @@ int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2812,11 +2812,11 @@ int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_var1_text
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_text (int ncid, int varid, const PIO_Offset index[], const char *op)
+int PIOc_put_var1_text (int ncid, int varid, const PIO_Offset index[], const char *op)
{
int ierr;
int msg;
@@ -2836,7 +2836,7 @@ int PIOc_put_var1_text (int ncid, int varid, const PIO_Offset index[], const cha
msg = PIO_MSG_PUT_VAR1_TEXT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2863,7 +2863,7 @@ int PIOc_put_var1_text (int ncid, int varid, const PIO_Offset index[], const cha
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2890,11 +2890,11 @@ int PIOc_put_var1_text (int ncid, int varid, const PIO_Offset index[], const cha
///
/// PIO interface to nc_put_vars_text
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const char *op)
+int PIOc_put_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const char *op)
{
int ierr;
int msg;
@@ -2914,7 +2914,7 @@ int PIOc_put_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARS_TEXT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -2941,7 +2941,7 @@ int PIOc_put_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -2968,11 +2968,11 @@ int PIOc_put_vars_text (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_varm_long
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long *op)
+int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long *op)
{
int ierr;
int msg;
@@ -2992,7 +2992,7 @@ int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARM_LONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3019,7 +3019,7 @@ int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3046,11 +3046,11 @@ int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_vars_double
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const double *op)
+int PIOc_put_vars_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const double *op)
{
int ierr;
int msg;
@@ -3070,7 +3070,7 @@ int PIOc_put_vars_double (int ncid, int varid, const PIO_Offset start[], const P
msg = PIO_MSG_PUT_VARS_DOUBLE;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3097,7 +3097,7 @@ int PIOc_put_vars_double (int ncid, int varid, const PIO_Offset start[], const P
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3124,11 +3124,11 @@ int PIOc_put_vars_double (int ncid, int varid, const PIO_Offset start[], const P
///
/// PIO interface to nc_put_vara_longlong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long long *op)
+int PIOc_put_vara_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long long *op)
{
int ierr;
int msg;
@@ -3148,7 +3148,7 @@ int PIOc_put_vara_longlong (int ncid, int varid, const PIO_Offset start[], const
msg = PIO_MSG_PUT_VARA_LONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3175,7 +3175,7 @@ int PIOc_put_vara_longlong (int ncid, int varid, const PIO_Offset start[], const
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3202,11 +3202,11 @@ int PIOc_put_vara_longlong (int ncid, int varid, const PIO_Offset start[], const
///
/// PIO interface to nc_put_var_double
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_double (int ncid, int varid, const double *op)
+int PIOc_put_var_double (int ncid, int varid, const double *op)
{
int ierr;
int msg;
@@ -3226,7 +3226,7 @@ int PIOc_put_var_double (int ncid, int varid, const double *op)
msg = PIO_MSG_PUT_VAR_DOUBLE;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3253,7 +3253,7 @@ int PIOc_put_var_double (int ncid, int varid, const double *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3280,11 +3280,11 @@ int PIOc_put_var_double (int ncid, int varid, const double *op)
///
/// PIO interface to nc_put_var_float
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_float (int ncid, int varid, const float *op)
+int PIOc_put_var_float (int ncid, int varid, const float *op)
{
int ierr;
int msg;
@@ -3304,7 +3304,7 @@ int PIOc_put_var_float (int ncid, int varid, const float *op)
msg = PIO_MSG_PUT_VAR_FLOAT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3331,7 +3331,7 @@ int PIOc_put_var_float (int ncid, int varid, const float *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3358,11 +3358,11 @@ int PIOc_put_var_float (int ncid, int varid, const float *op)
///
/// PIO interface to nc_put_var1_ulonglong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], const unsigned long long *op)
+int PIOc_put_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], const unsigned long long *op)
{
int ierr;
int msg;
@@ -3382,7 +3382,7 @@ int PIOc_put_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], cons
msg = PIO_MSG_PUT_VAR1_ULONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3409,7 +3409,7 @@ int PIOc_put_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], cons
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3436,11 +3436,11 @@ int PIOc_put_var1_ulonglong (int ncid, int varid, const PIO_Offset index[], cons
///
/// PIO interface to nc_put_varm_uint
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned int *op)
+int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned int *op)
{
int ierr;
int msg;
@@ -3460,7 +3460,7 @@ int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARM_UINT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3487,7 +3487,7 @@ int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3514,11 +3514,11 @@ int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_var1_uint
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_uint (int ncid, int varid, const PIO_Offset index[], const unsigned int *op)
+int PIOc_put_var1_uint (int ncid, int varid, const PIO_Offset index[], const unsigned int *op)
{
int ierr;
int msg;
@@ -3538,7 +3538,7 @@ int PIOc_put_var1_uint (int ncid, int varid, const PIO_Offset index[], const uns
msg = PIO_MSG_PUT_VAR1_UINT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3565,7 +3565,7 @@ int PIOc_put_var1_uint (int ncid, int varid, const PIO_Offset index[], const uns
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3592,11 +3592,11 @@ int PIOc_put_var1_uint (int ncid, int varid, const PIO_Offset index[], const uns
///
/// PIO interface to nc_put_var1_int
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_int (int ncid, int varid, const PIO_Offset index[], const int *op)
+int PIOc_put_var1_int (int ncid, int varid, const PIO_Offset index[], const int *op)
{
int ierr;
int msg;
@@ -3616,7 +3616,7 @@ int PIOc_put_var1_int (int ncid, int varid, const PIO_Offset index[], const int
msg = PIO_MSG_PUT_VAR1_INT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3643,7 +3643,7 @@ int PIOc_put_var1_int (int ncid, int varid, const PIO_Offset index[], const int
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3670,11 +3670,11 @@ int PIOc_put_var1_int (int ncid, int varid, const PIO_Offset index[], const int
///
/// PIO interface to nc_put_vars_float
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const float *op)
+int PIOc_put_vars_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const float *op)
{
int ierr;
int msg;
@@ -3694,7 +3694,7 @@ int PIOc_put_vars_float (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARS_FLOAT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3721,7 +3721,7 @@ int PIOc_put_vars_float (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3748,11 +3748,11 @@ int PIOc_put_vars_float (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_vara_short
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const short *op)
+int PIOc_put_vara_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const short *op)
{
int ierr;
int msg;
@@ -3772,7 +3772,7 @@ int PIOc_put_vara_short (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARA_SHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3799,7 +3799,7 @@ int PIOc_put_vara_short (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3826,11 +3826,11 @@ int PIOc_put_vara_short (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_var1_schar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_schar (int ncid, int varid, const PIO_Offset index[], const signed char *op)
+int PIOc_put_var1_schar (int ncid, int varid, const PIO_Offset index[], const signed char *op)
{
int ierr;
int msg;
@@ -3850,7 +3850,7 @@ int PIOc_put_var1_schar (int ncid, int varid, const PIO_Offset index[], const si
msg = PIO_MSG_PUT_VAR1_SCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3877,7 +3877,7 @@ int PIOc_put_var1_schar (int ncid, int varid, const PIO_Offset index[], const si
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3904,11 +3904,11 @@ int PIOc_put_var1_schar (int ncid, int varid, const PIO_Offset index[], const si
///
/// PIO interface to nc_put_vara_ulonglong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned long long *op)
+int PIOc_put_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const unsigned long long *op)
{
int ierr;
int msg;
@@ -3928,7 +3928,7 @@ int PIOc_put_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
msg = PIO_MSG_PUT_VARA_ULONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -3955,7 +3955,7 @@ int PIOc_put_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -3982,11 +3982,11 @@ int PIOc_put_vara_ulonglong (int ncid, int varid, const PIO_Offset start[], cons
///
/// PIO interface to nc_put_varm_double
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const double *op)
+int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const double *op)
{
int ierr;
int msg;
@@ -4006,7 +4006,7 @@ int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const P
msg = PIO_MSG_PUT_VARM_DOUBLE;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4033,7 +4033,7 @@ int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const P
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4060,11 +4060,11 @@ int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const P
///
/// PIO interface to nc_put_vara
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_put_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -4084,7 +4084,7 @@ int PIOc_put_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
msg = PIO_MSG_PUT_VARA;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4111,7 +4111,7 @@ int PIOc_put_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4138,11 +4138,11 @@ int PIOc_put_vara (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
///
/// PIO interface to nc_put_vara_long
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long *op)
+int PIOc_put_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const long *op)
{
int ierr;
int msg;
@@ -4162,7 +4162,7 @@ int PIOc_put_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO
msg = PIO_MSG_PUT_VARA_LONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4189,7 +4189,7 @@ int PIOc_put_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4216,11 +4216,11 @@ int PIOc_put_vara_long (int ncid, int varid, const PIO_Offset start[], const PIO
///
/// PIO interface to nc_put_var1_double
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_double (int ncid, int varid, const PIO_Offset index[], const double *op)
+int PIOc_put_var1_double (int ncid, int varid, const PIO_Offset index[], const double *op)
{
int ierr;
int msg;
@@ -4240,7 +4240,7 @@ int PIOc_put_var1_double (int ncid, int varid, const PIO_Offset index[], const d
msg = PIO_MSG_PUT_VAR1_DOUBLE;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4267,7 +4267,7 @@ int PIOc_put_var1_double (int ncid, int varid, const PIO_Offset index[], const d
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4294,11 +4294,11 @@ int PIOc_put_var1_double (int ncid, int varid, const PIO_Offset index[], const d
///
/// PIO interface to nc_put_varm_schar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const signed char *op)
+int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const signed char *op)
{
int ierr;
int msg;
@@ -4318,7 +4318,7 @@ int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PI
msg = PIO_MSG_PUT_VARM_SCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4345,7 +4345,7 @@ int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PI
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4372,11 +4372,11 @@ int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PI
///
/// PIO interface to nc_put_var_text
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_text (int ncid, int varid, const char *op)
+int PIOc_put_var_text (int ncid, int varid, const char *op)
{
int ierr;
int msg;
@@ -4396,7 +4396,7 @@ int PIOc_put_var_text (int ncid, int varid, const char *op)
msg = PIO_MSG_PUT_VAR_TEXT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4423,7 +4423,7 @@ int PIOc_put_var_text (int ncid, int varid, const char *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4450,11 +4450,11 @@ int PIOc_put_var_text (int ncid, int varid, const char *op)
///
/// PIO interface to nc_put_vars_int
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const int *op)
+int PIOc_put_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const int *op)
{
int ierr;
int msg;
@@ -4474,7 +4474,7 @@ int PIOc_put_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_
msg = PIO_MSG_PUT_VARS_INT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4501,7 +4501,7 @@ int PIOc_put_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4528,11 +4528,11 @@ int PIOc_put_vars_int (int ncid, int varid, const PIO_Offset start[], const PIO_
///
/// PIO interface to nc_put_var1_short
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var1_short (int ncid, int varid, const PIO_Offset index[], const short *op)
+int PIOc_put_var1_short (int ncid, int varid, const PIO_Offset index[], const short *op)
{
int ierr;
int msg;
@@ -4552,7 +4552,7 @@ int PIOc_put_var1_short (int ncid, int varid, const PIO_Offset index[], const sh
msg = PIO_MSG_PUT_VAR1_SHORT;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4579,7 +4579,7 @@ int PIOc_put_var1_short (int ncid, int varid, const PIO_Offset index[], const sh
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4606,11 +4606,11 @@ int PIOc_put_var1_short (int ncid, int varid, const PIO_Offset index[], const sh
///
/// PIO interface to nc_put_vars_longlong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long long *op)
+int PIOc_put_vars_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const long long *op)
{
int ierr;
int msg;
@@ -4630,7 +4630,7 @@ int PIOc_put_vars_longlong (int ncid, int varid, const PIO_Offset start[], const
msg = PIO_MSG_PUT_VARS_LONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4657,7 +4657,7 @@ int PIOc_put_vars_longlong (int ncid, int varid, const PIO_Offset start[], const
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4684,11 +4684,11 @@ int PIOc_put_vars_longlong (int ncid, int varid, const PIO_Offset start[], const
///
/// PIO interface to nc_put_vara_double
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const double *op)
+int PIOc_put_vara_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const double *op)
{
int ierr;
int msg;
@@ -4708,7 +4708,7 @@ int PIOc_put_vara_double (int ncid, int varid, const PIO_Offset start[], const P
msg = PIO_MSG_PUT_VARA_DOUBLE;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4735,7 +4735,7 @@ int PIOc_put_vara_double (int ncid, int varid, const PIO_Offset start[], const P
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4762,11 +4762,11 @@ int PIOc_put_vara_double (int ncid, int varid, const PIO_Offset start[], const P
///
/// PIO interface to nc_put_vars
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
+int PIOc_put_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
{
int ierr;
int msg;
@@ -4786,7 +4786,7 @@ int PIOc_put_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
msg = PIO_MSG_PUT_VARS;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4813,7 +4813,7 @@ int PIOc_put_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4840,11 +4840,11 @@ int PIOc_put_vars (int ncid, int varid, const PIO_Offset start[], const PIO_Offs
///
/// PIO interface to nc_put_var_uchar
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_uchar (int ncid, int varid, const unsigned char *op)
+int PIOc_put_var_uchar (int ncid, int varid, const unsigned char *op)
{
int ierr;
int msg;
@@ -4864,7 +4864,7 @@ int PIOc_put_var_uchar (int ncid, int varid, const unsigned char *op)
msg = PIO_MSG_PUT_VAR_UCHAR;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4891,7 +4891,7 @@ int PIOc_put_var_uchar (int ncid, int varid, const unsigned char *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4918,11 +4918,11 @@ int PIOc_put_var_uchar (int ncid, int varid, const unsigned char *op)
///
/// PIO interface to nc_put_var_long
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_var_long (int ncid, int varid, const long *op)
+int PIOc_put_var_long (int ncid, int varid, const long *op)
{
int ierr;
int msg;
@@ -4942,7 +4942,7 @@ int PIOc_put_var_long (int ncid, int varid, const long *op)
msg = PIO_MSG_PUT_VAR_LONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -4969,7 +4969,7 @@ int PIOc_put_var_long (int ncid, int varid, const long *op)
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
@@ -4996,11 +4996,11 @@ int PIOc_put_var_long (int ncid, int varid, const long *op)
///
/// PIO interface to nc_put_varm_longlong
///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
+/// This routine is called collectively by all tasks in the communicator ios.union_comm.
+///
/// Refer to the netcdf documentation.
///
-int PIOc_put_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long long *op)
+int PIOc_put_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long long *op)
{
int ierr;
int msg;
@@ -5020,7 +5020,7 @@ int PIOc_put_varm_longlong (int ncid, int varid, const PIO_Offset start[], const
msg = PIO_MSG_PUT_VARM_LONGLONG;
if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
+ if(ios->compmaster)
mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
}
@@ -5047,7 +5047,7 @@ int PIOc_put_varm_longlong (int ncid, int varid, const PIO_Offset start[], const
vdesc = file->varlist + varid;
if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
+ vdesc->request = realloc(vdesc->request,
sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
}
request = vdesc->request+vdesc->nreqs;
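All of the put_var wrappers patched above, and the type-neutral PIOc_put_vars_tc implementation removed below, share one calling pattern: every task in ios.union_comm makes the same collective call, describing the slab to write with start/count (and optionally stride) arrays, and the I/O tasks forward the request to pnetcdf or netcdf. A minimal caller sketch for the vars variant follows; it assumes ncid and varid were obtained earlier (for example via PIOc_createfile and PIOc_def_var), that the variable is a 4x8 double array, and write_block is a hypothetical helper name used only for illustration.

    #include <pio.h>

    /* Hypothetical helper: collectively write a full 4x8 double variable.
     * Every task in ios.union_comm must call this with the same arguments. */
    static int write_block(int ncid, int varid)
    {
        PIO_Offset start[2]  = {0, 0};   /* slab origin                    */
        PIO_Offset count[2]  = {4, 8};   /* slab extent                    */
        PIO_Offset stride[2] = {1, 1};   /* unit stride = contiguous access */
        double data[32];

        for (int i = 0; i < 32; i++)
            data[i] = (double)i;

        /* Signature as declared in this patch:
         * PIOc_put_vars_double(ncid, varid, start, count, stride, op) */
        return PIOc_put_vars_double(ncid, varid, start, count, stride, data);
    }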
diff --git a/cime/externals/pio2/src/clib/pio_put_nc_async.c b/cime/externals/pio2/src/clib/pio_put_nc_async.c
deleted file mode 100644
index 9ef0d63da125..000000000000
--- a/cime/externals/pio2/src/clib/pio_put_nc_async.c
+++ /dev/null
@@ -1,988 +0,0 @@
-/**
- * @file
- * PIO functions to write data.
- *
- * @author Ed Hartnett
- * @date 2016
- * @see http://code.google.com/p/parallelio/
- */
-
-#include <config.h>
-#include <pio.h>
-#include <pio_internal.h>
-
-/**
- * Internal PIO function which provides a type-neutral interface to
- * nc_put_vars.
- *
- * Users should not call this function directly. Instead, call one of
- * the derived functions, depending on the type of data you are
- * writing: PIOc_put_vars_text(), PIOc_put_vars_uchar(),
- * PIOc_put_vars_schar(), PIOc_put_vars_ushort(),
- * PIOc_put_vars_short(), PIOc_put_vars_uint(), PIOc_put_vars_int(),
- * PIOc_put_vars_long(), PIOc_put_vars_float(),
- * PIOc_put_vars_longlong(), PIOc_put_vars_double(),
- * PIOc_put_vars_ulonglong().
- *
- * This routine is called collectively by all tasks in the
- * communicator ios.union_comm.
- *
- * @param ncid identifies the netCDF file
- * @param varid the variable ID number
- * @param start an array of start indices (must have same number of
- * entries as variable has dimensions). If NULL, indices of 0 will be
- * used.
- * @param count an array of counts (must have same number of entries
- * as variable has dimensions). If NULL, counts matching the size of
- * the variable will be used.
- * @param stride an array of strides (must have same number of
- * entries as variable has dimensions). If NULL, strides of 1 will be
- * used.
- * @param xtype the netCDF type of the data being passed in buf. Data
- * will be automatically covnerted from this type to the type of the
- * variable being written to.
- * @param buf pointer to the data to be written.
- *
- * @return PIO_NOERR on success, error code otherwise.
- * @private
- */
-int PIOc_put_vars_tc(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, nc_type xtype, const void *buf)
-{
- iosystem_desc_t *ios; /* Pointer to io system information. */
- file_desc_t *file; /* Pointer to file information. */
- int ierr = PIO_NOERR; /* Return code from function calls. */
- int mpierr = MPI_SUCCESS, mpierr2; /* Return code from MPI function codes. */
- int ndims; /* The number of dimensions in the variable. */
- int *dimids; /* The IDs of the dimensions for this variable. */
- PIO_Offset typelen; /* Size (in bytes) of the data type of data in buf. */
- PIO_Offset num_elem = 1; /* Number of data elements in the buffer. */
- char start_present = start ? true : false; /* Is start non-NULL? */
- char count_present = count ? true : false; /* Is count non-NULL? */
- char stride_present = stride ? true : false; /* Is stride non-NULL? */
- PIO_Offset *rstart, *rcount, *rstride;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- LOG((1, "PIOc_put_vars_tc ncid = %d varid = %d start = %d count = %d "
- "stride = %d xtype = %d", ncid, varid, start, count, stride, xtype));
-
- /* User must provide some data. */
- if (!buf)
- return PIO_EINVAL;
-
- /* Find the info about this file. */
- if (!(file = pio_get_file_from_id(ncid)))
- return PIO_EBADID;
- ios = file->iosystem;
-
- /* Run these on all tasks if async is not in use, but only on
- * non-IO tasks if async is in use. */
- if (!ios->async_interface || !ios->ioproc)
- {
- /* Get the number of dims for this var. */
- if ((ierr = PIOc_inq_varndims(ncid, varid, &ndims)))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Get the length of the data type. */
- if ((ierr = PIOc_inq_type(ncid, xtype, NULL, &typelen)))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- PIO_Offset dimlen[ndims];
-
- /* If no count array was passed, we need to know the dimlens
- * so we can calculate how many data elements are in the
- * buf. */
- if (!count)
- {
- int dimid[ndims];
-
- /* Get the dimids for this var. */
- if ((ierr = PIOc_inq_vardimid(ncid, varid, dimid)))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
-
- /* Get the length of each dimension. */
- for (int vd = 0; vd < ndims; vd++)
- if ((ierr = PIOc_inq_dimlen(ncid, dimid[vd], &dimlen[vd])))
- return check_netcdf(file, ierr, __FILE__, __LINE__);
- }
-
- /* Allocate memory for these arrays, now that we know ndims. */
- if (!(rstart = malloc(ndims * sizeof(PIO_Offset))))
- return check_netcdf(file, PIO_ENOMEM, __FILE__, __LINE__);
- if (!(rcount = malloc(ndims * sizeof(PIO_Offset))))
- return check_netcdf(file, PIO_ENOMEM, __FILE__, __LINE__);
- if (!(rstride = malloc(ndims * sizeof(PIO_Offset))))
- return check_netcdf(file, PIO_ENOMEM, __FILE__, __LINE__);
-
- /* Figure out the real start, count, and stride arrays. (The
- * user may have passed in NULLs.) */
- for (int vd = 0; vd < ndims; vd++)
- {
- rstart[vd] = start ? start[vd] : 0;
- rcount[vd] = count ? count[vd] : dimlen[vd];
- rstride[vd] = stride ? stride[vd] : 1;
- }
-
- /* How many elements in buf? */
- for (int vd = 0; vd < ndims; vd++)
- num_elem *= (rcount[vd] - rstart[vd])/rstride[vd];
- LOG((2, "PIOc_put_vars_tc num_elem = %d", num_elem));
- }
-
- /* If async is in use, and this is not an IO task, bcast the parameters. */
- if (ios->async_interface)
- {
- if (!ios->ioproc)
- {
- int msg = PIO_MSG_PUT_VARS;
-
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
-
- /* Send the function parameters and associated information
- * to the msg handler. */
- if (!mpierr)
- mpierr = MPI_Bcast(&ncid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&varid, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&ndims, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&start_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr && start_present)
- mpierr = MPI_Bcast((PIO_Offset *)start, ndims, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&count_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr && count_present)
- mpierr = MPI_Bcast((PIO_Offset *)count, ndims, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&stride_present, 1, MPI_CHAR, ios->compmaster, ios->intercomm);
- if (!mpierr && stride_present)
- mpierr = MPI_Bcast((PIO_Offset *)stride, ndims, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&xtype, 1, MPI_INT, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&num_elem, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- if (!mpierr)
- mpierr = MPI_Bcast(&typelen, 1, MPI_OFFSET, ios->compmaster, ios->intercomm);
- LOG((2, "PIOc_put_vars_tc ncid = %d varid = %d ndims = %d start_present = %d "
- "count_present = %d stride_present = %d xtype = %d num_elem = %d", ncid, varid,
- ndims, start_present, count_present, stride_present, xtype, num_elem));
-
- /* Send the data. */
- if (!mpierr)
- mpierr = MPI_Bcast((void *)buf, num_elem * typelen, MPI_BYTE, ios->compmaster,
- ios->intercomm);
- }
-
- /* Handle MPI errors. */
- if ((mpierr2 = MPI_Bcast(&mpierr, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr2, __FILE__, __LINE__);
- if (mpierr)
- check_mpi(file, mpierr, __FILE__, __LINE__);
- LOG((2, "PIOc_put_vars_tc checked mpierr = %d", mpierr));
-
- /* Broadcast values currently only known on computation tasks to IO tasks. */
- LOG((2, "PIOc_put_vars_tc bcast from comproot"));
- if ((mpierr = MPI_Bcast(&ndims, 1, MPI_INT, ios->comproot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- LOG((2, "PIOc_put_vars_tc complete bcast from comproot ndims = %d", ndims));
- }
-
- /* If this is an IO task, then call the netCDF function. */
- if (ios->ioproc)
- {
-#ifdef _PNETCDF
- if (file->iotype == PIO_IOTYPE_PNETCDF)
- {
- PIO_Offset *fake_stride;
-
- if (!stride_present)
- {
- LOG((2, "stride not present"));
- if (!(fake_stride = malloc(ndims * sizeof(PIO_Offset))))
- return PIO_ENOMEM;
- for (int d = 0; d < ndims; d++)
- fake_stride[d] = 1;
- }
- else
- fake_stride = (PIO_Offset *)stride;
-
- LOG((2, "PIOc_put_vars_tc calling pnetcdf function"));
- vdesc = file->varlist + varid;
- if (vdesc->nreqs % PIO_REQUEST_ALLOC_CHUNK == 0)
- vdesc->request = realloc(vdesc->request,
- sizeof(int) * (vdesc->nreqs + PIO_REQUEST_ALLOC_CHUNK));
- request = vdesc->request + vdesc->nreqs;
- LOG((2, "PIOc_put_vars_tc request = %d", vdesc->request));
-
- /* Only the IO master actually does the call. */
- if (ios->iomaster)
- {
-/* LOG((2, "PIOc_put_vars_tc ncid = %d varid = %d start[0] = %d count[0] = %d fake_stride[0] = %d",
- ncid, varid, start[0], count[0], fake_stride[0]));*/
- /* for (int d = 0; d < ndims; d++) */
- /* LOG((2, "start[%d] = %d count[%d] = %d stride[%d] = %d", d, start[d], d, count[d], d, stride[d])); */
- switch(xtype)
- {
- case NC_BYTE:
- ierr = ncmpi_bput_vars_schar(ncid, varid, start, count, fake_stride, buf, request);
- break;
- case NC_CHAR:
- ierr = ncmpi_bput_vars_text(ncid, varid, start, count, fake_stride, buf, request);
- break;
- case NC_SHORT:
- ierr = ncmpi_bput_vars_short(ncid, varid, start, count, fake_stride, buf, request);
- break;
- case NC_INT:
- LOG((2, "PIOc_put_vars_tc io_rank 0 doing pnetcdf for int"));
- ierr = ncmpi_bput_vars_int(ncid, varid, start, count, fake_stride, buf, request);
- LOG((2, "PIOc_put_vars_tc io_rank 0 done with pnetcdf call for int ierr = %d", ierr));
- break;
- case NC_FLOAT:
- ierr = ncmpi_bput_vars_float(ncid, varid, start, count, fake_stride, buf, request);
- break;
- case NC_DOUBLE:
- ierr = ncmpi_bput_vars_double(ncid, varid, start, count, fake_stride, buf, request);
- break;
- case NC_INT64:
- ierr = ncmpi_bput_vars_longlong(ncid, varid, start, count, fake_stride, buf, request);
- break;
- default:
- LOG((0, "Unknown type for pnetcdf file! xtype = %d", xtype));
- }
- LOG((2, "PIOc_put_vars_tc io_rank 0 done with pnetcdf call"));
- }
- else
- *request = PIO_REQ_NULL;
-
- vdesc->nreqs++;
- LOG((2, "PIOc_put_vars_tc flushing output buffer"));
- flush_output_buffer(file, false, 0);
- LOG((2, "PIOc_put_vars_tc flushed output buffer"));
-
- /* Free malloced resources. */
- if (!stride_present)
- free(fake_stride);
- }
-#endif /* _PNETCDF */
-#ifdef _NETCDF
- if (file->iotype != PIO_IOTYPE_PNETCDF && file->do_io)
- {
- LOG((2, "PIOc_put_vars_tc calling netcdf function file->iotype = %d",
- file->iotype));
- switch(xtype)
- {
- case NC_BYTE:
- ierr = nc_put_vars_schar(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_CHAR:
- ierr = nc_put_vars_schar(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_SHORT:
- ierr = nc_put_vars_short(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_INT:
- ierr = nc_put_vars_int(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_FLOAT:
- ierr = nc_put_vars_float(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_DOUBLE:
- ierr = nc_put_vars_double(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
-#ifdef _NETCDF4
- case NC_UBYTE:
- ierr = nc_put_vars_uchar(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_USHORT:
- ierr = nc_put_vars_ushort(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_UINT:
- ierr = nc_put_vars_uint(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_INT64:
- ierr = nc_put_vars_longlong(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- case NC_UINT64:
- ierr = nc_put_vars_ulonglong(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
- break;
- /* case NC_STRING: */
- /* ierr = nc_put_vars_string(ncid, varid, (size_t *)start, (size_t *)count, */
- /* (ptrdiff_t *)stride, (void *)buf); */
- /* break; */
- default:
- ierr = nc_put_vars(ncid, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);
-#endif /* _NETCDF4 */
- }
- }
-#endif /* _NETCDF */
- }
-
- /* Broadcast and check the return code. */
- LOG((2, "PIOc_put_vars_tc bcasting netcdf return code %d", ierr));
- if ((mpierr = MPI_Bcast(&ierr, 1, MPI_INT, ios->ioroot, ios->my_comm)))
- return check_mpi(file, mpierr, __FILE__, __LINE__);
- if (ierr)
- return check_netcdf(file, ierr, __FILE__, __LINE__);
- LOG((2, "PIOc_put_vars_tc bcast netcdf return code %d complete", ierr));
-
- return ierr;
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_text(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const char *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_CHAR, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_uchar(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride,
- const unsigned char *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_UBYTE, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_schar(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const signed char *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_BYTE, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_ushort(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const unsigned short *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_USHORT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_short(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const PIO_Offset *stride, const short *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_SHORT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_uint(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const unsigned int *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_UINT, op);
-}
-
-/** PIO interface to nc_put_vars_int */
-int PIOc_put_vars_int(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const int *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_INT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_long(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const long *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_INT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_float(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const float *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_FLOAT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_longlong(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const long long *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_INT64, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_double(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const double *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_DOUBLE, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vars_ulonglong(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const unsigned long long *op)
-{
- return PIOc_put_vars_tc(ncid, varid, start, count, stride, NC_UINT64, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_tc(int ncid, int varid, const PIO_Offset *index, nc_type xtype,
- const void *op)
-{
- int ndims;
- int ierr;
-
- /* Find the number of dimensions. */
- if ((ierr = PIOc_inq_varndims(ncid, varid, &ndims)))
- return ierr;
-
- /* Set up count array. */
- PIO_Offset count[ndims];
- for (int c = 0; c < ndims; c++)
- count[c] = 1;
-
- return PIOc_put_vars_tc(ncid, varid, index, count, NULL, xtype, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_text(int ncid, int varid, const PIO_Offset *index, const char *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_CHAR, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_uchar(int ncid, int varid, const PIO_Offset *index,
- const unsigned char *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_UBYTE, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_schar(int ncid, int varid, const PIO_Offset *index,
- const signed char *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_BYTE, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_ushort(int ncid, int varid, const PIO_Offset *index,
- const unsigned short *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_USHORT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_short(int ncid, int varid, const PIO_Offset *index,
- const short *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_SHORT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_uint(int ncid, int varid, const PIO_Offset *index,
- const unsigned int *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_UINT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_int(int ncid, int varid, const PIO_Offset *index, const int *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_INT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_float(int ncid, int varid, const PIO_Offset *index, const float *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_FLOAT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_long(int ncid, int varid, const PIO_Offset *index, const long *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_LONG, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_double(int ncid, int varid, const PIO_Offset *index,
- const double *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_DOUBLE, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_ulonglong(int ncid, int varid, const PIO_Offset *index,
- const unsigned long long *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_UINT64, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1_longlong(int ncid, int varid, const PIO_Offset *index,
- const long long *op)
-{
- return PIOc_put_var1_tc(ncid, varid, index, NC_INT64, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_text(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const char *op)
-{
- return PIOc_put_vars_text(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_uchar(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const unsigned char *op)
-{
- return PIOc_put_vars_uchar(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_schar(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const signed char *op)
-{
- return PIOc_put_vars_schar(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_ushort(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const unsigned short *op)
-{
- return PIOc_put_vars_ushort(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_short(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const short *op)
-{
- return PIOc_put_vars_short(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_uint(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const unsigned int *op)
-{
- return PIOc_put_vars_uint(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_int(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const int *op)
-{
- return PIOc_put_vars_int(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_long(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const long *op)
-{
- return PIOc_put_vars_long(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_float(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const float *op)
-{
- return PIOc_put_vars_float(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_ulonglong(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const unsigned long long *op)
-{
- return PIOc_put_vars_ulonglong(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_longlong(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const long long *op)
-{
- return PIOc_put_vars_longlong(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara_double(int ncid, int varid, const PIO_Offset *start,
- const PIO_Offset *count, const double *op)
-{
- return PIOc_put_vars_double(ncid, varid, start, count, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_text(int ncid, int varid, const char *op)
-{
- return PIOc_put_vars_text(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_uchar(int ncid, int varid, const unsigned char *op)
-{
- return PIOc_put_vars_uchar(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_schar(int ncid, int varid, const signed char *op)
-{
- return PIOc_put_vars_schar(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_ushort(int ncid, int varid, const unsigned short *op)
-{
- return PIOc_put_vars_tc(ncid, varid, NULL, NULL, NULL, NC_USHORT, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_short(int ncid, int varid, const short *op)
-{
- return PIOc_put_vars_short(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_uint(int ncid, int varid, const unsigned int *op)
-{
- return PIOc_put_vars_uint(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_int(int ncid, int varid, const int *op)
-{
- return PIOc_put_vars_int(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_long(int ncid, int varid, const long *op)
-{
- return PIOc_put_vars_long(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_float(int ncid, int varid, const float *op)
-{
- return PIOc_put_vars_float(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_ulonglong(int ncid, int varid, const unsigned long long *op)
-{
- return PIOc_put_vars_ulonglong(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_longlong(int ncid, int varid, const long long *op)
-{
- return PIOc_put_vars_longlong(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var_double(int ncid, int varid, const double *op)
-{
- return PIOc_put_vars_double(ncid, varid, NULL, NULL, NULL, op);
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var(int ncid, int varid, const void *buf, PIO_Offset bufcount,
- MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VAR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_var(file->fh, varid, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_var(file->fh, varid, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_var(file->fh, varid, buf, bufcount, buftype, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-/**
- * PIO interface to nc_put_vars
- *
- * This routine is called collectively by all tasks in the
- * communicator ios.union_comm.
-
- * Refer to the
- * netcdf documentation. */
-int PIOc_put_vars(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count,
- const PIO_Offset *stride, const void *buf, PIO_Offset bufcount,
- MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARS;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_vars(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_vars(file->fh, varid, (size_t *)start, (size_t *)count,
- (ptrdiff_t *)stride, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_vars(file->fh, varid, start, count, stride, buf,
- bufcount, buftype, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_var1(int ncid, int varid, const PIO_Offset *index, const void *buf,
- PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VAR1;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_var1(file->fh, varid, (size_t *) index, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_var1(file->fh, varid, (size_t *) index, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_var1(file->fh, varid, index, buf, bufcount, buftype, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-/** Interface to netCDF data write function. */
-int PIOc_put_vara(int ncid, int varid, const PIO_Offset *start, const PIO_Offset *count, const void *buf,
- PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARA;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, ios->compmaster, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_vara(file->fh, varid, (size_t *) start, (size_t *) count, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_vara(file->fh, varid, (size_t *) start, (size_t *) count, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_vara(file->fh, varid, start, count, buf, bufcount, buftype, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
diff --git a/cime/externals/pio2/src/clib/pio_rearrange.c b/cime/externals/pio2/src/clib/pio_rearrange.c
index a432b7f3b366..e343e922a00a 100644
--- a/cime/externals/pio2/src/clib/pio_rearrange.c
+++ b/cime/externals/pio2/src/clib/pio_rearrange.c
@@ -4,16 +4,16 @@
/// @date 2014
/// @brief Code to map IO to model decomposition
///
-///
-///
-///
+///
+///
+///
/// @see http://code.google.com/p/parallelio/
////
#include
#include
/** internal variable used for debugging */
-int tmpioproc=-1;
+int tmpioproc=-1;
/** @internal
@@ -167,7 +167,7 @@ void compute_maxIObuffersize(MPI_Comm io_comm, io_desc_t *iodesc)
CheckMPIReturn(MPI_Allreduce(MPI_IN_PLACE, &totiosize, 1, MPI_OFFSET, MPI_MAX, io_comm),__FILE__,__LINE__);
iodesc->maxiobuflen = totiosize;
if(iodesc->maxiobuflen<=0 ){
- fprintf(stderr,"%s %d %ld %ld %d %d %d\n",__FILE__,__LINE__,iodesc->maxiobuflen,totiosize,MPI_OFFSET,MPI_MAX,io_comm);
+ fprintf(stderr,"%s %d %ld %ld %d %d %d\n",__FILE__,__LINE__,iodesc->maxiobuflen,totiosize,MPI_OFFSET,MPI_MAX,io_comm);
piodie("ERROR: maxiobuflen<=0",__FILE__,__LINE__);
}
@@ -175,7 +175,7 @@ void compute_maxIObuffersize(MPI_Comm io_comm, io_desc_t *iodesc)
/**
** @internal
** Create the derived MPI datatypes used for comp2io and io2comp transfers
-** @param basetype The type of data (int,real,double).
+** @param basetype The type of data (int,real,double).
** @param msgcnt The number of MPI messages/tasks to use.
** @param dlen The length of the data array.
** @param mindex An array of indexes into the data array from the comp map
@@ -244,7 +244,7 @@ int create_mpi_datatypes(const MPI_Datatype basetype,const int msgcnt,const PIO_
displace[k++] = (int) (lindex[j]);
}
}
-
+
}else{
for(int j=0;jrtype[i], &lb, &extent);
printf("%s %d %d %d %d \n",__FILE__,__LINE__,i,lb,extent);
-
+
}
}
*/
@@ -354,7 +354,7 @@ int define_iodesc_datatypes(const iosystem_desc_t ios, io_desc_t *iodesc)
** @endinternal
*/
-int compute_counts(const iosystem_desc_t ios, io_desc_t *iodesc, const int maplen,
+int compute_counts(const iosystem_desc_t ios, io_desc_t *iodesc, const int maplen,
const int dest_ioproc[], const PIO_Offset dest_ioindex[], MPI_Comm mycomm)
{
@@ -384,7 +384,7 @@ int compute_counts(const iosystem_desc_t ios, io_desc_t *iodesc, const int maple
PIO_Offset s2rindex[iodesc->ndof];
-
+
if(iodesc->rearranger==PIO_REARR_BOX)
numiotasks = ios.num_iotasks;
else
@@ -439,7 +439,7 @@ int compute_counts(const iosystem_desc_t ios, io_desc_t *iodesc, const int maple
// printf("%s %d %d\n",__FILE__,__LINE__,iodesc->scount[i]);
// Share the iodesc->scount from each compute task to all io tasks
- ierr = pio_swapm( iodesc->scount, send_counts, send_displs, sr_types,
+ ierr = pio_swapm( iodesc->scount, send_counts, send_displs, sr_types,
recv_buf, recv_counts, recv_displs, sr_types,
mycomm, false, false, maxreq);
@@ -498,7 +498,7 @@ int compute_counts(const iosystem_desc_t ios, io_desc_t *iodesc, const int maple
}
for(i=0;i -1){
// this should be moved to create_box
@@ -560,21 +560,21 @@ int compute_counts(const iosystem_desc_t ios, io_desc_t *iodesc, const int maple
// s2rindex is the list of indeces on each compute task
- /*
+ /*
printf("%d s2rindex: ", ios.comp_rank);
for(i=0;indof;i++)
printf("%ld ",s2rindex[i]);
printf("\n");
*/
// printf("%s %d %d %d %d %d %d %d\n",__FILE__,__LINE__,send_counts[0],recv_counts[0],send_displs[0],recv_displs[0],sr_types[0],iodesc->llen);
- ierr = pio_swapm( s2rindex, send_counts, send_displs, sr_types,
+ ierr = pio_swapm( s2rindex, send_counts, send_displs, sr_types,
iodesc->rindex, recv_counts, recv_displs, sr_types,
mycomm, false, false, 0);
// printf("%s %d\n",__FILE__,__LINE__);
// rindex is an array of the indices of the data to be sent from
- // this io task to each compute task.
- /*
+ // this io task to each compute task.
+ /*
if(ios.ioproc){
printf("%d rindex: ",ios.io_rank);
for(int j=0;jllen;j++)
@@ -586,7 +586,7 @@ int compute_counts(const iosystem_desc_t ios, io_desc_t *iodesc, const int maple
}
-/**
+/**
** @internal
** Moves data from compute tasks to IO tasks.
** @endinternal
@@ -615,14 +615,14 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
#ifdef TIMING
GPTLstart("PIO:rearrange_comp2io");
#endif
-
+
if(iodesc->rearranger == PIO_REARR_BOX){
mycomm = ios.union_comm;
niotasks = ios.num_iotasks;
}else{
mycomm = iodesc->subset_comm;
niotasks = 1;
- }
+ }
MPI_Comm_size(mycomm, &ntasks);
pioassert(nvars>0,"nvars must be > 0",__FILE__,__LINE__);
@@ -657,8 +657,8 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
//printf("%s %d %ld %ld\n",__FILE__,__LINE__,sendtypes,recvtypes);
for(i=0;irindex[i]);
-
+
}
}
@@ -699,7 +699,7 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
int io_comprank = ios.ioranks[i];
if(iodesc->rearranger==PIO_REARR_SUBSET)
io_comprank=0;
-
+
// printf("scount[%d]=%d\n",i,scount[i]);
if(scount[i] > 0 && sbuf != NULL) {
sendcounts[io_comprank]=1;
@@ -711,7 +711,7 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
}else{
sendcounts[io_comprank]=0;
}
- }
+ }
// printf("%s %d %d\n",__FILE__,__LINE__,((int *)sbuf)[5]);
//printf("%s %d %ld %ld %ld %ld\n",__FILE__,__LINE__,sendtypes[0],recvtypes,sbuf,rbuf);
// Data in sbuf on the compute nodes is sent to rbuf on the ionodes
@@ -720,15 +720,15 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
// printf("%s %d %d %d \n",__FILE__,__LINE__,i,((int *)sbuf)[i]);
pio_swapm( sbuf, sendcounts, sdispls, sendtypes,
- rbuf, recvcounts, rdispls, recvtypes,
+ rbuf, recvcounts, rdispls, recvtypes,
mycomm, iodesc->handshake, iodesc->isend, iodesc->max_requests);
// if(ios.ioproc)
// for(i=0;illen;i++)
// printf("%s %d %d %d\n",__FILE__,__LINE__,i,((int *)rbuf)[i]);
- //printf("%s %d %ld %ld\n",__FILE__,__LINE__,sendtypes[0],recvtypes);
+ //printf("%s %d %ld %ld\n",__FILE__,__LINE__,sendtypes[0],recvtypes);
brel(sendcounts);
- brel(recvcounts);
+ brel(recvcounts);
brel(sdispls);
brel(rdispls);
//printf("%s %d %ld %ld\n",__FILE__,__LINE__,sendtypes[0],recvtypes);
@@ -737,12 +737,12 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
//printf("%s %d %d %ld\n",__FILE__,__LINE__,i,sendtypes[i]);
MPI_Type_free(sendtypes+i);
}
- if(recvtypes[i] != MPI_DATATYPE_NULL){
- //printf("%s %d %d\n",__FILE__,__LINE__,i);
+ if(recvtypes[i] != MPI_DATATYPE_NULL){
+ //printf("%s %d %d\n",__FILE__,__LINE__,i);
MPI_Type_free(recvtypes+i);
}
}
-
+
brel(sendtypes);
brel(recvtypes);
@@ -752,7 +752,7 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
return PIO_NOERR;
}
-/**
+/**
** @internal
** Moves data from compute tasks to IO tasks.
** @endinternal
@@ -761,7 +761,7 @@ int rearrange_comp2io(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
int rearrange_io2comp(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf, void *rbuf)
{
-
+
// int maxreq = 0;
MPI_Comm mycomm;
@@ -849,8 +849,8 @@ int rearrange_io2comp(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
recvcounts[io_comprank]=1;
recvtypes[io_comprank]=iodesc->stype[i];
}
- }
-
+ }
+
//
// Data in sbuf on the ionodes is sent to rbuf on the compute nodes
//
@@ -858,13 +858,13 @@ int rearrange_io2comp(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
printf("%s %d \n",__FILE__,__LINE__);
#endif
pio_swapm( sbuf, sendcounts, sdispls, sendtypes,
- rbuf, recvcounts, rdispls, recvtypes,
+ rbuf, recvcounts, rdispls, recvtypes,
mycomm, iodesc->handshake, iodesc->isend, iodesc->max_requests);
#if DEBUG
printf("%s %d \n",__FILE__,__LINE__);
#endif
brel(sendcounts);
- brel(recvcounts);
+ brel(recvcounts);
brel(sdispls);
brel(rdispls);
brel(sendtypes);
@@ -873,7 +873,7 @@ int rearrange_io2comp(const iosystem_desc_t ios, io_desc_t *iodesc, void *sbuf,
#ifdef TIMING
GPTLstop("PIO:rearrange_io2comp");
#endif
-
+
return PIO_NOERR;
}
@@ -899,7 +899,7 @@ void determine_fill(iosystem_desc_t ios, io_desc_t *iodesc, const int gsize[], c
MPI_Allreduce(MPI_IN_PLACE, &totalllen, 1, PIO_OFFSET, MPI_SUM, ios.union_comm);
-
+
if(totalllen < totalgridsize){
//if(ios.iomaster) printf("%s %d %ld %ld\n",__FILE__,__LINE__,totalllen,totalgridsize);
iodesc->needsfill = true;
@@ -948,16 +948,16 @@ void iodesc_dump(io_desc_t *iodesc)
for(int j=0;jllen;j++)
printf(" %ld ",iodesc->rindex[j]);
printf("\n");
-
+
}
-/**
+/**
** @internal
** The box rearranger computes a mapping between IO tasks and compute tasks such that the data
- ** on io tasks can be written with a single call to the underlying netcdf library. This
- ** may involve an all to all rearrangement in the mapping, but should minimize data movement in
+ ** on io tasks can be written with a single call to the underlying netcdf library. This
+ ** may involve an all to all rearrangement in the mapping, but should minimize data movement in
** lower level libraries
** @endinternal
**
@@ -976,7 +976,7 @@ int box_rearrange_create(const iosystem_desc_t ios,const int maplen, const PIO_O
MPI_Datatype dtype;
int dest_ioproc[maplen];
PIO_Offset dest_ioindex[maplen];
- int sndlths[nprocs];
+ int sndlths[nprocs];
int sdispls[nprocs];
int recvlths[nprocs];
int rdispls[nprocs];
@@ -1016,7 +1016,7 @@ int box_rearrange_create(const iosystem_desc_t ios,const int maplen, const PIO_O
}
determine_fill(ios, iodesc, gsize, compmap);
- /*
+ /*
if(ios.ioproc){
for(i=0; ifirstregion->start[i],iodesc->firstregion->count[i]);
@@ -1029,12 +1029,12 @@ int box_rearrange_create(const iosystem_desc_t ios,const int maplen, const PIO_O
int io_comprank = ios.ioranks[i];
recvlths[ io_comprank ] = 1;
rdispls[ io_comprank ] = i*tsize;
- }
+ }
// The length of each iomap
// iomaplen = calloc(nioprocs, sizeof(PIO_Offset));
pio_swapm(&(iodesc->llen), sndlths, sdispls, dtypes,
- iomaplen, recvlths, rdispls, dtypes,
+ iomaplen, recvlths, rdispls, dtypes,
ios.union_comm, false, false, maxreq);
@@ -1060,16 +1060,16 @@ int box_rearrange_create(const iosystem_desc_t ios,const int maplen, const PIO_O
sndlths[ j ] = ndims;
}
recvlths[ io_comprank ] = ndims;
-
+
// The count from iotask i is sent to all compute tasks
-
+
pio_swapm(iodesc->firstregion->count, sndlths, sdispls, dtypes,
- count, recvlths, rdispls, dtypes,
+ count, recvlths, rdispls, dtypes,
ios.union_comm, false, false, maxreq);
-
+
// The start from iotask i is sent to all compute tasks
pio_swapm(iodesc->firstregion->start, sndlths, sdispls, dtypes,
- start, recvlths, rdispls, dtypes,
+ start, recvlths, rdispls, dtypes,
ios.union_comm, false, false, maxreq);
for(k=0;kiomap - y->iomap);
-}
+}
/**
** @internal
- ** Each region is a block of output which can be represented in a single call to the underlying
- ** netcdf library. This can be as small as a single data point, but we hope we've aggragated better than that.
+ ** Each region is a block of output which can be represented in a single call to the underlying
+ ** netcdf library. This can be as small as a single data point, but we hope we've aggragated better than that.
** @endinternal
*/
-void get_start_and_count_regions(const int ndims, const int gdims[],const int maplen,
+void get_start_and_count_regions(const int ndims, const int gdims[],const int maplen,
const PIO_Offset map[], int *maxregions, io_region *firstregion)
{
int i;
@@ -1157,24 +1157,24 @@ void get_start_and_count_regions(const int ndims, const int gdims[],const int ma
while(nmaplen < maplen){
// Here we find the largest region from the current offset into the iomap
// regionlen is the size of that region and we step to that point in the map array
- // until we reach the end
+ // until we reach the end
for(i=0;icount[i]=1;
}
- regionlen = find_region(ndims, gdims, maplen-nmaplen,
+ regionlen = find_region(ndims, gdims, maplen-nmaplen,
map+nmaplen, region->start, region->count);
// printf("%s %d %d %d\n",__FILE__,__LINE__,region->start[0],region->count[0]);
pioassert(region->start[0]>=0,"failed to find region",__FILE__,__LINE__);
-
+
nmaplen = nmaplen+regionlen;
if(region->next==NULL && nmaplen< maplen){
region->next = alloc_region(ndims);
- // The offset into the local array buffer is the sum of the sizes of all of the previous regions (loffset)
+ // The offset into the local array buffer is the sum of the sizes of all of the previous regions (loffset)
region=region->next;
region->loffset = nmaplen;
// The calls to the io library are collective and so we must have the same number of regions on each
- // io task maxregions will be the total number of regions on this task
+ // io task maxregions will be the total number of regions on this task
(*maxregions)++;
}
}
@@ -1182,9 +1182,9 @@ void get_start_and_count_regions(const int ndims, const int gdims[],const int ma
}
-/**
+/**
** @internal
- ** The subset rearranger needs a mapping from compute tasks to IO task, the only requirement is
+ ** The subset rearranger needs a mapping from compute tasks to IO task, the only requirement is
** that each compute task map to one and only one IO task. This mapping groups by mpi task id
** others are possible and may be better for certain decompositions
** @endinternal
@@ -1211,15 +1211,15 @@ void default_subset_partition(const iosystem_desc_t ios, io_desc_t *iodesc)
}
-/**
+/**
** @internal
** The subset rearranger computes a mapping between IO tasks and compute tasks such that each compute
- ** task communicates with one and only one IO task.
+ ** task communicates with one and only one IO task.
** @endinternal
**
*/
-int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offset compmap[],
+int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offset compmap[],
const int gsize[], const int ndims, io_desc_t *iodesc)
{
@@ -1234,20 +1234,20 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
PIO_Offset totalgridsize;
PIO_Offset *srcindex=NULL;
PIO_Offset *myfillgrid = NULL;
- int maxregions;
+ int maxregions;
int maxreq = MAX_GATHER_BLOCK_SIZE;
int rank, ntasks, rcnt;
size_t pio_offset_size=sizeof(PIO_Offset);
-
-
+
+
tmpioproc = ios.io_rank;
- /* subset partitions each have exactly 1 io task which is task 0 of that subset_comm */
+ /* subset partitions each have exactly 1 io task which is task 0 of that subset_comm */
/* TODO: introduce a mechanism for users to define partitions */
default_subset_partition(ios, iodesc);
iodesc->rearranger = PIO_REARR_SUBSET;
-
+
MPI_Comm_rank(iodesc->subset_comm, &rank);
MPI_Comm_size(iodesc->subset_comm, &ntasks);
@@ -1264,7 +1264,7 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
piomemerror(ios,ntasks * sizeof(int), __FILE__,__LINE__);
}
rcnt = 1;
- }
+ }
iodesc->scount = (int *) bget(sizeof(int));
if(iodesc->scount == NULL){
piomemerror(ios,sizeof(int), __FILE__,__LINE__);
@@ -1276,14 +1276,14 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
}
for(i=0;i=0 && compmap[i]<=totalgridsize, "Compmap value out of bounds",__FILE__,__LINE__);
if(compmap[i]>0){
(iodesc->scount[0])++;
}
}
if(iodesc->scount[0]>0){
- iodesc->sindex = (PIO_Offset *) bget(iodesc->scount[0]*pio_offset_size);
+ iodesc->sindex = (PIO_Offset *) bget(iodesc->scount[0]*pio_offset_size);
if(iodesc->sindex == NULL){
piomemerror(ios,iodesc->scount[0]*pio_offset_size, __FILE__,__LINE__);
}
@@ -1301,9 +1301,9 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
// Pass the reduced maplen (without holes) from each compute task to its associated IO task
// printf("%s %d %ld\n",__FILE__,__LINE__,iodesc->scount);
-
+
pio_fc_gather( (void *) iodesc->scount, 1, MPI_INT,
- (void *) iodesc->rcount, rcnt, MPI_INT,
+ (void *) iodesc->rcount, rcnt, MPI_INT,
0, iodesc->subset_comm, maxreq);
iodesc->llen = 0;
@@ -1320,7 +1320,7 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
rdispls[i] = rdispls[i-1]+ iodesc->rcount[ i-1 ];
}
// printf("%s %d %ld %d %d\n",__FILE__,__LINE__,iodesc,iodesc->llen,maplen);
-
+
if(iodesc->llen>0){
srcindex = (PIO_Offset *) bget(iodesc->llen*pio_offset_size);
if(srcindex == NULL){
@@ -1340,11 +1340,11 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
// Pass the sindex from each compute task to its associated IO task
pio_fc_gatherv((void *) iodesc->sindex, iodesc->scount[0], PIO_OFFSET,
- (void *) srcindex, recvlths, rdispls, PIO_OFFSET,
+ (void *) srcindex, recvlths, rdispls, PIO_OFFSET,
0, iodesc->subset_comm, maxreq);
if(ios.ioproc && iodesc->llen>0){
- map = (mapsort *) bget(iodesc->llen * sizeof(mapsort));
+ map = (mapsort *) bget(iodesc->llen * sizeof(mapsort));
if(map == NULL){
piomemerror(ios,iodesc->llen * sizeof(mapsort), __FILE__,__LINE__);
}
@@ -1376,7 +1376,7 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
}
pio_fc_gatherv((void *) shrtmap, iodesc->scount[0], PIO_OFFSET,
- (void *) iomap, recvlths, rdispls, PIO_OFFSET,
+ (void *) iomap, recvlths, rdispls, PIO_OFFSET,
0, iodesc->subset_comm, maxreq);
@@ -1397,8 +1397,8 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
}
pos += iodesc->rcount[i];
}
- // sort the mapping, this will transpose the data into IO order
- qsort(map, iodesc->llen, sizeof(mapsort), compare_offsets);
+ // sort the mapping, this will transpose the data into IO order
+ qsort(map, iodesc->llen, sizeof(mapsort), compare_offsets);
iodesc->rindex = (PIO_Offset *) bget(iodesc->llen*pio_offset_size);
if(iodesc->rindex == NULL){
@@ -1449,7 +1449,7 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
thisgridsize[0] = totalgridsize/ios.num_iotasks;
thisgridmax[0] = thisgridsize[0];
int xtra = totalgridsize - thisgridsize[0]*ios.num_iotasks;
-
+
for(nio=0;nioholegridsize;i++){
myfillgrid[i]=-1;
}
-
+
j=0;
for(i=0;imaxfillregions=0;
if(myfillgrid!=NULL){
iodesc->fillregion = alloc_region(iodesc->ndims);
@@ -1534,7 +1534,7 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
iodesc->maxfillregions = maxregions;
}
- CheckMPIReturn(MPI_Scatterv((void *) srcindex, recvlths, rdispls, PIO_OFFSET,
+ CheckMPIReturn(MPI_Scatterv((void *) srcindex, recvlths, rdispls, PIO_OFFSET,
(void *) iodesc->sindex, iodesc->scount[0], PIO_OFFSET,
0, iodesc->subset_comm),__FILE__,__LINE__);
@@ -1553,23 +1553,23 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
iodesc->firstregion);
maxregions = iodesc->maxregions;
MPI_Allreduce(MPI_IN_PLACE,&maxregions,1, MPI_INT, MPI_MAX, ios.io_comm);
- iodesc->maxregions = maxregions;
+ iodesc->maxregions = maxregions;
if(iomap != NULL)
brel(iomap);
-
-
+
+
if(map != NULL)
brel(map);
if(srcindex != NULL)
brel(srcindex);
-
+
compute_maxIObuffersize(ios.io_comm, iodesc);
-
+
iodesc->nrecvs=ntasks;
#ifdef DEBUG
iodesc_dump(iodesc);
-#endif
+#endif
}
/* using maxiobuflen compute the maximum number of vars of this type that the io
@@ -1582,8 +1582,8 @@ int subset_rearrange_create(const iosystem_desc_t ios,const int maplen, PIO_Offs
}
-
-
+
+
void performance_tune_rearranger(iosystem_desc_t ios, io_desc_t *iodesc)
{
@@ -1594,7 +1594,7 @@ void performance_tune_rearranger(iosystem_desc_t ios, io_desc_t *iodesc)
int nprocs;
MPI_Comm mycomm;
#ifdef TIMING
-#ifdef PERFTUNE
+#ifdef PERFTUNE
double *wall, usr[2], sys[2];
void *cbuf, *ibuf;
int tsize;
@@ -1614,7 +1614,7 @@ void performance_tune_rearranger(iosystem_desc_t ios, io_desc_t *iodesc)
mycomm = ios.union_comm;
}else{
mycomm = iodesc->subset_comm;
- }
+ }
MPI_Comm_size(mycomm, &nprocs);
MPI_Comm_rank(mycomm, &myrank);
@@ -1675,7 +1675,7 @@ void performance_tune_rearranger(iosystem_desc_t ios, io_desc_t *iodesc)
}
}
}
-
+
iodesc->handshake = handshake;
iodesc->isend = isend;
@@ -1689,4 +1689,4 @@ void performance_tune_rearranger(iosystem_desc_t ios, io_desc_t *iodesc)
#endif
#endif
-}
+}
diff --git a/cime/externals/pio2/src/clib/pio_spmd.c b/cime/externals/pio2/src/clib/pio_spmd.c
index 389963bb6559..9645ed45b0f7 100644
--- a/cime/externals/pio2/src/clib/pio_spmd.c
+++ b/cime/externals/pio2/src/clib/pio_spmd.c
@@ -4,7 +4,7 @@
* @date 2014
* @brief MPI_Gather, MPI_Gatherv, and MPI_Alltoallw with flow control options
*/
-
+
#ifdef TESTSWAPM
#include
#include
@@ -22,19 +22,19 @@
#include
#endif
-/**
+/**
** @brief Wrapper for MPI calls to print the Error string on error
*/
void CheckMPIReturn(const int ierr,const char file[],const int line)
{
-
+
if(ierr != MPI_SUCCESS){
char errstring[MPI_MAX_ERROR_STRING];
int errstrlen;
int mpierr = MPI_Error_string( ierr, errstring, &errstrlen);
fprintf(stderr, "MPI ERROR: %s in file %s at line %d\n",errstring, file, line);
-
+
}
}
@@ -44,7 +44,7 @@ void CheckMPIReturn(const int ierr,const char file[],const int line)
*/
int pio_fc_gather( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
- void *recvbuf, const int recvcnt, const MPI_Datatype recvtype, const int root,
+ void *recvbuf, const int recvcnt, const MPI_Datatype recvtype, const int root,
const MPI_Comm comm, const int flow_cntl)
{
bool fc_gather;
@@ -56,7 +56,7 @@ int pio_fc_gather( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype
int hs;
int displs;
int dsize;
-
+
if(flow_cntl > 0){
@@ -102,7 +102,7 @@ int pio_fc_gather( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype
}
// copy local data
- CheckMPIReturn(MPI_Type_size(sendtype, &dsize), __FILE__,__LINE__);
+ CheckMPIReturn(MPI_Type_size(sendtype, &dsize), __FILE__,__LINE__);
memcpy(recvbuf, sendbuf, sendcnt*dsize );
count = min(count, preposts);
@@ -123,7 +123,7 @@ int pio_fc_gather( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype
return PIO_NOERR;
}
-
+
/**
@@ -132,7 +132,7 @@ int pio_fc_gather( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype
int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtype,
void *recvbuf, const int recvcnts[], const int displs[],
- const MPI_Datatype recvtype, const int root,
+ const MPI_Datatype recvtype, const int root,
const MPI_Comm comm, const int flow_cntl)
{
bool fc_gather;
@@ -143,7 +143,7 @@ int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtyp
int ierr;
int hs;
int dsize;
-
+
if(flow_cntl > 0){
fc_gather = true;
@@ -188,7 +188,7 @@ int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtyp
}
// copy local data
- CheckMPIReturn(MPI_Type_size(sendtype, &dsize), __FILE__,__LINE__);
+ CheckMPIReturn(MPI_Type_size(sendtype, &dsize), __FILE__,__LINE__);
CheckMPIReturn(MPI_Sendrecv(sendbuf, sendcnt, sendtype,
mytask, 102, recvbuf, recvcnts[mytask], recvtype,
mytask, 102, comm, &status),__FILE__,__LINE__);
@@ -211,7 +211,7 @@ int pio_fc_gatherv( void *sendbuf, const int sendcnt, const MPI_Datatype sendtyp
///
/// @brief Returns the smallest power of 2 greater than i
-///
+///
int ceil2(const int i)
{
int p=1;
@@ -222,7 +222,7 @@ int ceil2(const int i)
}
///
-/// @brief Given integers p and k between 0 and np-1,
+/// @brief Given integers p and k between 0 and np-1,
/// if (p+1)^k <= np-1 then return (p+1)^k else -1
int pair(const int np, const int p, const int k)
{
@@ -234,8 +234,8 @@ int pair(const int np, const int p, const int k)
/**
** @brief Provides the functionality of MPI_Alltoallw with flow control options
*/
-int pio_swapm(void *sndbuf, int sndlths[], int sdispls[], MPI_Datatype stypes[],
- void *rcvbuf, int rcvlths[], int rdispls[], MPI_Datatype rtypes[],
+int pio_swapm(void *sndbuf, int sndlths[], int sdispls[], MPI_Datatype stypes[],
+ void *rcvbuf, int rcvlths[], int rdispls[], MPI_Datatype rtypes[],
MPI_Comm comm,const bool handshake, bool isend,const int max_requests)
{
@@ -272,7 +272,7 @@ int pio_swapm(void *sndbuf, int sndlths[], int sdispls[], MPI_Datatype stypes
}
#endif
CheckMPIReturn(MPI_Alltoallw( sndbuf, sndlths, sdispls, stypes, rcvbuf, rcvlths, rdispls, rtypes, comm),__FILE__,__LINE__);
-
+
#ifdef OPEN_MPI
for(int i=0;i 0){
tag = p + offset_t;
-
+
ptr = (void *)((char *) rcvbuf + rdispls[p]);
CheckMPIReturn(MPI_Irecv( ptr, rcvlths[p], rtypes[p], p, tag, comm, rcvids+rstep), __FILE__,__LINE__);
if(handshake)
@@ -452,7 +452,7 @@ int pio_swapm(void *sndbuf, int sndlths[], int sdispls[], MPI_Datatype stypes
CheckMPIReturn(MPI_Waitall(steps, sndids, MPI_STATUSES_IGNORE), __FILE__,__LINE__);
}
// printf("%s %d %d \n",__FILE__,__LINE__,nprocs);
-
+
return PIO_NOERR;
}
@@ -488,7 +488,7 @@ int main( int argc, char **argv )
MPI_Abort( comm, 1 );
}
/* Test pio_fc_gather */
-
+
msg_cnt=0;
while(msg_cnt<= MAX_GATHER_BLOCK_SIZE){
/* Load up the buffers */
@@ -500,7 +500,7 @@ int main( int argc, char **argv )
MPI_Barrier(comm);
if(rank==0) printf("Start gather test %d\n",msg_cnt);
if(rank == 0) gettimeofday(&t1, NULL);
-
+
err = pio_fc_gather( sbuf, size, MPI_INT, rbuf, size, MPI_INT, 0, comm, msg_cnt);
@@ -511,7 +511,7 @@ int main( int argc, char **argv )
if(rank==0){
for(j=0;j
-#include
-#include
-
-///
-/// PIO interface to nc_put_varm
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm(file->fh, varid, start, count, stride, imap, buf, bufcount, buftype, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_uchar
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned char *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_UCHAR;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_uchar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_uchar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_uchar(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_short
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const short *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_SHORT;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_short(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_short(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_short(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-///
-/// PIO interface to nc_put_varm_text
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const char *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_TEXT;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_text(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_text(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_text(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_ushort
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned short *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_USHORT;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_ushort(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_ushort(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_ushort(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_ulonglong
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned long long *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_ULONGLONG;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_ulonglong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_ulonglong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_ulonglong(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-///
-/// PIO interface to nc_put_varm_int
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const int *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_INT;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_int(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_int(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_int(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_float
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const float *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_FLOAT;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_float(file->fh, varid,(size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_float(file->fh, varid,(size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_float(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-///
-/// PIO interface to nc_put_varm_long
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_LONG;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_long(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_long(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_long(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_uint
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const unsigned int *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_UINT;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_uint(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_uint(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_uint(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_double
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const double *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_DOUBLE;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_double(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_double(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_double(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-///
-/// PIO interface to nc_put_varm_schar
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const signed char *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_SCHAR;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_schar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_schar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_schar(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
-///
-/// PIO interface to nc_put_varm_longlong
-///
-/// This routine is called collectively by all tasks in the communicator ios.union_comm.
-///
-/// Refer to the netcdf documentation.
-///
-int PIOc_put_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], const long long *op)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- var_desc_t *vdesc;
- PIO_Offset usage;
- int *request;
-
- ierr = PIO_NOERR;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_PUT_VARM_LONGLONG;
-
- /* Sorry, but varm functions are not supported by the async interface. */
- if(ios->async_interface)
- return PIO_EINVAL;
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_var_par_access(file->fh, varid, NC_COLLECTIVE);
- ierr = nc_put_varm_longlong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- if(ios->io_rank==0){
- ierr = nc_put_varm_longlong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, op);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
- vdesc = file->varlist + varid;
-
- if(vdesc->nreqs%PIO_REQUEST_ALLOC_CHUNK == 0 ){
- vdesc->request = realloc(vdesc->request,
- sizeof(int)*(vdesc->nreqs+PIO_REQUEST_ALLOC_CHUNK));
- }
- request = vdesc->request+vdesc->nreqs;
-
- if(ios->io_rank==0){
- ierr = ncmpi_bput_varm_longlong(file->fh, varid, start, count, stride, imap, op, request);;
- }else{
- *request = PIO_REQ_NULL;
- }
- vdesc->nreqs++;
- flush_output_buffer(file, false, 0);
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- return ierr;
-}
-
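Because each write routine above is documented only by reference to the netCDF varm interface, a minimal collective call site may help. The ncid and varid handles, the 2x3 extent, and the in-memory mapping below are assumptions for illustration, and error handling is elided.

#include <pio.h>

/* Every task in the I/O system's union communicator makes the same call;
 * internally only the I/O root (or the PnetCDF buffered path) touches the
 * file, as in the code removed above. */
void write_mapped_block(int ncid, int varid)
{
    int data[6] = {0, 1, 2, 3, 4, 5};        /* in-memory values */
    PIO_Offset start[2]  = {0, 0};
    PIO_Offset count[2]  = {2, 3};            /* a 2x3 block of the variable */
    PIO_Offset stride[2] = {1, 1};
    PIO_Offset imap[2]   = {1, 2};            /* in-memory spacing per dimension */

    (void)PIOc_put_varm_int(ncid, varid, start, count, stride, imap, data);
}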
-int PIOc_get_varm_uchar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned char *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_UCHAR;
- ibuftype = MPI_UNSIGNED_CHAR;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_uchar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_uchar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_uchar(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_uchar_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
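The read routines deleted here pick one of three paths: a parallel netCDF-4 read, a serial netCDF read performed on the I/O master and then broadcast, or a collective ncmpi_get_varm_*_all call; the trailing MPI_Bcast covers every case in which only a subset of tasks actually reads the file. Below is a minimal sketch of the read-on-root-then-broadcast idiom, with an arbitrary reader callback standing in for the netCDF call; the callback signature and the use of doubles are assumptions.

#include <mpi.h>

/* Only the root touches the file; everyone receives the result. */
int read_then_bcast(MPI_Comm comm, int root, double *buf, int count,
                    int (*reader)(double *buf, int count))
{
    int rank, ierr = 0;

    MPI_Comm_rank(comm, &rank);
    if (rank == root)
        ierr = reader(buf, count);

    MPI_Bcast(buf, count, MPI_DOUBLE, root, comm);
    return ierr;   /* nonzero only on the root in this simple sketch */
}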
-
-int PIOc_get_varm_schar (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], signed char *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_SCHAR;
- ibuftype = MPI_CHAR;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_schar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_schar(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_schar(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_schar_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_double (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], double *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_DOUBLE;
- ibuftype = MPI_DOUBLE;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_double(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_double(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_double(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_double_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_text (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], char *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_TEXT;
- ibuftype = MPI_CHAR;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_text(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_text(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_text(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_text_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_int (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], int *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_INT;
- ibuftype = MPI_INT;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_int(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_int(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_int(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_int_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_uint (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned int *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_UINT;
- ibuftype = MPI_UNSIGNED;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_uint(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_uint(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_uint(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_uint_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], void *buf, PIO_Offset bufcount, MPI_Datatype buftype)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM;
- ibufcnt = bufcount;
- ibuftype = buftype;
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm(file->fh, varid, start, count, stride, imap, buf, bufcount, buftype);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_all(file->fh, varid, start, count, stride, imap, buf, bufcount, buftype);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
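Unlike the typed variants, the generic PIOc_get_varm above takes the flat element count and MPI datatype from the caller and reuses them both for the PnetCDF call and for the final broadcast. A hedged call-site sketch follows; the handles and extents are assumptions for illustration.

#include <mpi.h>
#include <pio.h>

void read_mapped_block(int ncid, int varid, double *buf)
{
    PIO_Offset start[1]  = {0};
    PIO_Offset count[1]  = {100};
    PIO_Offset stride[1] = {1};
    PIO_Offset imap[1]   = {1};

    /* bufcount/buftype describe the in-memory buffer: here 100 doubles. */
    (void)PIOc_get_varm(ncid, varid, start, count, stride, imap,
                        buf, 100, MPI_DOUBLE);
}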
-
-int PIOc_get_varm_float (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], float *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_FLOAT;
- ibuftype = MPI_FLOAT;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_float(file->fh, varid,(size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_float(file->fh, varid,(size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_float(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_float_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_long (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_LONG;
- ibuftype = MPI_LONG;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_long(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_long(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_long(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_long_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_ushort (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned short *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_USHORT;
- ibuftype = MPI_UNSIGNED_SHORT;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_ushort(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_ushort(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_ushort(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_ushort_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_longlong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], long long *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_LONGLONG;
- ibuftype = MPI_LONG_LONG;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_longlong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_longlong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_longlong(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_longlong_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_short (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], short *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_SHORT;
- ibuftype = MPI_SHORT;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_short(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_short(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_short(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_short_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
-int PIOc_get_varm_ulonglong (int ncid, int varid, const PIO_Offset start[], const PIO_Offset count[], const PIO_Offset stride[], const PIO_Offset imap[], unsigned long long *buf)
-{
- int ierr;
- int msg;
- int mpierr;
- iosystem_desc_t *ios;
- file_desc_t *file;
- MPI_Datatype ibuftype;
- int ndims;
- int ibufcnt;
- bool bcast = false;
-
- file = pio_get_file_from_id(ncid);
- if(file == NULL)
- return PIO_EBADID;
- ios = file->iosystem;
- msg = PIO_MSG_GET_VARM_ULONGLONG;
- ibuftype = MPI_UNSIGNED_LONG_LONG;
- ierr = PIOc_inq_varndims(file->fh, varid, &ndims);
- ibufcnt = 1;
- for(int i=0;i<ndims;i++){
-   ibufcnt *= count[i];
- }
- ierr = PIO_NOERR;
-
- if(ios->async_interface && ! ios->ioproc){
- if(ios->compmaster)
- mpierr = MPI_Send(&msg, 1,MPI_INT, ios->ioroot, 1, ios->union_comm);
- mpierr = MPI_Bcast(&(file->fh),1, MPI_INT, 0, ios->intercomm);
- }
-
-
- if(ios->ioproc){
- switch(file->iotype){
-#ifdef _NETCDF
-#ifdef _NETCDF4
- case PIO_IOTYPE_NETCDF4P:
- ierr = nc_get_varm_ulonglong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- break;
- case PIO_IOTYPE_NETCDF4C:
-#endif
- case PIO_IOTYPE_NETCDF:
- bcast = true;
- if(ios->iomaster){
- ierr = nc_get_varm_ulonglong(file->fh, varid, (size_t *) start, (size_t *) count, (ptrdiff_t *) stride, (ptrdiff_t *) imap, buf);;
- }
- break;
-#endif
-#ifdef _PNETCDF
- case PIO_IOTYPE_PNETCDF:
-#ifdef PNET_READ_AND_BCAST
- ncmpi_begin_indep_data(file->fh);
- if(ios->iomaster){
- ierr = ncmpi_get_varm_ulonglong(file->fh, varid, start, count, stride, imap, buf);;
- };
- ncmpi_end_indep_data(file->fh);
- bcast=true;
-#else
- ierr = ncmpi_get_varm_ulonglong_all(file->fh, varid, start, count, stride, imap, buf);;
-#endif
- break;
-#endif
- default:
- ierr = iotype_error(file->iotype,__FILE__,__LINE__);
- }
- }
-
- ierr = check_netcdf(file, ierr, __FILE__,__LINE__);
-
- if(ios->async_interface || bcast ||
- (ios->num_iotasks < ios->num_comptasks)){
- MPI_Bcast(buf, ibufcnt, ibuftype, ios->ioroot, ios->my_comm);
- }
-
- return ierr;
-}
-
diff --git a/cime/externals/pio2/src/clib/pioc.c b/cime/externals/pio2/src/clib/pioc.c
index 97afd9489775..3a10c76a6655 100644
--- a/cime/externals/pio2/src/clib/pioc.c
+++ b/cime/externals/pio2/src/clib/pioc.c
@@ -7,10 +7,11 @@
* @see http://code.google.com/p/parallelio/
*/
-#include
+
#include
#include
+
static int counter=0;
/**
@@ -175,6 +176,8 @@ int PIOc_get_local_array_size(int ioid)
** @param iostart An optional array of start values for block cyclic decompositions (optional input)
** @param iocount An optional array of count values for block cyclic decompositions (optional input)
*/
+
+
int PIOc_InitDecomp(const int iosysid, const int basetype,const int ndims, const int dims[],
const int maplen, const PIO_Offset *compmap, int *ioidp,const int *rearranger,
const PIO_Offset *iostart,const PIO_Offset *iocount)
@@ -186,6 +189,8 @@ int PIOc_InitDecomp(const int iosysid, const int basetype,const int ndims, const
int iosize;
int ndisp;
+
+
for(int i=0;inum_comptasks,ndims,counter);
}
- PIOc_writemap(filename,ndims,dims,maplen, (PIO_Offset *)compmap,ios->comp_comm);
+ PIOc_writemap(filename,ndims,dims,maplen,compmap,ios->comp_comm);
counter++;
}
@@ -276,6 +281,8 @@ int PIOc_InitDecomp(const int iosysid, const int basetype,const int ndims, const
** expressed in terms of start and count on the file.
** in this case we compute the compdof and use the subset rearranger
*/
+
+
int PIOc_InitDecomp_bc(const int iosysid, const int basetype,const int ndims, const int dims[],
const long int start[], const long int count[], int *ioidp)
@@ -333,35 +340,21 @@ int PIOc_InitDecomp_bc(const int iosysid, const int basetype,const int ndims, co
return PIO_NOERR;
}
-/* @ingroup PIO_init
- *
- * Library initialization used when IO tasks are a subset of compute
- * tasks.
- *
- * This function creates an MPI intracommunicator between a set of IO
- * tasks and one or more sets of computational tasks.
- *
- * The caller must create all comp_comm and the io_comm MPI
- * communicators before calling this function.
- *
- * @param comp_comm the MPI_Comm of the compute tasks
- *
- * @param num_iotasks the number of io tasks to use
- *
- * @param stride the offset between io tasks in the comp_comm
- *
- * @param base the comp_comm index of the first io task
- *
- * @param rearr the rearranger to use by default, this may be
- * overriden in the @ref PIO_initdecomp
- *
- * @param iosysidp index of the defined system descriptor
- *
- * @return 0 on success, otherwise a PIO error code.
+
+/**
+ ** @ingroup PIO_init
+ ** @brief library initialization used when IO tasks are a subset of compute tasks
+ ** @param comp_comm the MPI_Comm of the compute tasks
+ ** @param num_iotasks the number of io tasks to use
+ ** @param stride the offset between io tasks in the comp_comm
+ ** @param base the comp_comm index of the first io task
+ ** @param rearr the rearranger to use by default; this may be overridden in the @ref PIO_initdecomp
+ ** @param iosysidp index of the defined system descriptor
*/
-int PIOc_Init_Intracomm(const MPI_Comm comp_comm, const int num_iotasks,
- const int stride, const int base, const int rearr,
- int *iosysidp)
+
+int PIOc_Init_Intracomm(const MPI_Comm comp_comm,
+ const int num_iotasks, const int stride,
+ const int base,const int rearr, int *iosysidp)
{
iosystem_desc_t *iosys;
int ierr = PIO_NOERR;
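Putting the parameters documented above together, a typical intracommunicator setup looks roughly like the sketch below. The task layout (2 I/O tasks, stride 4, base 0, so at least 5 MPI ranks are needed) and the box rearranger are illustrative choices, not requirements.

#include <mpi.h>
#include <pio.h>

int main(int argc, char **argv)
{
    int iosysid, ierr;

    MPI_Init(&argc, &argv);
    ierr = PIOc_Init_Intracomm(MPI_COMM_WORLD,
                               2,              /* num_iotasks              */
                               4,              /* stride between I/O tasks */
                               0,              /* base: first I/O task     */
                               PIO_REARR_BOX,  /* default rearranger       */
                               &iosysid);

    /* ... define decompositions, create/open files, read and write ... */

    ierr = PIOc_finalize(iosysid);
    MPI_Finalize();
    return ierr;
}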
@@ -393,8 +386,8 @@ int PIOc_Init_Intracomm(const MPI_Comm comp_comm, const int num_iotasks,
iosys->intercomm = MPI_COMM_NULL;
iosys->error_handler = PIO_INTERNAL_ERROR;
iosys->async_interface= false;
- iosys->compmaster = 0;
- iosys->iomaster = 0;
+ iosys->compmaster = false;
+ iosys->iomaster = false;
iosys->ioproc = false;
iosys->default_rearranger = rearr;
iosys->num_iotasks = num_iotasks;
@@ -405,7 +398,7 @@ int PIOc_Init_Intracomm(const MPI_Comm comp_comm, const int num_iotasks,
CheckMPIReturn(MPI_Comm_rank(iosys->comp_comm, &(iosys->comp_rank)),__FILE__,__LINE__);
CheckMPIReturn(MPI_Comm_size(iosys->comp_comm, &(iosys->num_comptasks)),__FILE__,__LINE__);
if(iosys->comp_rank==0)
- iosys->compmaster = MPI_ROOT;
+ iosys->compmaster = true;
/* Ensure that settings for number of computation tasks, number
* of IO tasks, and the stride are reasonable. */
@@ -415,7 +408,7 @@ int PIOc_Init_Intracomm(const MPI_Comm comp_comm, const int num_iotasks,
iosys->num_iotasks = 1;
ustride = 1;
}
- if((iosys->num_iotasks < 1) || ((iosys->num_iotasks*ustride) > iosys->num_comptasks)){
+ if((iosys->num_iotasks < 1) || (((iosys->num_iotasks-1)*ustride+1) > iosys->num_comptasks)){
fprintf(stderr, "PIO_TP PIOc_Init_Intracomm error\n");
fprintf(stderr, "num_iotasks=%d, ustride=%d, num_comptasks=%d\n", num_iotasks, ustride, iosys->num_comptasks);
return PIO_EBADID;
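The relaxed test above accepts layouts in which the last I/O task is the highest-ranked compute task. For example, with 5 compute tasks, 2 I/O tasks, and a stride of 4, the I/O tasks sit on ranks 0 and 4: the old product check (2*4 = 8 > 5) rejected this, while the new check ((2-1)*4 + 1 = 5 <= 5) accepts it. A small self-contained check of the two predicates, using these illustrative numbers:

#include <stdio.h>

int main(void)
{
    int num_iotasks = 2, ustride = 4, num_comptasks = 5;

    int old_rejects = (num_iotasks * ustride) > num_comptasks;            /* 8 > 5 -> 1 */
    int new_rejects = ((num_iotasks - 1) * ustride + 1) > num_comptasks;  /* 5 > 5 -> 0 */

    printf("old check rejects: %d, new check rejects: %d\n",
           old_rejects, new_rejects);
    return 0;
}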
@@ -435,7 +428,7 @@ int PIOc_Init_Intracomm(const MPI_Comm comp_comm, const int num_iotasks,
iosys->info = MPI_INFO_NULL;
if(iosys->comp_rank == iosys->ioranks[0])
- iosys->iomaster = MPI_ROOT;
+ iosys->iomaster = true;
/* Create a group for the computation tasks. */
CheckMPIReturn(MPI_Comm_group(iosys->comp_comm, &(iosys->compgroup)),__FILE__,__LINE__);
@@ -445,11 +438,10 @@ int PIOc_Init_Intracomm(const MPI_Comm comp_comm, const int num_iotasks,
&(iosys->iogroup)),__FILE__,__LINE__);
/* Create an MPI communicator for the IO tasks. */
- CheckMPIReturn(MPI_Comm_create(iosys->comp_comm, iosys->iogroup, &(iosys->io_comm))
- ,__FILE__,__LINE__);
+ CheckMPIReturn(MPI_Comm_create(iosys->comp_comm, iosys->iogroup, &(iosys->io_comm)),__FILE__,__LINE__);
/* For the tasks that are doing IO, get their rank. */
- if (iosys->ioproc)
+ if(iosys->ioproc)
CheckMPIReturn(MPI_Comm_rank(iosys->io_comm, &(iosys->io_rank)),__FILE__,__LINE__);
else
iosys->io_rank = -1;
@@ -497,9 +489,8 @@ int PIOc_set_hint(const int iosysid, char hint[], const char hintval[])
}
-/** @ingroup PIO_finalize
- * Clean up internal data structures, free MPI resources, and exit the
- * pio library.
+/** @ingroup PIO_finalize
+ * @brief Clean up data structures and exit the pio library.
*
* @param iosysid: the io system ID provided by PIOc_Init_Intracomm().
*
@@ -509,38 +500,27 @@ int PIOc_set_hint(const int iosysid, char hint[], const char hintval[])
int PIOc_finalize(const int iosysid)
{
iosystem_desc_t *ios, *nios;
- int msg;
- int mpierr;
ios = pio_get_iosystem_from_id(iosysid);
if(ios == NULL)
- return PIO_EBADID;
-
- /* If asynch IO is in use, send the PIO_MSG_EXIT message from the
- * comp master to the IO processes. */
- if (ios->async_interface && !ios->comp_rank)
- {
- msg = PIO_MSG_EXIT;
- mpierr = MPI_Send(&msg, 1, MPI_INT, ios->ioroot, 1, ios->union_comm);
- CheckMPIReturn(mpierr, __FILE__, __LINE__);
- }
-
- /* Free this memory that was allocated in init_intracomm. */
- if (ios->ioranks)
+ return PIO_EBADID;
+ /* FIXME: The memory for ioranks is allocated in C only for intracomms
+ * Remove this check once mem allocs for ioranks completely move to the
+ * C code
+ */
+ if(ios->intercomm == MPI_COMM_NULL){
+ if(ios->ioranks != NULL){
free(ios->ioranks);
+ }
+ }
- /* Free the buffer pool. */
free_cn_buffer_pool(*ios);
/* Free the MPI groups. */
- if (ios->compgroup != MPI_GROUP_NULL)
- MPI_Group_free(&ios->compgroup);
-
- if (ios->iogroup != MPI_GROUP_NULL)
- MPI_Group_free(&(ios->iogroup));
+ MPI_Group_free(&(ios->compgroup));
+ MPI_Group_free(&(ios->iogroup));
- /* Free the MPI communicators. my_comm is just a copy (but not an
- * MPI copy), so does not have to have an MPI_Comm_free() call. */
+ /* Free the MPI communicators. */
if(ios->intercomm != MPI_COMM_NULL){
MPI_Comm_free(&(ios->intercomm));
}
@@ -554,8 +534,9 @@ int PIOc_finalize(const int iosysid)
MPI_Comm_free(&(ios->union_comm));
}
- /* Delete the iosystem_desc_t data associated with this id. */
return pio_delete_iosystem_from_list(iosysid);
+
+
}
/**
diff --git a/cime/externals/pio2/src/clib/pioc_sc.c b/cime/externals/pio2/src/clib/pioc_sc.c
index de3d774655f2..baf86b4442f3 100644
--- a/cime/externals/pio2/src/clib/pioc_sc.c
+++ b/cime/externals/pio2/src/clib/pioc_sc.c
@@ -4,9 +4,9 @@
* @date 2014
* @brief Compute start and count arrays for the box rearranger
*
- *
- *
- *
+ *
+ *
+ *
* @see http://code.google.com/p/parallelio/
*/
#ifdef TESTCALCDECOMP
@@ -120,7 +120,7 @@ int calcdisplace(const int bsize, const int numblocks,const PIO_Offset map[], in
}
-void computestartandcount(const int gdim, const int ioprocs, const int rank,
+void computestartandcount(const int gdim, const int ioprocs, const int rank,
PIO_Offset *start, PIO_Offset *kount)
{
int irank;
@@ -140,18 +140,18 @@ void computestartandcount(const int gdim, const int ioprocs, const int rank,
}
*start = lstart;
*kount = lkount;
-
+
}
PIO_Offset GCDblocksize(const int arrlen, const PIO_Offset arr_in[]){
// PIO_Offset del_arr[arrlen-1];
- // PIO_Offset loc_arr[arrlen-1];
+ // PIO_Offset loc_arr[arrlen-1];
PIO_Offset *gaps=NULL;
PIO_Offset *blk_len=NULL;
int i, j, k, n, numblks, numtimes, ii, numgaps;
PIO_Offset bsize, bsizeg, blklensum;
// PIO_Offset *del_arr = (PIO_Offset *) calloc((arrlen-1),sizeof(PIO_Offset));
- // PIO_Offset *loc_arr = (PIO_Offset *) calloc((arrlen-1),sizeof(PIO_Offset));
+ // PIO_Offset *loc_arr = (PIO_Offset *) calloc((arrlen-1),sizeof(PIO_Offset));
PIO_Offset del_arr[arrlen-1];
PIO_Offset loc_arr[arrlen-1];
@@ -169,11 +169,11 @@ PIO_Offset GCDblocksize(const int arrlen, const PIO_Offset arr_in[]){
return(1);
}
// printf("%s %d %d %d %d\n",__FILE__,__LINE__,i,del_arr[i],arr_in[i]);
-
+
}
}
-
+
numblks = numtimes+1;
if(numtimes==0)
@@ -200,7 +200,7 @@ PIO_Offset GCDblocksize(const int arrlen, const PIO_Offset arr_in[]){
numgaps=ii;
}
// printf("%s %d\n",__FILE__,__LINE__);
-
+
j=0;
for(i=0;i 0) {
bsizeg = lgcd_array(numgaps, gaps);
bsize = lgcd(bsize,bsizeg);
// free(gaps);
}
- if(arr_in[0]>0)
+ if(arr_in[0]>0)
bsize = lgcd(bsize,arr_in[0]);
// printf("%s %d\n",__FILE__,__LINE__);
@@ -324,7 +324,7 @@ int CalcStartandCount(const int basetype, const int ndims,const int *gdims, cons
#endif
for(i=0;i<=ldims;i++){
if(gdims[i]>= ioprocs){
- computestartandcount(gdims[i],ioprocs,tiorank,start+i,kount+i);
+ computestartandcount(gdims[i],ioprocs,tiorank,start+i,kount+i);
#ifdef TESTCALCDECOMP
if(myiorank==0) printf("%d tiorank %d i %d start %d count %d\n",__LINE__,tiorank,i,start[i],kount[i]);
#endif
@@ -349,10 +349,10 @@ int CalcStartandCount(const int basetype, const int ndims,const int *gdims, cons
}
}
pknt = 1;
-
+
for(i=0;i
-#if PIO_ENABLE_LOGGING
-#include
-#include
-#endif /* PIO_ENABLE_LOGGING */
#include
#include
#include
#define versno 2001
-#if PIO_ENABLE_LOGGING
-int pio_log_level = 0;
-int my_rank;
-#endif /* PIO_ENABLE_LOGGING */
-
-/** Return a string description of an error code. If zero is passed, a
- * null is returned.
- *
- * @param pioerr the error code returned by a PIO function call.
- * @param errmsg Pointer that will get the error message. It will be
- * PIO_MAX_NAME chars or less.
- *
- * @return 0 on success
- */
-int
-PIOc_strerror(int pioerr, char *errmsg)
-{
-
- /* System error? */
- if(pioerr > 0)
- {
- const char *cp = (const char *)strerror(pioerr);
- if (cp)
- strncpy(errmsg, cp, PIO_MAX_NAME);
- else
- strcpy(errmsg, "Unknown Error");
- }
- else if (pioerr == PIO_NOERR)
- {
- strcpy(errmsg, "No error");
- }
- else if (pioerr <= NC2_ERR && pioerr >= NC4_LAST_ERROR) /* NetCDF error? */
- {
-#if defined( _PNETCDF) || defined(_NETCDF)
- strncpy(errmsg, nc_strerror(pioerr), NC_MAX_NAME);
-#else /* defined( _PNETCDF) || defined(_NETCDF) */
- strcpy(errmsg, "NetCDF error code, PIO not built with netCDF.");
-#endif /* defined( _PNETCDF) || defined(_NETCDF) */
- }
- else
- {
- /* Handle PIO errors. */
- switch(pioerr) {
- case PIO_EBADIOTYPE:
- strcpy(errmsg, "Bad IO type");
- break;
- default:
- strcpy(errmsg, "unknown PIO error");
- }
- }
-
- return PIO_NOERR;
-}
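The routine deleted above documents a simple contract: positive codes are treated as system errno values, codes in the netCDF range are handed to nc_strerror when netCDF or PnetCDF is built in, and PIO-specific codes map to fixed strings. A minimal caller sketch of that documented interface (buffer sized to the PIO_MAX_NAME convention the deleted code assumes):

#include <stdio.h>
#include <pio.h>

void report_pio_error(int pioerr)
{
    char errmsg[PIO_MAX_NAME + 1];

    if (pioerr != PIO_NOERR) {
        PIOc_strerror(pioerr, errmsg);
        fprintf(stderr, "PIO error %d: %s\n", pioerr, errmsg);
    }
}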
-
-/** Set the logging level. Set to -1 for nothing, 0 for errors only, 1
- * for important logging, and so on. Log levels below 1 are only
- * printed on the io/component root. If the library is not built with
- * logging, this function does nothing. */
-int PIOc_set_log_level(int level)
-{
-#if PIO_ENABLE_LOGGING
- printf("setting log level to %d\n", level);
- pio_log_level = level;
- MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
-#endif /* PIO_ENABLE_LOGGING */
- return PIO_NOERR;
-}
-
-#if PIO_ENABLE_LOGGING
-/** This function prints out a message, if the severity of the message
- is lower than the global pio_log_level. To use it, do something
- like this:
-
- pio_log(0, "this computer will explode in %d seconds", i);
-
- After the first arg (the severity), use the rest like a normal
- printf statement. Output will appear on stdout.
- This function is heavily based on the function in section 15.5 of
- the C FAQ.
-*/
-void
-pio_log(int severity, const char *fmt, ...)
-{
- va_list argp;
- int t;
-
- /* If the severity is greater than the log level, we don't print
- this message. */
- if (severity > pio_log_level)
- return;
-
- /* If the severity is 0, only print on rank 0. */
- if (severity < 1 && my_rank != 0)
- return;
-
- /* If the severity is zero, this is an error. Otherwise insert that
- many tabs before the message. */
- if (!severity)
- fprintf(stdout, "ERROR: ");
- for (t = 0; t < severity; t++)
- fprintf(stdout, "\t");
-
- /* Show the rank. */
- fprintf(stdout, "%d ", my_rank);
-
- /* Print out the variable list of args with vprintf. */
- va_start(argp, fmt);
- vfprintf(stdout, fmt, argp);
- va_end(argp);
-
- /* Put on a final linefeed. */
- fprintf(stdout, "\n");
- fflush(stdout);
-}
-#endif /* PIO_ENABLE_LOGGING */
-
static pio_swapm_defaults swapm_defaults;
bool PIO_Save_Decomps=false;
/**
@@ -235,35 +114,6 @@ void pioassert(_Bool expression, const char *msg, const char *fname, const int l
}
-/** Handle MPI errors. An error message is sent to stderr, then the
- check_netcdf() function is called with PIO_EIO.
-
- @param file pointer to the file_desc_t info
- @param mpierr the MPI return code to handle
- @param filename the name of the code file where error occured.
- @param line the line of code where error occured.
- @return PIO_NOERR for no error, otherwise PIO_EIO.
- */
-int check_mpi(file_desc_t *file, const int mpierr, const char *filename,
- const int line)
-{
- if (mpierr)
- {
- char errstring[MPI_MAX_ERROR_STRING];
- int errstrlen;
-
- /* If we can get an error string from MPI, print it to stderr. */
- if (!MPI_Error_string(mpierr, errstring, &errstrlen))
- fprintf(stderr, "MPI ERROR: %s in file %s at line %d\n",
- errstring, filename, line);
-
- /* Handle all MPI errors as PIO_EIO. */
- check_netcdf(file, PIO_EIO, filename, line);
- return PIO_EIO;
- }
- return PIO_NOERR;
-}
-
/** Check the result of a netCDF API call.
*
* @param file pointer to the PIO structure describing this file.
diff --git a/cime/externals/pio2/src/clib/topology.c b/cime/externals/pio2/src/clib/topology.c
index 9fc2f369931d..9334d127c175 100644
--- a/cime/externals/pio2/src/clib/topology.c
+++ b/cime/externals/pio2/src/clib/topology.c
@@ -34,23 +34,23 @@ void identity(const MPI_Comm comm, int *iotask)
MPIX_Hardware(&hw);
-
+
Kernel_GetPersonality(&pers, sizeof(pers));
-
+
int numIONodes,numPsets,numNodesInPset,rankInPset;
int numpsets, psetID, psetsize, psetrank;
bgq_pset_info (comm, &numpsets, &psetID, &psetsize, &psetrank);
-
- numIONodes = numpsets;
- numNodesInPset = psetsize;
- rankInPset = rank;
- numPsets = numpsets;
-
+
+ numIONodes = numpsets;
+ numNodesInPset = psetsize;
+ rankInPset = rank;
+ numPsets = numpsets;
+
if(rank == 0) { printf("number of IO nodes in block: %i \n",numIONodes);}
if(rank == 0) { printf("number of Psets in block : %i \n",numPsets);}
if(rank == 0) { printf("number of compute nodes in Pset: %i \n",numNodesInPset);}
-
+
int psetNum;
psetNum = psetID;
@@ -77,26 +77,26 @@ void identity(const MPI_Comm comm, int *iotask)
free(TasksPerPset);
}
-void determineiotasks(const MPI_Comm comm, int *numiotasks,int *base, int *stride, int *rearr,
+void determineiotasks(const MPI_Comm comm, int *numiotasks,int *base, int *stride, int *rearr,
bool *iamIOtask)
{
-/*
+/*
Returns the correct numiotasks and the flag iamIOtask
Some concepts:
- processor set: A group of processors on the Blue Gene system which have
+ processor set: A group of processors on the Blue Gene system which have
one or more IO processor (Pset)
- IO-node: A special Blue Gene node dedicated to performing IO. There
+ IO-node: A special Blue Gene node dedicated to performing IO. There
are one or more per processor set
- IO-client: This is software concept. This refers to the MPI task
- which performs IO for the PIO library
+ IO-client: This is software concept. This refers to the MPI task
+ which performs IO for the PIO library
*/
- int psetNum;
+ int psetNum;
int coreId;
int iam;
int task_count;
@@ -121,21 +121,21 @@ void determineiotasks(const MPI_Comm comm, int *numiotasks,int *base, int *strid
Kernel_GetPersonality(&pers, sizeof(pers));
-
+
int numIONodes,numPsets,numNodesInPset,rankInPset;
int numiotasks_per_node,remainder,numIONodes_per_pset;
int lstride;
-
+
/* Number of computational nodes in processor set */
int numpsets, psetID, psetsize, psetrank;
bgq_pset_info (comm,&numpsets, &psetID, &psetsize, &psetrank);
- numIONodes = numpsets;
+ numIONodes = numpsets;
numNodesInPset = psetsize;
/* printf("Determine io tasks: me %i : nodes in pset= %i ionodes = %i\n", rank, numNodesInPset, numIONodes); */
-
- if((*numiotasks) < 0 ) {
+
+ if((*numiotasks) < 0 ) {
/* negative numiotasks value indicates that this is the number per IO-node */
(*numiotasks) = - (*numiotasks);
if((*numiotasks) > numNodesInPset) {
@@ -164,7 +164,7 @@ void determineiotasks(const MPI_Comm comm, int *numiotasks,int *base, int *strid
 numiotasks_per_node = 8; /* default number of IO-clients per IO-node if not otherwise specified */
}
remainder = 0;
- }
+ }
/* number of IO nodes with a larger number of io-client per io-node */
if(remainder > 0) {
@@ -177,62 +177,62 @@ void determineiotasks(const MPI_Comm comm, int *numiotasks,int *base, int *strid
}
lstride = min(np,floor((float)numNodesInPset/(float)numiotasks_per_node));
}
-
+
/* Number of processor sets */
- numPsets = numpsets;
-
+ numPsets = numpsets;
+
/* number of IO nodes in processor set (I need to add
- code to deal with the case where numIONodes_per_pset != 1 works
+ code to deal with the case where numIONodes_per_pset != 1 works
correctly) */
numIONodes_per_pset = numIONodes/numPsets;
-
+
/* Determine which core on node.... I don't want to put more than one io-task per node */
coreId = Kernel_PhysicalProcessorID ();
-
+
/* What is the rank of this node in the processor set */
psetNum = psetID;
rankInPset = psetrank;
-
+
/* printf("Pset #: %i has %i nodes in Pset; base = %i\n",psetNum,numNodesInPset, *base); */
-
+
(*iamIOtask) = false; /* initialize to zero */
-
+
if (numiotasks_per_node == numNodesInPset)(*base) = 0; /* Reset the base to 0 if we are using all tasks */
- /* start stridding MPI tasks from base task */
+ /* start striding MPI tasks from the base task */
iam = max(0,rankInPset-(*base));
if (iam >= 0) {
/* mark tasks that will be IO-tasks or IO-clients */
/* printf("iam = %d lstride = %d coreID = %d\n",iam,lstride,coreId); */
if((iam % lstride == 0) && (coreId == 0) ) { /* only io tasks indicated by stride and coreId = 0 */
- if((iam/lstride) < numiotasks_per_node) {
+ if((iam/lstride) < numiotasks_per_node) {
/* only set the first (numiotasks_per_node - 1) tasks */
(*iamIOtask) = true;
} else if ((iam/lstride) == numiotasks_per_node) {
- /* If there is an uneven number of io-clients to io-nodes
- allocate the first remainder - 1 processor sets to
+ /* If there is an uneven number of io-clients to io-nodes
+ allocate the first remainder - 1 processor sets to
have a total of numiotasks_per_node */
if(psetNum < remainder) {(*iamIOtask) = true;
- };
+ };
}
}
/* printf("comm = %d iam = %d lstride = %d coreID = %d iamIOtask = %i \n",comm, iam,lstride,coreId,(*iamIOtask)); */
}
- }else{
+ }else{
/* We are not doing rearrangement.... so all tasks are io-tasks */
(*iamIOtask) = true;
}
-
- /*printf("comm = %d myrank = %i iotask = %i \n", comm, rank, (*iamIOtask));*/
-
+
+ /*printf("comm = %d myrank = %i iotask = %i \n", comm, rank, (*iamIOtask));*/
+
/* now we need to correctly determine the numiotasks */
MPI_Allreduce(iamIOtask, &task_count, 1, MPI_INT, MPI_SUM, comm);
(*numiotasks) = task_count;
-
-
+
+
}
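
In short, the selection rule above is: starting from the base task, every lstride-th MPI task in the processor set (restricted to core 0 of each node) becomes an IO-client, up to numiotasks_per_node of them per IO-node; a negative numiotasks input is read as a per-IO-node count. A minimal Fortran sketch of that rule follows; the inputs are assumed to have been obtained already (on Blue Gene from bgq_pset_info and Kernel_PhysicalProcessorID) and the remainder handling for uneven io-client counts is omitted, so this is an illustration rather than the implementation.

! Sketch of the IO-client selection rule described above. Inputs are
! assumed known; the uneven-remainder case is not handled here.
logical function is_io_client(rank_in_pset, base, lstride, niotasks_per_node, core_id)
  implicit none
  integer, intent(in) :: rank_in_pset, base, lstride, niotasks_per_node, core_id
  integer :: iam
  is_io_client = .false.
  iam = max(0, rank_in_pset - base)
  ! only core 0 of every lstride-th task past the base is a candidate
  if (core_id == 0 .and. mod(iam, lstride) == 0) then
     ! keep at most niotasks_per_node io-clients per IO-node
     if (iam / lstride < niotasks_per_node) is_io_client = .true.
  end if
end function is_io_client
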
int bgq_ion_id (void)
@@ -296,7 +296,7 @@ int bgq_pset_info (MPI_Comm comm, int* tot_pset, int* psetID, int* pset_size, in
// Compute the ION BridgeNode ID
key = bgq_ion_id ();
-
+
// Create the pset_comm per bridge node
status = MPI_Comm_split ( comp_comm, key, comp_rank, &pset_comm);
if ( MPI_SUCCESS != status)
@@ -308,7 +308,7 @@ int bgq_pset_info (MPI_Comm comm, int* tot_pset, int* psetID, int* pset_size, in
// Calculate the rank in pset and pset size
MPI_Comm_rank (pset_comm, rank_in_pset);
MPI_Comm_size (pset_comm, pset_size);
-
+
// Create the Bridge root nodes communicator
bridge_root = 0;
if (0 == *rank_in_pset)
@@ -317,7 +317,7 @@ int bgq_pset_info (MPI_Comm comm, int* tot_pset, int* psetID, int* pset_size, in
// Calculate the total number of bridge nodes / psets
tot_bridges = 0;
MPI_Allreduce (&bridge_root, &tot_bridges, 1, MPI_INT, MPI_SUM, comm);
-
+
*tot_pset = tot_bridges;
// Calculate the Pset ID
@@ -329,7 +329,7 @@ int bgq_pset_info (MPI_Comm comm, int* tot_pset, int* psetID, int* pset_size, in
rem_psets = tot_bridges-1;
cur_pset++;
}
-
+
t_buf = 0; // Dummy value
if (0 == comp_rank)
{
@@ -345,18 +345,18 @@ int bgq_pset_info (MPI_Comm comm, int* tot_pset, int* psetID, int* pset_size, in
{
MPI_Send (&t_buf, 1, MPI_INT, 0, 0, comm);
MPI_Recv (&temp_id,1, MPI_INT, 0, 0, comm, &mpi_status);
-
+
*psetID = temp_id;
/*printf (" Pset ID is %d \n", *psetID);*/
}
// Broadcast the PSET ID to all ranks in the psetcomm
MPI_Bcast ( psetID, 1, MPI_INT, 0, pset_comm);
-
+
// Free the split comm
MPI_Comm_free (&pset_comm);
-
+
MPI_Barrier (comm);
-
+
return 0;
}
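
The pset bookkeeping in bgq_pset_info is plain MPI: tasks are grouped with MPI_Comm_split using their bridge IO-node ID as the colour, the size and rank inside each group give psetsize and psetrank, and summing one contribution per group root counts the psets. A stripped-down Fortran sketch of the same pattern, with an arbitrary integer standing in for the Blue Gene specific bgq_ion_id():

program pset_sketch
  use mpi
  implicit none
  integer :: ierr, world_rank, colour, pset_comm
  integer :: pset_rank, pset_size, is_root, tot_psets

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, world_rank, ierr)

  ! stand-in for bgq_ion_id(): any value shared by tasks on the same IO node;
  ! here we simply pretend four tasks share a bridge node
  colour = world_rank / 4

  call MPI_Comm_split(MPI_COMM_WORLD, colour, world_rank, pset_comm, ierr)
  call MPI_Comm_rank(pset_comm, pset_rank, ierr)
  call MPI_Comm_size(pset_comm, pset_size, ierr)

  ! one contribution per group root gives the number of psets
  is_root = merge(1, 0, pset_rank == 0)
  call MPI_Allreduce(is_root, tot_psets, 1, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, ierr)

  if (world_rank == 0) print *, 'number of psets:', tot_psets
  call MPI_Comm_free(pset_comm, ierr)
  call MPI_Finalize(ierr)
end program pset_sketch
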
diff --git a/cime/externals/pio2/src/flib/CMakeLists.txt b/cime/externals/pio2/src/flib/CMakeLists.txt
index 8d36a029cf12..cef209ba8b91 100644
--- a/cime/externals/pio2/src/flib/CMakeLists.txt
+++ b/cime/externals/pio2/src/flib/CMakeLists.txt
@@ -12,14 +12,14 @@ set (PIO_Fortran_SRCS pio_nf.F90
pio.F90
pio_kinds.F90
pio_types.F90
- piolib_mod.F90
+ piolib_mod.F90
pio_support.F90)
-
+
set (PIO_GenF90_SRCS pionfatt_mod.F90
pionfput_mod.F90
pionfget_mod.F90
piodarray.F90)
-
+
set (PIO_Fortran_MODS ${CMAKE_CURRENT_BINARY_DIR}/pio.mod
${CMAKE_CURRENT_BINARY_DIR}/pio_nf.mod
${CMAKE_CURRENT_BINARY_DIR}/pio_types.mod
@@ -46,7 +46,7 @@ target_compile_definitions (piof
PUBLIC ${CMAKE_SYSTEM_DIRECTIVE})
target_compile_definitions (piof
PUBLIC ${CMAKE_Fortran_COMPILER_DIRECTIVE})
-
+
# Compiler-specific compile options
if ("${CMAKE_Fortran_COMPILER_ID}" STREQUAL "GNU")
target_compile_options (piof
@@ -99,7 +99,7 @@ install (FILES ${PIO_Fortran_MODS} DESTINATION include)
#===== pioc =====
target_link_libraries(piof
PUBLIC pioc)
-
+
#===== genf90 =====
if (DEFINED GENF90_PATH)
add_custom_target(genf90
@@ -194,7 +194,7 @@ endif ()
if (PIO_ENABLE_TIMING)
if (GPTL_Fortran_Perf_FOUND)
message (STATUS "Found GPTL Fortran Perf: ${GPTL_Fortran_Perf_LIBRARIES}")
- target_include_directories (piof
+ target_include_directories (piof
PUBLIC ${GPTL_Fortran_INCLUDE_DIRS})
target_link_libraries (piof
PUBLIC ${GPTL_Fortran_LIBRARIES})
@@ -208,9 +208,9 @@ endif ()
#===== NetCDF-Fortran =====
find_package (NetCDF "4.3.3" COMPONENTS Fortran)
if (NetCDF_Fortran_FOUND)
- target_include_directories (piof
+ target_include_directories (piof
PUBLIC ${NetCDF_Fortran_INCLUDE_DIRS})
- target_compile_definitions (piof
+ target_compile_definitions (piof
PUBLIC _NETCDF)
target_link_libraries (piof
PUBLIC ${NetCDF_Fortran_LIBRARIES})
@@ -219,7 +219,7 @@ if (NetCDF_Fortran_FOUND)
PUBLIC _NETCDF4)
endif ()
else ()
- target_compile_definitions (piof
+ target_compile_definitions (piof
PUBLIC _NONETCDF)
endif ()
@@ -228,9 +228,9 @@ if (WITH_PNETCDF)
find_package (PnetCDF "1.6" COMPONENTS Fortran REQUIRED)
endif ()
if (PnetCDF_Fortran_FOUND)
- target_include_directories (piof
+ target_include_directories (piof
PUBLIC ${PnetCDF_Fortran_INCLUDE_DIRS})
- target_compile_definitions (piof
+ target_compile_definitions (piof
PUBLIC _PNETCDF)
target_link_libraries (piof
PUBLIC ${PnetCDF_Fortran_LIBRARIES})
@@ -242,9 +242,9 @@ if (PnetCDF_Fortran_FOUND)
target_compile_definitions(piof
PUBLIC USE_PNETCDF_VARN
PUBLIC USE_PNETCDF_VARN_ON_READ)
- endif()
+ endif()
else ()
- target_compile_definitions (piof
+ target_compile_definitions (piof
PUBLIC _NOPNETCDF)
endif ()
@@ -258,7 +258,7 @@ target_compile_options (piof
target_compile_definitions (piof
PUBLIC ${PIO_Fortran_EXTRA_COMPILE_DEFINITIONS})
if (PIO_Fortran_EXTRA_LINK_FLAGS)
- set_target_properties(piof PROPERTIES
+ set_target_properties(piof PROPERTIES
LINK_FLAGS ${PIO_Fortran_EXTRA_LINK_FLAGS})
endif ()
@@ -266,4 +266,4 @@ endif ()
if (NOT PnetCDF_Fortran_FOUND AND NOT NetCDF_Fortran_FOUND)
message (FATAL_ERROR "Must have PnetCDF and/or NetCDF Fortran libraries")
endif ()
-
+
diff --git a/cime/externals/pio2/src/flib/pio.F90 b/cime/externals/pio2/src/flib/pio.F90
index 739c0f8441e1..0609c506c530 100644
--- a/cime/externals/pio2/src/flib/pio.F90
+++ b/cime/externals/pio2/src/flib/pio.F90
@@ -1,7 +1,7 @@
!>
-!! @file
+!! @file
!! @brief User interface module for PIO; this is the only file a user program should 'use'
-!!
+!!
!<
module pio
@@ -28,9 +28,9 @@ module pio
pio_nofill, pio_unlimited, pio_fill_int, pio_fill_double, pio_fill_float, &
#endif
pio_64bit_offset, pio_64bit_data, &
- pio_internal_error, pio_bcast_error, pio_return_error
+ pio_internal_error, pio_bcast_error, pio_return_error, pio_rearr_opt_t
- use piodarray, only : pio_read_darray, pio_write_darray, pio_set_buffer_size_limit
+ use piodarray, only : pio_read_darray, pio_write_darray, pio_set_buffer_size_limit
use pio_nf, only: &
PIO_enddef, &
@@ -53,14 +53,12 @@ module pio
PIO_def_var , &
PIO_def_var_deflate , &
PIO_redef , &
- PIO_set_log_level, &
PIO_inquire_variable , &
PIO_inquire_dimension, &
PIO_set_chunk_cache, &
PIO_get_chunk_cache, &
PIO_set_var_chunk_cache, &
- PIO_get_var_chunk_cache, &
- PIO_strerror
+ PIO_get_var_chunk_cache
use pionfatt_mod, only : PIO_put_att => put_att, &
PIO_get_att => get_att
@@ -113,11 +111,11 @@ integer(C_INT) function PIOc_iam_iotask(iosysid, iotask) &
logical(C_BOOL), intent(out) :: iotask
end function PIOc_iam_iotask
end interface
-
+
ierr = PIOc_iam_iotask(iosystem%iosysid, ctask)
task = ctask
end function pio_iam_iotask
-
+
!>
!! @public
!! @brief Integer function returns rank of IO task.
@@ -133,7 +131,7 @@ integer(C_INT) function PIOc_iotask_rank(iosysid, rank) &
integer(C_INT), intent(out) :: rank
end function PIOc_iotask_rank
end interface
-
+
ierr = PIOc_iotask_rank(iosystem%iosysid, rank)
end function pio_iotask_rank
diff --git a/cime/externals/pio2/src/flib/pio_kinds.F90 b/cime/externals/pio2/src/flib/pio_kinds.F90
index 4601406d99b0..98006f4eada2 100644
--- a/cime/externals/pio2/src/flib/pio_kinds.F90
+++ b/cime/externals/pio2/src/flib/pio_kinds.F90
@@ -1,6 +1,6 @@
!>
!! @file pio_kinds.F90
-!! @brief basic data types
+!! @brief basic data types
!!
!<
module pio_kinds
@@ -38,10 +38,10 @@ module pio_kinds
r4 = selected_real_kind(6) ,&
r8 = selected_real_kind(13)
!
-! MPI defines MPI_OFFSET_KIND as the byte size of the
+! MPI defines MPI_OFFSET_KIND as the byte size of the
! type, which is not necessarily the type kind
!
-
+
integer, parameter, public :: PIO_OFFSET_KIND=MPI_OFFSET_KIND
!EOP
diff --git a/cime/externals/pio2/src/flib/pio_nf.F90 b/cime/externals/pio2/src/flib/pio_nf.F90
index 4db50a2a51df..5c798f834ec8 100644
--- a/cime/externals/pio2/src/flib/pio_nf.F90
+++ b/cime/externals/pio2/src/flib/pio_nf.F90
@@ -36,9 +36,7 @@ module pio_nf
pio_get_chunk_cache , &
pio_set_var_chunk_cache , &
pio_get_var_chunk_cache , &
- pio_redef , &
- pio_set_log_level , &
- pio_strerror
+ pio_redef
! pio_copy_att to be done
interface pio_def_var
@@ -182,24 +180,13 @@ module pio_nf
module procedure &
enddef_desc , &
enddef_id
- end interface pio_enddef
-
+ end interface
interface pio_redef
module procedure &
redef_desc , &
redef_id
end interface
- interface pio_set_log_level
- module procedure &
- set_log_level
- end interface pio_set_log_level
-
- interface pio_strerror
- module procedure &
- strerror
- end interface pio_strerror
-
interface pio_inquire
module procedure &
inquire_desc , &
@@ -661,69 +648,18 @@ integer function redef_desc(File) result(ierr)
type (File_desc_t) , intent(inout) :: File
ierr = redef_id(file%fh)
end function redef_desc
-
-!>
-!! @defgroup PIO_set_log_level
-!<
-!>
-!! @ingroup PIO_set_log_level
-!! Sets the logging level. Only takes effect if PIO was built with
-!! PIO_ENABLE_LOGGING=On
-!!
-!! @param log_level the logging level.
-!! @retval ierr @copydoc error_return
-!<
- integer function set_log_level(log_level) result(ierr)
- integer, intent(in) :: log_level
- interface
- integer(C_INT) function PIOc_set_log_level(log_level) &
- bind(C, name="PIOc_set_log_level")
- use iso_c_binding
- integer(C_INT), value :: log_level
- end function PIOc_set_log_level
- end interface
- ierr = PIOc_set_log_level(log_level)
- end function set_log_level
-
- !>
- !! @defgroup PIO_strerror
- !<
- !>
- !! @ingroup PIO_strerror
- !! Returns a descriptive string for an error code.
- !!
- !! @param errcode the error code
- !! @retval a description of the error
- !<
- integer function strerror(errcode, errmsg) result(ierr)
- integer, intent(in) :: errcode
- character(len=*), intent(out) :: errmsg
- interface
- integer(C_INT) function PIOc_strerror(errcode, errmsg) &
- bind(C, name="PIOc_strerror")
- use iso_c_binding
- integer(C_INT), value :: errcode
- character(C_CHAR) :: errmsg(*)
- end function PIOc_strerror
- end interface
- errmsg = C_NULL_CHAR
- ierr = PIOc_strerror(errcode, errmsg)
- call replace_c_null(errmsg)
-
- end function strerror
-
!>
!! @public
!! @ingroup PIO_redef
!! @brief Wrapper for the C function \ref PIOc_redef .
!<
integer function redef_id(ncid) result(ierr)
- integer, intent(in) :: ncid
+ integer ,intent(in) :: ncid
interface
integer(C_INT) function PIOc_redef(ncid) &
- bind(C, name="PIOc_redef")
+ bind(C ,name="PIOc_redef")
use iso_c_binding
- integer(C_INT), value :: ncid
+ integer(C_INT) , value :: ncid
end function PIOc_redef
end interface
ierr = PIOc_redef(ncid)
@@ -1697,7 +1633,7 @@ end function set_chunk_cache
!>
!! @public
-!! @ingroup PIO_get_chunk_cache
+!! @ingroup PIO_set_chunk_cache
!! @brief Gets current settings for chunk cache (only relevant for netCDF4/HDF5 files.)
!<
integer function get_chunk_cache(iosysid, iotype, chunk_cache_size, chunk_cache_nelems, &
@@ -1727,7 +1663,7 @@ end function get_chunk_cache
!>
!! @public
-!! @ingroup PIO_set_var_chunk_cache
+!! @ingroup PIO_set_chunk_cache
!! @brief Changes chunk cache settings for a variable in a netCDF-4/HDF5 file.
!<
integer function set_var_chunk_cache_id(file, varid, chunk_cache_size, &
diff --git a/cime/externals/pio2/src/flib/pio_support.F90 b/cime/externals/pio2/src/flib/pio_support.F90
index f5ef8bde5c22..fe0fde0b59e0 100644
--- a/cime/externals/pio2/src/flib/pio_support.F90
+++ b/cime/externals/pio2/src/flib/pio_support.F90
@@ -28,7 +28,7 @@ module pio_support
character(len=*), parameter :: modName='pio_support'
contains
-!>
+!>
!! @public
!! @brief Remove null termination (C-style) from strings for Fortran.
!<
@@ -81,7 +81,7 @@ subroutine piodie (file,line, msg, ival1, msg2, ival2, msg3, ival3, mpirank)
character(len=*), parameter :: subName=modName//'::pio_die'
integer :: ierr, myrank=-1
-
+
if(present(mpirank)) myrank=mpirank
if (present(ival3)) then
@@ -108,9 +108,9 @@ subroutine piodie (file,line, msg, ival1, msg2, ival2, msg3, ival3, mpirank)
call xl__trbk()
#endif
- ! passing an argument of 1 to mpi_abort will lead to a STOPALL output
+ ! passing an argument of 1 to mpi_abort will lead to a STOPALL output
! error code of 257
- call mpi_abort (MPI_COMM_WORLD, 1, ierr)
+ call mpi_abort (MPI_COMM_WORLD, 1, ierr)
#ifdef CPRNAG
stop
@@ -129,7 +129,7 @@ end subroutine piodie
!=============================================
!>
!! @public
-!! @brief Check and prints an error message if an error occured in an MPI
+!! @brief Checks and prints an error message if an error occurred in an MPI
!! subroutine.
!! @param locmesg : Message to output
!! @param errcode : MPI error code
@@ -161,7 +161,7 @@ end subroutine CheckMPIreturn
!! @brief Fortran interface to write a mapping file
!! @param file : The file where the decomp map will be written.
!! @param gdims : The dimensions of the data array in memory.
-!! @param DOF : The multidimensional array of indexes that describes how
+!! @param DOF : The multidimensional array of indexes that describes how
!! data in memory are written to a file.
!! @param comm : The MPI comm index.
!! @param punit : Optional argument that is no longer used.
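
As a usage illustration for the interface documented above (the file name and decomposition values are invented, and pio_writedof is taken from the pio_support module shown in this diff):

program writedof_sketch
  use mpi
  use pio_support, only : pio_writedof
  use pio_kinds,   only : PIO_OFFSET_KIND
  implicit none
  integer :: ierr, myrank, i
  integer :: gdims(2)
  integer(PIO_OFFSET_KIND) :: dof(8)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)

  ! two tasks splitting a 4x4 global array into halves (illustrative values)
  gdims = (/ 4, 4 /)
  dof   = (/ (int(myrank*8 + i, PIO_OFFSET_KIND), i = 1, 8) /)

  call pio_writedof('decomp.dat', gdims, dof, MPI_COMM_WORLD)
  call MPI_Finalize(ierr)
end program writedof_sketch
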
@@ -188,7 +188,7 @@ subroutine pio_writedof (file, gdims, DOF, comm, punit)
integer,optional,intent(in) :: punit
integer :: err
integer :: ndims
-
+
interface
integer(c_int) function PIOc_writemap_from_f90(file, ndims, gdims, maplen, map, f90_comm) &
@@ -197,7 +197,7 @@ integer(c_int) function PIOc_writemap_from_f90(file, ndims, gdims, maplen, map,
character(C_CHAR), intent(in) :: file
integer(C_INT), value, intent(in) :: ndims
integer(C_INT), intent(in) :: gdims(*)
- integer(C_SIZE_T), value, intent(in) :: maplen
+ integer(C_SIZE_T), value, intent(in) :: maplen
integer(C_SIZE_T), intent(in) :: map(*)
integer(C_INT), value, intent(in) :: f90_comm
end function PIOc_writemap_from_f90
@@ -226,7 +226,7 @@ subroutine pio_readdof (file, ndims, gdims, DOF, comm, punit)
! Author: T Craig
!
! Change History
- !
+ !
!-----------------------------------------------------------------------
! $Id$
!-----------------------------------------------------------------------
@@ -247,7 +247,7 @@ subroutine pio_readdof (file, ndims, gdims, DOF, comm, punit)
type(C_PTR) :: tgdims, tmap
interface
integer(C_INT) function PIOc_readmap_from_f90(file, ndims, gdims, maplen, map, f90_comm) &
- bind(C,name="PIOc_readmap_from_f90")
+ bind(C,name="PIOc_readmap_from_f90")
use iso_c_binding
character(C_CHAR), intent(in) :: file
integer(C_INT), intent(out) :: ndims
diff --git a/cime/externals/pio2/src/flib/pio_types.F90 b/cime/externals/pio2/src/flib/pio_types.F90
index 1edd352e284e..c3482ffbb264 100644
--- a/cime/externals/pio2/src/flib/pio_types.F90
+++ b/cime/externals/pio2/src/flib/pio_types.F90
@@ -229,4 +229,17 @@ module pio_types
#endif
integer, public, parameter :: PIO_num_OST = 16
+ type, public :: PIO_rearr_comm_fc_opt_t
+ logical :: enable_hs ! Enable handshake?
+ logical :: enable_isend ! Enable isends?
+ integer :: max_pend_req ! Maximum pending requests
+ end type PIO_rearr_comm_fc_opt_t
+
+ type, public :: PIO_rearr_opt_t
+ integer :: comm_type
+ integer :: fcd ! Flow control direction
+ type(PIO_rearr_comm_fc_opt_t) :: comm_fc_opts_comp2io
+ type(PIO_rearr_comm_fc_opt_t) :: comm_fc_opts_io2comp
+ end type PIO_rearr_opt_t
+
end module pio_types
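
The two derived types added above carry the rearranger flow-control settings (handshaking, isends, and the pending-request cap in each direction). A minimal sketch of filling one in follows; the numeric values are placeholders, and the named constants actually meant for comm_type and fcd are defined elsewhere in pio_types and not shown in this hunk.

subroutine fill_rearr_opts(ropts)
  use pio_types, only : PIO_rearr_opt_t
  implicit none
  type(PIO_rearr_opt_t), intent(out) :: ropts

  ropts%comm_type = 0          ! placeholder; use the named constant for the desired comm type
  ropts%fcd       = 0          ! placeholder; flow-control direction constant
  ropts%comm_fc_opts_comp2io%enable_hs    = .true.   ! handshake on the comp->io path
  ropts%comm_fc_opts_comp2io%enable_isend = .false.
  ropts%comm_fc_opts_comp2io%max_pend_req = 64
  ropts%comm_fc_opts_io2comp = ropts%comm_fc_opts_comp2io  ! mirror the settings on the io->comp path
end subroutine fill_rearr_opts
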
diff --git a/cime/externals/pio2/src/flib/piodarray.F90.in b/cime/externals/pio2/src/flib/piodarray.F90.in
index 7c80e89df8ad..f39926f96d75 100644
--- a/cime/externals/pio2/src/flib/piodarray.F90.in
+++ b/cime/externals/pio2/src/flib/piodarray.F90.in
@@ -1,6 +1,6 @@
#define __PIO_FILE__ 'piodarray'
!>
-!! @file
+!! @file
!! @brief Read and write routines for decomposed data.
!<
module piodarray
@@ -16,8 +16,8 @@ module piodarray
private
public :: pio_read_darray, pio_write_darray, pio_set_buffer_size_limit
-
-!>
+
+!>
!! @defgroup PIO_write_darray PIO_write_darray
!! @brief The overloaded PIO_write_darray writes a distributed array to disk.
!<
@@ -30,7 +30,7 @@ module piodarray
end interface
-!>
+!>
!! @defgroup PIO_read_darray PIO_read_darray
!! @brief The overloaded PIO_read_darray function reads a distributed array from disk.
!<
@@ -90,9 +90,9 @@ end interface
contains
subroutine pio_set_buffer_size_limit(limit)
- integer(PIO_OFFSET_KIND), intent(in) :: limit
+ integer(PIO_OFFSET_KIND), intent(in) :: limit
integer(PIO_OFFSET_KIND) :: oldval
- interface
+ interface
integer(C_LONG_LONG) function PIOc_set_buffer_size_limit(limit) &
bind(C,name="PIOc_set_buffer_size_limit")
use iso_c_binding
@@ -103,7 +103,7 @@ contains
call piodie(__PIO_FILE__,__LINE__,' bad value to buffer_size_limit: ',int(limit))
end if
oldval = PIOc_set_buffer_size_limit(limit)
-
+
end subroutine pio_set_buffer_size_limit
! TYPE real,int,double
@@ -138,7 +138,7 @@ contains
carraylen = int(arraylen,C_SIZE_T)
cptr = C_LOC(array)
#ifdef TIMING
- call t_startf("PIO:write_darray_{TYPE}")
+ call t_startf("PIO:write_darray_{TYPE}")
#endif
if(present(fillval)) then
iostat = PIOc_write_darray(file%fh, varDesc%varid-1, iodesc%ioid, carraylen,cptr, C_LOC(fillval))
@@ -146,7 +146,7 @@ contains
iostat = PIOc_write_darray(file%fh, varDesc%varid-1, iodesc%ioid, carraylen, cptr, C_NULL_PTR)
endif
#ifdef TIMING
- call t_stopf("PIO:write_darray_{TYPE}")
+ call t_stopf("PIO:write_darray_{TYPE}")
#endif
end subroutine write_darray_1d_cinterface_{TYPE}
@@ -183,7 +183,7 @@ contains
type(C_PTR) :: cptr
integer :: i
carraylen = int(arraylen,C_SIZE_T)
-
+
cptr = C_LOC(array)
do i=1,nvars
varid(i) = vardesc(i)%varid-1
@@ -198,10 +198,10 @@ contains
end subroutine write_darray_multi_1d_cinterface_{TYPE}
! TYPE real,int,double
-!>
+!>
!! @public
!! @ingroup PIO_write_darray
-!! @brief Writes a 1D array of type {TYPE}.
+!! @brief Writes a 1D array of type {TYPE}.
!! @details
!! @param File \ref file_desc_t
!! @param varDesc \ref var_desc_t
@@ -209,7 +209,7 @@ contains
!! @param array : The data to be written
!! @param iostat : The status returned from this routine (see \ref PIO_seterrorhandling for details)
!! @param fillval : An optional fill value to fill holes in the data written
-!<
+!<
subroutine write_darray_multi_1d_{TYPE} (File,varDesc,ioDesc, array, iostat, fillval)
! !DESCRIPTION:
! Writes a block of TYPE to a netcdf file.
@@ -238,7 +238,7 @@ contains
integer :: nvars
nvars = size(vardesc)
-
+
call write_darray_multi_1d_cinterface_{TYPE} (file, varDesc, iodesc, nvars, size(array), array, iostat, fillval)
end subroutine write_darray_multi_1d_{TYPE}
@@ -269,17 +269,17 @@ contains
integer(i4), intent(out) :: iostat
character(len=*), parameter :: subName=modName//'::write_darray_{TYPE}'
-
+
call write_darray_1d_cinterface_{TYPE} (file, varDesc, iodesc, size(array), array, iostat, fillval)
end subroutine write_darray_1d_{TYPE}
! TYPE real,int,double
! DIMS 2,3,4,5,6,7
-!>
+!>
!! @public
!! @ingroup PIO_write_darray
-!! @brief Writes a {DIMS}D array of type {TYPE}.
+!! @brief Writes a {DIMS}D array of type {TYPE}.
!! @details
!! @param File @ref file_desc_t
!! @param varDesc @ref var_desc_t
@@ -287,7 +287,7 @@ contains
!! @param array : The data to be written
!! @param iostat : The status returned from this routine (see \ref PIO_seterrorhandling for details)
!! @param fillval : An optional fill value to fill holes in the data written
-!<
+!<
subroutine write_darray_{DIMS}d_{TYPE} (File,varDesc,ioDesc, array, iostat, fillval)
! !INPUT PARAMETERS:
@@ -323,7 +323,7 @@ contains
! cannot call transfer function with a 0 sized array
if(size(array)==0) then
call write_darray_1d_{TYPE} (File, varDesc, iodesc, dumbvar, iostat)
- else
+ else
call write_darray_1d_{TYPE} (File, varDesc, iodesc, transfer(array,transvar), iostat, fillval)
end if
#endif
@@ -331,7 +331,7 @@ contains
! TYPE real,int,double
! DIMS 1,2,3,4,5,6,7
-!>
+!>
!! @public
!! @ingroup PIO_read_darray
!! @brief Read distributed array of type {TYPE} from a netCDF variable of {DIMS} dimension(s).
@@ -386,7 +386,7 @@ contains
! !INPUT PARAMETERS:
integer, intent(in) :: ncid, varid, ioid
integer(C_SIZE_T), intent(in) :: alen
-
+
{VTYPE}, target :: array(*) ! array to be read
integer(i4), intent(out) :: iostat
diff --git a/cime/externals/pio2/src/flib/piolib_mod.F90 b/cime/externals/pio2/src/flib/piolib_mod.F90
index 7d6a6dc245a7..70497c53cf84 100644
--- a/cime/externals/pio2/src/flib/piolib_mod.F90
+++ b/cime/externals/pio2/src/flib/piolib_mod.F90
@@ -142,7 +142,7 @@ module piolib_mod
!! @defgroup PIO_initdecomp PIO_initdecomp
!! @brief PIO_initdecomp is an overloaded interface for describing the model's decomposition to pio.
!! @details initdecomp_1dof_bin_i8, initdecomp_1dof_nf_i4, initdecomp_2dof_bin_i4,
-!! and initdecomp_2dof_nf_i4 are all depreciated, but supported for backwards
+!! and initdecomp_2dof_nf_i4 are all deprecated, but supported for backwards
!! compatibility.
!<
interface PIO_initdecomp
diff --git a/cime/externals/pio2/src/flib/pionfatt_mod.F90.in b/cime/externals/pio2/src/flib/pionfatt_mod.F90.in
index ad9d8c663d18..4066b96c0736 100644
--- a/cime/externals/pio2/src/flib/pionfatt_mod.F90.in
+++ b/cime/externals/pio2/src/flib/pionfatt_mod.F90.in
@@ -1,6 +1,6 @@
#define __PIO_FILE__ "pionfatt_mod.F90"
!>
-!! @file
+!! @file
!! @brief NetCDF attribute interface to PIO
!<
module pionfatt_mod
@@ -37,14 +37,14 @@ module pionfatt_mod
end interface
!>
- !! @public
+ !! @public
!! @defgroup PIO_put_att PIO_put_att
- !! @brief Writes an netcdf attribute to a file
+ !! @brief Writes a netcdf attribute to a file
!<
!>
- !! @public
+ !! @public
!! @defgroup PIO_get_att PIO_get_att
- !! @brief Reads an netcdf attribute from a file
+ !! @brief Reads a netcdf attribute from a file
!<
private :: modName
@@ -121,7 +121,7 @@ module pionfatt_mod
integer(C_INT), intent(out) :: op
end function PIOc_get_att_int
end interface
-
+
interface
integer(C_INT) function PIOc_get_att_float (ncid, varid, name, op) &
bind(C,name="PIOc_get_att_float")
@@ -132,7 +132,7 @@ module pionfatt_mod
real(C_FLOAT), intent(out) :: op
end function PIOc_get_att_float
end interface
-
+
interface
integer(C_INT) function PIOc_get_att_double (ncid, varid, name, op) &
bind(C,name="PIOc_get_att_double")
@@ -143,19 +143,19 @@ module pionfatt_mod
real(C_DOUBLE), intent(out) :: op
end function PIOc_get_att_double
end interface
-
+
contains
!>
- !! @public
+ !! @public
!! @ingroup PIO_put_att
!! @brief Writes a netcdf attribute to a file
!! @details
!! @param File @copydoc file_desc_t
!! @param varid : The netcdf variable identifier
!! @param name : name of the attribute to add
- !! @param var : The value for the netcdf attribute
+ !! @param var : The value for the netcdf attribute
!! @retval ierr @copydoc error_return
!<
integer function put_att_desc_{TYPE} (File, vdesc, name, values) result(ierr)
@@ -208,7 +208,7 @@ contains
deallocate(cvar)
end function put_att_id_text
- integer function put_att_1d_id_text (ncid, varid, name, value) result(ierr)
+ integer function put_att_1d_id_text (ncid, varid, name, value) result(ierr)
use iso_c_binding
integer, intent(in) :: ncid
integer, intent(in) :: varid
@@ -220,8 +220,8 @@ contains
slen = len(value(1))
alen = size(value)
allocate(nvalue(slen*alen))
-
- do i=1,alen
+
+ do i=1,alen
j= len_trim(value(i))
do k=1,j
nvalue(k+(i-1)*slen) = value(i)(k:k)
@@ -244,9 +244,9 @@ contains
character(len=*), intent(in) :: values(arrlen)
integer :: vallen
-
+
ierr = PIOc_put_att_text (ncid,varid-1,trim(name)//C_NULL_CHAR, int(arrlen,C_SIZE_T),values(1))
-
+
end function put_att_1d_id_text_internal
@@ -267,14 +267,14 @@ contains
!pl The next line is needed by genf90.pl, do not remove it.
! TYPE real,double,int
!>
- !! @public
+ !! @public
!! @ingroup PIO_put_att
!! @brief Writes a netcdf attribute to a file
!! @details
!! @param File @copydoc file_desc_t
!! @param varid : The netcdf variable identifier
!! @param name : name of the attribute to add
- !! @param values : The value for the netcdf attribute
+ !! @param values : The value for the netcdf attribute
!! @retval ierr @copydoc error_return
!<
integer function put_att_1d_id_{TYPE} (ncid, varid, name, values) result(ierr)
@@ -305,14 +305,14 @@ contains
! TYPE real,int,double
!>
- !! @public
+ !! @public
!! @ingroup PIO_put_att
!! @brief Writes a netcdf attribute to a file
!! @details
!! @param File @copydoc file_desc_t
!! @param varDesc @copydoc var_desc_t
!! @param name : name of the attribute to add
- !! @param var : The value for the netcdf attribute
+ !! @param var : The value for the netcdf attribute
!! @retval ierr @copydoc error_return
!<
integer function put_att_1d_desc_{TYPE} (File,varDesc,name,values) result(ierr)
@@ -339,14 +339,14 @@ contains
!>
- !! @public
+ !! @public
!! @ingroup PIO_get_att
!! @brief Reads a netcdf attribute from a file
!! @details
!! @param File @copydoc file_desc_t
!! @param varDesc @copydoc var_desc_t
!! @param name : name of the attribute to get
- !! @param values : The value for the netcdf attribute
+ !! @param values : The value for the netcdf attribute
!! @retval ierr @copydoc error_return
!<
integer function get_att_desc_{TYPE} (File,varDesc,name,values) result(ierr)
@@ -361,14 +361,14 @@ contains
end function get_att_desc_{TYPE}
!>
- !! @public
+ !! @public
!! @ingroup PIO_get_att
!! @brief Reads a netcdf attribute from a file
!! @details
!! @param File @copydoc file_desc_t
!! @param varDesc @copydoc var_desc_t
!! @param name : name of the attribute to get
- !! @param values : The value for the netcdf attribute
+ !! @param values : The value for the netcdf attribute
!! @retval ierr @copydoc error_return
!<
! TYPE int,real,double
@@ -386,14 +386,14 @@ contains
end function get_att_desc_1d_{TYPE}
!>
- !! @public
+ !! @public
!! @ingroup PIO_get_att
!! @brief Reads a netcdf attribute from a file
!! @details
!! @param File @copydoc file_desc_t
!! @param varid : The netcdf variable identifier
!! @param name : name of the attribute to get
- !! @param values : The value for the netcdf attribute
+ !! @param values : The value for the netcdf attribute
!! @retval ierr @copydoc error_return
!<
! TYPE int,real,double
@@ -410,7 +410,7 @@ contains
end function get_att_id_{TYPE}
-
+
integer function get_att_{TYPE} (File,varid,name,values) result(ierr)
type (File_desc_t), intent(in) , target :: File
integer(i4), intent(in) :: varid
@@ -425,14 +425,14 @@ contains
! TYPE real,int,double
!>
- !! @public
+ !! @public
!! @ingroup PIO_get_att
!! @brief Reads a netcdf attribute from a file
!! @details
!! @param File @copydoc file_desc_t
!! @param varid : The netcdf variable identifier
!! @param name : name of the attribute to get
- !! @param values : The value for the netcdf attribute
+ !! @param values : The value for the netcdf attribute
!! @retval ierr @copydoc error_return
!<
integer function get_att_1d_{TYPE} (File,varid,name,values) result(ierr)
diff --git a/cime/externals/pio2/src/flib/pionfget_mod.F90.in b/cime/externals/pio2/src/flib/pionfget_mod.F90.in
index ebd335dde2ed..3cbca7e6bed9 100644
--- a/cime/externals/pio2/src/flib/pionfget_mod.F90.in
+++ b/cime/externals/pio2/src/flib/pionfget_mod.F90.in
@@ -1,6 +1,6 @@
#define __PIO_FILE__ "pionfget_mod.F90"
!>
-!! @file
+!! @file
!! @brief Read Routines for non-decomposed NetCDF data.
!<
module pionfget_mod
@@ -17,8 +17,8 @@ module pionfget_mod
!! @defgroup PIO_get_var PIO_get_var
!! @brief Reads non-decomposed data from a NetCDF file
!! @details The get_var interface is provided as a simplified interface to
-!! read variables from a NetCDF format file. The variable is read on the
-!! root IO task and broadcast in its entirety to all tasks.
+!! read variables from a NetCDF format file. The variable is read on the
+!! root IO task and broadcast in its entirety to all tasks.
!<
public :: get_var
interface get_var
@@ -284,7 +284,7 @@ CONTAINS
!! the variable's dimensions. Hence, if the variable is a record
!! variable, the first element of count corresponds to a count of the
!! number of records to read.
-!! Note: setting any element of the count array to zero causes the function to exit without error, and without doing anything.
+!! Note: setting any element of the count array to zero causes the function to exit without error, and without doing anything.
!! @param ival : The value for the netcdf metadata
!! @retval ierr @ref error_return
!<
@@ -332,7 +332,7 @@ CONTAINS
!! the variable's dimensions. Hence, if the variable is a record
!! variable, the first element of count corresponds to a count of the
!! number of records to read.
-!! Note: setting any element of the count array to zero causes the function to exit without error, and without doing anything.
+!! Note: setting any element of the count array to zero causes the function to exit without error, and without doing anything.
!! @param ival : The value for the netcdf metadata
!! @retval ierr @ref error_return
!<
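
A hedged fragment illustrating the count semantics described above, reading a single record of a (lon, lat, time) variable; file setup and error handling are omitted, and the generic pio_get_var name follows the interfaces in this module.

! file is an open file_desc_t and varid identifies a (lon, lat, time) variable;
! the sizes and record index below are illustrative.
integer, parameter :: nlon = 144, nlat = 96
integer :: ierr, rec, start(3), count(3)
real    :: buf(nlon, nlat, 1)

rec   = 12
start = (/ 1, 1, rec /)        ! begin at record rec
count = (/ nlon, nlat, 1 /)    ! one record; a zero anywhere in count makes the call a no-op
ierr  = pio_get_var(file, varid, start, count, buf)
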
@@ -379,11 +379,11 @@ CONTAINS
integer, intent(in) :: varid
{VTYPE}, intent(out) :: ival
{VTYPE} :: aival(1)
-
+
ierr = PIOc_get_var_{NCTYPE} (File%fh, varid-1, aival)
ival = aival(1)
-
+
end function Get_var_0d_{TYPE}
! DIMS 1,2,3,4,5
@@ -395,7 +395,7 @@ CONTAINS
ierr = get_var_text_internal(File%fh, varid, size(ival), ival)
end function get_var_{DIMS}d_text
-
+
integer function get_var_text_internal (ncid,varid, nstrs, ival) result(ierr)
integer, intent(in) :: ncid
integer, intent(in) :: varid
diff --git a/cime/externals/pio2/src/flib/pionfput_mod.F90.in b/cime/externals/pio2/src/flib/pionfput_mod.F90.in
index d4b54220a941..fdc3e6241972 100644
--- a/cime/externals/pio2/src/flib/pionfput_mod.F90.in
+++ b/cime/externals/pio2/src/flib/pionfput_mod.F90.in
@@ -1,6 +1,6 @@
#define __PIO_FILE__ "pionfput_mod.F90"
!>
-!! @file
+!! @file
!! @brief Write routines for non-decomposed NetCDF data.
!<
module pionfput_mod
@@ -18,17 +18,17 @@ module pionfput_mod
!! @defgroup PIO_put_var PIO_put_var
!! @brief Writes data to a netCDF file.
!! @details The put_var interface is provided as a simplified interface to
-!! write variables to a netcdf format file.
-!! @warning Although this is a collective call the variable is written from the
+!! write variables to a netcdf format file.
+!! @warning Although this is a collective call, the variable is written from the
!! root IO task; no consistency check is made with data passed on other tasks.
-!!
+!!
!<
public :: put_var
interface put_var
! DIMS 0,1,2,3,4,5
module procedure put_var_{DIMS}d_{TYPE}, put_var_vdesc_{DIMS}d_{TYPE}
! DIMS 1,2,3,4,5
- module procedure put_vara_{DIMS}d_{TYPE}
+ module procedure put_vara_{DIMS}d_{TYPE}
! DIMS 1,2,3,4,5
module procedure put_vara_vdesc_{DIMS}d_{TYPE}
module procedure put_var1_{TYPE}, put_var1_vdesc_{TYPE}
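
Given the warning above, a typical call makes sure every task holds the same value before invoking the generic put_var, since only the root IO task's copy reaches the file. A small hedged fragment (file and varid setup omitted):

! All tasks must pass the same value, because only the root IO task's copy is written.
integer :: ierr
real    :: run_length_days

run_length_days = 365.0                            ! identical on every task by construction
ierr = pio_put_var(file, varid, run_length_days)
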
@@ -99,7 +99,7 @@ contains
!! @details
!! @param File @copydoc file_desc_t
!! @param varid : The netcdf variable identifier
-!! @param index :
+!! @param index :
!! @param ival : The value for the netcdf metadata
!! @retval ierr @copydoc error_return
!<
@@ -111,7 +111,7 @@ contains
integer :: i
integer, allocatable :: count(:)
integer :: ndims
-
+
ndims = size(index)
allocate(count(ndims))
count = 1
@@ -134,7 +134,7 @@ contains
!! @details
!! @param File @copydoc file_desc_t
!! @param varid : The netcdf variable identifier
-!! @param index :
+!! @param index :
!! @param ival : The value for the netcdf metadata
!! @retval ierr @copydoc error_return
!<
@@ -157,18 +157,18 @@ contains
#ifdef TIMING
call t_startf("PIO:put_var1_{TYPE}")
-#endif
+#endif
clen = size(index)
allocate(cindex(clen))
do i=1,clen
cindex(i) = index(clen-i+1)-1
enddo
-
+
ierr = PIOc_put_var1_{NCTYPE} (file%fh, varid-1, cindex, ival)
deallocate(cindex)
#ifdef TIMING
call t_stopf("PIO:put_var1_{TYPE}")
-#endif
+#endif
end function put_var1_{TYPE}
!>
@@ -178,7 +178,7 @@ contains
!! @details
!! @param File @copydoc file_desc_t
!! @param vardesc @copydoc var_desc_t
-!! @param start :
+!! @param start :
!! @param ival : The value for the netcdf metadata
!! @retval ierr @copydoc error_return
!<
@@ -220,7 +220,7 @@ contains
do i=1,len_trim(ival)
cval(i) = ival(i:i)
end do
-
+
ierr = PIOc_put_var_text(file%fh, varid-1, cval)
deallocate(cval)
@@ -276,7 +276,7 @@ contains
integer, intent(in) :: ncid
integer, intent(in) :: varid
{VTYPE}, intent(in) :: ival(*)
-
+
interface
integer(C_INT) function PIOc_put_var_{NCTYPE}(ncid, varid, op) &
bind(C,name="PIOc_put_var_{NCTYPE}")
@@ -338,13 +338,13 @@ contains
ierr=PIO_NOERR
#ifdef TIMING
call t_startf("PIO:put_var_0d_{TYPE}")
-#endif
+#endif
ierr = put_var_internal_{TYPE} (File%fh, varid, (/ival/))
#ifdef TIMING
call t_stopf("PIO:put_var_0d_{TYPE}")
-#endif
+#endif
end function put_var_0d_{TYPE}
@@ -419,7 +419,7 @@ contains
integer(C_SIZE_T), allocatable :: cstart(:), ccount(:)
integer :: i
integer :: ndims
-
+
do i=1,size(count)
if(count(i)<=0) then
ndims=i-1
@@ -528,7 +528,7 @@ contains
enddo
#endif
-
+
end subroutine Fstring2Cstring_{DIMS}d
@@ -554,13 +554,13 @@ contains
{VTYPE}, intent(in) :: ival{DIMSTR}
#ifdef TIMING
call t_startf("PIO:put_vara_{DIMS}d_{TYPE}")
-#endif
+#endif
ierr = put_vara_internal_{TYPE} (File%fh, varid, start, count, ival)
#ifdef TIMING
call t_stopf("PIO:put_vara_{DIMS}d_{TYPE}")
-#endif
+#endif
end function put_vara_{DIMS}d_{TYPE}
! DIMS 1,2,3,4,5
@@ -571,8 +571,8 @@ contains
!! @details
!! @param File @copydoc file_desc_t
!! @param vardesc @copydoc var_desc_t
-!! @param start :
-!! @param count :
+!! @param start :
+!! @param count :
!! @param ival : The value for the netcdf metadata
!! @retval ierr @copydoc error_return
!<
diff --git a/cime/externals/pio2/src/gptl/CMakeLists.txt b/cime/externals/pio2/src/gptl/CMakeLists.txt
index b1f9c5bc11cc..02301f581838 100644
--- a/cime/externals/pio2/src/gptl/CMakeLists.txt
+++ b/cime/externals/pio2/src/gptl/CMakeLists.txt
@@ -90,7 +90,7 @@ if (ENABLE_LIBRT)
PUBLIC ${LIBRT_INCLUDE_DIRECTORIES})
target_link_libraries (gptl
PUBLIC ${LIBRT_LIBRARIES})
- endif ()
+ endif ()
endif ()
#===== MPI =====
@@ -116,7 +116,7 @@ else ()
PUBLIC HAVE_MPI)
endif ()
endif ()
-
+
# Check MPI library for Comm_f2c function
set (CMAKE_REQUIRED_LIBRARIES ${MPI_C_LIBRARIES})
check_function_exists (MPI_Comm_f2c MPI_HAS_COMM_F2C)
@@ -141,7 +141,7 @@ else ()
target_compile_definitions (gptl
PUBLIC NO_MPIMOD)
endif ()
-
+
#===== GetTimeOfDay =====
if (NOT DEFINED SYSTEM_HAS_GETTIMEOFDAY)
get_target_property (GPTL_LINK_LIBRARIES gptl LINK_LIBRARIES)
diff --git a/cime/externals/pio2/src/gptl/ChangeLog b/cime/externals/pio2/src/gptl/ChangeLog
index 8bbbbcfe4fc7..3d11911e6b60 100644
--- a/cime/externals/pio2/src/gptl/ChangeLog
+++ b/cime/externals/pio2/src/gptl/ChangeLog
@@ -2,15 +2,15 @@ timing_120921: Add code for cmake build, should not have any affect otherwise
timing_120803: Bug fix in setting timing_detail_limit default.
[Patrick Worley]
timing_120731: Correction in Makefile for serial build [Jim Edwards]
-timing_120728: Replace process subset optional parameter in t_prf with
- outpe_thispe optional parameter. Change def_perf_outpe_num to 0.
+timing_120728: Replace process subset optional parameter in t_prf with
+ outpe_thispe optional parameter. Change def_perf_outpe_num to 0.
[Patrick Worley]
timing_120717: Retain timestamp on cp in Makefile [Jim Edwards]
timing_120710: Correct issue in Makefile [Jim Edwards]
timing_120709: Change for BGP to measure on compute nodes rather than IO nodes only,
minor change in Makefile so that gptl can build separately from csm_share
in cesm [Jim Edwards]
-timing_120512: Bug fix in global statistics logic for when a thread has no events
+timing_120512: Bug fix in global statistics logic for when a thread has no events
to contribute to the merge (mods to gptl.c)
[Patrick Worley]
timing_120419: Minor changes for mpi-serial compile (jedwards)
@@ -18,7 +18,7 @@ timing_120408: Make HAVE_COMM_F2C default to true. (jedwards)
timing_120110: Update to GPTL 4.1 source (mods to gptl.c and GPTLprint_memusage)
[Jim Rosinski (GPTL 4.1), Patrick Worley]
timing_120109: Bug fix (adding shr_kind_i8 to shr_kind_mod list)
-timing_111205: Update to gptl 4.0 (introducing CESM customizations);
+timing_111205: Update to gptl 4.0 (introducing CESM customizations);
support for handles in t_startf/t_stopf;
support for restricting output to explicitly named process subsets
[Jim Rosinski (gptl 4.0), Patrick Worley]
@@ -29,7 +29,7 @@ timing_101210: Fix interface to cesm build system, add workaround for xlf bug
timing_101202: updated get_memusage and print_memusage from GPTL version 3.7; adds
improved support for MacOS and SLASHPROC
[Jim Rosinski, Chuck Bardeen (integrated by P. Worley)]
-timing_091021: update to GPTL version 3.5; rewrite of GPTLpr_summary: much faster, merging
+timing_091021: update to GPTL version 3.5; rewrite of GPTLpr_summary: much faster, merging
events from all processes and all threads (not just process 0/thread 0);
miscellaneous fixes
[Jim Rosinski (gptl 3.5), Joseph Singh, Patrick Worley]
@@ -39,7 +39,7 @@ timing_090929: added explicit support for the GPTL-native token HAVE_MPI (indica
timing_081221: restore default assumption that gettimeofday available
timing_081028: bug fix in include order in gptl_papi.c
timing_081026: change in output format to make postprocessing simpler
-timing_081024: support for up to one million processes and writing timing files to
+timing_081024: support for up to one million processes and writing timing files to
subdirectories
timing_081017: updated to gptl version 3_4_2. Changed some defaults.
[Jim Rosinski, Patrick Worley]
@@ -57,8 +57,8 @@ timing_071023: updated to gptl version 2.16, added support for output of global
statistics; removed dependencies on shr and CAM routines; renamed
gptlutil.c to GPTLutil.c
[Patrick Worley, Jim Rosinski]
-timing_071019: modified namelist logic to abort if try to set unknown namelist parameters;
- changed default number of reporting processes to 1;
+timing_071019: modified namelist logic to abort if try to set unknown namelist parameters;
+ changed default number of reporting processes to 1;
reversed meaning and changed names of CPP tokens to NO_C99_INLINE and NO_VPRINTF
[Patrick Worley]
timing_071010: modified gptl.c to remove the 'inline' specification unless the
@@ -67,75 +67,75 @@ timing_071010: modified gptl.c to remove the 'inline' specification unless the
timing_070810: added ChangeLog
updated to latest version of GPTL (from Jim Rosinski)
modified perf_mod.F90:
- - added perf_outpe_num and perf_outpe_stride to perf_inparm
+ - added perf_outpe_num and perf_outpe_stride to perf_inparm
namelist to control which processes output timing data
- added perf_papi_enable to perf_inparm namelist to enable
- PAPI counters
+ PAPI counters
- added papi_inparm namelist and papi_ctr1,2,3,4 namelist
parameters to specify PAPI counters
[Patrick Worley, Jim Rosinski]
-timing_070525: bug fix in gptl.c
+timing_070525: bug fix in gptl.c
- uninitialized pointer, testing for null pointer
before traversing
[Patrick Worley]
timing_070328: modified perf_mod.F90
- deleted HIDE_MPI cpp token
[Erik Kluzek]
-timing_070327: bug fixes in gptl.c
- - testing for null pters before traversing
+timing_070327: bug fixes in gptl.c
+ - testing for null pointers before traversing
links; added missing type declaration to GPTLallocate for sum
- bug fixes in perf_mod.F90
- - fixed OMP-related logic, modified settings reporting,
+ bug fixes in perf_mod.F90
+ - fixed OMP-related logic, modified settings reporting,
modified to work when namelist input is
missing; moved timer depth logic back into gptl.c
[Patrick Worley]
-timing_070308: added perf_mod.F90
- - defines all t_xxx entry points - calling gptlxxx directly
+timing_070308: added perf_mod.F90
+ - defines all t_xxx entry points - calling gptlxxx directly
and removing all external gptlxxx dependencies,
added detail option as an alternative way to disable
event timing, added runtime selection of timing_disable,
perf_timer, timer_depth_limit, timing_detail_limit,
timing_barrier, perf_single_file via namelist parameters
- modified f_wrappers.c
- - replaced all t_xxx entry points with gptlxxx entry points,
+ modified f_wrappers.c
+ - replaced all t_xxx entry points with gptlxxx entry points,
added new gptlxxx entry points, deleted _fcd support
- modified gptl.c
+ modified gptl.c
- deleted DISABLE_TIMERS cpp token, modified GPTLpr call
and logic to move some of support for concatenating timing
output into a single file to perf_mod.F90
- modified gptl.h
- - exposed gptlxxx entry points and to add support for choice
+ modified gptl.h
+ - exposed gptlxxx entry points and to add support for choice
of GPTL timer
modified gptl.inc
- removed t_xxx entry points and expose gptlxxx entry points
[Patrick Worley]
-timing_061207: modified gptl.c
- - improved event output ordering
+timing_061207: modified gptl.c
+ - improved event output ordering
[Jim Edwards]
-timing_061124: modified gptl.c
+timing_061124: modified gptl.c
- modified GPTLpr to add option to concatenate
all timing data in a single output file, added GPTL_enable
- and GPTL_disable as runtime control of event timing,
+ and GPTL_disable as runtime control of event timing,
process 0-only reporting of timing options - unless DEBUG
cpp token defined
- modified gptl.h
+ modified gptl.h
- redefined GPTLpr parameters
- modified f_wrappers.c
- - added t_enablef and t_disablef to call GPTL_enable and
+ modified f_wrappers.c
+ - added t_enablef and t_disablef to call GPTL_enable and
GPTL_disable, added t_pr_onef, added string.h include
- bug fix in f_wrappers.c
+ bug fix in f_wrappers.c
- changed character string size declaration from int to size_t
- bug fix in gptl_papi.c
+ bug fix in gptl_papi.c
- modified error message - from Jim Edwards
modified private.h
- increased maximum event name length
[Patrick Worley]
-timing_061028: modified f_wrappers.c
+timing_061028: modified f_wrappers.c
- deleted dependency on cfort.h
[Patrick Worley]
-timing_060524: modified f_wrappers.c
- - added support for CRAY cpp token and fixed routine
+timing_060524: modified f_wrappers.c
+ - added support for CRAY cpp token and fixed routine
type declarations
[Patrick Worley]
-timing_051212: original subversion version
+timing_051212: original subversion version
- see CAM ChangeLog for earlier history
diff --git a/cime/externals/pio2/src/gptl/GPTLget_memusage.c b/cime/externals/pio2/src/gptl/GPTLget_memusage.c
index 4b0d138b2b6e..4ccdef8b2a3c 100644
--- a/cime/externals/pio2/src/gptl/GPTLget_memusage.c
+++ b/cime/externals/pio2/src/gptl/GPTLget_memusage.c
@@ -4,7 +4,7 @@
** Author: Jim Rosinski
** Credit to Chuck Bardeen for MACOS section (__APPLE__ ifdef)
**
-** get_memusage:
+** get_memusage:
**
** Designed to be called from Fortran, returns information about memory
** usage in each of 5 input int* args. On Linux read from the /proc
@@ -63,7 +63,7 @@ int GPTLget_memusage (int *size, int *rss, int *share, int *text, int *datastack
long long total;
int node_config;
-
+
/* memory available */
Kernel_GetPersonality(&pers, sizeof(pers));
total = BGP_Personality_DDRSizeMB(&pers);
@@ -116,7 +116,7 @@ int GPTLget_memusage (int *size, int *rss, int *share, int *text, int *datastack
** arguments, close the file and return.
*/
- ret = fscanf (fd, "%d %d %d %d %d %d %d",
+ ret = fscanf (fd, "%d %d %d %d %d %d %d",
size, rss, share, text, datastack, &dum, &dum);
ret = fclose (fd);
return 0;
@@ -124,9 +124,9 @@ int GPTLget_memusage (int *size, int *rss, int *share, int *text, int *datastack
#elif (defined __APPLE__)
FILE *fd;
- char cmd[60];
+ char cmd[60];
int pid = (int) getpid ();
-
+
sprintf (cmd, "ps -o vsz -o rss -o tsiz -p %d | grep -v RSS", pid);
fd = popen (cmd, "r");
@@ -145,7 +145,7 @@ int GPTLget_memusage (int *size, int *rss, int *share, int *text, int *datastack
if (getrusage (RUSAGE_SELF, &usage) < 0)
return -1;
-
+
*size = -1;
*rss = usage.ru_maxrss;
*share = -1;
diff --git a/cime/externals/pio2/src/gptl/GPTLprint_memusage.c b/cime/externals/pio2/src/gptl/GPTLprint_memusage.c
index a185d61100f4..5ab873dccb46 100644
--- a/cime/externals/pio2/src/gptl/GPTLprint_memusage.c
+++ b/cime/externals/pio2/src/gptl/GPTLprint_memusage.c
@@ -30,13 +30,13 @@ int GPTLprint_memusage (const char *str)
static const int nbytes = 1024*1024*10; /* allocate 10 MB */
static double blockstomb; /* convert blocks to MB */
void *space; /* allocated space */
-
+
if (GPTLget_memusage (&size, &rss, &share, &text, &datastack) < 0)
return -1;
#if (defined HAVE_SLASHPROC || defined __APPLE__)
/*
- ** Determine size in bytes of memory usage info presented by the OS. Method: allocate a
+ ** Determine size in bytes of memory usage info presented by the OS. Method: allocate a
** known amount of memory and see how much bigger the process becomes.
*/
@@ -47,7 +47,7 @@ int GPTLprint_memusage (const char *str)
/*
** Estimate bytes per block, then refine to nearest power of 2.
** The assumption is that the OS presents memory usage info in
- ** units that are a power of 2.
+ ** units that are a power of 2.
*/
bytesperblock = (int) ((nbytes / (double) (size2 - size)) + 0.5);
bytesperblock = nearest_powerof2 (bytesperblock);
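
As a worked example of this estimate (numbers illustrative): if the 10 MB probe allocation grows the reported size by 2560 blocks, the first guess is 10*1024*1024 / 2560 = 4096 bytes per block, already a power of 2 and kept as-is; a noisier guess of 5000 would be snapped to 4096 by nearest_powerof2, since 4096 is closer to 5000 than 8192 is.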
@@ -57,19 +57,19 @@ int GPTLprint_memusage (const char *str)
}
free (space);
}
-
+
if (bytesperblock > 0)
- printf ("%s size=%.1f MB rss=%.1f MB share=%.1f MB text=%.1f MB datastack=%.1f MB\n",
- str, size*blockstomb, rss*blockstomb, share*blockstomb,
+ printf ("%s size=%.1f MB rss=%.1f MB share=%.1f MB text=%.1f MB datastack=%.1f MB\n",
+ str, size*blockstomb, rss*blockstomb, share*blockstomb,
text*blockstomb, datastack*blockstomb);
else
- printf ("%s size=%d rss=%d share=%d text=%d datastack=%d\n",
+ printf ("%s size=%d rss=%d share=%d text=%d datastack=%d\n",
str, size, rss, share, text, datastack);
#else
/*
- ** Use max rss as returned by getrusage. If someone knows how to
+ ** Use max rss as returned by getrusage. If someone knows how to
** get the process size under AIX please tell me.
*/
@@ -85,7 +85,7 @@ int GPTLprint_memusage (const char *str)
}
/*
-** nearest_powerof2:
+** nearest_powerof2:
** Determine nearest integer which is a power of 2.
** Note: algorithm can't use anything that requires -lm because this is a library,
** and we don't want to burden the user with having to add extra libraries to the
@@ -112,7 +112,7 @@ static int nearest_powerof2 (const int val)
delta1 = val - lower;
delta2 = higher - val;
-
+
if (delta1 < delta2)
return lower;
else
diff --git a/cime/externals/pio2/src/gptl/GPTLutil.c b/cime/externals/pio2/src/gptl/GPTLutil.c
index f882834d2a13..b1c7cf80df48 100644
--- a/cime/externals/pio2/src/gptl/GPTLutil.c
+++ b/cime/externals/pio2/src/gptl/GPTLutil.c
@@ -25,10 +25,10 @@ static int max_error = 500; /* max number of error print msgs */
int GPTLerror (const char *fmt, ...)
{
va_list args;
-
+
va_start (args, fmt);
static int num_error = 0;
-
+
if (fmt != NULL && num_error < max_error) {
#ifndef NO_VPRINTF
(void) vfprintf (stderr, fmt, args);
@@ -39,10 +39,10 @@ int GPTLerror (const char *fmt, ...)
(void) fprintf (stderr, "Truncating further error print now after %d msgs",
num_error);
++num_error;
- }
-
+ }
+
va_end (args);
-
+
if (abort_on_error)
exit (-1);
diff --git a/cime/externals/pio2/src/gptl/README b/cime/externals/pio2/src/gptl/README
index f8f3f7f7a09e..2f0991da2188 100644
--- a/cime/externals/pio2/src/gptl/README
+++ b/cime/externals/pio2/src/gptl/README
@@ -18,7 +18,7 @@ Of course these events can only be enabled if the PAPI counters they require
are available on the target architecture.
-Using GPTL
+Using GPTL
----------
C codes making GPTL library calls should #include <gptl.h>. Fortran codes
@@ -63,7 +63,7 @@ GPTLfinalize() can be called to clean up the GPTL environment. All space
malloc'ed by the GPTL library will be freed by this call.
-Example
+Example
-------
From "man GPTLstart", a simple example calling sequence to time a couple of
@@ -86,7 +86,7 @@ do_work(); /* do some work */
(void) GPTLpr (mympitaskid); /* print the results to timing. */
-Auto-instrumentation
+Auto-instrumentation
--------------------
If the regions to be timed are defined by function entry and exit points, and
@@ -128,7 +128,7 @@ Running hex2name.pl converts the function addresses back to human-readable
function names. It uses the UNIX "nm" utility to do this.
-Multi-processor instrumented codes
+Multi-processor instrumented codes
----------------------------------
For instrumented codes which make use of threading and/or MPI, a
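
For completeness, the same initialize/start/stop/print sequence from Fortran, assuming the gptl.inc interface exposed through f_wrappers.c provides integer functions gptlinitialize, gptlstart, gptlstop, gptlpr and gptlfinalize (a sketch, not taken verbatim from the manual):

program gptl_sketch
  implicit none
  include 'gptl.inc'              ! declares the integer gptl* entry points
  integer :: ierr, i, mympitaskid
  real    :: x

  mympitaskid = 0                 ! or the MPI rank when running under MPI
  ierr = gptlinitialize ()
  ierr = gptlstart ('loop')
  x = 0.0
  do i = 1, 1000000               ! stand-in for real work
     x = x + sqrt(real(i))
  end do
  ierr = gptlstop ('loop')
  ierr = gptlpr (mympitaskid)     ! writes the results to timing.<taskid>
  ierr = gptlfinalize ()
end program gptl_sketch
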
diff --git a/cime/externals/pio2/src/gptl/f_wrappers.c b/cime/externals/pio2/src/gptl/f_wrappers.c
index b1da29ec4eb2..02f4b7567803 100644
--- a/cime/externals/pio2/src/gptl/f_wrappers.c
+++ b/cime/externals/pio2/src/gptl/f_wrappers.c
@@ -2,7 +2,7 @@
** $Id: f_wrappers.c,v 1.56 2010-12-29 18:46:42 rosinski Exp $
**
** Author: Jim Rosinski
-**
+**
** Fortran wrappers for timing library routines
*/
@@ -175,12 +175,12 @@ int gptlsetoption (int *option, int *val);
int gptlenable (void);
int gptldisable (void);
int gptlsetutr (int *option);
-int gptlquery (const char *name, int *t, int *count, int *onflg, double *wallclock,
- double *usr, double *sys, long long *papicounters_out, int *maxcounters,
+int gptlquery (const char *name, int *t, int *count, int *onflg, double *wallclock,
+ double *usr, double *sys, long long *papicounters_out, int *maxcounters,
int nc);
int gptlquerycounters (const char *name, int *t, long long *papicounters_out, int nc);
int gptlget_wallclock (const char *name, int *t, double *value, int nc);
-int gptlget_eventvalue (const char *timername, const char *eventname, int *t, double *value,
+int gptlget_eventvalue (const char *timername, const char *eventname, int *t, double *value,
int nc1, int nc2);
int gptlget_nregions (int *t, int *nregions);
int gptlget_regionname (int *t, int *region, char *name, int nc);
@@ -258,7 +258,7 @@ int gptlpr_summary (int *fcomm)
#endif
#else
int ccomm = 0;
-#endif
+#endif
return GPTLpr_summary (ccomm);
}
@@ -278,7 +278,7 @@ int gptlpr_summary_file (int *fcomm, char *file, int nc1)
#endif
#else
int ccomm = 0;
-#endif
+#endif
if ( ! (locfile = (char *) malloc (nc1+1)))
return GPTLerror ("gptlpr_summary_file: malloc error\n");
@@ -304,7 +304,7 @@ int gptlbarrier (int *fcomm, char *name, int nc1)
#endif
#else
int ccomm = 0;
-#endif
+#endif
numchars = MIN (nc1, MAX_CHARS);
strncpy (cname, name, numchars);
@@ -394,8 +394,8 @@ int gptlsetutr (int *option)
return GPTLsetutr (*option);
}
-int gptlquery (const char *name, int *t, int *count, int *onflg, double *wallclock,
- double *usr, double *sys, long long *papicounters_out, int *maxcounters,
+int gptlquery (const char *name, int *t, int *count, int *onflg, double *wallclock,
+ double *usr, double *sys, long long *papicounters_out, int *maxcounters,
int nc)
{
char cname[MAX_CHARS+1];
@@ -430,7 +430,7 @@ int gptlget_wallclock (const char *name, int *t, double *value, int nc)
return GPTLget_wallclock (cname, *t, value);
}
-int gptlget_eventvalue (const char *timername, const char *eventname, int *t, double *value,
+int gptlget_eventvalue (const char *timername, const char *eventname, int *t, double *value,
int nc1, int nc2)
{
char ctimername[MAX_CHARS+1];
diff --git a/cime/externals/pio2/src/gptl/gptl.c b/cime/externals/pio2/src/gptl/gptl.c
index 19c0ff7fa6a0..6346bf1b9993 100644
--- a/cime/externals/pio2/src/gptl/gptl.c
+++ b/cime/externals/pio2/src/gptl/gptl.c
@@ -17,7 +17,7 @@
#include
#ifndef HAVE_C99_INLINE
-#define inline
+#define inline
#endif
#ifdef HAVE_PAPI
@@ -134,7 +134,7 @@ static char **timerlist; /* list of all timers */
typedef struct {
int val; /* depth in calling tree */
int padding[31]; /* padding is to mitigate false cache sharing */
-} Nofalse;
+} Nofalse;
static Timer ***callstack; /* call stack */
static Nofalse *stackidx; /* index into callstack: */
@@ -260,7 +260,7 @@ int GPTLsetoption (const int option, /* option */
switch (option) {
case GPTLcpu:
#ifdef HAVE_TIMES
- cpustats.enabled = (bool) val;
+ cpustats.enabled = (bool) val;
if (verbose)
printf ("%s: cpustats = %d\n", thisfunc, val);
#else
@@ -268,56 +268,56 @@ int GPTLsetoption (const int option, /* option */
return GPTLerror ("%s: times() not available\n", thisfunc);
#endif
return 0;
- case GPTLwall:
- wallstats.enabled = (bool) val;
+ case GPTLwall:
+ wallstats.enabled = (bool) val;
if (verbose)
printf ("%s: boolean wallstats = %d\n", thisfunc, val);
return 0;
- case GPTLoverhead:
- overheadstats.enabled = (bool) val;
+ case GPTLoverhead:
+ overheadstats.enabled = (bool) val;
if (verbose)
printf ("%s: boolean overheadstats = %d\n", thisfunc, val);
return 0;
- case GPTLdepthlimit:
- depthlimit = val;
+ case GPTLdepthlimit:
+ depthlimit = val;
if (verbose)
printf ("%s: depthlimit = %d\n", thisfunc, val);
return 0;
- case GPTLverbose:
- verbose = (bool) val;
+ case GPTLverbose:
+ verbose = (bool) val;
#ifdef HAVE_PAPI
(void) GPTL_PAPIsetoption (GPTLverbose, val);
#endif
if (verbose)
printf ("%s: boolean verbose = %d\n", thisfunc, val);
return 0;
- case GPTLpercent:
- percent = (bool) val;
+ case GPTLpercent:
+ percent = (bool) val;
if (verbose)
printf ("%s: boolean percent = %d\n", thisfunc, val);
return 0;
- case GPTLdopr_preamble:
- dopr_preamble = (bool) val;
+ case GPTLdopr_preamble:
+ dopr_preamble = (bool) val;
if (verbose)
printf ("%s: boolean dopr_preamble = %d\n", thisfunc, val);
return 0;
- case GPTLdopr_threadsort:
- dopr_threadsort = (bool) val;
+ case GPTLdopr_threadsort:
+ dopr_threadsort = (bool) val;
if (verbose)
printf ("%s: boolean dopr_threadsort = %d\n", thisfunc, val);
return 0;
- case GPTLdopr_multparent:
- dopr_multparent = (bool) val;
+ case GPTLdopr_multparent:
+ dopr_multparent = (bool) val;
if (verbose)
printf ("%s: boolean dopr_multparent = %d\n", thisfunc, val);
return 0;
- case GPTLdopr_collision:
- dopr_collision = (bool) val;
+ case GPTLdopr_collision:
+ dopr_collision = (bool) val;
if (verbose)
printf ("%s: boolean dopr_collision = %d\n", thisfunc, val);
return 0;
case GPTLprint_method:
- method = (Method) val;
+ method = (Method) val;
if (verbose)
printf ("%s: print_method = %s\n", thisfunc, methodstr (method));
return 0;
@@ -338,8 +338,8 @@ int GPTLsetoption (const int option, /* option */
printf ("%s: boolean sync_mpi = %d\n", thisfunc, val);
return 0;
- /*
- ** Allow GPTLmultiplex to fall through because it will be handled by
+ /*
+ ** Allow GPTLmultiplex to fall through because it will be handled by
** GPTL_PAPIsetoption()
*/
@@ -405,7 +405,7 @@ int GPTLsetutr (const int option)
** GPTLinitialize (): Initialization routine must be called from single-threaded
** region before any other timing routines may be called. The need for this
** routine could be eliminated if not targeting the timing library for threaded
-** capability.
+** capability.
**
** return value: 0 (success) or GPTLerror (failure)
*/
@@ -469,12 +469,12 @@ int GPTLinitialize (void)
return GPTLerror ("%s: Failure from GPTL_PAPIinitialize\n", thisfunc);
#endif
- /*
+ /*
** Call init routine for underlying timing routine.
*/
if ((*funclist[funcidx].funcinit)() < 0) {
- fprintf (stderr, "%s: Failure initializing %s. Reverting underlying timer to %s\n",
+ fprintf (stderr, "%s: Failure initializing %s. Reverting underlying timer to %s\n",
thisfunc, funclist[funcidx].name, funclist[0].name);
funcidx = 0;
}
@@ -620,12 +620,12 @@ int GPTLstart_instr (void *self)
ptr = getentry_instr (hashtable[t], self, &indx);
- /*
- ** Recursion => increment depth in recursion and return. We need to return
+ /*
+ ** Recursion => increment depth in recursion and return. We need to return
** because we don't want to restart the timer. We want the reported time for
** the timer to reflect the outermost layer of recursion.
*/
-
+
if (ptr && ptr->onflg) {
++ptr->recurselvl;
return 0;
@@ -662,7 +662,7 @@ int GPTLstart_instr (void *self)
return GPTLerror ("%s: update_ptr error\n", thisfunc);
return (0);
-}
+}
/*
** GPTLstart: start a timer
@@ -700,15 +700,15 @@ int GPTLstart (const char *name) /* timer name */
return 0;
}
- /*
+ /*
** ptr will point to the requested timer in the current list,
- ** or NULL if this is a new entry
+ ** or NULL if this is a new entry
*/
ptr = getentry (hashtable[t], name, &indx);
- /*
- ** Recursion => increment depth in recursion and return. We need to return
+ /*
+ ** Recursion => increment depth in recursion and return. We need to return
** because we don't want to restart the timer. We want the reported time for
** the timer to reflect the outermost layer of recursion.
*/
@@ -786,7 +786,7 @@ int GPTLstart_handle (const char *name, /* timer name */
}
/*
- ** If on input, handle references a non-zero value, assume it's a previously returned Timer*
+ ** If on input, handle references a non-zero value, assume it's a previously returned Timer*
** passed in by the user. If zero, generate the hash entry and return it to the user.
*/
@@ -795,9 +795,9 @@ int GPTLstart_handle (const char *name, /* timer name */
} else {
ptr = getentry (hashtable[t], name, &indx);
}
-
- /*
- ** Recursion => increment depth in recursion and return. We need to return
+
+ /*
+ ** Recursion => increment depth in recursion and return. We need to return
** because we don't want to restart the timer. We want the reported time for
** the timer to reflect the outermost layer of recursion.
*/
@@ -869,7 +869,7 @@ static int update_ll_hash (Timer *ptr, const int t, const unsigned int indx)
last[t] = ptr;
++hashtable[t][indx].nument;
nument = hashtable[t][indx].nument;
-
+
eptr = (Timer **) realloc (hashtable[t][indx].entries, nument * sizeof (Timer *));
if ( ! eptr)
return GPTLerror ("update_ll_hash: realloc error\n");
@@ -898,7 +898,7 @@ static inline int update_ptr (Timer *ptr, const int t)
if (cpustats.enabled && get_cpustamp (&ptr->cpu.last_utime, &ptr->cpu.last_stime) < 0)
return GPTLerror ("update_ptr: get_cpustamp error");
-
+
if (wallstats.enabled) {
tp2 = (*ptr2wtimefunc) ();
ptr->wall.last = tp2;
@@ -922,9 +922,9 @@ static inline int update_ptr (Timer *ptr, const int t)
** Return value: 0 (success) or GPTLerror (failure)
*/
-static inline int update_parent_info (Timer *ptr,
- Timer **callstackt,
- int stackidxt)
+static inline int update_parent_info (Timer *ptr,
+ Timer **callstackt,
+ int stackidxt)
{
int n; /* loop index through known parents */
Timer *pptr; /* pointer to parent in callstack */
@@ -941,7 +941,7 @@ static inline int update_parent_info (Timer *ptr,
callstackt[stackidxt] = ptr;
- /*
+ /*
** If the region has no parent, bump its orphan count
** (should never happen since "GPTL_ROOT" added).
*/
@@ -1010,7 +1010,7 @@ int GPTLstop_instr (void *self)
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
/* Get the timestamp */
-
+
if (wallstats.enabled) {
tp1 = (*ptr2wtimefunc) ();
}
@@ -1033,7 +1033,7 @@ int GPTLstop_instr (void *self)
ptr = getentry_instr (hashtable[t], self, &indx);
- if ( ! ptr)
+ if ( ! ptr)
return GPTLerror ("%s: timer for %p had not been started.\n", thisfunc, self);
if ( ! ptr->onflg )
@@ -1041,7 +1041,7 @@ int GPTLstop_instr (void *self)
++ptr->count;
- /*
+ /*
** Recursion => decrement depth in recursion and return. We need to return
** because we don't want to stop the timer. We want the reported time for
** the timer to reflect the outermost layer of recursion.
@@ -1085,7 +1085,7 @@ int GPTLstop (const char *name) /* timer name */
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
/* Get the timestamp */
-
+
if (wallstats.enabled) {
tp1 = (*ptr2wtimefunc) ();
}
@@ -1114,7 +1114,7 @@ int GPTLstop (const char *name) /* timer name */
++ptr->count;
- /*
+ /*
** Recursion => decrement depth in recursion and return. We need to return
** because we don't want to stop the timer. We want the reported time for
** the timer to reflect the outermost layer of recursion.
@@ -1160,7 +1160,7 @@ int GPTLstop_handle (const char *name, /* timer name */
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
/* Get the timestamp */
-
+
if (wallstats.enabled) {
tp1 = (*ptr2wtimefunc) ();
}
@@ -1182,7 +1182,7 @@ int GPTLstop_handle (const char *name, /* timer name */
}
/*
- ** If on input, handle references a non-zero value, assume it's a previously returned Timer*
+ ** If on input, handle references a non-zero value, assume it's a previously returned Timer*
** passed in by the user. If zero, generate the hash entry and return it to the user.
*/
@@ -1198,7 +1198,7 @@ int GPTLstop_handle (const char *name, /* timer name */
++ptr->count;
- /*
+ /*
** Recursion => decrement depth in recursion and return. We need to return
** because we don't want to stop the timer. We want the reported time for
** the timer to reflect the outermost layer of recursion.
@@ -1224,7 +1224,7 @@ int GPTLstop_handle (const char *name, /* timer name */
}
/*
-** update_stats: update stats inside ptr. Called by GPTLstop, GPTLstop_instr,
+** update_stats: update stats inside ptr. Called by GPTLstop, GPTLstop_instr,
** GPTLstop_handle
**
** Input arguments:
@@ -1237,9 +1237,9 @@ int GPTLstop_handle (const char *name, /* timer name */
** Return value: 0 (success) or GPTLerror (failure)
*/
-static inline int update_stats (Timer *ptr,
- const double tp1,
- const long usr,
+static inline int update_stats (Timer *ptr,
+ const double tp1,
+ const long usr,
const long sys,
const int t)
{
@@ -1375,7 +1375,7 @@ int GPTLreset (void)
return 0;
}
-/*
+/*
** GPTLpr_set_append: set GPTLpr_file and GPTLpr_summary_file
** to use append mode
*/
@@ -1386,20 +1386,20 @@ int GPTLpr_set_append (void)
return 0;
}
-/*
+/*
** GPTLpr_query_append: query whether GPTLpr_file and GPTLpr_summary_file
** use append mode
*/
int GPTLpr_query_append (void)
{
- if (pr_append)
+ if (pr_append)
return 1;
- else
+ else
return 0;
}
-/*
+/*
** GPTLpr_set_write: set GPTLpr_file and GPTLpr_summary_file
** to use write mode
*/
@@ -1410,20 +1410,20 @@ int GPTLpr_set_write (void)
return 0;
}
-/*
+/*
** GPTLpr_query_write: query whether GPTLpr_file and GPTLpr_summary_file
** use write mode
*/
int GPTLpr_query_write (void)
{
- if (pr_append)
+ if (pr_append)
return 0;
- else
+ else
return 1;
}
-/*
+/*
** GPTLpr: Print values of all timers
**
** Input arguments:
@@ -1448,7 +1448,7 @@ int GPTLpr (const int id) /* output file will be named "timing." */
return 0;
}
-/*
+/*
** GPTLpr_file: Print values of all timers
**
** Input arguments:
@@ -1500,9 +1500,9 @@ int GPTLpr_file (const char *outfile) /* output file to write */
/* 2 is for "/" plus null */
if (outdir)
- totlen = strlen (outdir) + strlen (outfile) + 2;
+ totlen = strlen (outdir) + strlen (outfile) + 2;
else
- totlen = strlen (outfile) + 2;
+ totlen = strlen (outfile) + 2;
outpath = (char *) GPTLallocate (totlen);
@@ -1619,11 +1619,11 @@ int GPTLpr_file (const char *outfile) /* output file to write */
}
sum = (float *) GPTLallocate (nthreads * sizeof (float));
-
+
for (t = 0; t < nthreads; ++t) {
/*
- ** Construct tree for printing timers in parent/child form. get_max_depth() must be called
+ ** Construct tree for printing timers in parent/child form. get_max_depth() must be called
** AFTER construct_tree() because it relies on the per-parent children arrays being complete.
*/
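/*
** Illustrative sketch (not part of the patch): get_max_depth() mentioned above is a
** plain recursive walk of the parent/child tree built by construct_tree().  ToyNode
** and toy_max_depth are hypothetical stand-ins for GPTL's Timer tree, shown only to
** make the AFTER-construct_tree() ordering requirement concrete.
*/
typedef struct ToyNode {
  int nchildren;                    /* filled in by the tree-construction pass */
  struct ToyNode **children;
} ToyNode;

static int toy_max_depth (const ToyNode *node, int depth)
{
  int n;
  int maxdepth = depth;
  for (n = 0; n < node->nchildren; ++n) {
    int d = toy_max_depth (node->children[n], depth + 1);
    if (d > maxdepth)
      maxdepth = d;
  }
  return maxdepth;
}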
@@ -1671,7 +1671,7 @@ int GPTLpr_file (const char *outfile) /* output file to write */
printself_andchildren (timers[t], fp, t, -1, tot_overhead);
- /*
+ /*
** Sum of overhead across timers is meaningful.
** Factor of 2 is because there are 2 utr calls per start/stop pair.
*/
@@ -1721,8 +1721,8 @@ int GPTLpr_file (const char *outfile) /* output file to write */
/* Start at next to skip dummy */
for (ptr = timers[0]->next; ptr; ptr = ptr->next) {
-
- /*
+
+ /*
** To print sum stats, first create a new timer then copy thread 0
** stats into it. then sum using "add", and finally print.
*/
@@ -1874,7 +1874,7 @@ int GPTLpr_file (const char *outfile) /* output file to write */
totmem += gptlmem;
fprintf (fp, "\n");
fprintf (fp, "Thread %d total memory usage = %g KB\n", t, gptlmem*.001);
- fprintf (fp, " Hashmem = %g KB\n"
+ fprintf (fp, " Hashmem = %g KB\n"
" Regionmem = %g KB (papimem portion = %g KB)\n"
" Parent/child arrays = %g KB\n",
hashmem*.001, regionmem*.001, papimem*.001, pchmem*.001);
@@ -1892,7 +1892,7 @@ int GPTLpr_file (const char *outfile) /* output file to write */
return 0;
}
-/*
+/*
** construct_tree: Build the parent->children tree starting with knowledge of
** parent list for each child.
**
@@ -1944,7 +1944,7 @@ int construct_tree (Timer *timerst, Method method)
}
break;
case GPTLfull_tree:
- /*
+ /*
** Careful: this one can create *lots* of output!
*/
for (n = 0; n < ptr->nparent; ++n) {
@@ -1959,7 +1959,7 @@ int construct_tree (Timer *timerst, Method method)
return 0;
}
-/*
+/*
** methodstr: Return a pointer to a string which represents the method
**
** Input arguments:
@@ -1980,9 +1980,9 @@ static char *methodstr (Method method)
return "Unknown";
}
-/*
+/*
** newchild: Add an entry to the children list of parent. Use function
-** is_descendant() to prevent infinite loops.
+** is_descendant() to prevent infinite loops.
**
** Input arguments:
** parent: parent node
@@ -2017,7 +2017,7 @@ static int newchild (Timer *parent, Timer *child)
}
/*
- ** To guarantee no loops, ensure that proposed parent isn't already a descendant of
+ ** To guarantee no loops, ensure that proposed parent isn't already a descendant of
** proposed child
*/
@@ -2040,13 +2040,13 @@ static int newchild (Timer *parent, Timer *child)
return 0;
}
-/*
+/*
** get_max_depth: Determine the maximum call tree depth by traversing the
** tree recursively
**
** Input arguments:
** ptr: Starting timer
-** startdepth: current depth when function invoked
+** startdepth: current depth when function invoked
**
** Return value: maximum depth
*/
@@ -2064,7 +2064,7 @@ static int get_max_depth (const Timer *ptr, const int startdepth)
return maxdepth;
}
-/*
+/*
** num_descendants: Determine the number of descendants of a timer by traversing
** the tree recursively. This function is not currently used. It could be
** useful in a pruning algorithm
@@ -2086,7 +2086,7 @@ static int num_descendants (Timer *ptr)
return ptr->num_desc;
}
-/*
+/*
** is_descendant: Determine whether node2 is in the descendant list for
** node1
**
@@ -2114,7 +2114,7 @@ static int is_descendant (const Timer *node1, const Timer *node2)
return 0;
}
-/*
+/*
** printstats: print a single timer
**
** Input arguments:
@@ -2224,7 +2224,7 @@ static void printstats (const Timer *timer,
else
fprintf (fp, "%13.3e ", timer->nbytes / timer->count);
#endif
-
+
#ifdef HAVE_PAPI
GPTL_PAPIpr (fp, &timer->aux, t, timer->count, timer->wall.accum);
#endif
@@ -2232,13 +2232,13 @@ static void printstats (const Timer *timer,
fprintf (fp, "\n");
}
-/*
-** print_multparentinfo:
+/*
+** print_multparentinfo:
**
** Input arguments:
** Input/output arguments:
*/
-void print_multparentinfo (FILE *fp,
+void print_multparentinfo (FILE *fp,
Timer *ptr)
{
int n;
@@ -2263,7 +2263,7 @@ void print_multparentinfo (FILE *fp,
fprintf (fp, "%8.1e %-32s\n\n", (float) ptr->count, ptr->name);
}
-/*
+/*
** add: add the contents of tin to tout
**
** Input arguments:
@@ -2272,14 +2272,14 @@ void print_multparentinfo (FILE *fp,
** tout: output timer summed into
*/
-static void add (Timer *tout,
+static void add (Timer *tout,
const Timer *tin)
{
tout->count += tin->count;
if (wallstats.enabled) {
tout->wall.accum += tin->wall.accum;
-
+
tout->wall.max = MAX (tout->wall.max, tin->wall.max);
tout->wall.min = MIN (tout->wall.min, tin->wall.min);
}
@@ -2293,8 +2293,8 @@ static void add (Timer *tout,
#endif
}
-/*
-** GPTLpr_summary: Gather and print summary stats across
+/*
+** GPTLpr_summary: Gather and print summary stats across
** threads and MPI tasks
**
** Input arguments:
@@ -2315,10 +2315,10 @@ int GPTLpr_summary (int comm)
}
#ifdef HAVE_MPI
-int GPTLpr_summary_file (MPI_Comm comm,
+int GPTLpr_summary_file (MPI_Comm comm,
const char *outfile)
#else
-int GPTLpr_summary_file (int comm,
+int GPTLpr_summary_file (int comm,
const char *outfile)
#endif
{
@@ -2362,7 +2362,7 @@ int GPTLpr_summary_file (int comm,
return GPTLerror ("%s: GPTLinitialize() has not been called\n", thisfunc);
/*
- ** Each process gathers stats for its threads.
+ ** Each process gathers stats for its threads.
** Binary tree used to combine results.
** Master prints results.
*/
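/*
** Illustrative sketch (not part of the patch): the "binary tree used to combine
** results" idea above, reduced to a bare reduction-to-rank-0 skeleton.  Each round
** halves the number of active ranks; the actual payload exchange and merge of timer
** lists is elided, and toy_tree_gather is a hypothetical name.
*/
#ifdef HAVE_MPI
#include <mpi.h>

static void toy_tree_gather (int iam, int nprocs, MPI_Comm comm)
{
  int step;
  for (step = 1; step < nprocs; step *= 2) {
    if (iam % (2 * step) == step) {
      /* send partial results to iam - step, then drop out of the reduction */
      MPI_Send (NULL, 0, MPI_BYTE, iam - step, 0, comm);
      return;
    } else if (iam % (2 * step) == 0 && iam + step < nprocs) {
      /* receive and merge partial results from iam + step */
      MPI_Recv (NULL, 0, MPI_BYTE, iam + step, 0, comm, MPI_STATUS_IGNORE);
    }
  }
}
#endif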
@@ -2411,7 +2411,7 @@ int GPTLpr_summary_file (int comm,
/* allocate storage for data for all timers */
if( !( storage = malloc( sizeof(Summarystats) * count ) ) && count )
return GPTLerror ("%s: memory allocation failed\n", thisfunc);
-
+
if ( (ret = collect_data( iam, comm, &count, &storage) ) != 0 )
return GPTLerror ("%s: master collect_data failed\n", thisfunc);
@@ -2526,7 +2526,7 @@ static int merge_thread_data()
/* count timers for thread 0 */
count_r = 0;
- for (ptr = timers[0]->next; ptr; ptr = ptr->next) count_r++;
+ for (ptr = timers[0]->next; ptr; ptr = ptr->next) count_r++;
timerlist = (char **) GPTLallocate( sizeof (char *));
if( !( timerlist[0] = (char *)malloc( count_r * length * sizeof (char)) ) && count_r)
@@ -2551,7 +2551,7 @@ static int merge_thread_data()
/* count timers for thread */
count[t] = 0;
- for (ptr = timers[t]->next; ptr; ptr = ptr->next) count[t]++;
+ for (ptr = timers[t]->next; ptr; ptr = ptr->next) count[t]++;
if( count[t] > max_count || max_count == 0 ) max_count = count[t];
@@ -2587,24 +2587,24 @@ static int merge_thread_data()
k = 0;
n = 0;
num_newtimers = 0;
- while( k < count[0] && n < count[t] ) {
+ while( k < count[0] && n < count[t] ) {
/* linear comparison of timers */
compare = strcmp( sort[0][k], sort[t][n] );
- if( compare == 0 ) {
+ if( compare == 0 ) {
/* both have, nothing needs to be done */
k++;
n++;
continue;
}
- if( compare < 0 ) {
+ if( compare < 0 ) {
/* event that only master has, nothing needs to be done */
k++;
continue;
}
- if( compare > 0 ) {
+ if( compare > 0 ) {
/* event that only slave thread has, need to add */
newtimers[num_newtimers] = sort[t][n];
n++;
@@ -2612,8 +2612,8 @@ static int merge_thread_data()
}
}
- while( n < count[t] ) {
- /* adds any remaining timers, since we know that all the rest
+ while( n < count[t] ) {
+ /* adds any remaining timers, since we know that all the rest
are new since have checked all master thread timers */
newtimers[num_newtimers] = sort[t][n];
num_newtimers++;
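/*
** Illustrative sketch (not part of the patch): the comparison loop above is a standard
** two-pointer walk over two lexically sorted name lists, collecting the names the
** second list has but the first does not.  toy_find_new is a hypothetical helper; the
** real merge_thread_data additionally reallocates timerlist[0] and restores the
** original ordering afterwards.
*/
#include <string.h>

static int toy_find_new (char **a, int na, char **b, int nb, char **newnames)
{
  int i = 0, j = 0, nnew = 0;
  while (i < na && j < nb) {
    int cmp = strcmp (a[i], b[j]);
    if (cmp == 0) {            /* both lists already have the name */
      ++i;
      ++j;
    } else if (cmp < 0) {      /* only the first list has it: nothing to do */
      ++i;
    } else {                   /* only the second list has it: record as new */
      newnames[nnew++] = b[j++];
    }
  }
  while (j < nb)               /* whatever remains in b has to be new */
    newnames[nnew++] = b[j++];
  return nnew;
}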
@@ -2622,7 +2622,7 @@ static int merge_thread_data()
if( num_newtimers ) {
/* sorts by memory address to restore original order */
- qsort( newtimers, num_newtimers, sizeof(char*), ncmp );
+ qsort( newtimers, num_newtimers, sizeof(char*), ncmp );
/* reallocate memory to hold additional timers */
if( !( sort[0] = realloc( sort[0], (count[0] + num_newtimers) * sizeof (char *)) ) )
@@ -2631,7 +2631,7 @@ static int merge_thread_data()
return GPTLerror ("%s: memory reallocation failed\n", thisfunc);
k = count[0];
- for (n = 0; n < num_newtimers; n++) {
+ for (n = 0; n < num_newtimers; n++) {
/* add new found timers */
memcpy( timerlist[0] + (count[0] + n) * length, newtimers[n], length * sizeof (char) );
}
@@ -2639,7 +2639,7 @@ static int merge_thread_data()
count[0] += num_newtimers;
/* reassign pointers in sort since realloc will have broken them if it moved the memory. */
- x = 0;
+ x = 0;
for (k = 0; k < count[0]; k++) {
sort[0][k] = timerlist[0] + x;
x += length;
@@ -2649,7 +2649,7 @@ static int merge_thread_data()
}
}
- free(sort[0]);
+ free(sort[0]);
/* don't free timerlist[0], since needed for subsequent steps in gathering global statistics */
for (t = 1; t < nthreads; t++) {
free(sort[t]);
@@ -2679,14 +2679,14 @@ static int merge_thread_data()
*/
#ifdef HAVE_MPI
-static int collect_data(const int iam,
+static int collect_data(const int iam,
MPI_Comm comm,
- int *count,
+ int *count,
Summarystats **summarystats_cumul )
#else
-static int collect_data(const int iam,
+static int collect_data(const int iam,
int comm,
- int *count,
+ int *count,
Summarystats **summarystats_cumul )
#endif
{
@@ -2809,11 +2809,11 @@ static int collect_data(const int iam,
{
compare = strcmp(sort_master[k], sort_slave[n]);
- if (compare == 0) {
+ if (compare == 0) {
/* matching timers found */
/* find element number of the name in original timerlist so that it can be matched with its summarystats */
- m_index = get_index( timerlist[0], sort_master[k] );
+ m_index = get_index( timerlist[0], sort_master[k] );
s_index = get_index( timers_slave, sort_slave[n] );
get_summarystats (&summarystats[m_index], &summarystats_slave[s_index]);
@@ -2822,7 +2822,7 @@ static int collect_data(const int iam,
continue;
}
- if (compare > 0) {
+ if (compare > 0) {
/* s1 >s2 . slave has event; master does not */
newtimers[num_newtimers] = sort_slave[n];
num_newtimers++;
@@ -2834,7 +2834,7 @@ static int collect_data(const int iam,
k++;
}
- while (n < count_slave) {
+ while (n < count_slave) {
/* add all remaining timers which only the slave has */
newtimers[num_newtimers] = sort_slave[n];
num_newtimers++;
@@ -2842,7 +2842,7 @@ static int collect_data(const int iam,
}
/* sort by memory address to get original order */
- qsort (newtimers, num_newtimers, sizeof(char*), ncmp);
+ qsort (newtimers, num_newtimers, sizeof(char*), ncmp);
/* reallocate to hold new timer names and summary stats from slave */
if (!(timerlist[0] = realloc( timerlist[0], length * (*count + num_newtimers) * sizeof (char) ) ))
@@ -2922,7 +2922,7 @@ static int collect_data(const int iam,
** Return value: index of element in list
*/
-int get_index( const char * list,
+int get_index( const char * list,
const char * element )
{
return (( element - list ) / ( MAX_CHARS + 1 ));
@@ -2955,7 +2955,7 @@ static int ncmp( const char **x, const char **y )
GPTLerror("%s: shared memory address between timers\n", thisfunc);
}
-/*
+/*
** get_threadstats: gather stats for timer "name" over all threads
**
** Input arguments:
@@ -2965,7 +2965,7 @@ static int ncmp( const char **x, const char **y )
** summarystats: max/min stats over all threads
*/
-void get_threadstats (const int iam,
+void get_threadstats (const int iam,
const char *name,
Summarystats *summarystats)
{
@@ -3017,7 +3017,7 @@ void get_threadstats (const int iam,
summarystats->papimax[n] = value;
summarystats->papimax_t[n] = t;
}
-
+
if (value < summarystats->papimin[n] || summarystats->papimin[n] == 0.) {
summarystats->papimin[n] = value;
summarystats->papimin_t[n] = t;
@@ -3030,7 +3030,7 @@ void get_threadstats (const int iam,
if ( summarystats->count ) summarystats->processes = 1;
}
-/*
+/*
** get_summarystats: write max/min stats into mpistats based on comparison
** with summarystats_slave
**
@@ -3040,7 +3040,7 @@ void get_threadstats (const int iam,
** summarystats: stats (starts out as master stats)
*/
-void get_summarystats (Summarystats *summarystats,
+void get_summarystats (Summarystats *summarystats,
const Summarystats *summarystats_slave)
{
if (summarystats_slave->count == 0) return;
@@ -3051,7 +3051,7 @@ void get_summarystats (Summarystats *summarystats,
summarystats->wallmax_t = summarystats_slave->wallmax_t;
}
- if ((summarystats_slave->wallmin < summarystats->wallmin) ||
+ if ((summarystats_slave->wallmin < summarystats->wallmin) ||
(summarystats->count == 0)){
summarystats->wallmin = summarystats_slave->wallmin;
summarystats->wallmin_p = summarystats_slave->wallmin_p;
@@ -3068,7 +3068,7 @@ void get_summarystats (Summarystats *summarystats,
summarystats->papimax_t[n] = summarystats_slave->papimax_t[n];
}
- if ((summarystats_slave->papimin[n] < summarystats->papimin[n]) ||
+ if ((summarystats_slave->papimin[n] < summarystats->papimin[n]) ||
(summarystats->count == 0)){
summarystats->papimin[n] = summarystats_slave->papimin[n];
summarystats->papimin_p[n] = summarystats_slave->papimin_p[n];
@@ -3085,7 +3085,7 @@ void get_summarystats (Summarystats *summarystats,
summarystats->threads += summarystats_slave->threads;
}
-/*
+/*
** GPTLbarrier: When MPI enabled, set and time an MPI barrier
**
** Input arguments:
@@ -3138,10 +3138,10 @@ static inline int get_cpustamp (long *usr, long *sys)
}
/*
-** GPTLquery: return current status info about a timer. If certain stats are not
+** GPTLquery: return current status info about a timer. If certain stats are not
** enabled, they should just have zeros in them. If PAPI is not enabled, input
** counter info is ignored.
-**
+**
** Input args:
** name: timer name
** maxcounters: max number of PAPI counters to get info for
@@ -3156,7 +3156,7 @@ static inline int get_cpustamp (long *usr, long *sys)
** papicounters_out: accumulated PAPI counters
*/
-int GPTLquery (const char *name,
+int GPTLquery (const char *name,
int t,
int *count,
int *onflg,
@@ -3169,14 +3169,14 @@ int GPTLquery (const char *name,
Timer *ptr; /* linked list pointer */
unsigned int indx; /* linked list index returned from getentry (unused) */
static const char *thisfunc = "GPTLquery";
-
+
if ( ! initialized)
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
-
+
/*
** If t is < 0, assume the request is for the current thread
*/
-
+
if (t < 0) {
if ((t = get_thread_num ()) < 0)
return GPTLerror ("%s: get_thread_num failure\n", thisfunc);
@@ -3184,7 +3184,7 @@ int GPTLquery (const char *name,
if (t >= maxthreads)
return GPTLerror ("%s: requested thread %d is too big\n", thisfunc, t);
}
-
+
ptr = getentry (hashtable[t], name, &indx);
if ( !ptr)
return GPTLerror ("%s: requested timer %s does not have a name hash\n", thisfunc, name);
@@ -3203,7 +3203,7 @@ int GPTLquery (const char *name,
/*
** GPTLquerycounters: return current PAPI counters for a timer.
** THIS ROUTINE IS DEPRECATED. USE GPTLget_eventvalue() instead
-**
+**
** Input args:
** name: timer name
** t: thread number (if < 0, the request is for the current thread)
@@ -3212,21 +3212,21 @@ int GPTLquery (const char *name,
** papicounters_out: accumulated PAPI counters
*/
-int GPTLquerycounters (const char *name,
+int GPTLquerycounters (const char *name,
int t,
long long *papicounters_out)
{
Timer *ptr; /* linked list pointer */
unsigned int indx; /* hash index returned from getentry */
static const char *thisfunc = "GPTLquery_counters";
-
+
if ( ! initialized)
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
-
+
/*
** If t is < 0, assume the request is for the current thread
*/
-
+
if (t < 0) {
if ((t = get_thread_num ()) < 0)
return GPTLerror ("%s: get_thread_num failure\n", thisfunc);
@@ -3234,7 +3234,7 @@ int GPTLquerycounters (const char *name,
if (t >= maxthreads)
return GPTLerror ("%s: requested thread %d is too big\n", thisfunc, t);
}
-
+
ptr = getentry (hashtable[t], name, &indx);
if ( !ptr)
return GPTLerror ("%s: requested timer %s does not have a name hash\n", thisfunc, name);
@@ -3248,7 +3248,7 @@ int GPTLquerycounters (const char *name,
/*
** GPTLget_wallclock: return wallclock accumulation for a timer.
-**
+**
** Input args:
** timername: timer name
** t: thread number (if < 0, the request is for the current thread)
@@ -3265,17 +3265,17 @@ int GPTLget_wallclock (const char *timername,
Timer *ptr; /* linked list pointer */
unsigned int indx; /* hash index returned from getentry (unused) */
static const char *thisfunc = "GPTLget_wallclock";
-
+
if ( ! initialized)
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
if ( ! wallstats.enabled)
return GPTLerror ("%s: wallstats not enabled\n", thisfunc);
-
+
/*
** If t is < 0, assume the request is for the current thread
*/
-
+
if (t < 0) {
if ((t = get_thread_num ()) < 0)
return GPTLerror ("%s: bad return from get_thread_num\n", thisfunc);
@@ -3283,9 +3283,9 @@ int GPTLget_wallclock (const char *timername,
if (t >= maxthreads)
return GPTLerror ("%s: requested thread %d is too big\n", thisfunc, t);
}
-
- /*
- ** Don't know whether hashtable entry for timername was generated with
+
+ /*
+ ** Don't know whether hashtable entry for timername was generated with
** *_instr() or not, so try both possibilities
*/
@@ -3305,7 +3305,7 @@ int GPTLget_wallclock (const char *timername,
/*
** GPTLget_eventvalue: return PAPI-based event value for a timer. All values will be
** returned as doubles, even if the event is not derived.
-**
+**
** Input args:
** timername: timer name
** eventname: event name (must be currently enabled)
@@ -3324,14 +3324,14 @@ int GPTLget_eventvalue (const char *timername,
Timer *ptr; /* linked list pointer */
unsigned int indx; /* hash index returned from getentry (unused) */
static const char *thisfunc = "GPTLget_eventvalue";
-
+
if ( ! initialized)
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
-
+
/*
** If t is < 0, assume the request is for the current thread
*/
-
+
if (t < 0) {
if ((t = get_thread_num ()) < 0)
return GPTLerror ("%s: get_thread_num failure\n", thisfunc);
@@ -3339,9 +3339,9 @@ int GPTLget_eventvalue (const char *timername,
if (t >= maxthreads)
return GPTLerror ("%s: requested thread %d is too big\n", thisfunc, t);
}
-
- /*
- ** Don't know whether hashtable entry for timername was generated with
+
+ /*
+ ** Don't know whether hashtable entry for timername was generated with
** *_instr() or not, so try both possibilities
*/
@@ -3357,13 +3357,13 @@ int GPTLget_eventvalue (const char *timername,
#ifdef HAVE_PAPI
return GPTL_PAPIget_eventvalue (eventname, &ptr->aux, value);
#else
- return GPTLerror ("%s: PAPI not enabled\n", thisfunc);
+ return GPTLerror ("%s: PAPI not enabled\n", thisfunc);
#endif
}
/*
** GPTLget_nregions: return number of regions (i.e. timer names) for this thread
-**
+**
** Input args:
** t: thread number (if < 0, the request is for the current thread)
**
@@ -3371,7 +3371,7 @@ int GPTLget_eventvalue (const char *timername,
** nregions: number of regions
*/
-int GPTLget_nregions (int t,
+int GPTLget_nregions (int t,
int *nregions)
{
Timer *ptr; /* walk through linked list */
@@ -3379,11 +3379,11 @@ int GPTLget_nregions (int t,
if ( ! initialized)
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
-
+
/*
** If t is < 0, assume the request is for the current thread
*/
-
+
if (t < 0) {
if ((t = get_thread_num ()) < 0)
return GPTLerror ("%s: get_thread_num failure\n", thisfunc);
@@ -3391,9 +3391,9 @@ int GPTLget_nregions (int t,
if (t >= maxthreads)
return GPTLerror ("%s: requested thread %d is too big\n", thisfunc, t);
}
-
+
*nregions = 0;
- for (ptr = timers[t]->next; ptr; ptr = ptr->next)
+ for (ptr = timers[t]->next; ptr; ptr = ptr->next)
++*nregions;
return 0;
@@ -3401,7 +3401,7 @@ int GPTLget_nregions (int t,
/*
** GPTLget_regionname: return region name for this thread
-**
+**
** Input args:
** t: thread number (if < 0, the request is for the current thread)
** region: region number
@@ -3423,11 +3423,11 @@ int GPTLget_regionname (int t, /* thread number */
if ( ! initialized)
return GPTLerror ("%s: GPTLinitialize has not been called\n", thisfunc);
-
+
/*
** If t is < 0, assume the request is for the current thread
*/
-
+
if (t < 0) {
if ((t = get_thread_num ()) < 0)
return GPTLerror ("%s: get_thread_num failure\n", thisfunc);
@@ -3435,7 +3435,7 @@ int GPTLget_regionname (int t, /* thread number */
if (t >= maxthreads)
return GPTLerror ("%s: requested thread %d is too big\n", thisfunc, t);
}
-
+
ptr = timers[t]->next;
for (i = 0; i < region; i++) {
if ( ! ptr)
@@ -3446,7 +3446,7 @@ int GPTLget_regionname (int t, /* thread number */
if (ptr) {
ncpy = MIN (nc, strlen (ptr->name));
strncpy (name, ptr->name, ncpy);
-
+
/*
** Adding the \0 is only important when called from C
*/
@@ -3523,7 +3523,7 @@ static inline Timer *getentry (const Hashentry *hashtable, /* hash table */
const unsigned char *c; /* pointer to elements of "name" */
Timer *ptr = 0; /* return value when entry not found */
- /*
+ /*
** Hash value is sum of: chars times their 1-based position index, modulo tablesize
*/
@@ -3535,7 +3535,7 @@ static inline Timer *getentry (const Hashentry *hashtable, /* hash table */
*indx %= tablesize;
- /*
+ /*
** If nument exceeds 1 there was a hash collision and we must search
** linearly through an array for a match
*/
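/*
** Illustrative sketch (not part of the patch): the hashing scheme the two comments in
** getentry() describe -- the index is the sum of each character times its 1-based
** position, modulo the table size, and a bucket holding more than one entry (a
** collision) falls back to a linear strcmp scan.  TOY_TABLESIZE, ToyEntry and
** toy_getentry are hypothetical names, not GPTL API.
*/
#include <string.h>

#define TOY_TABLESIZE 1024

typedef struct {
  const char *name;
} ToyEntry;

static unsigned int toy_hashidx (const char *name)
{
  unsigned int sum = 0;
  unsigned int i;
  for (i = 0; name[i] != '\0'; ++i)
    sum += (unsigned char) name[i] * (i + 1);   /* char value times 1-based position */
  return sum % TOY_TABLESIZE;
}

static ToyEntry *toy_getentry (ToyEntry **bucket, int nument, const char *name)
{
  int n;
  for (n = 0; n < nument; ++n)                  /* linear scan resolves collisions */
    if (strcmp (bucket[n]->name, name) == 0)
      return bucket[n];
  return 0;
}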
@@ -3723,7 +3723,7 @@ static int init_papitime ()
return GPTLerror ("%s: not enabled\n", thisfunc);
#endif
}
-
+
static inline double utr_papitime ()
{
#ifdef HAVE_PAPI
@@ -3735,8 +3735,8 @@ static inline double utr_papitime ()
#endif
}
-/*
-** Probably need to link with -lrt for this one to work
+/*
+** Probably need to link with -lrt for this one to work
*/
static int init_clock_gettime ()
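/*
** Illustrative sketch (not part of the patch): a minimal clock_gettime-based wall-clock
** reader of the kind init_clock_gettime/utr_clock_gettime set up.  On older glibc this
** requires linking with -lrt, which is what the comment above refers to.  toy_walltime
** is a hypothetical name, and the choice of CLOCK_MONOTONIC here is an assumption.
*/
#include <time.h>

static double toy_walltime (void)
{
  struct timespec tp;
  (void) clock_gettime (CLOCK_MONOTONIC, &tp);
  return (double) tp.tv_sec + 1.e-9 * (double) tp.tv_nsec;
}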
@@ -3831,7 +3831,7 @@ static inline double utr_gettimeofday ()
#endif
}
-/*
+/*
** Determine underlying timing routine overhead: call it 1000 times.
*/
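/*
** Illustrative sketch (not part of the patch): the overhead estimate described above,
** in isolation -- call the underlying timing routine 1000 times and divide the elapsed
** wall time by 1000.  toy_utr_overhead is a hypothetical name; GPTL's utr_getoverhead
** goes through its ptr2wtimefunc function pointer instead of an explicit argument.
*/
static double toy_utr_overhead (double (*walltime)(void))
{
  double t1 = walltime ();
  double t2 = t1;
  int i;
  for (i = 0; i < 1000; ++i)
    t2 = walltime ();
  return (t2 - t1) / 1000.;
}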
@@ -3852,7 +3852,7 @@ static double utr_getoverhead ()
*/
static void printself_andchildren (const Timer *ptr,
- FILE *fp,
+ FILE *fp,
const int t,
const int depth,
const double tot_overhead)
@@ -3868,9 +3868,9 @@ static void printself_andchildren (const Timer *ptr,
#ifdef ENABLE_PMPI
/*
-** GPTLgetentry: called ONLY from pmpi.c (i.e. not a public entry point). Returns a pointer to the
+** GPTLgetentry: called ONLY from pmpi.c (i.e. not a public entry point). Returns a pointer to the
** requested timer name by calling internal function getentry()
-**
+**
** Return value: 0 (NULL) or the return value of getentry()
*/
@@ -3894,7 +3894,7 @@ Timer *GPTLgetentry (const char *name)
}
/*
-** GPTLpr_file_has_been_called: Called ONLY from pmpi.c (i.e. not a public entry point). Return
+** GPTLpr_file_has_been_called: Called ONLY from pmpi.c (i.e. not a public entry point). Return
** whether GPTLpr_file has been called. MPI_Finalize wrapper needs
** to know whether it needs to call GPTLpr.
*/
@@ -3917,7 +3917,7 @@ int GPTLpr_has_been_called (void)
** $Id: gptl.c,v 1.157 2011-03-28 20:55:18 rosinski Exp $
**
** Author: Jim Rosinski
-**
+**
** Utility functions handle thread-based GPTL needs.
*/
@@ -3925,7 +3925,7 @@ int GPTLpr_has_been_called (void)
#define MAX_THREADS 128
/**********************************************************************************/
-/*
+/*
** 3 sets of routines: OMP threading, PTHREADS, unthreaded
*/
@@ -3951,13 +3951,13 @@ static int threadinit (void)
if (omp_get_thread_num () != 0)
return GPTLerror ("OMP %s: MUST only be called by the master thread\n", thisfunc);
- /*
- ** Allocate the threadid array which maps physical thread IDs to logical IDs
+ /*
+ ** Allocate the threadid array which maps physical thread IDs to logical IDs
** For OpenMP this will be just threadid_omp[iam] = iam;
*/
- if (threadid_omp)
- return GPTLerror ("OMP %s: has already been called.\nMaybe mistakenly called by multiple threads?",
+ if (threadid_omp)
+ return GPTLerror ("OMP %s: has already been called.\nMaybe mistakenly called by multiple threads?",
thisfunc);
maxthreads = MAX ((1), (omp_get_max_threads ()));
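/*
** Illustrative sketch (not part of the patch): the physical-to-logical thread-id map
** the comment above describes.  For OpenMP the map is simply the identity,
** threadid_omp[iam] = iam.  toy_threadinit and toy_get_thread_num are hypothetical and
** use plain malloc where GPTL uses GPTLallocate plus additional error checking.
*/
#include <stdlib.h>
#include <omp.h>

static int *toy_threadid_omp = 0;
static int toy_maxthreads = 0;

static int toy_threadinit (void)
{
  int i;
  toy_maxthreads = omp_get_max_threads ();
  toy_threadid_omp = (int *) malloc (toy_maxthreads * sizeof (int));
  if ( ! toy_threadid_omp)
    return -1;
  for (i = 0; i < toy_maxthreads; ++i)
    toy_threadid_omp[i] = -1;                   /* -1 means "thread not seen yet" */
  return 0;
}

static int toy_get_thread_num (void)
{
  int iam = omp_get_thread_num ();
  if (toy_threadid_omp[iam] == -1)
    toy_threadid_omp[iam] = iam;                /* identity map for OpenMP */
  return toy_threadid_omp[iam];
}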
@@ -3975,7 +3975,7 @@ static int threadinit (void)
#ifdef VERBOSE
printf ("OMP %s: Set maxthreads=%d\n", thisfunc, maxthreads);
#endif
-
+
return 0;
}
@@ -4018,7 +4018,7 @@ static inline int get_thread_num (void)
if (t == threadid_omp[t])
return t;
- /*
+ /*
** Thread id not found. Modify threadid_omp with our ID, then start PAPI events if required.
** Due to the setting of threadid_omp, everything below here will only execute once per thread.
*/
@@ -4049,7 +4049,7 @@ static inline int get_thread_num (void)
/*
** nthreads = maxthreads based on setting in threadinit
*/
-
+
nthreads = maxthreads;
#ifdef VERBOSE
printf ("OMP %s: nthreads=%d\n", thisfunc, nthreads);
@@ -4069,7 +4069,7 @@ static void print_threadmapping (FILE *fp)
}
/**********************************************************************************/
-/*
+/*
** PTHREADS
*/
@@ -4096,7 +4096,7 @@ static int threadinit (void)
static const char *thisfunc = "threadinit";
/*
- ** The following test is not rock-solid, but it's pretty close in terms of guaranteeing that
+ ** The following test is not rock-solid, but it's pretty close in terms of guaranteeing that
** threadinit gets called by only 1 thread. Problem is, mutex hasn't yet been initialized
** so we can't use it.
*/
@@ -4112,7 +4112,7 @@ static int threadinit (void)
** Previously, t_mutex = PTHREAD_MUTEX_INITIALIZER on the static declaration line was
** adequate to initialize the mutex. But this failed in programs that invoked
** GPTLfinalize() followed by GPTLinitialize().
- ** "man pthread_mutex_init" indicates that passing NULL as the second argument to
+ ** "man pthread_mutex_init" indicates that passing NULL as the second argument to
** pthread_mutex_init() should appropriately initialize the mutex, assuming it was
** properly destroyed by a previous call to pthread_mutex_destroy();
*/
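/*
** Illustrative sketch (not part of the patch): the init/destroy cycle the comment above
** relies on, in isolation.  pthread_mutex_init with a NULL attribute argument installs
** default attributes and may be called again once pthread_mutex_destroy has run, which
** is what lets a GPTLfinalize/GPTLinitialize cycle work.  toy_mutex_cycle and toy_mutex
** are hypothetical names.
*/
#include <pthread.h>

static pthread_mutex_t toy_mutex;

static int toy_mutex_cycle (void)
{
  if (pthread_mutex_init (&toy_mutex, NULL) != 0)   /* NULL => default attributes */
    return -1;
  /* ... protect shared state with pthread_mutex_lock / pthread_mutex_unlock ... */
  if (pthread_mutex_destroy (&toy_mutex) != 0)
    return -1;
  return pthread_mutex_init (&toy_mutex, NULL);     /* legal again after destroy */
}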
@@ -4121,16 +4121,16 @@ static int threadinit (void)
if ((ret = pthread_mutex_init ((pthread_mutex_t *) &t_mutex, NULL)) != 0)
return GPTLerror ("PTHREADS %s: mutex init failure: ret=%d\n", thisfunc, ret);
#endif
-
- /*
- ** Allocate the threadid array which maps physical thread IDs to logical IDs
+
+ /*
+ ** Allocate the threadid array which maps physical thread IDs to logical IDs
*/
- if (threadid)
+ if (threadid)
return GPTLerror ("PTHREADS %s: threadid not null\n", thisfunc);
else if ( ! (threadid = (pthread_t *) GPTLallocate (MAX_THREADS * sizeof (pthread_t))))
return GPTLerror ("PTHREADS %s: malloc failure for %d elements of threadid\n", thisfunc, MAX_THREADS);
-
+
maxthreads = MAX_THREADS;
/*
@@ -4175,7 +4175,7 @@ static void threadfinalize ()
**
** Output results:
** nthreads: Updated number of threads
-** threadid: Our thread id added to list on 1st call
+** threadid: Our thread id added to list on 1st call
**
** Return value: thread number (success) or GPTLerror (failure)
*/
@@ -4210,7 +4210,7 @@ static inline int get_thread_num (void)
return t;
#endif
- /*
+ /*
** Thread id not found. Define a critical region, then start PAPI counters if
** necessary and modify threadid[] with our id.
*/
@@ -4234,7 +4234,7 @@ static inline int get_thread_num (void)
threadid[nthreads] = mythreadid;
#ifdef VERBOSE
- printf ("PTHREADS %s: 1st call threadid=%lu maps to location %d\n",
+ printf ("PTHREADS %s: 1st call threadid=%lu maps to location %d\n",
thisfunc, (unsigned long) mythreadid, nthreads);
#endif
@@ -4247,14 +4247,14 @@ static inline int get_thread_num (void)
if (GPTLget_npapievents () > 0) {
#ifdef VERBOSE
- printf ("PTHREADS get_thread_num: Starting EventSet threadid=%lu location=%d\n",
+ printf ("PTHREADS get_thread_num: Starting EventSet threadid=%lu location=%d\n",
(unsigned long) mythreadid, nthreads);
#endif
if (GPTLcreate_and_start_events (nthreads) < 0) {
if (unlock_mutex () < 0)
fprintf (stderr, "PTHREADS %s: mutex unlock failure\n", thisfunc);
- return GPTLerror ("PTHREADS %s: error from GPTLcreate_and_start_events for thread %d\n",
+ return GPTLerror ("PTHREADS %s: error from GPTLcreate_and_start_events for thread %d\n",
thisfunc, nthreads);
}
}
diff --git a/cime/externals/pio2/src/gptl/gptl.inc b/cime/externals/pio2/src/gptl/gptl.inc
index 2ed2ca5c0708..4d9d782a7942 100644
--- a/cime/externals/pio2/src/gptl/gptl.inc
+++ b/cime/externals/pio2/src/gptl/gptl.inc
@@ -97,7 +97,7 @@
integer gptlstart_handle
integer gptlstop
integer gptlstop_handle
- integer gptlstamp
+ integer gptlstamp
integer gptlpr_set_append
integer gptlpr_query_append
integer gptlpr_set_write
@@ -107,7 +107,7 @@
integer gptlpr_summary
integer gptlpr_summary_file
integer gptlbarrier
- integer gptlreset
+ integer gptlreset
integer gptlfinalize
integer gptlget_memusage
integer gptlprint_memusage
@@ -130,7 +130,7 @@
external gptlstart_handle
external gptlstop
external gptlstop_handle
- external gptlstamp
+ external gptlstamp
external gptlpr_set_append
external gptlpr_query_append
external gptlpr_set_write
@@ -140,7 +140,7 @@
external gptlpr_summary
external gptlpr_summary_file
external gptlbarrier
- external gptlreset
+ external gptlreset
external gptlfinalize
external gptlget_memusage
external gptlprint_memusage
diff --git a/cime/externals/pio2/src/gptl/gptl_papi.c b/cime/externals/pio2/src/gptl/gptl_papi.c
index 941316918bef..a8e42fd132ea 100644
--- a/cime/externals/pio2/src/gptl/gptl_papi.c
+++ b/cime/externals/pio2/src/gptl/gptl_papi.c
@@ -5,7 +5,7 @@
**
** Contains routines which interface to PAPI library
*/
-
+
#include "private.h"
#include "gptl.h"
@@ -149,8 +149,8 @@ static const Entry derivedtable [] = {
};
static const int nderivedentries = sizeof (derivedtable) / sizeof (Entry);
-static int npapievents = 0; /* number of PAPI events: initialize to 0 */
-static int nevents = 0; /* number of events: initialize to 0 */
+static int npapievents = 0; /* number of PAPI events: initialize to 0 */
+static int nevents = 0; /* number of events: initialize to 0 */
static int *EventSet; /* list of events to be counted by PAPI */
static long_long **papicounters; /* counters returned from PAPI */
@@ -171,11 +171,11 @@ static int enable (int);
static int getderivedidx (int);
/*
-** GPTL_PAPIsetoption: enable or disable PAPI event defined by "counter". Called
+** GPTL_PAPIsetoption: enable or disable PAPI event defined by "counter". Called
** from GPTLsetoption. Since all events are off by default, val=false degenerates
** to a no-op. Coded this way to be consistent with the rest of GPTL
**
-** Input args:
+** Input args:
** counter: PAPI counter
** val: true or false for enable or disable
**
@@ -219,7 +219,7 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
break;
}
- /*
+ /*
** If val is false, return an error if the event has already been enabled.
** Otherwise just warn that attempting to disable a PAPI-based event
** that has already been enabled doesn't work--for now it's just a no-op
@@ -238,10 +238,10 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
/* If the event has already been enabled for printing, exit */
if (already_enabled (counter))
- return GPTLerror ("GPTL_PAPIsetoption: counter %d has already been enabled\n",
+ return GPTLerror ("GPTL_PAPIsetoption: counter %d has already been enabled\n",
counter);
- /*
+ /*
** Initialize PAPI if it hasn't already been done.
** From here on down we can assume the intent is to enable (not disable) an option
*/
@@ -267,7 +267,7 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_TOT_INS);
pr_event[nevents].denomidx = enable (PAPI_TOT_CYC);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_TOT_INS / PAPI_TOT_CYC\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_TOT_INS / PAPI_TOT_CYC\n",
pr_event[nevents].event.namestr);
++nevents;
return 0;
@@ -278,18 +278,18 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_FP_OPS);
pr_event[nevents].denomidx = enable (PAPI_LST_INS);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_LST_INS\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_LST_INS\n",
pr_event[nevents].event.namestr);
} else if (canenable2 (PAPI_FP_OPS, PAPI_L1_DCA)) {
pr_event[nevents].event = derivedtable[idx];
pr_event[nevents].numidx = enable (PAPI_FP_OPS);
pr_event[nevents].denomidx = enable (PAPI_L1_DCA);
#ifdef DEBUG
- printf ("GPTL_PAPIsetoption: pr_event %d is derived and will be PAPI event %d / %d\n",
+ printf ("GPTL_PAPIsetoption: pr_event %d is derived and will be PAPI event %d / %d\n",
nevents, pr_event[nevents].numidx, pr_event[nevents].denomidx);
#endif
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_L1_DCA\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_L1_DCA\n",
pr_event[nevents].event.namestr);
} else {
return GPTLerror ("GPTL_PAPIsetoption: GPTL_CI unavailable\n");
@@ -305,7 +305,7 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_FP_OPS);
pr_event[nevents].denomidx = enable (PAPI_TOT_CYC);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_TOT_CYC\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_TOT_CYC\n",
pr_event[nevents].event.namestr);
++nevents;
return 0;
@@ -318,7 +318,7 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_FP_OPS);
pr_event[nevents].denomidx = enable (PAPI_TOT_INS);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_TOT_INS\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_FP_OPS / PAPI_TOT_INS\n",
pr_event[nevents].event.namestr);
++nevents;
return 0;
@@ -329,14 +329,14 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_LST_INS);
pr_event[nevents].denomidx = enable (PAPI_TOT_INS);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_LST_INS / PAPI_TOT_INS\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_LST_INS / PAPI_TOT_INS\n",
pr_event[nevents].event.namestr);
} else if (canenable2 (PAPI_L1_DCA, PAPI_TOT_INS)) {
pr_event[nevents].event = derivedtable[idx];
pr_event[nevents].numidx = enable (PAPI_L1_DCA);
pr_event[nevents].denomidx = enable (PAPI_TOT_INS);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCA / PAPI_TOT_INS\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCA / PAPI_TOT_INS\n",
pr_event[nevents].event.namestr);
} else {
return GPTLerror ("GPTL_PAPIsetoption: GPTL_LSTPI unavailable\n");
@@ -352,7 +352,7 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_L1_DCM);
pr_event[nevents].denomidx = enable (PAPI_L1_DCA);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCM / PAPI_L1_DCA\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCM / PAPI_L1_DCA\n",
pr_event[nevents].event.namestr);
++nevents;
return 0;
@@ -363,14 +363,14 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_LST_INS);
pr_event[nevents].denomidx = enable (PAPI_L1_DCM);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_LST_INS / PAPI_L1_DCM\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_LST_INS / PAPI_L1_DCM\n",
pr_event[nevents].event.namestr);
} else if (canenable2 (PAPI_L1_DCA, PAPI_L1_DCM)) {
pr_event[nevents].event = derivedtable[idx];
pr_event[nevents].numidx = enable (PAPI_L1_DCA);
pr_event[nevents].denomidx = enable (PAPI_L1_DCM);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCA / PAPI_L1_DCM\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCA / PAPI_L1_DCM\n",
pr_event[nevents].event.namestr);
} else {
return GPTLerror ("GPTL_PAPIsetoption: GPTL_LSTPDCM unavailable\n");
@@ -389,7 +389,7 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_L2_TCM);
pr_event[nevents].denomidx = enable (PAPI_L2_TCA);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L2_TCM / PAPI_L2_TCA\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L2_TCM / PAPI_L2_TCA\n",
pr_event[nevents].event.namestr);
++nevents;
return 0;
@@ -400,14 +400,14 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_LST_INS);
pr_event[nevents].denomidx = enable (PAPI_L2_TCM);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_LST_INS / PAPI_L2_TCM\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_LST_INS / PAPI_L2_TCM\n",
pr_event[nevents].event.namestr);
} else if (canenable2 (PAPI_L1_DCA, PAPI_L2_TCM)) {
pr_event[nevents].event = derivedtable[idx];
pr_event[nevents].numidx = enable (PAPI_L1_DCA);
pr_event[nevents].denomidx = enable (PAPI_L2_TCM);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCA / PAPI_L2_TCM\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L1_DCA / PAPI_L2_TCM\n",
pr_event[nevents].event.namestr);
} else {
return GPTLerror ("GPTL_PAPIsetoption: GPTL_LSTPL2M unavailable\n");
@@ -423,7 +423,7 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (PAPI_L3_TCM);
pr_event[nevents].denomidx = enable (PAPI_L3_TCR);
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L3_TCM / PAPI_L3_TCR\n",
+ printf ("GPTL_PAPIsetoption: enabling derived event %s = PAPI_L3_TCM / PAPI_L3_TCR\n",
pr_event[nevents].event.namestr);
++nevents;
return 0;
@@ -444,11 +444,11 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
pr_event[nevents].numidx = enable (counter);
pr_event[nevents].denomidx = -1; /* flag says not derived (no denominator) */
} else {
- return GPTLerror ("GPTL_PAPIsetoption: Can't enable event \n",
+ return GPTLerror ("GPTL_PAPIsetoption: Can't enable event \n",
papitable[n].longstr);
}
if (verbose)
- printf ("GPTL_PAPIsetoption: enabling PAPI preset event %s\n",
+ printf ("GPTL_PAPIsetoption: enabling PAPI preset event %s\n",
pr_event[nevents].event.namestr);
++nevents;
return 0;
@@ -458,9 +458,9 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
/*
** Check native events last: If PAPI_event_code_to_name fails, give up
*/
-
+
if ((ret = PAPI_event_code_to_name (counter, eventname)) != PAPI_OK)
- return GPTLerror ("GPTL_PAPIsetoption: name not found for counter %d: PAPI_strerror: %s\n",
+ return GPTLerror ("GPTL_PAPIsetoption: name not found for counter %d: PAPI_strerror: %s\n",
counter, PAPI_strerror (ret));
/*
@@ -514,12 +514,12 @@ int GPTL_PAPIsetoption (const int counter, /* PAPI counter (or option) */
/*
** canenable: determine whether a PAPI counter can be enabled
**
-** Input args:
+** Input args:
** counter: PAPI counter
**
** Return value: 0 (success) or non-zero (failure)
*/
-
+
int canenable (int counter)
{
char eventname[PAPI_MAX_STR_LEN]; /* returned from PAPI_event_code_to_name */
@@ -539,13 +539,13 @@ int canenable (int counter)
/*
** canenable2: determine whether 2 PAPI counters can be enabled
**
-** Input args:
+** Input args:
** counter1: PAPI counter
** counter2: PAPI counter
**
** Return value: 0 (success) or non-zero (failure)
*/
-
+
int canenable2 (int counter1, int counter2)
{
char eventname[PAPI_MAX_STR_LEN]; /* returned from PAPI_event_code_to_name */
@@ -573,12 +573,12 @@ int canenable2 (int counter1, int counter2)
** well as output directly. E.g. PAPI_FP_OPS is used to compute
** computational intensity, and floating point ops per instruction.
**
-** Input args:
+** Input args:
** counter: PAPI counter
**
** Return value: index into papieventlist (success) or negative (not found)
*/
-
+
int papievent_is_enabled (int counter)
{
int n;
@@ -591,14 +591,14 @@ int papievent_is_enabled (int counter)
/*
** already_enabled: determine whether a PAPI-based event has already been
-** enabled for printing.
+** enabled for printing.
**
-** Input args:
+** Input args:
** counter: PAPI or derived counter
**
** Return value: 1 (true) or 0 (false)
*/
-
+
int already_enabled (int counter)
{
int n;
@@ -613,12 +613,12 @@ int already_enabled (int counter)
** enable: enable a PAPI event. ASSUMES that canenable() has already determined
** that the event can be enabled.
**
-** Input args:
+** Input args:
** counter: PAPI counter
**
** Return value: index into papieventlist
*/
-
+
int enable (int counter)
{
int n;
@@ -643,7 +643,7 @@ int enable (int counter)
/*
** getderivedidx: find the table index of a derived counter
**
-** Input args:
+** Input args:
** counter: derived counter
**
** Return value: index into derivedtable (success) or GPTLerror (failure)
@@ -672,7 +672,7 @@ int GPTL_PAPIlibraryinit ()
if ((ret = PAPI_is_initialized ()) == PAPI_NOT_INITED) {
if ((ret = PAPI_library_init (PAPI_VER_CURRENT)) != PAPI_VER_CURRENT) {
- fprintf (stderr, "GPTL_PAPIlibraryinit: ret=%d PAPI_VER_CURRENT=%d\n",
+ fprintf (stderr, "GPTL_PAPIlibraryinit: ret=%d PAPI_VER_CURRENT=%d\n",
ret, (int) PAPI_VER_CURRENT);
return GPTLerror ("GPTL_PAPIlibraryinit: PAPI_library_init failure:%s\n",
PAPI_strerror (ret));
@@ -683,16 +683,16 @@ int GPTL_PAPIlibraryinit ()
/*
** GPTL_PAPIinitialize(): Initialize the PAPI interface. Called from GPTLinitialize.
-** PAPI_library_init must be called before any other PAPI routines.
+** PAPI_library_init must be called before any other PAPI routines.
** PAPI_thread_init is called subsequently if threading is enabled.
** Finally, allocate space for PAPI counters and start them.
**
-** Input args:
+** Input args:
** maxthreads: number of threads
**
** Return value: 0 (success) or GPTLerror or -1 (failure)
*/
-
+
int GPTL_PAPIinitialize (const int maxthreads, /* number of threads */
const bool verbose_flag, /* output verbosity */
int *nevents_out, /* nevents needed by gptl.c */
@@ -748,8 +748,8 @@ int GPTL_PAPIinitialize (const int maxthreads, /* number of threads */
** Threaded routine to create the "event set" (PAPI terminology) and start
** the counters. This is only done once, and is called from get_thread_num
** for the first time for the thread.
-**
-** Input args:
+**
+** Input args:
** t: thread number
**
** Return value: 0 (success) or GPTLerror (failure)
@@ -764,7 +764,7 @@ int GPTLcreate_and_start_events (const int t) /* thread number */
/* Create the event set */
if ((ret = PAPI_create_eventset (&EventSet[t])) != PAPI_OK)
- return GPTLerror ("GPTLcreate_and_start_events: thread %d failure creating eventset: %s\n",
+ return GPTLerror ("GPTLcreate_and_start_events: thread %d failure creating eventset: %s\n",
t, PAPI_strerror (ret));
if (verbose)
@@ -797,20 +797,20 @@ int GPTLcreate_and_start_events (const int t) /* thread number */
if ((ret = PAPI_cleanup_eventset (EventSet[t])) != PAPI_OK)
return GPTLerror ("GPTLcreate_and_start_events: %s\n", PAPI_strerror (ret));
-
+
if ((ret = PAPI_destroy_eventset (&EventSet[t])) != PAPI_OK)
return GPTLerror ("GPTLcreate_and_start_events: %s\n", PAPI_strerror (ret));
if ((ret = PAPI_create_eventset (&EventSet[t])) != PAPI_OK)
- return GPTLerror ("GPTLcreate_and_start_events: failure creating eventset: %s\n",
+ return GPTLerror ("GPTLcreate_and_start_events: failure creating eventset: %s\n",
PAPI_strerror (ret));
if ((ret = PAPI_multiplex_init ()) != PAPI_OK)
- return GPTLerror ("GPTLcreate_and_start_events: failure from PAPI_multiplex_init%s\n",
+ return GPTLerror ("GPTLcreate_and_start_events: failure from PAPI_multiplex_init%s\n",
PAPI_strerror (ret));
if ((ret = PAPI_set_multiplex (EventSet[t])) != PAPI_OK)
- return GPTLerror ("GPTLcreate_and_start_events: failure from PAPI_set_multiplex: %s\n",
+ return GPTLerror ("GPTLcreate_and_start_events: failure from PAPI_set_multiplex: %s\n",
PAPI_strerror (ret));
for (n = 0; n < npapievents; n++) {
@@ -825,20 +825,20 @@ int GPTLcreate_and_start_events (const int t) /* thread number */
/* Start the event set. It will only be read from now on--never stopped */
if ((ret = PAPI_start (EventSet[t])) != PAPI_OK)
- return GPTLerror ("GPTLcreate_and_start_events: failed to start event set: %s\n",
+ return GPTLerror ("GPTLcreate_and_start_events: failed to start event set: %s\n",
PAPI_strerror (ret));
return 0;
}
/*
-** GPTL_PAPIstart: Start the PAPI counters (actually they are just read).
+** GPTL_PAPIstart: Start the PAPI counters (actually they are just read).
** Called from GPTLstart.
**
-** Input args:
+** Input args:
** t: thread number
**
-** Output args:
+** Output args:
** aux: struct containing the counters
**
** Return value: 0 (success) or GPTLerror (failure)
@@ -849,7 +849,7 @@ int GPTL_PAPIstart (const int t, /* thread number */
{
int ret; /* return code from PAPI lib calls */
int n; /* loop index */
-
+
/* If no events are to be counted just return */
if (npapievents == 0)
@@ -860,25 +860,25 @@ int GPTL_PAPIstart (const int t, /* thread number */
if ((ret = PAPI_read (EventSet[t], papicounters[t])) != PAPI_OK)
return GPTLerror ("GPTL_PAPIstart: %s\n", PAPI_strerror (ret));
- /*
+ /*
** Store the counter values. When GPTL_PAPIstop is called, the counters
** will again be read, and differenced with the values saved here.
*/
for (n = 0; n < npapievents; n++)
aux->last[n] = papicounters[t][n];
-
+
return 0;
}
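/*
** Illustrative sketch (not part of the patch): the start/stop pattern the comments in
** GPTL_PAPIstart and GPTL_PAPIstop describe -- read the counters at start, read them
** again at stop, and accumulate the difference.  toy_papi_delta and its last[]/accum[]
** arrays only loosely mirror the aux->last / aux->accum fields and are hypothetical.
*/
#ifdef HAVE_PAPI
#include <papi.h>

#define TOY_MAXEVENTS 16

static int toy_papi_delta (int eventset, int nev, long long *last, long long *accum)
{
  long long now[TOY_MAXEVENTS];
  int n;

  if (PAPI_read (eventset, now) != PAPI_OK)
    return -1;
  for (n = 0; n < nev && n < TOY_MAXEVENTS; ++n) {
    accum[n] += now[n] - last[n];   /* delta since the matching "start" read */
    last[n] = now[n];
  }
  return 0;
}
#endif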
/*
-** GPTL_PAPIstop: Stop the PAPI counters (actually they are just read).
+** GPTL_PAPIstop: Stop the PAPI counters (actually they are just read).
** Called from GPTLstop.
**
** Input args:
** t: thread number
**
-** Input/output args:
+** Input/output args:
** aux: struct containing the counters
**
** Return value: 0 (success) or GPTLerror (failure)
@@ -900,8 +900,8 @@ int GPTL_PAPIstop (const int t, /* thread number */
if ((ret = PAPI_read (EventSet[t], papicounters[t])) != PAPI_OK)
return GPTLerror ("GPTL_PAPIstop: %s\n", PAPI_strerror (ret));
-
- /*
+
+ /*
** Accumulate the difference since timer start in aux.
** Negative accumulation can happen when multiplexing is enabled, so don't
** set count to BADCOUNT in that case.
@@ -924,14 +924,14 @@ int GPTL_PAPIstop (const int t, /* thread number */
** GPTL_PAPIprstr: Print the descriptive string for all enabled PAPI events.
** Called from GPTLpr.
**
-** Input args:
+** Input args:
** fp: file descriptor
*/
void GPTL_PAPIprstr (FILE *fp)
{
int n;
-
+
if (narrowprint) {
for (n = 0; n < nevents; n++) {
fprintf (fp, "%8.8s ", pr_event[n].event.str8);
@@ -957,7 +957,7 @@ void GPTL_PAPIprstr (FILE *fp)
** GPTL_PAPIpr: Print PAPI counter values for all enabled events, including
** derived events. Called from GPTLpr.
**
-** Input args:
+** Input args:
** fp: file descriptor
** aux: struct containing the counters
*/
@@ -989,7 +989,7 @@ void GPTL_PAPIpr (FILE *fp, /* file descriptor to write
denomidx = pr_event[n].denomidx;
#ifdef DEBUG
- printf ("GPTL_PAPIpr: derived event: numidx=%d denomidx=%d values = %ld %ld\n",
+ printf ("GPTL_PAPIpr: derived event: numidx=%d denomidx=%d values = %ld %ld\n",
numidx, denomidx, (long) aux->accum[numidx], (long) aux->accum[denomidx]);
#endif
/* Protect against divide by zero */
@@ -1003,7 +1003,7 @@ void GPTL_PAPIpr (FILE *fp, /* file descriptor to write
} else { /* Raw PAPI event */
#ifdef DEBUG
- printf ("GPTL_PAPIpr: raw event: numidx=%d value = %ld\n",
+ printf ("GPTL_PAPIpr: raw event: numidx=%d value = %ld\n",
numidx, (long) aux->accum[numidx]);
#endif
if (aux->accum[numidx] < PRTHRESH)
@@ -1055,12 +1055,12 @@ void GPTL_PAPIprintenabled (FILE *fp)
fprintf (fp, " %s\n", eventname);
fprintf (fp, "\n");
}
-}
+}
/*
** GPTL_PAPIadd: Accumulate PAPI counters. Called from add.
**
-** Input/Output args:
+** Input/Output args:
** auxout: auxout = auxout + auxin
**
** Input args:
@@ -1071,7 +1071,7 @@ void GPTL_PAPIadd (Papistats *auxout, /* output struct */
const Papistats *auxin) /* input struct */
{
int n;
-
+
for (n = 0; n < npapievents; n++)
if (auxin->accum[n] == BADCOUNT || auxout->accum[n] == BADCOUNT)
auxout->accum[n] = BADCOUNT;
@@ -1229,7 +1229,7 @@ int GPTLevent_name_to_code (const char *name, int *code)
int n; /* loop over derived entries */
/*
- ** First check derived events
+ ** First check derived events
*/
for (n = 0; n < nderivedentries; ++n) {
@@ -1272,7 +1272,7 @@ int GPTLevent_code_to_name (const int code, char *name)
int n; /* loop over derived entries */
/*
- ** First check derived events
+ ** First check derived events
*/
for (n = 0; n < nderivedentries; ++n) {
diff --git a/cime/externals/pio2/src/gptl/perf_mod.F90 b/cime/externals/pio2/src/gptl/perf_mod.F90
index e62059de98ea..2e38c491c67a 100644
--- a/cime/externals/pio2/src/gptl/perf_mod.F90
+++ b/cime/externals/pio2/src/gptl/perf_mod.F90
@@ -30,7 +30,7 @@ module perf_mod
!-----------------------------------------------------------------------
implicit none
private ! Make the default access private
-
+ save
!-----------------------------------------------------------------------
! Public interfaces ----------------------------------------------------
diff --git a/cime/externals/pio2/src/gptl/perf_utils.F90 b/cime/externals/pio2/src/gptl/perf_utils.F90
index 76c7294136ee..2ab74ada4f76 100644
--- a/cime/externals/pio2/src/gptl/perf_utils.F90
+++ b/cime/externals/pio2/src/gptl/perf_utils.F90
@@ -1,15 +1,15 @@
module perf_utils
-!-----------------------------------------------------------------------
-!
+!-----------------------------------------------------------------------
+!
! Purpose: This module supplies the csm_share and CAM utilities
! needed by perf_mod.F90 (when the csm_share and CAM utilities
! are not available).
-!
+!
! Author: P. Worley, October 2007
!
! $Id$
-!
+!
!-----------------------------------------------------------------------
#ifndef NO_MPIMOD
use mpi
@@ -49,7 +49,7 @@ module perf_utils
!- include statements --------------------------------------------------
!-----------------------------------------------------------------------
#ifdef NO_MPIMOD
-#include <mpif.h>
+#include <mpif.h>