diff --git a/CITATION.cff b/CITATION.cff index 4e611a57468..f7eaf133318 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -9,4 +9,4 @@ authors: website: 'https://www.hdfgroup.org' repository-code: 'https://github.com/HDFGroup/hdf5' url: 'https://www.hdfgroup.org/HDF5/' -repository-artifact: 'https://www.hdfgroup.org/downloads/hdf5/' +repository-artifact: 'https://support.hdfgroup.org/downloads/HDF5' diff --git a/HDF5Examples/README.md b/HDF5Examples/README.md index 2f0090ba02c..e70a1a792d0 100644 --- a/HDF5Examples/README.md +++ b/HDF5Examples/README.md @@ -48,17 +48,17 @@ HDF5 SNAPSHOTS, PREVIOUS RELEASES AND SOURCE CODE -------------------------------------------- Full Documentation and Programming Resources for this HDF5 can be found at - https://portal.hdfgroup.org/documentation/index.html + https://support.hdfgroup.org/documentation/HDF5/index.html Periodically development code snapshots are provided at the following URL: - - https://gamma.hdfgroup.org/ftp/pub/outgoing/hdf5/snapshots/ + + https://github.com/HDFGroup/hdf5/releases Source packages for current and previous releases are located at: - - https://portal.hdfgroup.org/downloads/ + + https://support.hdfgroup.org/releases/hdf5/downloads/ Development code is available at our Github location: - + https://github.com/HDFGroup/hdf5.git diff --git a/README.md b/README.md index 36f853c6990..1be794abcb1 100644 --- a/README.md +++ b/README.md @@ -108,7 +108,7 @@ Periodically development code snapshots are provided at the following URL: Source packages for current and previous releases are located at: - https://portal.hdfgroup.org/Downloads + https://support.hdfgroup.org/downloads/HDF5 Development code is available at our Github location: diff --git a/config/cmake/README.md.cmake.in b/config/cmake/README.md.cmake.in index 7f6af3646a2..3f541e4e8a3 100644 --- a/config/cmake/README.md.cmake.in +++ b/config/cmake/README.md.cmake.in @@ -75,6 +75,6 @@ For more information see USING_CMake_Examples.txt in the install folder. =========================================================================== Documentation for this release can be found at the following URL: - https://portal.hdfgroup.org/documentation/index.html#hdf5 + https://support.hdfgroup.org/hdf5/@HDF5_PACKAGE_NAME@-@HDF5_PACKAGE_VERSION@/documentation/doxygen/index.html Bugs should be reported to help@hdfgroup.org. diff --git a/doc/parallel-compression.md b/doc/parallel-compression.md index 48ed4c3c37d..77126d6acf6 100644 --- a/doc/parallel-compression.md +++ b/doc/parallel-compression.md @@ -64,9 +64,9 @@ H5Dwrite(..., dxpl_id, ...); The following are two simple examples of using the parallel compression feature: -[ph5_filtered_writes.c](https://github.com/HDFGroup/hdf5/blob/develop/HDF5Examples/C/H5PAR/ph5_filtered_writes.c) +[ph5_filtered_writes.c][u1] -[ph5_filtered_writes_no_sel.c](https://github.com/HDFGroup/hdf5/blob/develop/HDF5Examples/C/H5PAR/ph5_filtered_writes_no_sel.c) +[ph5_filtered_writes_no_sel.c][u2] The former contains simple examples of using the parallel compression feature to write to compressed datasets, while the @@ -79,7 +79,7 @@ participate in the collective write call. 
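Both patterns reduce to the same rule: every rank must make the collective call, and ranks with nothing to write pass an empty selection. Below is a minimal sketch of that second pattern (not the shipped example; the file name, sizes, and data are illustrative and error checking is omitted):

```c
#include <hdf5.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int mpi_rank, mpi_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);

    /* Create the file with the MPI-IO driver */
    hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl_id, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file_id = H5Fcreate("filtered.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl_id);

    /* One 100-element chunk per rank, compressed with deflate */
    hsize_t dims[1]  = {(hsize_t)mpi_size * 100};
    hsize_t chunk[1] = {100};
    hid_t   fspace   = H5Screate_simple(1, dims, NULL);
    hid_t   dcpl_id  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl_id, 1, chunk);
    H5Pset_deflate(dcpl_id, 6);
    hid_t dset_id = H5Dcreate2(file_id, "dset", H5T_NATIVE_INT, fspace,
                               H5P_DEFAULT, dcpl_id, H5P_DEFAULT);

    /* Writes to filtered datasets must be collective */
    hid_t dxpl_id = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl_id, H5FD_MPIO_COLLECTIVE);

    int   data[100];
    hid_t mspace;
    if (mpi_rank % 2 == 0) {
        /* Even ranks contribute one chunk's worth of data */
        hsize_t start[1] = {(hsize_t)mpi_rank * 100};
        hsize_t count[1] = {100};
        for (int i = 0; i < 100; i++)
            data[i] = mpi_rank;
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
        mspace = H5Screate_simple(1, count, NULL);
    }
    else {
        /* Odd ranks still make the collective call, with empty selections */
        mspace = H5Screate_simple(1, dims, NULL);
        H5Sselect_none(fspace);
        H5Sselect_none(mspace);
    }
    H5Dwrite(dset_id, H5T_NATIVE_INT, mspace, fspace, dxpl_id, data);

    H5Sclose(mspace);
    H5Pclose(dxpl_id);
    H5Dclose(dset_id);
    H5Pclose(dcpl_id);
    H5Sclose(fspace);
    H5Fclose(file_id);
    H5Pclose(fapl_id);
    MPI_Finalize();
    return 0;
}
```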
## Multi-dataset I/O support The parallel compression feature is supported when using the -multi-dataset I/O API routines ([H5Dwrite_multi](https://hdfgroup.github.io/hdf5/develop/group___h5_d.html#gaf6213bf3a876c1741810037ff2bb85d8)/[H5Dread_multi](https://hdfgroup.github.io/hdf5/develop/group___h5_d.html#ga8eb1c838aff79a17de385d0707709915)), but the +multi-dataset I/O API routines ([H5Dwrite_multi][u3]/[H5Dread_multi][u4]), but the following should be kept in mind: - Parallel writes to filtered datasets **must** still be collective, @@ -99,7 +99,7 @@ following should be kept in mind: ## Incremental file space allocation support -HDF5's [file space allocation time](https://hdfgroup.github.io/hdf5/develop/group___d_c_p_l.html#ga85faefca58387bba409b65c470d7d851) +HDF5's [file space allocation time][u5] is a dataset creation property that can have significant effects on application performance, especially if the application uses parallel HDF5. In a serial HDF5 application, the default file space @@ -118,7 +118,7 @@ While this strategy has worked in the past, it has some noticeable drawbacks. For one, the larger the chunked dataset being created, the more noticeable overhead there will be during dataset creation as all of the data chunks are being allocated in the HDF5 file. -Further, these data chunks will, by default, be [filled](https://hdfgroup.github.io/hdf5/develop/group___d_c_p_l.html#ga4335bb45b35386daa837b4ff1b9cd4a4) +Further, these data chunks will, by default, be [filled][u6] with HDF5's default fill data value, leading to extraordinary dataset creation overhead and resulting in pre-filling large portions of a dataset that the application might have been planning @@ -126,12 +126,12 @@ to overwrite anyway. Even worse, there will be more initial overhead from compressing that fill data before writing it out, only to have it read back in, unfiltered and modified the first time a chunk is written to. In the past, it was typically suggested that parallel -HDF5 applications should use [H5Pset_fill_time](https://hdfgroup.github.io/hdf5/develop/group___d_c_p_l.html#ga6bd822266b31f86551a9a1d79601b6a2) +HDF5 applications should use [H5Pset_fill_time][u7] with a value of `H5D_FILL_TIME_NEVER` in order to disable writing of the fill value to dataset chunks, but this isn't ideal if the application actually wishes to make use of fill values. -With [improvements made](https://www.hdfgroup.org/2022/03/parallel-compression-improvements-in-hdf5-1-13-1/) +With [improvements made][u8] to the parallel compression feature for the HDF5 1.13.1 release, "incremental" file space allocation is now the default for datasets created in parallel *only if they have filters applied to them*. @@ -154,7 +154,7 @@ optimal performance out of the parallel compression feature. ### Begin with a good chunking strategy -[Starting with a good chunking strategy](https://portal.hdfgroup.org/documentation/hdf5-docs/chunking_in_hdf5.html) +[Starting with a good chunking strategy][u9] will generally have the largest impact on overall application performance. The different chunking parameters can be difficult to fine-tune, but it is essential to start with a well-performing @@ -166,7 +166,7 @@ chosen chunk size becomes a very important factor when compression is involved, as data chunks have to be completely read and re-written to perform partial writes to the chunk. 
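As a hypothetical illustration (the dimensions and `num_ranks` variable are made up), matching the chunk shape to each rank's write block keeps every chunk owned by a single writer, so a compressed write never has to read a chunk back and re-filter it:

```c
/* Hypothetical sizes: each rank writes one contiguous 1024x1024 block
 * of a (num_ranks*1024) x 1024 dataset. Because the chunk equals the
 * per-rank block, no chunk is shared between ranks and no partial-chunk
 * read-modify-write cycle is needed. */
hsize_t dims[2]  = {(hsize_t)num_ranks * 1024, 1024};
hsize_t chunk[2] = {1024, 1024};

hid_t dcpl_id = H5Pcreate(H5P_DATASET_CREATE);
H5Pset_chunk(dcpl_id, 2, chunk);
H5Pset_deflate(dcpl_id, 6); /* any filter behaves the same way */
```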
-[Improving I/O performance with HDF5 compressed datasets](https://docs.hdfgroup.org/archive/support/HDF5/doc/TechNotes/TechNote-HDF5-ImprovingIOPerformanceCompressedDatasets.pdf) +[Improving I/O performance with HDF5 compressed datasets][u10] is a useful reference for more information on getting good performance when using a chunked dataset layout. @@ -220,14 +220,14 @@ chunks to end up at addresses in the file that do not align well with the underlying file system, possibly leading to poor performance. As an example, Lustre performance is generally good when writes are aligned with the chosen stripe size. -The HDF5 application can use [H5Pset_alignment](https://hdfgroup.github.io/hdf5/develop/group___f_a_p_l.html#gab99d5af749aeb3896fd9e3ceb273677a) +The HDF5 application can use [H5Pset_alignment][u11] to have a bit more control over where objects in the HDF5 file end up. However, do note that setting the alignment of objects generally wastes space in the file and has the potential to dramatically increase its resulting size, so caution should be used when choosing the alignment parameters. -[H5Pset_alignment](https://hdfgroup.github.io/hdf5/develop/group___f_a_p_l.html#gab99d5af749aeb3896fd9e3ceb273677a) +[H5Pset_alignment][u11] has two parameters that control the alignment of objects in the HDF5 file, the "threshold" value and the alignment value. The threshold value specifies that any object greater @@ -264,19 +264,19 @@ in a file, this can create significant amounts of free space in the file over its lifetime and eventually cause performance issues. -An HDF5 application can use [H5Pset_file_space_strategy](https://hdfgroup.github.io/hdf5/develop/group___f_c_p_l.html#ga167ff65f392ca3b7f1933b1cee1b9f70) +An HDF5 application can use [H5Pset_file_space_strategy][u12] with a value of `H5F_FSPACE_STRATEGY_PAGE` to enable the paged aggregation feature, which can accumulate metadata and raw data for dataset data chunks into well-aligned, configurably sized "pages" for better performance. However, note that using the paged aggregation feature will cause any setting from -[H5Pset_alignment](https://hdfgroup.github.io/hdf5/develop/group___f_a_p_l.html#gab99d5af749aeb3896fd9e3ceb273677a) +[H5Pset_alignment][u11] to be ignored. While an application should be able to get -comparable performance effects by [setting the size of these pages](https://hdfgroup.github.io/hdf5/develop/group___f_c_p_l.html#gad012d7f3c2f1e1999eb1770aae3a4963) to be equal to the value that -would have been set for [H5Pset_alignment](https://hdfgroup.github.io/hdf5/develop/group___f_a_p_l.html#gab99d5af749aeb3896fd9e3ceb273677a), +comparable performance effects by [setting the size of these pages][u13] +to be equal to the value that would have been set for [H5Pset_alignment][u11], this may not necessarily be the case and should be studied. -Note that [H5Pset_file_space_strategy](https://hdfgroup.github.io/hdf5/develop/group___f_c_p_l.html#ga167ff65f392ca3b7f1933b1cee1b9f70) +Note that [H5Pset_file_space_strategy][u12] has a `persist` parameter. This determines whether or not the file free space manager should include extra metadata in the HDF5 file about free space sections in the file. 
If this @@ -300,12 +300,12 @@ hid_t file_id = H5Fcreate("file.h5", H5F_ACC_TRUNC, fcpl_id, fapl_id); While the parallel compression feature requires that the HDF5 application set and maintain collective I/O at the application -interface level (via [H5Pset_dxpl_mpio](https://hdfgroup.github.io/hdf5/develop/group___d_x_p_l.html#ga001a22b64f60b815abf5de8b4776f09e)), +interface level (via [H5Pset_dxpl_mpio][u14]), it does not require that the actual MPI I/O that occurs at the lowest layers of HDF5 be collective; independent I/O may perform better depending on the application I/O patterns and parallel file system performance, among other factors. The -application may use [H5Pset_dxpl_mpio_collective_opt](https://hdfgroup.github.io/hdf5/develop/group___d_x_p_l.html#gacb30d14d1791ec7ff9ee73aa148a51a3) +application may use [H5Pset_dxpl_mpio_collective_opt][u15] to control this setting and see which I/O method provides the best performance. @@ -318,7 +318,7 @@ H5Dwrite(..., dxpl_id, ...); ### Runtime HDF5 Library version -An HDF5 application can use the [H5Pset_libver_bounds](https://hdfgroup.github.io/hdf5/develop/group___f_a_p_l.html#gacbe1724e7f70cd17ed687417a1d2a910) +An HDF5 application can use the [H5Pset_libver_bounds][u16] routine to set the upper and lower bounds on library versions to use when creating HDF5 objects. For parallel compression specifically, setting the library version to the latest available @@ -332,3 +332,20 @@ H5Pset_libver_bounds(fapl_id, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST); hid_t file_id = H5Fcreate("file.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl_id); ... ``` + +[u1]: https://github.com/HDFGroup/hdf5/blob/develop/HDF5Examples/C/H5PAR/ph5_filtered_writes.c +[u2]: https://github.com/HDFGroup/hdf5/blob/develop/HDF5Examples/C/H5PAR/ph5_filtered_writes_no_sel.c +[u3]: https://hdfgroup.github.io/hdf5/develop/group___h5_d.html#gaf6213bf3a876c1741810037ff2bb85d8 +[u4]: https://hdfgroup.github.io/hdf5/develop/group___h5_d.html#ga8eb1c838aff79a17de385d0707709915 +[u5]: https://hdfgroup.github.io/hdf5/develop/group___d_c_p_l.html#ga85faefca58387bba409b65c470d7d851 +[u6]: https://hdfgroup.github.io/hdf5/develop/group___d_c_p_l.html#ga4335bb45b35386daa837b4ff1b9cd4a4 +[u7]: https://hdfgroup.github.io/hdf5/develop/group___d_c_p_l.html#ga6bd822266b31f86551a9a1d79601b6a2 +[u8]: https://support.hdfgroup.org/documentation/HDF5/parallel-compression-improvements-in-hdf5-1-13-1 +[u9]: https://support.hdfgroup.org/documentation/HDF5/chunking_in_hdf5.html +[u10]: https://support.hdfgroup.org/documentation/HDF5/technotes/TechNote-HDF5-ImprovingIOPerformanceCompressedDatasets.pdf +[u11]: https://hdfgroup.github.io/hdf5/develop/group___f_a_p_l.html#gab99d5af749aeb3896fd9e3ceb273677a +[u12]: https://hdfgroup.github.io/hdf5/develop/group___f_c_p_l.html#ga167ff65f392ca3b7f1933b1cee1b9f70 +[u13]: https://hdfgroup.github.io/hdf5/develop/group___f_c_p_l.html#gad012d7f3c2f1e1999eb1770aae3a4963 +[u14]: https://hdfgroup.github.io/hdf5/develop/group___d_x_p_l.html#ga001a22b64f60b815abf5de8b4776f09e +[u15]: https://hdfgroup.github.io/hdf5/develop/group___d_x_p_l.html#gacb30d14d1791ec7ff9ee73aa148a51a3 +[u16]: https://hdfgroup.github.io/hdf5/develop/group___f_a_p_l.html#gacbe1724e7f70cd17ed687417a1d2a910 diff --git a/doxygen/aliases b/doxygen/aliases index 4bb6e8c0792..24a496c2203 100644 --- a/doxygen/aliases +++ b/doxygen/aliases @@ -4,17 +4,16 @@ ALIASES += THG="The HDF Group" # Default URLs (Note that md files do not use any aliases) 
################################################################################ # Default URL for HDF Group Files -ALIASES += HDFURL="docs.hdfgroup.org/hdf5" +ALIASES += HDFURL="support.hdfgroup.org" # URL for archived files -ALIASES += ARCURL="docs.hdfgroup.org/archive/support/HDF5/doc" +ALIASES += ARCURL="\HDFURL/archive/support/HDF5/doc" # URL for RFCs -ALIASES += RFCURL="docs.hdfgroup.org/hdf5/rfc" +ALIASES += RFCURL="\HDFURL/hdf5/rfc" # URL for documentation -ALIASES += DSPURL="portal.hdfgroup.org/display/HDF5" -ALIASES += DOCURL="portal.hdfgroup.org/documentation/hdf5-docs" +ALIASES += DOCURL="\HDFURL/releases/hdf5/documentation" # URL for downloads -ALIASES += DWNURL="portal.hdfgroup.org/downloads" -ALIASES += AEXURL="support.hdfgroup.org/ftp/HDF5/examples" +ALIASES += DWNURL="\HDFURL/releases/hdf5/downloads" +ALIASES += AEXURL="\HDFURL/archive/support/ftp/HDF5/examples" # doxygen subdir (develop, v1_14) ALIASES += DOXURL="hdfgroup.github.io/hdf5/develop" #branch name (develop, hdf5_1_14) @@ -259,13 +258,13 @@ ALIASES += sa_metadata_ops="\sa \li H5Pget_all_coll_metadata_ops() \li H5Pget_co ALIASES += ref_cons_semantics="Enabling a Strict Consistency Semantics Model in Parallel HDF5" ALIASES += ref_file_image_ops="HDF5 File Image Operations" -ALIASES += ref_filter_pipe="Data Flow Pipeline for H5Dread()" +ALIASES += ref_filter_pipe="Data Flow Pipeline for H5Dread()" ALIASES += ref_group_impls="Group implementations in HDF5" ALIASES += ref_h5lib_relver="HDF5 Library Release Version Numbers" -ALIASES += ref_mdc_in_hdf5="Metadata Caching in HDF5" -ALIASES += ref_mdc_logging="Metadata Cache Logging" +ALIASES += ref_mdc_in_hdf5="Metadata Caching in HDF5" +ALIASES += ref_mdc_logging="Metadata Cache Logging" ALIASES += ref_news_112="New Features in HDF5 Release 1.12" -ALIASES += ref_h5ocopy="Copying Committed Datatypes with H5Ocopy()" +ALIASES += ref_h5ocopy="Copying Committed Datatypes with H5Ocopy()" ALIASES += ref_sencode_fmt_change="RFC H5Sencode() / H5Sdecode() Format Change" ALIASES += ref_vlen_strings="\Emph{Creating variable-length string datatypes}" ALIASES += ref_vol_doc="VOL documentation" diff --git a/doxygen/dox/About.dox b/doxygen/dox/About.dox index e145516a30e..73010b0c3de 100644 --- a/doxygen/dox/About.dox +++ b/doxygen/dox/About.dox @@ -83,7 +83,7 @@ as a general reference. All custom commands for this project are located in the aliases -file in the doxygen +file in the doxygen subdirectory of the main HDF5 repo. The custom commands are grouped in sections. Find a suitable section for your command or diff --git a/doxygen/dox/DDLBNF110.dox b/doxygen/dox/DDLBNF110.dox index 6d6b67ef7fd..b392526417a 100644 --- a/doxygen/dox/DDLBNF110.dox +++ b/doxygen/dox/DDLBNF110.dox @@ -1,7 +1,5 @@ /** \page DDLBNF110 DDL in BNF through HDF5 1.10 -\todo Revise this & break it up! - \section intro110 Introduction This document contains the data description language (DDL) for an HDF5 file. The diff --git a/doxygen/dox/DDLBNF112.dox b/doxygen/dox/DDLBNF112.dox index cfe34c321f9..c6463c23d5c 100644 --- a/doxygen/dox/DDLBNF112.dox +++ b/doxygen/dox/DDLBNF112.dox @@ -1,7 +1,5 @@ /** \page DDLBNF112 DDL in BNF for HDF5 1.12 through HDF5 1.14.3 -\todo Revise this & break it up! - \section intro112 Introduction This document contains the data description language (DDL) for an HDF5 file. 
The diff --git a/doxygen/dox/DDLBNF114.dox b/doxygen/dox/DDLBNF114.dox index 61e9157e560..baa7a57fea6 100644 --- a/doxygen/dox/DDLBNF114.dox +++ b/doxygen/dox/DDLBNF114.dox @@ -1,7 +1,5 @@ /** \page DDLBNF114 DDL in BNF for HDF5 1.14.4 and above -\todo Revise this & break it up! - \section intro114 Introduction This document contains the data description language (DDL) for an HDF5 file. The diff --git a/doxygen/dox/ExamplesAPI.dox b/doxygen/dox/ExamplesAPI.dox index c48b00e6dbb..dbd24f4d888 100644 --- a/doxygen/dox/ExamplesAPI.dox +++ b/doxygen/dox/ExamplesAPI.dox @@ -30,7 +30,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_alloc.h5 @@ -43,7 +43,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_checksum.h5 @@ -56,7 +56,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_chunk.h5 @@ -69,7 +69,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_compact.h5 @@ -82,7 +82,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_extern.h5 @@ -95,7 +95,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_fillval.h5 @@ -108,7 +108,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_gzip.h5 @@ -121,7 +121,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_hyper.h5 @@ -134,7 +134,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_nbit.h5 @@ -147,7 +147,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_rdwrc.h5 @@ -160,7 +160,7 @@ Languages are C, Fortran, Java (JHI5), Java Object Package, Python (High Level), C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_shuffle.h5 @@ -173,7 +173,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_sofloat.h5 @@ -186,7 +186,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_soint.h5 @@ -199,7 +199,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_szip.h5 @@ -212,7 +212,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_transform.h5 @@ -225,7 +225,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_unlimadd.h5 @@ -238,7 +238,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_unlimgzip.h5 @@ -251,7 +251,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_d_unlimmod.h5 @@ -275,7 +275,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_g_compact.h5 @@ -289,7 +289,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_g_corder.h5 @@ -302,7 +302,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_g_create.h5 @@ -315,7 +315,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_g_intermediate.h5 @@ -328,7 +328,7 @@ 
FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_g_iterate.h5 @@ -341,7 +341,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_g_phase.h5 @@ -366,7 +366,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_g_visit.h5 @@ -388,9 +388,9 @@ FORTRAN Read / Write Array (Attribute) C -FORTRAN +FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_arrayatt.h5 @@ -401,9 +401,9 @@ FORTRAN Read / Write Array (Dataset) C -FORTRAN +FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_array.h5 @@ -414,9 +414,9 @@ FORTRAN Read / Write Bitfield (Attribute) C -FORTRAN +FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_bitatt.h5 @@ -427,9 +427,9 @@ FORTRAN Read / Write Bitfield (Dataset) C -FORTRAN +FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_bit.h5 @@ -440,9 +440,9 @@ FORTRAN Read / Write Compound (Attribute) C -FORTRAN +FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_cmpdatt.h5 @@ -453,9 +453,9 @@ FORTRAN Read / Write Compound (Dataset) C -FORTRAN +FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_cmpd.h5 @@ -468,7 +468,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_commit.h5 @@ -533,7 +533,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_floatatt.h5 @@ -546,7 +546,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_float.h5 @@ -559,7 +559,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_intatt.h5 @@ -572,7 +572,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_int.h5 @@ -585,7 +585,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_objrefatt.h5 @@ -598,7 +598,7 @@ FORTRAN C FORTRAN Java - JavaObj + JavaObj MATLAB PyHigh PyLow h5ex_t_objref.h5 @@ -611,7 +611,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_opaqueatt.h5 @@ -624,7 +624,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_opaque.h5 @@ -637,7 +637,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_regrefatt.h5 @@ -650,7 +650,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_regref.h5 @@ -661,9 +661,9 @@ FORTRAN Read / Write String (Attribute) C -FORTRAN +FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_stringatt.h5 @@ -676,7 +676,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_string.h5 @@ -709,8 +709,7 @@ FORTRAN Read / Write Variable Length String (Attribute) C -FORTRAN - Java JavaObj MATLAB PyHigh PyLow + FORTRAN Java JavaObj MATLAB PyHigh PyLow h5ex_t_vlstringatt.h5 h5ex_t_vlstringatt.tst @@ -722,7 +721,7 @@ FORTRAN C FORTRAN Java -JavaObj +JavaObj MATLAB PyHigh PyLow h5ex_t_vlstring.h5 @@ -843,7 +842,7 @@ FORTRAN Create/Read/Write an Attribute Java -JavaObj +JavaObj HDF5AttributeCreate.txt @@ -851,7 +850,7 @@ FORTRAN Create Datasets Java -JavaObj +JavaObj HDF5DatasetCreate.txt @@ -859,7 +858,7 @@ FORTRAN Read/Write Datasets Java -JavaObj +JavaObj HDF5DatasetRead.txt @@ -867,7 +866,7 @@ FORTRAN Create an Empty File Java -JavaObj +JavaObj HDF5FileCreate.txt @@ -883,9 +882,9 @@ FORTRAN Create Groups Java -JavaObj +JavaObj -HDF5GroupCreate.txt +HDF5GroupCreate.txt Select a Subset of a Dataset @@ -899,9 +898,9 @@ FORTRAN Create Two Datasets Within Groups Java -JavaObj +JavaObj -HDF5GroupDatasetCreate.txt +HDF5GroupDatasetCreate.txt @@ -918,7 +917,7 @@ FORTRAN Creating and Accessing a File C -FORTRAN +FORTRAN MATLAB PyHigh PyLow ph5_.h5 @@ -928,7 +927,7 @@ FORTRAN Creating and 
Accessing a Dataset C -FORTRAN +FORTRAN MATLAB PyHigh PyLow ph5_.h5 @@ -938,7 +937,7 @@ FORTRAN Writing and Reading Contiguous Hyperslabs C -FORTRAN +FORTRAN MATLAB PyHigh PyLow ph5_.h5 @@ -948,7 +947,7 @@ FORTRAN Writing and Reading Regularly Spaced Data Hyperslabs C -FORTRAN +FORTRAN MATLAB PyHigh PyLow ph5_.h5 @@ -958,7 +957,7 @@ FORTRAN Writing and Reading Pattern Hyperslabs C -FORTRAN +FORTRAN MATLAB PyHigh PyLow ph5_.h5 @@ -968,7 +967,7 @@ FORTRAN Writing and Reading Chunk Hyperslabs C -FORTRAN +FORTRAN MATLAB PyHigh PyLow ph5_.h5 @@ -978,7 +977,8 @@ FORTRAN Using the Subfiling VFD to Write a File Striped Across Multiple Subfiles C - FORTRAN MATLAB PyHigh PyLow +FORTRAN + MATLAB PyHigh PyLow ph5_.h5 ph5_.tst @@ -996,7 +996,8 @@ FORTRAN Collectively Write Datasets with Filters and Not All Ranks have Data C - FORTRAN MATLAB PyHigh PyLow +FORTRAN + MATLAB PyHigh PyLow ph5_.h5 ph5_.tst diff --git a/doxygen/dox/GettingStarted.dox b/doxygen/dox/GettingStarted.dox index aa81ca28744..274598c9537 100644 --- a/doxygen/dox/GettingStarted.dox +++ b/doxygen/dox/GettingStarted.dox @@ -38,7 +38,7 @@ Step by step instructions for learning HDF5 that include programming examples \subsection subsec_learn_tutor The HDF Group Tutorials and Examples These tutorials and examples are available for learning about the HDF5 High Level APIs, tools, -Parallel HDF5, and the HDF5-1.10 VDS and SWMR new features: +Parallel HDF5, and the VDS and SWMR features: - @@ -91,7 +91,7 @@ These examples (C, C++, Fortran, Java, Python) are provided in the HDF5 source c - @@ -107,7 +107,7 @@ These examples (C, C++, Fortran, Java, Python) are provided in the HDF5 source c - @@ -131,7 +131,7 @@ These examples (C, C++, Fortran, Java, Python) are provided in the HDF5 source c - diff --git a/doxygen/dox/LearnBasics3.dox b/doxygen/dox/LearnBasics3.dox index 3e9dd8ea090..d853c83d742 100644 --- a/doxygen/dox/LearnBasics3.dox +++ b/doxygen/dox/LearnBasics3.dox @@ -183,7 +183,7 @@ to a new with a new layout. \section secLBDsetLayoutSource Sources of Information Chunking in HDF5 (See the documentation on Advanced Topics in HDF5) -\see \ref sec_plist in the HDF5 \ref UG. +see \ref sec_plist in the HDF5 \ref UG.
Previous Chapter \ref LBPropsList - Next Chapter \ref LBExtDset @@ -251,7 +251,7 @@ The following operations are required in order to create a compressed dataset: \li Create the dataset. \li Close the dataset creation property list and dataset. -For more information on compression, see the FAQ question on Using Compression in HDF5. +For more information on compression, see the FAQ question on Using Compression in HDF5. \section secLBComDsetProg Programming Example @@ -720,7 +720,7 @@ Previous Chapter \ref LBQuiz - Next Chapter \ref LBCompiling Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics -/** @page LBCompiling Compiling HDF5 Applications +@page LBCompiling Compiling HDF5 Applications Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
@@ -969,13 +969,13 @@ or on WINDOWS you may need to add the path to the bin folder to PATH. \subsection subsecLBCompilingCMakeScripts CMake Scripts for Building Applications Simple scripts are provided for building applications with different languages and options. -See CMake Scripts for Building Applications. +See CMake Scripts for Building Applications. For a more complete script (and to help resolve issues) see the script provided with the HDF5 Examples project. \subsection subsecLBCompilingCMakeExamples HDF5 Examples The installed HDF5 can be verified by compiling the HDF5 Examples project, included with the CMake built HDF5 binaries -in the share folder or you can go to the HDF5 Examples github repository. +in the share folder or you can go to the HDF5 Examples in the HDF5 github repository. Go into the share directory and follow the instructions in USING_CMake_examples.txt to build the examples. @@ -1035,9 +1035,11 @@ Previous Chapter \ref LBQuizAnswers - Next Chapter \ref LBTraining Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics -*/ +@page LBTraining Training Videos + +Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics -/ref LBTraining +Training Videos
Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics diff --git a/doxygen/dox/LearnHDFView.dox b/doxygen/dox/LearnHDFView.dox index 2f0a0782e60..cfe11e19137 100644 --- a/doxygen/dox/LearnHDFView.dox +++ b/doxygen/dox/LearnHDFView.dox @@ -7,7 +7,7 @@ This tutorial enables you to get a feel for HDF5 by using the HDFView browser. I any programming experience. \section sec_learn_hv_install HDFView Installation -\li Download and install HDFView. It can be downloaded from the Download HDFView page. +\li Download and install HDFView. It can be downloaded from the Download HDFView page. \li Obtain the storm1.txt text file, used in the tutorial. \section sec_learn_hv_begin Begin Tutorial @@ -246,7 +246,7 @@ in the file). Please note that the chunk sizes used in this topic are for demonstration purposes only. For information on chunking and specifying an appropriate chunk size, see the -Chunking in HDF5 documentation. +Chunking in HDF5 documentation. Also see the HDF5 Tutorial topic on \ref secLBComDsetCreate.
@@ -68,7 +68,7 @@ A brief introduction to Parallel HDF5. If you are new to HDF5 please see the @re
-HDF5-1.10 New Features +New Features since HDF5-1.10 \li \ref VDS diff --git a/doxygen/dox/IntroHDF5.dox b/doxygen/dox/IntroHDF5.dox index 9ef55d3a573..6f3938ed8b2 100644 --- a/doxygen/dox/IntroHDF5.dox +++ b/doxygen/dox/IntroHDF5.dox @@ -262,7 +262,7 @@ FORTRAN routines are similar; they begin with “h5*” and end with “_f”.
  • Java routines are similar; the routine names begin with “H5*” and are prefixed with “H5.” as the class. Constants are in the HDF5Constants class and are prefixed with "HDF5Constants.". The function arguments -are usually similar, @see @ref HDF5LIB +are usually similar, see @ref HDF5LIB
  • For example: @@ -616,8 +616,8 @@ on the HDF-EOS Tools and Information Center pag \section secHDF5Examples Examples \li \ref LBExamples \li \ref ExAPI -\li Examples in the Source Code -\li Other Examples +\li Examples in the Source Code +\li Other Examples \section secHDF5ExamplesCompile How To Compile For information on compiling in C, C++ and Fortran, see: \ref LBCompiling diff --git a/doxygen/dox/LearnBasics.dox b/doxygen/dox/LearnBasics.dox index ed83b367b6b..4db515c1a57 100644 --- a/doxygen/dox/LearnBasics.dox +++ b/doxygen/dox/LearnBasics.dox @@ -59,7 +59,7 @@ These examples (C, C++, Fortran, Java, Python) are provided in the HDF5 source c
    Create a file C Fortran C++ Java Python +C Fortran C++ Java Python
    Create a group C Fortran C++ Java Python +C Fortran C++ Java Python
    Create datasets in a group C Fortran C++ Java Python +C Fortran C++ Java Python
    Create a chunked and compressed dataset C Fortran C++ Java Python +C Fortran C++ Java Python
Values for H5Z_filter_t:
0-255: These values are reserved for filters predefined and registered by the HDF5 library and of use to the general public. They are described in a separate section below.
256-511: Filter numbers in this range are used for testing only and can be used temporarily by any organization. No attempt is made to resolve numbering conflicts since all definitions are by nature temporary.
512-65535: Reserved for future assignment. Please contact the HDF5 development team to reserve a value or range of values for use by your filters.
    - -

    Defining and Querying the Filter Pipeline

    - -

    Two types of filters can be applied to raw data I/O: permanent - filters and transient filters. The permanent filter pipeline is - defined when the dataset is created while the transient pipeline - is defined for each I/O operation. During an - H5Dwrite() the transient filters are applied first - in the order defined and then the permanent filters are applied - in the order defined. For an H5Dread() the - opposite order is used: permanent filters in reverse order, then - transient filters in reverse order. An H5Dread() - must result in the same amount of data for a chunk as the - original H5Dwrite(). - -

    The permanent filter pipeline is defined by calling - H5Pset_filter() for a dataset creation property - list while the transient filter pipeline is defined by calling - that function for a dataset transfer property list. - -
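For example, a sketch based on the description above (`dcpl_id` and `dxpl_id` are assumed to exist; `FILTER_MD5` is the illustrative filter number registered later in this document):

```c
/* The same call installs a permanent or a transient filter,
 * depending on which kind of property list it receives. */
H5Pset_filter(dcpl_id, FILTER_MD5, 0, 0, NULL); /* creation plist: permanent */
H5Pset_filter(dxpl_id, FILTER_MD5, 0, 0, NULL); /* transfer plist: transient */
```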

    -
    herr_t H5Pset_filter (hid_t plist, - H5Z_filter_t filter, unsigned int flags, - size_t cd_nelmts, const unsigned int - cd_values[]) -
    This function adds the specified filter and - corresponding properties to the end of the transient or - permanent output filter pipeline (depending on whether - plist is a dataset creation or dataset transfer - property list). The flags argument specifies certain - general properties of the filter and is documented below. The - cd_values is an array of cd_nelmts integers - which are auxiliary data for the filter. The integer values - will be stored in the dataset object header as part of the - filter information. -
    int H5Pget_nfilters (hid_t plist) -
    This function returns the number of filters defined in the - permanent or transient filter pipeline depending on whether - plist is a dataset creation or dataset transfer - property list. In each pipeline the filters are numbered from - 0 through N-1 where N is the value returned - by this function. During output to the file the filters of a - pipeline are applied in increasing order (the inverse is true - for input). Zero is returned if there are no filters in the - pipeline and a negative value is returned for errors. -
    H5Z_filter_t H5Pget_filter (hid_t plist, - int filter_number, unsigned int *flags, - size_t *cd_nelmts, unsigned int - *cd_values, size_t namelen, char name[]) -
    This is the query counterpart of - H5Pset_filter() and returns information about a - particular filter number in a permanent or transient pipeline - depending on whether plist is a dataset creation or - dataset transfer property list. On input, cd_nelmts - indicates the number of entries in the cd_values - array allocated by the caller while on exit it contains the - number of values defined by the filter. The - filter_number should be a value between zero and - N-1 as described for H5Pget_nfilters() - and the function will return failure (a negative value) if the - filter number is out of range. If name is a pointer - to an array of at least namelen bytes then the filter - name will be copied into that array. The name will be null - terminated if the namelen is large enough. The - filter name returned will be the name appearing in the file or - else the name registered for the filter or else an empty string. -
    - -
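A short sketch of querying a pipeline with these functions, using the signatures documented here (`dcpl_id` is an assumed dataset creation property list):

```c
/* Assumes <stdio.h>. Walk the permanent pipeline, printing each
 * filter's identification number and name. */
int nfilters = H5Pget_nfilters(dcpl_id);
for (int i = 0; i < nfilters; i++) {
    unsigned int flags;
    unsigned int cd_values[8];
    size_t       cd_nelmts = 8;
    char         name[64];
    H5Z_filter_t filter = H5Pget_filter(dcpl_id, i, &flags, &cd_nelmts,
                                        cd_values, sizeof(name), name);
    printf("filter %d: id=%d, name=\"%s\"\n", i, (int)filter, name);
}
```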

    The flags argument to the functions above is a bit vector of - the following fields: - -

Values for flags:
H5Z_FLAG_OPTIONAL: If this bit is set then the filter is optional. If the filter fails (see below) during an H5Dwrite() operation then the filter is just excluded from the pipeline for the chunk for which it failed; the filter will not participate in the pipeline during an H5Dread() of the chunk. This is commonly used for compression filters: if the compression result would be larger than the input then the compression filter returns failure and the uncompressed data is stored in the file. If this bit is clear and a filter fails then the H5Dwrite() or H5Dread() also fails.
    - -

    Defining Filters

    - -

    Each filter is bidirectional, handling both input and output to - the file, and a flag is passed to the filter to indicate the - direction. In either case the filter reads a chunk of data from - a buffer, usually performs some sort of transformation on the - data, places the result in the same or new buffer, and returns - the buffer pointer and size to the caller. If something goes - wrong the filter should return zero to indicate a failure. - -

    During output, a filter that fails or isn't defined and is - marked as optional is silently excluded from the pipeline and - will not be used when reading that chunk of data. A required - filter that fails or isn't defined causes the entire output - operation to fail. During input, any filter that has not been - excluded from the pipeline during output and fails or is not - defined will cause the entire input operation to fail. - -

    Filters are defined in two phases. The first phase is to - define a function to act as the filter and link the function - into the application. The second phase is to register the - function, associating the function with an - H5Z_filter_t identification number and a comment. - -

    -
    typedef size_t (*H5Z_func_t)(unsigned int - flags, size_t cd_nelmts, const unsigned int - cd_values[], size_t nbytes, size_t - *buf_size, void **buf) -
    The flags, cd_nelmts, and - cd_values are the same as for the - H5Pset_filter() function with the additional flag - H5Z_FLAG_REVERSE which is set when the filter is - called as part of the input pipeline. The input buffer is - pointed to by *buf and has a total size of - *buf_size bytes but only nbytes are valid - data. The filter should perform the transformation in place if - possible and return the number of valid bytes or zero for - failure. If the transformation cannot be done in place then - the filter should allocate a new buffer with - malloc() and assign it to *buf, - assigning the allocated size of that buffer to - *buf_size. The old buffer should be freed - by calling free(). - -

    -
    herr_t H5Zregister (H5Z_filter_t filter_id, - const char *comment, H5Z_func_t - filter) -
    The filter function is associated with a filter - number and a short ASCII comment which will be stored in the - hdf5 file if the filter is used as part of a permanent - pipeline during dataset creation. -
    - -

    Predefined Filters

    - -

If zlib version 1.1.2 or later was found - during configuration then the library will define a filter whose - H5Z_filter_t number is - H5Z_FILTER_DEFLATE. Since this compression method - has the potential for generating compressed data which is larger - than the original, the H5Z_FLAG_OPTIONAL flag - should be turned on so such cases can be handled gracefully by - storing the original data instead of the compressed data. The - cd_nelmts should be one with cd_values[0] - being a compression aggression level between zero and nine, - inclusive (zero is the fastest compression while nine results in - the best compression ratio). - 

    A convenience function for adding the - H5Z_FILTER_DEFLATE filter to a pipeline is: - -

    -
    herr_t H5Pset_deflate (hid_t plist, unsigned - aggression) -
    The deflate compression method is added to the end of the - permanent or transient filter pipeline depending on whether - plist is a dataset creation or dataset transfer - property list. The aggression is a number between - zero and nine (inclusive) to indicate the tradeoff between - speed and compression ratio (zero is fastest, nine is best - ratio). -
    - -
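For instance, a sketch assuming an existing dataset creation property list `dcpl_id`; since the convenience call marks the filter optional (see the note below), the two forms are equivalent:

```c
/* Either use the convenience call... */
unsigned int level = 6; /* middle of the 0-9 speed/ratio range */
H5Pset_deflate(dcpl_id, level);
/* ...or, equivalently, spell the pipeline entry out:
 * H5Pset_filter(dcpl_id, H5Z_FILTER_DEFLATE, H5Z_FLAG_OPTIONAL, 1, &level);
 */
```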

    Even if the zlib isn't detected during - configuration the application can define - H5Z_FILTER_DEFLATE as a permanent filter. If the - filter is marked as optional (as with - H5Pset_deflate()) then it will always fail and be - automatically removed from the pipeline. Applications that read - data will fail only if the data is actually compressed; they - won't fail if H5Z_FILTER_DEFLATE was part of the - permanent output pipeline but was automatically excluded because - it didn't exist when the data was written. - -

    zlib can be acquired from - - https://zlib.net. - -

    Example

    - -

    This example shows how to define and register a simple filter - that adds a checksum capability to the data stream. - -

    The function that acts as the filter always returns zero - (failure) if the md5() function was not detected at - configuration time (left as an exercise for the reader). - Otherwise the function is broken down to an input and output - half. The output half calculates a checksum, increases the size - of the output buffer if necessary, and appends the checksum to - the end of the buffer. The input half calculates the checksum - on the first part of the buffer and compares it to the checksum - already stored at the end of the buffer. If the two differ then - zero (failure) is returned, otherwise the buffer size is reduced - to exclude the checksum. - -

    - - - - -
    -

    
    -                  size_t
    -                  md5_filter(unsigned int flags, size_t cd_nelmts,
    -                  const unsigned int cd_values[], size_t nbytes,
    -                  size_t *buf_size, void **buf)
    -                  {
    -                  #ifdef HAVE_MD5
    -                  unsigned char       cksum[16];
    -
-                  if (flags & H5Z_FLAG_REVERSE) {
    -                  /* Input */
    -                  assert(nbytes>=16);
    -                  md5(nbytes-16, *buf, cksum);
    -
    -                  /* Compare */
    -                  if (memcmp(cksum, (char*)(*buf)+nbytes-16, 16)) {
    -                  return 0; /*fail*/
    -                  }
    -
    -                  /* Strip off checksum */
    -                  return nbytes-16;
    -
    -                  } else {
    -                  /* Output */
    -                  md5(nbytes, *buf, cksum);
    -
    -                  /* Increase buffer size if necessary */
    -                  if (nbytes+16>*buf_size) {
    -                  *buf_size = nbytes + 16;
    -                  *buf = realloc(*buf, *buf_size);
    -                  }
    -
    -                  /* Append checksum */
    -                  memcpy((char*)(*buf)+nbytes, cksum, 16);
    -                  return nbytes+16;
    -                  }
    -                  #else
    -                  return 0; /*fail*/
    -                  #endif
    -                  }
    -	          
    -
    - -

Once the filter function is defined it must be registered so - the HDF5 library knows about it. Since we're testing this - filter we choose one of the H5Z_filter_t numbers - from the range reserved for testing (256-511). We'll randomly choose 305. - 

    -

    - - - - -
    -

    
    -                  #define FILTER_MD5 305
    -                  herr_t status = H5Zregister(FILTER_MD5, "md5 checksum", md5_filter);
    -	          
    -
    - -

Now we can use the filter in a pipeline. We could have added - the filter to the pipeline before defining or registering the - filter as long as the filter was defined and registered by the time - we tried to use it (if the filter is marked as optional then we - could have used it without defining it and the library would - have automatically removed it from the pipeline for each chunk - written before the filter was defined and registered). - 

    -

    - - - - -
    -

    
    -                  hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    -                  hsize_t chunk_size[3] = {10,10,10};
    -                  H5Pset_chunk(dcpl, 3, chunk_size);
    -                  H5Pset_filter(dcpl, FILTER_MD5, 0, 0, NULL);
    -                  hid_t dset = H5Dcreate(file, "dset", H5T_NATIVE_DOUBLE, space, dcpl);
    -	          
    -
    - -

    6. Filter Diagnostics

    - -

    If the library is compiled with debugging turned on for the H5Z - layer (usually as a result of configure - --enable-debug=z) then filter statistics are printed when - the application exits normally or the library is closed. The - statistics are written to the standard error stream and include - two lines for each filter that was used: one for input and one - for output. The following fields are displayed: - -

    -

Method: This is the name of the method as defined with H5Zregister() with the characters "<" or ">" prepended to indicate input or output.
Total: The total number of bytes processed by the filter including errors. This is the maximum of the nbytes argument or the return value.
Errors: This field shows the number of bytes of the Total column which can be attributed to errors.
User, System, Elapsed: These are the amount of user time, system time, and elapsed time in seconds spent in the filter function. Elapsed time is sensitive to system load. These times may be zero on operating systems that don't support the required operations.
Bandwidth: This is the filter bandwidth, which is the total number of bytes processed divided by elapsed time. Since elapsed time is subject to system load the bandwidth numbers cannot always be trusted. Furthermore, the bandwidth includes bytes attributed to errors, which may significantly taint the value if the function is able to detect errors without much expense.
    - -

    -

    - - - - - -
    - Example: Filter Statistics -
    -

H5Z: filter statistics accumulated over life of library:
    -                  Method     Total  Errors  User  System  Elapsed Bandwidth
    -                  ------     -----  ------  ----  ------  ------- ---------
    -                  >deflate  160000   40000  0.62    0.74     1.33 117.5 kBs
    -                  <deflate  120000       0  0.11    0.00     0.12 1.000 MBs
    -	          
    -
    - -
    - - -

    Footnote 1: Dataset chunks can be compressed - through the use of filters. Developers should be aware that - reading and rewriting compressed chunked data can result in holes - in an HDF5 file. In time, enough such holes can increase the - file size enough to impair application or library performance - when working with that file. See - - Freespace Management - in the chapter - - Performance Analysis and Issues.

    - diff --git a/doxygen/examples/H5.format.1.0.html b/doxygen/examples/H5.format.1.0.html index 32e377d4323..00da963c48e 100644 --- a/doxygen/examples/H5.format.1.0.html +++ b/doxygen/examples/H5.format.1.0.html @@ -3441,8 +3441,8 @@

    Name: Data Storage - Filter Pipeline

    library. Values 256 through 511 have been set aside for use when developing/testing new filters. The remaining values are allocated to specific filters by contacting the - HDF5 Development - Team. + HDF5 development team. + diff --git a/doxygen/examples/H5.format.1.1.html b/doxygen/examples/H5.format.1.1.html index 707bdc7c281..418afd5ab88 100644 --- a/doxygen/examples/H5.format.1.1.html +++ b/doxygen/examples/H5.format.1.1.html @@ -5558,9 +5558,9 @@

    Name: Data Storage - Filter Pipeline

    1If you are reading an earlier version of this document, this link may have changed. If the link does not work, use the latest version of this document - on The HDF Group’s website, - - https://support.hdfgroup.org/HDF5/doc/H5.format.html; + on The HDF Group’s website, + + H5.format.html; the link there will always be correct. (Return)

    diff --git a/doxygen/examples/H5DS_Spec.pdf b/doxygen/examples/H5DS_Spec.pdf new file mode 100644 index 00000000000..813f4ded3e1 Binary files /dev/null and b/doxygen/examples/H5DS_Spec.pdf differ diff --git a/doxygen/examples/IOFlow.html b/doxygen/examples/IOFlow.html index e890edbb766..b33196d502a 100644 --- a/doxygen/examples/IOFlow.html +++ b/doxygen/examples/IOFlow.html @@ -1,5 +1,4 @@ - HDF5 Raw I/O Flow Notes diff --git a/doxygen/examples/LibraryReleaseVersionNumbers.html b/doxygen/examples/LibraryReleaseVersionNumbers.html index 57b211cd61b..dedbece0c11 100644 --- a/doxygen/examples/LibraryReleaseVersionNumbers.html +++ b/doxygen/examples/LibraryReleaseVersionNumbers.html @@ -241,7 +241,7 @@

    Version Support from the Library<

    For more information on these and other function calls and macros, - see the HDF5 Reference Manual.

    + see the HDF5 Reference Manual.

    Use Cases

    diff --git a/doxygen/examples/intro_SWMR.html b/doxygen/examples/intro_SWMR.html deleted file mode 100644 index b1adb62bdb5..00000000000 --- a/doxygen/examples/intro_SWMR.html +++ /dev/null @@ -1,103 +0,0 @@ - - - Introduction to Single-Writer_Multiple-Reader (SWMR) - -

    Introduction to SWMR

    -

    The Single-Writer / Multiple-Reader (SWMR) feature enables multiple processes to read an HDF5 file while it is being written to (by a single process) without using locks or requiring communication between processes.

    -

    tutr-swmr1.png -

    All communication between processes must be performed via the HDF5 file. The HDF5 file under SWMR access must reside on a system that complies with POSIX write() semantics.

    -

    The basic engineering challenge for this to work was to ensure that the readers of an HDF5 file always see a coherent (though possibly not up to date) HDF5 file.

    -

    The issue is that when writing data there is information in the metadata cache in addition to the physical file on disk:

    -

    tutr-swmr2.png -

    However, the readers can only see the state contained in the physical file:

    -

    tutr-swmr3.png -

    The SWMR solution implements dependencies on when the metadata can be flushed to the file. This ensures that metadata cache flush operations occur in the proper order, so that there will never be internal file pointers in the physical file that point to invalid (unflushed) file addresses.

    -

    A beneficial side effect of using SWMR access is better fault tolerance. It is more difficult to corrupt a file when using SWMR.

    -

    Documentation

    -

    SWMR User's Guide

    -

    HDF5 Library APIs

    - -

    Tools

    - -

    Design Documents

    -


    -

    Programming Model

    -

    Please be aware that the SWMR feature requires that an HDF5 file be created with the latest file format. See H5P_SET_LIBVER_BOUNDS for more information.

    -

To use SWMR follow the general programming model for creating and accessing HDF5 files and objects along with the steps described below.

    -

    SWMR Writer:

    -

    The SWMR writer either opens an existing file and objects or creates them as follows.

    -

    Open an existing file:

    -

    Call H5Fopen using the H5F_ACC_SWMR_WRITE flag. -Begin writing datasets. -Periodically flush data. -Create a new file:

    -

    Call H5Fcreate using the latest file format. -Create groups, datasets and attributes, and then close the attributes. -Call H5F_START_SWMR_WRITE to start SWMR access to the file. -Periodically flush data.

    -

    Example Code:

    -

    Create the file using the latest file format property:

    -

    - fapl = H5Pcreate (H5P_FILE_ACCESS); - status = H5Pset_libver_bounds (fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST); - fid = H5Fcreate (filename, H5F_ACC_TRUNC, H5P_DEFAULT, fapl); -[Create objects (files, datasets, ...). Close any attributes and named datatype objects. Groups and datasets may remain open before starting SWMR access to them.]

    -

    Start SWMR access to the file:

    -

    status = H5Fstart_swmr_write (fid); -Reopen the datasets and start writing, periodically flushing data:

    -

    status = H5Dwrite (dset_id, ...); - status = H5Dflush (dset_id);

    -

    SWMR Reader:

    -

    The SWMR reader must continually poll for new data:

    -

    Call H5Fopen using the H5F_ACC_SWMR_READ flag. -Poll, checking the size of the dataset to see if there is new data available for reading. -Read new data, if any.

    -

    Example Code:

    -

    Open the file using the SWMR read flag:

    -

    fid = H5Fopen (filename, H5F_ACC_RDONLY | H5F_ACC_SWMR_READ, H5P_DEFAULT); -Open the dataset and then repeatedly poll the dataset, by getting the dimensions, reading new data, and refreshing:

    -

    dset_id = H5Dopen (...); - space_id = H5Dget_space (...); - while (...) { - status = H5Dread (dset_id, ...); - status = H5Drefresh (dset_id); - space_id = H5Dget_space (...); - }
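A more complete sketch of that polling loop (the dataset name `/data`, the integer element type, and the one-second interval are illustrative assumptions):

```c
#include <hdf5.h>
#include <stdlib.h>
#include <unistd.h>

/* Poll a growing 1-D integer dataset in a file opened with
 * H5F_ACC_RDONLY | H5F_ACC_SWMR_READ. Error checking omitted. */
void poll_dataset(hid_t fid)
{
    hid_t   dset = H5Dopen2(fid, "/data", H5P_DEFAULT);
    hsize_t last = 0;

    for (;;) {
        H5Drefresh(dset); /* pick up metadata flushed by the writer */

        hid_t   fspace = H5Dget_space(dset);
        hsize_t dims[1];
        H5Sget_simple_extent_dims(fspace, dims, NULL);

        if (dims[0] > last) { /* new elements have appeared */
            hsize_t start[1] = {last};
            hsize_t count[1] = {dims[0] - last};
            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);

            hid_t mspace = H5Screate_simple(1, count, NULL);
            int  *buf    = malloc((size_t)count[0] * sizeof(int));
            H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);
            /* ...process the new elements in buf... */
            free(buf);
            H5Sclose(mspace);
            last = dims[0];
        }
        H5Sclose(fspace);
        sleep(1); /* arbitrary polling interval */
    }
}
```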

    -

    Limitations and Scope

    -

    An HDF5 file under SWMR access must reside on a system that complies with POSIX write() semantics. It is also limited in scope as follows:

    -

The writer process is only allowed to modify raw data of existing datasets by:

    -

Appending data along any unlimited dimension. -Modifying existing data. -The following operations are not allowed (and the corresponding HDF5 calls will fail):

    -

    The writer cannot add new objects to the file. -The writer cannot delete objects in the file. -The writer cannot modify or append data with variable length, string or region reference datatypes. -File space recycling is not allowed. As a result the size of a file modified by a SWMR writer may be larger than a file modified by a non-SWMR writer.

    -

    Tools for Working with SWMR

    -

    Two new tools, h5watch and h5clear, are available for use with SWMR. The other HDF5 utilities have also been modified to recognize SWMR:

    -

The h5watch tool allows a user to monitor the growth of a dataset. -The h5clear tool clears the status flags in the superblock of an HDF5 file. -The rest of the HDF5 tools will exit gracefully when they encounter SWMR access but do not otherwise support it.

    -

    Programming Example

    -

    A good example of using SWMR is included with the HDF5 tests in the source code. You can run it while reading the file it creates. If you then interrupt the application and reader and look at the resulting file, you will see that the file is still valid. Follow these steps:

    -

    Download the HDF5-1.10 source code to a local directory on a filesystem (that complies with POSIX write() semantics). Build the software. No special configuration options are needed to use SWMR.

    -

    Invoke two command terminal windows. In one window go into the bin/ directory of the built binaries. In the other window go into the test/ directory of the HDF5-1.10 source code that was just built.

    -

    In the window in the test/ directory compile and run use_append_chunk.c. The example writes a three dimensional dataset by planes (with chunks of size 1 x 256 x 256).

    -

    In the other window (in the bin/ directory) run h5watch on the file created by use_append_chunk.c (use_append_chunk.h5). It should be run while use_append_chunk is executing and you will see valid data displayed with h5watch.

    -

    Interrupt use_append_chunk while it is running, and stop h5watch.

    -

    Use h5clear to clear the status flags in the superblock of the HDF5 file (use_append_chunk.h5).

    -

    View the file with h5dump. You will see that it is a valid file even though the application did not close properly. It will contain data up to the point that it was interrupted.

    - - diff --git a/doxygen/examples/intro_VDS.html b/doxygen/examples/intro_VDS.html deleted file mode 100644 index 6e573b9b75c..00000000000 --- a/doxygen/examples/intro_VDS.html +++ /dev/null @@ -1,72 +0,0 @@ - - - Introduction to the Virtual Dataset - VDS - -

    The HDF5 Virtual Dataset (VDS) feature enables users to access data in a collection of HDF5 files as a single HDF5 dataset and to use the HDF5 APIs to work with that dataset.

    -

    For example, your data may be collected into four files:

    - -

    tutrvds-multimgs.png - -

    You can map the datasets in the four files into a single VDS that can be accessed just like any other dataset:

    - -

    tutrvds-snglimg.png - -

The mapping between a VDS and the HDF5 source datasets is persistent and transparent to an application. If a source file is missing, the fill value will be displayed.

    -

See the Virtual Dataset (VDS) Documentation for complete details regarding the VDS feature.

    -

    The VDS feature was implemented using hyperslab selection (H5S_SELECT_HYPERSLAB). See the tutorial on Reading From or Writing to a Subset of a Dataset for more information on selecting hyperslabs.

    -

Programming Model -To create a Virtual Dataset you simply follow the HDF5 programming model and add a few additional API calls to map the source datasets to the VDS.

    -

    Following are the steps for creating a Virtual Dataset:

    -

    Create the source datasets that will comprise the VDS -Create the VDS: ‐ Define a datatype and dataspace (can be unlimited) -‐ Define the dataset creation property list (including fill value) -‐ (Repeat for each source dataset) Map elements from the source dataset to elements of the VDS: -Select elements in the source dataset (source selection) -Select elements in the virtual dataset (destination selection) -Map destination selections to source selections (see Functions for Working with a VDS)

    -

    ‐ Call H5Dcreate using the properties defined above -Access the VDS as a regular HDF5 dataset -Close the VDS when finished

    -

    Functions for Working with a VDS -The H5P_SET_VIRTUAL API sets the mapping between virtual and source datasets. This is a dataset creation property list. Using this API will change the layout of the dataset to H5D_VIRTUAL. As with specifying any dataset creation property list, an instance of the property list is created, modified, passed into the dataset creation call and then closed:

    -

    dcpl = H5Pcreate (H5P_DATASET_CREATE);

    -

src_space = H5Screate_simple ...
status = H5Sselect_hyperslab (space, ...
status = H5Pset_virtual (dcpl, space, SRC_FILE[i], SRC_DATASET[i], src_space);

    -

    dset = H5Dcreate2 (file, DATASET, H5T_NATIVE_INT, space, H5P_DEFAULT, dcpl, H5P_DEFAULT);

    -

status = H5Pclose (dcpl);

There are several other APIs introduced with Virtual Datasets, including query functions. For details, see the complete list of HDF5 library APIs that support Virtual Datasets.
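As a rough sketch (not from the original page) of how the query functions can be used, assuming dset is an open handle to a virtual dataset and fixed-size name buffers are acceptable:

    hid_t  dcpl  = H5Dget_create_plist (dset);
    size_t nmaps = 0;

    if (H5Pget_layout (dcpl) == H5D_VIRTUAL) {        /* layout is virtual   */
        H5Pget_virtual_count (dcpl, &nmaps);          /* number of mappings  */
        for (size_t i = 0; i < nmaps; i++) {
            char srcfile[256], srcdset[256];
            H5Pget_virtual_filename (dcpl, i, srcfile, sizeof (srcfile));
            H5Pget_virtual_dsetname (dcpl, i, srcdset, sizeof (srcdset));
            printf ("Mapping %zu: %s : %s\n", i, srcfile, srcdset); /* needs <stdio.h> */
        }
    }
    H5Pclose (dcpl);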

    -

Limitations
This feature requires HDF5-1.10.
The number of source datasets is unlimited. However, there is a limit on the size of each source dataset.

    -

Programming Examples

Example 1
This example creates three HDF5 files, each with a one-dimensional dataset of 6 elements. The datasets in these files are the source datasets that are then used to create a 4 x 6 Virtual Dataset with a fill value of -1. The first three rows of the VDS are mapped to the data from the three source datasets as shown below:

    -

    tutrvds-ex.png

    -

    In this example the three source datasets are mapped to the VDS with this code:

    -
src_space = H5Screate_simple (RANK1, dims, NULL);
for (i = 0; i < 3; i++) {
    start[0] = (hsize_t)i;
    /* Select i-th row in the virtual dataset; selection in the source datasets is the same. */
    status = H5Sselect_hyperslab (space, H5S_SELECT_SET, start, NULL, count, block);
    status = H5Pset_virtual (dcpl, space, SRC_FILE[i], SRC_DATASET[i], src_space);
}
    -
    -

    After the VDS is created and closed, it is reopened. The property list is then queried to determine the layout of the dataset and its mappings, and the data in the VDS is read and printed.

    -

    This example is in the HDF5 source code and can be obtained from here:

    -

    C Example

    -

    For details on compiling an HDF5 application: [ Compiling HDF5 Applications ]

    -

Example 2
This example shows how to use a C-style printf format string to specify multiple source datasets as one virtual dataset. Only one mapping is required; in other words, only one H5P_SET_VIRTUAL call is needed to map multiple datasets. It creates a 2-dimensional unlimited VDS. Then it re-opens the file, makes queries, and reads the virtual dataset.

    -

    The source datasets are specified as A-0, A-1, A-2, and A-3. These are mapped to the virtual dataset with one call:

    -
status = H5Pset_virtual (dcpl, vspace, SRCFILE, "/A-%b", src_space);
    -
    -

The %b indicates that the block count of the selection in that dimension should be used.
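As an illustrative sketch of the surrounding setup (the 2-D shape and the DIM0 x DIM1 source extent are assumptions, not the exact code of this example): the virtual dataspace is created with an unlimited dimension, and an unlimited hyperslab selection supplies the block count that %b expands to:

    hsize_t vdims[2]     = {0, DIM1};                /* current extent        */
    hsize_t vdims_max[2] = {H5S_UNLIMITED, DIM1};    /* unlimited dimension 0 */
    hsize_t start[2]     = {0, 0};
    hsize_t count[2]     = {H5S_UNLIMITED, 1};       /* unlimited block count */
    hsize_t block[2]     = {DIM0, DIM1};             /* one source dataset per block */

    hid_t vspace = H5Screate_simple (2, vdims, vdims_max);
    /* Each block of this unlimited selection maps to /A-0, /A-1, /A-2, ... */
    status = H5Sselect_hyperslab (vspace, H5S_SELECT_SET, start, NULL, count, block);
    status = H5Pset_virtual (dcpl, vspace, SRCFILE, "/A-%b", src_space);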

    -

    C Example

    -

    For details on compiling an HDF5 application: [ Compiling HDF5 Applications ]

    -

Using h5dump with a VDS
The h5dump utility can be used to view a VDS. The h5dump output for a VDS looks exactly like that for any other dataset. If h5dump cannot find a source dataset, then the fill value will be displayed.

    -

You can determine that a dataset is a VDS by looking at its properties with h5dump -p. It will display each source dataset mapping, beginning with Mapping 0. Below is an excerpt of the output of h5dump -p on the vds.h5 file created in Example 1. You can see that the entire source file a.h5 is mapped to the first row of the /VDS dataset:

    - -

    tutrvds-map.png

    - diff --git a/doxygen/examples/tables/propertyLists.dox b/doxygen/examples/tables/propertyLists.dox index 340e13c26a5..76727b58a59 100644 --- a/doxygen/examples/tables/propertyLists.dox +++ b/doxygen/examples/tables/propertyLists.dox @@ -490,12 +490,12 @@ and one raw data file. #H5Pget_filter Returns information about a filter in a pipeline. -The C function is a macro: \see \ref api-compat-macros. +The C function is a macro: @see @ref api-compat-macros. #H5Pget_filter_by_id Returns information about the specified filter. -The C function is a macro: \see \ref api-compat-macros. +The C function is a macro: @see @ref api-compat-macros. #H5Pmodify_filter @@ -739,12 +739,12 @@ of the library for reading or writing the actual data. #H5Pget_filter Returns information about a filter in a pipeline. The -C function is a macro: \see \ref api-compat-macros. +C function is a macro: @see @ref api-compat-macros. #H5Pget_filter_by_id Returns information about the specified filter. The -C function is a macro: \see \ref api-compat-macros. +C function is a macro: @see @ref api-compat-macros. #H5Pget_nfilters diff --git a/doxygen/hdf5doxy_layout.xml b/doxygen/hdf5doxy_layout.xml index d895b2dd5bd..20e951856c7 100644 --- a/doxygen/hdf5doxy_layout.xml +++ b/doxygen/hdf5doxy_layout.xml @@ -5,12 +5,12 @@ - + + --> diff --git a/hl/src/H5DOpublic.h b/hl/src/H5DOpublic.h index 661ca7a2abe..09a8f64829f 100644 --- a/hl/src/H5DOpublic.h +++ b/hl/src/H5DOpublic.h @@ -161,7 +161,7 @@ H5_HLDLL herr_t H5DOappend(hid_t dset_id, hid_t dxpl_id, unsigned axis, size_t e * from one datatype to another, and the filter pipeline to write the chunk. * Developers should have experience with these processes before * using this function. Please see - * + * * Using the Direct Chunk Write Function * for more information. * diff --git a/hl/src/H5DSpublic.h b/hl/src/H5DSpublic.h index 4afe51180f9..6a08be8e5c2 100644 --- a/hl/src/H5DSpublic.h +++ b/hl/src/H5DSpublic.h @@ -117,7 +117,7 @@ H5_HLDLL herr_t H5DSwith_new_ref(hid_t obj_id, hbool_t *with_new_ref); * * Entries are created in the #DIMENSION_LIST and * #REFERENCE_LIST attributes, as defined in section 4.2 of - * + * * HDF5 Dimension Scale Specification. * * Fails if: @@ -147,7 +147,7 @@ H5_HLDLL herr_t H5DSattach_scale(hid_t did, hid_t dsid, unsigned int idx); * dimension \p idx of dataset \p did. This deletes the entries in the * #DIMENSION_LIST and #REFERENCE_LIST attributes, * as defined in section 4.2 of - * + * * HDF5 Dimension Scale Specification. * * Fails if: @@ -180,7 +180,7 @@ H5_HLDLL herr_t H5DSdetach_scale(hid_t did, hid_t dsid, unsigned int idx); * as defined above. Creates the CLASS attribute, set to the value * "DIMENSION_SCALE" and an empty #REFERENCE_LIST attribute, * as described in - * + * * HDF5 Dimension Scale Specification. * (PDF, see section 4.2). * diff --git a/hl/src/H5LTpublic.h b/hl/src/H5LTpublic.h index 18f7502209f..514fe244e10 100644 --- a/hl/src/H5LTpublic.h +++ b/hl/src/H5LTpublic.h @@ -1386,8 +1386,8 @@ H5_HLDLL herr_t H5LTget_attribute_info(hid_t loc_id, const char *obj_name, const * \p lang_type definition of HDF5 datatypes. * Currently, only the DDL(#H5LT_DDL) is supported. * The complete DDL definition of HDF5 datatypes can be found in - * the last chapter of the - * + * the specifications chapter of the + * * HDF5 User's Guide. * * \par Example @@ -1424,8 +1424,8 @@ H5_HLDLL hid_t H5LTtext_to_dtype(const char *text, H5LT_lang_t lang_type); * * Currently only DDL (#H5LT_DDL) is supported for \p lang_type. 
* The complete DDL definition of HDF5 data types can be found in - * the last chapter of the - * + * the specifications chapter of the + * * HDF5 User's Guide. * * \par Example @@ -1625,7 +1625,7 @@ H5_HLDLL htri_t H5LTpath_valid(hid_t loc_id, const char *path, hbool_t check_obj * \note **Recommended Reading:** * \note This function is part of the file image operations feature set. * It is highly recommended to study the guide - * + * * HDF5 File Image Operations before using this feature set.\n * See the “See Also” section below for links to other elements of * HDF5 file image operations. diff --git a/java/src/hdf/overview.html b/java/src/hdf/overview.html index 84e945b2f87..8329277cda7 100644 --- a/java/src/hdf/overview.html +++ b/java/src/hdf/overview.html @@ -91,6 +91,6 @@

    and the HDF5 library.

    To Obtain

    -The JHI5 is included with the HDF5 library. +The JHI5 is included with the HDF5 library. diff --git a/java/src/jni/exceptionImp.c b/java/src/jni/exceptionImp.c index 4cf03ac9f28..6b2004ddeb4 100644 --- a/java/src/jni/exceptionImp.c +++ b/java/src/jni/exceptionImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5Constants.c b/java/src/jni/h5Constants.c index 41395a4413f..aeec71fb9f4 100644 --- a/java/src/jni/h5Constants.c +++ b/java/src/jni/h5Constants.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5Imp.c b/java/src/jni/h5Imp.c index 898b52ad3ed..6092419c256 100644 --- a/java/src/jni/h5Imp.c +++ b/java/src/jni/h5Imp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5aImp.c b/java/src/jni/h5aImp.c index 54c862eff6c..b6ed1c4c3e1 100644 --- a/java/src/jni/h5aImp.c +++ b/java/src/jni/h5aImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5dImp.c b/java/src/jni/h5dImp.c index f6318b222d4..363936b76e9 100644 --- a/java/src/jni/h5dImp.c +++ b/java/src/jni/h5dImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5eImp.c b/java/src/jni/h5eImp.c index d52a4f72cd0..89c9362626f 100644 --- a/java/src/jni/h5eImp.c +++ b/java/src/jni/h5eImp.c @@ -21,9 +21,6 @@ extern "C" { * Each routine wraps a single HDF entry point, generally with the * analogous arguments and return codes. * - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * */ #include diff --git a/java/src/jni/h5fImp.c b/java/src/jni/h5fImp.c index 9295383ef4d..6bd17a786cb 100644 --- a/java/src/jni/h5fImp.c +++ b/java/src/jni/h5fImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5gImp.c b/java/src/jni/h5gImp.c index fce68022649..54b72b6c09a 100644 --- a/java/src/jni/h5gImp.c +++ b/java/src/jni/h5gImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5iImp.c b/java/src/jni/h5iImp.c index de70e1e424f..728c3b14ed5 100644 --- a/java/src/jni/h5iImp.c +++ b/java/src/jni/h5iImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5jni.h b/java/src/jni/h5jni.h index ad867083ba9..b1bd968ba7c 100644 --- a/java/src/jni/h5jni.h +++ b/java/src/jni/h5jni.h @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #include #include "H5version.h" #include diff --git a/java/src/jni/h5lImp.c b/java/src/jni/h5lImp.c index 0d9ac7dfc01..7d487999f96 100644 --- a/java/src/jni/h5lImp.c +++ b/java/src/jni/h5lImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5oImp.c b/java/src/jni/h5oImp.c index 15daeafde6b..60a6e4fbf90 100644 --- a/java/src/jni/h5oImp.c +++ b/java/src/jni/h5oImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pACPLImp.c b/java/src/jni/h5pACPLImp.c index 4635fa7373b..7c9895a6de1 100644 --- a/java/src/jni/h5pACPLImp.c +++ b/java/src/jni/h5pACPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pDAPLImp.c b/java/src/jni/h5pDAPLImp.c index 01c3983c2cc..44378a1dc5e 100644 --- a/java/src/jni/h5pDAPLImp.c +++ b/java/src/jni/h5pDAPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pDCPLImp.c b/java/src/jni/h5pDCPLImp.c index a624fd96987..ebe12cb5455 100644 --- a/java/src/jni/h5pDCPLImp.c +++ b/java/src/jni/h5pDCPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pDXPLImp.c b/java/src/jni/h5pDXPLImp.c index 31f6d02b860..3b519ef2709 100644 --- a/java/src/jni/h5pDXPLImp.c +++ b/java/src/jni/h5pDXPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pFAPLImp.c b/java/src/jni/h5pFAPLImp.c index af56336fb55..24b7f357e50 100644 --- a/java/src/jni/h5pFAPLImp.c +++ b/java/src/jni/h5pFAPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pFCPLImp.c b/java/src/jni/h5pFCPLImp.c index 7c1b44add5f..56b4e921aae 100644 --- a/java/src/jni/h5pFCPLImp.c +++ b/java/src/jni/h5pFCPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pGAPLImp.c b/java/src/jni/h5pGAPLImp.c index 0ee65710ac5..b38bd4b3b23 100644 --- a/java/src/jni/h5pGAPLImp.c +++ b/java/src/jni/h5pGAPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pGCPLImp.c b/java/src/jni/h5pGCPLImp.c index 49d79dc2366..b71558012ce 100644 --- a/java/src/jni/h5pGCPLImp.c +++ b/java/src/jni/h5pGCPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pImp.c b/java/src/jni/h5pImp.c index c952ccb9dff..6c17984ae24 100644 --- a/java/src/jni/h5pImp.c +++ b/java/src/jni/h5pImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pLAPLImp.c b/java/src/jni/h5pLAPLImp.c index 3048c155413..36813e33fc9 100644 --- a/java/src/jni/h5pLAPLImp.c +++ b/java/src/jni/h5pLAPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pLCPLImp.c b/java/src/jni/h5pLCPLImp.c index ecabadd29bc..e27a9eb1570 100644 --- a/java/src/jni/h5pLCPLImp.c +++ b/java/src/jni/h5pLCPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pOCPLImp.c b/java/src/jni/h5pOCPLImp.c index 7cd9b5c721f..a743cbaa7f4 100644 --- a/java/src/jni/h5pOCPLImp.c +++ b/java/src/jni/h5pOCPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pOCpyPLImp.c b/java/src/jni/h5pOCpyPLImp.c index c4d2ed7fd14..a78aaa259f0 100644 --- a/java/src/jni/h5pOCpyPLImp.c +++ b/java/src/jni/h5pOCpyPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5pStrCPLImp.c b/java/src/jni/h5pStrCPLImp.c index 0045efa342e..3382f0aea30 100644 --- a/java/src/jni/h5pStrCPLImp.c +++ b/java/src/jni/h5pStrCPLImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5plImp.c b/java/src/jni/h5plImp.c index 3c87fd52a99..9632e9e2609 100644 --- a/java/src/jni/h5plImp.c +++ b/java/src/jni/h5plImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5rImp.c b/java/src/jni/h5rImp.c index f97f803f90e..4ccad5457a2 100644 --- a/java/src/jni/h5rImp.c +++ b/java/src/jni/h5rImp.c @@ -10,11 +10,6 @@ * help@hdfgroup.org. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5sImp.c b/java/src/jni/h5sImp.c index 55fb268434f..738db67ffee 100644 --- a/java/src/jni/h5sImp.c +++ b/java/src/jni/h5sImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5tImp.c b/java/src/jni/h5tImp.c index 309454b16e4..316455715ac 100644 --- a/java/src/jni/h5tImp.c +++ b/java/src/jni/h5tImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5util.c b/java/src/jni/h5util.c index 9c441729a39..fb619aa619d 100644 --- a/java/src/jni/h5util.c +++ b/java/src/jni/h5util.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5util.h b/java/src/jni/h5util.h index 5af96afaee9..011aaec428f 100644 --- a/java/src/jni/h5util.h +++ b/java/src/jni/h5util.h @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifndef H5UTIL_H__ #define H5UTIL_H__ diff --git a/java/src/jni/h5vlImp.c b/java/src/jni/h5vlImp.c index 2bf0b8d6b0a..47e532a5609 100644 --- a/java/src/jni/h5vlImp.c +++ b/java/src/jni/h5vlImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/h5zImp.c b/java/src/jni/h5zImp.c index e6d37bfa3af..9c387fa33ee 100644 --- a/java/src/jni/h5zImp.c +++ b/java/src/jni/h5zImp.c @@ -10,12 +10,6 @@ * help@hdfgroup.org. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ - #ifdef __cplusplus extern "C" { #endif /* __cplusplus */ diff --git a/java/src/jni/nativeData.c b/java/src/jni/nativeData.c index d25951ff436..d014b64579d 100644 --- a/java/src/jni/nativeData.c +++ b/java/src/jni/nativeData.c @@ -10,11 +10,6 @@ * help@hdfgroup.org. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ -/* - * For details of the HDF libraries, see the HDF Documentation at: - * https://portal.hdfgroup.org/documentation/index.html - * - */ /* * This module contains the implementation of all the native methods * used for number conversion. This is represented by the Java diff --git a/release_docs/INSTALL b/release_docs/INSTALL index 9373192912b..63f2115fdd6 100644 --- a/release_docs/INSTALL +++ b/release_docs/INSTALL @@ -49,7 +49,7 @@ CONTENTS include the Szip library with the encoder enabled. These can be found here: - https://www.hdfgroup.org/downloads/hdf5/ + https://support.hdfgroup.org/downloads/HDF5 Please notice that if HDF5 configure cannot find a valid Szip library, configure will not fail; in this case, the compression filter will diff --git a/release_docs/INSTALL_Autotools.txt b/release_docs/INSTALL_Autotools.txt index a2c948198e8..0a2400dc096 100644 --- a/release_docs/INSTALL_Autotools.txt +++ b/release_docs/INSTALL_Autotools.txt @@ -334,7 +334,7 @@ III. Full installation instructions for source distributions (or '--with-pthread=DIR') flag to the configure script. For further information, see: - https://portal.hdfgroup.org/display/knowledge/Questions+about+thread-safety+and+concurrent+access + https://support.hdfgroup.org/documentation/HDF5/Questions+about+thread-safety+and+concurrent+access The high-level, C++, Fortran and Java interfaces are not compatible with the thread-safety option because the lock is not hoisted @@ -492,7 +492,7 @@ IV. Using the Library For information on using HDF5 see the documentation, tutorials and examples found here: - https://portal.hdfgroup.org/documentation/index.html + https://support.hdfgroup.org/documentation/HDF5/index.html A summary of the features included in the built HDF5 installation can be found in the libhdf5.settings file in the same directory as the static and/or diff --git a/release_docs/INSTALL_CMake.txt b/release_docs/INSTALL_CMake.txt index a86bae4bab6..b2bd84c20f3 100644 --- a/release_docs/INSTALL_CMake.txt +++ b/release_docs/INSTALL_CMake.txt @@ -59,7 +59,7 @@ HDF Group recommends using the ctest script mode to build HDF5. ------------------------------------------------------------------------- Individual files needed as mentioned in this document ------------------------------------------------------------------------- -Download from https://github.com/HDFGroup/hdf5/tree/master/config/cmake/scripts: +Download from https://github.com/HDFGroup/hdf5/blob/develop/config/cmake/scripts: CTestScript.cmake -- CMake build script HDF5config.cmake -- CMake configuration script diff --git a/release_docs/INSTALL_parallel b/release_docs/INSTALL_parallel index 9eb486f79d2..df255c6e0ad 100644 --- a/release_docs/INSTALL_parallel +++ b/release_docs/INSTALL_parallel @@ -90,7 +90,7 @@ nodes. They would probably work for other Cray systems but have not been verified. Obtain the HDF5 source code: - https://portal.hdfgroup.org/display/support/Downloads + https://support.hdfgroup.org/downloads/HDF5 The entire build process should be done on a MOM node in an interactive allocation and on a file system accessible by all compute nodes. Request an interactive allocation with qsub: diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt index aa681ddf791..89ed27fa5dc 100644 --- a/release_docs/RELEASE.txt +++ b/release_docs/RELEASE.txt @@ -15,16 +15,16 @@ final release. 
Links to HDF5 documentation can be found on: - https://portal.hdfgroup.org/documentation/ + https://support.hdfgroup.org/documentation/HDF5 The official HDF5 releases can be obtained from: - https://www.hdfgroup.org/downloads/hdf5/ + https://support.hdfgroup.org/downloads/HDF5/ Changes from Release to Release and New Features in the HDF5-1.16.x release series can be found at: - https://portal.hdfgroup.org/documentation/hdf5-docs/release_specific_info.html + https://support.hdfgroup.org/documentation/HDF5/release_specific_info.html If you have any questions or comments, please send them to the HDF Help Desk: diff --git a/release_docs/RELEASE_PROCESS.md b/release_docs/RELEASE_PROCESS.md index 047183b6026..3e876d34369 100644 --- a/release_docs/RELEASE_PROCESS.md +++ b/release_docs/RELEASE_PROCESS.md @@ -18,7 +18,7 @@ Maintenance releases are always forward compatible with regards to the HDF5 file - HDF5 libraries and command line utilities can access files created by future maintenance versions of the library. Note that maintenance releases are NOT guaranteed to be interface-compatible, meaning that, on occasion, application source code will need updated and re-compiled against a new maintenance release when the interface changes. Interface changes are only made when absolutely necessary as deemed by the HDF5 product manager(s), and interface compatibility reports are published with each release to inform customers and users of any incompatibilities in the interface. -For more information on the HDF5 versioning and backward and forward compatibility issues, see the [API Compatibility Macros](https://hdfgroup.github.io/hdf5/develop/api-compat-macros.html) on the public website. +For more information on the HDF5 versioning and backward and forward compatibility issues, see the [API Compatibility Macros][u13] on the public website. ## Participants: - Product Manager — The individual responsible for the overall direction and development of a software product at The HDF Group. @@ -35,21 +35,21 @@ For more information on the HDF5 versioning and backward and forward compatibili ### 3. Prepare Release Notes (Release Manager) 1. Confirm that all non-trivial changes made to the source are reflected in the release notes. Verify the following: - [HDF5 Milestones Projects](https://github.com/HDFGroup/hdf5/milestones) - - Each entry in [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt) traces to one or more resolved GH issues marked with FixVersion="X.Y.Z". - - Each resolved GH milestone issue traces to an entry in [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt). + - Each entry in [RELEASE.txt][u1] traces to one or more resolved GH issues marked with FixVersion="X.Y.Z". + - Each resolved GH milestone issue traces to an entry in [RELEASE.txt][u1]. - Each resolved GH milestone issue traces to one or more revisions to the HDF5 source. - Each resolved GH milestone issue traces to one or more pull requests. -2. For each previously authored KNOWN ISSUE in the [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt), if the issue has been resolved or can no longer be confirmed, remove the issue from the [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt). +2. For each previously authored KNOWN ISSUE in the [RELEASE.txt][u1], if the issue has been resolved or can no longer be confirmed, remove the issue from the [RELEASE.txt][u1]. 
- Document any new known issues at the top of the list. -3. Update the TESTED CONFIGURATION FEATURES SUMMARY in [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt) to correspond to features and options that have been tested during the maintenance period by the automated daily regression tests. +3. Update the TESTED CONFIGURATION FEATURES SUMMARY in [RELEASE.txt][u1] to correspond to features and options that have been tested during the maintenance period by the automated daily regression tests. - **See: Testing/Testing Systems(this is a page in confluence)** -4. Update current compiler information for each platform in the PLATFORMS TESTED section of [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt). -5. Review the [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt) for formatting and language to verify that it corresponds to guidelines found in **[Writing Notes in a RELEASE.txt(this is missing)]()** File. -6. Review and update, if needed, the [README](https://github.com/HDFGroup/hdf5/blob/develop/README.md) and [COPYING](https://github.com/HDFGroup/hdf5/blob/develop/COPYING) files. -7. Review and update all INSTALL_* files in [release_docs](https://github.com/HDFGroup/hdf5/tree/develop/release_docs), if needed. - - [INSTALL](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/INSTALL) should be general info and not require extensive changes - - [INSTALL_Autotools.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/INSTALL_Autotools.txt) are the instructions for building under autotools. - - [INSTALL_CMake.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/INSTALL_CMake.txt) are the instructions for building under CMake. +4. Update current compiler information for each platform in the PLATFORMS TESTED section of [RELEASE.txt][u1]. +5. Review the [RELEASE.txt][u1] for formatting and language to verify that it corresponds to guidelines found in **[Writing Notes in a RELEASE.txt(this is missing)]()** File. +6. Review and update, if needed, the [README][u2] and [COPYING][u3] files. +7. Review and update all INSTALL_* files in [release_docs][u4], if needed. + - [INSTALL][u5] should be general info and not require extensive changes + - [INSTALL_Autotools.txt][u6] are the instructions for building under autotools. + - [INSTALL_CMake.txt][u7] are the instructions for building under CMake. ### 4. Freeze Code (Release Manager | Test Automation Team) 1. Transition from performing maintenance on software to preparing for its delivery. @@ -62,14 +62,14 @@ For more information on the HDF5 versioning and backward and forward compatibili ### 5. Update Interface Version (Release Manager | Product Manager) 1. Verify interface additions, changes, and removals, and update the shared library interface version number. 2. Execute the CI snapshot workflow. - - Actions - “[hdf5 release build](https://github.com/HDFGroup/hdf5/blob/develop/.github/workflows/release.yml)” workflow and use the defaults. + - Actions - “[hdf5 release build][u8]” workflow and use the defaults. 3. Download and inspect release build source and binary files. Downloaded source files should build correctly, one or more binaries should install and run correctly. There should be nothing missing nor any extraneous files that aren’t meant for release. -4. 
Verify the interface compatibility reports between the current source and the previous release on the Github [Snapshots](https://github.com/HDFGroup/hdf5/releases/tag/snapshot-1.14) page. - - The compatibility reports are produced by the CI and are viewable in the Github [Releases/snapshot](https://github.com/HDFGroup/hdf5/releases/tag/snapshot) section. -5. Verify the interface compatibility reports between the current source and the previous release on the Github [Snapshots](https://github.com/HDFGroup/hdf5/releases/tag/snapshot-1.14) page. - - The compatibility reports are produced by the CI and are viewable in the Github [Releases/snapshot](https://github.com/HDFGroup/hdf5/releases/tag/snapshot) section. +4. Verify the interface compatibility reports between the current source and the previous release on the Github [Snapshots][u14] page. + - The compatibility reports are produced by the CI and are viewable in the Github [Releases/snapshot][u15] section. +5. Verify the interface compatibility reports between the current source and the previous release on the Github [Snapshots][u14] page. + - The compatibility reports are produced by the CI and are viewable in the Github [Releases/snapshot][u15] section. 6. Confirm the necessity of and approve of any interface-breaking changes. If any changes need to be reverted, task the developer who made the change to do so as soon as possible. If a change is reverted, return to the previous step and regenerate the compatibility report after the changes is made. Otherwise, continue to the next step. -7. Update the .so version numbers in the [config/lt_vers.am](https://github.com/HDFGroup/hdf5/blob/develop/config/lt_vers.am) file in the support branch according to [libtool's library interface version](https://www.gnu.org/software/libtool/manual/libtool.html#Versioning) scheme. +7. Update the .so version numbers in the [config/lt_vers.am][u9] file in the support branch according to [libtool's library interface version](https://www.gnu.org/software/libtool/manual/libtool.html#Versioning) scheme. - See [Updating version info (Libtool)](https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html#Updating-version-info) for rules to help update library version numbers. 8. After the release branch has been created, run `./autogen.sh` to regenerate build system files on the release branch and commit the changes. @@ -83,21 +83,21 @@ For more information on the HDF5 versioning and backward and forward compatibili - or create the new branch in GitHub GUI. 4. Check that required CMake files point to the specific versions of the third-party software (szip, zlib and plugins) that they depend on. - Update as needed. -5. Change the **support** branch to X.Y.{Z+1}-1 using the [bin/h5vers](https://github.com/HDFGroup/hdf5/blob/develop/bin/h5vers) script: +5. Change the **support** branch to X.Y.{Z+1}-1 using the [bin/h5vers][u10] script: - `$ git checkout hdf5_X_Y` - `$ bin/h5vers -s X.Y.{Z+1}-1;` - `$ git commit -m "Updated support branch version number to X.Y.{Z+1}-1"` - `$ git push` -6. Change the **release preparation branch**'s version number to X.Y.Z-{SR+1} using the [bin/h5vers](https://github.com/HDFGroup/hdf5/blob/develop/bin/h5vers) script: +6. 
Change the **release preparation branch**'s version number to X.Y.Z-{SR+1} using the [bin/h5vers][u10] script: - `$ git checkout hdf5_X_Y_Z;` - `$ bin/h5vers -s X.Y.Z-{SR+1};` - `$ git commit -m "Updated release preparation branch version number to X.Y.Z-{SR+1}"` - `$ git push` 7. Update default configuration mode - `$ git checkout hdf5_X_Y_Z;` and `$ bin/switch_maint_mode -disable ./configure.ac` to disable `AM_MAINTAINER_MODE`. - - Need to set option `HDF5_GENERATE_HEADERS` to `OFF`, currently in line 996 of [src/CMakeLists.txt](https://github.com/HDFGroup/hdf5/blob/develop/src/CMakeLists.txt). - - Change the **release preparation branch**'s (i.e. hdf5_X_Y_Z) default configuration mode from development to production in [configure.ac](https://github.com/HDFGroup/hdf5/blob/develop/configure.ac). - - Find “Determine build mode” in [configure.ac](https://github.com/HDFGroup/hdf5/blob/develop/configure.ac). + - Need to set option `HDF5_GENERATE_HEADERS` to `OFF`, currently in line 996 of [src/CMakeLists.txt][u11]. + - Change the **release preparation branch**'s (i.e. hdf5_X_Y_Z) default configuration mode from development to production in [configure.ac][u12]. + - Find “Determine build mode” in [configure.ac][u12]. - Change `default=debug` to `default=production` at the bottom of the `AS_HELP_STRING` for `--enable-build-mode`. - Under `if test "X-$BUILD_MODE" = X- ; then` change `BUILD_MODE=debug` to `BUILD_MODE=production`. - Run `sh ./autogen.sh` to regenerate the UNIX build system files and commit the changes. (use `git status --ignored` to see the changes and `git add -f` to add all files. First delete any new files not to be committed, notably `src/H5public.h~` and `autom4te.cache/`.) @@ -114,7 +114,7 @@ For more information on the HDF5 versioning and backward and forward compatibili 7. Choose the release branch 8. Change ‘Release version tag’ name to 'hdf5_X.Y.Z.P' - P is some pre-release number. -9. Send a message to the HDF forum indicating that a pre-release source package is available for testing at and that feedback from the user community on their test results is being accepted. +9. Send a message to the HDF forum indicating that a pre-release source package is available for testing at and that feedback from the user community on their test results is being accepted. 10. Contact paying clients who are interested in testing the pre-release source package and inform them that it is available for testing and that feedback on their test results of the pre-release is appreciated. 11. This should be automated and currently github binaries are not signed. - Follow the [How to sign binaries with digital certificates(this is missing)]() work instructions to sign each Windows and Mac binary package with a digital certificate. @@ -137,7 +137,7 @@ For more information on the HDF5 versioning and backward and forward compatibili ### 8. Finalize Release Notes (Release Manager) 1. Perform a final review of release notes and ensure that any new changes made to the source, any new known issues discovered, and any additional tests run since the code freeze have been reflected in RELEASE.txt and other appropriate in-source documentation files (INSTALL_*, etc.). (Refer to the sub-steps of step 3 for what to check). -2. Update the [RELEASE.txt](https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt) in the **support** branch (i.e. 
hdf5_X_Y) to remove entries in “Bugs fixed” and “New Features” sections and increment the version number for the following release (“Bug fixes since X.Y.Z” - occurs twice). +2. Update the [RELEASE.txt][u1] in the **support** branch (i.e. hdf5_X_Y) to remove entries in “Bugs fixed” and “New Features” sections and increment the version number for the following release (“Bug fixes since X.Y.Z” - occurs twice). - `$ git checkout hdf5_X_Y` - `$ vi RELEASE.txt # update RELEASE.txt to clear it out` - `$ git commit -m "Reset RELEASE.txt in preparation for the next release."` @@ -161,3 +161,19 @@ For more information on the HDF5 versioning and backward and forward compatibili ### 11. Conduct Release Retrospective (Release Manager) 1. Schedule time and solicit comments from retrospective 2. Identify issues and document them + +[u1]: https://github.com/HDFGroup/hdf5/blob/develop/release_docs/RELEASE.txt +[u2]: https://github.com/HDFGroup/hdf5/blob/develop/README.md +[u3]: https://github.com/HDFGroup/hdf5/blob/develop/COPYING +[u4]: https://github.com/HDFGroup/hdf5/blob/develop/release_docs +[u5]: https://github.com/HDFGroup/hdf5/blob/develop/release_docs/INSTALL +[u6]: https://github.com/HDFGroup/hdf5/blob/develop/release_docs/INSTALL_Auto.txt +[u7]: https://github.com/HDFGroup/hdf5/blob/develop/release_docs/INSTALL_CMake.txt +[u8]: https://github.com/HDFGroup/hdf5/blob/develop/.github/workflows/release.yml +[u9]: https://github.com/HDFGroup/hdf5/blob/develop/config/lt_vers.am +[u10]: https://github.com/HDFGroup/hdf5/blob/develop/bin/h5vers +[u11]: https://github.com/HDFGroup/hdf5/blob/develop/src/CMakeLists.txt +[u12]: https://github.com/HDFGroup/hdf5/blob/develop/configure.ac +[u13]: https://support.hdfgroup.org/documentation/HDF5/v1_14/v1_14_4/api-compat-macros.html +[u14]: https://github.com/HDFGroup/hdf5/releases/tag/snapshot-1.14 +[u15]: https://github.com/HDFGroup/hdf5/releases/tag/snapshot diff --git a/src/H5Amodule.h b/src/H5Amodule.h index 18fabe56f58..42715535367 100644 --- a/src/H5Amodule.h +++ b/src/H5Amodule.h @@ -59,7 +59,7 @@ * attached directly to that object * * \subsection subsec_error_H5A Attribute Function Summaries - * @see H5A reference manual + * see @ref H5A reference manual * * \subsection subsec_attribute_program Programming Model for Attributes * @@ -98,26 +98,6 @@ * \li Close the attribute * \li Close the primary data object (if appropriate) * - * - * - * - * - * - * - * - * - * - * - *
    CreateUpdate
    - * \snippet{lineno} H5A_examples.c create - * - * \snippet{lineno} H5A_examples.c update - *
    ReadDelete
    - * \snippet{lineno} H5A_examples.c read - * - * \snippet{lineno} H5A_examples.c delete - *
    - * * \subsection subsec_attribute_work Working with Attributes * * \subsubsection subsubsec_attribute_work_struct The Structure of an Attribute @@ -376,7 +356,7 @@ * An HDF5 attribute is a small metadata object describing the nature and/or intended usage of a primary data * object. A primary data object may be a dataset, group, or committed datatype. * - * @see sec_attribute + * @see \ref sec_attribute * */ diff --git a/src/H5Dmodule.h b/src/H5Dmodule.h index 26e748ce1a0..96c5b1a704e 100644 --- a/src/H5Dmodule.h +++ b/src/H5Dmodule.h @@ -887,7 +887,7 @@ filter. * It is clear that the internal HDF5 filter mechanism, while extensible, does not work well with third-party * filters. It would be a maintenance nightmare to keep adding and supporting new compression methods * in HDF5. For any set of HDF5 “internal” filters, there always will be data with which the “internal” -filters + * filters * will not achieve the optimal performance needed to address data I/O and storage problems. Thus the * internal HDF5 filter mechanism is enhanced to address the issues discussed above. * @@ -901,7 +901,7 @@ filters * * When an application reads data compressed with a third-party HDF5 filter, the HDF5 Library will search * for the required filter plugin, register the filter with the library (if the filter function is not -registered) and + * registered) and * apply it to the data on the read operation. * * For more information, @@ -1496,7 +1496,7 @@ allocated if necessary. * the size of the memory datatype and the number of elements in the memory selection. * * Variable-length data are organized in two or more areas of memory. For more information, - * \see \ref h4_vlen_datatype "Variable-length Datatypes". + * see \ref h4_vlen_datatype "Variable-length Datatypes". * * When writing data, the application creates an array of * vl_info_t which contains pointers to the elements. The elements might be, for example, strings. @@ -2735,7 +2735,7 @@ allocated if necessary. * See The HDF Group website for further information regarding the SZip filter. * * \subsubsection subsubsec_dataset_filters_dyn Using Dynamically-Loadable Filters - * \see \ref sec_filter_plugins for further information regarding the dynamically-loadable filters. + * see \ref sec_filter_plugins for further information regarding the dynamically-loadable filters. * * HDF has a filter plugin repository of useful third-party plugins that can used * diff --git a/src/H5Emodule.h b/src/H5Emodule.h index 307b5a7fac4..f46456a1369 100644 --- a/src/H5Emodule.h +++ b/src/H5Emodule.h @@ -58,7 +58,7 @@ * design for the Error Handling API. * * \subsection subsec_error_H5E Error Handling Function Summaries - * @see H5E reference manual + * see @ref H5E reference manual * * \subsection subsec_error_program Programming Model for Error Handling * This section is under construction. @@ -80,24 +80,21 @@ * an error stack ID is needed as a parameter, \ref H5E_DEFAULT can be used to indicate the library's default * stack. The first error record of the error stack, number #000, is produced by the API function itself and * is usually sufficient to indicate to the application what went wrong. - *
    - * - * - * - * - *
    Example: An Error Message
    - *

    If an application calls \ref H5Tclose on a - * predefined datatype then the following message is - * printed on the standard error stream. This is a - * simple error that has only one component, the API - * function; other errors may have many components. - *

    + *
+ * If an application calls \ref H5Tclose on a
    + * predefined datatype then the following message is
    + * printed on the standard error stream.  This is a
    + * simple error that has only one component, the API
    + * function; other errors may have many components.
    + *
    + * An Error Message Example
    + * \code
      * HDF5-DIAG: Error detected in HDF5 (1.10.9) thread 0.
      *    #000: H5T.c line ### in H5Tclose(): predefined datatype
      *       major: Function argument
      *       minor: Bad value
    - *         
    - *
    + * \endcode + * * In the example above, we can see that an error record has a major message and a minor message. A major * message generally indicates where the error happens. The location can be a dataset or a dataspace, for * example. A minor message explains further details of the error. An example is “unable to open file”. @@ -158,15 +155,15 @@ * * Example: Turn off error messages while probing a function * \code - * *** Save old error handler *** + * // Save old error handler * H5E_auto2_t oldfunc; * void *old_client_data; * H5Eget_auto2(error_stack, &old_func, &old_client_data); - * *** Turn off error handling *** + * // Turn off error handling * H5Eset_auto2(error_stack, NULL, NULL); - * *** Probe. Likely to fail, but that's okay *** + * // Probe. Likely to fail, but that's okay * status = H5Fopen (......); - * *** Restore previous error handler *** + * // Restore previous error handler * H5Eset_auto2(error_stack, old_func, old_client_data); * \endcode * @@ -174,9 +171,9 @@ * * Example: Disable automatic printing and explicitly print error messages * \code - * *** Turn off error handling permanently *** + * // Turn off error handling permanently * H5Eset_auto2(error_stack, NULL, NULL); - * *** If failure, print error message *** + * // If failure, print error message * if (H5Fopen (....)<0) { * H5Eprint2(H5E_DEFAULT, stderr); * exit (1); @@ -243,9 +240,9 @@ * * The following example shows a user‐defined callback function. * - * Example: A user‐defined callback function + * A user‐defined callback function Example * \code - * \#define MSG_SIZE 64 + * #define MSG_SIZE 64 * herr_t * custom_print_cb(unsigned n, const H5E_error2_t *err_desc, void *client_data) * { @@ -255,7 +252,7 @@ * char cls[MSG_SIZE]; * const int indent = 4; * - * *** Get descriptions for the major and minor error numbers *** + * // Get descriptions for the major and minor error numbers * if(H5Eget_class_name(err_desc->cls_id, cls, MSG_SIZE) < 0) * TEST_ERROR; * if(H5Eget_msg(err_desc->maj_num, NULL, maj, MSG_SIZE) < 0) @@ -296,13 +293,11 @@ * to push its own error records onto the error stack once it declares an error class of its own through the * HDF5 Error API. * - * - * - * - * - * - *
    Example: An Error Report
    - *

    An error report shows both the library's error record and the application's error records. - * See the example below. - *

    + * An error report shows both the library's error record and the application's error records.
    + * See the example below.
    + *
    + * An Error Report Example
    + * \code
      * Error Test-DIAG: Error detected in Error Program (1.0)
      *         thread 8192:
      *     #000: ../../hdf5/test/error_test.c line ### in main():
    @@ -318,10 +313,8 @@
      *         not a dataset
      *       major: Invalid arguments to routine
      *       minor: Inappropriate type
    - *       
    - *
    + *\endcode + * * In the line above error record #002 in the example above, the starting phrase is HDF5. This is the error * class name of the HDF5 Library. All of the library's error messages (major and minor) are in this default * error class. The Error Test in the beginning of the line above error record #000 is the name of the @@ -334,7 +327,7 @@ * * Example: The user‐defined error handler * \code - * \#define MSG_SIZE 64 + * #define MSG_SIZE 64 * herr_t * custom_print_cb(unsigned n, const H5E_error2_t *err_desc, * void* client_data) @@ -345,7 +338,7 @@ * char cls[MSG_SIZE]; * const int indent = 4; * - * *** Get descriptions for the major and minor error numbers *** + * // Get descriptions for the major and minor error numbers * if(H5Eget_class_name(err_desc->cls_id, cls, MSG_SIZE) < 0) * TEST_ERROR; * if(H5Eget_msg(err_desc->maj_num, NULL, maj, MSG_SIZE) < 0) @@ -411,13 +404,13 @@ * * Example: Create an error class and error messages * \code - * *** Create an error class *** + * // Create an error class * class_id = H5Eregister_class(ERR_CLS_NAME, PROG_NAME, PROG_VERS); - * *** Retrieve class name *** + * // Retrieve class name * H5Eget_class_name(class_id, cls_name, cls_size); - * *** Create a major error message in the class *** + * // Create a major error message in the class * maj_id = H5Ecreate_msg(class_id, H5E_MAJOR, “... ...”); - * *** Create a minor error message in the class *** + * // Create a minor error message in the class * min_id = H5Ecreate_msg(class_id, H5E_MINOR, “... ...”); * \endcode * @@ -486,14 +479,14 @@ * * Example: Pushing an error message to an error stack * \code - * *** Make call to HDF5 I/O routine *** + * // Make call to HDF5 I/O routine * if((dset_id=H5Dopen(file_id, dset_name, access_plist)) < 0) * { - * *** Push client error onto error stack *** + * // Push client error onto error stack * H5Epush(H5E_DEFAULT,__FILE__,FUNC,__LINE__,cls_id, * CLIENT_ERR_MAJ_IO,CLIENT_ERR_MINOR_OPEN, “H5Dopen failed”); * } - * *** Indicate error occurred in function *** + * // Indicate error occurred in function * return 0; * \endcode * @@ -504,15 +497,15 @@ * \code * if (H5Dwrite(dset_id, mem_type_id, mem_space_id, file_space_id, dset_xfer_plist_id, buf) < 0) * { - * *** Push client error onto error stack *** + * // Push client error onto error stack * H5Epush2(H5E_DEFAULT,__FILE__,FUNC,__LINE__,cls_id, * CLIENT_ERR_MAJ_IO,CLIENT_ERR_MINOR_HDF5, * “H5Dwrite failed”); - * *** Preserve the error stack by assigning an object handle to it *** + * // Preserve the error stack by assigning an object handle to it * error_stack = H5Eget_current_stack(); - * *** Close dataset *** + * // Close dataset * H5Dclose(dset_id); - * *** Replace the current error stack with the preserved one *** + * // Replace the current error stack with the preserved one * H5Eset_current_stack(error_stack); * } * return 0; @@ -545,7 +538,7 @@ * error stack. The error stack is statically allocated to reduce the * complexity of handling errors within the \ref H5E package. * - * @see sec_error + * @see \ref sec_error * */ diff --git a/src/H5Fmodule.h b/src/H5Fmodule.h index 5cb4a05dd7f..c9f1b31ceac 100644 --- a/src/H5Fmodule.h +++ b/src/H5Fmodule.h @@ -43,7 +43,7 @@ * \li The use of low-level file drivers * * This chapter assumes an understanding of the material presented in the data model chapter. For - * more information, @see @ref sec_data_model. + * more information, see \ref sec_data_model. 
* * \subsection subsec_file_access_modes File Access Modes * There are two issues regarding file access: @@ -101,7 +101,7 @@ * a user-definable data block; the size of data address parameters; properties of the B-trees that are * used to manage the data in the file; and certain HDF5 Library versioning information. * - * For more information, @see @ref subsubsec_file_property_lists_props. + * For more information, see \ref subsubsec_file_property_lists_props. * * This section has a more detailed discussion of file creation properties. If you have no special * requirements for these file characteristics, you can simply specify #H5P_DEFAULT for the default @@ -112,7 +112,7 @@ * settings, and parallel I/O. Data alignment, metadata block and cache sizes, and data sieve buffer * size are factors in improving I/O performance. * - * For more information, @see @ref subsubsec_file_property_lists_access. + * For more information, see \ref subsubsec_file_property_lists_access. * * This section has a more detailed discussion of file access properties. If you have no special * requirements for these file access characteristics, you can simply specify #H5P_DEFAULT for the @@ -466,8 +466,9 @@ * remain valid. Each of these file identifiers must be released by calling #H5Fclose when it is no * longer needed. * - * For more information, @see @ref subsubsec_file_property_lists_access. - * For more information, @see @ref subsec_file_property_lists. + * For more information, see \ref subsubsec_file_property_lists_access. + * + * For more information, see \ref subsec_file_property_lists. * * \subsection subsec_file_closes Closing an HDF5 File * #H5Fclose both closes a file and releases the file identifier returned by #H5Fopen or #H5Fcreate. @@ -512,7 +513,7 @@ * information for every property list function is provided in the \ref H5P * section of the HDF5 Reference Manual. * - * For more information, @see @ref sec_plist. + * For more information, @see \ref sec_plist. * * \subsubsection subsubsec_file_property_lists_create Creating a Property List * If you do not wish to rely on the default file creation and access properties, you must first create @@ -594,7 +595,7 @@ * \subsubsection subsubsec_file_property_lists_access File Access Properties * This section discusses file access properties that are not related to the low-level file drivers. File * drivers are discussed separately later in this chapter. - * For more information, @see @ref subsec_file_alternate_drivers. + * For more information, @see \ref subsec_file_alternate_drivers. * * File access property lists control various aspects of file I/O and structure. * @@ -657,7 +658,7 @@ * * HDF5 employs an extremely flexible mechanism called the virtual file layer, or VFL, for file * I/O. A full understanding of the VFL is only necessary if you plan to write your own drivers - * @see \ref VFL in the HDF5 Technical Notes. + * see \ref VFL in the HDF5 Technical Notes. * * For our * purposes here, it is sufficient to know that the low-level drivers used for file I/O reside in the @@ -690,7 +691,7 @@ * * If an application requires a special-purpose low-level driver, the VFL provides a public API for * creating one. For more information on how to create a driver, - * @see @ref VFL in the HDF5 Technical Notes. + * see \ref VFL in the HDF5 Technical Notes. 
 *
 * \subsubsection subsubsec_file_alternate_drivers_id Identifying the Previously‐used File Driver
 * When creating a new HDF5 file, no history exists, so the file driver must be specified if it is to be
@@ -888,11 +889,11 @@
 *
 * Additional parameters may be added to these functions in the future.
 *
- * @see
+ * See the
 * HDF5 File Image Operations
 * section for information on more advanced usage of the Memory file driver, and
- * @see
+ * see the
 * Modified Region Writes
 * section for information on how to set write operations so that only modified regions are written
 * to storage.
@@ -1070,7 +1071,7 @@
 * name is FILE. If the function does not find an existing file, it will create one. If it does find an
 * existing file, it will empty the file in preparation for a new set of data. The identifier for the
 * "new" file will be passed back to the application program.
- * For more information, @see @ref subsec_file_access_modes.
+ * For more information, see \ref subsec_file_access_modes.
 *
 * Creating a file with default creation and access properties
 * \code
@@ -1182,7 +1183,7 @@
 * Note: In the code example above, loc_id is the file identifier for File1, /B is the link path to the
 * group where File2 is mounted, child_id is the file identifier for File2, and plist_id is a property
 * list identifier.
- * For more information, @see @ref sec_group.
+ * For more information, see \ref sec_group.
 *
 * See the entries for #H5Fmount, #H5Funmount, and #H5Lcreate_external in the HDF5 Reference Manual.
 *
diff --git a/src/H5Gmodule.h b/src/H5Gmodule.h
index a06d44cea75..49fc9ed9472 100644
--- a/src/H5Gmodule.h
+++ b/src/H5Gmodule.h
@@ -722,7 +722,7 @@
 *
 *

    Mounting a File

 * An external link is a permanent connection between two files. A temporary connection can be set
- * up with the #H5Fmount function. For more information, @see sec_file.
+ * up with the #H5Fmount function. For more information, see \ref sec_file.
 * For more information, see the #H5Fmount function in the \ref RM.
 *
 * \subsubsection subsubsec_group_program_info Discovering Information about Objects
diff --git a/src/H5PLmodule.h b/src/H5PLmodule.h
index f034e7c6631..1aedc2783fe 100644
--- a/src/H5PLmodule.h
+++ b/src/H5PLmodule.h
@@ -276,10 +276,12 @@
 * \endcode
 *
 * See the documentation at
- * hdf5_plugins/docs folder. In
+ * hdf5_plugins/docs folder. In
 * particular:
- * INSTALL_With_CMake
- * USING_HDF5_AND_CMake
+ * INSTALL_With_CMake
+ * USING_HDF5_AND_CMake
 */

/**
diff --git a/src/H5Pmodule.h b/src/H5Pmodule.h
index ef300f9312a..8ac6f86eed9 100644
--- a/src/H5Pmodule.h
+++ b/src/H5Pmodule.h
@@ -979,7 +979,7 @@
 *
    * \snippet{doc} tables/propertyLists.dox lcpl_table *
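+ *
+ * As a brief, hedged illustration (file_id and the path below are assumed to exist in
+ * the application), a link creation property list is commonly used to create missing
+ * intermediate groups automatically:
+ * \code
+ * hid_t lcpl = H5Pcreate(H5P_LINK_CREATE);
+ * H5Pset_create_intermediate_group(lcpl, 1);
+ *
+ * // "/a" and "/a/b" need not exist yet; they are created along the way
+ * hid_t gid = H5Gcreate2(file_id, "/a/b/c", lcpl, H5P_DEFAULT, H5P_DEFAULT);
+ * H5Gclose(gid);
+ * H5Pclose(lcpl);
+ * \endcode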
- * @see STRCPL
+ * @see @ref STRCPL
 *
 * \defgroup ACPL Attribute Creation Properties
 * \ingroup STRCPL
@@ -988,7 +988,7 @@
 * \snippet{doc} tables/propertyLists.dox acpl_table
 *
 *
- * @see STRCPL
+ * @see @ref STRCPL
 *
 * \defgroup LAPL Link Access Properties
 * \ingroup H5P
diff --git a/src/H5Ppublic.h b/src/H5Ppublic.h
index 320f55d9368..4d376c2b6be 100644
--- a/src/H5Ppublic.h
+++ b/src/H5Ppublic.h
@@ -5205,7 +5205,7 @@ H5_DLL herr_t H5Pset_mdc_config(hid_t plist_id, H5AC_cache_config_t *config_ptr)
 * current state of the logging flags.
 *
 * The log format is described in [Metadata Cache Logging]
- * (https://\DSPURL/Fine-tuning+the+Metadata+Cache).
+ * (https://\DOCURL/advanced_topics/Fine-tuning+the+Metadata+Cache).
 *
 * \since 1.10.0
 *
diff --git a/src/H5Smodule.h b/src/H5Smodule.h
index 2dc8fe127d6..b9897485405 100644
--- a/src/H5Smodule.h
+++ b/src/H5Smodule.h
@@ -53,7 +53,7 @@
 * sub‐sampling, and scatter‐gather access to datasets.
 *
 * \subsection subsec_dataspace_function Dataspace Function Summaries
- * @see H5S reference manual provides a reference list of dataspace functions, the H5S APIs.
+ * The \ref H5S reference manual provides a reference list of dataspace functions, the H5S APIs.
 *
 * \subsection subsec_dataspace_program Definition of Dataspace Objects and the Dataspace Programming Model
 *
@@ -977,9 +977,9 @@
 * \subsection subsec_dataspace_refer References
 *
 * Another use of selections is to store a reference to a region of a dataset in the file or an external file.
- An HDF5 object reference
+ * An HDF5 object reference
 * object is a pointer to an object (attribute, dataset, group, or committed datatype) in the file or an
- external file. A selection can
+ * external file. A selection can
 * be used to create a pointer to a set of selected elements of a dataset, called a region reference. The
 * selection can be either a point selection or a hyperslab selection.
 *
@@ -990,13 +990,179 @@
 * To discover the elements and/or read the data, the region reference can be dereferenced to obtain the
 * identifiers for the dataset and dataspace.
 *
- * For more information, \see subsubsec_datatype_other_refs.
+ * For more information, see \ref subsubsec_datatype_other_refs.
 *
 * \subsubsection subsubsec_dataspace_refer_use Example Uses for Region References
+ * Region references are used to implement stored pointers to data within a dataset. For example, features
+ * in a large dataset might be indexed by a table. See the figure below. This table could be stored as an
+ * HDF5 dataset with a compound datatype, for example, with a field for the name of the feature and a region
+ * reference to point to the feature in the dataset. See the second figure below.
+ *
+ *
+ *
+ *
+ *
+ * \image html Dspace_features.gif "Features indexed by a table"
+ *
    + * + * + * + * + * + *
    + * \image html Dspace_features_cmpd.gif "Storing the table with a compound datatype" + *
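+ *
+ * As a sketch only (the struct layout and field names are illustrative assumptions,
+ * not part of the figures), such a feature table could be declared with a compound
+ * datatype that pairs a name with a region reference:
+ * \code
+ * // Hypothetical record type for the feature table shown above
+ * typedef struct {
+ *     char      name[32];  // feature name
+ *     H5R_ref_t region;    // points to the feature's region in the dataset
+ * } feature_t;
+ *
+ * hid_t str_t  = H5Tcopy(H5T_C_S1);
+ * H5Tset_size(str_t, 32);
+ * hid_t feat_t = H5Tcreate(H5T_COMPOUND, sizeof(feature_t));
+ * H5Tinsert(feat_t, "name",   HOFFSET(feature_t, name),   str_t);
+ * H5Tinsert(feat_t, "region", HOFFSET(feature_t, region), H5T_STD_REF);
+ * \endcode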
    * * \subsubsection subsubsec_dataspace_refer_create Creating References to Regions + * To create a region reference: + * \li 1. Create or open the dataset that contains the region + * \li 2. Get the dataspace for the dataset + * \li 3. Define a selection that specifies the region + * \li 4. Create a region reference using the dataset and dataspace with selection + * \li 5. Write the region reference(s) to the desired dataset or attribute + * \li 6. Release the region reference(s) + * + * The figure below shows a diagram of a file with three datasets. Dataset D1 and D2 are two dimensional + * arrays of integers. Dataset R1 is a one dimensional array of references to regions in D1 and D2. The + * regions can be any valid selection of the dataspace of the target dataset. + * + * + * + * + *
    + * \image html Dspace_three_datasets.gif "A file with three datasets" + *
+ * Note: In the figure above, R1 is a 1-D array of region pointers; each pointer refers to a selection
+ * in one dataset.
+ *
+ * The example below shows code to create the array of region references. The references are created in an
+ * array of type #H5R_ref_t. Each region is defined as a selection on the dataspace of the dataset,
+ * and a reference is created using #H5Rcreate_region. The call to #H5Rcreate_region specifies the
+ * file, dataset, and the dataspace with selection.
+ *
+ * Create an array of region references
+ * \code
+ * // Create an array of 4 region references
+ * H5R_ref_t ref[4];
+ *
+ * // Create a reference to the first hyperslab in the first Dataset.
+ * offset[0] = 1; offset[1] = 1;
+ * count[0] = 3; count[1] = 2;
+ * status = H5Sselect_hyperslab(space_id, H5S_SELECT_SET, offset, NULL, count, NULL);
+ * status = H5Rcreate_region(file_id, "D1", space_id, H5P_DEFAULT, &ref[0]);
+ *
+ * // The second reference is to a union of hyperslabs in the first Dataset
+ * offset[0] = 5; offset[1] = 3;
+ * count[0] = 1; count[1] = 4;
+ * status = H5Sselect_none(space_id);
+ * status = H5Sselect_hyperslab(space_id, H5S_SELECT_SET, offset, NULL, count, NULL);
+ * offset[0] = 6; offset[1] = 5;
+ * count[0] = 1; count[1] = 2;
+ * status = H5Sselect_hyperslab(space_id, H5S_SELECT_OR, offset, NULL, count, NULL);
+ * status = H5Rcreate_region(file_id, "D1", space_id, H5P_DEFAULT, &ref[1]);
+ *
+ * // The third reference is to a hyperslab in the second Dataset
+ * offset[0] = 0; offset[1] = 0;
+ * count[0] = 4; count[1] = 6;
+ * status = H5Sselect_hyperslab(space_id2, H5S_SELECT_SET, offset, NULL, count, NULL);
+ * status = H5Rcreate_region(file_id, "D2", space_id2, H5P_DEFAULT, &ref[2]);
+ *
+ * // The fourth reference is to a selection of points in the first Dataset
+ * status = H5Sselect_none(space_id);
+ * coord[0][0] = 4; coord[0][1] = 4;
+ * coord[1][0] = 2; coord[1][1] = 6;
+ * coord[2][0] = 3; coord[2][1] = 7;
+ * coord[3][0] = 1; coord[3][1] = 5;
+ * coord[4][0] = 5; coord[4][1] = 8;
+ *
+ * status = H5Sselect_elements(space_id, H5S_SELECT_SET, num_points, (const hsize_t *)coord);
+ * status = H5Rcreate_region(file_id, "D1", space_id, H5P_DEFAULT, &ref[3]);
+ * \endcode
+ *
+ * When all the references are created, the array of references is written to the dataset R1. The
+ * dataset is declared to have datatype #H5T_STD_REF. See the example below. Also, note the release
+ * of the references afterwards.
+ *
+ * Write the array of references to a dataset
+ * \code
+ * hsize_t dimsr[1];
+ * dimsr[0] = 4;
+ *
+ * // Dataset with references.
+ * spacer_id = H5Screate_simple(1, dimsr, NULL);
+ * dsetr_id = H5Dcreate(file_id, "R1", H5T_STD_REF, spacer_id, H5P_DEFAULT, H5P_DEFAULT,
+ *                      H5P_DEFAULT);
+ *
+ * // Write dataset with the references.
+ * status = H5Dwrite(dsetr_id, H5T_STD_REF, H5S_ALL, H5S_ALL, H5P_DEFAULT, ref);
+ *
+ * // Release all four references once they are stored in the file.
+ * status = H5Rdestroy(&ref[0]);
+ * status = H5Rdestroy(&ref[1]);
+ * status = H5Rdestroy(&ref[2]);
+ * status = H5Rdestroy(&ref[3]);
+ * \endcode
+ *
+ * When creating region references, the following rules are enforced.
+ * \li The selection must be a valid selection for the target dataset, just as when transferring data
+ * \li The dataset must exist in the file when the reference is created with #H5Rcreate_region
+ * \li The target dataset must be in the same file as the stored reference
 *
 * \subsubsection subsubsec_dataspace_refer_read Reading References to Regions
+ * To retrieve data from a region reference, the reference must be read from the file, and then the data can
+ * be retrieved. The steps are:
+ * \li 1. Open the dataset or attribute containing the reference objects
+ * \li 2. Read the reference object(s)
+ * \li 3. For each region reference, get the dataset (#H5Ropen_object) and dataspace (#H5Ropen_region)
+ * \li 4. Use the dataspace and datatype to discover what space is needed to store the data, allocate the
+ * correct storage and create a dataspace and datatype to define the memory data layout
+ * \li 5. Release the region reference(s)
+ *
+ * The example below shows code to read an array of region references from a dataset, and then read the
+ * data from the first selected region. Note that the region reference has information that records the
+ * dataset (within the file) and the selection on the dataspace of the dataset. After dereferencing the
+ * region reference, the datatype, number of points, and some aspects of the selection can be discovered.
+ * (For a union of hyperslabs, it may not be possible to determine the exact set of hyperslabs that has been
+ * combined.)
+ * The table below the code example shows the inquiry functions; a short sketch of their use follows the
+ * table.
+ *
+ * When reading data from a region reference, the following rules are enforced:
+ * \li The target dataset must be present and accessible in the file
+ * \li The selection must be a valid selection for the dataset
+ *
+ * Read an array of region references; read from the first selection
+ * \code
+ * dsetr_id = H5Dopen(file_id, "R1", H5P_DEFAULT);
+ * status = H5Dread(dsetr_id, H5T_STD_REF, H5S_ALL, H5S_ALL, H5P_DEFAULT, ref_out);
+ *
+ * // Dereference the first reference.
+ * // 1) get the dataset (H5Ropen_object)
+ * // 2) get the selected dataspace (H5Ropen_region)
+ * dsetv_id = H5Ropen_object(&ref_out[0], H5P_DEFAULT, H5P_DEFAULT);
+ * space_id = H5Ropen_region(&ref_out[0], H5P_DEFAULT, H5P_DEFAULT);
+ *
+ * // Discover how many points were selected and the shape of the dataspace
+ * ndims = H5Sget_simple_extent_ndims(space_id);
+ * H5Sget_simple_extent_dims(space_id, dimsx, NULL);
+ *
+ * // Read and display the hyperslab selection from the dataset. With H5S_ALL as
+ * // the memory dataspace, the selected elements land at their corresponding
+ * // positions in the full-size data_out buffer. (spacex_id could instead be
+ * // passed as a 1-D memory dataspace to read the points contiguously.)
+ * dimsy[0] = H5Sget_select_npoints(space_id);
+ * spacex_id = H5Screate_simple(1, dimsy, NULL);
+ *
+ * status = H5Dread(dsetv_id, H5T_NATIVE_INT, H5S_ALL, space_id, H5P_DEFAULT, data_out);
+ * printf("Selected hyperslab: ");
+ * for (i = 0; i < 8; i++) {
+ *     printf("\n");
+ *     for (j = 0; j < 10; j++)
+ *         printf("%d ", data_out[i][j]);
+ * }
+ * printf("\n");
+ *
+ * status = H5Rdestroy(&ref_out[0]);
+ * \endcode
+ *
 *
 * \subsection subsec_dataspace_deprecated_refer Deprecated References to Dataset Regions
 * The API described in this section was deprecated since HDF5 1.12.0. Shown are
@@ -1016,34 +1182,7 @@
 * retrieved with a call to #H5Rget_region(). The selected dataspace can be used to read the selected data
 * elements.
 *
- * For more information, \see subsubsec_datatype_other_refs.
- *
- * \subsubsection subsubsec_dataspace_deprecated_refer_use Deprecated Example Uses for Region References
- *
- * Region references are used to implement stored pointers to data within a dataset.
For example, features - * in a large dataset might be indexed by a table. See the figure below. This table could be stored as an - * HDF5 dataset with a compound datatype, for example, with a field for the name of the feature and a region - * reference to point to the feature in the dataset. See the second figure below. - * - * - * - * - * - *
    - * \image html Dspace_features.gif " Features indexed by a table" - *
    - * - * - * - * - * - *
    - * \image html Dspace_features_cmpd.gif "Storing the table with a compound datatype" - *
    - * - * * \subsubsection subsubsec_dataspace_deprecated_refer_create Deprecated Creating References to Regions - * * To create a region reference: * \li 1. Create or open the dataset that contains the region * \li 2. Get the dataspace for the dataset @@ -1183,6 +1322,7 @@ * printf("\n"); * \endcode * + * \subsection subsec_dataspace_funcs Functions * * * @@ -1243,7 +1383,6 @@ * *
    The inquiry functions
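+ *
+ * As a short illustration of these inquiry functions (a sketch; space_id is assumed
+ * to carry a hyperslab selection, as in the read example above):
+ * \code
+ * // What kind of selection is it, and how many elements does it contain?
+ * H5S_sel_type sel_type = H5Sget_select_type(space_id);    // e.g. H5S_SEL_HYPERSLABS
+ * hssize_t     npoints  = H5Sget_select_npoints(space_id); // number of selected elements
+ *
+ * // Bounding box of the selection (per-dimension start and end coordinates)
+ * hsize_t start[2], end[2];
+ * H5Sget_select_bounds(space_id, start, end);
+ *
+ * // For hyperslab selections, the individual blocks can also be counted and listed
+ * hssize_t nblocks = H5Sget_select_hyper_nblocks(space_id);
+ * \endcode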
 *
- *
 * \subsection subsec_dataspace_sample Sample Programs
 *
 * This section contains the full programs from which several of the code examples in this chapter were
diff --git a/src/H5Tmodule.h b/src/H5Tmodule.h
index 636679e8380..3e121469108 100644
--- a/src/H5Tmodule.h
+++ b/src/H5Tmodule.h
@@ -304,7 +304,7 @@
 *
 *
 *
- * @see H5R
+ * @see @ref H5R
 *
 *
 *
@@ -971,7 +971,7 @@
 * translated to and from standard types of the same class, as described above.
 *
 * \subsection subsec_datatype_function Datatype Function Summaries
- * @see H5T reference manual provides a reference list of datatype functions, the H5T APIs.
+ * The \ref H5T reference manual provides a reference list of datatype functions, the H5T APIs.
 *
 * \subsection subsec_datatype_program Programming Model for Datatypes
 * The HDF5 Library implements an object-oriented model of datatypes. HDF5 datatypes are
@@ -2164,6 +2164,7 @@ filled according to the value of this property. The padding can be:
 * \endcode
 *
 * The example below shows the content of the file written on a little-endian machine.
+ *
 * Create and write a little-endian dataset with a compound datatype in C
 * \code
 * HDF5 “SDScompound.h5” {
@@ -2248,6 +2249,7 @@ filled according to the value of this property. The padding can be:
 *
 * The figure below shows the content of the file written on a little-endian machine. Only float and
 * double fields are written. The default fill value is used to initialize the unwritten integer field.
+ *
 * Writing floats and doubles to a dataset on a little-endian system
 * \code
 * HDF5 “SDScompound.h5” {
@@ -2285,6 +2287,7 @@ filled according to the value of this property. The padding can be:
 * compound datatype. As this example illustrates, writing and reading compound datatypes in
 * Fortran is always done by fields. The content of the written file is the same as shown in the
 * example above.
+ *
 * Create and write a dataset with a compound datatype in Fortran
 * \code
 * ! One cannot write an array of a derived datatype in
@@ -2921,6 +2924,7 @@ filled according to the value of this property. The padding can be:
 * declaration of a datatype of type #H5T_C_S1 which is set to #H5T_VARIABLE. The HDF5
 * Library automatically translates between this and the vl_t structure. Note: the #H5T_VARIABLE
 * size can only be used with string datatypes.
+ *
 * Set the string datatype size to H5T_VARIABLE
 * \code
 * tid1 = H5Tcopy (H5T_C_S1);
@@ -2929,6 +2933,7 @@ filled according to the value of this property. The padding can be:
 *
 * Variable-length strings can be read into C strings (in other words, pointers to zero terminated
 * arrays of char). See the example below.
+ *
 * Read variable-length strings into C strings
 * \code
 * char *rdata[SPACE1_DIM1];
@@ -3053,6 +3058,7 @@ filled according to the value of this property. The padding can be:
 * would be as an array of integers. The example below shows an example of how to create an
 * enumeration with five elements. The elements map symbolic names to 2-byte integers. See the
 * table below.
+ *
 * Create an enumeration with five elements
 * \code
 * hid_t hdf_en_colors;
@@ -3582,6 +3588,7 @@ filled according to the value of this property. The padding can be:
 *
 * To create two or more datasets that share a common datatype, first commit the datatype, and then
 * use that datatype to create the datasets. See the example below.
+ *
 * Create a shareable datatype
 * \code
 * hid_t t1 = ...some transient type...;
@@ -3697,6 +3704,7 @@ filled according to the value of this property. The padding can be:
 * memory.
The destination datatype must be specified in the #H5Dread call. The example below
 * shows an example of reading a dataset of 32-bit integers. The figure below the example shows
 * the data transformation that is performed.
+ *
 * Specify the destination datatype with H5Dread
 * \code
 * // Stored as H5T_STD_BE32
@@ -3797,6 +3805,7 @@ filled according to the value of this property. The padding can be:
 * The currently supported text format used by #H5LTtext_to_dtype and #H5LTdtype_to_text is the
 * data description language (DDL) and conforms to the \ref DDLBNF114. The portion of the
 * \ref DDLBNF114 that defines HDF5 datatypes appears below.
+ *
 * The definition of HDF5 datatypes from the HDF5 DDL
 * \code
 * ::= | | |
diff --git a/src/H5module.h b/src/H5module.h
index a7aa05a0644..083f40005c7 100644
--- a/src/H5module.h
+++ b/src/H5module.h
@@ -28,6 +28,7 @@
/** \page H5DM_UG HDF5 Data Model and File Structure
 *
 * \section sec_data_model The HDF5 Data Model and File Structure
+ *
 * \subsection subsec_data_model_intro Introduction
 * The Hierarchical Data Format (HDF) implements a model for managing and storing data. The
 * model includes an abstract data model and an abstract storage model (the data format), and
@@ -100,8 +101,11 @@
 * model, and stored in a storage medium. The stored objects include header blocks, free lists, data
 * blocks, B-trees, and other objects. Each group or dataset is stored as one or more header and data
 * blocks.
- * @see HDF5 File Format Specification
- * for more information on how these objects are organized. The HDF5 library can also use other
+ *
+ * For more information on how these objects are organized,
+ * see the HDF5 File Format Specification.
+ *
+ * The HDF5 library can also use other
 * libraries and modules such as compression.
 *
 *
diff --git a/tools/src/h5dump/h5dump.c b/tools/src/h5dump/h5dump.c
index dc86e526294..bb916d9fb1f 100644
--- a/tools/src/h5dump/h5dump.c
+++ b/tools/src/h5dump/h5dump.c
@@ -336,7 +336,7 @@ usage(const char *prog)
    PRINTVALSTREAM(
        rawoutstream,
        " "
-        "https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html.\n");
+        "https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html.\n");
    PRINTVALSTREAM(rawoutstream,
                   " Without the file driver flag, the file will be opened with each driver in\n");
    PRINTVALSTREAM(rawoutstream, " turn and in the order specified above until one driver succeeds\n");
diff --git a/tools/test/h5dump/expected/h5dump-help.txt b/tools/test/h5dump/expected/h5dump-help.txt
index a78d8d820ec..694bc6ae975 100644
--- a/tools/test/h5dump/expected/h5dump-help.txt
+++ b/tools/test/h5dump/expected/h5dump-help.txt
@@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files
 --------------- Option Argument Conventions ---------------
    D - is the file driver to use in opening the file. Acceptable values are available from
-        https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html.
+        https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html.
    Without the file driver flag, the file will be opened with each driver in
    turn and in the order specified above until one driver succeeds
    in opening the file.
diff --git a/tools/test/h5dump/expected/pbits/tnofilename-with-packed-bits.ddl b/tools/test/h5dump/expected/pbits/tnofilename-with-packed-bits.ddl index a78d8d820ec..694bc6ae975 100644 --- a/tools/test/h5dump/expected/pbits/tnofilename-with-packed-bits.ddl +++ b/tools/test/h5dump/expected/pbits/tnofilename-with-packed-bits.ddl @@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files --------------- Option Argument Conventions --------------- D - is the file driver to use in opening the file. Acceptable values are available from - https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html. + https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html. Without the file driver flag, the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. diff --git a/tools/test/h5dump/expected/pbits/tpbitsIncomplete.ddl b/tools/test/h5dump/expected/pbits/tpbitsIncomplete.ddl index a78d8d820ec..694bc6ae975 100644 --- a/tools/test/h5dump/expected/pbits/tpbitsIncomplete.ddl +++ b/tools/test/h5dump/expected/pbits/tpbitsIncomplete.ddl @@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files --------------- Option Argument Conventions --------------- D - is the file driver to use in opening the file. Acceptable values are available from - https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html. + https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html. Without the file driver flag, the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. diff --git a/tools/test/h5dump/expected/pbits/tpbitsLengthExceeded.ddl b/tools/test/h5dump/expected/pbits/tpbitsLengthExceeded.ddl index a78d8d820ec..694bc6ae975 100644 --- a/tools/test/h5dump/expected/pbits/tpbitsLengthExceeded.ddl +++ b/tools/test/h5dump/expected/pbits/tpbitsLengthExceeded.ddl @@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files --------------- Option Argument Conventions --------------- D - is the file driver to use in opening the file. Acceptable values are available from - https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html. + https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html. Without the file driver flag, the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. diff --git a/tools/test/h5dump/expected/pbits/tpbitsLengthPositive.ddl b/tools/test/h5dump/expected/pbits/tpbitsLengthPositive.ddl index a78d8d820ec..694bc6ae975 100644 --- a/tools/test/h5dump/expected/pbits/tpbitsLengthPositive.ddl +++ b/tools/test/h5dump/expected/pbits/tpbitsLengthPositive.ddl @@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files --------------- Option Argument Conventions --------------- D - is the file driver to use in opening the file. Acceptable values are available from - https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html. + https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html. Without the file driver flag, the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. 
diff --git a/tools/test/h5dump/expected/pbits/tpbitsMaxExceeded.ddl b/tools/test/h5dump/expected/pbits/tpbitsMaxExceeded.ddl index a78d8d820ec..694bc6ae975 100644 --- a/tools/test/h5dump/expected/pbits/tpbitsMaxExceeded.ddl +++ b/tools/test/h5dump/expected/pbits/tpbitsMaxExceeded.ddl @@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files --------------- Option Argument Conventions --------------- D - is the file driver to use in opening the file. Acceptable values are available from - https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html. + https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html. Without the file driver flag, the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. diff --git a/tools/test/h5dump/expected/pbits/tpbitsOffsetExceeded.ddl b/tools/test/h5dump/expected/pbits/tpbitsOffsetExceeded.ddl index a78d8d820ec..694bc6ae975 100644 --- a/tools/test/h5dump/expected/pbits/tpbitsOffsetExceeded.ddl +++ b/tools/test/h5dump/expected/pbits/tpbitsOffsetExceeded.ddl @@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files --------------- Option Argument Conventions --------------- D - is the file driver to use in opening the file. Acceptable values are available from - https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html. + https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html. Without the file driver flag, the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. diff --git a/tools/test/h5dump/expected/pbits/tpbitsOffsetNegative.ddl b/tools/test/h5dump/expected/pbits/tpbitsOffsetNegative.ddl index a78d8d820ec..694bc6ae975 100644 --- a/tools/test/h5dump/expected/pbits/tpbitsOffsetNegative.ddl +++ b/tools/test/h5dump/expected/pbits/tpbitsOffsetNegative.ddl @@ -105,7 +105,7 @@ usage: h5dump [OPTIONS] files --------------- Option Argument Conventions --------------- D - is the file driver to use in opening the file. Acceptable values are available from - https://portal.hdfgroup.org/documentation/hdf5-docs/registered_virtual_file_drivers_vfds.html. + https://support.hdfgroup.org/documentation/HDF5/registered_virtual_file_drivers_vfds.html. Without the file driver flag, the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file.