removed the use of encoded single apostrophe #3261

Merged: 8 commits, Jul 20, 2023
Changes from 5 commits
2 changes: 1 addition & 1 deletion config/sanitizer/README.md
@@ -53,7 +53,7 @@ These obviously force the standard to be required, and also disables compiler-sp

## Sanitizer Builds [`sanitizers.cmake`](sanitizers.cmake)

Sanitizers are tools that perform checks during a programs runtime and returns issues, and as such, along with unit testing, code coverage and static analysis, is another tool to add to the programmers toolbox. And of course, like the previous tools, are tragically simple to add into any project using CMake, allowing any project and developer to quickly and easily use.
Sanitizers are tools that perform checks during a program's runtime and return issues; as such, along with unit testing, code coverage, and static analysis, they are another tool to add to the programmer's toolbox. And like the previous tools, they are trivially simple to add to any project using CMake, allowing any project and developer to use them quickly and easily.
Collaborator:

This sentence has some subject/verb disagreements. Should be "tools that perform ... and return issues", and "Sanitizers ... are another tool to add ...". And maybe "trivially simple" instead of "tragically simple", or just "simple"?

Contributor Author:
updated.


A quick rundown of the tools available, and what they do:
- [LeakSanitizer](https://clang.llvm.org/docs/LeakSanitizer.html) detects memory leaks, or issues where memory is allocated and never deallocated, causing programs to slowly consume more and more memory, eventually leading to a crash.
2 changes: 1 addition & 1 deletion doxygen/aliases
@@ -306,7 +306,7 @@ ALIASES += ref_rfc20120523="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/paged_a
ALIASES += ref_rfc20120501="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/HDF5FileImageOperations.pdf\">HDF5 File Image Operations</a>"
ALIASES += ref_rfc20120305="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/RFC%20PHDF5%20Consistency%20Semantics%20MC%20120328.docx.pdf\">Enabling a Strict Consistency Semantics Model in Parallel HDF5</a>"
ALIASES += ref_rfc20120220="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/h5repack_improve_hyperslab_over_chunked_dataset_v1.pdf\"><tt>h5repack</tt>: Improved Hyperslab selections for Large Chunked Datasets</a>"
ALIASES += ref_rfc20120120="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/2012-1-25-Maintainers-guide-for-datatype.docx.pdf\">A Maintainers Guide for the Datatype Module in HDF5 Library</a>"
ALIASES += ref_rfc20120120="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/2012-1-25-Maintainers-guide-for-datatype.docx.pdf\">A Maintainer's Guide for the Datatype Module in HDF5 Library</a>"
ALIASES += ref_rfc20120104="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/RFC_actual_io_v4-1_done.docx.pdf\">Actual I/O Mode</a>"
ALIASES += ref_rfc20111119="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/RFC-H5Ocompare-review_v6.pdf\">New public functions to handle comparison</a>"
ALIASES += ref_rfc20110825="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/2011-08-31-RFC_H5Ocopy_Named_DT_v2.docx.pdf\">Merging Named Datatypes in H5Ocopy()</a>"
4 changes: 2 additions & 2 deletions doxygen/dox/IntroHDF5.dox
@@ -124,7 +124,7 @@ It is a 2-dimensional 5 x 3 array (the dataspace). The datatype should not be co
</ul>

\subsubsection subsec_intro_desc_prop_dspace Dataspaces
A dataspace describes the layout of a datasets data elements. It can consist of no elements (NULL),
A dataspace describes the layout of a dataset's data elements. It can consist of no elements (NULL),
a single element (scalar), or a simple array.

<table>
@@ -141,7 +141,7 @@ in size (i.e. they are extendible).

There are two roles of a dataspace:
\li It contains the spatial information (logical layout) of a dataset stored in a file. This includes the rank and dimensions of a dataset, which are a permanent part of the dataset definition.
\li It describes an applications data buffers and data elements participating in I/O. In other words, it can be used to select a portion or subset of a dataset.
\li It describes an application's data buffers and data elements participating in I/O. In other words, it can be used to select a portion or subset of a dataset.

<table>
<caption>The dataspace is used to describe both the logical layout of a dataset and a subset of a dataset.</caption>
2 changes: 1 addition & 1 deletion doxygen/dox/high_level/extension.dox
@@ -392,7 +392,7 @@ H5_HLRDLL herr_t H5LRread_region(hid_t obj_id,
*
* \details H5LRget_region_info() queries information about the data
* pointed by a region reference \p ref. It returns one of the
* absolute paths to a dataset, length of the path, datasets rank
* absolute paths to a dataset, length of the path, dataset's rank
* and datatype, description of the referenced region and type of
* the referenced region. Any output argument can be NULL if that
* argument does not need to be returned.
4 changes: 2 additions & 2 deletions doxygen/examples/tables/fileDriverLists.dox
@@ -94,8 +94,8 @@ version of the file can be written back to disk or abandoned.</td>
<tr>
<td>Family</td>
<td>#H5FD_FAMILY</td>
<td>With this driver, the HDF5 files address space is partitioned into pieces and sent to
separate storage files using an underlying driver of the users choice. This driver is for
<td>With this driver, the HDF5 file's address space is partitioned into pieces and sent to
separate storage files using an underlying driver of the user's choice. This driver is for
systems that do not support files larger than 2 gigabytes.</td>
<td>#H5Pset_fapl_family</td>
</tr>
18 changes: 9 additions & 9 deletions hl/src/H5DOpublic.h
@@ -29,7 +29,7 @@ extern "C" {
HDF5 functions described in this section are implemented in the HDF5 High-level
* library as optimized functions. These functions generally require careful setup
* and testing as they enable an application to bypass portions of the HDF5
* librarys I/O pipeline for performance purposes.
* library's I/O pipeline for performance purposes.
*
* These functions are distributed in the standard HDF5 distribution and are
* available any time the HDF5 High-level library is available.
@@ -113,7 +113,7 @@ H5_HLDLL herr_t H5DOappend(hid_t dset_id, hid_t dxpl_id, unsigned axis, size_t e
* \param[in] dxpl_id Transfer property list identifier for
* this I/O operation
* \param[in] filters Mask for identifying the filters in use
* \param[in] offset Logical position of the chunks first element
* \param[in] offset Logical position of the chunk's first element
* in the dataspace
* \param[in] data_size Size of the actual data to be written in bytes
* \param[in] buf Buffer containing data to be written to the chunk
@@ -131,20 +131,20 @@ H5_HLDLL herr_t H5DOappend(hid_t dset_id, hid_t dxpl_id, unsigned axis, size_t e
* logical \p offset in a chunked dataset \p dset_id from the application
* memory buffer \p buf to the dataset in the file. Typically, the data
* in \p buf is preprocessed in memory by a custom transformation, such as
* compression. The chunk will bypass the librarys internal data
* compression. The chunk will bypass the library's internal data
* transfer pipeline, including filters, and will be written directly to the file.
*
* \p dxpl_id is a data transfer property list identifier.
*
* \p filters is a mask providing a record of which filters are used
* with the chunk. The default value of the mask is zero (\c 0),
* indicating that all enabled filters are applied. A filter is skipped
* if the bit corresponding to the filters position in the pipeline
* if the bit corresponding to the filter's position in the pipeline
* (<tt>0 ≤ position < 32</tt>) is turned on. This mask is saved
* with the chunk in the file.
*
* \p offset is an array specifying the logical position of the first
* element of the chunk in the datasets dataspace. The length of the
* element of the chunk in the dataset's dataspace. The length of the
* offset array must equal the number of dimensions, or rank, of the
* dataspace. The values in \p offset must not exceed the dimension limits
* and must specify a point that falls on a dataset chunk boundary.
@@ -189,7 +189,7 @@ H5_HLDLL herr_t H5DOwrite_chunk(hid_t dset_id, hid_t dxpl_id, uint32_t filters,
* \param[in] dset_id Identifier for the dataset to be read
* \param[in] dxpl_id Transfer property list identifier for
* this I/O operation
* \param[in] offset Logical position of the chunks first
* \param[in] offset Logical position of the chunk's first
 *                        element in the dataspace
* \param[in,out] filters Mask for identifying the filters used
* with the chunk
@@ -209,19 +209,19 @@ H5_HLDLL herr_t H5DOwrite_chunk(hid_t dset_id, hid_t dxpl_id, uint32_t filters,
* by its logical \p offset in a chunked dataset \p dset_id
* from the dataset in the file into the application memory
* buffer \p buf. The data in \p buf is read directly from the file
* bypassing the librarys internal data transfer pipeline,
* bypassing the library's internal data transfer pipeline,
* including filters.
*
* \p dxpl_id is a data transfer property list identifier.
*
* The mask \p filters indicates which filters are used with the
* chunk when written. A zero value indicates that all enabled filters
* are applied on the chunk. A filter is skipped if the bit corresponding
* to the filters position in the pipeline
* to the filter's position in the pipeline
* (<tt>0 ≤ position < 32</tt>) is turned on.
*
* \p offset is an array specifying the logical position of the first
* element of the chunk in the datasets dataspace. The length of the
* element of the chunk in the dataset's dataspace. The length of the
* offset array must equal the number of dimensions, or rank, of the
* dataspace. The values in \p offset must not exceed the dimension
* limits and must specify a point that falls on a dataset chunk boundary.
6 changes: 3 additions & 3 deletions hl/src/H5LDpublic.h
@@ -50,18 +50,18 @@ H5_HLDLL herr_t H5LDget_dset_dims(hid_t did, hsize_t *cur_dims);
*-------------------------------------------------------------------------
* \ingroup H5LT
*
* \brief Returns the size in bytes of the datasets datatype
* \brief Returns the size in bytes of the dataset's datatype
*
* \param[in] did The dataset identifier
* \param[in] fields The pointer to a comma-separated list of fields for a compound datatype
*
* \return If successful, returns the size in bytes of the
* datasets datatype. Otherwise, returns 0.
* dataset's datatype. Otherwise, returns 0.
*
* \details H5LDget_dset_type_size() allows the user to find out the datatype
* size for the dataset associated with \p did. If the
* parameter \p fields is NULL, this routine just returns the size
* of the datasets datatype. If the dataset has a compound datatype
* of the dataset's datatype. If the dataset has a compound datatype
* and \p fields is non-NULL, this routine returns the size of the
* datatype(s) for the selected fields specified in \p fields.
* Note that ’,’ is the separator for the fields of a compound
18 changes: 9 additions & 9 deletions hl/src/H5LTpublic.h
@@ -208,7 +208,7 @@ H5_HLDLL herr_t H5LTmake_dataset(hid_t loc_id, const char *dset_name, int rank,
* named \p dset_name attached to the object specified by
* the identifier \p loc_id.
*
* The datasets datatype will be \e character, #H5T_NATIVE_CHAR.
* The dataset's datatype will be \e character, #H5T_NATIVE_CHAR.
*
*/
H5_HLDLL herr_t H5LTmake_dataset_char(hid_t loc_id, const char *dset_name, int rank, const hsize_t *dims,
@@ -232,7 +232,7 @@ H5_HLDLL herr_t H5LTmake_dataset_char(hid_t loc_id, const char *dset_name, int r
* named \p dset_name attached to the object specified by
* the identifier \p loc_id.
*
* The datasets datatype will be <em>short signed integer</em>,
* The dataset's datatype will be <em>short signed integer</em>,
* #H5T_NATIVE_SHORT.
*
*/
@@ -257,7 +257,7 @@ H5_HLDLL herr_t H5LTmake_dataset_short(hid_t loc_id, const char *dset_name, int
* named \p dset_name attached to the object specified by
* the identifier \p loc_id.
*
* The datasets datatype will be <em>native signed integer</em>,
* The dataset's datatype will be <em>native signed integer</em>,
* #H5T_NATIVE_INT.
*
* \version Fortran subroutine modified in this release to accommodate
@@ -285,7 +285,7 @@ H5_HLDLL herr_t H5LTmake_dataset_int(hid_t loc_id, const char *dset_name, int ra
* named \p dset_name attached to the object specified by
* the identifier \p loc_id.
*
* The datasets datatype will be <em>long signed integer</em>,
* The dataset's datatype will be <em>long signed integer</em>,
* #H5T_NATIVE_LONG.
*
*/
@@ -310,7 +310,7 @@ H5_HLDLL herr_t H5LTmake_dataset_long(hid_t loc_id, const char *dset_name, int r
* named \p dset_name attached to the object specified by
* the identifier \p loc_id.
*
* The datasets datatype will be <em>native floating point</em>,
* The dataset's datatype will be <em>native floating point</em>,
* #H5T_NATIVE_FLOAT.
*
* \version 1.8.7 Fortran subroutine modified in this release to accommodate
@@ -338,7 +338,7 @@ H5_HLDLL herr_t H5LTmake_dataset_float(hid_t loc_id, const char *dset_name, int
* named \p dset_name attached to the object specified by
* the identifier \p loc_id.
*
* The datasets datatype will be
* The dataset's datatype will be
* <em>native floating-point double</em>, #H5T_NATIVE_DOUBLE.
*
* \version 1.8.7 Fortran subroutine modified in this release to accommodate
@@ -364,7 +364,7 @@ H5_HLDLL herr_t H5LTmake_dataset_double(hid_t loc_id, const char *dset_name, int
* named \p dset_name attached to the object specified by
* the identifier \p loc_id.
*
* The datasets datatype will be <em>C string</em>, #H5T_C_S1.
* The dataset's datatype will be <em>C string</em>, #H5T_C_S1.
*
*/
H5_HLDLL herr_t H5LTmake_dataset_string(hid_t loc_id, const char *dset_name, const char *buf);
@@ -1496,7 +1496,7 @@ H5_HLDLL herr_t H5LTfind_attribute(hid_t loc_id, const char *name);
* final component of \p path resolves to an HDF5 object;
* if not, the final component is a dangling link.
*
* The meaning of the functions return value depends on the
* The meaning of the function's return value depends on the
* value of \p check_object_valid:
*
* If \p check_object_valid is set to \c FALSE, H5LTpath_valid()
@@ -1516,7 +1516,7 @@ H5_HLDLL herr_t H5LTfind_attribute(hid_t loc_id, const char *name);
* \p path can be any one of the following:
*
* - An absolute path, which starts with a slash (\c /)
* indicating the files root group, followed by the members
* indicating the file's root group, followed by the members
* - A relative path with respect to \p loc_id
* - A dot (\c .), if \p loc_id is the object identifier for
* the object itself.