From 29c268444cb04d6e4410e50c06f536fd20f9e9b9 Mon Sep 17 00:00:00 2001
From: Binh-Minh

The Single-Writer / Multiple-Reader (SWMR) feature enables multiple processes to read an HDF5 file while it is being written to (by a single process), without using locks or requiring communication between processes.
All communication between processes must be performed via the HDF5 file itself. An HDF5 file under SWMR access must reside on a file system that complies with POSIX write() semantics. The basic engineering challenge was to ensure that readers of an HDF5 file always see a coherent (though possibly not up-to-date) HDF5 file. The issue is that, while data is being written, some of the file's metadata exists only in the writer's metadata cache, in addition to what is in the physical file on disk:
However, the readers can only see the state contained in the physical file:
The SWMR solution implements flush dependencies that control when metadata can be flushed to the file. This ensures that metadata cache flush operations occur in the proper order, so that the physical file never contains internal file pointers that point to invalid (unflushed) file addresses. A beneficial side effect of using SWMR access is better fault tolerance: it is more difficult to corrupt a file when using SWMR.

Introduction to SWMR
Documentation
@@ -46,7 +49,8 @@ SWMR Writer:
Periodically flush data.
Create the file using the latest file format property:
-fapl = H5Pcreate (H5P_FILE_ACCESS);
+fapl = H5Pcreate (H5P_FILE_ACCESS);
+status = H5Pset_libver_bounds (fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);
 fid = H5Fcreate (filename, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
 [Create objects (files, datasets, ...). Close any attributes and named datatype objects. Groups and datasets may remain open before starting SWMR access to them.]
diff --git a/doxygen/examples/intro_VDS.html b/doxygen/examples/intro_VDS.html
index 5f3e0336e4c..6e573b9b75c 100644
--- a/doxygen/examples/intro_VDS.html
+++ b/doxygen/examples/intro_VDS.html
@@ -5,11 +5,11 @@
The HDF5 Virtual Dataset (VDS) feature enables users to access data in a collection of HDF5 files as a single HDF5 dataset and to use the HDF5 APIs to work with that dataset.
For example, your data may be collected into four files:
You can map the datasets in the four files into a single VDS that can be accessed just like any other dataset:
The mapping between a VDS and the HDF5 source datasets is persistent and transparent to an application. If a source file is missing, the fill value will be displayed.
See the Virtual (VDS) Documentation for complete details regarding the VDS feature.
@@ -42,6 +42,7 @@
Programming Examples

Example 1

This example creates three HDF5 files, each with a one-dimensional dataset of 6 elements. The datasets in these files are the source datasets that are then used to create a 4 x 6 Virtual Dataset with a fill value of -1. The first three rows of the VDS are mapped to the data from the three source datasets as shown below:
+In this example, the three source datasets are mapped to the VDS with this code:
src_space = H5Screate_simple (RANK1, dims, NULL);
for (i = 0; i < 3; i++) {
@@ -67,4 +68,5 @@
The h5dump utility can be used to view a VDS. The h5dump output for a VDS looks exactly like that for any other dataset. If h5dump cannot find a source dataset then the fill value will be displayed.
You can determine that a dataset is a VDS by looking at its properties with h5dump -p. It will display each source dataset mapping, beginning with Mapping 0. Below is an excerpt of the output of h5dump -p on the vds.h5 file created in Example 1. You can see that the entire source file a.h5 is mapped to the first row of the /VDS dataset:
+