- Chunking in HDF5
+ @ref hdf5_chunking
|
Structuring the use of chunking and tuning it for performance.
diff --git a/doxygen/dox/chunking_in_hdf5.dox b/doxygen/dox/chunking_in_hdf5.dox
new file mode 100644
index 00000000000..b46662a3a5b
--- /dev/null
+++ b/doxygen/dox/chunking_in_hdf5.dox
@@ -0,0 +1,398 @@
+/** \page hdf5_chunking Chunking in HDF5
+ *
+ * \section sec_hdf5_chunking_intro Introduction
+ * Datasets in HDF5 not only provide a convenient, structured, and self-describing way to store data,
+ * but are also designed to do so with good performance. In order to maximize performance, the HDF5
+ * library provides ways to specify how the data is stored on disk, how it is accessed, and how it should be held in memory.
+ *
+ * \section sec_hdf5_chunking_def What are Chunks?
+ * Datasets in HDF5 can represent arrays with any number of dimensions (up to 32). However, in the file this dataset
+ * must be stored as part of the 1-dimensional stream of data that is the low-level file. The way in which the multidimensional
+ * dataset is mapped to the serial file is called the layout. The most obvious way to accomplish this is to simply flatten the
+ * dataset in a way similar to how arrays are stored in memory, serializing the entire dataset into a monolithic block on disk,
+ * which maps directly to a memory buffer the size of the dataset. This is called a contiguous layout.
+ *
+ * An alternative to the contiguous layout is the chunked layout. Whereas contiguous datasets are stored in a single block in
+ * the file, chunked datasets are split into multiple chunks which are all stored separately in the file. The chunks can be
+ * stored in any order and any position within the HDF5 file. Chunks can then be read and written individually, improving
+ * performance when operating on a subset of the dataset.
+ *
+ * The API functions used to read and write chunked datasets are exactly the same functions used to read and write contiguous
+ * datasets. The only difference is a single call to set up the layout on a property list before the dataset is created. In this
+ * way, a program can switch between using chunked and contiguous datasets by simply altering that call. Example 1, below, creates
+ * a dataset with a size of 12x12 and a chunk size of 4x4. The example could be changed to create a contiguous dataset instead by
+ * simply commenting out the call to #H5Pset_chunk and changing dcpl_id in the #H5Dcreate call to #H5P_DEFAULT.
+ *
+ * Example 1: Creating a chunked dataset
+ * \code
+ * #include "hdf5.h"
+ * #define FILENAME "file.h5"
+ * #define DATASET "dataset"
+ *
+ * int main() {
+ *
+ * hid_t file_id, dset_id, space_id, dcpl_id;
+ * hsize_t chunk_dims[2] = {4, 4};
+ * hsize_t dset_dims[2] = {12, 12};
+ * herr_t status;
+ * int i, j;
+ * int buffer[12][12];
+ *
+ * // Create the file
+ * file_id = H5Fcreate(FILENAME, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+ *
+ * // Create a dataset creation property list and set it to use chunking
+ * dcpl_id = H5Pcreate(H5P_DATASET_CREATE);
+ * status = H5Pset_chunk(dcpl_id, 2, chunk_dims);
+ *
+ * // Create the dataspace and the chunked dataset
+ * space_id = H5Screate_simple(2, dset_dims, NULL);
+ * dset_id = H5Dcreate(file_id, DATASET, H5T_STD_I32BE, space_id, H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
+ *
+ * // Initialize dataset
+ * for (i = 0; i < 12; i++)
+ * for (j = 0; j < 12; j++)
+ * buffer[i][j] = i + j + 1;
+ *
+ * // Write to the dataset
+ * status = H5Dwrite(dset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buffer);
+ *
+ * // Close
+ * status = H5Dclose(dset_id);
+ * status = H5Sclose(space_id);
+ * status = H5Pclose(dcpl_id);
+ * status = H5Fclose(file_id);
+ * }
+ * \endcode
+ *
+ * The chunks of a chunked dataset are split along logical boundaries in the dataset's representation as an array, not
+ * along boundaries in the serialized form. Suppose a dataset has a chunk size of 2x2. In this case, the first chunk
+ * would go from (0,0) to (2,2), the second from (0,2) to (2,4), and so on. By selecting the chunk size carefully, it is
+ * possible to fine tune I/O to maximize performance for any access pattern. Chunking is also required to use advanced
+ * features such as compression and dataset resizing.
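+ *
+ * As a sketch of this mapping (a plain C illustration, not an HDF5 routine), the chunk holding a given element
+ * is found by integer division of each coordinate by the chunk size in that dimension:
+ * \code
+ * #include <assert.h>
+ *
+ * // Row and column of the chunk (in the chunk grid) that holds element (i, j)
+ * static int chunk_row(int i, int chunk_rows) { return i / chunk_rows; }
+ * static int chunk_col(int j, int chunk_cols) { return j / chunk_cols; }
+ *
+ * int main(void) {
+ *     // With 2x2 chunks, element (1, 2) falls in chunk (0, 1), the chunk
+ *     // covering rows 0-1 and columns 2-3
+ *     assert(chunk_row(1, 2) == 0 && chunk_col(2, 2) == 1);
+ *     return 0;
+ * }
+ * \endcode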
+ *
+ * \image html chunking1and2.png
+ *
+ * \section sec_hdf5_chunking_data Data Storage Order
+ * To understand the effects of chunking on I/O performance it is necessary to understand the order in which data is
+ * actually stored on disk. When using the C interface, data elements are stored in "row-major" order, meaning that,
+ * for a 2-dimensional dataset, rows of data are stored in-order on the disk. This is equivalent to the storage order
+ * of C arrays in memory.
+ *
+ * Suppose we have a 10x10 contiguous dataset B. The first element stored on disk is B[0][0], the second B[0][1],
+ * the eleventh B[1][0], and so on. If we want to read the elements from B[2][3] to B[2][7], we have to read the
+ * elements in the 24th, 25th, 26th, 27th, and 28th positions. Since all of these positions are contiguous, or next
+ * to each other, this can be done in a single read operation: read 5 elements starting at the 24th position. This
+ * operation is illustrated in figure 3: the pink cells represent elements to be read and the solid line represents
+ * a read operation. Now suppose we want to read the elements in the column from B[3][2] to B[7][2]. In this case we
+ * must read the elements in the 33rd, 43rd, 53rd, 63rd, and 73rd positions. Since these positions are not contiguous,
+ * this must be done in 5 separate read operations. This operation is illustrated in figure 4: the solid lines again
+ * represent read operations, and the dotted lines represent seek operations. An alternative would be to perform a single
+ * large read operation, in this case 41 elements starting at the 33rd position. This is called a sieve buffer and is
+ * supported by HDF5 for contiguous datasets, but not for chunked datasets. By setting the chunk sizes correctly, it
+ * is possible to greatly exceed the performance of the sieve buffer scheme.
+ *
+ * \image html chunking3and4.png
+ *
+ * Likewise, in higher dimensions, the last dimension specified is the fastest changing on disk. So if we have a
+ * four-dimensional dataset A, then the first element on disk would be A[0][0][0][0], the second A[0][0][0][1], the third A[0][0][0][2], and so on.
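+ *
+ * The positions used above can be computed directly (a plain C sketch, not an HDF5 routine): the row-major
+ * offset of an element is its row index times the row width plus its column index.
+ * \code
+ * #include <assert.h>
+ *
+ * // 0-based row-major offset of element (i, j) in a 2-d dataset with ncols
+ * // columns; add 1 for the 1-based positions used in the text above
+ * static long row_major_offset(long i, long j, long ncols) {
+ *     return i * ncols + j;
+ * }
+ *
+ * int main(void) {
+ *     // B[2][3] through B[2][7] occupy the contiguous positions 24-28
+ *     assert(row_major_offset(2, 3, 10) + 1 == 24);
+ *     assert(row_major_offset(2, 7, 10) + 1 == 28);
+ *     // B[3][2] through B[7][2] fall at positions 33, 43, ..., 73:
+ *     // each step down a row advances the offset by a full row
+ *     assert(row_major_offset(3, 2, 10) + 1 == 33);
+ *     assert(row_major_offset(7, 2, 10) + 1 == 73);
+ *     return 0;
+ * }
+ * \endcode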
+ *
+ * \section sec_hdf5_chunking_part Chunking and Partial I/O
+ * The issues outlined above regarding data storage order help to illustrate one of the major benefits of dataset chunking,
+ * its ability to improve the performance of partial I/O. Partial I/O is an I/O operation (read or write) which operates
+ * on only one part of the dataset. To maximize the performance of partial I/O, the data elements selected for I/O must be
+ * contiguous on disk. As we saw above, with a contiguous dataset, this means that the selection must always equal the extent
+ * in all but the slowest changing dimension, unless the selection in the slowest changing dimension is a single element. With
+ * a 2-d dataset in C, this means that the selection must be as wide as the entire dataset unless only a single row is selected.
+ * With a 3-d dataset, this means that the selection must be as wide and as deep as the entire dataset, unless only a single row
+ * is selected, in which case it must still be as deep as the entire dataset, unless only a single column is also selected.
+ *
+ * Chunking allows the user to modify the conditions for maximum performance by changing the regions in the dataset which are
+ * contiguous. For example, reading a 20x20 selection in a contiguous dataset with a width greater than 20 would require 20
+ * separate and non-contiguous read operations. If the same operation were performed on a dataset that was created with a
+ * chunk size of 20x20, the operation would require only a single read operation. In general, if your selections are always
+ * the same size (or multiples of the same size), and start at multiples of that size, then the chunk size should be set to the
+ * selection size, or an integer divisor of it. This recommendation is subject to the guidelines in the pitfalls section;
+ * specifically, it should not be too small or too large.
+ *
+ * Using this strategy, we can greatly improve the performance of the operation shown in figure 4. If we create the
+ * dataset with a chunk size of 10x1, each column of the dataset will be stored separately and contiguously. The read
+ * of a partial column can then be done in a single operation. This is illustrated in figure 5, and the code to implement
+ * a similar operation is shown in example 2. For simplicity, example 2 implements writing to this dataset instead of reading from it.
+ *
+ * \image html chunking5.png
+ *
+ * Example 2: Writing part of a column to a chunked dataset
+ * \code
+ * #include "hdf5.h"
+ * #define FILENAME "file.h5"
+ * #define DATASET "dataset"
+ *
+ * int main() {
+ *
+ * hid_t file_id, dset_id, fspace_id, mspace_id, dcpl_id;
+ * hsize_t chunk_dims[2] = {10, 1};
+ * hsize_t dset_dims[2] = {10, 10};
+ * hsize_t mem_dims[1] = {5};
+ * hsize_t start[2] = {3, 2};
+ * hsize_t count[2] = {5, 1};
+ * herr_t status;
+ * int buffer[5], i;
+ *
+ * // Create the file
+ * file_id = H5Fcreate(FILENAME, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+ *
+ * // Create a dataset creation property list to use chunking with a chunk size of 10x1
+ * dcpl_id = H5Pcreate(H5P_DATASET_CREATE);
+ *
+ * status = H5Pset_chunk(dcpl_id, 2, chunk_dims);
+ *
+ * // Create the dataspace and the chunked dataset
+ * fspace_id = H5Screate_simple(2, dset_dims, NULL);
+ *
+ * dset_id = H5Dcreate(file_id, DATASET, H5T_STD_I32BE, fspace_id, H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
+ *
+ * // Select the elements from 3, 2 to 7, 2
+ * status = H5Sselect_hyperslab(fspace_id, H5S_SELECT_SET, start, NULL, count, NULL);
+ *
+ * // Create the memory dataspace
+ * mspace_id = H5Screate_simple(1, mem_dims, NULL);
+ *
+ * // Initialize dataset
+ * for (i = 0; i < 5; i++)
+ * buffer[i] = i+1;
+ *
+ * // Write to the dataset
+ * status = H5Dwrite(dset_id, H5T_NATIVE_INT, mspace_id, fspace_id, H5P_DEFAULT, buffer);
+ *
+ * // Close
+ * status = H5Dclose(dset_id);
+ * status = H5Sclose(fspace_id);
+ * status = H5Sclose(mspace_id);
+ * status = H5Pclose(dcpl_id);
+ * status = H5Fclose(file_id);
+ * }
+ * \endcode
+ *
+ * \section sec_hdf5_chunking_cache Chunk Caching
+ * Another major feature of the dataset chunking scheme is the chunk cache. As it sounds, this is a cache of the chunks in
+ * the dataset. This cache can greatly improve performance whenever the same chunks are read from or written to multiple
+ * times, by preventing the library from having to read from and write to disk multiple times. However, the current
+ * implementation of the chunk cache does not adjust its parameters automatically, and therefore the parameters must be
+ * adjusted manually to achieve optimal performance. In some rare cases it may be best to completely disable the chunk
+ * caching scheme. Each open dataset has its own chunk cache, which is separate from the caches for all other open datasets.
+ *
+ * When a selection is read from a chunked dataset, the chunks containing the selection are first read into the cache, and then
+ * the selected parts of those chunks are copied into the user's buffer. The cached chunks stay in the cache until they are evicted,
+ * which typically occurs because more space is needed in the cache for new chunks, but they can also be evicted if hash values
+ * collide (more on this later). Once the chunk is evicted it is written to disk if necessary and freed from memory.
+ *
+ * This process is illustrated in figures 6 and 7. In figure 6, the application requests a row of values, and the library responds
+ * by bringing the chunks containing that row into cache, and retrieving the values from cache. In figure 7, the application requests
+ * a different row that is covered by the same chunks, and the library retrieves the values directly from cache without touching the disk.
+ *
+ * \image html chunking6.png
+ *
+ * \image html chunking7.png
+ *
+ * In order to allow the chunks to be looked up quickly in cache, each chunk is assigned a unique hash value that is
+ * used to look up the chunk. The cache contains a simple array of pointers to chunks, which is called a hash table.
+ * A chunk's hash value is simply the index into the hash table of the pointer to that chunk. While the pointer at this
+ * location might instead point to a different chunk or to nothing at all, no other locations in the hash table can contain
+ * a pointer to the chunk in question. Therefore, the library only has to check this one location in the hash table to tell
+ * if a chunk is in cache or not. This also means that if two or more chunks share the same hash value, then only one of
+ * those chunks can be in the cache at the same time. When a chunk is brought into cache and another chunk with the same hash
+ * value is already in cache, the second chunk must be evicted first. Therefore it is very important to make sure that the size
+ * of the hash table, also called the nslots parameter in #H5Pset_cache and #H5Pset_chunk_cache, is large enough to minimize
+ * the number of hash value collisions.
+ *
+ * Prior to 1.10, the library determined the hash value for a chunk by assigning it a unique linear index
+ * into a hypothetical array of chunks. That is, the upper-left chunk has an index of 0, the one to the right of that
+ * has an index of 1, and so on.
+ *
+ * For example, the algorithm prior to 1.10 simply incremented the index by one along the fastest changing dimension.
+ * The diagram below illustrates the indices for a dataset with 5 x 3 chunks prior to HDF5 1.10:
+ * \code
+ * 0 1 2
+ * 3 4 5
+ * 6 7 8
+ * 9 10 11
+ * 12 13 14
+ * \endcode
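+ *
+ * A minimal sketch of this scheme (not the library's internal code) makes the pattern explicit:
+ * \code
+ * #include <assert.h>
+ *
+ * // Pre-1.10 linear chunk index: row-major order over the chunk grid
+ * static unsigned linear_chunk_index(unsigned row, unsigned col, unsigned ncols) {
+ *     return row * ncols + col;
+ * }
+ *
+ * int main(void) {
+ *     // Reproduces the 5 x 3 diagram above
+ *     assert(linear_chunk_index(0, 2, 3) == 2);
+ *     assert(linear_chunk_index(1, 0, 3) == 3);
+ *     assert(linear_chunk_index(4, 2, 3) == 14);
+ *     return 0;
+ * }
+ * \endcode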
+ *
+ * As of HDF5 1.10, the library uses a more complicated way to determine the chunk index. Each dimension gets a fixed
+ * number of bits for the number of chunks in that dimension. When creating the dataset, the library first determines the
+ * number of bits needed to encode the number of chunks in each dimension individually by using the log2 function. It then
+ * partitions the chunk index into bitfields, one for each dimension, where the size of each bitfield is as computed above.
+ * The fastest changing dimension occupies the least significant bits. To compute the chunk index for an individual
+ * chunk, the coordinates of that chunk in the array of chunks are placed into the corresponding bitfields. The 5 x 3 chunk
+ * example above needs 5 bits for its indices (as shown below, the 3 bits in blue are for the row, and the 2 bits in green are for the column).
+ *
+ * \image html chunking8.png "5 bits"
+ *
+ * Therefore, the indices for the 5 x 3 chunks become like this:
+ * \code
+ * 0 1 2
+ * 4 5 6
+ * 8 9 10
+ * 12 13 14
+ * 16 17 18
+ * \endcode
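+ *
+ * A sketch of the computation as described above (again, an illustration rather than the library's internal
+ * implementation):
+ * \code
+ * #include <assert.h>
+ *
+ * // Smallest number of bits b with 2^b >= nchunks
+ * static unsigned bits_for(unsigned nchunks) {
+ *     unsigned b = 0;
+ *     while ((1u << b) < nchunks)
+ *         b++;
+ *     return b;
+ * }
+ *
+ * // 1.10-style chunk index for a 2-d dataset: the column (the fastest
+ * // changing dimension) occupies the least significant bitfield
+ * static unsigned bitfield_chunk_index(unsigned row, unsigned col, unsigned ncols) {
+ *     return (row << bits_for(ncols)) | col;
+ * }
+ *
+ * int main(void) {
+ *     // 5 rows need 3 bits and 3 columns need 2 bits: 5 bits in total
+ *     assert(bits_for(5) == 3 && bits_for(3) == 2);
+ *     // Reproduces the 5 x 3 diagram above: 0 1 2 / 4 5 6 / ... / 16 17 18
+ *     assert(bitfield_chunk_index(0, 2, 3) == 2);
+ *     assert(bitfield_chunk_index(1, 0, 3) == 4);
+ *     assert(bitfield_chunk_index(4, 2, 3) == 18);
+ *     return 0;
+ * }
+ * \endcode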
+ *
+ * This index is then divided by the size of the hash table, nslots, and the remainder is the hash value.
+ * Because this scheme can result in regularly spaced indices being used frequently, it is important that nslots be a
+ * prime number to minimize the chance of collisions. In general, nslots should probably be set to a number approximately
+ * 100 times the number of chunks that can fit in nbytes bytes, unless memory is extremely limited. There is of course no
+ * advantage in setting nslots to a number larger than the total number of chunks in the dataset.
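+ *
+ * A toy illustration of the modulo operation (the real cache is more involved) shows why a poorly chosen nslots
+ * invites collisions: chunk indices that differ by a multiple of nslots land in the same slot.
+ * \code
+ * #include <assert.h>
+ *
+ * // Hash value as described above: chunk index modulo nslots
+ * static unsigned chunk_hash(unsigned index, unsigned nslots) {
+ *     return index % nslots;
+ * }
+ *
+ * int main(void) {
+ *     // With 100 chunks per row, same-column chunks have indices
+ *     // 0, 100, 200, ...; nslots = 100 maps them all to slot 0
+ *     assert(chunk_hash(100, 100) == 0 && chunk_hash(200, 100) == 0);
+ *     // A prime nslots such as 101 spreads them over distinct slots
+ *     assert(chunk_hash(100, 101) == 100 && chunk_hash(200, 101) == 99);
+ *     return 0;
+ * }
+ * \endcode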
+ *
+ * The w0 parameter affects how the library decides which chunk to evict when it needs room in the cache. If w0 is set to 0,
+ * then the library will always evict the least recently used chunk in cache. If w0 is set to 1, the library will always evict
+ * the least recently used chunk which has been fully read or written, and if none have been fully read or written, it will
+ * evict the least recently used chunk. If w0 is between 0 and 1, the behavior will be a blend of the two. Therefore, if the
+ * application will access the same data more than once, w0 should be set closer to 0, and if the application does not, w0
+ * should be set closer to 1.
+ *
+ * It is important to remember that chunk caching will only give a benefit when reading or writing the same chunk more than
+ * once. If, for example, an application is reading an entire dataset, with only whole chunks selected for each operation,
+ * then chunk caching will not help performance, and it may be preferable to completely disable the chunk cache in order to
+ * save memory. It may also be advantageous to disable the chunk cache when writing small amounts to many different chunks,
+ * if memory is not large enough to hold all those chunks in cache at once.
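+ *
+ * The cache parameters discussed above are set with #H5Pset_cache (for every dataset in a file) or
+ * #H5Pset_chunk_cache (for a single dataset, via its access property list). The sketch below uses
+ * illustrative values and a hypothetical file and dataset name:
+ * \code
+ * #include "hdf5.h"
+ *
+ * int main(void) {
+ *     hid_t file_id, dapl_id, dset_id;
+ *
+ *     file_id = H5Fopen("file.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
+ *
+ *     // Dataset access property list carrying the chunk cache settings
+ *     dapl_id = H5Pcreate(H5P_DATASET_ACCESS);
+ *
+ *     // nslots: a prime roughly 100 times the 16 chunks that fit in nbytes;
+ *     // nbytes: room for 16 chunks of 4x4 ints; w0 = 1: evict fully
+ *     // read chunks first, suitable when data is not revisited
+ *     H5Pset_chunk_cache(dapl_id, 1601, 16 * 4 * 4 * sizeof(int), 1.0);
+ *
+ *     // To disable the cache instead, set nslots or nbytes to 0
+ *
+ *     dset_id = H5Dopen(file_id, "dataset", dapl_id);
+ *
+ *     // ... reads here go through the tuned cache ...
+ *
+ *     H5Dclose(dset_id);
+ *     H5Pclose(dapl_id);
+ *     H5Fclose(file_id);
+ * }
+ * \endcode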
+ *
+ * \section sec_hdf5_chunking_filt I/O Filters and Compression
+ *
+ * Dataset chunking also enables the use of I/O filters, including compression. The filters are applied to each chunk individually,
+ * and the entire chunk is processed at once. The filter must be applied every time the chunk is loaded into cache, and every time
+ * the chunk is flushed to disk. These facts all make choosing the proper settings for the chunk cache and chunk size even more
+ * critical for the performance of filtered datasets.
+ *
+ * Because the entire chunk must be filtered every time disk I/O occurs, it is no longer a viable option to disable the
+ * chunk cache when writing small amounts of data to many different chunks. To achieve acceptable performance, it is critical
+ * to minimize the chance that a chunk will be flushed from cache before it is completely read or written. This can be done by
+ * increasing the size of the chunk cache, adjusting the size of the chunks, or adjusting I/O patterns.
+ *
+ * \section sec_hdf5_chunking_limits Chunk Maximum Limits
+ *
+ * Chunks have some maximum limits. They are:
+ * \li The maximum number of elements in a chunk is 2<sup>32</sup>-1, which is equal to 4,294,967,295.
+ * \li The maximum size for any chunk is 4GB.
+ * \li The size of a chunk cannot exceed the size of a fixed-size dataset. For example, a dataset consisting of a 5x4
+ * fixed-size array cannot be defined with 10x10 chunks.
+ *
+ * For more information, see the entry for #H5Pset_chunk in the HDF5 Reference Manual.
+ *
+ * \section sec_hdf5_chunking_pit Pitfalls
+ *
+ * Inappropriate chunk size and cache settings can dramatically reduce performance. There are a number of ways this can happen.
+ * Some of the more common issues include:
+ * \li Chunks are too small: There is a certain amount of overhead associated with finding chunks. When chunks are made
+ * smaller, there are more of them in the dataset. When performing I/O on a dataset, if there are many chunks in the selection,
+ * it will take extra time to look up each chunk. In addition, since the chunks are stored independently, more chunks results
+ * in more I/O operations, further compounding the issue. The extra metadata needed to locate the chunks also causes the file
+ * size to increase as chunks are made smaller. Making chunks larger results in fewer chunk lookups, smaller file size, and
+ * fewer I/O operations in most cases.
+ *
+ * \li Chunks are too large: It may be tempting to simply set the chunk size to be the same as the dataset size in order
+ * to enable compression on a contiguous dataset. However, this can have unintended consequences. Because the entire chunk must
+ * be read from disk and decompressed before performing any operations, this will impose a great performance penalty when operating
+ * on a small subset of the dataset if the cache is not large enough to hold the one-chunk dataset. In addition, if the dataset is
+ * large enough, since the entire chunk must be held in memory while compressing and decompressing, the operation could cause the
+ * operating system to page memory to disk, slowing down the entire system.
+ *
+ * \li Cache is not big enough: Similarly, if the chunk cache is not set to a large enough size for the chunk size and access pattern,
+ * poor performance will result. In general, the chunk cache should be large enough to fit all of the chunks that contain part of a
+ * hyperslab selection used to read or write. When the chunk cache is not large enough, all of the chunks in the selection will be
+ * read into cache, written to disk (if writing), and evicted. If the application then revisits the same chunks, they will have to be
+ * read and possibly written again, whereas if the cache were large enough they would only have to be read (and possibly written) once.
+ * However, if selections for I/O always coincide with chunk boundaries, this does not matter as much, as there is no wasted I/O and the
+ * application is unlikely to revisit the same chunks soon after.
+ * If the total size of the chunks involved in a selection is too big to practically fit into memory, and neither the chunk nor
+ * the selection can be resized or reshaped, it may be better to disable the chunk cache. Whether this is better depends on the
+ * storage order of the selected elements. It will also make little difference if the dataset is filtered, as entire chunks must
+ * be brought into memory anyways in that case. When the chunk cache is disabled and there are no filters, all I/O is done directly
+ * to and from the disk. If the selection is mostly along the fastest changing dimension (i.e. rows), then the data will be more
+ * contiguous on disk, and direct I/O will be more efficient than reading entire chunks, and hence the cache should be disabled. If
+ * however the selection is mostly along the slowest changing dimension (columns), then the data will not be contiguous on disk,
+ * and direct I/O will involve a large number of small operations, and it will probably be more efficient to just operate on the entire
+ * chunk, therefore the cache should be set large enough to hold at least 1 chunk. To disable the chunk cache, either nbytes or nslots
+ * should be set to 0.
+ *
+ * \li Improper hash table size: Because only one chunk can be present in each slot of the hash table, an improperly
+ * set hash table size (nslots) can severely impact performance. For example, if there are 100 columns of chunks in a dataset, and the
+ * hash table size is set to 100, then all the chunks in each row will have the same hash value. Attempting to access a row
+ * of elements will result in each chunk being brought into cache and then evicted to allow the next one to occupy its slot
+ * in the hash table, even if the chunk cache is large enough, in terms of nbytes, to hold all of them. Similar situations can
+ * arise when nslots is a factor or multiple of the number of rows of chunks, or equivalent situations in higher dimensions.
+ *
+ * Luckily, because each slot in the hash table only occupies the size of the pointer for the system, usually 4 or 8 bytes,
+ * there is little reason to keep nslots small. Again, a general rule is that nslots should be set to a prime number at least
+ * 100 times the number of chunks that can fit in nbytes, or simply set to the number of chunks in the dataset.
+ *
+ * \section sec_hdf5_chunking_ad_ref Additional Resources
+ *
+ * The slide set Chunking in HDF5 (PDF),
+ * a tutorial from HDF and HDF-EOS Workshop XIII (2009) provides additional HDF5 chunking use cases and examples.
+ *
+ * The page \ref sec_exapi_desc lists many code examples that are regularly tested with the HDF5 library. Several illustrate
+ * the use of chunking in HDF5, particularly \ref sec_exapi_dsets and \ref sec_exapi_filts.
+ *
+ * \ref hdf5_chunk_issues provides additional information regarding chunking that has not yet been incorporated into this document.
+ *
+ * \section sec_hdf5_chunking_direct Directions for Future Development
+ * As seen above, the HDF5 chunk cache currently requires careful control of the parameters in order to achieve optimal performance.
+ * In the future, we plan to improve the chunk cache to be more foolproof in many ways, and deliver acceptable performance in most
+ * cases even when no thought is given to the chunking parameters.
+ *
+ * One way to make the chunk cache more user-friendly is to automatically resize the chunk cache as needed for each operation.
+ * The cache should be able to detect when the cache should be skipped or when it needs to be enlarged based on the pattern of
+ * I/O operations. At a minimum, it should be able to detect when the cache would severely hurt performance for a single operation
+ * and disable the cache for that operation. This would of course be optional.
+ *
+ * Another way is to allow chaining of entries in the hash table. This would make the hash table size much less of an issue,
+ * as chunks could share the same hash value by making a linked list.
+ *
+ * Finally, it may even be desirable to set some reasonable default chunk size based on the dataset size and possibly some other
+ * information on the intended access pattern. This would probably be a high-level routine.
+ *
+ * Other features planned for chunking include new index methods (besides B-trees), disabling filters for chunks that are partially over
+ * the edge of a dataset, only storing the used portions of these edge chunks, and allowing multiple reader processes to read the same
+ * dataset as a single writer process writes to it.
+ *
+ */
diff --git a/doxygen/img/Chunk_f1.gif b/doxygen/img/Chunk_f1.gif
new file mode 100644
index 0000000000000000000000000000000000000000..d73201a1b2e78115f3bc61964d3f6ac97325ad48
diff --git a/doxygen/img/Chunk_f2.gif b/doxygen/img/Chunk_f2.gif
new file mode 100644
index 0000000000000000000000000000000000000000..68f94337d86742bc6c3891dcc6e6e6e4187abcf3
diff --git a/doxygen/img/Chunk_f3.gif b/doxygen/img/Chunk_f3.gif
new file mode 100644
index 0000000000000000000000000000000000000000..e6e8457869c5f4b9011fb08374441b78ac0c9633
GIT binary patch
literal 6815
[base85-encoded binary GIF data omitted]
diff --git a/doxygen/img/Chunk_f4.gif b/doxygen/img/Chunk_f4.gif
new file mode 100644
index 0000000000000000000000000000000000000000..76f099459faf3a0c9e579256f78235b8278e8be0
GIT binary patch
literal 5772
[base85-encoded binary GIF data omitted]