Enable serialization and shared memory #410
The parallelism model
----------------------
For the most part, ``singularity-eos`` tries to be agnostic to how you
parallelize your code on-node. It knows nothing at all about
distributed memory parallelism, with one exception, discussed
below. An ``EOS`` object can be copied into any parallel code block by
value (see below) and scalar calls do not attempt any internal
multi-threading, meaning ``EOS`` objects are not thread-safe, but are
compatible with thread safety, assuming the user calls them
appropriately. The main complication is ``lambda`` arrays, which are
discussed below.
The vector ``EOS`` method overloads are a bit different. These are
thread-parallel operations launched by ``singularity-eos``. They run

A more generic version of the vector calls exists in the ``Evaluate``
method, which allows the user to specify arbitrary parallel dispatch
models by writing their own loops. See the relevant section below.
Serialization and shared memory
--------------------------------

While ``singularity-eos`` makes a best effort to be agnostic to
parallelism, it exposes several methods that are useful in a
distributed memory environment. In particular, there are two use cases
the library seeks to support:
#. To avoid stressing a filesystem, it may be desirable to load a table from one thread (e.g., MPI rank) and broadcast this data to all other ranks.
#. To save memory, it may be desirable to place tabulated data, which is read-only after it has been loaded from file, into shared memory on a given node, even if all other data is thread-local in a distributed-memory environment. This is possible via, e.g., `MPI Windows`_.
Therefore ``singularity-eos`` exposes several methods that can be used
in this context. The function

.. cpp:function:: std::size_t EOS::SerializedSizeInBytes() const;

returns the amount of memory required in bytes to hold a serialized
EOS object. The return value will depend on the underlying equation of
state model currently contained in the object. The function

.. cpp:function:: std::size_t EOS::SharedMemorySizeInBytes() const;

returns the amount of data (in bytes) that a given object can place
into shared memory. Again, the return value depends on the model the
object currently represents.
.. note::

   Many models may not be able to utilize shared memory at all. This
   holds for most analytic models, for example. The ``EOSPAC`` backend
   will only utilize shared memory if the ``EOSPAC`` version is
   sufficiently recent to support it and if ``singularity-eos`` is
   built with serialization support for ``EOSPAC`` (enabled with
   ``-DSINGULARITY_EOSPAC_ENABLE_SHMEM=ON``).
The function

.. cpp:function:: std::size_t EOS::Serialize(char *dst);

fills the ``dst`` pointer with the memory required for serialization
and returns the number of bytes written to ``dst``. The function

.. cpp:function:: std::pair<std::size_t, char*> EOS::Serialize();

allocates a ``char*`` pointer to contain serialized data and fills
it.
.. warning::

   Serialization and de-serialization may only be performed on objects
   that live in host memory, before you have called
   ``eos.GetOnDevice()``. Attempting to serialize device-initialized
   objects is undefined behavior, but will likely result in a
   segmentation fault.
The pair returned by the allocating overload of ``Serialize`` is the
pointer and its size. The function

.. code-block:: cpp

   std::size_t EOS::DeSerialize(char *src,
                                const SharedMemSettings &stngs = DEFAULT_SHMEM_STNGS)
sets an EOS object based on the serialized representation contained in
``src``. It returns the number of bytes read from ``src``. Optionally,
``DeSerialize`` may also write the data that can be shared to a
pointer contained in ``SharedMemSettings``. If you do this, you must
pass this pointer in, but designate only one thread per shared memory
domain (frequently a node or socket) to actually write to this
data. ``SharedMemSettings`` is a struct containing a ``data`` pointer
and an ``is_domain_root`` boolean:

.. code-block:: cpp

   struct SharedMemSettings {
     SharedMemSettings();
     SharedMemSettings(char *data_, bool is_domain_root_)
         : data(data_), is_domain_root(is_domain_root_) {}
     char *data = nullptr; // defaults
     bool is_domain_root = false;
   };

The ``data`` pointer should point to a shared memory allocation. The
``is_domain_root`` boolean should be true for exactly one thread per
shared memory domain.
For example, you might call ``DeSerialize`` as

.. code-block:: cpp

   std::size_t read_size = eos.DeSerialize(packed_data,
     singularity::SharedMemSettings(shared_data,
                                    my_rank % NTHREADS == 0));
   assert(read_size == write_size); // for safety
.. warning::

   Note that for equation of state models that have dynamically
   allocated memory, ``singularity-eos`` reserves the right to point
   directly at data in ``src``, so it **cannot** be freed until you
   would call ``eos.Finalize()``. If the ``SharedMemSettings`` are
   utilized to request data be written to a shared memory pointer,
   however, you can free the ``src`` pointer, so long as you don't
   free the shared memory pointer.
Putting everything together, a full sequence with MPI might look like this:

.. code-block:: cpp

   singularity::EOS eos;
   std::size_t packed_size, shared_size;
   char *packed_data;
   if (rank == 0) { // load eos object
     eos = singularity::StellarCollapse(filename);
     packed_size = eos.SerializedSizeInBytes();
     shared_size = eos.SharedMemorySizeInBytes();
   }

   // Send sizes
   MPI_Bcast(&packed_size, 1, MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);
   MPI_Bcast(&shared_size, 1, MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);

   // Allocate data needed
   packed_data = (char*)malloc(packed_size);
   if (rank == 0) {
     eos.Serialize(packed_data);
     eos.Finalize(); // Clean up this EOS object so it can be reused.
   }
   MPI_Bcast(packed_data, packed_size, MPI_BYTE, 0, MPI_COMM_WORLD);

   // The default settings don't use shared memory.
   // We will change them below if shared memory is enabled.
   singularity::SharedMemSettings settings = singularity::DEFAULT_SHMEM_STNGS;

   char *shared_data;
   char *mpi_base_pointer;
   int mpi_unit;
   MPI_Aint query_size;
   MPI_Win window;
   MPI_Comm shared_memory_comm;
   int island_rank, island_size; // rank in, size of shared memory region
   if (use_mpi_shared_memory) {
     // Generate shared memory comms
     MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                         MPI_INFO_NULL, &shared_memory_comm);
     // rank in a region that shares memory
     MPI_Comm_rank(shared_memory_comm, &island_rank);
     // size of a region that shares memory
     MPI_Comm_size(shared_memory_comm, &island_size);

     // Create the MPI shared memory object and get a pointer to shared data.
     // This allocation is collective and must be called on every rank.
     // The total size of the allocation is the sum over ranks in the shared
     // memory comm of the requested memory, so it's valid to request all you
     // want on rank 0 and nothing on the remaining ranks.
     MPI_Win_allocate_shared((island_rank == 0) ? shared_size : 0,
                             1, MPI_INFO_NULL, shared_memory_comm,
                             &mpi_base_pointer, &window);
     // This gets a pointer to the shared memory allocation, valid in the
     // local address space on every rank.
     MPI_Win_shared_query(window, MPI_PROC_NULL, &query_size, &mpi_unit,
                          &shared_data);
     // Mutex for the MPI window. Writing to shared memory currently allowed.
     MPI_Win_lock_all(MPI_MODE_NOCHECK, window);
     // Set SharedMemSettings
     settings.data = shared_data;
     settings.is_domain_root = (island_rank == 0);
   }
   eos.DeSerialize(packed_data, settings);
   if (use_mpi_shared_memory) {
     MPI_Win_unlock_all(window); // Writing to shared memory disabled.
     MPI_Barrier(shared_memory_comm);
     free(packed_data);
   }
In the case where many EOS objects may be active at once, you can
combine serialization and comm steps. You may wish, for example, to
have a single pointer containing all serialized EOS's, and the same
for the shared memory. ``singularity-eos`` provides machinery to do
this in the ``singularity-eos/base/serialization_utils.hpp`` header.
This provides a helper struct, ``BulkSerializer``:
.. code-block:: cpp

   template<typename Container_t, typename Resizer_t = MemberResizer>
   singularity::BulkSerializer
which may be initialized by a collection of ``EOS`` objects or by
simply assigning (or constructing) its member field, ``eos_objects``,
appropriately. An example ``Container_t`` might be
``std::vector<EOS>``. A specialization for ``vector`` is provided as
``VectorSerializer``. The ``Resizer_t`` is a functor that knows how to
resize a collection. For example, the ``MemberResizer`` functor used
for ``std::vector`` is
.. code-block:: cpp

   struct MemberResizer {
     template<typename Collection_t>
     void operator()(Collection_t &collection, std::size_t count) {
       collection.resize(count);
     }
   };

which will work for any ``stl`` container with a ``resize`` method.
The ``BulkSerializer`` provides all the above-described serialization
functions for ``EOS`` objects: ``SerializedSizeInBytes``,
``SharedMemorySizeInBytes``, ``Serialize``, and ``DeSerialize``, but
it operates on all ``EOS`` objects contained in the container it
wraps, not just one. Example usage might look like this:
.. code-block:: cpp

   std::size_t packed_size, shared_size;
   char *packed_data;
   singularity::VectorSerializer<EOS> serializer;
   if (rank == 0) { // load eos objects
     // Code to initialize a bunch of EOS objects into a std::vector<EOS>
     /*
       Initialization code goes here
     */
     serializer = singularity::VectorSerializer<EOS>(eos_vec);
     packed_size = serializer.SerializedSizeInBytes();
     shared_size = serializer.SharedMemorySizeInBytes();
   }

   // Send sizes
   MPI_Bcast(&packed_size, 1, MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);
   MPI_Bcast(&shared_size, 1, MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);

   // Allocate data needed
   packed_data = (char*)malloc(packed_size);
   if (rank == 0) {
     serializer.Serialize(packed_data);
     serializer.Finalize(); // Clean up all EOSs owned by the serializer
   }
   MPI_Bcast(packed_data, packed_size, MPI_BYTE, 0, MPI_COMM_WORLD);

   singularity::SharedMemSettings settings = singularity::DEFAULT_SHMEM_STNGS;
   // same MPI declarations as above
   if (use_mpi_shared_memory) {
     // same MPI code as above, including setting the settings
     settings.data = shared_data;
     settings.is_domain_root = (island_rank == 0);
   }
   singularity::VectorSerializer<EOS> deserializer;
   deserializer.DeSerialize(packed_data, settings);
   if (use_mpi_shared_memory) {
     // same MPI code as above
   }
   // extract each individual EOS and do something with it
   std::vector<EOS> eos_host_vec = deserializer.eos_objects;
   // get on device if you want
   for (auto &eos : eos_host_vec) {
     EOS eos_device = eos.GetOnDevice();
     // ...
   }
It is also possible (with care) to mix serializers; i.e., you might
serialize with a ``VectorSerializer`` and de-serialize with a
different container, as all that is required is that the container
have a ``size``, provide iterators, and be capable of being resized.
.. warning::

   Since EOSPAC is a library, de-serialization is destructive for
   EOSPAC and may have side effects.

.. _`MPI Windows`: https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report/node311.htm
.. _variant section:

Variants
unmodified EOS model, call

.. cpp:function:: auto GetUnmodifiedObject();

The return value here will be either the ``EOS`` variant type or the
unmodified model type (for example ``IdealGas``), depending on whether
this method was called within a variant or on a standalone model
outside a variant.
might look something like this:

.. _eos methods reference section:
CheckParams
------------

You may check whether or not an equation of state object is
constructed self-consistently and ready for use by calling

.. cpp:function:: void CheckParams() const;

which raises an error and/or prints an equation of state specific
error message if something has gone wrong. Most EOS constructors and
ways of building an EOS call ``CheckParams`` by default.
Equation of State Methods Reference
------------------------------------
constexpr unsigned long all_values = (1 << 7) - 1;
} // namespace thermalqs

constexpr size_t MAX_NUM_LAMBDAS = 3;
enum class DataStatus { Deallocated = 0, OnDevice = 1, OnHost = 2, UnManaged = 3 };
enum class TableStatus { OnTable = 0, OffBottom = 1, OffTop = 2 };
constexpr Real ROOM_TEMPERATURE = 293; // K
constexpr Real ATMOSPHERIC_PRESSURE = 1e6;

struct SharedMemSettings {
  SharedMemSettings() = default;
  SharedMemSettings(char *data_, bool is_domain_root_)
      : data(data_), is_domain_root(is_domain_root_) {}
  bool CopyNeeded() const { return (data != nullptr) && is_domain_root; }
  char *data = nullptr;
  bool is_domain_root = false;
};
const SharedMemSettings DEFAULT_SHMEM_STNGS = SharedMemSettings();

} // namespace singularity

#endif // SINGULARITY_EOS_BASE_CONSTANTS_HPP_