Release Notes for Domi for Trilinos 12.12
=========================================
Enhancements
------------
* Added a more sophisticated processor decomposition algorithm. If
  the decomposition of the processors along each axis is specified
  incompletely for two or more axes, Domi now returns a much more
  logical decomposition.
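The idea behind completing a partial decomposition can be sketched conceptually in Python. This is an illustration only, not Domi's actual algorithm: it assumes a convention where -1 marks an unspecified axis and the remaining processor factor is spread as evenly as possible across the unspecified axes.

```python
# Hypothetical sketch (NOT Domi's actual algorithm): complete a partial
# per-axis decomposition of nprocs processors, where -1 marks an
# unspecified axis, by distributing the remaining prime factors as
# evenly as possible across the unspecified axes.
def complete_decomposition(nprocs, axis_sizes):
    specified = [s for s in axis_sizes if s != -1]
    remaining = nprocs
    for s in specified:
        if remaining % s != 0:
            raise ValueError("specified axis sizes do not divide nprocs")
        remaining //= s
    # Collect the prime factors of the remaining processor count.
    factors = []
    f, n = 2, remaining
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 1
    if n > 1:
        factors.append(n)
    # Assign factors, largest first, to the currently smallest free axis.
    free = [i for i, s in enumerate(axis_sizes) if s == -1]
    result = [1 if s == -1 else s for s in axis_sizes]
    for f in sorted(factors, reverse=True):
        smallest = min(free, key=lambda i: result[i])
        result[smallest] *= f
    return result

print(complete_decomposition(12, [-1, 2, -1]))  # [3, 2, 2]
```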
Bug fixes
---------
* Fixed a memory management error related to Tuple and ArrayView
* Updated the ParameterList documentation so that parameter lists
  display properly in the HTML documentation
* Fixed a bug in the MDMap::getAugmentedMDMap() method
Release Notes for Domi for Trilinos 12.8
========================================
Added replicated boundaries
---------------------------
* A replicated boundary exists only on a periodic domain, and is
  simply a convention that the two end points represent the same
  physical point: for example, a left end coordinate that represents
  0 degrees and a right end coordinate that represents 360 degrees.
  Domi now supports either convention, and the choice affects
  communication
* Added additional tests for periodic domains
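The replicated-boundary convention can be sketched conceptually in Python (an illustration of the idea, not Domi's API): when the boundary is replicated, the last stored index is the same physical point as the first, so the number of unique points is one less than the stored dimension.

```python
# Conceptual sketch (assumed semantics, not Domi's API): on a periodic
# axis with a replicated boundary, the last global index is the same
# physical point as the first, so the number of unique points is one
# less than the stored dimension.
def unique_points(dimension, periodic, replicated):
    # A replicated boundary only makes sense on a periodic axis.
    if replicated and not periodic:
        raise ValueError("replicated boundaries require a periodic axis")
    return dimension - 1 if replicated else dimension

# 361 stored longitude values, 0 through 360 degrees inclusive:
print(unique_points(361, periodic=True, replicated=True))   # 360
print(unique_points(361, periodic=True, replicated=False))  # 361
```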
Enhancements
------------
* New MDVector constructor that takes a parent MDVector and an array
of Slices
* MDMap support for axis maps
* MDMap getMDComm() method
Release Notes for Domi for Trilinos 12.4
========================================
Various bug fixes:
------------------
* Fixed a bug where a periodic axis under a serial build would not
update communication padding. The parallel version uses MPI
capabilities to do this, so a serial-only capability had to be
added for this corner case.
* Made a test that performs a dynamic type comparison more robust.
* Fixed some minor bugs in the MDVector getTpetraMultiVector()
  method.
* The MDArrayView size() method was made const, as it should be.
Created serial tests
--------------------
* Domi does not provide much in the way of unique capabilities if it
is compiled in serial. Nevertheless, all tests that run on 1
processor were updated to run under a serial build of Trilinos as
well.
Release Notes for Domi for Trilinos 12.0
========================================
Domi provides distributed data structures for multi-dimensional data.
The inspirational use case is parallel domain decomposition for finite
difference applications. To that end, Domi provides the following
classes:
MDArray, MDArrayView, MDArrayRCP
These classes define multi-dimensional arrays, with arbitrary
runtime-defined dimensions, built on top of the Teuchos Array,
ArrayView, and ArrayRCP classes, respectively. These are serial
in nature and provide the mechanism for data manipulation on each
processor of distributed MDVectors.
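The core job these array classes perform, multi-dimensional indexing into flat storage with arbitrary runtime-defined dimensions, can be sketched conceptually in Python. The class below is a hypothetical illustration using row-major strides, not Domi's C++ implementation.

```python
# Conceptual sketch (not Domi's API): multi-dimensional indexing into
# flat storage with runtime-defined dimensions, using a row-major
# offset computation.
class SimpleMDArray:
    def __init__(self, dims, fill=0):
        self.dims = tuple(dims)
        size = 1
        for d in self.dims:
            size *= d
        self.data = [fill] * size

    def _offset(self, index):
        # Row-major: offset = ((i0 * d1 + i1) * d2 + i2) ...
        off = 0
        for i, d in zip(index, self.dims):
            off = off * d + i
        return off

    def __getitem__(self, index):
        return self.data[self._offset(index)]

    def __setitem__(self, index, value):
        self.data[self._offset(index)] = value

a = SimpleMDArray((2, 3, 4))
a[1, 2, 3] = 42
print(a[1, 2, 3])  # 42
```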
Slice
The Slice class is inspired by the Python slice object, which
stores a start, stop, and step index, and can be constructed to
utilize a variety of logical default values. Slices can be used
on all multi-dimensional objects to return views into subsets of
those objects.
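Since the source names Python's slice object as the inspiration, its semantics make a concrete reference point: a start, stop, and step, with logical defaults for any value that is omitted.

```python
# Python's built-in slice object, the stated inspiration for Domi's
# Slice class: start, stop, and step, with logical defaults for
# omitted values.
data = list(range(10))          # [0, 1, ..., 9]

s = slice(2, 8, 2)              # explicit start, stop, step
print(data[s])                  # [2, 4, 6]

# Omitted values take logical defaults: start=0, stop=len, step=1.
print(data[slice(None, None, None)] == data)  # True

# indices() resolves the defaults against a concrete length.
print(slice(None, None, 2).indices(10))       # (0, 10, 2)
```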
MDComm
Multi-dimensional communicator. This communicator provides a map
between a multi-dimensional array of processors and their ranks,
and can be queried for neighbor ranks. MDComms can be sliced,
which returns a sub-communicator.
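The coordinate-to-rank map and neighbor query described above can be sketched conceptually in Python. This is an illustration under an assumed row-major ordering, not Domi's C++ API.

```python
# Conceptual sketch (not Domi's API): the map an MDComm maintains
# between multi-dimensional processor coordinates and flat ranks,
# assuming row-major ordering, plus a neighbor-rank query.
def coords_to_rank(coords, axis_sizes):
    rank = 0
    for c, n in zip(coords, axis_sizes):
        rank = rank * n + c
    return rank

def neighbor_rank(coords, axis_sizes, axis, step, periodic=False):
    c = coords[axis] + step
    if periodic:
        c %= axis_sizes[axis]          # wrap around a periodic axis
    elif not 0 <= c < axis_sizes[axis]:
        return None                    # no neighbor past a boundary
    new_coords = list(coords)
    new_coords[axis] = c
    return coords_to_rank(new_coords, axis_sizes)

# 2 x 3 processor grid:
print(coords_to_rank((1, 2), (2, 3)))                                 # 5
print(neighbor_rank((0, 0), (2, 3), axis=1, step=-1, periodic=True))  # 2
```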
MDMap
An MDMap describes the decomposition of an MDVector on an MDComm.
It stores the start and stop indexes along each dimension,
including boundary padding for algorithms that require extra
indexes along boundaries, and communication padding used to
update values from neighboring processors. MDMaps can be sliced,
and the resulting MDMap may reside on a sub-communicator. An
MDMap can be converted to an equivalent Epetra or Tpetra Map.
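The per-axis bookkeeping an MDMap performs can be sketched conceptually in Python: each processor owns a contiguous range of global indexes, extended by communication padding toward neighbors and boundary padding at the ends of the domain. The even-split rule below is an assumption for illustration, not Domi's actual decomposition.

```python
# Conceptual sketch (not Domi's API) of MDMap-style bookkeeping along
# one axis: an owned index range, widened by communication padding on
# interior edges and boundary padding on domain edges.
def local_range(axis_rank, axis_size, dimension, comm_pad, bndry_pad):
    # Evenly split `dimension` indexes among `axis_size` processors
    # (assumed rule; extras go to the lowest-ranked processors).
    base = dimension // axis_size
    extra = dimension % axis_size
    start = axis_rank * base + min(axis_rank, extra)
    stop = start + base + (1 if axis_rank < extra else 0)
    # Boundary padding at the domain edges, communication padding
    # everywhere a neighbor exists.
    lo_pad = bndry_pad if axis_rank == 0 else comm_pad
    hi_pad = bndry_pad if axis_rank == axis_size - 1 else comm_pad
    return (start - lo_pad, stop + hi_pad)

# 10 indexes over 2 processors, comm padding 1, boundary padding 0:
print(local_range(0, 2, 10, comm_pad=1, bndry_pad=0))  # (0, 6)
print(local_range(1, 2, 10, comm_pad=1, bndry_pad=0))  # (4, 10)
```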
MDVector
An MDVector is a multi-dimensional array, distributed in a
structured manner across multiple processors. This distribution
is described by an MDVector's MDMap. The MDVector's data is
stored locally on each processor with an MDArrayRCP. An MDVector
can update its communication padding from neighboring processors
automatically. An MDVector can be sliced, and the resulting
MDVector may reside on a sub-communicator. An MDVector can be
converted to equivalent Epetra or Tpetra Vectors or MultiVectors.
If there are no stride gaps in the data due to slicing, these
converted Epetra and Tpetra objects may be views of the original
data.
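What "updating communication padding from neighboring processors" means can be sketched conceptually in one dimension. In this illustration the "processors" are simply lists in a single process, standing in for the MPI exchange Domi actually performs; it is not Domi's API.

```python
# Conceptual sketch (not Domi's API) of a 1-D communication-padding
# update: each "processor" copies its neighbors' edge-owned values
# into its own padding cells. Lists stand in for per-processor data.
def update_comm_pad(procs, pad):
    # Each procs[p] holds: pad cells, owned cells, pad cells.
    for p, arr in enumerate(procs):
        if p > 0:                      # fill left padding from the
            left = procs[p - 1]        # left neighbor's owned edge
            arr[:pad] = left[len(left) - 2 * pad : len(left) - pad]
        if p < len(procs) - 1:         # fill right padding from the
            right = procs[p + 1]       # right neighbor's owned edge
            arr[len(arr) - pad :] = right[pad : 2 * pad]

# Two processors, pad = 1; owned values 1,2,3 and 4,5,6; 0 marks
# padding cells that have not yet been filled.
procs = [[0, 1, 2, 3, 0], [0, 4, 5, 6, 0]]
update_comm_pad(procs, 1)
print(procs)  # [[0, 1, 2, 3, 4], [3, 4, 5, 6, 0]]
```

Because the update only writes padding cells and only reads owned cells, the order of the in-place copies does not matter.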