Features/54 distributed random numbers #362

Merged
32 commits, merged Sep 12, 2019
Changes from all commits
Commits
a38ce5d
Provided get and set state functions, reworked seed setting, first th…
Markus-Goetz Jun 6, 2019
08e5835
Added threefry64, added intxx to floatxx conversion functions
Markus-Goetz Jun 13, 2019
52837a5
Added float conversion sugar, added Kundu random normal transformation
Markus-Goetz Jun 17, 2019
b0a2a90
Broken inbetween state, nothing working yet, but would like to backup…
Markus-Goetz Jul 4, 2019
4e593f8
Simon taking over
Markus-Goetz Aug 14, 2019
8e726ac
Merge remote-tracking branch 'origin/master' into features/54-distrib…
Aug 26, 2019
8a87c4f
Implemented the counter_sequenze function and added multiple test cases
Aug 26, 2019
7039689
Merge branch 'master' into features/54-distributed-random-numbers
Markus-Goetz Aug 28, 2019
3d46337
fixing unit test that broke down because of new random generator
Aug 29, 2019
526c029
fixed a bug in random
Aug 29, 2019
4e2339e
Merge branch 'master' into features/54-distributed-random-numbers
coquelin77 Aug 29, 2019
f53311d
fixed the reduce function max and min
Sep 2, 2019
1c59145
Merge branch 'features/54-distributed-random-numbers' of https://gith…
Sep 2, 2019
3d08648
fixed the kmeans setup to fit the new random module
Sep 2, 2019
d33f743
unit tests now running in kmeans
Sep 2, 2019
2d511a5
reduced the number of iterations for the threefry algorithm
Sep 6, 2019
1c9daa3
Merge remote-tracking branch 'origin/master' into features/54-distrib…
Sep 6, 2019
e9d2f9b
Fixed the randn and randint functions and added test cases for both o…
Sep 6, 2019
2d54f25
removed unnecessary imports
Sep 6, 2019
0d6d0e6
added more negative test cases
Sep 6, 2019
8b46d13
fixed a bug
Sep 6, 2019
2ad7aa9
renewed the function description
Sep 6, 2019
1471d90
implemented rand for float32
Sep 9, 2019
47a781e
added test cases for randint with int32
Sep 9, 2019
4f14b43
added tests for randn with float32
Sep 9, 2019
b4d3047
Merge remote-tracking branch 'origin/master' into features/54-distrib…
Sep 9, 2019
9e94112
added one more test for wrong type input
Sep 9, 2019
a106e46
trying to fix threefry with 32 bit
Sep 9, 2019
418d8ad
Merge branch 'master' into features/54-distributed-random-numbers
Markus-Goetz Sep 12, 2019
ca8f64a
threefry32 is now done
Sep 12, 2019
b9d9fd2
Merge branch 'features/54-distributed-random-numbers' of https://gith…
Sep 12, 2019
e774c95
set rounds of the threefry algorithm to 8 for both implementations
Sep 12, 2019
Empty file heat/core/manipulation.py removed.
heat/core/manipulations.py: 2 changes (0 additions, 2 deletions)
@@ -569,8 +569,6 @@ def sort(a, axis=None, descending=False, out=None):
second_result[idx_slice] = r_val
second_indices[idx_slice] = r_ind

- # print('second_result', second_result, 'tmp_indices', second_indices)
-
second_result, tmp_indices = second_result.sort(dim=0, descending=descending)
final_result = second_result.transpose(0, axis)
final_indices = torch.empty_like(second_indices)
heat/core/operations.py: 6 changes (3 additions, 3 deletions)
@@ -223,9 +223,9 @@ def __reduce_op(x, partial_op, reduction_op, **kwargs):
lshape_losedim = tuple(x.lshape[dim] for dim in range(len(x.lshape)) if dim not in axis)
output_shape = gshape_losedim
# Take care of special cases argmin and argmax: keep partial.shape[0]
- if (0 in axis and partial.shape[0] != 1):
+ if 0 in axis and partial.shape[0] != 1:
lshape_losedim = (partial.shape[0],) + lshape_losedim
- if (not 0 in axis and partial.shape[0] != x.lshape[0]):
+ if 0 not in axis and partial.shape[0] != x.lshape[0]:
lshape_losedim = (partial.shape[0],) + lshape_losedim[1:]
partial = partial.reshape(lshape_losedim)

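The first hunk in operations.py is purely stylistic: the redundant parentheses around the conditions are dropped and `not 0 in axis` becomes the equivalent, PEP 8 preferred `0 not in axis`. A quick standalone check of that equivalence (plain Python, not Heat code):

```python
# `not x in y` parses as `not (x in y)`, so the two spellings always agree;
# `x not in y` is simply the idiomatic form.
axis = (0, 1)
assert (not 0 in axis) == (0 not in axis)
assert (not 2 in axis) == (2 not in axis)
```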
@@ -241,7 +241,7 @@ def __reduce_op(x, partial_op, reduction_op, **kwargs):

# if reduction_op is a Boolean operation, then resulting tensor is bool
boolean_ops = [MPI.LAND, MPI.LOR, MPI.BAND, MPI.BOR]
- tensor_type = bool if reduction_op in boolean_ops else partial[0].dtype
+ tensor_type = bool if reduction_op in boolean_ops else partial.dtype

if out is not None:
out._DNDarray__array = partial
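The second hunk reads the result dtype from `partial.dtype` instead of `partial[0].dtype`. A plausible reason, which is an assumption here since the PR does not spell it out, is that a fully reduced local result can be a zero-dimensional tensor, where indexing with `[0]` raises an error while `.dtype` is always available. A minimal PyTorch-only illustration of that failure mode:

```python
import torch

# Assumed failure mode, not the actual Heat call site: a fully reduced local
# result is a 0-d tensor, which cannot be indexed but still exposes .dtype.
partial = torch.tensor(5.0)      # 0-d tensor, e.g. a completed local sum
print(partial.dtype)             # torch.float32 -- what the new code reads
try:
    partial[0].dtype             # what the old code attempted
except IndexError as err:
    print("indexing a 0-d tensor fails:", err)
```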