Reduce Sum Square Operator #428

Merged · 3 commits · Nov 11, 2023
5 changes: 5 additions & 0 deletions docs/CHANGELOG.md
@@ -4,6 +4,11 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased] - 2023-11-06

### Added
- Reduce Sum Square Operator.

## [Unreleased] - 2023-11-05

### Added
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -97,6 +97,7 @@
* [tensor.bitwise_and](framework/operators/tensor/tensor.bitwise_and.md)
* [tensor.round](framework/operators/tensor/tensor.round.md)
* [tensor.scatter](framework/operators/tensor/tensor.scatter.md)
* [tensor.reduce\_sum\_square](framework/operators/tensor/tensor.reduce\_sum\_square.md)
* [tensor.reduce\_l2](framework/operators/tensor/tensor.reduce\_l2.md)
* [tensor.reduce\_l1](framework/operators/tensor/tensor.reduce\_l1.md)
* [Neural Network](framework/operators/neural-network/README.md)
1 change: 1 addition & 0 deletions docs/framework/compatibility.md
@@ -75,6 +75,7 @@ You can see below the list of current supported ONNX Operators:
| [Round](operators/tensor/tensor.round.md) | :white\_check\_mark: |
| [MaxInTensor](operators/tensor/tensor.max\_in\_tensor.md) | :white\_check\_mark: |
| [Max](operators/tensor/tensor.max.md) | :white\_check\_mark: |
| [ReduceSumSquare](operators/tensor/tensor.reduce\_sum\_square.md) | :white\_check\_mark: |
| [Trilu](operators/tensor/tensor.trilu.md) | :white\_check\_mark: |
| [Scatter](operators/tensor/tensor.scatter.md) | :white\_check\_mark: |
| [ReduceL1](operators/tensor/tensor.reduce\_l1.md) | :white\_check\_mark: |
1 change: 1 addition & 0 deletions docs/framework/operators/tensor/README.md
@@ -94,6 +94,7 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.round`](tensor.round.md) | Computes the round value of all elements in the input tensor. |
| [`tensor.trilu`](tensor.trilu.md) | Returns the upper or lower triangular part of a tensor or a batch of 2D matrices. |
| [`tensor.scatter`](tensor.scatter.md) | Produces a copy of input data, and updates value to values specified by updates at specific index positions specified by indices. |
| [`tensor.reduce_sum_square`](tensor.reduce\_sum\_square.md) | Computes the sum square of the input tensor's elements along the provided axes. |
| [`tensor.reduce_l1`](tensor.reduce\_l1.md) | Computes the L1 norm of the input tensor's elements along the provided axes. |

## Arithmetic Operations
Empty file.
38 changes: 38 additions & 0 deletions docs/framework/operators/tensor/tensor.reduce_sum_square.md
@@ -0,0 +1,38 @@
## tensor.reduce_sum_square

```rust
fn reduce_sum_square(self: @Tensor<T>, axis: usize, keepdims: bool) -> Tensor<T>;
```

Computes the sum square of the input tensor's elements along the provided axes.

## Args

* `self`(`@Tensor<T>`) - The input tensor.
* `axis`(`usize`) - The dimension to reduce.
* `keepdims`(`bool`) - If true, retains reduced dimensions with length 1.

## Panics

* Panics if axis is not in the range of the input tensor's dimensions.

## Returns

A new `Tensor<T>` instance with the specified axis reduced by summing the squares of its elements.

## Examples

```rust
use array::{ArrayTrait, SpanTrait};
use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn reduce_sum_square_example() -> Tensor<u32> {
    let mut shape = ArrayTrait::<usize>::new();
    shape.append(2);
    shape.append(2);
    let mut data = ArrayTrait::new();
    data.append(1);
    data.append(2);
    data.append(3);
    data.append(4);
    let tensor = TensorTrait::<u32>::new(shape.span(), data.span());

    // We can call the `reduce_sum_square` function as follows.
    return tensor.reduce_sum_square(axis: 1, keepdims: true);
}
>>> [[5],[25]]
```
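As a quick cross-check of the example above, the same computation can be reproduced with NumPy (illustrative only; not part of this PR):

```python
import numpy as np

# The 2x2 tensor from the example above.
x = np.array([[1, 2], [3, 4]], dtype=np.uint32)

# ReduceSumSquare: square each element, then sum along axis 1,
# keeping the reduced dimension (keepdims=True).
y = np.sum(np.square(x), axis=1, keepdims=True)

print(y)  # [[ 5]
          #  [25]]  -> shape (2, 1)
```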
266 changes: 266 additions & 0 deletions nodegen/node/reduce_sum_square.py
@@ -0,0 +1,266 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_node, make_test, to_fp, Tensor, Dtype, FixedImpl


class Reduce_sum_square(RunAll):
@staticmethod
def reduce_sum_square_fp8x23():
def reduce_sum_square_export_do_not_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = False
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=False).astype(np.int64)

            # Convert the integer values to their FP8x23 fixed-point representation.
            x = Tensor(Dtype.FP8x23, x.shape, to_fp(x.flatten(), FixedImpl.FP8x23))
            y = Tensor(Dtype.FP8x23, y.shape, to_fp(y.flatten(), FixedImpl.FP8x23))

name = "reduce_sum_square_fp8x23_export_do_not_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, false)", name)

def reduce_sum_square_export_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int64)

            x = Tensor(Dtype.FP8x23, x.shape, to_fp(x.flatten(), FixedImpl.FP8x23))
            y = Tensor(Dtype.FP8x23, y.shape, to_fp(y.flatten(), FixedImpl.FP8x23))

name = "reduce_sum_square_fp8x23_export_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, true)", name)

def reduce_sum_square_axis_0():
shape = [3, 3, 3]
axes = np.array([0], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int64)

            x = Tensor(Dtype.FP8x23, x.shape, to_fp(x.flatten(), FixedImpl.FP8x23))
            y = Tensor(Dtype.FP8x23, y.shape, to_fp(y.flatten(), FixedImpl.FP8x23))

name = "reduce_sum_square_fp8x23_export_negative_axes_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(0, true)", name)


reduce_sum_square_export_do_not_keepdims()
reduce_sum_square_export_keepdims()
reduce_sum_square_axis_0()

@staticmethod
def reduce_sum_square_fp16x16():
def reduce_sum_square_export_do_not_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = False
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=False).astype(np.int64)

            # Convert the integer values to their FP16x16 fixed-point representation.
            x = Tensor(Dtype.FP16x16, x.shape, to_fp(x.flatten(), FixedImpl.FP16x16))
            y = Tensor(Dtype.FP16x16, y.shape, to_fp(y.flatten(), FixedImpl.FP16x16))

name = "reduce_sum_square_fp16x16_export_do_not_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, false)", name)

def reduce_sum_square_export_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int64)

            x = Tensor(Dtype.FP16x16, x.shape, to_fp(x.flatten(), FixedImpl.FP16x16))
            y = Tensor(Dtype.FP16x16, y.shape, to_fp(y.flatten(), FixedImpl.FP16x16))

name = "reduce_sum_square_fp16x16_export_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, true)", name)

def reduce_sum_square_axis_0():
shape = [2, 2, 2]
axes = np.array([0], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int64)

            x = Tensor(Dtype.FP16x16, x.shape, to_fp(x.flatten(), FixedImpl.FP16x16))
            y = Tensor(Dtype.FP16x16, y.shape, to_fp(y.flatten(), FixedImpl.FP16x16))

name = "reduce_sum_square_fp16x16_export_negative_axes_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(0, true)", name)


reduce_sum_square_export_do_not_keepdims()
reduce_sum_square_export_keepdims()
reduce_sum_square_axis_0()

@staticmethod
def reduce_sum_square_i8():
def reduce_sum_square_export_do_not_keepdims():
shape = [2, 2, 2]
axes = np.array([2], dtype=np.int8)
keepdims = False
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int8)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=False).astype(np.int8)

x = Tensor(Dtype.I8, x.shape, x.flatten())
y = Tensor(Dtype.I8, y.shape, y.flatten())

name = "reduce_sum_square_i8_export_do_not_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, false)", name)

def reduce_sum_square_export_keepdims():
shape = [2, 2, 2]
axes = np.array([2], dtype=np.int8)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int8)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int8)

x = Tensor(Dtype.I8, x.shape, x.flatten())
y = Tensor(Dtype.I8, y.shape, y.flatten())

name = "reduce_sum_square_i8_export_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, true)", name)

def reduce_sum_square_axis_0():
shape = [2, 2, 2]
axes = np.array([0], dtype=np.int8)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int8)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int8)

x = Tensor(Dtype.I8, x.shape, x.flatten())
y = Tensor(Dtype.I8, y.shape, y.flatten())

name = "reduce_sum_square_i8_export_negative_axes_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(0, true)", name)


reduce_sum_square_export_do_not_keepdims()
reduce_sum_square_export_keepdims()
reduce_sum_square_axis_0()

@staticmethod
def reduce_sum_square_i32():
def reduce_sum_square_export_do_not_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int32)
keepdims = False
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int32)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=False).astype(np.int32)

x = Tensor(Dtype.I32, x.shape, x.flatten())
y = Tensor(Dtype.I32, y.shape, y.flatten())

name = "reduce_sum_square_i32_export_do_not_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, false)", name)

def reduce_sum_square_export_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int32)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int32)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int32)

x = Tensor(Dtype.I32, x.shape, x.flatten())
y = Tensor(Dtype.I32, y.shape, y.flatten())

name = "reduce_sum_square_i32_export_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, true)", name)

def reduce_sum_square_axis_0():
shape = [3, 3, 3]
axes = np.array([0], dtype=np.int32)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int32)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.int32)

x = Tensor(Dtype.I32, x.shape, x.flatten())
y = Tensor(Dtype.I32, y.shape, y.flatten())

name = "reduce_sum_square_i32_export_negative_axes_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(0, true)", name)


reduce_sum_square_export_do_not_keepdims()
reduce_sum_square_export_keepdims()
reduce_sum_square_axis_0()

@staticmethod
def reduce_sum_square_u32():
def reduce_sum_square_export_do_not_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.uint32)
keepdims = False
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.uint32)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=False).astype(np.uint32)

x = Tensor(Dtype.U32, x.shape, x.flatten())
y = Tensor(Dtype.U32, y.shape, y.flatten())

name = "reduce_sum_square_u32_export_do_not_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, false)", name)

def reduce_sum_square_export_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.uint32)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.uint32)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.uint32)

x = Tensor(Dtype.U32, x.shape, x.flatten())
y = Tensor(Dtype.U32, y.shape, y.flatten())

name = "reduce_sum_square_u32_export_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(2, true)", name)

def reduce_sum_square_axis_0():
shape = [3, 3, 3]
axes = np.array([0], dtype=np.uint32)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.uint32)
y = np.sum(a=np.square(x), axis=tuple(axes), keepdims=True).astype(np.uint32)

x = Tensor(Dtype.U32, x.shape, x.flatten())
y = Tensor(Dtype.U32, y.shape, y.flatten())

name = "reduce_sum_square_u32_export_negative_axes_keepdims"
make_node([x], [y], name)
make_test(
[x], y, "input_0.reduce_sum_square(0, true)", name)


reduce_sum_square_export_do_not_keepdims()
reduce_sum_square_export_keepdims()
reduce_sum_square_axis_0()
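To make the generated expected values concrete, here is the `do_not_keepdims` case for shape `[3, 2, 2]` worked out by hand with the same NumPy reference computation the generator uses (a standalone sketch, not part of the generated files):

```python
import numpy as np

shape = [3, 2, 2]
x = np.arange(1, np.prod(shape) + 1, dtype=np.int64).reshape(shape)
# x = [[[ 1,  2], [ 3,  4]],
#      [[ 5,  6], [ 7,  8]],
#      [[ 9, 10], [11, 12]]]

# ReduceSumSquare along the last axis: square, then sum, dropping the axis.
y = np.sum(np.square(x), axis=2, keepdims=False)

print(y)
# [[  5  25]
#  [ 61 113]
#  [181 265]]
```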
41 changes: 41 additions & 0 deletions src/operators/tensor/core.cairo
@@ -91,6 +91,7 @@ impl TensorSerde<T, impl TSerde: Serde<T>, impl TDrop: Drop<T>> of Serde<Tensor<
/// reduce_l1 - Computes the L1 norm of the input tensor's elements along the provided axes.
/// trilu - Returns the upper or lower triangular part of a tensor or a batch of 2D matrices.
/// scatter - Produces a copy of input data, and updates value to values specified by updates at specific index positions specified by indices.
/// reduce_sum_square - Computes the sum square of the input tensor's elements along the provided axes.
/// reduce_l2 - Computes the L2 norm of the input tensor's elements along the provided axes.
trait TensorTrait<T> {
/// # tensor.new
@@ -3491,6 +3492,46 @@ trait TensorTrait<T> {
/// ```
///
fn reduce_l2(self: @Tensor<T>, axis: usize, keepdims: bool) -> Tensor<T>;
/// ## tensor.reduce_sum_square
///
/// ```rust
/// fn reduce_sum_square(self: @Tensor<T>, axis: usize, keepdims: bool) -> Tensor<T>;
/// ```
///
/// Computes the sum square of the input tensor's elements along the provided axes.
///
/// ## Args
///
/// * `self`(`@Tensor<T>`) - The input tensor.
/// * `axis`(`usize`) - The dimension to reduce.
/// * `keepdims`(`bool`) - If true, retains reduced dimensions with length 1.
///
/// ## Panics
///
/// * Panics if axis is not in the range of the input tensor's dimensions.
///
/// ## Returns
///
/// A new `Tensor<T>` instance with the specified axis reduced by summing the squares of its elements.
///
/// ## Examples
///
/// ```rust
/// use array::{ArrayTrait, SpanTrait};
/// use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};
///
/// fn reduce_sum_square_example() -> Tensor<u32> {
///     let mut shape = ArrayTrait::<usize>::new();
///     shape.append(2);
///     shape.append(2);
///     let mut data = ArrayTrait::new();
///     data.append(1);
///     data.append(2);
///     data.append(3);
///     data.append(4);
///     let tensor = TensorTrait::<u32>::new(shape.span(), data.span());
///
///     // We can call the `reduce_sum_square` function as follows.
///     return tensor.reduce_sum_square(axis: 1, keepdims: true);
/// }
/// >>> [[5],[25]]
/// ```
///
fn reduce_sum_square(self: @Tensor<T>, axis: usize, keepdims: bool) -> Tensor<T>;
}

/// Cf: TensorTrait::new docstring
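For reference, the contract the new trait method documents, including the panic on an out-of-range axis, can be summarized in a few lines of NumPy (a sketch of the intended semantics, not the Cairo implementation):

```python
import numpy as np

def reduce_sum_square(x: np.ndarray, axis: int, keepdims: bool) -> np.ndarray:
    """Square every element of `x`, then sum along `axis`."""
    if not 0 <= axis < x.ndim:
        # Mirrors the documented panic when the axis is out of range.
        raise ValueError("axis out of range")
    return np.sum(np.square(x), axis=axis, keepdims=keepdims)

# The 2x2 example from the docstring: axis=1, keepdims=true.
print(reduce_sum_square(np.array([[1, 2], [3, 4]]), axis=1, keepdims=True))
# [[ 5]
#  [25]]
```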