Feat: SequenceConstruct #457

Merged · 5 commits · Nov 16, 2023
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -102,6 +102,7 @@
* [tensor.reduce\_sum\_square](framework/operators/tensor/tensor.reduce\_sum\_square.md)
* [tensor.reduce\_l2](framework/operators/tensor/tensor.reduce\_l2.md)
* [tensor.reduce\_l1](framework/operators/tensor/tensor.reduce\_l1.md)
* [tensor.sequence\_construct](framework/operators/tensor/tensor.sequence\_construct.md)
* [tensor.shrink](framework/operators/tensor/tensor.shrink.md)
* [tensor.sequence\_empty](framework/operators/tensor/tensor.sequence\_empty.md)
* [tensor.reduce_mean](framework/operators/tensor/tensor.reduce\_mean.md)
1 change: 1 addition & 0 deletions docs/framework/compatibility.md
@@ -83,6 +83,7 @@ You can see below the list of current supported ONNX Operators:
| [ConstantOfShape](operators/tensor/tensor.constant_of_shape.md) | :white\_check\_mark: |
| [ReduceL1](operators/tensor/tensor.reduce\_l1.md) | :white\_check\_mark: |
| [ReduceL2](operators/tensor/tensor.reduce\_l2.md) | :white\_check\_mark: |
| [SequenceConstruct](operators/tensor/tensor.sequence\_construct.md) | :white\_check\_mark: |
| [Shrink](operators/tensor/tensor.shrink.md) | :white\_check\_mark: |
| [SequenceEmpty](operators/tensor/tensor.sequence\_empty.md) | :white\_check\_mark: |
| [ReduceL2](operators/tensor/tensor.reduce\_l2.md) | :white\_check\_mark: |
1 change: 1 addition & 0 deletions docs/framework/operators/tensor/README.md
@@ -100,6 +100,7 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.binarizer`](tensor.binarizer.md) | Maps the values of a tensor element-wise to 0 or 1 based on the comparison against a threshold value. |
| [`tensor.reduce_sum_square`](tensor.reduce\_sum\_square.md) | Computes the sum square of the input tensor's elements along the provided axes. |
| [`tensor.reduce_l2`](tensor.reduce\_l2.md) | Computes the L2 norm of the input tensor's elements along the provided axes. |
| [`tensor.sequence_construct`](tensor.sequence\_construct.md) | Constructs a tensor sequence containing the input tensors. |
| [`tensor.shrink`](tensor.shrink.md) | Shrinks the input tensor element-wise to the output tensor with the same datatype and shape based on a defined formula. |
| [`tensor.sequence_empty`](tensor.sequence\_empty.md) | Returns an empty tensor sequence. |
| [`tensor.reduce_mean`](tensor.reduce\_mean.md) | Computes the mean of the input tensor's elements along the provided axes. |
35 changes: 35 additions & 0 deletions docs/framework/operators/tensor/tensor.sequence_construct.md
@@ -0,0 +1,35 @@
## tensor.sequence_construct

```rust
fn sequence_construct(tensors: Array<Tensor<T>>) -> Array<Tensor<T>>;
```

Constructs a tensor sequence containing the input tensors.

## Args

* `tensors`(`Array<Tensor<T>>`) - The array of input tensors.

## Panics

* Panics if the input tensor array is empty.

## Returns

A tensor sequence `Array<Tensor<T>>` containing the input tensors.

## Examples

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn sequence_construct_example() -> Array<Tensor<usize>> {
    let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span());
    let tensor2 = TensorTrait::new(shape: array![2, 2].span(), data: array![4, 5, 6, 7].span());
    let result = TensorTrait::sequence_construct(tensors: array![tensor1, tensor2]);
    return result;
}
>>> [[0, 1, 2, 3], [4, 5, 6, 7]]
```
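The sequence returned above is an ordinary `Array<Tensor<T>>`, so the usual array operations apply to it. A minimal sketch (not part of this PR; the function name is illustrative), reusing the same inputs as the example:

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn sequence_length_example() -> u32 {
    let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span());
    let tensor2 = TensorTrait::new(shape: array![2, 2].span(), data: array![4, 5, 6, 7].span());
    let sequence = TensorTrait::sequence_construct(tensors: array![tensor1, tensor2]);

    // The sequence keeps the inputs in insertion order, so its length is 2 here.
    sequence.len()
}
>>> 2
```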
85 changes: 85 additions & 0 deletions nodegen/node/sequence_construct.py
@@ -0,0 +1,85 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_test, to_fp, Tensor, Dtype, FixedImpl


class Sequence_construct(RunAll):

    @staticmethod
    def sequence_construct_u32():
        sequence = []
        tensor_cnt = np.random.randint(1, 10)
        shape = np.random.randint(1, 4, 2)

        for _ in range(tensor_cnt):
            values = np.random.randint(0, 6, shape).astype(np.uint32)
            tensor = Tensor(Dtype.U32, values.shape, values.flatten())

            sequence.append(tensor)

        name = "sequence_construct_u32"
        make_test([sequence], sequence, "TensorTrait::sequence_construct(input_0)", name)


    @staticmethod
    def sequence_construct_i32():
        sequence = []
        tensor_cnt = np.random.randint(1, 10)
        shape = np.random.randint(1, 4, 2)

        for _ in range(tensor_cnt):
            values = np.random.randint(-6, 6, shape).astype(np.int32)
            tensor = Tensor(Dtype.I32, values.shape, values.flatten())

            sequence.append(tensor)

        name = "sequence_construct_i32"
        make_test([sequence], sequence, "TensorTrait::sequence_construct(input_0)", name)


    @staticmethod
    def sequence_construct_i8():
        sequence = []
        tensor_cnt = np.random.randint(1, 10)
        shape = np.random.randint(1, 4, 2)

        for _ in range(tensor_cnt):
            values = np.random.randint(-6, 6, shape).astype(np.int8)
            tensor = Tensor(Dtype.I8, values.shape, values.flatten())

            sequence.append(tensor)

        name = "sequence_construct_i8"
        make_test([sequence], sequence, "TensorTrait::sequence_construct(input_0)", name)


    @staticmethod
    def sequence_construct_fp8x23():
        sequence = []
        tensor_cnt = np.random.randint(1, 10)
        shape = np.random.randint(1, 4, 2)

        for _ in range(tensor_cnt):
            values = np.random.randint(-6, 6, shape).astype(np.float64)
            tensor = Tensor(Dtype.FP8x23, values.shape, to_fp(values.flatten(), FixedImpl.FP8x23))

            sequence.append(tensor)

        name = "sequence_construct_fp8x23"
        make_test([sequence], sequence, "TensorTrait::sequence_construct(input_0)", name)


    @staticmethod
    def sequence_construct_fp16x16():
        sequence = []
        tensor_cnt = np.random.randint(1, 10)
        shape = np.random.randint(1, 4, 2)

        for _ in range(tensor_cnt):
            values = np.random.randint(-6, 6, shape).astype(np.float64)
            tensor = Tensor(Dtype.FP16x16, values.shape, to_fp(values.flatten(), FixedImpl.FP16x16))

            sequence.append(tensor)

        name = "sequence_construct_fp16x16"
        make_test([sequence], sequence, "TensorTrait::sequence_construct(input_0)", name)
38 changes: 38 additions & 0 deletions src/operators/tensor/core.cairo
@@ -96,6 +96,7 @@ impl TensorSerde<T, impl TSerde: Serde<T>, impl TDrop: Drop<T>> of Serde<Tensor<
/// binarizer – Maps the values of a tensor element-wise to 0 or 1 based on the comparison against a threshold value.
/// reduce_sum_square - Computes the sum square of the input tensor's elements along the provided axes.
/// reduce_l2 - Computes the L2 norm of the input tensor's elements along the provided axes.
/// sequence_construct – Constructs a tensor sequence containing the input tensors.
/// shrink – Shrinks the input tensor element-wise to the output tensor with the same datatype and shape based on a defined formula.
/// sequence_empty - Returns an empty tensor sequence.
/// reduce_mean - Computes the mean of the input tensor's elements along the provided axes.
@@ -3821,6 +3822,43 @@
/// ```
///
fn shrink(self: Tensor<T>, bias: Option<T>, lambd: Option<T>) -> Tensor<T>;
/// ## tensor.sequence_construct
///
/// ```rust
/// fn sequence_construct(tensors: Array<Tensor<T>>) -> Array<Tensor<T>>;
/// ```
///
/// Constructs a tensor sequence containing the input tensors.
///
/// ## Args
///
/// * `tensors`(`Array<Tensor<T>>`) - The array of input tensors.
///
/// ## Panics
///
/// * Panics if the input tensor array is empty.
///
/// ## Returns
///
/// A tensor sequence `Array<Tensor<T>>` containing the input tensors.
///
/// ## Examples
///
/// ```rust
/// use array::{ArrayTrait, SpanTrait};
///
/// use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};
///
/// fn sequence_construct_example() -> Array<Tensor<usize>> {
///     let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span());
///     let tensor2 = TensorTrait::new(shape: array![2, 2].span(), data: array![4, 5, 6, 7].span());
///     let result = TensorTrait::sequence_construct(tensors: array![tensor1, tensor2]);
///     return result;
/// }
/// >>> [[0, 1, 2, 3], [4, 5, 6, 7]]
/// ```
///
fn sequence_construct(tensors: Array<Tensor<T>>) -> Array<Tensor<T>>;
}

/// Cf: TensorTrait::new docstring
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_bool.cairo
@@ -312,6 +312,10 @@ impl BoolTensor of TensorTrait<bool> {
        constant_of_shape(shape, value)
    }

    fn sequence_construct(tensors: Array<Tensor<bool>>) -> Array<Tensor<bool>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<bool>, bias: Option<bool>, lambd: Option<bool>) -> Tensor<bool> {
        panic(array!['not supported!'])
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp16x16.cairo
@@ -349,6 +349,10 @@ impl FP16x16Tensor of TensorTrait<FP16x16> {
        math::scatter::scatter(self, updates, indices, axis, reduction)
    }

    fn sequence_construct(tensors: Array<Tensor<FP16x16>>) -> Array<Tensor<FP16x16>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<FP16x16>, bias: Option<FP16x16>, lambd: Option<FP16x16>) -> Tensor<FP16x16> {
        math::shrink::shrink(self, bias, lambd)
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp16x16wide.cairo
@@ -337,6 +337,10 @@ impl FP16x16WTensor of TensorTrait<FP16x16W> {
        math::reduce_l2::reduce_l2(self, axis, keepdims)
    }

    fn sequence_construct(tensors: Array<Tensor<FP16x16W>>) -> Array<Tensor<FP16x16W>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<FP16x16W>, bias: Option<FP16x16W>, lambd: Option<FP16x16W>) -> Tensor<FP16x16W> {
        math::shrink::shrink(self, bias, lambd)
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp32x32.cairo
@@ -350,6 +350,10 @@ impl FP32x32Tensor of TensorTrait<FP32x32> {
        math::reduce_l2::reduce_l2(self, axis, keepdims)
    }

    fn sequence_construct(tensors: Array<Tensor<FP32x32>>) -> Array<Tensor<FP32x32>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<FP32x32>, bias: Option<FP32x32>, lambd: Option<FP32x32>) -> Tensor<FP32x32> {
        math::shrink::shrink(self, bias, lambd)
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp64x64.cairo
@@ -350,6 +350,10 @@ impl FP64x64Tensor of TensorTrait<FP64x64> {
        math::scatter::scatter(self, updates, indices, axis, reduction)
    }

    fn sequence_construct(tensors: Array<Tensor<FP64x64>>) -> Array<Tensor<FP64x64>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<FP64x64>, bias: Option<FP64x64>, lambd: Option<FP64x64>) -> Tensor<FP64x64> {
        math::shrink::shrink(self, bias, lambd)
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp8x23.cairo
@@ -341,6 +341,10 @@ impl FP8x23Tensor of TensorTrait<FP8x23> {
        math::reduce_l2::reduce_l2(self, axis, keepdims)
    }

    fn sequence_construct(tensors: Array<Tensor<FP8x23>>) -> Array<Tensor<FP8x23>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<FP8x23>, bias: Option<FP8x23>, lambd: Option<FP8x23>) -> Tensor<FP8x23> {
        math::shrink::shrink(self, bias, lambd)
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp8x23wide.cairo
@@ -328,6 +328,10 @@ impl FP8x23WTensor of TensorTrait<FP8x23W> {
        math::scatter::scatter(self, updates, indices, axis, reduction)
    }

    fn sequence_construct(tensors: Array<Tensor<FP8x23W>>) -> Array<Tensor<FP8x23W>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<FP8x23W>, bias: Option<FP8x23W>, lambd: Option<FP8x23W>) -> Tensor<FP8x23W> {
        math::shrink::shrink(self, bias, lambd)
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_i32.cairo
@@ -349,6 +349,10 @@ impl I32Tensor of TensorTrait<i32> {
        panic(array!['not supported!'])
    }

    fn sequence_construct(tensors: Array<Tensor<i32>>) -> Array<Tensor<i32>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<i32>, bias: Option<i32>, lambd: Option<i32>) -> Tensor<i32> {
        panic(array!['not supported!'])
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_i8.cairo
@@ -349,6 +349,10 @@ impl I8Tensor of TensorTrait<i8> {
        panic(array!['not supported!'])
    }

    fn sequence_construct(tensors: Array<Tensor<i8>>) -> Array<Tensor<i8>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<i8>, bias: Option<i8>, lambd: Option<i8>) -> Tensor<i8> {
        panic(array!['not supported!'])
    }
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_u32.cairo
@@ -319,6 +319,10 @@ impl U32Tensor of TensorTrait<u32> {
        panic(array!['not supported!'])
    }

    fn sequence_construct(tensors: Array<Tensor<u32>>) -> Array<Tensor<u32>> {
        math::sequence_construct::sequence_construct(tensors)
    }

    fn shrink(self: Tensor<u32>, bias: Option<u32>, lambd: Option<u32>) -> Tensor<u32> {
        panic(array!['not supported!'])
    }
1 change: 1 addition & 0 deletions src/operators/tensor/math.cairo
@@ -44,6 +44,7 @@ mod reduce_l2;
mod reduce_l1;
mod reduce_sum_square;
mod bitwise_and;
mod sequence_construct;
mod shrink;
mod sequence_empty;
mod reduce_mean;
12 changes: 12 additions & 0 deletions src/operators/tensor/math/sequence_construct.cairo
@@ -0,0 +1,12 @@
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor};


/// Cf: TensorTrait::sequence_construct docstring
fn sequence_construct<T, impl TDrop: Drop<T>>(tensors: Array<Tensor<T>>) -> Array<Tensor<T>> {
    assert(tensors.len() >= 1, 'Input tensors must be >= 1');

    return tensors;
}
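As the implementation shows, `sequence_construct` only enforces the non-empty check and then returns its argument unchanged, so the assert is the only failure path. A hedged sketch of that path (illustrative only, not part of the PR; the function name is made up):

```rust
use array::ArrayTrait;

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

// Illustrative only: calling the operator with an empty array trips the
// 'Input tensors must be >= 1' assert and panics.
fn sequence_construct_empty_panics() -> Array<Tensor<u32>> {
    let empty: Array<Tensor<u32>> = ArrayTrait::new();
    TensorTrait::sequence_construct(tensors: empty)
}
```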
5 changes: 5 additions & 0 deletions tests/nodes.cairo
@@ -665,6 +665,11 @@ mod reduce_l1_i8_export_negative_axes_keepdims;
mod reduce_l1_u32_export_do_not_keepdims;
mod reduce_l1_u32_export_keepdims;
mod reduce_l1_u32_export_negative_axes_keepdims;
mod sequence_construct_fp16x16;
mod sequence_construct_fp8x23;
mod sequence_construct_i32;
mod sequence_construct_i8;
mod sequence_construct_u32;
mod shrink_hard_fp16x16;
mod shrink_soft_fp16x16;
mod shrink_hard_fp8x23;
20 changes: 20 additions & 0 deletions tests/nodes/sequence_construct_fp16x16.cairo
@@ -0,0 +1,20 @@
mod input_0;
mod output_0;


use orion::operators::tensor::FP16x16TensorPartialEq;
use orion::operators::tensor::{TensorTrait, Tensor};
use array::{ArrayTrait, SpanTrait};
use orion::utils::{assert_eq, assert_seq_eq};
use orion::operators::tensor::FP16x16Tensor;

#[test]
#[available_gas(2000000000)]
fn test_sequence_construct_fp16x16() {
    let input_0 = input_0::input_0();
    let z = output_0::output_0();

    let y = TensorTrait::sequence_construct(input_0);

    assert_seq_eq(y, z);
}