feat: conv_transpose #555

Merged 5 commits on Feb 9, 2024
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -163,6 +163,7 @@
* [nn.hard\_sigmoid](framework/operators/neural-network/nn.hard\_sigmoid.md)
* [nn.thresholded\_relu](framework/operators/neural-network/nn.thresholded\_relu.md)
* [nn.gemm](framework/operators/neural-network/nn.gemm.md)
* [nn.conv_transpose](framework/operators/neural-network/nn.conv\_transpose.md)
* [nn.conv](framework/operators/neural-network/nn.conv.md)
* [nn.depth_to_space](framework/operators/neural-network/nn.depth_to_space.md)
* [nn.space_to_depth](framework/operators/neural-network/nn.space_to_depth.md)
1 change: 1 addition & 0 deletions docs/framework/compatibility.md
@@ -43,6 +43,7 @@ You can see below the list of current supported ONNX Operators:
| [Softplus](operators/neural-network/nn.softplus.md) | :white\_check\_mark: |
| [Linear](operators/neural-network/nn.linear.md) | :white\_check\_mark: |
| [HardSigmoid](operators/neural-network/nn.hard\_sigmoid.md) | :white\_check\_mark: |
| [ConvTranspose](operators/neural-network/nn.conv\_transpose.md) | :white\_check\_mark: |
| [Conv](operators/neural-network/nn.conv.md) | :white\_check\_mark: |
| [Sinh](operators/tensor/tensor.sinh.md) | :white\_check\_mark: |
| [Asinh](operators/tensor/tensor.asinh.md) | :white\_check\_mark: |
3 changes: 2 additions & 1 deletion docs/framework/operators/neural-network/README.md
@@ -35,5 +35,6 @@ Orion supports currently these `NN` types.
| [`nn.hard_sigmoid`](nn.hard\_sigmoid.md) | Applies the Hard Sigmoid function to an n-dimensional input tensor. |
| [`nn.thresholded_relu`](nn.thresholded\_relu.md) | Performs the thresholded relu activation function element-wise. |
| [`nn.gemm`](nn.gemm.md) | Performs General Matrix multiplication. |
| [`nn.conv_transpose`](nn.conv\_transpose.md) | Performs the convolution of the input data tensor and weight tensor. |
| [`nn.conv_transpose`](nn.conv\_transpose.md) | Performs the convolution transpose of the input data tensor and weight tensor. |
| [`nn.conv`](nn.conv.md) | Performs the convolution of the input data tensor and weight tensor. |

4 changes: 2 additions & 2 deletions docs/framework/operators/neural-network/nn.conv.md
@@ -1,5 +1,5 @@

# NNTrait::conv_transpose
# NNTrait::conv

```rust
conv(
@@ -43,7 +43,7 @@ use orion::numbers::FP16x16;
use orion::operators::tensor::{Tensor, TensorTrait, FP16x16Tensor};


fn example_conv_transpose() -> Tensor<FP16x16> {
fn example_conv() -> Tensor<FP16x16> {
let mut shape = ArrayTrait::<usize>::new();
shape.append(1);
shape.append(1);
128 changes: 128 additions & 0 deletions docs/framework/operators/neural-network/nn.conv_transpose.md
@@ -0,0 +1,128 @@
# NNTrait::conv_transpose

```rust
conv_transpose(
X: @Tensor<T>,
W: @Tensor<T>,
B: Option<@Tensor<T>>,
auto_pad: Option<AUTO_PAD>,
dilations: Option<Span<usize>>,
group: Option<usize>,
kernel_shape: Option<Span<usize>>,
output_padding: Option<Span<usize>>,
output_shape: Option<Span<usize>>,
pads: Option<Span<usize>>,
strides: Option<Span<usize>>,
) -> Tensor<T>
```

The convolution transpose operator consumes an input tensor and an input weight tensor, and computes the output.

## Args

* `X`(`@Tensor<T>`) - Input data tensor, has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width if 2D; otherwise the size is (N x C x D1 x D2 ... x Dn).
* `W`(`@Tensor<T>`) - The weight tensor, has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps if 2D, for more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x ... x kn).
* `B`(`Option<@Tensor<T>>`) - Optional 1D bias to be added to the convolution, has size of M.
* `auto_pad`(`Option<AUTO_PAD>`) - Default is NOTSET, auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. NOTSET means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that `output_shape[i] = input_shape[i] * strides[i]` for each axis `i`.
* `dilations`(`Option<Span<usize>>`) - Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
* `group`(`Option<usize>`) - Default is 1, number of groups input channels and output channels are divided into.
* `kernel_shape`(`Option<Span<usize>>`) - The shape of the convolution kernel. If not present, should be inferred from input W.
* `output_padding`(`Option<Span<usize>>`) - Additional elements added to the side with higher coordinate indices in the output. Each padding value in "output_padding" must be less than the corresponding stride/dilation dimension. By default, this attribute is a zero vector.
* `output_shape`(`Option<Span<usize>>`) - The shape of the output can be explicitly set which will cause pads values to be auto generated. If output_shape is specified pads values are ignored. See doc for details for equations to generate pads.
* `pads`(`Option<Span<usize>>`) - Padding for the beginning and ending along each spatial axis, it can take any value greater than or equal to 0. The value represent the number of pixels added to the beginning and end part of the corresponding axis. `pads` format should be as follow [x1_begin, x2_begin...x1_end, x2_end,...], where xi_begin the number of pixels added at the beginning of axis `i` and xi_end, the number of pixels added at the end of axis `i`. This attribute cannot be used simultaneously with auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis.
* `strides`(`Option<Span<usize>>`) - Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.

## Returns

A `Tensor<T>` that contains the result of the convolution transpose.
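When `output_shape` is not given and `auto_pad` is NOTSET, the spatial output size follows the standard ONNX ConvTranspose relation. A minimal Python sketch of that formula (illustrative only; the function name is hypothetical and not part of Orion):

```python
def conv_transpose_output_size(input_size, kernel, stride=1, dilation=1,
                               pad_begin=0, pad_end=0, output_padding=0):
    # ONNX ConvTranspose shape rule, per spatial axis:
    # out = stride*(in - 1) + output_padding
    #       + (kernel - 1)*dilation + 1 - pad_begin - pad_end
    return (stride * (input_size - 1) + output_padding
            + (kernel - 1) * dilation + 1 - pad_begin - pad_end)

# A 3x3 input with a 3x3 kernel, unit stride and no padding yields
# a 5x5 output, matching the example below.
print(conv_transpose_output_size(3, 3))  # 5
```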

## Examples

```rust
use orion::operators::nn::NNTrait;
use orion::numbers::FixedTrait;
use orion::operators::nn::FP16x16NN;
use orion::numbers::FP16x16;
use orion::operators::tensor::{Tensor, TensorTrait, FP16x16Tensor};

fn example_conv_transpose() -> Tensor<FP16x16> {
let mut shape = ArrayTrait::<usize>::new();
shape.append(1);
shape.append(2);
shape.append(3);
shape.append(3);

let mut data = ArrayTrait::new();
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
let W = TensorTrait::new(shape.span(), data.span());

let mut shape = ArrayTrait::<usize>::new();
shape.append(1);
shape.append(1);
shape.append(3);
shape.append(3);

let mut data = ArrayTrait::new();
data.append(FP16x16 { mag: 0, sign: false });
data.append(FP16x16 { mag: 65536, sign: false });
data.append(FP16x16 { mag: 131072, sign: false });
data.append(FP16x16 { mag: 196608, sign: false });
data.append(FP16x16 { mag: 262144, sign: false });
data.append(FP16x16 { mag: 327680, sign: false });
data.append(FP16x16 { mag: 393216, sign: false });
data.append(FP16x16 { mag: 458752, sign: false });
data.append(FP16x16 { mag: 524288, sign: false });
let mut X = TensorTrait::new(shape.span(), data.span());

return NNTrait::conv_transpose(
@X,
@W,
Option::None,
Option::None,
Option::None,
Option::None,
Option::None,
Option::None,
Option::None,
Option::None,
Option::None,
);

}
>>> [
[
[
[0.0, 1.0, 3.0, 3.0, 2.0],
[3.0, 8.0, 15.0, 12.0, 7.0],
[9.0, 21.0, 36.0, 27.0, 15.0],
[9.0, 20.0, 33.0, 24.0, 13.0],
[6.0, 13.0, 21.0, 15.0, 8.0],
],
[
[0.0, 1.0, 3.0, 3.0, 2.0],
[3.0, 8.0, 15.0, 12.0, 7.0],
[9.0, 21.0, 36.0, 27.0, 15.0],
[9.0, 20.0, 33.0, 24.0, 13.0],
[6.0, 13.0, 21.0, 15.0, 8.0],
],
]
]

```
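The expected output above can be cross-checked with a plain-Python sketch of the transpose-convolution scatter (unit strides and dilations, no padding; this helper is illustrative and not part of Orion). Each input pixel scatters a scaled copy of the kernel into the output:

```python
def conv_transpose_2d(x, w):
    # x: H x W single-channel input, w: kH x kW kernel (nested lists).
    h, wd = len(x), len(x[0])
    kh, kw = len(w), len(w[0])
    out = [[0] * (wd + kw - 1) for _ in range(h + kh - 1)]
    for i in range(h):
        for j in range(wd):
            for ki in range(kh):
                for kj in range(kw):
                    # scatter x[i][j] * kernel into the output window
                    out[i + ki][j + kj] += x[i][j] * w[ki][kj]
    return out

x = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]  # the example's input channel
w = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # all-ones 3x3 kernel
print(conv_transpose_2d(x, w))
# [[0, 1, 3, 3, 2], [3, 8, 15, 12, 7], [9, 21, 36, 27, 15],
#  [9, 20, 33, 24, 13], [6, 13, 21, 15, 8]]
```

Because the example's weight tensor has two identical all-ones output channels, both channels of the Orion result equal this single-channel computation.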
62 changes: 33 additions & 29 deletions docs/framework/operators/neural-network/nn.depth_to_space.md
@@ -21,34 +21,38 @@ A `Tensor<T>` of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
```rust
use core::array::{ArrayTrait, SpanTrait};
use orion::operators::tensor::{TensorTrait, Tensor};
use orion::operators::tensor::I8Tensor;
use orion::numbers::{IntegerTrait, i8};

fn relu_example() -> Tensor<i32> {
let mut shape = ArrayTrait::<usize>::new();
shape.append(1);
shape.append(4);
shape.append(2);
shape.append(2);

let mut data = ArrayTrait::new();
data.append(i8 { mag: 1, sign: false });
data.append(i8 { mag: 3, sign: true });
data.append(i8 { mag: 3, sign: true });
data.append(i8 { mag: 1, sign: false });
data.append(i8 { mag: 1, sign: true });
data.append(i8 { mag: 3, sign: true });
data.append(i8 { mag: 2, sign: true });
data.append(i8 { mag: 1, sign: true });
data.append(i8 { mag: 1, sign: true });
data.append(i8 { mag: 2, sign: false });
data.append(i8 { mag: 1, sign: true });
data.append(i8 { mag: 2, sign: true });
data.append(i8 { mag: 3, sign: true });
data.append(i8 { mag: 3, sign: true });
data.append(i8 { mag: 2, sign: false });
data.append(i8 { mag: 2, sign: false });
let tensor = TensorTrait::new(shape.span(), data.span());
use orion::operators::tensor::{I8Tensor, I8TensorAdd};
use orion::numbers::NumberTrait;
use orion::operators::nn::NNTrait;
use orion::operators::nn::I8NN;
use orion::numbers::FixedTrait;

fn depth_to_space_example() -> Tensor<i8> {
let mut shape = ArrayTrait::<usize>::new();
shape.append(1);
shape.append(4);
shape.append(2);
shape.append(2);

let mut data = ArrayTrait::new();
data.append(-2);
data.append(0);
data.append(-1);
data.append(0);
data.append(0);
data.append(-3);
data.append(2);
data.append(1);
data.append(-2);
data.append(-2);
data.append(0);
data.append(-2);
data.append(-1);
data.append(-1);
data.append(2);
data.append(2);
let tensor = TensorTrait::new(shape.span(), data.span());
return NNTrait::depth_to_space(@tensor, 2, 'DCR');
}
>>> [[[[1, 1, 3, 3], [1, 3, 2, 3], [3, 2, 1, 1], [1, 2, 2, 2]]]]
>>> [[[[-2, 0, 0, -3], [-2, -1, -2, -1], [-1, 2, 0, 1], [0, 2, -2, 2]]]]
```
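The DCR (depth-column-row) rearrangement used in the example above can be cross-checked with a plain-Python sketch (illustrative helper, not Orion code). In DCR mode, the channel dimension is split as (blocksize, blocksize, C/blocksize²), and the two blocksize factors become the sub-pixel row and column offsets:

```python
def depth_to_space_dcr(x, blocksize):
    # x: N x C x H x W nested list; returns N x C/b^2 x H*b x W*b.
    n, c, h, w = len(x), len(x[0]), len(x[0][0]), len(x[0][0][0])
    b = blocksize
    c_out = c // (b * b)
    out = [[[[0] * (w * b) for _ in range(h * b)] for _ in range(c_out)]
           for _ in range(n)]
    for ni in range(n):
        for co in range(c_out):
            for i in range(h * b):
                for j in range(w * b):
                    # DCR: sub-pixel (i % b, j % b) is read from input
                    # channel (i % b)*b*c_out + (j % b)*c_out + co
                    ci = (i % b) * b * c_out + (j % b) * c_out + co
                    out[ni][co][i][j] = x[ni][ci][i // b][j // b]
    return out

x = [[[[-2, 0], [-1, 0]], [[0, -3], [2, 1]],
      [[-2, -2], [0, -2]], [[-1, -1], [2, 2]]]]  # the example's input
print(depth_to_space_dcr(x, 2))
# [[[[-2, 0, 0, -3], [-2, -1, -2, -1], [-1, 2, 0, 1], [0, 2, -2, 2]]]]
```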
2 changes: 2 additions & 0 deletions docs/framework/operators/tensor/README.md
@@ -120,6 +120,8 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.erf`](tensor.erf.md) | Computes the error function of the given input tensor element-wise. |
| [`tensor.layer_normalization`](tensor.layer\_normalization.md) | computes the layer normalization of the input tensor. |
| [`tensor.split`](tensor.split.md) | Split a tensor into a list of tensors, along the specified ‘axis’. |
| [`tensor.optional`](tensor.optional.md) | Constructs an optional-type value containing either an empty optional of a certain type specified by the attribute, or a non-empty value containing the input element. |
| [`tensor.dynamic_quantize_linear`](tensor.dynamic\_quantize\_linear.md) | Computes the Scale, Zero Point and FP32->8Bit conversion of FP32 Input data. |
| [`tensor.scatter_nd`](tensor.scatter\_nd.md) | The output of the operation is produced by creating a copy of the input data, and then updating its value to values specified by updates at specific index positions specified by indices. Its output shape is the same as the shape of data |

## Arithmetic Operations
1 change: 1 addition & 0 deletions docs/framework/operators/tensor/tensor.split.md
@@ -7,6 +7,7 @@
## Args
Split a tensor into a list of tensors, along the specified ‘axis’

## Args

* `self`(`@Tensor<T>`) - The input tensor.
* `axis`(`usize`) - The axis along which to split on.