Tensor expand operator #1508

Merged · Mar 22, 2024 · 81 commits

Commits
ffd6b36
Improve CI cache - remove burn-tch artifacts
antimora Sep 21, 2023
8165284
Merge remote-tracking branch 'upstream/main'
antimora Sep 22, 2023
ceca95f
Merge remote-tracking branch 'upstream/main'
antimora Sep 25, 2023
840dd78
Merge remote-tracking branch 'upstream/main'
antimora Sep 29, 2023
3926ae2
Merge remote-tracking branch 'upstream/main'
antimora Oct 1, 2023
3951f61
Merge remote-tracking branch 'upstream/main'
antimora Oct 4, 2023
9b3e9d8
Merge remote-tracking branch 'upstream/main'
antimora Oct 6, 2023
36ee8f0
Merge remote-tracking branch 'upstream/main'
antimora Oct 7, 2023
6447e19
Merge remote-tracking branch 'upstream/main'
antimora Oct 12, 2023
73c67e8
Merge branch 'burn-rs:main' into main
antimora Oct 12, 2023
25b41c4
Merge branch 'main' of https://github.com/antimora/burn
antimora Oct 12, 2023
d120e95
Merge remote-tracking branch 'upstream/main'
antimora Nov 12, 2023
1a165be
Merge remote-tracking branch 'upstream/main'
antimora Nov 12, 2023
d3e6786
Merge remote-tracking branch 'upstream/main'
antimora Nov 12, 2023
1f53142
Merge remote-tracking branch 'upstream/main'
antimora Nov 13, 2023
d54e29b
Merge remote-tracking branch 'upstream/main'
antimora Nov 14, 2023
462c3d8
Merge remote-tracking branch 'upstream/main'
antimora Nov 15, 2023
1503427
Merge remote-tracking branch 'upstream/main'
antimora Nov 18, 2023
4e08a50
Merge remote-tracking branch 'upstream/main'
antimora Nov 19, 2023
d2e66de
Merge remote-tracking branch 'upstream/main'
antimora Nov 20, 2023
fb9320f
Merge remote-tracking branch 'upstream/main'
antimora Nov 20, 2023
d0c8026
Merge remote-tracking branch 'upstream/main'
antimora Nov 20, 2023
2bed69c
Merge remote-tracking branch 'upstream/main'
antimora Nov 22, 2023
b79084b
Merge remote-tracking branch 'upstream/main'
antimora Nov 22, 2023
4078a99
Merge remote-tracking branch 'upstream/main'
antimora Nov 27, 2023
f12c4ab
Merge remote-tracking branch 'upstream/main'
antimora Nov 27, 2023
a5027d2
Merge remote-tracking branch 'upstream/main'
antimora Nov 29, 2023
d14f8b8
Merge remote-tracking branch 'upstream/main'
antimora Nov 30, 2023
125a69e
Merge remote-tracking branch 'upstream/main'
antimora Nov 30, 2023
a07c46b
Merge remote-tracking branch 'upstream/main'
antimora Dec 1, 2023
a668591
Merge remote-tracking branch 'upstream/main'
antimora Dec 3, 2023
c07ec34
Merge remote-tracking branch 'upstream/main'
antimora Dec 18, 2023
b3c7384
Merge remote-tracking branch 'upstream/main'
antimora Jan 17, 2024
b3144e8
Merge remote-tracking branch 'upstream/main'
antimora Jan 18, 2024
5625968
Merge remote-tracking branch 'upstream/main'
antimora Jan 22, 2024
aa6144b
Merge remote-tracking branch 'upstream/main'
antimora Jan 24, 2024
13f6ad9
Merge remote-tracking branch 'upstream/main'
antimora Jan 25, 2024
6bdc54c
Merge remote-tracking branch 'upstream/main'
antimora Jan 26, 2024
7654a6c
Merge remote-tracking branch 'upstream/main'
antimora Jan 30, 2024
12240f4
Merge remote-tracking branch 'upstream/main'
antimora Feb 2, 2024
3f8b798
Merge remote-tracking branch 'upstream/main'
antimora Feb 5, 2024
c27e1ff
Merge remote-tracking branch 'upstream/main'
antimora Feb 8, 2024
5b2f37d
Merge remote-tracking branch 'upstream/main'
antimora Feb 12, 2024
bb5d979
Merge remote-tracking branch 'upstream/main'
antimora Feb 12, 2024
8d1fb84
Merge remote-tracking branch 'upstream/main'
antimora Feb 14, 2024
b054b3b
Merge remote-tracking branch 'upstream/main'
antimora Feb 15, 2024
d14dd8e
Merge remote-tracking branch 'upstream/main'
antimora Feb 15, 2024
1311bcc
PyTorch config deserializer from .pt file
antimora Feb 18, 2024
8346628
Update pytorch-model.md
antimora Feb 18, 2024
b5bd4e6
Merge remote-tracking branch 'upstream/main'
antimora Feb 19, 2024
12d2414
Merge remote-tracking branch 'upstream/main'
antimora Feb 21, 2024
bc6d0bc
Merge remote-tracking branch 'upstream/main'
antimora Feb 22, 2024
0e0cc30
Merge remote-tracking branch 'upstream/main'
antimora Feb 26, 2024
4ec9af8
Merge remote-tracking branch 'upstream/main'
antimora Feb 29, 2024
59c8472
Merge remote-tracking branch 'upstream/main'
antimora Feb 29, 2024
5ff50cd
Merge branch 'tracel-ai:main' into main
antimora Feb 29, 2024
ff8a2bf
Merge remote-tracking branch 'upstream/main'
antimora Mar 1, 2024
4a4e275
Merge remote-tracking branch 'upstream/main'
antimora Mar 1, 2024
3b68207
Merge remote-tracking branch 'upstream/main'
antimora Mar 1, 2024
37dee9f
Merge remote-tracking branch 'upstream/main'
antimora Mar 3, 2024
0fe1fc0
Merge remote-tracking branch 'upstream/main'
antimora Mar 5, 2024
0583a3f
Merge remote-tracking branch 'upstream/main'
antimora Mar 5, 2024
9e576d3
Merge remote-tracking branch 'upstream/main'
antimora Mar 6, 2024
79553ef
Merge remote-tracking branch 'upstream/main'
antimora Mar 8, 2024
943a591
Merge remote-tracking branch 'upstream/main'
antimora Mar 9, 2024
de412bb
Merge remote-tracking branch 'upstream/main'
antimora Mar 11, 2024
a92ca41
Merge remote-tracking branch 'upstream/main'
antimora Mar 11, 2024
dea103a
Merge remote-tracking branch 'upstream/main'
antimora Mar 12, 2024
2228c06
Merge remote-tracking branch 'upstream/main'
antimora Mar 16, 2024
7ad4b07
WIP
antimora Mar 21, 2024
f1b9165
Rename broadcast_to to expand
antimora Mar 21, 2024
81836e4
Rename broadcast_to expand file
antimora Mar 21, 2024
a68e5cb
Implemented fusion backend and fix bugs
antimora Mar 22, 2024
364afc4
Merge remote-tracking branch 'upstream/main'
antimora Mar 22, 2024
0c95cf1
Remove old files
antimora Mar 22, 2024
ef46f27
Remove unused state
antimora Mar 22, 2024
ac0fc48
Rename to the correct op name
antimora Mar 22, 2024
38dbbdc
Add missing comment
antimora Mar 22, 2024
285aa68
Fix expand check function doc
antimora Mar 22, 2024
73ad7d4
Rename the leftover names
antimora Mar 22, 2024
5f32187
Rename leftover names
antimora Mar 22, 2024

Files changed
1 change: 1 addition & 0 deletions burn-book/src/building-blocks/tensor.md
@@ -144,6 +144,7 @@ Those operations are available for all tensor kinds: `Int`, `Float`, and `Bool`.
| `tensor.all_dim(dim)` | `tensor.all(dim)` |
| `tensor.any()` | `tensor.any()` |
| `tensor.any_dim(dim)` | `tensor.any(dim)` |
| `tensor.expand(shape)` | `tensor.expand(shape)` |
| `tensor.chunk(num_chunks, dim)` | `tensor.chunk(num_chunks, dim)` |
| `tensor.device()` | `tensor.device` |
| `tensor.dims()` | `tensor.size()` |
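
For context, a minimal usage sketch of the operator documented in the table above. It is illustrative only, written against the `burn_tensor` API as exercised by the autodiff test added later in this PR; the function name `demo` and the generic backend `B` are placeholders.

use burn_tensor::{backend::Backend, Data, Tensor};

// Expand (broadcast) a rank-1 tensor of shape [4] into a rank-2 tensor of shape [4, 4].
fn demo<B: Backend>(device: &B::Device) {
    let data: Data<f32, 1> = Data::from([4.0, 7.0, 2.0, 3.0]);
    let x = Tensor::<B, 1>::from_data(data.convert(), device);
    let y: Tensor<B, 2> = x.expand([4, 4]);
    assert_eq!(y.dims(), [4, 4]);
}
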
7 changes: 7 additions & 0 deletions crates/burn-autodiff/src/ops/bool_tensor.rs
@@ -128,4 +128,11 @@ impl<B: Backend, C: CheckpointStrategy> BoolTensorOps<Self> for Autodiff<B, C> {
fn bool_nonzero<const D: usize>(tensor: BoolTensor<B, D>) -> Vec<IntTensor<B, 1>> {
B::bool_nonzero(tensor)
}

fn bool_expand<const D: usize, const D2: usize>(
tensor: BoolTensor<B, D>,
shape: Shape<D2>,
) -> BoolTensor<B, D2> {
B::bool_expand(tensor, shape)
}
}
7 changes: 7 additions & 0 deletions crates/burn-autodiff/src/ops/int_tensor.rs
@@ -369,6 +369,13 @@ impl<B: Backend, C: CheckpointStrategy> IntTensorOps<Self> for Autodiff<B, C> {
B::int_prod_dim(tensor, dim)
}

fn int_expand<const D: usize, const D2: usize>(
tensor: IntTensor<B, D>,
shape: Shape<D2>,
) -> IntTensor<B, D2> {
B::int_expand(tensor, shape)
}

fn int_sort<const D: usize>(
tensor: IntTensor<Self, D>,
dim: usize,
75 changes: 75 additions & 0 deletions crates/burn-autodiff/src/ops/tensor.rs
@@ -2437,6 +2437,81 @@ impl<B: Backend, C: CheckpointStrategy> FloatTensorOps<Self> for Autodiff<B, C>
.stateless(B::float_sign(tensor.primitive))
}

fn float_expand<const D1: usize, const D2: usize>(
tensor: FloatTensor<Self, D1>,
shape: Shape<D2>,
) -> FloatTensor<Self, D2> {
#[derive(Debug)]
struct ExpandDim<const D1: usize, const D2: usize>;

#[derive(new, Debug)]
struct RetroExpand<B: Backend, const D1: usize, const D2: usize> {
input_id: NodeID,
shape: Shape<D2>,
_backend: PhantomData<B>,
}

impl<B: Backend, const D1: usize, const D2: usize> RetroForward for RetroExpand<B, D1, D2> {
fn forward(&self, states: &mut BackwardStates, out_node: NodeID) {
let input = states.get_state::<B::FloatTensorPrimitive<D1>>(&self.input_id);
let out = B::float_expand(input, self.shape.clone());
states.save(out_node, out)
}
}

impl<B: Backend, const D1: usize, const D2: usize> Backward<B, D2, 1> for ExpandDim<D1, D2> {
type State = Shape<D1>;

fn backward(
self,
ops: Ops<Self::State, 1>,
grads: &mut Gradients,
_checkpointer: &mut Checkpointer,
) {
let shape_original = ops.state;

let mut shape_expanded = [1; D2];

debug_assert!(D2 >= D1);

for i in 0..D1 {
shape_expanded[i + (D2 - D1)] = shape_original.dims[i];
}

unary::<B, D2, D1, _>(ops.parents, ops.node, grads, |grad| {
let shape_grad = B::float_shape(&grad);
let mut grad = grad;

#[allow(clippy::needless_range_loop)]
for i in 0..D2 {
if shape_expanded[i] == 1 && shape_grad.dims[i] != 1 {
grad = B::float_sum_dim(grad, i);
}
}

B::float_reshape(grad, shape_original)
});
}
}

match ExpandDim
.prepare::<C>([tensor.node.clone()], [tensor.graph.clone()])
.memory_bound()
.retro_forward(RetroExpand::<B, D1, D2>::new(
tensor.node.id.clone(),
shape.clone(),
))
.parents([&tensor])
.stateful()
{
OpsKind::Tracked(prep) => prep.finish(
B::float_shape(&tensor.primitive),
B::float_expand(tensor.primitive, shape),
),
OpsKind::UnTracked(prep) => prep.finish(B::float_expand(tensor.primitive, shape)),
}
}

fn float_sort<const D: usize>(
tensor: FloatTensor<Self, D>,
dim: usize,
38 changes: 38 additions & 0 deletions crates/burn-autodiff/src/tests/expand.rs
@@ -0,0 +1,38 @@
#[burn_tensor_testgen::testgen(ad_expand)]
mod tests {
use super::*;
use burn_tensor::{Data, Tensor};

#[test]
fn should_diff_expand() {
// Python code to generate the test case values
// import torch
// x1 = torch.tensor([4.0, 7.0, 2.0, 3.0], requires_grad=True)
// x2 = torch.tensor([2.0, 4.5, 7.0, 3.0], requires_grad=True)
// y = x1.expand(4, 4)
// z = (x2 * y).sum()
// z.backward()
// print("x1", x1.grad)
// print("x2", x2.grad)

let device = Default::default();

let data_1: Data<f32, 1> = Data::from([4.0, 7.0, 2.0, 3.0]);
let tensor_1 = TestAutodiffTensor::from_data(data_1, &device).require_grad();

let data_2: Data<f32, 1> = Data::from([2.0, 4.5, 7.0, 3.0]);
let tensor_2 = TestAutodiffTensor::from_data(data_2, &device).require_grad();

let tensor_3 = tensor_1.clone().expand([4, 4]);

// Use unsqueeze to make tensor_2 have the same shape as tensor_3
let tensor_4 = tensor_2.clone().unsqueeze().mul(tensor_3).sum();
let grads = tensor_4.backward();

let grad_1 = tensor_1.grad(&grads).unwrap();
let grad_2 = tensor_2.grad(&grads).unwrap();

assert_eq!(grad_1.to_data(), Data::from([8., 18., 28., 12.]));
assert_eq!(grad_2.to_data(), Data::from([16., 28., 8., 12.]));
}
}
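
The expected `grad_1` above can be reproduced by hand: expand's backward pass sums the incoming gradient over the broadcast dimensions before reshaping to the original shape, as in the `float_expand` backward implementation earlier in this diff. A standalone sketch of that reduction (plain Rust, no burn types, illustrative only):

fn main() {
    // Upstream gradient w.r.t. the expanded [4, 4] view: each row equals x2.
    let upstream = [[2.0f32, 4.5, 7.0, 3.0]; 4];
    // Sum over the broadcast (row) dimension to recover the [4]-shaped gradient of x1.
    let mut grad = [0.0f32; 4];
    for row in &upstream {
        for (g, v) in grad.iter_mut().zip(row) {
            *g += *v;
        }
    }
    assert_eq!(grad, [8.0, 18.0, 28.0, 12.0]); // matches grad_1 in the test above
}
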
2 changes: 2 additions & 0 deletions crates/burn-autodiff/src/tests/mod.rs
@@ -21,6 +21,7 @@ mod cross_entropy;
mod div;
mod erf;
mod exp;
mod expand;
mod flip;
mod gather_scatter;
mod gelu;
@@ -119,6 +120,7 @@ macro_rules! testgen_all {
burn_autodiff::testgen_ad_flip!();
burn_autodiff::testgen_ad_nonzero!();
burn_autodiff::testgen_ad_sign!();
burn_autodiff::testgen_ad_expand!();
burn_autodiff::testgen_ad_sort!();
};
}
2 changes: 2 additions & 0 deletions crates/burn-candle/src/lib.rs
@@ -102,6 +102,7 @@ mod tests {
burn_tensor::testgen_sub!();
burn_tensor::testgen_tanh!();
burn_tensor::testgen_transpose!();
burn_tensor::testgen_expand!();

// test stats
burn_tensor::testgen_var!();
@@ -157,4 +158,5 @@ mod tests {
burn_autodiff::testgen_ad_sub!();
burn_autodiff::testgen_ad_tanh!();
burn_autodiff::testgen_ad_transpose!();
burn_autodiff::testgen_ad_expand!();
}
7 changes: 7 additions & 0 deletions crates/burn-candle/src/ops/base.rs
@@ -142,3 +142,10 @@ pub fn chunk<E: CandleElement, const D: usize>(
Err(e) => panic!("error chunk from Candle"),
}
}

pub fn expand<E: CandleElement, const D1: usize, const D2: usize>(
tensor: CandleTensor<E, D1>,
shape: Shape<D2>,
) -> CandleTensor<E, D2> {
CandleTensor::new(tensor.tensor.broadcast_as(&shape.dims).unwrap())
}
9 changes: 9 additions & 0 deletions crates/burn-candle/src/ops/bool_tensor.rs
@@ -8,6 +8,8 @@ use crate::{
Candle, CandleTensor,
};

use super::base::{expand, permute};

impl<F: FloatCandleElement, I: IntCandleElement> BoolTensorOps<Self> for Candle<F, I> {
fn bool_empty<const D: usize>(shape: Shape<D>, device: &Device<Self>) -> BoolTensor<Self, D> {
super::base::empty(shape, device)
@@ -140,4 +142,11 @@ impl<F: FloatCandleElement, I: IntCandleElement> BoolTensorOps<Self> for Candle<
) -> BoolTensor<Self, D> {
super::base::flip(tensor, axes)
}

fn bool_expand<const D1: usize, const D2: usize>(
tensor: BoolTensor<Self, D1>,
shape: Shape<D2>,
) -> BoolTensor<Self, D2> {
expand(tensor, shape)
}
}
9 changes: 9 additions & 0 deletions crates/burn-candle/src/ops/int_tensor.rs
@@ -8,6 +8,8 @@ use crate::{
Candle, CandleTensor,
};

use super::base::{expand, permute};

impl<F: FloatCandleElement, I: IntCandleElement> IntTensorOps<Self> for Candle<F, I> {
fn int_empty<const D: usize>(shape: Shape<D>, device: &Device<Self>) -> IntTensor<Self, D> {
super::base::empty(shape, device)
@@ -430,6 +432,13 @@ impl<F: FloatCandleElement, I: IntCandleElement> IntTensorOps<Self> for Candle<F
super::base::flip(tensor, axes)
}

fn int_expand<const D1: usize, const D2: usize>(
tensor: IntTensor<Self, D1>,
shape: Shape<D2>,
) -> IntTensor<Self, D2> {
expand(tensor, shape)
}

// TODO add sign operator once Candle supports it:
// https://github.com/huggingface/candle/issues/1827
}
9 changes: 9 additions & 0 deletions crates/burn-candle/src/ops/tensor.rs
@@ -11,6 +11,8 @@ use crate::{
Candle, CandleTensor,
};

use super::base::{expand, permute};

impl<F: FloatCandleElement, I: IntCandleElement> FloatTensorOps<Self> for Candle<F, I> {
fn float_from_data<const D: usize>(
data: Data<F, D>,
@@ -530,6 +532,13 @@ impl<F: FloatCandleElement, I: IntCandleElement> FloatTensorOps<Self> for Candle
super::base::flip(tensor, axes)
}

fn float_expand<const D1: usize, const D2: usize>(
tensor: FloatTensor<Self, D1>,
shape: Shape<D2>,
) -> FloatTensor<Self, D2> {
expand(tensor, shape)
}

// TODO add sign operator once Candle supports it:
// https://github.com/huggingface/candle/issues/1827
}
45 changes: 42 additions & 3 deletions crates/burn-fusion/src/ops/boolean.rs
@@ -4,9 +4,10 @@ use crate::{
ops::binary::binary_ops_shape,
stream::{
BaseOperationDescription, BinaryOperationDescription, BoolOperationDescription,
CatOperationDescription, FlipOperationDescription, Operation, OperationDescription,
PermuteOperationDescription, ReshapeDescription, SliceAssignOperationDescription,
SliceOperationDescription, StreamId, SwapDimsDescription, UnaryOperationDescription,
CatOperationDescription, ExpandOperationDescription, FlipOperationDescription, Operation,
OperationDescription, PermuteOperationDescription, ReshapeDescription,
SliceAssignOperationDescription, SliceOperationDescription, StreamId, SwapDimsDescription,
UnaryOperationDescription,
},
Fusion, FusionBackend,
};
@@ -467,6 +468,44 @@ impl<B: FusionBackend> BoolTensorOps<Self> for Fusion<B> {
out
}

fn bool_expand<const D1: usize, const D2: usize>(
tensor: BoolTensor<Self, D1>,
shape: Shape<D2>,
) -> BoolTensor<Self, D2> {
#[derive(new)]
struct ExpandOps<const D: usize, const D2: usize> {
desc: ExpandOperationDescription,
}

impl<const D: usize, const D2: usize, B: FusionBackend> Operation<B> for ExpandOps<D, D2> {
fn execute(self: Box<Self>, handles: &mut crate::HandleContainer<B>) {
let input = handles.get_bool_tensor::<D>(&self.desc.input);
let shape: [usize; D2] = self.desc.shape.try_into().unwrap();
let output = B::bool_expand(input, shape.into());

handles.register_bool_tensor(&self.desc.out.id, output);
}
}

let stream = tensor.stream;

let out = tensor.client.tensor_uninitialized(shape.dims.into());

let desc = ExpandOperationDescription {
input: tensor.into_description(),
shape: shape.dims.into(),
out: out.to_description_out(),
};

out.client.register(
vec![stream],
OperationDescription::BaseBool(BaseOperationDescription::Expand(desc.clone())),
ExpandOps::<D1, D2>::new(desc),
);

out
}

fn bool_flip<const D: usize>(
tensor: BoolTensor<Self, D>,
axes: &[usize],
53 changes: 46 additions & 7 deletions crates/burn-fusion/src/ops/float.rs
@@ -6,13 +6,14 @@ use crate::{
scalar_float2int_ops, scalar_float_cmp_ops, scalar_float_ops,
stream::{
BaseOperationDescription, BinaryOperationDescription, CatOperationDescription,
ClampOperationDescription, FlipOperationDescription, FloatOperationDescription,
GatherOperationDescription, MaskFillOperationDescription, MaskWhereOperationDescription,
NumericOperationDescription, Operation, OperationDescription, PermuteOperationDescription,
RandomOperationDescription, ReduceDimWithIndicesDescription, ReshapeDescription,
ScalarOperationDescription, ScatterOperationDescription, SelectAssignOperationDescription,
SelectOperationDescription, SliceAssignOperationDescription, SliceOperationDescription,
StreamId, SwapDimsDescription, UnaryOperationDescription,
ClampOperationDescription, ExpandOperationDescription, FlipOperationDescription,
FloatOperationDescription, GatherOperationDescription, MaskFillOperationDescription,
MaskWhereOperationDescription, NumericOperationDescription, Operation,
OperationDescription, PermuteOperationDescription, RandomOperationDescription,
ReduceDimWithIndicesDescription, ReshapeDescription, ScalarOperationDescription,
ScatterOperationDescription, SelectAssignOperationDescription, SelectOperationDescription,
SliceAssignOperationDescription, SliceOperationDescription, StreamId, SwapDimsDescription,
UnaryOperationDescription,
},
unary_float_ops, Fusion, FusionBackend, TensorDescription,
};
@@ -1847,6 +1848,44 @@ impl<B: FusionBackend> FloatTensorOps<Self> for Fusion<B> {
out
}

fn float_expand<const D1: usize, const D2: usize>(
tensor: FloatTensor<Self, D1>,
shape: Shape<D2>,
) -> FloatTensor<Self, D2> {
#[derive(new)]
struct ExpandOps<const D: usize, const D2: usize> {
desc: ExpandOperationDescription,
}

impl<const D: usize, const D2: usize, B: FusionBackend> Operation<B> for ExpandOps<D, D2> {
fn execute(self: Box<Self>, handles: &mut crate::HandleContainer<B>) {
let input = handles.get_float_tensor::<D>(&self.desc.input);
let shape: [usize; D2] = self.desc.shape.try_into().unwrap();
let output = B::float_expand(input, shape.into());

handles.register_float_tensor(&self.desc.out.id, output);
}
}

let stream = tensor.stream;

let out = tensor.client.tensor_uninitialized(shape.dims.into());

let desc = ExpandOperationDescription {
input: tensor.into_description(),
shape: shape.dims.into(),
out: out.to_description_out(),
};

out.client.register(
vec![stream],
OperationDescription::BaseFloat(BaseOperationDescription::Expand(desc.clone())),
ExpandOps::<D1, D2>::new(desc),
);

out
}

fn float_flip<const D: usize>(
tensor: FloatTensor<Self, D>,
axes: &[usize],