
[wasm] Add sparseToDense kernel #6528

Merged
merged 11 commits into tensorflow:master on Jan 29, 2023

Conversation

kon72
Contributor

@kon72 kon72 commented Jun 12, 2022

Closes #4824


Member

@mattsoulanille mattsoulanille left a comment


LGTM. Thank you!

tfjs-backend-wasm/src/cc/scatter_impl.cc
Comment on lines +53 to +55
if (!update_as_scalar) {
updates_ptr++;
}
Member


I don't fully understand the purpose of this. It looks like it's only used if sparse_values_rank == 0 (SparseToDense.cc:35), in which case it assigns all the values of the output tensor to *updates_ptr (without incrementing).

Since this only happens when the rank of the sparse tensor is zero, wouldn't this only ever assign one value? In that case, I don't think it's necessary, since the loop will only run once.

On the other hand, it's implemented the same way in the CPU backend, so I think this is fine.

Contributor Author


The TensorFlow doc states that if sparse_values is a scalar, its value is assigned to all sparse indices.

So as far as I understand, sparse_indices can contain more than one index while sparse_values is a scalar, in which case the loop will run more than once.

}
}

out_buf_ptr -= (flattened_index * slice_size + slice_size);
Contributor


This is confusing: you are effectively resetting the pointer, but via a decrement. Could you leave out_buf_ptr unchanged and create a new variable that stores the start of the current iteration's output (i.e. out_buf_ptr + flattened_index * slice_size), incrementing only the temporary, so this decrement-based reset is no longer necessary?

Contributor Author


Done, thank you!

Contributor

@ahmedsabie ahmedsabie left a comment


See previous comment

@sklum
Contributor

sklum commented Jan 17, 2023

Would love to get this merged, as it's also a pain point for my work. Given the lack of activity, is it possible to merge despite @ahmedsabie's nit?

If not, I'm happy to try to resolve it in a separate PR, although I've never worked with the C++ implementation or Bazel. Are there up-to-date instructions on building from source anywhere? My (albeit brief) search came up empty.

@mattsoulanille
Member

Sorry I didn't follow up on this earlier. Ahmed has left the TFJS team, so I'll assign this to someone else.

@mattsoulanille mattsoulanille requested review from chunnienc and ahmedsabie and removed request for ahmedsabie January 18, 2023 17:30
auto& sparse_values_info = backend::get_tensor_info(sparse_values_id);
auto& default_value_info = backend::get_tensor_info(default_value_id);
const std::vector<size_t>& strides =
std::vector<size_t>(strides_ptr, strides_ptr + slice_rank);
Collaborator

@chunnienc chunnienc Jan 18, 2023


const std::vector<size_t> strides(strides_ptr, strides_ptr + slice_rank);

It's bad to keep the lvalue ref of a temporary value.

Contributor Author


I think it's valid, because the lifetime of the temporary object is extended by binding it to a const reference.
In any case, the reference here was completely redundant, and your suggested change is much clearer. Thank you!

@sklum
Contributor

sklum commented Jan 23, 2023

Thanks for the quick follow up @mattsoulanille and review @chunnienc.

@kon72 do you still have interest / availability to address the review?

@sklum
Contributor

sklum commented Jan 24, 2023

Alright @mattsoulanille, I've opened a PR on @kon72's fork that addresses @chunnienc's change request. Are you able to merge that PR to the fork as you have maintainer rights? Otherwise I'm happy to open a clean PR with my fork.

@mattsoulanille
Member

@sklum Thanks for the fix. I can't merge the PR on the fork, but I cherry-picked the commit and pushed it here for @chunnienc to take a look at.

@kon72
Contributor Author

kon72 commented Jan 25, 2023

Hello all,

Sorry for the long delay and thank you for reviewing and following up on this PR.
I will take a look and address these reviews quickly.

@kon72 kon72 requested a review from chunnienc January 26, 2023 15:29
@mattsoulanille mattsoulanille merged commit 6fb1dcd into tensorflow:master Jan 29, 2023
Development

Successfully merging this pull request may close these issues.

Backend WASM does not implement kernel SparseToDense
5 participants