Add a multidimensional matrix multiply #16
NumPy has `einsum` for this. Something like:

```rust
let a = Array::<f64, _>::random((7, 5, 10));
let b = Array::<f64, _>::random((10, 5));
let c = einsum!("ijk", "kj" -> "i" | a, b); // results in a one-dimensional array with 7 components
```
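To make the intended semantics concrete: `"ijk", "kj" -> "i"` means `c[i]` is the sum over `j` and `k` of `a[i][j][k] * b[k][j]`. A minimal sketch with plain nested `Vec`s (the `einsum_ijk_kj_i` helper is hypothetical, not part of this project):

```rust
/// Hypothetical illustration of what `einsum!("ijk", "kj" -> "i" | a, b)`
/// would compute: c[i] = sum over j, k of a[i][j][k] * b[k][j].
/// Plain nested Vecs are used instead of ndarray to keep it self-contained.
fn einsum_ijk_kj_i(a: &[Vec<Vec<f64>>], b: &[Vec<f64>]) -> Vec<f64> {
    let (nj, nk) = (a[0].len(), a[0][0].len());
    a.iter()
        .map(|ai| {
            let mut s = 0.0;
            for j in 0..nj {
                for k in 0..nk {
                    // Contract both the j and k axes; only i survives.
                    s += ai[j][k] * b[k][j];
                }
            }
            s
        })
        .collect()
}
```

For the shapes above, `a` of shape `(7, 5, 10)` and `b` of shape `(10, 5)` yield a length-7 vector.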
Then I'd be curious how you would recover the performance of matrix multiplication. ndarray uses the matrixmultiply crate, which uses a packing strategy and a vectorized kernel.
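The packing idea can be illustrated in a few lines: copy one operand into a layout where the innermost loop reads both inputs with unit stride. This is only a toy sketch of the concept; the real matrixmultiply crate additionally blocks for the cache hierarchy and uses SIMD microkernels:

```rust
/// Toy sketch of the packing idea: repack the row-major matrix B
/// (k x n) into column-major order once, so the innermost dot-product
/// loop reads both operands contiguously. Not the matrixmultiply
/// implementation, just the core memory-layout trick.
fn matmul_packed(a: &[f64], b: &[f64], m: usize, k: usize, n: usize) -> Vec<f64> {
    // Pack B into column-major order.
    let mut b_packed = vec![0.0; k * n];
    for kk in 0..k {
        for j in 0..n {
            b_packed[j * k + kk] = b[kk * n + j];
        }
    }
    let mut c = vec![0.0; m * n];
    for i in 0..m {
        for j in 0..n {
            // Both slices are now contiguous in memory.
            let row = &a[i * k..(i + 1) * k];
            let col = &b_packed[j * k..(j + 1) * k];
            c[i * n + j] = row.iter().zip(col).map(|(x, y)| x * y).sum();
        }
    }
    c
}
```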
I saw opt-numpy, which decomposes a given einsum into more fundamental tensor reductions. This project can detect the case where …
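What such a decomposition buys is visible already on a three-operand contraction: evaluating `ij,jk,kl->il` as two pairwise matrix products costs two O(n³) passes instead of the O(n⁴) of one fused four-deep loop. A hedged sketch of the pairwise strategy (the helper names are illustrative, not from any of the projects discussed):

```rust
/// Plain row-major matrix product, the "fundamental reduction" that a
/// decomposed einsum bottoms out in.
fn matmul(a: &[Vec<f64>], b: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let (m, k, n) = (a.len(), b.len(), b[0].len());
    let mut c = vec![vec![0.0; n]; m];
    for i in 0..m {
        for kk in 0..k {
            for j in 0..n {
                c[i][j] += a[i][kk] * b[kk][j];
            }
        }
    }
    c
}

/// "ij,jk,kl->il" evaluated as the chain (A*B)*C: two pairwise
/// contractions instead of one fused quadruple loop.
fn einsum_chain(a: &[Vec<f64>], b: &[Vec<f64>], c: &[Vec<f64>]) -> Vec<Vec<f64>> {
    matmul(&matmul(a, b), c)
}
```

Picking the cheapest pairwise order (here `(A*B)*C` vs `A*(B*C)`) is exactly the optimization such libraries automate.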
Thanks for the link. I'd spontaneously say that a good einsum is a project almost as large as ndarray itself and would be a thing to build in its own crate.
It would be nice to have more matrix multiply implementations. Even expanding the set of implemented types to include …
The TACO project looks very promising. It was designed with both dense and sparse tensors in mind, supporting various sparse and block-sparse tensor formats. Its API is similar to Julia's TensorOperations or Nim's Arraymancer, which is more intuitive than NumPy's.
- General dimensions, like numpy::dot.
- Restricted to float types, for transparent BLAS support.
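The float-restricted option could be expressed as a bound on the element type, so that only `f32`/`f64` instantiations exist and each one can forward to the corresponding BLAS routine. A sketch under that assumption (the `Float` trait here is a local stand-in, not an API from this project):

```rust
/// Stand-in trait bounding the element type to the floats a BLAS
/// backend can handle. Hypothetical; not this project's actual trait.
trait Float: Copy + std::ops::Mul<Output = Self> + std::ops::Add<Output = Self> {
    fn zero() -> Self;
}
impl Float for f32 { fn zero() -> Self { 0.0 } }
impl Float for f64 { fn zero() -> Self { 0.0 } }

/// Dot product restricted to float element types. A real implementation
/// could transparently dispatch to BLAS (sdot/ddot) inside this body.
fn dot<T: Float>(x: &[T], y: &[T]) -> T {
    x.iter().zip(y).fold(T::zero(), |s, (&a, &b)| s + a * b)
}
```

The bound makes the BLAS dispatch a private implementation detail: callers see one generic `dot`, but the compiler only ever instantiates it at types the backend supports.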