Expose linalg::dot in public API #968
Rather than adding another factory function for a strided vector, why not just allow a strided layout to be configured in make_device_vector_view and make_host_vector_view?
Right now make_*_vector_view automatically configures a row-major layout, but the layout should really be configurable (and potentially strided, or col-major if desired).
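For illustration, a rough sketch of what a layout-configurable overload could look like. This is only a sketch, not the code in this PR: the overload itself and the `raft::layout_stride` / `raft::vector_extent` aliases are assumptions here.

```cpp
// Hypothetical sketch - not the actual RAFT API. Shows how make_device_vector_view
// could accept a layout_stride mapping instead of hard-coding a contiguous layout.
#include <raft/core/device_mdspan.hpp>

template <typename ElementType, typename IndexType>
auto make_device_vector_view(
  ElementType* ptr,
  raft::layout_stride::mapping<raft::vector_extent<IndexType>> const& map)
{
  // The caller supplies the mapping (extent + stride), so a non-contiguous run of
  // elements - e.g. a column of a row-major matrix - can be viewed as a 1-D vector.
  return raft::device_vector_view<ElementType, IndexType, raft::layout_stride>{ptr, map};
}
```

A column of a row-major matrix could then be viewed by passing a mapping whose stride equals the number of columns.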
I've updated make_device_vector_view to allow strided input here - let me know what you think.
I brought this up with the axpy as well, but it seems weird to accept a general mdspan here when what we are really looking for is a 1-D vector. Do you see value in accepting a matrix or a dense tensor with 3+ dimensional extents? If not, we should just accept the vector_view directly (which is aliased to be any mdspan with 1-D extents).
If we accepted a device_vector_view directly, we wouldn't need the enable_if statements at all. I think we should go ahead and do the same for the axpy to keep things consistent.
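For reference, a rough sketch of the two shapes being compared; the signatures are approximate, not the exact code in this PR.

```cpp
// Approximate sketch only - not the exact signatures in this PR.
#include <raft/core/device_mdspan.hpp>
#include <raft/core/handle.hpp>

// Accepting a general mdspan means the 1-D requirement has to be enforced with an
// enable_if-style constraint on the extents, e.g.:
//
//   template <typename InType,
//             typename = std::enable_if_t<InType::extents_type::rank() == 1>>
//   void dot(const raft::handle_t& handle, InType x, InType y, /* ... */);
//
// Accepting device_vector_view directly encodes "1-D device vector" in the type
// itself, so no constraint is needed:
template <typename ElementType, typename IndexType>
void dot(const raft::handle_t& handle,
         raft::device_vector_view<const ElementType, IndexType> x,
         raft::device_vector_view<const ElementType, IndexType> y,
         raft::device_scalar_view<ElementType> out);
```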
Agreed - I've made the changes here so that both axpy and dot take device_vector_views.
Should we just go ahead and wrap the cublasEx functions?
I created an issue so we can discuss further: #977.
Reading the docs a little closer, it looks like even with cublasDotEx, having different dtypes for the inputs/outputs isn't currently supported: https://docs.nvidia.com/cuda/cublas/index.html#cublas-dotEx - so it won't add much value for the dot API (though I could see a use for it myself with the gemm API w/ implicit and the mixed-precision work I was talking about last week).
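For context, a minimal sketch of calling cublasDotEx directly with uniform dtypes; the supported combinations in the linked docs keep xType, yType, and resultType the same, with only the execution type allowed to differ. The wrapper name here is just for illustration.

```cpp
// Minimal sketch of wrapping cublasDotEx; per the cuBLAS docs linked above, the
// x, y, and result types must currently match (only executionType can differ).
#include <cublas_v2.h>

cublasStatus_t dot_ex_f32(cublasHandle_t handle, int n,
                          const float* x, const float* y, float* result)
{
  return cublasDotEx(handle, n,
                     x, CUDA_R_32F, 1,    // x, xType, incx
                     y, CUDA_R_32F, 1,    // y, yType, incy
                     result, CUDA_R_32F,  // result, resultType
                     CUDA_R_32F);         // executionType
}
```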