[Feature] Support for a scatter 'concatenate' or 'groupby' operation #398
This does refer to the CSR representation of a sparse matrix, which is implemented in PyG:
This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?
@rusty1s Is there a simple way to retrieve the outputs that @davidbuterez provided, from the result of:

count = rowptr.diff()
col.split(count.tolist())
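To illustrate the snippet above: given a CSR pair `rowptr`/`col`, `rowptr.diff()` yields the per-row counts, and `split` then recovers the per-row column indices. A minimal sketch with made-up example values (not taken from this thread):

```python
import torch

# Hypothetical CSR pair: rowptr marks where each row's entries
# start and end inside col (example values, not from the thread).
rowptr = torch.tensor([0, 2, 5, 6])
col = torch.tensor([10, 11, 20, 21, 22, 30])

count = rowptr.diff()               # entries per row: tensor([2, 3, 1])
groups = col.split(count.tolist())  # tuple of per-row tensors
print([g.tolist() for g in groups])  # → [[10, 11], [20, 21, 22], [30]]
```

Note that `split` returns views where possible, so this grouping does not copy the underlying data.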
I have a PR with a draft implementation if you give me push access to the repo. I'll push to a feature branch, I guess? Unsure what the contribution guidelines are.
@rusty1s What's the correct way to contribute?
What do you want to contribute exactly? It looks like
Hmm. Unsure what you mean. It's an implementation of
Inputs:
Output:
AFAIK
Wouldn't this be similar to
For anyone who reads this, I found that the closest solution to something like
However, batch MUST be sorted in this function. More explanation can be found in the documentation. In addition, I tried it. @rusty1s I suggest incorporating this function into the
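The sorted-`batch` requirement mentioned above can be satisfied with a `bincount` + `split` pattern. A sketch under that assumption (the function name and signature here are illustrative, not the API the comment refers to):

```python
import torch

def group_by_sorted(src, batch, num_groups=None):
    # Illustrative helper: assumes batch is sorted ascending,
    # mirroring the constraint noted in the comment above.
    if num_groups is None:
        num_groups = int(batch.max()) + 1
    counts = torch.bincount(batch, minlength=num_groups)
    return src.split(counts.tolist())

src = torch.tensor([1.0, 2.0, 3.0, 4.0])
batch = torch.tensor([0, 0, 1, 2])
groups = group_by_sorted(src, batch)
print([g.tolist() for g in groups])  # → [[1.0, 2.0], [3.0], [4.0]]
```

Because `batch` is sorted, each group occupies a contiguous slice of `src`, so no gather or copy is needed beyond the split itself.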
Hi, thanks for the amazing work so far!
I was wondering if it would be possible to efficiently support a scatter operation that, instead of reducing (e.g. with sum, mean, max, or min), simply returns the values grouped by their index.
For example, following the homepage illustration of this repo:
I would like to get an output similar to this:
(the order within each list would not matter)
I am not sure if I am missing something or if this is possible using existing operations. Perhaps the varying length is problematic, but this could be handled with nested tensors or padding. I would like to apply this operation several times per training epoch so ideally it would be efficient on GPUs.
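One way to sketch the requested "scatter concatenate" for an unsorted index is to sort by index and then split by group size. The helper below is hypothetical (not part of torch_scatter); the order within each group is unspecified, which matches the relaxed ordering requirement stated above:

```python
import torch

def scatter_concat(src, index, dim_size=None):
    # Hypothetical helper, not part of torch_scatter.
    # Returns a tuple of tensors, one per index value; the order
    # of elements within each group is not guaranteed.
    if dim_size is None:
        dim_size = int(index.max()) + 1
    perm = index.argsort()                           # bring equal indices together
    counts = torch.bincount(index, minlength=dim_size)
    return src[perm].split(counts.tolist())

src = torch.tensor([10, 20, 30, 40, 50])
index = torch.tensor([0, 2, 0, 1, 2])
groups = scatter_concat(src, index)
print([sorted(g.tolist()) for g in groups])  # → [[10, 30], [40], [20, 50]]
```

For equal-width output on GPU, the variable-length groups could then be padded (e.g. with `torch.nn.utils.rnn.pad_sequence`) or stored as nested tensors, as the issue suggests.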