Implement AddPositionalEncoding transform #4521
Conversation
for more information, see https://pre-commit.ci
Codecov Report
@@            Coverage Diff             @@
##           master    #4521      +/-   ##
==========================================
+ Coverage   82.81%   82.88%   +0.06%
==========================================
  Files         315      316       +1
  Lines       16605    16674      +69
==========================================
+ Hits        13752    13820      +68
- Misses       2853     2854       +1
Continue to review full report at Codecov.
I guess the paper is this one: GRAPH NEURAL NETWORKS WITH LEARNABLE
@Padarn I added the paper you mentioned to the docs at lines 32--34.
Is this what you meant? If I missed anything, please let me know. Thank you.
Yes, that's all I meant; it's just common practice across the docstrings. Thanks!
This looks pretty good. Thank you very much! Only have some minor points regarding code structure.
def test_add_laplacian_eigenvector_pe():
Any chance we can also test for output rather than solely checking for shapes?
I added output tests for RandomWalkPE.
However, for LaplacianEigenvectorPE, I found that each run produces different outputs; I guess computing the eigenvectors of the Laplacian is not a deterministic operation. How can we write output tests for LaplacianEigenvectorPE? Any suggestions?
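Part of the non-determinism is inherent: if v is an eigenvector, so is -v, so a solver may return either sign. A small numpy sketch of this (a hypothetical example, not code from the PR):

```python
import numpy as np

# Path graph on 4 nodes: adjacency and unnormalized Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

w, V = np.linalg.eigh(L)  # deterministic dense solver, ascending eigenvalues
v = V[:, 1]               # first non-trivial eigenvector

# Both v and -v satisfy the eigenvalue equation, so an iterative solver
# seeded with a random starting vector may legitimately return either sign.
assert np.allclose(L @ v, w[1] * v)
assert np.allclose(L @ (-v), w[1] * (-v))
```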
You can try setting the random seed of torch and numpy?
I tried np.random.seed and random.seed, but the outputs still differ. I found that passing a fixed starting vector v0 to scipy.sparse.linalg.eigs produces deterministic outputs. We could accept **kwargs in __call__ and pass them (including v0) to eigs explicitly. What do you think?
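For illustration, a minimal sketch of the v0 idea with scipy.sparse.linalg.eigs, using a hypothetical 6-node cycle-graph Laplacian as the fixture (not the PR's actual test):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigs

# Laplacian of a 6-node cycle graph (hypothetical fixture).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = csr_matrix(np.diag(A.sum(axis=1)) - A)

# With a fixed starting vector v0, ARPACK's iteration is deterministic,
# so two runs return identical eigenvectors.  (v0 must not itself be an
# eigenvector, so we avoid the all-ones vector, which has eigenvalue 0.)
v0 = np.arange(1.0, n + 1)
w1, V1 = eigs(L, k=2, which='LM', v0=v0)
w2, V2 = eigs(L, k=2, which='LM', v0=v0)
assert np.allclose(V1, V2)
```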
Yeah, it looks like when it's not provided, the flag info=0 is set (which tells ARPACK to generate a random starting vector).
How different are the outputs? Personally, I feel that adding some leniency when checking the result (i.e. using allclose with a high atol or rtol) might be better than adding the extra **kwargs just for this.
Just my opinion though. I think both ways work.
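To illustrate the leniency idea, a hypothetical helper (not part of the PR) that compares encodings column-wise with a tolerance, allowing for the per-eigenvector sign ambiguity:

```python
import numpy as np

def pe_close(a, b, atol=1e-4):
    """Compare positional-encoding matrices column by column, treating a
    column and its negation as equal (hypothetical helper for illustration)."""
    return all(
        np.allclose(a[:, j], b[:, j], atol=atol)
        or np.allclose(a[:, j], -b[:, j], atol=atol)
        for j in range(a.shape[1])
    )

pe = np.array([[0.5, -0.5],
               [0.5,  0.5]])
assert pe_close(pe, -pe)         # a global sign flip is tolerated
assert pe_close(pe, pe + 1e-6)   # small numerical noise is tolerated
assert not pe_close(pe, 2 * pe)  # genuinely different values are not
```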
I have tested various graphs with AddLaplacianEigenvectorPE. Without v0, the output really varies, so we cannot test with torch.allclose.
Instead, I added a clustering test on a graph with two clusters, which uses the mean value of the first non-trivial eigenvector (the Fiedler vector) within each cluster.
I also added tests with exact values using seed_everything and v0. The keyword argument v0 is passed through AddLaplacianEigenvectorPE.__init__.
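The clustering idea can be sketched as follows, using a hypothetical two-clique graph (not the PR's actual fixture): the Fiedler vector separates the clusters by sign, which is invariant to the eigenvector's overall sign.

```python
import numpy as np

# Two 3-node cliques joined by a single bridge edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

w, V = np.linalg.eigh(L)
fiedler = V[:, 1]  # eigenvector of the smallest non-zero eigenvalue

# The two cluster means have opposite signs regardless of which sign the
# solver picked, so this check is robust to the sign ambiguity.
m1, m2 = fiedler[:3].mean(), fiedler[3:].mean()
assert m1 * m2 < 0
```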
Well... looking at the failed tests, though, using seed_everything and v0 does not guarantee the same output on different machines.
Got it. I think what you've done for the clustering test is a good solution. Thanks!
Co-authored-by: Matthias Fey <[email protected]>
This is super great. Thank you all. I will send the other PRs mentioned in this thread soon.
Thank you!
Great!
This PR implements Laplacian eigenvector positional encoding and random-walk positional encoding, based on the original authors' implementation.
Any comments are welcome. Also, it looks fine to me, but I do not know why there is an indentation error in the documentation build...
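For reference, the random-walk positional encoding idea can be sketched in plain numpy (an illustrative re-implementation of the concept, not the PR's code): feature k of node i is the probability that a k-step random walk starting at i returns to i.

```python
import numpy as np

def random_walk_pe(A, walk_length):
    """Return an (n, walk_length) matrix of k-step return probabilities,
    i.e. the diagonals of the powers of the transition matrix D^-1 A.
    Illustrative sketch, not the PR's implementation."""
    P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    pe, Pk = [], np.eye(A.shape[0])
    for _ in range(walk_length):
        Pk = Pk @ P
        pe.append(np.diag(Pk))
    return np.stack(pe, axis=1)

# Triangle graph: a walk cannot return home in one step, and returns
# after two steps with probability 1/2.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
pe = random_walk_pe(A, walk_length=2)
assert np.allclose(pe[:, 0], 0.0)
assert np.allclose(pe[:, 1], 0.5)
```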