Hello, first of all I would like to thank you for sharing the code. I was looking at the Spatial Attention component (line 56 in model.py) and noticed some differences from what is presented in the paper:
1. I cannot find where you split the vertices into G partitions (and perform the intra-/inter-group attention). As far as I can tell, the spatialAttention function only does the intra-group spatial attention, without any partitioning; I've sketched below what I had expected from the paper.
2. After computing Eq. 7 (line 86 in model.py), the output is projected again through two FC layers, which are not described in the paper. What is the reason for this?
3. Looking at Eq. 7, the input of function f3 is the previous hidden representation, whereas in your code you also use the static graph embeddings (e_{v,tj}).
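For context, this is roughly what I had expected from the paper's description of group spatial attention. It is only a minimal NumPy sketch; the function name, the max-pooled group summaries, and the residual combination at the end are my own assumptions, not the paper's exact formulation or your code:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def group_spatial_attention(h, G):
    """Hypothetical sketch: partition N vertices into G groups,
    then apply intra-group and inter-group attention.

    h : (N, D) hidden representations for N vertices.
    G : number of groups; assumes N is divisible by G for simplicity.
    """
    N, D = h.shape
    groups = h.reshape(G, N // G, D)                            # split vertices into G groups

    # Intra-group attention: each vertex attends only to vertices in its own group.
    scores = groups @ groups.transpose(0, 2, 1) / np.sqrt(D)    # (G, N/G, N/G)
    intra = softmax(scores, axis=-1) @ groups                   # (G, N/G, D)

    # Inter-group attention: one summary vector per group attends over all groups.
    summary = intra.max(axis=1)                                 # (G, D) group representation (assumed pooling)
    g_scores = summary @ summary.T / np.sqrt(D)                 # (G, G)
    inter = softmax(g_scores, axis=-1) @ summary                # (G, D)

    # Broadcast the inter-group context back to the vertices of each group.
    out = intra + inter[:, None, :]                             # (G, N/G, D)
    return out.reshape(N, D)

# Example: 8 vertices, 4-dim hidden states, 2 groups.
h = np.random.randn(8, 4)
print(group_spatial_attention(h, G=2).shape)                    # (8, 4)
```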
Looking forward to your reply.
I have the same doubt. I couldn't fully understand the inter-group spatial attention described in the paper, so I tried to see how the code works, but I cannot find anything related. There isn't even a G in the code.
I think the FC is used to change the dimension of the attention results so that the new hidden states can have the same dimension as the previous hidden states.
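If that is the reason, the pattern would be something like the following. This is only a minimal sketch with assumed sizes (N, D_att, D_hidden) and plain NumPy weights, not the actual layers in model.py:

```python
import numpy as np

rng = np.random.default_rng(0)

N, D_att, D_hidden = 8, 16, 64                            # assumed sizes, for illustration only
attn_out = rng.standard_normal((N, D_att))                # output of the spatial attention (Eq. 7)

# Two FC layers projecting the attention output to the hidden dimension,
# so the new hidden states match the dimension of the previous hidden states.
W1, b1 = rng.standard_normal((D_att, D_hidden)), np.zeros(D_hidden)
W2, b2 = rng.standard_normal((D_hidden, D_hidden)), np.zeros(D_hidden)

h_new = np.maximum(attn_out @ W1 + b1, 0.0) @ W2 + b2     # (N, D_hidden)
print(h_new.shape)                                         # same shape as the previous hidden states
```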