Hi, I am looking into your code, and it seems that in models.py, self.multi_head_att_layers (self-attention) and self.relation_attention_gcns (cross-KG attention) use the same adjacency matrix, rather than a different adjacency matrix for each channel. Is there anything wrong with my understanding?
These two modules use adjacency matrices with the same connectivity, but the edge weights are calculated separately in the two channels: by the KG Self-Attention module in one and by the Cross-KG Attention module in the other.
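For illustration, here is a minimal sketch of that idea, assuming a toy single-head attention channel. AttentionChannel, dim, self_att, and cross_att are hypothetical names for this example only, not the repository's actual classes; the point is simply that both channels share one adjacency mask while each learns its own edge weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionChannel(nn.Module):
    """One attention channel: scores node pairs, then masks with the shared adjacency."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # Raw attention scores for every node pair.
        scores = self.query(x) @ self.key(x).transpose(-1, -2) / x.size(-1) ** 0.5
        # Shared connectivity: non-edges are masked out identically in every channel.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        # Edge weights still differ per channel, because each channel has its own parameters.
        weights = F.softmax(scores, dim=-1)
        return weights @ x

dim, n = 16, 5
x = torch.randn(n, dim)
adj = (torch.rand(n, n) > 0.5).float()
adj.fill_diagonal_(1.0)  # keep self-loops so no softmax row is all -inf

self_att = AttentionChannel(dim)   # stands in for the self-attention channel
cross_att = AttentionChannel(dim)  # stands in for the cross-KG attention channel

# The same adj (connectivity) goes into both channels;
# the learned edge weights and outputs differ.
out_self = self_att(x, adj)
out_cross = cross_att(x, adj)
```

In other words, the adjacency matrix only fixes which edges exist; what each channel learns is how much weight to put on those edges.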