Some inconsistencies in the paper and the code #2

Open
likuanppd opened this issue Aug 20, 2021 · 5 comments

Comments


likuanppd commented Aug 20, 2021

Hi Dai,
There are some inconsistencies between the paper and the code, listed below.

1---The paper says that f_p and f_e are pretrained, but in the code it seems you just compute feature cosine similarities to get the potential edge set at the very beginning (and this step is essential: without it the performance greatly decreases). I don't see any pretraining step (see the rough sketch below, after point 2).

2---In the paper, the total loss is composed of L_E (the reconstruction loss), L_P (the cross-entropy loss of the pseudo-label predictor on the training set) and L_G (the cross-entropy loss of the final classifier), i.e. argmin L_G + α L_E + β L_P.
By contrast, line 133 of NRGNN.py reads `total_loss = loss_gcn + loss_pred + self.args.alpha * rec_loss + self.args.beta * loss_add`, so there are four components in the code: `loss_pred` is the L_P from the paper, but `loss_add` does not correspond to anything in that objective. Are there any details about `loss_add` in the paper that I missed?
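
For reference, here is roughly what I mean by the cosine-similarity step in point 1. This is a minimal sketch under my own reading of the code, not the repo's exact implementation; the function and variable names are mine:

```python
import torch
import torch.nn.functional as F

# Rough sketch of what seems to replace the pretraining step: for every node,
# take its k most cosine-similar nodes as the potential edge set before training starts.
def get_candidate_edges(features, k=50):
    normed = F.normalize(features, p=2, dim=1)        # row-normalize so dot products are cosine similarities
    sim = normed @ normed.t()                         # (N, N) cosine similarity matrix
    sim.fill_diagonal_(0)                             # drop self-similarity
    _, nbrs = sim.topk(k, dim=1)                      # k most similar nodes per node
    row = torch.arange(features.size(0), device=features.device).unsqueeze(1).expand_as(nbrs)
    return torch.stack([row.reshape(-1), nbrs.reshape(-1)])  # (2, N*k) candidate edge_index
```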

Thanks


zchengk commented Feb 19, 2022

I would also like to know about the second question, which confuses me deeply. What's more, when I repeated the experiments from the paper, the accuracy dropped by several percentage points. I think this is related to the two questions above.


JackYFL commented Apr 19, 2023

I agree with you. I think this operation is quite tricky, since `loss_add` is obtained from the best predictions of the pseudo-label miner in previous epochs, which is unfair to the other baselines. Moreover, according to my ablation study this trick improves the performance greatly. A rough sketch of what I mean is below.
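
This is only a hypothetical paraphrase of how I understand `loss_add`, not the repo's code; the names (`pseudo_label_loss`, `best_pred`, `threshold`) are mine:

```python
import torch
import torch.nn.functional as F

# Unlabeled nodes that the pseudo-label miner predicted confidently in its best
# previous epoch get those predictions reused as extra supervision for the classifier.
def pseudo_label_loss(logits, best_pred, idx_train, threshold=0.9):
    # logits:    (N, C) current classifier outputs
    # best_pred: (N, C) softmax output saved from the miner's best previous epoch
    # idx_train: LongTensor of labeled node indices
    conf, pseudo_labels = best_pred.max(dim=1)
    idx_add = (conf > threshold).nonzero().flatten()
    idx_add = idx_add[~torch.isin(idx_add, idx_train)]   # skip already-labeled nodes
    if idx_add.numel() == 0:
        return logits.new_zeros(())
    return F.cross_entropy(logits[idx_add], pseudo_labels[idx_add])
```

So the extra term rewards agreement with the miner's own earlier predictions, which would explain why removing it hurts so much.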

@pintu-dot

Hey, I was trying to run the code but it gives this error: `one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [0]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).`

Did anyone else face this issue? What was the fix you tried?

I encountered the error here (please ignore the line numbers):

```
     16 esgnn = NRGNN(args, device)
---> 17 esgnn.fit(features, adj, noise_labels, idx_train, idx_val)
     18
     19 print("=====test set accuracy=======")

3 frames

in fit(self, features, adj, labels, idx_train, idx_val)
     58 for epoch in range(args.epochs):
     59     print(epoch)
---> 60 self.train(epoch, features, edge_index, idx_train, idx_val)
     61
     62 print("Optimization Finished!")

in train(self, epoch, features, edge_index, idx_train, idx_val)
    118
    119 total_loss = loss_gcn + loss_pred + self.args.alpha * rec_loss + self.args.beta * loss_add
--> 120 total_loss.backward()
    121 self.optimizer.step()
    122
```
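
Following the hint in the message itself, anomaly detection can be switched on like this to see which forward op produced the offending tensor (standard PyTorch, nothing repo-specific):

```python
import torch

# Make the backward pass report which forward operation created the tensor
# that was later modified in place (slows training; for debugging only).
torch.autograd.set_detect_anomaly(True)

# then re-run the failing call, e.g.
# esgnn.fit(features, adj, noise_labels, idx_train, idx_val)
```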


JackYFL commented Nov 10, 2023

I think it may be caused by the versions of the packages.


rockcor commented Nov 28, 2023

> Hey, I was trying to run the code but it gives this error: one of the variables needed for gradient computation has been modified by an inplace operation ... Did anyone else face this issue? What was the fix you tried?
The hint says the problem happens in the ReLU() function. Just add `detach()` here:

`estimated_weights = F.relu(output.detach())`

@pintu-dot
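
If it helps, here is a tiny standalone repro of the same failure mode (not the repo's code), which also shows why `detach()` sidesteps it:

```python
import torch
import torch.nn.functional as F

# ReLU's backward pass saves its output; editing that output in place bumps its
# version counter, and backward() then raises the "modified by an inplace operation" error.
x = torch.randn(4, requires_grad=True)
w = F.relu(x)
w[w < 0.5] = 0.5        # in-place edit of a tensor needed for the backward pass
# w.sum().backward()    # -> RuntimeError: ... ReluBackward0 ... is at version 1; expected version 0

# Detaching first keeps the later in-place edits off the autograd graph, so the error
# goes away -- but note that no gradient flows back through these weights any more,
# so it is a workaround rather than a behavior-preserving fix.
w_fixed = F.relu(x.detach())
w_fixed[w_fixed < 0.5] = 0.5
```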
