The paper directly cites data from other papers? #29

Open
apuomline opened this issue Jul 21, 2024 · 0 comments
@apuomline commented:
"Hello, author. Your work on deformable DLKA is very impressive. However, we seem to have found an issue: in your paper, the DSC for the Synapse using the nnformer is 86.57%, which is exactly the same as the data run in the nnformer paper. But your experimental environment is not the same as the nnformer's environment. That is to say, you have achieved completely identical experimental results in different environments?"
nnFormer reported results: [image]
nnFormer experiment setup: "We run all experiments based on Python 3.6, PyTorch 1.8.1 and Ubuntu 18.04. All training procedures have been performed on a single NVIDIA 2080 GPU with 11GB memory. The initial learning rate is set to 0.01 and we employ a “poly” decay strategy as described in Equation 7. The default optimizer is SGD where we set the momentum to 0.99. The weight decay is set to 3e-5. We utilize both cross entropy loss and dice loss by simply summing them up. The number of training epochs (i.e., max epoch in Equation 7) is 1000 and one epoch contains 250 iterations. The number of heads of multi-head self-attention used in different encoder stages is [6, 12, 24, 48] on Synapse. In the rest two datasets, the number of heads becomes [3, 6, 12, 24]."
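For reference, a minimal sketch of what the quoted nnFormer recipe amounts to in PyTorch. The 0.9 poly exponent is an assumption (the value commonly used in nnU-Net-style pipelines); Equation 7 of the nnFormer paper is not reproduced in this issue, and `model` is only a stand-in for the real network:

```python
import torch

# Stand-in network; the real nnFormer model is not shown in this issue.
model = torch.nn.Conv3d(1, 14, kernel_size=3, padding=1)

initial_lr = 0.01
max_epoch = 1000
iters_per_epoch = 250

# SGD with momentum 0.99 and weight decay 3e-5, as quoted above.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=initial_lr,
    momentum=0.99,
    weight_decay=3e-5,
)

def poly_lr(epoch: int) -> float:
    # "poly" decay: lr(epoch) = initial_lr * (1 - epoch / max_epoch) ** 0.9
    # (the 0.9 exponent is assumed; the paper's Equation 7 is not quoted here)
    return initial_lr * (1 - epoch / max_epoch) ** 0.9

for epoch in range(max_epoch):
    # Update the learning rate once per epoch.
    for group in optimizer.param_groups:
        group["lr"] = poly_lr(epoch)
    # ... run `iters_per_epoch` training iterations per epoch here ...
```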

Deformable DLKA reported results: [image]

Deformable DLKA experiment setup: "We have implemented both 2D and 3D models using the PyTorch framework and performed training on a single RTX 3090 GPU. For the 2D method, a batch size of 20 was used, along with Stochastic Gradient Descent (SGD) employing a base learning rate of 0.05, a momentum of 0.9, and a weight decay of 0.0001. The training process consisted of 400 epochs, employing a combination of cross-entropy and Dice loss, ..."

"In light of this, we are very curious to know whether the segmentation figures for the Synapse using the nnformer in your paper were taken directly from the nnformer's work?"
