Recently I used the PyTorch version for some simple backdoor attacks, and it works really well.
However, I have a question after reading your paper and the code:
In Sec. 4.2 and the preceding sections, the paper claims that 'the compromised neurons should activate in a specific range' to obtain the output elevation. This statement makes sense and is intuitive. However, in Algorithm 2 and the code, it seems that you maximize the activation value of the compromised neurons rather than driving them into that specific range, which is what would yield the required output elevation.
So, is maximizing the activation value of the compromised neurons equivalent to activating them within a specific range? Or did I miss some analysis and misunderstand it?
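To make the distinction concrete, here is a minimal sketch of the two objectives as I read them. This is illustrative only, not your actual code: `model`, `layer`, `neuron_idx`, `mask`, and `optimize_trigger` are all placeholder names, and the loop is a generic gradient-based trigger optimization.

```python
import torch

def optimize_trigger(model, layer, neuron_idx, trigger, mask,
                     target_value=None, steps=100, lr=0.1):
    """Optimize a trigger patch with respect to one neuron's activation.

    If target_value is None, the activation is maximized outright
    (what Algorithm 2 / the code appears to do); otherwise the
    activation is driven toward a specific value, matching the
    'activate in a specific range' wording in Sec. 4.2.
    Hypothetical helper for illustration only.
    """
    trigger = trigger.detach().clone().requires_grad_(True)
    activation = {}

    # Capture the chosen layer's output on each forward pass.
    def hook(_module, _inputs, output):
        activation["v"] = output

    handle = layer.register_forward_hook(hook)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(trigger * mask)  # forward pass with the masked trigger
        act = activation["v"].flatten(1)[:, neuron_idx].mean()
        if target_value is None:
            loss = -act                       # unbounded maximization
        else:
            loss = (act - target_value) ** 2  # pull toward a specific value
        loss.backward()
        opt.step()
    handle.remove()
    return trigger.detach()
```

In other words, my question is whether the first objective (`loss = -act`) ends up being equivalent in practice to the second (`loss = (act - target_value) ** 2`), or whether the difference matters for the output elevation.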