I am currently working on `GradOutHook`, which differs from the current `zennit.core.Hook` in that instead of overwriting the full gradient of the module, it only changes the output gradient. For Composites using `zennit.core.Hook`, only a single Hook can be attached at a time, because it overwrites the module's full gradient. A `GradOutHook` can modify the output gradient multiple times and can be used together with `zennit.core.Hook`, which makes it possible to use multiple Composites at a time. Another way to enable multiple hooks would be to let the `module_map` function of Composites return a tuple of Hooks to be applied.
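To illustrate the idea, here is a minimal sketch in plain PyTorch; the class name comes from this issue, but `grad_out_fn` and the registration details are assumptions, not the final API:

```python
import torch


class GradOutHook:
    '''Sketch of a hook that modifies only the gradient at the module
    output, instead of overwriting the module's full gradient like
    zennit.core.Hook does. Since it only transforms the output
    gradient, multiple such hooks can be attached to the same module.
    '''
    def forward(self, module, input, output):
        # attach a tensor hook to the output; it transforms the
        # gradient before it enters the module's backward pass
        if output.requires_grad:
            output.register_hook(self.grad_out_fn)

    def grad_out_fn(self, grad_output):
        # identity by default; subclasses modify the output gradient
        return grad_output

    def register(self, module):
        # keep the handle so the hook can be removed later
        return module.register_forward_hook(self.forward)
```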
The main use case for this is to mask or re-weight neurons, primarily to support LRP for GNNs. Another use case is to mask certain neurons to compute LRP for only a subset of features/concepts.
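For example, restricting the backward pass to a subset of neurons could look like this (a self-contained sketch using plain PyTorch hooks; the layer, shapes, and chosen neuron are arbitrary):

```python
import torch
import torch.nn as nn

# mask the output gradient of one layer so the resulting
# gradient/relevance only reflects a single neuron
layer = nn.Linear(8, 4)
mask = torch.zeros(1, 4)
mask[:, 2] = 1.  # keep only neuron 2

def mask_grad_out(module, input, output):
    # scale the output gradient element-wise by the mask
    output.register_hook(lambda grad: grad * mask)

handle = layer.register_forward_hook(mask_grad_out)

x = torch.randn(1, 8, requires_grad=True)
out = layer(x)
out.backward(torch.ones_like(out))
# x.grad now only carries the contribution of neuron 2
handle.remove()
```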
This will slightly change the Hook inheritance: a `HookBase` will be added to specify the interface required of all Hooks. I am also considering adding a `Mask` rule to `zennit/rule.py`, which takes a function or a tensor to mask the output gradient, so that it can be used without subclassing the planned `GradOutHook`.
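A possible shape for such a `Mask` rule, under the same assumptions as the sketch above (the dispatch on tensor vs. function is my guess at the intended behavior):

```python
import torch


class Mask:
    '''Sketch of a Mask rule: takes a tensor or a function and applies
    it to the gradient at the module output.'''
    def __init__(self, mask):
        self.mask = mask

    def grad_out_fn(self, grad_output):
        # a function is applied directly; a tensor is multiplied in
        if callable(self.mask):
            return self.mask(grad_output)
        return grad_output * self.mask

    def forward(self, module, input, output):
        if output.requires_grad:
            output.register_hook(self.grad_out_fn)

    def register(self, module):
        return module.register_forward_hook(self.forward)
```

With this, `Mask(some_tensor)` would re-weight neurons element-wise, while e.g. `Mask(torch.relu)` would pass only the positive part of the output gradient.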