1.4.0
New features and improvements
- Two new layers (usage sketch below):
  - `SpectralConv2DTranspose`, a Lipschitz version of the Keras `Conv2DTranspose` layer
  - `Householder` activation layer, a parametrized generalization of the `GroupSort2` activation
- Two new regularizers to foster orthogonality (usage sketch below):
  - `LorthRegularizer` for an orthogonal convolution
  - `OrthDenseRegularizer` for an orthogonal `Dense` matrix kernel
- Two new losses for Lipschitz networks (usage sketch below):
  - `TauCategoricalCrossentropy`, a categorical cross-entropy loss with temperature scaling `tau`
  - `CategoricalHinge`, a hinge loss for multi-class problems based on the implementation of the Keras `CategoricalHinge`
- Two new custom callbacks (usage sketch below):
  - `LossParamScheduler` to change loss hyper-parameters during training, e.g. `min_margin`, `alpha` and `tau`
  - `LossParamLog` to log the value of loss parameters
- The Björck orthogonalization algorithm was accelerated.
- Normalizers (power iteration and Björck) use `tf.while_loop`, and its `swap_memory` argument can be set globally using `set_swap_memory(bool)`. The default value is `True`, which reduces GPU memory usage (usage sketch below).
- The new function `set_stop_grad_spectral(bool)` allows bypassing back-propagation through the power iteration algorithm that computes the spectral norm. The default value is `True`; stopping gradient propagation reduces runtime.
- Due to bugs in TensorFlow serialization of custom losses and metrics (versions 2.0 and 2.1), deel-lip now only supports TensorFlow >= 2.2.
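
A minimal sketch of the two new layers in a 1-Lipschitz model. The module paths and constructor arguments shown here are assumptions based on the existing deel-lip API and may differ from the released signatures; see the documentation for details.

```python
# Sketch: a small Lipschitz block using the two new layers
# (module paths and arguments are assumptions, check the API docs).
from deel.lip.model import Sequential
from deel.lip.layers import SpectralConv2DTranspose
from deel.lip.activations import Householder  # may also be exposed in deel.lip.layers

model = Sequential(
    [
        # Lipschitz counterpart of keras.layers.Conv2DTranspose
        SpectralConv2DTranspose(filters=16, kernel_size=3, strides=2,
                                padding="same", input_shape=(8, 8, 32)),
        # Parametrized generalization of the GroupSort2 activation
        Householder(),
    ],
    k_coef_lip=1.0,  # global Lipschitz constant of the model
)
model.summary()
```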
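A sketch of attaching the new orthogonality regularizers to standard Keras layers. The module path `deel.lip.regularizers` and the default constructor calls are assumptions; the regularizers may expect additional arguments (e.g. a regularization factor or the kernel shape).

```python
# Sketch: orthogonality regularizers on plain Keras layers
# (module path and default arguments are assumptions).
import tensorflow as tf
from deel.lip.regularizers import LorthRegularizer, OrthDenseRegularizer

conv = tf.keras.layers.Conv2D(
    32, 3, padding="same",
    kernel_regularizer=LorthRegularizer(),        # pushes the convolution towards orthogonality
)
dense = tf.keras.layers.Dense(
    64,
    kernel_regularizer=OrthDenseRegularizer(),    # pushes the Dense kernel towards orthogonality
)
```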
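A sketch of compiling a model with the new losses. The `tau` argument is named in these notes; the module path `deel.lip.losses` and the `min_margin` argument of `CategoricalHinge` are assumptions.

```python
# Sketch: compiling with the new losses (paths/arguments partly assumed).
import tensorflow as tf
from deel.lip.model import Sequential
from deel.lip.layers import SpectralDense
from deel.lip.losses import TauCategoricalCrossentropy, CategoricalHinge

model = Sequential([SpectralDense(10, input_shape=(32,))])

# Temperature-scaled categorical cross-entropy: tau controls the softmax temperature.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=TauCategoricalCrossentropy(tau=10.0),
    metrics=["accuracy"],
)

# Alternatively, a multi-class hinge loss in the spirit of the Keras CategoricalHinge
# (the min_margin argument is an assumption, not confirmed by these notes).
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=CategoricalHinge(min_margin=1.0),
)
```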
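A sketch of scheduling and logging a loss hyper-parameter during training. The constructor arguments shown for `LossParamScheduler` and `LossParamLog` (`param_name`, `xp`, `fp`) are assumptions; refer to the API documentation for the exact signatures.

```python
# Sketch: scheduling the tau parameter of the loss during training
# (callback constructor arguments are assumptions).
import numpy as np
import tensorflow as tf
from deel.lip.model import Sequential
from deel.lip.layers import SpectralDense
from deel.lip.losses import TauCategoricalCrossentropy
from deel.lip.callbacks import LossParamScheduler, LossParamLog

model = Sequential([SpectralDense(10, input_shape=(32,))])
model.compile(optimizer="adam", loss=TauCategoricalCrossentropy(tau=1.0))

# Dummy data for illustration only.
x = np.random.normal(size=(256, 32)).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 10, size=256), 10)

model.fit(
    x, y, epochs=5, batch_size=32,
    callbacks=[
        # Ramp tau from 1 to 100 over the first 20 training steps (assumed interface).
        LossParamScheduler(param_name="tau", xp=[0, 20], fp=[1.0, 100.0]),
        # Log the current value of tau during training (assumed interface).
        LossParamLog(param_name="tau"),
    ],
)
```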
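A sketch of the two new global switches. The import path `deel.lip.utils` is an assumption; the functions may also be exposed at the package top level.

```python
# Sketch: global configuration of the normalizers (import path assumed).
from deel.lip.utils import set_swap_memory, set_stop_grad_spectral

# Trade GPU memory for speed: keep the tf.while_loop intermediates on the GPU
# instead of swapping them to host memory (default is True).
set_swap_memory(False)

# Let gradients flow through the power-iteration spectral norm computation
# (default is True, i.e. gradients are stopped, which reduces runtime).
set_stop_grad_spectral(False)
```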
Fixes
- `SpectralInitializer` no longer reuses the same base initializer across multiple instances.
Full Changelog: v1.3.0...v1.4.0