- Introduces the `Lux.Training` API for less clunky training loops (sketch below)
- `WeightNorm` cannot be invoked on a parameter with all elements equal to 0
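A minimal sketch of a training step with `Lux.Training`, based on the names documented for this API (`TrainState`, `compute_gradients`, `apply_gradients`); exact signatures in the v0.4-era release may differ, and `AutoZygote` here comes from ADTypes:

```julia
using Lux, Optimisers, Random, Zygote
using ADTypes: AutoZygote

rng = Random.default_rng()
model = Dense(4 => 2)
ps, st = Lux.setup(rng, model)

# Bundle model, parameters, states, and optimiser into a single TrainState.
ts = Lux.Training.TrainState(model, ps, st, Adam(0.01f0))

# Objective function returns (loss, updated_state, extra_stats).
function loss_fn(model, ps, st, (x, y))
    ŷ, st_ = model(x, ps, st)
    return sum(abs2, ŷ .- y), st_, (;)
end

x, y = rand(rng, Float32, 4, 8), rand(rng, Float32, 2, 8)
grads, loss, stats, ts = Lux.Training.compute_gradients(AutoZygote(), loss_fn, (x, y), ts)
ts = Lux.Training.apply_gradients(ts, grads)
```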
- Fixes Recurrent Model bug on GPUs
- Deprecations (see the migration sketch after this list)
  - `ActivationFunction` -- See reasoning in LuxDL#71
  - Default `parameterlength`/`statelength`/`initialparameters`/`initialstates` for certain types
  - `trainmode`/`testmode` with `mode` argument
  - `Conv` keyword `:bias` deprecated in favor of `use_bias`
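A short migration sketch for the last two deprecations. The replacement forms (`use_bias`, single-argument `trainmode`/`testmode`) are the ones named by these entries; the deprecated signatures mentioned in the comments are assumptions:

```julia
using Lux, Random

# Conv: pass `use_bias` instead of the deprecated `:bias` keyword.
layer = Conv((3, 3), 3 => 16; use_bias=false)

ps, st = Lux.setup(Random.default_rng(), layer)

# trainmode/testmode: call the single-argument form on the layer state
# rather than passing an explicit `mode` argument.
st_test  = testmode(st)
st_train = trainmode(st)
```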
- Manual detailing the Lux Interface
- Fixes bug with ComponentArray + Optimiser FluxML/Optimisers.jl#91
- `Dropout` layers cache `1 / (1 - p)` for minor forward-pass improvements (illustrated below)
- `dropout` has a custom rrule -- significantly improves performance for smaller arrays
- WARN: A bug with ComponentArray + CUDA + Optimiser usage was introduced. Please update to v0.4.8 or higher.
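The cached constant is the usual inverted-dropout scale factor. A minimal illustration (not Lux's implementation) of why caching `1 / (1 - p)` saves a division on every forward pass:

```julia
using Random

# Inverted dropout: surviving activations are rescaled by 1 / (1 - p) so that
# the expected value of the output matches the input. Computing `inv_q` once
# at construction time leaves only broadcasted multiplies in the forward pass.
function naive_dropout(rng::AbstractRNG, x::AbstractArray, p::Real, inv_q::Real)
    mask = rand(rng, eltype(x), size(x)) .> p   # keep each element with prob 1 - p
    return x .* mask .* inv_q
end

p = 0.3f0
inv_q = 1 / (1 - p)   # the cached value referenced by this entry
x = rand(Float32, 4, 4)
y = naive_dropout(Random.default_rng(), x, p, inv_q)
```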
- Documentation revamped
- `gpu` and `cpu` give a deprecation warning if used on an `AbstractExplicitLayer` (sketch below)
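A Lux layer struct is an immutable description holding no numbers, so moving it between devices is meaningless; the warning points users at moving the parameters and states instead. A sketch of the intended usage at the time (more recent Lux versions replace `gpu`/`cpu` with device objects such as `gpu_device()`):

```julia
using Lux, Random

model = Dense(4 => 2)
ps, st = Lux.setup(Random.default_rng(), model)

# gpu(model) now warns; move the parameter/state NamedTuples instead
# (assumes a working GPU setup).
ps_gpu = gpu(ps)
st_gpu = gpu(st)
```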
- Allow Arbitrary Parameter Types
- Updated to support Julia v1.6 (test-time dependency issues)
- Extends `Scale` to allow inputs with multiple dimensions (LuxDL#40); sketch below
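A minimal sketch of the extended `Scale`, assuming the multi-argument constructor from the Lux docs:

```julia
using Lux, Random

rng = Random.default_rng()

# Weight and bias now have shape (4, 3) and broadcast over trailing batch dims.
layer = Scale(4, 3)
ps, st = Lux.setup(rng, layer)

x = rand(rng, Float32, 4, 3, 8)   # a (4, 3) input with batch size 8
y, _ = layer(x, ps, st)           # elementwise: ps.weight .* x .+ ps.bias
@assert size(y) == (4, 3, 8)
```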
- `SelectDim` is no longer type unstable -- internal storage for the layer has been changed
- `Dropout` & `VariationalDropout` return `NoOpLayer` if the probability of dropout is `0`
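Per this entry, a zero dropout probability collapses the layer at construction time:

```julia
using Lux

Dropout(0.0) isa NoOpLayer   # true per this entry: no mask, no RNG state needed
Dropout(0.5) isa Dropout     # a real dropout layer otherwise
```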
- Code Formatting -- SciMLStyle (LuxDL#31)
- Fix math rendering in docs
- Add Setfield compat for v1.0