# Releases · LukasHedegaard/continual-inference
## Fix state_buffer device after clean_state
### Fixed
- Ensure `state_buffer` remains on the same device after `clean_state`.
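A minimal sketch of the behaviour this release fixes; the module choice, shapes, and device handling are illustrative, not prescribed by the release.

```python
# Sketch only: module choice, shapes, and device handling are illustrative.
import torch
import continual as co

device = "cuda" if torch.cuda.is_available() else "cpu"
net = co.Conv1d(in_channels=2, out_channels=4, kernel_size=3).to(device)

# Step-wise inference accumulates internal state (state_buffer) on `device`.
for _ in range(5):
    net.forward_step(torch.randn(1, 2, device=device))

# After this release, resetting the state keeps the buffer on the same device,
# so subsequent steps do not fail with a device mismatch.
net.clean_state()
net.forward_step(torch.randn(1, 2, device=device))
```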
## Fix string argument in TransformerEncoderLayerFactory activation
### Fixed
- Option to use strings to specify the transformer activation.
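A hedged sketch of the fix: the activation can now be given as a string. Apart from `activation`, the keyword arguments below (`d_model`, `nhead`, `dim_feedforward`, `sequence_len`) are assumptions modeled on a standard transformer-encoder-layer interface.

```python
# Sketch only: all arguments except the string-valued `activation` are
# assumptions mirroring a standard transformer encoder layer.
import continual as co

layer_factory = co.TransformerEncoderLayerFactory(
    d_model=64,
    nhead=4,
    dim_feedforward=128,
    sequence_len=32,
    activation="gelu",  # previously this had to be a callable
)
```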
## Torch 2.0 compatibility
### Added
- `onnx` as a dev requirement.
### Changed
- Allow `torch>=2.0`.
## Skip module and leading residual
### Added
- `Skip` module.
- "leading" mode in `Residual`.
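A hedged sketch of the two additions. The `Skip` constructor argument and the keyword used to select the "leading" mode of `Residual` are assumptions; consult the module docstrings for the exact interface.

```python
# Sketch only: argument names are assumptions.
import continual as co

# Skip: assumed to drop a fixed number of steps from the continual stream.
skip = co.Skip(2)

# Residual with the new "leading" mode; the keyword carrying the mode
# ("residual_shrink" here) is an assumption.
res = co.Residual(
    co.Conv1d(4, 4, kernel_size=3, padding=1),
    residual_shrink="leading",
)
```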
## Doc fixes and compatibility updates
### Added
- Description of state handling to README.
### Fixed
- Documentation formatting for `co.Identity()` examples.
- Horovod check for newer PyTorch Lightning versions.
## Enhanced SingleOutputTransformerEncoderLayer and Residual fix
### Added
- `query_index` argument to `SingleOutputTransformerEncoderLayer` (sketch below).
### Fixed
- `Residual` centred residual and `Delay` auto_delay forward_step.
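A hedged sketch of the new `query_index` argument, which selects the position whose query drives the single output. All other constructor arguments are assumptions mirroring a standard transformer encoder layer.

```python
# Sketch only: apart from `query_index`, the arguments are assumptions.
import continual as co

layer = co.SingleOutputTransformerEncoderLayer(
    d_model=64,
    nhead=4,
    dim_feedforward=128,
    sequence_len=32,
    query_index=-1,  # which position's query produces the single output
)
```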
## Extended normalization support
### Added
- Support for `GroupNorm` and `InstanceNorm`.
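A hedged sketch of using the newly supported normalization layers inside a continual pipeline; whether `co.Sequential` wraps the `torch.nn` versions directly (as assumed here) or dedicated `co.*` counterparts are used is not spelled out in the release notes.

```python
# Sketch only: assumes co.Sequential accepts the torch.nn normalization
# layers that this release adds support for.
import torch.nn as nn
import continual as co

net = co.Sequential(
    co.Conv1d(4, 8, kernel_size=3),
    nn.GroupNorm(num_groups=2, num_channels=8),
    co.Conv1d(8, 8, kernel_size=3),
    nn.InstanceNorm1d(8),
)
```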
## Major documentation overhaul
### Added
- `append` function to `co.Sequential` (see the sketch after this list).
- Production-ready docstrings for public functions.
- `reduce_max` to `Reduce`.
### Changed
- Rename `Unity` to `Identity` to follow `torch.nn`.
- Major overhaul of README, improving descriptions and adding benchmark.
- Major overhaul of docs, improving descriptions and adding benchmark.
- MHA warnings to only log once.
### Removed
- Unused parameters `batch_first` and `bidirectional` for RNN, GRU, and LSTM.
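A brief, hedged sketch of the `Unity` to `Identity` rename and the new `append`; the layers are illustrative.

```python
# Sketch only: layers are illustrative.
import continual as co

seq = co.Sequential(
    co.Conv1d(4, 4, kernel_size=3),
    co.Identity(),  # formerly co.Unity()
)

# Extend the pipeline in place, mirroring torch.nn.Sequential.append.
seq.append(co.Conv1d(4, 4, kernel_size=3))
```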
## Fix Conditional onnx for single-option config
### Fixed
- `co.Conditional` ONNX support for single-option config.
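A hedged sketch of exporting a single-option `co.Conditional` (one branch, no else-module) to ONNX. The predicate signature and the use of plain `torch.onnx.export` are assumptions; the library may provide its own export helper.

```python
# Sketch only: predicate signature and export call are assumptions.
import torch
import continual as co

cond = co.Conditional(
    lambda module, x: x.shape[-1] > 1,  # assumed predicate signature
    co.Conv1d(4, 4, kernel_size=3),     # single option: no else-branch given
)

dummy = torch.randn(1, 4, 8)
torch.onnx.export(cond, (dummy,), "conditional.onnx", opset_version=14)
```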
## Add missing ONNX support for co.Conditional
### Fixed
- `co.Conditional` ONNX support.