> This repository has been archived by the owner on Dec 30, 2019. It is now read-only.

# Changelog


## 0.3.4 (2016-03-03)

### Bug Fixes

- **cuda/convolution:** work around zero-size memory allocation (e30b59de)

## 0.3.3 (2016-03-03)

### Features

- **cudnnv4:** passive support for cuDNN v4 (0dc46301)

## 0.3.2 (2016-03-02)

### Breaking Changes

- **convolution:** change convolution functions to require a workspace (f9d40136)

## 0.3.1 (2016-02-23)

### Features

- **pointwise:** add pointwise activation functions (cuDNN) (d74821b5)

## 0.3.0 (2016-02-22)

### Features

- **log_softmax:** add LogSoftmax operations (86a8ae67)
- **cuda:**
  - share workspace between CUDA convolution operations (7f5f3207)
  - allow CUDA activations to work with 1D/2D tensors (f4effe7d)
  - allow CUDA softmax to work with 1-3D tensors (f74f72b6)
- **nn_trait:** remove trait bounds for NN (9ad08d9f)
- **license:** change license to dual MIT/Apache-2.0 (8a940690)

### Breaking Changes

- **convolution:** implement convolutions correctly (24b164b5)

### Performance

- **convolution:** don't do a memAlloc for a zero-size workspace (73612bb5)

## 0.2.1 (2016-01-21)

### Features

- **native:** add support for softmax, with tests and benchmarks (14d6d1bc)

### Bug Fixes

- **native:** fix sigmoid_grad to use x_diff instead of x for dx (c25a32aa)

## 0.2.0 (2016-01-15)

### Features

- **bench:** add bench and perf utilities (0e2d34c6)
- **native:** implement Sigmoid, ReLU, and tanh for the Native backend (ece54e37)

## 0.1.0 (2015-12-21)

### Bug Fixes

- **scale_params:** fix ScalParams default to work on stable (43654dca)

### Features

- **activation:** add most popular NN activation functions (3311bb43)
- **features:** add framework feature groups (08629ea8)
- **nn:**
  - add all available cuDNN operations to collenchyma-nn (03384763)
  - add basic nn implementation structure (aa17ef0f)
- **sigmoid:**