
v1.16

@oobabooga released this 25 Oct 04:10
· 37 commits to main since this release
cc8c7ed

Backend updates

  • Transformers: bump to 4.46.
  • Accelerate: bump to 1.0.
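The pins above follow the usual prefix semantics: any patch release of Transformers 4.46 or Accelerate 1.0 satisfies them. A minimal sketch of that matching logic (illustrative only; the project's requirements files and pip handle this automatically):

```python
# Illustrative sketch of version-prefix matching, e.g. a "4.46" pin
# accepting "4.46.2" but rejecting "4.45.0". Not project code.
def matches_pin(installed: str, pin: str) -> bool:
    """True if an installed version (e.g. '4.46.2') satisfies a pin like '4.46'."""
    pin_parts = pin.split(".")
    return installed.split(".")[: len(pin_parts)] == pin_parts

print(matches_pin("4.46.2", "4.46"))  # a Transformers 4.46.x patch release
print(matches_pin("1.0.1", "1.0"))    # an Accelerate 1.0.x patch release
```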

Changes

  • Add support for Whisper Turbo (#6423). Thanks @SeanScripts.
  • Add RWKV-World instruction template (#6456). Thanks @MollySophia.
  • Minor documentation update: query CUDA compute capability for the Docker .env file (#6469). Thanks @practical-dreamer.
  • Remove lm_eval and optimum from the requirements (they no longer appear to be necessary).

Bug fixes

  • Fix the llama.cpp loader not being random. Thanks @reydeljuego12345.
  • Fix temperature_last when temperature is not in the sampler priority list (#6439). Thanks @ThisIsPIRI.
  • Make token bans work again on HF loaders (#6488). Thanks @ThisIsPIRI.
  • Fix startup on systems that have bash in a non-standard directory (#6428). Thanks @LuNeder.
  • Fix the Intel bug described in #6253 (#6433). Thanks @schorschie.
  • Fix locally compiled llama-cpp-python failing to import.