diff --git a/CHANGELOG.md b/CHANGELOG.md
index a26a0e7ba..84333e867 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -117,3 +117,16 @@ Features:
 
 Bug fixes:
 - fixed an issue where too many threads were created in blockwise quantization on the CPU for large tensors
+
+
+### 0.35.0
+
+#### CUDA 11.8 support and bug fixes
+
+Features:
+ - CUDA 11.8 support added and binaries added to the PyPI release.
+
+Bug fixes:
+ - fixed a bug where too long directory names would crash the CUDA SETUP #35 (thank you @tomaarsen)
+ - fixed a bug where CPU installations on Colab would run into an error #34 (thank you @tomaarsen)
+ - fixed an issue where the default CUDA version with fast-DreamBooth was not supported #52
diff --git a/README.md b/README.md
index eac64a52d..7d35a8026 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,8 @@ Resources:
 - [LLM.int8() Paper](https://arxiv.org/abs/2208.07339) -- [LLM.int8() Software Blog Post](https://huggingface.co/blog/hf-bitsandbytes-integration) -- [LLM.int8() Emergent Features Blog Post](https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/)
 
 ## TL;DR
+**Requirements**
+Linux distribution (Ubuntu, MacOS, etc.) + CUDA >= 10.0. LLM.int8() requires Turing or Ampere GPUs.
 
 **Installation**:
 ``pip install bitsandbytes``
@@ -52,6 +54,8 @@ Hardware requirements:
 
 Supported CUDA versions: 10.2 - 11.7
 
+The bitsandbytes library is currently only supported on Linux distributions. Windows is not supported at the moment.
+
 The requirements can best be fulfilled by installing pytorch via anaconda. You can install PyTorch by following the ["Get Started"](https://pytorch.org/get-started/locally/) instructions on the official website.
 
 ## Using bitsandbytes
diff --git a/setup.py b/setup.py
index 610684b54..3f5dafd22 100644
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@ def read(fname):
 
 setup(
     name=f"bitsandbytes",
-    version=f"0.34.0",
+    version=f"0.35.0",
     author="Tim Dettmers",
     author_email="dettmers@cs.washington.edu",
     description="8-bit optimizers and matrix multiplication routines.",
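Hunks like the setup.py version bump above can be dry-run checked and applied with `git apply`. A minimal self-contained sketch follows; the file contents, patch, and `bump.patch` filename are stand-ins for illustration, not the real bitsandbytes sources:

```shell
# Sketch: verify and apply a version-bump hunk like the setup.py change above.
# All contents here are illustrative stand-ins, not the real repository files.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'setup(\n    version=f"0.34.0",\n)\n' > setup.py
cat > bump.patch <<'EOF'
--- a/setup.py
+++ b/setup.py
@@ -1,3 +1,3 @@
 setup(
-    version=f"0.34.0",
+    version=f"0.35.0",
 )
EOF
git apply --check bump.patch   # dry run: fails if the context lines do not match
git apply bump.patch           # rewrite setup.py in place
grep 'version=' setup.py       # shows the bumped version line
```

`git apply --check` is useful before committing a release bump like this one: it validates that every context line in the hunk still matches the working tree, without touching any files.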