Add windows known issue with cuda
laggui committed Sep 17, 2024
1 parent 1ea27b9 commit 042ba6a
Showing 1 changed file with 13 additions and 0 deletions: llama-burn/README.md
@@ -130,3 +130,16 @@ instruction-tuned model based on the Llama2 architecture and tokenizer.
Depending on your hardware and the selected model, the `wgpu` backend may fail to run the model
due to the current memory management strategy. With `cuda` selected, the precision is set to `f32`
due to compilation errors with `f16`.

### Windows

The `cuda` backend is [unable to find nvrtc lib](https://github.com/coreylowman/cudarc/issues/246):

```
Unable to find nvrtc lib under the names ["nvrtc", "nvrtc64", "nvrtc64_12", "nvrtc64_123", "nvrtc64_123_0", "nvrtc64_120_3", "nvrtc64_10"]. Please open GitHub issue.
```

This has been fixed in the latest `cudarc` release, which is used by our `cuda-jit` backend and is
currently available [on main](https://github.com/tracel-ai/burn). To circumvent the issue, modify
the code to use the `cuda-jit` backend and point your Burn dependency at the latest revision
instead of `0.14.0`, as sketched below. This should also allow you to use `f16` precision, since
the compilation errors have been fixed on main.
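
A minimal sketch of the dependency change, assuming your project pulls `burn` from crates.io and
that the `cuda-jit` feature flag matches your backend selection (check the Burn repository for the
exact flags your setup needs):

```toml
# Cargo.toml -- point Burn at the main branch instead of the 0.14.0 crates.io release.
# The feature name below is an assumption; adjust it to match the backend you use.
[dependencies]
burn = { git = "https://github.com/tracel-ai/burn", branch = "main", features = ["cuda-jit"] }
```

Pinning to a git revision trades reproducibility for access to unreleased fixes; switch back to a
versioned release once the fix ships.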
