Update readme with new mixed precision support
sthalles committed Feb 11, 2021
1 parent 727cbae commit 1848fc9
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion README.md
@@ -28,7 +28,7 @@ $ python run.py -data ./datasets --dataset-name stl10 --log-every-n-steps 100 --

If you want to run it on CPU (for debugging purposes) use the ```--disable-cuda``` option.

- For 16-bit precision GPU training, make sure to install [NVIDIA apex](https://github.com/NVIDIA/apex) and use the ```--fp16_precision``` flag.
+ For 16-bit precision GPU training, there is **NO** need to install [NVIDIA apex](https://github.com/NVIDIA/apex). Just use the ```--fp16_precision``` flag and this implementation will use [PyTorch's built-in AMP training](https://pytorch.org/docs/stable/notes/amp_examples.html).

## Feature Evaluation

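The new README line above points to PyTorch's native AMP as the mechanism behind ```--fp16_precision```. For context, here is a minimal sketch of that pattern (`torch.cuda.amp.autocast` plus `GradScaler`) with a placeholder model and dummy data; it illustrates the API, not this repository's actual training loop.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data and model so the sketch is self-contained (placeholders, not the repo's code).
dataset = TensorDataset(torch.randn(256, 128), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()

# GradScaler handles loss scaling; enabled=False falls back to plain fp32 training.
scaler = torch.cuda.amp.GradScaler(enabled=True)

for images, labels in loader:
    images, labels = images.cuda(), labels.cuda()
    optimizer.zero_grad()

    # The forward pass runs in mixed precision inside autocast.
    with torch.cuda.amp.autocast(enabled=True):
        logits = model(images)
        loss = criterion(logits, labels)

    # Scale the loss to avoid fp16 gradient underflow, then unscale and step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```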
