Fix broken link in docs (#1969)
Signed-off-by: Huang, Tai <[email protected]>
thuang6 authored Aug 9, 2024
1 parent 385da7c commit de0fa21
Showing 4 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion docs/source/3x/PT_MixedPrecision.md
@@ -107,5 +107,5 @@ best_model = autotune(model=build_torch_model(), tune_config=custom_tune_config,

## Examples

-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch\cv\mixed_precision
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch/cv/mixed_precision
) on how to quantize a model with Mixed Precision.
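
For readers following the linked mixed-precision examples, here is a minimal sketch of the `autotune` flow this doc section describes. Only `model=` and `tune_config=` appear verbatim in the hunk above; `TuningConfig`, `MixedPrecisionConfig`, and `eval_fn` are assumed 3.x API names, not verified against a specific release.

```python
import torch

# Assumed 3.x PyTorch API names (TuningConfig, MixedPrecisionConfig, autotune, eval_fn);
# only model= and tune_config= are confirmed by the doc snippet in this hunk.
from neural_compressor.torch.quantization import MixedPrecisionConfig, TuningConfig, autotune

def build_torch_model():
    # Stand-in for the model builder used in the doc's snippet.
    return torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4))

def eval_fn(model):
    # Placeholder accuracy metric; autotune keeps the best-scoring candidate.
    with torch.no_grad():
        return float(model(torch.randn(8, 16)).sum())

# Try fp16 kernels where they keep accuracy, with fp32 as the fallback dtype.
custom_tune_config = TuningConfig(config_set=[MixedPrecisionConfig(dtype=["fp16", "fp32"])])
best_model = autotune(model=build_torch_model(), tune_config=custom_tune_config, eval_fn=eval_fn)
```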
2 changes: 1 addition & 1 deletion docs/source/3x/TF_Quant.md
@@ -13,7 +13,7 @@ TensorFlow Quantization

`neural_compressor.tensorflow` supports quantizing both TensorFlow and Keras models with or without accuracy aware tuning.

-For the detailed quantization fundamentals, please refer to the document for [Quantization](../quantization.md).
+For the detailed quantization fundamentals, please refer to the document for [Quantization](quantization.md).


## Get Started
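
For context, a self-contained sketch of the static post-training quantization flow that this file's Get Started section walks through. `StaticQuantConfig` and `quantize_model` follow the names used in the 3.x TensorFlow docs, but the exact signature and the calibration-loader shape are assumptions.

```python
import numpy as np
import tensorflow as tf

# Assumed 3.x API: StaticQuantConfig and quantize_model(model, quant_config, calib_dataloader).
from neural_compressor.tensorflow import StaticQuantConfig, quantize_model

# Tiny Keras model standing in for the user's fp32 model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

class CalibDataloader:
    # Minimal calibration loader: yields (input, label) batches and exposes batch_size.
    batch_size = 8
    def __iter__(self):
        for _ in range(4):
            yield np.random.rand(8, 28, 28, 1).astype(np.float32), np.zeros(8, dtype=np.int64)

quant_config = StaticQuantConfig()
q_model = quantize_model(model, quant_config, CalibDataloader())
```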
2 changes: 1 addition & 1 deletion docs/source/3x/TF_SQ.md
@@ -50,4 +50,4 @@ best_model = autotune(
## Examples

-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models\quantization\ptq\smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models/quantization/ptq/smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.
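
As a companion to the linked smooth-quant examples, a compact sketch of applying smooth quant ahead of static PTQ with `neural_compressor.tensorflow`. The list-of-configs argument and the `alpha` parameter are assumptions based on this doc's Get Started flow, not a verified reference.

```python
import numpy as np
import tensorflow as tf

# Assumed 3.x names: SmoothQuantConfig, StaticQuantConfig, quantize_model; passing the
# configs as a list is likewise an assumption drawn from this doc.
from neural_compressor.tensorflow import SmoothQuantConfig, StaticQuantConfig, quantize_model

# Tiny stand-in for an fp32 model whose activation outliers are worth smoothing.
fp32_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(8),
])

class CalibDataloader:
    # Minimal calibration loader yielding (input, label) batches.
    batch_size = 8
    def __iter__(self):
        for _ in range(4):
            yield np.random.rand(8, 64).astype(np.float32), np.zeros(8, dtype=np.int64)

# alpha balances how much outlier magnitude is migrated from activations into weights.
quant_config = [SmoothQuantConfig(alpha=0.5), StaticQuantConfig()]
q_model = quantize_model(fp32_model, quant_config, CalibDataloader())
```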
2 changes: 1 addition & 1 deletion docs/source/3x/quantization.md
@@ -396,7 +396,7 @@ For supported quantization methods for `accuracy aware tuning` and the detailed

Users can refer to the chart below to understand the whole tuning flow.

<img src="../source/imgs/accuracy_aware_tuning_flow.png" width=600 height=480 alt="accuracy aware tuning working flow">
<img src="./imgs/workflow.png" alt="accuracy aware tuning working flow">



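
The re-linked chart depicts the accuracy-aware tuning loop. As a library-agnostic illustration of that flow (plain Python, not neural-compressor's actual implementation; `quantize` and `evaluate` are user-supplied callables):

```python
# Library-agnostic sketch of the accuracy-aware tuning loop the chart depicts.
def accuracy_aware_tune(fp32_model, candidate_configs, quantize, evaluate, tolerable_loss=0.01):
    baseline = evaluate(fp32_model)                 # 1. measure the fp32 baseline accuracy
    best_model, best_acc = None, float("-inf")
    for config in candidate_configs:                # 2. walk candidates from the tuning strategy
        q_model = quantize(fp32_model, config)      # 3. quantize + calibrate with this config
        acc = evaluate(q_model)                     # 4. evaluate the quantized candidate
        if baseline - acc <= tolerable_loss:        # 5. accuracy criterion met: stop early
            return q_model
        if acc > best_acc:                          # otherwise remember the best so far
            best_model, best_acc = q_model, acc
    return best_model                               # fall back to the best candidate found
```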
