diff --git a/docs/source/quantization.md b/docs/source/quantization.md
index e55eb5aef3d..92994d044c5 100644
--- a/docs/source/quantization.md
+++ b/docs/source/quantization.md
@@ -524,7 +524,11 @@ Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow, ONNX Ru
-> Note: DmlExecutionProvider support works as experimental, please expect exceptions.
+> ***Note***
+>
+> DmlExecutionProvider support is experimental; exceptions may occur.
+>
+> Known limitation: with DmlExecutionProvider, the batch size of ONNX models must be fixed to 1; multi-batch and dynamic batch sizes are not yet supported.
Examples of configure:
```python