Update publications and documentations (#1056)
Signed-off-by: chensuyue <[email protected]>
chensuyue authored Jul 3, 2023
1 parent ff9ed45 commit 20f9704
Showing 4 changed files with 35 additions and 11 deletions.
11 changes: 11 additions & 0 deletions .azure-pipelines/scripts/codeScan/pyspelling/inc_dict.txt
@@ -2665,3 +2665,14 @@ boolean
deepcopy
optype
perchannel
LokuUdeVg
Ntsk
PLg
UKERBljNxC
YADWOFuj
cloudblogs
dmjx
fdb
jJA
wWLes
xHKe
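The new dictionary entries above are not English words: they are case- and digit-split fragments of the YouTube video and playlist IDs (e.g. `5xHKe4wWLes` yields `xHKe` and `wWLes`; `PLg-UKERBljNxC8dmjx7jJA2YADWOFuj_p` yields `PLg`, `UKERBljNxC`, `dmjx`, `jJA`, and `YADWOFuj`) and hostnames (`cloudblogs`) introduced by the links added elsewhere in this commit, which the pyspelling code scan would otherwise flag. A minimal sketch of how such an allowlist suppresses findings (an illustration only, not pyspelling's actual implementation):

```python
# Sketch of allowlist filtering as done by a spell-check dictionary such as
# inc_dict.txt: a flagged token is reported only if it is absent from the list.
# (Assumption: illustrative helper, not pyspelling's real API.)

def filter_findings(flagged_tokens, wordlist_path):
    """Return only the flagged tokens that do not appear in the allowlist file."""
    with open(wordlist_path) as f:
        allowed = {line.strip() for line in f if line.strip()}
    return [token for token in flagged_tokens if token not in allowed]
```

Adding the URL fragments to `inc_dict.txt` keeps the CI code scan passing without weakening spell checking for ordinary prose.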
12 changes: 8 additions & 4 deletions README.md
@@ -137,14 +137,18 @@ q_model = fit(
</tbody>
</table>

> More documentation can be found in the [User Guide](./docs/source/user_guide.md).
## Selected Publications/Events
* Blog on Medium: [Intel Optimization at Netflix](https://medium.com/@amerather_9719/intel-optimization-at-netflix-79ef0efb9d2) (May 2023)
* Blog on Medium: [Effective Post-training Quantization for Large Language Models with Enhanced SmoothQuant Approach](https://medium.com/@NeuralCompressor/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98) (Apr 2023)
* Blog by Intel: [Intel® Xeon® Processors Are Still the Only CPU With MLPerf Results, Raising the Bar By 5x](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-Xeon-Processors-Are-Still-the-Only-CPU-With-MLPerf-Results/post/1472750) (Apr 2023)
* Blog by MSFT: [Olive: A user-friendly toolchain for hardware-aware model optimization](https://cloudblogs.microsoft.com/opensource/2023/06/26/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization/) (June 2023)
* Blog by MSFT: [Automate optimization techniques for transformer models](https://cloudblogs.microsoft.com/opensource/2023/06/26/automate-optimization-techniques-for-transformer-models/) (June 2023)
* Post on Social Media: [Get Started Post-Training Dynamic Quantization | AI Model Optimization with Intel® Neural Compressor](https://www.youtube.com/watch?v=5xHKe4wWLes&list=PLg-UKERBljNxC8dmjx7jJA2YADWOFuj_p&index=4) (June 2023)
* Post on Social Media: [How to Choose AI Model Quantization Techniques | AI Model Optimization with Intel® Neural Compressor](https://www.youtube.com/watch?v=ie3w_j0Ntsk) (June 2023)
* Post on Social Media: [What is AI Model Optimization | AI Model Optimization with Intel® Neural Compressor | Intel Software](https://www.youtube.com/watch?v=m2LokuUdeVg&list=PLg-UKERBljNxC8dmjx7jJA2YADWOFuj_p&index=2) (June 2023)
* NeurIPS'2022: [Fast Distilbert on CPUs](https://arxiv.org/abs/2211.07715) (Oct 2022)
* NeurIPS'2022: [QuaLA-MiniLM: a Quantized Length Adaptive MiniLM](https://arxiv.org/abs/2210.17114) (Oct 2022)

> View our [Full Publication List](./docs/source/publication_list.md).
> View [Full Publication List](./docs/source/publication_list.md).
## Additional Content

10 changes: 8 additions & 2 deletions docs/source/publication_list.md
@@ -1,6 +1,12 @@
Full Publications/Events (60)
Full Publications/Events (66)
==========
## 2023 (7)
## 2023 (13)
* Blog by MSFT: [Olive: A user-friendly toolchain for hardware-aware model optimization](https://cloudblogs.microsoft.com/opensource/2023/06/26/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization/) (June 2023)
* Blog by MSFT: [Automate optimization techniques for transformer models](https://cloudblogs.microsoft.com/opensource/2023/06/26/automate-optimization-techniques-for-transformer-models/) (June 2023)
* Post on Social Media: [Get Started Post-Training Dynamic Quantization | AI Model Optimization with Intel® Neural Compressor](https://www.youtube.com/watch?v=5xHKe4wWLes&list=PLg-UKERBljNxC8dmjx7jJA2YADWOFuj_p&index=4) (June 2023)
* Post on Social Media: [How to Choose AI Model Quantization Techniques | AI Model Optimization with Intel® Neural Compressor](https://www.youtube.com/watch?v=ie3w_j0Ntsk) (June 2023)
* Post on Social Media: [What is AI Model Optimization | AI Model Optimization with Intel® Neural Compressor | Intel Software](https://www.youtube.com/watch?v=m2LokuUdeVg&list=PLg-UKERBljNxC8dmjx7jJA2YADWOFuj_p&index=2) (June 2023)
* Blog on Medium: [Streamlining Model Optimization as a Service with Intel Neural Compressor](https://medium.com/intel-analytics-software/streamlining-model-optimization-as-a-service-with-intel-neural-compressor-fd970fdb2928) (June 2023)
* Blog on Medium: [Intel Optimization at Netflix](https://medium.com/@amerather_9719/intel-optimization-at-netflix-79ef0efb9d2) (May 2023)
* Blog on Medium: [Effective Post-training Quantization for Large Language Models with Enhanced SmoothQuant Approach](https://medium.com/@NeuralCompressor/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98) (Apr 2023)
* Blog by Intel: [Intel® Xeon® Processors Are Still the Only CPU With MLPerf Results, Raising the Bar By 5x](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-Xeon-Processors-Are-Still-the-Only-CPU-With-MLPerf-Results/post/1472750) (Apr 2023)
13 changes: 8 additions & 5 deletions docs/source/user_guide.md
@@ -72,14 +72,17 @@ This part provides the advanced topics that help user dive deep into Intel® Neu
<tr>
<td colspan="2" align="center"><a href="adaptor.md">Adaptor</a></td>
<td colspan="2" align="center"><a href="tuning_strategies.md">Strategy</a></td>
<td colspan="3" align="center"><a href="objective.md">Objective</a></td>
<td colspan="3" align="center"><a href="distillation_quantization.md">Distillation for Quantization</a></td>
<td colspan="2" align="center"><a href="objective.md">Objective</a></td>
</tr>
<tr>
<td colspan="2" align="center"><a href="smooth_quant.md">SmoothQuant</a></td>
<td colspan="2" align="center"><a href="diagnosis.md">Diagnosis</a></td>
<td colspan="3" align="center"><a href="add_new_data_type.md">Add New Data Type</a></td>
<td colspan="3" align="center"><a href="add_new_adaptor.md">Add New Adaptor</a></td>
<td colspan="2" align="center"><a href="add_new_data_type.md">Add New Data Type</a></td>
<td colspan="2" align="center"><a href="add_new_adaptor.md">Add New Adaptor</a></td>
</tr>
<tr>
<td colspan="2" align="center"><a href="distillation_quantization.md">Distillation for Quantization</a></td>
<td colspan="2" align="center"><a href="smooth_quant.md">SmoothQuant</a></td>
<td colspan="2" align="center"><a href="quantization_weight_only.md">Weight-Only Quantization</a></td>
</tr>
</tbody>
</table>
