Commit message: Project name change from "Intel Low Precision Inference Toolkit" to "Intel Low Precision Optimization Tool".
Showing 4 changed files with 15 additions and 15 deletions.
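A rename like this one is easy to leave half-done across a repository. As an illustration only (this helper is not part of the commit; the stale strings listed are taken from the old name and repository slug in the diff below), a short script can scan a checkout for leftover occurrences of the old project name:

```python
from pathlib import Path

# Strings that should no longer appear anywhere after the rename.
STALE_STRINGS = [
    "Low Precision Inference Toolkit",
    "lp-inference-kit",
]

def find_stale_references(root: Path) -> list[tuple[str, int, str]]:
    """Return (file path, line number, line text) for each leftover hit."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(s in line for s in STALE_STRINGS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

An empty result is the signal that the rename reached every tracked text file.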
````diff
@@ -1,7 +1,7 @@
-Intel® Low Precision Inference Toolkit (iLiT)
+Intel® Low Precision Optimization Tool (iLiT)
 =========================================
 
-Intel® Low Precision Inference Toolkit (iLiT) is an open-source python library which is intended to deliver a unified low-precision inference interface cross multiple Intel optimized DL frameworks on both CPU and GPU. It supports automatic accuracy-driven tuning strategies, along with additional objectives like performance, model size, or memory footprint. It also provides the easy extension capability for new backends, tuning strategies, metrics and objectives.
+Intel® Low Precision Optimization Tool (iLiT) is an open-source python library which is intended to deliver a unified low-precision inference interface cross multiple Intel optimized DL frameworks on both CPU and GPU. It supports automatic accuracy-driven tuning strategies, along with additional objectives like performance, model size, or memory footprint. It also provides the easy extension capability for new backends, tuning strategies, metrics and objectives.
 
 > **WARNING**
 >
@@ -27,8 +27,8 @@ comprehensive step-by-step instructions of how to enable iLiT on sample models.
 # Install from source
 
 ```Shell
-git clone https://github.com/intel/lp-inference-kit.git
-cd lp-inference-kit
+git clone https://github.com/intel/lp-opt-tool.git
+cd lp-opt-tool
 python setup.py install
 ```
 
@@ -84,7 +84,7 @@ The followings are the examples integrated with iLiT for auto tuning.
 # Support
 
 Please submit your questions, feature requests, and bug reports on the
-[GitHub issues](https://github.com/intel/lp-inference-kit/issues) page. You may also reach out to [email protected].
+[GitHub issues](https://github.com/intel/lp-opt-tool/issues) page. You may also reach out to [email protected].
 
 # Contributing
 
@@ -97,7 +97,7 @@ to improve the library:
 [code contribution guidelines](CONTRIBUTING.md#code_contribution_guidelines)
 and [coding style](CONTRIBUTING.md#coding_style).
 * Ensure that you can run all the examples with your patch.
-* Submit a [pull request](https://github.com/intel/lp-inference-kit/pulls).
+* Submit a [pull request](https://github.com/intel/lp-opt-tool/pulls).
 
 For additional details, see [contribution guidelines](CONTRIBUTING.md).
 
@@ -132,8 +132,8 @@ If you use iLiT in your research or wish to refer to the tuning results publishe
 ```
 @misc{iLiT,
   author = {Feng Tian, Chuanqi Wang, Guoming Zhang, Penghui Cheng, Pengxin Yuan, Haihao Shen, and Jiong Gong},
-  title = {Intel® Low Precision Inference Toolkit},
-  howpublished = {\url{https://github.com/intel/lp-inference-kit}},
+  title = {Intel® Low Precision Optimization Tool},
+  howpublished = {\url{https://github.com/intel/lp-opt-tool}},
   year = {2020}
 }
 ```
````
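The "15 additions and 15 deletions" stat above can be reproduced mechanically. As an illustration (this helper is not part of the commit), additions and deletions in a unified diff are simply the body lines starting with `+` or `-`, excluding the `+++`/`---` file headers:

```python
def diff_stat(diff_text: str) -> tuple[int, int]:
    """Count (additions, deletions) in unified-diff text."""
    added = deleted = 0
    for line in diff_text.splitlines():
        if line.startswith("+++") or line.startswith("---"):
            continue  # file headers, not content changes
        if line.startswith("+"):
            added += 1
        elif line.startswith("-"):
            deleted += 1
    return added, deleted
```

Run over this commit's four file diffs, the counts should sum to (15, 15).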
```diff
@@ -6,12 +6,12 @@
     version="1.0a0",
     author="Intel MLP/MLPC Team",
     author_email="[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]",
-    description="Repository of intel low precision inference toolkit",
+    description="Repository of Intel Low Precision Optimization Tool",
     long_description=open("README.md", "r", encoding='utf-8').read(),
     long_description_content_type="text/markdown",
     keywords='quantization, auto-tuning, post-training static quantization, post-training dynamic quantization, quantization-aware training, tuning strategy',
     license='',
-    url="https://github.intel.com/intel/lp-inference-kit",
+    url="https://github.com/intel/lp-opt-tool",
     packages = find_packages(),
     package_dir = {'':'.'},
     package_data={'': ['*.py', '*.yaml']},
```
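When project metadata such as `description` and `url` is duplicated between the README and the packaging config, drift after a rename is the usual failure mode. One way to guard against it (a hypothetical check, not from this repository; the expected values mirror the new fields in the diff above) is to assert the metadata agrees on the new name and repository:

```python
# Hypothetical post-rename consistency check; expected values are
# taken from the new description and url fields shown above.
EXPECTED_NAME = "Intel Low Precision Optimization Tool"
EXPECTED_URL = "https://github.com/intel/lp-opt-tool"

def check_metadata(metadata: dict[str, str]) -> list[str]:
    """Return human-readable problems; an empty list means consistent."""
    problems = []
    if EXPECTED_NAME not in metadata.get("description", ""):
        problems.append("description does not mention the new project name")
    if metadata.get("url") != EXPECTED_URL:
        problems.append(
            f"url is {metadata.get('url')!r}, expected {EXPECTED_URL!r}"
        )
    return problems
```

Such a check could run in CI so any file reintroducing the old name fails the build.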