Merge pull request #2466 from oneapi-src/release/2024.2_AITools
README updates for 2024.2 AI Tools release
jimmytwei authored Aug 22, 2024
2 parents 495ff2b + 0d8ec9a commit 7b6c5c0
Showing 2 changed files with 87 additions and 16 deletions.

The oneAPI Collective Communications Library Bindings for PyTorch* (oneCCL Bindings for PyTorch*) holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library (oneCCL).

| Property | Description
|:--- |:---
| Category | Getting Started
| What you will learn | How to get started with oneCCL Bindings for PyTorch*
| Time to complete | 60 minutes

The Jupyter Notebook also demonstrates how to change PyTorch* distributed workloads from CPU to GPU. For more information, see:
>- [Intel® oneCCL Bindings for PyTorch*](https://github.com/intel/torch-ccl)
>- [Distributed Training with oneCCL in PyTorch*](https://github.com/intel/optimized-models/tree/master/pytorch/distributed)
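
To give a concrete sense of what the sample exercises, below is a minimal sketch of a PyTorch* distributed allreduce using the oneCCL backend. It is not taken from the notebook; the single-process environment defaults are illustrative assumptions, and real runs typically get `RANK` and `WORLD_SIZE` from a launcher such as `mpirun`.

```
# Minimal sketch: initialize torch.distributed with the oneCCL ("ccl") backend
# provided by oneCCL Bindings for PyTorch*. The env defaults below are
# illustrative single-process assumptions; launchers normally set them.
import os

import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  # importing registers the "ccl" backend

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")

dist.init_process_group(backend="ccl")

# Each rank contributes a tensor; allreduce sums the tensors across all ranks.
rank = dist.get_rank()
payload = torch.ones(2) * (rank + 1)
dist.all_reduce(payload, op=dist.ReduceOp.SUM)
print(f"rank {rank}/{dist.get_world_size()}: {payload}")

dist.destroy_process_group()
```
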
## Environment Setup
You will need to download and install the following toolkits, tools, and components to use the sample.

**1. Get AI Tools**

Required AI Tools: Intel® Extension for PyTorch* (CPU or GPU)

If you have not already done so, select and install these tools via the [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on the AI Tools Offline Installer, so selecting the Offline Installer option in the AI Tools Selector is recommended.

>**Note**: If the Docker option is chosen in the AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker container and the samples.
**2. (Offline Installer) Activate the AI Tools bundle base environment**

If the default path is used during the installation of AI Tools:
```
source $HOME/intel/oneapi/intelpython/bin/activate
```
If a non-default path is used:
```
source <custom_path>/bin/activate
```

**3. (Offline Installer) Activate the relevant Conda environment**

For CPU
```
conda activate pytorch
```
For GPU
```
conda activate pytorch-gpu
```

**4. Clone the GitHub repository**

```
git clone https://github.com/oneapi-src/oneAPI-samples.git
cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/Intel_oneCCL_Bindings_For_PyTorch_GettingStarted
```

**5. Install dependencies**

>**Note**: Before running the following commands, make sure your Conda/Python environment with AI Tools installed is activated.
```
pip install -r requirements.txt
pip install notebook
```
For Jupyter Notebook, refer to [Installing Jupyter](https://jupyter.org/install) for detailed installation instructions.
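
As an optional sanity check (assuming the bindings were installed into the active environment), you can confirm the imports resolve before launching Jupyter, for example with a short Python snippet like the one below.

```
# Optional sanity check: confirm PyTorch* and the oneCCL bindings import cleanly.
import torch
import oneccl_bindings_for_pytorch

print("torch", torch.__version__)
print("oneccl_bindings_for_pytorch imported OK")
```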

## Run the Sample
>**Note**: Before running the sample, make sure [Environment Setup](#environment-setup) is completed.
Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
* [AI Tools Offline Installer (Validated)](#ai-tools-offline-installer-validated)
* [Docker](#docker)

### AI Tools Offline Installer (Validated)

**1. Register the Conda environment as a Jupyter Notebook kernel**

**For CPU**

If the default path is used during the installation of AI Tools:

```
$HOME/intel/oneapi/intelpython/envs/pytorch/bin/python -m ipykernel install --user --name=pytorch
```

If a non-default path is used:
```
<custom_path>/bin/python -m ipykernel install --user --name=pytorch
```

**For GPU**

If the default path is used during the installation of AI Tools:

```
$HOME/intel/oneapi/intelpython/envs/pytorch-gpu/bin/python -m ipykernel install --user --name=pytorch-gpu
```

If a non-default path is used:
```
<custom_path>/bin/python -m ipykernel install --user --name=pytorch-gpu
```
**2. Launch Jupyter Notebook.**
```
jupyter notebook --ip=0.0.0.0 --port 8888 --allow-root
```
**3. Follow the instructions to open the URL with the token in your browser.**

**4. Select the Notebook.**
```
oneCCL_Bindings_GettingStarted.ipynb
```

**5. Change kernel to ``pytorch`` or ``pytorch-gpu``.**

**6. Run every cell in the Notebook in sequence.**
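
For reference, the sketch below shows roughly what the GPU path looks like conceptually: the same allreduce with tensors placed on an Intel GPU (XPU) device. It assumes the `pytorch-gpu` environment with Intel® Extension for PyTorch* XPU support and that the launcher provides the usual `MASTER_ADDR`, `MASTER_PORT`, `RANK`, and `WORLD_SIZE` variables; it is not the notebook's code.

```
# Rough sketch of the GPU (XPU) variant. Assumes Intel® Extension for PyTorch*
# with XPU support and oneCCL Bindings for PyTorch* in the active environment,
# and that MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE are set by the launcher.
import torch
import intel_extension_for_pytorch  # noqa: F401  # enables the torch.xpu device
import oneccl_bindings_for_pytorch  # noqa: F401  # registers the "ccl" backend
import torch.distributed as dist

dist.init_process_group(backend="ccl")
rank = dist.get_rank()

# Map each rank to an XPU device (illustrative round-robin assignment).
device = torch.device(f"xpu:{rank % torch.xpu.device_count()}")
payload = torch.ones(2, device=device) * (rank + 1)
dist.all_reduce(payload, op=dist.ReduceOp.SUM)
print(f"rank {rank}: {payload.cpu()}")

dist.destroy_process_group()
```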

### Docker
AI Tools Docker images already have Get Started samples pre-installed. Refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker container and the samples.
Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).

*Other names and brands may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html)
2 changes: 1 addition & 1 deletion AI-and-Analytics/Getting-Started-Samples/README.md
|Classical Machine Learning| Intel® Optimization for XGBoost* | [IntelPython_XGBoost_GettingStarted](IntelPython_XGBoost_GettingStarted) | Sets up and trains an XGBoost* model on datasets for prediction.
|Classical Machine Learning| daal4py | [IntelPython_daal4py_GettingStarted](IntelPython_daal4py_GettingStarted) | Batch linear regression using the Python API package daal4py from oneAPI Data Analytics Library (oneDAL).
|Deep Learning <br/> Inference Optimization| Intel® Optimization for TensorFlow* | [IntelTensorFlow_GettingStarted](IntelTensorFlow_GettingStarted) | A simple training example for TensorFlow.
|Deep Learning <br/> Inference Optimization| Intel® Extension for PyTorch* | [IntelPyTorch_GettingStarted](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/IPEX_Getting_Started.ipynb) | A simple training example for Intel® Extension for PyTorch*.
|Classical Machine Learning| Scikit-learn (OneDAL) | [Intel_Extension_For_SKLearn_GettingStarted](Intel_Extension_For_SKLearn_GettingStarted) | Speed up a scikit-learn application using Intel oneDAL.
|Deep Learning <br/> Inference Optimization| Intel® Extension for TensorFlow* | [Intel® Extension For TensorFlow GettingStarted](Intel_Extension_For_TensorFlow_GettingStarted) | Shows how to run a TensorFlow inference workload on both GPU and CPU.
|Deep Learning Inference Optimization|oneCCL Bindings for PyTorch | [Intel oneCCL Bindings For PyTorch GettingStarted](Intel_oneCCL_Bindings_For_PyTorch_GettingStarted) | Guides users through the process of running a simple PyTorch* distributed workload on both GPU and CPU. |
