 cuML - RAPIDS Machine Learning Algorithms

Machine learning is a fundamental capability of RAPIDS. cuML is a suite of libraries that implements machine learning algorithms within the RAPIDS data science ecosystem. cuML enables data scientists, researchers, and software engineers to run traditional ML tasks on GPUs without diving into the details of CUDA programming.

NOTE: For the latest stable README.md ensure you are on the master branch.

The cuML repository contains:

  1. python: a Python-based GPU DataFrame (GDF) machine learning package that takes cuDF dataframes as input. cuML connects the data to the C++/CUDA-based cuML and ml-prims libraries without the data ever leaving GPU memory.

  2. cuML: C++/CUDA machine learning algorithms. This library currently includes the following six algorithms: a) Single-GPU Truncated Singular Value Decomposition (tSVD), b) Single-GPU Principal Component Analysis (PCA), c) Single-GPU Density-Based Spatial Clustering of Applications with Noise (DBSCAN), d) Single-GPU Kalman Filtering, e) Multi-GPU K-Means Clustering, f) Multi-GPU K-Nearest Neighbors (uses Faiss).

  3. ml-prims: low-level machine learning primitives used in cuML. ml-prims comprises the following components: a) Linear Algebra, b) Statistics, c) Basic Matrix Operations, d) Distance Functions, e) Random Number Generation.
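The kind of computation these primitives cover can be illustrated in plain NumPy (a sketch of the math only; the actual ml-prims code is C++/CUDA and this function name is hypothetical). For example, a pairwise Euclidean distance primitive:

```python
import numpy as np

def pairwise_euclidean(X, Y):
    """Pairwise Euclidean distances between rows of X and rows of Y.

    Uses the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, the
    usual trick in GPU distance primitives, so the bulk of the work
    becomes a single matrix multiply.
    """
    sq_x = np.sum(X * X, axis=1)[:, None]   # shape (n, 1)
    sq_y = np.sum(Y * Y, axis=1)[None, :]   # shape (1, m)
    sq = sq_x + sq_y - 2.0 * (X @ Y.T)      # squared distances, (n, m)
    return np.sqrt(np.maximum(sq, 0.0))     # clamp tiny negatives from rounding
```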

Available Algorithms:

  • Truncated Singular Value Decomposition (tSVD)

  • Principal Component Analysis (PCA)

  • Density-Based Spatial Clustering of Applications with Noise (DBSCAN)

  • K-Means Clustering

  • K-Nearest Neighbors (requires a Faiss installation)

  • Linear Regression (Ordinary Least Squares)

  • Ridge Regression

  • Kalman Filter
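The relationship between the first two algorithms above is worth noting: PCA is a truncated SVD applied to mean-centered data. A minimal NumPy sketch of that math (illustrative only; cuML's own implementation is C++/CUDA, and this function name is hypothetical):

```python
import numpy as np

def pca_via_tsvd(X, n_components):
    """Project X onto its top principal components via a truncated SVD."""
    Xc = X - X.mean(axis=0)                 # PCA = tSVD on centered data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]          # principal axes, (k, d)
    scores = Xc @ components.T              # projected data, (n, k)
    explained_variance = (S[:n_components] ** 2) / (len(X) - 1)
    return scores, components, explained_variance
```

The variance of each score column equals the corresponding explained variance, which is a handy sanity check on any PCA implementation.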

Upcoming algorithms:

  • More Kalman Filter versions

  • Lasso

  • Elastic-Net

  • Logistic Regression

  • UMAP

More ML algorithms in cuML and more ML primitives in ml-prims are currently being added. Example notebooks are provided in the python folder to test functionality and performance. Goals for future versions include more algorithms and multi-GPU versions of the algorithms and primitives.

cuML can be installed via conda or pip, or built from source, as described below. Docker containers will be available in the coming weeks.

Setup

Conda

cuML can be installed using the rapidsai conda channel:

conda install -c nvidia -c rapidsai -c conda-forge -c pytorch -c defaults cuml

Pip

cuML can also be installed using pip. Select the package based on your version of CUDA:

# cuda 9.2
pip install cuml-cuda92

# cuda 10.0
pip install cuml-cuda100

You also need to ensure libomp and libopenblas are installed:

apt install libopenblas-base libomp-dev

Note: There is no faiss-gpu package installable by pip, so the KNN algorithm will not work unless you install Faiss manually or via conda (see below).
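For reference, the computation KNN performs can be sketched in a few lines of NumPy (brute-force search; Faiss and cuML exist precisely to make this fast on large data, and this function name is hypothetical):

```python
import numpy as np

def knn(X, queries, k):
    """Return indices and distances of the k nearest rows of X to each query."""
    # Squared Euclidean distance from every query to every point, (q, n)
    d2 = ((queries[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    idx = np.argsort(d2, axis=1)[:, :k]                  # k smallest per query
    dist = np.sqrt(np.take_along_axis(d2, idx, axis=1))  # matching distances
    return idx, dist
```

This materializes the full distance matrix, so it is only viable for small datasets; approximate or GPU-accelerated indexes replace it at scale.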

Dependencies for Installing/Building from Source:

To install cuML from source, ensure the dependencies are met:

  1. cuDF (>=0.5.0)
  2. zlib (provided by the zlib1g-dev package on Ubuntu 16.04)
  3. cmake (>= 3.12.4)
  4. CUDA (>= 9.2)
  5. Cython (>= 0.29)
  6. gcc (>=5.4.0)
  7. BLAS - any BLAS implementation compatible with CMake's FindBLAS
For the KNN algorithm, Faiss can be installed from the pytorch conda channel, matching your CUDA version:

# cuda 9.2
conda install -c pytorch faiss-gpu cuda92

# cuda 10.0
conda install -c pytorch faiss-gpu cuda100

Installing from Source:

Once dependencies are present, follow the steps below:

  1. Clone the repository.
$ git clone --recurse-submodules https://github.com/rapidsai/cuml.git
  2. Build and install libcuml (the C++/CUDA library containing the cuML algorithms), starting from the repository root folder:
$ cd cuML
$ mkdir build
$ cd build
$ cmake ..

If using a conda environment (recommended currently), then cmake can be configured appropriately via:

$ cmake .. -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX

Note: the following warning message may appear depending on the version of cmake and the CMAKE_INSTALL_PREFIX used. If it does, the build should still run successfully; we are currently working to resolve this open issue. You can silence the warning by adding -DCMAKE_IGNORE_PATH=$CONDA_PREFIX/lib to your cmake command.

Cannot generate a safe runtime search path for target ml_test because files
in some directories may conflict with libraries in implicit directories:

The configuration script will print the BLAS found on the search path. If the version found does not match the version intended, use the flag -DBLAS_LIBRARIES=/path/to/blas.so with the cmake command to force your own version.

  3. Build libcuml:
$ make -j
$ make install

To run tests (optional):

$ ./ml_test

If you want a list of the available tests:

$ ./ml_test --gtest_list_tests
  4. Build the cuML Python package:
$ cd ../../python
$ python setup.py build_ext --inplace

To run Python tests (optional):

$ py.test -v

If you want a list of the available tests:

$ py.test cuML/test --collect-only
  5. Finally, install the Python package to your Python path:
$ python setup.py install

Python Notebooks

Demo notebooks for the cuML Python algorithms can be found in the rapidsai/notebooks repository on GitHub.

External

The external folder contains submodules that this project in turn depends on. Appropriate location flags for these are populated automatically in the main CMakeLists.txt file.

Current external submodules are:

Contributing

Please use issues and pull requests to report bugs and add functionality.

Contact

Find more details on the RAPIDS site.

Open GPU Data Science

The RAPIDS suite of open source software libraries aims to enable execution of end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, while exposing GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
