FBGEMM


FBGEMM (Facebook GEneral Matrix Multiplication) is a low-precision, high-performance matrix-matrix multiplication and convolution library for server-side inference.

The library provides efficient low-precision general matrix multiplication for small batch sizes, along with support for accuracy-loss-minimizing techniques such as row-wise quantization and outlier-aware quantization. FBGEMM also exploits fusion opportunities to overcome the unique challenges of matrix multiplication at lower precision with bandwidth-bound operations.
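
To make row-wise quantization concrete, the following is a minimal sketch (not FBGEMM's actual implementation; the helper name quantizeRow is hypothetical) of deriving a per-row scale and zero point and quantizing one fp32 row to uint8, so the quantization error is bounded by each row's dynamic range rather than the whole matrix's:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical sketch of row-wise uint8 quantization: each row gets its
// own (scale, zero_point), computed from that row's min/max values.
struct RowQuantParams {
  float scale;
  std::int32_t zero_point;
};

inline RowQuantParams quantizeRow(const float* row, int n, std::uint8_t* out) {
  const auto [mn, mx] = std::minmax_element(row, row + n);
  const float lo = std::min(*mn, 0.0f); // keep 0.0f exactly representable
  const float hi = std::max(*mx, 0.0f);
  float scale = (hi - lo) / 255.0f;
  if (scale == 0.0f) scale = 1.0f;      // all-zero row: avoid division by zero
  const std::int32_t zp = static_cast<std::int32_t>(std::lround(-lo / scale));
  for (int j = 0; j < n; ++j) {
    const long q = std::lround(row[j] / scale) + zp;
    out[j] = static_cast<std::uint8_t>(std::clamp(q, 0L, 255L));
  }
  return {scale, zp};
}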

FBGEMM is used as a backend of Caffe2 and PyTorch quantized operators for x86 machines.

Build Instructions

Build with CMake

The general instructions for building with CMake are as follows:

# Clone the repo
git clone --recursive https://github.com/pytorch/FBGEMM.git
cd FBGEMM

# Pull down the submodules
git submodule sync
git submodule update --init --recursive

# Create a build directory
mkdir build
cd build

# Set up the build
cmake -DUSE_SANITIZER=address -DFBGEMM_LIBRARY_TYPE=shared -DPYTHON_EXECUTABLE=/usr/bin/python3 ..

# Run the build
make -j VERBOSE=1

# Run all tests
make test

# Install the package
make install
Build Issues with GCC 12

As of the time of writing, compilation of FBGEMM with GCC 12 fails due to a known compiler regression. To work around the issue, add the following exports before running CMake:

export CFLAGS+=" -Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict"
export CXXFLAGS+=" -Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict"

Please see GitHub issues 77939, 1094, and 1666 for more details.

Run Examples

The tests in the test/ directory and the benchmarks in the bench/ directory are good examples of how to use FBGEMM. For instance, the SpMDMTest test in test/PackedRequantizeAcc16Test.cc shows how to combine row-offset calculation with the packing of A (PackAWithRowOffset), how to pack the B matrix (PackBMatrix), and how to construct an output pipeline (sparse_matrix*dense_matrix --> requantization --> nop) that is fused with the inner GEMM macro kernel.
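
As a rough sketch of that pipeline (the constructor arguments below are simplified from the test code and may differ across FBGEMM versions, so treat this as an outline rather than a drop-in snippet):

#include "fbgemm/Fbgemm.h"

#include <cstdint>
#include <vector>

using namespace fbgemm;

// Sketch of the packed, fused int8 GEMM pipeline from the tests:
// pack A with row offsets, pack B, requantize the int32 accumulators
// to uint8, then nop. Single-threaded for brevity.
void int8GemmSketch(int m, int n, int k,
                    const std::uint8_t* A, const std::int8_t* B,
                    std::uint8_t* C) {
  std::vector<std::int32_t> C_buffer(m * n);    // 32-bit accumulation buffer
  std::vector<std::int32_t> col_offsets(n, 0);  // precomputed column offsets of B
  float C_multiplier = 0.1f;                    // requantization scale (example value)
  std::int32_t A_zero_point = 0, B_zero_point = 0, C_zero_point = 0;

  // Pack A and compute its row offsets in one pass.
  PackAWithRowOffset<std::uint8_t> packA(
      matrix_op_t::NoTranspose, m, k, A, /*ld=*/k);

  // Pack B into the blocked layout expected by the inner kernel.
  PackBMatrix<std::int8_t> packB(matrix_op_t::NoTranspose, k, n, B, /*ld=*/n);

  // Output pipeline: requantization --> nop.
  DoNothing<> doNothing{};
  ReQuantizeOutput<false> outputProc(
      doNothing, &C_multiplier, C_zero_point, A_zero_point, &B_zero_point,
      packA.getRowOffsetBuffer(), col_offsets.data(), /*bias=*/nullptr,
      /*nCol=*/n);

  // Run the GEMM fused with the output pipeline.
  fbgemmPacked(packA, packB, C, C_buffer.data(), /*ldc=*/n, outputProc,
               /*thread_id=*/0, /*num_threads=*/1);
}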

Dependencies

FBGEMM requires gcc 8+ and a CPU with support for the AVX2 instruction set or higher. It has been tested on Mac OS X and Linux.

asmjit

For its inner kernels, FBGEMM takes a “one size doesn't fit all” approach: the implementation dynamically generates efficient, matrix-shape-specific vectorized code at runtime using a third-party library called asmjit. asmjit is required to build FBGEMM.

cpuinfo

FBGEMM detects CPU instruction set support at runtime using the cpuinfo library and dispatches optimized kernels for the detected instruction set. cpuinfo is therefore required for CPU type detection.
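
As a small illustration, FBGEMM exposes the detection results it gets from cpuinfo through helper functions declared in fbgemm/Utils.h (a sketch; exact names and availability may vary by version):

#include "fbgemm/Utils.h"

#include <cstdio>

int main() {
  // FBGEMM queries cpuinfo at runtime and dispatches accordingly;
  // these helpers expose the same detection results to callers.
  if (fbgemm::fbgemmHasAvx512Support()) {
    std::puts("dispatching AVX-512 kernels");
  } else if (fbgemm::fbgemmHasAvx2Support()) {
    std::puts("dispatching AVX2 kernels");
  } else {
    std::puts("no AVX2 support: FBGEMM requires AVX2 or higher");
  }
  return 0;
}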

googletest

googletest is required to build and run FBGEMM's tests, but it is not needed if you don't want to run them. Building of tests is on by default; turn it off by setting FBGEMM_BUILD_TESTS to OFF.

You can download asmjit, cpuinfo, and googletest yourself and set ASMJIT_SRC_DIR, CPUINFO_SRC_DIR, and GOOGLETEST_SOURCE_DIR, respectively, so that CMake can find these libraries. If any of these variables is not set, CMake builds the corresponding git submodule found in the third_party directory.

FBGEMM, in general, has no dependency on Intel MKL. However, for performance comparison, some benchmarks use MKL functions. If MKL is found, or an MKL path is provided with INTEL_MKL_DIR, the benchmarks are built with MKL and performance numbers are reported for the MKL functions as well. If MKL is not found, the benchmarks are not built.

Documentation

For a high-level overview, design philosophy, and brief descriptions of various parts of FBGEMM, please see our blog post.

What's New?

API Docs

We make extensive use of comments in our source files; the best and most up-to-date documentation is available in the source files themselves.

You can also generate the documentation (using Doxygen and Sphinx) by setting the -DFBGEMM_BUILD_DOCS=ON flag when invoking CMake.

Citation

For those looking for the appropriate article to cite regarding FBGEMM, we recommend citing our paper:

@article{fbgemm,
  title={FBGEMM: Enabling High-Performance Low-Precision Deep Learning Inference},
  author={Khudia, Daya and Huang, Jianyu and Basu, Protonu and Deng, Summer and Liu, Haixin and Park, Jongsoo and Smelyanskiy, Mikhail},
  journal={arXiv preprint arXiv:2101.05615},
  year={2021}
}

Join the FBGEMM community

For questions, support, news updates, or feature requests, please feel free to reach out.

For contributions, please see the CONTRIBUTING file for ways to help out.

License

FBGEMM is BSD licensed, as found in the LICENSE file.
