FBGEMM (Facebook GEneral Matrix Multiplication) is a low-precision, high-performance matrix-matrix multiplication and convolution library for server-side inference.
The library provides efficient low-precision general matrix multiplication for small batch sizes and support for accuracy-loss minimizing techniques such as row-wise quantization and outlier-aware quantization. FBGEMM also exploits fusion opportunities in order to overcome the unique challenges of matrix multiplication at lower precision with bandwidth-bound operations.
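To make the row-wise quantization idea mentioned above concrete, here is a minimal, stdlib-only Python sketch of the technique. This is not FBGEMM's API (FBGEMM is a C++ library); the function and variable names are illustrative. The point is that giving each row of a matrix its own scale and zero point keeps a large-magnitude (outlier) row from inflating the quantization error of every other row, which is what happens with a single per-tensor scale.

```python
# Illustrative sketch of row-wise quantization (not FBGEMM's API).
# Each row gets its own affine quantization parameters (scale, zero_point),
# so rows with very different dynamic ranges are quantized independently.

def quantize_row(row, num_bits=8):
    """Affine-quantize one row to unsigned integers with a per-row scale/zero point."""
    qmax = (1 << num_bits) - 1            # 255 for 8-bit unsigned
    lo, hi = min(row), max(row)
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # keep 0.0 exactly representable
    scale = (hi - lo) / qmax or 1.0       # guard against all-constant rows
    zero_point = round(-lo / scale)
    q = [min(qmax, max(0, round(x / scale) + zero_point)) for x in row]
    return q, scale, zero_point

def dequantize_row(q, scale, zero_point):
    """Map quantized integers back to approximate real values."""
    return [(v - zero_point) * scale for v in q]

matrix = [
    [0.1, -0.2, 0.3],       # small-magnitude row
    [120.0, -80.0, 45.0],   # large-magnitude row; its scale does not affect row 0
]
for row in matrix:
    q, s, z = quantize_row(row)
    restored = dequantize_row(q, s, z)
    # with per-row scales, each row's round-trip error stays within one step
    assert all(abs(a - b) <= s for a, b in zip(row, restored))
```

With a single shared scale, the first row above would be quantized with the second row's coarse step (~0.78), destroying its precision; per-row scales give each row a step matched to its own range.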
FBGEMM is used as a backend for Caffe2 and PyTorch quantized operators on x86 machines:
- Caffe2: https://github.com/pytorch/pytorch/tree/master/caffe2/quantization/server
- PyTorch: https://github.com/pytorch/pytorch/tree/master/aten/src/ATen/native/quantized/cpu
See the full Documentation for more information on building, installing, and developing with FBGEMM, as well as the most up-to-date support matrix and API documentation for this library.
- New Features and Recent Improvements (January 2020)
For a high-level overview, the design philosophy, and brief descriptions of the various parts of FBGEMM, please see our blog post.
For those looking for the appropriate article to cite regarding FBGEMM, we recommend citing our paper:
@article{fbgemm,
title={FBGEMM: Enabling High-Performance Low-Precision Deep Learning Inference},
author={Khudia, Daya and Huang, Jianyu and Basu, Protonu and Deng, Summer and Liu, Haixin and Park, Jongsoo and Smelyanskiy, Mikhail},
journal={arXiv preprint arXiv:2101.05615},
year={2021}
}
For questions, support, news updates, or feature requests, please feel free to:
- File a ticket in GitHub Issues
- Post a discussion in GitHub Discussions
- Reach out to us on the #fbgemm channel in PyTorch Slack
For contributions, please see the CONTRIBUTING file for ways to help out.
FBGEMM is BSD licensed, as found in the LICENSE file.