Examples || Docs || PyPI || Docker Images || Beaker Images || License || Changelog
First install PyTorch according to the instructions specific to your operating system and hardware. Then you can install from PyPI with:
pip install ai2-olmo-core
A number of optional dependencies must also be installed to use certain functionality, including:
- flash-attn for flash attention and certain other fused operations.
- torchao for float8 training.
- megablocks for mixture-of-experts (MoE) models.
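As a rough sketch, the optional dependencies above can be installed with pip. Note that `flash-attn` compiles CUDA extensions against your installed PyTorch, so it commonly needs `--no-build-isolation`; exact versions and build flags depend on your environment:

```shell
# Sketch: install the optional dependencies after PyTorch is set up.
# flash-attn builds CUDA kernels, so it typically needs --no-build-isolation.
pip install flash-attn --no-build-isolation
# float8 training support
pip install torchao
# mixture-of-experts (MoE) support
pip install megablocks
```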
The published Docker images contain all core and optional dependencies, and are regularly tested on our in-house H100 clusters. But there are several things to keep in mind if you intend to use these images:
- They do not come with the OLMo-core package installed, only its dependencies, to accommodate regular code changes.
- They may not work on your own cluster if you have different hardware or driver/CUDA versions.
If the published images do not work for your use case for any of the above reasons, you can adapt our Dockerfile to build your own images.
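For example, building your own image from an adapted Dockerfile might look like this (the tag is a hypothetical example, not an official image name; add `-f` if the Dockerfile lives somewhere other than the build context root):

```shell
# Build a custom image from an adapted Dockerfile in your OLMo-core checkout.
# "my-olmo-core" is an example tag for illustration only.
docker build -t my-olmo-core:latest .
```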
Even though this library is under rapid development, we try hard to adhere to Semantic Versioning with every release, except for features that are explicitly marked as beta. Those features will be tagged like this in the API docs:
Official training scripts for various model sizes can be found in src/scripts/train/. To see the exact usage for each script, run it without any arguments.
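For instance, invoking a script with no arguments prints its usage (script name taken from the throughput table below; path per the repo layout):

```shell
# Running a training script without arguments prints its usage/help text.
python src/scripts/train/OLMo2-7B.py
```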
Throughput numbers from these scripts with various configuration settings are reported below, measured on a cluster of NVIDIA H100 GPUs.
| Model size | Model arch. | Context length | Precision | Throughput¹ | Training script | Commandline overrides |
|---|---|---|---|---|---|---|
| 1B | OLMo-1124 | 4096 | BF16 | 55,000 TPS | OLMo2-1B.py | |
| | | 4096 | BF16/FP8² | 65,000 TPS | OLMo2-1B.py | `--model.float8_config.enabled=true` |
| 7B | OLMo-1124 | 4096 | BF16 | 10,000 TPS | OLMo2-7B.py | |
| | | 4096 | BF16/FP8 | 13,000 TPS | OLMo2-7B.py | `--model.float8_config.enabled=true` |
| 8B | Llama | 4096 | BF16 | 9,500 TPS | Llama3-8B.py | |
| | | 4096 | BF16/FP8 | 12,500 TPS | Llama3-8B.py | `--model.float8_config.enabled=true` |
| 13B | OLMo-1124 | 4096 | BF16 | 4,600 TPS | OLMo2-13B.py | |
| | | 4096 | BF16/FP8 | 5,500 TPS | OLMo2-13B.py | `--model.float8_config.enabled=true` |
After cloning OLMo-core and setting up a Python virtual environment, install the codebase from source with:
pip install -e .[all]
The Python library source code is located in src/olmo_core. The corresponding tests are located in src/test. The library docs are located in docs. You can build the docs locally with `make docs`.
Code checks:
- We use `pytest` to run tests. You can run all tests with `pytest -v src/test`. You can also point `pytest` at a specific test file to run it individually.
- We use `isort` and `black` for code formatting. Ideally you should integrate these into your editor, but you can also run them manually or configure them with a pre-commit hook. To validate that all files are formatted correctly, run `make style-check`.
- We use `ruff` as our primary linter. You can run it with `make lint-check`.
- We use `mypy` as our type checker. You can run it with `make type-check`.
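Putting the checks above together, a local pre-push routine might look like this (Makefile targets as described above; run from the repo root):

```shell
# Run the formatting, lint, and type checks, then the test suite.
make style-check
make lint-check
make type-check
pytest -v src/test
```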
@article{OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Dirk Groeneveld and Iz Beltagy and Pete Walsh and Akshita Bhagia and Rodney Kinney and Oyvind Tafjord and A. Jha and Hamish Ivison and Ian Magnusson and Yizhong Wang and Shane Arora and David Atkinson and Russell Authur and Khyathi Raghavi Chandu and Arman Cohan and Jennifer Dumas and Yanai Elazar and Yuling Gu and Jack Hessel and Tushar Khot and William Merrill and Jacob Daniel Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and Valentina Pyatkin and Abhilasha Ravichander and Dustin Schwenk and Saurabh Shah and Will Smith and Emma Strubell and Nishant Subramani and Mitchell Wortsman and Pradeep Dasigi and Nathan Lambert and Kyle Richardson and Luke Zettlemoyer and Jesse Dodge and Kyle Lo and Luca Soldaini and Noah A. Smith and Hanna Hajishirzi},
year={2024},
url={https://api.semanticscholar.org/CorpusID:267365485},
journal={arXiv preprint},
}