CEBRA is a library for estimating Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables. It contains self-supervised learning algorithms implemented in PyTorch and supports a variety of datasets common in biology and neuroscience.
To receive updates on code releases, please 👀 watch or ⭐️ star this repository!
CEBRA is a self-supervised method for non-linear clustering that allows for label-informed time series analysis.
It can jointly use behavioral and neural data in a hypothesis- or discovery-driven manner to produce consistent, high-performance latent spaces. While it is not specific to neural and behavioral data, this is the first domain in which we applied the tool. In this application, CEBRA yields a consistent representation of the latent variables driving activity and behavior, improves decoding accuracy of behavioral variables over standard supervised learning, and produces embeddings that are robust to domain shifts.
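As a minimal sketch of how this looks in practice with the scikit-learn-style estimator API, the snippet below fits CEBRA on hypothetical toy data; the parameter values are illustrative only, and names or defaults may differ across versions, so consult the documentation for your installed release.

```python
import numpy as np
import cebra

# Hypothetical toy data: 1000 time points of 50-channel neural activity and a
# 1D continuous behavioral variable (e.g. position).
neural_data = np.random.randn(1000, 50).astype(np.float32)
behavior = np.random.randn(1000, 1).astype(np.float32)

# Hypothesis-driven use: condition the embedding on the behavioral variable by
# passing it as the auxiliary label to fit().
model = cebra.CEBRA(
    model_architecture="offset10-model",
    batch_size=512,
    output_dimension=3,
    max_iterations=1000,
    temperature=1.0,
    device="cuda_if_available",
    verbose=True,
)
model.fit(neural_data, behavior)

# Discovery-driven use: omit the auxiliary labels to rely on time contrastive
# learning alone.
# model.fit(neural_data)

embedding = model.transform(neural_data)  # shape: (1000, 3)
```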
- 📄 Publication (May 2023): Learnable latent embeddings for joint behavioural and neural analysis. Steffen Schneider*, Jin Hwa Lee*, and Mackenzie Weygandt Mathis. Nature, 2023.
- 📄 Preprint (April 2022): Learnable latent embeddings for joint behavioral and neural analysis. Steffen Schneider*, Jin Hwa Lee*, and Mackenzie Weygandt Mathis.
- CEBRA is released for academic use only (please read the license file). If this license is not appropriate for your application, please contact Prof. Mackenzie W. Mathis ([email protected]) for a commercial use license.