
# Summary

State space models (SSMs) are fundamental tools for modeling sequential data. They are broadly used across engineering disciplines like signal processing and control theory, as well as scientific domains like neuroscience, genetics, ecology, and climate science. Fast and robust tools for state space modeling are crucial to researchers in all of these application areas.

State space models specify a probability distribution over a sequence of observations, $y_1, \ldots, y_T$, where $y_t$ denotes the observation at time $t$. The key assumption of an SSM is that the observations arise from a sequence of _latent states_, $z_1, \ldots, z_T$, which evolve according to a _dynamics model_ (aka transition model). An SSM may also use inputs (aka controls or covariates), $u_1,\ldots,u_T$, to steer the latent state dynamics and influence the observations.
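
Concretely, an SSM factorizes the joint distribution of latent states and observations (writing $y_{1:T}$ for $y_1, \ldots, y_T$) as
$$
p(y_{1:T}, z_{1:T} \mid u_{1:T}) = p(z_1 \mid u_1) \prod_{t=2}^{T} p(z_t \mid z_{t-1}, u_t) \prod_{t=1}^{T} p(y_t \mid z_t, u_t),
$$
where the first two factors are the initial distribution and the dynamics model, and the final factor is the observation (emission) model.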

For example, SSMs are often used in neuroscience to model the dynamics of neural spike train recordings [@vyas2020computation]. Here, $y_t$ is a vector of spike counts from each of, say, 100 measured neurons. The activity of nearby neurons is often correlated, and SSMs can capture that correlation through a lower dimensional latent state, $z_t$, which may change slowly over time. If we know that certain sensory inputs may drive the neural activity, we can encode them in $u_t$. A common goal in neuroscience is to infer the latent states $z_t$ that best explain the observed neural spike train; formally, this is called _state inference_. Another goal is to estimate the dynamics that govern how latent states evolve; formally, this is part of the _parameter estimation_ process. `Dynamax` provides algorithms for state inference and parameter estimation in a variety of SSMs.

The key design choices when constructing an SSM include the type of latent state (is $z_t$ a continuous or discrete random variable?), the dynamics that govern how latent states evolve over time (are they linear or nonlinear?), and the conditional distribution of the observations (are they Gaussian, Poisson, etc.?). Canonical examples of SSMs include hidden Markov models (HMM), which have discrete latent states, and linear dynamical systems (LDS), which have continuous latent states with linear dynamics and additive Gaussian noise. `Dynamax` supports these canonical examples as well as more complex models.
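
As a concrete instance of these choices, a linear dynamical system with inputs is conventionally written as
$$
z_t = A z_{t-1} + B u_t + \epsilon_t, \qquad y_t = C z_t + D u_t + \delta_t,
$$
with Gaussian noise terms $\epsilon_t \sim \mathcal{N}(0, Q)$ and $\delta_t \sim \mathcal{N}(0, R)$, whereas an HMM instead takes $z_t \in \{1, \ldots, K\}$ to be discrete and specifies its dynamics with a $K \times K$ transition matrix.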

More information about state space models and algorithms for state inference and parameter estimation can be found in the textbooks by @murphy2023probabilistic and @sarkka2023bayesian.


# Statement of need

`Dynamax` is an open-source Python package for state space modeling. Since it is built with `JAX` [@jax], it supports just-in-time (JIT) compilation for hardware acceleration on CPU, GPU, and TPU machines. It also supports automatic differentiation for gradient-based model learning. While other libraries exist for state space modeling in Python (and some also use `JAX`), this library provides a combination of low-level inference algorithms and high-level modeling objects that can support a wide range of research applications.
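
To illustrate the pattern `JAX` enables, here is a minimal, self-contained sketch (a toy example, not `Dynamax` source code): the marginal log likelihood of a scalar linear Gaussian SSM, computed by a Kalman filter written with `jax.lax.scan`, then JIT-compiled and differentiated with respect to its parameters.

```python
# Toy sketch (not Dynamax code): Kalman-filter marginal log likelihood for a
# scalar linear Gaussian SSM, JIT-compiled and differentiated with JAX.
import jax
import jax.numpy as jnp

def lgssm_marginal_loglik(params, ys):
    """Model: z_t = a z_{t-1} + N(0, q),  y_t = z_t + N(0, r)."""
    a, q, r = params["a"], params["q"], params["r"]

    def step(carry, y):
        mean, var = carry
        # Predict the next latent state.
        pred_mean, pred_var = a * mean, a**2 * var + q
        # Log likelihood of y_t under the one-step-ahead predictive distribution.
        s = pred_var + r
        ll = -0.5 * (jnp.log(2 * jnp.pi * s) + (y - pred_mean) ** 2 / s)
        # Kalman update: condition on y_t before the next step.
        k = pred_var / s
        return (pred_mean + k * (y - pred_mean), (1 - k) * pred_var), ll

    _, lls = jax.lax.scan(step, (jnp.array(0.0), jnp.array(1.0)), ys)
    return jnp.sum(lls)

params = {"a": 0.9, "q": 0.1, "r": 0.5}
ys = jnp.sin(jnp.linspace(0.0, 10.0, 100))           # toy observations
loglik = jax.jit(lgssm_marginal_loglik)(params, ys)  # JIT-compiled evaluation
grads = jax.grad(lgssm_marginal_loglik)(params, ys)  # gradients w.r.t. all parameters
```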

The API for `Dynamax` is divided into two parts: a set of core, functionally pure, low-level inference algorithms, and a high-level, object-oriented module for constructing and fitting probabilistic SSMs. The low-level inference API provides message passing algorithms for several common types of SSMs. For example, `Dynamax` provides `JAX` implementations for:


The high-level model API makes it easy to construct, fit, and inspect HMMs and linear Gaussian SSMs.
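
For instance, a rough sketch of that workflow looks like the following (class and method names follow the pattern in the `Dynamax` documentation, but exact signatures may differ between releases):

```python
# Hedged sketch of the high-level workflow: construct a model, initialize its
# parameters, sample synthetic data, and fit with EM.
import jax.random as jr
from dynamax.hidden_markov_model import GaussianHMM

hmm = GaussianHMM(num_states=3, emission_dim=2)
params, props = hmm.initialize(jr.PRNGKey(0))
states, emissions = hmm.sample(params, jr.PRNGKey(1), num_timesteps=500)
fitted_params, log_probs = hmm.fit_em(params, props, emissions, num_iters=50)
```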

`Dynamax` has supported several publications. The low-level API has been used in machine learning research [@zhao2023revisiting; @lee2023switching; @chang2023low]. More sophisticated, special-purpose models have been built on top of `Dynamax`, like the Keypoint-MoSeq library for modeling postural dynamics of animals [@weinreb2024keypoint]. Finally, the `Dynamax` tutorials are used as reference examples in a major machine learning textbook [@murphy2023probabilistic].

# Acknowledgements
