📚 This code reproduces the experiments in "Ai-sampler: Adversarial Learning of Markov kernels with involutive maps".
To avoid CUDA/cuDNN version conflicts between JAX and PyTorch on your platform, we recommend installing the GPU-supported JAX and the CPU-only PyTorch.
To use this library, simply clone the repository, create a conda environment, and install the dependencies with:

```bash
conda create -n aisampler python=3.10
conda activate aisampler
pip install -r requirements.txt
```

then install the library by running:

```bash
pip install -e .
```
For macOS, change `jax[cuda12]` to `jax[cpu]` in the `requirements.txt` file.
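A quick way to confirm that JAX picked up the intended backend (GPU with `jax[cuda12]`, CPU with `jax[cpu]`) is a plain JAX check, independent of this library:

```python
# Sanity check: report which backend JAX will use.
import jax

print(jax.default_backend())  # expected: 'gpu' with jax[cuda12], 'cpu' with jax[cpu]
print(jax.devices())          # list of devices JAX can see
```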
The repository is structured as follows:

- `./aisampler`: library source code that implements the Ai-sampler.
- `./data`: contains the data for Bayesian logistic regression.
- `./experiments`: collection of the experiments.
  - `/train`: scripts for training the Ai-sampler.
  - `/test`: scripts for sampling with the trained Ai-sampler and with HMC.
To train the Ai-sampler on the 2D densities, from the root folder run:

```bash
python experiments/train/train_toy_density.py --task.target_density_name=hamiltonian_mog2 --task.train.num_epochs=51 --task.checkpoint.checkpoint_dir=./checkpoints --task.checkpoint.save_every=50
```
Checkpoints are saved every `save_every` epochs into `checkpoint_dir`. To sample using the trained Ai-sampler, run:

```bash
python experiments/test/sample_aisampler_toy_density.py --task.target_density_name=hamiltonian_mog2 --task.checkpoint.checkpoint_dir=./checkpoints --task.checkpoint.checkpoint_epoch=50 --task.num_parallel_chains=10 --task.num_iterations=1000 --task.burn_in=100
```

where `num_parallel_chains` sets the number of Markov chains run in parallel and `num_iterations` the length of the chains (after `burn_in`).
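As a rough illustration of what these flags control, here is a generic sketch of several random-walk Metropolis chains run in parallel with `jax.vmap` on a placeholder 2D Gaussian target; this is not the Ai-sampler kernel, whose proposal is a learned involutive map:

```python
# Generic illustration of num_parallel_chains / num_iterations / burn_in.
import jax
import jax.numpy as jnp


def log_density(x):
    # Placeholder target: 2D standard normal.
    return -0.5 * jnp.sum(x ** 2)


def rw_metropolis_step(x, key, step_size=0.5):
    key_prop, key_acc = jax.random.split(key)
    proposal = x + step_size * jax.random.normal(key_prop, x.shape)
    log_alpha = log_density(proposal) - log_density(x)
    accept = jnp.log(jax.random.uniform(key_acc)) < log_alpha
    return jnp.where(accept, proposal, x)


def run_chain(key, num_iterations=1000, burn_in=100):
    keys = jax.random.split(key, num_iterations + burn_in)

    def body(x, k):
        x_new = rw_metropolis_step(x, k)
        return x_new, x_new

    _, samples = jax.lax.scan(body, jnp.zeros(2), keys)
    return samples[burn_in:]  # discard the burn-in prefix


num_parallel_chains = 10
chain_keys = jax.random.split(jax.random.PRNGKey(0), num_parallel_chains)
samples = jax.vmap(run_chain)(chain_keys)
print(samples.shape)  # (10, 1000, 2): chains x iterations x dimension
```

Batching the chains with `jax.vmap` keeps them in a single computation on the accelerator, which is the usual way to make many parallel chains cheap in JAX.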
To train the Ai-sampler on the Bayesian logistic regression posterior, from the root folder run:

```bash
python experiments/train/train_logistic_regression.py --task=experiments/config/config_train_logistic_regression.py --task.dataset_name=Heart --task.train.num_epochs=200 --task.checkpoint.checkpoint_dir=./checkpoints --task.checkpoint.save_every=50
```
To sample from the trained Ai-sampler, run:

```bash
python experiments/test/sample_aisampler_logistic_regression.py --task.dataset_name=Heart --task.checkpoint.checkpoint_dir=./checkpoints --task.checkpoint.checkpoint_epoch=400 --task.num_parallel_chains=10 --task.num_iterations=1000 --task.burn_in=100
```
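For context, the target in these runs is the posterior of a Bayesian logistic regression model. A minimal sketch of such a log-posterior in JAX, assuming an isotropic Gaussian prior and a Bernoulli likelihood (the preprocessing and prior actually used by the library may differ), looks like this:

```python
# Sketch of a Bayesian logistic regression log-posterior (Gaussian prior,
# Bernoulli likelihood); the library's exact setup may differ.
import jax.numpy as jnp


def log_posterior(w, X, y, prior_scale=1.0):
    # X: (n, d) features, y: (n,) labels in {0, 1}, w: (d,) weights.
    logits = X @ w
    log_lik = jnp.sum(y * logits - jnp.logaddexp(0.0, logits))
    log_prior = -0.5 * jnp.sum((w / prior_scale) ** 2)
    return log_lik + log_prior
```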
To sample from the 2D densities with HMC (Neal, 2012), run:

```bash
python experiments/test/sample_hmc_toy_density.py --task.target_density_name=hamiltonian_mog2
```
To sample with HMC from the Bayesian logistic regression posterior, run:

```bash
python experiments/test/sample_hmc_logistic_regression.py --task.dataset_name=Heart
```
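For background, HMC (Neal, 2012) proposes a new state by simulating Hamiltonian dynamics with a leapfrog integrator and accepts it with a Metropolis test. The sketch below is a generic single HMC step with placeholder hyperparameters, not the implementation used in this repository:

```python
# Generic HMC step: leapfrog integration followed by a Metropolis test.
import jax
import jax.numpy as jnp


def hmc_step(x, key, log_density, step_size=0.1, num_leapfrog=20):
    key_mom, key_acc = jax.random.split(key)
    grad_logp = jax.grad(log_density)

    p = jax.random.normal(key_mom, x.shape)  # resample the momentum
    x_new, p_new = x, p
    p_new = p_new + 0.5 * step_size * grad_logp(x_new)   # half step for momentum
    for _ in range(num_leapfrog - 1):
        x_new = x_new + step_size * p_new                 # full step for position
        p_new = p_new + step_size * grad_logp(x_new)      # full step for momentum
    x_new = x_new + step_size * p_new
    p_new = p_new + 0.5 * step_size * grad_logp(x_new)    # final half step

    # Metropolis accept/reject on the total energy of (position, momentum).
    current_h = -log_density(x) + 0.5 * jnp.sum(p ** 2)
    proposed_h = -log_density(x_new) + 0.5 * jnp.sum(p_new ** 2)
    accept = jnp.log(jax.random.uniform(key_acc)) < current_h - proposed_h
    return jnp.where(accept, x_new, x)


# Toy usage on a 2D standard normal.
x = hmc_step(jnp.zeros(2), jax.random.PRNGKey(0), lambda z: -0.5 * jnp.sum(z ** 2))
```

The step size and the number of leapfrog steps are the main tuning knobs of HMC.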
Note: the library supports wandb for better logging. Simply install it and add `--task.wandb.use=True`.
If you want to cite us, use the following BibTeX entry:

```bibtex
@article{egorov2024ai,
  title={Ai-Sampler: Adversarial Learning of Markov kernels with involutive maps},
  author={Egorov, Evgenii and Valperga, Ricardo and Gavves, Efstratios},
  journal={arXiv preprint arXiv:2406.02490},
  year={2024}
}
```