Yunfei Lu, Pengfei Gu, Chaoli Wang
This is the official PyTorch implementation for the paper "FCNR: Fast Compressive Neural Representation of Visualization Images".
Set up a conda environment with Python 3.9 and install all dependencies:
pip install -r requirements.txt
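For example, the environment can be created and activated like this (the environment name fcnr is only an illustration):

conda create -n fcnr python=3.9
conda activate fcnr
pip install -r requirements.txt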
You can generate customized visualization images from different viewpoints and timesteps on your own dataset via volume or isosurface rendering. Here is a link to download the vortex dataset we use (direct volume rendering images included): vortex.
Specify <gpu_idx>, <exp_name>, and <config_name> to start training and inference:
python train.py <gpu_idx> <exp_name> --config ./configs/<config_name>
An example of the configuration file we use is ./configs/cfg.json. You can follow it to adapt the code to your own dataset.
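For instance, training on the vortex dataset with GPU 0 and the provided configuration might be invoked as follows (the experiment name vortex is just an example):

python train.py 0 vortex --config ./configs/cfg.json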
Here is a comparison between the results of FCNR and existing baselines:
@inproceedings{lu2024fcnr,
title={{FCNR}: Fast Compressive Neural Representation of Visualization Images},
author={Lu, Yunfei and Gu, Pengfei and Wang, Chaoli},
booktitle={Proceedings of IEEE VIS Conference (Short Papers)},
year={2024},
note={Accepted}
}