Source code for our IJCARS paper "Seeing under the cover with a 3D U-Net: point cloud-based weight estimation of covered patients".
Please first install the following dependencies:
- Python3 (we use 3.8.3)
- numpy
- pytorch (we tested 1.6.0 and 1.9.0)
- bps
- yacs
- scipy
- sklearn
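As a quick sanity check of the environment, the following snippet (a hypothetical helper, not part of the repository) prints the installed versions of the core packages:

```python
# Hypothetical environment check -- not part of the repository.
import numpy
import scipy
import sklearn
import torch
import yacs  # config system used by the training/testing scripts

print("numpy  :", numpy.__version__)
print("scipy  :", scipy.__version__)
print("sklearn:", sklearn.__version__)
print("torch  :", torch.__version__)   # 1.6.0 and 1.9.0 were tested
print("CUDA available:", torch.cuda.is_available())
# bps is also required; install and import it according to its own instructions.
```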
- Download the SLP dataset from https://web.northeastern.edu/ostadabbas/2019/06/27/multimodal-in-bed-pose-estimation/. Create a directory `/dataset/SLP` and move the dataset to this directory. We recommend creating a symlink.
- Execute `cd data` followed by `python preprocess_slp.py` to generate point clouds from the original depth images and to obtain the segmentation masks of the uncovered patients. The data is written to `'/dataset/SLP/3d_data_{}_{}'.format(POSITION, COVER_COND)`. A conceptual sketch of the depth-to-point-cloud conversion is given below.
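For intuition only: generating a point cloud from a depth image amounts to back-projecting every valid pixel with the camera intrinsics. The sketch below uses placeholder intrinsics (`fx`, `fy`, `cx`, `cy`) and is not the actual `preprocess_slp.py`:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in metres) into an N x 3 point cloud.

    Illustrative sketch with placeholder intrinsics -- the SLP-specific
    preprocessing lives in data/preprocess_slp.py.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with a valid depth

# Hypothetical usage with placeholder intrinsics:
# cloud = depth_to_point_cloud(depth_image, fx=367.0, fy=367.0, cx=256.0, cy=212.0)
```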
- In `/configs/defaults.py`, modify `_C.BASE_DIRECTORY` in line 5 to the root directory where you intend to save the results.
- In the config files `/configs/CONFIG_TO_SPECIFY.yaml`, you can optionally modify `EXPERIMENT_NAME` in line 1. Models and log files will finally be written to `os.path.join(cfg.BASE_DIRECTORY, cfg.EXPERIMENT_NAME)` (see the sketch below).
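For reference, the interplay between these two settings looks roughly as follows; this is a simplified sketch using the yacs API, not the actual content of `defaults.py`:

```python
import os
from yacs.config import CfgNode as CN

# Simplified sketch of configs/defaults.py -- not the actual file contents.
_C = CN()
_C.BASE_DIRECTORY = "/path/to/results"   # adjust to your results root
_C.EXPERIMENT_NAME = "my_experiment"     # can be overridden by the YAML config

def get_cfg_defaults():
    """Return a clone so the defaults themselves stay untouched."""
    return _C.clone()

cfg = get_cfg_defaults()
# cfg.merge_from_file("../configs/config_ours_weight.yaml")  # overrides EXPERIMENT_NAME etc.
output_dir = os.path.join(cfg.BASE_DIRECTORY, cfg.EXPERIMENT_NAME)
print("Models and logs are written to:", output_dir)
```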
- Navigate to the `main` directory.
- Execute `python train.py --gpu GPU --config-file ../configs/config_ours_uncovering.yaml --stage uncovering` to train the 3D U-Net to virtually uncover the patients. This corresponds to step 1 in our paper. After each epoch, we save the model weights and a log file to the specified directory.
- Execute `python train.py --gpu GPU --config-file ../configs/config_ours_weight.yaml --stage weight` to train the 3D CNN for weight regression on the patients that were previously uncovered by the 3D U-Net. This corresponds to step 2 in our paper. Again, we save the model weights and a log file to the specified directory after each epoch (see the generic sketch below).
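The per-epoch saving of model weights and log entries follows the standard PyTorch pattern. The following is a generic sketch only, not the actual loop in `main/train.py`:

```python
import os
import torch

def train(model, optimizer, loader, loss_fn, cfg, num_epochs, device="cuda"):
    """Generic per-epoch checkpointing sketch -- not the actual main/train.py."""
    out_dir = os.path.join(cfg.BASE_DIRECTORY, cfg.EXPERIMENT_NAME)
    os.makedirs(out_dir, exist_ok=True)
    model.to(device)
    for epoch in range(num_epochs):
        model.train()
        running_loss = 0.0
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        # Save the model weights and append to the log file after every epoch.
        torch.save(model.state_dict(), os.path.join(out_dir, f"model_epoch_{epoch:03d}.pth"))
        with open(os.path.join(out_dir, "train.log"), "a") as log:
            log.write(f"epoch {epoch}: mean loss {running_loss / len(loader):.4f}\n")
```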
- If you trained a model yourself following the instructions above, you can test it by executing `python test.py --config-file ../configs/config_ours_weight.yaml --gpu GPU --val-split VAL_SPLIT --cover-condition COVER_COND --position POSITION`. `VAL_SPLIT` should be in {dana, sim}, where "dana" represents the lab setting and "sim" the simulated hospital room. `COVER_COND` should be in {cover1, cover2, cover12}, and `POSITION` should be in {supine, lateral, all, left, right}. The output is the mean absolute error (MAE) in kg for the specified setting, cover condition and patient position.
- Otherwise, we provide pre-trained models. Download the models and use them for inference by executing `python test.py --config-file ../configs/config_ours_weight.yaml --gpu GPU --val-split VAL_SPLIT --cover-condition COVER_COND --position POSITION --unet-path /PATH/TO/UNET --cnn3d-path /PATH/TO/3DCNN`. These models achieve the following MAEs: supine & cover1: 4.61 kg, lateral & cover1: 4.50 kg, supine & cover2: 4.86 kg, lateral & cover2: 4.53 kg. A sketch that loops over these settings is given below.
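To reproduce the numbers above, one could loop over the four settings and call `test.py` once per combination. A minimal sketch using the documented command-line flags (the model paths, GPU id and validation split are placeholders):

```python
import subprocess

UNET_PATH = "/PATH/TO/UNET"     # placeholder: downloaded pre-trained 3D U-Net
CNN3D_PATH = "/PATH/TO/3DCNN"   # placeholder: downloaded pre-trained 3D CNN

for position in ["supine", "lateral"]:
    for cover in ["cover1", "cover2"]:
        subprocess.run([
            "python", "test.py",
            "--config-file", "../configs/config_ours_weight.yaml",
            "--gpu", "0",                 # placeholder GPU id
            "--val-split", "dana",        # or "sim" for the simulated hospital room
            "--cover-condition", cover,
            "--position", position,
            "--unet-path", UNET_PATH,
            "--cnn3d-path", CNN3D_PATH,
        ], check=True)
```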
- To train one of the baseline models, execute `python train_baseline.py --gpu GPU --config-file ../configs/CONFIG_TO_SPECIFY.yaml`, specifying the desired config file.
- To test a trained baseline model, execute `python test_baseline.py --config-file ../configs/CONFIG_TO_SPECIFY.yaml --gpu GPU --val-split VAL_SPLIT --cover-condition COVER_COND --position POSITION`. Depending on the trained model, `COVER_COND` can now also be set to uncover.
If you find our code useful for your work, please cite the following paper:
@article{bigalke2021seeing,
title={Seeing under the cover with a 3D U-Net: point cloud-based weight estimation of covered patients},
author={Bigalke, Alexander and Hansen, Lasse and Diesel, Jasper and Heinrich, Mattias P},
journal={International journal of computer assisted radiology and surgery},
volume={16},
number={12},
pages={2079--2087},
year={2021}
}