By Baoru Huang, Yicheng Hu, Anh Nguyen, Stamatia Giannarou, and Daniel S. Elson
- Installation:
cd $Sensing_area_detection
conda env create -f environment.yml
- We newly acquired two datasets: 1) the Jerry dataset and 2) the Coffbea dataset.
- The Jerry dataset includes: stereo laparoscopic images with standard illumination, stereo laparoscopic images with the laser on and the laparoscopic light off, laser segmentation masks, laser center point ground truth, and PCA line points txt files.
- The Coffbea dataset includes everything in the Jerry dataset, plus a ground-truth depth map for every frame (a loading sketch follows below).
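A minimal loading sketch for one frame is shown below. The folder layout, file names, and text formats are assumptions made for illustration only; adjust them to the actual structure of the released data.

```python
# Minimal sketch for loading one frame from the Jerry / Coffbea datasets.
# All paths and file names below are assumptions for illustration.
import cv2
import numpy as np

root = "Jerry/000000"  # hypothetical per-frame folder

# Stereo pair under standard illumination.
left_rgb = cv2.imread(f"{root}/left_rgb.png")
right_rgb = cv2.imread(f"{root}/right_rgb.png")

# Stereo pair with the laser on and the laparoscopic light off.
left_laser = cv2.imread(f"{root}/left_laser.png")
right_laser = cv2.imread(f"{root}/right_laser.png")

# Binary laser segmentation mask and plain-text annotations.
laser_mask = cv2.imread(f"{root}/laser_mask.png", cv2.IMREAD_GRAYSCALE)
laser_center = np.loadtxt(f"{root}/laser_center.txt")        # ground-truth laser centre point
pca_line_points = np.loadtxt(f"{root}/pca_line_points.txt")  # points sampled on the PCA line

# The Coffbea dataset additionally provides a ground-truth depth map per frame,
# e.g. depth = np.load(f"{root}/depth.npy")  (format assumed).
```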
- Labelling:
- Example data. (a) Standard illumination left RGB image; (b) left image with the laser on and the laparoscopic light off; (c) and (d) are the same but for the right images.
- Problem Definition. (a) The input RGB image; (b) the line estimated with PCA, used to obtain principal points; (c) the image with the laser on, used to detect the intersection ground truth (a PCA line-fitting sketch follows below).
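The PCA line in (b) can be reproduced with a simple principal-component fit over the foreground pixels of the laser segmentation mask. The sketch below is an illustrative reconstruction rather than the repository's labelling script; the input file name and the number of sampled points are assumptions.

```python
# Illustrative sketch: fit a 2D line to the foreground pixels of a laser
# segmentation mask with PCA and sample principal points along it.
import cv2
import numpy as np

def pca_line_from_mask(mask):
    """Return the centroid and unit direction of the first principal axis
    of the foreground (non-zero) pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    centroid = pts.mean(axis=0)                 # the fitted line passes through the centroid
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    direction = vt[0] / np.linalg.norm(vt[0])   # first principal component
    return centroid, direction

if __name__ == "__main__":
    mask = cv2.imread("laser_mask.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
    centroid, direction = pca_line_from_mask(mask > 0)
    # Sample a handful of principal points along the fitted line.
    offsets = np.linspace(-100.0, 100.0, 5)
    line_points = centroid[None, :] + offsets[:, None] * direction[None, :]
    print("centroid:", centroid)
    print("direction:", direction)
    print("sampled line points:\n", line_points)
```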
Training:
- Change the data directory to point to your data folder
cd $Sensing_area_detection
python main.py --mode train
Test:
cd $Sensing_area_detection
python main.py --mode test
Results:
If you find our paper useful in your research, please consider citing:
MIT License
- This work was supported by the UK National Institute for Health Research (NIHR) Invention for Innovation Award NIHR200035, the Cancer Research UK Imperial Centre, the Royal Society (UF140290) and the NIHR Imperial Biomedical Research Centre.