
Monocular-Height-Estimation-Toolbox

Introduction

Monocular-Height-Estimation-Toolbox is an open-source monocular height estimation toolbox based on PyTorch and MMSegmentation v0.16.0.

It aims to benchmark monocular height estimation methods and provides effective support for evaluating and visualizing results.

Major features

  • Unified benchmark

    Provides a unified benchmark toolbox for various height estimation methods.

Thanks to MMSeg, we inherit these major features. 😊

Benchmark and model zoo

Results and models are available in the model zoo.

Supported backbones (partially released):

  • ResNet (CVPR'2016)
  • EfficientNet (ICML'2019)
  • Vision Transformer (ICLR'2021)
  • Swin Transformer (ICCV'2021)
  • I recommend cross-package imports in the config, so that you can utilize backbones from other packages such as MMcls and MMseg (see the config sketch after this list). Refer to the introduction. I will add more backbones in the future.
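
As a rough illustration, a cross-package backbone import in an OpenMMLab-style config might look like the sketch below. This is a minimal sketch assuming the toolbox follows the standard OpenMMLab registry convention; the model type, backbone choice, and every parameter shown are illustrative assumptions, not configs shipped with this repo.

# Minimal config sketch (assumption: a 'mmcls.'-prefixed type is looked up
# in MMClassification's model registry, per the OpenMMLab convention).
custom_imports = dict(imports=['mmcls.models'], allow_failed_imports=False)

model = dict(
    type='DepthEncoderDecoder',      # hypothetical estimator name, for illustration only
    backbone=dict(
        type='mmcls.ConvNeXt',       # backbone borrowed from MMClassification
        arch='tiny',
        out_indices=(0, 1, 2, 3),    # expose all four stages to the decode head
    ),
    # decode_head=..., train_cfg=..., test_cfg=... as in the shipped configs
)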

Supported methods:

Supported datasets:

Installation

Please refer to get_started.md for installation and dataset_prepare.md for dataset preparation.

Get Started

We provide train.md and inference.md for the basic usage of this toolbox; a minimal inference sketch follows below.
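
For orientation, single-image inference typically follows the MMSegmentation v0.16.0 workflow the toolbox is built on. The sketch below assumes the toolbox exposes MMSegmentation-style init/inference helpers; the module path, function names, and all file paths are placeholders, so consult inference.md for the actual API.

# Minimal inference sketch (assumption: MMSegmentation-style helpers;
# all paths below are placeholders).
from mmseg.apis import inference_segmentor, init_segmentor

config_file = 'configs/example/example_config.py'    # placeholder config
checkpoint_file = 'checkpoints/example.pth'          # placeholder checkpoint

# build the model from the config and load trained weights
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# run the model on a single aerial image to get the predicted height map
result = inference_segmentor(model, 'demo/example_image.png')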

License

This project is released under the Apache 2.0 license.

Acknowledgement

This repo benefits from the awesome works of MMSegmentation, AdaBins, and BTS. Please also consider citing them.

Cite

If you find this toolbox helpful for your projects or research, please consider citing one of our works listed below. I may write a technical report based on this toolbox in the future, discussing training details for supervised monocular depth estimation.

@article{li2022binsformer,
  title={BinsFormer: Revisiting Adaptive Bins for Monocular Depth Estimation},
  author={Li, Zhenyu and Wang, Xuyang and Liu, Xianming and Jiang, Junjun},
  journal={arXiv preprint arXiv:2204.00987},
  year={2022}
}
@article{li2022depthformer,
  title={DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation},
  author={Li, Zhenyu and Chen, Zehui and Liu, Xianming and Jiang, Junjun},
  journal={arXiv preprint arXiv:2203.14211},
  year={2022}
}
@article{li2021simipu,
  title={SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations},
  author={Li, Zhenyu and Chen, Zehui and Li, Ang and Fang, Liangji and Jiang, Qinhong and Liu, Xianming and Jiang, Junjun and Zhou, Bolei and Zhao, Hang},
  journal={arXiv preprint arXiv:2112.04680},
  year={2021}
}
