Uses AI to instantly convert 2D content into stereo 3D
Fig) Input image (left) / Output (middle) / 3D effect (right)
With the rapid rise of autostereoscopic 3D monitors, specialized optical lenses combined with eye tracking deliver an entirely new stereoscopic 3D viewing experience. However, these displays only accept 3D inputs (stereoscopic images such as side-by-side pairs), while most images and videos on the internet are 2D single-view content, which makes the technology difficult to popularize. To solve this problem, this project uses deep learning and computer vision to convert 2D content into stereo 3D content.
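A common way to do this, and a reasonable mental model for the scripts below, is depth-image-based rendering: a neural network estimates a per-pixel depth map for the single input view, and pixels are shifted horizontally in proportion to that depth to synthesize the second eye's view. The following is a minimal sketch of that idea with OpenCV and NumPy; the helper name `make_side_by_side` and the fixed `max_disparity` are illustrative, not this repository's actual code.

```python
import cv2
import numpy as np

def make_side_by_side(image, depth, max_disparity=30):
    """Pack a crude left/right pair from one image and its depth map.

    image: H x W x 3 uint8 (the original 2D view, used as the left eye)
    depth: H x W float, larger values = closer to the camera
    max_disparity: illustrative upper bound on the horizontal pixel shift
    """
    h, w = depth.shape
    # Normalize depth to [0, 1] and scale it to a per-pixel disparity.
    d = cv2.normalize(depth.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    disparity = d * max_disparity

    # Resample the left view with a horizontal offset to fake the right view.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    right = cv2.remap(image, xs + disparity, ys, cv2.INTER_LINEAR)

    # Side-by-side layout, the stereo format mentioned above.
    return np.hstack([image, right])
```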
- [May 2023] Fixes regarding links and filenames
- [Aug 2020] Release of C++ and Cython versions
- [Aug 2020] Initial release of stereo image generation based on MiDaS v2.0
- Download the model weights model-f45da743.pt and place the file in the root folder.
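If you want to sanity-check the downloaded file, it should load as a plain PyTorch checkpoint (an optional check, assuming the standard MiDaS v2.0 release format):

```python
import torch

# Optional sanity check: the MiDaS v2.0 checkpoint is expected to load as a
# dict mapping parameter names to tensors.
state = torch.load("model-f45da743.pt", map_location="cpu")
print(type(state))
print(list(state)[:3])  # a few parameter names
```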
- Set up dependencies with CUDA:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
The code was tested with CUDA 10.1, Python 3.6.6, PyTorch 1.6.0, Torchvision 0.7.0 and OpenCV 3.4.0.12.
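To confirm that the installed build actually sees the GPU, a quick check like the following can be run before the scripts (optional):

```python
import cv2
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.cuda.is_available())   # True if the CUDA build found a GPU
print(cv2.__version__)             # installed OpenCV version
```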
- Place input images in the folder example, e.g. from here: Male lion in Okonjima, Namibia.
- Run the model:
(Generate a depth map from an image)
python depth_estimation_image.py
(Generate a depth map from the camera)
python depth_estimation_cam.py
(Generate a stereo image from an image)
python stereo_generation_image.py
(Generate a stereo image from the camera)
python stereo_generation_cam.py
- The resulting depth maps are written to the depth folder. The resulting stereo images are written to the stereo folder.
Our code builds upon Intel MiDaS
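For reference, upstream MiDaS also exposes its models and input transforms through torch.hub. The sketch below follows the upstream MiDaS documentation rather than this repository's scripts; the model and transform names come from the MiDaS hubconf, and the file paths example/lion.jpg and depth/lion_depth.png are placeholders.

```python
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a MiDaS model and its matching input transform from torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# Read an image (placeholder path) and predict its inverse depth.
img = cv2.cvtColor(cv2.imread("example/lion.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img).to(device))
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

# Save the depth map rescaled to 8-bit for viewing (placeholder path).
depth = prediction.cpu().numpy()
cv2.imwrite("depth/lion_depth.png",
            cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8"))
```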