
[Feature Request] Support for 360 Camera images #526

Closed
jjisnow opened this issue Jun 26, 2019 · 45 comments · Fixed by #1464
Labels: 360/fisheye, feature request (feature request from the community)

Comments

@jjisnow commented Jun 26, 2019

It would be great if there was support for the standard flat 360 camera projection images given out by 360 cameras.

J

@natowi (Member) commented Jun 26, 2019

You could try Meshroom-2019.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe (CLI only)
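For example, a minimal invocation could look like this (paths and the split count are hypothetical; the parameter names are taken from the tool's own parameter dump quoted later in this thread):

aliceVision_utils_split360Images.exe -i C:\photos\pano.jpg -o C:\photos\splits -m equirectangular --equirectangularNbSplits 8 --equirectangularSplitResolution 1200

Each equirectangular panorama is split into several pinhole-style views that Meshroom can then treat as ordinary photos.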

@jjisnow (Author) commented Jun 27, 2019

Thanks! I've done that now, but there is no help in the app on computing the necessary metadata to use with Meshroom.

@natowi (Member) commented Jun 27, 2019

What do you mean with "there is no help in the app on computing the necessary metadata to use with Meshroom"?
Do you need help with the CLI or with Meshroom?

@jjisnow (Author) commented Jun 28, 2019

I figured it out, thanks. The images split perfectly. When I tried Meshroom, the SfM step failed with images from one panorama; but if I renamed the files from several panoramas to get some parallax in sequential images (giving an initial point cloud), then augmented the scan with the remaining photos, and set "Downscale" = 1 for the dense cloud as noted in #409, it proceeded perfectly!

@jeffreyianwilson

I know it can split the images, but does it deal with the cube-map pinhole cameras as a fixed rig?

@natowi (Member) commented Jun 28, 2019

@jeffreyianwilson I have tested this with the datasets from here and it works.

natowi closed this as completed Jun 28, 2019
@Baasje85

@natowi I would leave this issue open, and have the new openMVG code imported to support panoramas natively.

natowi reopened this Jun 29, 2019
@jeffreyianwilson

Excellent. Processing hundreds if not thousands of panoramas into cube-map images is an unnecessary waste of storage.

@jeffreyianwilson

Does Meshroom/AliceVision support camera rigs/fisheye lenses? I want to take the individual camera output from a 360 rig (8 × 200° cameras) and apply this rig per shot. The parallax offset is considerable and prevents close-range precision when using equirectangular (converted to cube-map) images.

@Baasje85

Typically such a rig does not use fisheye lenses but fixed focal-length lenses. If you calibrated this rig (and this is the missing documentation part), it would be better than the combined image: more image detail, more overlap per photo, and thus more depth. Then again, openMVG recently showed that calibrated stitched images are superior to unstitched, unrigged images with respect to matching them in SfM. So you may wonder whether a workflow that starts with pre-stitched images and then augments with the raw images gives faster results.

@jeffreyianwilson

The Insta360 Pro 2 and Pro use 200° lenses. Like I said, close-proximity features and the camera offset from the nodal point prevent any sort of precision from baked equirectangular images.

@jeffreyianwilson

I am looking at constructing a "calibration room" which would have enough features to treat each lens/sensor separately, yet together as part of a rig.

@Baasje85

@jeffreyianwilson you might be interested in https://blog.elphel.com/category/calibration/

@fabiencastan (Member)

Hi @jeffreyianwilson,

Does Meshroom/AliceVision support camera rigs/fisheye lenses? I want to take the individual camera output from a 360 rig (8 × 200° cameras) and apply this rig per shot.

Yes, this is fully supported as explained here:
https://github.com/alicevision/meshroom/wiki/Multi-Camera-Rig
The calibration of the rig is fully automatic.
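For reference, a minimal sketch of the folder layout that wiki page describes, with hypothetical folder and file names (one numbered subfolder per camera of the rig; the n-th image in each folder belongs to the n-th synchronized shot):

rig/
  0/
    shot_001.jpg
    shot_002.jpg
  1/
    shot_001.jpg
    shot_002.jpg

Dropping the parent rig folder into Meshroom should then let it group the images per camera and calibrate the rig automatically.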

Would you be open to sharing one of your datasets with me? I would be interested in doing more tests on these setups. If yes, you could use the private mailing list [email protected].

Thanks

natowi added the "feature request" (feature request from the community) and "do not close" (issue that should stay open; avoid automatic closing when stale) labels and removed the "type:enhancement" label Oct 27, 2019
stale bot commented Feb 24, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the "stale" (for issues that become stale with no solution) label Feb 24, 2020
natowi removed the "stale" label Feb 24, 2020
@CorentinLemaitre

Hello, I have a Samsung Gear 360 camera and I take a 30-megapixel equirectangular 360° picture every 10 meters to survey bicycle routes. Then I add geolocation to the pictures and share them on Mapillary, mostly to add map features in OpenStreetMap.
I wonder if there is support for 360° pictures, or whether it is still something to develop.
I could share any picture I have taken if that is helpful.

@fabiencastan (Member)

@CorentinLemaitre,
Yes, it would be interesting to have access to a dataset made with the Samsung Gear 360.

There is no support for 360° images as input. We have support for a rig of synchronized cameras, but I don't know if you have access to the raw images on the Samsung Gear 360 (before stitching).

@CorentinLemaitre

I have the 360° images before processing because this camera (2016) doesn't do the stitching itself. After I have done the stitching process I delete those files, so I have really few left on my computer.
Here is an example of a picture I have before stitching:
360_5204
And the result after stitching:
360_5204_Stitch_YHC

@EwoutH (Contributor) commented Apr 30, 2020

I have a small dataset of closely located 360-degree equirectangular images (taken with a Gear 360 2016). I previously used them with Cupix. I can provide one (in private) if it helps development.

Here are five images from my old rooftop to start with:

Unstitched (7776x3888 dual fisheye)

Stitched (7776x3888 equirectangular)

@fabiencastan (Member)

Thanks for the datasets.

@Baasje85

@fabiencastan would you be interested in other vendors too?

@SM-26 commented Jun 11, 2020

@Baasje85 I think it would not hurt to have a few different datasets for testing.

@fabiencastan We could use a demo & testing dataset similar to https://github.com/alicevision/dataset_monstree. Maybe we can put something together based on user contributions for a few different camera models.

I'll be more than happy to help; I have an Insta360 One X.

Any notes or pointers on how you want a sample set? How many pictures? HDR on or off? Indoor or outdoor?

@tscibilia

Here's my contribution... a 5-image interior dataset from an Insta360 One X.

I actually want to use Meshroom for interiors, so I have a lot more if it's helpful (an entire house). I could provide it privately through GitHub; just contact me.

@natowi (Member) commented Jun 11, 2020

I'm merging the shared datasets into one repository with a handful of images per dataset, all under the CC-BY-SA-4.0 license. If you are OK with it, leave a thumbs up on this post and I'll add your dataset.
@EwoutH @Baasje85 @SM-26 @tscibilia

When it is well structured, I can move it to AliceVision. https://github.com/natowi/meshroom-360-datasets

@SM-26 commented Jun 11, 2020

@tscibilia beat me to the punch.
But I saw that there is no info about the Insta360 One X in the camera DB:

Sensor: 1/2.3" (~6.16 x 4.62mm)
source

@natowi (Member) commented Jun 11, 2020

@SM-26 what is the make and model in the metadata?

How many pictures? HDR on or off? Indoor or outdoor?

We don't need too many images (let's say images from ~6 different locations); these datasets are just for testing and demonstration.
I think indoor/outdoor with and without HDR would be nice. If you are using a tripod, you could use the same positions for HDR on/off.

@SM-26 commented Jun 11, 2020

@SM-26 what is the make and model in the metadata?

camera brand: Arashi Vision
camera model: Insta360 ONE X

I'm on it, good thing the weekend is here.

@SM-26 commented Jun 17, 2020


Sorry it took me such a long time. I've created a PR.
I'd love to help as much as I can.

@tscibilia

Just catching up. I saw the repo and @SM-26's pull request, so I did a PR of my own.

@Aracon commented Feb 10, 2021

Are there any recommended settings or workflows for dual-fisheye images?
I am trying to use a Gear 360 outdoors. Both stitched and non-stitched (dual-fisheye) images are accessible on this camera.
I tried to extract "regular" images with aliceVision_utils_split360Images.exe, but only a few images (4 out of 340) were matched with the default Meshroom settings.
I also saw the "fisheye" option in the camera settings in Meshroom; should I split the non-stitched images and try this option?

@akirayou commented Jul 2, 2021

FYI: I tried it on a RICOH THETA Z1 (dual-fisheye images). Meshroom runs.

Here is the report

I used my own script to do the split.
I also added a vignette to remove features on the edge of the fisheye circles (this gave slightly better camera pose estimation).

In my experiment, using the rig setting is not good for 360° images, because the PrepareDenseScene node fails.
Just adding an EXIF camera serial number to each L/R image was enough.
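A hedged sketch of that EXIF trick, assuming exiftool is available and the left/right fisheye halves have been split into separate folders (folder names and serial values are hypothetical):

exiftool -SerialNumber=THETA_L C:\theta\left
exiftool -SerialNumber=THETA_R C:\theta\right

Distinct serial numbers should make Meshroom treat the two lenses as two physical cameras with separate intrinsics, without going through the rig workflow.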

@fabiencastan (Member)

@akirayou
Have you tried the split360Images executable?

@natowi It would be good to add the corresponding node in Meshroom: https://github.com/natowi/meshroom_external_plugins/blob/master/Split360Images.py
Could you submit it as a PR?

@akirayou commented Jul 2, 2021

Have you tried the split360Images executable?

I've not tried it yet, because I want to try with dual-fisheye images, and the THETA Z1's dual-fisheye image format is DNG [not supported].
I also want to merge the JPEG's EXIF data with the DNG's image data, so I have to write the script myself.

Using the equirectangular image (the THETA's JPEG output) and split360Images sounds like the easy way, but it seems to need more photos for the reconstruction.
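For what it's worth, the JPEG-to-DNG metadata copy could presumably be sketched with exiftool rather than a custom script (file names are hypothetical, and this assumes exiftool can write to the DNG in question):

exiftool -TagsFromFile R0010072.JPG -exif:all R0010072.DNG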

@fabiencastan (Member)

DNG and dual-fisheye are supposed to be supported.

@akirayou commented Jul 2, 2021

I cannot run it in my environment (JPG is OK): Meshroom-2021.1.0 on Windows 10 20H2 (Japanese).

C:\Users\youak>C:\Meshroom-2021.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe -i C:\Users\youak\Desktop\meshroom_theta\DNG\R0010072.DNG -o a -m dualfisheye
Program called with the following parameters:

  • dualFisheyeSplitPreset = "center" (default)
  • equirectangularDemoMode = 0 (default)
  • equirectangularNbSplits = 2 (default)
  • equirectangularSplitResolution = 1200 (default)
  • input = "C:\Users\youak\Desktop\meshroom_theta\DNG\R0010072.DNG"
  • output = "a"
  • splitMode = "dualfisheye"
  • verboseLevel = "info" (default)

[00:08:34.793096][fatal] Can't write output image file 'C:\Users\youak\a/R0010072_0.DNG'.

natowi linked a pull request Jul 2, 2021 that will close this issue
natowi removed the "do not close" (issue that should stay open; avoid automatic closing when stale) label Jul 7, 2021
@calbear47

@fabiencastan I'm assuming adding this node in the graph editor hasn't been released yet. Is that correct?

@fabiencastan (Member)

Yes.

@dpredie commented Apr 14, 2022

Hi, I'm trying to decompose a THETA X 11K JPEG using aliceVision_utils_split360Images.exe, but it seems to only generate images along the horizon line. Are there any parameters that can be passed so that it splits the top and bottom too?

@natowi (Member) commented Apr 14, 2022

For dualfisheye there is a top/bottom setting.
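Presumably this is the dualFisheyeSplitPreset parameter visible in the tool's parameter dump above (default "center"); a hedged example, with a hypothetical input path:

aliceVision_utils_split360Images.exe -i C:\photos\dualfisheye.jpg -o C:\photos\splits -m dualfisheye --dualFisheyeSplitPreset top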

@Hamed93g commented Dec 2, 2022

Hi guys, I have zero coding experience.
I want to split the 360° images into top/bottom/left/right views, not just along the horizon line.

I used this command:
.\aliceVision_utils_split360Images.exe -i C:\Users\craig\Pictures\THETA — equirectangularNbSplits 32 -o C:\Users\craig\Pictures\mesh

from this link:
https://medium.com/theta360-guide/splitting-360-images-into-2d-images-137fab5406da

What should I do (with simple coding)? (Using an Insta360 One X.)

@natowi (Member) commented Jan 15, 2023

@Hamed93g The em dash and space in "— equirectangularNbSplits" (where "--equirectangularNbSplits" was intended) may cause issues. Try

.\aliceVision_utils_split360Images.exe -i "C:\Users\craig\Pictures\THETA" --equirectangularNbSplits 32 -o "C:\Users\craig\Pictures\mesh"

If this does not help, please open a new issue.

@fabiencastan (Member)

Since the release 2023.2, the Split360Images node can be added directly into the graph after the CameraInit node:
#1939

@kromond commented Mar 3, 2024

Since the release 2023.2, the Split360Images node can be added directly into the graph after the CameraInit node: #1939

This works well, thank you so much for adding it. I am having trouble, though: I'm using bracketed exposures to make an HDR spherical pano from a Gear 360 camera, and the resulting SfM data from Split360Images does not seem to work with the HDR pipeline when I plug it in. The SfM data all looks correct, but LdrToHdrSampling is mixing images from each 'rig'. Exposure blending is also not doing the right thing even when I use the un-split original images, and I have not yet figured out why.

@natowi (Member) commented Mar 3, 2024

@kromond You can open a new issue for this.
