[Feature Request] Support for 360 Camera images #526
You could try Meshroom-2019.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe (CLI only).
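For reference, a minimal sketch of how the CLI can be invoked on equirectangular input. The paths are placeholders and the flag names come from a recent AliceVision build, so they may differ in Meshroom-2019.1.0; run the tool with --help first to confirm:

```bat
:: Minimal sketch (Windows cmd). Paths are placeholders; flag names may
:: differ between AliceVision versions - check --help before relying on them.
aliceVision_utils_split360Images.exe ^
  --input "C:\path\to\panoramas" ^
  --output "C:\path\to\split_output" ^
  --splitMode equirectangular ^
  --equirectangularNbSplits 8 ^
  --equirectangularSplitResolution 1200
```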
Thanks! I've done that now, but there is no help in the app for computing the necessary metadata to use with Meshroom.
What do you mean by "computing the necessary metadata"?
I figured it out, thanks. The images split perfectly. When I tried Meshroom, the SfM step failed with images from one panorama, but if I renamed the files from several panoramas to get some parallax in sequential images for an initial point cloud, then augmented the scan with the remaining photos and set "Downscale" = 1 on the dense cloud as noted in #409, it proceeds perfectly!
I know it can split the images, but does it deal with the cube-map pinhole cameras as a fixed rig?
@jeffreyianwilson I have tested this with the datasets from here and it works.
@natowi I would leave this issue open and have the new openMVG code imported to support panoramas natively.
Excellent. Processing hundreds if not thousands of panoramas into cube-map images is an unnecessary waste of storage.
Does Meshroom/AliceVision support camera rigs and fisheye lenses? I want to take the individual camera output from a 360 rig (8 x 200-degree cameras) and apply this rig per shot. The parallax offset is considerable and prevents close-range precision when using equirectangular (converted to cubemap) images.
Typically such a rig does not use fisheye lenses but fixed focal-length lenses. If you were to calibrate this rig (and this is the missing documentation part), it would be better than the combined image: more image detail, more overlap per photo, and thus more depth. Then again, openMVG recently showed that calibrated stitched images are superior to unstitched, unrigged images with respect to matching them in SfM. So you may wonder whether a workflow that starts with pre-stitched images and then augments with the raw images gives faster results.
The Insta360 Pro 2 and Pro use 200-degree lenses. Like I said, close-proximity features and the camera offset from the nodal point prevent any sort of precision from baked equirectangular images.
I am looking at constructing a "calibration room" which would have enough features to treat each lens/sensor separately, yet as a whole as part of a rig.
@jeffreyianwilson you might be interested in https://blog.elphel.com/category/calibration/
Yes, this is fully supported as explained here. Would you be open to sharing one of your datasets with me? I would be interested in doing more tests on these setups. If yes, you could use the private mailing list [email protected]. Thanks.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Hello, I have a Samsung Gear 360 camera, and I take a 30-megapixel equirectangular 360° picture every 10 meters to survey bicycle routes. Then I add geolocation to the pictures and share them on Mapillary, mostly to add map features in OpenStreetMap.
@CorentinLemaitre There is no support for 360° images as input. We have support for a rig of synchronized cameras, but I don't know if you have access to the raw images on the Samsung Gear 360 (before stitching).
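As a side note for anyone who does have the raw per-lens images: as far as I understand the rig feature, Meshroom expects the images organized into one folder per physical camera, with synchronized shots matched by identical file names across folders. A hypothetical layout (all folder and file names here are made up; please check the documentation for your Meshroom version):

```bat
:: Hypothetical rig layout: one subfolder per physical camera, with
:: synchronized shots matched across cameras by identical file names.
mkdir rig\cam01 rig\cam02
copy lens1_shotA.jpg rig\cam01\0001.jpg
copy lens2_shotA.jpg rig\cam02\0001.jpg
copy lens1_shotB.jpg rig\cam01\0002.jpg
copy lens2_shotB.jpg rig\cam02\0002.jpg
:: Then import the whole "rig" folder into Meshroom at once.
```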
I have a small dataset of closely located 360-degree equirectangular images (taken with a Gear 360 2016). I previously used them with Cupix. I can provide one (in private) if it helps development. Here are five images from my old rooftop to start with: unstitched (7776x3888 dual fisheye) and stitched (7776x3888 equirectangular).
Thanks for the datasets. |
@fabiencastan would you be interested in other vendors too? |
I'll be more than happy to help. Any notes or pointers on how you want a sample set?
Here's my contribution: a 5-image interior dataset from an Insta360 One X. I actually want to use Meshroom for interiors, so I have a lot more if it's helpful (an entire house). I could provide it privately via GitHub; just contact me.
I'm merging the shared datasets into one repository with a handful of images per dataset, all under the CC-BY-SA-4.0 license. If you are ok with it, leave a thumbs up on this post and I'll add your dataset. When it is well structured, I can move it to AliceVision. https://github.com/natowi/meshroom-360-datasets
@tscibilia beat me to the punch. Sensor: 1/2.3" (~6.16 x 4.62 mm)
@SM-26 what is the make and model in the metadata?
We don't need too many images (let's say images from ~6 different locations); these datasets are just for testing and demonstration.
Camera brand: Arashi Vision. I'm on it, good thing the weekend is here.
Sorry it took me such a long time. |
Just catching up: I saw the repo and @SM-26's pull request, so I did a PR of my own.
Are there any recommended settings or workflow for double-fisheye images? |
FYI: I tried it on a RICOH THETA Z1 (dual-fisheye images). Meshroom runs. I used the original script to split them. In my experiment, using the rig setting is not good for 360-degree images, because the PrepareDenseScene node fails.
@akirayou @natowi It would be good to add the corresponding node in Meshroom: https://github.com/natowi/meshroom_external_plugins/blob/master/Split360Images.py
I've not tried it yet. Using the equirectangular image (THETA's JPEG output) and split360Images sounds like the easy way, but it seems to need more photos for reconstruction.
DNG and dual-fisheye are supposed to be supported.
I cannot run it in my environment (JPG is OK).
@fabiencastan I'm assuming adding this node in the graph editor hasn't been released yet. Is that correct? |
yes |
Hi, I'm trying to decompose a THETA X 11K JPEG using aliceVision_utils_split360Images.exe, but it seems to only generate images on the horizon line. Are there any parameters that can be passed so it splits the top and bottom too?
For dual-fisheye there is a top/bottom setting.
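For what it's worth, a sketch of what that might look like on the command line. The preset values I know of are top, bottom, and center, but verify against --help for your build; paths are placeholders:

```bat
:: Sketch only: split dual-fisheye input using the top/bottom preset
:: mentioned above. Paths are placeholders; confirm flags with --help.
aliceVision_utils_split360Images.exe ^
  --input "C:\path\to\dualfisheye" ^
  --output "C:\path\to\split_output" ^
  --splitMode dualfisheye ^
  --dualFisheyeSplitPreset top
```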
Hi guys, I have zero coding experience. I used this code, from this link: what should I do?
@Hamed93g In "THETA— equirectangularNbSplits", the -- and spaces may cause issues. Try:
.\aliceVision_utils_split360Images.exe -i "C:\Users\craig\Pictures\THETA
If this does not help, please open a new issue.
Since release 2023.2, the Split360Images node can be added directly into the graph after the CameraInit node.
This functions well, thank you so much for adding it. I am having trouble, though. I'm using bracketed exposures to make an HDR spherical pano from a Gear 360 camera. The resulting SfM data from Split360Images does not seem to work with the HDR pipeline when I plug it in. The SfM data all looks correct, but LdrToHdrSampling is mixing images from each 'rig'. Exposure blending is also not doing the right thing even when I use the un-split original images, and I have not yet figured out why.
@kromond You can open a new issue for this |
It would be great if there was support for the standard flat projection (equirectangular) images given out by 360 cameras.