[REVIEW]: Caliscope: GUI Based Multicamera Calibration and Motion Tracking #7155
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
Software report:
Commit count by author:
Paper file info: 📄 Wordcount for ✅ The paper includes a |
License info: ✅ License found: |
Hi @mprib and reviewers @Timozen and @davidpagnon - this is the review thread for the submission. All of our communications will happen here from now on. Meanwhile, please check the post at the top of the issue for instructions on how to generate your own review checklist. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues directly in the software repository. If you do so, please mention this thread so that a link is created (and I can keep an eye on what is happening).

Please also feel free to comment and ask questions in this thread. It is often easier to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package. Please feel free to ping me (@mooniean) if you have any questions or concerns. Thanks! |
Review checklist for @davidpagnon

Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Thank you @davidpagnon and @Timozen for serving as reviewers on this project. I am grateful for your time and look forward to your feedback. |
Review checklist for @Timozen

Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Thank you for making this tool available! I heard good things about it a few months ago but did not find time to test it, so this review is the perfect occasion. The installation procedure and the execution are quite user-friendly, and the output is the same as Anipose or Pose2Sim, so it can be reused as is (or almost: I'll implement very minimal changes to my code so that people can just copy-paste your Caliscope Config.toml file into their Pose2Sim calibration folder). Several aspects make it a pain-free first experience:
Paper:
Doc:
GUI:
Side remarks (which would take more time to implement)

As you point out in the paper, Caliscope's main goal is to make multi-camera calibration easy and straightforward for non-experts. However, you also include a tool to perform pose estimation, triangulate it, and rig the output. I feel like calibration is a huge problem, and having a tool dedicated solely to that purpose could be quite convenient for countless people. Calibration is also needed for the study of animals, objects, or even for architecture, and most people would not care about human pose. And for those who care, MediaPipe/BlazePose is getting outdated, so they may want to use something more recent such as RTMlib. Moreover, that last bit of the package seems to be a bit less polished. What would you think of decoupling these two parts and making them two separate tools?

On the other hand, you coded quite a convenient toolbox to record from multiple webcams and synchronize them. This could be mentioned in the paper, especially since it automatically generates the required "frame_time_history.csv" file. I do realize this may sound contradictory to my previous point, but I feel like it would be even better to include it directly in Caliscope. It would not make the toolbox heavier since the requirements are the same, and people would be able to record and calibrate a scene from one single window.

Off-topic: I believe the toolbox used to be called "pyxy3d". I like the new name, which looks much less like a bunch of letters collapsed together and sounds nice and easy to remember. |
Thank you for your detailed feedback on Caliscope! I recognize the time it takes to dive into a new project and catalogue this information. I'm incredibly grateful for your work on this. And I'm grateful that this has provided an opportunity to get connected to the author of Pose2Sim! I'm passionate about the potential of emerging tools to improve the study of movement and hope that this might be the beginning of a dialogue rather than a one-off interaction. I wanted to reply briefly to note that I am targeting a more extensive follow-up with point-by-point replies by the end of this upcoming week. Best, Mac |
Thank you again for your comments. My replies are woven in below. For suggested improvements that I have not yet been able to resolve, I have opened up issues on the repo and provide links to those here. Regards, Mac
Thank you for the suggestion and I absolutely agree that it's worthwhile to discuss the two projects. I have included this new paragraph in the paper to highlight similarities and distinctions: Pose2Sim (Pagnon et al., 2022) is another tool that merges both camera calibration and markerless pose estimation. Similar to Caliscope, Pose2Sim employs a camera model composed of 4 pinhole camera parameters and 5 distortion parameters. Additionally, both projects export output to the .trc file format to facilitate integration with the biomechanical modelling software OpenSim. While Pose2Sim does not perform bundle adjustment to refine the extrinsic camera estimates, it does provide a number of features that are valuable components of a motion tracking workflow which are not present in Caliscope and would be useful in future motion tracking tools. These include the ability to calibrate camera extrinsics based on scene elements, the capacity to distinguish between multiple subjects in view at once, and more sophisticated triangulation methods that can incorporate the confidence of a model’s prediction of a pose landmark.
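As a concrete illustration of the shared camera model described above, here is a minimal sketch of the standard pinhole plus Brown-Conrady distortion projection (this is illustrative code, not Caliscope's or Pose2Sim's actual implementation): the 4 pinhole parameters (fx, fy, cx, cy) and 5 distortion parameters (k1, k2, p1, p2, k3) enter the projection like this:

```python
import numpy as np

def project_point(X, fx, fy, cx, cy, dist):
    """Project a 3D point in camera coordinates to pixel coordinates
    using the pinhole model with radial/tangential distortion."""
    k1, k2, p1, p2, k3 = dist
    x, y = X[0] / X[2], X[1] / X[2]          # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])

# With zero distortion this reduces to the plain pinhole projection:
u, v = project_point(np.array([0.1, -0.2, 1.0]),
                     1000, 1000, 640, 360, np.zeros(5))
# u = 1000 * 0.1 + 640 = 740.0, v = 1000 * -0.2 + 360 = 160.0
```

OpenCV's `calibrateCamera` estimates exactly this parameter set (a 3x3 camera matrix plus a 5-element distortion vector), which is presumably why the two tools' camera models are interchangeable.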
I have added text to the Post Processing section of the docs to clarify the Holistic OpenSim tracker. Essentially, it is a filtered subset of the MediaPipe Holistic output. In test usage, files of the raw Holistic output became unwieldy because they contain several hundred points for the face and dozens for the hands; but using the smaller output of MediaPipe Pose became problematic because it converts to a much lower resolution for point detection. Holistic OpenSim was my attempt to make a smaller file with better detection that could track the wrist and skull better. Regarding the rig generation/Rigmarole: the "bonus features" characterization is spot on. My intention is to keep Caliscope largely agnostic to these downstream tasks, to primarily try to do "one thing well" (camera calibration), and to provide the foundation of a framework for integrating future pose estimation tools into the pipeline.
I have added the following to the ReadMe for clarity: This README provides a general overview and quick guide to install
Thank you for the feedback. I have added a Video Capture tab to the workflow section of the docs that directs to MultiWebCam.
Thank you. I would need to better understand how to resolve the indeterminate rotations to know how this might be implemented but it does seem like an interesting future addition.
Using your CI action linked below as a template, I've updated the testing to Python 3.10 and 3.11 across macOS/Linux/Windows. And I finally got around to making sure the code lints with ruff.
See comment above. Thanks for the useful resource.
For simplicity I now have the tests installing with pip. I have expanded the tests to include Python 3.10 and 3.11. A patch release of Python 3.9 came out that I found was associated with random segfaults in PyQt, so that version got taken off the list. There are currently issues with the combination of Python 3.12 + MediaPipe + macOS, so 3.12 is not included. This should not affect the core camera calibration workflow, so future developments that do not involve MediaPipe (or macOS) should not experience this roadblock.
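For readers wanting to replicate this kind of setup, a minimal GitHub Actions matrix along the lines described (pip install, ruff, pytest across both Python versions and all three operating systems) might look like the following. The job and step layout here is an assumption, not Caliscope's actual workflow file:

```yaml
name: tests
on: [push, pull_request]
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.10", "3.11"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install . ruff pytest
      - run: ruff check .
      - run: pytest
```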
But typing
Removed
Issue created to add tooltips: mprib/caliscope#636
Issue created to widen text boxes: mprib/caliscope#641 In the issue above I include some screenshots of how it is intended to appear (and how it shows on my system). I have recently removed a Qt theme for simplicity's sake, which may also impact the character legibility.
I'm wondering if images are getting clipped in the GUI? The issue linked above (641) includes some screenshots of how the board should display. My hope is that at the very least the properly rendered board is what is saved out as a
Agree this would be an improvement and have added it as an issue, though this is one of those long-term changes that will take some reflection on how best to implement: mprib/caliscope#637 Just for context, I have found that killing and re-launching these threads can easily result in pretty bad memory leaks. Realistically, it may take some time before I could implement that improvement in a stable way, though I agree it would be a good future addition.
The first two are the original interface and Autocalibrate was added later. I think most of the time autocalibrate will work fine, but I see the value in having the ability to select specific frames as wanted.
This is as intended. It's possible for a bad distortion model to appear reasonable because it also appears zoomed in when "undistorted". When zoomed out, the bizarre distortion around the image edges is apparent.
I agree that this suggestion would improve the flow. I have created an issue to incorporate this in a future version: mprib/caliscope#644
I definitely acknowledge that step requires experimentation to orient to the controls. The colors are the defaults offered by PyQtGraph's AxisItem object and I do not see a way in their documentation to modify the color selection. PyQtGraph is amazing, though. Strongly recommend.
I will plan to include the X+90, etc. controls in the tooltips for clarity.
The Holistic OpenSim tracker is discussed above. It is essentially a subset of the Holistic MediaPipe output that could drive IK on a full skeletal model in OpenSim, but without lots of points for the hands and face. The scaling is just taking the mean distance between select landmarks across a dynamic calibration trial. Those distances are then used to scale a Blender rig that is animated via IK by the subsequent motion trials. The rigging component is in a separate package, Rigmarole, which is based on Blender. Those are definitely side-quests and not the primary goal of the package.
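To make the scaling step concrete, here is a minimal sketch of taking the mean distance between two landmarks across a triangulated trial. The function name and landmark indices are hypothetical, not the actual Caliscope/Rigmarole code:

```python
import numpy as np

def mean_segment_length(xyz, i, j):
    """Mean Euclidean distance between landmarks i and j across all
    frames of a triangulated trial (array of shape frames x landmarks x 3)."""
    d = np.linalg.norm(xyz[:, i] - xyz[:, j], axis=1)
    return float(np.nanmean(d))   # nanmean skips frames with missed detections

# Toy example: a "forearm" segment that is 0.25 m long in every frame
trial = np.zeros((100, 2, 3))
trial[:, 1, 0] = 0.25
print(mean_segment_length(trial, 0, 1))  # → 0.25
```

Segment lengths computed this way could then be used as per-bone scale factors on a rig before running IK.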
I definitely view the "Post Processing" tab as a proof of concept to demonstrate the quality of the calibration and what could be done with it. I am highly drawn to the idea of "do one thing" and see the value of just having the camera system calibration spit out a config which can be used by other tools. Landmark tracking, triangulation, filtering, and inverse kinematics are all such rich domains themselves.
With video recorded from a synchronized camera system (like FLIR), Regarding
Here I begin a bit of a ramble about the project broadly. I'm excited to pursue alternate hardware approaches in the future, such as FLIR, or more DIY approaches such as Raspberry Pis with MIPI cameras. I see this project fundamentally as managing the conversion of raw videos to a human-readable camera calibration. I'm glad that this review has given an opportunity to connect with someone who is working on these problems as well, and I hope that we might continue a conversation about them.
Thank you! Naming things is agonizingly difficult... |
Thank you for this extensive response! I'm satisfied with the answers I do not address below.
So I'm not sure I totally get it. I understand that you "scale a blender rig that is animated via IK", but then you do not use OpenSim scaling and IK, do you?
I feel like it would be worth linking to MultiWebCam at each mention of frame_time_history.csv, or at least there, because when I first had a look at the overview section, I thought you had to create your own script or create the file manually, which can be scary for new users.
Okay, thank you, I don't know much about poetry and did not realize there was such a nice way to publish a package. And more importantly, I would not want to deprive you of such a joy!
I answered on this issue: mprib/caliscope#641
Okay, good to know!
So the use of "Add grid" and "Calibrate" is for selecting specific frames? I do not see anything about it in the doc, it may be something to add?
Okay, it makes sense! What are FLIR cameras? I thought they were mostly for infrared video capture, are you telling me they are also good for RGB capture, synchronization, and that they are rather cheap? In any case, I quite like the idea of using Raspberry Pis with MIPI cameras!
Well, I would not be against the idea of a community creating the Blender of Mocap 😄 It would be quite a long-term project but it does not seem totally out of reach indeed! |
I've cropped the conversation for brevity and hope that this addresses any unresolved points. If there are any outstanding questions remaining, please just let me know! Best, Mac
That is correct. I had originally created that pared-down version with the intention of some ad hoc experimentation in OpenSim and ended up repurposing it for use in Blender, where I built some automated tools in a different package. Given the opportunity for confusion, I have renamed it to
Excellent suggestion. Link to sample data and MultiWebCam added there.
Thank you. Added instructions associated with this to the Intrinsic Calibration docs.
I was sloppy with my language. Teledyne FLIR provides cameras that can record synchronized RGB video (among other types) using a shared hardware trigger. Aside: a potential advantage that I see for a Raspberry Pi-based camera system is that each "camera server" could provide both image capture and potentially some local landmark estimation. Processing times from a multicamera system can become unwieldy when all frames are processed on a single machine, but if distributed across multiple devices it could be far more manageable. |
Installation

I have evaluated the software on macOS 15 using a MacBook with an M1 processor. Following the installation procedure in https://mprib.github.io/caliscope/installation/#__tabbed_2_2, the program did not start. After checking the main README.md of the repo, I saw that for macOS, some env variables need to be set. This resolved the issue ^^ As @davidpagnon already mentioned, the README.md and documentation do not seem to be in complete sync; this should be resolved.

Usage

I will first say that this is the first time I have worked with multi-view pose estimation software. However, I have my fair share of camera calibration experience. I followed the Example Project page. Please note that I found this 404 link while checking the Overview page: https://mprib.github.io/caliscope/sample_data.md

The tutorial YouTube video of the project https://youtu.be/voE3IKYtuIQ should, IMHO, be more prominent for users so that inexperienced people (like me, in this case ^^) can follow along. This was extremely helpful and might make the overall experience smooth!!!

The intrinsic calibration ran quickly on my machine, and the log helped me understand what the tool was trying to do in the background. The extrinsic calibration ran at the same speed for me. After reloading the workflow, the window size (which I had adjusted) reset to the original, which is a minor thing, but it confused me for a second ^^

Initially, I was a bit confused about the repetitive reloading of the workspace; however, I quickly realized that whenever a menu option was unavailable, such as "Post Processing," the underlying reason was that something was not yet in the project folder. So, I quickly learned to utilize the "reload workspace" button whenever something was added to the project or one processing step finished. After that, I was able to reproduce the test project results.
When I attempted to extract the "FACE" option, the whole tool crashed without any notice of what the underlying cause was. I could reproduce this crash several times, and I checked the log files, but they contained no actual clue.
However, the tool still wrote the results into the appropriate folder, and reloading the project also worked.

GUI
Paper

I like the style of the paper. It was written clearly, showing the shortcomings and issues of the current state and what value Caliscope adds to the area. As most information is also inside the repository and documentation, further explanations are optional from my side.

@mprib I still need to check the 'Installation instructions' checkbox, but it would be nice if you could update the online documentation to contain the macOS problems and fixes. |
Thank you for providing this feedback! I am grateful to get your perspective on the project and particularly glad to have the experience of a MacOS user as part of the review. My replies are embedded in context below:
I very much appreciate the catch. I have updated the docs to direct the setting of environment variables as is done on the README. Apologies for the confusion.
Thank you. Updated now to the correct link: https://mprib.github.io/caliscope/sample_project/
Thank you for this advice. I've linked and bolded the walkthrough in the intro of the README.
This is a hiccup related to PyQtGraph, an excellent library. The flicker out and back in is one quirk that happens when loading the 3D viewer, and I haven't found a way around it.
Thank you for the feedback about the first time through. I have the addition of tool tips as an Issue to resolve. I recall clearly the addition of tooltips you did on JeFaPaTo and it really was a helpful thing. Glad to see where the points of confusion lie with Caliscope.
This is definitely something I want to address so I've created an issue here for the face tracker crashing: mprib/caliscope#648 In the meantime I've removed it from the set of sample trackers to avoid confusion.
Absolutely! With the most recent merge, the MacOS fixes are now live on the docs. JeFaPaTo was an awesome project to see because I knew what putting together a full-fledged qt tool like that required. Your review means a lot to me. Thank you for the time you've invested working through the pain points (and documenting them) so that I can make this a more smooth experience for others. |
@mprib, that all sounds very good to me! I checked the last missing box in my review panel, and I would be good to go now to continue with the next part of the submission phase @mooniean :) Concerning the opened issue mprib/caliscope#648, if you need my support and testing again, just let me know over there ^^ |
Apologies for the delay! @davidpagnon are you happy for the review to proceed? |
I am, thanks! |
Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete
Editor Tasks Prior to Acceptance
@mprib Last final checks! After you've finished the tasks in the comment above, let me know :) |
The JOSS release has been uploaded to Zenodo: https://zenodo.org/records/13905848 DOI: 10.5281/zenodo.13905848 I have double checked the ORCID/Author/Title/License info. Please let me know if there is anything that I need to clean up on my end. Thank you! Mac |
@editorialbot generate pdf |
@editorialbot check references |
@editorialbot set v0.5.1 as version |
Done! version is now v0.5.1 |
@editorialbot generate pdf |
@editorialbot recommend-accept |
👋 @openjournals/dsais-eics, this paper is ready to be accepted and published. Check final proof 👉📄 Download article If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5997, then you can now move forward with accepting the submission by compiling again with the command |
@editorialbot generate pdf

🔍 checking out the following:
👋 @mprib - Thanks for producing a really nice submission. I just need you to address the following before I accept this for publication:
No need to conduct a new release. Just let me know when this has been fixed and I'll accept. Thanks! |
I've made the update now to Thank you! Mac |
@editorialbot generate pdf |
@editorialbot accept |
Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository. If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file. You can copy the contents for your CITATION.cff file here: CITATION.cff
If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation. |
🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘 |
🦋🦋🦋 👉 Bluesky post for this paper 👈 🦋🦋🦋 |
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨 Here's what you must now do:
Any issues? Notify your editorial technical team... |
Congratulations @mprib!!! and thanks for the awesome work again @mooniean @davidpagnon @crvernon |
🥳 Congratulations on your new publication @mprib! Many thanks to @mooniean for editing and @Timozen and @davidpagnon for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts. Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join |
🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉 If you would like to include a link to your paper from your README use the following code snippets
This is how it will look in your documentation:

We need your help! The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
Thank you @crvernon! @mooniean, thank you for guiding this submission along the way while I worked through my learning curve. Though I'd once served as a reviewer for JOSS, this was my first time submitting. It's been a valuable experience and a wonderful way to get substantive feedback. @Timozen, thank you for your time reviewing and in particular your comments on the README and docs. Your feedback made things better and also has left me with a growing sense of clarity about how I should structure the documentation going forward to avoid duplication/confusion. It's something that absolutely required an outside set of eyes and I'm grateful for your attention on it. @davidpagnon, I'm so glad that this review provided an opportunity to intersect with you. Your comments have definitely left me reflecting on major next steps for this project. I see the value in maintaining a tight focus on the core calibration components and then creating clean edges so that the calibration output can interface easily with other tools. I look forward to keeping the dialogue going! |
Submitting author: @mprib (Donald Prible)
Repository: https://github.com/mprib/caliscope
Branch with paper.md (empty if default branch):
Version: v0.5.1
Editor: @mooniean
Reviewers: @Timozen, @davidpagnon
Archive: 10.5281/zenodo.13905848
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@Timozen & @davidpagnon, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mooniean know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @davidpagnon
📝 Checklist for @Timozen