
[REVIEW]: Caliscope: GUI Based Multicamera Calibration and Motion Tracking #7155

Closed
editorialbot opened this issue Aug 28, 2024 · 61 comments
Labels
accepted published Papers published in JOSS Python recommend-accept Papers recommended for acceptance in JOSS. review TeX Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot
Collaborator

editorialbot commented Aug 28, 2024

Submitting author: @mprib (Donald Prible)
Repository: https://github.com/mprib/caliscope
Branch with paper.md (empty if default branch):
Version: v0.5.1
Editor: @mooniean
Reviewers: @Timozen, @davidpagnon
Archive: 10.5281/zenodo.13905848

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/7fb23eeb517d6d4c8c740c71c75aac82"><img src="https://joss.theoj.org/papers/7fb23eeb517d6d4c8c740c71c75aac82/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/7fb23eeb517d6d4c8c740c71c75aac82/status.svg)](https://joss.theoj.org/papers/7fb23eeb517d6d4c8c740c71c75aac82)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@Timozen & @davidpagnon, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mooniean know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @davidpagnon

📝 Checklist for @Timozen

@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.5281/zenodo.7233714 is OK
- 10.1016/j.celrep.2021.109730 is OK
- 10.48550/arXiv.1906.08172 is OK
- 10.1038/s41593-018-0209-y is OK
- 10.1007/3-540-44480-7_21 is OK
- 10.1038/s41592-019-0686-2 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: The OpenCV Library
- No DOI given, and none found for title: Indie Vicon System Cost and Solo Operator Guide
- No DOI given, and none found for title: Large-Scale Bundle Adjustment in SciPy
- No DOI given, and none found for title: OpenMMLab Pose Estimation Toolbox and Benchmark

❌ MISSING DOIs

- 10.3389/fspor.2020.00050 may be a valid DOI for title: Evaluation of 3D Markerless Motion Capture Accurac...

❌ INVALID DOIs

- None

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.90  T=2.88 s (98.9 files/s, 518906.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
CSV                             19              0              0        1260835
XML                            132              0          24029         195128
Python                          91           2762           1891           8776
TOML                            14            113             17            694
Markdown                        20            260              0            585
SVG                              4              0              0            292
YAML                             4             25             12            135
TeX                              1              9              0            123
-------------------------------------------------------------------------------
SUM:                           285           3169          25949        1466568
-------------------------------------------------------------------------------

Commit count by author:

  2806	mprib
   139	Mac Prible
     3	dependabot[bot]
     2	Ryan Govostes
     1	Jaeseung Han
     1	Rohan

@editorialbot
Collaborator Author

Paper file info:

📄 Wordcount for paper.md is 731

✅ The paper includes a Statement of need section

@editorialbot
Collaborator Author

License info:

✅ License found: BSD 2-Clause "Simplified" License (Valid open source OSI approved license)

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@mooniean

Hi @mprib and reviewers @Timozen and @davidpagnon - this is the review thread for the submission. All of our communications will happen here from now on.

Meanwhile, please check the post at the top of the issue for instructions on how to generate your own review checklist. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues directly in the software repository. If you do so, please mention this thread so that a link is created (and I can keep an eye on what is happening). Please also feel free to comment and ask questions in this thread. It is often easier to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

Please feel free to ping me (@mooniean) if you have any questions or concerns. Thanks!

@davidpagnon

davidpagnon commented Aug 28, 2024

Review checklist for @davidpagnon

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/mprib/caliscope?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@mprib) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@mprib

mprib commented Aug 28, 2024

Thank you @davidpagnon and @Timozen for serving as reviewers on this project. I am grateful for your time and look forward to your feedback.

@Timozen

Timozen commented Aug 30, 2024

Review checklist for @Timozen

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/mprib/caliscope?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@mprib) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@davidpagnon

davidpagnon commented Sep 5, 2024

Thank you for making this tool available! I heard good things about it a few months ago but did not find time to test it, so this review is the perfect occasion.

The installation procedure and the execution are quite user-friendly, and the output is the same as AniPose or Pose2Sim so it can be reused as is (or almost, I'll implement very minimal changes to my code so that people can just copy paste your caliscope Config.toml file into their Pose2Sim calibration folder).

Several aspects make it a pain-free first experience:

  • A GUI that launches without having to type any Python command
  • Clear tutorial and guidelines, and videos for every stage but the last (pose estimation)
  • Demonstration data which constitutes a clean model to follow

Paper:

  • It is clear, short, and concise.
  • I absolutely don't want to push it as there is obviously a conflict of interest, but you could cite the Pose2Sim calibration in your literature. It does not use bundle adjustment and is not as convenient as your abstract class, but it also provides 5 distortion coefficients and allows the user to calibrate with elements of the scene if they forgot to take a calibration board with them.
  • More information could be given on the pose estimation part, especially on the holistic opensim optimization, the rig generation, and how to use it with Rigmarole (unless you want to keep it as a bonus feature outside of the scope of Caliscope).

Doc:

  • The text on the GitHub.io website is not the same as in the Readme.md file. As I am not sure which is the most up to date, I suppose it could be confusing to other users as well. Maybe you could have a minimal Readme.md file that redirects to the website?
  • I installed it within a conda environment instead of venv, and it still went smoothly
  • You specify that the user either needs to have their cameras already synchronized and with the same number of frames, or to generate a frame_time_history.csv file. However, you do not redirect to your multiwebcam package (actually you do in the Readme.md but not in the documentation). It would be very convenient to give this hint to the user.
  • Although there can be some indeterminacy in rotations, most people still use a checkerboard for calibration. Nothing important, but it would be nice to include this option.
  • The code testing seems to be testing almost all functions and classes, which is great. However, the workflow could do some linting (none done for now), test on more OS (only Linux for now), and with more Python versions (only 3.11 for now).
    Obviously, according to your instructions, you've done some manual testing on Windows/MacOS/Linux, but it could be made automatic upon each push to the repository. See for example this package, which tests on all OS, with python versions 3.8 to 3.11.
    It's also a bit strange that the installation procedure asks for Python 3.10 while the tests run on 3.11. Out of curiosity, the pyproject.toml requires python >=3.10, <3.12: what prevents it from running on other versions? Lastly, you make the user install Caliscope via pip, but tests use poetry instead.
    Really not needed, but I take advantage of this point to make a suggestion: you could add another workflow to automatically publish your package to PyPI upon each new release (see [this example workflow](https://github.com/davidpagnon/Sports2D/blob/main/.github/workflows/publish-on-release.yml))

GUI:

  • It's a matter of personal taste, but if the terminal window is not wide enough, the ASCII art collapses and it almost looks like a bug. I would suggest removing it.
  • To make everything clearer, what would you think of adding a description for each button upon hovering over them with the mouse? This is a small adjustment but it could make the user experience slightly easier.
  • The input boxes in which I enter integers are too small. It would be nice to widen these boxes.
    Capture d'écran 2024-09-04 164440
  • I do not quite get how the Charuco board generation works. Once I have set the number of rows and columns and the square size, why are there no margins on both sides of the image when I increase both its width and height?
  • It could be useful to add a button to stop camera calibration, volume calibration, and post-processing once they have started. Or just to make these buttons clickable a second time to abort. For example, this can be useful if we process long videos and realize we chose wrong parameters.
  • What is the use of "Add grid" and "Calibrate" in the Camera tab? It seems like "Autocalibrate" is the only useful button.
  • "Zoom" only takes effect when "Apply distortion" is ticked.
  • It would make more sense to move the "Calibrate capture volume" to the "Capture volume" tab, in the same way that "Camera calibration" is in the "Cameras" tab.
  • It is hard to know along which axis to adjust the coordinate system since there is no clear way of knowing which color corresponds to which axis. A simple way would be to comply with the xyz -> RGB standard.
  • X+, X-, etc do not invert the axis, but rotate the 3D points around axes by 90°. I would simply change the buttons to X+90°, X-90°, etc.
  • What is holistic OpenSim? OpenSim is not installed, so how do you perform scaling, inverse kinematics, and rigging? Is it a custom algorithm? I would like to know a bit more about it.

Side remarks (which would take more time to implement)

As you point out in the paper, Caliscope's main goal is to make multi-camera calibration easy and straightforward for non-experts. However, you also include a tool to perform pose estimation, triangulate it, and rig the output. I feel like calibration is a huge problem, and having a tool dedicated solely to that purpose could be quite convenient for countless people. Calibration is also needed for the study of animals, objects, or even for architecture, and most people would not care about human pose. And for those who care, MediaPipe/BlazePose is getting outdated so they may want to use something more recent such as RTMlib. Moreover, that last bit of the package seems to be a bit less polished. What would you think of decoupling these two parts and making them two separate tools?

On the other hand, you coded quite a convenient toolbox to record from multiple webcams and synchronize them. This could be mentioned in the paper, especially since it automatically generates the "frame_time_history.csv" file that is required. I do realize this may sound contradictory to my previous point, but I feel like it would be even better to directly include it in Caliscope. It would not make the toolbox heavier since the requirements are the same, and people would be able to record and calibrate a scene from one single window.

Off-topic: I believe the toolbox used to be called "pyxy3d". I like the new name, which looks much less like a bunch of letters collapsed together and sounds nice and easy to remember.

@mprib

mprib commented Sep 8, 2024

@davidpagnon,

Thank you for your detailed feedback on caliscope! I recognize the time it takes to dive into a new project and catalogue this information. I'm incredibly grateful for your work on this.

And I'm grateful that this has provided an opportunity to connect with the author of Pose2Sim! I'm passionate about the potential of emerging tools to improve the study of movement and hope that this might be the beginning of a dialogue rather than a one-off interaction.

I wanted to reply briefly to note that I am targeting a more extensive follow-up with point-by-point replies by the end of this upcoming week.

Best,

Mac

@mprib

mprib commented Sep 16, 2024

@davidpagnon ,

Thank you again for your comments. My replies are woven in below. For suggested improvements that I have not yet been able to resolve, I have opened up issues on the repo and provide links to those here.

Regards,

Mac

Thank you for making this tool available! I heard good things about it a few months ago but did not find time to test it, so this review is the perfect occasion.

Several aspects make it a pain-free first experience:

A GUI that launches without having to type any Python command

Clear tutorial and guidelines, and videos for every stage but the last (pose estimation)

Demonstration data which constitutes a clean model to follow

Paper:

It is clear, short, and concise.

I absolutely don't want to push it as there is obviously a conflict of interest, but you could cite the Pose2Sim calibration in your literature. It does not use bundle adjustment and is not as convenient as your abstract class, but it also provides 5 distortion coefficients and allows the user to calibrate with elements of the scene if they forgot to take a calibration board with them.

Thank you for the suggestion and I absolutely agree that it's worthwhile to discuss the two projects. I have included this new paragraph in the paper to highlight similarities and distinctions:

Pose2Sim (Pagnon et al., 2022) is another tool that merges both camera calibration and markerless pose estimation. Similar to Caliscope, Pose2Sim employs a camera model composed of 4 pinhole camera parameters and 5 distortion parameters. Additionally, both projects export output to the .trc file format to facilitate integration with the biomechanical modelling software OpenSim. While Pose2Sim does not perform bundle adjustment to refine the extrinsic camera estimates, it does provide a number of features that are valuable components of a motion tracking workflow which are not present in Caliscope and would be useful in future motion tracking tools. These include the ability to calibrate camera extrinsics based on scene elements, the capacity to distinguish between multiple subjects in view at once, and more sophisticated triangulation methods that can incorporate the confidence of a model’s prediction of a pose landmark.
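For readers unfamiliar with the camera model referenced above, here is a minimal pure-NumPy sketch of projecting a point with the 4 pinhole parameters and the 5 OpenCV-style distortion coefficients (k1, k2, p1, p2, k3) that both projects employ. This is an illustration of the standard model only, not code from either project:

```python
import numpy as np

def project_point(X, fx, fy, cx, cy, dist):
    """Project a 3D camera-frame point using the 4 pinhole parameters
    (fx, fy, cx, cy) and the 5 distortion coefficients (k1, k2, p1, p2, k3)."""
    k1, k2, p1, p2, k3 = dist
    x, y = X[0] / X[2], X[1] / X[2]            # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])

# With all five coefficients zero, this reduces to plain pinhole projection.
u, v = project_point(np.array([0.1, -0.2, 1.0]), 800, 800, 320, 240,
                     np.zeros(5))
```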

More information could be given on the pose estimation part, especially on the holistic opensim optimization, the rig generation, and how to use it with Rigmarole (unless you want to keep it as a bonus feature outside of the scope of Caliscope).

I have added text to the Post Processing section of the docs that clarifies the Holistic OpenSim tracker.

Essentially, it is a filtered subset of the Mediapipe Holistic output. In test usage, files of the raw Holistic output became unwieldy due to containing several hundred points for the face and dozens for the hands; but using the smaller output of Mediapipe Pose became problematic because it converts to a much lower resolution for point detection. Holistic OpenSim was my attempt to make a smaller file with better detection that could track the wrist and skull better.
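The "filtered subset" idea can be sketched in a few lines. The landmark names and the subset chosen below are hypothetical, not the actual selection used by the Holistic OpenSim tracker:

```python
import numpy as np

# Hypothetical landmark catalogue: in practice Mediapipe Holistic emits
# hundreds of face points and dozens of hand points per frame.
ALL_LANDMARKS = ["nose", "left_wrist", "right_wrist", "left_hip",
                 "right_hip", "face_001", "face_002"]
KEEP = ["nose", "left_wrist", "right_wrist", "left_hip", "right_hip"]

def filter_landmarks(points, names=ALL_LANDMARKS, keep=KEEP):
    """Keep only a named subset of columns from a (frames, landmarks, 3)
    array of triangulated points, shrinking the output file."""
    idx = [names.index(n) for n in keep]
    return points[:, idx, :], keep

frames = np.zeros((10, len(ALL_LANDMARKS), 3))
subset, kept = filter_landmarks(frames)
```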

Regarding the rig generation/Rigmarole: the "bonus features" characterization is spot on. My intention is to keep Caliscope largely agnostic to these downstream tasks, to primarily do "one thing well" (camera calibration), and to provide the foundation of a framework for integrating future pose estimation tools into the pipeline.

Doc:

The text on the GitHub.io website is not the same as in the Readme.md file. As I am not sure which is the most up to date, I suppose it could be confusing to other users as well. Maybe you could have a minimal Readme.md file that redirects to the website?

I have added the following to the ReadMe for clarity:

This README provides a general overview and quick guide to install Caliscope on your system. For a more detailed description of the calibration process and workflow, please see our docs.

I installed it within a conda environment instead of venv, and it still went smoothly

You specify that the user either needs to have their cameras already synchronized and with the same number of frames, or to generate a frame_time_history.csv file. However, you do not redirect to your multiwebcam package (actually you do in the Readme.md but not in the documentation). It would be very convenient to give this hint to the user.

Thank you for the feedback. I have added a Video Capture tab to the workflow section of the docs that directs to MultiWebCam.

Although there can be some indeterminacy in rotations, most people still use a checkerboard for calibration. Nothing important, but it would be nice to include this option.

Thank you. I would need to better understand how to resolve the indeterminate rotations to know how this might be implemented but it does seem like an interesting future addition.

The code testing seems to be testing almost all functions and classes, which is great. However, the workflow could do some linting (none done for now), test on more OS (only Linux for now), and with more Python versions (only 3.11 for now).

Using your CI action linked below as a template, I've updated the testing to run on Python 3.10 and 3.11 across macOS/Linux/Windows. And I finally got around to making sure the code lints with ruff.

Obviously, according to your instructions, you've done some manual testing on Windows/MacOS/Linux, but it could be made automatic upon each push to the repository. See for example this package, which tests on all OS, with python versions 3.8 to 3.11.

See comment above. Thanks for the useful resource.

It's also a bit strange that the installation procedure asks for Python 3.10 while the tests run on 3.11. Out of curiosity, the pyproject.toml requires python >=3.10, <3.12: what prevents it from running on other versions? Lastly, you make the user install Caliscope via pip, but tests use poetry instead.

For simplicity I now have the tests installing with pip. I have expanded the tests to include 3.10 and 3.11.

There was a patch release of Python 3.9 that I found was associated with random segfaults in PyQt, so that one got taken off the list.

There are currently issues with the combination of Python 3.12 + Mediapipe + macOS, so 3.12 is not included. This should not affect the core camera calibration workflow, so future developments that do not involve Mediapipe (or macOS) should not experience this roadblock.

Really not needed, but I take advantage of this point to make a suggestion: you could add another workflow to automatically publish your package to PyPI upon each new release (see [this example workflow](https://github.com/davidpagnon/Sports2D/blob/main/.github/workflows/publish-on-release.yml))

But typing poetry publish really does give me so much joy!

GUI:

It's a matter of personal taste, but if the terminal window is not wide enough, the ASCII art collapses and it almost looks like a bug. I would suggest removing it.

Removed

To make everything clearer, what would you think of adding a description for each button upon hovering over them with the mouse? This is a small adjustment but it could make the user experience slightly easier.

Issue created to add tooltips: mprib/caliscope#636

The input boxes in which I enter integers are too small. It would be nice to widen these boxes.
Capture d'écran 2024-09-04 164440

Issue created to widen text boxes: mprib/caliscope#641

In the issue above I include some screenshots of how it is intended to appear (and shows on my system). I have recently removed a qt theme for simplicity's sake which may also impact the character legibility.

I do not quite get how the Charuco board generation works. Once I have set the number of rows and columns and the square size, why are there no margins on both sides of the image when I increase both its width and height?

I'm wondering if images are getting clipped in the GUI? The issue linked above (641) includes some screenshots of how the board should display. My hope is that, at the very least, the properly rendered board is what is saved out as a PNG.

It could be useful to add a button to stop camera calibration, volume calibration, and post-processing once they have started. Or just to make these buttons clickable a second time to abort. For example, this can be useful if we process long videos and realize we chose wrong parameters.

Agree this would be an improvement and have added it as an issue, though this is one of those long-term issues that may take some time to reflect on how best to implement: mprib/caliscope#637

Just for context, I have found that killing and re-launching these threads can easily result in pretty bad memory leaks. Realistically, it may take some time before I could implement that improvement in a stable way, though I agree it would be a good future addition.

What is the use of "Add grid" and "Calibrate" in the Camera tab? It seems like "Autocalibrate" is the only useful button.

The first two are the original interface and Autocalibrate was added later. I think most of the time autocalibrate will work fine, but I see the value in having the ability to select specific frames as wanted.

"Zoom" only takes effect when "Apply distortion" is ticked.

This is as intended. It's possible for a bad distortion model to appear reasonable because it also appears zoomed in when "undistorted". When zoomed out, the bizarre distortion around the image edges is apparent.
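The point about a bad distortion model hiding in a zoomed view can be illustrated numerically: in the standard radial model, displacement scales with r³ and higher powers of the normalized radius, so even a badly wrong coefficient barely moves points near the image center while shifting the edges dramatically. A generic sketch, not Caliscope code:

```python
import numpy as np

def radial_displacement(r, k1, k2=0.0, k3=0.0):
    """Displacement of a point at normalized radius r under radial
    distortion (k1, k2, k3). Grows steeply toward the image edges."""
    r2 = r * r
    return r * (k1 * r2 + k2 * r2**2 + k3 * r2**3)

# A fairly wrong k1 barely moves points near the center...
center = abs(radial_displacement(0.05, k1=-0.3))
# ...but shifts edge points dramatically, which only shows when zoomed out.
edge = abs(radial_displacement(0.8, k1=-0.3))
```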

It would make more sense to move the "Calibrate capture volume" to the "Capture volume" tab, in the same way that "Camera calibration" is in the "Cameras" tab.

I agree that this suggestion would improve the flow. I have created an issue to incorporate this in a future version: mprib/caliscope#644

It is hard to know along which axis to adjust the coordinate system since there is no clear way of knowing which color corresponds to which axis. A simple way would be to comply with the xyz -> RGB standard.

I definitely acknowledge that step requires experimentation to orient to the controls. The colors are the defaults offered by PyQtGraph's AxisItem object and I do not see a way in their documentation to modify the color selection.

PyQtGraph is amazing, though. Strongly recommend.

X+, X-, etc do not invert the axis, but rotate the 3D points around axes by 90°. I would simply change the buttons to X+90°, X-90°, etc.

I will plan to include the X+90, etc in the tooltips for clarity.
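For orientation, a 90° rotation about a coordinate axis is just a standard rotation-matrix multiply applied to every 3D point. A minimal illustrative sketch (not Caliscope's actual code):

```python
import numpy as np

def rotate_points(points: np.ndarray, axis: str, degrees: float) -> np.ndarray:
    """Rotate an (N, 3) array of 3D points about the x, y, or z axis."""
    theta = np.radians(degrees)
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    elif axis == "y":
        R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    else:  # axis == "z"
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ R.T
```

Pressing the "X+" button four times would therefore return the capture volume to its original orientation.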

What is holistic OpenSim? OpenSim is not installed, so how do you perform scaling, inverse kinematics, and rigging? Is it a custom algorithm? I would like to know a bit more about it.

The holistic opensim tracker is discussed above. It is essentially a subset of the holistic mediapipe output that could drive IK on a full skeletal model in OpenSim, but without lots of points for the hands and face.

The scaling just takes the mean distance between selected landmarks across a dynamic calibration trial. Those distances are then used to scale a Blender rig that is animated via IK by the subsequent motion trials.

The rigging component is in a separate package, Rigmarole, which is based on Blender. Those are definitely side-quests and not the primary goal of the package.
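As a rough sketch of that scaling idea (average the observed distance between two landmarks over a trial, then divide by the corresponding segment length of the model rig; all function, column, and landmark names here are hypothetical, not Caliscope's API):

```python
import numpy as np
import pandas as pd

def segment_scale_factor(xyz: pd.DataFrame, a: str, b: str, model_length: float) -> float:
    """Estimate a rig scale factor from the mean observed distance between
    landmarks `a` and `b` across all frames of a calibration trial.

    `xyz` is assumed to hold one row per frame with columns named
    `<landmark>_x`, `<landmark>_y`, `<landmark>_z`.
    """
    pa = xyz[[f"{a}_x", f"{a}_y", f"{a}_z"]].to_numpy()
    pb = xyz[[f"{b}_x", f"{b}_y", f"{b}_z"]].to_numpy()
    mean_length = np.linalg.norm(pa - pb, axis=1).mean()
    return mean_length / model_length
```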

Side remarks (which would take more time to implement)

As you point out in the paper, Caliscope's main goal is to make multi-camera calibration easy and straightforward for non-experts. However, you also include a tool to perform pose estimation, triangulate it, and rig the output. I feel like calibration is a huge problem in its own right, and having a tool dedicated solely to that purpose could be quite convenient for countless people. Calibration is also needed for the study of animals, objects, or even architecture, and most people would not care about human pose. And for those who do care, MediaPipe/BlazePose is getting outdated, so they may want to use something more recent such as RTMlib. Moreover, that last bit of the package seems a bit less polished. What would you think of decoupling these two parts and making them two separate tools?

I definitely view the "Post Processing" tab as a proof of concept to demonstrate the quality of the calibration and what could be done with it. I am highly drawn to the idea of "do one thing" and see the value of just having the camera system calibration spit out a config which can be used by other tools.

Landmark tracking, triangulation, filtering, and inverse kinematics are all such rich domains themselves.
It would be great to have loosely coupled tools that could pass the ball down field to other packages via standardized human readable file formats.

On the other hand, you coded quite a convenient toolbox to record from multiple webcams and synchronize them. This could be mentioned in the paper, especially since it automatically generates the required "frame_time_history.csv" file. I do realize this may sound contradictory to my previous point, but I feel like it would be even better to include it directly in Caliscope. It would not make the toolbox heavier since the requirements are the same, and people would be able to record and calibrate a scene from one single window.

With video recorded from a synchronized camera system (like FLIR), frame_time_history.csv is not required. When that file is not present, frames will just be bundled up in lock-step.
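Put differently, without timestamps the i-th frame from each camera is treated as simultaneous. A toy sketch of that fallback (illustrative only, not Caliscope's code):

```python
def lockstep_bundles(frame_streams):
    """Bundle frames purely by index: frame i from every camera is
    assumed simultaneous. This mirrors the fallback described above
    when no frame_time_history.csv is available; iteration stops at
    the shortest stream."""
    yield from zip(*frame_streams)
```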

Regarding MultiWebCam, I have become hesitant to promote the use of webcams strongly.
Working with them across multiple operating systems is a bit of a nightmare, and my expectation is that bundling MultiWebCam with Caliscope would create issues.

MultiWebCam did allow me to test a lot of things out cheaply while iterating in my home office, but the resolution/frame rate/motion blur will limit many use cases.
That being said, I also know how much the Vicon systems in our lab cost, and my gut tells me that given how good and inexpensive modern phone cameras are, there has to be a better way than the major mocap vendors.

here I begin a bit of a ramble about the project broadly

I'm excited to pursue alternate hardware approaches in the future such as FLIR, or more DIY approaches such as raspberry pi's with MIPI cameras.
I strongly suspect we are approaching a time when inexpensive off-the-shelf components can capture high-quality synchronized footage appropriate for research and clinical applications.

I see this project fundamentally as managing the conversion of raw videos to a human readable camera calibration config.toml.
I want it to be as agnostic as possible to the adjacent workflows so that it can be easily adapted to integrate with these tools as they emerge and develop.

I'm glad that this review has given an opportunity to connect with someone who is working on these problems as well and hope that we might continue a conversation about them.
I see a future where there is a broad ecosystem of open source tools that can be integrated into a comprehensive motion capture/biomechanical workflow.
My hope with Caliscope is to contribute one small link in the chain.

Off-topic: I believe the toolbox used to be called "pyxy3d". I like the new name, which looks much less like a bunch of letters collapsed together and sounds nice and easy to remember.

Thank you! Naming things is agonizingly difficult...

@davidpagnon

davidpagnon commented Sep 17, 2024

Thank you for this extensive response! I'm satisfied with the answers I do not address below.

Essentially, it is a filtered subset of the Mediapipe Holistic output. In test usage, files of the raw Holistic output became unwieldy because they contained several hundred points for the face and dozens for the hands; but then using the smaller output of Mediapipe Pose became problematic because it converts to a much lower resolution for point detection. Holistic Opensim was my attempt to make a smaller file with better detection that could track the wrist and skull better.

So I'm not sure I totally get it. I understand that you "scale a blender rig that is animated via IK", but then you do not use OpenSim scaling and IK, do you?

Thank you for the feedback. I have added a Video Capture tab to the workflow section of the docs that directs to MultiWebCam.

I feel like it would be worth linking to MultiWebCam at each mention of frame_time_history.csv, or at least there, since when I first had a look at the overview section, I thought you had to write your own script or create the file manually, which can be scary for new users.

But typing poetry publish really does give me so much joy!

Okay, thank you, I don't know much about Poetry and did not realize there was such a nice way to publish a package. And more importantly, I would not want to deprive you of such a joy!

In the issue above I include some screenshots of how it is intended to appear (and shows on my system). I have recently removed a qt theme for simplicity's sake which may also impact the character legibility.

I answered on this issue: mprib/caliscope#641

Just for context, I have found that killing and re-launching these threads can easily result in pretty bad memory leaks. Realistically, it may take some time before I could implement that improvement in a stable way, though I agree it would be a good future addition.

Okay, good to know!

What is the use of "Add grid" and "Calibrate" in the Camera tab? It seems like "Autocalibrate" is the only useful button.

The first two are the original interface and Autocalibrate was added later. I think most of the time autocalibrate will work fine, but I see the value in having the ability to select specific frames as wanted.

So the use of "Add grid" and "Calibrate" is for selecting specific frames? I do not see anything about it in the docs; it may be something to add?

Working with them across multiple operating systems is a bit of a nightmare, and my expectation is that bundling MultiWebCam with Caliscope would create issues.

Okay, it makes sense!

What are FLIR cameras? I thought they were mostly for infrared video capture; are you telling me they are also good for RGB capture and synchronization, and that they are rather cheap? In any case, I quite like the idea of using Raspberry Pis with MIPI cameras!

I see a future where there is a broad ecosystem of open source tools that can be integrated into a comprehensive motion capture/biomechanical workflow.

Well, I would not be against the idea of a community creating the Blender of Mocap 😄 It would be quite a long-term project but it does not seem totally out of reach indeed!

@mprib

mprib commented Sep 18, 2024

@davidpagnon,

I've cropped the conversation for brevity and hope that this addresses any unresolved points. If there are any outstanding questions remaining, please just let me know!

Best,

Mac


Essentially, it is a filtered subset of the Mediapipe Holistic output. In test usage, files of the raw Holistic output became unwieldy because they contained several hundred points for the face and dozens for the hands; but then using the smaller output of Mediapipe Pose became problematic because it converts to a much lower resolution for point detection. Holistic Opensim was my attempt to make a smaller file with better detection that could track the wrist and skull better.

So I'm not sure I totally get it. I understand that you "scale a blender rig that is animated via IK", but then you do not use OpenSim scaling and IK, do you?

That is correct. I had originally created that pared-down version with the intention of some ad hoc experimentation in OpenSim and ended up repurposing it for use in Blender, where I built some automated tools in a different package.

Given the opportunity for confusion, I have renamed it to Simple Holistic.

Thank you for the feedback. I have added a Video Capture tab to the workflow section of the docs that directs to MultiWebCam.

I feel like it would be worth linking to MultiWebCam at each mention of frame_time_history.csv, or at least there as when I first had a look at the overview section, I thought you had to create your own script or create the file manually, which can be scary for new users.

Excellent suggestion. Link to sample data and MultiWebCam added there.

What is the use of "Add grid" and "Calibrate" in the Camera tab? It seems like "Autocalibrate" is the only useful button.

The first two are the original interface and Autocalibrate was added later. I think most of the time autocalibrate will work fine, but I see the value in having the ability to select specific frames as wanted.

So the use of "Add grid" and "Calibrate" is for selecting specific frames? I do not see anything about it in the docs; it may be something to add?

Thank you. Added instructions associated with this to the Intrinsic Calibration docs.

What are FLIR cameras? I thought they were mostly for infrared video capture; are you telling me they are also good for RGB capture and synchronization, and that they are rather cheap? In any case, I quite like the idea of using Raspberry Pis with MIPI cameras!

I was sloppy with my language. Teledyne FLIR provides cameras that can record synchronized RGB video (among other types) using a shared hardware trigger.

Aside: a potential advantage that I see for a raspberry pi based camera system is that each "camera server" could provide both image capture and potentially some local landmark estimation. Processing times from a multicamera system can become unwieldy when all frames are processed on a single machine, but if distributed across multiple devices it could be far more manageable.

@Timozen

Timozen commented Sep 20, 2024

Installation

I have evaluated the software on MacOS 15 using a MacBook with an M1 processor.

Following the installation procedure in https://mprib.github.io/caliscope/installation/#__tabbed_2_2, the program did not start.

After checking the main README.md of the repo, I saw that for MacOS, some env variables need to be set. This resolved the issue ^^

As @davidpagnon already mentioned, the README.md and the documentation do not seem to be fully in sync; this should be resolved.

Usage

I will first say that this is the first time I have worked with multi-view pose estimation software. However, I have my fair share of camera calibration experience.

I followed the Example Project page. Please note that I found this 404 link while checking the Overview page: https://mprib.github.io/caliscope/sample_data.md

The tutorial YouTube video of the project https://youtu.be/voE3IKYtuIQ should be, IMHO, more prominent for users so that inexperienced people (like me, in this case, ^^) can follow along.

This was extremely helpful and might make the overall experience smooth!!!

The intrinsic calibration ran quickly on my machine, and the log helped me understand what the tool was trying to do in the background.

The extrinsic calibration ran at the same speed for me.

After reloading the workspace, the window size (which I had adjusted) reset to the original, which is a minor thing, but it confused me for a second ^^

Initially, I was a bit confused about the repetitive reloading of the workspace; however, I quickly realized that whenever a menu option was unavailable, such as "Post Processing," the underlying reason was that something was not yet in the project folder.

So, I quickly learned to utilize the "reload workspace" button whenever something was added to the project or one processing step finished.

After that, I was able to reproduce the test project results.

When I attempted to extract the "FACE" option, the whole tool crashed without any notice of what the underlying cause was.

I could reproduce this crash several times, and I checked the log files, but they contained no actual clue.

INFO| caliscope.recording.video_recorder| 101|  End of sync packets signaled...breaking record loop
INFO| caliscope.recording.video_recorder| 149|  releasing video writers...
INFO| caliscope.recording.video_recorder| 151|  releasing video writer for port 1
INFO| caliscope.recording.video_recorder| 151|  releasing video writer for port 2
INFO| caliscope.recording.video_recorder| 151|  releasing video writer for port 3
INFO| caliscope.recording.video_recorder| 156|  Initiate storing of frame history
INFO| caliscope.recording.video_recorder| 176|  Storing frame history to /test/recordings/sitting/FACE/frame_time_history.csv
INFO| caliscope.recording.video_recorder| 159|  Initiate storing of point history
INFO| caliscope.recording.video_recorder| 170|  Storing point data in /test/recordings/sitting/FACE/xy_FACE.csv
INFO| caliscope.recording.video_recorder| 164|  About to emit `all frames saved` signal
INFO| caliscope.post_processing.post_processor|  73|  (Stage 1 of 2): 99% of frames processed for (x,y) landmark detection
INFO| caliscope.post_processing.post_processor|  96|  Reading in (x,y) data..
INFO| caliscope.post_processing.post_processor|  99|  Filling small gaps in (x,y) data
INFO| caliscope.post_processing.gap_filling|  42|  Gap filling for (x,y) data from port 1. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling|  42|  Gap filling for (x,y) data from port 2. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling|  42|  Gap filling for (x,y) data from port 3. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling|  71|  (x,y) gap filling complete
INFO| caliscope.post_processing.post_processor| 101|  Beginning data triangulation
INFO| caliscope.cameras.camera_array| 209|  Creating camera array projection matrices
INFO| caliscope.triangulate.triangulation| 193|  Processing points from camera 1
INFO| caliscope.triangulate.triangulation| 193|  Processing points from camera 2
INFO| caliscope.triangulate.triangulation| 193|  Processing points from camera 3
INFO| caliscope.triangulate.triangulation| 201|  Assembling undistorted dataframe
INFO| caliscope.triangulate.triangulation| 120|  About to begin triangulation...due to jit, first round of calculations may take a moment.
[1]    92789 bus error  caliscope

However, the tool still wrote the results into the appropriate folder, and reloading the project also worked.
This is not a significant issue, but perhaps you can find a way to capture such crashes and write them to the log.
Please also note that after restarting the tool, the previous log is gone, so I could not investigate the first crash.
IMHO, this is not a reason to hold up the paper's publication, but it is something you should be aware of.
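As an aside for readers, the gap-filling stage visible in this log (interpolating (x,y) dropouts of three frames or fewer) can be sketched with pandas along these lines; this is a generic sketch of the technique, not Caliscope's implementation:

```python
import numpy as np
import pandas as pd

def fill_small_gaps(series: pd.Series, max_gap: int = 3) -> pd.Series:
    """Linearly interpolate runs of NaN no longer than `max_gap` frames,
    leaving longer dropouts as NaN."""
    # Interpolate only interior gaps (leading/trailing NaN stay NaN).
    filled = series.interpolate(method="linear", limit_area="inside")
    # Label each run of consecutive NaN and measure its length.
    is_na = series.isna()
    run_id = (~is_na).cumsum()
    run_len = is_na.groupby(run_id).transform("sum")
    # Restore NaN for runs longer than max_gap.
    filled[is_na & (run_len > max_gap)] = np.nan
    return filled
```

Longer dropouts stay NaN so that downstream triangulation can simply skip them.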

GUI

  • First of all, I know the effort that goes into creating such a sophisticated GUI; props to that!
  • The concerns I would have raised already have been brought up by @davidpagnon and answered by @mprib.
  • It would still be helpful if this information were in the tool docs with an explanation.

Paper

I like the style of the paper. It was written clearly, showing the shortcomings or issues of the current state and which value Caliscope adds to the area.

As most information is also inside the repository and documentation, further explanations are optional from my side.


@mprib I still need to check the 'Installation instructions' checkbox, but it would be nice if you could update the online documentation to contain the MacOS problems and fixes.
After that, I will be okay with everything.

@mprib

mprib commented Sep 25, 2024

@Timozen,

Thank you for providing this feedback! I am grateful to get your perspective on the project and particularly glad to have the experience of a MacOS user as part of the review.

My replies are embedded in context below:

Installation

I have evaluated the software on MacOS 15 using a MacBook with an M1 processor.

Following the installation procedure in https://mprib.github.io/caliscope/installation/#__tabbed_2_2, the program did not start.

After checking the main README.md of the repo, I saw that for MacOS, some env variables need to be set. This resolved the issue ^^

As @davidpagnon already mentioned, the README.md and the documentation do not seem to be fully in sync; this should be resolved.

I very much appreciate the catch. I have updated the docs to direct users to set the environment variables as described in the README. Apologies for the confusion.

Usage

I will first say that this is the first time I have worked with multi-view pose estimation software. However, I have my fair share of camera calibration experience.

I followed the Example Project page. Please note that I found this 404 link while checking the Overview page: https://mprib.github.io/caliscope/sample_data.md

Thank you. Updated now to the correct link: https://mprib.github.io/caliscope/sample_project/

The tutorial YouTube video of the project https://youtu.be/voE3IKYtuIQ should be, IMHO, more prominent for users so that inexperienced people (like me, in this case, ^^) can follow along.

This was extremely helpful and might make the overall experience smooth!!!

Thank you for this advice. I've linked and bolded the walkthrough in the intro of the README.

The intrinsic calibration ran quickly on my machine, and the log helped me understand what the tool was trying to do in the background.

The extrinsic calibration ran at the same speed for me.

After the reloading of the workflow, the window size (which I adjusted) reset to the original, which is a minor thing, but it confused me for a second ^^

This is a hiccup related to PyQtGraph, an otherwise excellent library. The flicker out and back in is one quirk that happens when loading the 3D viewer, and I haven't found a way around it.

Initially, I was a bit confused about the repetitive reloading of the workspace; however, I quickly realized that whenever a menu option was unavailable, such as "Post Processing," the underlying reason was that something was not yet in the project folder.

So, I quickly learned to utilize the "reload workspace" button whenever something was added to the project or one processing step finished.

After that, I was able to reproduce the test project results.

Thank you for the feedback about the first-time experience. I have an open issue to add tooltips. I clearly recall the tooltips you added to JeFaPaTo, and they really were helpful. Glad to see where the points of confusion lie with Caliscope.

When I attempted to extract the "FACE" option, the whole tool crashed without any notice of what the underlying cause was.

I could reproduce this crash several times, and I checked the log files, but they contained no actual clue.

INFO| caliscope.recording.video_recorder| 101|  End of sync packets signaled...breaking record loop
INFO| caliscope.recording.video_recorder| 149|  releasing video writers...
INFO| caliscope.recording.video_recorder| 151|  releasing video writer for port 1
INFO| caliscope.recording.video_recorder| 151|  releasing video writer for port 2
INFO| caliscope.recording.video_recorder| 151|  releasing video writer for port 3
INFO| caliscope.recording.video_recorder| 156|  Initiate storing of frame history
INFO| caliscope.recording.video_recorder| 176|  Storing frame history to /test/recordings/sitting/FACE/frame_time_history.csv
INFO| caliscope.recording.video_recorder| 159|  Initiate storing of point history
INFO| caliscope.recording.video_recorder| 170|  Storing point data in /test/recordings/sitting/FACE/xy_FACE.csv
INFO| caliscope.recording.video_recorder| 164|  About to emit `all frames saved` signal
INFO| caliscope.post_processing.post_processor|  73|  (Stage 1 of 2): 99% of frames processed for (x,y) landmark detection
INFO| caliscope.post_processing.post_processor|  96|  Reading in (x,y) data..
INFO| caliscope.post_processing.post_processor|  99|  Filling small gaps in (x,y) data
INFO| caliscope.post_processing.gap_filling|  42|  Gap filling for (x,y) data from port 1. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling|  42|  Gap filling for (x,y) data from port 2. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling|  42|  Gap filling for (x,y) data from port 3. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling|  71|  (x,y) gap filling complete
INFO| caliscope.post_processing.post_processor| 101|  Beginning data triangulation
INFO| caliscope.cameras.camera_array| 209|  Creating camera array projection matrices
INFO| caliscope.triangulate.triangulation| 193|  Processing points from camera 1
INFO| caliscope.triangulate.triangulation| 193|  Processing points from camera 2
INFO| caliscope.triangulate.triangulation| 193|  Processing points from camera 3
INFO| caliscope.triangulate.triangulation| 201|  Assembling undistorted dataframe
INFO| caliscope.triangulate.triangulation| 120|  About to begin triangulation...due to jit, first round of calculations may take a moment.
[1]    92789 bus error  caliscope

However, the tool still wrote the results into the appropriate folder, and reloading the project also worked. This is not a significant issue, but perhaps you can find a way to capture such crashes and write them to the log. Please also note that after restarting the tool, the previous log is gone, so I could not investigate the first crash. IMHO, this is not a reason to hold up the paper's publication, but it is something you should be aware of.

This is definitely something I want to address so I've created an issue here for the face tracker crashing: mprib/caliscope#648

In the meantime I've removed it from the set of sample trackers to avoid confusion.

GUI

* First of all, I know the effort that goes into creating such a sophisticated GUI; props to that!

* The concerns I would have raised already have been brought up by @davidpagnon and answered by @mprib.

* It would still be helpful if this information were in the tool docs with an explanation.

Paper

I like the style of the paper. It was written clearly, showing the shortcomings or issues of the current state and which value Caliscope adds to the area.

As most information is also inside the repository and documentation, further explanations are optional from my side.

@mprib I still need to check the 'Installation instructions' checkbox, but it would be nice if you could update the online documentation to contain the MacOS problems and fixes. After that, I will be okay with everything.

Absolutely! With the most recent merge, the MacOS fixes are now live on the docs.

JeFaPaTo was an awesome project to see because I knew what putting together a full-fledged Qt tool like that required. Your review means a lot to me. Thank you for the time you've invested working through the pain points (and documenting them) so that I can make this a smoother experience for others.

@Timozen

Timozen commented Sep 26, 2024

@mprib, that all sounds very good to me! I checked the last missing box in my review panel, and I am good to go to continue with the next part of the submission phase @mooniean :)

Concerning the opened issue mprib/caliscope#648, if you need my support and testing again, just let me know over there ^^

@mooniean

mooniean commented Oct 8, 2024

Apologies for the delay!

@davidpagnon are you happy for the review to proceed?

@davidpagnon

I am, thanks!
The issues have been taken into account. Not all have been processed yet, but it is a work in progress and the remaining ones are details: the toolbox is perfectly usable and useful.

@mooniean

mooniean commented Oct 8, 2024

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Editor Tasks Prior to Acceptance

  • Read the text of the paper and offer comments/corrections (as either a list or a pull request)
  • Check that the archive title, author list, version tag, and the license are correct
  • Set archive DOI with @editorialbot set <DOI here> as archive
  • Set version with @editorialbot set <version here> as version
  • Double check rendering of paper with @editorialbot generate pdf
  • Specifically check the references with @editorialbot check references and ask author(s) to update as needed
  • Recommend acceptance with @editorialbot recommend-accept

@mooniean

mooniean commented Oct 8, 2024

@mprib Last final checks! After you've finished the tasks in the comment above, let me know :)

@mprib

mprib commented Oct 8, 2024

@mooniean,

The JOSS release has been uploaded to Zenodo: https://zenodo.org/records/13905848

DOI: 10.5281/zenodo.13905848

I have double checked the ORCID/Author/Title/License info.

Please let me know if there is anything that I need to clean up on my end.

Thank you!

Mac

@mooniean

mooniean commented Oct 9, 2024

@editorialbot generate pdf

@mooniean

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.5281/zenodo.7233714 is OK
- 10.1016/j.celrep.2021.109730 is OK
- 10.48550/arXiv.1906.08172 is OK
- 10.1038/s41593-018-0209-y is OK
- 10.3389/fspor.2020.00050 is OK
- 10.21105/joss.04362 is OK
- 10.1007/3-540-44480-7_21 is OK
- 10.1038/s41592-019-0686-2 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: The OpenCV Library
- No DOI given, and none found for title: Indie Vicon System Cost and Solo Operator Guide
- No DOI given, and none found for title: Large-Scale Bundle Adjustment in Scipy
- No DOI given, and none found for title: OpenMMLab Pose Estimation Toolbox and Benchmark

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@mooniean

@editorialbot set v0.5.1 as version

@editorialbot

Done! version is now v0.5.1

@mooniean

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@mooniean

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.5281/zenodo.7233714 is OK
- 10.1016/j.celrep.2021.109730 is OK
- 10.48550/arXiv.1906.08172 is OK
- 10.1038/s41593-018-0209-y is OK
- 10.3389/fspor.2020.00050 is OK
- 10.21105/joss.04362 is OK
- 10.1007/3-540-44480-7_21 is OK
- 10.1038/s41592-019-0686-2 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: The OpenCV Library
- No DOI given, and none found for title: Indie Vicon System Cost and Solo Operator Guide
- No DOI given, and none found for title: Large-Scale Bundle Adjustment in Scipy
- No DOI given, and none found for title: OpenMMLab Pose Estimation Toolbox and Benchmark

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5997, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Oct 14, 2024
@crvernon

crvernon commented Oct 14, 2024

@editorialbot generate pdf

🔍 checking out the following:

  • reviewer checklists are completed or addressed
  • version set
  • archive set
  • archive names (including order) and title in archive matches those specified in the paper
  • archive uses the same license as the repo and is OSI approved as open source
  • archive DOI and version match or redirect to those set by editor in review thread
  • paper is error free - grammar and typos
  • paper is error free - test links in the paper and bib
  • paper is error free - refs preserve capitalization where necessary
  • paper is error free - no invalid refs without justification

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

crvernon commented Oct 14, 2024

👋 @mprib - Thanks for producing a really nice submission. I just need you to address the following before I accept this for publication:

  • LINE 45: There should be a space in between "OpenCV" and your citation.

No need to conduct a new release. Just let me know when this has been fixed and I'll accept.

Thanks!

@mprib

mprib commented Oct 14, 2024

@crvernon,

I've made the update now to paper.md.

Thank you!

Mac

@crvernon

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Prible
  given-names: Donald
  orcid: "https://orcid.org/0000-0001-9243-468X"
doi: 10.5281/zenodo.13905848
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Prible
    given-names: Donald
    orcid: "https://orcid.org/0000-0001-9243-468X"
  date-published: 2024-10-15
  doi: 10.21105/joss.07155
  issn: 2475-9066
  issue: 102
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 7155
  title: "Caliscope: GUI Based Multicamera Calibration and Motion
    Tracking"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.07155"
  volume: 9
title: "Caliscope: GUI Based Multicamera Calibration and Motion
  Tracking"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.
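As a sketch of how downstream tooling can consume these metadata fields, the snippet below assembles an APA-style citation string from the preferred-citation entries in the CITATION.cff shown above. The field names mirror the .cff file; the formatting function itself is a hypothetical illustration, not part of editorialbot or any official cff tooling.

```python
# Preferred-citation fields copied from the CITATION.cff above.
cff = {
    "authors": [{"family-names": "Prible", "given-names": "Donald"}],
    "date-published": "2024-10-15",
    "doi": "10.21105/joss.07155",
    "journal": "Journal of Open Source Software",
    "issue": 102,
    "volume": 9,
    "start": 7155,
    "title": "Caliscope: GUI Based Multicamera Calibration and Motion Tracking",
}

def apa_citation(meta: dict) -> str:
    """Build a simple APA-style citation string (illustrative only)."""
    # "Family, F." for each author, joined with commas.
    authors = ", ".join(
        f'{a["family-names"]}, {a["given-names"][0]}.' for a in meta["authors"]
    )
    year = meta["date-published"][:4]
    return (
        f'{authors} ({year}). {meta["title"]}. '
        f'{meta["journal"]}, {meta["volume"]}({meta["issue"]}), '
        f'{meta["start"]}. https://doi.org/{meta["doi"]}'
    )

print(apa_citation(cff))
```

This is the same citation shape GitHub renders in the "Cite this repository" menu; tools such as cffconvert perform an equivalent transformation with full validation against the cff schema.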

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🦋🦋🦋 👉 Bluesky post for this paper 👈 🦋🦋🦋

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.07155 joss-papers#6000
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.07155
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels Oct 15, 2024
@Timozen

Timozen commented Oct 15, 2024

Congratulations @mprib!!! and thanks for the awesome work again @mooniean @davidpagnon @crvernon

@crvernon

🥳 Congratulations on your new publication @mprib! Many thanks to @mooniean for editing and @Timozen and @davidpagnon for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts.

Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join

@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following

code snippets

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.07155/status.svg)](https://doi.org/10.21105/joss.07155)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.07155">
  <img src="https://joss.theoj.org/papers/10.21105/joss.07155/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.07155/status.svg
   :target: https://doi.org/10.21105/joss.07155

This is how it will look in your documentation:

DOI

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

@mprib

mprib commented Oct 15, 2024

Thank you @crvernon!

@mooniean, thank you for guiding this submission along the way while I worked through my learning curve. Though I'd once served as a reviewer for JOSS, this was my first time submitting. It's been a valuable experience and a wonderful way to get substantive feedback.

@Timozen, thank you for your time reviewing and in particular your comments on the README and docs. Your feedback made things better and also has left me with a growing sense of clarity about how I should structure the documentation going forward to avoid duplication/confusion. It's something that absolutely required an outside set of eyes and I'm grateful for your attention on it.

@davidpagnon, I'm so glad that this review provided an opportunity to intersect with you. Your comments have definitely left me reflecting on major next steps for this project. I see the value in maintaining a tight focus on the core calibration components and then creating clean edges so that the calibration output can interface easily with other tools. I look forward to keeping the dialogue going!
