
[REVIEW]: ParaMonte: A high-performance serial/parallel Monte Carlo simulation library for C, C++, Fortran #2741

Closed
40 tasks done
whedon opened this issue Oct 12, 2020 · 113 comments
Labels: accepted, Batchfile, CMake, published, recommend-accept, review, Shell


whedon commented Oct 12, 2020

Submitting author: @shahmoradi (Amir Shahmoradi)
Repository: https://github.com/cdslaborg/paramonte
Version: v1.5.1
Editor: @VivianePons
Reviewer: @milancurcic, @williamfgc
Archive: 10.5281/zenodo.4749957

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/f964b6e22c71515c310fbe3843ad4513"><img src="https://joss.theoj.org/papers/f964b6e22c71515c310fbe3843ad4513/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/f964b6e22c71515c310fbe3843ad4513/status.svg)](https://joss.theoj.org/papers/f964b6e22c71515c310fbe3843ad4513)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@milancurcic & @williamfgc, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @VivianePons know.

Please start on your review when you are able, and be sure to complete it within the next six weeks, at the very latest.

Review checklist for @milancurcic

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@shahmoradi) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @williamfgc

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@shahmoradi) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.) I don't have enough domain knowledge to conduct a study on the topic

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution. Reviewer: There are build instructions, but
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified? Reviewer: There are manual steps, but no automated CI.
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support? Reviewer: Since it's hosted on GitHub, it follows the typical workflow (issues, PRs).

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

whedon commented Oct 12, 2020

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @milancurcic, @williamfgc it looks like you're currently assigned to review this paper 🎉.


⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf


whedon commented Oct 12, 2020

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


whedon commented Oct 12, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- 10.1088/0004-637x/766/2/111 may be a valid DOI for title: A Multivariate Fit Luminosity Function and World Model for Long Gamma-Ray Bursts
- 10.1093/mnras/stv714 may be a valid DOI for title: Short versus long gamma-ray bursts: a comprehensive study of energetics and prompt gamma-ray correlations
- 10.1007/s11222-006-9438-0 may be a valid DOI for title: DRAM: efficient adaptive MCMC
- 10.1093/bioinformatics/btp162 may be a valid DOI for title: GNU MCSim: Bayesian statistical inference for SBML-coded systems biology models
- 10.18637/jss.v035.i04 may be a valid DOI for title: PyMC: Bayesian stochastic modelling in Python

INVALID DOIs

- None

@williamfgc

@whedon @VivianePons I ran into trouble when trying to accept the invitation above, and I can't edit my checklist. I'd appreciate help with this, as I'm currently reviewing the software.


whedon commented Oct 19, 2020

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@whedon commands

@williamfgc

@whedon commands


whedon commented Oct 19, 2020

Here are some things you can ask me to do:

# List Whedon's capabilities
@whedon commands

# List of editor GitHub usernames
@whedon list editors

# List of reviewers together with programming language preferences and domain expertise
@whedon list reviewers

EDITORIAL TASKS

# Compile the paper
@whedon generate pdf

# Compile the paper from alternative branch
@whedon generate pdf from branch custom-branch-name

# Ask Whedon to check the references for missing DOIs
@whedon check references

# Ask Whedon to check repository statistics for the submitted software
@whedon check repository

@williamfgc

@whedon list reviewers


whedon commented Oct 19, 2020

Here's the current list of reviewers: https://bit.ly/joss-reviewers

@VivianePons

@whedon re-invite @williamfgc as reviewer

@VivianePons

That should fix the issue


whedon commented Oct 21, 2020

OK, the reviewer has been re-invited.

@williamfgc please accept the invite by clicking this link: https://github.com/openjournals/joss-reviews/invitations


williamfgc commented Nov 3, 2020

I have a few initial recommendations from my review. I hope the authors find them helpful in transitioning ParaMonte from academic code to production-quality software:

  1. I couldn't find the use of automated test frameworks for unit and functional tests. I'd recommend using CTest, since the project already has CMake support, to create automated tests. There seems to be a set of tests under src/test, but they are not executed automatically. A Fortran unit testing framework could also be a good alternative.
  2. CI: this project is missing continuous integration. I find it difficult to accept it without comprehensive testing of changes made to the repo. CI builds trust among users and developers.
  3. Contributing: the contribution policy is not clear. This is tied to 1. and 2. and the lack of CI infrastructure (GitHub Actions, CircleCI, or Travis, even on a single platform, would be helpful).
  4. Installation: add support for CMake installation. While there is a large effort to support different platforms via scripts, it would be beneficial to reuse CMake's portable capabilities to build and install the library. This could be added to their CI.
  5. Code coverage: once CI is in place, please add code coverage (e.g., Coverity, Codecov).
  6. Code documentation: it's hard to find API-level documentation (e.g., Doxygen) for internal functions, which would let any developer new to the project pick up the original author's intention. I see some files have it while others, such as [SpecBase_Description_mod.f90](https://github.com/cdslaborg/paramonte/blob/master/src/ParaMonte/SpecBase_Description_mod.f90), don't.
  7. The paper focuses too much on previously published work rather than exposing the merits of the software infrastructure and (most importantly) its impact. I'd focus it on the community impact: use cases, important integrations with other projects, etc.
  8. Distribution and packaging (optional): it would be good to have the option of standard distribution packages (Conda, Spack). Install scripts have to be constantly updated and maintained, and without proper CI, bugs could pile up pretty quickly.

I'd be happy to expand on any of the above if required, but I believe these changes are necessary to meet minimal software quality standards.
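A minimal sketch of what point 1 could look like in practice, assuming (hypothetically) that the programs under src/test were registered via add_test() in the CMake configuration; CTest would then run them automatically after every build:

```shell
# Hypothetical workflow: configure, build, and run all registered tests.
# Assumes the repository's CMakeLists.txt calls enable_testing() and
# registers the test programs under src/test with add_test().
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build .
ctest --output-on-failure
```

The same three commands could then be invoked from a CI job (GitHub Actions, CircleCI, or Travis) on every push.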

@VivianePons

Hi @shahmoradi, have you had any time to look at the comments made by @williamfgc?

@shahmoradi

@VivianePons Thank you for your reminder. I am working on it right now as we speak.

@milancurcic

The items I checked off are okay. A few remain unchecked because of issues with installation.

I tested serial binary releases for Fortran with both GNU and Intel. Both worked as expected and I was able to build and run the example program.

I wasn't able to use the parallel releases because they require MPICH. I tend to work with OpenMPI so it'd be nice if binary releases with OpenMPI were available. I opened an issue for this.

I tried building from source, which failed. I opened an issue for this.

There was also a slight issue with wording in ACKNOWLEDGMENT.md, which made it sound like the license requires citing the software. I opened an issue for this.

I confirm and reiterate points 1 and 3 by @williamfgc as important. Points 2, 5, and 8 I consider nice to have but not essential. Point 6 I couldn't confirm: I had no problem finding the API docs and found them thorough.

I suggest tackling these before proceeding with the review.

@williamfgc

@milancurcic thanks for confirming. I modified some points to make them clearer based on your comments.

@VivianePons

Hi all, @shahmoradi can I ask where we are on this? (No rush, I just want to make sure we are still on track!)

@shahmoradi

I have addressed @williamfgc's comments and am working on @milancurcic's comments. I should be able to come back with a full revision by tomorrow, hopefully.


shahmoradi commented Dec 25, 2020

@williamfgc

Thank you for your extensive and detailed feedback, which significantly improved the quality of this codebase. We address each of your comments below.

  1. The library tests.
    The library originally had a significant number of tests that were automatically executed. At some point, however, someone (perhaps I) decided to temporarily disable the tests; the change became permanent since nobody reactivated them.

    We have now added a comprehensive set of 866 new unique tests that cover nearly 100% of the entire library as well as the various functionalities of the samplers in the library. Given the three possible major parallelism configurations of the library, there are overall 3 * 866 = 2598 tests for all serial as well as the MPI and Coarray parallel builds. The code coverage report for all three builds for the most recent version of the library is available here:

    Since the library relies on preprocessing directives to implement different parallelism paradigms (serial, MPI, and Coarray Fortran), the tests also had to be adapted and preprocessed according to the characteristics of the library for each parallelism paradigm. No testing framework that we have inspected so far met the minimum requirements for testing this package under the different build configurations. We have therefore developed a unit testing framework that is specifically tailored to the needs of this library.
    The tests in the library are separated into two different categories:

    1. basic: Tests that verify the functionality and accuracy of the fundamental backbone modules and procedures of the library.
    2. sampler: Tests that verify the functionality, semantics, and accuracy of the library's samplers.

    By default, both categories of tests are deactivated for any particular build, for the following reasons:

    1. The tests can take significant time and memory, in particular when building the library for multiple different configurations. This can also easily break continuous integration workflows like Travis CI that limit the maximum length of the stdout log file.
    2. Some of the tests, such as those that verify the restart functionality of the library's samplers in parallel, require external intervention in the regular working of the samplers. This in turn requires the MPI/Coarray processes to be able to communicate error messages to each other to avoid deadlocks. These error-message communications between processes are not normally needed, as the failing process automatically calls mpi_abort() in MPI or error stop in Coarray to halt the entire program.
      However, such exceptions and errors should be gracefully handled during testing, which requires adding extra error-handling code snippets to the sampler routines that are otherwise deactivated in normal production runs.
      These extra exception- and error-handling communications can be quite expensive if they appear in the parallel production code. As such, the library's tests are switched off by default, unless the developer (or the user) explicitly enables them via the -t all, --test all, or --codecov build flag. For example,
    ./install.sh --lang fortran --build testing --lib dynamic --par none --test all

    will build the library for usage from the Fortran language with all tests activated. The tests can also be limited to one category, if desired, to reduce the testing and build time. This is particularly useful during development, when the focus is on one particular functionality of the library. For example,

    ./install.sh --lang c --build testing --lib dynamic --par mpi -t basic

    will test the library build for parallel usage from the C language and will only activate the basic tests of the library. The default mode, as mentioned above, is,

    ./install.sh -t none

    However, when generating code coverage, all tests are activated by default. For example,

    ./install.sh --codecov

    and,

    ./install.sh --codecov -t all

    are equivalent and,

    ./install.sh --codecov -t none

    leads to an error message, since a minimal set of tests is required to generate the code coverage report.

  2. Continuous Integration.
    We have now enabled Travis CI for this library. All builds are successful. Due to technical difficulties with Coarray Fortran and Windows, CAF parallel builds and builds for Windows are currently excluded from Travis CI. Intel has recently released its new Fortran compilers free of charge for all platforms, including Windows. This will hopefully enable testing and CI of this library on Windows as well in the near future.

  3. Contribution policy.
    We have now laid out a set of guidelines for contributing to the core of the library, as well as for other types of contributions:

  4. CMake support.
    CMake is indeed already supported on Linux and macOS. However, direct use of CMake to build the library is suboptimal, in particular for end-users, as CMake cannot install missing libraries on the user's system if needed. One of our primary goals in the development of this package was to make a compiled-language library as accessible as possible to anyone, even those who have no experience with compiled languages, CMake, or the various library dependencies. To achieve this goal, we had to develop multiple layers of build scripts that lay the ground for CMake to successfully build the library. The current build scripts are capable of automatically installing all missing components, including the CMake application (if the minimum required version is not detected on the system), the GNU compilers, and the OpenMPI, MPICH, and OpenCoarrays libraries.

    The situation on Windows is slightly more complex. While building the library with CMake was originally supported on Windows, we eventually dropped it in favor of a home-grown Windows Batch build system developed exclusively for Windows. The reason for this decision was CMake's poor support for the Fortran programming language at the time, especially on Windows. CMake has improved dramatically over the past three years, and given the recent availability of Intel compilers on Windows free of charge, CMake support on Windows is now among the top priorities for the library.

  5. Code coverage.
    Code coverage reports are now added for every new release of the library and are permanently available on GitHub in a separate repository, for the three main builds of the library (one per parallelism paradigm):

    We have strived to add automated code coverage reporting via Codecov, but so far without success. We will continue our efforts to generate automated code coverage reports and analyses via Codecov/Travis CI or similar platforms.

  6. Documentation.

  7. Manuscript's contents and focus.
    We have now significantly reduced the amount of technical content in the paper.

  8. Package redistribution and CI.
    We absolutely agree that CI is necessary for the success and longevity of this repository, and we have now set up an automated build and testing workflow with Travis CI. We also agree that having a standard distribution method is essential in the long term. There have been, however, a few challenges along the way in identifying and setting up the best method of distribution, most importantly the many different configurations with which the library can be built (programming languages, MPI library dependencies, etc.). Furthermore, there is the question of which packaging and redistribution method would be best. On Windows and macOS the task might be rather straightforward, as only a few popular methods exist. The situation, however, is completely different on Linux.

    Currently, users can access prebuilt versions of the library for almost any production configuration they might need. The prebuilt libraries are always available on the project's GitHub release page. Downloading and using them takes only three steps from a bash terminal. For example, the prebuilt library for the C programming language can be downloaded and run via,

    libname=libparamonte_c_linux_x64_intel_release_dynamic_heap
    wget https://github.com/cdslaborg/paramonte/releases/latest/download/$libname.tar.gz
    tar xvzf $libname.tar.gz && cd $libname && ./build.sh && ./run.sh && cd ..
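For a parallel prebuilt variant, the same three steps would apply with a different archive name; the name below is purely illustrative (the exact artifact names are listed on the release page):

```shell
# Hypothetical example: an MPI-parallel prebuilt variant. The archive name
# below is illustrative; check the GitHub release page for the exact names.
libname=libparamonte_c_linux_x64_intel_release_dynamic_heap_mpich
wget https://github.com/cdslaborg/paramonte/releases/latest/download/$libname.tar.gz
tar xvzf $libname.tar.gz && cd $libname && ./build.sh && ./run.sh && cd ..
```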

@milancurcic

Thank you for your valuable feedback, which significantly improved the quality of this work. Below, we address your comments and questions.

  1. OpenMPI releases.
    Thank you for pointing this out. As you mentioned, OpenMPI is a popular implementation. We have now added prebuilt versions of the library with OpenMPI in addition to MPICH. This required a complete revision of the naming convention used in the library across all languages and documentation. The new naming convention automatically suffixes:

    1. any MPICH build of the library with _mpich,
    2. any OpenMPI build of the library with _openmpi,
    3. any Intel MPI build of the library with _impi.
      A complete description of the naming convention is provided in the library's documentation.
  2. Build failure.
    Thank you for pointing this out. This apparently happens on systems where either the MPI library or the GNU compilers are already installed and meet the minimum version requirements of ParaMonte. In that case, the build scripts see no need to create a custom setup.sh to set up the environment variables. However, the build still expected the setup file to exist, which subsequently interrupted the build by throwing an exception.

    We believe this issue is now fixed in the latest release of the library.

  3. license and ACKNOWLEDGMENT.md.
    Thank you again for pointing this out. The existing ACKNOWLEDGMENT.md was a relic of the original license of the library. This is now fixed.

  4. points 1 and 3 by @williamfgc.
    Please see our responses to comments by @williamfgc.
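As an illustration of the suffix convention described in point 1 above (these names are illustrative sketches, not guaranteed to match actual release artifacts):

```shell
# Illustrative prebuilt-library names under the new suffix convention:
#   libparamonte_c_linux_x64_gnu_release_dynamic_heap_mpich      # MPICH build
#   libparamonte_c_linux_x64_gnu_release_dynamic_heap_openmpi    # OpenMPI build
#   libparamonte_c_linux_x64_intel_release_dynamic_heap_impi     # Intel MPI build
```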

@VivianePons

Thank you very much, I am glad to see that the review process has been so successful and has improved the quality of the software :) I leave it to @williamfgc and @milancurcic to review your comments and the improvements that have been made

Happy new year to all of you, and thank you for all the work you have put in.

whedon added the recommend-accept label on May 12, 2021

whedon commented May 12, 2021

Attempting dry run of processing paper acceptance...


whedon commented May 12, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3847/1538-4357/abb9b7 is OK
- 10.1088/0004-637x/766/2/111 is OK
- 10.1093/mnras/stv714 is OK
- 10.1007/s11222-006-9438-0 is OK
- 10.1093/bioinformatics/btp162 is OK
- 10.18637/jss.v035.i04 is OK

MISSING DOIs

- None

INVALID DOIs

- None


whedon commented May 12, 2021

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#2301

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#2301, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@danielskatz

👋 @shahmoradi - I'm the AEiC on duty currently, and I'll be proofreading this shortly, then either requesting changes, or proceeding to publication.

@danielskatz

My suggested changes are in cdslaborg/paramonte#14 - please merge this, or let me know what you disagree with, then we can proceed

@shahmoradi

Done. Thank you @danielskatz @VivianePons for your patience and edits, and special thanks again to both reviewers @milancurcic and @williamfgc for their valuable suggestions.

@danielskatz

@whedon accept


whedon commented May 13, 2021

Attempting dry run of processing paper acceptance...


whedon commented May 13, 2021

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#2303

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#2303, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true


whedon commented May 13, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3847/1538-4357/abb9b7 is OK
- 10.1088/0004-637x/766/2/111 is OK
- 10.1093/mnras/stv714 is OK
- 10.1007/s11222-006-9438-0 is OK
- 10.1093/bioinformatics/btp162 is OK
- 10.18637/jss.v035.i04 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@danielskatz

👋 @VivianePons - this paper is probably too long and too detailed, but I don't think it's worth cutting it at this point; this is just something to consider for future papers

@danielskatz

@whedon accept deposit=true


whedon commented May 13, 2021

Doing it live! Attempting automated processing of paper acceptance...

whedon added the accepted and published labels on May 13, 2021

whedon commented May 13, 2021

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦


whedon commented May 13, 2021

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.02741 joss-papers#2304
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.02741
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@danielskatz

Congratulations to @shahmoradi (Amir Shahmoradi) and co-author!!

And thanks to @milancurcic and @williamfgc for reviewing, and @VivianePons for editing!


whedon commented May 13, 2021

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02741/status.svg)](https://doi.org/10.21105/joss.02741)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02741">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02741/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02741/status.svg
   :target: https://doi.org/10.21105/joss.02741


We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@danielskatz

@arfon and @shahmoradi - I just realized (due to a comment by @kyleniemeyer in another review) that I made a mistake when suggesting edits to this paper, which removed some of the references. Specifically, in cdslaborg/paramonte#14, I put multiple references together with commas, instead of the correct semicolons, which led to all but the first of each set not being included in the pdf.

I'm quite sorry about this.

I'm not sure how to fix this, as I think the paper src has been deleted.

Can we fix this? If so, how?

@shahmoradi

Thanks for the note. I have revived the manuscript Markdown file in the project's repository. Please let me know if I have to do anything else. Thank you.

@danielskatz

Thanks @shahmoradi - can you merge cdslaborg/paramonte#15


arfon commented May 20, 2021

Can we fix this? If so, how?

I think this should be fixed in openjournals/joss-papers@c529ab4 (which basically includes all of these changes @danielskatz)

@danielskatz

The two references at the end of the paragraph that starts with "For each parallel simulation" also should be put together, I think

@danielskatz

And the same with the two at the end of the paragraph that starts with "To alleviate"


arfon commented May 20, 2021

Done and done. The updated paper might take a few hours to show up due to caching but it should be fixed.

@danielskatz

Thanks! And again, sorry for this mistake on my part.
