
[REVIEW]: cmstatr: An R Package for Statistical Analysis of Composite Material Data #2265

Closed · 38 tasks done
whedon opened this issue May 27, 2020 · 45 comments
Labels: accepted · published · recommend-accept · review

whedon commented May 27, 2020

Submitting author: @kloppen (Stefan Kloppenborg)
Repository: https://github.com/ComtekAdvancedStructures/cmstatr
Version: 0.6.0
Editor: @usethedata
Reviewer: @myousefi2016, @JonasMoss
Archive: 10.5281/zenodo.3930475

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/d42c3a9b8c2bfa7e7c493e5123d55b53"><img src="https://joss.theoj.org/papers/d42c3a9b8c2bfa7e7c493e5123d55b53/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/d42c3a9b8c2bfa7e7c493e5123d55b53/status.svg)](https://joss.theoj.org/papers/d42c3a9b8c2bfa7e7c493e5123d55b53)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@myousefi2016 & @JonasMoss, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @usethedata know.

Please try to complete your review in the next six weeks

Review checklist for @myousefi2016

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@kloppen) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @JonasMoss

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@kloppen) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

whedon commented May 27, 2020

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @myousefi2016, @JonasMoss it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

[screenshot: repository watch setting]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf


whedon commented May 27, 2020

Reference check summary:

OK DOIs

- 10.2307/2682297 is OK
- 10.1080/01621459.1987.10478517 is OK
- 10.1080/03610919408813222 is OK
- 10.1198/004017002188618428 is OK
- 10.21105/joss.01686 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@usethedata

Thank you @myousefi2016 and @JonasMoss for serving as reviewers for this. Please feel free to reach out to me either through an issue or via email (see my profile) if you have any questions or need any help.


JonasMoss commented Jun 17, 2020

@kloppen Here is the report! Please don't hesitate to ask for details or make comments!

This package is thorough in its documentation and testing and has a clearly laid out reason to exist. I'm impressed by the authors' devotion to showing agreement with CMH-17-1G in almost every function. I deeply appreciate comments such as "The results of this function have been validated against published values in Lawless (1982).", which demonstrates the seriousness of the authors. What's more, it has three vignettes and a unified philosophy. I would have without a doubt used this package if it had been relevant to my work.

Not surprisingly, I only found minor issues with this package.

Documentation. The documentation is usually strong in explaining what the arguments are there for. But it's often difficult to understand what different functions do. Take k_equiv, with the description k-factors for equivalency testing. I need an explanation of what k-factors are and what equivalency testing is (which I suspect is used differently in this package than the usual statistical terminology "equivalence test").
To help you understand the problem, compare k_equiv with nonpara_binomial_rank, which is much easier to understand and is well-referenced. You state that it finds "distribution-free tolerance limits for large samples" using a named method, and supply a reference for that method. That's what I'd like to see. However, please fix the reference there.
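To make the request concrete, here is the kind of bare call a reader first encounters (a sketch only; the alpha and n argument names and the values are illustrative, not taken from this thread):

```r
library(cmstatr)

# k-factors for an equivalency acceptance test at significance level
# alpha, with a sample size of n = 8; without more context in the docs,
# it is hard to know what the returned factors mean or how to apply them
k_equiv(alpha = 0.05, n = 8)
```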

I find it difficult to understand, and thus verify, the following functions too. (And this problem is compounded when I don't see an example.)

  • maximum_normed_residual
  • equiv_change_mean needs a reference. Is it a wrapper of a two-sample t-test? Then it should say so in Description. What does the modcv = TRUE option do? Something described in CMH-17-1G?
  • equiv_mean_extremum. Could you explain what this function does, preferably with a reference?
  • k_factor_normal: You should define what the k factor is and provide a reference or maybe remove it from the public API.
  • basis_values: Needs a reference to tolerance interval and its definition.
  • Add reference to tolerance intervals, where needed, to explain what you are doing.

There might be other functions with too little "what does the function do" information too. Please be aware that I'm not asking you to rewrite the documentation (which is good); I just need some more information in some circumstances. In particular, I have to be able to verify that the exported functions do what they say they do, as this is one of the unchecked checkboxes -- and then I have to understand what they are supposed to do. ;)

  • Explain what the functions do.
  • Spelling and writing. Some minor issues here too. I haven't hunted for spelling errors, and I don't think they are important.
    • Anderson-Darling -> Anderson--Darling (does Rmarkdown support en-dashes though?)
    • Hanson-Koopmans -> Hanson--Koopmans
    • Please write out "CV" at least once when you use it in the documentation; this is true for any abbreviation. (E.g. calc_cv_star, which should cross-reference cv too.)
    • Output of equiv_change_mean: Qualificaiton -> Qualification
    • Some functions have very minor spacing issues in the output, for instance levene_test.
    • basis: preform -> perform.
  • Terminology.
    • In, say, the Anderson--Darling tests, you write "significance" when you report a p-value.
    • Likewise, you write in the documentation that "The significance level is calculated assuming that the parameters of the distribution are unknown; these parameters are estimate from the data." The proper terminology is "p-value", not "significance level", which refers to a significance cutoff (i.e., alpha).
  • Verification:
    • Your normal Anderson--Darling test does not agree with nortest::ad.test. Why is that so?
  • Examples. It would be nice to have examples for all exported functions; this should be easy; just copy from your test_that folder. In particular, the following functions lack examples.
    • levene_test
    • maximum_normed_residual
    • nonpara_binomial_rank

Installation

Calling

devtools::install_github("ComtekAdvancedStructures/cmstatr", build_vignettes = TRUE, build_opts = c("--no-resave-data", "--no-manual"))

fails when rmarkdown, dplyr, and tidyr aren't installed. I suggest (i) adding install.packages("rmarkdown"), and the same for the rest, to the installation instructions, or (ii) adding rmarkdown to Imports instead of Suggests. (iii) It is also possible to add checks in the vignette code to find out whether a package has been installed, and then ask the user to install it right there and then. Don't do it automatically though -- CRAN won't accept it! In my opinion, this third option is not worth the trouble.
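For instance, option (i) would amount to telling users to run something like the following before installing (straight from the suggestions above; rmarkdown, dplyr, and tidyr are the packages named in this thread):

```r
# Install the vignette-building dependencies first, then cmstatr itself
install.packages(c("rmarkdown", "dplyr", "tidyr"))
devtools::install_github(
  "ComtekAdvancedStructures/cmstatr",
  build_vignettes = TRUE,
  build_opts = c("--no-resave-data", "--no-manual")
)
```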

  • Installation. Fix the rmarkdown, dplyr, tidyr issue.

Paper

Design Values are determined such that, with 95% confidence, they are either the 99% or 90% one-sided lower confidence bound of the material strength, depending on the type of structure.

I was strongly confused by this passage, and I think it should be reworded or removed. An option is to use only the language of tolerance intervals/bounds, and provide a reference and introduction to their definition and interpretation. Tolerance intervals are too convoluted to properly define in this paper (or the documentation), but they should be referenced by their proper name so that people (that is, people like me! :p) can easily check whether they are implemented correctly. Moreover, I suggest using the terminology "content" and "confidence level" for p and conf. Since you only care about lower confidence levels, I suggest you remove references to confidence intervals and replace them with confidence levels instead; it is confusing that several functions take conf as an argument but fail to return a confidence interval.

It is nice that you repeatedly state in the documentation that p = 0.9 and conf = 0.95 correspond to a B-Basis and so on, but you should also state what the functions are supposed to calculate when p != 0.9 and conf != 0.95. (That is, the lower limit of a one-sided tolerance interval with content p and confidence level conf.)
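As a concrete sketch of that interpretation (the data here are invented, and the x/p/conf argument names follow the usage discussed in this review, so treat this as illustrative rather than canonical):

```r
library(cmstatr)

# Hypothetical strength observations (units arbitrary)
strength <- c(137.4, 139.2, 135.8, 141.1, 138.5, 136.9, 140.2, 137.7, 139.9, 138.1)

# Defaults correspond to B-Basis: the lower limit of a one-sided tolerance
# interval with content p = 0.90 and confidence level conf = 0.95
basis_normal(x = strength, p = 0.90, conf = 0.95)

# Any other content/confidence gives the corresponding one-sided bound,
# e.g. A-Basis-style values with p = 0.99
basis_normal(x = strength, p = 0.99, conf = 0.95)
```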

A reference for the (to me) surprising calculations used in e.g. basis_normal and k_factor_normal is Statistical Tolerance Regions: Theory, Applications, and Computation (Krishnamoorthy & Mathew, 2009), Section 1.2. But you probably know better references than I do!

  • Documentation: Reference to tolerance bounds.

The rest of the software paper looks fine to me!


kloppen commented Jun 17, 2020

@JonasMoss Thank you very much for your comments. You clearly spent considerable effort in conducting a thorough review of the paper and the package. I really appreciate it!

I'll start addressing your comments and will let you know when I'm done. You were quite clear in what you thought should be changed, so I don't anticipate that I'll need any clarification of your comments, but if something comes up, I'll be sure to ask.

Thanks again!


kloppen commented Jun 25, 2020

Thanks again for your review and comments, @JonasMoss

I've addressed your comments. I've created a separate branch in the cmstatr repo for this, called joss-review. There is a brief response to each of your comments below, and in most cases a commit hash corresponding to the actual change that was made to address the comment. In some cases, several commits affected the same part of the documentation (e.g. fixing a spelling error in one commit, then restructuring a sentence in another), so I'm not sure whether it's easier to look at the individual commits or to just compare the branch in which the changes have been made (the joss-review branch) with master to see everything that's changed. You should be able to compare those two branches here:

https://github.com/ComtekAdvancedStructures/cmstatr/compare/feature/joss-review

If you're satisfied with the changes and the responses below, I'll merge the joss-review branch into master and increment the version number of the package to 0.6.0 (and ask whedon to re-build the paper). If you'd like any additional changes, or would like to discuss any other aspects of the paper or the package, please let me know.

This package is thorough in its documentation and testing and has a clearly
laid out reason to exist. I'm impressed by the authors' devotion to showing
agreement with CMH-17-1G in almost every function. I deeply appreciate
comments such as "The results of this function have been validated against
published values in Lawless (1982).", which demonstrates the seriousness of
the authors. What's more, it has three vignettes and a unified philosophy.
I would have without a doubt used this package if it had been relevant to my work.

Not surprisingly, I only found minor issues with this package.

Documentation

The documentation is usually strong in explaining what the arguments are there for.
But it's often difficult to understand what different functions do. Take k_equiv,
with the description k-factors for equivalency testing. I need an explanation of
what k-factors are and what equivalency testing is (which I suspect is used
differently in this package than the usual statistical terminology "equivalence test").
To help you understand the problem, compare k_equiv with nonpara_binomial_rank,
which is much easier to understand and is well-referenced. You state that it finds
"distribution-free tolerance limits for large samples" using a named method, and
supply a reference for that method. That's what I'd like to see. However, please
fix the reference there.

Commit 0b26e1d: I've updated the documentation for both k_equiv and equiv_mean_extremum.

Commit 4042e15: Added URL to documentation for nonpara_binomial_rank function. I've left the reference in paper.bib pointing to the JSTOR URL and the American Statistician journal so that I can keep the DOI, which does refer to the published version of this paper, not the preprint.

I find it difficult to understand, and thus verify, the following functions too.
(And this problem is compounded when I don't see an example.)

  • maximum_normed_residual

Commit 92ffa21

  • equiv_change_mean needs a reference. Is it a wrapper of a two-sample t-test?
    Then it should say so in Description. What does the modcv = TRUE option do?
    Something described in CMH-17-1G?

Commit 7fdf604

  • equiv_mean_extremum. Could you explain what this function does,
    preferably with a reference?

Commit 0b26e1d

  • k_factor_normal: You should define what the k factor is and provide a
    reference or maybe remove it from the public API.
  • basis_values: Needs a reference to tolerance interval and its definition.

Both addressed in commit 0b115ab

  • Add reference to tolerance intervals, where needed, to explain what you are doing.

These have been added throughout the documentation, as needed.

There might be other functions with too little "what does the function do"
information too. Please be aware that I'm not asking you to rewrite the
documentation (which is good); I just need some more information in some
circumstances. In particular, I have to be able to verify that the exported
functions do what they say they do, as this is one of the unchecked checkboxes --
and then I have to understand what they are supposed to do. ;)

  • Explain what the functions do.
  • Spelling and writing. Some minor issues here too. I haven't hunted for
    spelling errors, and I don't think they are important.
    • Anderson-Darling -> Anderson--Darling (does Rmarkdown support en-dashes though?)

Commit 685aeb9

  • Hanson-Koopmans -> Hanson--Koopmans

Commit ab6b5d0

  • Please write out "CV" at least once when you use it in the documentation;
    this is true for any abbreviation. (E.g. calc_cv_star, which should cross-reference cv too.)

Commit ffd31d5

  • Output of equiv_change_mean: Qualificaiton -> Qualification

Commit cebef8e

  • Some functions have very minor spacing issues in the output, for instance levene_test.

Commit 668ee9b

  • basis: preform -> perform.

Commit b859093

  • Terminology.
    • In, say, the Anderson--Darling tests, you write "significance" when you report a p-value.
    • Likewise, you write in the documentation that "The significance level is calculated assuming
      that the parameters of the distribution are unknown; these parameters are estimate from the
      data." The proper terminology is "p-value"; not "significance level", which refers to a
      significance cutoff (i.e. alpha.)

Commit bc6d412: In the case of the Anderson--Darling tests, I've tried to clear up the terminology, primarily using the term "observed significance level", which is the term used by the CMH-17-1G handbook and by the existing software in use. My understanding is that "observed significance level" (OSL) is a much less common term that means the same thing as p-value. I agree that calling this a p-value would be the more precise terminology, but I do think there is value in using the terminology that the likely users of this package are accustomed to.

  • Verification:
    • Your normal Anderson--Darling test does not agree with nortest::ad.test. Why is that so?

Commit bc6d412. There is a modification, which depends on sample size and distribution, made to the test statistic. This modification is intended to account for the fact that the population parameters are estimated from the sample rather than known. There are at least two sets of modifications in the literature: nortest uses the modification formula from D'Agostino (1986), while cmstatr uses the modification formula from Stephens (1974). The unmodified test statistic is reported by both nortest and cmstatr, and for all the examples that I've tried, these agree. However, the p-value is computed from the modified test statistic, and nortest and cmstatr also use different equations to calculate the p-value from their respective modified statistics, so the resulting p-values differ. I've added some text about this to the details section of the anderson_darling_normal function. The method used in cmstatr matches the method published in CMH-17-1G, which is also the method used in the software currently typical for statistical analysis of composite material data, and I think that matching the results of the current software as much as possible has value.
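To see the difference concretely, one could run both tests on the same sample (a minimal sketch; it assumes anderson_darling_normal accepts a plain numeric vector through its x argument, and the data are made up):

```r
library(cmstatr)
library(nortest)

x <- c(137.4, 139.2, 135.8, 141.1, 138.5, 136.9, 140.2, 137.7)

# cmstatr: reports the unmodified A statistic and an OSL computed from
# the Stephens (1974) modified statistic
anderson_darling_normal(x = x)

# nortest: the same unmodified A statistic, but a p-value computed from
# the D'Agostino (1986) modified statistic, so the p-values differ
ad.test(x)
```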

  • Examples. It would be nice to have examples for all exported functions; this should be easy;
    just copy from your test_that folder. In particular, the following functions lack examples.
    • levene_test

Commit 668ee9b

  • maximum_normed_residual

Commit 92ffa21

  • nonpara_binomial_rank

Commit 608a849. This commit also added or improved the examples for the functions:

  • basis_normal
  • basis_pooled_sd
  • calc_cv_star
  • cv
  • hk_ext
  • k_equiv
  • k_factor_normal
  • normalize_group_mean
  • normalize_ply_thickness

Installation

Calling

devtools::install_github("ComtekAdvancedStructures/cmstatr", build_vignettes = TRUE, build_opts = c("--no-resave-data", "--no-manual"))

fails when rmarkdown, dplyr, and tidyr aren't installed. I suggest (i) adding
install.packages("rmarkdown"), and the same for the rest, to the installation
instructions, or (ii) adding rmarkdown to Imports instead of Suggests.
(iii) It is also possible to add checks in the vignette code to find out whether
a package has been installed, and then ask the user to install it right there
and then. Don't do it automatically though -- CRAN won't accept it!
In my opinion, this third option is not worth the trouble.

  • Installation. Fix the rmarkdown, dplyr, tidyr issue.

Commit 7dc781b addressed this.

Installation was verified through a fresh install of the rocker/rstudio Docker container as follows:

docker run -d -p 8787:8787 -e ROOT=TRUE rocker/rstudio

Install system dependencies inside docker container:

docker ps  # note container ID
docker exec -it <container ID> bash
sudo apt install zlib1g-dev libxml2-dev   

Then, launch http://localhost:8787

I've confirmed that when dplyr and tidyr are not explicitly installed, the installation of cmstatr fails (as you reported). Executing the new installation instructions in README succeeds.

Paper

Design Values are determined such that, with 95% confidence,
they are either the 99% or 90% one-sided lower confidence bound
of the material strength, depending on the type of structure.

I was strongly confused by this passage, and I think it should be
reworded or removed. An option is to use only the language of tolerance
intervals/bounds, and provide a reference and introduction to their definition
and interpretation. Tolerance intervals are too convoluted to properly define in
this paper (or the documentation), but they should be referenced by their proper
name so that people (that is, people like me! :p) can easily check whether they
are implemented correctly. Moreover, I suggest using the terminology "content"
and "confidence level" for p and conf. Since you only care about lower confidence
levels, I suggest you remove references to confidence intervals and replace them
with confidence levels instead; it is confusing that several functions take conf
as an argument but fail to return a confidence interval.

Commit c36c520: The confusing sentence in the paper has been reworked.

It is nice that you repeatedly state in the documentation that p = 0.9 and
conf = 0.95 correspond to a B-Basis and so on, but you should also state what
the functions are supposed to calculate when p != 0.9 and conf != 0.95. (That is,
the lower limit of a one-sided tolerance interval with content p and confidence level conf.)

I've added a sentence about this in the documentation for the basis... functions. While cmstatr is capable of computing tolerance bounds for other contents or confidence levels, these are seldom needed in practice. I don't think that there is a need to discuss this explicitly in the paper due to the infrequent need for practitioners in this field to compute tolerance bounds with different content and/or confidence.

A reference for the (to me) surprising calculations used in e.g. basis_normal and
k_factor_normal is Statistical Tolerance Regions: Theory, Applications, and
Computation (Krishnamoorthy & Mathew, 2009), Section 1.2. But you probably know
better references than I do!

Commit eac7522: I've added a reference to the Krishnamoorthy book to the documentation for the k_factor_normal and basis_normal functions. The "surprising" equation also appears in CMH-17-1G, but having more references certainly can't hurt.
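For what it's worth, the exact one-sided normal tolerance factor from that section of Krishnamoorthy & Mathew reduces to a one-liner using the noncentral t distribution; this is a sketch of the textbook formula, not necessarily how cmstatr implements it:

```r
# Exact one-sided normal tolerance factor (Krishnamoorthy & Mathew, 2009):
# k = t'(conf; df = n - 1, ncp = qnorm(p) * sqrt(n)) / sqrt(n)
k_factor <- function(n, p = 0.90, conf = 0.95) {
  qt(conf, df = n - 1, ncp = qnorm(p) * sqrt(n)) / sqrt(n)
}

k_factor(n = 18)  # compare with cmstatr::k_factor_normal(n = 18)
```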

  • Documentation: Reference to tolerance bounds.

I've added a reference to the basis... documentation and to the paper itself.

The rest of the software paper looks fine to me!

I did notice, as I was addressing your comments, that the object returned by
the maximum_normed_residual function doesn't include the value of alpha
supplied by the user. To be consistent with the other functions, I've added
this in, and also now report the value of alpha used in the print
and glance methods for that class. See commit c46e5e6

@JonasMoss

Thanks, @kloppen! Again, let me say I think this is a very nice package. =)

I think everything looks fine now, @usethedata.


kloppen commented Jun 29, 2020

The changes that I made to address JonasMoss' comments were in the feature/joss-review branch of the cmstatr repository. I've now merged those changes into master.

I'll re-generate the PDF:
@whedon generate pdf

I did bump the software version number to 0.6.0 because there were some minor changes that resulted from the review. I think that only editors can ask whedon to change the version number, so @usethedata, can you please update the version number for cmstatr to 0.6.0?

I haven't been through the process of publishing in JOSS before, so please let me know what, if anything, I need to do as the author at this point.


kloppen commented Jun 29, 2020

@whedon generate pdf

@usethedata

@whedon set 0.6.0 as version


whedon commented Jul 4, 2020

OK. 0.6.0 is the version.

@usethedata

@whedon check references


whedon commented Jul 4, 2020

Reference check summary:

OK DOIs

- 10.2307/2682297 is OK
- 10.1080/01621459.1987.10478517 is OK
- 10.1080/03610919408813222 is OK
- 10.1198/004017002188618428 is OK
- 10.21105/joss.01686 is OK
- 10.1002/9780470473900 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@usethedata

@kloppen I've created an issue in your repository with a couple of nits to consider and the next steps in the process: cmstatr/cmstatr#19


kloppen commented Jul 4, 2020

@whedon generate pdf


kloppen commented Jul 4, 2020

@usethedata :

The Zenodo DOI is: 10.5281/zenodo.3930475

I've fixed the couple of little nits that you raised on cmstatr/cmstatr#19. I did not change the software version number (so, it's still 0.6.0).

Please let me know if you need me to do anything else.

@usethedata

@whedon set 10.5281/zenodo.3930475 as archive


whedon commented Jul 5, 2020

OK. 10.5281/zenodo.3930475 is the archive.

@usethedata

@whedon generate pdf

@usethedata

@whedon check references


whedon commented Jul 5, 2020

Reference check summary:

OK DOIs

- 10.2307/2682297 is OK
- 10.1080/01621459.1987.10478517 is OK
- 10.1080/03610919408813222 is OK
- 10.1198/004017002188618428 is OK
- 10.21105/joss.01686 is OK
- 10.1002/9780470473900 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@usethedata

@whedon accept


whedon commented Jul 5, 2020

Attempting dry run of processing paper acceptance...

whedon added the recommend-accept label Jul 5, 2020

whedon commented Jul 5, 2020

Reference check summary:

OK DOIs

- 10.2307/2682297 is OK
- 10.1080/01621459.1987.10478517 is OK
- 10.1080/03610919408813222 is OK
- 10.1198/004017002188618428 is OK
- 10.21105/joss.01686 is OK
- 10.1002/9780470473900 is OK

MISSING DOIs

- None

INVALID DOIs

- None


whedon commented Jul 5, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#1538

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1538, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@danielskatz

@kloppen - I've suggested some small changes to the paper in cmstatr/cmstatr#20 - Once these are made/merged (or you let me know what you disagree with), we can finish the publishing of this work.


kloppen commented Jul 5, 2020

Thanks for your corrections, @danielskatz. I've merged your PR (and then made a subsequent commit to roll those changes into the R-Notebook that generates the paper.md file; that subsequent commit is 923157e).

Please let me know if there is anything else I need to do as the author.


kloppen commented Jul 5, 2020

@whedon generate pdf

@danielskatz

@whedon accept


whedon commented Jul 5, 2020

Attempting dry run of processing paper acceptance...


whedon commented Jul 5, 2020

Reference check summary:

OK DOIs

- 10.2307/2682297 is OK
- 10.1080/01621459.1987.10478517 is OK
- 10.1080/03610919408813222 is OK
- 10.1198/004017002188618428 is OK
- 10.21105/joss.01686 is OK
- 10.1002/9780470473900 is OK

MISSING DOIs

- None

INVALID DOIs

- None


whedon commented Jul 5, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#1544

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1544, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@danielskatz

@whedon accept deposit=true


whedon commented Jul 5, 2020

Doing it live! Attempting automated processing of paper acceptance...

whedon added the accepted and published labels Jul 5, 2020

whedon commented Jul 5, 2020

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦


whedon commented Jul 5, 2020

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.02265 joss-papers#1545
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.02265
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...


danielskatz commented Jul 5, 2020

Thanks to @myousefi2016 & @JonasMoss for reviewing, and @usethedata for editing!

Congratulations to @kloppen (Stefan Kloppenborg)!!


whedon commented Jul 5, 2020

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02265/status.svg)](https://doi.org/10.21105/joss.02265)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02265">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02265/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02265/status.svg
   :target: https://doi.org/10.21105/joss.02265

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:


kloppen commented Jul 5, 2020

I'd like to thank @JonasMoss, @myousefi2016, @usethedata and @danielskatz for volunteering their time. I really appreciate it!

@JonasMoss

Congratulations! @kloppen 🎊 🎉
