[REVIEW]: pySYD: Automated measurements of global asteroseismic parameters #3331

Closed · 40 tasks done
whedon opened this issue Jun 4, 2021 · 130 comments

Labels:
accepted · published (Papers published in JOSS) · Python · recommend-accept (Papers recommended for acceptance in JOSS) · review · TeX · Track: 1 (AASS) Astronomy, Astrophysics, and Space Sciences

Comments

@whedon

whedon commented Jun 4, 2021

Submitting author: @ashleychontos (Ashley Chontos)
Repository: https://github.com/ashleychontos/pySYD
Branch with paper.md (empty if default branch): master
Version: v6.10.0
Editor: @mbobra
Reviewers: @danhey, @benjaminpope
Archive: 10.5281/zenodo.7301604

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

[status badge image]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/6465a9dd3141c207175f200c7f891f1e"><img src="https://joss.theoj.org/papers/6465a9dd3141c207175f200c7f891f1e/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/6465a9dd3141c207175f200c7f891f1e/status.svg)](https://joss.theoj.org/papers/6465a9dd3141c207175f200c7f891f1e)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@danhey & @benjaminpope, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. For any questions or concerns, please let @mbobra know.

Please start on your review when you are able, and be sure to complete it within the next six weeks, at the very latest.

Review checklist for @danhey

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@ashleychontos) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @benjaminpope

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@ashleychontos) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon
Author

whedon commented Jun 4, 2021

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @danhey, @benjaminpope it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:

[screenshot: repository watch settings]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon
Author

whedon commented Jun 4, 2021

Software report (experimental):

github.com/AlDanial/cloc v 1.88  T=0.06 s (535.1 files/s, 76350.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          10            483           1036           1812
TeX                              1             21              0            402
reStructuredText                13            207            192            311
Markdown                         3             30              0            121
YAML                             3              0              0             34
DOS Batch                        1              8              1             26
make                             1              4              6              9
TOML                             1              0              0              6
-------------------------------------------------------------------------------
SUM:                            33            753           1235           2721
-------------------------------------------------------------------------------


Statistical information for the repository 'affe02802b3915c638d92384' was
gathered on 2021/06/04.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
Ashley Chontos                 129         21985          19881           93.97
Maryum Sayeed                   10           444            241            1.54
Pavadol Yamsiri                  1          1233            261            3.35
danxhuber                        9           344            165            1.14

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
Ashley Chontos             3162           14.4          1.2                9.84
Maryum Sayeed               169           38.1          0.0                0.59

@whedon
Author

whedon commented Jun 4, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1103/RevModPhys.93.015001 is OK
- 10.1051/0004-6361/201322068 is OK
- 10.3847/1538-3881/aabc4f is OK
- 10.1038/nature12419 is OK
- 10.1088/0067-0049/210/1/1 is OK
- 10.1051/0004-6361/201424181 is OK
- 10.1111/j.1365-2966.2009.16030.x is OK
- 10.1051/0004-6361/201015185 is OK
- 10.1088/0004-637X/743/2/143 is OK
- 10.1051/0004-6361/200913266 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1051/0004-6361/200912944 is OK
- 10.3847/1538-3881/abcd39 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.3847/1538-4365/aa97df is OK
- 10.1111/j.1365-2966.2011.18968.x is OK
- 10.1093/mnrasl/sly123 is OK

MISSING DOIs

- 10.1017/cbo9781139333696.004 may be a valid DOI for title: Solar-like oscillations: An observational perspective
- 10.1553/cia160s74 may be a valid DOI for title: Automated extraction of oscillation parameters for Kepler observations of solar-type stars

INVALID DOIs

- None

@whedon
Author

whedon commented Jun 4, 2021

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@mbobra
Member

mbobra commented Jun 4, 2021

👋 @danhey @benjaminpope Thank you so much for agreeing to review! You can find the article in the comment box above ⬆️ , and the checklist and software repository linked in the first comment box on this issue. I think you're good to go -- please let me know if you need anything else.

Again, JOSS is an open review process and we encourage communication between the reviewers, the submitting author, and the editor. So please feel free to ask questions at any time, I'm always around.

@benjaminpope

Intro Comments:

  • Thanks for inviting me to review the pySYD release - this is an important piece of software that will bring asteroseismology from the closed-source dark ages to an open-source renaissance (I hope!). IDL-SYD is well-used and well-known, but together with some other major pipelines it has not been available open-source, which means that asteroseismology as a field suffers from a lack of reproducibility and accessibility. pySYD will be an important step forward that, I am pretty sure, will enable a great deal of exciting science.
  • The release of pySYD as documented so far isn't quite where we need to be for JOSS or for optimal use for the community. In particular, there are two high level issues:
    • The testing of the pySYD software against IDL-SYD is not documented in this paper, or cited from elsewhere, so there is actually a gap in reproducibility introduced.
    • The documentation of pySYD is not quite at the stage where it will be optimally user-friendly, and we probably need to have a bit more hand-holding with the examples to help people like me pick it up and play.
  • It is also not clear to me whether this is intended as a purely command-line tool, or whether the API is sufficiently useful for working with other code, for example in a Jupyter notebook or as part of a larger script. If this is beyond the scope of the project it is fine, but I didn't get a sense from the paper or the documentation what kind of workflow was expected, supported, or possible.

Installation:

  • pip install works, and pysyd setup works too.
  • Could pysyd setup put data in an install directory, rather than in the current directory?

Unit tests:

  • As far as I can tell, there are no unit tests or continuous integration solutions, at least as-is on the GitHub repo. The JOSS review requires automated tests.

Repository:

  • There are no community guidelines as required by JOSS.

Examples:

  • We get nice and fast performance from pysyd run -star 1435467 -show -verbose as expected from the Getting Started / Example Fit part of the readthedocs. On this page, you should include the expected output though - so we know if the graphs are working ok.
  • The output from this example differs from the one on the page! I understand this is because of the differing -mc flag, but I'm not sure why one command is shown in full (with no output), and the other output is shown in full (but no command).
  • The Quickstart example should have an example output. Also not clear how this is different from Example Fit.
  • The examples are nice graphs... but supposing we did pip install, it's not immediately obvious where the data live, and no commands are given to reproduce the examples. So after "you are ready to test out the software!" there should be a few worked examples showing commands and outputs!
  • The figures take a very long time to render and close, at least on my MacBook Pro. Perhaps the examples might like to have -save flags by default so that these don't cause an issue.

Manuscript:

  • Summary - this probably needs to be reworked. Currently it is a summary of asteroseismology, but not of pySYD. You can cut down some of the explanation of the physics if necessary, but you certainly need to add a bit about what pySYD is and what you use it for.
  • In the first paragraphs - there is a comment that SYD has been tested against closed-source pipelines. It is my understanding that SYD itself is not yet open source, and that this very paper is the open-source release - so you should clarify whether SYD too is closed-source.
  • Should cite Borucki and Ricker papers for Kepler and TESS citations.
  • The comparison of pySYD and IDL-SYD shows nonzero scatter... but this is ostensibly a translation of the IDL-SYD pipeline, so I would expect them to be even closer. This is important because if reproducibility of legacy Kepler results is a goal, systematics in IDL-SYD should also be reproduced! If it is a reworking of the same ideas, but not a translation, the results will differ. There should probably be some comments about this. 0.5% is not quite trivial, even if the uncertainties are typically of order a percent, once you average over many stars.
  • The IDL-SYD to pySYD benchmarks are not open source - are these linked anywhere? This comparison is not reproducible - we don't know which stars these are, or what the exact time baselines are (3-11 months is quite a range). I think this comparison is an important part of the paper, given pySYD is about reproducibility and extension of the IDL-SYD pipeline.
  • I think it should say earlier in the paper that this is about solar-like oscillations, as opposed to (say) classical pulsations.

@ashleychontos

Thank you so much @benjaminpope for providing this extremely helpful review so quickly! We also acknowledge the importance of reproducing the paper figure but have been stuck on the best way to implement this. We were wondering if JOSS has a recommended data-hosting website, something analogous to CDS for data releases associated with astronomy papers. Providing the power spectra to reproduce this figure would require roughly 300 MB of data, probably too large to host directly on our GitHub repo.

@danhey

danhey commented Jun 9, 2021

Hi all, this has been a pleasure to review! It's a great piece of software, and I have no doubt it will be extremely useful for asteroseismology. Overall, I believe pySYD satisfies the JOSS criteria for publication after a few points have been addressed.

I agree with all of Ben's comments, especially regarding the distinction between a command-line vs. API tool. It appears to me that the functionality is there for using pySYD from, say, a Jupyter notebook, but this has not been documented.

I will try to structure my review in the same format as Ben's.

Installation

  • Installation is fine from a fresh Conda environment. However, tqdm appears to be a requirement that is not listed in pySYD's dependencies.

Unit tests

  • I agree with Ben here, there don't appear to be any unit tests! It's important to have unit tests, especially for a tool that will hopefully become a standard in asteroseismology. I suggest, at a minimum, including tests for the core functionality (i.e., input a star -> ensure the outputs are exactly the same; see the sketch after this list). If you need some inspiration on how testing is done for an astronomy Python package, Lightkurve does a very good job of it. Check out their tests, and their GitHub Actions workflow.
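
For concreteness, here is a minimal pytest-style sketch of the kind of end-to-end check I mean (the flags mirror the examples in this thread; whether -save behaves exactly this way is my assumption):

import subprocess

def test_cli_runs_on_known_star():
    # Run the pipeline on the example star used throughout this thread.
    # Assumes the example data are already in place (e.g. via pysyd setup)
    # and that -save writes results to disk instead of opening plots.
    result = subprocess.run(
        ["pysyd", "run", "-star", "1435467", "-save"],
        capture_output=True, text=True,
    )
    # A clean exit is the weakest useful check; a stricter regression test
    # would compare the saved outputs against previously verified values.
    assert result.returncode == 0, result.stderr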

Repository

  • No community guidelines.
  • Perhaps a minor oversight, but pySYD doesn't seem to specify which Python version(s) it is compatible with. I know many astronomers still use Python 2.7 for compatibility; will pySYD work with that?

Examples and functionality

  • The examples all work fine for me.
  • It is not clear in the documentation that the plots must be closed for the pipeline to proceed; i.e., when I ran the "pysyd run -star 1435467 -show -verbose" command, I had to close the plots as they appeared for the pipeline to continue. They should either close automatically, display a prompt before continuing, or the pipeline should simply continue and create new plots as it goes.
  • Would it be prudent to include an example for TESS data? And on that note, how does pySYD deal with significant gaps in the data? This is somewhat of a new issue because the Kepler data was so good! I've noticed that there's a pysyd.utils.stitch_data() function for gapped data; perhaps a warning should also be thrown when this function is used (see the sketch after this list)? This could be fixed in one step: provide an example for a TESS star that has a large data gap in it!
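
For the warning, something generic like this sketch would do (an illustration only, not pySYD's actual stitch_data() behavior; the threshold is an arbitrary choice):

import warnings
import numpy as np

def warn_on_gaps(time, gap_factor=10.0):
    # Flag gaps much longer than the median cadence before stitching.
    dt = np.diff(np.sort(time))
    cadence = np.median(dt)
    n_gaps = int(np.sum(dt > gap_factor * cadence))
    if n_gaps:
        warnings.warn(
            f"Light curve contains {n_gaps} gap(s) longer than "
            f"{gap_factor}x the median cadence; stitching may bias "
            "the background fit and the power spectrum."
        )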

Manuscript

  • Overall the manuscript is well written and clear, I like it!
  • The scatter in the IDL-SYD and pySYD comparison plots also concerns me a bit. If the goal is to completely reproduce the functionality of the IDL version, then there should be little to no scatter. Do we know what causes these deviations, or are they purely the result of random sampling? And which pipeline's output would we expect to give the more accurate result?
  • Regarding the open sourcing of the benchmarks:

Providing the power spectra to reproduce this figure would be about ~300 MB of data and therefore probably too large to provide directly on our GitHub repo.

Only a suggestion, but would it be easier to instead write a small utility function in pySYD that calculates the power spectrum given an input light curve (sketched below)? This would also be useful for the general functionality of pySYD, allowing people to supply a light curve. It would also benefit reproducibility: I have seen a lot of different power spectrum normalizations in my time ...
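
To sketch what I mean, using astropy's Lomb-Scargle periodogram (the units and normalization here are my own assumptions - pinning those down is exactly the point):

import numpy as np
from astropy.timeseries import LombScargle

def power_spectrum(time, flux, oversample=5):
    # Compute a power spectrum from a light curve with time in days
    # (assumed sorted), returning frequencies in muHz. The 'psd'
    # normalization is one of several conventions such a utility
    # would need to make explicit.
    t_sec = np.asarray(time) * 86400.0
    baseline = t_sec.max() - t_sec.min()
    nyquist = 0.5 / np.median(np.diff(t_sec))
    freq = np.arange(1.0 / baseline, nyquist, 1.0 / (oversample * baseline))
    power = LombScargle(t_sec, flux).power(freq, normalization="psd")
    return freq * 1e6, power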

@benjaminpope

Agree with all of @danhey's comments.

Re @ashleychontos' question about data hosting - there are a few ways to do this... you could just upload a static .ipynb that implements the required calculations, and describe where the input data come from - I imagine these are programmatically pulled from MAST or KASOC/TASOC. Or you could do it all with git-lfs. Finally, many universities have large, permanent data storage solutions - e.g. UQ does - so you could upload the relevant datasets there and link to those.

@mbobra mbobra removed the waitlisted Submissions in the JOSS backlog due to reduced service mode. label Jun 9, 2021
@mbobra
Member

mbobra commented Jun 9, 2021

@ashleychontos Agree with @benjaminpope. You can provide code that queries and downloads the data and then makes the figure from that downloaded data (see the sketch below). Then it is the user's responsibility to store the data (at ~300 MB, not too big an ask), but all they need to do to reproduce the figure is run the provided code.
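
For example, assuming the benchmark stars are Kepler targets available on MAST, lightkurve is one convenient way to script the query (KIC 1435467 here is just the example star from this thread):

import lightkurve as lk

# Search MAST for short-cadence Kepler light curves of one star,
# download all quarters, and stitch them into a single light curve.
search = lk.search_lightcurve("KIC 1435467", mission="Kepler", cadence="short")
lc = search.download_all().stitch()
lc.plot()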

@whedon
Author

whedon commented Jun 18, 2021

👋 @danhey, please update us on how your review is going (this is an automated reminder).

@whedon
Author

whedon commented Jun 18, 2021

👋 @benjaminpope, please update us on how your review is going (this is an automated reminder).

@ashleychontos

@mbobra apologies in advance since this is all of our (the authors') first time preparing anything like this. First and foremost, what is the most appropriate way to address and comment on either/both of the reviews (similar to a resubmission with AAS)? Below are comments from the reviews that we could use some clarification on before proceeding.

  1. Re: the unit tests. The scope of a unit test is clearer for, say, a squaring function, where one can easily assert that the result is indeed the square of the input. However, it is less clear to us how we could implement this for a more complex software package like pySYD. @benjaminpope brought up a fair point about the documentation examples not matching the output, which was true and actually the result of an older version of both the software and documentation. We (the authors) have discussed the possible implementation of such a test suite, but were first curious whether it would be sufficient if the examples in the documentation matched the output verbatim.
  2. Re: the JOSS community guidelines. Our documentation homepage provides both an invitation for code contributions as well as a link to report any issues with the software. Should this content be a page of its own, or highlighted in some other way that meets the JOSS requirements?

If the above test is not sufficient, we have discussed two other alternatives and would like to hear any comments or suggestions (@danhey included). The first is providing enough examples to cover all optional arguments, to make sure no error is produced when they are all called (additionally, a plot could be produced to show the before and after when those arguments are used). The second idea was to have a counter that runs through all functions and, if successful, asserts that the number returned equals the number of functions in the pipeline. This would also account for optional functions that are not required for the software to run successfully.

@benjaminpope

@ashleychontos thanks! So as a gold standard example, @dfm did this for the exoplanet review: #3285 (comment), in which he links to some GitHub Issues that resolve each of the reviews as a checklist. So you might like to create a checklist, in which you respond point by point to each of the reviews, or you might like to create a separate issue for each point, or something like that.

If the examples in the documentation match the output verbatim, that will be good! It will also be good to have richer output - for example, figures (autosaved, rather than click-through) and terminal output, so that we can test that it reproduces the examples exactly. (And a GitHub Action to test one or more examples, even with just a checksum, would meet the continuous testing requirement; see the sketch below.)
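
By a checksum I mean something as simple as the following sketch (the output path and digest are placeholders, and exact checksums will be brittle if the pipeline involves random sampling - a fixed seed or a tolerance-based comparison may be needed):

import hashlib
from pathlib import Path

def test_example_output_checksum():
    # Hash whatever file the pipeline writes for the example star and
    # compare to a digest recorded from a verified run.
    digest = hashlib.sha256(
        Path("results/1435467/global.csv").read_bytes()  # placeholder path
    ).hexdigest()
    assert digest == "<known-good-digest>"  # placeholder value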

@danhey

danhey commented Jul 1, 2021

Agree with @benjaminpope. I also think a good unit test for GitHub Actions would be to simply run pySYD on a test star for which numax and dnu are well known, and assert that the output of the pipeline is np.isclose() to what you expect. This means that if you make some changes to the code, you will instantly see if the new output is significantly different, and the test will fail.
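
In code, the pattern is just this (the entry point and reference values below are placeholders, not pySYD's actual API or vetted literature numbers):

import numpy as np

def run_pipeline(star_id):
    # Placeholder for however pySYD exposes its results programmatically;
    # a real test would invoke the pipeline here and return (numax, dnu).
    return 1300.0, 70.5

def test_global_parameters():
    numax, dnu = run_pipeline("1435467")
    # Allow ~1% deviation from the reference values.
    assert np.isclose(numax, 1300.0, rtol=0.01)
    assert np.isclose(dnu, 70.5, rtol=0.01)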

@ashleychontos

Thank you both @benjaminpope and @danhey again for such quick and helpful responses! I will follow Ben's suggestion and open up a separate pull request for each review and update those accordingly.

@mbobra
Member

mbobra commented Jul 2, 2021

First and foremost, what is the most appropriate way to address and comment on either/both of the reviews (similar to a resubmission with AAS)?

👋 @ashleychontos It looks like you already got your answer -- but in any case, I will provide a rather vague response: as long as the reviewers feel that (1) you have addressed their concerns and (2) the software adheres to the JOSS guidelines, I will accept the submission!

@danielskatz

@mbobra - can you provide an update about this submission? It seems stuck...

@mbobra mbobra added the paused label Sep 30, 2021
@mbobra
Member

mbobra commented Sep 30, 2021

I am going to pause the review. This gives @ashleychontos time to improve their code and also gives reviewers a break from keeping up with this review thread.

@ashleychontos There's nothing wrong (or negative) with pausing a review. If you know you'll be done by a certain time, we can keep it paused. If you're not sure how long all this will take (or if you no longer want to pursue it), you can withdraw this submission and resubmit at a later date. There's nothing wrong with that and it won't impact your next submission.

@ashleychontos

Thank you for clarifying @mbobra - the primary three developers of pySYD are tied up at the moment (including myself, currently busy with postdoc applications), hence the delay. I think we still want to pursue it and we agree with all the reviewers' comments, but a likely time scale for when I can revisit this is ~2 months? Is that an OK amount of time to be paused?

@editorialbot
Collaborator

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1103/RevModPhys.93.015001 is OK
- 10.1051/0004-6361/201322068 is OK
- 10.3847/1538-3881/aabc4f is OK
- 10.1038/nature12419 is OK
- 10.1017/cbo9781139333696.004 is OK
- 10.1126/science.1185402 is OK
- 10.1088/0067-0049/210/1/1 is OK
- 10.1051/0004-6361/201424181 is OK
- 10.1111/j.1365-2966.2009.16030.x is OK
- 10.1051/0004-6361/201015185 is OK
- 10.1553/cia160s74 is OK
- 10.1088/0004-637X/743/2/143 is OK
- 10.1051/0004-6361/200913266 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1051/0004-6361/200912944 is OK
- 10.3847/1538-3881/abcd39 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.5281/zenodo.3509134 is OK
- 10.1117/1.JATIS.1.1.014003 is OK
- 10.1214/aos/1176344136 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.3847/1538-4365/aa97df is OK
- 10.1111/j.1365-2966.2011.18968.x is OK
- 10.1093/mnrasl/sly123 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@mbobra
Member

mbobra commented Nov 7, 2022

@editorialbot recommend-accept

@editorialbot
Collaborator

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1103/RevModPhys.93.015001 is OK
- 10.1051/0004-6361/201322068 is OK
- 10.3847/1538-3881/aabc4f is OK
- 10.1038/nature12419 is OK
- 10.1017/cbo9781139333696.004 is OK
- 10.1126/science.1185402 is OK
- 10.1088/0067-0049/210/1/1 is OK
- 10.1051/0004-6361/201424181 is OK
- 10.1111/j.1365-2966.2009.16030.x is OK
- 10.1051/0004-6361/201015185 is OK
- 10.1553/cia160s74 is OK
- 10.1088/0004-637X/743/2/143 is OK
- 10.1051/0004-6361/200913266 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1051/0004-6361/200912944 is OK
- 10.3847/1538-3881/abcd39 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.5281/zenodo.3509134 is OK
- 10.1117/1.JATIS.1.1.014003 is OK
- 10.1214/aos/1176344136 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.3847/1538-4365/aa97df is OK
- 10.1111/j.1365-2966.2011.18968.x is OK
- 10.1093/mnrasl/sly123 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator

👋 @openjournals/aass-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3690, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Nov 7, 2022
@mbobra
Member

mbobra commented Nov 7, 2022

Congratulations, @ashleychontos 🥳 It was a long ride but I hope you feel the JOSS review process improved the software. I think the final result turned out awesome. The EiC team @openjournals/joss-eics will take it from here!

@ashleychontos

ashleychontos commented Nov 7, 2022

Oh absolutely @mbobra, I learned more about software and development throughout this process than I ever knew in pre-JOSS times combined. Thank you again SO much for everything, especially for riding this out with me!

@dfm

dfm commented Nov 7, 2022

@ashleychontos — I've opened a PR with a few formatting edits to the bibliography, but we should be good to go after that: ashleychontos/pySYD#42

@ashleychontos

Great @dfm, I think it should be all set now!

@dfm dfm removed the recommend-accept Papers recommended for acceptance in JOSS. label Nov 7, 2022
@dfm

dfm commented Nov 7, 2022

@editorialbot recommend-accept

@editorialbot
Collaborator

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1103/RevModPhys.93.015001 is OK
- 10.1051/0004-6361/201322068 is OK
- 10.3847/1538-3881/aabc4f is OK
- 10.1038/nature12419 is OK
- 10.1017/cbo9781139333696.004 is OK
- 10.1126/science.1185402 is OK
- 10.1088/0067-0049/210/1/1 is OK
- 10.1051/0004-6361/201424181 is OK
- 10.1111/j.1365-2966.2009.16030.x is OK
- 10.1051/0004-6361/201015185 is OK
- 10.1553/cia160s74 is OK
- 10.1088/0004-637X/743/2/143 is OK
- 10.1051/0004-6361/200913266 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1051/0004-6361/200912944 is OK
- 10.3847/1538-3881/abcd39 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.5281/zenodo.3509134 is OK
- 10.1117/1.JATIS.1.1.014003 is OK
- 10.1214/aos/1176344136 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.3847/1538-4365/aa97df is OK
- 10.1111/j.1365-2966.2011.18968.x is OK
- 10.1093/mnrasl/sly123 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator

👋 @openjournals/aass-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3691, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Nov 7, 2022
@dfm

dfm commented Nov 7, 2022

@editorialbot accept

@editorialbot
Collaborator

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot
Collaborator

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.03331 joss-papers#3692
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.03331
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels Nov 7, 2022
@dfm

dfm commented Nov 7, 2022

@danhey, @benjaminpope — many thanks for your reviews here and to @mbobra for editing this submission! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@ashleychontos — your paper is now accepted and published in JOSS ⚡🚀💥

@dfm dfm closed this as completed Nov 7, 2022
@editorialbot
Collaborator

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.03331/status.svg)](https://doi.org/10.21105/joss.03331)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.03331">
  <img src="https://joss.theoj.org/papers/10.21105/joss.03331/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.03331/status.svg
   :target: https://doi.org/10.21105/joss.03331

This is how it will look in your documentation:

[DOI badge image]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
