
[REVIEW]: dlmmc: Dynamical linear model regression for atmospheric time-series analysis #1157

Closed · 36 tasks done
whedon opened this issue Jan 7, 2019 · 63 comments
Labels: accepted, published (Papers published in JOSS), recommend-accept (Papers recommended for acceptance in JOSS), review

Comments

@whedon

whedon commented Jan 7, 2019

Submitting author: @justinalsing (Justin Alsing)
Repository: https://github.com/justinalsing/dlmmc
Version: 1.1
Editor: @danielskatz
Reviewer: @Chilipp, @taqtiqa-mark
Archive: 10.5281/zenodo.2660704

Status


Status badge code:

HTML: <a href="http://joss.theoj.org/papers/af268dfacc4bff0fd37a70602d9e74b6"><img src="http://joss.theoj.org/papers/af268dfacc4bff0fd37a70602d9e74b6/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/af268dfacc4bff0fd37a70602d9e74b6/status.svg)](http://joss.theoj.org/papers/af268dfacc4bff0fd37a70602d9e74b6)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@Chilipp & @taqtiqa-mark, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @danielskatz know.

Please try and complete your review in the next two weeks

Review checklist for @Chilipp

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: 1.1
  • Authorship: Has the submitting author (@justinalsing) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @taqtiqa-mark

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: 1.1
  • Authorship: Has the submitting author (@justinalsing) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
@whedon

whedon commented Jan 7, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @Chilipp, it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

@whedon

whedon commented Jan 7, 2019

Attempting PDF compilation. Reticulating splines etc...


@danielskatz

@Chilipp, @taqtiqa-mark - thanks for agreeing to review. (Note that @Chilipp will not be able to start for a week.)

If you have any questions, please ask. Otherwise, I'll expect that you will work through the checklists above and raise issues as you find items you cannot check off.

@danielskatz

👋 @Chilipp, @taqtiqa-mark - I hope you can start reviewing this soon - at least, the code of conduct and conflict of interest boxes for each of you...

@Chilipp

Chilipp commented Jan 17, 2019

Yes, @danielskatz, I will start tomorrow. My apologies for the delay

@danielskatz

👋 @taqtiqa-mark - are you able to get started too?

@Chilipp

Chilipp commented Jan 18, 2019

Hello @justinalsing! I am so far done with my first look into your software and I would like to have your feedback on some of the following points:

General checks

  • Version: I could not find any version associated with your repository; is it written somewhere in your code?

Functionality

Documentation

  • A statement of need: I understand that DLMs are great and useful to climate and especially atmospheric science. But I do not understand what the scope of your software is. Is it only that you provide the suite of four models? Since your software has a quite general name I wonder whether you will extend them in the future and aim for a large suite of DLMs to which others can contribute as well?
  • Installation instructions: see justinalsing/dlmmc#3 (installation failure with pip). Additionally, I have the impression that you need a certain background in python and jupyter to be able to run through your tutorial. I strongly recommend providing some statements about the prior knowledge you expect from the user of your software.
  • Functionality documentation: Again, what are you aiming for? You provide instructions for how to use your models, but if you aim to make the models extendable, you should also give guidelines on how to extend them.
  • Automated tests: I could not find any. Could you please tell me how you test the functionality?
  • Community guidelines: You have a short section in your README (https://github.com/justinalsing/dlmmc/blob/master/README.md#contributions-reporting-issues-and-feature-requests) but I strongly recommend providing a CONTRIBUTING file (see https://help.github.com/articles/setting-guidelines-for-repository-contributors/). However, this is not a formal requirement of JOSS as far as I know. Nevertheless, to my impression you are lacking clear guidelines on how to contribute and how to seek support.

Software paper

  • A statement of need: As noted above, is your aim to distribute the four models or to provide a framework that can serve as a suite for more DLMs?
  • References: For the users of your package, please correct the citations in the model_descriptions.pdf. They are lacking, e.g. the full list of authors and DOIs.

@danielskatz

👋 @justinalsing - note that we're waiting for your inputs here

@justinalsing

Hi @Chilipp, @danielskatz, @taqtiqa-mark

Thanks for all the feedback and sorry for the delay. Let me try to address your points:

Version: thanks for pointing this out! I’ve just made a release, v1.0.

Installation: I saw you had problems installing with pip. The dependency that typically causes problems (as in your case) is pystan, since it needs to play nicely with the C++ compilers on your machine. For some it has worked with pip, for others it seems to work with conda but not pip. I’ve added a note to the README that says you can use “conda install (recommended) or pip” and another note, “Note: conda seems to be more stable for getting pystan to work than pip - if you installed with pip and compiling the stan models throws errors, try again with conda.” Do you have any other suggestions for making the install instructions more solid?

I’ve also added a point in the README that prior knowledge of python/jupyter is needed to be able to fully follow the tutorial, as suggested. Agree this was a good idea.

Statement of need: yes, I intend that the HMC-Kalman filtering framework can be used to expand the suite of models beyond those initially provided. Essentially, the space of DLM models is characterized by a few model matrices; more or less all that needs modifying in the stan model code to implement a new one is to replace the model matrices and add any additional hyper-parameters. I’ve now provided a CONTRIBUTING file, as you suggested, giving details of how to contribute extended models to the suite. It does assume familiarity with stan, which I think is needed to contribute meaningfully. I’d be happy to give further details here if you feel they are needed and am very open to suggestions on how to improve this.
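
To make the model-matrices point concrete, here is a minimal NumPy sketch (illustrative only, not dlmmc's actual Stan code) of a DLM and a single Kalman filter step; swapping the matrices G, F, W and V is essentially all that distinguishes one DLM from another:

# Minimal DLM sketch (illustrative): the model is specified by its matrices.
import numpy as np

def kalman_step(m, P, y, F, G, W, V):
    # state:       theta_t = G @ theta_{t-1} + w_t,  w_t ~ N(0, W)
    # observation: y_t     = F @ theta_t     + v_t,  v_t ~ N(0, V)
    m_pred = G @ m                         # predict the state mean
    P_pred = G @ P @ G.T + W               # predict the state covariance
    S = F @ P_pred @ F.T + V               # innovation covariance
    K = P_pred @ F.T @ np.linalg.inv(S)    # Kalman gain
    m_new = m_pred + K @ (y - F @ m_pred)  # update with the observation
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new

# A local linear trend model, defined purely by its matrices; a different
# DLM would swap these (and its hyper-parameters) but reuse kalman_step.
G = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition
F = np.array([[1.0, 0.0]])              # observation matrix
W = np.diag([1e-4, 1e-6])               # state innovation covariance
V = np.array([[0.1]])                   # observation noise covariance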

Automated tests: In my mind the tutorial also serves this purpose: if the models compile and the tutorial runs through without error, you’re in good shape. I’ve added a note to the README that the tutorial also serves this purpose. However, if you feel additional tests are required I am open to suggestions.

Community guidelines: see above re the added CONTRIBUTING file.

References: I’ve updated the references in the model_descriptions.pdf as suggested (thanks for pointing those out).

I hope this addresses your points and look forward to hearing your further thoughts



Thanks again!

@taqtiqa-mark

taqtiqa-mark commented Jan 31, 2019

My apologies for the delay. It took a while to find my first notes, which follow:

I've had a quick look at the model PDF and some of the repository and can say, while it's fresh in my mind:

The use-case appears to be reasonably well motivated - implementing a known/published model, and a non-trivial software stack (e.g. parallel libraries).

My working assumption is that the question of originality is out of scope - it suffices that the library implements a model already published. Consequently, while non-linear AR/(G)ARCH models are widespread, I haven't investigated whether this improves on them. However, I would suggest that data sets thought to have those properties, e.g. from the R project, be used in the test suite.

Given that JOSS is software-focused, I'm placing all emphasis on the software engineering aspects, and almost none on the contribution to domain knowledge.

The non-trivial nature of the software stack suggests weight be given to the following observations:

  • No test suite (some examples, but not formulated in a way that breaking changes would be flagged). I would expect tests using known null models, known data sets and results from a reference implementation, e.g. from the R datasets package. I would also expect tests using data simulated from an alternate model - this would at least confirm the model recovers the true parameters (within a margin), and flag when anything breaks.

  • No software documentation generated (that I can see). I would expect this to be created as part of the CI build.

  • In general I expect a CI build for such open-source projects (CI services are free and ubiquitous) - this allows for quick validation of what works, as well as running the test suite, and would generate up-to-date documentation.

  • Vague instructions regarding installing parallel-run requirements (unless pip installs are guaranteed to Just Work on all platforms?). My bitter experience is that such general statements are used because it is so difficult to document the precise setup that works. @Chilipp's observations confirm this. Since parallel execution is an advertised feature I would expect a parallel test suite, and in the README an example of the speedup obtained using some known available data set. It is conventional to be explicit about which platforms the software is known to work on, and which platforms are supported - using CI builds would be one path to verify this, and would allow the author to work out exactly which version combinations work.

  • Cached .pyc folders suggest pain lies ahead.

I'll update this with links to the issues opened against the upstream project, as I open them.

@danielskatz, for me blocking issues are:

  1. Automated test suite, that at a minimum (see the sketch after this list):
  • Fits null and alternate specifications to simulated and empirical data. The tests would simply check parameter values and goodness-of-fit metrics. The 'true' or expected values should come from a reference implementation that is clearly identified, e.g. some R-project, etc. library result set, and the script used to generate the reference values would be part of the project.
  • Parallel test suite that exercises the parallel calculations. This would likely be slow (i.e. while processing large data or models is faster than serial runs, the parallel run would be expected to be 'slow' in calendar time - unit test times are generally aimed to be in the sub-seconds range), so would likely have its own test harness to prevent the TDD/BDD cycle from being slowed down.
  2. Installation instructions for the platforms that are supported. My assumption is it is not the purpose of the reviewers to resolve all these issues across the various platforms? The warning note (which should be more prominent) about conda instability and switching between pip/conda is really something that the package should resolve to provide the user with a painless path.
  3. Documentation. Each function a user is expected to interact with should be documented. Ack. that many functions are likely to be upstream, but if the user is expected to interact with them, the package documentation should, at a minimum, provide links to the upstream documentation of the version in use.
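
To illustrate item 1, here is a minimal sketch of the kind of parameter-recovery test meant (names are hypothetical; a real test would call dlmmc's compiled Stan models via pystan rather than the stand-in fit_dlm below):

# Sketch of a parameter-recovery test on simulated data (illustrative only).
import numpy as np

def fit_dlm(y):
    # Stand-in for the real fit; in practice this would wrap a pystan
    # sampling call and return posterior draws for each parameter.
    slope = np.polyfit(np.arange(y.size), y, 1)[0]
    return {"slope": np.array([slope])}

def test_recovers_slope_from_simulated_data():
    rng = np.random.default_rng(42)
    true_slope = 0.05
    t = np.arange(500)
    y = true_slope * t + rng.normal(0.0, 0.3, size=t.size)
    posterior = fit_dlm(y)
    # Recovery within a tolerance (estimates are stochastic); this is what
    # flags breaking changes without requiring exact equality.
    assert abs(posterior["slope"].mean() - true_slope) < 0.01

test_recovers_slope_from_simulated_data()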

@danielskatz

Thanks @taqtiqa-mark - these details are very helpful.

In JOSS, we don't really have results like

  "Right now I'd be inclined to conclude: Major Revisions, resubmit."

We just say what needs to be done, and then let the author decide to do this or not. In other words, we keep going until the submission is acceptable or we decide that it cannot be made so.

@justinalsing

Hi @taqtiqa-mark ,

Thanks for your feedback.

I think it would help if I reiterate/clarify the scope of this code. The code is a suite of statistical models, written in stan, that can then be run on users’ data using the pystan interface (provided by the stan team).

Along with the models, I provide a tutorial covering how to run the stan models using pystan — this is really to ease the way for users who are not familiar with pystan. For users who want to run many DLMs quickly but are not used to setting up MPI runs with python, I also provide a template for how to run models in parallel this way. The tutorial and MPI-run template are really examples of how to use the code to help users get up and running quickly, rather than central to the package. That is why, for example, I do not attempt to give openmpi install instructions that will work robustly for everyone — as you point out that is a difficult thing to do in practice.

Since the package is a suite of models implemented in a probabilistic-programming language, the model code “in and of itself” does not have any “functionality” to be documented, i.e., the model codes in stan_dlm_models.py are not a set of functions that can be called by the user. Rather, they are specifications of probabilistic models that are then passed to the pystan interface, which then runs them on user input data. I am intentionally relying on the pystan interface to run the models, since it already exists and is well documented. I could have implemented a higher-level interface so that users would not have to interact with pystan at all, but this seemed unnecessary and would make maintaining the code more cumbersome.
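
For readers unfamiliar with that workflow, here is a minimal sketch of the pystan 2.x interface (a toy model specification stands in for dlmmc's actual DLM code, and the data-block names are illustrative):

import numpy as np
import pystan

# Toy Stan model specification: a stand-in for the DLM model strings.
toy_model_code = """
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  y ~ normal(mu, sigma);
}
"""

model = pystan.StanModel(model_code=toy_model_code)   # compile once (slow)
data = {"N": 100, "y": np.random.randn(100) + 1.0}    # user-supplied data
fit = model.sampling(data=data, iter=2000, chains=4)  # run HMC on the data
print(fit)  # posterior summaries; mu and sigma should both be near 1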

There is a question of whether a suite of stan models constitutes a valid open source project. I believe it does for the following reasons:
— The (stratospheric science) community has explicitly asked for this, since the expertise/time to implement such models within the community is scarce, and the momentum to replace the industry-standard MLR with DLM is rapidly growing. The code is already being used by members of the community.
— The code is extendable in a meaningful way: people can implement new models based on the dlmmc code with relatively little effort, since it essentially just involves modifying the lines of code that specify the model matrices that define the space of DLM models. The bulk of the work (setting up the Kalman-filtering and smoothing, etc) has been coded up in a general-purpose way so does not need modifying, massively reducing the barrier to entry for people to implement new models.

I hope this focuses the discussion on what I should do to make this code ready for public consumption.

@taqtiqa-mark and @Chilipp, you both raised automated tests and more robust install instructions. If you have some concrete ideas/specifications for what you think is needed here I’d greatly appreciate it.

The other related point you raised, @taqtiqa-mark, is validation. Validation of Bayesian codes is non-trivial, requiring either comparison to a known analytical posterior (hard in this case), comparison to another code, or running on a large ensemble of mock data vectors and looking at the statistical properties of the results (expensive). If you really feel this is a requirement, I’d love to hear what you think would be most worthwhile here.
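
For reference, the ensemble-of-mocks option amounts to a coverage check; a rough sketch of the idea (entirely illustrative, with a hypothetical interval function standing in for posterior draws from the code under test):

# Coverage check over an ensemble of mock data vectors (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def fit_posterior_interval(y):
    # Hypothetical: a central ~68% credible interval for the mean parameter;
    # in practice this would come from the code's posterior samples.
    m, s = y.mean(), y.std(ddof=1) / np.sqrt(y.size)
    return m - s, m + s

true_mu, n_mocks, hits = 1.0, 200, 0
for _ in range(n_mocks):
    y = rng.normal(true_mu, 1.0, size=50)
    lo, hi = fit_posterior_interval(y)
    hits += (lo <= true_mu <= hi)

# For a well-calibrated code, roughly 68% of these intervals cover the truth.
print(hits / n_mocks)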

@taqtiqa-mark

@danielskatz,

I've amended my comment to remove that observation.

@justinalsing makes the following observation:

There is a question of whether a suite of stan models constitutes a valid open source project

My assumption is that this question has been resolved in the affirmative by the (sub-)editor(s) - otherwise this would not have progressed to the review stage.
Please indicate if that assumption is correct.

If incorrect, are there JOSS guidelines on what is a "valid open source project" (for the purposes of JOSS) to guide reviewers to address this issue in a way that results in consistent outcomes?

I'll address @justinalsing's response separately - by editing my comments above to add links to issues in his repository - to prevent this thread from becoming spaghetti.

@danielskatz

  "There is a question of whether a suite of stan models constitutes a valid open source project"

  "My assumption is that this question has been resolved in the affirmative by the (sub-)editor(s) - otherwise this would not have progressed to the review stage. Please indicate if that assumption is correct."

Yes, this is correct.

@danielskatz

Also, @taqtiqa-mark - I think that rather than editing comments to address responses, it's better to just write a new comment, and, as you've said, to open issues in the source repository with links to them here. And so, to get an up-to-date status, can you (and also @Chilipp) please add new comments below that tell us what your current concerns are (linking to issues as needed)?

Chilipp referenced this issue in justinalsing/dlmmc Feb 7, 2019
@Chilipp

Chilipp commented Feb 7, 2019

thanks @justinalsing for your detailed responses to my comments and the ones from @taqtiqa-mark! I especially appreciate your work on the contribution guidelines.

There are still some open points and I would like to outsource them to other issues in your repository:

If you work on these issues, please directly comment in the issues in your repository as this is also recommended in the review process guidelines above

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

@danielskatz

Re: automated tests - The JOSS requirement is just "Are there automated tests or manual steps described so that the function of the software can be verified?" While automated tests are generally better for the software, manual steps are sufficient for JOSS acceptance. Specifically, adding good automated tests can be a longer-term activity than JOSS acceptance.

@Chilipp

Chilipp commented Feb 15, 2019

Thanks @danielskatz for your clarification on this topic. @justinalsing states above (#1157 (comment))

In my mind the tutorial also serves this purpose: if the models compile and the tutorial runs through without error, you’re in good shape. I’ve added a note to the README that the tutorial also serves this purpose.

Would this be sufficient for an acceptance in JOSS? Because the feeling of @taqtiqa-mark and me is that there should be some more tests implemented (see justinalsing/dlmmc#5)

@justinalsing

justinalsing commented Feb 15, 2019

Hi @Chilipp , @danielskatz , @taqtiqa-mark ,

Thanks for your detailed comments and for solidifying the action-points into concrete issues, I'll get to these ASAP.

Thanks for re-raising exactly what is sufficient for JOSS acceptance re tests. Given Daniel's comment about manual vs automated tests, I was going to suggest that the path of least resistance for me would be to beef up the tutorial to (a) have sections to run all of the implemented models rather than just the "vanilla" one, and (b) run each model on mock data generated from the same model so that the examples serve as explicit validation tests in addition to testing whether the code runs. If that sounds sufficient I'll go ahead and put those in there - otherwise it'll take a good while longer for me to implement an automated test suite.
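
For concreteness, the mock-data validation cells I have in mind would look roughly like this (illustrative parameter names; the actual tutorial would use dlmmc's compiled models through pystan):

# Sketch of a mock-data validation cell (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# 1. Pick "true" hyper-parameters and simulate from the DLM forward model:
#    a slowly varying trend plus white observation noise.
true_trend_sigma, true_obs_sigma = 0.01, 0.5
n = 400
trend = np.cumsum(rng.normal(0.0, true_trend_sigma, size=n))
y_mock = trend + rng.normal(0.0, true_obs_sigma, size=n)

# 2. Fit the same model to the mock data (hypothetical pystan call):
# fit = compiled_model.sampling(data={"N": n, "y": y_mock}, chains=4)

# 3. Validation: check that the posterior recovers the inputs, e.g.
# assert interval_contains(fit["obs_sigma"], true_obs_sigma)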

Thanks!

@danielskatz

I believe this would be reasonable.

@justinalsing

Great - I'll wait for confirmation from the reviewers @Chilipp and @taqtiqa-mark that they would be satisfied with this and then go ahead and implement

@justinalsing

justinalsing commented Apr 15, 2019

Hi @taqtiqa-mark

OK so I've now done the following: created an isolated python environment using virtualenv, then followed my install instructions:

pip install numpy scipy matplotlib netCDF4 pystan
git clone https://github.com/justinalsing/dlmmc.git
cd dlmmc
python compile_stan_models.py

and then tested that the jupyter notebook dlm_tutorial.ipynb runs through without errors by doing

jupyter-nbconvert --to notebook --execute --ExecutePreprocessor.timeout=100000 dlm_tutorial.ipynb

This all worked fine - I attach a text file with the terminal session for all of the above (from creating the virtual environment through to running the test).

One point of contention though: I used pip to install dependencies (which works fine for me and other colleagues who use pystan extensively). However, @Chilipp did not get a working pystan setup when he did pip install, but he did with conda, so suggested that I change the install instructions to conda (which I duly did). But I don't use conda personally or have it set up.

So, is this satisfactory for you, and any ideas for resolving the pip/conda issue? I'll note that any issues are isolated to the dependency pystan (which is well documented at https://pystan.readthedocs.io/en/latest/). I feel like providing the existing install instructions, and pointing users to the pystan docs in the README in case they have issues with that particular package, is appropriate for this?

Thanks,
Justin

DLMMC install:test terminal output.txt

@danielskatz

Hi @taqtiqa-mark and @justinalsing,

Just to make sure I understand where things are, it looks like Mark has created three issues

that are blocking his acceptance of the installation part of the submission, which is the last part that needs to be resolved.

And this is now in Justin's court to respond to them.

Is this correct?

I also see justinalsing/dlmmc#5 is still open, but apparently this is not blocking anything for Mark.

@justinalsing

Hi @danielskatz @taqtiqa-mark

I've responded to the remaining three issues. As far as I can see the install instructions seem to be working fine, having already been changed from pip to conda following @Chilipp's suggestion, and the MPI example script provided in the Git repo is not an integral part of this package (as has been discussed multiple times) so I personally don't feel that this should be a blocking issue for JOSS acceptance (could you weigh in on this @danielskatz?)

@justinalsing

Hi @taqtiqa-mark - I've done some more work on the install instructions and responded to your outstanding issues. I hope we are getting close now!

@taqtiqa-mark

@danielskatz: LGTM

@danielskatz

👋 @justinalsing - can you make a new archive of the software in zenodo, figshare, etc.? And paste the DOI here? Also, if the version has changed, please let me know the new version number.

@danielskatz

Thanks very much @taqtiqa-mark and @Chilipp for your work on this review!

@justinalsing

Yes, thanks @taqtiqa-mark @Chilipp @danielskatz for your work and patience!

@danielskatz I've archived with Zenodo as advised - here's the DOI

10.5281/zenodo.2660704

and the new version number is v1.1

@danielskatz

@whedon set archive 10.5281/zenodo.2660704

@danielskatz

@whedon set 10.5281/zenodo.2660704 as archive

@whedon

whedon commented May 3, 2019

OK. 10.5281/zenodo.2660704 is the archive.

@danielskatz

@whedon set 1.1 as version

@whedon

whedon commented May 3, 2019

OK. 1.1 is the version.

@danielskatz

@whedon accept

@whedon

whedon commented May 3, 2019

Attempting dry run of processing paper acceptance...

@whedon

whedon commented May 3, 2019


OK DOIs

- 10.5194/acp-14-9707-2014 is OK
- 10.5194/acp-18-1379-2018 is OK
- 10.5194/acp-17-12269-2017 is OK
- 10.1093/acprof:oso/9780199641178.001.0001 is OK
- 10.18637/jss.v076.i01 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented May 3, 2019

Check final proof 👉 openjournals/joss-papers#657

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#657, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@danielskatz

... in progress ... will continue shortly

@danielskatz

@whedon accept deposit=true

@whedon

whedon commented May 3, 2019

Doing it live! Attempting automated processing of paper acceptance...

whedon added the accepted label May 3, 2019
@whedon

whedon commented May 3, 2019

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.01157 joss-papers#658
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01157
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@whedon

whedon commented May 3, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.01157/status.svg)](https://doi.org/10.21105/joss.01157)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01157">
  <img src="http://joss.theoj.org/papers/10.21105/joss.01157/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: http://joss.theoj.org/papers/10.21105/joss.01157/status.svg
   :target: https://doi.org/10.21105/joss.01157

This is how it will look in your documentation:

[DOI badge image]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

whedon added the published (Papers published in JOSS) and recommend-accept (Papers recommended for acceptance in JOSS) labels Mar 2, 2020