JOSS review: version numbers and GitHub releases #2
(Note: I'll be opening a bunch of issues relating to the JOSS review.)

The review form asks whether the version number of the software matches the one reported in the GitHub release. There do not appear to be any release tags for this project, and the repositories do not contain any version identifiers.
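For context, the usual remedy is to expose a single version string inside the package and keep it in sync with the git tag and the archived release. A minimal sketch, assuming a setuptools-style Python layout; the package name, paths, and version string are illustrative, not the project's actual values:

```python
# openunmix/__init__.py -- hypothetical module path, for illustration only
__version__ = "1.0.0"  # keep in sync with the git tag and the Zenodo archive
```

```python
# setup.py (excerpt) -- hypothetical packaging stub
from setuptools import setup

setup(
    name="openunmix",   # hypothetical distribution name
    version="1.0.0",    # should match __version__ and the release tag
)
```

With this in place, running `git tag v1.0.0` on the same commit gives the release, the archive, and the installed package one common identifier.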
I think this is a bit tricky to address and we would love to have your comments on it. As you pointed out, there are currently no version identifiers whatsoever. This is mainly because I didn't want to confuse users with too many different versions all over the place. Currently we are facing the following situation: the pre-trained models are hosted and versioned on Zenodo, e.g. the … Therefore it would be best to tag the pytorch version with … Thanks for your input on this.
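To make the coupling between code version and model version concrete, here is a minimal sketch of what pinning a pre-trained model to a versioned Zenodo record could look like; the record URL, file name, and version string are hypothetical placeholders, not the project's actual hosting scheme:

```python
# Sketch: load a pre-trained model pinned to a versioned Zenodo record.
# All names and URLs below are hypothetical placeholders.
import torch.hub

MODEL_VERSION = "1.0.0"  # assumed to match the Zenodo record's version tag
MODEL_URL = (
    "https://zenodo.org/record/0000000/files/"  # placeholder record ID
    f"umx-{MODEL_VERSION}.pth"
)

# load_state_dict_from_url downloads once, caches locally, and returns
# the deserialized state dict
state_dict = torch.hub.load_state_dict_from_url(MODEL_URL, map_location="cpu")
```

Because each Zenodo record is immutable, pinning the record URL this way fixes the model weights even if the code moves on.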
Yeah, that's indeed trickier than the typical OSS package release. I do think it's important to have versioning for citation and replication purposes, and your proposed scheme seems good to me. It's maybe worth looping in @arokem for a second opinion on how this aligns with the spirit of the version requirements in JOSS submissions.
For the purposes of the paper, I think it is important to be clear about which version of each sub-module is referenced, and archival versions of each would need to be created. I think this means creating a release/tag and Zenodo archive for each of them when the paper is close to acceptance, and then pointing from your README here to those DOIs. Then, once the paper is accepted, you create one more archive for this repo, including that README, with the DOIs pointing to the archival versions of each sub-module. Does that make sense?
@arokem yes, I think I understand. To be specific: JOSS is okay with one particular submodule (namely the tensorflow reimplementation) being tagged and archived without being fully functional (e.g. a …)?
Yes. I think this is fine as long as it's clear (both in the docs here and in the paper) what works and what doesn't.
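One lightweight way to make that status explicit in the archived code itself would be an import-time warning; this is purely a sketch, and the wording and placement are illustrative rather than anything the project actually does:

```python
# Hypothetical status notice for an archived, not-yet-functional
# implementation (e.g. placed in the package's __init__.py).
import warnings

warnings.warn(
    "This implementation is archived in a pre-release state; "
    "see the README for what works and what doesn't.",
    UserWarning,
    stacklevel=2,
)
```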
I'm also a little bit puzzled by combining different implementations/sub-modules into one submission for a software paper. I see the reasoning, as the goal of all sub-modules is to implement the same functionality. What I cannot see at the moment is why it is sufficient for the paper when only one of those implementations is ready (…). To cite from https://joss.theoj.org/about#submitting: …
@faroit: do you also plan to maintain all three different implementations, or will you most likely benchmark them and, in the end, continue with only the best one?
@hagenw thanks for your input. I agree that a feature-complete submission for all three implementations would be optimal. But at least for tensorflow we don't see the benefit of releasing a 1.x version when so many things change with 2.0 (the changes also concern audio data loading, which is important for us).
We see the current submission as addressing the scientific community first, for which we think the pytorch version is best suited. Therefore we will only release the pre-trained models for pytorch. In the end we want researchers to be able to cite a single DOI when they use any version of open-unmix. For this to happen I see two ways:

A) We might be better off with an arxiv submission focusing on the technical details of the model, withdrawing this submission. Then we could submit separate JOSS submissions for each implementation. This, however, means that researchers would probably mainly cite the arxiv version, which we don't think is optimal.

B) We proceed with the submission with the following changes: …
C) Same as B), but also remove nnabla.
I guess the issue comes from intermixing software and journal publications, which is of course the goal of JOSS.
I don't think this can easily be solved short of getting rid of the paper/citation madness, but I will definitely not start that discussion here ;) For me C) would be a good solution, but I would also be fine with the proposal in B) if that is favored by the other reviewers.
Options B or C sound fine to me. It seems like the pytorch implementation is the first-class artifact, and would be enough on its own to merit a JOSS publication IMO. Ports/implementations in other frameworks seem to me like extensions or iterations, which are obviously important to users, but not directly relevant to the publication here. The story might be different if the focus were on cross-framework implementation (e.g. ONNX) rather than a specific model and application, but that's clearly not the focus of this work.
We went with C).
I would propose to still leave the utilities as submodules here, since they are a mandatory requirement for running open-unmix and we don't plan to submit them to JOSS at this point. I am open to removing them as well, if you think that makes more sense.
@bmcfee the solution is totally fine with me.
Yep. Looks good!