Artifact completeness and size #1
Comments
In my opinion, if authors do not submit the complete dataset, they should be required to justify that decision. For very large datasets, they can always upload them to archive.org and additionally provide a subset that allows reviewers to test the provided scripts. We used the following formulation in the MSR 2019 Mining Challenge CfP:
It’s here on my website: https://ineed.coffee/5205/how-to-disclose-data-for-double-blind-review-and-make-it-archived-open-data-upon-acceptance/
Thanks Daniel, I forgot to copy the link.
By default yes, unless it is not allowed due to other issues (e.g. IP). The burden should be on the authors to justify why some parts of the artefact were not made public.
All claims in a paper must be supported by the artefact. The authors must be responsible for either ensuring this or explaining why they cannot. There is no need for someone else to decide.
We could set a nice round maximum artefact size of 2^32 bytes :-) Seriously, that should be up to the authors.
A representative sample must be extracted from the full dataset. The statistical techniques must be documented by the authors and be subject to evaluation during artefact review. The tools that compose the artefact must be able to work with the sample and produce similar results, within ranges that the authors must describe and explain.
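A minimal sketch of what such a "documented sample plus tolerance check" could look like, assuming a pandas workflow; the file names, the `project` and `loc` columns, the 10% sampling fraction, and the 5% tolerance are illustrative assumptions, not something specified in this thread:

```python
# Sketch: draw a reproducible stratified sample from the full dataset and
# check that a headline statistic computed on the sample stays within a
# tolerance the authors would document in the artifact.
import pandas as pd

FULL = "full_dataset.csv"      # full data, archived separately (e.g. archive.org)
SAMPLE = "review_sample.csv"   # subset shipped with the artifact for reviewers
SEED = 20190501                # fixed seed so reviewers can regenerate the sample
FRACTION = 0.10                # sample size; assumed, to be justified by authors
TOLERANCE = 0.05               # acceptable relative deviation; assumed, documented

full = pd.read_csv(FULL)

# Stratify by project so every stratum is represented in the sample.
sample = (
    full.groupby("project", group_keys=False)
        .apply(lambda g: g.sample(frac=FRACTION, random_state=SEED))
)
sample.to_csv(SAMPLE, index=False)

# Sanity check: the statistic on the sample should fall within the
# documented range of the value reported on the full dataset.
full_mean = full["loc"].mean()
sample_mean = sample["loc"].mean()
rel_dev = abs(sample_mean - full_mean) / full_mean
assert rel_dev <= TOLERANCE, f"sample deviates by {rel_dev:.1%}"
```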
Must the artifact provide data and tools to replicate ALL experiments in a paper, or is it allowable to scope an artifact to consider only part of the claims?
Who decides (authors, reviewers or chairs) what claims in a given paper should be supported by the artifact?
What should we consider "too much data" or "too long of an experiment" that can’t be submitted in full for artifact evaluation? For instance, one researcher might consider a 2GB dataset too large to submit in full, while another might submit a 2TB dataset.
Whatever the criterion for "too big" is, what process should authors follow to submit a subset of their artifact for evaluation when the full artifact is too big?