
[REVIEW]: Trash AI: A Web GUI for Serverless Computer Vision Analysis of Images of Trash #5136

Closed
editorialbot opened this issue Feb 8, 2023 · 65 comments
Labels
accepted · Makefile · published (Papers published in JOSS) · Python · recommend-accept (Papers recommended for acceptance in JOSS) · review · Shell · Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot
Collaborator

editorialbot commented Feb 8, 2023

Submitting author: @wincowgerDEV (Win Cowger)
Repository: https://github.com/code4sac/trash-ai
Branch with paper.md (empty if default branch):
Version: 1.0
Editor: @arfon
Reviewers: @domna, @luxaritas
Archive: 10.5281/zenodo.8384126

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/6ffbb0f89e6c928dad6908a02639789b"><img src="https://joss.theoj.org/papers/6ffbb0f89e6c928dad6908a02639789b/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/6ffbb0f89e6c928dad6908a02639789b/status.svg)](https://joss.theoj.org/papers/6ffbb0f89e6c928dad6908a02639789b)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@domna & @luxaritas, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @arfon know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @domna

📝 Checklist for @luxaritas

@editorialbot editorialbot added Makefile Python review Shell Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning labels Feb 8, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.17 s (736.8 files/s, 57502.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
TypeScript                      37            193            335           2062
Vuejs Component                 23             40             35           1878
Jupyter Notebook                 4              0           1407            758
Python                           3             90            101            609
YAML                            11             57             30            501
JSON                            11              2              0            307
Markdown                         9            114              0            279
Bourne Shell                     6             29              9            248
TeX                              1             14              0            182
make                             5             22             14             74
SVG                              1              0              0             56
JavaScript                       6              0              6             53
HTML                             2              4              2             36
Dockerfile                       2              8              1             24
XML                              1              0              0              9
Sass                             1              1              2              8
-------------------------------------------------------------------------------
SUM:                           123            574           1942           7084
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 1335

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.4154370 is OK
- 10.48550/ARXIV.2003.06975 is OK

MISSING DOIs

- 10.1186/s43591-022-00035-1 may be a valid DOI for title: Trash Taxonomy Tool: harmonizing classification systems used to describe trash in environments
- 10.1186/s40965-018-0050-y may be a valid DOI for title: OpenLitterMap.com – Open Data on Plastic Pollution with Blockchain Rewards (Littercoin)

INVALID DOIs

- https://doi.org/10.1029/2019EA000960 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1016/j.wasman.2021.12.001 is INVALID because of 'https://doi.org/' prefix

@arfon
Member

arfon commented Feb 8, 2023

@domna, @luxaritas – This is the review thread for the paper. All of our communications will happen here from now on.

Please read the "Reviewer instructions & questions" in the first comment above. Please create your checklist typing:

@editorialbot generate my checklist

As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/5136 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks but please make a start well ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@domna

domna commented Feb 9, 2023

Review checklist for @domna

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/code4sac/trash-ai?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@wincowgerDEV) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@luxaritas

luxaritas commented Feb 9, 2023

Review checklist for @luxaritas

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/code4sac/trash-ai?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@wincowgerDEV) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@luxaritas

luxaritas commented Feb 12, 2023

Just finished an initial pass looking over this, and wanted to share a few high-level comments (not submitting issues at this point because I'm not sure how much is actionable in that context, but I'm happy to for anything where it would be useful):

  • It does not appear the submitting author (@wincowgerDEV) made significant contributions to the software itself - the only commits I see attributed are the paper and a minor tweak to deployment configurations

  • I'm on the fence about whether this qualifies under the JOSS substantivity requirement. On one hand, I'm definitely on board with the value of having an intuitive interface to do this work. On the other hand, I'm not fully sold on the value as currently implemented: the current service lacks substantial "workflow" tools (e.g., some sort of projects/tagging, multiple collaborators, additional analytics, ...) and some level of polish/UX work, to the degree that it almost feels like a Google Colab notebook could serve the purpose more efficiently (point it to a folder of images, run the notebook, get some summary stats back out).

    It also seems like the cloud deployment approach here doesn't provide much value for someone wanting to deploy it on their own - this seems like a great candidate to deploy to something like GitHub Pages, Netlify, etc., whereas the current approach is pretty complex for what it is. The only thing that prevents it from being deployed in that way is the collection of images to feed back to the model, which hasn't actually been followed through on yet. Even with the current structure, I'll also note that the deployment scripts and such could probably be simplified and consolidated a bit - when taking a brief look at it, I felt like I had to jump around a bit to find all the different bits and pieces, with a lot of different calls between different tools (e.g., why use makefiles vs just having a single docker-compose in the root and using docker-compose up?).

    Additionally, while I know the model isn't yours, when I sampled a couple of images, it seems like the model is sufficiently unreliable, at least to me, that I'm not confident I would trust it to do the analysis for me as a researcher. I also have concerns about whether it would be fast enough just running locally to analyze the number of images that would be needed in a research context (though not being a subject matter expert, I can't make that judgement with any real confidence).

    To be clear I definitely think this has the promise to be really useful, but I'm not sure it hits the mark as-is.

  • While the video walkthrough is great, it would probably be best to have a bit more written instruction on usage in the README/on the website. I think the setup documentation could also be improved a bit. The local setup seems to refer to specifics of WSL a lot where it could be more system-agnostic, and while the usage of GH workflows is novel, the AWS deployment instructions are a bit nonstandard in that regard plus seem to assume that you're deploying it from the repo itself/as a repo owner.

  • There don't seem to be any automated tests. While the desired behavior here is relatively clear/trivial, there are no formal instructions to verify behavior.

  • As tagged by editorialbot, it looks like there are some citation formatting issues

@wincowgerDEV

Hey @luxaritas, thanks for the initial pass. I am creating some issues now based on these helpful comments and have responses below. I will update you when the issues are fixed.

It does not appear the submitting author (@wincowgerDEV) made significant contributions to the software itself - the only commits I see attributed are the paper and a minor tweak to deployment configurations

The JOSS submission guidelines require that the submitting author be a major contributor to the software: https://joss.readthedocs.io/en/latest/submitting.html
You are correct that I have not contributed directly to the source code. However, there are many ways to contribute to software development beyond contributing to source code. On any software team, there are typically testers, documenters, project managers, source code developers, and many other roles. I have contributed over 100 hours to this project by contributing to documentation, testing, conceiving of the project, and planning the project. I also led the writing of the manuscript. Additionally, the corresponding author, @shollingsworth, is the primary contributor to the source code.

I'm on the fence about whether this qualifies under the JOSS substantivity requirement. On one hand, I'm definitely on board with the value of having an intuitive interface to do this work. On the other hand, I'm not fully sold on the value as currently implemented: the current service lacks substantial "workflow" tools (e.g., some sort of projects/tagging, multiple collaborators, additional analytics, ...) and some level of polish/UX work, to the degree that it almost feels like a Google Colab notebook could serve the purpose more efficiently (point it to a folder of images, run the notebook, get some summary stats back out).

I disagree with this comment. Most researchers in my field would not know how to use a Google Colab notebook to do this work. The fact that computer vision is almost solely run using programming languages is preventing its adoption in trash research; this is the premise of our work. I am also not aware of any needs they would have for these additional workflow tools that you are mentioning, but would be happy to consider them if you can provide some specifics.

During the pre-review, some of these questions came up from @arfon as well, and I had additional responses there: #5005

Please refer to the JOSS submission guidelines here for the substantive requirement and consider our in-line responses on why we should be considered to meet it: https://joss.readthedocs.io/en/latest/submitting.html. The guidelines state these requirements:

Age of software (is this a well-established software project) / length of commit history.

  • The project has been under development since Sep 8, 2021, approximately a year and a half.

Number of commits.

  • The project consists of 91 squash-and-merge commits, each typically bundling around 10 commits, so several hundred commits in total.

Number of authors.

  • The manuscript has 9 authors and there are 8 contributors to the repo, demonstrating that this project has a lively open-source community backing it.

Total lines of code (LOC). Submissions under 1000 LOC will usually be flagged, those under 300 LOC will be desk rejected.

  • Per the cloc report above, the project has roughly 7,000 lines of code (excluding comments and blank lines).

Whether the software has already been cited in academic papers.

  • While the project hasn't been cited in academic papers yet, we have cited several papers in the statement of need which substantiate the need in the field for this software: https://github.com/code4sac/trash-ai/blob/production/paper.md#statement-of-need

Whether the software is sufficiently useful that it is likely to be cited by your peer group.

  • Same response as previous.

It also seems like the cloud deployment approach here doesn't provide much value for someone wanting to deploy it on their own - this seems like a great candidate to deploy to something like GitHub pages, Netlify, etc whereas the current approach is pretty complex for what it is. The only thing that prevents it from being deployed in that way is the collection of images to feed back to the model, which hasn't actually been followed through on yet. Even with the current structure, I'll also note that the deployment scripts and such could probably be simplified and consolidated a bit - when taking a brief look at it, it felt like I had to jump around a bit to find all the different bits and pieces and had a lot of different calls between different tools (eg, why use makefiles vs just having a single docker-compose in the root and use docker-compose up?).

Thanks for this comment, I just created an issue for the second part: code4sac/trash-ai#106. For the first part, I do not believe that deployment to proprietary infrastructures is within the primary scope of JOSS: https://joss.readthedocs.io/en/latest/submitting.html#submissions-using-proprietary-languages-development-environments. We made a local deployment option: https://github.com/code4sac/trash-ai/blob/production/docs/localdev.md which uses a simple docker-compose deployment. Have you tried to use this?
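For anyone following along, the general shape of the local run is something like this (a rough sketch only; docs/localdev.md has the authoritative steps):

# rough sketch - see docs/localdev.md for prerequisites and exact steps
git clone https://github.com/code4sac/trash-ai.git
cd trash-ai
docker-compose up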

Additionally, while I know the model isn't yours, when I sampled a couple of images, it seems like the model is sufficiently unreliable, at least to me, that I'm not confident I would trust it to do the analysis for me as a researcher. I also have concerns about whether it would be fast enough just running locally to analyze the number of images that would be needed in a research context (though not being a subject matter expert, I can't make that judgement with any real confidence).

We did create the model and are working to improve it. However, the model itself isn't within the scope of a JOSS review to my knowledge; it would need to be reviewed by a machine learning journal. We are aware that its accuracy can still be improved, and that was mentioned in the video. One of the main challenges with improving its accuracy is a lack of labeled images from the diversity of possible settings that trash can be in. We are hoping that people will use the platform and share images to it so that we can relabel them and improve the model in the long term. Have you attempted to use the local deployment option? It is extremely fast and runs asynchronously, so it can run in the background.

To be clear I definitely think this has the promise to be really useful, but I'm not sure it hits the mark as-is.

Thanks for the kind words.

While the video walkthrough is great, it would probably be best to have a bit more written instruction on usage in the README/on the website. I think the setup documentation could also be improved a bit. The local setup seems to refer to specifics of WSL a lot where it could be more system-agnostic, and while the usage of GH workflows is novel, the AWS deployment instructions are a bit nonstandard in that regard plus seem to assume that you're deploying it from the repo itself/as a repo owner.

Thanks for the kind words. I created this issue to address these points: code4sac/trash-ai#107

There don't seem to be any automated tests. While the desired behavior here is relatively clear/trivial, there are no formal instructions to verify behavior.

Created this issue: code4sac/trash-ai#108. There are many automated tests baked into the Docker workflow, but you are correct that they should be clearer in the documentation and that there should be a formal validation procedure.

As tagged by editorialbot, it looks like there are some citation formatting issues

Thanks for pointing that out. Created this issue to resolve them: code4sac/trash-ai#109
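For the two INVALID entries, the fix is just dropping the resolver prefix from the doi fields (assuming the references live in a BibTeX paper.bib), e.g.:

% before (flagged as invalid)
doi = {https://doi.org/10.1029/2019EA000960}
% after (bare DOI, as editorialbot expects)
doi = {10.1029/2019EA000960}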

@luxaritas

luxaritas commented Feb 14, 2023

Thanks for the clarifications @wincowgerDEV!

You are correct that I have not contributed directly to the source code. However, there are many ways to contribute to software development beyond contributing to source code. On any software team, there are typically testers, documenters, project managers, source code developers, and many other roles. I have contributed over 100 hours to this project by contributing to documentation, testing, conceiving of the project, and planning the project. I also led the writing of the manuscript.

Understood - thanks for the additional detail to help verify.

I disagree with this comment. Most researchers in my field would not know how to use a Google Colab notebook to do this work. The fact that computer vision is almost solely run using programming languages is preventing its adoption in trash research; this is the premise of our work. I am also not aware of any needs they would have for these additional workflow tools that you are mentioning, but would be happy to consider them if you can provide some specifics.

My perspective on this is limited due to my limited knowledge of this field, so I appreciate your perspective. I will note that it should be possible to use a Colab notebook without actually having to be familiar with the programming itself, though I recognize that it's not the most user-friendly experience (and on investigating the approaches I had in mind, I think it's probably not as good an option as I thought!).

Thanks for this comment, I just created an issue for the second part: code4sac/trash-ai#106. For the first part, I do not believe that deployment to proprietary infrastructures is within the primary scope of JOSS: https://joss.readthedocs.io/en/latest/submitting.html#submissions-using-proprietary-languages-development-environments. We made a local deployment option: https://github.com/code4sac/trash-ai/blob/production/docs/localdev.md which uses a simple docker-compose deployment. Have you tried to use this?

Yes, I have, and it works great. However, I still think this is at least somewhat relevant. In order for this package to be useful to non-technical researchers, it has to be deployed. That can either be serviced by your own hosted instance (this is great, and I think is a real value add based on your statement of need, though if I incorporate that into my consideration of amount-of-work then I should include it as at least some small component of the review), or by having it self-deployed, in which case someone else needs to have a clear way to host it, whereas right now your options are either 1) what is designed to be a local development environment or 2) a cloud-based deployment workflow which is hard to adapt for third-party use (and in both cases has a bunch of components that a typical install wouldn't really need). Additionally, I would consider ease of self-deployment part of a high-quality open-source package, and if you include this as part of your primary documentation (as you do in the README) I would consider it something which should be functional and understandable for an end-user (rather than an internal detail of the repo).

We did create the model and are working to improve it. However, the model itself isn't within the scope of a JOSS review to my knowledge; it would need to be reviewed by a machine learning journal. We are aware that its accuracy can still be improved, and that was mentioned in the video. One of the main challenges with improving its accuracy is a lack of labeled images from the diversity of possible settings that trash can be in. We are hoping that people will use the platform and share images to it so that we can relabel them and improve the model in the long term.

Ah, I misunderstood what I read previously - I see now.

While the model itself is not directly within the scope of JOSS, I think it is still somewhat relevant - without a usable model, the value of your application to researchers in the way you describe can't really be realized. I don't intend for this to be a critical component of the review itself, but more so an additional data point for other aspects of the review. This is especially true because you've trained the model yourself: the model you're providing is part of the product being delivered (and so has additional implications for the "functionality" requirements).

Have you attempted to use the local deployment option? It is extremely fast and runs asynchronously, so it can run in the background.

I was just basing this off of what I saw in the demo video, as I don't have a lot of sample data to work with myself, so my comment on performance is somewhat speculative - again I don't have knowledge of how this would be used in the wild, so my intent was to flag that as something I could see as being an issue, but don't have the knowledge to verify myself. If you've found the performance to be sufficient with your understanding of the use cases, I don't have any real concern.

@wincowgerDEV

@luxaritas Thanks for the thoughtful response back. Some responses below to follow up. We will get to work on these aspects and circle back when they are ready.

My perspective on this is limited due to my limited knowledge of this field, so I appreciate your perspective. I will note that it should be possible to use a Colab notebook without actually having to be familiar with the programming itself, though I recognize that it's not the most user-friendly experience (and on investigating the approaches I had in mind, I think it's probably not as good an option as I thought!).

I completely agree that it is possible to do all of this within a Colab notebook or some other programmable interface. I will share with the group to think some more about how we can better integrate this application with programmable interfaces. I think it will have to be part of a longer-term development timeline, though, since several other workflows exist.

Yes, I have, and it works great. However, I still think this is at least somewhat relevant. In order for this package to be useful to non-technical researchers, it has to be deployed. That can either be serviced by your own hosted instance (this is great, and I think is a real value add based on your statement of need, though if I incorporate that into my consideration of amount-of-work then I should include it as at least some small component of the review), or by having it self-deployed, in which case someone else needs to have a clear way to host it, whereas right now your options are either 1) what is designed to be a local development environment or 2) a cloud-based deployment workflow which is hard to adapt for third-party use (and in both cases has a bunch of components that a typical install wouldn't really need). Additionally, I would consider ease of self-deployment part of a high-quality open-source package, and if you include this as part of your primary documentation (as you do in the README) I would consider it something which should be functional and understandable for an end-user (rather than an internal detail of the repo).

Happy for any comments you have about the remote hosting options, and we will do our best to incorporate them. Agreed that this is the option that most developers will want to use. Absolutely agree that the primary documentation should lead to functional and understandable self-deployment. If you end up running into any issues related to that, please let us know and we will fix them.

While the model itself is not directly within the scope of JOSS, I think it is still somewhat relevant - without a usable model, the value of your application to researchers in the way you describe can't really be realized. I don't intend for this to be a critical component of the review itself, but more so an additional data point for other aspects of the review. This is especially true because you've trained the model yourself: the model you're providing is part of the product being delivered (and so has additional implications for the "functionality" requirements).

Agreed here too! It is our highest priority for development right now to improve the model. Definitely, if the model accuracy is limiting the functional use of the tool, then it's an issue we should resolve in this tool, and I don't think that the only resolution is to improve the model accuracy. We currently report the model confidence in the labels, which I believe helps the user to interpret the accuracy. Perhaps there is another aspect of the software you think would be helpful to add for a user who is facing functionality-related issues?

I was just basing this off of what I saw in the demo video, as I don't have a lot of sample data to work with myself, so my comment on performance is somewhat speculative - again I don't have knowledge of how this would be used in the wild, so my intent was to flag that as something I could see as being an issue, but don't have the knowledge to verify myself. If you've found the performance to be sufficient with your understanding of the use cases, I don't have any real concern.

Appreciate your flexibility on this point.

@domna

domna commented Feb 27, 2023

Hi @wincowgerDEV and @luxaritas ,

thank you for the detailed discussion. It already addressed a lot of points I had while reviewing the software. However, I'd like to add my view to a few points and add some additional thoughts.

Deployment

I agree with @luxaritas' view on the deployment: the software should be easily deployable to any hosting solution or locally. While you provide a localdev option, I see it, however, as an option for doing local development for an AWS-based system rather than as a full-fledged deployment solution. From my view, the localstack containers and backend upload options are not really necessary for a local deployment.
What I would find nice to have would be:

  • Documentation on how to compile the front-end and directly upload it to any hosting solution (ideally with disabled backend upload option, so you really only need a webspace or even could use gh pages).
  • A single docker container which contains the compiled gui so admins can easily deploy in a docker based system. It would also be nice if this container would be uploaded to gh packages or dockerhub, so you could just provide a docker-compose file people could use to deploy the service.

Privacy

While it is stated in the paper that some of the images are uploaded to an S3 storage, there is no information about this on your trashai.org webpage. I think it is at least necessary to put a disclaimer to users that their image data is uploaded and that images may contain sensitive information through EXIF data (because I think the targeted audience is not always aware of what information images may contain). In my view it would be even better to have an opt-out option for image upload, and/or to strip unnecessary parts of the EXIF data while uploading, and/or to inform users of the uploading process to AWS while also putting information resources on EXIF data directly on the page.
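To make concrete what I mean by stripping: with exiftool on the command line it would be the following (just an illustration of the idea - in the browser this would of course need a JavaScript equivalent):

# remove all metadata from an image, keeping only the pixel data
exiftool -all= photo.jpg

# or remove only the GPS tags
exiftool -gps:all= photo.jpg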

Example data / Tutorial

I find your video tutorial very nice and good to follow. Additionally, I think it would be nice if you'd also have a folder of example data with which new users could directly follow your tutorial. Maybe even with a post-processing example of json data.

Documentation and data structure

I agree with @luxaritas that the written documentation could be expanded. What I miss the most is a detailed explanation of the JSON data structure for post-processing software. Which fields are expected and what information do they contain? Are there fields which are not available for all data? As I am not familiar with trash research, I was also wondering whether there is some standardized format for data exchange on trash location data and labelling which could be used here (I think the Hapich et al. 2022 paper elaborates on this, so it would be nice to read some more details on the connection of this trash taxonomy to the data format of trashai). Also, I understand that your targeted audience is less technical, so dealing with JSON data may be a barrier. I think it would also be good to offer a PDF-based overview (like an analysis page which could just be printed to a PDF, so users could directly have a map overview of their trash expedition).
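Just to make concrete the kind of per-image record documentation I mean - every field name below is purely my invention for illustration, not the actual trashai schema:

{
  "image": "IMG_0042.jpg",
  "detections": [
    { "label": "plastic bottle", "confidence": 0.87, "bbox": [120, 64, 200, 150] }
  ],
  "gps": { "lat": 38.58, "lon": -121.49 }
}

Documenting each field like this (type, units, when it is present) would go a long way.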

Reference to other software

I was missing information on which software people typically use for researching trash images. Since I'm not familiar with the field I may not be aware that such software is not really available, but I think it would be useful to give a glimpse into the working process around your tool (like: what do I actually do with the downloaded json data?). It would also be interesting to know whether there are some widely known databases where you can put your trash data, as I think the useful part of such information is putting a lot of data from different researchers together to give a broad overview of littering places. I think especially it would be nice to have some guidance what to do with the json data downloaded by your tool. Is there some analysis software for that?

I was also very happy to read that you plan to upload your data to the taco dataset. Do you also plan on adding a feature where users can directly annotate their data in trashai and then upload the annotations together with the images to trashai? I think this could be a powerful tool for advancing the taco dataset, since it would target the erroneous classifications of the model directly.

Future of the software

This is just more out of curiosity for your future plans. I find your software a super helpful tool for annotation. However, I was wondering if you plan to bring the data of different sources together. As you state in your paper such information is often used for policy decisions and I think the approach is really useful when data from different places come together. So going in the direction of having a database for trash researchers where data can be easily uploaded from trashai and policy makers can search for locations to get information in this area could be extremely helpful. Are you already thinking in this direction?

@luxaritas

I completely agree that it is possible to do all of this within a Colab notebook or some other programmable interface. I will share with the group to think some more about how we can better integrate this application with programmable interfaces. I think it will have to be part of a longer-term development timeline, though, since several other workflows exist.

To be clear, I'm not necessarily advocating for this as an addition to the GUI - the reason I brought it up was more around trying to figure out the value proposition of the UI itself and whether it was providing something that couldn't be done in a simpler way, and it sounds like the UI is important to make CV tools accessible to researchers. Integration with programmable interfaces could still be valuable though (at the very least, I'd imagine it would be a good idea for the model to be available directly and not just via the UI).

Agreed here too! It is our highest priority for development right now to improve the model. Definitely, if the model accuracy is limiting the functional use of the tool, then it's an issue we should resolve in this tool, and I don't think that the only resolution is to improve the model accuracy. We currently report the model confidence in the labels, which I believe helps the user to interpret the accuracy. Perhaps there is another aspect of the software you think would be helpful to add for a user who is facing functionality-related issues?

The confidence labels are definitely a great idea. At some level, though, model accuracy itself is a barrier to the usefulness of this tool - no matter what else you do, if you have to manually re-tag all the images anyway, it's not helping automate the process, which is the whole point... Unfortunately it's a weird situation where the model itself isn't the primary thing under scrutiny for JOSS, but it's a critical part of the application's functionality.

@wincowgerDEV

@domna Thanks so much for the comments and thorough review. Some responses in line below.

I agree with @luxaritas' view on the deployment: the software should be easily deployable to any hosting solution or locally. While you provide a localdev option, I see it, however, as an option for doing local development for an AWS-based system rather than as a full-fledged deployment solution. From my view, the localstack containers and backend upload options are not really necessary for a local deployment.
What I would find nice to have would be:

Documentation on how to compile the front-end and directly upload it to any hosting solution (ideally with disabled backend upload option, so you really only need a webspace or even could use gh pages).

  • Could you provide an example of an application that has this kind of functionality? I am not super familiar with it but definitely interested in figuring out how we can do it.

A single docker container which contains the compiled gui so admins can easily deploy in a docker based system. It would also be nice if this container would be uploaded to gh packages or dockerhub, so you could just provide a docker-compose file people could use to deploy the service.

Privacy

While it is stated in the paper that some of the images are uploaded to an S3 storage, there is no information about this on your trashai.org webpage. I think it is at least necessary to put a disclaimer to users that their image data is uploaded and that images may contain sensitive information through EXIF data (because I think the targeted audience is not always aware of what information images may contain). In my view it would be even better to have an opt-out option for image upload, and/or to strip unnecessary parts of the EXIF data while uploading, and/or to inform users of the uploading process to AWS while also putting information resources on EXIF data directly on the page.

Example data / Tutorial

I find your video tutorial very nice and good to follow. Additionally, I think it would be nice if you'd also have a folder of example data with which new users could directly follow your tutorial. Maybe even with a post-processing example of json data.

Documentation and data structure

I agree with @luxaritas that the written documentation could be expanded. What I miss the most is a detailed explanation of the JSON data structure for post-processing software. Which fields are expected and what information do they contain? Are there fields which are not available for all data? As I am not familiar with trash research, I was also wondering whether there is some standardized format for data exchange on trash location data and labelling which could be used here (I think the Hapich et al. 2022 paper elaborates on this, so it would be nice to read some more details on the connection of this trash taxonomy to the data format of trashai). Also, I understand that your targeted audience is less technical, so dealing with JSON data may be a barrier. I think it would also be good to offer a PDF-based overview (like an analysis page which could just be printed to a PDF, so users could directly have a map overview of their trash expedition).

Reference to other software

I was missing information on which software people typically use for researching trash images. Since I'm not familiar with the field I may not be aware that such software is not really available, but I think it would be useful to give a glimpse into the working process around your tool (like: what do I actually do with the downloaded json data?). It would also be interesting to know whether there are some widely known databases where you can put your trash data, as I think the useful part of such information is putting a lot of data from different researchers together to give a broad overview of littering places. I think especially it would be nice to have some guidance what to do with the json data downloaded by your tool. Is there some analysis software for that?

I was also very happy to read that you plan to upload your data to the taco dataset. Do you also plan on adding a feature where users can directly annotate their data in trashai and then upload the annotations together with the images to trashai? I think this could be a powerful tool for advancing the taco dataset, since it would target the erroneous classifications of the model directly.

  • Thanks for the kind words. We do plan on adding in that feature and are actively working on it. Glad to hear you also think it will be useful.

Future of the software

This is just more out of curiosity for your future plans. I find your software a super helpful tool for annotation. However, I was wondering if you plan to bring the data of different sources together. As you state in your paper such information is often used for policy decisions and I think the approach is really useful when data from different places come together. So going in the direction of having a database for trash researchers where data can be easily uploaded from trashai and policy makers can search for locations to get information in this area could be extremely helpful. Are you already thinking in this direction?

  • Love this vision you are sharing. We don't currently have a central repository for trash images, and historically image data hasn't been shared very much in our field even though it has been used in many studies. I will think on this some more to consider how best to integrate the tools. We have a portal we have been developing for sharing images and other data in a standardized format for microplastics: wincowger.shinyapps.io/validator. It could be used for trash images and data as well, but we haven't extended the schema for it yet. Will think about this for the long-term roadmap of the tool for sure.

@wincowgerDEV

@luxaritas Thanks for the follow up clarification.

To be clear, I'm not necessarily advocating for this as an addition to the GUI - the reason I brought it up was more around trying to figure out the value proposition of the UI itself and whether it was providing something that couldn't be done in a simpler way, and it sounds like the UI is important to make CV tools accessible to researchers. Integration with programmable interfaces could still be valuable though (at the very least, I'd imagine it would be a good idea for the model to be available directly and not just via the UI).

The confidence labels are definitely a great idea. At some level, though, model accuracy itself is a barrier to the usefulness of this tool - no matter what else you do, if you have to manually re-tag all the images anyway, it's not helping automate the process, which is the whole point... Unfortunately it's a weird situation where the model itself isn't the primary thing under scrutiny for JOSS, but it's a critical part of the application's functionality.

  • I agree with you that accuracy is a barrier to usability. It is going to take us a long time to build a general trash AI, but I don't think that precludes us from getting this publication out and allowing people to begin using this tool. It is actually already being used by state agencies around California and in a country-wide trash survey: https://www.notracetrails.com/trashteam. In the end, it is a double-edged sword: we need people to use the tool so that we can get the data we need to improve it, but they are less likely to use it until it performs better. In the long term, we expect it to become a standard in trash research.

@domna

domna commented Mar 7, 2023

@wincowgerDEV Thank you for the detailed response and opening of issues.

Documentation on how to compile the front-end and directly upload it to any hosting solution (ideally with disabled backend upload option, so you really only need a webspace or even could use gh pages).

  • Could you provide an example of an application that has this kind of functionality? I am not super familiar with it but definitely interested in figuring out how we can do it.

I think I explained this point in an overly complicated way. I just meant that it would be good to have documentation on how to generate a static JavaScript bundle and upload it to a self-hosted webspace (e.g. locally, for people wanting to use this model on an intranet or so).
Specifically, this means having something like

cd frontend
yarn build
cp -r dist/ <your-webspace-folder>

in the documentation. Regarding the disabled backend: I've seen while testing this that it is possible to just use the frontend without any issues.
For GH Pages it would just mean that you have an automatic GitHub action to generate the webpage, which would compile the JS and copy it. You can simply use this action here https://github.com/marketplace/actions/github-pages-action and add your yarn build step. I think it should just work. The benefit would mainly be for development purposes, so people cloning the repo could immediately roll out changes through this action.
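An untested sketch of what such a workflow could look like (assuming the frontend lives in frontend/, builds to frontend/dist, and production is the branch to publish from):

name: gh-pages
on:
  push:
    branches: [production]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
      # build the static bundle
      - run: yarn install && yarn build
        working-directory: frontend
      # publish the compiled bundle to the gh-pages branch
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./frontend/dist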

A single docker container which contains the compiled gui so admins can easily deploy in a docker based system. It would also be nice if this container would be uploaded to gh packages or dockerhub, so you could just provide a docker-compose file people could use to deploy the service.

Yes, currently you build directly in the docker-compose file, so people need to clone the repository. If you uploaded a pre-compiled container, people could just download a docker-compose file and run it from there. The container would be downloaded automatically from the registry and started.

Also, your localdev really is a local development option and is not targeted at deployment. My idea was just to build one lean container which just serves the frontend, so people can quickly run it on their desktop machines with Docker Desktop or on a local server with docker.

A simple Dockerfile could look like this:

# build stage: install dependencies and compile the static frontend bundle
FROM node:16.12.0 as builder
WORKDIR /app
COPY . .
RUN yarn install
RUN yarn build

# serve stage: copy the compiled bundle into nginx's public directory
FROM nginx:1.23.3
COPY --from=builder /app/dist /usr/share/nginx/html

inside the frontend dir. This would just create an nginx-based container with the frontend copied as static files into its public directory.
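Once such a container is published, the docker-compose file people download could then be as small as this (the image name is hypothetical - it would be whatever gets pushed to GH packages or dockerhub):

version: "3"
services:
  trash-ai:
    # hypothetical published image name
    image: ghcr.io/code4sac/trash-ai:latest
    ports:
      - "8080:80"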

👍️

Example data / Tutorial

I find your video tutorial very nice and good to follow. Additionally, I think it would be nice if you'd also have a folder of example data with which new users could directly follow your tutorial. Maybe even with a post-processing example of json data.

👍️

Documentation and data structure

I agree with @luxaritas that the written documentation could be expanded. What I miss the most is a detailed explanation of the JSON data structure for post-processing software. Which fields are expected and what information do they contain? Are there fields which are not available for all data? As I am not familiar with trash research, I was also wondering whether there is some standardized format for data exchange on trash location data and labelling which could be used here (I think the Hapich et al. 2022 paper elaborates on this, so it would be nice to read some more details on the connection of this trash taxonomy to the data format of trashai). Also, I understand that your targeted audience is less technical, so dealing with JSON data may be a barrier. I think it would also be good to offer a PDF-based overview (like an analysis page which could just be printed to a PDF, so users could directly have a map overview of their trash expedition).

👍️

Reference to other software

I was missing information on which software people typically use for researching trash images. Since I'm not familiar with the field I may not be aware that such software is not really available, but I think it would be useful to give a glimpse into the working process around your tool (like: what do I actually do with the downloaded json data?). It would also be interesting to know whether there are some widely known databases where you can put your trash data, as I think the useful part of such information is putting a lot of data from different researchers together to give a broad overview of littering places. I think especially it would be nice to have some guidance what to do with the json data downloaded by your tool. Is there some analysis software for that?

👍️

I was also very happy to read that you plan to upload your data to the taco dataset. Do you also plan on adding a feature where users can directly annotate their data in trashai and then upload the annotations together with the images to trashai? I think this could be a powerful tool for advancing the taco dataset, since it would target the erroneous classifications of the model directly.

  • Thanks for the kind words. We do plan on adding in that feature and are actively working on it. Glad to hear you also think it will be useful.

Future of the software

This is just more out of curiosity for your future plans. I find your software a super helpful tool for annotation. However, I was wondering if you plan to bring the data of different sources together. As you state in your paper such information is often used for policy decisions and I think the approach is really useful when data from different places come together. So going in the direction of having a database for trash researchers where data can be easily uploaded from trashai and policy makers can search for locations to get information in this area could be extremely helpful. Are you already thinking in this direction?

  • Love this vision you are sharing. We don't currently have a central repository for trash images, and historically image data hasn't been shared very much in our field even though it has been used in many studies. I will think on this some more to consider how best to integrate the tools. We have a portal we have been developing for sharing images and other data in a standardized format for microplastics: wincowger.shinyapps.io/validator. It could be used for trash images and data as well, but we haven't extended the schema for it yet. Will think about this for the long-term roadmap of the tool for sure.

Yes, bringing all of this data from different sources together is probably a hard endeavor. While I think in the long run it will be extremely helpful to have such a database, I think it's a lot of work harmonising different metadata schemes and building the APIs. I don't know if anything has happened yet in this direction in your research community (like metadata standards or so).

@wincowgerDEV

@domna Thanks for the feedback here. I created new issues for both of your first two points; love the ideas, and it's a lot clearer to me now what you were thinking. I need to circle back with the developers on this to see how challenging it will be to integrate these, but we will definitely give them a shot.
code4sac/trash-ai#127
code4sac/trash-ai#126

Yes, bringing all of this data from different sources together is probably a hard endeavor. While I think in the long run it will be extremely helpful to have such a database, I think it's a lot of work harmonising different metadata schemes and building the APIs. I don't know if anything has happened yet in this direction in your research community (like metadata standards or so).

Totally agree, it's a big, messy process to harmonize schemas. There are some standards for reporting in the field, but nothing like a JSON Schema to my knowledge. Today everything is expected to live in local databases or in spreadsheet-like files. We will build from those advancements to inform our work here.
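
Just to make that concrete, a minimal JSON Schema for a single trash observation might look something like the sketch below. The fields are purely illustrative (as noted above, there is no agreed community standard yet), and validation here uses the jsonschema Python package:

from jsonschema import validate

# Hypothetical metadata schema for a single trash observation.
# The fields are illustrative, not an existing community standard.
observation_schema = {
    "type": "object",
    "properties": {
        "image_id": {"type": "string"},
        "label": {"type": "string"},
        "latitude": {"type": "number", "minimum": -90, "maximum": 90},
        "longitude": {"type": "number", "minimum": -180, "maximum": 180},
        "timestamp": {"type": "string", "format": "date-time"},
    },
    "required": ["image_id", "label"],
}

# Raises jsonschema.exceptions.ValidationError if the record does not conform.
validate(
    {"image_id": "IMG_0001.jpg", "label": "plastic bottle"},
    observation_schema,
)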

@wincowgerDEV

Hey @luxaritas and @domna, hope you are both doing well. Just checking in on whether you are both done with your first round of reviews. If so, I will begin making revisions to the repo.

@domna

domna commented Apr 14, 2023

Hi @wincowgerDEV,

yes, thank you. Please go ahead and start your revisions. Feel free to also refer to me in the issues and I'll follow the updates. Looking forward to your changes!

@arfon
Member

arfon commented Apr 14, 2023

Hey @luxaritas and @domna, hope you are both doing well. Just checking in on whether you are both done with your first round of reviews. If so, I will begin making revisions to the repo.

👍 I think it makes sense to start incorporating reviewer feedback at this stage @wincowgerDEV.

@wincowgerDEV

wincowgerDEV commented Apr 14, 2023 via email

@arfon
Member

arfon commented Sep 22, 2023

@wincowgerDEV – looks like we're very close to being done here. I will circle back here next week, but in the meantime, please give your own paper a final read to check for any potential typos etc.

After that, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software in Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:

  • The title of the archive is the same as the JOSS paper title
  • The authors of the archive are the same as the JOSS paper authors

I can then move forward with accepting the submission.

@wincowgerDEV

Thanks @arfon, responses below:

@wincowgerDEV – looks like we're very close to being done here. I will circle back here next week, but in the meantime, please give your own paper a final read to check for any potential typos etc.

Went through one last time and all seems correct.

After that, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software in Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:

  • The title of the archive is the same as the JOSS paper title
  • The authors of the archive are the same as the JOSS paper authors

I can then move forward with accepting the submission.

I made this Zenodo repository with the most up-to-date release, titled it the same as the manuscript, and added the same authors in the same order:
https://zenodo.org/record/8384126

@arfon
Member

arfon commented Sep 28, 2023

@editorialbot set 10.5281/zenodo.8384126 as archive

@editorialbot
Collaborator Author

Done! archive is now 10.5281/zenodo.8384126

@arfon
Member

arfon commented Sep 28, 2023

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.4154370 is OK
- 10.1186/s43591-022-00035-1 is OK
- 10.1029/2019EA000960 is OK
- 10.1186/s40965-018-0050-y is OK
- 10.1016/j.wasman.2021.12.001 is OK
- 10.48550/ARXIV.2003.06975 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4629, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Sep 28, 2023
@arfon
Member

arfon commented Sep 29, 2023

@wincowgerDEV – can you please merge this PR? code4sac/trash-ai#192

@arfon
Member

arfon commented Sep 29, 2023

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.4154370 is OK
- 10.1186/s43591-022-00035-1 is OK
- 10.1029/2019EA000960 is OK
- 10.1186/s40965-018-0050-y is OK
- 10.1016/j.wasman.2021.12.001 is OK
- 10.48550/ARXIV.2003.06975 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4632, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@arfon
Member

arfon commented Sep 29, 2023

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Cowger
  given-names: Win
  orcid: "https://orcid.org/0000-0001-9226-3104"
- family-names: Hollingsworth
  given-names: Steven
- family-names: Fey
  given-names: Day
- family-names: Norris
  given-names: Mary C
- family-names: Yu
  given-names: Walter
- family-names: Kerge
  given-names: Kristiina
- family-names: Haamer
  given-names: Kris
- family-names: Durante
  given-names: Gina
- family-names: Hernandez
  given-names: Brianda
contact:
- family-names: Hollingsworth
  given-names: Steven
doi: 10.5281/zenodo.8384126
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Cowger
    given-names: Win
    orcid: "https://orcid.org/0000-0001-9226-3104"
  - family-names: Hollingsworth
    given-names: Steven
  - family-names: Fey
    given-names: Day
  - family-names: Norris
    given-names: Mary C
  - family-names: Yu
    given-names: Walter
  - family-names: Kerge
    given-names: Kristiina
  - family-names: Haamer
    given-names: Kris
  - family-names: Durante
    given-names: Gina
  - family-names: Hernandez
    given-names: Brianda
  date-published: 2023-09-29
  doi: 10.21105/joss.05136
  issn: 2475-9066
  issue: 89
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5136
  title: "Trash AI: A Web GUI for Serverless Computer Vision Analysis of
    Images of Trash"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05136"
  volume: 8
title: "Trash AI: A Web GUI for Serverless Computer Vision Analysis of
  Images of Trash"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.
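
A quick way to sanity-check the file before committing it is to parse it and confirm the key fields load. The snippet below is just a YAML round-trip with the pyyaml package, not a full Citation File Format validator:

import yaml

# Parse CITATION.cff and print the fields JOSS cares about.
# This only checks that the YAML is well formed; it does not
# validate against the official CFF schema.
with open("CITATION.cff") as f:
    cff = yaml.safe_load(f)

print(cff["title"])
print(cff["doi"])
print([author["family-names"] for author in cff["authors"]])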

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05136 joss-papers#4633
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05136
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels Sep 29, 2023
@arfon
Member

arfon commented Sep 29, 2023

@domna, @luxaritas – many thanks for your reviews here! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@wincowgerDEV – your paper is now accepted and published in JOSS ⚡🚀💥

@arfon arfon closed this as completed Sep 29, 2023
@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05136/status.svg)](https://doi.org/10.21105/joss.05136)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05136">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05136/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05136/status.svg
   :target: https://doi.org/10.21105/joss.05136

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@wincowgerDEV

Wonderful! Thank you so much @arfon, @luxaritas, and @domna for all your hard work helping us bring this project to life.

@domna

domna commented Sep 30, 2023

Congrats 🎉
I was happy to review this nice project! Good luck with your software in the future. I hope the publication gives you some additional reach for your research!
