Aws/trashai staging / update about page (#46)
* Coco POC #8 #9 (#19) (#26)

* Fixed some package.json issues, updated some entries in .gitignore and .in (for those who would use it)

* saving work

* saving work... having issues with async reading file then loading it in coco

* coco confirmed working

* Added JSON metadata

* Arg! this should work

* Removed Pro port mapping from localstack docker-compose

* Added multiple file upload and cleaned up presentation a bit

Looks like there is a problem with uploading multiple files at once; the
predictions don't always trigger

* Minor updates to MacOS details (#24)

Co-authored-by: Jim <[email protected]>

Co-authored-by: Jim Ewald <[email protected]>
Co-authored-by: Jim <[email protected]>

* removed symlink

* Testing coco (#28)

* removed symlink to web_model

* Changed upload behaviour

* Refactored upload dialog / drop area

* added more model samples
* removed dropzone package

* Fixed issue with remove / key and array order

* Removed lab / test for deployed versions of environment

* Added about page

* Forgot to update the metadata text

* Yolov5-Taco web model #7 (#30)

* Testing tf_web model export

* removing pyright

* Working version of taco dataset + initial build instructions

* Created links to code4sac and the github project

Added jupyter notebook files for new training as well as using existing
training data

* Adjusted new training jupyter notebook

* Testing change to model loading in AWS

* Try 2 on loading model

* Try 3 on loading model

* Reverting changes (has to do with basic auth url)

* Removed old react frontend directory

* Removed old react frontend directory (#34)

* Moving domain from codefordev to trashai.org

* Fixes for #36, #35, #32 + download all feature, and backend refactor w/ dedup (#38)

* Replaced backend with Python, fixed infra deploy logic #35

* backend now tracks images by their sha256 value to avoid duplication
  (a browser-side sketch of the same hashing idea follows this list)
* added new manual action to infra deploy logic #35
* added log retention logic to backend deploy #36
* added new fields to deploy_map
    "log_retention_days": 30,
    "dns_domain_map_root": true,
* made the github_actions script aware of the domain and the "is_root_domain" deploy_map setting
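The backend here is Python, but the dedup idea, keying each upload by the SHA-256 of its bytes so the same image always maps to the same record, can be illustrated with a small browser-side sketch. The TypeScript below uses the standard Web Crypto API; the helper names and the duplicate check are assumptions for illustration, not the project's actual code.

```typescript
// Hex-encoded SHA-256 digest of an uploaded file. Identical files always
// produce the same digest, so it works as a content-derived key
// (e.g. an object name in S3 or a primary key in a local store).
async function sha256OfFile(file: File): Promise<string> {
    const bytes = await file.arrayBuffer();
    const digest = await crypto.subtle.digest('SHA-256', bytes);
    return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, '0'))
        .join('');
}

// Hypothetical duplicate check: has this exact image been seen before?
async function isDuplicate(file: File, seenHashes: Set<string>): Promise<boolean> {
    return seenHashes.has(await sha256OfFile(file));
}
```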

* Addresses #32; added new features, among them a download-all button (a sketch of the storage-and-zip flow follows this list)

* added new packages to the UI
    * "dexie": local storage with the IndexedDB browser store; localStorage
      couldn't cut it
    * "file-saver": lets us do a saveAs with the zip file
    * "jszip": zip library to address #
* indicators for backend upload success, and for whether the image has been
  seen before
* now looks good on mobile as well as desktop
* adopting IndexedDB also introduced caching of the file objects (for later download) and of the TF model
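A rough TypeScript sketch of how these packages fit together: a dexie table in IndexedDB keyed by the image hash, plus a "download all" action that zips the cached files with jszip and saves the archive with file-saver. The table layout, names, and zip filename are illustrative assumptions, not the project's actual schema.

```typescript
import Dexie, { Table } from 'dexie';
import JSZip from 'jszip';
import { saveAs } from 'file-saver';

// One cached upload: the original blob plus its content hash and model output.
interface SavedImage {
    hash: string;      // sha256 of the file, used as the primary key
    name: string;      // original filename
    blob: Blob;        // file contents, kept so "download all" works later
    metadata: unknown; // predictions, bounding boxes, etc.
}

// IndexedDB store via dexie; localStorage is far too small for image blobs.
class TrashDb extends Dexie {
    images!: Table<SavedImage, string>;
    constructor() {
        super('trash-ai-cache');
        this.version(1).stores({ images: 'hash' }); // 'hash' is the key path
    }
}
const db = new TrashDb();

// Bundle every cached image (and its metadata) into one zip and save it.
async function downloadAll(): Promise<void> {
    const zip = new JSZip();
    for (const img of await db.images.toArray()) {
        zip.file(img.name, img.blob);
        zip.file(`${img.name}.json`, JSON.stringify(img.metadata, null, 2));
    }
    saveAs(await zip.generateAsync({ type: 'blob' }), 'trash-ai-images.zip');
}
```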

* Added support for python packages

* Adjusted permission to add/remove layers in the prefix namespace

* fixed permissions issue for layers / backend deploy

* Added hash and other metadata to the metadata display and hover over filename

* Fixed metadata download and info button

* bugfix on s3 naming with file extension (#40)

* Added magnifying feature for larger images

* filling out the about page (#45)

* filling out the about page

* adding article

* better formatting

Co-authored-by: Jim Ewald <[email protected]>
Co-authored-by: Jim <[email protected]>
Co-authored-by: Dan Fey <[email protected]>
4 people authored Apr 14, 2022
1 parent f71bba4 commit 694e33f
41 changes: 40 additions & 1 deletion frontend/src/components/about.vue
@@ -2,7 +2,46 @@
<v-sheet>
<h1 style="text-align: center">About Trash AI</h1>
<hr class="my-3" />
<p>TODO</p>
<h2> Welcome to Trash AI! </h2>
<p>
This is an <a href="https://github.com/code4sac/trash-ai">open source project</a>
developed and maintained by <a href="https://codeforsacramento.org">Code For Sacramento</a>
in partnership with Win Cowger from UC Riverside and Walter Yu from CALTRANS.
There have also been <a href="https://github.com/code4sac/trash-ai/graphs/contributors">many contributors</a> to the code base.
</p>

<h2> What is it? </h2>
<p>
Trash AI allows you to upload an image containing trash and get back data about the trash in the image,
including the classification of trash and bounding box of where the trash is in the image.
</p>

<h2> How does it work? </h2>
<p>
Trash AI builds a model using the <a href="https://github.com/ultralytics/yolov5">YoloV5 toolset</a> trained on the <a href="http://tacodataset.org/">TACO dataset</a>.
The model takes an image containing trash and returns a list of annotations and bounding boxes of trash within the image.
The model is imported into the front-end <a href="https://vuejs.org/">Vue.js</a> application where it is invoked when an image is uploaded.
The Vue application then displays the results of the model on the image.
</p>

<h2> How can I use Trash AI? </h2>
<p>
Trash AI is open source and free to use however you see fit. You may classify images and download the data.
You may copy and modify the code for your own use.
</p>

<h2> Disclaimer about uploaded images </h2>
<p>
The current version of Trash AI and the model we are using are just a start!
When you upload an image, we are storing the image and the classification in an effort
to expand the trash dataset and improve the model over time.
</p>

<h2> Reporting issues and improvements </h2>
<p>
If you would like to report an issue or request a feature, please
<a href="https://github.com/code4sac/trash-ai/issues/new">open a Github Issue</a> in our repository.
</p>
</v-sheet>
</template>
<script>

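The new About page's "How does it work?" section describes exporting a YOLOv5 model trained on TACO to a web format, loading it in the Vue front end, and running it whenever an image is uploaded. A minimal TensorFlow.js sketch of that flow is below; the model URL, input size, and output handling are assumptions for illustration, not the component's actual code.

```typescript
import * as tf from '@tensorflow/tfjs';

// Load the exported web model once and reuse it for every upload.
// '/model/model.json' is a placeholder path, not the project's real asset URL.
const modelPromise = tf.loadGraphModel('/model/model.json');

// Run detection on an already-decoded image element.
async function detectTrash(img: HTMLImageElement): Promise<tf.Tensor[]> {
    const model = await modelPromise;

    // Preprocess: pixels -> float tensor, resized and scaled to [0, 1],
    // with a leading batch dimension. 640x640 is a common YOLOv5 input size.
    const input = tf.tidy(() =>
        tf.image
            .resizeBilinear(tf.browser.fromPixels(img).toFloat(), [640, 640])
            .div(255)
            .expandDims(0),
    );

    // The exported graph returns tensors holding boxes, scores, and class ids;
    // the exact layout depends on how the model was exported.
    const output = (await model.executeAsync(input)) as tf.Tensor[];

    input.dispose();
    return output; // caller draws the bounding boxes, then disposes these tensors
}
```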