From 385f935befbe37614918b8a2a36512e43e2288e3 Mon Sep 17 00:00:00 2001
From: Steven Hollingsworth
Date: Sat, 20 Aug 2022 15:02:59 -0700
Subject: [PATCH 01/20] Aws/trashai staging (#65)
* Coco POC #8 #9 (#19) (#26)
* Fixed some package.json issues, updated some entries in .gitignore and .in (for those who would use it)
* saving work
* saving work... having issues with async reading file then loading it in coco
* coco confirmed working
* Added JSON metadata
* Arg! this should work
* Removed Pro port mapping from localstack docker-compose
* Added multiple file upload and cleaned up presentation a bit
Looks like there is a problem with uploading multiple files at once; the
predictions don't always trigger
* Minor updates to macOS details (#24)
Co-authored-by: Jim
Co-authored-by: Jim Ewald
* removed symlink
* Testing coco (#28)
* removed symlink to web_model
Co-authored-by: Jim Ewald
Co-authored-by: Jim
* Changed upload behaviour
* Refactored upload dialog / drop area
* added more model samples
* removed dropzone package
* Fixed issue with remove / key and array order
* Removed lab / test for deployed versions of environment
* Added about page
* Forgot to update the metadata text
* Yolov5-Taco web model #7 (#30)
* Testing tf_web model export
* removing pyright
* Working version of taco dataset + initial build instructions
* Created links to code4sac and the github project
Added jupyter notebook files for new training as well as using existing
training data
* Adjusted new training jupyter notebook
* Testing change to model loading in AWS
* Try 2 on loading model
* Try 3 on loading model
* Reverting changes (has to do with basic auth url)
* Removed old react frontend directory
* Removed old react frontend directory (#34)
* Moving domain from codefordev to trashai.org
* Replaced backend with Python, fixed infra deploy logic #35
* backend now tracks images by their sha256 value to avoid duplication
* added new manual action to infra deploy logic #35
* added log retention logic to backend deploy #36
* added new fields to deploy_map
"log_retention_days": 30,
"dns_domain_map_root": true,
* made github_actions script aware of domain, and of the "is_root_domain" deploy_map setting
* Addresses #32; added new features, among them a download-all button
* added new packages to UI
* "dexie": local storage with an IndexedDB browser store; localStorage couldn't cut it
* "file-saver": allows us to do a save-as with the zip file
* "jszip": zip library to address #
* indicators for backend upload success, and for whether the image has been seen before
* now looks good on mobile as well as desktop
* adopting IndexedDB also introduced caching of the file objects (for later download) and of the TF model
* Added support for python packages
* Adjusted permissions to add/remove layers in the prefix namespace
* fixed permissions issue for layers / backend deploy
* Added hash and other metadata to the metadata display and to the filename hover
* Fixed metadata download and info button
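The sha256-based dedup described above can be sketched as follows. This is a minimal illustration only: `image_key`, `upload`, and the in-memory `seen` set are hypothetical stand-ins for the real backend store, which is not part of this patch excerpt.

```python
import hashlib

def image_key(data: bytes, ext: str) -> str:
    """Derive a stable storage key from image bytes so re-uploads dedupe."""
    digest = hashlib.sha256(data).hexdigest()
    return f"{digest}{ext}"

# Stand-in for the real persistence layer (e.g. a DB or bucket listing).
seen: set[str] = set()

def upload(data: bytes, ext: str) -> tuple[str, bool]:
    """Return (key, already_seen); identical bytes always map to one key."""
    key = image_key(data, ext)
    duplicate = key in seen
    seen.add(key)
    return key, duplicate
```

Because the key is derived from content rather than filename, the same photo uploaded twice (even under different names) resolves to one stored object, which is what enables the "image has been seen before" indicator in the UI.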
* Fixes for #36, #35, #32 + download all feature, and backend refactor w/ dedup (#38)
* Fixing file ext name issue with s3 bucket upload on backend
* bugfix on s3 naming with file extension (#40)
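The extension fix above presumably amounts to carrying the original file suffix onto the hash-derived object name. A hedged sketch (`s3_object_name` is a hypothetical helper, not the actual backend function):

```python
import os

def s3_object_name(key: str, original_filename: str) -> str:
    """Append the uploaded file's extension to the content-derived key,
    normalized to lowercase, so S3 objects keep a usable suffix."""
    _, ext = os.path.splitext(original_filename)
    return key + ext.lower()
```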
* Added magnifying feature for larger images
* filling out the about page (#45)
* filling out the about page
* adding article
* better formatting
* Made about page default #49
* Added "Samples" button to upload tab closes #50
* Making about updates specified in #48 (#51)
* Making about updates specified in #48
* removing space
* small about update
* Demoing the CI/CD integration
* Typescript refactor Version 1
* removed nuxt
* removed more files
* adjusted build commands
* fixed height issue with uploads
* re-implemented backend
* adjusted frontend deploy stack
* Added github secret for google maps api key
* Added dockerfile for frontend
* added jszip
* Fixed android upload gps issue, fixed mobile status truncating issue
* removed unneeded logo files
* Typescript refactor (#58)
Co-authored-by: Dan Fey
Co-authored-by: Jim Ewald
Co-authored-by: Jim
* updated deploy scripts
* testing config update
* added vite
* testing vite
* testing vite
* testing deploy
* adjusted permissions to allow access to public bucket
* still testing
* arg!
* Updating about page (#64)
* Updating about page to show Steve as the lead dev and adding feedback form
* making about pages consistent
Co-authored-by: Jim Ewald
Co-authored-by: Jim
Co-authored-by: Dan Fey
---
frontend/src/components/about.vue | 12 ++++++------
frontend/src/views/about.vue | 12 ++++++------
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/frontend/src/components/about.vue b/frontend/src/components/about.vue
index cd4d6db..b386cf2 100644
--- a/frontend/src/components/about.vue
+++ b/frontend/src/components/about.vue
@@ -14,12 +14,10 @@
The Moore Institute for Plastic Pollution Research
- and Walter Yu from CALTRANS. There
- have also been
- many contributors
- to the code base.
+ and Walter Yu from CALTRANS.
+
+ Steven Hollingsworth
+ is the lead developer and contributor to the code base.
To get started, visit the Upload Tab or
@@ -68,6 +66,8 @@
>open a Github Issue
in our repository.
+ If you would like to provide more general feedback, please fill out
+ our feedback form here .
diff --git a/frontend/src/views/about.vue b/frontend/src/views/about.vue
--- a/frontend/src/views/about.vue
+++ b/frontend/src/views/about.vue
To get started, visit the Upload Tab or
@@ -68,6 +66,8 @@
open a Github Issue
in our repository.
+ If you would like to provide more general feedback, please fill out
+ our feedback form here .
From 118760fc0b6db3eb259bc39270736b425a9c0c62 Mon Sep 17 00:00:00 2001
From: Dan Fey
Date: Sat, 27 Aug 2022 16:23:56 -0700
Subject: [PATCH 02/20] Google analytics addition merge to production (#68)
* adding google analytics script (#67)
Co-authored-by: Steven Hollingsworth
Co-authored-by: Jim Ewald
Co-authored-by: Jim
---
frontend/index.html | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/frontend/index.html b/frontend/index.html
index a088c7a..3fe7076 100644
--- a/frontend/index.html
+++ b/frontend/index.html
@@ -2,6 +2,15 @@
+
+
+
From 61dd07fc46bec174cf8b31fa59dbe33e10bdc1dc Mon Sep 17 00:00:00 2001
From: running-man-01 <112680312+running-man-01@users.noreply.github.com>
Date: Fri, 14 Oct 2022 12:08:20 -0400
Subject: [PATCH 03/20] a prototype for accuracy evaluation
---
notebooks/Evaluation_Accuracy.ipynb | 725 ++++++++++++++++++++++++++++
1 file changed, 725 insertions(+)
create mode 100644 notebooks/Evaluation_Accuracy.ipynb
diff --git a/notebooks/Evaluation_Accuracy.ipynb b/notebooks/Evaluation_Accuracy.ipynb
new file mode 100644
index 0000000..cd209c4
--- /dev/null
+++ b/notebooks/Evaluation_Accuracy.ipynb
@@ -0,0 +1,725 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "RJQb7ylX2vHr"
+ },
+ "outputs": [],
+ "source": [
+ "# Build evaluation method\n",
+ "# Aim at >.90 accuracy\n",
+ "\n",
+ "# currently it is tested with yolov5 prediction results\n",
+ "# it should be compatible for all torch prediction outputs in the form of .pandas().xywh "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "0D7J191IYwp8",
+ "outputId": "015e9ea9-6b3f-4774-e798-07866d3a07e0"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Fri Oct 14 16:04:14 2022 \n",
+ "+-----------------------------------------------------------------------------+\n",
+ "| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
+ "|-------------------------------+----------------------+----------------------+\n",
+ "| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
+ "| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
+ "| | | MIG M. |\n",
+ "|===============================+======================+======================|\n",
+ "| 0 A100-SXM4-40GB Off | 00000000:00:04.0 Off | 0 |\n",
+ "| N/A 32C P0 45W / 400W | 0MiB / 40536MiB | 0% Default |\n",
+ "| | | Disabled |\n",
+ "+-------------------------------+----------------------+----------------------+\n",
+ " \n",
+ "+-----------------------------------------------------------------------------+\n",
+ "| Processes: |\n",
+ "| GPU GI CI PID Type Process name GPU Memory |\n",
+ "| ID ID Usage |\n",
+ "|=============================================================================|\n",
+ "| No running processes found |\n",
+ "+-----------------------------------------------------------------------------+\n"
+ ]
+ }
+ ],
+ "source": [
+ "!nvidia-smi"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# 0. Prep, install yolov5, download and partition datasets"
+ ],
+ "metadata": {
+ "id": "_9jgRJYlL_-H"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "2TTOdqkAk_Ad"
+ },
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "!git clone https://github.com/ultralytics/yolov5 \n",
+ "%cd yolov5\n",
+ "!pip install -r requirements.txt #wandb\n",
+ "%cd .."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "FBHHieZsCbGS"
+ },
+ "outputs": [],
+ "source": [
+ "from PIL import Image, ExifTags\n",
+ "from pycocotools.coco import COCO\n",
+ "from matplotlib.patches import Polygon, Rectangle\n",
+ "from matplotlib.collections import PatchCollection\n",
+ "import colorsys\n",
+ "import random\n",
+ "import pylab\n",
+ "\n",
+ "import json\n",
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "import matplotlib.pyplot as plt\n",
+ "import seaborn as sns; sns.set()\n",
+ "from tqdm import tqdm\n",
+ "\n",
+ "import shutil\n",
+ "import os\n",
+ "import re\n",
+ "\n",
+ "\n",
+ "import torch\n",
+ "from torch.utils.data import Dataset, DataLoader\n",
+ "from torchvision import transforms, utils"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from google.colab import drive\n",
+ "drive.mount('/content/drive')"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "M0nSxY0wdrr2",
+ "outputId": "9d0e06a0-2937-4d01-8a08-54a4361091e1"
+ },
+ "execution_count": 5,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%cp ./drive/MyDrive/rotated2.zip ./"
+ ],
+ "metadata": {
+ "id": "Z9TZLZUZdusE"
+ },
+ "execution_count": 6,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%%capture\n",
+ "!wget https://raw.githubusercontent.com/pedropro/TACO/master/data/annotations.json\n",
+ "!wget https://raw.githubusercontent.com/pedropro/TACO/master/data/annotations_unofficial.json\n",
+ "!unzip -qq ./rotated2.zip \n",
+ "%mv ./content/* ./"
+ ],
+ "metadata": {
+ "id": "nbwKD15fKpfa"
+ },
+ "execution_count": 7,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "id": "CPFCzX31IGBq",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "8635bf93-000c-46ac-e30d-3fdd0688584a"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Number of all images:\n",
+ "1500\n"
+ ]
+ }
+ ],
+ "source": [
+ "nr_imgs=None\n",
+ "for root, dirnames, filenames in os.walk('./yoloTACO/labels/'):\n",
+ " nr_imgs = len(filenames)\n",
+ " break\n",
+ "print('Number of all images:\\n'+str(nr_imgs))\n",
+ "\n",
+ "## train test split\n",
+ "'''\n",
+ "train: images/train\n",
+ "val: images/val\n",
+ "test: images/test\n",
+ "'''\n",
+ "np.random.seed(5)\n",
+ "id_list=[i for i in range(nr_imgs)]\n",
+ "np.random.shuffle(id_list)\n",
+ "train_ids = id_list[:1300]\n",
+ "val_ids = id_list[1300:1400]\n",
+ "test_ids = id_list[1400:]\n",
+ "\n",
+ "def move_helper(ids, desti):\n",
+ " for id in ids:\n",
+ " img_name = os.path.join( './yoloTACO/images', str(id)+'.jpg' )\n",
+ " lbl_name = os.path.join( './yoloTACO/labels', str(id)+'.txt' )\n",
+ " print(img_name)\n",
+ " if os.path.isfile(img_name):\n",
+ " shutil.copy( img_name, './yoloTACO/images/'+desti)\n",
+ " shutil.copy( lbl_name, './yoloTACO/labels/'+desti)\n",
+ " else :\n",
+ " print('file does not exist', img_name)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "id": "kwCWClsrSD4z"
+ },
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "!mkdir yoloTACO/images/train\n",
+ "!mkdir yoloTACO/images/val\n",
+ "!mkdir yoloTACO/images/test\n",
+ "!mkdir yoloTACO/labels/train\n",
+ "!mkdir yoloTACO/labels/val\n",
+ "!mkdir yoloTACO/labels/test\n",
+ "move_helper(test_ids,'test')\n",
+ "move_helper(train_ids,'train')\n",
+ "move_helper(val_ids,'val')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%%bash\n",
+ "mkdir ./datasets\n",
+ "mv yoloTACO datasets/"
+ ],
+ "metadata": {
+ "id": "gHyjxWAW7DF0"
+ },
+ "execution_count": 10,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [],
+ "metadata": {
+ "id": "S96FWSElHY9d"
+ },
+ "execution_count": 10,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "id": "evH7jgs5Dnj7",
+ "cellView": "form"
+ },
+ "outputs": [],
+ "source": [
+ "#@title yml\n",
+ "\n",
+ "with open('./yolov5/data/yoloTACO.yaml', mode='w') as fp:\n",
+ " lines = '''path: ../datasets/yoloTACO # dataset root dir\n",
+ "train: images/train # train images \n",
+ "val: images/val # val images \n",
+ "test: images/test # test images (optional)\n",
+ "\n",
+ "# Classes\n",
+ "names:\n",
+ " 0: Aluminium foil\n",
+ " 1: Battery\n",
+ " 2: Aluminium blister pack\n",
+ " 3: Carded blister pack\n",
+ " 4: Other plastic bottle\n",
+ " 5: Clear plastic bottle\n",
+ " 6: Glass bottle\n",
+ " 7: Plastic bottle cap\n",
+ " 8: Metal bottle cap\n",
+ " 9: Broken glass\n",
+ " 10: Food Can\n",
+ " 11: Aerosol\n",
+ " 12: Drink can\n",
+ " 13: Toilet tube\n",
+ " 14: Other carton\n",
+ " 15: Egg carton\n",
+ " 16: Drink carton\n",
+ " 17: Corrugated carton\n",
+ " 18: Meal carton\n",
+ " 19: Pizza box\n",
+ " 20: Paper cup\n",
+ " 21: Disposable plastic cup\n",
+ " 22: Foam cup\n",
+ " 23: Glass cup\n",
+ " 24: Other plastic cup\n",
+ " 25: Food waste\n",
+ " 26: Glass jar\n",
+ " 27: Plastic lid\n",
+ " 28: Metal lid\n",
+ " 29: Other plastic\n",
+ " 30: Magazine paper\n",
+ " 31: Tissues\n",
+ " 32: Wrapping paper\n",
+ " 33: Normal paper\n",
+ " 34: Paper bag\n",
+ " 35: Plastified paper bag\n",
+ " 36: Plastic film\n",
+ " 37: Six pack rings\n",
+ " 38: Garbage bag\n",
+ " 39: Other plastic wrapper\n",
+ " 40: Single-use carrier bag\n",
+ " 41: Polypropylene bag\n",
+ " 42: Crisp packet\n",
+ " 43: Spread tub\n",
+ " 44: Tupperware\n",
+ " 45: Disposable food container\n",
+ " 46: Foam food container\n",
+ " 47: Other plastic container\n",
+ " 48: Plastic glooves\n",
+ " 49: Plastic utensils\n",
+ " 50: Pop tab\n",
+ " 51: Rope & strings\n",
+ " 52: Scrap metal\n",
+ " 53: Shoe\n",
+ " 54: Squeezable tube\n",
+ " 55: Plastic straw\n",
+ " 56: Paper straw\n",
+ " 57: Styrofoam piece\n",
+ " 58: Unlabeled litter\n",
+ " 59: Cigarette'''\n",
+ " fp.writelines(lines)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "CdMrwEgnB9C4",
+ "outputId": "1eded40f-2c94-4a74-f0b1-74394d2c2623"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "/content/yolov5\n",
+ "benchmarks.py\t detect.py models\t setup.cfg val.py\n",
+ "classify\t export.py README.md\t train.py\n",
+ "CONTRIBUTING.md hubconf.py requirements.txt tutorial.ipynb\n",
+ "data\t\t LICENSE segment\t utils\n"
+ ]
+ }
+ ],
+ "source": [
+ "%cd ./yolov5\n",
+ "!ls"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%pwd"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 35
+ },
+ "id": "XslsKqRuHpmf",
+ "outputId": "cc3a38c5-8c0b-4760-e41a-284eb8d9d8f0"
+ },
+ "execution_count": 13,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "'/content/yolov5'"
+ ],
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ }
+ },
+ "metadata": {},
+ "execution_count": 13
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [],
+ "metadata": {
+ "id": "67Xa-feZH9q5"
+ },
+ "execution_count": 13,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# 1. Evaluate with our best trained weights so far"
+ ],
+ "metadata": {
+ "id": "C37qgWyEMLpj"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Detect and evaluate with the default YOLO scripts"
+ ],
+ "metadata": {
+ "id": "fHmO_QRlN4bQ"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%cp /content/drive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt ./"
+ ],
+ "metadata": {
+ "id": "0KMhfVDENzT2"
+ },
+ "execution_count": 14,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "id": "DoLh0BGlXQMC",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "e92a300c-f497-48ef-a66a-1380417d1645"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\u001b[34m\u001b[1mval: \u001b[0mdata=/content/yolov5/data/yoloTACO.yaml, weights=['./yolov5x6_best_weights.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=test, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val, name=exp, exist_ok=False, half=False, dnn=False\n",
+ "YOLOv5 🚀 v6.2-194-g2a19d07 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (A100-SXM4-40GB, 40536MiB)\n",
+ "\n",
+ "Fusing layers... \n",
+ "Model summary: 416 layers, 140537980 parameters, 0 gradients, 209.1 GFLOPs\n",
+ "\u001b[34m\u001b[1mtest: \u001b[0mScanning '/content/datasets/yoloTACO/labels/test' images and labels...100 found, 0 missing, 0 empty, 0 corrupt: 100% 100/100 [00:00<00:00, 2635.51it/s]\n",
+ "\u001b[34m\u001b[1mtest: \u001b[0mNew cache created: /content/datasets/yoloTACO/labels/test.cache\n",
+ " Class Images Instances P R mAP50 mAP50-95: 100% 4/4 [00:08<00:00, 2.16s/it]\n",
+ " all 100 300 0.0464 0.628 0.11 0.102\n",
+ "Speed: 0.2ms pre-process, 5.4ms inference, 2.4ms NMS per image at shape (32, 3, 640, 640)\n",
+ "Results saved to \u001b[1mruns/val/exp\u001b[0m\n"
+ ]
+ }
+ ],
+ "source": [
+ "!python val.py --data yoloTACO.yaml --task test --weights ./yolov5x6_best_weights.pt\n",
+ "#!python detect.py --weights ./yolov5x6_best_weights.pt --source /content/datasets/yoloTACO/images/test"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Note that the default `mAP` is not the target metric for our project: our sponsor specifically requested a metric named \"accuracy\" with a target score of >0.90."
+ ],
+ "metadata": {
+ "id": "4of5Qufu1Pd_"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Detect with the torch framework manually\n",
+ "\n",
+ "This is a necessary step to use our accuracy evaluator."
+ ],
+ "metadata": {
+ "id": "5Ay3MOsDN90j"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "model = torch.hub.load('ultralytics/yolov5', 'custom', path='./yolov5x6_best_weights.pt') # load our local model"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "30QWyk7yiMsk",
+ "outputId": "b9c644c2-3766-4039-b032-92f3c2aa9b64"
+ },
+ "execution_count": 16,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master\n",
+ "YOLOv5 🚀 2022-10-14 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (A100-SXM4-40GB, 40536MiB)\n",
+ "\n",
+ "Fusing layers... \n",
+ "Model summary: 416 layers, 140537980 parameters, 0 gradients, 209.1 GFLOPs\n",
+ "Adding AutoShape... \n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Load test imgs\n",
+ "test_dir = '/content/datasets/yoloTACO/images/test/'\n",
+ "test_list = [i[2] for i in os.walk(test_dir)][0]\n",
+ "test_list = [re.findall(r'\\d+',i)[0] for i in test_list]\n",
+ "test_read_img_list = [Image.open(test_dir+str(i)+'.jpg') for i in test_list]\n",
+ "# alternatively use cv2: cv2.imread('target_path')[..., ::-1] # OpenCV image (BGR to RGB)\n"
+ ],
+ "metadata": {
+ "id": "GUjCnveZk2sf"
+ },
+ "execution_count": 17,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Inference\n",
+ "results = model(test_read_img_list) # batch of images\n",
+ "pred_pd = results.pandas().xywh "
+ ],
+ "metadata": {
+ "id": "RKr_CupeiMvD"
+ },
+ "execution_count": 18,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%%capture\n",
+ "!wget -O data/annotations.json https://raw.githubusercontent.com/pedropro/TACO/master/data/annotations.json\n",
+ "anno_path = './data/annotations.json'\n",
+ "annos = COCO(annotation_file=anno_path)\n",
+ "with open(anno_path, 'r') as f:\n",
+ " annos_json = json.loads(f.read())\n",
+ "no_to_clname = {i:j for i,j in enumerate([i['name'] for i in annos_json['categories']])}\n"
+ ],
+ "metadata": {
+ "id": "QRj9_Dk_iMxG"
+ },
+ "execution_count": 19,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "truth = [annos.loadAnns(annos.getAnnIds(int(i))) for i in test_list]\n",
+ "truth_pd = []\n",
+ "for i in truth:\n",
+ " cache = [j['bbox']+[1]+[j['category_id']]+[no_to_clname[j['category_id']]] for j in i]\n",
+ " df = pd.DataFrame(cache,columns = ['xcenter','ycenter','width','height','confidence','class','name'])\n",
+ " df['xcenter'] = df['xcenter'] + df['width']/2\n",
+ " df['ycenter'] = df['ycenter'] + df['height']/2\n",
+ " truth_pd.append(df)"
+ ],
+ "metadata": {
+ "id": "zGFRscK6iMzQ"
+ },
+ "execution_count": 20,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# 2. Accuracy evaluation\n",
+ "\n",
+ "For each ground-truth bounding box in each image, if there is a predicted bounding box of the same class whose IoU with the truth box exceeds a threshold, that object is counted as `detected`.\n",
+ "\n",
+ "For overall model `accuracy`, we divide the total number of `detected` objects across all images by the total number of annotated objects across all images:\n",
+ "\n",
+ "$$\\text{accuracy} = \\frac{\\sum_{\\text{images}} \\text{detected}}{\\sum_{\\text{images}} \\text{objects}}$$"
+ ],
+ "metadata": {
+ "id": "TyiyfCEAODQI"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def bbox_iou(box1, box2, eps=1e-7):\n",
+ " # CITATION: adapted from YOLOV5 utils, author, cr: ultralytics\n",
+ " # Returns Intersection over Union (IoU) of box1(1,4) to box2(n,4)\n",
+ "\n",
+ " # Get the coordinates of bounding boxes, transform from xywh to xyxy\n",
+ " (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, 1), box2.chunk(4, 1)\n",
+ " w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2\n",
+ " b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_\n",
+ " b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_\n",
+ "\n",
+ "\n",
+ " inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \\\n",
+ " (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)\n",
+ " union = w1 * h1 + w2 * h2 - inter + eps\n",
+ " return inter / union # return IoU\n",
+ " \n",
+ "def each_pic(pred_df,truth_df,iou_th):\n",
+ "    nr_objs = truth_df.shape[0]\n",
+ "    nr_dets = 0\n",
+ "    for i in truth_df.iterrows():\n",
+ "        tbox_tensor = torch.tensor([i[1].tolist()[:4]])\n",
+ "        tlabel = i[1].tolist()[5]\n",
+ "\n",
+ "        for j in pred_df.iterrows():\n",
+ "            pbox_tensor = torch.tensor([j[1].tolist()[:4]])\n",
+ "            plabel = j[1].tolist()[5]\n",
+ "            if bbox_iou(tbox_tensor,pbox_tensor)>iou_th and tlabel==plabel:\n",
+ "                nr_dets+=1\n",
+ "                pred_df.drop([j[0]], inplace=True) # drop matched bbox (by its index label) in place, so one prediction bbox\n",
+ "                # won't be counted as \"detected\" for two different objects\n",
+ "                break # this truth box is matched; move on to the next one\n",
+ "    return nr_objs,nr_dets"
+ ],
+ "metadata": {
+ "id": "LkQgO9T5OHzs"
+ },
+ "execution_count": 21,
+ "outputs": []
+ },
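+ {
+ "cell_type": "code",
+ "source": [
+ "# A quick, hypothetical sanity check for bbox_iou (toy xywh boxes, not from the dataset):\n",
+ "# identical boxes should give an IoU of ~1.0, while far-apart boxes should give 0.0.\n",
+ "box_a = torch.tensor([[10., 10., 4., 4.]])\n",
+ "box_b = torch.tensor([[100., 100., 4., 4.]])\n",
+ "print(bbox_iou(box_a, box_a).item())\n",
+ "print(bbox_iou(box_a, box_b).item())"
+ ],
+ "metadata": {},
+ "execution_count": null,
+ "outputs": []
+ },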
+ {
+ "cell_type": "code",
+ "source": [
+ "def acc(pred,truth,iou_th=0.7):\n",
+ "    objs,dets=0,0\n",
+ "    for i in tqdm(range(len(truth))):\n",
+ "        o,d=each_pic(pred[i],truth[i],iou_th)\n",
+ "        objs+=o\n",
+ "        dets+=d\n",
+ "    return np.round(dets/objs,6)\n",
+ "\n",
+ "accuracy = acc(pred_pd,truth_pd)"
+ ],
+ "metadata": {
+ "id": "p8E9EsAMOH1d",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "b78c1f85-ae98-481a-cd49-4e651676e7b4"
+ },
+ "execution_count": 26,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "100%|██████████| 100/100 [00:00<00:00, 190.89it/s]\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "print('Our trained model has an accuracy of: '+str(accuracy))"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "jdfiVJvhLPMw",
+ "outputId": "096c1660-36a0-4c04-b39b-4f91010559aa"
+ },
+ "execution_count": 27,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Our trained model has an accuracy of: 0.6\n"
+ ]
+ }
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "background_execution": "on",
+ "collapsed_sections": [],
+ "provenance": []
+ },
+ "gpuClass": "premium",
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
\ No newline at end of file
From 73c7a3c782edde33ab3339aa42d13953642ba31d Mon Sep 17 00:00:00 2001
From: running-man-01 <112680312+running-man-01@users.noreply.github.com>
Date: Fri, 14 Oct 2022 21:40:33 -0400
Subject: [PATCH 04/20] Update Evaluation_Accuracy.ipynb
---
notebooks/Evaluation_Accuracy.ipynb | 100 +++++++++++++---------------
1 file changed, 47 insertions(+), 53 deletions(-)
diff --git a/notebooks/Evaluation_Accuracy.ipynb b/notebooks/Evaluation_Accuracy.ipynb
index cd209c4..1df509c 100644
--- a/notebooks/Evaluation_Accuracy.ipynb
+++ b/notebooks/Evaluation_Accuracy.ipynb
@@ -1,23 +1,22 @@
{
"cells": [
{
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "id": "RJQb7ylX2vHr"
- },
- "outputs": [],
+ "cell_type": "markdown",
"source": [
"# Build evaluation method\n",
- "# Aim at >.90 accuracy\n",
+ "* Aim at >.90 accuracy\n",
"\n",
- "# currently it is tested with yolov5 prediction results\n",
- "# it should be compatible for all torch prediction outputs in the form of .pandas().xywh "
- ]
+ "Currently it is tested with YOLOv5 prediction results.\n",
+ "\n",
+ "It should be compatible with all torch prediction outputs in the form of .pandas().xywh (xcenter, ycenter, width, height).\n"
+ ],
+ "metadata": {
+ "id": "qN8V989HzHDT"
+ }
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -68,7 +67,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": null,
"metadata": {
"id": "2TTOdqkAk_Ad"
},
@@ -83,7 +82,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"metadata": {
"id": "FBHHieZsCbGS"
},
@@ -117,8 +116,15 @@
{
"cell_type": "code",
"source": [
- "from google.colab import drive\n",
- "drive.mount('/content/drive')"
+ "mount_drive = None\n",
+ "if mount_drive:\n",
+ " from google.colab import drive\n",
+ " drive.mount('/content/drive')\n",
+ " %cp ./drive/MyDrive/rotated2.zip ./\n",
+ " %cp /content/drive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt ./\n",
+ " else:\n",
+ " !gdown 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom # download best trained yolov5x6 weights\n",
+ " !gdown 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA # download organized TACO images (TACO itself, 1500 images, without unofficial images)"
],
"metadata": {
"colab": {
@@ -127,7 +133,7 @@
"id": "M0nSxY0wdrr2",
"outputId": "9d0e06a0-2937-4d01-8a08-54a4361091e1"
},
- "execution_count": 5,
+ "execution_count": null,
"outputs": [
{
"output_type": "stream",
@@ -141,12 +147,13 @@
{
"cell_type": "code",
"source": [
- "%cp ./drive/MyDrive/rotated2.zip ./"
+ "!unzip -qq ./rotated2.zip \n",
+ "%mv ./content/* ./"
],
"metadata": {
- "id": "Z9TZLZUZdusE"
+ "id": "Pfvddv2pS_ZX"
},
- "execution_count": 6,
+ "execution_count": null,
"outputs": []
},
{
@@ -154,19 +161,17 @@
"source": [
"%%capture\n",
"!wget https://raw.githubusercontent.com/pedropro/TACO/master/data/annotations.json\n",
- "!wget https://raw.githubusercontent.com/pedropro/TACO/master/data/annotations_unofficial.json\n",
- "!unzip -qq ./rotated2.zip \n",
- "%mv ./content/* ./"
+ "!wget https://raw.githubusercontent.com/pedropro/TACO/master/data/annotations_unofficial.json"
],
"metadata": {
"id": "nbwKD15fKpfa"
},
- "execution_count": 7,
+ "execution_count": null,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": null,
"metadata": {
"id": "CPFCzX31IGBq",
"colab": {
@@ -218,7 +223,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": null,
"metadata": {
"id": "kwCWClsrSD4z"
},
@@ -246,7 +251,7 @@
"metadata": {
"id": "gHyjxWAW7DF0"
},
- "execution_count": 10,
+ "execution_count": null,
"outputs": []
},
{
@@ -255,12 +260,12 @@
"metadata": {
"id": "S96FWSElHY9d"
},
- "execution_count": 10,
+ "execution_count": null,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": null,
"metadata": {
"id": "evH7jgs5Dnj7",
"cellView": "form"
@@ -342,7 +347,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -381,7 +386,7 @@
"id": "XslsKqRuHpmf",
"outputId": "cc3a38c5-8c0b-4760-e41a-284eb8d9d8f0"
},
- "execution_count": 13,
+ "execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -404,7 +409,7 @@
"metadata": {
"id": "67Xa-feZH9q5"
},
- "execution_count": 13,
+ "execution_count": null,
"outputs": []
},
{
@@ -427,18 +432,7 @@
},
{
"cell_type": "code",
- "source": [
- "%cp /content/drive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt ./"
- ],
- "metadata": {
- "id": "0KMhfVDENzT2"
- },
- "execution_count": 14,
- "outputs": []
- },
- {
- "cell_type": "code",
- "execution_count": 15,
+ "execution_count": null,
"metadata": {
"id": "DoLh0BGlXQMC",
"colab": {
@@ -502,7 +496,7 @@
"id": "30QWyk7yiMsk",
"outputId": "b9c644c2-3766-4039-b032-92f3c2aa9b64"
},
- "execution_count": 16,
+ "execution_count": null,
"outputs": [
{
"output_type": "stream",
@@ -531,7 +525,7 @@
"metadata": {
"id": "GUjCnveZk2sf"
},
- "execution_count": 17,
+ "execution_count": null,
"outputs": []
},
{
@@ -544,7 +538,7 @@
"metadata": {
"id": "RKr_CupeiMvD"
},
- "execution_count": 18,
+ "execution_count": null,
"outputs": []
},
{
@@ -561,7 +555,7 @@
"metadata": {
"id": "QRj9_Dk_iMxG"
},
- "execution_count": 19,
+ "execution_count": null,
"outputs": []
},
{
@@ -579,7 +573,7 @@
"metadata": {
"id": "zGFRscK6iMzQ"
},
- "execution_count": 20,
+ "execution_count": null,
"outputs": []
},
{
@@ -628,7 +622,7 @@
" if bbox_iou(tbox_tensor,pbox_tensor)>iou_th and tlabel==plabel:\n",
" nr_dets+=1\n",
" pred_df.drop([row_counter]) # drop matched bbox, so one prediction bbox \n",
- " #wont be counted as \"detected\" for two different objects\n",
+ " # wont be counted as \"detected\" for two different objects\n",
" continue\n",
" row_counter+=1\n",
" return nr_objs,nr_dets"
@@ -636,7 +630,7 @@
"metadata": {
"id": "LkQgO9T5OHzs"
},
- "execution_count": 21,
+ "execution_count": null,
"outputs": []
},
{
@@ -659,7 +653,7 @@
},
"outputId": "b78c1f85-ae98-481a-cd49-4e651676e7b4"
},
- "execution_count": 26,
+ "execution_count": null,
"outputs": [
{
"output_type": "stream",
@@ -682,7 +676,7 @@
"id": "jdfiVJvhLPMw",
"outputId": "096c1660-36a0-4c04-b39b-4f91010559aa"
},
- "execution_count": 27,
+ "execution_count": null,
"outputs": [
{
"output_type": "stream",
@@ -701,7 +695,7 @@
"collapsed_sections": [],
"provenance": []
},
- "gpuClass": "premium",
+ "gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
From 1e9df7ad962b0b0c2839374e2b4d34068d1f594a Mon Sep 17 00:00:00 2001
From: running-man-01 <112680312+running-man-01@users.noreply.github.com>
Date: Fri, 14 Oct 2022 21:45:48 -0400
Subject: [PATCH 05/20] Update Evaluation_Accuracy.ipynb
---
notebooks/Evaluation_Accuracy.ipynb | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/notebooks/Evaluation_Accuracy.ipynb b/notebooks/Evaluation_Accuracy.ipynb
index 1df509c..1496525 100644
--- a/notebooks/Evaluation_Accuracy.ipynb
+++ b/notebooks/Evaluation_Accuracy.ipynb
@@ -3,12 +3,14 @@
{
"cell_type": "markdown",
"source": [
- "# Build evaluation method\n",
+ "# This notebook: Build evaluation method\n",
"* Aim at >.90 accuracy\n",
"\n",
 "Currently it is tested with YOLOv5 prediction results.\n",
"\n",
- "It should be compatible with all torch prediction outputs in the form of .pandas().xywh (xcenter, ycenter, width, height).\n"
+ "It should be compatible with all torch prediction outputs in the form of .pandas().xywh (xcenter, ycenter, width, height).\n",
+ "\n",
+ "Using Google Colab to view this notebook is highly recommended.\n"
],
"metadata": {
"id": "qN8V989HzHDT"
@@ -59,7 +61,7 @@
{
"cell_type": "markdown",
"source": [
- "# 0. Prep, install yolov5, download and partition datasets"
+ "# 0. Prep work: install yolov5, download and partition datasets"
],
"metadata": {
"id": "_9jgRJYlL_-H"
@@ -116,8 +118,11 @@
{
"cell_type": "code",
"source": [
- "mount_drive = None\n",
+ "mount_drive = False\n",
"if mount_drive:\n",
+ "    # Downloading a Google Drive file with gdown too often triggers Google's rate limiting and makes the file temporarily un-downloadable.\n",
+ "    # In that case, open 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom and 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA in Drive, manually\n",
+ "    # make a copy of each to your own drive, then mount your drive in the Colab instance and work with the copies freely.\n",
" from google.colab import drive\n",
" drive.mount('/content/drive')\n",
" %cp ./drive/MyDrive/rotated2.zip ./\n",
From d057a03ee20acaddf70a7485e8d90dc91ab6e43b Mon Sep 17 00:00:00 2001
From: running-man-01 <112680312+running-man-01@users.noreply.github.com>
Date: Sat, 15 Oct 2022 00:01:12 -0400
Subject: [PATCH 06/20] Update Evaluation_Accuracy.ipynb
---
notebooks/Evaluation_Accuracy.ipynb | 125 +++++++++++++++-------------
1 file changed, 66 insertions(+), 59 deletions(-)
diff --git a/notebooks/Evaluation_Accuracy.ipynb b/notebooks/Evaluation_Accuracy.ipynb
index 1496525..e91d5d2 100644
--- a/notebooks/Evaluation_Accuracy.ipynb
+++ b/notebooks/Evaluation_Accuracy.ipynb
@@ -18,20 +18,20 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "0D7J191IYwp8",
- "outputId": "015e9ea9-6b3f-4774-e798-07866d3a07e0"
+ "outputId": "5a21fd71-1ca5-472d-8127-7fe8a40aa8fe"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Fri Oct 14 16:04:14 2022 \n",
+ "Sat Oct 15 03:02:11 2022 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
"|-------------------------------+----------------------+----------------------+\n",
@@ -39,9 +39,9 @@
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|===============================+======================+======================|\n",
- "| 0 A100-SXM4-40GB Off | 00000000:00:04.0 Off | 0 |\n",
- "| N/A 32C P0 45W / 400W | 0MiB / 40536MiB | 0% Default |\n",
- "| | | Disabled |\n",
+ "| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n",
+ "| N/A 71C P8 12W / 70W | 0MiB / 15109MiB | 0% Default |\n",
+ "| | | N/A |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
@@ -69,7 +69,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 2,
"metadata": {
"id": "2TTOdqkAk_Ad"
},
@@ -84,7 +84,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 3,
"metadata": {
"id": "FBHHieZsCbGS"
},
@@ -118,33 +118,36 @@
{
"cell_type": "code",
"source": [
- "mount_drive = False\n",
- "if mount_drive:\n",
+ "mount_drive = True\n",
+ "if not mount_drive:\n",
 "    # Downloading a Google Drive file with gdown too often triggers Google's rate limiting and makes the file temporarily un-downloadable.\n",
 "    # In that case, open 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom and 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA in Drive, manually\n",
 "    # make a copy of each to your own drive, then mount your drive in the Colab instance and work with the copies freely.\n",
+ " !gdown 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom # download best trained yolov5x6 weights\n",
+ " !gdown 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA # download organized TACO images (TACO itself, 1500 images, without unofficial images)\n",
+ "\n",
+ "if mount_drive:\n",
" from google.colab import drive\n",
" drive.mount('/content/drive')\n",
- " %cp ./drive/MyDrive/rotated2.zip ./\n",
+ " %cp /content/drive/MyDrive/rotated2.zip ./\n",
" %cp /content/drive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt ./\n",
- " else:\n",
- " !gdown 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom # download best trained yolov5x6 weights\n",
- " !gdown 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA # download organized TACO images (TACO itself, 1500 images, without unofficial images)"
+ "\n",
+ " \n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "M0nSxY0wdrr2",
- "outputId": "9d0e06a0-2937-4d01-8a08-54a4361091e1"
+ "outputId": "92afe650-fe40-46c3-86ad-b2ab4306927b"
},
- "execution_count": null,
+ "execution_count": 11,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
+ "Mounted at /content/drive\n"
]
}
]
@@ -158,7 +161,7 @@
"metadata": {
"id": "Pfvddv2pS_ZX"
},
- "execution_count": null,
+ "execution_count": 12,
"outputs": []
},
{
@@ -171,18 +174,18 @@
"metadata": {
"id": "nbwKD15fKpfa"
},
- "execution_count": null,
+ "execution_count": 13,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 14,
"metadata": {
"id": "CPFCzX31IGBq",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "8635bf93-000c-46ac-e30d-3fdd0688584a"
+ "outputId": "10d788e0-77a8-454e-8f27-1fcab36437fd"
},
"outputs": [
{
@@ -228,7 +231,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 15,
"metadata": {
"id": "kwCWClsrSD4z"
},
@@ -256,7 +259,7 @@
"metadata": {
"id": "gHyjxWAW7DF0"
},
- "execution_count": null,
+ "execution_count": 16,
"outputs": []
},
{
@@ -265,12 +268,12 @@
"metadata": {
"id": "S96FWSElHY9d"
},
- "execution_count": null,
+ "execution_count": 16,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 17,
"metadata": {
"id": "evH7jgs5Dnj7",
"cellView": "form"
@@ -352,13 +355,13 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 18,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "CdMrwEgnB9C4",
- "outputId": "1eded40f-2c94-4a74-f0b1-74394d2c2623"
+ "outputId": "33d7064a-64e2-466a-951f-ce5dd4151476"
},
"outputs": [
{
@@ -389,9 +392,9 @@
"height": 35
},
"id": "XslsKqRuHpmf",
- "outputId": "cc3a38c5-8c0b-4760-e41a-284eb8d9d8f0"
+ "outputId": "e38f5538-1dfa-43c1-bce3-2bbc51aa7cd3"
},
- "execution_count": null,
+ "execution_count": 19,
"outputs": [
{
"output_type": "execute_result",
@@ -404,7 +407,7 @@
}
},
"metadata": {},
- "execution_count": 13
+ "execution_count": 19
}
]
},
@@ -414,7 +417,7 @@
"metadata": {
"id": "67Xa-feZH9q5"
},
- "execution_count": null,
+ "execution_count": 19,
"outputs": []
},
{
@@ -437,35 +440,37 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 22,
"metadata": {
"id": "DoLh0BGlXQMC",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "e92a300c-f497-48ef-a66a-1380417d1645"
+ "outputId": "779a6a40-bcf3-40d5-9ad0-e1c6dbe6b463"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "\u001b[34m\u001b[1mval: \u001b[0mdata=/content/yolov5/data/yoloTACO.yaml, weights=['./yolov5x6_best_weights.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=test, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val, name=exp, exist_ok=False, half=False, dnn=False\n",
- "YOLOv5 🚀 v6.2-194-g2a19d07 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (A100-SXM4-40GB, 40536MiB)\n",
+ "\u001b[34m\u001b[1mval: \u001b[0mdata=/content/yolov5/data/yoloTACO.yaml, weights=['/content/yolov5x6_best_weights.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=test, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val, name=exp, exist_ok=False, half=False, dnn=False\n",
+ "YOLOv5 🚀 v6.2-195-gdf80e7c Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
"\n",
"Fusing layers... \n",
"Model summary: 416 layers, 140537980 parameters, 0 gradients, 209.1 GFLOPs\n",
- "\u001b[34m\u001b[1mtest: \u001b[0mScanning '/content/datasets/yoloTACO/labels/test' images and labels...100 found, 0 missing, 0 empty, 0 corrupt: 100% 100/100 [00:00<00:00, 2635.51it/s]\n",
+ "Downloading https://ultralytics.com/assets/Arial.ttf to /root/.config/Ultralytics/Arial.ttf...\n",
+ "100% 755k/755k [00:00<00:00, 104MB/s]\n",
+ "\u001b[34m\u001b[1mtest: \u001b[0mScanning '/content/datasets/yoloTACO/labels/test' images and labels...100 found, 0 missing, 0 empty, 0 corrupt: 100% 100/100 [00:00<00:00, 256.12it/s]\n",
"\u001b[34m\u001b[1mtest: \u001b[0mNew cache created: /content/datasets/yoloTACO/labels/test.cache\n",
- " Class Images Instances P R mAP50 mAP50-95: 100% 4/4 [00:08<00:00, 2.16s/it]\n",
+ " Class Images Instances P R mAP50 mAP50-95: 100% 4/4 [00:20<00:00, 5.15s/it]\n",
" all 100 300 0.0464 0.628 0.11 0.102\n",
- "Speed: 0.2ms pre-process, 5.4ms inference, 2.4ms NMS per image at shape (32, 3, 640, 640)\n",
- "Results saved to \u001b[1mruns/val/exp\u001b[0m\n"
+ "Speed: 3.0ms pre-process, 47.7ms inference, 3.0ms NMS per image at shape (32, 3, 640, 640)\n",
+ "Results saved to \u001b[1mruns/val/exp2\u001b[0m\n"
]
}
],
"source": [
- "!python val.py --data yoloTACO.yaml --task test --weights ./yolov5x6_best_weights.pt\n",
+ "!python val.py --data yoloTACO.yaml --task test --weights /content/yolov5x6_best_weights.pt\n",
"#!python detect.py --weights ./yolov5x6_best_weights.pt --source /content/datasets/yoloTACO/images/test"
]
},
@@ -492,23 +497,23 @@
{
"cell_type": "code",
"source": [
- "model = torch.hub.load('ultralytics/yolov5', 'custom', path='./yolov5x6_best_weights.pt') # load our local model"
+ "model = torch.hub.load('ultralytics/yolov5', 'custom', path='/content/yolov5x6_best_weights.pt') # load our local model"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "30QWyk7yiMsk",
- "outputId": "b9c644c2-3766-4039-b032-92f3c2aa9b64"
+ "outputId": "386298ab-c1ab-46bb-9067-db9a2acdfc7e"
},
- "execution_count": null,
+ "execution_count": 23,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master\n",
- "YOLOv5 🚀 2022-10-14 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (A100-SXM4-40GB, 40536MiB)\n",
+ "YOLOv5 🚀 2022-10-15 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
"\n",
"Fusing layers... \n",
"Model summary: 416 layers, 140537980 parameters, 0 gradients, 209.1 GFLOPs\n",
@@ -530,7 +535,7 @@
"metadata": {
"id": "GUjCnveZk2sf"
},
- "execution_count": null,
+ "execution_count": 24,
"outputs": []
},
{
@@ -543,7 +548,7 @@
"metadata": {
"id": "RKr_CupeiMvD"
},
- "execution_count": null,
+ "execution_count": 25,
"outputs": []
},
{
@@ -560,7 +565,7 @@
"metadata": {
"id": "QRj9_Dk_iMxG"
},
- "execution_count": null,
+ "execution_count": 26,
"outputs": []
},
{
@@ -578,7 +583,7 @@
"metadata": {
"id": "zGFRscK6iMzQ"
},
- "execution_count": null,
+ "execution_count": 27,
"outputs": []
},
{
@@ -614,6 +619,7 @@
" return inter / union # return IoU\n",
" \n",
"def each_pic(pred_df,truth_df,iou_th):\n",
+ " pred_df_ = pred_df.assign(matched=[0]*pred_df.shape[0])\n",
" nr_objs = truth_df.shape[0]\n",
" nr_dets = 0\n",
" for i in truth_df.iterrows():\n",
@@ -621,13 +627,14 @@
" tlabel = i[1].tolist()[5]\n",
" \n",
" row_counter=0\n",
- " for j in pred_df.iterrows():\n",
+ " for j in pred_df_.iterrows():\n",
" pbox_tensor = torch.tensor([j[1].tolist()[:4]])\n",
" plabel = j[1].tolist()[5]\n",
- " if bbox_iou(tbox_tensor,pbox_tensor)>iou_th and tlabel==plabel:\n",
+ " matched = j[1].tolist()[-1]\n",
+ " if bbox_iou(tbox_tensor,pbox_tensor)>iou_th and tlabel==plabel and matched==0:\n",
" nr_dets+=1\n",
- " pred_df.drop([row_counter]) # drop matched bbox, so one prediction bbox \n",
- " # wont be counted as \"detected\" for two different objects\n",
+ " pred_df_.iat[row_counter,-1]=1 # mark matched bbox, so one prediction bbox \n",
+ " # wont be counted as \"detected\" for two different objects\n",
" continue\n",
" row_counter+=1\n",
" return nr_objs,nr_dets"
@@ -635,7 +642,7 @@
"metadata": {
"id": "LkQgO9T5OHzs"
},
- "execution_count": null,
+ "execution_count": 70,
"outputs": []
},
{
@@ -656,15 +663,15 @@
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "b78c1f85-ae98-481a-cd49-4e651676e7b4"
+ "outputId": "ea64c873-8b68-4653-fed1-45719c68db40"
},
- "execution_count": null,
+ "execution_count": 71,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
- "100%|██████████| 100/100 [00:00<00:00, 190.89it/s]\n"
+ "100%|██████████| 100/100 [00:00<00:00, 186.24it/s]\n"
]
}
]
@@ -679,15 +686,15 @@
"base_uri": "https://localhost:8080/"
},
"id": "jdfiVJvhLPMw",
- "outputId": "096c1660-36a0-4c04-b39b-4f91010559aa"
+ "outputId": "6da632d6-c3b1-4772-b917-46d534735ae3"
},
- "execution_count": null,
+ "execution_count": 72,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Our trained model has an accuracy of: 0.6\n"
+ "Our trained model has an accuracy of: 0.856667\n"
]
}
]
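The `each_pic` hunk above fixes a real bug: `pred_df.drop([row_counter])` returns a new DataFrame and was never assigned back, so a single predicted box could be credited to several ground-truth objects; the patch replaces it with a `matched` flag. The same greedy matching logic can be sketched in plain Python (function and variable names here are illustrative, not the notebook's exact code):

```python
def bbox_iou_xywh(a, b):
    """IoU of two boxes given as (x_center, y_center, width, height)."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0


def count_detections(preds, truths, iou_th=0.5):
    """Greedy matching: each prediction may be credited to at most one
    ground-truth object, mirroring the 'matched' flag added in the patch.

    preds/truths are lists of (xc, yc, w, h, label) tuples.
    """
    matched = [False] * len(preds)
    dets = 0
    for t in truths:
        for k, p in enumerate(preds):
            if matched[k] or p[4] != t[4]:
                continue  # prediction already used, or class label differs
            if bbox_iou_xywh(p[:4], t[:4]) > iou_th:
                matched[k] = True
                dets += 1
                break  # this object counts as detected; move to the next one
    return len(truths), dets
```

With one predicted box and two coincident ground-truth objects, this reports a single detection, which is exactly the behaviour the `matched` flag guarantees.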
From eac7bdd7319dd301bcbba0891cb35113d1c570b8 Mon Sep 17 00:00:00 2001
From: running-man-01 <112680312+running-man-01@users.noreply.github.com>
Date: Sat, 15 Oct 2022 02:53:50 -0400
Subject: [PATCH 07/20] Update Evaluation_Accuracy.ipynb
---
notebooks/Evaluation_Accuracy.ipynb | 144 +++++++++++++++++-----------
1 file changed, 88 insertions(+), 56 deletions(-)
diff --git a/notebooks/Evaluation_Accuracy.ipynb b/notebooks/Evaluation_Accuracy.ipynb
index e91d5d2..ceb8add 100644
--- a/notebooks/Evaluation_Accuracy.ipynb
+++ b/notebooks/Evaluation_Accuracy.ipynb
@@ -10,12 +10,34 @@
"\n",
"It is compatible for all torch prediction outputs in the form of .pandas().xywh (xcenter, ycenter, width, height).\n",
"\n",
- "Using Google Colab to view this notebook is highly recommended.\n"
+ "Using Google Colab to view this notebook is highly recommended.\n",
+ "\n"
],
"metadata": {
"id": "qN8V989HzHDT"
}
},
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Questions to be considered:\n",
+ "* Do we want the big TACO? i.e. the **unofficial TACO** that contains 5,000+ images. \n",
+ "\n",
+ "* Do we want to **reduce target classes**? There are 60 categories and 27 super-categories. Predicting super-categories may increase accuracy.\n",
+ "\n",
+ "* **Train/Test split**. Currently I do a fully random 1300/100/100 split for train/val/test. This is obviously not the most common choice as usualy we do something like 70/30 or 80/20 for train/test split. Also, the current split is not stratified -- classes(categories)'s distribution in training and testing set will be different which might be a problem! Input are greatly welcomed!"
+ ],
+ "metadata": {
+ "id": "mS4ry1DRFSes"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [],
+ "metadata": {
+ "id": "oiQtItjoD-q7"
+ }
+ },
{
"cell_type": "code",
"execution_count": 1,
@@ -24,14 +46,14 @@
"base_uri": "https://localhost:8080/"
},
"id": "0D7J191IYwp8",
- "outputId": "5a21fd71-1ca5-472d-8127-7fe8a40aa8fe"
+ "outputId": "5dfc99a2-b2df-4b97-d298-54cfa0975c87"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Sat Oct 15 03:02:11 2022 \n",
+ "Sat Oct 15 06:28:19 2022 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
"|-------------------------------+----------------------+----------------------+\n",
@@ -40,7 +62,7 @@
"| | | MIG M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n",
- "| N/A 71C P8 12W / 70W | 0MiB / 15109MiB | 0% Default |\n",
+ "| N/A 38C P8 12W / 70W | 0MiB / 15109MiB | 0% Default |\n",
"| | | N/A |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
@@ -76,6 +98,7 @@
"outputs": [],
"source": [
"%%capture\n",
+ "%rm -rf /content/*\n",
"!git clone https://github.com/ultralytics/yolov5 \n",
"%cd yolov5\n",
"!pip install -r requirements.txt #wandb\n",
@@ -109,7 +132,6 @@
"import os\n",
"import re\n",
"\n",
- "\n",
"import torch\n",
"from torch.utils.data import Dataset, DataLoader\n",
"from torchvision import transforms, utils"
@@ -118,7 +140,7 @@
{
"cell_type": "code",
"source": [
- "mount_drive = True\n",
+ "mount_drive = False\n",
"if not mount_drive:\n",
" # gdown a gdrive file too frequently triggers google's control and makes the file un-gdown-able\n",
" # in this case, go to 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom and 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA, manually\n",
@@ -128,9 +150,9 @@
"\n",
"if mount_drive:\n",
" from google.colab import drive\n",
- " drive.mount('/content/drive')\n",
- " %cp /content/drive/MyDrive/rotated2.zip ./\n",
- " %cp /content/drive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt ./\n",
+ " drive.mount('/gdrive')\n",
+ " %cp /gdrive/MyDrive/rotated2.zip /content/\n",
+ " %cp /gdrive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt /content/\n",
"\n",
" \n"
],
@@ -139,15 +161,22 @@
"base_uri": "https://localhost:8080/"
},
"id": "M0nSxY0wdrr2",
- "outputId": "92afe650-fe40-46c3-86ad-b2ab4306927b"
+ "outputId": "67349666-e448-41b6-ee2a-0eb924b23bd7"
},
- "execution_count": 11,
+ "execution_count": 4,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Mounted at /content/drive\n"
+ "Downloading...\n",
+ "From: https://drive.google.com/uc?id=1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom\n",
+ "To: /content/yolov5x6_best_weights.pt\n",
+ "100% 282M/282M [00:01<00:00, 173MB/s]\n",
+ "Downloading...\n",
+ "From: https://drive.google.com/uc?id=1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA\n",
+ "To: /content/rotated2.zip\n",
+ "100% 2.61G/2.61G [00:21<00:00, 124MB/s]\n"
]
}
]
@@ -161,7 +190,7 @@
"metadata": {
"id": "Pfvddv2pS_ZX"
},
- "execution_count": 12,
+ "execution_count": 5,
"outputs": []
},
{
@@ -174,18 +203,18 @@
"metadata": {
"id": "nbwKD15fKpfa"
},
- "execution_count": 13,
+ "execution_count": 6,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 7,
"metadata": {
"id": "CPFCzX31IGBq",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "10d788e0-77a8-454e-8f27-1fcab36437fd"
+ "outputId": "a98e7c7d-e6ea-49ab-ccd0-53f7f984be4f"
},
"outputs": [
{
@@ -231,7 +260,7 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": 8,
"metadata": {
"id": "kwCWClsrSD4z"
},
@@ -259,7 +288,7 @@
"metadata": {
"id": "gHyjxWAW7DF0"
},
- "execution_count": 16,
+ "execution_count": 9,
"outputs": []
},
{
@@ -268,15 +297,14 @@
"metadata": {
"id": "S96FWSElHY9d"
},
- "execution_count": 16,
+ "execution_count": 9,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 10,
"metadata": {
- "id": "evH7jgs5Dnj7",
- "cellView": "form"
+ "id": "evH7jgs5Dnj7"
},
"outputs": [],
"source": [
@@ -355,13 +383,13 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "CdMrwEgnB9C4",
- "outputId": "33d7064a-64e2-466a-951f-ce5dd4151476"
+ "outputId": "0ee6474a-3811-466b-871a-2c9f8329cee4"
},
"outputs": [
{
@@ -389,12 +417,12 @@
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
- "height": 35
+ "height": 36
},
"id": "XslsKqRuHpmf",
- "outputId": "e38f5538-1dfa-43c1-bce3-2bbc51aa7cd3"
+ "outputId": "067d39e4-f43a-47a4-8bd0-dff2fe479821"
},
- "execution_count": 19,
+ "execution_count": 12,
"outputs": [
{
"output_type": "execute_result",
@@ -407,7 +435,7 @@
}
},
"metadata": {},
- "execution_count": 19
+ "execution_count": 12
}
]
},
@@ -417,7 +445,7 @@
"metadata": {
"id": "67Xa-feZH9q5"
},
- "execution_count": 19,
+ "execution_count": 12,
"outputs": []
},
{
@@ -440,13 +468,13 @@
},
{
"cell_type": "code",
- "execution_count": 22,
+ "execution_count": 13,
"metadata": {
"id": "DoLh0BGlXQMC",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "779a6a40-bcf3-40d5-9ad0-e1c6dbe6b463"
+ "outputId": "6e99118b-dcb8-466c-d135-ba708edb7b64"
},
"outputs": [
{
@@ -459,13 +487,13 @@
"Fusing layers... \n",
"Model summary: 416 layers, 140537980 parameters, 0 gradients, 209.1 GFLOPs\n",
"Downloading https://ultralytics.com/assets/Arial.ttf to /root/.config/Ultralytics/Arial.ttf...\n",
- "100% 755k/755k [00:00<00:00, 104MB/s]\n",
- "\u001b[34m\u001b[1mtest: \u001b[0mScanning '/content/datasets/yoloTACO/labels/test' images and labels...100 found, 0 missing, 0 empty, 0 corrupt: 100% 100/100 [00:00<00:00, 256.12it/s]\n",
+ "100% 755k/755k [00:00<00:00, 5.82MB/s]\n",
+ "\u001b[34m\u001b[1mtest: \u001b[0mScanning '/content/datasets/yoloTACO/labels/test' images and labels...100 found, 0 missing, 0 empty, 0 corrupt: 100% 100/100 [00:00<00:00, 741.61it/s]\n",
"\u001b[34m\u001b[1mtest: \u001b[0mNew cache created: /content/datasets/yoloTACO/labels/test.cache\n",
- " Class Images Instances P R mAP50 mAP50-95: 100% 4/4 [00:20<00:00, 5.15s/it]\n",
+ " Class Images Instances P R mAP50 mAP50-95: 100% 4/4 [00:21<00:00, 5.31s/it]\n",
" all 100 300 0.0464 0.628 0.11 0.102\n",
- "Speed: 3.0ms pre-process, 47.7ms inference, 3.0ms NMS per image at shape (32, 3, 640, 640)\n",
- "Results saved to \u001b[1mruns/val/exp2\u001b[0m\n"
+ "Speed: 3.5ms pre-process, 45.7ms inference, 4.0ms NMS per image at shape (32, 3, 640, 640)\n",
+ "Results saved to \u001b[1mruns/val/exp\u001b[0m\n"
]
}
],
@@ -504,15 +532,17 @@
"base_uri": "https://localhost:8080/"
},
"id": "30QWyk7yiMsk",
- "outputId": "386298ab-c1ab-46bb-9067-db9a2acdfc7e"
+ "outputId": "0e41a752-9d2d-4285-c130-b28d385eaa35"
},
- "execution_count": 23,
+ "execution_count": 14,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
- "Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master\n",
+ "/usr/local/lib/python3.7/dist-packages/torch/hub.py:267: UserWarning: You are about to download and run code from an untrusted repository. In a future release, this won't be allowed. To add the repository to your trusted list, change the command to {calling_fn}(..., trust_repo=False) and a command prompt will appear asking for an explicit confirmation of trust, or load(..., trust_repo=True), which will assume that the prompt is to be answered with 'yes'. You can also use load(..., trust_repo='check') which will only prompt for confirmation if the repo is not already trusted. This will eventually be the default behaviour\n",
+ " \"You are about to download and run code from an untrusted repository. In a future release, this won't \"\n",
+ "Downloading: \"https://github.com/ultralytics/yolov5/zipball/master\" to /root/.cache/torch/hub/master.zip\n",
"YOLOv5 🚀 2022-10-15 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
"\n",
"Fusing layers... \n",
@@ -527,15 +557,13 @@
"source": [
"# Load test imgs\n",
"test_dir = '/content/datasets/yoloTACO/images/test/'\n",
- "test_list = [i[2] for i in os.walk(test_dir)][0]\n",
- "test_list = [re.findall(r'\\d+',i)[0] for i in test_list]\n",
- "test_read_img_list = [Image.open(test_dir+str(i)+'.jpg') for i in test_list]\n",
- "# alternatively use cv2: cv2.imread('target_path')[..., ::-1] # OpenCV image (BGR to RGB)\n"
+ "test_list = test_ids # [i[2] for i in os.walk(test_dir)][0] # or alternatively read from files # test_list = [re.findall(r'\\d+',i)[0] for i in test_list]\n",
+ "test_read_img_list = [Image.open(test_dir+str(i)+'.jpg') for i in test_list] # alternatively use cv2: cv2.imread('target_path')[..., ::-1] # OpenCV image (BGR to RGB)"
],
"metadata": {
"id": "GUjCnveZk2sf"
},
- "execution_count": 24,
+ "execution_count": 15,
"outputs": []
},
{
@@ -548,7 +576,7 @@
"metadata": {
"id": "RKr_CupeiMvD"
},
- "execution_count": 25,
+ "execution_count": 16,
"outputs": []
},
{
@@ -565,13 +593,16 @@
"metadata": {
"id": "QRj9_Dk_iMxG"
},
- "execution_count": 26,
+ "execution_count": 17,
"outputs": []
},
{
"cell_type": "code",
"source": [
"truth = [annos.loadAnns(annos.getAnnIds(int(i))) for i in test_list]\n",
+ "# TODO: here we still reads official annotations from the official .json file. \n",
+ "# However, we've already modified the annotations (i.e. dropped bbox that's only a few pixels)\n",
+ "# Should read annotations directly from local .txt files in yoloTACO/labels/test/*\n",
"truth_pd = []\n",
"for i in truth:\n",
" cache = [j['bbox']+[1]+[j['category_id']]+[no_to_clname[j['category_id']]] for j in i]\n",
@@ -583,7 +614,7 @@
"metadata": {
"id": "zGFRscK6iMzQ"
},
- "execution_count": 27,
+ "execution_count": 18,
"outputs": []
},
{
@@ -642,13 +673,13 @@
"metadata": {
"id": "LkQgO9T5OHzs"
},
- "execution_count": 70,
+ "execution_count": 19,
"outputs": []
},
{
"cell_type": "code",
"source": [
- "def acc(pred,truth,iou_th=0.7):\n",
+ "def acc(pred,truth,iou_th=0.5):\n",
" objs,dets=0,0\n",
" for i in tqdm(range(len(truth))):\n",
" o,d=each_pic(pred_pd[i],truth_pd[i],iou_th)\n",
@@ -663,15 +694,15 @@
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "ea64c873-8b68-4653-fed1-45719c68db40"
+ "outputId": "0e05a9dc-2a55-4228-9d7a-552f3fdee3e3"
},
- "execution_count": 71,
+ "execution_count": 20,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
- "100%|██████████| 100/100 [00:00<00:00, 186.24it/s]\n"
+ "100%|██████████| 100/100 [00:00<00:00, 165.45it/s]\n"
]
}
]
@@ -686,15 +717,15 @@
"base_uri": "https://localhost:8080/"
},
"id": "jdfiVJvhLPMw",
- "outputId": "6da632d6-c3b1-4772-b917-46d534735ae3"
+ "outputId": "e5bf1e11-1cee-46e1-fcc1-39e50fe8f80e"
},
- "execution_count": 72,
+ "execution_count": 21,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Our trained model has an accuracy of: 0.856667\n"
+ "Our trained model has an accuracy of: 0.883333\n"
]
}
]
@@ -705,7 +736,8 @@
"colab": {
"background_execution": "on",
"collapsed_sections": [],
- "provenance": []
+ "provenance": [],
+ "toc_visible": true
},
"gpuClass": "standard",
"kernelspec": {
From 81699b34b3ac2f69be457a8a35ac17506a637117 Mon Sep 17 00:00:00 2001
From: running-man-01 <112680312+running-man-01@users.noreply.github.com>
Date: Sun, 16 Oct 2022 05:48:29 -0400
Subject: [PATCH 08/20] Update Evaluation_Accuracy.ipynb
---
notebooks/Evaluation_Accuracy.ipynb | 773 +++++++++++++++++++++++-----
1 file changed, 637 insertions(+), 136 deletions(-)
diff --git a/notebooks/Evaluation_Accuracy.ipynb b/notebooks/Evaluation_Accuracy.ipynb
index ceb8add..4d2397a 100644
--- a/notebooks/Evaluation_Accuracy.ipynb
+++ b/notebooks/Evaluation_Accuracy.ipynb
@@ -6,11 +6,11 @@
"# This notebook: Build evaluation method\n",
"* Aim at >.90 accuracy\n",
"\n",
- "currently it is tested with yolov5 prediction results.\n",
+ "Currently it is tested with yolov5 prediction results. But it is compatible for all prediction outputs as long as they are in the form of .pandas().xywh. (see section `1.3` for examples)\n",
"\n",
- "It is compatible for all torch prediction outputs in the form of .pandas().xywh (xcenter, ycenter, width, height).\n",
+ "Using `Google Colab` to view this notebook is highly recommended.\n",
"\n",
- "Using Google Colab to view this notebook is highly recommended.\n",
+ "**Note** that the weights `yolov5x6_best_weights.pt` used in this demo was trained on all images without train/test split. So the performance on the test set is exaggerated. It is only used to demonstrate how the `accuracy` metric works.\n",
"\n"
],
"metadata": {
@@ -20,40 +20,44 @@
{
"cell_type": "markdown",
"source": [
- "### Questions to be considered:\n",
- "* Do we want the big TACO? i.e. the **unofficial TACO** that contains 5,000+ images. \n",
+ "### Questions:\n",
+ "* Want the **big TACO**? i.e. the **unofficial TACO** that contains 5,000+ images. The label quality of the big TACO might be poor. I experimented with it and found a dozen errors in labels (annotations already).\n",
"\n",
- "* Do we want to **reduce target classes**? There are 60 categories and 27 super-categories. Predicting super-categories may increase accuracy.\n",
+ "* **Reduce target classes**? There are 60 categories and 28 super-categories. Currently we predict 60 classes, which is might be too many considering that we only have less than 1500 training images. Should we use the 28 super-categories as classes to be predicted? Or more radically, 5~10 classes of plastic, metal, glass, etc.\n",
"\n",
- "* **Train/Test split**. Currently I do a fully random 1300/100/100 split for train/val/test. This is obviously not the most common choice as usualy we do something like 70/30 or 80/20 for train/test split. Also, the current split is not stratified -- classes(categories)'s distribution in training and testing set will be different which might be a problem! Input are greatly welcomed!"
+ "* Better **Train/Test split**? Currently I do a fully random 1300/100/100 split for train/val/test. This is obviously not the most common choice. Also, the current split is not stratified -- classes(categories)'s distribution in training and testing set will be different which might be a problem! Input are greatly welcomed!"
],
"metadata": {
"id": "mS4ry1DRFSes"
}
},
{
- "cell_type": "markdown",
- "source": [],
+ "cell_type": "code",
+ "source": [
+ "mount_drive = True #mount only if you have weights and TACO images in your drive already"
+ ],
"metadata": {
- "id": "oiQtItjoD-q7"
- }
+ "id": "R_Dc3-3xaTwr"
+ },
+ "execution_count": 1,
+ "outputs": []
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": 2,
"metadata": {
+ "id": "0D7J191IYwp8",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "id": "0D7J191IYwp8",
- "outputId": "5dfc99a2-b2df-4b97-d298-54cfa0975c87"
+ "outputId": "2f99f7fa-06ad-4d2b-8e30-657f7c840302"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Sat Oct 15 06:28:19 2022 \n",
+ "Sun Oct 16 09:45:12 2022 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
"|-------------------------------+----------------------+----------------------+\n",
@@ -62,7 +66,7 @@
"| | | MIG M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n",
- "| N/A 38C P8 12W / 70W | 0MiB / 15109MiB | 0% Default |\n",
+ "| N/A 67C P8 13W / 70W | 0MiB / 15109MiB | 0% Default |\n",
"| | | N/A |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
@@ -91,14 +95,48 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "source": [
+ "%cd /content/"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "ucAYkRiCIxFW",
+ "outputId": "952b350d-2e7c-4871-9a68-ae874d4580f2"
+ },
+ "execution_count": 3,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "/content\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%%bash\n",
+ "find . \\! -name 'rotated2.zip' -delete"
+ ],
+ "metadata": {
+ "id": "cJDeErmxIuY8"
+ },
+ "execution_count": 4,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
"metadata": {
"id": "2TTOdqkAk_Ad"
},
"outputs": [],
"source": [
"%%capture\n",
- "%rm -rf /content/*\n",
"!git clone https://github.com/ultralytics/yolov5 \n",
"%cd yolov5\n",
"!pip install -r requirements.txt #wandb\n",
@@ -107,7 +145,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 6,
"metadata": {
"id": "FBHHieZsCbGS"
},
@@ -140,43 +178,37 @@
{
"cell_type": "code",
"source": [
- "mount_drive = False\n",
"if not mount_drive:\n",
" # gdown a gdrive file too frequently triggers google's control and makes the file un-gdown-able\n",
" # in this case, go to 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom and 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA, manually\n",
" # make a copy of them to your own drive and mount your drive to the colab instance, then you can manipulate freely\n",
- " !gdown 1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom # download best trained yolov5x6 weights\n",
+ " \n",
+ " !gdown 151cUWIawXdRkVPg5M-aFvlKD67_gENGh # download best trained yolov5x6 weights on original classes\n",
" !gdown 1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA # download organized TACO images (TACO itself, 1500 images, without unofficial images)\n",
"\n",
"if mount_drive:\n",
" from google.colab import drive\n",
" drive.mount('/gdrive')\n",
- " %cp /gdrive/MyDrive/rotated2.zip /content/\n",
- " %cp /gdrive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt /content/\n",
+ " %cp /gdrive/MyDrive/trash_ai_trained_weights/yolov5x6_best_weights.pt /content/yolov5x6_best_weights.pt #get trained weights\n",
+ " if not os.path.isfile('/content/rotated2.zip'):\n",
+ " %cp /gdrive/MyDrive/rotated2_og.zip /content/rotated2.zip #get images\n",
"\n",
" \n"
],
"metadata": {
+ "id": "M0nSxY0wdrr2",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "id": "M0nSxY0wdrr2",
- "outputId": "67349666-e448-41b6-ee2a-0eb924b23bd7"
+ "outputId": "2319d7c8-77b9-4721-97b7-003a1d06c146"
},
- "execution_count": 4,
+ "execution_count": 7,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
- "Downloading...\n",
- "From: https://drive.google.com/uc?id=1hq0KcSM31yrR4YlWqM_P29Y3YTuvuIom\n",
- "To: /content/yolov5x6_best_weights.pt\n",
- "100% 282M/282M [00:01<00:00, 173MB/s]\n",
- "Downloading...\n",
- "From: https://drive.google.com/uc?id=1X3O2v3GIPveq3ylWF6o1qHI5uzbN1vWA\n",
- "To: /content/rotated2.zip\n",
- "100% 2.61G/2.61G [00:21<00:00, 124MB/s]\n"
+ "Drive already mounted at /gdrive; to attempt to forcibly remount, call drive.mount(\"/gdrive\", force_remount=True).\n"
]
}
]
@@ -184,13 +216,13 @@
{
"cell_type": "code",
"source": [
- "!unzip -qq ./rotated2.zip \n",
- "%mv ./content/* ./"
+ "!unzip -qq /content/rotated2.zip \n",
+ "%mv /content/content/* /content/"
],
"metadata": {
"id": "Pfvddv2pS_ZX"
},
- "execution_count": 5,
+ "execution_count": 8,
"outputs": []
},
{
@@ -203,18 +235,18 @@
"metadata": {
"id": "nbwKD15fKpfa"
},
- "execution_count": 6,
+ "execution_count": 9,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 10,
"metadata": {
"id": "CPFCzX31IGBq",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "a98e7c7d-e6ea-49ab-ccd0-53f7f984be4f"
+ "outputId": "0f05a8f3-d98e-4c9c-f65e-c48151791d5f"
},
"outputs": [
{
@@ -239,7 +271,7 @@
"val: images/val\n",
"test: images/test\n",
"'''\n",
- "np.random.seed(5)\n",
+ "np.random.seed(4)\n",
"id_list=[i for i in range(nr_imgs)]\n",
"np.random.shuffle(id_list)\n",
"train_ids = id_list[:1300]\n",
@@ -260,7 +292,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 11,
"metadata": {
"id": "kwCWClsrSD4z"
},
@@ -288,30 +320,35 @@
"metadata": {
"id": "gHyjxWAW7DF0"
},
- "execution_count": 9,
+ "execution_count": 12,
"outputs": []
},
{
"cell_type": "code",
- "source": [],
+ "source": [
+ "reduced=False #True if using reduced classes (28 categories)"
+ ],
"metadata": {
"id": "S96FWSElHY9d"
},
- "execution_count": 9,
+ "execution_count": 13,
"outputs": []
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 14,
"metadata": {
- "id": "evH7jgs5Dnj7"
+ "id": "evH7jgs5Dnj7",
+ "cellView": "form"
},
"outputs": [],
"source": [
"#@title yml\n",
"\n",
- "with open('./yolov5/data/yoloTACO.yaml', mode='w') as fp:\n",
- " lines = '''path: ../datasets/yoloTACO # dataset root dir\n",
+ "if reduced == True:\n",
+ "\n",
+ " with open('/content/yolov5/data/yoloTACO.yaml', mode='w') as fp:\n",
+ " lines = '''path: ../datasets/yoloTACO # dataset root dir\n",
"train: images/train # train images \n",
"val: images/val # val images \n",
"test: images/test # test images (optional)\n",
@@ -320,6 +357,45 @@
"names:\n",
" 0: Aluminium foil\n",
" 1: Battery\n",
+ " 2: Blister pack\n",
+ " 3: Bottle\n",
+ " 4: Bottle cap\n",
+ " 5: Broken glass\n",
+ " 6: Can\n",
+ " 7: Carton\n",
+ " 8: Cup\n",
+ " 9: Food waste\n",
+ " 10: Glass jar\n",
+ " 11: Lid\n",
+ " 12: Other plastic\n",
+ " 13: Paper\n",
+ " 14: Paper bag\n",
+ " 15: Plastic bag & wrapper\n",
+ " 16: Plastic container\n",
+ " 17: Plastic glooves\n",
+ " 18: Plastic utensils\n",
+ " 19: Pop tab\n",
+ " 20: Rope & strings\n",
+ " 21: Scrap metal\n",
+ " 22: Shoe\n",
+ " 23: Squeezable tube\n",
+ " 24: Straw\n",
+ " 25: Styrofoam piece\n",
+ " 26: Unlabeled litter\n",
+ " 27: Cigarette'''\n",
+ " fp.writelines(lines)\n",
+ "\n",
+ "else: \n",
+ " with open('/content/yolov5/data/yoloTACO.yaml', mode='w') as fp:\n",
+ " lines = '''path: ../datasets/yoloTACO # dataset root dir\n",
+ "train: images/train # train images (relative to 'path') 128 images\n",
+ "val: images/val # val images (relative to 'path') 128 images\n",
+ "test: images/test # test images (optional)\n",
+ "\n",
+ "# Classes\n",
+ "names:\n",
+ " 0: Aluminium foil\n",
+ " 1: Battery\n",
" 2: Aluminium blister pack\n",
" 3: Carded blister pack\n",
" 4: Other plastic bottle\n",
@@ -378,18 +454,18 @@
" 57: Styrofoam piece\n",
" 58: Unlabeled litter\n",
" 59: Cigarette'''\n",
- " fp.writelines(lines)"
+ " fp.writelines(lines)"
]
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 15,
"metadata": {
+ "id": "CdMrwEgnB9C4",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "id": "CdMrwEgnB9C4",
- "outputId": "0ee6474a-3811-466b-871a-2c9f8329cee4"
+ "outputId": "3cf45867-9660-47d5-ae3d-f7e22e4394d6"
},
"outputs": [
{
@@ -415,14 +491,14 @@
"%pwd"
],
"metadata": {
+ "id": "XslsKqRuHpmf",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
- "id": "XslsKqRuHpmf",
- "outputId": "067d39e4-f43a-47a4-8bd0-dff2fe479821"
+ "outputId": "b2f257f8-2cc8-4822-9795-a255e9a2325e"
},
- "execution_count": 12,
+ "execution_count": 16,
"outputs": [
{
"output_type": "execute_result",
@@ -435,7 +511,7 @@
}
},
"metadata": {},
- "execution_count": 12
+ "execution_count": 16
}
]
},
@@ -445,7 +521,7 @@
"metadata": {
"id": "67Xa-feZH9q5"
},
- "execution_count": 12,
+ "execution_count": 16,
"outputs": []
},
{
@@ -460,7 +536,7 @@
{
"cell_type": "markdown",
"source": [
- "## detect and eval with yolo default scripts"
+ "## 1.1 detect and eval with yolo default scripts"
],
"metadata": {
"id": "fHmO_QRlN4bQ"
@@ -468,13 +544,13 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 17,
"metadata": {
"id": "DoLh0BGlXQMC",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "outputId": "6e99118b-dcb8-466c-d135-ba708edb7b64"
+ "outputId": "33886043-4bfe-43c0-fd85-4ea4de024985"
},
"outputs": [
{
@@ -486,20 +562,18 @@
"\n",
"Fusing layers... \n",
"Model summary: 416 layers, 140537980 parameters, 0 gradients, 209.1 GFLOPs\n",
- "Downloading https://ultralytics.com/assets/Arial.ttf to /root/.config/Ultralytics/Arial.ttf...\n",
- "100% 755k/755k [00:00<00:00, 5.82MB/s]\n",
- "\u001b[34m\u001b[1mtest: \u001b[0mScanning '/content/datasets/yoloTACO/labels/test' images and labels...100 found, 0 missing, 0 empty, 0 corrupt: 100% 100/100 [00:00<00:00, 741.61it/s]\n",
+ "\u001b[34m\u001b[1mtest: \u001b[0mScanning '/content/datasets/yoloTACO/labels/test' images and labels...100 found, 0 missing, 0 empty, 0 corrupt: 100% 100/100 [00:00<00:00, 280.16it/s]\n",
"\u001b[34m\u001b[1mtest: \u001b[0mNew cache created: /content/datasets/yoloTACO/labels/test.cache\n",
- " Class Images Instances P R mAP50 mAP50-95: 100% 4/4 [00:21<00:00, 5.31s/it]\n",
- " all 100 300 0.0464 0.628 0.11 0.102\n",
- "Speed: 3.5ms pre-process, 45.7ms inference, 4.0ms NMS per image at shape (32, 3, 640, 640)\n",
+ " Class Images Instances P R mAP50 mAP50-95: 100% 4/4 [00:23<00:00, 5.84s/it]\n",
+ " all 100 286 0.937 0.924 0.942 0.857\n",
+ "Speed: 0.2ms pre-process, 49.1ms inference, 4.0ms NMS per image at shape (32, 3, 640, 640)\n",
"Results saved to \u001b[1mruns/val/exp\u001b[0m\n"
]
}
],
"source": [
"!python val.py --data yoloTACO.yaml --task test --weights /content/yolov5x6_best_weights.pt\n",
- "#!python detect.py --weights ./yolov5x6_best_weights.pt --source /content/datasets/yoloTACO/images/test"
+ "#!python detect.py --weights /content/yolov5x6_best_weights.pt --source /content/datasets/yoloTACO/images/test"
]
},
{
@@ -514,7 +588,7 @@
{
"cell_type": "markdown",
"source": [
- "## detect with torch framework manually\n",
+ "## 1.2 detect with torch framework manually\n",
"\n",
"This is a necessary step to use our accuracy evaluator."
],
@@ -525,25 +599,23 @@
{
"cell_type": "code",
"source": [
- "model = torch.hub.load('ultralytics/yolov5', 'custom', path='/content/yolov5x6_best_weights.pt') # load our local model"
+ "model = torch.hub.load('ultralytics/yolov5', 'custom', path='/content/yolov5x6_best_weights.pt',force_reload=True) # load our local model"
],
"metadata": {
+ "id": "30QWyk7yiMsk",
"colab": {
"base_uri": "https://localhost:8080/"
},
- "id": "30QWyk7yiMsk",
- "outputId": "0e41a752-9d2d-4285-c130-b28d385eaa35"
+ "outputId": "af00d1c2-7621-4e24-c942-4a9dac803742"
},
- "execution_count": 14,
+ "execution_count": 18,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
- "/usr/local/lib/python3.7/dist-packages/torch/hub.py:267: UserWarning: You are about to download and run code from an untrusted repository. In a future release, this won't be allowed. To add the repository to your trusted list, change the command to {calling_fn}(..., trust_repo=False) and a command prompt will appear asking for an explicit confirmation of trust, or load(..., trust_repo=True), which will assume that the prompt is to be answered with 'yes'. You can also use load(..., trust_repo='check') which will only prompt for confirmation if the repo is not already trusted. This will eventually be the default behaviour\n",
- " \"You are about to download and run code from an untrusted repository. In a future release, this won't \"\n",
"Downloading: \"https://github.com/ultralytics/yolov5/zipball/master\" to /root/.cache/torch/hub/master.zip\n",
- "YOLOv5 🚀 2022-10-15 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
+ "YOLOv5 🚀 2022-10-16 Python-3.7.14 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
"\n",
"Fusing layers... \n",
"Model summary: 416 layers, 140537980 parameters, 0 gradients, 209.1 GFLOPs\n",
@@ -558,12 +630,13 @@
"# Load test imgs\n",
"test_dir = '/content/datasets/yoloTACO/images/test/'\n",
"test_list = test_ids # [i[2] for i in os.walk(test_dir)][0] # or alternatively read from files # test_list = [re.findall(r'\\d+',i)[0] for i in test_list]\n",
+ "\n",
"test_read_img_list = [Image.open(test_dir+str(i)+'.jpg') for i in test_list] # alternatively use cv2: cv2.imread('target_path')[..., ::-1] # OpenCV image (BGR to RGB)"
],
"metadata": {
"id": "GUjCnveZk2sf"
},
- "execution_count": 15,
+ "execution_count": 19,
"outputs": []
},
{
@@ -571,12 +644,44 @@
"source": [
"# Inference\n",
"results = model(test_read_img_list) # batch of images\n",
- "pred_pd = results.pandas().xywh "
+ "pred_pd = results.pandas().xywh\n",
+ "\n",
+ "# tag each per-image prediction dataframe with its image id\n",
+ "for j, df in enumerate(pred_pd):\n",
+ "    pred_pd[j] = df.assign(image_id=[test_list[j]] * df.shape[0])"
],
"metadata": {
"id": "RKr_CupeiMvD"
},
- "execution_count": 16,
+ "execution_count": 20,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# free GPU memory: drop references from the caller's namespace, then empty the CUDA cache\n",
+ "def free_memory(to_delete: list, debug=False):\n",
+ "    import gc\n",
+ "    import inspect\n",
+ "    # to_delete holds variable *names*; pop them from the calling frame\n",
+ "    calling_namespace = inspect.currentframe().f_back\n",
+ "    if debug:\n",
+ "        print('Before:', torch.cuda.memory_allocated())\n",
+ "\n",
+ "    for _var in to_delete:\n",
+ "        calling_namespace.f_locals.pop(_var, None)\n",
+ "    gc.collect()\n",
+ "    torch.cuda.empty_cache()\n",
+ "    if debug:\n",
+ "        print('After:', torch.cuda.memory_allocated())\n",
+ "\n",
+ "# the model is no longer needed once predictions are extracted\n",
+ "free_memory(['model'])"
+ ],
+ "metadata": {
+ "id": "HvwZN3vzHB5u"
+ },
+ "execution_count": 21,
"outputs": []
},
{
@@ -593,30 +698,417 @@
"metadata": {
"id": "QRj9_Dk_iMxG"
},
- "execution_count": 17,
+ "execution_count": 22,
"outputs": []
},
{
"cell_type": "code",
"source": [
- "truth = [annos.loadAnns(annos.getAnnIds(int(i))) for i in test_list]\n",
- "# TODO: here we still reads official annotations from the official .json file. \n",
- "# However, we've already modified the annotations (i.e. dropped bbox that's only a few pixels)\n",
- "# Should read annotations directly from local .txt files in yoloTACO/labels/test/*\n",
"truth_pd = []\n",
- "for i in truth:\n",
- " cache = [j['bbox']+[1]+[j['category_id']]+[no_to_clname[j['category_id']]] for j in i]\n",
- " df = pd.DataFrame(cache,columns = ['xcenter','ycenter','width','height','confidence','class','name'])\n",
- " df['xcenter'] = df['xcenter'] + df['width']/2\n",
- " df['ycenter'] = df['ycenter'] + df['height']/2\n",
- " truth_pd.append(df)"
+ "for i in test_list:\n",
+ " img_info = annos.loadImgs(i)[0] \n",
+ " img_height = img_info['height']\n",
+ " img_width = img_info['width']\n",
+ "\n",
+ " cache = pd.read_csv('/content/datasets/yoloTACO/labels/test/'+str(i)+'.txt',header=None,\n",
+ " names = ['class','xcenter','ycenter','width','height'],delimiter=' ')\n",
+ " cache[\"xcenter\"] = img_width * cache[\"xcenter\"]\n",
+ " cache[\"ycenter\"] = img_height * cache[\"ycenter\"]\n",
+ " cache[\"width\"] = img_width * cache[\"width\"]\n",
+ " cache[\"height\"] = img_height * cache[\"height\"]\n",
+ "\n",
+ " cache = cache.assign(confidence = [1]*cache.shape[0])\n",
+ " cache = cache.reindex(columns=['xcenter','ycenter','width','height','confidence','class'])\n",
+ " cache = cache.assign(image_id = [i]*cache.shape[0])\n",
+ "\n",
+ " # cache = cache.assign(img_width = [width]*cache.shape[0])\n",
+ " # cache = cache.assign(img_height = [height]*cache.shape[0])\n",
+ "\n",
+ " truth_pd.append(cache)"
],
"metadata": {
- "id": "zGFRscK6iMzQ"
+ "id": "o6dfoDEzdV-1"
},
- "execution_count": 18,
+ "execution_count": 23,
"outputs": []
},
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## 1.3 example prediction and truth"
+ ],
+ "metadata": {
+ "id": "QNrQCtxcBiw0"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "pred_pd[:2]  # predictions for the first two test images: a list of two dataframes"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "WuvboCQRLUvU",
+ "outputId": "7dd35b84-976a-4da2-e93e-d40d86dda068"
+ },
+ "execution_count": 24,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "[ xcenter ycenter width height confidence class \\\n",
+ " 0 1162.197754 2048.309082 678.386230 717.108032 0.919622 51 \n",
+ " 1 1339.011230 1566.067871 82.199829 108.453735 0.902251 59 \n",
+ " \n",
+ " name image_id \n",
+ " 0 Rope & strings 86 \n",
+ " 1 Cigarette 86 ,\n",
+ " xcenter ycenter width height confidence class \\\n",
+ " 0 1631.046875 499.599060 233.611206 263.736176 0.855568 51 \n",
+ " 1 1047.939941 665.791748 104.976990 81.199646 0.852488 58 \n",
+ " \n",
+ " name image_id \n",
+ " 0 Rope & strings 171 \n",
+ " 1 Unlabeled litter 171 ]"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 24
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "pred_pd[1]"
+ ],
+ "metadata": {
+ "id": "WFbRwGhdR0o5",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 112
+ },
+ "outputId": "1e9b2d77-4430-4478-ea4f-0e44e45bc29f"
+ },
+ "execution_count": 25,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ " xcenter ycenter width height confidence class \\\n",
+ "0 1631.046875 499.599060 233.611206 263.736176 0.855568 51 \n",
+ "1 1047.939941 665.791748 104.976990 81.199646 0.852488 58 \n",
+ "\n",
+ " name image_id \n",
+ "0 Rope & strings 171 \n",
+ "1 Unlabeled litter 171 "
+ ],
+ "text/html": [
+ "\n",
+ "