
junkSort.AI: an image recognition application for social good

Team and AI for Social Good logo

About the Contributors

Juliette Lavoie, Julie Tseng, and Isabella Nikolaidis are part of the McGill Office of Innovation's AI for Social Good Summer Lab. The program consists of a two-week crash course in machine learning followed by a two-week project development phase.

About the Project

Project Selection Process

Given the program's fundamental principle of social good, the idea for this project grew out of an interest in promoting and facilitating sustainable practices. A quick search revealed that a few other implementations of image recognition for garbage sorting existed, but none in the form of an app. The app approach was also chosen because key tutorials existed on using TensorFlow for image recognition in Android apps.

How was the app developed and how does it work?

  • Android application based on the Android ML Example from the Mindorks community
  • Uses the Google TensorFlow API in Java
  • Trains a new final layer for recycling/compost/trash classification on top of the pre-trained Inception V3 model
  • Images for the final layer were taken from ImageNet sets and contributed by Gary Thung and Mindy Yang, who also built a trash-sorting AI project called TrashNet
  • Recognized labels are sorted into a .txt list for each category
  • Modifications to the MainActivity, Classifier, and TensorFlowImageClassifier Java files change the text displayed on screen from the image label to the appropriate category
  • Categorisation is carried out only for objects recognized with >40% confidence; otherwise the object is placed in the trash category
  • Android Studio used to test and debug the app
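The label-to-category step described above can be sketched as follows. This is a minimal illustration, not the project's actual MainActivity/Classifier code: the class and method names are hypothetical, and the label sets are hard-coded here where the app reads them from .txt lists.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the category lookup layered on top of the image
// classifier's label output. Names are hypothetical; in the app the
// label sets come from per-category .txt lists.
public class JunkCategorizer {
    static final Set<String> RECYCLING =
            new HashSet<>(Arrays.asList("cardboard", "plastic bottle", "tin can"));
    static final Set<String> COMPOST =
            new HashSet<>(Arrays.asList("banana", "tea bag", "apple"));

    static String categorize(String label, float confidence) {
        // Below the 40% confidence cutoff, fall back to trash.
        if (confidence <= 0.40f) return "trash";
        if (RECYCLING.contains(label)) return "recycling";
        if (COMPOST.contains(label)) return "compost";
        // Recognized but not on any list: also trash.
        return "trash";
    }

    public static void main(String[] args) {
        System.out.println(categorize("cardboard", 0.85f)); // recycling
        System.out.println(categorize("banana", 0.72f));    // compost
        System.out.println(categorize("cardboard", 0.30f)); // trash (low confidence)
    }
}
```

The displayed text is then the returned category rather than the raw image label.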

Challenges encountered in the design process

  • Time constraints and varying experience with Android development and image recognition limited the scope of achievable projects
  • Efficiency of the image classifier
    • The training time and computational power required made it difficult to train the final layer on a large set
    • ImageNet sets sometimes required cleaning (e.g. the tea bag set contained many pictures with cups, so the trained model would classify cups as compostable)
  • Efficiency of the categorisation methodology
    • Simplistic and inelegant approach: after an item is recognized, the app searches a .txt list for each category (a finite, non-robust state space)
    • Objects whose highest confidence level is <40% are automatically classified as trash
    • This approach makes it hard to scale to more objects
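To make the finiteness of the .txt-list approach concrete, here is a hypothetical loader for one such list (one label per line); only labels literally present in the file can ever be categorized. The class and file names are illustrative, and the real app bundles its lists differently.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical loader for a per-category .txt label list, one label per
// line. Any label not in the file falls outside the category's state space.
public class LabelListLoader {
    static Set<String> loadLabels(Path file) throws IOException {
        Set<String> labels = new HashSet<>();
        for (String line : Files.readAllLines(file)) {
            String label = line.trim().toLowerCase();
            if (!label.isEmpty()) {
                labels.add(label);
            }
        }
        return labels;
    }

    public static void main(String[] args) throws IOException {
        // Demo with a temporary file standing in for e.g. a recycling list.
        Path tmp = Files.createTempFile("recycling", ".txt");
        Files.write(tmp, Arrays.asList("Cardboard", "plastic bottle", ""));
        Set<String> recycling = loadLabels(tmp);
        System.out.println(recycling.contains("cardboard")); // true
        Files.deleteIfExists(tmp);
    }
}
```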

Phases of the Project

  • PART 1: Image recognition
    • API design choice --> TensorFlow, Inception
    • Minimum viable product - retrain neural net
    • Optimize the model to recognize more objects with higher accuracy
  • PART 2: Categorisation
    • Write code for layer between labeling and output to categorize objects as recyclable, compostable, or trash
    • Develop item lists to use for categorisation
  • PART 3: App Design
    • Decide on operating system --> Android
    • Test on device for minimum viable product --> Works but classification is weak
    • Improve on User Interface design
  • PART 4: Wrap up and presentation
    • Create presentation template
    • Fill in content
    • Practice presentation

Project progression

Refer to the PROGRESS-TRACK document for summarized progress day by day.

Thanks to...