Car Wheel Assembly Detection 🚘


Authors: Elizaveta Isianova, Lukas Malek, Lukas Rasocha, Vratislav Besta, Weihang Li

βš™οΈ Tech Stack & Tools

Conda · Gcloud · Docker · PyTorch · DVC · FastAPI · wandb · Lightning

πŸ“ Project

This project presents an attempt to predict the successful assembly of tires onto wheels autonomously. Currently, the method uses purely image-based classification to predict whether a tire was assembled correctly. To enrich this, we attempt to use an LSTM model to analyze inputs from the torque and force sensors of the assembling robot, enabling the system to determine the optimal conditions for tire assembly without human intervention. The goal is to increase efficiency and accuracy in the tire assembly process, reducing reliance on manual labor and minimizing errors.

Motivation

The project is based on a real use case from the Testbed for Industry 4.0 at CTU Prague. The current quality-control methodology uses CNNs for the visual inspection of tire assemblies.

Data

The data are measured and labelled by the lab. The dataset is generated through robotic cell runs, and each sample is labelled as true (successful assembly) or false (unsuccessful assembly).

Project Goal

This project aims to introduce a new method for enhancing the quality control process in car wheel assembly executed by a delta robot.

Approach

Departing from the picture-based assessment using CNNs, our approach evaluates the correctness of the assembly based on data from a force-torque sensor. This turns the dataset into a collection of time series of recorded sensor data from individual tire assemblies. Each element of a series is a 6D vector combining a 3-DOF force vector and a 3-DOF torque vector.

Methodology

Since the data are time series, the chosen methodology is a Long Short-Term Memory recurrent neural network (LSTM RNN) implemented in PyTorch. There is no existing baseline solution for this exact problem, so the project can instead be evaluated by comparison with the existing CNN approach.
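
As an illustration, a minimal sketch of such an LSTM classifier in PyTorch Lightning is shown below. It assumes a 6-dimensional input (three force and three torque components per time step) and a single binary output; the layer sizes, learning rate and class names are placeholders, not the values used in src/models/train_model.py.

import torch
import torch.nn as nn
import pytorch_lightning as pl

class AssemblyLSTM(pl.LightningModule):
    """Sketch of a binary classifier over force/torque time series (not the repo's exact model)."""

    def __init__(self, input_size: int = 6, hidden_size: int = 64, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)  # single logit: assembly OK / not OK
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, x):
        # x: (batch, seq_len, 6) -> summarize the sequence with the last hidden state
        _, (h_n, _) = self.lstm(x)
        return self.classifier(h_n[-1]).squeeze(-1)

    def training_step(self, batch, batch_idx):
        x, y = batch  # y: 1.0 = successful assembly, 0.0 = unsuccessful
        loss = self.loss_fn(self(x), y.float())
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)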

Limitations

Due to the small dataset, limited by time constraints and the amount of labelled data, we do not expect to obtain a well-performing model; rather, we want to present a method for further development.

Framework

As a third-party framework we use PyTorch Lightning, possibly together with the PyTorch Forecasting package built on top of Lightning.
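
Wiring a module like the one above into Lightning's Trainer then takes only a few lines; the random tensors below are a stand-in for the real DataLoader and serve purely as a usage sketch.

import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

# Stand-in data: 32 runs, 100 time steps, 6 sensor channels each
x = torch.randn(32, 100, 6)
y = torch.randint(0, 2, (32,))
loader = DataLoader(TensorDataset(x, y), batch_size=8)

trainer = pl.Trainer(max_epochs=5, accelerator="auto")
trainer.fit(AssemblyLSTM(), loader)  # AssemblyLSTM from the sketch above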

🐍 Conda Installation

Create the environment, install the dependencies and download the data

git clone https://github.com/malek-luky/Automatic-Wheel-Assembly-Detection.git
cd Automatic-Wheel-Assembly-Detection
make conda

🐳 Docker

This will build an image of our project and run it in a container. In the container you will have all the dependencies, data and code needed to run the project. We have three different dockerfiles:

  • conda_setup: for debugging purposes; sets up the environment and waits for the user to run it in interactive mode
  • train_model: downloads dependencies, trains the model and sends it to Weights & Biases (wandb)
  • deploy_model: downloads dependencies and the model from wandb and waits for user input to make predictions

The following build and run steps are written for train_model only, but they can easily be adapted for any other dockerfile.

Build the image locally after cloning the repository.

git clone https://github.com/malek-luky/Automatic-Wheel-Assembly-Detection.git
cd Automatic-Wheel-Assembly-Detection
<uncomment line 21 and 22 inside dockerfiles/train_model.dockerfile>
docker build -f dockerfiles/train_model.dockerfile . -t trainer:latest
docker run --name trainer -e WANDB_API_KEY=<WANDB_API_KEY> trainer:latest

Pull the docker image from the GCP Artifact Registry

There is an error while loading the data from the bucket. Unfortunately, there is no workaround at this moment.

make train_model
docker run --name trainer -e WANDB_API_KEY=<WANDB_API_KEY> europe-west1-docker.pkg.dev/wheel-assembly-detection/wheel-assembly-detection-images/train_model:latest

πŸ’» Google Cloud Computing

Create VM Machine in GCP

  1. Open Compute Engine
  2. Create a name
  3. Region: europe-west1 (Belgium)
  4. Zone: europe-west1-b
  5. Machine configuration: Compute-optimized
  6. Series: C2D
  7. Machine Type: c2d-standard-4 (must have at least 16GB RAM)
  8. Boot disk: 20 GB
  9. Container image: <ADDRESS-OF-IMAGE-IN-ARTIFACT-REGISTRY> (click Deploy Container)
  10. Restart policy: never
  11. The rest is default

Via gcloud command

If the gcloud command is unknown, follow the installation steps for your OS. Otherwise, there are three dockerfiles that can be deployed to a Virtual Machine in GCP (suffix _vm added to the dockerfile name). All of them create the same instance, but with a specific container. The instance name follows the dockerfile name (conda_setup/train_model/deploy_model).

make train_model_vm
gcloud compute ssh --zone "europe-west1-b" "train-model" --project "wheel-assembly-detection"

Connecting to VM machine

  • Either via SSH inside the browser from Compute Engine
  • Or locally using a command similar to this one: gcloud compute ssh --zone "europe-west1-b" "<name_of_instance>" --project "wheel-assembly-detection" (the instances can be listed using gcloud compute instances list)

Controlling deployed Virtual Machine

  • docker ps: shows the containers running on the machine
  • docker logs <CONTAINER_ID>: wait until the image is successfully pulled
  • docker ps: the pulled container has a new ID
  • docker exec -it <CONTAINER_ID> /bin/bash: starts the container in an interactive shell (only for conda_wheel_assemly_detection; the others only train the model, upload it and exit; setting the restart policy to "never" might fix this issue)

πŸ‘€ Optional

Re-processing the data re-creates the filtered, normalized and processed folders. The processed data is stored in data/processed/dataset_concatenated.csv and is used for training.

Re-process the data

python src/data/make_dataset.py
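
After re-processing, the concatenated CSV can be quickly sanity-checked from Python (assuming pandas is installed; the column layout is whatever the processing step produces):

import pandas as pd

df = pd.read_csv("data/processed/dataset_concatenated.csv")
print(df.shape)   # rows (time steps) x columns (sensor channels + label)
print(df.head())  # first few rows, to verify filtering and normalization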

Re-train the model

python src/models/train_model.py

Run training locally without W&B

python src/models/train_model.py

Run training locally with W&B

python src/models/train_model.py --wandb_on

Remove the conda environment

conda remove --name DTU_ML_Ops --all

🌐 Deployment

This repository is configured for deployment using Google Cloud ☁️. The images in this repository are re-built and deployed automatically using GitHub Actions and stored in the Google Artifact Registry on every push to the main branch.

We also automatically re-train the model using Vertex AI, store it in Weights & Biases model registry and deploy it using Google Cloud Run.

Automatic Workflows

With access to GCP you can simply make your changes and merge them into main. When the merge is done, GitHub Actions will automatically train and deploy the model. We have 4 workflows in total.

  • build_conda: builds the conda image and stores it in GCP
  • build_train: runs the built image on Vertex AI to train the model and sends the trained model to wandb
  • build_deploy: deploys the image to Cloud Run, which handles user requests and serves predictions via FastAPI
  • pytests: runs the data and model pytests

πŸ€– Use our model

Cloud Deployment

The model is deployed using Google Cloud Run. You can make a prediction using the following command:

curl -X 'POST' \
  'https://deployed-model-service-t2tcujqlqq-ew.a.run.app/predict' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "sequence": [
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.2, 0.3, 0.4, 0.3, 0.4, 0.5, 0.3, 0.2],
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1],
        [0.2, 0.3, 0.4, 0.3, 0.4, 0.5, 0.3, 0.2]
    ]
}'
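
The same request can be sent from Python with the requests library; the payload below mirrors the dummy sequence from the curl example above.

import requests

URL = "https://deployed-model-service-t2tcujqlqq-ew.a.run.app/predict"

row_a = [0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.2, 0.1]
row_b = [0.2, 0.3, 0.4, 0.3, 0.4, 0.5, 0.3, 0.2]
payload = {"sequence": [row_a, row_b] + [row_a] * 7 + [row_b]}  # same 10 time steps as above

response = requests.post(URL, json=payload, timeout=30)
print(response.json())  # prediction returned by the deployed model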

Local deployment

Our model can also be deployed locally. The guidelines for running a local server and making predictions are here

🀝 Contributing

Contributions are always welcome! If you have any ideas or suggestions for the project, please create an issue or submit a pull request. Please follow these conventions for commit messages.

πŸ’» Technology Used

  • Docker: the "PC setup" captured inside the dockerfiles
  • Conda: Package manager
  • GCP
    • Cloud Storage: Stores the data for dvc pull
    • Artifact Registry: Stores the built docker images (which can be run as containers)
    • Compute Engine: Enables creating virtual machines
    • Cloud Functions / Cloud Run: Deployment
    • Vertex AI: Training of AI models on managed virtual machines (an "abstraction above VMs")
  • OmegaConf: Handles the config data for train_model.py
  • CookieCutter: Template used for generating the code structure
  • DVC: Data versioning tool, similar to GitHub but for data
  • GitHub: Versioning tool for the written code
  • GitHub Actions: Runs pytest and CodeCov and uploads the built docker images to GCP
  • Pytest: Runs the tests that check whether the code is working
  • CodeCov: Creates the coverage report from pytest and submits it as a comment to the pull request
  • Weights & Biases (wandb): Used for storing and tracking the trained model
  • PyTorch Lightning: Framework for training our LSTM model and storing default config values
  • PyTorch Forecasting: Abstraction above PyTorch Lightning for working with time-series data
  • Torchserve: Used for local deployment
  • FastAPI: Creates the API for our model and wraps it into a container so it can be accessed anywhere (a minimal sketch of such an endpoint is shown after this list)
  • Slack/SMS: Handles the alerts; Slack for the deployed model, SMS for a server cold-run
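
As referenced in the FastAPI entry above, a minimal endpoint of this kind could look like the following. This is a hypothetical sketch only, not the code shipped in the serve_model dockerfile; the model-loading logic and the response field name are placeholders.

from typing import List

import torch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictionRequest(BaseModel):
    sequence: List[List[float]]  # one inner list per time step

# model = ...  # placeholder: the real service loads the trained LSTM from wandb

@app.post("/predict")
def predict(request: PredictionRequest):
    x = torch.tensor(request.sequence).unsqueeze(0)  # shape (1, seq_len, n_features)
    logit = torch.zeros(1)                           # placeholder instead of model(x)
    return {"assembly_ok": bool(torch.sigmoid(logit) > 0.5)}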

DIAGRAM

(Project architecture diagram)

πŸ“‚ PROJECT STRUCTURE

The directory structure of the project looks like this:

β”œβ”€β”€ .dvc/                 <- Cache and config for data version control
β”œβ”€β”€ .github/workflows     <- Includes the steps for GitHub Actions
β”‚   β”œβ”€β”€ build_conda       <- Conda dockerfile: Build conda image and push it to GCP
β”‚   β”œβ”€β”€ build_deploy      <- Deploy dockerfile: build, push and deploy
β”‚   β”œβ”€β”€ build_train       <- Train dockerfile: Build train image and push it to GCP
β”‚   └── pytests           <- Runs the data and model pytests
β”œβ”€β”€ data                  <- Run dvc pull to see this folder
β”‚   β”œβ”€β”€ filtered          <- Separated raw data, one file per measurement
β”‚   β”œβ”€β”€ normalized        <- Normalized filtered data
β”‚   β”œβ”€β”€ processed         <- Torch tensors from normalized data and concatenated csv
β”‚   └── raw               <- Original measurements
β”œβ”€β”€ deployment            <- Other deployment options as Cloud Function and torchserve
β”‚   β”œβ”€β”€ cloud_functions   <- File that can be run as a Cloud Function on GCP (WIP)
β”‚   └── torchserve/       <- All data needed for local deployment
β”œβ”€β”€ dockerfiles           <- Storage of our dockerfiles
β”‚   β”œβ”€β”€ conda_wheel       <- Sets up the machine and opens an interactive environment
β”‚   β”œβ”€β”€ train_wheel       <- Runs train_model.py, which uploads the new model to wandb
β”‚   β”œβ”€β”€ serve_model       <- Uses FastAPI; the only dockerfile that also deploys the model
β”‚   └── README            <- Notes and a few commands regarding the dockerfile struggles
β”œβ”€β”€ docs                  <- Documentation folder
β”‚   β”œβ”€β”€ index.md          <- Homepage for your documentation
β”‚   β”œβ”€β”€ mkdocs.yml        <- Configuration file for mkdocs
β”‚   └── source/           <- Source directory for documentation files
β”œβ”€β”€ reports               <- Generated analysis as HTML, PDF, LaTeX, etc.
β”‚   β”œβ”€β”€ figures/          <- Generated graphics and figures to be used in reporting
β”‚   └── README            <- Exam questions and project work progress
β”œβ”€β”€ src                   <- Source code
β”‚   β”œβ”€β”€ data              <- Scripts to download or generate data
β”‚   β”‚   β”œβ”€β”€ filter        <- Separates the measurements into csv files
β”‚   β”‚   β”œβ”€β”€ make_dataset  <- Runs filter->normalize->process as one script
β”‚   β”‚   β”œβ”€β”€ normalize     <- Normalizes the filtered data
β”‚   β”‚   β”œβ”€β”€ process       <- Changes normalized data into torch files and concatenated csv
β”‚   β”‚   β”œβ”€β”€ README        <- Includes more details about the scripts
β”‚   β”‚   └── utils         <- File with custom functions
β”‚   β”œβ”€β”€ helper            <- Folder with custom functions
β”‚   β”‚   β”œβ”€β”€ convert_reqs  <- Function that mirrors the requirements to environment.yml
β”‚   β”‚   β”œβ”€β”€ gcp_utils     <- Function that returns the wandb API key on GCP via a secret
β”‚   β”‚   └── logger        <- Creates logs in the logs/ folder for easier debugging
β”‚   β”œβ”€β”€ models            <- Model implementations, training script and prediction script
β”‚   β”‚   β”œβ”€β”€ arch_model    <- Old model class definition and function calls
β”‚   β”‚   β”œβ”€β”€ arch_train_m  <- Old model using Forecasting and TemporalFusionTransformer
β”‚   β”‚   β”œβ”€β”€ model         <- New lightweight model class definition and function calls
β”‚   β”‚   β”œβ”€β”€ predict_model <- Predicts the result from unseen data
β”‚   β”‚   └── train_model   <- New lightweight model using Lightning's LSTM
β”œβ”€β”€ tests                 <- Contains all pytests for the GitHub workflow
β”‚   β”œβ”€β”€ test_data         <- Checks if the data exist and have the expected shape
β”‚   └── test_model        <- Checks if the trained model is correct
β”œβ”€β”€ .gitignore            <- Files that are not pushed to GitHub
β”œβ”€β”€ .pre-commit-config    <- Formats the code following PEP 8 and mirrors requirements.txt
β”œβ”€β”€ LICENSE               <- Open-source license info
β”œβ”€β”€ Makefile              <- Makefile with convenience commands like `make data` or `make train`
β”œβ”€β”€ README.md             <- The top-level README which you are reading right now
β”œβ”€β”€ data.dvc              <- Links the newest data from GCP Cloud Storage
β”œβ”€β”€ environment.yml       <- Requirements for new conda env, also used inside docker
β”œβ”€β”€ pyproject.toml        <- Project (python) configuration file
└── requirements.txt      <- The pip requirements file for reproducing the environment

πŸ™ Acknowledgements

Created using mlops_template, a cookiecutter template for getting started with Machine Learning Operations (MLOps).