See the blog post at https://dvc.org/blog/end-to-end-computer-vision-api-part-1-data-versioning-and-ml-pipelines for a full explanation.
- training of an image segmentation model using DVC pipelines
- web API that calls the above model on a provided image and responds with a binary segmentation mask
The data comes from the Magnetic Tile Surface Defects dataset: https://github.com/abin24/Magnetic-tile-defect-datasets.
For instructions on how to run things locally, see the "Local development setup" section below.
In this stage, we'll be performing all our model experiments: work on data preprocessing, change model architecture, tune hyperparameters, etc. Once we think our experiment is ready to be run, we'll push our changes to a remote repository (in this case, GitHub). This push will trigger a CI/CD job in GitHub Actions, which in turn will:
- provision an EC2 virtual machine with a GPU in AWS
- deploy our experiment branch to this machine
- rerun the entire DVC pipeline
- push files with metrics and other DVC artifacts back to GitHub
*Note: the workflow is set up so that GitHub Actions will overwrite git history. Thus, to sync our local workspace with the remote, we run `git fetch` followed by `git reset --hard @{upstream}`. We don't run `git pull` here, as it would merge the upstream into our local git history.*
At this point, we can assess the results in DVC Studio and GitHub and decide what to change next (the snippet after the diagram shows one way to pull the same numbers programmatically).
flowchart TB
A(git checkout dev\ngit checkout -b experiment) -->|Push changes| B("Exp CML workflow\n(training & reporting)")
B --> |Reports,\nmetrics,\nplots| C("Check results.\nAre they good?")
C -->|No,\nchange experiment parameters| A
C -->|Yes| D(Merge to dev branch)
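As an aside, the reported numbers can also be read programmatically. A minimal sketch, assuming a recent DVC version that exposes `dvc.api.params_show` and `dvc.api.metrics_show`, run from the repository root:

```python
# A minimal sketch, assuming DVC 3.x; the keys returned depend on
# which params/metrics files the pipeline actually defines.
import dvc.api

params = dvc.api.params_show()    # contents of the tracked params file(s)
metrics = dvc.api.metrics_show()  # metrics produced by the pipeline
print(params)
print(metrics)
```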
Once we are happy with our model's performance on the experiment branch, we can merge it into the dev branch. This would trigger a different CI/CD job that will:
- retrain the model with the new parameters
- deploy the web REST API application (that relies on the new/retrained model) to a development endpoint on Heroku
Now we can test our API and assess the end-to-end performance of the overall solution.
flowchart TB
A(Dev CML workflow) --> B(Retraining) --> C(Deployment to dev and monitoring)
If we've thoroughly tested and monitored our dev web API, we can merge the dev branch into the main branch of our repository. Again, this triggers the third CI/CD workflow, which deploys the code from the main branch to the production API.
flowchart TB
A(Successful deployment to dev) --> B(Merge dev into main) --> C(Prod CML workflow) --> D(Deployment to prod)
Prerequisites:
- pipenv

Create and activate the virtual environment, then install the dependencies:

pipenv shell
pipenv install
AWS S3 is our remote storage, configured in the .dvc/config file. You need to edit this file to configure your own remote. Many different remote storage types are supported, including all major cloud providers; for more info, see the docs.
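Once a remote is configured, DVC-tracked files can also be read programmatically. A minimal sketch using `dvc.api.open`; the image path is the same placeholder used elsewhere in this README and must be replaced with a real file:

```python
# Sketch of reading a DVC-tracked file via the dvc.api Python API.
# <image_name> is a placeholder, not a real file in this repo.
import dvc.api

with dvc.api.open(
    'data/MAGNETIC_TILE_SURFACE_DEFECTS/test_images/<image_name>.jpg',
    repo='.',   # or a Git URL, to read without checking the data out locally
    mode='rb',
) as f:
    image_bytes = f.read()
print(len(image_bytes))
```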
Run the DVC pipeline:

dvc repro
The model will be saved in the `models/` directory.
Here's the DAG of the pipeline:
$ dvc dag
+----------------+
| check_packages |
*****+----------------+*****
***** * ** ******
****** *** ** *****
*** * ** ******
+-----------+ ** * ***
| data_load | ** * *
+-----------+ ** * *
*** ** * *
* ** * *
** * * *
+------------+ * *
| data_split |*** * *
+------------+ ***** * *
* ******* * *
* ***** * *
* **** * *
** +-------+ ***
**** | train | ******
**** +-------+ *****
*** ** ******
**** ** ******
** * ***
+----------+
| evaluate |
+----------+
Run the web API locally:

uvicorn app.main:app
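For orientation, here is a rough, hypothetical sketch of the shape of the `/analyze` endpoint; the real implementation lives in `app/main.py`, and `run_model` below is a dummy stand-in for the trained segmentation model:

```python
# Hypothetical sketch only; see app/main.py for the actual implementation.
from io import BytesIO

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()

def run_model(arr: np.ndarray) -> np.ndarray:
    # Dummy stand-in for the trained segmentation model:
    # just threshold the grayscale image.
    return (arr < 100).astype(np.uint8)

@app.post('/analyze')
async def analyze(image: UploadFile = File(...)):
    # The client sends the file under the multipart form field 'image'
    img = Image.open(BytesIO(await image.read())).convert('L')
    pred = run_model(np.asarray(img))
    # Return the binary mask as JSON, under the 'pred' key the client reads
    return {'pred': pred.tolist()}
```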
Build image
docker build . -t mag-tiles
Run container
docker run -p 8000:8000 -e PORT=8000 mag-tiles
*Only the POST method is supported, i.e. opening the link in a browser (a GET request) will return a "Method Not Allowed" response.*
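For instance, a bare GET from Python shows the rejection (assuming the server is running locally):

```python
import requests

# GET is not allowed on the endpoint; the server answers 405.
r = requests.get('http://127.0.0.1:8000/analyze')
print(r.status_code)  # 405
```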
With curl
curl -X POST -F 'image=@data/MAGNETIC_TILE_SURFACE_DEFECTS/test_images/<image_name>.jpg' -v http://127.0.0.1:8000/analyze
With Python
import json
from pathlib import Path
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np
import requests
url = 'http://127.0.0.1:8000/analyze'
file_path = Path(
    'data/MAGNETIC_TILE_SURFACE_DEFECTS/test_images/<image_name>.jpg')
# Send the image as multipart form data; the form field must be named 'image'
with open(file_path, 'rb') as f:
    files = {'image': (str(file_path), f, 'image/jpeg')}
    response = requests.post(url, files=files)
# The API responds with a JSON-encoded binary segmentation mask
data = json.loads(response.content)
pred = np.array(data['pred'])
# Save the predicted mask as a grayscale PNG
plt.imsave(f'{file_path.stem}_mask.png', pred, cmap=cm.gray)
Deploy the container to Heroku:

heroku container:login
heroku create <APP_NAME>
heroku container:push web --app <APP_NAME>
heroku container:release web --app <APP_NAME>
Currently, the dev version of the app is deployed to https://demo-api-mag-tiles-dev.herokuapp.com/analyze
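A quick smoke test against that endpoint, with the same request shape as the local Python example above (`<image_name>` is a placeholder for a real test image):

```python
# Smoke test for the deployed dev API; replace <image_name> with a
# real image from the dataset's test_images directory.
import requests

url = 'https://demo-api-mag-tiles-dev.herokuapp.com/analyze'
path = 'data/MAGNETIC_TILE_SURFACE_DEFECTS/test_images/<image_name>.jpg'
with open(path, 'rb') as f:
    r = requests.post(url, files={'image': f})
print(r.status_code)
```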