Google Kubernetes Engine

Assignment 2: Deploy an ML model with Docker and Kubernetes using Google Kubernetes Engine

Objective:

The objective of this assignment is to run a deep learning model with Docker and Kubernetes on Google Kubernetes Engine (GKE).

The steps involved are documented and explained below:

Steps:

Step 1:

Create the script to be run, which contains the deep learning model and the data loading logic. This script is in the mnist/main.py file.
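The contents of mnist/main.py are not reproduced in this README. The sketch below shows the general shape such a script might take, assuming PyTorch and torchvision (a hypothetical choice here, not necessarily what the repo uses): define a small network, download MNIST, train for a couple of epochs, and print progress so the output is visible later in the pod logs.

```python
# Hypothetical sketch of a training script; the repo's actual mnist/main.py may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)      # flatten 28x28 images
        x = F.relu(self.fc1(x))
        return self.fc2(x)

def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    train_set = datasets.MNIST("./data", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = Net().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    model.train()
    for epoch in range(1, 3):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()
        # Printed output ends up in the container logs (see Step 5)
        print(f"epoch {epoch}: loss {loss.item():.4f}")

if __name__ == "__main__":
    main()
```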

Step 2:

Get Google Kubernetes Engine ready for deployment of the Docker image and the k8s cluster:

  1. Log in to the Google Cloud Console.
  2. Enable the Kubernetes Engine API along with the Compute Engine API.
  3. Create the k8s cluster from the Kubernetes Engine console in the GCP interface (a CLI alternative is sketched after this list).

(Screenshot: cluster creation in the Kubernetes Engine console)

  4. Install the Google Cloud SDK using: brew install google-cloud-sdk
  5. Initialize the CLI: gcloud init
  6. Install the GKE auth plugin using: gcloud components install gke-gcloud-auth-plugin
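If you prefer to create the cluster from the CLI instead of the console, a minimal sketch is shown below. The cluster name and region match the get-credentials example in Step 4, but adjust them to your own project.

```sh
# Create an Autopilot cluster from the CLI (alternative to the console step above)
gcloud container clusters create-auto autopilot-cluster-1 --region=us-central1
```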
Step 3:

Build the Docker image and push it to the Google Container Registry so GKE can pull it:

  1. Create the Dockerfile: "Dockerfile" (a sketch is shown at the end of this step).
  2. Build the Docker image using the following syntax:

docker build -t gcr.io/{project-id}/{app-name} .

docker build -t gcr.io/my-project-27779-401121/my-dl-app:latest .

(Screenshot: docker build output)
  3. Push the Docker image to the Google Container Registry using the following syntax:

docker push gcr.io/{project-id}/{app-name}

docker push gcr.io/my-project-27779-401121/my-dl-app:latest
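The Dockerfile itself lives in the repo and is not reproduced here. A minimal sketch of what such a file might contain is shown below; the base image and dependency list are assumptions for illustration, not the repo's exact contents.

```dockerfile
# Hypothetical sketch of the Dockerfile; the repo's actual file may differ.
FROM python:3.10-slim

WORKDIR /app

# Install the deep learning dependencies (assumed here to be PyTorch + torchvision)
RUN pip install --no-cache-dir torch torchvision

# Copy the training script into the image
COPY mnist/main.py .

# Run the training job when the container starts
CMD ["python", "main.py"]
```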

Step 4: Fetch the k8s cluster credentials so kubectl can deploy the yaml files, using the format:

gcloud container clusters get-credentials {cluster-name} --zone={zone-name}

gcloud container clusters get-credentials autopilot-cluster-1 --zone=us-central1
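Before deploying, you can confirm the cluster is visible from the CLI, for example:

```sh
# Lists clusters in the active project; the new cluster should appear here
gcloud container clusters list
```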

Step 5: Deploy the application on GKE using kubectl and the yaml files
  1. Install kubectl on your machine: brew install kubectl
  2. Create a persistent volume claim yaml file for storage (in GKE we do not need to create a PV yaml, since GKE provisions the volume automatically): "pvc.yaml" (a sketch appears at the end of this step)
  3. Create a yaml file for the deployment: "dl_deployment.yaml" (a sketch appears at the end of this step)
  4. Run the command to deploy the pvc yaml in GKE: kubectl apply -f pvc.yaml
  5. Run the command to check the status of the claim: kubectl get pvc
(Screenshot: kubectl get pvc output)
  6. Run the command to deploy the deployment yaml in GKE: kubectl apply -f dl_deployment.yaml
  7. Run the command to check the status of your deployment: kubectl get pods
(Screenshot: kubectl get pods output)
  8. To see the final output from the pod, use the following command syntax:

kubectl logs {pod-name}

kubectl logs my-dl-app-6957f4d4-d76nq

(Screenshot: kubectl logs output showing the training run)
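For reference, the two manifests used in this step are part of the repository; the sketches below are hypothetical reconstructions under common defaults, not the repo's exact files. A minimal pvc.yaml might look like:

```yaml
# Hypothetical sketch of pvc.yaml; GKE provisions the backing PersistentVolume automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dl-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

And a minimal dl_deployment.yaml, assuming the image pushed in Step 3 and the claim above (the volume and claim names are illustrative):

```yaml
# Hypothetical sketch of dl_deployment.yaml; the repo's actual manifest may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dl-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-dl-app
  template:
    metadata:
      labels:
        app: my-dl-app
    spec:
      containers:
        - name: my-dl-app
          image: gcr.io/my-project-27779-401121/my-dl-app:latest
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: dl-data-pvc
```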

END
