Environment for ML exercises

This repository contains scripts to set up an environment for the exercises of the Machine Learning course held by Prof. Iocchi.

The repo has been developed with the contributions of the ML tutors Ermanno Bartoli and Francesco Frattolillo.

Install Docker

To have a ready-to-use environment without manually installing all the libraries and dependencies, we use Docker.

To install Docker on your PC, follow the official Docker installation guide.

NB: it is important that you add your user to the docker group and log out and back in again before proceeding.
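On most Linux systems this is the usual post-install step; a typical sequence (assuming a standard Docker installation):

sudo usermod -aG docker $USER   # add the current user to the docker group
# then log out and back in (or reboot) for the group change to take effect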

Nvidia GPU driver installation

Note: skip this section if you do not have an NVIDIA GPU.

To run the tensorflow-gpu container you need an NVIDIA GPU, and the host machine requires the NVIDIA driver (you do not need the NVIDIA CUDA Toolkit).

Follow the remaining GPU setup steps described on the TensorFlow website; in particular, install the NVIDIA Container Toolkit by following its official installation guide.
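To verify the setup, you can check that the driver is visible on the host and that Docker can reach the GPU; a quick sanity check, assuming the NVIDIA Container Toolkit is installed and configured:

nvidia-smi                                     # driver check on the host
docker run --rm --gpus all ubuntu nvidia-smi   # GPU visibility from inside a container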

TensorFlow

The standard Docker image used in this course is tensorflow:2.13.0-gpu-jupyter, which comes with both GPU support and Jupyter Notebook pre-installed.

This Docker image also works on machines without a GPU (CPU only).
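If you want to fetch the base image yourself before building, it can be pulled from Docker Hub; this assumes the course image is built on the official tensorflow/tensorflow repository:

docker pull tensorflow/tensorflow:2.13.0-gpu-jupyter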

Installation

You'll need to follow these few steps:

  • Clone the repository with the following command:
git clone https://github.com/iocchi/MLexercises.git
  • Go inside the repository and create a folder called notebooks:
cd MLexercises
mkdir notebooks
  • Build the Docker image by running the script:
bash build.bash
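For convenience, the whole installation as a single copy-paste sequence (same commands as above):

git clone https://github.com/iocchi/MLexercises.git   # clone the repository
cd MLexercises                                        # enter it
mkdir notebooks                                       # folder for your notebooks
bash build.bash                                       # build the Docker image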

Usage

Once you're ready, if you built the image with the build.bash script in the previous step, you can run it:

with GPU support

bash rungpu.bash

without GPU support

bash run.bash
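Either script starts Jupyter inside the container and prints a URL containing an access token; if you lose it, you can read it back from the container logs. A minimal sketch, assuming the scripts name the container mlnotebook, as the Stop the container section suggests:

docker logs mlnotebook 2>&1 | grep token   # show the Jupyter URL(s) with the access token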

Custom build and run

You can build and run the images with direct commands instead of using scripts.

Build an image

docker build -t NAME_OF_IMAGE .

Run the image with GPU support

nvidia-docker run --name NAME_OF_IMAGE --rm -p 8888:8888  NAME_OF_IMAGE

or without GPU support

docker run --name NAME_OF_IMAGE --rm -p 8888:8888  NAME_OF_IMAGE

NB: keep -p 8888:8888 unchanged: it maps the container's Jupyter port 8888 to the same port on the host, which the URLs in the next section rely on.
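Note that the standalone nvidia-docker wrapper is deprecated on recent setups; with the NVIDIA Container Toolkit installed, the GPU run can typically also be expressed with plain docker and the --gpus flag (a sketch, not one of the repository's scripts):

docker run --name NAME_OF_IMAGE --rm --gpus all -p 8888:8888 NAME_OF_IMAGE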

Connect Colab to local Notebook

Since Google Colab has some limitations but a well-structured interface, you can connect Colab to a local runtime and use the computational power of your own machine.

  1. First of all, upload your local .ipynb file to Colab.
  2. Once you have uploaded your file the first time, Colab will automatically save and update it (CTRL+S) on your Drive. The next time you want to work on this file, just open it from Google Drive.
  3. Connect Google Colab to a local runtime (after running the Docker container).
  4. Write http://localhost:8888/?token= as the local connection URL.
  5. Append the token shown when executing the command from the Usage section.

To test your image, use the first_notebook.ipynb available in the test directory.

Stop the container

To stop the container, you can press CTRL-C in the terminal where you launched it, or issue the following command in another terminal:

docker stop mlnotebook

Mount local folders

If you want to develop and run code locally (without Colab), you should mount a local folder into the container, write your Python code there, and run it from inside the container.
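A minimal sketch of such a bind mount, assuming you want the notebooks folder created during installation to be visible inside the container (the container path /tf/notebooks is an assumption based on the stock TensorFlow Jupyter image, which serves /tf; adapt it to your Dockerfile):

docker run --name NAME_OF_IMAGE --rm -p 8888:8888 \
    -v "$(pwd)/notebooks":/tf/notebooks \
    NAME_OF_IMAGE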

Commit the docker

Although not recommended, if you change something inside the container and you want to keep the changes, don't forget to commit it to a new image with the following command:

docker commit CONTAINER_ID NAME_OF_IMAGE

NB: you can see the ID of the running container with the command:

docker ps
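For example, assuming the container started by the run scripts is named mlnotebook (the new image name is only illustrative):

docker ps                                   # find the running container
docker commit mlnotebook mlnotebook:custom  # save its current state as a new image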
