This repository contains example code for air-writing recognition from image input. That means there is no previous hand-localization stage; instead, the class of the performed gesture is predicted from the whole video sequence.
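As a rough illustration of that whole-sequence viewpoint, the sketch below loads every frame of one recorded repetition and aggregates per-frame predictions into a single gesture class. The model file name `air_writing_model.h5`, the 96x96 frame size, and the averaging step are all hypothetical and only meant to show the idea, not the repository's actual pipeline.

```python
# Minimal whole-sequence prediction sketch (hypothetical model file and frame size).
import glob
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

model = load_model("air_writing_model.h5")  # assumed: a full saved Keras model

# Load every frame of one gesture repetition into a single batch.
frame_files = sorted(glob.glob("input/gesture_1/repetition_1/frame_*.png"))
frames = np.stack([
    image.img_to_array(image.load_img(f, target_size=(96, 96))) / 255.0
    for f in frame_files
])

# Predict per frame, then average over the whole sequence to obtain one class.
probs = model.predict(frames)
print("Predicted gesture class:", int(probs.mean(axis=0).argmax()))
```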
- Install Anaconda for Python 3.6.
- Download this repository.
- (If not installed) Install the CUDA Toolkit, NVIDIA drivers, and the cuDNN library for GPU support in TensorFlow. See the section "Requirements to run TensorFlow with GPU support" in the TensorFlow installation guide for more instructions.
- Create the conda environment with the required packages: "conda env create -f tensorflow.yml".
- Download and install "graphviz-2.38.msi" from https://graphviz.gitlab.io/_pages/Download/Download_windows.html.
- Add the graphviz bin folder to the PATH system environment variable (Example: "C:/Program Files (x86)/Graphviz2.38/bin/")
- Create the subfolder "models".
- Build the dataset from the link Leap Motion writing acquisition into the subfolder "input" (a short enumeration sketch follows the folder tree below). The final dataset will have the following folder structure:
gesture_1/
    repetition_1/
        frame_000000.png
        frame_000001.png
        ...
        frame_000999.png
    repetition_2/
    ...
    repetition_7/
gesture_2/
...
gesture_N/
where repetition_N is a sample folder and gesture_N is a writing gesture type.
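As referenced above, here is a small sketch that walks the expected layout and counts the frames of each repetition. The folder names simply mirror the tree shown here; nothing in it depends on the repository's own data loader.

```python
import os

DATASET_DIR = "input"  # subfolder created during installation

for gesture in sorted(os.listdir(DATASET_DIR)):          # gesture_1 ... gesture_N
    gesture_dir = os.path.join(DATASET_DIR, gesture)
    if not os.path.isdir(gesture_dir):
        continue
    for repetition in sorted(os.listdir(gesture_dir)):   # repetition_1 ... repetition_7
        rep_dir = os.path.join(gesture_dir, repetition)
        frames = [f for f in os.listdir(rep_dir) if f.endswith(".png")]
        print(f"{gesture}/{repetition}: {len(frames)} frames")
```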
Execute ./windows/testme.bat
Alternatively:
- Run an Anaconda prompt.
- Activate the conda environment with the command "activate tensorflow".
- Execute:
python ../test.py --experiment_rootdir=../models ^
--weights_fname=../models/test_4/weights_015.h5 ^
--img_mode=rgb
Note 1: Depending on your installation, you may need to write python3 instead of python to run the code.
Execute ./windows/trainme.bat
Alternatively:
- Run an Anaconda prompt.
- Activate the conda environment with the command "activate tensorflow".
- Execute:
python train.py --experiment_rootdir=./models/test_1 ^
--img_mode=rgb
See more flags in common_flags.py to set batch size, number of epochs, dataset directories, etc.; an example call is sketched below.
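For instance, a training call that also overrides some of those flags could look like the following. The flag names --batch_size, --epochs and --train_dir are only illustrative guesses; check common_flags.py for the names the code actually defines.
python train.py --experiment_rootdir=./models/test_1 ^
--img_mode=rgb ^
--batch_size=32 ^
--epochs=50 ^
--train_dir=./input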
- Run an Anaconda prompt.
- Activate the conda environment with the command "activate tensorflow".
- Execute:
python train.py --restore_model=True --experiment_rootdir=./models/test_1 ^
--weights_fname=models/weights_015.h5 ^
--img_mode=rgb
where the pre-trained model (weights_015.h5 in this example) must be in the directory you indicate in --experiment_rootdir.