The project uses deep learning techniques, including transformer architectures and attention mechanisms, to build a model that translates speech into sign language. The pipeline processes input audio signals, extracts acoustic features, and generates the corresponding sign language gestures.
Speech Processing: Accepts raw audio signals as speech input.
Deep Learning Model: Uses a transformer-based architecture for the translation task.
Sign Language Generation: Converts the recognized speech into sign language gestures.
Training and Testing: Provides functionality to train and evaluate the model.
Command-Line Interface: Offers a command-line interface for easy interaction.
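As a rough illustration of the speech-processing step, the sketch below frames an audio signal and computes a log-magnitude spectrogram with NumPy. It is a minimal, self-contained example, not the project's actual feature extractor; the frame length (400 samples) and hop size (160 samples) are common choices for 16 kHz audio, assumed here for illustration.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a 1-D audio signal into overlapping frames."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return signal[idx]

def log_spectrogram(signal, frame_len=400, hop=160):
    """Windowed FFT magnitudes on a log scale -- a common acoustic feature."""
    frames = frame_signal(signal, frame_len, hop) * np.hanning(frame_len)
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(mags + 1e-8)

# Example: one second of a 16 kHz sine tone
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440.0 * t)
feats = log_spectrogram(audio)
print(feats.shape)  # (98, 201): 98 frames, 201 frequency bins
```

A real system would typically map these magnitudes onto a mel filterbank before feeding them to the model, but the framing-and-FFT structure is the same.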
- Clone the repository:

```shell
git clone https://github.com/nila-2003/speechToSignLanguage.git
```

- Navigate to the project directory:

```shell
cd speechToSignLanguage
```

- Install the dependencies:

```shell
pip install -r requirements.txt
```

- Train the model:

```shell
python __main__.py train ./Configs/Base.yaml
```

- Test the model with a saved checkpoint:

```shell
python __main__.py test ./Configs/Base.yaml ./Models/best.ckpt
```
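The transformer architecture the model relies on is built around scaled dot-product attention. The NumPy sketch below shows the core computation, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V; it is illustrative only and does not reproduce the project's actual model code. The shapes (5 queries, 7 keys/values, dimension 64) are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)     # each query's weights sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 64))   # 5 query positions, d_k = 64
K = rng.standard_normal((7, 64))   # 7 key/value positions
V = rng.standard_normal((7, 64))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)  # (5, 64) (5, 7)
```

In the full transformer this operation is repeated across multiple heads and layers, with learned projections producing Q, K, and V from the audio features and the partially generated output sequence.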