IMITATION LEARNING OF HAND GESTURES FOR A DUAL ARM ROBOT MANIPULATOR
The primary objective of this project is to use a deep learning-based gesture generation model, feed it customised voice/text as the primary input, and obtain the joint angles needed for a dual arm robot to perform the corresponding gestures. To accomplish this, the following steps are carried out:
● Generating joint coordinates with a deep learning model, driven by customised speech/text given as input.
● Converting those 3D joint coordinates of the simulated skeleton into joint angles (a sketch of this conversion is given after this list).
● Evaluating and mapping particular joints of the dual arm robot to perform the gestures, and constraining each computed joint angle to the allowed joint angle range of the corresponding robot joint.
By accomplishing these steps, this project aims to produce the relevant gestures for a dual arm robot to perform from any customised speech/text input.
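As a rough illustration of the second and third steps, the sketch below computes one such angle (an elbow flexion angle) from three 3D skeleton keypoints and clamps it to the allowed range of a robot joint. The keypoint values and the limits in JOINT_LIMITS are placeholders for illustration only, not values taken from the actual gesture model or from a specific dual arm robot.

```python
import numpy as np

# Hypothetical joint limits in radians for two example robot joints;
# real values must come from the specific robot's datasheet or URDF.
JOINT_LIMITS = {
    "right_elbow": (0.0, 2.6),
    "right_shoulder_pitch": (-1.5, 1.5),
}

def angle_between(v1, v2):
    """Angle (radians) between two 3D vectors via the dot product."""
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def elbow_angle(shoulder, elbow, wrist):
    """Elbow flexion angle from three 3D skeleton keypoints."""
    upper_arm = np.asarray(shoulder) - np.asarray(elbow)
    forearm = np.asarray(wrist) - np.asarray(elbow)
    return angle_between(upper_arm, forearm)

def map_to_robot(angle, joint_name):
    """Clamp a computed angle to the robot joint's allowed range."""
    lo, hi = JOINT_LIMITS[joint_name]
    return min(max(angle, lo), hi)

if __name__ == "__main__":
    # Example keypoints (metres) as they might come from the gesture model.
    shoulder, elbow, wrist = [0.0, 0.4, 1.4], [0.0, 0.15, 1.15], [0.2, 0.1, 1.3]
    raw = elbow_angle(shoulder, elbow, wrist)
    cmd = map_to_robot(raw, "right_elbow")
    print(f"raw elbow angle: {raw:.2f} rad, commanded angle: {cmd:.2f} rad")
```

In practice, one such conversion is defined for every mapped joint of the dual arm robot, and the limits are read from the robot's specification rather than hard-coded.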