Replies: 4 comments 1 reply
-
possible, but very much out of scope here. first, i'm not using tflite, so it wouldn't be part of this project. but there are bigger conceptual issues:
-
btw, i've converted the issue into a discussion as it's a good topic, but not really a product issue.
-
in that case, floating-point is not an issue, so you could convert to
my approach is to run landmark points through generic analysis first so i get things like "finger x is half-curled and pointing right". take a look at https://github.com/vladmandic/human/blob/main/src/hand/fingergesture.ts for some examples.
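The curl analysis described above can be sketched as a small rule-based step: classify each finger from the angle at its middle (PIP) joint. This is an illustrative sketch, not the code from the linked fingergesture.ts; the landmark indices follow the MediaPipe Hands 21-point convention, and the angle thresholds here are assumed values.

```python
import math

# MediaPipe Hands landmark indices per finger: (MCP, PIP, TIP)
FINGERS = {
    "index":  (5, 6, 8),
    "middle": (9, 10, 12),
    "ring":   (13, 14, 16),
    "pinky":  (17, 18, 20),
}

def angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def curl_state(landmarks, finger):
    """Classify a finger as 'straight', 'half-curled' or 'curled'
    from the angle at its PIP joint (thresholds are illustrative)."""
    mcp, pip, tip = FINGERS[finger]
    a = angle(landmarks[mcp], landmarks[pip], landmarks[tip])
    if a > 150:
        return "straight"
    if a > 90:
        return "half-curled"
    return "curled"
```

A gesture is then just a combination of per-finger states (e.g. "thumbs up" = thumb straight, all other fingers curled), which makes new gestures cheap to define without retraining anything.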
-
btw, i've created a script that converts all models to
-
Issue Description
I am new to mediapipe. I want to combine the mediapipe hands models (hand detection and hand landmarks) with a simple gesture recognition model, with all three merged into a single tflite file for inference on a microcontroller.
Steps to Reproduce
Is it possible to take the blaze hand model weights, make those layers non-trainable, and add a few additional FC layers that take two inputs: the keypoints from the blaze hand model and an additional label input for predefined gestures? The output should be a single prediction of the hand gesture.
Any leads on how to generate such a model would be helpful.
The overall idea is to compress this multistage pipeline into a single model that can be trained for new gestures with just a few epochs and less data, leveraging the already trained weights from blaze hand.
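The transfer-learning idea above (frozen landmark extractor, small trainable head) can be sketched without the real models: treat the pretrained network as a fixed function producing 21 keypoints x 3 coordinates = 63 features, and train only a single FC softmax layer on top. In a real pipeline this head would be a Keras `Dense` layer added after the frozen graph with `trainable=False` on the base; the numpy loop below is the same computation written out by hand, with all names and hyperparameters being illustrative assumptions.

```python
import numpy as np

def train_gesture_head(keypoints, labels, n_classes, epochs=200, lr=0.5):
    """Train one FC softmax layer (logits = X @ W + b) on frozen
    keypoint features, via plain gradient descent on cross-entropy."""
    X = keypoints.reshape(len(keypoints), -1)   # flatten 21x3 keypoints -> 63 features
    Y = np.eye(n_classes)[labels]               # one-hot targets
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        z = X @ W + b
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)       # softmax probabilities
        g = (p - Y) / len(X)                    # cross-entropy gradient wrt logits
        W -= lr * (X.T @ g)
        b -= lr * g.sum(axis=0)
    return W, b

def predict(W, b, keypoints):
    """Predict gesture class ids for a batch of 21x3 keypoint sets."""
    return (keypoints.reshape(len(keypoints), -1) @ W + b).argmax(axis=1)
```

Because only this tiny head is trained, a handful of examples and a few epochs suffice for new gestures, which is exactly the benefit of freezing the blaze hand weights; fusing the frozen graph and the trained head into one tflite file is then a converter step, not a training problem.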