What is the reason to do Euler_Angle to Exponential_Map conversion? #26
Replies: 11 comments
-
Dear Kelvin, We represent human motion using the exponential map because this representation avoids numerical issues related to potential discontinuities in joint angle values due to transitions from -pi to pi. We pre-calculate all the features (not only motion features, but also speech and text features) before training the model in order to be efficient, since feature extraction is not so fast. I hope that answers your question. Best,
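For reference, the Euler-angle to exponential-map step can be sketched roughly as follows. This is a minimal illustration assuming SciPy and a 'ZXY' channel order, not the repository's actual feature-extraction code; the real rotation order comes from the BVH file.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def euler_to_expmap(euler_deg, order='ZXY'):
    """Convert per-joint Euler angles (degrees) to exponential-map vectors.

    euler_deg: (n_joints, 3) array of Euler angles.
    Returns an (n_joints, 3) array of axis-angle vectors (radians).
    """
    return R.from_euler(order, euler_deg, degrees=True).as_rotvec()

# One frame with 15 selected joints becomes a 45-dimensional motion feature.
frame_euler = np.random.uniform(-90.0, 90.0, size=(15, 3))
frame_feature = euler_to_expmap(frame_euler).reshape(-1)
print(frame_feature.shape)  # (45,)
```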
-
Dear Taras, If the major concern is the discontinuity of Euler angles, why not consider using 3D coordinates (positions) directly for model building, which would not introduce the discontinuity issue? Have a nice day,
-
BTW, I am a speech person, which is why I lack knowledge about motion. Later on, I will show you my recent results on face mesh generation, where I directly use 3D coordinates to build the model. Maybe you have another reason for not using 3D coordinates to build the gesture model? Thanks,
-
output.300.mp4
-
I don't know a particularly good tutorial on the exponential map, but this could be a good starting point: As for using 3D coordinates - we don't do that because most virtual characters and humanoid robots cannot be driven by 3D coordinates; they require joint angles. Hence we use a representation that retains information about joint angles: it is easy to convert exponential maps back to joint angles, while it is tricky to convert 3D coordinates to joint angles. Is it clear now? Btw, @kelvinqin, nice results with face mesh generation! :)
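To illustrate why the back-conversion is easy, here is a minimal sketch assuming SciPy (the expmap_to_euler helper is illustrative, not code from the repository): an exponential-map vector converts back to joint angles with a single call, whereas recovering joint angles from 3D positions would require inverse kinematics.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def expmap_to_euler(expmap, order='ZXY'):
    """expmap: (n_joints, 3) axis-angle vectors -> Euler angles in degrees."""
    return R.from_rotvec(expmap).as_euler(order, degrees=True)

predicted = np.array([[0.1, -0.3, 0.05]])  # one joint's predicted exp-map rotation
print(expmap_to_euler(predicted))          # joint angles, ready for a BVH channel
```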
-
Dear Taras, Merry Christmas, |
-
Dear Taras, In the data processing phase, you convert the raw training data in bvh2feature.py (from Euler angles to the exponential map); the pipeline is: I guess the JointSelector function means that the features extracted for training include 15 segments, which corresponds to a 45-dim vector (15*3).

In the model prediction phase, you call write_bvh.py to convert the prediction result back into Euler angles: When I look at the temp.bvh file, I find it is a full-body skeleton with everything (I use bvhacker to view it) instead of only 15 segments. My question is: what is the secret to map a 45-dim vector back into a full-body skeleton?

One more question: the result I got is a little different compared with yours in https://vimeo.com/449190061. I am not sure if it is because I am using a different model? I will attach my result for you to take a look (the arm movement is not as strong as yours). Thanks so much for your guidance, Kelvin
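For intuition about the 45-dim to full-skeleton question, here is a purely hypothetical sketch (the joint names, the rest_pose argument, and the prediction_to_full_pose helper are illustrative, not the code in write_bvh.py): one plausible mechanism is to convert the 15 predicted joints back to Euler angles and leave every other joint of the skeleton at a reference pose.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Placeholder joint names; the real 15-joint list comes from the JointSelector config.
PREDICTED_JOINTS = [f'joint_{i}' for i in range(15)]

def prediction_to_full_pose(pred_45, rest_pose, order='ZXY'):
    """Map a (45,) exp-map prediction onto a full skeleton.

    rest_pose: {joint_name: (3,) Euler angles in degrees} for *all* joints.
    Only the 15 predicted joints are overwritten; the others keep their
    reference values, which is one way a 45-dim vector can end up as a
    full-body BVH pose.
    """
    euler = R.from_rotvec(pred_45.reshape(15, 3)).as_euler(order, degrees=True)
    full_pose = dict(rest_pose)                 # start from the reference pose
    for name, angles in zip(PREDICTED_JOINTS, euler):
        full_pose[name] = angles                # overwrite predicted joints only
    return full_pose
```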
-
Here is my result (run demo.py): generated_motion.mp4
-
Your results look reasonable. They are slightly different because we have a slightly different model for the demo now.
-
"My question is what is the secrete to map 45-dim vector back into a full body skeleton?" If you have another question - please open another issue |
-
Dear Taras,
-
Dear Taras,
Can you please share your knowledge on why you do the Euler_Angle to Exponential_Map conversion first and then build the deep learning model?
Is it because the exponential map has some special characteristics that make convergence easier?
A related question is why you don't consider doing an Euler_Angle to Position conversion for model building?
Thanks for sharing,
Kelvin