Home
Let's build a tool that assists in motion capture by:
- preserving the scene context (WHAT is being acted out in this clip?); and
- minimizing human error (WHEN & WHERE did we leave off in the previous clip?).
We would like to process animations sourced both from existing captures (i.e. from an .FBX file) and from live motion capture (i.e. streamed from OptiTrack Motive).
For a first-pass implementation, I'm using Unity's Mecanim to compute a model-agnostic (muscle-space) representation of animation keyframes and serializing them to disk (JSON for the win).
Again leveraging Mecanim, we can deserialize poses and apply them to a new avatar. Keyframes are linearly interpolated w.r.t. time.
One of the first queries I have for a list of keyframes is "how close is my current pose to each keyframe in the clip?". To start, let's compute a naive error (the sum of absolute error over all muscles). Each keyframe has an associated body pose; feed the per-keyframe error to a line renderer and BAM, you have an intuitive visualization of similarity. This will be useful when aligning an actor to an existing animation clip.