Nicholas G Vitovitch edited this page Nov 20, 2018 · 1 revision

Proposal

Let's build a tool that assists in motion capture by:

  • preserving the scene context (WHAT is being acted out in this clip?); and
  • minimizing human error (WHEN and WHERE did we leave off in the previous clip?).

The story so far …

Input-agnostic animation capture

We would like to process animations sourced both from existing captures (e.g. an .FBX file) and from live motion capture (e.g. streamed from OptiTrack Motive).

For a first-pass implementation, I'm using Unity's Mecanim to compute a model-agnostic representation of animation keyframes, and serializing them to disk (JSON for the win).
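To make the idea concrete, here is a minimal sketch of the serialization step. This is illustrative Python, not the actual Unity C# implementation; the keyframe shape (a timestamp plus a flat list of normalized muscle values, as Mecanim's HumanPose exposes) and the function names are assumptions.

```python
import json

def serialize_keyframes(keyframes, path):
    """Write keyframes to disk as JSON.

    keyframes: list of (time, muscles) pairs, where muscles is a flat
    list of floats (a model-agnostic pose, like Mecanim's muscle values).
    """
    with open(path, "w") as f:
        json.dump([{"time": t, "muscles": list(m)} for t, m in keyframes], f)

def deserialize_keyframes(path):
    """Read keyframes back as a list of (time, muscles) pairs."""
    with open(path) as f:
        return [(kf["time"], kf["muscles"]) for kf in json.load(f)]
```

Because the pose is just a list of muscle values rather than bone transforms, the same JSON file can be retargeted to any humanoid avatar on playback.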

Animation playback

Again leveraging Mecanim, we can deserialize poses and apply them to a new avatar. Keyframes are linearly interpolated w.r.t. time.
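The interpolation step can be sketched as follows. Again this is illustrative Python rather than the Unity C# code, and the keyframe layout (sorted `(time, muscles)` pairs) is an assumption carried over from the serialization format above.

```python
from bisect import bisect_right

def sample_pose(keyframes, t):
    """Linearly interpolate a pose at time t.

    keyframes: list of (time, muscles) pairs sorted by time.
    Clamps to the first/last keyframe outside the clip's time range.
    """
    times = [time for time, _ in keyframes]
    i = bisect_right(times, t)
    if i == 0:
        return list(keyframes[0][1])
    if i == len(keyframes):
        return list(keyframes[-1][1])
    (t0, m0), (t1, m1) = keyframes[i - 1], keyframes[i]
    alpha = (t - t0) / (t1 - t0)
    # Per-muscle lerp between the two bracketing keyframes.
    return [a + alpha * (b - a) for a, b in zip(m0, m1)]
```

Linear interpolation of muscle values is a reasonable first pass; a production playback path might prefer Mecanim's own blending.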

Keyframe similarity

One of the first queries I have of a list of keyframes is: "how close is my current pose to the keyframes in this clip?" To start, let's compute a naive error: the sum of absolute differences over all muscle values. Each keyframe has an associated body pose; apply the per-keyframe error to a line renderer and BAM, you have an intuitive visualization of similarity. This will be useful when aligning an actor to an existing animation clip.
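The naive error described above can be sketched in a few lines. As before, this is illustrative Python under the assumed `(time, muscles)` keyframe layout; the function names are hypothetical.

```python
def pose_error(pose_a, pose_b):
    """Naive dissimilarity: sum of absolute differences over muscle values."""
    return sum(abs(a - b) for a, b in zip(pose_a, pose_b))

def nearest_keyframe(current_pose, keyframes):
    """Index of the keyframe whose pose is closest to current_pose."""
    return min(range(len(keyframes)),
               key=lambda i: pose_error(current_pose, keyframes[i][1]))
```

Computing `pose_error` against every keyframe yields exactly the per-keyframe error curve you'd feed to the line renderer; `nearest_keyframe` then tells the actor where in the clip they currently are.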
