This guide describes the basic principles of video face tracking in FaceRig and teaches you how to calibrate it for a better experience. Keep in mind that FaceRig uses both video and audio tracking for lip movement, so to follow this tutorial (which covers video tracking only) we suggest turning audio Lipsync off.
Expression Units explained. Calibration in general.
Face Tracking is based on Expression Units mapped to specific face features and to the avatar’s gesture animation. Each Expression Unit is defined by a numeric value that changes as you gesture. This value covers a slightly different interval from person to person, which means the minimum and maximum values you reach for each Expression Unit must be configured for your own face. FaceRig’s default Expression Unit intervals were configured to work for each of our team members with 70-90% accuracy.
The Expression Units are used to fine-tune the tracking based on your appearance and/or preferences. For example, if you want your avatar to keep its eyes half closed, this is where you can adjust the values to get that result.
All Expression Units have a disable option, so you can choose whether the avatar should completely ignore certain actions (for example, pursed lips).
Tracker Input Range shows the minimum and maximum values for different expressions. The Min white bar marks the minimum value and has a certain action linked to it (e.g. closed mouth); the Max white bar marks the maximum value and has the opposite action linked to it (e.g. open mouth); and the black bar is the actual value that the tracker outputs based on the user’s actions.
To adjust the interval, just drag the sliders. Drag the min and max markers so that the tracked value (the black marker) can reach both of them while you perform the corresponding face expressions (such as moving your eyebrows up and down). Note that you cannot move the min marker to the right of the max marker, or the max marker to the left of the min marker.
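The calibration above can be thought of as rescaling the raw tracker output into a per-user interval. FaceRig does not publish its internals, so the function name and behavior below are assumptions; this is only a minimal sketch of how a calibrated min/max could map a raw value to a 0..1 expression amount.

```python
def normalize_expression(raw, min_val, max_val):
    """Map a raw tracker value into [0, 1] using a calibrated interval.

    Hypothetical helper (not FaceRig's actual API): illustrates how a
    per-user min/max could rescale the tracked value (the black marker).
    """
    if max_val <= min_val:
        # Mirrors the UI rule: min cannot pass to the right of max.
        raise ValueError("min_val must be strictly below max_val")
    # Clamp to the calibrated interval, then rescale to 0..1.
    clamped = max(min_val, min(raw, max_val))
    return (clamped - min_val) / (max_val - min_val)
```

With a well-calibrated interval, your neutral face lands near one end (e.g. closed mouth at 0) and the full expression reaches the other end (open mouth at 1), which is why both markers should be reachable.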
The Smoothing slider filters the tracked value (the black marker) to reduce jitter/noise caused by various factors (webcam image resolution, frame rate, image blurriness, bad lighting, etc.). Adjusting the Smoothing slider gives you a trade-off:
with higher smoothing values – reduced uncontrollable avatar movements and smoother motion, but also a delayed avatar reaction (it will feel like lag)
with lower smoothing values – responsive avatar movements, but the animation may be a little jittery
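The trade-off above is typical of a simple low-pass filter. FaceRig does not document its exact filter, so the exponential smoothing below is an assumption chosen for illustration; it shows why higher smoothing values reduce jitter but add lag.

```python
def smooth(prev, raw, smoothing):
    """One step of exponential smoothing (an assumed filter type;
    FaceRig's actual filter is not documented).

    smoothing in [0, 1): 0 keeps the raw value (responsive but jittery),
    values near 1 keep mostly the previous value (stable but laggy).
    """
    if not 0.0 <= smoothing < 1.0:
        raise ValueError("smoothing must be in [0, 1)")
    # Blend: more weight on prev = smoother output, slower reaction.
    return smoothing * prev + (1.0 - smoothing) * raw
```

Applied once per tracked frame, a high smoothing value means each new frame shifts the output only a little, which is exactly the "lag" the slider description warns about.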