The timeline editor is a powerful tool for editing and mixing prerecorded FaceRig Performances (FaceRig-specific animations). The output can be exported as a movie (.webm file), an animation (.fbx and .dae files) or a FaceRig Performance (.rpl and .wav files).
A FaceRig Performance is a set of animation parameters that drives the character movement and animations, all saved into a single .rpl file. Additionally, FaceRig also records audio input data and outputs two uncompressed audio files (.wav): the first is the recording with the voice-changing effects applied, while the second is the pristine version without the effects, usually named “name_clean.wav”.
The timeline editor provides the means to crop performances, split performances into different parts and control how different performances blend together.
A simple introduction to how the timeline editor works can be found in the Timeline Editor Essentials section of the Getting Started guide.
The projects created in the Timeline Editor can be saved in a .frsproj file, in order to be opened at a later date or transferred to different computers. The contents of the project are saved in an XML format and contain the following data:
• The path to the workspace
• All the tracks and the subsequent track items
• For the replay items, all the modules are stored
• Blend curves for audio and performances
• All avatar override behaviour and camera switchboard intervals
Note that all the files opened by a project are saved as paths relative to the .frsproj file itself. This means that whenever you want to move a project, all the relative paths between the .frsproj file and the other workspace files must be preserved.
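The idea behind this relative-path scheme can be sketched in Python. This is illustrative only; FaceRig does not expose such an API, and the file names are hypothetical:

```python
import os
from pathlib import Path

def store_path(project_file, asset):
    """Convert an absolute asset path to one relative to the project file."""
    return os.path.relpath(asset, start=Path(project_file).parent)

def resolve_path(project_file, stored):
    """Resolve a stored relative path back to an absolute one."""
    return (Path(project_file).parent / stored).resolve()

rel = store_path("/projects/demo/show.frsproj", "/projects/demo/audio/take1.wav")
print(rel)  # audio/take1.wav
```

Because only the relative portion is stored, copying the whole project folder (the .frsproj file together with its workspace files) to another machine keeps every reference intact.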
The timeline editor is split into three parts: the Rendered Preview, the Workspace Panel and the Tracks Panel.
The rendered preview is the result of all animation blending at a given timestamp. It features simple timeline controls, such as play/pause, step frame forward/backward, go to start/end.
The Workspace Panel displays all the performance and audio files that can be mixed in the Tracks Panel. It features a simple search box for quickly finding a desired file and can filter files by the three extensions specific to FaceRig performances: .rpl, .fca and .wav. When the structure of the workspace changes outside the application, it is recommended to hit the refresh button so that newly added files can be accessed.
The workspace can be set by going to File > Set workspace folder.
The Tracks Panel is where all performance and audio editing and mixing is done. The timeline-based editor is designed as a multi-track panel where performance and audio files can be added, moved, cropped and split. There are three distinct track types: one for performances (.rpl files), one for camera animations (.fca files), and one for audio (.wav files). Each track is blended with the others using a strict set of rules that can be controlled by the user. The user can control the animation modules (see the Modules subsection) that apply to the avatar and the blending curve of each performance. In the case of audio tracks, the modules do not apply and the blending curve refers to the volume of the audio item. Only one camera track can be active at a time, as blending cameras doesn’t make sense. As such, below the tracks panel there is a track called Camera Switchboard, which will be detailed later in the Guideline.
The Tracks Panel supports renaming a track, muting/hiding it, locking it against any further modification, or even soloing it. All these aspects are applied separately to each type of track, performance or audio.
FaceRig Studio supports heterogeneous tracking input, which is interpreted and then mapped/retargeted onto a 3D character. All animation information can be categorized into different groups (mouth, eyes, hands, etc.) so that the user can filter exactly which animations are applied to the avatar. These animation groups are referred to as Modules, and each performance track supports a filter that selects which of these modules are applied to the avatar from the performance items on that track.
The best example is two performances: one contains only face expressions, the other only hand movements. The performances were recorded with two different instances of FaceRig, and you need to mix them together. The solution is to add two tracks, one for each performance; on the face-expressions track you toggle off the hand-movement module, while on the other track you leave only the hand-movement module on. With the two performance items placed one under the other, the system takes only face-expression data from the first and only hand-animation data from the second. The result is an enhanced performance with tracking from two instances of FaceRig, without losing any of the original movement of either performance item.
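The module-filtering step above can be sketched as follows. The module names and the dictionary layout are hypothetical, chosen only to illustrate the idea:

```python
# Each track is a pair: (per-module animation values, set of enabled modules).
# The mix takes each module's value only from tracks that have it enabled.
def mix_tracks(tracks):
    """Combine animation data, taking each module only from tracks that enable it."""
    result = {}
    for data, enabled_modules in tracks:
        for module, value in data.items():
            if module in enabled_modules:
                result[module] = value
    return result

face_take = ({"mouth": 0.7, "eyes": 0.4, "hands": 0.0}, {"mouth", "eyes"})
hands_take = ({"mouth": 0.1, "eyes": 0.0, "hands": 0.9}, {"hands"})
print(mix_tracks([face_take, hands_take]))
# {'mouth': 0.7, 'eyes': 0.4, 'hands': 0.9}
```

The hand data recorded in the face take (and vice versa) is simply ignored, which is why neither original performance needs to be edited before mixing.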
All tracks blend together according to a specific set of rules.
At a given timestamp, all animation values are added with their respective weights and then averaged to produce a normalized value:
1. The blending curve (a percentage between 0 and 100) is applied, giving each item a weight in the output animation result.
2. If the animation weights sum to more than 1.0, they are normalized.
3. If the animation weights sum to less than 1.0, the remainder up to 1.0 is filled with the idle pose.
4. All weights and averages are applied to every module independently.
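The performance-blending rules above can be sketched for a single module at a single timestamp. This is a minimal sketch of the described behaviour, not FaceRig's actual implementation:

```python
def blend(values, weights_percent, idle=0.0):
    """Blend animation values for one module at one timestamp.

    values          -- animation values from each overlapping performance item
    weights_percent -- blend-curve values (0..100), one per item
    idle            -- idle-pose value used to fill any remaining weight
    """
    weights = [w / 100.0 for w in weights_percent]   # rule 1: curve -> weight
    total = sum(weights)
    if total > 1.0:                                  # rule 2: normalize over-unity sums
        weights = [w / total for w in weights]
        total = 1.0
    out = sum(v * w for v, w in zip(values, weights))
    out += idle * (1.0 - total)                      # rule 3: fill remainder with idle
    return out

# Two items at 50% each contribute equally:
print(blend([1.0, 0.0], [50, 50]))   # 0.5
# Two items at 100% each are normalized back to a sum of 1.0:
print(blend([1.0, 1.0], [100, 100])) # 1.0
```

Rule 4 simply means this computation runs once per module (mouth, eyes, hands, ...), each with its own set of values and weights.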
At a given timestamp, all audio items are added together and normalized.
No blending – see the Camera Switchboard explanation.
The user can control the modules of each track and the blending curves of each performance/audio item. A performance can be added by dragging a .rpl file onto a Performance track (colored blue), while audio items can be added by dragging a .wav file onto an Audio track (colored green). You can add the .rpl and .wav files at the same time on different tracks by dragging the replay file and holding Ctrl when releasing. The program will try to find the .wav file with the same name as the .rpl and add it on a separate track.
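The matching step in the Ctrl-drag behaviour can be sketched as a simple same-name lookup next to the .rpl file. This is a guess at the matching logic for illustration, not FaceRig's actual code:

```python
from pathlib import Path

def matching_wav(rpl_path):
    """Return the .wav sitting next to a .rpl with the same base name, or None."""
    candidate = Path(rpl_path).with_suffix(".wav")
    return candidate if candidate.exists() else None
```

For example, dropping "take1.rpl" would look for "take1.wav" in the same folder; if it is missing, only the performance item is added.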
Avatar Override Behaviour Track
The avatar override behaviour track is a track on which you can override certain behaviours of the avatar, as well as change the avatar and background on the fly at any point during playback. This is done by placing intervals across the playback time of the project, each with a set of options that are applied only while the current play time falls within that interval.
In the example below we have two intervals. The first spans from the start of the project (0.0 seconds) up to around the 10-second mark, and the second from 10 seconds onward. Looking at them, we can see they carry different icons, which represent part of the contents of each interval. These icons are meant to show, at a glance, what the interval contains. Most importantly, we can see which avatar and background will be loaded: in the first interval we have Fluffo with the Political Background, and in the second one, Midori with the FaceRig Forest.
In terms of avatar behaviour which we can override, we have the following properties which can be edited for each interval on its own:
• Setup – Represents the Avatar and Background which will be loaded when playing through this interval. By default, the avatar and background values are taken from the setup you had when you entered the timeline. The setup allows you to choose between a set of hotkeys, which are created outside the timeline and store the currently selected avatar and background. By default, the key shortcuts to create a new hotkey are Ctrl + Shift + <number from 0 to 9>.
• Cross eyes – Toggles if cross eyes movement is allowed.
• Eye lids Soft Link Weight – When set to 0 the eye lids move independently of each other, whereas when set to 1 they move symmetrically.
• Eyebrows Soft Link Weight – When set to 0 the eyebrows move independently of each other, whereas when set to 1 they move symmetrically.
• Mouth Corners Soft Link Weight – When set to 0 the smile can be asymmetric, and when it is set to 1 the corners move symmetrically.
• Look at Camera ratio – When set to 0, the avatar uses the performance data for its gaze direction; when set to 1, it overrides the performance and looks at the camera. Values between 0 and 1 blend the two sources.
• Orientate Head To Camera – When enabled it will orientate the head towards the camera.
• Point Left Arm At Camera – Will point the left arm of the avatar towards the camera.
• Point Right Arm At Camera – Will point the right arm of the avatar towards the camera.
• Enable Tracked Blink – When enabled it will use the information from the performance to blink.
• Enable Auto Blink + Auto Blink Behaviour Slider – When enabled, it introduces procedural blinking at a certain frequency and speed, controlled by the adjacent slider: the leftmost value simulates tired blinking, while the rightmost simulates alert blinking.
• Enable Pupil Behaviour + Pupil Behaviour Slider – When enabled it introduces procedural micro movements of the pupil which appear naturally as the eye shifts ever so slightly. The slider adjusts how visible these movements should be.
• Idle Movement Weight – Avatars have an Idle animation by default, so that they don’t look stiff as a brick. When this slider is set to 0, the idle animation is completely removed; when set to 1, the animation is used entirely; values in between represent the ratio with which the animation is blended in.
Keyboard Shortcuts
• Holding Shift – Snapping behavior
• S – Toggle split mode
• Left/Right arrows – Step one frame forward/backward
• Ctrl+Shift+S – Save Project As..