Official Technical info on avatars. - Facerig


This topic contains 9 replies, has 7 voices, and was last updated by Jack9333 4 years, 5 months ago.

Viewing 10 posts - 1 through 10 (of 10 total)
  • #59233

    faceriguser
    Member

    Hey folks :).
    There have been quite a few questions about avatar technical info, and we’ve tried to answer as best we could wherever they popped up.
    Some matters, particularly regarding animation and rigging, are still prone to change before Beta, but here’s how things stand now:

    The first version of the avatar files will definitely not use morph targets/blend shapes.
    To build our avatars we use high-res sculpts to get proper normal maps, and then we make the real-time (low-res) version, retopologized from the high-res sculpts.
    All the expressions are built using classic bone-driven animation (with a high number of bones in the face) and up to four influences per vertex.
    Blend shapes will probably be supported in a future version of the avatar files; for now we are starting with classic skinned meshes and a large number of bones to provide the needed degree of deformation control.
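    Since expressions are bone-driven with up to four weighted influences per vertex, the deformation is essentially linear blend skinning. A quick sketch of the idea, with invented data and function names (not FaceRig’s actual formats):

    ```python
    # Minimal linear blend skinning sketch: each vertex is deformed by up
    # to four bones, with weights summing to 1.0 -- the "up to four
    # influences per vertex" mentioned above. All names and data are
    # illustrative, not FaceRig's actual format.

    def skin_vertex(rest_pos, influences, bone_transforms):
        """influences: list of (bone_index, weight) pairs, at most 4
        entries, weights summing to 1.0. bone_transforms: per-bone
        callables mapping a rest-space position to its deformed one."""
        assert len(influences) <= 4
        x = y = z = 0.0
        for bone_index, weight in influences:
            px, py, pz = bone_transforms[bone_index](rest_pos)
            x += weight * px
            y += weight * py
            z += weight * pz
        return (x, y, z)

    # Example: bone 0 translates +1 on X, bone 1 leaves the vertex alone,
    # so a 50/50 weighted vertex ends up halfway.
    transforms = [
        lambda p: (p[0] + 1.0, p[1], p[2]),
        lambda p: (p[0], p[1], p[2]),
    ]
    print(skin_vertex((0.0, 0.0, 0.0), [(0, 0.5), (1, 0.5)], transforms))
    # → (0.5, 0.0, 0.0)
    ```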

    In regard to shaders/materials:
    typically we have up to 9 distinct shaders on avatars
    – geometry-strip hair shader (with anisotropic specular)
    – fur shader, with physics/inertia (must be uniquely mapped, with no major stretching, because of texture-space shading calculations)
    – fur shader, without physics/inertia (must be uniquely mapped, with no major stretching, because of texture-space shading calculations)
    – human skin shader, with subsurface scattering simulation (must be uniquely mapped, with no major stretching, because of texture-space shading calculations)
    – eyeball shader (supports pupil dilation and refraction/reflection, with some simulated occlusion)
    – special material used for eyelid diffuse shadows on the eyeball
    – inside-of-the-mouth shader (must be uniquely mapped, with no major stretching, because of texture-space shading calculations)
    – cloth and metal shader
    – eyelashes/whiskers shader

    In regard to bone count:
    we have avatars ranging from 70 to 120 bones.

    In regard to triangle count:
    we have avatars ranging from 22,000 to 80,000 triangles on the base mesh (though bear in mind that the fur shader can multiply the triangle count in furred areas by up to 32 times).

    In regard to texture maps:
    we use multiple texture maps for each shader, typically the following:
    – diffuse map
    – specular map (with a custom meaning for each of the RGBA channels)
    – normal map (up to 4 distinct normal maps for meshes with animated normals)
    – ambient occlusion map (up to 4 AOs for meshes with animated normals, packed in the 4 channels of the same texture)
    – other material-property maps (dependent on the material)
    Fine-tuning of the texture maps will have to be done with a running FaceRig build, because the default Maya/Max viewport render does not support the rendering features we use, and different lighting equations are used overall, so it is best to fine-tune them while running a copy of FaceRig.
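    The AO packing mentioned above (four grayscale maps in the four channels of one texture) can be sketched as follows; the pixel data and helper names are illustrative, not part of any FaceRig tooling, and a real pipeline would of course work on image files:

    ```python
    # Sketch of packing four grayscale AO maps into the RGBA channels of
    # a single texture. Maps are flat lists of 0-255 values here, purely
    # for illustration.

    def pack_ao_maps(ao_maps):
        """ao_maps: exactly four same-sized grayscale maps. Returns one
        RGBA map as a list of (r, g, b, a) tuples."""
        assert len(ao_maps) == 4
        return list(zip(*ao_maps))

    def unpack_channel(rgba_map, channel):
        """Recover one AO map from the packed texture (0=R, 1=G, 2=B, 3=A)."""
        return [px[channel] for px in rgba_map]

    ao = [[10, 20], [30, 40], [50, 60], [70, 80]]  # four 2-pixel AO maps
    packed = pack_ao_maps(ao)
    print(packed)                     # [(10, 30, 50, 70), (20, 40, 60, 80)]
    print(unpack_channel(packed, 2))  # [50, 60]
    ```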

    In regard to rigging and animation:
    there will be two kinds of specifications for rigging and animation: one for enthusiasts and one for professional character makers.

    Enthusiasts will have to comply with a very strict pipeline, and as long as they stick to it they should get the expected avatar results very easily. They will have a template for the bone structure and fewer animations to make, as the program will handle the proper exports of the animation atomics and the modifications needed to make everything work. These automated interventions are made possible by strict naming conventions.

    For the pros, there are significantly fewer restrictions regarding naming conventions and bone hierarchies, but they will have to be extra careful while making and preparing the animations so that they work properly within the system. They will have much more liberty in deciding how the rig is structured and which bones drive the expression animations, but since they are probably making a very custom avatar, they will have to tell the system how to use it. They will do that by exporting the animations in certain ways and, in some cases, editing a configuration file (defining which animations are base animations, which are additive animations, and other gritty details) 😛 .
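    As a rough illustration of how a strict naming convention lets a tool act automatically on a rig, here is a hypothetical validator; the pattern and bone names are invented and are not the actual FaceRig convention:

    ```python
    import re

    # Hypothetical naming-convention check: if every bone matches a known
    # pattern, a tool can map bones to their roles without artist input.
    # The pattern and rig names below are invented for illustration only.
    BONE_NAME_PATTERN = re.compile(r"^(L|R|C)_(\w+)_(bone|jnt)$")

    def validate_bone_names(names):
        """Return the bone names that do not match the convention."""
        return [n for n in names if not BONE_NAME_PATTERN.match(n)]

    rig = ["L_upper_eyelid_bone", "R_mouth_corner_jnt", "headBone01"]
    print(validate_bone_names(rig))  # ['headBone01']
    ```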

    edited by facerig

    #59235

    TrueChaoS
    Member

    stickied~

    #59236

    lastplace199
    Member

    😯 *scratches head* uuh huh

    #59237

    darth_biomech
    Member

    As far as I understand, I will need to provide an FBX of the model with animation, and each frame in this animation will be one element of the facial expression set – L_upper_eyelid_closed, L_upper_eyelid_open, L_mouth_corner_up, L_mouth_corner_dn, etc. – am I correct? Or will it still be frames, but for whole facial expressions – like sad, happy, mouth opened, surprised, etc.? If so, can you suggest a bare minimum of expressions for an avatar to work? Or will bone position and rotation be controlled procedurally through the FaceRig engine and scripting?

    Also, there is a cloth and metal shader, but will there be a glass shader (maybe with transparency, but without refraction), or some kind of “basic” shader for things that are not metal, skin, fur, or cloth? Plastic, for instance. And for the eyeball shader: will it work correctly with non-spherical eyes, or highly stylized eyes, like Sonic’s (one huge flat monoeye)?

    #59238

    faceriguser
    Member
    darth_biomech wrote:
    As far as I understand, I will need to provide an FBX of the model with animation, and each frame in this animation will be one element of the facial expression set – L_upper_eyelid_closed, L_upper_eyelid_open, L_mouth_corner_up, L_mouth_corner_dn, etc. – am I correct? Or will it still be frames, but for whole facial expressions – like sad, happy, mouth opened, surprised, etc.? If so, can you suggest a bare minimum of expressions for an avatar to work?

    Both approaches will work. If you provide the whole facial expressions, you will have less freedom with the rig, as we will have to compute the differences and deduce the individual elements, and for that we will need strict rig conventions.

    If you provide all the needed individual atomics, so we don’t have to rebuild them from the whole expressions, then you will have more control and freedom with the rig layout.
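    The “computing the differences” step can be pictured as subtracting the neutral pose from the full expression pose per bone. A sketch with invented pose data (real data would carry rotations as well and come from FBX):

    ```python
    # Sketch of deducing per-bone deltas (additive atomics) from a whole
    # facial expression, by differencing against the neutral pose. Poses
    # here are illustrative dicts of bone name -> translation.

    def expression_delta(neutral, expression):
        """Per-bone difference between a full expression and the neutral
        pose; bones that did not move produce a zero delta."""
        return {
            bone: tuple(e - n for n, e in zip(neutral[bone], expression[bone]))
            for bone in neutral
        }

    neutral = {"jaw": (0.0, 0.0, 0.0), "brow_L": (0.0, 1.0, 0.0)}
    surprised = {"jaw": (0.0, -0.5, 0.0), "brow_L": (0.0, 1.25, 0.0)}
    print(expression_delta(neutral, surprised))
    # → {'jaw': (0.0, -0.5, 0.0), 'brow_L': (0.0, 0.25, 0.0)}
    ```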

    Mihai knows better about these matters, and he will chime in a bit later with more info 🙂

    darth_biomech wrote:
    Or will bone position and rotation be controlled procedurally through the FaceRig engine and scripting?

    We have considered this, but we are not pursuing this direction for now. There is too much guesswork involved with this approach, and the results are often creepy.

    darth_biomech wrote:
    Also, there is a cloth and metal shader, but will there be a glass shader (maybe with transparency, but without refraction), or some kind of “basic” shader for things that are not metal, skin, fur, or cloth? Plastic, for instance. And for the eyeball shader: will it work correctly with non-spherical eyes, or highly stylized eyes, like Sonic’s (one huge flat monoeye)?

    When I say shader, I mean optimized GPU code that can be customized with artist input, such as variables and textures, to create a variety of materials.

    The cloth and metal shader, with properly tuned variables and textures, can also do hard plastic.
    The subsurface scattering shader, with properly tuned variables and textures, can also do translucent plastic (or teeth :)).
    The eyeball shader will work for non-spherical eyes. It will also work for a Sonic-style monoeye, but in this case you may want to make the pupil separate geometry gliding on the surface of the eyeball.

    Stuff we left for later:
    – a hair shader for textured fins/strips that supports alpha blending and sorting with respect to the camera direction (this is not trivial in real time, which is why right now the textured fins/strips hair is alpha-test + antialias only, and it looks a bit rough)
    – a glass shader (with simulated reflection AND refraction – not necessarily physically correct :)) that sorts correctly with the above hair shader

    There is plenty more stuff that we left for later, BUT we simply do not have the resources to tackle it all at once. Focusing on these before other, less mathematically complex but more pressing issues, such as interface functionality, would mean launching the program later.

    edited by facerig

    #59239

    dawnchapel
    Member

    I’m gonna have basically a million questions as I muddle my way through the actor creation process but here are my questions for right now.

    Questions relating to the fur system:

    – Can the fur height be set to zero for furless areas like the nose or the inside of the ears? Is fur length handled as a grayscale texture channel or some other way?

    – How is the direction of the fur determined? Orientation of the surface geometry it’s bound to in UV space, or some other way?

    – Is the fur color derived from the diffuse channel of the geometry it’s bound to, or is there a separate color channel for fur? (That is to say, does the ‘skin’ of the character have to be the same color as the fur? If so, is it possible to sidestep this by binding the fur to invisible geometry?)

    – How much of the fur on the furry actors is fur, and how much of it is polygon hair strips?

    – Can I layer two different fur systems on top of each other by binding each one to different geometry?

    Questions not relating to the fur system:

    – Are the ears and noses of the animal actors you’ve built so far discrete geometry, or are they contiguous with the rest of the head mesh? If they’re contiguous, can they also be done as discrete geometry? It seems like this would make customization very easy if you could offer a half-dozen different kinds of noses, ears, horns, etc. for a single actor.

    – Can textures be animated/changed on the fly, driven by facial expressions? This would be useful for creating actors with a low-poly/cartoony style, like Wind Waker or Animal Crossing, where facial features are animated mostly by texture rather than geometry.

    – Can scripted events be triggered by facial expressions reaching a certain value? Like a combination of lifted eyebrows + frowning mouth spawning a particle system of cartoonish sweat, or an angry expression also causing the actor’s ears to pin back the way angry cats’ do, etc.?

    #59240

    Mihai
    Member

    – The fur can have zero length; the artist controls this property through the distance between pair nodes that represent the hair root and hair tip. These pairs should be placed on the surface here and there, like you would place guides in a render hair system.

    – The fur direction is determined by the same pair nodes used to set fur length. The vector between the two nodes gives the fur direction. This direction is expressed in surface tangent space, so it is UV-related.

    – The fur IS the rendered surface. There is no distinction between fur and a surface below; there is no below or above in this regard. The fur is the surface. If the surface is invisible, the fur is invisible.

    – All the fur is made by a real-time shader; all the hair is made from polygon strips.

    – Two fur surfaces that occupy the same space will likely render incorrectly (sorting problems).

    – The avatars can be made from multiple polygon shells. The surface doesn’t have to be continuous.

    – Texture visibility can be animated for diffuse, ambient occlusion, and normal maps. 4 maps are supported per channel.

    – Scripted events are not supported yet, but most expression animations play by percent, not by time, so it’s possible to have certain events at certain percents. It depends on the kind of effect you are after.
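    Because expressions play by percent, event triggers of the kind asked about above could be checked against expression percentages. A hypothetical sketch (expression names and thresholds invented; this is not a FaceRig API):

    ```python
    # Sketch of firing events when percent-driven expressions cross a
    # threshold, e.g. spawning cartoon sweat at a strong brow raise.
    # All names and values are illustrative.

    def check_triggers(expression_percents, triggers):
        """expression_percents: name -> current percent (0.0-1.0).
        triggers: list of (expression, threshold, event) tuples.
        Returns the events whose expression is at or past its threshold."""
        return [
            event
            for expr, threshold, event in triggers
            if expression_percents.get(expr, 0.0) >= threshold
        ]

    state = {"brow_raise": 0.9, "mouth_frown": 0.7}
    triggers = [
        ("brow_raise", 0.8, "spawn_sweat_particles"),
        ("mouth_frown", 0.8, "pin_ears_back"),
    ]
    print(check_triggers(state, triggers))  # ['spawn_sweat_particles']
    ```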

    #59241

    dawnchapel
    Member

    Excellent, thank you!

    #59242

    I stumbled upon some documentation in the program’s folder that describes a lot of animations… While the purpose of some of them can be extrapolated from their names, the use of the rest remains a mystery… So I thought it would be nice if we had some sort of reference illustrations for them. Showing how they should deform the face, I mean.

    #59243

    Jack9333
    Member

    Hello!

    I was wondering if it was possible to implement blend shapes into the facial animations prior to exporting to .fbx, or does the avatar have to be solely joint/skin cluster driven?

    Also, is there some sort of reference that I can take a look at so that I can figure out how to control fur direction when using the fur material?

    Thanks a bunch and keep up the good work!


