Figure 1: A 3D model in T-pose with bones placed inside it
Modellers build characters in T-pose so that riggers can rig them more easily; it makes placing the bones and adjusting vertex weights and envelopes simpler. When all the bones are placed inside the 3D model and the skinning process starts, the current pose of the skeleton is saved. This pose is called the Binding Pose, and the name tells the whole story: the binding pose is the pose in which you start binding a mesh to its corresponding skeleton.
Now, why should a binding pose be saved? All vertex positions are initially calculated in modelling space, and a skinned mesh's vertices are transformed by its corresponding skeleton: when the skeleton's bones transform, the vertex positions change according to the weights they have for each bone (the bone transformations are calculated in the mesh's modelling space, i.e. object space). If the vertices simply followed the bones' full orientations, the whole mesh would be distorted by the extra rotation (and possibly translation) that the rigger applied to fit the bones into the mesh. The rigger transforms each bone to fit it into the mesh's body, and these transformations (call them the Binding Pose Transformation) must not end up in the final vertex positions. Consider a skinned mesh where changing the bone transformations changes the vertex positions, and look at just one bone rotation. The orientation of the bone in the mesh's modelling space is:
Bone Orientation = Binding Pose Rotation * KeyFrame Rotation
If vertices were affected by this "Bone Orientation" directly, they would be rotated first by the binding pose rotation and then by the keyframe rotation, and that binding pose rotation would distort all the vertices, because the model the modeller created would pick up the extra rotation introduced by the rigger. So for the bones of a skinned mesh's skeleton, a reference pose has to be saved, and this reference pose is called the Binding Pose. It is saved when you apply a skin modifier to your mesh, in other words when you start the skinning process. In character animation, all keyframes of each bone are stored relative to its binding pose, and a skinned mesh deforms based on the relative transformation of each bone with respect to its binding pose. This prevents the mesh from being distorted.
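To make this concrete, here is a minimal sketch of the single-bone example above. It assumes glm-style quaternions with a column-vector convention (the multiplication order flips for row-vector conventions), and all names are illustrative rather than taken from any particular engine:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Minimal sketch: the rotation that is actually applied to the vertices.
// bindPoseRotation is the rotation saved at skinning time, keyFrameRotation
// is the animation rotation stored relative to the binding pose.
glm::quat SkinningRotation(const glm::quat& bindPoseRotation, const glm::quat& keyFrameRotation)
{
    // Bone Orientation = Binding Pose Rotation * KeyFrame Rotation
    glm::quat boneOrientation = bindPoseRotation * keyFrameRotation;

    // Cancelling the binding pose means an identity keyframe gives an identity
    // result, so the rigger's extra rotation never distorts the modelled mesh.
    return boneOrientation * glm::inverse(bindPoseRotation);
}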
For animation blending, each bone has a weight ranging between 0 and 1, where 1 means the keyframe transformation is applied in full and 0 means the bone stays in its binding pose. Any value in between gives a blend between the bone's binding pose and its keyframe transformation.
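For example, blending one bone could look like the following sketch. It assumes glm, it assumes the keyframe is stored as a rotation and a translation relative to the binding pose (as described above), and all names are made up for illustration:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Sketch: blend one bone between its binding pose (weight = 0) and a keyframe
// transform stored relative to the binding pose (weight = 1).
glm::mat4 BlendWithBindPose(const glm::quat& keyRotation, const glm::vec3& keyTranslation, float weight)
{
    // weight = 0 leaves the bone in its binding pose (identity relative transform),
    // weight = 1 applies the full keyframe transformation.
    glm::quat rotation    = glm::slerp(glm::quat(1.0f, 0.0f, 0.0f, 0.0f), keyRotation, weight);
    glm::vec3 translation = glm::mix(glm::vec3(0.0f), keyTranslation, weight);

    glm::mat4 result = glm::mat4_cast(rotation);
    result[3] = glm::vec4(translation, 1.0f);
    return result;
}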
Hi, if you don't mind, could I ask some animation-related questions? I'm learning to do animation using OpenGL and COLLADA.
Hi,
Yes of course, why not :)
Thanks. Sorry I replied late, and sorry this post is a little long.
The 3D model format I use is COLLADA. From what I have searched, to calculate the skinning matrix I need each joint's relative matrix, absolute matrix, animation matrix and inverse bind matrix. If I understand correctly, the relative matrix is stored as-is, and the absolute matrix needs to be calculated by multiplying the joint's relative matrix by its parent's absolute matrix. For the skinning equation I also need the inverse bind matrix, which, if I guess correctly, is located under library_controllers. My questions are:
1) Are the matrices in library_visual_scene the local transform matrices of the bones, and are they bind pose matrices?
2) Why do I need the absolute matrix (the one acquired by multiplying the relative/local transform matrix by its parent's absolute matrix)? It doesn't appear in the skinning equation.
3) Are the matrices in library_controllers the inverse bind matrices?
4) What do the absolute, relative and inverse bind matrices do?
5) I found the skinning equation on this website: http://http.developer.nvidia.com/GPUGems/gpugems_ch04.html . It is weight of vertex * JointMatrix (I suppose) * inverse bind matrix * vertex.
I don't know what the JointMatrix in that equation is.
6) When I export an animation from Blender, I see a series of matrices for each bone in library_animation. I think they correspond to the keyframe times,
but how do I use these matrices (I call them animation matrices)?
7) What approach can I take to calculate the skinning matrix?
Hello again,
Actually I haven't used COLLADA before, but all exporters usually produce the same data with different arrangements and sizes. However, I exported a sample COLLADA file and had a short look at it, so here are the answers:
1- The matrices in library_visual_scene are not the inverse binding pose matrices. I think they show the current transformation of the bones in the scene, but I'm not sure; as I said, I haven't worked with COLLADA before.
2- You need the absolute matrices for the skinning process. Since vertex positions are calculated in modelling space (or world space), you need the absolute matrices.
3- If you open the file and search for "INV_BIND_MATRIX", you will see it refers to the controller section, so the matrices in the controller section might be the inverse bind pose transformations.
4- Relative transformations are used for animation. If you interpolate the absolute matrices you will get unnatural results during keyframe interpolation: it can tear the skeleton apart because of the translation data in them, while interpolating the joints in parent space (or better, with respect to each joint's binding pose) gives a better and more natural result.
The absolute matrix is computed from the relative matrices and is used for the skinning process or any other operation that needs the absolute transformation of the bones, like physics calculations or inverse kinematics. You don't need to save the absolute matrices; you can always compute them from the relative ones. In some cases you may cache absolute matrices, for example for low-detail NPCs.
The inverse bind matrix is used to ignore the initial transformation of the bones. When you rig a character, you move, rotate and possibly scale the bones to place them correctly inside the mesh; this transformation is called the binding pose. After that you start the skinning process. If you applied the binding pose to the mesh, the mesh's initial shape would be messed up, so you always have to multiply each bone's inverse bind pose with its absolute matrix to get back to the initial pose of the mesh.
5- The equation is correct, and the joint matrix here is the absolute matrix that you calculated in modelling space (not world space).
6- Keyframes are just transformations paired with a local time. When you start the animation you count the time, find the two active keyframes for the current time, and interpolate between them. Don't forget to interpolate each bone in its parent space first, then calculate the absolute transformation for each bone and use that for the skinning process (see the sketch after point 7).
7- Each vertex has weights for several bones, so its final transformation is the weighted sum of the equation you wrote in your 5th question:
((weight1 of vertex * JointMatrix1 * inverse bind matrix1) + (weight2 of vertex * JointMatrix2 * inverse bind matrix2) + ... + (weightN of vertex * JointMatrixN * inverse bind matrixN)) * vertex
The number 'n' rarely goes beyond 4 for performance reasons, but it depends on how many bone influences per vertex your exporter writes out.
There are many approaches to optimizing the skinning process, like GPU skinning, using SIMD operations on the CPU or applying dirty flags to the bones, but the overall process is just averaging the vertices based on the weights of their corresponding bones.
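As a rough sketch of points 6 and 7 together (glm with column vectors is assumed, the Keyframe and Bone structures are made up for illustration and are not COLLADA data, bones are assumed to be sorted so parents come before their children, and each bone is assumed to have at least two keyframes):

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Illustrative structures (not collada data): one animation channel per bone,
// keyframes stored in parent space and sorted by time.
struct Keyframe { float time; glm::quat rotation; glm::vec3 translation; };
struct Bone     { int parent; std::vector<Keyframe> keys; glm::mat4 inverseBind; };

// Point 6: find the two active keyframes for the current time and interpolate
// between them in parent space (at least two keyframes are assumed).
glm::mat4 SampleBone(const Bone& bone, float time)
{
    size_t i = 1;
    while (i < bone.keys.size() - 1 && bone.keys[i].time < time) ++i;
    const Keyframe& a = bone.keys[i - 1];
    const Keyframe& b = bone.keys[i];

    float t = glm::clamp((time - a.time) / (b.time - a.time), 0.0f, 1.0f);
    glm::mat4 local = glm::mat4_cast(glm::slerp(a.rotation, b.rotation, t));
    local[3] = glm::vec4(glm::mix(a.translation, b.translation, t), 1.0f);
    return local;
}

// Absolute matrices in the mesh's modelling space, then the per-bone matrices
// used for skinning. Bones are assumed to be sorted parents-before-children.
void BuildSkinMatrices(const std::vector<Bone>& bones, float time, std::vector<glm::mat4>& skin)
{
    std::vector<glm::mat4> absolute(bones.size());
    skin.resize(bones.size());
    for (size_t i = 0; i < bones.size(); ++i)
    {
        glm::mat4 local = SampleBone(bones[i], time);
        absolute[i] = (bones[i].parent < 0) ? local : absolute[bones[i].parent] * local;
        skin[i] = absolute[i] * bones[i].inverseBind;   // JointMatrix * inverse bind matrix
    }
}

// Point 7: the weighted sum, here with the usual four influences per vertex.
glm::vec4 SkinVertex(const glm::vec4& position, const int boneIndex[4],
                     const float weight[4], const std::vector<glm::mat4>& skin)
{
    glm::mat4 blended(0.0f);
    for (int k = 0; k < 4; ++k)
        blended += weight[k] * skin[boneIndex[k]];
    return blended * position;
}

The same weighted sum is what a vertex shader evaluates if you move this work to GPU skinning.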
Hope this helps :)
Hi, sorry I forgot to reply, and thanks for your help. If I may, I want to ask another question: if there's a keyframe at 0:02, should I interpolate the joint matrix (the matrix used for animation in the weight equation) during those two seconds? If so, how can I interpolate the matrix?
Hey, Nice blog...and Really helpful...
Thank you
http://www.gameyan.com/3d-character-animation.html
Thanks :)
Hi, I have a question regarding animation:
1. I have a hand mesh which I want to animate.
2. I have the skeleton, which can be animated hierarchically.
3. My mesh is also weighted in Blender, so each vertex has 4 associated bones it can be affected by.
4. When I apply the animation of my skeleton to the mesh, the hierarchy is applied (so the hierarchy of the mesh matches the hierarchy of the skeleton).
So far so good. Now the question: the fingers look stretched (it's like the fingers were smashed by a heavy door). Why?
Note: I didn't apply the bind pose bone transformation matrix explicitly, but I read about it and I believe its functionality is already there, in the hierarchical transformation I have for my skeleton.
If you need more clarification of the steps, please ask.
Hi Amin,
So it seems that you have skinned a mesh in Blender and imported it into a custom real-time 3D application based on D3D or OpenGL? If that's true, then you need to take the binding pose into account to obtain the final transformation of the vertices in modelling space. The squashed hand pose is most likely caused by the binding pose: the rigger might have scaled the bones to put them in the right place in the mesh.
When you create a bone in a DCC tool like Blender, it has a default direction and size. The rigger changes its direction and size to place it correctly in the mesh's body. This extra transformation is called the binding pose and must not be applied to the final vertex positions while skinning. You need the binding pose to find the exact world transformation of the bones, but it should not end up in the vertices' final transformations.
So after you have calculated the bone matrices in the mesh's modelling space, you need to deduct the binding pose from each bone matrix, like this:
Final_Bone_Matrix_For_Skinning = BindingPose.Inverse * BoneMatrix
Each bone has a binding pose, and you have to multiply the inverse binding pose of each bone with that bone's transformation matrix. Note that the BoneMatrix and BindingPose matrices here are both considered to be in the mesh's modelling space.
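As a small sketch with glm (which uses column vectors, so the order flips to BoneMatrix times the inverse binding pose, matching the JointMatrix * inverse bind matrix form mentioned earlier; with a row-vector convention such as D3DX the order written above applies; the names are illustrative):

#include <vector>
#include <glm/glm.hpp>

// boneMatrices and bindPoseMatrices are both expressed in the mesh's modelling space.
std::vector<glm::mat4> BuildSkinningMatrices(const std::vector<glm::mat4>& boneMatrices,
                                             const std::vector<glm::mat4>& bindPoseMatrices)
{
    std::vector<glm::mat4> result(boneMatrices.size());
    for (size_t i = 0; i < boneMatrices.size(); ++i)
    {
        // Deduct the binding pose: with glm's column vectors this reads
        // BoneMatrix * Inverse(BindingPose); in a row-vector convention the
        // order is the one given above.
        result[i] = boneMatrices[i] * glm::inverse(bindPoseMatrices[i]);
    }
    return result;
}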
Hi Peyman,
Thank you for the informative response. I now understand why I need such a matrix.
Do you know how I should access that matrix through the FBX SDK? I also have an FBX loader which imports the FBX data into C++, but I am a bit confused about which one of these could be the bind pose matrix of the bone:
const Math::V3View& globPos() const;
const Math::RotMat& globOrient() const;
const Math::RigidTransform& globTransf() const;
const Math::RigidTransform& parentToProximal() const;
const Math::RigidTransform& proximalToDistal() const;
Math::RigidTransform distalToProximal() const;
The point is that, with the loader I have, I cannot instantiate the abstract class related to the above methods (the class seems to be incomplete). I would be glad if you could show me how I can use the FBX SDK.
Hi Amin,
Unfortunately I haven't worked with the FBX SDK before, but every skeletal animation exporter has to export the binding pose into its respective file, so there should be a method that loads the binding pose data. From the names of the methods above, I think none of them is responsible for obtaining the binding pose transforms.
If the class you're working with is abstract, then you should either implement it yourself or find another class in the SDK that implements it.
In the worst case, if you don't find any method to load the binding pose, you can export a one-frame animation from the DCC tool (Blender here) with the skeleton in its binding pose and use that animation data as the binding pose in your application. That said, this technique is not recommended, since the binding pose is surely stored in the skeleton data structure you are already using.
Dear Peyman,
Thanks for your reply.
Yes, you were right. After a detailed investigation of the loader's data structure, I could retrieve the binding pose data of the skeleton.
However, I applied the inverse of those matrices to the bone transformations, but I did not get what I wanted. Actually, this time when I apply the bind pose matrices to the rest pose, I do not get a correct mesh!
This makes me believe that what I said at first is more correct in my case, that is, I do not need the bind pose matrices, since the functionality of these matrices is implicitly there (e.g. in the steps I have already stated), because for animation I use my own setup of the skeleton (not the Blender one). So for each bone transformation I already transform the bone to the origin, calculate the transformation matrix, apply the hierarchy and skin the corresponding vertices.
Please tell me what you think.
Hi Amin,
The way you are calculating the bone transformations can cause plenty of issues, especially with interpolation, and I'm not sure how you are calculating the matrices or in which space the keyframe transformations are stored. Are they stored in parent space? Are they stored in their corresponding binding pose space, or are you using keyframed bone transformations in modelling space? Each case leads you to calculate the bone transformations in the hierarchy differently.
You can send me part of your hierarchy matrix calculation code along with screenshots of the correct animation in Blender, the corrupted animation in your own app, and the character in its binding pose. I can help you more that way.
Also try to find out in what space the animation transformation data is stored. That can help you a lot.
Dear Peyman;
Thanks for the post.
I should say I have no animation in Blender. I think this is the part of my software that causes confusion.
What I have is just a still mesh (at its rest pose), together with the rigging information, which means: weights, weight indices and bind pose matrices.
Regarding the way I calculate my bone transformation matrices, I can describe it as follows.
Let's consider one pose only for now; handling a number of poses for animation follows the same principle:
1. I assign a rotation to each limb: XYZ degrees.
2. Then I calculate the transformation matrices of each limb (in world space).
3. Then I retrieve the bind pose matrices for each limb.
4. Within a method called HierarchyApplied(), I apply these transformations to all children of the limbs, as follows:
vector<glm::mat4> Posture1Hand::HierarchyApplied(HandSkltn HNDSKs, vector<glm::mat4> BindPoseMatrices, string test){
    vector<glm::mat4> WorldMatrices;  WorldMatrices.resize(HNDSKs.GetLimbNum());
    vector<glm::mat4> OffSetMatrices; OffSetMatrices.resize(HNDSKs.GetLimbNum());
    vector<glm::mat4> Matrices;       Matrices.resize(HNDSKs.GetLimbNum());
    // non-hierarchical matrices: each limb's own transform and inverse bind pose
    for (unsigned int i = 0; i < Matrices.size(); i++){
        WorldMatrices[i]  = newPose[i].getModelMatSkltn(HNDSKs.GetLimb(i).getLwCenter());
        OffSetMatrices[i] = glm::inverse(BindPoseMatrices[i]);
    }
    // propagate each limb's matrices to its children
    for (unsigned int i = 0; i < Matrices.size(); i++){
        auto children = HNDSKs.GetLimb(i).getChildren();
        for (unsigned int j = 0; j < children.size(); j++){
            WorldMatrices[children[j]->getId()]  = WorldMatrices[i] * WorldMatrices[children[j]->getId()];
            OffSetMatrices[children[j]->getId()] = (BindPoseMatrices[children[j]->getId()]) * OffSetMatrices[i] * glm::inverse(BindPoseMatrices[children[j]->getId()]);
        }
    }
    // final per-limb matrices handed to the skinning step
    for (unsigned int i = 0; i < Matrices.size(); i++){
        Matrices[i] = OffSetMatrices[i] * WorldMatrices[i];
    }
    return Matrices;
}
Explanation of the code above:
To visualize the pose correctly I need two matrices for each bone: 1. WorldMatrix, 2. OffSetMatrix.
1. The WorldMatrix of each bone is the transformation matrix of the bone (using the XYZ degrees) combined with its parents' matrices in the hierarchy. Say we have the transformation matrices of the upper, middle and lower part of a finger as T1, T2, T3; then the transformation of the upper part after applying the hierarchy is: T = T1*T2*T3.
2. The OffSetMatrix is calculated using the inverses of the bind pose matrices as follows: say we have M1, M2, M3 as the bind pose matrices of the upper, middle and lower part of a finger; then the OffSetMatrix of the upper part of the finger is: M = Inv(M3)*Inv(M2)*Inv(M1).
Then the transformation matrix which should be applied to a vertex has the following form: TrV = T*M.
Regarding the screenshot, I don't know how I can include a screenshot here. Can you please show me the way?
For screenshots, you can upload them to any third-party service like Dropbox or Google Drive and share the link here.
Regarding the code, from what I've seen so far, the issue is in the way you are calculating the final matrix. It should be something like this:
FinalTransformation = Inv(M1) * T1 * Inv(M2) * T2 * Inv(M3) * T3
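A rough sketch of that accumulation (with glm and made-up names: chain is assumed to list the limb indices from the root down to the limb being skinned, T to hold each limb's transformation matrix and M its bind pose matrix):

#include <vector>
#include <glm/glm.hpp>

// Walks the chain from the root down to the limb, interleaving each limb's
// inverse bind pose with its transformation, as in the formula above.
glm::mat4 FinalTransformation(const std::vector<int>& chain,
                              const std::vector<glm::mat4>& T,
                              const std::vector<glm::mat4>& M)
{
    glm::mat4 result(1.0f);
    for (int limb : chain)                  // Inv(M1)*T1 * Inv(M2)*T2 * Inv(M3)*T3 ...
        result = result * glm::inverse(M[limb]) * T[limb];
    return result;
}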