This post is not related to animation. I just want to share a research paper that Amir H. Fassihi and I wrote together for Dead Mage about dynamic AI difficulty, so check it out if you are interested in applying computational intelligence to computer game AI:

Peyman Massoudi, Amir H. Fassihi, *"Achieving Dynamic AI Difficulty by Using Reinforcement Learning and Fuzzy Logic Skill Metering"*, IEEE International Games Innovation Conference, 2013

Modellers almost always build 3D character models in a pose known as the T-pose, where the character stands upright with the arms stretched out so the body forms a T shape (Figure 1).

Modellers model characters in T-pose so riggers can rig them more easily: it makes placing bones and adjusting vertex weights and envelopes simpler. When all the bones are placed in the 3D model and the skinning process starts, the current pose of the skeleton is saved. This pose is called the binding pose, and the name tells the whole story: the binding pose is the pose in which you start binding a mesh to its corresponding skeleton.

Now why should a binding pose be saved? All vertex positions are initially computed in modelling (object) space, and a skinned mesh's vertices are transformed by its corresponding skeleton. When the skeleton's bones transform, the vertex positions change according to the weight each vertex has for each bone (bone transformations are expressed in the mesh's modelling space). The rigger rotates and translates each bone to fit it inside the mesh's body; call these adjustments the binding pose transformation. These transformations should not contribute to the final vertex positions, because they are extra rotation and translation that would distort the whole mesh. To see why, consider a skinned mesh where changing a bone's transformation moves the vertices, and look at just one bone's rotation. The orientation of the bone in the mesh's modelling space is:

Bone Orientation = Binding Pose Rotation * KeyFrame Rotation

If the vertices were driven directly by this combined "Bone Orientation", they would be rotated first by the binding pose rotation and then by the keyframe rotation, and the binding pose rotation would distort all the vertices, because the model the modeller created would receive extra, unnecessary rotations that the rigger introduced only to fit the bones. So for the bones of a skinned mesh's skeleton, a reference pose has to be saved, and this reference pose is called the binding pose. It is saved when you apply a skin modifier to your mesh, in other words when you start the skinning process. In character animation, all keyframes for each bone are stored relative to its binding pose, and the skinned mesh deforms according to each bone's transformation relative to its binding pose. This prevents the mesh from being distorted.

For animation blending, each bone has a weight ranging between 0 and 1, where 1 means the keyframe transformation is applied in full and 0 means the bone stays in its binding pose. Any value in between produces a transformation blended between the bone's binding pose and its keyframe transformation.
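As a rough sketch of this per-bone weighting, assuming rotations are stored as (w, x, y, z) quaternions and using normalized linear interpolation as a cheap blend (all names here are illustrative, not from any particular engine):

```python
import math

def nlerp(q0, q1, t):
    """Normalized linear interpolation between two (w, x, y, z) quaternions."""
    if sum(a * b for a, b in zip(q0, q1)) < 0.0:  # pick the shorter arc
        q1 = tuple(-c for c in q1)
    q = tuple((1.0 - t) * a + t * b for a, b in zip(q0, q1))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

IDENTITY = (1.0, 0.0, 0.0, 0.0)  # binding pose: no extra rotation

def blend_bone_rotation(keyframe_q, weight):
    # weight 0 -> binding pose, weight 1 -> full keyframe rotation
    return nlerp(IDENTITY, keyframe_q, weight)
```

With weight 0 the bone keeps its binding pose (identity, since keyframes are stored relative to it), and with weight 1 the keyframe rotation is applied in full.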

Figure 1: A 3D model in T-pose with its bones placed inside it


In most games, walk and run animations are blended together with the analog stick: depending on how far the stick is pushed, the two animations (usually walk and run) blend. The problem with blending walk and run is that the two animations have different timings, so blending them naively produces an unexpected motion. In this post I want to talk about how we can blend two animations with different timings, such as walking and running.

Before going further, let's consider human jogging. Jogging is a movement between running and walking: not as slow as a walk, not as fast as a run. Since jogging is partially a run and partially a walk, the strides and arm swings should be longer than in walking but shorter than in running. We can say that in jogging the bone transforms are averaged between the run and the walk, and that is exactly what animation blending does: a weighted average of the keyframes of different animations. So the jogging poses can be achieved by blending between the run and walk animations. One more thing to consider is that the jogging speed also lies between the run and walk speeds: jogging is slower than running and faster than walking. To achieve a jog animation by blending walk and run, we therefore also need to blend the speeds of the two animations.

Now let's consider how we should blend walk and run to make a jog animation. It starts with the animators: they should author walk and run with the same normalized timing. For example, if in the walk animation the left foot starts planting on the ground at normalized time 0.5, then the left foot in the run animation should start planting at normalized time 0.5 too; and if the right foot in the walk plants at normalized times 0 and 1 (the loop poses), then the right foot in the run should plant at 0 and 1 as well.

After making walk and run animations with the rules I mentioned, you can blend the speeds of the two animations with the same blend factor used for animation blending:

a = blend_factor, where 0 <= blend_factor <= 1

T1 = walk_length (in seconds)

T2 = run_length (in seconds)

T1 > T2

Most of the time we blend animations linearly, so if we use linear animation blending we need to blend the speeds of the walk and run animations linearly as well. This gives a linear equation for the time length of the jog animation:

jog_length = (T2 - T1) * a + T1

where the jog animation is the result of linearly blending the walk and run animations with blend factor 'a', and jog_length is the time length of the jog animation.

As the final step, we change the playback rate of both the walk and run animations to achieve the blended speed:

Walk.PlayBackRate = T1 / Jog_length

Run.PlayBackRate = T2 / Jog_length
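Putting the equations above together, a minimal sketch in Python (the function name is hypothetical):

```python
def blend_locomotion(walk_length, run_length, blend_factor):
    """blend_factor 0 -> pure walk, 1 -> pure run (lengths in seconds)."""
    # jog_length = (T2 - T1) * a + T1
    jog_length = (run_length - walk_length) * blend_factor + walk_length
    walk_rate = walk_length / jog_length   # Walk.PlayBackRate
    run_rate = run_length / jog_length     # Run.PlayBackRate
    return jog_length, walk_rate, run_rate
```

Feeding the same blend factor into this function and into the pose blend keeps speeds and poses in sync.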

To avoid problems such as floating-point round-off errors, you can set Run.NormalizedTime to Walk.NormalizedTime instead of setting Run.PlayBackRate.

Make sure to change the speed of both animations before calling your blend function; otherwise you will get unexpected results.

In the previous post, Euler angles and quaternions were compared, and we studied the main reasons unit quaternions became the dominant rotation representation in graphics and game engines. Now we know that we can't run away from quaternions if we want to become animation, gameplay or graphics programmers, so let's check out two important features of quaternions.

First let's look at multiplying a vector by a quaternion. A vector can be transformed by a quaternion, and this transformation can both scale and rotate the vector. If we multiply a vector by a unit quaternion, it is only rotated, with no scaling. This is why graphics and game engines normalize quaternions before applying them to vectors. After normalizing the quaternion, a vector can be rotated with this equation:

Result = q * v * inverse(q);

where q is a unit quaternion, v is the vector we want to rotate, and Result is v after being rotated by q. One of the most important properties of quaternion multiplication is that it is not commutative: q1*q2 is not equal to q2*q1. This order of multiplication matters for many animation algorithms, such as animation blending techniques. You can see its importance in additive animations, which are built from the difference of at least two animations and are mostly used for asynchronous events. For example, you can create a breathing additive animation in which only the character's spine bone rotates in and out while the other bones have no transformation; this additive can be layered on top of any animation, such as idle aiming, running and many more. When we want to add an additive animation to an existing one, we have to respect the order of quaternion multiplication: the main animation is evaluated first, then the additive comes on top of it, so the rotations must be composed in that order.
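As a minimal sketch of the Result = q * v * inverse(q) formula above, with quaternions stored as (w, x, y, z) tuples (the helper names are illustrative, not from any particular engine):

```python
def q_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_inverse(q):
    w, x, y, z = q
    return (w, -x, -y, -z)  # conjugate equals inverse for a unit quaternion

def rotate(q, v):
    # Result = q * v * inverse(q), with v embedded as the pure quaternion (0, v)
    w, x, y, z = q_mul(q_mul(q, (0.0,) + tuple(v)), q_inverse(q))
    return (x, y, z)
```

For instance, a 90-degree rotation about z sends (1, 0, 0) to (0, 1, 0); and swapping the operands of q_mul generally changes the result, which is the non-commutativity described above.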

Now let's consider a simple example in which q1 is a rotation from the main animation and q2 is a rotation from the additive animation. The order of additive animation blending should be:

1- First apply q2 to the current vector v:

Result1 = q2 * v * inverse(q2)

2- Then apply q1 to Result1:

Result2 = q1 * Result1 * inverse(q1) = q1 * q2 * v * inverse(q2) * inverse(q1)

So you can see that if you want to add one animation on top of another, you have to multiply them like this:

q1 * ... * qn-1 * qn * v * inverse(qn) * inverse(qn-1) * ... * inverse(q1)

This means your vector is rotated first by qn, then by qn-1, qn-2, ..., and finally by q1.

The quaternion order used in the example above gives local additive rotations: the additive is applied in each bone's local space. If you reverse the order you get global additive blending: the main animation rotates the bones first, and the additive animation then rotates them in world space. Global-space additives are rarely used in game or graphics engines; I've only seen them in 3ds Max CAT, and 3ds Max is a DCC tool, not a game engine.

Note that most graphics and game engines overload quaternion-vector multiplication with operator '*', so you can replace 'q * v * inverse(q)' with just 'q * v'. If you are using an engine or API that overloads quaternion-vector multiplication, the multiplication order mentioned above becomes:

q1*....qn-1*qn

Now let's consider another important feature of unit quaternions. You can think of the imaginary vector part of a unit quaternion as the axis your vector rotates around, and the scalar part as a value encoding the amount of rotation around that axis. The imaginary vector part has a useful property: its magnitude equals sin(a/2), where a is the angle (in degrees or radians) you want your vector to rotate around that axis. Likewise, the scalar part of a unit quaternion equals cos(a/2). So whenever you want to create a unit quaternion that rotates your object around an axis [x, y, z] by an angle 'a', you can follow these steps in order:

1- First normalize your rotation axis:

[x,y,z]/ length( [x,y,z] )

2- Multiply the normalized rotation axis by sin(a/2) and use the result as your imaginary vector part.

3- Use cos(a/2) as your quaternion scalar part.
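The three steps above can be sketched in Python (the helper name is hypothetical; quaternions as (w, x, y, z)):

```python
import math

def quat_from_axis_angle(axis, degrees):
    """Build a unit quaternion (w, x, y, z) rotating by `degrees` about `axis`."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n                 # step 1: normalize the axis
    half = math.radians(degrees) / 2.0
    s = math.sin(half)                            # step 2: axis * sin(a/2)
    return (math.cos(half), x * s, y * s, z * s)  # step 3: w = cos(a/2)
```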

For a better understanding, let's consider an example in which you want to rotate your object 90 degrees around the y-axis. First you have to normalize the rotation axis, [0, 1, 0]; in this example it is already normalized, so nothing changes. Second, we create the imaginary vector part by multiplying the rotation axis by sin(90/2):

sin (90/2) = sin (45) = 0.7071

Imaginary vector part = [0, 1, 0] * 0.7071 = [0, 0.7071, 0]

In the third step, we calculate the scalar part of our quaternion, which equals cos(90/2):

w = cos (45) = 0.7071

The final quaternion, in (w, x, y, z) order, is:

[ 0.7071, 0, 0.7071, 0]

You can also write it in its mathematical form:

0.7071 + 0i + 0.7071j + 0k = 0.7071 + 0.7071j

This also works in reverse: when you have a unit quaternion and want to know how much it rotates your vector, you can recover the angle like this:

DesiredDegree = 2 * ArcCos(w)
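A tiny sketch of that recovery (the function name is hypothetical; clamping guards the ArcCos domain against round-off):

```python
import math

def rotation_degrees(q):
    """Recover the rotation angle from a unit quaternion (w, x, y, z)."""
    w = max(-1.0, min(1.0, q[0]))  # clamp w into [-1, 1] before acos
    return math.degrees(2.0 * math.acos(w))
```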

Don't forget that a unit quaternion can only rotate an object between 0 and 180 degrees, because the arc-cosine function is used to represent the rotation. So if you want to rotate an object more than 180 degrees over time, you have to do it with more than one rotation. For example, to rotate your object 240 degrees linearly over a time t, you can first slerp 120 degrees during the first t/2 and then slerp another 120 degrees during the second t/2.
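For reference, a common textbook-style slerp sketch (not taken from any particular engine; quaternions as (w, x, y, z)):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                   # nearly parallel: fall back to nlerp
        q = tuple((1.0 - t) * a + t * b for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

Chaining two 120-degree slerps back to back, as described above, is how a 240-degree turn can be driven over time.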

Conclusion

Most game and graphics engines use unit quaternions for rotations, and learning to work with them is very important. Some people prefer Euler angles because they are easier to understand, but the graphics or game engine converts those Euler angles to unit quaternions anyway, for the reasons mentioned in this post. By working directly with unit quaternions, the Euler-to-quaternion conversion step can be omitted.

In this post, two important features of quaternions and unit quaternions were covered. These features come up frequently in animation, gameplay and graphics programming, so it's good to keep them in mind.


I've made a new character animation reel with some of my animations from the past three years. You can check it out here:

https://vimeo.com/64880103


Quaternion algebra is used to transform one vector into another in 3D graphics, and unit quaternions (quaternions with magnitude equal to one) are widely used to represent rotations. Almost all 3D applications, graphics and game engines use unit quaternions to represent orientation; with them, each rotation can be represented uniquely relative to a reference. Although Euler angles are much easier to understand than quaternions, unit quaternions provide some real benefits in comparison. I've listed the three most important ones here:

1- Less Computational Overhead:

Quaternions have two parts: a scalar part known as 'w' and an imaginary part known as (x, y, z), so a quaternion can be represented as a four-element vector (w, x, y, z). Rotations built from Euler angles, by contrast, are typically composed as 3x3 matrices. Thanks to its compact vector representation, composing quaternions incurs less computational overhead than composing those matrices, and quaternions need less memory as well.

2- No Gimbal Lock:

The Euler angle rotation system can run into a problem called gimbal lock, which occurs when one of the degrees of freedom is lost. Unit quaternions have no such problem for rotation in 3D space, and this is the main reason they are the preferred representation in computer graphics.

3- Better keyframe interpolation:

Keyframes in 3D animation mostly contain 3D rotations, and interpolating between them is very important. Slerp (spherical linear interpolation) of quaternions gives good results when interpolating between two keyframes: it follows the shortest arc on the rotation sphere from the source orientation to the destination, which is what we need most of the time, especially in character animation. I've provided a video showing the difference between the two interpolations: the box on the left uses Euler XYZ and the one on the right uses quaternion slerp for rotation interpolation, both with the same two keyframes. You can see the differences:

In the next post I will talk more about the important features of unit quaternions, as understanding them is very important for animation programming.


High heels began as a handy tool for Persian horsemen, who hooked the heels into the stirrups to gain better control while shooting and riding. Although the tale of how high heels went from a horseman's tool to indispensable women's footwear is interesting, this post does not focus on their historical evolution; it intends to show how high heels affect people's walking and stance. Do not forget that female heroes in video games are very powerful, but they are not gorillas: they can jump higher than Vlasic, hit harder than Sharapova and walk sexier than fashion models. It's all about entertainment; people prefer to control a pretty girl rather than a female gorilla. Female heroes mostly wear high heels throughout their journeys, so let's study how high heels affect human movement.

Let's start with the standing idle. Wearing high heels lowers the base of support and moves the center of pressure on the feet up near the ball joint (which is not healthy at all). So someone wearing high heels has to shift her weight forward so that her center of mass lies on the same vertical line as the center of pressure of her feet; this vertical line is known as the line of gravity, and staying on it is essential for balance. To shift the center of mass forward, the hips move forward, which curves the spine: the hips and stomach push forward, the rib cage extends a little because of the leaning hips, and the resulting spinal curve raises the buttocks higher. Besides these effects, walking in high heels keeps the calf muscles contracted, which makes the legs look curvier, and those curves attract people; this is a rule not just for humans but for mammals generally. So for the stance pose, raise her buttocks higher, move her hips and rib cage forward, and bend her spine back so it becomes curvier.

Now let's consider walking. First, the double-support phase lasts longer, because gaining balance during the single-support phase is harder with a lower base of support. This makes strides shorter than when walking in normal shoes, but the art of the catwalk is that fashion models can take strides as long as in normal shoes. Since our female heroes are fashion models too, we ignore the short strides.

During the single-support phase, when just one foot is on the ground, the hip adducts toward the center of mass. This happens because the line of gravity has to pass through the base of support to maintain balance, a process also known as weight shifting. Weight shifting makes the hips swing left and right as each foot touches the ground: left when the left foot lands, right when the right foot lands. High heels provide a smaller base of support, so the weight has to shift further than with normal shoes for the line of gravity to reach the base of support, which during single support is the area of the shoe in contact with the ground. The hips therefore swing further left and right, and this makes walking in high heels more attractive than walking in normal shoes.

So far we have considered the lift and stride phases; now let's consider the plant phase. Walking in normal shoes, the knees extend smoothly so the heels contact the ground softly, producing a lower impulse on the feet, so the other body parts, including the arms, swing more gently. It is the complete opposite in high heels: the heels do not let the knees extend smoothly, so they strike the ground earlier and at higher speed, producing a larger impulse. Following Newton's laws of motion, the rest of the body bounces more than with normal shoes. This stronger bounce makes the arms swing further forward and backward and makes soft body parts such as the buttocks and breasts shake with higher amplitude, which makes the walk look sexier.

Conclusion

In this post, some of the effects high heels have on female walking have been reviewed. Knowing these effects is essential for creating better poses, so let's review them:

1- High heels make the spine curvier by shifting the hips and ribcage forward, bending the spine backward and raising the buttocks higher.

2- Walking in high heels causes shorter steps compared to normal shoes, but we ignore the short steps for female heroes because fashion models can take long steps in high heels (the art of the catwalk) and our female heroes are fashion models too!

3- During walking, the hips swing more to the left and right (hip adduction).

4- During foot planting, heels contact to ground with higher speed and this makes the hands to swing with higher amplitude to forward and backward. Also if you have control over buttocks or breasts you have to shake them more obvious due to the higher impulse which is made by contacting heels to the ground.

Let's start with the stance idle. Wearing high heels lowers the base of support and moves the center of pressure on the feet almost onto the ball of the foot (which is not healthy at all). To stay balanced, the wearer has to shift her weight forward so that her center of mass lies on the same vertical line as the center of pressure of her feet. This vertical line is known as the line of gravity, and keeping it inside the base of support is essential for balance. To shift the center of mass forward, the hips move forward, which curves the spine: the hips and stomach push forward, the rib cage extends slightly because of the leaning hips, and the resulting spinal curve raises the buttocks higher. Besides these effects, walking in high heels keeps the calf muscles contracted, which makes the legs look more curvy. Yes, that's true, and those curves attract people; this is a rule not just for humans but for all mammals. So for the stance pose, raise her buttocks higher, move her hips and rib cage forward, and bend her spine back so it becomes more curvy.
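The balance rule above can be sketched in a few lines of code. This is a minimal illustration with invented helper names and made-up masses, not a real rig: the character is balanced when the line of gravity (the vertical line through the center of mass) falls inside the base of support, and high heels shrink that support region.

```python
def center_of_mass(parts):
    """parts: list of (mass, horizontal_position) for each body part."""
    total_mass = sum(m for m, _ in parts)
    return sum(m * x for m, x in parts) / total_mass

def is_balanced(parts, support_min, support_max):
    """True when the line of gravity passes through the base of support,
    given here as a horizontal interval [support_min, support_max]."""
    com_x = center_of_mass(parts)
    return support_min <= com_x <= support_max

# The same pose can lose balance once heels shrink the support region:
parts = [(30.0, 0.00), (40.0, 0.05), (10.0, 0.10)]  # torso, hips, head
print(is_balanced(parts, -0.10, 0.12))  # flat shoes: wide support -> True
print(is_balanced(parts, 0.06, 0.12))   # high heels: support near the ball -> False
```

Shifting the hips forward in the second case (increasing the horizontal positions) is exactly the weight shift the stance pose needs.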

Now let's consider walking. First, the double-support phase lasts longer, because gaining balance during the single-support phase is harder with a lower base of support. This leads to shorter steps than walking in normal shoes, but the art of the catwalk is that fashion models can take steps as long as when they wear normal shoes. Since our female heroes are fashion models too, we ignore the shorter steps when animating their walk.

During the single-support phase, when just one foot is on the ground, the hip adducts toward the center of mass. This happens because the line of gravity has to pass through the base of support to maintain balance; the process is also known as weight shifting. Weight shifting makes the hips swing left and right as each foot touches the ground: the hips swing left when the left foot lands and right when the right foot lands. High heels provide a lower base of support, so the weight has to be shifted farther than when walking in normal shoes for the line of gravity to reach the base of support, which during the single-support phase is only the small area of the shoe in contact with the ground. This means the hips have to swing farther left and right, which makes walking in high heels more attractive than walking in normal shoes. So far we have considered the lift and gait phases; now let's consider the plant phase! In normal shoes the knee extends smoothly, so the heel contacts the ground softly and the impulse delivered to the foot is low. That lower impulse lets the other body parts, including the hands, swing more softly, thanks to Newton's first law (inertia). Walking in high heels is the complete opposite: the heels do not let the knees extend smoothly, because the heel contacts the ground earlier and at higher speed, which produces a higher impulse. Because of inertia, the other body parts bounce more than when walking in normal shoes. This higher bounce makes the hands swing farther forward and backward and makes soft body parts such as the buttocks and breasts shake with higher amplitudes, and this makes the walk look sexier.
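The two amplitude effects described above (wider hip sway and bigger hand swing on heels) can be sketched as a simple procedural gait offset. All gains and base amplitudes here are invented placeholders for illustration, not measured values:

```python
import math

def gait_offsets(phase, heel_height_cm):
    """phase in [0, 1) over one full gait cycle.
    Returns (hip_sway, hand_swing) in metres; both grow with heel height."""
    sway_gain = 1.0 + 0.15 * heel_height_cm    # more weight shifting on heels
    bounce_gain = 1.0 + 0.10 * heel_height_cm  # harder heel plants -> bigger swings
    hip_sway = 0.03 * sway_gain * math.sin(2 * math.pi * phase)      # left/right
    hand_swing = 0.20 * bounce_gain * math.sin(2 * math.pi * phase)  # fwd/back
    return hip_sway, hand_swing

flat = gait_offsets(0.25, 0.0)
heels = gait_offsets(0.25, 10.0)
print(flat, heels)  # the high-heel amplitudes are strictly larger
```

A real rig would layer these offsets on top of a walk cycle per frame; the point is only that one heel-height parameter can drive both effects in the same direction.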

Conclusion

In this post some of the effects that high heels have on female walking have been reviewed. Knowing these effects is essential for creating better poses. Let's review them:

1- High heels make the spine more curvy by shifting the hips and rib cage forward, bending the spine backward, and raising the buttocks higher.

2- Walking in high heels leads to shorter steps than walking in normal shoes, but we ignore the shorter steps for female heroes, because fashion models can take long steps while walking in high heels (the art of the catwalk) and our female heroes are fashion models too!

3- During walking, the hips swing farther left and right (hip adduction).

4- During foot planting, the heels contact the ground at higher speed, which makes the hands swing forward and backward with higher amplitude. Also, if you have controls for the buttocks or breasts, shake them more noticeably, because of the higher impulse produced when the heels strike the ground.
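The four rules above can be collected into a small tuning table that a procedural rig or an animator's checklist could start from. Every key and value here is a hypothetical placeholder, just to show the summary in a machine-usable form:

```python
# Hypothetical high-heel pose/gait adjustments, one entry per rule above.
HIGH_HEEL_ADJUSTMENTS = {
    "spine_curve": "hips and rib cage forward, spine bent back, buttocks raised",
    "step_length": "unchanged (catwalk-style heroes keep long steps)",
    "hip_sway":    "increase left/right amplitude (hip adduction)",
    "plant_impact": "increase hand-swing and soft-body shake amplitude",
}

for rule, adjustment in HIGH_HEEL_ADJUSTMENTS.items():
    print(rule, "->", adjustment)
```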

This is a very good paper about AI-based animation. I believe their method can be used to generate advanced combat animations like those in Batman: Arkham Asylum and Assassin's Creed III. By applying it you can find the best transitions to other existing animations based on the control direction and the smoothness of the transition. This can give a game great diversity by selecting the best-suited motions, with fluid transitions, in whatever state the character enters. Their method creates better and more responsive transitions than what you can achieve with motion graphs.

James McCann, Nancy Pollard, "Responsive Characters from Motion Fragments", In Proceedings of ACM SIGGRAPH, 2007
