Tuesday, June 30, 2015

Foot Placement Using Foot IK

Inverse kinematics has found its way firmly into character animation. It has become a major part of animation content creation tools, to the point that animators can hardly work without IK. Different solvers exist for inverse kinematics. The analytical solution for two-bone IK chains and Cyclic Coordinate Descent for chains with more than two bones are the most famous ones, and they are widely used in animation content creation tools. While IK has wide usage in DCC tools, it has found its way into real-time animation systems as well, including game engines and animation middleware widely used in games. Using IK has almost become a standard for games with good visuals. By using IK in real-time animation, characters can cope with the variety of environments they move through. Assume a character with a walk animation. The animator authored it on an even surface, so it will look fine as long as you move the character on even ground. But when you move it on an uneven surface, the feet will not be placed correctly on the ground. Here you can use foot IK to place the character's feet on the ground. The usage of IK is not restricted to feet; it can be used for hands as well. The same scenario applies to a rock climbing feature, for both hands and feet. Foot IK is also used to avoid foot skating.

IK in real-time animation systems acts as a post-process on animations. This means that the original animation(s) are always evaluated in their normal way, and IK is applied afterwards to correct character poses so they respond well to changes in the environment. It is also used to avoid foot skating while the root position/rotation is moved procedurally or semi-procedurally.

Using IK in video games can go beyond this, as some games have integrated full-body IK within their engines. However, full-body IK has not become a standard in the gaming industry yet and not many games use it, while IK for hands and feet has almost become a standard for games that care more about their visuals.

This post is going to show how a foot placement system can be created to place a character's feet on uneven surfaces dynamically, or to plant feet on the ground to avoid foot skating. The post is based on the document I provided with a foot placement system named "Mec Foot Placer". Mec Foot Placer is a foot placement system which I implemented in my free time. It's free, and you can get it from the Unity Asset Store. I've shared some useful parts of the document here for those who would like to use or implement this kind of system in their games.

Before going further, I recommend checking out these Unity Web Player builds to see how the system affects the characters' feet:

Mec Foot Placer with plant foot feature
Mec Foot Placer without plant foot feature

And here is the link to the asset store:

Mec Foot Placer on Unity Asset Store

The system uses Unity 5 Mecanim, so you might see some Mecanim-specific notes in the post. If you are not a Unity3D developer, you can skip the Unity-specific topics; otherwise they should be helpful as well. The technique described here is not restricted to Unity, and you can implement it on any platform that offers IK, FK and physics. So in this post I try to describe the system generally, for those who would like to implement the same system on other platforms (not just Unity), and some Unity-specific notes are provided at the end.

Mec Foot Placer


1- Introduction


Mec Foot Placer provides an automatic workflow for placing character feet on grounds and uneven terrains. This document provides the details of the system and depicts how it can be set up. Mec Foot Placer acts as a post-process on animations, so while it places the feet automatically on the ground, it preserves the overall shape of the feet determined by the active animation(s).


2- Work Flow


Mec Foot Placer finds the appropriate foot position on the ground by using raycasts. The system uses three raycasts to find the foot, toe and heel-corner positions. The toe position is used for foot pitch rotation and the heel-corner position is used for foot roll. The foot yaw rotation is taken from the animation itself to make sure the original animation pose is not wrongly affected. The system always checks ground availability based on the foot position from the current active animation(s). If the system detects any ground, it sets the foot at an appropriate position and rotation on it.

When the system is active, it will automatically place the foot on ground in these steps respectively:


1- First, it gets the foot position from the current active animation(s).

2- It computes a ray origin above the foot position, offset along the up vector by a custom offset distance.

3- The ray points down from this origin, in the direction opposite to the up vector, with a length equal to the offset distance from step 2 plus the foot height plus a custom extra ray distance value. Figure 1 shows how the ray is cast in steps 1 to 3.

Figure 2 shows the final foot position after detecting a contact point. The white sphere in Figure 2 shows the detected contact point.

The detected contact point is not directly suitable for placing the foot, because it ignores the foot height; using it as-is causes the leg to be stretched and the foot to penetrate the ground. So a vector equal to UpVector * FootHeight is added to the detected contact point (white sphere) to get the final foot position. The up vector is normalized automatically within the system. The blue sphere in Figure 2 shows the final foot position.
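Steps 1 to 3 can be sketched in Python. The `raycast(origin, direction, max_dist)` callback is a hypothetical stand-in for the engine's physics query (Unity's `Physics.Raycast`, for instance); the vector helpers are my assumptions, not part of Mec Foot Placer:

```python
import numpy as np

def find_foot_position(fk_foot_pos, up, foot_offset_dist,
                       foot_height, extra_ray_dist, raycast):
    """Steps 1-3: raycast down from an origin above the FK foot position.

    raycast(origin, direction, max_dist) -> hit point or None
    (stands in for the engine's physics query).
    """
    up = up / np.linalg.norm(up)                  # up vector is normalized by the system
    origin = fk_foot_pos + up * foot_offset_dist  # step 2: origin above the FK foot
    max_dist = foot_offset_dist + foot_height + extra_ray_dist  # step 3: ray length
    hit = raycast(origin, -up, max_dist)
    if hit is None:
        return None                               # no ground detected: stay with FK
    return hit + up * foot_height                 # lift contact point by foot height

# toy ground plane at y = 0 for a quick check
def flat_ground(origin, direction, max_dist):
    if direction[1] >= 0:
        return None
    t = -origin[1] / direction[1]
    return origin + direction * t if t <= max_dist else None

pos = find_foot_position(np.array([0.0, 0.05, 0.0]), np.array([0.0, 1.0, 0.0]),
                         0.5, 0.1, 0.2, flat_ground)
print(pos)  # the foot lands at foot height above the contact point
```

If the ray misses (for example, the character stands at the edge of a cliff), the function returns `None`, which corresponds to the system falling back to FK as described later.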


Figure 1- Ray casting for finding foot contact point


Figure 2- Final foot position after detecting a contact point



4- From the detected foot position, another ray is cast based on the foot forward vector and the current foot rotation from the FK pose (foot animation pose). This ray is used to find the toe position, which in turn is used to find the foot pitch rotation. Figure 3 shows how the toe position is found by raycasting.


Figure 3- Raycast for finding Toe position


The toe vector in Figure 3 is equal to the foot yaw rotation from the animation applied to the normalized forward vector, multiplied by the foot length:

ToeVector = FootYawRotation * normalize(ForwardVector) * FootLength


Applying the foot yaw rotation from the animation causes the system to preserve the original foot direction determined by the artist/animator. Figure 4 shows the detected toe position leading the foot to be placed on the surface correctly. The blue sphere shows the detected toe position.


Figure 4- Detected toe position and the according foot rotation

5- From the foot position detected in steps 1 to 3, another ray is cast to find the heel-corner position. This ray is used to find the foot roll rotation. Figure 5 shows how this step works.

The heel vector in Figure 5 is equal to the foot yaw rotation from the animation applied to the normalized right vector, multiplied by the foot half width, where the right vector is the forward vector rotated 90 degrees around the up vector:

HeelVector = FootYawRotation * normalize(RightVector) * FootHalfWidth




Figure 5- Raycast for finding Heel corner position


The blue sphere in Figure 6 shows the detected heel-corner position and the resulting roll rotation of the IK foot.


Figure 6- Detected heel corner position and the according foot roll rotation
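Continuing the earlier sketch, steps 4 and 5 compute the toe and heel-corner ray origins from the corrected foot position. The yaw-rotation helper and the handedness convention are illustrative assumptions:

```python
import numpy as np

def yaw_rotate(v, yaw_rad):
    """Rotate v around the world up (Y) axis by the foot's animation yaw."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2]])

def toe_and_heel_origins(foot_pos, forward, up, yaw_rad,
                         foot_length, foot_half_width):
    """Step 4: ToeVector  = yaw * normalize(forward) * FootLength.
       Step 5: HeelVector = yaw * normalize(right) * FootHalfWidth,
       where right is forward rotated 90 degrees around up."""
    forward = forward / np.linalg.norm(forward)
    right = np.cross(up, forward)           # left-handed convention, as in Unity
    right = right / np.linalg.norm(right)
    toe_origin = foot_pos + yaw_rotate(forward, yaw_rad) * foot_length
    heel_origin = foot_pos + yaw_rotate(right, yaw_rad) * foot_half_width
    return toe_origin, heel_origin

toe, heel = toe_and_heel_origins(np.array([0.0, 0.1, 0.0]),
                                 np.array([0.0, 0.0, 1.0]),
                                 np.array([0.0, 1.0, 0.0]),
                                 0.0, 0.25, 0.05)
print(toe, heel)
```

From each origin a downward ray is cast exactly as in steps 1 to 3; the height difference between the toe hit and the foot gives the pitch angle, and the height difference between the heel-corner hit and the foot gives the roll angle.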


6- The system also adjusts the IK hints of the legs automatically to achieve a natural knee shape. IK hints are also known as swivel angles. The IK hint, or swivel angle, determines the plane in which the IK chain is solved.

Figure 7 shows how the detected IK hint position is set. The blue sphere shows the final IK hint position and the white sphere shows the detected toe position. The calf vector is a vector in the direction of the calf bone (lower leg) with magnitude equal to the calf bone length.

Note that when the system is in IK mode, if the ray for foot position detection fails to hit any ground, the foot placement system transitions to FK mode smoothly over time. The same is true when the system switches back from FK to IK.



Figure 7- Final IKHint position (swivel angle)
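The smooth FK/IK switch described above can be sketched as a weight that ramps toward 0 or 1 over the transition time. The update function below is an illustrative assumption, not the component's actual code:

```python
def update_ik_weight(weight, ground_detected, transition_time, dt):
    """Move the IK weight toward 1 while ground is detected, toward 0
    otherwise, completing a full transition in transition_time seconds."""
    step = dt / transition_time
    target = 1.0 if ground_detected else 0.0
    if weight < target:
        return min(target, weight + step)
    return max(target, weight - step)

# the final foot transform is then a blend per frame:
#   foot = lerp(fk_pose, ik_pose, weight)
w = 0.0
for _ in range(12):                  # ground found: ramp up over simulated frames
    w = update_ik_weight(w, True, 0.5, 0.05)
print(w)  # reaches 1.0 once the transition time has elapsed
```

Position and rotation are blended between the FK pose and the IK result with this weight, so there is never a pop when ground appears or disappears under the foot.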


3- Foot Placement Input Data


Each foot needs input data to work correctly. For this reason, a FootPlacementData component is provided to hold the data needed for each foot. The input variables that come with the component should be filled in by the user. Each input variable is described here:
  • Is Left Foot: Check this check box if the current FootPlacementData component you are setting up is for the left foot; otherwise it will be considered the right foot.

  • Ignore Character Controller Collision: If this check box is checked, the raycasts will ignore the character controller collision. This only works for the owner game object's character controller; it does not affect the other character controllers in the world. To ignore other character controllers, layers should be set up using the Unity3D layering system.

  • Plant Foot: If this check box is checked, the system enables the foot planting feature. The character's foot will be planted after a contact point is detected, and it will remain at the detected position and rotation until the system automatically returns to FK mode. The system also checks for ground height changes, so feet can stay on the ground while being planted. This feature provides a good solution to avoid foot skating. If the plant foot check box is not checked, the foot always gets its position and rotation from the animation, and after that the foot placement uses raycasts to place it on the ground. If plant foot is checked, the foot will be placed at the first detected contact point and will not follow the animation while in IK mode. While the foot plant feature is active, the system can blend between the planted foot position and rotation and the position and rotation the foot would have without the plant foot feature. In either case, feet are always placed on uneven terrains and grounds.

The foot plant feature has some functions to manipulate its blend weights. Check out section 4 to find out more about these functions. It is recommended to enable this feature in states which are prone to foot skating, like locomotion states, and disable it in states which cannot exhibit foot skating, like a standing idle. Please check “IdleUpdate.cs” and “LocomotionUpdate.cs” to find out how to disable and enable this feature safely.
  • Forward Vector: This vector should be set to the character foot's initial direction. What you see in the character's foot reference pose (Mecanim T-pose) is what you should use here. In many cases the character's initial forward vector is equal to the foot forward vector.

  • IK Hint Offset: The IK hint position is calculated automatically, as stated in section 2. The IK Hint Offset is added to the calculated IK hint position to fine-tune the final result.

  • Up Vector: This shows the character's up vector. It should be equal to the world up vector (Vector3(0, 1, 0)) if the character is moving on the ground. For some rare situations, like running on walls, it should be changed accordingly.

  • Foot Offset Dist: The offset distance used for raycasting. Figures 1, 3 and 5 show this parameter in action.

  • Foot Length: The foot length is used to find the toe position to set the foot pitch rotation. It should be equal to the character's foot length. Figure 3 shows the details.

  • Foot Half Width: This parameter should be equal to half the width of the foot. It is used to find the foot roll rotation. Figure 5 shows the details.

  • Foot Height: The foot height is used to set the correct heel position on the ground. It should be equal to the distance from the heel center to the lower leg joint. Figures 1 and 2 show the details.

  • Foot Rotation Limit: The IK foot rotation will not exceed this value. The value is in degrees.

  • Transition Time: When no contact point is detected by the system, it switches to FK mode smoothly over time. Likewise, when the system is in FK mode and finds a contact point, it switches to IK mode smoothly over time. The “Transition Time” parameter is the duration of this smooth transition, from FK to IK or IK to FK.

  • Extra Ray Distance Check: This parameter is used to find the correct foot, toe or heel-corner position on the ground. Figures 1, 3 and 5 show the parameter in action. This parameter can be changed dynamically to achieve better visual results. Check out the "IdleUpdate.cs" and "LocomotionUpdate.cs" scripts in the project for the details; they change the “Extra Ray Distance Check” value based on the foot position in the animations and the current animation state. Both scripts use Unity 5 animator behaviour callbacks.

4- Mec Foot Placer Component


The Mec Foot Placer component is responsible for setting the correct position and rotation of the feet on the ground and for switching between IK and FK automatically.
Mec Foot Placer provides some functions which can be used by the user. These functions are listed here:

  • void SetActive(AvatarIKGoal foot_id, bool active): This function turns the system on or off safely for each foot. In some states you don't need the system to be active. For example, when the character is falling, there is no need to check for foot placement. The system would still work correctly in that state, but the user can disable it to skip the component's calculations. Each foot can be activated or deactivated separately.
  • bool IsActive(AvatarIKGoal foot_id): Returns true if the system is active for the given foot, otherwise false.

  • void EnablePlant(AvatarIKGoal foot_id, float blend_speed): This function blends the plant foot weight from its current value to 1 over time, based on the blend speed parameter. It is useful for states which need foot planting, like locomotion; the plant foot feature is activated smoothly over time. For the plant foot feature to have an effect, the plant foot weight must be higher than 0.

  • void DisablePlant(AvatarIKGoal foot_id, float blend_speed): This function blends the plant foot weight from its current value to 0 over time, based on the blend speed parameter. It is useful for states which don't need foot planting, like standing idles; the plant foot feature is deactivated smoothly over time.

  • void SetPlantBlendWeight(AvatarIKGoal foot_id, float weight): Sometimes you need to change the plant blend weight manually rather than using the DisablePlant or EnablePlant functions. This function sets the blend weight between the planted foot position and rotation and the non-planted foot position and rotation.

  • float GetPlantBlendWeight(AvatarIKGoal foot_id): Returns the current blend weight of the foot planting feature.

5- Quick Setup Guide


To set up the system you have to add a MecFootPlacer component. The foot placement component needs at least one FootPlacementData component, otherwise it will not work. If both feet are to be considered, two FootPlacementData components should be set up, one for the right foot and one for the left foot, so the system can manipulate both feet.
After setting up the components, the Mec Foot Placer system should work correctly. Check out the example scenes for more info.

5-1- Important Notes on Setting Up the System


Some important notes should be considered before setting up the system:


  • Exposing Necessary Bones: If you checked “Optimize Game Objects” in the avatar rig, then some bones have to be exposed, since Mec Foot Placer needs them to work correctly. The bones are listed here:
           1- The corresponding bones for left and right feet.
           2- The corresponding bones for left and right lower legs.

          Check out the “Robot Kyle” avatar in the project to see how this can be done.


  • Setting up the data correctly: Don't forget that setting up the foot placement data needs precision, and in some states the data needs to be changed dynamically to achieve the best effects. For example, the “Extra Ray Distance Check” parameter should be increased or decreased in different states or at different animation times to achieve better visual results. Fortunately, this can be done easily in Unity 5 by using animator behaviour scripts. Check out "IdleUpdate.cs" and "LocomotionUpdate.cs" in the project for the details. Both scripts are called within the “SimpleLocomotion” animator controller. As the scripts show, the “Extra Ray Distance Check” parameter is increased when the character enters the idle state. In the locomotion state, the parameter increases at the times the character is putting a foot on the ground, to make sure the foot touches it, and decreases while the foot is off the ground, to let the foot move freely while still looking for ground contacts.

  • Checking the IK Pass Check Box: On any layer in which you need IK, you have to check the IK pass check box in the animator controller so that the IK callbacks can be called by Mecanim. If you don't check this check box, Mec Foot Placer will not work.

  • Mecanim-Specific Features: Mecanim humanoid rigs provide a leg stretching feature which can help foot IK look better. The leg stretching feature can avoid knee popping while setting the foot IK target. However, the value should be set carefully: high values make the character look cartoony, and low values increase the chance of knee popping. To set up the Leg Stretch feature, select your character asset (it should be a humanoid rig). In the Rig tab, select Configure, then select the Muscles tab. In the “Additional Settings” group, you can find a parameter named “Leg Stretch”. Check out the “Robot Kyle” avatar in the project to find out more.




Wednesday, June 24, 2015

Avoiding High Dimensionality in Animation State Space

With the progression of computer hardware, video games can have plenty of animations, and this amount of animation needs to be managed. Each specific animation, or just some frames of it, needs to be played at the right moment to fulfill a motion-related task. Usually, developers try to keep the animation controller separate from other modules like AI or control, so those modules can just send some parameters to the animation system, and the animation system should return the most suitable animation in response. This removes the complexity of managing animations from the control or AI modules, which already have their own complexities.

The animation controller promises to return the most suitable animation based on the input parameters. Different rules exist for selecting animations based on these parameters. Usually, the speed, rotation and translation of the current animation's bones are considered, and a suitable animation is selected which has the least difference in speed and translation/rotation with the currently playing animation poses. The returned animation also has to satisfy the motion-related tasks; it has to do what the other modules expect of it. For example, a path planner can send input parameters like steering angle and speed to the animation controller, and the animation controller should return the best-suited motion out of its existing animations to follow the path correctly.

Different animation controllers exist which have already become standard in video games. The most famous are animation state machines. They are found in many game engines and game animation middleware. They can be combined with animation blend trees, which most animation systems offer. Usually they are created manually by animation specialists.

There are other animation controllers, like motion graphs, parametric motion graphs or reinforcement-learning-based animation controllers. Each has its own characteristics and should be discussed separately. Just note that all of these controllers can be implemented on top of an animation state machine which offers animation blending, transitions, time offsets within transitions and hierarchical states. I can mention the Unreal Engine 4 animation system as a good one which has most of these features (not all).

Animation controllers might face a problem when they have to manage many animations: the high dimensionality of the state space. The controller has to create many states so it can respond well to the input parameters. As the state space grows, the number of possible transitions grows roughly quadratically with the number of states. A high-dimensional state space makes the system impractical, memory consuming and very hard to maintain.

In this post I want to introduce a paper based on research I did about 1.5 years ago; the paper was published about 9 months ago. The research was about reducing state parameters in a reinforcement-learning-based animation controller used for locomotion planning. Although RL-based animation controllers are less used in the gaming industry, they are finding their way in, because they offer an almost automatic workflow to connect separate animations within an animation database, fulfill different motion tasks and create a continuous space out of separate animations.

I'll try to write another post showing how to reduce states in a manually created animation state machine, since manually created state machines are the most widely used in the gaming industry. This post, however, is about reducing the dimensions of the state space in an RL-based animation controller.

Here is the abstract:

"Motion and locomotion planning have a wide area of usage in different fields. Locomotion planning with premade character animations has been highly noticed in recent years. Reinforcement Learning presents promising ways to create motion planners using premade character animations. Although RL-based motion planners offer great ways to control character animations but they have some problems that make them hard to be used in practice, including high dimensionality and environment dependency. In this paper we present a motion planner which can fulfill its motion tasks by selecting its best animation sequences in different environments without any previous knowledge of the environment. We combined reinforcement learning with a fuzzy motion planer to fulfill motion tasks. The fuzzy control system commands the agent to seek the goal in environment and avoid obstacles and based on these commands, the agent select its best animation sequences. The motion planner is taught through a reinforcement learning process to find optimal policy for selecting its best animation sequences. To validate our motion planner's performance, we implemented our method and compared it with a pure RL-based motion planner."

You may want to read the paper here.

Monday, May 25, 2015

Combining Ragdoll and Keyframe Animation to Achieve Dynamic Poses

One of the greatest challenges in game character animation is adapting characters to dynamic environments and the dynamic actions of players. Each character owns many animations which help the animation controllers respond well in different situations. Although owning plenty of animations can help achieve better visual responses, it can't always be enough, since the environment the character is moving in can be dynamic and change over time. Player actions are dynamic as well, and this affects animations too. Each action of the player needs feedback, and animation plays a huge role here.

So even if the character owns many animations, they can't cover all situations, and the motion's visual quality and responsiveness can become unacceptable at some points. To overcome these issues, many animation techniques have been invented. Some of them are semi-procedural, like animation blending, and some are fully procedural, like IK and physically based animation. In this post, I want to talk a little about combining physical animation with keyframe animation and briefly show how they can be blended together to create dynamic poses. This subject is a huge one, so I'm just going to touch on it briefly. At the end, a case study is provided too.

Combining Keyframe Animation and Ragdoll


Using ragdoll animation has become a standard in the video game industry. You can see it in many games, but nowadays simple ragdoll is not acceptable for many game developers who care more about the animation in their games, so they use more advanced physical animation. Different physical animation techniques exist. Here I want to point out one which can be very useful in video games and which some game engines and physics/animation APIs provide.

Using ragdoll alone usually creates unacceptable motion; at many points it does not produce human-like movement, so it can really only be used instead of death animations, not for a living character. To get a better simulation, the ragdoll can be driven by the active keyframe animations. Assume you have an animation which generates poses each frame, and a ragdoll skeleton which consists of different physical joints and hinges. We can take the pose generated by the keyframe animation as a target and apply forces to the physical joints of the ragdoll skeleton to reach this target. With this, the ragdoll skeleton tries to follow the animation each frame while still reacting to the environment and to forces applied within the game. Also, the pose generated by the physical skeleton can be blended with the actual animation pose, so we can make transitions between physical animation and keyframe animation. This helps a lot in achieving dynamic poses. Let's consider a simple example: a shooter game where you shoot at an enemy while he is running. Based on the magnitude of the force caused by the bullet, you can change the blend factor between the ragdoll pose and the currently playing animations. With this you get a transition between animation and ragdoll while the ragdoll skeleton reacts to the bullet's physical force, and the ragdoll skeleton keeps trying to reach the current animation pose.
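A common way to make a physical joint chase its animated target is a PD (proportional-derivative) controller per joint. The sketch below uses a single 1-D hinge angle for brevity, and the gains, inertia and time step are illustrative assumptions; real engines such as Havok expose motor-based equivalents of this idea:

```python
def pd_torque(current_angle, current_vel, target_angle, target_vel,
              kp=200.0, kd=10.0):
    """Torque that drives a physical joint toward the animated pose."""
    return kp * (target_angle - current_angle) + kd * (target_vel - current_vel)

def simulate(angle, vel, target, inertia=1.0, dt=1.0 / 60.0, steps=120):
    """Integrate a single hinge joint chasing a fixed animation target
    with semi-implicit Euler steps."""
    for _ in range(steps):
        torque = pd_torque(angle, vel, target, 0.0)
        vel += (torque / inertia) * dt    # external hit impulses would add here
        angle += vel * dt
    return angle

final = simulate(angle=0.0, vel=0.0, target=1.0)
print(final)  # settles near the animated target of 1.0
```

Blending the resulting ragdoll pose with the keyframe pose by the bullet-force-driven factor is then an ordinary per-bone lerp/slerp between the two transforms.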

If you want to find out more about this technique in detail and see the physical joint equations for following the animation, I recommend checking the Dynamic Response for Motion Capture Animation paper. The Havok physics API documentation also describes this technique very well, but it doesn't cover the equations.

This technique is beautifully implemented in Havok animation tool. You can define a ragdoll skeleton for a character and let it follow the currently playing animations. It also provides pose blending between ragdoll and currently playing animations. So it helps a lot to achieve dynamic acceptable poses.

Next I want to put my example into action. I implemented a simple body hit reaction by blending 4 run animations with the animation-driven ragdoll. I used the Havok Animation Tool and the Vision engine, both provided within the Project Anarchy tool set. In the Havok Animation Tool, the "Rigid Body Ragdoll Controls Modifier" is responsible for animation-driven ragdoll.

A Simple Case Study


Now let's consider the example in action. I have a character which is running, and I want it to continue running while being hit. Each time, the hit point differs; you can assume the character is hit by randomly fired bullets. For this I made an animation blend tree with 4 run animations: one normal run and three unstable runs.

First, based on the direction of the force applied to different joints, I blend between the three unstable runs. I have an unstable run left, forward and right.

Second, based on the magnitude of the applied force, I blend between the normal run and the unstable runs. So here I've generated a pose just by blending between 4 animations based on the direction and magnitude of the applied force. Although animation blending can create smooth poses, it is not enough for creating dynamic poses in this example, so I apply animation-driven ragdoll as a post-process on the poses generated by the blend tree. The animation-driven ragdoll applies forces to its motors and physical joints each frame to bring the ragdoll skeleton's pose close to the pose generated by the active animations. So each frame we have two poses: one generated by the animation blend tree and one generated by the animation-driven ragdoll. The animation-driven ragdoll is also affected by the bullet forces.

To achieve a dynamic and acceptable pose, I blend between the two poses generated by the two systems based on the bullet force magnitude. After the force is applied, I blend back to the actual animations over time, so the pose generated by the animation-driven ragdoll fades out. If the character is hit by another bullet, I again apply forces to the ragdoll skeleton and blend it with the animation (based on the force magnitude).
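The first blend level, mapping a hit force to weights over the four run animations, can be sketched as follows. The direction-to-weight mapping and the force normalization are my assumptions for illustration, not the actual blend tree parameters:

```python
import numpy as np

def run_blend_weights(force, max_force=100.0):
    """Map a hit force to weights over (normal, left, forward, right) runs.

    Direction picks among the three unstable runs; magnitude blends
    the unstable mix against the normal run.
    """
    mag = min(np.linalg.norm(force), max_force) / max_force  # 0..1 instability
    if mag == 0.0:
        return np.array([1.0, 0.0, 0.0, 0.0])
    d = force / np.linalg.norm(force)
    # split the lateral axis into left/right, keep forward separate
    left = max(-d[0], 0.0)
    right = max(d[0], 0.0)
    fwd = max(d[2], 0.0)
    total = (left + fwd + right) or 1.0      # avoid divide-by-zero
    unstable = np.array([left, fwd, right]) / total
    return np.concatenate(([1.0 - mag], mag * unstable))

w = run_blend_weights(np.array([50.0, 0.0, 0.0]))
print(w)  # half normal run, half unstable-right run
```

The second level, blending this animation pose with the ragdoll pose by the same force magnitude and fading the ragdoll weight back to zero over time, works exactly like the FK/IK weight ramp shown earlier.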

Conclusion


If you want to create such a system, it's better to build a platform which can blend animations well, like an advanced blend tree. The blend tree responds to the input, which here is a force vector. Then, as a post-process, you can use animation-driven ragdoll to apply dynamic forces to the body and blend the result with the currently playing animations. The blend tree used in this example was very simple; you can use more complex blending systems with more animations to get a more realistic character.

The video here shows the example I described. I apply randomly generated forces to randomly selected joints to simulate bullet hits on the character's body while he is running. As mentioned, the character could have many more animations combined with the ragdoll to shape a more realistic motion.

After each hit, the character blends back to its active animations smoothly.









Saturday, February 14, 2015

Skeletal Animation Optimization Tips and Tricks



Introduction


Skeletal animation plays an important role in video games. Recent games use many characters. Processing a huge amount of animation can be computationally intensive and requires a lot of memory as well, so to have multiple characters in a real-time scene, optimization is highly needed. Many techniques exist for optimizing skeletal animation, and this article addresses some of them. Some of the techniques addressed here have plenty of details, so I just try to define them in a general way and introduce references for those who are eager to know the details.

This article is divided into two main sections. The first section addresses some optimization techniques which can be used within animation systems. The second section takes the perspective of the animation system's users and describes some techniques they can apply to use the animation system more efficiently. So if you are an animator/technical animator you can read the second section, and if you are a programmer who wants to implement an animation system you may read the first section.

This article is not going to talk about mesh skinning optimization; it only covers skeletal animation optimization techniques. There are plenty of useful articles about mesh skinning around the web.


1. Skeletal Animation Optimization Techniques


I assume that most readers of this article know the basics of skeletal animation, so I'm not going to cover the basics here. To start, let's define a skeleton in character animation. A skeleton is an abstract model of a human or animal body in computer graphics. It is a tree data structure whose nodes are called bones or joints. Bones are just containers for transformations. For each skeletal animation there are animation tracks; each track has the transformation info of a specific bone. A track is a sequence of keyframes, and a keyframe is the transformation of a bone at a specific time, measured from the beginning of the animation. Usually keyframes are stored relative to a pose of the bone named the binding pose. These animation tracks and the skeletal representation can be optimized in different ways. In the following sections, I will introduce some of these techniques. As stated before, the techniques are described generally; each could fill a separate article.

Optimizing Animation Tracks


An animation consists of animation tracks. Each animation track stores the animation of one bone. An animation track is a sequence of keyframes, where each keyframe contains translation, rotation or scale data. Animation tracks can be optimized easily from several angles. First, note that most bones in character animations have no translation at all. For example, fingers and hands don't need to move; they only need to rotate. Usually the only bones that need translation are the root bone and the props (weapons, shields and so on); the other body parts do not translate, they only rotate. Also, realistic characters usually do not use scale; scale is mostly applied to cartoony characters. One other thing about scale is that animators mostly use uniform scale and rarely non-uniform scale.

Based on this, we can remove the scale and translation keyframes from the animation tracks that don't need them. The tracks become lightweight and take less memory and computation when unnecessary translation and scale keyframes are stripped. Also, if we use uniform scale, each scale keyframe can store a single float instead of a Vector3.
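As a sketch of this idea, the pruning pass below (the names and the track layout are my own, not from any particular engine) drops translation and scale data that never changes and collapses uniform scale keys to a single float per key:

```python
def is_constant(keys, tol=1e-6):
    """True if every keyframe stays within tolerance of the first one."""
    first = keys[0]
    return all(all(abs(a - b) <= tol for a, b in zip(k, first)) for k in keys)

def compact_scale(scale_keys, tol=1e-6):
    """If every scale key is uniform (x == y == z), keep one float per key."""
    if all(abs(s[0] - s[1]) <= tol and abs(s[1] - s[2]) <= tol for s in scale_keys):
        return [s[0] for s in scale_keys]  # one float instead of a Vector3
    return scale_keys

def prune_track(track):
    """Drop translation/scale data that never changes over the whole track."""
    out = dict(track)
    if "translation" in out and is_constant(out["translation"]):
        del out["translation"]            # bone never moves: no keys needed
    if "scale" in out and is_constant(out["scale"]):
        del out["scale"]                  # bone never scales
    elif "scale" in out:
        out["scale"] = compact_scale(out["scale"])
    return out
```

A finger bone's track, for instance, would typically lose its translation keys entirely and keep only rotation.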

Another very useful technique for optimizing animation tracks is animation compression. The most famous scheme is curve simplification, which you may also know as keyframe reduction. It removes keyframes from an animation track based on a user-defined error threshold, so consecutive keyframes that differ very little can be omitted. Curve simplification should be applied to translation, rotation and scale separately, because each has its own keyframes, its own value ranges and its own error metric. You may read this paper about curve simplification to find out more about it.
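A minimal, greedy flavor of keyframe reduction can be sketched like this (real curve simplification, as in the paper above, is more sophisticated; this sketch only drops an interior key when its value can be rebuilt from its neighbours by linear interpolation within the user-defined error):

```python
def lerp(a, b, t):
    """Linear interpolation between two scalar samples."""
    return a + (b - a) * t

def reduce_keyframes(times, values, max_error):
    """Return the indices of keyframes to keep.

    An interior key is dropped when linearly interpolating between the
    last kept key and the next key reproduces it within max_error.
    """
    kept = [0]                            # always keep the first key
    for i in range(1, len(times) - 1):
        prev, nxt = kept[-1], i + 1
        t = (times[i] - times[prev]) / (times[nxt] - times[prev])
        if abs(lerp(values[prev], values[nxt], t) - values[i]) > max_error:
            kept.append(i)                # key carries real information
    kept.append(len(times) - 1)           # always keep the last key
    return kept
```

On a perfectly linear track this keeps only the two endpoints, while a sudden spike in the curve survives the reduction.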

One other thing to consider here is how you store rotation values in the rotation keyframes. Usually rotations are stored as unit quaternions, because quaternions have some strong advantages over Euler angles. Storing a quaternion naively takes four elements, but for a unit quaternion the scalar part can be recovered from the vector part, so a quaternion can be stored with just three floats instead of four. See this post from my blog to find out how you can obtain the scalar part from the vector part.
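A sketch of this three-float packing, assuming unit quaternions stored as (x, y, z, w). The sign flip before dropping w exploits the fact that q and -q represent the same rotation, so the recovered scalar part can always be taken as non-negative:

```python
import math

def pack_quat(q):
    """Keep only (x, y, z); flip sign first so the dropped w is >= 0."""
    x, y, z, w = q
    if w < 0.0:
        x, y, z, w = -x, -y, -z, -w   # q and -q are the same rotation
    return (x, y, z)

def unpack_quat(xyz):
    """Recover w = sqrt(1 - x^2 - y^2 - z^2) for a unit quaternion."""
    x, y, z = xyz
    w = math.sqrt(max(0.0, 1.0 - (x * x + y * y + z * z)))
    return (x, y, z, w)
```

This saves one float per rotation keyframe at the cost of a square root on decompression.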


Representation of a Skeleton in Memory


As mentioned in previous sections, a skeleton is a tree data structure. Since animation is a dynamic process, the bones may be accessed frequently while the animation is being processed. So a good approach is to keep the bones sequentially in memory rather than scattered around, to benefit from locality of reference. Sequential allocation of bones in memory is much more cache friendly for the CPU.


Using SSE Instructions


To update a character animation, the system has to do a lot of math, most of it linear algebra on vectors. For example, bones are constantly interpolated between two consecutive keyframes, so the system has to LERP between two translations and two scales and SLERP between two quaternion rotations. There might also be animation blending, which makes the system interpolate between two or more different animations based on their weights. LERP and SLERP are calculated with these equations respectively:

LERP(V1, V2, a) = (1 - a) * V1 + a * V2
SLERP(Q1, Q2, a) = sin((1 - a) * t) / sin(t) * Q1 + sin(a * t) / sin(t) * Q2

Here 't' is the angle between Q1 and Q2 (given by cos(t) = Q1 · Q2) and 'a' is the interpolation factor, a normalized value. These two equations are used constantly in keyframe interpolation and animation blending, and SSE instructions can help you compute them faster and more efficiently. I highly recommend looking at the hkVector4f class from the Havok physics/animation SDK as a reference: it uses SSE instructions very well and is a very well designed class. You can define your translation, scale and quaternion types in a similar way.
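For reference, here is a plain scalar sketch of these two equations in Python (an SSE/SIMD version such as Havok's hkVector4f would vectorize the same math). The shortest-arc sign flip and the near-parallel LERP fallback are standard practical guards, not part of the bare formula:

```python
import math

def lerp(v1, v2, a):
    """Component-wise linear interpolation: (1 - a) * v1 + a * v2."""
    return tuple((1.0 - a) * x + a * y for x, y in zip(v1, v2))

def slerp(q1, q2, a, eps=1e-6):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    dot = sum(x * y for x, y in zip(q1, q2))
    if dot < 0.0:                       # negate one input to take the shortest arc
        q2 = tuple(-c for c in q2)
        dot = -dot
    if dot > 1.0 - eps:                 # nearly parallel: sin(t) ~ 0, fall back to LERP
        out = lerp(q1, q2, a)
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    t = math.acos(dot)                  # angle between the quaternions
    s = math.sin(t)
    w1 = math.sin((1.0 - a) * t) / s
    w2 = math.sin(a * t) / s
    return tuple(w1 * x + w2 * y for x, y in zip(q1, q2))
```

Interpolating halfway between the identity and a 90-degree rotation yields a 45-degree rotation, as expected.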

Note that if you use SSE instructions, the objects that use them have to be properly memory aligned (typically to 16 bytes), otherwise you will run into traps and exceptions. Also check how your target platform supports this kind of instruction set.


Multithreading the Animation Pipeline


Imagine you have a crowded scene full of NPCs, each with a bunch of skeletal animations — say, a herd of bulls. The animation can take a lot of time to process. This can be reduced significantly if the crowd computation is multithreaded, with each entity's animations computed on a different thread.

Intel introduced a good solution to achieve this in this article. It defines a thread pool whose worker thread count should not exceed the number of CPU cores, otherwise application performance decreases. Each entity's animation and skinning calculation is treated as a job and placed in a job queue. Each job is picked up by a worker thread, and the main thread calls the render functions when the jobs are done. If you want to see this technique in action, I suggest having a look at the Havok animation/physics documentation and studying the multithreading part of the animation section. To get the docs you have to download the whole SDK here. There you can also see how Havok handles synchronous and asynchronous work by defining different job types.
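The structure of this job-based update can be sketched with a thread pool. This is only an illustration of the queue/worker pattern: the per-character work below is a placeholder, and a real engine would use its own job system (as Havok does) rather than Python threads:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def update_character(character):
    """Hypothetical per-entity job: sample tracks, blend, build bone poses.
    The doubling below is placeholder work standing in for the real math."""
    character["pose"] = [bone * 2 for bone in character["bones"]]
    return character

def update_all(characters):
    # Keep the worker count at or below the CPU core count, as the Intel
    # article advises; more threads than cores hurts performance.
    workers = max(1, min(len(characters), os.cpu_count() or 1))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each character is one job in the queue; results come back in order,
        # and the main thread can render once all jobs have finished.
        return list(pool.map(update_character, characters))
```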

Updating Animations


One important thing in animation systems is how you manage the update rate of a skeleton and its skinning data. Do we really need to update every animation each frame? And if so, do we need to update every bone each frame? This calls for a LOD manager for skeletal animations. The LOD manager decides whether to update the hierarchy or not, and it can consider different states of a character to decide on its update rate. Some cases to consider are listed here:

1- The priority of the animated character: Some characters, like NPCs and crowds, do not have a high priority, so you may not need to update them every frame. Most of the time they are not seen clearly, so skipping some of their updates goes unnoticed.

2- Distance to camera: If the character is far from the camera, many of its movements cannot be seen, so why compute something that cannot be seen? Here we can define a skeleton map for the current skeleton, select the more important bones to update and ignore the rest. For example, when the character is far from the camera you don't need to update the finger bones or the neck bone; you can update just the spine, head, arms and legs — the bones that can actually be seen from a distance. This gives you a lightweight skeleton and skips many bone updates. Don't forget that a human hand model can use 28 finger bones, and spending 28 bones on a small portion of the mesh is not very efficient.

3- Using dirty flags for bones: In many situations a bone's transformation does not change between two consecutive frames, for example because the animator didn't key that bone for several frames, or because the curve simplification algorithm removed nearly identical consecutive keyframes. In these situations you don't need to recompute the bone in its local space. As you may know, bones are first calculated in their local space based on the animation data and then multiplied by their binding pose and their parent's transformation to be expressed in world or model space. Keeping a dirty flag per bone lets you skip the local-space calculation for bones that haven't changed between two consecutive frames; only the dirty ones are recomputed.
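A minimal sketch of the dirty-flag idea, with bone poses reduced to plain values for brevity (a real system would compare sampled local transforms and rebuild only the dirty bones' local matrices):

```python
def update_local_poses(sampled, cache):
    """Recompute local transforms only for bones whose sampled keyframe
    value changed since the last frame; clean bones keep their cached pose.

    sampled: {bone_name: value sampled for this frame}
    cache:   {bone_name: value from the previous frame}, mutated in place.
    Returns the list of dirty bones that actually needed recomputation.
    """
    dirty = []
    for bone, value in sampled.items():
        if cache.get(bone) != value:   # dirty: local pose must be rebuilt
            cache[bone] = value
            dirty.append(bone)
    return dirty
```

On the second identical frame the returned list is empty, which is exactly the saved work.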

4- Update only when they are going to be rendered: Imagine a scene where some agents are chasing you and you are running away. The agents are outside the camera frustum, but their AI controllers are still tracking you. Should we update their skeletons while the player can't see them? In most cases, no. So you can skip the update of skeletons that are not in the camera frustum. Both Unity3D and Unreal Engine 4 have this feature: they let you choose whether a skeleton and its skinned mesh should be updated when they are outside the camera frustum.

That said, you might still need to update skeletons that are outside the camera frustum. For example, you might need to shoot at a character's head that is off camera, or you may need to read the root motion data for locomotion extraction, which requires computed bone positions. In those situations you can force the skeleton to update manually, or simply not use this technique.

2. Optimized Usage of Animation Systems


So far, we have discussed some techniques for implementing an optimized animation system. As a user of an animation system, you should trust the system and assume it is well optimized — that it implements many of the techniques described above, or even more. With that assumption you can author animations that are friendlier to an optimized animation system. I'm going to address some of these practices here. This section is more relevant for animators and technical animators.


Do not Move All Bones Always


As mentioned earlier, animation tracks can be optimized and their keyframes reduced easily. Knowing this, you can create animations that suit this kind of optimization. So do not scale or move bones when it is not necessary, and do not transform bones that cannot be seen. For example, during a fast sword attack not all of the facial bones are visible, so you don't need to animate them all.

In cutscenes, where you have a predefined camera, you know exactly which bones are in the camera frustum. So if the camera is zoomed in on your character's face, you don't need to animate the fingers or hands. This saves your own time and lets the system save a lot of memory by skipping the export of those tracks or simplifying them away.

One other important case is duplicated consecutive keyframes. This occurs frequently in the blocking phase of animation. For example, you pose the fingers at frame 1, pose them again at frame 15, and copy keyframe 15 to frame 30, so keyframes 15 and 30 are identical. But the default keyframe interpolation is set to make animation curves smooth, which means you can get extra motion between frames 15 and 30. Figure 1 shows a curve smoothed by such interpolation.


Figure 1: A smoothed animation curve

As you can see in Figure 1, keyframes 2 and 3 are identical, yet there is extra motion between them. You might actually want this smoothness for many bones; if so, leave it be. But if you don't need it, make sure the interpolation between the two identical consecutive keyframes is set to linear, as shown in Figure 2. That way the keyframe reduction algorithm can drop the samples in between.


Figure 2: Two linear consecutive keyframes

You should consider this case especially carefully for finger bones, because fingers can account for up to 28 bones in a human skeleton: they cover a small portion of the body yet consume a lot of memory and computation. In the previous example, making the two identical consecutive keyframes linear causes no visual artifact for the finger bones, and you can drop 28 * (30 - 15 + 1) = 448 keyframe samples, where 28 is the number of finger bones, 15 and 30 are the frames keyed by the animator, and the sampling rate is one sample per frame. So by setting two consecutive keyframes to linear for finger bones you save a good amount of memory. The saving may not be huge for one animation, but it adds up when your game has many skeletal animations.

Using Additive and Partial Animations Instead of Full Body Animations


Animation blending comes in different flavors. Two that are very good in both functionality and performance are additive and partial animation blending. These two blending schemes are usually used for asynchronous animation events — for example, when you are running and decide to shoot: the lower body continues to run while the upper body blends into the shoot animation.

Using additive and partial animations can help you get by with fewer animations. Let me describe this with an example. Imagine you have a locomotion animation controller that blends between three animations (walk, run and sprint) based on input speed, and you want to add a spine lean to this locomotion, so that the character leans forward for a moment while accelerating. One option is to make three full body animations — walk_lean_fwd, run_lean_fwd and sprint_lean_fwd — that blend synchronously with walk, run and sprint respectively, driving the blend weight to get the lean. Now you have three extra full body animations, each several frames long: more keyframes, more memory and more computation, and your blend tree becomes more complicated and higher dimensional. Next, imagine adding six more animations to the locomotion system — two directional walks, two directional runs and two directional sprints, each blended with walk, run and sprint respectively. To keep the lean feature you would have to add two directional walk_lean_fwd, two directional run_lean_fwd and two directional sprint_lean_fwd animations and blend them in as well. The blend tree becomes high dimensional, needs far too many full body animations, too much memory and computation, and even becomes hard for the user to work with.

You can handle this situation much more easily with a lightweight additive animation. An additive animation is an animation that is added on top of the currently playing ones; it is usually the difference between two poses. First your current animations are calculated, then the additive transforms are applied on top of the result. The additive animation is often just a single frame and doesn't really need to affect all body parts. In our example it can be a single frame in which the spine bones are rotated forward, the head is rotated down and the arms are spread a little. You add this animation on top of the current locomotion by driving its weight. You achieve the same result with one single-frame, half-body additive animation, with no need to produce separate full body lean animations. So additive and partial animation blending can reduce your workload and buy better performance very easily.
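The additive mechanism can be sketched as "difference from a reference pose, scaled by a weight". For brevity the per-bone transforms below are single values standing in for angles; real rotations would use quaternion differences and multiplication:

```python
def make_additive(pose, reference):
    """An additive pose is the per-bone difference from a reference pose."""
    return {bone: pose[bone] - reference[bone] for bone in pose}

def apply_additive(base_pose, additive, weight):
    """Add the weighted difference on top of whatever is currently playing.
    Bones absent from the additive pose (a partial/half-body additive)
    are left untouched."""
    return {bone: base_pose[bone] + weight * additive.get(bone, 0.0)
            for bone in base_pose}
```

The same single-frame additive works on top of walk, run or sprint alike, which is exactly why it replaces all the *_lean_fwd variants.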

Using Motion Retargeting


A motion retargeting system promises to apply the same animation to different skeletons without visual artifacts, letting you share animations between characters. For example, you make a walk for one specific character and reuse it on others. Motion retargeting saves memory by preventing animation duplication, but note that a retargeting system has computations of its own. It goes beyond basic skeletal animation and needs several extra techniques: how to scale bone positions and root translation, how to limit joints, how to mirror animations and more. So you save animation memory and animating time at the cost of some extra computation, though that computation will rarely become a bottleneck in your game.

Unity3D, Unreal Engine 4 and Havok Animation all support motion retargeting. Of course, if you don't need to share animations between different skeletons, you don't need motion retargeting at all.


Conclusion


Optimization is always a serious concern in video games. Video games are soft real-time software, so they have to respond within a proper time budget. Animation is an important part of any video game, from several angles: visuals, controls, storytelling, gameplay and more. Having lots of character animation in a game can improve it significantly, but the system must be capable of handling that much animation. This article tried to address some techniques that matter for skeletal animation optimization. Some of them are highly detailed and were only discussed in general terms here. The techniques were reviewed from two perspectives: first, that of developers who want to create skeletal animation systems, and second, that of the users of those systems.

Saturday, October 18, 2014

Havok Animation

Run-time animation systems are among the most important components in today's games. Animation affects many functional aspects of a game, including visuals, controls and game design, so a robust animation system is needed in any game engine. For this reason both commercial and in-house game engine developers work hard to provide a good, well designed animation system. For example, Unreal Engine 4 has a robust animation system that includes many features like state machines, different animation blending techniques, motion retargeting, IK and so on. The same is true for Unity3D's animation system, a.k.a. Mecanim.

But in this article I want to give a short introduction to the Havok Animation Tool, which you may also know as the Havok Behavior Tool. It is a robust animation middleware built on the Havok Animation SDK, and it provides many of the functional and non-functional features needed for run-time animation, such as:

  • Different animation blending techniques
  • Inverse kinematics
  • Rich animation state machines, with the ability to reduce state machine complexity
  • Ragdoll and keyframed animation blending
  • Pose matching
  • Animation retargeting
  • Root motion extraction
  • Great performance
Havok Animation gives you control over many different events that can occur within an animation system. The system is robust, well designed and performs very well.

Compared to Unity's Mecanim and Unreal's animation system, I prefer Havok Animation. The Havok animation tool is standalone software and does not usually ship inside a game engine, so you need to integrate it with your engine if you want it there. The Havok Vision engine, however, has already done that integration. You can get Havok Vision and the Havok animation tool for free via the Project Anarchy tool set (free for mobile development only).

For this reason I plan to write about some features of Havok Animation in upcoming posts.

Saturday, March 29, 2014

Motion Planning Project #1

I'm working on a motion planning project, and in this post I've placed the first video of the motion planner I'm building. The motion planner uses a fuzzy control system to avoid obstacles and reach a specified goal. 37 fuzzy rules over three different parameters control the speed and direction of the agent.

The system is still immature. It is going to be combined with some machine learning techniques to improve its performance, but for now it contains just the fuzzy motion planner, and the animations are few.

There is a set of parametric animations whose parameters change based on the commands coming from the motion planner. For now the motion planner only controls the speed and direction of the agent.

As you can see in the video, the agent does not always find the best and shortest path to the goal, because it has no prior knowledge of the environment and explores it while moving toward the goal. This technique has some pros and cons.

Pros:

With the same fuzzy rule base, the agent can avoid obstacles and reach the goal in different environments with differently arranged obstacles, and no preprocessing phase is needed, as you can see in the video.

Cons:

The agent has no prior information about the environment, so it can't find the best and shortest path to the goal.

The system's performance should improve after I add more animations and integrate it with some machine learning techniques. In future posts I will upload more videos and share the progress of the work here.

Here is the video:

https://vimeo.com/90407845