Saturday, November 14, 2015

Mirroring 3D Character Animations


Video games have resources. Resources are raw data that need to be manipulated and baked before they are ready to be used in game. Textures, meshes, animations and sometimes metadata are all counted as resources. These resources consume a significant amount of memory, so re-using and manipulating them is essential for a game engine.

In terms of animation, there are plenty of techniques for managing animations as resources, and one of them is motion retargeting.

With motion retargeting, one can use a specific animation on different skeletons with different reference (binding) poses, joint sizes and heights. For example, say you have just one walk animation and want to use it for 5 different characters with different physical shapes. A motion retargeting system can do this nicely, so you don't need five different walk animations for those 5 characters. You just have one walk animation which can be used for all of them. This means fewer animations and therefore fewer resources.

Motion retargeting systems apply some modifications on top of the animation data to make it suitable for different skeletons. These modifications include:

1- Defining a generic but modifiable skeleton template for bipeds or quadrupeds
2- Reasonable root motion scaling
3- Ability to edit skeleton reference pose
4- Joint movement limitations
5- Animation mirroring
6- Adding a run-time rig on top of the skeleton template.

Creating a motion retargeting system takes a vast amount of work and it's a huge topic, so in this post I just want to show you how you can mirror character animations. Motion retargeting systems usually support animation mirroring, and it's useful for different purposes. Mirrored animations can be used to avoid foot-skating and to achieve responsiveness, and since you mirror an input pose, you avoid authoring new mirrored animations: the same animation data is reused, and no new animation is needed. You can select the animation or its mirror based on the foot phases.

In the next post, I will show you how you can use mirrored animations in action; this post concentrates on mirroring an input pose from an animation.

For this post, I used Unreal Engine 4. Unreal Engine has a very robust, flexible and optimized animation system, but its motion retargeting is still immature. At this time, it can't be compared with the motion retargeting in Unity3D or Havok Animation.

Mirror Animations

To mirror animations, two types of bones should be considered. First, the bones that have a mirrored counterpart in the skeleton hierarchy, like hands, arms, legs, feet and facial bones. Let's call these mirrored bones twins. Second, the bones which have no twin, like the pelvis, spine bones, neck and head.

So to create a mirroring system, we have to define some metadata about the skeleton. It should store each bone's twin, if it has one. For this reason, I defined a class named AnimationMirrorData which saves and manipulates the required data, such as the mirror-mapped bones, the rotation mirror axis and the position negation direction.

To mirror animations, I defined a custom animation node which can be used in the Unreal Engine animation graph. It receives a pose in local space and mirrors it. It also has two input pins: one for an animation mirror data object, which should be initialized by the user, and one for a boolean which lets the node be turned on or off. As you can see in the picture, no extra animation is needed here; the node just accepts the current pose and mirrors it, and you can turn it on or off based on the game or animation circumstances.

Here I discuss how to mirror each type of bone:

1- Mirroring bones which have a twin in the hierarchy

Bones like hands and legs have a twin in the hierarchy. To mirror them, we need to swap the transforms of the two bones. For example, the left upper arm transform should be pasted onto the right upper arm, and the right upper arm transform should be pasted onto the left upper arm. To do this, we have to subtract the binding pose from the current transform of the bone at the current frame. In Unreal Engine 4, the local poses are calculated in their parent's space, as are the binding poses. We don't want to mirror the binding poses of the bones; we just need to mirror the subtracted transform. By doing this, we can make sure that the character stays on the same spot and won't rotate 180 degrees. Remember, this only works if the binding poses of the twin bones in the skeleton are already mirrored. This means the rigger should have mirrored the twin bones when rigging the mesh.
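In equation form, this delta swap works as follows: for a twin pair of bones $1$ and $2$ with local bind-pose rotations $q_1^{ref}$, $q_2^{ref}$ and animated local rotations $q_1$, $q_2$, each bone receives its twin's delta-from-bind-pose re-applied on top of its own bind pose:

$$q_1' = q_1^{ref}\left(q_2^{ref}\right)^{-1} q_2, \qquad q_2' = q_2^{ref}\left(q_1^{ref}\right)^{-1} q_1$$

The translations are swapped the same way, with each twin's translation delta rotated into the receiving bone's frame. This is what the Evaluate code shown later does for the mapped bones.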

2- Mirroring bones with no twin

Bones like the root, pelvis or spine don't have any twin in the hierarchy. For these bones, we again have to subtract the binding pose from the current bone transform. Now this subtracted transform should be mirrored, and this time we need a mirror axis. The mirror axis should be selected by the user; mostly it is X, Y or Z in the bone's binding pose space. For rotations, if you select X as the mirror axis, you should negate the Y and Z components of the quaternion. For translations, things are a little different, because we never want to change the up and forward direction of the motion: by mirroring the animation, we don't want the character to move upside down or backward. We just want the side movement to be negated. So for the translations we only negate one component of the translation vector, which means it is not a mirror in the strict mathematical sense.
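As a concrete example, with X selected as the rotation mirror axis and X as the translation negation direction, a delta rotation $q = (x, y, z, w)$ and a delta translation $t = (t_x, t_y, t_z)$ become

$$q' = (x,\,-y,\,-z,\,w), \qquad t' = (-t_x,\,t_y,\,t_z)$$

so only the side component of the movement flips, while the up and forward components stay untouched.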

Below are some parts of the code I wrote for the mirror animation node.

Here is the AnimationMirrorData header file:

 #pragma once

 #include "Object.h"
 #include "AnimationMirrorData.generated.h"

 UENUM(BlueprintType)
 enum class MirrorDir : uint8
 {
      None = 0,
      X_Axis = 1,
      Y_Axis = 2,
      Z_Axis = 3
 };

 UCLASS(BlueprintType)
 class ANIMATIONMIRRORING_API UAnimationMirrorData : public UObject
 {
      GENERATED_BODY()

 public:
      //Shows mirror axis. 0 = None, 1 = X, 2 = Y, 3 = Z
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")
      MirrorDir MirrorAxis_Rot;

      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")
      MirrorDir RightAxis;

      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")
      MirrorDir PelvisMirrorAxis_Rot;

      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")
      MirrorDir PelvisRightAxis;

      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")
      void SetMirrorMappedBone(const FName bone_name, const FName mirror_bone_name);

      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")
      FName GetMirrorMappedBone(const FName bone_name) const;

      TArray<FName> GetBoneMirrorDataStructure() const;

      //Flat list of pairs: [bone, its twin, bone, its twin, ...]
      TArray<FName> mMirrorData;
 };

And here are the two functions which are mainly responsible for mirroring animations:

 void FAnimMirror::Evaluate(FPoseContext& Output)
 {
      if (!mAnimMirrorData)
      {
           return;
      }

      if (Output.AnimInstance)
      {
           TArray<FCompactPoseBoneIndex> lAr;
           int32 lMirBoneCount = mAnimMirrorData->GetBoneMirrorDataStructure().Num();

           //Mirror the mapped (twin) bones: swap each pair's delta from the bind pose.
           for (int32 i = 0; i < lMirBoneCount; i += 2)
           {
                FCompactPoseBoneIndex lInd1 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i]));
                FCompactPoseBoneIndex lInd2 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i + 1]));
                lAr.Add(lInd1);
                lAr.Add(lInd2);

                FTransform lT1 = Output.Pose[lInd1];
                FTransform lT2 = Output.Pose[lInd2];

                //Each bone receives its twin's delta re-applied on its own bind pose.
                Output.Pose[lInd1].SetRotation(Output.Pose.GetRefPose(lInd1).GetRotation() * Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation());
                Output.Pose[lInd2].SetRotation(Output.Pose.GetRefPose(lInd2).GetRotation() * Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation());
                Output.Pose[lInd1].SetLocation((Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation() * (lT2.GetLocation() - Output.Pose.GetRefPose(lInd2).GetLocation()))
                     + Output.Pose.GetRefPose(lInd1).GetLocation());
                Output.Pose[lInd2].SetLocation((Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation() * (lT1.GetLocation() - Output.Pose.GetRefPose(lInd1).GetLocation()))
                     + Output.Pose.GetRefPose(lInd2).GetLocation());
           }

           //Mirror the unmapped bones (pelvis, spines, neck, head and so on).
           FCompactPoseBoneIndex lPoseBoneCount = FCompactPoseBoneIndex(Output.Pose.GetNumBones());
           for (FCompactPoseBoneIndex i = FCompactPoseBoneIndex(0); i < lPoseBoneCount; ++i)
           {
                if (!lAr.Contains(i) && !i.IsRootBone())
                {
                     //Extract the delta from the bind pose.
                     FTransform lT = Output.Pose[i];
                     lT.SetRotation(Output.Pose.GetRefPose(i).GetRotation().Inverse() * Output.Pose[i].GetRotation());
                     lT.SetLocation(Output.Pose[i].GetLocation() - Output.Pose.GetRefPose(i).GetLocation());

                     //Mirror the delta, using the pelvis axes for the pelvis (bone index 1).
                     if (i.GetInt() != 1)
                     {
                          MirrorPose(lT, (uint8)mAnimMirrorData->MirrorAxis_Rot, (uint8)mAnimMirrorData->RightAxis);
                     }
                     else
                     {
                          MirrorPose(lT, (uint8)mAnimMirrorData->PelvisMirrorAxis_Rot, (uint8)mAnimMirrorData->PelvisRightAxis);
                     }

                     //Re-apply the mirrored delta on top of the bind pose.
                     Output.Pose[i].SetRotation(Output.Pose.GetRefPose(i).GetRotation() * lT.GetRotation());
                     Output.Pose[i].SetLocation(Output.Pose.GetRefPose(i).GetLocation() + lT.GetLocation());
                }
           }
      }
 }

 void FAnimMirror::MirrorPose(FTransform& input_pose, const uint8 mirror_axis, const uint8 pos_fwd_mirror)
 {
      //Negate only one translation component (the side direction), so the up
      //and forward movement of the animation is preserved.
      FVector lMirroredLoc = input_pose.GetLocation();
      if (pos_fwd_mirror == 1)
      {
           lMirroredLoc.X = -lMirroredLoc.X;
      }
      else if (pos_fwd_mirror == 2)
      {
           lMirroredLoc.Y = -lMirroredLoc.Y;
      }
      else if (pos_fwd_mirror == 3)
      {
           lMirroredLoc.Z = -lMirroredLoc.Z;
      }
      input_pose.SetLocation(lMirroredLoc);

      //Mirror the rotation by negating the two quaternion components
      //perpendicular to the chosen mirror axis.
      switch (mirror_axis)
      {
           case 1:
           {
                float lY = -input_pose.GetRotation().Y;
                float lZ = -input_pose.GetRotation().Z;
                input_pose.SetRotation(FQuat(input_pose.GetRotation().X, lY, lZ, input_pose.GetRotation().W));
                break;
           }
           case 2:
           {
                float lX = -input_pose.GetRotation().X;
                float lZ = -input_pose.GetRotation().Z;
                input_pose.SetRotation(FQuat(lX, input_pose.GetRotation().Y, lZ, input_pose.GetRotation().W));
                break;
           }
           case 3:
           {
                float lX = -input_pose.GetRotation().X;
                float lY = -input_pose.GetRotation().Y;
                input_pose.SetRotation(FQuat(lX, lY, input_pose.GetRotation().Z, input_pose.GetRotation().W));
                break;
           }
      }
 }

I haven't included the whole source code here. If you need it, just contact me and I will send it to you.

Monday, September 21, 2015

Creating Non-Repetitive Randomized Idle Using Animation Blending

You might have seen that the standing idle animations in video games are a kind of magical movement: they never get repetitive. The character looks in different directions with a non-repetitive pattern, shows different facial animations, shifts his/her weight randomly and does many other usual acts of a standing idle.

This kind of animation can be implemented using an animation blend tree and a component which manipulates the animation weights. This post shows how a non-repetitive idle animation can be created.

Defining Animation Blend Tree for Idle Animation

In this section, I'm going to define an animation blend tree which can provide a range of possible idle motions. Before creating the blend tree, the animations used within it are described here:

1- A simple breathing idle animation which is just 70 frames (2.33 seconds) long.

2- A left weight shift animation, similar to the original idle animation but with the pelvis shifted to the left and a more curved torso. "Similar" here means that the animations have the same timings and almost the same poses, differing only in the main poses; this difference shows the weight-shift-left pose. I created the weight shift animation just by adding an additive keyframe to different bones on top of the original idle animation in the DCC tool.

3- A right weight shift animation, similar to the original idle animation but with the pelvis shifted to the right and a more curved torso.

4- Four different look animations: look left, right, up and down. These four are all one-frame additive animations; their transforms are subtracted from the first frame of the original idle animation.

5- Two different facial and simple body movement animations. These two animations are additive as well. They add some facial animation to the original idle animation, plus some movement over the torso and hands.

So the required animations are described. Now let's define a scenario for the blend tree in three steps before creating it:

1- We want the character to stand using an idle animation while often shifting his/her weight. So first we create a blend node which can blend between the left weight shift, the basic idle and the right weight shift.

2- The character should look around often, and we have four different additive look animations for this. So we create a blend node which can blend between the 4 additive look animations. It works with two parameters: one is mapped to blend between look left and look right, and one is mapped to blend between look up and look down. This blend node's result is added on top of the blend node defined in step 1.

3- After adding the head look animations, the two additive facial animations are added to the result. These two animations switch randomly whenever one reaches its final frame.

A blend tree capable of supporting this scenario is shown here:

Idle Animation Controller to Manipulate Blend Weights

So far, an animation blend tree has been created which can produce continuous motion from some simple additive and idle animations. Now we have to manipulate the blend weights to create a non-repetitive idle animation. This is an easy task. I'm going to define it in four steps to obtain a non-repetitive weight shift; the same steps can be used for the facial and look animations as well:

1- First, we randomly select a target weight for the weight shift. It should be in the range of the weight shift parameter defined in the blend tree.

2- We define a random blend speed which makes the character shift weight through time until it reaches the target weight selected in step 1. The blend speed is randomly selected from a reasonable numeric range.

3- When we reach the target blend weight for the weight shift, the character should remain at that blend weight for a while. That's exactly what humans do in reality: when a human stands, he/she shifts his/her weight to the left or right and stays in that pose for a while, since shifting weight helps the body relax the spine muscles. So we select a random time from a reasonable range as the weight shift hold time.

4- After the selected hold time ends, we go back to step 1, and this loop repeats while the character is in the idle state.

The same four steps go for the directional look and facial animations as well.

This random selection of times, speeds and target weights creates a non-repetitive idle animation. The character always looks in different directions at different times while shifting his weight to the left or right and doing different facial and body movement animations, all with different and random times, speeds and poses.

You can check the result here in this video:

Here is the source code I wrote for the idle animation controller. The system is implemented in Unreal Engine 4. This component calculates the blend weights and passes them to the animation blend tree.

The header file:

 #pragma once

 #include "Components/ActorComponent.h"
 #include "ComponenetIdleRandomizer.generated.h"

 UCLASS( ClassGroup=(Custom), meta=(BlueprintSpawnableComponent) )
 class RANDOMIZEDIDLE_API UComponenetIdleRandomizer : public UActorComponent
 {
      GENERATED_BODY()

 public:
      UComponenetIdleRandomizer();

      // Called every frame
      virtual void TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction ) override;

      /*Value to be used for weight shift blend*/
      float mCurrentWeightShift;

      /*Value to be used for idle look blend*/
      FVector2D mCurrentHeadDir;

      /*Value to be used for idle facial blend*/
      float mCurrentFacial;

      //Randomly selected targets and timings (see the CPP for initialization).
      FVector2D mTargetHeadDir;
      float mTargetWeightShift;
      float mTargetFacial;
      float mWSTransitionTime;
      float mWSTime;
      float mWSCurrentTime;
      float mLookTransitionTime;
      float mLookTime;
      float mLookCurrentTime;
      float mFacialTransitionTime;
      float mFacialTime;
      float mFacialCurrentTime;
      float mLookTransitionSpeed;
      float mWSTransitionSpeed;
      float mFacialTransitionSpeed;
 };

And here is the CPP:

 #include "RandomizedIdle.h"  
 #include "ComponenetIdleRandomizer.h"  
      // Set this component to be initialized when the game starts, and to be ticked every frame. You can turn these features  
      // off to improve performance if you don't need them.  
      bWantsBeginPlay = true;  
      PrimaryComponentTick.bCanEverTick = true;  
      // ...  
      //weight shift initialization  
      mTargetWeightShift = FMath::RandRange(-100, 100) * 0.01f;  
      mCurrentWeightShift = 0;  
      mWSTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mWSTime = FMath::RandRange(20, 50) * 0.1f;  
      mWSCurrentTime = 0;  
      mWSTransitionSpeed = mTargetWeightShift / mWSTransitionTime;  
      //look initialization  
      mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
      mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
      mCurrentHeadDir = FVector2D::ZeroVector;  
      mLookTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mLookTime = FMath::RandRange(20, 40) * 0.1f;  
      mLookCurrentTime = 0;  
      mLookTransitionSpeed = mTargetHeadDir.Size() / mLookTransitionTime;  
      //facial initialization  
      mTargetFacial = FMath::RandRange(0, 100) * 0.01f;  
      mCurrentFacial = 0;  
      mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
      mFacialTime = FMath::RandRange(20, 40) * 0.1f;  
      mFacialCurrentTime = 0;  
      mFacialTransitionSpeed = mTargetFacial / mFacialTransitionTime;  
 void UComponenetIdleRandomizer::TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction )  
      Super::TickComponent( DeltaTime, TickType, ThisTickFunction );  
      /*look weight calculations*/  
      if (mLookCurrentTime > mLookTransitionTime + mLookTime)  
           mLookTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookTransitionTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookCurrentTime = 0;  
           mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
           mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
           mLookTransitionSpeed = (mTargetHeadDir - mCurrentHeadDir).Size() / mLookTransitionTime;  
      mCurrentHeadDir += mLookTransitionSpeed * (mTargetHeadDir - mCurrentHeadDir).GetSafeNormal() * GetWorld()->DeltaTimeSeconds;  
      if (mLookCurrentTime > mLookTransitionTime)  
           float lTransitionSpeedSign = FMath::Sign(mLookTransitionSpeed);  
           mLookTransitionSpeed = mLookTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
           if (lTransitionSpeedSign * FMath::Sign(mLookTransitionSpeed) == -1)  
                mLookTransitionSpeed = 0;  
           if (FMath::Abs(mCurrentHeadDir.X) > 0.9f)  
                mCurrentHeadDir.X = FMath::Sign(mCurrentHeadDir.X) * 0.9f;  
           if (FMath::Abs(mCurrentHeadDir.Y) > 0.2f)  
                mCurrentHeadDir.Y = FMath::Sign(mCurrentHeadDir.Y) * 0.2f;  
      mLookCurrentTime += GetWorld()->DeltaTimeSeconds;  
      /*weight shift calculations*/  
      if (mWSCurrentTime > mWSTransitionTime + mWSTime)  
           mWSTime = FMath::RandRange(20, 50) * 0.1f;  
           mWSTransitionTime = FMath::RandRange(30, 50) * 0.1f;  
           mWSCurrentTime = 0;  
           mTargetWeightShift = FMath::RandRange(-80, 80) * 0.01f;  
           mWSTransitionSpeed = (mTargetWeightShift - mCurrentWeightShift) / mWSTransitionTime;  
      mCurrentWeightShift += mWSTransitionSpeed * GetWorld()->DeltaTimeSeconds;  
      if (mWSCurrentTime > mWSTransitionTime)  
           float lTransitionSpeedSign = FMath::Sign(mWSTransitionSpeed);  
           mWSTransitionSpeed = mWSTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
           if (lTransitionSpeedSign * FMath::Sign(mWSTransitionSpeed) == -1)  
                mWSTransitionSpeed = 0;  
           if (FMath::Abs(mCurrentWeightShift) > 1)  
                mCurrentWeightShift = FMath::Sign(mCurrentWeightShift) * 1;  
      mWSCurrentTime += GetWorld()->DeltaTimeSeconds;  
      /*facial calculations*/  
      if (mFacialCurrentTime > mFacialTransitionTime + mFacialTime)  
           mFacialTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialCurrentTime = 0;  
           mTargetFacial = FMath::RandRange(0, 100) * 0.01f;  
           mFacialTransitionSpeed = (mTargetFacial - mCurrentFacial) / mFacialTransitionTime;  
      mCurrentFacial += mFacialTransitionSpeed * GetWorld()->DeltaTimeSeconds;  
      if (mFacialCurrentTime > mWSTransitionTime)  
           mCurrentFacial = mTargetFacial;  
      mFacialCurrentTime += GetWorld()->DeltaTimeSeconds;  

Monday, August 10, 2015

The Challenge of Having Responsiveness and Naturalness in Game Animation

Video games, as software, need to meet functional requirements, and obviously the most important functional requirement of a video game is to provide entertainment. Players want to have interesting moments while playing, and many factors can bring this entertainment to them.

One of the important factors is the animation within the game. Animation is important because it affects the game from different aspects: beauty, controls, narration and driving the logic of the game are among them.

This post considers animation in terms of responsiveness and discusses some techniques to retain naturalness as well.

Here I'm going to share some tips we used in the animations of "Shadow Blade: Reload", a 3D action-platforming side-scroller. The PC version of SB:R was released on August 10th, 2015 via Steam, and the console versions are on the way. Before going further, let's have a look at some parts of the gameplay here:

You may want to check the Steam page here too.

So here we can discuss the problem. First, consider a simple example in the real world: you punch a punching bag. You rotate your hip, torso and shoulder in order, spending energy to rotate and move your different limbs. You feel the momentum in your limbs and muscles, and you hear the punch sound just after landing it on the bag. So you sense the momentum through your tactile sensation, hear the sounds related to your action and see the desired motion of your body. Everything is synchronized! You feel the whole process with your different senses. Everything is ordinary here, and this is what our mind knows as natural.

Now consider another example, in a virtual world like a video game. This time you have a controller, you press a button and you want to see a desired motion. That motion can be any animation, like a jump or a punch. But this punch is different from the real-world example, because the player just moves his thumb on the controller while the virtual character has to move his whole body in response. Each time the player presses a button, the character should make an appropriate move. If you get a desired motion with good visuals and sound after pressing each button, you become merged into the game, because it's almost like the punching example in the real world. The synchronized response of animation, controls and audio helps players feel themselves within the game: they use their tactile sensation on the controller, their eyesight to see the desired motion and their hearing for the audio. Having all of these synchronized at the right moment brings both responsiveness and naturalness, which is what we like to see in our games.

Now, the problem is that to get responsiveness you have to kill some naturalness in the animations. In a game like Shadow Blade: Reload, responsiveness is very important, because any extra movement can cause the player to fall off the edges or be killed by enemies. However, we need good-looking animations as well. So here I'm going to list some tips we used to bring both responsiveness and naturalness to our playable character, named Kuro:

1- Using Additive Animations: Additive animations can be used to show asynchronous motions on top of the current animations. We used them in different situations to show momentum over the body without interrupting the player with separate animations. An example is the land animation. After the player's fall ends and he reaches the ground, he can continue running, attacking or throwing shurikens without any interruption or land animation. So we blend the fall directly with other animations like the run. But blending directly between fall and run doesn't produce an acceptable motion, so we add an additive land animation on top of the run (or whatever else is playing) to show the momentum over the upper body. The additive animation has purely visual purposes, and the player can continue running or doing other actions without any interruption.
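As a quick reminder of the math behind this (a sketch of the usual local-space additive formulation, not engine-specific): the additive clip stores a delta against a reference pose, and with blend weight $w$ that delta is applied on top of whatever base pose is currently playing:

$$t_{final} = t_{base} + w\left(t_{add} - t_{ref}\right), \qquad q_{final} = \mathrm{slerp}\left(I,\; q_{add}\, q_{ref}^{-1},\; w\right) q_{base}$$

so the land delta can fade in and out without ever interrupting the base run, attack or shuriken throw.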

We also used some other additive animations, for example a windmill additive animation on the spine and hands, played when the character stops and starts running consecutively. It adds momentum to the hands and spine.

These additive animations are simply added on top of the main animations without interrupting them, while the main animations like run and jump already provide good responsiveness.

2- Specific Turn Animations: You see turn animations in many games. For instance, pressing the movement button in the opposite direction while running makes the character slide and turn back. While this kind of animation is very good for many games and brings a good feeling to the motions, it is not suitable for an action-platformer like SB:R: you are always moving back and forth on platforms with little area, so such an extra movement can make you fall unintentionally, and it also kills responsiveness. So for turning, we just rotate the character 180 degrees in one frame. Of course, rotating the character 180 degrees in a single frame does not provide a good-looking motion by itself, so we used two different turn animations. They show the character turning: each starts facing opposite to the character's forward vector and ends aligned with it. When we turn the character in one frame, we play this animation, and it sells the turn completely. It has the same speed as the run animation, so nothing changes in terms of responsiveness; you just see a turn animation showing the momentum of the turn over the body, which brings good visuals to the game.

One thing to consider here is that the turn animation starts in a direction opposite to the character's forward vector, so we turned transitional blending off for it, because blending can produce jerky motions on the root bone.
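Putting this tip together, here is a minimal sketch in UE4 terms (the class, montage members and foot-phase flag are hypothetical names for illustration, not our actual code):

 void AShadowCharacter::TurnBack()
 {
      // Rotate the character 180 degrees in a single frame: full responsiveness,
      // no slide and no extra travelled distance on small platforms.
      AddActorWorldRotation(FRotator(0.f, 180.f, 0.f));

      // The turn animation starts opposite to the new forward vector and ends
      // aligned with it, so it visually explains the snap. Two variants exist;
      // pick the one matching the current foot phase of the run (see below).
      UAnimMontage* lTurnAnim = bLeftFootDown ? mTurnLeftFootMontage : mTurnRightFootMontage;

      // Played at the run animation's speed, with transitional blending off,
      // since blending from the opposite facing direction jerks the root bone.
      PlayAnimMontage(lTurnAnim, 1.0f);
 }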

To avoid frame mismatches and foot-skating, we used two different turn animations and play them based on the foot phases of the run animation. You may check out the turn animation here:

3- Slower Enemies: While the main character is very agile, the enemies are not! Their animations have many more frames. This helps us draw the players' focus away from the main character in many situations. The human eye has a great ability to focus on different objects: when you are looking at one enemy, you see it clearly and not the others. Slower enemy animations with more frames help pull the focus away from the player character at many points.

As a side note: I was watching a scientific show about human eyes a while ago, and it claimed that women's eyes have a wider field of view while men's focus better. You might want to check that research if you are interested in this topic.

4- Safe Blending Intervals to Cancel Animations: Consider a grappling animation. It starts from the idle pose and ends in the idle pose again, but it does its actual job within the first 50% of its length; the rest of its time is just for the character to get back to the idle pose safely and smoothly. Most of the time, players don't want to watch animations to their end point; they prefer to do other actions. In our game, players usually tend to cancel the attack and grappling animations after they kill enemies: they want to run, jump or dash and continue navigating. So for each animation which can be cancelled, we set a safe blending interval which is used as the time window for cancelling the current animation(s). This interval provides poses which blend well with run, jump, dash or other attacks, giving less foot-skating, fewer frame mismatches and good velocity blending during the transition.
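A minimal sketch of how such a window can be checked (the names and the normalized-time representation are illustrative assumptions, not our actual code):

 //A safe interval, in normalized animation time [0..1], in which the
 //animation may be cancelled into run, jump, dash or another attack.
 struct FCancelWindow
 {
      float Start; // e.g. 0.5f: the attack has done its job by half its length
      float End;   // e.g. 0.9f: close to the end, a normal blend out is fine
 };

 bool CanCancelAnimation(const FCancelWindow& window, float anim_time, float anim_length)
 {
      const float lNormalizedTime = anim_time / anim_length;
      return lNormalizedTime >= window.Start && lNormalizedTime <= window.End;
 }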

5- Continuous Animations: In SB:R, most of the animations were animated with respect to the animation(s) most likely to be playing before them.

For example, we have run attacks for the player. When animating them, the animators concatenated one loop of run before the attack and created the run attack right after it. This gives good speed blending between the source and destination animations, because the run attack animation was created with respect to the original run animation, and it lets us retain the speed and responsiveness of the previous animation in the current one.

Another example here is the edge climb, which starts from the wall run animation.

6- Context Based Combat: In SB:R we have context based combat, which helps us use different animations based on the current state of the player (moving, standing, jumping, distance and/or direction to enemies).

Attacking from each state causes different animations to be selected, all of which preserve almost the same speed and momentum as the player's current state (moving, standing, diving and so on).

For instance, we have run attacks, dash attacks, dive attacks, back stabs, Kusarigama grapples and many other animations. Each starts from its respective state animation, like run, jump, dash or stand, and each tries to preserve the previous motion's speed and responsiveness.

7- Physically Simulated Cloth as Secondary Motion: Although responsiveness can lower naturalness, adding secondary motion like cloth simulation helps to compensate. In SB:R, the main character Kuro has a scarf, which helps us show more acceptable motions.

8- Tense Ragdolls and Lower Crossfade Time in Contacts: Removing crossfade transition times on hits and applying more force to the ragdolls helps achieve better hit effects. This is useful in many games, not just in our case.


Responsiveness vs. naturalness is always a huge challenge in video games, and there are ways to achieve both. Most of the time you have to make trade-offs between the two to reach a decent result.

For those who are eager to learn more about this topic, I recommend this good paper from the Motion in Games conference:

Aline Normoyle, Sophie Jorg, "Trade-offs between Responsiveness and Naturalness for Player Characters", 2014.

It shows interesting results about players' responses to animations with different amounts of responsiveness and naturalness.

Tuesday, June 30, 2015

Foot Placement Using Foot IK

Inverse kinematics has found its way into character animation very well. It has become a major part of animation content creation tools, to the point that animators can hardly animate characters without IK. Different solvers exist for inverse kinematics: the analytical solution for IK chains with two bones and Cyclic Coordinate Descent (CCD) for chains with more than two bones are the most famous ones, and they are widely used in animation content creation tools. IK has found its way into real time animation systems as well, including game engines and the animation middleware widely used in games, and using it has almost become a standard for games with good visuals. By using IK in real time, characters can cope with the variety of the environments they move through. Assume a character with a walk animation. The animator authored it on an even surface, so it will look fine on even ground; but when you move the character on an uneven surface, the feet are not placed correctly on the ground. Here you can use foot IK to place the character's feet on the ground. The usage of IK is not restricted to feet: it can be used for hands as well, and the same scenario applies to a rock climbing feature for both hands and feet. Foot IK is also used to avoid foot skating.
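For reference, the analytical two-bone case reduces to the law of cosines: with upper and lower bone lengths $L_1$ and $L_2$ and distance $d$ from the chain root to the IK target (clamped to the reachable range $|L_1 - L_2| \le d \le L_1 + L_2$), the bend angle $\theta$ between the two bones satisfies

$$\cos\theta = \frac{L_1^2 + L_2^2 - d^2}{2 L_1 L_2}$$

CCD, by contrast, iterates over the chain, rotating one joint at a time to bring the end effector closer to the target until it converges.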

In real time animation systems, IK acts as a post-process on animations. This means the original animation(s) are always calculated in their normal way, and then IK is applied afterward to correct the character's pose so it responds well to changes in the environment. It is also used to avoid foot skating while moving the root position/rotation procedurally or semi-procedurally.

Using IK in video games can go beyond this, as some games have integrated full body IK within their engines. Full body IK has not become a standard in the gaming industry yet, and not many games use it, but IK for hands and feet has almost become a standard for games that care about their visuals.

This post shows how a foot placement system can be created to place the character's feet on uneven surfaces dynamically, or to plant feet on the ground to avoid foot skating. The post is based on the document I provided with a foot placement system named "Mec Foot Placer", which I implemented in my free time. It's free and you can get it from the Unity Asset Store. I've shared some useful parts of the document here for those who want to use or implement this kind of system in their games.

Before going further, I recommend checking out these Unity Web Player builds to see how the system affects the character's feet:

Mec Foot Placer with plant foot feature
Mec Foot Placer without plant foot feature

And here is the link to the asset store:

Mec Foot Placer on Unity Asset Store

The system uses Unity 5 Mecanim, so you might see some Mecanim-specific notes in the post. If you are not a Unity3D developer, you can skip the Unity-specific topics, though they may be helpful as well. The technique described here is not restricted to Unity: you can implement it on any platform which offers IK, FK and physics. So in this post I tried to describe the system generally, for those who want to implement it on other platforms (not just Unity), with some Unity-specific notes at the end.

Mec Foot Placer

1- Introduction

Mec Foot Placer provides an automatic workflow for placing the character's feet on grounds and uneven terrains. This document provides the details of the system and shows how it can be set up. Mec Foot Placer acts as a post-process on animations, so while it places the feet automatically on the ground, it preserves the overall shape of the feet determined by the active animation(s).

2- Work Flow

Mec Foot Placer finds the appropriate foot position on the ground using raycasts. The system uses three raycasts to find the foot position, the toe position and the heel corner position. The toe position is used for the foot pitch rotation, and the heel corner position is used for the foot roll. The foot yaw rotation is taken from the animation itself, to make sure the original animation pose is not wrongly affected. The system always checks ground availability based on the foot position from the current active animation(s). If the system detects any ground, it sets the foot at an appropriate position and rotation on it.

When the system is active, it automatically places the foot on the ground using the following steps:

1- First, it gets the foot position from the current active animation(s).

2- It picks a ray origin above that foot position, offset along the up vector by a custom offset distance.

3- The ray points down from this origin, in the direction opposite to the up vector, with a length equal to the same offset distance from step 2 plus the foot height plus a custom extra ray distance value. Figure 1 shows how the ray is cast for steps 1 to 3.

Figure 2 shows the final foot position after detecting a contact point. The white sphere in Figure 2 shows the detected contact point.

The detected contact point is not directly suitable for placing the foot, because it ignores the foot height: the leg would be stretched and the foot would penetrate the ground. So a vector equal to UpVector * FootHeight is added to the detected contact point (white sphere) to get the final foot position. The up vector is normalized automatically within the system. The blue sphere in Figure 2 shows the final foot position.

Figure 1- Ray casting for finding foot contact point

Figure 2- Final foot position after detecting a contact point

4- From the detected foot position, another ray is cast based on the foot forward vector and the current foot rotation from the FK pose (the foot's animation pose). This ray is used to find the toe position, which in turn is used to find the foot pitch rotation. Figure 3 shows how the toe position is found using raycasts.

Figure 3- Raycast for finding Toe position

The Toe Vector in Figure 3 is equal to the foot yaw rotation from the animation multiplied by the normalized forward vector, multiplied by the foot length: ToeVector = FootYawRot * Normalize(ForwardVector) * FootLength.

Applying the foot yaw rotation from the animation causes the system to preserve the original foot direction determined by the artist/animator. Figure 4 shows the detected toe position leading the foot to be placed on the surface correctly. The blue sphere shows the detected toe position.

Figure 4- Detected toe position and the according foot rotation

5- From the foot position detected in steps 1 to 3, another ray is cast to find the heel corner position. This ray is used to find the foot roll rotation. Figure 5 shows how this step works.

The Heel Vector in Figure 5 is equal to the foot yaw rotation from the animation multiplied by the normalized right vector, multiplied by the foot half width (HeelVector = FootYawRot * Normalize(RightVector) * FootHalfWidth), where the right vector is the forward vector rotated 90 degrees around the up vector.

Figure 5- Raycast for finding Heel corner position

The blue sphere in Figure 6 shows the detected heel corner position and the foot roll rotation based on it.

Figure 6- Detected heel corner position and the according foot roll rotation

6- The system also adjusts the IK hints of the legs automatically to achieve a natural knee shape. IK hints are also known as swivel angles; they determine the plane in which the IK chain is solved.

Figure 7 shows how the detected IK hint position is set. The blue sphere shows the final IK hint position, and the white sphere shows the detected toe position. The calf vector is a vector in the direction of the calf bone (lower leg), with a magnitude equal to the calf bone length.

Note that when the system is in IK mode and the ray for foot position detection fails to hit any ground, the foot placement system transitions to FK mode smoothly through time. The same is true when the system switches back from FK to IK.

Figure 7- Final IKHint position (swivel angle)
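To summarize steps 1 to 5 in code, here is a condensed, engine-agnostic sketch of the query (my own illustrative C++, not the actual Mec Foot Placer source; the raycast callback and all names are assumptions):

 #include <functional>

 //Minimal vector type for the sketch.
 struct Vec3
 {
      float X = 0.f, Y = 0.f, Z = 0.f;
      Vec3 operator+(const Vec3& o) const { return { X + o.X, Y + o.Y, Z + o.Z }; }
      Vec3 operator-(const Vec3& o) const { return { X - o.X, Y - o.Y, Z - o.Z }; }
      Vec3 operator*(float s) const { return { X * s, Y * s, Z * s }; }
 };

 //Host-engine raycast: returns true on a hit and writes the contact point.
 using RaycastFn = std::function<bool(const Vec3& origin, const Vec3& dir, float max_dist, Vec3& out_hit)>;

 struct FootQuery
 {
      Vec3 FootPosFromAnim;  //FK foot position from the active animation(s)
      Vec3 UpVector;         //normalized character up vector
      Vec3 ToeVector;        //FootYawRot * Normalize(ForwardVector) * FootLength
      Vec3 HeelVector;       //FootYawRot * Normalize(RightVector) * FootHalfWidth
      float FootOffsetDist;  //ray origin offset above the foot
      float FootHeight;      //distance from heel center to lower leg joint
      float ExtraRayDist;    //extra search distance below the foot
 };

 //Fills the corrected foot, toe and heel corner positions if ground is found;
 //otherwise returns false and the caller blends back to FK through time.
 bool PlaceFoot(const RaycastFn& raycast, const FootQuery& q,
                Vec3& out_foot, Vec3& out_toe, Vec3& out_heel_corner)
 {
      const Vec3 lDown = q.UpVector * -1.f;
      const float lDist = q.FootOffsetDist + q.FootHeight + q.ExtraRayDist;

      //Steps 1-3: cast down from an origin above the animated foot position.
      Vec3 lHit;
      if (!raycast(q.FootPosFromAnim + q.UpVector * q.FootOffsetDist, lDown, lDist, lHit))
           return false;

      //Raise the contact point by the foot height so the leg is not
      //stretched and the foot does not penetrate the ground (Figure 2).
      out_foot = lHit + q.UpVector * q.FootHeight;

      //Step 4: toe ray for the foot pitch rotation (Figure 3).
      if (!raycast(out_foot + q.ToeVector + q.UpVector * q.FootOffsetDist, lDown, lDist, out_toe))
           out_toe = out_foot + q.ToeVector;

      //Step 5: heel corner ray for the foot roll rotation (Figure 5).
      if (!raycast(out_foot + q.HeelVector + q.UpVector * q.FootOffsetDist, lDown, lDist, out_heel_corner))
           out_heel_corner = out_foot + q.HeelVector;

      return true;
 }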

3- Foot Placement Input Data

Each foot needs input data to work correctly. For this reason, a FootPlacementData component is provided to manage each foot's data. The input variables of the component should be filled in by the user. Each input variable is described here:
  • Is Left Foot: Check this check box if the current FootPlacementData component you are setting up is for the left foot; otherwise it will be considered as the right foot.

  • Plant Foot: If this check box is checked, the system enables the foot planting feature. The character's foot will be planted after detecting a contact point, and it will remain at the detected position and rotation until the system automatically returns to FK mode. It also tracks ground height changes, so feet can stay placed on the ground while being planted. This feature provides a good solution for avoiding foot skating. If the plant foot check box is not checked, the foot always gets its position and rotation from the animation, and foot placement then uses raycasts to place it on the ground. If plant foot is checked, the foot is placed at the first detected contact point and does not follow the animation while in IK mode. While the foot plant feature is active, the system can blend between the planted foot position/rotation and the non-planted foot position/rotation. In either case, feet are always placed on uneven terrains and grounds.

The foot plant feature has some functions to manipulate its blend weights; check out section 4 to find out more. It is recommended to enable this feature in states which are prone to foot skating, like locomotion states, and disable it in states which are not, like a standing idle. Please check "IdleUpdate.cs" and "LocomotionUpdate.cs" to see how to disable and enable this feature safely.
  • Forward Vector: This vector shows the character's initial foot direction. What you see in the character's foot reference pose (the Mecanim T-pose) is what you should use here. Often the character's initial forward vector is equal to the foot forward vector.

  • IK Hint Offset: The IK hint position is calculated automatically, as stated in section 2. The IK Hint Offset is added to the calculated position to fine-tune the final IK hint position.

  • Up Vector: Shows the character's up vector. This should equal the world up vector (Vector3(0, 1, 0)) if the character is moving on the ground; for some rarer situations, like running on walls, it should be changed accordingly.

  • Foot Offset Dist: The distance used for raycasting. Figures 1, 3 and 5 show the parameter in action.

  • Foot Length: Used to find the toe position, which sets the foot pitch rotation. It should be equal to the character's foot length. Figure 3 shows the details.

  • Foot Half Width: This parameter should be equal to half the width of the foot. It is used to find the foot roll rotation. Figure 5 shows the details.

  • Foot Height: Used to set the correct heel position on the ground. It should be equal to the distance from the heel center to the lower leg joint. Figures 1 and 2 show the details.

  • Foot Rotation Limit: The IK foot rotation will not exceed this value. The value is in degrees.

  • Transition Time: When no contact point is detected, the system switches to FK mode smoothly through time; when it is in FK mode and finds a contact point, it switches back to IK mode smoothly. The "Transition Time" parameter is the length of this smooth FK-to-IK or IK-to-FK transition.

  • Extra Ray Distance Check: This parameter is used to find the correct foot, toe or heel corner position on the ground. Figures 1, 3 and 5 show the parameter in action. It can be changed dynamically to achieve better visual results. Check out the "IdleUpdate.cs" and "LocomotionUpdate.cs" scripts in the project for details: they change the "Extra Ray Distance Check" value based on the foot position in the animations and the current animation state. Both scripts use Unity 5 animator behaviour callbacks.

4- Mec Foot Placer Component

The Mec Foot Placer component is responsible for setting the correct rotation and position of the feet on the ground and for switching between IK and FK automatically.
Mec Foot Placer provides some functions which can be used by the user. These functions are stated here:

  • void SetActive(AvatarIKGoal foot_id, bool active): Sets the system on or off safely for each foot. In some states you don't need the system to be active; for example, when the character is falling, there is no need to check for foot placement. The system would still work correctly in that state, but the user can disable it to skip the component's calculations. Each foot can be activated or deactivated separately.
  • bool IsActive(AvatarIKGoal foot_id): Returns true if the given foot is active, otherwise false.

  • void SetLayerMask(LayerMask layer_mask): Sets the layer mask used by the raycasts within the system. The default value is Everything (LayerMask.NameToLayer("Everything")), which means the raycasts can collide with every object in the world.

Previously there was a check box in the foot placement data component named "Ignore Character Controller". That option has been removed; instead, the ability to set layer masks has been added. To avoid collisions between the raycasts and the character controller of the owning game object, the layer mask should be set correctly. For example, if you set the owning game object's layer to 8 and you want the rays to collide with the whole world except the owning game object, you can use this sample code:

SetLayerMask( ~0 & ~(1 << 8));

or, equivalently, using the layer's name:

SetLayerMask( ~0 & ~(1 << LayerMask.NameToLayer("CurrentGameObjectLayer")));

  • LayerMask GetLayerMask(): Returns the layer mask set for the raycasts.

  • void EnablePlant(AvatarIKGoal foot_id, float blend_speed): Raises the plant foot weight from its current value to 1 through time, based on the blend speed parameter. It is useful for states which need foot planting, like locomotion: the plant foot feature is activated smoothly through time. For the plant foot feature to have an effect, the plant foot weight must be higher than 0.

  • void DisablePlant(AvatarIKGoal foot_id, float blend_speed): Lowers the plant foot weight from its current value to 0 through time, based on the blend speed parameter. It is useful for states which don't need foot planting, like standing idles: the plant foot feature is deactivated smoothly through time.

  • void SetPlantBlendWeight(AvatarIKGoal foot_id, float weight): Sometimes you need to change the plant blend weight manually rather than using DisablePlant or EnablePlant. This function sets the blend weight between the planted foot position/rotation and the non-planted foot position/rotation.

  • float GetPlantBlendWeight(AvatarIKGoal foot_id): Returns the current blend weight of the foot planting feature.

5- Quick Setup Guide

To set up the system, you have to add a MecFootPlacer component. It needs at least one FootPlacementData component, otherwise it will not work. If both feet should be handled, two FootPlacementData components should be added, one for the right foot and one for the left, so the system can manipulate both feet.
After setting up the components, the Mec Foot Placer system should work correctly. Check out the example scenes for more info.

5-1- Important Notes on Setting Up the System

Some important notes should be considered before setting up the system:

  • Exposing Necessary Bones: If you checked "Optimize Game Objects" in the avatar rig, some bones have to be exposed, since Mec Foot Placer needs them to work correctly. The bones are listed here:
           1- The corresponding bones for left and right feet.
           2- The corresponding bones for left and right lower legs.

          Check out the “Robot Kyle” avatar in the project to find how it can be done.

  • Setting up the data correctly: Don't forget that setting up the foot placement data needs precision, and in some states the data needs to be changed dynamically to achieve the best effects. For example, the "Extra Ray Distance Check" parameter should be increased or decreased in different states or at different animation times for better visual results. Fortunately, this can be done easily in Unity 5 using animator behaviour scripts. Check out "IdleUpdate.cs" and "LocomotionUpdate.cs" in the project for details; both scripts are called within the "SimpleLocomotion" animation controller. As the scripts show, "Extra Ray Distance Check" is increased when the character enters the idle state. In the locomotion state, it is increased at the times the character is putting a foot on the ground, to make sure the foot touches the ground, and decreased while the foot is off the ground, so the foot can move freely while still looking for ground contacts.

  • Checking the IK Pass Check Box: On any layer where you need IK, you have to check the IK pass check box in the animator controller so the IK callbacks are called by Mecanim. If you don't check this box, Mec Foot Placer will not work.

  • Mecanim Specific Features: Mecanim humanoid rigs provide a leg stretching feature which can make foot IK look better by avoiding knee popping. The value should be set accurately: high values make the character look cartoony, while low values increase the chance of knee popping. To set up the leg stretch feature, select your character asset (it should be a humanoid rig), select Configure in the Rig tab, then select the Muscles tab. In the "Additional Settings" group you can find a parameter named "Leg Stretch". Check out the "Robot Kyle" avatar in the project to find out more.

Wednesday, June 24, 2015

Avoiding High Dimensionality in Animation State Space

As computer hardware has progressed, video games have come to use plenty of animations, and this amount of animation needs to be managed. Each animation, or just some frames of it, needs to be played at the right moment to fulfill a motion-related task. Usually, developers try to keep the animation controller separate from other modules like AI or control, so those modules can just send some parameters to the animation system, and the animation system returns the most suitable animation in response. This keeps the complexity of managing animations out of the control and AI modules, which already have their own complexities.

The animation controller promises to return the most suitable animation based on the input parameters. Different rules exist for selecting animations from these parameters. Usually, the speed, rotation and translation of the current animation's bones are considered, and based on these a suitable animation is selected which has the least difference in speed and translation/rotation from the currently playing poses. The returned animation also has to satisfy the motion task: it has to do what the other modules expect it to do. For example, a path planner can send input parameters like a steering angle and a speed value to the animation controller, and the controller should return the best suited motion from its existing animations to follow the path correctly.

Different animation controllers have already become standard in video games. The most famous are animation state machines, found in many game engines and game animation middleware. They can be combined with animation blend trees, which most animation systems offer. Usually they are created manually by animation specialists.

There are other animation controllers, like motion graphs, parametric motion graphs and reinforcement-learning-based animation controllers. Each has its own characteristics, and they should be discussed separately. Just note that all of these controllers can be implemented on top of an animation state machine which offers animation blending, transitions, time offsets within transitions and hierarchical states. I can mention the Unreal Engine 4 animation system as a good one which has most of these features (not all).

Animation controllers might face a problem when they have to manage many animations: the high dimensionality of the state space. The controller has to create many states so it can respond well to the input parameters, and as the number of states grows, the number of possible transitions between them grows quadratically. A high dimensional state space makes the system impractical, memory consuming and very hard to maintain.
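To put a number on it: a state machine with $n$ states allows up to

$$n(n-1) = O(n^2)$$

directed transitions, so doubling the number of states roughly quadruples the transitions that have to be authored, tuned and maintained.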

In this post I want to introduce a paper based on research I did about 1.5 years ago; the paper was published about 9 months ago. The research was about reducing state parameters in a reinforcement learning based animation controller used for locomotion planning. Although RL-based animation controllers see little use in the gaming industry so far, they are finding their way in, because they offer an almost automatic workflow for connecting the separate animations of an animation database to fulfill different motion tasks, creating a continuous space out of separate clips.

I'll try to write another post showing how to reduce states in a manually created animation state machine, since those are the most widely used in the gaming industry. This post, however, is about reducing the dimensions of the state space in an RL-based animation controller.

Here is the abstract:

"Motion and locomotion planning have a wide area of usage in different fields. Locomotion planning with premade character animations has been highly noticed in recent years. Reinforcement Learning presents promising ways to create motion planners using premade character animations. Although RL-based motion planners offer great ways to control character animations but they have some problems that make them hard to be used in practice, including high dimensionality and environment dependency. In this paper we present a motion planner which can fulfill its motion tasks by selecting its best animation sequences in different environments without any previous knowledge of the environment. We combined reinforcement learning with a fuzzy motion planer to fulfill motion tasks. The fuzzy control system commands the agent to seek the goal in environment and avoid obstacles and based on these commands, the agent select its best animation sequences. The motion planner is taught through a reinforcement learning process to find optimal policy for selecting its best animation sequences. To validate our motion planner‟s performance, we implemented our method and compared it with a pure RL-based motion planner."

You may want to read the paper here.

Monday, May 25, 2015

Combining Ragdoll and Keyframe Animation to Achieve Dynamic Poses

One of the greatest challenges in game character animation is adapting characters to dynamic environments and the dynamic actions of players. Each character owns many animations which help the animation controllers respond well in different situations. But owning plenty of animations can't always be enough, since the environment the character moves in can be dynamic and can change through time. Player actions are dynamic as well, and this affects animations too: each action of the player needs feedback, and animation plays a huge role here.

So even if the character owns many animations, they can't cover all situations, and the motion's visuals and responsiveness can become unacceptable at some points. To overcome these issues, many animation techniques have been invented. Some are semi-procedural, like animation blending, and some are fully procedural, like IK and physically based animation. In this post, I want to talk a little about combining physical animation with keyframe animation, and briefly show how they can be blended together to create dynamic poses. This is a huge subject, so I'm only going to touch on it briefly; a small case study is provided at the end.

Combining Keyframe Animation and Ragdoll

Using ragdolls has become a standard in the video game industry. You can see it in many games, but nowadays a simple ragdoll is not acceptable for game developers who care more about animation, so they use more advanced physical animation. Different physical animation techniques exist; here I want to point out one which can be very useful in video games and which some game engines and physics/animation APIs provide.

Using a ragdoll alone usually creates unacceptable, non-human-like motions, so by itself it's only good for death animations; it can't be used for a living character. To get a better ragdoll simulation, the ragdoll can be driven by the active keyframe animations. Assume you have an animation generating poses each frame, and a ragdoll skeleton consisting of different physical joints and hinges. We can take the pose generated by the keyframe animation as a target and apply forces to the physical joints of the ragdoll skeleton to reach this target. The ragdoll skeleton then tries to follow the animation each frame while still reacting to the environment and to forces applied within the game. The pose generated by the physical skeleton can also be blended with the actual animation pose, so we can make transitions between physical and keyframe animation. This helps a lot in achieving dynamic poses. Consider a simple example: a shooter where you hit a running enemy. Based on the magnitude of the force caused by the bullet, you can change the blend factor between the ragdoll pose and the currently playing animations. You get a transition between animation and ragdoll while the ragdoll skeleton reacts to the bullet's force and simultaneously tries to pull itself back toward the current animation pose.
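The joint motors in this kind of system are commonly formulated as PD (proportional-derivative) servos. As a sketch of the usual form (the exact equations for motion capture tracking are in the paper mentioned below): with $q^{t}$ the target joint rotation taken from the keyframe pose, $q$ the current simulated rotation and $\omega$ the joint's angular velocity, each motor applies a torque

$$\tau = k_p \, \mathrm{axisangle}\!\left(q^{t} q^{-1}\right) - k_d \, \omega$$

where the gains $k_p$ and $k_d$ control how stiffly the ragdoll tracks the animation and how strongly it is damped.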

If you want the details of this technique, including the physical joint equations for following the animation, I recommend checking the Dynamic Response for Motion Capture Animation paper. The Havok physics API documentation also describes this technique very well, though without the equations.

This technique is beautifully implemented in the Havok animation tool. You can define a ragdoll skeleton for a character and let it follow the currently playing animations, with pose blending between the ragdoll and the animations. So it helps a lot in achieving dynamic, acceptable poses.

Next, I want to show the mentioned example in action. I implemented a simple body hit reaction by blending 4 run animations with the animation-driven ragdoll. I used the Havok Animation Tool and the Vision engine, both provided within the Project Anarchy tool set. In the Havok animation tool, the "Rigid Body Ragdoll Controls Modifier" is responsible for the animation-driven ragdoll.

A Simple Case Study

Now let's consider the example in more detail. I have a character which is running, and I want him to continue running while being hit, with the hit point differing each time; you can assume the character is hit by randomly fired bullets. For this I made an animation blend tree with 4 run animations: one normal run and three unstable runs.

First, based on the direction of the force applied to different joints, I blend between the three unstable runs: unstable run left, forward and right.

Second, based on the magnitude of the applied force, I blend between the normal run and the unstable runs. So at this point I've generated a pose just by blending between 4 animations based on the direction and magnitude of the applied force. Although animation blending creates smooth poses, it is not enough for dynamic poses in this example, so I apply the animation-driven ragdoll as a post-process on the poses generated by the blend tree. The animation-driven ragdoll applies forces to its motors and physical joints each frame to bring the ragdoll skeleton close to the pose generated by the active animations. So each frame we have two poses: one generated by the animation blend tree and one generated by the animation-driven ragdoll, which is also affected by the bullet forces.

To achieve a dynamic and acceptable pose, I blend between the two poses generated by the two systems based on the bullet's force magnitude. After applying a force, I blend back to the actual animations through time, so the pose generated by the animation-driven ragdoll fades out. If the character is hit by another bullet, I again apply forces to the ragdoll skeleton and blend it with the animation (based on force magnitude).
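A minimal sketch of that blend weight logic (plain illustrative C++ with hypothetical names, not the actual Havok setup):

 //Returns the updated ragdoll blend weight in [0, 1]. 'hit_force' is the
 //magnitude of the force applied this frame (0 when there is no hit),
 //'f_max' is the force that maps to a fully ragdoll-driven pose and
 //'fade_speed' is how fast we blend back to the animations, per second.
 float UpdateRagdollBlendWeight(float current_weight, float hit_force,
                                float f_max, float fade_speed, float dt)
 {
      //A new hit raises the weight in proportion to the force magnitude.
      if (hit_force > 0.f)
      {
           float lHitWeight = hit_force / f_max;
           if (lHitWeight > 1.f)
                lHitWeight = 1.f;
           if (lHitWeight > current_weight)
                current_weight = lHitWeight;
      }

      //Between hits, fade the ragdoll contribution out so the character
      //blends back to its active animations smoothly through time.
      current_weight -= fade_speed * dt;
      return current_weight < 0.f ? 0.f : current_weight;
 }

 //Per bone, the final pose is then lerp/slerp(animPose, ragdollPose, weight).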


If you want to create such a system, it's better to start with a platform which can blend animations well, like an advanced blend tree that responds to the input, which here is a force vector. Then, as a post-process, you can use the animation-driven ragdoll to apply dynamic forces to the body and blend it with the currently playing animations. The blend tree used in this example was very simple; you can use more complex blending setups with more animations for a more realistic character.

The video here shows the example I described. I'm applying randomly generated forces to randomly selected joints to simulate bullet hits on the character's body while he is running. As I mentioned, the character could have many more animations combined with the ragdoll to shape a more realistic motion.

After each hit, the character blends back to its active animations smoothly.