Sunday, December 24, 2017

Using Multiple Bones to Look At a Target in World Space

This post explains the details of a plugin I created for Unity3D. The plugin lets characters look at a target in world space in a biomechanically correct way, using more than one bone. The plugin can be found on the Asset Store via the link below:

http://u3d.as/136L

The post is organized as follows. First, the workflow of the plugin is explained in full to help users understand the topic well; the post also tries to take an academic approach, so anyone who wants to implement this feature on another platform can get an idea of how to do it. Second, the API and parameters of the plugin are explained so users know exactly how to use the system, and at the end, important notes are provided. If you don't have time to read the whole documentation, make sure to at least read the important notes on setting up.

1- Introduction


Imagine you want to write something using your computer's keyboard. When you press the keys, you mostly use your fingers, and less movement comes from your arms or elbows. This shows a simple and basic rule in biomechanics: if you can use your small muscles to do something, you will use them and you won't involve your big muscles in the action. Using bigger muscles means consuming more energy, and it's avoided in unnecessary situations. Of course, using small muscles always needs more focus and training. That's why a kid can start walking between the ages of one and two but can't tie his shoe laces until a much later age.

So let's consider another example: a pull-up. When you want to pull your weight up using a pull-up bar, you first use your fingers to hang. Then your fingers' tendons, ligaments and muscles are stretched and can't hold your weight, so you engage your elbows; your elbow muscles extend and their tendons get stretched, so you add your arms to the action, and this procedure continues through your shoulders, chest and abdominal muscles. As you can see, you tried to do the action with your small muscles first, and since they weren't powerful enough, you asked your other muscles for help and brought your bigger muscles in to finish the action.

Now let's extend this to another example: a look-at example! Imagine there is a picture in front of you. You can look at it without moving your head, just by moving your eyes. Now move the picture a few centimeters to your left. You can still look at the picture with your eyes, but you feel your eye muscles getting stretched and tired. Now move it a bit more to your left. Try to look at it and you'll find you can't do it using just your eyes, because your eye muscles are completely stretched and the picture is outside your eyes' joint range, so you need to use your neck and head as well. Continue like this, moving the picture away from you and even further toward your back. Your head and neck joints and muscles get stretched, and you need to rotate your spine and chest joints to look at the target. In the end, your eye muscles, head, neck and spine are all in action to let you look at the target, just like the pull-up example where you couldn't pull your weight up with just your finger muscles and lots of other muscles came into action.

Perfect Look At works based on this rule. In Perfect Look At, you can define a chain of bones and their corresponding joint limits in degrees. If the first joint reaches its limit, the second one starts to rotate to look at the target. When the second bone reaches its limit, the third one starts rotating, and this procedure continues until the end of the bone chain. This way you can use a combination of bones to look at a target instead of simple head movement alone. Let's have a look at the results in these videos:





2- Technical Workflow


This section describes the look at procedure with details.

Every bone defined in the look-at bone chain has a forward vector which shows the bone's current direction. A target in world space is defined as the point the character wants to look at. To look at the point, the system starts from the first bone in the chain. It takes the current bone's forward vector and calculates the rotation which brings that forward vector onto the difference vector between the target point and the bone position in world space. The picture below shows the vectors.



The first bone rotates and is clamped to its joint limit. If the first bone meets its joint limit, the second bone starts to rotate to let the first bone follow the target. Please note that the second bone should be an ancestor of the first bone. It doesn't necessarily have to be its parent, but it must be a bone in the same hierarchy which rotates the first bone when it rotates. The same relation should hold for bones two, three and so on. For example, if the first bone is the head, the second bone can be the neck or chest because they are ancestors of the head, but it can't be an eye because the eye is not an ancestor of the head.

To rotate the next bone in the bone chain, the system needs a forward vector and a target vector so it can find the rotation between them. The forward vector is calculated by adding the normalized rotated forward vector of the first bone to the position difference from the first bone of the chain to the current bone (all in world space).

The target vector is calculated by taking the position difference vector from the first bone of the chain to the current bone and adding it to the normalized position difference vector of the target point from the first bone position. This way, by rotating the next bones in the look-at chain, we can make sure the first bone in the chain aligns to the target even if the target is out of its joint range. A small note here: if the first bone has a large translation offset from the next bones in the look-at chain, the final look-at result might have a small error and won't exactly meet the target, but in general the character will always look at the target with good precision, which gives a convincing impression of the character's look-at.

The same workflow continues until the first bone hits the target or the final bone in the chain meets its joint limit.

Each joint limit is measured as the angle between the bone's forward vector and its parent's forward vector. By parent, I mean the exact parent in the skeleton hierarchy. The joint angle limit could be computed more easily as the difference between the current bone rotation and its corresponding reference-pose rotation, but unfortunately Unity Mecanim does not expose the reference pose to scripts, and currently there is no way to get it. Whenever Unity exposes the reference pose to scripts, both the bone forward vector and the parent forward vector will be removed, and the reference-pose forward vector will be used instead to provide an easier setup for users.
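To make the cascade concrete, here is a minimal, runnable C++ sketch of the clamping rule described above, reduced to a single plane so the spill-over logic is easy to follow. The types and numbers are illustrative assumptions only; the plugin itself works with full 3D vectors and quaternions:

 #include <cmath>
 #include <cstdio>
 #include <vector>

 // Each joint absorbs as much of the requested look-at angle as its limit
 // allows; the remainder spills over to the next bone up the chain.
 struct LookAtBone
 {
      float rotationLimit;   // joint limit relative to the parent, in degrees
      float applied = 0.f;   // rotation this joint ends up contributing
 };

 void SolveLookAt(std::vector<LookAtBone>& chain, float angleToTarget)
 {
      float remaining = angleToTarget;
      for (LookAtBone& b : chain)
      {
           const float sign = (remaining < 0.f) ? -1.f : 1.f;
           b.applied = sign * std::fmin(std::fabs(remaining), b.rotationLimit);
           remaining -= b.applied;   // spill-over for the next bone in the chain
           if (remaining == 0.f)
                break;               // the first bone can now hit the target
      }
 }

 int main()
 {
      // head (30 degree limit), neck (25) and chest (40) looking 80 degrees sideways
      std::vector<LookAtBone> chain = { { 30.f }, { 25.f }, { 40.f } };
      SolveLookAt(chain, 80.f);
      for (const LookAtBone& b : chain)
           std::printf("applied %.1f degrees\n", b.applied);   // 30.0, 25.0, 25.0
 }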


3- Perfect Look At Component Properties


Target Object:

A game object used as the target object for the system. Characters with perfect look at component will look at this object.

Look At Blend Speed:

This value controls how fast the current look-at pose is blended in from the last look-at pose. This smooth blending can be very helpful, especially when the look-at is applied on top of a dynamic animation with a lot of movement.

Draw Debug Look At:

If checked, the target vectors and forward vectors for each bone are drawn in the scene viewport. Target vectors are drawn in red and forward vectors in green.

Look At Bones:

An array of look-at bone data. The size of the array should be equal to the number of bones you want involved in the look-at process. Make sure there is no missing bone in the array; otherwise the system prevents itself from working.

Bone:

The look at bone which is going to be rotated to look at the target.

Rotation Limit:

The joint limit in degrees. If the angle difference between the current bone and its parent in the skeleton hierarchy is higher than this value, the next bone in the "Look At Bones" array starts rotating to help the first bone reach the target.

Forward Axis:

The forward vector of the bone. To find the forward axis of a bone, first switch the coordinate system to Local, then select the bone in the Hierarchy panel. You can then read off the forward vector. The picture below shows an example of how to find the forward vector of a character's head bone. As you can see, the forward vector for this bone is the Y-axis:


Parent Bone Forward Axis:

The forward vector of the current bone's parent. To find the forward axis of the parent bone, first switch the coordinate system to Local, then select the bone's parent in the Hierarchy panel. You can then read off the forward vector. The picture below shows an example of how to find the forward vector of the head's parent bone, which in this case is the neck. As you can see in the picture below, the neck's forward vector is the Y-axis:


Reset To Default Rotation:

In Unity Mecanim, when an animation is retargeted onto a different rig, if that rig has more bones than the retargeted animation, those bones never get updated to any specific transform. This means they always use a cached value of the last valid pose they received, and the pose buffer never gets flushed, which can sometimes cause serious problems. To avoid this situation, make sure to check "Reset To Default Rotation". Check this box only when you are sure the look-at bones don't receive any pose from the current animation; otherwise leave it unchecked. Check out the two GIFs below:


As you can see in the GIF above, the spine remains disconnected because it receives no pose from the animation and uses the last valid cached pose. By checking the Reset To Default Rotation box, we can create a valid rotation for bones which don't have any animation but need to be in the look-at bone chain.




Linked Bones:

Linked bones are bones which should be rotated the same as the current bone. For example, one look-at bone chain can be created like this:

Lookat Bone 1 = Right Eye
Lookat Bone 2 = Head
Lookat Bone 3 = Neck
Lookat Bone 4 = Spine2
Lookat Bone 5 = Spine1
Lookat Bone 6 = Spine0

As you can see, there is no left eye here. So if you apply this look-at bone chain to the character, all the bones rotate based on their joint limits but the left eye remains still. Here we can define the left eye as a linked bone of the right eye, so whenever the right eye rotates, the left eye rotates with it, just like a linked transform. You can add as many linked bones as you want to the current bone.
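In effect, a linked bone simply receives the same rotation delta as the bone it is linked to. Conceptually (this is not quoting the plugin's source), if $\Delta q$ is the world-space rotation the look-at solve applied to the right eye, then

$$ q_{\text{left eye}}' = \Delta q \; q_{\text{left eye}} $$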

To find examples of linked bones check out HumanCharacter and UnityCharacter prefabs in the project.

Linked Bones-Reset to Default Rotation:

This is exactly the same as Reset To Default Rotation in the Look At Bones. If you face situations like the GIF below when you add linked bones, it means the linked bone doesn't carry any animation info, and you need to check Reset To Default Rotation for that bone so the Mecanim pose buffer doesn't use the invalid poses.



4- Perfect Look At Component Public API:


GetLookAtWeight():

Returns the current weight of the perfect look at. If the weight is zero, perfect look at is turned off; if it is one, perfect look at is applied 100%; and any value in between blends between the animation and the procedural rotation provided by perfect look at.
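In other words, for a weight $w \in [0,1]$ the final rotation of each look-at bone behaves like an interpolation between the two poses. The exact blend function used internally isn't documented here, but conceptually it is:

$$ q_{\text{final}} = \mathrm{slerp}\big(q_{\text{anim}},\; q_{\text{lookat}},\; w\big) $$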

SetLookAtWeight( float weight ):

Sets the current weight of the perfect look at. Please note that if you use this function, any running transition will be cancelled, because Perfect Look At does not let external systems change the weight in two different ways at once. By two different ways I mean setting the look-at weight manually versus calling EnablePerfectLookAt/DisablePerfectLookAt.

This cancelling is provided to avoid an error-prone pipeline. To find out more about transitions, check out EnablePerfectLookAt and DisablePerfectLookAt.

EnablePerfectLookAt( float time, bool cancelCurrentTransition = true ):

If this function is called, perfect look at's weight will be brought to one within the specified time (blending in).

cancelCurrentTransition: If set to true and another call to this function or DisablePerfectLookAt is made while the system is still in a transition, the current transition time is reset to zero and the transition continues from the current weight to the destination weight within the newly specified time.

If cancelCurrentTransition is false and the system is in a transition, any other call to DisablePerfectLookAt or EnablePerfectLookAt will be ignored.

DisablePerfectLookAt( float time, bool cancelCurrentTransition = true ):

If this function is called, perfect look at's weight will be brought to zero within the specified time. All other details are the same as EnablePerfectLookAt. Please refer to EnablePerfectLookAt to find out more about the function parameters.
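The transition behaviour above is easy to picture in code. Here is a minimal C++ sketch of how such a timed weight transition could be driven each frame; the struct and its members are illustrative assumptions, not the plugin's actual internals:

 #include <algorithm>

 // Illustrative-only sketch of a timed weight transition implementing the
 // cancelCurrentTransition rule described above.
 struct LookAtTransition
 {
      float weight = 0.f;        // current look-at weight
      float startWeight = 0.f;   // weight when the transition started
      float target = 0.f;        // destination weight (0 or 1)
      float duration = 0.f;
      float elapsed = 0.f;
      bool  inTransition = false;

      // EnablePerfectLookAt(time)  would map to Start(1.f, time, cancel)
      // DisablePerfectLookAt(time) would map to Start(0.f, time, cancel)
      bool Start(float destWeight, float time, bool cancelCurrent = true)
      {
           if (inTransition && !cancelCurrent)
                return false;         // request ignored, as documented
           startWeight = weight;      // continue from the current weight
           target = destWeight;
           duration = time;
           elapsed = 0.f;             // transition time reset to zero
           inTransition = true;
           return true;
      }

      void Tick(float dt)
      {
           if (!inTransition) return;
           elapsed += dt;
           const float t = (duration > 0.f) ? std::min(elapsed / duration, 1.f) : 1.f;
           weight = startWeight + (target - startWeight) * t;
           if (t >= 1.f) inTransition = false;
      }

      float TimeToFinishTransition() const { return inTransition ? duration - elapsed : 0.f; }
 };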

IsInTransition():

If PerfectLookAt is in a disabling or enabling transition, it returns true; otherwise false.

GetTimeToFinishTransition():

If PerfectLookAt is in a disabling or enabling transition, it returns the remaining time until the transition finishes.

Important Notes on Setting Up Perfect Look At:


1- Turn Off Optimize Game Objects:

The only way to change bone transforms procedurally in Unity is through a component's LateUpdate. Unfortunately, Unity won't let you set bone transforms if the rig's "Optimize Game Objects" option is checked. To make PerfectLookAt work, you need to be sure "Optimize Game Objects" is unchecked. There is no info in the Unity documentation on why it is impossible to transform bones in an optimized rig or how Unity optimizes the skeleton calculations.


2- Setting Reset To Default Rotation On Some Necessary Cases:

If you see any of the linked bones rotating constantly, make sure you turn on the linked bone's Reset To Default Rotation. To find out more about why this issue happens, please refer to the Reset To Default Rotation section in this documentation.

3- Defining the Forward Axis of The Bones and Their Corresponding Parents Correctly:

Make sure you always select the correct forward axis for both the bone and its parent. Make sure you change the coordinate system to Local and check the bone's and its parent's forward axes in the local coordinate system. For more info, please check out the "Forward Axis" and "Parent Bone Forward Axis" sections in this document.

4- Look At Bones Should Be In The Same Hierarchy But Not Necessarily Child and Parent:

The order of the bones in the "Look At Bones" array matters. It should be set based on the bone hierarchy. For example, say four bones (two eyes, head and chest) need to rotate using perfect look at. They should be specified in this order:

First Bone: Left eye (its linked bone should be the right eye)

Second Bone: Head

Third Bone: Chest

As you can see, the bones defined here are not necessarily parent and child, but they are in the same hierarchy. For example, the chest is the parent of the neck and the neck is the parent of the head, so when the chest rotates, the head also rotates.

5- Checking Character Prefabs As An Example:

Make sure to check the three character prefabs and their corresponding scenes as examples of perfect look at. All three have different rigs and they all use perfect look at.

Tuesday, November 21, 2017

Euler Angles and Their Unexpected Keyframe Interpolation in Character Animation

Have you ever tried to animate a character's bone with a simple rotation along one of the world axes, only to see a weird interpolation? An ugly interpolation in two or three axes instead of one! For sure this wasn't what you were expecting. Let's check this GIF:




You rotate the box along X and set a keyframe for it, but when you play the animation you see the box rotating along X and Y! In this post I want to explain why this happens.

Before going further, let's talk a little about Euler angles. Everyone in animation knows this term, because it's the most intuitive method for rotating objects in 3D space, and rotation accounts for the highest number of clicks in the process of making an animated character. So what are Euler angles? Euler proved that every unique rotation in 3D space can be described by three rotations around three orthogonal axes. These three rotations are called pitch, yaw and roll. Euler angles are popular in 3D animation because they are easy to understand and visualize, but every flexibility and ease of use comes at a cost.

As I wrote earlier, Euler proved that every rotation in 3D space can be defined uniquely, but he never promised a nice interpolation between two different Euler rotations. To find out why this happens, I need to go a bit into the underlying math of Euler angles.

A rotation defined with Euler angles is composed of three different values, each of which describes a rotation around one of the orthogonal axes. Rotation around the X axis has a specific rotation matrix, as do rotations around Y and Z, so when we want to rotate an object around X, then Y and then Z, we just need to multiply these matrices:

Z_RotationMatrix * Y_RotationMatrix * X_RotationMatrix

This Euler rotation is called Euler_XYZ: you first rotate the object along X, then Y and then Z. The important point here is that the order of Euler rotations matters. The result Euler XZY gives is different from Euler XYZ, even with the same pitch, yaw and roll values, because matrix multiplication is not commutative. Euler XZY is calculated like this:

Euler_XZY Rotation = Y_RotationMatrix * Z_RotationMatrix * X_RotationMatrix

And of course the two formulas give different results, because the order of matrix multiplication matters.
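For reference, here are the three basic rotation matrices (right-handed, column-vector convention) whose products build these Euler rotations:

$$
R_x(\alpha)=\begin{pmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{pmatrix},\quad
R_y(\beta)=\begin{pmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{pmatrix},\quad
R_z(\gamma)=\begin{pmatrix}\cos\gamma&-\sin\gamma&0\\ \sin\gamma&\cos\gamma&0\\ 0&0&1\end{pmatrix}
$$

So Euler_XYZ is the product $R_z(\gamma)\,R_y(\beta)\,R_x(\alpha)$, and swapping any two factors generally changes the result.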

So let's consider Euler_XYZ. When the Y rotation matrix is applied after the X rotation matrix, the X rotation provided by the X matrix always lives in the Y rotation matrix's space. That means the first rotation is the child of the second. Why? Because first we rotate the coordinate system with the X rotation matrix, and then we rotate the result with the Y rotation matrix, meaning the whole Y rotation is applied to the first transformation, which is the X rotation in this case. So wherever the Y matrix takes the coordinate system, the X transform goes there too, just like a parent-child relationship. This coordinate system with three consecutive rotation matrices is called Gimbal, so Euler angles are always measured in Gimbal coordinates.

Here are some screenshots showing how the Gimbal axes react when you rotate an object. Imagine our Euler angles rotation controller order is XYZ, which means that if we give values of X = 35, Y = 45 and Z = 50 (all in degrees), it rotates the object first by 35 degrees around X, then 45 around Y, and finally 50 around Z. Here you can see the rotation applied step by step and how the Gimbal axes change:


X = 0, Y = 0, Z = 0: Check out the three rotation axes; everything looks normal, just like the world axes:




X = 35, Y = 0, Z = 0: We rotated the object around X by 35 degrees. Again, everything looks normal, similar to the default world axes, because X is the first rotation to be applied:




X = 35, Y = 45, Z = 0: Y is rotated, and you can see the X axis has also rotated 45 degrees. Remember the parent-child relationship I mentioned a few lines above? Here you can see it.




X = 35, Y = 45, Z = 50: Z rotates 50 degrees, and both X and Y rotate 50 degrees with it, because in Euler XYZ, Z is the parent of all and all axes follow it:




So these are the axes you end up with after this rotation. Now imagine you need to rotate the object around the world Y axis again. Since Euler angles are defined in the Gimbal coordinate system, you cannot achieve a rotation around world Y just by rotating the object around one axis. In this case, two or three axes have to rotate to reach the specified rotation. So YES! Here is the ugly and unexpected interpolation: you moved the object around one world axis, but the object rotates itself around two or three different axes to reach the desired rotation:

The trajectory above shows how the object moves on an inefficient arc; it's not just a rotation around Y. In the GIF above you can see two Gimbal axes moving during the second rotation we applied (80 degrees around the world Y axis).

So how can you avoid these unexpected interpolations while rotating bones or objects? One way is to always keep an eye on the Gimbal coordinate system. Don't let the World or Local reference coordinate system trick you: Local and World axes are just there to give the user better intuition, but your mathematical reference coordinate system here is Gimbal. You can switch to Gimbal constantly to see how to rotate the bones with fewer changes across axes. However, the Gimbal coordinate system suffers from an issue called Gimbal lock, where you lose one degree of freedom; it happens when you rotate the second axis 90 degrees. For instance, in Euler XYZ order, if you rotate Y by 90 degrees, X becomes aligned with Z, because in the Gimbal coordinate system X is the child of Y, and by rotating Y 90 degrees, X rotates 90 degrees around Y as well and lands on Z. Check out the GIF below:





Now if you want to rotate the object around the X axis for the next keyframe, there is no X anymore. Z and X are aligned together, and you have lost one degree of freedom, as you can see in the GIF above. So to get out of Gimbal lock you need to rotate the object around more than one axis again!
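This degeneracy is easy to verify with the rotation matrices given earlier. Setting the middle rotation to 90 degrees collapses the outer two:

$$
R_z(\gamma)\,R_y(90^\circ)\,R_x(\alpha)=
\begin{pmatrix}
0 & \sin(\alpha-\gamma) & \cos(\alpha-\gamma)\\
0 & \cos(\alpha-\gamma) & -\sin(\alpha-\gamma)\\
-1 & 0 & 0
\end{pmatrix}
$$

The product depends only on the difference $\alpha-\gamma$, so the X and Z rotations can no longer be distinguished: one degree of freedom is gone.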

There is also another way to avoid bad interpolation: using unit quaternion rotations instead of Euler angles. Quaternion SLERP interpolation can be very soft and smooth, and it always acts as expected, since it selects the shortest possible arc from one rotation to another. The problem with quaternions is that they are not intuitive enough to be presented with Bezier curves, the curves animators like so much and have a lot of control over. Quaternions follow a different kind of algebra, and artists are not really enthusiastic about learning the math behind them; without learning the math, a quaternion value can be very confusing. However, quaternion rotation controllers are still used in DCC tools like 3ds Max, and they come with interpolators like TCB interpolators which can provide a very smooth interpolation between rotations. You rarely need to change the TCB values, and you can simulate ease in/outs just by adding one or two extra keyframes. The only catch is that you don't have control over the rotation of the individual axes, because you only control the SLERP interpolation factor through a TCB curve, and this factor is just a normalized value showing the interpolation percentage between two quaternions using the SLERP function.
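For unit quaternions $q_0$ and $q_1$, SLERP has a closed form, with $t \in [0,1]$ being exactly the normalized factor the TCB curve drives:

$$ \mathrm{slerp}(q_0, q_1, t) \;=\; \frac{\sin\!\big((1-t)\,\theta\big)}{\sin\theta}\, q_0 \;+\; \frac{\sin(t\,\theta)}{\sin\theta}\, q_1, \qquad \cos\theta = q_0 \cdot q_1 $$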

When I was teaching game animation, I taught my students to always use quaternion controllers with TCB interpolators. They give you less control over the interpolation, but quaternion interpolation always uses the SLERP function, which selects the shortest arc from the source rotation to the destination. This means you can create your desired curve by adding a small number of extra keyframes, without constantly tweaking unexpected changes in curve values the way Euler angle interpolation demands. In future posts I will try to show how you can get better interpolation by using quaternions instead of Euler angles and just adding one or two extra keyframes.

In the end, the case we studied above turns into something like this when we switch the rotation controller from Euler XYZ to TCB quaternion SLERP. As you can see, the trajectory is very smooth and simply follows the Y rotation in the world. You can compare the trajectories in the two GIFs to see the difference!

Sunday, August 21, 2016

Some Combat Animations

During the previous years, I spent most of my time on the technical side of animation, as I like it more than the artistic side. It's more appealing to me.

Previously, this blog focused more on the technical side of animation, but in the next posts it will try to address the art of animation, so I'll try to come up with some tutorials on the basics of 3D animation.

To begin with, I've attached some fully keyframed animations I made in previous years. You might want to check them here:

Friday, March 25, 2016

The Role of Animations in Hit Effects

This shall be my final post on the technical side of animation for a while! I'll try to write again, but it will probably be more about the artistic side of animation.

Video games are all about entertainment, and game developers always try to maximize this entertaining experience. One aspect is to let players feel exactly what they did and receive a fair result for their selected action. This can be considered from different perspectives, like game design, risk/reward or aesthetics. Players should receive suitable feedback based on what they do. Hitting and attacking in action games follow this rule as well: when hitting or being hit, the player should feel the impact. Several techniques can be used to convey the impact of hits.
This article addresses some of these techniques. They were used effectively in Dead Mage's latest release, Epic of Kings. EOK is a hack and slash game designed for touch-based devices. It's released on iOS and will be released on Android very soon. Here you can see the trailer of the game:



In the rest of the article, I’m going to mention the techniques we used to show the hit impacts in Epic of Kings.

Controlling the Hit Impacts
Before reading this section, note that all the cases mentioned here relate to animation, that is, to what the player sees. Obviously audio has a huge impact on hit effects as well, but this article is not going to talk about audio, as I'm not a professional in that field.
So here are some of the animation techniques we used in EOK to control and improve the hit impacts:

1- Animations:
Surely the most important factor in showing hit effects is the animations themselves. Animations should not be floaty. They have to start from a damaged pose, because the incoming attack has high kinetic energy and makes the victim accelerate in the direction of the attack, and the animation should show this. It should start very quickly but end slowly to show after-effects of the attack, like dizziness. Note that the timing of hit animations in combat is very important: the slower part of the animation showing the dizziness should have a reasonable length, and it should have safe intervals providing good poses for blending to other animations (in case it needs to be cancelled into another animation). Here is an example of a light hit animation:




2- Uninterruptible Animations:
In hack and slash games, the enemies often have slower animations than the player. One reason is responsiveness: playable characters interact directly with the player and should respond well to game input, which demands faster animations. Enemies are usually slower because the player needs some time to see the enemy's current action and make the correct decision; if the enemy animations are too fast, the player doesn't have enough time to decide. However, the timing of enemy animations can be adjusted based on enemy type, attack type and the player's progression through the game.
In many situations, these slower enemy animations can't be cancelled by the player's attacks. That means the enemy's animation continues while the player is hitting him: although the player attacks, the enemy shows no reaction because his animation is not being cancelled. Here we can use additive animations to show some shakes on the enemy's body. Here is a video showing the role of additive animations in this scenario:



And here is one additive animation in UE4 editor:





The additive animation shown is animated from the reference pose, so it can generally be added on top of different animation poses.

3- No Cross Fade Time (Transitional Blending):
To avoid floaty animations and to show the kinetic energy transferred by hits, the cross-fade times should be zero when transitioning to hit animations.

4- Specific Hit Animations:
This is an obvious point: if you have specific hit animations for different attacks, the feel of the hits will be much better.
For example, directional hit animations can improve the feel of hit impacts; based on the incoming attack's direction, an animation showing the correct hit direction can be played.
Another example is specific hit animations based on the current animation state. For example, during an attack animation, if the character gets hit between times t1 and t2, animations other than the normal hits are played.

5- Body IK as A Post Process:
In Epic of Kings, an IK chain is defined on the characters' spine. It acts as a post-process on the poses of the block and block-hit animations. Post-process here means that the original animation generates the base pose and the IK adds some custom posing on top of it, so we preserve the fidelity of the animations created by the artists.
By moving the end-effector within a reasonable range and blending the IK solution with FK, the spine constantly changes position and creates non-repetitive poses, which improves the look of the motion.

6- Camera Effects:
As mentioned in the first section, we want the player to feel the impact of the hits. Engaging the player's eyesight is very important; all the cases mentioned above were about doing that through animation techniques. Camera movement can do a good job of conveying this feeling as well.
One common way is to use camera shakes. In EOK, plenty of different camera shakes with different properties were defined. Properties include frequency and amplitude for position, rotation and FOV, plus fade in/out values that let the camera shake be added on top of the current camera movement. For example, heavy attacks have more amplitude and frequency, light attacks have less of both, and the beasts' footsteps have less frequency but more amplitude.
Another important aspect is animating the camera FOV. In some cases, animating the camera FOV on enemy attacks can make sense. Some years ago I watched a documentary about self-defense. It showed that when the brain senses danger, the eyes' field of view narrows, letting them focus on the danger. We used this phenomenon in EOK by reducing the FOV during some enemy attacks to let the player feel the danger more. The video here shows this in action:



Note that I suggest animating FOV only in situations where you're fighting a single enemy, which is also our case in Epic of Kings. In situations where you need to fight several characters simultaneously, the FOV should not be changed, because the player needs to focus on all the events and actions from the surrounding enemies to react appropriately. Changing the FOV in these situations can distract the player.

7- Hit Pauses
Another thing you can find in many games, like Street Fighter or God of War, is hit pauses. Whenever an attack lands, time stops for a short period to show the impact of the attack. We added a slight hit pause in Epic of Kings as well.
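Since the game runs on UE4, a hit pause can be prototyped with global time dilation. Below is a minimal sketch, assuming it lives in some actor class; AMyCharacter is a placeholder and the numbers are illustrative, so this is not EOK's actual code:

 #include "Kismet/GameplayStatics.h"
 #include "TimerManager.h"

 // Illustrative hit-pause sketch: slow the world almost to a halt on impact,
 // then restore normal speed shortly after.
 void AMyCharacter::OnAttackLanded()
 {
      UGameplayStatics::SetGlobalTimeDilation(GetWorld(), 0.05f);

      // Timers tick in dilated game time, so scale the desired real-time
      // delay (~0.07 s) by the dilation factor to get the timer rate.
      FTimerHandle Handle;
      GetWorldTimerManager().SetTimer(Handle, FTimerDelegate::CreateLambda([this]()
      {
           UGameplayStatics::SetGlobalTimeDilation(GetWorld(), 1.0f);
      }), 0.07f * 0.05f, false);
 }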

8- Physically Based Animation
Blending between physically based animation and keyframe animation has been used in many games so far. It can create dynamic action scenes with non-repetitive animations. One common approach is to make the ragdoll follow the animation while responding to external perturbations. With this, the ragdoll keeps the overall shape of the animation and responds physically to external forces, and the result can also be blended with the keyframe animation to create better and more natural poses.
We developed a system on top of UE4 to demonstrate this feature. However, we didn't integrate it into the final game, mostly because of limited development time and because action scenes in a game like Epic of Kings are not that dynamic, unlike a third-person hack and slash or shooter. So it was not a priority, and we dropped the idea of integrating this feature into the game.
This video shows this feature in action:


In the video above, a random force is applied to random physics bodies, and the ragdoll tries to follow the animation while responding to the applied external force. It also blends with the keyframe animation. If you want to know more about this kind of system, I've written a post about blending between ragdoll and keyframe animation on my blog here.

9- Particle Effects:
There is no doubt that particles do a great job in terms of aesthetics. Some kinds of particles, like sparks and blasts, can make the hits feel better.

Conclusion
We covered some techniques which can help control and improve hit effects in action games, all used effectively in Epic of Kings. Together, these techniques help the player feel the action more and get more involved in the game.

Saturday, February 20, 2016

Epic of Kings: The Game

This post is not directly related to animation techniques. I just wanted to introduce "Epic of Kings", a game I worked on. It was recently released on the App Store. You may check its trailer here:




And a gameplay video here:



Unreal Engine 4 was used to develop Epic of Kings. 820 animations are used in the game, and Unreal Engine's animation optimization tools helped us a lot in organizing them. We kept the total of resident animations in memory under 7 MB.

The characters have on average more than 70 bones, which is a high number for mobile games (though not for PC/console games). More bones mean more memory consumption and more processing when calculating the skeleton and skin matrices.

UE4's animation montage system and animation graph features also helped us a lot in avoiding high dimensionality and spaghetti effects while creating animation graphs.

The FABRIK IK solver, which is very lightweight but great in action, is also used in some places for the characters' bodies. FABRIK is provided by the UE4 animation system.

Hope you enjoy playing the game and seeing the animations within.

Saturday, November 14, 2015

Mirroring 3D Character Animations

Introduction


Video games have resources. Resources are raw data that need to be manipulated, baked and made ready for use in the game. Textures, meshes, animations and sometimes metadata all count as resources, and they consume a significant amount of memory, so re-using and manipulating resources is essential for a game engine.

In terms of animation, there are plenty of techniques for managing animations as resources, and one of them is motion retargeting.

With motion retargeting, one can use a specific animation on different skeletons with different reference (binding) poses, joint sizes and heights. For example, say you have just one walk animation and want to use it for five characters with different physical shapes. A motion retargeting system can do this nicely, so you don't need five different walks for those five characters: one walk animation serves them all. That means fewer animations and therefore fewer required resources.

Motion retargeting systems apply some modifications on top of animation data to make them suitable for different skeletons. These modifications include:

1- Defining a generic but modifiable skeleton template for bipeds or quadrupeds
2- Root motion reasonable scaling
3- Ability to edit skeleton reference pose
4- Joint movement limitations
5- Animation mirroring
6- Adding a run-time rig on top of the skeleton template.

Creating a motion retargeting system takes a vast amount of work and is a huge topic. In this post I just want to show you how to mirror character animations. Motion retargeting systems usually support animation mirroring. It's useful for different purposes: mirrored animations can be used to avoid foot-skating and to achieve responsiveness, and by mirroring an input pose you avoid authoring new mirrored animations; you just reuse the same animation data, so no new animation is needed. You can select the animation or its mirror based on the foot phases.

In the next post, I will show you how to use mirrored animations in action; this post concentrates on mirroring an input pose from an animation.

For this post, I used Unreal Engine 4. Unreal Engine has a very robust, flexible and optimized animation system, but its motion retargeting is still immature. At this time, it can't be compared with Unity3D's or Havok Animation's motion retargeting.

Mirror Animations

To mirror animations, two types of bones should be considered: first, the bones that have a mirrored counterpart in the skeleton hierarchy, like hands, arms, legs, feet and facial bones; let's call these mirrored bones twins. Second, the bones which have no twin, like the pelvis, spine, neck and head.

To create a mirroring system, we have to define some metadata about the skeleton. It should store each bone's twin, if it has one. For this reason, I define a class named AnimationMirrorData which stores and manipulates the required data, such as mirror-mapped bones, the rotation mirror axis and the position negation direction.

To mirror animations, I defined a custom animation node which can be used in the Unreal Engine animation graph. It receives a pose in local space and mirrors it. It also has two input pins: one for an animation mirror data object, which should be initialized by the user, and one boolean which lets the node be turned on or off. As you can see in the picture, no extra animation is needed here; the node just accepts the current pose and mirrors it, and you can toggle it based on the game or animation circumstances.




Here I discuss how to mirror each type of bone:

1- Mirroring bones which have a twin in the hierarchy

These bones, like the hands and legs, have a twin in the hierarchy. To mirror them, we need to swap the transforms of the two bones. For example, the left upper arm transform should be pasted onto the right upper arm, and the right upper arm transform onto the left upper arm. To do this, we have to deduct the binding pose from the current transform of the bone at the current frame. In Unreal Engine 4, local poses are calculated in their parent's space, as are the binding poses. We don't want to mirror the binding poses of the bones; we just need to mirror the deducted transform. By doing this, we make sure the character stays on the same spot and doesn't rotate 180 degrees. Remember, this only works if the binding poses of the twin bones in the skeleton are already mirrored, which means the rigger should have mirrored the twin bones when rigging the mesh.
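Written out, the swap performed per twin pair in the Evaluate function below is the following: with $r_L, r_R$ the local binding-pose rotations of the left/right twins and $q_L, q_R$ their current local rotations, the new rotations are

$$ q_L' = r_L \, r_R^{-1} \, q_R, \qquad q_R' = r_R \, r_L^{-1} \, q_L $$

so the animation-only part of each twin's rotation is transplanted onto the other twin's binding pose.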

2- Mirroring bones with no twin

Bones like the root, pelvis or spine don't have any twin in the hierarchy. For these bones, again we have to deduct the binding pose from the current bone transform. Now the deducted transform should be mirrored, and this time we need a mirror axis, selected by the user; it is usually X, Y or Z in the bone's binding-pose space. For rotations, if you select X as the mirror axis, you negate the Y and Z components of the quaternion. For translations, things are a little different, because we never want to change the up and forward direction of the translation: mirroring the animation shouldn't make the character move upside down or backward. We just want the sideways movement negated, so for translations we only negate one component of the translation vector. Strictly speaking, that isn't a mirror in the mathematical sense.
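In symbols, with X chosen as the rotation mirror axis and Y as the sideways (negation) direction, the per-bone operation the MirrorPose function below performs is:

$$ q = (x,\, y,\, z,\, w) \;\mapsto\; (x,\, -y,\, -z,\, w), \qquad t = (t_x,\, t_y,\, t_z) \;\mapsto\; (t_x,\, -t_y,\, t_z) $$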

Below are some parts of the code I wrote for the mirror animation node:

Here is the AnimationMirrorData header file:

 #pragma once  
   
 #include "Object.h"  
 #include "AnimationMirrorData.generated.h"  
   
 /**  
  *   
  */  
 UENUM(BlueprintType)  
 enum class MirrorDir : uint8  
 {  
      None = 0,  
      X_Axis = 1,  
      Y_Axis = 2,  
      Z_Axis = 3  
 };  
   
   
 UCLASS(BlueprintType)  
 class ANIMATIONMIRRORING_API UAnimationMirrorData : public UObject  
 {  
 GENERATED_BODY()  
 public:  
   
      UAnimationMirrorData();  
   
      //Shows mirror axis. 0 = None, 1 = X, 2 = Y, 3 = Z   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir MirrorAxis_Rot;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir RightAxis;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir PelvisMirrorAxis_Rot;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir PelvisRightAxis;  
   
      //Functions  
      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")  
      void SetMirrorMappedBone(const FName bone_name, const FName mirror_bone_name);  
   
      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")  
     FName GetMirrorMappedBone(const FName bone_name) const;  
   
      TArray<FName> GetBoneMirrorDataStructure() const;  
   
 protected:  
      //Flat array of twin pairs: element i is mirrored with element i + 1  
      TArray<FName> mMirrorData;  
 };  


And here are the two functions which are mainly responsible for mirroring animations:


/***********************************************/  
 void FAnimMirror::Evaluate(FPoseContext& Output)  
 {  
      mBasePose.Evaluate(Output);  
   
   
      if (!mAnimMirrorData)  
      {  
           return;  
      }  
   
      if (Output.AnimInstance)  
      {  
           TArray<FCompactPoseBoneIndex> lAr;  
           int32 lCurrentMirroredBoneInd = 0;  
           int32 lMirBoneCount = mAnimMirrorData->GetBoneMirrorDataStructure().Num();  
   
           //Mirror Mapped Bones  
            for (int32 i = 0; i < lMirBoneCount; i += 2)  
           {  
                FCompactPoseBoneIndex lInd1 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i]));  
                FCompactPoseBoneIndex lInd2 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i + 1]));  
   
                FTransform lT1 = Output.Pose[lInd1];  
                FTransform lT2 = Output.Pose[lInd2];  
   
                Output.Pose[lInd1].SetRotation(Output.Pose.GetRefPose(lInd1).GetRotation() * Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation());  
                Output.Pose[lInd2].SetRotation(Output.Pose.GetRefPose(lInd2).GetRotation() * Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation());  
   
                Output.Pose[lInd1].SetLocation((Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation() * (lT2.GetLocation() - Output.Pose.GetRefPose(lInd2).GetLocation()))   
                     + Output.Pose.GetRefPose(lInd1).GetLocation());  
                  
                Output.Pose[lInd2].SetLocation((Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation() * (lT1.GetLocation() - Output.Pose.GetRefPose(lInd1).GetLocation()))   
                     + Output.Pose.GetRefPose(lInd2).GetLocation());  
   
                lAr.Add(lInd1);  
                lAr.Add(lInd2);  
   
           }  
   
   
           //Mirror Unmapped Bones  
           FCompactPoseBoneIndex lPoseBoneCount = FCompactPoseBoneIndex(Output.Pose.GetNumBones());  
   
           for (FCompactPoseBoneIndex i = FCompactPoseBoneIndex(0); i < lPoseBoneCount;)  
           {  
                if (!lAr.Contains(i))  
                {  
                     if (!i.IsRootBone())  
                     {  
                          FTransform lT = Output.Pose[i];  
                          lT.SetRotation(Output.Pose.GetRefPose(i).GetRotation().Inverse() * Output.Pose[i].GetRotation());  
                          lT.SetLocation(Output.Pose[i].GetLocation() - Output.Pose.GetRefPose(i).GetLocation());  
                            
                          if (i.GetInt() != 1) //compact-pose index 1 is treated as the pelvis  
                          {  
                               MirrorPose(lT, (uint8)mAnimMirrorData->MirrorAxis_Rot, (uint8)mAnimMirrorData->RightAxis);  
                               Output.Pose[i].SetRotation(Output.Pose.GetRefPose(i).GetRotation() * lT.GetRotation());  
                               Output.Pose[i].SetLocation(Output.Pose.GetRefPose(i).GetLocation() + lT.GetLocation());  
                          }  
                          else  
                          {  
                               MirrorPose(lT, (uint8)mAnimMirrorData->PelvisMirrorAxis_Rot, (uint8)mAnimMirrorData ->PelvisRightAxis);  
                               Output.Pose[i].SetRotation(Output.Pose.GetRefPose(i).GetRotation() * lT.GetRotation());  
                               Output.Pose[i].SetLocation(Output.Pose.GetRefPose(i).GetLocation() + lT.GetLocation());  
                          }  
                     }  
                }  
                ++i;  
           }  
      }  
 };  
   
 void FAnimMirror::MirrorPose(FTransform& input_pose, const uint8 mirror_axis, const uint8 pos_fwd_mirror)  
 {  
   
      FVector lMirroredLoc = input_pose.GetLocation();  
   
      if (pos_fwd_mirror == 1)  
      {  
           lMirroredLoc.X = -lMirroredLoc.X;  
      }  
      else  
      {  
           if (pos_fwd_mirror == 2)  
           {  
                lMirroredLoc.Y = -lMirroredLoc.Y;  
           }  
           else  
           {  
                if (pos_fwd_mirror == 3)  
                {  
                     lMirroredLoc.Z = -lMirroredLoc.Z;  
                }  
           }  
      }  
   
      input_pose.SetLocation(lMirroredLoc);  
   
   
      switch (mirror_axis)  
      {  
           case 1:  
           {  
                const float lY = -input_pose.GetRotation().Y;  
                const float lZ = -input_pose.GetRotation().Z;  
                input_pose.SetRotation(FQuat(input_pose.GetRotation().X, lY, lZ, input_pose.GetRotation().W));  
                break;  
           }  
   
           case 2:  
           {  
                const  float lX = -input_pose.GetRotation().X;  
                const float lZ = -input_pose.GetRotation().Z;  
                input_pose.SetRotation(FQuat(lX, input_pose.GetRotation().Y, lZ, input_pose.GetRotation().W));  
                break;  
           }  
   
           case 3:  
           {  
                const float lX = -input_pose.GetRotation().X;  
                const float lY = -input_pose.GetRotation().Y;  
                input_pose.SetRotation(FQuat(lX, lY, input_pose.GetRotation().Z, input_pose.GetRotation().W));  
                break;  
           }  
      }  
 };  


I haven't posted the whole source code here. If you need it, just contact me and I will send it to you.

Monday, September 21, 2015

Creating Non-Repetitive Randomized Idle Using Animation Blending

You might have seen that the standing idle animations in video games are kind of magical: they never get repetitive. The character looks in different directions with a non-repetitive pattern, shows different facial animations, shifts his/her weight randomly and does many of the other things usual in a standing idle.

These kinds of animations can be implemented using an animation blend tree and a component which manipulates the animation weights. This post shows how a non-repetitive idle animation can be created.

Defining Animation Blend Tree for Idle Animation

In this section, I'm going to define an animation blend tree which can produce a range of possible idle animations. Before creating the blend tree, the animations used within it are described here:

1- A simple breathing idle animation which is just 70 frames (2.33 seconds).

2- A left weight-shift animation similar to the original idle animation, but with the pelvis shifted to the left and a more curved torso. "Similar" here means that the animations have the same timing and almost the same poses, differing only in the main poses; this difference is the left weight-shift pose. I created the weight-shift animation just by adding an additive keyframe to different bones on top of the original idle animation in the DCC tool.

3- A right weight-shift animation similar to the original idle animation, but with the pelvis shifted to the right and a more curved torso.

4- Four different look animations: look left, right, up and down. These four are all one-frame additive animations; their transforms are subtracted from the first frame of the original idle animation.

5- Two different facial and simple body-movement animations. These two are additive as well; they add some facial animation plus some torso and hand movement on top of the original idle animation.

Now that the required animations are described, let's define a scenario for the blend tree in three steps before creating it:

1- We want the character to stand using an idle animation while occasionally shifting his/her weight. So first we create a blend node which blends between left weight shift, basic idle and right weight shift.

2- The character should look around now and then, and we have four additive look animations for this. So we create a blend node which blends between the four additive look animations. It works with two parameters: one is mapped to blend between look left and look right, and the other between look up and look down. This blend node is added on top of the node defined in step 1.

3- After adding the head look animations, the two additive facial animations are added to the result. These two switch randomly whenever they reach their final frame.

So a blend tree which is capable of supporting this scenario is shown here:



Idle Animation Controller to Manipulate Blend Weights

So far, an animation blend tree has been created which can produce continuous motion from some simple additive and idle animations. Now we have to manipulate the blend weights to make the idle non-repetitive. This is an easy task. I'm going to describe it in four steps for the weight-shift animation; the same steps can be used for the facial and look animations as well:

1- First, we randomly select a target weight for the weight shift. It should be in the range of defined weight shift parameter used in blend tree.

2- We define a random blend speed which makes the character shift weight over time until it reaches the target weight selected in step 1. The blend speed is randomly selected from a reasonable numeric range.

3- When we reach the target blend weight for the weight shift, the character should stay at that blend weight for a while. That's exactly what humans do in reality: when a human stands, he/she shifts weight to the left or right and stays in that pose for a while, which helps relax the spine muscles. So we select a random time from a reasonable range as the weight-shift hold time.

4- After the selected weight-shift hold time ends, we go back to step 1, and this loop repeats while the character is in the idle state.

The same four steps go for the directional look and facial animations as well.

This random selection of times, speeds and target weights creates a non-repetitive idle animation. The character always looks in different directions at different times while shifting his weight to the left or right and playing different facial and body-movement animations, all with different and random times, speeds and poses.


You can check the result here in this video:




Here is the source code I wrote for the idle animation controller. The system is implemented in Unreal Engine 4. This component calculates the blend weights and passes them to the animation blend tree:


The header file:

 
   
 #pragma once  
   
 #include "Components/ActorComponent.h"  
 #include "ComponenetIdleRandomizer.generated.h"  
   
   
 UCLASS( ClassGroup=(Custom), meta=(BlueprintSpawnableComponent) )  
 class RANDOMIZEDIDLE_API UComponenetIdleRandomizer : public UActorComponent  
 {  
      GENERATED_BODY()  
   
 public:       
      UComponenetIdleRandomizer();  
   
      // Called every frame  
      virtual void TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction ) override;  
   
   
 public:  
      /*Value to be used for weight shift blend*/  
     UPROPERTY(BlueprintReadOnly)  
      float mCurrentWeightShift;  
   
      /*Value to be used for idle look blend*/  
     UPROPERTY(BlueprintReadOnly)  
      FVector2D mCurrentHeadDir;  
   
      /*Value to be used for idle facial blend*/  
     UPROPERTY(BlueprintReadOnly)  
      float mCurrentFacial;  
   
      FVector2D mTargetHeadDir;  
   
      float mTargetWeightShift;  
   
      float mTargetFacial;  
   
 protected:  
   
      float mWSTransitionTime;  
   
      float mWSTime;  
   
      float mWSCurrentTime;  
   
      float mLookTransitionTime;  
   
      float mLookTime;  
   
      float mLookCurrentTime;  
   
      float mFacialTransitionTime;  
   
      float mFacialTime;  
   
      float mFacialCurrentTime;  
   
 private:  
      float mLookTransitionSpeed;  
   
      float mWSTransitionSpeed;  
   
      float mFacialTransitionSpeed;  
   
        
 };  
   


And The CPP Here:


 #include "RandomizedIdle.h"  
 #include "ComponenetIdleRandomizer.h"  
   
   
 /******************************************************/  
 UComponenetIdleRandomizer::UComponenetIdleRandomizer()  
 {  
      // Set this component to be initialized when the game starts, and to be ticked every frame. You can turn these features  
      // off to improve performance if you don't need them.  
      bWantsBeginPlay = true;  
      PrimaryComponentTick.bCanEverTick = true;  
   
      // ...  
      //weight shift initialization  
      mTargetWeightShift = FMath::RandRange(-100, 100) * 0.01f;  
      mCurrentWeightShift = 0;  
      mWSTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mWSTime = FMath::RandRange(20, 50) * 0.1f;  
      mWSCurrentTime = 0;  
      mWSTransitionSpeed = mTargetWeightShift / mWSTransitionTime;  
   
      //look initialization  
      mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
      mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
      mCurrentHeadDir = FVector2D::ZeroVector;  
      mLookTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mLookTime = FMath::RandRange(20, 40) * 0.1f;  
      mLookCurrentTime = 0.f;  
      mLookTransitionSpeed = mTargetHeadDir.Size() / mLookTransitionTime;  
   
      //facial initialization  
     mTargetFacial = FMath::RandRange(0.f, 100.f) * 0.01f;  
      mCurrentFacial = 0.f;  
      mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
      mFacialTime = FMath::RandRange(20.f, 40.f) * 0.1f;  
      mFacialCurrentTime = 0.f;  
      mFacialTransitionSpeed = mTargetFacial / mFacialTransitionTime;  
 }  
   
   
 /**********************************************************************************************************************************/  
 void UComponenetIdleRandomizer::TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction )  
 {  
      Super::TickComponent( DeltaTime, TickType, ThisTickFunction );  
   
      /*look weight calculations*/  
      if (mLookCurrentTime > mLookTransitionTime + mLookTime)  
      {  
           mLookTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookTransitionTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookCurrentTime = 0;  
           mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
           mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
           mLookTransitionSpeed = (mTargetHeadDir - mCurrentHeadDir).Size() / mLookTransitionTime;  
      }  
   
      mCurrentHeadDir += mLookTransitionSpeed * (mTargetHeadDir - mCurrentHeadDir).GetSafeNormal() * GetWorld()->DeltaTimeSeconds;  
   
      if (mLookCurrentTime > mLookTransitionTime)  
      {  
           /*Damping*/  
           float lTransitionSpeedSign = FMath::Sign(mLookTransitionSpeed);  
           mLookTransitionSpeed = mLookTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
   
           if (lTransitionSpeedSign * FMath::Sign(mLookTransitionSpeed) == -1)  
           {  
                mLookTransitionSpeed = 0.f;  
           }  
   
           if (FMath::Abs(mCurrentHeadDir.X) > 0.9f)  
           {  
                mCurrentHeadDir.X = FMath::Sign(mCurrentHeadDir.X) * 0.9f;  
           }  
   
           if (FMath::Abs(mCurrentHeadDir.Y) > 0.2f)  
           {  
                mCurrentHeadDir.Y = FMath::Sign(mCurrentHeadDir.Y) * 0.2f;  
           }  
      }  
   
      mLookCurrentTime += DeltaTime;  
   
   
      /*weight shift calculations*/  
      if (mWSCurrentTime > mWSTransitionTime + mWSTime)  
      {  
           mWSTime = FMath::RandRange(20.f, 50.f) * 0.1f;  
           mWSTransitionTime = FMath::RandRange(30.f, 50.f) * 0.1f;  
           mWSCurrentTime = 0;  
           mTargetWeightShift = FMath::RandRange(-80.f, 80.f) * 0.01f;  
           mWSTransitionSpeed = (mTargetWeightShift - mCurrentWeightShift) / mWSTransitionTime;  
      }  
   
      mCurrentWeightShift += mWSTransitionSpeed * DeltaTime;  
   
      if (mWSCurrentTime > mWSTransitionTime)  
      {  
           /*Damping*/  
           float lTransitionSpeedSign = FMath::Sign(mWSTransitionSpeed);  
           mWSTransitionSpeed = mWSTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
   
           if (lTransitionSpeedSign * FMath::Sign(mWSTransitionSpeed) == -1.0f)  
           {  
                mWSTransitionSpeed = 0.f;  
           }  
   
           if (FMath::Abs(mCurrentWeightShift) > 1.0f)  
           {  
                mCurrentWeightShift = FMath::Sign(mCurrentWeightShift);  
           }  
      }  
   
      mWSCurrentTime += GetWorld()->DeltaTimeSeconds;  
   
      /*facial calculations*/  
      if (mFacialCurrentTime > mFacialTransitionTime + mFacialTime)  
      {  
           mFacialTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialCurrentTime = 0;  
           mTargetFacial = FMath::RandRange(0, 100) * 0.01f;  
           mFacialTransitionSpeed = (mTargetFacial - mCurrentFacial) / mFacialTransitionTime;  
      }  
   
      mCurrentFacial += mFacialTransitionSpeed * GetWorld()->DeltaTimeSeconds;  
   
      if (mFacialCurrentTime > mFacialTransitionTime)  
      {  
           mCurrentFacial = mTargetFacial;  
      }  
   
      mFacialCurrentTime += DeltaTime;  
 }