Wednesday, June 24, 2015

Avoiding High Dimensionality in Animation State Space

With the progress of computer hardware, video games can now contain plenty of animations, and this amount of animation data has to be managed. Each animation, or even just a few frames of it, needs to be played at the right moment to fulfill a motion-related task. Developers usually try to keep the animation controller separate from other modules such as AI or control: those modules just send some parameters to the animation system, and the animation system returns the most suitable animation in response. This keeps the complexity of managing animations out of the control and AI modules, which already have their own complexities.
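
To make that boundary concrete, here is a minimal sketch of the kind of interface I mean. The class and parameter names (AnimationController, MotionParams, the clip list) are placeholders of my own, not any particular engine's API.

```python
from dataclasses import dataclass

@dataclass
class MotionParams:
    speed: float      # desired forward speed, e.g. in m/s
    steering: float   # desired steering angle, e.g. in degrees

class AnimationController:
    """The AI/control modules only talk to this interface."""
    def __init__(self, clips):
        self.clips = clips          # names of the available animation clips

    def select(self, params: MotionParams) -> str:
        # Placeholder rule: a real controller would match the parameters
        # against per-clip metadata (speed range, turning radius, ...).
        return "run" if params.speed > 3.0 else "walk"

controller = AnimationController(["idle", "walk", "run", "turn_left"])
print(controller.select(MotionParams(speed=2.0, steering=15.0)))   # -> "walk"
```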

The animation controller promises to return the most suitable animation based on the input parameters. Different rules exist for selecting animations from these parameters. Usually, the speed, rotation, and translation of the current animation's bones are considered, and the controller selects the animation whose poses differ least in speed and translation/rotation from the currently playing poses. The returned animation also has to satisfy the motion-related task: it has to do what the other modules expect it to do. For example, a path planner can send input parameters such as a steering angle and a speed value to the animation controller, and the controller should return the motion from its existing animations that follows the path best.
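
As a rough illustration of that "least difference" rule, here is a small sketch that scores candidate clips by how far their starting pose and velocity are from the currently playing pose. The joint set, the weights, and the clip data are invented for the example; real controllers compare full 3D bone transforms and angular velocities.

```python
import math

# Each pose is simplified to: joint name -> (position, velocity), as plain numbers.
def pose_distance(pose_a, pose_b, w_pos=1.0, w_vel=0.5):
    d = 0.0
    for joint in pose_a:
        pa, va = pose_a[joint]
        pb, vb = pose_b[joint]
        d += w_pos * (pa - pb) ** 2 + w_vel * (va - vb) ** 2
    return math.sqrt(d)

def select_clip(current_pose, candidates):
    """candidates maps clip name -> first pose of that clip."""
    return min(candidates, key=lambda name: pose_distance(current_pose, candidates[name]))

current = {"hip": (0.00, 1.2), "knee": (0.30, 0.8)}
candidates = {
    "walk_loop": {"hip": (0.05, 1.1), "knee": (0.28, 0.9)},
    "run_start": {"hip": (0.40, 2.5), "knee": (0.10, 2.0)},
}
print(select_clip(current, candidates))   # -> "walk_loop", the closer pose
```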

Several animation controllers have already become standard in video games. The most famous are animation state machines. They appear in many game engines and animation middleware, and they can be combined with animation blend trees, which most animation systems also offer. Usually they are authored manually by animation specialists.
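
For readers who have not worked with one, a hand-authored state machine boils down to a set of states plus transition conditions and blend times, something like the toy sketch below (state names, thresholds, and blend times are made up for illustration).

```python
class AnimStateMachine:
    def __init__(self):
        self.state = "Idle"
        # (from_state, to_state, condition on parameters, blend time in seconds)
        self.transitions = [
            ("Idle", "Walk", lambda p: p["speed"] > 0.1, 0.2),
            ("Walk", "Run",  lambda p: p["speed"] > 3.0, 0.3),
            ("Run",  "Walk", lambda p: p["speed"] <= 3.0, 0.3),
            ("Walk", "Idle", lambda p: p["speed"] <= 0.1, 0.2),
        ]

    def update(self, params):
        for src, dst, condition, blend_time in self.transitions:
            if self.state == src and condition(params):
                print(f"blend {src} -> {dst} over {blend_time}s")
                self.state = dst
                break
        return self.state

sm = AnimStateMachine()
for speed in (0.0, 1.0, 4.0, 2.0, 0.0):   # Idle -> Walk -> Run -> Walk -> Idle
    sm.update({"speed": speed})
```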

There are other animation controllers as well, such as motion graphs, parametric motion graphs, and reinforcement-learning-based animation controllers. Each has its own characteristics and deserves its own discussion. Just note that all of these controllers can be implemented on top of an animation state machine that offers animation blending, transitions, time offsets within transitions, and hierarchical states. The Unreal Engine 4 animation system is a good example that provides most of these features (though not all).

Animation controllers might run into a problem when they have to manage many animations: the high dimensionality of the state space. The controller has to create many states so it can respond well to the input parameters, and the number of possible transitions grows roughly with the square of the number of states. A high-dimensional state space makes the system impractical, memory consuming, and very hard to maintain.
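
To make the growth concrete, suppose the controller's state is built from a handful of discretized parameters. The small calculation below is only illustrative, but it shows how quickly the counts get out of hand.

```python
# Illustrative numbers only: k parameters, each discretized into b bins,
# give b**k states; a densely connected controller then has up to
# (b**k)**2 candidate transitions to author, store, or search.
def state_space_size(num_params, bins_per_param):
    states = bins_per_param ** num_params
    transitions = states * states
    return states, transitions

for k in (2, 4, 6):
    states, transitions = state_space_size(k, 5)   # 5 bins per parameter
    print(f"{k} parameters -> {states} states, up to {transitions:,} transitions")
```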

In this post I want to introduce a paper based on research I did about a year and a half ago; the paper was published about nine months ago. The research was about reducing state parameters in a reinforcement-learning-based animation controller used for locomotion planning. Although RL-based animation controllers are still rarely used in the games industry, they are finding their way in, because they offer an almost automatic workflow for connecting the separate animations of an animation database, fulfilling different motion tasks, and creating a continuous motion space out of separate clips.
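
To give a flavor of what "RL-based animation controller" means in code, here is a minimal tabular Q-learning loop that learns which clip to play next. Everything in it, the state encoding, the reward, and the toy environment, is invented for illustration and is not the formulation used in the paper.

```python
import random
from collections import defaultdict

clips = ["walk_fwd", "turn_left", "turn_right", "stop"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)                       # (state, clip) -> learned value

def choose_clip(state):
    if random.random() < EPSILON:            # explore
        return random.choice(clips)
    return max(clips, key=lambda c: Q[(state, c)])   # exploit

def train(episodes, step_fn):
    """step_fn(state, clip) -> (next_state, reward); supplied by the task."""
    for _ in range(episodes):
        state = ("walk_fwd", 0)              # (current clip, heading error bin)
        for _ in range(50):
            clip = choose_clip(state)
            next_state, reward = step_fn(state, clip)
            best_next = max(Q[(next_state, c)] for c in clips)
            Q[(state, clip)] += ALPHA * (reward + GAMMA * best_next - Q[(state, clip)])
            state = next_state

def toy_step(state, clip):
    # Toy task: keep the heading error bin (-2..2) at zero.
    _, err = state
    if clip == "turn_left":
        err = max(err - 1, -2)
    elif clip == "turn_right":
        err = min(err + 1, 2)
    reward = 1.0 if err == 0 else -0.1 * abs(err)
    return (clip, err), reward

train(200, toy_step)
# After training, a state with heading error +1 should prefer "turn_left":
print(max(clips, key=lambda c: Q[(("turn_right", 1), c)]))
```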

I'll try to write another post on reducing states in a manually created animation state machine, since manually created state machines are the most widely used in the games industry. This post, however, is about reducing the dimensionality of the state space in an RL-based animation controller.

Here is the abstract:

"Motion and locomotion planning have a wide area of usage in different fields. Locomotion planning with premade character animations has been highly noticed in recent years. Reinforcement Learning presents promising ways to create motion planners using premade character animations. Although RL-based motion planners offer great ways to control character animations but they have some problems that make them hard to be used in practice, including high dimensionality and environment dependency. In this paper we present a motion planner which can fulfill its motion tasks by selecting its best animation sequences in different environments without any previous knowledge of the environment. We combined reinforcement learning with a fuzzy motion planer to fulfill motion tasks. The fuzzy control system commands the agent to seek the goal in environment and avoid obstacles and based on these commands, the agent select its best animation sequences. The motion planner is taught through a reinforcement learning process to find optimal policy for selecting its best animation sequences. To validate our motion planner‟s performance, we implemented our method and compared it with a pure RL-based motion planner."

You may want to read the paper here.
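
Reading the abstract again, the core idea behind the dimensionality reduction is that the RL state no longer has to encode the raw environment; it only has to encode the compact command produced by the fuzzy planner plus the current clip. The sketch below is my own rough rendering of that idea; the command fields, bins, and rules are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class FuzzyCommand:
    turn: str    # "left", "straight" or "right"
    speed: str   # "slow", "medium" or "fast"

def fuzzy_planner(goal_bearing_deg, nearest_obstacle_m):
    """Stand-in for the fuzzy control system: raw sensing in, small command out."""
    if nearest_obstacle_m < 1.0:
        return FuzzyCommand("left", "slow")        # placeholder avoidance rule
    if goal_bearing_deg > 15:
        return FuzzyCommand("right", "medium")
    if goal_bearing_deg < -15:
        return FuzzyCommand("left", "medium")
    return FuzzyCommand("straight", "fast")

def rl_state(command: FuzzyCommand, current_clip: str):
    # The RL state space is now 3 (turn) x 3 (speed) x number-of-clips,
    # no matter how many obstacles or goals exist in the environment.
    return (command.turn, command.speed, current_clip)

command = fuzzy_planner(goal_bearing_deg=30.0, nearest_obstacle_m=5.0)
print(rl_state(command, "walk_fwd"))   # -> ('right', 'medium', 'walk_fwd')
```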
