Phase 2 Status – Sean, Subhadip, Leslie, Sajika

by Sean Andrist on March 25, 2011 · 1 comment

in Project 1 Post-Break Checkpoint

1. Our main goal is to define a subspace of a continuous character configuration space using a motion capture database, and to use that subspace to animate a character in a way that is highly responsive to real-time user input. To produce the character space we will use motion fields, a high-dimensional analogue of a vector field. Choosing good actions within the field will require reinforcement learning to compute control policies.
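To make the terminology a bit more concrete, here is a minimal sketch of the kind of data a single point in the motion field might carry; the field names and types are our own placeholders, not the plug-in's actual layout.

```cpp
// Hypothetical layout of one motion state: a pose plus its velocity
// through pose space. Names and types are illustrative only.
#include <vector>

struct Quat { double w, x, y, z; };  // joint orientation
struct Vec3 { double x, y, z; };     // root translation

struct MotionState {
    Vec3 rootPosition;                    // root joint position at this frame
    std::vector<Quat> jointRotations;     // pose: one quaternion per joint
    std::vector<Quat> jointDisplacements; // orientation change to the next frame
    std::vector<int>  neighborIds;        // nearest states elsewhere in the field
};
```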

Our system, which will take the form of a plug-in to Maya, will synthesize novel motion at run time and animate the character accordingly. Motion fields allow quick, precise transitions between motions at any frame, with reasonably smooth animation. The system will also respond to user input, using reinforcement learning to choose the action that best satisfies the user's specifications.

2. For our demo in two weeks, we hope to go through the entire process of loading the plug-in into Maya, loading a whole directory of motions into our motion database, and then showing the resulting character animation. We will start by showing passive action selection, then demonstrate control of the character through a custom Maya GUI window (containing text inputs, sliders, buttons, etc.). A slider will specify the direction we want the character to move in; buttons may perturb the character in pre-defined ways or ask it to follow pre-drawn lines on the floor.
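As a rough sketch of how such a window could be built from a C++ plug-in, MEL commands can be issued through MGlobal::executeCommand. The widget names, ranges, and the mfPerturb callback below are placeholders, not our actual interface.

```cpp
// Hypothetical control window built by sending MEL from the plug-in.
// Widget names and the mfPerturb callback are placeholders.
#include <maya/MGlobal.h>

void createControlWindow() {
    MGlobal::executeCommand(
        "window -title \"Motion Field Control\" mfWindow;\n"
        "columnLayout;\n"
        // slider for the desired movement direction, in degrees
        "floatSliderGrp -label \"Direction\" -field true -min -180 -max 180 mfDirSlider;\n"
        // button wired to a (hypothetical) MEL procedure that perturbs the character
        "button -label \"Perturb\" -command \"mfPerturb\";\n"
        "showWindow mfWindow;");
}
```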

3. Success criteria: We will consider the project a success if we can carry out the demo just described. Getting motion fields implemented and working reliably on a database of motions is our primary goal, with the hope of extending it as time allows.

4. We have an implementation of the first part of the “Motion Fields for Interactive Character Animation” paper: the motion field database itself. It reads in an entire folder of motion capture files and creates motion states that store poses, joint velocities, and orientation displacements for all the joints at different points in time. We used the ANN library, available locally, to populate each motion state with its nearest neighbors. Using this, given an arbitrary pose, we are able to generate a new pose. Unfortunately it is not quite integrated with Maya yet, so we don’t have screenshots to show it off.
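For reference, a minimal sketch of the kind of ANN usage involved is below: build a kd-tree over the database states once, then query it for the k nearest neighbors of a state. The dimension, point count, and the flattening of a motion state into a double array are assumptions for illustration.

```cpp
// Minimal sketch of a k-nearest-neighbor query with the ANN library
// (Mount & Arya). Dimensions and counts here are placeholders.
#include <ANN/ANN.h>

int main() {
    const int dim  = 64;   // flattened state dimension (hypothetical)
    const int nPts = 1000; // number of motion states in the database
    const int k    = 15;   // neighbors to retrieve

    ANNpointArray dataPts = annAllocPts(nPts, dim);
    // ... fill dataPts[i][d] with the weighted coordinates of state i ...

    ANNkd_tree tree(dataPts, nPts, dim);  // build the kd-tree once

    ANNpoint queryPt = annAllocPt(dim);
    // ... fill queryPt with the current (weighted) state ...

    ANNidxArray  nnIdx = new ANNidx[k];
    ANNdistArray dists = new ANNdist[k];
    tree.annkSearch(queryPt, k, nnIdx, dists, 0.0); // eps = 0.0: exact search

    // nnIdx[0..k-1] now hold database indices of the nearest states.

    delete[] nnIdx; delete[] dists;
    annDeallocPt(queryPt);
    annDeallocPts(dataPts);
    annClose(); // release ANN's internal bookkeeping
    return 0;
}
```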

Sajika has been working on the reinforcement learning component and has implemented a basic version of the technique.
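For readers unfamiliar with the technique: the paper's control layer precomputes a value function over motion states. The toy tabular value iteration below illustrates that general machinery; the states, rewards, and transitions are placeholders, not Sajika's actual code.

```cpp
// Rough illustration of tabular value iteration, the kind of RL
// machinery the paper builds on. Everything here is a toy placeholder.
#include <algorithm>
#include <cmath>
#include <vector>

int main() {
    const int nStates = 100, nActions = 4;
    const double gamma = 0.9; // discount factor

    // next[s][a]: successor state; reward[s][a]: immediate task reward
    std::vector<std::vector<int>>    next(nStates, std::vector<int>(nActions, 0));
    std::vector<std::vector<double>> reward(nStates, std::vector<double>(nActions, 0.0));
    // ... fill next/reward from the motion database and the task spec ...

    std::vector<double> V(nStates, 0.0);
    for (int sweep = 0; sweep < 200; ++sweep) {
        double maxDelta = 0.0;
        for (int s = 0; s < nStates; ++s) {
            double best = -1e18;
            for (int a = 0; a < nActions; ++a) {
                double q = reward[s][a] + gamma * V[next[s][a]];
                best = std::max(best, q);
            }
            maxDelta = std::max(maxDelta, std::fabs(best - V[s]));
            V[s] = best;
        }
        if (maxDelta < 1e-6) break; // converged
    }
    // At run time, the policy picks the action maximizing reward + gamma*V.
    return 0;
}
```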

Sean has added a very simple interface to Maya allowing for interactive control. The functionality of this interface will be extended in the future.

The interface (more a proof of concept for now) is on the right side of the screen. At the moment it only allows you to manipulate the offset of the root joint as the motion plays.

5. Data: Right now we are working with the database of motion files found in P:\graphics. Specifically, we are using a directory of motions created by Lucas Kovar, as these cover a wide range of motion types and styles and are nicely annotated with footplant constraints.

6. Plan: We are almost at the point where we can display “passive action selection,” i.e. motion fields without the control aspect; essentially this will look like a motion screensaver. This week we will start integrating and extending the reinforcement learning component Sajika has been working on, and connecting it to the interactive control interface already created on the Maya side. In the time remaining, we hope to add perturbation response, arbitrary line following, and any other extensions or interesting ways to show off motion fields that we have time for.

{ 1 comment }

sghosh March 28, 2011 at 5:19 pm

– You say your improved version “implemented constraints” – does that mean you just represent/show them, or are you actually doing the IK?
> First off, yes, we are actually doing the IK.

——————————
– What are the phase-space distance metrics? Do you just use the quaternions (and quaternion derivatives) as positions, or is there something to make the space more metric?
> We scale each dimension by a weight factor. Right now we’re using the length of the bone associated with each dimension as the weight (as suggested in the paper). This means two states that differ by a given rotation of a large bone (like the femur) will be farther apart than two states that differ by the same rotation of a small bone (like a foot or finger).
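A rough sketch of that weighting, assuming each state has already been flattened into one array (the layout is simplified):

```cpp
// Sketch of the bone-length-weighted distance described above. The
// flattened state layout and weight array are assumptions.
#include <cmath>
#include <vector>

double weightedDistance(const std::vector<double>& a,
                        const std::vector<double>& b,
                        const std::vector<double>& boneLength) {
    double sum = 0.0;
    for (size_t d = 0; d < a.size(); ++d) {
        // scale each dimension's difference by its bone's length
        double diff = boneLength[d] * (a[d] - b[d]);
        sum += diff * diff;
    }
    return std::sqrt(sum);
}
```

In practice, since ANN computes plain Euclidean distances, the weights would be baked into the coordinates before the kd-tree is built rather than applied in a custom distance function.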

——————————
– It sounds like you are getting pieces in place, but it’s unclear to me how far along they are, and how well the simple versions of things work. Does just using nearest neighbors in phase space really generate motions? How much does blending a few choices help?
> The implementation of the basic part is done: we are able to use the ANN algorithm to find the nearest neighbors of every existing motion state (a state is equivalent to a frame in the sequence). We then take an arbitrary motion state as input and use the database to generate a new motion state via blending. At present we have a bug here – it generates the same new state every time – so that needs to be fixed.
The nearest neighbor approach should theoretically be able to generate new motion states; since we have yet to get correct final output, we can’t say more definitively.
Blending helps because it creates new states that are not present in the existing database. Without it, the motion would just reuse states from the database, and in that case the character won’t respond in a believable way to user input.
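As a sketch of the blending step (the 1/d² similarity weights are one common choice, and for simplicity this blends flat vectors, whereas the real system blends per-joint orientation displacements):

```cpp
// Sketch of similarity-weighted blending of the k nearest states'
// velocities. Weights ~ 1/d^2 are one common choice; layout simplified.
#include <vector>

std::vector<double> blendNeighborVelocities(
        const std::vector<std::vector<double>>& neighborVel, // k flattened velocities
        const std::vector<double>& dist)                     // k query distances
{
    const size_t k = neighborVel.size(), dim = neighborVel[0].size();
    std::vector<double> w(k), out(dim, 0.0);
    const double eps = 1e-8; // avoid divide-by-zero on exact matches
    double wSum = 0.0;
    for (size_t i = 0; i < k; ++i) {
        w[i] = 1.0 / (dist[i] * dist[i] + eps);
        wSum += w[i];
    }
    for (size_t i = 0; i < k; ++i)
        for (size_t d = 0; d < dim; ++d)
            out[d] += (w[i] / wSum) * neighborVel[i][d];
    return out; // apply this blended velocity to step the character forward
}
```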

——————————
– What size example database do you expect to work with? What examples are you trying it with? (A handful of walks with different curvatures?)
> We are not sure what database size and which examples we will end up working with; it will depend on how fast our implementation runs. So we will aim to get things working on a small scale and then improve from there.
The example database (Kovar’s directory of .bvh files) contains dozens of motions (I can count exactly how many when I get back to the lab tomorrow). We have running and walking examples, both in straight lines and in spirals, I believe. There are also a number of examples where the character does somersaults and all kinds of other weird things.
