Project 1, Phase 2 - Andrist, Gallege, Ghosh, Watkins

by Leslie on March 4, 2011 · 9 comments

in Project 1 Phase 2 Proposals

—Basic Idea.

Our goal is to define a subspace of a character's continuous configuration space from a motion capture database, and to use it to animate the character in a way that is highly responsive to real-time user input. To produce this character space we will use motion fields, a high-dimensional analogue of a vector field defined over motion states. We plan to implement a basic version of reinforcement learning for deciding control policies.
Methods used: motion fields, reinforcement learning
Paper referred to: Motion Fields for Interactive Character Animation
Type: C++, MEL/Maya

—Expected Result.

Our system, which will take the form of a Maya plug-in, will synthesize novel motion at run-time and animate the character accordingly. Motion fields allow quick, precise transitions between motions at any frame, with reasonably smooth animation. The system will also respond to user input, using reinforcement learning to choose the motion that best satisfies the user's specifications.

—Success Criteria.

The user will be able to issue commands to control the character (or load a file of pre-written commands), and the appropriate motion will be generated at the proper time. At minimum, this gives us a directly controllable character, as in a video game. We hope to show a demo similar to the one presented in the Motion Fields paper, where we control the character's motion inside Maya.

—Work plan.

We will decouple the control from the motion fields and initially focus on the latter. By next week we hope to have augmented our infrastructure enough that each skeleton can hold an entire database of motion states, as defined in the Motion Fields paper. At that point we should also be close to implementing motion fields without control, i.e. without reinforcement learning. This is the "motion screensaver" idea, where the skeleton simply meanders through its available motion data. By 3/25 we hope to have a solid understanding of the reinforcement learning required for control, as well as an initial implementation of control (steering the character's direction and velocity at will). Ideally, by the end of the project, the character will also respond realistically to perturbations (pushes and pulls) and follow arbitrary lines in the ground plane.

—Foreseeable risks.

Using Maya carries a real risk of unforeseen issues, the most pertinent being real-time control. We have high hopes that this will work out, but we will evaluate it more fully over the weekend. By Monday we will have decided whether we can push forward with Maya or whether we need to jump ship and beg someone else for their initial architecture.

—Evaluation criteria.

This follows directly from the goals we have set for ourselves over the project timeline: we will evaluate our progress against each milestone in the work plan above.

{ 9 comments }

Aaron Bartholomew March 7, 2011 at 4:03 pm

I’m extremely excited to see this project become a reality, but I’m wondering if doing it in MAYA would be more trouble than it’s worth. Regardless of whether motion fields can theoretically be done in MAYA, is everyone in the group familiar enough with the program’s offerings and limitations? If not, then the burden of implementation might fall heavily on one or two people’s shoulders (to communicate the inner workings of MAYA and to translate the source into MEL). Maybe getting C++ into MAYA is easier than I’ve been led to believe, but it would seem that MAYA is a bottleneck to parallelizing and dividing up the implementation. I’m not sure what you decided on Monday, but I would go as far as to say you might want to use a different infrastructure no matter what.

Michael Correll March 7, 2011 at 4:24 pm

The motion fields paper was really interesting to me, and reading it I had the simultaneous thoughts “this would be really cool to see implemented” and “I would never personally want to implement this.” Other than getting reinforcement learning &c. working in Maya (which, I agree with Aaron, sounds like it could be a huge headache), my other concern is your choice of end product. When I think of “controllable, real-time motion,” the environment I envision is more like a video game than a keyframe-based renderer like Maya. In choosing a directly controllable character as your end goal, it seems like you’re discarding a lot of the great (and not so great) infrastructure that Maya already has in place for setting time constraints and handling complex time-planning tasks.

The motion fields paper also struck me as one that seemed naturally extensible: adding extra dimensions to the vector space and/or changing the reward functions to account for new constraints or new metadata seems easy once the initial infrastructure has been created. Is this going to be a paper implementation project, or did you folks have some ideas about what you could do to modify or extend motion fields?

danieljc March 7, 2011 at 4:27 pm

It seems like an interesting project. I would also really wonder how well a real-time application would work as a plugin for Maya. Maybe by now you have already figured this out (otherwise, pushing off the real time control part of the project for a few weeks like your plan says might be a bad idea).

Also, have you thought about how you intend to store all the motion data? Will you need to build and connect to a separate database and if so, have you thought about how well this will integrate with Maya?

Good luck!

Reid March 7, 2011 at 7:30 pm

I remember this paper as being very technical. I would give yourselves lots of time to get things working right. Integrating reinforcement learning with motion fields by the 25th seems like a pretty ambitious goal to me (not saying it can’t be done, just don’t underestimate its potential difficulty). On that note, I have an ML book with a whole chapter on reinforcement learning if you need to borrow one.

I don’t know much about MAYA, but I’m concerned its efficiency might not be the best. Not just for real time, but for the learning phase of reinforcement learning. One option would be to write the learner in C/C++ and implement only prediction in the MAYA language, with some way to import the results of the learner.

xlzhang March 7, 2011 at 10:05 pm

While the concept of motion fields as described in class (I have not read the paper yet) seems very interesting, and I love Maya since I’ve begun using it this semester, I’m not sure these two things go together very well. What I mean is that Maya has many features, but not many of them seem conducive to accomplishing your project. While the first phase of this project was perfect to do as a Maya plugin (with many features, such as interpolation between keyframes, already implemented), there doesn’t seem to be much synergy for your current proposal.

That aside, it would be very interesting if you could control your characters as you are proposing, and then save the resulting motion as a file of some type with Maya.

raja March 7, 2011 at 10:10 pm

The choice of developing a Maya plugin for this project seems to have gathered a lot of interest and debate!
It would be pretty great if you guys can pull off real-time interactivity for this. Like Sean mentioned, collaboration between your team and his would be useful for the reinforcement learning part (you guys can finally help Mike fully demystify it!).

Good luck!

sandrist March 8, 2011 at 12:03 pm

I would just like to make a quick comment addressing all of the concerns people seem to have – and rightly so – about our plans for the Maya implementation. People seem concerned that Maya is not the correct platform for interactive, controllable motion. In the way that most people use Maya, that statement is absolutely correct. However, the fact that Maya isn’t normally used for this sort of thing is precisely why I think it is interesting to create a plug-in that extends Maya to make it possible. Maya does not intrinsically rely on keyframes for animation; that is just the way most people do things. In our implementation, we simply hit “play,” and as time flows through the system, we animate the character accordingly, without the need to set keyframes. Furthermore, it is absolutely possible to interact with the animation as it is playing, changing its properties on the fly. Again, this is just something most people don’t think about doing when using Maya, but it’s possible.

Therefore, even though we thank you all for raising valid concerns and clarifying our thinking, we will go ahead and push through with Maya. Hopefully we can surprise you all and present a nice shiny plug-in that does all the things we hope. Or, perhaps, we will show up to class on project turn-in day with broken spirits, on our knees begging for you all to forgive our foolishness.

csv March 9, 2011 at 10:06 pm

I will be curious to know how you implement “reinforcement learning,” how you decide the parameters, and what interesting results show that reinforcement learning is really useful.

It will also be nice to see the algorithms and implementation of “motion fields.” I couldn’t find any open-source code to experiment with motion fields, so I will be keen to see how much the “devil lies in the details” here.

gleicher March 10, 2011 at 9:54 pm

I too am concerned about the “real time control in Maya” issue – but the payoff of having the real-time sessions logged, so you can go back and review them, might make it worthwhile.

Giving up some of the interactivity (so you need to pre-record the control and then see what the character does) is another option. It’s actually a valuable thing even if you do have interactivity.

You don’t say much about where the perturbations will come from. Will you provide some interface?

You also don’t say anything about how you are dividing the labor, or what parts of the system will be built in C++ vs. MEL/Maya/…

I am also wondering if you had thoughts on what motion data you will want to test things on.
