Project 1 – Initial Specification

by Mike Gleicher on February 2, 2011

This project aims to get you to work with human motion / motion capture data.

The project splits roughly in half over the 6 weeks: in the first half, you’ll work to develop the infrastructure for working with motion data, and in the second half, you’ll do something more interesting with the data.

You will be assigned to a group of 4.

For the infrastructure part of the assignment, how your group works together is up to you: you could give each person a separate piece and build 4 programs, build a single integrated system, or have everyone implement everything and have a contest to see which one is best. What matters is that between the 4 of you, you have software that meets a set of requirements (given below). This first part is pretty concrete: I will tell you the pieces to build (so you can build pieces that will be useful for the second part). There will be some choices, but overall, it’s pretty set.

For the “research” part, your group is responsible for choosing and completing 2 different mini-research projects (for example, implementing a basic version of a paper or trying some new idea). You may choose to split your group into 2 halves for this part (each doing 1 sub-project): but note that everyone has some responsibility for the success of all pieces (since the infrastructure is shared). For this second part, you will need to pick something (I’ll give you suggestions – but overall we’ll work together to come up with ideas), develop a plan, …

You may choose the tools that you use for this project. The only catch is that you have to be able to give me a demo – this doesn’t necessarily mean using a CSL-supported computer. If it runs in your office, or on your laptop, that’s OK. You may want to have screen capture tools so you can create videos of your work. Or if you produce animations, write out a series of stills and assemble them (or figure out how to write to video files – which is more of a pain than it should be). I will ask each group to tell me about the tools they intend to use.

The schedule:

  • Friday, Feb 4th – groups formed, project announced
  • Friday, Feb 11th – infrastructure plan and signs of life
  • Friday, Feb 18th – progress report
  • Friday, Feb 25th – infrastructure demo, “research” plan
  • Friday, March 4th – status updates
  • Friday, March 11th – projects officially due
  • Friday, March 18th – Spring Break

The infrastructure requirements:

Collectively, your group must demonstrate these capabilities.

  1. You must be able to read skeletal motion capture data from BVH and ASF/AMC files. There are plenty of examples of these formats around (BVH tends to be what we’ve used here, so there are directories full of it; ASF/AMC is what the CMU motion database provides). Note: your internal representation should probably allow for the bones to vary in length.
  2. You must be able to read in marker position data from at least 2 different common formats (c3d, tvd, trc, …). There are lots of examples at mocapclub.com, …
  3. You must be able to visualize the above data (both skeletal and marker position) – with a player that works at frame rate, allows for stop motion / scrubbing (with a slider), scanning, … For the marker data, it would be nice if you had some way to “connect the dots” (although you’ll need to tell the program what the connections are). For the skeletal data, you need to draw each bone in a way that makes it clear what the orientation of each bone is. A smooth skin (even a simple one) is acceptable, or a collection of independent rigid pieces.
  4. You must be able to generate marker position data from the skeletons. You should be able to generate “tracks” from the joints, or to put “virtual markers” in different coordinate frames. (The first sketch after this list shows one possible skeleton representation and how virtual markers can fall out of it.)
  5. You must be able to splice pieces of motion together (given a set of ranges of frames, create a single long piece of motion with those pieces). Note: this not only means copying the frames, but also rigidly transforming the pieces so that the end of one meets the beginning of the next. You do not need to have an automatic way to determine what splices to make. In fact, for this part, you don’t even need to worry about the transitions. You do need to have a way to automatically figure out the rigid alignment (to put one clip after the next) – see the second sketch below.
  6. You must be able to interpolate between frames. You should be able to resample motions (create samples between frames by interpolating), as well as blend between two motions. This probably means you need to use quaternions, not Euler angles, internally (see the interpolation sketch below).
  7. You need to be able to write out the motions that are created internally.
  8. You should be able to store footplant (or other positional) constraints. You can specify these separately (for example, a file that says things like “frames 5–6, joint 5 doesn’t move”). You should be able to read and write these (note: some of the motions in our library are annotated this way, but you may use your own format), and visualize the constraints (show their positions, and show how badly they are being violated). You do not need to solve the constraints. A small sketch of storing and checking constraints appears below.
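
Here is a rough sketch (in Python, purely as an illustration – none of the names or conventions below are required) of the kind of internal skeleton representation item 1 hints at, and of how item 4’s virtual markers can be computed from it. Bone lengths live in the per-joint offsets, so skeletons with different proportions fall out naturally; how you fill in the per-frame local rotations (from BVH Euler channels, quaternions, …) is up to your file readers.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional
import numpy as np


@dataclass
class Joint:
    name: str
    offset: np.ndarray                      # bone vector from the parent, in the parent's frame
    parent: Optional["Joint"] = None
    children: List["Joint"] = field(default_factory=list)


def world_transform(joint: Joint, rots: Dict[str, np.ndarray],
                    root_pos: np.ndarray) -> np.ndarray:
    """4x4 joint-to-world transform for one frame.

    `rots` maps joint name -> 3x3 local rotation for that frame (built from
    whatever your file reader produces: Euler channels, quaternions, ...)."""
    local = np.eye(4)
    local[:3, :3] = rots[joint.name]
    local[:3, 3] = root_pos if joint.parent is None else joint.offset
    if joint.parent is None:
        return local
    return world_transform(joint.parent, rots, root_pos) @ local


def virtual_marker(joint: Joint, rots: Dict[str, np.ndarray], root_pos: np.ndarray,
                   local_offset=(0.0, 0.0, 0.0)) -> np.ndarray:
    """World position of a marker rigidly attached to `joint` at `local_offset`."""
    p = np.append(np.asarray(local_offset, dtype=float), 1.0)
    return (world_transform(joint, rots, root_pos) @ p)[:3]
```

A joint “track” (requirement 4) is then just this evaluated at every frame, with different coordinate frames handled by which joint you attach the marker to.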

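For the splicing alignment in item 5, one simple approach (assuming a y-up world, and that you can pull the root’s ground-plane position and facing angle out of a frame – those accessors are placeholders, not something I’m specifying) is to solve for a rotation about the vertical axis plus a translation that carries the first frame of the next clip onto the last frame of the previous one, then apply that transform to every frame of the next clip:

```python
import math
import numpy as np


def align_transform(prev_end_pos, prev_end_yaw, next_start_pos, next_start_yaw):
    """Rotation theta about the y axis plus translation t that put the next
    clip's first frame on top of the previous clip's last frame."""
    theta = prev_end_yaw - next_start_yaw            # match the facing directions
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = next_start_pos
    rotated = np.array([c * x + s * z, y, -s * x + c * z])
    t = np.asarray(prev_end_pos, dtype=float) - rotated
    t[1] = 0.0                                       # keep the translation in the ground plane
    return theta, t


def apply_alignment(root_pos, root_yaw, theta, t):
    """Transform one frame's root position and yaw by the alignment."""
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = root_pos
    return np.array([c * x + s * z, y, -s * x + c * z]) + t, root_yaw + theta
```

Real root orientations are full 3D rotations, so in practice you would factor out the yaw component rather than store it directly; the sketch sidesteps that detail.
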
Note that your program will need to read in multiple motion files in order to do this. Think of this as a system that can load a bunch of different motion files and create new ones based on those. Also, you won’t actually do much with the marker data – it’s more of a warmup exercise (well, doing more with the marker data might be something to add later).
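
For item 6, the key piece is interpolating rotations sensibly. Here is a bare-bones slerp (quaternions as (w, x, y, z) tuples – again, just one possible convention) plus a resampler that applies it per joint. Blending two motions is the same machinery with frames drawn from two different clips, and the root translation gets a plain lerp.

```python
import math


def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions q0, q1 for u in [0, 1]."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                                    # take the shorter path around the sphere
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                                 # nearly identical: lerp and renormalize
        q = tuple((1 - u) * a + u * b for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    w0 = math.sin((1 - u) * theta) / math.sin(theta)
    w1 = math.sin(u * theta) / math.sin(theta)
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))


def resample(frames, new_count):
    """Resample a list of frames (dicts of joint name -> quaternion) to `new_count` frames."""
    if new_count < 2 or len(frames) < 2:
        return [dict(f) for f in frames[:new_count]]
    out = []
    for i in range(new_count):
        t = i * (len(frames) - 1) / (new_count - 1)
        lo = int(t)
        hi = min(lo + 1, len(frames) - 1)
        out.append({j: slerp(frames[lo][j], frames[hi][j], t - lo) for j in frames[lo]})
    return out
```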

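For item 8, the storage side can be as simple as a little text file and a small record type. The format below (“start_frame end_frame joint_name” per line) is only an illustration – it is not the annotation format used in our motion library – and the violation measure is just one reasonable choice (how far the joint drifts from where it was when the plant started):

```python
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class PlantConstraint:
    start: int          # first constrained frame (inclusive)
    end: int            # last constrained frame (inclusive)
    joint: str          # name of the planted joint


def read_constraints(path: str) -> List[PlantConstraint]:
    """Read 'start end joint' lines; anything else is ignored."""
    out = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                out.append(PlantConstraint(int(parts[0]), int(parts[1]), parts[2]))
    return out


def violation(constraint: PlantConstraint, joint_world_positions) -> float:
    """How badly a plant is violated: max distance of the joint from its
    position on the constraint's first frame, over the constrained range.
    `joint_world_positions[f]` is the joint's world position at frame f."""
    anchor = np.asarray(joint_world_positions[constraint.start], dtype=float)
    return max(np.linalg.norm(np.asarray(joint_world_positions[f]) - anchor)
               for f in range(constraint.start, constraint.end + 1))
```
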
In designing your program, you may find it helps to think ahead about what you’ll be doing with it. You don’t need to do these things now, but make sure your program provides the right infrastructure for them (a small frame-distance sketch follows this list):

  • Create a motion graph, and synthesize a new motion clip that meets some constraints.
  • Create a motion graph, and synthesize a continuous stream of motion randomly (the “motion screen saver”).
  • Splice the top part of one motion onto the bottom part of another.
  • Compare interpolating motions using a “Cartesian” representation (like Kulpa et al) and a skeletal one.
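
If your group heads toward motion graphs in part 2, you’ll need some notion of how similar two frames are. Here is a deliberately simplified stand-in for the point-cloud metrics in the motion graph papers (no optimal 2D alignment, no joint weighting) – just enough to prototype against your infrastructure, assuming you can already get world joint positions per frame (requirement 4):

```python
import numpy as np


def frame_distance(positions_a, positions_b, i, j, window=5):
    """Compare windows of joint positions around frame i of clip A and frame j of clip B.

    `positions_x[f]` is an (njoints, 3) array of world joint positions at frame f;
    the two clips are assumed to have matching joints."""
    total = 0.0
    for k in range(-window, window + 1):
        fa = min(max(i + k, 0), len(positions_a) - 1)    # clamp at the clip boundaries
        fb = min(max(j + k, 0), len(positions_b) - 1)
        total += float(np.sum((positions_a[fa] - positions_b[fb]) ** 2))
    return total
```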

Some caveats that you probably will want to consider:

  • Not all skeletons have the same topology (set of joints) – you don’t need to be able to mix and match between them (for example, to blend motions with different numbers of joints). Different motions have skeletons with differing bone lengths – your program should do something reasonable with this.
  • You may want to have inverse kinematics to “clean up” the foot skating.
  • You might want to do everything with a non-skeletal representation.
  • You will need to have camera controls, and probably have some automated camera (for example one that can follow the character if it moves far).

At the same time that you are building this infrastructure, you should be thinking about what you will want to do with it during the second part of the project. As you have more of an idea of what you’ll want to do, you can steer your infrastructure towards it.

Part 2…

After we’ve learned more about motion capture data, we’ll discuss what will be reasonable projects to do with it. I promise to give you some ideas, but I want people to come up with their own.

Your project might try to recreate / understand some existing technique (near-optimal control), build a special case version of some complicated thing so we can better understand it (near-optimal control), see how two different techniques might fit together (motion graphs + path editing, motion graphs + splicing), see how to use different underlying representations for basic methods (using Cartesian representations for motion graphs or splicing), …

The idea for this part is that with a group of 4 it might be hard to divide up the work on a single project, so each group is expected to do 2 things. You might find it best to work as independent pairs sharing the infrastructure. Or maybe you will find it best for everyone to collaborate on everything.

The actual deliverables

This will undoubtedly evolve as the project unfolds.

Our main way to assess this will be live demos and discussion. Generally, we’ll use class time on Fridays for meetings with each group to discuss how things are going. We might also have a class meeting some of the weeks – we’ll see how it goes.

Several times, I will ask groups to make postings to the web site (in the Project 1 Handins category). I will ask you to do things like post a picture to show off the status of your program, or show a picture of a challenging motion file that your program was able to handle. Also, you will be asked to post your project ideas – and to comment on other people’s.

The final project handin will be written, and will include the code and other artifact deliverables, self-evaluations, etc.

Details will be released as time goes on.
