Mike’s Self-Evaluation

by Mike Gleicher on April 21, 2011

in Project 1

Since everyone did their self (and team) evaluations, I figure it's my job to do one as well.

My self-eval is mixed: I think that this project was overall a good idea, but each decision also came with a downside. It's not clear what the right thing is.

Overall: The outcomes were different than I expected. I think everyone learned a lot, but some of the lessons were different (more about group work, planning, etc) than what I expected. I did not expect that some of the technical issues (reinforcement learning) were going to be so hard for people to get to.

I should have been a bit more hands-on with groups: meeting more often, and trying to keep groups from getting stuck. Groups were generally able to get unstuck, but I can see where a little technical (or group-dynamics) intervention and advice could have gone a long way.

On groups of 4: I think that having to work in a self-organizing group of 4 is a valuable thing. Sometimes, though, what you learn is how hard it is, how to deal with personality conflicts, or how to cope with unexpected events in a team member's life, …

Putting together groups of 4 requires a bit of social engineering. For this project, I missed the call on some things. It was weird because some people I know very well, and others I really don’t know at all.

On the timing: The project did drag on. It should be shorter. I should have kept to the original length. Extending the time just means that the final “crunch” gets delayed – there is something to saying “here is the target, you get what you get.” Unfortunately, the odd timing of the semester (and other events in my life) made timing hard no matter what. Longer projects become really painful for dysfunctional groups, but sometimes groups need time to learn to work together.

On dealing with break: having a project extend over break is bad. But planning around it is difficult.

On open (and moving) deadlines: I knew it wasn’t a great idea, but I was surprised by how many people agreed with me.

On having the infrastructure phase: I think that this generally worked out well, because it forced people to confront some of the basic issues.

A surprising side effect: I think it was good for groups to work together in a very specified project before taking on something more open ended.

On the laundry list for the infrastructure phase: I have mixed feelings. The list should be specific, and consist of lots of small things. I wanted people to have infrastructure that would enable a range of projects (marker processing, for example), but this led to "busywork" (since it wasn't used by anyone, and had less pedagogical value than I expected).

In the future, I think I would defer some of the infrastructure until people narrowed their projects. Some things (like writing and testing a BVH parser) are valuable (both pedagogically and practically). Some things (quaternion operations) are better gotten from libraries.
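To make the BVH-parser example concrete: the skeleton (HIERARCHY) portion of a BVH file is a small recursive grammar, which is part of why writing and testing a parser for it is a good exercise. The sketch below is illustrative only (it handles just the hierarchy, not the MOTION block, and assumes well-formed input); the function and dict-field names are my own, not from any particular library.

```python
# A minimal sketch of parsing the HIERARCHY section of a BVH file.
# Joint names, OFFSET triples, and CHANNELS lists are collected into
# a nested dict. Illustrative only: no error recovery, and the MOTION
# block (frame count, frame time, channel data) is not handled.

def parse_bvh_hierarchy(text):
    tokens = text.split()
    pos = 0

    def next_tok():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse_joint(name):
        joint = {"name": name, "offset": None, "channels": [], "children": []}
        assert next_tok() == "{"
        while True:
            tok = next_tok()
            if tok == "OFFSET":
                joint["offset"] = [float(next_tok()) for _ in range(3)]
            elif tok == "CHANNELS":
                n = int(next_tok())
                joint["channels"] = [next_tok() for _ in range(n)]
            elif tok in ("JOINT", "End"):
                # "End Site" leaves markers named "Site" in this sketch
                joint["children"].append(parse_joint(next_tok()))
            elif tok == "}":
                return joint

    assert next_tok() == "HIERARCHY"
    assert next_tok() == "ROOT"
    return parse_joint(next_tok())
```

Testing it on a tiny two-joint skeleton is enough to exercise the recursion, which is the point of the exercise: the real work in a student parser is the MOTION block and mapping channel values back onto this tree.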

On giving out infrastructure: there are a whole bunch of "excuses" for not doing this, none of which is that compelling:

  • I don’t have a good infrastructure to give, and a bad infrastructure might be worse than nothing. Maybe deal with this by making using it optional, and developing an infrastructure over time.
  • Giving people the choice lets them pick things that I might not have. If I had given a C++ infrastructure, we wouldn’t have seen a Maya integration project, or a pure-python project.
  • It is interesting to see what people come up with.
  • Building the very basic stuff is the best way to really learn about it.
  • If a group fails to build a good infrastructure, they are really stuck for phase 2.
  • Everyone has different ideas on how this stuff should be built.

In the future, I think a BVH parser, quaternion/exp map implementation (including multi-way blending), efficient data structures for storage, maybe concatenation and blending, … would be good.
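As a sketch of what the quaternion/exp map piece with multi-way blending might look like: map each unit quaternion to its exponential-map vector, take a weighted average of those vectors, and map back. This is one standard approach, not the only one, and it behaves well only when the rotations being blended are near each other; the function names here are my own.

```python
import math

# A sketch of multi-way rotation blending via the exponential map,
# assuming unit quaternions stored as (w, x, y, z) tuples.

def quat_log(q):
    # Unit quaternion -> exponential-map vector (axis * half-angle).
    w, x, y, z = q
    s = math.sqrt(x*x + y*y + z*z)
    if s < 1e-9:
        return (0.0, 0.0, 0.0)
    k = math.atan2(s, w) / s
    return (x*k, y*k, z*k)

def quat_exp(v):
    # Inverse of quat_log: exponential-map vector -> unit quaternion.
    vx, vy, vz = v
    theta = math.sqrt(vx*vx + vy*vy + vz*vz)
    if theta < 1e-9:
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(theta) / theta
    return (math.cos(theta), vx*s, vy*s, vz*s)

def blend_quats(quats, weights):
    # Multi-way blend: weighted average of log vectors, then exp.
    # Only sensible when the rotations lie in a single chart
    # (i.e., they are reasonably close to one another).
    logs = [quat_log(q) for q in quats]
    avg = tuple(sum(w * l[i] for w, l in zip(weights, logs))
                for i in range(3))
    return quat_exp(avg)
```

For two rotations this reduces to something slerp-like; the payoff of the log/exp formulation is that the same code handles three or more inputs, which is what multi-way motion blending needs.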

On encouraging the use of outside libraries: This proved to be a total win. The problem is picking the right one. From 679, I've learned that it's too hard to pick a library and learn it in a short period of time. In the future, I should be more up on the choices so I can make suggestions. G3D seems like a total win, for example.

On being open ended with project idea choices: I should have pushed harder for people to set clear goals that were achievable. The concreteness of a group's goals seemed to be directly proportional to how well they did at achieving them. Having people implement basic methods (path editing, motion graph screen saver, …) should have come first. Maybe it should have been a three-phase project.

On including a variety of ideas: I had everyone implement point clouds, and read the Ho and Komura paper (and other alternative representations) since I was hoping more people would have done projects using those representations. In hindsight, a hidden agenda is really ineffective.

On letting people choose their tools and platforms: I still think this was OK, but I need to provide better resources. Fortunately, one of my distractions this semester was getting funding to redo the labs, so hopefully that won’t be an issue in the future.

On feedback and demos: I should have been more hands on with requesting demos and in-person discussions. There is something about the forcing function of having to show off what you’ve done. And 15 minutes of conversation can be more effective than 60 minutes of writing/emailing.

On the "project work weeks": I am not sure if this failed because it came before break (clearly the 2 weeks of no class didn't work), or if it's generally a bad idea.

On meddling with exploding groups: We had surprisingly few exploding groups, but some of that was luck. In cases where things started to go bad, I should have intervened earlier and more forcefully.

On splitting groups: the original idea was for the groups of 4 to split into 2 groups of 2 for the second phase. No one chose to do this. I should have either pushed things that direction, or decided it was a bad idea.

On requiring videos: this worked fabulously well. In the future, we’ll actually have to provide the tools.

On having everyone do different things: on one hand, each group gets to pick a project they like. On the other hand, there is less cross-fertilization between groups.

On lightening up on the reading load during project season: this is somewhat counter-productive. But realistic.

On requiring self-evaluations and peer-evaluations: I still believe in this. Making them structured seems to force people to at least think about the things I want them to think about.

