On 3/30/2011 12:13 PM, Ross Gardler wrote:
For what it's worth I've also made it clear that making these changes at this 
point is totally unreasonable. This is creating work for our admins - like they 
need more work.

It is strange that every year they seem to rework Melange; it is always different, but I can't say it has ever improved. Maybe there is something intrinsically hard about developing an application that is used just once a year. Certainly it is not helpful to introduce such changes to students, mentors and admins on the day the program kicks off.
If there are a number of issues like this I strongly suggest considering using 
a spreadsheet for the evaluation (assuming the new interface enables us to 
export the data). The reason I say this is that this system has evolved and been 
fine-tuned over the last 6 years. It has been very hard to develop a system 
that everyone feels is fair, and trying to come up with a new one now will require 
new documentation and discussion.

I agree we should do what is needed to keep our rubric, even if we have to supplement Melange with a spreadsheet or whatever to determine the proper ranking and then input it to match the new software. Maybe when the scoring period starts, mentors should just record each score as part of a comment, as described in our process[1], but not assign actual points. Then at least the scores and the reasons would be visible to the community while we figure out how to conform to the new system.

Thanks

Kathey


[1]http://community.apache.org/mentee-ranking-process.html