Hi Grant,

I initially thought of doing so, but after working on the Million Queries Track, where running the 10,000 queries could take more than a day (depending on the settings) and where indexing was done once and took a few days, I felt that tighter control was needed than what the benchmark layer provides. Maybe I should rephrase that: I thought the time it would take to stabilize this functionality wasn't worth investing, because the run itself can take so long and I wouldn't want to repeat it all just because of a mistake in the benchmark settings. But then again, that is exactly the kind of reason to have a framework that protects you from such errors... :-) I'll take a second look at this!
For setting the similarities and whatever else, the SetProp task can be used to set the class names, and then your similarity of choice can be loaded by name (sketched below, after the quoted message). That restricts you to a no-args constructor, but this is not too bad...? We need a new QualityRunTask for sure, but that should be quite straightforward.

Cheers,
Doron

On Thu, Jan 31, 2008 at 12:31 AM, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
> Has anyone thought about integrating the contrib/benchmark Quality
> stuff into the "algorithm" framework that's used for timings, etc.?
> For instance, I would like to write an algorithm file where my rounds
> consist of doing various runs with different similarities all on the
> same index.
>
> It would probably need a new Task for setting the similarity (and the
> ability to modify the index using the setNorms functionality). Anyone
> else (Doron :-) ) have any thoughts on going about this?
>
> -Grant
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
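
A minimal sketch of what such a task could look like, assuming the existing PerfTask, Config, and IndexWriter.setSimilarity APIs of Lucene 2.x; the task name SetSimilarityTask and the "similarity.class" property name are hypothetical, chosen only to illustrate loading a Similarity by class name through its no-args constructor:

import org.apache.lucene.benchmark.byTask.PerfRunData;
import org.apache.lucene.benchmark.byTask.tasks.PerfTask;
import org.apache.lucene.search.Similarity;

// Hypothetical task, for illustration only. The task name and the
// "similarity.class" property are made up; PerfTask, Config, and
// IndexWriter.setSimilarity(...) are the existing Lucene 2.x pieces.
public class SetSimilarityTask extends PerfTask {

  public SetSimilarityTask(PerfRunData runData) {
    super(runData);
  }

  public int doLogic() throws Exception {
    // read the class name set earlier in the .alg file (e.g. via SetProp)
    String className = getRunData().getConfig().get(
        "similarity.class", "org.apache.lucene.search.DefaultSimilarity");
    // load by name - restricted to a no-args constructor, as noted above
    Similarity sim = (Similarity) Class.forName(className).newInstance();
    // install it on the writer; a searcher-side task would set it analogously
    getRunData().getIndexWriter().setSimilarity(sim);
    return 1;
  }
}

In an .alg file this would presumably run inside each round, right after a SetProp that changes similarity.class, so every round searches the same index under a different similarity. A QualityRunTask could follow the same PerfTask shape, reading the topics and qrels file names from the config and driving the existing QualityBenchmark code once per round.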