On 7/29/11 3:23 AM, Gilles Sadowski wrote:
> Hello.
>
>>>>>>> [...]
>>>>>>>
>>>>>>> The idea is to have "interleaved" calls to the candidate
>>>>>>> implementations, so that (hopefully) they will be penalized (or will
>>>>>>> benefit) in the same way by what the JVM is doing (GC or JIT
>>>>>>> compilation or ...) while the benchmark is running.
>>>>>>>
>>>>>>> Does this make sense?
>>>>>> Could it be merged with the FastMath performance tests Sebb set up?
>>>>> I don't think so. If you meant rewriting the
>>>>> "FastMathTestPerformance" tests
>>>>> using the proposed utility, I don't think that it is necessary.
>>>> This was what I meant.
>>>> If this feature is not used in any existing tests, perhaps it should go in
>>>> some other directory: perhaps a new "utilities" directory, or something
>>>> like that, at the same level as "main" and "test"?
>>>>
>>>> Anyway, if you feel it's useful to have this available, don't hesitate.
>>>>
>>> Well, the first use I had in mind was to provide an agreed-upon way to
>>> ground discussions of requests such as "CM's implementation of foo is not
>>> efficient", and to avoid wondering how the reporter got his results. [This
>>> problem occurs with the MATH-628 issue.] Then, when the reported problem
>>> is confirmed, the new implementation will replace the less efficient one in
>>> CM, so that there won't be any alternative implementation left to compare.
>>>
>>> If you agree with the idea of a "standard" benchmark, it would be very
>>> important that several people have a look at the code: my crude
>>> "methodology" might not be right, or there might be a bug.
>>>
>>> If the code is accepted, then we'll decide where to put it. Even if,
>>> according to the above, its primary use will not be for long-lived unit
>>> tests, it might still be useful for comparing the efficiency of CM's
>>> algorithms, such as the various optimizers. These comparisons could be
>>> added as performance reports similar to "FastMathTestPerformance".
>> +1 to include it.  I would say start by putting it in a top-level
>> package of its own, say "benchmark", in src/test/java. That way we
>> can use it in test classes, or in experiments that we set up as
>> benchmarks using test classes.
> Isn't it "safer" to put it in package "o.a.c.m" (under "src/test/java")?
> I was thinking of "PerfTestUtils" for the class name.

What I meant was o.a.c.m.benchmark, but I would be fine including it
at the top level to start.
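
For the archives, the interleaving idea described at the top of this
thread could look roughly like the sketch below. To be clear, this is
only an illustration of the scheme, not the actual utility under
discussion: the class, interface, and parameter names are all made up.

public class InterleavedBenchmarkSketch {
    /** A candidate implementation to be timed (illustrative). */
    interface Candidate {
        double call(double x);
    }

    public static void main(String[] args) {
        final Candidate[] candidates = {
            new Candidate() { public double call(double x) { return Math.sqrt(x); } },
            new Candidate() { public double call(double x) { return StrictMath.sqrt(x); } }
        };
        final int repeats = 10;     // measurements per candidate
        final int calls = 1000000;  // calls per measurement
        final long[] nanos = new long[candidates.length];
        double sink = 0; // consumed at the end so the JIT cannot discard the calls

        for (int r = 0; r < repeats; r++) {
            // Interleave: each repeat times every candidate once, so that a
            // GC pause or JIT compilation occurring mid-run tends to affect
            // all candidates instead of penalizing a single one.
            for (int c = 0; c < candidates.length; c++) {
                final long start = System.nanoTime();
                for (int i = 0; i < calls; i++) {
                    sink += candidates[c].call(i + 0.5);
                }
                nanos[c] += System.nanoTime() - start;
            }
        }

        for (int c = 0; c < candidates.length; c++) {
            System.out.printf("candidate %d: %.1f ms%n", c, nanos[c] * 1e-6);
        }
        System.out.println("sink = " + sink);
    }
}

(A real version would also want warm-up iterations and some statistics
over the per-repeat timings, but the nested loop above is the essence
of the interleaving.)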

Phil
>
>> If it evolves into a generically
>> useful microbenchmark generator, we can talk about moving it to
>> src/main.  Thanks for doing this.
> I didn't think that this utility would ever move to "main", as it's just for
> internal testing.
>
>
> Regards,
> Gilles

