Hi,

There is a small set of benchmarks defined in
https://github.com/dataArtisans/flink-benchmarks , however their scope is
limited and after briefly looking at your PRs, I wouldn’t expect them to cover
your cases. However, if you could define some JMH micro benchmarks there to
cover your cases, that would be nice. It would be a shame if someone
accidentally reverted/broke your improvements in the future.
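
For example, a minimal JMH benchmark sketch could look like the following
(just a sketch: the class name, setup data and benchmarked loop are
placeholders for illustration, not your actual code paths):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class MyImprovementBenchmark {

    private String[] input;

    @Setup
    public void setUp() {
        // prepare the data that the code under test operates on
        input = new String[1024];
        for (int i = 0; i < input.length; i++) {
            input[i] = "record-" + i;
        }
    }

    @Benchmark
    public int improvedCodePath() {
        // invoke the code path touched by the PR here; returning the
        // result prevents the JIT from dead-code eliminating it
        int hash = 0;
        for (String s : input) {
            hash += s.hashCode();
        }
        return hash;
    }
}

Once such a benchmark is merged into flink-benchmarks, a future regression of
your improvement would show up in the benchmark results.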

Piotrek

> On 18 Jul 2018, at 08:52, 陈梓立 <wander4...@gmail.com> wrote:
> 
> Hi Till,
> 
> Thanks for your reply! I will try to add ones later.
> 
> Best,
> tison.
> 
> Till Rohrmann <trohrm...@apache.org> wrote on Wed, Jul 18, 2018 at 2:48 PM:
> 
>> Hi Tison,
>> 
>> at the moment there is no formal way to verify performance improvements.
>> What you can do is provide your measurements by adding the graphs to the PR
>> thread and specifying the setup. Then others can try to verify these numbers
>> by running their own benchmarks.
>> 
>> Cheers,
>> Till
>> 
>> On Wed, Jul 18, 2018 at 1:34 AM 陈梓立 <wander4...@gmail.com> wrote:
>> 
>>> Hi all,
>>> 
>>> Recently I opened 3 PRs about performance improvements [1][2][3]. Unit
>>> tests verify their correctness, and in the real scenario we have a
>>> benchmark report to confirm that they do improve performance.
>>> 
>>> I wonder what the formal way to verify a performance improvement is. Is
>>> it to give out a benchmark report, or run a standard benchmark, or add a
>>> performance test (I don't know how to do that), or something else?
>>> 
>>> Looking forward to your reply.
>>> 
>>> Best,
>>> tison.
>>> 
>>> [1] https://github.com/apache/flink/pull/6339
>>> [2] https://github.com/apache/flink/pull/6345
>>> [3] https://github.com/apache/flink/pull/6353
>>> 
>> 
