Thanks Aihua for the explanation.
The proposal looks good to me then.
Thanks,
Zhu Zhu
aihua li wrote on Thu, 21 Nov 2019 at 15:59:
Thanks for the comments Zhu Zhu!
> 1. How do we measure the job throughput? By measuring the job execution
> time on a finite input data set, or measuring the QPS when the job has
> reached a stable state?
> I ask this because, with LazyFromSource schedule mode, tasks are
> launched gradually
Thanks Yu for bringing up this discussion.
The e2e perf tests can be really helpful and the overall design looks good
to me.
Sorry it's late but I have 2 questions about the result check.
1. How do we measure the job throughput? By measuring the job execution
time on a finite input data set, or measuring the QPS when the job has
reached a stable state?
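For illustration, a minimal Java sketch of the two measurement options raised here; the class and method names below are made up for the sketch and are not part of the FLIP:

import java.util.List;

// Illustrative only: the two throughput definitions discussed above.
public class ThroughputSketch {

    // Option A: finite input data set -- total records processed divided by
    // the end-to-end job execution time.
    static double qpsFromExecutionTime(long totalRecords, long executionTimeMs) {
        return totalRecords / (executionTimeMs / 1000.0);
    }

    // Option B: stable state -- average of QPS samples collected after a
    // warm-up period, e.g. read periodically from a records-out-per-second meter.
    static double qpsFromSteadyStateSamples(List<Double> samplesAfterWarmup) {
        return samplesAfterWarmup.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(Double.NaN);
    }
}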
Since one week has passed with no further comments, I assume the latest FLIP
doc looks good to all, and I will open a VOTE thread for the FLIP soon. Thanks
for all the comments and discussion!
Best Regards,
Yu
On Thu, 7 Nov 2019 at 18:35, Yu Li wrote:
Thanks for the comments Biao!
bq. It seems this proposal is separated into several stages. Is there a
more detailed plan?
Good point! For stage one we'd like to try introducing the benchmark first,
so we could guard the release (hopefully starting from 1.10). For other
stages, we don't have detailed plans yet.
Thanks for the suggestion Jingsong!
I've added a stage for adding more metrics to the FLIP document; please check
and let me know if you have any further concerns. Thanks.
Best Regards,
Yu
Thanks for the comments.
bq. I think the perf e2e test suites will also need to be designed to
support running on both standalone and distributed environments, which will
be helpful for developing & evaluating the perf.
Agreed and noted; the benchmark will be able to run in standalone mode.
Thanks Yu for starting this discussion.
I'm in favor of adding an e2e performance testing framework. Currently the
e2e tests are mainly focused
on functionality and written in shell. We need a better e2e framework for
performance and functionality tests.
Best,
Yang
Biao Liu wrote on Tue, 5 Nov 2019:
Thanks Yu for bringing up this topic.
+1 for this proposal. Glad to have e2e performance testing.
It seems this proposal is separated into several stages. Is there a more
detailed plan?
Thanks,
Biao /'bɪ.aʊ/
On Mon, 4 Nov 2019 at 19:54, Congxian Qiu wrote:
+1 for this idea.
Currently, we have the micro benchmarks for Flink, which can help us find
regressions, and I think the e2e job performance tests can also help us
cover more scenarios.
Best,
Congxian
Jingsong Li wrote on Mon, 4 Nov 2019 at 17:37:
+1 for the idea. Thanks Yu for driving this.
Just curious: can we collect metrics about job scheduling and task launch?
The speed of this part is also important. We could add tests to watch it too.
Looking forward to more batch test support.
Best,
Jingsong Lee
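To make the scheduling/launch question concrete, here is a minimal sketch of how such timings could be derived, assuming the benchmark harness records a few timestamps around job submission (all names are hypothetical, not from the FLIP):

// Illustrative only: durations derived from timestamps a benchmark harness
// would record; nothing here is an existing Flink API.
public class SchedulingTimingSketch {

    // Time from job submission until the first task reaches RUNNING,
    // a rough proxy for scheduling speed.
    static long schedulingDelayMs(long submissionTs, long firstTaskRunningTs) {
        return firstTaskRunningTs - submissionTs;
    }

    // Time from job submission until all tasks are RUNNING,
    // a rough proxy for overall task launch speed.
    static long fullLaunchDurationMs(long submissionTs, long allTasksRunningTs) {
        return allTasksRunningTs - submissionTs;
    }
}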
> The test cases are written in Java and scripts in Python. We propose a
> separate directory/module in parallel with flink-end-to-end-tests, with the
> name of flink-end-to-end-perf-tests.
Glad to see that the newly introduced e2e tests will be written in Java,
because I'm re-working on the existing
In stage 1, checkpointing isn't disabled and the heap state backend is used.
I think there should be some special scenarios to test checkpointing and
state backends, which will be discussed and added in release 1.11.
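For illustration, a minimal sketch of what such a stage-1 job setup might look like with the 1.9/1.10-era DataStream API; the checkpoint interval, path, and placeholder pipeline are assumptions, not values from the FLIP:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class Stage1SetupSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpointing stays enabled in stage 1; leaving this call out would
        // correspond to the "disable checkpointing" mode discussed elsewhere
        // in this thread.
        env.enableCheckpointing(60_000);

        // FsStateBackend keeps working state on the JVM heap (the "heap"
        // state backend mentioned above) and checkpoints to the placeholder URI.
        env.setStateBackend(new FsStateBackend("file:///tmp/perf-checkpoints"));

        // Placeholder pipeline standing in for the real benchmark job.
        env.fromElements(1, 2, 3).print();
        env.execute("stage-1-perf-job-sketch");
    }
}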
Thanks for starting this discussion. I agree that performance tests will
help us to prevent introducing regressions.
+1 for this proposal.
Cheers,
Till
On Fri, Nov 1, 2019 at 5:13 PM Yun Tang wrote:
+1, I like the idea of this improvement which acts as a watchdog for
developers' code change.
By the way, do you think it would be worth adding a checkpoint mode that
simply disables checkpointing when running the end-to-end jobs? And when will
stage 2 and stage 3 be discussed in more detail?
Best
Yun Tang
Hi Yu,
Thanks for the answers, it makes sense to me :)
Piotrek
On 31 Oct 2019, at 11:25, Yu Li wrote:
Hi Piotr,
Thanks for the comments!
bq. How are you planning to execute the end-to-end benchmarks and integrate
them with our build process?
Great question! We plan to execute the end-to-end benchmark in a small
cluster (like 3 VM nodes) to better reflect network cost, triggering it
through our Jenkins
Hi Yu,
Thanks for bringing this up.
+1 for the idea and the proposal from my side.
I think that the proposed Test Job List might be a bit redundant/excessive, but:
- we can always adjust this later, once we have the infrastructure in place
- as long as we have the computing resources and ability