+1, I like the idea of this improvement, which acts as a watchdog for 
developers' code changes.

By the way, do you think it would be worthwhile to add a mode that simply 
disables checkpointing when running the end-to-end jobs? And when will stage 2 
and stage 3 be discussed in more detail?
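
For illustration, something like the sketch below is what I have in mind (the 
"checkpoint.enabled" / "checkpoint.interval" parameter names are made up for 
the example, not existing Flink options):

import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class E2eBenchmarkJob {
    public static void main(String[] args) throws Exception {
        ParameterTool params = ParameterTool.fromArgs(args);
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Hypothetical test-framework switch: run the same job with and
        // without checkpointing to compare the end-to-end cost.
        if (params.getBoolean("checkpoint.enabled", true)) {
            env.enableCheckpointing(params.getLong("checkpoint.interval", 1000L));
        }
        // ... build the benchmark topology here, then call env.execute()
    }
}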

Best
Yun Tang

On 11/1/19, 5:02 PM, "Piotr Nowojski" <pi...@ververica.com> wrote:

    Hi Yu,
    
    Thanks for the answers, it makes sense to me :)
    
    Piotrek
    
    > On 31 Oct 2019, at 11:25, Yu Li <car...@gmail.com> wrote:
    > 
    > Hi Piotr,
    > 
    > Thanks for the comments!
    > 
    > bq. How are you planning to execute the end-to-end benchmarks and 
    > integrate
    > them with our build process?
    > Great question! We plan to execute the end-to-end benchmarks on a small
    > cluster (e.g. 3 VM nodes) to better reflect network cost, trigger them
    > through the same Jenkins service used for the micro benchmarks, and show
    > the results on the codespeed center. Will add these details to the FLIP
    > document if there are no objections.
    > 
    > bq. Are you planning to monitor the throughput and latency at the same 
    > time?
    > Good question, and you're right: we will stress the cluster into
    > back-pressure and watch the throughput; latency doesn't mean much in the
    > first test suite. Let me refine the document.
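    > 
    > For illustration, a rough sketch of the kind of flood source that drives
    > the job into back-pressure (the class and its details are illustrative,
    > not part of the FLIP); throughput can then be read from Flink's
    > numRecordsOutPerSecond metric:
    > 
    > import org.apache.flink.streaming.api.functions.source.SourceFunction;
    > 
    > // Emits records as fast as possible so downstream operators become the
    > // bottleneck and the job is measured under back-pressure.
    > public class FloodSource implements SourceFunction<Long> {
    >     private volatile boolean running = true;
    > 
    >     @Override
    >     public void run(SourceContext<Long> ctx) throws Exception {
    >         long value = 0L;
    >         while (running) {
    >             synchronized (ctx.getCheckpointLock()) {
    >                 ctx.collect(value++);
    >             }
    >         }
    >     }
    > 
    >     @Override
    >     public void cancel() {
    >         running = false;
    >     }
    > }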
    > 
    > Thanks.
    > 
    > Best Regards,
    > Yu
    > 
    > 
    > On Wed, 30 Oct 2019 at 19:07, Piotr Nowojski <pi...@ververica.com> wrote:
    > 
    >> Hi Yu,
    >> 
    >> Thanks for bringing this up.
    >> 
    >> +1 for the idea and the proposal from my side.
    >> 
    >> I think that the proposed Test Job List might be a bit
    >> redundant/excessive, but:
    >> - we can always adjust this later, once we have the infrastructure in
    >> place
    >> - as long as we have the computing resources and the ability to quickly
    >> interpret the results/catch regressions, it doesn’t hurt to have more
    >> benchmarks/tests than strictly necessary.
    >> 
    >> Which brings me to a question. How are you planning to execute the
    >> end-to-end benchmarks and integrate them with our build process?
    >> 
    >> Another smaller question:
    >> 
    >>> In this initial stage we will only monitor and display job throughput
    >>> and latency.
    >> 
    >> Are you planning to monitor the throughput and latency at the same time?
    >> It might be a bit problematic, as when measuring the throughput you want
    >> to saturate the system and hit some bottleneck, which will cause
    >> back-pressure (measuring latency while the system is back-pressured
    >> doesn’t make much sense).
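    >> 
    >> For example, latency could be measured in a separate, rate-limited run
    >> using Flink's latency markers (a sketch only; the interval value is just
    >> an example):
    >> 
    >> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    >> 
    >> StreamExecutionEnvironment env =
    >>     StreamExecutionEnvironment.getExecutionEnvironment();
    >> // Emit a latency marker every second; only meaningful when the sources
    >> // are throttled, i.e. the job is not back-pressured.
    >> env.getConfig().setLatencyTrackingInterval(1000L);
    >> 
    >> That would suggest two separate suites: a saturated one for throughput
    >> and a throttled one for latency.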
    >> 
    >> Piotrek
    >> 
    >>> On 30 Oct 2019, at 11:54, Yu Li <car...@gmail.com> wrote:
    >>> 
    >>> Hi everyone,
    >>> 
    >>> We would like to propose FLIP-83 that adds an end-to-end performance
    >>> testing framework for Flink. We discovered some potential problems
    >>> through
    >>> such an internal end-to-end performance testing framework before the
    >>> release of 1.9.0 [1], so we'd like to contribute it to Flink community
    >>> as a
    >>> supplement to the existing daily run micro performance benchmark [2] and
    >>> nightly run end-to-end stability test [3].
    >>> 
    >>> The FLIP document could be found here:
    >>> 
    >>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-83%3A+Flink+End-to-end+Performance+Testing+Framework
    >>> 
    >>> Please kindly review the FLIP document and let us know if you have any
    >>> comments/suggestions, thanks!
    >>> 
    >>> [1] https://s.apache.org/m8kcq
    >>> [2] https://github.com/dataArtisans/flink-benchmarks
    >>> [3] https://github.com/apache/flink/tree/master/flink-end-to-end-tests
    >>> 
    >>> Best Regards,
    >>> Yu
    >> 
    >> 
    
    
