The performance loss referred to there is reduced throughput.

There's a blog post by Nico Kruber [1] that covers Flink's network stack in
considerable detail. Its last section, on latency vs. throughput, gives more
insight into this point. In the experiment reported there, lowering the buffer
timeout from 1 ms to 0 ms cost roughly a factor of 2x in throughput, for very
little gain in latency.

[1] https://flink.apache.org/2019/06/05/flink-network-stack.html
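For reference, the buffer timeout is set on the execution environment (and can
also be set per operator on the resulting stream). Below is a minimal sketch
using the DataStream API just to illustrate the settings being discussed; the
concrete values are only examples, not recommendations:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class BufferTimeoutExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Default is 100 ms: a buffer is sent when it is full or when the
            // timeout fires, whichever comes first. A small positive value
            // trades some throughput for lower latency.
            env.setBufferTimeout(5);

            // env.setBufferTimeout(-1); // flush only when buffers are full (max throughput)
            // env.setBufferTimeout(0);  // flush after every record -- the case the docs warn about

            // trivial pipeline, just so the job runs
            env.fromElements(1, 2, 3).print();

            env.execute("buffer-timeout-example");
        }
    }

If only part of a job is latency-sensitive, the same knob can be applied to
individual operators via setBufferTimeout on the stream returned by a
transformation, rather than globally on the environment.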

On Wed, Sep 16, 2020 at 4:20 PM Mazen Ezzeddine <
mazen.ezzedd...@etu.unice.fr> wrote:

> Hi all,
>
>  I have read the following in the documentation:
>
> "To maximize throughput, set setBufferTimeout(-1) which will remove the
> timeout and buffers will only be flushed when they are full. To minimize
> latency, set the timeout to a value close to 0 (for example 5 or 10 ms). A
> buffer timeout of 0 should be avoided, because it can cause severe
> performance degradation."
>
>
> Why would a 0 BufferTimeout cause severe performance degradation? Shouldn't
> it provide minimum latency, and what is meant by performance degradation
> there? On the other hand, can we say that minimum latency is always >
> BufferTimeout?
>
> Best,
