Sent: Thursday, January 04, 2018 12:20 PM
To: Stefan Richter
Cc: Netzer, Liron [ICG-IT]; user@flink.apache.org
Subject: Re: Lower Parallelism derives better latency
Just to make sure:
- This runs on one machine, so only local connections?
On Thu, Jan 4, 2018 at 10:47 AM, Stefan Richter
<s.rich...@data-artisans.com> wrote:
[...] can do and get back to you.
> Am I the first one who encountered such an issue?
>
> Thanks,
> Liron
>
>
> *From:* Stefan Richter [mailto:s.rich...@data-artisans.com]
> *Sent:* Thursday, January 04, 2018 11:15 AM
> *To:* Netzer, Liron [ICG-IT]
> *Cc:* user@flink.apache.org
> *Subject:* Re: Lower Parallelism derives better latency
>
> Hi,
>
> ok that would have been good to know, so forget about my explanation attempt
> :-). This makes it interesting, and at the same time [...]
From: Stefan Richter [mailto:s.rich...@data-artisans.com]
Sent: Wednesday, January 03, 2018 3:20 PM
To: Netzer, Liron [ICG-IT]
Cc: user@flink.apache.org
Subject: Re: Lower Parallelism derives better latency

Hi,

one possible explanation that I see is the following: in a shuffle, there are
input and output buffers for each parallel subtask to which data could be
shuffled. Those buffers are flushed either when full or after a timeout
interval. If you increase the parallelism, there are more buffers [...]
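The flush timeout described above is Flink's network buffer timeout, which defaults to 100 ms and can be lowered per job to trade some throughput for latency. A minimal sketch, assuming the Java DataStream API; the toy topology and class name are made up for illustration:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BufferTimeoutSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Output buffers are flushed when they fill up OR when this timeout
        // fires, whichever comes first. Lowering it from the 100 ms default
        // reduces the time partially filled buffers sit around.
        env.setBufferTimeout(5);

        env.fromElements("a", "b", "c")
           .map(new MapFunction<String, String>() {
               @Override
               public String map(String value) {
                   return value.toUpperCase();
               }
           })
           .print();

        env.execute("buffer-timeout-sketch");
    }
}

If I read the truncated argument correctly, the point is that with higher parallelism each buffer receives a smaller share of the traffic, fills more slowly, and more records end up waiting for the timeout instead of a full-buffer flush, which would match the "lower parallelism, better latency" observation in the subject.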
[...] Thanks,
Liron
From: Aljoscha Krettek [mailto:aljos...@apache.org]
Sent: Wednesday, January 03, 2018 3:03 PM
To: Netzer, Liron [ICG-IT]
Cc: user@flink.apache.org
Subject: Re: Lower Parallelism derives better latency
Hi,
How are you measuring latency? Is it latency within a Flink Job or from Kafka
to Kafka? The first seems more likely but I'm still interested in the details.
Best,
Aljoscha
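One rough way to measure latency inside the job, as opposed to end-to-end from Kafka to Kafka, is to stamp each record with the wall-clock time when it enters the pipeline and subtract that at the last operator. A minimal sketch, assuming the Java DataStream API; the source elements and class name are invented for illustration:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class InJobLatencySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("event-1", "event-2", "event-3")
           // Stamp each record with the time it entered the pipeline.
           .map(new MapFunction<String, Tuple2<String, Long>>() {
               @Override
               public Tuple2<String, Long> map(String value) {
                   return Tuple2.of(value, System.currentTimeMillis());
               }
           })
           // At the end of the pipeline, report how long the record was in flight.
           .map(new MapFunction<Tuple2<String, Long>, String>() {
               @Override
               public String map(Tuple2<String, Long> stamped) {
                   long latencyMs = System.currentTimeMillis() - stamped.f1;
                   return stamped.f0 + " spent " + latencyMs + " ms inside the job";
               }
           })
           .print();

        env.execute("in-job-latency-sketch");
    }
}

Note that this only captures the time between the two map operators on a single machine's clock, so it is a rough indication rather than a precise end-to-end measurement.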
> On 3. Jan 2018, at 08:13, Netzer, Liron wrote:
>
> Hi group,
>
> We have a standalone Flink cluster that is running [...]