nay wrote).
>
> Besides the questions asked by Chesnay, wouldn’t it be safer to implement
> record shedding at the user level, in the form of a random filtering operator?
>
> Piotrek
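
For illustration, a minimal sketch of such a random filtering operator, assuming the plain FilterFunction API (the class name and drop probability are made up):

    import java.util.concurrent.ThreadLocalRandom;

    import org.apache.flink.api.common.functions.FilterFunction;

    // Drops each record with a fixed probability; illustrative only.
    public class RandomSheddingFilter<T> implements FilterFunction<T> {
        private final double dropProbability;

        public RandomSheddingFilter(double dropProbability) {
            this.dropProbability = dropProbability;
        }

        @Override
        public boolean filter(T value) {
            // Keep the record unless the random draw falls below the drop probability.
            return ThreadLocalRandom.current().nextDouble() >= dropProbability;
        }
    }

    // Usage: stream.filter(new RandomSheddingFilter<>(0.10)) drops roughly 10% of records.

Since this runs as a regular user function, records are only dropped after they have been transported and deserialized, which is the usual trade-off compared to shedding inside the runtime itself.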
>
> > On 28 Mar 2018, at 15:49, Luis Alves wrote:
> >
> > @Chesnay I tried both approache
>> ut?), because it’s
>> easier there to differentiate between normal records and LatencyMarkers.
>>
>> Piotrek
>>
>> On 28 Mar 2018, at 11:44, Luis Alves wrote:
>>>
>>> Hi,
>>>
>>> As part of a project that I'm developing, I
Hi,
As part of a project that I'm developing, I'm extending Flink 1.2 to
support load shedding. I'm doing some performance tests to check the
performance impact of my changes compared to the Flink 1.2 release.
From the results that I'm getting, I can see that load shedding is working
and that incomi
Hi,
I’m running a Flink application with multiple tasks, and I noticed that in the
source task metrics, the input rate (num records in per second) always equals
0, even when it’s receiving records and processing them. I’m reading the input
from a socket.
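
For reference, a minimal sketch of this kind of setup, assuming a plain socketTextStream source (host, port and the mapper are illustrative). Note that a source task has no upstream Flink input channel, so its numRecordsIn-style metrics can legitimately stay at 0; the records it emits show up in its numRecordsOut metrics instead.

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class SocketInputExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            env.socketTextStream("localhost", 9999)        // source task reading the socket
               .map(new MapFunction<String, Integer>() {   // downstream operator
                   @Override
                   public Integer map(String line) {
                       return line.length();
                   }
               })
               .print();

            env.execute("socket-input-example");
        }
    }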
Given this, I have two questions:
If I
ers consume only after the partition has all data
available).
Did you also see this Wiki page here?
https://cwiki.apache.org/confluence/display/FLINK/Data+exchange+between+tasks
– Ufuk
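
As a point of reference, a small sketch (not from this thread) of the DataSet API's execution-mode switch that forces batch-style, blocking data exchange, so a consumer only starts once the producing partition is complete:

    import org.apache.flink.api.common.ExecutionMode;
    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class BlockingExchangeExample {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Force blocking data exchanges between tasks.
            env.getConfig().setExecutionMode(ExecutionMode.BATCH_FORCED);

            env.fromElements(1, 2, 3, 4)
               .map(new MapFunction<Integer, Integer>() {
                   @Override
                   public Integer map(Integer value) {
                       return value * 2;
                   }
               })
               .print();   // print() triggers execution in the DataSet API
        }
    }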
On Sun, Aug 13, 2017 at 6:59 PM, Luis Alves <lmtjal...@gmail.com> wrote:
> Hi,
>
> Can som
Hi,
Can someone validate the following regarding the ExecutionGraph:
Each IntermediateResult can only be consumed by a single ExecutionJobVertex,
i.e. if two ExecutionJobVertices consume the same tuples (the same “stream”)
produced by the same ExecutionJobVertex, then the producer will have t
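
For illustration only, a minimal DataStream program with the shape being asked about: a single producer whose output is consumed by two different downstream vertices:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class SharedProducerExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // One producer vertex...
            DataStream<Long> produced = env.generateSequence(0, 1000);

            // ...whose output is read by two separate downstream vertices.
            produced.filter(x -> x % 2 == 0).print();
            produced.filter(x -> x % 2 != 0).print();

            env.execute("shared-producer-example");
        }
    }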
release. If you are interested in helping
out with the implementation, then feel welcome!
Cheers,
Till
On Wed, Aug 9, 2017 at 12:45 AM, Luis Alves <lmtjal...@gmail.com> wrote:
> Hello,
>
> What happens when a task in a running job fails? Will all the current
> execu
Hello,
What happens when a task in a running job fails? Will all the current
executions of the job's tasks fail? Will all the slots being used by the job's
tasks (failed and non-failed ones) be released?
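
As a related, purely illustrative sketch (not an answer from the thread), this is how the restart behaviour after a task failure can be configured on the execution environment; the strategy and values are made up:

    import java.util.concurrent.TimeUnit;

    import org.apache.flink.api.common.restartstrategy.RestartStrategies;
    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class RestartStrategyExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Retry the job up to 3 times, waiting 10 seconds between attempts.
            env.setRestartStrategy(
                RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)));

            env.socketTextStream("localhost", 9999).print();
            env.execute("restart-strategy-example");
        }
    }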
Assuming all the slots are released, wouldn’t it make sense to:
1. “stop” the non-failed ta
ome of your questions.
Cheers,
Till
On Sat, Apr 22, 2017 at 7:42 PM, Luis Alves <lmtjal...@gmail.com> wrote:
> Hi,
>
> Regarding the features in Flink that can be used to optimize resource usage in
> the cluster (+ latency, throughput ...), i.e. slot sharing, task chaining,
>
Hi,
Regarding the features in Flink that can be used to optimize resource usage in the
cluster (+ latency, throughput ...), i.e. slot sharing, task chaining, async
I/O and dynamic scaling, I would like to ask the following questions (all in
the stream processing context):
In which cases would someo
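
For reference, a sketch (not from this thread) of where the features listed above surface in the DataStream API: slot sharing groups, chaining control, max parallelism for rescaling, and async I/O. Host, port, the group name and the faked lookup are illustrative, and the async API shown is the ResultFuture-based variant from later Flink versions:

    import java.util.Collections;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;

    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    public class ResourceKnobsExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setMaxParallelism(128);        // upper bound for later rescaling of keyed state

            DataStream<String> lines = env.socketTextStream("localhost", 9999);

            DataStream<String> cleaned = lines
                .filter(s -> !s.isEmpty())
                .startNewChain()               // task chaining: force a new chain at this operator
                .slotSharingGroup("ingest");   // slot sharing: isolate this part of the pipeline

            // Async I/O: at most 100 in-flight requests, 1 second timeout per request.
            AsyncDataStream
                .unorderedWait(cleaned, new LookupFunction(), 1000, TimeUnit.MILLISECONDS, 100)
                .print();

            env.execute("resource-knobs-example");
        }

        // Minimal async function; the "external" lookup is faked with a completed future.
        private static class LookupFunction extends RichAsyncFunction<String, String> {
            @Override
            public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
                CompletableFuture
                    .supplyAsync(() -> "looked-up:" + key)
                    .thenAccept(value -> resultFuture.complete(Collections.singleton(value)));
            }
        }
    }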
Hi!
Can someone confirm that the following statement is true: if we have a task
consuming from two other tasks (both using the ALL_TO_ALL distribution pattern),
then this task will have two InputGates consuming from the same index
(consumedSubpartitionIndex). This means that each task would always
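
For illustration, a minimal job with the shape described above: one two-input task whose two upstream connections both use an ALL_TO_ALL pattern (here via rebalance()):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.co.CoMapFunction;

    public class TwoInputGatesExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Two upstream tasks, each connected to the consumer with an ALL_TO_ALL
            // pattern (rebalance() redistributes to all downstream subtasks).
            DataStream<Long> left = env.generateSequence(0, 100).rebalance();
            DataStream<Long> right = env.generateSequence(100, 200).rebalance();

            // The two-input consumer: it reads the left and right inputs separately.
            left.connect(right)
                .map(new CoMapFunction<Long, Long, String>() {
                    @Override
                    public String map1(Long value) {
                        return "left:" + value;
                    }

                    @Override
                    public String map2(Long value) {
                        return "right:" + value;
                    }
                })
                .print();

            env.execute("two-input-gates-example");
        }
    }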