[…] whether the same operator instance would get the keys that were seen earlier and even those that would be coming now.
From: Chesnay Schepler
Sent: Monday, February 22, 2021 4:50 PM
To: Tripathi, Vikash; yidan zhao
Cc: user
Subject: Re: Sharding of Operators
Let's clarify what NextOp is.
Is it operating on a keyed stream, or is it something like a non-keyed map/filter?
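
For reference, a minimal sketch of the distinction being asked about, in DataStream API terms. The job, the tuple layout, and the operations are illustrative stand-ins, not code from this thread:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KeyedVsNonKeyed {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<Tuple2<String, Integer>> stream =
                    env.fromElements(Tuple2.of("A", 1), Tuple2.of("B", 2), Tuple2.of("A", 3));

            // Keyed: every record with the same key goes to the same parallel
            // instance, and keyed state is redistributed by key group on rescale.
            stream.keyBy(value -> value.f0)
                  .sum(1)
                  .print("keyed");

            // Non-keyed: records are simply spread across instances; there is
            // no per-key routing and no keyed state to move when rescaling.
            stream.filter(value -> value.f1 > 0)
                  .print("non-keyed");

            env.execute("keyed vs non-keyed sketch");
        }
    }

Only the keyed path has state that Flink must re-partition by key when the parallelism changes, which is why the distinction matters for the question below.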
[…] sets for key 'A' which are present on different instances of the same operator 'NextOp'.
How will the Flink runtime handle such a situation?
From: Chesnay Schepler
Sent: Friday, February 19, 2021 12:52 AM
To: yidan zhao; Tripathi, Vikash
Cc: user
Subject: Re: Sharding of Operators
When you change the parallelism, keys are re-distributed across operator instances.
However, this re-distribution is limited to the configured maxParallelism (set via the ExecutionConfig), which by default is 128 if no operator exceeded that parallelism on the first submission.
This cannot be changed after the job has been started for the first time without losing all state.
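
To make the mechanics concrete, here is a minimal standalone sketch of the key-group assignment scheme. The hash below is a stand-in for Flink's internal murmur hash (the real logic lives in Flink's KeyGroupRangeAssignment), so the concrete numbers are illustrative:

    // A key is hashed into one of maxParallelism key groups, and each key
    // group is mapped to an operator instance. Rescaling only moves whole
    // key groups between instances; a key group never splits.
    public class KeyGroupSketch {

        // Stand-in for Flink's murmur hash of key.hashCode().
        static int keyGroupFor(Object key, int maxParallelism) {
            return Math.abs(key.hashCode() % maxParallelism);
        }

        // Same formula as Flink's computeOperatorIndexForKeyGroup.
        static int operatorIndexFor(int keyGroup, int maxParallelism, int parallelism) {
            return keyGroup * parallelism / maxParallelism;
        }

        public static void main(String[] args) {
            int maxParallelism = 128; // the default, if never exceeded on first submission
            int keyGroup = keyGroupFor("A", maxParallelism);
            System.out.println("key 'A' -> key group " + keyGroup
                    + " -> instance " + operatorIndexFor(keyGroup, maxParallelism, 3));
        }
    }

The maxParallelism itself can be pinned up front, e.g. env.setMaxParallelism(128) on the StreamExecutionEnvironment, which is exactly why it must be chosen before the first run.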
Actually, we only need to ensure that all records belonging to the same key are forwarded to the same operator instance i; we do not need to guarantee that i is the same as the i in previous savepoints. When the job is restarted, the rule "records with the same key end up together" is still guaranteed.
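
In other words, using the sketch above (numbers are illustrative): after rescaling, key 'A' may land on a different instance index than before, but it is still exactly one deterministic instance, so its records and its restored state stay together:

    public class RescaleSketch {
        public static void main(String[] args) {
            int maxParallelism = 128;
            int keyGroup = Math.abs("A".hashCode() % maxParallelism); // fixed per key
            int before = keyGroup * 3 / maxParallelism; // instance index at parallelism 3
            int after  = keyGroup * 4 / maxParallelism; // instance index at parallelism 4
            // 'before' and 'after' may differ, but each is a single well-defined
            // instance, so all records (and restored state) for "A" stay together.
            System.out.println("p=3 -> instance " + before + ", p=4 -> instance " + after);
        }
    }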
Hi there,
I wanted to know how re-partitioning of keys per operator instance would happen when the operator instances are scaled up or down and we restart our job from a previous savepoint that had a different number of parallel instances of the same operator.
My main concern is whether the same operator instance would get the keys that were seen earlier and even those that would be coming now.
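
For completeness, the rescale-from-savepoint scenario asked about here is usually driven from the Flink CLI; the job ID, paths, and parallelism below are placeholders:

    # Take a savepoint of the running job
    flink savepoint <jobId> hdfs:///savepoints/

    # Resubmit the same job at a different parallelism, restoring from the savepoint
    flink run -s hdfs:///savepoints/savepoint-<id> -p 4 myJob.jar

On restore, Flink remaps whole key groups onto the new set of instances, provided the new parallelism does not exceed the job's maxParallelism.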