I posted a related observation in a separate thread:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Does-making-synchronize-call-might-choke-the-whole-pipeline-tc28383.html
private static void doWork(long tid) throws InterruptedException
{
    // sortedTid is assumed to be a concurrent sorted set, e.g. ConcurrentSkipListSet<Long>
    if (!sortedTid.contains(tid)) {
        sortedTid.add(tid);
    }
    // simulate a straggler: make the thread with the lowest tid a slow processor
    // (the original snippet is truncated here; the sleep below is an assumed completion)
    if (sortedTid.first() == tid) {
        Thread.sleep(60_000);
    }
}
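For context, here is a minimal sketch of how a method like doWork() might be driven from a Flink map operator so that one subtask becomes the straggler; the MapFunction wrapper and class name are my assumptions, not part of the original snippet:

import org.apache.flink.api.common.functions.MapFunction;

// Hypothetical wrapper: every record passes through doWork(), so the subtask
// whose thread registered the lowest tid sleeps and becomes the straggler.
// Assumes doWork() and sortedTid are static members of the enclosing class.
public static class StragglerMap implements MapFunction<String, String> {
    @Override
    public String map(String value) throws Exception {
        doWork(Thread.currentThread().getId());
        return value;
    }
}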
Fabian,
Does the above stack trace look like a deadlock?
at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:539)
- locked <0x0007baf84040> (a java.util.ArrayDeque)
at org.apache.flink.runtime.io.netwo
Fabian,
Thank you for replying.
If I understand your previous comment correctly, I set up a consumer with
parallelism 1 and connect a worker task with parallelism 2.
If worker thread one is making a blocking call and is stuck for 60s, the
consumer thread should continue fetching from the partition.
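Here is a minimal sketch of the topology I have in mind (class names, topic, and properties are placeholders, and the Kafka connector class name may differ by Flink version):

import java.util.Properties;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class BlockingWorkerJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "blocking-worker-test");

        // Kafka consumer (source) with parallelism 1
        DataStream<String> messageStream = env
                .addSource(new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props))
                .setParallelism(1);

        // worker task with parallelism 2; subtask 0 blocks for 60s per record
        messageStream
                .rebalance()
                .map(new RichMapFunction<String, String>() {
                    @Override
                    public String map(String value) throws Exception {
                        if (getRuntimeContext().getIndexOfThisSubtask() == 0) {
                            Thread.sleep(60_000); // simulate the blocking call
                        }
                        return value;
                    }
                })
                .setParallelism(2)
                .print();

        env.execute("blocking-worker-test");
    }
}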
Thanks Fabian. This is really helpful.
Elias, thanks for your reply. In this case:

*When # of Kafka consumers = # of partitions, and I use setParallelism(>1),
something like this:
'messageStream.rebalance().map(lambda).setParallelism(3).print()'*

If checkpointing is enabled, I assume Flink will commit the offsets in the
'right order' d
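To make that concrete, this is the fragment I would add to the job setup (assuming env and messageStream come from the surrounding job; the interval is just an example value). My understanding is that, with checkpointing enabled, the Kafka consumer commits offsets back to Kafka when a checkpoint completes rather than as records are read:

import org.apache.flink.api.common.functions.MapFunction;

// inside the job setup, before env.execute():
env.enableCheckpointing(10_000); // checkpoint (and thus offset commit) every 10s

messageStream
        .rebalance()
        .map(new MapFunction<String, String>() { // stand-in for the actual lambda
            @Override
            public String map(String value) {
                return value;
            }
        })
        .setParallelism(3)
        .print();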