Hard to say without logs...

However, this was improved in the 1.0 release -- there, if this error
happens, Streams does not die but recovers from the error automatically.

Thus, I would recommend upgrading to 1.0 eventually.
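As a stop-gap on 0.11, the two knobs the exception message itself points at can be tuned: raise max.poll.interval.ms to give slow processing more headroom, or lower max.poll.records so each poll loop iteration finishes sooner. A minimal sketch of such a config (the application id and bootstrap servers are placeholders, and the concrete values are illustrative, not recommendations):

```java
import java.util.Properties;

public class StreamsTuning {

    // Build a Streams/consumer config with more headroom for slow batches.
    public static Properties tunedConfig() {
        Properties props = new Properties();
        // Placeholders -- replace with your own app id and brokers.
        props.put("application.id", "my-streams-app");
        props.put("bootstrap.servers", "localhost:9092");
        // Allow up to 10 minutes between poll() calls before the group
        // coordinator evicts this member (the broker default is 300000 ms).
        props.put("max.poll.interval.ms", "600000");
        // Hand back fewer records per poll() so each loop iteration
        // completes well within that interval.
        props.put("max.poll.records", "100");
        return props;
    }

    public static void main(String[] args) {
        Properties p = tunedConfig();
        System.out.println("max.poll.interval.ms = "
                + p.getProperty("max.poll.interval.ms"));
        System.out.println("max.poll.records = "
                + p.getProperty("max.poll.records"));
    }
}
```

Note this only widens the window; if a single batch can genuinely take longer than any reasonable interval, the 1.0 upgrade (which survives the CommitFailedException instead of dying) is the real fix.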


-Matthias

On 2/27/18 8:06 AM, Dmitriy Vsekhvalnov wrote:
> Good day everybody,
> 
> we faced unexpected kafka-streams application death after 3 months of work
> with exception below.
> 
> Our setup:
>  - 2 instances (both died) of kafka-stream app
>  - kafka-streams 0.11.0.0
>  - kafka broker 1.0.0
> 
> Sounds like a rebalance happened and something went terribly wrong this
> time.
> 
> Can anybody shed some light, or recommend more robust settings or any
> other way to make the app more resilient to this kind of error?
> 
> Thank you.
> 
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be
> completed since the group has already rebalanced and assigned the
> partitions to another member. This means that the time between subsequent
> calls to poll() was longer than the configured max.poll.interval.ms, which
> typically implies that the poll loop is spending too much time message
> processing. You can address this either by increasing the session timeout
> or by reducing the maximum size of batches returned in poll() with
> max.poll.records.
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:725)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:604)
> at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1173)
> at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:307)
> at org.apache.kafka.streams.processor.internals.StreamTask.access$000(StreamTask.java:49)
> at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:268)
> at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
> at org.apache.kafka.streams.processor.internals.StreamTask.commitImpl(StreamTask.java:259)
> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:362)
> at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:346)
> at org.apache.kafka.streams.processor.internals.StreamThread$3.apply(StreamThread.java:1118)
> at org.apache.kafka.streams.processor.internals.StreamThread.performOnStreamTasks(StreamThread.java:1448)
> at org.apache.kafka.streams.processor.internals.StreamThread.suspendTasksAndState(StreamThread.java:1110)
> at org.apache.kafka.streams.processor.internals.StreamThread.access$1800(StreamThread.java:73)
> at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:218)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:422)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:353)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:582)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
> 
