Thanks... can you point to those improvements/bugs that are fixed in 2.5?
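
For reference, the broker-side expiration discussed below is controlled by transactional.id.expiration.ms in the broker's server.properties. A minimal sketch of relaxing it for long-idle environments (the 14-day value here is only an illustrative assumption, not a recommendation from this thread):

```properties
# server.properties (broker): how long the transaction coordinator keeps a
# transactional id without receiving any updates/writes before expiring it.
# Default is 604800000 ms (7 days); raised here to 14 days for idle labs.
# Note: this config is an int, so values above ~24.8 days will not fit.
transactional.id.expiration.ms=1209600000
```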

On Mon, Apr 27, 2020 at 1:03 AM Matthias J. Sax <mj...@apache.org> wrote:

> Well, what you say is correct. However, it's a "bug" in the sense that
> for some cases the producer does not need to fail, but can re-initialize
> itself automatically. Of course, you can also see this as an improvement
> and not a bug :)
>
>
> -Matthias
>
> On 4/25/20 7:48 AM, Pushkar Deole wrote:
> > version used is 2.3
> > however, I am not sure if this is a bug. After doing some searching, I
> > came across the following explanation for it:
> >
> > essentially, the transaction coordinator on the broker cleans up the
> > producer id and transactional id after a certain time interval, controlled
> > by transactional.id.expiration.ms
> > <
> https://docs.confluent.io/current/installation/configuration/broker-configs.html#transactional-id-expiration-ms
> >,
> > if the coordinator doesn't receive any updates/writes from the producer
> for
> > that much time. The default for this parameter is 7 days, and our labs
> have
> > been idle for longer than that.
> >
> > On Fri, Apr 24, 2020 at 10:46 PM Matthias J. Sax <mj...@apache.org>
> wrote:
> >
> >> Which version are you using?
> >>
> >> A couple of broker- and client-side exactly-once related bugs got fixed
> >> in the latest release, 2.5.0.
> >>
> >>
> >> -Matthias
> >>
> >> On 4/23/20 11:59 PM, Pushkar Deole wrote:
> >>> Hello All,
> >>>
> >>> While using a Kafka Streams application, we intermittently get the
> >>> following exception and the stream is closed. We need to restart the
> >>> application to get it working and processing again. This exception is
> >>> observed in some of the labs which have been idle for some time, but it
> >>> is not observed always. Any inputs are appreciated.
> >>>
> >>>
> >>
> >>> {"@timestamp":"2020-04-15T13:53:52.698+00:00","@version":"1",
> >>> "message":"stream-thread [analytics-event-filter-StreamThread-1] Failed to commit stream task 2_14 due to the following error:",
> >>> "logger_name":"org.apache.kafka.streams.processor.internals.AssignedStreamsTasks",
> >>> "thread_name":"analytics-event-filter-StreamThread-1","level":"ERROR","level_value":40000,
> >>> "stack_trace":"org.apache.kafka.common.KafkaException: Unexpected error in AddOffsetsToTxnResponse: The producer attempted to use a producer id which is not currently assigned to its transactional id.\n\tat
> >>> org.apache.kafka.clients.producer.internals.TransactionManager$AddOffsetsToTxnHandler.handleResponse(TransactionManager.java:1406)\n\tat
> >>> org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1069)\n\tat
> >>> org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)\n\tat
> >>> org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:561)\n\tat
> >>> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)\n\tat
> >>> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:425)\n\tat
> >>> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:311)\n\tat
> >>> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244)\n\tat
> >>> java.base/java.lang.Thread.run(Unknown Source)\n"}
> >>>
> >>
> >>
> >
>
>