Re: Leveraging DLQ for errors coming from a sink connector plugin

2019-11-15 Thread Javier Holguera
Any chance that somebody can shed some light on this? Thanks! On Tue, 12 Nov 2019 at 17:40, Javier Holguera wrote: > Hi, > > Looking at the Kafka Connect code, it seems that the built-in support for DLQ queues only works for errors related to transformations and converters…

Leveraging DLQ for errors coming from a sink connector plugin

2019-11-12 Thread Javier Holguera
Hi, Looking at the Kafka Connect code, it seems that the built-in support for DLQ queues only works for errors related to transformations and converters (headers, key, and value). I wonder if it has been considered (and maybe discarded) to use the same mechanism for the call to the connector plugin…
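For context, the built-in DLQ support the thread refers to is configured with the `errors.*` sink-connector properties introduced by KIP-298. A minimal sketch (topic name is a placeholder):

```properties
# Sink connector error handling (Kafka Connect >= 2.0, KIP-298)
errors.tolerance=all
# Placeholder DLQ topic name
errors.deadletterqueue.topic.name=my-connector-dlq
errors.deadletterqueue.topic.replication.factor=3
# Attach error context (original topic, exception, etc.) as record headers
errors.deadletterqueue.context.headers.enable=true
```

As the thread notes, at the time this routed only converter and transformation failures, not failures thrown from the connector's own `put()` call; later releases (KIP-610, Kafka 2.6) added an errant-record reporter that sink tasks can use for that case.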

Kafka Streams - StateRestoreListener called when new partitions assigned

2019-11-11 Thread Javier Holguera
Hi, I understand that the state restore listener that can be set using KafkaStreams.setGlobalStateRestoreListener will be invoked when the streams app starts if it doesn't find the state locally (e.g., running on an ephemeral docker container). However, I wonder if the process happens as well if the…
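For reference, a minimal sketch of the listener being discussed; it must be registered before `streams.start()`, and it fires whenever changelog restoration runs, which includes restoration triggered by a rebalance assigning new tasks (the names below are illustrative):

```java
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Sketch: logs restore progress for every state store in the app.
public class LoggingRestoreListener implements StateRestoreListener {
    @Override
    public void onRestoreStart(TopicPartition tp, String storeName,
                               long startingOffset, long endingOffset) {
        System.out.printf("Restore start: %s %s, offsets %d..%d%n",
                storeName, tp, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition tp, String storeName,
                                long batchEndOffset, long numRestored) {
        System.out.printf("Restored batch of %d records for %s%n",
                numRestored, storeName);
    }

    @Override
    public void onRestoreEnd(TopicPartition tp, String storeName,
                             long totalRestored) {
        System.out.printf("Restore end: %s, %d records total%n",
                storeName, totalRestored);
    }
}

// Registration, before streams.start():
// streams.setGlobalStateRestoreListener(new LoggingRestoreListener());
```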

Re: Resetting idleStartTime when all partitions are empty

2019-10-15 Thread Javier Holguera
…ed processing mode" it will stay there until all partitions have data at the same time. That is by design. Why is this behavior problematic for your use case? -Matthias On 10/14/19 7:44 AM, Javier Holguera wrote: > Hi…

Resetting idleStartTime when all partitions are empty

2019-10-14 Thread Javier Holguera
Hi, We have a KStream and a KTable that we are left-joining. The KTable has a "backlog" of records that we want to consume before any of the entries in the KStream is processed. To guarantee that, we have played with the timestamp extraction, setting the time for those records in the "distant" past…
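The backdating trick described above can be sketched with a custom `TimestampExtractor`; the topic name and sentinel timestamp here are placeholders, not the poster's actual code:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Sketch: records from the table's source topic get an artificially old
// timestamp, so stream-time ordering drains the KTable backlog before
// any KStream record is joined.
public class BackdatingExtractor implements TimestampExtractor {
    private static final String TABLE_TOPIC = "table-backlog"; // placeholder

    @Override
    public long extract(ConsumerRecord<Object, Object> record,
                        long previousTimestamp) {
        if (TABLE_TOPIC.equals(record.topic())) {
            return 1L; // "distant past": older than any stream record
        }
        return record.timestamp(); // everything else keeps its own time
    }
}
```

It is set via `StreamsConfig` (`default.timestamp.extractor`) or per-source with `Consumed.with(...)`.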

KeyValueStore implementation that allows retention policy

2018-04-25 Thread Javier Holguera
Hi, I had a look around the "state" folder in Kafka Streams and I realised that only WindowStore and SessionStore allow configuring a retention policy. Looking a bit further, it seems that RocksDbSegmentedBytesStore is the main way to implement a store that can clean itself up based on retention…
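This matches how retention is exposed in the public API: windowed stores take a retention setting, while plain key-value stores have no TTL. A sketch using the modern DSL (topic and store names are placeholders):

```java
import java.time.Duration;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.WindowStore;

// Sketch: retention on a windowed store; old segments are dropped by the
// segmented store underneath. KeyValueStore offers no equivalent setting.
StreamsBuilder builder = new StreamsBuilder();
builder.stream("events")                                   // placeholder topic
       .groupByKey()
       .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofHours(1)))
       .count(Materialized.<Object, Long, WindowStore<Bytes, byte[]>>
                   as("hourly-counts")                     // placeholder store
                   .withRetention(Duration.ofDays(7)));    // segments expire
```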

RE: Kafka Streams - max.poll.interval.ms defaults to Integer.MAX_VALUE

2018-01-01 Thread Javier Holguera
…it is not recommended to call an external service within Kafka Streams if possible. It would be better to load the corresponding data into a topic and read it as a KTable to do a stream-table join. Not sure if this is feasible for your use-case though. -Matthias On 12/28/17 7:16 AM, Javier Holguera wrote: > Hi…

Re: Kafka Streams - max.poll.interval.ms defaults to Integer.MAX_VALUE

2017-12-28 Thread Javier Holguera
On 12/27/17 6:55 AM, Javier Holguera wrote: > Hi Matthias, > > Thanks for your answer. It makes a lot of sense. > > Just a follow-up question. KIP-62 says: "we give the client as much as max.poll.interval.ms to handle a batch of records, this is also the maximum time be…

RE: Kafka Streams - max.poll.interval.ms defaults to Integer.MAX_VALUE

2017-12-27 Thread Javier Holguera
-Matthias On 12/20/17 7:14 AM, Javier Holguera wrote: > Hi, > > According to the documentation, "max.poll.interval.ms" defaults to Integer.MAX_VALUE for Kafka Streams since 0.10.2.1. > > Considering that the "max.poll.interval.ms" is: > > 1…

Kafka Streams - max.poll.interval.ms defaults to Integer.MAX_VALUE

2017-12-20 Thread Javier Holguera
Hi, According to the documentation, "max.poll.interval.ms" defaults to Integer.MAX_VALUE for Kafka Streams since 0.10.2.1. Considering that the "max.poll.interval.ms" is: 1. A "processing timeout" to control an upper limit for processing a batch of records AND 2. The rebalance timeout th…
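For reference, the Streams-level default being discussed can always be overridden through the consumer configs passed to the application; a sketch (values are illustrative):

```properties
# Kafka Streams app config (illustrative values)
application.id=my-streams-app
bootstrap.servers=localhost:9092
# Override the Streams override: 0.10.2.1 - 1.x shipped Integer.MAX_VALUE
# here; later releases reverted to the plain consumer default (5 minutes).
max.poll.interval.ms=300000
```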

Re: Producer: metadata refresh when leader down

2016-09-02 Thread Javier Holguera
`metadata.max.age.ms` is the key here because the shorter we set it, the quicker producers recover from a leader crash. Regards, Javier. -- Javier Holguera Sent with Airmail On 2 September 2016 at 15:50:48, Yuto KAWAMURA (kawamuray.dad...@gmail.com) wrote: Hi Javier, Not sure but just wondering…
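A sketch of the producer tuning being discussed; the values are illustrative, not a recommendation:

```properties
# Producer config (illustrative values)
bootstrap.servers=localhost:9092
# Force a metadata refresh at least every 30 s, so a dead partition leader
# is noticed quickly even between send failures (default: 300000 = 5 min)
metadata.max.age.ms=30000
# Retries let sends survive the window before the refresh lands
retries=5
```

Note that a failed send also triggers an immediate metadata refresh; `metadata.max.age.ms` mainly bounds how stale the view can get in the absence of errors.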

Producer: metadata refresh when leader down

2016-09-01 Thread Javier Holguera
…time. Is there something I’m missing or not understanding correctly? Thanks for your help! Regards, Javier. -- Javier Holguera Sent with Airmail

Re: Lost offsets after migration to Kafka brokers v0.10.0

2016-08-16 Thread Javier Holguera
…consuming normally from that point on. -- Javier Holguera Sent with Airmail On 16 August 2016 at 12:34:19, Sam Pegler (sam.peg...@infectiousmedia.com) wrote: clj-kafka uses the old consumer APIs and offset storage in ZK. If I were you I'd migrate to https://github.com/weftio/gregor w…

Lost offsets after migration to Kafka brokers v0.10.0

2016-08-16 Thread Javier Holguera
…/pingles/clj-kafka/blob/master/src/clj_kafka/offset.clj#L70) using OffsetCommitRequest. Any help would be welcomed. Thanks! -- Javier Holguera Sent with Airmail