Hi Luke,
The complete exception is:

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic realtimeImport_1 not present in metadata after 250 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
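For context: the time the producer blocks waiting for topic metadata is bounded by `max.block.ms`, and 250 ms is a very tight bound. If that is where the 250 ms comes from, a minimal setup with a larger bound might look like the sketch below (broker address is a placeholder; the topic name is taken from the stack trace):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // send() blocks up to max.block.ms waiting for topic metadata; the
    // "not present in metadata after N ms" TimeoutException fires when the
    // topic is still unknown (or the broker unreachable) within that bound.
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60_000); // the default; 250 ms is aggressive

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        producer.send(new ProducerRecord<>("realtimeImport_1", "value"));
    }

Of course, if the topic genuinely does not exist on the broker, raising the timeout only delays the same exception.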
`enable.auto.commit` is a Consumer config and does not apply to Kafka Streams.
In Kafka Streams, commits are effectively always enabled, and you can control how frequently they happen via `commit.interval.ms`. Kafka Streams also commits offsets on `close()`.
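To make that concrete, here is a minimal sketch (application id, broker address, and topics are placeholders, and the topology is a trivial pass-through):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "commit-interval-demo"); // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    // Streams commits automatically every commit.interval.ms
    // (default 30s, or 100ms with exactly-once processing);
    // enable.auto.commit is managed internally and cannot be set.
    props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10_000);

    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("input-topic").to("output-topic"); // placeholder topology

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
    // close() also commits the current offsets before shutting down
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));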
-Matthias
Yes, the broker de-dupes using the sequence number.
But, for example, if a sequence number is skipped, you could get this exception: the current batch of messages cannot be appended to the log because one batch is missing, and the producer would need to re-send the previous/missing batch with the lower sequence number first.
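As a sketch of the producer setup under discussion (broker address is a placeholder; the config keys are the standard producer configs), idempotence is what makes the broker track per-partition sequence numbers and de-dupe:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // Each batch carries a (producer id, sequence number) pair; the broker
    // rejects duplicates and fails on gaps (OutOfOrderSequenceException),
    // so retried batches cannot be appended out of order.
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.ACKS_CONFIG, "all"); // required for idempotence
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5); // must be <= 5

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);

Without idempotence, a retried batch can land behind a later batch that was already in flight, which is the reordering risk discussed below.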
Thanks for the answer Matthias.
I still have doubts about the meaning of "risks reordering of sent record".
If I understood correctly, the example you gave is something like this:
1. Producer sends batch with sequence number X
2. That request gets lost in the network
3. Producer sends batch with sequence number X+1
Hi, is there any information about compatibility between different versions of the Java clients and the broker, such as a 3.0 Java client against a 2.6 Kafka broker?
Thanks!
Hello Meir,
From the code snippet I cannot find where you added a KTable; it seems you created a KStream from the source topic and aggregated the stream into a KTable. Could you show me the code difference between "adding a KTable" vs. "adding a KStream"?
Anyways, the log line should only hap