Thanks! I wonder if this is a bit far-fetched as no one seems to do this at
the moment.
On Fri, May 10, 2019 at 12:50 AM Guozhang Wang wrote:
> Hello Emmanuel,
>
> Yes, I think it is doable technically. Note that it means the offsets of
> cluster A would be stored on cluster B, and hence upon restarting one needs
> to talk to cluster B in order to get the committed position in cluster A.
Hello Emmanuel,
Yes, I think it is doable technically. Note that it means the offsets of
cluster A would be stored on cluster B, and hence upon restarting one needs
to talk to cluster B in order to get the committed position in cluster A.
Guozhang
On Thu, May 9, 2019 at 11:58 AM Emmanuel wrote:
Hello,
I would like to know if there is a Java client that would allow me to
consume from topics on a cluster A and produce to topics on a cluster B
with exactly-once semantics. My understanding of Kafka transactions is
that on paper it could work, but the Kafka Java client assumes both are the
same cluster.
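
For illustration, here is a rough sketch of the approach Guozhang describes
above, using only the plain Java clients: produce to cluster B with a
transactional producer, commit the cluster-A offsets to cluster B inside the
same transaction, and read them back from B on restart. The bootstrap
addresses, topic names and group id are made up, and failure handling and
rebalancing are glossed over:

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class CrossClusterCopySketch {
  public static void main(String[] args) {
    String groupId = "cross-cluster-copier";                  // made-up group id
    TopicPartition source = new TopicPartition("topic-a", 0); // made-up source topic

    Properties cons = new Properties();
    cons.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-a:9092"); // cluster A
    cons.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    cons.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // offsets live on B, not A
    cons.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    cons.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    KafkaConsumer<String, String> consumerA = new KafkaConsumer<>(cons);

    Properties prod = new Properties();
    prod.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-b:9092"); // cluster B
    prod.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, groupId + "-0");
    prod.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    prod.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    KafkaProducer<String, String> producerB = new KafkaProducer<>(prod);
    producerB.initTransactions();

    // On (re)start, fetch the committed position from cluster B and seek cluster A's consumer.
    Properties offs = new Properties();
    offs.putAll(cons);
    offs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-b:9092");
    try (KafkaConsumer<String, String> offsetsOnB = new KafkaConsumer<>(offs)) {
      OffsetAndMetadata committed = offsetsOnB.committed(source);
      consumerA.assign(Collections.singleton(source));
      consumerA.seek(source, committed == null ? 0L : committed.offset());
    }

    while (true) {
      ConsumerRecords<String, String> records = consumerA.poll(Duration.ofSeconds(1));
      if (records.isEmpty()) continue;
      producerB.beginTransaction();
      try {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> r : records) {
          producerB.send(new ProducerRecord<>("topic-b", r.key(), r.value())); // made-up target topic
          offsets.put(new TopicPartition(r.topic(), r.partition()),
                      new OffsetAndMetadata(r.offset() + 1));
        }
        // The cluster-A offsets are committed to cluster B's offsets topic within the transaction.
        producerB.sendOffsetsToTransaction(offsets, groupId);
        producerB.commitTransaction();
      } catch (Exception e) {
        // Real code would treat ProducerFencedException separately (close instead of abort).
        producerB.abortTransaction();
        throw new RuntimeException(e);
      }
    }
  }
}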
Producer doesn't reconnect if broker goes down, reappears with new IP
Hi Guozhang,
I am using Kafka 2.2.0. The issue is resolved now. We had set
auto.register.schemas=false
because we wanted to register the schemas manually. It got fixed after setting
the flag back to true, since the application needs to register schemas for its
internal topics.
Thanks,
Ishita Rakshit
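
For anyone hitting the same thing, the relevant serde configuration looks
roughly like the snippet below. The application id, broker address and
registry URL are made up, and auto.register.schemas is a Confluent serializer
setting rather than a core Kafka one:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class SerdeConfigSketch {
  static Properties streamsProps() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "avro-streams-app");  // made-up id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");    // made-up address
    props.put("schema.registry.url", "http://schema-registry:8081");     // made-up URL
    // Keep auto.register.schemas at its default of true so the Avro serdes can
    // register schemas for the Streams-internal repartition/changelog topics.
    props.put("auto.register.schemas", "true");
    return props;
  }
}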
I am not familiar with Spring Boot. But in general, you could query the
store to see if the counts are as expected:
https://kafka.apache.org/22/documentation/streams/developer-guide/interactive-queries.html
As an alternative, you could either inspect the store's changelog topic,
or get a stream from the result KTable (via KTable#toStream()) and write it
to an output topic for inspection.
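
As a rough example of the first option, a count store could be queried like
this once the application is in RUNNING state. The store name "counts-store"
and the key/value types are assumptions and have to match how the store was
materialized in your topology:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class CountStoreQuerySketch {
  // Dumps all key/count pairs from the local state store named "counts-store".
  static void dumpCounts(KafkaStreams streams) {
    ReadOnlyKeyValueStore<String, Long> store =
        streams.store("counts-store", QueryableStoreTypes.keyValueStore());
    try (KeyValueIterator<String, Long> it = store.all()) {
      while (it.hasNext()) {
        KeyValue<String, Long> entry = it.next();
        System.out.println(entry.key + " -> " + entry.value);
      }
    }
  }
}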
Thanks for the prompt response.
I am not sure I understand correctly, but I am still confused why switching
inter.broker.protocol.version in the last step would make the process
irreversible.
If we agree that log conversion to a new format is applied when a new value of
log.message.format.version or
Artur,
The upgrade process is such that:
1) You ensure that there is a hard check on the protocol version if one does
not exist already. As you already mentioned above in #3, it is to ensure that
the minimum version for message formats is being adhered to before the upgrade.
2) The broker protocol version is to ensure that all brokers keep speaking the
older inter-broker protocol until every broker is running the new binaries;
once it is bumped, brokers start using the new protocol, which is why that
last step is the point of no return.
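
In server.properties terms, the rolling upgrade documented in the upgrade
notes for 2.0.0 -> 2.1.0 boils down to roughly this:

# Step 1: before swapping in the 2.1.0 binaries on each broker, pin the old versions
inter.broker.protocol.version=2.0
log.message.format.version=2.0

# Step 2: once every broker is running 2.1.0 and the cluster is healthy,
# bump the protocol and do another rolling restart
inter.broker.protocol.version=2.1

# Step 3: only when you are sure you will not need to roll back,
# bump the message format as well
log.message.format.version=2.1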
Hi,
I read the documentation about upgrading Kafka
(http://kafka.apache.org/21/documentation.html#upgrade_2_1_0) but I have
questions that I believe the documentation doesn't cover. I am planning to
upgrade Kafka from 2.0.0 to 2.1.0 and would like to make sure what to do when
something goes wrong.