First question: We know that Kafka Streams commits offsets on an interval.
But which offsets are committed? Are the committed offsets for messages that
have just arrived at the source node, or for messages that have passed
through the entire pipeline? If the latter, how do we avoid data loss in
this case? Is there an equivalent of Connect's RetriableException in Kafka
Streams?
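
For reference, a minimal sketch of the setup I am describing (the
application id, bootstrap servers, and topic names are placeholders, and
the topology is just a pass-through):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CommitIntervalSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Offsets are committed on this interval; my question is whether the
        // committed offsets cover records that have merely been read at the
        // source node or records that have traversed the whole topology.
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30_000);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}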
----
Second question: Does giving the application a host:port (the
application.server config) allow Kafka Streams instances to share state
store data, so that the entire changelog topic does not have to be read
every time a crash happens?
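
For clarity, this is the setting I mean (a sketch; the host:port value is
an example):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ApplicationServerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Each instance advertises this endpoint so that other instances can
        // discover which instance currently hosts which state store.
        props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, "10.0.0.12:7070"); // example
    }
}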
----
Third question: When I want to upgrade the application with code changes
and bug fixes, how should the upgrade proceed? Right now, I just kill all
the containers and bring new ones up, but this takes a long time because
the topics are replayed. How can I do it faster? Should I upgrade the
instances one at a time instead?
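
For context, a sketch of the configuration knobs I understand to be
relevant here (the state directory path and replica count are examples):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class UpgradeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Example only: keeping local state on a persistent volume lets a
        // restarted container reuse its RocksDB stores instead of replaying
        // the changelog topics from scratch.
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams"); // example path
        // Example only: a standby replica on another instance keeps a warm
        // copy of the state so tasks can fail over quickly while containers
        // are upgraded one at a time.
        props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
    }
}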
-- 

Regards,
Anish Samir Mashankar
R&D Engineer
System Insights
+91-9789870733
