Frank, thanks for sharing your findings.
I think this is a general issue to consider in Streams, and the community
has been thinking about it: we write to intermediate topics with the stream
time that is inherited from the source topic's timestamps; however, that
same timestamp is then also used by the broker for log rolling.
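In case it helps while that discussion is ongoing, one possible stop-gap (just a sketch on my side, not an official recommendation, and the topic name below is only a placeholder for whatever repartition topic your application actually creates) is to switch the affected internal topic to LogAppendTime, so that segment rolling follows broker append time rather than the old embedded timestamps. With the Java AdminClient it could look roughly like this:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RepartitionTopicTimestampWorkaround {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder: use the actual internal/repartition topic your app creates.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC,
                    "my-app-some-repartition-topic");

            // Roll segments on broker append time instead of the (possibly very
            // old) record timestamps inherited from the source topic.
            Config newConfig = new Config(Collections.singletonList(
                    new ConfigEntry("message.timestamp.type", "LogAppendTime")));

            admin.alterConfigs(Collections.singletonMap(topic, newConfig)).all().get();
        }
    }
}

The obvious trade-off is that consumers of that topic then see append time instead of the original event time, so treat it as a workaround rather than a fix.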
We are indeed running this setup in production, and have been for almost two
years now, across a gradually increasing number of deployments.
Let me clarify though:
Our clusters don't exceed 5 nodes. We're not exercising Kafka anywhere
near its limits, whether in bandwidth or disk I/O.
When we
From: IT Consultant <0binarybudd...@gmail.com>
Sent: Friday, June 2, 2017 11:02 AM
To: users@kafka.apache.org
Subject: Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe
Hi All,
I have been seeing the error below for the last three days.
Can you please
Hi,
I would not recommend running this kind of setup in production. Busy Kafka
brokers use a lot of disk and network bandwidth, which ZooKeeper does
not cope well with. This means that a burst of traffic to one node carries
the risk of disrupting the ZK ensemble.
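If you do end up colocating them anyway, it is worth at least keeping an eye on the ensemble's latency while the brokers are busy. Here is a rough Java sketch (the host and port are assumptions for a local member, and note that newer ZooKeeper releases require the four-letter commands to be whitelisted) that queries the "mntr" command over a plain socket:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZkLatencyCheck {
    public static void main(String[] args) throws Exception {
        // Assumed address of one ensemble member; adjust for your environment.
        try (Socket socket = new Socket("localhost", 2181)) {
            OutputStream out = socket.getOutputStream();
            out.write("mntr".getBytes(StandardCharsets.US_ASCII));
            out.flush();

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String line;
            while ((line = in.readLine()) != null) {
                // Watch these while the co-located broker is under load.
                if (line.startsWith("zk_avg_latency") || line.startsWith("zk_max_latency")) {
                    System.out.println(line);
                }
            }
        }
    }
}

Latency spikes in that output while a co-located broker is handling a traffic burst are exactly the kind of disruption I mean.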
Secondly, this will cause probl