Hi,

I have come back to looking at Kafka after a while.

Is it really the case that messages can be lost if the producer is
disconnected from the broker, as described in KAFKA-789
<https://issues.apache.org/jira/browse/KAFKA-789>, and touched on with some
elaboration in KAFKA-156 <https://issues.apache.org/jira/browse/KAFKA-156>?
Since it seems mildly related: could anyone also remind me of the
*practical* considerations for co-locating the producer and broker on the
same machine? I think my cloud architecture would be "cleaner", and leaner
in lifecycle terms, if the brokers were separate machines in the same cloud
zone.

I am currently looking at Kafka as a means of offloading the writing of a
lot of data from my application. My application produces a lot of data, and
I would like this to become "send and forget", in the sense that the main
application does not concern itself with persisting the data to the various
data stores, but only emits it, letting other small services route the data
and handle its persistence. So it is a bit like logging, in a way, except
that it is more important that no data is lost.
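For what it's worth, my current understanding (please correct me if this is
wrong) is that the main knobs for reducing producer-side loss are the
acknowledgement and retry settings. A sketch of what I have in mind,
assuming the Java producer (0.9+ config names):

```properties
# Producer durability settings (a sketch, not a full config)
bootstrap.servers=broker1:9092,broker2:9092
# Wait for all in-sync replicas to acknowledge before considering a send successful
acks=all
# Retry transient failures (e.g. a broker disconnect) instead of dropping the record
retries=2147483647
# Fail the send (and surface an error to the callback) if not acknowledged in time,
# rather than losing it silently
request.timeout.ms=30000
```

As I understand it, even with these settings a truly "fire and forget" send
(ignoring the send callback / future) can still lose data once retries are
exhausted, so the application would need to at least check for send errors.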

Thanks in advance,
Matan
