It seems there are two underlying things here: storing messages to
stable storage, and making messages available to consumers (i.e.,
storing messages on the broker). The first can be achieved simply and
reliably by spooling to local disk; the other requires the network and
is inherently less reliable.
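The "simple and reliable" half could be sketched roughly like this. This is a minimal illustrative sketch, not Kafka's actual producer code; the `DiskSpool` class and its fsync-per-record policy are assumptions made up for the example:

```python
import json
import os
import tempfile


class DiskSpool:
    """Append-only spool file on local disk (the simple, reliable half)."""

    def __init__(self, path):
        self.path = path

    def append(self, message):
        # One JSON record per line; fsync so the record survives a crash.
        with open(self.path, "a") as f:
            f.write(json.dumps(message) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def drain(self):
        # Replay spooled records, e.g. once the broker is reachable again.
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                yield json.loads(line)


spool = DiskSpool(os.path.join(tempfile.mkdtemp(), "spool.log"))
spool.append({"id": 1, "body": "hello"})
print(list(spool.drain()))  # → [{'id': 1, 'body': 'hello'}]
```

The unreliable half (shipping those records over the network to the broker) would then retry against the spool at its own pace.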
Philip,
We would not use spooling to local disk on the producer to deal with
problems with the connection to the brokers, but rather to absorb temporary
spikes in traffic that would overwhelm the brokers. This is assuming that
1) those spikes are relatively short, but when they come they require
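A bounded in-memory buffer for absorbing short spikes might look like the sketch below. The `SpikeBuffer` class and its drop-when-full policy are assumptions for illustration, not anything the Kafka producer actually does:

```python
from queue import Full, Queue


class SpikeBuffer:
    """Bounded in-memory buffer: absorbs short bursts, and rejects new
    messages when full rather than growing without limit."""

    def __init__(self, max_messages):
        self.q = Queue(maxsize=max_messages)
        self.dropped = 0

    def offer(self, msg):
        try:
            self.q.put_nowait(msg)
            return True
        except Full:
            self.dropped += 1  # alternatives: block the caller, or spill to disk
            return False

    def drain_to(self, send):
        # Called by a background sender at the rate the broker can accept.
        while not self.q.empty():
            send(self.q.get_nowait())


buf = SpikeBuffer(max_messages=3)
results = [buf.offer(i) for i in range(5)]  # a burst of 5 into a buffer of 3
print(results, buf.dropped)  # → [True, True, True, False, False] 2
```

The bound is the important part: it makes the "short spike" assumption explicit, and forces a decision about what happens when the assumption is violated.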
>>But it shouldn't almost never happen.
Obviously I mean it should almost never happen. Not shouldn't.
Philip
On Fri, Apr 12, 2013 at 8:27 AM, S Ahmed wrote:
Interesting topic.
How would buffering in RAM help in reality though (just trying to work
through the scenario in my head):
producer tries to connect to a broker, it fails, so it appends the message
to an in-memory store. If the broker is down for say 20 minutes and then
comes back online, won't
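A back-of-envelope calculation shows why a 20-minute outage is hard to ride out in RAM. The producer rate and message size below are made-up assumptions; only the 20-minute figure comes from the scenario:

```python
# RAM needed to buffer through a 20-minute broker outage.
msgs_per_sec = 5_000    # hypothetical producer rate
avg_msg_bytes = 1_024   # hypothetical average message size
outage_sec = 20 * 60    # the 20-minute outage from the scenario

buffered_bytes = msgs_per_sec * avg_msg_bytes * outage_sec
print(f"{buffered_bytes / 2**30:.1f} GiB")  # → 5.7 GiB
```

At even moderate rates the buffer reaches gigabytes, which is why an in-memory store only helps with short spikes, not extended outages.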
This is just my opinion of course (who else's could it be? :-)) but I think
from an engineering point of view, one must spend one's time making the
Producer-Kafka connection solid, if it is mission-critical.
Kafka is all about getting messages to disk, and assuming your disks are
solid (and 0.8