Yes, this will be a problem if you are providing batching for your REST
service on top of Kafka and have to acknowledge to your client only when
all the callbacks for individual sends are called.
Here is one implementation I have done:
https://github.com/ksenji/KafkaBatchProducer/blob/master/src/m
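For reference, the general shape of that pattern - count down a latch from each per-record send callback and only acknowledge the REST client once all of them have fired - might look like the sketch below. The `sendBatch` method and the thread standing in for the producer's asynchronous completion are illustrative assumptions, not the linked code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class BatchAck {
    // Hypothetical sketch: one latch per REST request; each Kafka-style
    // send callback counts it down exactly once.
    public static boolean sendBatch(int records) throws InterruptedException {
        CountDownLatch pending = new CountDownLatch(records);
        AtomicBoolean failed = new AtomicBoolean(false);
        for (int i = 0; i < records; i++) {
            // In real code this would be the Callback passed to producer.send();
            // here a thread stands in for the asynchronous completion.
            new Thread(() -> {
                Exception sendError = null; // would be the callback's exception argument
                if (sendError != null) failed.set(true);
                pending.countDown();
            }).start();
        }
        pending.await();       // hold the REST response until every callback has fired
        return !failed.get();  // ack only if no individual send failed
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sendBatch(5) ? "ACK" : "NACK");
    }
}
```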
We've seen a lot of requests for this, and I don't think there is any
general objection.
If you want to discuss concrete API suggestions, perhaps the dev mailing
list is the right place for the discussion.
Gwen
On Tue, Sep 1, 2015 at 11:25 AM, Neelesh wrote:
Here's what I think:
# The new producer returns Java futures, and we all know the problems with Java futures (they cannot compose, they block, and they do not play well with other JVM languages/libraries - RxJava/RxScala, etc.)
# Or we can pass in a callback, which works okay when we are dealing with single messages
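One common way around the java.util.concurrent.Future limitations is to adapt the callback into a CompletableFuture, which does compose. A self-contained sketch - the `Callback` interface and `asyncSend` below are stand-ins for the real producer API, not part of it:

```java
import java.util.concurrent.CompletableFuture;

public class FutureAdapter {
    // Stand-in for the producer's callback interface.
    interface Callback { void onCompletion(String metadata, Exception e); }

    // Stand-in for an async producer.send(record, callback).
    static void asyncSend(String record, Callback cb) {
        new Thread(() -> cb.onCompletion("offset-42", null)).start();
    }

    // Bridge the callback into a composable CompletableFuture.
    static CompletableFuture<String> send(String record) {
        CompletableFuture<String> f = new CompletableFuture<>();
        asyncSend(record, (meta, e) -> {
            if (e != null) f.completeExceptionally(e);
            else f.complete(meta);
        });
        return f;
    }

    public static void main(String[] args) {
        // Unlike a plain Future, this composes via thenApply/allOf/etc.
        String result = send("msg").thenApply(m -> "acked:" + m).join();
        System.out.println(result);
    }
}
```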
If linger.ms is 0, batching does not add to the latency. It will actually
improve throughput without affecting latency. Enabling batching does not
mean the producer will wait for the batch to be full. Whatever gets filled
during the previous batch send will be sent in the current batch, even if
its count is less than the batch size.
Thanks for the answers. Indeed, the callback model is the same regardless
of batching. But for a synchronous web service, batching creates a latency
issue. linger.ms is set to zero by default. Also, Java futures are hard to
work with compared to Scala futures. The current API also returns one
future per send.
Adding to what Gwen already mentioned -
The programming model for the Producer is send() with an optional callback,
and we get back a Future. This model does not change whether behind the
scenes batching is done or not. So your fault-tolerance logic really should
not depend on whether batching is done or not.
Hi Neelesh :)
The new producer has configuration for controlling the batch sizes.
By default, it will batch as much as possible without delay (controlled by
linger.ms) and without using too much memory (controlled by batch.size).
As mentioned in the docs, you can set batch.size to 0 to disable batching entirely.
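For concreteness, the relevant producer settings look like the sketch below. The config keys and defaults are the standard ones; the broker address is an assumption for illustration:

```java
import java.util.Properties;

public class ProducerBatchConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("linger.ms", "0");      // default: send as soon as possible, no artificial delay
        props.put("batch.size", "16384"); // default batch size in bytes; 0 disables batching
        // These props would then be passed to new KafkaProducer<>(props).
        System.out.println(props.getProperty("linger.ms"));
    }
}
```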