Got it, thanks. That's exactly what I had just noticed; I sent a reply as you
were replying. Thanks, Alex.
On Mon, Aug 8, 2016 at 11:35 AM, Alex Loddengaard wrote:
Hi Alper,
Thanks for sharing. I was particularly interested in seeing what *acks* was
set to. Since you haven't set it, its value is the default, *1*.
To handle errors, you need to use the send() method that takes a callback,
and build an appropriate callback to handle errors. Take a look here fo
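A minimal sketch of that pattern: the stand-in Callback and RecordMetadata types below have the same shape as the kafka-clients classes of the same name, so the snippet is self-contained (the handle() helper is mine, not part of the Kafka API). In real code you would pass the Callback as the second argument to producer.send(record, callback).

```java
// Self-contained sketch of the send-with-callback error-handling pattern.
// Callback and RecordMetadata below are minimal stand-ins for the
// org.apache.kafka.clients.producer types of the same name.
interface Callback {
    void onCompletion(RecordMetadata metadata, Exception exception);
}

class RecordMetadata {
    final int partition;
    final long offset;
    RecordMetadata(int partition, long offset) {
        this.partition = partition;
        this.offset = offset;
    }
}

public class SendCallbackSketch {
    // Exactly one of metadata/exception is non-null when the send
    // completes: metadata on success, exception on failure.
    static String handle(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            return "send failed: " + exception.getMessage();
        }
        return "acked at partition " + metadata.partition + ", offset " + metadata.offset;
    }

    public static void main(String[] args) {
        Callback cb = (metadata, exception) -> System.out.println(handle(metadata, exception));
        cb.onCompletion(new RecordMetadata(0, 42L), null);     // success path
        cb.onCompletion(null, new Exception("batch expired")); // error path
    }
}
```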
(continued from previous email)
Hit send too soon... but I also notice that if I use the producer send
method that takes a callback, then in the onCompletion method of the callback
I do see that an exception is passed in:
org.apache.kafka.common.errors.TimeoutException: Batch containing 2
reco
I also notice that if I use the send method that takes a callback:

public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)

that in the onCompletion
On Mon, Aug 8, 2016 at 11:24 AM, Alper Akture
wrote:
Thanks Alex... using producer props:
{timeout.ms=500, max.block.ms=500, request.timeout.ms=500,
bootstrap.servers=localhost:9092,
serializer.class=kafka.serializer.StringEncoder,
value.serializer=org.apache.kafka.common.serialization.StringSerializer,
metadata.fetch.timeout.ms=500,
key.serializer=
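For reference, a sketch of rebuilding that same config as a Properties object, but with acks set explicitly instead of left at the default of 1 (using acks=all here is my suggestion, not something stated in the thread; the truncated key.serializer value is assumed to be the StringSerializer, matching the value serializer):

```java
import java.util.Properties;

public class ProducerProps {
    // Rebuilds the producer config from the thread as a Properties object.
    // acks is set explicitly; the thread notes it was previously unset
    // and therefore defaulting to 1 (leader-only acknowledgement).
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for all in-sync replicas, not just the leader
        props.put("timeout.ms", "500");
        props.put("request.timeout.ms", "500");
        props.put("max.block.ms", "500");
        props.put("metadata.fetch.timeout.ms", "500");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("acks"));
    }
}
```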
Hi Alper, can you share your producer config -- the Properties object? We
need to learn more to help you understand the behavior you're observing.
Thanks,
Alex
On Fri, Aug 5, 2016 at 7:45 PM, Alper Akture
wrote:
> I'm using 0.10.0.0 and testing some failover scenarios. For dev, I have a
> single