For #1, fetcher.getTopicMetadata() is called.
If you have time, you can read getTopicMetadata(). It is a blocking call
with a given timeout.
For #2, I don't see any mechanism for metadata sharing.
FYI
On Fri, Dec 29, 2017 at 8:25 AM, Viliam Ďurina
wrote:
> Hi,
>
> I use KafkaConsumer.partitionsF
If you are setting acks=0 then you don't care about losing data even when the
cluster is up. The only way to get at-least-once is acks=all.
> On Jun 7, 2017, at 1:12 PM, Ankit Jain wrote:
>
> Thanks hans.
>
> It would work but the producer will start losing data even when the cluster is
> availabl
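A minimal sketch of the producer settings implied by the advice above, assuming a broker at localhost:9092 (the address and class name are illustrative, not from the thread):

```java
import java.util.Properties;

public class AcksConfigSketch {
    // Producer settings for at-least-once delivery: with acks=all the
    // leader waits for the full in-sync replica set to acknowledge each
    // write before the send is considered successful.
    public static Properties atLeastOnceProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("acks", "all");           // acks=0 is fire-and-forget and may lose data
        props.put("retries", "2147483647"); // retry transient failures indefinitely
        return props;
    }

    public static void main(String[] args) {
        System.out.println(atLeastOnceProps().getProperty("acks")); // prints all
    }
}
```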
Thanks hans.
It would work but producer will start loosing the data even the Cluster is
available.
Thanks
Ankit Jain
On Wed, Jun 7, 2017 at 12:56 PM, Hans Jespersen wrote:
> Try adding props.put("max.block.ms", "0");
>
> -hans
>
>
>
> > On Jun 7, 2017, at 12:24 PM, Ankit Jain wrote:
> >
> > H
Try adding props.put("max.block.ms", "0");
-hans
> On Jun 7, 2017, at 12:24 PM, Ankit Jain wrote:
>
> Hi,
>
> We want to use the non-blocking Kafka producer. The producer thread should
> not block if the Kafka cluster is down or unreachable.
>
> Currently, we are setting following prop
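A sketch of the fail-fast configuration suggested in this thread, assuming a broker at localhost:9092 (address and class name are illustrative):

```java
import java.util.Properties;

public class NonBlockingProducerSketch {
    // With max.block.ms=0 the producer's send() never waits for metadata
    // or for buffer space: it fails immediately (through the returned
    // future / callback) when the cluster is unreachable, instead of
    // blocking the calling thread.
    public static Properties nonBlockingProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("max.block.ms", "0");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(nonBlockingProps().getProperty("max.block.ms")); // prints 0
    }
}
```

The trade-off, as the rest of the thread notes, is that failing fast means those sends are errors the application must handle itself.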
Increasing reconnect.backoff.ms to 1000 ms and setting BLOCK_ON_BUFFER_FULL_CONFIG
to true did not help either. The messages are simply lost.
It is disappointing that there is no way to handle messages that are lost when
the broker itself is unavailable, since retries do not apply to broker
connection issues.
https://i
Kamal,
Say you have n threads in your executor thread pool; then you can make
consumer.poll() return at most n records by setting "max.poll.records"
in the consumer config. Then you can maintain a circular bit buffer
indicating completed record offset (this is similar to your "ack" approach
I
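A sketch of the consumer setting described above, sized to a worker pool (the broker address, group id, and class name are assumptions, not from the thread):

```java
import java.util.Properties;

public class BoundedPollSketch {
    // Cap each consumer.poll() at the worker pool size so every returned
    // record can be handed to a free executor thread immediately.
    public static Properties consumerProps(int workerThreads) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "worker-group");            // assumed group id
        props.put("max.poll.records", Integer.toString(workerThreads));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps(8).getProperty("max.poll.records")); // prints 8
    }
}
```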
Make sure you have inflight requests set to 1 if you want ordered messages.
Thanks,
Mayuresh
On Tue, Sep 8, 2015 at 5:55 AM, Damian Guy wrote:
> Can you do:
> producer.send(...)
> ...
> producer.send(...)
> producer.flush()
>
> By the time the flush returns all of your messages should have bee
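A sketch of the ordering setting Mayuresh mentions (broker address and class name are illustrative):

```java
import java.util.Properties;

public class OrderedProducerSketch {
    // Limiting in-flight requests per connection to 1 means a retried
    // batch can never be reordered behind a later batch that happened to
    // succeed on its first attempt.
    public static Properties orderedProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("max.in.flight.requests.per.connection", "1");
        props.put("retries", "3"); // retries are what make reordering possible
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            orderedProps().getProperty("max.in.flight.requests.per.connection")); // prints 1
    }
}
```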
Can you do:
producer.send(...)
...
producer.send(...)
producer.flush()
By the time the flush returns all of your messages should have been sent
On 8 September 2015 at 11:50, jinxing wrote:
> if I want to send the message synchronously I can do as below:
> future=producer.send(producerRecord, callb
My bad. The exposed option is "sync", so omitting that should default to async.
Aditya
From: ram kumar [ramkumarro...@gmail.com]
Sent: Sunday, May 31, 2015 10:43 PM
To: users@kafka.apache.org
Subject: Re: async
async option is not available
async option is not available in 0.8.2.1
On Mon, Jun 1, 2015 at 11:06 AM, Aditya Auradkar <
aaurad...@linkedin.com.invalid> wrote:
> This should be enough:
> bin/kafka-console-producer.sh --async --batch-size=10 --broker-list
> localhost:9092 --topic test
>
> Aditya
>
> ___
This should be enough:
bin/kafka-console-producer.sh --async --batch-size=10 --broker-list
localhost:9092 --topic test
Aditya
From: ram kumar [ramkumarro...@gmail.com]
Sent: Sunday, May 31, 2015 10:18 PM
To: users@kafka.apache.org
Subject: async
hi,
is t
Thanks Jiangjie,
I too have thought the same after looking at the code. Thanks a lot for
clearing my doubt!
On Tue, Mar 31, 2015 at 11:45 AM, Jiangjie Qin
wrote:
> The async send() put the message into a message queue then returns. When
> the messages are pulled out of the queue by the sender thre
The async send() put the message into a message queue then returns. When
the messages are pulled out of the queue by the sender thread, it still
uses SyncProducer to send ProducerRequests to brokers.
Jiangjie (Becket) Qin
On 3/30/15, 10:44 PM, "Madhukar Bharti" wrote:
>Hi All,
>
>I am using *as
What kinds of exceptions are caught and sent to the callback method? I think
when there is an IOException the callback is not called.
In the NetworkClient.java class, from the following code snippet I don't think
the callback is called for this exception:
try {
this.selector.poll(Math.min(timeout, metadataTimeo
Yes. Thats right. I misunderstood, my bad.
Thanks,
Mayuresh
On Thu, Mar 19, 2015 at 11:05 AM, sunil kalva wrote:
> future returns RecordMetadata class which contains only metadata not the
> actual message.
> But i think *steven* had a point like saving the reference in impl class
> and retry i
The future returns a RecordMetadata object, which contains only metadata, not
the actual message.
But I think *steven* had a point about saving the reference in the impl class
and retrying if there is an exception in the callback method.
On Thu, Mar 19, 2015 at 10:27 PM, Mayuresh Gharat <
gharatmayures...@gmail.com> w
Also you can use the other API that returns a Future and save those futures
into a list and do get() on them to check which message has been sent and
which returned an error so that they can be retried.
Thanks,
Mayuresh
On Thu, Mar 19, 2015 at 9:19 AM, Steven Wu wrote:
> in your callback impl
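A sketch of the pattern Mayuresh describes, using plain java.util.concurrent futures as stand-ins for the producer's Future<RecordMetadata> (the names and types here are illustrative, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class FutureBatchSketch {
    // Collect one future per send, then call get() on each: a normal
    // return means the message was sent, while an ExecutionException
    // carries the send error, so that message can be retried.
    public static List<Integer> failedIndexes(List<Future<Long>> futures)
            throws InterruptedException {
        List<Integer> failed = new ArrayList<>();
        for (int i = 0; i < futures.size(); i++) {
            try {
                futures.get(i).get(); // blocks until this send is resolved
            } catch (ExecutionException e) {
                failed.add(i);        // record the index for a retry
            }
        }
        return failed;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Future<Long>> futures = new ArrayList<>();
        futures.add(CompletableFuture.completedFuture(0L)); // simulated success
        CompletableFuture<Long> broken = new CompletableFuture<>();
        broken.completeExceptionally(new RuntimeException("send failed"));
        futures.add(broken);                                // simulated failure
        System.out.println(failedIndexes(futures)); // prints [1]
    }
}
```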
in your callback impl object, you can save a reference to the actual
message.
On Wed, Mar 18, 2015 at 10:45 PM, sunil kalva wrote:
> Hi
> How do I access the actual message which failed to send to the cluster,
> using the Callback interface and the onCompletion method?
>
> Basically if the sender is faile
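A sketch of Steven's suggestion of keeping the message reference in the callback object. The Callback interface here is a simplified stand-in for org.apache.kafka.clients.producer.Callback (whose real onCompletion also receives a RecordMetadata); all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class RetainingCallbackSketch {
    // Simplified stand-in for the producer Callback interface: only the
    // failure path matters for retries, so metadata is omitted here.
    interface Callback {
        void onCompletion(Exception exception);
    }

    // The callback object keeps a reference to the original message, so
    // on failure the exact payload is still available for a retry.
    static class RetainingCallback implements Callback {
        private final String message;
        private final List<String> retryQueue;

        RetainingCallback(String message, List<String> retryQueue) {
            this.message = message;
            this.retryQueue = retryQueue;
        }

        @Override
        public void onCompletion(Exception exception) {
            if (exception != null) {
                retryQueue.add(message); // failed: remember it for resend
            }
        }
    }

    public static void main(String[] args) {
        List<String> retries = new ArrayList<>();
        Callback cb = new RetainingCallback("payload-1", retries);
        cb.onCompletion(new RuntimeException("broker unreachable"));
        cb.onCompletion(null); // success: nothing queued
        System.out.println(retries); // prints [payload-1]
    }
}
```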
We introduced callbacks in the new producer. It's only available in trunk
though.
Thanks,
Jun
On Tue, May 20, 2014 at 4:42 PM, hsy...@gmail.com wrote:
> Hi guys,
>
> So far, is there a way to track the async producer callback?
> My requirement is basically if all nodes of the topic goes down,
The async producer's send() API is never supposed to block. If, for some
reason, the producer's queue is full and you try to send more messages, it
will drop those messages and raise a QueueFullException. You can configure
the "message.send.max.retries" config to retry sending the messages n
times,
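A sketch of the old (Scala) async producer settings this answer refers to. These config names are from the 0.8-era producer and should be treated as assumptions to be checked against the docs for the version in use:

```java
import java.util.Properties;

public class OldAsyncProducerSketch {
    // Old-producer async settings matching the behavior described above:
    // a bounded queue whose overflow raises QueueFullException, plus a
    // bounded number of send retries.
    public static Properties asyncProps() {
        Properties props = new Properties();
        props.put("producer.type", "async");
        props.put("queue.buffering.max.messages", "10000"); // queue capacity
        props.put("message.send.max.retries", "3");         // retry n times
        return props;
    }

    public static void main(String[] args) {
        System.out.println(asyncProps().getProperty("producer.type")); // prints async
    }
}
```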
So the concept to keep in mind is that, as long as we set the whole Kafka
server list on the producer and the zookeeper(s) list on the consumers, from a
producer and consumer's perspective it should just work; the code won't get
any information, and instead one should look at the logs?
What
Kafka never had a callback for the async producer yet. But this is proposed
for Kafka 0.9. You can find the proposal here -
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ProposedProducerAPI
Thanks,
Neha
On Oct 7, 2013 4:52 AM, "Bruno D. Rodrigues"
wrote:
> Apolog
If async mode, messages are sent using the configured ack level. There is
no callback in the async mode right now. We plan to enhance that in the
future.
Thanks,
Jun
On Thu, May 30, 2013 at 9:50 PM, Jason Rosenberg wrote:
> With 0.8, we now have ack levels when sending messages. I'm wonderin
The data in the producer is only kept in memory. In a clean shutdown, the
producer will drain the queue and send all remaining messages. In an
unclean shutdown, all unsent messages are lost.
Thanks,
Jun
On Thu, Feb 14, 2013 at 12:09 PM, Subhash Agrawal wrote:
> Hi,
>
> We are using kafka-broker