Isn’t the producer part of the application? The metadata is stored in
memory. If the application is rebooted (the process restarted), all the
metadata will be gone.
Jiangjie (Becket) Qin
On 5/13/15, 9:54 AM, "Mohit Gupta" wrote:
>I meant the producer. ( i.e. application using the producer api to push
>messages into kafka ).
I meant the producer ( i.e. the application using the producer API to push
messages into Kafka ).

On Wed, May 13, 2015 at 10:20 PM, Mayuresh Gharat
<gharatmayures...@gmail.com> wrote:
> By application rebooting, do you mean you bounce the brokers?
>
> Thanks,
>
> Mayuresh
>
By application rebooting, do you mean you bounce the brokers?
Thanks,
Mayuresh
On Wed, May 13, 2015 at 4:06 AM, Mohit Gupta wrote:
> Thanks Jiangjie. This is helpful.
>
> Adding to what you have mentioned, I can think of one more scenario which
> may not be very rare.
Thanks Jiangjie. This is helpful.
Adding to what you have mentioned, I can think of one more scenario which
may not be very rare.
Say, the application is rebooted and the Kafka brokers registered in the
producer are not reachable ( could be due to network issues or those
brokers are actually down ). The producer then has no metadata for the
topic, so the first send() after the restart will block until the metadata
fetch times out.
The application will not block on each metadata refresh or when the
metadata expires.
The application will only be blocked when:
1. It sends the first message to a topic (only for that single message), or
2. The topic has been deleted from the broker, so the refreshed metadata
loses the topic info (which is pretty rare).
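A minimal sketch of that behaviour with the Java producer API (broker
address, topic name and serializers are made up for illustration): the
first send() to a topic blocks while metadata is fetched; later sends just
append to the batch and report delivery through the callback.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FirstSendBlocks {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Case 1: the first send() to "test-topic" has no cached metadata for
        // the topic, so this call blocks until the metadata fetch completes
        // (or times out).
        long t0 = System.currentTimeMillis();
        producer.send(new ProducerRecord<>("test-topic", "key", "first message"));
        System.out.println("first send() took " + (System.currentTimeMillis() - t0) + " ms");

        // Subsequent sends reuse the cached metadata: send() only appends to
        // the per-partition batch and returns a Future; delivery is reported
        // asynchronously via the callback.
        producer.send(new ProducerRecord<>("test-topic", "key", "second message"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("delivered to partition " + metadata.partition()
                                + " at offset " + metadata.offset());
                    }
                });

        producer.close();
    }
}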
Thank you for the clarification.

I think I agree with Mohit. Sometimes blocking on logging is not
acceptable, given the nature of the application that uses Kafka.

Yes, it is not blocking when the metadata is still available, but the
application will be blocked once the metadata is expired.

It might be handled by the application.
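One way an application could keep a blocking send() out of its logging
path, sketched with an illustrative wrapper (the class name, topic and
queue size below are assumptions, not anything from this thread): hand the
record to a bounded single-thread executor so the caller never waits on
metadata.

import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NonBlockingLogShipper {
    private final KafkaProducer<String, String> producer;
    // Single worker with a bounded queue; if Kafka is unreachable and the
    // queue fills up, old log lines are discarded instead of blocking callers.
    private final ExecutorService worker = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(10_000),
            new ThreadPoolExecutor.DiscardOldestPolicy());

    public NonBlockingLogShipper(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Never blocks the application thread, even if send() itself blocks on metadata.
    public void log(String line) {
        worker.submit(() -> producer.send(new ProducerRecord<>("app-logs", line)));
    }

    public void shutdown() {
        worker.shutdown();
        producer.close();
    }
}

Whether to drop, spill to disk, or block when the queue fills is an
application decision; the point is only that the choice stays out of the
caller's thread.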
Send() will only block if the metadata is *not available* for the topic.
It won’t block if the metadata is stale. The metadata refresh is async to
send(). However, if you send a message to a topic for the first time,
send() will trigger a metadata refresh and block until it has metadata for
that topic.
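The length of that initial blocking wait is bounded by producer
configuration; a sketch, assuming the 0.8.2-era Java client where the
setting is metadata.fetch.timeout.ms (later clients replace it with
max.block.ms), with illustrative broker addresses:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class BoundedMetadataBlock {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // illustrative
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Upper bound on how long send() may block waiting for topic metadata.
        // The property name depends on the client version:
        props.put("metadata.fetch.timeout.ms", "5000"); // 0.8.2.x new producer
        // props.put("max.block.ms", "5000");           // 0.9+ clients

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}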
I completely agree with Mohit, an application should not have to know or
care about producer implementation internals.
Given a message and its delivery constraints (produce retry count and
timeout), the producer should hide any temporal failures until the message
is successfully delivered, a permanent error is encountered, or the
constraints are exceeded.
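As a rough illustration of such delivery constraints expressed as producer
configuration (the values are arbitrary examples, not recommendations):
retries and backoff let the producer ride out transient failures before it
reports an error back to the caller.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class DeliveryConstraints {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // illustrative
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Delivery constraints: let the producer absorb transient failures
        // (leader moves, broker restarts) before surfacing an error to the
        // application via the send() callback / Future.
        props.put("retries", "5");            // retry a failed send a few times
        props.put("retry.backoff.ms", "200"); // pause between retries
        props.put("acks", "all");             // wait for full acknowledgement

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}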
I could not follow the reasoning behind blocking the send method if the
metadata is not up-to-date. Though, I see that as per the design, it
requires the metadata to batch the message into the appropriate
topic-partition queue. Also, if the metadata could not be updated in the
specified interval, it throws an exception.
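A hedged sketch of handling that failure mode: depending on the client
version the metadata timeout either comes back through the returned Future
or is thrown from send() itself as
org.apache.kafka.common.errors.TimeoutException, so the example guards both
paths (broker and topic names are placeholders).

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;

public class HandleMetadataTimeout {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // illustrative
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                Future<RecordMetadata> result =
                        producer.send(new ProducerRecord<>("some-topic", "payload"));
                result.get(); // surfaces asynchronous failures, including metadata timeouts
            } catch (TimeoutException e) {
                // Some client versions throw directly from send() when the
                // metadata wait exceeds its limit.
                System.err.println("metadata not available in time: " + e.getMessage());
            } catch (ExecutionException e) {
                // Other versions report the same condition through the Future.
                System.err.println("send failed: " + e.getCause());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}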
The way it works, I suppose, is that the producer will do fetchMetadata if
the last fetched metadata is stale (the refresh interval has expired) or if
it is not able to send data to a particular broker in its current metadata
(this might happen in some cases, like if the leader moves).
It cannot produce without that metadata.
That’s right. Send() will first try to get the metadata of a topic, which
is a blocking operation.

On 5/12/15, 2:48 AM, "Rendy Bambang Junior" wrote:
>Hi, sorry if my understanding is incorrect.
>
>I am integrating kafka producer with application, when i try to shutdown
>all kafka broker (preparing for prod env) I notice that 'send' method is
>blocking.
Hi, sorry if my understanding is incorrect.

I am integrating the Kafka producer with an application. When I try to
shut down all Kafka brokers (preparing for the prod env), I notice that
the 'send' method is blocking.

Is the new producer's metadata fetch not async?

Rendy
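For what it's worth, the behaviour reported above can be reproduced with a
small test that points the producer at an unreachable broker and times the
first send(); the address and topic below are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SendBlocksWhenBrokersDown {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Deliberately unreachable broker to reproduce the report.
        props.put("bootstrap.servers", "localhost:19092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        long start = System.currentTimeMillis();
        try {
            // No metadata can be fetched, so this first send() blocks until the
            // configured metadata timeout expires, then reports a failure.
            producer.send(new ProducerRecord<>("test-topic", "hello")).get();
        } catch (Exception e) {
            System.out.println("send() blocked for "
                    + (System.currentTimeMillis() - start) + " ms before failing: " + e);
        } finally {
            producer.close();
        }
    }
}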