I could not follow the reasoning behind blocking the send method when the
metadata is not up-to-date. Though I do see that, by design, the producer
requires the metadata to batch the message into the appropriate
topic-partition queue. Also, if the metadata cannot be fetched within the
configured interval, send throws an exception and the message is not
queued to be retried once the brokers are back up.
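
For reference, a minimal sketch of the setup I am describing (assuming the
0.8.2-era Java producer; the broker address, topic and serializers are
placeholders, and in that version the metadata wait is bounded by
metadata.fetch.timeout.ms, renamed max.block.ms in later releases):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BoundedBlockingSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Bound how long send() may wait for metadata; with no brokers up,
        // the wait gives up after this timeout instead of hanging forever.
        props.put("metadata.fetch.timeout.ms", "5000");

        KafkaProducer<String, String> producer =
                new KafkaProducer<String, String>(props);
        try {
            producer.send(new ProducerRecord<String, String>("my-topic", "k", "v"));
        } catch (Exception e) {
            // The metadata timeout surfaces here (or via the returned Future,
            // depending on the client version); the record is not kept for a
            // later retry once the brokers come back.
        } finally {
            producer.close();
        }
    }
}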

Should messages not be buffered in another queue (up to a limit) if the
brokers are down, and retried later? A rough sketch of what I mean follows
below.
Is it not a general use case to require the producer to be asynchronous in
all scenarios?
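
Something along these lines is what I had in mind: the caller only offers the
record to a bounded in-memory queue, and a dedicated thread does the
(possibly blocking) send. This is only a sketch on my side, not anything the
producer offers; the class name, queue size and String key/value types are
made up:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NonBlockingSender {
    // Bounded application-side buffer; records beyond the limit are rejected
    // rather than blocking the caller.
    private final BlockingQueue<ProducerRecord<String, String>> queue =
            new ArrayBlockingQueue<ProducerRecord<String, String>>(10000);
    private final KafkaProducer<String, String> producer;

    public NonBlockingSender(KafkaProducer<String, String> producer) {
        this.producer = producer;
        Thread sender = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        // send() may still block on metadata here, but only on
                        // this dedicated thread, never on the caller's thread.
                        NonBlockingSender.this.producer.send(queue.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } catch (Exception e) {
                        // A failed send ends up here; the record is lost unless
                        // the application re-queues it itself.
                    }
                }
            }
        });
        sender.setDaemon(true);
        sender.start();
    }

    // Returns false instead of blocking when the buffer is full,
    // e.g. while the brokers are down.
    public boolean trySend(ProducerRecord<String, String> record) {
        return queue.offer(record);
    }
}

The application then decides what to do when trySend returns false, instead
of having its own thread blocked inside the producer.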


On Tue, May 12, 2015 at 10:54 PM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> The way it works, I suppose, is that the producer will do a metadata fetch if
> the last fetched metadata is stale (the refresh interval has expired) or if
> it is not able to send data to a particular broker in its current metadata
> (this might happen in some cases, e.g. if the leader moves).
>
> It cannot produce without having the right metadata.
>
> Thanks,
>
> Mayuresh
>
> On Tue, May 12, 2015 at 10:09 AM, Jiangjie Qin <j...@linkedin.com.invalid>
> wrote:
>
> > That's right. Send() will first try to get the metadata of a topic, which is
> > a blocking operation.
> >
> > On 5/12/15, 2:48 AM, "Rendy Bambang Junior" <rendy.b.jun...@gmail.com>
> > wrote:
> >
> > >Hi, sorry if my understanding is incorrect.
> > >
> > >I am integrating the Kafka producer with an application. When I try to shut
> > >down all Kafka brokers (preparing for the prod env), I notice that the
> > >'send' method is blocking.
> > >
> > >Is the new producer's metadata fetch not asynchronous?
> > >
> > >Rendy
> >
> >
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>



-- 
Best Regards,

Mohit Gupta
