Thank you for the clarification. I think I agree with Mohit. Sometimes blocking on logging is not acceptable, given the nature of the application that uses Kafka.
Yes, it is not blocking when metadata is still available. But the application will be blocked once the metadata has expired. It could be handled by the application, by calling send() asynchronously and managing the buffer and timeout internally, but that makes the async feature of the Kafka producer less meaningful. Sorry if my understanding is incorrect.

Rendy

On May 13, 2015 6:59 AM, "Jiangjie Qin" <j...@linkedin.com.invalid> wrote:

> Send() will only block if the metadata is *not available* for the topic.
> It won't block if the metadata is stale. The metadata refresh is async
> to send(). However, if you send a message to a topic for the first time,
> send() will trigger a metadata refresh and block until it has metadata
> for that topic.
>
> Jiangjie (Becket) Qin
>
> On 5/12/15, 12:58 PM, "Magnus Edenhill" <mag...@edenhill.se> wrote:
>
> >I completely agree with Mohit, an application should not have to know
> >or care about producer implementation internals.
> >Given a message and its delivery constraints (produce retry count and
> >timeout), the producer should hide any temporary failures until the
> >message is successfully delivered, a permanent error is encountered, or
> >the constraints are hit.
> >This should also include internal start-up sequencing, such as metadata
> >retrieval.
> >
> >2015-05-12 21:22 GMT+02:00 Mohit Gupta <success.mohit.gu...@gmail.com>:
> >
> >> I could not follow the reasoning behind blocking the send method if
> >> the metadata is not up-to-date. Though I see that, as per the design,
> >> it requires the metadata to batch the message into the appropriate
> >> topicPartition queue. Also, if the metadata cannot be updated in the
> >> specified interval, it throws an exception and the message is not
> >> queued to be retried once the brokers are up.
> >>
> >> Should it not be that messages are buffered in another queue (up to a
> >> limit) if the brokers are down and retried later?
> >> Is it not a general use case to require the producer to be
> >> asynchronous in all scenarios?
> >>
> >> On Tue, May 12, 2015 at 10:54 PM, Mayuresh Gharat <
> >> gharatmayures...@gmail.com> wrote:
> >>
> >> > The way it works, I suppose, is that the producer will do
> >> > fetchMetadata if the last fetched metadata is stale (the refresh
> >> > interval has expired) or if it is not able to send data to a
> >> > particular broker with its current metadata (this might happen in
> >> > some cases, e.g. if the leader moves).
> >> >
> >> > It cannot produce without having the right metadata.
> >> >
> >> > Thanks,
> >> >
> >> > Mayuresh
> >> >
> >> > On Tue, May 12, 2015 at 10:09 AM, Jiangjie Qin
> >> > <j...@linkedin.com.invalid> wrote:
> >> >
> >> > > That's right. Send() will first try to get metadata of a topic;
> >> > > that is a blocking operation.
> >> > >
> >> > > On 5/12/15, 2:48 AM, "Rendy Bambang Junior"
> >> > > <rendy.b.jun...@gmail.com> wrote:
> >> > >
> >> > > >Hi, sorry if my understanding is incorrect.
> >> > > >
> >> > > >I am integrating the Kafka producer with an application, and when
> >> > > >I try to shut down all Kafka brokers (preparing for a prod env) I
> >> > > >notice that the 'send' method is blocking.
> >> > > >
> >> > > >Is the new producer's metadata fetch not async?
> >> > > >
> >> > > >Rendy
> >> > >
> >> > >
> >> >
> >> >
> >> > --
> >> > -Regards,
> >> > Mayuresh R. Gharat
> >> > (862) 250-7125
> >> >
> >>
> >>
> >>
> >> --
> >> Best Regards,
> >>
> >> Mohit Gupta
> >>
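For anyone reading this thread later, here is a minimal sketch of the kind of application-side wrapper Rendy describes, assuming the standard Java KafkaProducer API: warm up the topic metadata once at startup via partitionsFor(), and hand each send() to a bounded background executor so the calling thread never blocks on a metadata fetch. The topic name, queue size, and error handling below are illustrative assumptions, not settings from this thread.

```java
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NonBlockingSender {
    // Hypothetical topic name, chosen for illustration only.
    private static final String TOPIC = "app-logs";

    private final KafkaProducer<String, String> producer;

    // Single background thread with a bounded buffer in front of send().
    // AbortPolicy rejects new records (RejectedExecutionException) instead of
    // blocking the caller when the buffer is full.
    private final ExecutorService sendPool =
            new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<>(10_000),
                    new ThreadPoolExecutor.AbortPolicy());

    public NonBlockingSender(Properties props) {
        this.producer = new KafkaProducer<>(props);
        // Optional warm-up: partitionsFor() triggers the initial metadata fetch
        // here, so the application's first send() does not pay that cost. It can
        // still block (or time out) at startup if the brokers are unreachable.
        producer.partitionsFor(TOPIC);
    }

    /** Hands the record to the background thread; the caller never blocks on metadata. */
    public void sendAsync(String key, String value) {
        sendPool.execute(() ->
                producer.send(new ProducerRecord<>(TOPIC, key, value),
                        (metadata, exception) -> {
                            if (exception != null) {
                                // Hypothetical handling: log and drop; a real
                                // application might retry or raise an alert.
                                System.err.println("send failed: " + exception);
                            }
                        }));
    }

    public void close() {
        sendPool.shutdown();
        producer.close();
    }
}
```

Note this only moves the blocking onto the wrapper's thread; it does not change the producer's own behavior, and records are lost or rejected once the bounded buffer fills while the brokers are down, which is exactly the trade-off being discussed above.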