The metadata fetch only happens (and blocks) the first time you call send().
After the metadata is retrieved it is cached in memory, so it will not block
again. So yes, there is a possibility it can block. Of course, if the cluster
is down and the metadata was never fetched, then every send() can block.

Metadata is also refreshed periodically after the first fetch; the refresh
interval is controlled by metadata.max.age.ms (default 300000).
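
For illustration, here is a rough sketch (not from this thread) of how the new
Java producer could be configured so send() fails fast rather than blocking for
long: a lowered metadata.fetch.timeout.ms, block.on.buffer.full=false, and
catching BufferExhaustedException. The broker address, topic name, and class
name below are placeholders, not anything from the original messages.

import java.util.Properties;

import org.apache.kafka.clients.producer.BufferExhaustedException;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class FireAndForgetProducer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");  // placeholder broker
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    // Throw instead of blocking when the record buffer fills up.
    props.put("block.on.buffer.full", "false");
    // Bound how long send() may block on the initial metadata fetch.
    props.put("metadata.fetch.timeout.ms", "1000");
    // Periodic metadata refresh interval (default 300000 ms).
    props.put("metadata.max.age.ms", "300000");

    KafkaProducer<String, String> producer =
        new KafkaProducer<String, String>(props);
    try {
      // "events" is an example topic name.
      producer.send(new ProducerRecord<String, String>("events", "hello"),
          new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
              if (e != null) {
                // Delivery failed; fine to drop the message on the floor.
              }
            }
          });
    } catch (BufferExhaustedException e) {
      // Buffer full with block.on.buffer.full=false; drop and move on.
    } finally {
      producer.close();
    }
  }
}

Note that lowering metadata.fetch.timeout.ms only bounds the delay: a send()
made before metadata has ever been fetched can still block for up to that
timeout.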


On Thu, Feb 26, 2015 at 4:47 AM, Gary Ogden <gog...@gmail.com> wrote:

> I was actually referring to the metadata fetch. Sorry, I should have been
> more descriptive. I know we can decrease the metadata.fetch.timeout.ms
> setting to be a lot lower, but it still blocks if it can't get the
> metadata. And I believe the metadata fetch happens every time we call
> send()?
>
> On 25 February 2015 at 19:03, Guozhang Wang <wangg...@gmail.com> wrote:
>
> > Hi Gary,
> >
> > The Java producer will block on send() when the buffer is full and
> > block.on.buffer.full = true (
> > http://kafka.apache.org/documentation.html#newproducerconfigs). If you
> > set the config to false, the send() call will throw a
> > BufferExhaustedException which, in your case, can be caught and ignored,
> > allowing the message to drop on the floor.
> >
> > Guozhang
> >
> >
> >
> > On Wed, Feb 25, 2015 at 5:08 AM, Gary Ogden <gog...@gmail.com> wrote:
> >
> > > Say the entire Kafka cluster is down and there are no brokers to
> > > connect to. Is it possible to use the Java producer send method and not
> > > block until there's a timeout? Is it as simple as registering a
> > > callback method?
> > >
> > > We need the ability for our application to not have any kind of delay
> > > when sending messages while the cluster is down. It's OK if the
> > > messages are lost when the cluster is down.
> > >
> > > Thanks!
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
