Thanks for clarifying. There are no plans to implement the 100% async Kafka
client you describe. However, if you file a KIP with a detailed proposal for
how you want to implement it, we can then have a discussion about the
tradeoffs in terms of additional code complexity and the impact on other use
cases. If the community deems it a worthy improvement, your KIP and
subsequent PR are likely to be accepted.

Thanks,
Apurva



On Fri, Aug 11, 2017 at 4:23 PM, Pavel Moukhataev <m_pas...@mail.ru.invalid>
wrote:

> Imagine I have an application with very strict latency requirements, and I
> want to save data to Kafka for each event (many times per second). My
> application can't wait for a metadata fetch.
>
> So I need the call from my main thread to return quickly. If there is a
> Kafka problem, I can fall back to something else - like saving messages to
> disk in a background thread. For me it is better to receive a 'memory buffer
> is full' error immediately (and hand the message to a background
> save-to-file thread) rather than wait for some time.
>
> And yes, a message can't be written to Kafka if metadata is not available.
> But the metadata fetch is an I/O operation and can be done asynchronously.
>
> So what I need is really a 100% async Kafka client: if buffer memory is
> available, the client puts the message into the buffer, initiates the async
> I/O operation (whatever it may be - a metadata fetch or sending data to
> Kafka) and returns immediately. If there is no memory available in the
> buffer, it reports immediately that there is not enough memory.
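>
> The closest I can get with the current API is a fail-fast pattern like the
> rough sketch below (not the fully async client I'm asking for, since a
> record that could simply wait for metadata gets rejected too). It assumes
> max.block.ms=0 is acceptable, DiskSpillQueue is a hypothetical background
> disk writer, and the exact exception type on a fast failure varies between
> client versions.
>
>   import java.util.Properties;
>   import org.apache.kafka.clients.producer.KafkaProducer;
>   import org.apache.kafka.clients.producer.ProducerRecord;
>   import org.apache.kafka.common.KafkaException;
>
>   public class FailFastSender {
>       // Hypothetical background writer that persists records to disk.
>       interface DiskSpillQueue {
>           void enqueue(ProducerRecord<String, String> record);
>       }
>
>       private final KafkaProducer<String, String> producer;
>       private final DiskSpillQueue spill;
>
>       FailFastSender(DiskSpillQueue spill) {
>           Properties props = new Properties();
>           props.put("bootstrap.servers", "localhost:9092");
>           props.put("max.block.ms", "0"); // fail fast instead of blocking
>           props.put("key.serializer",
>               "org.apache.kafka.common.serialization.StringSerializer");
>           props.put("value.serializer",
>               "org.apache.kafka.common.serialization.StringSerializer");
>           this.producer = new KafkaProducer<>(props);
>           this.spill = spill;
>       }
>
>       void sendOrSpill(ProducerRecord<String, String> record) {
>           try {
>               // Returns quickly when metadata and buffer space are available;
>               // the callback catches later broker-side failures.
>               producer.send(record, (metadata, exception) -> {
>                   if (exception != null) spill.enqueue(record);
>               });
>           } catch (KafkaException e) {
>               // Missing metadata or a full buffer can surface here as a
>               // TimeoutException / BufferExhaustedException; spill to disk
>               // instead of blocking the hot path.
>               spill.enqueue(record);
>           }
>       }
>   }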
>
>
> 2017-08-11 21:04 GMT+03:00 Apurva Mehta <apu...@confluent.io>:
>
> > What precise use case do you have in mind? If you don't have cluster
> > metadata, you can't send the requests anyway. And if you bound your memory
> > and run out of it, that means that you are not able to send data for some
> > reason.
> >
> > The best you can do in both cases is to drop old messages from the
> > producer buffers and favor the new ones. There is an ongoing discussion
> > around KIP-91 to set an absolute bound on the amount of time the
> > application is willing to wait for messages to be acknowledged. By setting
> > this low enough, you can always favor fresh messages over older ones. And
> > when the brokers are unavailable or simply overloaded, that's the best you
> > can do IMO.
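> >
> > Concretely, the kind of configuration I mean would look something like the
> > sketch below. Note that delivery.timeout.ms is only the name proposed in
> > the KIP-91 discussion and may change, and the values are illustrative.
> >
> >   import java.util.Properties;
> >   import org.apache.kafka.clients.producer.KafkaProducer;
> >
> >   public class BoundedWaitProducer {
> >       public static void main(String[] args) {
> >           Properties props = new Properties();
> >           props.put("bootstrap.servers", "localhost:9092");
> >           props.put("acks", "1");
> >           props.put("linger.ms", "0");             // favor latency over batching
> >           props.put("request.timeout.ms", "5000"); // bound on each broker request
> >           // Proposed by KIP-91: an absolute bound on the time from send()
> >           // until the record is acknowledged or expired, so stuck records
> >           // are failed and fresh ones are favored.
> >           props.put("delivery.timeout.ms", "10000");
> >           props.put("key.serializer",
> >               "org.apache.kafka.common.serialization.StringSerializer");
> >           props.put("value.serializer",
> >               "org.apache.kafka.common.serialization.StringSerializer");
> >           KafkaProducer<String, String> producer = new KafkaProducer<>(props);
> >           producer.close();
> >       }
> >   }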
> >
> > On Fri, Aug 11, 2017 at 10:45 AM, Pavel Moukhataev <m_pas...@mail.ru.invalid>
> > wrote:
> >
> > > Hi
> > >
> > > Sometimes Kafka is used in near-real-time Java applications that have
> > > low latency requirements. In that case it is very important to minimize
> > > latency. In the Kafka producer API there are two things that are done
> > > synchronously and can be optimized:
> > >  - cluster metadata fetch
> > >  - waiting for free memory in the buffer
> > >
> > > I suppose this API can easily be rewritten to satisfy real-time needs.
> > > The cluster metadata fetch can be done asynchronously. And to avoid
> > > blocking while waiting for memory, the block.on.buffer.full=false
> > > parameter and BufferExhaustedException could be reimplemented.
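> > >
> > > For example, both blocking points can stall the calling thread today.
> > > The rough sketch below uses standard producer configs with illustrative
> > > values only.
> > >
> > >   import java.util.Properties;
> > >   import org.apache.kafka.clients.producer.KafkaProducer;
> > >   import org.apache.kafka.clients.producer.ProducerRecord;
> > >
> > >   public class BlockingPoints {
> > >       public static void main(String[] args) {
> > >           Properties props = new Properties();
> > >           props.put("bootstrap.servers", "localhost:9092");
> > >           props.put("max.block.ms", "60000");     // how long send() may block
> > >           props.put("buffer.memory", "33554432"); // accumulator size in bytes
> > >           props.put("key.serializer",
> > >               "org.apache.kafka.common.serialization.StringSerializer");
> > >           props.put("value.serializer",
> > >               "org.apache.kafka.common.serialization.StringSerializer");
> > >           KafkaProducer<String, String> producer = new KafkaProducer<>(props);
> > >
> > >           // 1) The first send() to a topic may block (up to max.block.ms)
> > >           //    while cluster metadata is fetched.
> > >           // 2) Any send() may block waiting for free space in buffer.memory
> > >           //    if the brokers are slow or unreachable.
> > >           producer.send(new ProducerRecord<>("events", "key", "value"));
> > >           producer.close();
> > >       }
> > >   }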
> > >
> > > So my question is: is this change on any roadmap, and has anybody
> > > already requested it? If I create a PR with this implemented, will it be
> > > accepted?
> > >
> > > --
> > > Best regards, Pavel
> > > +7-903-258-5544
> > > skype://pavel.moukhataev
> > >
> >
>
>
>
> --
> Best regards, Pavel
> +7-903-258-5544
> skype://pavel.moukhataev
>
