Thank you all for the comments.  Yes, I understand the community's concern
about the extra complexity of a message-dropping policy, but the ability to
inject a Queue implementation would make this completely transparent to
Kafka.

I just need fine-grained control over the application and the queue; Kafka
can do its magic of transferring the data.  By default, Kafka can simply
keep today's behavior.  Injection gives us the flexibility to control the
buffer and the enqueue/dequeue priority, and in addition it lets us tune
the queue size at runtime without having to manage the complete life cycle
of the Producer.
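
As a rough sketch of the kind of pluggable buffer I have in mind (all
class and method names here are hypothetical, not part of the Kafka API):
a bounded queue that drops the least-recent message when full, so the
newest message is retained, and whose capacity can be tuned at runtime
without recreating the producer.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical pluggable buffer, not part of the Kafka API.
// When full, it evicts the oldest entry so the newest message is retained.
public class DropOldestQueue<T> {
    private final Deque<T> deque = new ArrayDeque<>();
    private int capacity;

    public DropOldestQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void offer(T item) {
        // Evict least-recent entries until there is room for the new one.
        while (deque.size() >= capacity && !deque.isEmpty()) {
            deque.pollFirst();
        }
        deque.addLast(item);
    }

    public synchronized T poll() {
        return deque.pollFirst();   // null when empty
    }

    // Tune the buffer size at runtime without touching the producer's
    // life cycle; excess old entries are dropped immediately.
    public synchronized void resize(int newCapacity) {
        capacity = newCapacity;
        while (deque.size() > capacity) {
            deque.pollFirst();
        }
    }

    public synchronized int size() {
        return deque.size();
    }
}
```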

I am just presenting use cases where a developer needs fine-grained
control over the queue.  Creating another buffer for the same data is
pointless in my opinion.
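
For example, one of the use cases from the earlier thread is giving a
heartbeat topic delivery priority over all other topics sharing the same
producer.  A minimal sketch of that idea (again, the names are
hypothetical, not Kafka API): records for one designated topic are
dequeued first, with FIFO order preserved within each priority class.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Hypothetical priority buffer: records for one designated topic (e.g. an
// application heartbeat) are dequeued before records for any other topic.
public class TopicPriorityBuffer {
    public static final class Record {
        public final String topic;
        public final String payload;
        final long seq;   // tie-breaker so FIFO order holds within a class

        Record(String topic, String payload, long seq) {
            this.topic = topic;
            this.payload = payload;
            this.seq = seq;
        }
    }

    private final PriorityQueue<Record> queue;
    private long nextSeq = 0;

    public TopicPriorityBuffer(String priorityTopic) {
        // Priority topic sorts first (0 before 1); ties break on arrival order.
        queue = new PriorityQueue<>(
            Comparator.<Record>comparingInt(
                    r -> r.topic.equals(priorityTopic) ? 0 : 1)
                .thenComparingLong(r -> r.seq));
    }

    public synchronized void enqueue(String topic, String payload) {
        queue.add(new Record(topic, payload, nextSeq++));
    }

    public synchronized Record dequeue() {
        return queue.poll();   // null when empty
    }
}
```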

Thanks for your support and suggestions.

Thanks.

Bhavesh




On Thu, Aug 7, 2014 at 3:00 PM, Philip O'Toole <
philip.oto...@yahoo.com.invalid> wrote:

> Policies for which messages to drop, retain, etc seem like something you
> should code in your application. I personally would not like to see this
> extra complexity added to Kafka.
>
> Philip
>
> ----------------------------------
> http://www.philipotoole.com
>
> > On Aug 7, 2014, at 2:44 PM, Bhavesh Mistry <mistry.p.bhav...@gmail.com>
> > wrote:
> >
> > Basically, the requirement is to support a message-dropping policy in
> > the event that the queue is full.  When we get a storm of data (in our
> > case, logging due to buggy application code), we would like to retain
> > the current message instead of the first one in the queue.  We will
> > mitigate this with rate limiting on the producer side.  The only thing
> > is, if Kafka allows the flexibility to inject an implementation, then
> > developers have control over what to drop, what to retain, and what to
> > prioritize.
> >
> > We would like to change tunable parameters (such as batch size, queue
> > size, and other non-intrusive parameters that do not impact the life
> > cycle of the producer instance) at runtime, after the producer instance
> > is created.
> >
> > Thanks,
> >
> > Bhavesh
> >
> >> On Mon, Aug 4, 2014 at 7:05 PM, Joe Stein <joe.st...@stealth.ly> wrote:
> >>
> >> Is it possible there is another solution to the problem? I think if you
> >> could better describe the problem(s) you are facing and a bit about how
> >> you are architected, then you may get responses from others who have
> >> perhaps faced the same problem with similar architectures ... or maybe
> >> folks can chime in with solution(s) to the problem(s).  When only being
> >> presented with solutions, it is hard to say much about whether it is a
> >> problem folks will have and whether this solution will work for them.
> >>
> >> /*******************************************
> >> Joe Stein
> >> Founder, Principal Consultant
> >> Big Data Open Source Security LLC
> >> http://www.stealth.ly
> >> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> >> ********************************************/
> >>
> >>
> >> On Mon, Aug 4, 2014 at 8:52 PM, Bhavesh Mistry <
> >> mistry.p.bhav...@gmail.com> wrote:
> >>
> >>> Kafka Version:  0.8.x
> >>>
> >>> 1) Ability to define which messages get dropped (least recent instead
> >>> of most recent in the queue)
> >>> 2) Try an unbounded queue to find the upper limit without dropping any
> >>> messages for the application (use case: stress testing)
> >>> 3) Priority blocking queue (meaning a single Producer can send messages
> >>> to multiple topics, and I would like to give delivery priority to
> >>> messages for a particular topic)
> >>>
> >>> We have use cases for #3 and #1, since we would like to deliver the
> >>> application heartbeat first, before any other event in the queue for
> >>> any topic.  To lower TCP connections, we use only one producer for 4
> >>> topics, but one of the topics has priority for delivery.
> >>>
> >>> Please let me know whether this is a useful feature to have.
> >>>
> >>> Thanks in advance for the great support!
> >>>
> >>> Thanks,
> >>>
> >>> Bhavesh
> >>>
> >>> P.S.  Sorry for asking this question again, but last time there was no
> >>> conclusion.
> >>
>
