I am using the send method of *kafka.javaapi.producer.Producer*







    /**
     * Use this API to send data to multiple topics
     * @param messages list of producer data objects that encapsulate the
     *                 topic, key and message data
     */
    def send(messages: java.util.List[KeyedMessage[K,V]]) {
      import collection.JavaConversions._
      underlying.send((messages: mutable.Buffer[KeyedMessage[K,V]]).toSeq: _*)
    }

and have following configuration for the producer :









    serializer.class=kafka.serializer.StringEncoder
    producer.type=async
    request.required.acks=1
    request.timeout.ms=30000
    topic.metadata.refresh.interval.ms=600000
    queue.buffering.max.ms=10000
    queue.buffering.max.messages=50000
    queue.enqueue.timeout.ms=-1
    batch.num.messages=400
    compression.codec=none
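To reason about the queue.enqueue.timeout.ms setting above: per the 0.8 producer documentation, -1 blocks indefinitely when the queue is full, 0 drops the event immediately, and a positive value waits up to that many milliseconds before giving up. A minimal sketch of that policy using a plain BlockingQueue (an illustration of the documented semantics, not the producer's actual code):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class EnqueuePolicy {
    // Mirrors the documented queue.enqueue.timeout.ms semantics:
    //  -1 -> block until space frees up
    //   0 -> drop immediately if the queue is full
    //  >0 -> wait up to the timeout, then drop
    public static boolean enqueue(BlockingQueue<String> queue, String event,
                                  long timeoutMs) throws InterruptedException {
        if (timeoutMs < 0) {
            queue.put(event);           // block indefinitely
            return true;
        } else if (timeoutMs == 0) {
            return queue.offer(event);  // non-blocking; false == "Event queue is full"
        } else {
            return queue.offer(event, timeoutMs, TimeUnit.MILLISECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // tiny queue so the "full" case is easy to hit
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        System.out.println(enqueue(queue, "a", 0)); // true
        System.out.println(enqueue(queue, "b", 0)); // true
        System.out.println(enqueue(queue, "c", 0)); // false -> producer would log "Event queue is full"
    }
}
```

A returned false here is the situation in which the old async producer raises the "Event queue is full of unsent messages" error.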

As per the suggestions, I should increase *batch.num.messages* to a
higher value. We earlier had a larger number of concurrent producers
running from different machines, which we reduced because we thought the
sudden spike in input volume was causing the "*Event queue is full*"
error. I will try the MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION config and
see how it goes. Currently I am using kafka_2.10-0.8.2.1.
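If we do switch to the new Java producer (org.apache.kafka.clients.producer.KafkaProducer, shipped with 0.8.2), the async settings above map only approximately onto its config. A hedged sketch of roughly equivalent properties; the broker host and all numeric values below are placeholders, not recommendations:

```java
import java.util.Properties;

public class NewProducerConfigSketch {
    // Rough mapping from the old async producer settings to new-producer names.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");          // placeholder host
        props.put("acks", "1");                                  // was request.required.acks=1
        props.put("batch.size", "16384");                        // batches by bytes, not batch.num.messages
        props.put("linger.ms", "100");                           // plays roughly the role of queue.buffering.max.ms
        props.put("buffer.memory", "33554432");                  // total buffer, replaces queue.buffering.max.messages
        props.put("max.in.flight.requests.per.connection", "5"); // the MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION setting
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        build().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

The key difference for this thread is the behavior Dana describes below: when buffer.memory is exhausted, the new producer's send() blocks rather than immediately reporting a full queue.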

Thanks,
Saurabh

On Tue, Apr 26, 2016 at 1:41 AM, Dana Powers <dana.pow...@gmail.com> wrote:

> This error is thrown by the old scala producer, correct? You might
> also consider switching to the new java producer. It handles this a
> bit differently by blocking during the send call until the internal
> buffers can enqueue a new message. There are a few more configuration
> options available as well, if I recall.
>
> -Dana
>
> On Mon, Apr 25, 2016 at 12:42 PM, Gwen Shapira <g...@confluent.io> wrote:
> > A few more things you can do:
> >
> > * Increase "batch.size" - this will give you a larger queue and
> > usually better throughput
> > * More producers - very often the bottleneck is not in Kafka at all.
> > Maybe it's the producer? Or the network?
> > * Increase max.inflight.requests for the producer - it will allow
> > sending more requests concurrently and perhaps increase throughput.
> >
> > The important bit is: Don't add more brokers if you don't have
> > information that the broker is the bottleneck.
> >
> > Gwen
> >
> > On Mon, Apr 25, 2016 at 12:06 PM, Saurabh Kumar <saurabh...@gmail.com> wrote:
> >> Thanks Alex, and sorry for the delayed response. We never could solve
> >> this problem, so I am resurrecting the thread. As I understand, from a
> >> client/tenant which is producing messages to a Kafka topic, there is
> >> not much that can be controlled. I assume "Event queue is full of
> >> unsent messages" signifies that:
> >> 1) We need to expand our cluster by adding more resources/brokers
> >> 2) We need to add blocking behaviour in case we see that the average
> >> volume of messages is sustainable, and it's just the spikes that are
> >> causing problems.
> >>
> >> --Saurabh
> >>
> >> On Thu, Feb 18, 2016 at 11:51 PM, John Yost <hokiege...@gmail.com> wrote:
> >>
> >>> Hi Alex,
> >>>
> >>> Great info, thanks! I asked a related question this AM--is a full queue
> >>> possibly a symptom of back pressure within Kafka?
> >>>
> >>> --John
> >>>
> >>> On Thu, Feb 18, 2016 at 12:38 PM, Alex Loddengaard <a...@confluent.io> wrote:
> >>>
> >>> > Hi Saurabh,
> >>> >
> >>> > This is occurring because the produce message queue is full when a
> >>> > produce request is made. The size of the queue is configured via
> >>> > queue.buffering.max.messages. You can experiment with increasing
> >>> > this (which will require more JVM heap space), or fiddling with
> >>> > queue.enqueue.timeout.ms to control the blocking behavior when the
> >>> > queue is full. Both of these configuration options are explained
> >>> > here:
> >>> >
> >>> > https://kafka.apache.org/08/configuration.html
> >>> >
> >>> > I didn't quite follow your last paragraph, so I'm not sure if the
> >>> > following advice is applicable to you or not. You may also
> >>> > experiment with adding more producers (either on the same or
> >>> > different machines).
> >>> >
> >>> > I hope this helps.
> >>> >
> >>> > Alex
> >>> >
> >>> > On Thu, Feb 18, 2016 at 2:12 AM, Saurabh Kumar <saurabh...@gmail.com> wrote:
> >>> >
> >>> > > Hi,
> >>> > >
> >>> > > We have a Kafka server deployment shared between multiple teams,
> >>> > > and I have created a topic with multiple partitions on it for
> >>> > > pushing some JSON data.
> >>> > >
> >>> > > We have multiple such Kafka producers running from different
> >>> > > machines which produce/push data to a Kafka topic. A lot of times
> >>> > > I see the following exception in the logs: "*Event queue is full
> >>> > > of unsent messages, could not send event*"
> >>> > >
> >>> > > Any idea how to solve this? We cannot synchronise the volume or
> >>> > > timing of different Kafka producers across machines and between
> >>> > > multiple processes. There is a limit on the maximum number of
> >>> > > concurrent processes (Kafka producers) that can run on a machine,
> >>> > > but it is only going to increase with time as we scale. What is
> >>> > > the right way to solve this problem?
> >>> > >
> >>> > > Thanks,
> >>> > > Saurabh
> >>> > >
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > *Alex Loddengaard | **Solutions Architect | Confluent*
> >>> > *Download Apache Kafka and Confluent Platform: www.confluent.io/download*
> >>> >
> >>>
>
