I am using the send method of *kafka.javaapi.producer.Producer*
/**
 * Use this API to send data to multiple topics
 * @param messages list of producer data objects that encapsulate the topic, key and message data
 */
def send(messages: java.util.List[KeyedMessage[K,V]]) { import coll
This error is thrown by the old Scala producer, correct? You might
also consider switching to the new Java producer. It handles this a
bit differently, blocking during the send call until the internal
buffers can enqueue a new message. There are a few more configuration
options available as well.
A few more things you can do:
* Increase "batch.size" - this will give you a larger queue and
usually better throughput
* More producers - very often the bottleneck is not in Kafka at all.
Maybe it's the producer, or the network?
* Increase "max.in.flight.requests.per.connection" for the producer - it will allow
sendi
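The tuning advice above can be sketched as producer configuration for the new Java producer. This is a minimal illustration, not from the thread; the class name and the specific values chosen are assumptions, while the property keys ("batch.size", "buffer.memory", "max.in.flight.requests.per.connection") are real new-producer settings:

```java
import java.util.Properties;

// Sketch of the tuning discussed above, for the new Java producer
// (org.apache.kafka.clients.producer.KafkaProducer). Values are illustrative.
public class TunedProducerProps {
    public static Properties build(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Larger batches usually mean better throughput (default is 16384 bytes).
        props.put("batch.size", "65536");
        // Memory for unsent records; send() blocks when this is exhausted,
        // which is the new producer's replacement for the old "queue full" error.
        props.put("buffer.memory", String.valueOf(64L * 1024 * 1024));
        // Allow more unacknowledged requests per connection
        // (higher throughput, but retries can reorder messages).
        props.put("max.in.flight.requests.per.connection", "10");
        return props;
    }
}
```

The resulting Properties object would be passed straight to the KafkaProducer constructor.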
Thanks Alex, and sorry for the delayed response. We never could solve this
problem, so I am resurrecting the thread. As I understand it, from a
client/tenant which is producing messages to a Kafka topic, there is not
much that can be controlled. I assume "Event queue is full of unsent
messages" signify th
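One thing the client side *can* do when the queue fills is retry the send with backoff rather than lose the message. The sketch below is an illustration, not from the thread: the class is hypothetical, and in real code the caught exception would be the producer's queue-full error rather than a generic Exception:

```java
import java.util.concurrent.Callable;

// Hypothetical retry-with-backoff wrapper around a producer send that may
// fail while the internal queue is full.
public class BackoffRetry {
    public static <T> T withRetries(Callable<T> call, int maxAttempts,
                                    long initialBackoffMs) throws Exception {
        long backoff = initialBackoffMs;
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) { // in real code: the producer's queue-full exception
                last = e;
                Thread.sleep(backoff); // give the queue time to drain
                backoff *= 2;          // exponential backoff between attempts
            }
        }
        throw last; // all attempts failed
    }
}
```

This trades latency for delivery: the caller blocks while the producer's queue drains instead of dropping the record.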
Hi John,
I'm glad the info was helpful.
It's hard to diagnose this issue without monitoring. I suggest setting up
graphite to graph JMX metrics. There's a good (not designed for production)
script here (as part of a Vagrant VM):
https://github.com/gwenshap/ops_training_vm/blob/master/bootstrap.sh
Hi Alex,
Excellent information, thanks! I very much appreciate your time. BTW,
Kafka is an EXCELLENT product.
It seems like my situation may be a bit of an edge case, based upon your
response. Specifically, when I added more producers (in the case of Storm,
a Kafka producer is a KafkaBolt), that
Hi John,
I should preface this by saying I've never used Storm and KafkaBolt and am
not a streaming expert.
However, if you're running out of buffer space in the producer (as is
happening in the other thread you referenced), you can possibly alleviate
this by adding more producers, or by tuning
Hi Alex,
Great info, thanks! I asked a related question this AM--is a full queue
possibly a symptom of back pressure within Kafka?
--John
On Thu, Feb 18, 2016 at 12:38 PM, Alex Loddengaard
wrote:
> Hi Saurabh,
>
> This is occurring because the produce message queue is full when a produce
> req
Hi Saurabh,
This is occurring because the produce message queue is full when a produce
request is made. The size of the queue is configured
via queue.buffering.max.messages. You can experiment with increasing this
(which will require more JVM heap space), or fiddling with
queue.enqueue.timeout.ms
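The two old-producer settings mentioned above can be sketched as a Properties fragment. This is an illustration, assuming the old async Scala producer; the class name and values are placeholders, while the property keys are the real old-producer settings:

```java
import java.util.Properties;

// Sketch of the old (Scala) producer configuration discussed above.
// Values are illustrative, not recommendations.
public class OldProducerQueueConfig {
    public static Properties queueProps() {
        Properties props = new Properties();
        // Placeholder connection/serializer settings required by the old producer.
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Async mode is what uses the internal event queue at all.
        props.put("producer.type", "async");
        // Size of the in-memory queue; raising it needs more JVM heap.
        props.put("queue.buffering.max.messages", "100000");
        // How long send() waits for queue space: 0 = fail immediately,
        // -1 = block until room is available, N > 0 = wait N ms, then throw.
        props.put("queue.enqueue.timeout.ms", "-1");
        return props;
    }
}
```

With queue.enqueue.timeout.ms = -1 the producer blocks instead of throwing the "Event queue is full of unsent messages" error, at the cost of back pressure on the caller.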
Hi Everyone,
I am encountering this exception similar to Saurabh's report earlier today
as I try to scale up a Storm -> Kafka output via the KafkaBolt (i.e., add
more KafkaBolt executors).
Question...does this necessarily indicate back pressure from Kafka where
the Kafka writes cannot keep up wit
Hi,
We have a Kafka server deployment shared between multiple teams, and I have
created a topic with multiple partitions on it for pushing some JSON data.
We have multiple such Kafka producers running from different machines which
produce/push data to a Kafka topic. A lot of times I see the follow