Hi,
My 3-node cluster (2x 4-core Xeon, 8 GB RAM per machine, striped HDDs) had
crashed with an OOM when I had some 5-6 topics with 256 partitions per topic.
I don't remember the heap size I used; I think it was the default one that
ships with the 0.8.2.1 bundle.
I have been trying to come up with the numb...
Hello Prabhjot,
Actually, what I meant in my previous email is that there are 200 topics with
1 partition each, so there are 200 partitions in total. Is there any rule of
thumb regarding this? Can you share your configuration (including machine
spec, JVM memory, etc.) for the Kafka cluster?
Thank you,
Hi,
Not sure, but I hit OOMs when using too many partitions and many topics
with too small a heap assigned to the JVM.
Regards,
Prabhjot
On Thu, Nov 5, 2015 at 10:40 AM, Muqtafi Akhmad wrote:
> Hello all,
> Recently I had an incident with the Kafka cluster; I found an OutOfMemoryError in
> the Kafka se...
Hello all,
Recently I had an incident with the Kafka cluster; I found an OutOfMemoryError in
the Kafka server log:
> WARN [ReplicaFetcherThread-0-2], Error in fetch Name: FetchRequest;
> Version: 0; CorrelationId: 7041716; ClientId: ReplicaFetcherThread-0-2;
> ReplicaId: 0; MaxWait: 500 ms; MinBytes: 1 bytes; Re
Thanks a lot Gwen. I bumped the JVM up to 1 GB on the consumer side and it
works :)
All the consumers belong to the same group, and I am using the high-level
group API to consume from Kafka. It seems there is some initial metadata
exchange, or something where information about all the partitions is sent to all the...
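For anyone following along, here is a minimal sketch of the 0.8.x high-level
(group) consumer setup being described; the ZooKeeper address, group id, and
stream count are placeholders, not taken from Pranay's actual setup:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class GroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181"); // placeholder ZK quorum
            props.put("group.id", "topic2-group");      // members sharing this id split the partitions

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // One stream == one consumer thread; the group rebalances the
            // topic's partitions across all streams of all group members.
            Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
            topicCountMap.put("topic2", 1);

            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);
            for (MessageAndMetadata<byte[], byte[]> msg : streams.get("topic2").get(0)) {
                System.out.println(new String(msg.message()));
            }
        }
    }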
Two things:
1. The OOM happened on the consumer, right? So the memory that matters
is the RAM on the consumer machine, not on the Kafka cluster nodes.
2. If the consumers belong to the same consumer group, each will
consume a subset of the partitions and will only need to allocate
memory for those partitions.
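A back-of-the-envelope sketch of point 2; the partition count, group size,
and fetch size below are illustrative assumptions, not numbers from this
thread:

    public class FetchBufferEstimate {
        public static void main(String[] args) {
            int totalPartitions = 1000;           // e.g. topic2 in this thread
            int consumersInGroup = 4;             // assumed group size
            long fetchMessageMaxBytes = 1 << 20;  // fetch.message.max.bytes = 1 MB

            // Each group member is assigned roughly an equal share of the
            // partitions and only buffers fetches for its own share.
            long perConsumer = (totalPartitions + consumersInGroup - 1) / consumersInGroup;
            long worstCaseBytes = perConsumer * fetchMessageMaxBytes;
            System.out.printf("~%d partitions -> ~%d MB of fetch buffers per consumer%n",
                              perConsumer, worstCaseBytes >> 20);
        }
    }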
Thanks a lot Natty.
I am using this Ruby gem on the client side with all the default config:
https://github.com/joekiller/jruby-kafka/blob/master/lib/jruby-kafka/group.rb
and the value of fetch.message.max.bytes is set to 1 MB.
Currently I only have 3 nodes set up in the Kafka cluster (with 8 GB RAM)
a...
The fetch.message.max.bytes setting is actually a client-side configuration.
With regard to increasing the number of threads, I think the calculation may
be a little more subtle than what you're proposing, and frankly, it's
unlikely that your servers can handle allocating 200 MB x 1000 threads =
200 GB of memory.
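A minimal sketch of where that knob lives, assuming the 0.8.x high-level
consumer; the connection values are placeholders:

    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;

    public class ClientFetchConfig {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181");      // placeholder
            props.put("group.id", "topic2-group");           // placeholder
            // Client-side: the max bytes fetched per partition per request.
            props.put("fetch.message.max.bytes", "1048576"); // 1 MB
            ConsumerConfig config = new ConsumerConfig(props);
            System.out.println("fetch size: " + config.fetchMessageMaxBytes());
        }
    }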
Thanks Natty.
Is there any config which I need to change on the client side as well?
Also, currently I am trying with only 1 consumer thread. Does the equation
change to (#partitions) x (fetch size) x (#consumer_threads) in case I try
to read with 1000 threads from topic2 (1000 partitions)?
-Pran
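(A rough worked example, not from the original emails: 1000 partitions x
1 MB fetch size x 1000 threads would bound at roughly 1 TB, but if the 1000
threads all belong to one group they split the 1000 partitions about one
partition apiece, so the fetch buffers across the group still sum to roughly
1000 x 1 MB = 1 GB. Natty's 200 MB x 1000 figure above instead starts from
the 200 MB allocation reported in the OOME.)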
Hi Pranay,
I think the JIRA you're referencing is a bit orthogonal to the OOME you're
experiencing. Based on the stack trace, it looks like your OOME is coming
from a consumer request, which is attempting to allocate 200 MB. There was a
thread (relatively recently) that discussed what I think i...
Hi All,
I have a Kafka cluster setup which has 2 topics:
topic1 with 10 partitions
topic2 with 1000 partitions.
While I am able to consume messages from topic1 just fine, I get the
following error from topic2. There is a resolved issue on the same thing
here: https://issues.apache.org/jira/browse