Interesting - can you share those thread dumps?
Also, your process file handle limit is extremely low (and different
from what seems to be the default for your user id, based on your earlier
message). You should bump that up to a few tens of thousands at
least.
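For reference, a hedged sketch of how this is typically checked and raised on Linux (the limits.conf entries below are illustrative assumptions, not values from this thread):

```shell
# Show the current soft and hard limits for open file descriptors.
ulimit -Sn
ulimit -Hn

# Raise the soft limit up to the hard limit, for this shell session only.
ulimit -Sn "$(ulimit -Hn)"

# To make a higher limit persistent, entries like these usually go in
# /etc/security/limits.conf (the "kafka" user and 65536 are assumptions):
#   kafka  soft  nofile  65536
#   kafka  hard  nofile  65536
```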
Joel
On Fri, Apr 11, 2014 at 08:13:34A
By the way, we may have found the issue ...
Going through the thread dump, we found that 4-5 threads were blocked on
log4j callAppenders and 2-3 threads were in IN_NATIVE state while trying to
write logs to disk. The network threads were therefore blocked on the log4j
threads, thus hanging the Kafka broker.
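If synchronous file appenders are the bottleneck, one common mitigation in log4j 1.x is to wrap the file appender in an AsyncAppender, so logging callers enqueue events instead of blocking on disk I/O. A minimal sketch (the "FILE" appender name and the buffer size are assumptions; note that log4j 1.x only supports AsyncAppender through the XML configuration format, not log4j.properties):

```xml
<!-- log4j.xml fragment: "FILE" is an assumed, already-defined file appender. -->
<appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
  <!-- Number of events buffered before callers block; 1024 is an example value. -->
  <param name="BufferSize" value="1024"/>
  <appender-ref ref="FILE"/>
</appender>

<root>
  <priority value="info"/>
  <appender-ref ref="ASYNC"/>
</root>
```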
When you see this happening (on broker 4 in this instance), can you
confirm the Kafka process handle limit?
cat /proc/&lt;pid&gt;/limits
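Concretely, something like the following (the `pgrep` pattern is an assumption about how the broker process is named; adjust it to your start script):

```shell
# Find the broker's pid; falls back to the current shell ("self") so the
# command still runs for illustration when no broker is present.
BROKER_PID=$(pgrep -f kafka.Kafka | head -n 1)
cat /proc/"${BROKER_PID:-self}"/limits

# The relevant row is "Max open files": its soft limit is what the running
# process actually gets, regardless of the login shell's ulimit.
```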
On Thu, Apr 10, 2014 at 09:20:51AM +0530, Arya Ketan wrote:
> *Issue : *Kafka cluster goes to an unresponsive state after some time with
> producers getting Socket time-outs on every request made.
Hi Guozhang,
I just have 1 producer client per producer machine.
The producer is in a singleton scope.
Is there a possibility to close producer sockets by force or use a producer
socket pool??
--
Arya
On Thu, Apr 10, 2014 at 11:38 AM, Guozhang Wang wrote:
Hello Arya,
The broker seems dead due to too many open file handles, which are likely
due to too many open sockets. How many producer clients do you have on
these 5 machines, and could you check whether there is a socket leak?
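One quick way to check for a leak (a sketch; the `pgrep` pattern is an assumption about the process name):

```shell
# Count open file descriptors held by a process via /proc; falls back to
# the current shell ("self") so the command runs even without a broker.
PID=$(pgrep -f kafka.Kafka | head -n 1)
ls /proc/"${PID:-self}"/fd | wc -l

# Running `ls -l` on the same fd directory shows which entries are
# sockets; a count that grows steadily across repeated runs suggests
# producers are being created but never closed.
```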
Guozhang
On Wed, Apr 9, 2014 at 8:50 PM, Arya Ketan wrote:
*Issue : *Kafka cluster goes to an unresponsive state after some time, with
producers getting Socket time-outs on every request made.
*Kafka Version* - 0.8.1
*Machines* : VMs, 2 cores, 8 GB RAM, Linux, 3-node cluster.
ulimit -a
core file size (blocks, -c) 0
data seg size (kby