Subject: Too many open files in kafka 0.9
There is KAFKA-3317 which is still open.
Have you seen this ?
http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+
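[A general note, not from the thread: KAFKA-3317-style "too many open files" failures are commonly mitigated by raising the broker's file-descriptor limit. A minimal sketch, assuming a Linux host; the pid lookup and the limit values are illustrative, not taken from this thread:]

```shell
# Show the soft open-file limit of the current shell.
ulimit -n

# Inspect the limit of a running broker, if any
# (pid lookup is illustrative; query your service manager in practice).
KAFKA_PID=$(pgrep -f kafka.Kafka 2>/dev/null | head -n 1)
if [ -n "$KAFKA_PID" ]; then
  grep 'open files' /proc/"$KAFKA_PID"/limits
fi

# Raise the soft limit before starting the broker; to persist it,
# add entries to /etc/security/limits.conf, e.g. (illustrative values):
#   kafka  soft  nofile  100000
#   kafka  hard  nofile  100000
ulimit -n 100000 2>/dev/null || echo "could not raise limit (hard limit too low)"
```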
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE TECHNOLOGIES) wrote:
> ... difference with the other brokers, so is it safe to remove these __consumer_offsets-XX directories if they have not been accessed for one day?
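[A general caution, not from the thread: deleting partition directories by hand under a live broker is risky, and the __consumer_offsets topic is compacted rather than time-retained, so the broker itself decides when its segments go away. A safer first step is to see which partition directories actually hold the files. A sketch, assuming a Linux host; the log directory path is an assumption (use your broker's log.dirs value):]

```shell
# Log directory is an assumption; point it at your broker's log.dirs.
LOG_DIR=${LOG_DIR:-/var/kafka-logs}

# Partition directories sorted by last-modified time, newest first.
ls -dlt "$LOG_DIR"/__consumer_offsets-* 2>/dev/null | head

# File count per partition directory (each segment contributes
# .log/.index files, which the broker holds open as descriptors).
for d in "$LOG_DIR"/__consumer_offsets-*; do
  [ -d "$d" ] || continue
  printf '%6d %s\n' "$(ls "$d" | wc -l)" "$d"
done | sort -rn | head
```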
-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, November 29, 2017 19:41
To: users@kafka.apache.org
Subject: Re: Too many open files in kafka 0.9
We have a cluster with 3 brokers and Kafka 0.9.0.1. One week ago, we decided to reduce log.retention.hours from 10 days to 2 days. We restarted the cluster and it was OK. But one broker accumulates more and more data every day, and two days later it crashed with the message "too many open files". lsof returns
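[A general way to see what lsof is counting, not from the thread; assuming a Linux host, and the pid lookup is illustrative: count descriptors via /proc, and break the lsof output down by directory to see whether __consumer_offsets segments dominate:]

```shell
# Find the broker pid (illustrative; use your service manager's query).
KAFKA_PID=$(pgrep -f kafka.Kafka 2>/dev/null | head -n 1)
PID=${KAFKA_PID:-$$}   # fall back to the current shell so the sketch runs

# Descriptor count without lsof: one entry per fd in /proc/<pid>/fd.
ls /proc/"$PID"/fd | wc -l

# If lsof is available, group open files by directory to spot hot spots
# such as __consumer_offsets-* segment files.
if command -v lsof >/dev/null 2>&1; then
  lsof -p "$PID" 2>/dev/null | awk 'NF { print $NF }' \
    | sed 's|/[^/]*$||' | sort | uniq -c | sort -rn | head
fi
```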