Re: rocksdb max open file descriptor issue crashed application

2020-02-12 Thread Kostas Kloudas
Hi Apoorv, I am not so familiar with the internals of RocksDB and how the number of open files correlates with the number of (keyed) states and the parallelism you have, but as a starting point you can have a look at [1] for recommendations on how to tune RocksDB for large state, and I am also cc'ing…
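As a companion to the tuning guide mentioned above, here is a minimal sketch of the relevant `flink-conf.yaml` settings. The option name `state.backend.rocksdb.files.open` exists in Flink 1.x, but the value shown is purely illustrative; verify the key and its semantics against the docs for your Flink version:

```yaml
# Sketch of RocksDB-related settings in flink-conf.yaml (illustrative values).
state.backend: rocksdb

# Cap the number of files RocksDB may keep open (maps to RocksDB's
# max_open_files; the default of -1 means unlimited). Note this limit
# applies per RocksDB instance, and a TaskManager can host many instances
# (one per keyed-state operator per slot), so the machine-wide total can
# still be much larger.
state.backend.rocksdb.files.open: 20000
```

Because the limit is per instance, lowering it alone may not fix a machine-wide fd exhaustion; it has to be weighed against the operator count and parallelism on each TaskManager.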

Re: rocksdb max open file descriptor issue crashed application

2020-02-11 Thread Apoorv Upadhyay
Hi, below is the error I am getting:

2020-02-08 05:40:24,543 INFO org.apache.flink.runtime.taskmanager.Task - order-steamBy-api-order-ip (3/6) (34c7b05d5a75dbbcc5718acf6b18) switched from RUNNING to CANCELING.
2020-02-08 05:40:24,543 INFO org.apache.flink.runtime.taskmanager…

Re: rocksdb max open file descriptor issue crashed application

2020-02-11 Thread Congxian Qiu
Hi, from the given description you use RocksDBStateBackend and normally have around 20k files open on one machine, but the app suddenly opened 35k files and then crashed. Could you please share which files are open, and what the exception is? (The full taskmanager.log may be helpful.) Best, Congxian Apo…
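To answer the question above — which files the process actually has open — one option on Linux is to read `/proc/<pid>/fd` directly. A minimal sketch (it uses the shell's own PID `$$` purely as a stand-in; in practice you would substitute the TaskManager JVM's PID):

```shell
# Count and inspect open file descriptors via /proc (Linux only).
# Replace $$ with the TaskManager JVM's PID in a real investigation.
pid=$$
fd_count=$(ls "/proc/$pid/fd" | wc -l)
echo "process $pid has $fd_count open file descriptors"

# Show what the descriptors point to; for a RocksDB-backed TaskManager
# you would typically look for large numbers of .sst files here.
ls -l "/proc/$pid/fd" | head
```

Grouping the symlink targets (e.g. counting how many end in `.sst`) usually makes it obvious whether RocksDB SST files, sockets, or something else is consuming the descriptors.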

rocksdb max open file descriptor issue crashed application

2020-02-11 Thread ApoorvK
The Flink app is crashing with a "too many open files" error. The app currently has 300 operators and a state size of 60 GB. It suddenly opened around 35k files, up from roughly 20k a few weeks before, hence the crashes. I have raised both the machine and YARN limits to 60k, hoping it will not crash…
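For the limit-raising step described above, a quick sketch of how to check the per-process limit and where to raise it persistently on Linux (the user name in the comment is hypothetical; the YARN/container limits mentioned in the thread are configured separately):

```shell
# Show the current per-process open-file (nofile) limit for this shell:
ulimit -n

# To raise it persistently on Linux, one common approach is to add lines
# like these to /etc/security/limits.conf for the user running the
# TaskManagers (user name "flink" is an assumption):
#   flink  soft  nofile  60000
#   flink  hard  nofile  60000
```

Note that the limit must be raised for the user that actually launches the JVM (e.g. the YARN NodeManager's container user), and the processes must be restarted to pick it up.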