[ https://issues.apache.org/jira/browse/KAFKA-9603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091641#comment-17091641 ]

Lovro Pandžić edited comment on KAFKA-9603 at 4/24/20, 3:04 PM:
----------------------------------------------------------------

Yes, that's what I meant, but [~biljazovic] warned me about some other 
circumstances...

Our application runs inside Docker, with the RocksDB stateDir volume mapped to 
the host.

On kafka-streams 2.2.0 the problem is visible with `lsof` from within the 
container (the number of open file descriptors keeps rising).

On kafka-streams 2.1.0 and 2.1.1 the `lsof` count is stable from within the 
container, but not from the host: `lsof` on the host shows the same behavior, 
with the descriptor count rising. The open-files problem is present in both 
cases (both versions, whether observed from the host or from the container).
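
For cross-checking the `lsof` numbers, a minimal sketch of reading the JVM's own 
descriptor count, assuming a Unix-like JVM (e.g. the OpenJDK/CentOS image from 
the environment above) whose OperatingSystemMXBean implements 
com.sun.management.UnixOperatingSystemMXBean; the class name FdMonitor is just 
for illustration:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdMonitor {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (!(os instanceof UnixOperatingSystemMXBean)) {
            System.err.println("Open-descriptor counts are only exposed on Unix-like JVMs");
            return;
        }
        UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
        while (true) {
            // Counts descriptors held by this JVM only, i.e. it should track
            // lsof run inside the container rather than lsof run on the host.
            System.out.printf("open fds: %d / max: %d%n",
                              unixOs.getOpenFileDescriptorCount(),
                              unixOs.getMaxFileDescriptorCount());
            Thread.sleep(60_000L);
        }
    }
}
{code}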



> Number of open files keeps increasing in Streams application
> ------------------------------------------------------------
>
>                 Key: KAFKA-9603
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9603
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 2.4.0, 2.3.1
>         Environment: Spring Boot 2.2.4, OpenJDK 13, Centos image
>            Reporter: Bruno Iljazovic
>            Priority: Major
>
> The problem appeared when upgrading from *2.0.1* to *2.3.1*.
> Relevant Kafka Streams code:
> {code:java}
> KStream<String, Event1> events1 =
>     builder.stream(FIRST_TOPIC_NAME,
>                    Consumed.with(stringSerde, event1Serde, event1TimestampExtractor(), null))
>            .mapValues(...);
>
> KStream<String, Event2> events2 =
>     builder.stream(SECOND_TOPIC_NAME,
>                    Consumed.with(stringSerde, event2Serde, event2TimestampExtractor(), null))
>            .mapValues(...);
>
> var joinWindows = JoinWindows.of(Duration.of(1, MINUTES).toMillis())
>                              .until(Duration.of(1, HOURS).toMillis());
>
> events2.join(events1, this::join, joinWindows,
>              Joined.with(stringSerde, event2Serde, event1Serde))
>        .foreach(...);
> {code}
> The number of open *.sst files keeps increasing until it eventually hits the 
> OS limit (65536) and causes this exception:
> {code:java}
> Caused by: org.rocksdb.RocksDBException: While open a file for appending: /.../0_8/KSTREAM-JOINOTHER-0000000010-store/KSTREAM-JOINOTHER-0000000010-store.1579435200000/001354.sst: Too many open files
>       at org.rocksdb.RocksDB.flush(Native Method)
>       at org.rocksdb.RocksDB.flush(RocksDB.java:2394)
> {code}
> Here are example files that are opened and never closed:
> {code:java}
> /.../0_27/KSTREAM-JOINTHIS-0000000009-store/KSTREAM-JOINTHIS-0000000009-store.1582459200000/000114.sst
> /.../0_27/KSTREAM-JOINOTHER-0000000010-store/KSTREAM-JOINOTHER-0000000010-store.1582459200000/000065.sst
> /.../0_29/KSTREAM-JOINTHIS-0000000009-store/KSTREAM-JOINTHIS-0000000009-store.1582156800000/000115.sst
> /.../0_29/KSTREAM-JOINTHIS-0000000009-store/KSTREAM-JOINTHIS-0000000009-store.1582459200000/000112.sst
> /.../0_31/KSTREAM-JOINTHIS-0000000009-store/KSTREAM-JOINTHIS-0000000009-store.1581854400000/000051.sst
> {code}
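
Since the failure above is RocksDB hitting the open-file limit, a commonly 
suggested mitigation is to cap RocksDB's max_open_files through a custom 
RocksDBConfigSetter. A minimal sketch, assuming Kafka Streams 2.3+ (where 
RocksDBConfigSetter also has a close() hook); the class name 
BoundedOpenFilesConfigSetter and the limit of 1000 are placeholders, and 
bounding the limit constrains descriptor usage per store rather than fixing 
whatever regression is leaking the files:

{code:java}
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class BoundedOpenFilesConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // The RocksDB default (-1) keeps every opened table file open indefinitely;
        // a positive value makes RocksDB evict idle table readers instead.
        // 1000 is a placeholder and applies per RocksDB instance (each store
        // segment), so the process-wide total can still be a multiple of it.
        options.setMaxOpenFiles(1000);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Nothing was allocated in setConfig, so there is nothing to release here.
    }
}
{code}

The setter would be registered via the rocksdb.config.setter property, e.g. 
props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedOpenFilesConfigSetter.class);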



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
