Hi,

In our streaming instance, the internal (Kafka Streams) caching has been disabled and RocksDB caching has been enabled, with the override shown below. Although the heap is restricted to 36GB, memory utilization grows past 100GB within a week and the process eventually runs out of memory. Profiling confirms that garbage collection keeps the on-heap usage within the 36GB limit; the additional memory does not show up in the profiler, and we suspect it is off-heap memory growing unbounded.
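For context, here is our own back-of-the-envelope estimate of the off-heap footprint implied by the RocksDB settings further down (assuming one RocksDB instance per state store partition; the actual store/partition count is not shown here):

    block cache:            512 MB
    write buffers:          5 x 512 MB = 2560 MB
    per RocksDB instance:   ~3 GB (plus index/filter blocks, since cacheIndexAndFilterBlocks is false)

With a few dozen store partitions this alone could reach the range we are seeing, which is part of why we suspect off-heap memory.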
We have also tried enabling the Kafka Streams cache (5GB) and disabling the RocksDB config setter (commented out as shown below). However, we still see the same behaviour: memory grows without bound over time. We process about 20 million records every 20 minutes (average message size ~1KB). We have ensured that the iterators are closed (this happens once a day). Can you please review and advise what could cause this behaviour?

//streamsConfig.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, RocksDBOverride.class);

Kafka Broker / Kafka Streams version: 1.0.0
RocksDB: 5.7.3

Command:
java -Xms12g -Xmx36g -XX:MetaspaceSize=576m -XX:+UseG1GC -XX:ParallelGCThreads=8 -XX:MaxGCPauseMillis=80 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -cp /scripts/device_metrics.jar:/libs/kafka/* -Dlog4j.configuration=file:/cfg/device_metrics_log4j.properties org.ssd.devicemetrics /cfg/device_metrics.properties

RocksDB config setter:
BlockBasedTableConfig tableConfig = new org.rocksdb.BlockBasedTableConfig();
BloomFilter bloomFilter = new BloomFilter();
tableConfig.setBlockCacheSize(512 * 1024 * 1024L);   // 512MB
tableConfig.setBlockSize(64 * 1024L);                 // 64KB
tableConfig.setCacheIndexAndFilterBlocks(false);
tableConfig.setFilter(bloomFilter);
options.setTableFormatConfig(tableConfig);
options.setWriteBufferSize(512 * 1024 * 1024L);       // 512MB
options.setMaxWriteBufferNumber(5);
options.setCompressionType(CompressionType.LZ4_COMPRESSION);

Thanks,
Ashok