Hi
How many partitions do your topics have?
As far as I understand, there is a RocksDB instance for every partition of every KTable, 
and this can add up quickly.
Depending on how many instances you are running, one of them might temporarily have to 
handle the complete load (for example during a rebalance), which will use more memory.
Also, RocksDB memory is allocated outside the JVM heap, so it is harder to monitor, as 
those metrics are not yet exposed through Kafka Streams.
Try to limit the size of the RocksDB block cache and write buffers with a custom 
RocksDBConfigSetter (configured via rocksdb.config.setter), see the sketch below.
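
Something along these lines (a rough sketch only; the class name and the sizes are 
just examples, you need to tune them for your workload):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // Cap the block cache per store (16 MB here, purely illustrative)
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
        options.setTableFormatConfig(tableConfig);
        // Keep the memtables small as well (4 MB each, at most 2 per store)
        options.setWriteBufferSize(4 * 1024 * 1024L);
        options.setMaxWriteBufferNumber(2);
    }
}

and register it in your streams configuration:

props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedMemoryRocksDBConfig.class);

Keep in mind these limits apply per RocksDB instance, so with many partitions and 
KTables the total can still be significant.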

Best regards

Patrik

> On 15.02.2019 at 06:24, P. won <pwon...@gmail.com> wrote:
> 
> Hi,
> 
> I have a Kafka Streams app that currently takes 3 topics and aggregates
> them into a KTable. This app resides inside a microservice which has
> been allocated 512 MB memory to work with. After implementing this,
> I've noticed that the docker container running the microservice
> eventually runs out of memory and was trying to debug the cause.
> 
> My current theory (whilst reading the sizing guide
> https://docs.confluent.io/current/streams/sizing.html) is that over
> time, the increasing records stored in the KTable and by extension,
> the underlying RocksDB, is causing the OOM for the microservice. Does
> Kafka provide any way to find out the memory used by the underlying
> default RocksDB implementation?
