Hi,

Any thoughts on the issue below? The behavior should be reproducible by performing both a put and a get against the store (with caching enabled) while processing each record from the topic, at a volume of 2-3 million records every 15 minutes, with each JSON record averaging roughly 400-500 KB. The app exhausts its total memory within 24 hours.
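For reference, here is a minimal sketch of the access pattern (the store name, types, and merge logic are placeholders, not our actual code):

    import org.apache.kafka.streams.processor.AbstractProcessor;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.state.KeyValueStore;

    // Sketch of the per-record upsert described in the thread below:
    // one get and one put against a cached key-value store per record.
    public class UpsertProcessor extends AbstractProcessor<String, String> {

        private KeyValueStore<String, String> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(final ProcessorContext context) {
            super.init(context);
            // "json-store" is a placeholder for our actual store name.
            store = (KeyValueStore<String, String>) context.getStateStore("json-store");
        }

        @Override
        public void process(final String key, final String value) {
            final String existing = store.get(key);
            if (existing != null) {
                store.put(key, merge(existing, value)); // update the existing entry
            } else {
                store.put(key, value);                  // add the new record
            }
        }

        private String merge(final String existing, final String update) {
            return update; // placeholder for our actual update logic
        }
    }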
Thanks,
Ashok

On Wed, Aug 15, 2018 at 5:15 AM, AshokKumar J <ashokkumar...@gmail.com> wrote:

> Disabling the stream cache prevents the unbounded memory usage, however
> the throughput is then low (with the RocksDB cache still enabled). Can you
> please advise why the cached object references are not released in time
> (for GC cleanup) and instead grow continuously?
>
> On Tue, Aug 14, 2018 at 11:17 PM, AshokKumar J <ashokkumar...@gmail.com> wrote:
>
>> Hi,
>>
>> We have a streams application that uses the low-level Processor API. We
>> persist the data into a key-value state store. For each record we retrieve
>> from the topic, we perform a lookup against the store to see if it exists;
>> if it does, we update the existing entry, otherwise we add the new record.
>> With this we are running into a significant memory issue: whatever memory
>> we allocate gets fully utilized, and all the objects end up in the old
>> generation. Caching has been enabled, and we assigned 40% of the total
>> memory to the cache. For example, with 24 GB of total application memory
>> and a 12 GB cache, we would expect roughly 12 GB to reside in the old
>> generation and the rest to stay in the young generation, but for some
>> reason everything is promoted into the old generation and we eventually
>> run out of memory within a day. Please see the object dominator tree
>> below. Kindly suggest.
>>
>> https://files.slack.com/files-pri/T47H7EWH0-FC8EZ9L66/image.png
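PS, in case it helps to reproduce: the cache sizing mentioned in the quoted
thread corresponds to a configuration along these lines (application id and
bootstrap servers are placeholders; cache.max.bytes.buffering is the Streams
record-cache setting being discussed):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class CacheConfigSketch {
        public static void main(final String[] args) {
            final Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "json-upsert-app"); // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");  // placeholder
            // ~12 GB record cache, i.e. 40% of a 24 GB total. Note this cache is
            // on-heap, so its entries compete with everything else in the JVM heap.
            props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 12L * 1024 * 1024 * 1024);
            // Setting it to 0 is what "disabling the stream cache" above refers to:
            // props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0L);
        }
    }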