Hi,

There is no way to enable caching on the in-memory store - by definition it is already cached. However, the in-memory store will write each update to the changelog (regardless of context.commit()), which seems to be the issue you are having?
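For reference, a minimal sketch of how such a store is typically declared with the 0.10.1 Processor API (the store name "Counts" and the serdes are illustrative assumptions, not taken from this thread):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.processor.StateStoreSupplier;
    import org.apache.kafka.streams.state.Stores;

    // In-memory store: change-logging is enabled by default, so every
    // put() is sent to the backing changelog topic when the store is
    // flushed, independent of context.commit().
    StateStoreSupplier countStore = Stores.create("Counts")
            .withKeys(Serdes.String())   // e.g. log-category ID
            .withValues(Serdes.Long())   // e.g. count
            .inMemory()
            .build();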
When you say large, how large? Have you tested it and observed that it puts load on the broker?

Thanks,
Damian

On Wed, 11 Jan 2017 at 06:10 Daisuke Moriya <dmor...@yahoo-corp.jp> wrote:

Hi,

I am developing a simple log-counting application using Kafka Streams 0.10.1.1. Its implementation is almost the same as the WordCountProcessor in the Confluent documentation [http://docs.confluent.io/3.1.1/streams/developer-guide.html#processor-api]. I am using an in-memory state store; its key is the ID of a log category and its value is the count.

All changelog records are written to the broker at context.commit() for fault tolerance, but since the data I handle is large and the number of distinct keys is large, processing takes a long time. Even if the changelog topic is compacted on the broker, this puts load on the broker. Instead of writing every update at context.commit(), I would like to write only the latest record for each key. This would reduce the load on the broker, and I do not think it would have any negative impact on fault tolerance.

If I were using a persistent state store, I could achieve this by enabling caching, but I couldn't find a way to accomplish it with the in-memory state store. Is this possible?

Thank you,

--
Daisuke
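[Editor's note: the persistent-store variant Daisuke refers to would look roughly like the sketch below under 0.10.1; the store name and serdes are placeholders, not taken from the thread. With the KIP-63 record cache, a flush forwards only the latest value per key since the previous flush to the changelog, rather than every intermediate update.]

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.processor.StateStoreSupplier;
    import org.apache.kafka.streams.state.Stores;

    // Persistent (RocksDB-backed) store with record caching enabled:
    // on flush/commit, only the latest value per key is written to the
    // changelog topic, deduplicating intermediate updates.
    StateStoreSupplier countStore = Stores.create("Counts")
            .withKeys(Serdes.String())   // e.g. log-category ID
            .withValues(Serdes.Long())   // e.g. count
            .persistent()
            .enableCaching()
            .build();

The cache size is governed by the "cache.max.bytes.buffering" setting in StreamsConfig, so larger values mean more deduplication between flushes.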