Hi Chinmay
> Why wouldn't you want to use a changelog?
Because log compaction won't work here: I want to cache unique ids which,
unlike timestamps, will never be repeated as keys, so compaction would never
shrink the topic. But for restoration I have to use the changelog. Also, my
StreamTask should consume that topic to add new unique ids to the KV store.
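Roughly what I have in mind is the sketch below. This is only an illustration: the store name "id-store" is a placeholder, and I'm assuming the low-level StreamTask API, with the id as the message key and the time we first saw it as the value.

```java
import org.apache.samza.config.Config;
import org.apache.samza.storage.kv.KeyValueStore;
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.task.InitableTask;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskContext;
import org.apache.samza.task.TaskCoordinator;

public class UniqueIdTask implements StreamTask, InitableTask {
    private KeyValueStore<String, Long> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(Config config, TaskContext context) {
        // "id-store" is a placeholder; it must match the store name in the job config.
        store = (KeyValueStore<String, Long>) context.getStore("id-store");
    }

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        // Record each unique id along with the time we first saw it,
        // so old ids can later be purged by timestamp.
        store.put((String) envelope.getKey(), System.currentTimeMillis());
    }
}
```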
> Does KV-store consume automatically from a Kafka topic?
Yes, if you've configured a changelog stream for your store.
> Does it consume only on restore()?
It consumes only during container initialization (again, assuming you have a
changelog configured).
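For reference, the store-plus-changelog wiring in the job's properties file would look roughly like this (the store name, changelog topic, and serde names are placeholders, and the serdes must be registered under serializers.registry):

```properties
# Placeholder store name "id-store"; RocksDB-backed KV store.
stores.id-store.factory=org.apache.samza.storage.kv.RocksDbKeyValueStorageEngineFactory
stores.id-store.key.serde=string
stores.id-store.msg.serde=long
# The changelog topic Samza writes every store update to, and
# replays to restore the store during container initialization.
stores.id-store.changelog=kafka.id-store-changelog
```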
> implement the StreamTask job to consume
Thanks Yi Pan, I have one more question.
Does the KV-store consume automatically from a Kafka topic? Does it consume
only on restore()? If so, do I have to implement the StreamTask job to
consume a Kafka topic and call the add() method?
On Fri, Oct 2, 2015 at 2:01 PM, Yi Pan wrote:
> Hi, Jae Hyeon,
Hi, Jae Hyeon,
Good to see you back on the mailing list again! Regarding your questions,
please see the answers below:
> My KeyValueStore usage is a little bit different from usual cases because
> I have to cache all unique ids for the past six hours, which can be
> configured for the ret
I found the following statement in the Samza documentation:
"Periodically the job scans over both stores and deletes any old events
that were not matched within the time window of the join."
It seems that I have to implement purging of the KeyValueStore manually; did
I understand that correctly?
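If so, the scan-and-delete the docs describe would look something like this self-contained sketch. A plain java.util.Map stands in for the real KeyValueStore here, purely to illustrate the purge logic; entries map a unique id to the timestamp it was first seen.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

// Sketch of a periodic "scan and delete" purge over a six-hour window.
// In a real job this would run from a WindowableTask's window() callback
// against the actual KeyValueStore; a Map is used here for illustration.
class SixHourPurge {
    static final long RETENTION_MS = 6 * 60 * 60 * 1000L;

    // Remove every entry older than the retention window, relative to `now`.
    // Returns the number of entries removed.
    static int purge(Map<String, Long> store, long now) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it = store.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue() > RETENTION_MS) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }
}
```

With a real store, the iteration would use store.all() (or store.range()) and store.delete(key) instead of the Map iterator.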
On Fri, Oct 2
Hi Samza devs and users,
This is my first try with KeyValueStore and I am really excited!
I glanced through the TaskStorageManager source code; it looks like it
creates consumers for the stores, and I am wondering how Kafka cleanup will
be propagated to the KeyValueStore.
My KeyValueStore usage is a little bit d