Peter, Wesley, thanks for your use cases.
There is a KIP discussion about adding a timestamp-based log deletion
policy into Kafka alongside compaction, and I'm thinking about whether it
makes sense to enable both log deletion and log compaction for the general
case of changelog data with expiration.
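For reference, if both policies could be enabled on one topic, the topic
configuration might look something like the sketch below. This is only an
illustration of what the proposal would enable; the combined
`cleanup.policy` syntax is the thing under discussion, not a feature this
thread assumes already exists:

```properties
# Hypothetical topic config combining compaction with time-based deletion:
# compaction keeps the latest value per key, while records older than
# retention.ms become eligible for deletion regardless of key.
cleanup.policy=compact,delete
retention.ms=604800000   # 7 days, in milliseconds
```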
Yes, also classic caching, where you might use memcache with TTLs.
But a different use case for us is sessionizing. We push a high rate of updates
coming from a browser session to our Kafka cluster. If we don’t see an update
for a particular session after some period of time, we say that session
has expired.
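In toy form, the sessionizing check above is just a per-key timeout. The
helper names and TTL here are illustrative, not our actual pipeline:

```python
# Toy sessionizer: a session is considered expired when no update has
# arrived within the TTL window. Timestamps are plain seconds.
SESSION_TTL = 1800  # 30 minutes, illustrative

def expired_sessions(last_seen, now, ttl=SESSION_TTL):
    """Given {session_id: last_update_ts}, return the sessions with no
    update in the last `ttl` seconds."""
    return [sid for sid, ts in last_seen.items() if now - ts > ttl]

last_seen = {"s1": 1000, "s2": 2500}
print(expired_sessions(last_seen, now=3000))  # ['s1']
```

The open question in this thread is getting Kafka itself to drop the
expired keys, rather than running logic like this in a separate consumer.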
One use case is implementing a data retention policy.
-Peter
> On May 12, 2016, at 17:11, Guozhang Wang wrote:
>
> Wesley,
>
> Could you describe your use case a bit more for motivation? Is your data
> source expiring records, and hence you want to auto-"delete" the
> corresponding Kafka records as well?
Wesley,
Could you describe your use case a bit more for motivation? Is your data
source expiring records, and hence you want to auto-"delete" the
corresponding Kafka records as well?
Guozhang
On Thu, May 12, 2016 at 2:35 PM, Wesley Chow wrote:
> Right, I’m trying to avoid explicitly managing TTLs.
Right, I’m trying to avoid explicitly managing TTLs. It’s nice being able to
just produce keys into Kafka without having an accompanying vacuum consumer.
Wes
> On May 12, 2016, at 5:15 PM, Benjamin Manns wrote:
>
> If you send a NULL value to a compacted log, after the retention period it
> will be removed.
If you send a NULL value to a compacted log, after the retention period it
will be removed. You could run a process that reprocesses the log and sends
a NULL to keys you want to purge based on some custom logic.
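The purge pattern described above can be modeled in miniature. This is a
toy sketch of compaction semantics, not a Kafka API: `compact` and
`vacuum` are made-up names, and real compaction also involves segment
boundaries and delete.retention.ms, which are ignored here.

```python
# Toy model of log-compaction tombstone semantics (not a Kafka API).
# A compacted log retains only the latest value per key; a None ("NULL")
# value acts as a tombstone that eventually removes the key entirely.

def compact(log):
    """Given a list of (key, value) records, return the retained
    key->value state, dropping keys whose latest value is a tombstone."""
    latest = {}
    for key, value in log:  # later records win
        latest[key] = value
    return {k: v for k, v in latest.items() if v is not None}

def vacuum(state, should_purge):
    """The 'reprocessing' consumer: emit tombstone records for every
    key that matches some custom purge logic."""
    return [(key, None) for key in state if should_purge(key)]

log = [("a", 1), ("b", 2), ("a", 3)]
state = compact(log)                          # {'a': 3, 'b': 2}
log += vacuum(state, lambda k: k == "a")      # appends ('a', None)
print(compact(log))                           # {'b': 2}
```

This is the "accompanying vacuum consumer" Wes mentions wanting to avoid:
something has to produce the tombstones, since compaction alone never
expires a key by age.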
On Thu, May 12, 2016 at 2:01 PM, Wesley Chow wrote:
> Are there any thoughts on supp