> From your answer I understand that whenever a product is deleted, a
> message needs to be consumed by the Kafka Streams application and that
> specific entry for that product needs to be deleted from the KTable
> with aggregated data using a tombstone.

Exactly.

> If I don't do that, the entry will never be deleted and will stay in
> the KTable. Is this correct?

That's correct.

Note that for windowed KTables this would not be required, because a
retention time applies to them. But it seems you are using non-windowed
KTables, and thus you need to "clean up" manually.
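
For illustration only (this sketch is not from the thread; the
"product-events" topic name and the plain-text "DELETED" marker value
are assumptions on my side), a count-per-product aggregation that drops
a product's entry by returning `null` when its delete event arrives
could look roughly like this:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class ProductCounts {

    // key = product id, value = event type as plain text (assumption
    // for this sketch; the value "DELETED" marks a deleted product)
    public static KTable<String, Long> build(final StreamsBuilder builder) {
        return builder
            .stream("product-events",
                    Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            .aggregate(
                () -> 0L,            // initializer: no purchases yet
                (productId, event, count) ->
                    "DELETED".equals(event)
                        ? null       // null removes the entry from the KTable
                        : count + 1, // any other event counts one purchase
                Materialized.with(Serdes.String(), Serdes.Long()));
    }
}

As far as I understand, deleting the entry this way also writes a
tombstone to the store's changelog topic, so log compaction eventually
removes it there as well.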


-Matthias


On 4/19/18 3:37 PM, Mihaela Stoycheva wrote:
> I will try to clarify what I mean by "old state that is no longer needed".
> Let's say I consume messages about products that have been sold to customers,
> and I keep a KTable with aggregated data: for each product with a specific
> id, the number of times it has been bought. At some point a product that has
> been bought many times is no longer available and will never be again -
> let's say it's deleted. Then the aggregated data about it is old and no
> longer needed. From your answer I understand that whenever a product is
> deleted, a message needs to be consumed by the Kafka Streams application and
> that specific entry for that product needs to be deleted from the KTable
> with aggregated data using a tombstone. If I don't do that, the entry will
> never be deleted and will stay in the KTable. Is this correct?
> 
> Thanks,
> Mihaela Stoycheva
> 
> On Thu, Apr 19, 2018 at 3:12 PM, Matthias J. Sax <matth...@confluent.io>
> wrote:
> 
>> Not sure what you mean by "old state that is no longer needed"?
>>
>> Key-value entries are kept forever, and there is no TTL. If you want to
>> delete something from the store, you can return `null` as the aggregation
>> result though.
>>
>> -Matthias
>>
>> On 4/19/18 2:28 PM, adrien ruffie wrote:
>>> Hi Mihaela,
>>>
>>>
>>> By default, a KTable already has log-compacted behavior.
>>>
>>> Therefore you don't need to clean up manually.
>>>
>>>
>>> Best regards,
>>>
>>>
>>> Adrien
>>>
>>> ________________________________
>>> From: Mihaela Stoycheva <mihaela.stoych...@gmail.com>
>>> Sent: Thursday, April 19, 2018 13:41:22
>>> To: users@kafka.apache.org
>>> Subject: Is KTable cleaned up automatically in a Kafka Streams
>>> application?
>>>
>>> Hello,
>>>
>>> I have a Kafka Streams application that is consuming from two topics and
>>> internally aggregating, transforming and joining data. I am using a KTable
>>> as the result of an aggregation, and my question is whether KTables are
>>> cleaned up by some mechanism of Kafka Streams, or whether this is something
>>> that I have to do manually - clean up old state that is no longer needed?
>>>
>>> Regards,
>>> Mihaela Stoycheva
>>>
>>
>>
> 
