[ 
https://issues.apache.org/jira/browse/KAFKA-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-9923.
----------------------------------------
    Resolution: Not A Problem

[~cadonna] pointed out that this actually isn't a problem; it has already been 
fixed by KAFKA-5804. We could certainly stand to do some cleanup around the 
duplicates handling, but at least we aren't losing data!

> Join window store duplicates can be compacted in changelog 
> -----------------------------------------------------------
>
>                 Key: KAFKA-9923
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9923
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>            Reporter: Sophie Blee-Goldman
>            Assignee: Bruno Cadonna
>            Priority: Blocker
>             Fix For: 2.6.0
>
>
> Stream-stream joins use the regular `WindowStore` implementation but with 
> `retainDuplicates` set to true. To allow for duplicates while using the same 
> unique-key underlying stores we just wrap the key with an incrementing 
> sequence number before inserting it.
> This wrapping occurs at the innermost layer of the store hierarchy, which 
> means the duplicates must first pass through the changelogging layer. At this 
> point the keys are still identical. So, we end up sending the records to the 
> changelog without distinct keys and therefore may lose the older of the 
> duplicates during compaction.
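For context, the duplicate mechanism described in the issue can be sketched roughly as follows. This is an illustrative toy, not Kafka's actual `WindowStore` internals; the class and method names here are hypothetical:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only (hypothetical class, not a real Kafka Streams API):
// a unique-key store can retain duplicates if each insert wraps the serialized
// key with an incrementing sequence number, making every physical key distinct.
public class SeqnumKeyWrapper {
    private final AtomicInteger seqnum = new AtomicInteger(0);

    // Append a 4-byte sequence number to the raw key bytes, so two inserts
    // of the same logical key yield different underlying store keys.
    public byte[] wrap(byte[] rawKey) {
        return ByteBuffer.allocate(rawKey.length + Integer.BYTES)
                .put(rawKey)
                .putInt(seqnum.getAndIncrement())
                .array();
    }

    public static void main(String[] args) {
        SeqnumKeyWrapper wrapper = new SeqnumKeyWrapper();
        byte[] first = wrapper.wrap("A".getBytes());
        byte[] second = wrapper.wrap("A".getBytes());
        // Same logical key, distinct physical keys: both duplicates survive.
        System.out.println(java.util.Arrays.equals(first, second)); // false
    }
}
```

The bug as originally reported was that this wrapping happened below the changelogging layer, so the changelog topic saw the identical unwrapped keys and log compaction could discard all but the latest record.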



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
