On Sat, Jun 30, 2018 at 12:16 AM Matthias J. Sax <matth...@confluent.io> wrote:

> You cannot suppress those records, because both are required for
> correctness. Note that each event might go to a different instance in
> the downstream aggregation -- that's why both records are required.
>
> Not sure what the problem for your business logic is. Note that Kafka
> Streams provides eventual consistency guarantees. What guarantee do you
> need?
Let's say I have a stream of stock orders, each of which is associated with a
state. I'd like to aggregate the orders into records containing a map
(state -> quantity), grouped by (account#, security id), representing changes
in exposure. Currently, as an order advances through its states (e.g., from
"new" to "filled"), the order briefly disappears from the "new" bucket before
it reappears in the "filled" bucket.

As long as subsequent processing steps are performed by Kafka Streams, that's
not a big deal, but things get tricky once legacy systems are involved, where
you can't simply undo a transaction. Having the ability to collapse both
records would simplify this tremendously. The key remains the same for both
records. A rough sketch of the topology I have in mind follows below.

I hope this clarifies the scenario a little bit.
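Roughly, the aggregation looks like the sketch below (Order and ExposureKey
are placeholder types; serdes, equals()/hashCode(), and the topic name
"orders" are made up / omitted for brevity):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class ExposureTopology {

    // Placeholder types; the real ones carry proper serdes.
    static class Order { String account; String securityId; String state; long quantity; }
    static class ExposureKey {
        String account; String securityId;
        ExposureKey(String a, String s) { account = a; securityId = s; }
        // equals()/hashCode() omitted for brevity
    }

    static KTable<ExposureKey, Map<String, Long>> build(StreamsBuilder builder) {
        // Orders arrive as a changelog keyed by order id.
        KTable<String, Order> orders = builder.table("orders");

        return orders
            // Re-key by (account#, security id) -- the repartitioning step
            // you refer to, where old and new version may land on
            // different instances.
            .groupBy((orderId, order) ->
                KeyValue.pair(new ExposureKey(order.account, order.securityId), order))
            .aggregate(
                HashMap::new,
                // Adder: the new version of the order joins its state bucket ...
                (key, order, agg) -> {
                    agg.merge(order.state, order.quantity, Long::sum);
                    return agg;
                },
                // ... and the subtractor retracts the old version. These two
                // calls yield the two separate downstream records: the order
                // briefly vanishes from "new" before it shows up in "filled".
                // (Zero-valued entries could be pruned here as well.)
                (key, order, agg) -> {
                    agg.merge(order.state, -order.quantity, Long::sum);
                    return agg;
                });
    }
}

With this shape, every state change of an order is forwarded as a subtract
record followed by an add record, and that pair is exactly what I'd like to
collapse into a single update before it reaches the legacy systems.

Thanks,
Thilo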