be consumed and
that specific entry for that product needs to be deleted from the KTable
with aggregated data using a tombstone. If I don't do that, the entry will
never be deleted and will stay in the KTable. Is this correct?
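
For example, what I have in mind is producing a record with a null value
for that product key, roughly like the sketch below (the topic name, key,
and serializers are placeholders; this only illustrates the general
tombstone mechanism for a topic consumed as a KTable, not necessarily how
the aggregated KTable itself is built):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TombstoneSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // A record with a null value is a tombstone: the KTable reading
                // this topic drops its entry for the key "product-42".
                producer.send(new ProducerRecord<>("product-topic", "product-42", null));
            }
        }
    }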
Thanks,
Mihaela Stoycheva
On Thu, Apr 19, 2018 at 3:12 PM, Matthias J. Sax wrote:
manually
- clean up old state that is no longer needed?
Regards,
Mihaela Stoycheva
messages and had to be
restarted. In production there are 5 brokers and the replication factor is
3. The version of Kafka Streams that I use is 1.0.1 and the Kafka version
of the broker is 1.0.1. My question is whether this is expected behavior.
Also, is there any way to deal with it?
Regards,
Mihaela
> when you have caching enabled, the value of the record
> has already been serialized before sending to the changelogger while the
> key was not. Admittedly it is not very friendly for trouble-shooting
> related log4j entries..
>
>
> Guozhang
>
>
> On Tue, Mar 27
as JSON and the value logged as a byte
array instead of JSON?
Regards,
Mihaela Stoycheva
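
(For context on the caching Guozhang mentions above: the Streams record
cache is controlled by the cache.max.bytes.buffering setting. The sketch
below only shows where that knob lives; the application id and bootstrap
servers are placeholders, not values from this thread.)

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class CacheConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder values, not from this thread.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // The record cache Guozhang refers to; setting it to 0 disables caching.
            props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
            System.out.println(props);
        }
    }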