I'm not sure you really need to (or should) be sending data back to the table
with .to(table) -- that was kind of the original issue, since it introduces a
cycle* into your topology (which should be a DAG).

*Of course, it's still technically a cycle if you implement the store updates
manually with a transformer.
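For what it's worth, here is a minimal sketch of that manual approach: a Transformer that puts the new value directly into the table's materialized store. The store name ("entity-store"), the types, and the wiring are my assumptions, not something from this thread:

import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.Transformer
import org.apache.kafka.streams.processor.ProcessorContext
import org.apache.kafka.streams.state.KeyValueStore

// Sketch: update the table's backing store directly instead of producing
// back to the table's source topic. Store name and types are hypothetical.
class StorePutTransformer<K, V>(private val storeName: String) :
    Transformer<K, V, KeyValue<K, V>> {

    private lateinit var store: KeyValueStore<K, V>

    override fun init(context: ProcessorContext) {
        @Suppress("UNCHECKED_CAST")
        store = context.getStateStore(storeName) as KeyValueStore<K, V>
    }

    override fun transform(key: K, value: V): KeyValue<K, V> {
        store.put(key, value)        // visible to subsequent reads right away
        return KeyValue(key, value)  // forward the record downstream
    }

    override fun close() {}
}

// Wiring: the store name must be passed to transform() so Streams connects
// the table's store to this transformer (names hypothetical):
// stream.transform({ StorePutTransformer<String, Entity>("entity-store") }, "entity-store")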
Update - I tried Sophie's suggestion: I implemented a Transformer that
performs puts on the table's backing store, and hid the complexity behind a
Kotlin extension method. So now the code looks like this (pseudocode):
KStream.commit() { // transformer is implemented here }
stream.join(table) { ev
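For the archives, a sketch of what that extension method might look like, reusing the StorePutTransformer sketched above. Only the name commit() comes from the pseudocode; everything else is an assumption:

import org.apache.kafka.streams.kstream.KStream

// Hides the transform() wiring so call sites read as a single step.
fun <K, V> KStream<K, V>.commit(storeName: String): KStream<K, V> =
    transform({ StorePutTransformer<K, V>(storeName) }, storeName)

// Usage (mutate() and the store name are hypothetical):
// stream.join(table) { event, entity -> mutate(entity, event) }
//       .commit("entity-store")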
Thank you for your response, Sophie.
What confuses me is that I have already implemented this same pattern (join
to table, mutate entity, write back to table) in two other Streams
applications without any issue whatsoever. After thinking about it, I think
the difference here is the input data. In t
I think the issue here is that you're basically creating a cycle in your
streams topology, which is generally supposed to be a DAG. If I understand
correctly, rather than writing the new data to the underlying store, you're
sending it to the topic from which the table is built. Is that right?
The p
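If so, that round trip would also explain the stale reads: a put only becomes visible in the table after the record has been produced to the topic and consumed back, so any join that fires in between sees the old value. A sketch of the cycle, with every topic and type name assumed:

import org.apache.kafka.streams.StreamsBuilder

data class Event(val id: String)                       // hypothetical
data class Entity(val id: String, val version: Long)   // hypothetical

fun buildCyclicTopology(builder: StreamsBuilder) {
    // The table is built from "entity-topic"...
    val table = builder.table<String, Entity>("entity-topic")

    // ...and the join result is produced back to that same topic, closing the
    // loop: stream -> join -> "entity-topic" -> table -> join. The update is
    // only visible once it is consumed back, so joins in between read stale data.
    builder.stream<String, Event>("event-topic")
        .join(table) { _, entity -> entity.copy(version = entity.version + 1) }
        .to("entity-topic")
}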
Hey Trey,
as I was reading, a few suggestions came to mind:
1. Could you revert the 0 ms commit interval back to the default? Keeping it
at 0 won't help the situation, since you will try to commit on every poll().
2. I don't know how you actually wrote your code, but you could try
something really simple such as print statements
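For reference, that interval is the commit.interval.ms Streams config, which defaults to 30000 ms under at-least-once processing. A sketch with the 0 ms override removed; the application id and bootstrap servers are placeholders:

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = Properties().apply {
    put(StreamsConfig.APPLICATION_ID_CONFIG, "entity-join-app")    // placeholder
    put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // placeholder
    // put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 0)  <- remove this override
    // so that the default commit interval (30000 ms) applies again.
}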
This is my third Kafka Streams application, and I thought I'd gotten to know
the warts and how to use it correctly. But I'm beating my head against
something that I just cannot explain: values written to a table, when later
read back in a join operation, are stale.
Assume the following simplifi