Hello,
As I was confused that in my tests the exception on restoring the keyed state backend only happens on scale-in/node-fail and not on scale-out, I re-ran the scale-out test case. And indeed, there the exception from HeapKeyedStateBackendBuilder does not happen, and restoring the keyed state backend works.
Situation is still the same, so it's no problem with the combination of reinterpretAsKeyedStream and Elastic Scaling in Reactive Mode.
This time I was able to check the logs of the task managers, and it seems to be a serialization problem of my state on recovery from a checkpoint after elastic scaling. Besides the ment […] et more insight.
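Only as an illustrative sketch of where that serialization comes into play, with a hypothetical state class MyStateClass: the TypeInformation declared in the state descriptor determines which serializer Flink uses when the keyed state is written to a checkpoint and restored from it, e.g. after a rescaling event.

import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeInformation;

// Declare the state type explicitly; if Flink cannot derive a proper serializer
// for the class, it falls back to Kryo (a generic type), which among other
// things does not support state schema evolution.
ValueStateDescriptor<MyStateClass> descriptor =
        new ValueStateDescriptor<>("my-state", TypeInformation.of(MyStateClass.class));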
D.
On Fri, Jan 7, 2022 at 9:40 PM Martin wrote:
> I changed my Flink job to have an explicit keyBy instead of
> reinterpretAsKeyedStream.
> The situation is still the same, so it's no problem with the combination of
> reinterpretAsKeyedStream and Elastic Scaling in Reactive Mode.
I changed my Flink job to have an explicit keyBy instead of reinterpretAsKeyedStream. The situation is still the same, so it's no problem with the combination of reinterpretAsKeyedStream and Elastic Scaling in Reactive Mode.
This time I was able to check the logs of the task managers, and it seems to be a serialization problem of my state on recovery from a checkpoint after elastic scaling.
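A minimal sketch of that change, assuming afterB is the output of the non-keyed function (b) and Event::getKey is the key selector (both are placeholder names, not from the actual job):

// Before: the stream after (b) was only re-declared as keyed, no shuffle happens:
// KeyedStream<Event, String> keyed =
//         DataStreamUtils.reinterpretAsKeyedStream(afterB, Event::getKey);

// After: an explicit keyBy repartitions the records by key before they reach (c).
KeyedStream<Event, String> keyed = afterB.keyBy(Event::getKey);
keyed.process(new KeyedProcessC());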
Thanks, David, for the hints.
I checked the usage of the state API, and to me it seems to be correct, but I am a new Flink user.
Checkpoints happen every minute; I trigger the scaling after 30 minutes. The source and sink are Kafka topics in EXACTLY_ONCE mode.
I tried to simplify the code, but did
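A minimal sketch of that setup; the one-minute interval and EXACTLY_ONCE mode are taken from the description above, while the topic name, bootstrap servers and the use of the Flink 1.14 KafkaSink builder are assumptions:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Checkpoint every minute in exactly-once mode.
env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);

// Exactly-once Kafka sink; a transactional id prefix is required in this mode.
KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("kafka:9092")
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("output-topic")
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .setTransactionalIdPrefix("sequence-job")
        .build();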
OK, my first intuition would be some kind of misuse of the state API. Another
guess would be: has any checkpoint completed prior to triggering the
re-scaling event?
I'll also try to verify the scenario you've described, but these would be
the things that I'd check first.
D.
On Fri, Jan 7, 2022 at
Hello David,
Right now I can't share the complete code, but I will try in the coming days to simplify it and reduce it to a minimal version that still triggers the issue.
First I will check if an explicit keyBy instead of reinterpretAsKeyedStream fixes the issue. If yes, that would suggest, for me, that it's a bug with
Would you be able to share the code of your test by any chance?
Best,
D.
On Fri, Jan 7, 2022 at 10:06 AM Martin wrote:
> Hello David,
>
> I have a test setup where the input is always the same.
> After processing, I check in all the output whether each sequence number is
> used just once.
>
> Anot
Hello David,
I have a test setup where the input is always the same. After processing, I check in all the output whether each sequence number is used just once.
Another output field is a random UUID generated on startup of a task (in the open method of the keyed process function (c)). In the output I
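A minimal sketch of what such a keyed process function (c) could look like, assuming String keys and a simple String output; all class, state and field names are placeholders:

import java.util.UUID;

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class SequenceNumberFunction
        extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Long> sequence;
    private transient String taskUuid;

    @Override
    public void open(Configuration parameters) {
        // Keyed state holding the last sequence number emitted for the current key.
        sequence = getRuntimeContext().getState(
                new ValueStateDescriptor<>("sequence", Types.LONG));
        // New UUID per task instance, so it changes whenever the task is
        // redeployed, e.g. after a rescaling event.
        taskUuid = UUID.randomUUID().toString();
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        Long current = sequence.value();
        long next = (current == null) ? 1L : current + 1L;
        sequence.update(next);
        out.collect(ctx.getCurrentKey() + "," + next + "," + taskUuid);
    }
}

If the keyed state is restored correctly after rescaling, the sequence numbers keep increasing per key; if it is lost, they restart at 1, which would show up as duplicates in the uniqueness check described above.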
Hi Martin,
_reinterpretAsKeyedStream_ should definitely work with the reactive mode;
if it doesn't, it's a bug that needs to be fixed.
> For test use cases (3) and (4), the state of the keyed process function (c)
> seems to be available for only around 50% of the events processed after
> scale-in/fail.
>
Hi,
Typo: "I run that job via Native Kubernetes deployment and use elastic scaling in reactive mode." -> I run it, of course, via a standalone Kubernetes deployment, to make elastic scaling possible.
BR
Martin
mar...@sonicdev.de wrote on 06.01.2022 21:38 (GMT +01:00):
Hi,
I have a job where I do a
Hi,
I have a job where I do a keyBy'd process function (a), then downstream a normal process function (b), and then I use reinterpretAsKeyedStream to have yet another keyBy'd process function (c).
The last keyed process function uses keyed state for an increasing sequence number.
I run that
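A minimal sketch of that topology, with hypothetical Event, KeyedProcessA, ProcessB and KeyedProcessC classes standing in for the real record type and the functions (a), (b) and (c):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<Event> input = ...; // Kafka source, omitted here

// (a) keyed process function
DataStream<Event> afterA = input
        .keyBy(Event::getKey)
        .process(new KeyedProcessA());

// (b) normal, non-keyed process function; it must not change the key
DataStream<Event> afterB = afterA.process(new ProcessB());

// Re-declare the stream after (b) as keyed without another shuffle, then run (c),
// which keeps the increasing sequence number in keyed state. This only works if
// the records are still partitioned exactly as keyBy(Event::getKey) would
// partition them.
KeyedStream<Event, String> rekeyed =
        DataStreamUtils.reinterpretAsKeyedStream(afterB, Event::getKey);
rekeyed.process(new KeyedProcessC());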