Hi,
Restarting the job is OK and I do that, but I must cancel the job and
submit a new one, and I don't want the data from the state.
I forgot to mention that I use the parameter "-allowNonRestoredState".
My steps:
1. Stop the job with a savepoint.
2. Run the updated job (updated job graph) from the savepoint.
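For reference, the CLI shape of those two steps looks roughly like this (the savepoint directory, job id, and jar name are placeholders, not taken from this thread):

```shell
# 1. stop the job and take a savepoint (the command prints the savepoint path)
./bin/flink stop --savepointPath /tmp/flink-savepoints <jobId>

# 2. submit the updated jar, restoring from that savepoint;
#    --allowNonRestoredState lets the restore succeed even if the savepoint
#    contains state for operators that no longer exist in the new job graph
./bin/flink run -s /tmp/flink-savepoints/<savepoint-dir> --allowNonRestoredState updated-job.jar
```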
Hi,
IIUC, yes.
--
Best!
Xuyang
On 2023-12-04 15:13:56, "arjun s" wrote:
Thank you for providing the details. Can it be confirmed that the HashMap
within the accumulator stores the map in RocksDB as a binary object and
undergoes deserialization/serialization during the execution of the
aggregate function?
Thanks,
Arjun
On Mon, 4 Dec 2023 at 12:24, Xuyang wrote:
> Hi
Hi, Arjun.
> I'm using a HashMap to aggregate the results.
Do you mean that you define a HashMap in the accumulator? If yes, I think it
stores the map as a binary object in RocksDB and deserializes it like this[1].
If you are using Flink SQL, you can try to debug the class 'WindowOperator' or
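To make the cost concrete, here is a minimal stdlib-only sketch of that store-as-bytes pattern: the map exists only as a serialized blob, so every read and update pays a full deserialize/serialize round trip. This is an illustration of the pattern, not Flink's actual RocksDB state serializer, and all names in it are made up:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a "state backend" that keeps the whole map as one
// serialized blob, mimicking an accumulator HashMap stored as a binary value.
public class BlobMapSketch {
    // the map lives only as bytes, like a serialized value in RocksDB
    private byte[] blob;

    public BlobMapSketch() throws IOException {
        this.blob = serialize(new HashMap<>());
    }

    // every read deserializes the whole map...
    public Map<String, Long> read() throws IOException, ClassNotFoundException {
        return deserialize(blob);
    }

    // ...and every update deserializes it, mutates it, and serializes it back
    public void add(String key, long delta) throws IOException, ClassNotFoundException {
        Map<String, Long> m = deserialize(blob);
        m.merge(key, delta, Long::sum);
        blob = serialize(m);
    }

    private static byte[] serialize(Map<String, Long> m) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new HashMap<>(m));
        }
        return bos.toByteArray();
    }

    @SuppressWarnings("unchecked")
    private static Map<String, Long> deserialize(byte[] b)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
            return (Map<String, Long>) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        BlobMapSketch acc = new BlobMapSketch();
        acc.add("a", 1);
        acc.add("a", 2);
        System.out.println(acc.read().get("a")); // prints 3
    }
}
```

The point of the sketch is why per-record aggregation over a large map-valued accumulator can get expensive: the cost of each update grows with the size of the whole map, not with the size of the one entry being changed.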
Hi, Nick.
> using savepoint i must cancel the job to be able run the new graph
Do you mean that you need to cancel and restart the job using the new Flink job
graph in 1.17.1,
while in the past you were able to make changes to operators effective
without restarting the job?
I think in o
Hi,
When I add or remove an operator in the job graph, using a savepoint, I must
cancel the job to be able to run the new graph,
e.g. when adding or removing an operator (like a new sink target).
It was working in the past.
I am using Flink 1.17.1.
1. Is it a known bug? If so, when is it planned to be fixed?
2. Do I need to
Hi guys,
Forking in sbt solved the issue (Test / fork := true).
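For anyone hitting the same thing, this is a minimal sketch of where the setting goes in build.sbt (the memory option is just an example, not from this thread):

```scala
// build.sbt
// Run tests in a forked JVM, so the test classpath and shutdown hooks
// are isolated from sbt's own JVM
Test / fork := true

// optional: give the forked test JVM its own options
Test / javaOptions += "-Xmx2g"
```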
On Sun, Dec 3, 2023 at 7:48 AM Barak Ben-Nathan
wrote:
> By the way, I also upgraded to flink-connector-kafka ver. 3.0.2-1.18, to
> no avail.
>
> On Sun, Dec 3, 2023 at 7:45 AM Barak Ben-Nathan
> wrote:
>
>> Thanks, Jim,
>>
>> Un