I have also seen this exception:
o.a.f.k.o.o.JobStatusObserver
[ERROR][flink/f-d7681d0f-c093-5d8a-b5f5-2b66b4547bf6] Job
d0ac9da5959d8cc9a82645eeef6751a5 failed with error:
java.util.concurrent.CompletionException:
java.util.concurrent.CompletionException:
java.lang.UnsupportedOperationExcep
Hi Artem,
I debugged Flink 1.17.1 (running CsvFilesystemBatchITCase) and I see
the same behaviour; it is the same on master too. Jackson flushes [1] the
underlying stream after every `writeValue` call. I experimented with
disabling the flush by disabling Jackson's FLUSH_PASSED_TO_STREAM [2]
f
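For context, the effect of flushing the wrapped stream once per record (versus once at close) can be illustrated with plain JDK streams. This is only a sketch of the behaviour, not Jackson code; the class and method names below are hypothetical, and in Jackson itself the relevant switch is `JsonGenerator.Feature.FLUSH_PASSED_TO_STREAM`:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative only: mimics the effect of Jackson's default
// FLUSH_PASSED_TO_STREAM behaviour, where every writeValue() forwards a
// flush() to the underlying stream. Names here are hypothetical.
public class FlushCounting {

    // Wraps a stream and counts how many times flush() reaches it.
    static final class CountingStream extends FilterOutputStream {
        int flushes = 0;
        CountingStream(OutputStream out) { super(out); }
        @Override public void flush() throws IOException {
            flushes++;
            super.flush();
        }
    }

    // Writes `records` rows, optionally flushing after each one (the
    // per-record behaviour), plus one final flush; returns the flush count.
    static int countFlushes(int records, boolean flushPerRecord) {
        try {
            CountingStream out = new CountingStream(new ByteArrayOutputStream());
            for (int i = 0; i < records; i++) {
                out.write(("row-" + i + "\n").getBytes());
                if (flushPerRecord) out.flush();
            }
            out.flush(); // the single flush a buffered writer does on close
            return out.flushes;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(countFlushes(1000, true));  // prints 1001
        System.out.println(countFlushes(1000, false)); // prints 1
    }
}
```

With a slow sink (e.g. an object store), those extra per-record flushes are where the time goes, which is why disabling the feature changes throughput.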
Apologies, I included the jobmanager log for
6969725a69ecc967aac2ce3eedcc274a instead of 7881d53d28751f9bbbd3581976d9fe3d;
however, they looked exactly the same.
I can include the correct one if necessary.
Thanks
Keith
From: "Lee, Keith"
Date: Thursday, 25 April 2024 at 21:41
To: "user@flink.apache.org"
Hi,
Referring to
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#start-a-sql-job-from-a-savepoint
I’ve followed the instructions; however, I do not see evidence of the job
being started from the savepoint. See the SQL statement excerpt below:
Flink SQL> STOP JOB '14de8cc8
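For reference, the flow described on the linked page looks roughly like the sketch below; the job id, savepoint path, and table names are placeholders, not values from the excerpt:

```sql
-- Stop the running job and take a savepoint (job id is a placeholder).
STOP JOB '<job-id>' WITH SAVEPOINT;

-- Point the SQL client at the savepoint before resubmitting.
SET 'execution.savepoint.path' = '/tmp/savepoints/savepoint-xxxx';

-- The next job submitted from this session should restore from it.
INSERT INTO sink_table SELECT * FROM source_table;

-- Unset afterwards so later jobs in the session start fresh.
RESET 'execution.savepoint.path';
```

If `execution.savepoint.path` is not set in the same session before the `INSERT INTO`, the new job starts from scratch, which would match what you observed.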
Hi.
I asked this before but never got an answer. My observation is that the
operator, after collecting some stats, tries to restart one of the
deployments. This includes taking a savepoint (`takeSavepointOnUpgrade: true`,
`upgradeMode: savepoint`) and “gracefully” shutting down the JobMa
I know that an `Object` is treated as a generic data type by Flink and
hence serialized using Kryo. I wonder whether there is anything one can do to
improve performance w.r.t. the Kryo-based serializer, or if that is
simply an inherent worst case and nothing can be done without
actually switc
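For what it's worth, short of moving off Kryo entirely, Flink's `ExecutionConfig` exposes a couple of knobs worth trying. This is a configuration sketch only, assuming Flink 1.x APIs; `MyEvent` is a placeholder for the type currently falling back to Kryo:

```java
// Sketch, assuming Flink 1.x; MyEvent is a placeholder type.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Register the concrete class so Kryo writes a compact id
// instead of the fully qualified class name per record.
env.getConfig().registerKryoType(MyEvent.class);

// Alternatively, fail fast whenever any type falls back to Kryo,
// which helps locate the offending fields.
env.getConfig().disableGenericTypes();
```

Registering types helps somewhat, but restructuring the `Object` field into a POJO or a concrete type that Flink's own serializers handle is usually the bigger win.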