We don't have a recommendation; it depends on the number of jobs you
manage.
If you are managing only a few jobs, say fewer than 50, the default resource
configuration probably works well.
The operator reports JVM metrics similar to Flink jobs, so if you see longer
GC pauses you could simply add more memory.
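A minimal sketch of what raising the operator's resources might look like as a Kubernetes resource block. Note the exact key path in the operator's Helm chart values is an assumption here; check the chart's values.yaml for your version:

```yaml
# Hypothetical resources override for the operator pod; the placement of
# this block inside the Helm values depends on the chart version.
resources:
  requests:
    cpu: "0.5"
    memory: "512Mi"
  limits:
    cpu: "1"
    memory: "1Gi"   # bump this if GC pauses grow with the number of managed jobs
```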
--
Best,
Hjw
The Flink Kubernetes Operator is a server that is responsible for reconciling
FlinkDeployment resources.
The operator continuously monitors the status of Flink jobs. The stability
of the operator's service is therefore also important.
What is the recommended CPU/memory configuration of the Flink Kubernetes
operator in
Hi, Alexis.
IIUC, there is no conflict between savepoint history and restore mode.
Restore mode cares about whether/how we manage the savepoint of the old job.
Savepoint management in the operator only cares about the savepoint history of
the new job.
In other words, savepoint cleanup should not clean the savepoint
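To make the distinction concrete, here is a sketch of a FlinkDeployment fragment that sets both knobs. The key names follow the Flink 1.15 and operator documentation as I recall them, so verify them against the versions you run:

```yaml
# Restore mode governs ownership of the OLD job's savepoint;
# the operator's history settings govern cleanup for the NEW job.
spec:
  flinkConfiguration:
    execution.savepoint-restore-mode: "NO_CLAIM"           # old job's snapshot is left untouched
    kubernetes.operator.savepoint.history.max.count: "5"   # keep at most 5 savepoints of the new job
```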
Hi,
As Martijn mentioned, snapshot ownership in 1.15 is the best way.
You say there are just 24000/10 references in a shared directory in a
job. Is your case in the scope of [1]?
If so, I think it works if you could check the _metadata and find some
files that are not referenced.
And I suggest you
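As a starting point for that check, here is a rough heuristic sketch: it treats a file in the shared directory as "referenced" if its name appears as a byte substring of the checkpoint's _metadata file. This is not a real parser of Flink's metadata format, only a quick way to flag candidates for closer inspection:

```python
import os
from pathlib import Path


def unreferenced_files(shared_dir: str, metadata_path: str) -> list[str]:
    """List files in shared_dir whose names do not occur anywhere in the
    raw bytes of the _metadata file.

    Heuristic only: Flink's _metadata is a binary format, so a name match
    does not prove a live reference, and a miss should be double-checked
    before deleting anything.
    """
    raw = Path(metadata_path).read_bytes()
    leftovers = []
    for name in os.listdir(shared_dir):
        if name.encode() not in raw:
            leftovers.append(name)
    return sorted(leftovers)
```

Run it against a copy of the shared directory first, and never delete files based on this alone.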
I figured this out. I got this behavior because I was running the code in
a MiniCluster test that defaulted to batch mode. I switched to streaming mode
and it renders "*ROWTIME*".
On Fri, Nov 25, 2022 at 11:51 PM Dan Hill wrote:
> Hi. I copied the Flink code from this page. My printSchema() does not
>