Hi,
I'm trying to deploy a Flink job with the flink-kubernetes-operator. The
operator's version is 1.2.0, and the YAML I use is below:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example
spec:
  image: flink:1.15
  flinkVersion: v1_15
  flinkConfiguration:
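The excerpt is cut off at `flinkConfiguration:`. For reference, a complete basic FlinkDeployment for operator 1.2.0 typically looks like the following; the resource values and jarURI are illustrative placeholders modeled on the operator's quickstart example, not the poster's actual job:

```yaml
# Illustrative complete example, modeled on the operator's quickstart;
# resource values and jarURI are placeholders, not the poster's actual job.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example
spec:
  image: flink:1.15
  flinkVersion: v1_15
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless
```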
I have a Flink 1.15 application running on Kubernetes (v1.22), deployed via
operator 1.2, using S3-based HA with two JobManagers and two TaskManagers.
The app consumes a high-traffic Kafka topic and writes to a Cassandra
database. It had been running fine for four days, but at some point
the TaskManagers crashed.
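For context, an S3-backed HA setup like the one described usually boils down to a flinkConfiguration fragment along these lines; the bucket name here is hypothetical, and the `s3://` scheme assumes an S3 filesystem plugin (e.g. flink-s3-fs-presto or flink-s3-fs-hadoop) is on the classpath:

```yaml
# Illustrative HA settings for Flink 1.15 on Kubernetes; bucket name is hypothetical.
high-availability: kubernetes                            # Kubernetes-based HA services
high-availability.storageDir: s3://my-flink-bucket/ha    # HA metadata in S3
kubernetes.jobmanager.replicas: "2"                      # standby JobManager for faster failover
state.checkpoints.dir: s3://my-flink-bucket/checkpoints  # checkpoint storage
```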
Hi Filip,
It looks like your state primitive is used in the context of windows.
Keyed state works like this:
* It uses a cascade of key types to store and retrieve values:
  * The key (set by .keyBy)
  * A namespace (usually a VoidNamespace, unless it is used in the context
    of a window)
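The cascade above can be sketched with a tiny model. This is NOT Flink's real implementation, only an illustration that a keyed-state value is addressed by (current key, namespace, state name); the class and field names below are invented for the sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the keyed-state lookup cascade (not Flink internals).
public class KeyedStateModel {

    // Stand-in for Flink's VoidNamespace singleton.
    static final Object VOID_NAMESPACE = new Object();

    // Composite lookup key: (key from keyBy, namespace, state descriptor name).
    record StateAddress(Object key, Object namespace, String stateName) {}

    private final Map<StateAddress, Object> table = new HashMap<>();

    void put(Object key, Object namespace, String stateName, Object value) {
        table.put(new StateAddress(key, namespace, stateName), value);
    }

    Object get(Object key, Object namespace, String stateName) {
        return table.get(new StateAddress(key, namespace, stateName));
    }

    public static void main(String[] args) {
        KeyedStateModel state = new KeyedStateModel();
        // Same key and state name, different namespaces -> independent slots.
        state.put("user-1", VOID_NAMESPACE, "count", 3);
        state.put("user-1", "window[0,1000)", "count", 7); // window namespace stand-in
        System.out.println(state.get("user-1", VOID_NAMESPACE, "count"));   // 3
        System.out.println(state.get("user-1", "window[0,1000)", "count")); // 7
    }
}
```

This is why window state for the same key does not collide with non-windowed state: the window namespace is part of the lookup address.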
Hi, I'm trying to load a list state using the State Processor API (Flink
1.14.3).
Cluster settings:
state.backend: rocksdb
state.backend.incremental: true
(...)
Code:
val env = ExecutionEnvironment.getExecutionEnvironment
val savepoint = Savepoint.load(env, pathToSavepoint, new
EmbeddedRocksDBS
Hi everyone,
I noticed that when going through the Scala DataStream/Table API bridge in
my IDE I cannot see the source of the code. I believe it is because the
sources JAR is missing on Maven Central:
https://repo1.maven.org/maven2/org/apache/flink/flink-table-api-scala-bridge_2.12/1.15.2/
If you have a look at t
When a batch job finishes and the cluster is shut down, the operator cannot
observe the job status, so it is impossible to tell whether the job finished
successfully or not. Try upgrading to Flink 1.15, where this is solved.
Cheers,
Gyula
On Tue, Oct 25, 2022 at 9:23 AM Liting Liu (litiliu) wrote:
> Hi, I'm deploying a
Hi, I'm deploying a Flink batch job with the flink-kubernetes-operator. My
operator's version is 1.2.0 and Flink's version is 1.14.6. I found that
after the batch job finishes executing, the jobManagerDeploymentStatus field
becomes "MISSING" in the FlinkDeployment CRD. And the error field became
"Missin