Since the flink-kubernetes-operator uses the native K8s integration[1] by
default, you need to grant permissions on pods and deployments as well as
ConfigMaps.
You can find more information about RBAC here[2].
[1].
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/r
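For reference, a minimal sketch of the kind of extra Role the JobManager's
service account needs for the native integration; the role name and verb
lists below are illustrative assumptions, not the operator's exact defaults:
```yaml
# Hedged sketch: extra RBAC rules for Flink's native K8s integration.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: flink-native-k8s   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
```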
You are right. Starting multiple JobManagers could help when the pod is
deleted and there are not enough resources in the cluster to start a new one.
In most cases, though, the JobManager container will be restarted locally
without scheduling a new Kubernetes pod[1].
The "already exists" error comes from t
Yeah, that would be an option, but it would just be nicer if I could simply
skip events that fail to be serialized, without prepending any operator to
the sink, since conceptually that is not really part of the pipeline but
more about handling serialization errors.
If I'm not mistaken, what I'm a
The "local://" scheme means the jar is available in the image/container of
the JobManager, so it can only be supported in the K8s application mode.
If you configure the jarURI with the "file://" scheme for a session cluster,
it means that this jar file should be available in the
flink-kubernetes-
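For illustration, a hedged sketch of an application-mode FlinkDeployment
using the local:// scheme; the jar path points at the example job shipped in
the official image, and every other value is an assumption:
```yaml
# Hedged sketch: application-mode FlinkDeployment with a local:// jarURI.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-application   # hypothetical name
spec:
  image: flink:1.15
  flinkVersion: v1_15
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # local:// means the jar must already exist inside the image above.
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 1
```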
Can't you add a flatMap function just before the Sink that does exactly
this verification and filters out everything that is not supposed to be
sent downstream?
Best,
Alexander Fedulov
On Thu, Sep 8, 2022 at 6:15 PM Salva Alcántara wrote:
> Sorry I meant do nothing when the serialize method ret
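For illustration, a hedged sketch of the check suggested above (a filter
works as well as a flatMap here): attempt the same serialization the sink
would perform and drop events that fail. The event type, serializer, and
sink variable are assumptions:
```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.configuration.Configuration;

// Hedged sketch: drop events that cannot be serialized, so the sink only
// ever sees valid records. MyEvent, events, and kafkaSink are hypothetical.
public class SerializableOnly extends RichFilterFunction<MyEvent> {
    private transient ObjectMapper mapper;

    @Override
    public void open(Configuration parameters) {
        mapper = new ObjectMapper();
    }

    @Override
    public boolean filter(MyEvent event) {
        try {
            mapper.writeValueAsBytes(event); // mirror the sink's serialization
            return true;
        } catch (Exception e) {
            return false; // skip malformed events instead of failing the job
        }
    }
}

// Usage: events.filter(new SerializableOnly()).sinkTo(kafkaSink);
```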
Thanks!
For reference, I solved it by mapping to Flink tuples.
On Wed, Sep 7, 2022 at 2:27 PM Chesnay Schepler wrote:
> Are you running into this in the IDE, or when submitting the job to a
> Flink cluster?
>
> If it is the first, then you're probably affected by the Scala-free Flink
> efforts.
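For context, a hedged sketch of what "mapping to Flink tuples" can look
like; the event type, its fields, and the events stream are hypothetical:
```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;

// Hedged sketch: convert a case-class/POJO stream into Flink tuples so the
// job no longer depends on Scala-specific serializers.
DataStream<Tuple2<String, Long>> tuples =
        events.map(e -> Tuple2.of(e.getKey(), e.getValue()))
              // explicit type info, since lambdas lose generic types
              .returns(Types.TUPLE(Types.STRING, Types.LONG));
```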
Hello,
You may have heard about a recent change to the licensing of Akka.
We just published a blog-post regarding this change and what it means
for Flink.
https://flink.apache.org/news/2022/09/08/akka-license-change.html
TL;DR:
Flink is not in any immediate danger and we will ensure that use
Sorry, I meant do nothing when the serialize method returns null...
On 2022/09/08 15:52:48 Salva Alcántara wrote:
> I guess one possibility would be to extend/override the `write` method of
> the KafkaWriter:
>
>
https://github.com/apache/flink/blob/26eeabfdd1f5f976ed1b5d761a3469bbcb7d3223/flink-co
I guess one possibility would be to extend/override the `write` method of
the KafkaWriter:
https://github.com/apache/flink/blob/26eeabfdd1f5f976ed1b5d761a3469bbcb7d3223/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaWriter.java#L197
```
@Overri
```
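The snippet above is cut off in the archive; what follows is a hedged sketch
of such an override, reusing the field names visible in the linked source.
Note that KafkaWriter is internal API, so this is illustrative rather than a
supported extension point:
```java
// Hedged sketch: skip sending when the serializer returns null instead of
// handing a null record to the Kafka producer. Field names follow the
// linked KafkaWriter source; treat this as illustrative only.
@Override
public void write(IN element, Context context) throws IOException {
    ProducerRecord<byte[], byte[]> record =
            recordSerializer.serialize(element, kafkaSinkContext, context.timestamp());
    if (record == null) {
        return; // serialization failed or was skipped; drop the element
    }
    currentProducer.send(record, deliveryCallback);
}
```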
Hello,
Is there a way to configure Flink to expose the watermarkLag metric per topic
per partition? I think it could be useful to detect data skew between
partitions.
Thanks,
Alexey
At first glance, this might happen when an older Docker version is used:
https://github.com/adoptium/temurin-build/issues/2974
You may need to upgrade to Docker 20.10.5+.
On 08/09/2022 12:33, Sigalit Eliazov wrote:
Hi all,
We pulled the new image and we are facing an issue starting the job
ma
Hi! Is there a way to skip/discard messages when using the KafkaSink, so
that if for some reason messages are malformed they can simply be
discarded? I tried returning null from the corresponding KafkaWriter, but
that raises an exception:
```
java.lang.NullPointerException
at
org.apache.kafka.clie
```
Hi all,
We pulled the new image and we are facing an issue to start the job manager
pod.
We are using version 1.14.5-java11 and the cluster is started using the
Flink operator.
The error is:
[ERROR] Could not get JVM parameters and dynamic configurations properly.
[ERROR] Raw output from BashJavaUtil