Thanks Feng, it worked.
On Wed, Sep 6, 2023 at 8:09 AM Feng Jin wrote:
> Hi Nihar,
> Have you tried using the following configuration:
>
> metrics.reporter.my_reporter.filter.includes:
> jobmanager:*:*;taskmanager:*:*
>
> Please note that the default delimiter for the List parameter in Flink is ';'.
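Spelled out as a flink-conf.yaml fragment (the reporter name `my_reporter` is taken from the thread; substitute your own reporter name):

```yaml
# The filter value is a single List-typed option, so multiple patterns
# are separated by ';' (Flink's default List delimiter), not ','.
metrics.reporter.my_reporter.filter.includes: jobmanager:*:*;taskmanager:*:*
```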
Hi Chirag,
Couple things can be done to reduce the attack surface (including but not
limited to):
* Use delegation tokens where only JM needs the keytab file:
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/security/security-delegation-token/
* Limit the access rights of the k
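As a sketch of the first point, a minimal flink-conf.yaml under Kerberos (the keytab path and principal are placeholders, and the exact delegation-token flag names depend on your Flink version; see the linked docs):

```yaml
# Only the JobManager needs to read the keytab from disk; with delegation
# tokens enabled, TaskManagers receive short-lived tokens instead of the
# keytab itself, shrinking the attack surface.
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM
```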
Hello Flink users,
We recently released a Maven plugin that detects known Flink issues at
packaging/compile time:
https://github.com/awslabs/static-checker-flink
Its scope is currently limited to finding known connector incompatibility
issues.
Some future ideas:
* Check for other static
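For reference, wiring a Maven plugin like this into a build usually looks like the following; the coordinates and version below are hypothetical placeholders, so check the project README for the real values:

```xml
<!-- Hypothetical coordinates and version; see the README at
     https://github.com/awslabs/static-checker-flink for actual values. -->
<build>
  <plugins>
    <plugin>
      <groupId>software.amazon.awslabs</groupId>
      <artifactId>static-checker-flink</artifactId>
      <version>X.Y.Z</version>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```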
Dear Apache Flink Community,
I am writing to urgently address a critical challenge we've encountered in
our IoT platform that relies on Apache Kafka and real-time data processing.
We believe this issue is of paramount importance and may have broad
implications for the community.
In our IoT ec
Hi Kamal
Check whether checkpointing is enabled and triggered correctly for the job. By
default, Parquet bulk writers roll over to a new file on each checkpoint.
Best,
Feng
On Thu, Sep 14, 2023 at 7:27 PM Kamal Mittal via user
wrote:
> Hello,
>
>
>
> Tried parquet file creation with file sink bulk wr
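A minimal flink-conf.yaml sketch that enables checkpointing so the bulk Parquet writer has a chance to roll files (the interval value here is illustrative):

```yaml
# Bulk-encoded formats like Parquet only roll and finalize part files on
# checkpoints, so no completed files appear until checkpointing is enabled.
execution.checkpointing.interval: 1 min
```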
Hello Guys,
I am using Flink File Source with Amazon S3.
AFAIK, the file source first downloads the file to a temporary location and then
starts reading it and emitting the records.
By default, the download location is the /tmp directory.
In a containerized environment, where Pods have limited
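If the staging directory is the concern, Flink's local temp directories can be redirected; a minimal flink-conf.yaml sketch, assuming the source stages downloads under Flink's temp dirs and that `/mnt/flink-tmp` is a hypothetical mounted volume with enough space:

```yaml
# Point Flink's local working/temp directories at a dedicated volume
# instead of the Pod's (often size-limited) /tmp.
io.tmp.dirs: /mnt/flink-tmp
```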
Hi Karthick,
on a high level this seems like a data skew issue, where some partitions have
way more data than others.
What is the number of your devices? How many messages are you processing?
Most of the things you share above sound like you are looking for
suggestions around load distribution for Kafka.
Hello,
After upgrading Flink to 1.15.4 from 1.14.6, I've noticed that there are no
"{clusterId}-{componentName}-leader" config maps anymore, but instead there are
"{clusterId}-cluster-config-map" and "{clusterId}--cluster-config-map".
Is this expected?
Thanks,
Alexey
Hi Alexey,
This is expected: Flink 1.15 introduced a new multiple-component leader
election service that runs only a single leader election per Flink process. If
you run into any issues, you can set `high-availability.use-old-ha-services:
true` to fall back to the old high availability services.
Best,
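As a flink-conf.yaml fragment (note this fallback was only intended as a temporary escape hatch in the 1.15 line; check your version's docs before relying on it):

```yaml
# Fall back to the pre-1.15 per-component leader election services,
# restoring the old "{clusterId}-{componentName}-leader" config maps.
high-availability.use-old-ha-services: true
```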
Hi Giannis
Thanks for the reply
> some partitions have way more data than others?
Yes, some of the partitions are overloaded; say 5 out of 10 are overloaded. We
are currently using the default partitioner, and the key is the device's
unique identifier.
> What is the number of your devices? how many messages are you p
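For intuition, here is a small self-contained sketch (plain Python, not Kafka code; the md5-based partitioner is a hypothetical stand-in for Kafka's murmur2-based default) showing how keying by device id overloads a handful of partitions when a few devices are much chattier than the rest:

```python
import hashlib
from collections import Counter

NUM_PARTITIONS = 10

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Stable hash of the key modulo partition count: a simplified stand-in
    # for Kafka's default key partitioner (which really uses murmur2).
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# 3 "chatty" devices emit 1000 messages each; 97 quiet devices emit 10 each.
messages = [f"device-{i}" for i in range(3) for _ in range(1000)]
messages += [f"device-{i}" for i in range(3, 100) for _ in range(10)]

# Count how many messages land on each partition.
load = Counter(partition_for(k) for k in messages)
print(f"hottest partition: {max(load.values())} msgs, "
      f"coldest: {min(load.values())} msgs")
```

A common mitigation is to salt the hot keys (e.g. a compound key like `device-id#bucket`) so their traffic spreads over several partitions, at the cost of per-key ordering.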
Hi Karthik
This appears to be a common challenge related to slow consumers. Someone with
experience addressing such issues should be able to help.
Thanks and regards,
Gowtham S
On Fri, 15 Sept 2023 at 23:06, Giannis Polyzos
wrote:
> Hi Karthick,
>
> o