Update:
I haven't tried your first suggestion but I managed to get it working by
setting up the source idle timeout.
Thanks for your help.
Regards.
On Thu, May 11, 2023 at 12:34 PM Sharil Shafie wrote:
> Hi,
>
> I believe only one partition based on the log output below. I didn't make
> any
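For reference, a minimal sketch of the kind of source idle timeout mentioned
above, using Flink's WatermarkStrategy API; the Long element type and the
30-second timeout are assumptions for illustration, not the poster's actual
values.

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class IdleTimeoutSketch {
    public static void main(String[] args) {
        // Elements are plain epoch-millisecond timestamps to keep the sketch
        // self-contained; the 30-second idle timeout is an assumed value.
        WatermarkStrategy<Long> strategy =
                WatermarkStrategy.<Long>forMonotonousTimestamps()
                        .withTimestampAssigner((element, recordTs) -> element)
                        .withIdleness(Duration.ofSeconds(30));
        // A source subtask that emits nothing for 30 seconds is marked idle,
        // so it no longer holds back the watermark of downstream operators.
    }
}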
Hey Hang,
I am deploying my Flink job in HA application mode. Whenever I redeploy my
job or deploy an updated version of it, it uses the same job_id. I haven't
configured a fixed job id anywhere; I think it's doing this by default. Can
you share where I can configure this? I tried it
Hi Weihua,
I am deploying my Flink job in HA application mode on a Kubernetes cluster.
I am using an external NFS mount for storing checkpoints. For some reason,
whenever I deploy an updated version of my application, it uses the same
job_id for the new job as for the previous job. Thus the Flink
I have recently migrated from 1.13.6 to 1.16.1, and I can see a performance
degradation in Flink pipelines that use Flink's managed state (ListState,
MapState, etc.). Pipelines are frequently failing with the exception:
06:59:42.021 [Checkpoint Timer] WARN o.a.f.r.c.CheckpointFailureM
Hi,
I believe only one partition based on the log output below. I didn't make
any changes to the DataGenerator from the Flink playground.
2023-05-10 11:55:33,082 WARN
org.apache.kafka.clients.admin.AdminClientConfig [] - These
configurations '[key.deserializer, commit.offsets.on.chec
Hi, yangxueyong,
The filter (WHERE condition) will be pushed down to the source if the
connector implements the interface `SupportsFilterPushDown`.
In your case, the SQL planner analyzed that the records sent by
`test_flink_res1` would satisfy the conditions (`name` =
'abc0.11317691217472489') and
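For context, a rough sketch of the connector-side contract that makes this
pushdown possible; the class name is invented and the runtime provider is
elided, so this illustrates the `SupportsFilterPushDown` interface rather
than the actual `test_flink_res1` connector.

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
import org.apache.flink.table.expressions.ResolvedExpression;

public class FilterableSource implements ScanTableSource, SupportsFilterPushDown {

    private List<ResolvedExpression> pushedFilters = new ArrayList<>();

    @Override
    public Result applyFilters(List<ResolvedExpression> filters) {
        // Keep the filters so the runtime source can evaluate them itself.
        this.pushedFilters = new ArrayList<>(filters);
        // Declaring every filter as accepted lets the planner drop the Filter
        // node, which is how a predicate such as name = '...' reaches the source.
        return Result.of(new ArrayList<>(filters), new ArrayList<>());
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
        // The actual reading logic is not part of this sketch.
        throw new UnsupportedOperationException("elided");
    }

    @Override
    public DynamicTableSource copy() {
        FilterableSource copied = new FilterableSource();
        copied.pushedFilters = new ArrayList<>(this.pushedFilters);
        return copied;
    }

    @Override
    public String asSummaryString() {
        return "FilterableSource";
    }
}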
Hi,
How many partitions does your Kafka topic have?
One possibility is that the Kafka topic has only one partition,
and when the source parallelism is set to 2, one of the source
tasks cannot consume any data or generate a watermark, so
the downstream operator cannot align the watermark and cannot
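A hedged sketch of one common fix for this situation, assuming the new
KafkaSource API: match the source parallelism to the partition count (the
other option is an idleness timeout on the watermark strategy). The broker
address and topic name below are placeholders.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SinglePartitionSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")           // placeholder broker
                .setTopics("input-topic")                        // placeholder topic
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // With a single-partition topic, parallelism 1 gives every source
        // subtask a partition, so all subtasks emit watermarks and downstream
        // operators can align them.
        DataStream<String> stream = env
                .fromSource(source, WatermarkStrategy.<String>forMonotonousTimestamps(), "kafka-source")
                .setParallelism(1);

        stream.print();
        env.execute("single-partition-sketch");
    }
}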
The Apache Flink community is very happy to announce the release of Apache
flink-connector-gcp-pubsub v3.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate d
The Apache Flink community is very happy to announce the release of Apache
flink-connector-elasticsearch v1.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurat
The Apache Flink community is very happy to announce the release of Apache
flink-connector-opensearch v1.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate d
The Apache Flink community is very happy to announce the release of Apache
flink-connector-pulsar v4.0.0. This release is compatible with Flink 1.17.x.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applica
The Apache Flink community is very happy to announce the release of Apache
flink-shaded v17.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download at:
https:
The Apache Flink community is very happy to announce the release of Apache
flink-connector-rabbitmq v3.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate dat
Hi all,
Trying to have an S3 Parquet bulk writer with a file roll policy based on a size
limit + checkpoint. For that I've extended CheckpointRollingPolicy and
overridden shouldRollOnEvent to return true if the part size is greater than
the limit.
The problem is that the part size that I
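For reference, a rough sketch of the kind of rolling policy being described,
assuming the file sink's CheckpointRollingPolicy; the class name and size
limit are illustrative, not the poster's actual code.

import java.io.IOException;

import org.apache.flink.streaming.api.functions.sink.filesystem.PartFileInfo;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.CheckpointRollingPolicy;

public class SizeOrCheckpointRollingPolicy<IN, BucketID> extends CheckpointRollingPolicy<IN, BucketID> {

    private final long maxPartSizeBytes;   // assumed limit, e.g. 128 * 1024 * 1024

    public SizeOrCheckpointRollingPolicy(long maxPartSizeBytes) {
        this.maxPartSizeBytes = maxPartSizeBytes;
    }

    @Override
    public boolean shouldRollOnEvent(PartFileInfo<BucketID> partFileState, IN element) throws IOException {
        // Roll the in-progress part file once it reaches the size limit.
        // shouldRollOnCheckpoint() is final in CheckpointRollingPolicy and
        // already returns true, so parts also roll on every checkpoint.
        return partFileState.getSize() >= maxPartSizeBytes;
    }

    @Override
    public boolean shouldRollOnProcessingTime(PartFileInfo<BucketID> partFileState, long currentTime) {
        return false;   // no time-based rolling in this sketch
    }
}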
Hi, Iris,
The Flink counter is cumulative. It has `inc` and `dec` methods.
Since the value of the counter is already calculated in Flink, we do not need
to use the counter metric in StatsD to do the calculation.
Best,
Hang
Iris Grace Endozo wrote on Wed, May 10, 2023 at 14:53:
> Hey Hang,
>
> Thanks for the prompt
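As a concrete illustration of the cumulative counter described above (the
operator and metric names are made up for the example):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMapper extends RichMapFunction<String, String> {

    private transient Counter eventCounter;

    @Override
    public void open(Configuration parameters) {
        // The counter keeps its own running total, so a StatsD reporter only
        // has to forward the value rather than re-aggregate deltas.
        eventCounter = getRuntimeContext().getMetricGroup().counter("events");
    }

    @Override
    public String map(String value) {
        eventCounter.inc();   // inc()/dec() adjust the cumulative value
        return value;
    }
}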