Hi team,
We encountered a schema-forbidden issue during deployment with the
changes for the Flink version upgrade (1.16.1 -> 1.17.1), and hope to get
some help here :pray:.
As part of the changes to upgrade Flink from 1.16.1 to 1.17.1, we also
switched from our customized schema-registry dependency to
or
Hi Jing,
My team and I have been blocked by the need for a PyFlink release including
https://github.com/apache/flink/pull/23141, and I saw that you mentioned
that anybody can be the release manager of a bug fix release. Could we
explore what it would take for me to do this (assuming nobody is alre
Hi, sure, sharing it again:
SELECT a.funder, a.amounts_added, r.amounts_removed FROM table_a AS a JOIN
table_b AS r ON a.funder = r.funder
and the Optimized Execution Plan:
Calc(select=[funder, vid AS a_vid, vid0 AS r_vid, amounts_added,
amounts_removed])
+- Join(joinType=[InnerJoin], where=[(fu
Hey folks,
We currently run (py) flink 1.17 on k8s (managed by flink k8s
operator), with HA and checkpointing (fixed retries policy). We
produce into Kafka with AT_LEAST_ONCE delivery guarantee.
Our application failed when trying to produce a message larger than
Kafka's messag
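Not part of the original thread, but for context: the producer-side limit usually involved here is Kafka's `max.request.size`, which defaults to 1048576 bytes. A minimal pre-send size guard, sketched in plain Python (the helper name `check_payload_size` is my own, not a Flink or Kafka API):

```python
# Sketch of a pre-send size guard against Kafka's producer-side limit.
# MAX_REQUEST_SIZE mirrors the Kafka producer default for max.request.size.
# check_payload_size is a hypothetical helper for illustration only.

MAX_REQUEST_SIZE = 1_048_576  # Kafka producer default, in bytes

def check_payload_size(payload: bytes, limit: int = MAX_REQUEST_SIZE) -> bool:
    """Return True if the serialized record body fits under the limit."""
    return len(payload) <= limit

small = b'{"event": "ok"}'
too_big = b"x" * (MAX_REQUEST_SIZE + 1)  # would be rejected at produce time
```

In a real pipeline this kind of check (or raising the limit on both producer and broker) would sit in front of the sink, since an oversized record otherwise fails the job on delivery.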
Hi, Zakelly
Thank you for your answer.
Best,
rui
Zakelly Lan wrote on Fri, Oct 13, 2023 at 19:12:
> Hi rui,
>
> The 'state.backend.fs.memory-threshold' configures the threshold below
> which state is stored as part of the metadata, rather than in separate
> files. So as a result the JM will use its memory t
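To paraphrase the quoted explanation with a toy sketch (pure Python, the names are mine, not Flink's): state chunks smaller than `state.backend.fs.memory-threshold` are inlined into the checkpoint metadata, which the JobManager holds in memory, while larger chunks go to separate files on checkpoint storage.

```python
# Toy model of how state.backend.fs.memory-threshold routes state at
# checkpoint time. place_state is an illustration, not Flink code.

MEMORY_THRESHOLD = 20 * 1024  # Flink's default threshold is 20kb

def place_state(state_size_bytes: int, threshold: int = MEMORY_THRESHOLD) -> str:
    """Return where a state chunk of the given size would end up."""
    if state_size_bytes < threshold:
        # Inlined into the checkpoint metadata -> held in JobManager memory.
        return "metadata"
    # Written as a separate file on the checkpoint storage (e.g. S3/HDFS).
    return "file"
```

This is why raising the threshold trades fewer small files for more JobManager memory pressure.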
Additional exceptions with the same error but on different files.
PyFlink lib error:
java.lang.RuntimeException: An error occurred while copying the file.
at org.apache.flink.api.common.cache.DistributedCache.getFile(
DistributedCache.java:158)
at org.apache.flink.python.env.PythonDependencyI
Hi,
We are using the flink-1.17.0 Table API with RocksDB as the state backend to
provide a service that lets our users run SQL queries. The tables are
created from Avro schemas, and we also allow users to attach a Python UDF
as a plugin. This plugin is downloaded at the time of building the table, and
we update
th
Hello,
I'm investigating how to build SLIs for a Flink application, using several
metrics (such as 'numRegisteredTaskManager', 'job_numRestarts', etc.).
- Are there any other metrics you are using for this?
- Also, there are parameters related to the 'success/fail checkpoint'. Is
this affecting applicatio
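Not an answer from the thread, but one common approach to a checkpoint-health SLI is a success ratio derived from the job's completed/failed checkpoint counters (Flink exposes `numberOfCompletedCheckpoints` and `numberOfFailedCheckpoints`). A minimal sketch, with a helper name of my own choosing:

```python
# Sketch of a checkpoint-success SLI from two counter metrics.
# checkpoint_success_ratio is a hypothetical helper, not a Flink API.

def checkpoint_success_ratio(completed: int, failed: int) -> float:
    """SLI: fraction of attempted checkpoints that completed successfully."""
    total = completed + failed
    if total == 0:
        return 1.0  # no checkpoints attempted yet; treat as healthy
    return completed / total
```

The same shape works for a restart-rate SLI built on 'job_numRestarts' over a time window.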
Hello,
Thanks for feedback. I'll start with these.
Regards
On Thu, Sep 7, 2023 at 7:08 PM, Gyula Fóra wrote:
> Jung,
> I don't want to sound unhelpful, but I think the best thing for you to do
> is simply to try these different models in your local env.
> It should be very easy to get started with the
Hi,
I'd like to ask about some behavior I am seeing.
I am using Kafka as a source with a TumblingProcessingTime window.
When I configure a parallelism of 1 but submit 2 instances of the same jar
to the Flink server, the data consumed by the 2 jobs is the same
(duplicated), even though they have the same kafk
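For what it's worth (not part of the thread): Flink's Kafka source assigns topic partitions to each job itself rather than joining Kafka's consumer-group rebalancing, so two independently submitted jobs each read every partition regardless of group.id. A toy illustration of that behavior, with made-up names:

```python
# Toy model: each separately submitted Flink job takes ALL partitions of
# the topic, unlike plain Kafka consumers sharing one consumer group,
# which would split the partitions between them.

PARTITIONS = [0, 1, 2, 3]

def flink_job_assignment(partitions: list) -> list:
    """Each independently submitted job reads every partition."""
    return list(partitions)

job1 = flink_job_assignment(PARTITIONS)
job2 = flink_job_assignment(PARTITIONS)
# Both jobs cover the same partitions -> every record is consumed twice.
```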