Hi all,
We're using the Table API to read in batch mode from an Iceberg table on S3,
using DynamoDB as the catalog implementation. We're hitting an NPE in the
Calcite planner. The same query works fine when using the local FS and the
in-memory catalog. Below is a snipped stack trace. Any thoughts on how
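For reference, a minimal sketch of the kind of setup described, assuming the Flink Iceberg connector; the catalog name, bucket, and table are placeholders, and the catalog-impl/io-impl values are the Iceberg AWS module classes for the DynamoDB catalog and S3 file IO:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IcebergDynamoDbBatchRead {
    public static void main(String[] args) {
        // Table API in batch execution mode
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());

        // Register an Iceberg catalog backed by DynamoDB, with data on S3.
        tEnv.executeSql(
                "CREATE CATALOG iceberg_cat WITH ("
              + " 'type'='iceberg',"
              + " 'catalog-impl'='org.apache.iceberg.aws.dynamodb.DynamoDbCatalog',"
              + " 'io-impl'='org.apache.iceberg.aws.s3.S3FileIO',"
              + " 'warehouse'='s3://my-bucket/warehouse'"
              + ")");

        // An ordinary batch query against the registered catalog; in the
        // report above this is where the planner NPE surfaces.
        tEnv.executeSql("SELECT * FROM iceberg_cat.db.tbl LIMIT 10").print();
    }
}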
My job in Kubernetes periodically fails with the following error:
2023-10-13 18:22:32,153 ERROR
org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Fatal error
occurred in the cluster entrypoint.
java.util.concurrent.CompletionException:
org.apache.flink.util.FlinkRuntimeException
Thanks for the feedback.
It looks like that is available in 1.15 and above? Is that correct? We can
look into upgrading.
We are on Flink 1.14. At the time, a year or so ago, it was the latest version
that AWS EMR offered out of the box (emr-6.7.0), and we just keep up to date
with the latest
Hi all,
The Apache Celeborn (Incubating) community is glad to announce the
new release of Apache Celeborn (Incubating) 0.3.1.
Celeborn is dedicated to improving the efficiency and elasticity of
different map-reduce engines and provides an elastic, highly efficient
service for intermediate data including
Hi Rui,
'state.backend.fs.memory-threshold' configures the threshold below which
state is stored as part of the checkpoint metadata rather than in separate
files. As a result, the JM uses its memory to merge small checkpoint files
and write them into one file. Currently FLIP-306 [1][2] is propo
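For illustration, a minimal sketch of where that knob lives. It is usually set in flink-conf.yaml; it is shown here via the Java Configuration API, and the value is illustrative (the default is 20kb):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MemoryThresholdSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // State handles below the threshold are inlined into the checkpoint
        // _metadata file that the JM assembles in memory; setting 0 writes
        // each handle to its own file, trading JM memory for many small files.
        conf.setString("state.backend.fs.memory-threshold", "20kb");
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
    }
}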
Is it possible to use the Table API inside a ProcessAllWindowFunction?
Let's say the use case is that the process function should enrich each
element by running SQL queries over all the elements in the window using the
Table API. Is this case supported in Flink? If not, what is the suggested way?
Thanks
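The thread is cut off before an answer, but a commonly suggested alternative to calling the Table API from inside a ProcessAllWindowFunction is to register the stream as a table and express the per-window logic as windowed SQL. A sketch, assuming a view named events with an event-time attribute ts has already been registered:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class WindowedSqlSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Assumes `events` (with event-time column `ts` and a watermark) was
        // registered from the original DataStream via createTemporaryView.
        // A window TVF (available since Flink 1.13) runs SQL over all
        // elements of each window without a ProcessAllWindowFunction:
        tEnv.executeSql(
                "SELECT window_start, window_end, COUNT(*) AS cnt "
              + "FROM TABLE(TUMBLE(TABLE events, DESCRIPTOR(ts), INTERVAL '1' MINUTE)) "
              + "GROUP BY window_start, window_end");
    }
}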
After a task restart on our Flink 1.13 deployment, Kafka consumption dropped
to zero. Has anyone encountered this?
We found that for some tasks, the JM memory continued to increase. I set
state.backend.fs.memory-threshold to 0, and the JM memory no longer
increased, but many small files may be written this way. Does the community
have any optimization plan for this area?
After the task was restarted several times, we found that the checkpoint it
depended on had been deleted. I checked the HDFS audit log and found that
the deletion request came from the JM.
Hangxiang Yu wrote on Sat, Sep 30, 2023, at 17:10:
> Hi,
> How did you specify the checkpoint path you restored from?
>
> Seems that you
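The reply is cut off above. For context on why a JM might delete checkpoints: by default only state.checkpoints.num-retained completed checkpoints (default 1) are kept, older ones are subsumed and removed by the JM, and non-retained checkpoints are also cleaned up when the job is cancelled. A sketch of enabling retained checkpoints on the 1.13-era API, with an illustrative interval:

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainedCheckpoints {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // illustrative interval
        // Without this, the JM deletes completed checkpoints on cancellation;
        // newer checkpoints still subsume older ones up to
        // state.checkpoints.num-retained (default 1).
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
    }
}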
Hey,
The FlinkKinesisProducer is deprecated in favour of the KinesisSink. The
new sink does not rely on KPL, so this would not be a problem here. Is
there a reason you are using the FlinkKinesisProducer instead of
KinesisSink?
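For anyone migrating, a minimal sketch of the KPL-free sink, assuming the flink-connector-aws-kinesis-streams artifact (where the builder class is KinesisStreamsSink); the region, stream name, and partition-key strategy are placeholders:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kinesis.sink.KinesisStreamsSink;

public class KinesisSinkSketch {
    public static void main(String[] args) {
        // Client properties for the AWS SDK v2 client used by the sink.
        Properties sinkProps = new Properties();
        sinkProps.put("aws.region", "us-east-1");

        KinesisStreamsSink<String> sink =
                KinesisStreamsSink.<String>builder()
                        .setKinesisClientProperties(sinkProps)
                        .setSerializationSchema(new SimpleStringSchema())
                        .setStreamName("my-stream")
                        .setPartitionKeyGenerator(e -> String.valueOf(e.hashCode()))
                        .build();

        // Attach with stream.sinkTo(sink) on a DataStream<String>.
    }
}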
Thanks for the deep dive. Generally speaking, I agree it would be
p