Hi Stefan,
Thank you for analyzing the issue and providing a quick
reply. We would like to go ahead with the programmatic approach of
rewriting the metadata with the new path. We were able to successfully
de-serialize the metadata and serialize it back to a new path, but we
still see that the
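(For context, a minimal sketch of the load/re-store step, assuming the
internal Checkpoints utility (org.apache.flink.runtime.checkpoint.Checkpoints)
as it appears in the 1.7.x sources; it is an internal API, so the exact
signatures may differ between versions, and rewriting the state handle paths
embedded in the metadata is not shown here.)

import org.apache.flink.runtime.checkpoint.Checkpoints;
import org.apache.flink.runtime.checkpoint.savepoint.Savepoint;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;

public class SavepointMetadataRewrite {
    public static void main(String[] args) throws Exception {
        String oldMetadata = args[0]; // existing .../savepoint-xxx/_metadata
        String newMetadata = args[1]; // target _metadata location

        // Read the existing savepoint metadata (internal API, Flink 1.7.x).
        Savepoint savepoint;
        try (DataInputStream in = new DataInputStream(new FileInputStream(oldMetadata))) {
            savepoint = Checkpoints.loadCheckpointMetadata(
                    in, SavepointMetadataRewrite.class.getClassLoader());
        }

        // ... the operator state handles would have to be rewritten here so
        // that they point at the new location of the state files ...

        // Write the (possibly modified) metadata back out to the new path.
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(newMetadata))) {
            Checkpoints.storeCheckpointMetadata(savepoint, out);
        }
    }
}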
Hi all,
We're exposing metrics from our Flink (v1.7.1) pipeline to Prometheus,
e.g. the total number of processed records. This works fine until one of
the tasks within the YARN application is restarted. The counter is then
reset and starts incrementing from 0 again.
How can we retain the counter values across task restarts?
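For what it's worth, the metric objects themselves live only in
TaskManager memory and are rebuilt from scratch after a failover, so one
pattern that should survive restarts is to back the count with
checkpointed operator state and expose it as a Gauge. A minimal sketch
(names like "processedRecords" are placeholders, not from the original
pipeline):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

public class CountingMap<T> extends RichMapFunction<T, T> implements CheckpointedFunction {

    private transient ListState<Long> countState; // checkpointed backing store
    private long count;                           // in-memory counter

    @Override
    public void open(Configuration parameters) {
        // The gauge is re-registered on every (re)start, but it reads the
        // value restored from state rather than starting at 0.
        getRuntimeContext().getMetricGroup()
                .gauge("processedRecords", (Gauge<Long>) () -> count);
    }

    @Override
    public T map(T value) {
        count++;
        return value;
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        countState.clear();
        countState.add(count);
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        countState = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("processedRecords", Long.class));
        count = 0;
        for (Long c : countState.get()) {
            count += c;
        }
    }
}

Alternatively, the resets can be left to the Prometheus side, since
rate() and increase() are designed to tolerate counter resets.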
Hi all,
Our intention is to create a savepoint from the current prod pipeline
(running on Flink 1.7.1) and bring up another one behind the scenes from
this savepoint, to avoid reprocessing all the data and rebuilding the
local state.
It looks like it's technically possible, but we're unsure about t
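(For reference, a sketch of the CLI sequence such a hand-over would
typically use; job IDs, paths and jar names below are placeholders:)

# trigger a savepoint on the running prod job
bin/flink savepoint <jobId> hdfs:///flink/savepoints

# start the second pipeline from that savepoint
bin/flink run -s hdfs:///flink/savepoints/savepoint-<jobId>-<suffix> -d new-pipeline.jar

The -s/--fromSavepoint flag restores the new job's state from the
savepoint, and --allowNonRestoredState (-n) can be added if parts of the
savepoint should be skipped. For the state to map cleanly, the stateful
operators in both pipelines need the same explicitly assigned uid()s.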
Hi,
I’d like to sink my data into HDFS using SequenceFileAsBinaryOutputFormat
with compression, and I found a way via
https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/hadoop_compatibility.html.
The code works, but I’m curious to know, since it creates a mapreduce Job
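(In case it helps the thread, a minimal sketch of the write side along
the lines of the Hadoop-compatibility docs linked above; the output path,
codec and class names are placeholders, not the original code:)

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.compress.DefaultCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class BinarySequenceFileSink {
    public static void sink(DataSet<Tuple2<BytesWritable, BytesWritable>> data,
                            String outputPath) throws Exception {
        // The Job only carries the Hadoop configuration for the output format.
        Job job = Job.getInstance();
        SequenceFileOutputFormat.setCompressOutput(job, true);
        SequenceFileOutputFormat.setOutputCompressorClass(job, DefaultCodec.class);
        SequenceFileOutputFormat.setOutputCompressionType(job, SequenceFile.CompressionType.BLOCK);
        FileOutputFormat.setOutputPath(job, new Path(outputPath));

        // Wrap the mapreduce output format so Flink can drive it directly.
        HadoopOutputFormat<BytesWritable, BytesWritable> hadoopOF =
                new HadoopOutputFormat<>(new SequenceFileAsBinaryOutputFormat(), job);
        data.output(hadoopOF);
    }
}

On the mapreduce Job question: as far as I know the Job instance only
acts as a container for the Hadoop configuration here; Flink runs the
wrapped OutputFormat itself and no MapReduce job is ever submitted.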
Hi,
I’m using Flink 1.5.6 and Hadoop 2.7.1.
My requirement is to read an HDFS sequence file (SequenceFileInputFormat)
and then write it back to HDFS (SequenceFileAsBinaryOutputFormat with
compression). The code below won’t work until I copy the
flink-hadoop-compatibility jar to FLINK_HOME/lib. I find
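(Again just a sketch of how the read side is usually wired up with the
mapreduce API; the key/value classes (LongWritable/BytesWritable) and the
input path are placeholders for whatever the files actually contain:)

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class SequenceFileSource {
    public static DataSet<Tuple2<LongWritable, BytesWritable>> read(
            ExecutionEnvironment env, String inputPath) throws Exception {
        // The Job again only carries the Hadoop configuration (input path etc.).
        Job job = Job.getInstance();
        FileInputFormat.addInputPath(job, new Path(inputPath));

        // Wrap the mapreduce SequenceFileInputFormat in Flink's HadoopInputFormat.
        HadoopInputFormat<LongWritable, BytesWritable> hadoopIF = new HadoopInputFormat<>(
                new SequenceFileInputFormat<LongWritable, BytesWritable>(),
                LongWritable.class, BytesWritable.class, job);
        return env.createInput(hadoopIF);
    }
}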