org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Checkpoint 9353
expired before completing
I might know why this happened in the first place. Our sink operator does a
synchronous HTTP POST, which had a 15-minute latency spike when this all
started. This could block Flink threads and prevent
Hi team, I have a similar use case; do we have any answers on this?
When we trigger a savepoint, can we store that information in ZK as well?
Then I could avoid S3 file listing and would not have to use other external
services.
On Wed, Oct 25, 2017 at 11:19 PM vipul singh wrote:
> As a followup to above, is
Hi All,
Is there a Flink Elasticsearch connector for Elasticsearch 6.0 and Scala 2.11? I
couldn't find one listed here:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/elasticsearch.html
Regards,
Rahul Raj
Hi team, I have one follow-up question on this.
There is a discussion on resuming jobs from *a saved external checkpoint*;
I feel there are two aspects to that topic.
*1. I have no changes to the job; I just want to resume it after a
failure.*
I can see this automatically happen with ZK enab
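For reference, externalized checkpoints plus ZooKeeper-based HA are configured in `flink-conf.yaml`. The keys below are the standard Flink options; the bucket paths and ZK hosts are placeholders, not values from this thread:

```yaml
# Where externalized checkpoint metadata is written
state.checkpoints.dir: s3://my-bucket/checkpoints

# ZooKeeper-based high availability, so job recovery happens automatically
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: s3://my-bucket/ha
```

With HA enabled, a failed job is restored from the latest checkpoint without manual intervention; manual resumption uses `bin/flink run -s <checkpoint-or-savepoint-path> ...`.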
Hi Mike,
The rationale behind implementing the FlinkFixedPartitioner as the default
is so that each Flink sink partition (i.e. one sink parallel subtask) maps
to a single Kafka partition.
One other thing to clarify:
If you set the partitioner to null, the partitioning is based on a hash of
the re
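The fixed mapping described above can be sketched as a simple modulo of the subtask index over the Kafka partition count. This is an illustration of the idea, not the connector's source code:

```java
public class FixedPartitionerSketch {
    // Each sink subtask always writes to one Kafka partition,
    // chosen by subtask index modulo the number of partitions.
    static int partitionFor(int subtaskIndex, int numKafkaPartitions) {
        return subtaskIndex % numKafkaPartitions;
    }

    public static void main(String[] args) {
        // With 3 sink subtasks and 2 Kafka partitions:
        for (int subtask = 0; subtask < 3; subtask++) {
            System.out.println("subtask " + subtask + " -> partition "
                    + partitionFor(subtask, 2));
        }
        // subtask 0 -> partition 0
        // subtask 1 -> partition 1
        // subtask 2 -> partition 0
    }
}
```

Note the consequence: if the sink parallelism is lower than the partition count, some Kafka partitions receive no data at all.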
Is interactive development possible with Flink, like with Spark in a REPL?
When trying to use the console mode of SBT I get the following error:
java.lang.Exception: Deserializing the OutputFormat
(org.apache.flink.api.java.Utils$CollectHelper@210d5aa7) failed: Could not
read the user code wrapper
Thanks Aljoscha. I have not tried with 1.3. I will try and check the
behavior.
Regarding setting UIDs on operators from Beam, do you know if that's
something planned for a near-future release?
Thanks,
Jins George
On 11/30/2017 01:48 AM, Aljoscha Krettek wrote:
Hi,
I think you might be run
Dear Flink community,
The Call for Presentations for Flink Forward San Francisco 2018 is now
open! Share your experiences and best practices in stream processing,
real-time analytics, and managing mission-critical Flink deployments in
production. We’re happy to receive your talk ideas until Decemb
Hi,
I think you might be running into a problem that is hard to solve with Flink
1.2 and Beam. As you noticed, Beam doesn't assign UIDs to operators, which
is a problem. Flink 1.3 and even more so Flink 1.4 are a bit more
lenient in accepting changes to the graph, so you might
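To illustrate why missing UIDs hurt: when no explicit `uid(...)` is set, operator IDs are derived from the job graph's topology, so inserting or removing an operator can shift the auto-generated IDs of everything downstream and break state matching on restore. The following is a deliberately simplified toy model of that effect (Flink's real ID generation hashes the topology; it does not literally use positional indices like this):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AutoUidSketch {
    // Toy model: derive an operator "UID" from its name and its position
    // in the chain, mimicking how auto-generated IDs depend on topology.
    static List<String> autoUids(List<String> operators) {
        List<String> uids = new ArrayList<>();
        for (int i = 0; i < operators.size(); i++) {
            uids.add(operators.get(i) + "@" + i);
        }
        return uids;
    }

    public static void main(String[] args) {
        List<String> v1 = Arrays.asList("source", "map", "sink");
        List<String> v2 = Arrays.asList("source", "filter", "map", "sink");
        System.out.println(autoUids(v1)); // [source@0, map@1, sink@2]
        System.out.println(autoUids(v2)); // [source@0, filter@1, map@2, sink@3]
        // "map" moved from map@1 to map@2, so its state could not be matched
        // on restore; an explicit, stable uid("map") would have survived
        // the graph change.
    }
}
```

In plain Flink the fix is `stream.map(...).uid("my-map")`; the difficulty on this thread is that Beam, not the user, builds the Flink graph.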