Hi,
We are using processing timers to implement some state clean-up logic.
After switching from FsStateBackend to RocksDB, we encounter a lot of segfaults
from the Time Trigger threads when accessing/clearing state values.
We currently use the latest 1.3-SNAPSHOT, with the patch upgrading RocksDB
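For reference, the pattern in question looks roughly like this (a minimal sketch of my own, assuming the keyed ProcessFunction API; all names are illustrative). The point is that onTimer() is the callback where the state access/clear happens:

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    // Sketch only: a processing-time timer that clears keyed state after a TTL.
    // Must run on a keyed stream, otherwise state/timers are unavailable.
    public class TtlCleanup extends ProcessFunction<String, String> {
        private static final long TTL_MS = 60_000;
        private transient ValueState<String> lastValue;

        @Override
        public void open(Configuration parameters) {
            lastValue = getRuntimeContext().getState(
                new ValueStateDescriptor<>("lastValue", String.class));
        }

        @Override
        public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
            lastValue.update(value);
            ctx.timerService().registerProcessingTimeTimer(
                ctx.timerService().currentProcessingTime() + TTL_MS);
            out.collect(value);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
            lastValue.clear(); // the state access/clear that runs from the timer callback
        }
    }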
Hi,
I want to override the method “openNewPartFile” in the BucketingSink class
so that it includes a timestamp in the part file name whenever it
rolls a new part file.
Can someone share some thoughts on how I can do this?
Thanks a ton in advance.
Regards,
Raja.
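As far as I can tell, openNewPartFile() is private in BucketingSink, so it cannot be overridden directly. One workaround (a sketch of my own, assuming the Bucketer interface from flink-connector-filesystem) is to encode the timestamp in the bucket path instead, so every rolled part file lands under a time-stamped directory:

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import org.apache.flink.streaming.connectors.fs.Clock;
    import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer;
    import org.apache.hadoop.fs.Path;

    // Sketch: part files end up under a directory named after the current
    // (processing) time, e.g. base/2017-10-05--2236/part-0-3.
    public class TimestampBucketer<T> implements Bucketer<T> {
        private static final long serialVersionUID = 1L;

        @Override
        public Path getBucketPath(Clock clock, Path basePath, T element) {
            String ts = new SimpleDateFormat("yyyy-MM-dd--HHmm")
                .format(new Date(clock.currentTimeMillis()));
            return new Path(basePath + "/" + ts);
        }
    }

Wired up with sink.setBucketer(new TimestampBucketer<>()); this is essentially what the built-in DateTimeBucketer does as well.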
Fabian,
Just to follow up on this: I took the patch, compiled that class, and stuck
it into the existing 1.3.2 jar, and all is well. (I couldn't get all of
Flink to build correctly.)
Thank you!
On Wed, Sep 20, 2017 at 3:53 PM, Garrett Barton
wrote:
> Fabian,
> Awesome! After your initial email
Hi, I am running Flink 1.3.2 on Kubernetes. I am not sure why sometimes one
of my TMs gets killed. Is there a way to debug this? Thanks
= Logs
2017-10-05 22:36:42,631 INFO  org.apache.flink.runtime.instance.InstanceManager  - Registered TaskManager at fps-flink-taskmanager-2384273
Hi Ken,
I don't have much experience with streaming iterations.
Maybe Aljoscha (in CC) has an idea of what is happening and whether it can be
prevented.
Best, Fabian
2017-10-05 1:33 GMT+02:00 Ken Krugler :
> Hi all,
>
> I’ve got a streaming topology with an iteration, and a RichAsyncFunction
> in that
Hi,
Is there a way to catch the timeouts thrown from the async I/O operator?
We use the async I/O API to make some high-latency HTTP API calls. Currently,
when the underlying HTTP connection hangs and fails to time out within the
configured time, the async timeout kicks in and throws an exception, which
causes the job to restart.
We are using Flink 1.3.1 in Standalone mode with an HA job manager setup.
~
Karthik
On Fri, Oct 6, 2017 at 8:22 PM, Karthik Deivasigamani
wrote:
> Hi,
> I'm noticing a weird issue with our Flink streaming job. We use the async
> I/O operator which makes an HTTP call, and in certain cases when the as
Hi,
I'm noticing a weird issue with our Flink streaming job. We use the async
I/O operator which makes an HTTP call, and in certain cases when the async
task times out, it throws an exception, causing the job to restart.
java.lang.Exception: An async function call terminated with an
exception. Fai
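For what it's worth, Flink 1.3 offers no hook for this as far as I know; later releases add a timeout() method to AsyncFunction that can be overridden so a timed-out call emits a fallback record instead of failing the job. A sketch of my own against the newer ResultFuture-based API (names are mine):

    import java.util.Collections;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    // Sketch: swallow timeouts by completing the future with a fallback value.
    public class HttpLookup extends RichAsyncFunction<String, String> {
        @Override
        public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
            // A real implementation would fire the HTTP call on an async client
            // and complete the future from its callback; completed inline here
            // only to keep the sketch self-contained.
            resultFuture.complete(Collections.singleton("OK:" + key));
        }

        @Override
        public void timeout(String key, ResultFuture<String> resultFuture) {
            // Instead of the default completeExceptionally(...), emit a marker
            // record so the job keeps running.
            resultFuture.complete(Collections.singleton("TIMEOUT:" + key));
        }
    }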
Hi,
As you noticed, Flink currently does not put Source-X and Throttler-X (for some
X) in the same task slot (TaskManager). In the low-level execution system,
there are two connection patterns: ALL_TO_ALL and POINTWISE. Flink will only
schedule Source-X and Throttler-X on the same slot when the
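To illustrate with a toy example of my own (not from the original thread): the pattern depends on how the two operators are wired. forward() keeps a POINTWISE connection between source and throttler, while keyBy() or rebalance() produces ALL_TO_ALL:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ConnectionPatterns {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setParallelism(1); // forward() requires matching parallelism

            // Stand-in for the throttling logic.
            MapFunction<String, String> throttler = new MapFunction<String, String>() {
                @Override
                public String map(String value) { return value; }
            };

            DataStream<String> source = env.fromElements("a", "b", "c");

            // POINTWISE: subtask i of the source feeds only subtask i downstream.
            source.forward().map(throttler).name("Throttler-pointwise");

            // ALL_TO_ALL: every source subtask may feed every downstream subtask.
            source.rebalance().map(throttler).name("Throttler-all-to-all");

            env.execute("connection-patterns");
        }
    }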
Thank you for confirming.
I think this is a critical bug. In essence, any checkpoint store
(HDFS/S3/file) will lose state if it is unavailable at resume. This
becomes all the more painful with your confirming that "failed checkpoints
kill the job", because essentially it means that if the remote
Hi,
I am running a simple streaming Flink job (Flink versions 1.3.2 and 1.3.1)
whose source and sink are a Kafka cluster 0.10.0.1.
I am testing savepoints by stopping/resuming the job, and when I checked the
validity of the data sunk during the stop time I observed that some of the
events have been lost
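One thing to rule out on the producer side (my guess, not a confirmed diagnosis): with the 0.10 producer, at-least-once delivery requires flushing on checkpoints and failing on send errors, roughly like this sketch (broker address and topic are hypothetical):

    import java.util.Properties;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class AtLeastOnceKafkaSink {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(10_000);
            DataStream<String> stream = env.fromElements("a", "b", "c");

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker

            FlinkKafkaProducer010<String> producer =
                new FlinkKafkaProducer010<>("my-topic", new SimpleStringSchema(), props);
            producer.setFlushOnCheckpoint(true); // block checkpoints until in-flight records are acked
            producer.setLogFailuresOnly(false);  // fail the job on send errors instead of only logging

            stream.addSink(producer);
            env.execute("at-least-once-kafka-sink");
        }
    }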
Hello
I have set up a cluster and added task managers manually with bin/taskmanager.sh
start.
I noticed that if I have 5 task managers with one slot each and start a job
with -p5, then if I stop a task manager the job will fail even if there are 4
more task managers.
Is this expected? (I turned off
Hi,
Yes, the AvroSerializer currently still partially uses Kryo for object copying.
Also, right now, I think the AvroSerializer is only used when the type is
recognized as a POJO and `isForceAvroEnabled` is set on the job
configuration. I’m not sure if that is always possible.
As mentioned
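For completeness, the flag mentioned above is set through the ExecutionConfig (a trivial sketch):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ForceAvro {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Force Avro (rather than the POJO/Kryo path) for POJO types;
            // this is the setter behind isForceAvroEnabled.
            env.getConfig().enableForceAvro();
        }
    }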
Hi,
I think you can implement that by writing a custom Trigger that combines the
functionality of CountTrigger and EventTimeTrigger; see the sketch below. You
should keep in mind, though, that having windows of size 1 hour and a slide of
5 seconds will lead to a lot of duplication because in Flink every sliding
window is con
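A sketch of such a combined trigger (my own, untested; it fires on whichever comes first, the element count or the end-of-window watermark, mirroring CountTrigger and EventTimeTrigger):

    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.common.state.ReducingState;
    import org.apache.flink.api.common.state.ReducingStateDescriptor;
    import org.apache.flink.api.common.typeutils.base.LongSerializer;
    import org.apache.flink.streaming.api.windowing.triggers.Trigger;
    import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

    public class CountOrEventTimeTrigger extends Trigger<Object, TimeWindow> {
        private final long maxCount;
        private final ReducingStateDescriptor<Long> countDesc =
            new ReducingStateDescriptor<>("count", new Sum(), LongSerializer.INSTANCE);

        public CountOrEventTimeTrigger(long maxCount) { this.maxCount = maxCount; }

        @Override
        public TriggerResult onElement(Object element, long timestamp, TimeWindow window,
                                       TriggerContext ctx) throws Exception {
            // Always arm the event-time timer for the end of the window.
            ctx.registerEventTimeTimer(window.maxTimestamp());
            ReducingState<Long> count = ctx.getPartitionedState(countDesc);
            count.add(1L);
            if (count.get() >= maxCount) {
                count.clear();
                return TriggerResult.FIRE;
            }
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
            return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }

        @Override
        public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
            ctx.deleteEventTimeTimer(window.maxTimestamp());
            ctx.getPartitionedState(countDesc).clear();
        }

        private static class Sum implements ReduceFunction<Long> {
            @Override
            public Long reduce(Long a, Long b) { return a + b; }
        }
    }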
Hi Dustin,
Are you using S3 for a Flink source / sink / streaming state backend? Or is it
simply used in one of your operators?
I’m assuming the latter, since you mentioned “doing a search on an S3 system”.
For this, I think it would make sense to simply pass the job-specific S3
endpoint / credentials
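Something along these lines (a sketch of my own, assuming the AWS SDK v1 builder API; bucket and class names are hypothetical) would keep the S3 client entirely job-specific:

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;

    // Sketch: build the S3 client inside the operator with job-specific
    // endpoint/credentials, independent of the cluster-wide filesystem config.
    public class S3Lookup extends RichMapFunction<String, String> {
        private final String endpoint, region, accessKey, secretKey;
        private transient AmazonS3 s3;

        public S3Lookup(String endpoint, String region, String accessKey, String secretKey) {
            this.endpoint = endpoint; this.region = region;
            this.accessKey = accessKey; this.secretKey = secretKey;
        }

        @Override
        public void open(Configuration parameters) {
            s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region))
                .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials(accessKey, secretKey)))
                .build();
        }

        @Override
        public String map(String key) {
            // e.g. read a small object per record (illustrative only)
            return s3.getObjectAsString("my-bucket", key);
        }
    }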
Hi Bo,
I'm not familiar with Mesos deployments, but I'll forward this to Till or Eron
(in CC) who perhaps could provide some help here.
Cheers,
Gordon
On 2 October 2017 at 8:49:32 PM, Bo Yu (yubo1...@gmail.com) wrote:
Hello all,
This is Bo. I ran into some problems when I tried to use Flink in my
Hi!
Since you’re running on AWS EMR, I’m assuming you’re deploying your Flink job /
cluster on YARN?
If so, make sure to also specify the YARN application id.
You should do that by:
flink savepoint <jobId> -yid <yarnApplicationId>
Cheers,
Gordon
On 2 October 2017 at 9:39:09 PM, ant burton (apburto...@gmail.com) wrote:
Hi
Hi Garrett,
thanks for reporting back!
Glad you could resolve the issue :-)
Best, Fabian
2017-10-05 23:21 GMT+02:00 Garrett Barton :
> Fabian,
>
> Turns out I was wrong. My flow was in fact running in two separate jobs
> due to me trying to use a local variable calculated by
> ...distinct().c