Hi Jürgen,
I got your point from the log, but I think nothing can be done from the Flink side. The task receives the cancel command from the master, and it will dispose the operator after the task thread is interrupted.
Maybe you can check whether there are parameters you can set to wait longer for the ack.
In my JobManager log I see this exception; it is probably the root cause of why the whole job is killed… Is there a memory problem in the JobManager? Any clue about the error below?
I ran the yarn-session, and my flink-conf.yaml is pretty much unmodified:
jobmanager.heap.mb: 256
taskmanager.heap.mb: 512
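Given the suspected JobManager memory problem, a minimal sketch of giving it more heap; the sizes below are illustrative assumptions, not recommendations, and the yarn-session.sh flags are the Flink 1.2 ones:

# flink-conf.yaml
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 2048

# or per YARN session, overriding the config file:
./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048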
Here is the line where the NPE was thrown:
mainThread.interrupt(); // the main thread may be sleeping for the discovery interval
I wonder if runFetcher() encountered running being false - otherwise
mainThread should not have been null.
Looks like we should check whether mainThread is null when s
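For illustration, a minimal sketch of the suggested guard, assuming the field and method names mirror KinesisDataFetcher (this is not the actual connector source):

public class FetcherShutdownSketch {
    private volatile boolean running = true;
    // stays null if runFetcher() returned early because running was already false
    private volatile Thread mainThread;

    public void shutdownFetcher() {
        running = false;
        if (mainThread != null) {
            // the main thread may be sleeping for the discovery interval
            mainThread.interrupt();
        }
    }
}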
The TaskManager log does not point to a line in my code… but it seems like the error occurs while it is trying to fetch a Kinesis record inside the connector jar:
red sequence number 49572261908151269541343187919820576263466496304458235906
2017-04-13 23:28:23,470 INFO
org.apache.flink.streaming.connector
Hi Ted,
Sorry for my big font earlier… it was not intended ☺
I am on Flink 1.2.0.
I built flink-connector-kinesis_2.10-1.2.0.jar from source and included it in the fat jar I am running.
Followed this: http://www.kidder.io/2017/02/15/flink-kinesis-streaming-connector/
From code I read a Kinesis stream usi
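A minimal sketch of what reading a Kinesis stream with the 1.2 connector typically looks like; the stream name and region match the shard info quoted elsewhere in this thread, and the credential setting is an assumption:

import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KinesisReadSketch {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.put(ConsumerConfigConstants.AWS_REGION, "us-west-2");
        // assumption: default AWS credential provider chain
        config.put(ConsumerConfigConstants.AWS_CREDENTIALS_PROVIDER, "AUTO");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> records = env.addSource(new FlinkKinesisConsumer<>(
                "dev-ingest-kinesis-us-west-2", new SimpleStringSchema(), config));
        records.print();
        env.execute("kinesis-read-sketch");
    }
}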
Can you give us a bit more information?
- release of Flink
- snippet of your code
Thanks
Hello Flink,
Is it possible to use contextual data as part of Complex Event Processing (CEP)?
I would like to create some tables associated with each key. These tables would need to be updated as information streams in. The values in these tables would also have to be referenced as part of the rule log
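A minimal sketch of one way to model this, assuming the per-key "table" is kept in Flink keyed state and consulted before the CEP pattern; all names here are hypothetical:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

// Both inputs are keyed by f0; flatMap1 carries events, flatMap2 carries table updates.
public class ThresholdGate extends
        RichCoFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>, Tuple2<String, Long>> {

    private transient ValueState<Long> threshold;

    @Override
    public void open(Configuration parameters) {
        threshold = getRuntimeContext().getState(
                new ValueStateDescriptor<>("threshold", Long.class));
    }

    @Override
    public void flatMap1(Tuple2<String, Long> event, Collector<Tuple2<String, Long>> out) throws Exception {
        Long t = threshold.value();
        if (t != null && event.f1 > t) {
            out.collect(event); // only events above the per-key value are emitted
        }
    }

    @Override
    public void flatMap2(Tuple2<String, Long> update, Collector<Tuple2<String, Long>> out) throws Exception {
        threshold.update(update.f1); // the per-key "table" entry, updated as data streams in
    }
}

The filtered stream (events.keyBy(0).connect(updates.keyBy(0)).flatMap(new ThresholdGate())) can then feed CEP.pattern(...), so the rule only sees events that passed the per-key check.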
Has someone encountered this error… I am using the DataStream API to read from a Kinesis stream. This happens intermittently and the Flink job dies.
reamShard{streamName='dev-ingest-kinesis-us-west-2', shard='{ShardId:
shardId-0009,HashKeyRange: {StartingHashKey:
30625413022884461711703714
I concur with Nico. We're actively working on improving Flink-on-Docker,
and this is a valid concern.
--
Patrick Lucas
On Tue, Apr 11, 2017 at 11:01 AM, Nico Kruber wrote:
> Hi Kat,
> Yes, this looks like it may be an issue; please create the Jira ticket.
>
> Some background:
> Although docker-
Hello,
We're going to migrate some applications that consume data from Kafka 0.8 from Flink 1.0 to Flink 1.2.
We are wondering whether the offset-committing mechanism changed between those two versions: is there a risk that the Flink 1.2-based application will start with no offset (thus will use eit
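For reference, a minimal sketch of the kind of 0.8 consumer setup in question on Flink 1.2; the topic, group id, and addresses are placeholders:

import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class Kafka08SourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // with checkpointing enabled, offsets are committed back on completed checkpoints
        env.enableCheckpointing(5000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("zookeeper.connect", "localhost:2181"); // the 0.8 consumer needs ZooKeeper; placeholder
        props.setProperty("group.id", "my-migrated-app");         // placeholder

        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer08<>("my-topic", new SimpleStringSchema(), props));
        stream.print();
        env.execute("kafka-08-sketch");
    }
}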
I failed to reproduce your error.
How did you set up your project: SBT or Maven?
Maybe its dependency management is referring to an old version of Flink? Maybe different versions of Scala are mixed?
In that case, you may try setting up a new project:
https://ci.apache.org/projects/flink/flink-docs
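In case it helps, a quick way to check for a mixed-version classpath (assuming a Maven build) is to inspect the dependency tree and look for Flink artifacts with differing versions or mixed Scala suffixes (_2.10 vs. _2.11):

mvn dependency:tree -Dincludes=org.apache.flink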
Hi zhijiang,
I checked this value and I haven't configured it, so I think it should be the default 10s. I checked how long the flink cancel command took with the time command, and it finished after 6 seconds.
After filtering out the messages of one Sink, it looks like it interrupts it in t