Any update on when 1.3.1 will be available?
Our current copy is 1.2.0, but that has a separate issue (invalid type code:
00).
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/invalid-type-code-00-td13326.html#a13332
I think CheckpointListener is what I need?
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Use-Single-Sink-For-All-windows-tp13475p13653.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at
Nabble.com.
Thanks Aljoscha for your response.
I will give it a try.
1- Flink calls the *invoke* method of SinkFunction to dispatch aggregated
information. My follow-up question here is: while the snapshotState method is
in progress, if the sink receives another update then we might have mixed
records, however per doc
Is there any possibility to trigger the sink operator on completion of a
checkpoint?
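The pattern this question is circling is usually implemented by pairing a buffer with Flink's CheckpointedFunction and CheckpointListener hooks: records accumulate between checkpoints, are frozen at snapshot time, and are only written out once the checkpoint completes, so an in-flight snapshot never mixes with newer updates. Below is a minimal plain-Java sketch of that buffer (the class and method names are mine, not Flink API); in a real sink the three methods would be driven from invoke(), snapshotState(), and notifyCheckpointComplete() respectively.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical helper sketching the buffer-and-flush-on-checkpoint pattern.
 * In a real Flink sink: add() is called from SinkFunction.invoke(),
 * snapshot() from CheckpointedFunction.snapshotState(), and
 * completeCheckpoint() from CheckpointListener.notifyCheckpointComplete().
 */
class PendingCommitBuffer<T> {
    private final List<T> current = new ArrayList<>();          // records since the last snapshot
    private final Map<Long, List<T>> staged = new HashMap<>();  // records frozen per checkpoint id

    /** Record arrived; just buffer it. */
    void add(T record) {
        current.add(record);
    }

    /**
     * Freeze everything seen so far under this checkpoint id. Records that
     * arrive after this call go into a fresh buffer, so the snapshot in
     * progress never mixes with newer updates.
     */
    void snapshot(long checkpointId) {
        staged.put(checkpointId, new ArrayList<>(current));
        current.clear();
    }

    /**
     * Release the frozen batch once the checkpoint is globally complete;
     * this is the point at which it is safe to write to the database.
     */
    List<T> completeCheckpoint(long checkpointId) {
        List<T> batch = staged.remove(checkpointId);
        return batch == null ? new ArrayList<>() : batch;
    }
}
```

A record added after snapshot() lands in the next checkpoint's batch, which answers the "mixed records" concern: the sink sees a clean cut per checkpoint.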
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Deterministic-Update-tp13580.html
Because of parallelism I am seeing DB contention. Wondering if I can merge
the sinks of multiple windows and insert in a batch.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Use-Single-Sink-For-All-windows-tp13475p13525.html
Thanks for sharing the ticket reference. Is there any timeline, as this is a
blocker?
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Exception-in-Flink-1-3-0-tp13493p13500.html
Yes, I see that. So shall I update the code from RichProcessFunction to
RichFunction (base object)? The implementation throws a compile exception, as
RichProcessFunction has been removed in 1.3.
After the upgrade I started getting this exception. Is this a bug?
2017-06-05 23:45:03,423 INFO
org.apache.flink.runtime.executiongraph.ExecutionGraph- UTCStream ->
Sink: UTC sink (2/12) (f78ef7f7368d27f414ebb9d0db7a26c8) switched from
RUNNING to FAILED.
java.lang.Exception: Could not perfor
With 1.3, what should we use for ProcessFunction?
org.apache.flink.streaming.api.functions.RichProcessFunction
org.apache.flink.streaming.api.functions.ProcessFunction.{OnTimerContext,
Context}
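For reference, in 1.3 the separate RichProcessFunction is gone because ProcessFunction itself extends AbstractRichFunction, so the rich-function lifecycle (open(), getRuntimeContext()) is available directly, and Context / OnTimerContext are nested types of ProcessFunction. A minimal skeleton of how that looks (a sketch only; it needs the Flink dependency to compile):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// In 1.3, ProcessFunction already extends AbstractRichFunction,
// so there is no separate RichProcessFunction any more.
public class MyProcess extends ProcessFunction<String, String> {

    @Override
    public void open(Configuration parameters) {
        // rich-function lifecycle hook, available directly
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        out.collect(value);
        // register a timer via ctx.timerService() to get onTimer callbacks
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        // fires for timers registered in processElement
    }
}
```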
Is it possible to return all window updates to a single sink (aggregated
collection)?
The reason I am asking is that we are using MySQL for the sink. I am
wondering if I can update all of them in a single batch so as to avoid
contention.
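One way to get the "single batch" effect without merging operators is to have the sink buffer window results and emit one multi-row INSERT instead of one round trip per window pane. A small sketch of building that statement (table and column names here are made up for illustration):

```java
import java.util.List;
import java.util.StringJoiner;

/**
 * Hypothetical helper: turn N buffered rows into a single multi-row
 * "INSERT ... VALUES (...), (...)" statement so all window results reach
 * MySQL in one round trip instead of one per window.
 */
class BatchInsertBuilder {

    /** Build "INSERT INTO t (c1, c2) VALUES (?, ?), (?, ?), ..." with one placeholder group per row. */
    static String buildSql(String table, List<String> columns, int rowCount) {
        StringJoiner cols = new StringJoiner(", ");
        StringJoiner oneRow = new StringJoiner(", ", "(", ")");
        for (String c : columns) {
            cols.add(c);
            oneRow.add("?");
        }
        StringJoiner values = new StringJoiner(", ");
        for (int i = 0; i < rowCount; i++) {
            values.add(oneRow.toString());
        }
        return "INSERT INTO " + table + " (" + cols + ") VALUES " + values;
    }
}
```

The resulting string would be handed to a JDBC PreparedStatement and the buffered rows bound in order; appending an ON DUPLICATE KEY UPDATE clause is a common way to make the batch idempotent on replay.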
Never mind, I found it. Backpressure was caused by the AWS RDS MySQL instance.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoints-very-slow-with-high-backpressure-tp12762p13468.html
Enabled the info log; it seems it is stuck:
==> /mnt/ephemeral/logs/flink-flink-jobmanager-0-vpc2w2-rep-stage-flink1.log <==
2017-06-01 12:45:18,229 INFO
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering
checkpoint 1 @ 1496321118221
==>
/mnt/ephemeral/logs/flink-flink-taskmanag
I tried extending the timeout to 1 hour but no luck; it is still timing out,
and there is no exception in the log file, so I am guessing something is
stuck. I will dig further.
Here are the configuration details:
Standalone cluster, checkpoints stored in S3.
I just have 217680 messages in 24 partitions.
Any idea?
So what is the resolution? Flink is consuming messages from Kafka. Flink went
down about a day ago, so now it has to process 24 hours' worth of events.
But I am hitting backpressure, and right now checkpoints are timing out. Is
there any recommendation for how to handle this situation?
Seems like triggers are
Thanks, Aljoscha Krettek.
So the results will not be deterministic for late events. For an idempotent
update, I would need to derive an additional key based on the current event
time if events are late and attach it to the aggregator, which is probably
possible by doing some function(maxEventTime, actualEventTime)
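One way the function(maxEventTime, actualEventTime) idea can be sketched: derive the key from the event's own window when it is on time, but fall back to the window of the highest event time seen so far when it is late, so late replays keep updating the same row idempotently. Everything below (class name, window size, bucketing scheme) is an assumption for illustration:

```java
import java.time.Duration;

/**
 * Hypothetical sketch of function(maxEventTime, actualEventTime): a late
 * event is keyed by the current (max) window instead of its own, so
 * reprocessed late events land on the same aggregate row.
 */
class LateEventKeys {
    // assumed window size for the sketch
    static final long WINDOW_MILLIS = Duration.ofMinutes(5).toMillis();

    /**
     * Key by the event's own window if it is on time; if its window has
     * already closed relative to maxEventTime, key by the current window.
     */
    static long bucketKey(long maxEventTime, long actualEventTime) {
        long eventWindow = actualEventTime / WINDOW_MILLIS;
        long currentWindow = maxEventTime / WINDOW_MILLIS;
        return eventWindow < currentWindow ? currentWindow : eventWindow;
    }
}
```

With 5-minute windows, an event timestamped in window 0 arriving while maxEventTime is in window 2 gets key 2, the same key every time it is replayed.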
Which Flink version are you using? 1.2
What is your job doing (e.g. operators that you are using)? A ProcessFunction
to determine if an event is late, change its event time to the current time,
and then window.
Which operator throws this exception? I will have to dig into it further.
Which state backend are you using? MySQL.
Sporadically I am seeing this error. Any idea?
java.lang.IllegalStateException: Could not initialize keyed state backend.
at
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:286)
at
org.apache.flink.streaming.api.operators.A
Could you elaborate on this more? If I set the window time to the max, does
it mean my window will stay around for an infinite time frame?
Wouldn't this hit memory overflow over time?