I also get a warning that the CodeCache is full around that time. It's printed
by the JVM and doesn't have a timestamp. But I suspect it's because there were
so many failure recoveries from checkpoints and the SQL queries were dynamically
compiled too many times.
Java HotSpot(TM) 64-Bit Server VM warning: CodeC
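If the code cache really is being exhausted by repeated dynamic compilation, one mitigation is to enlarge the JVM's reserved code cache for the task managers. A sketch, assuming a standalone setup where JVM options are passed via flink-conf.yaml (the 512m value is an arbitrary example, not a recommendation):

```yaml
# flink-conf.yaml: extra JVM options for the TaskManager processes.
# ReservedCodeCacheSize enlarges the JIT code cache (the 64-bit server VM
# default is 240m); PrintCodeCache reports cache usage when the JVM exits.
env.java.opts.taskmanager: "-XX:ReservedCodeCacheSize=512m -XX:+PrintCodeCache"
```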
Here is the exception and error log:
2018-05-29 14:41:04,762 WARN org.apache.flink.runtime.taskmanager.Task
- Task 'over: (PARTITION BY: uid, ORDER BY: proctime, RANGEBETWEEN 8640
PRECEDING AND CURRENT ROW, select: (id, uid, proctime, group_concat($7) AS
w0$o0)) -> se
Hi,
when I stop my Flink application in a standalone cluster, one of the tasks can
NOT exit gracefully. And the task managers are lost (or detached?). I can't see
them in the web UI. However, the task managers are still running on the slave
servers.
What could be the possible cause? My applicati
Hi,
I am using Flink SQL 1.5.0. My application throws an NPE. And after it recovers
from the checkpoint automatically, it throws an NPE immediately from the same
line of code. My application reads messages from Kafka, converts the datastream
into a table, issues an over-window aggregation and writes the result in
Hi Esa,
I think having more than one env.execute() is an anti-pattern in Flink.
env.execute() behaves differently depending on the env. For local execution, it
will generate the Flink job graph and start a local mini cluster in the
background to run the job graph directly.
For the remote case, it will generate the f
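A minimal sketch of the single-execute() pattern described above (the source, the map logic and the job name are placeholders, not from the thread):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SingleExecuteJob {
    public static void main(String[] args) throws Exception {
        // Locally this later spins up a mini cluster; against a remote
        // cluster it only submits the generated job graph.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3)   // placeholder source instead of e.g. Kafka
           .map(x -> x * 2)
           .print();

        // One execute() per application: everything defined above
        // becomes a single job graph.
        env.execute("single-execute example");
    }
}
```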
Hi Nara,
yes, the watermark in the TimerService is not covered by the checkpoint; every
time the job is restarted from a previous checkpoint, it is reset to
Long.MIN_VALUE. I can see it being a bit tricky to cover it in the checkpoint,
especially when we need to support rescaling (it seems not like a pu
Hi,
I found the following documentation in the code:
flink-runtime:
org.apache.flink.runtime.executiongraph.failover.RestartIndividualStrategy
Simple failover strategy that restarts each task individually.
* This strategy is only applicable if the entire job consists of unconnected
* tasks, meaning
Hi Esa,
In the Flink documentation [1], what you specify before env.execute() is the
job graph.
"Once you specified the complete program you need to *trigger the program
execution* by calling execute()".
execute() can be finite or infinite, depending on whether your data source
is finite, or whether
Hi,
Is it possible that the watermark in the TimerService is not getting reset when
a job is restarted from a previous checkpoint? I would expect the watermark in
a TimerService also to go back in time.
I have the following ProcessFunction implementation.
override def processElement(
e: TraceEvent,
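The snippet above is cut off by the archive, but the behavior in question can be observed by logging the current watermark inside a process function. A sketch in Java (the poster's TraceEvent type is replaced by String as a placeholder):

```java
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class WatermarkProbe extends ProcessFunction<String, String> {
    @Override
    public void processElement(String event, Context ctx, Collector<String> out) {
        // After a restore this starts again from Long.MIN_VALUE, because the
        // TimerService watermark is not covered by the checkpoint.
        long wm = ctx.timerService().currentWatermark();
        out.collect("watermark=" + wm);
    }
}
```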
I want to add an extended resource via a ResourceSpec to a DataSet operation.
There is an envisioned usage example in issue FLINK-7878 using the setResource
method of SingleOutputStreamOperator (flatMap output). However, in Flink 1.5
the setResources method appears instead on the StreamTransfo
Thanks Till. `taskmanager.network.request-backoff.max` option helped in my
case. We tried this on 1.5.0 and jobs are running fine.
--
Thanks
Amit
On Thu 24 May, 2018, 4:58 PM Amit Jain, wrote:
> Thanks! Till. I'll give a try on your suggestions and update the thread.
>
> On Wed, May 23, 2018
Hi
I would be interested to know: what are the two or three most typical use cases
for Flink? What can they be?
What do people do most with Flink? Do you have any opinion or experience about
that?
I mean mostly smaller examples of use.
Best, Esa
I want to build the Flink Cassandra connector against
datastax version 3.1.4
guava 16.0.1
What command can I use to do that, and in which Flink source directory?
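A sketch of how this is usually done from a Flink source checkout; the module path is the standard layout, but whether the driver and guava versions can be overridden via -D properties depends on the module's pom — otherwise the versions have to be edited in the pom directly:

```shell
# From the root of a Flink source checkout, build just the Cassandra
# connector module (and install it into the local Maven repository).
cd flink-connectors/flink-connector-cassandra
mvn clean install -DskipTests
```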
Hi,
Could you post the full output of the mvn dependency:tree command on your
project? Can you reproduce this issue with a minimal project stripped of any
custom code/external dependencies except for Flink itself?
Thanks, Piotrek
> On 28 May 2018, at 20:13, Elias Levy wrote:
>
> On Mon, May
The cassandra-driver-core dependency is not relocated (i.e. renamed) in
the Cassandra connector. If it were, you wouldn't have conflicts ;)
The cassandra-connector artifact is a fat jar, and thus always contains
this dependency. There is no way to get rid of it with Maven exclusions
(outside the I
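For reference, relocation of the kind mentioned above is done with the maven-shade-plugin. A sketch of what relocating the driver would look like in one's own fat-jar build (the shadedPattern prefix is an arbitrary example):

```xml
<!-- Sketch: relocating the Cassandra driver during shading, so its classes
     no longer clash with another copy on the classpath. Plugin executions
     and other configuration are omitted for brevity. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.datastax.driver</pattern>
        <shadedPattern>my.shaded.com.datastax.driver</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```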
Hi, I use the Flink Cassandra connector dependency in my Maven project. Other
components have a conflict with the cassandra-driver-core that is embedded in
flink-cassandra-connector. I tried to exclude it in the pom.xml file like
this:
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-cassandra_2.11</artifactId>
  <version>1.4.2</version>
</dependency>
Just a heads up: I haven't found the root cause of this issue yet, but
restarting all the nodes seems to have solved it.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi
Is there only one env.execute() in an application?
Is it an unstoppable forever loop?
Or can I stop env.execute() and then do something and after that restart it?
Best, Esa
From: Fabian Hueske
Sent: Tuesday, May 29, 2018 1:35 PM
To: Esa Heikkinen
Cc: user@flink.apache.org
Subject: Re: env.e
I'm not sure if this is a "best practice" for debugging, but I found that if I
use apply(), one of the parameters passed into the WindowFunction that I must
implement is a TimeWindow object, which has start and end times:
private static class MyApplyWindowFunction implements
WindowFunction, Tupl
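The snippet is truncated by the archive; a sketch of the approach it describes, with placeholder input/key/output types (the original types are not visible in the thread):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class MyApplyWindowFunction
        implements WindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {

    @Override
    public void apply(String key, TimeWindow window,
                      Iterable<Tuple2<String, Long>> input, Collector<String> out) {
        // The TimeWindow parameter exposes the window boundaries,
        // which is handy when debugging windowing behavior.
        out.collect(key + ": [" + window.getStart() + ", " + window.getEnd() + ")");
    }
}
```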
Hi,
It is mandatory for all DataStream programs and most DataSet programs.
Exceptions are ExecutionEnvironment.print() and
ExecutionEnvironment.collect().
Both methods are defined on the DataSet ExecutionEnvironment and call
execute() internally.
Best, Fabian
2018-05-29 12:31 GMT+02:00 Esa Heik
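To illustrate the exception Fabian mentions, a minimal DataSet sketch where no env.execute() follows, because print() triggers execution itself (the data is a placeholder):

```java
import org.apache.flink.api.java.ExecutionEnvironment;

public class PrintTriggersExecution {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // DataSet#print() runs the program eagerly and calls execute()
        // internally, so no explicit env.execute() is needed here.
        env.fromElements(1, 2, 3).map(x -> x + 1).print();
    }
}
```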
Hi
Is env.execute() mandatory at the end of an application? Is it possible to run
the application without it?
I found some examples where it is missing.
Best, Esa
I had some catastrophic error
>
> ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint -
> Fatal error occurred in the cluster entrypoint.
> org.apache.flink.util.FlinkException: Failed to recover job
> a048ad572c9837a400eca20cd55241b6.
> File does not exist:
> /flink_1.5/ha/beam1/