Hi Stefan
Please find the stack trace and code below:
java.lang.IllegalStateException: Job 81ca41b13e7be8feb99f064e5a9a4237 not found
	at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$handleKvStateMessage(JobManager.scala:1470)
	at org.a
Hi Vishnu
val env = StreamExecutionEnvironment.getExecutionEnvironment
val jobID = env.getStreamGraph.getJobGraph.getJobID
Since I am using the jobId of the currently running job, it should exist.
Thanks and regards
Pushpendra
Have you tried profiling the application to see where most of the time is
spent during the runs?
If most of the time is spent reading in the data maybe any difference
between the two methods is being obscured.
On Sep 6, 2016 4:55 PM,
Hi Dan,
Flink currently allocates each task slot an equal portion of managed
memory. I don't know the best way to count task slots.
https://ci.apache.org/projects/flink/flink-docs-master/concepts/index.html#workers-slots-resources
If you assign TaskManagers less memory then Linux will use the me
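As a rough illustration of the slot/memory relationship described above, a flink-conf.yaml along these lines controls how managed memory is divided (key names are from the Flink 1.x configuration docs; verify against your version):

```yaml
# flink-conf.yaml (Flink 1.x keys; check your version's configuration reference)
taskmanager.numberOfTaskSlots: 4   # managed memory is split evenly across these slots
taskmanager.heap.mb: 4096          # JVM heap assigned to each TaskManager
taskmanager.memory.fraction: 0.7   # share of the free heap used as managed memory
```

With these example values, each of the 4 slots would receive roughly a quarter of the managed memory pool.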
Hi,
I am not broadcasting the data but the model, i.e. the weight vector
contained in the "State".
You are right, it would be better for the implementation with the while
loop to have the data on HDFS. But that's exactly the point of my
question: Why are the Flink Iterations not faster if you do
Thank you for your answer.
But I cannot quite follow your point: "If all elements of "words2" have been processed, the right side of your coGroup will always be empty no matter what is incoming in your socketTextStream." I don't understand what that means.
Here is my understanding (it may be wrong):
the coG
My data comes from an HBase table; it is like a List[rowkey, Map[String, String]].
class MySplittableIterator extends SplittableIterator[String] {
  // Members declared in java.util.Iterator
  def hasNext(): Boolean = {
  }
  def next(): String = {
  }
  // Members decl
I think you have to rethink your approach. In your example "words2" is a
stream but only with a fixed set of elements. If all elements of
"words2" have been processed, the right side of your coGroup will always
be empty no matter what is incoming in your socketTextStream.
It is not read in over
Hi,
this looks like a bug. I created an issue for it
(https://issues.apache.org/jira/browse/FLINK-4581). Could you also send
us the pom.xml you are using for your project?
Timo
On 06/09/16 at 13:47, jiecxy wrote:
Hi all,
I want to write a program in which a thread reads real-time messages from
/var/log/messages and writes them to Kafka, and that part works. Then I want to
use Flink SQL to query the messages; the following is my code:
I tried reading the data into a List or List[Map] to store T2, but with a plain
List there is no parallelism, so I want to use coGroup.
On the other hand, the coGroup function joins T1 and T2 and requires window and
trigger methods: the window cuts T1 and T2, and
the trigger trigg
Hi,
will words2 always remain constant? If yes, you don't have to create a
stream out of it and coGroup it, but you could simply pass the
collection to Map/FlatMap function and do the joining there without the
need of a window. Btw. you know that non-keyed global windows do not scale?
If I und
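The suggestion above can be sketched in plain Scala. The words2 contents are taken from the earlier example; the joinWithStatic helper is a hypothetical name, and in Flink this logic would sit inside a Map/FlatMap function rather than a top-level def:

```scala
// Assumption: "words2" never changes, so instead of turning it into a stream
// and coGroup-ing with a window, capture it as a Map and join inside the
// map/flatMap applied to the socket stream.
val words2: Map[String, Seq[String]] =
  Seq(("a", "w1"), ("a", "w2"), ("c", "w3"), ("d", "w4"))
    .groupBy(_._1)
    .map { case (k, vs) => k -> vs.map(_._2) }

// For each streamed element (key, value), emit one joined triple per match;
// keys absent from words2 simply produce no output.
def joinWithStatic(elem: (String, String)): Seq[(String, String, String)] =
  words2.getOrElse(elem._1, Seq.empty).map(w => (elem._1, elem._2, w))
```

No window or trigger is needed, because nothing has to be buffered: the static side is fully known when the function is created.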
Hi,
you have to implement a class that extends
"org.apache.flink.util.SplittableIterator". The runtime will ask this
class for multiple "java.util.Iterator"s over your split data. How you
split your data and how an iterator looks like depends on your data and
implementation.
If you need mor
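A minimal sketch of such splitting logic, written as plain Scala so it runs standalone. In Flink you would extend org.apache.flink.util.SplittableIterator[String] and implement split(int) and getMaximumNumberOfSplits(); the class and method names below are illustrative, and contiguous-range splitting is just one possible strategy:

```scala
// One iterator over a contiguous slice of in-memory data
// (assumption: the data fits in a Vector).
class MyRangeIterator(data: Vector[String], from: Int, until: Int)
    extends Iterator[String] {
  private var pos = from
  def hasNext: Boolean = pos < until
  def next(): String = { val e = data(pos); pos += 1; e }
}

// split(n): hand each of the n partitions a contiguous range of the data.
// In a real SplittableIterator this is what the runtime calls to obtain
// one java.util.Iterator per parallel subtask.
def split(data: Vector[String], n: Int): Array[Iterator[String]] =
  (0 until n).map { i =>
    val from  = i * data.length / n
    val until = (i + 1) * data.length / n
    new MyRangeIterator(data, from, until): Iterator[String]
  }.toArray
```

Together the n iterators must cover every element exactly once; how you map ranges (or HBase row-key regions) to splits is up to your implementation.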
Hi,
what's the number of unique keys and the parallelism of your job? If the
former is larger than the latter you should indeed have one
"timeWindowFold" be responsible for several keys. How are you determining
whether one of these is only accumulating for a single key?
Cheers,
Aljoscha
On Mon, 5
Hi, the follow code:
val text = env.socketTextStream(hostName, port)
val words1 = text.map { x =>
  val res = x.split(",")
  res.apply(0) -> res.apply(1)
}
val words2 = env.fromElements(("a","w1"),("a","w2"),("c","w3"),("d","w4"))
val joinedStream = words1.co
fromCollection is not parallel and the data is huge, so I want to use
env.fromParallelCollection(data), but I do not know how to initialize the data.
----- Original Message -----
From: Maximilian Michels
To: "user@flink.apache.org", rimin...@sina.cn
Subject: Re: fromParallelCollection
Date: September 5, 2016, 16:58
Plea